TikTok has been sending inaccurate and misleading news-style alerts to users’ phones, including a false claim about Taylor Swift and a weeks-old disaster warning, intensifying fears about the spread of misinformation on the popular video-sharing platform.
Among the alerts was a warning about a tsunami in Japan, labeled “BREAKING,” that was posted in late January, three weeks after the earthquake had struck.
The notifications, which sometimes contain summaries from user-generated posts, pop up on screen in the style of a news alert. Researchers say that format, adopted widely to boost engagement through personalized video recommendations, may make users less critical of the veracity of the content and open them up to misinformation.
“Notifications have this additional stamp of authority,” said Laura Edelson, a researcher at Northeastern University, in Boston. “When you get a notification about something, it’s often assumed to be something that has been curated by the platform and not just a random thing from your feed.”
Social media groups such as TikTok, X, and Meta are facing greater scrutiny to police their platforms, particularly in a year of major national elections, including November’s vote in the US. The rise of artificial intelligence adds to the pressure given that the fast-evolving technology makes it quicker and easier to spread misinformation, including through synthetic media, known as deepfakes.
[…]
TikTok, which has more than 1 billion global users, has repeatedly promised to step up its efforts to counter misinformation in response to pressure from governments around the world, including the UK and EU. In May, the video-sharing platform committed to becoming the first major social media network to label some AI-generated content automatically.
[…]
TikTok declined to reveal how the app determined which videos to promote through notifications, but the sheer volume of personalized content recommendations must be “algorithmically generated,” said Dani Madrid-Morales, co-lead of the University of Sheffield’s Disinformation Research Cluster.
Edelson, who is also co-director of the Cybersecurity for Democracy group, suggested that a responsible push notification algorithm could be weighted towards trusted sources, such as verified publishers or officials. “The question is: Are they choosing a high-traffic thing from an authoritative source?” she said. “Or is this just a high-traffic thing?”