On TikTok, electoral disinformation thrives ahead of midterms

TikTok’s design makes it a breeding ground for misinformation, the researchers found. They wrote that videos could easily be manipulated and reposted on the platform, and shown alongside stolen or original content. Pseudonyms are common; parody and comedy videos are easily misinterpreted as fact; popularity affects the visibility of comments; and details such as posting time are not clearly displayed on the mobile app.

(The Shorenstein Center researchers noted, however, that TikTok is less vulnerable to so-called brigading, in which groups coordinate to spread a message widely, than platforms like Twitter or Facebook.)

In the first quarter of 2022, more than 60% of videos containing harmful misinformation were viewed by users before being deleted, TikTok said. Last year, a group of behavioral scientists who had worked with TikTok said an effort to attach disclaimers to posts with unsubstantiated content reduced sharing by 24% but lowered views by only 5%.

The researchers said misinformation would continue to thrive on TikTok as long as the platform refused to disclose data about the origins of its videos or share information about its algorithms. Last month, TikTok said it would offer access to a version of its application programming interface, or API, this year, but it didn’t say whether that would happen before the midterms.

Filippo Menczer, a professor of computer science and informatics and the director of the Indiana University Observatory on Social Media, said he had offered research collaborations to TikTok and had been told “absolutely not.”

“At least with Facebook and Twitter, there is some level of transparency, but, in the case of TikTok, we have no idea,” he said. “Without resources, without being able to access the data, we don’t know who is suspended, what content is removed, whether it acts on reports or what the criteria are. It’s completely opaque, and we can’t independently assess anything.”

Lance B. Holton