TikTok’s Black Box Obscures Its Role in Russia’s War

Outside researchers can’t easily monitor how truth or lies circulate on the social media platform—raising concerns about its role in spreading misinformation.
A network of sinister-looking talking mouths
Illustration: Jacqui VanLiew; Getty Images

Ten days into Russia’s invasion of Ukraine, TikTok announced it had suspended new posts from Russian accounts due to the country’s new “fake news” law. But the company was quieter about a second policy shift—one that blocked TikTok users in Russia from seeing any content posted by accounts located outside the country.

Findings by social media research collective Tracking Exposed suggest that TikTok enfolded its Russian users in a vast echo chamber intended to pacify President Vladimir Putin’s government. Inside that digital enclave, a network of Russian accounts posting pro-invasion content somehow kept operating. “There was clear manipulation of the information ecosystem on TikTok,” says Salvatore Romano, head of research at Tracking Exposed.

TikTok spokesperson Jamie Favazza declined to comment on Tracking Exposed’s findings and repeated a previous statement that the company had blocked new uploads from Russia. But the platform, owned by Chinese startup ByteDance, has been less critical of Russia than US rivals and has been treated less harshly by Russia’s government. TikTok complied with EU sanctions forcing platforms to block access to Russian state-backed media from Europe. Meta, Google, and Twitter have also adjusted their algorithms to make content or links to those outlets less visible. In apparent retaliation, Facebook and Twitter were both blocked by Russian internet censors. On March 21, a Moscow court banned Facebook and Instagram from Russia, accusing parent company Meta of “extremist activities.”

TikTok’s actions in Russia and its central role in spreading video and rumor from the war in Ukraine add urgency to open questions about how truth and mistruth circulate on the platform, Romano and other researchers say. TikTok’s geopolitical moment also highlights the challenges faced by researchers trying to answer such questions. The app, launched in 2017, surpassed 1 billion monthly users in September 2021, but it is less well studied, and more difficult to study, than its older rivals.

Most work on the dynamics and downsides of social media has focused on Facebook and Twitter. Tools and techniques developed for those platforms have shone revealing light on the spread of misinformation about Covid-19 and uncovered online manipulation campaigns linked to governments, including Russia, China, and Mexico. Meta and Twitter provide APIs to help researchers see what is circulating on their platforms.

TikTok does not provide a research API, making it hard to answer questions about its role in spreading accurate or inaccurate information around the Ukraine war or other topics. And while researchers might like to see Meta and Twitter provide broader data access, these platforms at least offer something, says Shelby Grossman, a researcher who has been monitoring pro-Russian posts about Ukraine at Stanford’s Internet Observatory. “It’s tough to look systematically at what’s happening on TikTok,” she says. Researchers have also scrambled to monitor content about Ukraine on messaging app Telegram, which also lacks a researcher API and is much less studied than US networks.

TikTok spokesperson Favazza says that although it does not currently provide a research API, “we strongly support independent research,” citing a program that briefs lawmakers and experts in online harms on its moderation and recommendation systems. TikTok has previously claimed the war in Ukraine prompted it to increase moderation and speed up a pilot project labeling state-controlled media accounts but did not specify exactly how its operations have changed. On March 24, two TikTok moderators filed a lawsuit against the company alleging psychological harm from “exposure to highly toxic and extremely disturbing images.”

One of the biggest challenges to outside researchers interested in what circulates on TikTok stems from the power and influence of its recommendation algorithm, which plays an outsize role compared to older social networks. The app and its rapid growth are built on the For You page, which shows an endless feed of videos curated by TikTok’s algorithm and drawn largely from accounts a user does not follow. As a result, different people see wildly different videos, with the feed based on past viewing and other signals.

The experience can be compelling, even addictive, but also obscures what content is circulating or being amplified to outsiders. In response, some researchers, including Tracking Exposed, create automated accounts, or bots, to gather data from inside TikTok. The bots are programmed to take simple actions, like viewing videos with certain hashtags, and create a log of the videos served up. Over time, those logs reveal what is most widely seen or promoted in the app.
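The logging-and-aggregation approach described above can be illustrated with a short sketch. This is a hypothetical example, not Tracking Exposed's or any group's actual tooling: assume each bot has already recorded which videos the For You feed served it, and the remaining step is tallying which videos surfaced most often across independent bots.

```python
from collections import Counter

def most_promoted(logs, top_n=3):
    """Tally videos served across all bot logs and return the most frequent.

    `logs` maps a bot identifier to the list of video IDs that bot was shown.
    A video served to many independent bots is likely being widely promoted.
    """
    counts = Counter(video for served in logs.values() for video in served)
    return counts.most_common(top_n)

# Hypothetical logs from three bots browsing the same hashtag.
logs = {
    "bot_a": ["v1", "v2", "v3"],
    "bot_b": ["v2", "v3", "v4"],
    "bot_c": ["v2", "v5", "v3"],
}
print(most_promoted(logs, top_n=2))  # [('v2', 3), ('v3', 3)]
```

The real work, of course, is in the data collection the sketch assumes away: driving accounts through the app and parsing what the feed returns, which is exactly what TikTok's terms of service and anti-spam systems complicate.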

But bots break TikTok’s terms of service, putting the tactic off limits to some academics due to the constraints of institutional review boards. Automated accounts can also trigger anti-spam systems. Earlier this month, researcher Cameron Hickey found he could not expand his flock of TikTok bots, because his method for logging into newly spawned accounts was blocked by the platform.

“This research is fraught,” says Hickey, director of the Algorithmic Transparency Institute, a research project at nonprofit the National Conference on Citizenship. “But it’s critical when you’re trying to understand the impact of the TikTok algorithm on what people are seeing.” His group is currently testing a workaround for the bot blockade. Hickey and his colleagues previously curated “red-pilled” TikTok accounts to collect data on political content that suggested “MAGA-tok” may have helped fuel last year’s assault on the US Capitol. The service currently features many videos repeating the false, Kremlin-backed claim that Ukraine hosts biological weapons labs, Hickey says.

Bots can also collect ads on TikTok to bring transparency to how corporate or political interests use the platform’s reach and targeting to seek influence. TikTok has no equivalent to the public ad library and API that Meta provides, Hickey says. He has recently noticed a flood of ads on TikTok promoting fossil fuels.

Even when a TikTok bot collective is working as intended, the content gathered can be challenging to process. The platform’s focus on video and audio provides little text for search algorithms to work with—something that also makes the company’s own moderation of content far from straightforward. The Algorithmic Transparency Institute spends thousands of dollars each month to transcribe speech and text captions from TikTok videos.
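Once speech has been transcribed, the resulting text can be made searchable with standard techniques. A minimal sketch of one such approach, an inverted index mapping words to videos; this is an illustration under assumed data, not the Algorithmic Transparency Institute's actual pipeline:

```python
from collections import defaultdict

def build_index(transcripts):
    """Map each lowercase word to the set of video IDs whose transcript contains it."""
    index = defaultdict(set)
    for video_id, text in transcripts.items():
        for word in text.lower().split():
            index[word].add(video_id)
    return index

# Hypothetical transcripts as a speech-to-text service might return them.
transcripts = {
    "vid1": "claims about biological weapons labs",
    "vid2": "daily life in Kyiv",
    "vid3": "weapons labs rumor debunked",
}
index = build_index(transcripts)
print(sorted(index["labs"]))  # ['vid1', 'vid3']
```

The indexing itself is cheap; the expense the institute reports comes from the transcription step that produces the text in the first place.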

TikTok may be forced to become more welcoming to outside researchers. The European Union’s proposed Digital Services Act would require the largest online platforms to provide data access to vetted researchers seeking to monitor disinformation and other problems. A bipartisan group of US senators wrote a similar requirement into a draft bill announced in December. TikTok spokesperson Favazza said that the company recognizes “change is coming” and is open to proposals with appropriate safeguards on data access. Twitter and Meta declined to comment on the proposed laws.

Any legislation won’t come fast enough to help answer the questions raised by TikTok’s role in spreading information about the war in Ukraine. Romano of Tracking Exposed is now trying to map pro-Russian accounts still operating on the platform and hopes TikTok’s recent prominence will spur more interest in the group’s free software for analyzing the platform.

Hickey, meanwhile, predicts TikTok will play a central role in other major events in 2022. He’s currently running eight bots on TikTok but hopes to soon be operating hundreds to help monitor elections in the US, Australia, the Philippines, and Brazil this year. “For a long time, many people have not respected the potency of this platform,” Hickey says.

