How YouTube Can Rewrite the Past and Shape an Election

Philippine researcher Fatima Gaw says the platform has become a hub for pro-Marcos historical revisionism.
A crowd of supporters of Ferdinand “Bongbong” Marcos Jr., former Philippine senator and presidential candidate.
Photograph: Veejay Villafranca/Bloomberg/Getty Images

As the Philippine elections drew to a close on May 9, Ferdinand Marcos Jr., known as Bongbong, seemed all but assured of winning the race against sitting vice president Leni Robredo. Despite the Marcos family’s history of corruption and violence, Marcos Jr. has benefited from a sustained disinformation campaign dedicated to rewriting the family’s history and glorifying its years in power.

A report from Tsek.ph, a fact-checking collaboration between 34 news and civil society organizations, found that “as of April 30, 92 percent of fact checks about Marcos were false or misleading information in his favor.” Nearly all the disinformation about Robredo portrayed her negatively.

Though Facebook remains the country’s most popular platform, Gabby Roxas, Google’s head of marketing in the Philippines, told ABS-CBN News that YouTube saw a 50 percent increase in watch time during the pandemic and has more than 40 million users in the country.

Fatima Gaw, an assistant professor and researcher at the University of the Philippines who co-leads the Philippine Media Monitoring Laboratory, first identified pro-Marcos networks on YouTube in 2020. Her current work focuses on hyper-partisan YouTube channels.

I caught up with Gaw to talk about how a Western-centric view of platform governance and a lack of action has allowed disinformation to flourish. This conversation has been edited for clarity and brevity.

WIRED: Facebook got a lot of criticism after the 2016 election in the Philippines—many say it was instrumental in President Rodrigo Duterte’s win. What drew you to focus on YouTube?

Fatima Gaw: YouTube has flown under the radar simply because it’s the second-largest platform. All eyes are always going to be on Facebook, because it is linked to malicious actors like Trump and Cambridge Analytica. YouTube was where people organizing from the grassroots might converge and initiate their campaigns. It’s a sleeping giant.

For the Duterte administration, YouTube increasingly became the platform of choice after he was elected president. He had a lot of partner influencers and commentators out there who would manufacture political interest in the issues he wanted to advance. Mainstream media has always been anti-administration, so—this is just me speculating—perhaps he realized he needed to create a media ecosystem where his own interests, his own slant, his own issues could be amplified. There are a lot of channels really pushing his agenda. And he has used the platform particularly to push anti-media sentiment, while a lot of anti-media policies were also happening at the state level.

How did your work looking at Duterte lead you to Marcos?

Duterte and Marcos were allies until last year. [Marcos ran as Duterte’s vice president in 2016 and did not win. The two roles are elected separately.] So a lot of these channels were interchangeable in their content. Some would post about Marcos, some about Duterte, and some about both, so it’s not as if there’s a clear demarcation. Marcos Jr. knew about the baggage of his father’s legacy, so he tried to distance himself from Marcos Sr. But in the years since he lost, there has been a change in tactics—the Marcos family realized, “We need to use the legacy instead to our advantage by whitewashing it.”

There's a lot of historical disinformation, and it’s one of the biggest issues in the Philippines. It ranges from outright denialism, saying that the atrocities during the martial law regime never happened, to more extreme claims, like the “Marcos gold” myth. We know their wealth comes from stealing from the Filipino people and from public funds, but the myth allows them to say [they didn’t steal].

A lot of reporters and historians were surprised at the level of propaganda and disinformation on YouTube. But my research shows that videos like these were appearing as early as 2011, and the trend accelerated after 2016. Even when students search for Philippine history on YouTube, these false claims come up.

Is this something you flagged to YouTube?

We [Gaw and coauthor Cheryll Soriano] did this research in 2020, and we had conversations with YouTube executives. We said, “Here is a list of videos and channels we are flagging as containing historical disinformation and denialism.” And they said they would check and get back to us, but they never did. The people they send to the Philippines are not the ones who really have a voice in drafting content moderation policies.

The problem really is how YouTube defines misinformation—it's a very Western approach. In the Philippines, a lot of political divides are not ideological, they’re patronage-based. It’s about which elite family you support, and whose narrative you therefore subscribe to.

[Ivy Choi, a spokesperson for YouTube, says that its hate speech policy and a number of its election misinformation policies are applicable globally, “and take into account cultural context and nuance.” She says YouTube regularly reviews and updates its policies, and “when developing our policies, we consult with internal and outside experts around the globe, and take their feedback into account.”]

Have you seen YouTube take down any of the videos?

No, that's actually the most frustrating part. Early in the election season, they said, “We're going to really be serious in making sure that the election is fair and free.” But when it comes to actually taking action on content on the platform, nothing meaningful is happening. Even the historical disinformation I flagged two years ago is still there. In fact, because they were not taken down, channels that had 500,000 subscribers now have 2 million. So there's this exponential gain on these channels and videos because they were left untouched by the platform.

If videos are popular, they can get brand sponsorships. And because these channels have a lot of subscribers and are talking about a very salient topic, they get a lot of views. And those views are paid for by YouTube—the platform is, in a sense, paying for disinformation.

[YouTube’s Ivy Choi says that it removes offensive content “as quickly as possible” and that it removed more than 48,000 videos in the Philippines during Q4 2021 for violating its Community Guidelines. YouTube says it is reviewing the specific channels flagged by WIRED, but that it reviews all of the channels in its partner program and removes those that don’t comply with its policies.]

Is this like, say, the right-wing or alt-right YouTube channels in the US?

It’s not like the alt-right network in the US, where you’ll see influencers making guest appearances on each other’s shows. What we've seen is that they echo the same narratives, but they don't want to be technically associated with each other: if videos are flagged for violating YouTube's policies or community standards, a visibly connected network is easier to take down as a whole.

Their connection is more subtle and algorithmic—they’re not mentioning each other per se. What YouTube will do is take the videos on a case-by-case basis. But even if you take down one or two videos, there are still hundreds left. Regardless of whether they mention each other, they are recommending each other. So if you're watching, you see the same people, the same message, referencing the same events and narrative. We see a lot of reposting, where one channel will repost the content of another influencer; it’s a different kind of amplification. And if you take down one video that has been reposted elsewhere, it will still exist on the platform.

What do you mean when you say the videos are algorithmically connected?

We don’t know how YouTube’s algorithm works, and it changes all the time, but we can infer that there are things that signal to the algorithm that certain topics are connected. So in my Marcos disinformation research, you see that posters use the same keywords in their video titles and the same tags, signaling to the algorithm that they’re talking about the same topics. They categorize themselves as “news, politics, or educational content,” even if they're not educational at all. Because they belong to the same self-reported genre, they probably get grouped together and recommended alongside each other. There’s also the timing of when they release the videos, around events like a presidential debate.

What do you think we can take away from your research for other countries or elections?

If disinformation is not taken down for years, and it’s ignored and neglected by the platform, it can just grow and grow and become entrenched. It's really hard to take the videos down now because they have 500,000 subscribers. There’s a cost to not addressing this problem up front.

[Creators] plant the seeds early on, before the election, and by the time the election period arrives, they're mature enough. They already look credible, like they have the authority to speak about the election and about politics. It's not easy to build a subscriber base on YouTube. You need at least 1,000 subscribers to become a YouTube partner. You have to work through the process of building your credibility and your subscriber base, a community that primes you to be a prominent voice in the election season. And I think that ultimately the platforms are not paying attention to these smaller, or seemingly smaller, players because there is no political pressure to do so.