A Doctored Biden Video Is a Test Case for Facebook’s Deepfake Policies

Meta’s Oversight Board will review Facebook's decision not to remove a manipulated video of President Biden, and the board hopes to push Meta to clarify its policies on election deepfakes.

In May, a manipulated video of President Joe Biden appeared on Facebook. The original footage showed Biden during the 2022 midterm elections, placing an “I voted” sticker on his granddaughter’s chest and kissing her on the cheek. The doctored version looped the footage to make it appear as though he were repeatedly touching the girl, and added a caption labeling him a “pedophile.”

Meta left the video up. Today, the company’s Oversight Board, an independent body that oversees the platform’s content moderation, announced that it will review that decision, in an attempt to push Meta to address how it will handle manipulated media and election disinformation ahead of the 2024 US presidential election and the more than 50 other votes to be held around the world next year.

“Elections are the underpinning of democracy and it’s vital that platforms are equipped to protect the integrity of that process,” says Oversight Board spokesperson Dan Chaison. “Exploring how Meta can better address altered content, including videos meant to deceive the public ahead of elections, is even more important given advances in artificial intelligence.”

Meta said in a blog post that it had determined the video didn’t violate Facebook's hate speech, harassment, or manipulated media policies. Under its manipulated media policy, Meta says it will remove a video if it “has been edited or synthesized … in ways that are not apparent to an average person, and would likely mislead an average person to believe a subject of the video said words that they did not say.” Meta noted that the Biden video’s footage was not manipulated using AI or machine learning.

Experts have been warning for months that the 2024 elections will be made more complicated and dangerous by generative AI, which enables more realistic faked audio, video, and imagery. Although Meta has joined other tech companies in committing to curb the harms of generative AI, the most common strategies, such as watermarking content, have proven only somewhat effective at best. In Slovakia last week, a fake audio recording circulated on Facebook in which one of the country’s leading politicians appeared to discuss rigging the election. Its creators were able to exploit a loophole in Meta’s manipulated media policy, which doesn’t cover faked audio.

While the Biden video itself is not AI-generated or AI-manipulated, the Oversight Board has solicited public comments on the case with an eye toward AI, and is using it to examine Meta’s policies around manipulated video more deeply.

“I think it's still the case that if we look broadly across what's being shared, it's mis-contextualized, mis-edited media, and so policies need to reflect how they're going to handle that,” says Sam Gregory, program director at the nonprofit Witness, which helps people use technology to promote human rights.

The Oversight Board’s decisions on individual pieces of content are binding on Meta; its policy recommendations are not, and the company can choose whether to follow them. The board has tried to take on cases that it hopes will help shape the company’s approach to the 2024 election year, reviewing Meta’s decisions not to remove a call for violence by Cambodia’s then prime minister, Hun Sen, in the lead-up to the country’s elections, and a speech by a Brazilian general ahead of Brazil’s post-election insurrection.

But Gregory also worries that even if the board issues binding decisions about how Meta should approach manipulated media, it has little power to dictate how much money or how many resources the company actually puts toward the problem, particularly in elections outside the US. Like many other large tech companies, Meta has laid off trust and safety staff members who deal with issues like disinformation and hate speech. Meta has also historically struggled to moderate content that is not in English and in contexts outside the US.

Unlike Google, which has introduced features to help users figure out whether an image is AI-generated or manipulated, Meta has not created any consumer-facing tools that let users or fact-checkers better understand the context of the content they may be seeing.

Although the Oversight Board may hope to use the Biden video to lay the groundwork for how Meta should handle AI-generated or AI-manipulated content, Gregory anticipates that many questions around manipulated media will remain unanswered.

“I think it's really helpful to have the Oversight Board provide a clear-eyed assessment of how those policies are working and how they're working globally,” says Gregory. “I don't know if that would come from this single case.”