EU Demands Facebook, TikTok, and Google Start Labeling AI Content to Fight Deepfakes

Dozens of big tech companies must comply or face fines, while Twitter faces further sanctions for refusing to voluntarily comply with digital content laws.

Deepfake images, video, audio, and text are already spreading throughout the internet and have been used for political purposes and to spread misinformation.
Image: Lightspring

The European Union is determined to regulate AI, and now the biggest online platforms in the world need to help folks tell whether the growing flood of fake images, video, and audio was created with artificial intelligence. Major tech companies including Google, Facebook, and TikTok have until Aug. 25 to start identifying which images, video, or audio contain deepfakes, or they could face multi-million dollar fines from the EU.

European Commission Vice President for Values and Transparency Věra Jourová said that dozens of tech companies need to start coming up with ways to label "AI generated disinformation." She said during a press conference that companies would need to "put in place technology to recognize such content and clearly label this to users."


Jourová said AI-generated content needs to carry "prominent markings" denoting that it is a deepfake or has been manipulated to some degree. This regulation is being promoted under the European body's Digital Services Act, a law meant to mandate transparency in online content moderation.


According to information sent to Gizmodo from Jourová’s office, the new guidelines follow from an early May meeting with the task force of the Code of Practice on disinformation which includes representatives from both the companies and regulators. In addition, those platforms that make use of AI chatbots, including for customer service, must let users know they’re interacting with an AI instead of a real flesh and blood human.


Microsoft and Google are locked in a race to develop AI chatbots, and the EU has taken notice of how far both seem to be going without any roadblocks or safeguards. According to The Guardian, Jourová met with Google CEO Sundar Pichai last week, who told her the company was working on ways to detect fake AI-generated text. Despite how quickly these companies have moved to proliferate AI chatbots, few have devoted the same resources to dealing with the mass AI content farms pumping out disinformation.

The DSA is already in force, but the EU still has to designate which online platforms fall under its specific restrictions. Late last month, Elon Musk's Twitter left the EU's voluntary Code of Practice against disinformation. EU Commissioner for Internal Market Thierry Breton announced Twitter's departure in a tweet, adding that the DSA's disinformation requirements would apply to all platforms by Aug. 25.


The Commission is working on some of the world's first hardline AI regulations under the Artificial Intelligence Act. In part, that law would require AI developers to disclose all the copyrighted material used to train their models. Jourová said the European Parliament could also adopt rules requiring platforms to detect and label AI-generated text. Current methods for detecting AI-generated text are unreliable, so the onus would fall on major tech companies to develop new ways of identifying deepfakes, whether through watermarks or some other method of embedding an immutable AI signifier.
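To give a rough sense of how a text watermark could work, here is a toy sketch of a statistical "green list" scheme: a secret key splits the vocabulary roughly in half, a cooperating generator favors "green" words, and a detector flags text whose green fraction is suspiciously high. This is an illustrative simplification, not any company's actual method; the key, threshold, and word-level tokenization are all assumptions made for the demo.

```python
import hashlib

def is_green(token: str, key: str = "demo-key") -> bool:
    # Hash each token together with a secret key; roughly half of all
    # possible tokens land in the "green" set.
    digest = hashlib.sha256((key + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "demo-key") -> float:
    # Fraction of whitespace-separated tokens that fall in the green set.
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(is_green(t, key) for t in tokens) / len(tokens)

def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    # Ordinary text should hover near 0.5 green tokens; a generator that
    # deliberately favors green tokens pushes the fraction well above that,
    # which a platform-side detector can flag for labeling.
    return green_fraction(text) >= threshold
```

In a real system the generator would bias token probabilities at sampling time and the detector would use a proper statistical test over many tokens, but the core idea is the same: the mark is spread across word choices rather than attached as removable metadata.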
