Spot the deepfake: The AI tools undermining our own eyes and ears (2024)

This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate.

Have you ever seen a deepfake? More importantly, can you spot the difference between these AI-generated images, audio clips and videos and the real thing?

As more than 2 billion voters across 50 countries prepare for national elections in 2024, that question — and the ability of such deepfakes to skew potential voters’ decisions at the polls — has never been more critical.

Case in point: In recent months, people have increasingly flagged alleged AI-powered deepfake images, audio and videos on X (formerly Twitter), according to a Brookings Institution review of the platform’s so-called community notes, a crowdsourced fact-checking initiative.

POLITICO decided to put you to the test.

Using Midjourney, an AI research lab whose technology can create lifelike images from simple text prompts typed into the company’s online platform, POLITICO collected a series of real images — and others generated by artificial intelligence. Repeated global trials have shown that, on average, people can distinguish digital forgeries from legitimate images about 60 percent of the time, according to tech company officials with whom POLITICO spoke.

While the technology is still a work in progress, the ability of anyone — including POLITICO reporters — to create such realistic images with a few clicks on a keyboard has politicians, policymakers and disinformation experts worried.

If AI puts such power in the hands of anyone with a laptop, internet connection and $50 to access these powerful tools, such deepfake political content may flood people’s social media feeds in the months ahead.

How well will you do? Take the quiz before you read the rest of the story. (Spoilers below!)

Who wants to be a clone?

Of the potential deepfake threats this year, cybersecurity and disinformation experts are most worried about audio.

So far, almost all contentious AI-generated images have been debunked within hours, mostly because of the power of social media to quickly crowdsource errors in these photos that are often otherwise imperceptible. Big Tech companies and independent fact-checkers, too, have prioritized finding and removing such harmful politically motivated falsehoods.


But audio — especially the AI-powered grainy clips that were unsuccessfully used to smear British Labour Party Leader Keir Starmer — remains uncharted territory. The disconnect between what people hear and what they see can often hoodwink individuals into believing that an inflammatory deepfake audio clip is legitimate.

To test that theory, POLITICO used off-the-shelf technologies — costing less than $50 in total to purchase — to see how easy it was to generate a deepfake audio clip. Initially, we were going to clone actual politicians. But as such falsehoods are both legally dubious and represent a direct threat to this year’s election cycle, we decided instead to mimic the voices of POLITICO reporters.

You can judge for yourself whether these AI-generated clips are good enough to fool you.

THE REAL MARK SCOTT
THE REAL AOIFE WHITE
THE OTHER MARK SCOTT
THE OTHER AOIFE WHITE

AI Biden vs. AI Trump

The next frontier of AI deepfakes is video — especially content that can interact with humans in real time.

And, when it comes to politically motivated AI-powered photos, a Soviet-era office block near the German-Polish border has become ground zero in demonstrating how that technology is evolving.

There, amid a group of activists known as the Singularity Group, researchers created an ongoing, real-time online video debate between an AI-powered Joe Biden and an AI-generated Donald Trump.


The project, which has been running for almost nine months, uses so-called open-source technology, or AI models freely accessible to the public. It allows anyone to type in a debate question — through the Amazon-owned streaming service Twitch — and then the Biden/Trump bots power up, calculate an answer through Singularity’s AI systems, and spit it out, mimicking the politicians’ voices and images.

“Deepfakes are a real concern,” said Reese Leysen, one of the activists behind the project that — importantly — is labeled as a parody on Twitch. “We wanted to focus on politicians to make people take notice.”

POLITICO put several real-world debate questions to the fake Biden and Trump. Most of the answers were either too racy or too profanity-laden to publish — not surprising, given that this AI system has been trained on random people asking it questions on the internet for almost a year.

But below are the two least-graphic videos. Is the technology perfect? Definitely not. But it’s a snapshot of where things are headed.

Ask the bots

We asked fake Donald Trump and Joe Biden a few real-world debate questions. Here’s how they answered.

Prompt question No. 1: Which Disney character best represents your political opponent, and why?

Prompt question No. 2: If you were to win the November election, how would you resolve the Russia-Ukraine war?

TRUMP

BIDEN

This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate. The article is produced with full editorial independence by POLITICO reporters and editors. Learn more about editorial content presented by outside advertisers.


FAQs

How to spot deepfakes?

Currently, video deepfakes typically give off several clues, like poorly rendered hands and fingers, off-kilter lighting and reflections, a deadness to the eyes, and poor lip-syncing.

What is the AI that detects deepfakes?

Sensity is an AI-driven solution designed for the efficient detection of deepfake content such as face swaps, manipulated audio, and AI-generated images.

How can AI help with the fight against deepfakes?

To combat such abuses, technologies can be used to detect deepfakes or enable authentication of genuine media. Detection technologies aim to identify fake media without needing to compare it to the original, unaltered media. These technologies typically use a form of AI known as machine learning.

How do deepfakes affect our society?

The rapid spread of deepfakes on social media worsens the already prevalent issue of misinformation. A study by the University of Baltimore and cybersecurity firm CHEQ found that in 2020, fake news cost the global economy $78 billion.

Is it illegal to look at deepfakes?

There is currently no federal law against disseminating such content. However, some legal professionals believe “such illicit practices may not require new legislation, as they already fall under a patchwork of existing privacy, defamation or intellectual property laws,” according to an article by Law.com.

Are deepfakes actually illegal?

Defamation, Harassment, and Privacy Laws

Laws are in place to take legal action against deepfakes that ruin reputation, bully, or violate privacy. If deepfakes spread false and harmful information causing defamation, libel laws can be invoked.

Is there really an AI detector?

AI detectors are trained on a set of data that typically contains both human-written and AI-generated text. A detector analyzes those samples to figure out which characteristics best identify the AI-generated pieces. One of the major characteristics AI detectors analyze is perplexity: the unpredictability of the content.
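The perplexity signal described above can be illustrated with a toy example. The sketch below fits a unigram language model with add-one smoothing on a tiny corpus and scores two sentences; this is a simplified illustration of the concept only. Real detectors estimate perplexity with large neural language models, and the corpus and sentences here are made up for the example.

```python
import math
from collections import Counter

def unigram_perplexity(train_text: str, test_text: str) -> float:
    """Perplexity of test_text under a unigram model fit on train_text.

    Add-one (Laplace) smoothing keeps unseen words from getting zero
    probability. Lower perplexity means more predictable text, which is
    one signal AI detectors look for.
    """
    counts = Counter(train_text.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 bucket for unseen words
    test_words = test_text.lower().split()
    log_prob = sum(math.log((counts[w] + 1) / (total + vocab)) for w in test_words)
    return math.exp(-log_prob / len(test_words))

corpus = "the cat sat on the mat the cat ran after the dog"
familiar = "the cat sat on the mat"               # words seen often: low perplexity
surprising = "quantum ferrets juggle harmonicas"  # all words unseen: high perplexity
assert unigram_perplexity(corpus, familiar) < unigram_perplexity(corpus, surprising)
```

Text that a model finds predictable scores low; text full of words the model has never seen scores high. Detectors flip this logic: text that a large language model finds suspiciously predictable is more likely to have been generated by one.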

How do you deceive an AI detector?

Techniques for avoiding AI detection include:
  1. Using Unicode characters
  2. Adding punctuation and symbols
  3. Using homoglyphs
  4. Using synonyms and antonyms
  5. Rearranging sentence syntax
  6. Changing word forms
  7. Visual camouflage techniques
  8. Audio encoding methods
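The homoglyph item on the list can be made concrete. The sketch below swaps three Latin letters for Cyrillic characters that render identically in most fonts; the three-letter mapping is a made-up example for illustration, not any tool's actual table. It shows why exact-match filters can miss a disguised string even though a human reader sees no difference.

```python
# Minimal homoglyph swap: selected Latin letters are replaced with Cyrillic
# characters that look the same on screen but are different code points.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Cyrillic а, е, о

def swap_homoglyphs(text: str) -> str:
    """Return text with mapped letters replaced by their lookalikes."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

original = "vote on monday"
disguised = swap_homoglyphs(original)
# The two strings render identically in most fonts, yet compare unequal,
# so a naive exact-match filter treats them as unrelated text.
assert original != disguised
assert len(original) == len(disguised)
```

Robust filters counter this by normalizing confusable characters (for example, via Unicode confusables data) before comparing strings.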

What are some examples of deepfakes?

Deepfake examples

One benign example is a video that appears to show soccer star David Beckham fluently speaking nine different languages, when he actually only speaks one. Another fake shows Richard Nixon giving the speech he prepared in the event that the Apollo 11 mission failed and the astronauts didn't survive.

How can AI manipulate humans?

AI systems are often trained to imitate human data which contains manipulative behaviors: for instance, language models trained on internet content seem to learn how to behave in both persuasive and manipulative ways [17, 64, 165].

Can AI outsmart us?

"It's not a question of if AI will outsmart us but when. We simply cannot compete with the raw processing power," Jon Schweppe, the policy director of the American Principles Project, told Fox News Digital.

Can AI generate fake human faces?

Artificial intelligence, or AI, can now generate faces that are indistinguishable from human faces. However, AI algorithms tend to be trained using a disproportionate number of White faces. As a result, AI faces may appear especially realistic when they are White.

Why are deepfakes harmful?

Not only has this technology created confusion, skepticism and the spread of misinformation; deepfakes also pose a threat to privacy and security. With the ability to convincingly impersonate anyone, cybercriminals can orchestrate phishing scams or identity theft operations with alarming precision.

What is the bad side of deepfakes?

By enabling the creation of convincing yet fraudulent content, deepfake technology has the potential to undermine trust, propagate misinformation, and facilitate cybercrimes with profound societal consequences.

What are the bad things about deepfakes?

Typically, deepfakes are used to purposefully spread false information or they may have a malicious intent behind their use. They can be designed to harass, intimidate, demean and undermine people. Deepfakes can also create misinformation and confusion about important issues.

Why are deepfakes hard to detect?

Deepfake detection systems work very differently from how human beings listen. They analyze audio samples for artifacts like missing frequencies that are often left behind when audio is programmatically generated.
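The missing-frequencies idea can be sketched in a few lines of Python. The toy check below computes, via a naive DFT, what fraction of a signal's energy sits above a cutoff frequency: a band-limited tone (standing in for synthesized speech that lacks high frequencies) scores far lower than broadband noise (standing in for natural audio). The signals, cutoff and naive DFT are arbitrary choices for the example, not any production detector's method.

```python
import math
import random

def band_energy_ratio(samples, sample_rate, cutoff_hz):
    """Fraction of spectral energy above cutoff_hz, via a naive DFT."""
    n = len(samples)
    energy_hi = energy_total = 0.0
    for k in range(1, n // 2):  # skip DC, ignore mirrored bins
        angle = 2 * math.pi * k / n
        re = sum(s * math.cos(angle * i) for i, s in enumerate(samples))
        im = sum(s * math.sin(angle * i) for i, s in enumerate(samples))
        power = re * re + im * im
        energy_total += power
        if k * sample_rate / n > cutoff_hz:
            energy_hi += power
    return energy_hi / energy_total

rate, n = 8000, 256
random.seed(0)
# Broadband noise stands in for natural audio; a pure 440 Hz tone stands in
# for band-limited synthetic speech missing its high frequencies.
natural = [random.uniform(-1.0, 1.0) for _ in range(n)]
synthetic = [math.sin(2 * math.pi * 440 * i / rate) for i in range(n)]
assert band_energy_ratio(synthetic, rate, 2000) < band_energy_ratio(natural, rate, 2000)
```

Real detectors learn these spectral signatures from thousands of examples rather than using a fixed threshold, but the underlying principle is the same: programmatically generated audio often leaves a measurable spectral footprint.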

Can facial recognition detect deepfakes?

Deepfake detection tools are also being used in facial recognition systems. Deepfake detection algorithms focus on identifying subtle discrepancies and anomalies that are not present in authentic human faces.

Can deepfakes be tracked?

As these generative artificial intelligence (AI) technologies become more common, researchers are now tracking their proliferation through a database of political deepfakes.
