Last month during ESPN’s hit documentary series The Last Dance, State Farm debuted a TV commercial that has become one of the most widely discussed ads in recent memory. It appeared to show footage from 1998 of an ESPN analyst making shockingly accurate predictions about the year 2020.
As it turned out, the clip was not genuine: it was generated using cutting-edge AI. The commercial surprised, amused and delighted viewers.
What viewers should have felt, though, was deep concern.
The State Farm ad was a benign example of an important and dangerous new phenomenon in AI: deepfakes. Deepfake technology enables anyone with a computer and an Internet connection to create realistic-looking photos and videos of people saying and doing things that they did not actually say or do.
A blend of the phrases “deep learning” and “fake”, deepfakes first emerged on the Internet in late 2017, powered by a then-novel deep learning method known as generative adversarial networks (GANs).
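A GAN pits two neural networks against each other: a generator that synthesizes fake images and a discriminator that tries to distinguish fakes from real data. As training proceeds, each network's improvement forces the other to improve, until the fakes become hard to detect. The sketch below is a minimal illustration of the standard GAN loss terms only, in plain Python with hypothetical discriminator scores standing in for real network outputs; it is not a full training implementation.

```python
import math

def discriminator_loss(d_real: float, d_fake: float) -> float:
    # The discriminator wants to score real data near 1 and fakes near 0,
    # so it minimizes -[log D(x) + log(1 - D(G(z)))].
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake: float) -> float:
    # The generator wants its fakes to be scored as real,
    # so it minimizes -log D(G(z)).
    return -math.log(d_fake)

# Early in training the discriminator easily spots fakes (score near 0),
# giving the generator a large loss and a strong pressure to improve.
early = generator_loss(0.05)

# Once fakes fool the discriminator (score near 1), the generator's loss
# is small: this adversarial pressure is what drives deepfake realism.
late = generator_loss(0.95)
```

The key design point is that neither network is graded against a fixed standard; each is graded against the other, which is why the quality of the fakes keeps climbing as long as training continues.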
Several deepfake videos have gone viral recently, giving millions around the world their first taste of this new technology: President Obama using an expletive to describe President Trump, Mark Zuckerberg admitting that Facebook’s true goal is to manipulate and exploit its users, Bill Hader morphing into Al Pacino on a late-night talk show.
The amount of deepfake content online is growing at a rapid rate. At the beginning of 2019 there were 7,964 deepfake videos online, according to a report from startup Deeptrace; just nine months later, that figure had jumped to 14,678. It has no doubt continued to balloon since then.
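Using the Deeptrace figures above, the growth rate works out to a near-doubling in nine months. A quick check of the arithmetic:

```python
# Deepfake video counts reported by Deeptrace (early 2019 and nine months later).
start, end = 7_964, 14_678
months = 9

# Total growth over the period: roughly 84%, i.e. close to doubling.
growth = end / start - 1

# Equivalent compound monthly growth rate: roughly 7% per month.
monthly = (end / start) ** (1 / months) - 1
```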
While impressive, today’s deepfake technology has not yet reached parity with authentic video footage: on close inspection, it is typically possible to tell that a video is a deepfake. But the technology is improving at a breathtaking pace. Experts predict that deepfakes will be indistinguishable from real images before long.
“In January 2019, deep fakes were buggy and flickery,” said Hany Farid, a UC Berkeley professor and deepfake expert. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”
Today we stand at an inflection point. In the months and years ahead, deepfakes threaten to grow from an Internet oddity to a widely destructive political and social force. Society needs to act now to prepare itself.
When Seeing Is Not Believing
The first use case to which deepfake technology has been widely applied—as is often the case with new technologies—is pornography. As of September 2019, 96% of deepfake videos online were pornographic, according to the Deeptrace report.
A handful of websites dedicated specifically to deepfake pornography have emerged, collectively garnering hundreds of millions of views over the past two years. Deepfake pornography is almost always non-consensual, involving the artificial synthesis of explicit videos that feature famous celebrities or personal contacts.
From these dark corners of the web, the use of deepfakes has begun to spread to the political sphere, where the potential for mayhem is even greater.
It does not require much imagination to grasp the harm that could be done if entire populations can be shown fabricated videos that they believe are real. Imagine deepfake footage of a politician engaging in bribery or sexual assault right before an election; or of U.S. soldiers committing atrocities against civilians overseas; or of President Trump declaring the launch of nuclear weapons against North Korea. In a world where even some uncertainty exists as to whether such clips are authentic, the consequences could be catastrophic.
Because of the technology’s widespread accessibility, such footage could be created by anyone: state-sponsored actors, political groups, lone individuals.
In a recent report, The Brookings Institution grimly summed up the range of political and social dangers that deepfakes pose: “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
Given the stakes, U.S. lawmakers have begun to pay attention.