Technology
Quicktake

How Deepfakes Make Disinformation More Real Than Ever

Nixon, but not Nixon's words.

Source: YouTube


One video shows Barack Obama using an obscenity to refer to U.S. President Donald Trump. Another features a different former president, Richard Nixon, performing a comedy routine. Neither video is real: the first was created by filmmaker Jordan Peele, the second by Jigsaw, a technology incubator within Alphabet Inc. Both are examples of deepfakes, videos or audio clips that use artificial intelligence to make someone appear to do or say something they didn’t. The technology is only a few years old and improving quickly. So far it has mostly been used to create phony pornography, but many worry it could disrupt politics and business. Researchers at New York University have called deepfakes a “menace on the horizon,” with the “potential to erode what remains of public trust in democratic institutions.” With U.S. elections approaching, Facebook is tightening its policy.

The name, which originated with an early practitioner who posted on Reddit under the handle “deepfakes,” appears to nod to deep learning, a subset of machine learning that uses layers of artificial neural networks to train computers to perform a task. For a deepfake video, a program is typically fed high-quality images of a target’s face, which it then seamlessly swaps onto another person’s face in a video. Deepfake audio uses genuine recordings to train computers to talk like a specific person. Similar machine-learning techniques can be used to train computers to write fake text. A video that has merely been slowed down, sped up or deceptively edited, such as a recent clip of House Speaker Nancy Pelosi appearing to slur her words, isn’t typically considered a deepfake and is sometimes called a shallowfake.
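The face-swap technique popularized by the original Reddit tools pairs one shared encoder with a separate decoder per identity: both faces are compressed into a common pose-and-expression code, and the swap happens by decoding person A's code with person B's decoder. The sketch below is a minimal, untrained NumPy illustration of that data flow only; all names and dimensions are our own, the weights are random, and a real system would train these networks on thousands of face crops.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM_FACE = 64 * 64  # a flattened 64x64 grayscale face crop (assumption)
DIM_CODE = 128      # size of the shared latent "pose/expression" code

# One shared encoder, two identity-specific decoders.
# Random linear maps stand in for trained neural networks.
W_enc = rng.normal(size=(DIM_CODE, DIM_FACE)) * 0.01
W_dec_a = rng.normal(size=(DIM_FACE, DIM_CODE)) * 0.01
W_dec_b = rng.normal(size=(DIM_FACE, DIM_CODE)) * 0.01

def encode(face):
    """Compress a face into an identity-agnostic code."""
    return np.tanh(W_enc @ face)

def decode(code, w_dec):
    """Reconstruct a face from a code, in one identity's likeness."""
    return w_dec @ code

face_a = rng.normal(size=DIM_FACE)  # stand-in for a video frame of person A

# The swap: person A's frame goes in, person B's decoder comes out,
# yielding B's face wearing A's pose and expression.
code = encode(face_a)
swapped = decode(code, W_dec_b)
print(swapped.shape)  # (4096,)
```

During training, each decoder only ever learns to reconstruct its own person's faces from the shared code; the swap is simply using the "wrong" decoder at inference time, which is why the technique needs many images of the target's face.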