Can Taylor Swift Save Humanity From AI’s Dark Side?
The singer’s clout may finally tip the scales toward effective laws and spotlight how generative AI is doing more harm than good.
Swifties aren’t having it.
Photographer: Patrick Smith/Getty Images North America

The recurring story of new technology is unintended consequences. Take AI-powered image generators. Their creators claim they are enhancing human imagination and making everyone an artist, but they often fail to mention how much they're also helping to create illicit deepfake pornography. Lots of it. Over the weekend, X had to shut down searches for "Taylor Swift" because the site formerly known as Twitter had been flooded with so many faked porn images of the singer that it couldn't weed them all out. One image alone was viewed more than 45 million times before being taken down. Swift's scandal points to a broader problem: around 96% of deepfakes on the web are pornographic. But it could also be the tipping point that finally prompts some genuine solutions.
Enough has happened in January alone to show that, in the absence of proper regulation, the harms of generative AI are starting to outweigh the benefits. The technology is being used in more scams and bank frauds, it's making Google search results worse, and it's duping voters with fake robocalls from President Joe Biden. But the attack on Swift shows where generative AI's toxic effects are most insidious: it creates whole new groups of victims and abusers in a marketplace for unauthorized, sexualized images. It also points to the quieter but no less damaging way generative AI has been undermining the dignity of women, churning out images that are sexualized by default, a problem that is even worse for women of color.
