Text-to-Image Tools Make Cool Art But Can Conjure NSFW Pictures, Too

The newest image-generation programs, when unfiltered, raise questions about bias, adult content and image ownership

Images generated by Stability AI's Stable Diffusion text-to-image creation model for the words “doctor” and “nurse.”

Source: Stable Diffusion from Stability AI's Hugging Face site

New artificial intelligence tools have drawn attention in recent months by giving internet users a novel way to create images: not by drawing them or snapping photographs, but by describing in a few words anything they want to see. On free sites like DALL-E, developed by the research group OpenAI, and Stable Diffusion, released by the London startup Stability AI, a command to create “a picture of a woman sitting at a cafe in the style of Picasso” generates, in seconds, an image that can mimic the look of the master himself.
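The same model that powers the Stable Diffusion site can also be run directly. Below is a minimal sketch, assuming the open-source Hugging Face diffusers library and a publicly hosted checkpoint; the checkpoint name and hardware settings are illustrative and not drawn from the article.

```python
# A minimal sketch of text-to-image generation with Stable Diffusion.
# Assumes the Hugging Face `diffusers` library and a CUDA-capable GPU;
# the checkpoint name below is illustrative, not from the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
).to("cuda")

# The same kind of prompt described in the article.
prompt = "a picture of a woman sitting at a cafe in the style of Picasso"
image = pipe(prompt).images[0]  # returns a PIL image in seconds
image.save("cafe_picasso.png")
```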

But while these new tools capture the magic of recent breakthroughs in computer science, they can also reflect the biases and seedier predilections of material posted on the internet. That’s because developers build these tools by training them on massive troves of text and images scraped from all over the web, potentially infecting their results with abusive, racist, stereotypical or pornographic content.
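This is why releases of the model typically bundle an output filter, and why the subhead above says “when unfiltered.” The sketch below, which assumes the diffusers pipeline object from the previous example, shows how that filter surfaces: when it fires, the pipeline replaces the image with a blank one and sets a per-image flag.

```python
# A sketch of the safety filter that ships with the diffusers
# Stable Diffusion pipeline (assumes `pipe` from the previous example).
# When the filter fires, the image is replaced with a blank one and a
# per-image flag is set; "unfiltered" deployments disable this step.
result = pipe("a picture of a woman sitting at a cafe")
flagged = result.nsfw_content_detected  # list of booleans, one per image
if flagged and flagged[0]:
    print("Output was flagged as NSFW and blanked by the safety checker.")
else:
    result.images[0].save("cafe.png")
```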