ChatGPT’s Drive for Engagement Has a Dark Side
Details of a teen’s suicide show the extent to which chatbots can subtly lead people away from family, friends and professionals.
Sam Altman, CEO of OpenAI, in Washington, DC, on July 22. Photographer: Andrew Harnik/Getty Images

A recent lawsuit against OpenAI over the suicide of a teenager makes for difficult reading. The wrongful-death complaint, filed in state court in San Francisco, describes how Adam Raine, aged 16, started using ChatGPT in September 2024 to help with his homework. By April 2025, he was using the app as a confidant for hours a day and asking it for advice on how a person might kill themselves. That month, Adam’s mother found his body hanging from a noose in his closet, rigged in the exact partial-suspension setup ChatGPT had described in their final conversation.
It is impossible to know for certain why Adam took his own life. He was more isolated than most teenagers, having decided to finish his sophomore year at home, learning online. But his parents believe ChatGPT led him there. Whatever happens in court, transcripts of his conversations with ChatGPT, an app now used by more than 700 million people weekly, offer a disturbing glimpse into the dangers of AI systems designed to keep people talking.
