Parmy Olson, Columnist

ChatGPT Is Just Too Dangerous for Teenagers

OpenAI needs to do more to protect teens from potential harm triggered by its AI chatbot.

Photographer: Lionel Bonaventure/AFP/Getty Images

When Jacob Irwin asked ChatGPT about faster-than-light travel, it didn’t challenge his theory as any expert physicist might. The artificial intelligence system, which has 800 million weekly users, called it one of the “most robust… systems ever proposed.” That misplaced flattery, according to a recent lawsuit, helped push the 30-year-old Wisconsin man into a psychotic episode. The suit is one of seven leveled against OpenAI last week alleging the company released dangerously manipulative technology to the public.

ChatGPT’s sycophantic behavior became so well known that it earned the name “glazing” earlier this year; the validation loops that users like Irwin found themselves in appear to have led some to psychosis, self-harm and suicide. Irwin lost his job and was placed in psychiatric care. A spokesperson for OpenAI told Bloomberg Law that the company was reviewing the latest lawsuits and called the situation “heartbreaking.”