Stephen L. Carter, Columnist

ChatGPT Can Lie, But It’s Only Imitating Humans

It’s creepy that a bot would decide to deceive, but perhaps we shouldn’t be surprised.

[Photo caption: Things are starting to get weird. Photographer: Nicolas Maeterlinck/AFP/Getty Images]

There’s been a flurry of excitement this week over the discovery that GPT-4, the model behind the latest ChatGPT, can tell lies.

I’m not referring to the bot’s infamous (and occasionally defamatory) hallucinations, in which the program invents a fluent, plausible-sounding account of events with little connection to reality, a flaw some researchers think may be inherent in any large language model.