Parmy Olson, Columnist

The Human Perils of Giving ChatGPT More Memory

Personalized AI systems could revive some of the unintended consequences of social media, like echo chambers.    


OpenAI is rolling out what it calls a memory feature in ChatGPT. The popular chatbot will be able to store key details about its users to make answers more personalized and “more helpful,” according to OpenAI. These can be facts about your family or health, or preferences about how you want ChatGPT to talk to you so that instead of starting on a blank page it’s armed with useful context. As with so many tech innovations, what sounds cutting-edge and useful also has a dark flipside: It could blast another hole into our digital privacy and — just maybe — push us further into the echo chambers that social media forged.

AI firms have been chasing new ways of increasing chatbot “memory” for years to make their bots more useful. They’re also following a road map that worked for Facebook: gleaning personal information to better target users with content that keeps them scrolling.