Tyler Cowen, Columnist

The AI ‘Safety Movement’ Is Dead

Public pressure to rein in artificial intelligence may be waning, but the work of making these systems less risky is just beginning. 

Photo caption: When it comes to AI, they’re actually making sense. Photographer: Graeme Sloan/Bloomberg


May 2024 will be remembered as the month that the AI safety movement died. It will also be remembered as the time when the work of actually making artificial intelligence safer began in earnest.

Some history: In the mid-2010s, a movement known as “effective altruism” made AI safety a top priority, based on fears that highly advanced AI models could vanquish us all, or at least cause significant global chaos. Two leading AI companies, Anthropic and OpenAI, set up complicated board structures, with nonprofit elements in the mix, to keep those companies from producing dangerous systems.