Parmy Olson, Columnist

OpenAI’s Q* Is Alarming for a Different Reason

When AI systems start solving problems, the temptation to give them more responsibility is predictable. That warrants greater caution. 

OpenAI Chief Executive Officer Sam Altman

Photographer: Justin Sullivan/Getty Images North America

When news emerged last week that OpenAI had been working on a new AI model called Q* (pronounced “q star”), some suggested it was a major step toward powerful, humanlike artificial intelligence that could one day go rogue. What’s more certain: The hype around Q* has boosted excitement about the company’s engineering prowess, just as it steadies itself after a failed board coup. Peaks of excitement over AI milestones have taken the public for a ride plenty of times before. The real warning we should take from Q* is the direction in which these systems are progressing. As they get better at reasoning, it will become more tempting to give such tools greater responsibilities. More than any concerns about AI annihilation, that alone should give us pause.

OpenAI hasn’t confirmed what Q* is; reinstated Chief Executive Officer Sam Altman has described it only as an “unfortunate leak.” But from media reports, it sounds similar to a system that Alphabet Inc.’s Google is working on. Gemini, the big new competitor to ChatGPT, will not only generate text and images but also excel at planning and strategizing, according to Google DeepMind CEO Demis Hassabis. DeepMind famously created an AI model that beat champion Go players, and Gemini will use some of those techniques for problem solving.