Technology

AI Energy Crisis Boosts Interest in Chips That Do It All

So-called in-memory computing is drawing new funding—and even inspiring some geopolitical maneuvering.

Sid Sheth, founder and CEO of d-Matrix, holds the company’s Jayhawk and Nighthawk chiplets.

Source: Courtesy d-Matrix

To understand a key reason artificial intelligence requires so much energy, imagine a computer chip serving as a branch of the local library and an AI algorithm as a researcher with borrowing privileges. Every time the algorithm needs data, it goes to the library, known as a memory chip, checks out the data and takes it to another chip, known as a processor, to perform a calculation.

AI requires massive amounts of data, which means the equivalent of billions of books is constantly being trucked back and forth between these two chips, a process that burns through lots of electricity. For at least a decade, researchers have tried to save power by building chips that could process data where it’s stored. “Instead of bringing the book from the library to home, you’re going to the library to do your work,” says Stanford University professor Philip Wong, a top expert in memory chips who’s also a consultant to Taiwan Semiconductor Manufacturing Co.
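To put rough numbers on Wong’s analogy, the Python sketch below is a toy back-of-envelope model, not a description of any real chip. The energy constants, including the 100-to-1 ratio between moving a byte off-chip and computing on it, and the 7-billion-weight example, are illustrative assumptions chosen only to show why keeping data in place can save power.

    # A toy model of why shuttling data dominates AI energy use.
    # All constants are illustrative assumptions, not measurements:
    # moving a byte between chips is assumed to cost 100x the energy
    # of a multiply-accumulate performed on that byte in place.

    MOVE_PJ_PER_BYTE = 100.0  # assumed: trucking a byte between chips (picojoules)
    MAC_PJ_PER_BYTE = 1.0     # assumed: computing on a byte where it sits (picojoules)

    def conventional_energy(weight_bytes: int) -> float:
        # Every weight travels from the memory chip to the processor
        # before it is used -- borrowing each book from the library.
        return weight_bytes * (MOVE_PJ_PER_BYTE + MAC_PJ_PER_BYTE)

    def in_memory_energy(weight_bytes: int) -> float:
        # The arithmetic happens inside the memory array, so the
        # weights never travel -- doing the research at the library.
        return weight_bytes * MAC_PJ_PER_BYTE

    if __name__ == "__main__":
        weights = 7_000_000_000  # e.g., a 7-billion-parameter model, one byte per weight
        conv = conventional_energy(weights) / 1e12  # picojoules -> joules
        cim = in_memory_energy(weights) / 1e12
        print(f"conventional: {conv:.3f} J per pass")  # 0.707 J
        print(f"in-memory:    {cim:.3f} J per pass")   # 0.007 J
        print(f"savings:      {conv / cim:.0f}x")      # 101x

Under these assumed numbers, the in-memory version spends roughly a hundredth of the energy per pass, all of it saved by never moving the books out of the library.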