Artificial Intelligence Has an ‘Alignment’ Problem
A big risk of AI is amplifying the biases of its human creators. On this episode of AI IRL, we explore whether the danger can be avoided.
AI IRL: The Alignment Problem
The risks associated with artificial intelligence are mounting. They include repeating and amplifying the biases of its human creators, as well as its failure to understand and apply the nuances and values that color our decisions.
The issue is one of “alignment,” a conundrum inherent to AI. While the technology can be taught to accomplish tasks in a logical fashion—how to get from your house to the pizza parlor downtown—it must also learn the values that would, say, prevent you from driving across a golf course to get there. AI researchers keep their own catalogs of alignment failures, some comical, others far more sobering.