Can AI Gain Sentience? Maybe, But Probably Not Yet: QuickTake
Media coverage of artificial intelligence tends to invoke tired references to “The Terminator” or “2001: A Space Odyssey,” in which the HAL 9000 computer kills a spaceship’s passengers. Hollywood loves a story about a sentient robot destroying humanity in order to survive. In recent days, Google researcher Blake Lemoine grabbed headlines for getting suspended after releasing transcripts of a “conversation” with the company’s Lamda artificial intelligence research experiment. Lemoine believes that Lamda is sentient and aware of itself, and he describes the machine as a “coworker.” He told the Washington Post that part of his motivation for going public was his belief that “Google shouldn’t be the ones making all the choices” about what to do with it. The overwhelming reaction among artificial intelligence experts was to pour cold water on these claims.
Lamda is an acronym for Language Model for Dialogue Applications. As the name suggests, it’s a tool designed to create a “model” of language so people can talk to it. Like similar experiments, such as GPT-3 (Generative Pre-trained Transformer 3) from Elon Musk-backed OpenAI and Google’s earlier BERT (Bidirectional Encoder Representations from Transformers), it’s best thought of as an amped-up version of the algebra you learned at school, with a twist. That twist is called machine learning, but before getting to it we have to go back to the classroom and talk about algorithms.
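The core idea of a statistical “model” of language can be sketched in a few lines. This toy example assumes nothing about Lamda’s actual architecture, which is a neural network with billions of parameters; it simply shows the underlying task in miniature: given some words, score what word is likely to come next.

```python
from collections import Counter, defaultdict

# A toy "language model": predict the next word purely from counts
# of which word followed which in a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

Systems like Lamda replace the raw counts with learned numerical weights, but the job is the same: turn patterns in text into predictions about more text.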