- Five-game series designed to show off software capabilities
- Go is a 2,500-year-old strategy game for taking territory
Google DeepMind’s artificial intelligence system beat a top-ranked player of the board game Go in a televised match in South Korea, providing the first evidence that the company’s software has attained superhuman skill at the challenging 2,500-year-old strategy game.
The Internet company is playing a five-game series against Lee Sedol, who Google said has been the top-ranked Go player of the past decade, to show off the capabilities of software developed by its London-based AI subsidiary, DeepMind.
“It’ll never get tired and it’ll never get intimidated,” said DeepMind co-founder Demis Hassabis, at a press conference Tuesday ahead of the match. “These are the main advantages.”
The breakthrough astounded experts, who’d previously thought it would be five to 10 years before AI would be good enough to beat top human players at Go, and positions Google as a leader in the next generation of super-smart computing. The search giant already uses AI in a range of products -- automatically writing e-mails, recommending YouTube videos, helping cars drive themselves. The next wave of AI technologies will use techniques akin to those developed by DeepMind, though the company hasn’t yet disclosed any particular products.
“Health care is one of the main things we’re looking at next,” Hassabis said. “The system and techniques that we’re using for AlphaGo should be useful for anywhere, any kind of problem where there’s lots and lots of data and you’re trying to understand the structure in that data and make some kind of decision.”
DeepMind, part of Mountain View, California-based Alphabet Inc., revealed its software, called AlphaGo, in January in a paper published in the science journal Nature. AlphaGo had attained expert human-level performance at Go, and had beaten European professional Go player Fan Hui in a match held in the company’s London offices in October.
The first win against Lee is further confirmation of the power of DeepMind’s system and its progress in seeking to make machines that can out-smart humans. For scientists and researchers in AI, Go has been the game to conquer since IBM’s supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997.
What sets DeepMind’s approach apart from traditional Go-playing software is its use of a technology called a neural network, which lets computers learn from experience, rather than specific programming. This enables it to learn by studying example games, then by playing millions of games against itself, inferring the rules and, eventually, developing long-term strategies it can use to try to win. The system also uses a more traditional computing technique called Monte Carlo Tree Search.
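The Monte Carlo Tree Search half of that combination can be sketched in miniature. The sketch below applies plain MCTS with random rollouts to a tiny Nim-like game (players alternately take 1-3 stones; whoever takes the last stone wins). This is an illustration only: AlphaGo pairs the search with deep neural networks over full Go positions, and the toy game, node fields, and iteration count here are assumptions for the example, not DeepMind's implementation.

```python
import math
import random

# Toy game standing in for Go: players alternately remove 1-3 stones
# from a pile; the player who takes the last stone wins.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones            # stones remaining
        self.player = player            # player to move here (1 or 2)
        self.parent = parent
        self.move = move                # move that led to this node
        self.children = []
        self.untried = legal_moves(stones)
        self.wins = 0.0                 # wins for the player who moved INTO this node
        self.visits = 0

    def ucb1(self, c=1.4):
        # Upper Confidence Bound: balances exploiting good moves
        # against exploring under-visited ones.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(stones, player):
    # Play uniformly random moves to the end; return the winner.
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player               # this player took the last stone
        player = 3 - player

def mcts(root_stones, player, iterations=2000):
    root = Node(root_stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=lambda c: c.ucb1())
        # 2. Expansion: add one untried child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 3 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout (terminal nodes score directly).
        if node.stones == 0:
            winner = 3 - node.player    # previous mover took the last stone
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit results up the tree.
        while node:
            node.visits += 1
            if winner != node.player:   # win for the player who moved into node
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda n: n.visits).move
```

From a pile of 5, perfect play takes 1 stone (leaving the opponent a losing multiple of 4), and with a few thousand iterations the search converges on that move. AlphaGo's key change to this loop is replacing the random rollout and the uniform expansion with neural networks trained on example games and self-play.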
Go, also known as Baduk, is a game in which players battle to take territory on a board, taking turns placing stones on the intersections of a grid. There is only one type of piece; each player plays either the white or the black stones. On a 19-by-19 Go grid, there are more possible board configurations than there are atoms in the known universe.
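The scale claim is easy to check with a back-of-the-envelope upper bound: each of the 361 intersections can be empty, black, or white. (The 10^80 atom figure is a common rough estimate, and the count of *legal* Go positions is smaller than this bound but still dwarfs it.)

```python
# Upper bound on board configurations: 3 states per intersection,
# 361 intersections on a 19x19 board.
upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80            # common rough estimate

print(len(str(upper_bound)))            # 173 digits, i.e. ~10**172
print(upper_bound > atoms_in_universe)  # True
```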
“I’m somewhat shocked,” Lee told reporters after the match. “I didn’t really imagine I’d lose. I didn’t foresee AlphaGo would play Go so perfectly.”
The game is played widely in Asia, with tournaments awarding prizes in the hundreds of thousands of dollars. Top players like Lee are treated like celebrities -- DeepMind first contacted him through his agent, rather than reaching out directly, Hassabis said in January.
“Whenever you have a large number of people using something, we can probably use machine intelligence to make it more efficient,” Alphabet Chairman Eric Schmidt said in Seoul.