Google Software Beats Board-Game Champ in Three Straight Matches


Lee Se-Dol, one of the greatest modern players of the ancient board game Go, arrives before the third game of the Google DeepMind Challenge Match against Google-developed supercomputer AlphaGo at a hotel in Seoul on March 12, 2016.

Photographer: Jung Yeon-Je/AFP via Getty Images
  • AI system scores victory in televised ‘Go’ game tournament
  • Beaten South Korean Lee Sedol considered world's best player

Chalk up another win for artificial intelligence. Google DeepMind’s AI system won its match against a top-ranked player of Go, as machine-learning software mastered the intricacies of the 2,500-year-old strategy board game.

The program scored its third straight victory against Lee Sedol on Saturday, clinching the best-of-five match. The South Korean is considered the world’s best Go player of the past decade.

Lee Se-Dol at the Google DeepMind Challenge Match
Photographer: Jung Yeon-Je/AFP/Getty Images

“It is a huge landmark for Go, a long-anticipated moment in the West and -- to a greater degree than I imagined -- a shock to many people in Asia,” said Andrew Okun, president of the American Go Association.

DeepMind’s success at Go has astounded experts, who thought it would be five to 10 years before AI could beat top-ranked professional players of the game. While the rules are simple -- players battle for territory by placing white or black stones on the intersections of a 19-by-19 grid of lines -- Go is vastly more complex than chess, by a factor of roughly 10 followed by 99 zeros.
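The scale of that complexity can be sanity-checked with a quick back-of-envelope calculation (a rough illustration, not a figure from the article): each of the board's 361 intersections can be empty, black, or white, which already yields an astronomically large upper bound on board configurations.

```python
import math

# Back-of-envelope check on Go's complexity. Each of the
# 19 x 19 = 361 intersections can be empty, black, or white,
# giving a naive upper bound of 3**361 board configurations
# (most are not legal positions, so this overstates the true count).
points = 19 * 19
upper_bound = 3 ** points

# The bound has 173 digits -- i.e., roughly 10**172, dwarfing
# commonly cited estimates for chess.
digits = len(str(upper_bound))
print(digits)  # 173
```

Even this crude bound makes clear why brute-force search, which works reasonably well for chess, is hopeless for Go.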

The victory positions Google as a leader in the next generation of super-smart computing. The search giant already uses AI in a range of products -- automatically writing e-mails, recommending YouTube videos, helping cars drive themselves. The next wave of AI technologies will use techniques akin to those developed by DeepMind, though the company hasn’t yet disclosed any particular products.

‘Somewhat Shocked’

Google revealed the AlphaGo game-playing software in the science journal Nature in January. At that time, Google said its system had already beaten professional European Go player Fan Hui in matches held at its London office in October. Since then, the software has been playing thousands upon thousands of games against itself, with Google running as many as a hundred separate versions of the program in parallel at any one time. That let the software acquire experience and knowledge at a rate faster than a human ever could.
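The self-play idea described above can be sketched in miniature (a toy illustration only -- AlphaGo's actual pipeline pairs deep neural networks with tree search and uses game outcomes to update its policy): two copies of the same naive random policy play complete games of tic-tac-toe against each other while results are tallied.

```python
import random

# Toy self-play: two copies of one random policy play tic-tac-toe
# against each other and we tally the outcomes. A real system would
# feed each outcome back to improve the policy; here we only count.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_game(rng):
    """Play one game with both sides choosing moves uniformly at random."""
    board = [None] * 9
    player = 'X'
    while True:
        moves = [i for i, v in enumerate(board) if v is None]
        if not moves:
            return 'draw'
        board[rng.choice(moves)] = player
        w = winner(board)
        if w:
            return w
        player = 'O' if player == 'X' else 'X'

def self_play(n, seed=0):
    """Run n self-play games and tally results."""
    rng = random.Random(seed)
    tally = {'X': 0, 'O': 0, 'draw': 0}
    for _ in range(n):
        tally[play_game(rng)] += 1
    return tally

print(self_play(2000))
```

Even this trivial setup shows why self-play is attractive: games can be generated as fast as the hardware allows, with no human opponents needed -- which is what running a hundred copies in parallel exploits.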

“I’m somewhat shocked,” Lee said after his first loss. “I am quite speechless,” he said after his second loss.

Development of the software won’t end with this victory. DeepMind’s co-founder, Demis Hassabis, has said the company wants to devise a version of the algorithm that requires less knowledge of the game. DeepMind also plans to probe the peculiar intelligence displayed by its software by tweaking the Go board, perhaps by removing certain points on the grid or changing how they’re connected, to see how AlphaGo reacts.

There are already signs that AlphaGo has come up with strategies that professional Go players haven’t previously considered. During its second match, the program made a move that the game experts found hard to understand.

“It’s playing moves that are definitely not usual moves,” said Michael Redmond, the game commentator and also a highly ranked Go player, speculating it was “coming up with the moves on its own.”

All five games will be played to determine the final match score and to learn more from Lee, according to a post on Google’s Asia Pacific blog. The next game will be on Sunday and the final on Tuesday, March 15, according to the post.

Before it's here, it's on the Bloomberg Terminal.