Artificial intelligence researchers are closing in on a new benchmark for comparing the human mind and machine. On Wednesday, DeepMind, a research organization that operates under the umbrella of Alphabet, reported that a program combining two separate algorithms had soundly defeated a high-ranking professional Go player in a five-game match.
The result, which appeared in the Jan. 27 edition of the journal Nature, is further evidence of the power created when a class of machine learning programs known as “deep neural networks” is combined with immense sets of data.
Go is seen as a good test for artificial intelligence researchers because it is more complex than chess, with a vastly larger number of possible positions. That makes strategy and reasoning in the game far more challenging for software.
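The scale of that difference can be made concrete with a rough back-of-the-envelope calculation. The sketch below uses widely cited approximations, about 35 legal moves per chess position over roughly 80-ply games versus about 250 moves per Go position over roughly 150-ply games; these are illustrative estimates, not exact counts.

```python
import math

# Widely cited rough estimates of average branching factor and game
# length in plies. Illustrative approximations only, not exact figures.
CHESS_BRANCHING, CHESS_DEPTH = 35, 80
GO_BRANCHING, GO_DEPTH = 250, 150

def tree_size_order(branching: int, depth: int) -> int:
    """Order of magnitude (base 10) of branching ** depth."""
    return round(depth * math.log10(branching))

print("chess game tree ~ 10 **", tree_size_order(CHESS_BRANCHING, CHESS_DEPTH))
print("go game tree    ~ 10 **", tree_size_order(GO_BRANCHING, GO_DEPTH))
```

Under these assumptions the chess game tree is on the order of 10^124 positions and the Go tree on the order of 10^360, which is why brute-force search alone cannot master Go.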
Go is played with round black and white stones, and two players alternately place pieces on a square grid with the goal of occupying the most territory. Until recently, the best software could do no better than beat amateur Go players. In the Nature paper, however, engineers at DeepMind described a program, AlphaGo, that achieved a 99.8 percent winning rate against other Go programs. It also swept five games from the European Go champion, Fan Hui.
The match between the AlphaGo program and Fan Hui took place in October, and the DeepMind program has continued to train since then, said Demis Hassabis, the researcher who founded DeepMind Technologies, which Google acquired in 2014. Google changed its name to Alphabet last year, though the company’s traditional ad-based businesses still operate under the Google label.
“The machine has continued to get better; we haven’t hit any kind of ceiling yet on performance,” he said.
The DeepMind approach combines the newest so-called “deep learning” techniques with a more traditional type of algorithm known as Monte Carlo tree search, which is designed to sample large numbers of possible sequences of moves rather than search them exhaustively. The researchers said they had also trained their program using input from expert human Go players.
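The Monte Carlo component mentioned above can be illustrated with a toy example. The sketch below runs Monte Carlo tree search (the common UCT variant) on a simple take-away game in which players alternately remove one or two stones and whoever takes the last stone wins. Everything here, the game, the node layout and the random rollout policy, is a simplified stand-in for illustration, not DeepMind's system, which pairs the tree search with learned neural-network evaluations.

```python
import math
import random

def legal_moves(pile):
    # In this toy game a player may take 1 or 2 stones.
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, mover, parent=None):
        self.pile = pile        # stones remaining in this state
        self.mover = mover      # player (0 or 1) whose move produced this state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.wins = 0.0         # wins counted from self.mover's perspective

    def expand(self):
        self.children = [Node(self.pile - m, 1 - self.mover, self)
                         for m in legal_moves(self.pile)]

    def best_uct(self, c=1.4):  # balance exploitation against exploration
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(pile, to_move):
    """Play uniformly random moves to the end; return the winning player."""
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        to_move = 1 - to_move
    return 1 - to_move          # the player who just moved took the last stone

def mcts(root_pile, iterations=3000):
    root = Node(root_pile, mover=1)     # player 0 is about to move
    for _ in range(iterations):
        node = root
        # Selection: descend through fully visited internal nodes.
        while node.children:
            unvisited = [ch for ch in node.children if ch.visits == 0]
            if unvisited:
                node = random.choice(unvisited)
                break
            node = node.best_uct()
        # Expansion: grow the tree one level below a visited leaf.
        if node.visits > 0 and node.pile > 0:
            node.expand()
            node = random.choice(node.children)
        # Simulation: random playout from the new state.
        winner = rollout(node.pile, 1 - node.mover)
        # Backpropagation: credit every node on the path back to the root.
        while node is not None:
            node.visits += 1
            node.wins += (winner == node.mover)
            node = node.parent
    best = max(root.children, key=lambda ch: ch.visits)
    return root.pile - best.pile        # number of stones to take

random.seed(0)
print(mcts(7))   # from 7 stones, taking 1 (leaving 6) is the winning move
```

With enough simulations the search concentrates its visits on the strongest move. AlphaGo's advance, per the Nature paper, was to guide this kind of search with deep neural networks rather than purely random playouts.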
Perhaps as intriguing as the DeepMind advance is the rivalry that the research and the game have created among the public relations departments of companies like Alphabet, Microsoft and Facebook.
The day before the Alphabet paper was published, Facebook republished an earlier paper the company had posted on the arXiv.org website. At the same time, Facebook issued blog posts, one from Yann LeCun, one of its artificial intelligence researchers, and another from the company’s chief executive, Mark Zuckerberg. The statement by Mr. Zuckerberg drew a swift response from one Facebook user that may express a deeper human concern than the narrow results of the research: “Why don’t you leave that ancient game alone and let it be without any artificial players? Do we really need an A.I. in everything?” wrote Konstantinos Karakasidis.
Those concerns are not likely to be heeded. In a blog post on Wednesday morning, Alphabet said that, in an effort to reprise the victory of IBM’s Deep Blue chess program over the world champion Garry Kasparov in 1997, it would soon pit AlphaGo against Lee Sedol, one of the world’s top Go players. AlphaGo is scheduled to play a five-game match against Mr. Lee in March.
There will be a $1 million prize for the winner, and Mr. Hassabis said that Alphabet would donate the prize to charity if AlphaGo wins. The match will be streamed live on YouTube.
Mr. Hassabis, a skilled chess player who has also worked as a professional game designer, said that Go was a beautiful game, but that “building an A.I. is also a human endeavor and a kind of ingenious one too. The reason games are used as a testing ground is that they’re kind of like a microcosm of the real world.”