Why we shouldn’t get too excited about Google DeepMind beating the Go world champion

Google DeepMind’s artificial intelligence (AI) program, AlphaGo, this week won the first two of five scheduled matches of the board game Go against world champion Lee Sedol. In doing so, it became the first computer program in history to defeat a top-ranked human Go player.

DeepMind Technologies was founded in 2010 by Mustafa Suleyman and Demis Hassabis from the UK, and New Zealander Shane Legg, whom Hassabis met at University College London's Gatsby Computational Neuroscience Unit.

Using AI, the start-up built a computer that mimics the short-term memory of the human brain.

In 2014, Google acquired the company for £400 million and renamed it Google DeepMind, but it has remained headquartered in London.


IBM made headlines when its AI computer Deep Blue defeated chess grandmaster Garry Kasparov in 1997, but chess has only around 10 to the power of 60 possible games, compared with roughly 10 to the power of 700 for Go.

Dr Michael Green, chief analytical officer at Blackwood Seven, explains how DeepMind did it – and, while a historic moment, why it’s not as significant for artificial intelligence (AI) as some people think.

Dr Green has spent the last decade applying theoretical physics across a number of sectors, including financial services and medicine. Blackwood Seven uses machine learning technology similar to that behind Google’s self-driving cars, but applies it to the media industry.

This is really a landmark in the use of artificial intelligence, since it is the very first time a computer has beaten a 9-dan Go player, the game's highest rank, without handicaps. It had beaten a 5-dan before, but never a full-rank master.

The methodology they've used is not new, but it is a very nice combination of two known techniques. What DeepMind has done is combine deep learning with Monte Carlo tree search, which allows the software to simulate millions of games, see how those games turn out and then learn from the outcomes.

Because the games are simulated, they could generate as many as they wanted in order to train their deep-learning neural network.

This allowed them to alleviate the biggest challenge deep learning faces today: it needs a huge number of training examples in order to learn how to generalise.
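
To make that combination concrete, here is a minimal sketch of plain Monte Carlo tree search in Python, run on a toy game (single-pile Nim) rather than Go. It is an illustration, not DeepMind's AlphaGo code: where AlphaGo guides and evaluates the search with deep policy and value networks trained on millions of played and self-played games, the sketch below uses a uniform-random playout as a stand-in, and the toy game, function names and simulation budget are all assumptions chosen for brevity.

```python
import math
import random

# Toy game: single-pile Nim. A move removes 1-3 stones; whoever takes the
# last stone wins. The game is just a stand-in so the search has something
# small and exactly solvable to work on.

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def apply_move(stones, move):
    return stones - move

class Node:
    """One node of the search tree: a game state plus visit statistics."""
    def __init__(self, stones, parent=None):
        self.stones = stones   # stones left in the pile
        self.parent = parent
        self.children = {}     # move -> child Node
        self.visits = 0
        self.wins = 0.0        # reward from the view of the player who moved into this node

    def untried_moves(self):
        return [m for m in legal_moves(self.stones) if m not in self.children]

    def ucb_select(self, c=1.4):
        # Upper Confidence Bound: trade off win rate (exploitation) against
        # how rarely a move has been tried (exploration).
        return max(
            self.children.items(),
            key=lambda kv: kv[1].wins / kv[1].visits
            + c * math.sqrt(math.log(self.visits) / kv[1].visits),
        )

def rollout(stones):
    """Play the game out with random moves.

    Returns 1.0 if the player to move at `stones` goes on to win, else 0.0.
    AlphaGo replaces this random playout with evaluations from deep networks
    trained on huge numbers of (self-)played games.
    """
    if stones == 0:
        return 0.0          # the previous player took the last stone and won
    player = 0              # 0 = the player to move at the start of the playout
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return 1.0 if player == 0 else 0.0
        player = 1 - player

def mcts(root_stones, n_simulations=2000):
    """Run MCTS from `root_stones` and return the most-visited move."""
    root = Node(root_stones)
    for _ in range(n_simulations):
        node = root
        # 1. Selection: descend through fully expanded nodes using UCB.
        while not node.untried_moves() and node.children:
            _, node = node.ucb_select()
        # 2. Expansion: add one child for a move not yet tried.
        moves = node.untried_moves()
        if moves:
            move = random.choice(moves)
            child = Node(apply_move(node.stones, move), parent=node)
            node.children[move] = child
            node = child
        # 3. Simulation: see how a (random) game from here turns out.
        result = rollout(node.stones)  # from the view of the player to move at `node`
        # 4. Backpropagation: credit each ancestor from the perspective of the
        #    player who made the move leading into it, flipping at each level.
        reward = 1.0 - result
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

if __name__ == "__main__":
    # From 10 stones the winning move is to take 2, leaving a multiple of 4.
    # With a few thousand simulated games the search should usually find it.
    print("Suggested move from 10 stones:", mcts(10))
```

The loop is the part AlphaGo shares: repeatedly select a promising line of play, expand it, simulate an outcome and back the result up the tree. The deep learning comes in by replacing the random playout and random move choices with learned evaluations, which is what makes the approach workable at the scale of Go.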

This approach will allow DeepMind to beat any human in any objective game within a very short period of time.

It's important to remember that AI has already replaced humans in some smaller areas. It’s already in your phone, your computer, your spell-checker and your voice recognition – these all use AI, and AI now recognises spoken language more accurately than humans do.

This victory removes the doubts surrounding AI. People were convinced DeepMind could not beat the best Go player, but they will have to start taking AI seriously now. 


DeepMind's system has made decisions, but in this instance the AI could only make decisions about a game.

If you asked DeepMind's AlphaGo, or any AI for that matter, to predict or handle any scenario other than the one it was designed for, it couldn't do it.

No one has yet written an AI that can move across domains. Today's AI can handle only one particular domain at a time, and it cannot move from one domain to another without retraining and restructuring.

There is no AI today that we could release into the world and that would just work across all areas simultaneously. This will be the next step for AI: creating a being that is able to make decisions in this huge playground that we call the world.
