AlphaGo as a proof of concept for businesses

Last month, Google DeepMind’s AlphaGo programme famously defeated professional Go player Lee Sedol, in what has been described as a breakthrough for artificial intelligence research.

Unlike previous game-playing computers, such as IBM’s chess-playing Deep Blue, which defeated Garry Kasparov in 1997, and IBM’s Watson, which won Jeopardy! in 2011, AlphaGo implements a fundamentally different kind of AI: it guides its search with deep neural networks trained through a combination of supervised and reinforcement learning.

Previous game-playing computers relied heavily on deterministic search techniques custom-built for a narrow problem domain. For example, IBM’s Deep Blue, though expert at chess, would have to be entirely reprogrammed to play checkers.

The novelty of AlphaGo’s search algorithm lies in its use of deep neural networks, which do not rely on hand-crafted, domain-specific rules. Instead, the networks are trained – initially on records of expert games and then through reinforcement learning – and are thought to be capable of learning how to perform any number of complex tasks.
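AlphaGo’s real networks are large convolutional networks described by DeepMind; purely to illustrate the idea – a function that maps a board position to a probability distribution over moves, with no Go-specific rules coded in – a toy “policy network” might be sketched like this (all sizes and names here are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: a tiny two-layer "policy network" mapping a flattened
# board state to a probability distribution over moves. AlphaGo's actual
# networks are deep convolutional networks; these sizes are made up.
BOARD_CELLS = 9 * 9   # a small 9x9 board rather than the full 19x19
HIDDEN = 32
MOVES = BOARD_CELLS   # one candidate move per board cell, for simplicity

W1 = rng.normal(0, 0.1, (BOARD_CELLS, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, MOVES))

def policy(board):
    """Return a probability for each move, given a flattened board vector."""
    h = np.tanh(board @ W1)              # hidden representation
    logits = h @ W2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Encode stones as -1 (opponent), 0 (empty), 1 (own) and query the network.
board = rng.integers(-1, 2, BOARD_CELLS).astype(float)
p = policy(board)
print(f"{p.sum():.3f}")  # probabilities over moves sum to 1
```

In training, the weights `W1` and `W2` would be adjusted – first to imitate expert moves, then to maximise wins in self-play – rather than initialised randomly as here.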

>See also: Google DeepMind’s AlphaGo victory not ‘true AI’, says Facebook’s AI chief

Unlike the so-called “narrow AI” techniques used in programs such as Deep Blue, Watson or even Siri, reinforcement learning is a general learning technique that is inspired by neuroscience and thought to mimic how humans learn.

Go was long regarded as a particularly difficult problem for an AI system to solve because of its complexity. With approximately 250^150 possible sequences of moves – far more than there are atoms in the observable universe – traditional AI search techniques fall short. The fact that AlphaGo was able to beat a human expert roughly ten years earlier than the AI community had predicted is a testament to the power of neural networks.
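The figure comes from a standard back-of-envelope estimate: roughly 250 legal moves per turn over a game of roughly 150 moves. A quick check of its sheer scale:

```python
# Rough search-space estimate for Go: ~250 legal moves per turn,
# over a game of ~150 moves, gives about 250**150 move sequences.
sequences = 250 ** 150
digits = len(str(sequences))
print(f"250^150 has {digits} digits")  # 360 digits, i.e. about 10^360

# For comparison, the number of atoms in the observable universe is
# usually estimated at around 10^80.
```

A number with 360 digits is so far beyond 10^80 that exhaustively enumerating move sequences, as chess engines could for their far smaller game tree, is simply not an option.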

In reinforcement learning, an AI system is taught to build as accurate a model of the world as possible based on noisy and incomplete sensory data, and then to take the best action in response to those inputs.

The action taken then produces some effect that the system then senses and that informs its next action. This continuous feedback loop of act-observe-act is thought to model how humans learn from experience.
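The act-observe-act cycle can be made concrete with one of the simplest reinforcement learning methods, tabular Q-learning, on a toy problem. This is a sketch of the general feedback loop, not AlphaGo’s actual algorithm (which combines deep networks with tree search); the corridor environment and all parameters are invented for the example:

```python
import random

# A minimal act-observe-act loop: tabular Q-learning on a toy 5-state
# corridor where the agent earns a reward for reaching the right end.
N_STATES, ACTIONS = 5, [-1, +1]        # actions: step left or right
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # act: epsilon-greedy choice based on current estimates
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda m: Q[(s, m)])
        # observe: the environment returns a next state and a reward
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # learn: update the estimate, then act again from the new state
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every non-goal state.
print(all(Q[(s, 1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

The agent starts with no knowledge of the corridor; repeated cycles of acting, observing the outcome and updating its value estimates are what gradually produce good behaviour – the same loop, at vastly larger scale, that trained AlphaGo’s networks through self-play.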

Conventional wisdom has said that certain problems are so complex that they require a human expert. For example, driving a car, composing music and making business decisions are the types of tasks that – until recently – have been considered to sit squarely in the human-expert domain.

Intuition gained through experience is what defines a ‘human expert’ and was thought to be the enduring advantage that humans would always have over computers.

Now that computers are learning through techniques that simulate the process of gaining experience, they are capable of solving much more complex problems like the game of Go.

It is natural to ask then: if computers can be programmed to gain artificial intuition, is there any reason to think that computers will not be able to outperform humans in any domain in the future?

Google DeepMind’s achievement has captured the imaginations of businesses looking to stay one step ahead of the competition by using the most cutting-edge software to help them automate smart decisions.

Today, the most innovative companies make large numbers of small decisions in real time, a practice sometimes referred to as high-frequency decisioning or operational agility.

These real-time applications range from fraud detection for credit card transactions to ad placement engines. Such systems require algorithms to make decisions quickly in order to impact business as it happens.

Today’s systems, however, are programmed using deep knowledge of a narrow domain area. They are not general-purpose decision makers. But what if we could build systems that would be able to make expert business decisions in a variety of domains using reinforcement learning?

While the learning algorithms used by Google DeepMind have been hailed as revolutionary in the field of artificial intelligence, few have described in detail how a business might go about implementing a similarly powerful algorithm to make smart business decisions.

So what did it take to build AlphaGo?

First, it took a lot of brain power. Google DeepMind employs hundreds of research scientists and adopted a hybrid model of innovation, combining talent from academia and industry in an ambitious programme to ‘solve intelligence’.

Second, it took enormous computational resources. During competition, the AlphaGo program ran on a distributed compute platform with 1,202 CPUs and 176 GPUs.

Even well before AlphaGo played against any human, its neural networks had to be trained. The training itself was a large computation that utilised Google’s compute cloud. The supervised training phase took around three weeks to run 340 million training steps, and the reinforcement learning phases took around eight days on 50 GPUs.

Third, it required big data. The supervised learning phase of the neural network training employed a database of tens of millions of moves from past Go matches between expert players.

Google DeepMind has provided a proof of concept for the power of these new AI techniques. However, the reality is that business decisions are much more complex than even the game of Go. There is a well-defined goal in Go, but in business that is not always the case and data is often noisy and incomplete.

>See also: Google's British AI program defeats Go world champion Lee Sedol in historic match

As a result, the computational resources required to make truly smart business decisions will be even greater than those used by Google DeepMind. 

The smartest companies today understand that we are at an inflection point on the curve of what is computationally feasible and are making preparations. The enormous surge in interest in big data computational platforms, such as those built on the Hadoop framework, is testament to that.

Businesses that will differentiate themselves from the competition recognise that the key to building smart applications rests first and foremost on a sound and scalable big data platform that can grow and support an increasing number of complex applications over time.

As more and more companies see this sea change coming, the use of big data platforms for running mission-critical operational applications will accelerate and the winners will be those whose platforms are versatile, agile, scalable and extensible enough to keep up.


Sourced from Crystal Valentine, VP of technology strategy, MapR Technologies
