Artificial intelligence: advancements, abilities and limitations

‘Being good at Go isn’t top of the priority list for humanity’s needs, but building AI technology that is getting closer to human intuition certainly is’

John McCarthy coined the term ‘artificial intelligence’ in 1955, describing the field as ‘the science and engineering of making intelligent machines’.

Back then, many of the first applications of early computers were AI programs. In 1956, Allen Newell and Herbert A. Simon created Logic Theorist, a program that discovered proofs in propositional logic. Another early example is Arthur Samuel’s checkers-playing software.

While most of these programs treated search and learning as the foundations of the new field, the tricky part was getting AI to solve real problems – and AI has become pretty good at that over the years.

The AI that can predict 85% of cyber attacks

Cyber attacks are a real problem in the digital age. Last year there were 487,731,758 reported leaks, and the actual number is likely to be far higher. Some of these attacks involved money being stolen, others data being lifted, and in many cases lives were ruined.

This is a serious problem, but MIT is fighting back with an AI system that is reported to predict 85% of attacks before they even happen.

This may seem overly optimistic, but MIT tested the software on 3.6 billion log lines of internet activity to reach this conclusion. The new AI also produced five times fewer false positives than its predecessors.

Earlier cyber-attack spotters work in one of two ways. Some are AI programs that look for anomalies in internet traffic. Although these have some success, many of them throw up false positives, causing alarm where none is needed.


The second type of system is built on rules developed by humans. Again, these systems do spot cyber attacks successfully, but it’s hard for a human to program rules that catch every single attack.

The new program from MIT combines the two approaches to create something new. Called AI², the software uses three different machine learning algorithms to detect suspicious activity. Like most current AI, it still needs some human intervention to confirm whether the events it flags are truly suspicious, so it’s not completely autonomous.

After receiving that feedback, AI² refines its internal models and, over time, becomes better at separating signal from noise.
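The human-in-the-loop cycle can be sketched in a few lines. This is not MIT’s actual pipeline – AI² combines several unsupervised detectors with a trained supervised model – just a minimal illustration in which a z-score stands in for the outlier detectors and a whitelist of analyst-cleared events stands in for model refinement:

```python
import statistics

def zscores(events, baseline):
    """Unsupervised stage: score each event by its distance from 'normal' traffic.
    A simple z-score stands in for AI2's far more sophisticated outlier detectors."""
    mu = statistics.fmean(baseline)
    sd = statistics.pstdev(baseline) or 1.0
    return [(e, abs(e - mu) / sd) for e in events]

def feedback_loop(days, baseline, analyst, k=2):
    """Each day: surface the top-k outliers, collect the analyst's verdicts, and
    treat cleared events as benign so they stop raising alarms on later days."""
    benign, confirmed = set(), []
    for events in days:
        ranked = [e for e, s in sorted(zscores(events, baseline), key=lambda t: -t[1])]
        for event in [e for e in ranked if e not in benign][:k]:
            if analyst(event):            # human-in-the-loop judgement
                confirmed.append(event)
            else:
                benign.add(event)         # fewer false positives next round
    return confirmed, benign

# Toy traffic: request counts per source; the 'analyst' flags anything >= 100.
baseline = [10, 11, 9, 10, 12]
days = [[10, 200, 55], [55, 10, 300], [10, 55, 9]]
confirmed, benign = feedback_loop(days, baseline, analyst=lambda e: e >= 100)
```

In the toy run, the two genuine spikes are confirmed, while the moderately unusual value 55 is cleared on day one and never bothers the analyst again – the false-positive reduction the researchers describe, in miniature.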

Like any learning system, AI² should only get better as it sees more data. This seems like a definite tick in the ability box for AI – anything that helps combat cyber attacks is a positive step forward.

The AI that solved poker

Echoing one of the first examples of AI programming, another system has been successfully created to beat humans at a game.

This time it’s heads-up Limit Hold’em, a variant of Texas Hold’em which limits players’ betting. Called Cepheus, this new poker bot was created by a research team at the University of Alberta.

In reality, Cepheus has only ‘weakly solved’ the game. In other words, a perfect counter-strategy could still win at most 0.000986 big blinds per game from Cepheus in expectation. To go from weakly solving the game to solving it exactly, that margin – its exploitability – would need to fall from 0.000986 to zero big blinds per game.

Even so, at 0.000986 big blinds per game in expectation, Cepheus is effectively unbeatable in the long run at Limit Hold’em – the edge is far too small for any human to exploit within a lifetime of play. Note, though, that Cepheus can only win when playing against one opponent at a time.
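To put that exploitability figure in perspective, a back-of-the-envelope calculation helps (the hand volume below is an assumption purely for illustration):

```python
# Exploitability of Cepheus quoted above: at most this many big blinds per game
# can be won from it, in expectation, by a perfect counter-strategy.
EXPLOITABILITY = 0.000986

# Assume a very dedicated opponent: 1,000 hands a day, every day, for a year.
hands_per_year = 1_000 * 365
max_expected_profit = EXPLOITABILITY * hands_per_year

print(f"At most {max_expected_profit:.0f} big blinds over {hands_per_year:,} hands")
# Per-hand swings in limit hold'em are on the order of whole big blinds, so an
# expected edge of ~360 big blinds over 365,000 hands disappears into the noise.
```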

But how was this piece of software created? The team used adaptive software fed with every possible situation in a poker game – roughly 3.16 × 10^17 game states in heads-up play. Cepheus then played against itself to work towards its goal of beating heads-up Limit Hold’em. It took 70 days and 200 processors, and generated an 11-terabyte database – but they did it.

Poker players know that sometimes there is no single right choice. Game theory may say that, in a given spot, you should call 70% of the time and fold the other 30%. To play such mixed strategies, Cepheus uses a random number generator for certain situations – making it as unpredictable as a human player.
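The randomisation itself is easy to sketch. The 70/30 split below is the illustrative frequency from the text, not Cepheus’s actual computed strategy:

```python
import random

def mixed_action(strategy, rng=random.random):
    """Sample one action from a mixed strategy such as {'call': 0.7, 'fold': 0.3}."""
    r, cumulative = rng(), 0.0
    for action, probability in strategy.items():
        cumulative += probability
        if r < cumulative:
            return action
    return action  # guard against floating-point round-off at the boundary

random.seed(1)                                 # deterministic for the demo
strategy = {"call": 0.7, "fold": 0.3}
counts = {"call": 0, "fold": 0}
for _ in range(10_000):
    counts[mixed_action(strategy)] += 1
# Over many hands the observed frequencies approach 70/30, but any single
# decision is unpredictable - exactly what makes the bot hard to read.
```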

The poker world understandably took note of this new bit of AI threatening its beloved game, but most players reveled in the new piece of tech. One satirical article even suggested that the popular online poker room PokerStars had bought the bot to add to its roster of pro players. While the article was made up, it was a sign of respect for the capabilities of AI from the online poker community.

Practically, the lessons from Cepheus could be applied in areas such as medicine and security, which likewise demand good decisions under uncertainty. So while there are clear limitations to its actual poker-playing scope, future practical applications look promising.

The AI that wins at Go

Another example of an AI created to beat a game is AlphaGo, Google’s Go bot.

Invented in China 2,500 years ago, Go has been cited as the most complex game ever created by humans. Simple to learn but hard to master, it is played on a 19-by-19 grid where black and white stones are alternately placed on intersection points. The huge number of possible positions has made it a challenging game for computers to calculate.

However, Google has done it. Developed by Google DeepMind, AlphaGo has two neural networks, each with millions of interconnected nodes whose connection strengths change as the computer learns. It loosely mimics the human brain.


Unlike the chess programs that came before it, and unlike Cepheus, AlphaGo doesn’t simply enumerate every possible move in a game of Go and use raw computing power to work out the best possible outcome. Instead, it has managed to bottle something the best Go players have: intuition.

The team at Google DeepMind started by taking 150,000 games played by strong human Go players and using an artificial neural network to find patterns in them. Once this was achieved, they made AlphaGo repeatedly play against earlier versions of itself, producing a policy network that could play a good game.
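That first, supervised stage – learn to predict an expert’s move from a board position – can be sketched with a toy linear ‘policy’ trained by cross-entropy. Everything here is a stand-in: a made-up 3×3 board and synthetic ‘expert’ moves rather than real Go data, and a single linear layer rather than a deep network:

```python
import math, random

random.seed(0)
N = 9   # toy 3x3 board encoding; the real networks consume 19x19 feature planes

# One weight row per candidate move: a linear stand-in for the policy network.
W = [[0.0] * N for _ in range(N)]

def policy(x):
    """Softmax over move logits: predicted probability of each expert move."""
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train_step(x, move, lr=0.1):
    """Cross-entropy gradient step: nudge probability toward the expert's move."""
    p = policy(x)
    for j in range(N):
        err = p[j] - (1.0 if j == move else 0.0)
        for i in range(N):
            W[j][i] -= lr * err * x[i]

# Synthetic 'expert games': this expert always plays the first empty point.
games = []
for _ in range(200):
    board = [random.choice([0.0, 1.0]) for _ in range(N)]
    games.append((board, board.index(0.0) if 0.0 in board else 0))

def avg_loss():
    return sum(-math.log(policy(x)[m] + 1e-12) for x, m in games) / len(games)

before = avg_loss()          # starts at the uniform-guess loss, ln(9)
for _ in range(20):
    for x, m in games:
        train_step(x, m)
after = avg_loss()           # lower: the policy has absorbed the expert's habit
```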

After this first stage, the developers still had to instill the intuition a human Go player has. They did this by having AlphaGo play the policy network against itself, which let the program build a good estimate of the winning chances of any board position.

That win probability gave AlphaGo a valuation of the position. Once this had been established, AlphaGo could combine the valuation approach with a search through possible lines of play, focusing the search on moves the policy network considered likely, and eventually picking the move that led to the strongest board valuation.
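That combination – a policy that narrows the search and a valuation that ranks the survivors – can be sketched with stand-in functions. This is a one-step caricature (AlphaGo actually runs a Monte Carlo tree search over many moves), and the toy ‘game’ of steering a number towards a target is purely illustrative:

```python
def apply_move(state, move):
    return state + move   # toy game: a position is just a number

def policy_prior(state, move):
    """Stand-in for the policy network: prefers small, 'natural-looking' moves."""
    return 1.0 / (1 + abs(move))

def value(state):
    """Stand-in for the value network: estimated strength of a position,
    here simply closeness to a target of 10."""
    return -abs(10 - state)

def best_move(state, moves, top_n=2):
    """Policy narrows, value decides: keep only the top_n moves the prior
    favours, then pick the one leading to the best-valued position."""
    candidates = sorted(moves, key=lambda m: -policy_prior(state, m))[:top_n]
    return max(candidates, key=lambda m: value(apply_move(state, m)))

chosen = best_move(7, [1, 2, 9])   # prior keeps 1 and 2; value prefers 7+2=9
```

The design point is the division of labour: the prior prunes the tree so the expensive valuation is only spent on plausible moves, which is what makes searching a game as wide as Go tractable.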

Essentially, AlphaGo has created a valuation system that mimics the intuition of a good human Go player. This is a big deal, and a huge advancement in AI.

Of course, being good at Go isn’t top of the priority list for humanity’s needs, but building AI technology that is getting closer to human intuition certainly is. It’s a skill that computers have not mastered before and, practically speaking, it opens up the range of problems we can now use computers to solve.
