The rise and rise of intelligent machines

2016 has been the year artificial intelligence transitioned from sci-fi high concept to everyday reality.

AI, and the branch of the discipline known as deep learning, crashed into the public consciousness on March 9, when Google DeepMind’s AlphaGo program beat South Korean champion Lee Se-dol at the ancient Chinese game of Go.

Since then, barely a week has gone by without a fresh breakthrough – or scare story.

True AI implies a change in the relationship between machine and human – from predictable task worker reliant on our instructions, to a system which has the capacity to surprise and surpass us.

For many, the loss of control inherent in this change is alarming. HAL and Skynet are suddenly too close for comfort.


It’s easy to get caught up in the frenzy of speculation. But, for business decision makers, it’s vital to cut through the hype and address the opportunities this technology presents. AI won’t be an industry – it will be part of every industry.

Market research firm Tractica projects that the market for AI systems in enterprise applications will reach $11.1 billion by 2024.

The vocabulary of AI

Grasping the principles of this complex technology can seem daunting. However, a glimpse into the history of AI and an understanding of its core terminology can be extremely helpful in establishing a foundation from which to evaluate the technology with confidence.

Three terms tend to be used interchangeably: artificial intelligence, machine learning and deep learning. Their relationship is a bit like Russian dolls. AI is the overarching idea, within which machine learning and deep learning fit.

AI has been a well-studied subject since 1950, when Alan Turing first speculated that machines could one day think like humans. After decades in the doldrums, AI has recently exploded in a boom that has unleashed applications used by hundreds of millions of people every day.

The entertainment industry tends to fuel most people’s imaginations when they think of intelligent machines. In reality, what’s possible today is known as ‘narrow AI’, as opposed to the ‘general AI’ displayed by C-3PO and the Terminator.

Narrow AI encompasses technologies that can perform specific tasks, such as image classification or speech recognition, as well as or better than humans.

This human-like intelligence brings us to machine learning. At a basic level, machine learning involves running large amounts of data through algorithms that learn from it and then make a determination or prediction about something.

Rather than hand-coding computers to solve problems, scientists theorised that computers could be trained to find their own solutions. Great in principle, but less effective in practice – until deep learning and graphics processing units (GPUs) came along.
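To make that loop concrete, here is a minimal sketch of “run data through an algorithm, learn from it, then predict”, assuming Python with scikit-learn; the iris dataset and the decision-tree algorithm are illustrative choices, not techniques named in this article.

```python
# Minimal "learn from data, then predict" sketch (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                 # labelled example data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)                       # the machine "learns" from the examples
print("accuracy:", model.score(X_test, y_test))   # then predicts on data it has never seen
```

The pattern is the same whatever the algorithm: fit on known examples, then predict on new data – no hand-coded rules required.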

Putting the ‘deep’ in ‘deep learning’

Taking the human brain as the model for developing artificial neural networks is yet another concept that has been studied for decades. Such was the level of computational horsepower required that the approach was all but written off.

But, thanks to a perfect storm of research breakthroughs, computer hardware and the availability of big data, deep learning has emerged as the big bang of modern AI.

Deep learning is a fundamentally new software model in which billions of software neurons and trillions of connections are trained in parallel.
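As a toy-scale illustration of that model – a handful of neurons rather than billions – the sketch below trains a tiny network whose connection weights are adjusted from example data rather than programmed by hand; PyTorch is assumed purely for illustration, as the article names no framework.

```python
# Toy neural network: learn XOR from examples instead of hand-coded rules.
import torch
import torch.nn as nn

# 2 inputs -> 16 hidden "neurons" -> 1 output; every arrow is a learned connection (weight)
net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

optimiser = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimiser.zero_grad()
    loss = loss_fn(net(X), y)   # how wrong are the current predictions?
    loss.backward()             # compute gradients for every connection
    optimiser.step()            # nudge the weights to reduce the error

print(net(X).detach().round())  # roughly [[0], [1], [1], [0]] once trained
```

Scale the same idea up to millions of neurons and vast training sets and the arithmetic becomes enormous – which is where parallel hardware comes in.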

Incongruously, a piece of silicon originally designed to run 3D games has emerged as the ideal processor to accelerate deep learning out of the realm of theory and into commercial deployment.

GPUs, like artificial neural networks and the human brains on which they’re modelled, process information in parallel, handling multiple tasks simultaneously.

That’s why GPUs can now be found accelerating deep learning-based applications ranging from movie recommendations and fraud detection to cancer detection and self-driving cars.
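As a rough sketch of what that offloading looks like in practice, the snippet below runs a large matrix multiplication – the core arithmetic of neural network training – on a GPU when one is available; PyTorch and a CUDA-capable device are assumptions for illustration only.

```python
# The same parallel arithmetic runs on CPU or GPU; the GPU just does it massively in parallel.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b   # one large matrix multiplication, executed across thousands of GPU cores

print(c.shape, "computed on", device)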

From textbook to factory floor

GPU-accelerated deep learning has democratised AI, and the technology is already finding its way into deployment across a vast range of industries.

Given that 90% of all the data that exists today was created in the last two years, deep learning’s capacity to turn the ‘black box’ of big data into solutions that will transform business is invaluable.


For example, we’re already seeing companies using AI to customise the way consumers interact with, procure and receive services from vendors.

Companies like Amazon and Netflix already suggest products and content that fit our preferences – a technique that uses deep learning to analyse not only our own purchasing and browsing history, but that of thousands of other consumers, to deliver uncannily accurate results.

In warehouses and manufacturing plants, AI will also be revolutionary. Industrial robots that can learn new processes, rather than require costly modification or replacement, will bring huge gains in effectiveness and flexibility to production lines.

There’s some exciting work being done in this ‘future factories’ field by companies like French start-up Akeoplus. And in warehouses, we’ve already seen online retailing giant Zalando achieve impressive improvements in its systems by implementing deep learning to calculate the most efficient picking routes. No wonder Gartner has identified deep neural networks as one of its key technology trends for 2016.

Sourced from Jaap Zuiderveld, VP of sales and marketing, NVIDIA
