With or without human-level intelligence — AI has finally come of age

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten” — Bill Gates.

Introduction

When I started working in AI, some 30 years ago, it was virtually unknown – very few people had heard of it. Nowadays, I rarely meet anyone who has not heard of AI. Hardly a week passes without major announcements on AI technology, such as that used in driverless cars, robotics, and other innovative applications. Yet AI is hardly new. Indeed, the term was first coined at the Dartmouth Conference in 1956, over 60 years ago.


The hyped legacy of AI

The power of AI applications rose only modestly during the early years, as the focus shifted among AI software tools and techniques, accompanied by commensurate improvements in hardware in line with Moore's Law.

The AI paradigm that predominated for many years has come to be known as good old-fashioned AI (GOFAI). This approach tried to mimic thinking through symbolic reasoning – techniques that manipulate symbols and logical patterns in a way akin to human reasoning. The very early years of AI were dominated by general problem-solving techniques rooted in mathematics. These recorded early successes in theorem proving and checkers, which led to much euphoria. As a consequence, some hyped-up claims and exaggerated forecasts were made by some of the founding fathers of AI. For example, in 1961, Marvin Minsky wrote, “within our lifetime machines may surpass us in general intelligence”. Moreover, John McCarthy, who founded the Stanford AI project in 1963, stated that its goal was to “build a fully intelligent machine in a decade”.

These predictions, like others made at that time, did not come to pass. The general problem-solving approach soon ran out of steam because its generality made it impractical to implement. Hence, another phase of symbolic AI began in the 1980s, with a big rise in activity around knowledge-based systems, or expert systems. These were software programs that stored domain-specific human knowledge and attempted to reason with it in the same way a human expert would. They were used quite extensively, and some were quite successful and are still in use today. But their major drawback was an inability to learn adequately, a shortcoming later partly addressed by machine learning.


Machine learning and deep neural networks

The term machine learning refers to methods that enable a machine to learn without being explicitly programmed. This concept, like AI, is by no means new – the term was first coined in 1959. Machine learning is very important because, without the capacity to learn, improvements in AI are severely restricted: humans continually acquire new skills and knowledge, and improve with practice. Learning is thus inextricably linked with intelligent behaviour, and machine learning is therefore a branch of AI. Research activity in this field began during the 1960s, and a variety of machine learning techniques are now in use.

Connectionism and ANNs

However, one paradigm has now come to dominate: connectionism, or artificial neural networks (ANNs), which attempt to mimic the way biological neurons in the brain work. In essence, they are trained (that is, they learn) on historical data and can then be used to predict outcomes on new data. Unlike symbolic AI, they make no explicit use of human-programmed symbols in problem solving; instead, they figure things out for themselves by adjusting the numeric weights of their neurons as they learn.
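
To make “adjusting numeric weights” concrete, below is a minimal illustrative sketch in Python of a single artificial neuron learning the logical OR function with the classic perceptron update rule. The training data, learning rate and number of passes are arbitrary choices for demonstration only, not details of any system discussed in this article.

    # A single artificial neuron (perceptron) learning logical OR by
    # nudging its weights whenever it predicts the wrong output.

    def step(x):
        return 1 if x >= 0 else 0

    # Training examples: ((input1, input2), target output) for logical OR
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    weights = [0.0, 0.0]   # one numeric weight per input
    bias = 0.0
    learning_rate = 0.1

    for epoch in range(20):                    # several passes over the data
        for (x1, x2), target in data:
            prediction = step(weights[0] * x1 + weights[1] * x2 + bias)
            error = target - prediction        # the error drives the weight update
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error

    # After training, the neuron reproduces OR on all four inputs
    for (x1, x2), target in data:
        print((x1, x2), step(weights[0] * x1 + weights[1] * x2 + bias), target)

After a few passes the weights settle on values that separate the two classes of input, which is all that “learning” means for a single neuron of this kind.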

ANNs are not new either. They were first conceived by McCulloch and Pitts, who proposed a model of an artificial neuron in 1943, a forerunner of the perceptron. However, they were largely ignored until the 1980s, when a resurgence of ANN activity began. The early applications were based on perceptron software that used a single layer of neurons, but better results were achieved with multilayer, or deep, neural networks. The newer machine learning techniques use ANNs with hidden layers that learn to identify particular features.
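
As an illustration of why hidden layers matter, the sketch below shows a two-layer network computing the XOR function, something a single-layer perceptron famously cannot represent. The weights here are hand-set for clarity rather than learned.

    # XOR with one hidden layer of two neurons. The weights and thresholds are
    # hand-set for clarity; in a real ANN they would be learned from data.

    def step(x):
        return 1 if x >= 0 else 0

    def xor_network(a, b):
        h_or = step(a + b - 0.5)          # hidden neuron 1 behaves like OR
        h_nand = step(1.5 - a - b)        # hidden neuron 2 behaves like NAND
        return step(h_or + h_nand - 1.5)  # output neuron ANDs the two together

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((a, b), xor_network(a, b))  # prints 0, 1, 1, 0

Each hidden neuron detects an intermediate feature of the input, and the output neuron combines those features, which is, in miniature, what the hidden layers of a deep network do.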

Over time, this technology improved and became successfully established in business, especially retailing, during the 1990s through knowledge discovery and data mining. Its use was made possible by a combination of hardware capable of processing large amounts of data, statistical techniques, and the gradual emergence of the Internet and the Web. Much of the data then used came from internal sources, such as company databases and retail sales records. Analysis of this data would typically reveal trends and patterns, enabling organisations to improve their decision making.

Nowadays, however, many ANNs have access to enormous amounts of training data from the Web, through a plethora of sources such as retailing and social networking websites. The data itself can come from heterogeneous sources on the Web and can take the form of text, charts, photos, videos, sound files, and so on. This is called Big Data. Again, the term is not new, but its impact cannot be overstated. When Google’s Chief Scientist Peter Norvig was asked at Google’s Zeitgeist in 2011 what the secret to Google’s success was, he replied: “We don’t have better algorithms than anyone else; we just have more data”.


The noughties saw the rise of social networking sites such as Twitter and Facebook, and with them millions of images flying around the Web daily. To analyse these images, their contents needed to be understood, but they were often unlabelled. The need to understand and recognise image content led to the emergence of a website called ImageNet, which when launched in 2009 contained a database of 14 million images. ANN applications could train themselves on ImageNet’s images, but recognition was still prone to errors – so much so that the website’s founder introduced a competition, the ImageNet Challenge, to encourage research into computer algorithms that could identify objects in the dataset’s images with fewer identification errors. In 2012, a researcher named Alex Krizhevsky, then at the University of Toronto, achieved outstanding results in this competition, beating all others by more than 10%. His winning network has come to be known as AlexNet. This paved the way for spectacular progress in the age of what is now called ‘deep learning’.

Challenges

Despite the phenomenal success of deep learning AI, some experts question whether this paradigm is sufficient for human-level intelligence. For example, according to Francois Chollet, a prominent deep learning researcher, “You cannot achieve general intelligence simply by scaling up today’s deep learning techniques”.
There are also other challenges for this technology. One shortcoming of ANNs is that they are woefully inadequate at explaining, and making transparent, the reasoning behind their decisions: they are black-box architectures. This is particularly problematic in applications such as healthcare diagnostic systems, where practitioners need to understand a system’s decision-making process.

For this reason, DARPA (the Defense Advanced Research Projects Agency), the division of the American Defense Department that investigates new technologies, launched a funded research program in August 2016 called Explainable Artificial Intelligence (XAI). Its purpose is to fund projects that create tools enabling users of an AI program to understand the reasoning behind its decisions. Several of these projects are now underway and will be completed within four years.


Conclusions

Given the short history of AI, some people are now posing the question: is this another period of hype? I think not. Unlike earlier periods of AI, the commercial benefits of deep learning are now everywhere to be seen. Investment and the number of start-ups are growing rapidly. The applications span robotics, medicine, education, finance, autonomous vehicles, and almost every conceivable area of activity. Furthermore, many of these AI applications perform as well as, and in some cases better than, humans.

These include games such as chess and Go, healthcare apps such as Babylon’s GP at Hand service used in parts of the NHS in the UK, medical chatbots, computer vision, object recognition, speech-to-text conversion, speaker recognition, and much improved robots, to name but a few. These applications will continue to improve with better machine learning algorithms.

Many in the AI community have posed the question: when, if ever, will machines acquire human-level intelligence? Whatever the answer, few can now question the impact that AI is having on our lives, and will continue to have in the future. AI has come of age.

Written by Keith Darlington, retired lecturer in artificial intelligence
