A history of AI; key moments in the story of AI

AI is the word of the moment, or is that the acronym of the moment? But where did the idea come from? We track key moments, discoveries and ideas in the history of AI.

The history of AI

BC: Talos

Talos: was it a Greek myth equivalent of robotics or AI?

The history of AI begins with a myth, and, just as with many modern AI systems, it concerned defence. According to Greek mythology, Talos was a giant automaton made of bronze, created by the god Hephaestus to guard the island of Crete by throwing stones at passing ships.

1769: The Turk

The first chess-playing AI system turned out to be a lie

Just as many experts in AI today accuse companies of claiming to have AI when in fact they don’t, the second well-known example of AI was a lie. The Turk was supposedly a mechanical device for playing chess, created by Wolfgang von Kempelen in 1769 to impress Empress Maria Theresa of Austria. The Turk even played against Napoleon Bonaparte and won. But in fact it was a hoax: a human chess master sat inside the machine, controlling it.

1830s: Babbage computer

If not the history of AI, then at least the history of computers began with Charles Babbage

In the 1830s, Charles Babbage created a design for the Analytical Engine. A working machine was finally built 153 years later. It wasn’t a design for an AI machine, of course, but today it is considered to be the forerunner of the digital computer.

1943: McCulloch and Pitts produce a model describing the neuron

An understanding of a neuron opened the way for the idea of neural networks
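The McCulloch and Pitts model treats the neuron as a simple threshold unit: it fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch in Python (an illustrative reconstruction, not the notation of the 1943 paper):

```python
# A minimal McCulloch-Pitts style threshold neuron: output 1 ("fire")
# when the weighted sum of binary inputs meets the threshold.

def mcp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, one unit computes basic logic gates:
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

Because single units can act as logic gates, networks of them can compute any logical function, which is what opened the way for the idea of neural networks.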

1945: The history of AI begins with an essay

This essay may have marked the true beginning of the history of AI

In this year, Vannevar Bush wrote an essay, published in The Atlantic, entitled ‘As We May Think’. The idea was for a machine that held a kind of collective memory and provided knowledge. Bush believed that the big hope for humanity lay in supporting greater knowledge, rather than information.

1947: The transistor

It looks different today, but without the invention of the transistor, the history of AI would have been stuck on square one

John Bardeen and Walter Brattain, with support from colleague William Shockley,
demonstrate the transistor at Bell Laboratories in Murray Hill, New Jersey.

1950: The Turing Test

A blue plaque on Warrington Crescent in the Maida Vale area of London, marking the birthplace of famous codebreaker Alan Turing.

Alan Turing proposes a test for ascertaining whether a computer can think like a human. That year he published a paper: ‘Computing Machinery and Intelligence’. The test was based on the imitation game, in which a judge questions a human and a computer in different rooms, communicating by keyboard and monitor. The judge has to identify which of the two is human. This has become known as ‘The Turing Test’.

1951: Marvin Minsky makes his first mark of many

Marvin Minsky is to the history of AI what 1066 or Elizabeth I is to English history

Marvin Minsky builds the first randomly wired neural network learning machine. Later in life, Minsky, whose contributions to AI are legion, said: “No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.”

1956: the phrase artificial intelligence is first used

It was at Dartmouth College that the phrase artificial intelligence was first used

1956: John McCarthy coined the phrase artificial intelligence at a conference held at Dartmouth College, which he organised. Also at the conference were Marvin Minsky, Nathaniel Rochester, and Claude Shannon.

1958: The book was written

It wasn’t quite JK Rowling, but the book is to AI what Hogwarts was to Harry Potter

John von Neumann’s book, The Computer and the Brain, was published.

1959: Machine Learning

Arthur Samuel, image courtesy of Stanford University 

Arthur Samuel coined the phrase machine learning.

1965: Moore’s Law

Gordon Moore

Gordon Moore, who went on to be a co-founder of Intel, says that the number of transistors on a silicon circuit doubles every two years. Moore’s Law evolved out of this, suggesting computers double in speed every 18 months.
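The two doubling rates compound quickly; a back-of-the-envelope sketch of what they imply over a decade (illustrative arithmetic only, not figures from the article):

```python
# Compound growth under a fixed doubling period: after `months` months,
# something that doubles every `period` months has grown 2**(months/period)x.

def doublings(months, period):
    """Growth factor after `months`, doubling once every `period` months."""
    return 2 ** (months / period)

# Over a decade (120 months):
transistor_growth = doublings(120, 24)  # doubling every 2 years -> 2**5 = 32x
speed_growth = doublings(120, 18)       # doubling every 18 months -> ~102x

print(f"Transistors: {transistor_growth:.0f}x, speed: {speed_growth:.0f}x")
```

The gap between 32x and roughly 102x over the same ten years shows how sensitive exponential growth is to the assumed doubling period.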

1968: Hal

Hal: “I am sorry Dave, I am afraid I can’t do that.”

1968: Stanley Kubrick releases 2001: A Space Odyssey, based on the novel by Arthur C Clarke. The movie is perhaps most famous for the scenes involving HAL, an apparently self-aware computer.

1975: The history of AI takes a turn to neural networks

A neural network

The first multi-layered unsupervised computer network marked the beginning of neural networks.
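What "multi-layered" buys can be sketched with a toy forward pass: stacked layers of weighted sums, each passed through a nonlinearity. The weights below are hypothetical, chosen by hand to compute XOR, a function famously beyond any single-layer network (this is an illustration of the idea, not the 1975 system):

```python
# A toy forward pass through a two-layer network with hand-picked weights.
# Each layer computes weighted sums plus a bias, then a step nonlinearity.

def step(x):
    return 1 if x >= 0 else 0

def layer(inputs, weights, biases):
    """One layer: weighted sum plus bias per neuron, then the nonlinearity."""
    return [step(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    hidden = layer(x, [[1, 1], [-1, -1]], [-1.5, 0.5])  # two hidden neurons
    return layer(hidden, [[-1, -1]], [0.5])[0]          # one output neuron

# These weights compute XOR, impossible for a single-layer network:
print([forward([a, b]) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The hidden layer lets the network combine intermediate features, which is exactly the capability a single layer lacks.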

1982: Fifth Generation neural networks concept

With the Fifth Generation neural networks concept, US researchers feared Japan was going to leave the US behind in the AI race

At a US-Japan conference on Neural Networks, Japan announced the Fifth Generation neural networks concept.

1984: “winter is coming” for AI, or so it was warned

Fears of an AI winter may, to an extent, have been realised: the history of AI did seem to go through something of a hiatus

At the American Association for Artificial Intelligence, Roger Schank and Marvin Minsky warned that AI had become over-hyped and that this would lead to disappointment and an AI winter.

1984: The Terminator

He said he would come back, and he did

James Cameron’s movie The Terminator is released; maybe it did for the public perception of AI what Jaws did for sharks.

1997: Chess victory to the computer

“The computer that beat me was no more intelligent than an alarm clock”

IBM’s Deep Blue defeated Garry Kasparov at chess. Today, Kasparov says the “computer that defeated me at chess is no more intelligent than an alarm clock.” The ability to defeat a human ceased to be a criterion for defining AI.

2009: The big bang

Graphics processing units were developed for video games, but they seemed ideal for neural networks

The big bang of neural networks: Nvidia, a hardware company that originally specialised in technology for video games, applied its GPUs to training neural networks. At around that time, Andrew Ng worked out that using GPUs in a neural network could increase the speed of deep learning algorithms 1,000-fold.

2010: DeepMind began

DeepMind’s contribution to AI is immense, but it is most famous for AlphaGo

DeepMind is founded by Demis Hassabis, Shane Legg and Mustafa Suleyman.

2011: IBM strikes back

Watson wins at Jeopardy!

IBM Watson wins Jeopardy!, defeating legendary champions Brad Rutter and Ken Jennings.

2013: Natural Language Processing

Time to talk about NLP

Neural networks are adapted for natural language processing.

2015 to 2017: the rise of AlphaGo Zero

AlphaGo does it

In 2015, DeepMind’s AlphaGo defeated the European champion of the Chinese game of Go. In 2017, within three days of being switched on, AlphaGo Zero could defeat the previous version at Go with just the rules of the game as a guide; within 40 days it arguably became the greatest Go player ever, rediscovering lost strategies.

Some predictions, courtesy of ‘When Will AI Exceed Human Performance? Evidence from AI Experts’:

When will AI be able to do everything better than humans? You are about to find out

2027: Projected date for automated truck drivers.

2049: Projected date for AI writing a best seller.

2054: Projected date for robot surgeons.

2060: Average projected date for when AI can do everything that humans can do.

