
Is artificial general intelligence possible? If so, when?

Is artificial general intelligence, or AGI, possible? There are plenty of cynics, but some credible believers, too. If AGI is possible, when will it arrive, and what exactly is it?

Richard Tang, chairman and founder of the UK ISP Zen, is one such believer. He gave a talk at Linx105 in which he looked at the possibility, or should we say the danger, of AI becoming super intelligent: a kind of super artificial general intelligence.

Richard Tang is also speaking at the upcoming Tech Leaders Summit, on 12th September in London, which might be a good opportunity to quiz him further.

“We are one, two or maybe three leaps in AI research away from being able to say: ‘you know what, that machine is thinking,’” he told Information Age.

Elon Musk famously has a similar view, of course. But before we get to that, let’s ask what artificial general intelligence actually is. Then, after taking a detour via JRR Tolkien’s Treebeard, we can consider some pretty mind-blowing stuff.

What is artificial general intelligence?

In some ways, AI is like Winnie-the-Pooh. A.A. Milne’s famous creation was described as ‘a bear of very little brain.’

Computers, even state-of-the-art neural networks with millions of artificial neurons, are like dots compared to the human brain, with its 100 billion or so neurons (no one has actually counted them) forming a trillion or more synapses, each of which can vary in strength, giving an almost limitless number of permutations.

For that reason, AI is limited. It can be trained on a specific task, like telling cats from dogs, but it has no wider perspective. This is why, when researchers showed an AI engine various images, such as skateboarders, it did a good job of recognising them for what they were; but when shown an image of goats up a tree, it described them as birds in a tree.

AI, then, can be very powerful at very specific tasks: training an algorithm on a dataset in a specialist area, for example. That is what we call narrow AI, and narrow AI is where we are today; it is the state of the art.
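To make the ‘narrow’ point concrete, here is a minimal sketch of a single-task learner. It uses scikit-learn’s bundled digits dataset purely as a stand-in for the cats-versus-dogs example, and is illustrative rather than anything the researchers above actually ran.

```python
# A toy illustration of "narrow AI": a classifier that does one task well
# and knows nothing outside it. The digits dataset stands in for the
# cats-versus-dogs example in the text.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print("Accuracy on the one task it was trained for:",
      accuracy_score(y_test, model.predict(X_test)))
# Show this model anything outside its training distribution (a goat up a
# tree, say) and it can only ever answer with one of its ten digit labels.
```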

But it might not be like that for much longer.

Take the views of Sir John Turing, nephew of Alan Turing, a member of the European Post-Trade Forum, and a Trustee of the Turing Trust. Speaking at a conference last year, he said: “What AI is all about is simple self-contained things like facial recognition, self-driving cars, voice recognition, algorithms for helping Amazon sell products — self-contained boxes. For as long as AI exists like this, in disparate groups, there is no risk that AI could escape from their boxes, and take over the planet.”


He continued: “I don’t think we are that far away from when someone comes along with a super algorithm that glues together all the bits of thinking and we end up with a super intelligent algorithm.

“It seems to me that we underestimate our own capabilities, someone clever is going to come up with something and we need to be prepared.”

Musk on machines thinking faster than we can speak, and faster than we can think

Elon Musk famously argues that AI poses an existential threat to humanity. In a recent discussion with Alibaba’s founder Jack Ma, he said: “Assuming a benign scenario with AI, we will be just too slow.”


In The Lord of the Rings, the character Treebeard speaks incredibly slowly.

For the two Hobbit characters who first met Treebeard and his kin, the Ents, it was frustrating. They could guess what Treebeard was saying, but had to wait patiently for him to finish a sentence. Maybe you know someone like that. 

“You must understand, young Hobbit, it takes a long time to say anything in Old Entish. And we never say anything unless it is worth taking a long time to say.”

Treebeard: to an artificial intelligence, human speech might sound as slow as Old Entish sounded to the hobbits, but then again, Treebeard was quite wise.


Maybe there is a sort of analogy with AI.

“To a computer that has many exaFLOPs of capability (https://kb.iu.edu/d/apeq), a millisecond is an eternity, and to us, it’s nothing,” said Musk. He added: “Human speech to a computer will sound like very slow internal wheezing, like whale sounds.”
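It is worth doing the back-of-the-envelope arithmetic behind that remark. The sketch below assumes a machine sustaining one exaFLOPS and, purely for illustration, a speaking rate of about 150 words per minute (a figure not from the article).

```python
# Rough arithmetic behind "a millisecond is an eternity" for an exascale machine.
exaflops = 1e18                      # floating-point operations per second
ops_per_millisecond = exaflops * 1e-3
print(f"Operations in one millisecond: {ops_per_millisecond:.0e}")  # ~1e15

# Assumed for illustration only: conversational speech at ~150 words per minute.
seconds_per_word = 60 / 150
ops_per_spoken_word = exaflops * seconds_per_word
print(f"Operations while a human says one word: {ops_per_spoken_word:.0e}")  # ~4e17
```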

Richard Tang looks at various sources of evidence. He cites the example of AlphaGo Zero, the AI system that was able to teach itself the Chinese game of Go just from the rules. Within three days of being switched on, it was able to beat the previous DeepMind AI, AlphaGo Lee. Within forty days it arguably became the greatest Go player ever, rediscovering strategies that had been lost centuries ago. “All this with no human intervention and no historical data.”

There are 10^1023 possible moves in Go.
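AlphaGo Zero’s actual recipe, deep neural networks guided by Monte Carlo tree search, is far beyond a few lines of code, but the core self-play idea, learning from nothing except the rules, can be sketched on a toy game. The snippet below is purely illustrative and is not DeepMind’s method: a tabular learner teaches itself the take-away game Nim by playing against itself, and rediscovers the game’s well-known optimal strategy.

```python
import random
from collections import defaultdict

# A toy self-play learner for Nim: 15 stones, remove 1-3 per turn, and
# whoever takes the last stone wins. The learner is given only the rules
# and improves purely by playing games against itself.
N_STONES = 15
ACTIONS = (1, 2, 3)
ALPHA, EPSILON = 0.5, 0.1
Q = defaultdict(float)  # Q[(stones, action)] = value to the player about to move

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones, greedy=False):
    if not greedy and random.random() < EPSILON:
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])

for _ in range(50_000):
    stones = N_STONES
    while stones > 0:
        action = choose(stones)
        remaining = stones - action
        if remaining == 0:
            target = 1.0  # taking the last stone wins the game
        else:
            # Negamax-style backup: the opponent moves next, and their best
            # outcome from the new position is our worst.
            target = -max(Q[(remaining, b)] for b in legal(remaining))
        Q[(stones, action)] += ALPHA * (target - Q[(stones, action)])
        stones = remaining

# Optimal play in this game is to leave the opponent a multiple of four stones.
for s in range(1, N_STONES + 1):
    print(f"{s:2d} stones -> take {choose(s, greedy=True)}")
```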


Tang cites a recent study, “When Will AI Exceed Human Performance? Evidence from AI Experts,” in which 353 AI researchers across the world were surveyed. On aggregate, the researchers expect that at some point within the next 100 years, AI will be able to do everything a human worker can do, but more efficiently. Some thought that point may arrive within ten years, others not for a hundred. The average projection is 2060. On average, the researchers also believe that a computer will be able to do the work of a surgeon by 2053.

A key part of the narrative of Artificial General Intelligence is Moore’s Law — named after Intel co-founder Gordon Moore, who predicted a doubling in the number of transistors on integrated circuits every two years. Today, Moore’s Law is generally assumed to mean computers doubling in speed every 18 months.

On a Moore’s Law trajectory, within 20 years computers would be roughly 8,000 times faster than they are today, and within 30 years, a million times faster.

And although sceptics might suggest that Moore’s Law, as defined by Gordon Moore, is slowing, other technologies, such as photonics, molecular computing and quantum computing, could see a much faster rate of growth. Take Rose’s Law as an example: named after Geordie Rose, former CEO of D-Wave, it suggests that the number of qubits in a scalable quantum computer could double every year. Some experts predict quantum computers doubling in power every six months; if they are right, within 20 years quantum computers will be a trillion times more powerful than they are today.
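Those multiples follow from simple compound doubling, which is easy to check (the 8,000 figure corresponds to 13 full doublings, roughly two decades at an 18-month cadence).

```python
# Compound-doubling arithmetic behind the growth figures quoted above.
def growth_factor(years, months_per_doubling):
    doublings = years * 12 / months_per_doubling
    return 2 ** doublings

# The common 18-month reading of Moore's Law:
print(f"20 years: {growth_factor(20, 18):,.0f}x")  # ~10,000x (13 full doublings give ~8,192x)
print(f"30 years: {growth_factor(30, 18):,.0f}x")  # ~1,000,000x
# The optimistic six-month doubling cited for quantum computing:
print(f"20 years: {growth_factor(20, 6):,.0f}x")   # ~1.1 trillion x
```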


Tang says the jury is out on how useful quantum computers will be in advancing AI, but he agreed when we suggested that the real fireworks may come from quantum computers working in combination with more traditional computers.

Will artificial general intelligence ever be a thing? It is surely a matter of time.

Will this throw up machines more intelligent than us? Tang says that some people question how a machine could ever be more intelligent than humans.


Musk said: “The most important mistake people make…is thinking they are smart.”

What we can say is that humanity’s intelligence and indeed consciousness, which is not unique to humans, evolved.

When people ask whether artificial general intelligence could throw up a machine that thinks like people, they are surely asking the wrong question.

It is more relevant to ask whether a machine could think like the simplest organism that has a consciousness, whether that’s your pet dog, a goldfish or an amoeba. If it is conceivable that a machine could have consciousness comparable to the simplest conscious organism, then it is conceivable that artificial general intelligence with consciousness could evolve, just as human consciousness evolved.

Of course, Darwinian evolution took billions of years to throw up humans, but then a meaningful timescale for mutation in biological evolution is millions of years. In a digital environment, a mutation cycle could be measured in milliseconds. That may seem instantaneous to us, but, as Musk said, “to a computer that has many exaFLOPs of capability, a millisecond is an eternity.”
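To get a feel for the gulf in timescales, here is a throwaway experiment, nothing to do with evolving consciousness, just a trivial mutate-and-select loop on bit strings, counting how many ‘generations’ fit into a single second on ordinary hardware.

```python
import random
import time

# A trivial evolutionary loop: mutate-and-select bit strings towards all-ones,
# and count how many generations complete in one second of wall-clock time.
GENOME_LEN, POP_SIZE, MUTATION_RATE = 64, 100, 0.01

def fitness(genome):
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[0] * GENOME_LEN for _ in range(POP_SIZE)]
generations = 0
start = time.time()
while time.time() - start < 1.0:
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring
    generations += 1

best = max(fitness(g) for g in population)
print(f"{generations} generations in one second; best fitness {best}/{GENOME_LEN}")
```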

