Richard Tang, chairman and founder of the UK ISP Zen, is one such person. He gave a talk at Linx105 in which he looked at the possibility, or should we say danger, of AI becoming super intelligent: a kind of super artificial general intelligence.
“We are one or two or maybe three leaps in AI research before we can say: ‘you know what, that machine is thinking,’” he told Information Age.
Famously, Elon Musk has a similar view, of course. But before we get to that, let’s ask: what is artificial general intelligence? Then we can, after taking a detour via JRR Tolkien’s Treebeard, consider some pretty mind-blowing stuff.
What is artificial general intelligence?
In some ways, AI is like Winnie-the-Pooh. The famous creation of A.A. Milne was described as being a ‘bear of very little brain.’
Computers, even state-of-the-art technology such as neural networks with perhaps millions of artificial neurons, are specks compared to the human brain, with its 100 billion or so neurons (no one has actually counted them) forming a trillion or more synapses, with a vast number of permutations in the strength of those synapses.
For that reason, AI is limited. It can be trained on a specific task, such as telling cats from dogs, but it has no wider perspective. This is why, when researchers showed an AI engine various images, such as skateboarders, it did a good job of recognising them for what they were. But when shown an image of goats up a tree, it described them as birds in a tree.
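To make the “narrow” point concrete, here is a minimal sketch. It is our own toy example, not the system from the study: a nearest-neighbour classifier trained only on cats and dogs has no way to say “something else”, so an out-of-distribution input is forced into one of the two classes it knows, much like the goats labelled as birds.

```python
# A deliberately tiny stand-in for a narrow classifier (our toy example, not
# the study's system): 1-nearest-neighbour over made-up 2D features such as
# (ear pointiness, snout length). Trained only on cats and dogs, it MUST
# answer "cat" or "dog"; it has no concept of "something else entirely".

def nearest_label(point, training):
    """Return the label of the training example closest to `point` (squared distance)."""
    closest = min(training, key=lambda ex: sum((p - q) ** 2 for p, q in zip(point, ex[0])))
    return closest[1]

training = [
    ((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"),
    ((0.3, 0.9), "dog"), ((0.2, 0.8), "dog"),
]

print(nearest_label((0.85, 0.25), training))  # "cat": an in-distribution query
print(nearest_label((0.1, 0.1), training))    # "dog": an out-of-distribution input is forced into a known class
```

The failure mode is structural, not a bug: nothing in the model represents the possibility of an unfamiliar category.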
AI, then, can be very powerful at very specific tasks, such as an algorithm trained on a dataset in a specialist area. That is what we call narrow AI, and narrow AI is where we are today; it is the state of the art.
But it might not be like that for much longer.
Take the views of Sir John Turing, nephew of Alan Turing, a member of the European Post-Trade Forum, and a Trustee of the Turing Trust. Speaking at a conference last year, he said: “What AI is all about is simple self-contained things like facial recognition, self-driving cars, voice recognition, algorithms for helping Amazon sell products — self-contained boxes. For as long as AI exists like this, in disparate groups, there is no risk that AI could escape from their boxes, and take over the planet.”
He continued: “I don’t think we are that far away from when someone comes along with a super algorithm that glues together all the bits of thinking and we end up with a super intelligent algorithm.
“It seems to me that we underestimate our own capabilities, someone clever is going to come up with something and we need to be prepared.”
Musk, thinking faster than we can speak, and machines thinking faster than we can think
Elon Musk famously argues that AI poses an existential threat to humanity. In a recent discussion with Alibaba’s founder Jack Ma, he said: “Assuming a benign scenario with AI, we will be just too slow.”
In the book The Lord of the Rings, the character Treebeard spoke incredibly slowly.
For the two Hobbit characters who first met Treebeard and his kin, the Ents, it was frustrating. They could guess what Treebeard was saying, but had to wait patiently for him to finish a sentence. Maybe you know someone like that.
“You must understand, young Hobbit, it takes a long time to say anything in Old Entish. And we never say anything unless it is worth taking a long time to say.”
Maybe there is a sort of analogy with AI.
Richard Tang looks at various sources of evidence. He cites the example of AlphaGo Zero, the AI system that taught itself the ancient Chinese game of Go from nothing more than the rules. Within three days of being switched on, it was able to beat the previous DeepMind AI, AlphaGo Lee. Within forty days it had arguably become the greatest Go player ever, rediscovering strategies that had been lost centuries ago. “All this with no human intervention and no historical data.”
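The principle behind AlphaGo Zero, learning a game purely from its rules by playing against itself, can be echoed at toy scale. The sketch below is an illustrative assumption of ours, not DeepMind's method: it uses tabular Q-learning (rather than deep networks and tree search) and the trivial game of Nim (one pile, take 1 to 3 stones, whoever takes the last stone wins) rather than Go. Given only the rules, self-play rediscovers the classic winning strategy of always leaving the opponent a multiple of four stones.

```python
import random

random.seed(0)
Q = {}  # (pile, move) -> estimated value for the player about to move

def moves(pile):
    """Legal moves in Nim: take 1, 2 or 3 stones, never more than remain."""
    return [m for m in (1, 2, 3) if m <= pile]

def q(pile, move):
    return Q.get((pile, move), 0.0)

def best(pile):
    """Greedy move under the current value table."""
    return max(moves(pile), key=lambda m: q(pile, m))

ALPHA, EPSILON = 0.5, 0.2
for _ in range(20_000):  # self-play episodes from random starting piles
    pile = random.randint(1, 10)
    while pile:
        m = random.choice(moves(pile)) if random.random() < EPSILON else best(pile)
        rest = pile - m
        # Negamax-style target: taking the last stone wins (+1); otherwise the
        # resulting position is worth minus the opponent's best reply.
        target = 1.0 if rest == 0 else -max(q(rest, r) for r in moves(rest))
        Q[(pile, m)] = q(pile, m) + ALPHA * (target - q(pile, m))
        pile = rest

print([best(p) for p in (5, 6, 7)])  # converges to [1, 2, 3]: leave a multiple of 4
```

No game records and no human knowledge go in; the strategy emerges from the rules and self-play alone, which is the point Tang is making, writ very small.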
Tang cites a recent study, “When Will AI Exceed Human Performance? Evidence from AI Experts,” which surveyed 353 AI researchers across the world. The researchers broadly agreed that at some point within the next 100 years, AI will be able to do everything a human worker can do, but more efficiently; some thought that point may come within ten years, others a hundred. The average projection for this point is 2060. The study also found that, on average, researchers believe a computer will be able to do the work of a surgeon by 2053.
A key part of the narrative of Artificial General Intelligence is Moore’s Law — named after Intel co-founder Gordon Moore, who predicted a doubling in the number of transistors on integrated circuits every two years. Today, Moore’s Law is generally assumed to mean computers doubling in speed every 18 months.
At a Moore’s Law trajectory, within 20 years computers would be 8,000 times faster than present, and within 30 years, one million times faster.
And although cynics might suggest that Moore’s Law as defined by Gordon Moore is slowing, other technologies, such as photonics, molecular computing and quantum computing, could grow much faster. The quantum computing world has its own version of Moore’s Law: Rose’s Law, named after Geordie Rose, former CEO of D-Wave, who suggested that the number of qubits in a scalable quantum computer should double every year. Some experts predict quantum computers doubling in power every six months; if they are right, within 20 years quantum computers will be a trillion times more powerful than they are today.
Tang says the jury is out on how useful quantum computers will be in advancing AI, but he agreed when we suggested that the real fireworks may come from quantum computers working in combination with more traditional machines.
Will artificial general intelligence ever be a thing? It is surely a matter of time.
Will this throw up machines more intelligent than us? Tang says that some people question how a machine could ever be more intelligent than humans.
Musk said: “The most important mistake people make…is thinking they are smart.”
What we can say is that humanity’s intelligence and indeed consciousness, which is not unique to humans, evolved.
When people ask, “could artificial general intelligence throw up a machine that thinks like people?”, they are surely asking the wrong question.
It is more relevant to ask whether a machine could think like the simplest organism that has consciousness, whether that is your pet dog, a goldfish or an amoeba. If it is conceivable that a machine could have consciousness comparable to that of the simplest conscious organism, then it is conceivable that artificial general intelligence with consciousness could evolve, just as human consciousness evolved.
Of course, Darwinian evolution took billions of years to throw up humans, but then a meaningful timescale for mutation in evolution is millions of years. In a digital environment, a mutation rate could be measured in milliseconds. That may seem instantaneous to us but, as Musk said, “to a computer that has many ExaFLOPs capability, a millisecond is an eternity.”
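As a back-of-envelope check on that contrast (the figures here are our own illustrative assumptions, not Musk’s: one evolutionary “step” per million years against one digital “step” per millisecond):

```python
# Rough comparison of biological vs digital iteration timescales.
# Both step sizes are illustrative assumptions, not measured quantities.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

bio_step = 1_000_000 * SECONDS_PER_YEAR  # one evolutionary step per million years
digital_step = 1e-3                      # one digital step per millisecond

speedup = bio_step / digital_step
print(f"digital iteration is roughly {speedup:.0e} times faster")  # ~3e16
```

On those assumptions, digital “evolution” would iterate some sixteen orders of magnitude faster than the biological kind, which is the intuition behind the eternity-in-a-millisecond remark.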