Artificial intelligence is a misleading phrase, or so many argue. It is misleading because there is nothing intelligent about it; it conjures up images of a machine looking like Arnold Schwarzenegger on a hunt for Sarah Connor, when in fact the reality is a machine lacking any form of intelligence. Machine learning, by contrast, so the argument goes, is altogether different. Except a new paper argues that machine learning is not real learning either.
In a recently published paper, Selmer Bringsjord, Naveen Sundar Govindarajulu, Shreya Banerjee and John Hummel questioned whether even the phrase machine learning really means what it suggests.
People learn, pet dogs learn, even toads learn, but do machines?
Ever since the time of Euclid (a scholar from the fourth century BC who is often described as the father of geometry), we have known how to test whether a student has learned a mathematical function, argues the paper. The authors give an example: “You, a student, left for high school after breakfast, and upon arriving were reminded in a maths class of the factorial function n!. Later in the day, when home, you inform a parent that you have learned the function in question. But you are promptly asked whether you really did learn it. So, you are tested by your parent, and by some homework questions that align with the function.”
Or to put it another way: since time immemorial, mathematics textbooks have tested whether we have really learned something. But would machines pass such tests?
The authors define whether a function f has been learned by an agent, which they refer to as ‘a’, by applying three tests:

1: a understands the formal definition Df of f;
2: a can produce f(x) for all x ∈ N; and
3: a can supply a proof of the correctness of what is produced in the second test.
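To make the tests concrete, here is a minimal sketch (not from the paper; all names are illustrative) that applies them to the factorial function. The point is that passing finitely many spot checks, which is all that training data provides, is weaker than the three tests demand:

```python
# Illustrative sketch of the paper's three tests, applied to factorial.
# 'Df' and 'candidate' are hypothetical names chosen for this example.

def Df(n):
    """Test 1: the formal definition of f. Here, 0! = 1 and n! = n * (n-1)!."""
    return 1 if n == 0 else n * Df(n - 1)

def candidate(n):
    """An agent's claimed implementation of f (an iterative version)."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Test 2 asks for f(x) for ALL x in N; in practice we can only spot-check
# finitely many cases. A lookup table memorised from examples would pass
# this check too, but it would fail test 3: no proof of correctness
# covering every natural number could be given for a finite table.
for x in range(20):
    assert candidate(x) == Df(x)

print("candidate agrees with Df on all sampled inputs")
```

The gap between the finite loop above and a genuine proof (say, by induction on n) is precisely where, on the authors' account, machine learning falls short.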
They claim that machine learning simply does not pass these tests:
“We cannot allow the field of AI, and specifically its ML (machine learning) subpart, now on the intellectual scene for not more than a blip of time, to trample ordinary language and ordinary meaning that has been firmly in place within the formal sciences for millennia.”
They take as an example G Luger’s ‘Artificial Intelligence: Structures and Strategies for Complex Problem Solving’, and single out a chapter on connectionist learning, saying “there isn’t a scintilla of overlap between what is covered in…(the chapter) and real learning.”
Although their paper focuses on learning in the context of mathematics, it also takes a brief detour into creative writing, an area in which, some say, machines will eventually excel thanks to AI.
“What does it take to learn the ‘functions’ at the heart of creative writing, so that eventually one can take as input the premise for a story and yield as output a good story? We can safely say that any agent capable of doing this must be able to read not formal-scientist Euclid, but, say, Aristophanes, and a line of creative writers who have been excelling since the ancient Greeks; and learn from such exemplars how such a “function” can be computed. But reading and understanding literary prose, and learning thereby, is patently outside the purview of current and foreseeable AI. And it gets worse for anyone who thinks that today’s machine-learning machines learn in such domains: In order to learn to be a creative writer one must generate stories, over and over, and learn from the reaction and analysis thereof, and then generate again, and iterate the process. Such learning, which is real learning in creative writing, isn’t only not happening in machine learning today; it’s also hard to imagine it happening in even machine learning of tomorrow.”