Who would have thought it? If schools and universities are going to help create a generation that is equipped to support the AI revolution, they might be better off teaching philosophy and psychology.
Sport might offer a good analogy. If you are trying to hire talent, you might be better off recruiting staff while they are young, grabbing them from school or university through placements perhaps, an approach Melanie Oldham explains in this piece.
It is an approach that sports clubs know well: football teams with their academies and talent scouts, scouring the playing fields on a Saturday morning. It often proves more effective than getting the cheque book out and buying players after they have emerged.
But for Rinku Singh and Dinesh Patel the route to stardom in baseball was not conventional. They joined the American baseball world after entering a talent contest in India. It was an unorthodox recruitment process made famous by the movie ‘Million Dollar Arm.’ In India, cricket is the sport of choice and a US sports agent speculated that meant an awful lot of bowlers whose skills could be transposed to baseball.
And that takes us to AI. What skills do you require to be good at AI?
It stands to reason that studying computer science, or failing that maths, is a prerequisite.
But maybe what you really need is a certain aptitude: skills that are transferable. When he was a child, Demis Hassabis, one of the founders of DeepMind, was a brilliant chess player, for example.
At a recent conference organised by Forbes, a program lead at IBM brought up the idea of hiring philosophy students.
It is not just about programming. Consider the classic autonomous car dilemma: if there is a child on the road, but avoiding the child means the car must swerve, risking injury to its passengers, what should it do?
Ana Paula Assis, General Manager of IBM Latin America, said: “We decided to build a whole division around responsible AI. The types of skills we’re bringing in are psychologists, philosophers, that type of profile, that can solve really complex philosophical problems and think about what’s the best way to address that.”
Another speaker at the conference, Deanna Mulligan, CEO of Guardian Life Insurance, said that her company is looking at hiring interpreters to explain AI, what it does and its benefits, to the layman.
At Oxford University, the philosopher Nick Bostrom has already made a name for himself communicating many of the big ethical and philosophical challenges that might be posed by AI.
Bostrom once said: “On one estimate, the adult human brain stores about one billion bits—a couple of orders of magnitude less than a low-end smartphone.”
But could a mind matured by studying philosophy make a potential AI wizard? Not just one able to think about ethical questions, but one whose brain is wired that way?
Bostrom also said: “The gap between a dumb and a clever person may appear large from an anthropocentric perspective, yet in a less parochial view the two have nearly indistinguishable minds.”
Well, if that is so, what might the gap be between the ability of a computer scientist’s mind and that of a philosopher’s?
As Oxford University says in its prospectus for a ‘computer science and philosophy degree’: “Artificial intelligence (AI), logic, robotics, virtual reality: fascinating areas where Computer Science and Philosophy meet. There are many others, since the two disciplines share a broad focus on the representation of information and rational inference, embracing common interests in algorithms, cognition, intelligence, language, models, proof and verification.”