The Academy Awards are Sunday night, and Best Picture nominee ‘Her’ has some people wondering if the movie is more science fiction or reality. The story centres on a man who falls in love with an intelligent computer operating system (OS) with a female voice and personality.
While this may seem like science fiction and a long way off, the day when your computer will know you better than any human is not as far off as you may think.
Many of the capabilities of Samantha, the intelligent OS in the movie, are already here, including speech and natural language recognition, and some conversational abilities.
Much of the recent progress is due to advances in machine learning, a branch of artificial intelligence in which the system doesn’t have to be pre-programmed for every eventuality, but instead learns from experience.
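The core idea can be sketched in a few lines of Python. This is purely illustrative (a classic perceptron, not any product’s actual algorithm): the program starts with no rules at all and adjusts its internal weights every time it sees a labelled example, eventually behaving correctly on a task nobody explicitly programmed.

```python
# A tiny online learner: no rules are pre-programmed; the model
# adjusts its weights after every example it observes.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    w = [0.0] * n          # weights start empty -- nothing is hard-coded
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +1/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learning the logical AND function purely from experience:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Real systems use vastly larger models and data, but the principle is the same: behaviour emerges from examples, not from hand-written rules.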
There are already virtual personalities such as Cleverbot that learn from their discussions with humans, with impressive (if sometimes surreal) results, as demonstrated by a widely shared YouTube video of two Cleverbots conversing with each other.
Once the computer can get smarter from new information, there’s nothing to stop it becoming as good as, and eventually better than, a person doing the same task.
We’ve already seen it in tasks as ‘uniquely human’ as grading student essays and figuring out which wine will age the best. Every day there’s a new example of a task that we would have thought only a human could do, except now a machine can do it better.
So what’s to stop an OS from becoming a better companion than most humans? The more it interacts with you, the more it learns about what pleases you and what doesn’t, until it knows you better than you know yourself.
Humour and creativity will be among the more challenging areas for artificial intelligence, but even here researchers are experimenting with clever algorithms and deep learning.
If a computer can learn what makes people laugh – and more importantly what makes you laugh – based on watching and analysing over time, there is no theoretical reason that a computer couldn’t eventually display and respond to humour.
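A hypothetical sketch of what "learning what makes you laugh" might look like at its simplest (all names and data here are invented for illustration): the system records which features appeared in jokes that landed with a particular user, and scores new material against that personal history.

```python
from collections import Counter

class HumourModel:
    """Illustrative only: learn which words tend to appear in jokes
    a particular user laughed at, then score new jokes accordingly."""

    def __init__(self):
        self.laughed = Counter()   # word counts from jokes that landed
        self.flat = Counter()      # word counts from jokes that didn't

    def observe(self, joke, user_laughed):
        words = joke.lower().split()
        (self.laughed if user_laughed else self.flat).update(words)

    def score(self, joke):
        # Higher score: resembles the jokes that landed before.
        return sum(self.laughed[w] - self.flat[w]
                   for w in joke.lower().split())

model = HumourModel()
model.observe("why did the robot cross the road", True)
model.observe("a sad pun about taxes", False)
```

A serious system would model far richer features than words (timing, delivery, context), but the loop is the same: observe reactions, update, predict.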
Similarly with music or art – by experimenting, analysing and learning, it could figure out which compositions create the best emotional resonance in the human brain.
Once an artificially intelligent computer achieves these milestones, we get to the thorny challenge of consciousness and will. If an artificially intelligent computer exhibits its own unique goals and emotions in an appropriate way, how will we ever tell whether it is conscious or not?
Even if our philosophy of life doesn’t allow us to credit an inanimate object with consciousness (although what if the computer or robot was built from live tissue?), it may not matter.
Dutch scientists found that people hesitated in switching off a cute robot cat begging for mercy, and took nearly three times as long when the cat was perceived as intelligent and agreeable.
The way we react to a funny, smart and helpful entity is hardwired into the human brain, so we may have less choice than we imagine about how we relate to our future artificial intelligence companions.
While society grapples with surveillance and identity-theft concerns that threaten to undo some of the latest innovations in human-computer interaction, other natural user interface technologies are coming to the fore.
In particular, interest is building around efforts to improve natural-language speech recognition and, more ambitiously, to ascertain mood or emotion by interfacing with brain waves.
In the movie, Samantha’s input was limited to voice and video, which already provides a wealth of information about a person’s emotional state that goes beyond what other humans might detect. For example, micro expressions that reveal a person’s true feelings last less than a fifth of a second and are not usually noticeable by others, but a computer analysing a video stream could easily spot them.
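The timing argument is easy to make concrete. At a typical 30 frames per second, a fifth of a second is only six frames; a hypothetical sketch (assuming some upstream classifier has already labelled each video frame with an expression) shows how trivially software can flag runs too brief for a human observer to register:

```python
def micro_expressions(frame_labels, fps=30, max_seconds=0.2):
    """Find expression runs shorter than ~1/5 s in a per-frame label
    stream -- too brief for a human observer, trivial for software.
    frame_labels: per-frame labels, e.g. ['neutral', 'anger', ...]."""
    max_frames = int(fps * max_seconds)   # 6 frames at 30 fps
    hits, run_start = [], 0
    for i in range(1, len(frame_labels) + 1):
        # A run ends at the stream's end or when the label changes.
        if i == len(frame_labels) or frame_labels[i] != frame_labels[run_start]:
            length = i - run_start
            label = frame_labels[run_start]
            if label != 'neutral' and length <= max_frames:
                hits.append((label, run_start, length))
            run_start = i
    return hits

# Ten neutral frames, a 4-frame (~0.13 s) flash of anger, then neutral:
stream = ['neutral'] * 10 + ['anger'] * 4 + ['neutral'] * 10
```

The hard part, of course, is the per-frame expression classifier itself; the point is only that once labels exist, sub-second events hide nothing from a machine.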
In a world of cheap sensors and quantified-self aficionados, computers will be able to track a person’s vital signs such as heart rate, blood pressure, temperature and so on, and see how they change based on a person’s activities or sensory stimuli.
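As a minimal sketch of that idea (invented numbers, not any real device’s output): compare each new reading against a rolling baseline of recent ones and flag sharp deviations, which an OS could then line up against whatever the person was doing or watching at that moment.

```python
from statistics import mean, stdev

def flag_changes(readings, window=5, threshold=2.0):
    """Flag readings that deviate sharply from the recent rolling
    baseline -- the kind of change an OS might link to a stimulus.
    readings: list of (timestamp, value) pairs."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = [v for _, v in readings[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        t, v = readings[i]
        if sigma > 0 and abs(v - mu) / sigma > threshold:
            flagged.append(t)
    return flagged

# Heart rate ticks along at ~62 bpm, then jumps when (say) a scary
# scene starts at t=6:
hr = [(0, 61), (1, 62), (2, 61), (3, 63), (4, 62), (5, 61), (6, 95)]
```

Production systems would use far more robust statistics, but the correlation step is exactly this simple in outline: detect the change, then look at what coincided with it.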
Put that together with the advances in brain-computer interfaces that determine intent and emotion directly from brain signals, and your OS will be able to figure out your needs without a conversation.
Right now, much of the focus is on reading brain signals, but technologies such as transcranial stimulation have the potential to change brain states as well. If you wanted it to, your OS would be able to put you in a more focused or cheerful state of mind if it noticed you getting too distracted or grumpy.
In the 2013 Emerging Technologies Hype Cycle, enterprises were encouraged to look beyond the narrow perspective that only sees a future in which machines and computers replace humans.
In fact, observing how early adopters are using emerging technologies reveals three main trends at work: augmenting humans with technology; machines replacing humans; and humans and machines working alongside each other.
The first thing is to acknowledge that artificial intelligence and smart machines – including robots – are going to represent a juggernaut trend for the next decade. Re-evaluate tasks that you thought only humans could do – can you redesign how processes are performed and decisions are made within your enterprise based on new smart technologies? You’ll need to reassess this every year or two as the capabilities improve.
Look in particular at how to balance tasks between humans, software and robots to take best advantage of the abilities of each. There are still many challenging endeavours – including chess – where the best solution is a human working together with a computer.
Hire an ethicist or two, as ethical tradeoffs are going to be one of the few areas that remain firmly in the domain of humans. Computers may be able to answer a question faster and more accurately than any person, but it’s going to be humans who decide which question is the right one to ask.