Regulating robots – keeping an eye on AI

If there’s any emerging technology that’s gripped the public consciousness in recent years, it’s AI and machine learning (ML). Autonomous vehicles, shopping recommendations, Siri and Alexa – these are just a few day-to-day examples of the rapid evolution of ML applications.

The fervour around AI and ML’s development is only fuelling these advancements. As public interest grows, we’re already seeing more students attracted to ML and AI courses – just look at the popularity of Professor Andrew Ng’s Coursera course on machine learning, or the record number of Stanford students who enrolled in the machine learning class this semester. With more young professionals and academics entering the field, the rate of innovation is only set to increase.


As exciting as these fast-paced developments are, there are noteworthy voices of dissent from both the academic and industrial spheres. Earlier this year, Stephen Hawking warned AI could be the “worst event in the history of our civilization”, whilst Tesla boss Elon Musk has warned that an AI arms race could lead to a third world war.

Sensationalist statements? Perhaps, but they raise some interesting questions around what responsibilities we should give AI, where to draw the line and how to govern its use – some of which I’d like to address below.

AI – decision-making, independent thought and the robo-apocalypse

First and foremost, we have to iron out the semantics. It’s important to understand the distinction between ML and AI, and their relationship to one another, to fully appreciate whether Hawking and Musk’s robo-apocalypse is even possible.

Machine learning refers to the concept of programming computers to learn patterns, classify data, and make predictions based on past behaviour. Artificial intelligence, on the other hand, refers to the discipline of making computers exhibit human intelligence. As such, ML is a sub-discipline of AI, a rung on the ladder to the ultimate goal of true machine intelligence.
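To make that concrete, here is a minimal sketch of the ML idea in Python. The scenario (predicting whether a shopper will buy again, using scikit-learn) is entirely hypothetical and chosen only for illustration: a model is fitted to past examples, then asked to predict for an unseen case.

```python
# A minimal sketch of machine learning: learn patterns from past behaviour,
# then predict for new data. Dataset and features are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Past behaviour: [hours browsing, items purchased] -> bought again? (1 = yes)
past_features = [[1.0, 0], [5.5, 2], [0.5, 0], [8.0, 3]]
past_labels = [0, 1, 0, 1]

model = DecisionTreeClassifier()
model.fit(past_features, past_labels)   # learn patterns from historical data

print(model.predict([[6.0, 1]]))        # classify an unseen shopper
```

That is the whole trick: classification and prediction learned from history, with no understanding beyond the data it was shown.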

Distilling this further, we have to separate two distinct concepts – decision-making and ‘independent’ thought. Autonomous cars demonstrate decision-making by applying the brakes to avoid a collision. This decision-making is powered by ML algorithms, and is limited to that very specific situation.
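To illustrate how narrow that scope is, here is a hypothetical sketch (not any real vehicle’s system) of a model trained to answer exactly one question – brake or not – from two sensor readings.

```python
# Hypothetical illustration of narrow, situation-specific decision-making:
# the model answers a single question ("brake or not?") from sensor data,
# and has no judgement or goals beyond that one task.
from sklearn.linear_model import LogisticRegression

# Past driving data: [distance to obstacle (m), closing speed (m/s)] -> braked?
X = [[50.0, 2.0], [10.0, 8.0], [30.0, 1.0], [5.0, 12.0], [40.0, 3.0], [8.0, 9.0]]
y = [0, 1, 0, 1, 0, 1]

brake_model = LogisticRegression().fit(X, y)

# New sensor reading: obstacle 7 m ahead, closing at 10 m/s -> predict "brake"
print(brake_model.predict([[7.0, 10.0]]))
```

The model’s entire ‘understanding’ is that one decision boundary; ask it anything outside that situation and it has nothing to offer.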

Independent thought is powered by self-awareness and emotion, and we’ve yet to achieve AI that demonstrates this capability. We honestly don’t even know if it’s possible, and I’m sceptical that we will be able to develop computers that achieve true independent thought.


The crux of the robo-apocalypse argument generally revolves around us giving too much responsibility to AI, which then runs amok – and as things currently stand, that seems an unlikely scenario.

Take, for instance, the case of autonomous cars. They may be able to make decisions, but lacking true independent thought, they’re unable to make crucial emotional judgments. A self-driving car could realistically face a choice between potentially harming its passengers or a far larger number of pedestrians. Should the car protect the passengers at all costs, or try to minimise the total harm to human life at the expense of the passengers? If you knew that a self-driving car was programmed to minimise the total harm to humans in certain situations, would you agree to let it drive you? Probably not.

Inherently, AI and ML as they currently stand can’t be tasked with making these important decisions, and as such the potential to do world-ending harm is necessarily limited.

Regulating AI

That said, although a potential robo-apocalypse is doubtful, governance will still need to be an important part of AI’s development.

Software increasingly includes AI and ML components, and this will only accelerate in the coming years. Governance will need to adapt to handle and regulate software used specifically in activities that can impact human wellbeing – transportation, voting machines and health systems are just a handful of possible examples.


Take a real-world example: if you were building a hospital, you’d of course employ licensed engineers and qualified architects to do so. However, when it comes to the developers building the software that runs on the medical devices in that hospital, we ask for no equivalent licence. Both are inherently tied to human wellbeing, yet they are governed very differently. As stated above, AI in practice is really the application of algorithms to data in a process that is, ultimately, controlled by humans, so this kind of licensing may be a possible path for governing the appropriate use of ML and AI in the future.

As in many areas, overly zealous regulation often stifles creative freedom and innovation. On the other hand, we already live in a world where software can be developed to exhibit bias, lead to unsafe systems or make financially irresponsible decisions, and clearly this needs to be regulated appropriately.

Ultimately, it boils down to striking a balance: limiting the potential damage AI could cause without quashing its potential benefits. It’s going to be a difficult line for governments to tread whilst keeping pace with this rapidly advancing field.


Sourced by Dr. Greg Benson, chief scientist at SnapLogic and Professor of Computer Science at University of San Francisco
