Gartner wants to prevent the robot uprising by building ‘ethical programming’ into smart machines

As machines get smarter, should we be baking ethics into programming?

Self-driving cars, artificially intelligent assistants and smart sensors are creeping into every facet of our everyday lives – it seems we’re now entering the future of classic science fiction. But as any sci-fi buff will tell you, blindly stumbling into an era of powerful, self-aware machines rarely works out well for the humans who created them. From 2001: A Space Odyssey’s murderous computer HAL to the enslavement of all humanity by machines in The Matrix, films and popular culture have long warned us of the consequences of letting technology run amok without the right guidance.

We may be a long way off living in mechanical vats controlled by an evil machine empire, but IT analyst firm Gartner thinks now is the time for those manufacturing and developing these technologies to establish the ground rules for machines to ‘behave ethically’, ensuring the tech community programs ethical values into its creations from the outset. To this end, it has outlined a series of recommendations on how companies can act now to prevent a future robot apocalypse – and, more importantly, keep their customers.

‘Clearly, people must trust smart machines if they are to accept and use them,’ said Gartner analyst Frank Buytendijk. ‘The ability to earn trust must be part of any plan to implement artificial intelligence (AI) or smart machines, and will be an important selling point when marketing this technology. CIOs must be able to monitor smart machine technology for unintended consequences of public use and respond immediately, embracing unforeseen positive outcomes and countering undesirable ones.’

> See also: The day I met Amelia: the virtual assistant hoping to usher in the AI revolution 

As Buytendijk explains, realising the potential of smart machines – and ensuring successful outcomes for the businesses that rely on them – will hinge on how trusted smart machines are and how well they maintain that trust. Central to establishing this trust will be ethical values.

Gartner has identified five ‘levels’ of programming and system development based on their potential ethical impact, and the kinds of discussions CIOs should be having around them.

Level 0: Non-Ethical Programming

At this lowest level, says Gartner, there are no real ethical considerations for the ‘behaviour’ of technology. The technology manufacturer assumes very limited ethical responsibility, other than that the technology must provide the promised functions safely and reliably.

Examples include ‘vapourware’ – technology announced to the public but never delivered – which can reduce customer trust in a manufacturer. The first release of any software is seldom complete, which means customers may have limited expectations of ‘version 1.0’ software.

Gartner recommends that technology manufacturers communicate openly about what they will deliver, and about any changing circumstances that alter what can and cannot be delivered. This should include service-level agreements (SLAs) that specify what is delivered and how.

Level 1: Ethical Oversight

At the next degree of sophistication there is still no ethical programming, but the deployment and use of the technology may have ethical consequences. Smart machines may be used, but it’s essentially up to users what they do with the results; the main ethical responsibility lies in the hands of those who use the smart machines.

Some technology companies have established ethics boards, and some end-user organisations – particularly in financial services – have done the same, but they remain a small minority.

Gartner recommends that organisations establish governance practices that ensure no laws are broken, as a bare minimum. They should also seek to make ethics a part of governance by identifying and discussing dilemmas posed by using new technologies.

Level 2: Ethical Programming

This next level of complexity is now being explored, as smart machines, such as virtual personal assistants (VPAs), are being introduced. Here, the user perspective changes considerably. Whereas in Levels 0 and 1, the user is generally a business professional performing a job, in Level 2, the user is often a customer, citizen or consumer.

> See also: The ethics behind a best-practice big data strategy

Responsibility is shared between the user, the service provider and the designer. Users are responsible for the content of the interactions they start (such as a search or a request), but not for the answer they receive. The designer is responsible for considering the unintended consequences of the technology’s use (within reason). The service provider has a clear responsibility to interact with users in an ethical way, while teaching the technology correctly, and monitoring its use and behaviour.

For example, one smartphone-based virtual personal assistant would in the past guide you to the nearest bridge if you told it you’d like to jump off one. Now, it is programmed to pick up on such signals and refer you to a help line. This change underlines Gartner’s recommendation to monitor technology for the unintended consequences of public use, and to respond accordingly.
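To make the idea concrete, a minimal sketch of such a safeguard might look like the following. This is purely illustrative: real assistants rely on trained intent classifiers rather than keyword lists, and every name in the snippet is hypothetical.

```python
# Illustrative sketch only: a crude keyword-based safeguard of the kind
# described above. Real assistants use trained classifiers, not keyword
# lists; all names here are hypothetical.

CRISIS_SIGNALS = {"jump off a bridge", "end my life", "hurt myself"}
HELPLINE_RESPONSE = (
    "It sounds like you may be going through a difficult time. "
    "Please consider calling a crisis helpline such as the Samaritans."
)

def respond(query: str) -> str:
    normalised = query.lower()
    # Check for distress signals before any ordinary intent handling,
    # so the safety path always takes priority over navigation or search.
    if any(signal in normalised for signal in CRISIS_SIGNALS):
        return HELPLINE_RESPONSE
    return handle_ordinary_intent(query)

def handle_ordinary_intent(query: str) -> str:
    # Stand-in for the assistant's normal pipeline.
    return f"Searching for: {query}"

if __name__ == "__main__":
    print(respond("I want to jump off a bridge"))    # -> helpline referral
    print(respond("Find me a bridge for my photo"))  # -> ordinary search
```

The design point is simply that the ethical check sits ahead of the normal response logic, rather than being bolted on after a failure has already been observed in public use.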

Level 3: Evolutionary Ethical Programming

This level introduces ethical programming as part of a machine that learns and evolves. The more a smart machine learns, the more it departs from its original design, and the more its behaviour becomes individual. At this level the concept of the user changes again. In Level 2, the user is still in control, but in Level 3 many tasks are outsourced and performed autonomously by smart machines.

The less responsibility the user has, the more trust becomes vital to the adoption of smart machines. For example, if you don’t trust your virtual personal assistant with expense reporting, you won’t authorise it to act on your behalf. If you don’t trust your autonomous vehicle to drive through a mountain pass, you’ll take back control in that situation. Given the effect of smart technologies on society, Level 3 will require considerable new regulation and legislation.

Gartner recommends that as part of long-term planning, CIOs consider how their organisations will handle autonomous technologies acting on the owner’s behalf. For example, should a smart machine be able to access an employee’s personal or corporate credit card, or even have its own credit card?
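One way to reason about that question is as an explicit, scoped grant from a human owner that the machine must check before acting. The sketch below is a hypothetical illustration under that assumption; none of the classes, limits or names correspond to any real agent or payments API.

```python
# Illustrative sketch only: one way an organisation might scope what an
# autonomous agent may do on an owner's behalf. Everything here is
# hypothetical, not a real payments or agent framework.

from dataclasses import dataclass

@dataclass
class SpendingGrant:
    owner: str                      # the human who remains accountable
    max_amount: float               # per-transaction ceiling
    allowed_categories: frozenset   # e.g. {"travel", "office supplies"}

class AutonomousAgent:
    def __init__(self, name: str, grant: SpendingGrant):
        self.name = name
        self.grant = grant

    def purchase(self, amount: float, category: str) -> str:
        # Every autonomous action is checked against the explicit grant,
        # so responsibility traces back to the owner who issued it.
        if category not in self.grant.allowed_categories:
            return f"DENIED: {self.name} has no grant for '{category}'"
        if amount > self.grant.max_amount:
            return f"ESCALATE: {amount:.2f} exceeds limit; ask {self.grant.owner}"
        return f"APPROVED: {self.name} spends {amount:.2f} on {category}"

grant = SpendingGrant("alice@example.com", 200.0, frozenset({"travel"}))
agent = AutonomousAgent("expenses-bot", grant)
print(agent.purchase(150.0, "travel"))         # APPROVED
print(agent.purchase(150.0, "entertainment"))  # DENIED
print(agent.purchase(500.0, "travel"))         # ESCALATE
```

Under this framing, the Level 3 trust question becomes a question of how broadly the grant is scoped, and how escalations back to the human owner are handled.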

Level 4: Machine-Developed Ethics

Gartner doesn’t expect smart machines to become self-aware any time soon, but Level 4 anticipates that they eventually will. Such machines will need to be raised and taught, and their ethics will need to evolve. In Level 4, the concept of ‘users’ disappears altogether; we are left only with ‘actors’, who initiate interactions, respond and adapt.

Ultimately, a smart machine, being self-aware and able to reflect, is responsible for its own behaviour.

‘The questions that we should ask ourselves,’ Buytendijk adds, ‘are how will we ensure these machines stick to their responsibilities? Will we treat smart machines like pets, with owners remaining responsible? Or will we treat them like children, raising them until they are able to take responsibility for themselves?’

Ben Rossi

Ben was Vitesse Media's editorial director, leading content creation and editorial strategy across all Vitesse products, including its market-leading B2B and consumer magazines, websites, research and...