Why now is the time to implement regulation of AI

It’s rare to hear big tech, government and academics singing from the same hymn sheet, especially on a subject as divisive as AI regulation. Yet that is the current reality – with everyone from the late Stephen Hawking to Elon Musk, and from Ursula von der Leyen to Boris Johnson, calling for increased legislation “before it’s too late”.

It is a new frontier not just for emerging technologies such as artificial intelligence (AI), but for the regulatory and ethical standards that surround them. With the AI market expected to break the $500 billion mark by 2024, according to IDC, everyone must weigh both the risks and the rewards – and no one more so than those responsible for building and implementing AI: the developers.

Robotic process automation (RPA) offers businesses – big and small – significant benefits. Driving better digital customer experiences that allow customers to engage in empathetic, personal conversations with agents is just one example. The problem is that while enterprise leaders are usually sold on the business case for RPA, employees often feel differently: they are anxious about automation, and about the seeming lack of safeguards to regulate it or of ethical guidance for those building and implementing it. This concern is so acute that it is slowing the rate at which users adopt RPA, even as the installed base continues to grow.

What employees are really talking about here is trust – trust that they’re not going to be replaced by robots, or trust that the algorithm they design or use is going to consider customer mortgage applications fairly. In my experience, most organisations are not closing contact centres and laying off workers because they deployed an RPA platform. Instead, the deployment is freeing advisors up to concentrate on meaningful conversations with customers, akin to those that usually take place on the local high street. But the trust in that relationship lies between an employee and an employer – or a service provider and a customer – not with the technology itself. Where trust exists, some surveys show that workers are actively calling for deeper adoption of AI in their everyday working lives.

To encourage trust in RPA, we decided to ‘codify’ long-standing principles at the core of our engineering team into a ‘Robo Ethical Framework’, inspired by Asimov’s Three Laws of Robotics. It is our hope and expectation that the organisations we work with will adopt this ethical framework. This will help to breed trust among employees and customers, in turn driving innovation and growth.

Regulation has built this kind of trust before. One example is the 1974 Fair Credit Billing Act, which limited cardholder liability – a clear and deliberate act to curb the then-emerging credit card industry, which was unfairly holding cardholders liable for fraudulent transactions, even when the card had been stolen. By regulating the industry, consumers could trust that they were protected and use the product with confidence.

Experience has taught us that without controlling and regulating new technology, we face potentially unethical outcomes. Our Robo Ethical Framework is intended as another voice in the appeal for world leaders, businesses and employees on the ground to come together and implement real change, so that AI and RPA can fulfil their true promise.

Written by Oded Karev, general manager of NICE Advanced Process Automation at NICE
