How should we go about establishing strong AI regulation?

This week, Alphabet CEO Sundar Pichai and IBM CEO Ginni Rometty both called for dedicated regulation of AI.

Pichai stated that AI was “too important not to” regulate, explaining that sectors within AI technology, such as autonomous cars and healthtech, needed their own sets of rules.

Rometty joined the discussion with the idea of ‘precision regulation’, arguing that it is not the technology itself, but how it is used, that should be regulated. She cited facial recognition as an example of a technology that can harm people’s privacy while also delivering benefits, such as catching criminals.

What is the answer to regulating AI? And why is it important?

Still work to do

These announcements have come in spite of recent setbacks in the sphere; just last week it was revealed that the European Commission was considering a five-year ban on facial recognition, and Google‘s last attempt to assemble an AI ethics board lasted under two weeks due to controversy over who was appointed.

Speaking on the potential ban on facial recognition within the EU, Robert Hoyle Brown, vice-president of Cognizant‘s Centre for the Future of Work, said: “The health of our democracy demands trust. The tech trust deficit, however, is not closing.

“Too often there is still retroactive regret that not enough was done to prevent ‘X’ from happening. Meanwhile, there is near-unanimous agreement that facial recognition software error rates are unacceptably high.

“Bans on facial recognition, such as those suggested by the European Commission, raise valid concerns over the need to walk before we run. Understanding how all involved will respond when, not if, things go wrong is critical now rather than later.

“A temporary ban on the technology may just bide us our time.”

Where to go from here

So what would be a potentially beneficial strategy for regulating AI that gets the best out of the technology while avoiding harm?

“There are multiple aspects of ML/AI that should be regulated, from data collection all the way through to how machine learning models get used, including production ML, or MLOps,” said Santiago Giraldo, senior manager of data engineering at Cloudera. “It is estimated today that only about 10% of models created by businesses make it into production, but the proliferation of more sophisticated and complete methods of deploying and operating the models that enable AI is quickly magnifying the need for oversight and regulations.”

He went on to identify two other major aspects of AI regulation to consider:

Open standards

Considering how many sectors are now leveraging AI within their companies, there will need to be a standardised set of regulations that encompasses all of them.

“First and foremost, there needs to be an open, transparent, and universal standard for operation of AI models,” said Giraldo. “Establishing these best practices and standards not only enables organisations to better deploy, monitor, and govern their models, but also makes it easier for regulating bodies to hold organisations accountable.

“Unlike regulating data privacy and usage, there are many aspects of AI that are much more difficult to measure and quantify. An AI model is constantly learning from new inputs and new data, and this gives way to a plethora of new aspects that may need regulation.

“The problem is that today, every organisation manages these aspects of AI operations in different ways, making consistency in oversight very challenging.”

Agreement between all stakeholders

Giraldo continued by citing the need for regulators, vendors and users to all be on the same page if AI regulation is to succeed.

“We need regulators with domain expertise and a clear understanding of how machine learning and AI models are built, trained, and operated in production, and consensus and collaboration between vendors, enterprises, and industry leaders on establishing and adopting open, interpretable, standards for ML operations.

“In reality, everything stated is practical, and we are today laying the essential groundwork for real, sensible regulations to take hold. We have the opportunity to build these functions and regulations the right way in the early stages of true AI adoption.”

Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.