The pending EU AI Act is set to operate through a risk-based approach, categorising uses of artificial intelligence from low to high risk and regulating use cases accordingly.
499 of the 620 participating MEPs voted in favour of the act, while 28 voted against the proposed legislation.
The plenary vote follows a debate held yesterday among ministers on the various matters to consider when regulating AI.
Following today’s vote on how to go about regulating businesses that are innovating with AI, the European Parliament will enter talks with the European Council on drafting the law, with a view to completion by the end of the year.
However, it is “virtually impossible” that the law will be ready to come into force before the Parliament elections in June next year, according to Dragoş Tudorache, speaking at today’s Parliament press conference.
“In the negotiations with the European Council and Commission, there is an issue not only of time for companies to comply, but more seriously, time for member states and governments to mount up expertise within the authorities that will take on the role of market regulators,” said Tudorache.
The drafting journey for the EU legislation began with the initial legislative proposal, delivered by the European Commission in April 2021; a first reading of the draft law was carried out on 11 May 2023.
According to Ivana Bartoletti, global chief privacy officer at Wipro — who has been directly involved in the development of European AI policy — the law will sit alongside existing legislation applicable to AI, covering matters including data privacy, consumer rights and non-discrimination.
“The EU AI Act will be the first legislation governing high risk AI applications and generative AI tools. At a time when everyone is talking about the harms of AI and the need to set the rules of the game, it’s good to see regulation finally on the horizon,” said Bartoletti.
Here, we delve into what the European Parliament’s announced position on AI regulation will mean for organisations using artificial intelligence to bolster operations.
Holding AI development to account
According to David Dumont, partner at law firm Hunton Andrews Kurth, the “landmark piece of legislation” voted on by the European Parliament today imposes more robust obligations on developers of high-risk AI systems than the European Commission’s original proposal did.
Dumont said: “The text the European Parliament voted on today establishes a GDPR-like right to lodge a complaint with a supervisory authority, broadening individuals’ options to push companies to comply with the Act.
“It creates the possibility for privacy activists to target AI companies and tools through complaints to supervisory authorities as we have seen with the GDPR.
“[The text] introduces an obligation for deployers of high-risk AI systems to carry out a ‘Fundamental Rights Impact Assessment’ that should address how the system they are deploying will affect compliance with fundamental rights in the EU.”
While capable of speeding up code creation and creative projects, among other workplace tasks, generative AI has also fallen into the hands of threat actors looking to trick users into divulging sensitive information.
Additionally, generative AI remains capable of displaying bias and sharing misinformation and potentially harmful content if not managed properly by programmers.
“Generative AI is an excellent example of how technology has outpaced the law. However, in its rush to position itself as a global leader, the EU can’t forget the importance of protecting its people by ensuring its AI Act in its final form does not water down or contradict the privacy protections provided by the GDPR,” said Alex Hazell, head of EMEA privacy and legal at Acxiom.
Also under discussion among MEPs has been the use of remote biometric authentication to verify users, and how to balance security with privacy rights. A blanket ban has been placed on facial recognition in public spaces, with the aim of keeping those privacy rights protected.
AI-powered biometric authentication collects and recognises a person’s fingerprint, face, voice or other biological or behavioural traits — prompting debate over data privacy. The remote form of biometric authentication under discussion in the drafting of the EU AI Act performs this at a distance, and at scale.
“By looking to curtail specific applications of AI, such as live facial recognition in public spaces and predictive policing tools, the EU has taken a decisive step towards protecting fundamental rights and societal values,” said Jaeger Glucina, managing director and chief of staff at legal process automation provider Luminance.
“This initial approach holds plenty of logic whilst the minds behind AI are still yet to be brought to the table.”
To ensure that innovation with artificial intelligence can continue to grow safely, “regulatory sandboxes” created by public authorities are being explored as a way to develop AI models and test for flaws in a controlled environment before public launch.
MEPs have also proposed exemptions for the purposes of research activities and open source AI software.
While the draft law is a landmark development for tech regulation globally, Robin Röhm, CEO of machine learning and analytics platform Apheris, believes it “raises more questions than it answers”, citing concerns around possible decreases in venture capital funding that remain to be addressed.
“It is critical that we allow for capital to flow to businesses, given the cost of building AI technology, but the risk-based approach to regulation proposed by the EU is likely to lead to a lot of extra burden for the European ecosystem and will make investing less attractive,” said Röhm.
“The key to good regulation is ensuring that safety concerns are addressed while not stifling innovation. It remains to be seen whether the EU can achieve this.”
“The EU’s proposed bill represents a good starting point for a global AI regulatory approach, as it doesn’t fall into the trap of ‘trying to get under the hood’ of the technology,” Glucina added.
“Long-term, the success of this Act will depend upon significant buy-in on a global scale. The differing regulatory approaches taken by the US and the EU will need to meet in the middle. Today’s vote is an important step in the journey towards an AI-enabled future; however, there is still a long way to go.”