What the draft EU AI Act means for regulation

Information Age speaks to EU data protection, intellectual property and technology experts about the business implications of the EU AI Act

Drafting of the EU AI Act has been in progress since April 2021, with regulators looking to keep pace with rapid innovation from startups and big tech corporations alike.

A plenary vote on the draft EU AI Act is currently scheduled for mid-June 2023.

Use cases for artificial intelligence were already being discovered regularly across many sectors before OpenAI released its ChatGPT tool publicly in November 2022, but generative AI has made the question of how to regulate the technology a hot-button topic.


Much of the discussion around how to go about this has centred on “explainability” – opening up what has previously been a sealed black box for many; users need clarity on how their data is being processed, and how the algorithm operates. There are also the possibilities of misinformation and bias to consider, with tools such as ChatGPT recently found to be susceptible to cultural biases and “hallucinations”.

David Dumont, partner at law firm Hunton Andrews Kurth, says: “The EU AI Act is being passed at a time of unprecedented innovation in the space and intense debate around how AI should and should not be used across various sectors.

“Regulations generally move slower than technology and various stakeholders, including AI technology developers and users, as well as regulators and legislators, are also facing legal questions with respect to certain AI uses for the first time.”

What will the draft EU bill mean for AI regulation going forward?

Principles of AI development

The European Parliament is in talks to include general principles governing the use of all AI systems. These could include:

  • diversity
  • human agency and oversight
  • non-discrimination and fairness
  • privacy and data governance
  • social and environmental wellbeing
  • technical safety
  • transparency

From this, we can see a focus on accountability for the human developers of such technology, and on ensuring that people from different backgrounds are always involved in testing and deployment, to guard against bias and misinformation.

Meanwhile, the “social and environmental well-being” aspect points to a focus on ESG.

“Regarding data privacy, AI-driven products and services typically require the processing of a significant amount of personal data, creating a clear tension between use of that data and some of the GDPR’s key data protection principles,” says Dumont.

“However, it should be possible to develop and deploy AI products and services in a GDPR-compliant manner, which is why the EU AI Act is so important as it creates a more robust legal framework for businesses to work within going forward.”

As it stands, non-compliance could lead to culprit businesses being fined up to €30 million or 6 per cent of total worldwide annual turnover – whichever is higher – in the most serious cases.
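
As a minimal sketch of that arithmetic – assuming the “whichever is higher” formulation in the Commission’s draft – the 6 per cent strand overtakes the €30 million figure once worldwide turnover passes €500 million:

    def max_ai_act_fine(worldwide_turnover_eur: float) -> float:
        """Illustrative only: the cap for the most serious breaches under
        the Commission's draft (EUR 30m or 6% of total worldwide annual
        turnover, whichever is higher)."""
        return max(30_000_000.0, 0.06 * worldwide_turnover_eur)

    # A firm with EUR 2bn turnover faces a cap of EUR 120m, not EUR 30m.
    print(max_ai_act_fine(2_000_000_000))  # 120000000.0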

EU AI Act – managing risk

Currently, AI systems are being classified by the level of risk they can create within society. This risk-based approach features a variety of risk strands: for example, facial recognition and “dark pattern” AI; high-risk systems, for use in education, employment, justice and immigration law; and prohibited practices, including the social scoring of citizens.

Providers will have to ensure that the AI system complies with the requirements under its risk allocation – from the data sets informing AI training and performance testing to record keeping, cybersecurity and effective human oversight. While this demonstrates an attempt to provide bespoke regulation depending on the system in question, there is a concern that businesses will hit barriers due to overzealous regulation.
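
To make the classification concrete, the following is a purely hypothetical triage sketch in Python – the tier names, example purposes and mapping are illustrative assumptions, not the Act’s official taxonomy:

    from enum import Enum

    class RiskTier(Enum):
        """Illustrative tiers loosely following the draft's risk-based approach."""
        PROHIBITED = "prohibited practice"  # e.g. social scoring of citizens
        HIGH = "high-risk"                  # e.g. education, employment, justice
        LIMITED = "limited risk"            # e.g. transparency duties for chatbots
        MINIMAL = "minimal risk"            # e.g. spam filtering

    # Hypothetical mapping of intended purposes to tiers for a first triage.
    PURPOSE_TIERS = {
        "social scoring": RiskTier.PROHIBITED,
        "cv screening for employment": RiskTier.HIGH,
        "exam scoring in education": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "spam filtering": RiskTier.MINIMAL,
    }

    def triage(intended_purpose: str) -> RiskTier:
        # Unknown purposes default to a high-risk review here, reflecting
        # the cautious, compliance-first posture the draft encourages.
        return PURPOSE_TIERS.get(intended_purpose, RiskTier.HIGH)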


At the time of writing, it is believed that the following measures would need to be carried out, as sketched in the example after this list:

  • A risk-handling system that encompasses design, testing and analysis
  • A management system that oversees data quality
  • Provision of technical documentation and instructions of use
  • Registration of data sources and training resources involved in the making of the foundation model
  • Implementation of energy-efficiency standards
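
As a minimal sketch, assuming a provider wanted to track these draft obligations internally, a machine-readable checklist might look something like the following (the field names are hypothetical, not terms taken from the Act):

    from dataclasses import dataclass

    @dataclass
    class ComplianceChecklist:
        """Hypothetical tracker for the draft obligations listed above."""
        risk_system_reviewed: bool = False     # design, testing and analysis
        data_quality_managed: bool = False     # data-quality management system
        technical_docs_provided: bool = False  # documentation and instructions of use
        training_sources_registered: bool = False
        energy_standards_met: bool = False

        def outstanding(self) -> list[str]:
            # Return the obligations that still lack supporting evidence.
            return [name for name, done in vars(self).items() if not done]

    print(ComplianceChecklist(data_quality_managed=True).outstanding())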

Impractical requirements

JJ Shaw, managing associate in the digital, commerce & creative group at Lewis Silkin, says: “The AI regulation previously designated an AI system as ‘high-risk’ where its intended purpose was high-risk. However, bringing General Purpose AI System (GPAIS) within scope of the ‘high-risk’ classification due to the (however unlikely) chance of a ‘high-risk application’ means such systems are likely to become subject to tough compliance requirements and the associated cost consequences.

“The concern with this amendment is that providers will be given impractical requirements, such as listing all possible applications of a tool and the requirement to develop mitigation strategies to deal with such applications. Some commentators have suggested that the full force of the high-risk section of the AI regulation should apply only if a GPAIS is indeed used for high-risk purposes, rather than having a possible application.”

Generative AI

While discussions around AI regulation were high on the agenda before ChatGPT began hitting the headlines, the surge in generative AI innovation globally has brought about the need for a rethink around possible risks.


“Particularly relevant to text and image-generating AI are the recent amendments to the AI Regulation in 2022 – which introduced the concept of GPAIS and includes any AI system that can be used for many different purposes and tasks,” explains Shaw.

“This wide definition captures a variety of AI tools, including AI models for image and speech recognition, pattern detection, translation and also text and image-generating AI [like OpenAI’s ChatGPT and Dall-E].

“It is difficult to predict the potential applications for a GPAIS, because these systems are versatile and can complete a variety of tasks when compared to ‘narrow-based’ AI systems, which have specific intended use cases. For example, a text-generating AI tool might be used to draft patient letters for medical professionals, utilising sensitive patient data, even if this was not its original intention.

“Whilst a GPAIS might be considered a great technological development by AI enthusiasts, from the EU law-making perspective, such unpredictable applications are considered ‘high-risk’.”

Global impacts

While the EU AI Act focuses on development across the European Union, there is no doubt that the regulation will affect other regions too, including the UK and US, given the global nature of the AI market and the many medium-to-large businesses worldwide looking to innovate with the technology.

“Given the rapid developments we’re all seeing in the artificial intelligence sphere, the EU AI Act has been voted through at a very opportune time and countries outside of the union will be observing very closely how businesses react to the new rules and how they will be enforced,” says Ellen Keenan-O’Malley, senior associate at EIP.

“AI operates in a global market and although the EU is a key market and the EU hopes that the AI Act will become a de-facto global standard, there are signs of a divergence in approach across different jurisdictions with the US and the UK looking to take a more ‘pro-innovation’ and decentralised approach.

“This is a cause of concern for businesses who are already working hard to assess the potential impact different regulatory regimes will have on how they should approach AI governance internally and manage possible liabilities in their own supply chains.”


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.