GDPR — How does it impact AI?

As the GDPR turns five, how has its relationship with AI evolved?

Having marked its fifth anniversary on 25 May this year, it's hard to believe how much has changed since Europe's data privacy law first took effect. The vast scope of the GDPR has raised fresh challenges, chief among them its complex interaction with AI. In particular, the adoption of, and advances in, artificial intelligence have reached previously unimaginable heights, transforming how we live and work and creating new data protection challenges for regulators.

When I covered the topic for this publication back in 2019, it was to discuss the complicated interplay between AI and the then relatively new GDPR, especially with regard to Article 22, which aims to prevent automated profiling from being the sole decision-maker in choices that could significantly affect people's rights, freedoms and interests. There were calls, particularly from the Information Commissioner's Office (ICO), for greater action and collaboration to establish a new data protection framework ensuring the responsible and ethical use of AI. In the years since, as AI has advanced considerably, those calls have only grown louder.

The rate of digitalisation was further accelerated by the Covid-19 pandemic as consumers and businesses shifted online and towards new working models. AI emerged as a real enabler of business growth, holding great promise for organisations hoping to leverage next-generation technologies to drive cost and operational efficiencies through automation. Many leaders also homed in on AI's power to enrich the user experience as well as to offset volatility and build resilience in uncertain markets.

According to McKinsey, global adoption of AI grew 2.5x between 2017 and 2022, and 63 per cent of respondents expect their companies' investment in AI to increase further over the coming three years. However, the same report shows that while AI usage has surged, there has been no meaningful increase in efforts to manage AI-related risk since 2019.


The rise and rise of AI

AI's influence continues to grow worldwide, and it is revolutionising business processes in a way that is no longer theoretical or the stuff of science fiction but tangible and immediate. With the EU currently representing around 15 per cent of global GDP, EU-based organisations have no choice but to strike the right balance between reaping the benefits of AI and managing it to avoid unintended consequences.

Recent months have brought extraordinary breakthroughs in the field of generative AI, with the much-anticipated launch of ChatGPT closely followed by competitors such as Google's Bard. While these applications are breaking down barriers by mimicking human-like thinking, they also bring far more grey areas to the fore than the automated profiling that Article 22 sought to tackle some years ago.

With the technology evolving at breakneck speed, governments and regulators are still grappling with the wider implications of AI and the scope of its associated risks, which encompass everything from data privacy violations to cybersecurity concerns, fraud, inaccuracies, misinformation, copyright infringement and even academic cheating. We saw how Italy, for instance, temporarily banned ChatGPT over fears that it might conflict with the GDPR. Managing these ever-increasing risks while reaping the benefits of AI and building digital trust should be the overwhelming focus right now.


Navigating the evolving data landscape

Data, the raw material powering AI, continues to grow exponentially. Research shows that the total amount of data generated, stored and consumed worldwide passed 64 zettabytes in 2020 and is set to exceed 180 zettabytes by 2025. It is no surprise that legislation has lagged behind the unprecedented rise of AI, but this is where leaning more on data protection regulation may help to fill an important gap in the meantime.

Another factor that has completely altered the landscape in the past five years is the UK's exit from the EU, which brought additional complexity to the effective monitoring of personal data. While the 'UK GDPR' is largely the same as the EU version, it carries some slight differences, making it imperative for companies to improve education around data usage so that they understand the new policy landscape and avoid falling foul of those differences.

The grey areas of data protection

The priorities should be ensuring the trust, ownership and transparency that lead to compliance. Anything less can not only compromise individuals' personal data but also carry significant reputational and financial ramifications for organisations. It is encouraging to see progress happening on a bigger scale, such as a European Parliament committee's approval of the landmark EU AI Act on the back of rallying calls to control the rise of AI properly and responsibly; if it translates into law, it will provide a clearer, more risk-based way to regulate AI.

In the UK, where the government has long championed measures to keep the country at the forefront of global science and technology development, the AI industry is booming and is a major economic driver, contributing £3.7bn to GDP last year. A new AI white paper, released in March to help inform the responsible growth of AI, is built around five key principles, with fairness and transparency chief among them, to strike the right balance between innovation, safety and compliance.


Addressing the conundrum

Looking ahead, although the landscape has undoubtedly become far more complex, I remain a firm believer that the GDPR and AI can work successfully in tandem, provided that rigorous measures, checks and best practices are embedded firmly into business strategies and that AI-related policy evolves to supplement existing data regulations. Success also hinges on having the right understanding and training in place for AI, cybersecurity and other digital skills, something organisations must actively commit to.

It remains the case, even more so now than five years ago, that companies must ensure AI adoption is aligned not only with the GDPR but also with the established rules of ethical corporate behaviour. Those who have not already done so can consider designating an 'AI ethics officer' with oversight of establishing and implementing a strategy of rigorous processes and checks to ensure compliance and prevent AI from being used in inappropriate or damaging ways.

Regulatory scrutiny will only intensify and, as we enter previously uncharted waters, it is now more vital than ever to remember the original premise of the GDPR: to safeguard individuals and their personal information, and to act with integrity, transparency and good ethics.

Eric Winston is executive vice-president, general counsel and chief ethics & compliance officer at Mphasis.
