How AI could be a game-changer for data privacy

AI offers multiple benefits to businesses, but it also poses data privacy risks

Artificial intelligence (AI) is everywhere, powering applications such as smart assistants, spam filters and search engines. The technology offers multiple advantages to businesses – such as the ability to provide a more personalised experience for customers. AI can also boost business efficiency and improve security by helping to predict and mitigate cyber-attacks.

But while AI offers benefits, the technology poses significant risks to privacy, including the potential to de-anonymise data. Recent research revealed that AI-based deep learning models can determine the race of patients from radiologic images such as chest X-rays or mammograms – with “significantly better” accuracy than human experts.

>See also: Information Age guide to data + privacy

There is a “substantial risk” of infringing individuals’ privacy while using their data for AI applications, says Sandeep Sharma, lead data scientist at Capgemini. The threat is elevated by a lack of understanding of privacy among organisations using AI, he says.

Common mistakes include:

  • Using data for purposes other than those for which it was collected
  • Gathering information on individuals outside the scope of the data collection
  • Storing data for longer than necessary

This could leave firms falling foul of data privacy regulation such as the EU’s General Data Protection Regulation (GDPR).

>See also: Best GDPR compliance software for CTOs

AI + data privacy

The risks posed by AI-based systems span multiple vectors. For example, the potential for bias needs to be taken into account, says Tom Whittaker, senior associate in the technology team at UK law firm Burges Salmon. “AI systems rely on data, some of which may be personal. That data, or the way the models are trained, may be biased unintentionally.”

At the same time, there is also a chance that AI systems could be compromised, and an individual’s private information exposed. This is partly because AI systems rely on large datasets, which might make them a greater target for cyber-attacks, says Whittaker.

Meanwhile, there is the potential for data output from an AI system to expose an individual’s private details, either directly or when combined with other information.

There is also a more general risk to society as AI systems are used for an increasing number of applications.

“Credit-scoring, criminal risk-profiling and immigration decisions are a few examples,” says Whittaker. “If the AI or the way it is used is flawed, people may be subject to greater intrusions into their privacy than would otherwise have occurred.”

However, other experts point out that AI can have a positive impact on privacy. It can be used as a form of privacy-enhancing technology (PET) to help organisations comply with data protection by design obligations.

“AI can be used to create synthetic data which replicates patterns and statistical properties of personal data,” Whittaker explains.
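To illustrate the idea, here is a minimal sketch of one simple approach: drawing synthetic records from a distribution fitted to real data, so that aggregate statistics survive while no row corresponds to a real person. The dataset, column choices and Gaussian model below are illustrative assumptions; production synthetic-data tools model far richer structure.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical "real" personal data: age, income and weekly site visits
real = np.column_stack([
    rng.normal(45, 12, 1000),          # age
    rng.normal(38_000, 9_000, 1000),   # income
    rng.poisson(3, 1000),              # visits
])

# Fit a simple statistical model of the real data
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw synthetic records from the fitted distribution: no row maps back
# to a real individual, but means and correlations are preserved
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means:     ", np.round(mean, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```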

AI can also be used to minimise the risk of privacy breaches by encrypting personal data, reducing human error and detecting potential cyber security incidents, he adds.
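On the detection point, a hedged sketch of what this can look like in practice: an unsupervised model flagging unusual access patterns in a log. The synthetic events and feature choices are assumptions for illustration, using scikit-learn’s IsolationForest.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Synthetic access log: [requests per minute, distinct records touched]
normal = rng.normal(loc=[10, 5], scale=[2, 1], size=(500, 2))
bulk_export = np.array([[120, 400]])   # looks like a mass data pull
events = np.vstack([normal, bulk_export])

# Fit an isolation forest and flag the most isolated events
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)          # -1 marks suspected anomalies

print("flagged events:\n", events[flags == -1])
```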

It is with these benefits in mind that Estonia’s government is aiming to be AI-powered by 2030. Ott Velsberg, government chief data officer at the Estonian Ministry of Economic Affairs and Communications, says AI plays a “critical role” in PET.

For example, federated learning can be used to train models on remote datasets without sharing the underlying data, he says.
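A minimal sketch of the federated averaging idea follows. The linear model, local gradient steps and three-site setup are illustrative assumptions, not Estonia’s implementation; the point is that only model weights cross the network while raw records stay on each site.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Gradient-descent steps on one site's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three sites hold private datasets that never leave their premises
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Each site returns updated weights; only these cross the network
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)   # coordinator averages

print("recovered weights:", np.round(global_w, 2))  # ~ [2.0, -1.0]
```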

To ensure compliance with data protection regulation, Estonia has developed a consent service that enables people to share their government-held data with external stakeholders.

“We have also developed a data tracker providing an overview of how personal data is being processed, which is visible on the government portal,” says Velsberg.

>See also: Data privacy audit checklist – how to compile one

Regulation to ensure privacy

AI is currently governed by regulation including the GDPR, but more is coming. Right now, the EU has the “most robust AI-related privacy protections in law,” says Michael Bennett, director of responsible AI at the Institute for Experiential AI at Northeastern University.

The EU is also planning to introduce more regulations specific to AI, Whittaker points out. “These are relevant to those who place an AI system on the EU market, so will impact those based in the UK who sell or deploy AI solutions into the EU. These regulations are intended to prohibit certain AI systems and place obligations upon any that are high-risk, outlining how data can be stored and used.”

Meanwhile, the UK is set to publish a white paper at the end of 2022 setting out how it proposes to regulate AI.

>See also: What is the role of the data manager?

When trying to manage the risks, it’s important that business leaders know about current and planned regulation covering AI, says Whittaker. He points out that failure to comply with regulations can result in significant consequences: “Breach of high-risk obligations under the EU’s proposed AI Act carries potential fines of up to €20m, or up to 4 per cent of annual turnover.”

For firms using AI systems, transparency about how data is used is essential, says Whittaker. “If users do not know they were affected by a decision made by AI, they will not be in a position to understand or challenge it.”

Ensuring consent and the legitimate use of data is crucial, says Mark Mamone, group chief information officer at GBG. On top of this, he says firms should ensure the algorithms themselves, as well as the data on which they depend, are “carefully designed, developed, and managed to avoid unwanted and negative consequences”.

Underpinning all of this is good data hygiene, says Mike Loukides, VP of emerging tech at O’Reilly. “Don’t collect data you don’t need and make sure information is deleted after a certain amount of time. Ensure access to data is properly restricted, and that you have good security practices in place.”
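That retention advice is straightforward to automate. Below is a minimal sketch of a scheduled purge job; the customers table, collected_at column and one-year window are hypothetical illustrations, not a legal recommendation.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365  # assumption: policy permits one year of storage

def purge_expired(conn: sqlite3.Connection) -> int:
    """Delete rows whose collection date falls outside the window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    cur = conn.execute(
        "DELETE FROM customers WHERE collected_at < ?",
        (cutoff.isoformat(),),
    )
    conn.commit()
    return cur.rowcount

# Demo with an in-memory database and a hypothetical table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, collected_at TEXT)")
conn.execute("INSERT INTO customers VALUES ('old', '2019-01-01T00:00:00+00:00')")
conn.execute("INSERT INTO customers VALUES ('new', ?)",
             (datetime.now(timezone.utc).isoformat(),))
print(f"purged {purge_expired(conn)} expired record(s)")  # -> 1
```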

AI is certainly a game-changing technology that’s likely to have an increasing presence in business, but it must be managed responsibly to avoid privacy intrusions. With this in mind, business leaders need to think more critically about how AI is used – and abused, says Loukides. “If an AI application is approving loans, is that application fair; what data does it have access to; and what exactly are the inputs to the AI engine?”

Related:

  • Clive Humby – data can predict nearly everything about running a business. Clive Humby, inventor of the Tesco Clubcard, on ways to stop feeling so overwhelmed by data, how to convince your CEO of its importance, and why data should look forward and not backwards
  • How businesses can prepare for the Data Protection and Digital Information Bill. With the Bill currently being reviewed in Parliament, Netwrix vice-president of research and development Michael Paye explains how businesses can prepare
  • Forget digital transformation: data transformation is what you need. Stefano Maifreni, founder of Eggcelerate, discusses why organisations must focus on data transformation to maximise long-term value