What is data poisoning, and what is the antidote?

An increasing number of organisations are turning to machine learning models to aid the development of their AI technologies. But another trend could pose a threat to the trustworthiness of those systems: data poisoning.

The key to a successful antidote lies in more than simply fixing the problem after it has occurred. To guard their data against poisoning, businesses must fully understand the severity of the threat, what it takes to poison data, and how to protect against it throughout the whole process of creating AI systems.

Back to basics with machine learning

Before we discuss data poisoning, it’s worth revisiting how machine learning models work. We train these models to make predictions by ‘feeding’ them historical data. From this data, we already know the outcome we would like to predict in the future and the characteristics that drive that outcome. The data ‘teach’ the model to learn from the past, and the model then uses what it has learned to predict the future. As a rule of thumb, the more data available to train the model, the more accurate and stable its predictions will be.
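To make this concrete, the minimal sketch below trains a simple classifier on invented ‘historical’ data using scikit-learn; the features, labels and model choice are purely illustrative, not a description of any particular system.

```python
# Minimal sketch: training a model on historical data (all data invented for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Historical data: characteristics (features) and the outcome we already know.
X = rng.normal(size=(1000, 3))  # e.g. hypothetical customer attributes
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)        # the model 'learns from the past'
print("Held-out accuracy:", model.score(X_test, y_test))  # and predicts unseen cases
```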

AI systems that include machine learning models are normally developed by experienced data scientists. They thoroughly examine and explore the data, remove outliers and run several sanity and validation checks before, during and after the model development process. This means that, as far as possible, the data used for training genuinely reflect the outcomes that the developers want to achieve.

Data poisoners attack automation

However, what happens when this training process is automated? This rarely happens during development, but there are many occasions when we want models to continuously learn from new operational data: ‘on the job’ learning. At that stage, it would not be difficult for someone to craft ‘misleading’ data that feeds directly into the AI system and causes it to produce faulty predictions.

Consider, for example, Amazon’s or Netflix’s recommendation engines. Think how easy it is to change the recommendations you receive simply by buying something for someone else. Now consider that it is possible to set up bot-based accounts that rate programmes or products millions of times. This will clearly change ratings and recommendations, ‘poisoning’ the recommendation engine. This is what we mean by data poisoning.
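The toy sketch below (with entirely invented numbers, and a deliberately naive popularity-based recommender) shows the mechanism: a flood of fake five-star ratings from bot accounts is enough to push an item to the top of the ranking.

```python
# Toy illustration of rating poisoning; all items and numbers are invented.
genuine_ratings = {"Item A": [5, 4, 5, 4, 5], "Item B": [3, 3, 4, 2, 3]}

def rank_by_average(ratings):
    # A naive recommender: rank items purely by their average rating.
    return sorted(ratings, key=lambda item: sum(ratings[item]) / len(ratings[item]), reverse=True)

print("Before attack:", rank_by_average(genuine_ratings))  # Item A ranked first

# Bot accounts submit 10,000 five-star ratings for Item B.
poisoned = {k: list(v) for k, v in genuine_ratings.items()}
poisoned["Item B"] += [5] * 10_000

print("After attack:", rank_by_average(poisoned))           # Item B now ranked first
```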

Poisoning is particularly easy if attackers suspect they are dealing with a self-learning system, such as a recommendation engine. All they need to do is make the attack subtle enough to pass the automated data checks, which is usually not very hard.

The other issue with data poisoning is that it can be a long, slow process. Hackers can afford to take their time, altering the data by feeding in a few poisoned records at a time. Indeed, this is often more effective: a slow trickle is harder to detect than a massive influx of data at a single point in time, and significantly harder to undo.
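As a rough illustration (with made-up volumes and thresholds), a naive check that only flags sudden spikes in incoming records will catch a bulk injection but miss the same number of poisoned records drip-fed over a month.

```python
# Sketch: a naive spike detector catches a bulk injection but misses a slow drip-feed.
# All volumes and thresholds are invented for illustration.
NORMAL_DAILY_RECORDS = 1_000
SPIKE_THRESHOLD = 1.5  # flag any day with >50% more records than usual

def flagged_days(daily_counts):
    return [day for day, count in enumerate(daily_counts)
            if count > NORMAL_DAILY_RECORDS * SPIKE_THRESHOLD]

# Attack 1: dump 10,000 poisoned records on a single day.
bulk_attack = [NORMAL_DAILY_RECORDS] * 30
bulk_attack[10] += 10_000

# Attack 2: drip-feed the same 10,000 records, roughly 333 per day for a month.
slow_attack = [NORMAL_DAILY_RECORDS + 10_000 // 30] * 30

print("Bulk attack flagged on days:", flagged_days(bulk_attack))  # detected
print("Slow attack flagged on days:", flagged_days(slow_attack))  # nothing flagged
```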

How to prevent data poisoning in four steps

Fortunately, there are steps that organisations can take to prevent data poisoning. These include:

  1. Establish an end-to-end ModelOps process, and monitor all aspects of model performance and data drift using advanced model management tools (a minimal drift check is sketched after this list).
  2. For automatic re-training of models, establish a business flow using workflow management tools. This means that your model will have to go through a series of checks and validations by different people in the business before the updated version goes live.
  3. Hire experienced data scientists and analysts. There is a growing tendency to assume that everything technical can be handled by software engineers, especially with the shortage of qualified and experienced data scientists. However, this is not the case. We need experts who really understand AI systems and machine learning algorithms, and who know what to look for when we are dealing with threats like data poisoning.
  4. Use ‘open’ with caution. Open source data are very appealing because they provide access to more data to enrich existing sources. In principle, this should make it easier to develop more accurate models. However, these data are just that: open. This makes them an easy target for fraudsters and hackers. The recent attack on PyPI, which flooded it with spam packages, shows just how simple this can be.
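For step 1, one common building block of drift monitoring is a statistical comparison between the data the model was trained on and the data it now receives in operation. The sketch below uses a two-sample Kolmogorov–Smirnov test from SciPy on invented data for a single feature; real ModelOps tooling would run such checks per feature, on a schedule, with alerting and human review.

```python
# Sketch for step 1: detect data drift by comparing the training distribution
# of one feature with recent operational data. Data and threshold are invented.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data the model was trained on
incoming_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)  # recent operational data, slightly shifted

statistic, p_value = ks_2samp(training_feature, incoming_feature)
if p_value < 0.01:
    print(f"Possible drift detected (KS statistic={statistic:.3f}); trigger review before retraining.")
else:
    print("No significant drift detected.")
```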

The real antidote? Human supervision

Criminals who want to compromise the integrity of machine learning outcomes do exist, and their methods of corrupting data can be extremely subtle. Businesses that value the integrity of their machine learning models must pay careful attention to these four points.

However, one of the most impactful ways of preventing these attacks is to ensure that humans oversee the whole machine learning process. Only then can intelligent machines and perceptive humans work together to prevent biased outcomes, ultimately foiling these ultra-modern attempts at data manipulation.

Written by Spiros Potamitis, senior data scientist, global technology practice at SAS
