Training machine learning models to be future-ready

With machine learning being used to automate operations across multiple sectors, use cases are likely to evolve to suit future needs. The Covid-19 pandemic clearly evidenced the need to fast-track digitisation to meet continuously changing customer behaviours, and this shows no signs of slowing down any time soon. When it comes to machine learning, it’s vital that models remain relevant to the organisation’s goals, and this means training them to be future-ready.

Overall, the long-term success of machine learning models depends on how they are embedded and put into operation. This can vary from platform-as-a-service (PaaS) and software-as-a-service (SaaS) approaches, which are useful for easier maintenance, to partnerships with data and model providers, for acceleration and enhancement of solutions. Machine learning alone is rarely enough.


In this article, we delve into the ways in which machine learning models can keep pace with evolving customer and employee requirements.

Removing bias

One of the most important considerations to be made when it comes to training machine learning models is removing possible bias, a pitfall that can damage the long-term prospects of any AI project, as well as a brand’s reputation. Biases can lead to inaccurate detections of anomalies, among other flaws.

A big step towards succeeding in this endeavour is ensuring that models are overseen by a diverse team of engineers; when the same employees review a model each time, personal biases can go unnoticed. This needs to be considered early on in the development stages.

“We’ve seen plenty of examples of companies that didn’t account for unintended biases within their machine learning models,” explained Jonathan Zaleski, head of Applause Labs.

“Removing bias from an AI is easier said than done, because humans train machine models and all humans have inherent biases, whether they are aware of them or not. The key is to secure a very large and diverse set of data and people to train machine models. The more data, demographics, geographies that you can include, the better.”

Once a model has been trained, it’s a good idea to have a separate, equally diverse group of engineers test the outcomes of the model to ensure accuracy.
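That separate review can start with something as simple as comparing model accuracy across demographic groups, since a large gap between groups is one sign of the unintended bias Zaleski describes. A minimal sketch in Python, using entirely hypothetical group labels and predictions:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute prediction accuracy per demographic group.

    records: iterable of (group, y_true, y_pred) tuples.
    Returns {group: accuracy} so reviewers can spot groups
    where the model underperforms.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        if y_true == y_pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions from a trained model
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]
scores = accuracy_by_group(records)
```

Here the model scores 0.75 on one group but only 0.25 on the other, exactly the kind of disparity a diverse testing team would flag for further investigation.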


Consider the real world

Another way to ensure that machine learning models remain accurate in the long term is through the use of data that reflects the real world and real people, while keeping the possibility of unexpected twists in mind. Covid-19 brought a whole host of such occurrences to organisations of all sizes and sectors, and data science teams need to be prepared for further sudden developments to ensure they aren’t left behind.

Chris Stephenson, technical director at Sagacity, said: “As we’ve seen over the last year, events can take a sudden unexpected twist in the real world.

“As machine learning models ‘learn’ solely based on the evidence we present to them in the data, changes in behavioural patterns will therefore lead to inaccurate predictions being made unless the model is retrained with new data. For example, a model that predicted footfall in a shopping centre pre-Covid would need to be retrained to take into account the new reality.

“Since there’s no such thing as a future-ready model, it’s the organisation’s responsibility to give it the information it needs and to continually evaluate and assess the model’s effectiveness.

“This could come as part of an MLOps-style approach; borrowing from the spirit of the DevOps and Agile Development principles, MLOps is a continuous improvement process, which sees cross-functional teams collaborating, continually improving and, ultimately, driving better quality outcomes from the application of machine learning.”
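The continuous evaluation Stephenson describes can be reduced to a simple drift check: compare the model’s recent average error against the error measured at deployment, and flag it for retraining once the gap exceeds a tolerance. A minimal sketch, with a hypothetical threshold and monitoring figures:

```python
def needs_retraining(baseline_error, recent_errors, tolerance=0.05):
    """Flag a model for retraining when its recent average error
    drifts more than `tolerance` above the error measured at deployment.
    """
    recent_avg = sum(recent_errors) / len(recent_errors)
    return recent_avg > baseline_error + tolerance

# Hypothetical monitoring data: error creeps up after a behaviour shift,
# much like a pre-Covid footfall model facing the new reality
baseline = 0.10
recent = [0.12, 0.18, 0.21, 0.19]
should_retrain = needs_retraining(baseline, recent)
```

In an MLOps pipeline this check would run on a schedule, with the retraining step itself automated so the model keeps pace with new behavioural patterns.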


Distinguish between learning and alerts

One burgeoning area for machine learning development is the smart city space. Rishi Lodhia, managing director EMEA at Eagle Eye Networks, believes that cameras will play a prominent role here, and with this in mind, engineers should distinguish between signs of learning and alerts sent by the model.

Lodhia explained: “Think, for example, how the cameras on a car are making autonomous driving possible. Essentially, a camera is becoming a sensor that can recognise abnormalities and create a proper alert that can be acted upon.

“The question is: how does the camera learn when an abnormality occurs? To answer this question it is important to distinguish the learning component from the alert component.

“The learning component in a perfect world happens in the cloud for simplicity and scale. Cameras stream all the surveillance video to the cloud, and the technology applies the training of the AI models in the cloud.

“A greater picture can be built up, which can be compared to a vast amount of existing video footage, and the AI system can use previous experience to determine what actions have been taken before in a similar situation, even if the one presented to it now is completely new.”


Improvement with learning

Finally, and no less importantly, data science teams need to know that to be future-ready, machine learning models should improve with learning.

“Future proofing machine learning models is essentially a false premise because, like science, machine learning should continue to improve as we learn,” said Franki Hackett, head of audit and ethics at Engine B.

“A machine learning model which you put in place and abandon is never going to be future-ready; it becomes future-ready from building it into a reflexive process, so that as you get more data or learn more about your data you improve your model.”

The right processes need to be in place to ensure that learning is ongoing, which means regular reviews and evaluations of decision-making, outcomes and outputs.

“It’s important to remember that your work with training machine learning models is never done. It’s not something you can set and forget,” added Zaleski.

“Even if you have an AI that works today, that may not be the case further down the line. You must continue to invest more resources and data into training the ML models, so they continually improve.”
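In practice, the “never set and forget” principle amounts to continuous evaluation: log each prediction alongside the eventual outcome, and review accuracy over a rolling window as part of the regular reviews described above. A minimal sketch (the class name and window size are illustrative):

```python
class ContinuousEvaluator:
    """Accumulate (prediction, outcome) pairs and report accuracy
    over a rolling window, supporting ongoing model reviews."""

    def __init__(self, window=100):
        self.window = window
        self.results = []  # 1 if prediction matched outcome, else 0

    def record(self, prediction, outcome):
        self.results.append(1 if prediction == outcome else 0)
        self.results = self.results[-self.window:]  # keep only the window

    def accuracy(self):
        if not self.results:
            return None
        return sum(self.results) / len(self.results)
```

A falling rolling accuracy is the cue to feed the model more data, or more resources, before its predictions fall too far behind reality.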


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.