The robots are coming: machine learning now enhances itself

Machine learning seems to be the hot topic these days. Everybody has been talking about it since machines beat human players at chess, Jeopardy! and now even Go. Cars will soon be driven by artificial intelligence, and many jobs will be taken over by robots. There’s a lot of hype, and a lot of fear and uncertainty – as is so often the case when a new technology has the potential to disrupt our societies.

However, when you talk to the people who are actually involved in developing these new kinds of intelligent algorithms, you get quite a different picture. Today, there’s a lot of manual work involved in automating decision processes.

Developing algorithms that can make intelligent-seeming decisions is hard work. These are often called “weakly” intelligent algorithms, because so far researchers have only been able to build algorithms that can do one thing.


They might be able to do that one thing extraordinarily well, like playing Go or chess. However, if you ask the algorithm that plays Go to drive your car, it will fail. So scientists are still a long way from being able to develop the ‘highly’ intelligent machine.

Cumbersome trial-and-error approach

What can be done, though, is to apply algorithms to almost any kind of digital data to extract information automatically and make decisions in a seemingly intelligent way. Developing these algorithms – a discipline called machine learning – remains a cumbersome journey.

This is because the usual approach is trial and error: testing methods until the optimal algorithm for the problem at hand is found. Usually, a data scientist will choose algorithms based on practical experience and personal preferences. That is reasonable, because there is usually no single correct way to build a machine learning model.

Many algorithms have been developed to automate the manual and tedious steps of the machine learning pipeline – for example, to loosen the prerequisites under which machine learning theories and approaches apply, to create input features automatically and select the best predictors, and to test different modelling algorithms and choose the best model. Still, a lot of lab work is required to build a machine learning model with trustworthy results.


A big chunk of this manual work involves finding the optimal set of hyperparameters for the chosen modelling algorithm. Hyperparameters are the settings that configure the algorithm itself – they are fixed before training begins, rather than learned from the data.
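To make the distinction concrete, here is a minimal sketch. The parameter names follow common gradient boosting conventions and are illustrative assumptions, not tied to any specific product:

```python
# Hypothetical hyperparameters for a gradient boosting model.
# These are chosen *before* training; the tree splits and leaf
# weights are the parameters learned *during* training.
hyperparameters = {
    "n_trees": 100,        # how many trees the ensemble contains
    "max_depth": 6,        # how deep each individual tree may grow
    "learning_rate": 0.1,  # how strongly each boosting step corrects errors
}
print(sorted(hyperparameters))  # ['learning_rate', 'max_depth', 'n_trees']
```

Each of these knobs changes how the training process behaves, which is why they must be tuned rather than fitted.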

For example, if someone decides to build a machine learning model to predict which customers should be granted credit, they need to make many decisions during the training process: which modelling approaches to test, which data to use for training the model and which for testing the results, how to tune the parameters of the chosen model, and how to validate the results.

All these choices will affect the outcome of the model-building exercise and, eventually, the final model selected. Given that this model will be used to decide which customers get credit, it is important to have high confidence in it, so that it makes decisions people can trust.


A large portion of the model-building process – beside the analytical data preparation, which still takes the lion’s share of the time – is taken up by experiments to identify the optimal set of parameters for the model algorithm. Here, the curse of dimensionality kicks in quite quickly.

Modern machine learning algorithms have a large number of parameters that need to be tuned during the model training process. There’s also a trend to develop more and more complex algorithms that can automatically drill deeper into the data to find more subtle patterns.

For example, we’re seeing a development from shallow neural networks to deep neural networks, and from simple decision trees to random forests and gradient boosting algorithms. While these algorithms improve the chances of building accurate, stable predictive models for more complex business problems (such as fraud detection, image processing, speech recognition and cognitive computing), they also require a much larger number of parameters to be tuned during training. There is no free lunch.


So, if you have 10 parameters that need to be tuned to an optimal setting, and each parameter can take 10 different values (these are very conservative numbers), you end up with 10^10 – ten billion – possible combinations to test. And this applies to just a single modelling approach; if you want to test different algorithms, the number grows very quickly.
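The arithmetic behind that explosion is easy to check:

```python
# An exhaustive sweep must evaluate every combination of values,
# so the grid size is values_per_param raised to the number of parameters.
n_params = 10
values_per_param = 10
combinations = values_per_param ** n_params
print(combinations)  # 10000000000 – ten billion model trainings
```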

Speedy autotuning approach

So, what can people do? There are several ways to support the data scientist in this cumbersome lab work of tuning machine learning model parameters. These approaches are called hyperparameter optimisation.

In general, there are three different types: parameter sweep, random search and parameter optimisation.

Parameter sweep

This is an exhaustive search through a pre-defined set of parameter values. The data scientist selects candidate values for each parameter to tune, trains a model with each possible combination and selects the best-performing model. The final outcome therefore depends heavily on the data scientist’s experience and choices.
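A minimal sketch of a parameter sweep, assuming a hypothetical `evaluate` function that stands in for training a model and scoring it on held-out data (the toy objective and parameter grid below are illustrative, not from any real system):

```python
import itertools

def evaluate(params):
    # Toy objective: pretend validation accuracy peaks at
    # max_depth = 6 and learning_rate = 0.1.
    return 1.0 - abs(params["max_depth"] - 6) * 0.05 - abs(params["learning_rate"] - 0.1)

# The pre-defined candidate values chosen by the data scientist.
grid = {
    "max_depth": [2, 4, 6, 8],
    "learning_rate": [0.01, 0.1, 0.5],
}

# Train and score one model per combination; keep the best.
best_score, best_params = float("-inf"), None
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    score = evaluate(params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)  # {'max_depth': 6, 'learning_rate': 0.1}
```

Note that the sweep can only ever find the best of the values the data scientist thought to include – which is exactly the bias described above.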


Random search

This is a search through randomly selected sets of values for the model parameters. With modern computers, it can provide a less biased approach to finding an optimal set of parameters for the selected model. Because the search is random, however, it may miss the optimal set unless a sufficient number of experiments is conducted – which can be expensive.
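Under the same toy assumptions (a stand-in `evaluate` function and hypothetical parameter ranges), random search draws configurations at random within a fixed budget instead of enumerating a grid:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def evaluate(params):
    # Same toy objective: accuracy peaks at max_depth = 6, learning_rate = 0.1.
    return 1.0 - abs(params["max_depth"] - 6) * 0.05 - abs(params["learning_rate"] - 0.1)

# Draw 50 random configurations from continuous/integer ranges
# rather than from a hand-picked grid.
best_score, best_params = float("-inf"), None
for _ in range(50):
    params = {
        "max_depth": random.randint(1, 12),
        "learning_rate": random.uniform(0.001, 1.0),
    }
    score = evaluate(params)
    if score > best_score:
        best_score, best_params = score, params
```

The budget (50 trials here) is the trade-off: too few trials and the optimum is likely missed, while many trials cost as much as the sweep they were meant to replace.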

Parameter optimisation

Again, there are different approaches here, but they all apply modern optimisation techniques to find the optimal solution. This is the best way to find the most appropriate set of parameters for any predictive model, and any business problem, at the least expense – the “optimal solution”, so to speak.
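The general idea of optimisation-based tuning can be sketched as a simple stochastic local search over the same toy objective. This is an illustrative assumption of one such technique, not any vendor’s actual algorithm: instead of sampling blindly, it moves from the current configuration towards neighbours that score better.

```python
import random

random.seed(1)

def evaluate(params):
    # Same toy objective: accuracy peaks at max_depth = 6, learning_rate = 0.1.
    return 1.0 - abs(params["max_depth"] - 6) * 0.05 - abs(params["learning_rate"] - 0.1)

# Start from an arbitrary configuration and repeatedly propose small
# perturbations, keeping any candidate that improves the score.
current = {"max_depth": 3, "learning_rate": 0.5}
current_score = evaluate(current)
for _ in range(200):
    candidate = {
        "max_depth": max(1, current["max_depth"] + random.choice([-1, 0, 1])),
        "learning_rate": max(0.001, current["learning_rate"] + random.uniform(-0.05, 0.05)),
    }
    score = evaluate(candidate)
    if score > current_score:
        current, current_score = candidate, score
```

Because each step exploits information from previous evaluations, local search typically needs far fewer model trainings than an exhaustive sweep to reach a comparable result.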


SAS has conducted a lot of research into hyperparameter tuning – which it calls autotuning. It is now possible to quickly and easily find optimal parameter settings for diverse machine learning algorithms such as decision trees, random forests, gradient boosting, neural networks, support vector machines and factorisation machines, simply by selecting the option you want.

In the background, complex local-search optimisation routines are hard at work tuning the models efficiently and effectively. This capability will be a great help to the modern data scientist, who will find the best model far more quickly and with more confidence. For the business, it means getting to value with machine learning faster.


Sourced by Sascha Schubert, business solutions manager at SAS


