Treading carefully in developing and applying AI

In her recently published 2018 trends report, Mary Meeker discussed how the combination of accelerated data gathering, driven by widespread computer adoption, and the declining cost of cloud computing has enabled Artificial Intelligence (AI) to emerge as a service platform. There are many definitions of AI, from machines that work and think like humans, through to “any device that perceives its environment and takes actions that maximise its chance of successfully achieving its goal”. Whilst the specifics of the definition are still debated, what is now unequivocal is the power AI has to shape our lives, with intelligent assistants such as Siri and Alexa understanding our spoken commands, predictive maintenance algorithms that can optimise equipment repair, and cars that can drive themselves. However, even as AI creeps deeper and deeper into our everyday lives, serious questions remain about artificial responsibility.

In an interview with Newsweek, Sundar Pichai, CEO of Google, said: “AI is one of the most important things that humanity is working on. It’s more profound than, I don’t know, electricity or fire. [While fire is good] it kills people, too. We learned to harness fire for the benefits of humanity, but we will have to overcome its downsides, too. My point is AI is really important, but we have to be concerned about it.”

In developing AI solutions, there are several areas we need to consider carefully:

What data are we training our models on?

AI uses existing data to make predictions, such as taking logs of previous service centre calls to automatically predict the correct response to give to customer questions. Critically, this training data, and what constitutes a ‘correct’ result, is provided by us. Humans. With all our inherent flaws and biases.
These biases can skew the results of an AI algorithm, so that it predicts not the ‘truth’ of a situation but the outcome a biased human would have chosen.
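To make that concrete, here is a minimal, purely illustrative sketch in Python: the call transcripts, response labels and scikit-learn pipeline are all hypothetical stand-ins, not anything from a real system. The point is that the model can only learn to reproduce the answers humans gave in the past.

```python
# Minimal sketch (hypothetical data): a model only learns the answers
# humans gave in the past -- including any bias baked into those answers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical service-centre logs: each call transcript paired with the
# response a human agent chose. The labels ARE the human judgement.
calls = [
    "my router keeps dropping the connection",
    "I was charged twice this month",
    "the engineer never turned up",
]
responses = ["reset_router", "issue_refund", "rebook_visit"]  # human-chosen

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(calls, responses)

# The model now predicts what a past agent *would have done*, not what the
# objectively 'correct' response is -- if past agents were biased, so is this.
print(model.predict(["my bill looks wrong"]))
```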


In the US, the use of computerised risk assessment tools in the criminal justice process is widespread, with some states even requiring it. Tools such as COMPAS are given historical recidivism data, from which they determine what factors make a defendant a higher risk; the resulting risk scores are then used to inform the sentencing of future defendants. As you might expect, the historical data reflects the racial bias of previous generations, with a ProPublica study finding that COMPAS predicts black defendants will have higher risks of recidivism than they actually do. (The producer of COMPAS, Northpointe Inc., disputes this analysis.)

We need to be vigilant in training AI to ensure our input data doesn’t contain conscious or unconscious biases, or else we risk a feedback loop of prejudices, with the skewed output of one model feeding another, each iteration predicting a further distortion of reality.

How is our algorithm working?

Machine learning, a sub-field of AI, encompasses a diverse set of models for making predictions, such as a person’s credit score, or for classifying an object into a category, for example whether or not an image contains a human face. These models range greatly in their complexity and predictive power: broadly speaking, the more complex the model, the better its predictive ability. However, this predictive power comes at the cost of interpretability. The most complex of these, deep learning neural networks, are effectively black boxes, taking in inputs and spitting out predictions and classifications with no way for humans to understand how the model reached its conclusion. No way, that is, until recently.
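As a rough illustration of that trade-off, the toy sketch below (scikit-learn on synthetic data, purely hypothetical) contrasts a linear model, whose per-feature coefficients can be read directly, with a small neural network that offers no comparable window into its reasoning.

```python
# Illustrative sketch (toy data): a simple model we can read, and a more
# complex one we largely cannot.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A linear model: one coefficient per feature, so we can see which inputs
# push a prediction up or down.
simple = LogisticRegression().fit(X, y)
print(simple.coef_)

# A small neural network: often a better fit on complex data, but its many
# interacting weights offer no comparable explanation of a single prediction.
complex_model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                              random_state=0).fit(X, y)
print(complex_model.score(X, y))  # we get prediction quality, not a 'why'
```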


In their paper ‘“Why Should I Trust You?”: Explaining the Predictions of Any Classifier’, Ribeiro et al. propose LIME (Local Interpretable Model-agnostic Explanations), a technique to explain the predictions of any classifier, including advanced machine learning models, in an interpretable and faithful manner. We’ll skip over the details and jump straight to their remarkable, and troubling, findings.

They trained a classifier to distinguish between photos of wolves and Eskimo dogs (huskies), a task machine learning algorithms struggled with until recently. The classifier performed well, but the LIME results showed something unexpected was happening inside it. The pixels that had the most impact on the classification results were not those of the animals, but those of the background: the photos of wolves all had snow in the background; the husky photos did not.

Can we be confident our AI is working in a way we are comfortable with? We must take extreme care that our algorithms are using only relevant features, applying techniques such as LIME to open up more of the black box.
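For readers who want to experiment, a rough sketch of what that might look like in practice is shown below, using the authors’ open-source lime package for Python; the breast-cancer dataset and random-forest model are stand-ins chosen purely for illustration.

```python
# A sketch of probing a trained classifier with LIME (pip install lime);
# the dataset and model here are stand-ins, not the paper's own experiment.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features pushed the model towards its answer?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
# If the most influential 'features' turn out to be irrelevant ones -- the
# snow behind the wolves -- we know the model is right for the wrong reasons.
```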

Is it right to put a machine in charge of this?

As the scope for AI continues to grow, so does the call for legislation and oversight, with the question shifting from what’s possible, to what’s ethical.

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” – Ian Malcolm, Jurassic Park

Regular news from tech giants continues to bring this debate into focus, as Tesla sees yet another crash involving its Autopilot, and Google partners with the US military.

Tesla have stated that “when using Autopilot, drivers are continuously reminded of their responsibility to keep their hands on the wheel and maintain control of the vehicle at all times,” placing the burden of responsibility squarely on the driver, and raising the question of what the feature is for if the driver has to remain fully alert at all times.


Google has promised ethical principles to guide its development of military AI, saying the guidelines will include a ban on the development of AI weaponry. Despite these assurances, dozens of Google employees have resigned since the contract was made public, and thousands have signed a petition demanding the company withdraw from all such work.

Where do we place the line between human and machine? What responsibility can we in good conscience place in the hands of algorithms?

As with every powerful new technology, AI brings with it a tremendous opportunity for businesses and individuals alike; but it also brings risks, from well-intentioned but poorly chosen training data to algorithms powered by correlation instead of causation, to our ultimate peril: an abdication of our ethical duty.

As practitioners in digital and marketing, we all have a tremendous opportunity to reap the benefits of AI, and an even greater obligation to ensure its ethical use.

By Paul McCormick, senior consultant – data & personalisation, TH_NK
