How much decision making should we leave to machines?

'Humankind is licking its wounds after its latest defeat at the ‘hands’ of artificial intelligence,' warned an article on how the AlphaGo algorithm beat Lee Sedol, an 18-time international titlist in Go. After the match, Lee said: 'When I saw AlphaGo’s moves, I wondered whether the Go moves I have known were the right ones.'

Who defeated whom is irrelevant; the key is that the computer approached the game differently than the human did. What if, instead of competing, the two tried to solve the same problems as a team? Might their different approaches complement each other rather than clash?

The future of human-machine relations is in fact collaboration, not conflict. In the pursuit of accurate decisions, businesses, governments, and individuals will benefit from the aid of AI.

Here’s why: biology limits our investigative abilities. We can only ask so many questions and invest so much effort before feeling bored, burned out, or stuck. Particularly in business, we have little time for uncertainty.


We rush to conclusions that are defensible, but not necessarily correct. Even though we know people have cognitive biases, we still believe any pattern we discover ourselves must be correct.

Machines, tireless and unbiased, can question data relentlessly until they have uncovered every unexpected pattern. Unlike us, though, they cannot tap personal experience, emotional intelligence, or ethics to understand these relationships – or act on them.

Therefore, if we care about making sound decisions, we need to pair human beings and machines so that the strengths of each offset the weaknesses of the other.

What does that look like?

Let me share an example from the healthcare industry. Hospitals examine readmission rates because they signal failures in care and prevention. Researchers usually focus on the elderly, who, for obvious reasons, have higher readmission rates.

In a study with McKinsey, though, my company, BeyondCore, examined data on more than 30 million patients using AI. We found that 18- to 35-year-old women with diabetic ketoacidosis (DKA) – a dangerous build-up of acids in the blood that can lead to diabetic comas – have a 49% re-hospitalisation rate.

Machines spot problems, but only human beings can determine why a problem exists and how to prevent it. In the case of DKA, women returned to the hospital because they were not taking their insulin regularly. When you don’t take insulin, your body doesn’t process sugar, and therefore you don’t gain weight. Young women were using this as a weight loss tactic.

According to McKinsey, their team could have investigated 250 hypotheses in four months. They would have found at least a few interesting correlations but missed the one about DKA because it was not an obvious pattern.

In this case, the unbiased AI asked a million questions we would have never thought to ask. People then decided which results were relevant and applied the newfound knowledge.
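
To make the idea of 'asking a million questions' concrete, here is a minimal sketch of exhaustive pattern discovery. It is not BeyondCore's actual algorithm; the file name, column names, and thresholds are all hypothetical. It simply computes the readmission rate for every combination of patient segment and diagnosis in an imagined admissions table and flags segments that sit far above the baseline – the kind of brute-force scan no human team could complete by hand.

```python
# Toy illustration of exhaustive pattern discovery (not BeyondCore's
# actual method). Assumes a hypothetical CSV of hospital stays with
# columns: age_band, sex, diagnosis, readmitted (0 or 1).
import itertools

import pandas as pd

df = pd.read_csv("admissions.csv")          # hypothetical dataset
baseline = df["readmitted"].mean()          # overall readmission rate

findings = []
dims = ["age_band", "sex", "diagnosis"]
# One "question" per combination of segment values, e.g.
# (age_band=18-35, sex=F, diagnosis=DKA) -> what is the readmission rate?
for r in range(1, len(dims) + 1):
    for combo in itertools.combinations(dims, r):
        rates = df.groupby(list(combo))["readmitted"].agg(["mean", "count"])
        # Keep segments large enough to trust and far above the baseline.
        hits = rates[(rates["count"] >= 500) & (rates["mean"] >= 2 * baseline)]
        for segment, row in hits.iterrows():
            findings.append((combo, segment, row["mean"], row["count"]))

# The machine surfaces the anomalies; humans decide which ones matter.
for combo, segment, rate, n in sorted(findings, key=lambda f: -f[2]):
    keys = segment if isinstance(segment, tuple) else (segment,)
    label = ", ".join(f"{d}={v}" for d, v in zip(combo, keys))
    print(f"{label}: {rate:.0%} readmitted (n={int(n)})")
```

A human analyst testing 250 hypotheses in four months works through roughly two a day; a loop like this evaluates every segment combination in seconds, which is why an unobvious pattern like the DKA one can surface at all.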

Since we are in college acceptance season, let’s look at an example that sheds light on this yearly ritual. A few months back, Gartner challenged analytics and BI vendors to analyse how much money students should expect to make based on their college, major, and similar data.

Many people assume that students ought to attend one of the highest-ranked colleges they are admitted to, regardless of cost. Most parents assume that majors like business will lead to more income than ‘useless’ ones like dance (which, by the way, I studied alongside computer science and economics).

When AI interrogated the dataset, it found that the biggest predictor of a student’s future income is the parents’ income. The choice of college and major is far less significant by comparison. The human experts had never thought to ask this question and had thus failed to see this crucial pattern.
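
Again, a rough sketch of what interrogating such a dataset might look like. The file and column names below are hypothetical stand-ins for the Gartner challenge data; the point is that the model ranks every candidate predictor on equal footing, rather than testing only the factors experts assumed would matter.

```python
# Hypothetical sketch: rank predictors of graduate income instead of
# testing only the "obvious" ones. Column names are assumptions, not
# the actual Gartner challenge schema.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("graduates.csv")           # hypothetical dataset
X = pd.get_dummies(df[["parent_income", "college_rank", "major", "tuition"]])
y = df["income_10yr"]                       # income ten years after graduation

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Importance scores treat every input the same way, with no prior
# belief about which factors "should" drive income.
importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head(10))
```

If the data matches the study’s finding, parent_income would top this ranking whether or not anyone thought to ask about it; the experts’ blind spot never enters the model.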


But should a computer decide where our children go to college, based on parental income and the cost of various programs? Absolutely not. We should share the truth and let people make their own choices.

Why, you might wonder, should machines not make decisions?

When we ask AI to decide for us, we are essentially saying, 'We can’t understand AI, so we are willing to accept magic answers.' Magic answers can be dangerous. For evidence, look at the 2010 Flash Crash, when autonomous, high-frequency trading algorithms triggered a trillion-dollar stock market crash.

In this case, machines both asked the questions and made the decisions. According to a report from Bloomberg, one trader’s spoofing algorithm rapidly placed and cancelled orders, destabilised the derivatives market, and, in turn, toppled the stock market.

At their core, AI and machine learning are algorithms too. Would you want algorithms to control fighter planes, missile silos, and guns? Would you trust algorithms to control your savings accounts or operate on you in the emergency room? I would not. But I would trust militaries, wealth managers, and doctors more if they informed their decisions with AI.

We don’t want robots telling us what to think or do – we want them to reveal hidden insights that demand reflection and action. There is no zero-sum game between man and machine. There is no evil, bias, or shortsightedness in machines unless we program it into them.

Thus, we should program machines to explain their discoveries in a way humans can understand. If we focus on bridging that comprehension gap, we can enable symbiosis between humans and machines. Let computers compute, and let people comprehend.

Sourced from Arijit Sengupta, CEO, BeyondCore

