Augmented intelligence: why the human element can’t be forgotten

It’s no secret that there has been tremendous hype surrounding artificial intelligence (AI) in recent years as it has evolved from a sci-fi pipe dream into reality. In the business sphere, AI as people currently know it is already widespread – perhaps the most common example being digital assistants.

Consequently, investors are eager to fund the next DeepMind, as seen with recent home-grown AI successes including SwiftKey, which sold to Microsoft last year for $250 million (£200 million), and Magic Pony, which Twitter bought for $150 million (£120 million).

>See also: Augmented intelligence: predicting the best customer moments

To a large degree, the hype is understandable. An Accenture report from last year found that the introduction and development of AI could boost labour productivity by 40% by 2035, which clearly demonstrates the potential benefits to the economy.

Furthermore, the implications of AI go beyond business, with the potential to help solve societal problems. This was demonstrated recently by police in Durham, who are preparing to go live with an AI system called the Harm Assessment Risk Tool (Hart), which is designed to help officers decide whether or not a suspect should be kept in custody. Tested by the force, the system classifies suspects as being at low, medium, or high risk of reoffending, having been trained on five years of offending-history data.
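Durham Constabulary has not published Hart’s internals, but the general idea of scoring a suspect’s history and banding the result can be sketched in a few lines. Everything below (the feature names, weights and thresholds) is invented purely for illustration, not taken from Hart:

```python
def risk_band(prior_offences: int, violent_offences: int,
              years_since_last: float) -> str:
    """Toy risk banding from hypothetical offending-history features.

    Illustrative only: Hart's real model, features and thresholds
    are not public, and this scoring rule is made up.
    """
    # Weight violent offences more heavily; a long offence-free
    # period reduces the score.
    score = 2 * violent_offences + prior_offences - 0.5 * years_since_last
    if score >= 6:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

print(risk_band(prior_offences=1, violent_offences=0, years_since_last=4.0))
print(risk_band(prior_offences=3, violent_offences=2, years_since_last=0.5))
```

Even a toy like this makes the article’s point concrete: the output is only as sound as the history data fed in, which is why an officer’s judgment stays in the loop.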

However, in reality, we are still a long way off true AI in terms of fully-sentient, self-sufficient programs or robots mingling with humans in everyday life, as seen in films such as Ex Machina and Her.

Today, it’s therefore paramount to pay more attention to so-called “augmented intelligence” instead — where we can already use the technology available to achieve positive results.

>See also: How artificial intelligence is driving the next industrial revolution

The aim of augmented intelligence is not to replace humans, but rather to capitalise on the combination of algorithms, machine learning, and data science to inform human decision-making.

In fact, the Durham Constabulary AI system is a prime example of how vital the human element is to complement AI. While Hart has its own capabilities, there’s scope for data scientists to mine the data on hand and draw new conclusions about suspects.

For example, Durham Constabulary’s data is limited in that it’s based solely on offending data from the local force and doesn’t include information from the Police National Computer. As such, if a suspect isn’t from the area, human intervention is required to look into their wider background.

On top of these limitations in capability, more testing needs to be done on the issue of algorithmic bias before AI systems are allowed to operate independently, to ensure that unfair profiling doesn’t occur.

While it may be strange to think of a machine having biases, this is a genuine problem, because AI software depends on data sets that have been created and input by human beings.

A recent study from Princeton University, for example, found that the GloVe algorithm, which is trained on 840 billion words from the internet, absorbed the stereotypes inherent in the language and replicated prejudices based on both race and gender.
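Studies of this kind typically measure bias by comparing how close word vectors sit to one another. The sketch below uses tiny made-up 3-dimensional vectors (real GloVe embeddings are hundreds of dimensions, learned from text) to show the mechanics of such a comparison via cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings", invented for illustration only.
vec = {
    "man":      [0.9, 0.1, 0.0],
    "woman":    [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.1],
    "nurse":    [0.2, 0.8, 0.1],
}

# In a biased embedding, a profession word sits closer to one
# gender word than the other.
print(cosine(vec["engineer"], vec["man"]) >
      cosine(vec["engineer"], vec["woman"]))  # True
```

No training happens here; the gap between the two similarity scores is the kind of signal the Princeton researchers quantified at scale across real embeddings.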

>See also: How artificial intelligence and augmented reality will change the way you work

Such biases become a major concern when applied to social problems such as prison sentences. For the Durham police force, many experts have expressed concern at the inclusion of data beyond offending history, such as postcode and gender, which may skew the AI’s decision making.

Acknowledging these concerns, the force has stressed that the forecasting model is advisory, subject to an audit trail, and doesn’t remove the role of discretion from the police officer using it — truly highlighting the continued importance of the human factor.

Unfortunately, it’s well documented that, across the globe, the tech workforce lacks diversity, which makes algorithmic bias all the more probable. As long as AI developers remain a relatively homogenous group, this is an issue people will have to watch closely.

As such, it’s vital to employ data scientists in tandem with machine learning, so they can mine the data and draw their own conclusions. Diversity is the key to combating bias, which is why having a diverse team of data scientists working alongside AI is so crucial.

>See also: IoT and free will: how artificial intelligence will trigger a new nanny state

Ultimately, we are on the verge of a digital revolution: the potent combination of AI, machine learning, and data science has the potential to disrupt countless businesses and industries – as evidenced by the forward-thinking Durham police force.

However impressive the technologies and platforms, all businesses must remember that there is still a pressing need for humans in the loop – it is neither practical nor safe to operate otherwise.

AI as popular culture portrays it will not happen for decades, and so now is the time to capitalise on the era of augmented intelligence. AI technologies can make our roles and processes much more efficient, and we should be using them to enhance the human element today.


Sourced by Dr Kim Nilsson, CEO at data science hub Pivigo


Nick Ismail

Nick Ismail is a former editor for Information Age (from 2018 to 2022) before moving on to become Global Head of Brand Journalism at HCLTech. He has a particular interest in smart technologies, AI and...