Data science cowboys are exacerbating the AI and analytics challenge

In this article, Dr Scott Zoldi, chief analytics officer at analytics software firm FICO, explains to Information Age why data science cowboys and citizen data scientists could cause catastrophic failures in a business's AI and analytics ambitions.

Data science cowboys

Although the future will see fast-paced adoption of AI and benefits across all types of businesses, we will also see catastrophic failures due to the over-extension of analytic tools and the rise of citizen data scientists and data science cowboys. The former have no data science training but use analytic tooling and methods to bring analytics into their businesses; the latter have data science training but disregard the right way to handle AI.

Citizen data scientists often use algorithms and technology they don't understand, which can lead to inappropriate use of their AI tools. The risk from data science cowboys is that they build AI models that incorporate non-causal relationships learned from limited data, spurious correlations and outright bias, which could have serious consequences in, for example, driverless car systems. Today's AI threat stems from the efforts of both citizen data scientists and data science cowboys to tame complex machine learning algorithms for business outcomes.
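To see how easily limited data can manufacture such relationships, consider the minimal sketch below (an illustration only, not from FICO): with just 20 samples, two variables that are independent by construction will regularly show a sizeable sample correlation by chance alone.

```python
# Sketch: how readily "relationships" appear in limited data.
# Two genuinely independent noise variables are sampled 20 at a time;
# we count how often their sample correlation looks meaningful.
import numpy as np

rng = np.random.default_rng(0)
trials = 1_000
n_samples = 20

spurious = 0
for _ in range(trials):
    x = rng.normal(size=n_samples)
    y = rng.normal(size=n_samples)  # independent of x by construction
    if abs(np.corrcoef(x, y)[0, 1]) > 0.4:
        spurious += 1

print(f"|r| > 0.4 in {spurious / trials:.1%} of trials, despite zero true relationship")
```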

Continuing technological advances in computational capacity are enabling larger and more opaque machine learning models to be built. These opaque models are informed by billions — even trillions — of tiny computations that allow them to explore the best ways of combining inputs into complicated and intractable formulas. How these predictive models arrive at their conclusions is often not completely understood by their makers, let alone their users.

Without transparency into what drives model outcomes and specific bias testing, these models are ultimately tested in the wild on human subjects, often with negative outcomes that are not identified until well after the damage has been done.
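One basic form of such bias testing, sketched below with made-up data (the article does not prescribe a specific test), is a demographic parity check: compare the model's favourable-outcome rate across a protected attribute before the model ever reaches production.

```python
# Sketch of a simple pre-deployment bias test (demographic parity),
# using a hypothetical scored dataset with a protected attribute.
import pandas as pd

scores = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per protected group; a large gap flags potential bias
# worth investigating before deployment, not after the damage is done.
rates = scores.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity gap: {gap:.2f}")
```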

These results have caused organisations specialising in machine learning to move away from increasingly complex models and embrace a philosophy of ‘Explainable First, Predictive Second’.

Explainability and ethics are often not at odds with high levels of predictive power and precision, making it possible to get the best of both worlds. Prioritising explainability and ethics in AI has driven a shift towards model architectures that are interpretable by design, and in which all relationships between variables can be extracted and tested to prevent the introduction of bias.
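As a deliberately simplified illustration of 'interpretable by design', the sketch below fits a logistic regression on synthetic data; every learned relationship is an explicit coefficient whose sign and size can be extracted and reviewed. The feature names are invented for illustration and imply nothing about FICO's models.

```python
# Sketch of an 'explainable first' model: a logistic regression whose
# learned relationships are explicit coefficients, so each input's
# effect on the score is directly auditable. (Synthetic data.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
# The true relationship uses only the first two features.
y = (1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Hypothetical feature names, for readability only.
for name, coef in zip(["income", "utilisation", "zip_density"], model.coef_[0]):
    print(f"{name:>12}: {coef:+.2f}")  # sign and magnitude can be reviewed for bias
```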

Ethical AI

We can expect continued research into and development of Ethical AI, which not only restricts the type of data going into a model build, but also uses Explainable AI to extract all the often-hidden variable relationships learned by the model; both help identify and stop bias. Building Ethical AI is closely tied to Explainable AI, which explains “how” and “why” a machine learning model derives its decisions. Although explainability techniques have been around for decades, research into new machine learning algorithms focused on explainable and transparent architectures is breaking new ground in the responsible and ethical use of AI.
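One widely used way to extract such hidden relationships from an otherwise opaque model is permutation importance, sketched below on synthetic data (an illustrative choice of technique; the article does not name a specific method): shuffling a feature the model genuinely relies on degrades its accuracy, and the size of the drop quantifies that reliance.

```python
# Sketch of one common explainability technique, permutation importance,
# applied to an otherwise opaque ensemble model. (Synthetic data.)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))
# Only features 0 and 2 carry real signal.
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling a feature the model relies on lowers its score; the mean
# drop over repeats surfaces which inputs actually drive decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```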

