How to minimise technology risk and ensure that AI projects succeed

A spectre is haunting European Artificial Intelligence (AI) projects. But its name isn’t Communism: it’s the risk of disappointment.

More and more signs of corporate unease with what business is getting from its AI pound and euro expenditure are building up. IDC are warning us that 28% of machine learning initiatives fail; Gartner have said only 53% of AI initiatives make it from a prototype environment to production; and McKinsey frets that most organisations are not agile enough when it comes to deploying AI and its associated technology.

Unifying separate communities of interest with different KPIs

Luckily, the common problem isn't any underlying flaw in AI or the machine learning models themselves. Even with the issues these figures suggest, many AI projects are still delighting their investors. The real problem is the way these systems are being assembled and managed internally by the three main players involved in building and deploying AI models.

You have the business, which is focused on how to make better decisions using machine learning; you have the data scientists, who are all about applying machine learning techniques to solve those business problems; and you have the IT team, which is responsible for making sure their colleagues make optimal use of the organisation's investment in tech infrastructure.

These are often quite separate communities of interest with different KPIs; we have silos of information and perspectives, which are never helpful when it comes to delivering a successful business IT solution. What is needed is a way to bridge these differences; if we don't find one, the immense promise of AI will not be delivered for European organisations.

My contention, then, is that spinning up AI projects, amazing as they look on paper, is asking for trouble before any code is cut unless the work is supported by a framework that properly connects all these stakeholders and their work in an easy-to-use environment. The good news is that this is happening, and I'll tell you how.

Our experience shows that to get an impactful machine learning project up and running, you need several components: one that prepares the data for machine learning to work on; one that lets you build your models and then inspect them once built, so you can understand each model and make sure it isn't biased; one that enables you to quickly operationalise a model, putting governance and monitoring around it; and a way for the organisation to take the model (or, preferably, models) and embed it into applications.
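As a minimal illustration of those components, here is a stdlib-only Python sketch. The dataset, the threshold "model", and function names such as `prepare`, `train` and `audit_bias` are all illustrative assumptions, not any particular product's API; a real project would use proper ML tooling for each stage.

```python
# Minimal, stdlib-only sketch of the components described above:
# data preparation, model building, and a bias check before deployment.
# All names and the toy data are illustrative.
from statistics import mean

def prepare(rows):
    """Data prep: drop records with missing values, normalise the score."""
    clean = [r for r in rows if r.get("score") is not None]
    hi = max(r["score"] for r in clean)
    return [{**r, "score": r["score"] / hi} for r in clean]

def train(rows):
    """'Train' a trivial threshold model: approve above the mean score."""
    threshold = mean(r["score"] for r in rows)
    return lambda r: r["score"] >= threshold

def audit_bias(model, rows, group_key="group"):
    """Inspect the model: compare approval rates across groups."""
    rates = {}
    for g in {r[group_key] for r in rows}:
        members = [r for r in rows if r[group_key] == g]
        rates[g] = sum(model(r) for r in members) / len(members)
    return rates

rows = [
    {"score": 80, "group": "a"}, {"score": 20, "group": "a"},
    {"score": 90, "group": "b"}, {"score": None, "group": "b"},
    {"score": 60, "group": "b"},
]
data = prepare(rows)
model = train(data)
print(audit_bias(model, data))  # approval rate per group
```

The point of the sketch is the shape, not the maths: each stage hands a well-defined artefact to the next, which is exactly what an integrated platform formalises.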


What real-world AI customers want: a hybrid development and provisioning environment

Organisations are using lots of different technologies and multiple processes to try to manage all this, and that is what delays getting models into production and used by the business. If one platform can address all of those key areas, the speed at which an organisation gains value from it increases massively. To get there, you need an environment for developing the applications to the highest level of quality and internal customer satisfaction, and an environment in which the business can then consume those applications easily.

Sounds like the cloud, right? Well, not always. When you look at aligning AI, you also have to think about how AI is consumed across an organisation; you need a method to move it from R&D into production, but once it's deployed, how do we actually use it? What we are hearing from customers is that they actually want a hybrid development and provisioning environment, where this combination of technologies can run without issues no matter what the development or target environment is: cloud, on-premise, or a combination.

To further minimise risk, you'd want this supportive project harness to be as standards-based as possible, to avoid vendor lock-in and allow easy swap-out of things that don't work, and, for the same reasons, as open source as you can make it. So it's important to employ the language data scientists prefer most, which is Python, and to build on the container-orchestration system IT likes best, which is, of course, Kubernetes. Kubernetes automates application deployment, scaling and management, is fantastic for controlling the cost of your cloud infrastructure, and lets you quickly deploy individual elements when you want.
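To make that concrete, here is a hypothetical Kubernetes manifest for one such individually deployable element, a model-scoring service. The service name, image and resource figures are illustrative assumptions, not taken from any real platform; the pattern it shows is standard Kubernetes.

```yaml
# Hypothetical manifest: one element of the AI platform (a model-scoring
# service) deployed and scaled independently of the rest.
# Names, image and resource figures are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-scoring
spec:
  replicas: 2                      # scale this element on its own
  selector:
    matchLabels:
      app: model-scoring
  template:
    metadata:
      labels:
        app: model-scoring
    spec:
      containers:
        - name: scorer
          image: registry.example.com/ml/scorer:1.0   # illustrative image
          resources:
            requests:
              cpu: "250m"
              memory: 256Mi
            limits:
              cpu: "1"             # hard caps help control cloud cost
              memory: 1Gi
```

Because each element ships as its own Deployment with explicit resource limits, IT can scale or swap one piece without touching the rest, which is where the cost control and flexibility mentioned above come from.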

That Python bit really matters: the challenge with creating applications to be deployed over the web is that they have traditionally been written in languages such as Java, and you haven't got a wealth of data scientists with the skills to create AI applications using traditional app dev frameworks. But if you can use a language they're already competent with, your productivity goes right up.
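To illustrate the point, here is a stdlib-only sketch of a data scientist exposing a model over HTTP in pure Python, with no Java or separate app-dev framework. The `score` function is a hand-weighted placeholder for a real trained model, and the feature names are made up for the example.

```python
# Stdlib-only sketch: serving a model over HTTP in Python.
# The scoring logic is a placeholder for a real trained model.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def score(features):
    """Placeholder model: a hand-weighted linear score (illustrative)."""
    return 0.7 * features.get("usage", 0) + 0.3 * features.get("tenure", 0)

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON feature dict from the request body and return a score.
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"score": score(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def main():
    # Call main() to serve scores on port 8080.
    HTTPServer(("", 8080), ScoreHandler).serve_forever()
```

Calling `main()` starts the service; a POST with a JSON feature body comes back as a JSON score. The productivity argument is that the model author writes both halves in the same language.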


Hundreds of great models going straight into production

This kind of integrated 'hybrid cloud' platform for building and deploying AI across a business is now commonly available, with companies going from a few machine learning models getting out of the lab per year to hundreds — and they've also been able to scale those models to real-time applications.

Customers have found value and have accelerated the delivery of machine learning models, shifting them into production orders of magnitude quicker by having all the tools together in one place. This makes me optimistic that those disappointing IDC and Gartner stats are just part of the learning curve for AI, and that the dangers of increased AI cost, the risk of failed promises and failed delivery, and the appetite for often unnecessary AI infrastructure will all rapidly diminish in the medium term.

So I say to the spectre haunting European AI projects: time's up. We have a way to start winning, and winning big, with this transformative tech, through collaborative technology and business thinking.

Written by John Spooner, head of artificial intelligence, EMEA at

Editor's Choice

Editor's Choice consists of the best articles written by third parties and selected by our editors. You can contact us at timothy.adler at