A computer that beats a chess champion. A truck that delivers beer without a driver. Quantum computing that renders classical supercomputers obsolete. These examples showcase the evolution of AI over the last twenty years. No longer relegated to the realm of science fiction, AI is poised to become the dominant technological development of this century.
When, precisely, AI will take over the world is a matter of debate, but its long march to ubiquity can be sensed with every automated customer service call and every facial-recognition tag in a social feed. Enterprises that harness the power of AI will enjoy great success, but only if they understand the essence of AI: that it relies on vast amounts of clean data.
As AI has become a household term, its mythology has grown accordingly, and it is not uncommon for a person to imagine machines on the cusp of thinking like humans and wiping out the workforce.
In reality, as Andrew Ng recently argued, AI already has a broad impact, but its types of learning are still narrowly deployed. For instance, most AI is A→B software, where input data (A) quickly generates a simple response (B). At its most sophisticated, this could be classified as Deep Learning, which itself is often defined as a subset of machine learning.
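The A→B pattern can be made concrete with a toy sketch: labeled examples of input A train a model that maps new inputs to a simple response B. The data, functions, and labels below are hypothetical, chosen only to illustrate the shape of supervised learning, not any particular production system.

```python
# Toy illustration of the A->B pattern: labeled examples of input A
# train a model that maps a new input to a simple response B.
from collections import Counter


def train_keyword_model(examples):
    """Count how often each word appears under each label."""
    counts = {}
    for text, label in examples:
        for word in text.lower().split():
            counts.setdefault(word, Counter())[label] += 1
    return counts


def predict(model, text):
    """B = the label whose keywords best match the input A."""
    scores = Counter()
    for word in text.lower().split():
        for label, n in model.get(word, {}).items():
            scores[label] += n
    return scores.most_common(1)[0][0] if scores else None


examples = [
    ("win a free prize now", "spam"),
    ("free offer click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday?", "ham"),
]
model = train_keyword_model(examples)
print(predict(model, "claim your free prize"))  # -> spam
print(predict(model, "monday meeting notes"))   # -> ham
```

Even at this trivial scale, the point Ng makes is visible: the code is generic, and everything interesting lives in the quantity and quality of the labeled data it is fed.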
Make no mistake, these AI tasks can be incredibly powerful. Machine learning, for instance, is used in facial recognition, speech recognition, object recognition, and translation projects, with ramifications across many global industries, and deep learning, built on layered neural networks, advances those abilities further. But the more complex the objective, the more susceptible the software becomes to a multitude of biases and security flaws, which can have vast economic and social consequences alike.
When on the prowl for the newest AI developments, it may be helpful to remember that the data comes first, not the software. With object recognition software, for instance, the ultimate question to ask about its efficacy is: does it have enough training data to distinguish among object types?
This is effectively what Facebook did last August, when it open sourced DeepMask, SharpMask, and MultiPathNet, three computer vision software tools that work together to help break down and contextualise the contents of images.
Indeed, this decision reflects Ng’s thinking, when he wrote that in an AI world of open-source code, the Achilles’ heel of today’s supervised learning software is data, and predicted that the fight for advantages in commerce is not over accessing software but customising it for your enterprise and your data.
In this context, the well-worn field of master data management (MDM) assumes a crucial role in the age of AI. MDM is the practice of consolidating business-critical data stored in disparate systems across an enterprise. Because it provides a common vocabulary for transactions and operations, and includes the cleaning, governance, tracking and control of all data, “master” data is the optimal starting point for good AI.
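One core MDM task, consolidating duplicate records from disparate systems into a single clean record, can be sketched in a few lines. The records, field names, and merge rule below are hypothetical simplifications; real MDM platforms use far richer matching and survivorship logic.

```python
# Hypothetical sketch of one MDM task: merging duplicate customer records
# from separate systems into a single "golden" record, matched on email.
def merge_records(records):
    golden = {}
    for rec in records:
        key = rec["email"].strip().lower()     # normalise the match key
        merged = golden.setdefault(key, {})
        for field, value in rec.items():
            if value and not merged.get(field):  # keep first non-empty value
                merged[field] = value
    return list(golden.values())


crm = {"email": "Ada@example.com", "name": "Ada Lovelace", "phone": ""}
billing = {"email": "ada@example.com", "name": "", "phone": "+44 20 7946 0000"}
print(merge_records([crm, billing]))
# -> one record combining the name from CRM with the phone from billing
```

The design choice worth noting is the normalised match key: without it, the two systems’ records would never be recognised as the same customer, and any AI fed this data would see two people where there is one.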
Legacy MDM solutions, often cobbled together through acquisitions when customer and product systems grew up separately, have fallen short on this task and are largely disregarded by the AI technorati. This is unfortunate, but newer systems have sprung up to meet the agility and scale requirements of AI’s data challenges.
By creating intuitive ways to manage large volumes of data at scale, these newer platforms let the users with the most context, who are rarely part of the IT team, handle the data themselves. With clean, clear, agile MDM, businesses can fully grasp their raw numbers and use them to serve AI systems.
Though it may lack the nightmarish glamour of a Hollywood blockbuster, marketing in the B2B space is where AI has arguably been at its most disruptive.
That, at least, is the theory of Scott Brinker at Chief Marketing Technologist Blog, who points to marketing solutions integrated with Amazon’s Alexa as an example of AI-UIs poised to shake up the marketing landscape.
But what gives Brinker’s argument its heft is his claim that AI disruptions in marketing will not simplify marketing practices but, rather, make them infinitely more complex. One core reason is that every AI function ultimately depends on the successful use of data: “The strategic advantage with plug-and-play AI is achieved by . . . the specific data you feed [the] algorithms. . . . The strategic battles with AI will be won by the scale, quality, relevance, and uniqueness of your data. Data quality will become ever more important.”
In this framing, markets for second- and third-party data will thrive, and the beneficiaries will be the businesses with intelligent MDM platforms to accumulate and store that data, allowing them to build a unified data foundation from which AI can draw, rather than forcing AI to break down siloed data.
Still, though, the emphasis is on quality data. And whether or not clean data can become universally true data is a hotly debated topic. AI lags behind a human brain in its capacity for swift, colourful qualitative analysis; people still must ask and answer the right “why” questions to know which data to collect and view, to know how to store that data and mine it for meaning.
But with the right team in place, the possibilities are endless. If the Washington Post can use AI to cover the Olympics, if Microsoft can usher in the age of the intelligent cloud, who’s to say AI won’t one day be one of your enterprise’s most trusted executives?