Tell the truth

It is a malaise that afflicts companies of all sizes. But it is the scale of the problem at some of the world’s largest companies that has forced radical action.

For decades, IT and business leaders have watched – often powerlessly – as different business units, subsidiaries and departments have independently defined their key data entities such as customer, product and supplier in contradictory ways within separate IT silos. That makes any kind of high-level analysis of business performance difficult if not impossible, sapping confidence in the underlying data and weakening decision-making.

Unilever, Shell, Diageo, Carphone Warehouse, Labatt Breweries and a couple of hundred other multinationals have declared that state of affairs untenable and have been attempting to enforce consistency in the way different business units define their critical data. The key: the emerging set of technologies and processes known as master data management (MDM).

Inconsistent views

The business requirement for some means of establishing consistent views of data is hardly new – as decision-makers everywhere know.

A typical scenario might be a meeting involving the finance director, the head of marketing and the production manager, in which the simple question of how many customers the organisation has is answered differently by each. The finance view of customer numbers might come from the log of purchase orders held by the enterprise resource planning (ERP) system; the sales and marketing figure might be drawn from the CRM system; and the production system might show a confusing overlap between suppliers and customers.

Such a situation stems from the way systems have been built. In the past, business leaders viewed data as ‘owned’ by the application that created it. To move beyond that, organisations need to de-couple their master data from their applications and put in place a master data layer.
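To make that concrete, the sketch below shows one minimal way a master data layer might work, with a single golden-copy customer record that each application’s local key resolves to. The class and field names are illustrative assumptions, not any vendor’s product.

from dataclasses import dataclass, field

@dataclass
class CustomerMaster:
    """Golden-copy customer record, held outside any single application."""
    master_id: str                # enterprise-wide identifier
    legal_name: str
    country: str
    source_keys: dict = field(default_factory=dict)  # system -> local key

class MasterDataLayer:
    """Minimal master data layer: applications resolve their own keys to the
    shared golden copy instead of owning the customer definition themselves."""
    def __init__(self):
        self._by_master_id = {}
        self._by_source_key = {}  # (system, local key) -> master_id

    def register(self, record):
        self._by_master_id[record.master_id] = record
        for system, key in record.source_keys.items():
            self._by_source_key[(system, key)] = record.master_id

    def resolve(self, system, local_key):
        return self._by_master_id[self._by_source_key[(system, local_key)]]

# The ERP and CRM rows below describe the same real-world customer; counting
# master records rather than application rows avoids the double-count.
mdl = MasterDataLayer()
mdl.register(CustomerMaster("CUST-0001", "Acme Ltd", "GB",
                            {"ERP": "PO-ACME-77", "CRM": "acme-ltd"}))
assert mdl.resolve("ERP", "PO-ACME-77") is mdl.resolve("CRM", "acme-ltd")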

Class of 27000

Although most master data management initiatives are focused on bringing consistency to internal data, many companies are also trying to rationalise the data passed between themselves and their business partners. Prominent among those is the ‘eClass’ international classification and description of materials, products and services championed by DaimlerChrysler, BASF, Audi and several other, mostly German, multinationals.

As a key piece of master data, eClass is used to describe all materials handled by a company, with the idea that the definition is accepted both internally and across its supply chain. At this point, eClass includes 27,000 definitions of different materials.

How can such a standard be integrated into a business?

Certainly, the idea of a purchase order clerk looking up a book containing 27,000 definitions is not credible. So the requirement is for the common definitions to be workflow-based and held in a web-based repository where the ‘golden copy’ master data can be defined, published and distributed.
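A rough sketch of how such a repository might be surfaced to that clerk is shown below: a free-text search over a golden-copy set of definitions keyed by an eClass-style classification code. The codes, descriptions and function names are illustrative assumptions, not the actual eClass schema or any published API.

# Illustrative golden-copy repository of material definitions, keyed by an
# eClass-style classification code. The entries are invented for this sketch.
MATERIAL_REPOSITORY = {
    "27-01-01-01": {"label": "Low-voltage motor", "unit": "piece"},
    "23-11-01-02": {"label": "Hexagon bolt", "unit": "piece"},
}

def search_materials(term):
    """Free-text search, so nobody browses 27,000 entries by hand."""
    term = term.lower()
    return [(code, entry) for code, entry in MATERIAL_REPOSITORY.items()
            if term in entry["label"].lower()]

print(search_materials("bolt"))  # -> [('23-11-01-02', {'label': 'Hexagon bolt', ...})]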

There have been efforts to create master data structures that pre-date MDM: in manufacturing, product information management (PIM) has been widely used for several years, as has customer data integration (CDI) in areas such as retail. Attempts to standardise and classify external data that moves between companies have centred on universal product codes such as the EAN (European Article Number) and UCC (Uniform Code Council) codes, and on global data synchronisation initiatives such as the GTIN (Global Trade Item Number) and GLN (Global Location Number).
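To give a flavour of how such codes keep shared data consistent, the GTIN family carries a mod-10 check digit that any receiving system can verify. The short sketch below computes it using the standard weighting of 3 and 1 from the right; the example value is there only to show the arithmetic.

def gtin_check_digit(digits_without_check):
    """Weight digits 3, 1, 3, 1, ... from the right, sum them, and return the
    amount needed to bring the total up to a multiple of 10."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits_without_check)))
    return (10 - total % 10) % 10

# GTIN-13 / EAN-13 example: the full code 4006381333931 ends in its check digit.
assert gtin_check_digit("400638133393") == 1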

But, while those have proved useful in specific areas (especially in retail), they have been bound by their narrow focus. Any attempt to gain a corporate-wide view of product or customer, for example, is going to be thwarted if the definitions are only imposed on part of the business.

In contrast, the MDM products from companies such as IBM, Kalido, Hyperion Solutions and SAP that are now emerging are ‘subject independent’ – and they need to be. What is required, say analysts, is a mechanism interlinked with a set of processes that lays down the law on how key data is defined and kept consistent across all the applications that use it: a so-called ‘golden copy’.

For many organisations, that may seem like a gargantuan undertaking. And, indeed, to date, take-up of MDM has been almost exclusively among large companies – perhaps not surprisingly, given that they are likely to have the most heterogeneous application environments.

But adoption has also been held back by a lack of understanding of the problem – and whose problem it is anyway. Above all, say observers, the business needs to get involved in the integrity of its data – not least to ensure it has the confidence to fulfil compliance obligations.

Synchronisation

One important factor that will determine that take-up is the realisation that many of the technology pieces required to orchestrate data look-ups across the business are now in place.

Until relatively recently, the notion of being able to synchronise data definitions – to read and write back and forth between many systems and stitch them together easily – was beyond most companies. Now, through web services integration and ample bandwidth, organisations can propagate data between many systems relatively easily.
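As a minimal illustration of that kind of propagation, the sketch below pushes a golden-copy update to each subscribing system over HTTP using the widely used requests library; the endpoint URLs and payload shape are hypothetical.

import requests

# Hypothetical endpoints of the systems that consume customer master data.
SUBSCRIBER_ENDPOINTS = [
    "https://erp.example.com/api/master/customer",
    "https://crm.example.com/api/master/customer",
]

def propagate_customer(record):
    """Push an updated golden-copy record to every subscribing system."""
    for url in SUBSCRIBER_ENDPOINTS:
        response = requests.post(url, json=record, timeout=10)
        response.raise_for_status()  # surface failures rather than let systems drift

# Usage (against real endpoints):
# propagate_customer({"master_id": "CUST-0001", "legal_name": "Acme Ltd", "country": "GB"})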

That highlights a key question: who owns the data and who is responsible for ensuring its enterprise-wide consistency? The consensus is that stewardship rests with the business, so data definitions are made independently of IT operations.

Pete Swabey
