Organisations are under intense pressure to make more effective use of the vast amounts of information they gather and hold.
Users at all levels want to be able to access, analyse and report on information of all kinds as they strive to create competitive advantage, serve customers better, grow revenues, foster greater efficiency, ensure suitable levels of compliance and achieve numerous other goals. But considerable barriers – some technological, many business-related – prevent them from reaping the full benefits of that powerful corporate asset.
What has become apparent is that the pool from which they want to drink is both deeper and wider than ever before. An organisation’s information resource will inevitably include traditional structured data – sales numbers, inventory levels, performance metrics and so on – held in data warehouses and typically accessed, analysed and reported on using spreadsheets and business intelligence tools.
But today’s information workers need to draw on many other types of data – semi-structured data such as tagged documents, or unstructured data such as email correspondence, text files, PDFs and, more recently, voice and images. What’s more, they want these two data worlds brought together under a single information management architecture.
A call centre agent, for example, might benefit from seeing, on a single screen, a customer’s sales history, their profitability to the company and their tendency to complain about the service. A query on that customer would draw on ERP data, email accounts and scanned documents; it would trigger some automated analysis in the background and present a rounded picture of the state of the relationship to the agent.
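By way of illustration, such a unified lookup might be sketched as follows. The three back-end sources, the record layouts and the complaint-rate calculation are all hypothetical assumptions standing in for real ERP, email-archive and document-store integrations.

```python
# A minimal sketch of a "single customer view", assuming three hypothetical
# silos (ERP sales data, an email archive, a scanned-document store) are
# already exposed as simple in-memory structures.

ERP_SALES = {"C-1001": {"name": "Acme Ltd", "lifetime_sales": 48_500, "margin_pct": 12.4}}
EMAIL_ARCHIVE = {"C-1001": ["Invoice query", "Complaint: late delivery", "Complaint: damaged goods"]}
SCANNED_DOCS = {"C-1001": ["signed_contract_2023.pdf"]}

def customer_view(customer_id):
    """Assemble a rounded picture of one customer from all three silos."""
    emails = EMAIL_ARCHIVE.get(customer_id, [])
    # Background "automated analysis": a crude complaint rate over the email history.
    complaints = sum(1 for subject in emails if subject.lower().startswith("complaint"))
    return {
        "profile": ERP_SALES.get(customer_id, {}),
        "documents": SCANNED_DOCS.get(customer_id, []),
        "email_count": len(emails),
        "complaint_rate": complaints / len(emails) if emails else 0.0,
    }

view = customer_view("C-1001")
```

In a production system each dictionary lookup would be a query against a separate department's database or archive; the point of the sketch is only that the join happens at query time, presenting one rounded picture to the agent.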
But the unification of these previously discrete areas of IT and a focus on extending information management across the organisation in a consistent and integrated fashion are not without their challenges.
Above all, data quality and data integration remain two of the biggest obstacles. The holding of data in silos by different departments and subsidiaries – and the definition of that data’s structure within those confines – means that organisations have struggled to establish consistent views of their core business attributes.
That is being remedied in some places by the application of data quality and governance tools designed to remove anomalies, errors and duplication.
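The kind of cleansing such tools perform can be sketched in miniature: normalise the free-text fields, then collapse records that refer to the same entity. The record layout and the matching rule below are illustrative assumptions, not any particular vendor's method.

```python
# A toy data-quality pass: canonicalise customer records, then remove
# duplicates that differ only in spacing and letter case.

def normalise(record):
    """Canonicalise the fields used for duplicate matching."""
    return {
        "name": " ".join(record["name"].split()).title(),
        "postcode": record["postcode"].replace(" ", "").upper(),
    }

def deduplicate(records):
    """Keep one record per (name, postcode) key; first occurrence wins."""
    seen, unique = set(), []
    for rec in map(normalise, records):
        key = (rec["name"], rec["postcode"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

raw = [
    {"name": "acme  ltd", "postcode": "sw1a 1aa"},
    {"name": "ACME LTD", "postcode": "SW1A1AA"},
    {"name": "Beta Corp", "postcode": "EC2V 7HH"},
]
clean = deduplicate(raw)
```

Commercial tools go much further – fuzzy matching, survivorship rules, reference-data validation – but the shape of the job is the same: define a canonical form, then enforce it.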
But that approach deals with data only after its quality has been compromised. Other initiatives – around master data management, customer data integration and product information management – are centralising the definition of data structures and enforcing consistency across the data held in different databases.
Beyond structured data, vendors are beginning to offer tools that analyse unstructured information by identifying its hidden structure, using technologies such as entity extraction and linguistic analysis. Others argue that search technologies can be applied to structured data just as easily as to unstructured data.
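At its simplest, entity extraction means finding the structured fragments latent in free text. The sketch below does this with regular expressions for two made-up entity types; real products rely on linguistic analysis and trained models rather than hand-written patterns, so treat this purely as an illustration of the idea.

```python
# A toy entity extractor: pull email addresses and sterling amounts
# out of unstructured text. The patterns are deliberately simplistic.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
MONEY = re.compile(r"£\d[\d,]*(?:\.\d{2})?")

def extract_entities(text):
    """Return the structured fragments hidden in a block of free text."""
    return {
        "emails": EMAIL.findall(text),
        "amounts": MONEY.findall(text),
    }

note = "Refund of £1,250.00 agreed with j.smith@example.com on Tuesday."
entities = extract_entities(note)
```

Once extracted, those fragments can be indexed and queried alongside conventional database fields – which is precisely the bridge between the two data worlds that the unified architectures described above aim to build.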
The approaches may vary, but the primary goals are brought out in this Information Age survey report on Effective Information Management – better decision-making, tighter compliance and the creation of competitive advantage.