How Data as a Service can modernise the data supply chain

Data has become the fuel of most business operations. Whether it’s customer and market insights, reporting, compliance, application development, testing or running apps and services – getting access to data, at speed, has never been more important. However, the data supply chain has yet to catch up with demand.

This is because the teams that need data rely on the ‘bucket brigade’: data is passed hand to hand from IT operations to the teams that need it. Processes for requesting and provisioning data are slow, manual and largely ad hoc. A project manager may need fresh data for a new feature in their mobile app. They submit a request to their line manager and, once approved, go to the database administrator (DBA).

The request then gets passed to the system admin, who in turn needs to work with storage and network admins to get access and bandwidth for moving that data. Each of these steps has internal service level agreements (SLAs) and priority queues, adding more time at each hurdle. As a result, some large enterprises can take weeks, or even months, to get the data they need.


In today’s digital economy, speed matters. The rise of omnipresent networks, ubiquitous connectivity and the consumerisation of IT has driven a culture of immediacy. Most businesses are pursuing DevOps and Continuous Delivery to increase release velocity and the number of launches they ship each year.

In trying to serve this demand for new features and faster time-to-market, many businesses are hamstrung by constraints in the data supply chain. Moving large data sets is so difficult and so slow that many organisations end up using subsets or synthesised data during development or testing – a sure way to increase defects, bugs and errors.

To compound matters, in the current climate of data leaks and theft, concerns about data security are at fever pitch. Businesses need to know and control who is accessing data, as well as how, where and why. Data masking and obfuscation are vital here, but they add yet another stage to the supply chain, further slowing down delivery.
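To make that masking stage concrete, here is a minimal sketch of deterministic masking applied inline as rows are delivered. The column names (`email`, `phone`, `national_id`) and the salt are hypothetical, and a real pipeline would typically use a dedicated masking tool rather than hand-rolled code:

```python
import hashlib

# Hypothetical example: deterministically mask PII columns as rows are delivered,
# so downstream teams get realistic but non-sensitive data.
SENSITIVE_COLUMNS = {"email", "phone", "national_id"}  # assumed column names

def mask_value(value: str, salt: str = "per-project-salt") -> str:
    """Replace a sensitive value with a stable pseudonym (same input -> same output)."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return f"masked-{digest[:12]}"

def mask_row(row: dict) -> dict:
    """Mask only the sensitive fields, leaving the rest of the record intact."""
    return {
        column: mask_value(str(value)) if column in SENSITIVE_COLUMNS else value
        for column, value in row.items()
    }

# Masking happens during delivery, not as a separate manual stage in the supply chain.
source_row = {"customer_id": 42, "email": "jane@example.com", "phone": "07700 900123"}
print(mask_row(source_row))
```

Because the pseudonyms are deterministic, joins and lookups still behave consistently across masked data sets, which is what keeps them useful for development and testing.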

The cost of delay

The current model for governing and managing data within organisations is broken, and these long data delays are costing businesses. Applications are slow to market and internal projects slip because of delays in the data needed for testing new functionality. Analytics are inaccurate because data is stale or incomplete. Even data-centre or cloud-migration projects can be slow because of the complexities of moving large volumes of data. As a result, innovation is choked.

The truth is, the inadequacy of the current data supply chain has a direct impact on the bottom line. It is costing businesses billions a year in lost productivity and low utilisation of systems and software resources. To truly succeed, today’s organisations need to find a way to revolutionise the data supply chain. The ability to copy, secure and deliver near real-time data as a service, on demand, is crucial to this.

Only by making the underlying data more agile can businesses start to reduce the time it takes to provision data for business-critical applications and eliminate the bottlenecks between different parts of the business.

Breaking the data dependency

The demand is clearly there, so how do you remove the constraints surrounding data? The answer could lie in a new layer that does the heavy lifting, allowing teams to self-serve their own data. Instead of being a constraint, the data supply chain could be transformed into a service.

By virtualising at the data level, copies no longer need to be duplicates; instead, data blocks can be shared. This means environments can be served up in minutes, not months. But the real value in transforming the data supply chain is the newfound power it gives users. Data sets can be refreshed and reset on demand. Environments can be bookmarked and shared between users, and data can be rewound instantly to any point in time – for example, to the second before a system went down. Data can even be masked during delivery, all without multiple roles intervening with manual processes.
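As an illustration of the block-sharing idea, the sketch below models virtual copies that share unchanged blocks with a point-in-time snapshot and store only the blocks they modify, with simple bookmark and rewind operations. The class and method names are invented for this example and do not describe any particular product’s implementation:

```python
import copy
import time

class Snapshot:
    """An immutable, point-in-time set of data blocks (block_id -> bytes)."""
    def __init__(self, blocks: dict):
        self.blocks = dict(blocks)
        self.timestamp = time.time()

class VirtualCopy:
    """A lightweight copy that shares blocks with its snapshot and stores only changes."""
    def __init__(self, snapshot: Snapshot):
        self.snapshot = snapshot
        self.changes = {}    # copy-on-write: only modified blocks live here
        self.bookmarks = {}  # bookmark name -> frozen set of changes

    def read(self, block_id):
        # Unchanged blocks come straight from the shared snapshot.
        return self.changes.get(block_id, self.snapshot.blocks.get(block_id))

    def write(self, block_id, data):
        # Writing never touches the shared snapshot, only this copy's overlay.
        self.changes[block_id] = data

    def bookmark(self, name):
        """Freeze the current state so it can be shared or returned to later."""
        self.bookmarks[name] = copy.deepcopy(self.changes)

    def rewind(self, name=None):
        """Reset to a bookmark, or all the way back to the original snapshot."""
        self.changes = copy.deepcopy(self.bookmarks[name]) if name else {}

# Usage: a tester gets a full environment in seconds, breaks it, and resets it.
prod_snapshot = Snapshot({"blk-1": b"orders", "blk-2": b"customers"})
test_env = VirtualCopy(prod_snapshot)
test_env.bookmark("before-test")
test_env.write("blk-2", b"corrupted by test run")
test_env.rewind("before-test")
assert test_env.read("blk-2") == b"customers"
```

Because each environment stores only its own changes, the cost of provisioning another copy is close to zero, which is what makes self-service refresh, bookmark and rewind practical.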


Self-service and automation replace the ‘bucket brigade’, empowering users to copy and share data without fear. Data is finally in the hands of those who need it, when they need it, fostering collaboration and creativity.

With the constraint of data removed, projects can be started and completed far more quickly than before. By increasing the number of available environments, projects can be run in parallel, increasing productivity. And with shorter cycles, testing can be expanded, improving quality.

Data as a Service gives IT the tools to move away from the bucket brigade approach to data management. By introducing infrastructure and processes built around automation and self-service, the data supply chain can be reinvented. When unconstrained by slow, manual processes, data can be made lightweight. This way, its potential to fuel Continuous Delivery and, ultimately, increase revenue and competitive differentiation can be realised.

Sourced from Iain Chidgey, VP and general manager, Delphix
