Where’s the value in big data?

When The Economist features big data on its cover, you know it has gone mainstream. And indeed, for some time now, the lure of big data has been a seductive one: successfully manage and exploit the explosion in data, and commercial nirvana will be assured. Increased revenue will be yours, competition will disappear and customers will love you even more.

And yet, the reality is not matching the hype. ‘How do I really drive value from big data?’ is a question that has yet to be fully answered. Frustration seems to be building and there’s a danger that disillusionment will set in.

But it doesn’t have to be this way. There is a route to driving value but you have to be realistic and you have to be methodical in your approach. You also have to start by recognising that, in reality, there are only three kinds of big data projects.


The first is simply focused on replacing ageing traditional infrastructure; in effect, re-platforming an environment to make it fit for purpose in today’s economy – let’s call this the ‘makeover’.

The second type of big data project sees companies recognise that their traditional data warehouse environments may be good at reporting and managing structured data, but fall down when it comes to other forms of data and, most crucially, analytics (and after all, without analytics, how can you realise the optimal value of all that data?) – let’s call this the ‘upgrade’.

The third and final type of big data project is where companies decide they’ll save money by blindly adopting big data technology – let’s call this the ‘kamikaze’ option.

But here’s the unwelcome truth: only one of these projects will deliver new flexibility in reporting and data collection while saving money. Only one will drive a very high return on investment. And one is a giant IT black hole that can waste millions and ruin careers.

Let’s start with the ‘makeover’. There are very valid reasons for replacing ageing infrastructure with big data technology. Let’s say you bought a bunch of data warehouse appliances four years ago and it’s time to review things.


In the years that have passed, your company has realised it wants to leverage more analytics and be more responsive, while lowering costs. How to achieve this? Deploy a hybrid approach: use new versions of traditional data warehouse technology for a portion of your infrastructure and ‘big data’ technology, such as Hadoop, for the rest.

This is logical and will likely be successful. A tip for driving maximum value is to have three or four environments that allow for innovation, testing and exploration, including one sizeable area allocated as a data scientist playground. Companies that upgrade their traditional architecture (but buy less of it) and add Hadoop and related technology will have their current and future needs covered, save cost and improve performance. The ROI for this is medium to potentially high, and the time to value reasonable.
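To make the hybrid idea concrete, here is a minimal sketch of how workloads might be routed between those environments. The environment names and routing rules are illustrative assumptions for this sketch, not a prescription from the article or any particular vendor.

```python
# Illustrative only: a toy router that sends each workload to the
# environment a hybrid architecture might reserve for it. The
# environment names and rules below are assumptions.

def route_workload(workload: dict) -> str:
    """Pick a target environment for a workload description."""
    if workload.get("type") == "exploration":
        return "data-science-playground"   # sizeable sandbox for data scientists
    if workload.get("data") == "unstructured":
        return "hadoop-cluster"            # new big data platform
    if workload.get("type") == "reporting":
        return "data-warehouse"            # upgraded traditional appliance
    return "test-environment"              # innovation/testing area

# Example: structured monthly reporting stays on the warehouse,
# clickstream analysis lands on Hadoop.
print(route_workload({"type": "reporting", "data": "structured"}))    # data-warehouse
print(route_workload({"type": "analytics", "data": "unstructured"}))  # hadoop-cluster
```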

Now let’s turn to the ‘upgrade’ option. In this case, the company in question is more advanced in its adoption of analytics. It likely has its traditional environment running along just fine, thank you, but it realises it is missing out on using big data and the related technology to extract value from new types of data and analytics. For this company, adding a Hadoop environment and an analytics playground will deliver maximum value.

Some projects like this deliver 600% to 3,200% ROI in 12 months. The key to success here, though, is that the company is ready not just to collect and store big data, but to act on the results operationally. Let’s look at the most extreme example.

A giant marketing company with billions of rows of data created a massive analytics environment that included in-database analytics. Its internal business case stated a 12-month return on investment of 3,200%.
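Headline figures like these are easier to judge with the standard ROI arithmetic in hand. In the sketch below, the cost figure is a made-up assumption purely to show what a 3,200% return implies:

```python
# ROI = (gain - cost) / cost * 100. The $1m cost below is a
# hypothetical figure used only to show what 3,200% implies.
cost = 1_000_000
roi_percent = 3_200
gain = cost * (1 + roi_percent / 100)  # $33m returned on $1m spent
print(f"A {roi_percent}% ROI on ${cost:,} means ${gain:,.0f} back in 12 months")
```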


It listed numerous reasons, but the main driver was that its data scientists could build ten times as many models in a given period of time. Crucially, the marketing team was ready to act on the insight by changing existing programmes and running new ones almost immediately after the results of the analytics were available. In its case, the process to turn around some models went from days to minutes. These types of companies are the ones truly benefiting from the big data revolution; they start their projects with a clear purpose and are structured to act on the results.
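The speed-up comes from moving the model to the data rather than the data to the model. Below is a minimal, illustrative sketch of that in-database pattern; SQLite stands in for the production warehouse purely so the example runs, and the model and its coefficients are invented for the sketch.

```python
# Illustrative demo of the in-database analytics pattern: the scoring
# function runs inside the SQL engine, so raw rows are never extracted
# to a client. SQLite is a stand-in for a real warehouse here.
import math
import sqlite3

def score(income: float, visits: int) -> float:
    """Toy logistic response model (coefficients are made up)."""
    z = 0.00001 * income + 0.3 * visits - 2.0
    return 1.0 / (1.0 + math.exp(-z))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, income REAL, visits INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, 55000.0, 3), (2, 82000.0, 7), (3, 31000.0, 1)])

# Register the model as a SQL function, then score where the data lives.
conn.create_function("score", 2, score)
for row in conn.execute("SELECT id, score(income, visits) FROM customers"):
    print(row)
```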

Finally, let’s look at the ‘kamikaze’ option. Thankfully, it’s not common, but it does exist. The mantra is a common one: “We have to get some big data projects in place! We’ll adopt all this free open source technology, put it in the cloud, dramatically downsize our staff and save a fortune!” Except it won’t happen. For these companies to have any hope of monetising big data, they need to have a few things in place.

One is a clear understanding of the optimal mix of technologies, based on business needs and the operational ability to deliver. You can’t just throw all the technology over the fence to the cloud and adopt 100% open source if you are an enterprise-level company. Instead, you need to optimise your environment based on a mix of business needs and the business’s ability to act operationally on the results.

These companies will also need a clear vision of the results they expect, as well as stunningly good IT project managers. The list of ‘must haves’ for success is onerous, though. The result, more often than not, is that a huge amount of money will have been spent, a lot of disruption endured, and the company in question will find itself back where it started.


There is value to be had from the big data phenomenon, but it requires the right strategy and execution. Put simply, for a moderate return on investment, leverage an optimal mix of traditional and big data technology to replace your ageing infrastructure.

Then, for a high return on investment, run analytics on your big data (in traditional and/or Hadoop environments), but be sure you can act operationally on the results. And never throw the baby out with the bathwater by opting for the ‘kamikaze’ option, because that path is littered with the wrecks of failed deployments (and careers!). Good luck!

 

Michael Upchurch, co-founder, Fuzzy Logix
