When Gartner published its 2015 Hype Cycle for Emerging Technologies this past August, big data was surprisingly absent. After hovering at or near the hype cycle’s peak for four years running, it disappeared before getting the opportunity to wind its way through the cycle’s latter phases of experimentation and early adoption, leaving many scratching their heads.
Was big data all hype and no substance? Was it overtaken by a more efficient emerging technology?
According to Gartner, it’s the opposite: big data has matured so rapidly and become so firmly rooted in the fabric of data processing that it’s no longer considered an emerging trend. Big data is here to stay – and it’s only going to get bigger.
But even though big data skipped right past the hype cycle’s infamous ‘Trough of Disillusionment’ phase, disillusionment will abound in 2016 as businesses deal with five sharp growing pains brought on by big data’s rapid rise.
1. Analysis paralysis
From dog collars that warn of impending heat exhaustion to kegs that warn of impending beer exhaustion, analysts are predicting the Internet of Things (IoT) will grow to encompass as many as 200 billion connected devices by 2020.
Couple the IoT with other exploding data sources, such as cloud applications and social media interactions, and businesses are finding themselves overwhelmed at the prospect of capturing, storing, and processing all this data.
And just as consumers suffer from an inability to make decisions when faced with too many options (a phenomenon known as analysis paralysis), so too will some businesses when faced with the sheer enormity of a big data undertaking.
This paralysis is likely to disrupt even existing data workflows until a path forward can be navigated.
2. Divide and conquered
The end goal of big data is to be able to draw conclusions from a unified pool of data – but the process of getting there is anything but unifying. And it’s not just due to the ever-increasing fragmentation of data sources businesses must deal with.
Big data solutions are equally fragmented, consisting of different tools for performing each unique operation – from data storage to data cleansing to API management to data visualisation, and everything in between.
In this decentralised environment, silos go up faster than they can be torn down, making comprehensive data governance and compliance exceedingly difficult to achieve. As a result, data quality, data security and data visibility all decrease while inefficiency and cost increase.
3. Small businesses are at a disadvantage
Big data requires big resources. Businesses must make substantial investments not only in new technology for storing and processing vast amounts of data, but also in the technology that generates it.
Take any major big box retailer as an example. In brick-and-mortar stores, cameras, mobile consumer apps and point-of-sale software all work together to generate data and insights that allow the retailer to dynamically improve customers’ shopping experiences through process and system changes.
Small and even medium-sized businesses are much less likely to have the resources to monitor, influence and predict consumer behaviour to such a high degree. And even when the resources do exist, smaller customer bases constrain the ability to draw macro-level conclusions. Across all industries and many metrics, big businesses have a built-in advantage when it comes to big data.
4. To err is human – and machine
Big data requires that we leave more and more decisions to machines, but algorithms are only as good as the variables that can be conceived of and accounted for.
As demonstrated by the 2010 stock market crash, dubbed the ‘Flash Crash’, unexpected events can result in catastrophic outcomes when machines are left to their own devices.
Although most business decisions don’t have the weight of the US stock market resting on them, the stakes are high when the impact of a decision can cascade through an enterprise’s operations in the blink of an eye.
Decision management processes will need to walk the fine line between the efficiency of machine algorithms and the superiority of human judgment, and be prepared for recalibrations along the way.
5. Stuck inside the box
Albert Einstein said, “We cannot solve our problems with the same thinking we used when we created them.” But big data algorithms, by their very nature, are backward looking. A hypothesis is put forth, historical data points are crunched, and outcomes necessarily fall into predetermined ranges.
There are, of course, valuable insights and correlations to be had, but the kind of out-of-the-box thinking and risk-taking behaviour that fosters innovation could be harder to come by in a mathematics-reliant big data environment.
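The backward-looking limitation described above can be seen in even the simplest predictive model. The sketch below uses a toy nearest-neighbour “forecaster” with hypothetical sales data (the dataset, function names, and scenario are illustrative assumptions, not anything from the article): because every prediction is drawn from a past observation, the model can never produce an outcome outside the range of its own history.

```python
# Toy illustration of a backward-looking predictor: a 1-nearest-neighbour
# "forecaster" trained on hypothetical historical (month, sales) pairs.
# Its output is always an outcome it has already observed.

history = [(1, 10.0), (2, 12.0), (3, 15.0), (4, 14.0)]  # (month, sales)

def predict(month):
    """Return the sales figure of the closest historical month."""
    nearest = min(history, key=lambda pair: abs(pair[0] - month))
    return nearest[1]

# Even a query far outside the observed period maps onto a past outcome:
print(predict(12))  # nearest historical month is 4, so the model returns 14.0

# The prediction is necessarily confined to the historical range:
outcomes = [sales for _, sales in history]
assert min(outcomes) <= predict(12) <= max(outcomes)
```

However sophisticated the model, the same principle applies when its only inputs are historical data points: genuinely novel outcomes, by definition, are not in the training set.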
Taking the sting out of big data
Fortunately, many of the aches and pains will ease over time as big data best practices, deliverables, and supporting technologies continue to mature and, just as importantly, coalesce.
Already the market is seeing new types of solutions pop up that capitalise on the cloud to provide tailored, full-service solutions that meet businesses’ unique big data needs. These platforms will go a long way toward taking the sting out of big data, bringing it within reach of all businesses regardless of size or expertise.
Despite its challenges, businesses have no choice but to embrace big data – this brave new digital world demands it. Of course, just as big data consumers and solution providers get the hang of it, a technology will emerge to send them all back to the proverbial drawing board.
Sourced from Manish Gupta, CMO, Liaison Technologies