How big data can help businesses dodge mistakes

 

Computer processing power doubles roughly every 18 months. This is the widely known Moore’s Law, which has proven to be reasonably accurate for over 35 years. The effect has been a staggering decrease in the cost of processing data.

Less well known, but equally important, is Kryder’s Law, which states that disk storage density doubles every year. In short, storage productivity doubles every year, while processing productivity doubles every year and a half.

The cumulative effect of this difference in productivity growth rates has created huge pools of data that sit fully or partially idle because we can’t figure out how to use them at a feasible cost. ‘Big data’ is just a convenient marketing term for this effect. The question for leaders now is how to speed up the transition from having more data to putting that data to work in driving better decisions.
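To see how quickly that gap compounds, here is a rough back-of-the-envelope sketch in Python. It simply takes the two rules of thumb above at face value; the time horizons are arbitrary.

```python
# Back-of-the-envelope: if storage capacity doubles every 12 months and
# processing power every 18 months, how fast does the gap between the
# data we can keep and the data we can affordably process widen?
for years in (3, 6, 9, 15):
    storage_growth = 2 ** years              # doubles every 12 months
    processing_growth = 2 ** (years / 1.5)   # doubles every 18 months
    gap = storage_growth / processing_growth
    print(f"After {years:2d} years: storage x{storage_growth:,.0f}, "
          f"processing x{processing_growth:,.0f}, gap x{gap:,.0f}")
```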

So what do executives want from their data? Fundamentally, business executives seek to distil causal relationships between their own actions (e.g. I changed the price of blue dresses in my stores) and their consumers’ reactions (e.g. Did the customer tweet about the sale? Did she visit the store more often? And did she actually buy the dress because of the price change?). 

Economic changes, competitor actions, geographic factors, and other ‘noise’ make isolating cause-and-effect relationships between a business action and consumer behavior exceptionally difficult. 

The most robust way to isolate the impact of any change is to try a new business idea in a few stores or with a few customers, analyse whether it worked and where or with whom it worked better, and then target the rollout to maximise ROI. 
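To make that concrete, the sketch below shows one minimal way such a test might be read out. The store lifts are hypothetical, and the simple two-sample t-test stands in for whatever analysis a team actually uses; it illustrates the idea rather than prescribing a method.

```python
# A minimal sketch of a test-versus-control read-out, assuming each
# store's sales lift (after vs. before the change) has been computed.
# All figures are hypothetical, purely for illustration.
from scipy import stats

test_lift = [4.1, 2.8, 5.0, 3.3, 4.6, 2.2, 3.9, 4.4]     # stores that tried the idea (% lift)
control_lift = [1.2, 0.4, 1.9, 0.8, 1.5, 0.2, 1.1, 0.9]  # comparable business-as-usual stores (% lift)

# Estimated impact of the change = average lift beyond the control group
impact = sum(test_lift) / len(test_lift) - sum(control_lift) / len(control_lift)

# Two-sample t-test: is the difference more than noise?
t_stat, p_value = stats.ttest_ind(test_lift, control_lift)

print(f"Estimated lift from the change: {impact:.1f} percentage points")
print(f"p-value: {p_value:.3f} (a small value means the lift is unlikely to be chance)")
```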

This process of conducting a business experiment and rigorously analysing the resulting data to find cause-and-effect relationships is the most impactful use of big data.

In fact, experts broadly agree that rapid and robust experimentation is the fundamental benefit of big data. A McKinsey report concluded that enabling experimentation to discover needs, expose variability, and improve performance is its key benefit, while thought leaders at MIT have pointed to ‘faster insights with cheap experiments’ as the key advantage.

Why then do so many organisations still not use their big data to robustly test their ideas? While many reasons may be cited, executives often believe that they know how their decisions will turn out and that a scientific test may not be needed. 

A convenience store chain hypothesised that giving hourly employees a regular wage increase on a pre-determined schedule would decrease employee attrition, creating operational efficiencies. The company tested the new compensation structure in a small subset of stores to determine whether the investment was worthwhile.

To the management team’s surprise, the programme increased net profit in addition to decreasing employee attrition. The team thought this result nearly impossible until they looked further into the results.

They found that the programme had also dramatically reduced shrink: employees who received the wage increase were more loyal to the company and less likely to steal. Profit improvement from the programme was projected to be millions of dollars per year.

The popular US convenience store chain Wawa had developed a new flatbread breakfast offering that performed well in spot-testing. However, the analysis team wanted a more robust measurement of how the entire store was affected by the new product introduction.

They decided to scientifically test the flatbread introduction before rolling the product out nationally. They found that while the flatbread performed well, it cannibalised sales of higher-margin menu items, a common symptom of new product introductions. Wawa decided not to roll out the flatbread, avoiding further wasted investment.
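The arithmetic behind that call is easy to illustrate. The figures below are invented purely to show the mechanics of a cannibalisation read-out; they are not Wawa’s numbers.

```python
# Hypothetical figures, for illustration only: does a new menu item pay
# for itself once cannibalisation of higher-margin items is counted?
flatbread_units_per_week = 120
flatbread_margin_per_unit = 0.90      # dollars of margin per flatbread sold

cannibalised_units_per_week = 80      # higher-margin items no longer sold
cannibalised_margin_per_unit = 1.60   # dollars of margin per lost sale

net_weekly_margin = (flatbread_units_per_week * flatbread_margin_per_unit
                     - cannibalised_units_per_week * cannibalised_margin_per_unit)

print(f"Net weekly margin impact per store: {net_weekly_margin:+.2f} dollars")
# A negative number means the store as a whole is worse off even though
# the new product 'performs well' on its own.
```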

A large supermarket, meanwhile, had a major product that was underperforming, so management decided to try selling it at a lower price. The product was sold by the pound, so the team decided to test multiple combinations of price and weight increment to find out which performed best.

In the end, the team tested the original price at two different weight increments, and the lower price at two increments. The catch was that the original price at the lower increment produced what retailers call an ‘ugly’ price point. Everyone agreed to test it anyway, believing it would not be successful.

The test produced striking results. Keeping the product at the original price but offering it at the ‘ugly’ increment was far and away the best performer; it even beat every variant at the reduced price. The results were so overwhelming that the company went forward with the ugly pricing, but rolled it out only to the stores where the test indicated a high probability of success.
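A multi-cell test like this is read out in much the same way as a simple test-versus-control comparison, with one cell per price/increment combination. The sketch below shows the idea; the lift figures are made up for illustration and are not the supermarket’s data.

```python
# Ranking the four price/increment cells by average sales lift versus
# control stores. The lift numbers are made up for illustration.
from statistics import mean

results = {
    ("original price", "standard increment"): [0.5, 1.1, 0.2],
    ("original price", "'ugly' increment"):   [6.2, 5.8, 7.1],
    ("lower price",    "standard increment"): [2.0, 1.4, 2.5],
    ("lower price",    "'ugly' increment"):   [2.8, 3.1, 2.2],
}

for (price, increment), lifts in sorted(results.items(),
                                        key=lambda cell: mean(cell[1]),
                                        reverse=True):
    print(f"{price:>15} / {increment:<20} average lift: {mean(lifts):.1f}%")
```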

These are just a few examples of companies enhancing decision-making through big data techniques, which enable them to be more creative, try new ideas, learn quickly what works, and generate significant competitive advantage.

The future belongs not simply to those who have big data, but to those who can use it to make better decisions.

 

Sourced from Jim Manzi, Applied Predictive Technologies

