How to test in the real world: Lessons from big data

Organisations can use the vast amounts of data now available to them to run tests that determine the impact of each new business initiative before rolling it out across the network.

A/B testing is already widely used by online companies because it is relatively simple to run. However, running empirical tests in an omni-channel or pure brick-and-mortar organisation can be considerably more challenging.

Which business initiatives should these organisations test? And what must they consider to drive the greatest impact from these in-market tests?

Promoting certain products can be very effective in drawing more people into stores and growing the amount they spend per transaction, but without an empirical approach to evaluating each promotion, organisations risk significantly eroding their margins.

Gone are the days when relying on correlational analysis or “gut feeling” was enough to ensure promotions would be profitable; there are numerous factors that must be considered before broadly rolling out a new promotion.

What happens when a store dramatically cuts the price of one product? The promotion might drive one customer to make an additional trip to a store, where they purchase multiple items in addition to the promoted product.

Conversely, the same promotion might cause a different customer simply to purchase a higher quantity of the discounted product during the promotion, leading to lower future sales (a phenomenon often referred to as “pull-forward sales”).
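To make that trade-off concrete, the net margin effect of a promotion can be sketched as a simple back-of-envelope calculation. The function and all figures below are hypothetical, not drawn from any specific retailer's data:

```python
# Hypothetical sketch: net margin impact of a promotion, accounting for
# pull-forward (units sold on promotion that customers would otherwise
# have bought later at full price). All numbers are illustrative.

def promo_net_margin(promo_units, promo_margin_per_unit,
                     pulled_forward_units, full_price_margin_per_unit,
                     extra_basket_margin):
    """Return the net margin change attributable to the promotion."""
    gained = promo_units * promo_margin_per_unit + extra_basket_margin
    # Margin forgone on units that would have been bought anyway,
    # later, at full price.
    lost_future = pulled_forward_units * full_price_margin_per_unit
    return gained - lost_future

# 1,000 units sold at £0.50 margin each, plus £400 of margin from extra
# basket spend - but 600 of those units were merely pulled forward from
# future full-price sales carrying £1.20 margin each.
net = promo_net_margin(1000, 0.50, 600, 1.20, 400.0)
print(f"Net margin impact: £{net:.2f}")
```

A promotion that looks successful on headline unit sales can thus come close to breaking even, or worse, once pulled-forward volume is netted out.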

New product introductions can also have a detrimental effect on sales if not properly tested.

An example of this comes from the convenience store Wawa’s introduction of a new flatbread snack. The flatbread had performed well during preliminary trials, but when Wawa tested the impact of introducing this new item on store sales overall, they found that the flatbread cannibalised sales of other higher-margin items.

This test helped executives avoid significant margin loss and identify specific locations where the programme was likely to be successful.

Additionally, organisations are increasingly able to link customer data to transaction data, enabling them to understand nuances including cannibalisation and pull-forward effects of each promotion, and how each of these varies by customer.

Testing each offer with a small number of customers before broader rollout enables organisations to identify the shoppers who are shopping more often because of an offer, and those who would have purchased the promoted product anyway but are now simply doing so at a lower margin.

The key to optimising promotions is to use predictive analysis to measure both the short- and long-term effects of offers such as buy-one-get-one.

Short-term tests run frequently and for brief periods, and are designed to measure the immediate outcome of each promotional programme.

Continuing to evaluate the margin impact over the long term, by contrast, can inform an organisation’s strategic direction and positioning in the marketplace. The two approaches serve distinctly different purposes.

The immediate effect of small incremental changes must be monitored, but measuring the long-term outcome these changes could have across an organisation is just as important.

The impact of advertising

Along with promotions and product introductions, there is increasing pressure on marketing departments to accurately assess the impact of media spend in a statistically significant way.

The internet has opened up a new world of digital advertising possibilities, with many websites plastered in branding and ads popping up on every video and article.

Online advertising is well known to drive online sales, but its impact on in-store sales is less clear.

Organisations have the opportunity to use in-market testing to understand where online ad spend should be increased or reduced to realise the highest offline returns.

Organisations can use ‘blackout’ testing, where all adverts are removed from certain postcodes for a period of time, or conversely ‘heavy-up’ testing, where advertising is drastically increased in certain areas.

Store performance in these “test” markets is then compared to stores in markets that did not receive the increased/reduced digital marketing spend, allowing organisations to understand the true incremental impact of each additional digital marketing dollar.

Testing the layout

Once customers have been drawn in through advertising and promotions, how does a store’s physical appearance and layout affect consumer behaviour?

Capital investments like refreshes and remodels have the potential to drive millions in profit improvement for retailers, but can also carry risk given the significant costs involved.

The returns on any remodel initiative across a store network are likely to vary greatly by location due to a number of market factors.

The numerous external factors inherent in the retail environment, such as seasonality and competitor actions, can make it extremely difficult to understand the true impact caused by a store refresh. 

However, organisations can accurately understand the attributable impact of each element of these programmes by putting the remodel into market in a small subset of locations first.

By comparing sites in which the remodel programme is executed to similar sites that do not receive the renovation, executives can be confident that any performance difference between the two store groups can be attributed to the impact of the remodel.
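Under the assumption that the two store groups would otherwise have trended alike, this comparison can be sketched as a simple difference-in-differences calculation, which nets out shared factors such as seasonality. All store figures below are hypothetical:

```python
# Hypothetical sketch: difference-in-differences estimate of a remodel's
# impact. Compares the pre/post sales change in remodelled stores with the
# change in matched control stores over the same period, so shared external
# factors (seasonality, competitor actions) net out.

def avg(xs):
    return sum(xs) / len(xs)

def diff_in_diff(test_pre, test_post, control_pre, control_post):
    """Remodel impact = test-group change minus control-group change."""
    return (avg(test_post) - avg(test_pre)) - (avg(control_post) - avg(control_pre))

# Average weekly sales per store (hypothetical, in £000s).
remodel_pre = [200, 210, 190]
remodel_post = [230, 238, 219]
control_pre = [205, 195, 200]
control_post = [212, 204, 208]

impact = diff_in_diff(remodel_pre, remodel_post, control_pre, control_post)
print(f"Attributable weekly uplift per remodelled store: £{impact:.1f}k")
```

The control-group change here captures what would likely have happened anyway; only the excess growth in the remodelled stores is credited to the remodel.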

This approach also provides value by enabling organisations to tailor each programme to include only the highest impact components.

For example, a hotel chain may find that refurbishing the outside of its hotels drives significant lift in new bookings, while new carpeting and interior furnishings do not drive new traffic.

Through testing, organisations can bid farewell to the all-or-nothing approach and instead be more flexible, making changes only where they will see significant returns.

Counterintuitive results

The value of in-market testing becomes especially evident when it reveals counterintuitive results, often due to external factors that would otherwise not have been considered.

For example, moving expensive items into protective cases to prevent theft may actually cause an overall decrease in profit, as the added security may discourage consumers from purchasing these big-ticket items.

Charging a pound for a shopping trolley may generate a new revenue stream, and reduce theft of trolleys, but it can cause a larger decline in sales as customers switch from trolleys to shopping baskets and consequently buy fewer things on each visit.

Testing may also reveal unexpected upside. For example, a convenience store raised pay yet saw an increase in overall profit, as the improved employee loyalty and lower turnover also meant decreased theft.

Organisations that are able to use testing in physical locations to drive growth and profitability will hold a distinct competitive advantage.

We have seen industry leaders minimise the risk of innovation and maximise incremental margin by making it faster and cheaper to test each new initiative prior to broad rollout.

These in-market tests allow executives to predict which programmes will work, where they will work best, and how they can be tailored for maximum impact.

Changes can be planned and strategically rolled out, avoiding expensive mistakes while shining light on potentially profitable ideas that may not have otherwise been identified.

Sourced from Rupert Naylor, Applied Predictive Technologies

Ben Rossi
