Attribution: an ongoing conundrum that needs to be solved

The digital arena continues to advance at a rapid pace, but attribution is one area that has struggled to keep up.

Despite all the hype in the industry around attribution, and the inadequacy of the models currently in use, adoption of statistical techniques to identify the actual contribution of our digital campaigns has been limited.

As the customer journey grows ever more complex, taking a simplistic view of attribution does little to help address it.

The solution lies in using science to measure more accurately not only the influence of each channel, but also how channels are interconnected.

The most commonly used attribution models are still either click or rules-based; the former include first click and last click, and the latter include even distribution, time decay, and positional.

All these methods are subjective and have significant flaws, but their main attraction is the ease of implementation.

They come pre-packaged within advertising platforms and don’t require additional investment.
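
To make these rules concrete, here is a minimal sketch of how two of them – last click and even distribution (linear) – would assign credit across a hypothetical customer journey. The channel names and conversion value are invented for illustration and are not taken from any particular platform.

```python
# Minimal sketch of two rules-based attribution models (illustrative only).
# The journey, channel names and conversion value are hypothetical.

def last_click(journey, conversion_value):
    """Assign all credit to the final touchpoint before conversion."""
    return {journey[-1]: conversion_value}

def even_distribution(journey, conversion_value):
    """Split credit equally across every touchpoint (the linear model)."""
    share = conversion_value / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

journey = ["display", "paid_search", "email", "paid_search"]
print(last_click(journey, 100.0))         # {'paid_search': 100.0}
print(even_distribution(journey, 100.0))  # display 25, paid_search 50, email 25
```

Neither rule considers whether a touchpoint actually changed the outcome – credit is fixed by position alone, which is precisely the subjectivity described above.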

Limited existing models

If we look at the evolution of digital marketing since its inception in the late 90s – with the development of real-time bidding (RTB), mobile and more – and compare it with the stagnation of the platforms and methodologies used to measure it, there is a clear disconnect.

Clicks are not fully representative of a channel's value, and refusing to acknowledge this perpetuates a deception in the form of inadequate measurement.

Buying ads on a fraudulent site, for example, usually comes very cheap (impressions at 1-5p CPM) and delivers good click-through rates (CTRs above 1%).

So, if you have fraud in your advertising mix, what you see as an advertiser is that for a small amount of money you get a good number of clicks.

When you remove these fraudulent clicks, it can look as though traffic is being bought on more expensive sites with worse click-through rates, because the marketing key performance indicators (KPIs) will often go down.
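
To put rough numbers on this, the back-of-the-envelope sketch below reuses the ranges quoted above (1-5p CPM, CTR above 1%) for the fraudulent inventory; the figures for the legitimate site are purely assumed for illustration.

```python
# Back-of-the-envelope illustration of how cheap fraudulent inventory
# flatters click-based KPIs. The fraudulent figures reuse the ranges quoted
# in the text; the "legitimate" figures are assumptions for illustration.

def cost_per_click(cpm_pence, ctr):
    """Cost per click in pence, given cost per 1,000 impressions (CPM) and CTR."""
    clicks_per_thousand_impressions = 1000 * ctr
    return cpm_pence / clicks_per_thousand_impressions

fraud_cpc = cost_per_click(cpm_pence=3, ctr=0.012)    # ~0.25p per click
legit_cpc = cost_per_click(cpm_pence=200, ctr=0.002)  # ~100p per click

print(f"Fraudulent site CPC: {fraud_cpc:.2f}p")
print(f"Legitimate site CPC: {legit_cpc:.2f}p")
# Blending the two makes the overall cost-per-click KPI look far better than
# the legitimate inventory alone - until the fraudulent clicks are stripped out.
```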

This is why, in display advertising, it is very common to see 75-80% of conversions driven by view-through rather than click-through.

A click-centric measurement model distorts actual performance and so prevents businesses from capitalising on the full potential of their marketing investment.

Taking on the challenge

A variety of factors contribute to the current lack of sophistication in the measurement of digital marketing attribution.

The first of these is a shortage of the technical skills needed to employ big data technologies to manage the complexity of processing and transforming extreme volumes of hyper-structured data (web log files) for advanced statistical analysis.

It’s worth clarifying the term ‘hyper-structured’ – weblogs are sometimes referred to as unstructured data.

The reality is that once you look into the details of how the data is generated by digital platforms, it becomes apparent that the structure is there within the data itself, and it can be derived through string parsing.

It’s not a structure in the traditional BI sense of columns and rows, but nonetheless, structure it is.
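
As a small illustration of what deriving structure through string parsing can look like in practice, the sketch below splits an Apache-style combined log line into named fields. The sample line and the choice of log format are invented for the example and are not taken from any specific platform.

```python
import re

# Minimal sketch: deriving structure from a raw web log line by string parsing.
# The sample line follows the common Apache "combined" log format; the line
# itself is invented for illustration.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] '
        '"GET /landing?utm_source=display HTTP/1.1" 200 5120 '
        '"https://example.com/ad" "Mozilla/5.0"')

match = LOG_PATTERN.match(line)
if match:
    record = match.groupdict()  # structured fields derived purely from the string
    print(record["ip"], record["path"], record["referrer"])
```

The structure is never declared anywhere as columns and rows; it emerges from parsing the string itself.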

As a result, the technical skills required to manage this complexity are not as widely available as the traditional analytical skills used for the widely adopted, but inapt, current measurement methodologies (e.g. last click, first click).

The value of expertise in analytics

The second factor is the difficulty of striking the right balance between business acumen and a technical understanding of how digital channels and tracking platforms form part of the same ecosystem.

The ability to bring together the business dimension with the actual implementation of marketing objectives across digital channels and their tracking platforms is a crucial requirement for enabling advanced algorithmic attribution.

The third key factor is the scarcity of data science talent.

The right people can establish the link between business objectives and the statistical methodologies required to reach the desired outcome, but such people are still very hard to find.

This challenge will persist for the time being, and organisations whose core competence is data and analytics will be better placed to attract and retain such elusive talent.

Attribution should be approached by combining big data processing, advanced statistical techniques and constraint-based optimisation.
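
By way of a hedged example of the ‘advanced statistical techniques’ part of that combination, the sketch below implements one commonly used algorithmic approach – a Markov-chain ‘removal effect’ model – over invented toy journeys. The article does not prescribe this particular method, and the data and channel names are purely illustrative.

```python
from collections import defaultdict

# Hedged sketch of one common algorithmic attribution approach: a Markov-chain
# "removal effect" model. The journeys and channel names are invented toy data.

def transition_probs(journeys):
    """Count transitions start -> channels -> conv/null and normalise them."""
    counts = defaultdict(lambda: defaultdict(int))
    for path, converted in journeys:
        states = ["start"] + list(path) + ["conv" if converted else "null"]
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(outs.values()) for b, n in outs.items()}
            for a, outs in counts.items()}

def conversion_prob(probs, removed=None, iterations=200):
    """Probability of eventually reaching 'conv' from 'start', optionally
    treating one channel as removed (its traffic is lost to 'null')."""
    p = defaultdict(float)
    p["conv"] = 1.0
    for _ in range(iterations):
        for state, outs in probs.items():
            if state != removed:
                p[state] = sum(w * (0.0 if b == removed else p[b])
                               for b, w in outs.items())
    return p["start"]

journeys = [(["display", "search"], True), (["search"], False),
            (["display"], False), (["email", "search"], True)]

probs = transition_probs(journeys)
base = conversion_prob(probs)
channels = {c for path, _ in journeys for c in path}
effects = {c: 1 - conversion_prob(probs, removed=c) / base for c in channels}
total = sum(effects.values())
print({c: round(e / total, 2) for c, e in effects.items()})  # credit share per channel
```

The removal effect asks how much the overall conversion probability drops when a channel is taken out of the journeys, and credits each channel in proportion to that drop – a data-driven alternative to position-based rules.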

Organisations should always aim to have robust measurement frameworks to make the most effective business decisions, and attribution is no different in this respect.

Any model that allows brands to reduce uncertainty in decision-making should be pursued and statistical modelling for algorithmic attribution certainly falls under this ‘uncertainty reduction’ banner.
Sourced by Rafael Garcia-Navarro, Chief Analytics Officer UK&I, Experian
