Rethinking the attribution model in B2B marketing

Jonathan Marek, senior VP at Applied Predictive Technologies, explores the role of data analytics in optimising the B2B sales cycle

With long sales cycles and multiple stakeholders involved in most transactions, B2B marketers have long struggled to answer the question “which of our marketing actions actually accelerate the sales cycle and generate revenue?”

Imagine this common scenario: marketing sends an email campaign containing a thought leadership piece, and a potential buyer clicks on the link. After nurturing the lead, marketing passes it on to the sales team, which, through a combination of emails and calls, generates a meeting. Two sales meetings occur, at which point the prospect attends a marketing-organised conference. Sales continues to engage the prospect, and one month later, the prospect makes a purchase.

While this scenario is certainly a great example of sales and marketing collaboration, it also gives rise to some critical questions. Which of the marketing actions contributed the most to revenue? Would the prospect have purchased anyway without any of these touchpoints? How can this and other buying journeys help inform future marketing investments?

Searching for answers with attribution modelling

Today, most B2B marketers use attribution modelling to try to answer these questions. These models assign value to touchpoints and investments along the buying journey to identify which activities contributed to revenue. They are a helpful tool for generating hypotheses about the relative value of various actions. Different organisations use different methodologies; a few common approaches include first action attribution, last action attribution, and weighted attribution.

First action attribution gives full revenue credit to the initial touchpoint that generates a sales process. Last action attribution does the opposite, crediting the final touchpoint before the sale. Weighted attribution, meanwhile, assigns some value to each marketing touchpoint that occurs before a sale. Each methodology would lead to dramatically different answers when applied to the previously described scenario. According to the first model, the email campaign would be given full revenue attribution; the second model would assign full credit to the conference; finally, the third model would give some weight to both. Which, though, gets to the right answer and can help inform the best future decisions?
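To make the contrast concrete, the three rules above can be sketched in a few lines of Python. The touchpoint names and the revenue figure are illustrative stand-ins for the scenario described earlier, not real data or any vendor's implementation:

```python
def first_action(touchpoints, revenue):
    """Full credit to the first touchpoint in the journey."""
    return {touchpoints[0]: revenue}

def last_action(touchpoints, revenue):
    """Full credit to the last touchpoint before the sale."""
    return {touchpoints[-1]: revenue}

def weighted(touchpoints, revenue):
    """Equal share of credit to every touchpoint (linear weighting)."""
    share = revenue / len(touchpoints)
    return {t: share for t in touchpoints}

# Illustrative journey from the scenario: email, two meetings, conference.
journey = ["email_campaign", "sales_meeting_1", "sales_meeting_2", "conference"]

print(first_action(journey, 100_000))  # all credit to 'email_campaign'
print(last_action(journey, 100_000))   # all credit to 'conference'
print(weighted(journey, 100_000))      # 25,000 of credit to each touchpoint
```

The same journey and the same revenue produce three very different answers, which is precisely the problem the rest of this article addresses.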

Because attribution models are based on correlations, they can give decision-makers a rough estimate of which actions are productive, but they cannot accurately isolate the true cause-and-effect relationships between business actions and outcomes. This is a fundamental limitation, as the whole goal of attribution is to understand how taking a given action will affect customer behaviour.

Learnings from consumer-focused industries

Marketers can improve the accuracy of their attribution models by employing the same approach that nearly half of the top 100 US retailers, leading banks, manufacturers, restaurants, and hotels leverage today: test versus control analytics. By establishing precisely how business actions cause customers to change their behaviour, this methodology provides powerful insights that cut to the core of decision-making.

The concept is simple in theory: compare the performance of accounts that received a given marketing touchpoint (‘test’) with highly similar ‘control’ accounts that did not. The difference in performance (e.g., the relative change in the total value of their transactions) between the two groups after the marketing touchpoint (e.g. event attendance, email campaign, etc.) can then be confidently attributed to the action.
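A minimal sketch of this comparison is below. The account figures are invented for illustration; a real analysis would use many matched accounts per group and would test the result for statistical significance:

```python
from statistics import mean

def lift(test_before, test_after, control_before, control_after):
    """Incremental impact of a touchpoint, measured as the difference in
    relative change in purchase value between test and control groups."""
    test_change = (mean(test_after) - mean(test_before)) / mean(test_before)
    control_change = (mean(control_after) - mean(control_before)) / mean(control_before)
    return test_change - control_change

# Quarterly purchase value per account (illustrative numbers), measured
# before and after the marketing touchpoint, e.g. conference attendance.
impact = lift(test_before=[100, 110], test_after=[120, 135],
              control_before=[100, 95], control_after=[104, 99])
print(f"Incremental lift: {impact:.1%}")  # roughly 17% in this toy example
```

The control group's change captures what would have happened anyway; only the gap between the two groups is attributed to the marketing action.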

Challenges to getting it right

While the premise is straightforward, test versus control analysis is difficult to do well in practice. The challenge stems from a couple of fundamental issues.

First, accounts that receive marketing touches are likely to be fundamentally different from accounts that do not across a variety of dimensions (e.g., pipeline stage, opportunity size, etc.). For example, analysts wouldn’t want to compare new accounts with growing relationships to long-tenured accounts with consistent buying habits. Instead, analysts should make apples-to-apples comparisons by identifying other new accounts that are growing their relationships, but did not receive the marketing action in question.

Second, there are countless outside factors influencing any given account at any moment. Whether it is external economic factors, shifts in organisational strategy, or leadership changes, many different events that are outside marketing’s control influence results. This makes it very difficult to isolate the true incremental impact of any investment. The best way to overcome this challenge is to identify control accounts that behave extremely similarly to ‘test’ accounts in the months leading up to the marketing action – then, the impact of the action will be reflected in any subsequent change in purchase behaviour between the two groups.
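The matching step described above can be sketched as a simple nearest-neighbour search on pre-period purchase history. The account names and monthly figures are hypothetical, and real matching would typically use more dimensions than spend alone (industry, account age, pipeline stage):

```python
def match_controls(test_accounts, candidate_controls):
    """For each test account, pick the unused candidate control whose
    pre-period monthly purchase series is closest, by sum of squared
    differences, so the two groups behave similarly before the action."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    matches, used = {}, set()
    for name, series in test_accounts.items():
        best = min((c for c in candidate_controls if c not in used),
                   key=lambda c: distance(series, candidate_controls[c]))
        matches[name] = best
        used.add(best)
    return matches

# Pre-period monthly purchase values (illustrative).
test = {"acct_A": [10, 12, 15]}          # new account, growing relationship
controls = {"acct_X": [11, 12, 14],      # similar trajectory: good match
            "acct_Y": [50, 50, 50]}      # flat, tenured account: poor match
print(match_controls(test, controls))    # {'acct_A': 'acct_X'}
```

Because acct_X tracks acct_A closely before the touchpoint, any later divergence between the two is far more plausibly caused by the marketing action itself.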

Institutionalising test versus control analytics to drive innovation

When applied across marketing activities, cause-and-effect analytics allow executives to understand the real return on investments and inform better budget allocation decisions. Further, companies can analyse individual programs (e.g. annual conferences, specific email campaigns, etc.) through this test versus control lens to reach unprecedented levels of accuracy and granularity. By segmenting results, organisations can understand which accounts will respond best to a given marketing program and see which aspects of the program are most effective. 

This methodology can be applied to solve challenges beyond just marketing. Test versus control analysis is increasingly relied upon to improve B2B decisions related to sales force optimisation, pricing, and more. As B2B organisations become more sophisticated with this approach, they can proactively test new ideas with some accounts and not others to identify effective strategies and discard those that will not pay back. Rapid, proactive testing allows companies to be more innovative by enabling them to try risky ideas on a small scale, and only move forward with those that meet the desired ROI hurdles. Organisations that institutionalise test versus control analytics not only stand to improve attribution accuracy, but will also make better decisions and push traditional boundaries.

Don’t get left behind relying on correlations and guesswork – use cause-and-effect analytics to isolate the true impact of business actions on revenue generation.