Attribution Surveys

A guide to understanding the use of attribution surveys.

The predominant task of marketing attribution is to rebuild a customer's journey to purchase, understand which marketing channels or campaigns the user encountered during this journey, and measure the importance (known as the "weight") of each channel in the customer's buying decision.
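To make "weight" concrete, here is a minimal sketch of one common weighting scheme, linear attribution, which splits credit evenly across every touchpoint in a journey. The journeys and channel names below are hypothetical, and real models often weight first or last touches differently.

```python
from collections import defaultdict

def linear_attribution(journeys):
    """Split one unit of conversion credit evenly across the
    touchpoints in each journey, then sum credit per channel."""
    weights = defaultdict(float)
    for touchpoints in journeys:
        credit = 1.0 / len(touchpoints)
        for channel in touchpoints:
            weights[channel] += credit
    return dict(weights)

# Hypothetical journeys: each list is one customer's path to purchase.
journeys = [
    ["Podcast", "Instagram", "Branded Search"],
    ["Branded Search"],
    ["Instagram", "Branded Search"],
]
print(linear_attribution(journeys))
# {'Podcast': 0.33..., 'Instagram': 0.83..., 'Branded Search': 1.83...}
```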

If done correctly, an accurate attribution model can have a massive effect on the growth of a business, as reinvestment in high-performing channels yields a virtuous growth cycle -- at least theoretically. Attribution, however, is a naturally combative process because every channel wants to take as much credit as it can reasonably justify. This dilemma is compounded by the fact that some channels (those with pixel data) are at a reporting advantage over other channels, which is often misinterpreted as a performance advantage.

This is where sophisticated marketers rely on post-purchase checkout surveys, gathering a definitive and unbiased response to the question, "How did you hear about us?" Market researchers know this as the HDYHAU survey. The data collected in this survey comes from customers directly, offering a reality check on vendor-reported data and filling attribution gaps for purchase paths that are difficult to measure and channels that have yet to be discovered.
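As an illustration of that reality check, the sketch below compares each channel's share of pixel-reported conversions against its share of survey responses. The channel names and counts are made up for the example; a channel without pixel coverage (here, a podcast) tends to show a large positive gap.

```python
def share(counts):
    """Convert raw counts to each channel's fraction of the total."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Hypothetical data: pixel-reported conversions vs. HDYHAU responses.
pixel_reported = {"Paid Social": 420, "Paid Search": 300, "Podcast": 10}
survey_reported = {"Paid Social": 180, "Paid Search": 150, "Podcast": 170}

pixel_share, survey_share = share(pixel_reported), share(survey_reported)
for channel in pixel_share:
    gap = survey_share[channel] - pixel_share[channel]
    print(f"{channel}: pixel {pixel_share[channel]:.0%}, "
          f"survey {survey_share[channel]:.0%}, gap {gap:+.0%}")
```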

Survey Design

Given the multi-channel make-up of a modern marketer's arsenal, attribution surveys can get quite complex. There are trade-offs that must be made when building out the survey question.

Our two-step Response Clarification format allows for increased precision, though you should expect some drop-off in the second question's response rate. If you run a simple single-level survey instead, you lose the opportunity to gather specific channel- or campaign-level information (e.g., which podcast?).

To achieve the best of both worlds, our Response Clarification surveys capture the first response even if the user abandons the second question. Given this safety net, we recommend using a two-level survey.
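To illustrate that safety net, here is a minimal sketch of how a two-level response might be recorded, with the first answer persisted immediately so an abandoned follow-up still leaves usable data. The structure and field names are assumptions for illustration, not a description of our actual API.

```python
# Hypothetical two-level question: top-level channels, each with
# an optional follow-up prompt for more granular detail.
QUESTION = {
    "prompt": "How did you hear about us?",
    "options": {
        "Podcast": "Which podcast?",
        "Influencer": "Which influencer?",
        "News Article": None,  # no follow-up for this option
    },
}

def record_response(store, order_id, level_one, level_two=None):
    """Persist the first-level answer immediately; attach the
    clarification only if the customer completes it."""
    if level_one not in QUESTION["options"]:
        raise ValueError(f"unknown option: {level_one}")
    store[order_id] = {"channel": level_one}
    if level_two is not None:
        store[order_id]["detail"] = level_two

responses = {}
record_response(responses, "order_1001", "Podcast")  # follow-up abandoned
record_response(responses, "order_1002", "Podcast", "Acme Podcast")
print(responses)
# order_1001 still carries usable channel data despite the drop-off.
```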

How does wording affect survey completion rates?

Although we haven't seen a large variance in survey completion rates by the number of options presented to the customer, we do see a variance when the complexity of the language is increased. The more thinking a user has to do, the less likely they are to respond (or to respond accurately). On occasion the customer will fall back to "Other", but when no option sounds familiar they're more likely to abandon the survey altogether.

Therefore, a decrease in confusing language = an increase in response confidence. You should be optimizing for data integrity and survey completion rate, not survey completion rate alone. To that end, we recommend avoiding marketing-insider jargon and using the follow-up question to probe for a more granular response. Here are a few examples for comparison:

Poor Language: "PR"
Better Language: "News Article"

Poor Language: "Instagram Influencer >> (Influencer Name)"
Better Language: "Influencer >> (Influencer Name)"

Poor Language: "Video Ad"
Better Language: "YouTube >> (Channel Name)"

The "Other" Field

If your "Other" field is among the top 3 responses in your question dashboard, it's most likely an issue with the language in your survey. We recommend recategorizing a recent date range of your "Other" responses (via our export), analyzing which channels users are manually submitting at a high rate, and then offering those channels by name in your revised survey.
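A minimal sketch of that workflow, assuming a CSV export with "response" and "other_text" columns (the file name and column headers are assumptions about the export format, not a specification of it):

```python
import csv
from collections import Counter

def top_other_responses(export_path, n=10):
    """Count normalized free-text "Other" submissions so the most
    frequent write-ins can be promoted to named survey options."""
    counts = Counter()
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed columns: "response" (chosen option) and
            # "other_text" (free text entered when "Other" is chosen).
            if row["response"] == "Other" and row.get("other_text"):
                counts[row["other_text"].strip().lower()] += 1
    return counts.most_common(n)

# Example (assuming the export file exists):
# top_other_responses("survey_export.csv") might return
# [("tiktok", 113), ("friend recommended", 57), ...]
```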