For most marketers, certainty is a luxury. You’re rarely working with a complete view of your customers (although tech stack integration helps).
Instead, you make the most of the data you have—with analysis, logical deduction, inference, and maybe even a little educated guesswork. And that combination of art and science is why ad effectiveness is so hard to reliably measure.
Put it this way: say you are a clothing retailer looking to promote a new line of designer jeans for young women. You might create a segment of 18-24-year-old women interested in high-street fashion and push display and social ads to that group.
Then: great news! You see an uptick in conversions among this segment. But which buyers saw your ads?
As we covered in our blog on ad effectiveness, with identity resolution you can cross-reference exposure data and click-through rates with conversions to understand which anonymous IDs in that segment saw (and engaged with) your ads and also converted.
But advanced marketers are taking things even further—looking to measure not just correlation between ad exposure and customer spend, but rather direct causation.
And that’s what incrementality testing can deliver.
Incrementality Testing Makes Measurement More Scientific
Incrementality testing (also known as uplift modeling) is one of the most accurate ways for marketers to measure the precise impact of a discrete element or activity within a campaign, because it isolates true cause and effect.
It works in one of two main ways:
- Running two almost identical campaigns simultaneously—one for a control group and one for the target group—with some minor variation for the latter measured in isolation.
- Running a campaign where you expose one group to a creative and have an unexposed hold-out group. Ideally, all things being equal, the only difference between the groups is the exposure, so you can isolate its impact.
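Under the hood, the measurement behind a hold-out test like the second approach boils down to a simple lift calculation. Here's a minimal sketch in Python; all numbers are hypothetical, for illustration only:

```python
# A minimal sketch of the exposed-vs-holdout lift calculation.
# All figures below are hypothetical, not real campaign data.

def incremental_lift(exposed_conversions, exposed_size,
                     holdout_conversions, holdout_size):
    """Return absolute and relative lift of the exposed group over the holdout."""
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    absolute_lift = exposed_rate - holdout_rate
    relative_lift = absolute_lift / holdout_rate if holdout_rate else float("inf")
    return absolute_lift, relative_lift

# Example: 1,200 conversions from 50,000 exposed users,
# vs. 400 conversions from a 25,000-person hold-out group.
abs_lift, rel_lift = incremental_lift(1200, 50000, 400, 25000)
print(f"Absolute lift: {abs_lift:.2%}, relative lift: {rel_lift:.0%}")
# → Absolute lift: 0.80%, relative lift: 50%
```

A positive absolute lift suggests the exposure itself drove extra conversions; the relative lift expresses that gain against the hold-out baseline.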
The Hard Thing About Incrementality Testing
Until now, incrementality testing has been hard to operationalize outside of specific environments. You’ve probably already encountered some form of incrementality testing through companies like Facebook and Google. They often provide incrementality testing as part of their service to validate their effectiveness and help customers justify further spend within those platforms.
However, those companies have a significant certainty advantage. They’re closed ecosystems with data based on everything people do once they’re logged in.
If you want to run people-based incrementality testing across the open internet, it’s impossible to do well without a reliable way to resolve, or link, the various identifiers of the consumers behind the behaviors you’re trying to measure. Without that capability, you’re shooting in the dark: exposure groups can be contaminated, particularly once you start looking across channels or platforms.
Enter identity resolution.
Why Identity Resolution Opens Up Incrementality Testing Everywhere
When you resolve the identifiers dispersed across your tech stack, you effectively create your own incrementality testing ecosystem that’s more accurate and controlled.
Identity resolution makes it possible to assign known exposure groups rather than random ones, which increase the chances of drawing faulty or incomplete conclusions. That means you can measure the relative impact of each variable in your marketing activities across any channel where you can resolve identifiers to a people-based level.
Let’s see what that could look like:
10% off vs. 20% off
A simple but effective use case for incrementality testing is to measure your audience’s responsiveness to offers.
Say the clothing retailer from earlier wanted to optimize their return on promotional discounts for lapsed customers. They could take a segment of historically loyal customers who haven’t bought anything in six months and split it in half, enticing one group back with 20% coupons and the other with 10% coupons.
If the 10% group performs similarly to the 20% group, they’d know they could win this audience back without overspending on future promotions.
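To make that comparison concrete, here's a hypothetical sketch of how the two coupon cells might be scored, factoring the discount into the net revenue each cell returns. All figures are made-up assumptions for illustration:

```python
# Hypothetical sketch of comparing the 10%-off and 20%-off test cells.
# Cell sizes, buyer counts, and order values are illustrative assumptions.

def cell_summary(customers, buyers, avg_order_value, discount):
    """Conversion rate and discounted revenue per targeted customer."""
    conversion_rate = buyers / customers
    revenue_per_customer = conversion_rate * avg_order_value * (1 - discount)
    return conversion_rate, revenue_per_customer

rate_10, rev_10 = cell_summary(5000, 400, 80.0, 0.10)  # 10% coupon cell
rate_20, rev_20 = cell_summary(5000, 430, 80.0, 0.20)  # 20% coupon cell

# If conversion is similar but net revenue per customer is higher with
# the smaller discount, the 10% coupon wins on return.
print(f"10% cell: {rate_10:.1%} conversion, ${rev_10:.2f} per customer")
print(f"20% cell: {rate_20:.1%} conversion, ${rev_20:.2f} per customer")
```

In this invented example the 20% coupon converts slightly better but returns less net revenue per targeted customer, so the lighter discount would be the better offer.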
But what happens if you wanted to measure effectiveness across different channels?
Display vs. Social
Say that same clothing retailer wanted to measure precisely how well their display ads were performing. They could take a customer segment (say, 18-24-year-old women) and split it into a control group and a test group.
They’d show the control group the display ads and the test group the social ads (or show the control group nothing and the test group both, and so on). The result is an objective picture that reveals exactly how spend on each channel drives performance.
You can use this basic comparative structure to measure the relative effectiveness of any aspect of your spend across different channels. So long as you’re working with resolved, people-based identities, the possibilities for accurate testing are endless. And the learnings and optimization opportunities only improve with a structured, scaled approach to continuous testing and learning.
Video + Display vs. Video + Display + Retargeting
Thinking bigger still, you could use incrementality testing to rationalize the cost and complexity of your wider media plan.
Say that clothing store was running a combination of video, display, and retargeting ads. Measuring the exact ROI for only one of those channels is incredibly complex—not least because they all use different DSP partners. But with identity resolution-enabled incrementality testing, it’s easier than you might think. In fact, it’s the same basic equation, albeit scaled up.
If a brand wanted to understand how effective its retargeting spend was, they could split out an audience segment into a control group and a test group. The control group would see video, display, and retargeting ads, and the test group would just see video and display.
Using identity resolution-enabled incrementality testing, they could see whether the control segment outperformed the test segment by enough to justify the retargeting spend.
If the results were positive, the brand could experiment with upping its retargeting spend to see how that impacts sales. Or if they weren’t, they could cut costs and scale back their media mix—all while gaining valuable customer insight. Whatever the outcome, it’s win-win-win.
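As a rough sketch, that decision comes down to whether the incremental revenue from the group that saw retargeting covers its cost. The function and every figure below are hypothetical assumptions, not a prescribed method:

```python
# Illustrative sketch: does the retargeting lift cover its cost?
# Group sizes, conversion rates, and spend are hypothetical assumptions.

def retargeting_net_value(control_rate, test_rate, group_size,
                          avg_order_value, retargeting_spend):
    """Incremental revenue from retargeting, minus its cost.

    Per the setup above: the control group saw video + display + retargeting,
    the test group saw only video + display, so the lift attributable to
    retargeting is control_rate - test_rate.
    """
    incremental_revenue = (control_rate - test_rate) * group_size * avg_order_value
    return incremental_revenue - retargeting_spend

net = retargeting_net_value(control_rate=0.030, test_rate=0.024,
                            group_size=100_000, avg_order_value=80.0,
                            retargeting_spend=30_000)
print(f"Net incremental value of retargeting: ${net:,.0f}")
```

A positive result would argue for holding or increasing the retargeting budget; a negative one would argue for scaling it back, exactly as described above.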
Leveraging identity resolution to run incrementality testing turns the open internet into your testing playground. But the first step is to resolve the identifiers spread amongst the data and systems within your tech stack in a privacy-first manner.
Ready to dive in?