Demystifying Measurement: Why Methodology Matters

Marketers today face as many ways to measure campaign performance as there are advertising channels, and both continue to grow in number and complexity.

Recent research has shown that measurement is the foundation for campaign decisions: the definition of success varies based on the measurement tests, metrics and methodologies used. To further explore the topic, Facebook’s Neha Bhargava and Dan Chapsky partnered with Brett Gordon and Florian Zettelmeyer, professors at Northwestern’s Kellogg School of Management.* Together, the team co-authored a white paper that explores what constitutes good measurement and what success metrics really say about ad performance.

To answer these questions, the team explored how well observational approaches to measurement fare against randomized controlled trials on the same marketing campaigns.

Randomized controlled trials (RCTs) are studies in which individual users are randomly assigned to either a treatment group (meaning ads may appear in their feeds) or a control group that will never see ads. RCTs are considered the “gold standard” of measurement because they ensure that control groups and test groups are comparable in makeup and allow analysts to isolate the causal impact of ad exposure.

Observational methods, on the other hand, rely on the measures that an advertiser can collect without having a randomized control group. For example, after targeting a group of users with an ad, the advertiser might compare those who saw the ad with those who did not.
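A hypothetical illustration (not part of the study, with made-up numbers) of why this distinction matters: if the ad platform tends to show ads to users who were already more likely to convert, the exposed and unexposed groups differ before any ad runs, so a naive observational comparison overstates lift. Random assignment, by contrast, keeps the RCT groups comparable. The Python sketch below simulates both setups on the same population.

```python
import random

random.seed(0)

N = 100_000
TRUE_AD_EFFECT = 0.02  # the ad adds 2 points of conversion probability

def simulate():
    """Estimate ad lift two ways on a simulated population."""
    rct_treat, rct_ctrl = [], []
    obs_exposed, obs_unexposed = [], []
    for _ in range(N):
        intent = random.random()        # latent purchase intent, 0..1
        base_p = 0.05 + 0.10 * intent   # baseline conversion probability

        # RCT: exposure assigned by coin flip, independent of intent.
        treated = random.random() < 0.5
        p = base_p + (TRUE_AD_EFFECT if treated else 0.0)
        (rct_treat if treated else rct_ctrl).append(random.random() < p)

        # Observational: exposure correlates with intent, because the
        # (simulated) platform targets users likely to convert anyway.
        exposed = random.random() < intent
        p = base_p + (TRUE_AD_EFFECT if exposed else 0.0)
        (obs_exposed if exposed else obs_unexposed).append(random.random() < p)

    mean = lambda xs: sum(xs) / len(xs)
    rct_lift = mean(rct_treat) - mean(rct_ctrl)
    obs_lift = mean(obs_exposed) - mean(obs_unexposed)
    return rct_lift, obs_lift
```

With these assumed parameters, the RCT estimate lands near the true 2-point effect, while the exposed-vs-unexposed comparison mixes the ad's effect with the pre-existing difference in intent and comes out several times larger.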

The study’s results were presented to the ad research community at the ARF Re!Think Conference in New York City on March 15, 2016, where the Facebook researchers and professors discussed the findings and what they mean for the marketing community.

The research looked at 12 US-based Facebook ad campaigns totaling over 1.4 billion impressions across industries and marketing objectives. These campaigns were run through the Facebook Lift solution, which implements an RCT. The campaigns were blinded, so no personally identifiable information about users or advertisers was used. The researchers compared the results from different observational measurement approaches to the true advertising impact as measured by the RCTs. Below is an example of this comparison for one study, where observational methods overestimated lift by nearly three times.

[Figure: observational lift estimates compared with RCT-measured lift for one study]

Across the 12 studies, observational methods were not a reliable substitute for RCTs. The most sophisticated observational method did well in some studies, but it was impossible to predict when that would be the case. Through a marketer’s lens, each method tells a different story about the performance of the ad, which can be explained by the biases and nuances of each methodology.


What it means for marketers:

Critically evaluating your measurement solution will help you identify the best approach, whether that means continuing what you’re already doing or adopting a new one.

Remember that methodology matters: Ask questions about how your advertising effectiveness is being measured and why it makes the most sense for your campaign.

Define your measurement strategy: A campaign can be measured in many different ways. Understanding where your measurement comes from will help you better interpret your results.

Be smart about measurement: Measurement will always play a part in your work as a marketer. Arming yourself with knowledge of measurement as a larger practice will help you be an active participant in the conversation as measurement strategies continue to evolve alongside the industry.


For more insights on this topic, check out Kellogg Insight.


* Gordon and Zettelmeyer have no financial interest in Facebook and were not compensated in any way by Facebook or its affiliated companies for engaging in this research.
Source: “A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook” by Facebook’s Neha Bhargava and Dan Chapsky and Northwestern University’s Brett Gordon and Florian Zettelmeyer, March 2016.