5 Hidden Costs of Incrementality

Everything in marketing comes with a cost. Known costs are acceptable because they can be planned for; hidden costs wreak havoc on campaigns, so it’s your responsibility as a marketer to do everything you can to avoid them. In this blog, you’ll learn about the 5 hidden costs of incrementality and how they’re inherently tied to the construction of control groups, so you can spend your budget as efficiently as possible going forward.

Opportunity cost #1 — Wasted moments


To deploy incrementality properly, your control group needs to mimic your test group. If it doesn’t, the results of your measurement won’t generalize to your broader audience.

That means your control group must consist of people you WANT to talk to as a marketer under normal circumstances—just like your test group. 

For example, pretend you’re planning to advertise a product to a group of 100 ideal prospects. To evaluate your advertisement using incrementality, you also plan to withhold 10 of those prospects as a control for the test.

That’s 10 ideal engagements you’re going to sacrifice.

In other words, your opportunity cost to use incrementality is 10%, because only 90 out of the 100 ideal prospects will see your ad. That means you’re losing $0.10 on every dollar just to measure the effectiveness of your campaign—a steep price to pay.     
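To make the arithmetic concrete, here’s a minimal sketch (in Python, using the hypothetical numbers from the example above) of how the holdout rate translates into a cost per budgeted dollar:

```python
# A minimal sketch of the arithmetic above, using the hypothetical
# numbers from the example (100 ideal prospects, 10 held out).
ideal_prospects = 100     # people you would normally target
holdout = 10              # prospects withheld as the control group

holdout_rate = holdout / ideal_prospects        # 0.10
reached = ideal_prospects - holdout             # 90 prospects see the ad

# Every budgeted dollar only works against 90% of the audience you
# identified, so roughly $0.10 per dollar is the price of measurement.
opportunity_cost_per_dollar = holdout_rate

print(f"Holdout rate: {holdout_rate:.0%}, prospects reached: {reached}")
print(f"Opportunity cost per dollar: ${opportunity_cost_per_dollar:.2f}")
```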

Opportunity cost #2 — Lost-forever engagements


When you create a control group of ideal prospects, some of those prospects are one-and-done in the bid stream. That means if you use incremental measurement regularly, you’ll likely miss your one and only chance to engage these “great-fit” consumers. At scale, these lost engagements lead to lost sales—sales you need to boost your bottom line.

Opportunity cost #3 — Incomplete testing


Just because an ideal prospect ends up in your test group doesn’t mean they’ll actually be tested. All too often, marketers assume every consumer in their test group gets exposed to the creative (e.g., a paid search ad) being tested. Unfortunately, not everyone within a test group is reached.

For example, assume you’re marketing a skin cream that’s popular with women between 40 and 65. These women live in sunny climates, earn more than $50,000 a year, and browse websites related to health and wellness.

Based on these parameters, Rachel, a 40-year-old living in Palm Springs, CA, with an annual salary of $88,000 who reads shape.com and womenshealth.com, would be an ideal prospect for your campaign.

But so would Judy—a 64-year-old woman in Boca Raton, FL with an annual salary of $112,000 who occasionally visits womensfitness.com and self.com.

Judy, who’s 24 years older, spends far less time on the internet than Rachel.

Rachel checks her email 3 times a day and visits her favorite websites at least once per day. Judy only checks her email and visits her favorite websites once per week. 

Both women are ideal prospects for your brand, your products, and your marketing. However, Judy’s infrequent internet use will taint whatever incremental evidence you’re trying to tease out of your measurement.
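To see why this matters for measurement, consider a simplified, hypothetical illustration: when only part of your test group is actually reached, the lift you measure across the whole group is diluted by that exposure rate. The sketch below uses invented numbers purely for illustration.

```python
# Illustrative only: how unexposed test-group members dilute measured lift.
# All numbers are hypothetical assumptions, not real campaign data.
baseline_rate = 0.02      # purchase rate with no ad exposure (control group)
true_lift = 0.50          # +50% lift among people who actually see the ad
exposure_rate = 0.60      # share of the test group that is actually reached

exposed_rate = baseline_rate * (1 + true_lift)

# The test group blends exposed and unexposed members.
test_rate = exposure_rate * exposed_rate + (1 - exposure_rate) * baseline_rate

measured_lift = test_rate / baseline_rate - 1
print(f"True lift among the exposed: {true_lift:.0%}")                    # 50%
print(f"Lift measured across the whole test group: {measured_lift:.0%}")  # ~30%
```

The more Judys in your test group, the lower the exposure rate, and the more the real effect gets watered down.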

Opportunity cost #4 — Non-representative audiences


Most marketers think they need to randomize and split their audience into a test group and a control group before measuring the lift created by a given campaign. This assumption is mistaken.

Assume you create a small control group to mitigate your exposure to lost ideal engagements (opportunity cost #1). In this scenario, the sparsity of available data will almost certainly leave your control group overweighted in one area.

For example, 50% of your control group might be people who visit your website once a week, whereas only 25% of your test group is composed of once-a-week visitors. If there’s a correlation between website visits and purchases—and you’re measuring for incremental lift in sales—your results are going to be inaccurate.
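A rough, hypothetical calculation shows how that imbalance skews the result. Assume, purely for illustration, that weekly visitors convert at a higher baseline rate than everyone else:

```python
# Hypothetical illustration of how an overweighted control group biases lift.
# Assumed baseline conversion rates (invented for illustration):
weekly_rate = 0.05        # people who visit your website once a week
infrequent_rate = 0.01    # everyone else

def blended_rate(share_weekly: float) -> float:
    """Expected conversion rate for a group with the given visitor mix."""
    return share_weekly * weekly_rate + (1 - share_weekly) * infrequent_rate

control_baseline = blended_rate(0.50)   # control group: 50% weekly visitors
test_baseline = blended_rate(0.25)      # test group: only 25% weekly visitors

# Even with ZERO true ad effect, the two baselines differ, so the "lift"
# you measure is an artifact of the mismatch, not of your campaign.
spurious_lift = test_baseline / control_baseline - 1
print(f"Control baseline: {control_baseline:.1%}, test baseline: {test_baseline:.1%}")
print(f"Measured 'lift' with no real ad effect: {spurious_lift:.0%}")  # about -33%
```

In this made-up case, the mismatch alone would make a perfectly good campaign look like it suppressed sales by a third.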

Avoiding such an issue requires equal representation (ages, genders, incomes, geographies, etc.) within both audiences. That can’t be done when control groups are kept disproportionately small.  

Opportunity cost #5 — Deploying a 50/50 split


The easiest way to ensure equal representation between the test group and the control group is by taking your list of randomized ideal prospects and splitting it straight down the middle, 50-50.
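Mechanically, this is nothing more than a random shuffle followed by cutting the list in half. A minimal sketch, with hypothetical prospect IDs:

```python
import random

# A minimal sketch of a 50/50 split (hypothetical prospect IDs).
prospects = [f"prospect_{i}" for i in range(1, 101)]

random.shuffle(prospects)                  # randomize the order
midpoint = len(prospects) // 2
test_group = prospects[:midpoint]          # 50 people who will see the ad
control_group = prospects[midpoint:]       # 50 ideal prospects you hold out

print(len(test_group), len(control_group))  # 50 50
```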

Splitting 50/50 solves the issues raised in opportunity costs #3 and #4.

However, doing this requires sacrificing half of your ideal audience for your test. These are people you know will be a great fit for your products and services. 

You won’t be able to talk to or engage any of the consumers who fall into your control group. 

Are you willing to sacrifice contact with ideal prospects just to measure the effectiveness of your creative or campaign?

Incrementality in summary…


Why do so many marketers do incrementality this way? Why are they OK eating these unappetizing opportunity costs? The truth is, they don’t have to.

There is no need to divide people into buckets before running an incrementality test. They can be split into a control group and a test group after the fact.

To learn how, read our whitepaper: The Costly Mistakes Marketers Make With Incrementality Measurement.

Need something else? Talk to one of the measurement experts at Zeta.
