Attribution models overestimate ad performance by 20-40% on average (Gartner, 2025). That means if Facebook says it drove 100 sales last month, the number your ads actually caused could be closer to 80, or even 60. Uber tested this in 2018 by pausing Meta ads for three months. The result? Zero measurable business impact. They reallocated $35 million in annual ad spend. The question isn’t whether your attribution numbers are inflated. The question is by how much.
How to Know Which Number Is Real
The answer is incrementality testing—a method that separates the sales your ads caused from the sales that would have happened anyway. And it’s no longer reserved for enterprise brands with six-figure testing budgets.
Why Attribution Models Overcount
Attribution models answer a simple question: which marketing touchpoint gets credit for a sale? Last-click attribution gives all credit to the final interaction. Multi-touch models spread credit across several touchpoints. Data-driven attribution uses algorithms to weight each channel’s contribution.
The problem? Every one of these models assumes the sale required those touchpoints to happen. That’s often wrong.
A customer sees your Facebook ad on Monday, Googles your brand on Wednesday, and buys on Friday. Last-click credits Google. Multi-touch credits both. But neither asks the real question: would this customer have bought anyway?
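To make that concrete, here is a minimal sketch (TypeScript, with hypothetical touchpoint names) of how those two models would split credit for the Monday-to-Friday journey above:

```typescript
// How two common attribution models split credit for the journey above.
// Touchpoint names are illustrative.

const touchpoints = ["facebook_ad", "google_brand_search"]; // chronological order

// Last-click: the final interaction gets all the credit.
const lastClick: Record<string, number> = {
  [touchpoints[touchpoints.length - 1]]: 1.0,
};
// => { google_brand_search: 1 }

// Linear multi-touch: credit is split evenly across every interaction.
const linear: Record<string, number> = Object.fromEntries(
  touchpoints.map((t) => [t, 1 / touchpoints.length])
);
// => { facebook_ad: 0.5, google_brand_search: 0.5 }

// Neither model asks whether the customer would have bought with zero ads.
console.log({ lastClick, linear });
```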
Platform dashboards make this worse. Facebook counts every conversion where a user clicked an ad in the previous 7 days or viewed one in the previous 24 hours, even if that user was already heading to your store. Google Ads does the same with its own attribution window. Each platform counts the same sale as its own win. If you’ve ever noticed that Facebook reports 85 sales, Google reports 60, and WooCommerce records 50, this is exactly why.
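A toy version of the double counting, assuming the simplified 7-day-click / 1-day-view defaults described above (real platform logic is more involved):

```typescript
// One order, two dashboards claiming it. Each platform checks only its own
// attribution window, so both record the sale as a conversion.

interface TouchPoint {
  platform: "facebook" | "google";
  type: "click" | "view";
  daysBeforePurchase: number;
}

// Simplified defaults: 7-day click window, 1-day view window.
function claimsConversion(touch: TouchPoint): boolean {
  return touch.type === "click"
    ? touch.daysBeforePurchase <= 7
    : touch.daysBeforePurchase <= 1;
}

// A single WooCommerce order preceded by two ad interactions:
const journey: TouchPoint[] = [
  { platform: "facebook", type: "click", daysBeforePurchase: 4 },
  { platform: "google", type: "click", daysBeforePurchase: 2 },
];

const claimants = journey.filter(claimsConversion).map((t) => t.platform);
console.log(claimants); // ["facebook", "google"] -- one sale, two "wins"
```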
Brands without incrementality testing waste an average of 23% of marketing spend on non-incremental activities (Marketing Science Institute, 2025). For a WooCommerce store spending $10,000 a month on ads, that’s $2,300 per month funding conversions that would have happened for free.
What Incrementality Testing Actually Measures
Incrementality measures the additional conversions, revenue, or outcomes caused by a marketing activity that would not have occurred without it. Instead of asking “who touched the sale,” it asks “did the ad make the sale happen?”
The method is straightforward. You split your audience (or geographic regions) into two groups. The test group sees your ads. The control group—the holdout—doesn’t. You compare results. The difference between the two groups represents your incremental lift: the sales your ads actually caused.
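The arithmetic is simple enough to sketch. Everything below is illustrative; the group sizes and conversion counts are made up for the example:

```typescript
// Incremental lift from a holdout test. All numbers are illustrative.

interface GroupResult {
  users: number;       // audience size in the group
  conversions: number; // conversions observed during the test window
}

function incrementalLift(test: GroupResult, control: GroupResult) {
  const testRate = test.conversions / test.users;
  const controlRate = control.conversions / control.users;
  const absoluteLift = testRate - controlRate;     // extra conversions per user
  const relativeLift = absoluteLift / controlRate; // lift over the baseline
  // Scale the per-user lift back up to estimate conversions the ads caused.
  const incrementalConversions = absoluteLift * test.users;
  return { testRate, controlRate, relativeLift, incrementalConversions };
}

// Example: 50,000 users saw ads, 50,000 were held out.
const result = incrementalLift(
  { users: 50_000, conversions: 600 }, // exposed group: 1.2% converted
  { users: 50_000, conversions: 500 }  // holdout group: 1.0% converted anyway
);
console.log(result);
// relativeLift = 0.2, incrementalConversions = 100:
// the ads caused ~100 sales, not the 600 an attribution report might claim.
```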
52% of brands and agencies now use incrementality testing to measure and optimize campaigns (eMarketer/TransUnion, 2025). Adoption has accelerated because both Google and Meta have made testing more accessible. Google lowered its incrementality test minimum from $100,000 to $5,000 in 2025. Meta launched incremental optimization controls that showed a 24% reduction in cost per acquisition in initial tests (Meta, 2025).
The metric that matters here is incremental ROAS (iROAS)—the return on ad spend calculated using only incremental revenue. Platform-reported ROAS might show 5x. Your iROAS might be 2x. Both numbers are “correct,” but only one tells you whether increasing that budget will actually increase revenue.
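A worked version of that 5x-versus-2x gap, with assumed spend and revenue figures:

```typescript
// Platform ROAS vs incremental ROAS, using the illustrative 5x/2x gap above.

const adSpend = 10_000;            // monthly ad spend
const attributedRevenue = 50_000;  // what the platform dashboard claims
const incrementalRevenue = 20_000; // what a holdout test says the ads caused

const platformRoas = attributedRevenue / adSpend; // 5.0
const iRoas = incrementalRevenue / adSpend;       // 2.0

// Share of attributed revenue that was never incremental:
const nonIncrementalShare = 1 - incrementalRevenue / attributedRevenue; // 0.6

// Only iRoas predicts whether an extra dollar of budget returns new revenue.
console.log({ platformRoas, iRoas, nonIncrementalShare });
```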
80% of US senior marketing analytics professionals report that incremental experiments have a high impact on revenue growth (Google/BCG, 2025). The reason is simple: when you know which channels actually drive new revenue, you stop overfunding channels that just take credit for it.
The $35 Million Lesson From Uber
Uber’s incrementality test remains the most cited case in ad measurement. In 2018, Uber paused Meta advertising entirely for three months. No retargeting. No prospecting. No brand awareness campaigns. Nothing.
The attribution model predicted disaster. Instead, Uber found no measurable decline in rider acquisition or revenue. The $35 million they spent annually on Meta ads was funding conversions that were happening anyway—through organic search, word of mouth, and direct app downloads.
Your WooCommerce store isn’t Uber. Your brand awareness isn’t global. But the principle scales down identically. If 20-40% of your attributed conversions aren’t incremental, you’re making budget decisions on fantasy numbers.
The platforms aren’t lying—they’re answering a different question than the one you’re asking. Attribution asks “who touched the customer?” Incrementality asks “did the ad create the customer?” Those are fundamentally different questions with fundamentally different answers.
Why Your Tracking Data Makes or Breaks the Test
Here’s the thing. Incrementality tests are only as valid as the data feeding them. If your tracking misses 30-40% of conversions due to ad blockers and browser restrictions, both your test group and control group have blind spots. The incremental lift calculation becomes unreliable because you can’t measure what you can’t see.
This is where most incrementality guides stop: they explain the concept but ignore the data quality prerequisite. Ad blockers hide 31.5% of your WooCommerce visitors from analytics, and Safari’s ITP expires script-set cookies after 7 days. Run a holdout test with that much missing data and your confidence intervals widen until the lift estimate is unusable.
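Here is a rough sketch of the damage, under the assumption that browser-based tracking uniformly misses 35% of real conversions:

```typescript
// How missing conversions weaken a holdout test. Assumes (hypothetically)
// that browser-based tracking sees only 65% of real conversions, uniformly.

const captureRate = 0.65;
const users = 50_000; // per group

const trueTestConv = 600;    // exposed group, real conversions
const trueControlConv = 500; // holdout group, real conversions

// What the analytics tool actually records:
const seenTestConv = trueTestConv * captureRate;       // 390
const seenControlConv = trueControlConv * captureRate; // 325

// The measured effect shrinks in proportion to the capture rate...
const trueLift = (trueTestConv - trueControlConv) / users;     // 0.0020
const measuredLift = (seenTestConv - seenControlConv) / users; // 0.0013

// ...but sampling noise for a proportion shrinks only with its square root,
// so the signal-to-noise ratio drops by roughly sqrt(0.65) ≈ 0.81. The test
// now needs a larger sample or longer window to reach significance, and if
// blocking rates differ between the groups, the estimate is biased as well.
console.log({ trueLift, measuredLift });
```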
Server-side tracking solves this by capturing conversion data on your server before it reaches browsers where it can be blocked. Transmute Engine™ runs as a first-party Node.js server on your subdomain, routing events to GA4, Facebook CAPI, and Google Ads simultaneously—giving incrementality tests the complete dataset they need to produce trustworthy results.
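For orientation, the general server-side pattern looks something like the sketch below. This is not Transmute Engine’s actual code; the GA4 Measurement Protocol and Meta Conversions API endpoints are real, but every ID, secret, and field mapping here is a placeholder:

```typescript
// A generic first-party event relay, not Transmute Engine's implementation.
// Requires Node 18+ (global fetch). All IDs and secrets are placeholders.

import { createServer } from "node:http";

const GA4_URL =
  "https://www.google-analytics.com/mp/collect" +
  "?measurement_id=G-XXXXXXX&api_secret=YOUR_API_SECRET";
const META_CAPI_URL =
  "https://graph.facebook.com/v19.0/YOUR_PIXEL_ID/events" +
  "?access_token=YOUR_ACCESS_TOKEN";

createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/events") {
    res.writeHead(404).end();
    return;
  }

  // Read the order event posted by the WooCommerce webhook or plugin.
  let body = "";
  for await (const chunk of req) body += chunk;
  const order = JSON.parse(body); // { orderId, value, currency, clientId }

  // Fan out server-to-server. The browser never sees these requests,
  // so ad blockers and Safari's ITP cannot drop them.
  await Promise.allSettled([
    fetch(GA4_URL, {
      method: "POST",
      body: JSON.stringify({
        client_id: order.clientId,
        events: [{
          name: "purchase",
          params: {
            transaction_id: order.orderId,
            value: order.value,
            currency: order.currency,
          },
        }],
      }),
    }),
    fetch(META_CAPI_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        data: [{
          event_name: "Purchase",
          event_time: Math.floor(Date.now() / 1000),
          action_source: "website",
          // Real CAPI events also need hashed user_data for matching.
          custom_data: { value: order.value, currency: order.currency },
        }],
      }),
    }),
  ]);

  res.writeHead(204).end();
}).listen(3000);
```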
Key Takeaways
- Attribution overestimates by 20-40%: Platform dashboards count conversions they influenced, not conversions they caused (Gartner, 2025).
- Incrementality testing reveals the real number: Compare exposed audiences against holdout groups to isolate ad-driven sales from organic ones.
- The barrier to entry has dropped: Google’s minimum is now $5,000, and 52% of brands already test incrementality (eMarketer/TransUnion, 2025).
- 23% average waste without testing: Nearly a quarter of ad spend funds conversions that would happen anyway (Marketing Science Institute, 2025).
- Data accuracy is the prerequisite: Incrementality tests require complete conversion tracking—server-side tracking closes the gaps that browser-based tracking creates.
Stop budgeting on attributed guesses. Start measuring incremental reality. See how Seresa’s server-side tracking gives your incrementality tests the complete data they need.