Definition
Incrementality is the share of a marketing outcome that was causally driven by a specific marketing activity — as opposed to the share that would have happened anyway. An incremental conversion is one that would not have occurred without the marketing touch. A non-incremental conversion is one that the customer would have completed regardless.
Incrementality is the correct answer to the question every marketing team tries to avoid asking: "If we turned this channel off tomorrow, how much revenue would we actually lose?"
Incrementality vs. attribution
Attribution assigns credit for conversions that happened. Incrementality asks whether the credit is deserved.
A coupon affiliate that sits at the bottom of the funnel might attribute 20% of your conversions under last-click. But if most of those buyers were already in your checkout when they went looking for a coupon, the coupon affiliate's incremental contribution is close to zero — you're paying commission on sales that were going to happen anyway.
Classic cases where attribution and incrementality diverge:
- Brand-term search ads and retargeting — high last-click attribution, low incrementality (the customer already knows your brand)
- Coupon sites during checkout — same pattern
- Ad platforms double-counting — Facebook, Google, and TikTok will all take credit for the same conversion; each one's incremental share is the only honest number
How incrementality is measured
Holdout tests (the gold standard)
Randomly withhold a slice of a channel's traffic — say 10% — and compare the outcomes in the held-out slice vs. the treated slice. If the two groups convert at the same rate, the channel's incrementality is zero. If the treated slice converts more, the difference is the channel's causal lift.
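The arithmetic of a holdout comparison is straightforward. A minimal sketch with made-up counts (a 90/10 treated/holdout split; none of these numbers come from the article):

```python
# Illustrative holdout split: 90% treated, 10% held out.
treated_clicks, treated_conversions = 90_000, 2_790
holdout_clicks, holdout_conversions = 10_000, 280

treated_cvr = treated_conversions / treated_clicks   # conversion rate with the channel on
holdout_cvr = holdout_conversions / holdout_clicks   # conversion rate without it

absolute_lift = treated_cvr - holdout_cvr            # the channel's causal lift, in CVR points
relative_lift = absolute_lift / holdout_cvr          # lift relative to the "would happen anyway" baseline

print(f"absolute lift: {absolute_lift:.4f}")
print(f"relative lift: {relative_lift:.1%}")
```

If `absolute_lift` is indistinguishable from zero, the channel is claiming conversions it didn't cause.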
This is the approach Google, Meta, and most data-driven brands use. It's the closest marketing gets to a randomized controlled trial.
Ghost ads / PSA holdouts
Same idea but applied to ad platforms — the treatment group sees your ad, the holdout group sees a public-service announcement. Requires the ad platform to support it natively.
Geo tests
Pause a channel in one region for 30+ days and compare conversion volume to a matched control region. Cruder than a randomized holdout, but useful when the platform doesn't support proper holdout splits.
Difference-in-differences (no randomization)
Compare the change in a channel's performance before and after a specific event (e.g., a policy change, a new creative). Weaker causal claims, but it's sometimes the only tool you have.
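The estimator itself is simple subtraction. A sketch with invented weekly conversion counts, using an unaffected comparison channel to net out the background trend:

```python
# Difference-in-differences sketch: weekly conversions before/after a policy change.
# All numbers are illustrative, not from the article.
treated_before, treated_after = 500, 430   # channel affected by the change
control_before, control_after = 400, 390   # comparison channel (absorbs seasonality)

# DiD estimate: the treated channel's change, minus the change everything else saw.
did = (treated_after - treated_before) - (control_after - control_before)
print(did)  # -60: the change cost ~60 weekly conversions beyond the general dip
```

The causal claim only holds if the two channels would have trended in parallel absent the event, which is exactly the assumption randomization removes.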
The statistics
Once you have a treated vs. holdout split with enough volume, run a two-proportion z-test to check whether the difference in conversion rates is statistically significant at your chosen alpha (usually 0.05). Pair it with a Wald 95% confidence interval on the lift estimate — if the interval crosses zero, the lift isn't distinguishable from noise.
Volume matters. Rough rule of thumb for detecting a 10% relative lift at 95% confidence:
| Baseline CVR | Clicks needed |
|---|---|
| 1% | ~130,000 |
| 3% | ~45,000 |
| 10% | ~15,000 |
A partner sending 50 clicks a month will never reach significance no matter how long you wait. For those partners, incrementality testing is directional at best.
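These thresholds can be roughly reproduced with the standard two-proportion sample-size formula. A sketch — the 80% power assumption is ours, not the article's, and exact numbers shift with the power you pick, so treat the output as order-of-magnitude:

```python
from statistics import NormalDist

def clicks_per_arm(baseline_cvr, relative_lift=0.10, alpha=0.05, power=0.80):
    """Approximate clicks per arm to detect a relative CVR lift.
    Standard two-proportion formula; 80% power is an assumed parameter."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    delta = baseline_cvr * relative_lift                 # minimum detectable absolute lift
    variance = 2 * baseline_cvr * (1 - baseline_cvr)     # approx. variance of the two arms
    return (z_alpha + z_beta) ** 2 * variance / delta ** 2

for cvr in (0.01, 0.03, 0.10):
    print(f"{cvr:.0%}: ~{clicks_per_arm(cvr):,.0f} clicks per arm")
```

Note how the requirement explodes as baseline CVR shrinks: halving the conversion rate roughly doubles the clicks needed.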
Why brands don't run incrementality tests
Most brands skip incrementality because:
- It feels expensive — the held-out slice forgoes conversions
- It's operationally complex — requires holdout infrastructure most tracking platforms don't have
- Vendors resist — ad platforms and affiliate networks push against holdouts because the results expose overclaiming
The cost is usually dramatically overstated. A 10% holdout on a partner driving 1,000 conversions a month means 100 potential conversions are redirected into the control measurement — and you learn whether the other 900 are actually driven by that partner or would have happened anyway. If the answer is "happened anyway," you save the entire spend. The holdout is a cheap insurance policy against wasted spend.
How Trcker implements incrementality
Trcker runs live incrementality holdouts per partner. Operators configure a holdout rule (brand, optional offer scope, percentage) and Trcker deterministically assigns each click to the treated or holdout cohort. Clicks in the holdout slice still redirect to the advertiser — the visitor's experience is unchanged — but both the click and any downstream conversion are flagged, so the partner isn't credited with them.
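Deterministic assignment matters because a visitor must never flip between cohorts on repeat lookups. Trcker's actual scheme isn't documented here; a common approach is salted hash bucketing, sketched below with hypothetical names:

```python
import hashlib

def assign_cohort(click_id: str, holdout_pct: int, salt: str = "brand-offer-rule") -> str:
    """Deterministic cohort assignment: the same click ID always lands in the
    same bucket, so re-evaluating a click can never flip its cohort.
    (Hash bucketing is our sketch, not Trcker's documented implementation.)"""
    digest = hashlib.sha256(f"{salt}:{click_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "holdout" if bucket < holdout_pct else "treated"

print(assign_cohort("click-abc-123", holdout_pct=10))
```

Salting by rule means the same click ID can fall in different cohorts under different holdout rules, while staying stable within each rule.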
Every night, Trcker recomputes treated vs. holdout conversion rates per partner, runs a two-proportion z-test, and computes a Wald 95% CI. Results surface on the Incrementality dashboard with a significance badge (Incremental / Not incremental / Not significant / Pending) and a revenue-impact chart estimating the dollar value of the incremental contribution.
Related concepts
- Attribution — correlational credit assignment
- Multi-touch attribution — cross-channel credit distribution
- Holdout testing — the methodology behind incrementality measurement
- CPA — cost per acquisition, which looks different once you control for incrementality