What happens if we apply Sainsbury to the individual reach measured by FLUZO, and then we compare the Sainsbury calculated result against the actual total reach measured?
This is an itch we had to scratch: many, many times, when we talk with an agency or advertiser, we get back a “yeah, we use Sainsbury to estimate Total Reach”. At FLUZO, however, we believe that the words “estimate” and “Total Reach” shouldn’t appear in the same sentence. The aim of this analysis is to compare the two methodologies by testing them on the measurement of real campaigns. How good is Sainsbury against reality? Can you trust its results?
SAINSBURY
For a very long time, advertisers have relied on the Sainsbury method to estimate the net reach of their cross-media campaigns. It assumes that media exposure is a Bernoulli process, i.e. that it follows a binomial distribution with exposures independent across media, so that the overlap between any two media is simply the product of their individual reaches.
The Sainsbury formula states that, given any universe, two different media A and B will share an overlap equal to the product of their reaches, so their combined reach is A + B − A·B = 1 − (1−A)(1−B). Generalized to up to ten media, A through J, it is usually written as: Total Reach = 1 − [(1−A)(1−B)(1−C)(1−D)(1−E)(1−F)(1−G)(1−H)(1−I)(1−J)].
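As a minimal sketch (our own illustration, not FLUZO code), the formula is a one-liner: each medium's reach is a fraction between 0 and 1, and the combined reach is one minus the product of the non-reach of every medium.

```python
from functools import reduce

def sainsbury(reaches):
    """Combined (net) reach under the Sainsbury independence model.

    `reaches` holds each medium's individual reach as a fraction (0-1).
    Total Reach = 1 - product over all media of (1 - reach).
    """
    return 1 - reduce(lambda acc, r: acc * (1 - r), reaches, 1.0)

# Two media with 50% reach each: the assumed overlap is 0.5 * 0.5 = 0.25,
# so the combined reach is 0.5 + 0.5 - 0.25 = 0.75.
print(sainsbury([0.5, 0.5]))  # 0.75
```

Note that the model has no campaign-specific parameters at all: it treats every pair of media, in every country and every vertical, as perfectly independent.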
We are not delving here into the mess that is the reach data being fed into Sainsbury (TV-only homes, very fragmented digital data, almost no usable radio data, and so on); we have other threads about those topics. But if you are using Sainsbury in the wild, remember: garbage in, garbage out.
FLUZO
If you are following us, by now you already know that FLUZO has solved a number of key challenges in the measurement of advertising campaigns (and media) thanks to its cross-media, single-sourced measurement.
FLUZO delivers both the individual reach of each medium, measured in the same way, AND the total net reach. So we got this little evil idea for a check… what happens if we apply Sainsbury to the individual reach measured by FLUZO, and then compare the Sainsbury-calculated result against the actual Total Reach measured?
If Sainsbury is a good tool, most or all campaigns will deliver the same result, observed vs. calculated. If Sainsbury is a decent tool, there will be differences, but they could be explained by a few factors and would be rather stable. If the result is chaos… well, you know the answer.
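The check itself can be scripted in a few lines. The numbers below are invented purely for illustration (they are not FLUZO measurements): feed the per-media reaches into the Sainsbury formula and compare against the single-source observed total.

```python
from functools import reduce

def sainsbury(reaches):
    """Sainsbury net reach: 1 minus the product of each medium's non-reach."""
    return 1 - reduce(lambda acc, r: acc * (1 - r), reaches, 1.0)

# Hypothetical campaign (invented numbers, for illustration only):
per_media_reach = [0.32, 0.18, 0.07]   # e.g. TV, digital, radio
observed_total = 0.41                  # single-source measured net reach

calculated = sainsbury(per_media_reach)
gap = calculated - observed_total      # > 0 means Sainsbury overestimates
print(f"calculated={calculated:.3f} observed={observed_total:.3f} gap={gap:+.3f}")
```

In this made-up case Sainsbury would overestimate the total, because the independence assumption ignores any real-world overlap beyond pure chance (heavy TV viewers who are also heavy digital users, for instance).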
The next step was to apply the check to a random sample of campaigns measured with FLUZO, and we kept going until we got bored of the process. We analyzed campaigns with different media mixes, numbers of media, countries, reach levels, frequencies, verticals… and durations. And the outcome? Let's see some numbers.
To make things harder, we found no correlation between reach and the size of Sainsbury's error: the worst case was a campaign with 32% reach across just 2 media, but the second worst had 86% reach across 4 different media.
Would your business use, in any area, a tool that fails to match observation 2/3 of the time, with an error gap of this size and variability? No, neither would we.
Diego Semprún de Castellane is Chief Operations Officer of FLUZO and has been dedicated to the world of media measurement for more than fifteen years, with a special focus on Digital measurement.