The Story Behind The Stats

(By Bob McCurdy) One of the best investments I made in my career, long ago, was hiring a college professor to tutor me on the “ins” and “outs” of statistics. Early on I learned that audience “estimates,” then from Arbitron and now from Nielsen, are just that: estimates, amounting to what could be called educated guesses.

Those one-on-one “tutorials” with the professor cost a couple of bucks but paid for themselves a hundredfold over the years.

Last week I met with two newly hired salespeople and, as I usually do, spent some time discussing these “estimates.” I wanted to make sure they understood that even though Nielsen is the industry sales “currency” and a well-respected research company, its estimates are still subject to survey and sampling error, the same as estimates from any poll or other non-media survey.

To highlight this point I took out a quarter, asking them to predict the number of heads and tails we’d get on 10 coin tosses, our own mini “survey.” Both said five heads and five tails. Logical, theoretically correct, but as it turns out, wrong.

I tossed and came up with seven heads and three tails.

I then quickly replicated the survey, tossing the coin 10 more times, and came up with six heads and four tails.

Note: Even if we were to average the results of these two mini surveys, as we do with “months” in PPM markets and “books” in diary markets, the combined result of 13 heads and seven tails out of 20 tosses would still have deviated from what we know it should have been: 10 heads and 10 tails. The challenge with radio surveys is that we have no idea what “should have been,” so there is no way to gauge the accuracy of the published audience estimate.
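You can replicate this variability at your desk in a few lines of Python; a minimal sketch of the two mini “surveys” (the tosses are random, so each run produces its own counts):

    import random

    def mini_survey(tosses=10):
        # Simulate one mini "survey" of fair coin tosses; return the head count.
        return sum(random.random() < 0.5 for _ in range(tosses))

    for i in (1, 2):
        heads = mini_survey()
        print(f"Survey {i}: {heads} heads, {10 - heads} tails")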

Each mini “survey” above, despite being executed virtually simultaneously, generated a different estimate. The same would occur should Nielsen ever conduct two identical surveys in any market concurrently (in PPM markets this would require two panels instead of one). The two surveys would likely show some station audience estimates to be identical, some to be close to identical, and some to differ, in some instances greatly.

According to the laws of statistics and sampling error, roughly one out of three station audience ratings published by Nielsen is off by more than one standard deviation, which in layman’s terms means we can assume they are not entirely accurate. By those same laws, roughly one out of 20 is off by a lot (more than two standard deviations), meaning it is substantially inaccurate.

So in a market with 20 reported stations, this equates to six or seven of the stations’ audience estimates not being entirely accurate and about one of the stations’ audience estimates being way off base.
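Those proportions fall straight out of the normal curve; a quick Python check of the one- and two-standard-deviation tail shares, applied to the 20-station example:

    import math

    def normal_tail(k):
        # P(|Z| > k) for a standard normal Z, via the error function.
        return 1 - math.erf(k / math.sqrt(2))

    stations = 20
    for k in (1, 2):
        p = normal_tail(k)
        print(f"Beyond {k} SD: {p:.1%} of estimates, about {p * stations:.1f} of {stations} stations")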

A rule of thumb as it pertains to standard deviation: the more constricted the demo, the less reliable the estimate. The same goes for in-tab: the smaller the in-tab, the less reliable the estimate. And the more constricted the daypart, the less reliable the estimate.
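The common thread in all three is sample size: under simple random sampling, a rating’s standard error shrinks only with the square root of the in-tab, so constricting the demo or the daypart (which cuts the effective sample) inflates the error. A rough sketch using the textbook formula for a proportion (Nielsen’s actual standard errors also reflect panel design effects, so treat these figures as illustrative):

    import math

    def rating_se(rating, in_tab):
        # Approximate standard error of a rating, in rating points,
        # treating it as a simple random-sample proportion.
        p = rating / 100.0  # a 0.5 rating is a proportion of 0.005
        return 100.0 * math.sqrt(p * (1 - p) / in_tab)

    for n in (2000, 500, 100):  # shrinking in-tab, e.g. a tighter demo or daypart
        print(f"in-tab {n}: a 0.5 rating carries a standard error of about ±{rating_se(0.5, n):.2f} points")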

This is often lost on both buyer and seller, and it can make a huge difference in a market where one-tenth of a rating point can separate the #1 station from the #10 station. Rank can be misleading and is often meaningless.

By the way, the chance of getting exactly five heads and five tails on 10 coin tosses is only 24.6%.
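That 24.6% is a straightforward binomial calculation, easy to verify:

    import math

    # P(exactly 5 heads in 10 fair tosses) = C(10, 5) / 2**10
    print(f"{math.comb(10, 5) / 2**10:.1%}")  # prints 24.6%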

Finally, let’s look at a top 30 PPM market where the #1 station in the Adult 25-54 demo has a .5 total-week rating, while the #10 station has a .4 rating.

Statistically, what are the chances that the #1 station’s audience is truly larger than the #10 station’s? It might surprise you, but it is only 52%, meaning that statistically there’s a 48% chance there is no real audience difference between the two stations. A statistical toss-up. The “difference between two means” formula was used to compute this.
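For readers who want to see the shape of that calculation, here is a simplified sketch of a difference-between-two-means test. The standard errors below are invented for illustration (Nielsen’s published relative standard errors depend on the market’s in-tab and panel design), so this toy version will not reproduce the 52% figure exactly:

    import math

    def confidence_first_is_larger(r1, r2, se1, se2):
        # P(station 1's true audience exceeds station 2's), assuming the two
        # rating estimates are independent and approximately normal.
        z = (r1 - r2) / math.sqrt(se1**2 + se2**2)
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

    # Illustrative only: a .5 vs. a .4 rating, each with a hypothetical
    # standard error of ±0.3 rating points.
    p = confidence_first_is_larger(0.5, 0.4, 0.3, 0.3)
    print(f"Chance the #1 station truly leads the #10 station: {p:.0%}")

The closer those standard errors get to the size of the ratings themselves, the closer the answer gets to a 50/50 toss-up.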

The takeaways for the rookies were:

— That Nielsen audience estimates, similar to our coin toss or even a political poll, are subject to survey variability — they are not “absolutes” and should never be perceived as such by seller or buyer. Nielsen says it best in the description of methodology in its e-book: “Clients should be mindful that — due to the limitations described in Chapter 15 of this Local Radio Syndicated Services Description of Methodology — it is not possible to determine the reliability of our estimates, data, reports, and their statistical evaluators to any precise mathematical value or definition.”

— In a listening environment where .1 or .2 of a rating point separates the #1 station from the #10 station, factors beyond audience ratings (qualitative data, audience skew, attribution metrics, etc.) should be presented and given serious consideration when selecting stations to purchase.

— Use the numbers, be familiar with the limitations of the numbers, and never stop selling beyond the numbers.

This would not be a bad topic to discuss with clients. It might assist them in making more effective purchasing decisions.

Bob McCurdy is Vice President of Sales for the Beasley Media Group and can be reached at [email protected]

1 COMMENT

  1. Right on the mark again, Robert. We must view ratings in a relative sense, looking more at multiple-survey averages to stabilize the numbers to some degree. That said, format, execution, context, standing with the target audience, and actual self-promotion are more important factors. Too many planners and buyers “plug it in and turn the crank” when it comes to these decisions without taking the time to look deeply into the real factors of how much a station means to its listeners and their market. It takes work, which means going beyond formulas and getting to know what’s happening in the market that will benefit the brand. That’s when advertisers find out that local radio is by no means dead, but still a live and vibrant factor in a listener’s day. Numbers for guidance are a small part of the story. Day-to-day life with local radio IS the story. Thanks for the article, Bob.
