(By Buzz Knight) Is PPM technology due for an upgrade? Nielsen's June 16th announcement that it is expanding its encoding options with an Audio Software Encoder for FM stations is good news for the future improvement of audience measurement.
Stations will no longer have to house and monitor a separate PPM encoder to ensure their signal is encoded, and Nielsen says the Audio Software Encoder will “provide more flexibility for encoding options and this enhancement will enable clients to encode for PPM using certified equipment from third party vendors.”
This announcement, although in the works for a few years, got me thinking about the core of the measurement technology, ratings watermarks, and whether a more robust technology could benefit the industry.
Consistency in performance of the PPM system has been a hot topic since its introduction in 2007, particularly for minority broadcasters who met the rollout by Arbitron back then with skepticism and criticism.
The sentiment among many of those broadcasters was that Arbitron had created a methodology that effectively required them to change or modify their programming to accommodate the measurement and capture enough ratings to keep their businesses viable.
The questions about the accuracy of the service back then were focused on sample quality and insufficient sample size as audience fragmentation increased the need for more granular data.
As the measurement service evolved and Arbitron was sold to Nielsen, questions emerged about PPM's reliability in counting all forms of content.
Those questions surrounded the demise of formats such as Smooth Jazz and the ratings declines certain personalities experienced under the new system.
Ultimately, in 2015 a product named Voltair arrived on the scene as a remedy for some of the deficiencies and limitations of the PPM system, allowing radio stations to capture missing listening by amplifying the existing watermark tones.
This activity led to Nielsen rolling out a new encoding algorithm called “Enhanced CBET” (Critical Band Encoding Technology), which ultimately was an additional factor that gave listening estimates a lift.
Did this leave the radio industry in the best possible place, with audio processing and measurement technology converging so that all listening gets reported?
There are always questions to wonder about when it comes to the conundrum of balancing supreme audio quality with ratings measurement.
Are all sources such as digital technology created equal when it comes to measurement?
Which brings me to fingerprinting technology, such as the music recognition algorithms used by Shazam.
If you capture 20 seconds of audio, whether it's a song's intro, verse, or chorus, a commercial, or a morning show bit, the service will create a fingerprint of the recorded sample, consult a database, and use its music recognition algorithm to tell you in real time what content you are listening to.
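To make the contrast with watermarking concrete, here is a toy sketch of how that kind of fingerprint matching works: reduce the audio to its dominant spectral peaks, hash short sequences of peaks, and look those hashes up in a catalog. This is a deliberate simplification, not Shazam's actual algorithm; the frame size, bin count, triple hashing, and the `song_a`/`song_b` catalog are all illustrative assumptions.

```python
import math

FRAME = 64   # samples per analysis frame (toy value)
BINS = 8     # spectral bins examined per frame (toy value)

def tone(bin_k, frame=FRAME):
    """One frame of a pure sine whose frequency lands exactly in DFT bin k."""
    return [math.sin(2 * math.pi * bin_k * n / frame) for n in range(frame)]

def render(melody):
    """Concatenate one tone frame per note; a stand-in for real audio."""
    samples = []
    for k in melody:
        samples.extend(tone(k))
    return samples

def peak_bins(samples, frame=FRAME, bins=BINS):
    """Dominant spectral bin per frame via a naive DFT (an FFT in practice)."""
    peaks = []
    for start in range(0, len(samples) - frame + 1, frame):
        chunk = samples[start:start + frame]
        mags = []
        for k in range(1, bins + 1):  # skip the DC bin
            re = sum(x * math.cos(2 * math.pi * k * n / frame)
                     for n, x in enumerate(chunk))
            im = sum(x * math.sin(2 * math.pi * k * n / frame)
                     for n, x in enumerate(chunk))
            mags.append(math.hypot(re, im))
        peaks.append(1 + max(range(bins), key=mags.__getitem__))
    return peaks

def fingerprint(samples):
    """Hash triples of successive peaks (a toy version of landmark hashing)."""
    p = peak_bins(samples)
    return {tuple(p[i:i + 3]) for i in range(len(p) - 2)}

def identify(clip, db):
    """Return the catalog entry sharing the most fingerprint hashes with the clip."""
    query = fingerprint(clip)
    return max(db, key=lambda name: len(query & db[name]))

MELODY_A = [2, 5, 3, 7, 4, 6, 2, 5]  # hypothetical catalog "songs"
MELODY_B = [1, 4, 6, 3, 7, 2, 5, 1]
DB = {"song_a": fingerprint(render(MELODY_A)),
      "song_b": fingerprint(render(MELODY_B))}

# A short clip from the middle of song A still matches the catalog entry.
best = identify(render(MELODY_A[2:7]), DB)
```

The key difference from watermarking is visible even in this sketch: nothing is embedded in the broadcast signal, so there is no tone to mask or amplify. The content itself is the identifier, and any clip long enough to yield a few hashes can be matched.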
With the continued changing dynamics of listening behavior and the need for PPM and Diary to continuously improve so broadcasters reap their fair share of listening, isn’t it worth a study to analyze fingerprinting versus watermarking?
What if it showed increased listening levels and at the same time poised the ratings service for a better business model for the years to come?
As Thomas Edison said, “I never pick up an item without thinking how I might improve it.”
Consider this an exercise in creating dialogue.
Buzz Knight is the CEO of Buzz Knight Media and can be reached by e-mail at [email protected]