Cumulus’ Bob Walker Releases AI Media Planning Guide for Radio

Cumulus Media Operations President Bob Walker has a playbook for how local advertisers should be using AI, and it starts with asking better questions. The guide, published through the Audio Active Group, walks local advertisers and media sellers through five best practices for using AI platforms like ChatGPT to inform media plans and buying decisions.

The backdrop: a recent Quantilope national study of 2,000 Americans found only 34% of adults 18+ use AI at work monthly, and 58% have never used it for work at all. For the radio industry, where local advertising remains a core revenue driver, the stakes of getting AI adoption right are significant.

Walker is clear that AI does not function like a Google search — it requires specific methods of querying, and the difference between a useful output and a misleading one often comes down to how the question is framed.

Be exact. Vague prompts produce vague results, and the chart Walker includes in the guide makes that case plainly. Asking “What is the ideal media mix?” returns no audio recommendations whatsoever. Reframing the same question as “What is the ideal media mix for a national advertiser targeting adults 18-49?” brings audio into the response with tailored guidance.

Stating a desired outcome like “grow awareness,” “increase sales,” or “expand my customer base” sharpens results further. Adding purchase context like “I am already buying local TV, what other media can I include to increase reach?” helps narrow unwieldy responses and moves radio up in the recommendations.
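The "be exact" practice above can be sketched as a small prompt builder. This is an illustrative assumption, not anything from Walker's guide: the function name, parameters, and example values are hypothetical, and it simply assembles the kind of specific query the guide recommends from its component parts.

```python
# Hypothetical sketch of the "be exact" practice: assembling a specific
# media-mix prompt from explicit components instead of a vague one-liner.
# Function name, parameters, and values are illustrative assumptions.

def build_media_mix_prompt(advertiser_type, audience, goal=None, current_buys=None):
    """Build a media-mix question with audience, outcome, and purchase context."""
    prompt = (f"What is the ideal media mix for a {advertiser_type} "
              f"advertiser targeting {audience}?")
    if goal:
        # Stating a desired outcome sharpens results.
        prompt += f" My goal is to {goal}."
    if current_buys:
        # Purchase context narrows unwieldy responses.
        prompt += (f" I am already buying {current_buys}; "
                   "what other media can I include to increase reach?")
    return prompt

vague = "What is the ideal media mix?"
specific = build_media_mix_prompt("national", "adults 18-49",
                                  goal="grow awareness",
                                  current_buys="local TV")
print(specific)
```

The vague and specific strings mirror the guide's before-and-after example: the first omits audio entirely in Walker's test, while the fully qualified version brings audio into the response.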

Use reputable sources within queries. One of Walker’s more practical tips: include the name of a trusted industry organization directly in the prompt. For example, asking “What is radio’s weekly reach according to the Audio Active Group?” returned a sourced, current figure from Radio Ink linked to recognized industry organizations, rather than pulling from unknown or outdated references.

Check sourcing and dates. A query as straightforward as “What is radio’s weekly reach?” may return a figure that looks accurate on its surface but links back to a 2014 article. Walker also flags AI hallucinations, instances where a platform confidently produces outputs with no basis in its training data or any verifiable source, as a particular concern.

Understand that platforms differ. The guide focuses on ChatGPT, but Walker notes that each platform carries its own training history and tendencies that shape outputs.

A BrightEdge study comparing ChatGPT and Google Gemini found differences in how each handles the same queries: ChatGPT tends to recommend actionable tools users can deploy immediately, while Gemini points toward explanatory articles designed to build understanding before any decision is made.

Expect responses to change. Even the same user asking the same question can get different results depending on whether they’re logged in or out. ChatGPT’s memory feature, which stores user preferences and goals to personalize future responses, means outputs vary by individual. Walker advises reading responses carefully and avoiding treating any single result as a fixed or authoritative answer.

Walker is direct that large language models can be a useful starting point for answering questions, solving targeting problems, and generating creative ideas, but they do not replace syndicated data sets or real-world case studies and should not be treated as if they do.
