State Rules On Radio AI Disclosures May Soon Be Unenforceable


As Congress debates a sweeping budget reconciliation bill, a provision buried deep in the legislation could soon render many state-level AI regulations, including those directly affecting radio broadcasters, effectively unenforceable.

The provision, included in both House and Senate versions of the bill, would impose a 10-year moratorium preventing states from enforcing any laws that “limit, restrict, or otherwise regulate” artificial intelligence. While states could technically continue passing AI-related laws, they would be powerless to enforce them, effectively handing regulatory oversight to federal agencies and private industry.

For radio broadcasters already navigating rising challenges from AI-generated deepfakes, synthetic voice cloning, political ad manipulation, and content theft, the potential loss of state-level protections adds another layer of risk. Broadcasters have increasingly found themselves at the intersection of AI technology and public trust as lawmakers move to address growing concerns.

Tennessee Attorney General Jonathan Skrmetti and Washington Attorney General Nick Brown joined Senators Maria Cantwell (D-WA) and Marsha Blackburn (R-TN) to publicly oppose the moratorium, warning of its potential consequences. Skrmetti said, “We want America to be AI dominant. We want to make sure that our adversaries don’t get ahead of us, but we need to make sure that in the process, we’re not leaving American consumers behind. If there’s a 10-year moratorium on state enforcement, that effectively means 10 years where we are at the mercy of the judgment of big tech.”

Many states have already moved forward with legislation designed to directly address AI threats in media. Blackburn pointed to Tennessee’s ELVIS Act, which criminalizes unauthorized AI voice cloning in music and broadcasting, as an example of the kind of protections now at risk.

In New York, broadcasters are required to provide audible disclosures when AI-generated content is used in political advertising. Other states, including California, Texas, Minnesota, New Jersey, Idaho, Indiana, New Mexico, Utah, Wisconsin, and Washington, have passed deepfake legislation that places legal responsibility on broadcasters to label or reject deceptive AI-generated content. Oregon goes even further, mandating clear disclosures for AI use in all campaign communications.

If the moratorium becomes law, these state-level rules — many designed specifically to safeguard broadcasters, news organizations, and local media — could become dormant for the next decade, raising new legal uncertainty for stations heading into high-stakes election seasons.

The NAB has warned that while broadcasters support efforts to combat misleading AI content, a patchwork of inconsistent or overly broad rules could unfairly burden stations. The association has urged lawmakers to take a balanced approach that protects consumers without imposing unworkable compliance standards on local media operators.

The debate comes as public concern over media trust continues to rise. According to a 2024 Reuters Institute report, 72% of Americans say they are increasingly worried about their ability to distinguish real content from fake, up three points from the previous year. The finding underscores growing public anxiety about AI's role in shaping news and information.