AI Executive Order May Have FCC and States on Collision Course


    A new Executive Order from President Donald Trump is sparking a clash between the White House, state governments, and federal regulators that could reshape how radio manages AI-generated content, political ads, and synthetic voices in 2026 and beyond.

    The order, aimed at “Ensuring a National Policy Framework for Artificial Intelligence,” directs the Department of Justice to challenge “onerous” state AI laws, authorizes the Commerce Department to tie broadband funding to state compliance, and instructs the FCC to explore a single national disclosure and reporting standard for AI models that would preempt existing state rules.

    The order follows months of legislative debate over whether states should retain authority to police AI use in media.

    In June, a provision buried in Congress’s budget reconciliation bill sought to impose a ten-year moratorium on state enforcement of AI-related laws. A bipartisan coalition of lawmakers, including Senators Marsha Blackburn (R-TN) and Maria Cantwell (D-WA), formed in opposition to the measure, warning it would leave consumers vulnerable to AI manipulation while stripping states of oversight authority.

    Many states have already adopted AI disclosure or deepfake legislation. New York requires broadcasters to include audible disclaimers in political ads that use AI-generated material. Several other states, including California, Texas, Minnesota, and Washington, have enacted deepfake laws holding stations accountable for identifying or rejecting deceptive synthetic content. Oregon’s statute is the broadest, requiring explicit disclosure of AI use in all campaign communications.

    In addition, Tennessee’s ELVIS Act directly targets unauthorized AI voice cloning in music and broadcasting.

    By early July, after a series of bipartisan negotiations, the Senate voted 99-1 to remove the moratorium language from the reconciliation bill. Later that month, the White House introduced its AI Action Plan, signaling intent to reassert federal dominance over AI policy through agency coordination, particularly by tasking the FCC with assessing whether state laws interfered with federal communications authority.

    The December 11 order formalizes that approach, establishing an AI Litigation Task Force within the Department of Justice to challenge state laws that “thwart innovation” or “impermissibly regulate beyond State borders.” The Commerce Department must publish within 90 days an evaluation identifying which state AI laws conflict with the new national policy, potentially triggering lawsuits or funding restrictions.

    States with “onerous” AI rules could become ineligible for certain federal broadband funds under the BEAD program, while federal agencies are encouraged to condition other grants on non-enforcement of conflicting laws.

    Most consequential for broadcasters, Section 6 of the order instructs FCC Chairman Brendan Carr to begin a proceeding within 90 days to determine whether to adopt a federal reporting and disclosure standard for AI models, which could override state laws requiring on-air disclosures or disclaimers for AI-generated or altered content.

    Senator Cantwell, Ranking Member of the Senate Commerce Committee, criticized the move, saying, “This executive order’s overly broad preemption threatens states with lawsuits and funding cuts for protecting their residents from AI-powered frauds, scams, and deepfakes — leaving American consumers without any protection. Let’s get it right and pass a bipartisan national AI framework that both leads on innovation and protects consumers.”

    Several state attorneys general are weighing their legal options, with Colorado Attorney General Phil Weiser already stating he plans to challenge the order in federal court.

    For now, broadcasters face a regulatory vacuum: state laws remain on the books but could be unenforceable, the FCC has yet to define a replacement framework, and the DOJ is preparing to litigate against states that resist. As the 2026 election cycle approaches, the industry’s responsibility to ensure content authenticity may become even more complex and politically charged than before.

    1 COMMENT

    1. And next election season the political games begin anew. We’ve all received one of those “stop running that (insert politician’s name here) ad” letters threatening suit unless we stop running a campaign ad. Of course, especially in federal elections, we (the station) have no say in the matter under Farmers Coop v. WDAY.

      This gets more interesting for stations on state borders–I have 4 stations in West Virginia, but one tower is in Ohio & the rest aren’t far from the Buckeye State. If Ohio law requires one set of disclaimers for AI use–and they are exactly contrary to WV law…well, you can see the problem. Particularly when the spots come flying in from all those anonymous PACs at the last minute before the primary or general election.
