NAB: FCC AI Rules for Political Ads ‘Burdensome’ To Broadcasters


    On Friday, the NAB submitted comments on the FCC’s proposed regulations requiring broadcasters to disclose the use of artificial intelligence in political advertisements. Radio Ink has delved into the comprehensive 74-page filing to unpack the organization’s dissent.

    At the heart of the argument is the assertion that the FCC does not have the authority under the Communications Act to impose the proposed disclosure requirements. Specifically, the Act “places certain requirements on federal candidates to qualify for lowest unit charge, but it does not mandate (or permit the FCC to mandate) any disclosures by broadcasters.”

    The NAB emphasizes, “The fact that Congress provided carefully delineated, limited authority over certain aspects of political broadcasting does not imply that the Commission can forge ahead on its own and adopt additional political broadcasting requirements as it thinks best.”

    The filing says that even if the FCC had the authority, the proposed rules would be “arbitrary and capricious under the Administrative Procedure Act.” The APA requires agencies to “examine the relevant data and articulate a satisfactory explanation for its action including a rational connection between the facts found and the choice made.” The rules would require disclosures for any political advertisement using AI, regardless of whether it is deceptive. The NAB claims such an approach could cause audiences to mistrust all AI-labeled ads or ignore the disclosures altogether, thus not effectively addressing the problem the FCC aims to solve.

    The NAB also argues that the rules would infringe on First Amendment protections by imposing content-based regulations on political speech. They argue, “The proposed rules are content based because they apply on their face only to political advertisements with AI-created content and not to any other ads or programming with or without AI content.”

    Since the regulations are content-based and compel speech, the NAB contends, “The proposed rule will be subject to strict scrutiny, which requires the government to prove it ‘furthers a compelling interest and is narrowly tailored to achieve that interest.’”

    The filing adds, “Labeling a candidate or issue ad as AI generated will automatically make that ad more suspect in the public’s eye than another political ad or other content without such a tag, regardless of the veracity of the ad or how AI was used in its creation.”

    Citing Supreme Court precedent, the NAB asserts that the only permissible ground for restricting political speech is the prevention of quid pro quo corruption or its appearance. The NAB notes that the FCC has not identified any instances of AI-generated deepfake political ads airing on broadcast stations, and argues the agency has therefore failed to establish a real harm that needs to be addressed.

    The NAB says the proposed rules would impose significant operational burdens on broadcasters, including the need to verify AI content in ads, often without sufficient information. The statement explains, “It would be highly burdensome and time-consuming for stations to try to discover the individual(s) with personal knowledge of how voluminous numbers of ads were produced and whether AI was used.”

    These requirements could delay the airing of political ads, jeopardizing the rights of candidates and political speakers to reach voters during crucial election periods.

    The NAB stresses that existing mechanisms already address concerns about deceptive political advertising, including AI-generated content. They note, “Broadcasters have decades of experience in dealing with political issue advertisements… These station-specific processes successfully—and usually quickly—resolve complaints about issue advertisements.”

    In addition, several states have enacted or are considering laws targeting deceptive AI-generated content in political ads, focusing on the creators of such content rather than broadcasters. The NAB highlights, “Many states have already passed legislation regulating the use of AI or other synthetic media to mislead audiences in political communications, and other states and the U.S. Congress are considering legislative action.”

    The filing also suggests that under the Federal Election Campaign Act, the FEC, not the FCC, has authority over fraudulent political ads, a jurisdictional question that has previously been a point of contention between the two agencies.

    As the NAB’s opening comments state, “NAB strongly encourages the Commission to close this proceeding without moving forward. While Commission staff worked diligently to wrestle with the deepfake problem it identified, the agency is severely hamstrung by a complete or near-complete absence of Congressional authority. NAB urges the Commission to seek only holistic solutions that will not create new problems by trying to solve others. To the extent there is an issue to address, it is Congress, and not the FCC, that can and should take the lead.”
