Spotify Ramps Up Safety Measures As 2024 Elections Near

As candidates continue to dole out ad dollars ahead of the 2024 election in November, Spotify is laying out global ground rules for platform safety to keep out content that could manipulate or interfere with the democratic process or foster hate speech.

In 2022, Spotify acquired Kinzen, enhancing the streamer’s ability to monitor misinformation and hate speech through advanced artificial intelligence tools like Spotlight, which identifies risks in long-form audio content. Partnerships with entities like the Spotify Safety Advisory Council and the Institute for Strategic Dialogue further bolster Spotify’s policymaking by incorporating diverse expert opinions.

Spotify uses country-specific risk assessments to address local and nuanced types of abuse that vary from nation to nation. These assessments take into account Spotify’s presence in the region, historical instances of harm, and current geopolitical factors.

During election periods, Spotify aims to drive nonpartisan community engagement through various in-product resources.

During the 2020 election, Spotify launched its “Play Your Part” voter turnout initiative. Through partnerships with artists like Conan Gray, Alaina Castillo, King Princess, and Chloe x Halle, first-time eligible voters received encouraging messages to participate. Other artists and influencers curated special playlists, podcast episodes on civic engagement, and video and audio voting reminders in popular Spotify playlists.

Additionally, Anchor – now Spotify for Podcasters – launched an “election takeover” of its Sponsorships tool, allowing podcasters to donate ad space to voting nonprofits like HeadCount, BallotReady, and Democracy Works. According to Edison Research, podcasts accounted for 16% of ad-supported audio time among registered voters in the US.

For the 2024 election, Spotify has outlined specific guidelines for political advertisements, which are permitted in select markets. These include stringent verification processes for advertisers and mandatory disclosure of any synthetic or manipulated media, such as AI-generated content, used in ads.

Penalties for violations may include removal of content that directly breaches the guidelines, account suspension or termination for severe or repeated offenses, and restrictions on content monetization, along with other actions as needed.
