iHeartMedia’s A.I. Chatbot Concerns Are Well-Founded

While many in radio are worried about artificial intelligence replacing talent, copywriters, and programmers, another fear may be more pressing right now: company data leaking out through the tools themselves. In a Wednesday memo to all staff, iHeartMedia CEO Bob Pittman addressed that concern by banning the use of public AI chatbots for company projects.

In the note, Pittman said, “Although AI, including ChatGPT and other ‘conversational’ AIs, can be enormously helpful and truly transformative, we want to be smart about how we implement these tools to protect ourselves, our partners, our company’s information and our user data. For example, if you’re uploading iHeart information to an AI platform (like ChatGPT), it will effectively train that AI so that anyone – even our competitors – can use it, including all our competitive, proprietary information.”

The memo also previewed iHeart's plans to roll out in-house tools that capture the benefits of AI while safeguarding company information and intellectual property.

So are AI chatbots really that dangerous to intellectual property and business secrets?

There is real risk here. Many public AI chatbots, including ChatGPT and Google Bard, store user prompts and feedback and may use them as training data to improve the service. Because the public version of ChatGPT can learn from what users type, information you submit could, in principle, shape the model’s future responses, including responses served to a competitor asking similar questions.

This doesn’t mean you can ask ChatGPT, “What are [insert radio company here]’s sales secrets?” and it will hand them over on a digital silver platter. While all user inputs are stored, only training approved by a moderator will affect the AI program.

Stored inputs are also the source of the largest security concern around chatbots so far. As the iHeart memo says, “As tempting as it is, please don’t use AI tools like ChatGPT on company devices or in relation to company work, or put any company documents into them, in order to protect iHeart’s intellectual property, partners, data and confidential information.”

In March, ChatGPT suffered a data leak in which some users could see the titles of other users’ chat histories, as well as limited personal and billing information. OpenAI, the company behind ChatGPT, identified and patched the bug, which corrupted connections in a way that caused data belonging to one user to be served to another.

As with most technology, vigilance is what keeps confidential information from reaching the wrong hands via a bug or cyberattack. Some companies, notably Microsoft with its Azure OpenAI Service, now offer private AI chatbot instances that run in isolation and don’t feed customer prompts back into shared training data.
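As a rough sketch of what that looks like in practice, the same kind of request can be pointed at a private Azure OpenAI deployment instead of the public service. The endpoint, deployment name, and API version below are placeholders for a company’s own resource, not a confirmed iHeart setup.

```python
# Sketch: routing the request to a private Azure OpenAI deployment. Prompts
# and completions stay inside the company's own Azure tenant and are not
# used to train shared models. Uses the pre-1.0 `openai` package's Azure mode.
import os

import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # your endpoint
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

response = openai.ChatCompletion.create(
    engine="YOUR-DEPLOYMENT-NAME",  # the model deployment you created
    messages=[{"role": "user", "content": "Summarize best practices for promo copy length."}],
)
print(response.choices[0].message.content)
```

Because the deployment lives inside the company’s cloud boundary, the trade-off is administrative overhead rather than data exposure.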

Businesses that integrate AI chatbot services like ChatGPT should treat them like any other third-party application and subject them to rigorous risk management. That includes assessing the security measures implemented by the AI service provider, conducting due diligence, and ensuring appropriate contractual agreements are in place to protect sensitive data.
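One lightweight control in that spirit, sketched below with purely hypothetical patterns and function names, is a gate that scans outgoing prompts for obviously confidential material before anything reaches a third-party service. A real deployment would pair this with a proper data-loss-prevention tool and a reviewed policy, not a handful of regexes.

```python
# Hypothetical pre-send gate: block prompts that appear to contain
# confidential material before they reach any third-party chatbot API.
# The patterns are illustrative only, not a complete DLP solution.
import re

BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\binternal use only\b"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like number
    re.compile(r"(?i)\brate card\b"),      # example trade-secret term
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

prompt = "Summarize our confidential Q3 rate card for the sales team."
if is_safe_to_send(prompt):
    print("OK to forward to the chatbot API.")
else:
    print("Blocked: prompt appears to contain confidential material.")
```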
