Malicious actors are increasingly abusing generative AI music tools to produce and disseminate homophobic, racist, and propagandistic songs. This alarming trend has been documented by ActiveFence, a service dedicated to managing trust and safety on online platforms.

Rising Threat of AI-Generated Hate Speech

Since March, there has been a significant increase in discussions within hate speech-related communities about how to exploit AI music creation tools to produce offensive content. According to a recent report by ActiveFence, these AI-generated songs are being shared on forums and discussion boards, targeting minority groups and promoting hate, violence, and terrorism.

ActiveFence researchers have found that these songs aim to incite hatred against various ethnic, gender, racial, and religious groups. The report highlights that these offensive songs also celebrate acts of martyrdom, self-harm, and terrorism, amplifying their dangerous impact.

The fear is that easy-to-use, free music-generating tools will enable people who previously lacked the means or expertise to create and spread hateful content. This mirrors the spread of misinformation, disinformation, and hate speech facilitated by AI-generated images, voice, video, and text.

“These trends are intensifying as more users learn to generate and share these songs,” an ActiveFence spokesperson told TechCrunch. “Threat actors are identifying vulnerabilities to exploit these platforms and generate malicious content.”

How AI Music Generators Are Misused

Generative AI music tools like Udio and Suno allow users to add custom lyrics to generated songs. Although these platforms have safeguards to filter out common slurs and pejoratives, users have found ways to circumvent these filters.

In white supremacist forums, users have shared phonetic spellings of offensive terms, such as “jooz” instead of “Jews” and “say tan” instead of “Satan,” to bypass content filters. Others suggest altering spacing and spelling to evade detection when referring to acts of violence, such as writing “mire ape” instead of “my rape.”
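These tricks work because naive moderation often amounts to matching lyrics against a list of banned strings. The sketch below is purely illustrative (the blocklist, the `is_flagged` helper, and the inline Soundex implementation are assumptions, not any platform’s actual filter), but it shows both why exact matching misses a respelling like “jooz” and how a phonetic comparison such as Soundex can catch it, since “jooz” and “Jews” reduce to the same code, as do “say tan” and “Satan” once spacing is collapsed:

```python
import re

def soundex(word: str) -> str:
    """Classic Soundex: first letter plus three digits; vowels reset, h/w do not."""
    codes = {}
    for group, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                         ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in group:
            codes[ch] = digit
    word = re.sub(r"[^a-z]", "", word.lower())
    if not word:
        return ""
    out, prev = [word[0].upper()], codes.get(word[0], "")
    for ch in word[1:]:
        digit = codes.get(ch, "")
        if digit and digit != prev:
            out.append(digit)
        if ch not in "hw":           # h and w do not reset the previous code
            prev = digit
    return ("".join(out) + "000")[:4]

# Hypothetical blocklist drawn from the examples reported in this article.
BLOCKED = ["jews", "satan"]
BLOCKED_CODES = {soundex(term) for term in BLOCKED}

def is_flagged(lyric: str) -> bool:
    collapsed = re.sub(r"\s+", "", lyric.lower())    # defeats "say tan"-style spacing
    exact = any(term in collapsed for term in BLOCKED)
    phonetic = any(soundex(token) in BLOCKED_CODES
                   for token in re.findall(r"[a-z]+", collapsed))
    return exact or phonetic

print(is_flagged("jooz"))      # True: "jooz" -> J200, same code as "jews"
print(is_flagged("say tan"))   # True: collapses to "saytan" -> S350, same as "satan"
```

Phonetic matching brings false positives of its own, which is one reason real moderation stacks layer several signals, from text normalization and fuzzy matching to trained classifiers, rather than relying on any single check.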

TechCrunch tested several of these workarounds on Udio and Suno. While Suno allowed all offensive content through, Udio blocked some but not all of the offensive homophones. When reached for comment, a Udio spokesperson stated that the company prohibits the use of its platform for hate speech. Suno did not respond to requests for comment.

ActiveFence found links to AI-generated songs spreading conspiracy theories about Jewish people, advocating for their mass murder, containing slogans associated with ISIS and Al-Qaeda, and glorifying sexual violence against women.

The Emotional Power of Music

ActiveFence argues that songs carry an emotional weight that makes them a potent tool for hate groups and political warfare. They draw parallels to Rock Against Communism, a series of white power rock concerts in the U.K. in the late 1970s and early 1980s, which spawned subgenres of antisemitic and racist “hatecore” music.

“AI makes harmful content more appealing — imagine someone creating a rhyming song that makes a harmful narrative easy for everyone to sing and remember,” the ActiveFence spokesperson explained. “These songs reinforce group solidarity, indoctrinate peripheral members, and are used to shock and offend unaffiliated internet users.”

Calls for Stricter Moderation

ActiveFence is urging music generation platforms to implement better prevention tools and conduct more extensive safety evaluations. It recommends “red teaming,” simulating threat-actor behavior to surface vulnerabilities, and moderating both user inputs and generated outputs so that offensive content is blocked before it reaches users.
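One lightweight form of that red teaming is an adversarial regression suite: a curated list of known evasion patterns that is replayed against the filter after every change, so that patching one bypass does not quietly reopen another. A minimal sketch, reusing the hypothetical `is_flagged` filter from the earlier example (the probe list here is illustrative; real suites are curated continuously from observed abuse):

```python
# Hypothetical adversarial probes modeled on the evasions described above,
# paired with the verdict the filter is expected to return.
PROBES = [
    ("jooz", True),                  # phonetic respelling
    ("say tan", True),               # spacing trick
    ("a song about boats", False),   # benign control: must NOT be flagged
]

def red_team(filter_fn) -> None:
    """Replay every probe through the filter and report any mismatch."""
    for text, should_flag in PROBES:
        got = filter_fn(text)
        status = "ok" if got == should_flag else "MISS"
        print(f"{status}: {text!r} flagged={got} expected={should_flag}")

red_team(is_flagged)   # reuses the sketch filter from the previous example
```

The benign control matters as much as the attack probes: a filter that flags everything passes every attack case while being useless in practice.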

However, as users continue to find new ways to bypass moderation, these fixes may be only temporary. For example, some AI-generated terrorist propaganda songs were created using Arabic-language euphemisms and transliterations that the music generators failed to detect, because the platforms’ content filters are weaker in Arabic.

The potential for AI-generated hateful music to spread widely is significant. Earlier this year, Wired reported that an AI-manipulated clip of Adolf Hitler received more than 15 million views on X after being shared by a far-right conspiracy influencer.

Experts, including a UN advisory body, have expressed concerns that generative AI could amplify racist, antisemitic, Islamophobic, and xenophobic content.

“Generative AI services enable users lacking resources or skills to build engaging content and spread ideas competing for attention globally,” the spokesperson noted. “Threat actors, recognizing this potential, are working to bypass moderation and avoid detection — and they have been successful.”

As I see it, the misuse of AI music generators for spreading hate underscores the urgent need for robust safeguards and constant vigilance. While technology can be a powerful tool for creativity and connection, it also poses significant risks if left unchecked. The challenge lies in balancing innovation with responsibility to ensure a safer online environment for all.