So, you thought AI was just for writing slightly off-kilter emails and making your cat look like a pirate? Think again. Whilst the bright minds in Silicon Valley are busy promising us a techno-utopia powered by generative AI, a shadier corner of the internet is putting the same tools to work on a much less noble pursuit: industrial-scale music fraud. This isn’t some niche problem for audiophiles to grumble about. We’re talking about a sophisticated, multi-million-pound heist that strikes at the very heart of the creative economy, and it’s all powered by some clever code and an unsettling grasp of human psychology.
This new wave of digital deception is far more than just a few fake clicks. It’s a calculated campaign designed to manipulate listeners, game the streaming platforms, and siphon royalties directly from the pockets of real, struggling artists. Forget the image of a lone hacker in a dark room. This is the factory farming of music, where bots are the audience and the “artists” are nothing more than algorithms churning out bland, royalty-eligible noise. To really understand what’s going on, you have to look past the code and into the mind.
What Exactly Is an AI Music Scam?
Let’s be clear. When we talk about an AI music scam, we’re not talking about the fun (and legally dubious) AI-generated Drake track that did the rounds on TikTok. We’re talking about something far more systematic. At its core, an AI music scam uses generative AI to churn out passable tracks at industrial scale: thousands, sometimes hundreds of thousands, each generated in minutes. These aren’t destined for the charts; they’re designed purely to be eligible for royalties on platforms like Spotify, Apple Music, and YouTube.
The scam isn’t just in the creation; it’s in the promotion. Fraudsters then deploy vast networks of bots (automated accounts) to stream these songs relentlessly. Each play, however fleeting or artificial, generates a tiny fraction of a penny. According to industry figures, royalty rates hover between $0.003 and $0.005 per play. It sounds like nothing, but when a botnet racks up billions of streams, it quickly adds up to millions of pounds stolen from the collective royalty pool. As Inna Vasilyeva, a senior threat intel analyst at HUMAN Security, told Dark Reading, “Last year, billions of music streams were consumed by bots, diverting millions in royalties away from real artists.”
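If you want to see how fractions of a penny become a heist, here’s a back-of-the-envelope sum in Python. The per-stream rates are the industry figures quoted above; the two-billion-stream botnet volume is a made-up number, purely for illustration.

```python
# Back-of-the-envelope: how fractions of a penny become millions.
RATE_LOW = 0.003   # dollars per stream (low end of the quoted range)
RATE_HIGH = 0.005  # dollars per stream (high end)

bot_streams = 2_000_000_000  # hypothetical: two billion bot-driven plays

print(f"Diverted royalties: ${bot_streams * RATE_LOW:,.0f} "
      f"to ${bot_streams * RATE_HIGH:,.0f}")
# -> Diverted royalties: $6,000,000 to $10,000,000
```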
This is the strategic difference: instead of trying to make one hit song, scammers are playing a numbers game. Why spend months crafting a masterpiece when you can generate 1,000 “good enough” tracks and force them to be “popular”? It’s a brute-force attack on a system built on the assumption of human listeners.
The Psychology Behind the Scam: Why Our Brains Fall for It
This whole enterprise would fall apart if it didn’t effectively exploit the quirks of human psychology. The fraudsters aren’t just technologists; they are masters of manipulation, and their primary weapon is your own brain.
Emotional Engagement Hacking
Music is a direct line to our emotions. A certain chord progression can make us feel nostalgic, a beat can make us want to move, and a melody can bring a tear to our eye. Scammers know this. Emotional engagement hacking isn’t about creating profound art; it’s about producing music that is just emotionally resonant enough to blend in.
Think about the explosion of “lo-fi hip hop beats to study/relax to” or ambient “sleep soundscape” playlists. These genres are algorithmically perfect for this kind of fraud. They are repetitive, instrumental, and designed to exist in the background. An AI can be trained on thousands of hours of this music to produce tracks that hit all the right notes—literally. It can generate a calm, non-distracting soundscape that a real person might genuinely put on for hours. You don’t question its authenticity because it serves its functional purpose. The AI isn’t trying to be Beethoven; it’s just trying to be beige, and it’s exceptionally good at it.
Virality Manipulation
Here’s where it gets really clever. The bots aren’t just there to rack up plays. They are there to create the illusion of popularity. This is classic virality manipulation. Social platforms and music streaming services are designed to promote what’s already trending. Their algorithms see a song getting thousands of plays and think, “Aha! This must be the next big thing.” So, they start recommending it to real human users.
It’s like setting up a fake queue outside a new restaurant. A handful of paid actors standing in line makes passers-by think it must be the hottest spot in town, and soon, a real queue forms behind them. The initial bot activity tricks the algorithm, and the algorithm then serves the track to you. You see a song with 500,000 plays on a playlist and assume it must be good, or at least popular. You give it a listen, adding a legitimate stream to the pile and reinforcing the cycle. The bots create the spark, and the platform’s own recommendation engine fans the flames.
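To make that feedback loop concrete, here’s a toy simulation of the fake-queue effect. Every constant in it is invented: the trending threshold, the bot volume, and the chance a real user samples a promoted track. The point is the shape, not the figures: a modest bot seed crosses the threshold, and legitimate plays start compounding on top.

```python
import random

# Toy model of the fake-queue effect. Bot plays push a track over a
# platform's "trending" threshold; once promoted, a small fraction of
# real users sample it, adding legitimate streams to the pile.
# Every constant below is invented for illustration.

TRENDING_THRESHOLD = 10_000  # plays before the algorithm starts promoting
HUMAN_TRY_RATE = 0.02        # chance a real user samples a promoted track

def simulate(days: int, daily_bot_plays: int, daily_impressions: int):
    total_plays = human_plays = 0
    for _ in range(days):
        total_plays += daily_bot_plays  # the bots create the spark
        if total_plays >= TRENDING_THRESHOLD:
            # ...and the recommendation engine fans the flames.
            new_humans = sum(random.random() < HUMAN_TRY_RATE
                             for _ in range(daily_impressions))
            human_plays += new_humans
            total_plays += new_humans
    return total_plays, human_plays

total, human = simulate(days=30, daily_bot_plays=2_000, daily_impressions=50_000)
print(f"{total:,} total plays, {human:,} from real listeners")
```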
The Grimy Mechanics of Exploitation
This scam operates on two parallel tracks: exploiting what’s popular in our culture and using sophisticated technology to hide its tracks.
Cultural Trend Exploitation
Fraudsters are nothing if not opportunistic. They aren’t trying to invent a new genre; they are focused on cultural trend exploitation. They watch what’s bubbling up on TikTok, what playlists are gaining traction on Spotify, and what kind of background music is popular on YouTube. Then, they use AI to flood that specific micro-genre.
Right now, that might be ambient sleep music, focus beats, or instrumental covers of pop songs. The content is cheap to produce and taps into a pre-existing demand. Companies like Epidemic Sound and Firefly Entertainment, which provide legitimate royalty-free music, now face a new form of competition: a limitless supply of algorithmically generated tracks that mimic their style. Why would a content creator pay for a real artist’s track when a passable, AI-generated alternative is being pushed to the top of playlists by bots?
Botnet-Driven Fake Engagement
This is the engine room of the whole operation. According to the investigation by HUMAN Security’s Satori Threat Intelligence team, these fraudsters use highly sophisticated botnets to generate fake streams. They don’t just use basic scripts from a single server; that would be too easy to detect. Instead, they use residential proxies, which makes the bot traffic look like it’s coming from thousands of different homes, just like real users.
They also program the bots to mimic human behaviour. A bot might “listen” to a song for more than 30 seconds to ensure it counts as a stream, add the song to a playlist, follow the “artist,” and then move on to another track. This level of automation is designed to fool the fraud detection systems that streaming platforms have in place. The tell-tale sign, as researchers point out, is the traffic pattern: a massive, unnatural spike in streams that appears out of nowhere and then vanishes just as quickly. For a real artist, growth is usually gradual. For a botnet, it’s a sheer cliff face.
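For the curious, here’s a minimal sketch of that traffic-pattern check. The spike factor and trailing window are assumptions for illustration, not anyone’s actual detection thresholds, and a real platform would combine many more signals than this.

```python
from statistics import mean

def looks_like_botnet(daily_streams, spike_factor=10, window=7):
    """Flag a day whose stream count dwarfs the trailing average.

    A crude version of the pattern researchers describe: real artists
    grow gradually, botnets produce a sheer cliff. The spike_factor
    and window values are illustrative, not platform thresholds.
    """
    for i in range(window, len(daily_streams)):
        baseline = mean(daily_streams[i - window:i])
        if baseline > 0 and daily_streams[i] > spike_factor * baseline:
            return True, i
    return False, None

organic = [100, 120, 150, 160, 200, 220, 250, 300, 340, 400]
botted  = [100, 110, 105, 120, 115, 108, 112, 45_000, 52_000, 90]

print(looks_like_botnet(organic))  # (False, None)
print(looks_like_botnet(botted))   # (True, 7) -- the cliff face
```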
Case Study: Unmasking the Ghost in the Machine
The recent report from HUMAN Security paints a stark picture of these campaigns. Investigators uncovered “hundreds of thousands” of AI-generated songs being used in these schemes. They found that in many cases, the “artist” profiles were completely fabricated, with generic names and no social media presence. The only thing they had was a catalogue of thousands of tracks and an impossibly high stream count.
Lindsay Kaye, VP of threat intelligence at HUMAN, highlighted how these schemes are not just automated but are becoming “smarter” in their ability to evade detection. The fraudsters operate in what’s known as a “cat-and-mouse game” with platforms. As Spotify tightens its security, the scammers adapt their tactics, finding new ways to make their bots look human. This isn’t a one-off problem; it’s a persistent, evolving threat. The financial incentive is simply too large for them to give up easily. With millions of pounds on the line, this has become a core business model for some cybercriminal groups.
What Can Be Done? The Fight for Authenticity
So, how do we combat this digital plague? The solution requires a multi-pronged approach involving platforms, artists, and listeners.
For Platforms and Artists
For streaming services, it’s an ongoing technological arms race. They need to invest more in AI-powered detection systems that can identify the subtle fingerprints of bot activity. Analysing traffic patterns for those tell-tale spikes and drops is a start. They must also get better at verifying artist identities and flagging accounts that upload thousands of tracks overnight.
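That last check, the catalogue-velocity flag, might look something like the sketch below. The twenty-uploads-a-day ceiling is an assumed figure; a real platform would tune it against known abuse patterns rather than pluck it from the air.

```python
from collections import Counter
from datetime import date

# Sketch of a catalogue-velocity check: no human act uploads thousands
# of tracks overnight. The ceiling below is an assumed figure, not a
# real platform's threshold.

MAX_PLAUSIBLE_UPLOADS_PER_DAY = 20

def flag_bulk_uploaders(uploads: list[tuple[str, date]]) -> set[str]:
    per_day = Counter(uploads)  # counts each (artist, day) pair
    return {artist for (artist, _day), n in per_day.items()
            if n > MAX_PLAUSIBLE_UPLOADS_PER_DAY}

catalogue = [("ghost_artist_01", date(2024, 5, 1))] * 3_000 \
          + [("real_band", date(2024, 5, 1))] * 2
print(flag_bulk_uploaders(catalogue))  # {'ghost_artist_01'}
```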
For artists, vigilance is key. It’s crucial to regularly monitor your streaming data. A sudden, inexplicable surge in plays from a single city or playlist could be a red flag. Whilst it might be tempting to enjoy the apparent boost, it’s often a sign that your music is being used as filler by a botnet to make its activity on fraudulent tracks look more natural. Reporting anomalies to your distributor or the platform directly is essential.
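As a rough illustration, here’s the sort of self-audit an artist (or their distributor) could run over exported streaming data: is one city or playlist responsible for an implausible share of plays? The 60% cut-off and the source labels are assumptions, purely to show the shape of the check.

```python
from collections import Counter

# A quick self-audit over exported streaming data: flag any single
# source (city, playlist) that accounts for an implausible share of
# plays. The 60% cut-off is an assumed heuristic, not an industry rule.

def concentration_red_flags(plays_by_source, threshold=0.60):
    total = sum(plays_by_source.values())
    return {src: n / total for src, n in plays_by_source.items()
            if n / total >= threshold}

week = Counter({"Playlist: Deep Focus Clone": 48_000,
                "London": 900, "Berlin": 700, "Everywhere else": 2_400})
print(concentration_red_flags(week))
# {'Playlist: Deep Focus Clone': 0.923...} -> worth reporting
```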
Educating the Audience
Ultimately, the most powerful tool might be public awareness. We, as listeners, need to become more critical consumers of digital content. When you stumble upon a new artist with a million plays but only ten followers, be sceptical. When a playlist is filled with generic, instrumental tracks from artists you’ve never heard of, question its origin.
Start looking for the human element: a social media presence, interviews, behind-the-scenes content. Real artists build careers; bots just build play counts. By actively choosing to support artists we know are real, we can help starve these fraudulent operations of the legitimate streams they need to appear credible.
This isn’t just about money or technology. It’s about preserving the value of human creativity. The psychology behind these AI music scams shows how easily systems can be gamed when we underestimate the cynical ingenuity of those looking to exploit them. The future of music, and indeed of all creative industries, may depend on our ability to tell the difference between a real artist and a ghost in the machine.
What do you think? Have you ever stumbled upon a playlist or artist that just felt… off? How do we make sure technology serves art rather than undermining it?