How Could Generative AI Impact Club Music?

NUKG Monthly is Nathan Evans' UK garage and club music column. This edition explores how generative AI could impact club music - what happens after the bubble bursts? Are we going to have DJ sets filled with gen-AI filler?

Subscribe to the KEYMAG Substack to receive email notifications on new editions of NUKG Monthly.


How Does Generative AI Affect Club Culture?

Last year, TIME’s Person of the Year was a group of people - the architects behind generative AI. It’s everywhere now - it’s on your webpage, it’s generating your images, it’s possibly in your Spotify library. Which is concerning, as these AI systems aren’t advancing much of anything. Quite literally, they hold no independent thought of their own; they can only predict what the user wants based on the query and petabytes of data. These systems are not so much generative as regurgitative.

Then, there are the ethical concerns of running generative AI platforms at all. AI companies use phenomenal amounts of water, energy and computer processing to serve their global user bases daily. That demand has made the most powerful consumer processors unavailable, strained the water supply of local populations where data centres are situated and, according to one study, caused the same CO2 emissions globally in 2025 as New York City.

In club music, gen-AI has bombarded its way into the conversation, bringing a storm of controversy with it. “I Run” by HAVEN is currently embroiled in a legal battle with Jorja Smith’s label FAMM, as the chart hit was allegedly found to use an AI copy of Smith’s voice and was marketed as an unreleased tune from the R&B singer. Caribou’s “Broke My Heart” caused a stir when project mastermind Dan Snaith revealed that he used AI vocal manipulation to sing on the track - which was named the #6 best garage tune of 2024 by this very blog. Not everyone loved it - the Guardian gave his wider album Honey two stars out of five, calling it “dubious on so many levels”.

Generative AI platforms like Suno and Udio market themselves as letting people create whole tracks for themselves, and have struck deals with major labels to allow use of their music catalogues to train the AI models - clearly without artist consent. Is UK garage and club culture at large doomed to become a scene of AI-generated tracks? Or do Suno and Udio not know their 2step from their dark garage, their Moodswing from their Muskatt?

To find out more and get a perspective, I spoke to Darren Hemmings, Managing Director of Motive Unknown and writer of the Network Notes newsletter, which reports on and analyses the latest developments between the music industry and AI. In a conversation that goes some way to putting my fears to rest, we chat about what the future holds beyond the AI hype cycle, how AI can become a harmonious part of the music-making process, and how it could reshape the sound of club music.

If you’d like to introduce yourself…

Sure, my name’s Darren Hemmings, I run a company called Motive Unknown. I started it about 15 years ago, and we do marketing in the music space. We’ve done work for the likes of Ninja Tune, Warp, Domino, LuckyMe, Anjuna, and people like Chase & Status, Spice Girls, Kylie Minogue. It’s a broad spectrum, and that’s about 60% of what we do. The other 40% is working with music tech, a lot of companies in the audio production space like Plugin Boutique, Beatport, YouJam, Minimal Audio, Lunacy Audio… if you’re a producer, you’re the person we’re marketing to.

With that, you were probably very early in the know on generative AI integrating into music. When did you start looking?

We make it our business to be across all technical developments. We spent a lot of time in the Web3 space, which turned out to be a waste of time [laughs]. We’re looking at the likes of ChatGPT, Suno and Udio from the perspective of not just how they impact our clients, but how we work and their wider impact on popular culture.

You mentioned Suno and Udio, which are the two major AI-powered music generation platforms. Are there any others lurking about?

I suspect there are many lurking about, but those are the two dominant ones. We track it less in the context of end-to-end generation - ‘make me a song that sounds like Stormzy’, for example - and more the capabilities of AI within the music-making process. Soundlabs and other voice-cloning platforms allow someone like me, who can’t sing a note, to wail badly into my mic and have it cloned into Barry White or whoever you want. In the space we occupy, we don’t have clients coming to us and showing us an AI track they’ve made in Suno. The people we work with are not interested in using Suno and Udio to make music, but they are interested in the legal perspective and what the impact is going to be for them.

In the context of club music and UK garage, I feel like the many subgenres of club music might be safe from generative AI. My theory is that these programs don’t even have the context needed to make the stuff, but I wonder if you have an inkling about the datasets of these platforms - where are they pulling from to train their models? Because so much of club music exists on vinyl and in YouTube rips, and I don’t think that’s where these platforms are taking from.

They’ll be pulling from digital sources, unquestionably. If you pull a YouTube rip of a garage track with weak bass and feed it into an AI model, is that what it thinks a good garage track sounds like?

I read a piece recently from the Atlantic about how it’s been proven repeatedly that these AI companies are hoovering up copyrighted works, because AI can be made to spit them back out almost perfectly, be it song lyrics, books or visual imagery. GEMA, the German rights body, is suing AI companies for copyright infringement off the back of that. That’s got massive implications for the future if they can prove it, and you’d hope these AI companies will need to pay massive sums of money to the people they’ve ripped off.

The boom and subsequent backlash around generative AI has muddied the waters between AI as it existed before ChatGPT and this new generative AI. In what ways are artists using generative AI in the music production process?

I think we get too wrapped up in generative being a means for creating songs versus careers. Can you use AI to make an ersatz version of Stormzy? Yes, but the value is low because it’s so easy to make, and people love artists for reasons that go beyond the music. It’s the image, the messaging, the experience of seeing that artist live, the controversy.

To your point, I feel like there’s a huge amount of opportunity within the AI space to do some really good stuff. One of my favourite plugins is a synth called Synplant, which has a patch built into it called the Genome, where I can take a snatch of bass - from a Wu-Tang Clan track or something - drop it into the Genome patcher, and it uses AI to clone that sound as a playable synth patch. Not as a sampler, but as something you can tune. That’s genuinely brilliant to me. And yeah, you are cloning the bass sound, but you’d like to think that an artist has the smarts to add something else to it rather than just being another clone of the sound, you know? It’s a good example of AI enhancing the artist’s repertoire and capability without being a cheap shortcut.

Do you think that will replace what sampling used to mean? Because there’s a magic to sampling that would be lost - the fact that you can research and find the original context an artist pulled from.

Yeah, but I think we’re back to the synthesizer argument, in truth. People thought synths were the devil and were going to kill rock music, but they didn’t - they created something that was different and interesting. Instead of crate digging, artists can now go prompt digging, and the outputs are the same in the context of creating whole tracks. It may create something that’s dynamite, like a great drum break, but I don’t think it will put most people off sampling. People are quick to say that it’ll ruin sampling, and I think the truth is significantly more nuanced. If someone’s looking for me to scream blue murder about this, they’ll be disappointed - the truth is, it’s nuanced.

It remains to be seen what AI can do in the hands of genuinely brilliant creative artists, in the same way it was with synths. I remember Caribou posting a video answering people’s question of who was singing on his track “Broke My Heart”, and he said it was all him, but sung through AI. There will always be people who hate it just because it’s AI, but others thought the way he used it was actually pretty clever.

What do you think of Bandcamp becoming a no-AI music platform?

I get it, and in principle I kind of applaud it, but it’s a complex statement to make. If I made a track that uses Synplant to make the bass sound, does that count? Is that gonna get booted off? You can outline an ethical position, but it’s such a difficult thing to enforce because it’s such a complex area. I think Bandcamp were trying to say that they don’t want their marketplace flooded with AI the way Spotify has been, with billions of songs that never get any streams and cause problems. An underdiscussed aspect is storage - how do you store all this crap that gets dumped onto your servers? It’s a complex issue, and I think you’ll see people continue to struggle with it until, culturally, we find a reconciliation point with it.

Another way AI vocal manipulation could be used is to create reference tracks, to hear how other artists might sound on a track. Have you come across this?

It’s interesting you say that. One of our management clients, that’s exactly what they do. They have songs they think would work for a particular artist, so they clone the artist’s voice, use that to create a reference track and send it to the artist. No-one ever hears them outside of that closed circle of management and the artist.

The other one I found quite funny is that in America particularly, artists have to record radio idents. ‘You’re listening to KCRW!’ That was the other use I’ve found - artists using it not for their singing voice but their spoken voice. Now they don’t have to sit there for two days recording 300 radio idents [laughs].

Let’s talk briefly about the HAVEN ‘I Run’ case, because that’s kind of the scenario that people dreaded, right? An artist promoting a song off the back of another artist’s name and likeness, allegedly?

The only thing I was curious about was how they managed to train the model to sing like Jorja. Most artists wouldn’t know how to clone a voice so accurately. But yeah, it was always going to happen - when, not if. We could get a scenario where artists fully fingerprint their voice, in the same way that Marvin Gaye’s estate sued the writers of “Blurred Lines” not because they copied him directly, but because it sounded enough like Gaye to infringe. With larger legacy artists, you’ll possibly see more of this - like if someone tries to clone Michael Jackson’s voice.

How does AI affect the music industry long-term?

We talk about the music industry generally like it’s this one big thing we need to fix, but it’s always been an umbrella term for a massive collection of genres and scenes. The golden age of jungle was probably an example of a cottage industry, where the artists often had their own labels, and the communities would drive forward the collective. They wouldn’t have been affected by something that affected major labels. There will still be communities of human-made music that may come with more friction. Right now, we’re in a world where you can get everything instantly, but there’s a buzz phrase going around called “friction-maxxing”, which calls back to when I was younger: hearing a track on the radio and either taping it off the air or spending ages scouring for it on vinyl. But when you got that song, you played it to death. I think we’ll see more communities looking to reintroduce that friction.

That’s especially true of UK garage, which thrived on dubplates and vinyl pressings and later, MP3 downloads. I think we’ll see more of a divide between garage artists going the streaming route and going the friction-maxxed route.

I do wonder if you’re going to see less music being released in general, because releasing isn’t a victory anymore like it used to be. The friction of vinyl dubplates creates that. I think you need communities to share music into, and I think more people will work to share their music across private communities and find one that will raise their art, rather than just releasing it on Bandcamp.

Another way I’ve seen AI used in the music-making process is plugins that help artists with the mix of a track. I was wondering if you’re aware of those?

Yeah, I think they’re getting better. There’s an awkward truth that for a lot of producers, mixing and mastering was beyond their own capability and beyond the means of paying someone to do it. It’s a bit of a black art that most people don’t quite understand fully. We worked with a company called RoEx, who have a feature called Mix Check Studio, where you upload your track and it critiques your mix to highlight what’s imbalanced, or if you’ve got a phasing issue, et cetera. As an objective third party that can help me as I’m going along, it’s incredible.

I’ve no doubt there are mixing engineers who loathe this, but I don’t think it was ever intended to replace the Bob Clearmountains of the world. I read that Skream doesn’t do his own mixing anymore because it sucks the energy out of the track’s creation, and he’d rather pay someone to do it for him. But he’s also in a position where he’s making enough money to do so. Most artists can’t afford those services, so these tools fill a gap but don’t replace a great mixing engineer. They solve a problem in the meantime by doing about 60% of what a mastering engineer can do.

I suppose it could create this idea of ‘one true mixing style’ and discourage alternative and experimental ways of mixing tracks.

Yeah, it’s a little problematic. One mate of mine makes hip-hop beats which are brutally distorted. It sounds amazing, but I reckon it’s exactly the type of thing that AI would look at and have a fit over. The way these AI platforms go about it is genre-led, not description-led, so it’s a recognised shortcoming of these things.

As mentioned earlier, AI companies use phenomenal amounts of water, energy and computer processing to serve their global user bases daily. How do you reckon with the environmental concerns of AI?

…I don’t know. It’s difficult, because history has shown that money wins out over ethics a lot of the time. But I think there’s a parallel to what we saw with Web3, where the energy used around cryptocurrency was gargantuan to begin with, but then the technology evolved greatly and massively reduced that carbon footprint. I’m sure it’s still bad, and I’m not an advocate for Web3 at all, but it was improved upon. I think you’ll see a similar thing here. We’re in a place where governments aren’t prepared to put pressure on these companies, which is pretty bleak, but history moves in cycles, and optimism is key.

What type of club music is most likely to be impacted by generative AI?

The worst music. If you’re in the realm of utterly identikit music, then you’ve probably got a problem. I’m a huge fan of dub techno - I love how meditative it is when you’re walking around town. It feels purpose-built for noise-cancelling headphones. But it is quite formulaic, so it’s logical that AI could recreate it. There’s something Massive Attack posted to this effect: is the problem that AI can reproduce this music, or is it that the music you were making was so bland that it’s easily reproducible?

We’re seeing a lot of hyperbole right now around gen-AI, and it seems like, despite what we can all see for ourselves of its inaccuracies and limited capabilities, the makers want us to believe it can do anything. Will the bubble burst?

It’s like an update of Microsoft Word and the stupid Clippy tool that was like, ‘it looks like you're writing a letter, do you want help with that?’ No! Get back in your box, Clippy. It’s the same with AI. We saw this with Web3, where there was a lot of hyperbole about how it’s the thing that will solve our problems, and then the money runs out and it all collapses. What replaces that is common logic and actual pragmatism about its use. I don’t think we’re there yet, but the money will run out, and I think that will be the fate of Suno and Udio. Which I think is why major labels have struck deals with these companies: if they go bust, the labels don’t have to give back the $10m they got, so it’s risk-free for them. Once the price of these services goes up by a factor of 10, reflecting the true cost of the service, a load of people will suddenly change their minds about it. And it’ll die. That’s the future past the hyperbole phase of the hype cycle.

I think in music we’re going to see a ‘corn syrup effect’, similar to what goes on in America. Corn syrup is used in so many food products there - it’s why the bread tastes like cake. But that means there’s a subset of products that get good buzz by marketing themselves as ‘corn syrup free’, and I think, given the public reaction to gen-AI music, artists will market their music as ‘AI free’ or ‘100% human made’.

At a certain level, it’ll be clear what’s human-made, because the originality will be clear to see - something that couldn’t possibly have been made by AI. We have to credit humans for being beautifully idiosyncratic and unpredictable.
