Written by Mark Grundhoefer, musician and music instructor at Metro Music Makers
I grew up making music the old-fashioned way. My first album of original material was recorded by my high school band and tracked to tape in the back of a gas station in Flomaton, Alabama. No computerized instruments. No digital shortcuts. If you wanted your part to sound right, you had to play it right. We learned to write songs, arrange parts, and rehearse until the timing was locked in. Every take was a gamble, and every good one felt like a little miracle.
Fast forward a couple of decades, and we’re in a completely different world. Now we have software that can generate a “new” song in seconds. We have programs that can separate stems from a stereo track, clean up noisy recordings, and even “master” your mix instantly. Some of this stuff is genuinely amazing. Some of it…
When AI Helps
I’m not against technology in music and never have been. We’ve had game-changing tech for decades. Synthesizers opened up whole new worlds of sound. MIDI allowed different instruments to talk to each other. Digital audio workstations put a whole recording studio on your laptop. Effects… modeling amps… plugins… these are tools we’ve embraced because they still rely on us to do the creative work.
And there are AI tools that make the impossible possible. The stem-splitting feature in Logic? It’s an absolute lifesaver for learning songs or remixing. Or take Peter Jackson’s Get Back documentary. AI was used to pull clear audio out of noisy Beatles rehearsal tapes that would’ve been unusable otherwise. That’s technology serving art, not replacing it.
Used like this, AI is just another wrench in the toolbox. It’s when people start letting it write the music for them that I get worried.
When AI Crosses the Line
Here’s the thing: AI can’t feel anything. It doesn’t get goosebumps when a lyric hits just right. It doesn’t stay up at 3 a.m. trying to find the right chord to match the mood in its head. It can only mimic what it’s been trained on, and what it’s been trained on is our music.
That means every generated track is stitched together from pieces of human creativity, often without credit or consent. It might sound convincing, but it’s hollow. There’s no story behind it. No experience. No scars or joy or heartbreak.
And the tools where you type in “make a pop song about summer love in the style of Taylor Swift” and they spit out something ready to upload? That’s not art. That’s content manufacturing. If we let music become just another stream of disposable background noise, we’re selling the soul of the craft for convenience.
The Human Element
Music, at its core, is a reflection of who we are. A note bent just slightly sharp because you felt it that way… the crack in a singer’s voice when they’re holding back tears… the push and pull between players in a live take…
These are things AI can fake—but not mean. And if we start outsourcing the act of creation, we risk erasing the one thing technology can’t replicate: human expression.
We’ve already made recording and production easier than ever before. That’s a good thing. But writing and performing are still deeply human acts. They’re the last part of the process that belongs entirely to us. Why would we hand that over?
The Money and the Mess
There’s also the ethical dilemma no one’s fully untangled yet. People are already making money from AI-generated songs built on the work of countless musicians who never agreed to be “training data.” Stock music libraries are filling up with AI tracks, making it harder for real composers to compete.
And then there’s the environmental side: training and running these massive AI models chews through a ton of energy. We can’t just ignore that cost.
My Bottom Line
I’m not saying, “Never touch AI.” I’m saying use it like you’d use any other piece of studio gear. Use it with intention, with limits, and with respect for the art form.
Let AI clean up a noisy demo? Sure. Let it pull a bass line out of a stereo track so you can learn it? Go for it. But let it write your song? No thanks.
Because at the end of the day, music is about connection. One human telling another, “Here’s what I felt; do you feel it too?” No machine can do that for us. And if we let them try, we might wake up one day and realize we’ve lost something we can’t get back.
A Musician’s Guide to Responsible AI Use
If you want to keep AI in your workflow without losing your voice, here are some ways to use it responsibly:
- Use AI for cleanup, not creation: noise reduction, stem separation, click removal, and other technical fixes (see the sketch after this list).
- Lean on it for inspiration, not replacement: when you’re truly stuck, let it suggest ideas as a last resort, but rewrite them in your own voice.
- Avoid training models on other artists’ work without consent—respect the source material.
- Keep the writing and performance human—AI can’t feel, so don’t hand it your emotional labor.
- Remember the environmental cost—avoid unnecessary processing or massive model runs for trivial results.
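If you’re curious what “cleanup, not creation” can look like in practice, here’s a minimal sketch of noise reduction on a demo recording. It assumes the open-source noisereduce and soundfile Python packages are installed (pip install noisereduce soundfile), and the file names are just placeholders for your own recordings; treat it as one way to do this kind of fix, not the only one.

```python
# A minimal sketch of "cleanup, not creation": reduce background hiss on a
# noisy demo. Assumes the noisereduce and soundfile packages are installed;
# the file names below are placeholders for your own recordings.
import noisereduce as nr
import soundfile as sf

# Load the noisy demo (any format soundfile can read: WAV, FLAC, etc.)
audio, sample_rate = sf.read("noisy_demo.wav")

# soundfile gives stereo audio as (frames, channels); noisereduce expects
# channels first, so transpose if needed.
if audio.ndim > 1:
    audio = audio.T

# Estimate the noise profile from the recording itself and subtract it.
# The playing, the phrasing, the feel all stay exactly as you performed them.
cleaned = nr.reduce_noise(y=audio, sr=sample_rate)

# Write the cleaned take back out, restoring the (frames, channels) layout.
if cleaned.ndim > 1:
    cleaned = cleaned.T
sf.write("cleaned_demo.wav", cleaned, sample_rate)
```

The point is that nothing creative changes hands here: the performance is still entirely yours, and the tool only scrubs out the hiss.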