How we make sound.
If we know anything, it’s that you can’t pay for taste or style.
When discussing ideas for our next post with Tracy, I thought it might be fun and/or funny to have AI take one of our songs and “make it better,” knowing full well that this would be strictly for entertainment value. We didn’t even make it as far as discussing which song before I thought twice about this. Using a chatbot to roast my high school yearbook photo is one thing. Songs we’ve spent many hours crafting and workshopping to get right are another. The results might have been humorous, but that sort of activity is probably better suited to my personal writing space.
All this got me thinking about AI tools, music creation, and where we fit in. At some point, we plan to showcase some specifics of how we make certain sounds. Not giving the magic away completely, but some fun hacks we use to blow stuff up. The songwriting process is something I deeply enjoy, and the thought of taking a shortcut with it feels quite bizarre. Making music can be time-consuming, messy, and absolutely soul-crushing at times. When it works, and something you’ve made creates an indescribable feeling, it’s hard to want to give that away.
To date, most of the Outer World songs have been written and recorded as pretty high-fidelity demos. We rely on a variety of VSTs, some of which are AI-powered. We don’t buy these tools because of how they’re marketed; we buy them because they fit our workflow and a particular song. The majority of what we do is quite traditional (i.e., play an instrument and record it), but I’d be lying if I said we didn’t let a plug-in do some minor EQ’ing on its own so we can move on to things we’d rather spend time on. It feels bizarre to imagine a world where we’d ever type in the phrase “write a bridge to this song similar to Bad Mouth by Fugazi. The bridge begins at 1:49 and lasts until 2:10.”
By day, I work in the industry formerly known as advertising. I don’t even know what we call it today, since half of what we are trying to solve is getting anyone to pay attention to anything we make, whether it is an ad or not. I mention this because the past two years have been nothing but discussions about gen AI in our industry, and I’ve been to a variety of conferences where so-called experts have talked about the magic of these products. During one of these events, I was talking with some folks in the crowd about my industry, the perceived impact of these tools on it, and how to use them for some benefit without feeling completely shitty about it. The topic of generative music came up, and I went a bit quiet. This was mostly because I didn’t want to talk about being in a band. I love being in a band, but it can be a difficult conversation with people who have bizarre perceptions of what it means or just want to talk about how epic their high school talent show was.
Regardless, it did come up, and I explained that, despite having pretty serious careers, my wife and I still aggressively pursue writing, creating, releasing, and performing music. As you might imagine, the next question was whether we ever used AI to create music.
Remember a few sentences ago when I mentioned not wanting to talk about being in a band? This is what I get.
There wasn’t much to say back to the person. We love spending time creating and unlocking doors to sound so much that we’d rather waste all the days and weekends we have writing one song than write some prompts to do it in ten minutes.
A colleague of mine was very hyped on Suno when it came out. His entire mission was to make songs about people around the office. It was mildly humorous until we got to the metal song about me. Within the first second of it playing, I recognized what it had scraped. It was nearly note-for-note “Skin o’ My Teeth” by Megadeth. And it felt wrong instantly. My co-worker would likely abandon it and move on to something else that piqued his interest the next week. Others have used, and will keep using, these tools for monetization and creator credit, likely unaware that their chill-vibes track was based on, and stolen from, a Röyksopp song.
I love tech and learning new things. As we think about our next release, there are so many avenues we get to explore that we couldn’t have imagined even two years ago. This is all quite daunting at times, as there is always something to do that has nothing to do with writing or playing music. As creative types, this freedom allows us to explore different things related to our music, which is inherently exciting. But as AI tech continues to be shoved down our collective throats, I remain unimpressed by its lack of taste and polish. If we know anything, it’s that you can’t pay for taste or style, no matter how hard you try.
I did respond to the person at that conference.
“We don’t use AI to make music because we just love the process of making music together, as time-consuming as it can be.”
He smirked and continued talking about himself.