According to Polymarket, "Ordinary" by Alex Warren has a 5% chance of being the most played song of 2025. While 5% is really low, it's still higher than any other option. To me it's crazy that this isn't more predictable.
I had never heard the song so I thought I'd give it a listen: https://www.youtube.com/watch?v=u2ah9tWTkmk
IMO it's better than a lot of the other available top songs. But listening to it, I was a little shocked by how much it sounds exactly like AI-generated music. I get that AI is trying to sound like popular music, and that popular music is formulaic. But this is way too close. Identical.
So this makes me think a few things. One, could AI music sound completely non-formulaic if it were trained on better music? I had assumed that AI, lacking the intention of a human, would of course produce something more formulaic than what it was trained on. But seeing that AI has hit its target 1:1, maybe 0% of AI music's blandness comes from the AI itself. Maybe it is 100% the training data.
My second thought is that I wonder if these artists are misstepping. I'd think you would want to make an effort to not sound like AI right now. I assume his song isn't AI generated. If you're thinking long term, it seems smart to not make a product that is indistinguishable from the cheapest way to make your product.
I might be wrong, but I think that might be pretty easy to do. As far as I can tell, typical music AI never uses a microtone or even a bend, and bends are common in music. Maybe we'll have to listen to some AI music to check, but it would make sense for AI to avoid them. There are two kinds of output a neural network can generate: a continuous value or a discrete one. Another important thing to remember is that all of these "creative" AIs inject randomness into how they work.
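To make that concrete, here's a minimal sketch of the two output types and where the randomness enters. This is an assumption about the general shape of these systems, not any specific product's code, and the function names and numbers are just illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete head: the network assigns a score (logit) to each of a fixed
# set of options, and we sample from the resulting distribution instead
# of always taking the top score -- that's where the randomness comes in.
def sample_discrete(logits, temperature=1.0):
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Continuous head: the network outputs one real number (say, a pitch),
# and the randomness shows up as noise around that value.
def sample_continuous(mean, noise_std=0.3):
    return mean + rng.normal(0.0, noise_std)

note_logits = [2.1, 0.3, 1.7, -0.5]  # scores for four candidate notes
print(sample_discrete(note_logits))   # an index into the four options
print(sample_continuous(60.0))        # a pitch near MIDI 60, rarely exactly on it
```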
When an AI produces a discrete value, you usually have a fixed number of options and the AI scores how much it wants each one. The top option is then picked, or sometimes one is picked randomly from the top few. The other kind of output is a continuous value. You could feed that into what tone an instrument should play, but because of the intentional randomness this would always sound out of key. So it would make sense to snap any continuous output value to a specific note. Basically, things get discretized no matter what, or sound slightly out of tune, or both, with the developer trading off how much of one problem to accept versus the other. You could build it with a few more note levels, which I think they do, but to get things to rest on an actual note you can't go overboard. In fact, I suspect many of these AIs have tone levels that aren't even aligned with actual notes, because they always sound slightly out of tune.
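Here's a hedged sketch of that snapping tradeoff. `snap_to_grid`, the step sizes, and the offset are hypothetical, just to show how a finer grid or a misaligned grid trades one problem against the other:

```python
# Snap a continuous pitch (in semitones, using MIDI note numbers) to the
# nearest level of a fixed grid. `step` sets how fine the grid is, and
# `offset` shifts the whole grid, modeling tone levels that don't line
# up with actual notes.
def snap_to_grid(pitch, step=1.0, offset=0.0):
    return offset + step * round((pitch - offset) / step)

noisy_pitch = 60.37  # continuous output after the injected randomness

print(snap_to_grid(noisy_pitch))              # 60.0 -> lands on a real note
print(snap_to_grid(noisy_pitch, step=0.5))    # 60.5 -> finer grid, a quarter-tone
print(snap_to_grid(noisy_pitch, offset=0.2))  # 60.2 -> misaligned grid, always slightly out of tune
```

A coarse, aligned grid stays in tune but kills microtones; a finer grid preserves more of the continuous output but parks notes between the keys.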
So how do you not sound like AI? At least current AI. It's pretty easy. Mostly just play normal music. Play some notes in tune, and include at least a few microtones. AI won't be good at doing both of those things in the same software for a while. Add some bends (which you probably are already doing). I also think some fret and string interactions might be hard for it, like slides and hammer-ons, because a basic AI is just going to try to snap to a note. That's different from real music, where you can hear the mechanics of how a note change happened (at least if the artist wants you to hear that). So yeah, pretty normal music.
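To illustrate why a bend is a good test, here's a small sketch, assuming the snap-to-note behavior from above (all the numbers are made up), of a continuous bend versus what note-snapping turns it into:

```python
import numpy as np

# A guitar-style bend: the pitch glides continuously from C (MIDI 60)
# up a whole step to D (62) over about 200 ms.
t = np.linspace(0.0, 0.2, 9)         # nine time samples across the bend
bend = 60.0 + 2.0 * (t / t[-1])      # smooth glide from 60.0 to 62.0

# What a note-snapping model would emit: the glide collapses into a
# staircase, and the mechanics of *how* the note changed are gone.
snapped = np.round(bend)

print(bend.round(2))  # [60.   60.25 60.5  60.75 61.   61.25 61.5  61.75 62.  ]
print(snapped)        # [60. 60. 60. 61. 61. 61. 62. 62. 62.]
```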
But I guess none of that music is cool anymore. It's cool to play music where you could be replaced by AI from a year ago.