Artificial Intelligence (AI) programs have come a long way, but can these algorithms replace musicians?
If you want a bespoke soundtrack for your advertisement, TV show, film, or even just your daily walk, you no longer have to turn to musicians. At least, that’s what the people behind Amper would have you believe.
This video, with ironically poor audio quality, shows Amper Music CEO Drew Silverstein introducing the Amper program at New York Media Festival last year. About four minutes into the clip, Silverstein hits play on a music snippet called ‘Epic Driving’. The audio quality of this particular video may be low, but the quality of the music certainly isn’t. ‘Epic Driving’ does sound like music to accompany a particularly epic car journey. And it was created, composed, performed and produced by artificial intelligence. Nearly a year later, AP, Mental Floss, Adobe, and the USC School of Cinematic Arts all use Amper’s AI music software.
It’s not just startups that are jumping on the AI music train. Last year, Google launched Project Magenta, an open source research project that aims to see if a computer can use deep learning to figure out how to compose music on its own. On a very basic level, this is like a human trying to master the piano by listening to every Elton John song on repeat.
The Magenta AI ‘brain’ was fed thousands and thousands of songs, the theory being that it would eventually learn which notes “should” follow each other in a sequence. This sounds impressive, but it also highlights one of the key problems with AI music. The objection goes that because a computer network cannot experience human life, it will be impossible for its compositions to communicate any real depth of feeling. While this argument has some validity, those who make it might be holding music created by a computer to too high a standard.
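To make the idea of “learning which notes follow each other” concrete, here is a deliberately simplified sketch. Magenta itself uses deep neural networks rather than anything this basic, and the corpus and function names below are invented for illustration, but the same principle applies: count which notes tend to follow which across many songs, then predict accordingly.

```python
from collections import Counter, defaultdict


def train_transitions(melodies):
    """Count, for each note, which notes follow it across a corpus of melodies."""
    transitions = defaultdict(Counter)
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions[current][following] += 1
    return transitions


def most_likely_next(transitions, note):
    """Return the note that most often followed `note` in the training data."""
    followers = transitions.get(note)
    return followers.most_common(1)[0][0] if followers else None


# Toy corpus: melodies represented as lists of note names.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "C", "E"],
    ["G", "E", "C", "E", "G"],
]

model = train_transitions(corpus)
print(most_likely_next(model, "E"))  # "G" — the most frequent follower of E here
```

A real system learns far longer-range structure than these one-step counts, which is exactly why deep learning is used, but the sketch shows the core statistical intuition in a few lines.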
Can people ever really make music?
In the 2004 Will Smith film I, Robot, Smith’s character, detective Del Spooner, asks his robot co-star Sonny: “Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?” Sonny’s earnest response is this: “Can you?”
Del Spooner is stumped by this answer, and Sonny makes a good point. Some humans are capable of masterful feats of music composition, but that doesn’t mean all of them are. However devoid of emotion and feeling it may be, AI-generated music could well be more impressive than the attempts of some human non-musicians, so we shouldn’t write it off.
Where should AI music go?
Perhaps there is a time and a place for AI music. While it’s unlikely people would line up to buy tickets for an AI-composed opera, AI music could work well in other pre-programmed media such as video games. Nearly all video games use some form of AI, normally to allow non-playable characters to respond to the player’s actions. It’s not a huge leap to imagine video game music being generated in the same way.
Then, there is the even more exciting possibility of putting AI music in the hands of musical innovators. Just as previous leaps forward in music technology have enabled humans to create new genres of electronic music, so too could this one. The Acid House movement was kick-started when the likes of DJ Pierre played with the sequencer elements of a Roland TB-303. Though the sequencer is not really artificial intelligence, it is still possible that a maverick producer today will find a way to incorporate computer-generated music into his or her productions.
Whatever it is used for, there is no need to write off AI music as any worse than music made solely by humans. It’s just different, that’s all. Perhaps a collaboration between your favourite DJ-producer and an up-and-coming AI artist is not far off.