Could artificial intelligence replace traditional A&R?

As artificial intelligence infiltrates industries across the board, it comes as no surprise that it is also having a monumental impact on the music industry.

As the internet becomes saturated with new music uploads (20,000 tracks per day on Spotify, according to Scott Cohen), it grows harder for artists to reach new fans and for A&R (artists and repertoire) teams to discover new talent. AI can be used by musicians and A&R alike for purposes ranging from music composition to estimating a prospective signing’s potential success to improving the consumer experience.

Several record companies have already started incorporating AI into their scouting and development processes. In 2018, Warner Music Group acquired Sodatone, whose machine learning technology claims to predict which unsigned artists are most likely to succeed by analyzing streaming, social and touring data alongside loyalty and engagement measurements from each artist’s online fanbase.
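
Sodatone’s actual model is proprietary, but the general shape of the approach can be sketched: turn an artist’s streaming, social and touring numbers into features and train a classifier on artists whose outcomes are already known. Everything in the toy example below, from the feature list to the training data, is an illustrative assumption rather than a description of the real system.

```python
# Toy sketch of data-driven talent scouting. Features, numbers and the
# choice of model are all assumptions made for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-artist features:
# [monthly streams, stream growth, social followers, follower growth,
#  average tickets sold per show, repeat-listener ratio]
X_train = np.array([
    [120_000, 0.15,  45_000, 0.10,  300, 0.22],
    [  8_000, 0.02,   2_000, 0.01,   40, 0.05],
    [400_000, 0.30, 150_000, 0.25, 1200, 0.35],
    [ 25_000, 0.05,  10_000, 0.03,  100, 0.08],
])
# 1 = the artist later "broke out" by some label-defined threshold, 0 = did not
y_train = np.array([1, 0, 1, 0])

# Scale the features, then fit a simple classifier
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score an unsigned artist's current numbers
candidate = np.array([[90_000, 0.20, 30_000, 0.18, 250, 0.30]])
print("Estimated breakout probability:", model.predict_proba(candidate)[0, 1])
```

In practice a label would train on thousands of artists and far richer data, but the principle is the same: the model can only learn patterns that already exist in its historical examples.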

Where multiple people with subjective tastes previously had to sift through the music to find artists who fit a label’s specific sound, AI now makes it possible to surface matching artists with objective algorithms, no humans necessary. Amazon’s patent-pending hit prediction software claims to predict the growth and future popularity of ‘obscure’ media content, whether music, books or film, based on early engagement, which is helpful when deciding whether to take on certain clients or release certain tracks.
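
Amazon hasn’t published how its system works, but the core idea of projecting popularity from early engagement can be sketched very roughly: fit a trend to the first days of plays and extrapolate forward. The figures, and the assumption of smooth exponential growth, are entirely hypothetical.

```python
# Rough sketch of early-engagement extrapolation. Real systems would use
# many more signals; this only fits a trend to two weeks of made-up plays.
import numpy as np

days = np.arange(1, 15)  # first two weeks after release
daily_plays = np.array([120, 150, 210, 260, 340, 400, 520, 610,
                        700, 820, 950, 1100, 1260, 1400])

# Fit log(plays) ~ day, i.e. assume roughly exponential early growth
slope, intercept = np.polyfit(days, np.log(daily_plays), 1)

# Project daily plays on day 60 if that trend were to hold (a big "if")
projected_day_60 = np.exp(intercept + slope * 60)
print(f"Projected daily plays on day 60: {projected_day_60:,.0f}")
```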

While it’s useful to have a tool that can vet thousands of songs in a short period of time and give A&R a better grasp of what’s “in it” for them when they sign someone, a few uncertainties come with AI. For instance, when analyzing an artist’s potential success, can the technology anticipate how they might grow personally and how that might affect their career? Traditional A&R teams can’t analyze all of this data without tools, but they can judge an artist through regular human interaction.

Where AI technology might see a lower likelihood of success in the data, humans can see dedication and passion, and a willingness to learn and grow, in an artist. Nobody can predict the social trends or scandals that might boost or sink an artist’s career. If we were to rely solely on AI-powered technology for A&R tasks, would we be robbing artists of their potential?

AI technology, like HITLAB’s Music Digital Nuance Analysis (DNA) tool, can be used to assess and predict a song’s likely success and to create personalized playlists. By using sound attributes to create a unique signature for each song, DNA can compare new, unknown songs with highly rated ones. This type of tool can help artists boost their exposure by pointing listeners towards songs whose signatures match their existing preferences.
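
HITLAB hasn’t disclosed exactly which sound attributes DNA measures, but the general idea of a song signature can be sketched with off-the-shelf tools: reduce each track to a small feature vector and compare vectors by similarity. The example below assumes MFCC features from the librosa library and placeholder file names; it illustrates the concept rather than the DNA tool itself.

```python
# Minimal "song signature" sketch: mean MFCC vector plus cosine similarity.
import numpy as np
import librosa

def signature(path: str) -> np.ndarray:
    """Reduce a track to a small fixed-length feature vector."""
    audio, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two signatures (closer to 1 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare an unknown track against a known hit (file names are placeholders)
new_song = signature("unsigned_artist_demo.wav")
hit_song = signature("charting_single.wav")
print(f"Signature similarity: {similarity(new_song, hit_song):.2f}")
```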

When streaming services apply AI to features like Spotify’s Discover Weekly playlist, users can skip the search through the 20,000 daily uploads and dive straight into a custom-made playlist of music whose song signatures match their specific tastes and interests. It’s an efficient feature that can help turn small artists into big ones and makes for a better listening experience. For artists, it also means the chance to rework their songs to improve the odds of landing a Top 100 hit, which is helpful for labels that want to sell more records. However, does a system that focuses on numbers devalue music as a form of art and meaningful expression for the artist? Should it be considered unethical because it gives artists the ability to “cheat the system” and engineer a sure-fire hit? Would artists face backlash if their fans knew they were chasing high numbers rather than cultural impact?
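
The personalization step can be sketched in the same spirit: average the signatures of tracks a listener already likes into a taste profile, then rank other tracks by how closely their signatures match it. The track names and vectors below are invented purely for illustration; Spotify’s actual recommendation pipeline is far more involved.

```python
# Sketch of signature-based playlist personalization with made-up data.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Signatures of tracks the listener recently liked (hypothetical 4-dim vectors)
liked = np.array([
    [0.8, 0.1, 0.3, 0.5],
    [0.7, 0.2, 0.4, 0.6],
])
taste_profile = liked.mean(axis=0)  # the listener's "average" preference

# Candidate tracks, mostly from lesser-known artists
catalog = {
    "small_artist_track_a": np.array([0.75, 0.15, 0.35, 0.55]),
    "small_artist_track_b": np.array([0.10, 0.90, 0.80, 0.05]),
    "small_artist_track_c": np.array([0.60, 0.25, 0.40, 0.50]),
}

# Rank by similarity to the taste profile; the top of the list becomes the playlist
ranked = sorted(catalog, key=lambda t: cosine(catalog[t], taste_profile), reverse=True)
print("Suggested playlist order:", ranked)
```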

Nevertheless, incorporating AI is also beneficial for music production. For example, musicians can now use AI for services like audio mastering and recording, and AI technology can even compose music on its own. Thanks to its accessibility, AI makes audio mastering more affordable for artists who couldn’t otherwise afford it, since human mastering engineers can be expensive.

Machine learning software makes AI music composition possible. Algorithms trained on large numbers of existing compositions learn which characteristics and patterns evoke a certain genre or make music enjoyable, and entire songs can then be produced and performed by AI. As proof, Blue Jeans and Bloody Tears, an AI-produced song, was posted on YouTube in May of 2019 and has accumulated over 3 million views. Though the lyrics are questionable, and not exactly coherent, the track is admittedly catchy. With continuous development, it might be possible for AI to eventually take over the charts, though for now humans are very much needed in the creation process.
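
The neural networks behind a track like Blue Jeans and Bloody Tears are far more sophisticated, but the basic idea of learning patterns from existing material and then sampling new ones can be shown with a deliberately tiny example: a first-order Markov chain over note names, trained on a few made-up melodies.

```python
# Toy pattern-learning composer: learn which note tends to follow which,
# then generate a new melody by sampling those transitions.
import random
from collections import defaultdict

# A tiny "training corpus" of melodies (note names only, rhythm ignored)
corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C", "C"],
    ["E", "G", "A", "G", "E", "D", "C", "E"],
]

# Count which notes follow which in the corpus
transitions = defaultdict(list)
for melody in corpus:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

# Generate a new 16-note melody by walking the learned transitions
note = "C"
generated = [note]
for _ in range(15):
    note = random.choice(transitions[note])
    generated.append(note)

print(" ".join(generated))
```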

Based on what we know, and don’t yet know, about AI, it makes sense to explore its use in the music industry and in composition. How else can we learn its potential and its shortcomings, and whether it supports your label or your music, without firsthand experience? Though AI-powered A&R technology looks set to become more prominent within the music industry, it seems unlikely to render its human counterparts obsolete.

For more information about how our label services and digital distribution team can help manage and grow your label, get in touch.
