Composed by Bots: Understanding AI's Interpretation of Music

Music is a universal language, and machines are learning it too. Artificial intelligence tools can now create instrumentation, melodies, and lyrics from scratch. All it takes is describing the style and genre to an app, and a new song will be ready within a minute.
Alternatively, musicians can use these tools to experiment with a tune over and over until they find the right harmony. AI-generated music has reached a point where it’s hard to tell whether a composition was created by a computer, a human, or a collaboration of both.

How Does AI Make Music? Is Its Interpretation Good Enough? 

AI music generation tools use machine learning to train models on large datasets of existing music spanning different eras, styles, and genres. The algorithms analyze compositions, patterns, and structures, and produce new music from what they have learned. With the advent of generative AI, creators and machines interact more easily than ever.
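As a deliberately simplified illustration of the pattern-learning idea, the sketch below "trains" a first-order Markov chain on a melody (learning which note tends to follow which) and then samples a new melody from those transitions. Real tools use far larger neural models; the note sequence here is invented for the example.

```python
import random
from collections import defaultdict

def train_transitions(melody):
    """Record which notes follow each note in the training melody."""
    transitions = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    note = start
    melody = [note]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:          # dead end: restart from the seed note
            note = start
        else:
            note = rng.choice(choices)
        melody.append(note)
    return melody

# Toy "training data": a made-up phrase in C major (hypothetical).
training_melody = ["C4", "D4", "E4", "G4", "E4", "D4", "C4", "E4", "G4", "C5"]
table = train_transitions(training_melody)
new_melody = generate(table, start="C4", length=8)
print(new_melody)
```

The generated phrase reuses only transitions heard in the training data, which is the same basic principle, scaled down enormously, behind models that learn from thousands of songs.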

Music maker apps like Suno AI, Udio, and AIVA enable users to create personalized works by providing instructions, much like using ChatGPT, Midjourney, or other generative AI apps. The results can be pretty good, with sharp rhythms, flow, and accurate timing. Whether listeners will enjoy them is another question, because music is subjective: while professionals and discerning listeners may spot the robotic touch and soulless nature of fully AI-generated tracks, others may find them enjoyable. The one thing everybody seems to agree on is that the technology’s impact on the music industry is growing fast.

Twitter user @nickfloats shared his experience of collaborating with an AI tool to make a song. Nick entered lyrics and structure while guiding the app on how to suggest changes for different parts, including the beginning, breaks, and ending.

Artists Against AI: Concerns of Copyright and Royalty Issues

AI music generation tools stir mixed feelings. They have benefits, like inspiring artists, helping them in their creative process, and democratizing music. A songwriter, for example, can generate instrumentation for their work without a large upfront investment. Or a person without musical expertise can create a track and share it with someone close to them.

On the flip side, there are concerns around ethics and copyright. It’s uncertain which data AI apps use to train their models and whether they have permission from the musicians involved. AI training and copyright have become major problems in the music industry. Now anyone can command an AI to sing a song in the voice of a famous artist. An example was the track “Heart On My Sleeve,” created by an anonymous user known as Ghostwriter in April 2023. Ghostwriter used the vocals of Drake and The Weeknd to sing the lyrics. The song went viral on streaming and social media platforms. According to Billboard, it gained over 600,000 spins on Spotify over the weekend before being removed following a takedown notice from Universal Music Group. The label called the AI track fraud and a violation of copyright law. Its fight, however, has been difficult, as fans soon re-uploaded the track across the internet.

Musicians continue their battle against copyright violations. This April, over 200 artists, including Billie Eilish, Nicki Minaj, Stevie Wonder, Zayn Malik, Pearl Jam, and the estates of Frank Sinatra and Bob Marley, signed an open letter calling for protection against irresponsible AI practices. Released by the Artist Rights Alliance, the letter is addressed to AI developers, tech companies, platforms, and music services, urging them to stop using AI to devalue the rights of human artists. The Artist Rights Alliance believes that AI has the potential to enable the development and growth of music experiences, but that it shouldn’t come at the expense of artists’ rights.
“Some of the biggest and most powerful companies are, without permission, using our work to train AI models,” the letter states. “These efforts are directly aimed at replacing the work of human artists with massive quantities of AI-created ‘sounds’ and ‘images’ that substantially dilute the royalty pools that are paid out to artists.”
Most recently, Sony Music raised its voice against tech companies developing AI music production tools. The publisher sent letters to over 700 firms, including OpenAI, Google, and Microsoft, stating that it forbids anyone from using its artists’ songs to train AI models or develop apps without permission. Sony Music set a deadline for the firms to respond, adding that it will enforce its copyrights under the law.

In the meantime, hearings around the use of AI in music have already begun. The US Senate is currently discussing a bill to protect artists’ rights against AI replicas. British musician FKA Twigs was among those who testified before Congress. She spoke about the danger deepfakes pose to artists’ reputations and how difficult it has become to find trustworthy information. According to FKA Twigs, AI fakes harm artists by altering the way people receive their work and talent. She argues that the main value of artists’ work is that fans can find themselves in its messages; changing the narrative would take that value away. FKA Twigs believes there is a need for a law that puts the power back in the hands of artists. Not entirely against AI, she says artists should control how the technology uses their voices and creations.

It’s worth mentioning that musicians take different approaches toward AI. Canadian pop singer Grimes, for example, publicly announced that fans can use her voice to generate songs with AI, on one condition: if a song achieves success, they need to split the royalties with her.

The Good Side of AI: Working Wonders in the Music Industry 

AI brings new ways to create music. It can help separate vocals from background noise, improve the quality of recordings, and add new elements to compositions.
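A crude sketch of one such cleanup technique is spectral gating: estimate a noise profile from a noise-only clip, then mute frequency bins that fall below that profile. Production tools (and vocal separators) rely on trained neural networks rather than this hand-rolled filter, and the signal below is synthetic, but it shows the principle.

```python
import numpy as np

def spectral_gate(signal, noise_clip, frame=256, factor=1.5):
    """Very crude denoiser: zero out frequency bins whose magnitude
    falls below a threshold estimated from a noise-only clip."""
    # Per-bin noise threshold, estimated from the noise-only segment.
    noise_spec = np.abs(np.fft.rfft(noise_clip[:frame]))
    threshold = factor * noise_spec
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mask = np.abs(spec) >= threshold   # keep only the loud bins
        out[start:start + frame] = np.fft.irfft(spec * mask, n=frame)
    return out

# Synthetic example: a 440 Hz tone buried in white noise.
rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
noisy = tone + 0.3 * rng.standard_normal(sr)
cleaned = spectral_gate(noisy, noise_clip=0.3 * rng.standard_normal(256))
```

The gated output keeps the tone’s strong bins while discarding most of the noise floor; neural approaches learn a far more selective mask from data instead of a fixed threshold.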

AI technology has a big impact on the visual side of the music industry too. From creating video clips to organizing immersive concerts, it makes the unimaginable real. AI-created content seems natural due to its ability to analyze large amounts of data and mimic human behaviors, including movements and expressions.

One of the most anticipated AI-enabled events is the Elvis Presley concert scheduled for November 2024 in the UK, with plans to move later to Las Vegas, Tokyo, and Berlin. During the show, fans will see and hear a life-sized digital representation of the King of Rock and Roll, made possible by AI, augmented reality, and holographic projection.

In January, Layered Reality, an immersive entertainment company, announced it had secured the rights from Authentic Brands Group to stage the concert. According to the company, access to the archive of performance and home photos enables it to create an authentic version of Elvis Presley.

Should AI Music Bots Be Silenced? 

Generally, we enjoy the kind of music that we feel. We admire artists for their talent and the emotional response their performances evoke. Although bots can generate technically perfect sounds, they won’t be authentic without a human touch. Solely AI-generated music is not about experiences, energy exchange, or communication; it’s just a sequence of notes. That said, the use of AI in music generation is not purely positive or negative. When used ethically, AI tools can assist in music production, improve sound quality, add special effects, or serve as inspiration in times of creative block. Bots should be silenced when they flood the music industry with soulless works and copyright violations.

Web3 writer and crypto HODLer with a keen interest in market trends and recent technologies.