
With Advancements in Technology, Is AI Really a Threat to Human Composers?

The music world is changing. AI is here, and it’s powerful. With the rapid evolution of AI music generators like Udio, Suno, and Stable Audio that enable users to compose music based on text prompts or input parameters, the question on everyone’s mind is: will AI ultimately replace human composers?

History reminds us to embrace change: new inventions often spark initial resistance but ultimately unlock creative potential. For instance, the advent of the phonograph in 1877 allowed audio recording and playback for the first time, paving the way for the modern recording industry. While some feared it would diminish live performances and make musicians obsolete, it revolutionized music distribution via physical media.

Similarly, the introduction of MIDI (Musical Instrument Digital Interface) in 1983 faced skepticism over its technical limitations and concerns that separating musical data from audio could diminish human expression. However, MIDI enabled communication between different digital instruments and computers, allowing precise control, sequencing, and new creative possibilities through editing and manipulation of musical data.

More recently, internet streaming disrupted traditional business models, making music more accessible globally. Peer-to-peer file-sharing services like Napster initially caused concerns about piracy among artists and record labels. However, legitimate streaming platforms opened up new avenues for music distribution and discovery, providing a legal and convenient way to access a vast library of songs.

As AI enters the scene, artists are understandably on the fence. On one hand, AI brings efficiency and innovation. It can handle repetitive tasks, generate fresh ideas, and collaborate with human artists to create novel experiences. However, it also raises concerns about the authenticity and soul of musical expression, as well as the economic impact on the industry.

To thrive with AI, we will need to be intentional and have a transparent dialogue to establish ethical standards. Collaboration between artists, tech companies, industry organizations, and policymakers will be essential in shaping a balanced approach that fosters innovation while protecting intellectual property and artistic integrity.

Navigating issues of copyright and ownership in the age of AI will be especially crucial. Many large tech companies are suspected of scraping and ingesting copyrighted music data without proper licensing or compensation to train their generative AI models. This raises issues of data privacy, lack of attribution, and the undermining of creators’ livelihoods. Efforts like the ELVIS Act in Tennessee and the EU AI Act aim to address these concerns by prohibiting unauthorized commercial use of individuals’ voices and likenesses, and imposing stricter regulations on AI systems. Recent cases like the “Ghostwriter” track “Heart on My Sleeve,” which used AI deepfakes of Drake and The Weeknd, highlight the need for robust safeguards and regulations to protect artists’ rights. However, the rapid pace of AI advancement often outstrips the ability of legislation to keep up, leaving a growing backlog of unresolved legal and ethical challenges.

The question of ownership and rights becomes even more complex when considering AI-generated works. In cases of human-AI collaboration, there is debate over whether the human artist retains full ownership or if the AI company has a stake. The situation becomes murkier when an AI generates music based on training data from copyrighted works, potentially infringing on the original creators’ rights.

Initiatives like Fairly Trained and the Human Artistry Campaign are vital steps towards establishing standards that prioritize the rights and livelihoods of human creators. Fairly Trained advocates for fair compensation and data rights for the individuals whose creative works are used to train AI models. It aims to establish ethical guidelines and standards to ensure that artists and creators are properly acknowledged and paid when their content is utilized for AI training. The Human Artistry Campaign, on the other hand, champions the intrinsic value of human creativity and artistry. It raises awareness about the potential risks of AI-generated content undermining human artistic expression and seeks to protect the livelihoods of human creators in various fields.

To move forward ethically and responsibly, the music industry must work towards unifying standards and best practices for AI use. While full industry agreement may be challenging in the short term, the growing risks of public backlash and legal consequences are likely to drive more companies to adopt ethical AI practices proactively.

AI is already transforming music creation and promotion. Personalized recommendations, virtual collaborations, and immersive experiences are just the beginning of AI’s potential benefit to musicians. AI music video generators like Neural Frames can produce engaging visuals for social media by creating animations synced to music tracks. Mixing and mastering processes are being automated by AI-driven tools like Landr, RoEx, and Masterchannel. AI stem splitters, including Lalal.ai, Audioshake, and BandLab, enable producers to isolate individual track components for precise manipulation. For instance, AI stem splitters were notably used to isolate John Lennon’s vocals from background noise and a piano track on the original demo tape for the last Beatles track, “Now and Then.”

The music industry at large has started experimenting with integrating AI at various points in the creative process. For example, YouTuber Taryn Southern used Amper Music’s AI to co-write and co-produce her pop album, “I AM AI.” As someone with an interest in music and coding but no formal training, she used AI to generate multiple stems based on text prompts, which she edited and stitched together to create each track. AI could also lower barriers in classical music. Music notation has been the dominant tool in classical composition for centuries, often preventing those who compose with aural tools and alternate notation methods from accessing opportunities in classical music spaces. AI could help bridge the gap between those who use notation and those who do not, potentially allowing composers to bypass notation and instead dictate ideas to AI, which could later be transcribed for performers with the help of an arranger.

AI can also generate musical ideas for composers to refine and further develop. To test the capabilities of an AI program built by the Chinese telecommunications company Huawei, film composer Lucas Cantor took on the challenge of completing Schubert’s unfinished Symphony No. 8. After feeding the software a substantial portion of Schubert’s catalog, the AI suggested new melodies for the missing movements, which Cantor curated, orchestrated, and harmonized. AI’s ability to suggest melodies could potentially benefit composers working in the commercial music industry; when juggling simultaneous projects and facing tight deadlines, AI could provide an initial spark of inspiration that could be nuanced and developed by human composers.

Other artists are showing audiences the difference between human and AI-generated compositions in real time. For instance, at the “Digi Muse – 2024 Music+Technology Festival,” a human composer and an algorithm engineer using AI each created a new piano piece based on a given theme, and both were performed for the audience. Additionally, the RTVE Symphony Orchestra in Spain performed two different versions of a piece composed by AI, one unedited and one rearranged to make the work more coherent. In both instances, the AI produced a composition that required significant editing from humans, demonstrating that – for now – human composers have nothing to fear when it comes to AI.

By leveraging AI tools and thinking about AI as a collaborator, contemporary and experimental composers can push the boundaries of traditional music composition, exploring new creative territories while maintaining the human touch and emotional depth that defines artistic expression. Composer Tod Machover has extensively collaborated with AI in his works, such as his “City Symphonies,” which use AI to organize crowdsourced city sounds, blending human creativity with AI-generated content. Percussionist Lisa Pegher has also been experimenting with AI through her project A.I.R.E (AI Rhythm Evolution). Over the course of the evening-length work, 10 new compositions written by ICEBERG New Music Collective gradually intertwine with AI, culminating in a fully AI-generated soundscape.

AI is also making it easier for listeners to discover new music. Streaming services use AI recommendation engines to suggest tracks that align with each user’s unique taste. Furthermore, AI is improving the accuracy and efficiency of royalty payments. It helps clean up metadata, match data, and ensure that artists receive the compensation they deserve. Innovative platforms like Song Sleuth’s UGSeeker even employ AI to recover missed royalties from digital service providers.

The path forward may be complex and uncertain, but it is also filled with incredible potential, including the possibility of new revenue streams for artists. AI-generated voices could become a valuable service, allowing vocalists to lend their voices to creators through platforms like Voice Swap and Kits.AI. This enables vocalists to be present in multiple projects simultaneously. Legitimate AI music generators should also start compensating artists for their copyrighted material. Companies like Soundful are already partnering with producers like 3LAU, who gets paid every time his production style is used to generate a track on the platform.

As AI continues to advance, it is critical that the music community remains vigilant and proactive in shaping its development and application. By engaging in open conversations, establishing clear guidelines, and advocating for the interests of human creators, we can work towards a future in which AI enhances and complements the art of music rather than diminishing it.

This article is part of ACF’s digital media expansion to empower artists, made possible by funding from the John S. and James L. Knight Foundation. Learn more at kf.org and follow @knightfdn on social media.


I CARE IF YOU LISTEN is an editorially-independent program of the American Composers Forum, funded with generous donor and institutional support. Opinions expressed are solely those of the author and may not represent the views of ICIYL or ACF. 

A gift to ACF helps support the work of ICIYL. For more on ACF, visit the “At ACF” section or composersforum.org.