With generative AI technologies and products evolving rapidly, their implications for music creation, copyright law, and artistic authenticity have become pivotal discussion points that grow more complicated by the month. First, we'll cover the most prevalent players in the generative AI music space, then look at the legal questions playing out in court.
AI tools impacting music creation
Generative AI’s ability to create music from scratch has been steadily improving over the last several years, with all the major players already involved.
- Technologies like DeepMind’s Lyria and AIVA offer extensive capabilities, ranging from generating compositions based on word prompts to creating music in predefined styles like “modern cinematic”.
- Sony’s FlowMachines and Google’s Magenta AI have both demonstrated their capabilities by creating songs in the styles of The Beatles and Nirvana, respectively.
- IBM’s Watson Beat is another notable mention, which uses cognitive technology to collaborate with musicians, transforming their ideas into complex compositions.
- Meta’s foray into AI-generated music is marked by their release of AudioCraft, which includes MusicGen. This tool generates music from text descriptions and is trained on a substantial library of Meta-owned and licensed music.
- Suno’s recent introduction of their V3 model sets a high bar for quality in generative AI music, allowing users not only to specify a musical style, but also to provide their own lyrics for the song to incorporate. The small team from Cambridge has created a model that resonates with users, with many claiming it to be the best on the market.
- Stable Audio’s 2.0 model allows users to upload their own audio files to “fine-tune” the melody of their output in the direction of the uploaded music, raising a variety of questions about potentially unethical use of copyrighted audio. The product has integrated content identification technology and claims it does not allow copyrighted songs to be used for fine-tuning, but in our tests we’ve found the technology lacking: it can’t always detect copyrighted songs once they’ve been manipulated, even by something as simple as a 10% increase in speed.
- A new entrant from three ex-Googlers, Udio, launched recently with a model that allows extensive customization of music styles, lyrics, and more, with quality that matches (and occasionally surpasses) Suno. Unlike Suno, Udio is allowing free users to claim any potential ownership of their generated songs, indicating that we may be in for a massive influx of realistic-sounding AI music in the marketplace.
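The speed-manipulation loophole described above is easy to illustrate. The sketch below is a toy, assuming nothing about any vendor's actual fingerprinting (real systems are far more robust): it hashes coarsely quantized audio frames as a stand-in for exact-match content ID, then shows that a 10% speed-up leaves no matching frames at all.

```python
import numpy as np

def naive_fingerprint(signal, frame=1024):
    # Hash coarsely quantized fixed-size frames -- a crude stand-in
    # for an exact-match content identification index.
    usable = len(signal) // frame * frame
    frames = signal[:usable].reshape(-1, frame)
    return {hash(np.round(f, 2).tobytes()) for f in frames}

def speed_up(signal, factor=1.10):
    # Resample by linear interpolation: 10% faster playback,
    # which also shifts the pitch up by 10%.
    idx = np.arange(0, len(signal), factor)
    return np.interp(idx, np.arange(len(signal)), signal)

sr = 22050
t = np.arange(sr * 3) / sr
original = np.sin(2 * np.pi * 440 * t)   # a 3-second 440 Hz tone
altered = speed_up(original)

orig_fp = naive_fingerprint(original)
alt_fp = naive_fingerprint(altered)
overlap = len(orig_fp & alt_fp) / len(orig_fp)
print(f"fingerprint overlap after 10% speed-up: {overlap:.0%}")
```

Because every sample shifts, an exact-match index retains nothing; detecting manipulated copies requires perceptual fingerprints that tolerate tempo and pitch changes, which is exactly where the tools we tested fell short.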
AI’s capability to transform basic musical ideas into full compositions suggests an equitable partnership between humans and machines, imagining a more humanistic future where AI makes good musicians better. DeepMind’s AI tools and Amper Music, for example, let creators start with simple inputs, like a hum or a basic chord, and develop them into complete musical pieces. This path strays from the other models’ complete audio generation, enabling musicians to embrace forward-thinking AI tools without giving up the game.
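As a toy illustration of that "simple input, complete piece" workflow (and nothing like the neural models these products actually use), a hummed motif can be extended mechanically, for instance with a first-order Markov chain over the seed's note-to-note transitions:

```python
import random

def extend_melody(seed, length=16, rng=None):
    """Extend a short melodic seed to `length` notes.

    A first-order Markov chain learned from the seed itself --
    purely illustrative, not any product's method.
    """
    rng = rng or random.Random(0)
    # Record which note tends to follow which in the seed.
    transitions = {}
    for a, b in zip(seed, seed[1:]):
        transitions.setdefault(a, []).append(b)
    melody = list(seed)
    while len(melody) < length:
        choices = transitions.get(melody[-1])
        melody.append(rng.choice(choices) if choices else rng.choice(seed))
    return melody

# A hummed motif as MIDI note numbers (C4 E4 G4 E4 C4).
seed = [60, 64, 67, 64, 60]
print(extend_melody(seed))
```

The output stays in the seed's vocabulary, which is why real systems layer far richer models on top: the interesting part is generating material the input only implies.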
While the number of entrants in the space is large (and growing), challenges remain, especially in outputting high-fidelity audio files and maintaining musical continuity in longer AI-generated pieces. However, companies like Suno are beginning to prove that these issues are just a temporary hurdle. It’s not unreasonable to predict that they will be resolved over the next few years (or months).
For the next part of our blog, I’ve enlisted my colleague Kevin Casini to help lay out the current legal landscape of AI. In addition to serving as VP of Business and Legal Affairs at Pex, Kevin is a practicing attorney and intellectual property law professor.
Copyright and AI-generated music
The intersection of AI and copyright law is complex and evolving. Key cases and legislation illustrate the challenges in this domain. The US Copyright Office has clearly stated that for a work to be copyrighted, it must originate from human creativity. This policy has impacted cases like Kristina Kashtanova’s “Zarya of the Dawn,” where the overall work was copyrighted, but the AI-generated images were not.
In the UK, the Copyright, Designs and Patents Act 1988 (CDPA) allows for the protection of works generated by AI under certain circumstances, although there is ongoing debate regarding the definition of “originality” and the requirement of a human author. The case of Hyperion Records v. Sawkins also sheds light on how AI-assisted creations, where human input refines AI-generated ideas, could be protected under copyright law.
Proposed in 2023, the UK’s AI (Regulation) Bill seeks to adapt the copyright framework, including the CDPA, to AI’s practical applications. It aims to establish a code of practice for copyright and AI, facilitating data mining licenses for training, addressing AI firms’ challenges, and safeguarding rights holders. This aligns with the CDPA’s goals of protecting intellectual property and fostering innovation. Final amendments and judicial interpretations will determine its interaction with the CDPA.
- Andersen v. Stability AI, Ltd., et al: Artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a lawsuit against companies Stability AI, Midjourney, and DeviantArt in the U.S. They alleged that their copyrighted works were used without permission to train AI systems, which then created images in their styles. The lawsuit was partially dismissed, with some claims against Midjourney and DeviantArt dropped, but Andersen’s key claim against Stability AI regarding the training of its AI system was allowed to continue. In a ruling filed May 7, the judge found that the defendants may have compressed or mathematically represented the plaintiffs’ copyrighted works within their AI training data and models. This “Training Images theory” could constitute direct copyright infringement. If the claims proceed to trial and the plaintiffs prevail, the AI companies could face liability for copying images without permission.
- Concord v. Anthropic: This case was filed by Concord Music Group, Inc. against Anthropic PBC and alleges that Anthropic infringed publisher-owned copyrights in song lyrics. This infringement allegedly occurred when lyrics were copied as part of the model training process, and when those lyrics were reproduced and distributed in response to prompts using Anthropic’s AI product, Claude 2. The publishers argue that this chatbot regurgitates significant portions (and, in some cases, the entirety) of original copyrighted lyrics when prompted by users. The publishers contend that the copyrighted material “is not free for the taking simply because it can be found on the internet” and that, just as Anthropic does not want its code taken without authorization, music publishers and other copyright owners do not want their works exploited without permission.
- Ghostwriter’s AI-generated track: A notable incident involved an AI-generated music track titled ‘heart on my sleeve’ by an anonymous artist known as “Ghostwriter”, which imitated the voices of Drake and The Weeknd. The track raised significant questions about voice imitation, vocal likeness, and copyright in the realm of AI-generated music. The song was quickly pulled from most platforms after it was discovered to contain an uncleared sample and Universal Music Group condemned the infringing content.
- The EU’s Artificial Intelligence Act (AI Act): In the realm of AI regulation, the EU has passed a new law which aims to address the risks posed by powerful models. The Act establishes a common regulatory framework for AI systems in the EU, emphasizing respect for fundamental rights and adherence to ethical principles. Transparency and disclosure are key components, requiring providers of AI systems to be forthcoming about their use of AI, including training data and policies related to copyright and other aspects.
- Authorship and ownership in AI-generated works: The UK Intellectual Property Office’s consultation and the subsequent government response left unresolved the question of who is the author of a computer-generated work: the user of the AI tool or the tool’s owner. This dilemma extends to AI tools used as creative assistants in music composition, drawing parallels with the Hyperion Records v. Sawkins case.
- Imitative vocal synthesizers and legal challenges: The use of AI to imitate famous voices, as demonstrated in the ‘heart on my sleeve’ track, could lead to more legal challenges arising out of vocal likeness and rights of publicity. The issue of vocal mimicry was highlighted in a case involving Rick Astley, known for his 1987 hit “Never Gonna Give You Up”, who filed a lawsuit against rapper Yung Gravy, claiming that Yung Gravy’s track “Betty (Get Money)” violated Astley’s right of publicity by mimicking the distinctive voice from his recording. Astley’s legal team argued that a license to use the underlying musical composition did not authorize stealing the artist’s voice from the original recording, and alleged that the public couldn’t tell the difference and believed it was actually Astley singing. The case was settled for an undisclosed sum.
Developments in statute
In the evolving landscape of AI and the creative arts, several legislative acts have been proposed to address the intricate dynamics at play. Among them is the NO FAKES Act:
- It would prevent a person from producing or distributing an unauthorized AI-generated replica of an individual to perform in an audiovisual or sound recording without the consent of the individual being replicated.
- The Act addresses concerns about the misuse of AI technology, such as the creation of deepfakes and unauthorized duplications of an individual’s likeness, voice, or other personal characteristics without that individual’s consent.
- It also provides a legal basis for non-celebrity individuals to sue to protect their identities from AI abuse, including deepfakes.
Privacy, of course, is a universal concern. In her testimony, UK artist FKA Twigs made her concerns clear: “AI cannot replicate the depth of my life journey, yet those who control it hold the power to mimic the likeness of my art, to replicate it and falsely claim my identity.” As written, rights created by the Act would be enjoyed by the individual and, in limited recording-artist cases, may be enforceable by both the artist and her label. Twigs and Kyncl will both be pleased to know that voice and content identification technologies are examples of how tools already available can be used to protect the voice and likeness of artists.
A second proposed bill takes a similar approach:
- The Act defines key terms, establishes civil and criminal liability, and authorizes damages and injunctions for violations.
- It addresses concerns about the misuse of AI technology, such as the creation of deepfakes and unauthorized duplications of an individual’s likeness, voice, or other personal characteristics without that individual’s consent.
- Like the NO FAKES Act, it also provides a legal basis for individuals to sue to protect their identities from AI abuse, including deepfakes.
- The Act has received support from artists in the music industry, who have been directly affected by the abuse of artificial intelligence, but seeks to establish a federal solution with baseline privacy protections for all Americans.
Text and Data Mining (TDM) and copyright
The legal implications of TDM for AI training in music also present challenges. The UK’s scrapped proposal for a broad TDM exception would have allowed AI tools to be trained on all music without requiring a license, a move opposed by the music industry. This contrasts with the EU’s approach, which provides rightsholders with the ability to opt their works out of the TDM exception.
These cases and developments, especially in the realm of AI-generated music, highlight the ongoing legal discussions around AI-generated content, akin to notable cases like Thaler v. Perlmutter and the Review Board decision on Théâtre D’opéra Spatial. The legal landscape continues to adapt to new technologies, but many questions remain unresolved, with significant implications for artists, companies, and legal systems worldwide.
As AI tools continue to emerge within the music industry, they bring both exciting opportunities and challenging questions. While the technology offers new avenues for creativity, it also pushes the boundaries of traditional music production and copyright law. As we move into this new era of music generation, it is crucial to balance innovation with respect for artistic integrity and legal norms.
Navigate AI-generated music and copyright with Pex
At Pex, we use various content identification technologies and voice biometric matching to identify music, including AI-generated music and voices. We have helped rightsholders protect their IP and identify uses of their content online for over a decade. Whether you are interested in finding AI-generated music that uses your IP, preventing your platform from distributing AI-generated music that infringes copyright, or need to validate that an AI model was not trained on copyrighted material, Pex has solutions. Learn more about our AI identification technologies on our blog, or chat with a member of our team.