“Won’t the Real Slim Shady Please Stand Up”: Deepfake Technology Use in the Music Industry

Sorry, who’s the “Real Slim Shady”, again? House DJ and music producer David Guetta used deepfake technology to reproduce rapper Eminem’s vocals during a recent live performance. Guetta published a video clip of the performance with the caption, “Let me introduce you to Emin-AI-em”. In a separate YouTube video, Guetta admitted to using an artificial intelligence (AI) program that allows users to write a verse in the style of various artists. He then used another AI program to duplicate Eminem’s voice reciting the pre-written lyrics. Guetta will not be releasing the content commercially, and it is unlikely that he received Eminem’s permission to use his vocal likeness during the show.

Deepfake technology refers to a form of artificial intelligence called deep learning that creates highly realistic fabricated images, video, and audio of individuals doing or saying virtually anything. In simple terms, the AI is trained on a large dataset of media featuring the person being deepfaked, then uses what it has learned to create new media of that person. In this case, the AI model likely used Eminem’s existing songs or other audio content to replicate his voice and rhythmic flow. Deepfake technology is now sophisticated enough that distinguishing a person’s real voice from the AI-generated version is quite difficult.
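The train-then-generate pipeline described above can be loosely illustrated with a toy text example. To be clear, this is only an analogy: real voice-cloning models learn acoustic features from hours of audio, whereas this sketch uses a simple word-level Markov chain to show the same two steps of learning patterns from existing material and then producing new material in that style.

```python
import random

# Toy analogy for the deepfake pipeline:
# 1) "train" on a body of an artist's existing material,
# 2) generate new material that mimics its statistical style.
# (Real voice models are far more complex; this is illustrative only.)

def train(corpus: str) -> dict:
    """Learn which word tends to follow which (the 'training' step)."""
    words = corpus.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    """Produce new text in the learned style (the 'generation' step)."""
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    lyrics = "the real slim shady please stand up please stand up"
    model = train(lyrics)
    random.seed(0)
    print(generate(model, "please"))
```

The point of the sketch is the structure, not the output: everything the generator produces is recombined from the training data, which is precisely why the WIPO questions discussed below about copyright in training material arise.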

In 2020, a YouTuber published deepfake audio of hip-hop mogul Jay-Z rapping Shakespeare’s “To be or not to be” soliloquy. Jay-Z’s legal team subsequently filed a copyright complaint against the creator to have the content removed from the platform, arguing that “the content unlawfully used AI to impersonate their client”. The attempt was unsuccessful: current US copyright law protects an artist’s song (the lyrics and musical composition) but does not protect an individual’s voice or vocal likeness. As the court in Midler v. Ford Motor Co. observed, a voice itself is not copyrightable.

The World Intellectual Property Organization (WIPO) published its Draft Issues Paper on Intellectual Property Policy and Artificial Intelligence, in which it considered the following: (1) whether copyright law should apply to deepfake content, since the AI’s training dataset uses copyright-protected materials, and (2) whether there should be remuneration when someone’s likeness is replicated in deepfake material. It concluded that applying current copyright law would not effectively protect deepfake victims: because copyright protects an individual’s original work, the copyright in deepfake content belongs to its creator. Alternatively, well-known artists may be able to rely on the right of publicity to prevent their voice or likeness from being misappropriated for commercial gain.

The accessibility of deepfake technology raises questions about the future legal and ethical implications of its use in the music industry. Will people be able to create new “original” works by deceased artists? What if an anonymous SoundCloud rapper wants a Drake feature to boost their listener engagement but cannot afford his reported minimum $1,000,000 fee? Will they be able to simply use deepfake technology to fabricate the Drake verse? And how will artists keep track of these audio deepfakes in order to enforce their rights? Without adequate legal protections, artists may continue to be at risk of economic and reputational harm from their AI (audio) clones. California recently implemented laws prohibiting deepfake videos intended to influence political elections, as well as unauthorized pornographic deepfake content. There may be some advantage to expanding copyright protections or implementing new legislation to address the use of deepfake technology in this context.
