Who owns the song you wrote with AI? An expert explains

By Douglas Broom, Senior Writer, Forum Agenda.

Key Information: 

Artificial intelligence offers even the least musical among us the chance to get in touch with our inner songwriter. But what happens if you create a hit? Who owns the copyright? And what about the artist whose style is being plundered to create the AI hit? It’s a question troubling lawyers and digital media experts.

Which is why we sat down with New York University’s Professor Arun Sundararajan at the World Economic Forum’s 2023 Annual Meeting of the New Champions in Tianjin, China, to seek answers to a question that’s far from theoretical.

“Generative AI systems don’t just generate new content in abstraction, they can be tailored to generate content in the style of a specific person,” Professor Sundararajan explained. “You can create new Beatles songs. You can write a poem like Maya Angelou.”

Which is exactly what happened recently when a series of cover versions of popular songs started to appear on TikTok. It quickly emerged that the artists performing the tracks had never recorded them – they had been entirely produced using AI.

“What’s the point of having intellectual property law if it can’t protect the most important intellectual property … your creative process?” Sundararajan asked.

The Forum’s 2023 Presidio Recommendations on Responsible Generative AI warned it was “essential for policy-makers and legislators to re-examine and update copyright laws to enable appropriate attribution, and ethical and legal reuse of existing content.”

Here’s a summary of our discussion with Arun Sundararajan, Harold Price Professor at New York University Stern School of Business:

Who owns what we create with AI?

Sundararajan: This is one of the central policy and consumer protection issues when thinking about the governance of generative AI. Generative AI systems don’t just generate new content in abstraction, they can be tailored to generate content in the style of a specific person.

You can create new Beatles songs. You can write a poem like Maya Angelou. And, you know, at this point, the ownership of a person over their creative process, over their intelligence, starts to get challenged by technology.

One way to think about this is: ‘What’s the point of having intellectual property law if it can’t protect the most important intellectual property – your individual intelligence, your creative process?’

In the past, we’ve never really asked this question because it was assumed by default that, of course, you own how you create things. And so at this point in time, we have to extend intellectual property law to protect not just individual creations, but an individual’s process of creation itself.

Is it fair to say our identities are at stake?

Sundararajan: I think one’s creative process is part of one’s identity as a human, but it’s also an important part of one’s human capital. You know, you can spend decades becoming really good at doing things in a specific way, and you have an incentive to do that because you own it and because you can enjoy the spoils, the returns from all of that investment.

The trouble is that now a generative AI system can take hundreds of examples of what an individual has created and start to replicate their creative process, in some way stripping away their human identity or taking part of their human capital away from them.

And this is something that we are seeing a lot in the creative industries, in the art industry and the music industry. Cartoonists’ styles are being replicated by art-generating AI. Musicians’ styles are being replicated by music-generating AI.

Are there implications for business?

Sundararajan: It’s not just an issue for creative artists. Someone could be really skilled at business development in a company. That talent, that know-how, those years of experience are rendered into email exchanges with clients: a particular style of talking to a particular potential client, a particular sequence of messages, a particular sequence of phone calls that leads to closing the deal…

And when this business development executive leaves the company, of course they leave their work product behind. But now that work product can be used to create a digital replica of them, one they may not have intended to leave behind and one they can’t take with them the way we have always taken our human capital with us when we move jobs.

What is the difference between mimicry and an AI replicating someone’s style?

Sundararajan: Once something is successful, people start to mimic that style. But if I’m a musician and I want to imitate someone’s style, there will still be differences between my style and theirs that reflect my own talent and my creative ability. With an AI twin, in some sense you are creating an exact replica rather than simply mimicking the style.

I think the second big difference is the scalability of this. Once this is encoded into an AI system, new creations can be generated at a breathtaking pace.

Who is most at risk from this?

Sundararajan: If you’re an incredibly famous artist there’s very little danger, because if someone generates a replica of your music, you can simply say ‘It’s not mine’ and then it won’t be as popular. You’ve got the brand that allows you to get the economic returns from your creations.

On the other hand, if you’re an up-and-coming band that isn’t very well known and you start to do somewhat well on Spotify, and someone encodes your style of music into an AI and starts to generate hundreds of different examples, then your ability to even build that brand can be curtailed before you’ve had the chance to do it. And so you’re unable to get the economic returns associated with either your talent or your human capital.

So where does the law stand at the moment?

Sundararajan: Every AI system we use was created by training it on examples. A lot of discussion has focused on whether it’s OK for AI companies to use other people’s creations for this. The law is unclear at present.

Some people argue that this falls under the fair use doctrine, which exists in the US, the EU and China (though it goes by different names in different places): if you’re transforming what you’re using into something sufficiently different, in a way that won’t affect the commercial value of the original, then you’re not infringing on copyright.

The law is also unclear today on who owns a particular creation generated by AI. So if an AI system writes a story, generates a piece of art or composes a song, in some jurisdictions, if it is completely AI-generated with no human participation at all, nobody owns it; it’s in the public domain.

If there’s enough human assistance, such as providing a storyline that the AI completes for you, or outlining a song that the AI then generates, then you can continue to own the copyright.

On the issue of who owns the creative process, there seems to be little or no law that gives us a definitive answer on how we can reclaim ownership of our creative process.

What’s likely to happen is that the US, the EU or China – one of these three – is going to take a leadership role and define the first set of guidelines and laws around individuals’ ownership of their creative processes and the use of data to train something like ChatGPT.

This article was originally published by the World Economic Forum and has been reprinted with permission.