Technology

Do Generative AI Models have a Legal Problem?

Sam Hannah

August 24, 2023

Since the advent of generative AI, we’ve seen an explosion of new tools: some boost productivity, some assist in the arts, and one even generates Spotify playlists for your favourite books and novels.

Some are, of course, more useful than others, but the technology itself is rapidly developing, and, more importantly, this is just the beginning.

One area that has attracted significant debate since AI landed on everyone’s lips is generative AI itself. Taking image generation tools as an example, the issue is that, because these models learn from existing data and images, their output can contravene a number of laws, including data privacy, discrimination, and, most prevalently, intellectual property.

This is down to how the AI platforms are trained: software processes huge archives of images and text, held in data lakes and question snippets, to build models made up of billions of parameters. The platforms analyse this data and identify patterns and relationships, which they then use to create rules and, in turn, to make judgements and predictions when responding to a prompt.
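
To make that training process concrete, here is a heavily simplified, hypothetical sketch in PyTorch of the kind of denoising loop that underpins text-to-image diffusion models such as Stable Diffusion. It is an illustration under stated assumptions, not any vendor’s actual code: the random tensors stand in for the scraped image-caption pairs described above, and names such as TinyDenoiser are invented for the example.

```python
# A heavily simplified sketch of diffusion-style training on image-caption
# pairs. Real systems use billions of examples and far larger networks;
# every name and dimension here is illustrative.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for the denoising network inside a text-to-image model."""
    def __init__(self, image_dim: int = 64, text_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + text_dim, 128),
            nn.ReLU(),
            nn.Linear(128, image_dim),
        )

    def forward(self, noisy_image: torch.Tensor, caption: torch.Tensor) -> torch.Tensor:
        # Condition the denoising step on the text embedding.
        return self.net(torch.cat([noisy_image, caption], dim=-1))

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random tensors standing in for the "data lake" of scraped images and captions.
images = torch.randn(256, 64)
captions = torch.randn(256, 32)

for step in range(100):
    noise = torch.randn_like(images)
    noisy = images + noise                        # corrupt the training images
    predicted = model(noisy, captions)            # predict the noise, guided by text
    loss = nn.functional.mse_loss(predicted, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Crucially, what the trained model retains is patterns distilled into its
# parameters, not copies of the images themselves; this is exactly where the
# legal argument over "collage tools" versus "learning" begins.
```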

Sticking with the image generators and their intellectual property issues for the moment, the argument we have seen in a few of the higher-profile cases mounted in the last 12 months is that large parts of the data lakes these platforms learn from are actually formed of proprietary material made by real artists.

In the United States, the Andersen vs. Stability AI et al. case, filed in early 2023, is the perfect illustration of this. Sarah Andersen, Kelly McKernan, and Karla Ortiz filed suit against Midjourney Inc, DeviantArt Inc, and Stability AI Ltd, the company behind Stable Diffusion, describing these text-to-image platforms as “21st-century collage tools that violate the rights of millions of artists”.

Their argument was that the training of these AI platforms uses their original works in order to learn about different artistic styles; crucially, though, the artists are not remunerated for this.

However, the latest update at the time of writing is that the judge is inclined to dismiss these claims and rule in favour of the defendants, though he did offer the claimants an olive branch, saying he would allow them the opportunity to remedy some of the deficiencies in their case highlighted by the defendants.

The main issue faced by claimants in these types of cases is that the companies behind the generative AI models posit that whatever is produced by AI falls under the fair use umbrella, which allows limited use of copyrighted material without permission from the original creator, and should therefore be protected by its provisions.

The Andersen vs. Stability AI et al. case is probably the most high-profile of those filed so far, but more will surely come as the impact of AI on this sector compounds.

It does, however, raise the question of originality in artwork. Almost everything that comes out of the creative industries nowadays, be it art, music, or film and television, is a collection of bits of previous works.

That’s purely down to the amount of ground covered by previous works and not in any way due to a lack of imagination on the artists’ side, but it is nonetheless the case.

It’s therefore hard to determine in layman’s terms where copyright boundaries lie, and the case could be made that human artists are doing the same thing as AI — i.e. learning and taking inspiration from previous work in order to create their own — just at a slower pace.

This also feeds into a larger point about the potential benefits and risks of AI, and who should be the beneficiaries of it.

At the end of the day, AI is essentially a superhumanly fast researcher. No individual would be sued by an organisation or another person for researching a topic using their material (as long as it’s freely available or any paywall has been paid), so why should AI models have this problem?

Conjure’s Head of Technology, Simon Osim, says: “The way I see this debate unfolding in a fair manner is by artists charging for their work instead of making it publicly available. That way, if AI were to use it in its research, the artist would already have been remunerated accordingly”.

Taking it a step further, he adds: “It’s not about the data it’s learning from, it’s about the model itself and who controls it. If I, as a human being, gain knowledge through online research and then use this knowledge for my work, nobody would complain about that. From the money I earn with the knowledge I gathered online, I would then have to pay taxes. I believe that treating AI as a legal entity in its own right and taxing it for the work it carries out could be a way forward, as it would protect both the original creators and the newer AI models”.

All of this further highlights the need for robust regulation around artificial intelligence, and particularly for a tightening of the fair use umbrella on which Stability AI et al. are currently relying.

AI should absolutely be used as an auxiliary tool to help anyone from creatives to office workers perform to the best of their ability, but that shouldn’t come at the expense of the arts industries or creativity in general, with the gains funnelled up towards an already booming business-owning class.

The only way to minimise that risk is to instil accountability, which can only be achieved through highly stringent regulation or total public ownership.