Artificial intelligence is catalysing changes in intellectual property law

‘In the event that an AI produces material infringing copyright, who would be responsible?’

Artificial intelligence is forcing a new approach to intellectual property law. Politicians in some jurisdictions are moving to smooth the legal challenges, but courts are likely to have the final say.

AI promises remarkable productivity gains and, as a result, some politicians are straining to accommodate the needs of a new generation of tech giants in the hope that they will bring prosperity with them.

But not everyone is happy with the proliferation of AI, and developers are already running into legal challenges, particularly around intellectual property protection. Some governments are changing the rules to ease these challenges, but other jurisdictions seem likely to hold firm, potentially deterring developers who find the legal backdrop too testing.

There are two likely consequences. First, certain aspects of IP law will be reconceived to allow for more widespread use of AI. Second, jurisdictions that are reluctant to make changes to their IP rules might find AI developers unwilling to establish businesses or offer services there.

Generative AI models are trained on vast quantities of data, typically scraped from the internet. Some of this data is protected by copyright, and some copyright holders claim that their permission should have been sought before their material was used for training. Getty Images has filed a case against Stability AI, alleging that the company trained its Stable Diffusion art generator on Getty’s copyrighted images.

The UK is simplifying matters for AI developers. It is relaxing the rules that protect copyrighted material so that developers can use it to train AI systems without permission from the rights holder.

Creating these carve-outs is part of the UK’s strategy to make itself the location of choice for AI innovation and research, but it does not mean that AI developers are entirely safe from legal challenges. The rule change only allows those with lawful access to copyright-protected material to use it to train models.

The European Union, by contrast, is making no such provision. Under the draft AI Act, developers would be required to publish a comprehensive list of all the copyrighted material used to train their models. This could lead to an onslaught of lawsuits from copyright holders who would not otherwise have been aware that their data had been used in training.

The sheer scale of the datasets in question also makes it difficult to verify that any list of copyrighted material a developer publishes is truly exhaustive.

Even the UK’s AI-friendly approach will not eliminate the risk of legal challenges over IP infringement. Datasets used to train AI models may contain copyrighted material that is being distributed illegally. Auditing the contents of these vast datasets is difficult, but some US authors claim that the detailed and accurate summaries of their books that ChatGPT can generate are an indication that their books were used in training. The UK’s carve-out would not permit this kind of infringement because the developers do not have lawful access to the copyrighted material. Although the UK wants to make its rules as AI-friendly as possible, fundamental aspects of the way generative AI is trained mean that it may still conflict with IP law in some respects.

There is also the risk that, after being trained on copyrighted material, legally or otherwise, an AI produces material that infringes copyright. OpenAI, GitHub and Microsoft are the subject of a lawsuit alleging software piracy via the GitHub Copilot coding assistant. It is extremely difficult for an AI developer to prove that they have rendered it impossible for the model to output copyrighted material. Generative AI is dynamic: models are stochastic and are updated over time, so the same prompt can produce different outputs. Auditing a model’s outputs therefore cannot prove that it will never produce material that infringes copyright.

In the event that an AI does produce material that infringes copyright, who would be responsible? The model itself, the user or the developer? In the GitHub Copilot case, the developers are being sued, but the question of output ownership has another dimension. When original work is produced, companies like Microsoft, OpenAI and Stability AI are unlikely to stake a claim to it. Presumably the person who inputs the prompts owns the output, but some AI experts want AI models themselves to be credited as owners of intellectual property. In 2021, an Australian federal court recognised the AI engine DABUS as an inventor, but the ruling was later overturned on appeal. The issue of AI authorship is likely to remain a live debate, and jurisdictions may differ in their approaches.

AI has also been used to generate original material in the style of a particular artist. Original material is not normally vulnerable to claims of copyright infringement, but some lawyers have speculated that artists might argue that such works constitute unauthorised derivative works.

Regulators face a dilemma. On one hand, they can leave IP law unchanged, risk developers being swamped and deterred by lawsuits, and potentially miss out on the growth AI promises to deliver. On the other, they can erode protections for creators and risk more of the creative sector coming under the control of tech firms. Either way, it seems IP protections will be challenged and defended in court as the use of AI spreads.

Lewis McLellan is Editor of the Digital Monetary Institute at OMFIF.
