
The Implications of AI for Copyright Law

By Darcie Dudding



Artificial intelligence is transforming the way creative works are produced, posing significant challenges for UK copyright law, which has long assumed that creative works are made by human authors. In the past few years that assumption has been severely strained by the development of generative AI capable of producing images, music, text, and code with minimal human involvement. The resulting tensions are most evident in debates over authorship, the legality of using copyrighted material as training data, and responsibility for AI-enabled infringement.


Authorship is the most fundamental legal pressure point. The Copyright, Designs and Patents Act 1988 (CDPA) provides that copyright arises automatically in “original” works. The Act does not define originality, but UK courts have traditionally required intellectual skill, labour, or judgement, while the more recent approach, shaped by European jurisprudence and still evident in UK doctrine, looks to the author's own free and creative choices. This premise is disrupted where a work is created largely or entirely by an AI system. The Intellectual Property Office (IPO) has brought the issue into recent focus, repeatedly reiterating its view that creativity is a distinctly human attribute. In 2022, the US Copyright Office refused Stephen Thaler's application to register an AI system as the author of an artwork, and the UK courts reached a parallel conclusion in the patent context in Thaler v Comptroller-General [2023], confirming that only a natural person can qualify as an inventor. These decisions are in line with long-established copyright principles, which have always rested on the human expression of ideas.


A second major area of tension is the legality of using copyrighted works as training data. AI models are typically trained on enormous datasets composed of copyrighted images, text, or recordings scraped from online sources without explicit permission from rights holders. Under UK law, copying a work, even as part of a data-processing activity, can infringe copyright unless covered by an exception. A narrow exception for text and data mining (TDM) exists at section 29A CDPA, but it applies only to non-commercial research.


Crucially, AI model development is overwhelmingly commercial. In 2022 the UK Government proposed extending the TDM exception to cover commercial use, with the aim of boosting AI innovation and reducing barriers for developers. The proposal met strong resistance across the UK's creative industries. Organisations representing authors, musicians, visual artists, publishers, and film producers argued that such an extension would damage licensing markets and undermine the economic incentives that copyright is intended to secure. After months of pressure, including detailed position papers and parliamentary scrutiny, the Government formally abandoned the plan at the beginning of 2023, acknowledging the weight of evidence that extending the exception risked serious harm to the creative sector.


The legal tensions surrounding training data are also evident in recent litigation. One of the most significant UK cases in the field is Getty Images v Stability AI [2023], in which Getty alleged that Stability AI infringed copyright by copying millions of its images for training purposes. The High Court accepted that the claim raised substantial issues worth taking to trial, signalling that unauthorised use of copyrighted material to train an AI model may amount to infringement under UK law. Although the case has not reached a final judgment, it stands as powerful evidence that the courts are prepared to scrutinise AI training practices and may impose liability where copying exceeds statutory limits.


A third area of legal difficulty arises in the context of AI-enabled infringement. Because generative systems can produce large volumes of material very quickly, often in response to unpredictable prompts, both developers and users may inadvertently generate content which closely imitates protected works. UK creators have raised such occurrences through professional bodies and submissions to the IPO, reporting examples of AI-generated images containing stylistic imitations, and in some cases even distorted versions of artists' signatures. The music industry has expressed similar alarm over AI-produced recordings which mimic the voices of identifiable performers. These concerns have been noted at the parliamentary level: the House of Lords Communications and Digital Committee, in a 2023 report on large language models and generative AI, warned that existing enforcement mechanisms are poorly adapted to the scale and speed of AI output. The Committee concluded that far more detailed guidance is needed on the responsibilities of developers and platforms to prevent infringing use, and that the Government should consider updating liability rules to reflect the realities of AI production.


However, determining who is responsible when AI produces infringing material remains complex. Current UK principles require a human infringer, and AI-generated content does not fit neatly into existing categories of primary or secondary liability. The debate mirrors earlier discussions in cases such as Cartier v BSkyB [2016], where the courts recognised that intermediaries may sometimes be obliged to take steps to prevent infringement. AI developers, however, are unlike traditional intermediaries: they actively design and train the systems which generate the output. This has prompted questions about whether the burden of implementing safeguards should fall on developers, or whether liability for infringement should largely rest with users. The general lack of clarity increases uncertainty for creators and technology companies alike.


In conclusion, AI is exposing significant gaps in the UK's current copyright framework. Evidence from IPO guidance, government consultations, parliamentary reports, and ongoing litigation points to the same conclusion: a law built around human creativity is struggling to accommodate machine-generated content. Authorship rules lack clarity when AI plays a dominant role in creation; the legality of training data is unresolved and likely to be shaped by the courts in the coming years; and liability rules are under strain as AI enables new forms of infringement. As the technology develops, the UK will need both judicial interpretation and legislative reform to ensure that copyright law remains capable of protecting human creativity whilst still allowing innovation in the AI sector.



Edited by Artyom Timofeev


© 2025 by UCL LAW FOR ALL SOCIETY 
