Remodelling the UK’s ‘Gold-Plated Copyright Regime’ and its Impacts on Creative Industries and AI Training
March 3, 2025
By Aminah Asif
On the 17th of December, 2024, the UK government published an open consultation paper on ‘Copyright and Artificial Intelligence (“AI”)’, which evaluates several proposals for how existing copyright law could be changed.[1] The government recognises the AI sector and the creative industries as essential to the UK’s economic development and intends to develop a copyright and AI framework that “rewards human creativity, incentivises innovation and provides the legal certainty required for long-term growth.”[2] The consultation remained open for public responses to the proposed changes until the 25th of February, 2025.[3]
The Proposed Policy Options
The consultation paper proposes four options: (0) to do nothing and leave copyright laws as they are; (1) to strengthen copyright, requiring licensing in all cases; (2) to introduce a broad data mining exception, allowing data mining on copyrighted works without the need for permission from rights holders; and (3) to introduce a data mining exception that allows rights holders to reserve their rights, with supporting measures on transparency.[4] Currently in the UK, there is a data mining exception for research purposes, contained in Section 29A of the Copyright, Designs and Patents Act 1988 (“CDPA”), provided that users have “lawful access to the work” and present “sufficient acknowledgement.”[5] Consequently, works protected under the CDPA typically cannot be used for commercial purposes without users obtaining formal authorisation from rights holders.[6] Although the consultation is open, the government has highlighted a preference for option (3).[7] This option is essentially a proposal to adopt an EU-style ‘opt-out’ mechanism, expanding the data mining exception to include commercial usage while giving rights holders the option to prevent users, in this case AI developers, from accessing their work.[8] If option (3) is chosen, the current copyright framework would be inverted, as a data mining exception for commercial purposes would apply by default. Correspondingly, users would not need to undergo an authorisation procedure to access works unless the rights holder had opted out.
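The consultation paper does not specify how a rights reservation under option (3) would be expressed or detected in practice. Purely as an illustrative sketch, one long-standing machine-readable mechanism is the robots.txt protocol, which some data-collection crawlers already consult before gathering content; the crawler name ‘ExampleAIBot’ below is hypothetical, and this is not a description of any particular developer’s practice.

```python
# Illustrative sketch only: checking a robots.txt-style, machine-readable
# reservation before collecting a page for training data. The user agent
# "ExampleAIBot" is a hypothetical crawler name.
from urllib import robotparser
from urllib.parse import urlsplit


def may_collect(page_url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt does not reserve rights
    against this crawler for the given URL."""
    parts = urlsplit(page_url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(user_agent, page_url)


if __name__ == "__main__":
    url = "https://example.com/portfolio/painting.jpg"
    print("collection allowed" if may_collect(url) else "rights reserved: skip this work")
```

Any opt-out regime adopted in the UK would likely need a comparably simple, standardised signal so that compliance could be checked and automated at scale.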
Due to this inversion of the current framework, the government’s preference for a rights reservation system has been met with scepticism. Beeban Kidron, an award-winning film director and member of the House of Lords, argued that the consultation is “fixed” and that the government is “undermining creative industries that bring £126 billion to the UK economy [by] giving away for free the property rights of 2.4 million people.”[9] Kidron further insisted that ‘for more than 300 years we have had a gold-plated copyright regime but now the tech companies and the government are walking around saying it is unclear.’[10] By contrast, Peter Kyle, the Secretary of State for Science, Innovation and Technology, stated that the current legal position on copyright, i.e. option (0), is ‘not tenable’ given the ongoing disputes between tech businesses and creative industries.[11] Kyle has emphasised that ‘there will be technical solutions to things like transparency’ to ensure that ‘remarkable people, who create remarkable pieces of art, are respected for it.’[12]
Upon examining how the government discusses these technical solutions in the consultation paper, it appears that respecting artists is treated as distinct from providing artists with adequate information about how their work will be used, and by whom, if an authorisation procedure is removed. Section C.4 of the consultation paper uses the word ‘transparency’ eight times, but it does not clarify what ‘minimum transparency standards’ are.[13] Section C.4 suggests that transparency measures ‘could include requirements for AI firms and others conducting text and data mining to disclose the use of specific works and datasets,’ but the consultation paper’s counter-argument is that this may ‘present practical challenges to AI developers’ who use ‘such a large quantity of works.’[14] This counter-argument lacks substantiation, as other jurisdictions have been able to enforce disclosure requirements despite these factors. For example, the EU’s AI Act recently introduced a requirement to disclose training sources, contained in Article 53(1)(d), without demanding an exhaustive inventory, instead requesting a ‘sufficiently detailed summary about the content.’[15] If option (3) is chosen, it is imperative that, before a rights reservation system is implemented, the UK government clarifies how rights holders will be informed about which datasets will use their work for training purposes. The government could improve transparency by introducing a disclosure requirement for training sources similar to the one recently introduced in the EU.
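The consultation paper does not say what a ‘sufficiently detailed summary’ of training content would contain. As a rough, hypothetical illustration of the kind of machine-readable disclosure such a requirement points towards, the sketch below assembles a minimal training-data summary; the field names, model name and dataset entries are invented for the example rather than drawn from any prescribed format.

```python
# Hypothetical sketch of a minimal, machine-readable training-data summary,
# loosely inspired by the "sufficiently detailed summary" wording of Article
# 53(1)(d) of the EU AI Act. Field names and entries are illustrative only.
import json
from datetime import date

training_data_summary = {
    "model_name": "example-image-model",  # hypothetical model
    "summary_date": date.today().isoformat(),
    "sources": [
        {
            "dataset": "web-scraped image/text pairs",  # broad category, not a full list
            "approximate_size": "2B image-text pairs",
            "collection_method": "crawler honouring machine-readable opt-outs",
            "rights_status": "mixed; opted-out works removed before training",
        },
        {
            "dataset": "licensed stock photography",
            "approximate_size": "50M images",
            "collection_method": "direct licence with rights holders",
            "rights_status": "licensed",
        },
    ],
}

# Published as JSON, a summary like this would let rights holders see at a
# glance whether categories of work like theirs were used for training.
print(json.dumps(training_data_summary, indent=2))
```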
AI Developers and Imperfect Data
The lack of disclosure about what data is being used by several AI developers has created additional problems related to bias and misrepresentation. While some of this data may not be available to the public, it is known that these developers ‘often make use of massive volumes of image data scraped from the internet.’[16] The internet does not reflect an objective understanding of the world; rather, it reflects the interests of its users, which contributes to the production of ‘imperfect or skewed’ data.[17] For example, in 2018, Amazon scrapped its AI recruitment system because of its gender bias.[18] The recruitment system began to penalise women and ‘effectively taught itself that male candidates were preferable,’ as most of its accumulated data had been derived from male candidates’ CVs.[19] Similarly, Google encountered issues with its image-recognition software because of its racial bias.[20] In 2015, Jacky Alciné, a Black man, was disturbed to find that the Google Photos app incorrectly labelled him as a ‘gorilla’.[21] Google’s accumulated image data is evidently not representative of a diverse range of people, but this racial bias could also be the result of a surplus of opinion-based text prompts. Google and Apple, fearing the spread of harmful racist rhetoric, and evidently unable to eliminate unconscious bias from their training datasets, have since disabled the ability to visually search for certain types of animals.[22]
These incidents highlight the need for new AI legislation to begin tackling ‘unfixable’ flaws in machine learning, as unconscious bias continues to spread rapidly across new AI systems.[23] Arguably, option (2) in the UK’s consultation paper – a broad data mining exception without the need for rights holders’ permission – would widen developers’ access to training materials and diversify the types of data they utilise, potentially mitigating this issue. However, lacking an opt-out mechanism, option (2) could also undermine rights holders’ ability to seek remuneration for the use of their work, inhibiting their potential for financial growth. For rights holders in creative industries, this financial growth could instead shift to the AI sector, as the value of the AI art market is estimated to reach almost $1 billion by 2028.[24]
Opt-Out Platforms for Artists
Although changes to copyright law in the UK have yet to be made, some global platforms already offer tools that help rights holders give informed consent for their work to be used in AI training. Spawning is an independent third-party organisation that has created the opt-out platform ‘Have I Been Trained?’ to allow individuals, primarily visual artists, to manage the use of their images.[25] The founders of the platform are not concerned with protecting well-known art styles created by deceased artists or with creating ‘copyright hell,’ but rather with protecting ‘living, mid-career artists’ targeted by AI generators, as they believe more artists would be willing to opt in to AI training if a ‘common respect’ were established.[26] Perhaps this is what Kyle was getting at when he stated that the UK government aims to ensure that artists are ‘respected’.[27]
Currently, Spawning has assisted with the removal of around 1.5 billion images from ‘commercial training-data sets.’[28] The platform grants its users access to search over 5.8 billion images in the LAION-5B dataset, the same dataset used to train AI art generators such as Stable Diffusion and Midjourney.[29] If they can identify their work in the dataset, individuals using the platform can add their data to the Do Not Train Registry to ensure that it will not be used in future training datasets.[30] Two of the co-founders, Holly Herndon and Mathew Dryhurst, began experimenting with machine-learning software several years ago and found that, unsurprisingly, all existing media can be used to train AI because ‘as soon as something is machine-legible, it’s part of a training canon.’[31] Spawning exists to address a key question currently plaguing artists’ minds: how will artists know to opt out of a dataset if they are not aware that their work is included in it? This dilemma relates back to the transparency issues that campaigners in the UK, like Kidron, are currently trying to tackle.
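Checks like those offered by ‘Have I Been Trained?’ rest on the fact that LAION-5B is distributed as metadata tables recording, among other things, the source URL of each image. The sketch below is a rough, hypothetical illustration of that idea, assuming a locally downloaded metadata file with a ‘URL’ column; the file name and domain are invented, and this is not Spawning’s actual implementation.

```python
# Rough, hypothetical sketch: searching a locally downloaded slice of
# LAION-style metadata for images hosted on an artist's own domain.
# Assumes a parquet file with a "URL" column; the file name and domain
# below are invented for the example.
import pandas as pd

METADATA_FILE = "laion_metadata_part_00000.parquet"  # hypothetical local file
ARTIST_DOMAIN = "my-portfolio-site.example"          # hypothetical domain

# Load only the URL column to keep memory use manageable on a large slice.
metadata = pd.read_parquet(METADATA_FILE, columns=["URL"])
matches = metadata[metadata["URL"].str.contains(ARTIST_DOMAIN, regex=False, na=False)]

print(f"{len(matches)} image records from {ARTIST_DOMAIN} found in this metadata slice")
print(matches["URL"].head())
```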
Weighing Out the Options
It is clear that the UK government is averse to choosing option (0), as standing still would not keep pace with global technological advancements. At the Munich Security Conference on the 14th of February, 2025, the Secretary of State for Science, Innovation and Technology explicitly stated that an “AI revolution is happening” and that the UK government will “create one of the biggest clusters of AI innovation in the world and deliver a new era of prosperity and wealth creation.”[32] However, introducing a data mining exception to achieve this, as proposed in options (2) and (3), does not favour the needs of rights holders in the UK, including those in the creative industries. Option (2), being the broadest option, would allow free and unrestricted access to works, permitting “commercial use for any purpose.”[33] This means that, if option (2) is introduced, rights holders’ only pathway to controlling remuneration may be through an ‘expensive litigation’ process.[34] Despite allowing for a rights reservation system and being the preferred option, option (3) is only marginally better than option (2). The government must expand on what the “supporting measures on transparency” for option (3) will be, in advance of the option’s commencement, as the consultation paper does not currently provide a viable explanation of how rights holders will be able to opt out of datasets that do not disclose what data they are using.[35] This must be prioritised, as the proposed data mining exceptions in options (2) and (3) would apply automatically, abruptly and profoundly changing how much power rights holders have over the use of their work.
Option (1) in the consultation paper has not been widely discussed in the media, perhaps because rights holders are preoccupied with, and outraged by, the proposed loosening of the current copyright framework in options (2) and (3). Option (1) has several benefits for rights holders, as the requirement for “licensing in all cases” would give them stronger control over the use of their work and protect their ability to seek remuneration.[36] If option (1) were introduced, AI developers would still be able to gain access to works, but the copyright framework would be strengthened to better avoid infringement, meaning that the process of obtaining access may be somewhat protracted. The consultation paper’s counter-argument to this option is that it will “make the UK significantly less competitive,” as other jurisdictions, such as the EU and the US, have fewer restrictions.[37] Arguably, if AI developers find option (1) off-putting and therefore avoid the UK, the resulting slowdown in AI development would reflect softer jurisdictions’ disregard for rights holders’ needs rather than the UK’s inability to keep pace with technological progress.
Conclusion
Whilst the UK government’s final decision about how to change the current copyright framework is still up in the air, more clarity is required to explain how the government will consider the needs of rights holders and increase transparency around the AI training process. This is especially necessary if the preferred option in the consultation paper is introduced, given its immediate inversion of the current framework. There are also several risks associated with the other three options proposed, primarily because they could all negatively affect how much the creative industries contribute to the UK economy. In the meantime, rights holders, particularly living visual artists with distinctive styles who are concerned about the protection and use of their work, can take precautions by using platforms like ‘Have I Been Trained?’ to manage their data independently.
Suggested Readings and Videos:
- Melissa Heikkilä, Four Ways to Protect Your Art From AI, MIT Technology Review (2024).
- Serpentine, Creating a Consent Layer for AI Systems with Holly Herndon and Mathew Dryhurst, YouTube (2023).
- Ally Clark and Duncan Calow, Training AI Models: Content, Copyright and the EU and UK TDM Exceptions, DLA Piper (2023).
About the Author:
Aminah Asif is an undergraduate student at The Courtauld Institute of Art in London, where she studies Art History.
References:
- Intellectual Property Office et al., Copyright and Artificial Intelligence, Gov UK (2024), available at https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence. ↑
- Id. ↑
- Id. ↑
- Id. ↑
- Copyright, Designs and Patents Act 1988, Section 29A, Gov UK (2014), available at https://www.legislation.gov.uk/ukpga/1988/48/section/29. ↑
- Id. ↑
- Intellectual Property Office et al., Copyright and Artificial Intelligence, Gov UK (2024), available at https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence. ↑
- Paul Joseph et al., UK Government Proposes Copyright and AI Reform Mirroring EU Approach, Linklaters (2025), available at https://www.linklaters.com/en/insights/blogs/digilinks/2025/january/uk-government-proposes-copyright-and-ai-reform-mirroring-eu-approach. ↑
- Dan Milmo, UK Copyright Law Consultation ‘Fixed’ in Favour of AI Firms, Peer Says, The Guardian (2025), available at https://www.theguardian.com/technology/2025/feb/11/uk-copyright-law-consultation-fixed-favour-ai-firms-peer-says. ↑
- Id. ↑
- Id. ↑
- Id. ↑
- Intellectual Property Office et al., Copyright and Artificial Intelligence, Gov UK (2024), available at https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence. ↑
- Id. ↑
- EU Artificial Intelligence Act, Article 53, EU Artificial Intelligence Act (2024), available at https://artificialintelligenceact.eu/article/53/#:~:text=This%20article%20states%20that%20companies,still%20protecting%20their%20intellectual%20property.. ↑
- Clara Che Wei Peh, Is AI generating an ‘averaged’, one-sided, view of art history?, The Art Newspaper (2023), available at https://www.theartnewspaper.com/2023/06/30/is-ai-generating-an-averaged-one-sided-view-of-art-history#:~:text=When%20the%20process%20tends%20towards,for%20art%20making%20and%20research.. ↑
- Id. ↑
- Amazon Scrapped ‘Sexist AI’ Tool, BBC (2018), available at https://www.bbc.co.uk/news/technology-45809919. ↑
- Id. ↑
- Nico Grant et al., Google’s Photo App Still Can’t Find Gorillas and Neither Can Apple’s, The New York Times (2023), available at https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html#:~:text=The%20app%20performed%20well%20in,these%20primates%20in%20our%20collection.. ↑
- Id. ↑
- Nico Grant et al., Google’s Photo App Still Can’t Find Gorillas and Neither Can Apple’s, The New York Times (2023), available at https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html#:~:text=The%20app%20performed%20well%20in,these%20primates%20in%20our%20collection.. ↑
- Id. ↑
- Virginie Berger, Christie’s AI Generated Art Auction Who Profits and Who Pays the Price, Forbes (2025), available at https://www.forbes.com/sites/virginieberger/2025/02/19/christies-ai-generated-art-auction-who-profits-and-who-pays-the-price/#. ↑
- Have I Been Trained Frequently Asked Questions, Spawning (2025), available at https://spawning.ai/have-i-been-trained. ↑
- Chris Stokel-Walker, This couple is Launching an Organization to Protect Artists in the AI Era, Input (2022), available at https://www.inverse.com/input/culture/mat-dryhurst-holly-herndon-artists-ai-spawning-source-dall-e-midjourney.. ↑
- Dan Milmo, UK Copyright Law Consultation ‘Fixed’ in Favour of AI Firms, Peer Says, The Guardian (2025), available at https://www.theguardian.com/technology/2025/feb/11/uk-copyright-law-consultation-fixed-favour-ai-firms-peer-says. ↑
- Anna Wiener, Holly Herndon’s Infinite Art, The New Yorker (2023), available at https://www.newyorker.com/magazine/2023/11/20/holly-herndons-infinite-art. ↑
- Chris Stokel-Walker, This couple is Launching an Organization to Protect Artists in the AI Era, Input (2022), available at https://www.inverse.com/input/culture/mat-dryhurst-holly-herndon-artists-ai-spawning-source-dall-e-midjourney.. ↑
- Have I Been Trained Frequently Asked Questions, Spawning (2025), available at https://spawning.ai/have-i-been-trained. ↑
- Anna Wiener, Holly Herndon’s Infinite Art, The New Yorker (2023), available at https://www.newyorker.com/magazine/2023/11/20/holly-herndons-infinite-art. ↑
- Department for Science, Innovation and Technology et al., Remarks made by Technology Secretary Peter Kyle at the Munich Security Conference, Gov UK (2025), available at https://www.gov.uk/government/speeches/remarks-made-by-technology-secretary-peter-kyle-at-the-munich-security-conference. ↑
- Intellectual Property Office et al., Copyright and Artificial Intelligence, Gov UK (2024), available at https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence. ↑
- Id. ↑
- Id. ↑
- Intellectual Property Office et al., Copyright and Artificial Intelligence, Gov UK (2024), available at https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence. ↑
- Id. ↑
Disclaimer: This article is for educational purposes only and is not meant to provide legal advice. Readers should not construe or rely on any comment or statement in this article as legal advice. For legal advice, readers should seek a consultation with an attorney.