Generative AI and transparency of databases and their content, from a copyright perspective
May 21, 2024
By Ana Andrijevic*
In May 2024, the Organisation for Economic Co-operation and Development (OECD) updated its Principles on Artificial Intelligence (AI),[1] including the principle of transparency,[2] which has contributed to shaping policy[3] and regulatory debates on AI and generative AI (i.e. deep learning models that can create new content, such as text, computer code, and images, in response to a user’s short, written description – a “prompt”).[4] From a copyright perspective, the principle of transparency has become increasingly relevant in several respects: the transparency of the databases (or datasets) and their content used to train AI models, the transparency in AI models, and the transparency regarding the use of AI tools in the creative process.
This contribution focuses on the transparency of the databases and their content used to train AI models, which AI companies increasingly keep under lock and key, inaccessible for perusal, in order to maintain their competitive edge. Since these databases contain a wide range of protected literary and artistic works (e.g. literary works,[5] photographic works,[6] paintings and drawings,[7] musical works,[8] and more), the interests of AI companies collide with those of authors and rights holders, whose works are used without authorization or compensation. In this contribution, we explore this conflict of interests through the lens of the transparency principle.
Lack of transparency of databases and their content used to train AI models
Over the last few years, AI companies have become more cautious about disclosing the databases used to train their AI models, as illustrated, for instance, by Meta (for the training of Llama)[9] and OpenAI[10] (for the training of GPTs, i.e. Generative Pre-trained Transformers). In particular, OpenAI’s strategy has shifted from openness to limiting the amount of information relating to its training datasets. In the span of a couple of years (2018 to 2020), the US company went from disclosing[11] the use of BooksCorpus[12] (a dataset of 7,000 self-published books retrieved from smashwords.com, which are largely protected under copyright)[13] for the training of GPT-1 (released in June 2018), to indicating only the use of several vaguely labeled datasets to train GPT-3 (released in July 2020), including two internet-based books corpora (Books1 and Books2).[14] Although the content of Books1 and Books2 remains unknown, the plaintiffs in Tremblay et al. v. OpenAI et al. (consolidated on March 12, 2024),[15] one of several complaints filed in the USA against AI companies in 2023,[16] have investigated the issue.[17]
With the launch of GPT-4 on March 14, 2023, OpenAI became increasingly secretive, citing “the competitive landscape and the safety implications of large-scale models like GPT-4”[18] to explain its choice. Nevertheless, there is no doubt that its datasets contain large amounts of copyrighted work: as OpenAI acknowledged in written evidence to the UK House of Lords, “it would be impossible to train today’s leading AI models without using copyrighted materials.”[19] In addition to Meta and OpenAI, other AI companies such as Google[20] and Nvidia[21] have also refrained from disclosing their training datasets and their content over time. Stability AI followed a similar trajectory: it initially disclosed the datasets used to train its AI model Stable Diffusion, a choice that proved costly when it became one of the first AI companies to be taken to court in the USA[22] and in the UK[23] in 2023. By contrast, the research paper on its AI model Stable Video Diffusion (released on November 21, 2023[24]) does not disclose any information about the sources of the training datasets.[25]
Generative AI and transparency from a copyright perspective
From a regulatory perspective, the latest amended EU AI Act,[26] approved by the European Parliament in March 2024,[27] includes art. 53 par. 1 let. d, the aim of which is to promote transparency on the data (including copyrighted data) used by providers[28] for training their General Purpose AI (“GPAI”) models.[29] It reads: “Providers of general-purpose AI models shall: (d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.”[30] This provision, which was introduced into the EU AI Act at a late stage in the drafting of the regulation,[31] has been the subject of much criticism[32] on the grounds that its requirement was ambiguous and overly demanding.[33]
However, the recent inclusion of recital 107 provides further clarification that allays some of the concerns raised by AI model providers.[34] It states that the summary “should be generally comprehensive in its scope instead of technically detailed to facilitate parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used” (emphasis added). Nevertheless, the AI Office’s role does not extend to verifying or undertaking “a work-by-work assessment of the training data in terms of copyright compliance.”[35] On the subject of transparency, while a summary of the databases used by AI companies can be a first source of relevant information, the lack of disclosure of their content (especially where the databases are private) still makes it difficult for rights holders to establish conclusive evidence of copying.
In the USA, the Copyright Office issued a notice of inquiry and request for comments[36] on August 30, 2023, seeking comments on copyright law and policy issues raised by AI, “including those involved in the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works (…).”[37] More specifically, the US Copyright Office inquired whether developers of AI models should be required “to collect, retain, and disclose records regarding the materials used to train their models”[38] to enable copyright owners to determine whether their works have been used.[39] The US Copyright Office received more than 10,000 written comments[40] by the December 2023 deadline; these are currently under review.[41] Unsurprisingly, AI companies such as Google[42] and Meta[43] (and, as previously mentioned, OpenAI) argue in their comments that datasets and their content should not be divulged.
Yet, on April 9, 2024, the Generative AI Copyright Disclosure Act[44] was introduced by California Democratic Congressman Adam Schiff[45] and, if enacted, would require “a notice be submitted to the Register of Copyrights with respect to copyrighted works used in building generative AI systems, and for other purposes.” More specifically, section 2(a)(1) of the Generative AI Copyright Disclosure Act would require “[a] person who creates a training dataset, or alters a training dataset (…) in a significant manner, that is used in building a generative AI system” to submit to the Register of Copyrights a notice that contains: “(A) a sufficiently detailed summary of any copyrighted works used – (i) in the training dataset (…); or (ii) to alter the training dataset (…)” and “(B) the URL for such dataset (in the case of a training dataset that is publicly available on the internet at the time the notice is submitted).” Therefore, unlike the requirement of art. 53 par. 1 let. d of the EU AI Act, which is limited to a summary of the content used to train the GPAI, the proposed Generative AI Copyright Disclosure Act would mandate a notice regarding all copyrighted works used in building or altering the training dataset.
Despite the imprecise nature of this proposal (e.g. databases can be created by more than one person, as in the case of entities such as the nonprofit organizations LAION[46] or Common Crawl[47]), access to the content of a training dataset used to build a generative AI system would undeniably give rights holders of protected literary and artistic works stronger evidence of copyright infringement by the AI company. This point is not trivial, as the plaintiffs in several lawsuits brought against AI companies did not have access to the content of the training datasets and could therefore only infer that their works had been used by AI companies based on the outputs generated by their AI tools. For instance, in Tremblay et al. v. OpenAI et al., the plaintiffs argue that ChatGPT generates very accurate summaries of their copyrighted works.[48] However, without access to OpenAI’s data, it cannot be ruled out that these summaries were generated from other sources (e.g. other summaries written by third parties).
In fact, plaintiffs have rarely been able to provide conclusive evidence of copying in proceedings against AI companies. One exception is Concord Music Group, Inc. et al. v. Anthropic PBC,[49] in which the plaintiffs (the rights holders) were able to present clear examples of the reproduction of their lyrics by Claude, Anthropic’s AI tool.[50] Similarly, The New York Times was able to demonstrate in The New York Times Company v. Microsoft Corporation et al.[51] that the defendants’ AI tools can generate output that “recites Times content verbatim, closely summarizes it, and mimics its expressive style,”[52] as illustrated by several examples produced by the plaintiff.[53] Thus, despite the lack of transparency regarding the databases and their content used by the defendants, the plaintiffs were able to provide concrete evidence of copying by analyzing the outputs generated by these tools. However, these complaints are unlikely to contribute to greater transparency, which raises the question of alternative solutions.
Possible remedies for copyright holders against AI companies
In cases where copyrighted works have already been harvested, the first challenge is to identify which works have been used by AI companies. The proposed solutions diverge between the EU and the USA: On the one hand, the EU AI Act requires GPAI providers to “draw up and make publicly available a sufficiently detailed summary about the content used for training” of the GPAI (art. 53 par. 1 let. d of the EU AI Act), and on the other hand, the US Generative AI Copyright Disclosure Act would require AI companies to disclose both databases and their content (section 2(a)(1)). Thus, while the latter proposal is more advantageous for authors and rights holders (since it enables them to identify the content of databases), the burden and responsibility of requesting the removal of their works or, if necessary, filing a complaint, still rests with them.
For the time being, curative opt-outs (as opposed to the preventive opt-outs permitted by art. 4 par. 3 of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (DSM Directive),[54] referred to in art. 53 par. 1 let. c of the EU AI Act[55]) are left to the rights holders themselves[56] or to the goodwill of AI companies, whose opt-out mechanisms have proved unsatisfactory.[57] What is more, these processes suffer from a major drawback: They only apply to future uses, not to uses that have already taken place. Indeed, the possibility of removing specific content from already-trained models is still under development, although there are interesting avenues for large language models (LLMs).[58] Curative opt-outs are therefore not a sustainable solution, as rights holders have to verify, for each AI company, that their works have not been used.
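To illustrate the preventive side of this mechanism, one common (if imperfect) machine-readable way for rights holders to express a reservation is a robots.txt directive addressed to AI training crawlers. The following Python sketch, offered for illustration only, checks whether a site’s robots.txt permits a given crawler; it assumes OpenAI’s published “GPTBot” crawler token, and “example.com” is a hypothetical domain.

```python
# Minimal sketch: check whether a site's robots.txt reserves its content
# against an AI training crawler. "GPTBot" is OpenAI's published crawler
# token; other companies use other tokens. A robots.txt entry such as
#
#   User-agent: GPTBot
#   Disallow: /
#
# is one machine-readable way to express a reservation of rights.
from urllib.robotparser import RobotFileParser

def crawler_may_fetch(site: str, user_agent: str = "GPTBot", path: str = "/") -> bool:
    """Return True if the site's robots.txt allows `user_agent` to fetch `path`."""
    parser = RobotFileParser()
    parser.set_url(site.rstrip("/") + "/robots.txt")
    parser.read()  # downloads and parses the live robots.txt
    return parser.can_fetch(user_agent, site.rstrip("/") + path)

if __name__ == "__main__":
    # Hypothetical domain, for illustration only.
    print(crawler_may_fetch("https://example.com"))
```

As noted above, such a directive only governs future crawling; it cannot undo the ingestion of works into models that have already been trained.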
An upstream approach is therefore needed to promote the transparency of databases and their content. One development that has been observed is the creation of partnerships between AI companies and providers of literary and artistic works, such as those between Getty Images and Nvidia,[59] Universal Music and BandLab Technologies,[60] Google and Reddit,[61] and OpenAI with Le Monde and Prisa Media.[62] Yet, this type of collaboration mainly involves AI companies and large-scale content suppliers and does not extend to smaller players. Smaller players are, however, taken into account by certification, another way to promote transparency of databases and their content, as practiced by Fairly Trained,[63] whose mission is to certify AI companies that obtain a license for their training data.[64]
Conclusion
From a legal point of view, the transparency of databases and their content involves a balance between, on the one hand, the interest of AI companies in preserving a competitive advantage, favored by the EU AI Act, and, on the other hand, the interests of rights holders, who would benefit from the obligation to disclose databases and their content under the proposed US Generative AI Copyright Disclosure Act. However, solutions aimed at improving database transparency, in both the EU and the USA, remain unsatisfactory, as the onus is still on rights holders to opt out (where possible and with the aforementioned constraints) or lodge a complaint. Yet, solutions are available, including partnerships between AI companies and providers of copyrighted works, and certification.
It should nevertheless be noted that if US courts accept the fair use defense for the unauthorized copying of protected works to train AI companies’ models, the US Generative AI Copyright Disclosure Act will lose its relevance, as will the issue of transparency of databases and their content. This matter will, however, remain relevant within the EU insofar as rights holders expressly reserve the right to make reproductions and extractions of their works for text and data mining (see art. 4 par. 3 of the DSM Directive, referred to in art. 53 par. 1 let. c of the EU AI Act), which thus favors the interests of rights holders over those of AI companies.
About the Author:
Ana Andrijevic is a PhD candidate at the University of Geneva. She is also a visiting researcher at Harvard Law School where she is an affiliated researcher at the Berkman Klein Center for Internet & Society (Harvard University).
Sources:
- OECD, OECD Legal Instruments, OECD (02.05.2024), available at: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449. See also OECD, OECD updates AI Principles to stay abreast of rapid technological developments, OECD (03.05.2024), available at: https://www.oecd.org/newsroom/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.htm ↑
- OECD, OECD Legal Instruments, par. 1.3 on Transparency and explainability, which states in particular that: “AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art: iii. when feasible and useful, to provide plain and easy-to-understand information on the sources of data/input (…).” ↑
- See for instance OECD.AI, OECD Principles, Transparency and explainability (Principle 1.3), available at: https://oecd.ai/en/dashboards/ai-principles/P7 ↑
- World Intellectual Property Organization (WIPO), Generative AI, Navigating Intellectual Property, Geneva (2024), p. 2, available at: https://www.wipo.int/export/sites/www/about-ip/en/frontier_technologies/pdf/generative-ai-factsheet.pdf ↑
- See for instance Tremblay et al. v. OpenAI et al., Case 3:23-cv-03223-AMO, 13.03.2024 and The New York Times Company v. Microsoft Corporation et al., Case 1:23-cv-11195, 27.12.2023. ↑
- Getty Images (US) v. Stability AI Inc., Case 1:23-cv-00135-UNA, 03.02.2023 and Getty Images (UK) et al. v. Stability AI Ltd., [2023] EWHC 3090 (Ch), Case No: IL-2023-000007, 1.12.2023. ↑
- Andersen et al. v. Stability AI Ltd. et al., Case 3:23-cv-00201, 13.01.2023 and Jingna Zhang et al. v. Google LLC et al., Case 3:24-cv-02531, 26.04.2024. ↑
- Ashley Carman and Lucas Shaw, Sony Music Warns Companies to Stop Training AI on Its Artists’ Content, Bloomberg (16.05.2024), available at: https://www.bloomberg.com/news/articles/2024-05-16/sony-music-warns-companies-to-stop-training-ai-on-its-artists-content ↑
- Meta, Introducing Meta Llama 3: The most capable openly available LLM to date, Meta (18.04.2024), available at: https://ai.meta.com/blog/meta-llama-3/. It indicates that: “Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources.” ↑
- Chloe Xiang, OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit, Vice (28.02.2023), available at: https://www.vice.com/en/article/5d3naz/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit ↑
- Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, Improving Language Understanding by Generative Pre-Training, OpenAI (2018), p. 4, available at: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf ↑
- Hugging Face, Datasets: bookcorpus, available at: https://huggingface.co/datasets/bookcorpus ↑
- Tremblay et al. v. OpenAI et al., par. 39. ↑
- Tom B. Brown et al., Language Models are Few-Shot Learners, p. 8, available at: https://arxiv.org/pdf/2005.14165 ↑
- See footnote n°5. ↑
- Edward Lee, Status of all 24 copyright lawsuits v. AI companies, 17.05.2024, available at: https://chatgptiseatingtheworld.com/2024/05/17/status-of-all-24-copyright-lawsuits-v-ai-companies-may-17-2024/ ↑
- Tremblay et al. v. OpenAI et al., par. 40 to 43. ↑
- OpenAI, GPT-4 Technical Report, OpenAI (04.03.2024), p. 2, available at: https://arxiv.org/pdf/2303.08774 ↑
- House of Lords Communications and Digital Select Committee, OpenAI – written evidence (LLM0113), London (05.12.2023), p. 4, available at: https://committees.parliament.uk/writtenevidence/126981/pdf/ ↑
- Jingna Zhang et al. v. Google LLC et al., par. 31 and 32. ↑
- In Abdi Nazemian et al. v. NVIDIA Corporation, Case 3:24-cv-01454, 08.03.2024, par. 22 and 23 and Andre Dubus III et al. v. NVIDIA Corporation, Case 4:24-cv-02655, 02.05.2024, par. 21 and 22, the plaintiffs refer to the training of NeMo Megatron, released in September 2022 and trained on “The Pile” dataset. To take a more recent example, NVIDIA indicates that its AI model PeopleNet was trained on a “proprietary dataset with more than 7.6 million images,” without any further information. For more, see NVIDIA, PeopleNet Model Card, NVIDIA (11.04.2024), available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet ↑
- Andersen et al. v. Stability AI Ltd. et al. and Getty Images (US) v. Stability AI Inc. ↑
- Getty Images (UK) et al. v. Stability AI Ltd. ↑
- Stability AI, Introducing Stable Video Diffusion, Stability AI (21.11.2023), available at: https://stability.ai/news/stable-video-diffusion-open-ai-video-model ↑
- Andreas Blattmann et al., Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets, Stability AI (21.11.2023), p. 2, available at: https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf ↑
- European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), available at: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html#title2 ↑
- European Parliament, Artificial Intelligence Act: MEPs adopt landmark law, Brussels (13.03.2024), available at: https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law. As indicated: “The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.” ↑
- See definition of providers in art. 3 par. 3 of the EU AI Act. ↑
- See definition of GPAI in art. 3 par. 63 of the EU AI Act. ↑
- With regard to the AI Office, see art. 3 par. 47 of the EU AI Act: “‘AI Office’ means the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and AI governance carried out by the European Artificial Intelligence Office established by Commission Decision of 24.1.2024; references in this Regulation to the AI Office shall be construed as references to the Commission.” ↑
- Paul Keller, A first look at the copyright relevant parts in the final AI Act compromise, Kluwer Copyright Blog (11.12.2023), available at: https://copyrightblog.kluweriplaw.com/2023/12/11/a-first-look-at-the-copyright-relevant-parts-in-the-final-ai-act-compromise/ ↑
- Andres Guadamuz, The EU AI Act and Copyright, TechnoLlama (14.03.2024), available at: https://www.technollama.co.uk/the-eu-ai-act-and-copyright ↑
- Keller. ↑
- Id. ↑
- See recital 108 of the EU AI Act. ↑
- Library of Congress, Copyright Office, Artificial Intelligence and Copyright, No. 2023-6, in: Federal Register, Vol. 88, No. 167, Washington, DC (30.08.2023), available at: https://www.govinfo.gov/content/pkg/FR-2023-08-30/pdf/2023-18624.pdf ↑
- Id., p. 59942. ↑
- Id., p. 59947. ↑
- Id. ↑
- US Copyright Office, Artificial Intelligence and Copyright, Washington, DC, available at: https://www.regulations.gov/docket/COLC-2023-0006/comments ↑
- US Copyright Office, Washington, DC (23.02.2024), p. 5, available at: https://copyright.gov/laws/hearings/USCO-Letter-on-AI-and-Copyright-Initiative-Update-Feb-23-2024.pdf?loclr=blogcop ↑
- US Copyright Office, Comment from Google, Washington, DC (01.11.2023), pp. 11 and 12, available at: https://www.regulations.gov/comment/COLC-2023-0006-9003 ↑
- US Copyright Office, Comment from Meta Platforms, Inc., Washington, DC (01.11.2023), pp. 19 and 20, available at: https://www.regulations.gov/comment/COLC-2023-0006-9027 ↑
- Available at: https://schiff.house.gov/imo/media/doc/the_generative_ai_copyright_disclosure_act.pdf ↑
- Rep. Schiff introduces groundbreaking bill to create AI transparency between creators and companies, Washington, DC (09.04.2024), available at: https://schiff.house.gov/news/press-releases/rep-schiff-introduces-groundbreaking-bill-to-create-ai-transparency-between-creators-and-companies ↑
- LAION, About, available at: https://laion.ai/about/ ↑
- Common Crawl, Frequently asked questions, available at: https://commoncrawl.org/faq ↑
- Tremblay et al. v. OpenAI et al., par. 5 and 51. ↑
- Concord Music Group, Inc. et al. v. Anthropic PBC, Case 3:23-cv-01092, 18.10.2023. ↑
- Id., par. 66 to 69. ↑
- The New York Times Company v. Microsoft Corporation et al. ↑
- Id., par. 4. ↑
- See for instance Id., par. 99, 100, and 104 to 107. ↑
- Art. 4 par. 3 of the DSM Directive: “The exception or limitation provided for in paragraph 1 [Exception or limitation for text and data mining] shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.” ↑
- Art. 53 par. 1 let. c of the EU AI Act: “Providers of general-purpose AI models shall: (c) put in place a policy to comply with Union copyright law, and in particular to identify and comply with, including through state of the art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790.” ↑
- See “Have I Been Trained?”, which is mentioned in Andersen et al. v. Stability AI et al., p. 6, footnotes n°1 and 2. ↑
- Kali Hays, OpenAI offers a way for creators to opt out of AI training data. It’s so onerous that one artist called it ‘enraging’, Business Insider (29.09.2023), available at: https://www.businessinsider.com/openai-dalle-opt-out-process-artists-enraging-2023-9 ↑
- Ronen Eldan and Mark Russinovich, Who’s Harry Potter? Approximate Unlearning in LLMs, 04.10.2023, available at: https://arxiv.org/pdf/2310.02238 ↑
- Rick Merritt, Moving Pictures: NVIDIA, Getty Images Collaborate on Generative AI, NVIDIA (21.03.2023), available at: https://blogs.nvidia.com/blog/generative-ai-getty-images/ ↑
- Universal Music Group, Universal Music Group and BandLab Technologies announce first-of-its-kind strategic AI collaboration, Universal Music (18.10.2023), available at: https://www.universalmusic.com/universal-music-group-and-bandlab-technologies-announce-first-of-its-kind-strategic-ai-collaboration/ ↑
- Anna Tong, Echo Wang, Martin Coulter, Exclusive: Reddit in AI content licensing deal with Google, Reuters (21.02.2024), available at: https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/ ↑
- OpenAI, Global news partnerships: Le Monde and Prisa Media, OpenAI (13.03.2024), available at: https://openai.com/index/global-news-partnerships-le-monde-and-prisa-media/ ↑
- Fairly Trained, About, available at: https://www.fairlytrained.org/about ↑
- Fairly Trained, Licensed Model Certification, available at: https://www.fairlytrained.org/certifications ↑
Disclaimer: This article is for educational purposes only and is not meant to provide legal advice. Readers should not construe or rely on any comment or statement in this article as legal advice. For legal advice, readers should seek a consultation with an attorney.