Center for Art Law

Generative AI and transparency of databases and their content, from a copyright perspective

May 21, 2024

[Image: collage on transparency, AI, and art, with a rose]

By Ana Andrijevic*

In May 2024, the Organisation for Economic Co-operation and Development (OECD) updated its Principles on Artificial Intelligence (AI),[1] including the principle of transparency,[2] which has contributed to shaping policy[3] and regulatory debates on AI and generative AI (i.e. deep learning models that can create new content, such as text, computer code, and images, in response to a user’s short, written description – a “prompt”).[4] From a copyright perspective, the principle of transparency has become increasingly relevant in several respects: the transparency of the databases (or datasets) and their content used to train AI models, the transparency in AI models, and the transparency regarding the use of AI tools in the creative process.

This contribution focuses on the transparency of the databases and their content used to train AI models, which AI companies increasingly keep under lock and key to maintain their competitive edge. Since these databases contain a wide range of protected literary and artistic works (e.g. literary works,[5] photographic works,[6] paintings and drawings,[7] musical works,[8] and more), the interests of AI companies collide with those of authors and rights holders, whose works are used without authorization or compensation. We explore this clash of interests through the lens of the transparency principle.

Lack of transparency of databases and their content used to train AI models

Over the last few years, AI companies have become more cautious about disclosing the databases used to train their AI models, as illustrated, for instance, by Meta (for the training of Llama)[9] and OpenAI[10] (for the training of GPTs, i.e. Generative Pre-trained Transformers). In particular, OpenAI’s strategy has shifted from openness to limiting the amount of information it releases about its training datasets. In the span of a couple of years (2018 to 2020), the US company went from disclosing[11] the use of BooksCorpus[12] (a dataset of 7,000 self-published books retrieved from smashwords.com, which are largely protected under copyright)[13] for the training of GPT-1 (released in June 2018), to indicating only the use of several vaguely labeled datasets to train GPT-3 (released in July 2020), including two internet-based books corpora (Books1 and Books2).[14] Although the content of Books1 and Books2 remains unknown, the plaintiffs in Tremblay et al. v. OpenAI et al. (consolidated on March 12, 2024),[15] one of several complaints filed in the USA against AI companies in 2023,[16] have investigated the issue.[17]

With the launch of GPT-4 on March 14, 2023, OpenAI became increasingly secretive, citing “the competitive landscape and the safety implications of large-scale models like GPT-4”[18] to explain its choice. Nevertheless, as OpenAI CEO Sam Altman recently acknowledged, there is no doubt that its datasets contain large amounts of copyrighted work, since, in his words, “it would be impossible to train today’s leading AI models without using copyrighted materials.”[19] In addition to Meta and OpenAI, other AI companies such as Google[20] and Nvidia[21] have also refrained over time from disclosing their training datasets and their content. Stability AI, by contrast, initially disclosed the datasets used to train its AI model Stable Diffusion, a strategy that backfired when it became one of the first AI companies to be taken to court in the USA[22] and in the UK[23] in 2023. Tellingly, the research paper on its subsequent model Stable Video Diffusion (released on November 21, 2023[24]) does not disclose any information about the sources of the training datasets.[25]

Generative AI and transparency from a copyright perspective

From a regulatory perspective, the latest amended EU AI Act,[26] approved by the European Parliament in March 2024,[27] includes art. 53 par. 1 let. d, the aim of which is to promote transparency on the data (including copyrighted data) used by providers[28] for training their General Purpose AI (“GPAI”) models.[29] It reads: “Providers of general-purpose AI models shall: (d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.”[30] This provision, which was introduced into the EU AI Act at a late stage in the drafting of the regulation,[31] has been the subject of much criticism[32] on the grounds that the requirement posed by art. 53 par. 1 let. d of the EU AI Act was ambiguous and overly demanding.[33]

However, the recent inclusion of recital 107 provides further clarification that allays some of the concerns raised by AI model providers.[34] It states that the summary “should be generally comprehensive in its scope instead of technically detailed to facilitate parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used” (emphasis added). Nevertheless, the AI Office’s role does not entail verifying the data or conducting “a work-by-work assessment of the training data in terms of copyright compliance.”[35] As for transparency, while a summary of the databases used by AI companies can be a first source of relevant information, the lack of disclosure of their content (especially if the databases are private) still makes it difficult for rights holders to establish conclusive evidence of copying.

In the USA, the Copyright Office issued a notice of inquiry and request for comments[36] on August 30, 2023, seeking input on copyright law and policy issues raised by AI, “including those involved in the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works (…).”[37] More specifically, the US Copyright Office asked whether developers of AI models should be required “to collect, retain, and disclose records regarding the materials used to train their models”[38] so that copyright owners can determine whether their works have been used.[39] The US Copyright Office received more than 10,000 written comments[40] by the December 2023 deadline; these are currently under review.[41] Unsurprisingly, AI companies such as Google[42] and Meta[43] (and, as previously mentioned, OpenAI) maintain that datasets and their content should not be divulged.

Yet, on April 9, 2024, the Generative AI Copyright Disclosure Act[44] was introduced by California Democratic Congressman Adam Schiff[45] and, if approved, would require “a notice be submitted to the Register of Copyright with respect to copyrighted works used in building generative AI systems, and for other purposes.” More specifically, section 2(a)(1) of the Generative AI Copyright Disclosure Act would require “[a] person who creates a training dataset, or alters a training dataset (…) in a significant manner, that is used in building a generative AI system” to submit to the Register of Copyrights a notice that contains: “(A) a sufficiently detailed summary of any copyrighted works used – (i) in the training dataset (…); or (ii) to alter the training dataset (…)” and “(B) the URL for such dataset (in the case of a training dataset that is publicly available on the internet at the time the notice is submitted).” Therefore, unlike the requirement of art. 53 par. 1 let. d of the EU AI Act, which is limited to a summary of the content used to train the GPAI, the proposed Generative AI Copyright Disclosure Act would mandate a notice regarding all copyrighted works used in building or altering the training dataset.

Despite the imprecise nature of this proposal (e.g. databases can be created by more than one person, as in the case of entities such as the nonprofit organizations LAION[46] or Common Crawl[47]), access to the content of the training datasets used to build a generative AI system would undeniably give rights holders of protected literary and artistic works sturdier evidence of copyright infringement by the AI company. This point is far from trivial: the plaintiffs in several lawsuits brought against AI companies did not have access to the content of the training datasets and could therefore only infer that their works had been used, based on the outputs generated by the companies’ AI tools. For instance, in Tremblay et al. v. OpenAI et al., the plaintiffs allege that ChatGPT generates very accurate summaries of their copyrighted works.[48] However, without access to OpenAI’s data, it cannot be ruled out that these summaries were generated from other sources (e.g. other summaries written by third parties).

In fact, plaintiffs have rarely been able to provide conclusive evidence of copying in proceedings against AI companies. One of the rare exceptions is Concord Music Group Inc. et al. v. Anthropic PBC,[49] in which the plaintiffs (the rights holders) presented clear examples of the reproduction of their lyrics by Claude, Anthropic’s AI tool.[50] Similarly, The New York Times demonstrated in The New York Times Company v. Microsoft Corporation et al.[51] that the defendants’ AI tools can generate output that “recites Times content verbatim, closely summarizes it, and mimics its expressive style,”[52] as illustrated by several examples produced by the plaintiff.[53] Thus, despite the lack of transparency regarding the databases and their content used by the defendants, the plaintiffs were able to provide concrete evidence of copying by analyzing the outputs produced by these tools. However, these complaints are unlikely to lead to greater transparency, which raises the question of alternative solutions.

Possible remedies for copyright holders against AI companies

In cases where copyrighted works have already been harvested, the first challenge is to identify which works have been used by AI companies. The proposed solutions diverge between the EU and the USA: On the one hand, the EU AI Act requires GPAI providers to “draw up and make publicly available a sufficiently detailed summary about the content used for training” of the GPAI (art. 53 par. 1 let. d of the EU AI Act), and on the other hand, the US Generative AI Copyright Disclosure Act would require AI companies to disclose both databases and their content (section 2(a)(1)). Thus, while the latter proposal is more advantageous for authors and rights holders (since it enables them to identify the content of databases), the burden and responsibility of requesting the removal of their works or, if necessary, filing a complaint, still rests with them.

For the time being, curative opt-outs (as opposed to the preventive opt-outs permitted by art. 4 par. 3 of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (DSM Directive),[54] referred to in art. 53 par. 1 let. c of the EU AI Act[55]) are left to the rights holders themselves[56] or to the goodwill of AI companies, whose opt-out mechanisms have proved unsatisfactory.[57] What is more, these processes suffer from a significant limitation: they apply only to future uses, not to past ones. Indeed, the possibility of deleting content from trained models is still under development, although there are interesting avenues for large language models (LLMs).[58] Curative opt-outs are therefore not a sustainable solution, as rights holders have to verify, for each AI company, whether their works have been used.
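To illustrate the “machine-readable means” by which a preventive reservation of rights can be expressed in practice, many rights holders currently rely on crawler directives in a site’s robots.txt file. The sketch below is a hypothetical example blocking three crawlers whose operators have publicly documented their user-agent tokens (OpenAI’s GPTBot, Google’s Google-Extended control, and Common Crawl’s CCBot); whether such directives are sufficient to satisfy art. 4 par. 3 of the DSM Directive remains an open question.

```
# robots.txt — hypothetical reservation of rights against AI training crawlers
# Blocks OpenAI's documented GPTBot crawler
User-agent: GPTBot
Disallow: /

# Google's token controlling use of content for its AI models
User-agent: Google-Extended
Disallow: /

# Common Crawl's crawler, whose corpus is widely reused for AI training
User-agent: CCBot
Disallow: /
```

Note that such directives bind only well-behaved crawlers and operate prospectively: they cannot remove works that have already been ingested, which is precisely the curative gap described above.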

An upstream approach is therefore needed to promote the transparency of databases and their content. One development that has been observed is the creation of partnerships between AI companies and providers of literary and artistic works, such as those between Getty Images and Nvidia,[59] Universal Music and BandLab Technologies,[60] Google and Reddit,[61] and OpenAI and Le Monde and Prisa Media.[62] Yet this type of collaboration mainly involves AI companies and large-scale content suppliers; it does not extend to smaller players. Smaller players are, however, covered by certification, another way to promote the transparency of databases and their content, as practiced by Fairly Trained,[63] whose mission is to certify AI companies that license their training data.[64]

Conclusion

From a legal point of view, the transparency of databases and their content involves a balance between, on the one hand, the interest of AI companies in preserving a competitive advantage, favored by the EU AI Act, and, on the other hand, the interests of rights holders, who would benefit from the obligation to disclose databases and their content under the US Generative AI Copyright Disclosure Act. However, solutions aimed at improving database transparency, in both the EU and the USA, remain unsatisfactory, as the onus is still on rights holders to opt out (where possible and with the aforementioned constraints) or to lodge a complaint. Yet solutions are available, including partnerships between AI companies and providers of copyrighted works, and certification.

It should nevertheless be noted that if US courts accept the fair use defense for the unauthorized copying of protected works to train AI companies’ models, the US Generative AI Copyright Disclosure Act will lose its relevance, as will the issue of transparency of databases and their content. The matter will remain relevant within the EU, however, insofar as rights holders expressly reserve the right to make reproductions and extractions of their works for text and data mining (see art. 4 par. 3 of the DSM Directive, referred to in art. 53 par. 1 let. c of the EU AI Act), which thus favors the interests of rights holders over those of AI companies.

About the Author:

Ana Andrijevic is a PhD candidate at the University of Geneva. She is also a visiting researcher at Harvard Law School where she is an affiliated researcher at the Berkman Klein Center for Internet & Society (Harvard University).

Sources:

  1. OECD, OECD Legal Instruments, OECD (02.05.2024), available at: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 See also OECD, OECD updates AI Principles to stay abreast of rapid technological developments, OECD (03.05.2024), available at: https://www.oecd.org/newsroom/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.htm ↑
  2. OECD, OECD Legal Instruments, par. 1.3 on Transparency and explainability, which states in particular that: “AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art: iii. when feasible and useful, to provide plain and easy-to-understand information on the sources of data/input (…).” ↑
  3. See for instance OECD.AI, OECD Principles, Transparency and explainability (Principle 1.3), available at: https://oecd.ai/en/dashboards/ai-principles/P7 ↑
  4. World Intellectual Property Organization (WIPO), Generative AI, Navigating Intellectual Property, Geneva (2024), p. 2, available at: https://www.wipo.int/export/sites/www/about-ip/en/frontier_technologies/pdf/generative-ai-factsheet.pdf ↑
  5. See for instance Tremblay et al. v. OpenAI et al., Case 3:23-cv-03223-AMO, 13.03.2024 and The New York Times Company, v. Microsoft Corporation et al., Case 1:23-cv-11195, 27.12.2023. ↑
  6. Getty Images (US) v. Stability AI Inc., Case 1:23-cv-00135-UNA, 03.02.2023 and Getty Images (UK) et al. v. Stability AI Ltd., [2023] EWHC 3090 (Ch), Case No: IL-2023-000007, 1.12.2023. ↑
  7. Andersen et al. v. Stability AI Ltd. et al., Case 3:23-cv-00201, 13.01.2023 and Jingna Zhang et al. v. Google LLC et al., Case 3:24-cv-02531, 26.04.2023. ↑
  8. Ashley Carman and Lucas Shaw, Sony Music Warns Companies to Stop Training AI on Its Artists’ Content, Bloomberg (16.05.2024), available at: https://www.bloomberg.com/news/articles/2024-05-16/sony-music-warns-companies-to-stop-training-ai-on-its-artists-content ↑
  9. Meta, Introducing Meta Llama3: The most capable openly available LLM to date, Meta (18.04.2024), available at: https://ai.meta.com/blog/meta-llama-3/ It indicates that: “Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources.” ↑
  10. Chloe Xiang, OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit, Vice (28.02.2024), available at: https://www.vice.com/en/article/5d3naz/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit ↑
  11. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, Improving Language Understanding by Generative Pre-Training, OpenAI (2018), p. 4, available at: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf ↑
  12. Hugging Face, Datasets: bookcorpus, available at: https://huggingface.co/datasets/bookcorpus ↑
  13. Tremblay et al. v. OpenAI et al., par. 39. ↑
  14. Tom B. Brown et al., Language Models are Few-Shot Learners, p. 8, available at: https://arxiv.org/pdf/2005.14165 ↑
  15. See footnote n°5. ↑
  16. Edward Lee, Status of all 24 copyright lawsuits v. AI companies, 17.05.2024, available at: https://chatgptiseatingtheworld.com/2024/05/17/status-of-all-24-copyright-lawsuits-v-ai-companies-may-17-2024/ ↑
  17. Tremblay et al. v. OpenAI et al., par. 40 to 43. ↑
  18. OpenAI, GPT-4 Technical Report, OpenAI (04.03.2024), p. 2, available at: https://arxiv.org/pdf/2303.08774 ↑
  19. House of Lords Communications and Digital Select Committee, OpenAI – written evidence (LLM0113), London (05.12.2023), p. 4, available at: https://committees.parliament.uk/writtenevidence/126981/pdf/ ↑
  20. Jingna Zhang et al. v. Google LLC et al., par. 31 and 32. ↑
  21. In Abdi Nazemian et al. v. NVIDIA Corporation, Case 3:24-cv-01454, 08.03.2024, par. 22 and 23, and Andre Dubus III et al. v. NVIDIA Corporation, Case 4:24-cv-02655, 02.05.2024, par. 21 and 22, the complaints refer to the training of NeMo Megatron, released in September 2022 and trained on “The Pile” dataset. For a more recent example, NVIDIA indicates that its AI model PeopleNet was trained on a “proprietary dataset with more than 7.6 million images,” without any further information. For more, see NVIDIA, PeopleNet Model Card, NVIDIA (11.04.2024), available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet ↑
  22. Andersen et al. v. Stability AI Ltd. et al. and Getty Images (US) v. Stability AI Inc. ↑
  23. Getty Images (UK) et al. v. Stability AI Ltd. ↑
  24. Stability AI, Introducing Stable Video Diffusion, Stability AI (21.11.2023), available at: https://stability.ai/news/stable-video-diffusion-open-ai-video-model ↑
  25. Andreas Blattmann et al., Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets, Stability AI (21.11.2023), p. 2, available at: https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf ↑
  26. European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Act (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD), available at: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html#title2 ↑
  27. European Parliament, Artificial Intelligence Act: MEPs adopt landmark law, Brussels (13.03.2024), available at: https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law As indicated: “The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.” ↑
  28. See definition of providers in art. 3 par. 3 of the EU AI Act. ↑
  29. See definition of GPAI in art. 3 par. 63 of the EU AI Act. ↑
  30. With regard to the AI Office, see art. 3 par. 47 of the EU AI Act: “‘AI Office’ means the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and AI governance carried out by the European Artificial Intelligence Office established by Commission Decision of 24.1.2024; references in this Regulation to the AI Office shall be construed as references to the Commission.” ↑
  31. Paul Keller, A first look at the copyright relevant parts in the final AI Act compromise, Kluwer Copyright Blog (11.12.2023), available at: https://copyrightblog.kluweriplaw.com/2023/12/11/a-first-look-at-the-copyright-relevant-parts-in-the-final-ai-act-compromise/ ↑
  32. Andres Guadamuz, The EU AI Act and Copyright, TechnoLlama (14.03.2024), available at: https://www.technollama.co.uk/the-eu-ai-act-and-copyright ↑
  33. Keller. ↑
  34. Id. ↑
  35. See recital 108 of the EU AI Act. ↑
  36. Library of Congress, Copyright Office, Artificial Intelligence and Copyright, No. 2023-6, in: Federal Register, Vol. 88, No. 167, Washington, DC (30.08.2023), available at: https://www.govinfo.gov/content/pkg/FR-2023-08-30/pdf/2023-18624.pdf ↑
  37. Id., p. 59942. ↑
  38. Id., p. 59947. ↑
  39. Id. ↑
  40. US Copyright Office, Artificial Intelligence and Copyright, Washington, DC, available at: https://www.regulations.gov/docket/COLC-2023-0006/comments ↑
  41. US Copyright Office, Washington, DC (23.02.2024), p. 5, available at: https://copyright.gov/laws/hearings/USCO-Letter-on-AI-and-Copyright-Initiative-Update-Feb-23-2024.pdf?loclr=blogcop ↑
  42. US Copyright Office, Comment from Google, Washington, DC (01.11.2023), pp. 11 and 12, available at: https://www.regulations.gov/comment/COLC-2023-0006-9003 ↑
  43. US Copyright Office, Comment from Meta Platforms, Inc., Washington, DC (01.11.2023), pp. 19 and 20, available at: https://www.regulations.gov/comment/COLC-2023-0006-9027 ↑
  44. Available at: https://schiff.house.gov/imo/media/doc/the_generative_ai_copyright_disclosure_act.pdf ↑
  45. Rep. Schiff introduces groundbreaking bill to create AI transparency between creators and companies, Washington, DC (09.04.2024), available at: https://schiff.house.gov/news/press-releases/rep-schiff-introduces-groundbreaking-bill-to-create-ai-transparency-between-creators-and-companies ↑
  46. LAION, About, available at: https://laion.ai/about/ ↑
  47. Common Crawl, Frequently asked questions, available at: https://commoncrawl.org/faq ↑
  48. Tremblay et al. v. OpenAI et al., par. 5 and 51. ↑
  49. Concord Music Group, Inc. et al., v. Anthropic PBC, Case 3:23-cv-01092, 18.10.2023. ↑
  50. Id., par. 66 to 69. ↑
  51. The New York Times Company, v. Microsoft Corporation et al. ↑
  52. Id., par. 4. ↑
  53. See for instance Id., par. 99, 100, and 104 to 107. ↑
  54. Art. 4 par. 3 of the DSM Directive: “The exception or limitation provided for in paragraph 1 [Exception or limitation for text and data mining] shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.” ↑
  55. Art. 53 par. 1 let. C of the EU AI Act: “Providers of general-purpose AI models shall: (c) put in place a policy to comply with Union copyright law, and in particular to identify and comply with, including through state of the art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790.” ↑
  56. See “Have I Been Trained?”, which is mentioned in Andersen et al. v. Stability AI et al., p. 6, footnotes n°1 and 2. ↑
  57. Kali Hays, OpenAI offers a way for creators to opt out of AI training data. It’s so onerous that one artist called it ‘enraging’, Business Insider (29.09.2023), available at: https://www.businessinsider.com/openai-dalle-opt-out-process-artists-enraging-2023-9 ↑
  58. Ronen Eldan and Mark Russinovich, Who’s Harry Potter? Approximate Unlearning in LLMs, 04.10.2023, available at: https://arxiv.org/pdf/2310.02238 ↑
  59. Rick Merritt, Moving Pictures: NVIDIA, Getty Images Collaborate on Generative AI, NVIDIA (21.03.2023), available at: https://blogs.nvidia.com/blog/generative-ai-getty-images/ ↑
  60. Universal Music Group, Universal Music Group and BandLab Technologies announce first-of-its-kind strategic AI collaboration, Universal Music (18.10.2023), available at: https://www.universalmusic.com/universal-music-group-and-bandlab-technologies-announce-first-of-its-kind-strategic-ai-collaboration/ ↑
  61. Anna Tong, Echo Wang, Martin Coulter, Exclusive: Reddit in AI content licensing deal with Google, Reuters (21.02.2024), available at: https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/ ↑
  62. OpenAI, Global news partnerships: Le Monde and Prisa Media, OpenAI (13.03.2024), available at: https://openai.com/index/global-news-partnerships-le-monde-and-prisa-media/ ↑
  63. Fairly Trained, About, available at: https://www.fairlytrained.org/about ↑
  64. Fairly Trained, Licensed Model Certification, available at: https://www.fairlytrained.org/certifications ↑

 

Disclaimer: This article is for educational purposes only and is not meant to provide legal advice. Readers should not construe or rely on any comment or statement in this article as legal advice. For legal advice, readers should seek a consultation with an attorney.

Post navigation

Previous Matters of Baldessari: Estate of the Artist Finds Itself on Both Sides of Litigation
Next The Cost of Fakes: The Aesthetic, Legal, and Economic Implications of Forgeries

Related Posts

Case Review: The Mayor Gallery v. Agnes Martin Catalogue Raisonné

November 12, 2019
logo

Thomas Kline for The Wall Street Journal on US v. Wally

September 26, 2010
Fragment: Debtor's Prison, 1840 Paul Gavarni, French, 1804-1866 Lithograph printed in black ink on wove paper Image: 8 1/2 × 7 7/8 inches (21.6 × 20 cm) Sheet: 14 × 10 3/4 inches (35.6 × 27.3 cm) Gift of Mrs. Virginia Booth Vogel

Assets to Auctions: The Role of Art in Bankruptcy Proceedings

March 19, 2024
Center for Art Law
A Gift for You

A Gift for You

this Holiday Season

Celebrate the holidays with 20% off your annual subscription — claim your gift now!

 

Get your Subscription Today!
Guidelines AI and Art Authentication

AI and Art Authentication

Explore the new Guidelines for AI and Art Authentication for the responsible, ethical, and transparent use of artificial intelligence.

Download here
Center for Art Law

Follow us on Instagram for the latest in Art Law!

In 2022, former art dealer Inigo Philbrick was sen In 2022, former art dealer Inigo Philbrick was sentenced to seven years in prison for committing what is considered one of the United States' most significant cases of art fraud. With access to Philbrick's personal correspondence, Orlando Whitfield chronicled his friendship with the disgraced dealer in a 2024 memoir, All that Glitters: A Story of Friendship, Fraud, and Fine Art. 

For more insights into the fascinating story of Inigo Philbrick, and those he defrauded, read our recent book review. 

🔗 Click the link in our bio to read more!

#centerforartlaw #legalresearch #artlaw #artlawyer #lawer #inigophilbrick #bookreview #artfraud
The highly publicized Louvre heist has shocked the The highly publicized Louvre heist has shocked the globe due to its brazen nature. However, beyond its sheer audacity, the heist has exposed systemic security weaknesses throughout the international art world. Since the theft took place on October 19th, the French police have identified the perpetrators, describing them as local Paris residents with records of petty theft. 

In our new article, Sarah Boxer explores parallels between the techniques used by the Louvre heists’ perpetrators and past major art heists, identifying how the theft reveals widespread institutional vulnerability to art crime. 

🔗 Click the link in our bio to read more!

#centerforartlaw #artlaw #legalresearch #artcrime #theft #louvre #france #arttheft #stolenart
In September 2025, 77-year-old Pennsylvania resident Carter Reese made headlines not only for being Taylor Swift's former neighbor, but also for pleading guilty to selling forgeries of Picasso, Basquiat, Warhol, and others. This and other recent high-profile forgery cases are evidence of the art market's ongoing vulnerability to fraudulent activity. Yet new innovations in DNA analysis and artificial intelligence (AI) may help defend against forgery.

To learn more about how the art market's response to fraud and forgery is evolving, read our new article by Shaila Gray. 

🔗 Click the link in our bio to read more!

#centerforartlaw #artlaw #legalresearch #artlawyer #lawyer #AI #forgery #artforgery #artfakes #authenticity
Did you know that Charles Dickens visited America twice, in 1842 and in 1867? In between, he wrote his famous “A Tale of Two Cities,” foreshadowing upheavals and revolutions and suggesting that individual acts of compassion, love, and sacrifice can break cycles of injustice. With competing demands and obligations, finding time to read books in the second quarter of the 21st century may grow increasingly harder. As we live in the best and worst of times again, try to enjoy the season of light and a good book (or a good newsletter).

From all of us at the Center for Art Law, we wish you peace, love, and understanding this holiday season. 

🔗 Read more by clicking the link in our bio!

#centerforartlaw #artlaw #legalresearch #artlawyer #december #newsletter #lawyer
Is it, or isn’t it, Vermeer? Trouble spotting fakes? You are not alone. Donate to the Center for Art Law; we are the real deal.

🔗 Click the link in our bio to donate today!

#centerforartlaw #artlaw #legalresearch #endofyear #givingtuesday #donate #notacrime #framingartlaw
Whether legal systems are ready or not, artificial intelligence is making its way into the courtroom. AI-generated evidence is becoming increasingly common, but many legal professionals are concerned that existing legal frameworks aren't sufficient to account for ethical dilemmas arising from the technology.

To learn more about the ethical arguments surrounding AI-generated evidence, and what measures the US judiciary is taking to respond, read our new article by Rebecca Bennett. 

🔗 Click the link in our bio to read more!

#centerforartlaw #artlaw #legalresearch #artlawyer #lawyer #aiart #courtissues #courts #generativeai #aievidence
Interested in the world of art restitution? Hear from our Lead Researcher of the Nazi-Era Looted Art Database, Amanda Buonaiuto, about the many accomplishments this year and our continuing goals in this space. We would love the chance to do even more amazing work, and your donations can give us this opportunity!

Please check out the database and the many recordings of related online events on our website.

Help us reach our end-of-year fundraising goal of $35K.

🔗 Click the link in our bio to donate ❤️🖤
Make sure to grab your tickets for our discussion on the legal challenges and considerations facing General Counsels at leading museums, auction houses, and galleries on December 17. Tune in to get insight into how legal departments navigate the complex and evolving art world.

The panel, featuring Cindy Caplan, General Counsel, The Jewish Museum; Jason Pollack, Senior Vice President, General Counsel, Americas, Christie’s; and Halie Klein, General Counsel, Pace Gallery, will address a range of pressing issues, from balancing legal risk management with institutional missions to overseeing matters that span employment and real estate law. The conversation will also explore the unique role General Counsels play in shaping institutional policy.

This is a CLE Event. 1 Credit for Professional Practice Pending Approval.

🎟️ Make sure to grab your tickets using the link in our bio! 

#centerforartlaw #artlaw #legalresearch #generalcounsel #museumissues #artauctions #artgallery #artlawyer #CLE
While arts funding is perpetually scarce, cultural heritage institutions particularly struggle during and after armed conflict. In such circumstances, funding from a variety of sources, including NGOs, international organizations, national and regional institutions, and private donors, plays a crucial role in protecting cultural heritage.

Read our new article by Andrew Dearman to learn more about the organizations funding emergency cultural heritage protection in the face of armed conflict, as well as the factors hindering effective responses. 

🔗 Click the link in our bio to read more! 

#centerforartlaw #artlaw #legalresearch #lawyer #artlawyer #culturalheritage #armedconflict #UNESCO
Join the Center for Art Law in welcoming Attorney and Art Business Consultant Richard Lehun as our keynote speaker for our upcoming Artist-Dealer Relationships Clinic.

The Artist-Dealer Relationships Clinic helps artists and gallerists negotiate effective and mutually beneficial contracts. By connecting artists and dealers to attorneys, the Clinic looks to forge meaningful relationships and to provide a platform for artists and dealers to learn about the laws that govern their relationship, as well as have their questions addressed by experts in the field.

After a short lecture, attendees with consultation tickets will be paired with a volunteer attorney for a confidential 20-minute consultation. Limited slots are available for the consultation sessions.
Today we held our last advisory meeting of the year, a hybrid, and a good wrap to a busy season. What do you think we discussed?
We are incredibly grateful to our network of attorneys who generously volunteer for our clinics! We could not do it without them!

Next week, join the Center for Art Law for our Artist-Dealer Relationships Clinic. This clinic is focused on helping artists navigate and understand contracts with galleries and art dealers. After a short lecture, attendees with consultation tickets will be paired with one of the Center's volunteer attorneys for a confidential 20-minute consultation. Limited slots are available for the consultation sessions.
  • About the Center
  • Contact Us
  • Newsletter
  • Upcoming Events
  • Internship
  • Case Law Database
  • Log in
  • Become a Member
  • Donate
DISCLAIMER

Center for Art Law is a New York State non-profit fully qualified under Section 501(c)(3)
of the Internal Revenue Code.

The Center does not provide legal representation. Information available on this website is
for educational purposes only and should not be construed as legal advice.

TERMS OF USE AND PRIVACY POLICY

Your use of the Site (as defined below) constitutes your consent to this Agreement. Please
read our Terms of Use and Privacy Policy carefully.

© 2025 Center for Art Law