Generative AI and transparency of databases and their content, from a copyright perspective

May 21, 2024

[Cover image: collage about transparency, AI, and art, with a rose]

By Ana Andrijevic*

In May 2024, the Organisation for Economic Co-operation and Development (OECD) updated its Principles on Artificial Intelligence (AI),[1] including the principle of transparency,[2] which has contributed to shaping policy[3] and regulatory debates on AI and generative AI (i.e. deep learning models that can create new content, such as text, computer code, and images, in response to a user’s short, written description – a “prompt”).[4] From a copyright perspective, the principle of transparency has become increasingly relevant in several respects: the transparency of the databases (or datasets) and their content used to train AI models, the transparency in AI models, and the transparency regarding the use of AI tools in the creative process.

This contribution focuses on the transparency of the databases and their content used to train AI models, which AI companies increasingly keep under lock and key, inaccessible for perusal, in order to maintain their competitive edge. Since these databases contain a wide range of protected literary and artistic works (e.g. literary works,[5] photographic works,[6] paintings and drawings,[7] musical works,[8] and more), the interests of AI companies collide with those of authors and rights holders, whose works are used without authorization or compensation. In this contribution, we explore this clash of interests through the lens of the transparency principle.

Lack of transparency of databases and their content used to train AI models

Over the last few years, AI companies have become more cautious about disclosing the databases used to train their AI models, as illustrated, for instance, by Meta (for the training of Llama)[9] and OpenAI[10] (for the training of GPTs, i.e. Generative Pre-trained Transformers). In particular, OpenAI’s strategy has shifted from openness to limiting the amount of information relating to its training datasets. In the span of a couple of years (2018 to 2020), the US company went from disclosing[11] the use of BooksCorpus[12] (a dataset of 7,000 self-published books retrieved from smashwords.com, largely protected under copyright)[13] for the training of GPT-1 (released in June 2018), to indicating only the use of several vaguely labeled datasets to train GPT-3 (released in July 2020), including two internet-based books corpora (Books1 and Books2).[14] Although the content of Books1 and Books2 remains unknown, the plaintiffs in Tremblay et al. v. OpenAI et al. (consolidated on March 12, 2024),[15] one of several complaints filed in the USA against AI companies in 2023,[16] have investigated the issue.[17]

With the launch of GPT-4 on March 14, 2023, OpenAI became even more secretive, citing “the competitive landscape and the safety implications of large-scale models like GPT-4”[18] to explain its choice. Nevertheless, as OpenAI CEO Sam Altman recently acknowledged, there is no doubt that its datasets contain large amounts of copyrighted work, since, in his words, “it would be impossible to train today’s leading AI models without using copyrighted materials.”[19] In addition to Meta and OpenAI, other AI companies such as Google[20] and Nvidia[21] have also refrained from disclosing their training datasets and their content over time. The same applies to Stability AI, which initially disclosed the datasets used to train its AI model Stable Diffusion, a strategy that proved unsuccessful as it became one of the first AI companies to be taken to court in the USA[22] and the UK[23] in 2023. By contrast, the research paper on its AI model Stable Video Diffusion (released on November 21, 2023)[24] does not disclose any information about the sources of the training datasets.[25]

Generative AI and transparency from a copyright perspective

From a regulatory perspective, the latest amended EU AI Act,[26] approved by the European Parliament in March 2024,[27] includes art. 53 par. 1 let. d, the aim of which is to promote transparency on the data (including copyrighted data) used by providers[28] for training their General Purpose AI (“GPAI”) models.[29] It reads: “Providers of general-purpose AI models shall: (d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.”[30] This provision, introduced into the EU AI Act at a late stage in the drafting of the regulation,[31] has been the subject of much criticism[32] on the grounds that the requirement it poses is ambiguous and overly demanding.[33]

However, the recent inclusion of recital 107 provides further clarification that allays some of the concerns raised by AI model providers,[34] stating that the summary “should be generally comprehensive in its scope instead of technically detailed to facilitate parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used” (emphasis added). Nevertheless, the AI Office’s role does not extend to verifying or conducting “a work-by-work assessment of the training data in terms of copyright compliance.”[35] On the subject of transparency, while a summary of the databases used by AI companies can be a first source of relevant information, the lack of disclosure of their content (especially where the databases are private) still makes it difficult for rights holders to establish conclusive evidence of copying.
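
To give a concrete sense of the requirement, the sketch below illustrates what the elements named in recital 107 (a list of the main data collections and a narrative explanation of other sources) might look like in practice. It is purely hypothetical: the AI Office’s template had not been published at the time of writing, and the provider, model, and dataset selection shown here are invented for illustration.

    # Hypothetical illustration of an art. 53 par. 1 let. d summary, organized
    # around the elements named in recital 107. This is NOT the AI Office's
    # template; the provider, model, and dataset selection are invented.
    training_content_summary = {
        "provider": "ExampleAI",           # hypothetical GPAI provider
        "model": "ExampleGPAI-1",          # hypothetical GPAI model
        # "main data collections or sets", e.g. "large private or public
        # databases or data archives" (recital 107)
        "main_data_collections": [
            "Common Crawl snapshots, 2019-2023 (public web archive)",
            "Wikipedia multilingual dump (public database)",
            "Licensed news archive (private database)",
        ],
        # "narrative explanation about other data sources used" (recital 107)
        "narrative_explanation": (
            "The model was additionally trained on publicly available web "
            "pages collected by the provider's own crawler, honoring "
            "reservations of rights expressed under art. 4(3) DSM Directive."
        ),
    }

Note the contrast with the US proposal discussed below, which would reach the individual copyrighted works used rather than only the collections they sit in.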

In the USA, the Copyright Office issued a Notice of Inquiry and Request for Comments[36] on August 30, 2023, seeking comments on copyright law and policy issues raised by AI, “including those involved in the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works (…).”[37] More specifically, the US Copyright Office inquired whether developers of AI models should be required “to collect, retain, and disclose records regarding the materials used to train their models”[38] to enable copyright owners to determine whether their works have been used.[39] The US Copyright Office received more than 10,000 written comments[40] by the December 2023 deadline; these are currently under review.[41] Unsurprisingly, AI companies such as Google[42] and Meta[43] (and, as previously mentioned, OpenAI) took the position in their comments that datasets and their content should not be divulged.

Yet, on April 9, 2024, the Generative AI Copyright Disclosure Act[44] was introduced by California Democratic Congressman Adam Schiff[45] and, if approved, would require that “a notice be submitted to the Register of Copyrights with respect to copyrighted works used in building generative AI systems, and for other purposes.” More specifically, section 2(a)(1) of the Generative AI Copyright Disclosure Act would require “[a] person who creates a training dataset, or alters a training dataset (…) in a significant manner, that is used in building a generative AI system” to submit to the Register of Copyrights a notice that contains: “(A) a sufficiently detailed summary of any copyrighted works used – (i) in the training dataset (…); or (ii) to alter the training dataset (…)” and “(B) the URL for such dataset (in the case of a training dataset that is publicly available on the internet at the time the notice is submitted).” Therefore, unlike the requirement of art. 53 par. 1 let. d of the EU AI Act, which is limited to a summary of the content used to train the GPAI, the proposed Generative AI Copyright Disclosure Act would mandate a notice covering all copyrighted works used in building or altering the training dataset.

Despite the imprecise nature of this proposal (e.g. databases can be created by more than one person, as in the case of entities such as the nonprofit organizations LAION[46] or Common Crawl[47]), access to the content of a training dataset used to build a generative AI system would undeniably give rights holders of protected literary and artistic works sturdier evidence of copyright infringement by the AI company. This point is not trivial, as the plaintiffs in several lawsuits brought against AI companies did not have access to the content of the training datasets and could therefore only infer that their works had been used, based on the outputs generated by the companies’ AI tools. For instance, in Tremblay et al. v. OpenAI et al., the plaintiffs allege that ChatGPT generates very accurate summaries of their copyrighted works.[48] However, without access to OpenAI’s data, it cannot be ruled out that these summaries were generated from other sources (e.g. other summaries written by third parties).

In fact, plaintiffs have rarely provided conclusive evidence of copying in proceedings against AI companies. One of the rare exceptions is Concord Music Group Inc. et al. v. Anthropic PBC,[49] in which the plaintiffs (the rights holders) were able to present clear examples of the reproduction of their lyrics by Claude, Anthropic’s AI tool.[50] Similarly, The New York Times was able to demonstrate in The New York Times Company v. Microsoft Corporation et al.[51] that the defendants’ AI tools can generate output that “recites Times content verbatim, closely summarizes it, and mimics its expressive style,”[52] as illustrated by several examples produced by the plaintiff.[53] Thus, despite the lack of transparency regarding the databases and their content used by the defendants, the plaintiffs were able to provide concrete evidence of copying by analyzing the outputs produced by these tools. However, these complaints are unlikely to contribute to greater transparency, which raises the question of alternative solutions.
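
As a rough illustration of this kind of output analysis, the sketch below checks a model output for verbatim overlap with a protected text using Python’s standard library. The strings are invented stand-ins for real outputs and works, and a long shared block is at most an indicator of copying, not proof.

    # Minimal sketch: find the longest verbatim block shared by a model output
    # and a protected text. Long shared blocks resemble the evidence put
    # forward in Concord Music and The New York Times; short ones are
    # inconclusive (the text may derive from third-party summaries).
    from difflib import SequenceMatcher

    def longest_verbatim_overlap(model_output: str, protected_text: str) -> str:
        matcher = SequenceMatcher(None, model_output, protected_text, autojunk=False)
        match = matcher.find_longest_match(0, len(model_output), 0, len(protected_text))
        return model_output[match.a : match.a + match.size]

    output = "the quick brown fox jumps over the lazy dog, said the model"  # invented
    source = "The quick brown fox jumps over the lazy dog."                 # invented
    print(repr(longest_verbatim_overlap(output, source)))
    # -> 'he quick brown fox jumps over the lazy dog'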

Possible remedies for copyright holders against AI companies

In cases where copyrighted works have already been harvested, the first challenge is to identify which works have been used by AI companies. The proposed solutions diverge between the EU and the USA: On the one hand, the EU AI Act requires GPAI providers to “draw up and make publicly available a sufficiently detailed summary about the content used for training” of the GPAI (art. 53 par. 1 let. d of the EU AI Act), and on the other hand, the US Generative AI Copyright Disclosure Act would require AI companies to disclose both databases and their content (section 2(a)(1)). Thus, while the latter proposal is more advantageous for authors and rights holders (since it enables them to identify the content of databases), the burden and responsibility of requesting the removal of their works or, if necessary, filing a complaint, still rests with them.

For the time being, curative opt-outs (as opposed to the preventive opt-outs permitted by art. 4 par. 3 of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (DSM Directive),[54] referred to in art. 53 par. 1 let. c of the EU AI Act[55]) are left to the rights holders themselves[56] or to the goodwill of AI companies, whose opt-out mechanisms have proved unsatisfactory.[57] What is more, these processes suffer from an important drawback: they only apply to future uses, not current ones. Indeed, the possibility of deleting content from already-trained models is still under development, although there are interesting avenues for large language models (LLMs).[58] Curative opt-outs are therefore not a sustainable solution, as rights holders have to verify, for each AI company, that their works have not been used.
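
For context, the “machine-readable means” contemplated by art. 4 par. 3 of the DSM Directive for a preventive opt-out can be as simple as crawler directives in a site’s robots.txt. The sketch below writes such a file; it assumes the rights holder controls the site, and the user-agent tokens are those published by OpenAI (GPTBot), Google (Google-Extended), and Common Crawl (CCBot) for their crawlers.

    # Minimal sketch of a preventive, machine-readable reservation of rights:
    # a robots.txt barring known AI-training crawlers from the whole site.
    # Whether a crawler honors these directives depends on its operator.
    from pathlib import Path

    AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]  # published crawler tokens

    def write_reservation(webroot: str) -> None:
        """Write opt-out directives for each crawler to <webroot>/robots.txt."""
        rules = "\n\n".join(f"User-agent: {agent}\nDisallow: /" for agent in AI_CRAWLERS)
        Path(webroot, "robots.txt").write_text(rules + "\n")

    write_reservation("/var/www/html")  # hypothetical web root

Such a reservation governs only future crawling; it does nothing about works already ingested, which is precisely the gap left to the curative opt-outs discussed above.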

An upstream approach is therefore needed to promote the transparency of databases and their content. One development that has been observed is the creation of partnerships between AI companies and providers of literary and artistic works, such as those between Getty Images and Nvidia,[59] Universal Music and BandLab Technologies,[60] Google and Reddit,[61] and OpenAI with Le Monde and Prisa Media.[62] Yet this type of collaboration mainly involves AI companies and large-scale content suppliers and does not extend to smaller players. Smaller players are, however, taken into account by certification, another way to promote the transparency of databases and their content, as practiced by Fairly Trained,[63] whose mission is to certify AI companies that license their training data.[64]

Conclusion

From a legal point of view, the transparency of databases and their content involves a balance between, on the one hand, the interest of AI companies in preserving a competitive advantage, favored by the EU AI Act, and, on the other hand, the interests of rights holders, who could benefit from the obligation to disclose databases and their content under the US Generative AI Copyright Disclosure Act. However, solutions aimed at improving database transparency, in both the EU and the USA, remain unsatisfactory, as the onus is still on rights holders to opt out (where possible and with the aforementioned constraints) or lodge a complaint. Yet solutions are available, including partnerships between AI companies and providers of copyrighted works, and certification.

It should nevertheless be noted that if US courts accept the fair use defense for the unauthorized copying of protected works to train AI companies’ models, the US Generative AI Copyright Disclosure Act will lose its relevance, as will the issue of transparency of databases and their content. The matter will remain relevant within the EU, however, insofar as rights holders expressly reserve the right to make reproductions and extractions of their works for text and data mining (see art. 4 par. 3 of the DSM Directive, referred to in art. 53 par. 1 let. c of the EU AI Act), which thus favors the interests of rights holders over those of AI companies.

About the Author:

Ana Andrijevic is a PhD candidate at the University of Geneva and a visiting researcher at Harvard Law School, where she is affiliated with the Berkman Klein Center for Internet & Society.

Sources:

  1. OECD, OECD Legal Instruments, OECD (02.05.2024), available at: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 See also OECD, OECD updates AI Principles to stay abreast of rapid technological developments, OECD (03.05.2024), available at: https://www.oecd.org/newsroom/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.htm ↑
  2. OECD, OECD Legal Instruments, par. 1.3 on Transparency and explainability, which states in particular that: “AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art: iii. when feasible and useful, to provide plain and easy-to-understand information on the sources of data/input (…).” ↑
  3. See for instance OECD.AI, OECD Principles, Transparency and explainability (Principle 1.3), available at: https://oecd.ai/en/dashboards/ai-principles/P7 ↑
  4. World Intellectual Property Organization (WIPO), Generative AI, Navigating Intellectual Property, Geneva (2024), p. 2, available at: https://www.wipo.int/export/sites/www/about-ip/en/frontier_technologies/pdf/generative-ai-factsheet.pdf ↑
  5. See for instance Tremblay et al. v. OpenAI et al., Case 3:23-cv-03223-AMO, 13.03.2024 and The New York Times Company v. Microsoft Corporation et al., Case 1:23-cv-11195, 27.12.2023. ↑
  6. Getty Images (US) v. Stability AI Inc., Case 1:23-cv-00135-UNA, 03.02.2023 and Getty Images (UK) et al. v. Stability AI Ltd., [2023] EWHC 3090 (Ch), Case No: IL-2023-000007, 1.12.2023. ↑
  7. Andersen et al. v. Stability AI Ltd. et al., Case 3:23-cv-00201, 13.01.2023 and Jingna Zhang et al. v. Google LLC et al., Case 3:24-cv-02531, 26.04.2023. ↑
  8. Ashley Carman and Lucas Shaw, Sony Music Warns Companies to Stop Training AI on Its Artists’ Content, Bloomberg (16.05.2024), available at: https://www.bloomberg.com/news/articles/2024-05-16/sony-music-warns-companies-to-stop-training-ai-on-its-artists-content ↑
  9. Meta, Introducing Meta Llama 3: The most capable openly available LLM to date, Meta (18.04.2024), available at: https://ai.meta.com/blog/meta-llama-3/ It indicates that: “Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources.” ↑
  10. Chloe Xiang, OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit, Vice (28.02.2024), available at: https://www.vice.com/en/article/5d3naz/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit ↑
  11. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, Improving Language Understanding by Generative Pre-Training, OpenAI (2018), p. 4, available at: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf ↑
  12. Hugging Face, Datasets: bookcorpus, available at: https://huggingface.co/datasets/bookcorpus ↑
  13. Tremblay et al. v. OpenAI et al., par. 39. ↑
  14. Tom B. Brown et al., Language Models are Few-Shot Learners, p. 8, available at: https://arxiv.org/pdf/2005.14165 ↑
  15. See footnote n°5. ↑
  16. Edward Lee, Status of all 24 copyright lawsuits v. AI companies, 17.05.2024, available at: https://chatgptiseatingtheworld.com/2024/05/17/status-of-all-24-copyright-lawsuits-v-ai-companies-may-17-2024/ ↑
  17. Tremblay et al. v. OpenAI et al., par. 40 to 43. ↑
  18. OpenAI, GPT-4 Technical Report, OpenAI (04.03.2024), p. 2, available at: https://arxiv.org/pdf/2303.08774 ↑
  19. House of Lords Communications and Digital Select Committee, OpenAI – written evidence (LLM0113), London (05.12.2023), p. 4, available at: https://committees.parliament.uk/writtenevidence/126981/pdf/ ↑
  20. Jingna Zhang et al. v. Google LLC et al., par. 31 and 32. ↑
  21. In Abdi Nazemian et al. v. NVIDIA Corporation, Case 3:24-cv-01454, 08.03.2024, par. 22 and 23 and Andre Dubus III et al. v. NVIDIA Corporation, Case 4:24-cv-02655, 02.05.2024, par. 21 and 22, the complaints refer to the training of NeMo Megatron, released in September 2022 and trained on “The Pile” dataset. To take a more recent example, NVIDIA indicates that its AI model PeopleNet was trained on a “proprietary dataset with more than 7.6 million images,” without any further information. For more, see NVIDIA, PeopleNet Model Card, NVIDIA (11.04.2024), available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet ↑
  22. Andersen et al. v. Stability AI Ltd. et al. and Getty Images (US) v. Stability AI Inc. ↑
  23. Getty Images (UK) et al. v. Stability AI Ltd. ↑
  24. Stability AI, Introducing Stable Video Diffusion, Stability AI (21.11.2023), available at: https://stability.ai/news/stable-video-diffusion-open-ai-video-model ↑
  25. Andreas Blattmann et al., Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets, Stability AI (21.11.2023), p. 2, available at: https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf ↑
  26. European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), available at: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html#title2 ↑
  27. European Parliament, Artificial Intelligence Act: MEPs adopt landmark law, Brussels (13.03.2024), available at: https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law As indicated: “The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.” ↑
  28. See definition of providers in art. 3 par. 3 of the EU AI Act. ↑
  29. See definition of GPAI in art. 3 par. 63 of the EU AI Act. ↑
  30. With regard to the AI Office, see art. 3 par. 47 of the EU AI Act: “‘AI Office’ means the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and AI governance carried out by the European Artificial Intelligence Office established by Commission Decision of 24.1.2024; references in this Regulation to the AI Office shall be construed as references to the Commission.” ↑
  31. Paul Keller, A first look at the copyright relevant parts in the final AI Act compromise, Kluwer Copyright Blog (11.12.2023), available at: https://copyrightblog.kluweriplaw.com/2023/12/11/a-first-look-at-the-copyright-relevant-parts-in-the-final-ai-act-compromise/ ↑
  32. Andres Guadamuz, The EU AI Act and Copyright, TechnoLlama (14.03.2024), available at: https://www.technollama.co.uk/the-eu-ai-act-and-copyright ↑
  33. Keller. ↑
  34. Id. ↑
  35. See recital 108 of the EU AI Act. ↑
  36. Library of Congress, Copyright Office, Artificial Intelligence and Copyright, No. 2023-6, in: Federal Register, Vol. 88, No. 167, Washington, DC (30.08.2023), available at: https://www.govinfo.gov/content/pkg/FR-2023-08-30/pdf/2023-18624.pdf ↑
  37. Id., p. 59942. ↑
  38. Id., p. 59947. ↑
  39. Id. ↑
  40. US Copyright Office, Artificial Intelligence and Copyright, Washington, DC, available at: https://www.regulations.gov/docket/COLC-2023-0006/comments ↑
  41. US Copyright Office, Washington, DC (23.02.2024), p. 5, available at: https://copyright.gov/laws/hearings/USCO-Letter-on-AI-and-Copyright-Initiative-Update-Feb-23-2024.pdf?loclr=blogcop ↑
  42. US Copyright Office, Comment from Google, Washington, DC (01.11.2023), pp. 11 and 12, available at: https://www.regulations.gov/comment/COLC-2023-0006-9003 ↑
  43. US Copyright Office, Comment from Meta Platforms, Inc., Washington, DC (01.11.2023), pp. 19 and 20, available at: https://www.regulations.gov/comment/COLC-2023-0006-9027 ↑
  44. Available at: https://schiff.house.gov/imo/media/doc/the_generative_ai_copyright_disclosure_act.pdf ↑
  45. Rep. Schiff introduces groundbreaking bill to create AI transparency between creators and companies, Washington, DC (09.04.2024), available at: https://schiff.house.gov/news/press-releases/rep-schiff-introduces-groundbreaking-bill-to-create-ai-transparency-between-creators-and-companies ↑
  46. LAION, About, available at: https://laion.ai/about/ ↑
  47. Common Crawl, Frequently asked questions, available at: https://commoncrawl.org/faq ↑
  48. Tremblay et al. v. OpenAI et al., par. 5 and 51. ↑
  49. Concord Music Group, Inc. et al. v. Anthropic PBC, Case 3:23-cv-01092, 18.10.2023. ↑
  50. Id., par. 66 to 69. ↑
  51. The New York Times Company v. Microsoft Corporation et al. ↑
  52. Id., par. 4. ↑
  53. See for instance Id., par. 99, 100, and 104 to 107. ↑
  54. Art. 4 par. 3 of the DSM Directive: “The exception or limitation provided for in paragraph 1 [Exception or limitation for text and data mining] shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.” ↑
  55. Art. 53 par. 1 let. c of the EU AI Act: “Providers of general-purpose AI models shall: (c) put in place a policy to comply with Union copyright law, and in particular to identify and comply with, including through state of the art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790.” ↑
  56. See “Have I Been Trained?”, which is mentioned in Andersen et al. v. Stability AI Ltd. et al., p. 6, footnotes n°1 and 2. ↑
  57. Kali Hays, OpenAI offers a way for creators to opt out of AI training data. It’s so onerous that one artist called it ‘enraging’, Business Insider (29.09.2023), available at: https://www.businessinsider.com/openai-dalle-opt-out-process-artists-enraging-2023-9 ↑
  58. Ronen Eldan and Mark Russinovich, Who’s Harry Potter? Approximate Unlearning in LLMs, 04.10.2023, available at: https://arxiv.org/pdf/2310.02238 ↑
  59. Rick Merritt, Moving Pictures: NVIDIA, Getty Images Collaborate on Generative AI, NVIDIA (21.03.2023), available at: https://blogs.nvidia.com/blog/generative-ai-getty-images/ ↑
  60. Universal Music Group, Universal Music Group and BandLab Technologies announce first-of-its-kind strategic AI collaboration, Universal Music (18.10.2023), available at: https://www.universalmusic.com/universal-music-group-and-bandlab-technologies-announce-first-of-its-kind-strategic-ai-collaboration/ ↑
  61. Anna Tong, Echo Wang, Martin Coulter, Exclusive: Reddit in AI content licensing deal with Google, Reuters (21.02.2024), available at: https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/ ↑
  62. OpenAI, Global news partnerships: Le Monde and Prisa Media, OpenAI (13.03.2024), available at: https://openai.com/index/global-news-partnerships-le-monde-and-prisa-media/ ↑
  63. Fairly Trained, About, available at: https://www.fairlytrained.org/about ↑
  64. Fairly Trained, Licensed Model Certification, available at: https://www.fairlytrained.org/certifications ↑

Disclaimer: This article is for educational purposes only and is not meant to provide legal advice. Readers should not construe or rely on any comment or statement in this article as legal advice. For legal advice, readers should seek a consultation with an attorney.
