Can AI Tell the Truth, the Whole Truth, and Nothing But the Truth? The Courts Aren’t Sure

November 14, 2025

By Rebecca Bennett

As artificial intelligence (AI) becomes an increasingly ubiquitous presence across virtually every industry, legal systems are forced to grapple with the implications of this technology seeping into courtroom proceedings. The legal system plays a significant role in setting standards for ethical conduct surrounding evolving technologies like AI. And while numerous cases have already addressed complaints related to the technology, such as courts fining attorneys for submitting AI-hallucinated content and the ongoing legal battle between Thomson Reuters and Ross Intelligence, in which AI is the crux of the dispute, AI-generated evidence is becoming increasingly common in disputes seemingly unrelated to the technology itself.[1][2] For instance, ChatGPT was reportedly used to identify the perpetrator accused of starting the Pacific Palisades fire in Los Angeles last January.[3] Yet, as AI evidence enters the courtroom, many professionals are concerned that existing legal frameworks are not prepared to handle the significant challenges posed by the technology's ability to generate highly realistic falsified content.[4]

These issues are particularly relevant to the visual arts for a number of reasons. First, artists are already involved in significant copyright lawsuits against AI companies. For example, Sarah Andersen, Kelly McKernan, and Karla Ortiz sued Stability AI, Midjourney, and DeviantArt in 2023 over the use of their works to train AI models.[5] Beyond lawsuits directly probing the boundaries of permissible and impermissible use of human-generated content in AI model training, however, AI systems are increasingly recognized for their potential to support authentication and heritage conservation efforts.[6][7] Art authentication stands to benefit from the integration of AI methods, given that the field currently relies on placing high levels of trust in highly specialized human experts. The subjective nature of these analyses means that two experts may reach different conclusions regarding the authenticity of a work, and that highly skilled forgers can succeed in deceiving multiple experts. Researchers have succeeded in developing AI tools that, when extensively trained, can reliably distinguish between authentic and forged works.[8] As a result, AI-generated evidence may be increasingly called upon to provide additional expertise or corroborate the reports of human authentication experts in legal disputes.
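
The cited forgery-detection research relies on deep networks trained on large image corpora, but the underlying pattern is simpler: learn a statistical signature from known-authentic works, then flag outliers. The toy sketch below illustrates only that train-and-score loop, using a crude variance feature and made-up pixel values; every function name, threshold, and data point here is hypothetical, and real systems are vastly more sophisticated.

```python
# Toy illustration of outlier-based authentication (NOT a real system).
import statistics

def texture_feature(pixels):
    """Crude stand-in for a brushstroke/texture feature: intensity variance."""
    return statistics.pvariance(pixels)

def fit_signature(authentic_works):
    """Learn the mean and spread of the feature over known-authentic works."""
    feats = [texture_feature(w) for w in authentic_works]
    return statistics.mean(feats), statistics.pstdev(feats)

def is_consistent(work, signature, z_cutoff=2.0):
    """Accept a work whose feature lies within z_cutoff spreads of the mean."""
    mean, spread = signature
    return abs(texture_feature(work) - mean) <= z_cutoff * spread

# Synthetic data: the "authentic" works share a tight texture range;
# the "forgery" has wildly different intensity statistics.
authentic = [
    [10, 12, 11, 13, 12, 11],
    [11, 13, 12, 12, 11, 13],
    [12, 11, 13, 12, 13, 11],
]
forgery = [0, 25, 1, 24, 2, 23]

sig = fit_signature(authentic)
print(is_consistent(authentic[0], sig))  # True: matches the learned signature
print(is_consistent(forgery, sig))       # False: far outside the signature
```

In the cited research, the hand-made feature is replaced by learned representations (the Pollock study above uses tile-level computer vision), but the structure — fit on known-authentic works, score the questioned work — is the same.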

Traditionally suspicious, the art market remains wary of displacing the connoisseurship of human professionals in favor of technological alternatives.[9] Similar concerns are prevalent in the legal field, where the United States judiciary is currently adapting to AI's entrance into the courtroom. As AI's capabilities and potential applications rapidly evolve, ethical debates have pushed courts to solidify verification procedures and guidelines for judges and juries.

Ethical Concerns

As an evidentiary tool, AI raises a multitude of ethical quandaries. To handle the inevitable influx of AI-generated evidence, courts must prepare to balance the potential benefits of the emerging technology against its risks. This is especially pressing in the context of jury trials, given the potential for generative AI products to produce extraordinarily realistic false information.[10] Fears of deepfakes are not unfounded: a 2021 study by researchers at the University of Amsterdam demonstrated that people cannot reliably identify falsified content.[11] Such incidents are common, as evidenced by television host Chris Cuomo's recent outrage over a falsified video of US Representative Alexandria Ocasio-Cortez.[12] Although the video displayed a watermark indicating AI was used to create it, Cuomo took to the internet to criticize Ocasio-Cortez for opinions her falsified image expressed in the video.[13]

Unfortunately, AI technology designed to detect AI-generated content remains unreliable, creating a difficult paradox for legal professionals.[14] Professor Maura P. Grossman, a leading researcher investigating the integration of AI into the legal system, argues that it is paramount for courts to respond proactively to these issues, because audiovisual evidence is far more memorable than, for example, verbal or written testimony.[15] On the one hand, it is concerning that audiovisual evidence is likely to be perceived as reliable without further insight into how it was gathered; on the other, an overly cautious approach could cause jurors to become too distrustful of the legal process.

Trust in the authority of evidence is critical because of the phenomenon of defensive processing: once people accept that something is fake, it is impossible to recalibrate their perceptions.[16] In a 2019 article in the California Law Review, professors Danielle Citron and Bobby Chesney introduced the now frequently cited "liar's dividend," a concept encompassing the danger that rising distrust will encourage claims of fakery to be unduly leveled at legitimate evidence.[17] Courts must therefore carefully consider how they discuss the validity of AI-generated evidence, as maintaining a high level of trust in the courtroom is necessary to protect the ethical functioning of the legal process.

To combat these challenges, Grossman advocates an approach that encourages critical analysis without causing jurors to become overly skeptical of the evidence presented to them.[18] She distinguishes between evidence that all parties readily acknowledge incorporates AI, and unacknowledged evidence whose alleged manipulation the parties dispute.[19] In her view, acknowledged evidence simply requires confirmation of its validity and reliability, while the content of unacknowledged evidence must be proven genuine.[20]

In a webinar co-hosted by the National Center for State Courts and the Thomson Reuters Institute on August 20, 2025, legal professionals outlined a series of measures courts could adopt as standard practice when faced with AI-generated evidence.[21] Ideally, they argue, any generative-AI evidence should be clearly acknowledged as such and accompanied by expert witness testimony speaking to the chain of custody that led to the model's findings.[22] These practices should be integrated throughout trial proceedings, from jury selection and instructions to the trial itself. During selection, technological literacy and bias screenings could be conducted, while unambiguous, plain-language explanations and guidelines surrounding authenticity should be communicated during jury instructions.[23] While these suggestions are certainly prudent, it is also important to consider the existing legal frameworks designed to handle evidence verification.
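
The disclosure-plus-expert-testimony practice the panelists describe can be pictured as a minimal provenance record attached to each exhibit. The sketch below is our own hypothetical illustration of that idea; none of the field names or checks are a court standard or drawn from the webinar itself.

```python
# Hypothetical provenance record for AI-generated evidence (illustration only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIEvidenceRecord:
    exhibit_id: str
    model_name: str              # which generative/analytic model produced it
    model_version: str
    inputs_description: str      # what the model was given
    output_hash: str             # fingerprint of the produced file
    expert_witness: Optional[str] = None  # who can testify to the process

    def is_acknowledged(self) -> bool:
        """'Acknowledged' evidence discloses its AI provenance up front."""
        return bool(self.model_name and self.inputs_description)

    def ready_for_offer(self) -> bool:
        """The webinar's ideal: acknowledged AND backed by expert testimony."""
        return self.is_acknowledged() and self.expert_witness is not None

rec = AIEvidenceRecord(
    exhibit_id="EX-14",
    model_name="hypothetical-image-model",
    model_version="1.2",
    inputs_description="photographs supplied by investigators",
    output_hash="sha256:...",
)
print(rec.is_acknowledged())   # True: provenance is disclosed
print(rec.ready_for_offer())   # False: no expert witness attached yet
```

The point of the structure is that each recommended safeguard becomes a concrete, checkable field rather than an informal assurance.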

Updates to the Federal Rules of Evidence

In response to the concerns outlined above, the federal courts' Advisory Committee on Evidence Rules has acknowledged the need to update the Federal Rules of Evidence with specific provisions governing AI. Beginning in 2023, the committee debated amendments to Rule 901, which governs evidence authentication.[24] Rule 901 sets a low threshold for authenticity, generally assuming that evidence is derived from reliable sources.[25] Numerous proposals were considered, yet in May 2025 the committee ultimately chose not to adopt any amendments to Rule 901.[26] The committee reasoned that acting on authenticity concerns may not be immediately necessary, given that the rules have proven capable of handling authenticity concerns regarding social media posts.[27] During the same session, however, the committee also considered a proposal to adopt a new rule, Rule 707, aimed at addressing issues stemming from AI evidence admitted without expert testimony.[28] Rule 707 was preliminarily accepted by the committee and released for public comment in August.[29] The rule states that when “machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702 (a)-(d).”[30] An exception specifies that Rule 707 does not apply “to the output of simple scientific instruments.”[31]

If enacted, this rule would subject machine-generated evidence to the same admissibility standards applied to expert testimony under Rule 702.[32] Under this framework, AI evidence would be held to the same standards of validity and reliability as a human expert, ideally increasing transparency regarding the process by which AI outputs are generated. This addresses many concerns raised by legal scholars by requiring litigants to clearly convey the methodology used to generate the evidence and how it is relevant to the case at hand. The proposed rule is open to public comment until February 2026.[33]
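
As an illustration only, and emphatically not legal advice or an official codification, the gate described by the quoted text of proposed Rule 707 can be paraphrased as a small decision procedure:

```python
# Paraphrase of proposed Fed. R. Evid. 707 as quoted above (illustration only).
def admissible_under_707(machine_generated: bool,
                         offered_with_expert: bool,
                         would_be_702_testimony: bool,
                         simple_instrument: bool,
                         satisfies_702_a_to_d: bool) -> bool:
    """Returns whether Rule 707 itself would block admission of the evidence."""
    if not machine_generated or simple_instrument:
        return True  # Rule 707 does not apply; other rules govern instead
    if offered_with_expert or not would_be_702_testimony:
        return True  # outside Rule 707's trigger conditions
    return satisfies_702_a_to_d  # the Rule 707 gate: must meet 702(a)-(d)

# An AI report offered with no expert and no Rule 702 showing is blocked:
print(admissible_under_707(True, False, True, False, False))  # False
# The same report, once its methodology satisfies 702(a)-(d), passes the gate:
print(admissible_under_707(True, False, True, False, True))   # True
```

Reading the rule this way makes its narrowness visible: it bites only on machine-generated output offered without an expert, and even then only when that output would otherwise be expert-testimony territory.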

Conclusion

Whether this rule will ultimately be enacted remains to be seen. And while amendments to the Federal Rules of Evidence are an encouraging step, they should not be seen as the end of the discussion. The potency and novelty of AI technologies require ongoing discussion and the adoption of flexible legal frameworks. Rigid regulations could easily become obsolete as the applications and capabilities of AI continue to expand, necessitating flexibility and creativity from legal professionals. Rather than viewing these developments with pessimism, such an attitude acknowledges AI's potential benefits while remaining cognizant of its consequences. Instituting safeguards against deepfakes and ensuring that AI models are made comprehensible to all parties should bolster confidence in the legal process rather than detract from equity and transparency.

Art authentication is, as noted earlier, an area where combining AI analyses with human expert opinions could increase confidence in findings. It is, at bottom, a search for truth. A clear parallel can be drawn between the skepticism common in discussions of AI-generated content and the art market's attitude toward authentication. In both cases, trust in the intrinsic value of the object under scrutiny is paramount. A forgery, even a great one, is of lesser value because genuine authorship and creativity matter in artistic production. Similarly, courts dealing with deepfaked evidence are understandably skeptical of allowing fully computer-generated materials to contribute to trial outcomes. Yet the fact remains that, whether the courts are ready or not, AI is permeating every aspect of society, and an attitude of complacency and inaction is far more dangerous than taking measured, thoughtful steps toward managing its consequences.

Further Resources:

  1. George Washington University, AI Litigation Database
  2. Bruce Barcott, AI Lawsuits Worth Watching, TechPolicy.Press (July 1, 2024).
  3. Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, 107 Cal. L. Rev. 1753 (2019).

About the author:

Rebecca Bennett is a recent graduate of McGill University with a BA in Art History and International Development. Currently a graduate intern with the Center, she plans to pursue a career in art law.

Select References:

  1. Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., No. 1:20-CV-613-SB (D. Del. Feb. 11, 2025). ↑
  2. Jaclyn Diaz, A Recent High-Profile Case of AI Hallucination Serves as a Stark Warning, NPR NEWS (July 10, 2025). ↑
  3. Ana Faguy and Nardine Saad, ChatGPT Image Snares Suspect in Deadly Pacific Palisades Fire, BBC NEWS (October 8, 2025). ↑
  4. Natalie Runyon, AI Evidence in Jury Trials: Navigating the New Frontier of Justice, Thomson Reuters (October 6, 2025). ↑
  5. Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO (LJC), 2025 U.S. Dist. LEXIS 50848 (N.D. Cal. Mar. 19, 2025). ↑
  6. Shelby Jorgensen, How to Catch a Criminal in the 21st Century and Why AI Might be Able to Help, the Center for Art Law (August 3, 2025). ↑
  7. J.H. Smith, C. Holt, N.H. Smith & R.P. Taylor, Using Machine Learning to Distinguish Between Authentic and Imitation Jackson Pollock Poured Paintings: A Tile-Driven Approach to Computer Vision, 19 PLOS ONE e0302962 (2024). ↑
  8. Sandro Boccuzzo, Deborah Desirée Meyer & Ludovica Schaerf, Art Forgery Detection Using Kolmogorov Arnold and Convolutional Neural Networks, in European Conference on Computer Vision 187 (Springer Nature Switzerland 2024). ↑
  9. George Nelson, AI is Trying to Take Over Art Authentication, But Longtime Experts Are Skeptical, ARTNews (August 30, 2024). ↑
  10. Abhishek Dalal et al., Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases, University of Chicago Legal Forum (2024). ↑
  11. N.C. Köbis, B. Doležalová & I. Soraperra, Fooled Twice: People Cannot Detect Deepfakes but Think They Can, 24 iScience 103364 (2021). ↑
  12. Michael Sainato, Chris Cuomo mocked for response after falling for deepfake AOC video, The Guardian (August 7, 2025). ↑
  13. Id. ↑
  14. Stuart A. Thompson and Tiffany Hsu, How Easy Is It to Fool A.I.-Detection Tools?, The New York Times (June 28, 2023). ↑
  15. Dalal et al., supra note 10. ↑
  16. Thomson Reuters Institute/National Center for State Courts, AI Evidence in Jury Trials: Authenticity, Admissibility, and the Role of the Court and Juries, Vimeo (August 20, 2025). ↑
  17. Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, 107 Cal. L. Rev. 1753 (2019). ↑
  18. University of Waterloo, Generative AI and the Legal System (April 16, 2024). ↑
  19. Runyon, supra note 4. ↑
  20. Thomson Reuters Institute/National Center for State Courts, supra note 16. ↑
  21. Id. ↑
  22. Id. ↑
  23. Id. ↑
  24. Fed. R. Evid. 901; Riana Pfefferkorn, The Ongoing Fight to Keep Evidence Intact in the Face of AI Deception, TechPolicy.Press (August 14, 2025). ↑
  25. Dalal et al., supra note 10. ↑
  26. US Courts, Advisory Committee on Evidence Rules-May 2025, Agenda Book (May 2, 2025). ↑
  27. Avi Gesser, Matt Kelly, Gabriel A. Kohan, and Jim Pastore, Federal Judicial Conference to Revise Rules of Evidence to Address AI Risks, Debevoise and Plimpton (March 20, 2025). ↑
  28. US Courts, Preliminary Draft of Proposed Amendments to the Federal Rules of Evidence (August 13, 2025). ↑
  29. US Courts, Proposed Amendments Published for Public Comment (August 15, 2025). ↑
  30. US Courts, supra note 28. ↑
  31. Id. ↑
  32. Fed. R. Evid. 702. ↑
  33. US Courts, supra note 29. ↑

Disclaimer: This article is for educational purposes only and is not meant to provide legal advice. Readers should not construe or rely on any comment or statement in this article as legal advice. For legal advice, readers should seek a consultation with an attorney.
