Can AI Tell the Truth, the Whole Truth, and Nothing But the Truth? The Courts Aren’t Sure
November 14, 2025
By Rebecca Bennett
As artificial intelligence (AI) becomes an increasingly ubiquitous presence across virtually every industry, legal systems are forced to grapple with the implications of this technology seeping into courtroom proceedings. The legal system plays a significant role in setting standards for ethical conduct surrounding evolving technologies like AI. And while there have already been numerous cases addressing complaints about the technology itself, such as courts fining attorneys for submitting AI-hallucinated content and the ongoing legal battle between Thomson Reuters and Ross Intelligence, where AI is the crux of the dispute, AI-generated evidence is also becoming increasingly common in disputes seemingly unrelated to the technology.[1][2] For instance, a ChatGPT-generated image reportedly helped investigators identify the suspect accused of starting the Pacific Palisades fire in Los Angeles in January 2025.[3] Yet, as AI evidence enters the courtroom, many professionals are concerned that existing legal frameworks are not prepared to handle the significant challenges posed by the technology’s ability to generate highly realistic falsified content.[4]
These issues are particularly relevant to the visual arts for a number of reasons. First, artists are already involved in significant copyright lawsuits against AI companies. For example, Sarah Andersen, Kelly McKernan, and Karla Ortiz sued Stability AI, Midjourney, and DeviantArt in 2023 over the use of their works to train AI models.[5] But beyond lawsuits directly probing the boundaries of permissible and impermissible use of human-generated content in AI model training, AI systems are increasingly recognized for their potential to support authentication and heritage conservation efforts.[6][7] Art authentication stands to benefit from the integration of AI methods, given that the field currently places a high level of trust in highly specialized human experts. The subjective nature of these analyses means that two experts may reach different conclusions about the authenticity of a work, and that highly skilled forgers can succeed in deceiving multiple experts. However, researchers have developed AI tools that, when extensively trained, can reliably distinguish authentic works from forgeries.[8] As a result, AI-generated evidence may be increasingly called upon to provide additional expertise or to corroborate the reports of human authentication experts in legal disputes.
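To make the idea concrete, the tile-driven approach described in the Pollock study cited above can be sketched in broad strokes: high-resolution scans are cut into small tiles, a classifier is trained on tiles from known-authentic and known-imitation works, and the tile-level predictions are aggregated into a judgment about the whole painting. The short Python sketch below is purely illustrative; the folder layout, model choice (an off-the-shelf convolutional network), and training settings are assumptions for demonstration and do not reproduce the researchers’ actual pipeline.

```python
# Illustrative sketch only: a generic tile-level classifier for
# "authentic" vs. "imitation" painting scans. Paths, model, and
# hyperparameters are hypothetical, not the cited studies' code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: tiles/train/authentic/*.png and tiles/train/imitation/*.png,
# each tile cropped from a high-resolution scan of a painting.
tile_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_tiles = datasets.ImageFolder("tiles/train", transform=tile_transform)
loader = DataLoader(train_tiles, batch_size=32, shuffle=True)

# A small off-the-shelf CNN, re-headed for two classes.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for tiles, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(tiles), labels)
        loss.backward()
        optimizer.step()

# A whole painting is then judged by averaging the model's per-tile
# probabilities (or by a simple majority vote) over all of its tiles.
```

The point relevant to evidence law is the final step: the tool’s verdict is a statistical aggregation of many small judgments, which is precisely the kind of methodology a court would need explained before relying on it.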
Traditionally wary of technological alternatives, the art market remains reluctant to displace the connoisseurship of human professionals.[9] Similar concerns are prevalent in the legal field, where the United States judiciary is still adapting to AI’s entrance into the courtroom. As AI’s capabilities and potential applications rapidly evolve, ethical debates have encouraged the courts to solidify verification procedures and guidelines for judges and juries.
Ethical Concerns
As an evidentiary tool, AI raises a multitude of ethical quandaries. To handle the inevitable influx of AI-generated evidence, courts must prepare to balance the potential benefits of the emerging technology against its risks. This is especially pressing in the context of jury trials, given the ability of generative AI products to produce extraordinarily realistic false information.[10] Fears of deepfakes are not unfounded: a 2021 study by researchers at the University of Amsterdam demonstrated that people cannot reliably identify falsified content.[11] Nor are such failures merely hypothetical, as evidenced by television host Chris Cuomo’s recent outrage over a falsified video of US Representative Alexandria Ocasio-Cortez.[12] Although the video displayed a watermark indicating that AI was used to create it, Cuomo took to the internet to criticize Ocasio-Cortez for opinions her falsified likeness expressed in the video.[13]
Unfortunately, AI tools designed to detect AI-generated content remain unreliable, creating a difficult paradox for legal professionals.[14] Professor Maura P. Grossman, a leading researcher investigating the integration of AI into the legal system, argues that it is paramount for courts to respond proactively to these issues, because audiovisual evidence is far more memorable than, for example, verbal or written testimony.[15] On the one hand, it is concerning that audiovisual evidence is likely to be perceived as reliable without further insight into how it was gathered; on the other, an overly cautious approach could cause jurors to become too distrustful of the legal process.
Trust in the authority of evidence is critical due to the phenomenon of defensive processing: once people accept that something is fake, it is nearly impossible to recalibrate their perceptions.[16] In a 2019 article published in the California Law Review, professors Danielle Citron and Bobby Chesney introduced the now frequently cited “liar’s dividend,” the danger that rising distrust will encourage claims of fakery to be unduly leveled against legitimate evidence.[17] Courts must therefore carefully consider how they discuss the validity of AI-generated evidence, as maintaining a high level of trust in the courtroom is necessary to protect the ethical functioning of the legal process.
To combat these challenges, Grossman advocates an approach that encourages critical analysis without causing jurors to become overly skeptical of the evidence presented to them.[18] She distinguishes between evidence that all parties acknowledge incorporates AI and unacknowledged evidence whose manipulation the parties dispute.[19] In her view, acknowledged evidence simply requires confirmation of its validity and reliability, whereas the content of unacknowledged evidence must be proven to be genuine.[20]
In a webinar co-hosted by the National Center for State Courts and the Thomson Reuters Institute on August 20, 2025, assembled legal professionals outlined a series of measures courts could adopt as standard practice when faced with AI-generated evidence.[21] Ideally, they argue, any generative AI evidence should be clearly acknowledged as such and accompanied by expert witness testimony speaking to the process that led to the model’s findings.[22] These practices should be integrated throughout trial proceedings, from jury selection and instructions to the trial itself. During selection, technological literacy and bias screenings could be conducted, while unambiguous, plain-language explanations and guidelines surrounding authenticity should be communicated during jury instructions.[23] While these suggestions are certainly prudent, it is also important to consider the existing legal frameworks designed to handle evidence verification.
Updates to the Federal Rules of Evidence
In response to the concerns outlined above, the federal courts’ Advisory Committee on Evidence Rules has acknowledged the need to update the Federal Rules of Evidence with specific provisions governing AI. Beginning in 2023, the committee debated amendments to Rule 901, which governs evidence authentication.[24] Rule 901 sets a low threshold for authenticity, generally assuming that evidence is derived from reliable sources.[25] Numerous proposals were considered, yet in May 2025 the committee ultimately chose not to adopt any amendments to Rule 901.[26] The committee reasoned that acting on authenticity concerns may not be immediately necessary, given that the existing rules have proven capable of handling similar authenticity questions regarding social media posts.[27] During the same session, however, the committee also considered a proposal to adopt a new rule, Rule 707, aimed at addressing issues stemming from AI evidence that is admitted without expert testimony.[28] Rule 707 was preliminarily accepted by the committee and released for public comment in August.[29] The rule states that when “machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702 (a)-(d).”[30] An exception specifies that Rule 707 does not apply “to the output of simple scientific instruments.”[31]
If enacted, this rule would subject machine-generated evidence offered without an accompanying expert to the same admissibility standards applied to expert testimony under Rule 702.[32] Under this framework, AI evidence would be held to the same standards of validity and reliability as a human expert, ideally increasing transparency regarding the process by which AI outputs are generated. This addresses many of the concerns raised by legal scholars by requiring litigants to clearly convey the methodology used to generate the evidence and how it is relevant to the case at hand. The proposed rule is open to public comment until February 2026.[33]
Conclusion
Whether this rule will ultimately be adopted remains to be seen. And, while amendments to the Federal Rules of Evidence are an encouraging step, they should not be seen as an end to the discussion. The potency and novelty of AI technologies require ongoing discussion and the adoption of flexible legal frameworks; rigid regulations could easily become obsolete as the applications and capabilities of AI continue to expand, necessitating an attitude of flexibility and creativity from legal professionals. Rather than viewing these developments with pessimism, such an attitude acknowledges AI’s potential benefits while remaining cognizant of its consequences. Instituting safeguards against deepfakes and ensuring AI models are made comprehensible to all parties should bolster confidence in the legal process rather than detract from equity and transparency.
Art authentication, as noted earlier, is an area where combining AI analyses with human expert opinion could increase confidence in findings; like a trial, it is a search for truth. A clear parallel can be drawn between the skepticism that greets AI-generated content in the courtroom and the art market’s attitude toward authentication. In both cases, trust in the intrinsic value of the object under scrutiny is paramount. A forgery, even a great one, is of lesser value because genuine authorship and creativity matter in artistic production. Similarly, courts dealing with deepfaked evidence are understandably skeptical of allowing fully computer-generated materials to contribute to trial outcomes. Yet the fact remains that, whether the courts are ready or not, AI is permeating every aspect of society, and an attitude of complacency and inaction is far more dangerous than taking measured, thoughtful steps toward managing its consequences.
Further Resources:
- George Washington University, AI Litigation Database
- Bruce Barcott, AI Lawsuits Worth Watching, TechPolicy.Press (July 1, 2024).
- Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, 107 Cal. L. Rev. 1753 (2019).
About the author:
Rebecca Bennett is a recent graduate of McGill University with a BA in Art History and International Development. Currently a graduate intern with the Center, she is working to pursue a career in Art Law.
Select References:
- Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., No. 1:20-CV-613-SB (D. Del. Feb. 11, 2025). ↑
- Jaclyn Diaz, A Recent High-Profile Case of AI Hallucination Serves as a Stark Warning, NPR NEWS (July 10, 2025). ↑
- Ana Faguy and Nardine Saad, ChatGPT Image Snares Suspect in Deadly Pacific Palisades Fire, BBC NEWS (October 8, 2025). ↑
- Natalie Runyon, AI Evidence in Jury Trials: Navigating the New Frontier of Justice, THOMSON REUTERS (October 6, 2025). ↑
- Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO (LJC), 2025 U.S. Dist. LEXIS 50848 (N.D. Cal. Mar. 19, 2025). ↑
- Shelby Jorgensen, How to Catch a Criminal in the 21st Century and Why AI Might be Able to Help, Center for Art Law (August 3, 2025). ↑
- J.H. Smith, C. Holt, N.H. Smith & R.P. Taylor, Using Machine Learning to Distinguish Between Authentic and Imitation Jackson Pollock Poured Paintings: A Tile-Driven Approach to Computer Vision, 19 PLOS ONE e0302962 (2024). ↑
- Sandro Boccuzzo, Deborah Desirée Meyer & Ludovica Schaerf, Art Forgery Detection Using Kolmogorov Arnold and Convolutional Neural Networks, in European Conference on Computer Vision 187 (Springer Nature Switzerland 2024). ↑
- George Nelson, AI is Trying to Take Over Art Authentication, But Longtime Experts Are Skeptical, ARTNews (August 30, 2024). ↑
- Abhishek Dalal et al., Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases, University of Chicago Legal Forum (2024). ↑
- N.C. Köbis, B. Doležalová & I. Soraperra, Fooled Twice: People Cannot Detect Deepfakes but Think They Can, 24 iScience 103364 (2021). ↑
- Michael Sainato, Chris Cuomo mocked for response after falling for deepfake AOC video, The Guardian (August 7, 2025). ↑
- Id. ↑
- Stuart A. Thompson and Tiffany Hsu, How Easy Is It to Fool A.I.-Detection Tools?, The New York Times (June 28, 2023). ↑
- Id. Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases. ↑
- Thomson Reuters Institute/National Center for State Courts, AI Evidence in Jury Trials: Authenticity, Admissibility, and the Role of the Court and Juries, Vimeo (August 20, 2025). ↑
- Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, 107 Cal. L. Rev. 1753 (2019). ↑
- University of Waterloo, Generative AI and the Legal System (April 16, 2024). ↑
- Id. AI Evidence in Jury Trials: Navigating the New Frontier of Justice. ↑
- Thomson Reuters Institute/National Center for State Courts, AI Evidence in Jury Trials: Authenticity, Admissibility, and the Role of the Court and Juries, Vimeo (August 20, 2025). ↑
- Id. ↑
- Id. ↑
- Id. ↑
- Fed. R. Evid. 901; Riana Pfefferkorn, The Ongoing Fight to Keep Evidence Intact in the Face of AI Deception, TechPolicy.Press (August 14, 2025). ↑
- Id. Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases. ↑
- US Courts, Advisory Committee on Evidence Rules-May 2025, Agenda Book (May 2, 2025). ↑
- Avi Gesser, Matt Kelly, Gabriel A. Kohan, and Jim Pastore, Federal Judicial Conference to Revise Rules of Evidence to Address AI Risks, Debevoise & Plimpton (March 20, 2025). ↑
- US Courts, Preliminary Draft of Proposed Amendments to the Federal Rules of Evidence (August 13, 2025). ↑
- US Courts, Proposed Amendments Published for Public Comment (August 15, 2025). ↑
- Id. Preliminary Draft of Proposed Amendments to the Federal Rules of Evidence (August 13, 2025). ↑
- Id. Preliminary Draft of Proposed Amendments to the Federal Rules of Evidence (August 13, 2025). ↑
- Fed. R. Evid. 702. ↑
- US Courts, Proposed Amendments Published for Public Comment (August 15, 2025). ↑
Disclaimer: This article is for educational purposes only and is not meant to provide legal advice. Readers should not construe or rely on any comment or statement in this article as legal advice. For legal advice, readers should seek a consultation with an attorney.