Artificial intelligence is moving quickly from novelty to inevitability, and the courtroom is no exception. Generative AI (“GAI”) can easily summarize, manipulate, and fabricate evidence. While the technology promises efficiency and analytical power, it also threatens core principles of our adversarial system: authenticity, reliability, and the truth-seeking function of trial. Courts, lawyers, and policymakers must proactively address these risks to preserve trust in the justice system.
This article highlights the key evidentiary and ethical issues raised by AI, explains the difference between deterministic and generative models, and evaluates proposed amendments to the Federal Rules of Evidence designed for an AI-driven era.
- Governing Law: The Constitution, Evidence Rules, and Professional Duties
The Sixth Amendment guarantees a criminal defendant the right to confront witnesses.[1] Traditionally, cross-examination tests human limitations, including bias, memory, and perception.[2] But when evidence is generated by a machine, fundamental questions emerge: What does it mean to “confront” an algorithm? How do you probe a model’s bias or memory if its reasoning is hidden behind layers of code?
Federal evidence law further complicates matters. Rule 702 requires expert testimony to be reliable and grounded in sound methodology.[3] Under Daubert, a court considers testability, peer review, error rates, standards, and community acceptance.[4] These factors map readily onto human experts, but they are difficult to apply to an LLM whose “reasoning” cannot be reproduced or inspected.
Rule 901 governs authentication of physical and digital evidence.[5] Photos, videos, and audio recordings (already historically susceptible to tampering) are now vulnerable to seamless GAI manipulation. If authenticity is challenged, a proponent must show the evidence is what they claim it is.[6] When AI is involved, that burden becomes far more complicated.
Finally, the Model Rules of Professional Conduct impose duties of candor and competence.[7] Lawyers must understand the tools they use, must not mislead the court, and must disclose AI use when relevant. Unacknowledged AI evidence is inherently deceptive and violates both the spirit and the letter of professional responsibility.
- Deterministic AI vs. Generative AI
Deterministic AI (“DAI”) is rule-based, traceable, and reproducible.[8] It functions within a closed system and generates the same output from the same input.[9] A spreadsheet running formulas is deterministic.[10] It may be automated, but it is not mysterious. DAI poses minimal evidentiary risk.
Generative AI is different. Trained on vast datasets, LLMs generate content through pattern prediction, not fixed rules.[11] Their decision pathways are not transparent. Worse, they “hallucinate,” confidently producing wrong answers, fake citations, or fabricated images.[12]
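To make the distinction concrete, consider a minimal Python sketch. It is purely illustrative: the function names, the 7% tax rule, and the canned replies are invented for this example and do not model any real system.

```python
import random

def tax_due(income: float) -> float:
    """Deterministic: a fixed rule, so the same input always yields the
    same output, and the computation can be re-run and audited."""
    return round(income * 0.07, 2)  # hypothetical 7% flat rate

def chatbot_reply(prompt: str) -> str:
    """Toy stand-in for an LLM: the answer is sampled, so repeated runs
    on the same prompt can differ -- including a confident
    'hallucination' with no basis in the input."""
    candidates = [
        "The court dismissed the case.",
        "The parties settled confidentially.",
        "See Smith v. Jones, 123 F.4th 456 (2099).",  # fabricated citation
    ]
    return random.choice(candidates)

print(tax_due(100.0))                       # 7.0 on every run, on every machine
print(chatbot_reply("Summarize the case"))  # varies from run to run
```

The deterministic function can be audited line by line; the generative one can only be characterized statistically, which is precisely what the Daubert factors struggle to reach.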
The infamous 2023 case, Mata v. Avianca, illustrated the danger of unchecked AI in the courtroom.[13] An attorney submitted a ChatGPT-generated brief filled with nonexistent cases. The court, after spending substantial time trying to locate the phantom authorities, sanctioned the lawyer.[14] The lesson was simple: GAI is powerful, but lawyers remain responsible for verifying its output.
Still, used carefully, GAI can aid justice. It can analyze enormous data sets, model accidents, or summarize technical evidence.[15] The risk is not the tool itself; it is the uncritical or undisclosed use of the tool.
- Acknowledged vs. Unacknowledged AI
Acknowledged AI use is transparent and can be evaluated under existing standards. The real threat comes from unacknowledged AI, where a party presents manipulated evidence as authentic. High-quality deepfakes can mislead jurors who have no meaningful way to distinguish real from artificial.[16]
This leads to the Liar’s Dividend[17] or so-called “Deepfake Defense,”[18] whereby parties exploit uncertainty by claiming that damaging but genuine evidence is “AI-generated.” After the insurrection attempt on January 6, 2021, some defendants hinted that videos of their actions could be deepfakes.[19] Even without proof, the suggestion alone undermines trust in the fact-finding process and taints jury perception.[20] As generative AI improves, the public’s ability to identify fakery is not keeping pace, and the risk of misuse is growing accordingly.
- Getting a Grip on GAI: Proposed Rules 707 and 901(c)
Courts are not powerless. The Judicial Conference has proposed two additions to the Federal Rules of Evidence:
- Proposed Rule 707: Machine-Generated Evidence
If machine-generated evidence is offered without expert testimony, it must meet Rule 702’s reliability requirements.[21] The text reads as follows:
When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d). This rule does not apply to the output of simple scientific instruments.
This ensures that AI-generated content is treated with the same scrutiny as expert analysis. The accompanying Committee Note explains that the simple-scientific-instruments clause is intended to “avoid unnecessary litigation” over items relied upon in everyday life, such as a mercury-based thermometer or an electronic scale.[22]
- Proposed Rule 901(c): Fabrication by Generative AI
This amendment, proposed by Professor Rebecca A. Delfino,[23] creates a burden-shifting framework:
901(c). Notwithstanding subdivision (a), if a party challenging the authenticity of computer-generated or other electronic evidence presents evidence sufficient to support a factual finding that the challenged evidence has been manipulated or fabricated, in whole or in part, by generative artificial intelligence, the proponent of the evidence must authenticate evidence under subdivision (b) and provide additional proof establishing its reliability. The court must decide the admissibility of the challenged evidence under rule 104(a).[24]
This provision directly targets the Liar’s Dividend: by requiring the challenger to make a threshold showing, it discourages frivolous claims of fabrication while still protecting against genuine deepfakes. It also retains judicial gatekeeping under FRE 104(a),[25] leaving the ultimate power to decide admissibility with the court.
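As a rough way to visualize the framework’s logic, here is a short Python sketch. The class and field names are assumptions invented for illustration; nothing here models actual judicial reasoning.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    # Challenger's evidence "sufficient to support a factual finding"
    # that the exhibit was manipulated or fabricated by generative AI
    fabrication_showing_made: bool

@dataclass
class ProponentProof:
    authenticated_under_901b: bool      # ordinary Rule 901(b) authentication
    additional_reliability_shown: bool  # the extra proof 901(c) would require

def admissible(challenge: Challenge, proof: ProponentProof) -> bool:
    """Toy model of proposed Rule 901(c)'s burden shift; in practice the
    court makes this determination under Rule 104(a)."""
    if not challenge.fabrication_showing_made:
        # No threshold showing: ordinary authentication standard applies.
        return proof.authenticated_under_901b
    # Threshold showing made: the burden shifts, and the proponent must
    # both authenticate AND establish reliability.
    return proof.authenticated_under_901b and proof.additional_reliability_shown
```

For instance, admissible(Challenge(True), ProponentProof(True, False)) returns False: once the challenger makes the threshold showing, authentication alone no longer carries the proponent’s burden.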
- The Path Forward
Until reliable AI-detection tools exist, authenticity will depend on the integrity and diligence of attorneys. The duties of candor, competence, and fairness are not new, but they take on renewed importance in the age of GAI.
Courts and lawyers must insist on transparency and challenge suspicious evidence. They also must acknowledge AI use when it shapes the materials they present. If we can combine judicial oversight, clear rules, and honest advocacy, our justice system can adapt to AI without sacrificing the truth.
[1] U.S. Const. Annotated, Sixth Amendment—Rights in Criminal Prosecutions, Amdt 6.5.2, Confrontation Clause Cases During the 1960s through 1990s, https://constitution.congress.gov/browse/essay/amdt6-5-2/ALDE_00013455/#ALDF_00024518 (last visited Nov. 24, 2025).
[2] A criminal defendant has the right to cross-examine a witness for “bias, poor eyesight, lack of care and attentiveness … bad memory,” or any other topic that may help in their defense. United States v. Owens, 484 U.S. 564 (1988).
[3] Fed. R. Evid. 702, https://www.law.cornell.edu/rules/fre/rule_702.
[4] Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993).
[5] Fed. R. Evid. 901, https://www.law.cornell.edu/rules/fre/rule_901.
[6] Maura R. Grossman & Hon. Paul W. Grimm (ret.), Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence, 26 Colum. Sci. & Tech. L. Rev. 110, 120 (2025), https://doi.org/10.52214/stlr.v26i2.13890.
[7] Model Rules of Prof’l Conduct (Am. Bar Ass’n 2025), https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/model_rules_of_professional_conduct_table_of_contents/.
[8] John T. Bandler, Artificial Intelligence, Cyberlaw: Law for Digital Spaces and Information Systems (Bandler Grp. LLC 2025).
[9] Id. at 370.
[10] What is Generative AI?, IBM, https://www.ibm.com/think/topics/generative-ai (last visited Nov. 24, 2025).
[11] Id.
[12] Id.
[13] James H. Curlin IV, ChatGPT Didn’t Write This . . . or Did It? The Emergence of Generative AI in the Legal Field and Lessons from Mata v. Avianca, 78 Ark. L. Rev. 1 (2025), https://scholarworks.uark.edu/cgi/viewcontent.cgi?article=1277&context=alr.
[14] Id.
[15] Id.
[16] Herbert B. Dixon, Jr., The “Deepfake Defense”: An Evidentiary Conundrum, 63 Judges’ J. 2 (Spring 2024), https://www.americanbar.org/groups/judicial/resources/judges-journal/2024-spring/deepfake-defense-evidentiary-conundrum/.
[17] Josh A. Goldstein & Andrew Lohn, Deepfakes, Elections, and Shrinking the Liar’s Dividend, Brennan Ctr. for Justice (Jan. 23, 2024), https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend.
[18] Rebecca Delfino, The Deepfake Defense — Exploring the Limits of the Law and Ethical Norms in Protecting Legal Proceedings from Lying Lawyers, Loyola Law Sch. Los Angeles Legal Studies Research Paper No. 2023-02, 84 Ohio St. L.J. 1068 (2024), https://ssrn.com/abstract=4355140.
[19] Dixon, Deepfake Defense, supra note 16.
[20] AI in Action - Current Applications in State Courts, video series, Thomson Reuters Institute/National Center for State Courts (2024), https://vimeo.com/showcase/11701825.
[21] Preliminary Draft of Proposed Amendments to the Federal Rules (Aug. 15, 2025), https://www.uscourts.gov/sites/default/files/document/preliminary-draft-of-proposed-amendments-to-federal-rules_august2025.pdf.
[22] Preliminary Draft, supra note 21.
[23] Rebecca A. Delfino, Deepfakes on Trial 2.0: A Revised Proposal for a New Federal Rule of Evidence to Mitigate Deepfake Deceptions in Court (Feb. 15, 2025), Loyola Law Sch., Los Angeles Legal Studies Research Paper No. 2025-10, DOI: 10.13140/RG.2.2.12632.61447, https://ssrn.com/abstract=5188767.
[24] Id. at 3.
[25] Preliminary Questions, Fed. R. Evid. 104.