Criminal justice reform consultant
Hon. Brian MacKenzie (Ret.)
Tuesday, 21 October 2025 / Published in Artificial Intelligence, Law

Part Four: AI in the Courts: Judicial Decision-Making: Transparency, Accountability, and the Judicial Role


Judges are increasingly working with artificial intelligence (AI) systems to analyze legal documents, predict case outcomes, and recommend sentences based on prior data. These technologies promise to improve efficiency in judicial decision-making, helping overburdened courts manage caseloads and identify relevant precedents. In theory, AI can promote fairness by reducing inconsistency and human error. Yet as reliance on these systems grows, so does the risk that judges relinquish their central decision-making role. Judicial legitimacy depends not only on fair outcomes, but also on the public’s confidence that a judge’s decisions are reasoned, ethical, and explainable.

AI and Legal Research: Accuracy and Overreliance

AI-driven legal research tools, particularly those built on large language models (LLMs)[1], are reshaping how judges and lawyers locate authority, draft opinions, and decide cases. These systems can summarize vast databases of case law in seconds, identify relevant rulings, and even generate draft memoranda. For overworked courts, such efficiency is tempting. Yet speed and convenience can mask significant dangers.

Unlike databases such as Westlaw or LexisNexis, which rely on verified sources, AI models generate text probabilistically, predicting what comes next based on patterns in their training data. As a result, they can produce “hallucinations”[2]: fabricating cases, misquoting precedents, or distorting holdings. Several recent incidents have seen attorneys sanctioned for submitting briefs citing non-existent cases generated by AI.[3] In one widely publicized case, Mata v. Avianca, Inc.,[4] two attorneys were sanctioned after submitting a legal brief containing six fictitious case citations generated by AI. The judge held that Mata’s lawyers had acted with “subjective bad faith” sufficient for sanctions of $5,000 under Rule 11 of the Federal Rules of Civil Procedure.[5]
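To see why such fabrications occur, consider a simplified sketch of the underlying mechanism. The token names and probabilities below are invented purely for illustration and do not come from any real model or vendor product; the point is that a language model selects each next word by weighted chance, rewarding fluency rather than checking a verified database of real cases.

```python
import random

# Hypothetical next-token probabilities after a prompt such as
# "The controlling case on this issue is ..." -- numbers invented for illustration.
next_token_probs = {
    "Smith": 0.22,
    "Johnson": 0.18,
    "Mata": 0.15,
    "Brown": 0.12,
    "Avianca": 0.08,
    "<other tokens>": 0.25,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability.
    Nothing here verifies that the resulting citation refers to a real case."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Each run can yield a different, plausible-sounding party name; the model
# optimizes for what looks likely, not for what actually exists.
print(sample_next_token(next_token_probs))
```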

It is not just lawyers who have been embarrassed. In 2024, a New Jersey federal judge withdrew an opinion in a shareholder lawsuit against the pharmaceutical company CorMedix after it was discovered that the decision included AI-generated citations that referenced non-existent cases.[6] The citations were reportedly inserted by a staff member using an AI tool to help draft the opinion. Once the inaccuracies were brought to the judge’s attention, he promptly issued a corrected version. While the judge was not formally sanctioned, the incident underscored growing concerns about the use of AI in judicial decision-making. Any judge who relies uncritically on AI risks incorporating false or misleading authority into the judicial record.

Equally concerning is the potential erosion of deep legal reasoning. Effective legal analysis depends on careful reading, analogical thinking, and the weighing of competing principles; these are skills cultivated through experience and reflection. When AI systems summarize or paraphrase complex precedent, they may omit critical nuance. Over time, excessive reliance on AI in judicial decision-making could narrow judicial understanding of evolving doctrine, reducing legal reasoning to pattern recognition rather than interpretation.

Judges and lawyers must therefore treat AI-generated research as a starting point, not an endpoint. Verification, cross-checking, and independent analysis remain essential. Courts should also establish ethical guidelines governing the use of generative AI in legal research, ensuring transparency about when and how such tools are employed.


Transparency: Opening the Black Box

For centuries, the legitimacy of judicial decision-making has rested on the principle that justice must be seen to be done. Parties and the public alike must be able to understand the reasoning behind a judgment. Yet many AI systems operate as “black boxes,”[7] their internal logic hidden behind proprietary code or complex statistical models. Even developers often cannot fully explain how a machine-learning model arrives at a particular result.

When courts rely on such systems, transparency suffers. Defendants may not know what factors contributed to their risk score. Lawyers cannot effectively challenge an algorithm they cannot inspect. And judges may be left trusting an outcome they cannot independently verify. This opacity undermines due process and erodes the adversarial system’s commitment to testing evidence through scrutiny and cross-examination. Without transparency, the appearance of fairness is lost, no matter how efficient the technology.

Accountability: Who Is Responsible for the Algorithm’s Errors?

Accountability is the cornerstone of judicial ethics. Judges are sworn to uphold the law, explain their reasoning, and take responsibility for their rulings. When algorithms influence those rulings, responsibility becomes diffuse. Who bears the blame when an AI system produces a biased or inaccurate outcome: the judge, the software vendor, or the data scientist who designed the model?

This diffusion of responsibility creates both ethical and constitutional concerns. Developers and vendors are not subject to judicial canons or disciplinary oversight. Yet their tools may shape decisions that profoundly affect liberty and rights. Courts must therefore insist on clear lines of accountability for any AI system used in judicial processes. Judges cannot delegate their constitutional duty of decision-making to machines. AI may assist, but it must never decide.


Preserving the Human Element

At the center of every judicial decision lies judicial judgment, a blend of experience, empathy, and moral reasoning that no algorithm can replicate. Sentencing, bail, and custody decisions often require consideration of context, compassion, and community safety, none of which can be reduced to data points. AI systems process patterns; judges understand people.

A machine cannot perceive remorse in a defendant’s voice, nor can it appreciate the nuances of rehabilitation, deterrence, or mercy. The judicial oath requires more than logical consistency; it demands moral discernment. As AI grows more capable, judges must guard against “automation bias,”[8] the tendency to accept machine-generated results without critical evaluation. Above all, a judge must never abdicate the act of judgment itself. Technology may inform, but only a human being, bound by law, ethics, and conscience, can render a judicial decision.

Conclusion

AI is reshaping how courts think, decide, and deliver justice. It can enhance efficiency and insight, but it can also obscure reasoning, diffuse responsibility, and threaten the uniquely human qualities that give judicial decisions legitimacy. The path forward demands balance: embracing technological progress while ensuring that conscience, context, and accountability remain human. AI may aid justice, but it must never replace a judge’s judgment.

Other Articles in this series

Introduction: Artificial Intelligence and the Courts: A Blog Series from Justice Speakers Institute
Part 1: AI in the Courtroom: Opportunities and Risks
Part 2: AI in the Courts: Ethical Challenges
Part 3: AI on Trial – Admissibility of AI-Generated Evidence
Part 4: Judicial Decision-Making: Transparency, Accountability, and the Judicial Role
Part 5: Courts of the Future – Innovation, Access, and Global Trends
Part 6: Judging the Machine – Lessons, Guardrails, and the Path Forward

Citations

[1] A large language model (LLM) is an artificial intelligence system trained on vast amounts of text data to understand, generate, and predict human-like language.

[2] AI hallucinations are instances where an artificial intelligence system generates information or responses that appear plausible but are factually false or unsupported by its data.

[3] Several recent incidents have seen attorneys sanctioned for submitting briefs citing non-existent cases generated by AI. See, e.g., Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023); Park v. Kim, No. 22-cv-1543 (E.D.N.Y. Nov. 27, 2023); United States v. Cohen, No. 20-CR-108 (S.D.N.Y. July 17, 2024).

[4] Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023); https://www.casemine.com/judgement/us/6499235932898b72d5cfb1aa

[5] Id.

[6] Mia Sato, Judge Withdraws Opinion After AI-Generated Citations Appear in Court Ruling, The Verge (Sept. 27, 2024), https://www.theverge.com/news/713653/judge-withdraws-cormedix-case-ai-citation-errors.

[7] A black box is a system, device, or process whose internal workings are hidden or unknown, but whose inputs and outputs can be observed and analyzed.

[8] Automation bias is the human tendency to place excessive trust in automated systems, often accepting their outputs without sufficient critical evaluation.


INTERESTED IN AI AND THE COURTS?

Artificial Intelligence is transforming justice. But it also raises complex questions of ethics, fairness, and accountability. At the Justice Speakers Institute (JSI), we provide training, consulting, and expert presentations to help courts, policymakers, and legal professionals navigate these challenges responsibly.

Contact us today to learn how JSI can support your organization in understanding and implementing AI in the courts.
