Justice Speakers Institute

Criminal justice reform consultant
Hon. Brian MacKenzie (Ret.)
Tuesday, 30 September 2025 / Published in Artificial Intelligence, Judicial Ethics

Part Two: AI in the Courts: Ethical Challenges

In part two of our series on AI and the courts, we explore how AI’s growing role in the criminal justice system raises ethical challenges for judges and lawyers.

Artificial intelligence (AI) is entering the judicial system, offering tools that promise lower costs, greater efficiency, and improved access to justice. Courts are already using AI, whether judges and lawyers know it or not, for tasks ranging from docket management to risk assessments in sentencing and bail. This creates opportunities for innovation, but it also presents profound ethical challenges. At the heart of these challenges are questions of bias, transparency, professional responsibility, and public trust: issues that are fundamental to judicial legitimacy.

Bias and Fairness: The Hidden Danger in the Data

One of the greatest perils with AI is its potential to replicate and even amplify existing inequities in the justice system. AI systems draw from historical data, and if that data reflects patterns of racial or socioeconomic bias, such as disproportionate arrest or sentencing practices, the algorithm will perpetuate those disparities.

Risk assessment tools, for example, are now being used to assist in bail and sentencing decisions. Many of these tools appear neutral and objective; however, studies have shown that some disproportionately identify defendants from marginalized communities as “high risk.” This is not because the algorithm is malicious, but because it has been trained on data shaped by decades of biased practices.¹

For judges and lawyers, the ethical dilemma is clear: which AI tools carry built-in bias, and which can be trusted? To answer that question, they must scrutinize how these systems are designed, the data they use, and how their outputs are validated. Doing so requires looking inside the systems themselves.

Transparency and the Challenge of the “Black Box”

This in turn raises another ethical issue: transparency. Many AI systems operate as “black boxes,” whose internal reasoning is opaque even to their developers. When courts rely on such systems, it becomes difficult, if not impossible, for judges, attorneys, or litigants to understand the basis for the recommendations provided.

This lack of transparency threatens core values of accountability and due process.² If a defendant cannot challenge the basis of an algorithmic risk assessment, how can the right to a fair hearing be protected? This is not merely a technical concern; it is a constitutional one.

To address this, some scholars and judicial bodies advocate for rules requiring disclosure of how AI systems are trained, what data sets are used, and what safeguards are in place to mitigate bias.³  Judges must also consider whether proprietary claims by AI vendors can ever outweigh a litigant’s right to challenge the evidence against them. If courts allow vendor secrecy to shield AI systems from scrutiny, they risk undermining both due process and the integrity of the justice system itself.

Ethical Duties of Lawyers and Judges

Judges and lawyers alike bear ethical responsibilities when using AI. As one recent ethics opinion emphasized, judicial officers have a duty to maintain technological competence.⁴ Lawyers must understand both the capabilities and the limitations of AI tools to satisfy their duty of competence under the Model Rules of Professional Conduct.²

This responsibility includes verifying AI outputs, disclosing when AI has been used in legal filings, and ensuring that client confidentiality is not compromised when information is processed through third-party systems. The recent spate of disciplinary actions against attorneys who submitted briefs containing fictitious AI-generated case citations illustrates the stakes. Competence now requires not only knowing the law but also knowing the tools used to practice it. 

Judges, in particular, must not only understand the capabilities and limitations of AI tools but also ensure these tools never undermine their role as decision makers. The ethical duty of judges is clear: they alone bear the responsibility to decide cases, guided by the law and their oath of office. Independence, impartiality, and accountability are not delegable to machines.

While AI can assist by analyzing data, identifying patterns, or even suggesting outcomes, it cannot replace the uniquely human responsibility of weighing evidence, applying legal principles, and rendering judgments grounded in fairness and empathy. As AI becomes more integrated into court processes, judges must ensure that these tools remain aids to justice, not substitutes for it. To do otherwise would risk eroding public confidence in the courts and weakening the very legitimacy of judicial decision-making.

Public Trust and the Legitimacy of Judicial Decisions

The legitimacy of the justice system rests on public confidence that courts are fair, impartial, and transparent. The use of AI may complicate that perception. If litigants believe that algorithms, rather than judges, are deciding their cases, trust in the system may erode.

This risk is heightened when courts adopt AI without clear communication. Openness with stakeholders, litigants, attorneys, and the public, is essential to building confidence in AI’s role.² Just as importantly, courts must ensure that AI is viewed as a tool to assist judges, not to replace them. Judicial empathy, discretion, and human judgment cannot be automated, and their absence would fundamentally alter the nature of justice.

Keeping Humans in the Loop

The promise of AI lies in its ability to assist, not supplant, human decision-makers. Ethical use of AI requires maintaining a “human in the loop.” Judges must retain ultimate responsibility for decisions, using AI outputs as one factor among many. This safeguard recognizes the limits of technology while affirming the irreplaceable role of human judgment in applying law to fact.

Empathy, the ability to consider context, appreciate nuance, and weigh the human impact of a decision, cannot be programmed into an algorithm. While AI may help courts process information more efficiently, it lacks the moral and ethical reasoning that underpins judicial decision-making. Ensuring that human oversight remains central is both an ethical necessity and a safeguard for due process.

Conclusion: Balancing Promise with Prudence

AI offers the potential to improve efficiency, reduce costs, and expand access to justice. Yet the perils of bias, opacity, and diminished public trust are equally real. The ethical path forward requires vigilance: demanding transparency, ensuring human oversight, and requiring judges and lawyers to maintain the competence necessary to use these tools responsibly.

The promise of AI is undeniable; but its role in the courts must be carefully circumscribed to preserve the fairness, accountability, and legitimacy of the justice system. The guiding principle is clear: technology should serve justice, not the other way around.²

Next in the Series: Part 3: AI on Trial – Admissibility of AI-Generated Evidence

Other Articles in this series

Introduction: Artificial Intelligence and the Courts: A Blog Series from Justice Speakers Institute
Part 1: AI in the Courtroom: Opportunities and Risks
Part 2: AI in the Courts: Ethical Challenges
Part 3: AI on Trial – Admissibility of AI-Generated Evidence
Part 4: Judicial Decision-Making: Transparency, Accountability, and the Judicial Role
Part 5: Courts of the Future – Innovation, Access, and Global Trends
Part 6: Judging the Machine – Lessons, Guardrails, and the Path Forward

Citations

  1. Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2218 (2019).
  2. National Center for State Courts, AI and the Courts: Judicial and Legal Ethics Issues (2023), https://www.ncsc.org/resources-courts/ai-courts-judicial-and-legal-ethics-issues.
  3. Cary Coglianese & David Lehr, Transparency and Algorithmic Governance, 71 Admin. L. Rev. 1 (2019).
  4. State Bar of Michigan, Judicial Ethics Opinion JI-155 (Oct. 2023), https://www.michbar.org/opinions/ethics/numbered_opinions/JI-155.

INTERESTED IN AI AND THE COURTS?

Artificial intelligence is transforming justice, but it also raises complex questions of ethics, fairness, and accountability. At the Justice Speakers Institute (JSI), we provide training, consulting, and expert presentations to help courts, policymakers, and legal professionals navigate these challenges responsibly.

Contact us today to learn how JSI can support your organization in understanding and implementing AI in the courts.
