Justice Speakers Institute

David Wallace
Tuesday, 17 March 2026 / Published in Artificial Intelligence, Law

Hardwiring Justice: Artificial Intelligence in Prosecution


This article is Part 3B-2 of the Hardwiring Justice series on Artificial Intelligence and the Justice System, which examines how AI is shaping policing, prosecution, defense practice, and the courts.*

Artificial intelligence is increasingly present in prosecutors’ offices, often embedded quietly within existing software systems. Tools that summarize documents, transcribe audio, assist with research, or organize large volumes of digital evidence promise efficiency at a time when caseloads and discovery obligations continue to expand.

But for prosecutors, the relevant question is not whether AI is powerful or convenient. It is whether AI can be used without compromising constitutional obligations, ethical duties, statutory confidentiality, and public trust.

The answer is yes, but only if AI is treated as an assistive technology, governed intentionally, and constrained by existing legal and ethical rules. AI does not create new discretion. It does not reduce responsibility. And it does not alter the prosecutor’s core duty to seek justice within the bounds of the law.

One Principle Above All: Responsibility Never Transfers

Across ethical rules, court decisions, and national guidance, one principle governs the use of AI in prosecution:

When AI-assisted work is wrong, the prosecutor is accountable.

There is no ethical safe harbor in automation.
There is no delegation of judgment to software.
There is no defense that “the system recommended it.”

AI may assist prosecutorial work, but it does not carry ethical duties, constitutional obligations, or professional responsibility. Those remain human, personal, and non-delegable.[1]

Threshold Legal Constraints: When AI May Not Be Used

Before considering how AI might assist prosecution, offices must identify when AI may not be used at all. These are not best practices or aspirational guidelines; they are legal gatekeepers.

CJIS Compliance Is Mandatory

Any AI system that stores, processes, or analyzes criminal justice information must comply with Criminal Justice Information Services (CJIS) security requirements. Many publicly available or consumer AI tools do not meet these standards.[2]

If prosecutors cannot clearly establish:

  • where data is processed,
  • whether it is retained,
  • who has access to it,
  • and whether the system is CJIS-compliant,

the tool may not be used for case-related work. Security failures are not technical errors; they are failures of professional judgment.
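The four questions above function as a gate: if any one of them cannot be answered affirmatively, the tool is off the table for case-related work. A minimal sketch in Python of such a pre-use gate, with hypothetical field names (these are illustrative and are not drawn from the CJIS Security Policy itself):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical vendor-vetting record; field names are illustrative only.
@dataclass
class AIToolAssessment:
    processing_location: Optional[str]  # where case data is processed (None = unknown)
    retention_documented: bool          # whether and how data is retained
    access_documented: bool             # who has access to stored data
    cjis_compliant: bool                # compliance verified by IT and legal review

def approved_for_case_work(tool: AIToolAssessment) -> bool:
    """If any answer is unknown or negative, the tool may not be used."""
    return all([
        tool.processing_location is not None,
        tool.retention_documented,
        tool.access_documented,
        tool.cjis_compliant,
    ])
```

The point of the sketch is the default: a tool with an unknown processing location or an unverified compliance posture fails the gate automatically, rather than being waved through pending review.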

Victim Confidentiality Is Non-Negotiable

Many states impose statutory duties to protect victim information, including personal identifiers, medical or counseling records, safety planning details, and information that could reasonably lead to identification.

AI tools may not be used to process, summarize, prioritize, or analyze victim information unless the system is expressly approved for that purpose and fully compliant with all legal protections. Attempts to anonymize or hypothetically reframe victim data do not eliminate these obligations.


AI as an Assistive Tool, Not a Decision Maker

When legally permissible systems are in place, AI may assist prosecutors in limited, defined ways. Even then, its role must remain subordinate to human judgment.

Charging and Case Screening

AI tools may surface patterns, flag inconsistencies, or assist with workload management.[3] They may not:

  • determine charges,
  • recommend enhancements as defaults,
  • drive declinations without human review,
  • or replace individualized prosecutorial judgment.

Charging authority is vested by law in prosecutors, not vendors or algorithms. Any system that subtly shifts discretion away from human decision-makers raises serious ethical and constitutional concerns.

Discovery Review and Evidence Organization

AI can assist in organizing large volumes of discovery,[4] identifying duplicates, or clustering materials for review. It may not:

  • define disclosure obligations,
  • determine what is or is not Brady material,
  • filter evidence in a way that obscures exculpatory information.

Prosecutors remain responsible for ensuring that discovery is complete, fair, and constitutionally compliant. Speed is not neutral, and filtering mechanisms must never substitute for legal judgment.
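Duplicate identification is one place where the assistive/decision line is easy to hold: content hashing flags only byte-identical files and withholds nothing, leaving every disclosure decision to the reviewing attorney. A hypothetical sketch (file names and contents are invented):

```python
import hashlib
from collections import defaultdict

def flag_exact_duplicates(documents: dict) -> list:
    """Group discovery files by SHA-256 content hash. Only byte-identical
    files are flagged; nothing is filtered out -- a human reviews each group."""
    groups = defaultdict(list)
    for doc_id, content in documents.items():
        digest = hashlib.sha256(content).hexdigest()
        groups[digest].append(doc_id)
    # Return only groups with more than one member (actual duplicates).
    return [ids for ids in groups.values() if len(ids) > 1]

# Hypothetical file set: a re-scanned page and one distinct report.
docs = {
    "scan_001.pdf": b"police report page 1",
    "scan_002.pdf": b"police report page 1",  # byte-identical re-scan
    "lab_report.pdf": b"toxicology results",
}
assert flag_exact_duplicates(docs) == [["scan_001.pdf", "scan_002.pdf"]]
```

Note what the sketch does not do: it makes no relevance or materiality judgment, so it cannot obscure exculpatory material the way a semantic filter could.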

Evidence Triage and Case Preparation

AI may help surface information or manage complexity, but prosecutors must remain alert to the risk that algorithmic prioritization can reinforce historical bias, standardize narratives, or crowd out alternative interpretations of the evidence.

AI identifies patterns. It does not assess credibility, context, or justice.

Transparency, Candor, and Due Process

AI use in prosecution cannot be invisible. If AI meaningfully influences:

  • evidence review,
  • case screening,
  • charging decisions,
  • or trial preparation,

prosecutors must be prepared to explain that use to supervisors, courts, and, where required, the defense. Candor to the tribunal includes candor about process, not just outcomes.

As courts increasingly scrutinize algorithmic decision-making, prosecutors should assume that undisclosed AI influence will draw heightened attention, not deference.


Supervision, Training, and Institutional Responsibility

AI use is not solely an individual attorney issue. It implicates supervisory responsibility at every level.

Office leadership has an affirmative duty to:

  • adopt written AI policies,
  • define permitted and prohibited uses,
  • train attorneys and staff on limitations and risks,
  • monitor compliance,
  • and update guidance as technology evolves.

When AI is adopted without structure, errors become institutional rather than individual. Courts and disciplinary authorities will look upstream to leadership when failures are foreseeable and preventable.

AI does not reduce supervisory responsibility. It intensifies it.

Documentation and Governance

Responsible AI use requires governance, not experimentation. Offices should:

  • document when AI is used in casework,
  • preserve auditability,
  • ensure IT and legal review of vendor systems,
  • and periodically reassess tools for compliance, bias, and reliability.

Documentation does not authorize impermissible use, but it protects prosecutors when use is appropriate and lawful.
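One low-overhead way to document AI use in casework is an append-only log with one record per AI-assisted task, each signed off by a named attorney. A sketch with hypothetical field names, assuming a simple JSON Lines file as the audit store:

```python
import json
from datetime import datetime, timezone

def log_ai_use(path, case_id, tool, task, reviewer):
    """Append one audit record per AI-assisted task. The log documents use;
    it does not authorize it -- a named attorney signs off on every entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "tool": tool,
        "task": task,
        "reviewing_attorney": reviewer,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # JSON Lines: one record per line
```

Append-only records of this kind preserve auditability: if a tool's output is later challenged, the office can show what was used, for what task, and who reviewed it.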

Public Trust and the Legitimacy of Prosecution

Prosecutors do not merely process cases; they exercise public authority. When decisions appear automated, opaque, or outsourced to systems the public does not understand, confidence in the justice system erodes.

AI can support fairness and consistency, but only if its role is visible, constrained, and governed deliberately. Prosecutorial discretion does not disappear when mediated by technology; it simply moves. If it moves into systems beyond scrutiny, accountability is lost.

Conclusion: Control, Don’t Drift

AI will continue to enter prosecutors’ offices, often through vendor updates and embedded features rather than deliberate adoption. Offices that fail to act intentionally do not avoid risk; they inherit it silently.

Used carefully, AI can assist prosecutors in managing complexity and workload. Used casually, it can undermine ethics, legality, and trust.

The responsible path forward is neither rejection nor blind adoption. It is disciplined use grounded in existing ethical rules, legal constraints, and prosecutorial values, anchored by one immutable principle:

AI does not make prosecutorial decisions. It does not bear ethical responsibility. When AI-assisted work is wrong, the prosecutor remains accountable.


* This article was edited with the assistance of AI in the form of a large language model. It was used solely for grammar, editing, and footnote support. All substantive content and conclusions reflect human authorship.

[1] For a more in-depth discussion of ethical concerns for prosecutors, see Hardwiring Justice, Part 3B-1.

[2] See Integrating AI: Guidance and Policies for Prosecutors, National Best Practices Committee, January 2025.

[3] ABC 12 News, Genesee County Prosecutor Using AI to Manage Caseload, May 12, 2025.

[4] Teale, C., Prosecutors Turn to AI for Evidence Management and Analysis, Route Fifty, January 6, 2025.
