Justice Speakers Institute

David Wallace, Traffic Safety Expert
Tuesday, 17 March 2026 / Published in Artificial Intelligence, Law

Ethical Use of Artificial Intelligence by Prosecutors: One Rule Above All


This article is part of the Hardwiring Justice series on Artificial Intelligence and the Justice System. It is Part 3B-1, examining how AI is shaping policing, prosecution, defense practice, and the courts.*

Artificial intelligence is no longer optional in modern prosecutors’ offices. It is already embedded in discovery platforms, transcription tools, research products, drafting workflows, and administrative systems. National guidance from the American Bar Association and other legal associations recognizes this reality: AI may be used, but it must remain within the boundaries of existing ethical obligations.

For prosecutors, those obligations are not abstract; they are constitutional, professional, and public-facing. And across all guidance, one ethical rule governs everything: 

If the work product is flawed, the responsibility belongs to the prosecutor—not the AI. There is no ethical safe harbor in automation.

AI Ethics for Prosecutors’ Offices: The Current Reality

Prosecutors are using AI today to manage the scale and complexity of discovery and caseloads:

  • Organizing and reviewing discovery
  • Transcribing and translating audio and video evidence
  • Drafting routine motions and notices
  • Conducting preliminary legal research
  • Managing case flow and administrative backlogs

The ABA, in Formal Opinion 512, is clear: generative AI produces statistically plausible outputs, not legal reasoning. These systems can fabricate citations, distort legal holdings, and reproduce bias, often with confidence.[1]

Courts are now seeing a growing number of cases in which attorneys have submitted filings containing fictitious cases, inaccurate quotations, or distorted authorities generated by AI systems.[2] In nearly every instance, the problem has not been the technology itself, but the lawyer’s failure to read, verify, and cross-check what was submitted in their name.

When those errors reach a courtroom, they are not “AI mistakes.” They are the prosecutor’s mistakes. 


The Ethical Throughline: Accountability Never Transfers

First and foremost, prosecutors have an ethical duty “to seek justice within the bounds of the law, not merely to convict.”[3]  The use of artificial intelligence does not alter that obligation. Technology may change how legal work is performed, but it does not change what prosecutors are responsible for.

Existing ethical rules already provide substantial guidance for the use of AI in legal practice. They do not need to be rewritten for new technology. They need to be understood and applied. Several core duties are especially relevant: 

  • Competence: Prosecutors must understand the tools they use and independently verify their outputs. (Model Rule 1.1)[4]
  • Candor to the Tribunal: False or misleading submissions, intentional or not, violate ethical duties. (Model Rules 3.3 and 8.4(c))[5]
  • Confidentiality: Sensitive criminal justice information must be protected from unauthorized disclosure.[6]
  • Supervisory Responsibility: Office leadership must set policies, train staff, and enforce compliance.[7]
  • Misconduct: Inaccurate or deceptive filings remain misconduct even when generated by software.

Artificial intelligence does not dilute these duties. It concentrates them. Speed increases risk. Scale amplifies error. The ethical burden remains exactly where it has always been: on the prosecutor.

For that reason, prosecutors must independently review and verify any AI-generated material before relying on it. They may not delegate professional judgment to a machine. Tasks that require legal analysis, credibility assessments, charging decisions, or strategic discretion must remain firmly within human control.

AI can assist. It cannot replace responsibility.

CJIS Compliance Is Not Optional

One critical issue emphasized in prosecutorial guidance,[8] but often overlooked in general ethics discussions, is Criminal Justice Information Services (CJIS) compliance.

Any AI system that stores, processes, or analyzes criminal justice information must meet strict security requirements, including:

  • Role-based access controls 
  • Multi-factor authentication
  • Audit logging
  • Encryption at rest and in transit

Most publicly available AI tools used casually by lawyers do not meet these standards. Uploading police reports, discovery materials, criminal histories, or witness information into non-compliant systems risks unauthorized disclosure and ethical violations.

From an ethics standpoint, security failures are not technical glitches. They are failures of professional judgment.
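The access-control and audit requirements listed above can be sketched in a few lines of code. This is an illustrative sketch only, not a CJIS-compliant implementation: the `AuditedAIGateway` class, the role names, and the query flow are hypothetical, and real compliance requires vetted, audited systems. The sketch simply shows the kind of gatekeeping and logging such a system must perform before any criminal justice information reaches an AI tool.

```python
import hashlib
import time

# Illustrative sketch only; nothing here is itself CJIS-compliant.
# Role names and the gateway class are hypothetical.

ALLOWED_ROLES = {"prosecutor", "paralegal"}  # role-based access control


class AuditedAIGateway:
    def __init__(self):
        # In practice the audit log must be append-only and tamper-evident.
        self.audit_log = []

    def query(self, user, role, prompt):
        """Submit a prompt to an AI service, enforcing role checks and logging."""
        if role not in ALLOWED_ROLES:
            self._log(user, role, prompt, allowed=False)
            raise PermissionError(f"role '{role}' may not submit CJI to AI tools")
        self._log(user, role, prompt, allowed=True)
        # ... forward prompt over an encrypted channel to a vetted,
        # CJIS-compliant AI service (not implemented here) ...
        return "draft output (must be independently verified)"

    def _log(self, user, role, prompt, allowed):
        # Audit logging: record who did what and when. The prompt is stored
        # only as a hash so the log does not duplicate sensitive content.
        self.audit_log.append({
            "ts": time.time(),
            "user": user,
            "role": role,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "allowed": allowed,
        })
```

Note that even a denied request is logged: an audit trail that records only successes cannot answer the question courts and disciplinary bodies will actually ask, which is who attempted to move what data where.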

Six Practical Guardrails for Prosecutors Using AI

  1. Use AI Only as an Assistive Tool
    AI may organize, summarize, and draft—but it must never determine charging decisions, plea positions, or credibility assessments.
  2. Verify Everything Before It Leaves the Office
    Every AI-assisted draft, summary, or research product must be independently reviewed. If an error appears in court, responsibility follows the signature, not the software.
  3. Adopt a Written Office Policy on AI Use
    Supervisory prosecutors have an affirmative ethical duty to define permitted uses, prohibited uses, review requirements, and documentation standards.
  4. Restrict AI Access to Criminal Justice Information
    Only secure, vetted systems should handle sensitive data. General-purpose AI tools should be limited to hypothetical, anonymized, or administrative tasks.
  5. Document AI Use in Casework
    Transparency protects prosecutors. Documenting AI assistance helps address discovery questions, court inquiries, and ethical scrutiny.
  6. Train Continuously and Assume AI Tools Will Change
    Competence is not static. Prosecutors and staff must be trained on AI limitations, bias risks, and security obligations, and policies must evolve as tools evolve.

Is This in a Prosecutor’s Best Interest?

Yes. Ethically, legally, and institutionally, these guardrails are in a prosecutor’s best interest. Following them aligns with national ethics guidance and prosecutorial best practices. It reduces exposure to discovery disputes, evidentiary challenges, and disciplinary complaints. Most importantly, it protects the legitimacy of prosecutorial decision-making in an era of heightened scrutiny.

AI will continue to enter prosecutors’ offices, often invisibly, through vendor systems and software updates. Offices that adopt AI without structure do not avoid risk; they inherit it silently.

AI is already here. The ethical question is whether prosecutors will control it deliberately or inherit its risks by default. AI does not make prosecutorial decisions, and it does not bear any ethical responsibility. That responsibility stays with us, and our use of AI must always reflect our obligations to competence, confidentiality, and justice. The responsible path forward is deliberate, disciplined use grounded in one immutable rule:

When AI-assisted work is wrong, the prosecutor is accountable.

That principle is not a limitation. It is the foundation of ethical prosecution in the age of artificial intelligence.


* This article was edited with the assistance of AI in the form of a large language model. It was used solely for grammar, editing, and footnote support. All substantive content and conclusions reflect human authorship.

[1] ABA Formal Opinion 512, Generative Artificial Intelligence Tools, American Bar Association, July 29, 2024.

[2] See, e.g., Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023) (sanctioning attorneys for submitting a brief containing multiple fictitious cases generated by ChatGPT); Park v. Kim, No. 22-cv-2057 (D.N.J. 2023) (attorney referred to the court’s grievance panel for submitting fabricated AI-generated authorities); Nuvola, LLC v. Morgan Wright, No. 27-CV-HC-15-3802 (Minn. 4th Jud. Dist., Hennepin Cnty.) (attorney submitted legal briefs containing fake cases hallucinated by ChatGPT, including “Royer v. Nelson”; the attorney was referred to the Professional Responsibility Board and ordered to pay a $1,000 penalty). These cases reflect growing judicial concern that lawyers are relying on AI outputs without meaningful verification.

[3] ABA Standards for Criminal Justice: Prosecution Function std. 3-1.2(b) (Am. Bar Ass’n 4th ed. 2017).

[4] ABA Model Rules of Professional Conduct. The ABA Model Rules are used here as a national framework; however, prosecutors must comply with the professional conduct rules in effect in their own jurisdictions.

[5] Id.

[6] Integrating AI: Guidance and Policies for Prosecutors, National Best Practices Committee, January 2025.

[7] Id.

[8] Id.
