Artificial intelligence is no longer a distant possibility for treatment courts. It is already influencing how participants seek help, how probation interacts with clients, how digital evidence is reviewed, and how courts assess risk and supervision. Treatment courts remain the justice system’s most human-centered institutions, created to engage, support, and rehabilitate people whose underlying needs contribute to criminal behavior. That mission does not change because AI has arrived. Courts must establish clear boundaries now, before the technology sets them on its own.
One concern captures this moment: participants are turning to AI chatbots as a substitute for therapy. Courts are beginning to see individuals who prefer AI because it feels nonjudgmental, always available, and less intimidating than a human clinician. Some courts are uncertain how to respond.
They should not be uncertain.
Participants with a history of suicidal ideation, self-harm, acute trauma, or other clinical instability cannot rely on AI in place of real treatment. Every credible clinical, medical, and judicial ethics body that has reviewed this issue reaches the same conclusion. AI may supplement support, but it cannot conduct risk assessments, deliver therapy, manage crises, or assume responsibility for clinical care. It cannot detect escalating distress, intervene in real time, coordinate safety plans, or replace the judgment and accountability of a licensed therapist.
Treatment courts are built on evidence-based practice. The evidence is clear. AI is not therapy.

Why This Matters for Treatment Courts
Treatment court participants are often clinically fragile. Many face co-occurring disorders, trauma histories, unstable medication regimens, or abrupt emotional shifts caused by stress or withdrawal. These courts depend on transparent communication, strong therapeutic alliances, and reliable human oversight. None of these elements exist when participants rely on AI systems that function as sophisticated text prediction tools.
This is where the danger lies. AI sounds therapeutic. It mimics empathy. It produces reassuring, counselor-like responses. But it does not understand risk, nuance, or context, and it has no duty of care. It may offer comforting statements while missing signs of crisis. It may unintentionally give advice that undermines treatment, supports avoidance, or normalizes harmful behavior.
Courts have already seen what happens when new technologies enter justice environments without clear rules. As the Justice Speakers Institute’s AI series shows, AI tools such as risk assessments and video analysis systems can be useful when properly governed but dangerous when unregulated. The same principle applies here. When courts lack policies, technology fills the vacuum, often at the expense of safety, fairness, and clinical integrity.
AI as a Supplemental Tool, Not a Substitute
Some participants will continue to use AI because it feels supportive or helps them process emotions between sessions. Courts do not need to prohibit that use entirely. AI can play a limited, supplemental role, similar to journaling apps, wellness trackers, or psychoeducation tools, as long as that use is discussed with the participant’s therapist.
The core treatment must always be delivered by a licensed human clinician. That clinician must evaluate risk, track progress, assess suicidality, adjust treatment plans, and maintain responsibility for safety. Clinicians should also inform the treatment court team, in an appropriate manner, about a participant’s reliance on AI tools. Courts must be explicit. No AI system can assume the therapeutic role.
Treatment courts that have already addressed this issue classify AI as strictly secondary. Participants may use it, but never in place of therapy. Individuals with any history of suicidal ideation, self-harm, or severe mental illness should be strongly discouraged from using AI for emotional support. This is not resistance to technology. It is protection of the participant’s life.

What Treatment Courts Need to Do Now
If your treatment court does not have a written policy addressing participant use of AI, now is the time to develop one. A strong policy should include at least four components.
1. Human-Provided Treatment Is Mandatory
AI tools cannot replace individual therapy, group therapy, trauma counseling, medication management, or crisis intervention.
2. Enhanced Protections for High-Risk Participants
Individuals with histories of suicidal ideation, suicide attempts, acute psychiatric symptoms, or active self-harm behaviors should not use AI for emotional support.
3. Transparent Communication With Participants
Courts should explain clearly, both verbally and in writing, that AI is not a clinician, cannot assess safety, cannot intervene, and cannot provide treatment. Participants must understand these limits.
4. Integration Into Supervision and Treatment Plans
Any AI use should be discussed during staffing, documented in treatment plans, and reviewed by clinicians. Increased reliance on AI should be treated as clinically relevant information, not simply as a personal preference.
AI Will Shape the Future of Treatment Courts, but It Must Not Replace Their Core
Treatment courts succeeded because they rejected the assembly-line model of justice and built interventions grounded in behavioral science, compassion, and accountability. AI can support that mission by helping staff flag early warning signs, streamline administrative tasks, or expand access to educational materials. But that support is possible only when courts maintain strict oversight.
AI cannot build trust. It cannot provide empathy. It cannot treat trauma, addiction, or severe mental illness. That work requires people.
The strength of treatment courts has always been personal connection. Judges speak directly to participants. Teams coordinate care. Clinicians guide change. Participants learn that accountability and support can exist together. AI may enhance some parts of that process, but it can never replace it.
This is the moment for treatment courts to adopt clear, written policies that protect participants, preserve clinical standards, and ensure that technology serves the court and not the other way around.
Other Articles On AI in the Courts
Introduction: Artificial Intelligence and the Courts: A Blog Series from Justice Speakers Institute
Part 1: AI in the Courtroom: Opportunities and Risks
Part 2: AI in the Courts: Ethical Challenges
Part 3: AI on Trial – Admissibility of AI-Generated Evidence
Part 4: Judicial Decision-Making: Transparency, Accountability, and the Judicial Role
Part 5: Courts of the Future – Innovation, Access, and Global Trends
Part 6: Judging the Machine – Lessons, Guardrails, and the Path Forward