Hon. Brian MacKenzie (Ret.)
Tuesday, 16 December 2025 / Published in Artificial Intelligence, Law

AI Tort Liability: Does Negligence Law Still Apply?


As artificial intelligence (AI) systems become more autonomous, more deeply integrated into everyday activities, and increasingly capable of initiating actions without direct human supervision, a question has emerged: Do we need an entirely new form of liability, such as a distinct AI tort regime, to govern harms caused by agentic[1] decision-makers? A review of negligence law suggests we may not. The principles that U.S. courts have applied for more than a century to evaluate dangerous conduct, foreseeability, risk creation, and the allocation of responsibility are fully capable of addressing AI-related harms.

Negligence law centers on a single inquiry: whether the actor owed a duty of care, breached that duty, and caused a foreseeable harm by failing to act as a reasonable person would under similar circumstances. Although this framework emerged from cases involving human behavior and tangible products, it appears to apply equally to negligence claims arising from AI.

Why AI Tort Liability Does Not Require a New Legal Framework

Every actor in the AI lifecycle (developers, deployers[2], and end users) may act in ways that increase risk. Developers design systems they know will interact with the world, sometimes in unpredictable ways. Deployers integrate those systems into sensitive environments such as healthcare, finance, justice, transportation, or public safety. Users rely on outputs that may appear authoritative even when they are not reliable. Each of these roles can readily be analyzed under traditional negligence principles: What did the actor know? What should they have known? What risks were reasonably foreseeable? What precautions could a prudent actor have taken?

AI technology does not relieve these actors of legal responsibility. If anything, it heightens the duty to anticipate how the system may behave when operating with partial independence.

Because developers control the architecture, training, and safety mechanisms of AI systems, they are often the parties most capable of preventing harm before it arises. Negligence law already recognizes this logic: the person with specialized knowledge or control over a risk-producing instrument must take care to prevent foreseeable misuse or malfunction.

AI developers therefore have a responsibility to design systems with appropriate constraints, test for known failure modes, evaluate foreseeable misuse, implement guardrails against dangerous outputs, and issue clear warnings about limitations and proper use. A developer who releases an autonomous system without adequate testing or without disclosing known risks may breach the traditional duty of reasonable care. The fact that the harmful action was carried out by the AI itself does not break the causal chain; the negligence lies in releasing a system whose behavior was reasonably foreseeable, even if not perfectly predictable.


Foreseeability and Risk Creation in Autonomous AI Systems

Entities that deploy AI systems, such as private-sector companies in healthcare, finance, retail, and transportation, as well as public institutions like courts, policing agencies, social-service departments, and regulatory bodies, also assume traditional negligence duties toward those affected by the technology. 

Deployers can be negligent in several ways, including choosing an AI system that is unfit for the task, failing to train staff on its limitations, allowing the system to operate without adequate human oversight, relying on its outputs in high-stakes decisions despite known accuracy problems, or using AI in environments where a reasonable organization would anticipate harm. Each of these failures reflects a departure from the level of caution expected when integrating complex, risk-bearing technology into real-world operations.

Negligence does not require expertise in software engineering; it requires acting as a reasonably prudent organization would under similar circumstances. When deployers rely blindly on AI, ignore its limitations, or treat it as infallible, traditional negligence doctrine already provides a mechanism for accountability.

End users can also be negligent when they rely on AI in ways that exceed their competence or the system’s capabilities. If a user employs AI for a task requiring professional judgment, despite warnings or known limitations, the user may be responsible for resulting harm. This mirrors standard negligence principles: individuals must use tools appropriately and with awareness of the risks they reasonably should know.

Critically, negligence law allows fault to be shared among multiple actors. If a developer fails to warn, a deployer fails to supervise, and a user relies unreasonably on the system, each can bear a portion of responsibility; a factfinder might, for instance, apportion half the fault to the developer, a third to the deployer, and the remainder to the user, depending on the evidence. Traditional comparative-fault rules appear fully capable of allocating responsibility among developers, deployers, and users.

Some have already argued that AI’s complexity and opacity make it difficult to establish causation[3], but negligence law routinely addresses harms involving intricate causal chains, including chemical exposures, pharmaceutical side effects, mechanical failures, and industrial processes. Tort law does not require perfect explanations; it requires evidence that the harm was more likely than not caused by the defendant’s breach of duty.

If a developer created an unreasonably dangerous system, if a deployer ignored warnings, or if a user acted irresponsibly, causation can be established even if the internal mechanics of the AI decision are not fully understood. The inquiry focuses on human failures, not decoding every internal computational step.


Applying Negligence Law to AI Tort Liability Without Reinvention

The impulse to create new, AI-specific liability laws often reflects anxiety about emerging technology rather than actual gaps in legal doctrine. Tort law has long demonstrated flexibility in responding to new forms of risk; it has absorbed automobiles, pharmaceuticals, industrial machinery, toxic substances, and consumer products of every kind. Autonomous systems are not so fundamentally different that they fall outside the common-law tort framework. Rather, they introduce new factual scenarios, not new categories of legal responsibility.

Negligence principles already supply a duty to act reasonably, a standard for assessing necessary precautions, a mechanism for allocating responsibility, and a framework for evaluating causation even in complex environments. Rather than attempt to reinvent AI-specific liability principles from scratch, courts may choose to apply established doctrines while clarifying how reasonable care should operate in the context of emerging technologies.

Agentic AI introduces novel factual situations but perhaps not novel legal categories. Negligence law is fundamentally about conduct, risk, foreseeability, and responsibility, concepts that apply as readily to the deployment of autonomous systems as to any other potentially dangerous human creation. The path forward need not be a legal revolution. Rather, it may simply be the application of longstanding tort principles to new forms of risk.


[1] Agentic AI is a type of artificial intelligence that operates independently to design, execute, and optimize workflows, allowing enterprises to make decisions and get work done more effectively. AI agents can make decisions, plan, and adapt to achieve predefined goals with little human intervention, or entirely autonomously.

[2] An AI deployer is any individual or entity that uses an AI system within their professional scope, excluding personal and non-professional activities.

[3] From Optional to Obligatory: Why AI’s Statistical Superiority Doesn’t Dictate Tort Law Duties

Tagged under: AI tort liability, Artificial intelligence risk, Autonomous systems, Legal Accountability, Negligence law
