Can AI Be Legally Liable for False Statements, or Is It Always the Humans’ Problem?
When an AI model invents a court case that never existed, fabricates a quote from a real person, or gives dangerously wrong medical advice — someone is responsible. But who? The machine that generated the content cannot be sued. The company that built it says "we disclaim all warranties." The business that deployed it points at the end user. This post cuts through the legal fog and explains exactly where liability sits today — and where it is heading.
1. What Does "Liability" Actually Mean — and Can a Machine Hold It? Legal personhood explained: why AI cannot be sued, the three human actors who can, and the three types of liability that already apply to AI systems today.
2. AI Hallucinations and False Statements — The Scale of the Problem. What hallucinations are technically, how often they occur, the six categories of false AI output that carry legal risk, and why "the model said it" is no longer a defence.
3. Who Is Currently Liable — Developer, Deployer, or User? Product liability, negligence, and contract law frameworks applied to AI. The deployer's central exposure, why "as is" disclaimers have limits, and how liability shifts between actors.
4. Real Cases and Legal Precedents — AI False Statements in Courts. Mata v. Avianca, AI defamation lawsuits, fabricated citations, and what courts in the US, EU, and UK are actually deciding about AI-generated false content.
5. How the EU AI Act and Emerging Liability Laws Are Reshaping Responsibility. The EU AI Liability Directive, revised Product Liability Directive, EU AI Act deployer obligations, and the UK and US approaches to the same problem.
6. Protecting Your Business — an AI Liability Risk Management Framework. Eight practical steps to reduce exposure from AI-generated false statements, from contractual safeguards and human review to insurance and incident response.
Before asking whether AI can be legally liable for false statements, we need to understand what "liability" means in legal terms, and why the answer, for AI itself, is currently no. Legal liability is the obligation to answer in law for harm caused to another. It requires a legal subject: an entity capable of holding rights and obligations. Under the law of every major jurisdiction today, AI systems have no legal personhood; they can hold no rights, own no property, and face no legal consequences for the harms they cause. The question of AI liability is therefore always a question about which human or corporate actor bears responsibility for what the AI did.
"Hallucination" is the term the AI industry uses for the phenomenon of language models producing outputs that are confidently stated, plausibly formatted, and factually wrong. The term is somewhat misleading — it implies a perceiving subject experiencing distorted reality, which no AI model does. A more accurate description is systematic confabulation: the model generates statistically coherent text that is not grounded in truth, because it was not trained to distinguish between true and false — only to produce contextually appropriate next tokens. Understanding this distinction matters legally, because it goes to the question of what safeguards were possible and what steps were reasonable.
The question "who is liable for AI-generated false statements?" does not have a single, universal answer — it depends on the facts of each case, the applicable legal framework, the relationship between the parties, and increasingly, on the specific regulatory requirements that applied to the deployment. What the law does provide is a set of analytical frameworks — product liability, negligence, defamation, contract — that courts apply to allocate responsibility across the three possible defendants: the model developer, the deploying business, and the end user. Here is how each framework applies.
AI-generated false content is no longer a hypothetical legal risk. Courts across the US, UK, and EU have already confronted lawyers sanctioned for hallucinated citations, individuals defamed by AI-fabricated stories, and businesses held accountable for AI-generated misinformation. Here is what the case law says so far — and what it signals for the future of AI liability.
1. The Non-Delegable Verification Duty. Any professional — lawyer, doctor, financial adviser — who relies on AI-generated content in a professional context retains full responsibility for accuracy. The AI's error is their error. Courts will not accept "I relied on the AI" as a defence to professional negligence or misconduct.
2. The Deployer Agency Principle. A business that deploys an AI tool to customers is responsible for that tool's statements, just as it would be for a human agent's. Moffatt v. Air Canada makes this explicit: you cannot attribute liability to the AI itself or disclaim responsibility because the output was machine-generated.
3. AI-Generated Defamation Is Actionable Defamation. Courts in the US, Australia, and UK have consistently declined to treat AI as a special category immune from defamation law. A false statement of fact about a real person — whether generated by a human or an AI — can found a defamation claim. The question is only which human actor (developer, deployer, or user) bears the liability.
4. Section 230 Does Not Clearly Shield AI Developers. Section 230 of the US Communications Decency Act protects platforms from liability for third-party content. Courts are divided on whether it applies to AI developers when the model itself generates (rather than hosts) the false content. This ambiguity significantly increases developer exposure in US litigation.
5. Good Faith Reliance Is Not a Complete Defence. Even where a professional genuinely believed the AI output was accurate, courts have found liability where the professional failed to exercise reasonable care in verification. Good faith reduces culpability in some contexts but does not eliminate liability for objectively unreasonable reliance on known-unreliable tools.
The legal landscape for AI liability is shifting from case-by-case common law into a structured regulatory framework — particularly in Europe. The EU AI Act, the proposed AI Liability Directive, and the revised Product Liability Directive together form a comprehensive regime that makes AI liability clearer, more predictable, and significantly more enforceable. Here is what businesses operating with AI need to understand about the rules that are either already in force or taking effect soon.
| Dimension | 🇺🇸 United States | 🇬🇧 United Kingdom | 🇪🇺 European Union |
|---|---|---|---|
| Legislative Framework | No federal AI liability law. State-level patchwork (CA SB 1047 vetoed; Colorado SB 205 enacted 2024). FTC and sector regulators applying existing law. | No AI-specific liability legislation. Online Safety Act 2023 addresses platforms; AI regulation framework in development (pro-innovation approach). Common law applies. | EU AI Act (in force 2024), revised Product Liability Directive (transposition by Dec 2026), AI Liability Directive (proposed). Most structured globally. |
| Primary Liability Theory for False Statements | Defamation, negligence, product liability (state law), professional malpractice. Section 230 status for AI developers unresolved. | Defamation (strict), negligence. No Section 230 equivalent — stronger claimant position for AI-generated false statements about real persons. | Negligence (AI Liability Directive causation shortcut), product liability (revised Directive), EU AI Act violations as supporting evidence. Multi-layered routes. |
| Developer Exposure | Medium — uncertain Section 230 shield; FTC oversight increasing; NIST AI RMF voluntary. | High — no platform immunity equivalent; ICO data protection enforcement active; strong defamation law. | Highest — explicit regulatory obligations; fines up to 7% global turnover; disclosure obligations; product liability expanding. |
| Deployer Exposure | Medium — common law negligence; sector-specific regulator risk (FDA, FINRA, SEC); Air Canada-type deployer agency principle being adopted. | Medium-High — negligence, breach of contract, consumer protection law. Increased FCA oversight of AI in financial services. | Highest — EU AI Act deployer obligations, GDPR intersection, consumer protection enforcement, national courts applying AI Liability Directive principles. |
| Direction of Travel | More litigation, more sector regulation, possible federal AI bill. Courts establishing doctrine case by case. | Pro-innovation but tightening. Sector regulators (FCA, CMA, ICO) active. Possible AI Liability Bill if EU approach gains traction. | Accelerating compliance burden. Clear trajectory toward strict liability for systemic-risk models. Will influence global norms. |
Understanding who is liable is only the first step. The practical question for any business deploying AI is: how do you reduce your exposure before a false statement causes harm? The following eight-step framework moves from policy and contracts through technical controls, human oversight, insurance, and incident response — covering the full lifecycle of AI liability risk management.
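As one concrete illustration of the technical-controls and human-oversight steps in that framework, here is a minimal Python sketch of a review gate. Every name in it (AIOutput, HIGH_RISK_TOPICS, requires_review, publish) is hypothetical, invented for this post rather than drawn from any real product. The design point: high-risk AI output cannot reach a customer until a named human has signed off, which preserves exactly the record a court or regulator will ask for.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical human-in-the-loop gate for AI-generated, customer-facing text.
HIGH_RISK_TOPICS = {"legal", "medical", "financial", "refunds"}

@dataclass
class AIOutput:
    text: str
    topic: str
    model: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    reviewed_by: str | None = None  # set only after a named human signs off

def requires_review(output: AIOutput) -> bool:
    """Route AI output on high-risk topics to a human before publication."""
    return output.topic in HIGH_RISK_TOPICS

def publish(output: AIOutput) -> None:
    """Refuse to release unreviewed high-risk content; log an audit trail."""
    if requires_review(output) and output.reviewed_by is None:
        raise PermissionError(
            f"Human review required for topic '{output.topic}' before release."
        )
    print(
        f"PUBLISHED at {output.created_at} [{output.model}] "
        f"reviewed_by={output.reviewed_by}: {output.text}"
    )

# Example: the kind of statement that sank Air Canada's chatbot defence.
draft = AIOutput(
    text="Bereavement fares can be claimed retroactively after travel.",
    topic="refunds",
    model="support-bot-v2",
)
draft.reviewed_by = "agent-142"  # a named human accepts responsibility
publish(draft)
```

The same gate pattern also generates the evidence the later steps of the framework depend on: the audit trail feeds incident response, and documented human review is precisely what insurers and regulators will test against.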
Understand and Reduce Your AI Liability Exposure
The law on AI liability is developing fast — and the businesses that structure their AI use proactively will be far better positioned than those that wait for a claim to understand their exposure. Our AI risk and liability advisory team works with developers, deployers, and businesses across all sectors to implement frameworks that are both legally sound and commercially workable.


