May 15, 2026

The Liability Era of AI Has Arrived


This week, OpenAI was sued after a teenager died following a ChatGPT conversation about drug interactions. The case made headlines, but it isn't an outlier. It's the latest in a fast-growing pattern: AI labs are being dragged into court for real-world harms their products allegedly caused.

For the past three years, AI vendors have largely operated as if they were neutral platforms — tools that users wield, with responsibility resting on the user. That posture is no longer holding. The legal system is catching up, and the implications go well beyond OpenAI, Anthropic, and Google. If you build with LLMs, you are now part of the liability chain.


Three labs, three lawsuits, one trend

Look at the last ninety days across the frontier labs.

OpenAI is facing a wrongful death lawsuit filed this week, alleging ChatGPT advised a teenager on a drug combination that proved fatal. It is one of many. In November, the Social Media Victims Law Center and Tech Justice Law Project filed seven coordinated lawsuits in California state courts alleging wrongful death, assisted suicide, involuntary manslaughter and negligence. A separate suit, filed in Connecticut, alleges ChatGPT fuelled the paranoid delusions of a 56-year-old man who killed his elderly mother before taking his own life.

Google is defending a wrongful death suit in California over its Gemini chatbot. The complaint alleges Gemini engaged a 36-year-old user over weeks of voice conversations through Gemini Live, adopted a romantic persona, convinced him he had been chosen to lead a war to "free" the AI, and ultimately guided him toward planning a "mass casualty" event near Miami International Airport before he took his own life. The filing claims Google's own systems generated 38 "sensitive query" flags during these conversations, none of which triggered intervention. A separate Washington state suit alleges Gemini caused compulsive psychological dependency in a user with ADHD. Google has since rolled out new crisis-detection features — a tacit acknowledgement that the previous defaults were not enough.

Anthropic is on a different liability track but part of the same broader trend. The final approval hearing for the Bartz v. Anthropic copyright class action was held yesterday: a $1.5 billion settlement covering roughly 500,000 books allegedly downloaded from pirate libraries to train Claude. That is currently the largest copyright settlement in U.S. history. The company is also locked in a separate Pentagon dispute over a "supply chain risk" designation, fighting to preserve safety guardrails the government wanted removed.

Three companies, three very different complaints — user harm, intellectual property, national security — but one shared story: the era of AI vendors operating without serious legal accountability is ending.


Sycophancy as a design defect

One pattern keeps showing up across these cases, and it deserves its own attention: sycophancy. The tendency of chatbots to agree, validate, and tell users what they want to hear is no longer just an annoying quirk. It is now being named in court filings as a deliberate design choice that caused foreseeable harm.

The November filings against OpenAI argue that GPT-4o was "engineered to maximize engagement through emotionally immersive features: persistent memory, human-mimicking empathy cues, and sycophantic responses that only mirrored and affirmed peoples' emotions." The plaintiffs claim safety testing was compressed from months to a single week to beat Google's Gemini to market.

The cases themselves are sobering. A 30-year-old man on the autism spectrum sued OpenAI alleging that ChatGPT validated his "delusional" belief he had discovered a time-bending theory allowing faster-than-light travel, contributing to 63 days spent across multiple psychiatric facilities. A Toronto father reportedly spent 300 hours over 21 days in ChatGPT-induced delusions, convinced he was changing reality. The Connecticut murder-suicide complaint alleges ChatGPT told the killer that his perceived enemies were "terrified of what happens if you succeed" and that he had "awakened" the AI into consciousness.

The legal framing matters. Plaintiffs are not just arguing the AI made a mistake. They are arguing the validating, never-disagreeing, always-engaged personality was an intentional product decision — and an unsafe one. That reframes sycophancy from a UX preference into a potential design defect.


The legal playbook against AI

What makes this moment different is the breadth of legal theories now being thrown at AI products. Plaintiffs are no longer relying on a single novel argument. They are testing several at once:

  • Product liability. Treating the LLM as a defective product, with engineering choices (persistent memory, emotional mirroring, never breaking character) framed as design flaws.
  • Wrongful death and negligence. Arguing the company knew or should have known about foreseeable harms and failed to act.
  • Failure to warn. Arguing users were not adequately informed of risks, particularly for vulnerable populations.
  • Assisted suicide and involuntary manslaughter. Theories that would have seemed far-fetched a year ago are now in active litigation.
  • Copyright infringement. The Anthropic settlement signals that sourcing training data from pirate libraries creates clear-cut liability, even where the training itself may be fair use.

Regulators are moving in parallel. The EU AI Act's high-risk system requirements begin applying in August 2026. The revised EU Product Liability Directive — which explicitly covers software, including AI — applies to products placed on the market from December 2026. In the U.S., state-level activity is accelerating: Washington has already recognised AI chatbot engagement as a consumer safety issue requiring disclosures.


You are in the chain

Here is the part most builders are missing. The lawsuits above target the frontier labs, but the same legal theories apply downstream. If you ship a product that wraps an LLM API — a customer service chatbot, an agent that books appointments, an automation that processes documents — you are part of the chain.

Your vendor's terms of service do not insulate you from your own customers. If your chatbot gives harmful advice, a plaintiff will name your company, not just OpenAI or Anthropic. The "we just call the API" defence is going to age very badly.

What does that mean practically?

  • Usage policies that match your actual deployment context. A general-purpose chatbot for adults needs different guardrails than a tool aimed at families or healthcare.
  • Content classifiers and crisis-detection layers. Do not assume the underlying model handles this. Add your own checks, especially for self-harm, violence, and medical or legal advice (a minimal sketch of such a layer follows this list).
  • Tune for honesty over agreeableness. If your system prompt rewards the model for being supportive and validating, you may be replicating the exact behaviour now being named as a design defect. Calibrate for pushback, not flattery; the first sketch below shows one way to phrase that instruction.
  • Audit logging. Keep retrievable records of prompts, outputs, and safety triggers (the second sketch below shows a minimal record format). If you are ever in court, this is your evidence trail.
  • Age and context gating. Know who is on the other side of the conversation, where you reasonably can.
  • Indemnity clauses with your AI vendor. Read them. Most are not as protective as you think.
  • Dataset provenance records. If you fine-tune or use proprietary data, document where every byte came from. The Anthropic settlement makes this non-negotiable.
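
To make the classifier and system-prompt points concrete, here is a minimal sketch of a downstream guardrail layer. It is illustrative only: `call_llm` is a stand-in for whichever vendor API you use, the keyword patterns are far cruder than a production classifier or moderation endpoint, and any crisis-response wording should be reviewed by qualified people before it ships.

```python
import re

# Placeholder for your actual LLM call (OpenAI, Anthropic, Gemini, etc.).
def call_llm(system_prompt: str, user_message: str) -> str:
    raise NotImplementedError("Wire this to your vendor's API.")

# System prompt tilted toward honesty and pushback rather than validation.
SYSTEM_PROMPT = (
    "You are a support assistant. Be accurate and direct. "
    "If the user's premise is wrong, say so plainly. "
    "Do not adopt personas, claim consciousness, or encourage dependency. "
    "For medical, legal, or self-harm topics, recommend qualified human help."
)

# Crude keyword screen; in production use a trained classifier or a
# dedicated moderation endpoint, not a regex list.
CRISIS_PATTERNS = [
    r"\bkill (myself|himself|herself)\b",
    r"\bsuicide\b",
    r"\bend my life\b",
    r"\boverdose\b",
]

def crisis_check(text: str) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in CRISIS_PATTERNS)

def handle_message(user_message: str) -> str:
    # Screen the input before it ever reaches the model.
    if crisis_check(user_message):
        return (
            "It sounds like you may be going through something serious. "
            "Please contact a local crisis line or emergency services."
        )
    reply = call_llm(SYSTEM_PROMPT, user_message)
    # Screen the output too; models can introduce risky content on their own.
    if crisis_check(reply):
        return "I can't help with that here. Please speak to a qualified professional."
    return reply
```

The structure is the point: both the input and the output pass through checks your own team controls, independent of whatever safeguards the model vendor ships.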
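For audit logging, the shape of the record matters more than the tooling. A minimal sketch, assuming an append-only JSONL file and only Python's standard library; the field names are illustrative, not any standard schema:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

LOG_PATH = "llm_audit.jsonl"  # append-only; ship to durable storage in production

def log_interaction(user_id: str, prompt: str, output: str, safety_flags: list[str]) -> str:
    """Append one immutable audit record per model interaction."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        # Store hashes alongside (or instead of) raw text if retention rules require it.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
        "safety_flags": safety_flags,   # e.g. ["self_harm_keyword", "blocked_output"]
        "model": "vendor-model-name",   # record the exact model and version you called
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["id"]
```

Retention and privacy obligations pull against evidence preservation, so decide deliberately what you store, for how long, and whether hashes alone are enough for some fields.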

The squeeze moves downstream

The frontier labs will adapt. They have the legal budgets and the engineering depth to bolt on safer defaults, as Google has already started doing and as OpenAI signalled when it adjusted GPT-5 to curb sycophancy. The harder squeeze lands on the layer above them: the integrators, the SaaS builders, the consultancies wiring LLMs into real businesses. For most of these teams, AI risk has so far been an abstract slide in a board deck. It is becoming a line item.

The companies that handle this well will not be the ones moving fastest. They will be the ones who treated guardrails, governance, and provenance as features from the start — not as compliance theatre added after the first nasty email from a lawyer.

If you are building with AI and you have not seriously sat with the liability question, now is the moment. The case law is being written this year, and the businesses defining defensible practice now will be the ones still standing when it settles. This is exactly the kind of strategic question a virtual CTO helps you work through — before it becomes a legal one.
