“Liability in AI.” Those three words might sound like something you’d hear in a dry legal seminar—the kind of phrase that makes people check their phones. But stay with me, because this is a story worth hearing. Liability, at its core, is society’s way of asking: when technology causes harm, who should take responsibility? That question becomes urgent when we move from abstract possibilities to real-world impacts.
Today, we’re no longer dealing with hypothetical harm. Imagine a hiring algorithm screening job applications. If the historical data it’s trained on reflects patterns where certain groups were systematically excluded—women, people with disabilities, racialized candidates—those patterns can echo forward. Qualified applicants might never even get an interview. That’s not a glitch; it’s a harm with real human consequences.
Or think about a healthcare system adopting AI diagnostic tools. If those tools are trained mostly on one demographic, they can misdiagnose conditions in populations who weren’t adequately represented. That’s not just a technical oversight—it’s a health equity issue. In both cases, harm arises not because someone set out to discriminate or misdiagnose, but because assumptions built into the technology were never questioned or corrected.
This is where liability comes in. When these harms occur, who is responsible? The engineers who trained the model? The company that deployed it? The professionals who used it? To explore this, this week’s post looks at AI through three legal lenses: fault-based liability, strict liability, and product liability. None of these frameworks were created for AI. They were designed for collapsing staircases, faulty brakes, and, yes, the occasional wild animal. We’re now trying to stretch them over systems that are adaptive, opaque, and global.
Fault-Based Liability: The “You Should Have Known Better” Model
Fault-based liability is the legal model most of us know intuitively. Liability here depends on proving that someone was negligent—that they failed to take reasonable care. Think about walking past a construction site. If a warning sign falls off a loose bracket and injures you, fault-based liability says the site managers had a duty to secure that sign. If they failed to take reasonable steps, they’re liable. Simple enough.
Now translate that to AI. Imagine a healthcare startup builds an algorithm to analyze radiology scans. The tool consistently underperforms on scans from certain populations because the training data didn’t adequately include those groups. A missed diagnosis follows, and harm results. Under fault-based liability, the question becomes: should the developers have known their dataset wasn’t representative? Should they have tested performance across diverse populations? If the answer is yes, then failing to do so could be negligence.
Fault-based liability gets messy with AI because responsibility is distributed. It’s not just one engineer’s decision—it’s a chain of decisions: Who chose the data sources? Who set the validation criteria? Who approved deployment into a clinical setting? Negligence doesn’t sit with one person—it’s threaded through the system. And proving negligence often requires technical expertise that victims don’t have. Machine learning pipelines are complex, and uncovering hidden biases or design assumptions requires time and resources. That high burden means many harmful outcomes go unaddressed.
“Reasonable care” doesn’t stay the same forever. What counts as being careful changes over time, as we learn more, invent better tools, and expect higher standards. Think about cars. In the early days, seat belts weren’t standard. Automakers weren’t considered negligent for leaving them out. But as safety research advanced and seat belts proved effective, the standard shifted. Today, a car without seat belts is indefensible.
AI is on the same trajectory. Right now, fairness testing or bias audits are sometimes treated as optional. But within a few years, those practices will be baseline expectations. Courts will likely see failure to conduct them as negligence. That means designers and leaders have a window to help define what “reasonable care” should look like for AI.
Practical steps:
Regular risk reviews. Just as user-experience teams schedule sessions with users, set a cadence for harm reviews. Every six weeks, bring designers, engineers, ethicists, and legal experts together to map out failure modes.
Explainability by design. Fault-based liability cases often hinge on whether decision-making was transparent. Build features that let users see why the AI made a choice, even if it’s just highlighting key factors or displaying a confidence level (a minimal sketch follows this list).
Negligence checklists. Create a living document that asks: Did we check for bias? Did we test accessibility? Did we validate performance across diverse populations? Documenting these answers can be a shield later.
Liability rehearsals. Run exercises where you imagine your system caused harm and you’re in court tomorrow. Who’s on the hook? What evidence would you present? These rehearsals expose weak spots before reality does.
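To make “explainability by design” concrete, here is a minimal sketch in Python. The feature names, weights, and the idea of treating a clipped score as a confidence value are all hypothetical simplifications; the point is simply that every automated decision ships with the factors that drove it.

```python
# A minimal sketch of "explainability by design": alongside every automated
# decision, surface the factors that drove it and a confidence level.
# Feature names, weights, and the confidence heuristic are hypothetical.

def explain_decision(features: dict[str, float], weights: dict[str, float]) -> dict:
    """Score an application and report the top contributing factors."""
    contributions = {
        name: value * weights.get(name, 0.0)
        for name, value in features.items()
    }
    score = sum(contributions.values())
    # Rank factors by the absolute size of their contribution to the score.
    top_factors = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3]
    return {
        "decision": "advance" if score >= 0.5 else "review",
        "confidence": round(min(max(score, 0.0), 1.0), 2),  # placeholder confidence
        "top_factors": top_factors,
    }

if __name__ == "__main__":
    applicant = {"years_experience": 0.6, "skills_match": 0.8, "referral": 0.0}
    model_weights = {"years_experience": 0.4, "skills_match": 0.5, "referral": 0.3}
    print(explain_decision(applicant, model_weights))
```

Even a rough report like this gives users and reviewers something to interrogate, which is exactly what a fault-based claim will probe.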
Case study: Amazon’s hiring tool. Amazon once experimented with an automated hiring tool trained on past resumes. It turned out to disadvantage women because it mirrored historical hiring patterns. Was that a defect in the product, or negligence in how the data was handled? Under fault-based liability, the argument is that Amazon should have anticipated those risks and put safeguards in place.
Another example is the COMPAS algorithm used in the U.S. criminal justice system to predict recidivism. Investigations revealed it disproportionately flagged Black defendants as higher risk compared to white defendants with similar records. The company behind the tool argued that it never intended any discriminatory outcome. But intent doesn’t matter under fault-based liability. The real question is: should they have tested for disparate impact? If the answer is yes, failing to do so looks a lot like negligence.
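Testing for disparate impact does not require exotic tooling. Here is a minimal sketch, with made-up numbers and hypothetical group labels, of the kind of check a team could have run: compare how often each group receives the favorable outcome, and flag large gaps (the widely cited “four-fifths rule” treats a ratio below 0.8 as a warning sign).

```python
# A minimal sketch of a disparate-impact check: compare how often each group
# receives the favorable outcome. Data, group labels, and the 0.8 threshold
# are illustrative; real audits use richer metrics and real records.

from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, received_favorable_outcome) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favorable[group] += int(ok)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest selection rate divided by highest; below 0.8 is a common warning sign."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    audit = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
          + [("group_b", True)] * 35 + [("group_b", False)] * 65
    rates = selection_rates(audit)
    print(rates, "ratio:", round(disparate_impact_ratio(rates), 2))
```

A check this simple would not settle a lawsuit, but documenting that you ran it, and what you did about the result, is the difference between “we took reasonable care” and “we never looked.”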
Fault-based liability demands that we ask: Are we taking reasonable care to prevent foreseeable harms? If we’re not, are we ready to be held accountable? The challenge is that many harms in AI are systemic; they don’t show up in lab tests. They emerge when real people interact with systems in their own contexts. That means reasonable care must include anticipating systemic harms, not just technical ones.
Equity and inclusion. Negligence lands hardest on marginalized communities. Consider SyRI, an algorithm deployed by the Dutch government to detect welfare fraud. It disproportionately targeted low-income and immigrant neighborhoods, leading to stigma and wrongful investigations. A court eventually ruled it violated human rights. Under fault-based liability, the designers should have anticipated that their targeting methods would amplify existing inequities. They didn’t—and that negligence caused real harm.
Limits of fault-based liability. There are practical limits to this model. Proving negligence often takes years, requires expensive expert testimony, and places the burden on victims. In the meantime, harm continues. That’s one reason societies look to the next model: strict liability.
Strict Liability: The “Tiger-Level Risk” Model
Strict liability flips the question. It says: forget intent, forget negligence. If your system caused harm, you’re responsible. If you own a tiger and it mauls the neighbor’s dog, you’re liable. It doesn’t matter that you fed it organic chicken and built a strong cage. You chose to own a tiger; you bear the cost.
Now apply that logic to AI. Systems like self-driving cars, robotic surgeons, or predictive policing tools carry tiger-level risks. Even if developers did everything by the book, if harm occurs, strict liability says the people introducing the technology must absorb the cost. Victims shouldn’t have to prove negligence when the risks are foreseeable and the stakes are high. The burden shifts to those who profit from deploying the system.
This isn’t just legal—it’s cultural. Strict liability signals to companies that if you’re going to build high-risk AI, safety isn’t a feature; it’s your license to operate. It creates a powerful incentive to engineer for safety. If every crash comes directly off your balance sheet, you obsess about prevention. Consider pharmaceuticals: companies do clinical trials and follow regulations, but if a drug causes unforeseen harm, the manufacturer can still be liable. They price that risk into their business through insurance, legal reserves, and monitoring systems.
However, strict liability can also chill innovation. Big tech firms may afford the insurance and risk reserves, but small startups may not survive a single lawsuit. That could lead to consolidation, where only a few giant players can compete. Some countries are experimenting with solutions. Germany, for example, requires self-driving cars to carry liability insurance. If there’s a crash, the insurer pays immediately, then seeks compensation from the manufacturer if the AI is at fault. Victims don’t wait; manufacturers still face consequences; insurers become watchdogs pressing for safer standards. It’s a model that balances fairness and innovation.
Design implications:
Design with insurers in the loop. Treat insurers as stakeholders. If they can quantify your risk, you can design to lower it. That means thinking about diverse failure scenarios—bias, accessibility gaps, system crashes—not just technical performance.
Redundancy and safeguards. Airplanes don’t rely on a single sensor; they use multiples so no single failure is catastrophic. High-risk AI systems should do the same. Build human overrides, accessibility checks, and layered safeguards that protect different user groups.
Continuous monitoring. Strict liability doesn’t end at launch. If your system evolves, you’re still responsible. Monitoring performance across diverse populations isn’t optional; it’s how you catch harm before it scales (a sketch of this kind of check follows this list).
Culture of accountability. Under strict liability, pointing fingers is the worst thing you can do. Build a culture where safety, fairness, and inclusion are everyone’s job. If someone gets hurt, your organization needs a way to repair the damage and support the people affected.
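As a rough illustration of continuous monitoring, here is a minimal Python sketch: recompute subgroup accuracy on fresh production data and flag any group that falls below an agreed floor. The group names and the threshold are hypothetical placeholders to adapt to your own system.

```python
# A minimal sketch of continuous monitoring under strict liability: recompute
# subgroup accuracy on recent production data and flag any group that drops
# below an agreed-upon floor. Group names and the floor are hypothetical.

from collections import defaultdict

ACCURACY_FLOOR = 0.90  # the level your team, insurer, or regulator agrees on

def subgroup_accuracy(records: list[dict]) -> dict[str, float]:
    """records: dicts with 'group', 'prediction', and 'actual' keys."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["actual"])
    return {g: correct[g] / totals[g] for g in totals}

def check_for_harm(records: list[dict]) -> list[str]:
    """Return the groups whose accuracy has dropped below the floor."""
    return [g for g, acc in subgroup_accuracy(records).items() if acc < ACCURACY_FLOOR]

if __name__ == "__main__":
    production_sample = [
        {"group": "under_40", "prediction": 1, "actual": 1},
        {"group": "under_40", "prediction": 0, "actual": 0},
        {"group": "over_65", "prediction": 1, "actual": 0},
        {"group": "over_65", "prediction": 1, "actual": 1},
    ]
    print("groups needing review:", check_for_harm(production_sample))
```

The design choice that matters here is the alert: if nobody is paged when a subgroup degrades, the monitoring exists on paper only.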
It’s important to note that not every AI system is a tiger. Some are house cats. A recommendation engine that plays the wrong playlist is an inconvenience, not a life-or-death issue. If every AI carried tiger-level liability, the legal risk would crush innovation. Strict liability makes the most sense where risks are serious and foreseeable—self-driving cars, healthcare AI, drones, or tools affecting safety, rights, or livelihoods. For the rest, we need a different lens, which brings us to product liability.
Product Liability: The “Defective Tools” Model
Product liability is the third lens. At its simplest, it says: if you sell something defective and it causes harm, you’re responsible. If your washing machine leaks and floods the basement, the manufacturer is liable. If your toaster sparks and catches fire, consumers don’t need to prove negligence. They just need to show the product was defective and caused harm.
This model has protected consumers for decades, but it assumes products are static. The toaster that leaves the factory is the toaster in your kitchen. Its behavior doesn’t change over time. AI flips that assumption upside down. AI systems learn, update, and adapt. That makes “defects” far more complex.
Consider a smart thermostat. On day one, it works fine. Six months later, after a series of updates and adaptive learning, it starts mismanaging your home’s temperature, causing skyrocketing bills or health risks during a heatwave. The defect wasn’t there at purchase; it emerged later. Who’s liable? The original manufacturer? The software team that pushed the update?
Or think about a baby monitor with AI-based sound detection. If an update introduces a bug that causes the monitor to stop alerting parents to noise, that’s not a defect in the original product—it’s a defect introduced later. Under old liability frameworks, this scenario doesn’t fit neatly.
It gets even trickier with personalization. Imagine an AI fitness app. For most users, its recommendations are fine. For others—perhaps with undisclosed health conditions—the same advice could be dangerous. Was the product defective, or was it a mismatch between system design and user context? Traditional liability law doesn’t have an easy answer.
Design strategies for AI product liability:
Treat every update like a new release. Run safety and risk reviews for each significant change. If your AI updates automatically, you need a system to monitor for new harms continuously.
Build a “defect log.” Track failures and anomalies the way airlines track near misses. Document what went wrong, how it was corrected, and who was affected. A defect log is both a design tool and a liability defense (a minimal sketch follows this list).
Design for rollback. Make it possible for users to revert to a stable version if an update causes problems. Think of it as the AI equivalent of a product recall.
Communicate limitations clearly. Failure to warn is a classic trigger in product liability. If your AI doesn’t work well under certain conditions—or for certain populations—be upfront. Transparency reduces harm and strengthens trust.
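Here is a minimal sketch of what a defect log could look like in practice: an append-only file with one structured entry per incident. The field names and file path are assumptions; adapt them to your own incident process.

```python
# A minimal sketch of a "defect log": an append-only record of failures and
# anomalies, kept the way airlines keep near-miss reports. The fields and the
# file path are assumptions to adapt to your own process.

import json
from datetime import datetime, timezone

LOG_PATH = "defect_log.jsonl"  # hypothetical location; one JSON object per line

def log_defect(system: str, version: str, description: str,
               affected_users: str, correction: str) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "version": version,
        "description": description,       # what went wrong
        "affected_users": affected_users,  # who was affected
        "correction": correction,          # how it was (or will be) corrected
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    log_defect(
        system="sound-detection",
        version="2.3.1",
        description="Update stopped alerts for low-frequency noise",
        affected_users="Households using the nursery preset",
        correction="Rolled back to 2.3.0; fix scheduled for 2.3.2",
    )
```

A log like this also makes the rollback strategy above easier to justify: you know exactly which version to revert to and why.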
Let’s ground this with a familiar example. When Apple launched Face ID, reports emerged that it struggled with siblings and twins. Is that a defect or a foreseeable limitation? The distinction matters. If it’s a defect, liability applies. If it’s a limitation, Apple’s responsibility is to communicate it clearly so users can make informed choices. Our role as designers is to make sure those limitations are visible, not hidden, because trust erodes quickly when a product surprises people in harmful ways.
Ties That Bind: What These Lenses Teach Us
In this week’s post, we walked through three lenses of liability in AI:
Fault-based liability asks: Did someone fail to take reasonable care?
Strict liability says: If your system causes harm, you’re responsible, regardless of intent.
Product liability focuses on defects, whether in design, manufacturing, or communication, and on how those defects cause harm to everyday users.
None of these frameworks fit AI perfectly. But together, they offer a toolkit for answering that fundamental question: when technology causes harm, who takes responsibility?
Here’s the deeper insight: liability isn’t just a legal question. It’s a design question. It asks us to think about our systems not only in terms of performance but in terms of potential harm. It reminds us that our choices today—about documentation, testing, transparency, equity, and inclusion—are shaping what “reasonable care” will mean tomorrow.
What can you do right now?
Document everything. Record design decisions, data sources, and testing methods. Pretend you’ll need to explain them in front of a jury (a minimal sketch of such a record follows this list). That discipline doesn’t just protect you; it sharpens your design practice. It forces you to articulate why you made the choices you did.
Design for accountability. Whether the law lands on fault-based, strict, or product liability, you’ll need systems that can explain themselves, show safeguards, and demonstrate care for equity and inclusion. That means investing in explainability, auditability, and transparent communication.
Run liability fire drills. Ask yourself: if this system harmed someone tomorrow, what would we do? Who would be responsible? How quickly could we make it right? If the answers make you uncomfortable, that’s where your next design sprint should begin.
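To ground “document everything,” here is a minimal sketch of a structured decision record, written as if you might one day need to walk a jury through it. The field names and example content are hypothetical; the point is that rationale, data sources, tests, and known limitations are captured at the moment the decision is made.

```python
# A minimal sketch of "document everything": one structured record per
# significant design decision. Field names and example content are hypothetical.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class DecisionRecord:
    decision: str             # what was decided
    rationale: str            # why, in plain language
    data_sources: list[str]   # where the training/evaluation data came from
    tests_run: list[str]      # bias, accessibility, and performance checks
    known_limitations: list[str] = field(default_factory=list)
    approved_by: str = ""

if __name__ == "__main__":
    record = DecisionRecord(
        decision="Ship v2 of the triage model to three pilot clinics",
        rationale="v2 closes the sensitivity gap for patients over 65",
        data_sources=["2019-2023 de-identified clinic records"],
        tests_run=["subgroup accuracy audit", "screen-reader walkthrough"],
        known_limitations=["Not validated for pediatric patients"],
        approved_by="Clinical safety review board",
    )
    print(json.dumps(asdict(record), indent=2))
```

However you store these records, the habit is the same: write down the decision while the reasoning is fresh, not after the subpoena arrives.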
Liability frameworks will evolve. Seat belts, fire codes, and food labeling were once optional. Now they’re mandatory. AI will follow the same path. We can either wait for courts and regulations to force our hand, or we can proactively design for accountability. If we do, we don’t just avoid lawsuits—we build trust. We create systems people feel safe using. We reduce harm before it happens.
That’s the deeper story of liability in AI. It’s not a courtroom drama; it’s a design challenge. It forces us to confront our responsibilities and reminds us that technology doesn’t exist in a vacuum. It exists in a world with people—people who deserve fairness, safety, and dignity.
So as you build your next model, tool, or platform, remember: the law may not have caught up yet, but the ethical expectation is already here. Our job is to meet it: not just to avoid liability, but to build a future where AI enhances human life without leaving some people behind.