Legal Liability When AI Causes Harm: Global Precedents
As artificial intelligence systems become more deeply embedded in our daily lives—powering everything from medical diagnostics to autonomous vehicles—the legal consequences of AI-caused harm have become increasingly urgent and complex. Legal systems worldwide are now grappling with the challenge of attributing liability when damage results not directly from a human’s action, but from an autonomous system acting in unpredictable or opaque ways.
1. Foundations of Legal Liability
This section explores how different jurisdictions are addressing AI legal liability, organized around three key categories that have emerged in legal theory and practice: contractual liability, non-contractual (tort) liability, and product liability. Notably, AI itself is not currently recognized as a legal subject capable of bearing liability under any legal regime.
1.1 Contractual Liability: Breach of Agreed Expectations
When AI is used as part of a service or product governed by a contract (e.g., SaaS platforms, autonomous trading systems, or AI-powered legal tools), and that system fails to perform as promised, contractual liability can be triggered.
Under most legal systems, if an AI tool fails to deliver the functionality or accuracy stipulated in the contract, the provider may be liable for breach—even if the failure results from the AI's unpredictable behavior.
In the UK, under the Supply of Goods and Services Act 1982 and the Consumer Rights Act 2015, sellers are liable if digital products (including AI) do not match their description or purpose. Similar provisions exist in the Uniform Commercial Code (UCC) in the United States.
The challenge here lies in foreseeability and the allocation of risk: can providers disclaim liability for “black-box” behaviors? Courts are still developing jurisprudence on how much unpredictability is “reasonable” for an AI system.
1.2 Tort Liability: Negligence and Strict Responsibility
Outside of contractual relationships, tort law governs civil harm. In AI contexts, claims typically rest on negligence (a failure to exercise reasonable care in designing, deploying, or supervising the system) or on strict liability for activities deemed inherently dangerous.
A growing number of tort claims involving AI are being tested under existing negligence principles, especially in medical, automotive, and surveillance contexts.
A well-known case is the 2018 Uber self-driving car incident, where a pedestrian was killed during a test run. Although Uber avoided criminal charges, the safety driver was charged with negligent homicide. The case raised critical questions: who is at fault—the manufacturer, software developer, or the operator? And can the plaintiff prove that the AI’s decision directly caused harm?
1.3 Product Liability: AI as a Defective Product
Product liability is a specific form of tort liability governed by consumer protection statutes and strict liability doctrines. In many legal systems, including the EU, U.S., and China, producers are strictly liable if their product is defective and causes harm—even without negligence.
The critical question is whether an AI system qualifies as a “product”. Under the EU Product Liability Directive (85/374/EEC), a “product” includes all movables, even those incorporated into other movables. This has led to interpretations that software and AI may be covered, particularly when integrated into tangible products (e.g., robots, vehicles).
The EU sought to clarify this through the proposed AI Liability Directive (2022), which would have extended civil liability rules to high-risk AI systems and eased the burden of proof for claimants by introducing a rebuttable presumption of causality, in recognition of AI’s opacity (the “black-box problem”).
However, in February 2025, the European Commission withdrew the proposed AI Liability Directive, citing a lack of political consensus. The revised Product Liability Directive (EU) 2024/2853, which expressly brings software, including AI systems, within the definition of a “product”, remains in force, but no standalone AI liability directive is currently being pursued. The Commission may propose a new framework in the future, but no timeline has been set.
2. Jurisdictional Differences and Global Trends
While the EU has been at the forefront of codifying AI-specific liability rules (e.g., the AI Act and the withdrawn AI Liability Directive), other jurisdictions, such as the United States, continue to rely heavily on traditional tort and product liability frameworks. China, meanwhile, is taking a hybrid approach: its Civil Code and AI governance rules increasingly frame providers’ duties around algorithmic transparency and data handling.
Key challenge across systems: Legal frameworks were designed for static products and predictable behavior—not self-learning, autonomous systems that evolve after deployment. Courts are thus improvising within legacy doctrines.
3. Conclusion: Toward a Harmonized Risk-Based Framework
AI-caused harm challenges foundational assumptions in liability law, particularly regarding control, foreseeability, and causation. While no global consensus has emerged, the trend is moving toward a risk-based, tiered liability framework—especially for high-risk AI applications.
As regulators consider how to balance innovation with protection, clarity around who should bear responsibility for autonomous harm will be key—not just to protect consumers, but to ensure AI developers and users operate within clear ethical and legal boundaries.
For further information, please contact us via email at
(Call-to-Action Section)
Author: Tran Nguyen Phuong Hieu - Consultant
Date: June 2025.
Legal Liability When AI Causes Harm: Global Precedents
As artificial intelligence becomes increasingly integrated into daily life—from autonomous vehicles to diagnostic tools—the question of who bears legal responsibility when AI causes harm has become both urgent and complex. Legal systems around the world are actively grappling with how to handle cases where damage is caused not by human action, but by autonomous systems acting unpredictably.
1. Three Main Legal Frameworks for AI Liability
1.1 Contractual Liability
When an AI system fails to meet the standards promised under a contract (e.g., SaaS, legal AI tools), the provider may be liable for breach. For instance, under UK laws like the Consumer Rights Act 2015 or U.S. commercial codes, providers must deliver digital products that meet agreed specifications—even if AI behavior was unpredictable. The legal challenge lies in whether “black-box” failures are foreseeable and if liability can be disclaimed.
1.2 Tort Liability (Negligence and Strict Liability)
Outside of contracts, tort law governs AI-related harm. Liability may stem from negligence in the design, deployment, or supervision of the system, or from strict liability for inherently dangerous activities.
The 2018 Uber self-driving car case is illustrative: a pedestrian was killed, raising questions about who is legally responsible—the developer, the manufacturer, or the safety operator. Although Uber avoided prosecution, the operator was charged with negligent homicide.
1.3 Product Liability
In many systems, including the EU and U.S., a manufacturer can be strictly liable if a defective product (including AI-integrated machines) causes harm. Legal debate continues on whether AI software qualifies as a “product.” Under the EU’s Product Liability Directive (85/374/EEC), the answer is often yes when AI is embedded in physical goods.
The EU once proposed an AI Liability Directive (2022) to shift the burden of proof in high-risk AI cases. However, in February 2025, the European Commission withdrew the proposal due to lack of political consensus. This leaves the Product Liability Directive in force, but without a standalone AI liability law—at least for now.
2. Global Divergence
A shared challenge globally: legacy legal systems were not built for autonomous, learning systems that evolve after deployment. Courts and regulators are adapting slowly.
3. Toward Risk-Based Accountability
AI disrupts core liability concepts like control and foreseeability. While no universal legal model exists, most frameworks are trending toward risk-based, tiered approaches—especially for high-risk uses.
As AI becomes more autonomous, legal clarity is essential. Assigning responsibility is not only about justice after harm occurs, but also about guiding ethical innovation and responsible deployment.
For further information, please contact us via email at
(Call-to-Action Section)
Author: Tran Nguyen Phuong Hieu - Consultant
Date: June 2025.
🤖 When AI misbehaves... who has to pay? 💸
AI is no longer science fiction; it has crept into everyday life, from self-driving cars to medical diagnosis, and it is even used to draft breakup texts 👀. But if one day it causes you harm, who is at fault, and who pays? 🤯
⚖️ Three common legal routes:
🔹 Contract: You paid for a fancy AI tool and it spits out nonsense? You may be able to sue the provider for failing to deliver what was promised!
🔹 Tort (no contract): Even with no contract between the parties, the people behind a misbehaving AI can still be held liable, based on negligence or strict liability, meaning liability can attach whether or not the harm was intended.
🔹 Defective product: If AI built into a product causes damage (say, a robot vacuum that trashes your living room), the manufacturer may be held to have put a defective product on the market.
🌍 How is the world handling it?
EU: The front-runner in AI law. It planned a dedicated “AI Liability Directive”, but the proposal was withdrawn in February 2025 for lack of consensus among stakeholders.
US: Still relies heavily on traditional product liability and tort frameworks, with no major new steps so far.
China: Takes a hybrid approach, combining rules on algorithmic transparency with AI-related provisions folded into its Civil Code for good measure.
🎯 The takeaway
AI has no “brain” of its own in the eyes of the law, so it cannot appear in court to admit fault or pay compensation itself. Responsibility therefore usually falls on the developer, the provider, or the operator.
💡 The general trend is that countries are leaning toward risk-based legal frameworks, especially for “high-risk” AI systems such as medical AI and self-driving vehicles.
👉 So if you run an AI start-up or use AI at work: do not neglect the legal side. The AI may do the deed, but you are the one on the hook if something goes wrong!
For further information, please contact us via email at
(Call-to-Action Section)
Author: Tran Nguyen Phuong Hieu - Consultant
Date: June 2025.