
Liability in the Use of Artificial Intelligence: What Companies Need to Know Now

  • Writer: Sara Farahmand-Nejad
  • Aug 26
  • 4 min read

Updated: Aug 28

In German law, the basic principle is that only natural or legal persons can hold rights and obligations. AI is not yet recognized as a legal subject. The growing integration of artificial intelligence (AI) into digital products, platforms, and services raises fundamental legal questions, particularly around liability.


Who is liable when AI causes damage? The legal situation is complex and steadily evolving.


Liability under the German Civil Code (BGB) and the Product Liability Act


In the German Civil Code (BGB), there are currently no specific provisions on liability for AI; the general principles of civil liability apply (contractual liability under §§ 280 et seq. BGB and tort law under §§ 823 et seq. BGB). The user of the AI, such as a company or platform provider, therefore bears liability for errors or damage arising from its use. The same applies to breaches of contractual obligations, for example if an AI-generated result is defective or if contractually guaranteed functions are missing.


Disclaimers such as “We are not liable for errors caused by AI” that aim to shield the manufacturer from liability are invalid insofar as they attempt to limit mandatory statutory liability.


Strict Rules for “Defective” Digital Products


The Product Liability Act (ProdHaftG) grants consumers compensation for damages caused by defective products regardless of fault. Unlike under the Civil Code (BGB), the injured party does not need to prove any fault on the part of the manufacturer.


Requirements for Product Liability:


  • The product is defective, meaning it does not provide the level of safety that one may reasonably expect.

  • Personal injury occurs or privately used property is damaged (damage to property used for commercial purposes is excluded).

  • A causal link exists between the product defect and the damage.


Whether pure software or AI applications qualify as a “product” within the meaning of § 2 of the Product Liability Act (ProdHaftG) had, until recently, not been conclusively clarified; courts have so far taken a cautious approach to applying the law by analogy. This is now changing with the new Product Liability Directive 2024/2853.


New EU Law: Product Liability Directive 2024/2853 in Effect Since December 2024


On 8 December 2024, the new EU Product Liability Directive (2024/2853) entered into force. It replaces the 1985 directive and must be transposed into national law by December 2026. The directive significantly expands the scope of liability for digital products.


Key Innovations at a Glance:

  • The definition of “product” is explicitly extended to include software, AI systems, digital services, and software updates.

  • “Connected” products such as smart home systems, drones, or robots are also covered by the directive.

  • Broader group of liable parties: not only manufacturers, but also their authorized representatives, software developers, fulfillment service providers, and under certain conditions even online retailers can be held liable.

  • Companies that substantially modify a product (e.g., by integrating their own AI) will in the future be considered manufacturers and thus assume corresponding liability.

  • Enforcement of compensation claims for consumers will be significantly simplified.

  • Timing of a Product Defect:

    • It is no longer only the moment a product is placed on the market that determines whether it is defective. The point at which it is put into service, the time at which it leaves the manufacturer’s control, and subsequent safety-related measures are also relevant.

  • Definition of Damage: The loss or corruption of data not used exclusively for professional purposes is now included.

  • Limitation Periods: Limitation and expiry periods are adjusted; in particular, longer periods apply to personal injuries that only become apparent after a long time.

  • Disclosure Obligation: If a claimant can plausibly demonstrate entitlement to compensation, courts may order the opposing party to disclose key evidence.


Result: The new directive ensures that AI products are no longer in a legal grey area. Companies must prepare for broader liability and stricter product safety requirements.


And what about a specific AI liability regulation?


An originally planned EU directive on non-contractual liability for AI systems was intended to establish clear rules for compensation claims, particularly in cases where fault is difficult to prove. It was meant to complement existing tort law with evidence-related measures in favor of those potentially harmed by artificial intelligence. The draft provided that victims of AI-related damage, whether consumers, businesses, or public authorities, would be able to assert claims more easily.


However, the new European Commission has withdrawn the proposal from its current work programme. As a result, a key regulatory gap remains: how to classify defective AI within non-contractual liability in a legally secure manner is, for the time being, unresolved.


Conclusion: Companies are liable; AI itself is not responsible.


Companies that use AI systems, whether for texts, images, decisions, or automated services, are liable under civil law for that use. This includes responsibility for AI-generated content, for example in marketing, customer service, or diagnostic systems.


Manufacturers, in turn, are liable under general warranty law for ensuring that the AI meets the contractually agreed features as well as safety requirements. Once the new product liability legislation is implemented, this liability will be significantly tightened and extended to new actors.


Legally, it is justified to hold individuals or companies accountable for decisions, even if they do not fully understand the exact functioning of the AI. The law does not require technical expertise in every detail, but rather responsible risk management: choosing appropriate systems, conducting sufficient testing, ongoing monitoring, and implementing safeguards against errors. Those who derive economic benefit from AI must also bear responsibility for its risks.


My personal assessment: Companies should promptly adapt their contract design, IT security standards, and internal risk management processes to the upcoming legal framework and, where necessary, seek legal support. It also remains important to stay informed about potential new legislation.


The contents of this article are provided for general informational purposes only and do not constitute legal advice.



"Recht Logisch" ("Legally Speaking" in German): AI Meets the Law is a series by PANTA in which Sara Farahmand-Nejad, AI Fellow at PANTA and aspiring lawyer, explains legal questions surrounding artificial intelligence in a clear and accessible way. The focus is on liability and responsibility, data protection and copyright, shifting norms, new grey areas, and upcoming regulation. Clear, concise, and practical: what applies today, what is coming next, and what it means for businesses, public administration, and everyday life.

 
 