On 28 September 2022, the European Commission published the proposed AI Liability Directive. The Directive joins the Artificial Intelligence (AI) Act (which we wrote about here) as the latest addition to the EU's AI-focused legislation. Whilst the AI Act proposes rules that seek to reduce risks to safety, the liability rules will apply where such a risk materialises and damage occurs.

In a European enterprise survey, 33% of companies considering adopting AI cited ‘liability for potential damages’ as a major external challenge. The proposed Directive seeks to tackle this challenge by establishing EU-wide rules that ensure consumers enjoy the same level of protection when claiming damages caused by an AI system as they would for damage caused by any other product.

Why is change needed?

The proposed Directive applies to non-contractual civil law claims for damages caused by an AI system, where such claims are brought under fault-based liability regimes. Existing national fault-based liability rules are ill-suited to claims for damage caused by AI-enabled products and services. To succeed in a claim, a victim generally needs to prove a wrongful act or omission by an identifiable person; yet the complexity, autonomy and lack of transparency of AI, combined with the number of parties involved in the design, development, deployment and operation of an AI system, can make identifying the liable person difficult or prohibitively expensive.

What is the impact for companies developing and supplying AI systems?

  1. To address the difficulty of proving the causal link, the Directive proposes, in certain cases, a rebuttable presumption of a causal link between the fault of the defendant and the output that gave rise to the damage, where all of the following conditions are met:
    a. The claimant has demonstrated fault on the part of the AI provider, in the form of non-compliance with an obligation of EU or national law designed to protect against such damage (e.g. certain requirements under the AI Act);
    b. It can be considered reasonably likely, based on the circumstances of the case, that the fault demonstrated in a. above has influenced the output produced by the AI system, or the AI system's failure to produce an output; and
    c. The claimant has demonstrated that such output (or failure to produce an output) gave rise to the damage.
  2. The proposed Directive establishes a right for claimants to request that a court order a defendant to disclose relevant evidence about a high-risk AI system (as defined in the AI Act). This will be useful when proving fault (under a. above) because the AI Act requires specific documentation, information and logging to be prepared in respect of high-risk AI systems, but does not give victims a right of access to that information. However, courts may only order disclosure where the evidence is necessary and proportionate to support the claim, and only where the claimant has first made all proportionate attempts to gather the relevant evidence themselves.

These proposed AI liability rules give all those involved in activities relating to AI systems an additional incentive to comply with their obligations (as if the threat of a €30m fine for failings under the AI Act were not incentive enough!).

The AI Liability Directive will be subject to review and approval by the European Parliament and the Council of the European Union before taking effect. Once adopted, Member States will have two years to transpose its requirements into national law.