EU Directives on AI Liability and Product Liability

On September 28, 2022, the European Commission (“EC”) published a set of proposals aimed at giving businesses legal certainty, modernizing and adapting the EU’s existing liability regime to accommodate AI systems, and harmonizing member states’ national liability rules for AI. The EC had previewed the draft rules in its February 2020 Report on Safety and Liability, placing particular emphasis on the challenges posed by the complexity, opacity, and autonomy of AI products. The proposed EU AI Act, the AI Liability Directive (“ALD”), and a revised Product Liability Directive (“PLD”) are designed to work in tandem and are expected to significantly alter the liability risks faced by developers, manufacturers, and suppliers who place AI-related products on the EU market.


The Product Liability Directive (“PLD”)

Under the strict liability framework established by the draft Product Liability Directive, which applies to AI systems, claimants need only show that harm resulted from the use of a defective product. Notably, a court may take the mandatory safety requirements set out in the draft AI Act into account when determining whether a product is defective in its design or manufacture.

The AI Liability Directive (“ALD”)

The AI Liability Directive, which would apply to fault-based liability regimes in EU member states, would create a rebuttable “presumption of causality” against the developer, provider, or user of an AI system. It would also make it easier for potential claimants to obtain information about “High-Risk” AI systems, as defined in the draft EU AI Act. This new disclosure obligation is of particular significance to businesses developing and deploying AI products: it could require them to disclose technical documentation, testing data, and risk assessments, subject to safeguards protecting sensitive information such as trade secrets. If such evidence is not produced in response to a court order, the court may invoke a presumption of breach of duty.

Both the PLD and the ALD must be scrutinized and approved by the European Council and the European Parliament before taking effect. Once adopted, member states will have two years to transpose them into national law.

