
Why the EU Should Not Create a Separate AI Product Liability Regime

by Benjamin Mueller

As part of a proactive push into AI legislation and regulation, the European Commission is considering new civil liability laws for products that contain AI. This would be a mistake, as the EU’s existing product liability rules are sufficient and comprehensive. The proposed changes would make it more difficult to develop and bring to market new AI-enabled products, harming the European tech ecosystem.

In 1985, the EU created a product liability framework in the form of the Product Liability Directive (PLD). Widely considered a success, it strikes a finely tuned balance that protects consumers from harm without unduly burdening producers with spurious tort claims. The PLD holds producers strictly liable for injury and damage caused by faulty products. Member states supplement the PLD through national tort and contract laws. The resulting European framework for product liability is robust and forward-looking: the PLD is a technology-agnostic approach that, in combination with national liability laws, works across a wide range of contexts, including products that barely existed in 1985, like personal computers. As late as 2018, the Commission found that the PLD enhances consumer protection, innovation, and product safety.   

In October 2020, the European Parliament adopted a proposal to update the PLD. Legislators are concerned that the PLD, which applies to products but not services, is ill-suited to digital technologies. MEPs called on the Commission to create a new and separate civil liability regime for AI, with strict liability and mandatory insurance for operators of “high-risk” AI systems. This proposal is a solution in search of a problem, which would have a chilling effect on companies hoping to offer or acquire innovative AI-powered products in Europe.

The PLD already covers products that incorporate software, including AI. A report from the Commission’s Expert Group on Liability and New Technologies found that a new EU civil liability regime for AI is unnecessary since “the harmful effects of the operation of emerging digital technologies can be compensated under existing laws on damages in contract and in tort in each Member State.” Consumers can make claims under the PLD against manufacturers whose products cause damage due to software problems. At the same time, member states provide remedies under national contract law for situations where a product does not work as intended or agreed. 

The EU’s priority should be to encourage widely available and cost-effective insurance for products that contain AI. Suppliers and vendors, especially SMEs, rely on insurance to help them pay damages for defective products. For start-ups, insurance provides reputational benefits (since they need to meet basic due diligence requirements before they can be insured) and financial protection in the event of claims. European insurers are developing new insurance products for AI-powered systems on a sector-specific basis. The report by the Commission’s Expert Group makes it clear that the insurance market will adapt existing coverage or devise new products for different types of AI systems in various sectors. Onerous new civil liability rules would further burden the EU’s already-struggling tech industry, harming innovation by impairing the development and adoption of cutting-edge AI in Europe.

Policymakers could introduce additional guidance suggesting manufacturers track software updates to physical goods, which would help determine which version was in use when a defect occurred. The usage data generated by digital devices should be stored securely and, in the event of a liability dispute, made accessible to users, producers, and insurers to establish whether harms occurred due to a product defect, misuse, or third-party intervention (e.g. upgrades or repairs).

Establishing a new civil liability regime for AI, however, would be counterproductive. New AI-powered products whose risk profile is not fully understood could end up uninsurable, which would make it harder to bring products to market. Where insurance for such products does exist, premiums would likely be driven up. All this makes it more difficult for firms to launch AI-enabled products in Europe. Moreover, it could make suppliers liable for damage arising from uses they cannot control (e.g., malicious use) or foresee.

Changing the EU’s liability framework before the first cases of AI-caused damages even occur is likely to bring about unintended consequences. Evidence is needed in the form of court cases where parties are unable to obtain appropriate redress under the existing framework. Real-life damage caused by AI-enabled systems, dealt with by national laws and the PLD in the first instance, will offer more clarity over what liability reforms, if any, are warranted.

 

Image Credit: Unsplash
