News & Views

Liability for Artificial Intelligence and Other Emerging Digital Technologies

29 April 2020

Report from the Expert Group on Liability and New Technologies (New Technologies Formation) for the European Commission, 2019

Legislation is often accused of being behind the times: old-fashioned and ill-equipped to solve issues arising from modern technologies. The question of who is liable when a pedestrian is hit by an autonomous vehicle dates back at least a few years, and only a few jurisdictions have explicitly resolved it. The report entitled “Liability for artificial intelligence and other emerging digital technologies” (hereafter referred to as the “Report”) analyses the existing national tort laws of the EU Member States and highlights their incompatibility with emerging digital technologies (hereafter referred to as “EDT”) and the challenges arising from their use. Our post summarises the Report and offers some additional insights into the issue of liability for AI and other EDT.

1. Yesterday’s Solutions Are Ill Fit for Today’s Problems

Within the EU, questions of liability are, to a large extent, governed by the national tort laws of the Member States. Only in a few legal areas is liability harmonised at the EU level, for example through the Product Liability Directive (85/374/EEC). Current legislation may, in some cases and to some extent, be used to solve issues arising in the EDT domain. For example, vicarious liability regimes applicable to human auxiliaries may also be applied to the operators of machines. The existing rules may, however, lead to unfair outcomes and to inefficient or costly processes for the victim.

2. Tomorrow’s Issues

The Report concludes that the existing legislation could lead to inadequate outcomes where victims are not compensated for the harm they have suffered, or to situations where providing evidence is impossible or very expensive, because tort law fails to account for the characteristics that separate liability issues relating to EDT from conventional liability questions. These characteristics include the complexity and opacity, as well as the openness and vulnerability, of EDT, including AI products, services, and systems.

For instance, this complexity may make it nearly impossible for a victim to show that a defect or flaw in the system caused the damage, as the chain of causality cannot be traced. Consequently, the victim will lose their right to compensation under tort law, which often requires that the victim prove the chain of causality. If machine learning (“ML”) AI is involved, it is even more difficult for the victim to prove what “went wrong” in the algorithm because of ML AI’s “black box” characteristic: the system receives an input, applies some inscrutable logic, and produces an output that may, in fact, cause damage. How the output was produced, and how the ML AI system applied its logic, is the “black box”; nobody, not even the developer of the ML AI system, can fully explain how the outcome was reached.
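To make the black box concrete, consider the minimal Python sketch below. It assumes scikit-learn is available, and the braking scenario, data, and variable names are purely illustrative, not taken from the Report.

```python
# Minimal sketch of the ML "black box": every learned weight is inspectable,
# yet none of them reads as a human-intelligible chain of reasoning.
# Assumes scikit-learn; the braking scenario is hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))                    # hypothetical sensor readings
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # toy "brake / don't brake" labels

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000)
model.fit(X, y)

reading = np.array([[0.4, 0.7, 0.2, 0.9]])
print("decision:", model.predict(reading))                  # e.g. 1 = brake
print("weight matrices:", [w.shape for w in model.coefs_])
# A victim (or a court) can read every number in model.coefs_ and still be
# unable to point at the single "flaw" that produced a given decision.
```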

3. Tomorrow’s Solutions

The Report suggests a few mechanisms to deal with the above-mentioned challenges, aiming to strike a balance between offering victims adequate protection and not chilling future development in the EDT domain. The benefit of adopting some of the suggested mechanisms is that the burden on victims will be eased, and that victims will have the same access and opportunity to obtain compensation as victims of human conduct and conventional technology. Some of the mechanisms recommended by the Report are summarised below:

  • The challenges motivate a reversed burden of proof in some instances, for example where there are disproportionate difficulties in establishing the relevant level of safety for an EDT service, or where the producer or operator of the EDT has failed to comply with applicable safety rules. Such levels of safety and safety rules could be set by industry standards within the EDT domain. Such standards, or “duties of care”, will also simplify proving a deviation from them, as it will be easier to attribute fault to a party that fails to meet the standards. Adopting clear standards will also improve the foreseeability of liability within the domain in general.
  • Requiring that technology be equipped with systems that log its operation will ease some of the causality-chain challenges, for example by making it easier to establish whether a risk of the technology has materialised (see the sketch after this list). The Report suggests that, in the absence of logged information, or where the producer or operator fails to produce such information, the condition that the information would have proven should be deemed proven.
  • Lastly, the formation of commercial and technological units will make it possible for a victim to hold several entities jointly and severally liable. A unit may be formed on a contractual basis, or a collection of EDT producers may be treated as such a unit if they, for example, engage in joint or coordinated marketing. This will also motivate EDT producers to build products that include logging mechanisms, with a view to a possible redress situation between the parties in the unit.
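As a rough illustration of what such logging by design could look like, the Python sketch below appends each operational decision to a simple record file. The file format, field names, and the braking example are our own assumptions, not requirements set out in the Report.

```python
# Rough sketch of "logging by design": each operational step is appended
# to a JSON-lines file so a causal chain can be reconstructed afterwards.
# The format and field names are illustrative assumptions.
import json
import time

LOG_PATH = "edt_operation.log"

def log_event(component: str, inputs: dict, output: dict) -> None:
    """Append one timestamped record of what the system saw and decided."""
    record = {
        "timestamp": time.time(),
        "component": component,  # which part of the unit acted
        "inputs": inputs,
        "output": output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

# Example: a vehicle's braking module records each decision, making it
# easier to establish later whether a risk of the technology materialised.
log_event(
    "braking_module",
    {"distance_m": 12.4, "speed_kmh": 48.0, "visibility": "low"},
    {"brake": True, "confidence": 0.93},
)
```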

It is worth noting that the Report also holds that there is no need to give autonomous systems a legal personality of their own, chiefly because there are other entities that may be held accountable instead. Further, giving such systems legal status would require that they hold financial assets of their own with which to compensate victims.

4. Risks and Liability

There is an ever-increasing need to review tort laws and to update them with respect to issues arising from AI and other EDT. However, this cannot be done without an understanding of the risks posed by the different types of EDT and of what systems and standards, if any, are already in place to limit those risks and to guarantee a level of EDT safety.

One option, which is not highlighted in the Report, is that the level of safety could be addressed through Service Level Agreements (“SLA”). An SLA guarantees a certain uptime; it may, for example, guarantee that a service will work 99.9999999% of the time, which translates to roughly 30 milliseconds of downtime per year. Most users would probably accept this level, even for critical software. Unfortunately, not all EDT producers are able to offer such a guarantee, nor is it necessary or possible for all EDT to do so. For that reason, SLAs might not be the sole solution to the liability issue, but they may offer some assurance for those who need it right now.
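Converting an availability percentage into downtime is simple arithmetic, and the short Python sketch below makes the “number of nines” comparison explicit. It assumes a 365-day year, and the helper name is our own.

```python
# Convert an SLA availability percentage into allowed downtime per year.
# Assumes a 365-day year; the function name is illustrative.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds

def downtime_per_year(availability_percent: float) -> float:
    """Return the seconds of downtime per year a given SLA level permits."""
    return SECONDS_PER_YEAR * (1 - availability_percent / 100)

print(downtime_per_year(99.999))      # ~315 s, about 5.3 minutes ("five nines")
print(downtime_per_year(99.9999999))  # ~0.03 s, about 30 ms ("nine nines")
```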

It will also be interesting to see what role software licences will play in answering the liability question, and whether they will, for example, include liability clauses to an even greater extent. Currently, licences for open source software commonly do little more than shift IP liability from the author of the code to the user. Can the author then be held accountable by the user’s customers for a malfunction in the software? Or is the open source software in fact distributed “as is”, so that any risk associated with its use falls on the user?

5. Key takeaways

  • AI and other EDT have unique characteristics that make current tort law inadequate. These characteristics need to be taken into account when addressing issues of liability; for example, the black box of AI is incompatible with the requirement to prove a chain of causality.
  • Tort law needs to be updated to compensate the victims of EDT. As tort law within the EU is not fully harmonised, it is important to develop EU-wide minimum standards with respect to liability related to AI and other EDT.
  • Stricter requirements and standards should be imposed on producers and operators of EDT and AI, and in high-risk domains different insurance solutions might be appropriate to ensure compensation for victims.

 

Authors: Elisabeth Vestin, Linn Alfredsson, and Malin Männikkö