In recent years there has been increased legislative focus and regulatory scrutiny of how certain technology companies collect, use and monetise data gathered from users and generated through use of the services those companies supply. That legislative focus is now shifting to AI technology as it matures and approaches mass adoption.
The newly proposed AI Liability Directive seeks to introduce a "presumption of causality" in an environment where algorithms continue to be seen as "black boxes" too technical for consumers to understand. Certain legislators and regulators wish to create a presumption that companies are responsible for injuries consumers may suffer as a result of algorithm-driven services, relieving consumers of having to establish causation by working with experts.
However, whether the introduction of such rules alongside the EU AI Act, which is currently under negotiation, will chill innovation remains to be seen. In any event, this proposal from the European Commission should not be seen in isolation, but rather as part of a broader set of initiatives to turn Europe into "the global hub for trustworthy AI".
The AI Liability Directive, published by the European Commission on Wednesday, will introduce a "presumption of causality" for those claiming injuries caused by AI-enabled products. This means victims will not have to untangle complicated AI systems to prove their case, so long as a causal link between a product's AI performance and the associated harm can be shown.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.