On 6 December 2022, the Council of the European Union adopted its common position (general approach) on the Artificial Intelligence Act (AI Act). With the general approach adopted, the Council can now enter into negotiations with the European Parliament (the so-called trilogues) once the Parliament adopts its own position, with a view to reaching an agreement on the proposed AI Act.
Since the European Commission presented the first draft in April 2021, the AI Act has taken shape and undergone considerable changes. The main developments in the proposed regulation are the following:
- The definition of ‘AI systems’, which determines the scope of applicability of the AI Act, has been narrowed down to systems developed through machine learning approaches and logic- and knowledge-based approaches. Compared to the original proposal, which listed the technologies to be considered AI technologies in an annex, this at first seems a leaner approach, better tailored to distinguishing artificial intelligence from other software. Comparing the original Annex I with the newly included recitals, which broadly describe the legislator’s understanding of machine learning and logic- and knowledge-based approaches, shows that only statistical methods and search, estimation and optimization methods were excluded from the scope of AI technologies.
- Prohibited AI practices have been extended to include private actors using AI for social scoring. Also, the provision prohibiting the use of AI systems that exploit the vulnerabilities of a specific group of persons now also covers persons who are vulnerable due to their social or economic situation. With regard to the highly sensitive prohibition of the use of real-time biometric identification for law enforcement purposes, the requirements for an exemption are now laid down in more detail. As a result, the paragraph on real-time biometric identification for law enforcement now reads more like a legal basis for its application than a firm prohibition.
- Regarding the classification of AI systems as high-risk, a horizontal layer was added on top of the high-risk classification to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured. As regards the high-risk categories in Annex III, three categories have been deleted (deep fake detection by law enforcement authorities, crime analytics, and verification of the authenticity of travel documents), two categories have been added (critical digital infrastructure and life and health insurance), and others have been fine-tuned. Also, the requirements for high-risk AI systems have been clarified and adjusted, in particular to take into account the different roles in the software supply chain.
- General purpose AI has been included in the scope of the AI Act. General purpose AI, which may be used as a high-risk AI system or as a component thereof, will need to fulfill certain requirements applicable to high-risk AI systems. How those requirements translate to general purpose AI is, however, currently open and shall be further specified by an implementing act.
For more details, we have included a reference to the press release of the Council of the European Union. We will also share a more detailed evaluation of the current status of the AI Act and its implications for the AI community and society via our website and keep you updated on the progress.
🚥 AI and Software Liability
Uncertainties in relation to liability for software, in particular AI, have been identified as one of the key obstacles preventing institutions of all sizes - from startups and SMEs to corporates and conglomerates - from deploying AI services and products on a larger scale. Clarifying AI liability has been on the agenda of the European Commission since the legislative work on a suitable legal framework for AI began.
At the end of September, the European Commission published two legislative proposals: the draft of an AI Liability Directive and the draft of a revised product liability directive - both addressing aspects of AI liability in parallel.
The AI Liability Directive aims at introducing a right to evidence, covering the level of disclosure required of a defendant in claims involving high-risk AI systems as defined in the AI Act. Further, a (rebuttable) presumption of causality shall be introduced to guide how and where fault should be attributed in claims for damages caused by an AI system, depending on its categorisation. The AI Liability Directive has a very narrow scope, as described before, and does not address further issues of national liability rules, such as the definition of damages and the general rules of procedure. As it is proposed as a directive, this legislative act will need to be implemented via national legislation in the member states.
Far more interesting and impactful than the AI Liability Directive is the proposed update of the product liability directive, implemented in Austria via the Product Liability Act (Produkthaftungsgesetz, PHG). It shall be explicitly clarified that software, too, is to be considered a product, and hence certain actors in the software supply chain shall be liable for defects or malfunctioning. Notably, liability for software shall continue after the product is placed on the market, covering updates, failure to address cybersecurity risks, and machine learning. This holistic full-lifecycle approach will hold developers responsible for AI systems that learn independently. Moreover, certain scenarios shall be introduced where the causal link between defectiveness and damage is (rebuttably) presumed. The general liability period shall expire 10 years after the product is placed on the market.