The European Commission has identified four possible avenues for providing a legal framework for the ethical aspects of artificial intelligence, ranging from the status quo to binding criteria for all applications. Its inception impact assessment, published on Thursday 23 July, can be commented on until 10 September 2020.
“Given that artificial intelligence tools can be used to perform functions that previously could only be performed by humans or not at all, there is a need to define specific requirements to prevent and/or mitigate intended or unintended negative outcomes”, the Commission notes in its document.
This inception impact assessment follows the February White Paper (open for public consultation until June) in which the Commission advocated a risk-based approach (see EUROPE 12429/5). It will be complemented by a more detailed impact assessment, which is expected to be finalised in December 2020.
Four options on the table
The European Commission is putting four distinct options on the table, based on Article 114 of the TFEU, as possible foundations for its proposal for a regulation on artificial intelligence. However, it refrains from stating a preference.
In addition to the status quo (Option 0), the options are incremental: Option 1 is a flexible, non-legislative approach designed to facilitate and stimulate action by industry.
Option 2 proposes, as the White Paper does, the introduction of a voluntary labelling scheme that demonstrates that the application meets a number of criteria and is trustworthy.
Option 3 suggests the introduction of mandatory European criteria relating to, inter alia, training data, record keeping for datasets and algorithms, information to be provided, robustness, accuracy and human oversight. The inception impact assessment considers three sub-scenarios based on which applications would be covered by these criteria. Under the first sub-option, they could be limited to certain categories of applications, such as remote biometric identification systems (e.g. facial recognition); it might be appropriate, the Commission notes, to provide circumstance-related provisions and common safeguards for biometric identification only. The other two sub-options would apply these criteria either to high-risk applications only (identifiable by sector and use, or by other criteria) or to all AI applications.
Finally, Option 4 is based on a combination of the above options, taking into account the levels of risk.
It should be noted that the document stresses the importance of enforcement, through ex ante and/or ex post mechanisms such as conformity/safety assessment procedures, as provided for in product safety legislation. (Original version in French by Sophie Petitjean)