MEPs are preparing to decide their position on artificial intelligence (AI). Following compromises reached earlier this week, MEPs on the European Parliament's Committee on Legal Affairs are expected to adopt their draft reports on Thursday 1 October. The vote covers two legislative reports (on ethical aspects and the civil liability regime) and one non-legislative report (on intellectual property rights).
The non-legislative report by Sabine Verheyen (EPP, Germany) on artificial intelligence in education, culture and audiovisual media, which has been discussed in the Culture Committee (CULT), will be voted on later this year.
In the light of the compromise amendments, the reports by the Committee on Legal Affairs (JURI) call for an exhaustive list of high-risk technologies subject to more extensive obligations, a list that would be kept up to date by the European Commission. The reports also propose a civil liability regime for artificial intelligence, with compensation of up to 2 million euros.
MEPs have taken inspiration from the ideas in the European Commission’s white paper and roadmap, which are intended to support the legislative proposals it will present in early 2021 (see EUROPE 12429/5, 12538/13). The European Commission questioned stakeholders on the appropriateness of introducing a risk-based approach, with a voluntary labelling scheme for AI applications and specific obligations for high-risk AI applications.
MEPs embrace risk-based approach
The report prepared by Ibán García Del Blanco (S&D, Spain), which contains 75 pages of compromises, takes into account the opinions of seven parliamentary committees (including two associated committees). The report, which deals with ethical aspects, calls on the Commission to present a regulation on artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by these technologies developed, deployed or used in the EU.
It proposes that high-risk technologies should be subject to an “impartial, objective and external ex-ante assessment” to ensure that they are developed, deployed and used in a secure, safe and cybersecure manner, with performance that is reliable, accurate, easily explainable and informative, and in a way that allows them to be disabled or restored to a previous state in the event of a control failure. The draft report stresses that operators of high-risk technologies may also have to provide public authorities with documentation relating to their use, design and safety instructions, including the source code, development tools and data used by the system.
In an annex, the document identifies the technologies that can be considered high-risk, according to the sector and use. The annex is the result of a compromise, and now includes finance and insurance among the high-risk sectors, whereas the environment has been removed. Among the high-risk uses, it includes recruitment, the granting of loans, brokering, taxation, waste management and emission control.
The report states that any technologies that receive a positive assessment should receive a “European certificate of ethical compliance”, which would also be open to other AI products following a voluntary assessment.
Compensation of up to 2 million euros
Alongside the report on AI ethics, the report by Axel Voss (EPP, Germany) supports a compulsory liability insurance regime for high-risk artificial intelligence systems, which would require operators of such systems to take out insurance. It believes that the Commission should include additional rules for cases where, for example, the third party is untraceable or insolvent.
It specifies that an operator of a high-risk AI system who has been held liable for harm or damage should provide compensation of up to 2 million euros in the event of death or harm caused to the health or physical integrity of an individual, and up to 1 million euros in the event of significant immaterial harm that results in economic loss or of damage caused to property.
Elsewhere, the draft report argues that, where there is more than one operator, they should all be held “jointly and severally” liable (the proportion to be determined by the respective degrees of control the operators have over the risk connected with the operation). The report considers that product traceability should be improved in order to better identify those involved in the different stages and calls for consideration to be given to the possibility of reversing the rules governing the burden of proof for emerging technologies in certain cases.
Facial recognition not banned
MEPs also want to tightly regulate facial recognition, but not to ban it. They therefore stipulate in the Del Blanco report that “the use and gathering of biometric data for remote identification purposes in public areas, such as biometric or facial recognition, shall be deployed or used only by Member States’ public authorities for substantial public interest purposes”.
Those authorities shall ensure that such deployment or use is disclosed in a proportionate and targeted manner and limited to specific objectives and locations, and restricted in time, they add.
Fuller account to be taken of intellectual property rights
On Monday 28 September, MEPs also voted on compromise amendments to the report prepared by Stéphane Séjourné (Renew Europe, France).
The text regrets that the issue of intellectual property rights has not yet been addressed in the work of the Commission. This non-legislative contribution calls for an impact assessment on the issue and puts the case for AI and related technologies to be protected by patents.
The final vote on these three draft reports is scheduled to take place in the Committee on Legal Affairs on Thursday 1 October. The compromise texts can be found at: https://bit.ly/3kVcirm , https://bit.ly/3cHvJRJ and https://bit.ly/36dpMKZ (Original version in French by Sophie Petitjean)