The European Union set a new milestone in its race to develop artificial intelligence (AI) on Monday 8 April. This time, it places ethics at the heart of its action by endorsing the seven principles identified by the High-Level Expert Group on AI. And it is giving itself one year to see whether the actors follow these principles or whether regulatory action is necessary.
At this stage, the European Commission has already published two communications on artificial intelligence: a global strategy on 25 April 2018 and a coordinated plan with Member States to pool funding on 7 December 2018 (see EUROPE 12009/4, 12155/11).
This new communication aims to clarify the type of AI Europe wants to pursue. It is based on a three-step approach: defining the key requirements for trustworthy AI, launching a large-scale pilot phase to collect feedback from stakeholders, and building an international consensus on human-centric AI.
A step-by-step approach
This work relies mainly on the High-Level Expert Group on AI, which has been active since June 2018. The group is expected to formally publish its guidelines on the ethical dimension of AI on 9 April, during Digital Day, and its policy and investment recommendations on how to strengthen Europe's AI competitiveness in June 2019 (see EUROPE 12228/9).
In its communication, the Commission states that it supports the guidelines as developed by the group of experts and that it will launch a pilot phase in June 2019 to ensure that the guidelines can be implemented in practice. It also instructs the expert group to review, in early 2020, the assessment lists for the key requirements. Finally, it instructs the next Commission to analyse the results obtained from this assessment and to propose possible next steps.
It also undertakes to continue its discussions with international actors sharing the same ideas, such as Japan, Canada or Singapore, without being discouraged by other countries “that do not share the same vision, such as China”, highlights an EU source. This subject will be at the heart of the French presidency of the G7 and should also be discussed at the G20 in Osaka, Japan in June.
The work of the expert group
The work of the group of experts, which runs to about 50 pages, includes seven key principles and an evaluation grid for professionals.
The key principles are: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. This implies, among other things, that users must be informed when they are interacting with algorithms, that decisions taken must be explainable, that a human being must be able to override the decision of an algorithm, and that a redress mechanism must be available in case of damage, summarised the Commissioner for the Digital Economy and Society, Mariya Gabriel, at a press conference.
This approach is based on self-regulation. It does not settle the most controversial issues, such as the required degree of explainability. On lethal autonomous weapon systems, another sensitive issue, the group supports the European Parliament's call for a legally binding instrument, negotiated at international level, to ban them. Through its High Representative for Foreign Affairs, the Commission has already indicated on this issue that it supports a discussion within the framework of the Geneva Convention on Certain Conventional Weapons (UNODA) to ensure that such weapons remain under human control and comply with international law.
Reactions
Overall, stakeholders seemed rather satisfied with the report of the expert group and the Commission's communication.
The incumbent telecoms operators (ETNO) and the software industry (BSA) welcomed these documents with enthusiasm.
The European Trade Union Confederation (ETUC), which sat on the expert group, also described the guidelines as a “good basis”, while calling for the work to continue. “It is a good start, but only a start”, said Thiébault Weber, Confederal Secretary of the ETUC. “The document only contains non-binding recommendations; it says nothing about the role that Europe should play as a legislator in the future”, he added, referring in particular to the anti-discrimination and occupational safety and health directives.
The same is true for AccessNow (Internet accessibility), ANEC (standardisation) and BEUC (consumers), which issued a joint statement calling on the European Commission in particular to:
- identify legal gaps and update legislation, if necessary (in particular in the areas of security, liability, consumer and data protection law);
- evaluate and update enforcement mechanisms, including redress and market surveillance;
- establish new consumer rights to ensure the transparency, fairness and accountability of AI-based algorithms.
Status of national strategies
It should also be recalled that, in terms of time frame, the coordinated AI plan mandates Member States to develop national AI strategies. According to the Commission's latest count, France, Finland, Sweden, the United Kingdom, Germany, Denmark, Belgium and Lithuania have dedicated strategies in this area. Some countries, such as Luxembourg, the Netherlands, Ireland and Norway, include AI-related actions in their more general digitisation strategies. Austria, the Czech Republic, Estonia, Italy, Latvia, Poland, Portugal, the Netherlands, Slovenia, Slovakia, Spain, Malta and Romania are currently drafting theirs, while Finland has revised its own. [See the Commission communication: https://bit.ly/2UlS8he ] (Sophie Petitjean)