The 52 independent experts convened by the European Commission have issued their final position statement on the most controversial issues relating to artificial intelligence: they come out against mass surveillance, lethal autonomous weapons and the scoring of citizens.
These are among the policy and investment recommendations that the group intends to present on 26 June at the first meeting of the European AI Alliance, which brings together some 3,000 people.
"An agenda for the next Commission"
The draft text, published by the American outlet Politico, contains 33 recommendations addressed to the European Commission and the Member States. It builds on and fleshes out the group's first working document, the guidelines on the ethical dimension of artificial intelligence (see EUROPE 12231/10).
"It is virtually a working agenda for the next Commission", says one of the group's experts, who points out that an exercise of this kind was no easy task. According to him, the discussions were intense, sometimes lengthy and occasionally confrontational. He cites the example of a recommendation, eventually included in the draft text, that citizens who encounter a problem when accessing a public service through an artificial intelligence system should be able to be redirected to a human official.
The question of liability
The fifty-page document examines, on the one hand, how to achieve trustworthy artificial intelligence, focusing on the protection of human beings, the public and private sectors, and artificial intelligence on the world stage. On the other hand, it addresses the key enablers of AI: infrastructure, skills, governance and funding.
Overall, it rejects 'regulation for regulation's sake' and advocates a risk-based approach, risk being understood as an adverse effect on individuals or society.
Unlike the April guidelines, it tackles the most sensitive issues, such as civil and criminal liability. On the former, it recommends that, for applications presenting a safety risk or endangering fundamental rights, consideration be given to introducing traceability and monitoring criteria. It specifies that civil liability rules must be able to ensure adequate compensation in the event of injury and/or violation of rights (via strict or fault-based liability). These "may need to be supplemented by mandatory insurability provisions", the document states. On criminal liability, it calls for consideration of how to ensure that criminal liability can be attributed in accordance with the fundamental principles of criminal law.
Surveillance, violence, etc.
The document addresses in particular the issue of mass surveillance and commercial surveillance, warning that governments may be tempted to build 'safer societies' using intrusive AI-based surveillance systems. It continues: commercial surveillance of individuals (especially consumers) and of society must be prevented, in strict "compliance with fundamental rights such as privacy (even in the case of 'free' services), taking into consideration the effects of alternative business models".
The document also recommends monitoring and restricting the development of lethal autonomous weapons. It calls on Member States, in the context of the current international discussions, to propose to their partners a moratorium on the development of lethal autonomous offensive weapons.
It also proposes that AI systems identify themselves as such: "Those who deploy AI systems should be responsible for making clear that, in reality, these systems are not human", the document states.
Piloting of the ethics guidelines
In addition to these policy and investment recommendations, the expert group is expected to launch the piloting process for the ethics guidelines, which will conclude with the presentation of a revised document in early 2020. This piloting process includes sectoral case studies, as well as testing of the assessment list by companies. (Original version in French by Sophie Petitjean)