The European Parliament’s co-rapporteurs for the artificial intelligence legislation discussed compromise amendments on Wednesday 11 January. One of the issues discussed was the inclusion of a fundamental rights impact assessment for AI systems classified as ‘high risk’.
This assessment would cover elements such as the system's geographical scope, specific risks to certain groups of people, environmental impact, and consistency with existing European and national legislation. Users would also have to draw up a detailed plan for how risks to fundamental rights can be avoided.
The issue of obligations for users of high-risk AI systems was also discussed. On this point, the latest version of the compromise text states that users of high-risk systems will have to inform the supplier or distributor of the AI system, as well as the competent national authority, whenever they believe there is a risk to health, safety or respect for fundamental rights.
In addition, the version of the text on which MEPs based their discussion also specifies that users of a high-risk AI system should obtain the consent of employees before it is implemented in a workplace.
Still on the subject of obligations, the co-rapporteurs also addressed the point that the various intermediaries (distributors, importers, users) would be considered suppliers, and thus subject to the same rules, when they change, for example, the initial purpose of a high-risk AI system. (Original version in French by Thomas Mangin)