The European Parliament continued its work on the Artificial Intelligence legislation (‘AI Act’) at a technical meeting on Thursday 26 January, seeking to reach a position (see EUROPE 13097/10).
A new version of the compromise text was discussed on this occasion. It proposes to revise certain criteria for the classification of high-risk artificial intelligence (AI) systems. Under the new wording, a system could be classified as ‘high risk’ if it presents a risk of harm to health, safety or fundamental rights.
Developers and suppliers of AI systems would be able to use a regulatory sandbox - a mechanism allowing a supplier to test an AI system for a defined period without necessarily having to comply with the regulatory framework - to determine the level of risk the system poses. During these test phases, AI system providers should also take into consideration the risks that could arise from misuse of the system concerned.
The issue of regulatory sandboxes was also revisited. On this point, the text now provides for Member States to set up joint sandboxes, while each Member State would remain under the obligation to have at least one regulatory sandbox in place.
The quality of the data available and required, which has been the subject of much discussion, was also addressed. Manufacturers of AI systems should ensure that the datasets used cannot adversely affect health, safety or fundamental rights. (Original version in French by Thomas Mangin)