At a meeting of the EU Council’s Working Party on Telecommunications last Tuesday, 25 October, EU Member States discussed the latest version of the Czech Presidency’s compromise text on harmonised rules on artificial intelligence (AI) (see EUROPE 13029/9).
The new version of the text includes several notable changes, such as the reintroduction of the term ‘remote’ in the phrase “remote biometric identification”.
As a result, the Czech Presidency of the EU Council hopes to “eliminate doubts about the possible inclusion of systems used for identification, which should not be covered by this definition, including, for instance, the fingerprint identification systems”. The text now also specifies that remote biometric identification is done “without active involvement of persons”.
The text also returns to the question of the competent national authorities: the compromise proposal establishes that, when AI systems are commissioned or used by the EU institutions, the European Data Protection Supervisor should assume the responsibilities that, in the Member States, are entrusted to the competent national authorities.
Exemptions have also been added in this new version of the document, so that providers and authorised representatives of AI systems are not obliged to register in the EU database certain high-risk AI systems in the areas of law enforcement, migration, asylum and border control management, and critical infrastructure.
Similarly, the text proposes that public authorities, agencies or bodies that use high-risk AI systems in the field of critical infrastructure should likewise be exempted from the registration requirement in the EU database for high-risk AI systems.
The compromise document also proposes that providers of general-purpose AI systems should be able to participate in ‘regulatory sandboxes’, which allow participants to test a technology or service under development for a limited period without necessarily having to comply with the full regulatory framework, and to apply the codes of conduct provided for in the text if they so wish.
Finally, the word “may” has been replaced by “shall”, making the publication of exit reports mandatory for the competent national authorities. The European Commission and the future European Artificial Intelligence Board would be able to use the lessons learned from the ‘regulatory sandboxes’ to produce reports and make them publicly available, subject to the agreement of the parties concerned.
See the document: https://aeur.eu/f/3vt (Original version in French by Thomas Mangin)