Agence Europe
Europe Daily Bulletin No. 12978
SECTORAL POLICIES / Digital

French Presidency of EU Council clarifies possibilities of use for biometric recognition in artificial intelligence

On Friday 17 June, Member States discussed the consolidated version of the French Presidency of the EU Council’s (FPEU) compromise text on future harmonised rules on artificial intelligence (see EUROPE 12973/34) at a meeting of the EU Council Telecommunications Group.

In concrete terms, the text focuses first on biometric recognition (see EUROPE 12950/7). On this point, the FPEU provides that the notion of biometric identification should be defined in a “functional” way and that verification and authentication systems, whose sole purpose is to confirm that a natural person is the person he or she claims to be or to identify a person seeking access to a service or premises, should be excluded.

This exclusion is justified by the fact that such systems are “likely to have a minor impact on fundamental rights of natural persons compared to biometric identification systems which may be used for the processing of the biometric data of a large number of persons”, explains the FPEU.

Furthermore, the text states that “considering the significant consequences for persons in case of incorrect matches by certain biometric identification systems”, the systems concerned should be subject to enhanced human oversight.

This would ensure that “no action or decision may be taken by the user on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons”, the compromise document adds.

The persons responsible for this verification could belong to one or more entities and include the person operating the AI system in question, the compromise document details.

More flexibility for regulatory sandboxes

The text also returns to the issue of regulatory sandboxes in detail and gives flexibility to Member States, through the possibility for national authorities to create regulatory sandboxes for the development, training, testing and validation of AI systems before they are placed on the market or put into service.

Regulatory sandboxes - which allow, for example, industry players to test the service developed without being obliged to comply with the entire regulatory framework for a limited period of time - could also be supervised by the competent national authorities when it comes to AI systems provided by EU institutions, bodies and agencies.

The text also addresses the issue of errors in the datasets used (see EUROPE 12928/24). In this respect, the FPEU believes that datasets should be as complete and error-free as possible, taking into account technical feasibility and data availability, and that “appropriate” risk management measures should be in place to address any errors within the datasets.

Finally, the FPEU has also added in this new version of the compromise text the fact that high-risk AI systems should be “designed and developed” with appropriate technical solutions to “prevent or minimise harmful or undesirable behaviour”, including through mechanisms that allow a system to safely interrupt its operation in the presence of certain anomalies.

See the compromise document: https://aeur.eu/f/2a5 (Original version in French by Thomas Mangin)
