
Europe Daily Bulletin No. 13311

12 December 2023
SECTORAL POLICIES / Digital
After more than 37 hours of negotiations, European Parliament and EU Council reach an agreement on artificial intelligence
Brussels, 11/12/2023 (Agence Europe)

Negotiators from the European Parliament and the EU Council reached a provisional political agreement on the draft legislation on artificial intelligence (the ‘AI Act’) (see EUROPE 13310/8) shortly before midnight on the night of Friday 8 to Saturday 9 December, after more than 37 hours of intense discussions.

“It was long and intense, but the effort was worth it. Thanks to the resilience of the European Parliament, the world’s first horizontal legislation on artificial intelligence will deliver on Europe’s promise, ensuring that rights and freedoms are at the heart of the development of this revolutionary technology. Its implementation will be essential”, said one of the two co-rapporteurs on the dossier, Brando Benifei (S&D, Italy).

“This is a historic achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies”, commented Spain’s Secretary of State for Digitalisation and Artificial Intelligence, Carme Artigas.

During the first 22 hours of negotiations - which began at 3pm on Wednesday 6 December and were suspended at 1pm the following day - the European co-legislators had already cleared the thorny issue of foundation models and general-purpose artificial intelligence systems.

On this point, although France, Germany and Italy had put pressure on the Presidency of the EU Council to replace the existing rules with codes of conduct (see EUROPE 13297/24), the European Parliament’s approach seems to have prevailed.

AI systems, and the models on which they are based, that present systemic risks will be subject to strict rules, including model evaluation, systemic risk assessment and mitigation, and adversarial testing. Serious incidents will have to be reported to the Commission, measures will have to be taken to ensure cybersecurity, and reports on the energy efficiency of the models will have to be produced.

The codes of conduct called for by Paris, Berlin and Rome will still see the light of day, but they will complement the AI legislation, helping providers of systems and models presenting systemic risks to comply with the future rules.

Models presenting systemic risks will be identified by the amount of computation used to train them. The threshold is set at 10^25 floating-point operations (10 yottaFLOPs) of cumulative training compute; below it, systems and models will only have to meet lighter transparency requirements, such as keeping technical documentation up to date, complying with the provisions of the Copyright Directive and publishing detailed summaries of the content used to train them.
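By way of purely illustrative arithmetic, and not something the agreement itself prescribes, the short Python sketch below compares a rough training-compute estimate against the 10^25 FLOP threshold. The “6 × parameters × training tokens” heuristic and the model figures are assumptions introduced here for illustration only.

# Illustrative sketch only: the agreement sets the systemic-risk threshold at
# 10^25 floating-point operations of cumulative training compute. The "6 * N * D"
# heuristic and the model figures below are assumptions, not part of the text.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # 10 yottaFLOPs = 10^25 FLOPs

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Rough estimate of cumulative training compute (common heuristic)
    return 6 * parameters * training_tokens

def regime(parameters: float, training_tokens: float) -> str:
    flops = estimated_training_flops(parameters, training_tokens)
    if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return f"{flops:.1e} FLOPs: above the threshold, strict systemic-risk rules apply"
    return f"{flops:.1e} FLOPs: below the threshold, lighter transparency requirements apply"

# Hypothetical examples
print(regime(parameters=70e9, training_tokens=2e12))     # ~8.4e23 FLOPs -> below
print(regime(parameters=1.8e12, training_tokens=13e12))  # ~1.4e26 FLOPs -> above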

High-risk AI systems, i.e. those posing significant risks to health, safety, fundamental rights, the environment, democracy and the rule of law, will also have to comply with a set of strict rules, failing which the product concerned could be withdrawn from the European market.

A fundamental rights impact assessment will have to be carried out, and users will have to be informed of the ‘high-risk’ nature of the systems they are using. Public entities using this type of system will have to register it in the EU database. However, in areas deemed critical, exemptions may be granted if providers can demonstrate that the system in question does not pose significant risks.

Open source models will be exempt, unless they have been identified as presenting systemic risks or are placed on the market as part of a system that could be considered high-risk.

Victory for the EU Council on security issues

In the second part of the negotiations, which began at 9am on Friday 8 December and ended with an agreement shortly before midnight, it was the EU Council that seemed to have the upper hand. The day was devoted to prohibited practices and national security issues. The text contains a list of prohibited practices, but numerous exemptions have been introduced at the request of certain Member States.

The provisional agreement allows AI systems to be used for military or defence purposes, including when they are used by an external contractor. The EU Council also secured, despite this being a key issue for the European Parliament, the lifting of the ban on real-time remote biometric identification in certain cases, such as the prevention of terrorist attacks or the search for victims or suspects of a predefined list of serious crimes, subject to prior authorisation and an assessment of the risk to fundamental rights.

Another sensitive point in the negotiations: the agreement also introduces an urgency procedure allowing law enforcement agencies to deploy, in an emergency, a high-risk AI tool that has not passed the conformity assessment procedure.

The untargeted extraction of facial images from the internet or from video surveillance footage to create facial recognition databases will be prohibited, as will the recognition of emotions in the workplace and in educational establishments, and social scoring based on behaviour or personal characteristics. AI systems that manipulate human behaviour to circumvent free will, and systems used to exploit people’s vulnerabilities due to their age, disability or social or economic situation, will also be banned.

Fines of up to €35 million

The co-legislators also agreed on the future European AI Office, responsible for overseeing the implementation of the text, contributing to the development of standards and testing practices, and enforcing common rules in all Member States.

A scientific panel of independent experts will guide the ‘European AI Office’ by developing standards and evaluation methods and by providing advice on the designation and emergence of high-impact foundation models. The future ‘European AI Office’ will be made up of representatives of the Member States and will be involved in drawing up codes of practice for foundation models.

The text provides for penalties. Fines will be capped at €35 million or 7% of the company’s annual worldwide turnover for violations of the ban on certain AI applications, at €15 million or 3% of annual worldwide turnover for breaches of the other obligations laid down in the AI legislation, and at €7.5 million or 1.5% of turnover for the supply of inaccurate information.
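As a purely illustrative sketch of this tiered structure: the article lists a fixed amount and a percentage for each tier without saying how they interact, so the “whichever is higher” reading, like the turnover figure, is an assumption made here for the example (Python).

# Illustrative sketch of the three penalty tiers reported above. Assumption:
# the applicable cap is the higher of the fixed amount and the share of annual
# worldwide turnover; the article only lists the two values side by side.

PENALTY_TIERS = {
    "prohibited AI applications": (35_000_000, 0.07),   # €35 million or 7%
    "other obligations":          (15_000_000, 0.03),   # €15 million or 3%
    "inaccurate information":     (7_500_000, 0.015),   # €7.5 million or 1.5%
}

def maximum_fine_eur(violation: str, annual_worldwide_turnover_eur: float) -> float:
    fixed_cap, turnover_share = PENALTY_TIERS[violation]
    return max(fixed_cap, turnover_share * annual_worldwide_turnover_eur)

# Hypothetical company with €2 billion in annual worldwide turnover
print(maximum_fine_eur("prohibited AI applications", 2_000_000_000))  # 140000000.0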

Finally, the text also provides that individuals may lodge a complaint with the relevant market surveillance authority if the provisions of the AI Act are not complied with.

The text still has to be put to the vote in the European Parliament and formally approved by the Member States. Before that, the ambassadors of the 27 Member States to the EU are due to examine the document on Friday 15 December, before officially approving it in January. A series of technical meetings will be held in the meantime to finalise the last aspects of the text.

The provisions relating to bans will come into force six months after the publication of the regulation in the Official Journal of the EU. Six months later, the rules on foundation models and conformity assessment bodies will come into effect. It will be another year before the text is fully implemented. (Original version in French by Thomas Mangin)
