Europe Daily Bulletin No. 13864
SECTORAL POLICIES / Digital

EU Council and European Parliament reach agreement to simplify AI rules and ban ‘digital nudifier’ apps like Grok

On Thursday 7 May, after nine hours of negotiations in the third trilogue, the Council and the European Parliament reached a provisional political agreement on the ‘Digital Omnibus’ package aimed at simplifying the application of the Artificial Intelligence Act (AI Act). The agreement introduces a new ban on non-consensual ‘nudification’ in the wake of the Grok scandal.

The agreed text will now undergo a legal and linguistic revision, and will still have to be formally adopted by the two co-legislators before the summer. The European Parliament plans to put the agreement to the vote at its plenary session in June.

Deferral of the application of the requirements applicable to high-risk AI systems. The provisional agreement introduces new application dates:
- 2 December 2027 for autonomous high-risk AI systems, such as those used in biometrics, critical infrastructure, education, employment, migration, asylum and border management;
- 2 August 2028 for high-risk AI systems embedded in products, such as toys, covered by EU sector-specific legislation.

In the absence of an agreement, the requirements applicable to high-risk systems under the AI Act would effectively have come into force on 2 August, even though the necessary technical standards and support tools are not yet available on the market.

The agreement also restores the obligation for suppliers to register in the European database high-risk AI systems that they consider to be exempt from this classification, as well as the strict necessity requirement for the processing of particular categories of personal data in order to ensure the detection and correction of biases.

New prohibitions provided for in Article 5. The co-legislators have agreed on a new provision in Article 5 of the AI Act prohibiting AI practices that generate non-consensual sexual or intimate content depicting identified persons, as well as material involving sexual abuse of minors. This provision will apply from 2 December.

“The prohibition on non-consensual intimate content should be limited to realistic depictions of intimate parts, in particular the genitals, pubic area, anus, exposed buttocks or exposed female breasts, including nipples or areolas, as well as sexually explicit activities”, reads a recital of the agreed text, according to sources consulted by Agence Europe.

The prohibition covers AI systems placed on the EU market for the purpose of creating this type of content, their placement on the market without reasonable safeguards to prevent such creation, and users exploiting these systems for this purpose, said the European Parliament in a press release. 

Machinery exempt from AI rules, but with safeguards. This provision responds to a key demand from Germany that had become one of the main sticking points in the negotiations: transferring the machinery regulation from Section A to Section B of Annex I, which exempts these products from the application of the AI Act rules.

“Safeguards will be included to ensure that the requirements of the machinery regulation reflect certain provisions of the AI Act”, Agence Europe has learned from an official European source directly involved in the negotiations.

The relationship between the rules of the AI Act and sector-specific legislation in sectors such as medical devices, toys, lifts, machinery and boats was also a major issue for the European Parliament, which wanted to exempt from the AI Act all products covered by sector-specific legislation in order to avoid overlap and reduce bureaucracy. 

The European Parliament’s lead negotiator, Arba Kokalari (EPP, Sweden), said that the Parliament had wanted to “transfer 12 pieces of legislation to sectoral law where they include AI-related provisions, but the Council has only accepted the machinery regulation”, judging the Council less ambitious in reducing administrative burden and regulatory overlap.

“It has been a major battle”, she admitted, rejecting the idea that this was a concession made only to Germany, since “many European companies have complained” about the confusion surrounding the rules applicable to machinery.

The co-legislators also agreed on a mechanism to resolve situations where sector-specific legislation contains AI-specific requirements similar to those of the AI Act, by limiting the application of the latter in these specific cases by means of implementing acts, the Council said in a press release.

The Commission has been empowered to adopt delegated acts under the machinery regulation to add health and safety requirements for AI systems classified as high risk under the AI Act. This solution would effectively address any overlap between the requirements for high-risk systems in the AI Act and those in sector-specific legislation.

The co-legislators have set deadlines for delegated acts amending the machinery regulation (by 2 August 2028), limiting the applicability of the AI Act requirements where they overlap with sectoral legislation (by 2 August 2027) and Commission guidelines on complementarity between the AI Act and sectoral legislation (August 2027), Agence Europe has learned from a European source. 

In addition, as part of the provisional agreement, the Commission will provide guidance to help economic operators in high-risk AI systems covered by harmonised sectoral legislation to comply with the requirements of the AI Act while reducing administrative burden.

Regulatory sandbox and deadlines for digital watermarking. Other elements of the provisional agreement include the extension of the deadline for the implementation of national AI regulatory sandboxes to 2 August 2027, and the reduction from six to three months of the period allowed for suppliers to implement transparency solutions for artificially generated content, with the new deadline set for 2 December next year.

The agreement postpones the application of digital watermarking obligations for AI systems generating synthetic audio, visual, video or text content already placed on the market until 2 August 2026. These watermarking techniques enable AI-generated content to be detected and traced. According to European sources, the necessary technical solutions should be ready by June.

Competences of the AI Office. The co-legislators have also clarified the competences of the AI Office to supervise AI systems based on general-purpose AI models developed by a single supplier. To this end, the agreement lists the exemptions for which national authorities retain competence, notably in the areas of law enforcement, border management, judicial authorities and financial institutions, the Council stated. (Original version in French by Ana Pisonero Hernández)
