On Friday 1 December, the Member States’ ambassadors to the European Union will discuss the issue of foundation models in the context of the ‘AI Act’ (see EUROPE 13301/9). They will attempt to reach a compromise at a time when the approach proposed by the Spanish Presidency of the EU Council on this point is not supported by France, Germany and Italy, and when a potential final round of inter-institutional negotiations (‘trilogue’) is scheduled for 6 December.
In a document submitted to the Member States on Tuesday 28 November, the Spanish Presidency attempted to strike a balance in order to obtain a revised negotiating mandate.
To this end, it proposes to align with the French, German and Italian approach - which seeks to replace the strict rules sought by the European Parliament with codes of conduct - while introducing obligations on suppliers of general-purpose AI systems regarding the technical documentation they make available, to guarantee that they comply with the regulation.
AI models likely to present systemic risks would have to comply with certain additional rules. To identify these models, the EU Council Presidency proposes to rely on the amount of computation used to train them and on their number of professional users. For the time being, the latter threshold would be set at 10,000 professional users in the EU.
Adopting part of the approach proposed by the European Commission in an attempt to relaunch negotiations (see EUROPE 13297/24), the Spanish Presidency suggests that all model suppliers be subject to codes of conduct. However, these codes would include additional measures for models presenting greater systemic risks.
In an attempt to reach a compromise, it confirms the creation of the future ‘European Artificial Intelligence Office’, responsible for overseeing the implementation of the text. Member States, for their part, would designate at least one notifying authority and at least one market surveillance authority. (Original version in French by Thomas Mangin)