Member States met in a working party on Wednesday 3 April to discuss a final compromise proposal from the Belgian Presidency of the EU Council on the regulation on the removal of online child sexual abuse material.
This was the first comprehensive compromise document since the Presidency presented its new approach earlier this year based on risk categorisation (see EUROPE 13360/20), under which a service's risk level would in turn justify issuing detection orders requiring platforms to detect this material in private communications. The text was also intended to supplement and clarify an initial interim compromise published on 13 March.
On Wednesday, however, the Member States were not yet able to identify a clear trend indicating that an agreement was close, a source reported. Their positions have remained largely unchanged: countries such as Germany and Austria, in particular, are still expressing scepticism about the detection orders and about their possible consequences for the encryption of private communications, which remains a sensitive issue.
Furthermore, according to this source, Germany has continued to argue in favour of splitting the regulation, as it did at the end of 2023. Nor has it been settled whether the regulation should cover only already-known content or new content as well, and whether the solicitation of children should also fall within its scope.
The Presidency will present a new text to the working party on 15 April. While the text of 13 March set out the broad outlines of the risk categorisation, designed to allay fears of widespread surveillance of private communications, the text dated 27 March provided details of the methodology, this source added.
On 13 March, the Presidency suggested developing a methodology for assessing the risk posed by specific services, or by parts or components of those services. “The idea would be to establish three categories in which (parts or components of) services could be classified as high-risk, medium risk or low-risk. This classification would be objectively defined following a specific procedure and based on a set of objective parameters (for example related to the type of service, the core architecture of the service, the provider’s policies and safety by design functionalities and user tendencies)”.
This process would give service providers the tools to self-assess the risks posed by their services, with a higher risk level entailing stronger safeguards and a greater number of obligations for providers.
The new text proposed, among other things, a timetable for risk analysis updates adapted to the degree of risk (at least once a year for high-risk services).
For countries that have been sceptical from the outset, reserving detection orders for high-risk services alone may not dispel the spectre of generalised surveillance, since the services concerned could be mass-market platforms with very large user bases, meaning a large number of people would still be affected. (Original version in French by Solenn Paulic)