MEPs have warned that the European Union does not have a sufficient regulatory framework to deal with the security risks associated with artificial intelligence models such as Anthropic's Mythos. At a debate in the IMCO Committee on Wednesday 6 May, devoted specifically to the cybersecurity risks of these models, they called on the European Commission to take urgent action.
MEP Pablo Arias Echeverría (EPP, Spanish) argued that Mythos has exposed a clear cybersecurity problem in the US, revealing significant vulnerabilities and the fact that “many people” can access these models. He asked the European Commission what measures could be taken to “mitigate” the risk of them being used “for political ends”.
MEP Christel Schaldemose (S&D, Danish) questioned whether the EU has the financial resources and skills to develop language models to “protect itself” against such advanced AI systems. “Are we capable of defending ourselves?” she asked, insisting on the need to apply the Cyber Resilience Act “sooner”. Her colleague José Cepeda (S&D, Spanish) warned that the AI Act does not cover systems for military use.
“There is a serious problem with security systems. AI generates hallucinations. It could create suspicions; we could end up with an AI like Mythos that would create non-existent conflicts”, warned Gheorghe Piperea (ECR, Romanian).
The Renew Europe and The Left groups regretted that Anthropic had refused to take part in the debate. Anna Stürgkh (Renew Europe, Austrian) warned that systems like Mythos open the door to “major large-scale cyber attacks” against hospitals, banks, etc., and stressed that the EU needs “clear action” and “clear rules” to meet this challenge. Her colleague Bart Groothuis (Renew Europe, Dutch) pointed out that the majority of risks stem from “legacy software” and called on the European Commission to provide clear guidelines for companies to “get rid of” such software. “That’s where hackers live”, he warned.
From The Left, Leila Chaibi (GUE/NGL, French) also questioned the “sufficiency” of the current EU regulatory framework to deal with open source models that could be developed in six to twelve months, including in China. She also stressed that the EU must develop its own “sovereign capabilities”.
The European Commission stated that the EU already has a cybersecurity framework that is “designed to be future-proof” and capable of responding to potential risks. “We now need to implement it properly and clarify certain aspects. The combined application of the AI Act and the Cyber Resilience Act will ensure that all AI systems with systemic risks and AI-enabled products benefit from a high level of cybersecurity”, insisted Despina Spanou, Deputy Director General at DG CNECT. However, she acknowledged that discussions on the Cybersecurity Act 2 could help identify gaps, “including those related to AI”.
Lucilla Sioli, Director of the Artificial Intelligence Office at DG CNECT, pointed out that although military uses are excluded from the scope of the AI Act, from 2 August the Commission will be able to request information and access models for evaluation purposes. “We will also have the power to impose fines of up to 3% of annual worldwide turnover. In extreme cases, we may also restrict the availability of the model on the European market”, she added.
“Difficult times lie ahead, with many organisations facing more attacks, and more software and hardware problems, than they do today”, said Hans de Vries, Director of Operations and Cybersecurity at ENISA, acknowledging that “legacy environments” pose challenges.
“Perhaps we also need to learn the lessons of the Mythos case and incorporate them into the legislation currently under review or soon to be reviewed. We need to improve our common use, our security capabilities and our resilience, and ENISA is determined to contribute to this”, he concluded, highlighting the potential of these models to help businesses resolve the vulnerabilities identified. (Original version in French by Ana Pisonero Hernández)