Agence Europe
Europe Daily Bulletin No. 13179
SECTORAL POLICIES / Digital interview

Despite “major battles” ahead, Dragoș Tudorache stresses need to complete future inter-institutional negotiations on artificial intelligence before end of mandate

On Thursday 11 May, Members of the European Parliament’s Committee on the Internal Market and Consumer Protection (‘IMCO’) and of its Committee on Civil Liberties, Justice and Home Affairs (‘LIBE’) will vote on the report on the Artificial Intelligence Act (‘AI Act’) (see EUROPE 13172/12). On the European Parliament majority, the future inter-institutional negotiations and ChatGPT, the co-rapporteur on the dossier, Dragoș Tudorache (Renew Europe, Romanian), reflects on the situation with EUROPE. (Interview by Thomas Mangin)

Agence Europe - Parliament’s committee vote takes place on Thursday 11 May. Do you think you have gone as far as possible in seeking a balance between the political groups?

Dragoș Tudorache - I think it’s a good balance, yes. Perfect is the enemy of good, as the saying goes. Each political group probably wanted something different, or something more, including my own political group. When we started a year ago, we had over 3 000 amendments, going in almost every direction. It was not easy to reach a common agreement. Again, not everything is perfectly aligned with what any other political group, or I, initially wanted. But it is a package we can all live with and, in a way, it allows all political groups to come together ideologically in what they want to achieve.

From a purely political point of view, do you think that the text will be supported by a large majority?

I think the committee vote will be close on one point: the issue of biometric recognition in public spaces, which has been a very ideological point from the beginning, and remains one. For the rest, I think, there will be a comfortable majority.

The plenary vote of the Parliament is expected to take place in June. The trilogues will then begin with widely differing positions, against a backdrop of ever-changing technology. Is a ‘quick’ agreement really possible?

I am convinced that we can reach an agreement by the end of the year. The EU Council Presidency shares this optimism and this commitment to work hard and finish before the end of the year. This must be done before the end of this mandate. We cannot afford to start from scratch in 2024. Yes, there are issues where the EU Council is in one corner of the room and the Parliament in another. This is how EU processes work. There will be four or five major battles that we will have to fight, but I am still optimistic. Regarding the changing environment, many people ask how you can regulate a technology that is evolving so rapidly. My answer is that there is a lot of legislation in place that is over 100 years old. That is why we have created this comitology: when adaptations are necessary, they can be made quickly, with the help of experts, so that the legislation keeps up.

There were many changes at the end of the negotiations. Why?

We negotiated to the end on the most important issues. The items that we closed at the end were the ones that were the most hotly debated, such as Article 5 on prohibited practices, Article 6 on the classification of high-risk systems, general purpose AI, and generative AI such as ChatGPT. We fought until the end to reach an agreement. Now we have one.

To what extent has the emergence of ChatGPT driven you to rethink your approach?

If you are asking whether it was ChatGPT and the sudden upsurge in public attention regarding this subject that pushed us to do something, my answer is ‘not really’. Last year in the Parliament, we were already convinced that we should do something about this kind of AI system. The EU Council was also considering this, but in the end decided to do nothing. Some, for example on the left, wanted stricter rules. Others, more to the right, wanted to have as few obligations as possible for general purpose AI. As co-rapporteurs, Brando Benifei (S&D, Italian) and I took the approach of first of all recognising that you cannot put general purpose AI into the risk categories that were already in the regulation.

Is this why you have created a special regime for such systems?

The very logic of how risk categories work is that you look at the actual use: it is the use of the technology that makes it high risk, low risk or prohibited. In the case of ChatGPT, you can’t say that processing a text is risky, but you have to be careful about copyright. Putting it in one category, for example high risk, or in another would not have been consistent with the way the whole text was constructed. This is why we realised that we needed to design a special regime that recognised the characteristics of general purpose systems.

Some MEPs felt that generative AI deserved a legislative proposal of its own (see EUROPE 13168/12). What do you think?

I think it is far too early for that. I don’t think we know enough to work on a dedicated regulation for generative AI. I think what we have done now with the special regime in the text is the right balance when it comes to regulating the key elements of liability for developers of generative AI. We have also linked these elements to copyright law. And I think if, with time, it becomes apparent that there are other manifestations or uses of generative AI or other effects of generative AI that would be worth regulating further, then we can think about that. But for now, I think what we are doing is right.
