Agence Europe
Europe Daily Bulletin No. 13539
COUNCIL OF EUROPE / Human rights

Council of Europe creates new tool to assess and mitigate impact of artificial intelligence on human rights

The Council of Europe announced, on Wednesday 4 December, the development of a new tool for assessing and mitigating the impact of artificial intelligence (AI) systems on human rights.

Called HUDERIA, this methodology can be used by public and private stakeholders to help them identify and address the risks and impact that AI systems pose, throughout their lifecycle, to human rights, democracy and the rule of law.

It provides for a programme to mitigate or even eliminate identified risks in order to protect citizens from potential harm.

For example, if an AI system used in a recruitment process proves to be biased against certain demographic groups, the mitigation programme may include adjusting the algorithm or implementing human controls.

This methodology requires regular reassessments in line with changes in the situation and in technology.

It was adopted by the Council of Europe’s Committee on Artificial Intelligence (CAI) at its plenary session on 26-28 November, and will be complemented in 2025 by the HUDERIA model, which will provide supporting materials and resources, including flexible tools and scalable recommendations.

Created in 2022, the CAI is behind the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature last September and signed by the EU.

The HUDERIA methodology is designed to support the implementation of this Framework Convention. 

Link to the HUDERIA methodology: https://aeur.eu/f/eno (Original version in French by Véronique Leblanc)
