With less than a week to go before the European elections, the Commission carried out a new assessment on 17 May of the measures taken by digital platforms to combat online disinformation. Although it identifies some room for improvement, it is relatively satisfied with the progress made during this fourth and penultimate monitoring exercise.
The Commission defines disinformation as 'information that can be verified as false or misleading, that is created, presented and disseminated for economic gain or with the deliberate intention of deceiving the public, and that is likely to cause public harm'. In recent months, it has launched a series of initiatives to protect democratic debate in Europe:
- the code of good practice against disinformation (see EUROPE 12104/1);
- the action plan for a coordinated approach to disinformation (see EUROPE 12153/8);
- the election integrity package and the East Stratcom working group.
"We are not judging content but the manipulation of the online public space", noted a European official who came to testify to the progress made.
The measures implemented by the platforms
In April 2018, the European Commission decided to rely on platforms to combat online disinformation and not to legislate (see EUROPE 12010/5).
Since the beginning of 2019, it has published a monthly assessment of the measures taken by Facebook, Twitter and Google. As in the third report (see EUROPE 12240/6), it again adopts a more conciliatory tone, noting that in April 2019 the three platforms made political advertising libraries available to the public and provided specific data on their actions to better monitor advertising placement. It is also pleased that Microsoft announced in early May its willingness to subscribe to the code.
Among the weak points identified, the European institution expects more progress on the integrity of services, including advertising services, as well as on the level of detail of the data provided, so as to enable an independent and accurate assessment of progress. Google and Twitter, in particular, have not done enough to meet the commitments they made on advertising.
"Google reported on its ongoing efforts to provide transparency around issue-based advertising, but announced that a solution would not be in place before the European elections", the Commission notes.
In addition, the institution is calling for better cooperation with fact checkers and the research community.
In any event, it is giving itself until the end of the year before considering regulatory measures.
Networks, alert system and toolbox
The eve of the elections is also an opportunity to take stock of the initiatives put in place by the EU, both on the Commission side and on that of the European External Action Service (EEAS).
It seems that no warnings have yet been issued under the early warning system to combat disinformation, the flagship measure of the action plan.
"We defined a threshold for identifying cross-border issues that may have security implications and require a response from one or more Member States. And we have not yet reached the level that could fall into these criteria", noted one European official.
However, he stressed that the effectiveness of the early warning system should not be measured by the number of alerts issued, but rather by its ability to increase the level of preparedness and cooperation. He also highlighted the new regime of sanctions against cyberattacks, adopted on Friday by the Council of the EU (see EUROPE 12257/9).
The national electoral cooperation network, provided for in the electoral integrity package, met three times and all Member States were represented (see EUROPE 12094/6, 12177/22).
According to this European official, it seems that a "post-election package" is being considered at this stage. The subject will be discussed at the June European Council.
A few figures
According to the website 'EU vs Disinfo' managed by the EEAS East Stratcom working group, 1,483 cases of disinformation have been identified over the past year, including 661 cases directed at Ukraine, 351 at the USA, 255 at the EU and 105 at Syria.
The platforms' monthly reports also point to other countries of origin, such as Iran, Brazil and India. However, "the more Facebook looks at manipulative behaviour, the more you see the presence of bad actors in the EU as well", said another European official.
Facebook and transnational political advertising
Although not provided for in the code of good practice, Facebook introduced new rules in mid-April to prevent transnational political advertising. These rules, which require advertisers to have a legal representative in the country where the advertisement is broadcast, posed a problem for European political parties.
After many delays, it seems that the social network has finally lifted this obligation temporarily, for the duration of the European elections. The Commission received confirmation of this from Parliament and NGOs on 10 May.
Link to the interim report: https://bit.ly/2w1jXgq. (Original version in French by Sophie Petitjean)