European Tech Law Faces Test to Address Interference, Threats, and Disinformation in 2024 Elections
from Diamonstein-Spielvogel Project on the Future of Democracy, Women Around the World, Women and Foreign Policy Program, and Women’s Political Leadership

Pin badges encouraging people to vote in June's European Parliament elections are placed at the Malta office information stand during a conference at the European Parliament, in Rabat, Malta May 22, 2024. REUTERS/Darrin Zammit Lupi

May 22, 2024 11:54 am (EST)


The European Union (EU) began implementing the Digital Services Act (DSA) this year, just in time to combat online disinformation and other electoral interference in the dozens of elections taking place across its twenty-seven member countries and in the European Parliament elections running June 6 through June 9. To prepare, the EU conducted a stress test of the DSA’s mechanisms for addressing elections targeted by false and manipulated information, incitement, and attempts to suppress voices. The European Commission has also opened investigations under the DSA against Meta, TikTok, and X out of concern that they are not doing enough to prevent these scenarios.


The DSA is a landmark piece of legislation not only because it is the most comprehensive regulatory effort to date to address digital threats, covering the roughly 450 million people living in the EU; its implementation will also inform other countries’ efforts to provide a secure and safe internet space. Even without additional legislation, the European law may induce the largest technology companies to voluntarily apply the same standards globally, as happened with the EU’s General Data Protection Regulation (GDPR), which led many platforms to routinely seek user permission for data collection and retention.


Tech companies’ responses to the DSA during the EU elections will be watched closely in the United States, where disinformation and electoral interference could roil the already contentious November elections. Despite years of debate, no U.S. guardrails have been implemented. Concerns over government censorship and free speech have stalled dozens of legislative proposals that would require tech companies to address various threats in the digital space and the risks arising from powerful new artificial intelligence (AI). The free speech argument overlooks the speech of those who are being doxed, threatened, attacked, and driven out of the public arena by vicious online actors, including women, who are far and away the most frequent targets of these attacks. Legislative action has also been impeded by concerns that overly burdensome regulation will inhibit tech companies amid a worldwide race to gain a competitive edge through generative AI and other innovations.

The DSA is a useful model constructed around three principles: due diligence requirements for tech companies, mandated transparency via public reporting of their compliance with those requirements, and the threat of hefty fines to ensure compliance and accountability. The size of the EU market is large enough that, as with the GDPR, some tech companies may be incentivized to comply with the law’s provisions even without sanctions. The DSA’s strictest provisions apply to the world’s largest online platforms and search engines (those with more than 45 million users in the EU). These companies are required to routinely assess activity on their platforms and services for “systemic risks” involving elections, illegal content, human rights, gender-based violence, protection of minors, and public health and safety.

Companies have delivered initial assessments, which are publicly available, as well as information about the actions they have taken to comply. The EU website hosts a massive and growing online archive of hundreds of thousands of content moderation decisions made by the companies. In early enforcement actions, the EU has requested that Meta and other companies take down false ads and has sought more information about their safety practices. For example, the EU queried X about its decision to cut its content moderation team by 20 percent since last October. Reduced content moderation on one of the world’s largest platforms is an obvious concern given the number of elections and the aggressive disinformation and interference campaigns by Russia and its proxies, which seek to boost the fortunes of rising right-wing populists, Euroskeptic parties, and pro-Russian and anti-Ukraine candidates, as recently occurred in Slovakia and other Central and Eastern European countries.


Thus far, no fines have been levied under the DSA, but the threat alone of stiff penalties of up to 6 percent of gross revenues has led most companies to provide the required information. This treasure trove of information about how tech companies are policing their own platforms is itself valuable; it enables governments and researchers to assess the effectiveness of measures employed by companies’ trust and safety divisions, some of which embrace the goal of a safe internet. The EU law explicitly seeks to guard free speech as well as innovation by companies, but the experience of implementation will inform lingering concerns about free speech, direct government decision-making, and censorship of content, including whether an authoritarian government could exercise control over its population through digital policing and firewalls. Those concerns color the current negotiations at the United Nations over a Global Digital Compact, which is to be announced as part of the Summit of the Future in September.

The essence of the DSA is not to make content decisions directly but to set standards for due diligence and require companies to demonstrate that they are monitoring and mitigating risks via their own codes of conduct. Voluntary standards may vary, but the sharpest debates revolve around defining what constitutes illegal content; the laws of the twenty-seven member states vary greatly on that question, and the EU’s effort to harmonize them has been difficult and contentious. The United Kingdom (UK) went through a similar multiyear debate over concerns about curtailing free speech before passing its Online Safety Act late last year. The UK law adopted some features of the DSA, including the due diligence reporting requirement and fines of up to 10 percent of gross revenue. It defines the scope of risks more narrowly than the DSA, although it does criminalize “extreme” pornography and may criminalize the creation of deepfake porn. Enforcement of the UK law awaits finalization of codes of conduct by year’s end. Germany’s Network Enforcement Act, in force since 2018 and one of the world’s stiffest hate speech laws, aims to stem rising neo-Nazi hate speech. The far-right Alternative für Deutschland party has surged in state elections and exceeded the popularity of the leading Social Democrats in national polls.


The process of making the internet safer is iterative; several countries have revised their laws based on the experience of implementing them as well as evolving circumstances. For example, Germany amended its law in 2021 to stiffen its requirement that companies take down “clearly illegal” content within twenty-four hours. Australia has revised its online safety law twice since its initial passage in 2015, requiring faster takedown of material deemed illegal and greatly expanding the law’s original focus on stopping child sexual abuse and exploitation and terrorist material. Speed of response is a critical factor in countering mis- and disinformation. Delayed action by tech companies has allowed viral propagation of material to proceed unhindered, as occurred in January when deepfake porn of pop star Taylor Swift, originating on the notorious 4chan message board, drew 47 million views shortly after it was posted.

That highly publicized episode drew attention to the disproportionate targeting of women and girls by online violence, especially women in public life such as politicians, journalists, and human rights activists, as well as minorities. The chilling effect on political participation has also been documented. The UK parliament rushed to act on deepfake porn after a number of women candidates were targeted this spring. Growing attention to the magnitude of the effects on women spurred the Biden administration to form a fourteen-country global partnership for action on online harassment and abuse. And last month, the EU concluded years-long negotiations to issue a directive on online gender-based violence and threats, including the nonconsensual sharing of intimate images, deepfake porn, and other forms of attack. Member states are required to pass laws to implement the directive within two years.

The 2024 elections will serve as an initial test case for the DSA’s ability to rein in this wide variety of election interference, threats, and disinformation. Given the nascent regulatory architecture and companies’ varied compliance records, further scrutiny and modification will certainly be needed. Big Tech companies will be required to provide public after-action reviews of the effectiveness of their measures to label AI-generated content, moderate discourse, identify foreign interference, and meet other guidelines for each country’s elections. These much-needed first steps will help light the way for others.

This publication is part of the Diamonstein-Spielvogel Project on the Future of Democracy.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.
