EU deal targets big tech for hate speech and disinformation

The European Union reached a historic agreement early Saturday to tackle hate speech, misinformation and other harmful content online.

The law will force big tech companies to do more self-monitoring, make it easier for users to report problems, and empower regulators to punish non-compliance with billions in fines.

EU officials finally reached the tentative agreement in the early hours of Saturday. The Digital Services Act will overhaul the digital regulations of 27 countries and cement Europe’s reputation as a global leader in reining in the power of social media companies and other digital platforms, such as Facebook, Google and Amazon.

“With the DSA, the days of big online platforms behaving as if they were ‘too big to care’ are coming to an end,” said EU Internal Market Commissioner Thierry Breton.

European Commission Vice-President Margrethe Vestager added that “with today’s agreement, we are ensuring that platforms are held accountable for the risks their services may pose to society and citizens”.

The law is the EU’s third major law targeting the tech industry, a notable contrast to the United States, where lobbyists representing Silicon Valley interests have largely been successful in keeping federal lawmakers at bay.

While the Justice Department and the Federal Trade Commission have filed major antitrust suits against Google and Facebook, Congress remains politically divided on efforts to tackle competition, online privacy, misinformation and more.

Managing user content

New EU rules, designed to protect internet users and their “fundamental rights online”, are expected to make tech companies more liable for content created by users and amplified by their platforms’ algorithms.

Breton said regulators will have plenty of stick to back up the rules.

“It entrusts the Commission with the supervision of very large platforms, including the possibility of imposing effective and dissuasive sanctions of up to 6% of global turnover, or even a ban on operating in the EU single market in the event of repeated serious breaches,” he said.

The agreement in principle was struck between the European Parliament and the member states. It has yet to be formally endorsed by those institutions, but that step is not expected to face political resistance.

“The DSA is nothing less than a paradigm shift in technology regulation. It is the first major attempt to establish rules and standards for algorithmic systems in digital media markets,” said Ben Scott, a former technology policy adviser to Hillary Clinton who is now executive director of advocacy group Reset.

Negotiators hoped to strike a deal before Sunday’s French elections, since a new French government could take different positions on digital content.

The need to regulate Big Tech more effectively became clearer after the 2016 US presidential election, when it was discovered that Russia had used social media platforms to try to influence the country’s vote. Tech companies like Facebook and Twitter promised to crack down on misinformation, but the problems have only gotten worse. During the pandemic, health misinformation thrived, and once again companies were slow to act, clamping down only after years of letting anti-vaccine falsehoods spread on their platforms.

Tools for reporting content

Under the EU law, governments could ask companies to remove a wide range of content considered illegal, including material promoting terrorism, child sexual abuse, hate speech and commercial scams. Social media platforms such as Facebook and Twitter will have to give users tools to flag such content in an “easy and efficient” way so that it can be quickly removed. Online marketplaces like Amazon will have to do the same for questionable products, such as counterfeit sneakers or dangerous toys.

These systems will be standardized so that they work the same on any online platform.

Tech giants have lobbied furiously in Brussels to relax EU rules.

Twitter said on Saturday it would review the rules “in detail” and supported “smart, forward-thinking regulation that balances the need to address online harm with protecting the open internet.”

Google said in a statement Friday that it looked forward to “working with policymakers to get the remaining technical details to make sure the law works for everyone.” Amazon referenced a blog post from last year that said it welcomed measures that boost trust in online services. Facebook did not respond to requests for comment.

The Digital Services Act would ban ads aimed at minors, as well as ads targeting users based on their gender, ethnicity or sexual orientation. It would also ban deceptive techniques that companies use to nudge people into doing things they didn’t intend to, such as signing up for services that are easy to join but hard to cancel.

Annual risk assessments

To show that they are making progress in limiting these practices, tech companies will have to conduct annual risk assessments of their platforms.

Until now, regulators haven’t had access to the inner workings of Google, Facebook and other popular services. But under the new law, companies will have to be more transparent and provide information to regulators and independent researchers about content moderation efforts. This could mean, for example, that YouTube transmits data indicating whether its recommendation algorithm directed users to more Russian propaganda than normal.

To enforce the new rules, the European Commission is expected to hire more than 200 new staff. To pay for that, tech companies will be charged a “supervisory fee,” which could amount to up to 0.1% of their annual global net income, according to negotiators.

The EU reached a separate agreement last month on its so-called Digital Markets Act, a law aimed at limiting the market power of tech giants and making them treat smaller rivals fairly.

And in 2018, the EU’s General Data Protection Regulation set the global standard for data privacy protection, although it has been criticized for failing to change the behavior of tech companies. A big part of the problem is that a company’s lead privacy regulator is in the country where its European headquarters are located, which for most tech companies is Ireland.

Irish regulators have opened dozens of data privacy inquiries, but have issued judgments on only a handful. Critics say the problem is a lack of staff, but the Irish regulator says the cases are complex and time-consuming.

EU officials say they have learned from this experience and will make the bloc’s Executive Commission the enforcer of the Digital Services Act and the Digital Markets Act.