FIGHT AGAINST FAKE NEWS: POLITICIANS RELY HEAVILY ON TECHNOLOGY GIANTS


When it comes to combating fake news and hate speech, politicians from Brussels to Washington are trying to have it both ways.

As the world prepares for a new round of elections in the US and Europe, lawmakers want companies like Facebook and Google to take more responsibility for policing the content posted on their social networks, while at the same time warning that these tech giants are gaining too much power over every aspect of citizens’ online lives.

The reality is that they can’t have it both ways.

Policy makers can certainly hand over control to Big Tech, shifting responsibility for deciding what can and cannot be shared online to companies that have the financial and technical resources to get the job done. But such efforts (already visible in the ban that tech companies imposed on Alex Jones, the American far-right media personality, and in the forthcoming anniversary of Germany’s hate speech rules) are likely to entrench these companies as digital watchdogs, at a time when even politicians who are their biggest supporters are starting to wonder whether Silicon Valley has too much power.

It all comes down to an awkward choice about who we want deciding what counts as free speech, misinformation and hate speech: democratically elected officials, many of whom do not grasp the technological complexity of these issues, or private companies accountable to their shareholders, not to voters.

Don’t get it wrong: Big Tech must take responsibility for the photos, postings and – increasingly – the misinformation and extremist speech that started defining social media in 2018. The time when Facebook could call itself a neutral platform, and Twitter could hide behind the First Amendment, has passed. These companies do not want to admit it, but they have transformed from mere digital platforms into the media barons of the 21st century. And with such power inevitably comes great responsibility, especially when two out of every five Europeans visit some form of social media every day, according to EU statistics. (The figure is even higher in the United States, according to the Pew Research Center.)

So far, however, governments have been more than willing to let companies decide how to respond to online misinformation and extremist content.

The European Commission’s voluntary code – a series of measures aimed at suppressing the worst forms of hate speech and promoting media literacy – is just that: voluntary.

Despite threats by Vera Jourova, the European Commissioner for Justice, that stricter laws will be adopted if Facebook and the others do not clean up their act, it is unlikely that Brussels will follow through, according to several people familiar with the Commission’s thinking. Opinion remains divided on the role governments should play in deciding what constitutes hate speech and misinformation in the Twitter era.

Others say that because Big Tech is making so much money, the companies, not the regulators, should pay for the legions of content moderators needed to keep up with fake news peddlers and social media bots.

The United States is no better.

Although Facebook has removed a number of “false profiles” that attempted to stir up existing social and cultural divisions, politicians cannot agree on new rules to increase the transparency of political advertising, much less decide how to curb hate speech in a way consistent with the First Amendment.

The result?

Governments rely too heavily on tech companies for even the most basic information, especially about who is buying political ads ahead of the November midterm elections in the United States. It’s like a farmer asking the turkeys to organize Christmas. By making technology companies the first line of defense in the fight against misinformation and extremist speech, politicians are walking into a serious trap.

True, no regulator anywhere in the world has the technological expertise or the deep pockets of Silicon Valley, which believes that artificial intelligence and machines can solve the problem of harmful or untrue reports being written and spread (by humans or machines) on the internet.

But by placing their trust in Big Tech – many of the same companies that created these platforms in the first place – politicians are making two key mistakes.

First, they are empowering technology companies as pseudo-regulators with almost no oversight by government agencies, something that has already happened to Google under Europe’s strict privacy rules. If people already object publicly to Facebook, for example, collecting their online data, how will they feel when this gigantic social network routinely starts making quasi-judicial decisions?

Second, in a world where many are already questioning the dominance of a handful of big names on the American West Coast, politicians are reinforcing that supremacy by giving these companies a central role in the way governments respond to digital misinformation and hate speech.

Of course, it would be unreasonable to expect politicians alone to solve the problem of misinformation and hate speech. To their credit, Google, Facebook, Twitter and the like have made changes to eradicate the worst forms of internet content, especially when it comes to alleged election interference and internet trolling. But governments must draw a thicker line between themselves and the technology companies whose platforms have had an almost unprecedented impact on people’s everyday online lives.

There is no easy answer in the battle between protecting free speech and controlling harmful content on the internet. But such difficult decisions should be made (for better or worse) by elected government officials, not technology moguls. Otherwise, hate speech and misinformation will not be the only forces undermining the country’s democratic institutions.

Date: 10.06.2019
Source: NovaTv.

FINANCED BY

This project was funded in part through a U.S. Embassy grant. The opinions, findings, and conclusions or recommendations expressed herein are those of the implementers/authors and do not necessarily reflect those of the U.S. Government.
