The fight against fake news should have been a simple task. France wants new laws that would allow courts to rule on the accuracy of online reporting ahead of national elections. The United States has proposed legislation that would increase transparency about who buys political ads on social media. The European Commission, meanwhile, will present at the end of the month its code of practice against misinformation, aimed at suppressing digital lies ahead of next year's European Parliament elections.

But as voters from Bavaria to Georgia head to the polls, politicians' efforts to stamp out fake news face an unpleasant reality: purveyors of fake news are one step ahead, thanks to techniques that let them hide their location, camouflage themselves as local activists and buy political ads in local currencies to avoid rules against foreign influence.

Newer tricks, which also include a shift toward disinformation spread through photos and through Internet messaging apps such as WhatsApp, are designed to defeat the outdated definition of what fake news is: bought from abroad, easily recognizable and grossly untrue.

The failure is largely due to the way politicians approach the problem. Legislators in Europe, the United States and elsewhere in the world have focused on fighting the last war, the one that began after the 2016 US presidential election, when Russian-backed groups bought social media ads (mostly in rubles) to sow suspicion among Western voters and promoted tweets and Facebook groups in broken English to polarize the political debate.

"At the policy level, the conversations that take place are based on the events of 2016. The challenge is that politicians know almost nothing about how these platforms actually work." Says Clare Wardle, CEO of “First Draft”, a nonprofit organization that combats misinformation around the world, located at the John F. Kennedy School of Management at Harvard University.

But that does not mean the global response to misinformation is a complete waste of time. Gone are the days when Mark Zuckerberg scoffed at the idea that lies spreading virally through his social network had influenced voters in the 2016 US election. Under heavy scrutiny and pressure from investors, tech companies like Facebook have finally acknowledged that they are part of the problem, cracking down on digital fraudsters trying to buy ads to spread misinformation and removing hundreds of thousands of state-sponsored social media profiles that promoted polarizing political messages.

It is still difficult for the average voter to recognize misinformation. But now that the term "fake news" is part of everyday vocabulary, there is at least greater awareness that not everything people read and see online should be taken at face value. Still, suppressing fake news remains a work in progress.

Four out of five Twitter profiles that spread fake news during the 2016 US election, for example, are still active today, according to a recent study by academics at George Washington University. And much of Facebook's digital universe, still the biggest platform for fake news, remains inaccessible to researchers, despite the company's promises to share internal data with outsiders looking into misinformation.

Amid global calls for technology companies to do more, these private companies are taking on greater responsibility for policing speech on the Internet, a job that usually falls to public regulators. The fact that platforms now have to act as quasi-regulators of the truth puts them at odds with politicians who regularly use those same platforms to share their own partisan content, leading to accusations that Big Tech is restricting legitimate political speech.

"I'd rather see how democratically elected officials make decisions instead of shifting that responsibility to the private companies. Do we really want to encourage private companies to restrict speech that might be legal?" says Rasmus Kleis Nielsen, director of the Reuters Institute for Journalism Studies at Oxford University.

Mariya Gabriel, the European Commissioner for Digital Economy and Society, will present the new code against misinformation on October 16. It is a collection of non-binding guidelines that encourage companies like Google, Facebook and Twitter to increase the transparency of political advertising on the Internet and to reduce the number of bogus social media profiles. The code was drawn up in close consultation with the technology companies themselves. However, a monitoring group composed of media companies, among others, criticized the EU's response to fake news, claiming, among other things, that the code contains no "compliance and implementation tools, and thus no opportunity to monitor the implementation process".

Despite the criticism, countries from Brazil to India are considering codes of conduct for Big Tech similar to the European one. But these efforts are not keeping up with the tactics of a new generation of fraudsters and government-sponsored entities. While earlier groups spread fake news on social networks from computers that could easily be traced to Russia, individuals today routinely disguise their activities, pretending to be located in the United States or parts of Europe. They also buy political ads in local currencies, not rubles, according to researchers who track misinformation online.

That makes it almost impossible to determine who is real and who is not in the fake news game, even when tech companies deploy entire teams of engineers and sophisticated artificial intelligence to hunt down the culprits.

"The tactic has shifted. People are now working harder to hide their traces," said Ben Nimo of the Atlantic Council's Digital Technology Research Lab, which locates online misinformation campaigns.

The fight has also moved from words to pictures, a trend that very few legislative proposals have taken into account. Part of the shift is practical: for someone trying to spread misinformation during an election campaign in a country whose language they do not speak, a well-crafted viral meme evades detection far more easily than a poorly translated Facebook post. There is a further advantage: such misinformation remains extremely difficult to censor or regulate, because the same image can be used for both satire and fake news.

This new generation of misinformation has risen alongside Internet messaging applications such as WhatsApp, which are almost impossible to regulate because of their strong encryption and which allow content to be shared among groups of thousands of users in developing countries such as India.

Such is the state of fake news at the end of 2018, a world away from just two years ago. To counter the threat, politicians must either revise their tactics or be left behind.

Source: novatv.mk

FUNDED BY


This project is partially supported by the US Embassy. The opinions, findings and conclusions or recommendations expressed herein are those of the implementer(s)/author(s) and do not reflect those of the US Government.
