The information we encounter online every day can be misleading, incomplete or fabricated. Being exposed to “fake news” on social media platforms such as Facebook and Twitter can influence our thoughts and decisions. We’ve already seen misinformation interfere with elections in the United States.

Facebook founder Mark Zuckerberg has repeatedly proposed artificial intelligence (AI) as the solution to the fake news dilemma. However, many experts agree that AI technologies are not yet advanced enough, so the problem still requires a high degree of human involvement.

Gianluca Demartini, an associate professor at The University of Queensland, and two colleagues have received funding from Facebook to independently research a “human-in-the-loop” AI approach that might help bridge the gap. Human-in-the-loop refers to involving humans (users or moderators) to support AI in doing its job, for example by creating training data or manually validating the decisions the AI makes. Their approach combines AI’s ability to process large amounts of data with humans’ ability to understand digital content. That combination suits the fake news problem on Facebook, given both its massive scale and the subjective judgement involved in deciding what counts as fake.
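To make the idea concrete, here is a minimal sketch in Python of the manual-validation side of human-in-the-loop. It is an illustration under assumed names and toy data, not the researchers’ actual pipeline: human reviewers check the AI’s decisions, and their corrections become new training examples.

```python
# Sketch of the "manual validation" part of human-in-the-loop:
# humans review AI decisions, and their judgements become new
# training data. All names and data here are hypothetical.
from dataclasses import dataclass

@dataclass
class Review:
    text: str          # the article the AI labelled
    ai_label: str      # what the AI predicted ("fake" or "real")
    human_label: str   # what the reviewing human decided

def update_training_data(reviews: list[Review]) -> list[tuple[str, str]]:
    """Keep the human label, whether or not it agrees with the AI."""
    return [(r.text, r.human_label) for r in reviews]

reviews = [
    Review("Aliens endorse local candidate", "real", "fake"),  # AI was wrong
    Review("Council passes transport budget", "real", "real"),
]
print(update_training_data(reviews))
# [('Aliens endorse local candidate', 'fake'),
#  ('Council passes transport budget', 'real')]
```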

According to Demartini and his colleagues: “The dataset we’re compiling can be used to train AI. But we also want all social media users to be more aware of their own biases when it comes to what they dub fake news.”

Facebook employs thousands of people as content moderators. These moderators spend eight to ten hours a day looking at explicit and violent material such as pornography, terrorism, and beheadings, to decide which content is acceptable for users to see. Think of them as cyber janitors who clean our social media by removing inappropriate content. They play an integral role in shaping what we interact with. A similar approach could be applied to fake news, by asking Facebook’s moderators which articles should be removed and which should be allowed. AI systems could do this automatically at a large scale by learning what fake news is from manually annotated examples. But even when AI can detect “forbidden” content, human moderators are needed to flag content that is controversial or subjective.
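As an illustration of how this could work (a minimal sketch with toy data and an assumed confidence threshold, not Facebook’s actual system), a classifier trained on manually annotated examples can label articles automatically and route low-confidence, potentially controversial items to a human moderation queue:

```python
# Minimal sketch: train on human-annotated examples, auto-label
# confident cases, and queue uncertain ones for human moderators.
# The data, names, and the 0.8 threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

annotated_texts = [
    "Miracle cure doctors don't want you to know",
    "Election results certified by the electoral commission",
    "Secret plot the media is hiding from you",
    "City approves new public transport budget",
]
annotated_labels = ["fake", "real", "fake", "real"]  # human annotations

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(annotated_texts, annotated_labels)

CONFIDENCE_THRESHOLD = 0.8  # below this, a human decides

def triage(articles: list[str]) -> tuple[dict[str, str], list[str]]:
    """Auto-label confident predictions; queue the rest for moderators."""
    auto_labels, moderator_queue = {}, []
    for article in articles:
        probs = model.predict_proba([article])[0]
        if probs.max() >= CONFIDENCE_THRESHOLD:
            auto_labels[article] = model.classes_[probs.argmax()]
        else:
            moderator_queue.append(article)  # controversial or unclear
    return auto_labels, moderator_queue

auto, queue = triage(["Secret plot hiding from you", "New study on sleep"])
print(auto)
print(queue)
```

The key design choice in such a system is the threshold: lowering it lets the AI decide more cases on its own, at the cost of more mistakes reaching users; raising it sends more work to human moderators.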

Demartini says: “While benchmarks to evaluate AI systems that can detect fake news are significant, we want to go a step further. Instead of only asking AI or experts to make decisions about what news is fake, we should teach social media users how to identify such items for themselves. We think an approach aimed at fostering information credibility literacy is possible.”


Meral Musli Tajroska


Source: https://theconversation.com/users-and-their-bias-are-key-to-fighting-fake-news-on-facebook-ai-isnt-smart-enough-yet-123767
