This article reviews the background to the introduction of fake news blocking systems, assesses their effectiveness and side effects, and discusses the risks they pose to freedom of information and the ways they could be improved. It covers the impact of fake news, the limitations of blocking systems, and possible better solutions.
Introduction
Over the past few years, fake news has emerged as a major social problem around the world. After its impact became a central issue in the 2016 US presidential election, social media giants such as Facebook and Google introduced fake news blocking systems. To filter out fake news, these systems rely on user reports and their own algorithms to identify and block stories. However, debate continues over how effective these systems are in practice and what side effects they produce.
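The combination of user reports and automated signals described above can be sketched in a few lines. This is a purely illustrative toy, not any platform's actual logic: the function name, the report threshold, and the domain blacklist are all invented assumptions.

```python
# Hypothetical sketch of how a platform might combine user reports with an
# automated signal to flag stories for review. The threshold, domain list,
# and function names are illustrative assumptions, not real platform logic.

KNOWN_BAD_DOMAINS = {"totally-real-news.example", "daily-clickbait.example"}
REPORT_THRESHOLD = 50  # reports needed before a story is queued for review

def should_flag(story: dict) -> bool:
    """Return True if a story should be sent to human fact-checkers."""
    if story["source_domain"] in KNOWN_BAD_DOMAINS:
        return True  # algorithmic signal: story comes from a low-credibility source
    # user-report signal: enough readers have flagged the story
    return story["report_count"] >= REPORT_THRESHOLD

story = {"source_domain": "daily-clickbait.example", "report_count": 3}
print(should_flag(story))  # → True (blacklisted domain)
```

Even this toy version hints at the problems discussed below: both the blacklist and the threshold are judgement calls, and the report count depends entirely on who chooses to click "report".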
The impact of fake news
During the 2016 US presidential election, the top 20 fake news stories on Facebook generated 8.7 million engagements (likes, shares, and comments), surpassing the 7.36 million generated by the top 20 real news stories. This sparked intense discussion about the influence of fake news on the outcome of the election, with Facebook and Google singled out as the main conduits. In response, both companies introduced systems to block fake news and began taking other steps to restore public trust.
Since the US election, the impact of fake news has also attracted attention in elections in other countries. For example, ahead of the French presidential election, Facebook and Google focused on curbing the spread of fake news, and analyses showed that fake news spread less widely than in the US election. While these data suggest that blocking systems have been effective to some extent, they also show that their effectiveness can vary with each country's media environment and how its citizens consume news.
The effectiveness of fake news blocking systems
Initially reactive systems that acted on user reports after the fact, these measures have evolved into proactive ones that try to stop fake news before it spreads. Facebook has suspended more than 30,000 accounts for spreading fake news, and Google has reduced the visibility of fake news sites in its search rankings.
However, there is still debate about how effective these blocking systems actually are. For example, fake news blocked on Facebook can spread to other social media platforms, such as Twitter, as well as covertly through private messages and closed groups. Furthermore, if the criteria for identifying fake news are vague, legitimate news or opinions risk being blocked as well. This raises concerns that the free flow of information could be impeded.
It has also been pointed out that even after Facebook and Google introduced their blocking systems, fake news hasn’t completely disappeared and is still spreading through other channels. This suggests that blocking systems are not foolproof, and that the problem of fake news cannot be solved by simply suspending accounts or blocking news.
Limitations and side effects of blocking systems
The biggest problem with blocking fake news is that it’s difficult to instantly determine whether a story is true or false. Even traditional media outlets often publish misinformation or write stories without clear evidence, and there is a risk that these stories will be labelled as fake news. Furthermore, if the content of the news is controversial, the blocking system may reflect a one-sided view and suppress certain opinions.
For example, some users tend to flag news that doesn’t align with their political leanings as fake news. If this reporting system is abused, it can lead to the over-suppression of certain groups of people’s opinions or the inability of dissenting voices to be heard. Facebook founder Mark Zuckerberg has commented on this, saying, ‘People often tend to report things they disagree with as fake news.’ This shows that the subjective judgement of users can influence the blocking system.
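The abuse of report-based systems described above can be made concrete with a small back-of-the-envelope calculation. The numbers below are invented for illustration: under a naive report-count threshold, a true but controversial story shown to a large, hostile audience can be flagged while an actual fake story circulating in a sympathetic echo chamber is not.

```python
# Illustrative simulation (all numbers are assumed) of how partisan reporting
# can bias a naive report-count threshold: reports measure disagreement with
# a story, not its falsity.

REPORT_THRESHOLD = 100

def reports(viewers: int, disagree_rate: float, report_rate: float) -> int:
    """Expected reports = viewers who disagree with a story and choose to report it."""
    return int(viewers * disagree_rate * report_rate)

# A true but controversial story reaches a large audience, half of whom dislike it.
true_but_controversial = reports(viewers=100_000, disagree_rate=0.5, report_rate=0.01)

# An actual fake story circulates in an echo chamber where few readers object.
fake_in_echo_chamber = reports(viewers=20_000, disagree_rate=0.05, report_rate=0.01)

print(true_but_controversial >= REPORT_THRESHOLD)  # → True: the true story gets flagged
print(fake_in_echo_chamber >= REPORT_THRESHOLD)    # → False: the fake story slips through
```

The point of the sketch is that report counts track audience composition as much as accuracy, which is exactly why subjective reporting can over-suppress some groups' opinions.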
In addition, preemptive blocking can suppress issues that require public debate before that debate can take place, depriving society of important discussion. Without a clear definition of fake news, blocking stories in advance can run counter to the basic principles of freedom of information and democracy.
Conclusions and recommendations
Current fake news blocking systems still face many problems and have limited effectiveness. Fake news is difficult to prevent completely, and blocking decisions risk being biased by the subjectivity of the system operator. To be more effective, blocking systems need more objective criteria and algorithms for judging the authenticity of news. Moreover, blocking alone will only go so far, so it is also important to educate the public and equip them to judge the truthfulness of information for themselves.
Furthermore, Facebook and Google need to rebuild trust by increasing the transparency of their moderation systems and making it clear to users why news is being blocked. Fake news is not just a technical problem; it is also a social and political one. Fake news blocking systems will therefore need to evolve to protect freedom of information while minimising the negative impact of misinformation on society.
Facebook and Google’s blocking systems need continual improvement. They must go beyond screening and blocking fake news to earning users’ trust in a more comprehensive and sophisticated way. To do this, platform operators need to work with external experts to verify the truthfulness of news, communicate clearly the reasons for blocking, and put in place the technical and ethical standards needed to curb fake news effectively.