Facebook adopts new measures to combat dissemination of disinformation
The social networking site Facebook escalated its fight against the dissemination of disinformation on its platform with a set of new measures rolled out during the week of 15 April. For example, Facebook is beginning to reduce the reach of posts from groups where such manipulation appears repeatedly, and, in a new approach, it is also weighing the broader standing of a post's author when deciding which material to surface first to other users.
The Washington Post has reported that these steps act as a counter-balance to the basic principle of Facebook's algorithms. The measures were presented on 10 April by Guy Rosen, Facebook's vice-president in charge of "integrity" and security.
Rosen described the measures as part of a three-year initiative aiming to "remove, reduce and inform" users about problematic content. "That means removing content that violates our principles, reducing the dissemination of problematic content that does not violate our principles, and providing people with additional information so they are able to reflect on what to click on, what to read and what to share," Agence France-Presse (AFP) quoted Rosen as saying.
Disinformation and other sensational content fall into the category of posts that Facebook does not automatically remove on its own. Now, however, the social network has decided to reduce the visibility in users' News Feeds of groups where material that independent fact-checkers have judged to be untrue appears regularly.
According to the tech website The Verge, this is a significant change because "the pages of such groups were frequently exploited around the 2016 American [presidential] elections to distribute disinformation and propaganda." As of 10 April, the ranking of posts in users' News Feeds is also influenced by the broader standing of a post's author, not just by how well-liked the author is on Facebook.
The Internet giant is attempting to reduce the impact of organizations that are markedly more popular on Facebook than they are elsewhere on the Internet. "That can mean a domain is succeeding in [Facebook's] News Feed in a way that does not reflect its authority beyond Facebook and that it produces low-quality content," the California-based company said.
Another innovation is that Facebook users will see "Trust Indicators" associated with media organizations in their News Feeds, through which it will be possible to access information about the ethical and journalistic standards of their newsrooms. The indicators were created for Facebook by a group of news companies through an initiative called The Trust Project.
Disinformation and other flawed content currently present an enormous challenge for Facebook, a problem that became most visible during the 2016 presidential election in the USA. At the end of March, Facebook CEO Mark Zuckerberg even publicly called on governments and state institutions to contribute to overseeing content on the Internet, including on his platform.
The Washington Post reports that while the world's biggest social network is still "massively profitable", it is grappling with a decline in public trust. It is therefore increasingly willing to take steps that counter-balance the basic mission of its own algorithms, which is to maximize clicks and the time users spend on the network.
"The question is whether these changes are just adjustments at the margins, or whether this is a major overhaul of the service," the newspaper writes. "Over the last two years we have focused intensively on restricting disinformation on Facebook," Rosen said.
The company also announced that it is involving external experts in the search for new ways to quickly combat untrue posts. Last but not least, it has expanded its collaboration with the Associated Press in the United States on verifying the truthfulness of material shared on Facebook.