Recently bombarded with complaints over fake news and data insecurity, Facebook rushed to unveil a number of features the other day aimed at giving users deeper insight into the content they are reading.
Some news articles shared on the social media platform will now carry an “i” icon, standing for additional information. The box will tell users more about the publisher, show where in the world the article has been widely shared, and suggest further articles from the same outlet.
Separately, it will show whether any of your friends have already shared the story. The first two features were tested last year; the latter two are new.
The newly unveiled features, however, have a couple of issues. Firstly, the information about media outlets is drawn from Wikipedia, which, though commonly regarded as authoritative, is still crowdsourced and thus not immune to political or other biases. The reliance on this source was mocked on Twitter, with many people deeply skeptical about the move as a whole:
Secondly, the feature is for the time being applied inconsistently, with some news stories showing no information about their publisher at all. For instance, video posts, often a prime vehicle for fake news, as seen during the Las Vegas shooting in 2017, do not display the icon. Neither do many right-wing conspiracy sites, such as Gateway Pundit, even though these sites are covered by Wikipedia.
Facebook has, however, promised to address the inconsistencies and apply the features universally.
In response, many Twitter users started sharing a video proving that it takes next to no time and effort to create a fake news page on Facebook:
One Twitter user ruefully remarked that it is ultimately users themselves who are responsible for sharing fake news:
In a separate move, Facebook said it had removed a feature that allowed users to enter a phone number or email address to find others.
That feature was being abused by malicious actors to scrape public profile information, Facebook said, apparently in reference to the Cambridge Analytica case, in which, as originally thought, data on roughly 50 million Facebook users was improperly handled.
Facebook has now revealed, however, that as many as 87 million users, most of them in the US, may have had their information illegally obtained and misused by the data mining firm Cambridge Analytica. The revelation means that nearly twice as many Facebook users as first reported may have been directly affected by the unauthorized sale of the social network’s user data to the third-party company, which was contracted by the Trump team to assist with election ad targeting.
Separately, Facebook has long been at the center of the fake news scandal, ever since the 2016 US presidential election. Measures previously introduced by the company included marking dubious articles with red flags and inserting a “related articles” section, meant to point readers to additional sources of information on the same topic.
Facebook also tried prioritizing negative comments that expressed disbelief in a story, but the move led to the mislabeling of news sources, with many reliable outlets wrongly flagged as fake.