How Facebook is taking the fight against fake news seriously

Facebook has vowed to defeat the scourge of 'fake news,' but its ultimate success relies on the user.

Facebook has been making waves recently with the rollout of its campaign to tackle the influx of fake news appearing in users' news feeds.

The contemporary political, social and economic climate is heavily influenced by social media. Facebook alone reaches a large cross section of the population (1.23 billion people, according to the company's own figures), and studies by the Pew Research Center indicate that 62 percent of adults get their news from social media rather than from primary sources.

This has opened an avenue for misinformation, fake news and hoaxes to proliferate in attempts to sway public opinion toward any number of causes.

Campaign against fake Facebook accounts

Recently, Facebook has taken a stand against what it deems “false news.” Working with third-party fact-checking organizations, the company is mounting a campaign to discredit and remove fraudulent accounts. In France, some 30,000 fake Facebook accounts were removed in the lead-up to the presidential election, an effort to crack down on the misinformation and deceptive content generated and disseminated by the creators of those accounts. Facebook is also running fact-checking programs in collaboration with First Draft, a non-profit organization, to combat the surfeit of misinformation.

European Union authorities have pressured Facebook to remove inflammatory speech and far-right propaganda in accordance with hate speech legislation. They have met with a certain amount of resistance from Facebook which, although it promotes the safety of its community through its Community Standards, nevertheless advocates freedom of speech and positions itself as a platform for self-expression.

Facebook has an entrenched vetting system for user posts, but the lines blur significantly when it comes to news. An example is the decision to leave up the post containing President Trump’s incendiary call to bar Muslims from entering the U.S. This openly prejudiced post was allowed to remain online, against normal protocol, because it was deemed newsworthy.

Facebook releases 'educational tools'

More recently, Facebook has added a voluntary link to the News Feed in 14 countries in a bid to help users differentiate between legitimate and false news content. The link directs the individual to a list of the fundamentals of critical reading and is an attempt to improve media literacy among users. It has been criticized on many fronts, however, because the onus remains on the user to tell legitimate news from falsehoods.

President Trump’s anti-media campaign is still in full swing, and he has made a point of dismissing any news unfavorable to his administration as “fake,” thus prompting Facebook to conspicuously opt for the term “false news” for its campaign. However, Mark Zuckerberg, Facebook’s founder and CEO, has cautioned against treating the social media platform as an arbiter of truth, attempting to distance the company from being seen as a news and media business. Instead, he has emphasized that the most common reason people log on to Facebook is to connect with friends and family, not to get news.

To check, or not to check? That is the question

In spite of these reservations, Facebook has implemented tools allowing users to “dispute” news posts. Disputed posts are then reviewed by independent third-party fact-checkers who have signed the Poynter code of principles, a guideline intended to ensure transparency and nonpartisanship. The company is keenly sensitive to accusations of censorship, particularly in light of claims that its editors intentionally suppressed trending conservative news content. Disputed articles are therefore not removed from the system entirely; instead, users intent on sharing such a post see a prompt stating that the story has been disputed by independent fact-checkers, leaving the final decision to the user. This in itself might spark controversy, as posts marked “disputed” may inspire bias one way or the other for that very reason. Research conducted for the American Press Institute suggests that Republicans do not hold fact-checkers in as high esteem as Democrats do.

A user-based solution

Similarly, Google has come under scrutiny for its search results surfacing fabricated content. It too has established initiatives to reduce the quantity of fake news by adding a “Fact Check” tag to news stories vetted by external fact-checking organizations. As both Google and Facebook have discovered, this is a very challenging problem: deciding what is true and what is false is not as black and white as one might believe, but fraught with shades of grey. With sensitive topics such as religion, for example, would posts be disputed by fact-checkers on the basis that beliefs are typically unsupported by science? Further complications arise when stories are told from differing perspectives, or when facts emerge over time in evolving stories. These tools are not a blanket solution to a very complex problem in which human psychology is a significant factor. Fake news is a damaging phenomenon that needs to be eradicated, but it seems unlikely that this tentative campaign will accomplish that.

Despite the infrastructure and resources devoted to this campaign, the onus for deciding on the truth currently rests solely on the shoulders of the user.
