by Mark MacCarthy

Should social media delete “provably false” stories?

Opinion
Sep 09, 2019

It’s a dangerous standard that misses the real challenges facing digital platforms.

Paul Barrett, head of the Center for Business and Human Rights at New York University’s Stern School of Business, recently released a set of recommendations for social media to consider concerning the 2020 elections. It’s a thoughtful paper containing clear, short, actionable and specific suggestions for social media.

The key recommendation, however, is a mistake. Barrett urges social media to “remove provably false content” as a way to combat the “disinformation campaigns” that are likely to occur in connection with the upcoming elections.

His example of what social media should do is Pinterest, whose policy is to remove “content created as part of disinformation campaigns.” But on the key question of what must be shown to justify taking down false content, he simply rephrases the “provably false” standard as “material that can be definitively shown to be untrue.”

Removing provably false material is a dangerous path for social media to take

It is hard for educated people of good will to believe that there could be any real disagreement on the basic facts of what goes on in the world. “We are entitled to our own opinion,” Daniel Patrick Moynihan is supposed to have said, “but not our own facts.” The Pope didn’t actually endorse Trump, and reports saying that he did are “provably false.” Why shouldn’t social media delete the false and retain only the true?

But there are basic disagreements about factual matters large and small. Do vaccinations cause more harm than good? Is climate change caused by human activity? Are the basic postulates of evolution really plausible? (If you think only kooks deny them, read NYU philosophy professor Thomas Nagel’s argument that the “materialist neo-Darwinian conception of nature is almost certainly false.”) Are the Hong Kong protestors pro-democracy activists or violent separatists? Social media cannot simply silence one side of these controversies.

It is as if Barrett, a law professor and journalist, hadn’t read John Stuart Mill’s famous takedown of the ideas that “the propagation of error” should be forbidden and that governments and other institutions should “impose their opinions on others” when they are “quite sure of being right.” Truth often starts out as perceived error, Mill pointed out, and only open discussion of all perspectives can lead to progress in science and politics. Instead, Barrett has urged social media to adopt the conservative 19th-century doctrine, often associated with the Catholic Church’s claim of infallibility, that “error has no rights.”

In her summary of Barrett’s recommendations for the Washington Post, Cat Zakrzewski, inadvertently perhaps, illustrates how this idea of removing provably false content can go wrong. She notes that “…the companies don’t automatically remove content from their sites just because it’s provably false, as highlighted earlier this year when Facebook decided to leave up a doctored video that made House Speaker Nancy Pelosi appear drunk.”

Can anyone cite for me what was provably false about the Pelosi video? What assertion did it make that fact-checkers could show was mistaken with mathematical certainty? It allowed viewers to suspect or infer that something was wrong with the Speaker, and enabled critics to speculate that it might be drugs or alcohol. It created this effect by the absurdly simple trick of slowing down the video’s playback speed, which leads viewers to a false conclusion without ever making a false statement.
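To see just how simple that trick is, here is a minimal sketch in Python (assuming the opencv-python package is installed; the file names are hypothetical placeholders, not the actual clip) that rewrites a video at 75 percent of its original frame rate. Not a single frame is edited; only the playback speed changes:

```python
# Minimal sketch: slow a video to 75% of its original speed by rewriting
# it at a lower frame rate. Assumes the opencv-python package; the input
# and output file names are hypothetical placeholders.
import cv2

SLOWDOWN = 0.75  # play back at 75% of the original speed

cap = cv2.VideoCapture("input_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Write the identical frames, but tell players to render them more slowly.
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("slowed_clip.mp4", fourcc, fps * SLOWDOWN, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)  # frames are copied unchanged; no pixel is altered

cap.release()
out.release()
```

The point of the sketch is that the manipulation leaves every frame intact, which is precisely why a “provably false” test has nothing to grab onto.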

The provably false standard misconstrues the real challenge facing social media companies

Scholars and policymakers long ago gave up the idea that the problem with disinformation campaigns is their falsity. “The threat,” said a European Union High Level Experts Group in 2018, “is disinformation, not ‘fake news.’” A nuanced 2017 Shorenstein Center study on information disorder identified seven different types of mis- and disinformation, ranging from satire and misleading content to out-and-out fabricated content.

Disinformation campaigns rely mostly on misdirection, rhetoric, exaggeration, repetition, emphasis, innuendo and other persuasive techniques that do not cross the line into outright falsehood but encourage people to draw conclusions about the meaning of public events that go beyond the ascertainable facts.

The problem is that legitimate political campaigns and indeed all good journalism use the techniques of framing, providing an interpretative context and putting individual events into a narrative pattern that gives them meaning. These techniques provide readers with an analysis and understanding of public events, but they are hard to distinguish from the techniques embedded in disinformation campaigns.

Pro-Chinese newspapers and Chinese Internet users are not lying when they point out that the Hong Kong protesters carry American flags, and adorn their material with Pepe the Frog, which has become an alt-right symbol in the U.S. It is not “provably false” that one of the protest leaders recently met with a U.S. diplomat and then published an op-ed in the New York Times calling on the world to support the demonstrators’ “crusade” for “freedom” from China, a “communist-cum-fascist regime.”

Traditional media in the U.S. and Europe verified and published each and every one of these underlying stories. Chinese outlets and citizens are not making things up, but they are trying to use this constellation of facts to convince the world that the Hong Kong protests aim at separation from China and that the U.S. is behind them and is fanning the flames of discontent in Hong Kong to weaken and embarrass China.

Digital platforms need a content-neutral standard to identify disinformation campaigns

If social media platforms are to be justified in taking down Chinese-sourced material hostile to the Hong Kong protesters, it cannot be because of its content, which is a combination of factual material and political opinion that is hard to distinguish from a legitimate, but divergent, political campaign.

When Twitter and Facebook recently removed China-based accounts that carried material critical of the Hong Kong protesters, it was partly on a behavioral basis – coordinated inauthentic activity – and partly because the people involved attempted to conceal their true identities. The companies’ investigators also said they found some indications that the Chinese government was involved in some official capacity. So the stated basis for the removals was unrelated to content.

Facebook and other digital media seem committed to using a content-neutral basis for attacking disinformation campaigns. Facebook doesn’t focus on taking down fake news and instead targets information operations (“actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment”) and “coordinated activity by inauthentic accounts with the intent of manipulating political discussion.” It aims to take down material and accounts “based on their behavior, not the content they posted.”

Another challenge for digital media is the perception of selective enforcement of neutral rules

Of course, there is always the possibility of selective enforcement of content-neutral rules against “coordinated inauthentic behavior.”  Loitering is a racially neutral offense, but if almost everyone arrested for it is from minority neighborhoods, it makes sense to wonder if the rule is being used as a tool in a racially motivated campaign. The perception of selective enforcement is a real danger for social media companies attempting to control disinformation campaigns.

If digital media companies take down accounts and material from the geopolitical adversaries of the United States and not from others who use identical behavioral tricks of deception and inauthenticity, they risk being perceived as implementers of U.S. foreign policy rather than neutral platforms for speech and the discussion of public issues. Ironically, such a widespread misimpression of digital platforms as unduly influenced by the interests of the U.S. government would undermine their value as promoters and exemplars of openness and free expression.

The last thing social media need to become is the truth police, scouring their systems for signs of the “provably false.” The way forward is genuine neutrality in the administration of platform rules against disinformation campaigns, together with enough transparency for outside experts and scholars to verify that content-neutral approach.