Paul Barrett, head of the Center for Business and Human Rights at New York University’s Stern School of Business, recently released a set of recommendations for social media companies to consider concerning the 2020 elections. It’s a thoughtful paper containing clear, short, actionable and specific suggestions.

The key recommendation, however, is a mistake. Barrett urges social media companies to “remove provably false content” as a way to combat the “disinformation campaigns” that are likely to occur in connection with the upcoming elections.

His example of what social media should do is Pinterest, whose policy is to remove “content created as part of disinformation campaigns.” But on the key issue of what needs to be shown in order to take down false content, he just rephrases the “provably false” standard as “material that can be definitively shown to be untrue.”

Removing provably false material is a dangerous path for social media to take

It is hard for educated people of good will to believe that there could be any real disagreement on the basic facts of what goes on in the world. “We are entitled to our own opinion,” Daniel Patrick Moynihan is supposed to have said, “but not our own facts.” The Pope didn’t actually endorse Trump, and reports saying that he did are “provably false.” Why shouldn’t social media delete the false and retain only the true?

But there are basic disagreements about factual matters large and small. Do vaccinations cause more harm than good? Is climate change caused by human activity? Are the basic postulates of evolution really plausible? (If you think only kooks deny them, read NYU philosophy professor Thomas Nagel’s argument that the “materialist neo-Darwinian conception of nature is almost certainly false.”) Are the Hong Kong protestors pro-democracy activists or violent separatists?
Social media cannot simply silence one side of these controversies.

It is as if Barrett, a law professor and journalist, hadn’t read John Stuart Mill’s famous takedown of the ideas that “the propagation of error” should be forbidden and that governments and other institutions should “impose their opinions on others” when they are “quite sure of being right.” Truth often starts out as perceived error, Mill pointed out, and only open discussion of all perspectives can lead to progress in science and politics. Instead, Barrett has urged social media to adopt the conservative 19th-century doctrine, often associated with the Catholic Church’s claim of infallibility, that “error has no rights.”

In her summary of Barrett’s recommendations for the Washington Post, Cat Zakrzewski, inadvertently perhaps, illustrates how this idea of removing provably false content can go wrong. She notes that “…the companies don’t automatically remove content from their sites just because it’s provably false, as highlighted earlier this year when Facebook decided to leave up a doctored video that made House Speaker Nancy Pelosi appear drunk.”

Anyone want to cite for me what was provably false about the Pelosi video? What assertion did it make that fact-checkers could show was mistaken with mathematical certainty? It allowed viewers to suspect or infer that something was wrong with the Speaker, and enabled critics to speculate that it might be drugs or alcohol. It created this effect by the absurdly simple trick of just slowing down the video speed, which leads viewers to a false conclusion without making a false statement.

The provably false standard misconstrues the real challenge facing social media companies

Scholars and policymakers long ago gave up the idea that the problem with disinformation campaigns is their falsity.
“The threat,” said a European Union High Level Experts Group in 2018, “is disinformation, not ‘fake news.’” A nuanced 2017 Shorenstein Center study on information disorder identified seven different types of mis- and dis-information, ranging from satire and misleading content to out-and-out fabricated content.

Disinformation campaigns rely mostly on misdirection, rhetoric, exaggeration, repetition, emphasis, innuendo and other persuasive techniques that do not cross the line into outright falsehood but encourage people to draw conclusions about the meaning of public events that go beyond the ascertainable facts.

The problem is that legitimate political campaigns, and indeed all good journalism, use the techniques of framing, providing an interpretative context and putting individual events into a narrative pattern that gives them meaning. These techniques provide readers with an analysis and understanding of public events, but they are hard to distinguish from the techniques embedded in disinformation campaigns.

Pro-Chinese newspapers and Chinese Internet users are not lying when they point out that the Hong Kong protesters carry American flags and adorn their material with Pepe the Frog, which has become an alt-right symbol in the U.S. It is not “provably false” that one of the protest leaders recently met with a U.S. diplomat and then published an op-ed in the New York Times calling on the world to support the demonstrators’ “crusade” for “freedom” from China, a “communist-cum-fascist regime.”

Traditional media in the U.S. and Europe verified and published each and every one of these underlying stories. Chinese outlets and citizens are not making things up, but they are trying to use this constellation of facts to convince the world that the Hong Kong protests aim at separation from China and that the U.S.
is behind them and is fanning the flames of discontent in Hong Kong to weaken and embarrass China.

Digital platforms need a content-neutral standard to identify disinformation campaigns

If social media platforms are to be justified in taking down Chinese-sourced material hostile to the Hong Kong protesters, it cannot be because of its content, which is a combination of factual material and political opinion that is hard to distinguish from a legitimate, but divergent, political campaign.

When Twitter and Facebook recently removed Chinese-based accounts that contained material critical of the Hong Kong protestors, it was partly on a behavioral basis (coordinated inauthentic activity) and partly because the people involved attempted to conceal their true identity. The social media investigators also said they found some indications that the Chinese government was involved in some official capacity. So the stated basis for the removals was unrelated to content.

Facebook and other digital media companies seem committed to using a content-neutral basis for attacking disinformation campaigns.
Facebook doesn’t focus on taking down fake news and instead targets information operations (“actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment”) and “coordinated activity by inauthentic accounts with the intent of manipulating political discussion.” It aims to take down material and accounts “based on their behavior, not the content they posted.”

Another challenge for digital media is the perception of selective enforcement of neutral rules

Of course, there is always the possibility of selective enforcement of content-neutral rules against “coordinated inauthentic behavior.” Loitering is a racially neutral offense, but if almost everyone arrested for it is from minority neighborhoods, it makes sense to wonder if the rule is being used as a tool in a racially motivated campaign. The perception of selective enforcement is a real danger for social media companies attempting to control disinformation campaigns.

If digital media companies take down accounts and material from the geopolitical adversaries of the United States and not from others who use identical behavioral tricks of deception and inauthenticity, they risk being perceived as implementers of U.S. foreign policy rather than as neutral platforms for speech and the discussion of public issues. Ironically, such a widespread misimpression of digital platforms as unduly influenced by the interests of the U.S. government would undermine their value as promoters and exemplars of openness and free expression.

The last thing social media companies need to become is the truth police, scouring their systems for signs of the “provably false.” The way forward is genuine neutrality in the administration of platform rules against disinformation campaigns, along with sufficient transparency for outside experts and scholars to verify this content-neutral approach.