Companies generate controversy when they don't moderate content, but also when they do.

Recently, in response to high-profile mass shootings, payment card companies have begun to refuse service to merchants who traffic in guns. Card companies have the right to choose with whom they do business, a discretion limited by laws protecting people from discrimination based on race, gender, religious belief and the like, none of which seem to apply in this case. The gun merchants are outraged and are considering going to Congress to seek a new law that would treat them as a constitutionally protected class. The Supreme Court, after all, has ruled that the Second Amendment protects an individual's right to bear arms. How can that right be realized in practice if people cannot use their credit cards to buy guns?

Platform companies run into these content controversies all the time

In response to criticism that it was not doing enough to protect the public from political manipulation, Facebook recently instituted a process to identify political ads, place them in a public file, and require that their sponsors demonstrate that they are U.S. citizens. This would provide transparency, slow down the delivery of potentially manipulative ads, and create a barrier excluding foreign actors. It wasn't long before mistakes showed up, including the misclassification of a day care and a dog rescue benefit as political ads. More seriously, publishers accused Facebook itself of Orwellian manipulation, concerned that ads promoting news stories were mislabeled as political ads and put in the political content file.

More recently, Facebook CEO Mark Zuckerberg struck many as unable to articulate a decent reason for generally allowing access to Holocaust-denial content on its service, even though it blocks such content when asked to do so by public authorities in countries where it is illegal and takes down material that can lead to physical violence.

Some of the most intense criticism comes from the traditional U.S. press

Mainstream journalists do not advocate government censorship. Rather, they want Facebook to act like a traditional editor and publish only what it thinks advances the public discussion. But this ignores a crucial difference. Traditional newspapers and broadcasters, constrained as they are by limited space and air time, publish only what they think is of the highest value and will attract and hold an audience sufficient to pay for itself. Facebook and other platforms are not equally constrained by technology and economics, which allows them to serve a broader public purpose. They don't vet pieces to fit a preset vision of what should be in the public discourse. Their role is to let a thousand flowers bloom and to constrain content only when it threatens to harm their community or the public, or is so far outside the boundaries of acceptable discourse that no reasonable person could allow it.

Drawing the right boundaries is not easy

Internet infrastructure provider Cloudflare faced the problem last summer. When its CEO pulled the plug on the neo-Nazi site Daily Stormer, after it published a story appearing to celebrate the killing of an anti-Nazi demonstrator in Charlottesville, the company was immediately condemned by free speech groups for censoring the Internet. Facebook, too, is in trouble on the free speech side of this conversation.
Many conservatives think Facebook discriminates against their point of view by downgrading and blocking sites like Diamond and Silk. Facebook and other social media outlets have had to explain themselves to angry Representatives at two separate Congressional hearings on this topic.

Some think the answers are easy, and they make fun of Facebook's fumbles. We all know fake news and hate speech when we see it, they seem to say; we can all distinguish a disinformation campaign from legitimate political advertising. But, they say, platforms have "business interests" that are "contrary to doing anything real about lies and fake news." They argue that platforms cannot be trusted to handle this on their own and that some (unspecified) outside source of the right answer must instruct them what to do.

If only the world were so simple. In fact, turning the job over to bodies like Congress, the Supreme Court, the Federal Communications Commission, or civil liberties groups won't solve this thorny problem. In an ironic illustration of the difficulties, Russia recently announced that it will propose its own regulatory steps to deal with fake news, in a way that appears likely to limit free expression online.

Internet platforms, like the payment card networks before them, do not want to be where they are

Internet platforms need all the advice and guidance they can get. Fortunately, academics and civil society groups are responding with thoughtful workshops and reports. In January, Harvard law professor Cass Sunstein provided some solicited advice for Facebook, and the Information Society Project at Yale recently issued a workshop report on Fighting Fake News that recommends several new approaches but warns that there is no "quick, permanent, or easy fix" to these problems. Dipayan Ghosh and Ben Scott at the New America Foundation examined the technology underlying disinformation campaigns in Digital Deceit and found that it is identical to the technology used for mainstream marketing and political campaigns. Former FCC Commissioner Susan Ness is beginning a project through the German Marshall Fund to explore how to address online hate speech and viral disinformation while preserving freedom of expression. She recently held an initial convening with Member of the European Parliament Jeppe Kofod and U.S. Representative Suzan DelBene to explore the complexities of the issues.

It is hard to sympathize with Facebook's unforced errors. But it is struggling with the free speech issue of our time: how to adapt speech rules designed for older media to the new technology. Facebook is doing a lot of growing up in public, but anyone who pretends that what it needs to do is easy or that the solutions are obvious hasn't been paying attention.