by Mark MacCarthy

The challenge of balanced content moderation

Opinion
Jun 26, 2019
Government | Legal | Technology Industry

Digital platforms have to ban the bad stuff and promote diverse political perspectives.

It’s hard not to sympathize with Facebook. Alex Stamos, formerly chief security officer for Facebook, said in his recent Senate testimony that digital platforms face demands for “incompatible solutions.”

“The news media regularly bemoans the power of these companies while calling for them to regulate the political speech of millions. Policymakers demand that the companies collect as little identifiable data as possible on users while also expecting them to be able to discern professional spies from billions of legitimate users. Political parties around the world ask for the platforms to censor their opponents and appeal any equivalent moderation of their own content.”

Indeed, the platforms have complex negative and positive roles to play in public discourse.  They have to remove the genuinely harmful material on their systems and still allow the full expression of competing political ideas.  The public policy puzzle is how to make sure dominant platforms don’t censor important points of view in their zeal to keep harmful material off their systems.

Digital platforms do not want to play a negative censorship role all by themselves

In some cases, they simply try to deny that their takedowns involve making content decisions at all. Here’s Facebook’s rationale on May 28, 2019 for taking down manipulative posts from Iranian sources:

“We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. As is always the case with these takedowns, we’re removing these Pages, Groups and accounts based on their behavior, not the content they posted. In this case, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

In other cases, they recognize that they are making content decisions but want to share the responsibility with the government. Here’s what Nick Clegg, former Deputy Prime Minister of the United Kingdom and now Facebook’s head of global affairs, said in his recent speech about Facebook’s content moderation decisions:

“But it would be a much easier task as well as a more democratically sound one if some of the decisions that we have to make were instead taken by people who are democratically accountable to the people at large rather than by a private company.”

The boundaries of legally acceptable discourse in the US are far broader than in any other country in the history of the world

If we needed any further proof of that, it was provided by this week’s Supreme Court decision holding, on First Amendment grounds, that the ban on registering immoral or scandalous trademarks is unconstitutional.  The idea behind this ban was to disincentivize the use of racist, anti-Semitic, anti-immigrant, misogynist, white supremacist and other vulgar, highly charged words in commerce “by denying them the benefit of trademark registration.” But now, as far as trademark law is concerned, government-protected brands using such words are just fine. As Justice Breyer warned in his dissent, we should brace ourselves for a world in which people we encounter on the street will be “wearing a t-shirt or using a product emblazoned with an odious racial epithet.”

So, in the United States, government cannot take over or even share the job of negative content moderation.  The democratically elected representatives of the people cannot even get trademarked racial slurs off t-shirts.  In our system, government has outsourced the function of determining the socially acceptable boundaries of public discourse to the private sector, and as long as First Amendment jurisprudence is what it is, there’s no other way to do it here.

As Mark Osiel says in his new book, the law provides companies with a “right to do wrong.” Social norms provide some public defense against harmful but legal speech. Media companies, including digital platforms, that allow socially unacceptable content would face shame, outrage, and stigma, but this social pressure is backed up only by the ability of the audience and the advertisers to stay away if they think a particular outlet has gone too far.

What about the public’s right of access to diverse political perspectives?

This positive requirement of the First Amendment has deep roots in free speech theory. In 1948, the famous free speech theorist Alexander Meiklejohn wrote that the First Amendment “is not the guardian of unregulated talkativeness…What is essential is not that everyone shall speak, but that everything worth saying shall be said.”

He added that “…no suggestion of policy shall be denied a hearing because it is on one side of the issue rather than another…citizens…may not be barred because their views are thought to be false or dangerous.”

Is there something the government can do to vindicate this citizen right of access to diverse political points of view on digital platforms?

Senator Josh Hawley’s recently introduced bill, the Ending Support for Internet Censorship Act, might be a step in that direction. It would require platforms to avoid “politically biased” practices in their content moderation programs. It appears aimed at preventing them from moderating information in a manner that is designed to “negatively affect” or that “disproportionately restricts or promotes access to, or the availability of, information from a political party, political candidate, or political viewpoint.” Enforcement would be assigned to the Federal Trade Commission.

Reaction from both left and right has been harshly critical, with many comparing it to the discredited Fairness Doctrine, which required broadcasters to air competing sides of controversial issues of public importance.  The Federal Communications Commission repealed that regulation in the 1980s.  In fact, in its current form, Senator Hawley’s proposed bill is far broader than needed to achieve its purpose of ensuring that a wide range of views is presented on the major digital platforms.  It would almost certainly succumb to a facial First Amendment challenge.

All is not lost for a positive, progressive vision of the First Amendment for digital platforms

For one thing, a more narrowly tailored bill might pass First Amendment scrutiny. While the details of Fairness Doctrine enforcement practice are not suited to the different economic and technical capabilities of digital media, the general idea is sound and has been upheld by the Supreme Court in a 1969 decision that is still good law. 

The history of broadcast regulation shows many other attempts to craft policies that would expand the range of ideas available to the public beyond what would be provided by the private interests of media companies. In the spirit of a positive and progressive approach to the First Amendment, it’s time to look back at that history and see what lessons might be learned to update those policies for the age of the digital platform.