by Mark MacCarthy

Facebook embraces content moderation regulation…or does it?

Opinion
Apr 08, 2019

The company's stance suggests a sensible alternative to government censorship.

Facebook CEO Mark Zuckerberg's recent apparent call for content moderation regulation surprised and annoyed many free speech advocates. Daphne Keller, a law professor at the Stanford Center for Internet and Society, immediately opined on Twitter that the op-ed was a triumph of the Facebook communications staff over its lawyers, since the proposal transparently violated the First Amendment.

But Facebook's head of domestic policy, former FCC chairman Kevin Martin, tried to reassure the public that Facebook continued to support the First Amendment. Zuckerberg had been calling for industry coordination in the United States, he said, not for a government censorship role.

Facebook, it seems, is prepared to work with governments as they craft measures that reflect a societal judgment on the limits of acceptable speech. In other countries, those measures might be much more prescriptive than anything that could be adopted in the US, where the First Amendment creates barriers to content regulation.

The rest of the world is pushing ahead with counterproductive content moderation mandates

In Germany, the network enforcement law fines companies for failing to remove hate speech and other illegal content within 24 hours. The French government is considering new platform content rules modeled on the German law. Australia has just passed a law that imposes fines of up to 10% of global revenue and as much as three years in jail for platform officials who violate new requirements for the rapid removal of violent content. The U.K. government is about to impose a duty of care on platforms, obligating them to remove harmful content or face substantial fines. And the European Union's proposed terrorist content bill, which mandates removal of terrorist material within one hour of law enforcement notification and establishes fines of up to 4% of global turnover, is strongly backed by Great Britain and is scheduled for a vote in the European Parliament on April 8.

These inflexible and intrusive measures will almost certainly do more harm than good, forcing companies to remove important and worthwhile speech in order to limit legal liability.  Moreover, they couldn’t survive a First Amendment challenge in the United States because, as the Washington Post points out, they amount to “letting legislators tell tech companies to ban even some categories of legal speech.”

More social media competition won’t solve the content moderation conundrum

Some policymakers, seeking to avoid content regulation, put their hopes in competition policy. If there were many different social media platforms, they reason, there would be a competitive incentive for all of them to do a better job of content moderation. An interoperability requirement, for instance, might create alternatives to the current dominant platform companies by allowing users in one social network to reach users in all the others.

But interoperability would make content moderation even more difficult than it already is, because harmful content posted on one platform would automatically appear on all platforms, greatly complicating the task of identifying the source of malicious content. Each platform would have to devise new gateway controls to screen content and block users from other networks, an even greater technological and institutional challenge than managing its own network.

Moreover, additional competition by itself wouldn't necessarily do much to increase content moderation efforts. Competition could just as well lead social media platforms to do less content moderation if users think they are censoring too much speech and flee to more permissive outlets. Many users and policymakers already think the platforms are doing too much, with conservatives convinced that the platforms are biased against them and liberals fearing the further exclusion of marginalized voices.

This is not to say that measures to promote competition are a bad idea, but their aim should be to fix economic dominance in a market, not to substitute for social regulation. Traditional broadcast regulation combined ownership limits with behavioral rules such as the requirement to air a minimum amount of children's educational programming. For generations, financial regulation separated commercial and investment banking, imposed detailed conduct requirements to ensure safety and soundness, and provided incentives for lending to minority groups.

In a similar way, regulation of the tech industry might require a mix of sector-specific competition regulations such as interoperability or separation requirements and regulation to foster social goals such as content moderation and privacy.

Facebook’s proposal for an independent oversight board might be a way forward

Facebook has embraced the idea of establishing and maintaining an independent board, and has opened a public consultation process to address questions around its design. In principle, such a board could review Facebook’s content moderation standards, the reasonableness of its policies and procedures to control harmful content, and the adequacy of its procedural protections.  The board could also run a process for appeal of particular cases.

Some have criticized Facebook's oversight board as a faux regulator or an attempt to outsource responsibility for content moderation. But Facebook remains responsible; the oversight board is there to ensure the efficacy and accountability of the company's program. And an empowered external board would be able to push platforms to establish a moderation program stronger than their natural financial self-interest would otherwise support.

A board could, for instance, review Facebook's poor performance in connection with the New Zealand shooter video. Facebook didn't even realize it had a problem until notified by the police hours after the live-streamed video of the shooting was posted. Once up, the video ricocheted into the far corners of the Internet. Facebook counted 1.5 million attempts to upload it again to its own service but succeeded in blocking only 1.2 million of them.

An independent review might require Facebook to reconsider its policy of allowing unmoderated uploading of live videos. Why not screen them for potentially harmful content? Or prevent them from being reposted until after a review? A directive from the board would overcome the company's understandable reluctance to do this on its own.

Policymakers who have run out of patience with today’s limited content moderation programs should consider an independent board as an alternative to the impractical, punitive and counterproductive approaches they are currently pursuing.