by Mark MacCarthy

It’s time to think seriously about regulating platform content moderation practices

Opinion
Feb 14, 2019

A sensible regulatory regime can discourage harmful content and still protect free speech.


Something funny happened at the recent University of Colorado Law School Silicon Flatirons conference on Internet Platforms’ Rising Dominance, Evolving Governance. One moderator asked the audience whether the current legal regime, which largely protects platforms from liability for user content, should be left alone or whether it was time to consider some reforms. The relatively libertarian audience, which included lawyers, economists, technologists, and industry policy experts, overwhelmingly voted for reform, an outcome that would have been unthinkable just a few short years ago.

Congressional sentiment has shifted dramatically as well. Another conference participant, a former Administration official, reminded the audience that in March 2018, a virtually unanimous Congress passed legislation requiring increased platform responsibility to fight online sex trafficking. It wasn’t a fair fight, the speaker pointed out: on one side were the equities of innovation, growth, and jobs, and on the other there were dead children. The Senate vote was 97 to 2.

Legislation passed out of a sense of urgency can have horrible unintended consequences

The anti-sex trafficking law is an example. It holds platforms liable if third parties post ads for consensual sex work on their systems. To avoid this liability, many platforms closed down their “personals” sections and the private websites that sex workers had used to screen potential clients. This has driven sex workers back onto dangerous streets, where they are more easily preyed upon. In San Francisco, human trafficking reports increased by 170% in 2018.

No legislation can avoid all unintended consequences, but the chances of getting it right increase dramatically with thoughtful and deliberate discussion and debate of possible approaches before an emergency or crisis forces legislative action.

The current legal framework was set in the late 1990s, with three pillars: limited liability for user content under Section 230 of the Communications Decency Act; a safe harbor for user copyright violations pursuant to a statutory notice-and-takedown regime; and full First Amendment protection under the 1997 Reno v. ACLU Supreme Court ruling.

One consequence of this framework is that Congress has looked elsewhere to address problematic online content and conduct. In 2006, for instance, it imposed obligations on payment card companies, not internet platforms, to take action against illegal internet gambling.

During this time period, most platforms engaged in voluntary content moderation, developing, for instance, technology to screen their systems for material that content owners identified as protected by copyright. They gradually expanded their terms of service to deal with harmful content such as hate speech and incitement to violence.

This seemingly stable U.S. regime was abruptly challenged in the aftermath of the 2016 election cycle.  Widespread concern from policymakers, the intelligence community, and the public over political disinformation campaigns implemented through digital platforms has generated growing pressure for them to do more to control content on their systems.

The platforms have responded with voluntary measures: disclosing the sources of political advertising, adopting special rules for political ads, and creating new schemes to delete, demote, or delay the appearance of problematic content, including material flagged as false information about public affairs that is deliberately distributed to sow confusion and doubt among political actors.

Government patience for these voluntary efforts at content moderation has worn thin 

As noted, the US took decisive if misguided action in the area of online sex trafficking. In Europe, the spread of terrorist incidents led to legislation aimed at these threats. In 2017, Germany passed a law requiring platforms to remove fake news, hate speech, and other illegal material within 24 hours of being notified about it. But shortly after the law began to be enforced in 2018, opposition political parties called for its repeal, asking why social media companies from the United States should be policing the boundaries of acceptable political speech in Germany. In September 2018, the European Commission proposed a regulation requiring removal of terrorist material within an hour of notification, a proposal that was met with widespread concern about the suppression of legitimate speech and the lack of due process protections.

It is time to look for sensible alternatives

We need to consider ways to increase the responsibilities of digital platforms for content provided by their users, while still preserving free speech. 

In a different climate of opinion several years ago, legal scholars Danielle Citron and Ben Wittes proposed, in effect, that platforms should be given a legal incentive to undertake “reasonable efforts to address unlawful activity” on their systems. This idea might be given another look in today’s world, through a requirement for platforms to establish and maintain policies and procedures reasonably designed to address the harmful consequences of hate speech, disinformation campaigns, and terrorist material disseminated on their systems.

Perhaps the biggest problem is that much of the underlying activity of concern today – hate speech, for instance – has a legitimate place under U.S. First Amendment law. How can platforms be required to take action against lawful conduct and speech?

Another issue is that due process safeguards are needed to fulfill the internet’s promise of full and open discussion of issues and points of view. These protections might include easy and convenient access to platform standards, notice to users when they violate those standards, and an internal procedure for redress in the case of mistakes.

Facebook floated an intriguing version of due process protection that has been dubbed a Supreme Court for Facebook, a body independent of Facebook that would “ultimately make the final judgment call on what should be acceptable speech” on that platform.

The details are sketchy and implementation could be a nightmare.  But this is the kind of constructive, positive proposal that could put in place a structure to control harmful content while still protecting free speech.  We need more of this kind of thinking, and we need it now.