by Mark MacCarthy

What’s next for content moderation?

Opinion
Nov 25, 2019
Government | Legal | Technology Industry

The Transatlantic Working Group considers transparency, alternative dispute resolution mechanisms and limitations on algorithms as ways forward.

Credit: Xesai / Getty Images

In its final meeting in Bellagio this month, the Transatlantic Working Group on Content Moderation and Freedom of Expression confronted the difficult choices for policymakers as they consider how best to throw a regulatory net around the content moderation practices of social media companies. The group was searching for a middle ground between the political stalemate in the US, which would leave matters largely in the hands of the social media companies themselves, and a rush to content regulation in Europe, which could have dire consequences for freedom of expression.

The group considered transparency measures as one possible way forward

Social media companies have already embraced transparency as a step toward responsible behavior and accountability to the public. They have all disclosed their content rules and to some degree their enforcement mechanisms and appeals processes. They issue regular reports measuring how well they are doing in taking action against content that violates their rules. They have posted archives of political ads so that researchers and the public can see how political actors are addressing their targeted audience on social media. They are working with outside researchers to disclose data that will help assess the larger impacts on political processes and society generally.

The group discussed how much more of this transparency is in the public interest, what purposes disclosures serve and what harms they prevent, how to balance disclosure against other values such as data privacy, and to what extent the whole process should be overseen by an engaged and competent regulatory agency.

The group also considered the role of algorithms in improving content moderation practices

Automated techniques are good at recognizing problematic images. Once child sexual exploitation images or images of terrorist conduct have been identified, automated systems are very accurate at making sure that the same material is not posted a second time. Facebook recently extended this technique to text involving hate speech and found that automatically taking down previously identified instances of hate speech improved its proactive detection rate from 68% to 80%.
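To make the re-upload matching concrete, the sketch below illustrates the general mechanism in Python: material that reviewers have already identified as violating is fingerprinted into a blocklist, and new uploads are checked against that list before they go live. The function names and the use of an exact cryptographic hash are illustrative assumptions, not any platform's actual system; production tools rely on far more robust perceptual and text-similarity hashing (PhotoDNA-style matching for images, for example) so that lightly edited copies still match.

    import hashlib

    # Fingerprints of items already judged to violate the rules (e.g., known
    # terrorist imagery or previously removed hate-speech text). In practice
    # these come from shared industry databases or the platform's own reviews.
    known_violations = set()

    def fingerprint(content: bytes) -> str:
        # An exact cryptographic hash only catches identical re-uploads;
        # real systems use perceptual hashing so near-duplicates also match.
        return hashlib.sha256(content).hexdigest()

    def register_violation(content: bytes) -> None:
        # Add newly identified violating content to the blocklist.
        known_violations.add(fingerprint(content))

    def should_block(upload: bytes) -> bool:
        # Check a new upload against previously identified material.
        return fingerprint(upload) in known_violations

    # Once an item has been removed, an identical re-post is blocked
    # automatically, with no second human review required.
    register_violation(b"previously identified violating post")
    assert should_block(b"previously identified violating post")
    assert not should_block(b"an unrelated, legitimate post")

The same property that makes this kind of matching precise also explains its limits: it recognizes only what has already been catalogued.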

As the companies themselves acknowledge, however, these techniques are terrible at assessing nuance and context. As a result, regulations that explicitly or implicitly push companies to overuse them risk overbreadth, forcing platforms to take down growing amounts of legitimate speech in the hunt to make sure that not even one piece of problematic content remains on their systems.

Finally, the group considered the role of alternative dispute resolution institutions

Content moderation decisions are so important that they should not be left entirely to the determinations of the social media companies themselves. Facebook has recognized the need to spread responsibility for these determinations to an external institution and has created an independent review board capable of overriding Facebook’s own decisions in certain cases. The group considered proposals to expand on this external review function. For determinations that a company’s own standards have been violated, it looked at the possibility of social media councils composed of community groups acting as review boards. For determinations that local law has been violated, it assessed the possible role of e-courts, legal bodies empowered to make expedited judgments about whether certain content on social media platforms violates local law. Both of these measures would, to some degree, take content moderation decisions out of the hands of these large, powerful private companies and vest them in other institutions with a stronger claim to the legitimacy needed to make these calls.

The group met at a crucial time in the governance of social media companies

The moment for tough but measured process-oriented regulation may have already come and gone in Europe, while it has not yet arrived in the more libertarian climate in the US.

Europe is poised to move ahead with increasing forms of content regulation. The Europe-wide regulation requiring the removal of terrorist material from social media is now in trilogue negotiations to reconcile the different versions adopted by the Commission, the Parliament and the Council, and is expected to emerge in the coming year. The German NetzDG law, which mandates platform take-down of illegal content, is becoming a model for the rest of Europe: a version of it has passed the French National Assembly and is on track to become law soon. The proposed Digital Services Act would impose an EU-wide regime of hate speech control on social media companies.

The contrast with the US is striking. There is much noise and posturing from political figures about what social media companies should or should not do. The latest brouhaha concerns whether the platforms should run candidate ads and, if so, what sort of editorial control and limits on targeting they should impose. But there is no discernible movement to pass a new law significantly restricting their right to run their platforms as they see fit. Under US constitutional law, moreover, social media companies have strong expression rights that seem to override the interests of their users and affected third parties. To take Taylor Swift totally out of context: in the US, it’s their house, they make the rules.

We’re at an inflection point on how to govern the Internet in a multi-polar world

As Kieron O’Hara and Wendy Hall have pointed out, four competing Internets have developed in recent years. Europe wants a bourgeois Internet emphasizing content control, China wants an Internet that promotes social and political stability, the US wants an Internet that is the handmaid of commerce, and Russia wants an Internet that props up the state at home and enables disinformation campaigns abroad. The dream of a single global communications network under a common governance structure is giving way to separate, more or less autonomous networks that remain mostly interconnected.

Is there a way to restore the vision of common Internet governance, or do we simply need to provide for interconnection and manage our differences as best we can? In the coming year, the US and Europe have an opportunity to forge a common approach: strong and effective content moderation decisions by the platforms themselves, combined with government regulation or other oversight mechanisms that validate those decisions while ensuring full, robust debate of conflicting political ideals and visions. Will policymakers on both sides of the Atlantic seize the opportunity, or let it slip from their grasp? The difficulties are immense, as the Transatlantic Working Group’s recent discussions at Bellagio demonstrated, but the stakes could not be higher.