New transatlantic initiatives focus on an increased government role.

The last two weeks have been good ones for those interested in seeing governments take stronger action to regulate the content moderation practices of digital platforms. The Digital, Culture, Media and Sport Committee of the UK Parliament released its Final Report on Disinformation and ‘fake news,’ calling for an independent regulator to supervise a new requirement that digital platforms take action against illegal or harmful speech. The German Marshall Fund launched its Digital Innovation and Democracy Initiative, aimed, among other things, at addressing disinformation campaigns on social media platforms with solutions that support democratic values. And a new Transatlantic Working Group on Content Moderation Online and Freedom of Expression was formed, supported by the Annenberg Foundation, the Institute for Information Law at the Faculty of Law of the University of Amsterdam and the Dutch embassy in Washington, D.C.

The UK Parliament Committee has proposed a reasonable regulatory structure

The Committee report calls for a new independent regulator with powers to require tech companies to take action against “harmful or illegal content on their sites.” The regulator would enforce a new ethical code, to be defined by technical experts, “setting out what constitutes harmful content.” The goal would be to “create a regulatory system for online content that is as effective as that for offline content industries.”

There’s a lot to admire in this proposal. It focuses on reasonable processes and procedures, requiring companies to have “relevant systems in place to highlight and remove” harmful material. It thereby avoids the arbitrary removal time frames that turn the German Network Enforcement Act and the proposed European Commission regulation on terrorist content into threats to free expression.

But it doesn’t solve the fundamental regulatory problem, which is that under current U.K. law fake news and disinformation campaigns are not illegal, online or offline. It is perfectly sensible to require social media companies to take action against illegal material on their systems. But how can they be required to remove legal expression that has a legitimate, albeit undesirable, place in public discourse?

The report seems to resolve this issue by moving the determination of what constitutes “harmful material” to a non-governmental committee of technical experts. But if the U.K. Parliament intends to make fake news and disinformation campaigns illegal, it cannot outsource the definition of these key concepts to a technical committee, because getting the definition right is fraught with controversial ethical, philosophical and policy questions that lie beyond the scope of technical expertise.

A way forward might be for government to establish in law a broad definition of harmful material, and then allow platforms to interpret that definition and apply it in particular cases. That leaves open the question of whether a government regulator should be authorized to second-guess those judgments when they fall outside the bounds of reasonableness. In the U.S., at least, a government role in second-guessing the judgment of platforms in these circumstances might run afoul of the First Amendment.

A further defect in the report is the absence of any procedural protections.
Platforms must be required to publish detailed standards for removing material or accounts, to provide specific explanations for removals and to establish reasonable redress processes for users to challenge removal decisions. Without these protections, public accountability would be lacking.

The GMF’s new digital innovation and democracy initiative is a promising step forward

Led by Karen Kornbluh, a former U.S. ambassador to the OECD, the GMF initiative has attracted talented experts, including MIT’s Danny Weitzner, Rutgers law professor Ellen Goodman and Public Knowledge’s head Gene Kimmelman, who did not speak at the launch event in Washington, D.C., but whose first-class work is already posted on the group’s website. The initiative will convene roundtables, workshops and transatlantic working groups to seek ways to reform current law to protect the discourse vital to democratic self-governance in the 21st century.

Senator Mark Warner gave the group a spirited endorsement at the launch ceremony. He also promised he would introduce legislation to implement some of the regulatory ideas he floated in last year’s widely read tech reform memo.

Unfortunately, the launch was marred by a discordant note of “red-baiting” when some speakers seemed to dismiss the black NFL players’ on-field demonstrations against police brutality because those protests had been endorsed by Russian activists using disinformation techniques on social media. Given the group’s evident sensitivity to free expression issues, this was a surprising slip. It is worth saying plainly that the involvement of Russian bots on social media doesn’t discredit a movement or a point of view, any more than, generations ago, the support of the American Communist Party for civil rights and a 40-hour work week made those noble causes somehow suspect.

The new Transatlantic Working Group on Content Moderation Online and Freedom of Expression is off to a good start

Led by Susan Ness, a former commissioner of the U.S. Federal Communications Commission, the Transatlantic Working Group includes a broad range of lawyers, public policy experts, business representatives, civil society leaders and policymakers, including Damian Collins, chair of the U.K. Parliament committee that just released its report on fake news and disinformation. The group will hold its initial meeting February 28-March 3 in the U.K. and will release the first in a series of recommendations shortly after that.

In her launch statement, Ness focused on the very real danger that the same structures democratic countries put in place to combat hate speech, disinformation campaigns and terrorist material can be used by authoritarian governments to censor reports of government corruption and abuse by falsely labeling them as harmful speech. Guarding against that misuse is a delicate task requiring careful thought and nuanced judgment.

The good news is that several different groups of talented policymakers and experts are looking for balanced solutions that would require platforms to act against harmful speech, but in a way that preserves the full range of points of view and ideas essential to a flourishing democracy.