It’s hard not to sympathize with Facebook. Alex Stamos, formerly chief security officer for Facebook, said in his recent Senate testimony that digital platforms face demands for “incompatible solutions.”

“The news media regularly bemoans the power of these companies while calling for them to regulate the political speech of millions. Policymakers demand that the companies collect as little identifiable data as possible on users while also expecting them to be able to discern professional spies from billions of legitimate users. Political parties around the world ask for the platforms to censor their opponents and appeal any equivalent moderation of their own content.”

Indeed, the platforms have complex negative and positive roles to play in public discourse. They have to remove the genuinely harmful material on their systems and still allow the full expression of competing political ideas. The public policy puzzle is how to make sure dominant platforms don’t censor important points of view in their zeal to keep harmful material off their systems.

Digital platforms do not want to play a negative censorship role all by themselves

In some cases, they simply try to deny that their takedowns involve making content decisions at all. Here’s Facebook’s rationale on May 28, 2019 for taking down manipulative posts from Iranian sources:

“We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. As is always the case with these takedowns, we’re removing these Pages, Groups and accounts based on their behavior, not the content they posted. In this case, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

In other cases, they recognize that they are making content decisions but want to share the responsibility with the government. Here’s what Nick Clegg, former Deputy Prime Minister of Great Britain and now Facebook’s head of global affairs, said in his recent speech about their content moderation decisions:

“But it would be a much easier task as well as a more democratically sound one if some of the decisions that we have to make were instead taken by people who are democratically accountable to the people at large rather than by a private company.”

The boundaries of legally acceptable discourse in the US are far broader than in any other country in the history of the world

If we needed any further proof of that, it was provided by this week’s Supreme Court decision ruling unconstitutional on First Amendment grounds the ban on immoral or scandalous trademarks. The idea behind this ban was to disincentivize the use of racist, anti-Semitic, anti-immigrant, misogynist, white supremacist and other vulgar, highly charged words in commerce “by denying them the benefit of trademark registration.” But now, as far as trademark law is concerned, government-protected brands using such words are just fine. As Justice Breyer warned in his dissent, we should brace ourselves for a world in which people we encounter on the street will be “wearing a t-shirt or using a product emblazoned with an odious racial epithet.”

So, in the United States, government cannot take over or even share the job of negative content moderation. The democratically elected representatives of the people cannot even get trademarked racial slurs off of t-shirts in the U.S. In our system, government has outsourced the function of determining the socially acceptable boundaries of public discourse to the private sector, and as long as First Amendment jurisprudence is what it is, there’s no other way to do it here.

As Mark Osiel says in his new book, the law provides companies with a “right to do wrong.” Social norms provide some public defense against harmful but legal speech. Media companies, including digital platforms, that allow socially unacceptable content would face shame, outrage, and stigma, but this social pressure is backed up only by the ability of the audience and the advertisers to stay away if they think a particular outlet has gone too far.

What about the public’s right of access to diverse political perspectives?

This positive requirement of the First Amendment has deep roots in free speech theory. In 1948, the famous free speech theorist Alexander Meiklejohn wrote that the First Amendment “is not the guardian of unregulated talkativeness…What is essential is not that everyone shall speak, but that everything worth saying shall be said.”

He added that “…no suggestion of policy shall be denied a hearing because it is on one side of the issue rather than another…citizens…may not be barred because their views are thought to be false or dangerous.”

Is there something the government can do to vindicate this citizen right of access to diverse political points of view on digital platforms?

Senator Josh Hawley’s recently introduced bill, the Ending Support for Internet Censorship Act, might be a step in that direction. It would require platforms to avoid “politically biased” practices in their content moderation programs. It appears aimed at preventing them from moderating information in a manner that is designed to “negatively affect” or that “disproportionately restricts or promotes access to, or the availability of, information from a political party, political candidate, or political viewpoint.” Enforcement would be assigned to the Federal Trade Commission.

Reaction from both left and right has been harshly critical, with many comparing it to the discredited Fairness Doctrine that required broadcasters to air competing sides of controversial issues of public importance. The Federal Communications Commission repealed that regulation in the 1980s. In fact, in its current form Senator Hawley’s proposed bill is far broader than needed to achieve its purpose of ensuring that a wide range of views are presented on the major digital platforms. It would almost certainly succumb to a facial First Amendment challenge.

All is not lost for a positive, progressive vision of the First Amendment for digital platforms

For one thing, a more narrowly tailored bill might pass First Amendment scrutiny. While the details of Fairness Doctrine enforcement practice are not suited to the different economic and technical capabilities of digital media, the general idea is sound and has been upheld by the Supreme Court in a 1969 decision that is still good law.

The history of broadcast regulation shows many other attempts to craft policies that would expand the range of ideas available to the public beyond what would be provided by the private interests of media companies. In the spirit of a positive and progressive approach to the First Amendment, it’s time to look back at that history and see what lessons might be learned to update those policies for the age of the digital platform.