UK ‘Online Harms White Paper’ is ambitious and takes prescriptive approach

Opinion by Carl Schonander
Apr 23, 2019

For the approach not to negatively affect innovation, however, more prioritization is needed.


On April 8, 2019, the British Department for Digital, Culture, Media & Sport released the “Online Harms White Paper,” which generated considerable media attention. The attention is deserved, and it is worth going through the 102-page document. There is a 12-week consultation period, and many stakeholders will likely submit comments.

The approach taken by the White Paper is probably too prescriptive from an innovation standpoint, despite the pro-innovation language in the document. But it is certainly a sign of the times. Whatever the UK ultimately decides to do with respect to online harms content moderation, UK regulation should not be applied extraterritorially.

A new ‘duty of care’ for companies is the core of the white paper

The discussion of the “duty of care” that the UK intends to impose on companies, set out in Chapter 7, is perhaps the heart and soul of the document. The duty of care would apply to virtually any website that allows for some user interaction or discovery of user-generated content. The UK authorities have set themselves a very ambitious task; the authors themselves say that prioritization will be needed, at least initially.

The Department includes, under the future regulator’s remit, an obligation to promote innovation as well as to protect users from online harm. That, plus the emphasis on proportionate regulation to benefit SMEs, is welcome. However, the document places so much emphasis on prescriptive regulation in so many different areas that it is hard to see how the approach would not have an impact on innovation.

Content removal the focus for how to implement the duty of care

The focus of the White Paper is on developing new regulation compelling companies to be more proactive in removing harmful content, without changing the liability regime, which in the UK will still be governed by the E-Commerce Directive. In 2016, SIIA wrote that in an age of terrorism, tech companies have the following responsibilities (though, in the U.S. context, no legal obligation): take-down responsibilities; countervailing responsibilities to foster free speech and association; and affirmative responsibilities to take steps to counter violent extremism.

Clearly, the UK authorities want companies to be more proactive and quicker with respect to take-downs. That is certainly a legitimate goal, although the discussion about proposed EU monitoring obligations suggests that the broad and general proactive monitoring obligations contemplated by the White Paper may not be entirely consistent with the E-Commerce Directive.

It would also be worthwhile for the UK authorities to consider how to incentivize the more affirmative responsibilities. As many observers have noted, a lot of speech is offensive but cannot be eliminated because of freedom of expression law and values. The White Paper says that regulation will be consistent with freedom of expression. If that promise is to be kept, these more affirmative responsibilities will be important.

Broad list of harms considered

One of the services the White Paper performs is to list, on page 31 of the document, exactly what harms it proposes to regulate, although the Department is clear that the list is not static. Moreover, the harms are not individually defined, so companies will be relying on the regulator’s interpretation of exactly what constitutes a harm.

“Harms with a clear definition” include child sexual exploitation and abuse; terrorist content and activity; organized immigration crime; modern slavery; extreme pornography; revenge pornography; harassment and cyberstalking; hate crime; encouraging or assisting suicide; incitement of violence; sale of illegal goods/services, such as drugs and weapons (on the open Internet); content illegally uploaded from prisons; and, sexting of indecent images by under 18s.

“Harms with a less clear definition” include cyberbullying and trolling; extremist content and activity; coercive behavior; intimidation; disinformation; violent content; advocacy of self-harm; and, promotion of female genital mutilation.

“Underage exposure to legal content” includes children accessing pornography. It also includes children accessing inappropriate material (including under 13s using social media and under 18s using dating apps) and excessive screen time.

Harms not under consideration in the white paper

The White Paper says that harms to organizations, such as those relating to competition, intellectual property rights, and organizational responses to fraudulent activity, are not covered. With respect to individuals, separate legislation deals with harms stemming from breaches of data protection. Cybersecurity harms and harms suffered by individuals “on the dark web rather than the open internet” are also excluded.

What the ‘duty of care’ would require of companies

Observers have rightly emphasized this aspect of the White Paper. Chapter 7 spells out what a duty of care might look like, both generally and with respect to specific kinds of harm. To begin with, any company providing “services or tools that allow, enable or facilitate users to share or discover user-generated content, or interact with each other online” falls within the scope of the regulation contemplated by the White Paper.

In this context, social media platforms, file hosting sites, public discussion forums, messaging services and search engines are covered by the proposed regulation. Retail websites with product reviews and/or user comments would also be covered. As a general matter, such companies should do the following to comply with their duty of care (more specific requirements are listed later in Chapter 7):

  • Ensure their relevant terms and conditions meet standards set by the regulator and reflect the codes of practice as appropriate.
  • Enforce their own relevant terms and conditions effectively and consistently.
  • Prevent known terrorist or CSEA content being made available to users.
  • Take prompt, transparent and effective action following user reporting.
  • Support law enforcement investigations to bring criminals who break the law online to justice.
  • Direct users who have suffered harm to support.
  • Regularly review their efforts in tackling harm and adapt their internal processes to drive continuous improvement.

In any event, companies should already be regularly reviewing what they are doing with respect to the actions listed above. Businesses might want to consider where they could conceivably do more. It seems worthwhile, for instance, to beef up the departments that deal with users who report harm, and to staff those departments with individuals who can speak empathetically with users.

The white paper lists types of harms but does not explain how to determine if content is harmful 

The Department produced a lengthy list of online harms, at least 24 types in all. However, there is no indication of how to benchmark content and establish whether it is harmful. For example, the line between acceptable political speech and unacceptable intimidation of public figures is not clear. If a new regulator is established, as the White Paper calls for, it will surely have to prioritize the most serious harms, such as child sexual exploitation and terrorist content and activity.

Upcoming codes of practice likely to be prescriptive    

One of the themes of the White Paper is that, in the view of its authors, the era of tech company self-regulation is over. So, UK authorities will be looking for a new regulator to set out detailed, prescriptive “codes of practice” for companies. It is worth recalling, though, that no matter how detailed and prescriptive the codes of practice turn out to be, companies will still be making judgment calls. For example, it is probably never going to be clear where the line lies between “extreme” pornography and acceptable pornography.

So, especially if the UK decides to make company officials personally liable for online harms, companies are likely to lean towards taking down possibly objectionable material rather than investigating more thoroughly whether freedom of expression concerns and values might override a possible online harm. That might be a trade-off that UK authorities consider worth making, but it is a trade-off.

Continued tech sector innovation welcomed in the white paper  

There is a good discussion in the White Paper about the continued importance of promoting innovation in the tech sector while at the same time curbing online harms. That is welcome, because the reality is that tech will continue to be a source of growth in the UK and other advanced industrial economies. It is also simply a driver of new products and services that consumers are quick to take advantage of. UK regulators have been very creative, particularly in the financial space, for instance through “regulatory sandboxes,” which the White Paper supports. Hopefully that creativity will be demonstrated in the tech sector as well.

Whatever ultimately emerges, UK regulation should not be applied extraterritorially 

The online harms space is so vast that countries will likely have different views regarding what is harmful and what is not. Perhaps most importantly, countries will strike different balances between protected expression and unacceptable harm. For example, the US First Amendment makes it difficult to imagine that the kind of broad monitoring and duty of care approach envisaged by the White Paper could be implemented in the United States.

So, as has been noted in the EU right-to-be-forgotten context, it will be important to ensure that UK regulations do not affect what consumers and users in other jurisdictions can or cannot do. This also reinforces the view that companies should continue to be permitted to use geolocation tools to comply with different content rules in different countries.