Lessons from Visa and MasterCard: governments need to work with tech companies to respond to harmful content

Imposing fines, mandating blocking and threatening legislation against tech companies is the wrong approach for governments to take when responding to harmful content. Governments need to offer a partnership with tech companies to address this common problem, much like the U.S. did with credit card companies.


Governments around the world are seeking to ratchet up pressure on tech companies to block and take down harmful content.  Some have passed laws imposing fines if this material stays up too long.  Others threaten legislation if companies don’t automatically block all harmful material.

This is the wrong approach.  At a time when governments need the full and active cooperation of tech companies, they are instead seeking confrontation and legal compulsion.  The companies will move from a culture of cooperation to solve a common problem to a culture of compliance, viewing anyone from government as a source of legal risk rather than a source of aid, encouragement and information.  This is unnecessary.

Tech companies are practicing today what credit card companies learned almost 20 years ago.  They have an affirmative responsibility to themselves, their shareholders, their employees, their customers and to society at large to make sure that their systems do no harm.

Card companies started with the assumption that they were no more responsible for bad uses of their systems than the Federal Reserve Board was responsible for teenagers using dollar bills to buy cigarettes at the local 7-Eleven.  It wasn’t their problem. 

In the late 1990s and early 2000s, that began to change, and I was part of the transition at one of the card companies, Visa. A story appeared in a major magazine with a headline identifying a card network as the payment card of choice for child pornographers.

It didn’t take long for the companies to react.  They had spent billions building brands that encouraged people to feel good about using their cards.  Brand damage was not something to take lightly.

So, they hired an outside firm to search the Internet 24/7 looking for signs that payment cards were being used for child pornography. They took steps to remove the offending merchants from the system immediately.  Within a few months, the child pornographers had moved on to alternative payment schemes.

But this problem could not be solved by each company acting on its own.  Under the direction of the National Center for Missing and Exploited Children, the industry joined with tech companies, law enforcement and government agencies to share information and best practices.  I was Visa’s representative to these meetings. The entire financial services industry formed a united front against the child pornographers. 

Government representatives encouraged this approach and law enforcement provided guidance and instruction. Sometimes law enforcement wanted to allow a child porn network to continue to operate for a time so as to build a stronger case against them and catch all the members of the group.  Without legal compulsion, the companies were only too glad to work together with each other and with government agencies. Reporting requirements were added in 2008 only after the voluntary system had been up and running for several years.

In 2006, I testified at two Congressional hearings on behalf of Visa, hearings called to let the public know what the companies were doing and to encourage them to stay the course.

Contrast this with today’s threats and ultimatums coming from government officials regarding hate speech and extremist content on tech platforms!

Tech companies have a legal advantage over card companies. Section 230 of the Communications Decency Act allows tech companies to take action against harmful material without incurring liability for its presence on their system or for their efforts to take it down. As a result, the major tech companies have a zero-tolerance policy for hate speech and extremist content. Each of them has an active enforcement program, including algorithmic tools to identify material for review and to block it from being uploaded again.

Policymakers should be concerned that activist groups will pressure companies the other way, describing what they do as private censorship, second-guessing their decisions, and urging First Amendment protections for platform users.  Some critics, concerned about the strong market position of some tech companies, have suggested legislating due process requirements, removing their legal immunity or turning them into common carriers, required to carry all content.

Governments should resist these calls.  But they should also offer a partnership with tech companies to fix a shared problem, the way they did with card companies to address child pornography.
