Facebook, the NSA and the Screwy Ethics of Corporate Analytics
CIO.com columnist Rob Enderle suffered a brutal beating after police broke up an illegal rave next door to his house. The rave attracted hundreds of teens who saw the party invite on Facebook. The incident left Enderle to wonder why Facebook, other social sites and even government agencies are so reluctant to use their data to prevent bad things from happening.
By Rob Enderle
Analytics is a very powerful tool. With it, you can determine market trends or potentially prevent the next terrorist attack. That same tool could also prevent mass shootings, suicides and a broad variety of crimes.
That last possibility occurred to me while I was recovering from nearly being kicked to death by three older teens who showed up after police shut down a massive, illegal rave in the house next door. Facebook knew about the rave, but the connection wasn't made until after the damage had been done.
The NSA, meanwhile, knows about crimes, but the agency appears to share that information with law enforcement selectively, as part of anti-drug efforts or to punish criminals, not to prevent violent crime. I don't know about you, but not being killed ranks higher on my list of priorities than catching a drug dealer.
The NSA is a mess at the moment, dominated by politics and bureaucrats. But Facebook chooses not to help. This hits the core of the screwy ethics of analytics.
Economics of Data Mining Make Information Disclosure a Hard Sell
Facebook was created as a place for people to share information. Teens in particular don't like the idea that their stuff is being monitored, but they're willing to turn a blind eye to this activity, since it primarily serves the mining of advertising dollars. Should Facebook actively step in and become known as the company that reports teens behaving badly to law enforcement or, worse, their parents, kids will flee the service in large numbers. Based on comments during Facebook's latest quarterly earnings report, this is already happening.
If you can knowingly do something, anything, to prevent a crime, you can be held liable if you don't do it. In our litigation-rich environment, juries rule against large companies regardless of actual fault. A company such as Facebook is therefore protected only if it can get people to believe there was nothing it reasonably could have done.
Facebook executives do care about this. Don't get me wrong. But analytics isn't perfect, and using analytics to prevent crime is largely unproven territory for a public company. You can't blame Facebook for fearing that saving one child but failing to save another will still leave the company partially liable.
That said, there's no doubt that Facebook could easily identify and prevent raves like the one that nearly killed me. Just flag any address that appears in an unusually large number of invitations, then check whether that address matches the home addresses of the people promoting the party. Facebook has turned many legitimate parties into disasters thanks to teens oversharing the party's invitation; the home of a friend was nearly destroyed by such an event.
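The heuristic described above can be sketched in a few lines. This is a minimal illustration, not anything Facebook actually runs: the function name, data shapes and the invite threshold are all hypothetical.

```python
# Hypothetical sketch of the detection heuristic described above:
# count how many distinct invites name a given address, then flag
# any address whose invite volume is high and which does not belong
# to one of the event's promoters.
from collections import Counter

def flag_risky_events(invites, promoter_addresses, threshold=100):
    """invites: list of (recipient_id, event_address) pairs.
    promoter_addresses: set of addresses on file for the promoters.
    Returns the set of addresses with at least `threshold` invites
    that match none of the promoters' own addresses."""
    counts = Counter(addr for _, addr in invites)
    return {
        addr for addr, n in counts.items()
        if n >= threshold and addr not in promoter_addresses
    }

# Example: 150 invites point at a house the promoters don't live in.
invites = [(f"user{i}", "123 Quiet Lane") for i in range(150)]
invites += [(f"user{i}", "456 Promoter St") for i in range(20)]
flagged = flag_risky_events(invites, {"456 Promoter St"})
print(flagged)  # {'123 Quiet Lane'}
```

A real system would obviously need fuzzier address matching and a threshold tuned to venue size, but the core check is this simple.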
Facebook — or any other social network, for that matter — could notify the user that a party may get out of hand and help him or her eliminate the problem before property damage or violence occurs. This doesn’t mean the company won’t get sued — Facebook does get sued — but it’s difficult to win a suit if you can’t prove some kind of direct action.
Facebook’s Tactical Thinking on Data Mining May Backfire
It’s only a matter of time before someone figures out how to hold Facebook and other services liable for crimes committed using social media. (Murders have been planned on Facebook. Some have been stopped, albeit by alert users, but others have not.) What makes the difference today: Analytics make it increasingly easy to showcase that a company knew something bad was about to happen but did nothing with that information.
Now, in many jurisdictions you’re not legally obligated to report a crime, but if your service is used in a crime and you benefit from it — through, say, advertising revenue — I believe there’s a good argument to be made to connect the company to it. I expect it will take some time, perhaps years, to establish case law around this, but, once case law is established, social media sites could be held liable.
In the end, though, this all pales in comparison to the idea that a mass killing, suicide, murder or destructive crime (like an illegal rave) could be prevented, or a child in danger saved. Maybe the better path is accepting the risk and preventing the crime. As I recently discovered, that life saved might be yours, or that of someone you care for.
I think Facebook is thinking tactically when it should be playing the long game. This may come back to haunt the company. If you worked for Facebook, or another social media site, and you knew that data mining could save lives and property, but could also cost you customers and expose your company to liability claims from those it failed to save, what would you do?
Rob Enderle is president and principal analyst of the Enderle Group. Previously, he was the Senior Research Fellow for Forrester Research and the Giga Information Group. Prior to that he worked for IBM and held positions in Internal Audit, Competitive Analysis, Marketing, Finance and Security. Currently, Enderle writes on emerging technology, security and Linux for a variety of publications and appears on national news TV shows that include CNBC, FOX, Bloomberg and NPR.