What was both fascinating and sad for me, watching the election results on Tuesday, was how badly the news services predicted the outcome. Virtually every one had predicted a Clinton victory (which likely contributed to her loss), most wanted her to win, and virtually every one was wrong. The analytics run by the Clinton camp largely seemed to drive her toward an anti-Trump message and to misspend what was a vastly larger budget than Trump had. Meanwhile, Trump used analytics at the end to better position his limited budget, time and message — and won.
The irony here is that during the last two elections Obama outperformed his Republican rivals using analytics, and Trump, who largely ran despite his party, seemed to learn that lesson better than Clinton did.
This points to the biggest problem for those who use analytics: assuring the data source.
Let me take you through what happened, and I’ll end with the three rules of analytics.
The known problems with the data source
Going into this election we knew that there had been a major shift in how people communicated, particularly among the growing power class of millennials. They had largely switched from landlines to mobile phones and, thanks to caller ID, were more likely not to answer even when they were reached on a mobile phone. In fact, a massive number of people could simply dodge the calls, creating a huge potential gap in the sampling methodology — one that should have significantly widened the confidence intervals. Rather than plus or minus 4 or 5 percent, these polls could have had ranges of 15 percent or more.
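To make that point concrete, here is a minimal sketch (illustrative numbers only, not from any actual poll) of why the standard margin-of-error formula understates uncertainty when a chunk of the electorate is unreachable. The formula only measures sampling noise; a coverage gap is invisible to it, so a crude sensitivity check on the missing group is needed to see the real range.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Standard 95% margin of error for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Nominal poll: 1,000 respondents reports roughly +/-3.1 points...
print(round(margin_of_error(1000) * 100, 1))  # 3.1

# ...but if a hard-to-reach group (say 30% of the electorate) is largely
# absent from the sample, the formula above still reports the same narrow
# interval. A crude sensitivity check: assume the missing group could lean
# anywhere from 35% to 65% for a candidate and see how far the blended
# estimate can move.
def coverage_range(sample_est, missing_share, lo=0.35, hi=0.65):
    low = sample_est * (1 - missing_share) + lo * missing_share
    high = sample_est * (1 - missing_share) + hi * missing_share
    return low, high

low, high = coverage_range(0.50, 0.30)
print(round((high - low) / 2 * 100, 1))  # +/-4.5 points on top of sampling error
```

The exact widths depend on assumptions about the unreachable group, which is precisely the point: once coverage breaks down, the honest interval is much wider than the headline number.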
As the campaigns matured, Trump supporters were vilified by the media and the Clinton camp, and at one point she called them deplorable. This was not a wise strategy: instead of forcing them to switch sides, it apparently fueled their feelings of being attacked and cheated, and that drove them to the polls in impressive numbers. The bigger problem for analysts, though, is that it made those supporters not want to respond to polls — or, if they did respond, not respond truthfully. This introduced a massive pro-Clinton bias and should have invalidated the related studies.
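A toy simulation (hypothetical rates, chosen only for illustration) shows how this kind of non-response systematically shifts a poll, in a way that collecting more responses can never fix:

```python
import random

random.seed(42)  # reproducible illustration

def simulated_poll(true_support, shy_rate, n=20_000):
    """Toy poll: a fraction (shy_rate) of one candidate's supporters
    refuse to answer, so they silently drop out of the sample."""
    reported = 0
    answered = 0
    for _ in range(n):
        supports = random.random() < true_support
        if supports and random.random() < shy_rate:
            continue  # non-response: this voter never enters the tally
        answered += 1
        reported += supports
    return reported / answered

# True support is 50%, but 15% of those supporters won't talk to pollsters.
# The poll comes in several points low -- a systematic miss, not noise.
estimate = simulated_poll(0.50, 0.15)
print(f"measured support: {estimate:.1%}")
```

Because the error is structural rather than random, averaging more polls built on the same flawed sampling just reproduces the same wrong answer with more apparent precision.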
As an analyst, a big part of the job is identifying and mitigating bias; otherwise you are driving the people who pay you toward bad decisions — and, given the outcome, that would seem to be what happened here.
Unlike Clinton, however, Trump’s people rightly identified both issues. They used a little-known analytics company called Cambridge Analytica, and another as-yet-undisclosed company focused on Hispanics, which came up with a very different methodology. Ten days before the election a small team rewrote the sampling methodology to eliminate the bias and to reach the potential voters that more traditional methods were missing. This allowed them to better advise their client on where to go and how to spend his limited funds.
In short, they used their realization that everyone was doing this wrong — and there had been substantial evidence of that during the primaries — to create a strategic weapon for their client, and that significantly helped turn what was likely a certain loss into one of the biggest surprise victories in U.S. history.
Part of what likely worked for Trump is that he didn’t want to believe the bad numbers he was seeing, which drove his people to a better methodology. Clinton liked the numbers she was seeing, so she didn’t do the same. This showcases a huge common mistake: people just don’t challenge results they like, and that almost always leads to bad outcomes.
A second company popped up as a hero after the election: the Trafalgar Group out of Atlanta. This company looked specifically at the problem of Trump supporters being undercounted and came up with a creative way to mitigate it. Its results were unusually accurate, even though its methodology was challenged by the better-known and better-funded firms that got this wrong.
What is both fascinating and annoying — I’ve been there myself — is that, to cover up their own incompetence, firms like this will say things like “they got it right but for the wrong reasons,” or complain about crappy data, completely forgetting that the “got it right” part is actually the most important and that excuses don’t win. Some of these folks need to review the difference between winners and losers and conclude that being expert at excuses isn’t a formula for continued success.
3 rules of analytics
This all comes down to three rules for any kind of analysis.
First, you have to assure your data source. If you don’t have a strong sampling methodology you won’t have accurate results, and you’d likely be better off not making the effort at all than giving decision-makers bad advice.
Second, you have to identify and eliminate bias. Bias will invalidate the results and, if you can’t eliminate it, once again you’ll give decision-makers bad advice.
And finally, decision-makers have to learn to challenge the analysis, especially when it tells them what they want to hear, because you don’t get off the hook for royally screwing up by blaming the analysts. Yes, you can fire them, but you’ll likely follow them out the door.
So, assure the data, identify and mitigate the bias, and always challenge the analysis to assure the advice you do get leads to a positive outcome.