I was watching the interview between Matt Lauer and Seattle Seahawks coach Pete Carroll on the Today Show yesterday. They were discussing the team's bad -- and it really was bad -- call at the end of the Super Bowl. The play also showcased one of the big problems yet to be addressed with analytics and the artificial intelligence (AI) enhancements that will build on limited initial offerings like Apple's Siri.

I paid particular attention to the Super Bowl this year because Paul Allen, one of Microsoft's founders, owns the Seahawks, and he has historically been one of the most powerful people in technology -- yet his team doesn't seem to benefit from his unique knowledge of it. Thinking of Allen, the Seahawks and the Super Bowl made me realize the biggest problem analytics and AI will have to overcome: we put more effort into looking smart than into winning the game.

The goal isn't to look right while losing; the goal is to win. However, often the massive effort spent on the first makes the second increasingly impossible.

Let me explain.

Football -- an Information Game

I'm by no means a football expert, but I do cover analytics, and at its heart is analysis. Professional sports, and particularly football, are massively analyzed both offensively and defensively. Competing teams are analyzed in depth, as are the capabilities of each player, the team's playbook and even the quirks of the coaches.
By the time the Super Bowl is played, each team has a massive amount of information both on the team it is playing and on its own capabilities.

In addition, to get to the Super Bowl each team has demonstrated nearly unmatched competence in its coaching staff, which includes not only a head coach but experts on offense, defense and special teams.

While players in the heat of play make mistakes, it should be nearly impossible for a coach to make a bad play call. Consider the bad call that was made. The game hinged on this one play, so the focus on making the right decision should have been even higher, making the probability of a bad call vanishingly small. Yet the call made was clearly a bad one.

The Seahawks were running well for short yardage, and this was a short-yardage play. That held true even with the Patriots set up to stop the run, as they were in the final moments.

In addition, throwing the ball always carries a higher probability of a turnover. Even if the chance of success on a run was 25 percent and the chance of success on a pass was 40 percent, you'd want to factor in the chance of a turnover: on a run it was less than 10 percent, while on the pass -- with three defensive players around one receiver, as the film of the play shows -- it was around 75 percent. That far higher chance of ending the game prematurely with a loss should have prevented the play from being called.

These numbers are all rough, but after the fact you'd revise the chance of an interception to 100 percent, which should have led Pete Carroll to admit it was a bad call. Yet if you listen to the interview, Carroll says it wasn't a bad call, it was simply a bad outcome.
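The rough numbers above can be turned into a quick back-of-the-envelope comparison. The sketch below is purely illustrative: the function and the exact probabilities are my assumptions (the pass figures are adjusted so that success plus turnover stays under 100 percent), and it models a turnover as an immediate loss while a failed play without a turnover simply allows another snap.

```python
# Illustrative decision sketch: compare the chance of eventually scoring
# on a run vs. a pass at the goal line. Each play has three outcomes:
# score (win), turnover (lose immediately), or no gain (try again if a
# down remains). All numbers are rough assumptions, not real play data.

def win_prob(p_score, p_turnover, downs_left):
    """Chance of eventually scoring, given per-play outcome odds."""
    if downs_left == 0:
        return 0.0
    p_no_gain = 1.0 - p_score - p_turnover
    # Score now, or keep the ball (no gain, no turnover) and try again.
    return p_score + p_no_gain * win_prob(p_score, p_turnover, downs_left - 1)

# Assumed figures: a run succeeds ~25% of the time with ~5% turnover
# risk; a pass succeeds ~40% of the time but with a far higher
# interception risk (50% here, scaled down from the rough 75% above
# so the outcome probabilities remain consistent). With second down
# and a timeout left, roughly three snaps remained.
run_win = win_prob(p_score=0.25, p_turnover=0.05, downs_left=3)
pass_win = win_prob(p_score=0.40, p_turnover=0.50, downs_left=3)

print(f"run:  {run_win:.3f}")   # ~0.548
print(f"pass: {pass_win:.3f}")  # ~0.444
```

Even with the pass more likely to succeed on any single play, the run wins over the remaining downs, because a low-turnover play preserves the chance to try again while an interception ends the game outright.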
This would be like someone who shot himself in the leg while cleaning a gun concluding he was simply unlucky.

The Desire to Not Look Stupid

There is an almost universal tendency among executives to avoid learning from mistakes because they refuse to see them as mistakes. This goes to the heart of something we call Argumentative Theory, and it is executive Kryptonite. Instead of coming up with creative ways to argue that what he did was correct, Carroll should be trying to figure out why his process failed him. The goal isn't to look right while losing; the goal is to win. However, often the massive effort spent on the first makes the second increasingly impossible. If an executive refuses to see a mistake, he'll never fix it.

You can see this behavior clearly in CEOs, particularly underperforming ones. They repeatedly do things like ordering layoffs or selecting executives without the proper qualifications who don't work out, then act as if the results were successful when they weren't. They are so focused on appearing right that they fail to grow, and they end up failing repeatedly -- making the same avoidable mistakes over and over -- simply because they aggressively refuse to see them as mistakes.

Analytics and AI

So what does this have to do with analytics and AI? Any process of analysis and conclusion is less than perfect. If it were perfect, you'd have the equivalent of a crystal ball, and no system is close to precognition yet (it may be impossible to create such a system).

To improve the accuracy of a system, you have to constantly learn from its mistakes and improve it.
But if you instead focus on defending the solution as if it were perfect, even though it never will be, it won't improve, and you'll effectively promote a system with designed-in flaws.

In addition, if the system says one thing and the executive disagrees and makes an alternative decision that turns out to be wrong, the executive is likely to see the system as a threat. Rather than relying on it next time, he or she will eliminate it from the process or alter it so that it produces a recommendation in line with the wrong decision. That way, they can blame the system.

What If We're the Problem?

We focus too much on blame, and we put too much emphasis on the impression that a decision maker's decisions are flawless. Winning in sports and business is a process: you learn from your mistakes and work to find ways to avoid repeating them. Focusing on blame shifts the emphasis from doing the right thing to looking right even when you're wrong. Unless this is more effectively addressed, as we move from analytics to AI we'll corrupt these ever more powerful systems into doing the wrong things.

In addition, an advanced AI could legitimately see us as the problem to be solved -- something a number of high-profile folks, most recently Bill Gates, are increasingly concerned about. For analytics and AI to work, we have to make every effort to ensure they are used correctly and are as accurate as possible. We need to shift the emphasis from blame to improvement. If we can't make this pivot, what we are building might actually make things worse.

We could be intentionally turning the future version of Siri into anything but our friend.

Something to think about this weekend.