It isn’t hard to conclude that the political system is broken in much of the world. In the U.S. in particular, approval ratings for government not only tend to be low but trend downward, and the latest election felt more like a revolt than a peaceful transition of power. Neither party seems able to keep its own members happy, and both often seem more focused on squabbles with other politicians than on making real, meaningful progress.
Sadly, I’ve seen this same behavior in companies, where executives often prefer the failure of a peer and rival over the success of the firm that pays them. At one firm this is so bad they have a term, which I can’t repeat, for the practice of supporting someone in public while stabbing them in the back in private. This practice made execution almost impossible, and it is far from uncommon.
Although this self-serving behavior is common, it isn’t universal. There are those who actually care more about making progress than about gaining personal credit or advancement. Unfortunately, we call these people heroes, and they are often not only underappreciated but managed out of their firms by those with better political skills and adverse personal agendas. So what if we had a system that actually understood behavior and could single out people whose priorities were damaging to the firm’s goals, for either behavior modification or termination? Wouldn’t that make the related firm both more successful and a better place to work? Granted, there is a tad bit of electronic Big Brother in this, but we already use technology invasively to mitigate other types of threats; why not focus it on bad human behavior as a core problem to be solved?
SARA the social robot
Using NVIDIA’s deep learning and AI technology [Disclosure: NVIDIA is a client of the author], Carnegie Mellon University’s ArticuLab has created SARA (Socially-Aware Robot Assistant), which is designed to read and learn from human behavior. SARA goes beyond the spoken word, looking at physical behavior, tone, and sentence structure to determine the true meaning behind the words, and it learns from each interaction. It could change a lot of things, from how surveys are taken to how employees are initially vetted, at a scale limited only by the size and scope of the related system.
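To make the idea concrete, here is a toy sketch of the kind of multimodal scoring such a system implies: blending what someone says with how they say it. Everything here, the word lists, the "vocal warmth" feature, and the weights, is a hypothetical illustration of the concept, not the actual CMU/ArticuLab implementation.

```python
# Hypothetical lexicons for a crude verbal-sentiment score.
POSITIVE_WORDS = {"great", "agree", "support", "happy", "yes"}
NEGATIVE_WORDS = {"never", "fail", "blame", "no", "wrong"}

def text_sentiment(utterance: str) -> float:
    """Crude lexicon-based score in [-1, 1] from word counts."""
    words = utterance.lower().split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def multimodal_score(utterance: str, vocal_warmth: float) -> float:
    """Blend the verbal and nonverbal channels.

    vocal_warmth is an assumed tone feature in [-1, 1] (e.g. from
    audio analysis). A mismatch between positive words and a cold
    delivery pulls the combined score down, flagging possible
    insincerity. Weights are illustrative: the nonverbal channel is
    weighted more heavily because people control their words better
    than their tone.
    """
    return 0.4 * text_sentiment(utterance) + 0.6 * vocal_warmth

# A supportive sentence delivered coldly ends up with a low score:
print(multimodal_score("I support this great plan", -0.8))
```

The real system would of course learn these features and weights from data rather than hard-code them; the point is only that combining channels lets a mismatch between words and delivery surface as a signal.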
I see applications ranging from security (identifying people, whether employees or visitors, who intend the firm harm), to military (identifying those who may be carrying concealed weapons or explosives), to sales (real-time segmentation of those who enter knowing what they want, those who don’t know what they want but want to buy, and those who are just killing time). But I think this system could also identify and help correct bad executives and politicians, with the latter being at scale.
Making for better executives and politicians
If you’ve been around for a while, you’ve likely seen a lot of self-serving executives who take credit for things they didn’t do and are very effective at assigning blame for things they did. I think of them as a hostile virus, almost impossible to get rid of but deadly to the corporation. If you had a system that could analyze their behavior when you were meeting with them, it likely could, over time, differentiate between those who honestly want to help you make progress and those who say the right words but plan to stab you in the back at the first opportunity.
In short, it could highlight the difference between actual loyalty to you and the firm and fake loyalty from someone planning to betray you. Once you know of this problem, you can deal with it more effectively, and I’ve seen a lot of senior executives get screwed by people they trusted deeply.
Carly Fiorina had this happen twice during her tenure as CEO at HP (the second and more damaging time contributed significantly to her termination). Novell’s Ray Noorda had his hand-chosen successor not only get caught trying to stage a coup against him but also betray him in a far more personal fashion.
While this could be very powerful in companies for identifying people who might even be embezzling from the firm at any level (that’s my old audit self talking), it could be far more powerful in politics. Think of a news service that could deploy this unbiased system during a debate and provide two real-time indicators: one tied to facts, showing accuracy (which is coming), and one showing truthfulness. Combined, they would differentiate those who were misinformed on a topic but telling the truth from those who knew the right answer but chose to give another to further their own, or their party’s, agenda.
Then the voters could decide whether they wanted a clueless truthful politician or a well-informed dishonest one. Granted, my own preference would be a truthful, well-informed politician, and I think this kind of application would eventually produce exactly that. Boy, wouldn’t that just change the world?
Unbiased computers could read people and change the world
An unbiased computer that can read people could be a massive help in making not only companies but also governments more successful and responsive. It could also be a huge benefit in Thanksgiving arguments, clearly identifying those who want to argue just for attention or to assert dominance, and allowing you to change tactics to win, point out that they are being jerks, or simply avoid them in the first place.
This kind of thing could open up an entirely new area of analytics, one that looks at a far broader set of data points to better tell not only what is actually happening in real time but also to better project the future, be it election or sales results.
Granted, they had me at winning family arguments, but that is the nature of this time of year. I hope you have, or have had, a great Thanksgiving!