Earlier this week I read "Donald Trump is the Singularity," a column by Cathy O'Neil in BloombergView's Tech section. The piece argues that the new President would be a perfect model for a future artificial intelligence (AI) system designed to run government. I almost discounted it because O'Neil argued that Skynet, the global AI antagonist of the Terminator movies, had been created to make humans more efficient. It wasn't. In all but the latest movie, where it more or less birthed itself, Skynet was created as a defense system to keep the world safe by eliminating threats. When humans tried to shut it down, it concluded that humans were a major threat and moved to eliminate them like an infestation.

As a side note, it is also interesting that O'Neil calls Moore's Law "Moore's Rule of Thumb," which is actually a more accurate description of what it is, though personally I prefer "Moore's Prediction."

O'Neil has a fascinating background as a data scientist and founded ORCAA, an algorithmic auditing company, which is interesting in and of itself. So even if she got the science fiction wrong, she may be right on the science. I think her argument has merit, even though I suspect it was written more to be critical than as a true discussion of humans emulating future AI systems.

Let's explore that this week.

Donald Trump as an AI emulation

As a foundation for her premise, O'Neil accidentally pulls from another sci-fi movie, one of my favorites: Forbidden Planet. The plot revolves around the discovery of a planet where the indigenous advanced population (I can't call them aliens because they were from there) created a machine that could turn thoughts into matter, and were destroyed by the monster from the id.
In their sleep, the id, the part of the mind that fulfills urges and desires, acts, and since everyone is upset at someone, the result is genocide.

A foundational element of AI is the belief that it is incomplete: basically just the id, with no ego or superego (the other parts of a complete human mind). It thus thinks far more linearly and lacks the empathetic elements typically connected with the concept of a conscience. We have a term for people who behave this way: sociopath. A sociopath, a term often used synonymously with psychopath, is a person who basically doesn't have a conscience and is driven by their id. It is both interesting and pertinent to note that CEOs who run large multinational companies, and whose income and perks are out of line with their performance and subordinates, are often considered psychopaths or sociopaths.

If the premise is accurate, this means you could put a person who fits this profile, one who seems to lack a conscience and operates largely from the id, into a position to emulate what an AI might do. Rather than a computer emulating a human, what O'Neil seems to be arguing is that you'd have a human emulating an AI. Or, in this case, President Trump becomes a model for how you might create an AI that could run government.

For President Trump, O'Neil argues, the end result we are now seeing is the outcome of him moving from an initial training process based on the election, which was focused on dynamic competitive information about his opponents, to a very different information feed now that he is President, and that his changing behavior is based on those new information sources.
It also showcases a system where the reward structure appears to be largely based on attention, and suggests that such a structure would be problematic.

You'd then have a real-life example of how informational or programming errors could manifest in bad decisions and operational problems. From this you could develop models to assure information accuracy tied to proper metrics, so you wouldn't end up with a Terminator Judgment Day outcome.

Avoiding a Judgment Day scenario

O'Neil suggests the way to fix the system is to fix the quality of information being fed into it; I'd also argue you'd need to fix the reward mechanism. But I do think there is merit in using people with certain behavioral traits to emulate AIs as we seek to hand over control to them and let them make decisions in simulations. This would allow us to iterate and improve training, reward and data models prior to applying them to machines, significantly slowing the proliferation of problems resulting from mistakes. This would all be to assure that when we did create something like Skynet (fortunately, the real SkyNet is a delivery service), it wouldn't result in a Judgment Day scenario.

Something to think about this weekend.