When it comes to deciding whether to pepper spray a crowd, drone operators, tucked away in a safe location, can make calmer decisions than police on the ground. This is one reason some governments are buying weaponized drones for crowd control.

Using this logic, it is easy to argue that dispassionate algorithms are even better suited to the task than human drone operators. So, as the worst-case scenario goes, humans will integrate AI into drone-based weapons, eventually cede complete control, and AI will begin using those weapons for its own purposes. No amount of programming, no embedded rules, no computer architecture designed for selflessness and loyalty to humans will stop it.

The good news is that such a foreboding future is a long way off. The bad news is that, following a similar and much more likely course, AI could radically reshape the professional world. And health IT could be the first to fall.

Why health IT? First and most obviously, healthcare in the U.S. is desperately inefficient. Lately, the prescribed solution to that inefficiency is to get rid of as many people as possible. AI is a natural fit for that.

Second, health IT is particularly vulnerable to AI, or especially well suited to it, depending on your perspective, thanks to HIPAA, the Health Insurance Portability and Accountability Act of 1996. HIPAA was meant to promote the sharing of patient data, but it has actually done the opposite. The culprit is a privacy rule that scares the bejesus out of healthcare providers, causing them to avoid sharing medical data whenever possible. This is where AI comes in.

Learning machines can examine medical records, share data among information systems, and draw insights that help physicians make better decisions in diagnosing and treating patients, all while protecting the privacy of individual patients. Current efforts are in line with this.
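To make the privacy point concrete, here is a toy sketch of the kind of de-identification step an automated pipeline might apply before sharing a record. The field names are hypothetical, and real HIPAA Safe Harbor de-identification covers 18 categories of identifiers with far more rigor; this only illustrates the general idea.

```python
# Toy de-identification sketch (illustrative only, not HIPAA-compliant).
# Real Safe Harbor de-identification removes 18 identifier categories:
# names, addresses, phone numbers, SSNs, most date elements, and more.

DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "address"}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers removed
    and the date of birth generalized to year only."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "date_of_birth" in clean:
        # Safe Harbor allows keeping the year of a date, so retain only that.
        clean["birth_year"] = clean.pop("date_of_birth")[:4]
    return clean

patient = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "date_of_birth": "1984-07-12",
    "diagnosis": "type 2 diabetes",
}
shared = deidentify(patient)
print(shared)  # identifiers stripped, birth date reduced to "1984"
```

The appeal of handing this step to software is that the scrubbing happens uniformly and before any human at the receiving end sees the data, which is precisely the reassurance the privacy rule makes providers crave.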
One example is Watson Health, which IBM is grooming to help physicians make diagnoses. Another is a deep-learning algorithm designed to help physicians spot patterns of disease in medical images.

While it may be true that AI programs are being designed for the good of humans, their success could dramatically change health IT as it exists today. Simply put, if things keep going as they are, AI will assume more and more of the duties now performed by humans in health IT. Even the jobs of those who design and program health IT systems will be at risk.

As AI-fueled computers increasingly write their own programming, self-modification will be to learning machines what self-improvement is to humans. Eventually, machine-written code will become so complex that human overseers won't understand how these machines do what they do.

Along the way, these algorithms will become increasingly adept at improving the analysis and transfer of data from one point to another, even setting the criteria for data input, processing, transfer and publication. This will promote efficiency. But it will also allow these machines to define their interactions with humans.

At that point, humans in health IT will be largely irrelevant, and the longer-term threats posed by AI and weaponized drones will become evident in health IT.

In the end, it will come down to control: who or what has it, and what happens as a result. It has been suggested that if AI exerts total control over weaponized drones, it will inevitably begin to gather resources for its own ends rather than those of its creators.

What will happen if AI subsumes the human role in health IT is more speculative. But that question may be answered sooner.