When it comes to deciding whether to pepper spray a crowd, drone operators, tucked away in a safe location, can make calmer decisions than police on the ground. This is one reason some governments are buying weaponized drones for crowd control.
Using this logic, it is easy to argue that dispassionate algorithms are even better suited to the task than human drone operators. So, in the worst-case scenario, humans integrate AI into drone-based weapons, eventually cede complete control, and the AI begins using those weapons for its own purposes. No amount of programming, no embedded rules, no computer architecture designed for selflessness and loyalty to humans will stop it.
The good news is that such a foreboding future is a long way off. The bad news is that, following a similar and much more likely course, AI could radically reshape the professional world. And health IT could be the first to fall.
Why health IT? First and most obviously, healthcare in the U.S. is desperately inefficient. Lately, the solution prescribed to combat that inefficiency is to shed as many people as possible, and AI is a natural fit for the job.
Health IT is particularly vulnerable – or especially well suited – to AI, depending on your perspective, thanks to HIPAA – the Health Insurance Portability and Accountability Act of 1996. HIPAA was meant to promote the sharing of patient data, but it actually has done the opposite. The culprit is a privacy rule that scares the bejesus out of healthcare providers, causing them to avoid sharing medical data whenever possible. This is where AI comes in.
Learning machines can examine medical records, share data among information systems, and draw insights that can help physicians make better decisions in the diagnosis and treatment of patients, all while protecting the privacy of individual patients. Current efforts are in line with this. One example is Watson Health, which IBM is grooming to help physicians make diagnoses. Another is a deep learning algorithm designed to help physicians spot patterns of disease in medical images.
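To make the privacy claim concrete, a minimal sketch of what "analyzing records while protecting individual patients" can look like in practice is de-identification before analysis, loosely in the spirit of HIPAA's Safe Harbor method. This is purely illustrative, not Watson Health's or any vendor's actual pipeline; the field names and the salt are invented for the example:

```python
# Hypothetical sketch: strip direct identifiers from a patient record
# before handing it to an analysis algorithm. Field names are invented.
import hashlib

IDENTIFIERS = {"name", "address", "phone", "ssn", "email"}

def deidentify(record: dict, salt: str = "site-secret") -> dict:
    """Drop direct identifiers and replace the record ID with a one-way
    hash, so results can be linked back only by the data holder."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFIERS}
    cleaned["patient_key"] = hashlib.sha256(
        (salt + str(record["patient_id"])).encode()
    ).hexdigest()[:16]
    del cleaned["patient_id"]
    return cleaned

record = {"patient_id": "12345", "name": "Jane Doe",
          "ssn": "000-00-0000", "diagnosis": "type 2 diabetes"}
print(deidentify(record))  # clinical data kept, identifiers removed
```

The salted hash gives analysts a stable pseudonymous key for joining records across systems without exposing who the patient is, which is exactly the sharing-without-disclosure balance HIPAA was supposed to strike.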
While it may be true that AI programs are being designed for the good of humans, their success could have dramatic effects on health IT as it exists today. Simply put, if things keep going as they are, AI will assume more and more of the duties now performed by humans in health IT. Even the jobs of those who design and program health IT systems will be at risk.
As AI-fueled computers increasingly write their own programming, self-modification will be to learning machines what self-improvement is to humans. Eventually, machine-written code will become so complex that human overseers won’t understand how these machines do what they do.
Along the way, these algorithms will become increasingly adept at improving the analysis and transfer of data from one point to another, even setting criteria for data input, processing, transfer and publication. This will promote efficiency. But it will also allow these machines to define their interactions with humans.
At this point, humans in health IT will be largely irrelevant – and the longer-term threats posed by AI and weaponized drones will become evident in health IT.
In the end, it will come down to control – who or what has it – and what happens as a result. It has been suggested that if AI exerts total control over weaponized drones, it will inevitably begin to gather resources for its own ends rather than those of its creators.
What will happen if AI subsumes the human role in health IT is more speculative. But that question may be answered sooner.
This article is published as part of the IDG Contributor Network.