by Greg Freiherr

Luddites clobber AI, says advocate: Just wait ‘til the FDA weighs in

Opinion
Jan 20, 2016
Emerging Technology, Government, Health and Fitness Software

A nonprofit advocate of science and technology has criticized high-profile opinion leaders as obstructionists for stirring fear and hysteria about artificial intelligence. The nonprofit hasn’t seen anything yet.

People are scared to death of artificial intelligence and they’re getting hysterical, says the Information Technology and Innovation Foundation.  A “loose coalition of scientists and luminaries” is to blame, according to the ITIF, which this week awarded this coalition its annual Luddite Award.  

Led by the likes of Elon Musk and Stephen Hawking, these alarmists beat out advocates seeking a ban on “killer robots”; states limiting automatic license plate readers;  and Europe, China, and others choosing taxi drivers over car-sharing passengers.

In announcing its award January 19, ITIF President Robert D. Atkinson said in a press release: “It is deeply unfortunate that luminaries such as Elon Musk and Stephen Hawking have contributed to feverish hand-wringing about a looming artificial intelligence apocalypse…they and others have done a disservice to the public—and have unquestionably given aid and comfort to an increasingly pervasive neo-Luddite impulse in society today—by demonizing AI in the popular imagination.”

If the ITIF thinks these guys are bad, wait ‘til they get a load of the FDA.

Medicine is a prime area for the first AI applications.  One is already underway, the artificial brainchild of a San Francisco start-up called Enlitic.  Others are being researched by Big Blue and its corporate ward, Merge Healthcare.  These are a couple of the highest profile efforts, but they are by no means alone.

The medical AI train has begun chugging out of the station and it has plenty of cars in tow. How far they will get is anybody’s guess. My guess is not far – at least not in the U.S., and not in any form that resembles what most people would consider AI.

The reason is the FDA, which may be the single biggest impediment to the clinical adoption of artificial intelligence.

The FDA escaped being named among the 2015 alarmists cited by ITIF, which focused instead on Musk for comparing AI to “summoning the demon”.  (To be fair, Musk made this comment in 2014 and, therefore, shouldn’t even have been in the mix.  But this technicality did not spare him the ire of ITIF President Atkinson, who decried Musk’s pronouncement as taking “us two steps back.”)

The FDA will do better than that.  I believe this agency, one of the slowest, most pedantic in the U.S. government, will stop medical AI dead in its tracks, making the FDA a shoo-in to win the Luddite Award sometime in the future.  To see why, take a look at what it has done in computer-aided diagnosis (CAD).

For more than a decade, the FDA has wrestled with CAD as an adjunct to digital mammography. In the form that finally passed the agency’s review, CAD serves as the medical equivalent of a spell checker.  Mammographers first look over the image to identify suspicious lesions.  The software then checks the image and outlines what its algorithm sees as suspicious, pointing out every possible lesion so that none escapes detection.

With this approach, if the computer were the diagnostician, there would be a lot of false positives.  This is where the mammographer steps back in, looks at all the possibilities, and rules out the ones already dismissed in the first read. Now and then, a lesion not seen by human eyes pops up. And that is what drives the sale of CAD – that conscientious diagnosticians will check through all the possibilities again and occasionally find one that was missed.

Whether mammographers routinely use CAD in this way is hard to say.  From a time standpoint, it would make more sense to unleash CAD to sniff out possible lesions and then bring to bear the diagnostic power of the trained physician.  But the FDA will not allow CAD products to be marketed this way, because such marketing would presume that the machine is as good as or better than the physician at spotting lesions. Consequently, the physician’s before and after readings are the bread in a CAD sandwich.

By stark contrast to CAD, deep learning algorithms, which are the foundation of medical artificial intelligence, are being groomed to find disease on their own. This is not going to go over well with the FDA. The agency is still trying to get its head around the use of pattern-matching algorithms in CAD.

Toward this end, three years ago the FDA released two guidance documents regarding CAD devices used in radiology. They were intended for use by companies making CAD software, as well as by the agency’s own reviewers.  The first guidance does not even mention mammography CAD, the primary application for this type of software.  The second does – but specifically and only as a means of providing “a probability of malignancy score to the clinician for each potential lesion as additional information.”

Simply put, the FDA wants to keep computers on the periphery of diagnosis – and even then is not comfortable with the use of diagnostic software. The agency has cautiously defended the right of physicians – and only physicians – to practice medicine. The FDA absolutely will not step over that line, which pretty much negates the chance that deep learning algorithms with even modest autonomy will pass FDA review.   

ITIF singled out alarmists for their claims that AI poses a danger to the human race as a whole – not just to the role or authority of physicians.  For the sake of argument, however, let’s merge the two.  Consider the following: creating AI diagnostics requires creating machines that understand human anatomy and physiology, and particularly human vulnerability to disease and injury.  If Musk and Hawking are right that intelligent machines should be feared, are we turning over to these machines the means of our own destruction?

In my next blog, I’ll explain why such a fear has no merit.