by Greg Freiherr

Thinking algorithm ready to take on conventional medicine

News
Nov 06, 2015
Analytics, Big Data, Collaboration Software

An algorithm built by a San Francisco startup will soon begin using images archived at imaging centers to teach itself to spot the signs of disease. If it succeeds, medicine will never be the same.

Deep learning, a branch of artificial intelligence, could be in the medical mainstream in months.

San Francisco software startup Enlitic is preparing to send software engineers to about 80 medical imaging centers in Australia and Asia. These “forward deployed engineers,” as company founder Jeremy Howard calls them, will install a deep-learning algorithm on the centers’ picture archiving and communication systems (PACS). Once on board, the algorithm will begin learning how to interpret medical images, scouring tens of thousands of archived studies to identify the signs of disease in every imaging modality in the center: MRI, CT, ultrasound, x-ray and nuclear medicine.
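Enlitic has not published details of how its software reads a PACS archive, but the general shape of such a pipeline is easy to sketch. The example below is a hypothetical illustration using the pydicom library: it walks a directory of exported DICOM files and groups pixel data by modality. The archive path and layout are assumptions for illustration, not details from the article.

```python
# Hypothetical sketch: harvesting training data from a PACS archive.
# The archive layout, path, and grouping scheme are assumptions for
# illustration; they are not details disclosed by Enlitic.
import os
from collections import defaultdict

import pydicom  # reads DICOM, the file format PACS archives store

def harvest_archive(archive_root):
    """Walk an exported PACS archive and bucket images by modality."""
    images_by_modality = defaultdict(list)
    for dirpath, _, filenames in os.walk(archive_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                ds = pydicom.dcmread(path)
            except Exception:
                continue  # skip files that are not readable DICOM
            modality = getattr(ds, "Modality", "UNKNOWN")  # e.g. CT, MR, US, CR, NM
            images_by_modality[modality].append(ds.pixel_array)
    return images_by_modality

if __name__ == "__main__":
    buckets = harvest_archive("/data/pacs_export")  # hypothetical path
    for modality, images in buckets.items():
        print(modality, len(images), "images")
```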

It will do so by studying patterns in the images and then drawing conclusions on the basis of those patterns. Deep learning actually works best when tapping multiple sources of data, Howard tells me.
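The article does not describe Enlitic’s architecture, but Howard’s point about multiple data sources can be illustrated with a common pattern: fuse a convolutional image branch with a branch for structured clinical data. The PyTorch sketch below is a minimal, hypothetical example; the layer sizes and the choice to concatenate the two feature vectors are assumptions, not Enlitic’s design.

```python
# Minimal sketch of a two-source ("multimodal") network: an image branch
# plus a clinical-data branch, fused before the final prediction.
# Architecture and sizes are illustrative assumptions, not Enlitic's model.
import torch
import torch.nn as nn

class MultiSourceNet(nn.Module):
    def __init__(self, n_clinical_features=8, n_classes=2):
        super().__init__()
        # Image branch: a small CNN over a single-channel scan slice.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
            nn.Flatten(),             # -> (batch, 32)
        )
        # Clinical branch: age, lab values, etc., as a flat vector.
        self.clinical_branch = nn.Sequential(
            nn.Linear(n_clinical_features, 16), nn.ReLU(),
        )
        # Fusion head: concatenate both feature vectors, then classify.
        self.head = nn.Linear(32 + 16, n_classes)

    def forward(self, image, clinical):
        features = torch.cat(
            [self.image_branch(image), self.clinical_branch(clinical)], dim=1
        )
        return self.head(features)

if __name__ == "__main__":
    model = MultiSourceNet()
    fake_scan = torch.randn(4, 1, 64, 64)         # stand-in for image slices
    fake_clinical = torch.randn(4, 8)             # stand-in for clinical data
    print(model(fake_scan, fake_clinical).shape)  # torch.Size([4, 2])
```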

This is all the more astounding when considering that radiologists must train for years to interpret such images, spending four years in medical school and several more refining their diagnostic skills as residents. Many specialize or even subspecialize. This may be in a specific discipline, neuroradiology, for example. Or in a particular modality, such as ultrasound. Or in the use of a single modality on a specific part of the body, such as abdominal CT. 

Enlitic’s business model is as remarkable as the algorithm’s medical potential. The company won’t sell or license the algorithm. Rather, it will take a cut of the profits attributable to its use, which Howard expects will make the centers more efficient and, consequently, more productive.

Enlitic has a kind of profit-sharing deal with Capitol Health, the Australian owner of the imaging centers, the details of which are not publicly known.  What is known is that Capitol Health is already a business partner, having invested $10 million in Enlitic as part of a Series B investment round. If their joint effort succeeds, the result could change the course of medical practice.

For years, healthcare leaders, particularly in the U.S., have argued for a move away from volume-driven practice, in which individual procedures are reimbursed without regard for whether they have helped the patient. Critics say this practice adds cost to patient care.

No medical discipline has been criticized more than radiology for allegedly driving up the cost of healthcare. This makes radiology an ideal testing ground for deep learning. Enlitic is so sure its algorithm will perform well that it is skipping the phased rollout that characterizes most first-release products and going directly into mainstream medical practice.

If it works as hoped, the algorithm will epitomize value-based medicine, improving patient outcomes as it boosts efficiency and reduces cost. Much of this hope is speculative, based partly on early experience with deep learning. Tantalizingly, a deep-learning algorithm featured in Howard’s TEDx talk, given in December 2014, discovered that cells surrounding diseased cells can provide information useful in predicting the course of a patient’s illness, a possibility pathologists had not previously considered. The deep-learning algorithm developed by Enlitic might provide similar insights. At the very least, it will signal a new kind of partnership between radiologists and their computers.

Currently radiologists use what is called computer-aided detection. CAD is used extensively in mammography, not in the primary or first reading, but as a backup to ensure that radiologists don’t miss anything. After the radiologist interprets the mammogram, CAD circles, boxes or in some other way identifies suspicious lesions in the image. The radiologist then checks the flagged lesions to ensure that none were missed.

The Enlitic algorithm would reverse those roles. The radiologist would identify the areas of interest in the medical image and the computer – when it has become expert at spotting signs of disease – would then render its interpretation for the radiologist to consider.
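Neither workflow is spelled out in code anywhere public, but the contrast is easy to sketch. In the hypothetical Python below, cad_second_read plays today’s CAD role, flagging suspicious regions after the radiologist’s read, while interpret_roi plays the reversed role the article describes, interpreting a region the radiologist has already singled out. The threshold, scoring model and region format are all illustrative assumptions.

```python
# Hypothetical contrast between today's CAD workflow and the reversed
# workflow the article describes. `suspicion_score` stands in for
# whatever trained model actually scores image regions; nothing here
# is Enlitic's implementation.
from typing import Callable, List, Tuple

Region = Tuple[int, int, int, int]  # (row, col, height, width) in pixels

def cad_second_read(
    regions: List[Region],
    suspicion_score: Callable[[Region], float],
    threshold: float = 0.5,
) -> List[Region]:
    """Today's CAD: after the radiologist's primary read, flag any
    region the model finds suspicious so nothing is missed."""
    return [r for r in regions if suspicion_score(r) >= threshold]

def interpret_roi(
    roi: Region,
    suspicion_score: Callable[[Region], float],
) -> str:
    """Reversed roles: the radiologist picks the region of interest,
    and the computer renders an interpretation for them to consider."""
    score = suspicion_score(roi)
    if score >= 0.5:
        return f"suspicious (score={score:.2f})"
    return f"likely benign (score={score:.2f})"

if __name__ == "__main__":
    fake_score = lambda region: (region[0] % 10) / 10  # stand-in model
    candidates = [(3, 40, 16, 16), (8, 120, 16, 16)]
    print(cad_second_read(candidates, fake_score))   # flags the second region
    print(interpret_roi((8, 120, 16, 16), fake_score))
```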

In this way, the two would be working together, Howard says, as a team.  The fact that the machine is doing the heavy intellectual lifting may, however, hinder its acceptance by radiologists who could feel threatened.  Such concerns were, in fact, the reason the name “computer-aided diagnosis” was changed to “computer-aided detection.”

While the upcoming installation at imaging centers will be the first time Enlitic’s deep-learning algorithm has ventured into medical practice, the algorithm has been tested, producing good, some might even say spectacular, results. The algorithm cut its diagnostic teeth on a lung cancer database developed as part of the Lung Image Database Consortium, an NIH-funded effort designed to support the development of guidelines for using CT to screen people for lung cancer. Using this database, the algorithm learned to detect lung cancer nodules in chest CT images, delivering a performance rated 50 percent better than that of a panel of expert thoracic radiologists, according to Howard.
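The article does not say which metric the 50 percent figure refers to. One common way to score a nodule detector against reference annotations like the consortium’s is a hit-based sensitivity: count a detection as correct if it lands within some tolerance of an annotated nodule. The sketch below, with made-up coordinates and tolerance, shows that bookkeeping; it is not Enlitic’s (undisclosed) evaluation protocol.

```python
# Hypothetical scoring of a nodule detector against reference annotations.
# Coordinates and tolerance are made up; this is not Enlitic's
# (undisclosed) evaluation protocol.
import math
from typing import List, Tuple

Point = Tuple[float, float, float]  # (x, y, z) nodule center in mm

def sensitivity(
    detections: List[Point],
    reference_nodules: List[Point],
    tolerance_mm: float = 5.0,
) -> float:
    """Fraction of annotated nodules matched by at least one detection
    within `tolerance_mm` (a simple hit criterion)."""
    hits = 0
    for nodule in reference_nodules:
        if any(math.dist(nodule, d) <= tolerance_mm for d in detections):
            hits += 1
    return hits / len(reference_nodules) if reference_nodules else 0.0

if __name__ == "__main__":
    reference = [(10.0, 20.0, 30.0), (50.0, 60.0, 70.0)]
    detected = [(11.0, 21.0, 29.0), (90.0, 90.0, 90.0)]
    print(f"sensitivity = {sensitivity(detected, reference):.2f}")  # 0.50
```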

Similar success was achieved by the Enlitic algorithm against a database of images showing fractures of the extremities, such as the wrist. These fractures are common, yet very difficult to detect, leading to missed diagnoses and delays in effective treatment. The Enlitic algorithm proved better at detecting these fractures than leading radiologists and did so in a fraction of the time, Howard says.

How deep learning will impact the practice of medicine is anyone’s guess at this time. But that question may be answered in a future that appears to be surprisingly close.