Deep learning, a branch of artificial intelligence, could be in the medical mainstream within months.

San Francisco software startup Enlitic is preparing to send software engineers to about 80 medical imaging centers in Australia and Asia. These "forward deployed engineers," as company founder Jeremy Howard calls them, will install a deep-learning algorithm on the centers' IT infrastructure, known as picture archiving and communication systems (PACS). Once on board, the algorithm will begin learning how to interpret medical images, scouring tens of thousands of archived studies and learning to identify the signs of disease in every imaging modality in the center: MRI, CT, ultrasound, x-ray and nuclear medicine.

It will do so by studying patterns in the images and then drawing conclusions on the basis of those patterns. Deep learning actually works best when tapping multiple sources of data, Howard tells me.

This is all the more astounding considering that radiologists must train for years to interpret such images, spending four years in medical school and several more refining their diagnostic skills as residents. Many specialize or even subspecialize. This may be in a specific discipline, neuroradiology, for example. Or in a particular modality, such as ultrasound. Or in the use of a single modality on a specific part of the body, such as abdominal CT.

Enlitic's business model is as remarkable as the algorithm's medical potential.
The company won't sell or license the algorithm. Rather, it will take a cut of the profits attributable to its use, which Howard expects will make the centers more efficient and, consequently, more productive.

Enlitic has a kind of profit-sharing deal with Capitol Health, the Australian owner of the imaging centers, the details of which are not publicly known. What is known is that Capitol Health is already a business partner, having invested $10 million in Enlitic as part of a Series B investment round. If their joint effort succeeds, the result could change the course of medical practice.

For years, healthcare leaders, particularly in the U.S., have argued for a move away from volume-driven practice, which reimburses individual procedures without regard for whether they have helped the patient. Critics say this approach adds cost to patient care.

No medical discipline has been criticized more than radiology for allegedly driving up the cost of healthcare. This makes radiology an ideal testing ground for deep learning. The company is so sure its algorithm will perform well that it is skipping the phased rollout that typically characterizes first-release products and going directly into mainstream medical practice.

If it works as hoped, the algorithm will epitomize value-based medicine, improving patient outcomes as it boosts efficiency and reduces cost. Much of this hope is speculative, based partly on early experience with deep learning. Tantalizingly, a deep-learning algorithm featured in Howard's TEDx talk, given in December 2014, discovered that cells surrounding diseased cells can provide information useful in predicting the course of a patient's illness, a possibility pathologists had not previously considered. The deep-learning algorithm developed by Enlitic might provide similar insights.
At the very least it will signal a new kind of partnership between radiologists and their computers.

Currently radiologists use what is called computer-aided detection. CAD is used extensively in mammography, not in the primary or first reading, but as a backup to ensure that radiologists don't miss anything. After the radiologist interprets the mammogram, CAD circles, boxes or otherwise flags suspicious lesions in the image. The radiologist then checks the flagged lesions to ensure that none were missed.

The Enlitic algorithm would reverse those roles. The radiologist would identify the areas of interest in the medical image, and the computer, once it has become expert at spotting signs of disease, would then render its interpretation for the radiologist to consider.

In this way, Howard says, the two would be working together as a team. The fact that the machine is doing the heavy intellectual lifting may, however, hinder its acceptance by radiologists who could feel threatened. Such concerns were, in fact, the reason the name "computer-aided diagnosis" was changed to "computer-aided detection."

While the upcoming installation at imaging centers will be the first time Enlitic's deep-learning algorithm has ventured into medical practice, the algorithm has been tested, producing good, some might even say spectacular, results. The algorithm cut its diagnostic teeth on a lung cancer database developed by the Lung Image Database Consortium, an NIH-funded consortium designed to support the development of guidelines for using CT to screen people for lung cancer. Using this database, the algorithm learned to detect lung cancer nodules in chest CT images, delivering a performance rated 50 percent better than that of a panel of expert thoracic radiologists, according to Howard.

Similar success was achieved by the Enlitic algorithm against a database of images showing fractures of the
extremities, such as the wrist. These fractures are common yet very difficult to detect, leading to missed diagnoses and delays in effective treatment. The Enlitic algorithm proved better at detecting these fractures than leading radiologists and did so in a fraction of the time, Howard says.

How deep learning will impact the practice of medicine is anyone's guess at this time. But that question may be answered in a future that appears surprisingly close.