Practicing physicians must make decisions and recommendations constantly and quickly throughout the day. They assess clinical situations, try to form a coherent picture of the case at hand, compare that picture to the pattern of similar cases from experience and didactics, and come up with a proposed treatment plan. Many, many times every day.

With time, and under the pressure of performance, numerous mental shortcuts develop, often unconsciously. Learned paradigms are used as shortcuts – information-processing rules referred to as heuristics – and they help clinicians move quickly through cognitive processes all day long. However, a number of cognitive biases can emerge and lead clinicians to erroneous conclusions that are often only seen in retrospect.

Many different kinds of cognitive biases affect clinical decision-making, as in other fields that require rationality and good judgement. A few are common enough in clinical medicine to be worth describing, so that we can see how supportive information systems might be built to help overcome them:

1. Availability heuristic
Diagnosis of the current patient biased by experience with past cases. Example: a patient with crushing chest pain was incorrectly treated for a myocardial infarction, despite indications that an aortic dissection was present.

2. Anchoring heuristic
Relying on the initial diagnostic impression despite subsequent information to the contrary. Example: repeated positive blood cultures with Corynebacterium were dismissed as contaminants; the patient was eventually diagnosed with Corynebacterium endocarditis.

3. Framing effects
Diagnostic decision-making unduly biased by subtle cues and collateral information. Example: a heroin-addicted patient with abdominal pain was treated for opiate withdrawal, but proved to have a bowel perforation.

4. Blind obedience
Placing undue reliance on test results or “expert” opinion. Example: a false-negative rapid test for Streptococcus pharyngitis resulted in a delay in diagnosis and appropriate treatment.

These are examples of the kinds of biases that, once pointed out, are familiar to practicing clinicians. They are also the source of common medical errors.

AI systems can minimize these biases

Conceptually, an Artificial Intelligence (AI) system can overcome these cognitive biases and deliver personalized, evidence-based, rational recommendations in real time to clinicians (and patients) at the point of care. To do this, such a system would need to consider all the data about the patient – current complaints, physical findings, co-morbid conditions, medications being taken, allergies, and lab and imaging tests done over time. In short, the automated system would need to take into account all the things clinicians use to make a recommendation.

Once the data about the individual is gathered, it is compared against the experience derived from a large base of clinical data in order to match patterns and predict outcomes. A differential diagnosis (the different possible diagnoses at the time of observation, in order of probability) can be created, further testing to better distinguish between the possibilities can be suggested, and a treatment plan can be proposed – all without the cognitive biases that dog healthcare delivery today.

Do such systems exist currently? No. But they are evolving. For example, at Flow Health, we are building an end-to-end service intended to resolve the kind of cognitive bias that clouds medical judgement – a “vertical AI” approach. The limitations of AI in healthcare have more to do with the availability of data than with the sophistication of the algorithms. Deep learning has advanced considerably, largely in other domains where data is more accessible.
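To make the pattern-matching step concrete: comparing a patient's gathered findings against a base of past cases and ranking candidate diagnoses by similarity can be sketched in a few lines. This is only an illustration under simplifying assumptions – the findings, case history, and similarity measure (Jaccard overlap over finding sets) are invented for the example, and a real system would use far richer representations and probabilities.

```python
from collections import defaultdict

# Hypothetical historical cases: (set of findings, confirmed diagnosis).
# In a real system these would come from a large clinical data store.
HISTORY = [
    ({"chest pain", "diaphoresis", "st elevation"}, "myocardial infarction"),
    ({"chest pain", "tearing pain", "unequal pulses"}, "aortic dissection"),
    ({"chest pain", "diaphoresis", "nausea"}, "myocardial infarction"),
    ({"chest pain", "unequal pulses", "widened mediastinum"}, "aortic dissection"),
]

def differential(findings):
    """Rank candidate diagnoses by mean Jaccard similarity between the
    current findings and the historical cases carrying each diagnosis."""
    scores, counts = defaultdict(float), defaultdict(int)
    for case_findings, dx in HISTORY:
        overlap = len(findings & case_findings) / len(findings | case_findings)
        scores[dx] += overlap
        counts[dx] += 1
    ranked = sorted(((scores[dx] / counts[dx], dx) for dx in scores), reverse=True)
    return [(dx, round(score, 3)) for score, dx in ranked]

# A presentation matching dissection-like cases ranks dissection first,
# regardless of how memorable the clinician's last MI was.
print(differential({"chest pain", "tearing pain", "unequal pulses"}))
```

The point of the sketch is that the ranking depends only on the data, not on which past case happens to be most available to memory – which is exactly the availability and anchoring failure mode described above.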
In healthcare, breaking down institution-centric and provider-centric data silos is still a work in progress, but it is happening. Access to data is key if deep learning is to yield truly important insights.

Nevertheless, some early gains are being made, even with relatively limited data sets. Mostly, in these early stages, machine learning is “taught” by human expert opinion. For example, when teaching a system to read images (such as mammograms), the system is taught by human experts: “this is normal, this is benign, this is suspicious for cancer.” Today, the benchmark is comparing machine learning to human doctors. The real goal should be to apply AI to identify patterns and associations not previously recognized, even by human doctors.

But the goal is in sight. Healthcare organizations need to break down the barriers that prevent collaboration among AI researchers, healthcare organizations, and others. This is needed to aggregate enough data to drive AI-based knowledge discovery and overcome the human cognitive biases negatively impacting care quality.

Within the next few years, medical data will likely be sufficiently aggregated to allow good machine learning to take place. Data collection at the point of care – from clinicians, from patients, from laboratory findings, from imaging studies, and from genomics as it becomes more widespread – can flesh out a good case definition, which can then be matched against patterns extracted from large data sets, and logical recommendations can be presented. Such a system can be integrated into the tools clinicians and patients use (EHRs, portals, etc.) and can deliver on the vision of precision medicine. This is no longer science fiction. It will become our mainstream way of approaching healthcare in the near future.
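As a closing illustration, the expert-taught approach described earlier – human experts labeling images as “normal,” “benign,” or “suspicious for cancer,” and a model learning to reproduce those judgements – is ordinary supervised learning. A minimal sketch, with invented two-dimensional feature vectors standing in for extracted image features:

```python
# Hypothetical expert-labeled training examples: (feature vector, label).
# Real systems would learn from pixel data with far richer features.
EXPERT_LABELED = [
    ([0.1, 0.2], "normal"),
    ([0.2, 0.1], "normal"),
    ([0.5, 0.5], "benign"),
    ([0.6, 0.4], "benign"),
    ([0.9, 0.8], "suspicious for cancer"),
    ([0.8, 0.9], "suspicious for cancer"),
]

def train_centroids(examples):
    """'Teach' the model: average the feature vectors per expert label."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(centroids, vec):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(vec, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

model = train_centroids(EXPERT_LABELED)
print(classify(model, [0.85, 0.85]))
```

This nearest-centroid toy captures the “benchmark against human doctors” stage: the model can only echo the expert labels it was given. Moving beyond that benchmark – finding associations no expert labeled – is the data-hungry step that the aggregation described above makes possible.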