From integrated healthcare giants like Kaiser Permanente, the University of Pittsburgh Medical Center (UPMC) and UnitedHealthcare to solitary 25-bed community hospitals, big data is finding its way into the day-to-day business of providing care.
Hospitals and health insurers are applying big data in three primary, related ways: improving the care of chronic diseases, uncovering the clinical effectiveness of treatments and reducing readmissions. These improvements are expected to deliver the most benefit to the entire healthcare system in the shortest amount of time. (All three problems carry a hefty price tag, though, both for the healthcare system and for society at large, not to mention for the hospitals that don't get them under control.)
Medicare, the single largest insurer in the United States, drives many of these changes. Medicare either has begun or will begin to impose penalties on hospitals that don't improve care in three critical areas:
- The 30-day readmission rate of patients with acute myocardial infarction, heart failure or pneumonia
- The meaningful use of electronic health record (EHR) systems
- Beginning in 2014, hospital-acquired conditions, which patients contract during their hospital stay rather than arrive with at admission
"If you play this out, in 2017 those three programs will account for 6 percent of a hospital's Medicare revenue being at risk. That's not trivial money for most hospitals," says Dr. Anita Karcz, chief medical officer at the nonprofit Institute for Health Metrics. IHM works with community hospitals on compliance and reporting and performs clinical effectiveness research on breast cancer surgery, total hip and knee replacement and readmissions risk by data mining the combined de-identified data sets of its customer hospitals.
Costs, Outcomes Driving Big Data in Healthcare
It's this combination of cost and outcomes that's the focus of most big data efforts today. Around the globe, aging populations are putting severe strain on national resources. These costs are projected to increase dramatically as a percentage of GDP if widespread age-related chronic conditions like diabetes, obesity and heart disease are not brought under control.
Health insurers' disease management programs, for example, aim to predict which customers are at risk for conditions such as diabetes or heart disease and then help them change their behavior, says Thomas Davenport, visiting professor at Harvard Business School and co-author of the recently published big data tome Keeping Up with the Quants.
That is where companies such as Scottish bioinformatics firm Aridhia enter the picture. Aridhia is working with the United Kingdom's National Health Service (NHS) to cut readmissions for 200,000 patients. Invoking the 80/20 rule, the initiative focuses on the 20 percent of patients who, in any society, end up consuming up to 80 percent of available healthcare resources.
This effort pulls near real-time data from primary care physicians' notes, imaging, demographics, social welfare records, lab work and other NHS databases, compiles it nightly and batch processes it to figure out what triggers those readmissions. This use of analytics has led to a 40 percent reduction in diabetes-related amputations and blindness at participating institutions.
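The article doesn't describe Aridhia's actual model, but the general shape of such a nightly job is well understood: merge the record sources, compute a risk score per patient, and flag those who cross a threshold for proactive outreach. The sketch below is a hedged illustration only; the feature names, weights and threshold are invented, not Aridhia's:

```python
import math

# Illustrative only: a logistic-style readmission-risk score computed in a
# nightly batch over merged records (notes, labs, demographics, social data).
# Feature names and weights are invented for this example.

WEIGHTS = {
    "prior_admissions_12mo": 0.45,
    "hba1c_above_9": 0.80,       # marker of poorly controlled diabetes
    "lives_alone": 0.30,         # social-welfare signal
    "missed_appointments": 0.25,
}
BIAS = -2.0

def readmission_risk(patient):
    """Logistic-style score in [0, 1]; higher means flag for outreach."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def nightly_batch(patients, threshold=0.5):
    """Return the IDs of patients whose risk crosses the outreach threshold."""
    return [p["id"] for p in patients if readmission_risk(p) >= threshold]
```

In a production system the weights would be learned from historical readmission outcomes rather than hand-set, but the workflow is the same: score everyone overnight, then hand the flagged list to community care teams the next morning.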
"That has revolutionized how primary care physicians and community care nursing practitioners deal with their local populations," Aridhia CEO David Sibbald says. "It's about keeping people out of hospital and managing care proactively."
Similar initiatives are underway at the Ohio State University's Wexner Medical Center, where CIO Phyllis Teater is working with the Battelle Memorial Institute, a research and development organization, to cut readmission rates and improve care through data mining and predictive analytics, and in Europe, where IBM's recently acquired Cúram Software is doing similar work with health systems in Denmark and Catalonia.
Data Mining's Not New, But 'Perfect Storm' for Healthcare Is
Of course, data mining has been going on for a long time in healthcare. It's the foundation of evidence-based medicine, after all. What's moving organizations into the realm of big data is the shift to EHRs and the benefit of integrating information from usually cloistered sources such as social services or census data.
Unlike most healthcare providers, Wexner has been using EHRs since the early 2000s. And unlike most EHR early adopters, the medical center went beyond basic admissions data and collected patients' complete medical history. Multiply this by 1 million patients, seen every year over 13 years, and Wexner has lots of data to analyze.
"Healthcare has a history of being very non-automated. Most of the data about patients was in paper charts. You can't do predictive modeling on paper charts," Teater says. "You certainly can't call it big data, because you're wading through chart after chart trying to see the specific condition the patient had."
Meanwhile, Seattle Children's Hospital has upgraded to IBM PureSystems to cut the time its analysts take to deliver answers about quality of care, and UPMC is creating a comprehensive data warehouse that integrates more than 200 internal and external data sources, including labs and pharmacies.
In the United States' fragmented healthcare delivery system, the next step for providers is taking those walled gardens of data, de-identifying them and combining them into larger pools that are accessible to everyone.
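The article doesn't specify how providers de-identify records before pooling them. One common approach is to drop direct identifiers and replace the patient ID with a salted one-way hash, so records from a single institution can still be linked to each other in the pooled set without revealing who the patient is. The field names below are invented for illustration, and a real effort would follow the full HIPAA Safe Harbor standard, which removes far more fields than this sketch:

```python
import hashlib

# Hedged sketch of de-identification before pooling: strip direct
# identifiers, pseudonymize the patient ID with a salted SHA-256 hash.
# Field names are invented; HIPAA Safe Harbor requires removing more.

DIRECT_IDENTIFIERS = {"name", "address", "phone", "ssn"}

def deidentify(record, salt):
    """Strip direct identifiers and replace the patient ID with a pseudonym."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
    clean["patient_id"] = token[:16]  # stable pseudonym for record linkage
    return clean
```

Because the same salt and ID always yield the same pseudonym, a research consortium can join a patient's lab results and admissions records across contributed datasets while keeping the identity behind the token with the originating hospital.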
"We're in a perfect storm," says Karen Parrish, vice president of IBM Industry Solutions. "There's so much [data], and it's so fragmented and…siloed, that one of the big challenges is, how to do we package it all together to get meaning out of it?"
That's the goal of organizations such as OptumLabs, a joint initiative announced in January between the Mayo Clinic and Optum, a healthcare informatics company within UnitedHealth Group. OptumLabs will make "information assets, technologies, knowledge tools and scientific expertise available to organizations interested in pursuing practical new solutions to patient care challenges," according to a press release.
There's Big Data—and Then There's Watson
No discussion of big data and healthcare is complete without IBM's Watson, perhaps the poster child of what's to come. Just over a year ago, IBM began working with health benefits provider WellPoint and Memorial Sloan-Kettering Cancer Center to train Watson to become, more or less, an oncologist's assistant able to recommend treatment options.
One of the biggest problems in healthcare is the sheer volume of new information being generated on everything from adverse drug events to advances in cancer treatment to genomic research. No doctor can keep up, so Watson ingested more than 1.5 million de-identified Memorial Sloan-Kettering patient records. Watson also came preloaded with 600,000 pieces of published medical evidence and two million pages of text from more than 40 medical journals.
The outcome of this effort is now commercially available. In February, IBM, Memorial Sloan-Kettering and WellPoint announced the first products based on Watson: Interactive Care Insights for Oncology, plus the WellPoint Interactive Care Guide and Interactive Care Reviewer, which are powered by Watson and designed for utilization management.
"A lot of our clients see [big data] as a problem," Parrish says. "We don't see it that way. There's just a wonderful opportunity to converge on the fact the technology exists, the data is there and we have a problem that's a data-intensive problem. That's why I say it's a perfect storm."
Allen Bernard is a Columbus, Ohio-based writer who covers IT management and the integration of technology into the enterprise. You can reach him via email or follow him on Twitter @allen_bernard1.