Assembling data on a large scale is both a technical and political challenge. I’ve been involved with multiple hospitals where the finance and clinical teams never really collaborate and therefore the lenses put on either domain are not terribly realistic. I won’t even start about the Research part of this equation! Truly merging and using the data requires clinical, administrative and research leaders to establish trust and shared goals that promote an environment of accountability. The key to trusted data is transparency.
Combining clinical and financial data for cost management is a popular topic given the political and economic environment. This activity generally includes these data requirements:
- Claims data for diagnosis codes, patient demographics, encounter information and services provided; this data usually resides in the patient billing system
- Clinical data such as labs for quality and outcomes; this data usually resides in the EMR, EHR or other ancillary clinical systems
- Accounting and finance data from the general ledger; this data is housed in the budget and sub-ledger systems.
Assembling this data requires a robust technical architecture that stores the data relationships with contextual integrity and can resolve patient or person identity across systems. Once the data is assembled, organizational leaders can build disease registries to manage the cost of care for populations, model service line profitability, analyze payer contracts and more. The most important benefit of this transformation is that the organization begins to speak a common language of accountability: front-line managers start to understand the relationships between volume drivers and departmental workload, and take greater ownership of controlling those variables. The costing step is important because it makes both the underlying data and the calculated transaction-level cost fully accessible to decision makers.
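To make the assembly step concrete, here is a minimal sketch of joining claims, lab, and cost records around a resolved person identity. The record layouts and the simple match-on-name-and-birthdate rule are illustrative assumptions only; a production master patient index (MPI) would use far more sophisticated probabilistic matching.

```python
# Illustrative sketch: assembling claims, clinical, and cost data around
# a resolved patient identity. All field names and the exact-match rule
# are assumptions for the example, not a real MPI implementation.

def resolve_identity(record, master_index):
    """Return one enterprise person ID for a source-system record.
    Here we assume an exact match on (last_name, dob) for illustration."""
    key = (record["last_name"].lower(), record["dob"])
    if key not in master_index:
        master_index[key] = f"P{len(master_index) + 1:04d}"
    return master_index[key]

def assemble(claims, labs, costs):
    """Join the three sources into one per-person view."""
    mpi, merged = {}, {}
    for c in claims:
        pid = resolve_identity(c, mpi)
        merged.setdefault(pid, {"claims": [], "labs": [], "cost": 0.0})
        merged[pid]["claims"].append(c["dx_code"])
    for l in labs:
        pid = resolve_identity(l, mpi)
        merged.setdefault(pid, {"claims": [], "labs": [], "cost": 0.0})
        merged[pid]["labs"].append((l["test"], l["value"]))
    for k in costs:
        pid = resolve_identity(k, mpi)
        merged.setdefault(pid, {"claims": [], "labs": [], "cost": 0.0})
        merged[pid]["cost"] += k["amount"]
    return merged

# Invented sample records from three source systems
claims = [{"last_name": "Lee", "dob": "1960-01-02", "dx_code": "E11.9"}]
labs   = [{"last_name": "LEE", "dob": "1960-01-02", "test": "HbA1c", "value": 8.1}]
costs  = [{"last_name": "Lee", "dob": "1960-01-02", "amount": 212.50}]

view = assemble(claims, labs, costs)
```

Note how the name variants "Lee" and "LEE" collapse to one person, which is the essence of the identity-resolution requirement described above.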
Workforce data can be very revealing when combined with clinical data as well. One hospital gained significant insights by retroactively analyzing and optimizing shift management and overtime hours. If we evaluate this labor productivity data against quality and outcomes metrics, we may be surprised. All too often we hear that “my patients are sicker than theirs” but the proof is in the data!
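A hedged sketch of the kind of side-by-side comparison described above: pairing each department's overtime share with an outcome metric so the two can be examined together. Department names, hours, and readmission rates are invented for illustration.

```python
# Illustrative only: departments, hours, and readmission rates are
# invented to show the shape of a labor-vs-outcomes comparison.
staffing = {  # dept -> (regular_hours, overtime_hours)
    "ICU":     (4000, 600),
    "MedSurg": (9000, 300),
}
outcomes = {  # dept -> 30-day readmission rate (assumed metric)
    "ICU":     0.12,
    "MedSurg": 0.09,
}

report = {
    dept: {
        "overtime_pct": round(ot / (reg + ot) * 100, 1),
        "readmit_rate": outcomes[dept],
    }
    for dept, (reg, ot) in staffing.items()
}
```

The point is not the arithmetic but the join: once labor and outcomes data live in the same view, the “my patients are sicker” claim becomes testable.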
Over and above cost awareness, there is the business of curing diseases. Much of the industry efforts to date for personalized medicine have been disease specific and myopic in nature. While I believe this was necessary to evolve and mature, some organizations are taking a broader view of their data using advanced analytics to identify patient populations for targeted therapies that take into account an individual patient’s medical history (their disease risk, their individual pathology) and then select the most promising course of treatment, based on their unique characteristics.
There was a time when that vision seemed a long way off, but there is a growing trend among large US healthcare organizations to establish translational medicine research centers, particularly in academic medicine. It starts with the simple idea of integrating what we call the “longitudinal record” of a patient from multiple electronic medical record systems: connecting the records in the hospital with the records in the doctor’s office, the laboratory, and all the other components around that particular patient. This information captures medications, dosing, frequency and adverse reactions, and all of it needs to be compiled into an enterprise data warehouse. The patient record for the key cohort then needs to be extended to encompass oncology tumor profiling, residual disease testing and other aspects of the genomic space, in order to support progression analysis of the disease. The key to success is a plan that accounts for the volume, velocity and variety of data that must be built out.
With healthcare focused on outcomes and cost right now, there are a number of data sources that need to be integrated into the whole, blending the big data of genomics and proteomics in the right manner, under the control of governance. Precision medicine gives clinicians tools to better understand the complex mechanisms underlying a patient’s health, disease, or condition, and to better predict which treatments will be most effective. Healthcare organizations and research centers are under a lot of pressure to translate scientific advances in molecular biology into targeted therapies, especially in the treatment of cancer. Today, we all recognize that cancer is an incredibly diverse disease, and the ability to group individuals with matching genomic or proteomic profiles is a perfect example of a novel clinical trial design that can speed up saving lives.
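The grouping idea at the end of that paragraph can be sketched in a few lines: bucket patients by a shared marker profile so that each bucket is a candidate trial cohort. The marker names and patient records here are invented for the example, and a real pipeline would of course work from curated genomic data under governance.

```python
# Illustrative sketch: grouping patients by shared tumor-marker profile,
# in the spirit of basket/umbrella trial designs. All data is invented.
from collections import defaultdict

patients = [
    {"id": "P01", "markers": frozenset({"EGFR", "TP53"})},
    {"id": "P02", "markers": frozenset({"EGFR", "TP53"})},
    {"id": "P03", "markers": frozenset({"KRAS"})},
]

cohorts = defaultdict(list)
for p in patients:
    # Each distinct marker profile becomes its own cohort key
    cohorts[p["markers"]].append(p["id"])
```

Each key is a genomic profile and each value the patients who share it, which is exactly the “matching profiles” grouping described above.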
So I believe that the Enterprise Data Warehouse is both strategic and tactical. While it can literally help save lives, it can also help healthcare organizations run their business more effectively and optimize even the most mundane processes to gain efficiency.
I will be joining Dr. Michael Ames of the University of Colorado | Anschutz Medical Campus at #HIMSS16 to present:
Session Title: Combining Multi-Institutional Data for Precision Medicine
Date: Thursday, March 3
Time: 10-10:30 AM PST
Location: Clinical & Business Intelligence Knowledge Center – Booth 14000
Stop by and visit Dr. Ames and me at the Perficient booth #2871 from 11:00 AM to 12:30 PM on Thursday morning after the session.
In addition to this session, I will also be presenting the Perficient High-Performance Costing Expressway solution at the Population Health Knowledge Center, Kiosk #14106, on Wednesday, March 2, from 4:00-6:00 PM PST.
If you are attending HIMSS, be sure to enter to win a HIMSS Vegas VIP Experience, but hurry: you must enter by 11:59 on February 22, 2016.
Follow me @TerieMc
Learn more about Healthcare Analytics in our new guide.