4:30 - 4:50 pm, Saturday, September 17
LK 130
Doctors who ordered this also ordered... OrderRex: data-mining electronic health records for clinical decision support
Stanford University

Description

Background
High-quality, precise medical practice is challenging in the face of uncertainty, as the majority of clinical decisions lack adequate evidence to support or refute them (e.g., only 11% of clinical practice guideline recommendations are based on high-quality evidence). The resulting pervasive practice variability compromises both the quality and cost-efficiency of healthcare (e.g., patients receive only 55% of evidence-based recommended medical care). Clinical decision support tools such as order sets help distribute expertise, but they are constrained by resource-intensive manual development.

Objective
To overcome these scalability limitations by automatically generating decision support content from existing practice patterns, analogous to Amazon.com’s product recommender, and to perform the first structured validation of such a system against external standards of care and outcome predictions.

Methods
We extracted deidentified electronic health record data from all hospitalizations at Stanford Hospital in 2011 (>5.4M structured data items from >19K patients) to build a system of association statistics for 811 clinical orders (e.g., labs, imaging, medications) and clinical outcomes. We manually reviewed the National Guideline Clearinghouse for the diagnoses of chest pain, gastrointestinal hemorrhage, and pneumonia, and compared system-generated clinical orders against guideline-referenced orders by receiver operating characteristic (ROC) analysis. Human-authored order sets provided real-world benchmarks. We compared predicted vs. actual outcomes by ROC analysis on a separate set of validation patients.
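The abstract does not specify which association statistic the system uses, so the following is only an illustrative sketch of one common choice (lift, the ratio of observed to expected co-occurrence) computed over per-encounter order sets. The order codes ("troponin", "ecg", etc.) and the helper name `association_scores` are hypothetical, not from the OrderRex system.

```python
from collections import Counter
from itertools import combinations

def association_scores(patient_orders):
    """Co-occurrence association statistics for clinical order pairs.

    patient_orders: a list of sets, each holding the order codes placed
    during one patient encounter. Returns {(a, b): lift}, where lift > 1
    means the pair co-occurs more often than expected by chance.
    """
    n = len(patient_orders)
    item_counts = Counter()   # how many encounters contain each order
    pair_counts = Counter()   # how many encounters contain each order pair
    for orders in patient_orders:
        item_counts.update(orders)
        pair_counts.update(combinations(sorted(orders), 2))
    scores = {}
    for (a, b), c_ab in pair_counts.items():
        p_ab = c_ab / n                 # joint frequency
        p_a = item_counts[a] / n        # marginal frequencies
        p_b = item_counts[b] / n
        scores[(a, b)] = p_ab / (p_a * p_b)  # lift
    return scores

# Hypothetical toy encounters: troponin and ECG orders tend to co-occur.
encounters = [
    {"troponin", "ecg", "cbc"},
    {"troponin", "ecg"},
    {"cbc", "bmp"},
    {"troponin", "ecg", "bmp"},
]
scores = association_scores(encounters)
```

On this toy data, the ("ecg", "troponin") pair scores above 1, so a recommender ranking candidate orders by such a statistic would surface ECG for a patient already ordered troponin, which is the "doctors who ordered this also ordered" behavior the title alludes to.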

Results
System-generated orders were overall consistent with guidelines (ROC AUC c-statistics 0.89, 0.95, 0.83) and improved upon both statistical prevalence (0.76, 0.74, 0.73) and pre-existing order sets (0.81, 0.77, 0.73) (P
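The c-statistics above summarize how well the system's order scores discriminate guideline-referenced orders from the rest. The abstract does not give the computation; a minimal sketch, assuming the standard rank-based (Mann-Whitney) formulation of the ROC AUC, is:

```python
def roc_auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation: the
    probability that a randomly chosen positive example scores higher
    than a randomly chosen negative one, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

Here `labels` would mark which candidate orders a guideline references (1) or not (0), and `scores` would be the recommender's association scores, so an AUC near 0.9 means guideline-referenced orders are almost always ranked above non-referenced ones.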

Conclusions
Automatically generated order suggestions can reproduce and even optimize manual constructs like order sets while remaining largely concordant with guidelines and avoiding inappropriate recommendations. This has even more important implications for prevalent cases where well-defined guidelines and order sets do not exist. The same methodology predicts clinical outcomes comparably to state-of-the-art prognosis models (e.g., APACHE II), pointing to opportunities to link suggestions to favorable outcomes.
