Using aggregate patient data at the bedside via an on-demand consultation service
References
Regularization and variable selection via the elastic net
Regularization Paths for Generalized Linear Models via Coordinate Descent
Evidence based medicine: what it is and what it isn't.
An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies
Frequently Asked Questions (17)
Q2. What is the expected false positive rate for consultations?
Because the OMOP and EU-ADR reference sets were constructed to evaluate methods for treatment comparisons, the 17-20% expected false positive rate is applicable to consultations requesting a comparison of the hazard ratio of an outcome between treatments.
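This expected false positive rate is estimated by running the analysis pipeline against reference-set drug-outcome pairs known to be non-associated and counting how many nonetheless yield a spurious "signal". A minimal sketch of that final tallying step, assuming each result carries a 95% confidence interval for the hazard ratio (the field names are illustrative, not the authors' schema):

```python
def false_positive_rate(results):
    """Fraction of negative-control (non-associated) drug-outcome pairs
    whose 95% CI for the hazard ratio excludes 1 -- i.e. pairs the
    analysis would incorrectly flag as associated."""
    flagged = [r for r in results
               if r["ci_low"] > 1.0 or r["ci_high"] < 1.0]
    return len(flagged) / len(results)
```

For example, a set of negative controls in which 17-20% of CIs exclude 1 would reproduce the false positive rate quoted above.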
Q3. What is the net benefit of providing on-demand evidence to inform clinical decision making?
On-demand evidence generation to inform clinical decision making is an achievable goal, given the confluence of scalable technology for data analysis, a growing data science workforce, the training of increasingly data savvy clinicians, and the availability of large amounts of patient data from EMRs and claims 8,14.
Q4. How many of the first 100 consultations were returned within 48 hours?
As the service workflow matured, by the end of the study, 19 consultation reports were returned in 48 hours or less by reusing cohort definitions, drawing on experience with PICOT formulation of requests, and optimizing analysis code.
Q5. How many non-associated pairs were found in the OMOP community?
Of the 234 non-associated pairs from the OMOP community, there were 137 drug-outcome pairs for which a minimum of 100 patients exposed to the drug were present in Stanford data.
Q6. What were the common types of debriefs?
Debriefs where the requester used the results from the consultation report directly as the basis of a publication, poster, abstract, grant submission, or presentation were categorized as follow-up analyses.
Q7. How often was a positive blood culture obtained after a procalcitonin result?
By analyzing approximately 16,000 procalcitonin test results and 29,000 blood culture results, the authors calculated how often a positive blood culture was obtained within 48 hours of a procalcitonin result at a given cut-off value, how frequently antibiotic therapy was discontinued at different cut-offs of procalcitonin values, and how often antibiotics were restarted within 72 hours of discontinuation.
Q8. Why could some consultations not be completed?
17 consultations could not be completed due to missing data elements, available data sources having too few patients meeting the specified cohort criteria, inability to define a cohort, or requiring an unsupported study design.
Q9. What is the core tenet of the learning health system?
A decade later, the adoption of EMRs across the United States and internationally, the increasing ease of use of advanced statistical methods, and the ability to compute with large patient cohorts have enabled a core tenet of the learning health system: deriving on-demand evidence for diverse clinical scenarios from the EMR7,13.
Q10. What is the definition of evidence-based medicine?
Evidence-based medicine emphasizes the “conscientious, explicit and judicious use of current best evidence”1 when making treatment decisions2,3.
Q11. How did the concordance of consultation results compare with agreement among RCTs?
The authors found 68-74% concordance of consultation results across multiple datasets, a rate of agreement comparable both to the rate at which results from randomized controlled trials (RCTs) agree with each other (67-87%)69 and to the rate at which results derived from observational claims data agree with RCTs (60-80%)70.
Q12. How many consultations did the team complete in one week?
Based on the time required to respond to the first 100 consultations received (see Findings from the first 100 consultations), the authors believe a team composed of one full-time clinical informatics fellow, two full-time EMR data specialists, and a 20% part-time data scientist would be able to complete up to 20 such consultations in one week.
Q13. What were the frequent diagnoses in patients with mononeuritis multiplex?
The authors provided the requester with summaries of the most frequent diagnoses preceding and following mononeuritis multiplex diagnosis in 118 similarly aged patients, which included bacterial and viral infections as well as psychosomatic disorders.
Q14. What were the motivations for the consultation requests?
Consultation requests were driven by a variety of motivations, including evaluating patient management strategies for a given disease or patient presentation, identifying comparatively effective treatments for patients with typically understudied characteristics, and quantifying associations between diseases.
Q15. How many outcomes were concordant across datasets?
When comparing results obtained using different data sources for the same consultation request, 68% to 74% of results were concordant across datasets.
Q16. Why was incorporating EMR data into clinical decision making previously infeasible?
At the time, incorporating data from EMRs into clinical decision making required significant manual effort, rendering it infeasible for use in routine patient care.
Q17. What were the reasons for the longer turnaround times?
Longer turnaround times occurred when additional data elements were needed, when there were delays in scheduling conversations with the requester, or when matching large cohorts required substantial time.
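The matching step can dominate runtime because a naive match compares every treated patient against every candidate control. As an illustration only (not the authors' pipeline), a greedy 1:1 nearest-neighbor match on precomputed propensity scores with a caliper can be sketched as:

```python
import bisect

def greedy_match(treated, controls, caliper=0.05):
    """Match each treated propensity score to the nearest unused control
    score within `caliper`, without replacement. Returns a list of
    (treated_score, control_score) pairs; unmatched treated units are
    dropped. Sorting the control pool lets each treated unit inspect
    only its two nearest neighbors instead of scanning all controls."""
    pool = sorted(controls)
    pairs = []
    for t in sorted(treated):
        i = bisect.bisect_left(pool, t)
        best = None
        # candidate neighbors: just below and just above the insertion point
        for j in (i - 1, i):
            if 0 <= j < len(pool) and abs(pool[j] - t) <= caliper:
                if best is None or abs(pool[j] - t) < abs(pool[best] - t):
                    best = j
        if best is not None:
            pairs.append((t, pool.pop(best)))  # remove matched control
    return pairs
```

Even with this sorted-pool shortcut, matching millions of records (and the covariate modeling that produces the scores in the first place) is a substantial computation, consistent with the longer turnaround times reported above.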