Open Access Proceedings Article

RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism

TL;DR
In this paper, a two-level neural attention model is proposed that detects influential past visits and significant clinical variables within those visits (e.g. key diagnoses), attending to the EHR data in reverse time order so that recent clinical visits are likely to receive higher attention.
Abstract
Accuracy and interpretability are two dominant features of successful predictive models. Typically, a choice must be made in favor of complex black-box models such as recurrent neural networks (RNN) for accuracy versus less accurate but more interpretable traditional models such as logistic regression. This tradeoff poses challenges in medicine, where both accuracy and interpretability are important. We addressed this challenge by developing the REverse Time AttentIoN model (RETAIN) for application to Electronic Health Records (EHR) data. RETAIN achieves high accuracy while remaining clinically interpretable and is based on a two-level neural attention model that detects influential past visits and significant clinical variables within those visits (e.g. key diagnoses). RETAIN mimics physician practice by attending to the EHR data in reverse time order so that recent clinical visits are likely to receive higher attention. RETAIN was tested on a large health system EHR dataset with 14 million visits completed by 263K patients over an 8-year period and demonstrated predictive accuracy and computational scalability comparable to state-of-the-art methods such as RNN, and ease of interpretability comparable to traditional models.
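The two-level attention described in the abstract can be sketched in a few lines of code: a visit-level attention produces one scalar weight per visit, a variable-level attention produces one gate per embedding coordinate, and both are computed by RNNs run over the visit sequence in reverse time order. The snippet below is a minimal PyTorch illustration of that idea, assuming multi-hot visit vectors as input; the class name `RetainSketch`, the layer sizes, and the GRU choice are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch of a RETAIN-style two-level attention layer (illustrative, not the reference code).
import torch
import torch.nn as nn

class RetainSketch(nn.Module):
    def __init__(self, num_codes, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Linear(num_codes, emb_dim, bias=False)          # v_i = W_emb * x_i
        self.rnn_alpha = nn.GRU(emb_dim, hidden_dim, batch_first=True)  # visit-level attention RNN
        self.rnn_beta = nn.GRU(emb_dim, hidden_dim, batch_first=True)   # variable-level attention RNN
        self.w_alpha = nn.Linear(hidden_dim, 1)                         # scalar score per visit
        self.w_beta = nn.Linear(hidden_dim, emb_dim)                    # per-coordinate gate per visit
        self.out = nn.Linear(emb_dim, 1)                                # binary outcome, e.g. heart failure

    def forward(self, x):
        # x: (batch, T, num_codes) multi-hot visit vectors, oldest visit first
        v = self.embed(x)                                # (batch, T, emb_dim)
        v_rev = torch.flip(v, dims=[1])                  # attend in reverse time order
        g, _ = self.rnn_alpha(v_rev)                     # (batch, T, hidden_dim)
        h, _ = self.rnn_beta(v_rev)                      # (batch, T, hidden_dim)
        alpha = torch.softmax(self.w_alpha(g), dim=1)    # visit-level weights, sum to 1 over visits
        beta = torch.tanh(self.w_beta(h))                # variable-level gates in [-1, 1]
        context = (alpha * beta * v_rev).sum(dim=1)      # c = sum_i alpha_i * (beta_i ⊙ v_i)
        return torch.sigmoid(self.out(context))          # predicted risk

# Usage example: risk scores for 2 patients with 5 visits over a vocabulary of 100 codes.
model = RetainSketch(num_codes=100)
x = torch.zeros(2, 5, 100)   # placeholder multi-hot input
risk = model(x)              # shape (2, 1)
```

Because the prediction is a weighted sum of the visit embeddings, the learned alpha and beta weights can be read back to attribute the predicted risk to specific visits and specific clinical codes, which is the source of RETAIN's interpretability.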



Citations
Proceedings Article

Dipole: Diagnosis Prediction in Healthcare via Attention-based Bidirectional Recurrent Neural Networks

TL;DR: Dipole employs bidirectional recurrent neural networks to retain the information of both past and future visits, and introduces three attention mechanisms to measure the relationships among different visits for prediction.
Posted Content

GRAM: Graph-based Attention Model for Healthcare Representation Learning

TL;DR: Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data.
Posted Content

Reinforcement Learning in Healthcare: A Survey

TL;DR: This survey provides an extensive overview of RL applications in a variety of healthcare domains, ranging from dynamic treatment regimes in chronic diseases and critical care to automated medical diagnosis and many other control or scheduling problems that have infiltrated every aspect of the healthcare system.
Journal Article

Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond

TL;DR: This study surveys the current progress of XAI, in particular its advances in healthcare applications, introduces solutions for XAI that leverage multi-modal and multi-centre data fusion, and validates them in two showcases based on real clinical scenarios.