
Mohammad Taha Bahadori

Researcher at Amazon.com

Publications -  38
Citations -  4444

Mohammad Taha Bahadori is an academic researcher at Amazon.com. The author has contributed to research topics including feature learning and deep learning, has an h-index of 19, and has co-authored 38 publications receiving 3,646 citations. Previous affiliations of Mohammad Taha Bahadori include the University of Southern California and the Georgia Institute of Technology.

Papers

Doctor AI: Predicting Clinical Events via Recurrent Neural Networks

TL;DR: Doctor AI is a generic predictive model that covers observed medical conditions and medication uses; it is built on recurrent neural networks (RNNs) and applied to longitudinal, time-stamped EHR data from 260k patients over 8 years.
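The summary above describes predicting the codes of a patient's next visit from a sequence of past visits with an RNN. A minimal numpy sketch of that idea follows; the sizes, parameter names, and the plain-RNN cell are illustrative assumptions (the paper itself uses GRUs over much larger code vocabularies):

```python
import numpy as np

rng = np.random.default_rng(0)
n_codes, hidden = 20, 16  # hypothetical sizes; real EHR vocabularies have thousands of codes

# Randomly initialized parameters of a plain RNN cell (stand-in for a trained GRU)
Wx = rng.normal(0, 0.1, (hidden, n_codes))  # input-to-hidden
Wh = rng.normal(0, 0.1, (hidden, hidden))   # hidden-to-hidden
Wo = rng.normal(0, 0.1, (n_codes, hidden))  # hidden-to-output

def predict_next_visit(visits):
    """visits: list of multi-hot code vectors, one per time-stamped visit.
    Returns a probability distribution over codes for the next visit."""
    h = np.zeros(hidden)
    for x in visits:
        h = np.tanh(Wx @ x + Wh @ h)  # recurrent state update over the visit sequence
    logits = Wo @ h
    p = np.exp(logits - logits.max())  # softmax over the code vocabulary
    return p / p.sum()

# Toy patient history: two visits, each with a few diagnosis codes set
v1 = np.zeros(n_codes); v1[[2, 5]] = 1.0
v2 = np.zeros(n_codes); v2[[5, 9]] = 1.0
probs = predict_next_visit([v1, v2])
```

With trained weights, the highest-probability entries of `probs` would be the predicted diagnoses and medications for the following visit.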
Posted Content

RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism

TL;DR: The REverse Time AttentIoN model (RETAIN) is developed for application to Electronic Health Records (EHR) data and achieves high accuracy while remaining clinically interpretable. It is based on a two-level neural attention model that detects influential past visits and significant clinical variables within those visits.
Proceedings Article

RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism

TL;DR: In this paper, a two-level neural attention model is proposed to detect influential past visits and significant clinical variables within those visits (e.g. key diagnoses) in reverse time order so that recent clinical visits are likely to receive higher attention.
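The two RETAIN summaries above describe a two-level attention mechanism applied in reverse time order: a scalar weight per visit (which visits matter) and a per-feature gate within each visit (which variables matter). A toy numpy sketch, with randomly initialized weights standing in for the trained RNN-produced attention parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # hypothetical visit-embedding dimension

def retain_context(visit_embs, Wa, Wb):
    """Two-level attention over visits, processed in reverse time order.
    alpha: scalar attention per visit; beta: per-feature attention per visit."""
    rev = visit_embs[::-1]               # most recent visit first
    e = rev @ Wa                         # one scalar score per visit
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                 # softmax: visit-level weights
    beta = np.tanh(rev @ Wb.T)           # feature-level (variable) attention
    ctx = (alpha[:, None] * beta * rev).sum(axis=0)  # weighted context vector
    return ctx, alpha

visits = rng.normal(size=(5, d))  # embeddings of 5 past visits, oldest first
Wa = rng.normal(size=d)
Wb = rng.normal(size=(d, d))
ctx, alpha = retain_context(visits, Wa, Wb)
```

Interpretability comes from inspecting `alpha` (which past visits received attention) and `beta` (which clinical variables within them); in the full model both are generated by RNNs rather than fixed matrices.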
Proceedings Article

GRAM: Graph-based Attention Model for Healthcare Representation Learning

TL;DR: In this article, a Graph-based Attention Model (GRAM) is proposed to supplement EHR data with the hierarchical information inherent to medical ontologies: each medical concept is represented as a combination of its ancestors in the ontology via an attention mechanism that adapts to the data volume and the ontology structure.
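The GRAM summary describes representing a medical code as an attention-weighted combination of its own embedding and its ontology ancestors' embeddings. A minimal sketch, assuming a toy three-node ontology path and a single attention vector in place of GRAM's small MLP scorer:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6  # hypothetical embedding dimension

# Toy ontology: leaf code "c" with ancestors "a1", "a2" on its path to the root
emb = {"c": rng.normal(size=d), "a1": rng.normal(size=d), "a2": rng.normal(size=d)}

def gram_representation(leaf, ancestors, u):
    """Represent a leaf concept as an attention-weighted combination of
    its own embedding and its ontology ancestors' embeddings."""
    nodes = [leaf] + ancestors
    # Score each (leaf, node) pair; GRAM uses a small MLP, here a dot product
    scores = np.array([u @ np.concatenate([emb[leaf], emb[n]]) for n in nodes])
    w = np.exp(scores - scores.max())
    w /= w.sum()                                   # softmax attention weights
    g = sum(wi * emb[n] for wi, n in zip(w, nodes))  # weighted combination
    return g, w

u = rng.normal(size=2 * d)  # hypothetical attention parameter vector
g, w = gram_representation("c", ["a1", "a2"], u)
```

Rare codes with little data can thus borrow statistical strength from well-observed ancestors, which is the adaptation to data volume the summary mentions.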
Proceedings Article

Multi-layer Representation Learning for Medical Concepts

TL;DR: This work proposes Med2Vec, which not only learns representations for both medical codes and visits from large EHR datasets with over a million visits, but also yields interpretable learned representations, confirmed positively by clinical experts.
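The Med2Vec summary describes learning representations at two levels: medical codes are embedded, then aggregated through a nonlinear layer into a visit representation. A toy numpy sketch of that forward pass, with hypothetical sizes and randomly initialized weights (the real model trains both levels with neighbor-visit prediction objectives):

```python
import numpy as np

rng = np.random.default_rng(3)
n_codes, d = 12, 5                     # hypothetical vocabulary and embedding sizes
Wc = rng.normal(0, 0.1, (d, n_codes))  # code-level embedding matrix
Wv = rng.normal(0, 0.1, (d, d))        # visit-level transform

def visit_representation(multi_hot):
    """Two-level representation: embed and aggregate the visit's codes,
    then apply a nonlinear visit-level layer."""
    code_level = np.maximum(0, Wc @ multi_hot)  # ReLU aggregation of code embeddings
    return np.maximum(0, Wv @ code_level)       # ReLU visit-level representation

# A visit containing three medical codes
x = np.zeros(n_codes); x[[1, 4, 7]] = 1.0
v = visit_representation(x)
```

The non-negativity from the ReLU layers is one reason the learned dimensions can be inspected and labeled by clinical experts, as the summary notes.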