Open Access · Journal Article · DOI

Recurrent neural networks for classifying relations in clinical notes.

TLDR
The first models based on recurrent neural networks (more specifically Long Short-Term Memory - LSTM) for classifying relations from clinical notes show comparable performance to previously published systems while requiring no manual feature engineering.
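The pipeline the TL;DR describes — map words to embeddings, run a recurrent network over the sentence, and classify the relation from the final state, with no hand-built features — can be sketched as follows. This is a minimal illustration with a vanilla RNN cell and toy dimensions (the paper itself uses LSTM units); all sizes, weights, and the label count are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10-word vocabulary, 5-dim word embeddings,
# an 8-unit vanilla RNN, and 3 relation labels.
vocab_size, embed_dim, hidden_dim, n_labels = 10, 5, 8, 3
E = rng.normal(size=(vocab_size, embed_dim))         # word embeddings
W_xh = rng.normal(size=(embed_dim, hidden_dim)) * 0.1
W_hh = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
W_hy = rng.normal(size=(hidden_dim, n_labels)) * 0.1

def classify_relation(token_ids):
    """Run the RNN over the token sequence, classify from the final state."""
    h = np.zeros(hidden_dim)
    for t in token_ids:
        h = np.tanh(E[t] @ W_xh + h @ W_hh)   # recurrent update
    logits = h @ W_hy
    probs = np.exp(logits - logits.max())     # softmax over relation labels
    return probs / probs.sum()

probs = classify_relation([1, 4, 2, 7])
```

The only inputs are token identifiers; everything else is learned parameters, which is what "no manual feature engineering" means in practice.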
About
This article was published in the Journal of Biomedical Informatics on 2017-08-01 and is currently open access. It has received 137 citations to date. The article focuses on the topics: Word embedding & Feature engineering.


Citations
Journal Article · DOI

Bidirectional LSTM with attention mechanism and convolutional layer for text classification

TL;DR: A unified architecture combining a bidirectional LSTM (BiLSTM), an attention mechanism, and a convolutional layer is proposed; it outperforms other state-of-the-art text classification methods in classification accuracy.
Journal Article · DOI

Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review.

TL;DR: A systematic review of deep learning models for electronic health record (EHR) data is conducted, and various deep learning architectures for analyzing different data sources and their target applications are illustrated.
Posted Content

Graph Convolutional Networks for Text Classification

TL;DR: A Text Graph Convolutional Network (Text GCN) is proposed for text classification; it jointly learns embeddings for both words and documents, supervised by the known class labels of documents.
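The core GCN operation the TL;DR refers to propagates node features through a symmetrically normalized adjacency matrix, A_hat = D^{-1/2}(A + I)D^{-1/2}. A minimal sketch on a hypothetical 4-node toy graph (the real Text GCN graph has word and document nodes):

```python
import numpy as np

# Toy symmetric adjacency for 4 nodes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
A_self = A + np.eye(4)                      # add self-loops
d = A_self.sum(axis=1)                      # node degrees
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_self @ D_inv_sqrt    # normalized propagation matrix

# One GCN layer: H' = ReLU(A_hat @ H @ W), with one-hot node features.
H = np.eye(4)
W = np.full((4, 2), 0.5)                    # toy weight matrix
H_next = np.maximum(A_hat @ H @ W, 0.0)
```

Each layer mixes every node's features with its neighbors', so after k layers a node's embedding reflects its k-hop neighborhood.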
Journal Article · DOI

Deep Sentiment Classification and Topic Discovery on Novel Coronavirus or COVID-19 Online Discussions: NLP Using LSTM Recurrent Neural Network Approach

TL;DR: In this article, the authors used automated extraction of COVID-19-related discussions from social media and a natural language processing (NLP) method based on topic modeling to uncover various issues related to the disease from public opinions.
Journal Article · DOI

A clinical text classification paradigm using weak supervision and deep representation.

TL;DR: In this article, a clinical text classification paradigm using weak supervision and deep representation was proposed to reduce the human effort of labeled-training-data creation and feature engineering when applying machine learning to clinical text classification.
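The weak-supervision idea in that TL;DR is to replace hand-labeled training data with noisy labels produced by simple rules, which then train a classifier over learned representations. A minimal sketch; the keyword rules, labels, and example notes below are hypothetical illustrations, not the paper's actual rules.

```python
# Hypothetical keyword rules mapping a trigger phrase to a noisy label.
RULES = {
    "smoker": "smoking_status",
    "denies pain": "no_pain",
    "metformin": "diabetes",
}

def weak_label(note):
    """Return the first rule-based label that fires, else None (abstain)."""
    text = note.lower()
    for keyword, label in RULES.items():
        if keyword in text:
            return label
    return None

labels = [weak_label(n) for n in [
    "Patient is a current smoker.",
    "Started on metformin 500 mg.",
    "No acute findings.",
]]
# labels -> ["smoking_status", "diabetes", None]
```

Notes where no rule fires are left unlabeled; the downstream model is trained only on the (noisy) labeled subset.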
References
Journal Article · DOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
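The "constant error carousel" in that TL;DR is the additively updated cell state: gates decide what to write and forget, so gradients can flow through the cell without repeated squashing. A minimal single-step LSTM cell in numpy, with toy sizes (3-dim input, 2-unit hidden state) chosen for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step: the cell state c is the 'constant error carousel',
    updated only by gated addition, which preserves gradient flow."""
    z = W @ np.concatenate([x, h]) + b   # all four gate pre-activations
    H = h.size
    i = sigmoid(z[0:H])                  # input gate
    f = sigmoid(z[H:2*H])                # forget gate
    o = sigmoid(z[2*H:3*H])              # output gate
    g = np.tanh(z[3*H:4*H])              # candidate cell update
    c_new = f * c + i * g                # additive cell-state update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Hypothetical sizes: 3-dim input, 2-unit hidden state (so W is 8 x 5).
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 5)) * 0.1
b = np.zeros(8)
h, c = np.zeros(2), np.zeros(2)
for t in range(5):
    h, c = lstm_step(rng.normal(size=3), h, c, W, b)
```

The key line is `c_new = f * c + i * g`: because the cell state changes by gated addition rather than repeated matrix multiplication, error signals can bridge long time lags.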
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
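The dropout mechanism that TL;DR describes is simple to state: during training, each unit is zeroed with probability p and the survivors are rescaled so the expected activation is unchanged (the "inverted dropout" formulation). A minimal sketch:

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each unit with probability p during training
    and rescale survivors by 1/(1-p) so the expected value is unchanged."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10000)
y = dropout(x, p=0.5, rng=rng)
# Roughly half the units are zeroed; the mean stays near 1.0.
```

At test time (`train=False`) the input passes through untouched, so no rescaling is needed at inference.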
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, and shows that learning good vector representations for millions of phrases is possible and describes a simple alternative to the hierarchical softmax called negative sampling.
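The negative-sampling objective mentioned in that TL;DR maximizes the score of an observed (center, context) pair while minimizing the score of k randomly drawn "negative" words. A toy sketch of the per-pair loss; the vocabulary size, dimensions, and vectors are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy setup: vocabulary of 50 words, 8-dim vectors, 5 negatives.
rng = np.random.default_rng(0)
V, D, k = 50, 8, 5
W_in = rng.normal(size=(V, D)) * 0.1    # center-word vectors
W_out = rng.normal(size=(V, D)) * 0.1   # context-word vectors

def neg_sampling_loss(center, context, negatives):
    """-log sigma(u_c . v) - sum over negatives of log sigma(-u_n . v)."""
    v, u = W_in[center], W_out[context]
    pos = -np.log(sigmoid(u @ v))                        # true pair term
    neg = -np.log(sigmoid(-W_out[negatives] @ v)).sum()  # negative terms
    return pos + neg

loss = neg_sampling_loss(center=3, context=7,
                         negatives=rng.integers(0, V, size=k))
```

Each update touches only k+1 output vectors instead of the full vocabulary, which is what makes this a cheap alternative to the hierarchical softmax.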
Posted Content

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: In this paper, the authors propose a soft-search mechanism that finds the parts of a source sentence relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
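The "soft search" that TL;DR describes is additive attention: score each encoder state against the current decoder state, normalize the scores with a softmax, and take the weighted average as a context vector. A numpy sketch with toy dimensions (all sizes and weights are hypothetical):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy sizes: 6 source positions, 4-dim encoder states, 3-dim decoder state.
rng = np.random.default_rng(0)
T, enc_dim, dec_dim, att_dim = 6, 4, 3, 5
H = rng.normal(size=(T, enc_dim))        # encoder states, one per source word
s = rng.normal(size=dec_dim)             # current decoder state
W_h = rng.normal(size=(enc_dim, att_dim))
W_s = rng.normal(size=(dec_dim, att_dim))
v = rng.normal(size=att_dim)

scores = np.tanh(H @ W_h + s @ W_s) @ v  # e_t = v . tanh(W_h h_t + W_s s)
alpha = softmax(scores)                  # soft alignment weights over source
context = alpha @ H                      # attention-weighted encoder summary
```

Because `alpha` is a probability distribution rather than a hard selection, the whole computation stays differentiable and the alignment is learned jointly with translation.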
Journal Article · DOI

Learning long-term dependencies with gradient descent is difficult

TL;DR: This work shows why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on information for long periods.
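The difficulty that TL;DR describes comes from the gradient through a T-step recurrence being a product of T Jacobians: when their magnitude is below 1 the product vanishes, and above 1 it explodes. A one-dimensional toy demonstration (the linear "RNN" h_t = w * h_{t-1} is an illustrative simplification):

```python
# Gradient of h_T with respect to h_0 for the 1-D recurrence h_t = w * h_{t-1}
# is simply w**T, so its magnitude shrinks or grows geometrically in T.
def gradient_magnitude(w, steps):
    return abs(w) ** steps

vanished = gradient_magnitude(0.9, 100)   # ~2.7e-5: signal is lost
exploded = gradient_magnitude(1.1, 100)   # ~1.4e4: updates blow up
```

This geometric decay or growth is the trade-off the paper exposes, and it is exactly what the LSTM's additive cell-state update was designed to avoid.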