Open Access Posted Content

A C-LSTM Neural Network for Text Classification

TL;DR: C-LSTM is a novel, unified model for sentence representation and text classification that outperforms both CNN and LSTM, achieving excellent performance on sentiment classification and question classification tasks.
Abstract
Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two mainstream architectures for such modeling tasks, and they adopt entirely different ways of understanding natural language. In this work, we combine the strengths of both architectures and propose a novel, unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes a CNN to extract a sequence of higher-level phrase representations, which are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is thus able to capture both local features of phrases and global, temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that C-LSTM outperforms both CNN and LSTM and achieves excellent performance on these tasks.
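As a concrete illustration of the pipeline the abstract describes, the following is a minimal sketch in PyTorch (the framework choice, layer sizes, and hyperparameters are illustrative assumptions, not the authors' exact configuration): a 1-D convolution over word embeddings yields a sequence of phrase-level features, an LSTM consumes that sequence, and the LSTM's final hidden state serves as the sentence representation for classification.

    import torch
    import torch.nn as nn

    class CLSTM(nn.Module):
        # Sketch of the C-LSTM idea: CNN phrase features -> LSTM -> classifier.
        # All sizes below are illustrative, not the paper's hyperparameters.
        def __init__(self, vocab_size, num_classes, embed_dim=300,
                     num_filters=150, kernel_size=3, hidden_dim=150):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Convolving over word positions yields one higher-level
            # phrase representation per window.
            self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size)
            # The LSTM encodes global, order-sensitive sentence semantics
            # from the phrase sequence.
            self.lstm = nn.LSTM(num_filters, hidden_dim, batch_first=True)
            self.fc = nn.Linear(hidden_dim, num_classes)

        def forward(self, token_ids):                     # (batch, seq_len)
            x = self.embed(token_ids)                     # (batch, seq_len, embed_dim)
            x = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, filters, steps)
            _, (h_n, _) = self.lstm(x.transpose(1, 2))    # h_n: (1, batch, hidden)
            return self.fc(h_n[-1])                       # class logits

The division of labor mirrors the abstract: the convolution captures local phrase features, while the LSTM models the global, temporal structure of the sentence.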


Citations
Journal Article

Korean Erroneous Sentence Classification With Integrated Eojeol Embedding

TL;DR: A novel Integrated Eojeol (Korean syntactic word unit separated by spaces) Embedding approach is proposed to reduce the effect that poorly analyzed morphemes have on sentence classification, together with two noise-insertion methods that further improve classification performance.
Proceedings Article

Semantic relatedness based re-ranker for text spotting

TL;DR: In this article, a neural approach is proposed to learn semantic relatedness for text spotting in the wild, where text in an image (e.g., a street sign, advertisement, or bus destination) must be identified and recognized.
Book Chapter

Generating Reliable Process Event Streams and Time Series Data Based on Neural Networks

TL;DR: The GENLOG approach employs data resampling and enables the user to select different parts of the log data to orchestrate the training of a recurrent neural network for stream generation.
Journal Article

InPHYNet: Leveraging attention-based multitask recurrent networks for multi-label physics text classification

TL;DR: A novel method is introduced for the multi-label classification of paragraphs drawn from the physics curriculum for grades 6 through 12 of the Central Board of Secondary Education (CBSE), India.
Proceedings Article

Detecting Serendipitous Drug Usage in Social Media with Deep Neural Network Models

TL;DR: In the presence of an extremely imbalanced dataset and limited instances of serendipitous drug usage, deep neural network models did not outperform other machine learning models with n-gram and context features, but they could utilize word embeddings more effectively in feature construction.
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
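The "constant error carousel" refers to the additive cell-state update, which lets error signals flow across long time lags without vanishing. A minimal single-step sketch in NumPy (the gate layout follows the standard modern LSTM formulation with a forget gate, which postdates the 1997 original; all names are illustrative):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        # W: (4H, input_dim), U: (4H, H), b: (4H,) hold the stacked
        # parameters for the input (i), forget (f), output (o) gates
        # and the candidate update (g).
        z = W @ x + U @ h_prev + b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        # Constant error carousel: the cell state is updated additively,
        # so gradients can be carried across long time lags.
        c = f * c_prev + i * g
        h = o * np.tanh(c)
        return h, c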
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
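The paper's phrase-finding method scores each bigram by how much more often its words co-occur than their unigram counts alone would predict, then merges high-scoring bigrams into single tokens. A sketch of that scoring rule (the function name and default values are mine; the corpus-size scaling follows the word2vec reference implementation):

    from collections import Counter

    def find_phrases(sentences, delta=5.0, threshold=100.0):
        # score(a, b) = (count(ab) - delta) / (count(a) * count(b)),
        # with delta discounting very rare word pairs.
        unigrams = Counter(w for s in sentences for w in s)
        bigrams = Counter(p for s in sentences for p in zip(s, s[1:]))
        total = sum(unigrams.values())
        return {
            (a, b)
            for (a, b), n in bigrams.items()
            if (n - delta) * total / (unigrams[a] * unigrams[b]) > threshold
        }

Bigrams such as ("new", "york") that pass the threshold are then rewritten as a single token (e.g. "new_york") before training, so the model learns one vector per phrase.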
Proceedings Article

Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation

TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Restricted Boltzmann machines were originally developed with binary stochastic hidden units; replacing these with noisy rectified linear units yields features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
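The substitution at the heart of the paper replaces the binary stochastic hidden units of an RBM with noisy rectified linear units, which the authors approximate by adding Gaussian noise with variance sigmoid(x) before rectifying. A one-function sketch (the function name is mine):

    import numpy as np

    def nrelu_sample(x, rng=None):
        # Noisy rectified linear unit: max(0, x + N(0, sigmoid(x))),
        # the approximation described by Nair & Hinton (2010).
        rng = rng or np.random.default_rng()
        noise_var = 1.0 / (1.0 + np.exp(-x))
        return np.maximum(0.0, x + rng.normal(0.0, np.sqrt(noise_var)))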
Posted Content

Sequence to Sequence Learning with Neural Networks

TL;DR: This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about sequence structure, and finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and target sentences that made the optimization problem easier.
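The reversal trick is a pure preprocessing step: only the source sentences are reversed, which places the first source words next to the first target words and shortens many source-to-target dependencies. A trivial illustration (the token lists are hypothetical):

    def reverse_source(pairs):
        # Reverse each source sentence; targets are left untouched,
        # as in the reversal trick the paper describes.
        return [(list(reversed(src)), tgt) for src, tgt in pairs]

    pairs = [(["je", "suis", "etudiant"], ["I", "am", "a", "student"])]
    print(reverse_source(pairs))
    # [(['etudiant', 'suis', 'je'], ['I', 'am', 'a', 'student'])]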