Open Access · Posted Content

A C-LSTM Neural Network for Text Classification

TLDR
C-LSTM is a novel, unified model for sentence representation and text classification that outperforms both CNN and LSTM, achieving excellent performance on sentiment classification and question classification.
Abstract
Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural language. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes a CNN to extract a sequence of higher-level phrase representations, which are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is thus able to capture both local features of phrases and global, temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that C-LSTM outperforms both CNN and LSTM and achieves excellent performance on these tasks.
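The CNN-into-LSTM pipeline the abstract describes maps onto a few lines of modern deep-learning code. Below is a minimal sketch in Keras; the vocabulary size, sequence length, filter count, window size, and number of output classes are all assumed illustrative values, not the authors' reported configuration.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, SEQ_LEN = 20000, 300, 60  # assumed sizes

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # A window of 3 words per filter: each output position is a
    # higher-level phrase (trigram) feature, as the abstract describes.
    layers.Conv1D(filters=150, kernel_size=3, activation="relu"),
    # The LSTM consumes the phrase-feature sequence; its final hidden
    # state serves as the global sentence representation.
    layers.LSTM(150),
    layers.Dense(2, activation="softmax"),  # e.g. positive/negative
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

With this layering, the convolution captures local phrase features while the LSTM on top also captures the order of those phrases across the sentence, which is exactly the division of labor the abstract claims.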


Citations
Journal Article · DOI

Global-Local Mutual Attention Model for Text Classification

TL;DR: Proposes the Global-Local Mutual Attention (GLMA) model for text classification, which introduces a mutual attention mechanism for mutual learning between local semantic features and global long-term dependencies.
Posted Content

Analyzing COVID-19 on Online Social Media: Trends, Sentiments and Emotions

TL;DR: This study provides a computational approach to unveiling public emotions and concerns about the pandemic in real time, which could help policy-makers better understand people's needs and thus make better-informed policy.
Proceedings Article · DOI

Sentiment Analysis of Restaurant Reviews using Combined CNN-LSTM

TL;DR: A combined CNN-LSTM architecture is applied to the dataset, achieving an accuracy of 94.22%.
Journal Article · DOI

Improving the accuracy using pre-trained word embeddings on deep neural networks for Turkish text classification

TL;DR: The GRU and LSTM methods were found to be more successful than the other deep neural network models used in this study; the study's datasets and word vectors will be shared in order to contribute to the Turkish-language literature in this field.
Proceedings Article · DOI

Survey of Sentiment Analysis Using Deep Learning Techniques

TL;DR: A detailed review of deep learning techniques used in sentiment analysis at the sentence and aspect/target levels is presented, and the advantages and drawbacks of the methods are discussed along with their performance metrics.
References
Journal Article · DOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
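For reference, the LSTM cell in its common modern form can be written as below. This is a standard formulation, not text from this page, and it includes the forget gate that later work added to the original 1997 design.

```latex
% Common modern LSTM cell; \sigma is the logistic sigmoid and \odot
% the elementwise product. The forget gate f_t postdates the 1997 paper.
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) &&\text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) &&\text{(forget gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) &&\text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) &&\text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &&\text{(error carousel)}\\
h_t &= o_t \odot \tanh(c_t) &&\text{(hidden state)}
\end{aligned}
```

The additive update of c_t is the "constant error carousel" the TL;DR mentions: gradients can flow through it across many time steps without vanishing.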
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, and shows that learning good vector representations for millions of phrases is possible and describes a simple alternative to the hierarchical softmax called negative sampling.
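Both ideas in this TL;DR have compact forms in the cited paper, written here in its standard notation (v and v' are input and output word vectors, k the number of negative samples, P_n the noise distribution):

```latex
% Phrase finding: bigrams scoring above a threshold are joined into
% phrases; \delta discounts scores of very infrequent word pairs.
\mathrm{score}(w_i, w_j) = \frac{\mathrm{count}(w_i w_j) - \delta}
                                {\mathrm{count}(w_i)\,\mathrm{count}(w_j)}

% Negative sampling: for an observed pair (w_I, w_O), maximize
% agreement with w_O and disagreement with k sampled noise words.
\log \sigma\!\left({v'_{w_O}}^{\top} v_{w_I}\right)
  + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}
    \left[\log \sigma\!\left(-{v'_{w_i}}^{\top} v_{w_I}\right)\right]
```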
Proceedings Article · DOI

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
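Concretely, "jointly trained to maximize the conditional probability" means one likelihood is optimized over the parameters of both networks together; in the usual notation (assumed here, not quoted from this page):

```latex
% Encoder and decoder share one objective: the joint parameters \theta
% are fit to maximize the average conditional log-likelihood of each
% target sequence y_n given its source sequence x_n.
\max_{\theta}\; \frac{1}{N} \sum_{n=1}^{N}
  \log p_{\theta}\!\left(\mathbf{y}_n \mid \mathbf{x}_n\right)
```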
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Restricted Boltzmann machines were originally developed with binary stochastic hidden units; replacing these with noisy rectified linear units is shown to learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
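The rectified linear unit referenced here, and the noisy variant used when sampling hidden states, take simple forms; the noisy version below follows my reading of Nair and Hinton's formulation and should be checked against the paper.

```latex
% Rectified linear unit, and the noisy variant (NReLU) for sampling
% RBM hidden states: additive Gaussian noise whose variance is the
% logistic sigmoid of the input.
\mathrm{ReLU}(x) = \max(0, x), \qquad
\mathrm{NReLU}(x) = \max\!\left(0,\; x + \mathcal{N}(0, \sigma(x))\right)
```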
Posted Content

Sequence to Sequence Learning with Neural Networks

TL;DR: This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence which made the optimization problem easier.
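The source-reversal trick is easy to illustrate; the sentences below are hypothetical examples, and only the token order changes:

```python
# Toy illustration of the source-reversal trick: feeding the source
# tokens in reverse brings early source words close to the early
# target words they align with, shortening those dependencies.
src = ["je", "suis", "content"]      # hypothetical source sentence
tgt = ["i", "am", "happy"]           # hypothetical target sentence
src_reversed = list(reversed(src))   # what the encoder actually sees
print(src_reversed)                  # ['content', 'suis', 'je']
```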