Open Access · Posted Content

A C-LSTM Neural Network for Text Classification

TLDR
C-LSTM is a novel and unified model for sentence representation and text classification that outperforms both CNN and LSTM and can achieve excellent performance on these tasks.
Abstract
Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are two mainstream architectures for such modeling tasks, which adopt fundamentally different ways of understanding natural language. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes a CNN to extract a sequence of higher-level phrase representations, which are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is thus able to capture both the local features of phrases and the global and temporal semantics of sentences. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that C-LSTM outperforms both CNN and LSTM and achieves excellent performance on these tasks.
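As a reading aid, here is a minimal PyTorch sketch of the architecture the abstract describes: a 1-D convolution extracts phrase-level features, their sequence feeds an LSTM, and the final hidden state serves as the sentence representation for classification. All layer sizes and hyperparameters below are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal C-LSTM sketch: CNN over word embeddings -> LSTM -> classifier.
# Sizes are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_filters=100,
                 kernel_size=3, hidden_dim=100, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Convolution over the time axis yields higher-level phrase features.
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size)
        self.lstm = nn.LSTM(num_filters, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embed(tokens)                       # (batch, seq_len, embed_dim)
        x = torch.relu(self.conv(x.transpose(1, 2))) # (batch, filters, seq_len-k+1)
        x = x.transpose(1, 2)                        # back to (batch, time, filters)
        _, (h_n, _) = self.lstm(x)                   # h_n: (1, batch, hidden_dim)
        return self.fc(h_n[-1])                      # class logits

logits = CLSTM(vocab_size=10000)(torch.randint(0, 10000, (4, 20)))
```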


Citations
Journal Article (DOI)

CTRAN: CNN-Transformer-based Network for Natural Language Understanding

TL;DR: The authors propose CTRAN, a novel encoder-decoder CNN-Transformer-based architecture for intent detection and slot filling, which uses a zero diagonal mask to align output tags with input tokens.
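One way to picture the zero diagonal mask: a self-attention mask whose diagonal entries are zeroed so that each position's tag is predicted from the surrounding tokens rather than from itself. This is an assumed illustration of the idea, not the CTRAN authors' implementation.

```python
# Illustrative "zero diagonal" self-attention mask: each position is
# prevented from attending to itself, so its output tag is informed by
# the other input tokens. An assumed reading, not CTRAN's exact code.
import torch

seq_len = 5
mask = torch.ones(seq_len, seq_len)
mask.fill_diagonal_(0)   # zeros on the diagonal block self-attention
# mask[i, j] == 1 means position i may attend to token j (with i != j).
```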
Journal Article (DOI)

Automated Compliance Blueprint Optimization with Artificial Intelligence

TL;DR: The practicality of automatically analyzing regulatory standards with Artificial Intelligence (AI) techniques is demonstrated, and early results on identifying the mapping between technical specifications (techspecs) and regulatory controls are presented.
Proceedings Article (DOI)

A Dual-channel Text Classification Model based on an Interactive Attention Mechanism

Wei-lun Han, +1 more
TL;DR: A dual-channel text classification model based on an interactive attention mechanism, which uses skip-gram to embed words into dense low-dimensional vectors and obtain the text embedding matrix; the classification performance of this hybrid model is improved.
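A brief sketch of the skip-gram embedding step using gensim (sg=1 selects skip-gram); the toy corpus and vector size are placeholders, not the paper's setup.

```python
# Skip-gram word embeddings with gensim; corpus and sizes are toy values.
from gensim.models import Word2Vec

corpus = [["the", "movie", "was", "great"],
          ["the", "plot", "was", "weak"]]
model = Word2Vec(corpus, vector_size=100, window=5, sg=1, min_count=1)
embedding_matrix = model.wv.vectors   # rows are dense word vectors
```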
Proceedings Article (DOI)

A Deep Learning Approach for Minimizing False Negatives in Predicting Receipt Emails

TL;DR: In this article, a deep learning algorithm, Long Short-Term Memory (LSTM), is implemented and its results are compared with a previous implementation; the results show that LSTM is more effective in terms of accuracy than the previous ML approach.
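A hedged sketch of the general approach: a Keras LSTM classifier where a larger class weight on the positive (receipt) class penalizes false negatives more heavily, matching the paper's stated goal. The architecture, data, and weights here are illustrative assumptions, not the paper's.

```python
# LSTM email classifier sketch; weighting class 1 more strongly trades
# some precision for fewer false negatives. Toy data and assumed sizes.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=5000, output_dim=64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall()])

x = np.random.randint(0, 5000, size=(32, 50))   # toy token sequences
y = np.random.randint(0, 2, size=(32,))         # toy labels
model.fit(x, y, epochs=1, class_weight={0: 1.0, 1: 3.0}, verbose=0)
```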
Journal Article (DOI)

Large-scale Text Multiclass Classification Using Spark ML Packages

TL;DR: Based on Spark ML, logistic regression is used to classify preprocessed data from the standard 20 Newsgroups dataset, and 5-fold cross-validation experiments show that the results are reliable and the procedure design is feasible.
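A sketch of the described pipeline in PySpark: TF-IDF features, logistic regression, and 5-fold cross-validation. Column names and the parameter grid are assumptions, not the paper's exact configuration.

```python
# Spark ML pipeline sketch: tokenize -> TF-IDF -> logistic regression,
# tuned with 5-fold cross-validation. Columns/params are illustrative.
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.feature import HashingTF, IDF, Tokenizer
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="tf")
idf = IDF(inputCol="tf", outputCol="features")
lr = LogisticRegression(labelCol="label", featuresCol="features")
pipeline = Pipeline(stages=[tokenizer, tf, idf, lr])

grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=grid,
                    evaluator=MulticlassClassificationEvaluator(),
                    numFolds=5)     # the 5-fold validation mentioned above
# model = cv.fit(train_df)  # train_df: DataFrame with "text" and "label"
```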
References
Journal Article (DOI)

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
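The constant error carousel is visible in the additive cell-state update of a single LSTM step. The NumPy sketch below uses the modern standard gate parametrization, which differs slightly from the original 1997 formulation (e.g., it includes a forget gate); sizes are toy values.

```python
# One LSTM step in NumPy, highlighting the additive cell-state update
# ("constant error carousel") that lets errors flow across long lags.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: input; h_prev, c_prev: previous hidden/cell states.
    W, U, b hold stacked parameters for the i, f, o, g gates."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # additive carousel
    h = sigmoid(o) * np.tanh(c)
    return h, c

d, k = 8, 16                                           # illustrative sizes
rng = np.random.default_rng(0)
h, c = lstm_step(rng.normal(size=d), np.zeros(k), np.zeros(k),
                 rng.normal(size=(4 * k, d)), rng.normal(size=(4 * k, k)),
                 np.zeros(4 * k))
```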
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
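The phrase-finding method scores bigrams and merges those above a threshold, using the score (count(wi wj) - delta) / (count(wi) * count(wj)). The sketch below applies that score with toy counts and a toy threshold.

```python
# Bigram phrase finding: merge pairs whose discounted co-occurrence score
# exceeds a threshold (e.g. "new york" becomes one token). Toy values.
from collections import Counter

delta, threshold = 5, 1e-4
unigrams = Counter({"new": 100, "york": 80, "the": 500})
bigrams = Counter({("new", "york"): 60, ("the", "new"): 7})

phrases = {
    pair
    for pair, n in bigrams.items()
    if (n - delta) / (unigrams[pair[0]] * unigrams[pair[1]]) > threshold
}
print(phrases)  # {('new', 'york')}
```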
Proceedings Article (DOI)

Learning Phrase Representations using RNN Encoder--Decoder for Statistical Machine Translation

TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
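A compact PyTorch sketch of that joint training objective: the encoder compresses the source into a context vector c, and the decoder is trained to maximize log p(y | x) = sum over t of log p(y_t | y_<t, c). The GRU sizes, embeddings, and toy batches are illustrative assumptions, not the paper's setup.

```python
# Encoder-decoder sketch: cross-entropy on next-token predictions equals
# -log p(target | source); backprop updates both networks jointly.
import torch
import torch.nn as nn

hidden, vocab = 32, 50
encoder = nn.GRU(hidden, hidden, batch_first=True)
decoder = nn.GRU(hidden, hidden, batch_first=True)
src_embed = nn.Embedding(vocab, hidden)
tgt_embed = nn.Embedding(vocab, hidden)
out = nn.Linear(hidden, vocab)

src = torch.randint(0, vocab, (2, 7))      # toy source batch
tgt = torch.randint(0, vocab, (2, 5))      # toy target batch
_, c = encoder(src_embed(src))             # context = final encoder state
dec_states, _ = decoder(tgt_embed(tgt[:, :-1]), c)
logits = out(dec_states)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab),
                                   tgt[:, 1:].reshape(-1))
loss.backward()                            # trains encoder and decoder jointly
```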
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Replacing the binary stochastic hidden units of restricted Boltzmann machines with rectified linear units learns features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
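The paper's noisy rectified linear unit can be sketched as max(0, x + N(0, sigmoid(x))), i.e., a ReLU whose pre-activation receives Gaussian noise with variance sigmoid(x). The inputs below are toy values, not the paper's experiments.

```python
# Noisy rectified linear unit (NReLU) sketch with toy pre-activations.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_relu(x, rng):
    # Gaussian noise with variance sigmoid(x), then rectification.
    return np.maximum(0.0, x + rng.normal(scale=np.sqrt(sigmoid(x))))

rng = np.random.default_rng(0)
pre_activations = np.array([-2.0, 0.0, 1.5, 4.0])
print(noisy_relu(pre_activations, rng))
```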
Posted Content

Sequence to Sequence Learning with Neural Networks

TL;DR: This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about sequence structure, and finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and the target sentence that made the optimization problem easier.
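The source-reversal trick itself is a one-line preprocessing step: feeding the source sentence backwards shortens the distance between early source words and early target words. The tokenization and end-of-sequence marker below are toy assumptions.

```python
# Reverse the source sequence before encoding, as the paper suggests.
def reverse_source(src_tokens, eos="</s>"):
    return list(reversed(src_tokens)) + [eos]

print(reverse_source(["je", "suis", "etudiant"]))
# ['etudiant', 'suis', 'je', '</s>']
```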