
Proceedings ArticleDOI

LSTM-CNN Hybrid Model for Text Classification

01 Oct 2018

TL;DR: A hybrid model of LSTM and CNN is proposed that can effectively improve the accuracy of text classification; its performance is compared with that of other models in the experiments.

Abstract: Text classification is a classic task in the field of natural language processing. However, existing text classification methods still need improvement, because text semantic information is abstracted in complex ways and depends strongly on context. In this paper, we combine the advantages of two traditional neural network models, Long Short-Term Memory (LSTM) and the Convolutional Neural Network (CNN): LSTM can effectively preserve the characteristics of historical information in long text sequences, while the CNN structure can extract local features of the text. We propose a hybrid model of LSTM and CNN that constructs the CNN on top of the LSTM, so that the text feature vectors output by the LSTM are further processed by the CNN structure. The performance of the hybrid model is compared with that of other models in the experiment. The experimental results show that the hybrid model can effectively improve the accuracy of text classification.
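The abstract describes stacking a CNN on top of an LSTM, with the LSTM's per-timestep outputs serving as the CNN's input. As a hedged illustration only (the page gives no code, and every hyperparameter below is an assumption, not a value from the paper), a minimal Keras sketch of that ordering might look like:

```python
# Minimal sketch of the LSTM-then-CNN ordering described in the abstract.
# Vocabulary size, dimensions, filter counts, and class count are all
# placeholder assumptions.
from tensorflow.keras import layers, models

vocab_size, embed_dim, num_classes = 20000, 128, 2

model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim),
    # return_sequences=True keeps every timestep, so the CNN on top
    # can extract local features from the LSTM's output sequence
    layers.LSTM(128, return_sequences=True),
    layers.Conv1D(filters=64, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```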


Citations


Journal ArticleDOI
TL;DR: This study addresses the early detection of suicide ideation through deep learning and machine learning-based classification approaches applied to Reddit social media, employing a combined LSTM-CNN model and comparing it against other classification models.
Abstract: Suicide ideation expressed in social media has an impact on language usage. Many at-risk individuals use social forum platforms to discuss their problems or to access information on similar topics. The key objective of our study is to present ongoing work on the automatic recognition of suicidal posts. We address the early detection of suicide ideation through deep learning and machine learning-based classification approaches applied to Reddit social media. For this purpose, we employ a combined LSTM-CNN model and compare it against other classification models. Our experiment shows that the combined neural network architecture with word embedding techniques achieves the best relevance classification results. Additionally, our results support the strength and ability of deep learning architectures to build effective models for suicide risk assessment in various text classification tasks.
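Since the study pairs the LSTM-CNN classifier with word embedding techniques, one common setup is to initialize the embedding layer from pretrained vectors. The sketch below is an illustrative assumption, not the authors' code; the file path, dimensionality, and `word_index` are placeholders.

```python
# Hypothetical helper: build an embedding matrix aligned with a tokenizer's
# word index from a pretrained vector file (GloVe-style text format assumed).
import numpy as np

def load_embedding_matrix(path, word_index, dim=300):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    matrix = np.zeros((len(word_index) + 1, dim))  # row 0 reserved for padding
    for word, i in word_index.items():
        if word in vectors:
            matrix[i] = vectors[word]  # unseen words keep zero vectors
    return matrix

# Usage (assumed names): pass the matrix as initial Embedding weights, e.g.
# layers.Embedding(matrix.shape[0], 300, weights=[matrix], trainable=False)
```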

30 citations


Cites background or methods from "LSTM-CNN Hybrid Model for Text Clas..."

  • ...Thus, a single neuron in a CNN represents a region within an input sample, such as a piece of an image or text; in our convolution layer we follow the work by [46]....

    [...]

  • ...In our experiment, we use multiple convolutional filters with various parameter initializations to extract multiple maps from the text [46]....

    [...]

  • ...input sequences X = (x_t) with a d-dimensional word embedding vector, while H represents the number of LSTM hidden layer nodes [46]....

    [...]


Proceedings ArticleDOI
20 Jul 2019
TL;DR: This work uses time-ordered sequential patterns of tourists' behavior, including opinions and reviews, as input data, and applies Convolutional Long Short-Term Deep Learning (CLSTDL), a deep learning technique that combines a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM), to predict the expected location.
Abstract: Statistics from the World Travel & Tourism Council (2018) show that tourism's contribution to GDP is increasing every year. Moreover, the travel industry is not only one of the most dynamic sectors but also one of the most important generators of income and jobs in a country. Prototypes for tourism plans are therefore needed for strategic planning. Currently, the social web is a great tool for gaining useful insights into tourist behavior, especially through the text of travelers' opinions. In this work, we use time-ordered sequential patterns of tourists' behavior, including opinions and reviews, as our input data. We then use Convolutional Long Short-Term Deep Learning (CLSTDL), a deep learning technique that combines a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM), to predict the expected location. During the process, the output of the CNN is fed into the LSTM to learn the sequential behavior patterns of travelers. The model output is then used to predict the next location that a particular traveler is likely to visit. The experimental results show that CLSTDL outperforms other models on accuracy and loss metrics.
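CLSTDL reverses the ordering of the main paper: here the CNN's feature maps are fed into the LSTM. A minimal sketch of that arrangement, with all sizes (including the number of candidate locations) assumed for illustration:

```python
# Minimal sketch of the CNN-then-LSTM ordering the abstract describes.
from tensorflow.keras import layers, models

vocab_size, embed_dim, num_locations = 20000, 128, 50  # placeholders

model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim),
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # local features
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(100),                    # sequence over CNN feature maps
    layers.Dense(num_locations, activation="softmax"),  # next-location scores
])
```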

4 citations


Cites methods from "LSTM-CNN Hybrid Model for Text Clas..."

  • ...Other studies in Section II [1,3,4,5,6,7,8,9,10,11] used hybrid methods for image processing and text classification....

    [...]

  • ...al. that extracted features with a CNN and learned sequences with an LSTM [8]....

    [...]


Journal ArticleDOI
TL;DR: An improved emotion analysis model based on the Bi-LSTM model is proposed to classify the four-dimensional emotions of Pleasure, Anger, Sorrow, and Joy, and tags such as comment time and user name are added to the danmaku information.
Abstract: With the rapid development of social media, danmaku video provides a platform for users to communicate online. To some extent, danmaku video provides emotional timing information and an innovative way to analyze video data. In the age of big data, studying the characteristics of danmaku and its emotional tendencies can not only help us understand the psychological characteristics of users but also feed useful information about users back to video platforms; this helps the platforms optimize related short-video recommendations and provides a more accurate way to select target audiences during video production. However, danmaku differs from traditional comments: current emotion classification methods are only suitable for two-dimensional classification and are not suitable for danmaku emotion analysis. Aiming at problems in danmaku emotion analysis such as colloquialism, diversity, spelling errors, the structurally non-linear informal language of the Internet, the diversity of social topics, and context dependency, this paper proposes an improved emotion analysis model based on the Bi-LSTM model to classify the four-dimensional emotions of Pleasure, Anger, Sorrow, and Joy. Furthermore, we add tags such as comment time and user name to the danmaku information. Experimental results show that, under the same conditions, the improved model achieves higher Accuracy, Recall, Precision, and F1-Score than CNN and SVM models, and its classification performance is close to the state of the art (SOTA). Experimental results also show that the improved model can be effectively applied to the analysis of irregular danmaku emotion.
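The model's core, a bidirectional LSTM over four emotion classes, can be sketched as below; the extra danmaku tags (comment time, user name) that the paper adds are omitted, and all sizes are assumptions:

```python
# Minimal sketch of a Bi-LSTM classifier over the four emotions named in
# the abstract (Pleasure, Anger, Sorrow, Joy); sizes are placeholders.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(20000, 128),
    layers.Bidirectional(layers.LSTM(64)),  # reads the danmaku both ways
    layers.Dense(4, activation="softmax"),  # Pleasure/Anger/Sorrow/Joy
])
```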

4 citations


Cites methods from "LSTM-CNN Hybrid Model for Text Clas..."

  • ...Through statistical analysis, the four performance indexes of Accuracy, Recall, Precision, and F1-Score (as shown in FIGURE 4) of the model used in the experiment are higher than those of the CNN (Convolutional Neural Network) model [21]....

    [...]


Journal ArticleDOI
TL;DR: A CLSTM-based topic memory network for marketing intention detection is proposed, built on a new combination that ensembles Long Short-Term Memory (LSTM) and a Convolutional Neural Network (CNN).
Abstract: In recent years, neural network-based models from machine learning and deep learning have achieved excellent results in text classification. In research on marketing intention detection, classification methods are adopted to identify news with marketing intent. However, most current news appears in the form of dialogs, and it is challenging to find the potential relevance between news sentences in order to determine their latent semantics. To address this issue, this paper proposes a CLSTM-based topic memory network (CLSTM-TMN for short) for marketing intention detection. A ReLU-Neuro Topic Model (RNTM) is proposed: a hidden layer is constructed to efficiently capture the topic document representation, and latent variables are applied to enhance the granularity of topic model learning. We change the structure of the current Neural Topic Model (NTM) to add a CLSTM classifier. This method is a new combination ensembling Long Short-Term Memory (LSTM) and a Convolutional Neural Network (CNN): the CLSTM structure has the ability to find relationships in a sequence of text input and to extract local, dense features through convolution operations. The experiments illustrate the effectiveness of the method for marketing intention detection; our detection model shows a more significant improvement in F1 (7%) than the other compared models.
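The exact RNTM/CLSTM-TMN wiring is more involved than the abstract can convey, but one hedged sketch of the general idea, fusing a CLSTM text encoder with a separately computed topic vector, is shown below; `topic_dim` and every layer size are assumptions, not the paper's architecture:

```python
# Hypothetical sketch: concatenate a CLSTM text representation with a
# topic-model vector before the final classifier.
from tensorflow.keras import layers, Input, Model

seq_len, topic_dim = 200, 50                    # placeholder sizes
text_in = Input(shape=(seq_len,))
topic_in = Input(shape=(topic_dim,))            # e.g., a topic-model output

x = layers.Embedding(20000, 128)(text_in)
x = layers.Conv1D(64, 3, activation="relu")(x)  # CNN: local, dense features
x = layers.LSTM(64)(x)                          # LSTM: sequence relationships
merged = layers.Concatenate()([x, topic_in])    # fuse text and topic signals
out = layers.Dense(2, activation="softmax")(merged)

model = Model([text_in, topic_in], out)
```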

4 citations


References

Journal ArticleDOI
Abstract: A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.
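The core idea, embedding the previous n-1 words, concatenating them, and predicting the next word through a nonlinearity and a softmax over the vocabulary, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's original architecture or sizes:

```python
# Minimal sketch of a neural probabilistic language model: shared word
# embeddings -> concatenated context -> tanh hidden layer -> softmax.
from tensorflow.keras import layers, models

vocab, dim, context = 10000, 64, 4   # placeholder sizes; context = n-1 words

model = models.Sequential([
    layers.Input(shape=(context,)),
    layers.Embedding(vocab, dim),    # the learned distributed representation
    layers.Flatten(),                # concatenate the context embeddings
    layers.Dense(256, activation="tanh"),
    layers.Dense(vocab, activation="softmax"),  # P(next word | context)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```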

6,194 citations


Proceedings Article
01 Jan 2010
TL;DR: Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model.
Abstract: A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition
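The headline result comes from linearly interpolating several language models and measuring perplexity. As a toy illustration of that arithmetic (the probabilities below are made up, not experimental data):

```python
# Toy sketch: mix per-word probabilities from several LMs, then compute
# perplexity. Illustrative numbers only.
import math

def perplexity(word_probs):
    return math.exp(-sum(math.log(p) for p in word_probs) / len(word_probs))

def interpolate(prob_streams, weights):
    # linear interpolation of per-word probabilities across models
    return [sum(w * p for w, p in zip(weights, probs))
            for probs in zip(*prob_streams)]

lm_a = [0.20, 0.10, 0.05]   # toy per-word probabilities from model A
lm_b = [0.10, 0.30, 0.02]   # ... and from model B
mixed = interpolate([lm_a, lm_b], weights=[0.5, 0.5])
print(perplexity(lm_a), perplexity(lm_b), perplexity(mixed))
# the mixture's perplexity (~9.8) beats either model alone (10.0, ~11.9)
```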

4,971 citations


"LSTM-CNN Hybrid Model for Text Clas..." refers methods in this paper

  • ...Neural network models, such as the Convolutional Neural Network (CNN) [8] and the Recurrent Neural Network (RNN) [9], are used for text classification tasks, and the performance of neural network models is better than that of traditional machine learning methods....

    [...]



Journal Article
Abstract: The Maximum Entropy Model is a probability estimation technique widely used for a variety of natural language tasks. It offers a clean and accommodating framework for combining diverse pieces of contextual information to estimate the probability of certain linguistic phenomena. For many NLP tasks this approach performs at near state-of-the-art levels, or outperforms other competing probability methods when trained and tested under similar conditions. In this paper, we use the maximum entropy model for text categorization. We compare and analyze its categorization performance using different approaches to text feature generation, different numbers of features, and smoothing techniques. Moreover, in experiments we compare it to Bayes, KNN, and SVM, and show that its performance is higher than Bayes and comparable with KNN and SVM. We think it is a promising technique for text categorization.
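Maximum entropy classification with discrete features is equivalent to multinomial logistic regression, so the setup the abstract describes can be approximated in a few lines of scikit-learn. The toy documents and labels below are illustrative placeholders only:

```python
# Hedged sketch: a maximum-entropy (logistic regression) text categorizer
# over TF-IDF features; documents and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["stock markets rallied on earnings", "the striker scored twice"]
labels = ["business", "sports"]

maxent = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
maxent.fit(docs, labels)
print(maxent.predict(["shares fell after the earnings report"]))
```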

35 citations