Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms
Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, Lawrence Carin
Proceedings of ACL 2018 (Volume 1: Long Papers), pp. 440-450
TLDR
This paper conducts a point-by-point comparative study between Simple Word-Embedding-based Models (SWEMs), consisting of parameter-free pooling operations, and word-embedding-based RNN/CNN models.
Abstract
Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations. However, there has not been a rigorous evaluation regarding the added value of sophisticated compositional functions. In this paper, we conduct a point-by-point comparative study between Simple Word-Embedding-based Models (SWEMs), consisting of parameter-free pooling operations, and word-embedding-based RNN/CNN models. Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. Based upon this understanding, we propose two additional pooling strategies over learned word embeddings: (i) a max-pooling operation for improved interpretability; and (ii) a hierarchical pooling operation, which preserves spatial (n-gram) information within text sequences. We present experiments on 17 datasets encompassing three tasks: (i) (long) document classification; (ii) text sequence matching; and (iii) short text tasks, including classification and tagging.
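As a rough illustration of the pooling operations described in the abstract, here is a minimal NumPy sketch of average, max, and hierarchical pooling over a sequence of word embeddings; the toy embedding matrix and the window size of 3 are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def swem_aver(emb):
    """Average-pooling: mean of word embeddings over the sequence axis."""
    return emb.mean(axis=0)

def swem_max(emb):
    """Max-pooling: per-dimension maximum, aiding interpretability
    (each output dimension traces back to a single word)."""
    return emb.max(axis=0)

def swem_hier(emb, window=3):
    """Hierarchical pooling: average-pool over local n-gram windows,
    then max-pool across windows, preserving some word-order information."""
    n = emb.shape[0]
    windows = [emb[i:i + window].mean(axis=0)
               for i in range(max(n - window + 1, 1))]
    return np.max(np.stack(windows), axis=0)

# Toy example: a sequence of 5 words with 4-dimensional embeddings.
emb = np.random.randn(5, 4)
print(swem_aver(emb).shape, swem_max(emb).shape, swem_hier(emb).shape)
```

All three operations are parameter-free, which is the point of contrast with RNN/CNN compositional functions.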
Citations
Proceedings Article
Model-agnostic Methods for Text Classification with Inherent Noise
TL;DR: This work evaluates model-agnostic methods for handling inherent noise in large-scale text classification, which can be incorporated into existing machine learning workflows with minimal interruption, and describes how the approach is learned and applied.
Posted Content
Efficient Sentence Embedding via Semantic Subspace Analysis
TL;DR: This work proposes S3E, a novel sentence embedding method built upon semantic subspace analysis, which offers comparable or better performance than the state of the art on both textual similarity and supervised tasks.
Book Chapter
DGRL: Text Classification with Deep Graph Residual Learning
TL;DR: This article proposes a deep graph convolutional network that builds a text graph over words and documents, based on the relevance of words and the relationships between them, with residual connections that mitigate the risk of vanishing gradients.
Proceedings Article
SpanPredict: Extraction of Predictive Document Spans with Neural Attention
Vivek Subramanian, Matthew M. Engelhard, Samuel I. Berchuck, Liqun Chen, Ricardo Henao, Lawrence Carin
TL;DR: This model decomposes predictions into a sum of contributions from distinct text spans, identifying semantically cohesive spans and assigning them scores that agree with human ratings while preserving classification performance.
Proceedings Article
Recurrent Graph Neural Networks for Text Classification
TL;DR: This work proposes a model that uses a recurrent structure to capture as much contextual information as possible when learning word representations, preserving word-order information that GNN-based networks discard.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound on its convergence rate that is comparable to the best known results under the online convex optimization framework.
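For reference, a minimal NumPy sketch of the Adam update, showing the moment estimates and bias corrections the TL;DR refers to; the hyperparameters are the paper's stated defaults, and the quadratic toy objective is illustrative only.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving estimates of the first moment (m)
    and second raw moment (v) of the gradient, with bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy example: minimize f(x) = x^2 starting from x = 5.
theta, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)  # approaches 0
```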
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
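For context, a minimal NumPy sketch of a single LSTM step, with the gated cell state acting as the constant error carousel mentioned in the TL;DR. Note this is the now-standard variant with a forget gate, which postdates the original paper, and the toy dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step. The cell state c is updated additively, so error can
    flow across many time steps; input (i), forget (f), and output (o)
    gates regulate what enters, persists in, and leaves the cell.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) biases."""
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    i = sigmoid(z[:H])             # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:])         # candidate cell update
    c = f * c_prev + i * g         # additive cell update: gradients persist
    h = o * np.tanh(c)             # hidden state
    return h, c

# Toy dimensions: input size 3, hidden size 2.
D, H = 3, 2
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_cell(rng.normal(size=D), h, c, W, U, b)
print(h, c)
```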
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposes the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieving state-of-the-art performance on English-to-French translation.
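For context, a minimal NumPy sketch of the scaled dot-product attention at the core of this architecture; the toy shapes are illustrative, and multi-head projections and masking are omitted.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    the basic operation the Transformer is built from."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Toy example: 4 positions, dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```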
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
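For context, a minimal NumPy sketch of dropout in its common "inverted" formulation, which rescales at training time and is equivalent to the paper's test-time weight scaling; the rate of 0.5 matches the paper's typical choice for hidden units.

```python
import numpy as np

def dropout(x, p=0.5, train=True, seed=None):
    """Inverted dropout: zero each activation with probability p during
    training and rescale the survivors by 1/(1-p), so the expected
    activation is unchanged and no adjustment is needed at test time."""
    if not train or p == 0.0:
        return x
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) >= p    # keep each unit with probability 1-p
    return x * mask / (1.0 - p)

x = np.ones(10)
print(dropout(x, p=0.5, seed=0))  # roughly half zeroed, rest scaled to 2.0
print(dropout(x, train=False))    # identity at test time
```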
Proceedings Article
GloVe: Global Vectors for Word Representation
TL;DR: This work proposes a new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
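For context, a minimal NumPy sketch of the GloVe weighted least-squares objective over co-occurrence counts; x_max and alpha are the paper's reported defaults, while the toy count matrix frames an illustrative setup, not a full training loop.

```python
import numpy as np

def glove_loss(W, W_tilde, b, b_tilde, X, x_max=100.0, alpha=0.75):
    """GloVe objective: weighted least squares fitting
    w_i . w~_j + b_i + b~_j to log co-occurrence counts, where the
    weighting f(x) caps the influence of very frequent word pairs."""
    i, j = np.nonzero(X)                       # only observed co-occurrences
    x = X[i, j]
    f = np.minimum((x / x_max) ** alpha, 1.0)  # weighting function f(X_ij)
    pred = np.sum(W[i] * W_tilde[j], axis=1) + b[i] + b_tilde[j]
    return np.sum(f * (pred - np.log(x)) ** 2)

# Toy setup: vocabulary of 5 words, 3-dimensional vectors.
rng = np.random.default_rng(0)
V, D = 5, 3
X = rng.integers(0, 10, size=(V, V)).astype(float)
W, W_t = rng.normal(size=(V, D)), rng.normal(size=(V, D))
b, b_t = np.zeros(V), np.zeros(V)
print(glove_loss(W, W_t, b, b_t, X))
```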