Open Access Posted Content

Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information

TLDR
The authors propose a densely-connected co-attentive recurrent neural network in which each layer uses the concatenated information of attentive features as well as the hidden features of all preceding recurrent layers.
Abstract: 
Sentence matching is widely used in various natural language tasks such as natural language inference, paraphrase identification, and question answering. These tasks require understanding the logical and semantic relationship between two sentences, which remains challenging. Although attention mechanisms are useful for capturing this semantic relationship and for properly aligning the elements of two sentences, previous attention-based methods simply use a summation operation, which does not sufficiently retain the original features. Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses the concatenated information of attentive features as well as the hidden features of all preceding recurrent layers. This preserves the original and co-attentive feature information from the bottommost word-embedding layer to the uppermost recurrent layer. To alleviate the ever-increasing size of the feature vectors caused by the dense concatenation operations, we also propose using an autoencoder after dense concatenation. We evaluate the proposed architecture on highly competitive benchmark datasets for sentence matching. Experimental results show that our architecture, which retains recurrent and attentive features, achieves state-of-the-art performance on most of the tasks.
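
As a concrete illustration of the dense co-attentive idea described in the abstract, here is a minimal PyTorch sketch: each block concatenates its input with new recurrent and co-attended features, and a simple linear bottleneck stands in for the paper's autoencoder. All module names, dimensions, and the bottleneck design are assumptions made for illustration, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseCoAttentiveLayer(nn.Module):
    """One densely connected recurrent + co-attentive block (sketch).
    Output = concat(input, BiLSTM hidden, co-attended hidden), so each
    later layer sees the features of all preceding layers."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden_dim, batch_first=True,
                           bidirectional=True)

    def forward(self, p, q):
        # p: (B, Lp, in_dim), q: (B, Lq, in_dim)
        hp, _ = self.rnn(p)                              # (B, Lp, 2H)
        hq, _ = self.rnn(q)                              # (B, Lq, 2H)
        # Co-attention: soft-align each token of one sentence with the other.
        scores = torch.bmm(hp, hq.transpose(1, 2))       # (B, Lp, Lq)
        ap = torch.bmm(F.softmax(scores, dim=2), hq)     # q aligned to p
        aq = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), hp)
        # Dense connection: retain the original input alongside new features.
        return (torch.cat([p, hp, ap], dim=-1),
                torch.cat([q, hq, aq], dim=-1))

class Bottleneck(nn.Module):
    """Stand-in for the paper's autoencoder: compresses the concatenated
    features back to a fixed width (reconstruction loss omitted)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return torch.relu(self.proj(x))

# Usage: the feature width grows from 100 to 100 + 2*64 + 2*64 = 356 per
# block; the bottleneck restores it to 100 before the next block.
layer = DenseCoAttentiveLayer(100, 64)
p2, q2 = layer(torch.randn(2, 7, 100), torch.randn(2, 9, 100))
p2 = Bottleneck(356, 100)(p2)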


Citations
Posted Content

The Natural Language Decathlon: Multitask Learning as Question Answering

TL;DR: The Natural Language Decathlon (decaNLP) casts ten disparate NLP tasks as question answering over a context and trains a single multitask model jointly on all of them.
Journal ArticleDOI

Deep Learning--based Text Classification: A Comprehensive Review

TL;DR: This paper provides a comprehensive review of more than 150 deep learning-based models for text classification developed in recent years, discussing their technical contributions, similarities, and strengths, and offering a quantitative analysis of their performance on popular benchmarks.
Posted Content

Multi-Task Deep Neural Networks for Natural Language Understanding

TL;DR: A Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks that allows domain adaptation with substantially fewer in-domain labels than pre-trained BERT representations require.
Posted Content

Semantics-aware BERT for Language Understanding

TL;DR: This work proposes to incorporate explicit contextual semantics from pre-trained semantic role labeling and introduces an improved language representation model, Semantics-aware BERT (SemBERT), which explicitly absorbs contextual semantics over a BERT backbone.
Proceedings ArticleDOI

Exploiting Edge Features for Graph Neural Networks

TL;DR: The authors propose doubly stochastic normalization of graph edge features in place of the row or symmetric normalization commonly used in graph neural networks, and construct new formulas for the operations in each layer so that they can handle multi-dimensional edge features.
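
The doubly stochastic normalization mentioned in this summary can be illustrated with a standard Sinkhorn-style iteration, a common way to make a nonnegative matrix approximately doubly stochastic; the paper's exact closed-form construction may differ, so treat this as a sketch under that assumption.

import torch

def sinkhorn_normalize(E, n_iters=10, eps=1e-8):
    # E: nonnegative edge features of shape (N, N, C), one N x N
    # matrix per channel. Alternating row/column normalization drives
    # each channel toward a doubly stochastic matrix (rows and columns
    # both summing to 1), unlike plain row or symmetric normalization.
    E = E.clamp(min=0) + eps
    for _ in range(n_iters):
        E = E / E.sum(dim=1, keepdim=True)  # normalize rows
        E = E / E.sum(dim=0, keepdim=True)  # normalize columns
    return E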
References
Proceedings ArticleDOI

Noise-Contrastive Estimation for Answer Selection with Deep Neural Networks

TL;DR: The Noise-Contrastive Estimation approach is extended with a triplet ranking loss that exploits interactions among triplet inputs, i.e., a question paired with positive and negative examples, achieving state-of-the-art effectiveness without external knowledge sources or feature engineering.
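
A triplet ranking loss of the kind this summary describes can be sketched as a hinge over similarity scores; the cosine similarity and margin value here are illustrative assumptions, not the paper's exact setup.

import torch
import torch.nn.functional as F

def triplet_ranking_loss(q, a_pos, a_neg, margin=0.2):
    # q, a_pos, a_neg: (B, D) encodings of a question, a correct
    # answer, and a negative (incorrect) answer. The loss pushes
    # sim(q, a_pos) above sim(q, a_neg) by at least `margin`.
    s_pos = F.cosine_similarity(q, a_pos, dim=-1)
    s_neg = F.cosine_similarity(q, a_neg, dim=-1)
    return F.relu(margin - s_pos + s_neg).mean()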
Proceedings ArticleDOI

Inter-Weighted Alignment Network for Sentence Pair Modeling

TL;DR: A model is proposed to measure the similarity of a sentence pair by focusing on interaction information, using a word-level similarity matrix to discover fine-grained alignments between the two sentences.
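
The word-level similarity matrix at the heart of this model can be sketched as pairwise cosine similarities between encoded tokens; the paper's inter-weighting scheme is omitted here, so this is a generic illustration only.

import torch
import torch.nn.functional as F

def word_similarity_matrix(s1, s2):
    # s1: (L1, D), s2: (L2, D) token encodings of two sentences.
    # Returns an (L1, L2) cosine-similarity matrix whose peaks
    # indicate fine-grained word-level alignments.
    s1 = F.normalize(s1, dim=-1)
    s2 = F.normalize(s2, dim=-1)
    return s1 @ s2.t()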
Proceedings Article

Investigating a Generic Paraphrase-Based Approach for Relation Extraction

TL;DR: This work proposes a generic paraphrase-based approach for Relation Extraction (RE), aiming at a dual goal: obtaining an applicative evaluation scheme for paraphrase acquisition and obtaining a generic and largely unsupervised configuration for RE.
Proceedings ArticleDOI

Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference

TL;DR: This paper describes a model (alpha) that ranked among the top systems in the shared task on both the in-domain and the cross-domain test sets, demonstrating that the model generalizes well to cross-domain data.
Book ChapterDOI

Paraphrase identification on the basis of supervised machine learning techniques

TL;DR: To increase the final performance of the system, this work scrutinizes the influence of combining lexical and semantic information, as well as techniques for classifier combination.