Diana Inkpen
Researcher at University of Ottawa
Publications - 206
Citations - 6889
Diana Inkpen is an academic researcher from the University of Ottawa. The author has contributed to research topics including Computer science and Machine translation. The author has an h-index of 35, co-authored 187 publications receiving 5666 citations. Previous affiliations of Diana Inkpen include Ottawa University and the University of Toronto.
Papers
Proceedings ArticleDOI
Enhanced LSTM for Natural Language Inference
TL;DR: This paper presents a new state-of-the-art result, achieving an accuracy of 88.6% on the Stanford Natural Language Inference dataset, and demonstrates that carefully designed sequential inference models based on chain LSTMs can outperform all previous models.
Journal ArticleDOI
Sentiment Classification of Movie Reviews Using Contextual Valence Shifters
Alistair Kennedy, Diana Inkpen +1 more
TL;DR: It is shown that extending the term-counting method with contextual valence shifters improves the accuracy of the classification, and combining the two methods achieves better results than either method alone.
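The idea behind term counting with contextual valence shifters can be sketched in a few lines: count polar terms, but let nearby negations and intensifiers flip or scale a term's polarity before it is added to the total. The lexicons and window size below are illustrative stand-ins, not the resources used in the paper.

```python
# Toy term-counting sentiment scorer with contextual valence shifters.
# All word lists here are tiny illustrative examples (assumptions),
# not the lexicons used by Kennedy and Inkpen.
POSITIVE = {"good", "great", "enjoyable"}
NEGATIVE = {"bad", "boring", "awful"}
NEGATIONS = {"not", "never", "no"}            # flip polarity
INTENSIFIERS = {"very": 2.0, "barely": 0.5}   # scale polarity

def score(tokens):
    total = 0.0
    for i, tok in enumerate(tokens):
        polarity = 1.0 if tok in POSITIVE else -1.0 if tok in NEGATIVE else 0.0
        if polarity == 0.0:
            continue
        # Look back over a small window for valence shifters.
        for prev in tokens[max(0, i - 2):i]:
            if prev in NEGATIONS:
                polarity = -polarity
            elif prev in INTENSIFIERS:
                polarity *= INTENSIFIERS[prev]
        total += polarity
    return total

print(score("not a good movie".split()))   # negation flips +1 to -1
```

A plain term counter would score "not a good movie" as positive; the shifter window is what lets the negation correct it.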
Journal ArticleDOI
Semantic text similarity using corpus-based word similarity and string similarity
Aminul Islam, Diana Inkpen +1 more
TL;DR: A method for measuring the semantic similarity of texts using a corpus-based measure of semantic word similarity and a normalized and modified version of the Longest Common Subsequence string matching algorithm is presented.
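The string-similarity component can be illustrated with a normalized longest common subsequence score. This is a minimal sketch: the character-level LCS and the squared-length normalization below follow a common convention for keeping the score in [0, 1], and are not necessarily the paper's exact variant (which combines several modified LCS measures).

```python
def lcs_length(s1: str, s2: str) -> int:
    # Standard dynamic-programming table for the length of the
    # longest common subsequence of two strings.
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def normalized_lcs(s1: str, s2: str) -> float:
    # Squared LCS length over the product of the string lengths
    # yields a similarity score in [0, 1].
    if not s1 or not s2:
        return 0.0
    lcs = lcs_length(s1, s2)
    return lcs * lcs / (len(s1) * len(s2))

print(normalized_lcs("similarity", "similarly"))
```

In a text-similarity pipeline such scores would be computed between word pairs and combined with a corpus-based word-similarity measure, as the summary above describes.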
Book ChapterDOI
Offensive language detection using multi-level classification
TL;DR: An automatic flame detection method is described that extracts features at different conceptual levels and applies multi-level classification; an auxiliary weighted pattern repository further improves accuracy by matching the text against its graded entries.
Proceedings ArticleDOI
Neural Natural Language Inference Models Enhanced with External Knowledge
TL;DR: This paper enriches state-of-the-art neural natural language inference models with external knowledge and demonstrates that the proposed models improve neural NLI models, achieving state-of-the-art performance on the SNLI and MultiNLI datasets.