Open Access · Proceedings Article (DOI)

Supporting inferences in semantic space: representing words as regions

TLDR
This paper presents a model for learning a region representation for word meaning in semantic space, based on the fact that points at close distance tend to represent similar meanings, and shows that this model can be used to predict, with high precision, when a hyponymy-based inference rule is applicable.
Abstract
Semantic space models represent the meaning of a word as a vector in high-dimensional space. They offer a framework in which the meaning representation of a word can be computed from its context, but the question remains how they support inferences. While there has been some work on paraphrase-based inferences in semantic space, it is not clear how semantic space models would support inferences involving hyponymy, like horse ran → animal moved. In this paper, we first discuss what a point in semantic space stands for, contrasting semantic space with Gardenforsian conceptual space. Building on this, we propose an extension of the semantic space representation from a point to a region. We present a model for learning a region representation for word meaning in semantic space, based on the fact that points at close distance tend to represent similar meanings. We show that this model can be used to predict, with high precision, when a hyponymy-based inference rule is applicable. Moving beyond paraphrase-based and hyponymy-based inference rules, we last discuss in what way semantic space models can support inferences.


Citations
Proceedings Article

Entailment above the word level in distributional semantics

TL;DR: Two ways to detect entailment using distributional semantic representations of phrases are introduced, and nominal and quantifier phrase entailment appears to be cued by different distributional correlates, as predicted by the type-based view of entailment in formal semantics.
Journal Article

Distributional semantics in linguistic and cognitive research

TL;DR: This work concludes that a general model of meaning can indeed be discerned behind the differences, a model that formulates specific hypotheses on the format of semantic representations, and on the way they are built and processed by the human mind.
Proceedings Article

Identifying hypernyms in distributional semantic spaces

TL;DR: This paper applies existing directional similarity measures to identify hypernyms with a state-of-the-art distributional semantic model and proposes a new directional measure that achieves the best performance in hypernym identification.
Proceedings ArticleDOI

Intrinsic Evaluations of Word Embeddings: What Can We Do Better?

TL;DR: The authors argue for a shift from abstract ratings of word embedding “quality” to exploration of their strengths and weaknesses, in order to do justice to the strengths of distributional meaning representations.
Proceedings ArticleDOI

Representing words as regions in vector space

TL;DR: This paper discusses two models that represent word meaning as regions in vector space and finds that both models perform at over 95% F-score on a token classification task.
References
Journal ArticleDOI

WordNet: an electronic lexical database

Christiane Fellbaum
01 Sep 2000
TL;DR: Presents the lexical database: nouns in WordNet, a semantic network of English verbs, and applications of WordNet such as building semantic concordances.
Book

Introduction to Information Retrieval

TL;DR: In this article, the authors present an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections.
Journal ArticleDOI

A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge.

TL;DR: A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena.
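As a rough illustration of the LSA machinery described above (the tiny term–document counts and the choice of two latent dimensions are invented for this sketch), the core step is a truncated SVD of a term–document matrix, after which terms are compared in the reduced space:

```python
import numpy as np

# Toy term-document count matrix: rows are terms, columns are documents.
terms = ["horse", "pony", "carburetor"]
counts = np.array([
    [3.0, 2.0, 0.0, 0.0],   # "horse" occurs in docs 1-2
    [2.0, 3.0, 0.0, 0.0],   # "pony" occurs in docs 1-2
    [0.0, 0.0, 3.0, 2.0],   # "carburetor" occurs in docs 3-4
])

# Truncated SVD: keep the top-k latent dimensions.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]    # term representations in latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(term_vecs[0], term_vecs[1]))  # horse vs. pony: high
print(cos(term_vecs[0], term_vecs[2]))  # horse vs. carburetor: near zero
```

Terms that never co-occur in a document can still end up close in the latent space when they share document contexts, which is the "induction" LSA is claimed to perform.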
Proceedings ArticleDOI

Automatic acquisition of hyponyms from large text corpora

TL;DR: A set of lexico-syntactic patterns is identified that are easily recognizable, occur frequently and across text genre boundaries, and indisputably indicate the lexical relation of interest.
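The pattern-based approach can be sketched with two simplified patterns from the "such as" family. The single-word NP regex and the two-pattern inventory below are assumptions for illustration; the original work uses a richer pattern set over parsed noun phrases:

```python
import re

NP = r"[A-Za-z]+"  # simplification: a noun phrase is a single word
PATTERNS = [
    (re.compile(rf"({NP}) such as ({NP})"), "hyper-hypo"),
    (re.compile(rf"({NP}) and other ({NP})"), "hypo-hyper"),
]

def extract_hyponyms(text):
    """Return (hyponym, hypernym) pairs matched by the patterns."""
    pairs = []
    for pat, order in PATTERNS:
        for m in pat.finditer(text):
            a, b = m.group(1), m.group(2)
            hyper, hypo = (a, b) if order == "hyper-hypo" else (b, a)
            pairs.append((hypo, hyper))
    return pairs

print(extract_hyponyms("animals such as horses ran"))
print(extract_hyponyms("bruises and other injuries"))
```

Pairs harvested this way are a natural source of supervision for the hyponymy-based inference rules discussed in the main paper.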
Book

The Big Book of Concepts

TL;DR: The Big Book of Concepts reviews and evaluates research on diverse topics such as category learning, word meaning, conceptual development in infants and children, and the basic level of categorization.