Topic: Semantic similarity
About: Semantic similarity is a research topic. Over its lifetime, 14,605 publications have been published on this topic, receiving 364,659 citations. The topic is also known as: semantic relatedness.
Papers published on a yearly basis
Papers
TL;DR: The proposed approach exploits the ontology's hierarchical structure and relations to assess the similarity between terms more accurately for word sense disambiguation, and introduces lexical chains to extract sets of semantically related words from texts that represent their semantic content.
Abstract: Highlights:
- A modified WordNet-based similarity measure for word sense disambiguation.
- Lexical chains as a text representation that ideally covers the theme of texts.
- Extracted core semantics are sufficient to reduce the dimensionality of the feature set.
- The proposed scheme correctly estimates the true number of clusters.
- The topic labels are good indicators for recognizing and understanding the clusters.

Traditional clustering algorithms do not consider the semantic relationships among words, so they cannot accurately represent the meaning of documents. To overcome this problem, introducing semantic information from an ontology such as WordNet has been widely used to improve the quality of text clustering. However, several challenges remain, such as synonymy and polysemy, high dimensionality, extracting core semantics from texts, and assigning appropriate descriptions to the generated clusters. In this paper, we report our attempt to integrate WordNet with lexical chains to alleviate these problems. The proposed approach exploits the ontology's hierarchical structure and relations to provide a more accurate assessment of the similarity between terms for word sense disambiguation. Furthermore, we introduce lexical chains to extract a set of semantically related words from texts, which can represent the semantic content of the texts. Although lexical chains have been used extensively in text summarization, their potential impact on text clustering has not been fully investigated. Our integrated approach can identify the theme of documents based on the disambiguated core features extracted and, in parallel, reduce the dimensionality of the feature space. Experimental results with the proposed framework on Reuters-21578 show that clustering performance improves significantly compared to several classical methods.
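The ontology-based similarity idea described above can be sketched with a toy depth-based (Wu-Palmer-style) measure over a hypernym hierarchy. The taxonomy, names, and functions below are illustrative assumptions for the general technique, not the paper's actual modified measure:

```python
# Minimal sketch of a WordNet-style, depth-based similarity (Wu-Palmer),
# using a tiny hand-coded hypernym taxonomy. The hierarchy is an
# illustrative assumption, not the paper's data or measure.

HYPERNYM = {            # child -> parent ("is-a" links)
    "dog": "canine",
    "cat": "feline",
    "canine": "carnivore",
    "feline": "carnivore",
    "carnivore": "mammal",
    "mammal": "animal",
    "animal": None,     # root
}

def path_to_root(term):
    path = []
    while term is not None:
        path.append(term)
        term = HYPERNYM[term]
    return path            # e.g. dog -> canine -> carnivore -> mammal -> animal

def depth(term):
    return len(path_to_root(term))   # root has depth 1

def lcs(a, b):
    # Least common subsumer: first ancestor of b that is also an ancestor of a.
    ancestors_a = set(path_to_root(a))
    for node in path_to_root(b):
        if node in ancestors_a:
            return node
    return None

def wu_palmer(a, b):
    common = lcs(a, b)
    return 2.0 * depth(common) / (depth(a) + depth(b))

print(wu_palmer("dog", "cat"))   # LCS is "carnivore": 2*3 / (5+5) = 0.6
```

Deeper shared ancestors yield higher similarity, which is the property the paper exploits when disambiguating word senses against their context.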
212 citations
07 Jan 2009
TL;DR: A model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation, is proposed, extending past work in natural logic by incorporating both semantic exclusion and implicativity.
Abstract: We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.
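The last step of the pipeline above, joining semantic relations across the edit sequence, can be sketched as follows. The relation names echo natural-logic terminology, but the join table here is a deliberately coarse assumption (a few entries with a fallback), not the model's full join table:

```python
# Sketch of joining lexical semantic relations across an edit sequence.
# The tiny join table below is an illustrative simplification, not the
# paper's complete relation algebra.

EQ, FWD, REV, IND = ("equivalence", "forward_entailment",
                     "reverse_entailment", "independence")

def join(r1, r2):
    if r1 == EQ:
        return r2              # equivalence is the identity element
    if r2 == EQ:
        return r1
    if r1 == r2 and r1 in (FWD, REV):
        return r1              # entailment composes transitively
    return IND                 # coarse fallback: nothing can be inferred

def relation_across_edits(edit_relations):
    result = EQ                # empty edit sequence: premise equals hypothesis
    for r in edit_relations:
        result = join(result, r)
    return result

# "A dog barked" -> "An animal barked": one generalizing substitution
print(relation_across_edits([FWD]))        # forward_entailment
print(relation_across_edits([FWD, FWD]))   # forward_entailment
print(relation_across_edits([FWD, REV]))   # independence
```

A premise entails a hypothesis when the joined relation is equivalence or forward entailment; mixing opposing entailments collapses to independence in this simplified table.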
212 citations
TL;DR: This work presents three experiments indicating that similarity is strongly influenced by transformation distance, and introduces 'Representational Distortion', a family of transformation-based accounts of similarity.
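The transformation-distance idea can be illustrated with plain string edit distance: the fewer operations needed to turn one representation into another, the more similar the two are judged to be. Strings and Levenshtein distance are assumptions for illustration only, not the representations or measure used in the experiments:

```python
# Illustrative sketch: similarity as inverse transformation distance,
# approximated here by Levenshtein edit distance over strings.

def edit_distance(a, b):
    # Dynamic programming over prefixes: d[i][j] = cost of turning a[:i] into b[:j].
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(a)][len(b)]

def similarity(a, b):
    # Fewer transformations -> higher similarity, normalized to [0, 1].
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

print(similarity("ABAB", "ABAA"))   # one substitution: 0.75
print(similarity("ABAB", "CDCD"))
```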
209 citations
TL;DR: This paper provides rating norms for a set of symbols and icons selected from a wide variety of sources; the quantified characteristics include concreteness, complexity, meaningfulness, familiarity, and semantic distance.
Abstract: This paper provides rating norms for a set of symbols and icons selected from a wide variety of sources. These ratings enable the effects of symbol characteristics on user performance to be investigated systematically. The symbol characteristics that have been quantified are considered to be of central relevance to symbol usability research and include concreteness, complexity, meaningfulness, familiarity, and semantic distance. The interrelationships among these dimensions are examined, and the importance of using normative ratings for experimental research is discussed.
207 citations