Topic
Phrase
About: Phrase is a research topic. Over its lifetime, 12,580 publications have appeared within this topic, receiving 317,823 citations. The topic is also known as: syntagma & phrases.
Papers
TL;DR: The authors benchmark recursive neural models against sequential recurrent neural models (simple recurrent and LSTM models) on several tasks, including sentiment classification at the sentence and phrase level, matching questions to answer-phrases, discourse parsing, and semantic relation extraction.
Abstract: Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. But there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper we benchmark {\bf recursive} neural models against sequential {\bf recurrent} neural models (simple recurrent and LSTM models), enforcing apples-to-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answer-phrases; (3) discourse parsing; (4) semantic relation extraction (e.g., {\em component-whole} between nouns).
Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require associating headwords across a long distance, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.
136 citations
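The clause-splitting idea described above (breaking long sentences into clause-like units at punctuation before feeding them to a recurrent model) can be sketched as follows. This is a minimal illustration using ordinary punctuation as split points; the function name and the exact segmentation rules are assumptions, not the paper's implementation.

```python
import re

def split_into_clauses(sentence):
    """Split a long sentence into clause-like units at punctuation,
    so a recurrent model can process each unit separately before
    combining (an illustrative sketch, not the paper's exact method)."""
    # Split on commas, semicolons, and colons; keep non-empty chunks.
    parts = re.split(r"[,;:]", sentence)
    return [p.strip() for p in parts if p.strip()]

clauses = split_into_clauses(
    "The model, which uses parse trees, builds representations bottom-up; "
    "recurrent models read words left to right."
)
print(clauses)
```

Each returned unit would then be encoded independently (e.g., by an LSTM) and the unit representations combined downstream.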
20 Aug 1995
TL;DR: This paper proposed a dependency-based method for evaluating broad-coverage parsers, which offers several advantages over previous methods based on phrase boundaries. The proposed error-count score is not only more intuitively meaningful than other scores but also more relevant to semantic interpretation.
Abstract: With the emergence of broad-coverage parsers, quantitative evaluation of parsers becomes increasingly important. We propose a dependency-based method for evaluating broad-coverage parsers. The method offers several advantages over previous methods that are based on phrase boundaries. The error-count score we propose here is not only more intuitively meaningful than other scores, but also more relevant to semantic interpretation. We also present an algorithm for transforming constituency trees into dependency trees so that the evaluation method is applicable to both dependency and constituency grammars. Finally, we discuss a set of operations for modifying dependency trees that can be used to eliminate inconsequential differences among different parse trees and allow us to selectively evaluate different aspects of a parser.
135 citations
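The constituency-to-dependency transformation mentioned in the abstract above is commonly done by head percolation: each constituent designates a head child, and all non-head children attach to the head word. A minimal sketch follows; the toy head table and function names are illustrative assumptions, not the paper's actual rules.

```python
# Hypothetical sketch of constituency-to-dependency conversion via
# head percolation; the head table below is illustrative only.
def find_head(label, children):
    # Toy head rules: pick the child whose label matches, else the last child.
    head_table = {"S": "VP", "VP": "VBD", "NP": "NN"}
    want = head_table.get(label)
    for child in children:
        if child[0] == want:
            return child
    return children[-1]

def to_dependencies(tree, deps=None):
    """tree = (label, [children]) for internal nodes, (tag, word) for leaves.
    Returns the constituent's head word and accumulates (dependent, head)
    pairs in deps."""
    if deps is None:
        deps = []
    label, children = tree
    if isinstance(children, str):           # leaf node: (tag, word)
        return children, deps
    head_child = find_head(label, children)
    head_word, _ = to_dependencies(head_child, deps)
    for child in children:
        if child is not head_child:
            dep_word, _ = to_dependencies(child, deps)
            deps.append((dep_word, head_word))
    return head_word, deps

tree = ("S", [("NP", [("NN", "parser")]),
              ("VP", [("VBD", "failed")])])
head, deps = to_dependencies(tree)
print(head, deps)
```

The resulting (dependent, head) pairs are exactly what a dependency-based evaluation would compare between a parser's output and a gold standard.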
TL;DR: Both types of phrasal verbs induced structural generalizations and differed little in their ability to do so; the results are interpreted in terms of the role of abstract structural processes in language production.
135 citations
23 Oct 2006
TL;DR: This paper draws an analogy between image retrieval and text retrieval and proposes a visual phrase-based approach to retrieving images containing desired objects, devising methods for constructing visual phrases from images and for encoding them for indexing and retrieval.
Abstract: In this paper, we draw an analogy between image retrieval and text retrieval and propose a visual phrase-based approach to retrieve images containing desired objects. The visual phrase is defined as a pair of adjacent local image patches and is constructed using data mining. We devise methods for constructing visual phrases from images and for encoding the visual phrases for indexing and retrieval. Our experiments demonstrate that the visual phrase-based retrieval approach can be very efficient and can be 20% more effective than its visual word-based counterpart.
135 citations
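The visual-phrase construction described above (pairs of adjacent local patches kept when frequent, in the spirit of data mining) can be sketched as a simple frequent-pair count. The adjacency test, grid representation, and support threshold here are assumptions for illustration, not the paper's exact mining procedure.

```python
from collections import Counter

def mine_visual_phrases(images, min_support=2):
    """Count pairs of adjacent visual words across images and keep the
    frequent ones as 'visual phrases' (illustrative thresholding sketch).
    Each image is a list of (x, y, word_id) tuples on a patch grid."""
    counts = Counter()
    for words in images:
        for i, (x1, y1, w1) in enumerate(words):
            for x2, y2, w2 in words[i + 1:]:
                # Manhattan distance 1 on the grid counts as adjacent.
                if abs(x1 - x2) + abs(y1 - y2) <= 1:
                    counts[tuple(sorted((w1, w2)))] += 1
    return {pair for pair, c in counts.items() if c >= min_support}
```

The mined phrases would then be added to the index alongside single visual words, analogous to indexing word bigrams in text retrieval.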
28 Jul 2003
TL;DR: This paper investigates the use of concept-based document representations to supplement word- or phrase-based features, and proposes using AdaBoost to optimally combine weak hypotheses based on both types of features.
Abstract: Term-based representations of documents have found widespread use in information retrieval. However, one of the main shortcomings of such methods is that they largely disregard lexical semantics and, as a consequence, are not sufficiently robust with respect to variations in word usage. In this paper we investigate the use of concept-based document representations to supplement word- or phrase-based features. The utilized concepts are automatically extracted from documents via probabilistic latent semantic analysis. We propose to use AdaBoost to optimally combine weak hypotheses based on both types of features. Experimental results on standard benchmarks confirm the validity of our approach, showing that AdaBoost achieves consistent improvements by including additional semantic features in the learned ensemble.
135 citations
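The combination strategy in the abstract above, AdaBoost over weak hypotheses drawn from both term and concept features, can be illustrated with a minimal pure-Python AdaBoost using decision stumps over a concatenated feature vector. The stump family, threshold, and feature encoding are simplifying assumptions; the paper's actual weak learners and PLSA-derived concept features are not reproduced here.

```python
import math

def train_adaboost(X, y, rounds=5):
    """Minimal AdaBoost sketch: weak hypotheses are decision stumps over a
    combined feature vector (e.g., term features and concept features
    concatenated). Labels y are in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        # Each weak hypothesis thresholds a single feature at 0.5.
        for j in range(len(X[0])):
            for sign in (1, -1):
                preds = [sign if x[j] > 0.5 else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, j, sign, preds)
        err, j, sign, preds = best
        err = max(err, 1e-10)               # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, j, sign))
        # Reweight: misclassified examples gain weight, then normalize.
        w = [wi * math.exp(-alpha * yi * p) for wi, p, yi in zip(w, preds, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x[j] > 0.5 else -s) for a, j, s in ensemble)
    return 1 if score >= 0 else -1
```

In the paper's setting, some stumps would fire on term features and others on concept features, so the learned ensemble mixes both feature types automatically.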