Phrase

About: Phrase is a research topic. Over its lifetime, 12,580 publications have been published within this topic, receiving 317,823 citations. The topic is also known as: syntagma & phrases.


Papers
Journal ArticleDOI
TL;DR: The effects of plausibility were substantially larger and longer lasting than the effects of argument status, but both appeared very early in the reading of the prepositional phrase.
Abstract: In two experiments, we investigated how reading time was affected by the plausibility of the prepositional phrase in subject-verb-noun-phrase-prepositional-phrase sentences, and by the status of the prepositional phrase as an argument versus an adjunct of the verb. Highly plausible prepositional phrases were read faster than less plausible ones, and argument prepositional phrases were read faster than adjuncts. These effects appeared both in a self-paced reading experiment and in an experiment that measured eye movements during normal reading. The effects of plausibility were substantially larger and longer-lasting than the effects of argument status, but both appeared very early in the reading of the prepositional phrase. The implications of these effects for models of parsing and sentence interpretation are discussed.

66 citations

Proceedings Article
27 Jul 2011
TL;DR: A source dependency-structure-based model that controls the word order of translations without the heuristics or separate ordering models of previous works, and performs well on long-distance reordering.
Abstract: Dependency structure, as a first step towards semantics, is believed to be helpful for improving translation quality. However, previous works on dependency-structure-based models typically resort to insertion operations to complete translations, which makes it difficult to specify ordering information in translation rules. In this paper, we handle this problem by directly specifying the ordering information in head-dependents rules, which represent the source side as head-dependents relations and the target side as strings. The head-dependents rules require only the substitution operation, so our model needs no heuristics or separate ordering models from previous works to control the word order of translations. Large-scale experiments show that our model performs well on long-distance reordering, and outperforms the state-of-the-art constituency-to-string model (+1.47 BLEU on average) and hierarchical phrase-based model (+0.46 BLEU on average) on two Chinese-English NIST test sets without resorting to phrases or parse forests. For the first time, a source dependency structure based model catches up with and surpasses the state-of-the-art translation models.

66 citations
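
To make the head-dependents idea concrete, here is a minimal sketch of how such a rule can fix target word order through substitution alone. The rule encoding, dependency labels, lexicon, and example below are hypothetical illustrations, not the authors' actual formalism or data.

```python
# Toy head-dependents rule application (illustrative, not the paper's system).
# Source side: a head word plus the dependency labels of its dependents.
# Target side: a string template whose slots are filled, by substitution
# only, with the dependents' translations; the template fixes word order.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    head: str        # source head word
    deps: tuple      # dependency labels under the head, in source order
    target: str      # target template; {0}, {1}, ... are dependent slots

RULES = {
    ("juxing", ("nsubj", "dobj")): Rule(
        "juxing", ("nsubj", "dobj"), "{0} held {1}"),
}

LEXICON = {"zhongguo": "China", "huiyi": "a meeting"}

def translate(head, dependents):
    """dependents: list of (label, word) pairs in source order."""
    labels = tuple(label for label, _ in dependents)
    rule = RULES[(head, labels)]
    # Substitution is the only operation: each slot takes a dependent's
    # translation, so no separate ordering model is needed.
    return rule.target.format(*(LEXICON[word] for _, word in dependents))

print(translate("juxing", [("nsubj", "zhongguo"), ("dobj", "huiyi")]))
# -> "China held a meeting"
```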

Journal ArticleDOI
TL;DR: The authors explored the role of the word position-in-text in sentence and paragraph reading and found that the increase in inspection times is driven by the visual boundaries of the text organized in lines, rather than by syntactic sentence boundaries.
Abstract: The present study explores the role of the word position-in-text in sentence and paragraph reading. Three eye-movement data sets based on the reading of Dutch and German unrelated sentences reveal a sizeable, replicable increase in reading times over several words at the beginning and the end of sentences. The data from the paragraph-based English-language Dundee corpus replicate the pattern and also indicate that the increase in inspection times is driven by the visual boundaries of the text organized in lines, rather than by syntactic sentence boundaries. We argue that this effect is independent of several established lexical, contextual, and oculomotor predictors of eye-movement behaviour. We also provide evidence that the effect of word position-in-text has two independent components: a start-up effect, arguably caused by a strategic oculomotor programme of saccade planning over the line of text, and a wrap-up effect, originating in cognitive processes of comprehension and semantic integration.

66 citations

Journal ArticleDOI
TL;DR: This paper focuses on the size of a randomly selected phrase and on the average number of phrases of a given size (the so-called average profile of phrase sizes).
Abstract: Consider the parsing algorithm developed by Lempel and Ziv (1978) that partitions a sequence of length n into variable phrases (blocks) such that a new block is the shortest substring not seen in the past as a phrase. In practice, the following parameters are of interest: the number of phrases, the size of a phrase, the number of phrases of a given size, and so forth. In this paper, we focus on the size of a randomly selected phrase and the average number of phrases of a given size (the so-called average profile of phrase sizes). These parameters can be efficiently analyzed through a digital search tree representation. For a memoryless source with unequal probabilities of symbol generation (the so-called asymmetric Bernoulli model), we prove that the size of a typical phrase is asymptotically normally distributed, with mean and variance explicitly computed. In terms of digital search trees, we prove the normal limiting distribution of the typical depth (i.e., the length of a path from the root to a randomly selected node). The latter finding is proved by a technique that belongs to the toolkit of the "analytical analysis of algorithms", and it seems to be novel in the context of data compression.

66 citations
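
The parsing procedure under analysis is the classic LZ78 scheme, which is short enough to sketch directly. The snippet below is a minimal illustration assuming a plain string input; it also tallies the phrase-size profile discussed in the abstract (in the digital search tree view, a phrase's size is the depth of its node).

```python
# Minimal LZ78-style parsing: each new phrase is the shortest substring
# not previously seen as a phrase.
from collections import Counter

def lz78_phrases(s):
    seen, phrases, i = set(), [], 0
    while i < len(s):
        j = i + 1
        # Grow the candidate until it is new (the final phrase may
        # repeat an earlier one if the input runs out first).
        while j <= len(s) and s[i:j] in seen:
            j += 1
        phrases.append(s[i:j])
        seen.add(s[i:j])
        i = j
    return phrases

phrases = lz78_phrases("ababbbaaab")
print(phrases)                     # ['a', 'b', 'ab', 'bb', 'aa', 'ab']
print(Counter(map(len, phrases)))  # phrase-size profile: Counter({2: 4, 1: 2})
```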

Proceedings Article
John DeNero, Jakob Uszkoreit
27 Jul 2011
TL;DR: This paper presents a method for inducing parse trees automatically from a parallel corpus, instead of using a supervised parser trained on a treebank, showing that the syntactic structure relevant to MT pre-ordering can be learned automatically from parallel text and thus establishing a new application for unsupervised grammar induction.
Abstract: When translating among languages that differ substantially in word order, machine translation (MT) systems benefit from syntactic pre-ordering, an approach that uses features from a syntactic parse to permute source words into a target-language-like order. This paper presents a method for inducing parse trees automatically from a parallel corpus, instead of using a supervised parser trained on a treebank. These induced parses are used to pre-order source sentences. We demonstrate that our induced parser is effective: it not only improves a state-of-the-art phrase-based system with integrated reordering, but also approaches the performance of a recent pre-ordering method based on a supervised parser. These results show that the syntactic structure which is relevant to MT pre-ordering can be learned automatically from parallel text, thus establishing a new application for unsupervised grammar induction.

66 citations
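
As a rough illustration of what pre-ordering does, the sketch below permutes the leaves of a toy source parse into a target-like order before translation. The tree encoding and the per-label swap table are invented for this example; in the paper, the parse comes from an induced (unsupervised) parser and the reordering decisions are learned from parallel text.

```python
# Schematic syntactic pre-ordering (illustrative, not the authors' system):
# walk the source parse and emit each node's children in a
# target-language-like order.

def preorder(node, swap):
    """node: (label, children) for internal nodes, or a word (str) for leaves.
    swap: per-label flag saying whether to reverse a node's children."""
    if isinstance(node, str):
        return [node]
    label, children = node
    if swap.get(label, False):
        children = list(reversed(children))
    words = []
    for child in children:
        words.extend(preorder(child, swap))
    return words

# SOV-style source "she the book read", pre-ordered toward SVO English.
tree = ("S", ["she", ("VP", [("NP", ["the", "book"]), "read"])])
print(" ".join(preorder(tree, {"VP": True})))  # -> "she read the book"
```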


Network Information

Related Topics (5)
Sentence: 41.2K papers, 929.6K citations (92% related)
Vocabulary: 44.6K papers, 941.5K citations (88% related)
Natural language: 31.1K papers, 806.8K citations (84% related)
Grammar: 33.8K papers, 767.6K citations (83% related)
Perception: 27.6K papers, 937.2K citations (79% related)

Performance Metrics

Number of papers in the topic in previous years:
2023: 467
2022: 1,079
2021: 360
2020: 470
2019: 525
2018: 535