Topic: Phrase

About: Phrase is a research topic. Over its lifetime, 12,580 publications have been published within this topic, receiving 317,823 citations. The topic is also known as: syntagma, phrases.


Papers
Book Chapter DOI
01 Jan 1983
TL;DR: While certain phonological phenomena can be described with rules that modify particular segments in particular segmental contexts, these segmental contexts are not enough to determine whether or not a rule applies.
Abstract: Since in various studies the term “prosody” has been used in a variety of ways, we will begin by clarifying our use of it in this contribution. We include under “prosodic phenomena” any phonological rules or processes that are not purely local, in that they cannot be described solely in terms of their phonotactic environments. Instead, additional information is required as to what larger units, or “prosodic domains”, they belong to. In other words, while certain phonological phenomena can be described with rules that modify particular segments in particular segmental contexts, these segmental contexts are not enough to determine whether or not a rule applies. Following recent proposals [e.g. Liberman and Prince, 1977; Selkirk, 1978b, 1980], we take the prosodic domains to include rhyme, syllable, foot, phonological word, phonological phrase, intonational phrase, and utterance. To take a simple illustration, consider a rule in Dutch that inserts a schwa between a liquid and a following consonant.
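To make the domain-sensitivity concrete, here is a minimal sketch of such a prosodically conditioned rule. The encoding (dot-separated syllables, the segment sets, and the example syllabifications) is a hypothetical toy, not the chapter's formalism: the same liquid-plus-consonant sequence triggers epenthesis only when both segments fall within one prosodic domain, modeled here as the syllable.

```python
# A minimal sketch of a prosodically conditioned rule, assuming a toy
# encoding in which "." marks syllable boundaries. Segment sets and the
# example syllabifications are hypothetical, for illustration only.

LIQUIDS = {"l", "r"}
VOWELS = {"a", "e", "i", "o", "u", "ɛ", "ɪ", "ɔ", "ə"}

def insert_schwa(word: str) -> str:
    """Insert a schwa between a liquid and a following consonant,
    but only when both segments share a syllable: the segmental
    context alone does not determine whether the rule applies."""
    out = []
    for syllable in word.split("."):
        segments = list(syllable)
        rebuilt = []
        for i, seg in enumerate(segments):
            rebuilt.append(seg)
            nxt = segments[i + 1] if i + 1 < len(segments) else None
            if seg in LIQUIDS and nxt is not None and nxt not in VOWELS:
                rebuilt.append("ə")  # epenthetic schwa
        out.append("".join(rebuilt))
    return ".".join(out)

print(insert_schwa("mɛlk"))     # mɛlək: liquid and consonant in one syllable
print(insert_schwa("mɛl.kən"))  # mɛl.kən: same segments, different syllables
```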

135 citations

Book Chapter DOI
08 Sep 2018
TL;DR: This paper proposes a neural module network architecture for visual dialog, introducing two novel modules, Refer and Exclude, that perform explicit, grounded coreference resolution at a finer word level, and demonstrates the effectiveness of the model on MNIST Dialog.
Abstract: Visual dialog entails answering a series of questions grounded in an image, using dialog history as context. In addition to the challenges found in visual question answering (VQA), which can be seen as one-round dialog, visual dialog encompasses several more. We focus on one such problem called visual coreference resolution that involves determining which words, typically noun phrases and pronouns, co-refer to the same entity/object instance in an image. This is crucial, especially for pronouns (e.g., ‘it’), as the dialog agent must first link it to a previous coreference (e.g., ‘boat’), and only then can rely on the visual grounding of the coreference ‘boat’ to reason about the pronoun ‘it’. Prior work (in visual dialog) models visual coreference resolution either (a) implicitly via a memory network over history, or (b) at a coarse level for the entire question; and not explicitly at a phrase level of granularity. In this work, we propose a neural module network architecture for visual dialog by introducing two novel modules—Refer and Exclude—that perform explicit, grounded, coreference resolution at a finer word level. We demonstrate the effectiveness of our model on MNIST Dialog, a visually simple yet coreference-wise complex dataset, by achieving near perfect accuracy, and on VisDial, a large and challenging visual dialog dataset on real images, where our model outperforms other approaches, and is more interpretable, grounded, and consistent qualitatively.
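As a rough illustration of what an explicit, grounded coreference step can look like, the sketch below implements a Refer-style operation as soft attention over a memory of previously grounded phrases. The names, shapes, and dot-product scoring are assumptions for illustration; the paper's actual modules are learned end to end inside a neural module network.

```python
# A minimal sketch (not the paper's implementation) of the idea behind a
# Refer-style module: resolve a pronoun by attending over a memory of
# previously grounded noun phrases, then reuse the retrieved grounding.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def refer(pronoun_emb, phrase_embs, phrase_groundings):
    """pronoun_emb: (d,) embedding of e.g. 'it'
    phrase_embs: (n, d) embeddings of earlier phrases, e.g. 'boat'
    phrase_groundings: (n, h, w) attention maps over the image
    Returns the pronoun's grounding as an attention-weighted mixture."""
    scores = phrase_embs @ pronoun_emb   # dot-product relevance to history
    weights = softmax(scores)            # soft coreference link
    return np.tensordot(weights, phrase_groundings, axes=1)

# Toy example with random vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
d, n, h, w = 8, 3, 4, 4
grounding = refer(rng.normal(size=d), rng.normal(size=(n, d)),
                  rng.normal(size=(n, h, w)))
print(grounding.shape)  # (4, 4)
```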

134 citations

Proceedings Article DOI
27 Oct 2019
TL;DR: In this article, the authors propose to detect visual relations in images in the form of triplets t = (subject, predicate, object), where training examples of the individual entities are available but their combinations are unseen at training time.
Abstract: We seek to detect visual relations in images of the form of triplets t = (subject, predicate, object), such as “person riding dog”, where training examples of the individual entities are available but their combinations are unseen at training. This is an important set-up due to the combinatorial nature of visual relations: collecting sufficient training data for all possible triplets would be very hard. The contributions of this work are three-fold. First, we learn a representation of visual relations that combines (i) individual embeddings for subject, object and predicate together with (ii) a visual phrase embedding that represents the relation triplet. Second, we learn how to transfer visual phrase embeddings from existing training triplets to unseen test triplets using analogies between relations that involve similar objects. Third, we demonstrate the benefits of our approach on three challenging datasets: on HICO-DET, our model achieves significant improvement over a strong baseline for both frequent and unseen triplets, and we observe similar improvement for the retrieval of unseen triplets with out-of-vocabulary predicates on the COCO-a dataset as well as the challenging unusual triplets in the UnRel dataset.
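The transfer step can be pictured as arithmetic in embedding space. The sketch below is a simplified stand-in, not the paper's trained model: concatenation plays the role of the compositional representation, random vectors stand in for learned embeddings, and the unseen triplet's visual phrase embedding is obtained by shifting a seen triplet's embedding by the difference between the two compositional representations.

```python
# A minimal sketch of analogy-based transfer for an unseen triplet.
# All embeddings here are hypothetical stand-ins for learned ones.
import numpy as np

def compose(subj, pred, obj):
    """Toy compositional representation of a triplet:
    just the concatenation of the three entity embeddings."""
    return np.concatenate([subj, pred, obj])

def transfer(vp_seen, seen, unseen):
    """Estimate a visual-phrase embedding for an unseen triplet by
    shifting a seen triplet's embedding by the difference of their
    compositional representations (an analogy in embedding space)."""
    return vp_seen + compose(*unseen) - compose(*seen)

rng = np.random.default_rng(1)
person, riding, dog, horse = (rng.normal(size=4) for _ in range(4))
vp_person_riding_horse = rng.normal(size=12)  # learned for a seen triplet
vp_person_riding_dog = transfer(vp_person_riding_horse,
                                (person, riding, horse),
                                (person, riding, dog))
print(vp_person_riding_dog.shape)  # (12,)
```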

134 citations

Journal Article DOI
TL;DR: A psycholinguistic model compatible with the grammatical description is presented and is shown to account for a wide range of facts about agrammatism.

134 citations

Proceedings Article DOI
25 Oct 2008
TL;DR: The MANLI system, a new NLI aligner designed to address the alignment problem, is presented; it uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data.
Abstract: The alignment problem---establishing links between corresponding phrases in two related sentences---is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++.
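Since the comparison above is reported in alignment F1, here is a minimal sketch of a phrase-based alignment representation and the corresponding F1 computation. The span encoding and the toy gold/predicted alignments are assumptions for illustration; MANLI's actual scoring model and search procedure are more elaborate.

```python
# A minimal sketch of a phrase-based alignment representation and the F1
# metric used to compare aligners (toy example; not the MANLI model).

def alignment_f1(predicted, gold):
    """Each alignment is a set of ((i, j), (k, l)) pairs linking a phrase
    spanning tokens i..j in one sentence to tokens k..l in the other."""
    tp = len(predicted & gold)                       # correctly aligned pairs
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold and predicted alignments over two short sentences.
gold = {((0, 0), (0, 0)), ((1, 2), (1, 1)), ((3, 3), (2, 3))}
pred = {((0, 0), (0, 0)), ((1, 2), (1, 1)), ((3, 3), (4, 4))}
print(round(alignment_f1(pred, gold), 3))  # 0.667
```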

133 citations


Network Information
Related Topics (5)
Sentence: 41.2K papers, 929.6K citations, 92% related
Vocabulary: 44.6K papers, 941.5K citations, 88% related
Natural language: 31.1K papers, 806.8K citations, 84% related
Grammar: 33.8K papers, 767.6K citations, 83% related
Perception: 27.6K papers, 937.2K citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
Year: Papers
2023: 467
2022: 1,079
2021: 360
2020: 470
2019: 525
2018: 535