Head (linguistics)

About: Head (linguistics) is a research topic. Over its lifetime, 2,540 publications have appeared on this topic, receiving 29,023 citations. The topic is also known as: nucleus.


Papers
Journal ArticleDOI
TL;DR: Determiner priming effects in picture naming support a single-lemma representation of lexically stored compound nouns in the German production lexicon; a novel-compound naming experiment demonstrated the general sensitivity of the task.
Abstract: We examined how noun-noun compounds and their syntactic properties are lexically stored and processed in speech production. Using gender-marked determiner primes (der(masc), die(fem), das(neut) [the]) in a picture naming task, we tested for specific effects from determiners congruent with either the modifier or the head of the compound target (e.g., Tee(masc)kanne(fem) [teapot]) to examine whether the constituents are processed independently at the syntactic level. Experiment 1 assessed effects of auditory gender-marked determiner primes in bare noun picture naming, and Experiment 2 assessed effects of visual gender-marked determiner primes in determiner-noun picture naming. Three prime conditions were implemented: (a) head-congruent determiner (e.g., die(fem)), (b) modifier-congruent determiner (e.g., der(masc)), and (c) incongruent determiner (e.g., das(neut)). We observed a facilitation effect of head congruency but no effect of modifier congruency. In Experiment 3, participants produced novel noun-noun compounds in response to two pictures, demanding independent processing of head and modifier at the syntactic level. Here, both head and modifier congruency effects were obtained, demonstrating the general sensitivity of our task. Our data support the notion of a single-lemma representation of lexically stored compound nouns in the German production lexicon.
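
The logic of the three priming conditions can be made concrete with a short sketch. This is not the authors' materials or code: the condition labels, account names, and the target compound Tee(masc)kanne(fem) encoding are illustrative conveniences. It shows which determiner genders each representational account predicts to facilitate naming.

# Toy sketch of the priming logic in Experiments 1-3 (hypothetical labels,
# not the authors' code). Target: Tee(masc) + Kanne(fem) "teapot", whose
# compound gender is inherited from the head (feminine).

COMPOUND = {"modifier_gender": "masc", "head_gender": "fem"}

PRIME_CONDITIONS = {
    "head_congruent": "fem",       # die: matches the head / whole compound
    "modifier_congruent": "masc",  # der: matches only the modifier
    "incongruent": "neut",         # das: matches neither constituent
}

def predicted_facilitation(account: str, prime_gender: str) -> bool:
    """Does a given representational account predict priming here?"""
    if account == "single_lemma":
        # One lemma for the whole compound: only its (head-inherited)
        # gender is syntactically active, so only head congruency primes.
        return prime_gender == COMPOUND["head_gender"]
    if account == "decomposed":
        # Both constituent lemmas are syntactically active, so a match
        # with either the head or the modifier gender primes.
        return prime_gender in COMPOUND.values()
    raise ValueError(account)

for account in ("single_lemma", "decomposed"):
    for condition, gender in PRIME_CONDITIONS.items():
        print(f"{account:13s} {condition:19s} -> primes: "
              f"{predicted_facilitation(account, gender)}")

The observed pattern (head congruency facilitates, modifier congruency does not, except when novel compounds must be composed online) matches the single_lemma row for stored compounds.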

8 citations

Book ChapterDOI
23 Jun 2010
TL;DR: This paper critically examines Van Riemsdijk's (2006) multi-dimensional "Grafting" mechanism, a proposal devised to deal in a unified way with the pre-theoretical notion "phrasal head" and whose effect, in certain cases, is to circumvent constraints imposed by the left periphery on long-distance movement.
Abstract: This paper critically examines a theoretical proposal that was devised in an attempt to deal in a unified way with the pre-theoretical notion "phrasal head", and whose effect is, in certain cases, to circumvent constraints imposed by the left periphery on long-distance movement: the multi-dimensional "Grafting" mechanism of Van Riemsdijk (2006) and references therein. The constructions for which Grafting has been invoked are examined one by one, and the proposed Grafting analyses are compared with specific alternatives based on bidimensional representations. It is argued that the Grafting analyses have no advantage over their competitors in any of the cases, that they are in fact empirically and conceptually inferior in four of the cases, and that they are downright inapplicable in the remaining one. The paper pays special attention to the analysis of Transparent Free Relatives, refining proposals in Grosu (2003) and bringing their empirical and conceptual merits into bolder relief, and also sheds novel light on the nature of the "far from simple" construction and of two Lakovian amalgams.

8 citations

Journal ArticleDOI
H. V. Neal

8 citations

Proceedings ArticleDOI
01 Aug 2019
TL;DR: This work introduces temporally and contextually aware models for the novel task of predicting unseen but plausible concepts, as conveyed by noun-noun compounds in a time-stamped corpus, and finds that around 85% of the novel compounds generated are attested in previously unseen data.
Abstract: We introduce temporally and contextually aware models for the novel task of predicting unseen but plausible concepts, as conveyed by noun-noun compounds in a time-stamped corpus. We train compositional models on observed compounds, more specifically on the composed distributed representations of their constituents across a time-stamped corpus, while giving them corrupted instances (where the head or the modifier is replaced by a random constituent) as negative evidence. The model captures generalisations over this data and learns which combinations give rise to plausible compounds and which do not. After training, we query the model for the plausibility of automatically generated novel combinations and verify whether the classifications are accurate. For our best model, we find that in around 85% of the cases, the novel compounds generated are attested in previously unseen data. An additional estimated 5% are plausible despite not being attested in the recent corpus, based on judgments from independent human raters.
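
The training scheme described above (positives from attested compounds, negatives from corrupted ones) can be illustrated with a minimal sketch. The toy vocabulary, random stand-in embeddings, and logistic-regression scorer below are all hypothetical simplifications of the paper's corpus-derived distributed representations and compositional models.

# Minimal sketch of negative sampling for compound plausibility:
# observed noun-noun compounds are positives, and "corrupted" compounds
# (head or modifier swapped for a random noun) are negatives. The
# vocabulary, vectors, and scorer are illustrative stand-ins only.

import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and observed (modifier, head) compounds.
vocab = ["tea", "pot", "coffee", "cup", "rain", "coat", "book", "shelf"]
observed = [("tea", "pot"), ("coffee", "cup"), ("rain", "coat"), ("book", "shelf")]

dim = 16
emb = {w: rng.normal(size=dim) for w in vocab}  # stand-in word vectors

def features(mod, head):
    """Compose a compound representation from its constituents."""
    return np.concatenate([emb[mod], emb[head]])

def corrupt(mod, head):
    """Replace the head or the modifier with a random vocabulary item."""
    if rng.random() < 0.5:
        return (rng.choice(vocab), head)
    return (mod, rng.choice(vocab))

# Training set: each observed compound plus one corrupted negative.
X, y = [], []
for mod, head in observed:
    X.append(features(mod, head)); y.append(1.0)
    X.append(features(*corrupt(mod, head))); y.append(0.0)
X, y = np.array(X), np.array(y)

# Logistic regression trained by plain gradient descent.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def plausibility(mod, head):
    """Score an unseen combination; higher means more compound-like."""
    return float(1.0 / (1.0 + np.exp(-features(mod, head) @ w)))

print(plausibility("tea", "cup"))    # query a novel, unattested combination
print(plausibility("shelf", "rain"))

Under these assumptions, a novel combination is judged plausible when the learned scorer assigns it a probability above some threshold; the paper's actual models are corpus-trained and temporally aware, which this sketch omits.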

8 citations


Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    2
2021    68
2020    90
2019    86
2018    90
2017    90