
Showing papers on "Head (linguistics)" published in 2009


Journal ArticleDOI
TL;DR: Several sources of evidence indicated that participants constructed perceptual simulations to generate properties for the noun phrases during conceptual combination, and a process model of the property generation task grounded in simulation mechanisms is presented.

261 citations


Journal ArticleDOI
TL;DR: It is proposed that lexical roots are not specified as mass or count but become so by combining with a functional head, and that some roots that have individuals in their denotations can be used as mass nouns to denote individuals.
Abstract: Comparative judgments for mass and count nouns yield two generalizations. First, all words that can be used in both mass and count syntax (e.g., rock, string, apple, water) always denote individuals when used in count syntax but never when used in mass syntax (e.g. too many rocks vs. too much rock). Second, some mass nouns denote individuals (e.g., furniture) while others do not (e.g., water). In this article, we show that no current theory of mass-count semantics can capture these two facts and argue for an alternative theory that can. We propose that lexical roots are not specified as mass or count. Rather, a root becomes a mass noun or count noun by combining with a functional head. Some roots have denotations with individuals while others do not. The count head is interpreted as a function that maps denotations without individuals to those with individuals. The mass head is interpreted as an identity function making the interpretation of a mass noun equivalent to the interpretation of the root. As a result, all count nouns have individuals in their denotation, whereas mass counterparts of count nouns do not. Also, some roots that have individuals in their denotations can be used as mass nouns to denote individuals.
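
The core of the proposal can be stated compactly. The following is a schematic rendering under assumed notation (the symbols n_mass, n_count and IND are illustrative, not the authors' own formalism):

```latex
% Schematic denotations (illustrative notation, not the authors' formalism).
% A root denotes a set of portions of matter and/or individuals; the functional
% heads act on that denotation.
\[
[\![\, n_{\textsc{mass}} \,]\!] \;=\; \lambda P.\, P
\qquad \text{(identity: the mass noun inherits the root's denotation)}
\]
\[
[\![\, n_{\textsc{count}} \,]\!] \;=\; \lambda P.\, \textsc{ind}(P)
\qquad \text{(maps a denotation without individuals to one with individuals)}
\]
\[
[\![\, \sqrt{\textsc{root}} + n_{\textsc{count}} \,]\!] = \textsc{ind}([\![\sqrt{\textsc{root}}]\!]),
\qquad
[\![\, \sqrt{\textsc{root}} + n_{\textsc{mass}} \,]\!] = [\![\sqrt{\textsc{root}}]\!]
\]
```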

132 citations


Journal ArticleDOI
01 Nov 2009
TL;DR: It is shown that only in the construct state are the syntactic conditions fulfilled which allow the classifier + numeral to be interpreted as a (complex) modifier of the syntactically embedded noun.
Abstract: Classifier constructions in English such as three glasses of water are ambiguous between an individuating reading, in which the DP denotes plural objects consisting of three individual glasses of water, and a measure reading, in which the DP denotes quantities of water which equal the quantity contained in three glasses. A plausible semantic account of the contrast has been given in Landman 2004. In this account, on the individuating reading, the nominal glasses is the head of the noun phrase and has its expected semantic interpretation, while in the measure reading, three glasses is a modifier expression modifying the nominal head of the phrase, water. However, there is little direct syntactic evidence for these constructions in English. Modern Hebrew, however, provides support for Landman's analysis of the dual function of classifier heads. There are two ways to express three glasses of water in Modern Hebrew. The first is via the free genitive construction, where a nominal head in absolute form takes a prepositional phrase complement, as in šaloš kosot šel mayim; the second is via the construct state, as in šaloš kosot mayim. The first has only the individuating reading, while the second is ambiguous between the individuating and measure readings. We show that only in the construct state are the syntactic conditions fulfilled which allow the classifier + numeral to be interpreted as a (complex) modifier of the syntactically embedded noun.
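
The two readings described above can be sketched informally as follows (a schematic paraphrase of the Landman-style analysis, not the paper's exact formulas; * marks pluralization and MEASURE_glass is an invented symbol):

```latex
% Individuating reading: "glasses" is the nominal head; the DP denotes plural
% objects made up of three individual glasses of water.
\[
[\![\text{three glasses of water}]\!]_{\text{indiv}}
  \;=\; \lambda x.\; {}^{*}\textit{glass-of-water}(x) \,\wedge\, |x| = 3
\]
% Measure reading: "three glasses" modifies the head "water"; the DP denotes
% quantities of water whose measure equals three glassfuls.
\[
[\![\text{three glasses of water}]\!]_{\text{meas}}
  \;=\; \lambda x.\; \textit{water}(x) \,\wedge\, \textsc{measure}_{\textit{glass}}(x) = 3
\]
```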

59 citations


Journal ArticleDOI
01 Feb 2009-Lingua
TL;DR: The authors compare Baker's head-movement analysis of noun incorporation to other non-lexicalist theories, including Massam's pseudo-incorporation analysis, Van Geenhoven's base generation analysis, and a Koopman/Szabolcsi-style analysis in terms of phrasal movement.

50 citations


Journal Article
01 Dec 2009-Hispania
TL;DR: In this article, the authors investigated the processing of Spanish gender agreement during an online comprehension task and found that animacy is a strong linguistic cue for both native and non-native speakers when establishing correct gender agreement.
Abstract: The present study investigates the processing of Spanish gender agreement during an online comprehension task. The linguistic variables examined are the noun class (semantic or non-semantic) and gender (masculine or feminine) of the head and attractor nouns, head noun morphology (overt or non-overt), and noun class and gender congruencies (matched or mismatched). The study is guided by two research questions: whether these variables affect gender agreement reaction times (RTs), and whether L2 learners of Spanish and native Spanish speakers differ in performance. Analysis of the data indicated inconsistent responses to some of the linguistic variables but showed that the class of the subject noun was significant for RTs. Results are explained in light of the competition model, and suggest that animacy is a strong linguistic cue for both native and non-native speakers when establishing correct gender agreement.

46 citations


Proceedings ArticleDOI
06 Aug 2009
TL;DR: Experiments show that the method was able to find semantically appropriate revisions, demonstrating its basic feasibility, and that parsing errors mainly degraded sentential completeness, such as grammaticality and redundancy.
Abstract: We propose a method of revising lead sentences in a news broadcast. Unlike many other methods proposed so far, this method does not use the coreference relation of noun phrases (NPs) but rather insertion and substitution of phrases that modify the same head chunk in the lead and other sentences. The method borrows an idea from sentence fusion methods and is more general than those using NP coreference, since it subsumes them. Experiments show that the method was able to find semantically appropriate revisions, demonstrating its basic feasibility. We also show that parsing errors mainly degraded sentential completeness, such as grammaticality and redundancy.
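
As a rough illustration of the insertion/substitution idea (not the authors' implementation; the chunk representation and helper below are hypothetical), a single revision step could look like this in Python:

```python
# Toy sketch: enrich a lead sentence with modifiers from another sentence when
# both modify the same head chunk. Data structures and names are hypothetical,
# not the paper's actual system (which works on Japanese broadcast news).
from dataclasses import dataclass, field

@dataclass
class Chunk:
    head: str                                   # surface form of the head chunk
    modifiers: list = field(default_factory=list)

def merge_modifiers(lead: Chunk, other: Chunk) -> list:
    """Return the lead's modifiers, extended by insertion of modifiers from
    `other` that attach to the same head chunk; substitution of near-duplicate
    modifiers is omitted for brevity."""
    if lead.head != other.head:
        return lead.modifiers                   # nothing shared, nothing to revise
    merged = list(lead.modifiers)
    for mod in other.modifiers:
        if mod not in merged:
            merged.append(mod)                  # insertion of a new modifier
    return merged

lead  = Chunk(head="fire", modifiers=["the", "at the plant"])
other = Chunk(head="fire", modifiers=["the", "that started Tuesday"])
print(merge_modifiers(lead, other))
# ['the', 'at the plant', 'that started Tuesday']
```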

44 citations


01 Jan 2009
TL;DR: This paper will first provide typologically based data on the dimension and limits of exocentricity, and then it will argue that the notion of head can be split into three different subparts: categorial head, semantic head and morphological head.
Abstract: The identification of a compound as endocentric or exocentric depends on the notion of head: if a compound has a head (or two), it is called endocentric; if it has no head, it is called exocentric. Exocentricity, however, has usually been treated as a unitary notion, precisely because the notion of head has generally been interpreted as a unitary notion. In this paper we first provide typologically based data on the dimension and limits of exocentricity, and then argue that the notion of head can be split into three different subparts: categorial head, semantic head and morphological head. Correspondingly, the notion of exocentricity can be split into categorial exocentricity, semantic exocentricity and morphological exocentricity. Our approach, based on features of the constituents rather than on constituents as a whole, will hopefully provide a new analysis of exocentricity in compounding.
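
To make the three-way split concrete, a compound can be thought of as carrying three separate head slots, any of which may be empty; the following toy representation uses invented attribute names and an illustrative (not analytical) example:

```python
# Toy representation of the proposed split of "head" into three subnotions.
# Attribute names and the example values are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Compound:
    form: str
    categorial_head: Optional[str]     # constituent supplying the lexical category
    semantic_head: Optional[str]       # constituent supplying the core meaning
    morphological_head: Optional[str]  # constituent supplying inflectional features

    def exocentric_in(self) -> List[str]:
        """Dimensions along which the compound has no head."""
        slots = {"categorial": self.categorial_head,
                 "semantic": self.semantic_head,
                 "morphological": self.morphological_head}
        return [dim for dim, head in slots.items() if head is None]

# A compound like 'pickpocket' does not denote a kind of pocket, so under this
# toy encoding its semantic head slot is empty; the other slots are filled here
# purely for illustration, not as a committed analysis.
print(Compound("pickpocket", "pocket", None, "pocket").exocentric_in())
# ['semantic']
```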

37 citations


Journal ArticleDOI
TL;DR: The results suggest that lexical-semantic integration of compound constituents is an incremental process and, thus, challenge a recent proposal on the time-course of semantic processing in auditory compound comprehension.

31 citations



Journal ArticleDOI
TL;DR: It is found that antidromic corticospinal tract activation by the conditioning stimulus can produce suppression of the contralateral motor cortex, and that several confounding factors should be excluded before concluding that the suppressive effects of "electrical cerebellar stimulation" derive from activation of cerebellar structures or disfacilitation of the dentato-thalamo-cortical pathway.

29 citations


Journal ArticleDOI
TL;DR: The authors argue that both prosodic principles and narrow-syntactic principles play a role in the linearization of syntactic structures, and take Kayne's Linear Correspondence Axiom as a starting point: (asymmetric) c-command maps onto precedence relations.
Abstract: The overarching question addressed here is how syntactic structures based on constituency (dominance, c-command) are to be mapped onto linear phonetic strings. I argue that both prosodic principles and narrow-syntactic principles play a role in the linearization of syntactic structures. I take Kayne's (1994) Linear Correspondence Axiom as a starting point: (asymmetric) c-command maps onto precedence relations. Two wide-ranging predictions of Kayne's theory are that specifiers precede their heads and that a head can only have one specifier or adjunct. Although abundant evidence supports these predictions, there is nonetheless a well-known class of apparent counterexamples: Romance languages allow both rightward and multiple dislocations. I take the LCA to be a soft constraint, overruled by a constraint of the Wrap family that seeks to combine a verb and its extended projection in one intonational phrase. Apparent rightward movement is the outcome of rightward linearization forced by Wrap. The possibility o...

Proceedings Article
10 May 2009
TL;DR: This paper describes the machine-learning approach that creates a head nod model from annotated corpora of face-to-face human interaction, relying on the linguistic features of the surface text, and shows that the model is able to predict head nods with high precision and recall.
Abstract: During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions. Our goal is to develop a model of the speaker's head movements that can be used to generate head movements for virtual agents, based on a gesture annotation corpus. In this paper, we focus on the first step of the head movement generation process: predicting when the speaker should use head nods. We describe our machine-learning approach, which creates a head nod model from annotated corpora of face-to-face human interaction, relying on the linguistic features of the surface text. We also describe the feature selection process, the training process, and the evaluation of the learned model with test data in detail. The results show that the model is able to predict head nods with high precision and recall.
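
A minimal sketch of this kind of pipeline, using generic surface-text features and an off-the-shelf classifier (the feature templates and toy data are assumptions, not the paper's actual feature set or corpus):

```python
# Minimal sketch: predict whether an utterance should be accompanied by a head
# nod, using only surface-text features. Features and data are placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(utterance: str) -> dict:
    tokens = utterance.lower().split()
    return {
        "starts_with_affirmation": tokens[0] in {"yes", "yeah", "right", "okay"},
        "contains_negation": any(t in {"no", "not", "never"} for t in tokens),
        "n_tokens": len(tokens),
        "first_word=" + tokens[0]: 1,
    }

# toy annotated corpus: (utterance, 1 if a head nod was annotated on it)
corpus = [("yes I completely agree", 1),
          ("right that makes sense", 1),
          ("no I do not think so", 0),
          ("what do you mean by that", 0)]

X = [features(u) for u, _ in corpus]
y = [label for _, label in corpus]
model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(X, y)
print(model.predict([features("yes that sounds right")]))
```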

Journal ArticleDOI
TL;DR: It is argued that the subject of Construction PROP is clausal and that the topmost XP of the subject phrase of both constructions contains a null neuter element.
Abstract: Having shown how Construction NOM and Construction PROP differ, I demonstrate that the subject of Construction PROP is clausal. I argue that the topmost XP of the subject phrase of both constructions contains a null neuter element. This accounts for the neuter predicative agreement; hence the idea of default agreement or semantic agreement can be dismissed. I also argue that the subject in (ii) contains a νP, the head of which is a null light verb. Other instances of null light verbs in Swedish are identified too. Finally, I propose an analysis that accounts for the close relation between Construction PROP and the corresponding construction with a med-phrase ('with'-phrase).

09 Sep 2009
TL;DR: It was a good speech, a fine speech, an Obama speech, and in some ways a surprising speech as mentioned in this paper, and here are some thoughts off the top of my head:
Abstract: It was a good speech, a fine speech, an Obama speech, and in some ways a surprising speech. The president just finished speaking, so here are some thoughts off the top of my head:



Journal ArticleDOI
Kensuke Takita
TL;DR: It is not the case that Japanese head-final structures are derived from head-initial ones, which implies that Universal Grammar is equipped with a directionality parameter, admitting not only head-initial structures but also head-final structures.
Abstract: One of the important topics in current syntactic theory is whether there is a directionality parameter in Universal Grammar. Based on the observation that the presence of Chinese sentence-final aspectual particles blocks movement out of their complement, Lin (Complement-to-Specifier movement in Mandarin Chinese. MS., National Tsing Hua University, 2006) argues that each of these particles is the head of an underlyingly head-initial phrase and that the surface head-final order is derived by movement of its complement. Thus, movement out of it violates the Condition on Extraction Domain [CED: Huang (Logical relations in Chinese and the theory of grammar. PhD dissertation, MIT, 1982)]. Taking this analysis as a diagnostic that distinguishes a derived head-final structure from a genuine one, this paper illustrates that it is not the case that Japanese head-final structures are derived from head-initial ones. Our result implies that Universal Grammar is equipped with a directionality parameter, admitting not only head-initial structures but also head-final structures.

Proceedings Article
01 Dec 2009
TL;DR: It is shown that position-independent syntactic dependency relations of the head of a source phrase can be modeled as useful source context to improve target phrase selection and thereby improve overall performance of PB-SMT.
Abstract: The Phrase-Based Statistical Machine Translation (PB-SMT) model has recently begun to include source context modeling, under the assumption that the proper lexical choice of an ambiguous word can be determined from the context in which it appears. Various types of lexical and syntactic features such as words, parts-of-speech, and supertags have been explored as effective source context in SMT. In this paper, we show that position-independent syntactic dependency relations of the head of a source phrase can be modeled as useful source context to improve target phrase selection and thereby improve overall performance of PB-SMT. On a Dutch—English translation task, by combining dependency relations and syntactic contextual features (part-of-speech), we achieved a 1.0 BLEU (Papineni et al., 2002) point improvement (3.1% relative) over the baseline.
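
In outline, the features amount to the dependency relations borne by the head word of a source phrase, collected without regard to position. A rough sketch under assumed data structures (a governor index plus a relation label per token; not the paper's actual feature templates or model integration):

```python
# Sketch: position-independent dependency-relation features for the head of a
# source phrase. The parse encoding (governor index per token, -1 = root, plus
# a relation label per token) is an assumption, not the paper's format.

def head_of_span(span, governors):
    """The head of a span is the token whose governor lies outside the span."""
    start, end = span
    for i in range(start, end):
        if not (start <= governors[i] < end):
            return i
    return start

def dependency_context(span, governors, relations):
    h = head_of_span(span, governors)
    feats = {"head_rel=" + relations[h]}                # relation the head bears
    for i, g in enumerate(governors):
        if g == h and not (span[0] <= i < span[1]):
            feats.add("dependent_rel=" + relations[i])  # relations of outside dependents
    return feats

# toy Dutch sentence: "de man leest het boek", 0-based governor indices
governors = [1, 2, -1, 4, 2]                            # -1 marks the root
relations = ["det", "su", "root", "det", "obj1"]
print(dependency_context((3, 5), governors, relations))  # phrase "het boek"
# {'head_rel=obj1'}
```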

01 Aug 2009
TL;DR: Experiments show the proposed framework is capable of classifying three different head movement gestures and of identifying 15 other head movements as being outside of the training set, with an area-under-the-curve measurement of 0.936 for the best-performing feature vector.
Abstract: A novel system for the recognition of head movement gestures used to convey non-manual information in sign language is presented. We propose a framework for recognizing a set of head movement gestures and identifying head movements outside of this set. Experiments show our proposed system is capable of classifying three different head movement gestures and identifying 15 other head movements as movements which are outside of the training set. In this paper we perform experiments to investigate the best feature vectors for discriminating between positive and negative head movement gestures, and a ROC analysis of the system's classification performance showed an area-under-the-curve measurement of 0.936 for the best-performing feature vector.
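
In outline, such a system combines a multi-class gesture classifier with a confidence (or distance) threshold for rejecting out-of-set movements, and the accept/reject decision can be summarised with ROC/AUC. A toy nearest-centroid stand-in (the feature vectors, threshold and data below are synthetic placeholders, not the paper's features or classifier):

```python
# Toy sketch: classify three trained head gestures, reject anything far from
# all of them, and score the accept/reject decision with ROC-AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
centres = {"nod": np.array([0.0, 0.0]),
           "shake": np.array([4.0, 0.0]),
           "tilt": np.array([0.0, 4.0])}

def classify(x, threshold=1.5):
    """Nearest-centroid gesture label, or 'out-of-set' if too far from all."""
    label, dist = min(((g, np.linalg.norm(x - c)) for g, c in centres.items()),
                      key=lambda pair: pair[1])
    return label if dist <= threshold else "out-of-set"

# synthetic test data: samples around the trained centres plus far-away movements
X_in  = np.vstack([rng.normal(c, 0.4, size=(10, 2)) for c in centres.values()])
X_out = rng.normal([8.0, 8.0], 0.4, size=(15, 2))
X_test = np.vstack([X_in, X_out])
in_set = np.array([1] * len(X_in) + [0] * len(X_out))

# higher score = more confidently "in-set"; ROC-AUC summarises accept/reject quality
scores = [-min(np.linalg.norm(x - c) for c in centres.values()) for x in X_test]
print(classify(X_test[0]), classify(X_test[-1]))
print("accept/reject ROC-AUC:", round(roc_auc_score(in_set, scores), 3))
```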


Dissertation
01 Oct 2009
TL;DR: This paper provided a minimalist account of the Arabic DP and argued that head-to-spec movement takes place in all Arabic DPs and that this movement is a cyclic, minimalist alternative to standard Head Movement.
Abstract: This thesis provides a minimalist account of the Arabic DP. The data used comes from Modern Standard Arabic and Makkan Arabic, a spoken variety used in Saudi Arabia. Using two varieties provides a more complete picture of Arabic DPs and sheds light on the relationship between standard and spoken Arabic. I argue that head-to-spec movement takes place in all Arabic DPs and that this movement is a cyclic, minimalist alternative to standard Head Movement. I claim that the basic differences between Simple DPs and Free States on the one hand and Construct States on the other are derivable from the D projected in each structure; definite or indefinite D are projected in the former and Construct State D in the latter. I analyse Construct States headed by a number of categories: nouns, quantifiers, nominalised adjectives, numerals and verbal nouns. I claim that the similarities between these constructs are due to the use of Construct State D, and the special behaviour of each type is a reflection of the category of the head projected below D. I propose that the Arabic lexicon is rich and I present evidence for some complex word formation processes. Moreover, I propose that complex adjectives, often referred to in the related literature as Adjectival Constructs, which show a mixture of adjectival and construct properties, are adjectival compounds formed in the lexicon. I also argue that Verbal Noun Construct States in Modern Standard Arabic may be formed either in the lexicon or in the syntax, and that each option is associated with different structures and modificational patterns. Moreover, I claim that the restrictions on Verbal Noun Construct States in Makkan Arabic are a result of this variety having only lexically formed Verbal Nouns.

Proceedings ArticleDOI
13 Oct 2009
TL;DR: A Vietnamese noun phrase chunking approach based on Conditional Random Field (CRF) models is described, along with a method to build a Vietnamese corpus from a set of hand-annotated sentences.
Abstract: Noun phrase chunking is an important and useful task in many natural language processing applications. It has been studied extensively for English; for Vietnamese, however, it is still an open problem. This paper presents a Vietnamese noun phrase chunking approach based on Conditional Random Field (CRF) models. We also describe a method to build a Vietnamese corpus from a set of hand-annotated sentences. For evaluation, we perform several experiments using different feature settings. Results on our corpus show high performance, with average recall and precision of 82.72% and 82.62%, respectively.
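
A compact sketch of the standard CRF chunking setup with B-NP/I-NP/O labels, using the sklearn-crfsuite wrapper (the toy sentence, labels and feature templates below are placeholders, not the paper's corpus or feature settings):

```python
# Sketch of CRF-based noun phrase chunking with B-NP/I-NP/O labels.
# Requires: pip install sklearn-crfsuite. Data and features are placeholders.
import sklearn_crfsuite

def token_features(tokens, i):
    word = tokens[i]
    return {
        "word": word.lower(),
        "is_title": word.istitle(),
        "prev_word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

def sentence_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# toy hand-annotated data ("Toi doc mot cuon sach" ~ "I read a book")
train_sents  = [["Toi", "doc", "mot", "cuon", "sach"]]
train_labels = [["B-NP", "O", "B-NP", "I-NP", "I-NP"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit([sentence_features(s) for s in train_sents], train_labels)
print(crf.predict([sentence_features(["Toi", "doc", "sach"])]))
```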


26 Aug 2009
TL;DR: It is observed that adding phrase pairs from any other method improves translation performance over the baseline n-gram-based system, that percolated dependencies are a good substitute for parsed dependencies, and that supplementing with the novel head percolation-induced chunks shows a general trend toward improving all system types across two data sets, up to a 5.26% relative increase in BLEU.
Abstract: Statistical Machine Translation (SMT) systems rely heavily on the quality of the phrase pairs induced from large amounts of training data. Apart from the widely used method of heuristic learning of n-gram phrase translations from word alignments, there are numerous methods for extracting these phrase pairs. One such class of approaches uses translation information encoded in parallel treebanks to extract phrase pairs. Work to date has demonstrated the usefulness of translation models induced from both constituency structure trees and dependency structure trees. Both syntactic annotations rely on the existence of natural language parsers for both the source and target languages. We depart from the norm by directly obtaining dependency parses from constituency structures using head percolation tables. The paper investigates the use of aligned chunks induced from percolated dependencies in French–English SMT and contrasts it with the aforementioned extracted phrases. We observe that adding phrase pairs from any other method improves translation performance over the baseline n-gram-based system, that percolated dependencies are a good substitute for parsed dependencies, and that supplementing with our novel head percolation-induced chunks shows a general trend toward improving all system types across two data sets, up to a 5.26% relative increase in BLEU.
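
Head percolation itself is simple to state: for each constituent label, a table lists which child categories may supply the head, in priority order, and the head word percolates up from that child. A minimal sketch with a tiny invented table fragment (real tables in the Magerman/Collins tradition are much larger):

```python
# Minimal head-percolation sketch: derive (head, dependent) pairs from a
# constituency tree. The table below is a tiny invented fragment, not a full
# Magerman/Collins-style head percolation table.
PERCOLATION_TABLE = {
    "S":  ["VP", "S"],     # the head child of S is preferably a VP
    "VP": ["V", "VP"],
    "NP": ["N", "NP"],
}

def head_word(tree):
    """tree = (label, children) for nonterminals, a plain string for words."""
    if isinstance(tree, str):
        return tree
    label, children = tree
    for wanted in PERCOLATION_TABLE.get(label, []):
        for child in children:
            if not isinstance(child, str) and child[0] == wanted:
                return head_word(child)
    return head_word(children[-1])              # fallback: rightmost child

def dependencies(tree, deps=None):
    """Each non-head child's head word depends on the constituent's head word."""
    if deps is None:
        deps = []
    if isinstance(tree, str):
        return deps
    head = head_word(tree)
    for child in tree[1]:
        if head_word(child) != head:
            deps.append((head, head_word(child)))
        dependencies(child, deps)
    return deps

tree = ("S", [("NP", [("N", ["John"])]),
              ("VP", [("V", ["reads"]), ("NP", [("N", ["books"])])])])
print(dependencies(tree))   # [('reads', 'John'), ('reads', 'books')]
```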

19 Nov 2009
TL;DR: In this article, a syntactic account of two types of adpositional preverb constructions in Hungarian is presented, which explains their special behavior, including their mixed argument/adjunct properties.
Abstract: This paper develops a syntactic account of two types of adpositional preverb constructions in Hungarian that explains their special behavior, including their mixed argument/adjunct properties. It is maintained that in neutral clauses both types of preverbs come to occupy a position left-adjacent to the verb by XP-movement of an adpositional phrase. Apart from yielding a regular overt movement chain, Chain Reduction (Nunes 1999, 2004), applying in the mapping to PF, may also reduce the copy of the PP left-adjacent to the verb to its adpositional head. Morphosyntactic reanalysis of the reduced copy makes it possible to realize the lower copy of the PP-chain overtly, either as a partial copy or as a full double, depending in part on the morphological status of its head. Drawing on the assumption that Chain Reduction applies at the phase level, the paper accounts for the complex pattern of the (non-)availability of these spell-out forms in various positions of the clause. This paper explores the syntax of two classes of preverbs in Hungarian, illustrated in (1) and (2) below. As is characteristic of verbal particles in Germanic and lexical verbal prefixes in Slavic, both classes systematically enable the verb to combine with a modifier phrase that appears to be an argument, whose morpho-syntactic form is restricted by the choice of the particle. Both types of preverbs apparently alter the argument structure of the verb: the modifier phrases display properties that render them similar to arguments. One way in which they consistently behave as adjuncts, however, is that their co-occurrence with the prefixed verb is invariably optional; see (1b), (2b). In what follows I will be referring to these elements agnostically as ‘quasi-arguments’ whenever their argument structural status is irrelevant to the discussion, or yet to be determined.

01 Jan 2009
TL;DR: French and Romanian verbless relative adjuncts are incidental adjuncts which have been described as elliptical relative clauses; it is shown that this analysis is not empirically adequate, and an alternative non-elliptical analysis is proposed.
Abstract: French and Romanian verbless relative adjuncts are incidental adjuncts which have been described as elliptical relative clauses. We show that this analysis is not empirically adequate and propose an alternative non-elliptical analysis. We analyze verbless relative adjuncts as sentential fragments whose head can be a cluster of phrases. They are marked by a functor phrase which displays selection properties with respect to the head phrase and makes an essential contribution to the semantics of the adjunct. The analysis relies on the interaction of grammatical constraints introduced by various linguistic objects, as well as on a constructional analysis of verbless relative adjuncts distinguishing several subtypes.


01 Jan 2009
TL;DR: In this paper, the role of number in Niuean, an Austronesian language in the Tongic subgroup of the Polynesian family, is discussed. But it is argued that the concepts of individuation, classification and number are separable, even though they overlap significantly.
Abstract: This paper focuses on the role of number in Niuean, an Austronesian language in the Tongic subgroup of the Polynesian family. It is argued that the concepts of individuation, classification, and number are separable, even though they overlap significantly, as argued by Borer (2005). Number (i.e. singular/plural) must be expressed in the Niuean noun phrase, but it can be expressed on a variety of different elements in the phrase, such as on a quantifier, a numeral, or the reduplicated noun itself, or by means of a plural marker. The following question is addressed: Is it possible to situate number in a single functional head in Niuean? The answer is yes, but several problems must first be addressed. In order to explain the lack of the plural particle in quantified and counted nominal phrases, it is proposed that the linking particle that appears in such phrases be analyzed as a deficient classifier. This allows a uniform analysis for number: Niuean, like Armenian, has both a classifier and a number system. The paper then turns to examine certain classifying collective particles, which co-occur with the plural marker. These are considered to merge lower than number, but if number is otherwise unexpressed, they can raise to serve the function of number. The number marker itself is analyzed as being ambiguous between a number and a collective particle. In conclusion, neither the number system, nor the classifier system in Niuean is canonical, suggesting a system in change from classifiers to number.

01 Jan 2009
TL;DR: This paper presents a system that adds another input modality, alongside e.g. speech, to a multimodal human-machine interaction scenario: it extracts head gestures using image interpretation techniques based on machine learning algorithms, providing a nonverbal and familiar way of interacting with the system.

Patent
19 Mar 2009
TL;DR: In this article, a speech recognition system includes a storage unit which stores vocabularies, an instruction receiving unit which receives an instruction of a target vocabulary and a target operation, and a grammar network generating unit which generates, when adding is instructed, a grammar network containing the word head portion.
Abstract: A speech recognition apparatus includes a storage unit which stores vocabularies, each of the vocabularies including plural word body data, each of the word body data obtained by removing a specific word head from a word or sentence, and which stores at least one word head portion including labeled nodes to express at least one common word head shared by at least two of the vocabularies; an instruction receiving unit which receives an instruction of a target vocabulary and an instruction of an operation; a grammar network generating unit which generates, when adding is instructed, a grammar network containing the word head portion, the target vocabulary, and connection information indicating that each of the word body data contained in the target vocabulary is connected to a specific one of the labeled nodes contained in the word head portion; and a speech recognition unit which executes speech recognition using the generated grammar network.
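
The shared-prefix idea can be pictured as a small graph: a labeled node spells out a common word head once, and each vocabulary entry stores only its body plus the node it connects to. A toy sketch with invented names (real systems operate on phone or label sequences rather than strings):

```python
# Toy sketch of a grammar network that factors a common word head out of a
# vocabulary. Class, labels and the expansion step are illustrative only.
class GrammarNetwork:
    def __init__(self):
        self.head_nodes = {}       # node label -> common word-head string
        self.connections = []      # (node label, word-body string)

    def add_head_portion(self, label, head):
        self.head_nodes[label] = head

    def add_vocabulary(self, label, bodies):
        """Connect each word body (head removed) to the labeled head node."""
        for body in bodies:
            self.connections.append((label, body))

    def accepted_words(self):
        """Full word forms the network recognises (head + body)."""
        return [self.head_nodes[label] + body for label, body in self.connections]

net = GrammarNetwork()
net.add_head_portion("H1", "play")                        # common word head
net.add_vocabulary("H1", ["", "er", "ing", "back"])       # bodies, head removed
print(net.accepted_words())   # ['play', 'player', 'playing', 'playback']
```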