
Showing papers in "Cognitive Linguistics in 1997"


Journal ArticleDOI
TL;DR: The authors examine the claims that figurative language does not involve processing the surface literal meaning (e.g., Gibbs 1984), and that its comprehension is not processing-intensive because it does not involve a trigger (e.g., Keysar 1989).
Abstract: In this study I test the prevalent claims among contemporary psycholinguists that understanding metaphor does not involve a special process, and that it is essentially identical to understanding literal language. Particularly, I examine the claims that figurative language does not involve processing the surface literal meaning (e.g., Gibbs 1984), and that its comprehension is not processing-intensive, because it does not involve a trigger (e.g., Keysar 1989). A critique, review, and reinterpretation of a number of contemporary studies on literal and figurative language reveal that figurative and literal language use are governed by a general principle of salience: salient meanings (e.g., conventional, frequent, familiar, enhanced by prior context) are processed first. Thus, for example, when the most salient meaning is intended (as in, e.g., the figurative meaning of conventional idioms), it is accessed directly, without having to process the less salient (literal) meaning first (Gibbs 1980). However, when a less rather than a more salient meaning is intended (e.g., the metaphoric meaning of novel metaphors, the literal meaning of conventional idioms, or a novel interpretation of a highly conventional literal expression), comprehension seems to involve a sequential process, in which the more salient meaning is processed initially, before the intended meaning is derived (Blasko and Connine 1993; Gerrig 1989; Gibbs 1980; Gregory and Mergler 1990). Parallel processing is induced when more than one meaning is salient. For instance, conventional metaphors whose metaphoric and literal meanings are equally salient are processed initially both literally and metaphorically (Blasko and Connine 1993). The direct/sequential process debate, then, can be reconciled: different linguistic expressions (salient/less salient) may tap different (direct/parallel/sequential) processes.

747 citations


Journal ArticleDOI
TL;DR: This paper reanalyzes the data which Lakoff and Johnson (1980) treated as evidence for the conceptual metaphor THEORIES ARE BUILDINGS.
Abstract: In this paper I reanalyze the data which Lakoff and Johnson (1980) treated as evidence for the conceptual metaphor THEORIES ARE BUILDINGS. This reanalysis involves making a new distinction between types of conceptual metaphor (primary vs. compound metaphors) according to principles outlined in Grady et al. (1996) and Grady (in press). The two more basic metaphoric conceptualizations proposed here, which combine to yield the data discussed by Lakoff and Johnson, are ORGANIZATION IS PHYSICAL STRUCTURE and PERSISTING IS REMAINING ERECT. The decomposition of complex metaphors such as THEORIES ARE BUILDINGS into more basic metaphors offers several important benefits over previous accounts. First, a decompositional account predicts linguistic data more accurately and specifically, e.g., the fact that there is a conventional interpretation for the foundation of the theory but not for the walls of the theory. Furthermore, this type of analysis captures and defines the relationship between "different" metaphors such as THEORIES ARE BUILDINGS and THEORIES ARE FABRICS (e.g., the theory unraveled), which clearly share much of the same structure. Most importantly, this account focuses on metaphoric mappings for which there is a direct experiential basis, and therefore sheds light on the fundamental structure of our conceptual systems.

1. The THEORIES ARE BUILDINGS metaphor

The title of this paper refers to an example discussed in Metaphors We Live By, the 1980 book by Lakoff and Johnson that inspired much of the recent work in the field of metaphor study. The example concerned linguistic evidence that for English speakers there is a specific and definable conceptual relationship between buildings and theories. In this paper I will take up the discussion of the THEORIES ARE BUILDINGS metaphor where Lakoff and Johnson left off. In particular, I will argue for a slightly different view of the complex of figurative correspondences which make

Cognitive Linguistics 8-4 (1997), 267-290, 0936-5907/97/0008-0267 © Walter de Gruyter

383 citations


Journal ArticleDOI
TL;DR: By restricting prepositional senses to the relational level and image schema transformations to the component level, a principally constrained, yet widely applicable model of polysemy structure is demonstrated.
Abstract: Previous network accounts of polysemy (Brugman 1981, Lakoff 1987, Dewell 1994) have used image schemata to motivate and define the various senses of a single lexical item; while the application of image-schematic structure to lexical semantics is taken as ultimately beneficial, such analyses have not been systematically constrained. The present study, inspired by Talmy (1983), distinguishes between three levels of image-schematic structure: (1) the component level, (2) the relational level, and (3) the integrative level; by restricting prepositional senses to the relational level and image schema transformations to the component level, a principally constrained, yet widely applicable model of polysemy structure is demonstrated. Using the well-studied preposition over as an example, the composition and application of relational- and integrative-level schemata is examined.

92 citations


Journal ArticleDOI
TL;DR: This article examines the nature and role of constituency in grammatical structure and explores the relationship among constituency, dependency, grammatical relations, and conceptual structure from the viewpoint of cognitive grammar.
Abstract: Constituency is argued to be neither fundamental nor essential to grammatical structure, and standard phrase trees are considered artifactual. More basic is our capacity for grouping on grounds of similarity and contiguity. Linguistic expressions involve many kinds of conceptual and phonological groupings. Conceptual groups are candidates for symbolization, and phonological groups offer themselves as possible symbolizing structures. A "classical constituent" emerges as a special case when a particular kind of conceptual group happens to be symbolized by a particular kind of phonological group. However, not all groupings participate in such relationships, nor do they arrange themselves into consistent hierarchies. Cognitive grammar adopts a flexible scheme based on assemblies of conceptual groups, phonological groups, and symbolic linkages between them. This model avoids numerous problems encountered by classical constituency hierarchies. This article examines the nature and role of constituency in grammatical structure. From the standpoint of cognitive grammar, it explores the relationship among constituency, dependency, grammatical relations, and conceptual structure. It reviews and extends the argument (originally made in Langacker 1995a) that constituent structure is less essential to grammar than is generally thought and is not appropriately modeled by syntactic phrase trees as they are standardly conceived. To the extent that it needs to be posited, constituency can be seen as emerging from more basic phenomena: conceptual grouping, phonological grouping, and symbolization.

1. Background and overview

Constituency is clearly an important aspect of linguistic organization. Accordingly, tree-like representations are prevalent in both traditional

Cognitive Linguistics 8-1 (1997), 1-32, 0936-5907/97/0008-0001 © Walter de Gruyter

92 citations


Journal ArticleDOI
TL;DR: This article focuses on the issue of directionality in three figures of speech (simile, synaesthesia, and zeugma) as it appears in their poetic use, and provides a cognitive account for this selective use, arguing that the figures involved conform to a certain cognitive constraint determining their directionality.
Abstract: The present article focuses on the issue of directionality in three figures of speech, simile, synaesthesia, and zeugma, as it appears in the poetic use of these figures. First, an attempt is made to isolate a certain (structural) level at which these figures of speech manifest an extremely selective preference for certain structural options over others, beyond a specific context (text, poet, school, or period). A textual analysis of extensive poetic corpora corroborates this selective use. Traditional accounts fail to account for such preferences, given their "contextual" orientation, whereas the phenomena under discussion go beyond any relevant specific context. By contrast, the article provides a cognitive account for this selective use, arguing that the figures involved conform to a certain cognitive constraint determining their directionality. Empirical data are introduced, based on various psychological tasks, suggesting that the structures selected are, from a cognitive standpoint, "more basic" (e.g., are easier to comprehend and recall, and are more easily conventionalized) than those ruled out. The reason these structures are "more basic" than their counterparts is that they meet a general cognitive constraint. A formulation and a theoretical account of this constraint are proposed and discussed.

Introduction: Cognitive vs. other constraints on poetic language

Despite the creativity and novelty exhibited in the language used in poetry, any student of literature would agree that (at least some major aspects of) poetic language exhibit certain regularities, that is, conform to certain rules or constraints. The general question of interest here is: What is the nature of the rules or constraints to which even poetic language must conform? One possible answer pertains to the idea that poetic language is constrained by linguistic rules. Under this view, poetic language is part of

Cognitive Linguistics 8-1 (1997), 33-71, 0936-5907/97/0008-0033 © Walter de Gruyter

84 citations


Journal ArticleDOI
TL;DR: Building on Lindner's (1983) analyzable and systematic semantic view of verb-particle constructions, situated in a "space grammar" framework, this article identifies specific relationships to both familiar and previously unrecognized conceptual metaphors of English.
Abstract: The meanings of verb-particle constructions (VPCs; also called "verb-particle combinations" and "phrasal verbs") such as pick out or figure out have generally been viewed as arbitrary and idiosyncratic, since explanations along traditional semantic lines have been recognized as wholly or partly inadequate (e.g., Dixon 1991, Fraser 1976, Bolinger 1971). The analysis of Lindner (1983), however, proposed an analyzable and systematic semantic view of verb-particle constructions, situating them in a "space grammar" framework. In building on her work, a further use of metaphor theory and other aspects of cognitive linguistics reveals still more degrees of motivated semantic systematicity, as well as identifying specific relationships to both familiar and formerly unrecognized conceptual metaphors of English. Under this explanation, both the verb and the particle not only contribute semantically to the verb-particle construction, but also provide reasons for some of the syntactic and semantic limitations and constraints found in the usage patterns of the complete constructions.

1. Previous discussions of verb-particle constructions

It is generally recognized that attempts to use traditional methods of semantic analysis to investigate verb-particle constructions have not been very fruitful. Many analysts, such as Fraser (1976) and Bolinger (1971), have concentrated instead on other aspects of the verb-particle construction, such as syntax. Others, such as Dixon (1991), have given up on seeking any systematicity at all. However, the application of metaphor theory (and frame semantics) can contribute some degree of semantic systematicity to the discussion. In her dissertation (Lindner 1983), Susan Lindner provided a cognitive analysis of hundreds of verb-particle constructions (VPCs) with out and up using the framework of Langacker's "space grammar", and classified

Cognitive Linguistics 8-4 (1997), 327-357, 0936-5907/97/0008-0327 © Walter de Gruyter

71 citations


Journal ArticleDOI
TL;DR: This paper argued that if sets up a mental space wherein the apodosis's content (or speech act, or conclusion) is taken as existing; then refers uniquely and anaphorically to the mental space set up in the protasis, and may contextually have a contrastive deictic function which requires some other mental space to be postulated as the contrasted entity.
Abstract: In if-then conditionals, the different parts of the construction and of the context make differing contributions to interpretation. It is here argued that if sets up a mental space wherein the apodosis's content (or speech act, or conclusion) is taken as existing. Then refers uniquely and anaphorically to the mental space set up in the protasis, and may contextually have a contrastive deictic function which requires some other mental space to be postulated as the contrasted entity. The relationship between clause order and the use of then falls out of this anaphoric interpretation of then, which is not interpretable without preceding establishment of a referent mental space. The demands of relevance allow a hearer to conventionally give a predictive interpretation to a conditional construction marked by appropriate verb forms; this interpretation in turn requires the setting up of alternative spaces for the purpose of prediction (and thus gives rise to an iff implicature). The restrictions on the use of then with even if conditionals, generic conditionals, and only if conditionals also fall out of semantic and pragmatic aspects of these particular classes of constructions. In general, conditional constructions in English are more compositional in meaning than has previously been observed; this compositionality emerges when the analyst is willing to map meaning onto syntactic constructions, and not solely onto morphemes.

51 citations


Journal ArticleDOI
TL;DR: This article proposed the hypothesis that Lexical Memory Traces relating to ontogenetic processes of grammaticalization may account for the representation of retention in the psychology of an individual speaker.
Abstract: Previous research in grammaticalization studies has revealed instances of the retention of historical lexical source meanings as a factor contributing to constraints on the distribution and generalization of a grammaticalizing feature to new environments. Recent studies have further revealed that evidence of retention can be reflected in the intuitions of native speakers in evaluation tests of semantic acceptability. In the present study, the question of the correlation between the synchronic and the diachronic representation of retention is addressed, and its presence as a psychological phenomenon relating to the intuitions of individual speakers is considered. Using examples from studies of child language acquisition, the present paper proposes the hypothesis that Lexical Memory Traces relating to ontogenetic processes of grammaticalization may account for the representation of retention as semantic intuitions in the psychology of the individual speaker.

44 citations


Journal ArticleDOI
TL;DR: The author accepts the validity of Guy's and Kroch's data and of their statistical interpretations, but argues that those data in fact support a radically different model of language structure based on networks of prototypes, such as either Cognitive Grammar or Word Grammar.
Abstract: Recent work by Guy and Kroch has used statistical data on variation in performance as evidence for specific theories of language structure, namely Lexical Phonology and Principles-and-Parameters syntax. I accept the validity of their data and of their statistical interpretations (Guy's `exponential model' and Kroch's `constant rate effect'), but I question their interpretations of these data in terms of language structure. Instead I argue that their data support a radically different model of language structure based on networks of prototypes, such as either Cognitive Grammar or Word Grammar.

Inherent variability and linguistic theory

Many thanks for help and encouragement from David Denison and for comments and criticisms from those who attended seminars at UCL and Oxford in which I presented part of the material.

1. Inherent variability as evidence for competence theories

One of the most important discoveries in modern linguistics is surely the existence of `inherent variability' (Labov 1969), the coexistence of alternative `ways of saying the same thing' within the speech of a single speaker who alternates between them in a statistically regular way. The study of inherent variability has turned into a major area of linguistic research and greatly increased our understanding of variation in both place and time. But most of this work has fallen clearly within the sphere of sociolinguistics, with its special focus on the relationships between linguistic and social structures; very little could be described as the study of language structure as such, and even less has had any influence on (synchronic) theories of language structure. Indeed, it is hard to think of a single example (until very recently) where statistical data on inherent variability has been used as evidence in discussions of language structure.
It is true that there have been a few sophisticated marriages between quantitative data and existing theories of language structure, the obvious example being Labov's idea of associating transformations or phonological rules with probabilities to give `variable rules' (Labov 1972: 216ff, Cedergren and Sankoff 1974); but none of this work really challenged, or even influenced, the (then) current views on language structure, and in any case `the variable rule as a part of linguistic theory has quietly been abandoned' even in sociolinguistic studies of variation (Fasold 1990: 256). After Labov's early work there was a long period of separation between the work on inherent variability and work on language structure. Although the units that varied were parts of language (words, sounds, morphemes, constructions), inherent variability was left to the sociolinguists on the grounds that this variation had nothing to do with anything that `theoretical linguists' were interested in. A typical view is expressed by Smith (1989: 180): To be of interest to a linguistic theorist it is not sufficient that the talk be of words and such like; rather the talk has to have implications of some kind for the theory concerned, by supporting or contradicting one of the claims derivable from it. ... Any social parameter whatsoever may be the locus for some linguistic difference. Unfortunately nothing of interest to linguistic theory follows from this, so quantifying the difference is irrelevant to linguistics even though it may be of interest to the sociologist if it gives him or her a recognition criterion for some socially relevant variable. It is easy to criticise this comment for assuming in advance that `linguistic theory' cannot contain any claims which make contact with variability; but there is no denying that at the time when it was written it contained an important grain of truth.
Even if variability had potential for illuminating theories of language structure, this potential had not been realised. It is true that some of us had asked what the existence of variability implied for theories of language structure (Romaine 1982, Hudson 1985, 1986, in press), but none of us managed to get beyond the stage of producing general statements of principles or programmes for future research. The stimulus for the present article is an important collection of papers (Beals et al. 1994) presented as a parasession to the Chicago Linguistic Society's 30th regional meeting, with the title `Variation in Linguistic Theory'. Some of these papers are serious attempts to use variable data as evidence for particular theoretical positions on the structure of language. They show beyond any reasonable doubt that the data are not only robust, but also relevant; but (unsurprisingly) different authors draw different theoretical conclusions, with two main groups of contenders. Roughly speaking, we can distinguish the `classical theories' from the `prototype-based theories'. The former are represented by Lexical Phonology (Guy) and Principles-and-Parameters Theory (Kroch), and the latter by Cognitive Grammar (Kemmer and Israel) and Pierrehumbert's view on phonology. The former articles (by Guy and Kroch) use statistical data very skillfully to support their theoretical claims, but the prototype-based theories are presented without statistical support. This article is an attempt to contribute to the debate started in that collection. My theoretical sympathies are with the prototype-based theories, and my strategy will be to recycle statistical data quoted in support of classical theories as evidence for a prototype-based alternative. I shall consider the two specific cases which Guy and Kroch discuss, one in English phonology (so-called `t/d deletion') and one in English historical syntax (`do-support').
The argument will involve some fundamental questions about the nature of language structure, not least of which is the question whether phonology and syntax are really different (as claimed most explicitly by Halle and Bromberger, 1989). I shall try to show that both cases allow the same kind of reanalysis (in terms of prototypes), so to that extent the argument will count as support for the view that all language structure is formally homogeneous, as well as being formally similar to structures found in general cognition.

2. Phonology: t/d loss as evidence for lexical phonology

The first case is the alternation between the presence or absence of a final alveolar stop following a consonant at the end of a word, so-called t/d deletion. For example, mist and raised can both be pronounced with or without the final stop, and according to the available data on free speech, both stops are more likely to be pronounced than to be omitted, though they are both omitted on at least some occasions. The data are summarised by Gregory Guy (1994), who has also been responsible for collecting and analysing most of it over the last decade or so (Guy 1980, 1991a, 1991b, 1993). His very careful statistical analyses have shown beyond reasonable doubt that this alternation is not a simple phonetic matter (though of course it is phonetically motivated by the universal tendency to reduce consonant clusters). The most interesting conclusion (for present purposes) is that the survival chances of the stop (which we shall now call simply `t/d') depend on the morphological structure of the word containing it. If t/d is the past-tense (or participle) suffix, it is much less likely to be omitted than if it is simply the last segment in the word's stem. For example, mist and hold are much more likely than missed and holed to lose their t/d; and for past-tense verbs like left and felt, where t/d is a suffix but not the sole inflectional marker, the figure is intermediate.
These differences between the three types of word are easily understood in functional terms: when t/d is a suffix it carries far more information than when it is just the last segment of the stem. But a functional explanation cannot be the whole story, because suffixes are often omitted regardless of the loss of information (Labov 1994: Chapter 19). In any case, the functional explanation does not help at all in understanding the most interesting fact of all, to which we now turn. Guy has discovered a regular mathematical relationship among the t/d omission figures for the three word classes. He calls the morphological word classes `monomorphemes', `irregular past' and `regular past', so we can let M, I and R stand for the probability of t/d being pronounced in each of these types of word. The following formulae define the relationship among these figures: The Exponential Model
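The formulae themselves are cut off in this extract. As an illustration only: Guy's exponential model is standardly reported as relating the three retention probabilities by powers of a single per-level retention rate p, so that R = p, I = p², and M = p³. The sketch below assumes that standard statement; the function name and the numeric rate are invented for illustration, not taken from Guy's data.

```python
# Guy's exponential model of t/d retention, as standardly stated:
# t/d gets one chance of deletion per pass through a level of the
# lexical phonology, so if p is the per-level retention probability:
#   regular past   (missed):  R = p       (exposed at 1 level)
#   irregular past (left):    I = p ** 2  (exposed at 2 levels)
#   monomorpheme   (mist):    M = p ** 3  (exposed at 3 levels)

def exponential_model(p):
    """Predicted t/d retention rates for the three morphological classes."""
    return {
        "regular_past": p,
        "irregular_past": p ** 2,
        "monomorpheme": p ** 3,
    }

# Illustrative check with an invented retention rate of 0.9: the model
# predicts I = R**2 and M = R**3, so monomorphemes like "mist" lose
# their t/d most often, as the text reports.
pred = exponential_model(0.9)
assert abs(pred["irregular_past"] - pred["regular_past"] ** 2) < 1e-12
assert abs(pred["monomorpheme"] - pred["regular_past"] ** 3) < 1e-12
```

The design point is that the three observed rates are not independent parameters: one rate fixes the other two, which is what makes the model testable against the corpus figures.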

42 citations


Journal ArticleDOI
TL;DR: This paper explored the cognitive-grammar view of analogical change as recategorization and tested it in the context of the well-known (but perhaps less read) article by Kurylowicz in which are developed six formulas capturing directional tendencies in analogical change.
Abstract: Within the framework of cognitive grammar, analogy is viewed as an aspect of the mental categorization of linguistic units, involving the recognition of similarity which emerges from the comparison of salient features of these units. Analogical change, then, is a process of recategorization based on evolving judgments about how such groupings should be arranged (both internal to a given category and between them). The present article explores these notions and tests them in the context of the well-known (but perhaps less read) article by Kurylowicz in which are developed six formulas capturing directional tendencies in analogical change.

8 citations


Journal ArticleDOI
TL;DR: This article found that children and adults were able to learn that different verb meanings applied to different objects when those objects differed only in dimensionality or only in basic-level categories, but not in the linguistically less-relevant dimension of size, or in subordinate or superordinate level categories.
Abstract: Common linguistic phenomena such as selectional restrictions (e.g., the verb assassinate applies only to prominent people) and verb polysemy (e.g., one meaning of roll applies only to round objects, as in John rolled the ball; another only to flat flexible objects, as in John rolled up the flag) suggest that verb learning is context sensitive, where context may be characterized in terms of the conceptual categories (e.g., basic-level kinds) or grammatically relevant properties (e.g., shape/dimensionality) that apply to the arguments of a verb. Two experiments test the prediction that verb learners are predisposed to associate conceptual and/or grammatically relevant information with the arguments of a verb. Children and adults were taught two different verb meanings, for the same made-up verb stem, in the context of two different objects; they were then tested on their ability to act out the meaning of the verb. It was found that subjects were able to learn that different verb meanings applied to different objects when those objects differed only in dimensionality or only in basic-level categories, but not when those objects differed only in the linguistically less-relevant dimension of size, or only in subordinate- or superordinate-level categories. The results are taken to support the hypothesis that verb learning is context sensitive, and are interpreted with respect to two possible functions of context sensitivity: how children acquire selectional restrictions on the use of a verb, and how they individuate the different versions of a polysemous verb.