
Showing papers by "Laurie A. Stowe" published in 2003


Journal Article (DOI)
TL;DR: This result is consistent with the idea that retinotopic levels of object representation are linked with the semantic level of object description, unless there is direct reference from the word to the visual representation of the object.

29 citations


01 Jan 2003
TL;DR: Hoeks et al. found evidence that seems to contradict the 'message-level hypothesis', using as a dependent measure the amplitude of the N400 evoked by the final word; the N400 is a negative component of the ERP that peaks some 400 ms after presentation of a stimulus.
Abstract: Recent research on sentence processing using ERPs (Event-Related brain Potentials) has shown that there are situations in which the semantic relationships between the words in a sentence are so strong that they can block the semantic interpretation that is actually prescribed by the syntactic structure of that sentence (Hoeks, Stowe, & Doedens, 2003; Kolk, Chwilla, van Herten, & Oor, in press). As syntactic processing is the assumed province of the left hemisphere (LH), it was hypothesized that this so-called 'semantic illusion' might result from a transient but apparently rather influential non-syntactic sentence representation formed in the right hemisphere (RH). Two reaction time experiments using the Divided Visual Field paradigm only partially supported this hypothesis, as they showed that it is the LH that is most sensitive to the semantic illusion.

Introduction

Readers do not wait until they have received the final word of a sentence before interpreting it. On the contrary, sentence understanding proceeds in a highly incremental fashion, approximately as each word is encountered (e.g., Altmann & Steedman, 1988). The partial sentence representations produced by this continuous process of interpretation have been shown to facilitate the processing of upcoming words. For instance, Duffy, Henderson, & Morris (1989) showed with eye tracking that if a new word is semantically related to two or more words in the preceding sentence context, it is fixated for a significantly shorter time than in a semantically 'neutral' context. This kind of facilitated processing, possibly originating from a rather 'coarse' semantic representation based on all preceding lexical items taken together, was called 'lexical' facilitation.

Following up on Duffy et al., Morris (1994) showed that semantic relations indeed play an important facilitatory role in sentence processing, but that this effect is mediated by the 'message-level' representation of a given sentence, that is, the representation in which both semantic and syntactic information are taken into account. To illustrate, in a sentence such as The gardener talked as the barber trimmed the MUSTACHE, the presence of barber and trimmed facilitated the processing of the target word mustache. If, however, the syntactic structure was slightly altered, as in The gardener talked to the barber and trimmed the MUSTACHE, the semantically related words remained the same and in approximately the same position, but the sentence-level representation had changed considerably by the time the final word was reached: not the barber, but the gardener is doing the trimming here. Morris showed that mustache is not facilitated under these circumstances. In other words, facilitation is governed by the message-level representation of a sentence.

However, a recent study by Hoeks, Stowe, & Doedens (2003) using ERPs found evidence that seems to contradict this 'message-level hypothesis'. In their experiment, the N400 amplitude evoked by the final word was used as a dependent measure; the N400 is a negative component of the ERP that peaks some 400 ms after presentation of a stimulus and is highly sensitive to semantic processing: the easier it is to process a given item semantically, the smaller the N400 (e.g., Kutas & Hillyard, 1984). Hoeks et al. used materials like the following (lit. = literal English translation of the Dutch example sentence):
1. Plausible & Related: Het brood werd door de bakkers GEBAKKEN. (lit. The bread was by the bakers BAKED.)
2. Implausible & Related: Het brood heeft de bakkers GEBAKKEN. (lit. The bread has the bakers BAKED.)
3. Implausible & Unrelated: Het brood heeft de bakkers BEDREIGD. (lit. The bread has the bakers THREATENED.)

According to the 'message-level hypothesis', the N400 amplitude to the target word baked in sentence 1 should be smallest, as this word is very easy to process; in contrast, the final words in sentences 2 and 3 should give rise to much larger amplitudes because they obviously do not fit into the existing message-level representation; both are equally implausible, as ascertained in a separate rating study. Surprisingly, however, no significant difference in N400 amplitude was found for the final words of sentences 1 and 2, while both differed significantly from the N400 elicited by control sentence 3. It was only some 700 ms after presentation of the final word that the ERP waveforms for plausible sentence 1 and implausible sentence 2 started to diverge (a positive shift was observed that might indicate processing difficulty related to, e.g., syntactic structure building or reanalysis). Thus it is very likely that, at least temporarily, sentence 2 was wrongly taken as highly plausible. In other words, these results point to a phenomenon that may be called a temporary 'semantic illusion'. See Figure 1 for the results (in microvolts) at electrode Pz (i.e., an electrode near the top of the head that is generally highly sensitive to modulations of the N400).

Figure 1. ERP waveforms from the Hoeks et al. study.

Note that after reading these sentences participants were required to make a plausibility judgment. The majority of semantic illusion sentences were correctly classified as implausible (89%), indicating that the illusion really is a temporary phenomenon.

Hoeks et al. argued that in order to find a semantic illusion effect, two conditions must be met. First, there must be some problem in the timely construction of a message-level representation. For sentence 2 this difficulty might arise from the fact that the thematic relations (i.e., 'who is doing what to whom') in this sentence are not at all clear: the syntax prescribes that the inanimate entity (the bread) should do something to the animate entities (the bakers), which is not the usual state of affairs. The second condition is that all words in the sentence should fit together semantically (not necessarily associatively; more like fitting into one concept or scenario), thus facilitating the processing of the target word. So if the construction of a valid message-level representation is hampered or seriously delayed, there can be significant lexical facilitation if the words fit together.

This was not the only demonstration of the semantic illusion effect. In fact, the Hoeks et al. (2003) results were essentially replicated by another recent study by Kolk and co-workers (Kolk, Chwilla, van Herten, & Oor, in press). They used sentences such as the following, and measured the N400 on the target word hunted:

4. De stroper die op de vossen joeg ... (lit. The poacher that on the foxes hunted ...)
5. De vos die op de stropers joeg ... (lit. The fox that on the poachers hunted ...)

As in the Hoeks et al. study, Kolk et al.
did not find any evidence for an N400 difference between these sentences, even though sentences such as 5 were rated as highly implausible. Instead, they found a late positive component that they interpreted as indicative of syntactic processing difficulty. Again, the prerequisites for the semantic illusion effect are present: thematic processing difficulty (cf. foxes that hunt poachers in 5 vs. poachers that hunt foxes in 4) and words that are highly semantically related.

To briefly summarize, we have seen that during some phase of sentence interpretation, effects of semantic relatedness can 'overrule' syntactic structure; that is, even if the syntactic structure of a sentence permits only one interpretation, strong semantic relatedness between the content words of the sentence can temporarily overturn or even block this obligatory interpretation. Perhaps there are actually two mechanisms of sentence interpretation: one responsible for a message-level representation (with syntax) and one for a coarse-grained semantic representation that does not need syntax. And perhaps each of these mechanisms is housed in a separate hemisphere. As syntactic processing is assumed to be the province of the left hemisphere (LH), it may be hypothesized that the 'semantic illusion' results from a transient but apparently influential coarse-grained semantic sentence representation formed in the right hemisphere (RH). This might seem far-fetched, but in the next section we explain why we think it is a plausible hypothesis.

A Right Hemisphere Phenomenon?

Over the last thirty years a great deal of research has been dedicated to unraveling the linguistic capabilities of the RH, as compared to the language-dominant LH. The general picture that emerges from the literature is that although the two hemispheres collaborate closely during language processing, they have a specific division of labour. For instance, it has been argued that left hemisphere language processing takes place at the message level, where both semantic and syntactic information are integrated, while processing in the RH proceeds in a more global manner and is more geared toward semantic coherence (e.g., Beeman et al., 1994).

Given the evidence for these different modes of language processing in the two hemispheres, Hoeks et al. (2003) speculated that there might actually be two mechanisms for sentence interpretation that, in spite of their close cooperation, are nevertheless dedicated to different aspects of the interpretation process. One of these mechanisms, in the LH, would then be responsible for creating the message-level representation (i.e., with the use of syntax), whereas the other, located in the RH, continuously creates a coarse-grained semantic representation into which all content words are integrated, thus more or less representing the 'gist' of the sentence. If, for some reason or other, the LH is not able to produce a valid message-level representation quickly enough, the RH temporarily takes over to guide the integration of incoming lexical items.