
Showing papers in "Journal of Experimental Psychology: Human Perception and Performance in 2008"


Journal ArticleDOI
TL;DR: In this paper, a simple auditory pip is shown to drastically decrease search times for a synchronized visual object that is normally very difficult to find, even though the pip contains no information on the location or identity of the visual object.
Abstract: Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location or identity of the visual object. The experiments also show that the effect is not due to general alerting (because it does not occur with visual cues), nor is it due to top-down cuing of the visual change (because it still occurs when the pip is synchronized with distractors on the majority of trials). Instead, we propose that the temporal information of the auditory signal is integrated with the visual signal, generating a relatively salient emergent feature that automatically draws attention. Phenomenally, the synchronous pip makes the visual object pop out from its complex environment, providing a direct demonstration of spatially nonspecific sounds affecting competition in spatial visual processing. Keywords: attention, visual search, multisensory integration, audition, vision

368 citations


Journal ArticleDOI
TL;DR: Four experiments are presented that implicate motor simulation as a mediator of action-specific perception effects; the perceiver's ability to perform the action, as determined by the outcome of the simulation, influences perceived distance.
Abstract: Perception is influenced by the perceiver's ability to perform intended actions. For example, when people intend to reach with a tool to targets that are just beyond arm's reach, the targets look closer than when they intend to reach without the tool (J. K. Witt, D. R. Proffitt, & W. Epstein, 2005). This is one of several examples demonstrating that behavioral potential affects perception. However, the action-specific processes that are involved in relating the person's abilities to perception have yet to be explored. Four experiments are presented that implicate motor simulation as a mediator of these effects. When a perceiver intends to perform an action, the perceiver runs a motor simulation of that action. The perceiver's ability to perform the action, as determined by the outcome of the simulation, influences perceived distance.

241 citations


Journal ArticleDOI
TL;DR: The authors investigated whether the introduction of spaces into unspaced Chinese text facilitates reading and whether the word or, alternatively, the character is a unit of information that is of primary importance in Chinese reading.
Abstract: Native Chinese readers' eye movements were monitored as they read text that did or did not demark word boundary information. In Experiment 1, sentences had 4 types of spacing: normal unspaced text, text with spaces between words, text with spaces between characters that yielded nonwords, and finally text with spaces between every character. The authors investigated whether the introduction of spaces into unspaced Chinese text facilitates reading and whether the word or, alternatively, the character is a unit of information that is of primary importance in Chinese reading. Global and local measures indicated that sentences with unfamiliar word spaced format were as easy to read as visually familiar unspaced text. Nonword spacing and a space between every character produced longer reading times. In Experiment 2, highlighting was used to create analogous conditions: normal Chinese text, highlighting that marked words, highlighting that yielded nonwords, and highlighting that marked each character. The data from both experiments clearly indicated that words, and not individual characters, are the unit of primary importance in Chinese reading.

178 citations


Journal ArticleDOI
TL;DR: The results indicate that lexical processing of words can influence saccade programming (as shown by fixation durations and which words are fixated) and that orthographic familiarity, but not word frequency, influences the duration of prior fixations.
Abstract: Word frequency and orthographic familiarity were independently manipulated as readers’ eye movements were recorded. Word frequency influenced fixation durations and the probability of word skipping when orthographic familiarity was controlled. These results indicate that lexical processing of words can influence saccade programming (as shown by fixation durations and which words are fixated). Orthographic familiarity, but not word frequency, influenced the duration of prior fixations. These results provide evidence for orthographic, but not lexical, parafoveal-on-foveal effects. Overall, the findings have a crucial implication for models of eye movement control in reading: There must be sufficient time for lexical factors to influence saccade programming before saccade metrics and timing are finalized. The conclusions are critical for the fundamental architecture of models of eye movement control in reading— namely, how to reconcile long saccade programming times and complex linguistic influences on saccades during reading.

176 citations


Journal ArticleDOI
TL;DR: The authors addressed this issue using the N2-posterior-contralateral (N2pc) effect, a component of the event-related brain potential thought to reflect attentional allocation, to provide converging evidence for attentional capture contingent on top-down control settings.
Abstract: Theories of attentional control are divided over whether the capture of spatial attention depends primarily on stimulus salience or is contingent on attentional control settings induced by task demands. The authors addressed this issue using the N2-posterior-contralateral (N2pc) effect, a component of the event-related brain potential thought to reflect attentional allocation. They presented a cue display followed by a target display of 4 letters. Each display contained a green item and a red item. Some participants responded to the red letter and others to the green letter. Converging lines of evidence indicated that attention was captured by the cues with the same color as the target. First, these target-color cues produced a cuing validity effect on behavioral measures. Second, distractors appearing in the cued location produced larger compatibility effects. Third, the target-color cue produced a robust N2pc effect, similar in magnitude to the N2pc effect to the target itself. Furthermore, the target-color cue elicited a similar N2pc effect regardless of whether it competed with a simultaneous abrupt onset. The findings provide converging evidence for attentional capture contingent on top-down control settings.

165 citations


Journal ArticleDOI
TL;DR: It was found that memory for bindings and memory for features were equally impaired by the search task, contrary to the proposal that attention is more important for the maintenance of feature bindings than for the maintenance of unbound feature values.
Abstract: This study examined the role of attention in maintaining feature bindings in visual short-term memory. In a change-detection paradigm, participants attempted to detect changes in the colors and orientations of multiple objects; the changes consisted of new feature values in a feature-memory condition and changes in how existing feature values were combined in a binding-memory condition. In the critical experiment, a demanding visual search task requiring sequential shifts of spatial attention was interposed during the delay interval of the change-detection task. If attention is more important for the maintenance of feature bindings than for the maintenance of unbound feature values, the attention-requiring search task should specifically disrupt performance in the binding-memory task. Contrary to this proposal, it was found that memory for bindings and memory for features were equally impaired by the search task.

164 citations


Journal ArticleDOI
TL;DR: The results revealed that color has an influence across a wide variety of scenes and is directly associated with scene gist.
Abstract: In 3 experiments the authors used a new contextual bias paradigm to explore how quickly information is extracted from a scene to activate gist, whether color contributes to this activation, and how color contributes, if it does. Participants were shown a brief presentation of a scene followed by the name of a target object. The target object could be consistent or inconsistent with scene gist but was never actually present in the scene. Scene gist activation was operationalized as the degree to which participants respond "yes" to consistent versus inconsistent objects, reflecting a response bias produced by scene gist. Experiment 1 demonstrated that scene gist is activated after a 42-ms exposure and that the strength of the activation increases with longer presentation durations. Experiments 2 and 3 explored the contribution of color to the activation of scene gist. The results revealed that color has an influence across a wide variety of scenes and is directly associated with scene gist.

162 citations


Journal ArticleDOI
TL;DR: Evidence is provided for the existence of an other-age effect (OAE), analogous to the well-documented other-race effect; evidence from Experiment 3 indicates that the OAE obtained with child faces can be modulated by experience.
Abstract: The current study provides evidence for the existence of an other-age effect (OAE), analogous to the well-documented other-race effect. Experiments 1 and 2 demonstrate that adults are better at recognizing adult faces compared with faces of newborns and children. Results from Experiment 3 indicate that the OAE obtained with child faces can be modulated by experience. Moreover, in each of the 3 experiments, differences in the magnitude of the observed face inversion effect for each age class of faces were taken to reflect a difference in the processing strategies used to recognize the faces of each age. Evidence from Experiment 3 indicates that these strategies can be tuned by experience. The data are discussed with reference to an experience-based framework for face recognition.

155 citations


Journal ArticleDOI
TL;DR: Findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech.
Abstract: Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a significantly greater improvement over trials for listeners that heard clear speech presentations before rather than after hearing distorted speech (clear-then-distorted compared with distorted-then-clear feedback, in Experiment 1). This perceptual learning generalized to untrained words suggesting a sublexical locus for learning and was equivalent for word and nonword training stimuli (Experiment 2). These findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.

145 citations


Journal ArticleDOI
TL;DR: In Experiment 1, transpositions within low frequency words led to longer reading times than when letters were transposed within high frequency words, and Experiment 2 demonstrated that the position of word-initial letters is most critical even when parafoveal preview of words to the right of fixation is unavailable.
Abstract: Participants' eye movements were recorded as they read sentences with words containing transposed adjacent letters. Transpositions were either external (e.g., problme, rpoblem) or internal (e.g., porblem, probelm) and at either the beginning (e.g., rpoblem, porblem) or end (e.g., problme, probelm) of words. The results showed disruption for words with transposed letters compared to the normal baseline condition, and the greatest disruption was observed for word-initial transpositions. In Experiment 1, transpositions within low frequency words led to longer reading times than when letters were transposed within high frequency words. Experiment 2 demonstrated that the position of word-initial letters is most critical even when parafoveal preview of words to the right of fixation is unavailable. The findings have important implications for the roles of different letter positions in word recognition and the effects of parafoveal preview on word recognition processes.

141 citations


Journal ArticleDOI
TL;DR: All experiments provided evidence for image-specific picture learning taking place over and above any invariant face learning, with recognition accuracy always highest for the image studied and performance falling across transformations between study and test images.
Abstract: Previous studies examining face learning have mostly used only a single exposure to 1 image of each of the faces to be learned. However, in daily life, faces are usually learned from multiple encounters. These 6 experiments examined the effects on face learning of repeated exposures to single or multiple images of a face. All experiments provided evidence for image-specific picture learning taking place over and above any invariant face learning, with recognition accuracy always highest for the image studied and performance falling across transformations between study and test images. The relative roles of pictorial and structural codes in mediating learning faces from photographs need to be reconsidered.

Journal ArticleDOI
TL;DR: The authors tested the hypothesis that RT-TRCE reflects activated overlearned response category codes in long-term memory and showed that it was absent for tasks for which there were no response codes ready beforehand, and present after these tasks were practiced.
Abstract: Reaction time task rule congruency effects (RT-TRCEs) reflect faster responses to stimuli for which the competing task rules indicate the same correct response than to stimuli indicating conflicting responses. The authors tested the hypothesis that RT-TRCE reflects activated overlearned response category codes in long-term memory (such as up or left). The results support the hypothesis by showing that (a) RT-TRCE was absent for tasks for which there were no response codes ready beforehand, (b) RT-TRCE was present after these tasks were practiced, and (c) these practice effects were found only if the tasks permitted forming abstract response category codes. The increase in the RT-TRCE with response slowness, found only for familiar tasks, suggests that the abstract response category codes may be verbal or linguistic in these cases. The results are discussed in relation to task-switching theories and prefrontal functions.

Journal ArticleDOI
TL;DR: The findings suggest that information about emotional tone is used in the processing of linguistic content influencing the recognition and naming of spoken words in an emotion-congruent manner.
Abstract: The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming latencies were collected. Across experiments, tone of voice was either blocked or mixed with respect to emotional meaning. The results suggest that emotional tone of voice facilitated linguistic processing of emotional words in an emotion-congruent fashion. These findings suggest that information about emotional tone is used in the processing of linguistic content influencing the recognition and naming of spoken words in an emotion-congruent manner.

Journal ArticleDOI
TL;DR: Specific event-related brain potential components, directly linkable to perceptual and response-related processing, were examined during a compound search task; the findings strengthen the DWA by indicating a perceptual origin of dimension change costs in visual search.
Abstract: In cross-dimensional visual search tasks, target discrimination is faster when the previous trial contained a target defined in the same visual dimension as the current trial. The dimension-weighting account (DWA; A. Found & H. J. Muller, 1996) explains this intertrial facilitation by assuming that visual dimensions are weighted at an early perceptual stage of processing. Recently, this view has been challenged by models claiming that intertrial facilitation effects are generated at later stages that follow attentional target selection (K. Mortier, J. Theeuwes, & P. A. Starreveld, 2005). To determine whether intertrial facilitation is generated at a perceptual stage, at the response selection stage, or both, the authors focused on specific event-related brain potential components (directly linkable to perceptual and response-related processing) during a compound search task. Visual dimension repetitions were mirrored by shorter latencies and enhanced amplitudes of the N2-posterior-contralateral, suggesting a facilitated allocation of attentional resources to the target. Response repetitions and changes systematically modulated the lateralized readiness potential amplitude, suggesting a benefit from residual activations of the previous trial biasing the correct response. Overall, the present findings strengthen the DWA by indicating a perceptual origin of dimension change costs in visual search.

Journal ArticleDOI
TL;DR: A case is made against current theoretical views on imitation and direct matching in favor of more flexible models of perception-action coupling; a Simon effect was identified to underlie faster responses in the imitation task.
Abstract: A robust finding in imitation literature is that people perform their actions more readily if they are congruent with the behavior of another person. These action congruency effects are typically explained by the idea that the observation of someone else acting automatically activates our motor system in a directly matching way. In the present study action congruency effects were investigated between an imitation task and a complementary action task. Subjects imitated or complemented a virtual actor's grasp on a manipulandum. In both tasks, a color-cue could be presented forcing subjects to ignore the task rule and execute a predefined grasp. Reaction times revealed a reversal of congruency effects in the complementary action task, suggesting that subjects were able to circumvent the automatic tendency to copy actions or postures of another person. In 2 additional control experiments, congruency effects were replicated, and a Simon effect was identified to underlie faster responses in the imitation task. These results make a case against current theoretical views on imitation and direct matching in favor of more flexible models of perception-action coupling.

Journal ArticleDOI
TL;DR: This work replicated the V. Goffaux and B. Rossion (2006) results, finding a greater alignment effect in accuracy for LSF compared with HSF faces on same trials, but there was also a greater bias for responding "same" for HSF compared with LSF faces, indicating that the alignment effects arose from differential response biases.
Abstract: V. Goffaux and B. Rossion (2006) argued that holistic processing of faces is largely supported by low spatial frequencies (LSFs) but less so by high spatial frequencies (HSFs). We addressed this claim using a sequential matching task with face composites. Observers judged whether the top halves of aligned or misaligned composites were identical. We replicated the V. Goffaux and B. Rossion (2006) results, finding a greater alignment effect in accuracy for LSF compared with HSF faces on same trials. However, there was also a greater bias for responding "same" for HSF compared with LSF faces, indicating that the alignment effects arose from differential response biases. Crucially, comparable congruency effects found for LSF and HSF suggest that LSF and HSF faces are processed equally holistically. These results demonstrate that it is necessary to use measures that take response biases into account in order to fully understand the holistic nature of face processing.
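The response-bias argument above rests on standard signal-detection measures. As a minimal sketch, the criterion c quantifies a bias toward responding "same"; this is the textbook formula, not necessarily the exact computation the authors used, and the rates below are hypothetical:

```python
from statistics import NormalDist

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection response criterion: c = -(z(H) + z(FA)) / 2.

    Negative c indicates a liberal bias (more "same"/"yes" responses);
    positive c indicates a conservative bias.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical illustration (not the paper's data): an observer with
# many hits AND many false alarms is biased toward "same".
print(criterion(0.9, 0.4))
```

An unbiased observer (hit rate and false-alarm rate symmetric about 0.5) yields c = 0, which is why bias-free measures such as d' are needed alongside raw accuracy.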

Journal ArticleDOI
TL;DR: It is demonstrated that storage of items in visual WM can be enhanced if robust visual representations of them already exist in long-term memory.
Abstract: Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or inverted and a low- or high-load concurrent verbal WM task was administered to suppress contribution from verbal WM. Even with a high verbal memory load, visual WM performance was significantly better and capacity estimated as significantly greater for famous versus unfamiliar faces. Face inversion abolished this effect. Thus, neither strategic, explicit support from verbal WM nor low-level feature processing easily accounts for the observed benefit of high familiarity for visual WM. These results demonstrate that storage of items in visual WM can be enhanced if robust visual representations of them already exist in long-term memory.
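Capacity estimates in whole-display change detection, like the one referenced above, are commonly derived with Cowan's K. The abstract does not state which estimator the authors used, so the following is a sketch of the standard formula only, with made-up rates:

```python
def cowan_k(hit_rate: float, fa_rate: float, set_size: int) -> float:
    """Cowan's K for whole-display change detection:
    K = N * (H - FA), where N is the number of items in the display.
    """
    return set_size * (hit_rate - fa_rate)

# Hypothetical illustration: 4 faces, 90% hits, 10% false alarms.
print(cowan_k(0.9, 0.1, 4))
```

On this measure, "greater capacity for famous faces" would simply mean a larger K at the same set size.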

Journal ArticleDOI
TL;DR: In this paper, a series of behavioral experiments tested whether direct matching, as measured by motor priming, can be modulated by inferred action goals and attributed intentions, and found that direct matching can be top-down modulated by the observer's interpretation of the observed movement as intended or not.
Abstract: Converging evidence has shown that action observation and execution are tightly linked. The observation of an action directly activates an equivalent internal motor representation in the observer (direct matching). However, whether direct matching is primarily driven by basic perceptual features of the observed movement or is influenced by more abstract interpretative processes is an open question. A series of behavioral experiments tested whether direct matching, as measured by motor priming, can be modulated by inferred action goals and attributed intentions. Experiment 1 tested whether observing an unsuccessful attempt to execute an action is sufficient to produce a motor-priming effect. Experiment 2 tested alternative perceptual explanations for the observed findings. Experiment 3 investigated whether the attribution of intention modulates motor priming by comparing motor-priming effects during observation of intended and unintended movements. Experiment 4 tested whether participants' interpretation of the movement as triggered by an external source or the actor's intention modulates the motor-priming effect by a pure instructional manipulation. Our findings support a model in which direct matching can be top-down modulated by the observer's interpretation of the observed movement as intended or not.

Journal ArticleDOI
TL;DR: As the delay increased, the probability of responding on stop trials changed very little, but GO2 task reaction times decreased substantially, which is consistent with both a nondeterministic serial model and a limited-capacity parallel model.
Abstract: Multitasking was studied in the stop-change paradigm, in which the response for a primary GO1 task had to be stopped and replaced by a response for a secondary GO2 task on some trials. In 2 experiments, the delay between the stop signal and the change signal was manipulated to determine which task goals (GO1, GO2, or STOP) were involved in performance and to determine whether the goals were activated in series or in parallel. As the delay increased, the probability of responding on stop trials changed very little, but GO2 task reaction times decreased substantially. Such effects are consistent with both a nondeterministic serial model (in which the GO1 goal is replaced by the STOP goal, which is subsequently replaced by the GO2 goal) and a limited-capacity parallel model (in which stopping and GO2 processing occur concurrently) with a capacity-sharing proportion that resembles serial processing.

Journal ArticleDOI
TL;DR: The study points to the profound reliance on phonatory and manual motor processing--a dual-route stratagem--used during music reading and further explores the phonatory nature of notational audiation with throat-audio and larynx-electromyography measurement.
Abstract: This study investigated the mental representation of music notation. Notational audiation is the ability to internally "hear" the music one is reading before physically hearing it performed on an instrument. In earlier studies, the authors claimed that this process engages music imagery contingent on subvocal silent singing. This study refines the previously developed embedded melody task and further explores the phonatory nature of notational audiation with throat-audio and larynx-electromyography measurement. Experiment 1 corroborates previous findings and confirms that notational audiation is a process engaging kinesthetic-like covert excitation of the vocal folds linked to phonatory resources. Experiment 2 explores whether covert rehearsal with the mind's voice also involves actual motor processing systems and suggests that the mental representation of music notation cues manual motor imagery. Experiment 3 verifies findings of both Experiments 1 and 2 with a sample of professional drummers. The study points to the profound reliance on phonatory and manual motor processing--a dual-route stratagem--used during music reading. Further implications concern the integration of auditory and motor imagery in the brain and cross-modal encoding of a unisensory input.

Journal ArticleDOI
TL;DR: These experiments clearly demonstrate that high perceptual load determines conscious perception, impairing the ability to merely detect the presence of a stimulus—a phenomenon of load-induced blindness.
Abstract: Although the perceptual load theory of attention has stimulated a great deal of research, evidence for the role of perceptual load in determining perception has typically relied on indirect measures that infer perception from distractor effects on reaction times or neural activity (see N. Lavie, 2005, for a review). Here we varied the level of perceptual load in a letter-search task and assessed its effect on the conscious perception of a search-irrelevant shape stimulus appearing in the periphery, using a direct measure of awareness (present/absent reports). Detection sensitivity (d') was consistently reduced with high, compared to low, perceptual load but was unaffected by the level of working memory load. Because alternative accounts in terms of expectation, memory, response bias, and goal-neglect due to the more strenuous high load task were ruled out, these experiments clearly demonstrate that high perceptual load determines conscious perception, impairing the ability to merely detect the presence of a stimulus--a phenomenon of load induced blindness.
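The detection sensitivity d' used above is the standard signal-detection index computed from hit and false-alarm rates. A minimal sketch of the standard formula, with hypothetical rates rather than the paper's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical illustration: load-induced blindness would appear as
# lower d' under high perceptual load than under low load.
low_load = d_prime(0.9, 0.1)
high_load = d_prime(0.6, 0.4)
print(low_load > high_load)
```

Because d' separates sensitivity from response bias, a drop in d' under high load supports a genuine perceptual effect rather than a shift in willingness to report "present".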

Journal ArticleDOI
TL;DR: A steering model based on active gaze sampling is proposed, informed by the experimental conditions and consistent with observations in free-gaze experiments and with recommendations from real-world high-speed steering.
Abstract: The authors examined observers steering through a series of obstacles to determine the role of active gaze in shaping locomotor trajectories. Participants sat on a bicycle trainer integrated with a large field-of-view simulator and steered through a series of slalom gates. Steering behavior was determined by examining the passing distance through gates and the smoothness of trajectory. Gaze monitoring revealed which slalom targets were fixated and for how long. Participants tended to track the most immediate gate until it was about 1.5 s away, at which point gaze switched to the next slalom gate. To probe this gaze pattern, the authors then introduced a number of experimental conditions that placed spatial or temporal constraints on where participants could look and when. These manipulations resulted in systematic steering errors when observers were forced to use unnatural looking patterns, but errors were reduced when peripheral monitoring of obstacles was allowed. A steering model based on active gaze sampling is proposed, informed by the experimental conditions and consistent with observations in free-gaze experiments and with recommendations from real-world high-speed steering.

Journal ArticleDOI
TL;DR: The results suggest that automatic imitation is modulated by top-down influences, coding actions in terms of both movements and goals depending on the focus of attention.
Abstract: Recent behavioral, neuroimaging, and neurophysiological research suggests a common representational code mediating the observation and execution of actions; yet, the nature of this representational code is not well understood. The authors address this question by investigating (a) whether this observation-execution matching system (or mirror system) codes both the constituent movements of an action as well as its goal and (b) how such sensitivity is influenced by top-down effects of instructions. The authors tested the automatic imitation of observed finger actions while manipulating whether the movements were biomechanically possible or impossible, but holding the goal constant. When no mention was made of this difference (Experiment 1), comparable automatic imitation was elicited from possible and impossible actions, suggesting that the actions had been coded at the level of the goal. When attention was drawn to this difference (Experiment 2), however, only possible movements elicited automatic imitation. This sensitivity was specific to imitation, not affecting spatial stimulus-response compatibility (Experiment 3). These results suggest that automatic imitation is modulated by top-down influences, coding actions in terms of both movements and goals depending on the focus of attention.

Journal ArticleDOI
TL;DR: It is suggested that the aftereffects of successful response inhibition are primarily due to repetition priming, although there was evidence for between-trial control adjustments when inhibition failed.
Abstract: Repetition priming and between-trial control adjustments after successful and unsuccessful response inhibition were studied in the stop-signal paradigm. In 5 experiments, the authors demonstrated that response latencies increased after successful inhibition compared with trials that followed no-signal trials. However, this effect was found only when the stimulus (Experiments 1A-4) or stimulus category (Experiment 3) was repeated. Slightly different results were found after trials on which the response inhibition failed. In Experiments 1A, 2, and 4, response latencies increased after unsuccessful inhibition trials compared with after no-inhibition trials, and this happened whether or not the stimulus repeated. Based on these results, we suggest that the aftereffects of successful response inhibition are primarily due to repetition priming, although there was evidence for between-trial control adjustments when inhibition failed.

Journal ArticleDOI
TL;DR: The results suggested that object-based attention operates via the spread of attention within an attended object rather than via the prioritization of attentional search.
Abstract: The authors investigated 2 effects of object-based attention: the spread of attention within an attended object and the prioritization of search across possible target locations within an attended object. Participants performed a flanker task in which the location of the task-relevant target was fixed and known to participants. A spreading attention account predicts that object-based attention will arise from the spread of attention through an attended object. A prioritization account predicts that there will be little, if any, object-based effect because the location of the target is known in advance and objects are not required to prioritize the deployment of attentional search. The results suggested that object-based attention operates via the spread of attention within an object.

Journal ArticleDOI
TL;DR: The results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary, which may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability.
Abstract: Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray, M. K. Tanenhaus, & R. N. Aslin, 2002). Experiments 2-5 used synthetic VOT continua to examine effects of response alternatives (2 vs. 4), task (lexical vs. phoneme decision), and type of token (word vs. consonant-vowel). A gradient effect of VOT in at least one half of the continuum was observed in all conditions. These results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary. This gradient processing may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability.
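The claim that lexical competitors are activated in proportion to their continuous distance from a category boundary can be illustrated with a minimal sketch. The logistic form, the 20-ms boundary, and the slope below are illustrative assumptions, not parameters from the study.

```python
import math

def competitor_activation(vot_ms, boundary_ms=20.0, slope=0.25):
    """Graded activation of two lexical competitors (e.g., a /b/-initial vs. a
    /p/-initial word) as a logistic function of the token's voice-onset time
    (VOT) distance from an assumed category boundary. All parameter values
    are hypothetical placeholders for illustration only."""
    p_voiceless = 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary_ms)))
    return {"voiced": 1.0 - p_voiceless, "voiceless": p_voiceless}

# A token at the boundary activates both competitors equally; tokens far from
# the boundary activate one competitor almost exclusively, but activation
# changes gradiently in between rather than categorically.
print(competitor_activation(20.0))
print(competitor_activation(25.0))
```

Under this sketch, a 25-ms token still partially activates the voiced competitor, which is the gradient pattern the eye-movement data are taken to reflect.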

Journal ArticleDOI
TL;DR: Across experiments, errors in decisions to reach through too-small apertures were likely due to low penalty for error, and it was shown that participants recalibrate motor decisions to take changing body dimensions into account.
Abstract: Affordances—possibilities for action—are constrained by the match between actors and their environments. For motor decisions to be adaptive, affordances must be detected accurately. Three experiments examined the correspondence between motor decisions and affordances as participants reached through apertures of varying size. A psychophysical procedure was used to estimate an affordance threshold for each participant (smallest aperture they could fit their hand through on 50% of trials), and motor decisions were assessed relative to affordance thresholds. Experiment 1 showed that participants scale motor decisions to hand size, and motor decisions and affordance thresholds are reliable over two blocked protocols. Experiment 2 examined the effects of habitual practice: Motor decisions were equally accurate when reaching with the more practiced dominant hand and less practiced nondominant hand. Experiment 3 showed that participants recalibrate motor decisions to take changing body dimensions into account: Motor decisions while wearing a hand-enlarging prosthesis were similar to motor decisions without the prosthesis when data were normalized to affordance thresholds. Across experiments, errors in decisions to reach through too-small apertures were likely due to low penalty for error.
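The affordance threshold described above (the smallest aperture a participant fits a hand through on 50% of trials) can be estimated from binary fit/no-fit data. A minimal sketch, using linear interpolation between tested aperture sizes; the sizes and success proportions below are invented for illustration and do not come from the study.

```python
def affordance_threshold(apertures, p_success, criterion=0.5):
    """Interpolate the aperture size at which the proportion of successful
    reaches crosses `criterion`. `apertures` must be sorted ascending and
    `p_success` must rise monotonically with aperture size."""
    pairs = list(zip(apertures, p_success))
    for (a0, p0), (a1, p1) in zip(pairs, pairs[1:]):
        if p0 <= criterion <= p1:
            # Linear interpolation between the two bracketing apertures.
            return a0 + (criterion - p0) * (a1 - a0) / (p1 - p0)
    raise ValueError("criterion not bracketed by the data")

# Hypothetical data: aperture sizes in cm and proportion of successful fits.
sizes = [4.0, 5.0, 6.0, 7.0, 8.0]
props = [0.0, 0.2, 0.6, 0.9, 1.0]
print(affordance_threshold(sizes, props))  # → 5.75
```

Normalizing each participant's motor decisions to a threshold estimated this way is what allows comparisons across hands and across the hand-enlarging prosthesis conditions.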

Journal ArticleDOI
Ed Symes1, Mike Tucker1, Rob Ellis1, Lari Vainio1, Giovanni Ottoboni1 
TL;DR: The authors report data in which participants planned a grasp prior to the onset of a change-blindness scene in which 1 of 12 objects changed identity; change blindness was substantially reduced for grasp-congruent objects.
Abstract: A series of experiments provided converging support for the hypothesis that action preparation biases selective attention to action-congruent object features. When visual transients are masked in so-called change-blindness scenes, viewers are blind to substantial changes between 2 otherwise identical pictures that flick back and forth. The authors report data in which participants planned a grasp prior to the onset of a change-blindness scene in which 1 of 12 objects changed identity. Change blindness was substantially reduced for grasp-congruent objects (e.g., planning a whole-hand grasp reduced change blindness to a changing apple). A series of follow-up experiments ruled out an alternative explanation that this reduction had resulted from a labeling or strategizing of responses and provided converging support that the effect genuinely arose from grasp planning.

Journal ArticleDOI
TL;DR: Manipulation of sentence context demonstrated that parafoveal word length information can be used in combination with sentence context to narrow down lexical candidates.
Abstract: Eye movements were monitored in 4 experiments that explored the role of parafoveal word length in reading. The experiments employed a type of compound word where the deletion of a letter results in 2 short words (e.g., backhand, back and). The boundary technique (K. Rayner, 1975) was employed to manipulate word length information in the parafovea. Inaccurate parafoveal word length previews disrupted landing positions and fixation durations. This disruption was larger for 2-word targets, but the results demonstrated that this interaction was not due to the morphological status of the target words. Manipulation of sentence context also demonstrated that parafoveal word length information can be used in combination with sentence context to narrow down lexical candidates. The 4 experiments converge in demonstrating that an important role of parafoveal word length information is to direct the eyes to the center of the parafoveal word.

Journal ArticleDOI
TL;DR: It was found that memory retrieval can slow responses for 1-20 trials after successful inhibition, which suggests the automatic retrieval of task goals; the authors concluded that cognitive control can rely on both memory retrieval and executive processes.
Abstract: Cognitive control theories attribute control to executive processes that adjust and control behavior online. Theories of automaticity attribute control to memory retrieval. In the present study, online adjustments and memory retrieval were examined, and their roles in controlling performance in the stop-signal paradigm were elucidated. There was evidence of short-term response time adjustments after unsuccessful stopping. In addition, it was found that memory retrieval can slow responses for 1-20 trials after successful inhibition, which suggests the automatic retrieval of task goals. On the basis of these findings, the authors concluded that cognitive control can rely on both memory retrieval and executive processes.