
Showing papers in "Attention Perception & Psychophysics in 2000"


Journal ArticleDOI
TL;DR: This article outlines, reviews, and evaluates three new models of backward masking: an extension of the dual-channel approach as realized in the neural network model of retino-cortical dynamics, the perceptual retouch theory, and the boundary contour system.
Abstract: Visual backward masking not only is an empirically rich and theoretically interesting phenomenon but also has found increasing application as a powerful methodological tool in studies of visual information processing and as a useful instrument for investigating visual function in a variety of specific subject populations. Since the dual-channel, sustained-transient approach to visual masking was introduced about two decades ago, several new models of backward masking and metacontrast have been proposed as alternative approaches to visual masking. In this article, we outline, review, and evaluate three such approaches: an extension of the dual-channel approach as realized in the neural network model of retino-cortical dynamics (Ogmen, 1993), the perceptual retouch theory (Bachmann, 1984, 1994), and the boundary contour system (Francis, 1997; Grossberg & Mingolla, 1985b). Recent psychophysical and electrophysiological findings relevant to backward masking are reviewed and, whenever possible, are related to the aforementioned models. Besides noting the positive aspects of these models, we also list their problems and suggest changes that may improve them and experiments that can empirically test them.

467 citations


Journal ArticleDOI
TL;DR: Two components of stimulation (presumably vibrational and spatial) contribute to texture perception, as Katz maintained; mechanisms for responding to the latter appear to be engaged at texture element sizes down to 100 μm, a surprisingly small value.
Abstract: Three experiments are reported bearing on Katz's hypothesis that tactile texture perception is mediated by vibrational cues in the case of fine textures and by spatial cues in the case of coarse textures. Psychophysical responses when abrasive surfaces moved across the skin were compared with those obtained during static touch, which does not provide vibrational cues. Experiment 1 used two-interval forced-choice procedures to measure discrimination of surfaces. Fine surfaces that were readily discriminated when moved across the skin became indistinguishable in the absence of movement; coarse surfaces, however, were equally discriminable in moving and stationary conditions. This was shown not to result from any inherently greater difficulty of fine-texture discrimination. Experiments 2 and 3 used free magnitude estimation to obtain a more comprehensive picture of the effect of movement on texture (roughness) perception. Without movement, perception was seriously degraded (the psychophysical magnitude function was flattened) for textures with element sizes below 100 microns; above this point, however, the elimination of movement produced an overall decrease in roughness, but not in the slope of the magnitude function. Thus, two components of stimulation (presumably vibrational and spatial) contribute to texture perception, as Katz maintained; mechanisms for responding to the latter appear to be engaged at texture element sizes down to 100 microns, a surprisingly small value.
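The "flattening" of the psychophysical magnitude function described above is just the slope of a Stevens power-law fit in log-log coordinates. A minimal sketch of that fit (hypothetical data and function name; the paper does not specify its fitting routine):

```python
import numpy as np

def magnitude_function_slope(element_sizes_um, magnitude_estimates):
    """Slope of the psychophysical magnitude function in log-log coordinates.

    Under Stevens' power law, psi = k * size**b, so b is the slope of
    log(psi) against log(size). A flattened function means b near zero.
    """
    log_x = np.log(np.asarray(element_sizes_um, dtype=float))
    log_y = np.log(np.asarray(magnitude_estimates, dtype=float))
    b, log_k = np.polyfit(log_x, log_y, 1)  # degree-1 fit: [slope, intercept]
    return b
```

Comparing this slope for moving versus static touch, separately above and below the 100-micron element size, is one way to express the dissociation the experiments report.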

361 citations


Journal ArticleDOI
TL;DR: The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation and accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
Abstract: Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
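The model's decision rule is easy to state computationally: each display element elicits an independent noisy value on each relevant feature dimension, the values are combined into a single decision variable per element, and the element with the maximum value determines the response. A sketch under assumed parameters (Gaussian noise, sum combination; function and parameter names are illustrative, not the authors' code):

```python
import numpy as np

def max_rule_accuracy(n_elements, target_signal, n_dims=2, sigma=1.0,
                      n_trials=20000, seed=0):
    """Predicted localization accuracy under a max-rule signal detection model."""
    rng = np.random.default_rng(seed)
    # Each element elicits a noisy representation on each feature dimension.
    evidence = rng.normal(0.0, sigma, size=(n_trials, n_elements, n_dims))
    # The target (element 0) carries extra signal on the relevant dimensions.
    evidence[:, 0, :] += target_signal
    # Combine across dimensions into one decision variable per element;
    # the element with the maximum value determines the response.
    decision = evidence.sum(axis=2)
    return float(np.mean(decision.argmax(axis=1) == 0))
```

In this scheme accuracy falls with display size purely because more distractors get a chance to exceed the target, which is how the model accounts for performance degradation without a serial binding mechanism.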

279 citations


Journal ArticleDOI
TL;DR: It is concluded that the sticky/slippery dimension is perceptually weighted less than the rough/smooth and soft/hard dimensions, materially contributing to the structure of perceptual space only in some individuals.
Abstract: Ratio scaling was used to obtain from 5 subjects estimates of the subjective dissimilarity between the members of all possible pairs of 17 tactile surfaces. The stimuli were a diverse array of everyday surfaces, such as corduroy, sandpaper, and synthetic fur. The results were analyzed using the multidimensional scaling (MDS) program ALSCAL. There was substantial, but not complete, agreement across subjects in the spatial arrangement of perceived textures. Scree plots and multivariate analysis suggested that, for some subjects, a two-dimensional space was the optimal MDS solution, whereas for other subjects, a three-dimensional space was indicated. Subsequent to their dissimilarity scaling, subjects rated each stimulus on each of five adjective scales. Consistent with earlier research, two of these (rough/smooth and soft/hard) were robustly related to the space for all subjects. A third scale, sticky/slippery, was more variably related to the dissimilarity data: regressed into three-dimensional MDS space, it was angled steeply into the third dimension only for subjects whose scree plots favored a nonplanar solution. We conclude that the sticky/slippery dimension is perceptually weighted less than the rough/smooth and soft/hard dimensions, materially contributing to the structure of perceptual space only in some individuals.
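ALSCAL is a nonmetric scaling program, but the core idea of recovering a spatial configuration from pairwise dissimilarities can be illustrated with classical (Torgerson) MDS, a simplified metric stand-in (the function name is illustrative):

```python
import numpy as np

def classical_mds(dissimilarity, n_components=2):
    """Torgerson's classical MDS: coordinates from a dissimilarity matrix.

    A metric stand-in for the nonmetric ALSCAL procedure used in the paper.
    """
    n = dissimilarity.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    b = -0.5 * j @ (dissimilarity ** 2) @ j        # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)                 # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]  # keep the largest ones
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))
```

Regressing each adjective scale (rough/smooth, soft/hard, sticky/slippery) onto the recovered coordinates then shows how steeply that scale is angled through the solution, which is the analysis the abstract describes.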

277 citations


Journal ArticleDOI
TL;DR: Repetition of the target-defining feature influences target selection not only in search guided by bottom-up factors but also in search guided by top-down factors; more than one mechanism may underlie the repetition effect, but assuming a unitary mechanism, a short-term episodic memory account is proposed.
Abstract: Maljkovic and Nakayama (1994) demonstrated an automatic benefit of repeating the defining feature of the target in search guided by salience. Thus, repetition influences target selection in search guided by bottom-up factors. Four experiments demonstrate this repetition effect in search guided by top-down factors, and so the repetition effect is not merely part of the mechanism for determining what display elements are salient. The effect is replicated in singleton search and in three situations requiring different degrees of top-down guidance: when the feature defining the target is less salient than the feature defining the response, when there is more than one singleton in the defining dimension, and when the target is defined by a conjunction of features. Repetition does not change the priorities of targets, relative to distractors: Display size affects search equally whether the target is repeated or changed. More than one mechanism may underlie the repetition effect in different experiments, but assuming that there is a unitary mechanism, a short-term episodic memory mechanism is proposed.

271 citations


Journal ArticleDOI
TL;DR: It is shown that despite prior familiarization with a point-light figure at all orientations, its detectability within a mask decreased with a change in orientation from upright to a range of 90°–180°, and top-down influence on the perception of biological motion is limited by display orientation.
Abstract: We addressed the issue of how display orientation affects the perception of biological motion. In Experiment 1, spontaneous recognition of a point-light walker improved abruptly with image-plane display rotation from inverted to upright orientation. Within a range of orientations from 180° to 90°, it was dramatically impeded. Using ROC analysis, we showed (Experiments 2 and 3) that despite prior familiarization with a point-light figure at all orientations, its detectability within a mask decreased with a change in orientation from upright to a range of 90°–180°. In Experiment 4, a priming effect in biological motion was observed only if a prime corresponded to a range of deviations from upright orientation within which the display was spontaneously recognizable. The findings indicate that display orientation nonmonotonically affects the perception of biological motion. Moreover, top-down influence on the perception of biological motion is limited by display orientation.
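ROC analysis of the kind used in Experiments 2 and 3 summarizes detectability independently of response bias; under the equal-variance Gaussian model the standard index is d', the separation between the z-transformed hit and false-alarm rates. A minimal sketch (not the authors' code):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Detectability d' under the equal-variance Gaussian signal detection model."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)
```

With hits at .84 and false alarms at .16, d' is roughly 2; equal hit and false-alarm rates give d' = 0. Plotting d' against display orientation would trace the nonmonotonic detectability profile the abstract reports.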

263 citations


Journal ArticleDOI
TL;DR: It is demonstrated that phonetic priming does not depend on target degradation and that it affects processing time, and that PARSYN—a connectionist instantiation of the neighborhood activation model—accurately simulates the observed pattern of priming.
Abstract: Perceptual identification of spoken words in noise is less accurate when the target words are preceded by spoken phonetically related primes (Goldinger, Luce, & Pisoni, 1989). The present investigation replicated and extended this finding. Subjects shadowed target words presented in the clear that were preceded by phonetically related or unrelated primes. In addition, primes were either higher or lower in frequency than the target words. Shadowing latencies were significantly longer for target words preceded by phonetically related primes, but only when the prime-target interstimulus interval was short (50 vs. 500 msec). These results demonstrate that phonetic priming does not depend on target degradation and that it affects processing time. We further demonstrated that PARSYN—a connectionist instantiation of the neighborhood activation model—accurately simulates the observed pattern of priming.

251 citations


Journal ArticleDOI
TL;DR: It is concluded that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.
Abstract: It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

251 citations


Journal ArticleDOI
TL;DR: The results suggest that the necessity to perceive speech without hearing can be associated with enhanced visual phonetic perception in some individuals.
Abstract: In this study of visual phonetic speech perception without accompanying auditory speech stimuli, adults with normal hearing (NH;n=96) and with severely to profoundly impaired hearing (IH;n=72) identified consonant-vowel (CV) nonsense syllables and words in isolation and in sentences. The measures of phonetic perception were the proportion of phonemes correct and the proportion of transmitted feature information for CVs, the proportion of phonemes correct for words, and the proportion of phonemes correct and the amount of phoneme substitution entropy for sentences. The results demonstrated greater sensitivity to phonetic information in the IH group. Transmitted feature information was related to isolated word scores for the IH group, but not for the NH group. Phoneme errors in sentences were more systematic in the IH than in the NH group. Individual differences in phonetic perception for CVs were more highly associated with word and sentence performance for the IH than for the NH group. The results suggest that the necessity to perceive speech without hearing can be associated with enhanced visual phonetic perception in some individuals.

245 citations


Journal ArticleDOI
TL;DR: Blind Braille readers and age-matched sighted subjects were compared on three tactile tasks using precisely specified stimuli; the blind initially outperformed the sighted at a hyperacuity task using Braille-like dot patterns, although, with practice, both groups performed equally well.
Abstract: It is not clear whether the blind are generally superior to the sighted on measures of tactile sensitivity or whether they excel only on certain tests owing to the specifics of their tactile experience. We compared the discrimination performance of blind Braille readers and age-matched sighted subjects on three tactile tasks using precisely specified stimuli. Initially, the blind significantly outperformed the sighted at a hyperacuity task using Braille-like dot patterns, although, with practice, both groups performed equally well. On two other tasks, hyperacute discrimination of gratings that differed in ridge width and spatial-acuity-dependent discrimination of grating orientation, the performance of the blind did not differ significantly from that of sighted subjects. These results probably reflect the specificity of perceptual learning due to Braille-reading experience.

193 citations


Journal ArticleDOI
TL;DR: The results from three experiments showed that, as measured by the lateral occipital P1 and N1 ERP components, the magnitude of spatially selective processing in extrastriate visual cortex increased with perceptual load, suggesting a relatively broader model, where perceptual load is one of many factors mediating early selection.
Abstract: Behavioral data have suggested that perceptual load can modulate spatial selection by influencing the allocation of attentional resources at perceptual-level processing stages (Lavie & Tsal, 1994). To directly test this hypothesis, event-related potentials (ERPs) were recorded for both low- and high-perceptual-load targets in a probabilistic spatial cuing paradigm. The results from three experiments showed that, as measured by the lateral occipital P1 and N1 ERP components, the magnitude of spatially selective processing in extrastriate visual cortex increased with perceptual load. Furthermore, these effects on spatial selection were found in the P1 at lower levels of perceptual load than in the N1. The ERP data thus provide direct electrophysiological support for proposals that link perceptual load to early spatial selection in visual processing. However, our findings suggest a relatively broader model, where perceptual load is but one of many factors mediating early selection.

Journal ArticleDOI
TL;DR: The experiments reported here show that the detection of a dim probe dot is impaired when it falls at the location of an old object but that this occurs only in conditions in which it is advantageous for subjects to mark (inhibit) old objects.
Abstract: Watson and Humphreys (1997, 1998) have recently demonstrated that new objects can be prioritized for visual attentional processing by the top-down attentional inhibition of old objects already in the field, a mechanism they called visual marking. The experiments reported here show that the detection of a dim probe dot is impaired when it falls at the location of an old object (Experiments 1 and 3) but that this occurs only in conditions in which it is advantageous for subjects to mark (inhibit) old objects (Experiment 2). These results further support previous work showing that visual marking is based on the inhibition of the locations of old objects and that visual marking can be flexibly applied (or withheld), depending on the goals of current behavior.

Journal ArticleDOI
TL;DR: Four experiments explored inhibitory mechanisms related to attentional selection by viewing multielement displays and performing a form discrimination task involving a probe element and demonstrating that the extent of the inhibitory region is spatially mediated.
Abstract: Four experiments explored inhibitory mechanisms related to attentional selection. Observers viewed multielement displays and performed a form discrimination task involving a probe element. Also present in the stimulus display was a singleton element (possessing a unique color or orientation). In Experiments 1-3, probe discrimination performance was measured as a function of the distance between the probe and the singleton. Experiment 1 revealed that probe discriminations suffered when the probe was adjacent to the singleton, but improved as the spatial separation between the probe and attentionally salient singleton increased. Experiment 2 added a control condition, revealing that probe discriminations were inhibited near the singleton, but returned to control level performance with increased separation. Further, the amount of inhibition increased with larger stimulus onset asynchronies between the singleton and probe. Experiment 3 demonstrated that the extent of the inhibitory region is spatially mediated. In Experiment 4, the task was modified to one of probe detection. No inhibition was observed in the detection task, indicating that the decrease in probe discrimination performance observed in Experiments 1-3 was not due to observers' inability to detect the probe element.

Journal ArticleDOI
TL;DR: The findings demonstrate that spectral centroid and rise time represent principal perceptual dimensions of timbre, independent of musical training, but that the tendency to group timbres according to source properties increases with acoustic complexity.
Abstract: The goal of a series of listening tests was to better isolate the principal dimensions of timbre, using a wide range of timbres and converging psychophysical techniques. Expert musicians and nonmusicians rated the timbral similarity of three sets of pitched and percussive instruments. Multidimensional scaling analyses indicated that both centroid and rise time comprise the principal acoustic factors across all stimulus sets and that musicians and nonmusicians did not differ significantly in their weighting of these factors. Clustering analyses revealed that participants also categorized percussive and, to a much lesser extent, pitched timbres according to underlying physical-acoustic commonalties. The findings demonstrate that spectral centroid and rise time represent principal perceptual dimensions of timbre, independent of musical training, but that the tendency to group timbres according to source properties increases with acoustic complexity.
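Of the two acoustic factors identified, the spectral centroid has a simple definition: the amplitude-weighted mean frequency of the magnitude spectrum. A minimal sketch (hypothetical function name; the paper's stimuli were real instrument tones, not synthetic signals):

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency (Hz) of a signal's magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))
```

A pure 440 Hz tone has a centroid of 440 Hz; brighter timbres, with more high-frequency energy, push the centroid upward, which is the dimension listeners appear to weight alongside rise time.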

Journal ArticleDOI
TL;DR: Evidence of inhibitory tagging could be observed only when the items of the search tasks were maintained until the responses for the small probes were made, and this appeared to be an object-based effect.
Abstract: Klein (1988) reported that inhibitory tagging (i.e., inhibition of return in visual search) made reaction times for the detection of small probes increase at locations where there had previously been rejected items in serial visual search. It is reasonable that the attended and rejected locations are inhibited. However, subsequent studies did not support Klein's idea. In these studies, inhibitory tagging was tested after removing the items from the search tasks. The paradigms in these studies were not appropriate for testing an object-based inhibitory effect because the objects (i.e., items) were removed from the display. In the present study, we found that evidence of inhibitory tagging could be observed only when the items of the search tasks were maintained until the responses for the small probes were made. This appeared to be an object-based effect.

Journal ArticleDOI
TL;DR: This theory proposes that retinal (as opposed to extraretinal) factors, primarily those concerning the saccade target object, are critical for the detection of intrasaccadic stimulus shifts.
Abstract: Although the proximal stimulus shifts position on our retinae with each saccade, we perceive our world as stable and continuous. Most theories of visual stability implicitly assume a mechanism that spatially adjusts perceived locations associated with the retinal array by using, as a parameter, extraretinal eye position information, a signal that encodes the size and direction of the saccade. The results from the experiment reported in this article challenge this idea. During a participant’s saccade to a target object, one of the following was displaced: the entire scene, the target object, or the background behind the target object. Participants detected the displacement of the target object twice as frequently as the displacement of the entire background. The direction of displacement relative to the saccade also affected detectability. We use a new theory, the saccade target theory (McConkie & Currie, 1996), to interpret these results. This theory proposes that retinal (as opposed to extraretinal) factors, primarily those concerning the saccade target object, are critical for the detection of intrasaccadic stimulus shifts.

Journal ArticleDOI
TL;DR: Results demonstrate that training redirects listeners’ attention to acoustic cues and that this shift of attention generalizes to novel (untrained) phonetic contexts.
Abstract: Learning new phonetic categories in a second language may be thought of in terms of learning to focus one’s attention on those parts of the acoustic-phonetic structure of speech that are phonologically relevant in any given context. As yet, however, no study has demonstrated directly that training can shift listeners’ attention between acoustic cues given feedback about the linguistic phonetic category alone. In this paper we discuss the results of a training study in which subjects learned to shift their attention from one acoustic cue to another using only category-level identification as feedback. Results demonstrate that training redirects listeners’ attention to acoustic cues and that this shift of attention generalizes to novel (untrained) phonetic contexts.

Journal ArticleDOI
TL;DR: The results showed that both manual and saccadic responses result in equivalent amounts of facilitation following initial exposure to a spatial cue; however, IOR developed more quickly for saccades than for manual responses, such that, at certain cue-target SOAs, saccadic responses to targets were inhibited, whereas manual responses were still facilitated.
Abstract: When nonpredictive exogenous visual cues are used to reflexively orient covert visual spatial attention, the initial early facilitation for detecting stimuli at cued versus uncued spatial locations develops into inhibition by 300 msec following the cue, a pattern referred to as inhibition of return (IOR). Experiments were carried out comparing the magnitude and time course for development of IOR effects when manual versus saccadic responses were required. The results showed that both manual and saccadic responses result in equivalent amounts of facilitation following initial exposure to a spatial cue. However, IOR developed more quickly for saccadic responses, such that, at certain cue-target SOAs, saccadic responses to targets were inhibited, whereas manual responses were still facilitated. The findings are interpreted in terms of a premotor theory of visual attention.

Journal ArticleDOI
TL;DR: The results indicate that both acuity and the difficulty of the search task influence the span of the effective stimulus during visual search, and underscore the importance of foveal vision during search.
Abstract: The span of the effective stimulus during visual search through an unstructured alphanumeric array was investigated by using eye-contingent-display changes while the subjects searched for a target letter. In one condition, a window exposing the search array moved in synchrony with the subjects’ eye movements, and the size of the window was varied. Performance reached asymptotic levels when the window was 5°. In another condition, a foveal mask moved in synchrony with each eye movement, and the size of the mask was varied. The foveal mask conditions were much more detrimental to search behavior than the window conditions, indicating the importance of foveal vision during search. The size of the array also influenced performance, but performance reached asymptote for all array sizes tested at the same window size, and the effect of the foveal mask was the same for all array sizes. The results indicate that both acuity and difficulty of the search task influenced the span of the effective stimulus during visual search.

Journal ArticleDOI
TL;DR: Two experiments found that form discriminations to a target item were inhibited when the target appeared adjacent to an attentionally salient item, suggesting a distinction between attentional preparation and attentional selection.
Abstract: Two experiments found that form discriminations to a target item were inhibited when the target appeared adjacent to an attentionally salient item. Experiment 1 manipulated the attentional salience of an irrelevant color singleton through the attentional set adopted by the subjects. Color singletons captured attention when the target was itself a feature singleton, but not when the target was defined as a conjunction of features. Attentional capture was accompanied by an inhibitory region (i.e., slowed target reaction times), which dissipated with distance from the color singleton. In Experiment 2, the attentional salience of abrupt onsets and color singletons was compared. Irrelevant abrupt onsets captured attention, whereas irrelevant color singletons failed to capture attention. Again, an inhibitory region surrounded the attentionally salient abrupt onsets, but not the color singletons. The results are discussed in the context of current models of visual spatial attention and suggest a distinction between attentional preparation and attentional selection.

Journal ArticleDOI
TL;DR: The FACADE theory of three-dimensional (3-D) vision is developed to simulate data concerning how two-dimensional pictures give rise to 3-D percepts of occluded and occluding surfaces to provide sensitivity to T-junctions without the need to assume that T-junction “detectors” exist.
Abstract: This article develops the FACADE theory of three-dimensional (3-D) vision to simulate data concerning how two-dimensional pictures give rise to 3-D percepts of occluded and occluding surfaces. The theory suggests how geometrical and contrastive properties of an image can either cooperate or compete when forming the boundary and surface representations that subserve conscious visual percepts. Spatially long-range cooperation and short-range competition work together to separate boundaries of occluding figures from their occluded neighbors, thereby providing sensitivity to T-junctions without the need to assume that T-junction “detectors” exist. Both boundary and surface representations of occluded objects may be amodally completed, whereas the surface representations of unoccluded objects become visible through modal processes. Computer simulations include Bregman-Kanizsa figure-ground separation, Kanizsa stratification, and various lightness percepts, including the Munker-White, Benary cross, and checkerboard percepts.

Journal ArticleDOI
TL;DR: In this paper, optimal parameters were explored for lines presented dynamically to the skin with vibrotactile arrays on three body sites, with veridical and saltatory presentation modes.
Abstract: In order to provide information regarding orientation or direction, a convenient code employs vectors (lines) because they have both length and direction. Potential users of such information, encoded tactually, could include persons who are blind, as well as pilots, astronauts, and scuba divers, all of whom need to maintain spatial awareness in their respective unusual environments. In these situations, a tactile display can enhance environmental awareness. In this study, optimal parameters were explored for lines presented dynamically to the skin with vibrotactile arrays on three body sites, with veridical and saltatory presentation modes. Perceived length, straightness, spatial distribution, and smoothness were judged while the durations of the discrete taps making up the “drawn” dotted lines and the times between them were varied. The results indicate that the two modes produce equivalent sensations and that similar sets of timing parameters, within the ranges tested, result in “good” lines at each site.

Journal ArticleDOI
TL;DR: Results indicate that CP can be observed for unfamiliar faces, in both familiar (same-race) and unfamiliar (other-race) groups, and it is argued that these CP effects are based on the rapid acquisition of perceptual equivalence classes.
Abstract: On the basis of findings that categorical perception (CP) is possible in complex visual stimuli such as faces, the present study tested for CP on continua between unfamiliar face pairs. Results indicate that CP can be observed for unfamiliar faces, in both familiar (same-race) and unfamiliar (other-race) groups. In addition, significant CP effects were observed in inverted faces. Finally, half-continua were tested where midpoint stimuli became endpoints. This was done to ensure that stimulus artifacts did not account for the observed CP effects. Consistent with the perceptual rescaling associated with CP, half-continua showed a rescaled CP effect. We argue that these CP effects are based on the rapid acquisition of perceptual equivalence classes.

Journal ArticleDOI
TL;DR: These results suggest that differences between attended and ignored repetition effects in selective attention studies of spatial localization do not provide a basis for distinguishing between spatial negative priming and inhibition of return.
Abstract: A series of spatial localization experiments is reported that addresses the relation between negative priming and inhibition of return. The results of Experiment 1 demonstrate that slowed responses to repeated location stimuli can be obscured by repetition priming effects involving stimulus dimensions other than spatial location. The results of Experiments 2, 3A, and 3B demonstrate that these repetition priming effects may occur only when participants are required to respond to the prime display. Together, these results suggest that differences between attended and ignored repetition effects in selective attention studies of spatial localization do not provide a basis for distinguishing between spatial negative priming and inhibition of return.

Journal ArticleDOI
TL;DR: Using an interleaved melody task, Experiment 1 found that target sounds could be selected from distractors in the same spectral region more easily when they differed in timbre; a rhythm discrimination task in Experiment 2 indicated the occurrence of primitive stream segregation.
Abstract: Differences in the timbre of sounds in a sequence can affect their perceptual organization. Using a performance measure, Hartmann and Johnson (1991) concluded that streaming could be predicted primarily by the extent to which sounds were passed by different peripheral channels. However, results from a rating task by Dannenbring and Bregman (1976) suggested that sounds in the same spectral region (passed by the same peripheral channels) can be allocated to different streams. In Experiment 1, it was found, using an interleaved melody task, that target sounds could be selected from distractors in the same spectral region more easily when they differed in timbre. This finding might result from primitive stream segregation or schema-driven selection, but not from peripheral channeling. In Experiment 2, a rhythm discrimination task was used, requiring the sounds to be integrated for good performance. Differences in timbre impaired performance, indicating the occurrence of primitive stream segregation.

Journal ArticleDOI
TL;DR: The results indicate that the CPA is independent of attentional factors but strongly related to the physiological inhomogeneity of the retina, and is argued that central and peripheral primes trigger an initial motor activation, which is inhibited only if primes are presented at retinal locations of sufficiently high perceptual sensitivity.
Abstract: Masked primes presented prior to a target result in behavioral benefits on incompatible trials (in which the prime and the target are mapped onto opposite responses) when they appear at fixation, but in behavioral benefits on compatible trials (in which the prime and the target are mapped onto the same response) when appearing peripherally. In Experiment 1, the time course of this central-peripheral asymmetry (CPA) was investigated. For central primes, compatible-trial benefits at short stimulus onset asynchronies (SOAs) turned into incompatible-trial benefits at longer SOAs. For peripheral primes, compatible-trial benefits at short SOAs increased in size with longer SOAs. Experiment 2 showed that these effects also occur when primes and targets are physically dissimilar, ruling out an interpretation in terms of the perceptual properties of the stimulus material. Experiments 3 and 4 investigated whether the CPA is related to visual-spatial attention and/or retinal eccentricity per se. The results indicate that the CPA is independent of attentional factors but strongly related to the physiological inhomogeneity of the retina. It is argued that central and peripheral primes trigger an initial motor activation, which is inhibited only if primes are presented at retinal locations of sufficiently high perceptual sensitivity. The results are discussed in terms of an activation threshold model.

Journal ArticleDOI
TL;DR: Perceptual anticipation—that is, the observer’s ability to predict the course of dynamic visual events—in the case of handwriting traces was investigated, consistent with the hypothesis that perceptual anticipation of human movements involves comparing the perceptual stimulus with an internal dynamic representation of the ongoing event.
Abstract: In two experiments, perceptual anticipation--that is, the observer's ability to predict the course of dynamic visual events--in the case of handwriting traces was investigated. Observers were shown the dynamic display of the middle letter l excerpted from two cursive trigrams (lll or lln) handwritten by one individual. The experimental factor was the distribution of the velocity along the trace, which was controlled by a single parameter, beta. Only for one value of this parameter (beta = 2/3) did the display comply with the two-thirds power law, which describes how tangential velocity depends on curvature in writing movements. The task was to indicate the trigram from which the trace was excerpted--that is, to guess the letter that followed the specific instance of the l that had been displayed. In Experiment 1, the "no answer" option was available. Experiment 2 adopted a forced-choice response rule. Responses were never reinforced. When beta = 2/3, the rate of correct guesses was high (Experiment 1, P(correct) = .69; Experiment 2, P(correct) = .78). The probability of a correct answer decreased significantly for both smaller and larger values of beta, with wrong answers becoming predominant at the extremes of the range of variation of this parameter. The results are consistent with the hypothesis that perceptual anticipation of human movements involves comparing the perceptual stimulus with an internal dynamic representation of the ongoing event.

Journal ArticleDOI
TL;DR: The impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated.
Abstract: In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless “chewing” lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement was larger when participants performed a difficult active visual task there versus merely fixating, but not for a less demanding visual task versus fixation. The implications for cross-modal links in spatial attention are discussed.

Journal ArticleDOI
TL;DR: The study revealed that IOR can be measured at a minimum of five locations and the magnitude of the IOR effect was largest at the most recently searched location and declined from there in an approximately linear fashion.
Abstract: Using a novel sequential visual search paradigm, Danziger, Kingstone, and Snyder (1998) demonstrated that inhibition of return (IOR) can reside at three spatial locations. In the present study, we extended the work of Danziger et al. by investigating whether there is a limit to the number of locations that can be inhibited in a sequential visual search task. Our study revealed that IOR can be measured at a minimum of five locations. The magnitude of the IOR effect was largest at the most recently searched location and declined from there in an approximately linear fashion. Two models that can account for our data are presented.

Journal ArticleDOI
TL;DR: Results demonstrated that sensitivity peaks at vowel boundaries were more influenced by stimulus range than were perceptual magnet effects; peaks in sensitivity near the /i/-/e/ boundary were reduced with restricted stimulus ranges and one-step intervals, but minima in discrimination near the best exemplars of /i/ were present in all conditions.
Abstract: The question of whether sensitivity peaks at vowel boundaries (i.e., phoneme boundary effects) and sensitivity minima near excellent category exemplars (i.e., perceptual magnet effects) stem from the same stage of perceptual processing was examined in two experiments. In Experiment 1, participants gave phoneme identification and goodness ratings for 13 synthesized English /i/ and /e/ vowels. In Experiment 2, participants discriminated pairs of these vowels. Either the listeners discriminated the entire range of stimuli within each block of trials, or the range within each block was restricted to a single stimulus pair. In addition, listeners discriminated either one-step or two-step intervals along the stimulus series. The results demonstrated that sensitivity peaks at vowel boundaries were more influenced by stimulus range than were perceptual magnet effects; peaks in sensitivity near the /i/-/e/ boundary were reduced with restricted stimulus ranges and one-step intervals, but minima in discrimination near the best exemplars of /i/ were present in all conditions.