
Showing papers in "Attention Perception & Psychophysics in 1999"


Journal ArticleDOI
TL;DR: Investigating the long-term retention of learning in both perception and production of this difficult non-native contrast showed that 3 months after completion of the perceptual training procedure, the Japanese trainees maintained their improved levels of performance on the perceptual identification task.
Abstract: Previous work from our laboratories has shown that monolingual Japanese adults who were given intensive high-variability perceptual training improved in both perception and production of English /r/-/l/ minimal pairs. In this study, we extended those findings by investigating the long-term retention of learning in both perception and production of this difficult non-native contrast. Results showed that 3 months after completion of the perceptual training procedure, the Japanese trainees maintained their improved levels of performance on the perceptual identification task. Furthermore, perceptual evaluations by native American English listeners of the Japanese trainees' pretest, posttest, and 3-month follow-up speech productions showed that the trainees retained their long-term improvements in the general quality, identifiability, and overall intelligibility of their English /r/-/l/ word productions. Taken together, the results provide further support for the efficacy of high-variability laboratory speech sound training procedures, and suggest an optimistic outlook for the application of such procedures for a wide range of "special populations."

390 citations


Journal ArticleDOI
TL;DR: A constrained generalized maximum likelihood routine for fitting psychometric functions is proposed, which determines optimum values for the complete parameter set, that is, threshold and slope, as well as for guessing and lapsing probability.
Abstract: A constrained generalized maximum likelihood routine for fitting psychometric functions is proposed, which determines optimum values for the complete parameter set, that is, threshold and slope, as well as for guessing and lapsing probability. The constraints are realized by Bayesian prior distributions for each of these parameters. The fit itself results from maximizing the posterior distribution of the parameter values by a multidimensional simplex method. We present results from extensive Monte Carlo simulations by which we can approximate bias and variability of the estimated parameters of simulated psychometric functions. Furthermore, we have tested the routine with data gathered in real psychophysical testing sessions.
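
A rough illustration of the kind of routine described above is sketched below in Python with NumPy/SciPy. The logistic form, the particular Beta priors, and the toy data are illustrative assumptions, not the authors' implementation; the sketch simply fits threshold, slope, guessing rate, and lapsing rate by maximizing a Bayesian posterior with a Nelder-Mead simplex search.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

# Toy data: stimulus levels, trials per level, and correct responses per level.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n = np.array([40, 40, 40, 40, 40])
k = np.array([22, 25, 31, 38, 40])

def psi(x, thresh, slope, guess, lapse):
    # Logistic psychometric function with guessing and lapsing rates.
    core = 1.0 / (1.0 + np.exp(-slope * (np.log(x) - np.log(thresh))))
    return guess + (1.0 - guess - lapse) * core

def neg_log_posterior(params):
    thresh, slope, guess, lapse = params
    if thresh <= 0 or slope <= 0 or not (0 < guess < 1) or not (0 < lapse < 1):
        return np.inf                                   # hard range constraints
    p = np.clip(psi(x, thresh, slope, guess, lapse), 1e-6, 1 - 1e-6)
    log_lik = np.sum(k * np.log(p) + (n - k) * np.log(1 - p))
    # Bayesian priors acting as soft constraints on guessing and lapsing rates.
    log_prior = beta.logpdf(guess, 2, 20) + beta.logpdf(lapse, 2, 20)
    return -(log_lik + log_prior)

# Maximize the posterior with a multidimensional simplex (Nelder-Mead) search.
fit = minimize(neg_log_posterior, x0=[1.5, 2.0, 0.05, 0.02], method="Nelder-Mead")
print(dict(zip(["threshold", "slope", "guess", "lapse"], fit.x)))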

294 citations


Journal ArticleDOI
TL;DR: A series of four experiments was conducted to determine whether English-learning infants can use allophonic cues to word boundaries to segment words from fluent speech and what implications these findings have for understanding how word segmentation skills develop.
Abstract: A series of four experiments was conducted to determine whether English-learning infants can use allophonic cues to word boundaries to segment words from fluent speech. Infants were familiarized with a pair of two-syllable items, such as nitrates and night rates, and then were tested on their ability to detect these same words in fluent speech passages. The presence of allophonic cues to word boundaries did not help 9-month-olds to distinguish one of the familiarized words from an acoustically similar foil. Infants familiarized with nitrates were just as likely to listen to a passage about night rates as they were to listen to one about nitrates. Nevertheless, when the passages contained distributional cues that favored the extraction of the familiarized targets, 9-month-olds were able to segment these items from fluent speech. By the age of 10.5 months, infants were able to rely solely on allophonic cues to locate the familiarized target words in passages. We consider what implications these findings have for understanding how word segmentation skills develop.

248 citations


Journal ArticleDOI
TL;DR: Results suggest that color plays a role in the recognition of HCD objects, and when shape information was degraded but color information preserved, subjects were less impaired in their recognition of degraded HCD objects than of degraded LCD objects, relative to their nondegraded versions.
Abstract: Does color influence object recognition? In the present study, the degree to which an object was associated with a specific color was referred to as color diagnosticity. Using a feature listing and typicality measure, objects were identified as either high in color diagnosticity or low in color diagnosticity. According to the color diagnosticity hypothesis, color should more strongly influence the recognition of high color diagnostic (HCD) objects (e.g., a banana) than the recognition of low color diagnostic (LCD) objects (e.g., a lamp). This prediction was supported by results from classification, naming, and verification experiments, in which subjects were faster to identify color versions of HCD objects than they were to identify achromatic versions and incongruent color versions. In contrast, subjects were no faster to identify color versions of LCD objects than they were to identify achromatic and incongruent color versions. Moreover, when shape information was degraded but color information preserved, subjects were less impaired in their recognition of degraded HCD objects than of degraded LCD objects, relative to their nondegraded versions. Collectively, these results suggest that color plays a role in the recognition of HCD objects.

246 citations


Journal ArticleDOI
TL;DR: The data suggest that although information about all three properties of spoken words is encoded and retained in memory, each source of stimulus variation differs in the extent to which it affects episodic memory for spoken words.
Abstract: This study investigated the encoding of the surface form of spoken words using a continuous recognition memory task. The purpose was to compare and contrast three sources of stimulus variability—talker, speaking rate, and overall amplitude—to determine the extent to which each source of variability is retained in episodic memory. In Experiment 1, listeners judged whether each word in a list of spoken words was “old” (had occurred previously in the list) or “new.” Listeners were more accurate at recognizing a word as old if it was repeated by the same talker and at the same speaking rate; however, there was no recognition advantage for words repeated at the same overall amplitude. In Experiment 2, listeners were first asked to judge whether each word was old or new, as before, and then they had to explicitly judge whether it was repeated by the same talker, at the same rate, or at the same amplitude. On the first task, listeners again showed an advantage in recognition memory for words repeated by the same talker and at the same speaking rate, but no advantage occurred for the amplitude condition. However, in all three conditions, listeners were able to explicitly detect whether an old word was repeated by the same talker, at the same rate, or at the same amplitude. These data suggest that although information about all three properties of spoken words is encoded and retained in memory, each source of stimulus variation differs in the extent to which it affects episodic memory for spoken words.

207 citations


Journal ArticleDOI
TL;DR: The results indicate the potential viability of vibratory coding of roughness through a rigid link and have implications for teleoperation and virtual-reality systems.
Abstract: Subjects made roughness judgments of textured surfaces made of raised elements while exploring them with stick-like probes held in the hand or through a rigid sheath mounted on the fingertip. These rigid links, which impose vibratory coding of roughness, were compared with the finger (bare or covered with a compliant glove), using magnitude-estimation and roughness differentiation tasks. All end effectors led to an increasing function relating subjective roughness magnitude to surface interelement spacing, and all produced above-chance roughness discrimination. Although discrimination was best with the finger, rigid links produced greater perceived roughness for the smoothest stimuli. A peak in the magnitude-estimation functions for the small probe and a transition from calling more sparsely spaced surfaces rougher to calling them smoother were predictable from the size of the contact area. The results indicate the potential viability of vibratory coding of roughness through a rigid link and have implications for teleoperation and virtual-reality systems.

203 citations


Journal ArticleDOI
TL;DR: Results suggest that top-down control of attention is possible at an early stage of visual processing and that a singleton distractor did not receive attention after extended practice.
Abstract: In two experiments using spatial probes, we measured the temporal and spatial interactions between top-down control of attention and bottom-up interference from a salient distractor in visual search. The subjects searched for a square among circles, ignoring color. Probe response times showed that a color singleton distractor could draw attention to its location in the early stage of visual processing (before a 100-msec stimulus onset asynchrony [SOA]), but only when the color singleton distractor was located far from the target. Apparently the bottom-up activation of the singleton distractor's location is affected early on by local interactions with nearby stimulus locations. Moreover, probe results showed that a singleton distractor did not receive attention after extended practice. These results suggest that top-down control of attention is possible at an early stage of visual processing. In the long-SOA condition (150-msec SOA), spatial attention selected the target location over distractor locations, and this tendency occurred with or without extended practice.

202 citations


Journal ArticleDOI
TL;DR: The short-cut rule is proposed, which states that, other things being equal, human vision prefers to use the shortest possible cuts to parse silhouettes and is motivated by the well-known Petter’s rule for modal completion.
Abstract: Many researchers have proposed that, for the purpose of recognition, human vision parses shapes into component parts. Precisely how is not yet known. The minima rule for silhouettes (Hoffman & Richards, 1984) defines boundary points at which to parse but does not tell how to use these points to cut silhouettes and, therefore, does not tell what the parts are. In this paper, we propose the short-cut rule, which states that, other things being equal, human vision prefers to use the shortest possible cuts to parse silhouettes. We motivate this rule, and the well-known Petter’s rule for modal completion, by the principle of transversality. We present five psychophysical experiments that test the short-cut rule, show that it successfully predicts part cuts that connect boundary points given by the minima rule, and show that it can also create new boundary points.
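
To make the preference concrete, here is a minimal Python sketch. The hand-picked boundary points and the bare shortest-distance criterion are simplifying assumptions; the full rule tested in the paper involves further geometric conditions on admissible cuts. The sketch only shows the core idea: among candidate cuts joining boundary points given by the minima rule, prefer the shortest.

import itertools
import math

# Assume the negative minima of curvature (the minima-rule boundary points)
# have already been located on a silhouette; here they are hand-picked (x, y) points.
minima_points = [(0.0, 1.0), (0.2, -1.0), (3.0, 0.9), (3.1, -1.1)]

def cut_length(p, q):
    return math.dist(p, q)

# Rank all candidate cuts joining pairs of minima points by their length;
# other things being equal, the short-cut rule prefers the shortest one.
candidates = sorted(itertools.combinations(minima_points, 2),
                    key=lambda pq: cut_length(*pq))
best = candidates[0]
print("preferred cut:", best, "length:", round(cut_length(*best), 3))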

183 citations


Journal ArticleDOI
TL;DR: In several experiments, observers tried to categorize stimuli constructed from two separable stimulus dimensions in the absence of any trial-by-trial feedback; the results support the hypothesis that, without feedback, people are constrained to use unidimensional rules.
Abstract: In several experiments, observers tried to categorize stimuli constructed from two separable stimulus dimensions in the absence of any trial-by-trial feedback. In all of the experiments, the observers were told the number of categories (i.e., two), they were told that perfect accuracy was possible, and they were given extensive experience in the task (i.e., 800 trials). When the boundary separating the contrasting categories was unidimensional, the accuracy of all observers improved significantly over blocks (i.e., learning occurred), and all observers eventually responded optimally. When the optimal boundary was diagonal, none of the observers responded optimally. Instead they all used some sort of suboptimal unidimensional rule. In a separate feedback experiment, all observers responded optimally in the diagonal condition. These results contrast with those for supervised category learning; they support the hypothesis that in the absence of feedback, people are constrained to use unidimensional rules.
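
The two kinds of rule are easy to write down. The toy simulation below (Python/NumPy; the Gaussian category structure and the cut-off values are illustrative assumptions, not the stimuli used in the experiments) shows why a unidimensional rule is suboptimal when the optimal boundary is diagonal.

import numpy as np

rng = np.random.default_rng(0)
n_items = 10000

# Two illustrative categories whose optimal boundary is the diagonal x1 = x2.
cat_a = rng.multivariate_normal([1.0, 0.0], np.eye(2), n_items)
cat_b = rng.multivariate_normal([0.0, 1.0], np.eye(2), n_items)

def accuracy(rule):
    # Mean of the hit rates for the two categories under a given decision rule.
    return (np.mean(rule(cat_a)) + np.mean(~rule(cat_b))) / 2

def unidimensional(s):
    return s[:, 0] > 0.5            # uses dimension 1 only

def diagonal(s):
    return s[:, 0] - s[:, 1] > 0    # optimal: combines both dimensions

print("unidimensional rule accuracy:", round(accuracy(unidimensional), 3))  # ~0.69
print("diagonal (optimal) rule accuracy:", round(accuracy(diagonal), 3))    # ~0.76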

182 citations


Journal ArticleDOI
TL;DR: The results suggest that uniform connectedness plays an important role in defining the entities available for attentional selection and reveal the influence of both bottom-up and top-down factors on the selection process.
Abstract: We report the results of four experiments that were conducted to examine both the representations that provide candidate entities available for object-based attentional selection and the influence of bottom-up factors (i.e., geometric and surface characteristics of objects) and top-down factors (i.e., context and expectancies) on the selection process. Subjects performed the same task in each of the experiments. They were asked to determine whether two target properties, a bent end and an open end of a wrench, appeared in a brief display of two wrenches. In each experiment, the target properties could occur on a single wrench or one property could occur on each of two wrenches. The question of central interest was whether a same-object effect (faster and/or more accurate performance when the target properties appeared on one vs. two wrenches) would be observed in different experimental conditions. Several interesting results were obtained. First, depending on the geometric (i.e., concave discontinuities on object contours) and surface characteristics (i.e., homogeneous regions of color and texture) of the stimuli, attention was preferentially directed to one of three representational levels, as indicated by the presence or absence of the same-object effect. Second, although geometric and surface characteristics defined the candidate objects available for attentional selection, top-down factors were quite influential in determining which representational level would be selected. Third, the results suggest that uniform connectedness plays an important role in defining the entities available for attentional selection. These results are discussed in terms of the manner in which attention selects objects in the visual environment.

175 citations


Journal ArticleDOI
TL;DR: The major conclusion is that duration judgment ratios decrease from younger to older adults when the intervals are filled with a mental task.
Abstract: The effects of aging on judgments of short temporal durations were explored using the prospective paradigm and the methods of verbal estimation and production. Younger and older adults performed a perceptual judgment task at five levels of complexity for periods of 30, 60, and 120 sec. Participants either continued to perform the task for a specified interval (production) or were stopped and then verbally estimated the interval. Older adults gave shorter verbal estimates and longer productions than did younger adults. The methods of verbal estimation and production yielded approximately equal duration judgment ratios once range effects were taken into account. Task complexity had little effect. The major conclusion is that duration judgment ratios decrease from younger to older adults when the intervals are filled with a mental task.
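
A brief worked example of the ratio measure, under one common convention (subjective over objective duration) that the abstract itself does not spell out:

\text{ratio}_{\text{estimation}} = \frac{\text{estimate}}{\text{actual}} = \frac{45\ \text{s}}{60\ \text{s}} = 0.75,
\qquad
\text{ratio}_{\text{production}} = \frac{\text{target}}{\text{produced}} = \frac{60\ \text{s}}{80\ \text{s}} = 0.75

Under this convention, shorter verbal estimates and longer productions both correspond to lower duration judgment ratios, which is how the two methods can show the same age effect.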

Journal ArticleDOI
TL;DR: When eye movements were withheld, IOR was larger when a target was presented alone than when it was presented with a distractor, suggesting that IOR is larger for exogenous than for endogenous covert orienting.
Abstract: Response time can be delayed if a target stimulus appears at a location or object that was previously cued. This inhibition of return (IOR) phenomenon has been attributed to a delay in activating attentional or motor processes to a previously cued stimulus. Two experiments required subjects to localize or identify a target stimulus. In Experiment 1, the subjects’ eyes were not monitored. In Experiment 2, the subjects’ eyes were monitored, and the subjects were instructed to either execute or withhold an eye movement to a target stimulus. The results indicated that IOR was always present for location and identification responses, supporting an attentional account of IOR. However, IOR was larger when eye movements were executed, indicating that a motor component can contribute to IOR. Finally, when eye movements were withheld, IOR was larger when a target was presented alone than when it was presented with a distractor, suggesting that IOR is larger for exogenous than for endogenous covert orienting. Together, the data indicate that IOR is composed of both an oculomotor component and an attentional component.

Journal ArticleDOI
TL;DR: The tactual information transmission capabilities of a tactual display designed to provide stimulation along a continuum from kinesthetic movements to cutaneous vibrations are assessed and the IT rate was estimated to be about 12 bits/sec.
Abstract: In this work, the tactual information transmission capabilities of a tactual display designed to provide stimulation along a continuum from kinesthetic movements to cutaneous vibrations are assessed. The display is capable of delivering arbitrary waveforms to three digits (thumb, index, and middle finger) within an amplitude range from absolute detection threshold to about 50 dB sensation level and a frequency range from dc to above 300 Hz. Stimulus sets were designed at each of three signal durations (125, 250, and 500 msec) by combining salient attributes, such as frequency (further divided into low, middle, and high regions), amplitude, direction of motion, and finger location. Estimated static information transfer (IT) was 6.5 bits at 500 msec, 6.4 bits at 250 msec, and 5.6 bits at 125 msec. Estimates of IT rate were derived from identification experiments in which the subject’s task was to identify the middle stimulus in a sequence of three stimuli randomly selected from a given stimulus set. On the basis of the extrapolations from these IT measurements to continuous streams, the IT rate was estimated to be about 12 bits/sec, which is roughly the same as that achieved by Tadoma users in tactual speech communication.
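
The static IT figures quoted here are the usual mutual-information estimate computed from a stimulus-response confusion matrix. The Python/NumPy sketch below illustrates that computation; the small confusion matrix and the 500-msec duration are made up for illustration, and the paper's own rate estimate came from the three-stimulus identification procedure rather than this crude division.

import numpy as np

def information_transfer(counts):
    # Estimated information transfer (bits) from a stimulus x response count matrix.
    joint = counts / counts.sum()                  # joint probabilities p(s, r)
    p_s = joint.sum(axis=1, keepdims=True)         # stimulus marginals
    p_r = joint.sum(axis=0, keepdims=True)         # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (p_s * p_r))
    return np.nansum(terms)                        # empty cells contribute zero

# Illustrative 4-stimulus confusion matrix (rows: stimuli, columns: responses).
conf = np.array([[18.0, 2.0, 0.0, 0.0],
                 [1.0, 17.0, 2.0, 0.0],
                 [0.0, 2.0, 16.0, 2.0],
                 [0.0, 0.0, 1.0, 19.0]])

it_bits = information_transfer(conf)
duration_sec = 0.5                                 # e.g., a 500-msec stimulus
print(f"static IT ~ {it_bits:.2f} bits; crude rate ~ {it_bits / duration_sec:.1f} bits/sec")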

Journal ArticleDOI
TL;DR: It is suggested that differences in processing speed cannot account for the asymmetric relationship between identity and emotion perception, and theoretical accounts proposing independence of identity and emotion perception are discussed.
Abstract: We investigated whether an asymmetric relationship between the perception of identity and emotional expressions in faces (Schweinberger & Soukup, 1998) may be related to differences in the relative processing speed of identity and expression information. Stimulus faces were morphed across identity within a given emotional expression, or were morphed across emotion within a given identity. In Experiment 1, consistent classifications of these images were demonstrated across a wide range of morphing, with only a relatively narrow category boundary. At the same time, classification reaction times (RTs) reflected the increased perceptual difficulty of the morphed images. In Experiment 2, we investigated the effects of variations in the irrelevant dimension on judgments of faces with respect to a relevant dimension, using a Garner-type speeded classification task. RTs for expression classifications were strongly influenced by irrelevant identity information. In contrast, RTs for identity classifications were unaffected by irrelevant expression information, and this held even for stimuli in which identity was more difficult and slower to discriminate than expression. This suggests that differences in processing speed cannot account for the asymmetric relationship between identity and emotion perception. Theoretical accounts proposing independence of identity and emotion perception are discussed in the light of these findings.

Journal ArticleDOI
TL;DR: Inhibition was significantly reduced when different letters were replaced by nonalphabetic symbols and facilitation effects disappeared when the common letters did not have the same relative position in the prime and target strings, thus supporting a relative-position coding scheme for letters in words.
Abstract: Four experiments are reported investigating orthographic priming effects in French by varying the number and the position of letters shared by prime and target stimuli. Using both standard masked priming and the novel incremental priming technique (Jacobs, Grainger, & Ferrand, 1995), it is shown that net priming effects are affected not only by the number of letters shared by prime and target stimuli but also by the number of letters in the prime not present in the target. Several null results are thus explained as a tradeoff between the facilitation generated by common letters and the inhibition generated by different letters. Inhibition was significantly reduced when different letters were replaced by nonalphabetic symbols. Facilitation effects disappeared when the common letters did not have the same relative position in the prime and target strings, thus supporting a relative-position coding scheme for letters in words.
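
As a toy illustration of what "same relative position" buys, the Python sketch below scores the letters shared by prime and target that occur in the same order (a longest-common-subsequence count); this scoring scheme is a simplifying stand-in for exposition, not the coding model proposed in the paper.

def relative_position_overlap(prime, target):
    # Number of prime letters that appear in the target in the same relative
    # order (the longest common subsequence of the two strings).
    m, n = len(prime), len(target)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if prime[i] == target[j]:
                table[i + 1][j + 1] = table[i][j] + 1
            else:
                table[i + 1][j + 1] = max(table[i][j + 1], table[i + 1][j])
    return table[m][n]

# "grden" keeps its letters in the same relative order as in "garden",
# whereas transposing two letters ("rgden") reduces the overlap.
print(relative_position_overlap("grden", "garden"))   # 5
print(relative_position_overlap("rgden", "garden"))   # 4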

Journal ArticleDOI
TL;DR: This work reports that IOR occurs at a cued location far earlier than was previously thought and that it is distinct from attentional orienting, and concludes that previous failures to observe early IOR at a cued location may have been due to attention being directed to the cued location and thus "masking" IOR.
Abstract: Conventional wisdom holds that a nonpredictive peripheral cue produces a biphasic response time (RT) pattern: early facilitation at the cued location, followed by an RT delay at that location. The latter effect is called inhibition of return (IOR). In two experiments, we report that IOR occurs at a cued location far earlier than was previously thought, and that it is distinct from attentional orienting. In Experiment 1, IOR was observed early (i.e., within 50 msec) at the cued location, when the cue predicted that a detection target would occur at another location. In Experiment 2, this early IOR effect was demonstrated to occur for target detection, but not for target identification. We conclude that previous failures to observe early IOR at a cued location may have been due to attention being directed to the cued location and thus “masking” IOR.

Journal ArticleDOI
TL;DR: Event-related brain potentials were recorded during two spatial-cuing experiments using nonpredictive cues to determine the electrophysiological consequences of inhibition of return (IOR), and the posterior negative difference was found when sensory interactions were likely to be greatest, indicating that it could arise from sensory refractoriness.
Abstract: Event-related brain potentials (ERPs) were recorded during two spatial-cuing experiments using nonpredictive cues. Our primary goal was to determine the electrophysiological consequences of inhibition of return (IOR). At long (>500 msec) cue-target intervals, subjects responded more slowly to targets that appeared at or near the cued location, relative to targets that appeared on the opposite side of fixation from the cue. This behavioral IOR effect was associated with cue-validity effects on several components of the target-elicited ERP waveforms. The earliest such effect was a smaller occipital P1 on valid-cue trials, which we interpret as a P1 reduction. The P2 component was also smaller on valid-cue trials, indicating that nonpredictive spatial cues influence multiple stages of information processing at long cue-target intervals. Both of these effects were observed when sensory interactions between cue and target were likely to be negligible, indicating that they were not caused by sensory refractoriness. A different effect of cue validity, the posterior negative difference, was found when sensory interactions were likely to be greatest, indicating that it could arise from sensory refractoriness.

Journal ArticleDOI
TL;DR: A review of other experimental studies on the size-weight illusion in the 1890s suggests that the idea that the illusion depended on "disappointed expectations," especially with respect to speed of lift, became dominant almost immediately following the publication of Charpentier's paper.
Abstract: This paper offers background for an English translation of an article originally published in 1891 by Augustin Charpentier (1852–1916), as well as a summary of it. The article is frequently described as providing the first experimental evidence for the size-weight illusion. A comparison of experiments on the judged heaviness of lifted weights carried out by Weber (1834) and by Charpentier (1891) supports the view that Charpentier's work deserves priority; a review of other experimental studies on the size-weight illusion in the 1890s suggests that the idea that the illusion depended on “disappointed expectations,” especially with respect to speed of lift, became dominant almost immediately following the publication of Charpentier's paper. The fate of this and other ideas, including “motor energy,” in 20th-century research on the illusion is briefly described.

Journal ArticleDOI
TL;DR: It was concluded that attentional capture by new objects is subject to top-down modulation by attentional control settings.
Abstract: Previous research suggests that attentional capture by abrupt onsets is contingent on top-down attentional control settings. Four experiments addressed whether similar contingencies hold for capture elicited by the appearance of new perceptual objects. In a modified spatial cuing task, targets defined by abrupt onset or color were paired with distractors consisting of an abrupt brightening of an existing object or the abrupt appearance of a new object. In Experiments 1 and 2, when subjects searched for an onset target, both distractor types produced evidence of capture. When subjects searched for a color target, however, distractors produced no evidence of attentional capture, regardless of whether they consisted of a new perceptual object or not. Experiments 3-5 showed that the lack of distractor effects in the color-target condition cannot be accounted for by rapid recovery from capture. It was concluded that attentional capture by new objects is subject to top-down modulation by attentional control settings.

Journal ArticleDOI
TL;DR: The results suggest that grouping by similarity of shapes is perceived more slowly than grouping by UC, but grouping by proximity can be as fast and efficient as that by UC.
Abstract: We assessed whether uniform connectedness (UC; Palmer & Rock, 1994) operates prior to effects reflecting classical principles of grouping: proximity and similarity. In Experiments 1 and 2, reaction times to discriminate global letters (H vs. E), made up of small circles, were recorded. The small circles were grouped by proximity, by similarity of shapes, or by UC. The discrimination of stimuli grouped by similarity was slower than that of stimuli grouped by proximity, and it was speeded up by the addition of UC. However, the discrimination of stimuli grouped by proximity was unaffected by connecting the local elements. In Experiment 3, similar results occurred in a task requiring discrimination of the orientation of grouped elements, except that the discrimination of stimuli grouped by UC was faster than that of those grouped by weak proximity. Experiment 4 further showed that subjects could respond to letters composed of discriminably separate local elements as fast as to those without separated local elements. The results suggest that grouping by similarity of shapes is perceived more slowly than grouping by UC, but grouping by proximity can be as fast and efficient as that by UC.

Journal ArticleDOI
TL;DR: Together, these experiments provide strong converging evidence that when two targets are easily discriminated from distractors by a basic property, spatial attention can be split across both locations.
Abstract: Experiments using two different methods and three types of stimuli tested whether stimuli at nonadjacent locations could be selected simultaneously. In one set of experiments, subjects attended to red digits presented in multiple frames with green digits. Accuracy was no better when red digits appeared successively than when pairs of red digits occurred simultaneously, implying allocation of attention to the two locations simultaneously. Different tasks involving oriented grating stimuli produced the same result. The final experiment demonstrated split attention with an array of spatial probes. When the probe at one of two target locations was correctly reported, the probe at the other target location was more often reported correctly than were any of the probes at distractor locations, including those between the targets. Together, these experiments provide strong converging evidence that when two targets are easily discriminated from distractors by a basic property, spatial attention can be split across both locations.

Journal ArticleDOI
TL;DR: Postural responses to optic flow patterns presented at different retinal eccentricities during walking demonstrated functionally specific postural responses in both central and peripheral vision, contrary to the peripheral dominance and differential sensitivity hypotheses, but consistent with retinal invariance.
Abstract: Three hypotheses have been proposed for the roles of central and peripheral vision in the perception and control of self-motion: (1) peripheral dominance, (2) retinal invariance, and (3) differential sensitivity to radial flow. We investigated postural responses to optic flow patterns presented at different retinal eccentricities during walking in two experiments. Oscillating displays of radial flow (0° driver direction), lamellar flow (90°), and intermediate flow (30°, 45°) patterns were presented at retinal eccentricities of 0°, 30°, 45°, 60°, or 90° to participants walking on a treadmill, while compensatory body sway was measured. In general, postural responses were directionally specific, of comparable amplitude, and strongly coupled to the display for all flow patterns at all retinal eccentricities. One intermediate flow pattern (45°) yielded a bias in sway direction that was consistent with triangulation errors in locating the focus of expansion from visible flow vectors. The results demonstrate functionally specific postural responses in both central and peripheral vision, contrary to the peripheral dominance and differential sensitivity hypotheses, but consistent with retinal invariance. This finding emphasizes the importance of optic flow structure for postural control regardless of the retinal locus of stimulation.

Journal ArticleDOI
TL;DR: The anchoring of lightness perception was tested in simple visual fields composed of only two regions by placing observers inside opaque acrylic hemispheres, suggesting that a wide range of earlier work on area effects in brightness induction, lightness contrast, lightness assimilation, and luminosity perception can be understood in terms of a few simple rules of anchoring.
Abstract: The anchoring of lightness perception was tested in simple visual fields composed of only two regions by placing observers inside opaque acrylic hemispheres. Both side-by-side and center/surround configurations were tested. The results, which undermine Gilchrist and Bonato’s (1995) recent claim that surrounds tend to appear white, indicate that anchoring involves both relative luminance and relative area. As long as the area of the darker region is equal to or smaller than the area of the lighter region, relative area plays no role in anchoring. Only relative luminance controls anchoring: The lighter region appears white, and the darker region is perceived relative to that value. When the area of the darker region becomes greater than that of the lighter region, relative area begins to play a role. As the darker region becomes larger and relative area shifts from the lighter region to the darker region, the appearance of the darker region moves toward white and the appearance of the lighter region moves toward luminosity. This hitherto unrecognized rule is consistent with almost all of the many previous reports of area effects in lightness and brightness. This in turn suggests that a wide range of earlier work on area effects in brightness induction, lightness contrast, lightness assimilation, and luminosity perception can be understood in terms of a few simple rules of anchoring.
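
In symbols, the luminance part of the rule is the familiar highest-luminance anchoring step; the 0.90 reflectance conventionally assigned to white is a convention assumed here, and the area effect is treated only qualitatively, as in the abstract:

\text{lightness of the darker region} \approx 0.90 \times \frac{L_{\text{darker}}}{L_{\text{lighter}}}

with the lighter region anchored at white, as long as the darker region's area does not exceed the lighter region's; once it does, the darker region's value shifts toward white and the lighter region's toward luminosity.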

Journal ArticleDOI
TL;DR: The effects of the spatial scale of attention on feature and conjunction search were examined, suggesting that visuospatial attention possesses two dynamic properties—shifting in space and varying in scale—that are deployed independently, depending on task demands.
Abstract: The effects of the spatial scale of attention on feature and conjunction search were examined in two experiments. Adult participants in three age groups—young, young-old, and old-old—were given precues of varying validity and precision in indicating the location of a target letter subsequently presented in a visual array. Systematic decreases in the size of a valid precue (toward the size of the target) progressively facilitated both feature and conjunction search, with a greater benefit accruing to conjunction search. Age-related slowing in conjunction search was mitigated by precise (small and valid) precues, presumably because they reduced the need for participants in the young-old group to focus and to shift attention. Nevertheless, this benefit was reduced in the old-old group. The effects of valid location precue size varied with cue-target stimulus onset asynchrony (SOA) in a manner that interacted with search difficulty: Effects of cue size developed more rapidly in feature search but more slowly in conjunction search. Finally, when precues were invalid for target location, search was faster with larger sized precues. Thus, in both easy feature search and hard conjunction search, the scale of visuospatial attention modulates the speed of visual search. Furthermore, when the SOA is sufficiently long for cue effects to develop, the ability to dynamically adjust the scale of visuospatial attention appears to decline in advanced age. These results go beyond current models in suggesting that visuospatial attention possesses two dynamic properties—shifting in space and varying in scale—that are deployed independently, depending on task demands.

Journal ArticleDOI
TL;DR: The visual perception of relative phase is investigated by using recordings of human interlimb oscillations, with different frequencies, mean relative phases, and amounts of phase variability, to generate computer displays of spheres oscillating either side to side in a frontoparallel plane or in depth.
Abstract: Studies of bimanual coordination have found that only two stable relative phases (0° and 180°) are produced when a participant rhythmically moves two joints in different limbs at the same frequency. Increasing the frequency of oscillation causes an increase in relative phase variability in both of these phase modes. However, relative phasing at 180° is more variable than relative phasing at 0°, and when the frequency of oscillation reaches a critical frequency, a transition to 0° occurs. These results have been replicated when 2 people have coordinated their respective limb movements using vision. This inspired us to investigate the visual perception of relative phase. In Experiment 1, recordings of human interlimb oscillations exhibiting different frequencies, mean relative phases, and different amounts of phase variability were used to generate computer displays of spheres oscillating either side to side in a frontoparallel plane or in depth. Participants judged the stability of relative phase. Judgments covaried with phase variability only when the mean phase was 0° or 180°. Otherwise, judgments covaried with mean relative phase, even after extensive instruction and demonstration. In Experiment 2, mean relative phase and phase variability were manipulated independently via simulations, and participants were trained to perceive phase variability in testing sessions in which mean phase was held constant. The results of Experiment 1 were replicated. The HKB model was fitted to mean judgment standard deviations.
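
For reference, the HKB model fitted here describes relative phase by the potential

V(\phi) = -a\cos\phi - b\cos 2\phi

whose minima at 0° and 180° correspond to the two stable phase modes; in the standard treatment the ratio b/a falls as movement frequency rises, so the 180° minimum flattens (greater variability) and vanishes once b/a drops below 1/4, producing the transition to 0° described above.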

Journal ArticleDOI
TL;DR: This experiment examined whether the 3-D perceptual anisotropy, whereby spatial intervals oriented in depth are perceived to be smaller than physically equal intervals in the frontoparallel plane, is scale invariant, and it indicated that for monocular viewing the anisotropy is invariant across the two scales tested.
Abstract: A number of studies have resulted in the finding of a 3-D perceptual anisotropy, whereby spatial intervals oriented in depth are perceived to be smaller than physically equal intervals in the frontoparallel plane. In this experiment, we examined whether this anisotropy is scale invariant. The stimuli were L shapes created by two rods placed flat on a level grassy field, with one rod defining a frontoparallel interval, and the other, a depth interval. Observers monocularly and binocularly viewed L shapes at two scales such that they were projectively equivalent under monocular viewing. Observers judged the aspect ratio (depth/width) of each shape. Judged aspect ratio indicated a perceptual anisotropy that was invariant with scale for monocular viewing, but not for binocular viewing. When perspective is kept constant, monocular viewing results in perceptual anisotropy that is invariant across these two scales and presumably across still larger scales. This scale invariance indicates that the perception of shape under these conditions is determined independently of the perception of size.

Journal ArticleDOI
TL;DR: The ability to localize flashed stimuli is studied using a relative judgment task and it is concluded that the system in charge of the guidance of saccadic eye movements is also the system that provides the metric in perceived visual space.
Abstract: We studied the ability to localize flashed stimuli, using a relative judgment task. When observers are asked to localize the peripheral position of a probe with respect to the midposition of a spatially extended comparison stimulus, they tend to judge the probe as being more toward the periphery than is the midposition of the comparison stimulus. We report seven experiments in which this novel phenomenon was explored. They reveal that the mislocalization occurs only when the probe and the comparison stimulus are presented in succession, independent of whether the probe or the comparison stimulus comes first (Experiment 1). The size of the mislocalization is dependent on the stimulus onset asynchrony (Experiment 2) and on the eccentricity of presentation (Experiment 3). In addition, the illusion also occurs in an absolute judgment task, which links mislocalization with the general tendency to judge peripherally presented stimuli as being more foveal than they actually are (Experiment 4). The last three experiments reveal that relative mislocalization is affected by the amount of spatial extension of the comparison stimulus (Experiment 5) and by its structure (Experiments 6 and 7). This pattern of results allows us to evaluate possible explanations of the illusion and to relate it to comparable tendencies observed in eye movement behavior. It is concluded that the system in charge of the guidance of saccadic eye movements is also the system that provides the metric in perceived visual space.

Journal ArticleDOI
TL;DR: It is concluded that similar mechanisms underlie static and dynamic haptic curvature comparison, and that the two are not only qualitatively but also quantitatively similar.
Abstract: In four experiments, we tested whether haptic comparison of curvature ranging from −4 m⁻¹ to +4 m⁻¹ is qualitatively the same for static and for dynamic touch. In Experiments 1 and 3, we tested whether static and dynamic curvature discrimination are based on height differences, attitude (slope) differences, curvature differences, or a combination of these geometrical variables. It was found that both static and dynamic haptic curvature discrimination are based on attitude differences. In Experiments 2 and 4, we tested whether this mechanism leads to errors in the comparison of stimuli with different lengths for static and dynamic touch, respectively. If the judgments are based on attitude differences, subjects will make systematic errors in these comparisons. In both experiments, we found that subjects compared the curvatures of strips of the same length veridically, whereas they made systematic errors if they were required to compare the curvatures of strips of different lengths. Longer stimuli were judged to be more curved than shorter stimuli with the same curvature. We conclude that similar mechanisms underlie static and dynamic haptic curvature comparison. Moreover, an additional comparison of the data showed that static and dynamic curvature comparison is not only qualitatively, but also quantitatively similar.
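
The length effect follows directly from the geometry of an attitude-based comparison. Over a circularly curved strip of curvature c explored along a length l, the surface attitude (slope) changes by approximately

\Delta\theta \approx c\,l

so two strips of equal curvature but different lengths present different attitude differences, and the longer strip, carrying the larger attitude difference, is judged more curved, as observed in Experiments 2 and 4.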

Journal ArticleDOI
TL;DR: In this article, it was shown that the material-weight illusion is mainly a haptically derived phenomenon: Haptically accessed material cues were both sufficient and necessary for full-strength illusions, whereas visual access was only sufficient to generate moderate strength illusions.
Abstract: Experiment 1 documents modality effects on the material-weight illusion for a low-mass object set (58.5 g). These modality effects indicate that the material-weight illusion is principally a haptically derived phenomenon: Haptically accessed material cues were both sufficient and necessary for full-strength illusions, whereas visually accessed material cues were only sufficient to generate moderate-strength illusions. In contrast, when a high-mass object set (357 g) was presented under the same modality conditions, no illusions were generated. The mass-dependent characteristic of this illusion is considered to be a consequence of differing grip forces. Experiment 2 demonstrates that the enforcement of a firm grip abolishes the low-mass material-weight illusion. Experiment 3 documents that a firm grip also diminishes perceptual differentiation of actual mass differences. Several possible explanations of the consequences of increasing grip force are considered.

Journal ArticleDOI
TL;DR: The results suggest that averaging has little effect when the GCM is the correct model, and the experiment supported the claim that averaging improves the fit of the GCM.
Abstract: Averaging across observers is common in psychological research. Often, averaging reduces the measurement error and, thus, does not affect the inference drawn about the behavior of individuals. However, in other situations, averaging alters the structure of the data qualitatively, leading to an incorrect inference about the behavior of individuals. In this research, the influence of averaging across observers on the fits of decision bound models (Ashby, 1992a) and generalized context models (GCM; Nosofsky, 1986) was investigated through Monte Carlo simulation of a variety of categorization conditions, perceptual representations, and individual difference assumptions and in an experiment. The results suggest that (1) averaging has little effect when the GCM is the correct model, (2) averaging often improves the fit of the GCM and worsens the fit of the decision bound model when the decision bound model is the correct model, (3) the GCM is quite flexible and, under many conditions, can mimic the predictions of the decision bound model, whereas the decision bound model is generally unable to mimic the predictions of the GCM, (4) the validity of the decision bound model’s perceptual representation assumption can have a large effect on the inference drawn about the form of the decision bound, and (5) the experiment supported the claim that averaging improves the fit of the GCM. These results underscore the importance of performing single-observer analysis if one is interested in understanding the categorization performance of individuals.
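
For readers unfamiliar with the two model classes, the GCM's response rule in its standard unbiased form (the distance measure d and sensitivity parameter c are left abstract here) is

P(A \mid i) = \frac{\sum_{j \in A} \eta_{ij}}{\sum_{j \in A} \eta_{ij} + \sum_{k \in B} \eta_{ik}}, \qquad \eta_{ij} = e^{-c\,d_{ij}}

where d_{ij} is the psychological distance between stimulus i and stored exemplar j. A decision bound model instead assigns each percept deterministically to the category on its side of a boundary in perceptual space, with response variability arising only from perceptual and criterial noise. Averaging observers who hold different boundaries smears a sharp bound into a graded response pattern that the more flexible GCM can mimic, which is one intuition for result (2) above.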