
Showing papers in "Attention Perception & Psychophysics in 1996"


Journal ArticleDOI
TL;DR: It is shown that repetition of an attention-driving feature primes the deployment of attention to the same feature on subsequent trials, and that position priming is largely object- or landmark-centered.
Abstract: In an earlier paper (Maljkovic & Nakayama, 1994) we showed that repetition of an attention-driving feature primes the deployment of attention to the same feature on subsequent trials. Here we show that repetition of the target position also primes subsequent trials. Position priming shows a characteristic spatial pattern. Facilitation occurs when the target position is repeated on subsequent trials, and inhibition occurs when the target falls on a position previously occupied by a distractor. Facilitation and inhibition also exist, though somewhat diminished, for positions adjacent to those of the target and distractors. Assessing the effect of a single trial over time, we show that the characteristic memory trace exerts its strongest influence on immediately following trials and decays gradually over approximately the next five to eight trials. Throughout this period, target-position facilitation is always stronger than distractor-position inhibition. The characteristics of position priming are also seen under conditions in which the attention-driving feature either stays the same or differs from the previous trial, suggesting that feature and position priming operate independently. In a separate experiment, using the fact that position priming is cumulative over trials, we show that position priming is largely object- or landmark-centered.

490 citations


Journal ArticleDOI
TL;DR: The proposed dimension-weighting account showed that both tasks involve weight shifting, though (explicitly) discerning the dimension of a target requires some process additional to simply detecting its presence, and that the intertrial facilitation is indeed (largely) dimension specific rather than feature specific in nature.
Abstract: Search for odd-one-out feature targets takes longer when the target can be present in one of several dimensions as opposed to only one dimension (Muller, Heller, & Ziegler, 1995; Treisman, 1988). Muller et al. attributed this cost to the need to discern the target dimension. They proposed a dimension-weighting account, in which master map units compute, in parallel, the weighted sum of dimension-specific saliency signals. If the target dimension is known in advance, signals from that dimension are amplified. But if the target dimension is unknown, it is determined in a process that shifts weight from the nontarget to the target dimension. The weight pattern thus generated persists across trials, producing intertrial facilitation for a target (trial n+1) dimensionally identical to the preceding target (trial n). In the present study, we employed a set of new tasks in order to reexamine and extend this account. Targets were defined along two possible dimensions (color or orientation) and could take on one of two feature values (e.g., red or blue). Experiments 1 and 2 required absent/present and color/orientation discrimination of a single target, respectively. They showed that (1) both tasks involve weight shifting, though (explicitly) discerning the dimension of a target requires some process additional to simply detecting its presence; and (2) the intertrial facilitation is indeed (largely) dimension specific rather than feature specific in nature. In Experiment 3, the task was to count the number of targets in a display (either three or four), which could be either dimensionally the same (all color or all orientation) or mixed (some color and some orientation). As predicted by the dimension-weighting account, enumerating four targets all defined within the same dimension was faster than counting three such targets or mixed targets defined in two dimensions.

375 citations


Journal ArticleDOI
TL;DR: The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but they do not require temporal coincidence of that information.
Abstract: Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but they do not require temporal coincidence of that information.

357 citations


Journal ArticleDOI
TL;DR: Event-related brain potentials recorded from subjects attending to pairs of adjacent colored squares, flashed sequentially to produce a perception of movement, support early-selection theories of attention that stipulate attentional control over the initial processing of stimulus features.
Abstract: Event-related brain potentials (ERPs) were recorded from subjects who attended to pairs of adjacent colored squares that were flashed sequentially to produce a perception of movement. The task was to attend selectively to stimuli in one visual field and to detect slower moving targets that contained the critical value of the attended feature, be it color or movement direction. Attention to location was reflected by a modulation of the early P1 and N1 components of the ERP, whereas selection of the relevant stimulus feature was associated with later selection negativity components. ERP indices of feature selection were elicited only by stimuli at the attended location and had distinctive scalp distributions for features mediated by “ventral” (color) and “dorsal” (motion) cortical areas. ERP indices of target selection were also contingent on the prior selection of location but initially did not depend on the selection of the relevant feature. These ERP data reveal the timing of sequential, parallel, and contingent stages of visual processing and support early-selection theories of attention that stipulate attentional control over the initial processing of stimulus features.

310 citations


Journal ArticleDOI
TL;DR: The data suggest that face identification is preferentially supported by a band of spatial frequencies of approximately 8-16 cycles per face; contrast or line-based explanations were found to be inadequate.
Abstract: If face images are degraded by block averaging, there is a nonlinear decline in recognition accuracy as block size increases, suggesting that identification requires a critical minimum range of object spatial frequencies. The identification of faces was measured with equivalent Fourier low-pass filtering and block averaging preserving the same information and with high-pass transformations. In Experiment 1, accuracy declined and response time increased in a significant nonlinear manner in all cases as the spatial-frequency range was reduced. However, it did so at a faster rate for the quantized and high-passed images. A second experiment controlled for the differences in the contrast of the high-pass faces and found a reduced but significant and nonlinear decline in performance as the spatial-frequency range was reduced. These data suggest that face identification is preferentially supported by a band of spatial frequencies of approximately 8-16 cycles per face; contrast or line-based explanations were found to be inadequate. The data are discussed in terms of current models of face identification.
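The block-averaging degradation described above can be sketched in plain Python. This is an illustrative implementation only, not the authors' stimulus-generation code; the function name and the grayscale nested-list representation are assumptions:

```python
def block_average(img, b):
    """Block-average ("pixelate") a grayscale image given as a list of
    rows of numbers: replace each b-by-b block with its mean value,
    removing spatial frequencies finer than the block size."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y0 in range(0, h, b):
        for x0 in range(0, w, b):
            # Gather the pixels of this block (blocks at the edges may
            # be smaller than b-by-b).
            block = [img[y][x]
                     for y in range(y0, min(y0 + b, h))
                     for x in range(x0, min(x0 + b, w))]
            m = sum(block) / len(block)
            for y in range(y0, min(y0 + b, h)):
                for x in range(x0, min(x0 + b, w)):
                    out[y][x] = m
    return out
```

Roughly speaking, a face spanning F pixels that is averaged with block size b retains object spatial frequencies only up to about F/(2b) cycles per face (plus quantization artifacts), which is how block size maps onto the cycles-per-face band discussed above.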

275 citations


Journal ArticleDOI
TL;DR: Comparisons of global and local characteristics of eye movements during reading, scanning of transformed text, and visual search show that eye movements are not guided by a global strategy and local tactics, but by immediate processing demands.
Abstract: In an extension of a study by Vitu, O’Regan, Inhoff, and Topolski (1995), we compared global and local characteristics of eye movements during (1) reading, (2) the scanning of transformed text (in which each letter was replaced with a z), and (3) visual search. Additionally, we examined eye behavior with respect to specific target words of high or low frequency. Globally, the reading condition led to shorter fixations, longer saccades, and less frequent skipping of target strings than did scanning transformed text. Locally, the manipulation of word frequency affected fixation durations on the target word during reading, but not during visual search or z-string scanning. There were also more refixations on target words in reading than in scanning. Contrary to Vitu et al.’s (1995) findings, our results show that eye movements are not guided by a global strategy and local tactics, but by immediate processing demands.

230 citations


Journal ArticleDOI
TL;DR: It appears that judgments of tension arose from a convergence of several cognitive and psychoacoustic influences, whose relative importance varies depending on musical training.
Abstract: This study investigates the effect of four variables (tonal hierarchies, sensory chordal consonance, horizontal motion, and musical training) on perceived musical tension. Participants were asked to evaluate the tension created by a chord X in sequences of three chords {C major → X → C major} in a C major key context. The X chords could be major or minor triads, major-minor seventh chords, or minor seventh chords built on the 12 notes of the chromatic scale. The data were compared with Krumhansl’s (1990) harmonic hierarchy and with predictions of Lerdahl’s (1988) cognitive theory, Hutchinson and Knopoff’s (1978) and Parncutt’s (1989) sensory-psychoacoustical theories, and the model of horizontal motion defined in the paper. As a main outcome, it appears that judgments of tension arose from a convergence of several cognitive and psychoacoustic influences, whose relative importance varies depending on musical training.

204 citations


Journal ArticleDOI
TL;DR: It is shown that memory for music seems to preserve the absolute tempo of the musical performance, and it is found that folk songs lacking a tempo standard generally have a large variability in tempo; this counters arguments that memory for the tempo of remembered songs is driven by articulatory constraints.
Abstract: We report evidence that long-term memory retains absolute (accurate) features of perceptual events. Specifically, we show that memory for music seems to preserve the absolute tempo of the musical performance. In Experiment 1, 46 subjects sang two different popular songs from memory, and their tempos were compared with recorded versions of the songs. Seventy-two percent of the productions on two consecutive trials came within 8% of the actual tempo, demonstrating accuracy near the perceptual threshold (JND) for tempo. In Experiment 2, a control experiment, we found that folk songs lacking a tempo standard generally have a large variability in tempo; this counters arguments that memory for the tempo of remembered songs is driven by articulatory constraints. The relevance of the present findings to theories of perceptual memory and memory for music is discussed.

195 citations


Journal ArticleDOI
TL;DR: The results support a preprogramming model of the control of fixation duration during visual search; in a simple search task, control of fixation duration appears to be indirect.
Abstract: To obtain insight into the control of fixation duration during visual search, we had 4 subjects perform simple search tasks in which we systematically varied the discriminability of the target. The experiment was carried out under two conditions. Under the first condition (blocked), the discriminability of the target was kept constant during a session. Under the second condition (mixed), the discriminability of the target varied per trial. Under the blocked condition, fixation duration increased with decreasing discriminability. For 2 subjects, we found much shorter fixation durations in difficult trials with the mixed condition than in difficult trials with the blocked condition. Overall, the subjects fixated the target, continued to search, and then went back to the target in M6-55% of the correct trials. In these trials, the result of the analysis of the foveal target was not used for preparing the next saccade. The results support a preprogramming model of the control of fixation duration. In a simple search task, control of fixation duration appears to be indirect.

194 citations


Journal ArticleDOI
TL;DR: Two experiments on performance on the traveling salesman problem (TSP) are reported, testing the hypothesis that complexity of TSPs is a function of number of nonboundary points, not total number of points.
Abstract: Two experiments on performance on the traveling salesman problem (TSP) are reported. The TSP consists of finding the shortest path through a set of points, returning to the origin. It appears to be an intransigent mathematical problem, and heuristics have been developed to find approximate solutions. The first experiment used 10-point problems; the second, 20-point problems. The experiments tested the hypothesis that the complexity of TSPs is a function of the number of nonboundary points, not the total number of points. Both experiments supported the hypothesis. The experiments provided information on the quality of subjects’ solutions. Their solutions clustered close to the best known solutions, were an order of magnitude better than solutions produced by three well-known heuristics, and on average fell beyond the 99.9th percentile in the distribution of random solutions. The solution process appeared to be perceptually based.
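To make the contrast with the heuristics concrete, here is a minimal Python sketch of one well-known construction heuristic, nearest neighbor. The abstract does not name the three heuristics the paper used, so this is an assumed example of the genre rather than the paper's method:

```python
import math

def tour_length(points, tour):
    """Total length of a closed tour, returning to the starting point."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def nearest_neighbor_tour(points, start=0):
    """Greedy nearest-neighbor heuristic: from the current point,
    always visit the closest unvisited point next. Fast, but its
    tours are typically far from optimal."""
    unvisited = set(range(len(points)))
    unvisited.remove(start)
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour
```

The paper's finding is that human solutions clustered near the best known tours and beat heuristics of this kind by an order of magnitude in solution quality.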

192 citations


Journal ArticleDOI
TL;DR: When subjects were precued to expect targets in a narrow region of the display, the object effect was eliminated, implying that object-based selection may only operate within spatially attended regions.
Abstract: A new test was devised to avoid previous confounds in measures of object-based limits on divided visual attention. The distinction between objects was manipulated across a wide spatial extent. Target elements appeared on the same object only when far apart, and appeared close only when on different objects, so that object effects could not be reduced to spatial effects, nor vice versa. Subjects judged whether two odd elements within a display of two dashed lines were the same or different. They performed better when the target elements were far apart on a common line rather than on two distinct lines, even though the latter arrangement was more likely. Thus, nonstrategic object-based limits on divided attention can arise even across large distances. However, when subjects were precued to expect targets in a narrow region of the display, the object effect was eliminated, implying that object-based selection may only operate within spatially attended regions.


Journal ArticleDOI
TL;DR: Results suggest that gender priming involves a combination of controlled postlexical processing and automatic prelexical processing; implications for models of lexical access are discussed, with special reference to modular versus interactive-activation theories.
Abstract: The goals of the present study were (1) to determine whether grammatical gender on a noun modifier can prime recognition of the following noun, (2) to determine whether the priming effect involves facilitation, inhibition, or both, and (3) to compare performance across three different tasks that vary in the degree to which explicit attention to gender is required, including word repetition, gender monitoring, and grammaticality judgment. Results showed a clear effect of gender priming, involving both facilitation and inhibition. Priming was observed whether or not the subjects’ attention was directed to gender per se. Results suggest that gender priming involves a combination of controlled postlexical processing and automatic prelexical processing. Implications for different models of lexical access are discussed, with special reference to modular versus interactive-activation theories.

Journal ArticleDOI
TL;DR: Evidence is presented to show that reduced odor intensity following long-term exposure is accompanied by odorant-specific shifts in threshold, which distinguishes this phenomenon from the adaptation seen following shorter exposures and highlights the need for the study of exposure durations that are more similar to real-world exposures.
Abstract: Any individual living or working in an odorous environment can experience changes in odor perception, some of which are long lasting. Often, these individuals report a significant reduction in the perception of an odor following long-term exposure to that odor (adaptation). Yet, most experimental analyses of olfactory adaptation use brief odorant exposures which may not typify real-world experiences. Using a procedure combining long-term odor exposure in a naturalistic setting with psychophysical tests in the laboratory, we present evidence to show that reduced odor intensity following long-term exposure is accompanied by odorant-specific shifts in threshold. Subjects were exposed continuously to one of two odorants while in their home for a period of 2 weeks. Exposure produced an odorant-specific reduction in sensitivity and perceived intensity compared with preexposure baselines: Detection thresholds for the adapting odorant were elevated following exposure and perceived intensity ratings for weak concentrations were reduced. For most individuals, reduced sensitivity to the test odorant was still evident up to 2 weeks following the last exposure. The persistence of the change, as evidenced by the duration of recovery from adaptation, distinguishes this phenomenon from the adaptation seen following shorter exposures and highlights the need for the study of exposure durations that are more similar to real-world exposures.

Journal ArticleDOI
TL;DR: The results showed that mean RT increases with task complexity, but the exponents of the functions relating RT to stimulus intensity were found to be similar in the different experiments, indicating that Piéron’s law holds for CRT as well as for SRT.
Abstract: Piéron (1914, 1920, 1952) demonstrated that simple reaction time (SRT) decays as a hyperbolic function of luminance in detection tasks. However, whether such a relationship holds equally for choice reaction time (CRT) has been questioned (Luce, 1986; Nissen, 1977), at least when the task is not brightness discrimination. In two SRT and three CRT experiments, we investigated the function that relates reaction time (RT) to stimulus intensity for five levels of luminance covering the entire mesopic range. The psychophysical experiments consisted of simple detection, two-alternative forced choice (2AFC) with spatial uncertainty, 2AFC with semantic categorization, and 2AFC with orientation discrimination. The results of the experiments showed that mean RT increases with task complexity. However, the exponents of the functions relating RT to stimulus intensity were found to be similar in the different experiments. This finding indicates that Piéron's law holds for CRT as well as for SRT. It describes RT as a power function of stimulus intensity, with similar exponents, regardless of the complexity of the task.
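Piéron's law is commonly written RT = r0 + k·I^(−β), and the finding above is that task complexity changes the additive constant r0 while leaving the exponent β similar. A small Python sketch; the parameter values below are hypothetical, chosen only to illustrate the pattern:

```python
def pieron_rt(intensity, r0, k, beta):
    """Piéron's law: reaction time decays as a power function of
    stimulus intensity I, i.e. RT = r0 + k * I**(-beta), where r0 is
    the irreducible asymptotic RT and beta governs the decay rate."""
    return r0 + k * intensity ** (-beta)

# Hypothetical parameters: a more complex (choice) task raises r0,
# shifting mean RT upward, but the exponent beta is unchanged.
intensities = (1, 2, 4, 8)
simple = [pieron_rt(i, r0=180.0, k=120.0, beta=0.33) for i in intensities]
choice = [pieron_rt(i, r0=320.0, k=120.0, beta=0.33) for i in intensities]
```

With a shared exponent, the two RT-versus-intensity curves are parallel and separated by a constant, which is the signature pattern the experiments report.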

Journal ArticleDOI
TL;DR: The results suggest that head movement control is linked to postural control through gaze stabilization reflexes in sighted subjects; such reflexes are absent in congenitally blind individuals and may account for their higher levels of head displacement.
Abstract: Haptic cues from fingertip contact with a stable surface attenuate body sway in subjects even when the contact forces are too small to provide physical support of the body. We investigated how haptic cues derived from contact of a cane with a stationary surface at low force levels aids postural control in sighted and congenitally blind individuals. Five sighted (eyes closed) and five congenitally blind subjects maintained a tandem Romberg stance in five conditions: (1) no cane; (2,3) touch contact (< 2 N of applied force) while holding the cane in a vertical or slanted orientation; and (4,5) force contact (as much force as desired) in the vertical and slanted orientations. Touch contact of a cane at force levels below those necessary to provide significant physical stabilization was as effective as force contact in reducing postural sway in all subjects, compared to the no-cane condition. A slanted cane was far more effective in reducing postural sway than was a perpendicular cane. Cane use also decreased head displacement of sighted subjects far more than that of blind subjects. These results suggest that head movement control is linked to postural control through gaze stabilization reflexes in sighted subjects; such reflexes are absent in congenitally blind individuals and may account for their higher levels of head displacement.

Journal ArticleDOI
Jeff Miller
TL;DR: The distribution of sample d′, although mathematically intractable, can be tabulated readily by computer; such tabulations reveal that sample d′ is biased, with an expected value that can be higher or lower than the true value depending on the sample size, the true value itself, and the convention adopted for handling cases in which the sample d′ is undefined.
Abstract: The distribution of sample d′, although mathematically intractable, can be tabulated readily by computer. Such tabulations reveal a number of interesting properties of this distribution, including: (1) sample d′ is biased, with an expected value that can be higher or lower than the true value, depending on the sample size, the true value itself, and the convention adopted for handling cases in which the sample d′ is undefined; (2) the variance of sample d′ also depends on the convention adopted for handling cases in which the sample d′ is undefined, and is in some cases poorly approximated by the standard approximation formula; and (3) the standard formula for a confidence interval for d′ is quite accurate with at least 50-100 trials per condition, but more accurate intervals can be obtained by direct computation with smaller samples.
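The dependence on the convention for undefined cases can be made concrete: sample d′ is Z(hit rate) − Z(false-alarm rate), which is infinite whenever a rate is exactly 0 or 1 unless a correction is applied. The Python sketch below uses one common convention (the log-linear +0.5 correction) as an assumed example; the paper compares conventions rather than prescribing this one:

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # probit: inverse of the standard normal CDF

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sample d' with the log-linear correction: add 0.5 to each cell
    (and 1 to each trial count) so that hit and false-alarm rates of
    exactly 0 or 1 never make Z() undefined. Other conventions, such
    as replacing 0 with 1/(2N), yield different bias."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hr = (hits + 0.5) / (n_signal + 1.0)
    far = (false_alarms + 0.5) / (n_noise + 1.0)
    return Z(hr) - Z(far)
```

Because the correction pulls extreme rates toward 0.5, the estimator stays finite even for perfect performance, at the cost of the bias the paper tabulates.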

Journal ArticleDOI
TL;DR: This study tested how well people judge their heading in the presence of moving objects and found that people perform remarkably well under a variety of conditions.
Abstract: When moving toward a stationary scene, people judge their heading quite well from visual information alone. Much experimental and modeling work has been presented to analyze how people judge their heading for stationary scenes. However, in everyday life, we often move through scenes that contain moving objects. Most models have difficulty computing heading when moving objects are in the scene, and few studies have examined how well humans perform in the presence of moving objects. In this study, we tested how well people judge their heading in the presence of moving objects. We found that people perform remarkably well under a variety of conditions. The only condition that affects an observer’s ability to judge heading accurately consists of a large moving object crossing the observer’s path. In this case, the presence of the object causes a small bias in the heading judgments. For objects moving horizontally with respect to the observer, this bias is in the object’s direction of motion. These results present a challenge for computational models.

Journal ArticleDOI
TL;DR: The results suggest that binaural processing is probably more important for solving the “cocktail party” problem when there are more than two concurrent sound sources.
Abstract: Listeners identified spoken words, letters, and numbers and the spatial location of these utterances in three listening conditions as a function of the number of simultaneously presented utterances. The three listening conditions were a normal listening condition, in which the sounds were presented over seven possible loudspeakers to a listener seated in a sound-deadened listening room; a one-headphone listening condition, in which a single microphone that was placed in the listening room delivered the sounds to a single headphone worn by the listener in a remote room; and a stationary KEMAR listening condition, in which binaural recordings from an acoustic manikin placed in the listening room were delivered to a listener in the remote room. The listeners were presented one, two, or three simultaneous utterances. The results show that utterance identification was better in the normal listening condition than in the one-headphone condition, with the KEMAR listening condition yielding intermediate levels of performance. However, the differences between listening in the normal and in the one-headphone conditions were much smaller when two, rather than three, utterances were presented at a time. Localization performance was good for both the normal and the KEMAR listening conditions and at chance for the one-headphone condition. The results suggest that binaural processing is probably more important for solving the “cocktail party” problem when there are more than two concurrent sound sources.

Journal ArticleDOI
TL;DR: Six experiments were conducted to investigate the nature of the contents of object files, temporary representations that store information about objects, and suggest that an object file includes identity information, but not semantic information.
Abstract: Six experiments were conducted to investigate the nature of the contents of object files, temporary representations that store information about objects. Experiment 1 used a lexical priming paradigm with a lexical decision task, in which the prime and target could appear in either the same or different locations. The results indicated a greater priming effect when the prime and target appeared in the same location than when they appeared in different locations (object- or location-specific priming). Experiment 2 replicated these findings for objects that changed position during the display. Experiment 3 demonstrated that these findings reflected the inclusion of abstract identity information, rather than physical form, in object files. Three additional experiments tested for the presence of three types of semantic information (related concepts, semantic features, and category membership) in object files. No object-specific priming effects were found. Taken together, these experiments suggest that an object file includes identity information, but not semantic information. Implications of the results for object file theory are discussed.

Journal ArticleDOI
TL;DR: The results from three experiments suggest that the differences between broad and focal attentional distributions are not the result of different stages of information processing indexed by RT measures as opposed to SD measures and support numerous prior studies showing that spatial attention affects perceptual sensitivity and that the strategic allocation of attention is a highly flexible process.
Abstract: Studies of the spatial distribution of visual attention have shown that attentional facilitation monotonically decreases in a graded fashion with increasing distance from an attended location. However, reaction time (RT) measures have typically shown broader gradients than have signal detection (SD) measures of perceptual sensitivity. It is not clear whether these differences have arisen because the stages of information processing indexed by RT measures are different from those indexed by SD measures, or whether these differences are due to methodological confounds in the SD studies. In the present set of experiments, the spatial distribution of attention was studied by using a luminance detection task in an endogenous cuing paradigm that was designed to permit accurate calculations of SD and RT measures for targets at cued and uncued locations. Subjects made target-present/absent decisions at one of six possible cued or uncued upper visual hemifield locations on each trial. The results from three experiments suggest that the differences between broad and focal attentional distributions are not the result of different stages of information processing indexed by RT measures as opposed to SD measures. Rather, the differing distributions appear to reflect variations in attentional allocation strategies induced by the perceptual requirements typical of RT paradigms as opposed to SD paradigms. These findings support numerous prior studies showing that spatial attention affects perceptual sensitivity and that the strategic allocation of attention is a highly flexible process.

Journal ArticleDOI
TL;DR: The beneficial effect of having targets appear at the same, as opposed to a different, level as that on the immediately preceding trial was unaffected by contrast balancing, suggesting that attentional selection between different levels of structure is not based on spatial frequency.
Abstract: Is attentional selection between local and global forms based on spatial frequency? This question was examined by having subjects identify local or global forms of stimuli that had been “contrast balanced,” a technique that eliminates low spatial frequencies. Response times (RTs) to global (but not local) forms were slowed for contrast-balanced stimuli, suggesting that low spatial frequencies mediate the global RT advantage typically reported. In contrast, the beneficial effect of having targets appear at the same, as opposed to a different, level as that on the immediately preceding trial was unaffected by contrast balancing. This suggests that attentional selection between different levels of structure is not based on spatial frequency. The data favor an explanation in terms of “priming,” rather than in terms of adjustments in the diameter of an attentional “spotlight.”

Journal ArticleDOI
TL;DR: Four experiments examined the influence of categorical information and visual experience on the identification of tangible pictures, produced with a raised-line drawing kit, and proposed that part of the difficulty in identification of raised line pictures may derive from problems in locating picture categories or names, and not merely in perception of the patterns.
Abstract: Four experiments examined the influence of categorical information and visual experience on the identification of tangible pictures, produced with a raised-line drawing kit. In Experiment 1, prior categorical information aided the accuracy and speed of picture identification. In a second experiment, categorical information helped subjects when given after the examination of each picture, but before any attempt at identification. The benefits of categorical information were also obtained in another group of subjects, when the superordinate categories were named at the start of the experiment. In a third experiment, a multiple-choice picture recognition task was used to eliminate the difficulty of naming from the picture-identification task. The multiple-choice data showed higher accuracy and shorter latencies when compared with identification tasks. A fourth experiment evaluated picture identification in blindfolded sighted, early, and late blind participants. Congenitally blind subjects showed lower performance than did the other groups, despite the availability of prior categorical information. The data were consistent with theories that assume that visual imagery aids tactual perception in naming raised-line drawings. It was proposed that part of the difficulty in identification of raised-line pictures may derive from problems in locating picture categories or names, and not merely in perception of the patterns.

Journal ArticleDOI
TL;DR: It was concluded that both stereoscopic and changing-size cues provide additional motion-in-depth information that is used in perceiving self-motion, showing that the addition of stereoscopic cues to optic flow significantly improves forward linear vection in central vision.
Abstract: During self-motions, different patterns of optic flow are presented to the left and right eyes. Previous research has, however, focused mainly on the self-motion information contained in a single pattern of optic flow. The present experiments investigated the role that binocular disparity plays in the visual perception of self-motion, showing that the addition of stereoscopic cues to optic flow significantly improves forward linear vection in central vision. Improvements were also achieved by adding changing-size cues to sparse (but not dense) flow patterns. These findings showed that assumptions in the heading literature that stereoscopic cues facilitate self-motion only when the optic flow has ambiguous depth ordering do not apply to vection. Rather, it was concluded that both stereoscopic and changing-size cues provide additional motion-in-depth information that is used in perceiving self-motion.

Journal ArticleDOI
TL;DR: Results suggested that gravitational cues may play a role in haptic coding of orientation, although the effects of decreasing or increasing these cues are not symmetrical.
Abstract: The haptic perception of vertical, horizontal, 45°-oblique, and 135°-oblique orientations was studied in adults. The purpose was to establish whether the gravitational cues provided by the scanning arm-hand system were involved in the haptic oblique effect (lower performances in oblique orientations than in vertical-horizontal ones) and more generally in the haptic coding of orientation. The magnitude of these cues was manipulated by changing gravity constraints, and their variability was manipulated by changing the planes in which the task was performed (horizontal, frontal, and sagittal). In Experiment 1, only the horizontal plane was tested, either with the forearm resting on the disk supporting the rod (“supported forearm” condition) or with the forearm unsupported in the air. In the latter case, antigravitational forces were elicited during scanning. The oblique effect was present in the “unsupported” condition and was absent in the “supported” condition. In Experiment 2, the three planes were tested, either in a “natural” or in a “lightened forearm” condition in which the gravitational cues were reduced by lightening the subject’s forearm. The magnitude of the oblique effect was lower in the “lightened” condition than in the “natural” one, and there was no plane effect. In Experiment 3, the subject’s forearm was loaded with either a 500- or a 1,000-g bracelet, or it was not loaded. The oblique effect was the same in the three conditions, and the plane effect (lower performances in the horizontal plane than in the frontal and sagittal ones) was present only when the forearm was loaded. Taken together, these results suggested that gravitational cues may play a role in haptic coding of orientation, although the effects of decreasing or increasing these cues are not symmetrical.

Journal ArticleDOI
TL;DR: A robust gap effect for aimed hand movements (which required determination of a precise spatial location), regardless of whether the hand moved alone or was accompanied by a saccadic eye movement is shown.
Abstract: A temporal gap between fixation point offset and stimulus onset typically yields shorter saccadic latencies to the stimulus than if the fixation stimulus remained on. Several researchers have explored the extent to which this gap also reduces latencies of other responses but have failed to find a gap effect isolated from general warning effects. Experiment 1, however, showed a robust gap effect for aimed hand movements (which required determination of a precise spatial location), regardless of whether the hand moved alone or was accompanied by a saccadic eye movement. Experiment 2 replicated this aimed hand gap effect and also showed a smaller effect for choice manual keypress responses (which required determination of the direction of response only). Experiment 3 showed no gap effect for simple manual keypress responses (which required no spatial determination). The results are consistent with an interpretation of the gap effect in terms of facilitation of spatially oriented responses.

Journal ArticleDOI
TL;DR: It is estimated that pairwise depth comparisons are an order of magnitude less precise than might be expected from the attitude data, and thus the surface attitudes cannot be derived from a depth map as operationally defined by the methods.
Abstract: We measured local surface attitude for monocular pictorial relief and performed pairwise depth-comparison judgments on the same picture. Both measurements were subject to internal consistency checks. We found that both measurements were consistent with a relief (continuous pictorial surface) interpretation within the session-to-session scatter. We reconstructed the pictorial relief from both measurements separately, and found results that differed in detail but were quite similar in their basic structures. Formally, one expects certain geometrical identities that relate range and attitude data. Because we have independent measurements of both, we can attempt an empirical verification of such geometrical identities. Moreover, we can check whether the statistical scatter in the data indicates that, for example, the surface attitudes are derivable from a depth map or vice versa. We estimate that pairwise depth comparisons are an order of magnitude less precise than might be expected from the attitude data. Thus, the surface attitudes cannot be derived from a depth map as operationally defined by our methods, although the reverse is a possibility.

Journal ArticleDOI
TL;DR: The ability of subjects to discriminate sugars with a whole-mouth forced-choice paradigm, in which a standard solution was compared with a test solution of varied concentration, is investigated, finding the gustatory indiscriminability of these sugarsmonogeusia.
Abstract: We investigated the ability of subjects to discriminate sugars with a whole-mouth forced-choice paradigm, in which a standard solution was compared with a test solution of varied concentration. Discrimination probabilities were U-shaped functions of test concentration: for 6 subjects and pairwise combinations of fructose, glucose, and sucrose, discriminability always declined to chance over a narrow range of test concentrations. At concentrations ≤100 mM, maltose was indiscriminable from fructose but discriminable at higher concentrations for 4 subjects. By analogy with the monochromacy of night vision, whereby any two lights are indiscriminable when their relative intensities are suitably adjusted, we call the gustatory indiscriminability of these sugars monogeusia. The simplest account of monogeusia is that all information about the indiscriminable sugars is represented by a single neural signal that varies only in magnitude. The discriminability of maltose from the other sugars at higher concentrations is consistent with the hypothesis that maltose also activates a second gustatory code.

Journal ArticleDOI
TL;DR: The results suggest that the preattentive character of some texture discrimination tasks with SOAs of only 100 msec is vitiated by the involuntary attentional shifts that are caused by orientation differences.
Abstract: We tested the ability of orientation differences to cause involuntary shifts of visual attention and found that these attentional shifts can occur in response to an orientation “pop-out” display. Texturelike cue stimuli consisting of discrete oriented bars, with either uniform orientation or containing a noninformative orthogonally oriented bar, were presented for a variable duration. Subsequent to or partially coincident with the cue stimulus was the target display of a localization or two-interval forced-choice task, followed by a mask display. Naive subjects consistently showed greater accuracy in trials with the target at the location of the orthogonal orientation compared with trials with uniformly oriented bars, with only 100 msec between the cue and mask onsets. Discriminating these orientations required a stimulus onset asynchrony (SOA) of 50–70 msec. The attentional facilitation is transient: it was absent in most cases at a cue-mask SOA of 250 msec. These results suggest that the preattentive character of some texture discrimination tasks with SOAs of only 100 msec is vitiated by the involuntary attentional shifts that are caused by orientation differences.

Journal ArticleDOI
TL;DR: A series of experiments investigated concurrent discriminations of surface and nonsurface attributes, including color, brightness, texture, length, location, and motion, and found a model of visual attention in which separate visual subsystems are coordinated, converging to work on surface and boundary properties of the same selected object.
Abstract: A series of experiments investigated concurrent discriminations of surface and nonsurface attributes, including color, brightness, texture, length, location, and motion. In all cases but one, results matched those previously reported: Interference occurred when two discriminations concerned different objects, but not when they concerned the same one. In the two-object case, interference was the same whether discriminations were similar (e.g., two surface discriminations) or different (e.g., one surface, one boundary). Such results support a model of visual attention in which separate visual subsystems are coordinated, converging to work on surface and boundary properties of the same selected object. A partial exception is color: For reasons that are unclear, color escapes two-object interference except from other, concurrent surface discriminations.