
Showing papers in "Attention Perception & Psychophysics in 2001"


Journal ArticleDOI
TL;DR: An integrated approach to fitting psychometric functions, assessing the goodness of fit, and providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing is described.
Abstract: The psychometric function relates an observer’s performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function’s parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ² methods to psychophysical data and advocate use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.

2,263 citations
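The constrained maximum-likelihood fit with a lapse parameter that the abstract describes can be sketched as follows. This is not the authors' published software; the Weibull parameterization, bounds, and variable names (alpha, beta, lam) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_2afc(x, alpha, beta, lam):
    """2AFC Weibull: guess rate 0.5, lapse rate lam."""
    return 0.5 + (0.5 - lam) * (1.0 - np.exp(-(x / alpha) ** beta))

def neg_log_likelihood(params, x, k, n):
    alpha, beta, lam = params
    p = np.clip(weibull_2afc(x, alpha, beta, lam), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

rng = np.random.default_rng(0)
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])               # stimulus levels
n = np.full(5, 160)                                   # trials per level
k = rng.binomial(n, weibull_2afc(x, 2.0, 1.5, 0.02))  # simulated data

# Constrained fit: the lapse rate is bounded to a small interval,
# which is what protects alpha and beta from lapse-induced bias.
fit = minimize(neg_log_likelihood, x0=[1.0, 2.0, 0.01], args=(x, k, n),
               method="L-BFGS-B",
               bounds=[(0.1, 20.0), (0.1, 10.0), (0.0, 0.06)])
alpha_hat, beta_hat, lam_hat = fit.x
```

Fitting with lam fixed at 0 instead would push the estimated function toward any lapse-contaminated points, which is the bias the paper documents.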


Journal ArticleDOI
TL;DR: The present paper’s principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes, and it introduces bias-corrected and accelerated confidence intervals that improve on the parametric and percentile-based bootstrap confidence intervals previously used.
Abstract: The psychometric function relates an observer's performance to an independent variable, usually a physical quantity of an experimental stimulus. Even if a model is successfully fit to the data and its goodness of fit is acceptable, experimenters require an estimate of the variability of the parameters to assess whether differences across conditions are significant. Accurate estimates of variability are difficult to obtain, however, given the typically small size of psychophysical data sets: Traditional statistical techniques are only asymptotically correct and can be shown to be unreliable in some common situations. Here and in our companion paper (Wichmann & Hill, 2001), we suggest alternative statistical techniques based on Monte Carlo resampling methods. The present paper's principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes. First, we outline the basic bootstrap procedure and argue in favor of the parametric, as opposed to the nonparametric, bootstrap. Second, we describe how the bootstrap bridging assumption, on which the validity of the procedure depends, can be tested. Third, we show how one's choice of sampling scheme (the placement of sample points on the stimulus axis) strongly affects the reliability of bootstrap confidence intervals, and we make recommendations on how to sample the psychometric function efficiently. Fourth, we show that, under certain circumstances, the (arbitrary) choice of the distribution function can exert an unwanted influence on the size of the bootstrap confidence intervals obtained, and we make recommendations on how to avoid this influence. Finally, we introduce improved confidence intervals (bias corrected and accelerated) that improve on the parametric and percentile-based bootstrap confidence intervals previously used. Software implementing our methods is available.

838 citations
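A minimal sketch of the parametric bootstrap the abstract advocates, under simplifying assumptions: the fitted performance values and the interpolation-based threshold estimator below stand in for a full maximum-likelihood refit of the psychometric function on each resample.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # stimulus levels
n = 100                                           # trials per level
p_fit = np.array([0.55, 0.65, 0.78, 0.90, 0.97])  # fitted performance values

def threshold(p, x, criterion=0.75):
    """Stimulus level at the criterion level, by linear interpolation."""
    return np.interp(criterion, p, x)

boot = np.empty(2000)
for b in range(boot.size):
    k = rng.binomial(n, p_fit)              # parametric resample from the fit
    boot[b] = threshold(np.sort(k / n), x)  # sort guards rare non-monotone samples
lo, hi = np.percentile(boot, [2.5, 97.5])   # percentile CI for the threshold
```

The width of [lo, hi] depends strongly on where the sample points sit on the stimulus axis, which is the sampling-scheme effect the paper analyzes.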


Journal ArticleDOI
TL;DR: The general development of adaptive procedures is described and three commonly used methods are reviewed; typically, a threshold value is measured using these methods, and, in some cases, other characteristics of the psychometric function underlying perceptual performance, such as slope, may be estimated.
Abstract: As research on sensation and perception has grown more sophisticated during the last century, new adaptive methodologies have been developed to increase efficiency and reliability of measurement. An experimental procedure is said to be adaptive if the physical characteristics of the stimuli on each trial are determined by the stimuli and responses that occurred in the previous trial or sequence of trials. In this paper, the general development of adaptive procedures is described, and three commonly used methods are reviewed. Typically, a threshold value is measured using these methods, and, in some cases, other characteristics of the psychometric function underlying perceptual performance, such as slope, may be estimated. Results of simulations and experiments with human subjects are reviewed to evaluate the utility of these adaptive procedures and the special circumstances under which one might be superior to another.

735 citations
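As a concrete illustration of one family of adaptive procedures the review covers, here is a 2-down/1-up transformed staircase, which converges on the 70.7%-correct point of the psychometric function; the simulated observer, step size, and stopping rule are our assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def p_correct(x):
    """Simulated observer: logistic psychometric function."""
    return 1.0 / (1.0 + np.exp(-(x - 5.0)))

level, step = 8.0, 1.0
correct_run, last_dir = 0, 0
reversals = []
while len(reversals) < 12:
    if rng.random() < p_correct(level):   # correct response
        correct_run += 1
        if correct_run == 2:              # two in a row -> make task harder
            correct_run = 0
            if last_dir == +1:
                reversals.append(level)   # direction flipped: a reversal
            level -= step
            last_dir = -1
    else:                                 # error -> make task easier
        correct_run = 0
        if last_dir == -1:
            reversals.append(level)
        level += step
        last_dir = +1

threshold_707 = float(np.mean(reversals[2:]))  # discard early reversals
```

Averaging reversal levels (after discarding the first few) is one common threshold estimate; likelihood-based adaptive methods, also reviewed in the paper, use the whole trial sequence instead.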


Journal ArticleDOI
TL;DR: The results suggest that the emotional expression in a face can be perceived outside the focus of attention and can guide focal attention to the location of the face.
Abstract: Four experiments were conducted to evaluate whether focal attention can be guided by an analysis of the emotional expression in a face. Participants searched displays of 7, 11, 15, and 19 schematic faces for the location of a unique face expressing either a positive or a negative emotion located among distractor faces expressing a neutral emotion. The slopes of the search functions for locating the negative face were shallower than the slopes of the search functions for locating the positive face (Experiments 1A and 2A). When the faces were inverted to reduce holistic face perception, the slopes of the search functions for locating positive and negative faces were not different (Experiments 1B and 2B). The results suggest that the emotional expression in a face can be perceived outside the focus of attention and can guide focal attention to the location of the face.

601 citations


Journal ArticleDOI
TL;DR: This paper examines various psychometric function topics inspired by this special symposium issue of Perception & Psychophysics, among them the relative merits of objective yes/no versus forced choice tasks (including threshold variance).
Abstract: The psychometric function, relating the subject’s response to the physical stimulus, is fundamental to psychophysics. This paper examines various psychometric function topics, many inspired by this special symposium issue of Perception & Psychophysics: What are the relative merits of objective yes/no versus forced choice tasks (including threshold variance)? What are the relative merits of adaptive versus constant stimuli methods? What are the relative merits of likelihood versus up-down staircase adaptive methods? Is 2AFC free of substantial bias? Is there no efficient adaptive method for objective yes/no tasks? Should adaptive methods aim for 90% correct? Can adding more responses to forced choice and objective yes/no tasks reduce the threshold variance? What is the best way to deal with lapses? How is the Weibull function intimately related to the d′ function? What causes bias in the likelihood goodness-of-fit? What causes bias in slope estimates from adaptive methods? How good are nonparametric methods for estimating psychometric function parameters? Of what value is the psychometric function slope? How are various psychometric functions related to each other? The resolution of many of these issues is surprising.

459 citations


Journal ArticleDOI
TL;DR: The results show that stimulus-driven and expectancy-driven effects must be distinguished in studies of attending to different sensory modalities.
Abstract: We examined the effects of modality expectancy on human performance. Participants judged azimuth (left vs. right location) for an unpredictable sequence of auditory, visual, and tactile targets. In some blocks, equal numbers of targets were presented in each modality. In others, the majority (75%) of the targets were presented in just one expected modality. Reaction times (RTs) for targets in an unexpected modality were slower than when that modality was expected or when no expectancy applied. RT costs associated with shifting attention from the tactile modality were greater than those for shifts from either audition or vision. Any RT benefits for the most likely modality were due to priming from an event in the same modality on the previous trial, not to the expectancy per se. These results show that stimulus-driven and expectancy-driven effects must be distinguished in studies of attending to different sensory modalities.

408 citations


Journal ArticleDOI
TL;DR: Four of the eight papers in this symposium in Perception & Psychophysics deal with the use of search asymmetries to identify stimulus attributes that behave as basic features in this context, and another two deal with the long-standing question of whether a novelty can be considered to be a basic feature.
Abstract: In visual search tasks, observers look for a target stimulus among distractor stimuli. A visual search asymmetry is said to occur when a search for stimulus A among stimulus B produces different results from a search for B among A. Anne Treisman made search asymmetries into an important tool in the study of visual attention. She argued that it was easier to find a target that was defined by the presence of a preattentive basic feature than to find a target defined by the absence of that feature. Four of the eight papers in this symposium inPerception & Psychophysics deal with the use of search asymmetries to identify stimulus attributes that behave as basic features in this context. Another two papers deal with the long-standing question of whether a novelty can be considered to be a basic feature. Asymmetries can also arise when one type of stimulus is easier to identify or classify than another. Levin and Angelone’s paper on visual search for faces of different races is an examination of an asymmetry of this variety. Finally, Previc and Naegele investigate an asymmetry based on the spatial location of the target. Taken as a whole, these papers illustrate the continuing value of the search asymmetry paradigm.

251 citations


Journal ArticleDOI
TL;DR: These capture effects may reveal how temporal discrepancies in the input from different sensory modalities are reconciled and could provide a probe for examining the neural stages at which evoked responses correspond to the contents of conscious perception.
Abstract: We report that when a flash and audible click occur in temporal proximity to each other, the perceived time of occurrence of both events is shifted in such a way as to draw them toward temporal convergence. In one experiment, observers judged when a flash occurred by reporting the clock position of a rotating marker. The flash was seen significantly earlier when it was preceded by an audible click and significantly later when it was followed by an audible click, relative to a condition in which the flash and click occurred simultaneously. In a second experiment, observers judged where the marker was when the click was heard. When a flash preceded or followed the click, similar but smaller capture effects were observed. These capture effects may reveal how temporal discrepancies in the input from different sensory modalities are reconciled and could provide a probe for examining the neural stages at which evoked responses correspond to the contents of conscious perception.

208 citations


Journal ArticleDOI
TL;DR: Ventriloquism can be dissociated from exogenous visual attention and appears to reflect sensory interactions with little role for the direction of visual spatial attention.
Abstract: Previously, we showed that the visual bias of auditory sound location, or ventriloquism, does not depend on the direction of deliberate, or endogenous, attention (Bertelson, Vroomen, de Gelder, & Driver, 2000). In the present study, a similar question concerning automatic, or exogenous, attention was examined. The experimental manipulation was based on the fact that exogenous visual attention can be attracted toward a singleton—that is, an item different on some dimension from all other items presented simultaneously. A display was used that consisted of a row of four bright squares with one square, in either the left- or the rightmost position, smaller than the others, serving as the singleton. In Experiment 1, subjects made dichotomous left-right judgments concerning sound bursts, whose successive locations were controlled by a psychophysical staircase procedure and which were presented in synchrony with a display with the singleton either left or right. Results showed that the apparent location of the sound was attracted not toward the singleton, but instead toward the big squares at the opposite end of the display. Experiment 2 was run to check that the singleton effectively attracted exogenous attention. The task was to discriminate target letters presented either on the singleton or on the opposite big square. Performance deteriorated when the target was on the big square opposite the singleton, in comparison with control trials with no singleton, thus showing that the singleton attracted attention away from the target location. In Experiment 3, localization and discrimination trials were mixed randomly so as to control for potential differences in subjects’ strategies in the two preceding experiments. Results were as before, showing that the singleton attracted attention, whereas sound localization was shifted away from the singleton. Ventriloquism can thus be dissociated from exogenous visual attention and appears to reflect sensory interactions with little role for the direction of visual spatial attention.

176 citations


Journal ArticleDOI
TL;DR: By monitoring eye movements, this work investigated the roles that recognition and guidance play in contextual cuing and the interaction between memory-driven search (contextual cuing) and stimulus-driven attentional capture by abrupt onsets.
Abstract: Contextual cuing is a memory-based phenomenon in which previously encountered global pattern information in a display can automatically guide attention to the location of a target (Chun & Jiang, 1998), leading to rapid and accurate responses. What is not clear is how contextual cuing works. By monitoring eye movements, we investigated the roles that recognition and guidance play in contextual cuing. Recognition does not appear to occur on every trial and sometimes does not have its effects until later in the search process. When recognition does occur, attention is guided straight to the target rather than in the general direction. In Experiment 2, we investigated the interaction between memory-driven search (contextual cuing) and stimulus-driven attentional capture by abrupt onsets. Contextual cuing was able to override capture by abrupt onsets. In contrast, onsets had almost no effect on the degree of contextual cuing. These data are discussed in terms of the role of top-down and bottom-up factors in the guidance of attention in visual search.

168 citations


Journal ArticleDOI
TL;DR: In three experiments, the results showed that if new elements were equiluminant with the background, no visual marking occurred, suggesting that new elements must have a luminance onset in order to be prioritized over old elements.
Abstract: In a standard visual marking experiment, observers are presented with a display containing one set of elements (old elements) followed after a certain time interval by a second set of elements (new elements). The task of observers is to search for a target among the new elements. Typically, the time to find the target depends only on the number of new elements in the display and not on the number of old elements, showing that observers search only among the new elements. This effect of prioritizing new elements over old elements is explained in terms of top-down inhibition of old objects—that is, visual marking (Watson & Humphreys, 1997). The present study addressed whether this prioritizing is in fact mediated by top-down inhibition of old objects, as suggested by Watson and Humphreys (1997), or whether it is mediated by the abrupt onsets of the newly presented elements (Yantis & Jonides, 1984). In three experiments, the presentations of the old and new elements were or were not accompanied by a luminance change. The results showed that if new elements were equiluminant with the background, no visual marking occurred, suggesting that new elements must have a luminance onset in order to be prioritized over old elements. Implications for current theories on visual selection are discussed.

Journal ArticleDOI
TL;DR: Conversion formulas for the most popular cases, including the logistic, Weibull, Quick, cumulative normal, and hyperbolic tangent functions as analytic representations, in both linear and log coordinates and to different log bases are provided.
Abstract: The psychometric function's slope provides information about the reliability of psychophysical threshold estimates. Furthermore, knowing the slope allows one to compare, across studies, thresholds that were obtained at different performance criterion levels. Unfortunately, the empirical validation of psychometric function slope estimates is hindered by the bewildering variety of slope measures that are in use. The present article provides conversion formulas for the most popular cases, including the logistic, Weibull, Quick, cumulative normal, and hyperbolic tangent functions as analytic representations, in both linear and log coordinates and to different log bases, the practical decilog unit, the empirically based interquartile range measure of slope, and slope in a d' representation of performance.
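Two slope relations of the kind the article tabulates can be checked numerically. For a logistic F(x) = 1/(1 + exp(-(x - m)/s)) the maximum slope is 1/(4s), and for a cumulative normal with standard deviation σ it is 1/(σ√(2π)); the helper names below are ours, not the article's notation.

```python
import math

def logistic_max_slope(s):
    """Peak slope of F(x) = 1 / (1 + exp(-(x - m) / s))."""
    return 1.0 / (4.0 * s)

def cumnormal_max_slope(sigma):
    """Peak slope of the cumulative normal with standard deviation sigma."""
    return 1.0 / (sigma * math.sqrt(2.0 * math.pi))

# A cumulative normal matched in peak slope to a logistic with s = 1:
s = 1.0
sigma = 4.0 * s / math.sqrt(2.0 * math.pi)
```

Matching functions on peak slope like this is what allows thresholds measured at different criterion levels, or with different analytic forms, to be compared across studies.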

Journal ArticleDOI
TL;DR: The goal was to investigate correlations as a function of individual sensitivities to several bitter compounds representative of different chemical classes and infer the number and variety of potential bitterness transduction systems for these compounds.
Abstract: People vary widely in their sensitivities to bitter compounds, but the intercorrelation of these sensitivities is unknown. Our goal was to investigate correlations as a function of individual sensitivities to several bitter compounds representative of different chemical classes and, from these correlations, infer the number and variety of potential bitterness transduction systems for these compounds. Twenty-six subjects rated and ranked quinine HCl, caffeine, (−)-epicatechin, tetralone, L-phenylalanine, L-tryptophan, magnesium sulfate, urea, sucrose octaacetate (SOA), denatonium benzoate, and n-propylthiouracil (PROP) for bitterness. By examining individual differences, ratings and rankings could be grouped into two general clusters—urea/phenylalanine/tryptophan/epicatechin, and quinine/caffeine/SOA/denatonium benzoate/tetralone/magnesium sulfate—none of which contained PROP. When subjects were grouped into the extremes of sensitivity to PROP, a significant difference was found in the bitterness ratings, but not in the rankings. Therefore, there are also subjects who possess diminished absolute sensitivity to bitter stimuli but do not differ from other subjects in their relative sensitivities to these compounds.

Journal ArticleDOI
TL;DR: A stochastic model is presented that distinguishes a peripheral processing stage with separate parallel activation by visual and auditory information from a central processing stage at which intersensory integration takes place.
Abstract: In two experiments, saccadic response time (SRT) for eye movements toward visual target stimuli at different horizontal positions was measured under simultaneous or near-simultaneous presentation of an auditory nontarget (distractor). The horizontal position of the auditory signal was varied, using a virtual auditory environment setup. Mean SRT to a visual target increased with distance to the auditory nontarget and with delay of the onset of the auditory signal relative to the onset of the visual stimulus. A stochastic model is presented that distinguishes a peripheral processing stage with separate parallel activation by visual and auditory information from a central processing stage at which intersensory integration takes place. Two model versions differing with respect to the role of the auditory distractors are tested against the SRT data.

Journal ArticleDOI
TL;DR: The authors’ results indicated that priming facilitated the direction of attention to the color-singleton target on a subsequent trial: when a color-singleton item happened to be a critical item to be attended in one situation, another color-singleton item defined by the same color combination tended to attract attention in subsequent encounters.
Abstract: We investigated how the performance of a color-singleton search (the search for a single odd-colored item among homogeneously colored distractors) left a persistent memory trace (lasting up to six intervening trials or ∼17 sec) that facilitated a subsequent color-singleton search (when the same target-distractor color combination was repeated). Specifically, we investigated the roles of attention in the encoding and “retrieval” stages of this priming effect by intermixing trials in which the target location was precued by an onset cue. We found that the encoding of both target and distractor colors was automatic in that whether or not observers had to use color in locating the target in the preceding trial did not substantially affect priming. However, priming required that the color-singleton item be attended in the preceding trial. Once a color-singleton display was encoded, our results indicated that priming facilitated the direction of attention to the color-singleton target on a subsequent trial. In short, when a color-singleton item happened to be a critical item to be attended in one situation, another color-singleton item defined by the same color combination tended to attract attention in subsequent encounters.

Journal ArticleDOI
TL;DR: Evidence is developed that fluctuations of the response criterion are much less detrimental to unforced-choice tasks than to yes/no tasks, and informal observations suggest that participants are more comfortable with unforced tasks than with forced ones.
Abstract: This paper evaluates an adaptive staircase procedure for threshold estimation that is suitable for unforced-choice tasks: ones with the additional response alternative “don’t know”. Within the framework of a theory of indecision, evidence is developed that fluctuations of the response criterion are much less detrimental to unforced-choice tasks than to yes/no tasks. An adaptive staircase procedure for unforced-choice tasks is presented. Computer simulations show a slight gain in efficiency if “don’t know” responses are allowed, even if response criteria vary. A behavioral comparison with forced-choice and yes/no procedures shows that the new procedure outdoes the other two with respect to reliability. This is especially true for naive participants. For well-trained participants it is also slightly more efficient than the forced-choice procedure, and it produces a smaller systematic error than the yes/no procedure. Moreover, informal observations suggest that participants are more comfortable with unforced tasks than with forced ones.

Journal ArticleDOI
TL;DR: The data indicated that subjects obtained no useful semantic information from words seen parafoveally that enabled them to identify them more quickly on the subsequent fixation.
Abstract: The question of whether meaning can be extracted from unidentified parafoveal words was examined using fluent Spanish-English bilinguals. In Experiment 1, subjects fixated on a central cross, and a preview word was presented to the right of fixation in parafoveal vision. During the saccade to the parafoveal preview word, the preview was replaced by the target word, which the subject was required to name. In Experiment 2, subjects read sentences containing the target word, and, as in the naming task, a preview word was replaced by the target word when the subject’s saccade crossed a boundary location. In both experiments, preview words were identical to the target word, translations, orthographic controls for the translations, or unrelated words in the opposite language. In both experiments, the preview benefit from the translation conditions was no greater than would be predicted by the orthographic similarity of the preview to the target. Hence, the data indicated that subjects obtained no useful semantic information from words seen parafoveally that enabled them to identify them more quickly on the subsequent fixation.

Journal ArticleDOI
Ruth Rosenholtz1
TL;DR: It is argued that a number of experiments purporting to show search asymmetry contain built-in design asymmetries, and a saliency model of visual search predicts the results of these experiments, using only a simple measure of target-distractor similarity, without reliance on asymmetric search mechanisms.
Abstract: In order to establish a search asymmetry, one must run an experiment with a symmetric design and get asymmetric results. Given an asymmetric design, one expects asymmetric results, and such results do not imply an asymmetry in the search mechanisms. In this paper, I argue that a number of experiments purporting to show search asymmetries contain built-in design asymmetries. A saliency model of visual search predicts the results of these experiments, using only a simple measure of target-distractor similarity, without reliance on asymmetric search mechanisms. These results have implications for search mechanisms and for other experiments purporting to show search asymmetries.

Journal ArticleDOI
TL;DR: This work tested the assumption that memory-driven search proceeds by sampling without replacement, using a multiple-target search paradigm; the data falsified the memory-driven hypothesis and were consistent with memory-free search, though they would also be consistent with memory for a small number of previously attended locations.
Abstract: Models of visual search performance typically assume that search proceeds by sampling without replacement. This requires memory for each deployment of attention. We tested this assumption of memory-driven search using a multiple-target search paradigm. We held total set size constant, varied the number of targets in the display, and asked subjects to report whether or not there were at least n targets present, where n was varied by block. This allowed us to measure the time to find each subsequent target. Memory-driven search predicts that reaction time should be a linear function of n. The alternative memory-free search hypothesis predicts an accelerating function. The data falsify the memory-driven hypothesis. They were consistent with the memory-free search hypothesis but would also be consistent with memory for a small number of previously attended locations.

Journal ArticleDOI
TL;DR: In two visual search experiments, the detection of singleton feature targets redundantly defined on multiple dimensions was investigated, providing evidence for parallel-coactive processing of multiple dimensions, consistent with the dimension-weighting account of Müller, Heller, and Ziegler (1995).
Abstract: In two visual search experiments, the detection of singleton feature targets redundantly defined on multiple dimensions was investigated. Targets differed from the distractors in orientation, color, or both (redundant targets). In Experiment 1, the various target types were presented either in separate blocks or in random order within blocks. Reaction times to redundant targets significantly violated the race model inequality (Miller, 1982), but only when there was constancy of the target-defining dimension(s) within trial blocks. In Experiment 2, there was dimensional variability within blocks. Consistent with Experiment 1, constancy of the target-defining dimension(s), but this time across successive trials (rather than within blocks), was critical for observing violations of the race model inequality. These results provide evidence for parallel-coactive processing of multiple dimensions, consistent with the dimension-weighting account of Müller, Heller, and Ziegler (1995).
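A minimal sketch of the race-model-inequality test (Miller, 1982) referred to in the abstract: at every time t, a race model requires G_red(t) ≤ G_color(t) + G_orient(t), where the G are the reaction-time distribution functions. The RT samples and time grid below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
rt_color  = rng.normal(500.0, 50.0, 200)   # single-dimension RTs (ms), simulated
rt_orient = rng.normal(510.0, 50.0, 200)
rt_red    = rng.normal(430.0, 40.0, 200)   # redundant-target RTs, simulated fast

def ecdf(sample, t):
    """Empirical cumulative distribution function evaluated at times t."""
    return np.searchsorted(np.sort(sample), t, side="right") / sample.size

t = np.linspace(300.0, 700.0, 81)
violation = ecdf(rt_red, t) - (ecdf(rt_color, t) + ecdf(rt_orient, t))
race_model_violated = bool(np.any(violation > 0))  # evidence for coactivation if True
```

A positive value of `violation` at any t means the redundant-target RTs are faster than any race between independent single-dimension detections could produce, which is the signature of parallel-coactive processing.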

Journal ArticleDOI
TL;DR: Stimulus-driven capture was observed when color was neither the defining nor the reported target attribute and when subjects naive to visual search tasks were used, providing evidence supporting the shift hypothesis and giving experimental support to many contemporary models of visual attention.
Abstract: The aim of the present study was to investigate mechanisms underlying attentional capture by color. Previous work has shown that a color singleton is able to summon attention only in the presence of a relevant attentional set, whereas when a color singleton is not useful for a task, evidence for purely stimulus-driven attentional capture is controversial. Three visual search experiments (T-L task) were conducted using a method different from that based on set sizes, consisting of monitoring target-singleton distance in a unique display size. In Experiment 1, we demonstrated that attention can be summoned in a real stimulus-driven manner by an irrelevant color singleton. Experiment 2A extended this observation, showing that the color singleton attracted attention even when capture was detrimental. However, Experiment 2B showed that such capture can be strategically prevented. Finally, in Experiment 3, we examined whether such a capture was due to a spatial shift or to a filtering cost, providing evidence supporting the shift hypothesis. Stimulus-driven capture was observed when color was neither the defining nor the reported target attribute (Yantis, 1993) and when subjects naive to visual search tasks were used. The present results give experimental support to many contemporary models of visual attention.

Journal ArticleDOI
TL;DR: It is demonstrated that search asymmetry and search efficiency in the U-F condition are influenced by the presence of low-level feature differences between the familiar and the unfamiliar stimuli, and suggested that the familiarity of the distractors, rather than the familiarity difference between the target and the distractors, determines search efficiency.
Abstract: Wang, Cavanagh, and Green (1994) demonstrated a pop-out effect in searching for an unfamiliar target among familiar distractors (U-F search) and argued for the importance of a familiarity difference between the target and the distractors in determining search efficiency. In four experiments, we explored the generality of that finding. Experiment 1 compared search efficiency across a variety of target-distractor pairs. In Experiments 2, 3, and 4, we used Chinese characters and their transforms as targets and distractors and compared search performance between Chinese and non-Chinese participants. We demonstrated that search asymmetry and search efficiency in the U-F condition are influenced by the presence of low-level feature differences between the familiar and the unfamiliar stimuli. Our findings suggest that the familiarity of the distractors, rather than the familiarity difference between the target and the distractors, determines search efficiency. We also documented a counterintuitive familiarity-inferiority effect, suggesting that knowledge of search stimuli may, sometimes, be detrimental to search performance.

Journal ArticleDOI
TL;DR: It is argued that existing theories are helpful in understanding these findings but that they need to be supplemented to account for the specific features that specify categories and to account for subjects’ ability to quickly locate targets representing heterogeneous and formally complex categories.
Abstract: In this report, we explored the features that support visual search for broadly inclusive natural categories. We used a paradigm in which subjects searched for a randomly selected target from one category (e.g., one of 32 line drawings of artifacts or animals in displays ranging from three to nine items) among a mixed set of distractors from the other. We found that search was surprisingly fast. Target-present slopes for animal targets among artifacts ranged from 10.8 to 16.0 msec/item, and slopes for artifact targets ranged from 5.5 to 6.2 msec/item. Experiments 2–5 tested factors that affect both the speed of the search and the search asymmetry favoring detection of artifacts among animals. They converge on the conclusion that target-distractor differences in global contour shape (e.g., rectilinearity/curvilinearity) and visual typicality of parts and form facilitate search by category. We argue that existing theories are helpful in understanding these findings but that they need to be supplemented to account for the specific features that specify categories and to account for subjects’ ability to quickly locate targets representing heterogeneous and formally complex categories.
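The search-efficiency figures quoted in this abstract (e.g., 10.8 to 16.0 msec/item) are slopes of mean reaction time regressed on display size, the standard index of visual search efficiency. A minimal least-squares sketch of that computation (the function name and data values below are illustrative, not taken from the paper):

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope (msec/item) of mean RT against display size.

    Shallow slopes indicate efficient (near-parallel) search; steep
    slopes indicate inefficient (serial-like) search.
    """
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    # slope = covariance(set size, RT) / variance(set size)
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

# Hypothetical data: displays of 3, 6, and 9 items with mean RTs in msec.
slope = search_slope([3, 6, 9], [530, 560, 590])  # 10.0 msec/item
```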

Journal ArticleDOI
TL;DR: The results provide additional support to Luck and Vogel’s (1997) demonstration that integrated objects form the units of VSTM capacity.
Abstract: We investigated whether the capacity of visual short-term memory (VSTM) is defined by number of objects or number of spatial locations. Previous work is consistent with either alternative. To distinguish these factors, we used overlapping stimuli that allowed us to independently manipulate the number of spatial locations while holding constant the number of objects and features to be encoded (Duncan, 1984; Vecera & Farah, 1994). In Experiment 1, the number of spatial locations had no effect on VSTM, suggesting that VSTM is object based. Experiments 2 and 3 ruled out alternative explanations based on perceptual segregation difficulty or decision noise factors. Our results provide additional support to Luck and Vogel's (1997) demonstration that integrated objects form the units of VSTM capacity.

Journal ArticleDOI
TL;DR: The results indicate that the ML-PEST method gives reliable and precise threshold measurements and its ability to detect malingerers shows considerable promise.
Abstract: This paper evaluates the use of a maximum-likelihood adaptive staircase psychophysical procedure (ML-PEST), originally developed in vision and audition, for measuring detection thresholds in gustation and olfaction. The basis for the psychophysical measurement of thresholds with the ML-PEST procedure is developed. Then, two experiments and four simulations are reported. In the first experiment, ML-PEST was compared with the Wetherill and Levitt up-down staircase method and with the Cain ascending method of limits in the measurement of butyl alcohol thresholds. The four Monte Carlo simulations compared the three psychophysical procedures. In the second experiment, the test-retest reliability of ML-PEST for measuring NaCl and butyl alcohol thresholds was assessed. The results indicate that the ML-PEST method gives reliable and precise threshold measurements. Its ability to detect malingerers shows considerable promise. It is recommended for use in clinical testing.
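The core of a maximum-likelihood adaptive staircase of the ML-PEST family can be sketched as follows: after every trial, the likelihood of each candidate threshold is updated from the observer's binary response, and the next stimulus is placed at the current maximum-likelihood threshold. This is an illustrative sketch, not the authors' implementation; the logistic parameterization, candidate grid, and guess/lapse values are assumptions.

```python
import math

def logistic(x, threshold, slope=10.0, guess=0.5, lapse=0.02):
    """Assumed psychometric function: P(correct | stimulus level x)."""
    p = 1.0 / (1.0 + math.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * p

def ml_staircase(respond, levels, n_trials=100):
    """Maximum-likelihood staircase over a grid of candidate thresholds.

    respond(x) presents level x and returns True for a correct response.
    Each trial is placed at the current ML threshold estimate.
    """
    log_lik = {t: 0.0 for t in levels}       # log-likelihood per candidate
    x = levels[len(levels) // 2]             # start mid-range
    for _ in range(n_trials):
        correct = respond(x)
        for t in levels:
            p = logistic(x, t)
            log_lik[t] += math.log(p if correct else 1.0 - p)
        x = max(levels, key=log_lik.get)     # next level = ML estimate
    return x
```

In practice such procedures add stopping rules based on the confidence interval of the estimate; the fixed trial count here is a simplification.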

Journal ArticleDOI
Jacob Feldman1
TL;DR: The Bayesian theory provides a far more quantitatively precise account of human contour integration than has been previously possible, allowing a very precise calculation of the subjective goodness of a virtual chain of dots.
Abstract: The process by which the human visual system parses an image into contours, surfaces, and objects—perceptual grouping—has proven difficult to capture in a rigorous and general theory. A natural candidate for such a theory is Bayesian probability theory, which provides optimal interpretations of data under conditions of uncertainty. But the fit of Bayesian theory to human grouping judgments has never been tested, in part because methods for expressing grouping hypotheses probabilistically have not been available. This paper presents such methods for the case of contour integration—that is, the aggregation of a sequence of visual items into a “virtual curve.” Two experiments are reported in which human subjects were asked to group ambiguous configurations of dots (in Experiment 1, a sequence of five dots could be judged to contain a “corner” or not; in Experiment 2, an arrangement of six dots could be judged to fall into two disjoint contours or one smooth contour). The Bayesian theory accounts extremely well for subjects’ judgments, explaining more than 75% of the variance in both tasks. The theory thus provides a far more quantitatively precise account of human contour integration than has been previously possible, allowing a very precise calculation of the subjective goodness of a virtual chain of dots. Because Bayesian theory is inferentially optimal, this finding suggests a “rational justification,” and hence possibly an evolutionary rationale, for some of the rules of perceptual grouping.
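The flavor of such a Bayesian grouping computation can be conveyed by a toy model for the corner/no-corner judgment: compare the likelihood of the chain's turning angles under a "smooth contour" hypothesis (all angles near zero) against a "corner" hypothesis (one angle drawn from a much broader distribution). This is an illustrative sketch only, not Feldman's actual formulation; the Gaussian likelihoods, sigma values, and prior are made-up assumptions.

```python
import math

def log_gauss(x, mu, sigma):
    """Log density of a Gaussian; used as the angle likelihood."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def posterior_smooth(angles, sigma_smooth=10.0, sigma_corner=45.0, prior_smooth=0.5):
    """Posterior probability that a dot chain is one smooth contour.

    angles: turning angles (degrees) at the chain's interior dots.
    Smooth hypothesis: every angle ~ N(0, sigma_smooth).
    Corner hypothesis: the largest angle ~ N(0, sigma_corner), rest smooth.
    """
    ll_smooth = sum(log_gauss(a, 0.0, sigma_smooth) for a in angles)
    k = max(range(len(angles)), key=lambda i: abs(angles[i]))
    ll_corner = sum(log_gauss(a, 0.0, sigma_smooth)
                    for i, a in enumerate(angles) if i != k)
    ll_corner += log_gauss(angles[k], 0.0, sigma_corner)
    # Bayes' rule over the two hypotheses
    num = math.exp(ll_smooth) * prior_smooth
    den = num + math.exp(ll_corner) * (1.0 - prior_smooth)
    return num / den
```

A chain of small turning angles yields a posterior above 0.5 (judged smooth), while a single large angle pushes the posterior toward the corner interpretation.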

Journal ArticleDOI
TL;DR: Two experiments are reported here that show that the mere planning of a rotational hand movement is sufficient to cause interference with mental object rotation, and underlines the idea thatmental object rotation is an imagined (covert) action, rather than a pure visual-spatial imagery task, and that the interference between mentalobject rotation and rotationalhand movements is an interference between goals of actions.
Abstract: Recently, we showed that the simultaneous execution of rotational hand movements interferes with mental object rotation, provided that the axes of rotation coincide in space. We hypothesized that mental object rotation and the programming of rotational hand movements share a common process, presumably involved in action planning. Two experiments are reported here that show that the mere planning of a rotational hand movement is sufficient to cause interference with mental object rotation. Subjects had to plan different spatially directed hand movements that they were asked to execute only after they had solved a mental object rotation task. Experiment 1 showed that mental object rotation was slower if hand movements were planned in a direction opposite to the presumed mental rotation direction, but only if the axes of hand rotation and mental object rotation were parallel in space. Experiment 2 showed that this interference occurred independent of the preparatory hand movements observed in Experiment 1. Thus, it is the planning of hand movements and not their preparation or execution that interferes with mental object rotation. This finding underlines the idea that mental object rotation is an imagined (covert) action, rather than a pure visual-spatial imagery task, and that the interference between mental object rotation and rotational hand movements is an interference between goals of actions.

Journal ArticleDOI
TL;DR: The results suggest that the haptic oblique effect is not purely gravitationally or egocentrically defined but, rather, depends on a subjective gravitational reference frame that is tilted in a direction opposite to that of the head in tilted postures.
Abstract: The aim of this study was to examine the effect of body and head tilts on the haptic oblique effect. This effect reflects the more accurate processing of vertical and horizontal orientations, relative to oblique orientations. Body or head tilts lead to a mismatch between egocentric and gravitational axes and indicate whether the haptic oblique effect is defined in an egocentric or a gravitational reference frame. The ability to reproduce principal (vertical and horizontal) and oblique orientations was studied in upright and tilted postures. Moreover, by controlling the deviation of the haptic subjective vertical provoked by postural tilt, the possible role of a subjective gravitational reference frame was tested. Results showed that the haptic reproduction of orientations was strongly affected by both the position of the body (Experiment 1) and the position of the head (Experiment 2). In particular, the classical haptic oblique effect observed in the upright posture disappeared in tilted conditions, mainly because of a decrease in the accuracy of the vertical and horizontal settings. The subjective vertical appeared to be the orientation reproduced the most accurately. These results suggest that the haptic oblique effect is not purely gravitationally or egocentrically defined but, rather, depends on a subjective gravitational reference frame that is tilted in a direction opposite to that of the head in tilted postures (Experiment 3).

Journal ArticleDOI
TL;DR: Alternative, decision-level explanations of the spatial cuing effect are examined that attribute evidence of capture to postpresentation delays in the voluntary allocation of attention, rather than to on-line involuntary shifts in direct response to the cue.
Abstract: Under certain circumstances, external stimuli will elicit an involuntary shift of spatial attention, referred to as attentional capture. According to the contingent involuntary orienting account (Folk, Remington, & Johnston, 1992), capture is conditioned by top-down factors that set attention to respond involuntarily to stimulus properties relevant to one's behavioral goals. Evidence for this comes from spatial cuing studies showing that a spatial cuing effect is observed only when cues have goal-relevant properties. Here, we examine alternative, decision-level explanations of the spatial cuing effect that attribute evidence of capture to postpresentation delays in the voluntary allocation of attention, rather than to on-line involuntary shifts in direct response to the cue. In three spatial cuing experiments, delayed-allocation accounts were tested by examining whether items at the cued location were preferentially processed. The experiments provide evidence that costs and benefits in spatial cuing experiments do reflect the on-line capture of attention. The implications of these results for models of attentional control are discussed.

Journal ArticleDOI
TL;DR: It is found that perceived distance of the cube varies appropriately as the perceived location of contact between the platform and the ground varies; variability increases systematically as the relating surfaces move apart; and certain local edge alignments allow precise propagation of distance information.
Abstract: In complex natural scenes, objects at different spatial locations can usually be related to each other through nested contact relations among adjoining surfaces. Our research asks how well human observers, under monocular static viewing conditions, are able to utilize this information in distance perception. We present computer-generated naturalistic scenes of a cube resting on a platform, which is in turn resting on the ground. Observers adjust the location of a marker on the ground to equal the perceived distance of the cube. We find that (1) perceived distance of the cube varies appropriately as the perceived location of contact between the platform and the ground varies; (2) variability increases systematically as the relating surfaces move apart; and (3) certain local edge alignments allow precise propagation of distance information. These results demonstrate considerable efficiency in the mediation of distance perception through nested contact relations among surfaces.