
Showing papers in "Attention Perception & Psychophysics in 2004"


Journal ArticleDOI
TL;DR: The results support the view that attentional orienting underlies distortions in perceived duration, and show that TSE in the visual domain can occur because of semantic novelty, rather than image novelty per se.
Abstract: During brief, dangerous events, such as car accidents and robberies, many people report that events seem to pass in slow motion, as if time had slowed down. We have measured a similar, although less dramatic, effect in response to unexpected, nonthreatening events. We attribute the subjective expansion of time to the engagement of attention and its influence on the amount of perceptual information processed. We term the effect time's subjective expansion (TSE) and examine here the objective temporal dynamics of these distortions. When a series of stimuli is shown in succession, the low-probability oddball stimulus in the series tends to last subjectively longer than the high-probability stimulus, even when the two have the same objective duration. In particular, (1) there is a latency of at least 120 msec between stimulus onset and the onset of TSE, which may be preceded by subjective temporal contraction; (2) there is a peak in TSE, at which subjective time is particularly distorted, at a latency of 225 msec after stimulus onset; and (3) the temporal dynamics of TSE are approximately the same in the visual and the auditory domains. Two control experiments (in which the methods of magnitude estimation and stimulus reproduction were used) replicated the temporal dynamics of TSE revealed by the method of constant stimuli, although the initial peak was not apparent with these methods. In addition, a third control experiment (in which the method of single stimuli was used) showed that TSE in the visual domain can occur because of semantic novelty, rather than image novelty per se. Overall, the results support the view that attentional orienting underlies distortions in perceived duration.
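With the method of constant stimuli, TSE shows up as a shift in the point of subjective equality (PSE). A minimal sketch of that analysis, assuming a logistic psychometric function and entirely hypothetical numbers (the 500-msec standard, durations, and proportions are illustrative, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, pse, slope):
    """Psychometric function: P('oddball seemed longer') vs. oddball duration."""
    return 1.0 / (1.0 + np.exp(-(d - pse) / slope))

# Hypothetical constant-stimuli data: oddball durations (msec) and the
# proportion of trials on which the oddball seemed to outlast a
# 500-msec standard.
durations = np.array([350, 400, 450, 500, 550, 600, 650])
p_longer = np.array([0.05, 0.15, 0.35, 0.62, 0.85, 0.94, 0.98])

(pse, slope), _ = curve_fit(logistic, durations, p_longer, p0=[500, 50])
# A PSE below the 500-msec standard means the oddball is subjectively
# expanded: a physically shorter oddball already feels equal in duration.
print(f"PSE = {pse:.0f} msec; implied TSE = {500 - pse:.0f} msec")
```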

429 citations


Journal ArticleDOI
TL;DR: Manual reaction times to visual, auditory, and tactile stimuli presented simultaneously, or with a delay, were measured to test for multisensory interaction effects in a simple detection task with redundant signals and showed that response enhancement increased with decreasing auditory and tactile stimulus intensity.
Abstract: Manual reaction times to visual, auditory, and tactile stimuli presented simultaneously, or with a delay, were measured to test for multisensory interaction effects in a simple detection task with redundant signals. Responses to trimodal stimulus combinations were faster than those to bimodal combinations, which in turn were faster than reactions to unimodal stimuli. Response enhancement increased with decreasing auditory and tactile stimulus intensity and was a U-shaped function of stimulus onset asynchrony. Distribution inequality tests indicated that the multisensory interaction effects were larger than predicted by separate activation models, including the difference between bimodal and trimodal response facilitation. The results are discussed with respect to previous findings in a focused attention task and are compared with multisensory integration rules observed in bimodal and trimodal superior colliculus neurons in the cat and monkey.
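The "distribution inequality tests" referred to here are tests of a race model bound in the spirit of Miller's (1982) inequality, under which the bimodal RT distribution can be no faster than the sum of the unimodal distributions. A minimal sketch with hypothetical RT samples (not the authors' data or exact procedure):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of a sample of reaction times, evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

rng = np.random.default_rng(1)
# Hypothetical RT samples (msec) for two unimodal conditions and their
# bimodal combination.
rt_v = rng.normal(280, 40, 500)
rt_a = rng.normal(260, 35, 500)
rt_va = rng.normal(225, 30, 500)

t = np.arange(150, 400, 5)
# Miller's (1982) race model inequality: F_VA(t) <= F_V(t) + F_A(t).
# Where the bimodal CDF exceeds this bound, separate-activation (race)
# models are ruled out in favor of coactivation.
bound = np.minimum(ecdf(rt_v, t) + ecdf(rt_a, t), 1.0)
print("violations at t =", t[ecdf(rt_va, t) > bound])
```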

345 citations


Journal ArticleDOI
TL;DR: The ability to localize vibratory stimuli was examined at sites around the abdomen and found to be a function of separation among loci and, most significantly, of place on the trunk.
Abstract: In this study, we explore the conditions for accurate localization of vibrotactile stimuli presented to the abdomen. Tactile orientation systems intended to provide mobility information for people who are blind depend on accurate identification of location of stimuli on the skin, as do systems designed to indicate target positions in space or the status of remotely operated devices to pilots or engineers. The spatial acuity of the skin has been examined for simple touch, but not for the types of vibrating signals used in such devices. The ability to localize vibratory stimuli was examined at sites around the abdomen and found to be a function of separation among loci and, most significantly, of place on the trunk. Neither the structures underlying the skin nor the types of tactor tested appeared to affect localization. Evidence was found for anatomically defined anchor points that provide localization referents that enhance performance even with wide target spacing.

264 citations


Journal ArticleDOI
TL;DR: Two experiments involving comparisons of saccadic and manual parity judgment tasks clearly support the first view; they also establish a vertical SNARC effect, suggesting that the magnitude representation resembles a number map, rather than a number line.
Abstract: Bimanual parity judgments about numerically small (large) digits are faster with the left (right) hand, even though parity is unrelated to numerical magnitude per se (the SNARC effect; Dehaene, Bossini, & Giraux, 1993). According to one model, this effect reflects a space-related representation of numerical magnitudes (mental number line) with a genuine left-to-right orientation. Alternatively, it may simply reflect an overlearned motor association between numbers and manual responses—as, for example, on typewriters or computer keyboards—in which case it should be weaker or absent with effectors whose horizontal response component is less systematically associated with individual numbers. Two experiments involving comparisons of saccadic and manual parity judgment tasks clearly support the first view; they also establish a vertical SNARC effect, suggesting that our magnitude representation resembles a number map, rather than a number line.
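The SNARC effect is conventionally quantified by regressing the right-minus-left RT difference on digit magnitude (cf. Fias, Brysbaert, Geypens, & d'Ydewalle, 1996); a negative slope marks the effect. A sketch with hypothetical per-digit mean RTs:

```python
import numpy as np

# Hypothetical mean RTs (msec) per digit for left- and right-hand responses.
digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left = np.array([520, 525, 530, 535, 545, 550, 555, 560])
rt_right = np.array([545, 542, 538, 536, 528, 524, 520, 516])

# Regress the right-minus-left RT difference on magnitude; a negative
# slope is the SNARC signature (small numbers favor the left effector).
drt = rt_right - rt_left
slope, intercept = np.polyfit(digits, drt, 1)
print(f"SNARC slope: {slope:.2f} msec per unit of magnitude")
```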

250 citations


Journal ArticleDOI
TL;DR: Imitation of shadowed words was evaluated using Goldinger’s (1998) AXB paradigm and revealed shadowed words to be better imitations of target tokens than baseline, without an influence of AXB presentation order.
Abstract: Imitation of shadowed words was evaluated using Goldinger’s (1998) AXB paradigm. The first experiment was a replication of Goldinger’s experiments with different tokens. Experiment 1’s AXB tests showed that shadowed words were judged to be better imitations of target words than were baseline (read) counterparts more often than chance (.50). Order of presentation of baseline and shadowed words in the AXB test also significantly influenced judgments. Degree of prior exposure to token words did not significantly influence judgments of imitation. Experiment 2 employed modified target tokens with extended voice onset times (VOTs). In addition to AXB tests, VOTs of response tokens were compared across baseline and shadowing conditions. The AXB tests revealed shadowed words to be better imitations of target tokens than baseline, without an influence of AXB presentation order. Differences between baseline and shadowing VOTs were greater when VOTs were extended. The implications of spontaneous imitation in nonsocial settings are considered.

208 citations


Journal ArticleDOI
TL;DR: Endogenous temporal-orienting effects were studied using a cuing paradigm in which the cue indicated the time interval during which the target was most likely to appear, and they were larger when temporal expectancy was manipulated between blocks, rather than within blocks.
Abstract: Endogenous temporal-orienting effects were studied using a cuing paradigm in which the cue indicated the time interval during which the target was most likely to appear. Temporal-orienting effects were defined by lower reaction times (RTs) when there was a match between the temporal expectancy for a target (early or late) and the time interval during which the target actually appeared than when they mismatched. Temporal-orienting effects were found for both early and late expectancies with a detection task in Experiment 1. However, catch trials were decisive in whether temporal-orienting effects were observed in the early-expectancy condition. No temporal-orienting effects were found in the discrimination task. In Experiments 2A and 2B, temporal-orienting effects were observed in the discrimination task; however, they were larger when temporal expectancy was manipulated between blocks, rather than within blocks.

184 citations


Journal ArticleDOI
TL;DR: The results indicate that the four-interval task made it difficult for listeners to use phonetic information and, hence, that categorical perception may be a function of the type of task used for discrimination.
Abstract: Speech sounds are said to be perceived categorically. This notion is usually operationalized as the extent to which discrimination of stimuli is predictable from phoneme classification of the same stimuli. In this article, vowel continua were presented to listeners in a four-interval discrimination task (2IFC with flankers, or 4I2AFC) and a classification task. The results showed that there was no indication of categorical perception at all, since observed discrimination was found not to be predictable from the classification data. Variation in design, such as different step sizes or longer interstimulus intervals, did not affect this outcome, but a 2IFC experiment (without flankers, or 2I2AFC) involving the same stimuli elicited the traditional categorical results. These results indicate that the four-interval task made it difficult for listeners to use phonetic information and, hence, that categorical perception may be a function of the type of task used for discrimination.
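Operationalizing categorical perception requires a prediction of discrimination from classification. One classic prediction is the Haskins formula for ABX-type tasks; the prediction for the 2IFC and 4I2AFC tasks used here takes a different form, so the sketch below, with hypothetical labeling proportions, is illustrative only:

```python
import numpy as np

# Hypothetical proportion of 'category A' labels along a 7-step continuum.
p_label = np.array([0.98, 0.95, 0.90, 0.55, 0.12, 0.05, 0.02])

# Haskins-style prediction for one-step ABX pairs:
#   P(correct) = 0.5 + 0.5 * (p_i - p_j)**2
# Perception is 'categorical' to the extent that observed discrimination
# tracks these predictions, peaking at the category boundary.
p_pred = 0.5 + 0.5 * (p_label[:-1] - p_label[1:]) ** 2
for i, p in enumerate(p_pred, start=1):
    print(f"pair {i}-{i + 1}: predicted P(correct) = {p:.3f}")
```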

180 citations


Journal ArticleDOI
TL;DR: It is concluded that an image-based mechanism is responsible for the influence of head profile on gaze perception, whereas the analysis of nose angle involves the configural processing of face features.
Abstract: We report seven experiments that investigate the influence that head orientation exerts on the perception of eye-gaze direction. In each of these experiments, participants were asked to decide whether the eyes in a brief and masked presentation were looking directly at them or were averted. In each case, the eyes could be presented alone, or in the context of congruent or incongruent stimuli. In Experiment 1A, the congruent and incongruent stimuli were provided by the orientation of face features and head outline. Discrimination of gaze direction was found to be better when face and gaze were congruent than in both of the other conditions, an effect that was not eliminated by inversion of the stimuli (Experiment 1B). In Experiment 2A, the internal face features were removed, but the outline of the head profile was found to produce an identical pattern of effects on gaze discrimination, effects that were again insensitive to inversion (Experiment 2B) and which persisted when lateral displacement of the eyes was controlled (Experiment 2C). Finally, in Experiment 3A, nose angle was also found to influence participants’ ability to discriminate direct gaze from averted gaze, but here the effect was eliminated by inversion of the stimuli (Experiment 3B). We concluded that an image-based mechanism is responsible for the influence of head profile on gaze perception, whereas the analysis of nose angle involves the configural processing of face features.

172 citations


Journal ArticleDOI
TL;DR: Empirical support is provided for change blindness resulting from the failure to compare retained representations of both the pre- and postchange information: even when unaware of changes, observers still retained information about both the pre- and postchange objects on the same trial.
Abstract: Change blindness, the failure to detect visual changes that occur during a disruption, has increasingly been used to infer the nature of internal representations. If every change were detected, detailed representations of the world would have to be stored and accessible. However, because many changes are not detected, visual representations might not be complete, and access to them might be limited. Using change detection to infer the completeness of visual representations requires an understanding of the reasons for change blindness. This article provides empirical support for one such reason: change blindness resulting from the failure to compare retained representations of both the pre- and postchange information. Even when unaware of changes, observers still retained information about both the pre- and postchange objects on the same trial.

171 citations


Journal ArticleDOI
TL;DR: The authors found that attention and saccade programming are tightly coupled: attention always shifts to the intended location before the eyes begin to move, and when an involuntary saccade is made, attention first precedes the eyes to the unintended location and then switches to the intended location, with the eyes following this pattern a short time later.
Abstract: There is considerable evidence that covert visual attention precedes voluntary eye movements to an intended location. What happens to covert attention when an involuntary saccadic eye movement is made? In agreement with other researchers, we found that attention and voluntary eye movements are tightly coupled in such a way that attention always shifts to the intended location before the eyes begin to move. However, we found that when an involuntary eye movement is made, attention first precedes the eyes to the unintended location and then switches to the intended location, with the eyes following this pattern a short time later. These results support the notion that attention and saccade programming are tightly coupled.

158 citations


Journal ArticleDOI
TL;DR: This review suggests that negative and/or nonmonotonic relationships are found, providing strong evidence for unconscious perception and further suggesting that conscious and unconscious perceptual influences are functionally exclusive.
Abstract: Unconscious perceptual effects remain controversial because it is hard to rule out alternative conscious perception explanations for them. We present a novel methodological framework, stressing the centrality of specifying the single-process conscious perception model (i.e., the null hypothesis). Various considerations, including those of SDT (Macmillan & Creelman, 1991), suggest that conscious perception functions hierarchically, in such a way that higher level effects (e.g., semantic priming) should not be possible without lower level discrimination (i.e., detection and identification). Relatedly, alternative conscious perception accounts (as well as the exhaustiveness, null sensitivity, and exclusiveness problems—Reingold & Merikle, 1988, 1990) predict positive relationships between direct and indirect measures. Contrariwise, our review suggests that negative and/or nonmonotonic relationships are found, providing strong evidence for unconscious perception and further suggesting that conscious and unconscious perceptual influences are functionally exclusive (cf. Jones, 1987), in such a way that the former typically override the latter when both are present. Consequently, unconscious perceptual effects manifest reliably only when conscious perception is completely absent, which occurs at the objective detection (but not identification) threshold.
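The objective detection threshold is the stimulus level at which SDT sensitivity is null, i.e., d' = z(H) − z(F) does not differ from zero. A minimal sketch of that computation with hypothetical counts (the log-linear correction is a common convention, not necessarily the authors'):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """SDT sensitivity, d' = z(H) - z(F), with a log-linear correction
    so that hit or false-alarm rates of 0 or 1 do not produce
    infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

# Hypothetical detection data at the objective threshold: d' ~ 0.
print(d_prime(hits=52, misses=48, false_alarms=50, correct_rejections=50))
```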

Journal ArticleDOI
TL;DR: Both individual target-distractor associations and configural associations are learned in contextual cuing: transfer to recombined displays shows that individual distractor locations can cue the target, and transfer to rescaled, displaced, and regrouped displays shows that the relative locations among items are also learned.
Abstract: With the use of spatial contextual cuing, we tested whether subjects learned to associate target locations with overall configurations of distractors or with individual locations of distractors. In Experiment 1, subjects were trained on 36 visual search displays that contained 36 sets of distractor locations and 18 target locations. Each target location was paired with two sets of distractor locations on separate trials. After training, the subjects showed perfect transfer to recombined displays, which were created by combining half of one trained distractor set with half of another trained distractor set. This result suggests that individual distractor locations were sufficient to cue the target location. In Experiment 2, the subjects showed good transfer from trained displays to rescaled, displaced, and perceptually regrouped displays, suggesting that the relative locations among items were also learned. Thus, both individual target-distractor associations and configural associations are learned in contextual cuing.

Journal ArticleDOI
TL;DR: The results suggest that vision and touch have functionally overlapping, but not necessarily equivalent, representations of 3-D shape.
Abstract: In this study, we evaluated observers’ ability to compare naturally shaped three-dimensional (3-D) objects, using their senses of vision and touch. In one experiment, the observers haptically manipulated 1 object and then indicated which of 12 visible objects possessed the same shape. In the second experiment, pairs of objects were presented, and the observers indicated whether their 3-D shape was the same or different. The 2 objects were presented either unimodally (vision-vision or haptic-haptic) or cross-modally (vision-haptic or haptic-vision). In both experiments, the observers were able to compare 3-D shape across modalities with reasonably high levels of accuracy. In Experiment 1, for example, the observers’ matching performance rose to 72% correct (chance performance was 8.3%) after five experimental sessions. In Experiment 2, small (but significant) differences in performance were obtained between the unimodal vision-vision condition and the two cross-modal conditions. Taken together, the results suggest that vision and touch have functionally overlapping, but not necessarily equivalent, representations of 3-D shape.

Journal ArticleDOI
TL;DR: The results are related to the design of haptic interfaces for teleoperation and virtual environments, which share some of the same reductions of sensory cues that were produced experimentally.
Abstract: In the present article, we will consider how well people can haptically identify common objects when manual exploration is constrained. The constraints imposed in this study were produced by imposing a rigid link between the skin and the object, in the form of a sheath over the finger or a probe held in the hand. Any constraint that forms an intermediate barrier between skin and objects can be described as producing remote (or indirect) perception. Such intermediaries serve to constrain haptic exploration by reducing the cutaneous and/or kinesthetic inputs available. In earlier work (Klatzky, Loomis, Lederman, Wake, & Fujita, 1993), manual exploration has been constrained in several other ways that did not involve using a rigid link. In this article, we integrate the results of the earlier Klatzky et al. (1993) study with those of the present study to further our understanding of haptic object identification by direct and remote touch. In addition, we relate the results to the design of haptic interfaces for teleoperation and virtual environments, which share some of the same cue reductions that we have produced experimentally. Our article arises from two separate but related themes in previous research on haptic perception.

Journal ArticleDOI
TL;DR: The results suggest that tactile, spatially selective attention can operate according to an abstract spatial frame of reference, which is significantly modulated by multisensory contributions from both proprioception and vision.
Abstract: This study addressed the role of proprioceptive and visual cues to body posture during the deployment of tactile spatial attention. Participants made speeded elevation judgments (up vs. down) to vibrotactile targets presented to the finger or thumb of either hand, while attempting to ignore vibrotactile distractors presented to the opposite hand. The first two experiments established the validity of this paradigm and showed that congruency effects were stronger when the target hand was uncertain (Experiment 1) than when it was certain (Experiment 2). Varying the orientation of the hands revealed that these congruency effects were determined by the position of the target and distractor in external space, and not by the particular skin sites stimulated (Experiment 3). Congruency effects increased as the hands were brought closer together in the dark (Experiment 4), demonstrating the role of proprioceptive input in modulating tactile selective attention. This spatial modulation was also demonstrated when a mirror was used to alter the visually perceived separation between the hands (Experiment 5). These results suggest that tactile, spatially selective attention can operate according to an abstract spatial frame of reference, which is significantly modulated by multisensory contributions from both proprioception and vision.

Journal ArticleDOI
TL;DR: Healthy aging altered the deployment of attentional scaling: the benefit of valid precues on search initially (in participants 65-74 years of age) was increased but later (in participants 75-85 years of age) was reduced, providing evidence that cue size effects are attentional, not strategic.
Abstract: A model of visual search (Greenwood & Parasuraman, 1999) postulating that visuospatial attention is composed of two processing components—shifting and scaling of a variable-gradient attentional focus—was tested in three experiments. Whereas young participants are able to dynamically constrict or expand the focus of visuospatial attention on the basis of prior information, in healthy aging individuals visuospatial attention becomes a poorly focused beam, unable to be constricted around one array element. In the present work, we sought to examine predictions of this view in healthy young and older participants. An attentional focus constricted in response to an element-sized precue had the strongest facilitatory effect on visual search. However, this was true only when the precue correctly indicated the location of a target fixed in size. When precues incorrectly indicated target location or when target size varied, the optimal spatial scale of attention for search was larger, encompassing a number of array elements. Healthy aging altered the deployment of attentional scaling: The benefit of valid precues on search initially (in participants 65-74 years of age) was increased but later (in those 75-85 years of age) was reduced. The results also provided evidence that cue size effects are attentional, not strategic. This evidence is consistent with the proposed model of attentional scaling in visual search.

Journal ArticleDOI
TL;DR: Correlational evidence suggested that cognitive resources were significant factors in accounting for age-related decline in path integration performance.
Abstract: In a triangle completion task designed to assess path integration skill, younger and older adults performed similarly after being led, while blindfolded, along the route segments on foot, which provided both kinesthetic and vestibular information about the outbound path. In contrast, older adults’ performance was impaired, relative to that of younger adults, after they were conveyed, while blindfolded, along the route segments in a wheelchair, which limited them principally to vestibular information. Correlational evidence suggested that cognitive resources were significant factors in accounting for age-related decline in path integration performance.
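In a triangle completion task, the correct response is the homeward vector implied by the two outbound legs and the turn between them. A small geometric sketch of that computation (hypothetical leg lengths and turn angle; a 3-4-5 triangle so the answer is easy to check):

```python
import numpy as np

def homeward_vector(leg1, turn_deg, leg2):
    """Correct response in a triangle completion task: after walking leg1,
    turning by turn_deg, and walking leg2, return the distance and compass
    bearing back to the origin (x = east, y = north, 0 deg = north)."""
    pos = np.array([0.0, leg1])                 # first leg, due north
    heading = np.radians(turn_deg)              # turn at the corner
    pos += leg2 * np.array([np.sin(heading), np.cos(heading)])
    distance = np.linalg.norm(pos)
    bearing_home = np.degrees(np.arctan2(-pos[0], -pos[1])) % 360
    return distance, bearing_home

# 3-4-5 triangle as a check: the homeward distance should be 5.
print(homeward_vector(leg1=4.0, turn_deg=90.0, leg2=3.0))
```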

Journal ArticleDOI
TL;DR: In detecting location changes, subjects were unable to ignore changes in orientation unless additional, invariant grouping cues were provided or unless the items changing orientation could be actively ignored using feature-based attention (color cues).
Abstract: Detection of an item’s changing of its location from one instance to another is typically unaffected by changes in the shape or color of contextual items. However, we demonstrate here that such location change detection is severely impaired if the elongated axes of contextual items change orientation, even though individual locations remain constant and even though the orientation was irrelevant to the task. Changing the orientations of the elongated stimuli altered the perceptual organization of the display, which had an important influence on change detection. In detecting location changes, subjects were unable to ignore changes in orientation unless additional, invariant grouping cues were provided or unless the items changing orientation could be actively ignored using feature-based attention (color cues). Our results suggest that some relational grouping cues are represented in change detection even when they are task irrelevant.

Journal ArticleDOI
TL;DR: A probe dot procedure was used to examine the time course of attention in preview search and it was demonstrated that detection on old items was facilitated when the probes appeared 200 msec after previews, whereas there was worse detection on old items when the probes followed 800 msec after previews.
Abstract: We used a probe dot procedure to examine the time course of attention in preview search (Watson & Humphreys, 1997). Participants searched for an outline red vertical bar among other new red horizontal bars and old green vertical bars, superimposed on a blue background grid. Following the reaction time response for search, the participants had to decide whether a probe dot had briefly been presented. Previews appeared for 1,000 msec and were immediately followed by search displays. In Experiment 1, we demonstrated a standard preview benefit relative to a conjunction search baseline. In Experiment 2, search was combined with the probe task. Probes were more difficult to detect when they were presented 1,200 msec, relative to 800 msec, after the preview, but at both intervals detection of probes at the locations of old distractors was harder than detection on new distractors or at neutral locations. Experiment 3A demonstrated that there was no difference in the detection of probes at old, neutral, and new locations when probe detection was the primary task and there was also no difference when all of the shapes appeared simultaneously in conjunction search (Experiment 3B). In a final experiment (Experiment 4), we demonstrated that detection on old items was facilitated (relative to neutral locations and probes at the locations of new distractors) when the probes appeared 200 msec after previews, whereas there was worse detection on old items when the probes followed 800 msec after previews. We discuss the results in terms of visual marking and attention capture processes in visual search.

Journal ArticleDOI
TL;DR: Skin conformance cannot account for the loss of spatial acuity reported in earlier studies and confirmed in this study, and it is inferred that the loss must be neural in origin.
Abstract: The ability of the skin to conform to the spatial details of a surface or an object is an essential part of our ability to discriminate fine spatial features haptically. In this study, we examined the extent to which differences in tactual acuity between subjects of the same age and between younger and older subjects can be accounted for by differences in the properties of the skin. We did so by measuring skin conformance and tactile spatial acuity in the glabrous skin at the fingertip in 18 younger (19–36 years old) and 9 older (61–69 years old) subjects. Skin conformance was measured as the degree to which the skin invaded the spaces in the psychophysical stimuli. There were several findings. First, skin conformance accounted for 50% of the variance in our measure of tactile spatial acuity (the threshold for grating orientation discrimination) between the younger subjects. The subjects with more compliant skin had substantially lower thresholds than did the subjects with stiffer skin. Second, the skin of the younger subjects was more compliant across than along the skin ridges, and this translated into significantly greater performance when the gratings were oriented along than when oriented across the skin. Third, skin conformance was virtually identical in the younger and the older subjects. Consequently, skin conformance cannot account for the loss of spatial acuity reported in earlier studies and confirmed in this study. We infer that the loss must be neural in origin.

Journal ArticleDOI
TL;DR: This work examines the role of distractors in the attentional blink using a modified two-target paradigm with a central stream of task-irrelevant distractors and suggests a conceptual link between the AB and a form of nonspatial contingent capture attributable to distractor processing.
Abstract: When two sequential targets (T1 and T2) are presented within about 600 msec, perception of the second target is impaired. This attentional blink (AB) has been studied by means of two paradigms: rapid serial visual presentation (RSVP), in which targets are embedded in a stream of central distractors, and the two-target paradigm, in which targets are presented eccentrically without distractors. We examined the role of distractors in the AB, using a modified two-target paradigm with a central stream of task-irrelevant distractors. In six experiments, the RSVP stream of distractors substantially impaired identification of both T1 and T2, but only when the distractors shared common characteristics with the targets. Without such commonalities, the distractors had no effect on performance. This points to the subjects’ attentional control setting as an important factor in the AB deficit and suggests a conceptual link between the AB and a form of nonspatial contingent capture attributable to distractor processing.

Journal ArticleDOI
TL;DR: It is hypothesized that heaviness perception for a freely wielded nonvisible object can be mapped to a point in a three-dimensional heaviness space, and a promising conjecture is that the haptic perceptual system maps the combination of an object's inertia for translation and inertia for rotation to a perception of the object’s maneuverability.
Abstract: It is hypothesized that heaviness perception for a freely wielded nonvisible object can be mapped to a point in a three-dimensional heaviness space. The three dimensions are mass, the volume of the inertia ellipsoid, and the symmetry of the inertia ellipsoid. Within this space, particular combinations yield heaviness metamers (objects of different mass that feel equally heavy), whereas other combinations yield analogues to the size-weight illusion (objects of the same mass that feel unequally heavy). Evidence for the two types of combinations was provided by experiments in which participants wielded occluded hand-held objects and estimated the heaviness of the objects relative to a standard. Further experiments with similar procedures showed that metamers of heaviness were metamers of moveableness but not metamers of length. A promising conjecture is that the haptic perceptual system maps the combination of an object’s inertia for translation and inertia for rotation to a perception of the object’s maneuverability.
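The last two proposed dimensions can be computed from an object's inertia tensor about the rotation point in the wrist. A sketch under assumed conventions from the dynamic-touch literature (semi-axes of the inertia ellipsoid taken to scale as 1/sqrt(I_k), symmetry taken as s = 2*I3/(I1 + I2)); the example tensor is hypothetical:

```python
import numpy as np

def inertia_ellipsoid_descriptors(inertia_tensor):
    """Volume and symmetry of the inertia ellipsoid from a 3x3 inertia
    tensor. Assumed conventions: semi-axes scale as 1/sqrt(I_k), so
    volume ~ (4*pi/3)/sqrt(I1*I2*I3); symmetry s = 2*I3/(I1 + I2) for
    principal moments ordered I1 >= I2 >= I3."""
    I = np.sort(np.linalg.eigvalsh(inertia_tensor))[::-1]
    volume = (4.0 / 3.0) * np.pi / np.sqrt(np.prod(I))
    symmetry = 2.0 * I[2] / (I[0] + I[1])
    return volume, symmetry

# Hypothetical inertia tensor (kg*m^2) of a wielded rod about the wrist.
I_rod = np.diag([0.020, 0.019, 0.001])
print(inertia_ellipsoid_descriptors(I_rod))
```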

Journal ArticleDOI
TL;DR: The results showed that a decrease in the number of saccades occurred along with a fall in search time and revealed an effective search period in which each saccade monotonically brought the fixation closer to the target.
Abstract: Previous studies have shown that context-facilitated visual search can occur through implicit learning. In the present study, we have explored its oculomotor correlates as a step toward unraveling the mechanisms that underlie such learning. Specifically, we examined a number of oculomotor parameters that might accompany the learning of context-guided search. The results showed that a decrease in the number of saccades occurred along with a fall in search time. Furthermore, we identified an effective search period in which each saccade monotonically brought the fixation closer to the target. Most important, the speed with which eye fixation approached the target did not change as a result of learning. We discuss the general implications of these results for visual search.

Journal ArticleDOI
TL;DR: All three dual-process models, which are based on both conscious and unconscious perception, should be rejected in favor of the single-process conscious perception model.
Abstract: According to Snodgrass, Bernat, and Shevrin (2004), unconscious perception can be demonstrated convincingly only at the objective detection threshold, provided that the conditions of their objective detection/strategic model are met, whereas both the subjective threshold model of Cheesman and Merikle (1984, 1986) and the objective threshold/rapid decay model of Greenwald, Draine, and Abrams (1996) are inconclusive. We argue on theoretical, metatheoretical, and empirical grounds that all three dual-process models, which are based on both conscious and unconscious perception, should be rejected in favor of the single-process conscious perception model.

Journal ArticleDOI
TL;DR: The Spearman-Kärber method appears to be a valuable addition to the toolbox of psychophysical methods, because it is most accurate for estimating the mean (i.e., absolute and difference thresholds) and dispersion of the psychometric function, although it is not optimal for estimating percentile-based parameters of this function.
Abstract: The Spearman-Kärber method can be used to estimate the threshold value or difference limen in two-alternative forced-choice tasks. This method yields a simple estimator for the difference limen and its standard error, so that both can be calculated with a pocket calculator. In contrast to previous estimators, the present approach does not require any assumptions about the shape of the true underlying psychometric function. The performance of this new nonparametric estimator is compared with the standard technique of probit analysis. The Spearman-Kärber method appears to be a valuable addition to the toolbox of psychophysical methods, because it is most accurate for estimating the mean (i.e., absolute and difference thresholds) and dispersion of the psychometric function, although it is not optimal for estimating percentile-based parameters of this function.
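The estimator itself is simple enough to echo in a few lines: the mean of the psychometric function is the sum of bin midpoints weighted by the rise in response probability across each bin. A minimal sketch with hypothetical 2AFC data (the paper's standard-error formula is omitted):

```python
import numpy as np

def spearman_karber_mean(levels, p_correct):
    """Nonparametric Spearman-Kärber estimate of the mean of the
    psychometric function: bin midpoints weighted by the rise in
    response probability across each bin. No shape assumptions."""
    x = np.asarray(levels, dtype=float)
    p = np.maximum.accumulate(np.asarray(p_correct, dtype=float))
    p = (p - p[0]) / (p[-1] - p[0])       # normalize to span 0..1
    midpoints = (x[:-1] + x[1:]) / 2.0
    return np.sum(np.diff(p) * midpoints)

# Hypothetical stimulus levels and proportions of 'greater' responses.
print(spearman_karber_mean([1, 2, 3, 4, 5, 6],
                           [0.05, 0.15, 0.40, 0.70, 0.90, 0.98]))
```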

Journal ArticleDOI
TL;DR: The effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words are investigated, showing that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality.
Abstract: In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

Journal ArticleDOI
TL;DR: These experiments are consistent with the hypothesis that high spatial resolution information is not necessary for audiovisual speech perception and that a limited range of the spatial frequency spectrum is sufficient.
Abstract: Spatial frequency band-pass and low-pass filtered images of a talker were used in an audiovisual speech-in-noise task. Three experiments tested subjects’ use of information contained in the different filter bands with center frequencies ranging from 2.7 to 44.1 cycles/face (c/face). Experiment 1 demonstrated that information from a broad range of spatial frequencies enhanced auditory intelligibility. The frequency bands differed in the degree of enhancement, with a peak being observed in a mid-range band (11-c/face center frequency). Experiment 2 showed that this pattern was not influenced by viewing distance and, thus, that the results are best interpreted in object spatial frequency, rather than in retinal coordinates. Experiment 3 showed that low-pass filtered images could produce a performance equivalent to that produced by unfiltered images. These experiments are consistent with the hypothesis that high spatial resolution information is not necessary for audiovisual speech perception and that a limited range of spatial frequency spectrum is sufficient.
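Expressing filter cutoffs in cycles/face rather than cycles/pixel only requires rescaling FFT frequencies by the face's width in pixels. A crude sketch of such a filter (the function and its parameters are hypothetical, not the authors' filtering pipeline):

```python
import numpy as np

def bandpass_cycles_per_face(image, low, high, face_width_px):
    """Crude FFT band-pass filter with cutoffs in cycles/face: FFT
    frequencies (cycles/pixel) are rescaled by the face's width in
    pixels, and components outside [low, high] are zeroed."""
    f = np.fft.fftshift(np.fft.fft2(image))
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    radius = np.hypot(*np.meshgrid(fx, fy)) * face_width_px  # cycles/face
    mask = (radius >= low) & (radius <= high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

frame = np.random.rand(256, 256)    # stand-in for one frame of the talker
filtered = bandpass_cycles_per_face(frame, low=7.0, high=15.0,
                                    face_width_px=180)
```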

Journal ArticleDOI
TL;DR: New methodologies are employed to assess serial versus parallel processing and find strong evidence for pure serial or pure parallel processing, with some striking apparent differences across individuals and interstimulus conditions.
Abstract: Many mental tasks that involve operations on a number of items take place within a few hundred milliseconds. In such tasks, whether the items are processed simultaneously (in parallel) or sequentially (serially) has long been of interest to psychologists. Although certain types of parallel and serial models have been ruled out, it has proven extremely difficult to entirely separate reasonable serial and limited-capacity parallel models on the basis of typical data. Recent advances in theory-driven methodology now permit strong tests of serial versus parallel processing in such tasks, in ways that bypass the capacity issue and that are distribution and parameter free. We employ new methodologies to assess serial versus parallel processing and find strong evidence for pure serial or pure parallel processing, with some striking apparent differences across individuals and interstimulus conditions.
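One family of distribution-free diagnostics of this kind comes from systems factorial technology, e.g., the survivor interaction contrast of Townsend and Nozawa (1995); whether that is the precise methodology used here is an assumption of the sketch below, which uses hypothetical factorial RT data:

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    rts = np.sort(np.asarray(rts))
    return 1.0 - np.searchsorted(rts, t, side="right") / rts.size

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t):
    """Survivor interaction contrast, SIC(t) = S_LL - S_LH - S_HL + S_HH,
    for a 2x2 factorial manipulation of the salience (L = low, H = high)
    of two stimulus components. Its shape over t diagnoses architecture
    distribution-free: e.g., serial exhaustive processing predicts a
    negative-then-positive SIC, parallel self-terminating an all-positive one."""
    return (survivor(rt_ll, t) - survivor(rt_lh, t)
            - survivor(rt_hl, t) + survivor(rt_hh, t))

rng = np.random.default_rng(2)
# Hypothetical RT samples (msec) from the four factorial conditions.
rt_ll, rt_lh = rng.normal(700, 80, 400), rng.normal(640, 80, 400)
rt_hl, rt_hh = rng.normal(640, 80, 400), rng.normal(580, 80, 400)
t = np.arange(300, 1100, 10)
print(sic(rt_ll, rt_lh, rt_hl, rt_hh, t).round(2))
```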

Journal ArticleDOI
TL;DR: The results indicate that when multiple contexts are redundant, contextual learning occurs selectively, depending on the predictability of the target location.
Abstract: To conduct an efficient visual search, visual attention must be guided to a target appropriately. Previous studies have suggested that attention can be quickly guided to a target when the spatial configurations of search objects or the object identities have been repeated. This phenomenon is termed contextual cuing. In this study, we investigated the effect of learning spatial configurations, object identities, and a combination of both configurations and identities on visual search. The results indicated that participants could learn the contexts of spatial configurations, but not of object identities, even when both configurations and identities were completely correlated (Experiment 1). On the other hand, when only object identities were repeated, an effect of identity learning could be observed (Experiment 2). Furthermore, an additive effect of configuration learning and identity learning was observed when, in some trials, each context was the relevant cue for predicting the target (Experiment 3). Participants could learn only the context that was associated with target location (Experiment 4). These findings indicate that when multiple contexts are redundant, contextual learning occurs selectively, depending on the predictability of the target location.

Journal ArticleDOI
TL;DR: The results support the hypothesis that in vivo size estimation of familiar objects may employ a mechanism that derives size from memory and that size memory can be distorted by the way an object was used.
Abstract: A ladle was recalled as being taller by participants who observed tedious removal of sand from it with a small teaspoon than by those who observed removal with a larger spoon. A second experiment showed that the number of darts thrown in order to hit a target correlated negatively with memory estimates of the size of the target, a finding replicated in a third experiment with size estimates made while the target was visible. The first two experiments suggest that the way an object is used can influence memory of its size. The third experiment supports the hypothesis that in vivo size estimation of familiar objects may employ a mechanism that derives size from memory and that size memory can be distorted by the way an object was used.