scispace - formally typeset

Showing papers on "Visual perception published in 1986"


Journal ArticleDOI
TL;DR: The idea that a sensory input can give rise to semantic activation without concomitant conscious identification was the central thesis of the controversial research in subliminal perception as discussed by the authors, which can be demonstrated by the ability of a person to perform discriminations on the basis of the meaning of the stimulus.
Abstract: When the stored representation of the meaning of a stimulus is accessed through the processing of a sensory input it is maintained in an activated state for a certain amount of time that allows for further processing. This semantic activation is generally accompanied by conscious identification, which can be demonstrated by the ability of a person to perform discriminations on the basis of the meaning of the stimulus. The idea that a sensory input can give rise to semantic activation without concomitant conscious identification was the central thesis of the controversial research in subliminal perception. Recently, new claims for the existence of such phenomena have arisen from studies in dichotic listening, parafoveal vision, and visual pattern masking. Because of the fundamental role played by these types of experiments in cognitive psychology, the new assertions have raised widespread interest. The purpose of this paper is to show that this enthusiasm may be premature. Analysis of the three new lines of evidence for semantic activation without conscious identification leads to the following conclusions. (1) Dichotic listening cannot provide the conditions needed to demonstrate the phenomenon. These conditions are better fulfilled in parafoveal vision and are realized ideally in pattern masking. (2) Evidence for the phenomenon is very scanty for parafoveal vision, but several tentative demonstrations have been reported for pattern masking. It can be shown, however, that none of these studies has included the requisite controls to ensure that semantic activation was not accompanied by conscious identification of the stimulus at the time of presentation. (3) On the basis of current evidence it is most likely that these stimuli were indeed consciously identified.

1,143 citations


Journal ArticleDOI
01 Apr 1986-Nature
TL;DR: Evidence is presented that a change in the position of a visual target during a reaching movement can modify the trajectory even when vision of the hand is prevented, and the mechanisms that maintain the apparent stability of a target in space are dissociable from those that mediate the visuomotor output directed at that target.
Abstract: When we reach towards an object that suddenly appears in our peripheral visual field, not only does our arm extend towards the object, but our eyes, head and body also move in such a way that the image of the object falls on the fovea. Popular models of how reaching movements are programmed [1,2] have argued that while the first part of the limb movement is ballistic, subsequent corrections to the trajectory are made on the basis of dynamic feedback about the relative positions of the hand and the target provided by central vision. These models have assumed that the adjustments are dependent on seeing the hand moving with respect to the target. Here we present evidence that a change in the position of a visual target during a reaching movement can modify the trajectory even when vision of the hand is prevented. Moreover, these dynamic corrections to the trajectory of the moving limb occur without the subject perceiving the change in target location. These findings demonstrate that (1) visual feedback about the relative position of the hand and target is not necessary for visually driven corrections in reaching to occur, and (2) the mechanisms that maintain the apparent stability of a target in space are dissociable from those that mediate the visuomotor output directed at that target.

886 citations



Journal ArticleDOI
TL;DR: Older people seem to be highly susceptible to the distracting effects of irrelevant or interfering visual stimuli; this susceptibility was studied using visual displays in which observers had to localize the position of a face.
Abstract: Older people seem to be highly susceptible to the distracting effects of irrelevant or interfering visual stimuli. We studied this susceptibility using visual displays in which observers had to localize the position of a face. When a face appeared in isolation, observers of all ages did equally well; when distracting stimuli surrounded the face, only the older observers performed poorly. Brief periods of practice produced substantial and long-lasting improvement in performance.

288 citations


Journal ArticleDOI
TL;DR: It was found that adults can keep up to date on the changing structure of their perspectives even in the absence of sights and sounds that specify changes in self-to-object relations.
Abstract: Experiments are reported of the nonvisual sensitivity of observers to their paths of locomotion and to the resulting changes in the structure of their perspectives, i.e., changes in the network of directions and distances spatially relating them to objects fixed in the surrounding environment. In the first experiment it was found that adults can keep up to date on the changing structure of their perspectives even in the absence of sights and sounds that specify changes in self-to-object relations. They do this rapidly, accurately, and, according to the subjects' reports, automatically, as if perceiving the new perspective structures. The second experiment was designed to investigate the role of visual experience in the development of sensitivity to occluded changes in perspective structure by comparing the judgments of sighted adults with those of late-blinded adults (who had extensive life histories of vision) and those of early-blinded adults (who had little or no history of vision). The three groups perfo...

276 citations


Journal ArticleDOI
11 Apr 1986-Science
TL;DR: The effects of retinal image deprivation (monocular form deprivation) on four psychophysical functions were investigated in rhesus monkeys to determine if the sensitive period is of the same duration for all types of visual information processing.
Abstract: Early in life, abnormal visual experience may disrupt the developmental processes required for the maturation and maintenance of normal visual function. The effects of retinal image deprivation (monocular form deprivation) on four psychophysical functions were investigated in rhesus monkeys to determine if the sensitive period is of the same duration for all types of visual information processing. The basic spectral sensitivity functions of rods and cones have relatively short sensitive periods of development (3 and 6 months) when compared to more complex functions such as monocular spatial vision or resolution (25 months) and binocular vision (greater than 25 months). Therefore, there are multiple, partially overlapping sensitive periods of development and the sensitive period for each specific visual function is probably different.

221 citations


Journal ArticleDOI
TL;DR: A series of six experiments offers converging evidence that there is no fixed dominance hierarchy for the perception of textured patterns, and in doing so, highlights the importance of recognizing the multidimensionality of texture perception.
Abstract: A series of six experiments offers converging evidence that there is no fixed dominance hierarchy for the perception of textured patterns, and in doing so, highlights the importance of recognizing the multidimensionality of texture perception. The relative bias between vision and touch was reversed or considerably altered using both discrepancy and nondiscrepancy paradigms. This shift was achieved merely by directing observers to judge different dimensions of the same textured surface. Experiments 1, 4, and 5 showed relatively strong emphasis on visual as opposed to tactual cues regarding the spatial density of raised dot patterns. In contrast, Experiments 2, 3, and 6 demonstrated considerably greater emphasis on the tactual as opposed to visual cues when observers were instructed to judge the roughness of the same surfaces. The results of the experiments were discussed in terms of a modality appropriateness interpretation of intersensory bias. A weighted averaging model appeared to describe the nature of the intersensory integration process for both spatial density and roughness perception.
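The weighted averaging model mentioned above lends itself to a one-line formalization. The sketch below is a hypothetical illustration, not the paper's fitted model: the bimodal percept is a weighted mean of the unimodal visual and tactual estimates, with the visual weight assumed high for spatial-density judgments and low for roughness judgments (the weight and estimate values are invented).

```python
# Hypothetical sketch of a weighted averaging model of intersensory
# integration: the bimodal percept is a weighted mean of the visual and
# tactual estimates, with weights that depend on which dimension
# (spatial density vs. roughness) is being judged.
# Weight values here are illustrative assumptions, not the paper's estimates.

def weighted_average(visual: float, tactual: float, w_visual: float) -> float:
    """Combine unimodal estimates into a single bimodal percept."""
    assert 0.0 <= w_visual <= 1.0
    return w_visual * visual + (1.0 - w_visual) * tactual

# Judging spatial density: vision dominates, so w_visual is high.
density = weighted_average(visual=8.0, tactual=6.0, w_visual=0.8)    # ~7.6

# Judging roughness of the same surface: touch dominates.
roughness = weighted_average(visual=8.0, tactual=6.0, w_visual=0.2)  # ~6.4
```

The same two unimodal estimates yield different bimodal percepts depending only on the weighting, which is the sense in which the intersensory bias can be reversed by changing the judged dimension.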

212 citations


Journal ArticleDOI
TL;DR: This article showed that illusory conjunctions of letter shape and identity can illuminate the units of analysis used by the visual system in word perception, revealing the functional units in the visual analysis of words and word-like stimuli.

208 citations


Journal ArticleDOI
TL;DR: Control measurements with various degrees of optical blur demonstrate that direction discrimination does not require a well-focussed retinal image, ruling out optical factors as the potential cause of the prepractice differences between groups.
Abstract: Younger observers (M = 21 years) proved to be better than older observers (M = 68 years) at discriminating one direction of motion from another, highly similar one. Several days' practice steadily improved performance for both groups equally. Improvement was largely restricted to the direction with which the observer practiced, and the full gains were retained for at least 1 month. Control measurements with various degrees of optical blur demonstrate that direction discrimination does not require a well-focussed retinal image. This rules out optical factors as the potential cause of the prepractice differences between groups.

Journal ArticleDOI
TL;DR: The current experiments found that children are poorer lip-readers than adults, and a positive correlation was observed between lip-reading ability and the size of the visual contribution to bimodal speech perception.


Journal ArticleDOI
TL;DR: A new theory linking information extraction patterns, specifically adapted for the guidance of eye movements, to the visual perception of direction and extent is presented, and a research strategy to test for efferent involvement in visual perception in humans is presented.
Abstract: After outlining the history of motor theories of visual perception, a new theory linking information extraction patterns, specifically adapted for the guidance of eye movements, to the visual perception of direction and extent is presented. Following a brief discussion of comparative and physiological considerations, a research strategy to test for efferent involvement in visual perception in humans is presented. In seven demonstration experiments, predictions from efferent considerations are used to create a new set of illusions of direction and extent and to demonstrate new predictable variations in the magnitude of some classical illusion figures. Another demonstration illustrates that systematic changes in visual perception occur as a function of changes in motoric demands, even in the absence of any configurational changes in the stimulus. A final section shows the relationship between attention and efferent readiness and their interaction in the formation of the conscious visual percept. From a historical perspective, most contemporary theories of visual perception are quite conservative. This conservatism springs from an apparent acceptance of the premise that any proper analysis of visual experience must avoid reference to nonvisual mechanisms, except for labeling and semantic aspects of the perceptual process. It follows that most visual theorists tend to derive virtually every aspect of the conscious percept solely from either the physical characteristics of the visual stimulus array or the operation of readily definable neurological units in the visual system. Characteristic of the former viewpoint is Gibson's (1979) theory of ecological optics, which maintains that virtually all aspects of the final percept are predictable from invariants in the stimulus array. Current attempts to derive the conscious percept from a hypothesized Fourier analysis occurring within the visual system are similar in approach, merely relying on higher

Journal ArticleDOI
TL;DR: The visual span control hypothesis, which considers that, in such a task, eye movements are controlled as a direct function of spatial visibility limits, is confirmed and interpreted in relation to recent models of eye-movement control by two largely independent subsystems functioning in parallel.
Abstract: In order to distinguish between the effects of low-level sensory mechanisms and those of higher level factors on eye-movement control processes, a simple letter search task was used in which cognitive load was reduced to the very minimum. The special purpose of this study was to test the visual span control hypothesis, which considers that, in such a task, eye movements are controlled as a direct function of spatial visibility limits (O’Regan, Levy-Schoen, & Jacobs, 1983). In a first psychophysical experiment, three methods were used to manipulate the spatial visibility limits (visual span), as measured by a psychophysical procedure: changing viewing distance, interletter spacing, and target-background similarity. The results of this experiment then were used as a reference for predicting mean saccade sizes and fixation durations in a visual search task in which the same visibility changes were made. About 80% of the variance of mean saccade sizes could be accounted for by adjustment of saccades to changes in visual span, so the visual span control hypothesis was confirmed. As to the temporal characteristics of scanning behavior, less than 50% of fixation duration variance seemed to be determined by visual span changes. Other, higher level factors, possibly related to decisional processes intervening in the triggering of saccades and the computation of their spatial parameters, might play an important role in determining fixation durations in a simple search task. The results are interpreted in relation to recent models of eye-movement control by two largely independent subsystems functioning in parallel.
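The "about 80% of the variance" figure refers to a variance-accounted-for statistic. As a rough illustration, assuming an ordinary R² computed from predicted versus observed mean saccade sizes (all numbers below are invented, not the paper's data):

```python
# Sketch of a "proportion of variance accounted for" (R^2) computation:
# how much of the variation in observed mean saccade size is captured by
# a prediction derived from the measured visual span.
# The data points are invented for illustration.

def r_squared(predicted, observed):
    """1 - SS_residual / SS_total: variance in `observed` explained by `predicted`."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Saccade sizes (letter positions) predicted from visual span vs. observed.
predicted = [2.2, 2.8, 4.1, 4.9]
observed = [2.0, 3.0, 4.0, 5.0]
print(r_squared(predicted, observed))  # -> ~0.98, i.e., ~98% of variance
```

An R² of about 0.8 corresponds to the paper's claim that roughly 80% of the variance in mean saccade size follows the manipulated visual span.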


Journal ArticleDOI
TL;DR: This special issue is devoted exclusively to studies on selective attention in vision, and one first step towards adequate theorizing about visual attention is to consider in which respects visual information processing is different from auditory information processing.
Abstract: Theories of selective attention are often put forward as having, in intent, equivalent application to audition, to vision, and, for that matter, to information processing in all other sensory modalities. However, there are certain fundamental differences in the character of information processing in vision and audition; the wiser, or at least the more cautious strategy, may be to concentrate, first of all, on developing models of the selective processes within the different senses individually. Only then, when our understanding of selection within these functionally very different modalities is more secure, will it perhaps be fruitful to look for generalizations about the selection mechanisms across different sensory systems. Accordingly, this special issue is devoted exclusively to studies on selective attention in vision. The early ideas on selective attention, within the information-processing framework, were shaped principally by work on auditory selection, in particular by the demands of selection among concurrent speech signals. In part, this was due to the human engineering context in which modern research on attention evolved. However, there was also a widespread belief that hearing is, because of its functional properties, especially suited to the study of attentional selection. This was, for example, the main message in the introductory chapter of Broadbent's (1958) influential monograph, the argument being that auditory selection is almost completely central, whereas peripheral sensory adjustments play a large part in visual selection. In the decades during which modern attention research took shape, theories of attention were thus essentially theories of auditory attention. When attention research began to be extended to the visual modality, many of the central ideas and theoretical alternatives had already been formulated, and theorizing continued to be influenced by them. 
Meanwhile, as witnessed by the contributions to this special issue, visual attention has come of age as a research field; it may be timely to consider its particular functional basis. One first step towards adequate theorizing about visual attention is to consider in which respects visual information processing is different from auditory information processing. The properties of vision that are potentially relevant to our understanding of visual attention, and that we

Journal ArticleDOI
01 Feb 1986-Brain
TL;DR: It was determined that damage to either the left or right hemisphere results in a general slowing of reaction times to visual stimuli irrespective of where such stimuli appear, and that patients with right parietal lesions are further impaired at shifting attention within the left visual field.
Abstract: The contribution of attentional factors per se in response to visual stimuli was studied in patients with unilateral lesions of the left or right cerebral hemispheres. Subjects were required to respond to visual targets that were presented tachistoscopically, and were preceded by spatial cues that served to manipulate the spatial locus of attention. On 'valid' cue trials, the cue directed attention to the target's spatial coordinates; on 'invalid' cue trials, the cue misdirected attention. It was determined that damage to either the left or right hemisphere results in a general slowing of reaction times to visual stimuli irrespective of where such stimuli appear, and that patients with right parietal lesions are further impaired at shifting attention within the left visual field.

Journal ArticleDOI
TL;DR: The two experiments suggest that fast internal tracing of curves is employed by the visual system in the perception of certain shape properties and spatial relations and that people can trace curves in a visual display internally at high speed.
Abstract: The two experiments in this study suggest that fast internal tracing of curves is employed by the visual system in the perception of certain shape properties and spatial relations. The experimental task in the first experiment was to determine, as rapidly as possible, whether two Xs lay on the same curve or on different curves in a visual display. Mean response time for “same” responses increased monotonically with increasing distance along the curve between the Xs. The task in the second experiment was to decide either that a curve joining two Xs was unbroken or that the curve had a gap. Decision times again increased as the length of the curve joining the Xs was increased. The results of both experiments suggest that people can trace curves in a visual display internally at high speed (the average rate of tracing was about 40° of visual angle per second). Curve tracing may be an important visual process used to integrate information from different parts of a visual display.
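The ~40° per second tracing rate can be recovered from the slope of response time against curve length: if RT = base + length/rate, the fitted slope is 1/rate. A minimal sketch with invented data (not the paper's measurements):

```python
# Sketch of how a tracing rate (degrees of visual angle per second) can be
# estimated from the slope of response time against curve length.
# Model: RT = base + length / rate, so the least-squares slope is 1 / rate.
# The data below are invented for illustration.

def tracing_rate(lengths_deg, rts_s):
    """Least-squares slope of RT vs. curve length, returned as degrees/second."""
    n = len(lengths_deg)
    mx = sum(lengths_deg) / n
    my = sum(rts_s) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lengths_deg, rts_s))
             / sum((x - mx) ** 2 for x in lengths_deg))
    return 1.0 / slope  # seconds per degree -> degrees per second

# Illustrative data: 0.5 s base time plus 25 ms per degree of curve length.
lengths = [2.0, 4.0, 8.0, 16.0]
rts = [0.5 + length / 40.0 for length in lengths]
print(tracing_rate(lengths, rts))  # -> ~40 deg/s
```

The intercept (here 0.5 s) absorbs encoding and response time, so only the slope carries information about the internal tracing speed.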

Journal ArticleDOI
TL;DR: In this paper, it is argued that, contrary to the assumptions of many cognitive theorists, the computational approach does not provide coherent answers to these problems, and that a more promising start would be to fall back on mathematical communication theory and, with the help of evolutionary biology and neurophysiology, to attempt a characterization of the adaptive processes involved in visual perception.
Abstract: This article responds to two unresolved and crucial problems of cognitive science: (1) What is actually accomplished by functions of the nervous system that we ordinarily describe in the intentional idiom? and (2) What makes the information processing involved in these functions semantic? It is argued that, contrary to the assumptions of many cognitive theorists, the computational approach does not provide coherent answers to these problems, and that a more promising start would be to fall back on mathematical communication theory and, with the help of evolutionary biology and neurophysiology, to attempt a characterization of the adaptive processes involved in visual perception. Visual representations are explained as patterns of cortical activity that are enabled to focus on objects in the changing visual environment by constantly adjusting to maintain levels of mutual information between pattern and object that are adequate for continuing perceptual control. In these terms, the answer proposed to (1) is that the intentional functions of vision are those involved in the establishment and maintenance of such representations, and to (2) that semantic features are added to the information processes of vision with the focus on objects that these representations accomplish. The article concludes with proposals for extending this account of intentionality to the higher domains of conceptualization and reason, and with speculation about how semantic information-processing might be achieved in mechanical systems.
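The mutual information between pattern and object invoked above is the standard information-theoretic quantity. A minimal sketch, assuming discretized pattern and object states and an illustrative joint distribution (the distribution is an assumption, not the article's model):

```python
# Minimal sketch of the mutual information I(X;Y) between a cortical
# "pattern" state X and an "object" state Y, computed from a discrete
# joint probability table. The joint distribution is illustrative only.
import math

def mutual_information(joint):
    """I(X;Y) in bits for a joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]              # marginal over patterns
    py = [sum(col) for col in zip(*joint)]        # marginal over objects
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

# Perfectly correlated pattern and object: one full bit of shared information.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))  # -> 1.0
```

On this reading, "adequate levels of mutual information" means the joint distribution stays far from independence (where I(X;Y) would be 0 bits).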


Journal ArticleDOI
TL;DR: In this paper, the authors argue that the use of visual similarity as a cue to category membership may produce the picture advantage and show that pictures from the same category are more similar than pictures from different categories.
Abstract: Categorization is usually assumed to require access to a concept's meaning. When pictures are categorized faster than words, they are assumed to be understood faster than words. However, pictures from the same category are more similar than pictures from different categories. The present article argues that the use of visual similarity as a cue to category membership may produce the picture advantage. The visual similarity hypothesis was tested in two experiments. In the first experiment, pictures showed a disadvantage for the visually similar categories of fruits and vegetables, but showed their usual advantage for the visually dissimilar categories of fruits and animals. In the second experiment, with a mixed list design, pictures were slower only for visually similar "different" decisions, but showed the usual advantage for all other decisions. The reliability of visual similarity as a cue to the decision accounted well for these results. Because visual similarity can be shown to have large effects on picture categorization, the use of categorization to compare speed of understanding of pictures and words is questionable.

Journal ArticleDOI
TL;DR: The model's adequacy and usefulness for interpreting and guiding research on normal and brain-damaged people is discussed, and the model can accommodate dynamic features characteristic of competing efferent (attentional) models without sacrificing its basic structure.

Journal ArticleDOI
TL;DR: A historical review of the development of interaction theories in visual perception is presented, and the idea of ‘cancellation’ between afferent visual movement signals and corollary signals evoked by the motor commands of gaze movement was first proposed by Purkyně.


Journal ArticleDOI
Delbert Elliott
TL;DR: In this paper, previous research suggesting that visual information useful in the control of movement persists for up to 8 s after visual occlusion was not replicated; little evidence for such persistence was found, indicating that there is no substitute for continuous visual information.
Abstract: The purpose of the two experiments reported here was to replicate previous research (Thomson, 1983) which suggests that visual information useful in the control of movement persists for up to 8 s after visual occlusion. Contrary to other findings (Thomson, 1980, 1983), little evidence was found for an 8-s visual representation of the environmental layout, indicating there is no substitute for continuous visual information in the control of movement. Methodological and statistical problems with Thomson's work are discussed. In the last few years, there has been a resurgence of interest in the role of visual information in the control of movement.


Book ChapterDOI
01 Jan 1986
TL;DR: Prosopagnosia is a rare neurobehavioral syndrome in which a patient with brain damage becomes unable to recognize previously familiar persons by visual reference to their facial features.
Abstract: Prosopagnosia is a rare neurobehavioral syndrome in which a patient with brain damage becomes unable to recognize previously familiar persons by visual reference to their facial features (Bodamer, 1947; Bauer & Rubens, 1985). It is distinct from disorders in the perceptual processing of previously unfamiliar faces (Benton, 1980) and is dissociable from specific impairments in learning new faces seen as part of a more general visual recent memory deficit (Ross, 1980). In many cases, the disorder extends to famous faces and may even prevent identification of the patient’s own face in a mirror. Prosopagnosics invariably recognize faces as faces, and are able to achieve immediate and certain recognition when they hear the person’s voice or when some informative extrafacial visual cue (clothing, gait, etc.) is available. Prosopagnosia cannot be solely attributed to aphasic misnaming or to perceptual impairment, since tests of language and visual perception are usually performed at normal or near-normal levels (cf. Bauer & Rubens, 1985 for review).

Journal ArticleDOI
TL;DR: The present article examines the empirical basis for anatomically specific hypotheses and considers alternative explanations for the observed ERP changes during selective attention.