
Showing papers on "Visual perception published in 1992"


Journal ArticleDOI
TL;DR: It is proposed that the ventral stream of projections from the striate cortex to the inferotemporal cortex plays the major role in the perceptual identification of objects, while the dorsal stream projecting from the striate cortex to the posterior parietal region mediates the required sensorimotor transformations for visually guided actions directed at such objects.

5,878 citations


Journal ArticleDOI
TL;DR: This paper discusses several defects of vision and the classical theories of how they are overcome, and suggests an alternative approach, in which the outside world is considered as a kind of external memory store which can be accessed instantaneously by casting one's eyes (or one's attention) to some location.
Abstract: Visual science is currently a highly active domain, with much progress being made in fields such as colour vision, stereo vision, perception of brightness and contrast, visual illusions, etc. But the "real" mystery of visual perception remains comparatively unfathomed, or at least relegated to philosophical status: Why is it that we can see so well with what is apparently such a badly constructed visual apparatus? In this paper I will discuss several defects of vision and the classical theories of how they are overcome. I will criticize these theories and suggest an alternative approach, in which the outside world is considered as a kind of external memory store which can be accessed instantaneously by casting one's eyes (or one's attention) to some location. The feeling of the presence and extreme richness of the visual world is, under this view, a kind of illusion, created by the immediate availability of the information in this external store.

867 citations


Journal ArticleDOI
TL;DR: This article provides further evidence concerning the importance of perceptual organization in attending to objects and demonstrates that perceptual grouping, which is usually conceived of as a purely stimulus-driven process, can be governed by goal-directed mechanisms.

520 citations


Journal ArticleDOI
TL;DR: Evidence is presented that the pulvinar contains neurons that generate signals related to the salience of visual objects, and that these neurons produce behavioral changes in cued attention paradigms with visual distracter tasks.

464 citations


Journal ArticleDOI
TL;DR: The results establish that neither texture segregation nor grouping by similarity of lightness or proximity is perceived under conditions of inattention, which supports the conclusion that there is an earlier stage of processing than that referred to as preattentive.

438 citations


Journal ArticleDOI
TL;DR: The effect of processing load on event-related brain potentials (ERPs) was investigated in an intermodal selective attention task in which subjects attended selectively to auditory or visual stimuli.

360 citations


Journal ArticleDOI
04 Sep 1992-Science
TL;DR: In this article, a theoretical framework is proposed to understand binocular visual surface perception based on the idea of a mobile observer sampling images from random vantage points in space, which can be considered as inverse ecological optics based on learning through ecological optics.
Abstract: A theoretical framework is proposed to understand binocular visual surface perception based on the idea of a mobile observer sampling images from random vantage points in space. Application of the generic sampling principle indicates that the visual system acts as if it were viewing surface layouts from generic not accidental vantage points. Through the observer's experience of optical sampling, which can be characterized geometrically, the visual system makes associative connections between images and surfaces, passively internalizing the conditional probabilities of image sampling from surfaces. This in turn enables the visual system to determine which surface a given image most strongly indicates. Thus, visual surface perception can be considered as inverse ecological optics based on learning through ecological optics. As such, it is formally equivalent to a degenerate form of Bayesian inference where prior probabilities are neglected.

355 citations
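Read literally, the closing claim of this abstract can be written out as a formula. The notation below is a restatement of "a degenerate form of Bayesian inference where prior probabilities are neglected", assuming the likelihood is the learned probability of sampling a given image from a surface over generic vantage points; it is not taken from the paper itself.

```latex
% Full Bayesian surface inference over candidate surfaces S given image I:
%   P(S | I)  \propto  P(I | S) P(S)
% where P(I | S) is the probability of sampling image I from surface S
% across generic (non-accidental) vantage points, learned through
% ecological optics. Neglecting the prior P(S) leaves the degenerate,
% maximum-likelihood choice the abstract describes:
\hat{S} \;=\; \arg\max_{S}\; P(I \mid S)
```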


Journal ArticleDOI
TL;DR: It is implied that the extraction of shape from shading is an “early” visual process that occurs prior to perceptual grouping, motion perception, and vestibular (as well as “cognitive”) correction for head tilt.
Abstract: The extraction of three-dimensional shape from shading is one of the most perceptually compelling, yet poorly understood, aspects of visual perception. In this paper, we report several new experiments on the manner in which the perception of shape from shading interacts with other visual processes such as perceptual grouping, preattentive search (“pop-out”), and motion perception. Our specific findings are as follows: (1) The extraction of shape from shading information incorporates at least two “assumptions” or constraints—first,that there is a single light source illuminating the whole scene, and second, that the light is shining from “above” in relation to retinal coordinates. (2) Tokens defined by shading can serve as a basis for perceptual grouping and segregation. (3) Reaction time for detecting a single convex shape does not increase with the number of items in the display. This “pop-out” effect must be based on shading rather than on differences in luminance polarity, since neither left-right differences nor step changes in luminance resulted in pop-out. (4) When the subjects were experienced, there were no search asymmetries for convex as opposed to concave tokens, but when the subjects were naive, cavities were much easier to detect than convex shapes. (5) The extraction of shape from shading can also provide an input to motion perception. And finally, (6) the assumption of “overhead illumination” that leads to perceptual grouping depends primarily on retinal rather than on “phenomenal” or gravitational coordinates. Taken collectively, these findings imply that the extraction of shape from shading is an “early” visual process that occurs prior to perceptual grouping, motion perception, and vestibular (as well as “cognitive”) correction for head tilt. Hence, there may be neural elements very early in visual processing that are specialized for the extraction of shape from shading.

301 citations
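As a concrete illustration of the two constraints the abstract lists (a single light source, shining from "above" in retinal coordinates), the sketch below classifies a shaded token as convex or concave from its vertical luminance gradient. The function and the toy stimulus are hypothetical, meant only to make the assumption explicit, not to reproduce the authors' displays.

```python
import numpy as np

def classify_token(patch: np.ndarray) -> str:
    """Classify a shaded token under the light-from-above assumption.

    patch: 2-D array of luminance values, row 0 = retinal 'up'.
    A token brighter on top than on the bottom is consistent with a
    convex bump lit from above; the reverse suggests a cavity.
    """
    top = patch[: patch.shape[0] // 2].mean()
    bottom = patch[patch.shape[0] // 2 :].mean()
    return "convex" if top > bottom else "concave"

# Toy example: an 8x8 vertical luminance ramp, bright at the top.
bump = np.tile(np.linspace(1.0, 0.0, 8)[:, None], (1, 8))
print(classify_token(bump))          # -> convex
print(classify_token(bump[::-1]))    # -> concave (same ramp flipped)
```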


Journal ArticleDOI
TL;DR: Cue theory, which states that the visual system computes the distances of objects in the environment based on information from the posture of the eyes and from the patterns of light projected onto the retinas by the environment, is presented.
Abstract: The sources of visual information that must be present to correctly interpret spatial relations in images, the relative importance of different visual information sources with regard to metric judgments of spatial relations in images, and the ways that the task in which the images are used affect the visual information's usefulness are discussed. Cue theory, which states that the visual system computes the distances of objects in the environment based on information from the posture of the eyes and from the patterns of light projected onto the retinas by the environment, is presented. Three experiments in which the influence of pictorial cues on perceived spatial relations in computer-generated images was assessed are discussed. Each experiment examined the accuracy with which subjects matched the position, orientation, and size of a test object with a standard by interactively translating, rotating, and scaling the test object.

300 citations
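The "posture of the eyes" component of cue theory has a simple geometric reading: for symmetric fixation, the binocular vergence angle fixes viewing distance. The snippet below is textbook trigonometry offered as an illustration of that cue, not a computation from the paper; the interpupillary distance is an assumed typical value.

```python
import math

def distance_from_vergence(vergence_deg: float, ipd_m: float = 0.063) -> float:
    """Viewing distance implied by a symmetric binocular vergence angle.

    vergence_deg: angle between the two lines of sight, in degrees.
    ipd_m: interpupillary distance in metres (0.063 m is a typical value).
    Geometry: half the IPD subtends half the vergence angle at the eyes,
    so distance = (ipd / 2) / tan(vergence / 2).
    """
    half = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half)

print(round(distance_from_vergence(3.6), 2))   # ~1.0 m for a 3.6 deg angle
```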


Journal ArticleDOI
28 Feb 1992-Science
TL;DR: When stimulated with moving patterns characterized by one of three very diverse cues for form, many middle temporal neurons exhibited similar directional tuning, and this lack of sensitivity for figural cue characteristics may allow the uniform perception of motion of objects having a broad spectrum of physical cues.
Abstract: The direction and rate at which an object moves are normally not correlated with the manifold physical cues (for example, brightness and texture) that enable it to be seen. As befits its goals, human perception of visual motion largely evades this diversity of cues for image form; direction and rate of motion are perceived (with few exceptions) in a fashion that does not depend on the physical characteristics of the object. The middle temporal visual area of the primate cerebral cortex contains many neurons that respond selectively to motion in a particular direction and is an integral part of the neural substrate for perception of motion. When stimulated with moving patterns characterized by one of three very diverse cues for form, many middle temporal neurons exhibited similar directional tuning. This lack of sensitivity for figural cue characteristics may allow the uniform perception of motion of objects having a broad spectrum of physical cues.

299 citations


Journal ArticleDOI
TL;DR: This finding suggests that patients with neglect are able to process stimuli presented to the neglected field to a categorical level of representation even when they deny the stimulus presence in the affected field.
Abstract: Can visual processing be carried out without visual awareness of the presented objects? In the present study we addressed this problem in patients with severe unilateral neglect. The patients were required to respond as fast as possible to target stimuli (pictures of animals and fruits) presented to the normal field by pressing one of the two keys according to the category of the targets. We then studied the influence of priming stimuli, again pictures of animals or fruits, presented to the neglected field on the responses to targets. By combining different pairs of primes and targets, three different experimental conditions were obtained. In the first condition, "Highly congruent," the target and prime stimuli belonged to the same category and were physically identical; in the second condition, "Congruent," the stimuli represented two elements of the same category but were physically dissimilar; in the third condition, "Noncongruent," the stimuli represented one exemplar from each of the two categories of stimuli. The results showed that the responses were facilitated not only in the Highly congruent condition, but also in the Congruent one. This finding suggests that patients with neglect are able to process stimuli presented to the neglected field to a categorical level of representation even when they deny the stimulus presence in the affected field. The implications of this finding for psychological and physiological theory of neglect and visual cognition are discussed.

Journal ArticleDOI
05 Nov 1992-Nature
TL;DR: The authors showed that information which is neglected and unavailable to higher levels of visual processing can nevertheless be processed by earlier stages in the visual system concerned with segmentation, showing that attentional impairment arises after normal segmentation of the image into figures and background has taken place.
Abstract: A central controversy in current research on visual attention is whether figures are segregated from their background preattentively, or whether attention is first directed to unstructured regions of the image. Here we present neurological evidence for the former view from studies of a brain-injured patient with visual neglect. His attentional impairment arises after normal segmentation of the image into figures and background has taken place. Our results indicate that information which is neglected and unavailable to higher levels of visual processing can nevertheless be processed by earlier stages in the visual system concerned with segmentation.

Journal ArticleDOI
TL;DR: Different explanations of color vision favor different philosophical positions: Computational vision is more compatible with objectivism (the color is in the object), psychophysics and neurophysiology with subjectivism, and comparative research suggests that an explanation of color must be both experientialist and ecological (unlike subjectivism).
Abstract: Different explanations of color vision favor different philosophical positions: Computational vision is more compatible with objectivism (the color is in the object), psychophysics and neurophysiology with subjectivism (the color is in the head). Comparative research suggests that an explanation of color must be both experientialist (unlike objectivism) and ecological (unlike subjectivism). Computational vision's emphasis on optimally “recovering” prespecified features of the environment (i.e., distal properties, independent of the sensory-motor capacities of the animal) is unsatisfactory. Conceiving of visual perception instead as the visual guidance of activity in an environment that is determined largely by that very activity suggests new directions for research.

Journal ArticleDOI
TL;DR: The properties of IT neurons are reviewed and it is considered how these properties may underlie the perceptual and mnemonic functions of IT cortex.
Abstract: In primates, inferior temporal (IT) cortex is crucial for the processing and storage of visual information about form and colour. This article reviews the properties of IT neurons and considers how these properties may underlie the perceptual and mnemonic functions of IT cortex. The available evidence suggests that the processing of the facial image by IT cortex is similar to its processing of other visual patterns. Faces and other complex visual stimuli appear to be represented by the pattern of responses over a population of IT neurons rather than by the responses of specific 'feature detectors' or 'grandmother' cells. IT neurons with adult-like stimulus properties are present in monkeys as young as six weeks old.
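The population-coding claim in this review can be made concrete with a toy read-out: the stimulus is identified from the pattern of firing across many IT neurons rather than from any single "grandmother" cell. The stored response vectors below are random numbers used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stored response patterns: firing of 50 IT neurons to three
# remembered stimuli. Purely illustrative numbers, not recorded data.
stored = {name: rng.random(50) for name in ("face_A", "face_B", "hand")}

def decode(population_response: np.ndarray) -> str:
    """Identify a stimulus from the whole population pattern by
    correlating it with each stored pattern (no single diagnostic cell)."""
    return max(stored, key=lambda name: np.corrcoef(
        stored[name], population_response)[0, 1])

# A noisy re-presentation of face_A is still recovered from the pattern.
noisy = stored["face_A"] + 0.1 * rng.standard_normal(50)
print(decode(noisy))   # -> face_A
```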

Journal ArticleDOI
TL;DR: A functional sensitivity hypothesis is proposed: that self-motion is perceived on the basis of optical information rather than the retinal locus of stimulation, but that central and peripheral vision are differentially sensitive to the information characteristic of each retinal region.
Abstract: Three experiments were performed to examine the role that central and peripheral vision play in the perception of the direction of translational self-motion, or heading, from optical flow. When the focus of radial outflow was in central vision, heading accuracy was slightly higher with central circular displays (10 degrees-25 degrees diameter) than with peripheral annular displays (40 degrees diameter), indicating that central vision is somewhat more sensitive to this information. Performance dropped rapidly as the eccentricity of the focus of outflow increased, indicating that the periphery does not accurately extract radial flow patterns. Together with recent research on vection and postural adjustments, these results contradict the peripheral dominance hypothesis that peripheral vision is specialized for perception of self-motion. We propose a functional sensitivity hypothesis--that self-motion is perceived on the basis of optical information rather than the retinal locus of stimulation, but that central and peripheral vision are differentially sensitive to the information characteristic of each retinal region.
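The computation these experiments probe, recovering heading from the focus of radial outflow, can be sketched directly: for pure observer translation every flow vector points away from the focus of expansion, so the focus can be estimated by least squares. The synthetic flow field below is an illustration, not the authors' stimuli.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_foe(points: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Least-squares focus of expansion of a radial flow field.

    Each flow vector (u, v) at (x, y) should be parallel to the line from
    the focus (x0, y0) to (x, y):  u*(y - y0) - v*(x - x0) = 0,
    which is linear in (x0, y0).
    """
    u, v = flow[:, 0], flow[:, 1]
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([v, -u])
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial outflow from a true focus at (2.0, -1.0), plus noise.
true_foe = np.array([2.0, -1.0])
pts = rng.uniform(-10, 10, size=(200, 2))
flw = 0.1 * (pts - true_foe) + 0.01 * rng.standard_normal((200, 2))
print(estimate_foe(pts, flw))   # close to [ 2. -1. ]
```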

Journal ArticleDOI
TL;DR: It is concluded that perceptual organization is initially based on a principle in which connected regions of uniform stimulation are inferred to be discrete units (the principle of uniform connectedness).

Journal ArticleDOI
TL;DR: This study tests the hypothesis, developed from previous work, that movement learning is based on a sensorimotor representation consisting of integrated information from central processes and sensory feedback derived from previous experience on the movement task.
Abstract: Our previous work (Proteau, Marteniuk, Girouard, & Dugas, 1987) was concerned with determining whether with relatively extensive practice on a movement aiming task, as the skill theoretically starts becoming open-loop, there would be evidence for a decreasing emphasis on visual feedback for motor control. We eliminated vision of the moving limb after moderate and extensive practice and found that the movement became more dependent on this feedback with greater amounts of practice. In the present study, we wished to test the hypothesis, developed from our previous work, that at the base of movement learning is a sensorimotor representation that consists of integrated information from central processes and sensory feedback derived from previous experiences on the movement task. A strong test of this hypothesis would be the prediction that for an aiming task, the addition of vision, after moderate and relatively extensive practice without vision, would lead to an increasingly large movement decrement, relative to appropriate controls. We found good support for this prediction. From these and our previous results, and the idea of the sensorimotor representation underlying learning, we develop the idea that learning is specific to the conditions that prevail during skill acquisition. This has implications for the ideas of the generalized motor program and schema theory.

Book ChapterDOI
TL;DR: This chapter reviews the major experimental evidence that has been used to suggest that learning a motor skill can be equated with either a reduction of the need for sensory information or a decrease in the importance of visual afference in favor of kinesthetic feedback.
Abstract: Hybrid control models propose that motor control is achieved by an interplay between central planning and the processing of afferent information. This chapter reviews the major experimental evidence that has been used to suggest that learning a motor skill can be equated with either a reduction of the need for sensory information or a decrease in the importance of visual afference in favor of kinesthetic feedback. It discusses the role played by visual information in movement control as an individual's expertise at the task increases. Several studies are presented in which the availability of visual information for the control of various types of movement is manipulated. The "ball catching" task is discussed, illustrating that normally available visual information is a major source of afference for movement control. A transfer paradigm is used to assess the effects of different sources of afference on movement learning and control.

Journal ArticleDOI
TL;DR: A preliminary model of eye movement control in scene perception is described and directions for future research are suggested.
Abstract: Research on eye movements and scene perception is reviewed. Following an initial discussion of some basic facts about eye movements and perception, the following topics are discussed: (1) the span of effective vision during scene perception, (2) the role of eye movements in scene perception, (3) integration of information across saccades, (4) scene context, object identification and eye movements, and (5) the control of eye movements. The relationship of eye movements during reading to eye movements during scene perception is considered. A preliminary model of eye movement control in scene perception is described and directions for future research are suggested.


Journal ArticleDOI
15 Oct 1992-Nature
TL;DR: It is concluded that rich internal representations can be activated to support visual imagery even when they cannot support visually mediated perception of objects.
Abstract: VISUAL imagery is the creation of mental representations that share many features with veridical visual percepts. Studies of normal and brain-damaged people reinforce the view that visual imagery and visual perception are mediated by a common neural substrate and activate the same representations [1-4]. Thus, brain-damaged patients with intact vision who have an impairment in perception should have impaired visual imagery. Here we present evidence to the contrary from a patient with severely impaired object recognition (visual object agnosia) but with normal mental imagery. He draws objects in considerable detail from memory and uses information derived from mental images in a variety of tasks. In contrast, he cannot identify visually presented objects, even those he has drawn himself. He has normal visual acuity and intact perception of equally complex material in other domains. We conclude that rich internal representations can be activated to support visual imagery even when they cannot support visually mediated perception of objects.

Journal ArticleDOI
TL;DR: The possibility is put forward that temporal synchronization between neurons may not be generally used in the visual system as a solution to the binding problem, at least when static objects are being processed and recognised in higher parts of the visual system.
Abstract: It has been suggested in studies in the visual system of anaesthetized cats that oscillatory activity with a frequency of 40-60 Hz occurs during the presentation of moving visual stimuli and reflects a synchronization process between neurons that could implement the binding together of related neurons into different sets. We found no evidence for such oscillations in the inferior temporal visual cortex and related areas of awake macaques fixating effective static visual stimuli, which for the neurons analysed were faces. We put forward the possibility that temporal synchronization between neurons to implement binding may not be generally used in the visual system as a solution to the binding problem, at least when static objects are being processed and recognised in higher parts of the visual system.
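Analyses of this kind usually ask whether a spike train carries excess power in the 40-60 Hz band. The sketch below shows one minimal version of such a test on a binned spike train; it is a generic illustration of the idea, not necessarily the analysis the authors used.

```python
import numpy as np

def band_power_ratio(spike_times_s, duration_s, band=(40.0, 60.0), bin_ms=1.0):
    """Fraction of a spike train's spectral power inside a frequency band.

    Bins the spike train, removes the mean, takes the power spectrum of the
    bin counts, and returns band power / total power. A value well above
    the band's share of the spectrum would hint at oscillatory firing.
    """
    n_bins = int(duration_s * 1000 / bin_ms)
    counts, _ = np.histogram(spike_times_s, bins=n_bins, range=(0, duration_s))
    counts = counts - counts.mean()
    power = np.abs(np.fft.rfft(counts)) ** 2
    freqs = np.fft.rfftfreq(n_bins, d=bin_ms / 1000.0)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

# Example: a 50 Hz-modulated Poisson spike train shows elevated band power.
rng = np.random.default_rng(2)
t = np.arange(0.0, 2.0, 0.001)
rate = 30.0 * (1.0 + np.sin(2 * np.pi * 50 * t))      # spikes per second
spikes = t[rng.random(t.size) < rate * 0.001]
print(round(band_power_ratio(spikes, 2.0), 3))
```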

Journal ArticleDOI
TL;DR: This article describes where the active perception paradigm does and does not provide computational benefits along this dimension, and a formalization of the search component of active perception is presented in order to accomplish this.
Abstract: Here, the author attempts to tie the concept of active perception to attentive processing in general and to the complexity-level analysis of visual search described previously; the aspects of active vision as they have been currently described form a subset of the full spectrum of attentional capabilities. Our approach is motivated by the search requirements of vision tasks, and thus we cast the problem as one of search preceding the application of methods for shape-from-X, optical flow, etc., and recognition in general. This perspective permits a dimension of analysis not found in current formulations of the active perception problem, that of computational complexity. This article describes where the active perception paradigm does and does not provide computational benefits along this dimension. A formalization of the search component of active perception is presented in order to accomplish this. The link to attentional mechanisms is through the control of data acquisition and processing by the active process. It should be noted that the analysis performed here applies to the general hypothesize-and-test search strategy, to time-varying scenes, as well as to the general problem of integration of successive fixations. Finally, an argument is presented as to why this framework is an extension of the behaviorist approaches to active vision.
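The "general hypothesize-and-test search strategy" the analysis applies to can be caricatured in a few lines: a cheap salience ranking controls data acquisition, and an expensive verification test is applied serially, one fixation at a time. The names, scoring function and budget below are hypothetical placeholders, not the article's formalization.

```python
from typing import Callable, Iterable, Optional, Tuple

Region = Tuple[int, int]   # e.g. (row, col) of a candidate fixation

def hypothesize_and_test(
    regions: Iterable[Region],
    salience: Callable[[Region], float],
    verify: Callable[[Region], bool],
    max_fixations: int = 20,
) -> Optional[Region]:
    """Rank candidate regions by a cheap salience score (control of data
    acquisition), then apply a costly verification test to one region per
    'fixation' until the target is found or the budget runs out."""
    ranked = sorted(regions, key=salience, reverse=True)
    for region in ranked[:max_fixations]:
        if verify(region):             # expensive test, applied serially
            return region
    return None                        # target not found within the budget

# Toy usage: the 'target' is any grid cell whose coordinates sum to 7.
grid = [(r, c) for r in range(5) for c in range(5)]
print(hypothesize_and_test(grid,
                           salience=lambda rc: -abs(sum(rc) - 7),
                           verify=lambda rc: sum(rc) == 7))
```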

BookDOI
01 Jan 1992
TL;DR: This edited volume collects chapters on cognitive aspects of stimulus control in animals, including spatial memory structure and capacity, problem-solving and memory-coding strategies, spatial search in food-storing birds, and the visual perception and processing of stimuli by pigeons.
Abstract: Contents: J.G. Fetterman, D.A. Stubbs, D. MacEwen, The Perception of the Extended Stimulus. L.R. Dreyfus, Absolute and Relational Control in a Temporal Comparison Task. M.L. Spetch, B. Rusak, Time Present and Time Past. K. Cheng, Three Psychophysical Principles in the Processing of Spatial and Temporal Information. D.M. Wilkie, R.J. Wilson, S.E. MacDonald, Animals' Perception and Memory for Places. D.F. Kendrick, Pigeon's Concept of Experienced and Nonexperienced Real-World Locations: Discrimination and Generalization Across Seasonal Variations. W.A. Roberts, M.T. Phelps, G.B. Schacter, Stimulus Control of Central Place Foraging on the Radial Maze. B.C. Rakitin, N.L. Dallal, W.H. Meck, Spatial Memory Structure and Capacity: Influences on Problem-Solving and Memory-Coding Strategies. D.F. Sherry, Landmarks, the Hippocampus, and Spatial Search in Food-Storing Birds. E.A. Wasserman, R.S. Bhatt, Conceptualization of Natural and Artificial Stimuli by Pigeons. A.A. Wright, The Study of Animal Cognitive Processes. R. Weisman, L. Ratcliffe, The Perception of Pitch Constancy in Bird Songs. D.S. Blough, Features of Forms in Pigeon Perception. R.G. Cook, The Visual Perception and Processing of Textures by Pigeons. W.K. Honig, Emergent Properties of Complex Arrays. J.J. Neiworth, Cognitive Aspects of Movement Estimations: A Test of Imagery in Animals. M. Rilling, An Ecological Approach to Stimulus Control and Tracking. S.T. Boysen, Counting as the Chimpanzee Views It. E.J. Capaldi, Levels of Organized Behavior in Rats. H. Davis, Logical Transitivity in Animals.


Journal ArticleDOI
TL;DR: Animals with uncinate fascicle section showed no impairment in learning to choose between visual stimuli based on their differential association with food reward or other non-visual cues, but were unable to learn to choose between visual stimuli based on their differential association with another visual stimulus.
Abstract: We report a series of six experiments in which we examined the behavioural effects of disconnecting the inferior temporal cortex from the prefrontal cortex in cynomolgus monkeys by sectioning the direct cortico-cortical pathway between them, the uncinate fascicle. In experiment 1, monkeys with bilateral section of the uncinate fascicle showed a marked deficit in learning visuomotor conditional problems. Experiments 2 and 3 demonstrated that this deficit was not the result of a mild motor impairment, nor of a visual discrimination impairment. However, experiment 4 showed that the impairment extended to visual - visual conditional learning. In contrast, following bilateral section of the uncinate fascicle monkeys were unimpaired at two other tasks of visual associative learning: a reward - visual associative task (experiment 5), in which the presence or absence of a food reward served as a cue to the correct choice between two visual stimuli, and a time - visual associative task (experiment 6), in which the cue to the correct choice was the length of the intertrial interval. Thus, animals with uncinate fascicle section showed no impairment in learning to choose between visual stimuli based on their differential association with food reward or other non-visual cues, but were unable to learn to choose between visual stimuli based on their differential association with another visual stimulus. They were equally unable to choose between two motor responses on the basis of the visual cue.

Journal ArticleDOI
TL;DR: The experiments with machine vision raise questions about the part played by perceptual context for object recognition in natural vision, and the neural mechanisms which might serve such a role.
Abstract: Recent work on the visual interpretation of traffic scenes is described which relies heavily on a priori knowledge of the scene and the position of the camera, and expectations about the shapes of vehicles and their likely movements in the scene. Knowledge is represented in the computer as explicit three-dimensional geometrical models, dynamic filters, and descriptions of behaviour. Model-based vision, based on reasoning with analogue models, avoids many of the classical problems in visual perception: recognition is robust against changes in image shape, size, colour and illumination. The three-dimensional understanding of the scene which results also deals naturally with occlusion, and allows the behaviour of vehicles to be interpreted. The experiments with machine vision raise questions about the part played by perceptual context in object recognition in natural vision, and the neural mechanisms which might serve such a role.
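The abstract's "dynamic filters" are commonly realised as a predict/update recursion on each vehicle's state; a generic one-dimensional constant-velocity Kalman filter is sketched below as an illustration of that ingredient, on the assumption that this is the kind of filter meant. It is not the system described in the paper.

```python
import numpy as np

def track_constant_velocity(measurements, dt=1.0, meas_var=4.0, accel_var=0.1):
    """1-D constant-velocity Kalman filter over noisy position measurements.

    State x = [position, velocity]. Each step predicts with the
    constant-velocity model and then updates with the new measurement;
    returns the filtered positions.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
    H = np.array([[1.0, 0.0]])                         # we observe position
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])     # process noise
    R = np.array([[meas_var]])                         # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2) * 10.0
    filtered = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)          # update
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0, 0]))
    return filtered

# Noisy observations of a vehicle moving 2 units per frame.
rng = np.random.default_rng(3)
true_pos = np.arange(0.0, 40.0, 2.0)
print(track_constant_velocity(true_pos + rng.normal(0, 2, true_pos.size))[-1])
```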

Journal ArticleDOI
TL;DR: It was demonstrated that infants' failure to detect the changes in pitch-color/shape relations could not be attributed to an inability to discriminate the pitch or the color/shape changes used in Experiment 1, and infants showed robust discrimination of the contrasts used.

Journal ArticleDOI
TL;DR: In this article, the authors examined how visual information in an ad may interact with and influence the processing of verbal information, facilitating or inhibiting self-referent judgments. The verbal focus of an ad encouraged varying levels of self-referencing and differential attitudes and intentions when a product visual was featured, but not when a slice-of-life setting was featured.

Journal ArticleDOI
TL;DR: In this paper, a processing account is outlined in which stimulus quality affects the orthographic input lexicon, whereas context influences both the orthographic input lexicon and the semantic system.
Abstract: It is well known that visual word recognition is influenced by context, word frequency, and stimulus quality. A processing account is outlined in which stimulus quality affects the orthographic input lexicon, whereas context influences both the orthographic input lexicon and the semantic system. Word frequency exerts its primary effects on the pathways that link lexical systems with each other and with the semantic system. Previous findings that are problematic for alternative models, along with the results of two new experiments, are consistent with this account.
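The loci-of-effects claim can be caricatured as a toy feed-forward computation: stimulus quality feeds only the orthographic input lexicon, context boosts both the lexicon and the semantic system, and word frequency weights the pathways linking them. The function, weights and numbers below are invented for illustration and are not the authors' model.

```python
def recognition_strength(stimulus_quality: float,
                         context_support: float,
                         word_frequency: float) -> float:
    """Toy rendering of the account's loci of effects (inputs in [0, 1]).

    stimulus_quality -> input to the orthographic input lexicon only
    context_support  -> boosts both the lexicon and the semantic system
    word_frequency   -> weights the pathways linking lexicon and semantics
    Higher output stands in for faster, more reliable recognition.
    """
    lexical = stimulus_quality + 0.5 * context_support   # input lexicon
    semantic = context_support                           # semantic system
    pathway_weight = 0.5 + 0.5 * word_frequency          # frequency locus
    return pathway_weight * (lexical + semantic)

# Degrading the stimulus hurts recognition, but strong context compensates,
# and the effect of frequency is carried on the connecting pathways.
print(recognition_strength(1.0, 0.2, 0.8))   # clear word, weak context
print(recognition_strength(0.4, 0.9, 0.8))   # degraded word, strong context
```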