
Showing papers on "Visual perception published in 1984"


Journal ArticleDOI
01 Sep 1984-Brain
TL;DR: A quantitative investigation of the visual identification and auditory comprehension deficits of 4 patients who had made a partial recovery from herpes simplex encephalitis finds category specificity in the organization of meaning systems that are also modality specific semantic systems.
Abstract: We report a quantitative investigation of the visual identification and auditory comprehension deficits of 4 patients who had made a partial recovery from herpes simplex encephalitis. Clinical observations had suggested the selective impairment and selective preservation of certain categories of visual stimuli. In all 4 patients a significant discrepancy between their ability to identify inanimate objects and inability to identify living things and foods was demonstrated. In 2 patients it was possible to compare visual and verbal modalities and the same pattern of dissociation was observed in both. For 1 patient, comprehension of abstract words was significantly superior to comprehension of concrete words. Consistency of responses was recorded within a modality in contrast to a much lesser degree of consistency between modalities. We interpret our findings in terms of category specificity in the organization of meaning systems that are also modality specific semantic systems.

1,911 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of temporal discontinuity on visual search was assessed by presenting a display in which one item had an abrupt onset, while other items were introduced by gradually removing line segments that camouflaged them.
Abstract: The effect of temporal discontinuity on visual search was assessed by presenting a display in which one item had an abrupt onset, while other items were introduced by gradually removing line segments that camouflaged them. We hypothesized that an abrupt onset in a visual display would capture visual attention, giving this item a processing advantage over items lacking an abrupt leading edge. This prediction was confirmed in Experiment 1. We designed a second experiment to ensure that this finding was due to attentional factors rather than to sensory or perceptual ones. Experiment 3 replicated Experiment 1 and demonstrated that the procedure used to avoid abrupt onset--camouflage removal--did not require a gradual waveform. Implications of these findings for theories of attention are discussed.

1,378 citations


Journal ArticleDOI
TL;DR: These findings suggested that spatially focused attention involves a gating or modulation of evoked neural activity in the visual pathways, whereas color selection is manifested by an endogenous ERP complex.
Abstract: Event-related brain potentials (ERPs) were recorded from subjects as they attended to colored bars that were flashed in random order to the left or right of fixation. The task was to detect slightly smaller target bars having a specified color (red or blue) and location (left or right). The ERP elicited by stimuli at an attended location contained a sequence of phasic components (P122/N168/N264) that was highly distinct from the sequence associated with selection on the basis of color (N150-350/P199/P400-500). These findings suggested that spatially focused attention involves a gating or modulation of evoked neural activity in the visual pathways, whereas color selection is manifested by an endogenous ERP complex. When the stimulus locations were widely separated, the ERP signs of color selection were hierarchically dependent upon the prior selection for spatial location. In contrast, when the stimulus locations were adjacent to one another, the ERP signs of color selection predominated over those of location selection. These results are viewed as supporting “early selection” theories of attention that specify the rejection of irrelevant inputs prior to the completion of perceptual processing. The implications of ERP data for theories of multidimensional stimulus processing are considered.

613 citations


Journal ArticleDOI
TL;DR: An attempt to interpret the patterns of deficits and preserved abilities reported in cases of loss of mental imagery following brain damage, in terms of a componential information-processing model of imagery, found a consistent pattern of deficit in a subset of patients.

469 citations


Journal ArticleDOI
Bela Julesz
TL;DR: In the preattentive mode no complex forms are processed, and yet in parallel, without effort or scrutiny, differences in a few local conspicuous features (called textons) are detected over the entire visual field.

329 citations


Journal ArticleDOI
TL;DR: In this article, infants were shown two side-by-side visual images of a talker articulating, in synchrony, two different vowel sounds, while a sound matching one of the two vowels was auditorially presented.
Abstract: Infants' abilities to detect auditory-visual correspondences for speech were tested in two experiments. Infants were shown two visual images side-by-side of a talker articulating, in synchrony, two different vowel sounds, while a sound matching one of the two vowels was auditorially presented. Infants' visual fixations to the two faces were video-recorded and scored by an independent observer who could neither see the faces nor hear the sounds. The results of Experiment 1 showed that the auditory stimulus systematically influenced infants' visual fixations. Infants looked longer at the face that matched the sound. In Experiment 2, the same visual stimuli were presented, but the auditory stimuli were altered so as to remove the spectral information contained in the vowels while preserving their temporal characteristics. Performance fell to chance. Taken together, the experiments suggest that infants recognize the correspondences between speech information presented auditorially and visually, and moreover, that this correspondence is based on the spectral information contained in the speech sounds. This suggests that infants represent speech information intermodally.

279 citations



Journal ArticleDOI
TL;DR: The results of a cognitive experimental analysis of the reading functions of four developmentally dyslexic subjects are presented in this paper, where data are considered on an individual basis in an attempt to specify the primary source of disturbance in each case, and the manner in which the normal course of reading development has been distorted.
Abstract: The results of a cognitive experimental analysis of the reading functions of four developmentally dyslexic subjects are presented. Data are considered on an individual basis in an attempt to specify the primary source of disturbance in each case, and the manner in which the normal course of reading development has been distorted. Subject RO showed an effect on the analytic functioning of the visual processor, which affected speed of non-word reading, but appeared not to have prevented the construction of a normal visual word recognition system. In subject LT a phonological impairment was implicated, and this was associated with a pattern of performance similar to that of adult phonological dyslexics. Subjects GS and SE gave evidence of serial letter-by-letter reading (impaired wholistic function of the visual processor), which produced a pattern not unlike that of adult surface dyslexics (effects of spelling regularity, regularisation errors). The manner in which these impairments might disrupt t...

218 citations


Journal ArticleDOI
TL;DR: This paper presents a meta-anatomy of human perception called ‘The Foundations of Affective Neuroscience’, a probabilistic model that describes the ‘building blocks of knowledge’ of human interaction with the world around us.
Abstract: [No abstract text is given for this entry; the record contains only author names and affiliations: Risto Näätänen, John Polich, Bernard ..., with addresses at the Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York; Department of Psychiatry, Veterans Administration Hospital, Palo Alto, California; Institute for Perception, Soesterberg, the Netherlands; Department of Psychology, University of North Carolina, Greensboro; Department of Neurosciences, University of California, San Diego; Department of Psychology, University of Helsinki, Finland; Department of Neurology, University of California, Irvine; Hospital de la Salpetriere, Paris, France; and the Nebraska Psychiatric Institute, Omaha, Nebraska.]

155 citations


Journal ArticleDOI
TL;DR: Tests of quantitative models indicated that both preschool children and adults had available continuous and independent sources of information and the only developmental difference was less of an influence of the visual source of information for children relative to adults.
Abstract: Preschool children's evaluation and integration of visual and auditory information in speech perception was compared with that of adults. Subjects identified speech events, which consisted of synthetic speech syllables ranging from /ba/ to /da/ combined with a videotaped /ba/, /da/, or no articulation. Both variables influenced the identification judgments for both groups of subjects. The results were used to test current views of the development of perceptual categorization and speech perception. Tests of quantitative models indicated that both preschool children and adults had available continuous and independent sources of information. The results were well described by a fuzzy logical model of perception, which assumes that the perceiver integrates continuous and independent sources of information and determines the relative goodness of match to prototype definitions in memory. The only developmental difference was less of an influence of the visual source of information for children relative to adults. One explanation is that the children simply attended less to the visual source. A second experiment eliminated the attentional explanation by showing identical results when the children were also required to indicate whether or not the speaker's mouth was moving.
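As a rough, illustrative sketch of the two-alternative form of a fuzzy logical model of perception (the function and variable names below are hypothetical, not the authors'), the auditory and visual support values for /da/ are multiplied and then normalized across the /ba/ and /da/ alternatives:

```python
def flmp_prob_da(a_support, v_support):
    """Probability of a /da/ response given continuous auditory and visual
    support for /da/ (each strictly between 0 and 1): multiply the two
    independent sources, then normalize over the two response alternatives."""
    da = a_support * v_support                  # combined support for /da/
    ba = (1.0 - a_support) * (1.0 - v_support)  # combined support for /ba/
    return da / (da + ba)

# Example: an ambiguous auditory token (0.6) paired with a clear visual /ba/ (0.1)
print(flmp_prob_da(0.6, 0.1))  # the visual source pulls the response toward /ba/
```

On this account, a smaller influence of the visual source for children would correspond to less extreme visual support values, not to a different integration rule.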

146 citations


Journal ArticleDOI
TL;DR: Severity of language impairment appeared to be a major factor differentiating the 2 groups: those who failed to show evidence of visual self-recognition were more likely than those who did show evidence of visual recognition to be mute or lacking in communicative speech.
Abstract: Employing a mirror procedure, 52 autistic children (CA = 3-7 to 12-8, mean = 7-7) were tested for visual self-recognition. Substantial behavioral and psychometric data were collected from school records, teacher interviews, and classroom observations. Of the 52 children, 36 (69%) showed evidence of mirror self-recognition, while 16 (31%) failed to give clear indications of recognizing their mirror images. The 2 groups did not differ on CA. Severity of language impairment appeared to be a major factor differentiating the 2 groups: those who failed to show evidence of visual self-recognition were more likely than those who did show evidence of visual recognition to be mute or lacking in communicative speech (p less than .001). Other indices of impairment indicated that the children who showed the capacity for visual self-recognition had higher levels of functioning. The results are discussed in terms of an organizational perspective. This perspective argues that the study of atypical populations may elucidate the process of development by describing the coordination or sequential organization of different behavioral systems.

Journal ArticleDOI
TL;DR: It is suggested on the basis of these results and other studies that these neurons are involved in pattern-specific habituation to repeated visual stimuli, and in attention and orientation to a changed visual stimulus pattern.

OtherDOI
TL;DR: The sections in this article are: Development of Visual Perception, Functional Role of Visual Experience, Genetic and Experiential Factors in Visual Development, and Effects of Unusual Visual Input on Cortical Development.
Abstract: The sections in this article are:
1 Development of Visual Perception (1.1 Methods of Study; 1.2 Development of Spatial Resolution; 1.3 Development of Depth Perception and Stereopsis; 1.4 Overview)
2 Development of Visual Neural Processes (2.1 Retina; 2.2 Lateral Geniculate Nucleus; 2.3 Visual Cortex)
3 Consequences of Binocular Visual Deprivation (3.1 Forms of Binocular Deprivation; 3.2 Effects on Perception)
4 Effects of Selected Visual Experience on Neural Processes and Perception
5 Conditions that Influence Ocular Dominance (5.1 Monocular Deprivation; 5.2 Artificial Strabismus; 5.3 Alternating Monocular Deprivation)
6 Conditions that Influence Other Receptive-Field Properties (6.1 Orientation Selectivity; 6.2 Movement and Direction Selectivity; 6.3 Effects of Unusual Visual Input on Cortical Development)
7 Genetic and Experiential Factors in Visual Development (7.1 Functional Role of Visual Experience)

Journal ArticleDOI
TL;DR: Analysis of recent evidence indicates that rats and hamsters with collicular damage usually make no detectable response of any kind in tests of neglect, and that in some situations they do not respond to visual stimuli that produce a variety of behaviours in normal animals.

Book
01 Apr 1984
TL;DR: Combined with the traditional approaches of psychology and neurophysiology, this computational approach provides an exciting analysis of visual function, raising many new questions about the human vision system for further investigation.
Abstract: From the Publisher: Computer scientists designing machine vision systems, psychologists working in visual perception, visual neurophysiologists, and theoretical biologists will derive a deeper understanding of visual function - in particular the computations that the human visual system uses to analyze motion - from the important research reported in this book. The organization of movement in the changing image that reaches the eye provides our visual system with a valuable source of information for analyzing the structure of our surroundings. This book examines the measurement of this movement and the use of relative movement to locate the boundaries of physical objects in the environment. It investigates the nature of the computations that are necessary to perform this analysis by any vision system, biological or artificial. The author first defines the goals of these visual tasks, reveals the properties of the physical world that a vision system can rely upon to achieve such goals, and suggests general methods that can be used to carry out the tasks. From the general methods, she designs algorithms specifying a particular sequence of computations that a vision system can execute to perform these visual tasks. These algorithms are implemented on a computer system under a variety of circumstances. Combined with the traditional approaches of psychology and neurophysiology, this computational approach provides an exciting analysis of visual function, raising many new questions about the human vision system for further investigation. Ellen Catherine Hildreth received her doctorate from MIT. She is a Research Scientist in the MIT Artificial Intelligence Laboratory and associate director of the Center for Biological Information Processing at the Whitaker College of Health Sciences, Technology, and Management. The Measurement of Visual Motion is an ACM Distinguished Dissertation.
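Purely as a generic illustration of the kind of computation the book analyzes (this is not Hildreth's algorithm, and the function name is invented), a gradient-based sketch of local motion measurement shows why purely local measurements constrain only the flow component along the brightness gradient, the "aperture problem" that motivates combining measurements along contours:

```python
import numpy as np

def normal_flow(frame1, frame2):
    """Gradient-based sketch of local motion measurement. The brightness-
    constancy constraint Ix*u + Iy*v + It = 0 determines only the component
    of image velocity along the local gradient, so this 'normal flow' is all
    a purely local measurement can deliver."""
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    Iy, Ix = np.gradient(f1)          # axis 0 = rows (y), axis 1 = columns (x)
    It = f2 - f1                      # temporal derivative between the two frames
    grad_sq = Ix**2 + Iy**2 + 1e-8    # avoid division by zero in blank regions
    u = -It * Ix / grad_sq            # flow component along the gradient (x part)
    v = -It * Iy / grad_sq            # flow component along the gradient (y part)
    return u, v
```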



Journal ArticleDOI
TL;DR: The use of superimposed and lateral conditions revealed antagonistic contributions to the VEP, possibly reflecting direct-through excitatory and lateral inhibitory pathways, respectively.
Abstract: Nonlinear interactions in the human visual system were studied using visual evoked potentials (VEPs). In one experiment (superimposed condition), all segments of a dartboard pattern were contrast reversed in time by a sum of two sinusoidal signals. In a second experiment (lateral condition), segments in some regions of the dartboard pattern were contrast reversed by a single sinusoid of one frequency, while segments in other (contiguous) regions of the pattern were contrast reversed by a single sinusoid of another frequency. An identical set of ten frequency pairs was used in each experiment. The frequency pairs were chosen such that the difference between frequencies in each pair was 2 Hz. Amplitudes and phases of the sum and difference frequency components of the VEP (intermodulation terms) were retrieved by Fourier analysis and served as measures of nonlinear interactions. The use of input pairs with a fixed separation in frequency enabled the estimation of the temporal characteristics of the visual pathways prior to a second linear stage. The use of superimposed and lateral conditions revealed antagonistic contributions to the VEP, possibly reflecting direct-through excitatory and lateral inhibitory pathways, respectively.
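A minimal sketch of the Fourier step described above (assuming the record length is an integer number of cycles of both input frequencies; the function names are illustrative, not from the paper): estimate the amplitudes of the sum and difference frequency components that index nonlinear interaction.

```python
import numpy as np

def intermodulation_amplitudes(response, sample_rate, f1, f2):
    """Amplitude of the sum (f1 + f2) and difference (|f1 - f2|) frequency
    components of a steady-state response driven by two sinusoidal inputs."""
    n = len(response)
    spectrum = np.abs(np.fft.rfft(response)) * 2.0 / n   # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    def amp_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    return amp_at(f1 + f2), amp_at(abs(f1 - f2))

# A squaring nonlinearity applied to 6 Hz and 8 Hz inputs puts energy at 14 Hz and 2 Hz.
t = np.arange(0, 1.0, 1.0 / 512)
r = (np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 8 * t)) ** 2
print(intermodulation_amplitudes(r, 512, 6, 8))  # roughly (1.0, 1.0)
```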

Journal ArticleDOI
TL;DR: The most striking finding is that only a small number of cues and mechanisms are involved, and there are thus considerable inhomogeneities in spatial perception, even under focused attention and foveal viewing.

Journal ArticleDOI
TL;DR: The results suggest that newborns will reliably give novelty preferences when an infant-controlled habituation procedure is used, and give support to the model of habituation which assumes it to be an exponentially decreasing process.
Abstract: Four experiments are described in which the newborn's ability to habituate to a visual stimulus and subsequently to display novelty/familiarity preferences was explored. The same two types of stimuli, simple geometric shapes and complex colored patterns, were used throughout. The results suggest that newborns will reliably give novelty preferences when an infant-controlled habituation procedure is used. However, no reliable preferences emerged following either a brief exposure to a stimulus, or when novel and familiar stimuli were presented paired together over several trials. In experiment 4 different, novel stimuli were presented on successive infant-controlled trials and the decline in trial length observed during habituation trials was not found. Although this is further evidence that habituation to a repeated visual stimulus does occur in the newborn, half of the subjects in experiment 4 would have met the infant-controlled criterion of habituation: these results are discussed in terms of artifacts that can affect habituation. While there is considerable intra- and intersubject variability in trial duration, and in other dependent measures, the results give support to the model of habituation which assumes it to be an exponentially decreasing process.
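The exponentially decreasing process referred to above can be written, purely as an illustrative sketch (the parameter names are not from the paper), as looking time falling off geometrically across habituation trials:

```python
import math

def expected_looking_time(trial, initial_time, decay_rate):
    """Illustrative exponential model of habituation: expected looking time
    on trial n (1-indexed) decays from an initial level at a constant rate."""
    return initial_time * math.exp(-decay_rate * (trial - 1))

# e.g., starting at 40 s of looking and losing roughly 30% per trial
print([round(expected_looking_time(n, 40.0, 0.35), 1) for n in range(1, 6)])
```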

01 Jan 1984
TL;DR: This study addresses the question of how simple networks can account for a variety of phenomena associated with the shift of a specialized processing focus across the visual scene and suggests possible implementations in neuronal hardware, including a possible role for the extensive back-projection from the cortex to the LGN.
Abstract: This study addresses the question of how simple networks can account for a variety of phenomena associated with the shift of a specialized processing focus across the visual scene. We address in particular aspects of the dichotomy between the preattentive-parallel and the attentive-serial modes of visual perception and their hypothetical neuronal implementations. Specifically, we propose the following: (1) A number of elementary features, such as color, orientation, direction of movement, disparity etc. are represented in parallel in different topographical maps, called the early representation. (2) There exists a selective mapping from this early representation into a more central representation, such that at any instant the central representation contains the properties of only a single location in the visual scene, the selected location. (3) We discuss some selection rules that determine which location will be mapped into the central representation. The major rule, using the saliency or conspicuity of locations in the early representation, is implemented using a so-called Winner-Take-All network. A hierarchical pyramid-like architecture is proposed for this network. We suggest possible implementations in neuronal hardware, including a possible role for the extensive back-projection from the cortex to the LGN.
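A minimal sketch of the selection scheme just described (not the authors' implementation; the map names and the simple summation rule for conspicuity are illustrative): several topographic feature maps are combined into a saliency map, a winner-take-all step picks the most conspicuous location, and only that location's feature values are routed into the central representation.

```python
import numpy as np

def select_location(feature_maps):
    """Combine early feature maps into a saliency map, pick the winning
    location, and copy its feature values into a central representation."""
    saliency = sum(np.abs(m) for m in feature_maps.values())          # conspicuity per location
    winner = np.unravel_index(np.argmax(saliency), saliency.shape)    # winner-take-all
    central = {name: m[winner] for name, m in feature_maps.items()}   # one location's properties
    return winner, central

# Example with two random 8x8 maps standing in for color and orientation
maps = {"color": np.random.rand(8, 8), "orientation": np.random.rand(8, 8)}
location, central = select_location(maps)
```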

Journal ArticleDOI
TL;DR: An algorithm of sequential steps that can be used to assess visual function is reviewed and the pathophysiology of retinal, anterior visual pathways and retrochiasmal pathways can be objectively evaluated by VEPs.
Abstract: Visual evoked potentials (VEPs) can be used in a multitude of ways to assess the various levels of visual processing. The human visual system consists of multiple, parallel channels which process different information, and each channel constitutes a set of sequential processes. An algorithm of sequential steps that can be used to assess visual function is reviewed. The pathophysiology of retinal, anterior visual pathways and retrochiasmal pathways can be objectively evaluated by VEPs.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the rules by which the component features of faces are combined when presented in the left or right visual field, and examine the validity of the analytic-holistic processing dichotomy, using concepts elaborated by Garner (1978, 1981) to specify stimulus properties and models of similarity relations as performance criteria.
Abstract: This study investigates the rules by which the component features of faces are combined when presented in the left or the right visual field, and it examines the validity of the analytic-holistic processing dichotomy, using concepts elaborated by Garner (1978, 1981) to specify stimulus properties and models of similarity relations as performance criteria. Latency measures of dissimilarity, obtained for the two visual fields, among a set of eight faces varying on three dimensions of two levels each, were fitted to the dominance metric model, the feature-matching model, the city-block distance metric model, and the Euclidean distance metric model. In addition to a right-visual-field superiority in different responses, a maximum likelihood estimation procedure showed that, for each subject and each visual field, the Euclidean model provided the best fit of the data, suggesting that the faces were compared in terms of their overall similarity. Moreover, the spatial representations of the results revealed interactions among the component facial features in the processing of faces. Taken together, these two findings indicate that faces initially projected to the right or to the left hemisphere were not processed analytically but in terms of their gestalt. Human information-processing capacities are the product of a highly adaptive and versatile nervous system that provides individuals with a large number of alternative means for achieving successful performance on any particular task. This versatility is partly attributed to the functional specialization of the two cerebral hemispheres whereby specific skills are alleged to be unilaterally represented, thus doubling the brain processing capacity while avoiding potential conflicts that would result from promiscuity. This specialization was initially characterized in terms of information that each hemisphere was better equipped to operate on (e.g., Milner, 1971). However, the diversity and heterogeneity of the type of information that each hemisphere could be shown to process, initially in experiments with commissurotomized patients, prompted researchers to inquire about the processes un
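For concreteness, a small sketch of the two distance models at the heart of the contrast above (the 0/1 coding of the three binary face dimensions is illustrative, not the authors' stimulus coding): the city-block metric treats the dimensions as separable, while the Euclidean metric treats them as integrated into overall similarity.

```python
import numpy as np

# Eight faces varying on three two-level dimensions, coded 0/1 (illustrative coding)
faces = np.array([[(i >> 2) & 1, (i >> 1) & 1, i & 1] for i in range(8)], dtype=float)

def city_block(x, y):
    return float(np.sum(np.abs(x - y)))          # dimensions contribute independently

def euclidean(x, y):
    return float(np.sqrt(np.sum((x - y) ** 2)))  # overall distance in a similarity space

# Predicted dissimilarities for every face pair; in the study such predictions were
# fitted (by maximum likelihood) to the "different"-response latencies in each visual field.
pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]
predictions = {"city_block": [city_block(faces[i], faces[j]) for i, j in pairs],
               "euclidean": [euclidean(faces[i], faces[j]) for i, j in pairs]}
```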

Journal ArticleDOI
TL;DR: Cross-modal influences on perceptual organization were demonstrated using a display that combined a stimulus for auditory stream segregation with its visual apparent movement analogue; segregation occurred at larger SOAs when the nontarget stimulation indicated two objects than when it represented one.
Abstract: Cross-modal influences on perceptual organization were demonstrated using a display that combined a stimulus for auditory stream segregation with its visual apparent movement analogue. Both phenomena give rise to the perception of either one or two objects, depending on the rate of presentation of the stimuli. At slower rates, one object is perceived, while two are perceived at faster rates. Subjects indicated the stimulus onset asynchrony (SOA) between successive stimuli at which the perceptual shift occurred in each modality. Then visual and auditory stimuli were presented concurrently and subjects responded to the “target” modality sequence. Two intergroup separations for the nontarget stimuli were used. Distances were chosen, based on the subject’s calibration data, which represented one and two objects, respectively, at the stream segregation point for the target sequence. Segregation occurred at larger SOAs when the nontarget stimulation indicated two objects than when it represented one. This was true for both visual and auditory target sequences.


Journal ArticleDOI
TL;DR: Observers at all grade levels were able to reliably judge relative weight in both collisions and lifting events, and could differentiate between natural and anomalous collisions.
Abstract: The present study examined whether younger observers (kindergartners, second graders, and fourth graders) could extract relative weight information from collisions and also lifting events, and if they could judge whether collisions were natural (i.e., momentum conserving) or anomalous (non-momentum conserving). 20 children at each age and 20 adults viewed videotapes of 8 collisions (4 natural, 4 anomalous) and 6 sequences of lifting events. Observers also viewed sequences of static images taken from these events. Observers at all grade levels were able to reliably judge relative weight in both collisions and lifting events, and could differentiate between natural and anomalous collisions. Performance was much poorer when static sequences of the events were viewed, especially for the young children. A consistent age trend was noted across tasks: adults performed better than second and fourth graders who, in turn, performed better than kindergartners. In addition, there was evidence that younger children were differentially aided when the kinematics of the event made the kinetics more pronounced.
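For reference, the natural versus anomalous distinction above reduces to whether linear momentum is conserved across the collision; a one-dimensional check looks like this (variable names are illustrative):

```python
def momentum_conserved(m1, v1_before, m2, v2_before, v1_after, v2_after, tol=1e-6):
    """A collision is 'natural' in the sense used above if total momentum is
    (approximately) the same before and after impact."""
    before = m1 * v1_before + m2 * v2_before
    after = m1 * v1_after + m2 * v2_after
    return abs(before - after) <= tol * max(abs(before), 1.0)
```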

Journal ArticleDOI
TL;DR: It is argued that this perception, in which a rigid two-dimensional figure rotating in the frontal plane is perceived as a distorting three-dimensional shape, results from the stimulation of automatic processes for perceiving size change, and that these processes are not subject to a general rigidity assumption.
Abstract: It has been proposed that the human visual system prefers perceptions of objects that are rigid or undergo minimum form change. A counterexample is presented in which a rigid two-dimensional figure rotating in the frontal plane is perceived as a distorting three-dimensional shape. It is argued that this perception results from the stimulation of automatic processes for perceiving size change, and that these processes are not subject to a general rigidity assumption.

Journal ArticleDOI
TL;DR: A patient who suffered traumatic hematomas of both occipitotemporal regions, but who had normal visual acuity, language, and cognitive functions, could not recognize faces of family members, celebrities, or recent acquaintances (prosopagnosia), which is suggested to be part of a more general inability to distinguish among objects within a visual semantic class.
Abstract: A patient who suffered traumatic hematomas of both occipitotemporal regions, but who had normal visual acuity, language, and cognitive functions, could not recognize faces of family members, celebrities, or recent acquaintances (prosopagnosia). He could distinguish same from different faces when they were presented simultaneously, but could not recognize faces that had been presented to him 90 seconds earlier. He could read and name objects correctly, but could not recognize any previously viewed object if it was reexamined later with other objects of the same semantic class. He had no difficulty copying complex figures, but synthesized incomplete visual information poorly and pursued an abnormal visual search strategy. We suggest that prosopagnosia is part of a more general inability to distinguish among objects within a visual semantic class. It results from impaired visual memory and perception caused by visual association cortex damage and interruption of the inferior longitudinal fasciculus connecting visual association cortex and temporal lobe.

Journal ArticleDOI
01 Mar 1984-Cortex
TL;DR: This report describes a case of associative visual agnosia caused by a left sided, cortico-subcortical, inferior temporo-occipital infarction, which was interpreted as a visuo-verbal disconnection plus a categorization deficit for visual meaningful common stimuli.

Journal ArticleDOI
TL;DR: The result suggests that the infants' perception of the objects' distances was more veridical in the binocular condition than in the monocular condition, which indicated sensitivity to monocular depth information.