
Showing papers on "Visual perception published in 1991"


01 Jan 1991
TL;DR: Early vision, as discussed by the authors, is concerned with measuring the amounts of various kinds of visual "substances" present in the image (e.g., redness or rightward motion energy) rather than with labeling "things".
Abstract: What are the elements of early vision? This question might be taken to mean, What are the fundamental atoms of vision?—and might be variously answered in terms of such candidate structures as edges, peaks, corners, and so on. In this chapter we adopt a rather different point of view and ask the question, What are the fundamental substances of vision? This distinction is important because we wish to focus on the first steps in extraction of visual information. At this level it is premature to talk about discrete objects, even such simple ones as edges and corners. There is general agreement that early vision involves measurements of a number of basic image properties including orientation, color, motion, and so on. Figure 1.1 shows a caricature (in the style of Neisser, 1976) of the sort of architecture that has become quite popular as a model for both human and machine vision. The first stage of processing involves a set of parallel pathways, each devoted to one particular visual property. We propose that the measurements of these basic properties be considered as the elements of early vision. We think of early vision as measuring the amounts of various kinds of visual "substances" present in the image (e.g., redness or rightward motion energy). In other words, we are interested in how early vision measures "stuff" rather than in how it labels "things." What, then, are these elementary visual substances? Various lists have been compiled using a mixture of intuition and experiment. Electrophysiologists have described neurons in striate cortex that are selectively sensitive to certain visual properties; for reviews, see Hubel (1988) and DeValois and DeValois (1988). Psychophysicists have inferred the existence of channels that are tuned for certain visual properties; for reviews, see Graham (1989), Olzak and Thomas (1986), Pokorny and Smith (1986), and Watson (1986). Researchers in perception have found aspects of visual stimuli that are processed pre-attentively (Beck, 1966; Bergen & Julesz, 1983; Julesz & Bergen,
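The "substances" view sketched in this abstract, measuring amounts of oriented or chromatic "stuff" via parallel channels, can be illustrated with a minimal sketch: phase-invariant oriented energy from a quadrature pair of Gabor filters. All filter parameters below are arbitrary illustrative choices, not the chapter's model.

```python
import numpy as np

def gabor_pair(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Even- and odd-phase Gabor filters tuned to orientation theta."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)               # x varies along columns
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (envelope * np.cos(2 * np.pi * xr / wavelength),
            envelope * np.sin(2 * np.pi * xr / wavelength))

def conv2_same(image, kernel):
    """'Same'-sized 2-D convolution via FFT, pure NumPy."""
    s = (image.shape[0] + kernel.shape[0] - 1,
         image.shape[1] + kernel.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(image, s) * np.fft.rfft2(kernel, s), s)
    r0, r1 = (kernel.shape[0] - 1) // 2, (kernel.shape[1] - 1) // 2
    return full[r0:r0 + image.shape[0], r1:r1 + image.shape[1]]

def oriented_energy(image, theta):
    """Amount of oriented 'stuff' at each location: the phase-invariant
    sum of squared even- and odd-filter responses."""
    even, odd = gabor_pair(theta=theta)
    return conv2_same(image, even)**2 + conv2_same(image, odd)**2
```

A vertical grating yields far more energy in the vertically tuned channel than in the horizontally tuned one, which is the sense in which each parallel pathway measures how much of its substance is present in the image.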

1,576 citations


Journal ArticleDOI
10 Jan 1991-Nature
TL;DR: The quantitative analyses demonstrate strikingly accurate guidance of hand and finger movements directed at the very objects whose qualities she fails to perceive and suggest that the neural substrates for the visual perception of object qualities such as shape, orientation and size are distinct from those underlying the use of those qualities in the control of manual skills.
Abstract: Studies of the visual capacity of neurological patients have provided evidence for a dissociation between the perceptual report of a visual stimulus and the ability to direct spatially accurate movements toward that stimulus. Some patients with damage to the parietal lobe, for example, are unable to reach accurately towards visual targets that they unequivocally report seeing. Conversely, some patients with extensive damage to primary visual cortex can make accurate pointing movements or saccades toward a stimulus presented in their 'blind' scotoma. But in investigations of visuomotor control in patients with visual disorders, little consideration has been given to complex acts such as manual prehension. Grasping a three-dimensional object requires knowledge not only of the object's spatial location, but also of its form, orientation and size. We have examined a patient with a profound disorder in the perception of such object qualities. Our quantitative analyses demonstrate strikingly accurate guidance of hand and finger movements directed at the very objects whose qualities she fails to perceive. These data suggest that the neural substrates for the visual perception of object qualities such as shape, orientation and size are distinct from those underlying the use of those qualities in the control of manual skills.

1,344 citations


Journal ArticleDOI
TL;DR: This article shows that 'looking where someone else is looking' develops over the first 18 months of life; by about 12 months, infants extrapolate from the orientation of the mother's head and eyes to the intersection of her line of sight with a relatively precise zone of the infant's own visual space.
Abstract: A series of experiments is reported which show that three successive mechanisms are involved in the first 18 months of life in ‘looking where someone else is looking’. The earliest ‘ecological’ mechanism enables the infant to detect the direction of the adult's visual gaze within the baby's visual field but the mother's signal alone does not allow the precise localization of the target. Joint attention to the same physical object also depends on the intrinsic, attention-capturing properties of the object in the environment. By about 12 months, we have evidence for presence of a new ‘geometric’ mechanism. The infant extrapolates from the orientation of the mother's head and eyes, the intersection of the mother's line of sight within a relatively precise zone of the infant's own visual space. A third ‘representational’ mechanism emerges between 12 and 18 months, with an extension of joint reference to places outside the infant's visual field. None of these mechanisms require the infant to have a theory that others have minds; rather the perceptual systems of different observers ‘meet’ in encountering the same objects and events in the world. Such a ‘realist’ basis for interpersonal knowledge may offer an alternative starting point for development of intrapersonal knowledge, rather than the view that mental events can only be known by construction of a theory.

825 citations


Journal ArticleDOI
TL;DR: Electrophysiological findings support models proposing that the behavioral effects of precuing expected target locations are due, at least in part, to changes in sensory-perceptual processing.
Abstract: Reaction time (RT) differences to visual stimuli as a function of expectancy have been attributed to changes in perceptual processing or entirely to shifts in decision and response criteria. To help distinguish between these competing interpretations, event-related brain potentials (ERPs) were recorded to lateralized flashes delivered to visual field locations precued by a central arrow (valid stimuli) or not precued (invalid stimuli). Validly cued stimuli in both simple and choice RT tasks elicited consistent amplitude enhancements of the early, sensory-evoked PI component of the ERP recorded at scalp sites overlying lateral prestriate visual cortex (90-130 ms poststimulus). In contrast, the subsequent N1 component (150-200 ms) was enhanced by validly cued stimuli in the choice RT task condition only. These electrophysiological findings support models proposing that the behavioral effects of precuing expected target locations are due, at least in part, to changes in sensory-perceptual processing. Furthermore, these data provide specific information regarding the neural mechanisms underlying such effects.
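The ERP measures described here, averaging time-locked epochs and comparing component amplitude within a latency window such as the 90-130 ms P1 range, can be sketched as follows. The data are synthetic stand-ins (invented amplitudes and noise levels), not the study's recordings.

```python
import numpy as np

def erp(epochs):
    """Average time-locked epochs (trials x samples) into an ERP waveform."""
    return np.asarray(epochs).mean(axis=0)

def mean_amplitude(wave, times, t0, t1):
    """Mean ERP amplitude inside the latency window [t0, t1] (in seconds)."""
    window = (times >= t0) & (times <= t1)
    return wave[window].mean()

# Synthetic demo: a P1-like positivity near 110 ms, larger on validly
# cued trials (amplitudes and noise levels are invented for illustration).
rng = np.random.default_rng(0)
times = np.arange(0.0, 0.4, 0.002)                  # 0-400 ms at 500 Hz
p1 = np.exp(-(times - 0.11)**2 / (2 * 0.015**2))    # Gaussian "component"
valid = 2.0 * p1 + rng.normal(0.0, 1.0, (200, times.size))
invalid = 1.0 * p1 + rng.normal(0.0, 1.0, (200, times.size))
```

Comparing mean_amplitude(erp(valid), times, 0.09, 0.13) against the invalid condition reproduces, on these toy data, the qualitative P1 enhancement for validly cued stimuli.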

796 citations


Journal ArticleDOI
TL;DR: A new theory explaining the perception of partly occluded objects and illusory figures, from both static and kinematic information, in a unified framework is described, with a detailed theory of unit formation that accounts for most cases of boundary perception in the absence of local physical specification.

674 citations


Book
01 Nov 1991
TL;DR: Part 1 The task of vision: the plenoptic function and the elements of early vision, Edward H. Adelson and James R. Bergen.
Abstract: Part 1 The task of vision: the plenoptic function and the elements of early vision, Edward H. Adelson and James R. Bergen.
Part 2 Receptors and sampling: learning receptor positions, Albert J. Ahumada, Jr; a model of aliasing in extrafoveal human vision, Carlo L. M. Tiana et al; models of human rod receptors and the ERG, Donald C. Hood and David G. Birch.
Part 3 Models of neural function: the design of chromatically opponent receptive fields, Peter Lennie et al; spatial receptive field organization in monkey V1 and its relationship to the cone mosaic, Michael J. Hawken and Andrew J. Parker; neural contrast sensitivity, Andrew B. Watson; spatiotemporal receptive fields and direction selectivity, Robert Shapley et al; nonlinear model of neural responses in cat visual cortex, David J. Heeger.
Part 4 Detection and discrimination: a template matching model of subthreshold summation, Jacob Nachmias; noise in the visual system may be early, Denis G. Pelli; pattern discrimination, visual filters and spatial sampling irregularity, Hugh R. Wilson.
Part 5 Color and shading: a bilinear model of the illuminant's effect on color appearance, David H. Brainard and Brian A. Wandell; shading ambiguity - reflectance and illumination, Michael D'Zmura; transparency and the cooperative computation of scene attributes, Daniel Kersten.
Part 6 Motion and texture: theories for the visual perception of local velocity and coherent motion, Norberto M. Grzywacz and Alan L. Yuille; computational modeling of visual texture segregation, James R. Bergen and Michael S. Landy; complex channels, early local nonlinearities, and normalization in texture segregation, Norma Graham; orthogonal distribution analysis - a new approach to the study of texture perception, Charles Chubb and Michael S. Landy.
Part 7 3D shape: shape from X - psychophysics and computation, Heinrich H. Bulthoff; computational issues in solving the stereo correspondence problem, John P. Frisby and Stephen B. Pollard; stereo, surfaces, and shape, Andrew J. Parker et al.

602 citations


Journal ArticleDOI
TL;DR: The best predictor of accident frequency as recorded by the state was a model incorporating measures of early visual attention and mental status, which together accounted for 20% of the variance, a much stronger model than in earlier studies.
Abstract: Older drivers have more accidents per miles driven than any other age group and tend to have significant impairments in their visual function, which could interfere with driving. Previous research has largely failed to document a link between vision and driving in the elderly. We have taken a comprehensive approach by examining how accident frequency in older drivers relates to the visual/cognitive system at a number of levels: ophthalmological disease, visual function, visual attention, and cognitive function. The best predictor of accident frequency as recorded by the state was a model incorporating measures of early visual attention and mental status, which together accounted for 20% of the variance, a much stronger model than in earlier studies. Those older drivers with a visual attentional disorder or with poor scores on a mental status test had 3-4 times more accidents (of any type) and 15 times more intersection accidents than those without these problems.

584 citations


Journal ArticleDOI
01 Feb 1991-Brain
TL;DR: It is proposed that many of these perceptual disorders might be the combined result of a selective loss of the cortical elaboration of the magnocellular visual processing stream, and a selective output disconnection from a central processor of visual boundaries and shape primitives in the occipital cortex.
Abstract: A single case study of a patient with 'visual form agnosia' is presented. A severe visual recognition deficit was accompanied by impairments in discriminating shape, reflectance, and orientation, although visual acuity and colour vision, along with tactile recognition and intelligence, were largely preserved. Neuropsychological and behavioural investigations have indicated that the patient is able to utilize visual pattern information surprisingly well for the control of hand movements during reaching, and can even read many whole words, despite being unable to make simple discriminative judgements of shape or orientation. She seems to have no awareness of shape primitives through Gestalt grouping by similarity, continuity or symmetry. It is proposed that many of these perceptual disorders might be the combined result of (1) a selective loss of the cortical elaboration of the magnocellular visual processing stream, and (2) a selective output disconnection from a central processor of visual boundaries and shape primitives in the occipital cortex.

551 citations


Journal ArticleDOI
01 Dec 1991-Science
TL;DR: This hypothesis is given support by the demonstration that it is possible to synthesize, from a small number of examples of a given task, a simple network that attains the required performance level.
Abstract: In many different spatial discrimination tasks, the human visual system exhibits hyperacuity-level performance by evaluating spatial relations with the precision of a fraction of a photoreceptor's diameter. We propose that this impressive performance depends in part on a fast learning process that uses relatively few examples and occurs at an early processing stage in the visual pathway. We demonstrate that it is possible to synthesize from a small number of examples a simple (HyperBF) network that attains the required performance level. We verify with psychophysical experiments some key predictions of our conjecture. We show that fast stimulus-specific learning indeed takes place in the human visual system and that this learning does not transfer between two slightly different hyperacuity tasks.
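The paper's learning scheme is a HyperBF network; as a generic stand-in, the sketch below fits a Gaussian radial-basis-function regressor with the training examples themselves as centers. The names, the ridge term, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian basis matrix G[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def train_rbf(X, y, centers, sigma, ridge=1e-8):
    """Solve for the output weights by regularized least squares."""
    G = rbf_design(X, centers, sigma)
    return np.linalg.solve(G.T @ G + ridge * np.eye(G.shape[1]), G.T @ y)

def predict_rbf(X, centers, sigma, w):
    """Network output: weighted sum of Gaussian basis responses."""
    return rbf_design(X, centers, sigma) @ w
```

With few examples (here nine), the network reproduces the training targets almost exactly, echoing the paper's point that a small example set suffices to synthesize the required mapping.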

540 citations


Journal ArticleDOI
TL;DR: Results indicated that only the 4-month-old group was easily able to disengage from an attractive central stimulus to orient toward a simultaneously presented target, consistent with the predictions of maturational accounts of the development of visual orienting.
Abstract: Three aspects of the development of visual orienting in infants of 2, 3, and 4 months of age are examined in this paper. These are the age of onset and sequence of development of (1) the ability to readily disengage gaze from a stimulus, (2) the ability to consistently show "anticipatory" eye movements, and (3) the ability to use a central cue to predict the spatial location of a target. Results indicated that only the 4-month-old group was easily able to disengage from an attractive central stimulus to orient toward a simultaneously presented target. The 4-month-old group also showed more than double the percentage of "anticipatory" looks shown by the other age groups. Finally, only the 4-month-old group showed significant evidence of being able to acquire the contingent relationship between a central cue and the spatial location (to the right or to the left) of a target. Measures of anticipatory looking and contingency learning were not correlated. These findings are, in general terms, consistent with the predictions of maturational accounts of the development of visual orienting.

522 citations


Journal ArticleDOI
TL;DR: It is demonstrated that unit responses recorded from the posteromedial lateral suprasylvian area, a visual association area specialized for the analysis of motion, also exhibit an oscillatory temporal structure, supporting the hypothesis that synchronous neuronal oscillations may serve to establish relationships between features processed in different areas of visual cortex.
Abstract: Recent studies have shown that neurons in area 17 of cat visual cortex display oscillatory responses which can synchronize across spatially separate orientation columns. Here, we demonstrate that unit responses recorded from the posteromedial lateral suprasylvian area, a visual association area specialized for the analysis of motion, also exhibit an oscillatory temporal structure. Cross-correlation analysis of unit responses reveals that cells in area 17 and the posteromedial lateral suprasylvian area can oscillate synchronously. Moreover, we find that the interareal synchronization is sensitive to features of the visual stimuli, such as spatial continuity and coherence of motion. These results support the hypothesis that synchronous neuronal oscillations may serve to establish relationships between features processed in different areas of visual cortex.
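The cross-correlation analysis mentioned above can be sketched as a normalized cross-correlogram over a range of lags. The demo below uses synthetic 50 Hz sinusoids as stand-ins for oscillatory unit responses; all parameters are illustrative.

```python
import numpy as np

def cross_correlogram(a, b, max_lag):
    """Normalized cross-correlation c(l) = sum_t a[t] * b[t + l] for
    lags l in [-max_lag, max_lag], after mean subtraction."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(a[max(0, -l): len(a) - max(0, l)],
                          b[max(0, l): len(b) - max(0, -l)]) for l in lags])
    return lags, cc / denom
```

A peak at zero lag indicates synchronous oscillation of the two sites; a displaced peak indicates a consistent phase lead of one site over the other.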

Journal ArticleDOI
25 Apr 1991-Nature
TL;DR: It is found that 'filling in' is an active visual process that probably involves creating an actual neural representation of the surround rather than merely ignoring the absence of information from the scotoma.
Abstract: PATIENTS with scotomas or blind-spots in their visual field [1-5] resulting from damage to the visual pathways often report that the pattern from the rest of the visual field 'fills in' to occupy the scotoma. Here we describe a novel technique for generating an artificial perceptual scotoma which enabled us to study the spatial and temporal characteristics of this filling-in process. A homogeneous grey square subtending 1.5° was displayed against a background of twinkling two-dimensional noise of equal mean luminance (Fig. 1). On steady eccentric fixation for 10 s the square vanished and was filled in by the twinkling noise from the surround. Using this display we found that 'filling in' is an active visual process that probably involves creating an actual neural representation of the surround rather than merely ignoring the absence of information from the scotoma; that filling in can occur separately for colour and texture, suggesting separate mechanisms; that the filling-in process does not completely suppress information from the scotoma, since even after an image has faded completely from consciousness it can nevertheless contribute to motion perception; and that the process can be strongly influenced by illusory contours.
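The display described, a homogeneous grey square on twinkling two-dimensional noise of equal mean luminance, is straightforward to reproduce; the frame size, square size, and binary-noise choice below are illustrative assumptions, not the paper's exact stimulus.

```python
import numpy as np

def scotoma_frame(size=256, square=48, rng=None):
    """One frame of the artificial-scotoma display: binary twinkling noise
    with a centered uniform grey square at the noise's mean luminance."""
    rng = np.random.default_rng() if rng is None else rng
    frame = rng.integers(0, 2, (size, size)).astype(float)  # 0/1 noise
    top = (size - square) // 2
    frame[top:top + square, top:top + square] = 0.5         # grey = mean
    return frame
```

Regenerating the noise on every frame while holding the square fixed produces the twinkle; under steady eccentric fixation the square fades and is filled in by the surround.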

Journal ArticleDOI
TL;DR: This work cued attention to a moving object and found subsequent inhibition at the locus the object later occupied, implying that previously examined objects are suppressed.
Abstract: Our response to visual events can be delayed at positions we have recently examined attentively. Such inhibition could organize visual search through static scenes by suppressing those loci already searched, but this mechanism would fail in moving scenes, as objects' locations then change during search. We cued attention to a moving object and found subsequent inhibition at the locus the object later occupied. This implies that previously examined objects are suppressed. Such object-centred inhibition would be highly adaptive, but would require a sophisticated neural implementation for a mechanism held to be sub-cortical.

Book
01 Jan 1991
TL;DR: This thesis describes Sonja, a system that uses instructions in the course of visually-guided activity; Sonja can operate autonomously but can also make flexible use of instructions provided by a human advisor.
Abstract: This thesis describes Sonja, a system which uses instructions in the course of visually-guided activity. The thesis explores an integration of research in vision, activity, and natural language pragmatics. Sonja's visual system demonstrates the use of several intermediate visual processes, particularly visual search and routines, previously proposed on psychophysical grounds. The computations Sonja performs are compatible with the constraints imposed by neuroscientifically plausible hardware. Although Sonja can operate autonomously, it can also make flexible use of instructions provided by a human advisor. The system grounds its understanding of these instructions in perception and action.

Journal ArticleDOI
TL;DR: The invariance of the flanker compatibility effect across conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis, and shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.
Abstract: When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.

Journal ArticleDOI
11 Jul 1991-Nature
TL;DR: There is a corresponding anisotropy in the cortical representation of binocular information: receptive-field profiles for left and right eyes are matched for cells that are tuned to horizontal orientations of image contours, and a major modification is required of the conventional notion of disparity processing.
Abstract: Binocular neurons in the visual cortex are thought to perform the first stage of processing for the fine stereoscopic depth discrimination exhibited by animals with frontally located eyes. Because lateral separation of the eyes gives a slightly different view to each eye, there are small variations in position (disparities), mainly along the horizontal dimension, between corresponding features in the two retinal images. The visual system uses these disparities to gauge depth. We studied neurons in the cat's visual cortex to determine whether the visual system uses the anisotropy in the range of horizontal and vertical disparities. We report here that there is a corresponding anisotropy in the cortical representation of binocular information: receptive-field profiles for left and right eyes are matched for cells that are tuned to horizontal orientations of image contours. For neurons tuned to vertical orientations, left and right receptive fields are predominantly dissimilar. Therefore, a major modification is required of the conventional notion of disparity processing. The modified scheme allows a unified encoding of monocular form and binocular disparity information.

Journal ArticleDOI
TL;DR: Some patients can respond to visual stimuli presented within their clinically absolute visual field defects that have been caused by partial destruction of striate cortex.

Journal ArticleDOI
TL;DR: The results lend support to a speed- or efficiency-of-processing interpretation of infant fixation duration: as the amount of time allotted for infants to solve either type of task was decreased, short lookers' performance was superior to that of long lookers.
Abstract: Individual differences in the duration of infants' visual fixations are reliable and stable and have been linked to differential cognitive performance; short-looking infants typically perform better than long-looking infants. 4 experiments tested the possibility of whether short lookers' superiority on perceptual-cognitive tasks is attributable to attention to the featural details of visual stimuli, or simply to differences in the speed or efficiency of visual processing. To do this, the performance of long- and short-looking 4-month-olds was examined on separate discrimination tasks that could be solved only by processing either featural or global information. The global task was easier than the featural task, but as the amount of time allotted for infants to solve either type of task was decreased, short lookers' performance was superior to that of long lookers. These results thus lend support to a speed or efficiency of stimulus processing interpretation of infant fixation duration.

Journal ArticleDOI
TL;DR: This model is modified and extended to address the problems of perceptual grouping and figure-ground segregation in vision and is able to link the elements corresponding to a coherent figure and to segregate them from the background or from another figure in a way that is consistent with the Gestalt laws.
Abstract: The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model to address the problems of perceptual grouping and figure-ground segregation in vision. A novel feature is that the efficacy of the connections is allowed to change on a fast time scale. This results in active reentrant connections that amplify the correlations among neuronal groups. The responses of the model are able to link the elements corresponding to a coherent figure and to segregate them from the background or from another figure in a way that is consistent with the so-called Gestalt laws.
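Grouping by temporal correlation can be illustrated with a toy system of coupled phase oscillators. This Kuramoto-style sketch is a generic stand-in for the reentrant model, not the authors' implementation; the coupling strengths and frequencies are invented.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of d(theta_i)/dt = omega_i
    + sum_j K[i, j] * sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + (K * np.sin(diff)).sum(axis=1))

def coherence(theta):
    """Order parameter r in [0, 1]; r near 1 means the phases are locked."""
    return np.abs(np.exp(1j * theta).mean())
```

With strong coupling inside two disjoint groups of oscillators ("figure" and "ground") and none between them, each group phase-locks internally, which is the sense in which synchrony can tag the elements of a coherent figure and segregate them from the background.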

Journal ArticleDOI
TL;DR: The experiments presented here suggest that repetition blindness is a more general visual phenomenon, and examine its relationship to feature integration theory, and suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.
Abstract: Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.

Journal ArticleDOI
TL;DR: A neural network model of synchronized oscillator activity in visual cortex is presented in order to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus.

Journal ArticleDOI
TL;DR: Across the whole population there was no systematic change in either responsivity or selectivity for orientation under the two conditions, and the nature of the information signaled by these neurons was specified more precisely than has previously been possible.
Abstract: Several neurophysiological studies have shown that the visual cerebral cortex of macaque monkeys performing delayed match-to-sample tasks contains individual neurons whose levels of activity depend on the sample the animal is required to remember. Haenny et al. (1988) reported that the activity of neurons in area V4 of monkeys performing an orientation matching task depends on the orientation for which the animal is searching. It was proposed that these neurons contribute to a representation of the orientation being sought. We have further characterized these neurons by recording visual responses from individual neurons during multiple behavioral tasks. Animals were trained to perform an orientation match-to-sample task using either a visual or a tactile orientation sample. In a set of 89 neurons examined using both types of sample, 25% showed statistically significant effects of sample orientation regardless of whether the sample was visual or tactile. Most of these preferred the same sample orientation in both conditions. These results allow us to specify the nature of the information signaled by these neurons more precisely than has previously been possible. For 193 units tested using one of the matching tasks, responses were also recorded while the animal performed a simple fixation task. In this task the animal was not required to attend to the visual stimuli that were presented. A few neurons that were responsive during the matching task were silent during fixation, but a comparable number was much more responsive during fixation. Across the whole population there was no systematic change in either responsivity or selectivity for orientation under the two conditions.

Journal ArticleDOI
TL;DR: Left-hemisphere advantages shown in face processing are suggested to be due to the parsing and analysis of the local elements of a face by general-purpose visual mechanisms that are not face-specific.
Abstract: This article addresses three issues in face processing: First, is face processing primarily accomplished by the right hemisphere, or do both left-and right-hemisphere mechanisms play important roles? Second, are the mechanisms the same as those involved in general visual processing, or are they dedicated to face processing? Third, how can the mechanisms be characterized more precisely in terms of processes such as visual parsing? We explored these issues using the divided visual field methodology in four experiments. Experiments 1 and 2 provided evidence that both left-and right-hemisphere mechanisms are involved in face processing. In Experiment 1, a right-hemisphere advantage was found for both Same and Different trials when Same faces were identical and Different faces differed on all three internal facial features. Experiment 2 replicated the right-hemisphere advantage for Same trials but showed a left-hemisphere advantage for Different trials when one of three facial features differed between the target and the probe faces. Experiment 3 showed that the right-hemisphere advantage obtained with upright faces in Experiment 2 disappeared when the faces were inverted. This result suggests that there are right-hemisphere mechanisms specialized for processing upright faces, although it could not be determined whether these mechanisms are completely face-specific. Experiment 3 also provided evidence that the left-hemisphere mechanisms utilized in face processing tasks are general-purpose visual mechanisms not restricted to particular classes of visual stimuli. In Experiment 4, a left-hemisphere advantage was obtained when the task was to find one facial feature that was the same between the target and the probe faces. We suggest that left-hemisphere advantages shown in face processing are due to the parsing and analysis of the local elements of a face.

Journal ArticleDOI
TL;DR: An oscillator neural network model that is capable of processing local and global attributes of sensory input is proposed and analyzed and the computational capabilities of the model for performing discrimination and segmentation tasks are demonstrated.
Abstract: An oscillator neural network model that is capable of processing local and global attributes of sensory input is proposed and analyzed. Local features in the input are encoded in the average firing rate of the neurons while the relationships between these features can modulate the temporal structure of the neuronal output. Neurons that share the same receptive field interact via relatively strong feedback connections, while neurons with different fields interact via specific, relatively weak connections. The model is studied in the context of processing visual stimuli that are coded for orientation. The effect of axonal propagation delays on synchronization of oscillatory activity is analyzed. We compare our theoretical results with recent experimental evidence on coherent oscillatory activity in the cat visual cortex. The computational capabilities of the model for performing discrimination and segmentation tasks are demonstrated. Coding and linking of visual features other than orientation are discussed.
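The linking scheme this abstract describes (relatively strong coupling among neurons sharing a receptive field, weak specific coupling between different fields) can be illustrated with a generic Kuramoto-style phase-oscillator sketch. This is a toy illustration, not the paper's firing-rate model: the functions `simulate` and `coherence`, the group layout, and every parameter value are invented for demonstration purposes.

```python
import math
import random

# Toy sketch (not the paper's model): phase oscillators with strong coupling
# inside a "receptive field" group and weak coupling between groups, in the
# spirit of Kuramoto-style synchronization models.
def simulate(n_per_group=4, n_groups=2, k_strong=2.0, k_weak=0.05,
             dt=0.01, steps=5000, seed=1):
    rng = random.Random(seed)
    n = n_per_group * n_groups
    group = [i // n_per_group for i in range(n)]
    # Natural frequencies jittered around a common value.
    omega = [1.0 + 0.1 * rng.uniform(-1.0, 1.0) for _ in range(n)]
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dtheta = []
        for i in range(n):
            pull = 0.0
            for j in range(n):
                if i != j:
                    # Strong within-group, weak between-group coupling.
                    k = k_strong if group[i] == group[j] else k_weak
                    pull += k * math.sin(theta[j] - theta[i])
            dtheta.append(omega[i] + pull / n)
        # Forward-Euler update of all phases, wrapped to [0, 2*pi).
        theta = [(t + dt * d) % (2.0 * math.pi)
                 for t, d in zip(theta, dtheta)]
    return theta, group

def coherence(phases):
    # Length of the mean phase vector: 1.0 means perfect synchrony.
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

theta, group = simulate()
within = coherence([t for t, g in zip(theta, group) if g == 0])
```

With these (invented) settings the strong feedback phase-locks the oscillators within a group, so within-group coherence ends up close to 1, while the weak between-group links leave room for different groups to keep distinct temporal structure depending on parameters; this is the kind of temporal tagging of feature relationships the abstract attributes to the model.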

Book
17 Jun 1991
TL;DR: The model, called MORSEL, is unique in providing a broad and unified explanation for a wide range of experimental psychological data on visual perception and attention, and draws on existing theoretical perspectives from cognitive psychology.
Abstract: "The Perception of Multiple Objects" describes a neurally inspired computational model of two-dimensional object recognition and spatial attention that can explain many characteristics of human visual perception. The model, called MORSEL (named for its ability to perform Multiple Object Recognition and attentional Selection), is unique in providing a broad and unified explanation for a wide range of experimental psychological data on visual perception and attention. Although it draws on existing theoretical perspectives from cognitive psychology, it is a fully mechanistic account, not just a functional-level theory. MORSEL has been trained to recognize letters and words in various positions on its "retina." Following training, it can also recognize several items at once, subject to capacity limitations. The model makes predictions about what sorts of information the visual system can process in parallel and what sorts must be processed serially. Through simulation experiments, chiefly in letter and word perception, MORSEL has been shown to account for a variety of psychological phenomena, including perceptual errors that arise when several items appear simultaneously in the visual field, facilitatory effects of context and redundant information, attentional phenomena, visual search performance, and behaviors exhibited by neurological patients with acquired dyslexia.

Book
12 Apr 1991
TL;DR: This book surveys visual perception, covering its functions and models, its measurement, the perception of location, motion, and objects, and the optical, neurophysiological, and historical foundations of the field.
Abstract: Understanding Visual Perception. Functions of Visual Perception. Models of Visual Perception. Measuring Visual Perception. Visual Perception and the Physical Environment. Development of Perception. When Vision Goes Wrong. The Heritage. Optics. Art and Representation. Life Sciences. Philosophy. Psychology. Light and the Eye. Visual Optics. Visual Neurophysiology. Location. Frames of Reference. Visual Direction. Visual Distance. Navigation. Motion. Sources of Motion Stimulation. Motion Phenomena. Retinocentric Motion. Egocentric Motion. Geocentric Motion. Recognition. Perceiving Object Properties. Perceptual Constancies. Recognising Objects. Representations and Vision. Pictures and Objects. Computer Generated Displays. Summary and Conclusions.

Book
01 Aug 1991
TL;DR: An introduction to computer vision covering illumination and fixturing, sensors, image acquisition and representation, fundamentals of digital image processing, image analysis, the segmentation problem, 2-D shape description and recognition, 3-D object representations, robot programming and robot vision, bin picking, and trends and aspirations.
Abstract: An introduction to computer vision: illumination and fixturing; sensors; image acquisition and representation; fundamentals of digital image processing; image analysis; the segmentation problem; 2-D shape description and recognition; 3-D object representations; robot programming and robot vision; bin picking; trends and aspirations.

Journal ArticleDOI
01 Feb 1991-Brain
TL;DR: It is argued that (1) the amnesia is due to damage to the fornix where that structure is closely applied to the splenium and that it is the result of a disconnection between the frontal and temporal lobes, although the possibility of damage to more than one structure cannot be excluded.
Abstract: The neuropsychological abnormalities found in 9 patients with tumours involving the splenium of the corpus callosum are described. The outstanding features of their cognitive deficits were a severe memory deficit and visual perception impairment in the presence of relatively intact intellect. It is argued that (1) the amnesia is due to damage to the fornix where that structure is closely applied to the splenium and that it is the result of a disconnection between the frontal and temporal lobes, although the possibility of damage to more than one structure, for example retrosplenial cortex and fornix, cannot be excluded; (2) there is a dual pathway for visual object recognition, one of which passes directly to the dominant hemisphere for semantic analysis and the other via the nondominant hemisphere for prior perceptual analysis. Further, it is postulated that there is a subcortical as well as a callosal route between the hemispheres that is important for visual object recognition.

Journal ArticleDOI
01 Aug 1991-Brain
TL;DR: It is suggested that the patient's simultanagnosia is attributable to an impairment in the process by which activated structural descriptions are linked to information coding the location of the object.
Abstract: Simultanagnosia is a disorder of visual perception characterized by the inability to interpret complex visual arrays despite preserved recognition of single objects. We report a series of investigations on a simultanagnosic patient which attempt to establish the nature of this visual processing disturbance. The patient performed normally on a feature detection task but was impaired on a test of attention-requiring visual search in which she was asked to distinguish between stimuli containing different numbers of targets. She was not impaired on a visual-spatial orienting task. She identified single briefly presented words and objects as rapidly and reliably as controls suggesting that access to stored structural descriptions was not impaired. With brief, simultaneous presentation of 2 words or drawings, she identified both stimuli significantly more frequently when the stimuli were semantically related than when they were unrelated. On the basis of these and other data, we suggest that the patient's simultanagnosia is attributable to an impairment in the process by which activated structural descriptions are linked to information coding the location of the object.

Journal ArticleDOI
TL;DR: The results confirm that although selection of motor responses constitutes a processing bottleneck, the control of visual attention operates independently of this bottleneck.
Abstract: Does shifting visual attention require the same central mechanism as that required for selecting overt motor responses? In Experiment 1, Ss performed 2 tasks: a speeded manual response to a tone and an unspeeded report of a cued target letter in a brief masked array. Stimulus onset asynchrony (SOA) between tone and array was varied. If the attention shift to the target was delayed by the first task, then there should be more second-task errors at short SOAs and on trials with slow first-task responses. In fact, SOA effects and dependencies were minimal. Results were unchanged in further experiments in which the relation between cue and target was symbolic, spatially "unnatural," or based on the color of the target. Two additional experiments validated key assumptions of the method. The results confirm that although selection of motor responses constitutes a processing bottleneck, the control of visual attention operates independently of this bottleneck.