
Showing papers on "Visual perception published in 1989"


Book ChapterDOI
10 Sep 1989

5,275 citations


Journal ArticleDOI
TL;DR: The findings indicate the existence of a sustained and a transient component of attention, and it is hypothesized that, of the two, the transient component operates at an earlier stage of visual cortical processing.

1,026 citations


Journal ArticleDOI
TL;DR: The overlap in ranges of the color differences for those comparisons rated matches and mismatches indicates the importance of other factors in appearance matching, such as translucency and the effects of other surrounding visual stimuli.
Abstract: Judgments of appearance matching by means of the visual criteria established by the United States Public Health Service (USPHS) and by means of an extended visual rating scale were determined for composite resin veneer restorations and their comparison teeth. Using a colorimeter of 45°/0° geometry and the CIELAB color order system, the colors of the restorations and comparison teeth were measured to calculate a color difference for every visual rating. Statistically significant relationships were found between each of the two visual rating systems and the color differences. The average CIELAB color difference of those ratings judged a match by the USPHS criteria was found to be 3.7. However, the overlap in ranges of the color differences for those comparisons rated matches and mismatches indicates the importance of other factors in appearance matching, such as translucency and the effects of other surrounding visual stimuli. The extended visual rating scale offers no advantages over the more broadly defined crite...
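The CIELAB color difference used in this study is the standard ΔE*ab, the Euclidean distance between two colors in L*, a*, b* coordinates. A minimal sketch (the shade values below are illustrative, not taken from the study):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIELAB color difference Delta-E*ab: Euclidean distance in L*, a*, b* space."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Hypothetical L*, a*, b* values for a restoration and its comparison tooth:
restoration = (72.0, 2.0, 18.0)
tooth = (70.0, 1.0, 16.0)
diff = delta_e_ab(restoration, tooth)  # sqrt(4 + 1 + 4) = 3.0
```

A pair with ΔE*ab near the study's mean of 3.7 could thus still be rated either a match or a mismatch, since factors such as translucency are not captured by this metric.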

922 citations


Book
21 Sep 1989
TL;DR: The book introduces and develops the core topics of visual pattern analysis: adaptation, summation, uncertainty, identification, and multiple dimensions.
Abstract: PART I: INTRODUCTION PART II: ADAPTATION PART III: SUMMATION PART IV: UNCERTAINTY PART V: IDENTIFICATION PART VI: MULTIPLE DIMENSIONS PART VII: EPILOGUE

892 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a theory of selective attention that is intended to account for the identification of a visual shape in a cluttered display, where the selected area of attention is controlled by a filter that operates on the location information in a display.
Abstract: This article presents a theory of selective attention that is intended to account for the identification of a visual shape in a cluttered display. The selected area of attention is assumed to be controlled by a filter that operates on the location information in a display. The location information selected by the filter in turn determines the feature information that is to be identified. Changes in location of the selected area are assumed to be governed by a gradient of processing resources. Data from three new experiments are fit more parsimoniously by a gradient model than by a moving-spotlight model. The theory is applied to experiments in the recent literature concerned with precuing locations in the visual field, and to the issue of attentional and automatic processing in the identification of words. Finally, data from neuroanatomical experiments are reviewed to suggest ways that the theory might be realized in the primate brain.

651 citations


Journal ArticleDOI
TL;DR: Assumptions about the structure of short-term verbal memory are shown to account for many of the observed effects of presentation modality.
Abstract: The effects of auditory and visual presentation upon short-term retention of verbal stimuli are reviewed, and a model of the structure of short-term memory is presented. The main assumption of the model is that verbal information presented to the auditory and visual modalities is processed in separate streams that have different properties and capabilities. Auditory items are automatically encoded in both the A (acoustic) code, which, in the absence of subsequent input, can be maintained for some time without deliberate allocation of attention, and a P (phonological) code. Visual items are retained in both the P code and a visual code. Within the auditory stream, successive items are strongly associated; in contrast, in the visual modality, it is simultaneously presented items that are strongly associated. These assumptions about the structure of short-term verbal memory are shown to account for many of the observed effects of presentation modality.

629 citations


Journal ArticleDOI
18 Aug 1989-Science
TL;DR: Neuronal activity in the superior temporal sulcus of monkeys, a cortical region that plays an important role in analyzing visual motion, was related to the subjective perception of movement during a visual task, indicating that this region may mediate the perceptual experience of a moving object.
Abstract: Neuronal activity in the superior temporal sulcus of monkeys, a cortical region that plays an important role in analyzing visual motion, was related to the subjective perception of movement during a visual task. Single neurons were recorded while monkeys (Macaca mulatta) discriminated the direction of motion of stimuli that could be seen moving in either of two directions during binocular rivalry. The activity of many neurons was dictated by the retinal stimulus. Other neurons, however, reflected the monkeys' reported perception of motion direction, indicating that these neurons in the superior temporal sulcus may mediate the perceptual experience of a moving object.

571 citations


Journal ArticleDOI
TL;DR: Percutaneous magnetic coil stimulation of human occipital cortex was tested on the perception of 3 briefly presented, randomly generated alphabetical characters; the effects were consistent with the topographical representation in visual cortex but incompatible with an effect on attention or suppression from an eyeblink.

541 citations




Journal ArticleDOI
TL;DR: The technique, derived from Gordon's algorithm, accounts for visual perception criteria, namely for contour detection, and the efficiency of the algorithm is compared to Gordon's and to the classical ones.
Abstract: A digital processing technique is proposed in order to enhance image contrast without significant noise enhancement. The technique, derived from Gordon's algorithm, accounts for visual perception criteria, namely for contour detection. The efficiency of our algorithm is compared to Gordon's and to the classical ones.
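Window-based contrast enhancement of the kind this family of algorithms performs can be sketched as stretching each pixel relative to its local neighborhood. This is a simplified illustration under that assumption, not the authors' exact algorithm, which additionally incorporates contour-detection criteria:

```python
import numpy as np

def local_contrast_stretch(img, radius=1):
    """Stretch each pixel to the min/max range of its (2*radius+1)-square
    neighborhood. A simplified sketch of window-based contrast enhancement;
    pixels in flat regions are mapped to mid-gray."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - radius):y + radius + 1,
                      max(0, x - radius):x + radius + 1]
            lo, hi = win.min(), win.max()
            out[y, x] = (img[y, x] - lo) / (hi - lo) if hi > lo else 0.5
    return out
```

Because the stretch is local, faint edges are amplified; the trade-off the abstract addresses is that naive versions of this operation also amplify noise.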

Journal ArticleDOI
TL;DR: A new analysis is described, based on the concept of the ideal observer in signal detection theory, that allows one to trace the flow of discrimination information through the initial physiological stages of visual processing, for arbitrary spatio-chromatic stimuli.
Abstract: Visual stimuli contain a limited amount of information that could potentially be used to perform a given visual task. At successive stages of visual processing, some of this information is lost and some is transmitted to higher stages. This article describes a new analysis, based on the concept of the ideal observer in signal detection theory, that allows one to trace the flow of discrimination information through the initial physiological stages of visual processing, for arbitrary spatio-chromatic stimuli. This ideal-observer analysis provides a rigorous means of measuring the information content of visual stimuli and of assessing the contribution of specific physiological mechanisms to discrimination performance. Here, the analysis is developed for the physiological mechanisms up to the level of the photoreceptor. It is shown that many psychophysical phenomena previously attributed to neural mechanisms may be explained by variations in the information content of the stimuli and by preneural mechanisms.
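At the photoreceptor level, the dominant noise source is the Poisson statistics of photon catch, so an ideal-observer discriminability index can be sketched with a Gaussian approximation to the Poisson likelihoods and optimal pooling across independent receptors. A minimal sketch of that standard calculation (not the article's full analysis):

```python
import math

def poisson_dprime(mean_a, mean_b):
    """Ideal-observer d' for discriminating two stimuli whose photon catches
    in one receptor are Poisson with the given means, using a Gaussian
    approximation (variance equals the mean for Poisson counts)."""
    return abs(mean_a - mean_b) / math.sqrt((mean_a + mean_b) / 2.0)

def pooled_dprime(dprimes):
    """Optimal pooling across independent receptors: d' adds in quadrature."""
    return math.sqrt(sum(d * d for d in dprimes))
```

This makes the abstract's point concrete: discrimination limits can change with stimulus information content (the mean photon catches) before any neural mechanism is invoked.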

Journal ArticleDOI
TL;DR: The view is that mental imagery involves the efferent activation of visual areas in prestriate occipital cortex, parietal and temporal cortex, and that these areas represent the same kinds of specialized visual information in imagery as they do in perception.

Journal ArticleDOI
TL;DR: The present results show that V3A gaze-dependent neurons combine information about the position of the eye in the orbit with that of a restricted retinal locus (their receptive field), and it is suggested that they might directly encode spatial locations of the animal's field of view in a head frame of reference.
Abstract: Extracellular recordings from single neurons of the prestriate area V3A were carried out in awake, behaving monkeys, to test the influence of the direction of gaze on cellular activity. The responsiveness to visual stimulation of about half of the studied neurons (88/187) was influenced by the animal's direction of gaze: physically identical visual stimuli delivered to identical retinotopic positions (on the receptive field) evoked different responses, depending upon the direction of gaze. Control experiments discount the possibility that the observed phenomenon was due to changes in visual background or in depth, depending on the direction in which the animal was looking. The gaze effect modulated cell excitability with different strengths for different gaze directions. The majority of these neurons were more responsive when the animal looked contralaterally with respect to the hemisphere they were recorded from. Gaze-dependent neurons seem to be segregated in restricted cortical regions, within area V3A, without mixing with non-gaze-dependent cells of the same cortical area. The most reliable differences between V3A gaze-dependent neurons and the same type of cells previously described in area 7a (Andersen and Mountcastle, 1983) concern the small receptive field size, the laterality of gaze effect, and the lack of straight-ahead facilitated or inhibited neurons in area V3A. Since the present results show that V3A gaze-dependent neurons combine information about the position of the eye in the orbit with that of a restricted retinal locus (their receptive field), we suggest that they might directly encode spatial locations of the animal's field of view in a head frame of reference. These cells might be involved in the construction of an internal map of the visual environment in which the topographical position of the objects reflects their objective position in space instead of reflecting the retinotopic position of their images. 
Such an objective map of the visual world might allow the stability of visual perception despite eye movement.

Book
08 Jun 1989
TL;DR: The book surveys theories of visual perception of varied calibre, covering Gestalt psychology, Egon Brunswik, neural function, empiricism, J.J. Gibson, and David Marr.
Abstract: The Varied Calibre of Scientific Theories. Gestalt Psychology. Egon Brunswik. Neural Function. Empiricism. J.J. Gibson. David Marr. Final Summary and Conclusions.

Journal ArticleDOI
TL;DR: Three experiments are reported that examined the relationship between covert visual attention and a viewer’s ability to use extrafoveal visual information during object identification, and evidence is found for a model in which extraFoveal information acquired during a fixation derives primarily from the location toward which the eyes will move next.
Abstract: Three experiments are reported that examined the relationship between covert visual attention and a viewer's ability to use extrafoveal visual information during object identification. Subjects looked at arrays of four objects while their eye movements were recorded. Their task was to identify the objects in the array for an immediate probe memory test. During viewing, the number and location of objects visible during given fixations were manipulated. In Experiments 1 and 2, we found that multiple extrafoveal previews of an object did not afford any more benefit than a single extrafoveal preview, as assessed by means of time of fixation on the objects. In Experiment 3, we found evidence for a model in which extrafoveal information acquired during a fixation derives primarily from the location toward which the eyes will move next. The results are discussed in terms of their implications for the relationship between covert visual attention and extrafoveal information use, and a sequential attention model is proposed. During normal perception, the eyes tend to move in short hops or saccades, punctuated by fixations during which the eyes remain relatively stationary and visual information acquisition takes place. This article is concerned with the acquisition and use of information from stimuli beyond the fovea (i.e., extrafoveal stimuli) during an eye fixation, given that the fixation occurs during a sequence of fixations and saccades (in contrast to tachistoscopic procedures). The acquisition of information from an extrafoveal stimulus can be seen as serving at least two distinct functions. First, extrafoveal processing is necessary for determining where to position future eye movements (see, e.g., Loftus, 1983; Morris, 1987; Rayner & Pollatsek, 1987).
Second, and more important for the purposes of this study, while an extrafoveal stimulus may not be fully analyzed before it is fixated, partial analysis of an extrafoveal stimulus often provides information that subsequently speeds the analysis of that stimulus once it

Journal ArticleDOI
TL;DR: Presenting a pre-cue designating the target position facilitated target detectability; even though the target is very similar to the background, a parallel mechanism used for the extraction of stimulus features designates prospective target locations, which may subsequently be checked by a (slow) attentional process.

Journal ArticleDOI
TL;DR: In this article, the early and late components of the event-related brain potential (ERP) elicited by auditory and visual stimuli were studied in 40 normal females between the ages of 7 and 20.
Abstract: The behavior of the early and late components of the event-related brain potential (ERP) elicited by auditory and visual stimuli was studied in 40 normal females between the ages of 7 and 20. The ERPs were collected using two different tasks (i.e.,count and reaction time) in an oddball paradigm. Analysis of the early component (i.e., N1, P2, N2) latencies revealed small but significant decreases with age in the visual modality but no change in the auditory modality. Except for the visual N1, early component amplitudes did not change significantly over this age range. The results showed that auditory and visual P300 latencies, but not amplitudes, changed at significantly different rates over this age range. P300 latencies in the auditory modality showed a relatively abrupt change around age 12, after which P300 latencies changed little and were essentially at their adult levels. The latencies of visual P300s showed a much smaller and more steady decrease with age. Thus visual P300 latencies were shorter than auditory P300s in young children but longer than auditory P300s in older children. Significantly different scalp distributions were found for auditory and visual P300s. Although all P300 activity was maximal over parietal scalp, visual P300s were significantly larger than auditory P300s over central and frontal scalp. The developmental differences, combined with the presence of significantly different scalp topographies for auditory and visual P300s, provide convergent evidence that P300 activity is not independent of the modality of the eliciting stimulus.

Journal ArticleDOI
30 Nov 1989-Nature
TL;DR: It is demonstrated that an independent focus of attention is deployed by each of the surgically separated hemispheres in a visual search task, such that bilateral stimulus arrays can be scanned at a faster rate by 'split-brain' subjects than by normal control subjects.
Abstract: The primate visual system is adept at identifying objects embedded within complex displays that contain a variety of potentially distracting elements. Theories of visual perception postulate that this ability depends on spatial selective attention, a mechanism analogous to a spotlight or zoom lens, which concentrates high-level processing resources on restricted portions of the visual field. Previous studies in which attention was pre-cued to specific locations in the visual field have shown that the spotlight has a single, unified focus, even in the disconnected hemispheres of patients who have undergone surgical transection of the corpus callosum. Here we demonstrate that an independent focus of attention is deployed by each of the surgically separated hemispheres in a visual search task, such that bilateral stimulus arrays can be scanned at a faster rate by 'split-brain' subjects than by normal control subjects. The attentional system used for visual search therefore seems to be functionally and anatomically distinct from the system that mediates voluntary orienting of attention.

Book
01 Jan 1989
TL;DR: An introduction to methods for studying visual cognition, covering the perception of static forms, object recognition, dynamic aspects of vision, attention, memory and imagery, and reading.
Abstract: An Introduction to Methods for Studying Visual Cognition. Seeing Static Forms. Visual Object Recognition. Dynamic Aspects of Vision. Visual Attention. Visual Memory and Imagery. Visual Processing in Reading. References. Indices.

Journal ArticleDOI
22 Sep 1989-Science
TL;DR: The ocular responses to translational disturbances of the observer and of the scene were recorded from monkeys and the associated vestibular and visual responses were both linearly dependent on the inverse of the viewing distance.
Abstract: Eye movements exist to improve vision, in part by preventing excessive retinal image slip. A major threat to the stability of the retinal image comes from the observer's own movement, and there are visual and vestibular reflexes that operate to meet this challenge by generating compensatory eye movements. The ocular responses to translational disturbances of the observer and of the scene were recorded from monkeys. The associated vestibular and visual responses were both linearly dependent on the inverse of the viewing distance. Such dependence on proximity is appropriate for the vestibular reflex, which must transform signals from Cartesian to polar coordinates, but not for the visual reflex, which operates entirely in polar coordinates. However, such shared proximity effects in the visual reflex could compensate for known intrinsic limitations that would otherwise compromise performance at near viewing.
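The inverse-distance dependence reported here follows from simple geometry: to keep a target at distance d stationary on the retina during sideways translation at speed v, the eye must rotate at angular velocity omega = v / d. A minimal sketch of that relationship (function name and units are my own, for illustration):

```python
import math

def required_eye_velocity(translation_speed_m_s, viewing_distance_m):
    """Angular eye velocity (deg/s) needed to stabilize a target at the given
    distance during lateral translation: omega = v / d (radians/s), converted
    to degrees. Compensation demand scales with 1/d, as in the reported data."""
    return math.degrees(translation_speed_m_s / viewing_distance_m)
```

Halving the viewing distance doubles the required compensatory eye velocity, which is why a proximity-scaled gain is appropriate for the translational vestibular reflex.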

Journal ArticleDOI
TL;DR: In this article, a theory for the visual and cognitive processing of pictures and words is introduced, which accounts for slower naming of pictures than reading of words and the symmetry of visual and conceptual comparison results supports the hypothesis that the coding of the mind is neither intrinsically linguistic nor imagistic, but rather abstract.
Abstract: This article reviews the research literature on the differences between word reading and picture naming. A theory for the visual and cognitive processing of pictures and words is then introduced. The theory accounts for slower naming of pictures than reading of words. Reading aloud involves a fast, grapheme-to-phoneme transformation process, whereas picture naming involves two additional processes: (a) determining the meaning of the pictorial stimulus and (b) finding a name for the pictorial stimulus. We conducted a reading-naming experiment, and the time to achieve (a) and (b) was determined to be approximately 160 ms. On the basis of data from a second experiment, we demonstrated that there is no significant difference in time to visually compare two pictures or two words when size of the stimuli is equated. There is no difference in time to make the two types of cross-modality conceptual comparisons (picture first, then word, or word first, then picture). The symmetry of the visual and conceptual comparison results supports the hypothesis that the coding of the mind is neither intrinsically linguistic nor imagistic, but rather it is abstract. There is a potent stimulus size effect, equal for both pictorial and lexical stimuli. Small stimuli take longer to be visually processed than do larger stimuli. For optimal processing, stimuli should not only be equated for size, but should subtend a visual angle of at least 3 degrees. The article ends with the presentation of a mathematical theory that jointly accounts for the data from word-reading, picture-naming visual comparison, and conceptual-comparison experiments.

Journal ArticleDOI
TL;DR: In studies where it is reported that illusory self-rotation (circular vection) is induced more by peripheral displays than by central displays, eccentricity may have been confounded with perceived relative distance and area.
Abstract: In studies where it is reported that illusory self-rotation (circular vection) is induced more by peripheral displays than by central displays, eccentricity may have been confounded with perceived relative distance and area. Experiments are reported in which the direction and magnitude of vection induced by a central display in the presence of a surround display were measured. The displays varied in relative distance and area and were presented in isolation, with one moving and one stationary display, or with both moving in opposite directions. A more distant display had more influence over vection than a near display. A central display induced vection if seen in isolation or through a ‘window’ in a stationary surrounding display. Motion of a more distant central display weakened vection induced by a nearer surrounding display moving the other way. When the two displays had the same area their effects almost cancelled. A moving central display nearer than a textured stationary surround produced vection in...

Journal ArticleDOI
TL;DR: This study examined a widely held assumption concerning the development of visual attention, namely, that different aspects of visual selectivity depend on common processing resources, and found that covert orienting and filtering share processing resources.

Journal ArticleDOI
TL;DR: Concurrent articulation impairs short-term memory, abolishing both the phonological similarity effect and the word length effect when visual presentation is used, and also interferes with the ability to judge whether visually presented words rhyme.
Abstract: In normal adults, concurrent articulation impairs short-term memory, abolishing both the phonological similarity effect and the word length effect when visual presentation is used. It also interferes with ability to judge whether visually presented words rhyme. It is generally assumed that concurrent articulation impairs performance because it prevents people from recoding material into an articulatory form. If this is the explanation, then individuals who are congenitally speechless (anarthric) or speech-impaired (dysarthric) should show the same impairments as normal individuals who are concurrently articulating—i.e. they should have reduced memory spans, fail to show word length and phonological similarity effects in short-term memory, and find rhyme judgement difficult. These predictions were tested in a study of 48 cerebral palsied individuals: 12 anarthric, 12 dysarthric, and 24 controls individually matched to the speech-impaired subjects. There was no impairment of memory span in speech-impaired s...

Journal ArticleDOI
TL;DR: It is concluded that prosopagnosia represents a loss of visual "configural processing"--a learned skill enabling immediate identification of individual members of a class without conscious visuospatial analysis or remembering.

Journal ArticleDOI
TL;DR: The data indicate that when confronted with consistently discordant localization information from the auditory and visual systems, developing owls use vision to calibrate associations of auditory localization cues with locations in space in an attempt to bring into alignment the perceived locations of auditory and visual stimuli emanating from a common source.
Abstract: This study demonstrates that continuous exposure of baby barn owls to a displaced visual field causes a shift in sound localization in the direction of the visual displacement. This implies an innate dominance of vision over audition in the development and maintenance of sound localization. Twelve owls were raised from the first day of eye opening wearing binocular prisms that displaced the visual field to the right by 11 degrees, 23 degrees, or 34 degrees. The prisms were worn for periods of up to 7 months. Consistent with previous results (Knudsen and Knudsen, 1989a), owls reared with displacing prisms did not adjust head orientation to visual stimuli. While wearing prisms, owls consistently oriented the head to the right of visual targets, and, as soon as the prisms were removed, they oriented the head directly at visual targets, as do normal owls. In contrast, prism-reared owls did change head orientation to sound sources even though auditory cues were not altered significantly. Birds reared wearing 11 degrees or 23 degrees prisms oriented the head to the right of acoustic targets by an amount approximately equal to the optical displacement induced by the prisms. Birds raised wearing 34 degrees prisms adjusted sound localization by only about 50% of the optical displacement. Thus, visually guided adjustment of sound localization appears to be limited to about 20 degrees in azimuth. The data indicate that when confronted with consistently discordant localization information from the auditory and visual systems, developing owls use vision to calibrate associations of auditory localization cues with locations in space in an attempt to bring into alignment the perceived locations of auditory and visual stimuli emanating from a common source. Vision exerts this instructive influence on sound localization whether or not visual information is accurate.


Journal ArticleDOI
TL;DR: In this paper, it is argued that our visual knowledge of smoothly curved surfaces can also be defined in terms of local, non-metric order relations, and that relative depth judgments between any two surface regions should be dramatically influenced by monotonicity of depth change along the intervening portions of the surface through which they are separated.
Abstract: In theoretical analyses of visual form perception, it is often assumed that the 3-dimensional structures of smoothly curved surfaces are perceptually represented as point-by-point mappings of metric depth and/or orientation relative to the observer. This article describes an alternative theory in which it is argued that our visual knowledge of smoothly curved surfaces can also be defined in terms of local, nonmetric order relations. A fundamental prediction of this analysis is that relative depth judgments between any two surface regions should be dramatically influenced by monotonicity of depth change (or lack of it) along the intervening portions of the surface through which they are separated. This prediction is confirmed in a series of experiments using surfaces depicted with either shading or texture. Additional experiments are reported, moreover, that demonstrate that smooth occlusion contours are a primary source of information about the ordinal structure of a surface and that the depth extrema in between contours can be optically specified by differences in luminance at the points of occlusion.

Journal ArticleDOI
TL;DR: In this article, the authors examine the contribution of cross-cultural studies to our understanding of the perception and representation of space and show that different cultures use different skills to perform the same perceptual tasks.
Abstract: This paper examines the contribution of cross-cultural studies to our understanding of the perception and representation of space. A cross-cultural survey of the basic difficulties in understanding pictures—ranging from the failure to recognise a picture as a representation to the inability to recognise the object represented in the picture—indicates that similar difficulties occur in pictorial and nonpictorial cultures. The experimental work on pictorial space derives from two distinct traditions: the study of picture perception in “remote” populations and the study of the perceptual illusions. A comparison of the findings on pictorial space perception with those on real space perception and perceptual constancy suggests that cross-cultural differences in the perception of both real and representational space involve two different types of skills: those related exclusively to either real space or representational space, and those related to both. Different cultural groups use different skills to perform the same perceptual tasks.