
Showing papers on "Visual perception published in 1988"


Book
01 Oct 1988
TL;DR: Cognitive neuropsychology asks what can be learned about the operation of the normal cognitive system from the study of the cognitive difficulties arising from neurological damage and disease.
Abstract: As a cognitive neuropsychologist, Tim Shallice considers the general question of what can be learned about the operation of the normal cognitive system from the study of the cognitive difficulties arising from neurological damage and disease. He distinguishes two types of theories of normal function - primarily modular and primarily non-modular - and argues that the problems of making valid inferences about normal function from studies of brain-damaged subjects are more severe for the latter. He first analyzes five well-researched areas in which some modularity can be assumed: short-term memory, reading, writing, visual perception, and the relation between input and output language processing. His aim is to show that inferences about normal function drawn from these neuropsychological studies mirror ones derived directly from studies of normal subjects, and indeed at times preceded them. He then examines these inferences more theoretically, moving from group studies and individual case studies to modular and non-modular systems. Finally, he considers five areas where theories of normal function are relatively undeveloped and neuropsychology provides counterintuitive phenomena and guides to theory-building: the organization of semantic systems, visual attention, concentration and will, episodic memory, and consciousness.

3,212 citations


Journal ArticleDOI
TL;DR: Two studies are reported which suggest that, while certain aspects of attention require that locations be scanned serially, at least one operation may be carried out in parallel across several independent loci in the visual field: the operation of indexing features and tracking their identity.
Abstract: There is considerable evidence that visual attention is concentrated at a single locus in the visual field, and that this locus can be moved independent of eye movements. Two studies are reported which suggest that, while certain aspects of attention require that locations be scanned serially, at least one operation may be carried out in parallel across several independent loci in the visual field. That is the operation of indexing features and tracking their identity. The studies show that: (a) subjects are able to track a subset of up to 5 objects in a field of 10 identical randomly-moving objects in order to distinguish a change in a target from a change in a distractor; and (b) when the speed and distance parameters of the display are designed so that, on the basis of some very conservative assumptions about the speed of attention movement and encoding times, the predicted performance of a serial scanning and updating algorithm would not exceed about 40% accuracy, subjects still manage to do the task with 87% accuracy. These findings are discussed in relation to an earlier, and independently motivated model of feature-binding--called the FINST model--which posits a primitive identity maintenance mechanism that indexes and tracks a limited number of visual objects in parallel. These indexes are hypothesized to serve the function of binding visual features prior to subsequent pattern recognition.
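The serial-versus-parallel contrast above can be made concrete with a toy simulation. The sketch below (in Python) implements a serial scanning-and-updating tracker under invented parameter values; the field size, speeds, and dwell time are assumptions, not the paper's. The point it illustrates is only that a serial tracker's stored positions go stale between visits, so it can latch onto distractors and its accuracy collapses as object speed rises.

import math
import random

random.seed(1)

FIELD = 10.0          # square field, degrees of visual angle (assumption)
N_OBJECTS, N_TARGETS = 10, 5
OBJECT_SPEED = 5.0    # deg/s (assumption)
ATTN_SPEED = 50.0     # deg/s attention movement (a generous assumption)
DWELL = 0.05          # s to re-encode a position per visit (assumption)
DT = 0.01             # simulation step, s
TRIAL_LEN = 5.0       # s

def step(p):
    ang = random.uniform(0.0, 2.0 * math.pi)
    return (min(max(p[0] + OBJECT_SPEED * DT * math.cos(ang), 0.0), FIELD),
            min(max(p[1] + OBJECT_SPEED * DT * math.sin(ang), 0.0), FIELD))

def trial():
    pos = [(random.uniform(0, FIELD), random.uniform(0, FIELD))
           for _ in range(N_OBJECTS)]
    stored = [pos[i] for i in range(N_TARGETS)]  # tracker's remembered positions
    visit, busy, t = 0, 0.0, 0.0
    while t < TRIAL_LEN:
        pos = [step(p) for p in pos]
        if busy <= 0.0:
            k = visit % N_TARGETS
            # attention arrives at the stale stored position of target k and
            # re-encodes whichever object is now nearest (maybe a distractor)
            nearest = min(range(N_OBJECTS),
                          key=lambda i: math.dist(pos[i], stored[k]))
            stored[k] = pos[nearest]
            visit += 1
            nxt = visit % N_TARGETS
            busy = math.dist(stored[k], stored[nxt]) / ATTN_SPEED + DWELL
        busy -= DT
        t += DT
    # success only if every stored position is still nearest to its own target
    return all(min(range(N_OBJECTS),
                   key=lambda i: math.dist(pos[i], stored[k])) == k
               for k in range(N_TARGETS))

print(sum(trial() for _ in range(200)) / 200)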

1,715 citations


Journal ArticleDOI
TL;DR: The Fourteenth Bartlett Memorial Lecture, on features and objects (1988).
Abstract: Features and objects: The Fourteenth Bartlett Memorial Lecture. The Quarterly Journal of Experimental Psychology Section A, Vol. 40, No. 2, pp. 201-237.

1,694 citations


Journal ArticleDOI
TL;DR: The results indicate that neural activity in MT contributes selectively to the perception of motion.
Abstract: Physiological experiments indicate that the middle temporal visual area (MT) of primates plays a prominent role in the cortical analysis of visual motion. We investigated the role of MT in visual perception by examining the effect of chemical lesions of MT on psychophysical thresholds. We trained rhesus monkeys on psychophysical tasks that enabled us to assess their sensitivity to motion and to contrast. For motion psychophysics, we employed a dynamic random dot display that permitted us to vary the intensity of a motion signal in the midst of masking motion noise. We measured the threshold intensity for which the monkey could successfully complete a direction discrimination. In the contrast task, we measured the threshold contrast for which the monkeys could successfully discriminate the orientation of stationary gratings. Injections of ibotenic acid into MT caused striking elevations in motion thresholds, but had little or no effect on contrast thresholds. The results indicate that neural activity in MT contributes selectively to the perception of motion.
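A note on method: a "threshold" here is the stimulus level that yields a criterion accuracy on the psychometric function. The sketch below is illustrative only, not the authors' analysis; the Weibull parameters are invented to show what a lesion-induced elevation of a motion coherence threshold looks like while a contrast threshold stays put.

import math

def percent_correct(coherence, alpha, beta):
    # two-alternative task: 50% floor, saturating toward 100%
    return 1.0 - 0.5 * math.exp(-((coherence / alpha) ** beta))

def threshold(alpha, beta, criterion=0.82, steps=10000):
    # smallest coherence whose predicted accuracy reaches the criterion
    for i in range(1, steps + 1):
        c = i / steps
        if percent_correct(c, alpha, beta) >= criterion:
            return c
    return 1.0

pre = threshold(alpha=0.08, beta=1.5)    # hypothetical intact-MT parameters
post = threshold(alpha=0.40, beta=1.5)   # hypothetical post-injection parameters
print(f"motion coherence threshold: pre {pre:.2f}, post {post:.2f}")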

1,605 citations


Journal ArticleDOI
TL;DR: Detection of change when one display of familiar objects replaces another appears to depend upon capacity-limited visual memory; identity information (i.e., knowing what was present where in the initial display) contributes negligibly.
Abstract: Detection of change when one display of familiar objects replaces another display might be based purely upon visual codes, or also on identity information (i.e., knowing what was present where in the initial display). Displays of 10 alphanumeric characters were presented and, after a brief offset, were presented again in the same position, with or without a change in a single character. Subjects’ accuracy in change detection did not suggest preservation of any more information than is usually available in whole report, except with the briefest of offsets (under 50 msec). Stimulus duration had only modest effects. The interaction of masking with offset duration followed the pattern previously observed with unfamiliar visual stimuli (Phillips, 1974). Accuracy was not reduced by reflection of the characters about a horizontal axis, suggesting that categorical information contributed negligibly. Detection of change appears to depend upon capacity-limited visual memory; (putative) knowledge of what identities are present in different display locations does not seem to contribute.

981 citations


Journal ArticleDOI
27 Oct 1988-Nature
TL;DR: The results indicate that the selectivity acquired by cells in the anterior ventral temporal cortex of monkeys represents a neuronal correlate of the associative long-term memory of pictures.
Abstract: In human long-term memory, ideas and concepts become associated in the learning process. No neuronal correlate for this cognitive function has so far been described, except that memory traces are thought to be localized in the cerebral cortex; the temporal lobe has been assigned as the site for visual experience because electric stimulation of this area results in imagery recall and lesions produce deficits in visual recognition of objects. We previously reported that in the anterior ventral temporal cortex of monkeys, individual neurons have a sustained activity that is highly selective for a few of the 100 coloured fractal patterns used in a visual working-memory task. Here I report the development of this selectivity through repeated trials involving the working memory. The few patterns for which a neuron was conjointly selective were frequently related to each other through stimulus-stimulus association imposed during training. The results indicate that the selectivity acquired by these cells represents a neuronal correlate of the associative long-term memory of pictures.

715 citations


Journal ArticleDOI
TL;DR: Although differences in surface characteristics such as color, brightness, and texture can be instrumental in defining edges and can provide cues for visual search, they play only a secondary role in the real-time recognition of an intact object when its edges can be readily extracted.

614 citations


Journal ArticleDOI
TL;DR: A brain-damaged patient with impaired representations of visual appearance was tested on a variety of tasks used by cognitive psychologists on one side or the other of the visual vs spatial imagery debate; his pattern of performance implies that the two groups of tasks tap distinct types of representation.

483 citations


Journal ArticleDOI
TL;DR: Previously overlooked neuropsychological evidence on the relation between imagery and perception is reviewed, along with its relative immunity to the alternative explanations that have been raised against the cognitive evidence.
Abstract: Does visual imagery engage some of the same representations used in visual perception? The evidence collected by cognitive psychologists in support of this claim has been challenged by three types of alternative explanation: Tacit knowledge, according to which subjects use nonvisual representations to simulate the use of visual representations during imagery tasks, guided by their tacit knowledge of their visual systems; experimenter expectancy, according to which the data implicating shared representations for imagery and perception are an artifact of experimenter expectancies; and nonvisual spatial representation, according to which imagery representations are partially similar to visual representations in the way they code spatial relations but are not visual representations. This article reviews previously overlooked neuropsychological evidence on the relation between imagery and perception, and discusses its relative immunity to the foregoing alternative explanations. This evidence includes electrophysiological and cerebral blood flow studies localizing brain activity during imagery to cortical visual areas, and parallels between the selective effects of brain damage on visual perception and imagery. Because these findings cannot be accounted for in the same way as traditional cognitive data using the alternative explanations listed earlier, they can play a decisive role in answering the title question.

417 citations


Journal ArticleDOI
TL;DR: The performance of schizophrenic patients was compared with that of nonschizophrenic control subjects on tasks requiring them to direct visual attention; the patients demonstrated deficits in attention similar to patients from previous studies who had unilateral lesions of the left hemisphere.
Abstract: Investigators have long suggested that schizophrenia might be related to an impairment in the regulation of attention. In this report, the performance of schizophrenic patients was compared with that of nonschizophrenic control subjects in their ability to direct visual attention. In the first experiment, patients were distinguished from controls by a slower response to a target in the right visual field than to a target in the left visual field when attention was not first directed to the target location. In the second experiment, patients were distinguished from controls by a stronger bias in favor of symbolic information over language information about spatial direction. In both experiments, the patients demonstrated deficits in attention similar to patients from previous studies who had unilateral lesions of the left hemisphere. The identification of performance abnormalities using tasks that are simple, have dissectable cognitive components, have been related to discrete neural systems, and control for nonspecific variables provides the basis for constructing reasonable hypotheses about the cognitive psychology and functional neuroanatomy of schizophrenia.

383 citations


Journal ArticleDOI
TL;DR: The prevalence of countershading in a variety of species, including many fishes, suggests that shading may be a crucial source of information about three-dimensional shape.
Abstract: Our visual experience of the world is based on two-dimensional images: flat patterns of varying light intensity and color falling on a single plane of cells in the retina. Yet we come to perceive solidity and depth. We can do this because a number of cues about depth are available in the retinal image: shading, perspective, occlusion of one object by another and stereoscopic disparity. In some mysterious way the brain is able to exploit these cues to recover the three-dimensional shapes of objects. Of the many mechanisms employed by the visual system to recover the third dimension, the ability to exploit shading is probably the most primitive. One reason for believing this is that in the natural world many animals have evolved pale undersides, presumably to make themselves less visible to predators. "Countershading" compensates for the shading effects caused by the sun shining from above and has at least two benefits: it reduces the contrast with the background and it "flattens" the animal's perceived shape. The prevalence of countershading in a variety of species, including many fishes, suggests that shading may be a crucial source of information about three-dimensional shape. Painters, of course, have long exploited shading.
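The shading cue the article describes can be written down directly. The toy Lambertian model below (all numbers are assumptions) shows why a uniformly pigmented body lit from above has a strong top-to-bottom brightness gradient, and why countershading roughly cancels it, "flattening" the perceived shape.

AMBIENT = 0.3  # diffuse skylight term (assumption)

def brightness(ny, albedo):
    # Lambertian shading with the sun straight overhead: n . l reduces to
    # the vertical component ny of the surface normal
    return albedo * (AMBIENT + max(0.0, ny))

for ny in (0.8, 0.0, -0.8):  # upper flank, side, underside
    uniform = brightness(ny, 0.5)
    counter = brightness(ny, 0.5 - 0.4 * ny)  # pigment graded paler underneath
    print(f"ny={ny:+.1f}  uniform={uniform:.2f}  countershaded={counter:.2f}")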


Journal ArticleDOI
TL;DR: A category-specific semantic disorder, selectively affecting living things and food while sparing inanimate objects, was observed in a patient who had made a partial recovery from herpes simplex encephalitis; the impairment had similar characteristics with verbal and pictorial material.
Abstract: A category-specific semantic disorder, selectively affecting living things and food and sparing inanimate objects, was observed in a patient (LA) who had made a partial recovery from herpes simplex encephalitis. This impairment was observed to have similar characteristics both with verbal and pictorial material, and a significant degree of consistency was observed between repeated presentations and across various modalities of administration of the same stimuli. In order to study the possible role of interactions between verbal-semantic and visual-semantic impairment, we constructed a test of “naming animals by definitions”, in which two sorts of definitions were contrasted: (1) those stressing visual perceptual features; and (2) those using verbal metaphorical expressions, or a description of the function accomplished by that animal for man, to allow identification. LA performed much better in the second than in the first condition. On the grounds of these results, the following hypotheses were ...

Journal ArticleDOI
TL;DR: An experiment in which verbal or spatial decision tasks, responded to by either voice or keypress, were time-shared with second-order tracking provided support for the importance of the dichotomy between verbal and spatial processing codes in accounting for task interference.
Abstract: The relevance of codes and modalities in a multiple-resource model to the prediction of task interference was investigated in an experiment in which either verbal or spatial decision tasks, responded to with either voice or keypress, were time-shared with second-order tracking. Results indicate the importance of the dichotomy between verbal and spatial processing codes in accounting for task interference. Interference with tracking was consistently greater, and difficulty/performance trade-offs were stronger, when the spatial decision task was performed and the manual response was used. A review of the literature on the interference between a continuous visual task and a discrete task whose modality is either auditory or visual suggests that scanning produces a dominant cost to intramodal configurations when visual channels are separated in space. In the absence of visual separation, the differences between cross-modal and intramodal performance may be best accounted for by a mechanism of preemption.


Journal ArticleDOI
TL;DR: A neural network model is described, based on back-propagation learning, that demonstrates how spatial location could be derived from the population response of area 7a neurons and accurately accounts for the observed response properties of these neurons.
Abstract: Lesions to the posterior parietal cortex in monkeys and humans produce spatial deficits in movement and perception. In recording experiments from area 7a, a cortical subdivision in the posterior parietal cortex in monkeys, we have found neurons whose responses are a function of both the retinal location of visual stimuli and the position of the eyes in the orbits. By combining these signals, area 7a neurons code the location of visual stimuli with respect to the head. However, these cells respond over only limited ranges of eye positions (eye-position-dependent coding). To code location in craniotopic space at all eye positions (eye-position-independent coding) an additional step in neural processing is required that uses information distributed across populations of area 7a neurons. We describe here a neural network model, based on back-propagation learning, that both demonstrates how spatial location could be derived from the population response of area 7a neurons and accurately accounts for the observed response properties of these neurons.
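A minimal sketch of this kind of model follows; the sizes and learning parameters are invented, and this is a 1-D toy, not Zipser and Andersen's actual network. A small back-propagation network is trained to add eye position to retinal position, which is exactly the head-centered coding problem the abstract describes.

import numpy as np

rng = np.random.default_rng(0)

# training pairs: head-centered target = retinal location + eye position (1-D toy)
retina = rng.uniform(-1, 1, size=(2000, 1))
eye = rng.uniform(-1, 1, size=(2000, 1))
x = np.hstack([retina, eye])
y = retina + eye

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)   # hidden "gain field" layer
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(x @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    # back-propagation of the squared error, full batch
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final = np.tanh(x @ W1 + b1) @ W2 + b2
print("mean abs error:", np.abs(final - y).mean())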

Journal ArticleDOI
TL;DR: In this article, an experiment was conducted on a circuit under actual driving conditions, where experienced drivers and beginners had to indicate the moment they expected a collision with a stationary obstacle to take place.
Abstract: Previous studies on the visual origin of time-to-collision (Tc) information have demonstrated that Tc estimates can be based solely on the processing of target expansion rate (optic variable tau). But in the simulated situations used (film clips), there was little reliable information on speed (owing to reduced peripheral vision) and distance (owing to the absence of binocular distance cues) available. In order to determine whether these kinds of information are also taken into account, it is necessary to take an approach where the subject receives a more complete visual input. Thus, an experiment conducted on a circuit under actual driving conditions is reported. Experienced drivers and beginners, who were passengers in a car, had to indicate the moment they expected a collision with a stationary obstacle to take place. Subjects were blindfolded after a viewing time of 3 s. The conditions for speed evaluation (normal versus restricted visual field) and distance evaluation (binocular versus monocular vision) by subjects were varied. The approach speed (30 and 90 km/h) and actual Tc (3 and 6 s) were also varied. The results show that accuracy of Tc estimation increased with (i) normal visual field, (ii) binocular vision, (iii) higher speeds, and (iv) driving experience. These findings have been interpreted as indicating that both speed and distance information are taken into account in Tc estimation. They suggest furthermore that these two kinds of information may be used differently depending on the skill level of the subject. The results are discussed in terms of the complementarity of the various potentially usable visual means of obtaining Tc information.
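The optic variable tau mentioned above has a simple worked form: for an object of physical size S at distance d, the image angle is theta = 2*atan(S/(2d)), and tau = theta / (dtheta/dt) approximates time-to-collision from the image alone, with no separate speed or distance estimate. A short numerical check under assumed values (the obstacle size, speed, and distance below are illustrative, chosen to match the paper's 90 km/h, Tc = 3 s condition):

import math

S = 1.8     # obstacle width, m (assumption)
v = 25.0    # approach speed, m/s (90 km/h)
d = 75.0    # current distance, m  -> true Tc = 3 s

def theta(dist):
    return 2 * math.atan(S / (2 * dist))

dt = 0.001
theta_dot = (theta(d - v * dt) - theta(d)) / dt  # numerical image expansion rate
tau = theta(d) / theta_dot
print(f"true time-to-collision: {d / v:.2f} s, tau estimate: {tau:.2f} s")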


Journal ArticleDOI
TL;DR: Grouped and individual results of the one direct and two indirect scaling tasks suggest that perceivers use these sources of information in an additive fashion, implying independent use of information by four separate, functional subsystems within the visual system, here called minimodules.
Abstract: In natural vision, information overspecifies the relative distances between objects and their layout in three dimensions. Directed perception applies (Cutting, 1986), rather than direct or indirect perception, because any single source of information (or cue) might be adequate to reveal relative depth (or local depth order), but many are present and useful to observers. Such overspecification presents the theoretical problem of how perceivers use this multiplicity of information to arrive at a unitary appreciation of distance between objects in the environment. This article examines three models of directed perception: selection, in which only one source of information is used; addition, in which all sources are used in simple combination; and multiplication, in which interactions among sources can occur. To establish perceptual overspecification, we created stimuli with four possible sources of monocular spatial information, using all combinations of the presence or absence of relative size, height in the projection plane, occlusion, and motion parallax. Visual stimuli were computer generated and consisted of three untextured parallel planes arranged in depth. Three tasks were used: one of magnitude estimation of exocentric distance within a stimulus, one of dissimilarity judgment in how a pair of stimuli revealed depth, and one of choice judgment within a pair as to which one revealed depth best. Grouped and individual results of the one direct and two indirect scaling tasks suggest that perceivers use these sources of information in an additive fashion. That is, one source (or cue) is generally substitutable for another, and the more sources that are present, the more depth is revealed. This pattern of results suggests independent use of information by four separate, functional subsystems within the visual system, here called minimodules. Evidence for and advantages of minimodularity are discussed.
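The additive model the abstract favors is easy to state: judged depth is a weighted sum of whichever cues are present, with no interaction terms. The sketch below enumerates all 16 cue combinations under invented weights (the weights are assumptions, not the paper's estimates); a multiplicative model would add cross-product terms between cues.

from itertools import product

# illustrative weights, one per monocular cue (assumptions)
weights = {"size": 1.0, "height": 0.8, "occlusion": 1.2, "parallax": 1.5}

def additive_depth(cues):
    # each present cue adds its weight, independently of the others
    return sum(w for name, w in weights.items() if name in cues)

for combo in product([0, 1], repeat=4):
    cues = {name for name, on in zip(weights, combo) if on}
    print(sorted(cues) or ["none"], "->", round(additive_depth(cues), 1))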

Journal ArticleDOI
01 Mar 1988-Cortex
TL;DR: It is argued that this syndrome has all the hallmarks of an apperceptive agnosia, a failure of perceptual categorisation, the stage at which the physical identity of the object is specified; the two discontinuities between visual-sensory processing, perceptual categorisation, and visual-semantic processing are discussed in terms of a two-stage categorical model of object recognition.

Journal ArticleDOI
TL;DR: In this article, the authors investigated how sex and situation-specific power factors relate to visual behavior in mixed-sex interactions and found that both men and women high in expertise or reward power displayed high visual dominance, defined as the ratio of looking while speaking to looking while listening.
Abstract: Two studies, with undergraduate subjects, investigated how sex and situation-specific power factors relate to visual behavior in mixed-sex interactions. The power variable in Study 1 was expert power, based on differential knowledge. Mixed-sex dyads were formed such that members had complementary areas of expertise. In Study 2, reward power was manipulated. Consistent with expectation states theory, both men and women high in expertise or reward power displayed high visual dominance, defined as the ratio of looking while speaking to looking while listening. Specifically, men and women high in expertise or reward power exhibited equivalent levels of looking while speaking and looking while listening. High visual dominance ratios have been associated with high social power in previous research. Both men and women low in expertise or reward power looked more while listening than while speaking, producing a relatively low visual dominance ratio. In conditions in which men and women did not possess differential expertise or reward power, visual behavior was related to sex. Men displayed visual behavior similar to their patterns in the high expertise and high reward power conditions, whereas women exhibited visual behavior similar to their patterns in the low expertise and low reward power conditions. The results demonstrate how social expectations are reflected in nonverbal power displays.
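The visual dominance ratio used in these studies is simple arithmetic: the proportion of speaking time spent looking at the partner divided by the proportion of listening time spent looking. A worked example with made-up numbers:

# hypothetical gaze tallies for one dyad member (illustrative values only)
look_while_speaking = 42.0 / 60.0   # 42 s of partner-directed gaze in 60 s of speaking
look_while_listening = 45.0 / 75.0  # 45 s of partner-directed gaze in 75 s of listening

vdr = look_while_speaking / look_while_listening
print(f"visual dominance ratio: {vdr:.2f}")  # ratios above 1 read as dominant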

Journal ArticleDOI
TL;DR: The pattern of DG uptake produced by binocular viewing was found to deviate in a number of ways from that expected by linearly summing the component monocular DG patterns, including an enhancement of the representation of visual field borders between stimuli differing from each other in texture, orientation, direction, etc.
Abstract: A series of experiments was carried out using 14C-2-deoxy-D-glucose (DG) in order to examine the functional architecture of macaque striate (primary visual) cortex. This paper describes the results of experiments on uptake during various baseline (or reference) conditions of visual stimulation (described below), and on differences in the functional architecture following monocular versus binocular viewing conditions. In binocular “baseline” experiments, monkeys were stimulated either (1) in the dark, (2) with a diffuse gray screen, or (3) with a very general visual stimulus composed of gratings of varied orientation and spatial frequency. In all of these conditions, DG uptake was found to be topographically uniform within all layers of parafoveal striate cortex. In monocular experiments that were otherwise similar, uptake was topographically uniform within the full extent of the eye dominance strip, in all layers. Certain other visual stimuli produce high uptake in the blobs, and still another set of visual stimuli (including high-spatial-frequency gratings) produce highest uptake between the blobs at parafoveal eccentricities, even in an unanesthetized, unparalyzed monkey. Eye movements per se had no obvious effect on striate DG uptake. Endogenous uptake in the blobs (relative to that in the interblobs) appears higher in the squirrel monkey than in the macaque. The pattern of DG uptake produced by binocular viewing was found to deviate in a number of ways from that expected by linearly summing the component monocular DG patterns. One of the most interesting deviations was an enhancement of the representation of visual field borders between stimuli differing from each other in texture, orientation, direction, etc. This “border enhancement” was confined to striate layers 1–3 (not appearing in any of the striate input layers), and it only appeared following binocular, but not monocular, viewing conditions. The border enhancement may be related to a suppression of DG uptake that occurs during binocular viewing conditions in layers 2 + 3 (and perhaps layers 1 and 4B), but not in layers 4Cα, 4Cβ, 5 or 6. Another major class of binocular interaction was a spread of neural activity into the “unstimulated” ocular dominance strips following monocular stimulation. Such an effect was prominent in striate layer 4Cα, but it did not occur in layer 4Cβ. This “binocular” spread of DG uptake into the inappropriate eye dominance strip in 4Cα may be related to the appearance of orientation tuning and orientation columns in that layer. No DG effects were seen that depended on the absolute disparity of visual stimuli in macaque striate cortex.

Journal ArticleDOI
TL;DR: The right hemisphere maintains a highly developed social-emotional mental system and can independently perceive, recall and act on certain memories and experiences without the aid or active reflective participation of the left, which leads to situations in which the right and left halves of the brain sometimes act in an uncooperative fashion.
Abstract: Based on a review of numerous studies conducted on normal, neurosurgical and brain-injured individuals, the right cerebral hemisphere appears to be dominant in the perception and identification of environmental and nonverbal sounds; the analysis of geometric and visual space (e.g., depth perception, visual closure); somesthesis, stereognosis, the maintenance of the body image; the production of dreams during REM sleep; the perception of most aspects of musical stimuli; and the comprehension and expression of prosodic, melodic, visual, facial, and verbal emotion. When the right hemisphere is damaged a variety of cognitive abnormalities may result, including hemi-inattention and neglect, prosopagnosia, constructional apraxia, visual-perceptual disturbances, and agnosia for environmental, musical, and emotional sounds. Similarly, a myriad of affective abnormalities may occur, including indifference, depression, hysteria, gross social-emotional disinhibition, florid manic excitement, childishness, euphoria, impulsivity, and abnormal sexual behavior. Patients may become delusional, engage in the production of bizarre confabulations and experience a host of somatic disturbances such as pain and body-perceptual distortions. Based on studies of normal and "split-brain" functioning, it also appears that the right hemisphere maintains a highly developed social-emotional mental system and can independently perceive, recall and act on certain memories and experiences without the aid or active reflective participation of the left. This leads to situations in which the right and left halves of the brain sometimes act in an uncooperative fashion, which gives rise to inter-manual and intra-psychic conflicts.

Journal ArticleDOI
TL;DR: An effect of imagery was seen within 200 ms following stimulus presentation, at the latency of the first negative component of the visual ERP, localized at the occipital and posterior temporal regions of the scalp, that is, directly over visual cortex. This finding provides support for the claim that mental images interact with percepts in the visual system proper and hence that mental images are themselves visual representations.
Abstract: Does mental imagery involve the activation of representations in the visual system? Systematic effects of imagery on visual signal detection performance have been used to argue that imagery and the perceptual processing of stimuli interact at some common locus of activity (Farah, 1985). However, such a result is neutral with respect to the question of whether the interaction occurs during modality-specific visual processing of the stimulus. If imagery affects stimulus processing at early, modality-specific stages of stimulus representation, this implies that the shared stimulus representations are visual, whereas if imagery affects stimulus processing only at later, amodal stages of stimulus representation, this implies that imagery involves more abstract, postvisual stimulus representations. To distinguish between these two possibilities, we repeated the earlier imagery-perception interaction experiment while recording event-related potentials (ERPs) to stimuli from 16 scalp electrodes. By observing the time course and scalp distribution of the effect of imagery on the ERP to stimuli, we can put constraints on the locus of the shared representations for imagery and perception. An effect of imagery was seen within 200 ms following stimulus presentation, at the latency of the first negative component of the visual ERP, localized at the occipital and posterior temporal regions of the scalp, that is, directly over visual cortex. This finding provides support for the claim that mental images interact with percepts in the visual system proper and hence that mental images are themselves visual representations.

Journal ArticleDOI
TL;DR: The verve of op art, the serenity of a pointillist painting and the 3-D puzzlement of an Escher print derive from the interplay of the art with the anatomy of the visual system.
Abstract: The verve of op art, the serenity of a pointillist painting and the 3-D puzzlement of an Escher print derive from the interplay of the art with the anatomy of the visual system. Color, shape and movement are each processed separately by different structures in the eye and brain and then are combined to produce the experience we call perception.

Journal ArticleDOI
TL;DR: This paper shows how the seemingly intractable problem of visual perception can be converted into a much simpler problem by the application of several physical and biological constraints and argues strongly for the validity of the computational approach to modeling the human visual system.
Abstract: This paper demonstrates how serious consideration of the deep complexity issues inherent in the design of a visual system can constrain the development of a theory of vision. We first show how the seemingly intractable problem of visual perception can be converted into a much simpler problem by the application of several physical and biological constraints. For this transformation, two guiding principles are used that are claimed to be critical in the development of any theory of perception. The first is that analysis at the ‘complexity level’ is necessary to ensure that the basic space and performance constraints observed in human vision are satisfied by a proposed system architecture. Second, the ‘maximum power/minimum cost principle’ ranks the many architectures that satisfy the complexity level and allows the choice of the best one. The best architecture chosen using this principle is completely compatible with the known architecture of the human visual system, and in addition, leads to several predictions. The analysis provides an argument for the computational necessity of attentive visual processes by exposing the computational limits of bottom-up early vision schemes. Further, this argues strongly for the validity of the computational approach to modeling the human visual system. Finally, a new explanation for the pop-out phenomenon so readily observed in visual search experiments is proposed.
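The complexity-level argument can be illustrated with a toy count (the numbers are illustrative, not the paper's): if unbounded bottom-up search had to entertain every subset of image locations as a candidate target, cost would grow exponentially in image size, whereas an attentive scheme that inspects one contiguous region at a time grows only polynomially.

P = 64  # locations in a tiny 8x8 "image" (assumption)

unbounded = 2 ** P            # every subset of locations is a candidate
attentive = P * (P + 1) // 2  # e.g., contiguous 1-D intervals: O(P^2)

print(f"unbounded search: {unbounded:.3e} candidates")
print(f"attentive search: {attentive} candidates")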

Journal ArticleDOI
TL;DR: Using a similar procedure to earlier work, this article found retinotopic masking and retinotopic integration across saccades, and concludes that previous findings can be explained solely in retinotopic terms and provide no convincing evidence for spatiotopic visual persistence.
Abstract: The visual world appears unified, stable, and continuous despite rapid changes in eye position. How this is accomplished has puzzled psychologists for over a century. One possibility is that visual information from successive eye fixations is fused in memory according to environmental or spatiotopic coordinates. Evidence supporting this hypothesis was provided by Davidson, Fox, and Dick (1973). They presented a letter array in one fixation and a mask at one letter position in a subsequent fixation and found that the mask inhibited report of the letter that shared its retinal coordinates but appeared to occupy the same position as the letter that shared its spatial coordinates. This suggests the existence of a retinotopic visual persistence at which transsaccadic masking occurs and a spatiotopic visual persistence at which transsaccadic integration, or fusion, occurs. Using a similar procedure, we found retinotopic masking and retinotopic integration: The mask interfered with the letter that shared its retinal coordinates, but also appeared to cover that letter. In another experiment, instead of a mask we presented a bar marker over one letter position, and subjects reported the letter that appeared underneath the bar; subjects usually reported the letter with the same retinal coordinates as the bar, again suggesting retinotopic rather than spatiotopic integration across saccades. In Experiment 3 a bar marker was again presented over one letter position, but in addition a visual landmark was presented after the saccade so that subjects could localize the bar's spatial position; subjects still reported that the letter that shared the bar's retinal coordinates appeared to be under it, but they were also able to accurately specify the bar's spatial position. This ability could have been based on retinal information (the visual landmark) present in the second fixation only, however, rather than spatiotopic visual persistence. Because such a visual landmark was present in the Davidson et al. (1973) experiments, we conclude that their findings can be explained solely in retinotopic terms and provide no convincing evidence for spatiotopic visual persistence. But the exposure parameters that Davidson et al. (1973) and we used were biased in favor of retinotopic, rather than spatiotopic, coding: The stimuli were presented very briefly just before a saccadic eye movement, and subjects are poor at spatially localizing stimuli under these conditions. Thus, in Experiment 4 we presented the letter array about 200 ms before the saccade; then, subjects reported that the letter with the same spatial coordinates as the bar appeared under it.
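The retinotopic-versus-spatiotopic distinction at issue reduces to a piece of coordinate bookkeeping: a stimulus's spatiotopic (environmental) position is its retinal position plus the current eye position, so after a saccade the same retinal coordinate names a different spatial location. A minimal sketch with illustrative numbers:

def spatiotopic(retinal, eye):
    # environmental position = retinal position + eye position (1-D, degrees)
    return retinal + eye

letter_retinal, eye_before = 2.0, 0.0  # letter seen 2 deg right of fixation
mask_retinal, eye_after = 2.0, 5.0     # same retinal slot after a 5 deg saccade

print("letter spatial position:", spatiotopic(letter_retinal, eye_before))  # 2.0
print("mask   spatial position:", spatiotopic(mask_retinal, eye_after))     # 7.0
# Retinotopic fusion pairs these two stimuli (same retinal coordinate,
# different places); spatiotopic fusion would instead pair stimuli whose
# retinal + eye sums match.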

Journal ArticleDOI
TL;DR: A series of studies with 10-month-old infants found that although the auditory modality can dominate the visual modality at this age, the visual modality can process temporal information when the temporal relationship of the information in the two modalities is distinct.
Abstract: A series of studies was conducted with 10-month-old infants in which their response to temporally modulated auditory-visual compounds was examined. The general procedure consisted of first habituating the infants to a compound stimulus (consisting of a flashing checkerboard and a pulsing sound) and then testing their response to it by presenting a series of trials where either one or two temporal attributes of the visual, the auditory, or of both components were changed. When the auditory and visual components were temporally identical during the habituation phase, the infants only encoded the temporal attributes of the auditory component. When the two components were temporally distinct, or when they were identical but multiple discriminative cues were available, the infants encoded the temporal aspects of both the auditory and the visual components. When the information context was made more complex, the infants' performance deteriorated, but when the salience of the visual component was increased the infants' performance improved. In sum, although the auditory modality can dominate the visual modality at 10 months of age, the visual modality can process temporal information when the temporal relationship of the information in the two modalities is distinct.

Journal ArticleDOI
TL;DR: Regardless of age or reading ability, poor readers-poor spellers and good readers-poor spellers alike had difficulty converting sounds into positionally appropriate graphemes; only older children with good word recognition but poor spelling had relatively good visual memory for words and showed relatively good use of rudimentary sound-letter correspondences.
Abstract: Results of recent studies comparing the spelling errors of children with varying discrepancies between their reading and spelling skills have yielded conflicting results. Some studies suggest that good readers-poor spellers (mixed) are characterized by a set of deficits that differentiates them from poor readers-poor spellers (poor). Other studies fail to find differences between groups of poor spellers who differ in their reading skills. The present study attempted to determine the degree to which these discrepant results reflected differences in methods of subject selection and of error analysis. Two different sets of criteria were used to identify good readers-poor spellers. Subjects were selected on the basis of standardized reading comprehension and spelling test scores or on the basis of standardized single-word-recognition and spelling test scores. The phonetic accuracy of the spelling errors was assessed using two different scoring systems – one that took positional constraints into account and one that did not. In addition, children were identified at two different age levels, allowing for developmental comparisons. Regardless of age or reading ability, poor and mixed spellers had difficulty converting sounds into positionally appropriate graphemes. Only older children with good word recognition but poor spelling skills provided some evidence for a distinct subgroup of poor spellers. These children had relatively good visual memory for words and, unlike other poor spellers, showed relatively good use of rudimentary sound-letter correspondences.

Journal ArticleDOI
TL;DR: It was concluded that autistic subjects perform particularly poorly on meaningless materials, but they are able to utilize meaning to aid their visual memory.
Abstract: High-functioning autistic individuals were compared with age-matched normal control subjects on a visual recognition memory task. In order to evaluate the effects of “meaning” and “delay” on the visual memory of autistic individuals, meaningful (pictures) and meaningless (nonsense shapes) stimuli were presented visually in no delay and 1-minute delay intervals to both groups. It was concluded that autistic subjects perform particularly poorly on meaningless materials, but they are able to utilize meaning to aid their visual memory. Contrary to expectations, 1-minute delay intervals did not differentially affect the visual memory performance of autistic individuals compared to control subjects. The results do not support the idea of a simple parallel between autism and mediotemporal lobe amnesias. The visual memory performance of the autistic subjects was discussed in the light of the possibility of a subtle involvement of the mediotemporal brain structures and inflexible cognitive strategies poorly suited to encode novel information.