
Showing papers on "Visual perception published in 1985"


Book ChapterDOI
TL;DR: This study addresses the question of how simple networks of neuron-like elements can account for a variety of phenomena associated with this shift of selective visual attention and suggests a possible role for the extensive back-projection from the visual cortex to the LGN.
Abstract: Psychophysical and physiological evidence indicates that the visual system of primates and humans has evolved a specialized processing focus moving across the visual scene. This study addresses the question of how simple networks of neuron-like elements can account for a variety of phenomena associated with this shift of selective visual attention. Specifically, we propose the following: (1) A number of elementary features, such as color, orientation, direction of movement, disparity, etc., are represented in parallel in different topographical maps, called the early representation. (2) There exists a selective mapping from the early topographic representation into a more central non-topographic representation, such that at any instant the central representation contains the properties of only a single location in the visual scene, the selected location. We suggest that this mapping is the principal expression of early selective visual attention. One function of selective attention is to fuse information from different maps into one coherent whole. (3) Certain selection rules determine which locations will be mapped into the central representation. The major rule, using the conspicuity of locations in the early representation, is implemented using a so-called Winner-Take-All network. Inhibiting the selected location in this network causes an automatic shift towards the next most conspicuous location. Additional rules are proximity and similarity preferences. We discuss how these rules can be implemented in neuron-like networks and suggest a possible role for the extensive back-projection from the visual cortex to the LGN.
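The core selection rule described in the abstract (a Winner-Take-All over a conspicuity map, with inhibition of the winner producing an automatic shift to the next most conspicuous location) can be sketched in a few lines. This is a minimal illustrative sketch only: the saliency values are made up, and the proximity/similarity preferences and the neural implementation discussed in the paper are omitted.

```python
import numpy as np

def attention_scanpath(saliency, n_shifts=3):
    """Winner-Take-All selection with inhibition of return.

    At each step the most conspicuous location wins; inhibiting the
    winner shifts selection to the next most conspicuous location.
    """
    s = saliency.astype(float).copy()
    visited = []
    for _ in range(n_shifts):
        # WTA: the location with maximal conspicuity wins.
        winner = tuple(int(i) for i in np.unravel_index(np.argmax(s), s.shape))
        visited.append(winner)
        # Inhibition of the selected location forces the shift.
        s[winner] = -np.inf
    return visited

# Toy 3x3 conspicuity map (illustrative values only).
saliency = np.array([[0.1, 0.9, 0.2],
                     [0.4, 0.3, 0.8],
                     [0.7, 0.0, 0.5]])
print(attention_scanpath(saliency))  # [(0, 1), (1, 2), (2, 0)]
```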

3,930 citations


Journal ArticleDOI
TL;DR: The historical development of the evidence of response selectivity for visual stimuli presented beyond the CRF is traced; the anatomical pathways that subserve these far-reaching surround mechanisms are examined; and the possible relationships between these mechanisms and perception are explored.
Abstract: We perceive the visual world as a unitary whole, yet one of the guiding principles of nearly a half century of neurophysiological research since the early recordings by Hartline (1938) has been that the visual system consists of neurons that are driven by stimulation within small discrete portions of the total visual field. These classical receptive fields (CRFs) have been mapped with the excitatory responses evoked by a flashed or moving stimulus, usually a spot or bar of light. Most of the visual neurons, in turn, are organized in a series of maps of the visual field, at least 10 of which exist in the visual cortex in primates as well as additional topographic representations in the lateral geniculate body, pulvinar and optic tectum (Allman 1977, Newsome & Allman 1980, Allman & Kaas 1984). It has been widely assumed that perceptual functions that require the integration of inputs over large portions of the visual field must be either collective properties of arrays of neurons representing the visual field, or features of those neurons at the highest processing levels in the visual system, such as the cells in inferotemporal or posterior parietal cortex that typically possess very large receptive fields and do not appear to be organized in visuotopic maps. These assumptions have been based on the results of the many studies in which receptive fields were mapped with conventional stimuli, presented one at a time, against a featureless background. However, unlike the neurophysiologist's tangent screen, the natural visual scene is rich in features, and there is a growing body of evidence that in many visual neurons stimuli presented outside the CRF strongly and selectively influence neural responses to stimuli presented within the CRF. These results suggest obvious mechanisms for local-global comparisons within visuotopically organized structures. 
Such broad and specific surround mechanisms could participate in many functions that require the integration of inputs over wide regions of the visual space such as the perceptual constancies, the segregation of figure from ground, and depth perception through motion parallax. In the first section of this paper, we trace the historical development of the evidence of response selectivity for visual stimuli presented beyond the CRF; in the second, examine the anatomical pathways that subserve these far-reaching surround mechanisms; and in the third, explore the possible relationships between these mechanisms and perception.

1,079 citations


Book
01 Jan 1985
TL;DR: This book discusses the Physiological Basis of Visual Perception, theories of the Control of Action, and the Ecological Approach to Visual Perception.
Abstract: Part I. The Physiological Basis of Visual Perception. Light and Eyes. The Neurophysiology of the Retina. Visual Pathways in the Brain. Part II. Processing Retinal Images. Approaches to the Psychology of Visual Perception. Images, Filters and Features: The Primal Sketch. Perceptual Organisation. Perceiving Depth. The Computation of Image Motion. Object Recognition. Connectionist Models of Visual Perception. Part III. Visual Information for the Control of Action. Introduction to the Ecological Approach to Visual Perception. Visual Guidance Of Animal Locomotion. Visual Guidance of Human Action. Theories of the Control of Action. Event Perception. Perception of the Social World. Part IV. Conclusions. Contrasting Theories of Visual Perception.

575 citations


Journal ArticleDOI
01 May 1985-Nature
TL;DR: It is reported that V4 and V5 are connected with separate cytochrome oxidase-defined subregions of V2, suggesting that cortical pathways dealing with motion and colour perception are segregated in their passage through V2, and reinforcing evidence for functional specialization in the visual cortex.
Abstract: V5 and V4 are areas of macaque monkey prestriate visual cortex that are specialized for involvement in different aspects of visual perception, namely motion for V5 (refs 1-4) and colour vision, with other possible functions, for V4 (refs 2, 5-9). Thus, it is unlikely that they should be fed the same information for further processing, yet both receive a strong input from patches of the upper layers of V2 (refs 10, 11), the area immediately adjoining the primary visual cortex, V1. V2, however, seems to comprise functionally distinct subregions, which can be revealed by staining the tissue for the mitochondrial enzyme cytochrome oxidase. Here we report that V4 and V5 are connected with separate cytochrome oxidase-defined subregions of V2, suggesting that cortical pathways dealing with motion and colour perception are segregated in their passage through V2, and reinforcing evidence for functional specialization in the visual cortex.

446 citations


Journal ArticleDOI
TL;DR: A general computational treatment of how mammals are able to deal with visual objects and environments that tries to cover the entire range from behavior and phenomenological experience to detailed neural encodings in crude but computationally plausible reductive steps.
Abstract: This paper presents a general computational treatment of how mammals are able to deal with visual objects and environments. The model tries to cover the entire range from behavior and phenomenological experience to detailed neural encodings in crude but computationally plausible reductive steps. The problems addressed include perceptual constancies, eye movements and the stable visual world, object descriptions, perceptual generalizations, and the representation of extrapersonal space. The entire development is based on an action-oriented notion of perception. The observer is assumed to be continuously sampling the ambient light for information of current value. The central problem of vision is taken to be categorizing and locating objects in the environment. The critical step in this process is the linking of visual information to symbolic object descriptions; this is called indexing, from the analogy of identifying a book from index terms. The system must also identify situations and use this knowledge to guide movement and other actions in the environment. The treatment focuses on the different representations of information used in the visual system. The four representational frames capture information in the following forms: retinotopic, head-based, symbolic, and allocentric. The functional roles of the four frames, the communication among them, and their suggested neurophysiological realization constitute the core of the paper. The model is perforce crude, but appears to be consistent with all relevant findings.

385 citations


Journal ArticleDOI
TL;DR: This article found that infants can discriminate between a perfectly contingent live display of their own leg motion and a noncontingent display of self or a peer by preferential fixation.
Abstract: Five-month-old infants can detect the invariant relationship between their own leg motion and a video display of that motion. In three experiments they discriminated between a perfectly contingent live display of their own leg motion and a noncontingent display of self or a peer. They showed this discrimination by preferential fixation of the noncontingent display. This effect was evident even when the infant's direct view of his or her own body was occluded, eliminating video image discrimination on the basis of an intramodal visual comparison between the sight of self-motion and the video display of that motion. These findings suggest that the contingency provided by a live display of one's body motion is perceived by detecting the invariant intermodal relationship between proprioceptive information for motion and the visual display of that motion. The detection of these relations may be fundamental to the development of self-perception in infancy. In addition, though 3-month-olds did not show significant discrimination of the contingent and noncontingent displays, they did show significantly more extreme looking proportions to the two displays than did the 5-month-olds. This may reflect the infant's progression from self to social orientation. Lewis and Brooks-Gunn (1979) found that by the end of the first year of life, infants are able to discriminate a "live" video image of the self from a recorded image of the self or a peer. The authors propose that this self-recognition is primarily based on the detection of contingent visual stimulation from the live video image. That is, movement of the infant's hand, for example, results in comparable movement of the hand in the video image. Furthermore, they propose that the earliest stages of self-perception are probably based on the infant's detection of some form of response

370 citations


Journal ArticleDOI
TL;DR: Two patients with impaired visual perception and imagery caused by bilateral posterior cerebral lesions had prosopagnosia and achromatopsia, and the imagery disorder involved the description of objects from memory, especially faces and animals, and colors of objects.
Abstract: We studied two patients with impaired visual perception and imagery caused by bilateral posterior cerebral lesions. The first had prosopagnosia and achromatopsia, and the imagery disorder involved the description of objects from memory, especially faces and animals, and colors of objects. The second had visual disorientation; the imagery problem involved the description of spatial relations from memory. Impairments of visual imagery, like disorders of visual perception, can be dissociated. Object and color imagery may be dissociated from imagery for spatial relations. A given imagery deficit tends to be associated with the corresponding type of perceptual deficit.

367 citations


Journal ArticleDOI
TL;DR: Evidence is shown that children become sensitive to some distinctions in memories sooner than they do to others, and that information about origin is part of the memory for an event.
Abstract: Children are often assumed to be more confused than adults are about the origin of self-generated memories (e.g., what they did or thought). The present experiments showed evidence in support of this assumption but only under some circumstances. In Experiment 1, 6- and 9-year-olds were as good as adults in distinguishing what they did from what they saw someone else do. However, children had particular trouble distinguishing what they did from what they imagined doing. Confusion between performed and imagined actions was evident across a range of actions. Clustering data also showed that information about origin is part of the memory for an event; all subjects recalled actions according to who performed what action (Experiment 1). Further, the presence of person categories as a basis for organization reduced clustering based on action class more for children than for adults (Experiment 1 vs. 2). Collectively, these findings indicate that children become sensitive to some distinctions in memories sooner than they do to others.

238 citations


Journal ArticleDOI
TL;DR: The intuition that imagery is similar to perception has led many psychologists to assume that imaging an object consists of activating some of the same representational structures that are activated during the perception of that object; this assumption was tested by measuring the effects of visual imagery on concurrent visual perception.
Abstract: The intuition that imagery is similar to perception has led many psychologists to assume that imaging an object consists of activating some of the same representational structures that are activated during the perception of that object. This assumption was tested by measuring the effects of visual imagery on concurrent visual perception. The experimental task consisted of a two-interval forced-choice detection task (no stimulus identification required) during which the subject imaged a particular stimulus. In Experiment 1, a matching image led to better detection than a nonmatching image. Interactions between imagery and perception imply a common locus of activity, and the content-specific interactions obtained here imply that the common locus consists of representational structures. In Experiment 2, a matching image facilitated perception only when the image and the stimulus were in the same position. This was taken to imply that the shared representational structures occur at an analog level of perceptual representation.

226 citations


Journal ArticleDOI
TL;DR: This work examines a number of investigations of perceptual economy or, more specifically, of minimum tendencies and minimum principles in the visual perception of form, depth, and motion and concludes that those which favor massively parallel processing are the most likely.
Abstract: We examine a number of investigations of perceptual economy or, more specifically, of minimum tendencies and minimum principles in the visual perception of form, depth, and motion. A minimum tendency is a psychophysical finding that perception tends toward simplicity, as measured in accordance with a specified metric. A minimum principle is a theoretical construct imputed to the visual system to explain minimum tendencies. After examining a number of studies of perceptual economy, we embark on a systematic analysis of this notion. We examine the notion that simple perceptual representations must be defined within the "geometric constraints" provided by proximal stimulation. We then take up metrics of simplicity. Any study of perceptual economy must use a metric of simplicity; the choice of metric may be seen as a matter of convention, or it may have deep theoretical and empirical implications. We evaluate several answers to the question of why the visual system might favor economical representations. Finally, we examine several accounts of the process for achieving perceptual economy, concluding that those which favor massively parallel processing are the most likely.

Journal ArticleDOI
TL;DR: This paper found functional dissociations between the kinds of imagery tasks that could be performed in the left and right cerebral hemispheres of two patients who had previously undergone surgical transection of their corpus callosa.
Abstract: Recent efforts to build computer simulation models of mental imagery have suggested that imagery is not a unitary phenomenon. Rather, such efforts have led to a modular analysis of the image-generation process, with separate modules that can activate visual memories, inspect parts of imaged patterns, and arrange separate parts into a composite image. This idea was supported by the finding of functional dissociations between the kinds of imagery tasks that could be performed in the left and right cerebral hemispheres of two patients who had previously undergone surgical transection of their corpus callosa. The left hemisphere in both subjects could inspect imaged patterns and could generate single and multipart images. In contrast, although the right hemisphere could inspect imaged patterns and could generate images of overall shape, it had difficulty in generating multipart images. The results suggest a deficit in the module that arranges parts into a composite. The observed pattern of deficits and abilities implied that this module is not used in language, visual perception, or drawing. Furthermore, the results suggest that the basis for this deficit is not a difficulty in simply remembering visual details or engaging in sequential processing.

Journal ArticleDOI
TL;DR: Attempts were made to retrain twelve homonymous hemianopic or quadrantanopic patients with similar methods, but under conditions in which possible contaminating experimental variables were controlled, indicating that visual field increases are not trainable.
Abstract: Investigators have recently reported that specific practice facilitates the restitution of visual fields in partially blinded humans with lesions to the striate cortex. In order to further evaluate this work, attempts were made to retrain twelve homonymous hemianopic or quadrantanopic patients with similar methods, but under conditions in which possible contaminating experimental variables were controlled, including: reliance on gross subjective impressions, large visual stimuli, response variability, changes in detection strategies with practice, and compensatory eccentric fixation. The results indicate that visual field increases are not trainable. It is concluded that previous studies should be regarded with caution and the restitution of visual fields after damage to the striate cortex in humans is probably not possible with existing methods.

Journal ArticleDOI
TL;DR: In this article, the authors examined the respective role of the cerebral hemispheres in face perception and the nature of their contribution depending on task demands and on the spatial-frequency composition of the stimuli.
Abstract: Two experiments examined the respective role of the cerebral hemispheres in face perception and the nature of their contribution depending on task demands and on the spatial-frequency composition of the stimuli. Sixteen faces of members of the subjects' department were presented as stimuli, with men and women, and professors and nonprofessors being equally represented. In Experiment 1, high-resolution black-and-white photographs of faces were used in three reaction-time tasks: verbal identification, manual membership categorization, and manual male/female categorization, in a within-subject design. Identification and membership categorization were significantly better performed in right-visual-field presentations, whereas the male/female categorization yielded a nonsignificant left-visual-field superiority. In Experiment 2, two versions of the same faces were used: digitized low-pass (0 to 2 cycles/degree of visual angle) and digitized broad-pass (0 to 32 cycles/degree) faces. Broad-pass faces produced the same laterality pattern as in Experiment 1, while low-pass faces were better processed in left-visual-field presentations for all three tasks. The results suggest that the two hemispheres play a role in face perception, and their contribution may vary as a function of the task demands and of the spatial-frequency components of the incoming information.

Journal ArticleDOI
TL;DR: A theory of mobility using nonvisual stimuli and cognitive control processes is proposed to augment Gibson's (1958, 1979) explanations of visual guidance, to describe the overall processes of guidance by which both blind and sighted travelers move through space.
Abstract: A theory of mobility using nonvisual stimuli and cognitive control processes is proposed to augment Gibson's (1958, 1979) explanations of visual guidance. Nonvisual processes are clearly important to the totally blind, who often manage considerable independent mobility in the absence of vision, but are also important to the sighted. Mobility can be directed by visual control stimuli in the ambient optic array, by nonvisual control stimuli, as well as by processes of spatial learning, including stimulus-response (S-R) rote learning, motor plans, schemas, and cognitive maps. The selection of processes and strategies depends on the availability of particular information or on task demands. Attentional processes select stimuli for locomotor control within any particular modality and select between perceptual and cognitive processes. The skill of traveling through the spatial environment, avoiding obstacles, and traveling directly or indirectly toward goals, is a general characteristic of animal behavior and is described here by the term mobility. Although this term has a special connotation within blindness rehabilitation (Welsh & Blasch, 1980), it is used here to describe the overall processes of guidance by which both blind and sighted travelers move through space. The study of mobility encompasses several more traditional research concerns, such as space perception, motor control, and spatial cognition. Until recently there has been little research related to general psychological theory of mobility. A comparison may be made with the study of reading, where considerable research has taken place on component tasks such as letter extraction, word recognition, and eye movements, but where there has been comparatively little interest until recently in the general rules of the process (see Haber, 1978).


Journal ArticleDOI
TL;DR: The results demonstrate that the ability of human observers to perceive structure from motion is much more general than would be reasonable to expect on the basis of existing theory, and suggest that the modular analyses of visual information will have to be modified if they are to account for the high level of generality exhibited by human observers.
Abstract: A fundamental assumption of almost all existing computational analyses of the perception of structure from motion is that moving elements on the retina projectively correspond to identifiable moving points in three-dimensional space. The present investigation was designed to determine the psychological validity of this assumption in several different contexts. The results demonstrate that the ability of human observers to perceive structure from motion is much more general than would be reasonable to expect on the basis of existing theory. Observers can experience a compelling kinetic depth effect even when the pattern of optical motion is contaminated by large amounts of visual noise (e.g., where the signal to noise ratio is less than 0.15). Moreover, the optical deformations of shading, texture, or self-occluding contours, which would be treated as noise by existing computational models, are analyzed by human observers as perceptually salient sources of information about an object's three-dimensional form. These results suggest that the modular analyses of visual information that currently dominate the literature will have to be modified if they are to account for the high level of generality exhibited by human observers.

Journal ArticleDOI
TL;DR: The present results reveal that assimilation is about half as effective as physical contrast in determining the apparent brightness of objects, implying that previous theories of vision will have to be revised; the importance of physical contrast must be weighted more strongly.
Abstract: The rapid estimation of the brightness of objects is one of the nervous system's major visual tasks. Exactly how the eye and brain perform this basic task is still not understood. Two mechanisms that contribute to human perception of the brightness of objects have been identified previously: (i) the visual response to physical contrast and (ii) assimilation. Use of a unique visual display device allowed us to measure the relative importance of these two mechanisms. The present results reveal that assimilation is about half as effective as physical contrast in determining the apparent brightness of objects. These results imply that previous theories of vision--for instance, the retinex theory--will have to be revised; the importance of physical contrast must be weighted more strongly.
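The reported weighting, with assimilation about half as effective as physical contrast, can be caricatured as a linear combination. This is purely an illustrative reading; the functional form, parameter names, and values below are assumptions, not the paper's fitted model.

```python
def apparent_brightness(luminance, contrast_signal, assimilation_signal,
                        w_contrast=1.0):
    """Toy linear model of apparent brightness.

    Assimilation is weighted at roughly half the contrast weight, as an
    illustration of the reported relative effectiveness; the form and
    weights are assumptions, not the paper's measurements.
    """
    return (luminance
            + w_contrast * contrast_signal
            + 0.5 * w_contrast * assimilation_signal)

# With equal signals, assimilation contributes half as much as contrast.
print(apparent_brightness(10.0, contrast_signal=2.0, assimilation_signal=2.0))  # 13.0
```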

Journal ArticleDOI
TL;DR: It is suggested that the ability to contract and expand the size of the attentional "spotlight" improves with age in the school years, and younger children experienced more interference when the elements were closely spaced.

Journal ArticleDOI
TL;DR: In this paper, the authors found that complex visual stimuli were remembered better when presented at high luminance than when shown at low luminance. Two possibilities were considered: that lowering luminance reduces the amount of available information in the stimulus, and that it reduces the rate at which information is extracted from the stimulus.
Abstract: In each of four experiments, complex visual stimuli--pictures and digit arrays--were remembered better when shown at high luminance than when shown at low luminance. Why does this occur? Two possibilities were considered: first that lowering luminance reduces the amount of available information in the stimulus, and second that lowering luminance reduces the rate at which the information is extracted from the stimulus. Evidence was found for both possibilities. When stimuli were presented at durations short enough to permit only a single eye fixation, luminance affected only the rate at which information is extracted: decreasing luminance by a factor of 100 caused information to be extracted more slowly by a factor that ranged, over experiments, from 1.4 to 2.0. When pictures were presented at durations long enough to permit multiple fixations, however, luminance affected the total amount of extractable information. In a fifth experiment, converging evidence was sought for the proposition that within the first eye fixation on a picture, luminance affects the rate of information extraction. If this proposition is correct and, in addition, the first eye fixation lasts until some criterion amount of information is extracted, then fixation duration should increase with decreasing luminance. This prediction was confirmed.

Journal ArticleDOI
TL;DR: Although the shortest response latencies were found in occipital cortex, considerable temporal overlap of the sample-related activities in the two cortices was observed; the finding that most inferotemporal cells, unlike occipital cells, treated only the sample with an excitatory response indicates that the inferotemporal cortex is selectively attuned to visual detail.

Journal ArticleDOI
TL;DR: It was concluded that the neural processing of information along the visual pathways of the two species is generally similar and that the monkey is an excellent model of the human visual system.
Abstract: Data for three fundamental psychophysical functions (spatial modulation sensitivity, temporal modulation sensitivity, and increment-threshold spectral sensitivity) were compared for groups of 12 rhesus monkeys and 12 human subjects. It was found that there are important, nontrivial differences between the data for monkeys and humans, but that many of the differences could be accounted for by structural or passive differences in the visual systems. Therefore, it was concluded that the neural processing of information along the visual pathways of the two species is generally similar and that the monkey is an excellent model of the human visual system.

Journal ArticleDOI
TL;DR: Subjects appeared not to treat the two sources of information as independent; rather, the probability of a correct response in the combined vision-touch condition could be best described as the arithmetic mean of the vision and touch conditions.
Abstract: In two completely randomized experiments, subjects were required to judge either which was the rougher of two abrasive papers or whether two abrasive papers were the same or different. Judgments were made visually, tactually, or with both vision and touch available. The subjects used either the right hand or the left hand in the touch conditions. Differences between the hands in terms of either proportion correct or mean latency were negligible in both experiments. Accuracy was statistically equivalent across conditions, although the latency of visual judgments was shorter. In the same-different experiment, comparable accuracy for vision and touch appeared to result from different strategies. Subjects in the touch condition were much less likely to be correct without guessing on “different” trials. In a third, within-subject experiment, a comparison was made of four probability models of dual-mode efficiency. Subjects appeared not to treat the two sources of information as independent; rather, the probability of a correct response in the combined vision-touch condition could be best described as the arithmetic mean of the vision and touch conditions. Latencies for the combined condition also appeared to reflect a similar compromise. Implications for further research are discussed.

Book ChapterDOI
TL;DR: This chapter focuses on the reasons that determine infants' visual preferences at different ages, proposes a quantitative model of preferences based on linear systems techniques, and tests it against data from several well-known preference experiments, finding that the model's predictions agree quite well with observed preferences for a variety of stimuli.
Abstract: This chapter focuses on the reasons that determine infants' visual preferences at different ages and proposes a quantitative model of preferences based on linear systems techniques, testing it against data from several well-known preference experiments. The model's predictions agree quite well with observed preferences for a variety of stimuli. The success of this model implies that infants' visual preferences are governed simply by a tendency to look at highly visible patterns. This account of early preferential looking is thus consonant with the understanding of how the growth of basic sensory mechanisms affects visual perception during the first months of life. Linear systems analysis is based on Fourier's theorem. This powerful theorem implies that any two-dimensional, time-invariant visual stimulus can be exactly described by combining a set of more basic stimuli. These basic stimuli are sine wave gratings. A sine wave grating is a pattern of light and dark stripes whose intensity varies sinusoidally with position. Sine wave gratings are specified by four parameters: spatial frequency, orientation, phase, and contrast. Fourier's theorem implies, then, that even a complex, two-dimensional visual stimulus, such as the picture of a face, can be described exactly by the combination of a set of gratings of various frequencies, orientations, phases, and contrasts.
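The four grating parameters described in the abstract can be made concrete with a short sketch. The coordinate and unit conventions here (frequency in cycles per image width, orientation and phase in radians, contrast in [0, 1]) are assumptions chosen for illustration, not anything specified by the chapter.

```python
import numpy as np

def sine_grating(size, frequency, orientation, phase, contrast, mean=0.5):
    """A luminance pattern whose intensity varies sinusoidally with position.

    Parameters mirror the four in the text: spatial frequency (cycles per
    image width), orientation (radians), phase (radians), contrast (0-1).
    """
    # Normalized pixel coordinates in [0, 1).
    y, x = np.mgrid[0:size, 0:size] / size
    # Rotate the coordinate axis so the grating can take any orientation.
    u = x * np.cos(orientation) + y * np.sin(orientation)
    return mean * (1 + contrast * np.sin(2 * np.pi * frequency * u + phase))

# A vertical grating with 4 cycles across a 64x64 image at full contrast.
g = sine_grating(64, frequency=4, orientation=0.0, phase=0.0, contrast=1.0)
print(g.shape, float(g.min()), float(g.max()))
```

Per Fourier's theorem as stated above, summing many such gratings with suitable frequencies, orientations, phases, and contrasts reconstructs an arbitrary image.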

Journal ArticleDOI
TL;DR: The results are consistent with the hypothesis that perceptual malfunction may underlie other schizophrenic disorders such as delusions or catatonia.
Abstract: Questionnaires were completed by 73 volunteers with a history of probable schizophrenia, and by 67 patients admitted to the accident wards of a general hospital. A high incidence of sensory distortions was recorded in the schizophrenic group compared with the controls, distortions of brightness contrast being the most common. Many schizophrenic patients reported that such distortions were reduced or removed early in treatment. Patients who reported visual distortions also tended to report visual hallucinations. The results are discussed with reference to the hypothesis that perceptual malfunction may underlie some other schizophrenic disorders such as delusions or catatonia.

Journal ArticleDOI
TL;DR: The finding of significant impairments in visual perceptual abilities among fallers confirmed a trend previously established by one of the authors (Tobis); the authors suggest that this relatively greater dependence on visual sources may develop in response to impaired kinesthetic and vestibular feedback on posture and gait resulting from age and chronic health problems.
Abstract: The authors postulated that older adult fallers show a greater tendency than older adult nonfallers to rely more on visual information sources in maintaining upright posture than on kinesthetic and vestibular cues. This paper presents descriptive statistics on 199 older adults living independently in the community. Their visual perception of the vertical and horizontal was analyzed with respect to age, sex, health status, and severity of injury as a result of a fall. The finding of significant impairments for fallers in visual perceptual abilities confirmed a trend previously established by one of the authors (Tobis). When the visual field entailed only misleading or ambiguous cues in the form of a tilted frame, fallers again showed a larger error than nonfallers in establishing the vertical and horizontal. The authors feel that this relatively greater dependence on visual sources may develop in response to impairment of feedback on posture and gait from the kinesthetic and vestibular systems as a result of age and chronic health problems. Errors in visual perception of the vertical and horizontal intercorrelated with age, sex, and a large number of medical problems. However, visual variables were more important in predicting faller status than physical characteristics.

Journal ArticleDOI
TL;DR: The present paper reviews a series of studies regarding the effects of hemispheric asymmetry and reading and writing habits on directional preferences in reproducing horizontally-displayed visual stimuli, and finds that with the introduction of English as a foreign language in the fifth grade, children show an increase in left-right directionality.
Abstract: The present paper reviews a series of studies regarding the effects of hemispheric asymmetry and reading and writing habits on directional preferences in reproducing horizontally-displayed visual stimuli.

Journal ArticleDOI
TL;DR: 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement is examined, and infants demonstrate visual preferences for the sound-matched films, evidently detecting the relationship between auditory and visual information.
Abstract: 2 studies examined 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. In 1 experiment, infants were presented successively with 2 filmed events--1 of an automobile approaching, and the other of the same automobile driving away. A soundtrack that increased or decreased in amplitude was played along with each film, either in a match or mismatch condition. Infants did not show differential looking patterns related to the match or mismatch of auditory and visual information. In a second experiment, infants were tested using a paired preference technique. The films were shown side-by-side along with a single soundtrack appropriate to 1 of them. Looking time was monitored as before. These infants demonstrated visual preferences for the sound-matched films, evidently detecting the relationship between auditory and visual information when this procedure was used.