Author
J. Farley Norman
Other affiliations: Ohio State University, DePauw University, Brandeis University
Bio: J. Farley Norman is an academic researcher from Western Kentucky University. The author has contributed to research in topics: Binocular disparity & Visual perception. The author has an h-index of 33 and has co-authored 106 publications receiving 3,562 citations. Previous affiliations of J. Farley Norman include Ohio State University & DePauw University.
Papers
TL;DR: A set of 4 experiments evaluated observers' sensitivity to three-dimensional (3-D) length, using both discrimination and adjustment paradigms with computer-generated optical patterns and real objects viewed directly in a natural environment.
Abstract: A set of 4 experiments evaluated observers' sensitivity to three-dimensional (3-D) length, using both discrimination and adjustment paradigms with computer-generated optical patterns and real objects viewed directly in a natural environment. Although observers were highly sensitive to small differences in two-dimensional length for line segments presented in the frontoparallel plane, their discrimination thresholds increased by an order of magnitude when the line segments were presented at random orientations in 3-D space. There were also large failures of constancy, such that the perception of 3-D length varied systematically with viewing distance, even under full-cue conditions.
214 citations
TL;DR: The results indicate that extracting 3D spatial information from stereo involves several intraparietal areas, among which AIP and anterior LIP are more specifically engaged in extracting the 3D shape of objects.
180 citations
TL;DR: Findings provide strong evidence that human observers do not have accurate perceptions of 3-D metric structure.
Abstract: Three experiments are reported in which observers judged the three-dimensional (3-D) structures of virtual or real objects defined by various combinations of texture, motion, and binocular disparity under a wide variety of conditions. The tasks employed in these studies involved adjusting the depth of an object to match its width, adjusting the planes of a dihedral angle so that they appeared orthogonal, and adjusting the shape of an object so that it appeared to match another at a different viewing distance. The results obtained on all of these tasks revealed large constant errors and large individual differences among observers. There were also systematic failures of constancy over changes in viewing distance, orientation, or response task. When considered in conjunction with other, similar reports in the literature, these findings provide strong evidence that human observers do not have accurate perceptions of 3-D metric structure.
170 citations
TL;DR: A psychophysical experiment is described that measured the sensitivity of human observers to small differences of 3D shape over a wide variety of conditions and provides clear evidence that the presence of specular highlights or the motions of a surface relative to its light source do not pose an impediment to perception, but rather, provide powerful sources of information for the perceptual analysis of 3D shape.
Abstract: There have been numerous computational models developed in an effort to explain how the human visual system analyzes three-dimensional (3D) surface shape from patterns of image shading, but they all share some important limitations. Models that are applicable to individual static images cannot correctly interpret regions that contain specular highlights, and those that are applicable to moving images have difficulties when a surface moves relative to its sources of illumination. Here we describe a psychophysical experiment that measured the sensitivity of human observers to small differences of 3D shape over a wide variety of conditions. The results provide clear evidence that the presence of specular highlights or the motions of a surface relative to its light source do not pose an impediment to perception, but rather, provide powerful sources of information for the perceptual analysis of 3D shape.
135 citations
TL;DR: The results suggest that vision and touch have functionally overlapping, but not necessarily equivalent, representations of 3-D shape.
Abstract: In this study, we evaluated observers’ ability to compare naturally shaped three-dimensional (3-D) objects, using their senses of vision and touch. In one experiment, the observers haptically manipulated 1 object and then indicated which of 12 visible objects possessed the same shape. In the second experiment, pairs of objects were presented, and the observers indicated whether their 3-D shape was the same or different. The 2 objects were presented either unimodally (vision-vision or haptic-haptic) or cross-modally (vision-haptic or haptic-vision). In both experiments, the observers were able to compare 3-D shape across modalities with reasonably high levels of accuracy. In Experiment 1, for example, the observers’ matching performance rose to 72% correct (chance performance was 8.3%) after five experimental sessions. In Experiment 2, small (but significant) differences in performance were obtained between the unimodal vision-vision condition and the two cross-modal conditions. Taken together, the results suggest that vision and touch have functionally overlapping, but not necessarily equivalent, representations of 3-D shape.
131 citations
Cited by
The Perception of the Visual World (no abstract available)
2,250 citations
TL;DR: Details of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework are presented, along with a comparison with the classical approach of "stereoscopic" video.
Abstract: This paper presents details of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework. The work is part of the European Information Society Technologies (IST) project “Advanced Three-Dimensional Television System Technologies” (ATTEST), an activity where industries, research centers and universities have joined forces to design a backwards-compatible, flexible and modular broadcast 3D-TV system. At the very heart of the described new concept is the generation and distribution of a novel data representation format, which consists of monoscopic color video and associated per-pixel depth information. From these data, one or more “virtual” views of a real-world scene can be synthesized in real-time at the receiver side (i.e., a 3D-TV set-top box) by means of so-called depth-image-based rendering (DIBR) techniques. This publication will provide: (1) a detailed description of the fundamentals of this new approach to 3D-TV; (2) a comparison with the classical approach of “stereoscopic” video; (3) a short introduction to DIBR techniques in general; (4) the development of a specific DIBR algorithm that can be used for the efficient generation of high-quality “virtual” stereoscopic views; (5) a number of implementation details that are specific to the current state of the development; (6) research on the backwards-compatible compression and transmission of 3D imagery using state-of-the-art MPEG (Moving Pictures Expert Group) tools.
1,560 citations
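The core idea of depth-image-based rendering described above can be illustrated with a minimal sketch: each pixel of the monoscopic color image is shifted horizontally by a disparity proportional to the inverse of its depth, with nearer pixels winning where shifts collide. This is an illustrative simplification (the function name, and the focal-length and baseline values, are hypothetical; real DIBR systems also handle hole-filling and sub-pixel warping, which are omitted here).

```python
import numpy as np

def render_virtual_view(color, depth, baseline=0.05, focal=700.0):
    """Forward-warp a color image into a virtual view using per-pixel depth.

    color: (H, W, 3) array; depth: (H, W) array of distances.
    Pixels shift horizontally by disparity = focal * baseline / depth;
    nearer pixels overwrite farther ones (a simple z-buffer). Positions
    where no source pixel lands remain zero (disocclusion holes).
    """
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    disparity = np.round(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                zbuf[y, nx] = depth[y, x]
                out[y, nx] = color[y, x]
    return out
```

With a constant depth map the whole image simply shifts sideways; depth variation produces the parallax that a second eye's viewpoint would see, which is what the set-top box exploits to synthesize stereoscopic views from one color stream plus depth.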
TL;DR: Three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices are identified.
Abstract: The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a 'What' pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception ('Where'), more recent accounts suggest it primarily serves non-conscious visually guided action ('How'). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively.
1,072 citations
01 Jan 1998
TL;DR: Neurons in the lateral intraparietal area (LIP) have visual responses to stimuli appearing abruptly at particular retinal locations (their receptive fields), yet the visual representation in LIP is sparse, with only the most salient or behaviourally relevant objects being strongly represented.
Abstract: When natural scenes are viewed, a multitude of objects that are stable in their environments are brought in and out of view by eye movements. The posterior parietal cortex is crucial for the analysis of space, visual attention and movement [1]. Neurons in one of its subdivisions, the lateral intraparietal area (LIP), have visual responses to stimuli appearing abruptly at particular retinal locations (their receptive fields) [2]. We have tested the responses of LIP neurons to stimuli that entered their receptive field by saccades. Neurons had little or no response to stimuli brought into their receptive field by saccades, unless the stimuli were behaviourally significant. We established behavioural significance in two ways: either by making a stable stimulus task-relevant, or by taking advantage of the attentional attraction of an abruptly appearing stimulus. Our results show that under ordinary circumstances the entire visual world is only weakly represented in LIP. The visual representation in LIP is sparse, with only the most salient or behaviourally relevant objects being strongly represented.
1,007 citations
TL;DR: This paper argues that the modified weak fusion (MWF) model of depth cue combination is consistent with previous experimental results and is a parsimonious summary of these results, and describes experimental methods, analogous to perturbation analysis, that permit us to analyze depth cue combination in novel ways.
975 citations