ISSN: 1534-7362

Journal of Vision 

Association for Research in Vision and Ophthalmology
About: Journal of Vision is an academic journal published by the Association for Research in Vision and Ophthalmology. The journal publishes primarily in the areas of perception and visual search. Its ISSN is 1534-7362, and it is open access. Over its lifetime, the journal has published 13,917 papers, which have received 208,747 citations. The journal is also known as JOV and J Vis.


Papers
Journal Article
TL;DR: This display is used to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue, and shows that when focus cues are correct or nearly correct, the time required to identify a stereoscopic stimulus is reduced, stereoacuity in a time-limited task is increased, and distortions in perceived depth are reduced.
Abstract: Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues (accommodation and blur in the retinal image) specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one's ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays.

1,459 citations
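
The vergence-accommodation mismatch described in the abstract above can be made concrete with a little dioptric arithmetic. The sketch below is an illustration of the concept only, not the authors' apparatus; the distances and the helper function name are assumptions chosen for the example.

```python
# Minimal sketch: the vergence-accommodation mismatch described above,
# expressed in diopters (1 / distance in meters). Distances are illustrative.

def conflict_diopters(screen_distance_m: float, simulated_depth_m: float) -> float:
    """Difference between where the eyes must focus (the display surface) and
    where they must converge (the simulated depth of the stereoscopic stimulus)."""
    return abs(1.0 / screen_distance_m - 1.0 / simulated_depth_m)

# A conventional stereoscopic display at 0.5 m portraying an object at 2.0 m:
# the eyes converge at 2.0 m but must accommodate at 0.5 m, a 1.5 D conflict.
print(conflict_diopters(0.5, 2.0))   # 1.5

# A display presenting (nearly) correct focus cues keeps this difference near
# zero, which is the condition the study above compares against.
print(conflict_diopters(2.0, 2.0))   # 0.0
```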

Journal Article
TL;DR: In the model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters; because saliency is derived from natural image statistics rather than from the test image itself, the model also provides a straightforward explanation for many search asymmetries observed in humans.
Abstract: We propose a definition of saliency by considering what the visual system is trying to optimize when directing attention. The resulting model is a Bayesian framework from which bottom-up saliency emerges naturally as the self-information of visual features, and overall saliency (incorporating top-down information with bottom-up saliency) emerges as the pointwise mutual information between the features and the target when searching for a target. An implementation of our framework demonstrates that our model’s bottom-up saliency maps perform as well as or better than existing algorithms in predicting people’s fixations in free viewing. Unlike existing saliency measures, which depend on the statistics of the particular image being viewed, our measure of saliency is derived from natural image statistics, obtained in advance from a collection of natural images. For this reason, we call our model SUN (Saliency Using Natural statistics). A measure of saliency based on natural image statistics, rather than based on a single test image, provides a straightforward explanation for many search asymmetries observed in humans; the statistics of a single test image lead to predictions that are not consistent with these asymmetries. In our model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters.

1,269 citations
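
The two quantities that define the SUN model in the abstract above, bottom-up saliency as the self-information of visual features and overall saliency as the pointwise mutual information between features and the target, can be written down directly. The sketch below assumes feature responses have already been discretized and that the probability tables were estimated in advance from natural images; the array names, binning, and random placeholder statistics are assumptions, not the authors' implementation.

```python
import numpy as np

def bottom_up_saliency(feature_bins: np.ndarray, p_feature: np.ndarray) -> np.ndarray:
    """Self-information -log p(F): features that are rare under natural image
    statistics are more salient."""
    return -np.log(p_feature[feature_bins])

def overall_saliency(feature_bins: np.ndarray,
                     p_feature: np.ndarray,
                     p_feature_given_target: np.ndarray) -> np.ndarray:
    """Pointwise mutual information log[p(F|target) / p(F)]: features that are
    rare in general but likely for the target score highest during search."""
    return np.log(p_feature_given_target[feature_bins]) - np.log(p_feature[feature_bins])

# Example: 16 feature bins; probabilities stand in for statistics learned offline.
rng = np.random.default_rng(0)
p_f = rng.dirichlet(np.ones(16))           # placeholder natural image statistics
p_f_t = rng.dirichlet(np.ones(16))         # placeholder target appearance statistics
bins = rng.integers(0, 16, size=(8, 8))    # per-pixel feature bin indices
free_viewing_map = bottom_up_saliency(bins, p_f)
search_map = overall_saliency(bins, p_f, p_f_t)
```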

Journal Article
TL;DR: The endurance of the central fixation bias irrespective of the distribution of image features, or the observer's task, implies one of three possible explanations: the center of the screen may be an optimal location for early information processing of the scene, it may simply be a convenient location from which to start oculomotor exploration, or the central bias may reflect a tendency to re-center the eye in its orbit.
Abstract: Observers show a marked tendency to fixate the center of the screen when viewing scenes on computer monitors. This is often assumed to arise because image features tend to be biased toward the center of natural images and fixations are correlated with image features. A common alternative explanation is that experiments typically use a central pre-trial fixation marker, and observers tend to make small amplitude saccades. In the present study, the central bias was explored by dividing images post hoc according to biases in their image feature distributions. Central biases could not be explained by motor biases for making small saccades and were found irrespective of the distribution of image features. When the scene appeared, the initial response was to orient to the center of the screen. Following this, fixation distributions did not vary with image feature distributions when freely viewing scenes. When searching the scenes, fixation distributions shifted slightly toward the distribution of features in the image, primarily during the first few fixations after the initial orienting response. The endurance of the central fixation bias irrespective of the distribution of image features, or the observer's task, implies one of three possible explanations: First, the center of the screen may be an optimal location for early information processing of the scene. Second, it may simply be that the center of the screen is a convenient location from which to start oculomotor exploration of the scene. Third, it may be that the central bias reflects a tendency to re-center the eye in its orbit.

934 citations
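
One way to make the comparison in the abstract above concrete is to score each trial by how far fixations fall from the screen center and each image by where its features are concentrated. The sketch below is a hypothetical analysis of that kind: the measures, function names, and normalization are illustrative and are not taken from the paper.

```python
import numpy as np

def central_bias(fixations_xy: np.ndarray, screen_wh: tuple[float, float]) -> float:
    """Mean distance of fixations from the screen center, normalized so that
    0 = all fixations at the center and 1 = fixations at the screen corners."""
    center = np.array(screen_wh) / 2.0
    half_diag = np.linalg.norm(center)
    return float(np.mean(np.linalg.norm(fixations_xy - center, axis=1)) / half_diag)

def feature_centroid(feature_map: np.ndarray) -> np.ndarray:
    """Centroid (x, y) of an image feature map, usable for splitting images
    post hoc by where their features are biased."""
    ys, xs = np.indices(feature_map.shape)
    w = feature_map / feature_map.sum()
    return np.array([np.sum(xs * w), np.sum(ys * w)])

# Example: three fixations on a 1280 x 1024 screen.
fix = np.array([[640.0, 512.0], [700.0, 480.0], [600.0, 530.0]])
print(central_bias(fix, (1280.0, 1024.0)))   # close to 0 -> strong central bias
```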

Journal Article
TL;DR: The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.
Abstract: Periodic visual stimulation and analysis of the resulting steady-state visual evoked potentials were first introduced over 80 years ago as a means to study visual sensation and perception. From the first single-channel recording of responses to modulated light to the present use of sophisticated digital displays composed of complex visual stimuli and high-density recording arrays, steady-state methods have been applied in a broad range of scientific and applied settings. The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.

875 citations
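
The core paradigm reviewed above, periodic stimulation at a fixed rate with the response read out at that rate and its harmonics, can be illustrated with a simulated recording. The sketch below assumes a simple sinusoidal response buried in noise and uses illustrative parameter values (500 Hz sampling, 12 Hz modulation); it is not the recording or analysis pipeline of any particular study.

```python
import numpy as np

fs = 500.0                      # sampling rate (Hz)
f_stim = 12.0                   # flicker / contrast-modulation frequency (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 s of recording

# Simulated response: components at the stimulation frequency and its second
# harmonic buried in noise, standing in for a real EEG recording.
rng = np.random.default_rng(1)
eeg = (1.0 * np.sin(2 * np.pi * f_stim * t)
       + 0.4 * np.sin(2 * np.pi * 2 * f_stim * t)
       + rng.normal(0, 2.0, t.size))

# Amplitude spectrum; the steady-state response appears as narrow peaks at
# f_stim and 2 * f_stim.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(spectrum[np.argmin(np.abs(freqs - f_stim))],        # ~1.0
      spectrum[np.argmin(np.abs(freqs - 2 * f_stim))])    # ~0.4
```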

Journal Article
TL;DR: A framework is developed that transforms biological motion into a representation allowing for analysis using linear methods from statistics and pattern recognition, and reveals that the dynamic part of the motion contains more information about gender than motion-mediated structural cues.
Abstract: Biological motion contains information about the identity of an agent as well as about his or her actions, intentions, and emotions. The human visual system is highly sensitive to biological motion and capable of extracting socially relevant information from it. Here we investigate the question of how such information is encoded in biological motion patterns and how such information can be retrieved. A framework is developed that transforms biological motion into a representation allowing for analysis using linear methods from statistics and pattern recognition. Using gender classification as an example, simple classifiers are constructed and compared to psychophysical data from human observers. The analysis reveals that the dynamic part of the motion contains more information about gender than motion-mediated structural cues. The proposed framework can be used not only for analysis of biological motion but also to synthesize new motion patterns. A simple motion modeler is presented that can be used to visualize and exaggerate the differences in male and female walking patterns.

866 citations
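
The abstract above describes reducing each walker to a representation that linear methods can handle and then building simple classifiers for gender. The sketch below illustrates that general idea with Fisher's linear discriminant on placeholder feature vectors; the feature construction, dimensionality, and data are assumptions, not the authors' motion representation.

```python
import numpy as np

def fisher_discriminant(X_a: np.ndarray, X_b: np.ndarray) -> np.ndarray:
    """Fisher's linear discriminant: weight vector maximizing between-class
    separation relative to within-class scatter (with a small ridge term)."""
    mean_a, mean_b = X_a.mean(axis=0), X_b.mean(axis=0)
    scatter = np.cov(X_a, rowvar=False) + np.cov(X_b, rowvar=False)
    return np.linalg.solve(scatter + 1e-6 * np.eye(scatter.shape[0]),
                           mean_a - mean_b)

# Placeholder data: 20 ten-dimensional feature vectors per class
# (e.g. posture and gait parameters for female / male walkers).
rng = np.random.default_rng(2)
female = rng.normal(0.0, 1.0, (20, 10))
male = rng.normal(0.5, 1.0, (20, 10))

w = fisher_discriminant(female, male)
threshold = w @ (female.mean(axis=0) + male.mean(axis=0)) / 2
labels = (np.vstack([female, male]) @ w < threshold)   # True -> classified "male"
```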

Performance Metrics

Number of papers from the journal in previous years:

Year    Papers
2023    97
2022    1,496
2021    218
2020    318
2019    433
2018    438