About: Monocular vision is a research topic. Over its lifetime, 2667 publications have been published within this topic, receiving 48827 citations.
TL;DR: A simple algorithm for computing the three-dimensional structure of a scene from a correlated pair of perspective projections is described here, when the spatial relationship between the two projections is unknown.
Abstract: A simple algorithm for computing the three-dimensional structure of a scene from a correlated pair of perspective projections is described here, when the spatial relationship between the two projections is unknown. This problem is relevant not only to photographic surveying [1] but also to binocular vision [2], where the non-visual information available to the observer about the orientation and focal length of each eye is much less accurate than the optical information supplied by the retinal images themselves. The problem also arises in monocular perception of motion [3], where the two projections represent views which are separated in time as well as space. As Marr and Poggio [4] have noted, the fusing of two images to produce a three-dimensional percept involves two distinct processes: the establishment of a 1:1 correspondence between image points in the two views—the ‘correspondence problem’—and the use of the associated disparities for determining the distances of visible elements in the scene. I shall assume that the correspondence problem has been solved; the problem of reconstructing the scene then reduces to that of finding the relative orientation of the two viewpoints.
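The core computation the abstract describes—recovering the relative orientation of two views from point correspondences alone—can be sketched with the linear eight-point estimate of the essential matrix. This is a minimal numpy illustration, not a transcription of the paper's algorithm; it assumes normalized (calibrated) image coordinates, and the function name is illustrative.

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Estimate the essential matrix E from >= 8 normalized image
    correspondences using the linear (eight-point) method.
    x1, x2: (N, 2) arrays of matching points in the two views."""
    n = x1.shape[0]
    # Each correspondence contributes one row of the homogeneous
    # constraint x2^T E x1 = 0, linear in the entries of E.
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        A[i] = [u2 * u1, u2 * v1, u2,
                v2 * u1, v2 * v1, v2,
                u1,      v1,      1.0]
    # E (up to scale) is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: two equal singular
    # values and one zero.
    U, S, Vt2 = np.linalg.svd(E)
    S = np.array([1.0, 1.0, 0.0]) * (S[0] + S[1]) / 2
    return U @ np.diag(S) @ Vt2
```

Given `E`, the relative rotation and translation (up to scale) follow from its singular-value decomposition, after which the scene points can be triangulated—the reconstruction step the abstract refers to.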
TL;DR: The problem of how three-dimensional form is perceived in spite of the fact that pertinent stimulation consists only in two-dimensional retinal images has been only partly solved.
Abstract: The problem of how three-dimensional form is perceived in spite of the fact that pertinent stimulation consists only in two-dimensional retinal images has been only partly solved. Much is known about the impressive effectiveness of binocular disparity. However, the excellent perception of three-dimensional form in monocular vision has remained essentially unexplained. It has been proposed that some patterns of stimulation on the retina give rise to three-dimensional experiences, because visual processes differ in the spontaneous organization that results from certain properties of the retinal pattern. Rules of organization are supposed to exist according to which most retinal projections of three-dimensional forms happen to produce three-dimensional percepts and most retinal images of flat forms lead to flat forms in experience also. This view has been held mainly by gestalt psychologists. Another approach to this problem maintains that the projected stimulus patterns are interpreted on the basis of previous experience, either visual
Published: 30 Nov 1995
TL;DR: Binocular and stereoscopic vision in animals, the physiology of binocular vision, and the limits of stereoscopic vision are surveyed.
Abstract: Contents: Introduction. 1. Binocular correspondence and the horopter. 2. Sensory coding. 3. The physiology of binocular vision. 4. The limits of stereoscopic vision. 5. Matching corresponding images. 6. Types of disparity. 7. Binocular fusion and rivalry. 8. Binocular masking and transfer. 9. Vergence eye movements. 10. Stereo constancy and depth cue interactions. 11. Depth contrast and cooperative processes. 12. Spatiotemporal aspects of stereopsis. 13. Vision in the cyclopean domain. 14. Development and pathology of binocular vision. 15. Binocular and stereoscopic vision in animals. References. Subject index.
TL;DR: In this paper, the problem of finding binocular parallax matching patterns of the left and right visual fields was investigated using stereo image pairs generated on a digital computer, and it was shown that pattern-matching can be achieved by first combining the two fields and then searching for patterns in the fused field.
Abstract: The perception of depth involves monocular and binocular depth cues. The latter seem simpler and more suitable for investigation. Particularly important is the problem of finding binocular parallax, which involves matching patterns of the left and right visual fields. Stereo pictures of familiar objects or line drawings preclude the separation of interacting cues, and thus this pattern-matching process is difficult to investigate. More insight into the process can be gained by using unfamiliar picture material devoid of all cues except binocular parallax. To this end, artificial stereo picture pairs were generated on a digital computer. When viewed monocularly, they appear completely random, but if viewed binocularly, certain correlated point domains are seen in depth. By introducing distortions in this material and testing for perception of depth, it is possible to show that pattern-matching of corresponding points of the left and right visual fields can be achieved by first combining the two fields and then searching for patterns in the fused field. By this technique, some interesting properties of this fused binocular field are revealed, and a simple analog model is derived. The interaction between the monocular and binocular fields is also described. A number of stereo images that demonstrate these and other findings are presented.
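The computer-generated stereo pairs described above—random-dot stereograms—are easy to reproduce. The sketch below is a minimal construction in the spirit of the abstract, not the paper's original program: two identical random-dot fields, except that a central square is shifted horizontally in one eye's image, so the square is invisible monocularly but appears in depth when the pair is fused. Parameter names are illustrative.

```python
import numpy as np

def random_dot_stereogram(size=100, square=40, disparity=4, seed=0):
    """Build a Julesz-style random-dot stereo pair: identical random
    dot fields except for a central square shifted horizontally in
    the right-eye image. Returns (left, right) binary arrays."""
    rng = np.random.default_rng(seed)
    left = rng.integers(0, 2, (size, size))
    right = left.copy()
    r0 = (size - square) // 2
    c0 = (size - square) // 2
    # Shift the central square left by `disparity` pixels in the
    # right image; the shared dots carry the binocular parallax cue.
    right[r0:r0 + square, c0 - disparity:c0 - disparity + square] = \
        left[r0:r0 + square, c0:c0 + square]
    # Fill the strip uncovered by the shift with fresh random dots,
    # so neither image alone reveals the square's outline.
    right[r0:r0 + square, c0 + square - disparity:c0 + square] = \
        rng.integers(0, 2, (square, disparity))
    return left, right
```

Viewed separately, each image is statistically uniform noise; only the correlation between the two fields encodes the square, which is exactly the cue isolation the abstract argues for.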
TL;DR: This work proposes a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.
Abstract: We consider the task of 3-d depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the value of the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multiscale Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models the depths and the relation between depths at different points in the image. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.
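The paper's joint model is a multiscale MRF over monocular features and stereo disparities. As a much simpler illustration of why combining the two cue types can beat either alone, the sketch below fuses a monocular and a stereo depth estimate per point by inverse-variance weighting; the function name and the weighting scheme are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_depths(mono, stereo, var_mono, var_stereo):
    """Fuse two (per-point) depth estimates by inverse-variance
    weighting: each cue contributes in proportion to its reliability.
    Illustrative stand-in for a joint probabilistic model."""
    w_mono = 1.0 / var_mono
    w_stereo = 1.0 / var_stereo
    return (w_mono * mono + w_stereo * stereo) / (w_mono + w_stereo)
```

With equal variances the fused estimate is the mean; when one cue is far more reliable (e.g. stereo at close range, monocular cues at long range, where triangulation degrades), the fused estimate tracks that cue—the qualitative behavior the abstract's combined model exploits.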