
Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17196 publications have been published within this topic receiving 358181 citations.


Papers
Journal ArticleDOI
TL;DR: A model is presented consisting of channels tuned for orientation and spatial frequency that compute local oriented energy, followed by (texture) edge detection and a cross-correlator that performs the shape discrimination; the model is in accord with the degradation in performance with increased Δχ and decreased Δθ.

317 citations
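The oriented-energy front end described in this model can be sketched with a quadrature pair of Gabor filters. This is a minimal illustration, not the paper's exact channel model; the filter size, frequency, and envelope width below are arbitrary choices:

```python
import numpy as np

def gabor_pair(theta, freq, sigma, size):
    """Even/odd (quadrature) Gabor filters tuned to orientation theta and frequency freq."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)       # coordinate across the grating
    env = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))  # Gaussian envelope
    return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

def local_energy(patch, theta, freq, sigma=2.0):
    """Oriented energy at the patch centre: squared quadrature responses, summed."""
    even, odd = gabor_pair(theta, freq, sigma, patch.shape[0])
    return float((patch * even).sum() ** 2 + (patch * odd).sum() ** 2)

# A vertical grating excites the channel tuned to its orientation far more
# strongly than the channel tuned to the orthogonal orientation.
ax = np.arange(15) - 7
patch = np.cos(2 * np.pi * 0.25 * ax)[np.newaxis, :] * np.ones((15, 1))
e_tuned = local_energy(patch, 0.0, 0.25)
e_ortho = local_energy(patch, np.pi / 2, 0.25)
```

Squaring and summing the even and odd responses makes the energy measure insensitive to the exact phase of the texture, which is the usual motivation for a quadrature pair.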

Journal ArticleDOI
TL;DR: To test the adequacy of the Marr-Poggio model of human stereo vision, an implementation was run on a wide range of stereograms from the human stereopsis literature, and its performance is illustrated and compared with human perception.
Abstract: Recently, Marr & Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented; it consists of five steps. (i) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by ∇²G, the Laplacian of a Gaussian function. (ii) Zero crossings in the filtered images are found along horizontal scan lines. (iii) For each mask size, matching takes place between zero crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, it can be shown that false targets pose only a simple problem. (iv) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence. In this way, the matching process gradually moves from dealing with large disparities at a low resolution to dealing with small disparities at a high resolution. (v) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2½-dimensional sketch. To support the adequacy of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. The statistical assumptions made by Marr & Poggio are also supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.

314 citations
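Steps (i) and (ii) of the implementation described above, ∇²G filtering followed by zero-crossing detection along horizontal scan lines, can be sketched as follows. This is a toy illustration: the single small mask, the noise threshold, and the naive convolution are my own simplifications, not the paper's multi-scale scheme:

```python
import numpy as np

def log_kernel(sigma):
    """Discrete Laplacian-of-Gaussian (del^2 G) mask, as in step (i)."""
    size = int(6 * sigma) | 1                 # odd width covering about +/-3 sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                       # zero-sum: flat regions respond with ~0

def convolve2d(img, k):
    """Naive same-size filtering with edge padding (the kernel is symmetric,
    so correlation and convolution coincide here)."""
    r = k.shape[0] // 2
    p = np.pad(img, r, mode="edge")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

def zero_crossings(resp, eps=1e-9):
    """Step (ii): sign changes along horizontal scan lines, ignoring numerical noise."""
    r = np.where(np.abs(resp) < eps, 0.0, resp)
    s = np.sign(r)
    return np.pad(s[:, :-1] * s[:, 1:] < 0, ((0, 0), (0, 1)))

# A vertical step edge yields exactly one zero crossing per scan line, at the edge.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
zc = zero_crossings(convolve2d(img, log_kernel(1.0)))
```

The ∇²G response to a step is antisymmetric about the edge, so the zero crossing localises the edge regardless of its contrast.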

Proceedings ArticleDOI
07 Dec 2015
TL;DR: A depth estimation algorithm is presented that treats occlusions explicitly and also enables identification of occlusion edges, which may be useful in other applications; it outperforms current state-of-the-art light-field depth estimation algorithms, especially near occlusion boundaries.
Abstract: Consumer-level and high-end light-field cameras are now widely available. Recent work has demonstrated practical methods for passive depth estimation from light-field images. However, most previous approaches do not explicitly model occlusions, and therefore cannot capture sharp transitions around object boundaries. A common assumption is that a pixel exhibits photo-consistency when focused to its correct depth, i.e., all viewpoints converge to a single (Lambertian) point in the scene. This assumption does not hold in the presence of occlusions, making most current approaches unreliable precisely where accurate depth information is most important - at depth discontinuities. In this paper, we develop a depth estimation algorithm that treats occlusion explicitly; the method also enables identification of occlusion edges, which may be useful in other applications. We show that, although pixels at occlusions do not preserve photo-consistency in general, they are still consistent in approximately half the viewpoints. Moreover, the line separating the two view regions (correct depth vs. occluder) has the same orientation as the occlusion edge has in the spatial domain. By treating these two regions separately, depth estimation can be improved. Occlusion predictions can also be computed and used for regularization. Experimental results show that our method outperforms current state-of-the-art light-field depth estimation algorithms, especially near occlusion boundaries.

313 citations
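The key observation above, that a pixel on an occlusion edge stays photo-consistent in roughly half the angular viewpoints, with the dividing line sharing the occlusion edge's spatial orientation, can be sketched as follows. This is a hypothetical illustration: the edge orientation is assumed known here, whereas the paper estimates it, and variance is used as a stand-in consistency measure:

```python
import numpy as np

def consistent_half(view_samples, view_coords, edge_theta):
    """Split angular samples by a line at the occlusion-edge orientation and keep
    the photo-consistent (lower-variance) half, which sees the correct depth."""
    normal = np.array([-np.sin(edge_theta), np.cos(edge_theta)])
    side = view_coords @ normal > 0
    half_a, half_b = view_samples[side], view_samples[~side]
    return half_a if half_a.var() <= half_b.var() else half_b

# Viewpoints on a 5x5 angular grid; a horizontal occlusion edge (theta = 0)
# splits them: the upper half sees the un-occluded scene point (constant 0.5),
# while the lower half sees varying occluder texture.
u, v = np.meshgrid(np.arange(-2, 3), np.arange(-2, 3))
coords = np.stack([u.ravel(), v.ravel()], axis=1)
samples = np.where(coords[:, 1] > 0, 0.5,
                   0.1 * coords[:, 0] + 0.2 * coords[:, 1] + 0.3)
kept = consistent_half(samples, coords, 0.0)
```

Averaging only over the kept half, instead of all viewpoints, is what lets the depth estimate stay sharp right up to the occlusion boundary.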

Patent
28 Nov 1997
TL;DR: In this article, the position and orientation of an ultrasound transducer are tracked in a frame of reference by a spatial determinator; this tracking information is used to generate processed images from the images acquired by the transducer.
Abstract: The present invention provides a system and method for visualizing internal images of an anatomical body. Internal images of the body are acquired by an ultrasound imaging transducer. The position and orientation of the ultrasound imaging transducer is tracked in a frame of reference by a spatial determinator. The position of the images in the frame of reference is determined by calibrating the ultrasound imaging transducer to produce a vector position of the images with respect to a fixed point on the transducer. This vector position can then be added to the position and orientation of the fixed point of the transducer in the frame of reference determined by the spatial determinator. The position and orientation of a medical instrument used on the patient are also tracked in the frame of reference by spatial determinators. The position and orientation of the instrument are mapped onto the position and orientation of the images. This information is used to generate processed images from the images acquired by the transducer. The processed images are generated from a view spatially related to the position of the instrument. The system is expandable so that more than one instrument and more than one transducer can be used.

313 citations
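The pose arithmetic described in the abstract, adding the calibrated image offset (expressed in the transducer's frame) to the tracked pose of the transducer's fixed point, reduces to a rigid-transform composition. A minimal 2-D sketch follows; the actual system works in 3-D with full orientations, and the function names here are my own:

```python
import numpy as np

def rot2d(theta):
    """2-D rotation matrix for a tracked orientation angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def image_origin_in_reference(fixed_point, theta, calib_offset):
    """Rotate the calibration vector (transducer frame) into the reference frame,
    then add it to the tracked position of the transducer's fixed point."""
    return fixed_point + rot2d(theta) @ calib_offset

# Transducer fixed point at (1, 2), rotated 90 degrees; an image origin one unit
# ahead of the fixed point in the transducer frame lands at (1, 3).
origin = image_origin_in_reference(np.array([1.0, 2.0]), np.pi / 2,
                                   np.array([1.0, 0.0]))
```

The same composition, applied to the tracked instrument pose, is what lets the system render the images from a view spatially related to the instrument.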

Journal ArticleDOI
TL;DR: These models indicate that the visual system contains networks that pool orientation information within regions 3.5-4.5 degrees in diameter in central vision, and are in good agreement with recent single unit physiology of primate area V4, an intermediate level of the form vision pathway.

312 citations
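Pooling orientation information over a region, as the networks above are proposed to do, cannot use a plain arithmetic mean, because orientation wraps around at 180 degrees. A standard fix (my illustration, not the paper's specific pooling rule) is a circular mean with angle doubling:

```python
import numpy as np

def pooled_orientation(thetas):
    """Circular mean of orientations (period pi): double the angles so that
    theta and theta + pi map to the same point on the unit circle, average
    there, then halve the resulting angle."""
    z = np.exp(2j * np.asarray(thetas)).mean()
    return (np.angle(z) / 2) % np.pi

# Samples straddling the wrap-around: as an orientation, pi - 0.02 is close
# to 0.05 and 0.1, so the pool is a small angle, not the arithmetic mean ~1.1.
pooled = pooled_orientation([0.05, 0.1, np.pi - 0.02])
```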


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations, 82% related
Pixel: 136.5K papers, 1.5M citations, 79% related
Image segmentation: 79.6K papers, 1.8M citations, 78% related
Image processing: 229.9K papers, 3.5M citations, 77% related
Feature (computer vision): 128.2K papers, 1.7M citations, 76% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2022      12
2021     535
2020     771
2019     830
2018     727
2017     691