Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17,196 publications have been published within this topic, receiving 358,181 citations.


Papers
Book Chapter (DOI)
20 Oct 2008
TL;DR: A method for extracting image features that uses 2nd-order statistics, i.e., spatial and orientational auto-correlations of local gradients, which extracts richer information from images and obtains more discriminative power than standard histogram-based methods.
Abstract: In this paper, we propose a method for extracting image features which utilizes 2nd order statistics, i.e., spatial and orientational auto-correlations of local gradients. It enables us to extract richer information from images and to obtain more discriminative power than standard histogram based methods. The image gradients are sparsely described in terms of magnitude and orientation. In addition, normal vectors on the image surface are derived from the gradients and these could also be utilized instead of the gradients. From a geometrical viewpoint, the method extracts information about not only the gradients but also the curvatures of the image surface. Experimental results for pedestrian detection and image patch matching demonstrate the effectiveness of the proposed method compared with other methods, such as HOG and SIFT.

123 citations
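The 2nd-order statistics described in the abstract above can be prototyped in a few lines. Below is a minimal NumPy sketch, not the authors' implementation: it quantizes gradient orientations, builds a magnitude-weighted orientation histogram (0th order), and accumulates joint orientation statistics over a few spatial offsets (the spatial/orientational auto-correlation). The bin count, offsets, and normalization are illustrative assumptions.

```python
import numpy as np

def orientation_autocorrelation(img, n_bins=8, offsets=((0, 1), (1, 0), (1, 1))):
    """0th- and 1st-order orientation statistics of a grayscale image (sketch)."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)                  # image gradients
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ori = np.arctan2(gy, gx)                   # gradient orientation in [-pi, pi]

    # Quantize orientation into n_bins; each pixel votes with its magnitude.
    bins = np.floor((ori + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

    # 0th order: magnitude-weighted orientation histogram (HOG-like).
    hist0 = np.zeros(n_bins)
    np.add.at(hist0, bins, mag)

    # 1st order: joint orientation statistics at a few spatial offsets,
    # i.e. a spatial/orientational auto-correlation of local gradients.
    feats = [hist0]
    H, W = img.shape
    for dy, dx in offsets:
        joint = np.zeros((n_bins, n_bins))
        b0, b1 = bins[:H - dy, :W - dx], bins[dy:, dx:]
        w = mag[:H - dy, :W - dx] * mag[dy:, dx:]
        np.add.at(joint, (b0, b1), w)
        feats.append(joint.ravel())

    f = np.concatenate(feats)
    return f / (np.linalg.norm(f) + 1e-12)     # L2-normalized feature vector
```

In a pedestrian-detection setting, such a vector would typically be computed per detection window and fed to a linear classifier, analogous to the HOG pipeline.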

Journal Article (DOI)
TL;DR: The influence of the tilt correction, and also the trapezium distortion that appears at low magnifications, will be discussed, and an additional rhomboidal distortion will be introduced that is independent of the magnification used.

123 citations

Journal Article (DOI)
TL;DR: Considering both pattern complexity and luminance contrast, a novel spatial masking estimation function is deduced and an improved JND estimation model is built, which is highly consistent with human perception.
Abstract: The just noticeable difference (JND) in an image, which reveals the visibility limitation of the human visual system (HVS), is widely used for visual redundancy estimation in signal processing. To determine the JND threshold with the current schemes, the spatial masking effect is estimated as the contrast masking, and this cannot accurately account for the complicated interaction among visual contents. Research in cognitive science indicates that the HVS is highly adapted to extract the repeated patterns for visual content representation. Inspired by this, we formulate the pattern complexity as another factor to determine the total masking effect: the interaction is relatively straightforward with a limited masking effect in a regular pattern, and is complicated with a strong masking effect in an irregular pattern. From the orientation selectivity mechanism in the primary visual cortex, the response of each local receptive field can be considered as a pattern; therefore, in this paper, the orientation that each pixel presents is regarded as the fundamental element of a pattern, and the pattern complexity is calculated as the diversity of the orientation in a local region. Finally, considering both pattern complexity and luminance contrast, a novel spatial masking estimation function is deduced, and an improved JND estimation model is built. Experimental comparisons with the latest JND models demonstrate the effectiveness of the proposed model, which is highly consistent with human perception. The source code of the proposed model is publicly available at http://web.xidian.edu.cn/wjj/en/index.html.

123 citations
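A toy version of the pattern-complexity idea above can be sketched as follows, assuming 8 orientation bins, a square (2*radius+1) neighborhood, and a simple product-style fusion of complexity and contrast; the paper's actual masking and JND functions differ and are given in the article and its released source code.

```python
import numpy as np

def spatial_masking_sketch(img, n_bins=8, radius=2, alpha=0.8):
    """Toy spatial-masking map: large where the local orientation pattern is
    irregular (high complexity) AND the luminance contrast is strong."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    ori = np.arctan2(gy, gx)
    bins = np.floor((ori + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

    H, W = img.shape
    complexity = np.zeros((H, W))   # diversity of orientations in a local region
    contrast = np.zeros((H, W))     # local luminance contrast (std. deviation)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            complexity[y, x] = len(np.unique(bins[y0:y1, x0:x1])) / n_bins
            contrast[y, x] = img[y0:y1, x0:x1].std()

    # Fusion: masking grows with contrast, modulated by pattern complexity.
    return alpha * contrast * complexity
```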

Journal Article (DOI)
TL;DR: A rapid algorithm for robust, accurate, and automatic extraction of the midsagittal plane (MSP) of the human cerebrum from normal and pathological neuroimages is proposed; it is fully automatic and thoroughly validated, which makes it suitable for clinical applications.

123 citations

Journal Article (DOI)
TL;DR: This contribution addresses the problem of pose estimation and tracking of vehicles in image sequences from traffic scenes recorded by a stationary camera; the vehicle pose is estimated by directly matching polyhedral vehicle models to image gradients, without an edge segment extraction process.
Abstract: This contribution addresses the problem of pose estimation and tracking of vehicles in image sequences from traffic scenes recorded by a stationary camera. In a new algorithm, the vehicle pose is estimated by directly matching polyhedral vehicle models to image gradients without an edge segment extraction process. The new approach is significantly more robust than approaches that rely on feature extraction since the new approach exploits more information from the image data. We successfully tracked vehicles that were partially occluded by textured objects, e.g., foliage, where a previous approach based on edge segment extraction failed. Moreover, the new pose estimation approach is also used to determine the orientation and position of the road relative to the camera by matching an intersection model directly to image gradients. Results from various experiments with real world traffic scenes are presented.

123 citations
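The direct-matching idea, scoring a projected model against image gradients without extracting edge segments, can be illustrated with a simplified 2-D sketch. The flat wireframe model, the sampling of points along edges, and the exhaustive search over (tx, ty, theta) hypotheses below are assumptions for illustration, not the paper's estimator, which fits full 3-D polyhedral models and tracks them over time.

```python
import numpy as np

def pose_score(gx, gy, segments, samples_per_edge=20):
    """Sum, over points sampled along projected model edges, of the image
    gradient component perpendicular to the edge (large at the correct pose)."""
    H, W = gx.shape
    score = 0.0
    for (x0, y0), (x1, y1) in segments:
        dx, dy = x1 - x0, y1 - y0
        length = np.hypot(dx, dy) + 1e-12
        nx, ny = -dy / length, dx / length            # unit edge normal
        for t in np.linspace(0.0, 1.0, samples_per_edge):
            x, y = int(round(x0 + t * dx)), int(round(y0 + t * dy))
            if 0 <= x < W and 0 <= y < H:
                score += abs(gx[y, x] * nx + gy[y, x] * ny)
    return score

def best_pose(img, model_segments, candidates):
    """Evaluate (tx, ty, theta) hypotheses for a 2-D wireframe; 'candidates'
    is an iterable of such triples, 'model_segments' are edges in model frame."""
    gy, gx = np.gradient(img.astype(np.float64))      # no edge segments extracted
    best, best_score = None, -np.inf
    for tx, ty, th in candidates:
        c, s = np.cos(th), np.sin(th)
        segs = [((c * ax - s * ay + tx, s * ax + c * ay + ty),
                 (c * bx - s * by + tx, s * bx + c * by + ty))
                for (ax, ay), (bx, by) in model_segments]
        sc = pose_score(gx, gy, segs)
        if sc > best_score:
            best, best_score = (tx, ty, th), sc
    return best, best_score
```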


Network Information
Related Topics (5)

Topic                        Papers     Citations   Related
Segmentation                 63.2K      1.2M        82%
Pixel                        136.5K     1.5M        79%
Image segmentation           79.6K      1.8M        78%
Image processing             229.9K     3.5M        77%
Feature (computer vision)    128.2K     1.7M        76%
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    12
2021    535
2020    771
2019    830
2018    727
2017    691