Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over its lifetime, 17,196 publications have been published within this topic, receiving 358,181 citations.


Papers
Journal Article (DOI)
TL;DR: The resulting algorithm, dubbed CurveletQA, correlates well with human subjective opinions of image quality, delivering performance that is competitive with popular full-reference IQA algorithms such as SSIM, and with top-performing NR IQA models.
Abstract: We study the efficacy of utilizing a powerful image descriptor, the curvelet transform, to learn a no-reference (NR) image quality assessment (IQA) model. A set of statistical features are extracted from a computed image curvelet representation, including the coordinates of the maxima of the log-histograms of the curvelet coefficients values, and the energy distributions of both orientation and scale in the curvelet domain. Our results indicate that these features are sensitive to the presence and severity of image distortion. Operating within a 2-stage framework of distortion classification followed by quality assessment, we train an image distortion and quality prediction engine using a support vector machine (SVM). The resulting algorithm, dubbed CurveletQA for short, was tested on the LIVE IQA database and compared to state-of-the-art NR/FR IQA algorithms. We found that CurveletQA correlates well with human subjective opinions of image quality, delivering performance that is competitive with popular full-reference (FR) IQA algorithms such as SSIM, and with top-performing NR IQA models. At the same time, CurveletQA has a relatively low complexity.
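As a rough illustration of the two-stage framework described in the abstract (distortion classification followed by quality prediction, both on curvelet-domain features), the following Python sketch trains an SVM classifier plus per-distortion support vector regressors with scikit-learn. The feature extraction, variable names, and probability-weighted fusion are assumptions for illustration, not the authors' CurveletQA code.

```python
# Hypothetical sketch of a two-stage NR-IQA pipeline: an SVM first classifies the
# distortion type, then a per-distortion SVR maps curvelet-domain features to a
# quality score. Feature extraction is assumed to be done elsewhere.
import numpy as np
from sklearn.svm import SVC, SVR

def train_two_stage(X, y_type, y_score):
    """X: (n_images, n_features) curvelet statistics (e.g. log-histogram maxima,
    orientation/scale energy distributions); y_type: distortion class labels;
    y_score: subjective quality scores (e.g. DMOS). All are NumPy arrays."""
    clf = SVC(kernel="rbf", probability=True).fit(X, y_type)
    regressors = {
        d: SVR(kernel="rbf").fit(X[y_type == d], y_score[y_type == d])
        for d in np.unique(y_type)
    }
    return clf, regressors

def predict_quality(clf, regressors, x):
    # Probability-weighted fusion of the per-distortion quality predictions.
    probs = clf.predict_proba(x.reshape(1, -1))[0]
    return sum(p * regressors[d].predict(x.reshape(1, -1))[0]
               for p, d in zip(probs, clf.classes_))
```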

176 citations

Proceedings Article (DOI)
01 Jan 2003
TL;DR: In this article, a bottom-up aggregation framework was proposed to combine structural characteristics of texture elements with filter responses to distinguish between different textures, where the shape measures and the filter responses crosstalk extensively.
Abstract: Texture segmentation is a difficult problem, as is apparent from camouflage pictures. A textured region can contain texture elements of various sizes, each of which can itself be textured. We approach this problem using a bottom-up aggregation framework that combines structural characteristics of texture elements with filter responses. Our process adaptively identifies the shape of texture elements and characterizes them by their size, aspect ratio, orientation, brightness, etc., and then uses various statistics of these properties to distinguish between different textures. At the same time, our process uses the statistics of filter responses to characterize textures. In our process the shape measures and the filter responses crosstalk extensively. In addition, a top-down cleaning process is applied to avoid mixing the statistics of neighboring segments. We tested our algorithm on real images and demonstrated that it can accurately segment regions that contain challenging textures.
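The aggregation process described above characterizes textures partly through statistics of filter responses. A minimal sketch of that ingredient, assuming a small Gabor filter bank whose parameters and summary statistics are illustrative choices rather than the paper's:

```python
# Illustrative filter-response statistics for a textured region, using a small
# Gabor filter bank; the bank parameters and summary statistics are assumptions,
# not the paper's exact choices.
import numpy as np
from skimage.filters import gabor

def filter_response_stats(gray_region, frequencies=(0.1, 0.2, 0.4), n_orient=6):
    """Mean and standard deviation of the response magnitude for each
    (frequency, orientation) pair; gray_region is a 2-D grayscale array."""
    feats = []
    for freq in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            real, imag = gabor(gray_region, frequency=freq, theta=theta)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.std()])
    return np.asarray(feats)
```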

176 citations

Journal Article (DOI)
TL;DR: A new method is presented for the automatic detection of cars in unmanned aerial vehicle (UAV) images acquired over urban contexts; it starts with a screening operation in which the asphalted areas are identified in order to make the car detection process faster and more robust.

Abstract: This paper presents a new method for the automatic detection of cars in unmanned aerial vehicle (UAV) images acquired over urban contexts. UAV images are characterized by an extremely high spatial resolution, which makes the detection of cars particularly challenging. The proposed method starts with a screening operation in which the asphalted areas are identified in order to make the car detection process faster and more robust. Subsequently, filtering operations in the horizontal and vertical directions are performed to extract histogram-of-gradient features and to yield a preliminary detection of cars after the computation of a similarity measure with a catalog of cars used as reference. Three different strategies for computing the similarity are investigated. Next, for the image points identified as potential cars, an orientation value is computed by searching for the highest similarity value in 36 possible directions. The last step is devoted to merging the points which belong to the same car, because it is likely that a car is identified by more than one point due to the extremely high resolution of UAV images. As output, the proposed method provides the number of cars in the image, as well as the position and orientation of each of them. Experimental results, obtained on a set of real UAV images acquired over an urban area, are presented and discussed.
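The orientation step described in the abstract searches 36 possible directions for the highest similarity with a reference catalog. The sketch below approximates that idea by rotating a candidate patch in 10-degree steps and comparing HOG descriptors with cosine similarity; the HOG parameters and the similarity measure are assumptions, not the authors' exact setup.

```python
# Rough sketch of the orientation search: rotate the candidate patch in steps of
# 360/36 = 10 degrees and keep the angle whose HOG descriptor is most similar to
# the reference catalog. Parameters and the cosine similarity are illustrative.
import numpy as np
from scipy.ndimage import rotate
from skimage.feature import hog

def estimate_orientation(patch, reference_hogs, n_directions=36):
    """patch: 2-D grayscale candidate window; reference_hogs: HOG vectors
    computed from reference car patches of the same size."""
    best_angle, best_sim = 0.0, -np.inf
    for k in range(n_directions):
        angle = k * 360.0 / n_directions
        descriptor = hog(rotate(patch, angle, reshape=False), orientations=9,
                         pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        sim = max(np.dot(descriptor, ref) /
                  (np.linalg.norm(descriptor) * np.linalg.norm(ref) + 1e-12)
                  for ref in reference_hogs)
        if sim > best_sim:
            best_angle, best_sim = angle, sim
    return best_angle, best_sim
```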

175 citations

Patent
07 Feb 1994
TL;DR: An incoming image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis.

Abstract: A device for omnidirectional image viewing providing pan-and-tilt orientation, rotation, and magnification within a selected field-of-view, for use in applications including inspection, monitoring, surveillance, and target acquisition. The imaging device (using optical or infrared images) is based on the fact that the image from a wide-angle lens, which produces a circular image of an entire field-of-view, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. Multiple simultaneous images can be output from a single input image. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout the selected field-of-view without the need for any mechanical mechanisms to move a camera. The preferred embodiment of the image transformation device can provide corrected images at real-time rates compatible with standard video equipment. The device can be controlled by discrete user input, automatic computer scanning, or discrete environmental inputs to select desired output images.
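As a hedged numerical sketch of this kind of dewarping, the code below maps each pixel of a virtual pan/tilt/zoom view back to a circular wide-angle image under an equidistant fisheye model (r = f·θ); this is an illustrative approximation, not the transform claimed in the patent.

```python
# Hypothetical dewarping sketch: each pixel of a virtual perspective view with a
# given pan/tilt/zoom is mapped back to a circular wide-angle image under an
# equidistant fisheye model (r = f * theta). Not the patented transform.
import numpy as np

def dewarp(fisheye, out_size, pan, tilt, fov, f, cx, cy):
    """fisheye: source image; out_size: (height, width) of the corrected view;
    pan/tilt/fov in radians; f: fisheye focal length; (cx, cy): circle center."""
    h, w = out_size
    fp = (w / 2.0) / np.tan(fov / 2.0)          # virtual pinhole focal length
    u, v = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    rays = np.stack([u, v, np.full_like(u, fp)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the viewing rays by tilt (about x) and then pan (about y).
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rays = rays @ (Ry @ Rx).T

    # Equidistant projection: radial distance proportional to the angle from axis.
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    xs = np.clip((cx + f * theta * np.cos(phi)).round().astype(int),
                 0, fisheye.shape[1] - 1)
    ys = np.clip((cy + f * theta * np.sin(phi)).round().astype(int),
                 0, fisheye.shape[0] - 1)
    return fisheye[ys, xs]                       # nearest-neighbour resampling
```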

175 citations

Proceedings Article (DOI)
03 May 1993
TL;DR: From this analysis, an improved method is proposed, and it is shown that the new method can increase the PSNR by up to 1.3 dB over the original method.
Abstract: The zero-tree method for image compression, proposed by J. Shapiro (1992), is studied. The method is presented in a more general perspective, so that its characteristics can be better understood. From this analysis, an improved method is proposed, and it is shown that the new method can increase the PSNR by up to 1.3 dB over the original method.
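The zerotree idea at the core of Shapiro's method is that when a wavelet coefficient is insignificant at a given threshold, its descendants at finer scales frequently are too, so the entire subtree can be coded with a single symbol. A minimal sketch of that significance test over a coefficient quadtree follows; the tuple-based tree layout is an assumption for illustration.

```python
# Minimal sketch of the zerotree significance test: a coefficient is a zerotree
# root at threshold T when it and all of its quadtree descendants have magnitude
# below T, so the encoder can emit a single symbol for the whole subtree.
def is_zerotree_root(coeff, children, threshold):
    """coeff: float; children: list of (coeff, children) subtrees (assumed layout)."""
    if abs(coeff) >= threshold:
        return False
    return all(is_zerotree_root(c, kids, threshold) for c, kids in children)

# Toy example: a coarse coefficient of 3 with a few descendants.
tree = (3.0, [(2.0, []), (-1.5, []), (0.5, [(0.2, [])]), (4.0, [])])
print(is_zerotree_root(tree[0], tree[1], threshold=8))  # True: whole subtree coded once
print(is_zerotree_root(tree[0], tree[1], threshold=4))  # False: the 4.0 descendant is significant
```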

174 citations


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations, 82% related
Pixel: 136.5K papers, 1.5M citations, 79% related
Image segmentation: 79.6K papers, 1.8M citations, 78% related
Image processing: 229.9K papers, 3.5M citations, 77% related
Feature (computer vision): 128.2K papers, 1.7M citations, 76% related
Performance
Metrics
Number of papers in this topic in previous years:
Year    Papers
2022    12
2021    535
2020    771
2019    830
2018    727
2017    691