
Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17196 publications have been published within this topic receiving 358181 citations.


Papers
Journal ArticleDOI
TL;DR: A new method for accuracy assessment is proposed, based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in fields such as machine learning and bioinformatics, and generally in any field requiring an evaluation of the performance of a learning algorithm, but never before applied to HRSI orientation accuracy assessment.
Abstract: Interest in high-resolution satellite imagery (HRSI) is spreading across several application fields, at both the scientific and the commercial level. Fundamental and critical goals for the geometric use of this kind of imagery are its orientation and orthorectification, processes that georeference the imagery and correct the geometric deformations it undergoes during acquisition. In order to exploit the actual potential of orthorectified imagery in Geomatics applications, defining a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in fields such as machine learning, bioinformatics, and geostatistics, and generally in any field requiring an evaluation of the performance of a learning algorithm, but never applied to HRSI orientation accuracy assessment. The proposed method overcomes the main drawbacks of the commonly used method (Hold-Out Validation, HOV), which is based on partitioning the known ground points into two sets: the first is used in the orientation-orthorectification model (GCPs, Ground Control Points) and the second is used to validate the model itself (CPs, Check Points). In fact, the HOV is generally not reliable, and it is not applicable when only a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the SISAR software, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on several EROS-A and QuickBird images. Moreover, these images were also oriented using the widely recognized commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), performing the LOOCV manually since only the HOV is implemented there. The software comparison confirmed the overall correctness and good performance of the SISAR model, and the results showed the good properties of the LOOCV method.

86 citations
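
To make the validation scheme above concrete, here is a minimal sketch of LOOCV for georeferencing accuracy assessment. It stands in a simple 2D affine transformation for the rigorous sensor orientation model (an assumption for illustration only; the paper's SISAR model is far more sophisticated), and all function names are hypothetical:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of image and ground coordinates."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)          # [[a, b, tx], [c, d, ty]]

def apply_affine(T, pts):
    return pts @ T[:, :2].T + T[:, 2]

def loocv_rmse(image_pts, ground_pts):
    """Leave-One-Out Cross-Validation: each ground point serves in turn
    as the single check point while all the others act as GCPs."""
    n = len(image_pts)
    residuals = []
    for i in range(n):
        mask = np.arange(n) != i
        T = fit_affine(image_pts[mask], ground_pts[mask])
        pred = apply_affine(T, image_pts[i:i + 1])
        residuals.append(np.linalg.norm(pred - ground_pts[i]))
    return float(np.sqrt(np.mean(np.square(residuals))))
```

Unlike HOV, every known ground point contributes both to model estimation and to validation, which is why the scheme remains usable when few points are available.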

Proceedings ArticleDOI
06 Jun 2005
TL;DR: A localization method is proposed that combines GPS absolute localization with data computed by a vision system giving the position and orientation of the vehicle on the road.
Abstract: Localization is an important functionality for the navigation of intelligent vehicles. It is usually performed using several kinds of sensors (proprioceptive sensors, GPS, cameras). All of these data are uncertain and even momentarily unavailable (GPS in urban areas, for example), so a data fusion process is necessary for the sensors to compensate for one another. We propose here to combine GPS absolute localization with data computed by a vision system giving the position and orientation of the vehicle on the road. This local information is transformed into a global reference frame using a map of the environment. The localization parameters are estimated using a particle filter, which makes it possible to manage multimodal estimates (the vehicle can be on the left lane or on the right one, for example). Many results have been obtained in real time and on real roads by implementing this solution in an experimental vehicle. The best standard deviation reached is 48 cm along the road axis and 8 cm along the axis normal to the road.

86 citations
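
A minimal sketch of the fusion idea follows, assuming a planar (x, y, heading) state, a noisy unicycle motion model, a Gaussian GPS position likelihood, and a vision measurement of lateral offset from the road axis. The noise values and the road-along-the-x-axis convention are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, omega, dt):
    """Propagate (x, y, theta) particles with a noisy unicycle model."""
    n = len(particles)
    theta = particles[:, 2]
    particles[:, 0] += (v + rng.normal(0, 0.2, n)) * np.cos(theta) * dt
    particles[:, 1] += (v + rng.normal(0, 0.2, n)) * np.sin(theta) * dt
    particles[:, 2] += (omega + rng.normal(0, 0.02, n)) * dt
    return particles

def weight(particles, gps_xy, gps_sigma, lateral, lat_sigma, road_y=0.0):
    """Combine a GPS position likelihood with a vision likelihood on the
    lateral offset from the road axis (here the x-axis, for simplicity)."""
    d_gps = np.linalg.norm(particles[:, :2] - gps_xy, axis=1)
    w = np.exp(-0.5 * (d_gps / gps_sigma) ** 2)
    d_lat = (particles[:, 1] - road_y) - lateral
    w *= np.exp(-0.5 * (d_lat / lat_sigma) ** 2)
    return w / w.sum()

def resample(particles, w):
    """Multinomial resampling keeps the particle set focused on the
    likely modes (e.g., left lane vs. right lane hypotheses)."""
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx].copy()
```

Because the posterior is carried by the particle set rather than a single Gaussian, two lanes can remain plausible simultaneously until a measurement disambiguates them.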

Journal ArticleDOI
TL;DR: This paper presents a workflow for the supervised and automated identification and reconstruction of near-planar geological surfaces that have a three-dimensional exposure in the outcrop (typically bedding, fractures, or faults enhanced by differential erosion).

86 citations
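
Since such workflows ultimately reduce each detected surface to an orientation, here is a minimal, hypothetical sketch of the core geometric step: fitting a plane to a 3D point patch and converting its normal into a geological dip direction and dip angle. SVD-based plane fitting is a standard choice; the paper's actual pipeline is not reproduced here:

```python
import numpy as np

def plane_orientation(points):
    """Fit a plane to an (N, 3) point cloud (x east, y north, z up) and
    return its dip direction and dip angle in degrees."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the centered
    # cloud associated with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:                     # orient the normal upward
        normal = -normal
    dip = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return dip_dir, dip
```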

Journal ArticleDOI
TL;DR: A novel and powerful local image descriptor is introduced that extracts histograms of second-order gradients (HSOG) to capture the curvature-related geometric properties of the neural landscape: cliffs, ridges, summits, valleys, basins, and so on.
Abstract: Recent investigations of human vision find that the retinal image is a landscape, or geometric surface, consisting of features such as ridges and summits. However, most popular local image descriptors in the literature, e.g., the scale-invariant feature transform (SIFT), the histogram of oriented gradients (HOG), DAISY, local binary patterns (LBP), and the gradient location and orientation histogram (GLOH), employ only first-order gradient information, related to the slope and elasticity (length, area, and so on) of a surface, and thereby only partially characterize the geometric properties of a landscape. In this paper, we introduce a novel and powerful local image descriptor that extracts histograms of second-order gradients (HSOG) to capture the curvature-related geometric properties of the neural landscape, i.e., cliffs, ridges, summits, valleys, basins, and so on. We conduct comprehensive experiments on three different applications: local image matching, visual object categorization, and scene classification. The experimental results clearly evidence the discriminative power of HSOG compared with its first-order gradient-based counterparts, e.g., SIFT, HOG, DAISY, and center-symmetric LBP, as well as its complementarity to them in terms of image representation, demonstrating the effectiveness of the proposed local descriptor.

86 citations
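
As a rough illustration of the second-order idea (not the authors' exact HSOG construction, which aggregates several second-order fields over spatial cells), this sketch differentiates a patch once to obtain the slope field and a second time to obtain a curvature-like field, then bins the second-order orientations into a histogram:

```python
import numpy as np

def second_order_orientation_histogram(patch, bins=8):
    """Illustrative second-order gradient histogram for a 2D patch.
    First-order gradients give the slope field; differentiating the
    gradient magnitude again exposes curvature-like structure
    (ridges, valleys), which is then binned by orientation."""
    gy, gx = np.gradient(patch.astype(float))   # first-order field
    mag1 = np.hypot(gx, gy)
    g2y, g2x = np.gradient(mag1)                # second-order field
    mag2 = np.hypot(g2x, g2y)
    ang2 = np.arctan2(g2y, g2x) % (2 * np.pi)
    hist, _ = np.histogram(ang2, bins=bins, range=(0, 2 * np.pi),
                           weights=mag2)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A first-order descriptor such as HOG would stop at `mag1`; the extra differentiation step is what lets the histogram respond to ridges and valleys rather than plain slopes.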

Proceedings ArticleDOI
27 Jun 2011
TL;DR: A new method to model the spatial distribution of oriented local features on an object is presented, which is used to infer object pose given small sets of observed local features.
Abstract: The success of personal service robotics hinges upon reliable manipulation of everyday household objects, such as dishes, bottles, containers, and furniture. In order to accurately manipulate such objects, robots need to know objects’ full 6-DOF pose, estimation of which is made difficult by clutter and occlusions. Many household objects have regular structure that can be used to effectively guess object pose given an observation of just a small patch on the object. In this paper, we present a new method to model the spatial distribution of oriented local features on an object, which we use to infer object pose given small sets of observed local features. The orientation distribution of local features is given by a mixture of Binghams on the hypersphere of unit quaternions, while the local feature distribution for position given orientation is given by a locally weighted (quaternion kernel) likelihood. Experiments on 3D point cloud data of cluttered and uncluttered scenes generated from a structured light stereo image sensor validate our approach.

86 citations
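
For intuition, here is a minimal sketch of scoring an orientation hypothesis under a mixture of Bingham densities on unit quaternions. The density is left unnormalized (computing the Bingham normalizing constant is nontrivial), and the parameter values are illustrative assumptions rather than anything fit to data:

```python
import numpy as np

def bingham_unnorm(q, V, Z):
    """Unnormalized Bingham density on unit quaternions.
    V: (4, 4) orthogonal matrix of principal directions,
    Z: (4,) non-positive concentrations (one entry 0 by convention).
    The density is antipodally symmetric, matching q and -q being
    the same rotation."""
    q = q / np.linalg.norm(q)
    return float(np.exp(np.sum(Z * (V.T @ q) ** 2)))

def mixture_score(q, components):
    """Score an orientation hypothesis under a mixture of Binghams,
    each component given as (weight, V, Z)."""
    return sum(w * bingham_unnorm(q, V, Z) for w, V, Z in components)

# A component concentrated near the identity rotation q = (1, 0, 0, 0):
V = np.eye(4)                      # first axis is the mode (its Z is 0)
Z = np.array([0.0, -50.0, -50.0, -50.0])
components = [(1.0, V, Z)]
print(mixture_score(np.array([1.0, 0.0, 0.0, 0.0]), components))  # ~1.0
print(mixture_score(np.array([0.0, 1.0, 0.0, 0.0]), components))  # tiny
```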


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations (82% related)
Pixel: 136.5K papers, 1.5M citations (79% related)
Image segmentation: 79.6K papers, 1.8M citations (78% related)
Image processing: 229.9K papers, 3.5M citations (77% related)
Feature (computer vision): 128.2K papers, 1.7M citations (76% related)
Performance Metrics
No. of papers in the topic in previous years:
Year: Papers
2022: 12
2021: 535
2020: 771
2019: 830
2018: 727
2017: 691