Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17,196 publications have been published within this topic, receiving 358,181 citations.


Papers
Journal Article (DOI)
TL;DR: An affordable, fully automated, and accurate mapping solution based on ultra-light UAV imagery that can compete with traditional mapping solutions, which capture fewer high-resolution images from airplanes and rely on highly accurate on-board orientation and positioning sensors.
Abstract: This paper presents an affordable, fully automated and accurate mapping solution based on ultra-light UAV imagery. Several datasets are analysed and their accuracy is estimated. We show that the accuracy depends strongly on the ground resolution (flying height) of the input imagery. When this is chosen appropriately, the mapping solution can compete with traditional mapping solutions that capture fewer high-resolution images from airplanes and that rely on highly accurate orientation and positioning sensors on board. Due to the careful integration of recent computer vision techniques, the post-processing is robust and fully automatic and can deal with inaccurate position and orientation information, which is typically problematic for traditional techniques. Fully autonomous, ultra-light Unmanned Aerial Vehicles (UAVs) have recently become commercially available at very reasonable cost for civil applications. The advantage linked to their small mass (typically around 500 grams) is that they do not represent a real threat to third parties in case of malfunction. In addition, they are very easy and quick to deploy and retrieve. The drawback of these autonomous platforms lies in the relatively low accuracy of their orientation estimates. In this paper we show, however, that such ultra-light UAVs can take reasonably good images with a large amount of overlap while covering areas on the order of a few square kilometers per flight. Since their miniature on-board autopilots cannot deliver extremely precise positioning and orientation of the recorded images, post-processing is key to the generation of geo-referenced orthomosaics and digital elevation models (DEMs). In this paper we evaluate an automatic image-processing pipeline with respect to its accuracy on various datasets. Our study shows that ultra-light UAV imagery provides a convenient and affordable solution for measuring geographic information with an accuracy similar to that of larger airborne systems equipped with high-end imaging sensors, IMU and differential GPS devices. Within the frame of this paper, we present results from a flight campaign carried out with the swinglet CAM, a 500-gram autonomous flying wing initially developed at EPFL-LIS and now produced by senseFly. The swinglet CAM records 12 MP images and can cover areas of up to 10 square kilometers. These images can easily be geotagged after the flight using the senseFly PostFlight Suite, which processes the flight trajectory to find where the images were taken. The images and their geotags form the input to the processing developed at EPFL-CVLab. In this paper, we compare two variants:

126 citations
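The geotagging step described in the abstract above lends itself to a compact illustration: the autopilot log gives a timestamped trajectory, and each image is assigned the position interpolated at its capture time. Below is a minimal Python sketch of that idea; the function name, array layout, and the use of linear interpolation are illustrative assumptions, not the senseFly PostFlight Suite's actual implementation.

```python
import numpy as np

def geotag_images(image_times, traj_times, traj_positions):
    """Interpolate logged flight-trajectory positions at image capture times.

    image_times   : (M,) capture timestamps (seconds)
    traj_times    : (N,) autopilot log timestamps (seconds, ascending)
    traj_positions: (N, 3) logged latitude, longitude, altitude samples
    Returns an (M, 3) array of estimated camera positions, one per image.
    """
    tags = np.empty((len(image_times), 3))
    for axis in range(3):
        tags[:, axis] = np.interp(image_times, traj_times, traj_positions[:, axis])
    return tags

# Hypothetical usage: four images taken during a short logged flight segment
traj_t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
traj_p = np.array([[46.518, 6.566, 450.0],
                   [46.519, 6.567, 455.0],
                   [46.520, 6.568, 460.0],
                   [46.521, 6.569, 458.0],
                   [46.522, 6.570, 455.0]])
img_t = np.array([0.5, 1.5, 2.5, 3.5])
print(geotag_images(img_t, traj_t, traj_p))
```

Linear interpolation is adequate for a sketch like this because consecutive log samples are closely spaced relative to the aircraft's motion.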

01 Jan 1995
TL;DR: A technique for the characterization and segmentation of anisotropic patterns that exhibit a single local orientation, using a smoothed gradient-square tensor whose eigenvalues measure anisotropy and whose eigenvectors indicate local orientation.
Abstract: This paper describes a technique for characterization and segmentation of anisotropic patterns that exhibit a single local orientation. Using Gaussian derivatives we construct a gradient-square tensor at a selected scale. Smoothing of this tensor allows us to combine information in a local neighborhood without canceling vectors that point in opposite directions: whereas opposite vectors would cancel, their tensors reinforce. Consequently, the tensor characterizes orientation rather than direction. Usually this local neighborhood is at least a few times larger than the scale parameter of the gradient operators. The eigenvalues yield a measure of anisotropy, whereas the eigenvectors indicate the local orientation. In addition to these measures, we can detect anomalies in textured patterns.

1. Introduction

Information from subsurface structures may help geologists in their search for hydrocarbons (oil and gas). In addition to seismic measurements, which are performed at the earth's surface, important information can be extracted from a borehole. This can be done either by downhole imaging of the borehole wall or by analyzing the removed borehole material, the "core". Core imaging requires careful drilling with a hollow drill bit. The cores are transported to the surface for further analysis. Apart from physical measurements, geologists are interested in the spatial organization of the acquired rock formations. We show that this can be done with the help of quantitative image analysis. The cylindrical cores can be cut longitudinally ("slabbed"), and digitization of the flat surface yields a 2D slabbed-core image. Quantitative information about the layer structure in a borehole may help geologists improve their interpretation. The approach to be followed is guided by a simple layer model of the earth's subsurface. These layers can be described by a number of parameters, which may all vary as a function of depth. Some of these parameters have a direct geometric meaning (dip and azimuth), whereas others are much more difficult to express quantitatively in a unique way. In this paper we focus on orientation and anisotropy measurements applied to slabbed core images.

125 citations
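The gradient-square tensor construction in this abstract maps directly onto a few lines of array code: Gaussian derivatives at a small scale, per-pixel products, smoothing at a larger scale, then a closed-form eigen-analysis of the 2x2 symmetric tensor. The sketch below assumes Python with SciPy; the two scale parameters are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_orientation(image, grad_scale=1.0, tensor_scale=4.0):
    """Gradient-square (structure) tensor: per-pixel orientation and anisotropy."""
    # Gaussian derivatives at the gradient scale
    gx = gaussian_filter(image, grad_scale, order=(0, 1))  # d/dx
    gy = gaussian_filter(image, grad_scale, order=(1, 0))  # d/dy
    # Tensor components, smoothed over a larger neighborhood so that
    # opposite gradient vectors reinforce instead of cancelling
    Jxx = gaussian_filter(gx * gx, tensor_scale)
    Jxy = gaussian_filter(gx * gy, tensor_scale)
    Jyy = gaussian_filter(gy * gy, tensor_scale)
    # Closed-form eigen-analysis of the 2x2 symmetric tensor
    trace = Jxx + Jyy
    diff = Jxx - Jyy
    root = np.sqrt(diff**2 + 4.0 * Jxy**2)
    lam1 = 0.5 * (trace + root)  # larger eigenvalue
    lam2 = 0.5 * (trace - root)  # smaller eigenvalue
    # Dominant gradient direction; the pattern (layer) orientation is
    # perpendicular to this angle
    orientation = 0.5 * np.arctan2(2.0 * Jxy, diff)
    # Anisotropy in [0, 1]: 0 for isotropic texture, 1 for a single orientation
    anisotropy = np.where(trace > 0, (lam1 - lam2) / trace, 0.0)
    return orientation, anisotropy
```

Because the tensor entries are quadratic in the gradient, vectors g and -g contribute identically, which is exactly the "orientation rather than direction" property the abstract emphasizes.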

Patent
19 Jul 1999
TL;DR: A system that creates vision-based, three-dimensional control of a multiple-degree-of-freedom dexterous robot, without special calibration of the vision system, the robot, or any of the constituent parts of the system, and that allows high-level human supervision or direction of the robot.
Abstract: A system is presented that creates vision-based, three-dimensional control of a multiple-degree-of-freedom dexterous robot, without special calibration of the vision system, the robot, or any of the constituent parts of the system, and that allows high-level human supervision or direction of the robot. The human operator uses a graphical user interface (GUI) to point and click on an image of the surface of the object with which the robot is to interact. Directed at this surface are the stationary selection camera, which provides the image for the GUI, and at least one other camera. A laser pointer is panned and tilted so as to create, in each participating camera space, targets associated with the surface junctures that the user has selected in the selection camera. Camera-space manipulation is used to control the internal degrees of freedom of the robot such that selected points on the robot end member move relative to selected surface points in a way that is consistent with the desired robot operation. As per the requirement of camera-space manipulation, the end member must have features, or "cues", with known locations relative to the controlled end-member points that can be located in the images or camera spaces of the participant cameras. The system is extended to simultaneously control tool orientation relative to the surface normal and/or relative to user-selected directions tangent to the surface. The system is further extended in various ways to allow for additional versatility of application.

125 citations
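Camera-space manipulation, on which this patent builds, replaces explicit calibration with per-camera models estimated from observed cues. As a rough sketch of that estimation-based flavour, the Python below fits an affine map from nominal 3D cue positions to each camera's image coordinates, then solves for the 3D end-member position whose projections match the user-selected targets in every participating camera space. The affine model and all names are illustrative simplifications; the classical method uses an orthographic camera model refined online, and this is not the patent's actual implementation.

```python
import numpy as np

def fit_camera_space_model(cue_xyz, cue_uv):
    """Least-squares affine map from nominal 3D cue positions (N, 3)
    to one camera's observed image coordinates (N, 2). No camera
    calibration is used; the map is estimated from the cues alone."""
    A = np.hstack([cue_xyz, np.ones((len(cue_xyz), 1))])  # (N, 4)
    M, *_ = np.linalg.lstsq(A, cue_uv, rcond=None)        # (4, 2)
    return M

def solve_target_position(models, targets):
    """Find the 3D point whose affine projections best match the
    selected 2D targets in all camera spaces (needs >= 2 cameras)."""
    # Each camera contributes two linear equations: M[:3].T @ x = uv - M[3]
    A = np.vstack([M[:3].T for M in models])              # (2K, 3)
    b = np.concatenate([uv - M[3] for M, uv in zip(models, targets)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With two or more cameras the stacked system is overdetermined, so the least-squares solution pins down the 3D position even though no camera is ever calibrated in the conventional sense.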

Patent
Ho Jin Koh
09 Sep 2004
TL;DR: An apparatus, method, and medium for controlling image orientation are disclosed, in which an orientation mode detector measures multi-directional rotational angles of a display panel and determines an orientation mode for original image data based on the measured rotational angles.
Abstract: An apparatus, method, and medium for controlling image orientation are disclosed. An orientation mode detector measures multi-directional rotational angles of a display panel and determines an orientation mode for the original image data based on the measured rotational angles. A system memory stores orientation parameters corresponding to a plurality of image orientation modes. A system controller first acquires information indicating the orientation mode from the orientation mode detector, then extracts the orientation parameters corresponding to the acquired information from the system memory. Finally, a driver changes the orientation of the original image data according to the extracted orientation parameters.

125 citations
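The disclosed control flow (measure angle, pick a mode, look up stored parameters, reorient the image) is simple enough to sketch end-to-end. In the hypothetical Python below, the mode table plays the role of the system memory, angle quantization stands in for the orientation mode detector, and np.rot90 stands in for the driver; all of this is an illustrative reading of the patent, not its actual hardware design.

```python
import numpy as np

ORIENTATION_PARAMS = {  # system-memory lookup: mode -> quarter-turns to apply
    "portrait": 0,
    "landscape_left": 1,
    "portrait_flipped": 2,
    "landscape_right": 3,
}

def detect_orientation_mode(angle_deg):
    """Orientation-mode detector: quantize a measured panel angle to a mode."""
    modes = list(ORIENTATION_PARAMS)
    return modes[int(round(angle_deg / 90.0)) % 4]

def drive_display(image, angle_deg):
    """Controller + driver: fetch the parameters for the detected mode
    and reorient the original image data accordingly."""
    mode = detect_orientation_mode(angle_deg)
    k = ORIENTATION_PARAMS[mode]
    return np.rot90(image, k=k)
```

Keeping the mode-to-parameter table in a separate store mirrors the patent's split between detector, memory, controller, and driver.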

Posted Content
TL;DR: ASLFeat mitigates two limitations in the joint learning of local feature detectors and descriptors, in part by resorting to deformable convolutional networks to densely estimate and apply local transformations.
Abstract: This work focuses on mitigating two limitations in the joint learning of local feature detectors and descriptors. First, the ability to estimate the local shape (scale, orientation, etc.) of feature points is often neglected during dense feature extraction, although shape-awareness is crucial for acquiring stronger geometric invariance. Second, the localization accuracy of detected keypoints is not sufficient to reliably recover camera geometry, which has become the bottleneck in tasks such as 3D reconstruction. In this paper, we present ASLFeat, with three lightweight yet effective modifications to mitigate the above issues. First, we resort to deformable convolutional networks to densely estimate and apply local transformations. Second, we take advantage of the inherent feature hierarchy to restore spatial resolution and low-level details for accurate keypoint localization. Finally, we use a peakiness measurement to relate feature responses and derive more indicative detection scores. The effect of each modification is thoroughly studied, and the evaluation is extensively conducted across a variety of practical scenarios. State-of-the-art results are reported that demonstrate the superiority of our method.

125 citations
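Of the three modifications, the peakiness measurement is the easiest to sketch in isolation: a response yields a strong detection score when it stands out both across channels at its pixel and within a local spatial window of its own channel. The NumPy sketch below follows that description; the stable softplus, the window size, and the exact way the two terms are combined are assumptions in the spirit of the abstract, not the authors' released implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def softplus(x):
    """Numerically stable softplus: log(1 + exp(x))."""
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def peakiness_scores(feat, window=3):
    """feat: (C, H, W) dense feature responses.

    A response is 'peaky' if it exceeds both the channel-wise mean at its
    pixel and the local spatial mean within its own channel; the detection
    score at each pixel takes the best channel."""
    channel_peak = softplus(feat - feat.mean(axis=0, keepdims=True))
    spatial_mean = uniform_filter(feat, size=(1, window, window))
    spatial_peak = softplus(feat - spatial_mean)
    return (channel_peak * spatial_peak).max(axis=0)  # (H, W) score map
```

The resulting (H, W) map can then be non-maximum suppressed and thresholded to select keypoints.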


Network Information
Related Topics (5)

Segmentation: 63.2K papers, 1.2M citations (82% related)
Pixel: 136.5K papers, 1.5M citations (79% related)
Image segmentation: 79.6K papers, 1.8M citations (78% related)
Image processing: 229.9K papers, 3.5M citations (77% related)
Feature (computer vision): 128.2K papers, 1.7M citations (76% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2022      12
2021     535
2020     771
2019     830
2018     727
2017     691