Topic
Orientation (computer vision)
About: Orientation (computer vision) is a research topic. Over its lifetime, 17,196 publications have been published within this topic, receiving 358,181 citations.
Papers published on a yearly basis
Papers
TL;DR: A new method for registration in augmented reality (AR) was developed that simultaneously tracks the position, orientation, and motion of the user's head while estimating the three-dimensional (3D) structure of the scene.
Abstract: A new method for registration in augmented reality (AR) was developed that simultaneously tracks the position, orientation, and motion of the user's head, as well as estimating the three-dimensional (3D) structure of the scene. The method fuses data from head-mounted cameras and head-mounted inertial sensors. Two extended Kalman filters (EKFs) are used: one estimates the motion of the user's head and the other estimates the 3D locations of points in the scene. A recursive loop is used between the two EKFs. The algorithm was tested using a combination of synthetic and real data, and in general was found to perform well. A further test showed that a system using two cameras performed much better than a system using a single camera, although improving the accuracy of the inertial sensors can partially compensate for the loss of one camera. The method is suitable for use in completely unstructured and unprepared environments. Unlike previous work in this area, this method requires no a priori knowledge about the scene, and can work in environments in which the objects of interest are close to the user.
82 citations
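The paper's specific two-filter formulation is not reproduced here, but the EKF predict/update cycle that such camera-inertial fusion builds on can be sketched as follows (the constant-velocity model, noise values, and function names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    # Propagate state and covariance through the (linearized) motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, H, R):
    # Fuse a measurement z with the prediction via the Kalman gain.
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy head state [position, velocity]; vision supplies noisy position fixes.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
Q = 1e-3 * np.eye(2)                   # process noise
H = np.array([[1.0, 0.0]])             # we observe position only
R = np.array([[0.05]])                 # measurement noise

x, P = np.zeros(2), np.eye(2)
for z in [0.11, 0.19, 0.32]:
    x, P = ekf_predict(x, P, F, Q)
    x, P = ekf_update(x, P, np.array([z]), H, R)
```

In the paper's scheme one such filter tracks head motion while a second estimates scene-point depths, with each filter consuming the other's latest output in a recursive loop.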
TL;DR: A vision-based position and orientation estimation method for aircraft navigation and control that accounts for a limited camera FOV by releasing tracked features that are about to leave the FOV and tracking new features.
Abstract: While a Global Positioning System (GPS) is the most widely used sensor modality for aircraft navigation, researchers have been motivated to investigate other navigational sensor modalities because of the desire to operate in GPS denied environments. Due to advances in computer vision and control theory, monocular camera systems have received growing interest as an alternative/collaborative sensor to GPS systems. Cameras can act as navigational sensors by detecting and tracking feature points in an image. Current methods have a limited ability to relate feature points as they enter and leave the camera field of view (FOV). A vision-based position and orientation estimation method for aircraft navigation and control is described. This estimation method accounts for a limited camera FOV by releasing tracked features that are about to leave the FOV and tracking new features. At each time instant that new features are selected for tracking, the previous pose estimate is updated. The vision-based estimation scheme can provide input directly to the vehicle guidance system and autopilot. Simulations are performed wherein the vision-based pose estimation is integrated with a nonlinear flight model of an aircraft. Experimental verification of the pose estimation is performed using the modelled aircraft.
82 citations
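The feature-release strategy described in the abstract can be illustrated with a minimal sketch: drop tracked features that drift within a margin of the image border and replace them from fresh detections (the margin, target count, and function names are hypothetical, not the paper's):

```python
def inside_fov(pt, width, height, margin):
    # A feature is safe if it sits at least `margin` pixels from every border.
    u, v = pt
    return margin <= u < width - margin and margin <= v < height - margin

def refresh_features(tracked, candidates, width, height, margin=20, target=8):
    # Release features about to exit the FOV, then top up from new detections.
    kept = [p for p in tracked if inside_fov(p, width, height, margin)]
    fresh = [p for p in candidates if inside_fov(p, width, height, margin)]
    return (kept + fresh)[:target]
```

In the paper, each such hand-over is also the moment the running pose estimate is updated, so the estimator stays continuous across feature sets.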
TL;DR: An algorithm to recover three-dimensional shape, i.e., surface orientation and relative depth, from a single segmented image is presented, and a variational formulation of line drawing and shading constraints in a common framework is developed.
Abstract: An algorithm to recover three-dimensional shape, i.e., surface orientation and relative depth, from a single segmented image is presented. It is assumed that the scene is composed of opaque regular solid objects bounded by piecewise smooth surfaces with no markings or textures. It is also assumed that the reflectance map R(n) is known. For the canonical case of Lambertian surfaces illuminated by a point light source, this implies knowing the light-source direction. A variational formulation of line drawing and shading constraints in a common framework is developed. The global constraints are partitioned into constraint sets corresponding to the faces, edges, and vertices in the scene. For a face, the constraints are given by Horn's image irradiance equation. At an edge, a variational formulation of the constraints is developed, drawing both on shading and on the known direction of the image curve corresponding to the edge. At a vertex, the constraints are modeled by a system of nonlinear equations. An algorithm is presented to solve this system of constraints.
82 citations
23 May 2003
TL;DR: The purpose of this study is to enhance previously known calibration methods by introducing a novel calibration fixture and process; the proposed phantom is inexpensive, easy to construct, and easy to scan, while yielding more data points per image than previously known designs.
Abstract: Conventional freehand 3D ultrasound (US) is a complex process, involving calibration, scanning, processing, volume reconstruction, and visualization. Prior to calibration, a position sensor is attached to the probe for tagging each image with its position and orientation in space; then a calibration process is performed to determine the spatial transformation of the scan plane with respect to the position sensor. Finding this transformation matrix is a critical, but often underrated, task in US-guided surgery. The purpose of this study is to enhance previously known calibration methods by introducing a novel calibration fixture and process. The proposed phantom is inexpensive, easy to construct, and easy to scan, while yielding more data points per image than previously known designs. The processing phase is semi-automated, allowing for fast processing of a massive amount of data, which in turn increases accuracy by reducing human error.
82 citations
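The role of the calibration matrix can be illustrated by the usual transform chain from image pixels to world coordinates: pixels are scaled to millimetres in the scan plane, mapped into the sensor frame by the calibration transform, and then into the world frame by the tracker reading (the scale factors, frame names, and function are illustrative assumptions, not the paper's fixture):

```python
import numpy as np

def pixel_to_world(u, v, sx, sy, T_calib, T_tracker):
    # Scale pixel (u, v) to mm in the scan plane (z = 0), then chain the
    # calibration transform (image -> sensor) and the tracked sensor pose
    # (sensor -> world), all as 4x4 homogeneous matrices.
    p_img = np.array([u * sx, v * sy, 0.0, 1.0])
    return (T_tracker @ T_calib @ p_img)[:3]
```

Calibration is precisely the problem of estimating T_calib, which is fixed once the sensor is rigidly mounted on the probe; T_tracker changes with every frame.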
TL;DR: The paper describes in detail the derivation of the extended collinearity model and discusses the advantages of this new approach compared to the standard coplanarity model that is used in line photogrammetry.
Abstract: This paper is concerned with using linear features in aerial triangulation. Without loss of generality, the focus is on straight lines with the attempt to treat tie lines in the same fashion as tie points. The parameters of tie lines appear in the block adjustment like the tie points do. This requires a unique representation of lines in object space. We propose a four-parameter representation that also offers a meaningful stochastic interpretation of the line parameters. The proposed line representation lends itself to a parameterized form, allowing use of the collinearity model for expressing orientation and tie line parameters as a function of points measured on image lines. The paper describes in detail the derivation of the extended collinearity model and discusses the advantages of this new approach compared to the standard coplanarity model that is used in line photogrammetry. The intention of the paper is to make a contribution to feature-based aerial triangulation on the algorithmic level.
82 citations
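The collinearity model referenced above relates an object point to its image coordinates through the exterior orientation (projection centre and rotation); a minimal sketch of the familiar point form, not the paper's extended line form (the sign convention assumes the camera axis along +z with the principal point at the image origin):

```python
import numpy as np

def collinearity_project(X, X0, R, f):
    # Collinearity equations: image coordinates of object point X seen
    # from projection centre X0 with rotation R and focal length f.
    # d is the object point expressed in the camera frame.
    d = R @ (np.asarray(X, dtype=float) - np.asarray(X0, dtype=float))
    return np.array([-f * d[0] / d[2], -f * d[1] / d[2]])
```

The paper's extension expresses the parameters of a 3D tie line (in its four-parameter form) through the same model, so that points measured anywhere along an image line contribute observations to the block adjustment just as tie points do.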