scispace - formally typeset
Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17196 publications have been published within this topic receiving 358181 citations.


Papers
Proceedings ArticleDOI
07 Jul 2008
TL;DR: A new image-based method to process contacts between objects bounded by triangular surfaces is presented; it eliminates complex geometrical computations, robustly handles deep intersections, and is efficient for both deformable and rigid objects.
Abstract: We present a new image-based method to process contacts between objects bounded by triangular surfaces. Unlike previous methods, it relies on image-based volume minimization, which eliminates complex geometrical computations and robustly handles deep intersections. The surfaces are rasterized in three orthogonal directions, and intersections are detected based on pixel depth and normal orientation. Per-pixel contact forces are computed and accumulated at the vertices. We show how to compute pressure forces, which serve to minimize the intersection volume, as well as friction forces. No geometrical precomputation is required, which makes the method efficient for both deformable and rigid objects. We demonstrate it on rigid, skinned, and particle-based physical models with detailed surfaces in contact at interactive frame rates.
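The per-pixel volume-minimization idea above can be sketched in simplified form. Assuming each object contributes an [entry, exit] depth interval per pixel along one raster axis (recovered from its front- and back-facing fragments), the intersection-volume slice and a pressure-like force magnitude follow directly. The function names and the linear force law are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch: per-pixel overlap detection along one raster axis.
# Each object contributes an (entry_depth, exit_depth) interval per pixel,
# obtained from its front-facing and back-facing fragments.

def pixel_overlap(interval_a, interval_b):
    """Return the depth overlap of two [entry, exit] intervals (0 if none)."""
    entry = max(interval_a[0], interval_b[0])
    exit_ = min(interval_a[1], interval_b[1])
    return max(0.0, exit_ - entry)

def pressure_force(interval_a, interval_b, pixel_area, stiffness=1.0):
    """Per-pixel force magnitude proportional to the intersection-volume
    slice covered by this pixel (assumed linear force law)."""
    return stiffness * pixel_area * pixel_overlap(interval_a, interval_b)
```

Accumulating such per-pixel magnitudes over the three orthogonal raster directions gives forces that shrink the intersection volume.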

104 citations

Journal ArticleDOI
TL;DR: The experiments show that the new model yields superior results in estimating the vessel radius compared to previous approaches based on a Gaussian model as well as the Hough transform.
Abstract: We introduce a new approach for 3-D segmentation and quantification of vessels. The approach is based on a 3-D cylindrical parametric intensity model, which is directly fitted to the image intensities through an incremental process based on a Kalman filter. Segmentation results are the vessel centerline and shape, i.e., we estimate the local vessel radius, the 3-D position and 3-D orientation, the contrast, as well as the fitting error. We carried out an extensive validation using 3-D synthetic images and also compared the new approach with an approach based on a Gaussian model. In addition, the new model has been successfully applied to segment vessels from 3-D MRA and computed tomography angiography image data. In particular, we compared our approach with an approach based on the randomized Hough transform. Moreover, a validation of the segmentation results based on ground truth provided by a radiologist confirms the accuracy of the new approach. Our experiments show that the new model yields superior results in estimating the vessel radius compared to previous approaches based on a Gaussian model as well as the Hough transform.
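As a rough illustration of what a cylindrical parametric intensity model can look like, the radial profile below is a Gaussian-blurred step at the vessel wall: bright inside the vessel, falling to the background outside. The exact model fitted in the paper differs; the function name and parameterization here are hypothetical:

```python
import math

def cylinder_intensity(r, radius, contrast, background, sigma=1.0):
    """Idealized smoothed cylinder cross-section profile: intensity is
    high for r < radius and decays to the background outside, blurred
    by an assumed Gaussian point-spread width sigma."""
    # Error-function step centered at the vessel wall (r == radius).
    step = 0.5 * (1.0 + math.erf((radius - r) / (math.sqrt(2.0) * sigma)))
    return background + contrast * step
```

Fitting such a model to image intensities (e.g. via least squares or, as in the paper, an incremental Kalman-filter process) yields estimates of the radius, contrast, and local orientation.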

104 citations

Journal ArticleDOI
TL;DR: Two brain extraction methods (BEM) that depend solely on brain anatomy and its intensity characteristics are proposed; they give better results than the popular methods FSL's Brain Extraction Tool (BET) and BrainSuite's Brain Surface Extractor (BSE), and work well even where MLS failed.

104 citations

01 Jan 1999
TL;DR: A map for an autonomous mobile robot (AMR) in an indoor environment, for the purpose of continuous position and orientation estimation, is discussed; the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data.
Abstract: A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose of continuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown where the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environment, so that a cross-correlation of an actual converted scan with this representative yields the actual position and orientation in a global coordinate system. The map itself is built of representative scans for the positions where the AMR has been, so that it is able to find its position and orientation by comparing the actual scan with a scan stored in the map.
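The correlation step can be sketched as a circular cross-correlation of two statistical scan representations, for example angle histograms; the arg-max shift then encodes the relative orientation between scans. This brute-force version is illustrative only, and the paper's actual statistical representation may differ:

```python
def circular_xcorr_shift(hist_ref, hist_cur):
    """Find the bin shift maximizing the circular cross-correlation of
    two histograms (e.g. angle histograms of laser scans); the shift
    encodes the relative orientation between the two scans."""
    n = len(hist_ref)
    best_shift, best_score = 0, float("-inf")
    for s in range(n):
        score = sum(hist_ref[i] * hist_cur[(i + s) % n] for i in range(n))
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift
```

With 360 one-degree bins, the returned shift is directly the heading offset in degrees; an analogous 1-D correlation over x and y histograms recovers the translation.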

103 citations

01 Jan 2014
TL;DR: This paper presents a novel 3D marker for coarse calibration which can be robustly detected in both the camera image and the LiDAR scan, and which requires only a single pair of camera-LiDAR frames for estimating a large displacement between the sensors.
Abstract: Calibration of a LiDAR sensor with an RGB camera finds its usage in many application fields, from enhancing image classification to environment perception and mapping. This paper presents a pipeline for mutual pose and orientation estimation of the mentioned sensors using a coarse-to-fine approach. Previously published methods use multiple views of a known chessboard marker for computing the calibration parameters, or they are limited to calibration of sensors with only a small mutual displacement. Our approach presents a novel 3D marker for coarse calibration which can be robustly detected in both the camera image and the LiDAR scan. It also requires only a single pair of camera-LiDAR frames for estimating a large displacement between the sensors. A consequent refinement step searches for a more accurate calibration in a small subspace of the calibration parameters. The paper also presents a novel way of evaluating the calibration precision using projection error.
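The projection-error evaluation mentioned at the end can be sketched as follows: project each LiDAR 3-D point into the image using the estimated extrinsics and a pinhole intrinsic model, then average the pixel distance to the detected image correspondences. The function names and parameterization below are assumptions for illustration, not the paper's code:

```python
import math

def project_point(point, rotation, translation, fx, fy, cx, cy):
    """Project a LiDAR 3-D point into the image using extrinsics
    (rotation as a 3x3 nested list, translation as a 3-vector) and
    pinhole intrinsics (focal lengths fx, fy; principal point cx, cy)."""
    x = sum(rotation[0][c] * point[c] for c in range(3)) + translation[0]
    y = sum(rotation[1][c] * point[c] for c in range(3)) + translation[1]
    z = sum(rotation[2][c] * point[c] for c in range(3)) + translation[2]
    return (fx * x / z + cx, fy * y / z + cy)

def mean_projection_error(points, pixels, rotation, translation, fx, fy, cx, cy):
    """Mean Euclidean distance between projected LiDAR points and their
    detected image correspondences; a common calibration-quality metric."""
    errs = []
    for p3d, p2d in zip(points, pixels):
        u, v = project_point(p3d, rotation, translation, fx, fy, cx, cy)
        errs.append(math.hypot(u - p2d[0], v - p2d[1]))
    return sum(errs) / len(errs)
```

A lower mean error over marker correspondences indicates a better extrinsic estimate; the refinement step can minimize exactly this quantity over a small parameter subspace.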

103 citations


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations (82% related)
Pixel: 136.5K papers, 1.5M citations (79% related)
Image segmentation: 79.6K papers, 1.8M citations (78% related)
Image processing: 229.9K papers, 3.5M citations (77% related)
Feature (computer vision): 128.2K papers, 1.7M citations (76% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2022  12
2021  535
2020  771
2019  830
2018  727
2017  691