Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17196 publications have been published within this topic receiving 358181 citations.


Papers
Journal ArticleDOI
TL;DR: Image enhancement and reconstruction methods were applied to electron microscopy in order to overcome the problem of extracting an unambiguous answer by looking at single images of the structure of interest.

134 citations

Journal ArticleDOI
TL;DR: A new technique for directional analysis of linear patterns in images, based on the notion of scale space, is proposed and illustrated through applications to synthetic patterns and to scanning electron microscope images of collagen fibrils in rabbit ligaments.
Abstract: In this paper, a new technique for directional analysis of linear patterns in images is proposed, based on the notion of scale space. A given image is preprocessed by a sequence of filters which are second derivatives of 2-D Gaussian functions with different scales. This gives a set of zero crossing maps (the scale space) from which a stability map is generated. Significant linear patterns are detected from measurements on the stability map. Information regarding orientation of the linear patterns in the image and the area covered by the patterns in specific directions is then computed. The performance of the method is illustrated through applications to synthetic patterns and to scanning electron microscope images of collagen fibrils in rabbit ligaments.
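The pipeline in this abstract can be sketched compactly. The snippet below is a minimal, hypothetical illustration of the general idea rather than the authors' implementation: it uses SciPy's Laplacian-of-Gaussian as the second-derivative filter, counts how often zero crossings persist across scales to form a stability map, and estimates pattern direction from the smoothed image gradient. The function names, the stability threshold, and the gradient-based orientation step are all assumptions.

```python
import numpy as np
from scipy import ndimage

def zero_crossings(response):
    """Boolean map marking sign changes between 4-connected neighbours."""
    sign = response > 0
    zc = np.zeros(response.shape, dtype=bool)
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return zc

def directional_analysis(image, sigmas=(1.0, 2.0, 4.0, 8.0), n_bins=18):
    """Scale-space zero crossings -> stability map -> orientation histogram."""
    image = np.asarray(image, dtype=float)
    stability = np.zeros(image.shape, dtype=int)
    for sigma in sigmas:
        # Second-derivative (Laplacian-of-Gaussian) filtering at this scale.
        response = ndimage.gaussian_laplace(image, sigma)
        stability += zero_crossings(response)
    # Keep pixels whose zero crossings persist across (almost) all scales.
    stable = stability >= len(sigmas) - 1
    # Local pattern direction: perpendicular to the smoothed image gradient.
    smoothed = ndimage.gaussian_filter(image, 2.0)
    gy = ndimage.sobel(smoothed, axis=0)
    gx = ndimage.sobel(smoothed, axis=1)
    theta = (np.arctan2(gy, gx) + np.pi / 2.0) % np.pi   # directions in [0, pi)
    # Bin counts over the stable pixels approximate the area covered by
    # linear patterns in each direction.
    hist, bin_edges = np.histogram(theta[stable], bins=n_bins, range=(0.0, np.pi))
    return stable, hist, bin_edges
```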

134 citations

Patent
02 Mar 2007
TL;DR: In this patent, a mapping between two views of a scene is used to generate a stereoscopic display from a first image and a second image during a surgical procedure, where the position and orientation of an imaging device are at least partially changed to capture the two images from different viewpoints.
Abstract: Methods and apparatuses to generate stereoscopic views for image guided surgical navigation. One embodiment includes transforming a first image of a scene into a second image of the scene according to a mapping between two views of the scene. Another embodiment includes generating a stereoscopic display of the scene using a first image and a second image of a scene during a surgical procedure, where a position and orientation of an imaging device is at least partially changed to capture the first and second images from different viewpoints (821, 823). A further embodiment includes: determining a real time location of a probe relative to a patient during a surgical procedure; determining a pair of virtual viewpoints according to the real time location of the probe (803); and generating a virtual stereoscopic image showing the probe relative to the patient, according to the determined pair of virtual viewpoints.
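The first embodiment (transforming one view into another according to a mapping between two views) can be illustrated with a planar homography, one common form such a mapping can take. The sketch below is a simplified, hypothetical illustration using OpenCV, not the patent's method; the synthetic input frame and the hard-coded mapping H are placeholders for a real captured image and a mapping derived from the tracked imaging-device pose.

```python
import cv2
import numpy as np

def synthesize_second_view(first_image, H):
    """Warp the first image with a 3x3 view-to-view mapping (homography) to
    approximate what the scene looks like from the second viewpoint."""
    h, w = first_image.shape[:2]
    return cv2.warpPerspective(first_image, H, (w, h))

if __name__ == "__main__":
    # Synthetic stand-in for a captured frame.
    left = np.zeros((480, 640, 3), dtype=np.uint8)
    cv2.rectangle(left, (250, 180), (390, 300), (0, 255, 0), thickness=-1)
    # Assumed mapping: a small horizontal offset standing in for the real
    # inter-view homography derived from the imaging-device pose change.
    H = np.array([[1.0, 0.0, 15.0],
                  [0.0, 1.0,  0.0],
                  [0.0, 0.0,  1.0]])
    right = synthesize_second_view(left, H)
    stereo_pair = np.hstack([left, right])   # side-by-side stereoscopic display
    cv2.imwrite("stereo_pair.png", stereo_pair)
```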

134 citations

Patent
16 Dec 2004
TL;DR: In this patent, a method for calibrating a surveying instrument is presented, which takes into account at least one optical property of the camera and the relative orientation of the vertical axis and the tilting axis.
Abstract: A method for calibrating a surveying instrument is disclosed, the surveying instrument comprising a base element (3) and a camera with an image sensor (10), the camera being rotatable about a vertical axis (2) fixed with respect to said base element and about a tilting axis (4), the tilting axis itself being rotated about the vertical axis as the camera rotates about the vertical axis. In the method, data associated with calibration points (P) and with images (P1) of the calibration points captured on the image sensor in different faces are used, the data for each of said calibration points comprising distance data and the data for each of the images of each said calibration point comprising image position data and orientation data. Further, on the basis of the distance data for each of the calibration points and the image position and orientation data for each of the images of the calibration points, the surveying instrument is calibrated, simultaneously taking into account at least one optical property of the camera and at least one of the relative orientation of the vertical axis and the tilting axis and the orientation of the camera relative to one of the base element, the vertical axis and the tilting axis.
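As a rough illustration of this kind of joint calibration, the sketch below fits a deliberately simplified model (focal length, principal point, and a single tilting-axis misalignment angle) to synthetic observations by nonlinear least squares. The projection model, parameter set, and data layout are assumptions for illustration only and do not reproduce the patent's formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def project(params, hz, v, point):
    """Project an instrument-frame point for given horizontal/vertical angles."""
    f, cx, cy, axis_mis = params
    # Vertical-axis rotation, then a tilting-axis rotation whose axis may be
    # misaligned from its nominal direction by the small angle `axis_mis`.
    R = rot_y(axis_mis) @ rot_x(v) @ rot_y(-axis_mis) @ rot_z(hz)
    p = R @ point
    return np.array([f * p[0] / p[2] + cx, f * p[1] / p[2] + cy])

def residuals(params, observations):
    """observations: iterable of (hz, v, point_xyz, measured_xy) tuples."""
    res = []
    for hz, v, point, measured in observations:
        res.extend(project(params, hz, v, np.asarray(point, float)) - measured)
    return res

if __name__ == "__main__":
    # Synthetic calibration data generated from assumed "true" parameters.
    rng = np.random.default_rng(0)
    true_params = np.array([1180.0, 645.0, 478.0, 0.002])
    observations = []
    for _ in range(40):
        hz, v = rng.uniform(-0.4, 0.4, size=2)
        point = np.array([rng.uniform(-2, 2), rng.uniform(-2, 2), rng.uniform(5, 30)])
        observations.append((hz, v, point, project(true_params, hz, v, point)))
    # Joint refinement of the camera and axis parameters from all observations.
    x0 = np.array([1200.0, 640.0, 480.0, 0.0])
    fit = least_squares(residuals, x0, args=(observations,))
    print("estimated parameters:", fit.x)
```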

133 citations

Journal ArticleDOI
M. Menke, M. S. Atkins, K. Buckley
01 Feb 1996
TL;DR: Preliminary results obtained with test data indicate that the methods have the potential to improve the resolution of PET images in cases where significant head motion has occurred, provided that the head position and orientation can be accurately measured.
Abstract: The authors describe two methods to correct for motion artifacts in head images obtained by positron emission tomography (PET). The methods are based on six-dimensional motion data of the head that have to be acquired simultaneously during scanning. The data are supposed to represent the rotational and translational deviations of the head as a function of time, with respect to the initial head position. The first compensation method is a rebinning procedure by which the lines of response are geometrically transformed according to the current values of the motion data, assuming a cylindrical scanner geometry. An approximation of the rebinning transformations by use of large look-up tables, having the potential of on-line event processing, is presented. The second method comprises post-processing of the reconstructed images by unconstrained or constrained deconvolution of the image or image segments with kernels that are generated from the motion data. The authors use motion data that were acquired with a volunteer in supine position, immobilized by a thermoplastic head holder, to demonstrate the effects of the compensation methods. Preliminary results obtained with test data indicate that the methods have the potential to improve the resolution of PET images in cases where significant head motion has occurred, provided that the head position and orientation can be accurately measured.
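The first compensation method (rebinning of the lines of response) amounts to mapping each event back into the initial head frame using the rigid motion measured at that event's time stamp. The sketch below is a hypothetical illustration of that geometric step only, assuming the motion data arrive as three rotation angles and a translation; it ignores scanner geometry, detector discretization, the look-up-table approximation, and the deconvolution-based second method.

```python
import numpy as np

def motion_matrix(rx, ry, rz, tx, ty, tz):
    """4x4 rigid transform for the measured head pose (angles in radians, mm)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = (tx, ty, tz)
    return T

def rebin_lor(p1, p2, pose_at_event_time):
    """Transform both line-of-response endpoints by the inverse of the head pose
    measured at the event time, so the event is expressed relative to the
    initial head position."""
    T_inv = np.linalg.inv(motion_matrix(*pose_at_event_time))
    p1h = np.append(np.asarray(p1, float), 1.0)
    p2h = np.append(np.asarray(p2, float), 1.0)
    return (T_inv @ p1h)[:3], (T_inv @ p2h)[:3]
```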

133 citations


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations, 82% related
Pixel: 136.5K papers, 1.5M citations, 79% related
Image segmentation: 79.6K papers, 1.8M citations, 78% related
Image processing: 229.9K papers, 3.5M citations, 77% related
Feature (computer vision): 128.2K papers, 1.7M citations, 76% related
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2022  12
2021  535
2020  771
2019  830
2018  727
2017  691