Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over its lifetime, 17,196 publications have been published on the topic, receiving 358,181 citations.


Papers
Patent
22 Jun 2007
TL;DR: In this patent, a symbol in a boundary area around the viewing area indicates the position of an out-of-view tool, while the tool's distance from the viewing area is indicated by the size, color, brightness, or blinking or oscillation frequency of the symbol.
Abstract: An endoscope captures images of a surgical site for display in a viewing area of a monitor. When a tool is outside the viewing area, a GUI indicates the position of the tool by positioning a symbol in a boundary area around the viewing area so as to indicate the tool position. The distance of the out-of-view tool from the viewing area may be indicated by the size, color, brightness, or blinking or oscillation frequency of the symbol. A distance number may also be displayed on the symbol. The orientation of the shaft or end effector of the tool may be indicated by an orientation indicator superimposed over the symbol, or by the orientation of the symbol itself. When the tool is inside the viewing area, but occluded by an object, the GUI superimposes a ghost tool at its current position and orientation over the occluding object.

248 citations
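
The boundary-indicator idea above lends itself to a simple geometric sketch. The following is a minimal illustration, not the patent's implementation: the function name, view dimensions, and the size-versus-distance mapping are all assumptions. It clamps the tool's projected position to the viewing-area rectangle to place the symbol, and shrinks the symbol with distance, one of the cues the patent lists.

```python
# A minimal sketch (not from the patent) of placing an out-of-view tool
# symbol on the boundary of a rectangular viewing area. All names and the
# size-vs-distance mapping are illustrative assumptions.

def boundary_symbol(tool_xy, view_w, view_h, max_size=24.0, min_size=6.0):
    """Clamp the tool's projected position to the view rectangle and
    scale the symbol inversely with its distance outside the view."""
    x, y = tool_xy
    # Nearest point on the viewing-area boundary to the tool position.
    cx = min(max(x, 0.0), view_w)
    cy = min(max(y, 0.0), view_h)
    # Distance of the tool from the viewing area; zero means it is in view.
    dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
    # Example cue: the symbol shrinks as the tool moves farther away.
    size = max(min_size, max_size / (1.0 + 0.01 * dist))
    return (cx, cy), size, dist

pos, size, dist = boundary_symbol((-120.0, 300.0), view_w=640, view_h=480)
print(f"symbol at {pos}, size {size:.1f}px, tool {dist:.0f}px out of view")
```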

Proceedings ArticleDOI
R.K. Lenz, R. Tsai
01 Mar 1987
TL;DR: This paper describes techniques for calibrating certain intrinsic camera parameters for machine vision and reports accuracy and reproducibility of the calibrated parameters, as well as the improvement in actual 3D measurement due to center calibration.
Abstract: This paper describes techniques for calibrating certain intrinsic camera parameters for machine vision. The parameters to be calibrated are the horizontal scale factor, i.e. the factor that relates the sensor element spacing of a discrete array camera to the picture element spacing after sampling by the image acquisition circuitry, and the image center, i.e. the intersection of the optical axis with the camera sensor. The scale factor calibration uses a 1D-FFT and is accurate and efficient. It also permits the use of only one coplanar set of calibration points for general camera calibration. Three groups of techniques for center calibration are presented: Group I requires using a laser and a four-degree of freedom adjustment of its orientation, but is simplest in concept, and is accurate and reproducible. Group II is simple to perform, but is less accurate than the other two. The most general Group III is accurate and efficient, but requires accurate image feature extraction of calibration points with known 3D coordinates. A feasible setup is described. Results of real experiments are presented and compared with theoretical predictions. Accuracy and reproducibility of the calibrated parameters are reported, as well as the improvement in actual 3D measurement due to center calibration.

248 citations
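
For readers unfamiliar with the parameters being calibrated, the sketch below shows where the horizontal scale factor and the image center sit in a Tsai-style pinhole projection. All numeric values (focal length, sensor element spacing, scale factor, center) are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of the two calibrated parameters in a Tsai-style
# pinhole model: the horizontal scale factor s_x and the image center
# (c_x, c_y). Every numeric value here is made up for illustration.

def project(point_3d, f=8.0, s_x=1.05, center=(256.0, 256.0),
            dx=0.01, dy=0.01):
    """Project a 3D camera-frame point (mm) to pixel coordinates.
    dx, dy: sensor element spacing in mm; s_x corrects the horizontal
    spacing for resampling by the image-acquisition circuitry."""
    X, Y, Z = point_3d
    x_mm = f * X / Z                 # ideal image-plane coordinates (mm)
    y_mm = f * Y / Z
    u = s_x * x_mm / dx + center[0]  # horizontal scale factor applies here
    v = y_mm / dy + center[1]
    return u, v

print(project((10.0, -5.0, 500.0)))  # -> (272.8, 248.0)
```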

Proceedings ArticleDOI
09 Jan 1979
TL;DR: Photometric stereo, as introduced in this paper, varies the direction of the incident illumination between successive views while holding the viewing direction constant, which provides enough information to determine surface orientation at each picture element.
Abstract: This paper introduces a novel technique called photometric stereo. The idea of photometric stereo is to vary the direction of the incident illumination between successive views while holding the viewing direction constant. This provides enough information to determine surface orientation at each picture element. Traditional stereo techniques determine range by relating two images of an object viewed from different directions. If the correspondence between picture elements is known, then distance to the object can be calculated by triangulation. Unfortunately, it is difficult to determine this correspondence. In photometric stereo, the imaging geometry does not change. Therefore, the correspondence between picture elements is known a priori. This stereo technique is photometric because it uses the intensity values recorded at a single picture element, in successive views, rather than the relative positions of features.

248 citations
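
The per-pixel recovery the abstract describes reduces, in the classic Lambertian formulation, to a small linear solve: stacking three known light directions into a matrix L and the three measured intensities into a vector I gives L g = I, where g is the albedo-scaled surface normal. A minimal sketch with made-up light directions and intensities:

```python
# Classic Lambertian photometric stereo at one picture element: with
# three known, non-coplanar light directions and a fixed viewpoint, the
# three intensities determine the surface normal. The light directions
# and intensities below are made-up example values.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],      # one row per illumination direction
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
I = np.array([0.9, 0.8, 0.4])       # intensities at one picture element

g = np.linalg.solve(L, I)           # g = albedo * unit normal
albedo = np.linalg.norm(g)
normal = g / albedo
print("albedo:", albedo, "normal:", normal)
```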

Patent
05 Apr 2002
TL;DR: In this paper, a method and apparatus are provided for superimposing the position and orientation of a diagnostic and/or treatment device on a previously acquired three-dimensional anatomic image such as a CT or MRI image, so as to enable navigation of the diagnostic and/or treatment device to a desired location.
Abstract: A method and apparatus are provided for superimposing the position and orientation of a diagnostic and/or treatment device on a previously acquired three-dimensional anatomic image such as a CT or MRI image, so as to enable navigation of the diagnostic and/or treatment device to a desired location. A plurality of previously acquired three-dimensional images may be utilized to form a “movie” of the beating heart which can be synchronized with a patient's EKG in the operating room, and the position of the diagnostic and/or treatment device can be superimposed on the synchronized “movie” of the beating heart. An electrophysiological map of the heart can also be superimposed on the previously acquired three-dimensional anatomic image and/or the “movie” of the beating heart.

247 citations
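
One way to picture the synchronization step is as ECG gating: each pre-acquired 3D frame corresponds to a phase of the cardiac cycle, and the display picks the frame matching the current phase so the device position is drawn over an anatomically consistent image. The sketch below is an illustrative assumption of how such gating could work, not the patent's method; all names and numbers are hypothetical.

```python
# A minimal sketch (not from the patent) of ECG-gated frame selection:
# given N previously acquired 3D frames spanning one cardiac cycle, pick
# the frame matching the current phase. Names and values are hypothetical.

def frame_for_phase(t_since_r_wave, rr_interval, n_frames):
    """Map time since the last R-wave to an index into the 3D 'movie'."""
    phase = (t_since_r_wave % rr_interval) / rr_interval  # 0.0 .. 1.0
    return int(phase * n_frames) % n_frames

# e.g. 0.3 s after the R-wave at 75 bpm (0.8 s cycle), 20 frames:
print(frame_for_phase(0.3, rr_interval=0.8, n_frames=20))  # -> frame 7
```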

Proceedings ArticleDOI
01 Jun 2021
TL;DR: CenterPoint, introduced in this paper, represents, detects, and tracks 3D objects as points and achieves state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA.
Abstract: Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single model methods by a large margin and ranks first among all Lidar-only submissions. The code and pretrained models are available at https://github.com/tianweiy/CenterPoint.

246 citations
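
The claim that 3D object tracking "simplifies to greedy closest-point matching" can be made concrete with a short sketch. The following is an illustrative reimplementation of greedy nearest-center association with a distance gate, not code from the linked repository; the gate value and test points are made up.

```python
# Greedy closest-point matching between detected object centers in
# consecutive frames, as the CenterPoint abstract describes. This is an
# illustrative reimplementation, not the repository's code.
import numpy as np

def greedy_match(prev_centers, curr_centers, max_dist=2.0):
    """Return (curr_idx, prev_idx) pairs, matched greedily by distance."""
    if len(prev_centers) == 0 or len(curr_centers) == 0:
        return []
    d = np.linalg.norm(curr_centers[:, None, :] - prev_centers[None, :, :],
                       axis=-1)                     # pairwise distances
    matches, used_prev, used_curr = [], set(), set()
    for idx in np.argsort(d, axis=None):            # cheapest pairs first
        i, j = np.unravel_index(idx, d.shape)
        if i in used_curr or j in used_prev or d[i, j] > max_dist:
            continue
        matches.append((int(i), int(j)))
        used_curr.add(i); used_prev.add(j)
    return matches

prev = np.array([[0.0, 0.0], [5.0, 5.0]])
curr = np.array([[0.4, 0.1], [5.2, 4.9], [9.0, 9.0]])  # third starts a new track
print(greedy_match(prev, curr))  # -> [(1, 1), (0, 0)]
```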


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations (82% related)
Pixel: 136.5K papers, 1.5M citations (79% related)
Image segmentation: 79.6K papers, 1.8M citations (78% related)
Image processing: 229.9K papers, 3.5M citations (77% related)
Feature (computer vision): 128.2K papers, 1.7M citations (76% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    12
2021    535
2020    771
2019    830
2018    727
2017    691