Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17196 publications have been published within this topic receiving 358181 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, a noncontact vision sensor for simultaneous measurement of structural displacements at multiple points using one camera is developed based on two advanced template matching techniques: the upsampled cross correlation (UCC) and the orientation code matching (OCM).
Abstract: A novel noncontact vision sensor for simultaneous measurement of structural displacements at multiple points using one camera is developed based on two advanced template matching techniques: the upsampled cross correlation (UCC) and the orientation code matching (OCM). While existing studies on vision sensors are mostly focused on time-domain performance evaluation, this study investigates the performance in both the time and frequency domains through a shaking table test of a three-story frame structure, in which the displacements at all floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface, such as bolts and nuts. Excellent agreement is observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. The results of structural modal analysis based on the measurements by the vision sensor and by reference accelerometers also agree well. Moreover, the identified modal parameters are used to update the finite element model of the structure, demonstrating the potential of the vision sensor for structural health monitoring applications. This study further examines the robustness of the proposed vision sensor against adverse environmental conditions such as dim light, background image disturbance, and partial template occlusion, which is important for future field implementation. Significant advantages of the proposed vision sensor include its low cost (a single camera remotely measures structural displacements at multiple points without installing artificial targets) and its flexibility to extract structural displacements at any point from a single measurement.
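As a rough illustration of the subpixel template-matching idea, the sketch below locates a target template in a video frame with normalized cross-correlation and refines the peak by parabolic interpolation. It is a simplified stand-in, not the paper's UCC/OCM implementation (UCC upsamples the correlation surface in the frequency domain and OCM matches gradient-orientation codes); the function and variable names are illustrative assumptions.

```python
import cv2
import numpy as np

def track_displacement(frame, template, prev_xy):
    """Locate `template` in `frame` (grayscale uint8 or float32 arrays) and
    return its subpixel displacement from `prev_xy`, using normalized
    cross-correlation plus a parabolic peak fit. A simplified stand-in for
    the paper's DFT-upsampled cross correlation (UCC)."""
    # Normalized cross-correlation surface; the peak gives the integer match location
    ncc = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(ncc)

    def parabolic_offset(c_minus, c_mid, c_plus):
        # Vertex of the parabola through three neighboring correlation values
        denom = c_minus - 2.0 * c_mid + c_plus
        return 0.0 if abs(denom) < 1e-12 else 0.5 * (c_minus - c_plus) / denom

    # Refine the peak to subpixel accuracy (skip refinement at the border)
    dx = parabolic_offset(ncc[y, x - 1], ncc[y, x], ncc[y, x + 1]) if 0 < x < ncc.shape[1] - 1 else 0.0
    dy = parabolic_offset(ncc[y - 1, x], ncc[y, x], ncc[y + 1, x]) if 0 < y < ncc.shape[0] - 1 else 0.0

    new_xy = (x + dx, y + dy)
    return new_xy[0] - prev_xy[0], new_xy[1] - prev_xy[1], new_xy
```

In a monitoring setting, the returned pixel displacements would then be converted to physical units via a scale factor obtained from the camera geometry, and the new peak location fed back as `prev_xy` for the next frame.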

180 citations

Patent
22 Sep 1975
TL;DR: In this patent, a visual system for determining position and/or orientation in three-dimensional space is presented, for directing or instructing an industrial robot to perform manipulative acts, together with apparatus employing the visual system.
Abstract: Visual system for determining position in space and/or orientation in three-dimensional space for purposes, for example, of directing or instructing an industrial robot to perform manipulative acts and apparatus employing the visual system. The system includes a portable object arbitrarily movable in three-dimensional space and possessing the discernible properties of position in space and/or orientation in space. One or more sensors extract visual information or image data from the portable object and convert the same to an electric signal or signals. A computer is connected to receive the signal or signals which are analyzed and, in the case of the industrial robot, the information obtained is used to prepare operating instructions.

180 citations

Journal ArticleDOI
TL;DR: An algorithm for pose estimation based on the volume measurement of tetrahedra composed of feature-point triplets extracted from an arbitrary quadrangular target and the lens center of the vision system is proposed.
Abstract: Pose estimation is an important operation for many vision tasks. In this paper, the authors propose an algorithm for pose estimation based on the volume measurement of tetrahedra composed of feature-point triplets extracted from an arbitrary quadrangular target and the lens center of the vision system. The inputs to this algorithm are the six distances joining all feature pairs and the image coordinates of the quadrangular target. The outputs of this algorithm are the effective focal length of the vision system, the interior orientation parameters of the target, the exterior orientation parameters of the camera with respect to an arbitrary coordinate system if the target coordinates are known in this frame, and the final pose of the camera. The authors have also developed a shape restoration technique which is applied prior to pose recovery in order to reduce the effects of inaccuracies caused by image projection. An evaluation of the method has shown that this pose estimation technique is accurate and robust. Because it is based on a unique and closed-form solution, its speed makes it a potential candidate for solving a variety of landmark-based tracking problems.
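For context, the volume of one such tetrahedron (lens center plus a feature-point triplet) follows from the scalar triple product. The sketch below is illustrative only and does not reproduce the paper's closed-form pose solution; the function name and arguments are assumptions.

```python
import numpy as np

def tetrahedron_volume(lens_center, p1, p2, p3):
    """Volume of the tetrahedron spanned by the camera lens center and a
    triplet of 3-D feature points, via the scalar triple product.
    Illustrative only; the paper builds its pose solution on such volumes."""
    o = np.asarray(lens_center, dtype=float)
    a = np.asarray(p1, dtype=float) - o
    b = np.asarray(p2, dtype=float) - o
    c = np.asarray(p3, dtype=float) - o
    # |a . (b x c)| / 6 is the unsigned volume of the tetrahedron
    return abs(np.dot(a, np.cross(b, c))) / 6.0
```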

179 citations

Proceedings ArticleDOI
21 Jun 1994
TL;DR: This paper proposes a new method extending the classical correlation method to estimate accurately both the disparity and its derivatives directly from the image data, and relates those derivatives to differential properties of the surface such as orientation and curvatures.
Abstract: We are considering the problem of recovering the three-dimensional geometry of a scene from binocular stereo disparity. Once a dense disparity map has been computed from a stereo pair of images, one often needs to calculate some local differential properties of the corresponding 3-D surface such as orientation or curvatures. The usual approach is to build a 3-D reconstruction of the surface(s) from which all shape properties will then be derived without ever going back to the original images. In this paper, we depart from this paradigm and propose to use the images directly to compute the shape properties. We thus propose a new method extending the classical correlation method to estimate accurately both the disparity and its derivatives directly from the image data. We then relate those derivatives to differential properties of the surface such as orientation and curvatures.
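A minimal sketch of the standard pipeline the paper departs from: compute a dense disparity map with block matching, convert it to depth under a pinhole model, and estimate surface orientation from finite-difference depth derivatives. This uses OpenCV's semi-global matching rather than the paper's extended correlation method, and the first-order normal estimate ignores perspective correction terms; parameter names are illustrative assumptions.

```python
import cv2
import numpy as np

def surface_normals_from_stereo(left, right, focal_px, baseline_m):
    """Disparity and approximate surface normals from a rectified grayscale
    stereo pair; a generic baseline, not the paper's correlation extension."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Depth from disparity (pinhole model); invalid pixels are masked out
    valid = disparity > 0
    depth = np.where(valid, focal_px * baseline_m / np.maximum(disparity, 1e-6), np.nan)

    # Finite-difference depth derivatives; the paper instead estimates the
    # disparity derivatives directly from the image correlation
    dz_dy, dz_dx = np.gradient(depth)

    # First-order surface normal estimate in camera coordinates
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return disparity, normals
```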

178 citations

Journal ArticleDOI
TL;DR: The results are surprising in that they show that classification can be done with less than one photon per pixel in the limiting resolution shell, assuming Poisson-type photon noise in the image.
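To make the photon-counting regime concrete, the sketch below simulates a Poisson-noised observation of an image at a specified mean photon count per pixel, which can be well below 1 so that most pixels record zero counts. It is a generic noise simulation under an assumed intensity scaling, not the paper's classification method; names are illustrative.

```python
import numpy as np

def simulate_photon_limited(image, mean_photons_per_pixel, rng=None):
    """Simulate a photon-limited observation of a non-negative `image` with
    Poisson shot noise, scaled so the expected count per pixel averages
    `mean_photons_per_pixel` (e.g. 0.5)."""
    rng = np.random.default_rng() if rng is None else rng
    image = np.asarray(image, dtype=float)
    # Rescale intensities to the requested mean photon budget per pixel
    rate = image / image.mean() * mean_photons_per_pixel
    return rng.poisson(rate)
```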

178 citations


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations, 82% related
Pixel: 136.5K papers, 1.5M citations, 79% related
Image segmentation: 79.6K papers, 1.8M citations, 78% related
Image processing: 229.9K papers, 3.5M citations, 77% related
Feature (computer vision): 128.2K papers, 1.7M citations, 76% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    12
2021    535
2020    771
2019    830
2018    727
2017    691