
Showing papers on "Perspective (geometry)" published in 2012


Journal ArticleDOI
TL;DR: This paper introduces an automatic algorithm for rectifying images containing textures of repeated elements lying on an unknown plane, by maximizing image self-similarity over the space of homography transformations.
Abstract: Many photographs are taken in perspective. Techniques for rectifying the resulting perspective distortions typically rely on the existence of parallel lines in the scene. In scenarios where such parallel lines are hard to extract automatically or annotate manually, the unwarping process remains a challenge. In this paper, we introduce an automatic algorithm for rectifying images containing textures of repeated elements lying on an unknown plane. We unwarp the input by maximizing image self-similarity over the space of homography transformations. We map a set of detected regional descriptors to surfaces in a transformation space, compute the intersection points among triplets of such surfaces, and then use consensus among the projected intersection points to extract the correcting transform. Our algorithm is global, robust, and does not require explicit or accurate detection of similar elements. We evaluate our method on a variety of challenging textures and images. The rectified outputs are directly useful for various tasks including texture synthesis and image completion. © 2012 Wiley Periodicals, Inc.
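The core idea of maximizing self-similarity over the space of homographies can be pictured with a brute-force sketch. The two-parameter homography family, the shift-correlation score, and all function names below are illustrative assumptions; this is not the paper's descriptor-voting pipeline.

```python
# Hypothetical sketch: grid search over a two-parameter family of perspective
# homographies, scoring each candidate by a crude self-similarity measure
# (correlation between the warped image and a shifted copy of itself).
import numpy as np
import cv2

def perspective_homography(px, py, w, h):
    """Homography whose only non-identity part is the projective row (px, py)."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [px / w, py / h, 1.0]])

def self_similarity(img, shift=16):
    """Normalized correlation between the image and a diagonally shifted copy."""
    a = img[:-shift, :-shift].astype(np.float32)
    b = img[shift:, shift:].astype(np.float32)
    a -= a.mean(); b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
    return float((a * b).sum() / denom)

def rectify_by_search(img, steps=21, max_p=0.4):
    """Return the warp (and warped image) with the highest self-similarity score."""
    h, w = img.shape[:2]
    best, best_H = -np.inf, np.eye(3)
    for px in np.linspace(-max_p, max_p, steps):
        for py in np.linspace(-max_p, max_p, steps):
            H = perspective_homography(px, py, w, h)
            warped = cv2.warpPerspective(img, H, (w, h))
            score = self_similarity(warped)
            if score > best:
                best, best_H = score, H
    return cv2.warpPerspective(img, best_H, (w, h)), best_H
```

The paper replaces this exhaustive scoring with descriptor surfaces and consensus voting in transformation space, which avoids warping the image for every candidate.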

21 citations


Patent
07 Nov 2012
TL;DR: In this patent, a single-image self-calibration method for the mirror pose of a catadioptric omnidirectional camera is proposed: two candidate poses are computed from an ellipse fitted to image points on the outer edge of the mirror in the captured image, and each candidate is used to generate a predicted image of the mirror edge as seen by the perspective camera.
Abstract: The invention discloses a single-image self-calibration method for the mirror pose of a catadioptric omnidirectional camera, comprising the following steps: first, two candidate poses are computed from an ellipse fitted to image points on the outer edge of the mirror in a captured image; two groups of predicted images of the mirror edge, as seen by the perspective camera, are then generated from the two candidate poses; and the two groups of predicted images are compared with the actual lens image, the candidate pose whose predicted image differs least being taken as the actual mirror pose. The distance between the camera projection center and the actual lens edge required during calibration is obtained with an optimized search method. The method overcomes deficiencies of existing calibration methods: given the mirror parameters and the perspective camera parameters, the rotation matrix and translation vector between the reflective mirror surface and the perspective camera can be estimated from a single self-captured image of the catadioptric omnidirectional camera, without any other calibration objects. The calibration method is robust to interference, achieves high precision, and is simple to operate.
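A minimal sketch of the selection step is given below, assuming the two candidate poses are already available (e.g. recovered elsewhere from the ellipse fitted to the mirror's outer rim). The ellipse-fitting helper, the point-to-contour error, and all names are illustrative assumptions, not the patent's exact procedure.

```python
# Hypothetical sketch: keep the candidate mirror pose whose predicted rim
# projection best matches the observed edge points.
import numpy as np
import cv2

def fit_rim_ellipse(edge_points):
    """Fit an ellipse to 2D points sampled on the mirror's outer edge."""
    pts = np.asarray(edge_points, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.fitEllipse(pts)          # ((cx, cy), (major, minor), angle)

def project_rim(R, t, K, radius, n=360):
    """Project a circle of the given radius (the mirror rim) under pose (R, t)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    rim = np.stack([radius * np.cos(theta),
                    radius * np.sin(theta),
                    np.zeros(n)], axis=1)              # rim in the mirror frame
    cam = rim @ R.T + t                                # transform to camera frame
    proj = cam @ K.T
    return proj[:, :2] / proj[:, 2:3]                  # pixel coordinates

def select_pose(candidates, observed_edge, K, radius):
    """Pick the candidate (R, t) whose predicted rim lies closest to the observed edge."""
    obs = np.asarray(observed_edge, dtype=np.float64)
    best, best_err = None, np.inf
    for R, t in candidates:
        pred = project_rim(R, t, K, radius)
        # mean distance from each predicted rim point to its nearest observed edge point
        d = np.linalg.norm(pred[:, None, :] - obs[None, :, :], axis=2).min(axis=1)
        if d.mean() < best_err:
            best, best_err = (R, t), d.mean()
    return best, best_err
```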

7 citations


Dissertation
19 Jan 2012
TL;DR: In this thesis, geometric feature detection is placed in the a contrario statistical framework to obtain a combined, parameter-free detector of line segments and circular/elliptical arcs that controls the number of false detections.
Abstract: This thesis deals with different aspects of the detection, fitting, and identification of elliptical features in digital images. We place geometric feature detection in the a contrario statistical framework in order to obtain a combined, parameter-free detector of line segments and circular/elliptical arcs that controls the number of false detections. To improve the accuracy of the detected features, especially for occluded circles/ellipses, a simple closed-form technique for conic fitting is introduced, which efficiently merges the algebraic distance with the gradient orientation. Identifying a configuration of coplanar circles in images through a discriminant signature usually requires the Euclidean reconstruction of the plane containing the circles. We propose an efficient signature computation method that bypasses the Euclidean reconstruction; it relies exclusively on invariant properties of the projective plane and is thus itself invariant under perspective.
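One common way to merge the algebraic distance with gradient orientation is to stack both sets of linear constraints on the conic coefficients and solve in closed form via SVD. The sketch below illustrates that idea under assumed weighting and normalization choices; the thesis's exact closed form may differ.

```python
# Hypothetical sketch: linear least-squares conic fit combining
#   (1) algebraic-distance rows  Q(x_i, y_i) = 0, and
#   (2) orientation rows forcing the conic gradient at each point to be
#       parallel to the observed image gradient.
import numpy as np

def fit_conic_with_gradients(pts, grads, w_grad=1.0):
    """
    pts   : (N, 2) edge-point coordinates (x_i, y_i)
    grads : (N, 2) image gradients (gx_i, gy_i) at those points
    Returns conic coefficients (a, b, c, d, e, f) with
    Q(x, y) = a x^2 + b x y + c y^2 + d x + e y + f ~ 0.
    """
    x, y = pts[:, 0], pts[:, 1]
    gx, gy = grads[:, 0], grads[:, 1]

    # Algebraic distance rows: Q(x_i, y_i) = 0
    A_alg = np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)

    # Orientation rows: cross(grad Q, g) = 0, with
    # grad Q = (2a x + b y + d, b x + 2c y + e)
    A_ori = np.stack([2 * x * gy,            # coefficient of a
                      y * gy - x * gx,       # coefficient of b
                      -2 * y * gx,           # coefficient of c
                      gy,                    # coefficient of d
                      -gx,                   # coefficient of e
                      np.zeros_like(x)],     # coefficient of f
                     axis=1)

    A = np.vstack([A_alg, w_grad * A_ori])
    # Closed-form solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[-1]
```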

6 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the integrability of Clifford point-circle configurations is a consequence of a conformal generalization of the classical Desargues theorem of projective geometry.
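For reference, the classical Desargues theorem being generalized states that two triangles are perspective from a point if and only if they are perspective from a line:

```latex
% Classical Desargues theorem (projective plane), for triangles ABC and A'B'C':
\[
AA' \cap BB' \cap CC' = \{O\}
\quad\Longleftrightarrow\quad
AB \cap A'B',\; BC \cap B'C',\; CA \cap C'A' \ \text{are collinear.}
\]
```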

5 citations


Proceedings Article
01 Jan 2012
TL;DR: This paper proposes the "Bayesian perspective-plane (BPP)" algorithm, which handles generalized constraints rather than type-specific ones to determine the plane used for localization, and demonstrates that the algorithm is accurate and general for object localization.
Abstract: The "perspective-plane" problem proposed in this paper is similar to the "perspective-n-point (PnP)" and "perspective-n-line (PnL)" problems, yet has broader applications and potential, since planar scenes are more widely available in practice than control points or lines. We address this problem in the Bayesian framework and propose the "Bayesian perspective-plane (BPP)" algorithm, which can deal with generalized constraints rather than type-specific ones to determine the plane for localization. Computation of the plane normal is formulated as a maximum-likelihood problem and solved using the Maximum Likelihood Searching Model (MLS-M). Two searching modes, 2D and 1D, are presented. With the computed normal, the plane distance and the position of the object or camera can be computed readily. The BPP algorithm has been tested on real image data using different types of scene constraints, and the 2D and 1D searching modes were illustrated for plane normal computation. The results demonstrate that the algorithm is accurate and general for object localization.
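The 2D searching mode can be pictured as a grid search over the two angles that parameterize the unit plane normal. The sketch below is only a structural illustration with a caller-supplied likelihood, not the paper's MLS-M formulation; presumably the 1D mode fixes one of the two angles from a known constraint and sweeps only the other.

```python
# Hypothetical sketch of a 2D search for the maximum-likelihood plane normal.
import numpy as np

def normal_from_angles(theta, phi):
    """Unit normal from polar angle theta and azimuth phi."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def search_plane_normal(log_likelihood, n_theta=90, n_phi=180):
    """
    log_likelihood : callable mapping a unit normal (3,) to a scalar score
                     built from whatever scene constraints are available.
    Returns the best normal found on the grid and its score.
    """
    best_n, best_score = None, -np.inf
    for theta in np.linspace(0.0, np.pi / 2, n_theta):            # upper hemisphere
        for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
            n = normal_from_angles(theta, phi)
            score = log_likelihood(n)
            if score > best_score:
                best_n, best_score = n, score
    return best_n, best_score
```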

1 citation


Proceedings ArticleDOI
01 Nov 2012
TL;DR: A ground-to-image plane mapping technique is proposed to quickly locate a detected object when its real-world position is known, enabling a rapid search for the object in a moving scene and fast object identification during sensor acquisition.
Abstract: Autonomous vehicles are equipped with optical sensors and micro-processing units to perform intelligent visual analysis of their surroundings. Because the vehicle moves at high speed, the captured information has to be processed quickly to avoid possible collisions. In this paper, a ground-to-image plane mapping technique is proposed to quickly locate a detected object when its real-world position is known. A three-dimensional (3D) world coordinate is mathematically mapped to the image plane using the pinhole camera model. Several 3D perspective parameters, such as the vehicle's steering angle and velocity and the sensor's height and tilt angle, are incorporated into the ground-plane measurement. The optical sensor's intrinsic parameters (focal length, principal point, pixel height and width) are also included in the derivation of the mathematical model. This ground-to-image plane mapping enables a rapid search for an object in a moving scene, achieving fast object identification during sensor acquisition. Experiments on lane-mark detection achieved 93.82% correct mapping while using approximately 20% less processing time.
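A minimal sketch of such a ground-to-image mapping under a pinhole model is shown below, using only the sensor's mounting height and tilt angle plus the intrinsics; the parameter names and the simplified geometry (no steering-angle or velocity terms) are assumptions for illustration, not the paper's full derivation.

```python
# Hypothetical sketch: project points on the road plane into the image with a
# pinhole camera mounted `height` metres above the ground and pitched down by
# `tilt` radians. Vehicle frame: x forward along the road, y to the left, z up.
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def ground_to_image(ground_xy, height, tilt, K):
    """
    ground_xy : (N, 2) ground-plane points in metres, vehicle frame (x forward, y left).
    height    : camera height above the ground, metres.
    tilt      : downward pitch of the camera, radians.
    Returns (N, 2) pixel coordinates.
    """
    x, y = ground_xy[:, 0], ground_xy[:, 1]
    # Camera frame: X_c right, Y_c down, Z_c forward along the optical axis.
    Xc = -y
    Yc = height * np.cos(tilt) - x * np.sin(tilt)
    Zc = x * np.cos(tilt) + height * np.sin(tilt)
    pts = np.stack([Xc, Yc, Zc], axis=1)
    proj = pts @ K.T
    return proj[:, :2] / proj[:, 2:3]

# Example: a lane-mark point 20 m ahead and 1.5 m to the left, camera 1.2 m high,
# tilted down 5 degrees, with assumed intrinsics.
K = intrinsic_matrix(800.0, 800.0, 640.0, 360.0)
print(ground_to_image(np.array([[20.0, 1.5]]), 1.2, np.deg2rad(5.0), K))
```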

1 citation