Proceedings ArticleDOI

A method for 3D reconstruction of piecewise planar objects from single panoramic images

16 Jun 2000, pp. 119-126
TL;DR: The approach is based on user-provided coplanarity, perpendicularity and parallelism constraints and results are provided for the case of a parabolic mirror-based omnidirectional sensor.
Abstract: We present an approach for 3D reconstruction of objects from a single panoramic image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. The method is described in detail for the case of a parabolic mirror-based omnidirectional sensor and results are provided.

Summary (3 min read)

1 Introduction

  • Methods for 3D reconstruction from images abound in the literature.
  • On one hand, research is directed towards completely automatic systems; these are relatively difficult to realize and it is not clear yet if they are ready for use by a non-expert.
  • The guideline of the work described here is to provide an intermediate solution, reconstruction from a single image, that needs relatively little user interaction.
  • Most of the existing methods were developed for the use of a pinhole camera (with the exception of [12] where mosaics are used).
  • Coplanarity constraints are used to complete the reconstruction, via simultaneous reconstruction of points and planes.

2 Camera Model

  • The authors use an omnidirectional camera formed by the combination of a parabolic mirror and an orthographic camera whose viewing direction is parallel to the mirror’s axis [10].
  • Geometrically speaking, the projection center of the orthographic camera coincides with the infinite one among the two focal points of the paraboloid.
  • Given the image of a point and a small amount of calibration information described below, it is possible to determine the 3D direction of the line joining the original 3D point and the finite focal point of the paraboloid.
  • These formulas are well known [10, 14], but presented here for the sake of completeness.

2.1 Representation of Mirror and Camera

  • The mirror is a rotationally symmetric paraboloid.
  • Without loss of generality, the authors may represent the paraboloid in usual quadric notation by a 4×4 symmetric matrix Q, defined up to scale, which accounts for the use of homogeneous coordinates.
  • The mirror’s axis is the Z-axis and the finite focal point F is the coordinate origin, i.e. F = (0, 0, 0)^T.
  • The parameter s is the magnification factor of the orthographic projection.

2.2 Projection of a 3D Point

  • Let P = (X, Y, Z)^T be a 3D point; its projection can be computed as follows.
  • Let L be the line joining P and the mirror’s finite focal point F.

2.3 Calibration

  • The above projection equations show that the mirror’s shape parameter p and the magnification s of the orthographic projection can be grouped together in a parameter a = s p describing the combined system.
  • These parameters have a simple geometrical meaning: consider the horizontal circle on the paraboloid at the height of the focal point F.
  • If the mirror’s top border does not lie at the height of the focal point F, then the authors cannot directly determine the circle C in the image.
  • The calibration procedure has to be done only once for a fixed configuration.
  • Another, more flexible calibration method, is described in [2].

2.4 Backprojection

  • The most important feature of their mirror-camera system is that from a panoramic image, the authors may create correct perspective images of the scene, as if they had been observed by a pinhole camera with optical center at F.
  • Given the calibration parameters, the projection rays are determined in Euclidean space, which is useful for obtaining metric 3D reconstructions as described later.

3 Input

  • To prepare the description of the 3D reconstruction method, the authors first explain the (user-provided) input.
  • The basic primitives for their method are interest points (see figure 2 (b)).
  • Lines are defined by two or more interest points and they are grouped together into sets of mutually parallel lines (see figures 2 (c) and (d)).
  • The input data are rather easy to provide interactively, which typically takes 10-15 minutes per image.
  • By ideal points and ideal lines, the authors denote points and lines at infinity, respectively.

4 Basic Idea for 3D Reconstruction

  • Sets of coplanar 3D points define polygons onto which texture can be mapped for visualization purposes.
  • The authors assume that the image has been calibrated as described in 2.3.
  • Planes with known normal direction and which contain an already reconstructed point (the incidence being known from the input) are then completely defined.
  • This alternation scheme allows the reconstruction of objects whose parts are sufficiently “interconnected”, i.e. the points on the object have to be linked together via coplanarity or other geometrical constraints (see the sketch after this list).
  • This discussion shows that it is possible to obtain a 3D reconstruction from one image and constraints of the types considered.
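The alternation between fixing planes and backprojecting points can be sketched roughly as follows; the data structures, function names and stopping criterion are hypothetical and only illustrate how reconstruction propagates through the coplanarity links (section 6 then performs this reconstruction simultaneously, in a least squares manner).

```python
import numpy as np

def alternate_reconstruction(rays, planes, point_to_planes, seed_points):
    """Hypothetical sketch of the alternation: planes with known normals are fixed as soon
    as they contain a reconstructed point, and points are reconstructed as soon as one of
    their planes is fixed, by intersecting their backprojection ray with that plane.
    rays:            point_id -> unit ray direction through the focal point F (numpy array)
    planes:          plane_id -> {"normal": known unit normal n, "d": offset or None}
    point_to_planes: point_id -> list of plane_ids the point lies on
    seed_points:     point_id -> already known 3D coordinates (numpy array)"""
    points = dict(seed_points)
    changed = True
    while changed:
        changed = False
        for pid, coords in list(points.items()):          # fix planes through known points
            for k in point_to_planes.get(pid, []):
                if planes[k]["d"] is None:
                    planes[k]["d"] = planes[k]["normal"] @ coords
                    changed = True
        for pid, r in rays.items():                        # intersect rays with fixed planes
            if pid in points:
                continue
            for k in point_to_planes.get(pid, []):
                n, d = planes[k]["normal"], planes[k]["d"]
                if d is not None and abs(n @ r) > 1e-9:
                    points[pid] = (d / (n @ r)) * r        # point = depth * ray direction
                    changed = True
                    break
    return points
```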

5.2 Computation of the Direction of Parallel Lines

  • Given the input that two or more 3D lines are parallel, the authors can compute the lines’ 3D direction as follows.
  • For each line, the authors may compute the 3D interpretation plane, i.e. the plane spanned by the focal point and the 3D line.
  • This plane is given by the backprojection rays of the image points.
  • If more than two points are given, a least squares fit is done to determine the plane: the normal n is computed as the right singular vector associated with the least singular value [6] of the matrix whose rows are the directions of the points’ backprojection rays; the interpretation plane through F is then Π ∼ (n^T, 0)^T.
  • If more than two interpretation planes are given, a least squares fit is done as above (see the sketch after this list).
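Both least-squares fits just described can be sketched compactly with the SVD; the assumption here is that the fitted matrices have the backprojected ray directions (respectively the interpretation-plane normals) as their rows, and the function names are illustrative.

```python
import numpy as np

def interpretation_plane_normal(ray_dirs):
    """Normal of the plane through the focal point F spanned by a line's backprojection
    rays: the right singular vector associated with the least singular value."""
    _, _, Vt = np.linalg.svd(np.asarray(ray_dirs, dtype=float))
    return Vt[-1]

def parallel_line_direction(lines):
    """3D direction shared by a set of parallel lines; each line is given as a list of
    backprojected ray directions of its image points. The common direction lies in every
    interpretation plane, i.e. it is orthogonal to all their normals."""
    normals = np.array([interpretation_plane_normal(rays) for rays in lines])
    _, _, Vt = np.linalg.svd(normals)
    return Vt[-1]
```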

5.3 Computation of the Normal Direction of a Set of Parallel Planes

  • Planes are given by the user by indicating sets of coplanar points in the image (see figures 2 (e) and (f)).
  • In the following, the authors suppose that the normal vectors have unit norm.

6 Simultaneous Reconstruction of Points and Planes

  • The coplanarity constraints provided by the user are in general overconstrained, i.e. several points may lie on more than one plane.
  • In the following, the authors only consider planes with known normal direction.
  • The authors say that two planes Π_i and Π_j are connected if they share a point, i.e. if the intersection of Π_i and Π_j is non-empty.
  • The authors now show how the considered planes and points may be reconstructed simultaneously in a least squares manner.
  • Setting the partial derivatives of the cost function to zero leads to a homogeneous linear equation system in the unknown point and plane parameters, giving the least squares solution (see the sketch after this list).
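A minimal sketch of the simultaneous reconstruction as one homogeneous linear system; the parameterization (point depths along their backprojection rays, plane offsets along their known unit normals) is an assumption consistent with the summary above, and the least-squares solution is taken as the right singular vector of the smallest singular value, following the SVD route mentioned in the paper [6].

```python
import numpy as np

def reconstruct_points_and_planes(rays, normals, incidences):
    """Hedged sketch of the simultaneous least-squares reconstruction.
    rays:       (P, 3) unit backprojection ray directions (point i = lambda_i * rays[i])
    normals:    (K, 3) known unit plane normals (plane k: normals[k] . X = d_k)
    incidences: list of (i, k) pairs meaning point i lies on plane k
    Each incidence gives one homogeneous equation
        lambda_i * (normals[k] . rays[i]) - d_k = 0
    in the unknowns (lambda, d); the unit-norm least-squares solution is returned,
    defined up to the global scale of the reconstruction."""
    rays, normals = np.asarray(rays, dtype=float), np.asarray(normals, dtype=float)
    P, K = len(rays), len(normals)
    A = np.zeros((len(incidences), P + K))
    for row, (i, k) in enumerate(incidences):
        A[row, i] = normals[k] @ rays[i]   # coefficient of lambda_i
        A[row, P + k] = -1.0               # coefficient of d_k
    _, _, Vt = np.linalg.svd(A)
    sol = Vt[-1]
    lambdas, offsets = sol[:P], sol[P:]
    return lambdas[:, None] * rays, offsets
```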

7 Complete Algorithm

  • Backproject the other points that lie on planes in the current partition (cf. 5.4).
  • Note that this process is done completely automatically.
  • From the 3D reconstruction, the authors may create textured VRML models (see an example in section 8).

8 Example

  • The input image was obtained with the CycloVision ParaShot system and an Agfa ePhoto 1680 camera.
  • Texture maps were created from the panoramic image using the projection equations in 2.2 and bicubic interpolation [8].
  • With other images, similar results were obtained.

9 Conclusion

  • The authors have presented a method for interactive 3D reconstruction of piecewise planar objects from a single panoramic view.
  • The method was developed for a sensor based on a parabolic mirror, but its adaptation to other sensors is straightforward.
  • 3D reconstruction is done using geometrical constraints provided by the user that are simple in nature (coplanarity, perpendicularity and parallelism) and may be easily provided without any computer vision expertise.
  • The major drawback of single-view 3D reconstruction is of course that only limited classes of objects may be reconstructed and that the reconstruction is usually incomplete.
  • Please contact the author to obtain a paper version with color figures.


HAL Id: inria-00525670
https://hal.inria.fr/inria-00525670
Submitted on 30 May 2011
A Method for 3D Reconstruction of Piecewise Planar
Objects from Single Panoramic Images
Peter Sturm
To cite this version:
Peter Sturm. A Method for 3D Reconstruction of Piecewise Planar Objects from Single Panoramic
Images. IEEE Workshop on Omnidirectional Vision (OMNVIS ’00), Jun 2000, Hilton Head Island,
United States. pp.119-126, �10.1109/OMNVIS.2000.853818�. �inria-00525670�

A Method for 3D Reconstruction of Piecewise Planar Objects from Single
Panoramic Images
Peter Sturm
INRIA Rhône-Alpes
655 Avenue de l’Europe
38330 Montbonnot St Martin, France
Peter.Sturm@inrialpes.fr
IEEE WORKSHOP ON OMNIDIRECTIONAL VISION (WITH CVPR), HILTON HEAD ISLAND, PP. 119-126, JUNE 2000.
Abstract
We present an approach for 3D reconstruction of objects from a single panoramic image. Obviously, constraints on the 3D
structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism
constraints. The method is described in detail for the case of a parabolic mirror-based omnidirectional sensor and results are
provided.
1 Introduction
Methods for 3D reconstruction from images abound in the literature. A lot of effort has been spent on the development
of multi-view approaches allowing for high accuracy and complete modeling of complex scenes. On one hand, research is
directed towards completely automatic systems; these are relatively difficult to realize and it is not clear yet if they are ready for
use by a non-expert. On the other hand, commercial systems exist, but they usually require a high amount of user interaction
(clicking on many points in many images).
The guideline of the work described here is to provide an intermediate solution, reconstruction from a single image, that
needs relatively little user interaction. Naturally, there are limits on the kind of objects possible to be reconstructed and on the
achievable degree of completeness of reconstructions.
Work on reconstruction from single images has been done before, see e.g. [9, 12, 13]. Most of the existing methods were
developed for the use of a pinhole camera (with the exception of [12] where mosaics are used). The scope of several of these
methods is limited, e.g. the approaches described in [9, 12] only allow the reconstruction of planar surfaces whose vanishing line
can be determined in the image. One of the two approaches in [9] achieves the reconstruction by measuring heights of points
with respect to a ground plane. The drawback of that method is that it requires, for each 3D point to be reconstructed, its foot
point, i.e. the image of the point's vertical projection onto the ground plane.
In this paper, we present an approach for 3D reconstruction from a single panoramic image (work on panoramic stereo and
ego-motion estimation is described in e.g. [3, 5, 7, 14]). The concrete example of an image acquired with a parabolic mirror-
based omnidirectional camera is described, but the method is easily adapted to other omnidirectional sensors. Reconstruction
from a single image requires a priori constraints on the 3D structure. We use constraints that are easy to provide: coplanarity of
points, perpendicularity of planes and lines, and parallelism of planes and lines. The parallelism and perpendicularity constraints
are used to estimate the “directional geometry” of the scene (line directions and plane normals) which forms the skeleton of the
3D reconstruction. Coplanarity constraints are used to complete the reconstruction, via simultaneous reconstruction of points
and planes. With the type of information used we are able to reconstruct piecewise planar objects.
The paper is organized as follows. In section 2, we describe the camera model. The input to our reconstruction scheme is explained
in section 3. The basic idea for 3D reconstruction from a single image is outlined in section 4. Details on 3D reconstruction are given
in sections 5 and 6. The complete algorithm is summarized in section 7. Section 8 shows an experimental result and conclusions are
given in section 9.
2 Camera Model
We use an omnidirectional camera formed by the combination of a parabolic mirror and an orthographic camera whose viewing
direction is parallel to the mirror's axis [10]. Orthographic projection can be obtained by using telecentric optics [15]. Geometrically
speaking, the projection center of the orthographic camera coincides with the infinite one among the two focal points of the
paraboloid. Given the image of a point and a small amount of calibration information described below, it is possible to determine
the 3D direction of the line joining the original 3D point and the finite focal point of the paraboloid. The finite focal point acts as
an effective optical center, relative to which correct perspective views of the scene can be created from the panoramic image [1, 11].
In the following, we give formulas needed for calibrating the system and for 3D reconstruction. These formulas are well
known [10, 14], but presented here for the sake of completeness.

This work is partially supported by the EPSRC funded project GR/K89221 (Vector).
2.1 Representation of Mirror and Camera
The mirror is a rotationally symmetric paraboloid. Its shape is thus defined by a single parameter p. Without loss of generality,
we may represent the paraboloid in usual quadric notation by the following symmetric matrix:

Q \sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -p \\ 0 & 0 & -p & -p^2 \end{pmatrix}

where \sim means equality up to scale, which accounts for the use of homogeneous coordinates. The mirror's axis is the Z-axis
and the finite focal point F is the coordinate origin, i.e. F = (0, 0, 0)^T. The parameter p describes the mirror's "curvature".
The viewing direction of the orthographic camera is parallel to the Z-axis, thus the projection matrix can be written as (using
homogeneous coordinates):

M \sim \begin{pmatrix} s & 0 & 0 & u_0 \\ 0 & s & 0 & v_0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

The parameter s is the magnification factor of the orthographic projection. The coefficients u_0 and v_0 describe the relative
position of the image plane and the mirror, perpendicular to the viewing direction.
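As a quick check of this representation, the following short Python sketch builds the quadric and the orthographic projection matrix exactly as written above (the concrete numbers are made up for illustration) and verifies that a point on the mirror surface satisfies the quadric equation:

```python
import numpy as np

def mirror_quadric(p):
    """4x4 quadric matrix of the paraboloid x^2 + y^2 - 2*p*z - p^2 = 0 (focal point at the origin)."""
    return np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, -p],
                     [0, 0, -p, -p**2]], dtype=float)

def ortho_projection(s, u0, v0):
    """3x4 orthographic projection matrix with magnification s and image center (u0, v0)."""
    return np.array([[s, 0, 0, u0],
                     [0, s, 0, v0],
                     [0, 0, 0, 1]], dtype=float)

# a point on the mirror surface: z = (x^2 + y^2 - p^2) / (2p)
p, s, u0, v0 = 0.02, 500.0, 320.0, 240.0         # made-up values for illustration
x, y = 0.01, 0.015
X = np.array([x, y, (x**2 + y**2 - p**2) / (2 * p), 1.0])
assert abs(X @ mirror_quadric(p) @ X) < 1e-12    # X lies on the quadric
print(ortho_projection(s, u0, v0) @ X)           # its orthographic image (homogeneous)
```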
2.2 Projection of a 3D Point
Let P be a 3D point with coordinates (X, Y, Z)^T. Its projection can be computed as follows. Let L be the line joining P
and the mirror's finite focal point F. Among the two intersection points of L with the mirror Q, choose the one which lies on
the same half-line as P, with respect to F. The image of P is the orthographic projection of this intersection point, giving the
image coordinates:

u = u_0 + s\,p\,\frac{X\left(Z + \sqrt{X^2 + Y^2 + Z^2}\right)}{X^2 + Y^2}
\qquad
v = v_0 + s\,p\,\frac{Y\left(Z + \sqrt{X^2 + Y^2 + Z^2}\right)}{X^2 + Y^2}
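The projection equations translate directly into a few lines of Python; this is an illustrative sketch using the parametrization reconstructed above (shape parameter p, magnification s, image center (u_0, v_0)), not code from the paper. Note that X(Z + \sqrt{X^2+Y^2+Z^2})/(X^2+Y^2) simplifies to X/(\sqrt{X^2+Y^2+Z^2} - Z).

```python
import numpy as np

def project_point(P, p, s, u0, v0):
    """Image of a 3D point P = (X, Y, Z) under the parabolic-mirror / orthographic model
    sketched above. Assumes P does not lie on the mirror axis (X = Y = 0)."""
    X, Y, Z = P
    rho = np.sqrt(X**2 + Y**2 + Z**2)     # distance from P to the finite focal point F
    t = p * (Z + rho) / (X**2 + Y**2)     # intersection of the ray F-P with the mirror is t*P
    u = u0 + s * t * X                    # orthographic projection of the intersection point
    v = v0 + s * t * Y
    return np.array([u, v])
```

For instance, a point at the height of the focal point, such as (1, 0, 0), is mapped onto the calibration circle of radius a = s p around (u_0, v_0).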
2.3 Calibration
The above projection equations show that the mirror's shape parameter p and the magnification s of the orthographic projection
can be grouped together in a parameter a = s p describing the combined system. To calibrate the system, we thus need to
estimate the parameters u_0, v_0 and a. These parameters have a simple geometrical meaning: consider the horizontal circle on
the paraboloid at the height of the focal point F (cf. figure 1). The projection of this circle in the orthographic image is exactly
the circle C with center (u_0, v_0) and radius a.
If the mirror's top border does not lie at the height of the focal point (e.g. as shown in figure 1), then we cannot directly
determine the circle C in the image. Instead, we fit a circle C' to the border of the image as shown in figure 2 (a). This circle is
concentric with C, thus u_0 and v_0 are given by its center. The radius a' of C' and a are related as follows:

a = a' \, \frac{\cos\theta}{1 + \sin\theta}

where \theta is the angle shown in figure 1, which is known by construction.
The calibration procedure has to be done only once for a fixed configuration. Another, more flexible calibration method, is
described in [2].
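In practice the calibration step amounts to fitting the circle C' to the image of the mirror's border and converting its radius. The sketch below is a plausible implementation under the relation reconstructed above (a = a' cos θ / (1 + sin θ)); the algebraic circle fit and the function names are this write-up's choices, not the paper's.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit to 2D points (N x 2 array):
    solves x^2 + y^2 = 2*cx*x + 2*cy*y + c0 for (cx, cy, c0)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    cx, cy, c0 = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return np.array([cx, cy]), np.sqrt(c0 + cx**2 + cy**2)

def calibrate(border_points, theta):
    """Estimate the image center (u0, v0) and the combined parameter a from points on the
    image of the mirror's top border; theta is the angle of figure 1 (known by construction).
    The relation a = a' * cos(theta) / (1 + sin(theta)) is the reconstruction assumed above."""
    center, a_prime = fit_circle(border_points)
    return center, a_prime * np.cos(theta) / (1.0 + np.sin(theta))
```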

Figure 1: The paraboloidal mirror. (Figure labels: focal point F, the image plane, and radii r and r'.)
2.4 Backprojection
The most important feature of our mirror-camera system is that from a panoramic image, we may create correct perspective
images of the scene, as if they had been observed by a pinhole camera with optical center at F.
This is equivalent to being able to backproject image points via F: we are able to determine the projection ray L (cf. section 2.2)
of a point P, given its image point. Given the calibration parameters, the projection rays are determined in Euclidean space,
which is useful for obtaining metric 3D reconstructions as described later.
A projection ray may be represented by its ideal point¹, which can be computed from the image coordinates (u, v) as follows:

\big( \, 2a(u - u_0), \;\; 2a(v - v_0), \;\; (u - u_0)^2 + (v - v_0)^2 - a^2 \, \big)^T    (1)
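Equation (1) gives an immediate backprojection routine; the sketch below is illustrative and uses the same reconstructed parametrization as before (a = s p). As a consistency check, applying it to the image of a point P produced by the projection sketch of section 2.2 returns a unit vector parallel to P.

```python
import numpy as np

def backproject(u, v, u0, v0, a):
    """Unit direction, through the finite focal point F, of the projection ray of the
    image point (u, v), following equation (1); a = s*p is the calibrated parameter."""
    du, dv = u - u0, v - v0
    d = np.array([2 * a * du, 2 * a * dv, du**2 + dv**2 - a**2])
    return d / np.linalg.norm(d)
```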
3 Input
To prepare the description of the 3D reconstruction method, we first explain the (user-provided) input. First of course, the
system has to be calibrated, as described in section 2.3 (also see figure 2 (a)). The basic primitives for our method are interest points
(see figure 2 (b)). Based on interest points, coplanarity, parallelism and perpendicularity constraints are provided as follows.
Lines are defined by two or more interest points and they are grouped together into sets of mutually parallel lines (see figures
2 (c) and (d)). In section 5.2 it is described how to compute the direction of a set of parallel lines. In the following, the direction of
the i-th set of parallel lines will be represented via the ideal point d_i.
Planes are also defined by interest points and grouped according to parallelism (see figures 2 (e) and (f)). The normal
direction of a set of parallel planes can be computed as described in section 5.3. The normal of the j-th set of parallel planes will be
represented via the 3-vector n_j.
Other useful constraints are:
parallelism of lines and planes, expressed as: d_i^T n_j = 0.
perpendicularity of lines and planes: d_i \sim n_j.
perpendicularity of lines: d_i^T d_j = 0.
perpendicularity of planes: n_i^T n_j = 0.
The input data are rather easy to provide interactively, which typically takes 10-15 minutes per image.
¹ By ideal points and ideal lines we denote points and lines at infinity, respectively.

(a) Calibration: the dotted line shows the circle C' (see text).
(b) Interest points.
(c) A set of parallel lines. (d) A set of parallel lines.
(e) A set of parallel planes. (f) A set of parallel planes.
Figure 2: Illustration of camera calibration and the input used for 3D reconstruction. Crosses of the same color represent
interest points belonging to the same line or plane.

Citations
Journal ArticleDOI
TL;DR: An algebraic representation is developed which unifies the three types of measurement and permits a first order error propagation analysis to be performed, associating an uncertainty with each measurement.
Abstract: We describe how 3D affine measurements may be computed from a single perspective view of a scene given only minimal geometric information determined from the image. This minimal information is typically the vanishing line of a reference plane, and a vanishing point for a direction not parallel to the plane. It is shown that affine scene structure may then be determined from the image, without knowledge of the camera's internal calibration (e.g. focal length), nor of the explicit relation between camera and world (pose). In particular, we show how to (i) compute the distance between planes parallel to the reference plane (up to a common scale factor); (ii) compute area and length ratios on any plane parallel to the reference plane; (iii) determine the camera's location. Simple geometric derivations are given for these results. We also develop an algebraic representation which unifies the three types of measurement and, amongst other advantages, permits a first order error propagation analysis to be performed, associating an uncertainty with each measurement. We demonstrate the technique for a variety of applications, including height measurements in forensic images and 3D graphical modelling from single images.

760 citations

Journal ArticleDOI
TL;DR: It is shown that a central catadioptric projection is equivalent to a two-step mapping via the sphere, and it is proved that for each catadi optric projection there exists a dual catadiOptric projection based on the duality between points and line images (conics).
Abstract: Catadioptric sensors are devices which utilize mirrors and lenses to form a projection onto the image plane of a camera. Central catadioptric sensors are the class of these devices having a single effective viewpoint. In this paper, we propose a unifying model for the projective geometry induced by these devices and we study its properties as well as its practical implications. We show that a central catadioptric projection is equivalent to a two-step mapping via the sphere. The second step is equivalent to a stereographic projection in the case of parabolic mirrors. Conventional lens-based perspective cameras are also central catadioptric devices with a virtual planar mirror and are, thus, covered by the unifying model. We prove that for each catadioptric projection there exists a dual catadioptric projection based on the duality between points and line images (conics). It turns out that planar and parabolic mirrors build a dual catadioptric projection pair. As a practical example we describe a procedure to estimate focal length and image center from a single view of lines in arbitrary position for a parabolic catadioptric system.

434 citations


Cites methods from "A method for 3D reconstruction of p..."

  • ...Furthermore, easily modified multiple view algorithms can be applied for reconstruction (Taylor, 2000; Sturm, 2000)....

    [...]

Book
05 Jan 2011
TL;DR: This survey gives an overview of image acquisition methods used in computer vision and especially, of the vast number of camera models that have been proposed and investigated over the years, where it tries to point out similarities between different models.
Abstract: This survey is mainly motivated by the increased availability and use of panoramic image acquisition devices, in computer vision and various of its applications. Different technologies and different computational models thereof exist and algorithms and theoretical studies for geometric computer vision ("structure-from-motion") are often re-developed without highlighting common underlying principles. One of the goals of this survey is to give an overview of image acquisition methods used in computer vision and especially, of the vast number of camera models that have been proposed and investigated over the years, where we try to point out similarities between different models. Results on epipolar and multi-view geometry for different camera models are reviewed as well as various calibration and self-calibration approaches, with an emphasis on non-perspective cameras. We finally describe what we consider are fundamental building blocks for geometric computer vision or structure-from-motion: epipolar geometry, pose and motion estimation, 3D scene modeling, and bundle adjustment. The main goal here is to highlight the main principles of these, which are independent of specific camera models.

234 citations


Cites background from "A method for 3D reconstruction of p..."

  • ...perpendicularity or parallelism of lines and coplanarity of points [470]....

    [...]

Journal ArticleDOI
TL;DR: An algorithm for the calibration of a paracatadioptric device using only the images of lines in space is presented and it is shown that all of the intrinsic parameters from theImages of only three lines are obtained and that this is possible without any metric information.
Abstract: Catadioptric sensors refer to the combination of lens-based devices and reflective surfaces. These systems are useful because they may have a field of view which is greater than hemispherical, providing the ability to simultaneously view in any direction. Configurations which have a unique effective viewpoint are of primary interest, among these is the case where the reflective surface is a parabolic mirror and the camera is such that it induces an orthographic projection and which we call paracatadioptric. We present an algorithm for the calibration of such a device using only the images of lines in space. In fact, we show that we may obtain all of the intrinsic parameters from the images of only three lines and that this is possible without any metric information. We propose a closed-form solution for focal length, image center, and aspect ratio for skewless cameras and a polynomial root solution in the presence of skew. We also give a method for determining the orientation of a plane containing two sets of parallel lines from one uncalibrated view. Such an orientation recovery enables a rectification which is impossible to achieve in the case of a single uncalibrated view taken by a conventional camera. We study the performance of the algorithm in simulated setups and compare results on real images with an approach based on the image of the mirror's bounding circle.

210 citations



Journal ArticleDOI
TL;DR: This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view, and shows that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem.
Abstract: This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180deg field of view and for which the standard perspective camera model is not sufficient, e.g., the cameras equipped with circular fish-eye lenses Nikon FC-E8 (183deg), Sigma 8 mm-f4-EX (180deg), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by the central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences which can be used with accurate noncentral models in a bundle adjustment to obtain accurate 3D scene reconstruction. Noncentral camera models are dealt with and results are shown for catadioptric cameras with parabolic and spherical mirrors

200 citations


Cites background from "A method for 3D reconstruction of p..."

  • ...Interestingly, there is a direct generalization for the para-catadioptric cameras, leading to the quartic polynomial eigenvalue problem....

    [...]

References
Journal ArticleDOI
TL;DR: It can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines.
Abstract: Cubic convolution interpolation is a new technique for resampling discrete data. It has a number of desirable features which make it useful for image processing. The technique can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero. With the appropriate boundary conditions and constraints on the interpolation kernel, it can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines. A one-dimensional interpolation function is derived in this paper. A separable extension of this algorithm to two dimensions is applied to image data.

3,280 citations

Proceedings ArticleDOI
17 Jun 1997
TL;DR: A new camera with a hemispherical field of view is presented and results are presented on the software generation of pure perspective images from an omnidirectional image, given any user-selected viewing direction and magnification.
Abstract: Conventional video cameras have limited fields of view that make them restrictive in a variety of vision applications. There are several ways to enhance the field of view of an imaging system. However, the entire imaging system must have a single effective viewpoint to enable the generation of pure perspective images from a sensed image. A new camera with a hemispherical field of view is presented. Two such cameras can be placed back-to-back, without violating the single viewpoint constraint, to arrive at a truly omnidirectional sensor. Results are presented on the software generation of pure perspective images from an omnidirectional image, given any user-selected viewing direction and magnification. The paper concludes with a discussion on the spatial resolution of the proposed camera.

688 citations


"A method for 3D reconstruction of p..." refers background or methods in this paper

  • ...These formulas are well known [10, 14], but presented here for the sake of completeness....

    [...]

  • ...We use an omnidirectional camera formed by the combination of a parabolic mirror and an orthographic camera whose viewing direction is parallel to the mirror’s axis [10]....

    [...]

Book ChapterDOI
09 Oct 1993
TL;DR: A practical algorithm for Euclidean reconstruction from several views with the same camera is given and is shown to behave very robustly in the presence of noise giving excellent calibration and reconstruction results.
Abstract: The possibility of calibrating a camera from image data alone, based on matched points identified in a series of images by a moving camera, was suggested by Maybank and Faugeras. This result implies the possibility of Euclidean reconstruction from a series of images with a moving camera, or equivalently, Euclidean structure-from-motion from an uncalibrated camera. No tractable algorithm for implementing their methods for more than three images has been previously reported. This paper gives a practical algorithm for Euclidean reconstruction from several views with the same camera. The algorithm is demonstrated on synthetic and real data and is shown to behave very robustly in the presence of noise, giving excellent calibration and reconstruction results.

432 citations


"A method for 3D reconstruction of p..." refers methods in this paper

  • ...in [4], but for small problems (the size of the matrix is the number of planes plus the number of points, which is usually at most a few dozens for single images) we simply use singular value decomposition [6]....

    [...]

Proceedings ArticleDOI
04 Jan 1998
TL;DR: In this article, the authors derived the complete class of single-lens single-mirror catadioptric sensors which have a single viewpoint and an expression for the spatial resolution of a single-view camera in terms of the resolution of the camera used to construct it.
Abstract: Conventional video cameras have limited fields of view which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. When designing a catadioptric sensor, the shape of the mirror(s) should ideally be selected to ensure that the complete catadioptric system has a single effective viewpoint. In this paper, we derive the complete class of single-lens single-mirror catadioptric sensors which have a single viewpoint and an expression for the spatial resolution of a catadioptric sensor in terms of the resolution of the camera used to construct it. We also include a preliminary analysis of the defocus blur caused by the use of a curved mirror.

415 citations

Journal ArticleDOI
01 Sep 1999
TL;DR: Methods for creating 3D graphical models of scenes from a limited numbers of images, i.e. one or two, in situations where no scene co‐ordinate measurements are available are presented.
Abstract: We present methods for creating 3D graphical models of scenes from a limited numbers of images, i.e. one or two, in situations where no scene co-ordinate measurements are available. The methods employ constraints available from geometric relationships that are common in architectural scenes - such as parallelism and orthogonality - together with constraints available from the camera. In particular, by using the circular points of a plane simple, linear algorithms are given for computing plane rectification, plane orientation and camera calibration from a single image. Examples of image based 3D modelling are given for both single images and image pairs.

310 citations


"A method for 3D reconstruction of p..." refers background in this paper

  • ...One of the two approaches in [9] achieves the reconstruction by measuring heights of points with respect to a ground plane....

    [...]

  • ...the approaches described in [9, 12] only allow to reconstruct planar surfaces whose vanishing line can be determined in the image....

    [...]

Frequently Asked Questions (2)
Q1. What are the contributions in "A method for 3d reconstruction of piecewise planar objects from single panoramic images" ?

The authors present an approach for 3D reconstruction of objects from a single panoramic image. Their approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. The method is described in detail for the case of a parabolic mirror-based omnidirectional sensor and results are provided. 

The authors have presented a method for interactive 3D reconstruction of piecewise planar objects from a single panoramic view. 3D reconstruction is done using geometrical constraints provided by the user that are simple in nature (coplanarity, perpendicularity and parallelism) and may be easily provided without any computer vision expertise. The major drawback of single-view 3D reconstruction is of course that only limited classes of objects may be reconstructed and that the reconstruction is usually incomplete. One advantage of their method compared to other approaches is that a wider class of objects can be reconstructed (in particular, there is no need to have two or more ideal points for each plane).