Proceedings ArticleDOI

Multiple view 3-D reconstruction in water

TL;DR: A camera model for underwater scene reconstruction from multiple images is presented; the refractive index of the medium is calculated and the geometric refraction effects in the images are removed using point correspondences in image pairs.
Abstract: This paper presents a camera model to deal with underwater scene reconstruction from multiple images. Effects due to the change in the path of light rays at the medium interface are modelled for a general medium with unknown refractive index. Our model calculates the refractive index of the medium and simultaneously removes the geometric refraction effects in the images using point correspondences in image pairs. With the internal parameters of the camera known, we find its external parameters and then obtain the 3-D reconstruction.
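The steps described above map onto a standard two-view pipeline once the refraction effect has been removed from the correspondences. A minimal Python sketch under that assumption: the paper-specific stage (jointly estimating the refractive index and correcting the matched points) is assumed to have already produced the corrected matches, and the OpenCV calls stand in for the linear algebra spelled out in the paper.

```python
# Hedged sketch of the reconstruction steps outlined above, assuming the
# refraction-corrected correspondences pts1_c / pts2_c are already available
# from the paper's refractive-index estimation stage.
import numpy as np
import cv2

def reconstruct_two_view(pts1_c, pts2_c, K):
    """pts1_c, pts2_c: Nx2 float arrays of corrected pixel matches; K: 3x3 intrinsics."""
    # With known internal parameters, estimate the essential matrix ...
    E, _ = cv2.findEssentialMat(pts1_c, pts2_c, K, method=cv2.RANSAC)
    # ... recover the external parameters [R | t] ...
    _, R, t, _ = cv2.recoverPose(E, pts1_c, pts2_c, K)
    # ... and triangulate to obtain 3-D points up to a similarity.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1_c.T, pts2_c.T)
    return R, t, (X_h[:3] / X_h[3]).T
```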
Citations
Journal ArticleDOI
TL;DR: This work provides a succinct derivation and expression for the refractive fundamental matrix and uses it as the basis for a novel two-view reconstruction method for underwater imaging that outperforms the classic two-view Structure-from-Motion method relying on the pinhole-plus-distortion camera model.
Abstract: Recovering 3D geometry from cameras in underwater applications involves the Refractive Structure-from-Motion problem, where the non-linear distortion of light induced by a change of medium density invalidates the single viewpoint assumption. The pinhole-plus-distortion camera projection model suffers from a systematic geometric bias since refractive distortion depends on object distance. This leads to inaccurate camera pose and 3D shape estimation. To account for refraction, it is possible to use the axial camera model or to explicitly consider one or multiple parallel refractive interfaces whose orientations and positions with respect to the camera can be calibrated. Although it has been demonstrated that the refractive camera model is well suited for underwater imaging, Refractive Structure-from-Motion remains particularly difficult to use in practice when considering the seldom-studied case of a camera with a flat refractive interface. Our method applies to the case of underwater imaging systems whose entrance lens is in direct contact with the external medium. By adopting the refractive camera model, we provide a succinct derivation and expression for the refractive fundamental matrix and use this as the basis for a novel two-view reconstruction method for underwater imaging. For validation we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate its practical application within laboratory settings and for medical applications in fluid-immersed endoscopy. We demonstrate that our approach outperforms the classic two-view Structure-from-Motion method relying on the pinhole-plus-distortion camera model.

41 citations


Additional excerpts

  • ...An alternative ray-based model has been proposed in Chaudhury et al. (2015)....

    [...]

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A succinct derivation of the refractive fundamental matrix is developed in the form of the generalised epipolar constraint for an axial camera to robustly estimate underwater camera poses, where other methods suffer from poor noise-sensitivity.
Abstract: Recovering 3D scene geometry from underwater images involves the Refractive Structure-from-Motion (RSfM) problem, where the image distortions caused by light refraction at the interface between different propagation media invalidate the single viewpoint assumption. Direct use of the pinhole camera model in RSfM leads to inaccurate camera pose estimation and consequently drift. RSfM methods have been thoroughly studied for the case of a thick glass interface that assumes two refractive interfaces between the camera and the viewed scene. On the other hand, when the camera lens is in direct contact with the water, there is only one refractive interface. By explicitly considering a refractive interface, we develop a succinct derivation of the refractive fundamental matrix in the form of the generalised epipolar constraint for an axial camera. We use the refractive fundamental matrix to refine initial pose estimates obtained by assuming the pinhole model. This strategy allows us to robustly estimate underwater camera poses, where other methods suffer from poor noise-sensitivity. We also formulate a new four-view constraint enforcing camera pose consistency along a video, which leads us to a novel RSfM framework. For validation we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate performance within laboratory settings and for applications in endoscopy.
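The generalised epipolar constraint referred to above couples viewing rays rather than image points. A minimal numpy sketch, assuming each ray is given as a Plücker pair (direction, moment about the camera's own origin) and that (R, t) maps frame 1 into frame 2; it illustrates the constraint itself, not the authors' refinement procedure.

```python
# Generalised epipolar constraint for ray-based (e.g. axial) cameras,
# with rays as Pluecker pairs (direction q, moment m = p x q).
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def generalised_epipolar_residual(q1, m1, q2, m2, R, t):
    """Residual of q2'Eq1 + q2'Rm1 + m2'Rq1 with E = [t]x R.
    Zero (up to noise) when both rays view the same 3-D point."""
    E = skew(t) @ R
    return q2 @ (E @ q1) + q2 @ (R @ m1) + m2 @ (R @ q1)

# Quick self-check with a synthetic 3-D point seen by two rays whose
# origins lie on each camera's axis (as for an axial camera).
R, t = np.eye(3), np.array([0.3, 0.0, 0.0])
X1 = np.array([0.2, -0.1, 2.0])              # point in frame 1
X2 = R @ X1 + t                               # same point in frame 2
p1 = np.array([0.0, 0.0, 0.05])               # ray origin on camera-1 axis
p2 = np.array([0.0, 0.0, 0.08])               # ray origin on camera-2 axis
q1, q2 = X1 - p1, X2 - p2
m1, m2 = np.cross(p1, q1), np.cross(p2, q2)
print(generalised_epipolar_residual(q1, m1, q2, m2, R, t))   # ~0
```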

26 citations


Cites background from "Multiple view 3-D reconstruction in..."

  • ...The symbol ⊗ refers to the Kronecker product and the two design matrices Ds and S are defined as Ds = diag(1 2 1 2 2 1) and S([1, 1], [2, 2], [2, 4], [3, 5], [4, 3], [4, 7], [5, 6], [5, 8], [6, 9]) = 1 (see the sketch after this list)....

    [...]

  • ...An alternative ray-based model allows the underlying refractive geometry to be expressed as a direct extension of the projective geometry, but this only allows the 3D reconstruction to be obtained up to a similarity and assumes that refraction occurs at the camera centre [6]....

    [...]
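The design matrices quoted in the first excerpt above can be materialised directly. A small numpy sketch, assuming the 1-based (row, column) index convention of the excerpt and a 6×9 shape for S.

```python
# Materialising the design matrices quoted in the excerpt above. The
# 1-based indexing and the 6x9 shape of S are assumptions made to match
# the (row, column) pairs as listed.
import numpy as np

Ds = np.diag([1, 2, 1, 2, 2, 1])                     # Ds = diag(1 2 1 2 2 1)

S = np.zeros((6, 9))
ones_at = [(1, 1), (2, 2), (2, 4), (3, 5), (4, 3),
           (4, 7), (5, 6), (5, 8), (6, 9)]           # 1-based (row, col)
for r, c in ones_at:
    S[r - 1, c - 1] = 1.0
```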

Journal ArticleDOI
TL;DR: A geometrically-driven approach which fulfills the coplanarity condition and thereby requires no knowledge of object space data is proposed and may prove useful not only for object space reconstruction but also as a preparatory step for application of bundle block adjustment and for outlier detection.
Abstract: While accuracy, detail, and limited time on site make photogrammetry a valuable means for underwater mapping, the establishment of reference control networks in such settings is oftentimes difficult. In that respect, the use of the coplanarity constraint becomes a valuable solution, as it requires neither knowledge of object space coordinates nor setting up a reference control network. Nonetheless, imaging in such domains is subject to non-linear and depth-dependent distortions, which are caused by refractive media that alter the standard single viewpoint geometry. Accordingly, the coplanarity relation as formulated for the in-air case does not hold in such an environment, and methods that have been proposed thus far for geometric modelling of its effect require knowledge of object-space quantities. In this paper we propose a geometrically driven approach which fulfills the coplanarity condition and thereby requires no knowledge of object space data. We also study a linear model for the establishment of this constraint. Clearly, a linear form requires neither first approximations nor an iterative convergence scheme. Such an approach may prove useful not only for object space reconstruction but also as a preparatory step for the application of bundle block adjustment and for outlier detection. All are key features in photogrammetric practice. Results show that no unique setup is needed for estimating the relative orientation parameters using the model and that high levels of accuracy can be achieved.
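For reference, the in-air coplanarity condition the paper starts from is the familiar epipolar constraint on normalised image coordinates. A minimal sketch of evaluating it; the paper's refraction-aware counterpart is not reproduced here.

```python
# In-air coplanarity (epipolar) condition x2' [t]x R x1 = 0 for
# normalised image coordinates; the cited paper develops a refraction-
# aware counterpart of this constraint.
import numpy as np

def coplanarity_residual(x1, x2, R, t):
    """x1, x2: homogeneous normalised image points (3-vectors);
    (R, t): relative orientation from view 1 to view 2."""
    tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0]])
    E = tx @ R                      # essential matrix
    return x2 @ (E @ x1)            # zero when both rays and the baseline
                                    # are coplanar
```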

4 citations


Cites methods from "Multiple view 3-D reconstruction in..."

  • ...As an example, Chaudhury et al. (2015) implemented a ray-tracing based model that allows expressing the underlying refractive geometry as an extension of projective geometry....

    [...]

Proceedings ArticleDOI
01 Feb 2018
TL;DR: A refraction reconstruction model to compensate for refractive errors is presented for images acquired with the camera above the water surface rather than underwater, which excludes possible refraction across a glass-water interface.
Abstract: 3D reconstruction of underwater objects is a challenging problem in computer vision due to the scattering and absorption of light, which lead to contrast and colour degradation of the acquired images. In addition, refraction across media boundaries introduces geometric variations which lead to erroneous correspondence matching between images. In this paper, we present a refraction reconstruction model to compensate for refractive errors. Our model is for images acquired with the camera above the water surface rather than underwater, which excludes possible refraction across a glass-water interface. The corrected images are used within a multi-view 3D reconstruction framework to produce the 3D geometry of the underwater objects as well as the camera poses.
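The geometric core of such a compensation is refracting each viewing ray at the flat air-water interface. A hedged sketch using Snell's law in vector form; the interface at z = 0, the camera height, and the refractive indices are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: refracting a viewing ray from air into water at a flat
# surface (z = 0) using Snell's law in vector form.
import numpy as np

N_AIR, N_WATER = 1.0, 1.33

def refract(d, n, n1=N_AIR, n2=N_WATER):
    """d: unit incident direction; n: unit surface normal pointing into
    the incident medium. Returns the unit refracted direction."""
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

# Illustrative setup: camera at height h above the water, looking down;
# one pixel ray is traced to the surface and refracted into the water.
h = 1.0
cam = np.array([0.0, 0.0, h])
d = np.array([0.2, 0.0, -1.0])
d /= np.linalg.norm(d)                      # viewing ray in air
s = h / -d[2]
hit = cam + s * d                           # intersection with the surface
t_dir = refract(d, np.array([0.0, 0.0, 1.0]))
# Points along `hit + s * t_dir` (s > 0) trace the corrected in-water ray;
# intersecting such rays across views yields refraction-compensated geometry.
```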

2 citations


Cites background or methods from "Multiple view 3-D reconstruction in..."

  • ...Another ray-based model [7] assumed refraction at the camera centre and expressed the refractive geometry as an extension of projective geometry....

    [...]

  • ...The correction applied to the coordinates in the fundamental-matrix relation can be converted into an eigenvalue problem to estimate the elements of the fundamental matrix along with the refractive index [7]....

    [...]

  • ...In terms of the refractive index η and the focal length f, α is derived [7] as

    [...]

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A single-layer refraction method is proposed in which the glass is thin and its effect is neglected; it produced a better pose estimate than standard structure from motion.
Abstract: Retrieving 3D geometry from underwater images invalidates the single viewpoint assumption and classical structure from motion, as the images are distorted by the refraction of light at the interface between different propagation media. Most underwater imaging systems suffer from two-level refraction because light propagates through two media, water and glass. We propose a single-layer refraction method in which the glass is thin and its effect is neglected. The initial pose is estimated through an adaptive pinhole model and is then refined using the refractive fundamental matrix. The ground-truth pose estimated with a calibration pattern is used to measure the pose error. Our method produced better pose estimates than standard structure from motion.
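Pose error against a ground-truth pose, as mentioned above, is commonly reported as a rotation angle and an angle between translation directions (translation scale being unobservable). A small sketch of those two metrics; the exact definitions used in the paper may differ.

```python
# Common pose-error metrics of the kind referred to above: angular
# rotation error and angle between translation directions.
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle (degrees) between estimated and ground-truth rotation."""
    cos_a = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def translation_angle_deg(t_est, t_gt):
    """Angle (degrees) between translation directions (scale is unobservable)."""
    cos_a = np.dot(t_est, t_gt) / (np.linalg.norm(t_est) * np.linalg.norm(t_gt))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```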

1 citation


Cites background from "Multiple view 3-D reconstruction in..."

  • ...In [7] the camera is placed close to the water and the underwater object is imaged from the top....

    [...]

References
Book
01 Jan 2000
TL;DR: In this book, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.
Abstract: From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.

15,558 citations

01 Jan 2001
Multiple View Geometry in Computer Vision (another listing of the book above).

14,282 citations


"Multiple view 3-D reconstruction in..." refers background or methods in this paper

  • ...The former results in a degradation of quality, so image processing techniques involving object recognition and feature extraction become more difficult, whereas the latter results in a change of the geometric entities, so the basic pin-hole model for the camera as explained in [1] is no longer valid for reconstruction applications....

    [...]

  • ...For points x̄c and x̄c′, use linear triangulation as described in [1] to obtain the 3-D world points up to a similarity transformation (see the sketch after this list)....

    [...]

  • ...The decomposition of the essential matrix gives the camera matrices P1 = [I|0] and P2 = [R|t] as discussed in [1]....

    [...]
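The two excerpts above reference the linear triangulation and essential-matrix decomposition of [1]. A compact sketch of the DLT triangulation step, assuming the camera matrices P1 = [I|0] and P2 = [R|t] obtained from that decomposition are already available.

```python
# Sketch of the linear (DLT) triangulation step referenced above: given
# P1 = [I|0], P2 = [R|t] and a corrected correspondence, recover the 3-D
# point up to a similarity.
import numpy as np

def triangulate_linear(x1, x2, P1, P2):
    """x1, x2: homogeneous image points (3-vectors); P1, P2: 3x4 cameras."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null vector of A minimises |AX|
    X = Vt[-1]
    return X[:3] / X[3]                  # inhomogeneous 3-D point
```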

Proceedings ArticleDOI
01 Dec 2001
TL;DR: This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion, by expressing fundamental matrix estimation as a quadratic eigenvalue problem (QEP), for which efficient algorithms are well known.
Abstract: A problem in uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be pre-calibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radial-lens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a quadratic eigenvalue problem (QEP), for which efficient algorithms are well known. I derive the new estimator, and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. I show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is described.
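The quadratic eigenvalue problem (A0 + λA1 + λ²A2)x = 0 mentioned above can be solved by linearising it into a generalised eigenvalue problem. A minimal scipy sketch of that reduction; the matrices here are random placeholders, not the estimator's actual design matrices.

```python
# Solving a QEP (A0 + lam*A1 + lam^2*A2)x = 0 by linearisation to a
# generalised eigenvalue problem, as used for the joint fundamental-matrix
# and distortion estimate described above. Placeholder matrices only.
import numpy as np
from scipy.linalg import eig

def solve_qep(A0, A1, A2):
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # First companion linearisation: A z = lam * B z with z = [x; lam*x].
    A = np.block([[Z, I], [-A0, -A1]])
    B = np.block([[I, Z], [Z, A2]])
    lam, V = eig(A, B)
    return lam, V[:n]                    # eigenvalues and the x-part of z

rng = np.random.default_rng(0)
A0, A1, A2 = (rng.standard_normal((4, 4)) for _ in range(3))
lam, X = solve_qep(A0, A1, A2)
k = np.argmin(np.abs(lam))               # check one finite eigenpair
residual = (A0 + lam[k] * A1 + lam[k] ** 2 * A2) @ X[:, k]   # ~0
```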

595 citations


"Multiple view 3-D reconstruction in..." refers background or methods in this paper

  • ...We model our camera as an extension of the pin-hole model using insight from the division model in [6] (see the sketch after this list)....

    [...]

  • ...The problem is formulated as an eigenvalue problem with reference to [6], and can be readily solved using standard techniques in algebra....

    [...]

  • ...Reference [6] proposes a solution for finding λ, and a better extension to this is shown in [7], which uses the fact that if the distortion function is as shown above then a straight line is mapped to a circular arc, which can be represented linearly in lifted coordinates....

    [...]
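The division model referred to in these excerpts maps a centred distorted point p_d to p_d / (1 + λ‖p_d‖²). A small sketch of undistorting a point with it; the value of λ and the centring convention are illustrative.

```python
# One-parameter division model referred to above: a distorted (centred)
# image point p_d maps to p_u = p_d / (1 + lam * |p_d|^2).
import numpy as np

def undistort_division(p_d, lam):
    """p_d: 2-vector of centred distorted coordinates; lam: distortion term."""
    r2 = float(np.dot(p_d, p_d))
    return p_d / (1.0 + lam * r2)

p_u = undistort_division(np.array([120.0, -80.0]), lam=-1e-6)  # example values
```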

Journal ArticleDOI
TL;DR: In this article, the authors developed normwise backward errors and condition numbers for the polynomial eigenvalue problem, and showed that solving the QEP by applying the QZ algorithm to a corresponding generalized eigenvalue problem can be backward unstable.

227 citations

01 Jan 2012
TL;DR: The underlying geometry of rays corresponds to an axial camera, and a general theory of calibrating such systems using 2D-3D correspondences is developed, which allows non-linear refinement by minimizing the reprojection error.
Abstract: In this section, we describe in detail the analytical solutions to compute the layer thicknesses and the translation along the axis when the refractive indices are unknown. As shown in the paper, the axis can be computed independently of the layer thicknesses and refractive indices. We assume that the axis A, the rotation R and the translation orthogonal to the axis, tA⊥, have been computed as described in Section 3 of the paper. Our goal is to compute the translation tA along the axis, the layer thicknesses and the refractive indices, using the given 2D-3D correspondences. Let tA = αA, where α is the unknown translation magnitude along the axis. We first apply the computed R and tA⊥ to the 3D points P. Let Pc = R P + tA⊥. The plane of refraction (POR) is obtained from the estimated axis A and the given camera ray v0. Let [z2, z1] denote an orthogonal coordinate system on the POR, with z1 chosen along the axis. For a given camera ray v0, let z2 = z1 × (z1 × v0) be the orthogonal direction. The projection of Pc on the POR is given by u = [z2 · Pc, z1 · Pc]. Similarly, each ray vi on the light-path of v0 can be represented by a 2D vector vpi on the POR, whose components are given by z2 · vi and z1 · vi. Let zp = [0; 1] be a unit 2D vector and ci = vpi · zp. On the plane of refraction, the normal n of the refracting layers is given by n = [0; −1]. Case 1 (single refraction): in this case we have three unknowns, d0, μ1 and α. When the μi are unknown, the ray directions cannot be pre-computed and the flat refraction constraint needs to be written in terms of the camera rays. For Case 1, the flat refraction constraint is given by
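In the single-refraction case described above, the refracted camera ray must pass through the (transformed) 3-D point. A hedged numerical sketch of that geometric statement under assumed values of d0 and μ1; it is a numerical check rather than the paper's analytical flat-refraction constraint.

```python
# Hedged sketch of the single-refraction ("Case 1") geometry described
# above: the camera ray v0 refracts once at a flat interface at distance
# d0 and the refracted ray must pass through the 3-D point. d0 and mu1
# are assumed values.
import numpy as np

d0, mu0, mu1 = 0.1, 1.0, 1.33           # layer distance and refractive indices
n = np.array([0.0, 0.0, -1.0])           # interface normal towards the camera

def refracted_ray(v0):
    """Refract the unit camera ray v0 (camera at the origin) at the plane z = d0."""
    q = v0 * (d0 / v0[2])                        # intersection with the interface
    r = mu0 / mu1
    cos_i = -np.dot(n, v0)
    cos_t = np.sqrt(1.0 - r * r * (1.0 - cos_i * cos_i))
    v1 = r * v0 + (r * cos_i - cos_t) * n        # Snell's law, vector form
    return q, v1 / np.linalg.norm(v1)

v0 = np.array([0.2, 0.1, 1.0])
v0 /= np.linalg.norm(v0)
q, v1 = refracted_ray(v0)
P = q + 2.5 * v1                 # a point on the refracted ray, so the
residual = np.cross(P - q, v1)   # constraint (P - q) x v1 = 0 holds exactly
```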

141 citations


"Multiple view 3-D reconstruction in..." refers background in this paper

  • ...Reference [3] models multiple-layer flat refractive geometry and proposes to calculate the refractive indices and thicknesses of the media with known scene geometry....

    [...]