It is shown that the analytical expressions that have been proposed to assess the accuracy of the transformation parameters provide less accurate bounds than those based on the earlier results of Weng et al. (1989).
Abstract:
Projective homography sits at the heart of many problems in image registration. In addition to many methods for estimating the homography parameters (R.I. Hartley and A. Zisserman, 2000), analytical expressions to assess the accuracy of the transformation parameters have been proposed (A. Criminisi et al., 1999). We show that these expressions provide less accurate bounds than those based on the earlier results of Weng et al. (1989). The discrepancy becomes more critical in applications involving the integration of frame-to-frame homographies and their uncertainties, as in the reconstruction of terrain mosaics and the camera trajectory from flyover imagery. We demonstrate these issues through selected examples.
TL;DR: In this article, a wavefront sensor based on the Talbot effect has been used to detect optical density perturbations along the path of a laser beam, which makes it possible to create adaptive laser communication systems in free space.
TL;DR: A two-step approach for robust projective transformation estimation: the algebraic and geometric distances are used to obtain an initial guess, and three geometry-based refinement methods then refine it.
TL;DR: In this paper, a visual regulation approach based on planar region alignment is proposed to estimate camera ego-motion with a multistage approach, and the motion estimation experiment results show the convergence of the proposed visual regulation.
TL;DR: In this article, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.
TL;DR: The presented approach to error estimation applies to a wide variety of problems that involve least-squares optimization or the pseudoinverse, and shows, among other things, that the errors are very sensitive to the translation direction and the range of the field of view.
TL;DR: This paper presents maximum-likelihood and related approaches to super-resolution, a generative model for feature matching over N views, and a note on the assumptions made in the model.
TL;DR: An uncertainty analysis which includes both the errors in image localization and the uncertainty in the imaging transformation is developed, and the distribution of correspondences can be chosen to achieve a particular bound on the uncertainty.
TL;DR: 3-D shape uncertainty is visualized as ellipsoids overlaid on the 3-D reconstruction, an enhanced visualization that leads to better use of the factorization method in engineering applications.
Q1. Why do the authors refer the reader to section 5 in [4]?
Due to space limitations, the authors refer the reader to Section 5 in [4], where the estimation of the covariance of the homography parameters is given.
Q2. In what two cases do the authors of [4] claim that their solution provides better estimates?
In [4], the authors claim that their solution provides better estimates in two cases: 1) relatively small measurement noise levels, or 2) when the minimum number of N = 4 image correspondences is used in the estimation of the homography.
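As a rough illustration of the minimal N = 4 case, a homography can be estimated with the standard direct linear transform (DLT). This is a generic sketch, not the authors' implementation, and the example homography values are made up:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: estimate H (up to scale) from N >= 4 correspondences src -> dst."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # h = vec(H) is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale so that H[2, 2] = 1

# Minimal N = 4 example with a known (made-up) homography.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [0.001, 0.002, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ph = np.c_[src, np.ones(4)] @ H_true.T   # homogeneous transfer p' ~ H p
dst = ph[:, :2] / ph[:, 2:3]
H_est = estimate_homography(src, dst)    # noise-free, so H_true is recovered
```

With exact (noise-free) correspondences the null space of A is exact and the true homography is recovered; with noisy points the solution minimizes the algebraic distance.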
Q3. What are the limitations of the analysis?
Computation of projective homography from frame-to-frame correspondences has been extensively studied in recent years [5], and analytical uncertainty bounds of the homography parameters and reprojection errors have been proposed [4].
Q4. What is the error bound for the envelopes?
The dashed blue envelope is the ±3σ error bound computed experimentally, and the other two envelopes, in green and red, are derived from the analytical bounds ±3σhc and ±3σho, respectively.
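An experimental ±3σ envelope of this kind can be reproduced in spirit with a short Monte Carlo sketch. Everything below is hypothetical: the homography values, noise level, and point layout are made up, and a plain DLT estimator stands in for whichever estimator the paper actually uses. The empirical band comes from the sample mean and standard deviation of the estimated parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def dlt_homography(src, dst):
    """Plain DLT estimate of H (normalized so H[2, 2] = 1)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical ground truth and noise-free correspondences.
H_true = np.array([[1.1, 0.05, 10.0], [-0.02, 0.95, -5.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50], [25, 75]], float)
ph = np.c_[src, np.ones(len(src))] @ H_true.T
dst = ph[:, :2] / ph[:, 2:3]

sigma = 0.5  # image-noise standard deviation, in pixels (made up)
samples = []
for _ in range(1000):
    H = dlt_homography(src, dst + rng.normal(0.0, sigma, dst.shape))
    samples.append(H.ravel())
samples = np.asarray(samples)

# Experimental +/-3 sigma envelope for each of the 9 homography parameters.
mean, std = samples.mean(axis=0), samples.std(axis=0)
envelope = np.c_[mean - 3 * std, mean + 3 * std]
```

Analytical bounds such as ±3σhc and ±3σho would then be compared against this empirical band.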
Q5. What is the covariance of the homography?
The authors construct matching pairs {p, p′} based on a pre-specified homography H; they use the well-known interpretation H = R + t n^T in terms of the motion {R, t} of a camera relative to a planar scene with surface normal n = [−P, −Q, 1]/Zo, where P and Q control the surface slant and tilt angles, and Zo is its distance from the camera.
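This construction can be sketched directly: build H = R + t n^T from the plane parameters and transfer points through it to form the matching pairs. The rotation, translation, and slant/tilt values below are illustrative, not taken from the paper:

```python
import numpy as np

def plane_homography(R, t, P, Q, Zo):
    """Plane-induced homography H = R + t n^T, with plane normal
    n = [-P, -Q, 1] / Zo (P, Q set slant/tilt; Zo is the plane depth)."""
    n = np.array([-P, -Q, 1.0]) / Zo
    return R + np.outer(t, n)

def transfer(H, pts):
    """Form matching pairs: map points p to p' = H p, then dehomogenize."""
    ph = np.c_[pts, np.ones(len(pts))] @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Hypothetical camera motion: 2-degree rotation about the y-axis plus a translation.
th = np.deg2rad(2.0)
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.1, 0.0, 0.02])
H = plane_homography(R, t, P=0.2, Q=0.1, Zo=5.0)

p = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.15], [0.0, 0.0]])
p_prime = transfer(H, p)  # matching pairs are {p[i], p_prime[i]}
```

Note that with zero translation the plane parameters drop out and H reduces to the rotation R, as the decomposition requires.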
Q6. How do the authors determine the covariance of the homography parameters?
For small variations, max{δqi} << 1, where qi denotes the i-th element of q, it can be shown [6, 7] that, up to first order, the eigenvalues and eigenvectors of Q vary according to δλi = ei^T δQ ei and δei = Σ_{j≠i} (ej^T δQ ei)/(λi − λj) ej, where (λi, ei) is the i-th eigenvalue/eigenvector pair of Q and δQ is the induced perturbation of Q.
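The first-order eigenvalue result can be checked numerically. The symmetric matrix below is a random stand-in for Q, not the paper's actual matrix, and the perturbation size is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric matrix Q and a small symmetric perturbation dQ (illustrative values).
B = rng.normal(size=(4, 4))
Qm = B + B.T
C = rng.normal(size=(4, 4))
dQ = 1e-6 * (C + C.T)

lam, E = np.linalg.eigh(Qm)             # eigenvalues (ascending) and eigenvectors
lam_pert, _ = np.linalg.eigh(Qm + dQ)   # eigenvalues of the perturbed matrix

# First-order prediction for the eigenvalue shifts: dlam_i = e_i^T dQ e_i.
dlam_pred = np.diag(E.T @ dQ @ E)
```

For a perturbation of size 1e-6 the prediction agrees with the exact shifts up to second-order terms, which are smaller by roughly another factor of 1e-6.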
Q7. What is the purpose of this paper?
The ability not only to estimate the transformation between frames but also to assess the confidence in those estimates is important in many applications involving motion estimation from video imagery.