Abstract:
Projective homography sits at the heart of many problems in image registration. In addition to many methods for estimating the homography parameters (R.I. Hartley and A. Zisserman, 2000), analytical expressions to assess the accuracy of the transformation parameters have been proposed (A. Criminisi et al., 1999). We show that these expressions provide less accurate bounds than those based on the earlier results of Weng et al. (1989). The discrepancy becomes more critical in applications involving the integration of frame-to-frame homographies and their uncertainties, as in the reconstruction of terrain mosaics and the camera trajectory from flyover imagery. We demonstrate these issues through selected examples.
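As background for the estimation step the abstract refers to, the following is a minimal direct linear transform (DLT) sketch in numpy, illustrating the generic technique described in (Hartley and Zisserman, 2000) rather than the specific estimator evaluated in the paper; the point layout and homography values are illustrative assumptions:

```python
import numpy as np

def dlt_homography(p, q):
    """Direct linear transform: estimate H with q ~ H p from N >= 4
    correspondences (p, q each Nx2).  Each pair contributes two rows
    of A h = 0; h is the right singular vector associated with the
    smallest singular value of A."""
    A = []
    for (x, y), (u, v) in zip(p, q):
        A.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    H = np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

# Noise-free check: recover a known (assumed) homography from 5 points.
H_true = np.array([[1.1, 0.02, 3.0],
                   [0.01, 0.95, -2.0],
                   [1e-4, 2e-4, 1.0]])
p = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], float)
ph = np.hstack([p, np.ones((5, 1))]) @ H_true.T
q = ph[:, :2] / ph[:, 2:]
print(np.allclose(dlt_homography(p, q), H_true, atol=1e-8))
```

With noise-free correspondences the null vector of the stacked system recovers H exactly up to scale; with noisy points the smallest-singular-value vector gives a least-squares estimate, and it is the covariance of that estimate that the analytical expressions discussed here aim to bound.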
TL;DR: A new method for estimating and maintaining over time the pose of a single Pan-Tilt-Zoom (PTZ) camera is proposed: a keypoint database of the scene is first built offline; a coarse localization is then obtained from camera odometry and finally refined by visual landmark matching.
TL;DR: The stepping/impact angle is proposed as the metric that quantifies how much stepping affected the direction of the fall and is used in two real fall events as demonstrative cases.
TL;DR: The camera motion model is derived and represented in two parts, a multistage ego-motion estimation process is introduced, and experiments with real images demonstrate the robustness of the proposed motion estimation method.
TL;DR: This paper assumes that the mobile robot carrying the visual observer moves on flat ground, uses a constrained camera-robot configuration to simplify the camera motion model, and introduces a multistage process for ego-motion estimation.
TL;DR: In this article, the authors propose to extract a constant number of keypoints that have the highest response for their respective frame, and find that the shapes of the resulting graphs are relatively invariant to the chosen threshold values.
TL;DR: In this article, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.
TL;DR: The presented approach to error estimation applies to a wide variety of problems that involve least-squares optimization or the pseudoinverse, and shows, among other things, that the errors are very sensitive to the translation direction and the range of the field of view.
TL;DR: This paper presents "Super-resolution: Maximum Likelihood and Related Approaches", a generative model for feature matching over N views, together with a note on the assumptions made in the model.
TL;DR: An uncertainty analysis which includes both the errors in image localization and the uncertainty in the imaging transformation is developed, and the distribution of correspondences can be chosen to achieve a particular bound on the uncertainty.
TL;DR: 3-D shape uncertainty is visualized as ellipsoids overlaid on the 3-D reconstruction, an enhanced visualization that leads to better use of the factorization method in engineering applications.
Q1. Why do the authors refer the reader to section 5 in [4]?
Due to space limitations, the authors refer the reader to section 5 in [4], where the estimation of the covariance of the homography parameters is given.
Q2. In which cases do the authors claim that their solution provides better estimates?
In [4], the authors claim that their solution provides better estimates in two cases: 1) at relatively small measurement noise levels, or 2) when the minimum number N = 4 of image correspondences is used in the estimation of the homography.
Q3. What are the limitations of the analysis?
Computation of projective homography from frame-to-frame correspondences has been extensively studied in recent years [5], and analytical uncertainty bounds of the homography parameters and reprojection errors have been proposed [4].
Q4. What is the error bound for the envelopes?
The dashed blue envelope is the ±3σ error bound computed experimentally, and the other two envelopes, in green and red, are derived from the analytical bounds ±3σhc and ±3σho, respectively.
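An experimental ±3σ bound of the kind described above is typically obtained by Monte Carlo: perturb the inputs with noise, re-run the estimation or transfer, and take three times the sample standard deviation. The sketch below compares such an experimental bound against a first-order analytical bound for the simpler case of transferring a single point through a fixed homography under assumed isotropic pixel noise; it is an illustration of the methodology, not the paper's homography-parameter covariance, and all numeric values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed homography, point, and noise level (illustrative only).
H = np.array([[1.1, 0.02, 3.0],
              [0.01, 0.95, -2.0],
              [1e-4, 2e-4, 1.0]])
p = np.array([2.0, 3.0])
sigma = 0.5  # isotropic pixel noise on the input point

def transfer(H, p):
    x = H @ np.array([p[0], p[1], 1.0])
    return x[:2] / x[2]

# Analytical 3-sigma bound: first-order propagation through the
# Jacobian of the projective transfer (u, v) = (a/c, b/c).
a, b, c = H @ np.array([p[0], p[1], 1.0])
J = np.array([[H[0, 0] * c - H[2, 0] * a, H[0, 1] * c - H[2, 1] * a],
              [H[1, 0] * c - H[2, 0] * b, H[1, 1] * c - H[2, 1] * b]]) / c**2
cov_analytic = sigma**2 * (J @ J.T)

# Experimental 3-sigma bound: Monte Carlo over noisy inputs.
samples = np.array([transfer(H, p + sigma * rng.standard_normal(2))
                    for _ in range(20000)])
cov_mc = np.cov(samples.T)

print(3 * np.sqrt(np.diag(cov_analytic)))  # analytical +/- 3 sigma
print(3 * np.sqrt(np.diag(cov_mc)))        # experimental +/- 3 sigma
```

When the transfer is nearly linear over the noise scale, as here, the two bounds agree closely; larger discrepancies of the kind the paper reports arise when the first-order approximation underlying an analytical bound is less accurate.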
Q5. What is the covariance of the homography?
The authors construct matching pairs {p, p′} based on a pre-specified homography H; the authors use the well-known interpretation H = R + t nᵀ in terms of the motion {R, t} of a camera relative to a planar scene with surface normal n = [−P, −Q, 1]/Z0, where P and Q control the surface slant and tilt angles, and Z0 its distance from the camera.
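The construction above can be sketched as follows: choose a motion {R, t} and plane parameters (P, Q, Z0), form the plane-induced homography H = R + t nᵀ, and transfer sampled image points through it to obtain matching pairs. The specific rotation, translation, and plane values below are assumptions for illustration, not the paper's experiment settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def plane_homography(R, t, P, Q, Z0):
    """H = R + t n^T with n = [-P, -Q, 1] / Z0: the homography induced
    by a plane with slant/tilt parameters P, Q at distance Z0, under
    camera motion {R, t}."""
    n = np.array([-P, -Q, 1.0]) / Z0
    return R + np.outer(t, n)

# Assumed motion: small in-plane rotation plus a small translation.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, 0.05, 0.02])
H = plane_homography(R, t, P=0.2, Q=0.1, Z0=10.0)

# Matching pairs {p, p'}: sample image points and transfer through H.
p = rng.uniform(-1, 1, size=(10, 2))
ph = np.hstack([p, np.ones((10, 1))]) @ H.T
p_prime = ph[:, :2] / ph[:, 2:]
```

Noise is then typically added to p′ to produce the perturbed correspondences from which the homography and its covariance are re-estimated.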
Q6. How do the authors determine the covariance of the homography parameters?
For small variations – max{δq_i} ≪ 1, where q_i denotes the i-th element of q – it can be shown [6, 7] that, up to first order, the eigenvalues and eigenvectors of Q vary according to δλ_i = v_iᵀ δQ v_i and δv_i = Σ_{j≠i} [(v_jᵀ δQ v_i)/(λ_i − λ_j)] v_j, where (λ_i, v_i) denote the eigenpairs of Q.
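The first-order eigenvalue perturbation result δλ_i ≈ v_iᵀ δQ v_i for a symmetric matrix Q (standard matrix perturbation theory, as cited in [6, 7]) can be checked numerically; the matrix sizes and perturbation scale below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric Q with eigenpairs (lam[i], V[:, i]), V orthonormal.
A = rng.standard_normal((5, 5))
Q = A + A.T
lam, V = np.linalg.eigh(Q)

# Small symmetric perturbation dQ (so eigenvalue ordering is preserved).
B = 1e-6 * rng.standard_normal((5, 5))
dQ = B + B.T

# Exact eigenvalue change vs first-order prediction v_i^T dQ v_i.
d_exact = np.linalg.eigh(Q + dQ)[0] - lam
d_first_order = np.diag(V.T @ dQ @ V)

# The residual is second-order small in ||dQ||.
print(np.max(np.abs(d_exact - d_first_order)))
```

The same machinery underlies propagating the covariance of the homography parameters q into the covariance of derived quantities, since the first-order terms are linear in the perturbation δQ.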
Q7. What is the purpose of this paper?
Ability to not only estimate the transformation between frames but also to assess the confidence in these estimates is important in many applications involving motion estimation from video imagery.