Using Sparse Elimination for Solving Minimal Problems in Computer Vision
Citations
Minimal Case Relative Pose Computation Using Ray-Point-Ray Features
Minimal Solutions to Relative Pose Estimation From Two Views Sharing a Common Direction With Unknown Focal Length
A Sparse Resultant Based Method for Efficient Minimal Solvers
Computing stable resultant-based minimal solvers by hiding a variable
Computational Methods for Computer Vision: Minimal Solvers and Convex Relaxations
References
A flexible new technique for camera calibration
An efficient solution to the five-point relative pose problem
Simultaneous linear estimation of multiple view geometry and lens distortion
Motion and structure from motion in a piecewise planar environment
Analysis and solutions of the three point perspective pose estimation problem
Frequently Asked Questions (18)
Q2. What are the contributions mentioned in the paper "Using sparse elimination for solving minimal problems in computer vision" ?
In this paper, the authors propose a new algorithm for selecting the basis that is, in general, more compact than the basis obtained with a state-of-the-art algorithm, making PEP a more viable option for solving polynomial equations. Another contribution is two minimal problems for camera self-calibration based on homographies; the authors demonstrate experimentally, using synthetic and real data, that their algorithm provides a numerically stable solution for the camera focal length from two homographies of an unknown planar scene.
Q3. How did Herrera and colleagues initialize the solver?
The solver was initialized by assuming that the reference view is fronto-parallel, in which case an initial value for the focal length is easy to compute.
Q4. How many constraints are needed for the camera pose?
Five constraints are needed for the camera pose (3 for rotation and 2 for translation up to scale), two for the plane normal n, and one for the perspective scaling factor.
Q5. What is the common problem that is solved using linear algebra?
A resultant-based algorithm for transforming a system of polynomial equations to a polynomial eigenvalue problem (PEP) was proposed in [13] that enabled solving several minimal relative pose problems using linear algebra.
Q6. What is the main disadvantage of the resultant-based approach for solving the polynomial?
The main disadvantage of the resultant-based approach for solving the polynomial equations is that it requires computing the determinant of a matrix which often has high dimensions.
Q7. Why is computing the determinant of an N × N matrix infeasible?
Because the determinant of an N × N matrix has N! terms, solving for the unknowns from the resultant often becomes computationally infeasible even for relatively small problems.
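The factorial blow-up is easy to verify numerically; this short sketch just prints N! for a few values of N:

```python
import math

# A Laplace (cofactor) expansion of an N x N determinant has N! terms,
# so symbolic expansion becomes intractable even for modest N.
for n in (5, 10, 15, 20):
    print(n, math.factorial(n))
# 20! is already 2432902008176640000 (~2.4e18 terms).
```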
Q8. How many monomials are produced in the first calibration problem?
Since d = 8 and n = 3 in the first calibration problem (equal focal length), the authors get 45 basis monomials, which is more than twice the number of monomials produced by their method.
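The count 45 is consistent with the standard formula for the number of degree-d monomials in n variables, C(d+n−1, n−1); a quick check (assuming this is the counting used in the paper, since the text does not state the formula explicitly):

```python
from math import comb

def num_monomials(d, n):
    """Number of degree-d monomials in n variables: C(d+n-1, n-1)."""
    return comb(d + n - 1, n - 1)

print(num_monomials(8, 3))   # 45   (Q8: d = 8, n = 3)
print(num_monomials(17, 4))  # 1140 (Q18: d = 17, n = 4)
```

The same formula reproduces the 1140 monomials mentioned for the unimplemented case with d = 17 and n = 4.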
Q9. What are the main techniques used to solve systems of polynomials?
In addition to the Gröbner basis techniques and resultants, systems of polynomial equations can be often solved using eigenvalues and eigenvectors.
Q10. What is the definition of a polynomial eigenvalue problem?
The polynomial eigenvalue problem (PEP) is an extension of the standard eigenvalue problem (C − λI)v = 0 to a system of polynomials represented by the matrix equation (C0 + C1λ + C2λ^2 + · · · + Clλ^l)v = 0, (1) where l is the highest degree of the polynomials in the variable λ to be solved, v is a vector of monomials in the variables other than λ, and C0, . . . , Cl are coefficient matrices.
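A PEP of the form (1) is commonly solved by companion linearization, which turns it into a standard eigenvalue problem of size l·n. The sketch below illustrates this on a toy quadratic PEP (it assumes the leading matrix Cl is invertible; `solve_pep` is an illustrative helper, not an API from the paper):

```python
import numpy as np

def solve_pep(Cs):
    """Solve (C0 + C1*lam + ... + Cl*lam^l) v = 0 by companion
    linearization, assuming the leading matrix Cl is invertible."""
    l = len(Cs) - 1
    n = Cs[0].shape[0]
    Cl_inv = np.linalg.inv(Cs[-1])
    # Block companion matrix of size (l*n) x (l*n) acting on [v; lam*v; ...].
    comp = np.zeros((l * n, l * n))
    comp[: (l - 1) * n, n:] = np.eye((l - 1) * n)
    for k in range(l):
        comp[(l - 1) * n :, k * n : (k + 1) * n] = -Cl_inv @ Cs[k]
    return np.linalg.eigvals(comp)

# Toy quadratic PEP with diagonal blocks: the eigenvalues are the roots
# of lam^2 - 3*lam + 2 and lam^2 - 5*lam + 6, i.e. {1, 2, 2, 3}.
C0 = np.diag([2.0, 6.0])
C1 = np.diag([-3.0, -5.0])
C2 = np.eye(2)
lams = np.sort(np.real(solve_pep([C0, C1, C2])))
print(lams)  # ≈ [1. 2. 2. 3.]
```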
Q11. What is the smallest set of support vectors for the basis monomials?
The total number of new equations ∑i |Ei| is greater than or equal to the number of basis monomials, which means that the coefficient matrices have at least as many rows as columns.
Q12. What is the approach to solving a polynomial eigenvalue problem?
One approach is to convert the classical multipolynomial resultant to a standard eigenvalue problem [5], [4], which, however, works only with dense polynomials.
Q13. How many unknowns are needed to solve the camera pose?
There are now 3 + k unknowns (λ0, . . . , λk, nx, and ny) but only two equations, which means the problem cannot be solved from a single homography; the authors need at least two homographies (i = 1, 2), leading to four equations in four unknowns.
Q14. How many zero monomials did the authors remove from the equations?
Instead, the authors selected the equations randomly and used their SVD-based elimination scheme to remove 11 zero monomials, which helped to improve the stability of the method.
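The paper's exact SVD-based scheme is not spelled out in this summary, but the underlying idea can be sketched: combinations of the equations that cancel chosen monomial columns span the left null space of those columns of the coefficient matrix, which an SVD computes in a numerically stable way. A minimal sketch under that assumption (`eliminate_monomials` is a hypothetical helper name):

```python
import numpy as np

def eliminate_monomials(A, cols, tol=1e-10):
    """Return combinations of the equations (rows of A) in which the
    monomial columns `cols` are eliminated. The combinations span the
    left null space of A[:, cols], found via SVD."""
    U, s, _ = np.linalg.svd(A[:, cols])
    rank = int(np.sum(s > tol))
    N = U[:, rank:].T          # left null space of the selected columns
    return N @ A               # new equations with (numerical) zeros in `cols`

# Toy coefficient matrix: 3 equations in the monomials [x^2, x, 1].
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
B = eliminate_monomials(A, [0])
print(np.abs(B[:, 0]).max())  # ≈ 0: the x^2 column has been eliminated
```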
Q15. What is the common approach for solving polynomial equations?
An alternative approach that is also commonly used for solving polynomial equations is the multipolynomial resultant, which provides an efficient tool for eliminating variables from multiple equations and solving the remaining variable as a root of a univariate polynomial [4].
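In the simplest (univariate) case this idea reduces to the Sylvester resultant: the determinant of the Sylvester matrix of two polynomials vanishes exactly when they share a root. A small sketch of that classical construction (not the paper's sparse multivariate machinery):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of two polynomials given as coefficient lists
    (highest degree first); its determinant is the resultant."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                    # n shifted copies of p
        S[i, i : i + m + 1] = p
    for i in range(m):                    # m shifted copies of q
        S[n + i, i : i + n + 1] = q
    return S

p = [1.0, -3.0, 2.0]  # x^2 - 3x + 2 = (x - 1)(x - 2)
q = [1.0, -1.0]       # x - 1 shares the root x = 1 with p
print(np.linalg.det(sylvester(p, q)))          # ≈ 0: common root
print(np.linalg.det(sylvester(p, [1.0, -3.0])))  # = p(3) = 2: no common root
```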
Q16. What is the disadvantage of using a Gröbner basis solver?
Another limitation of this approach is that implementing a Gröbner basis solver for a given problem requires expertise in algebraic geometry because the solver needs to be handcrafted in practice to make it efficient.
Q17. Why is the accuracy of the solver higher than in the first problem?
One possible reason for this is that in the second problem the solver uses 4 constraints in contrast to the 3 constraints of the first problem.
Q18. Why did the authors not implement the modified resultant-based method?
The authors did not implement the modified resultant-based method because in this case d = 17 and n = 4, which would give 1140 monomials, almost six times more than with their method; it is clear that the results would be inferior.