Using Sparse Elimination for Solving Minimal Problems in Computer Vision
Summary (2 min read)
1. Introduction
- Many camera pose estimation and calibration problems boil down to solving a system of polynomial equations.
- Fitzgibbon [8] augmented fundamental matrix estimation to include one term of radial lens distortion, and solved both from 9 point correspondences.
- One drawback of this approach is that when the polynomial degrees are high, it often suffers from numerical inaccuracies.
2. Polynomial eigenvalue problems
- C0, . . . , Cl are m × m square matrices that contain the coefficients of the polynomials.
- Since most mathematical software libraries and packages can solve this problem, it becomes easy to find all the roots for λ.
- Notice that this procedure also increases the number of monomials in v and hence the dimensions of the coefficient matrices.
- Therefore, they proposed a small modification to the resultant-based approach that yields a larger number of polynomial equations, which increases the chances of obtaining a linearly independent set of basis monomials.
- Therefore, this approach is feasible only for small systems of equations and low polynomial degrees.
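The PEP described above can be reduced to an ordinary generalized eigenvalue problem by the standard companion linearization, which is what makes it solvable with off-the-shelf linear algebra. Below is a minimal NumPy/SciPy sketch of that reduction (an illustration of the standard technique, not the authors' implementation); the function name `solve_pep` is ours:

```python
import numpy as np
from scipy.linalg import eig

def solve_pep(coeffs):
    """Solve (C0 + C1*lam + ... + Cl*lam^l) v = 0 by companion
    linearization to a generalized eigenvalue problem A z = lam B z,
    where z stacks v, lam*v, ..., lam^(l-1)*v."""
    l = len(coeffs) - 1          # highest degree in lam
    m = coeffs[0].shape[0]       # size of each coefficient matrix
    n = l * m
    A = np.zeros((n, n))
    B = np.eye(n)
    # Upper identity blocks enforce z_{k+1} = lam * z_k
    A[: n - m, m:] = np.eye(n - m)
    # Last block row encodes the polynomial itself
    for k in range(l):
        A[n - m :, k * m : (k + 1) * m] = -coeffs[k]
    B[n - m :, n - m :] = coeffs[l]
    lam, Z = eig(A, B)
    return lam, Z[:m, :]         # first block of z is the monomial vector v
```

For a scalar quadratic such as λ² − 3λ + 2 = 0 (C0 = [2], C1 = [−3], C2 = [1]), the eigenvalues returned are its roots 1 and 2.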
3. Determining basis monomials
- Most of the polynomial equations encountered in computer vision are sparse, and therefore classical multivariate resultants are not well-suited for generating the basis monomials.
- Next the authors discuss finding the basis monomials for the polynomial eigenvalue problem, i.e., the elements of v in (1).
- The main disadvantage of the resultant-based approach for solving the polynomial equations is that it requires computing the determinant of a matrix which often has high dimensions.
- Due to the relaxed requirements, the authors can try to find a smaller set of basis monomials than (7) defined for the sparse resultant.
- It should be noticed that the mixed volume is computed for the original system (3).
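Sparse elimination works with the Newton polytopes of the input polynomials — the convex hulls of their exponent vectors — and with Minkowski sums of those polytopes (the citation excerpts below describe Heikkilä's algorithm in exactly these terms). As an illustrative sketch of those two ingredients only (not of the paper's basis-selection algorithm, and omitting the mixed-volume computation itself):

```python
import numpy as np
from scipy.spatial import ConvexHull

def newton_polytope(exponents):
    """Vertices of the Newton polytope: the convex hull of the
    exponent vectors of a polynomial's monomials.
    Note: ConvexHull requires the points to be full-dimensional."""
    pts = np.asarray(exponents, dtype=float)
    hull = ConvexHull(pts)
    return pts[hull.vertices]

def minkowski_sum(P, Q):
    """Minkowski sum of two polytopes given by vertex sets:
    all pairwise vertex sums, then the convex hull of the result."""
    S = np.array([p + q for p in P for q in Q])
    hull = ConvexHull(S)
    return S[hull.vertices]
```

For example, the Minkowski sum of the unit square with itself is the square [0, 2]², again with four vertices.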
4. Planar self-calibration
- To demonstrate the applicability of their algorithm, the authors present two minimal problems for solving the camera focal length from two homographies corresponding to three images of an unknown planar pattern, which makes this a self-calibration problem.
- Because there are now three unknowns, only three constraints are needed.
- The authors obtain 135 putative bases B (62 valid), and from those they select the smallest one that has at least ceil(70/4) = 18 monomials and produces at least 18 equations.
- Hence, besides the actual 70 roots, the authors get 12 spurious roots, which can be found, for example, by substituting the obtained solutions into the original equations.
- Interestingly, the authors cannot solve the focal length of the first camera, λ0, in this case: the variety is no longer zero-dimensional, and a single reference image does not provide enough constraints for λ0, although it can still be used to solve λ1.
5. Experiments
- The authors show experimentally that the self-calibration methods presented in Section 4 give numerically stable results with both synthetic and real data.
- The authors randomly picked 4 points from the dataset and computed the homographies.
- The random sampling for images and points is again repeated 30,000 times to get statistically reliable results.
- The relative errors are plotted in Fig. 3 (a) both for the noiseless case and for σ = 0.5 using the simulated data.
6. Conclusions
- The authors have proposed a new algorithm for selecting the monomial basis for polynomial eigenvalue problems, based on the sparse elimination techniques previously used for constructing sparse resultants.
- The authors' approach has two important advantages over sparse resultants: 1) the solution is provided by eigenvalues instead of the roots of a high-order determinant, and 2) the coefficient matrices do not need to be of full rank, unlike with sparse resultants, which simplifies the algorithm and often leads to a more compact basis.
- In contrast to the modified resultant-based method [13], their algorithm can exploit the sparsity of the polynomials, which is a common property in real-world problems.
- As a result, the monomial basis becomes smaller, and it is the same only in the limiting case where the polynomials are dense.
- The authors also presented two new minimal problems for camera self-calibration, and demonstrated that their algorithm can provide numerically more stable results than the modified resultant-based method.
Citations
25 citations
Cites methods from "Using Sparse Elimination for Solvin..."
...Successful applications of the resultant method in computer vision can be found in [36], [37]....
[...]
19 citations
Cites methods from "Using Sparse Elimination for Solvin..."
...Polynomial eigenvalue methods have been successfully used for many minimal problems in computer vision, such as the 9-point one-parameter radial distortion problem [11], the 5- and 6-point relative pose problems [21], the 6-point one unknown focal length problem [4], and the self-calibration problems [17]....
[...]
16 citations
Cites background or methods from "Using Sparse Elimination for Solvin..."
...The most promising results in this direction were proposed by Emiris [12] and Heikkilä [18], where methods based on sparse resultants were proposed and applied to camera geometry problems....
[...]
...The augmented polynomial system is solved by hiding λ and reducing a constraint similar to (2) into a regular eigenvalue problem that leads to smaller solvers than [12, 18]....
[...]
...Our algorithm is inspired by the ideas explored in [18, 12], but thanks to the special form of added equation and by solving the resultant as a small eigenvalue problem, in contrast to a polynomial eigenvalue problem in [18], the new approach achieves significant improvements over [18, 12] in terms of efficiency of the generated solvers....
[...]
...The algorithm by Heikkilä [18] basically computes the Minkowski sum of the Newton polytopes of a subset of input polynomials, Q = ΣiNP(fi(x))....
[...]
7 citations
Cites background or methods from "Using Sparse Elimination for Solvin..."
...based algorithms [24], [26] that can generate smaller and stable solvers for general polynomial systems....
[...]
...Additionally, we propose several improvements to the previously published sparse resultant method [24]....
[...]
...Existing resultant-based solvers are mostly handcrafted and tailored to a particular problem, are not exploiting a sparsity of the systems [8] or can not be directly applied to general minimal problems [24]....
[...]
...Another sparse hidden variable resultant based approach has been proposed by Heikkilä in [24]....
[...]
...Next we list shortcomings of the previous methods [24]–[26]:...
[...]
6 citations
Cites methods from "Using Sparse Elimination for Solvin..."
...This was recently extended by Heikkila [86] using techniques for constructing sparse resultants [205, 56]....
[...]
References
88 citations
"Using Sparse Elimination for Solvin..." refers methods in this paper
...[3] have proposed a generalization of the Gröbner basis method for improving the numerical stability....
[...]
84 citations
"Using Sparse Elimination for Solvin..." refers background in this paper
...Kukelova and Pajdla [15] used an additional constraint to solve the same problem from 8 point correspondences, and Jiang et al....
[...]
...Kukelova and Pajdla [15] used an additional constraint to solve the same problem from 8 point correspondences, and Jiang et al. [11] added still one constraint and they were able to solve the problem from 7 point correspondences....
[...]
82 citations
"Using Sparse Elimination for Solvin..." refers background or methods in this paper
...In contrast to the modified resultant-based method [13] our algorithm can exploit the sparsity of the polynomials that is a common property in real-world problems....
[...]
...Notice that this is a looser condition than in [13] where they assumed that either Cl or C0 must be of full rank and invertible....
[...]
...We compared our method to the modified resultant-based method [13] which generates ( n+d−1...
[...]
...The trick employed in [13] is to generate new equations by multiplying the initial equations with monomials produced by their algorithm....
[...]
...In [13] they used the classical Macaulay resultant formulation for creating the set of basis monomials...
[...]
81 citations
"Using Sparse Elimination for Solvin..." refers background in this paper
...Minimal solutions to panorama stitching in [1] and [2] assume that the camera centers coincide which reduces the motion to pure rotation....
[...]
Frequently Asked Questions (18)
Q2. What are the contributions mentioned in the paper "Using sparse elimination for solving minimal problems in computer vision" ?
In this paper, the authors propose a new algorithm for selecting the basis that is in general more compact than the basis obtained with a state-of-the-art algorithm making PEP a more viable option for solving polynomial equations. Another contribution is that the authors present two minimal problems for camera self-calibration based on homography, and demonstrate experimentally using synthetic and real data that their algorithm can provide a numerically stable solution to the camera focal length from two homographies of unknown planar scene.
Q3. How did Herrera and colleagues initialize the solver?
The solver was initialized by assuming that the reference view is fronto-parallel, in which case it becomes easy to compute an initial value for the focal length.
Q4. How many constraints are needed for the camera pose?
Five constraints are needed for the camera pose (3 for rotation and 2 for translation up to scale), the normal of the plane n requires two constraints, and one is needed for the perspective scaling factor.
Q5. What is the common problem that is solved using linear algebra?
A resultant-based algorithm for transforming a system of polynomial equations to a polynomial eigenvalue problem (PEP) was proposed in [13] that enabled solving several minimal relative pose problems using linear algebra.
Q6. What is the main disadvantage of the resultant-based approach for solving the polynomial?
The main disadvantage of the resultant-based approach for solving the polynomial equations is that it requires computing the determinant of a matrix which often has high dimensions.
Q7. Why is computing the determinant of an N × N matrix often infeasible?
Because the determinant of an N × N matrix has N! terms, solving the unknowns from the resultant often becomes computationally infeasible even for relatively small problems.
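To make the factorial growth concrete, the Leibniz expansion of an N × N determinant sums one product per permutation of the columns, i.e. N! terms in total:

```python
from math import factorial

# Number of terms in the Leibniz expansion of an N x N determinant.
for N in (5, 10, 20):
    print(N, factorial(N))
```

Already at N = 20 there are about 2.4 × 10^18 terms, which is why symbolic resultant determinants quickly become intractable.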
Q8. How many monomials are produced in the first calibration problem?
Since d = 8 and n = 3 in the first calibration problem (equal focal length), one gets 45 basis monomials, which is more than twice the number of monomials produced by the authors' method.
Q9. What are the main techniques used to solve systems of polynomials?
In addition to the Gröbner basis techniques and resultants, systems of polynomial equations can be often solved using eigenvalues and eigenvectors.
Q10. What is the definition of a polynomial eigenvalue problem?
Polynomial eigenvalue problem (PEP) is an extension of the standard eigenvalue problem (C − λI)v = 0 to a system of polynomials represented with the matrix equation (C0 + C1λ + C2λ^2 + · · · + Clλ^l)v = 0 (1), where l is the highest degree of the polynomials in the variable λ that the authors want to solve, v is a vector of monomials in the variables other than λ, and C0, . . . , Cl are m × m square matrices that contain the coefficients of the polynomials.
Q11. What is the smallest set of support vectors for the basis monomials?
The total number of new equations ∑i |Ei| is greater than or equal to the number of basis monomials, which means that the coefficient matrices have at least as many rows as columns.
Q12. What is the approach to solving a polynomial eigenvalue problem?
One approach is to convert the classical multipolynomial resultant to a standard eigenvalue problem [5], [4], which, however, works only with dense polynomials.
Q13. How many unknowns are needed to solve the camera pose?
There are now 3 + k unknowns λ0, . . . , λk, nx and ny, and two equations, which means that the authors cannot solve the problem from a single homography; they need at least two homographies (i = 1, 2), which lead to four equations with four unknowns.
Q14. How many zero monomials did the authors remove from the equations?
Instead, the authors selected the equations randomly and used their SVD-based elimination scheme to remove 11 zero monomials, which helped to improve the stability of the method.
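The summary does not detail the authors' SVD-based elimination scheme, so the following is only a hypothetical sketch of one natural reading: drop monomial columns of the coefficient matrix that are numerically zero, judged relative to the largest singular value. The function name `drop_zero_monomials` and the tolerance are our assumptions:

```python
import numpy as np

def drop_zero_monomials(C, tol=1e-12):
    """Drop monomial columns of a coefficient matrix that are
    numerically zero, judged relative to the matrix's largest
    singular value. A rough sketch only; the paper's actual
    SVD-based elimination scheme may differ."""
    scale = np.linalg.svd(C, compute_uv=False)[0]
    keep = np.linalg.norm(C, axis=0) > tol * scale
    return C[:, keep], np.flatnonzero(keep)
```

Removing such columns shrinks the monomial basis without discarding any information, since the dropped monomials never appear with a non-negligible coefficient.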
Q15. What is the common approach for solving polynomial equations?
An alternative approach that is also commonly used for solving polynomial equations is the multipolynomial resultant, which provides an efficient tool for eliminating variables from multiple equations and solving the remaining variable as a root of a univariate polynomial [4].
Q16. What is the disadvantage of using a Gröbner basis solver?
Another limitation of this approach is that implementing a Gröbner basis solver for a given problem requires expertise in algebraic geometry because the solver needs to be handcrafted in practice to make it efficient.
Q17. Why is the accuracy of the solver higher than in the first problem?
One possible reason for this is that in the second problem the solver uses 4 constraints in contrast to the 3 constraints of the first problem.
Q18. Why did the authors not implement the modified resultant-based method?
The authors did not implement the modified resultant-based method, because in this case d = 17 and n = 4 would give 1140 monomials, almost 6 times more than with their method, and it is clear that the results would be inferior.