Open Access Proceedings Article, pp. 2383-2392
Efficient Solvers for Minimal Problems by Syzygy-based Reduction

Viktor Larsson    Kalle Åström    Magnus Oskarsson
Centre for Mathematical Sciences
Lund University
{viktorl,kalle,magnuso}@maths.lth.se
Abstract

In this paper we study the problem of automatically generating polynomial solvers for minimal problems. The main contribution is a new method for finding small elimination templates by making use of the syzygies (i.e. the polynomial relations) that exist between the original equations. Using these syzygies we can essentially parameterize the set of possible elimination templates.

We evaluate our method on a wide variety of problems from geometric computer vision and show improvement compared to both handcrafted and automatically generated solvers. Furthermore, we apply our method to two previously unsolved relative orientation problems.
1. Introduction

One of the success stories of computer vision is the use of robust estimation schemes such as RANSAC [13] in multiple view geometry estimation. With a hypothesis-and-test framework, one can efficiently handle large amounts of outliers in the measured data. A key element in such a framework is the ability to model the problem using a small or minimal subset of data points, a so-called minimal problem. A classic example is the 5-point algorithm for estimating the relative pose between two cameras, given only image point measurements [25, 12, 41, 47]. The underlying problems in multiple view geometry naturally lead to systems of polynomial equations in several variables. In order to devise tractable algorithms, robust and fast solvers of polynomial equations are needed. The predominant way to solve minimal problems in computer vision is using methods based on Gröbner bases, since this often leads to fast and numerically stable algorithms. These methods were popularized in computer vision by Stewénius [46]. The earliest examples were very much handcrafted [24, 48], but since then much effort has been put into making the process of constructing the solvers more automatic. One of the main challenges is how to automatically construct the so-called elimination template, which is the main focus of this work.

(This work has been financed by ELLIIT, MAPCI, eSSENCE and the Swedish Research Council, grant no. 2012-4213.)
Our contributions in this paper are:
(i) A non-iterative method for automatically finding the monomial expansion of the initial system of equations.
(ii) A non-iterative reduction step that often gives a much more compact representation of the expanding set.
(iii) An efficient implementation of these ideas, which produces stand-alone code for solving arbitrary instances of the problem.
(iv) Solvers for two previously unsolved minimal cases, based on our developed system.
1.1. Related work

The most closely related work is by Kukelova et al. [33], where the authors presented an automatic method for generating polynomial solvers. The automatic generator allows the user to specify a set of polynomial equations and then automatically generates stand-alone code for solving arbitrary problem instances. This automatic generator has been widely adopted in the computer vision community and has been used to solve several problems in geometric computer vision (see e.g. Table 1 and the references therein). The solvers generated using [33] are built on the action matrix method, which reduces the polynomial equation system to an eigenvalue problem. Their automatic generator works by first computing a Gröbner basis in Macaulay2 [15] for a random problem instance. The Gröbner basis gives information on the number of solutions and provides a basis for the quotient space. The elimination template needed for computing the action matrix is then found by an iterative search process that alternates between expanding the equation system and performing Gaussian elimination. Once sufficiently many equations have been generated, a pruning step is used to remove any unnecessary equations. The Gröbner basis computations and the Gaussian eliminations are performed in some prime field Z_p to avoid numerical problems. The main drawback of the approach in [33] is that, as the number of variables and equations grows, the iterative search and pruning step can quickly become intractable. In this work we propose a new method for finding the elimination template in place of the iterative search used in [33]. We show on a large number of examples that our approach almost always produces smaller elimination templates and faster polynomial solvers.
While our focus has been on constructing smaller templates, there have been a number of works addressing the problem of making solvers more numerically stable [9, 26] and faster [6, 32]. It is possible that these methods could be applied in conjunction with our method, but this is left as future work.
2. Background

In this section we remind the reader of some basic facts and definitions from algebraic geometry that we will use throughout the paper. For a more thorough introduction see [10].

Let X = (x_1, ..., x_n) be a tuple of variables, and K be some field. The set of all multivariate polynomials (in n variables) over K is denoted K[X], and this set with its natural operations forms a ring. In this paper we will only consider the cases K = C and K = Z_p for some prime number p. For a set of polynomials F = {f_i}_{i=1}^m, the set of shared zeros, i.e.

    V(F) = { x ∈ K^n | f_i(x) = 0,  i = 1, 2, ..., m },        (1)

is called an affine variety, and the set of all polynomial combinations of the elements in F, i.e.

    I(F) = { p ∈ K[X] | p = Σ_i h_i f_i,  h_i ∈ K[X] },        (2)

forms an ideal in the polynomial ring K[X]. When it is clear from the context which polynomials are meant, we will omit F and simply write V and I.

Similar to the univariate case, there exists a division algorithm for multivariate polynomials. Unfortunately, the remainder depends on the order in which the polynomials are listed. Fortunately, for any ideal I there exist special sets of generators G = {g_k}, called Gröbner bases, such that the remainder after division with G is uniquely defined, regardless of how the individual g_k are listed. This allows us to define the normal form of a polynomial p ∈ K[X] with respect to G as the unique remainder after division with G, denoted p̄^G. Note that p ∈ I if and only if p̄^G = 0. For a Gröbner basis G, the normal set is the set of all monomials not divisible by any element in G. It is easy to see that the normal form of any polynomial lies in the linear span of the normal set.

Another object of interest is the quotient space K[X]/I, which is the set of equivalence classes over I (i.e. two elements are equivalent if their difference lies in I). If an affine variety V is zero-dimensional (i.e. there are finitely many solutions), then the corresponding quotient space K[X]/I will be a finite-dimensional vector space. For any Gröbner basis G of I, the normal set forms a vector space basis of the quotient space K[X]/I.
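As a concrete illustration of these definitions, the following sketch computes a Gröbner basis with sympy and checks the two normal-form properties stated above. The system here is a hypothetical toy example, not one of the paper's minimal problems, and sympy is an assumed dependency (the paper itself uses Macaulay2):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = [x**2 + y**2 - 5, x*y - 2]  # hypothetical toy system

# Groebner basis in graded reverse lexicographic (grevlex) order
G = sp.groebner(F, x, y, order='grevlex')

# Normal form: the unique remainder after division by G
_, nf = G.reduce(x**3 + y)
# x^3 = x*(5 - y^2) = 5x - x*y^2 and x*y^2 = 2y modulo the ideal,
# so the normal form of x^3 + y is 5x - y
print(nf)

# A polynomial lies in the ideal iff its normal form is zero
p = sp.expand((x + y)*F[0] + y**2 * F[1])
assert G.reduce(p)[1] == 0
```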
2.1. The action matrix method

Next we give a brief overview of the action matrix method for solving polynomial systems. The main idea is to reduce the problem to an eigenvalue problem, for which there exist good numerical methods. For a more thorough review of the action matrix method and how it has been applied in computer vision we recommend [37], [33] and [9].

Consider the operator T_α : K[X]/I → K[X]/I which multiplies a polynomial with a fixed monomial¹ α ∈ K[X], i.e.

    T_α [p(x)] = [α(x) p(x)],   p ∈ K[X].        (3)

The operator T_α is a linear map, and thus if we choose a (linear) basis b for the quotient space we can express the operator with a matrix M, i.e.

    [α b_i] = [ Σ_j m_ij b_j ]   ⇔   [α b] = [M b].        (4)

For each x ∈ V we must have M b(x) = α(x) b(x). Thus, if we evaluate α and b at the solutions, we get eigenvalues and eigenvectors of the matrix M. So if we can find the action matrix M we can recover the solutions by solving an eigenvalue problem, and hence we have reduced the solving of the system of polynomial equations to a linear algebra problem.
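To make this concrete, here is a small self-contained sketch that builds the action matrix for multiplication by x from a Gröbner basis and recovers all solutions from its eigen-decomposition. The system and the hand-checked normal set are hypothetical toy choices, and sympy/numpy are assumed dependencies:

```python
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
F = [x**2 + y**2 - 5, x*y - 2]       # hypothetical toy system with 4 solutions

G = sp.groebner(F, x, y, order='grevlex')
basis = [sp.Integer(1), x, y, y**2]  # normal set for this G, checked by hand

# Build M row by row: the normal form of x*b_i expressed in the basis
M = np.zeros((4, 4))
for i, bi in enumerate(basis):
    _, r = G.reduce(x * bi)          # normal form of x*b_i
    p = sp.Poly(r, x, y)
    for j, bj in enumerate(basis):
        M[i, j] = float(p.coeff_monomial(bj))

# M b(x*) = x* b(x*): eigenvalues are the x-coordinates of the solutions,
# and eigenvectors are b evaluated at the solutions (up to scale).
w, V = np.linalg.eig(M)
sols = []
for k in range(4):
    v = (V[:, k] / V[0, k]).real     # scale so the entry for the monomial 1 is 1
    sols.append((v[1], v[2]))        # entries for the monomials x and y
```

The solutions of the toy system are (1, 2), (-1, -2), (2, 1) and (-2, -1), and the eigenvalues of M are exactly their x-coordinates.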
The monomials r_i = α b_i are called the reducible monomials. If a Gröbner basis G is known and b is chosen as the normal set, we can recover the action matrix by reducing the reducible monomials with the Gröbner basis, i.e. r̄_i^G = Σ_j m_ij b_j. Due to roundoff error it is usually not possible to directly compute a Gröbner basis for a polynomial system corresponding to a real problem instance. Instead, an alternative approach is taken to recover the action matrix. The idea is based on the observation that each

    r_i − Σ_j m_ij b_j ∈ I        (5)

and thus there exist some polynomials h_ij ∈ K[X] such that

    r_i − Σ_j m_ij b_j = Σ_j h_ij f_j.        (6)

To find the action matrix M, the original set of equations {f_j(x) = 0}_{j=1}^m is expanded by adding new equations formed by multiplying each f_j by some monomials. If we have multiplied by sufficiently many monomials (i.e. all monomials in the unknown h_ij), we can express each polynomial (5) linearly in the expanded set of equations.
To do this in practice (see e.g. [26] for details), we write the expanded set of equations as CX = 0, where the matrix C is called the elimination template and X is a vector of all the monomials occurring in the equations. By reordering the monomials we can rewrite this as

    CX = [ C_E  C_R  C_B ] [ x_E ; x_R ; x_B ] = 0,        (7)

where we have grouped the monomials into excessive monomials x_E, reducible monomials x_R and basis monomials x_B. The excessive monomials are simply the monomials which are neither reducible nor basis monomials. Now, since we know x_R = M x_B, we simply perform Gaussian elimination on (7) and obtain the following form for the lower part of (7),

    [ 0  I  −M ] [ x_E ; x_R ; x_B ] = 0,        (8)

from which we can extract the action matrix M.²

¹ For simplicity we take α to be a monomial here, but the theory holds for a general polynomial as well.
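The extraction in (7)-(8) can be sketched numerically for a tiny system. The example below is hypothetical (not one of the paper's problems): with basis monomials b = [1, x, y, y²] and action monomial α = x, the template rows f1, f2 and y·f2 already suffice, there are no excessive monomials, and the action matrix falls out of a single linear solve:

```python
import numpy as np

# Toy system: f1 = x^2 + y^2 - 5, f2 = x*y - 2 (hypothetical example).
# Reducible monomials needing reduction: x^2, x*y, x*y^2 (x*1 = x is itself
# a basis monomial). Template rows: f1, f2, y*f2.
# Column order: [ x^2, x*y, x*y^2 | 1, x, y, y^2 ]  (reducible | basis)
C = np.array([
    [1, 0, 0, -5, 0, 0, 1],   # f1 = x^2 + y^2 - 5
    [0, 1, 0, -2, 0, 0, 0],   # f2 = x*y - 2
    [0, 0, 1,  0, 0, -2, 0],  # y*f2 = x*y^2 - 2y
], dtype=float)

n_r = 3
# Eliminating the reducible block leaves rows of the form [ I  -M_red ],
# i.e. x_R = M_red x_B. Here C[:, :n_r] is invertible, so one solve suffices.
M_red = -np.linalg.solve(C[:, :n_r], C[:, n_r:])

# Assemble the full action matrix for alpha = x in the order b = [1, x, y, y^2];
# the row for x*1 = x just picks out the basis monomial x.
M = np.vstack([[0, 1, 0, 0],   # x * 1   = x
               M_red[0],       # x * x   = x^2
               M_red[1],       # x * y   = x*y
               M_red[2]])      # x * y^2 = x*y^2

w, _ = np.linalg.eig(M)        # eigenvalues: x-coordinates of the 4 solutions
```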
3. Finding elimination templates in Z_p[X]

We now present our proposed approach for finding elimination templates for a given problem. Similarly to [33], we start by generating instances of the problem where the equations have coefficients in some prime field Z_p. Due to the exact arithmetic available in these fields, we can easily compute a Gröbner basis G = {g_k} for the ideal. From the Gröbner basis we find a linear basis b for the quotient space by forming the normal set. As described in Section 2.1, we can then find the action matrix by reducing each of the reducible monomials,

    r̄_i^G = Σ_j m_ij b_j.        (9)
Note that this only gives the action matrix for this particular integer instance, which we are not particularly interested in.

However, by keeping track of how the Gröbner basis elements are formed, we can express each g_k in the generators f_j, i.e.³

    g_k = Σ_j c_kj f_j,        (10)

where the coefficients c_kj ∈ Z_p[X] are polynomials. Since each r_i − Σ_j m_ij b_j ∈ I, we can then find polynomials h_ij ∈ Z_p[X] such that

    r_i − Σ_j m_ij b_j = Σ_k a_ik g_k = Σ_k a_ik ( Σ_j c_kj f_j ) = Σ_j h_ij f_j        (11)

by dividing by {g_k} and substituting each g_k using (10). While the polynomials h_ij are specific to this instance, they will typically have the same structure, and only the coefficients (i.e. numbers) will differ between problem instances. Thus, to form our elimination template, we expand our equation set by multiplying each equation f_j(x) = 0 by the monomials in h_ij, for each i = 1, 2, ..., n_R.

² Note that in practice some reducible monomials might be available directly among the basis monomials.
³ In Macaulay2 this can be accomplished using ChangeMatrix.
Typically, the elimination templates found using this method will be quite large. In the next section we show how to find simpler polynomials h_ij that give more compact elimination templates.
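The back-substitution step can be illustrated with sympy on a toy pair of equations. This is a hypothetical example; the paper does this in Macaulay2, tracking the change matrix with ChangeMatrix, and in general one must divide by a Gröbner basis, whereas here the division by the generators themselves happens to terminate with remainder zero:

```python
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = x**2 + y**2 - 5, x*y - 2    # hypothetical toy system
b = [1, x, y, y**2]                  # quotient-space basis (normal set)

# p_i = r_i - sum_j m_ij b_j lies in the ideal; multivariate division with
# remainder 0 certifies a representation p_i = sum_j h_ij f_j.
p = x*y**2 - 2*y                     # from the reducible monomial x*y^2
h, r = sp.reduced(p, [f1, f2], x, y, order='grevlex')
assert r == 0                        # so h is a valid representation

# The monomials of h tell us which multiples of f_j go into the template:
# here h = [0, y], so y*f2 must be added to the expanded equation set.
print(h)
```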
3.1. Reducing the expansion

In the previous section we showed how to obtain polynomials h_i = (h_i1, ..., h_im) ∈ Z_p[X]^m representing the polynomials needed for forming the action matrix, i.e.

    p_i = r_i − Σ_j m_ij b_j = Σ_j h_ij f_j.        (12)

These representations of p_i in {f_j} are, however, not unique, since for any s = (s_1, ..., s_m) ∈ Z_p[X]^m which satisfies

    Σ_j s_j f_j = 0,        (13)

we also have

    p_i = Σ_j (h_ij + s_j) f_j.        (14)
Consider the set of all s ∈ Z_p[X]^m which satisfy this, i.e.

    M = { s ∈ Z_p[X]^m | Σ_j s_j f_j = 0 }.        (15)

This set forms a sub-module of Z_p[X]^m and is called the first syzygy module of (f_1, ..., f_m) [11]. It captures all polynomial relations between the original equations {f_j(x) = 0}_{j=1}^m, and it is clear that any representation of p_i in {f_j} can be written as h_i + s for some element s ∈ M.
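A minimal sketch of this non-uniqueness, on hypothetical toy equations with sympy as an assumed dependency: the Koszul syzygy (f2, −f1) always lies in the syzygy module, and shifting a representation by it leaves the represented polynomial unchanged:

```python
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = x**2 + y**2 - 5, x*y - 2    # hypothetical toy system

# The Koszul syzygy (f2, -f1) is always in the syzygy module M:
s = (f2, -f1)
assert sp.expand(s[0]*f1 + s[1]*f2) == 0

# Any representation p = h1*f1 + h2*f2 can be shifted by a syzygy without
# changing p; the paper searches among such shifts for a low-degree h.
h = (y, x + y**2)                    # some representation (hypothetical)
p = sp.expand(h[0]*f1 + h[1]*f2)
h2 = (h[0] + s[0], h[1] + s[1])      # shifted representation of the same p
assert sp.expand(h2[0]*f1 + h2[1]*f2) == p
```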
Finding the s ∈ M which yields the smallest template or the best numerics is a difficult problem. Instead, we present a simple heuristic that usually works well in practice. We start by computing a Gröbner basis G_M for the module M. This is done on the prime field problem instance using Macaulay2. The Gröbner basis depends on the monomial order chosen for M; in this work we have used Term-Over-Position GRevLex (TOP-GRevLex), which is the default order for modules in Macaulay2. Then, to find a simpler representation of each p_i, we compute the normal form of each h_i with respect to G_M, i.e.

    h̃_i = h̄_i^{G_M}.        (16)

This can be thought of as removing as much as possible of M from the representation. The following proposition shows that the new representations h̃_i are minimal in the sense that the maximum degree of the monomials is minimized. Note that there can be multiple representations of minimal degree, and this approach only finds one of them.
Proposition 1. If M is defined as above, p = Σ_j h_j f_j, and G_M is a Gröbner basis for M with respect to TOP-GRevLex (or any other degree-first monomial order), then h̃ = h̄^{G_M} satisfies

    max_i deg h̃_i ≤ max_i deg ( h̃_i + s_i ),   for all s ∈ M,        (17)

i.e. the representation h̃ of p is of minimal degree.

Proof. Assume that there exists some s ∈ M such that

    max_i deg ( h̃_i + s_i ) < max_i deg h̃_i.        (18)

Then, since the monomial order is degree-first, we must have LM(h̃ + s) < LM(h̃), which implies LM(h̃) = LM(s). But then the leading monomial of h̃ is divisible by the leading monomial of an element of G_M, which contradicts h̃ being in normal form with respect to G_M.
3.2. Implementation details

We have written an automatic generator in MATLAB which uses the technique described above to find and reduce the elimination templates. The generator is similar to that of Kukelova et al. [33] in that it only requires the user to specify the problem equations, and it then generates stand-alone MATLAB code that can be used to solve arbitrary problem instances. The elimination template generation and reduction are performed in just a few lines of Macaulay2 [15]. The automatic generator allows the user to easily experiment with different parameters, such as which action monomial to use and the monomial ordering for the ideal, which can greatly affect the size of the polynomial solver. The generator can also automatically identify and exploit any variable-aligned symmetries, as described in [35]. In the implementation we do not perform any refinement of the elimination templates (except for the reduction step described in Section 3.1). It is possible that the results could be further improved by using template optimization techniques such as those described in [26, 39]. We have made the code for generating solvers publicly available. For most of the problems that we have tried, the solver generation time is quite small. The median running time of our automatic generator over all the problems described in Table 1 (executed on a standard desktop computer) is 5.7 s.
4. Evaluation of the reduction step

In this section we evaluate the reduction step proposed in Section 3.1. Note that while the reduction does not guarantee a smaller template, we will show that this is often the case in practice.

To perform the evaluation, we applied the automatic generator to a wide variety of minimal problems from the computer vision literature. Table 1 shows both the original template sizes, as reported by the authors, and the resulting templates from our proposed automatic generator. We have marked the templates with the smallest number of elements in bold. It can be seen that the reduction step in general produces a smaller template. Perhaps more interesting is that the reduced template is often smaller than the template in the original paper. Note that many of the papers used the automatic generator from Kukelova et al. [33] (indicated with (*) in the table). For many of the tested problems we get a significant decrease in template size, and for some a very large decrease. For instance, for the problem of estimating relative pose with a known rotation direction, we go from a template of size 411 × 489 [45] to a template of size 40 × 57. If we assume that the time complexity of the solver is quadratic in the number of rows and linear in the number of columns, this corresponds to a speed-up factor of roughly 900. The problem contains a symmetry that was not used by the original authors; we have used the method in [35] to remove it. (Here the smaller template does not contain all the variables in the basis for the quotient space, but the remaining variables can be extracted linearly from the initial equations.)
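Under the stated complexity model, the claimed factor can be checked directly (a back-of-the-envelope sketch; the cost model itself is the assumption here):

```python
# Complexity model assumed in the text: cost ~ rows^2 * cols.
rows0, cols0 = 411, 489   # template of [45]
rows1, cols1 = 40, 57     # reduced template
speedup = (rows0 / rows1) ** 2 * (cols0 / cols1)
print(round(speedup))     # ~906, i.e. roughly a factor 900
```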
5. Numerical accuracy of the solvers

The focus of the work in this paper has been on generating fast solvers in an automatic manner, and not on numerical accuracy. However, for the proposed solvers to be usable, they need to behave acceptably in terms of accuracy as well. While a smaller template typically yields faster runtime, there is not always a clear correlation between accuracy and template size. In [39] it was reported that on a number of examples, smaller templates yielded better numerical accuracy as well. All of the generated solvers in Table 1 produce log10-residuals with a mode below −4.8, and most solvers have significantly better accuracy. The median of all of the log10-error modes is −10.9.

In this section we give comparisons between our solvers and the original ones, for some specific problems. In terms of the underlying computer vision problem there is often some meaningful statistical error that one can evaluate, e.g. the reprojection errors, but since our main contribution in this paper is an automatic way of constructing solvers for systems of polynomial equations, we have opted to evaluate the actual equation residuals instead.

We compare on three different problems where the original solvers were publicly available, namely image stitching with unknown focal length and radial distortion [8, 39], the optimal PnP method of Hesch et al. [21], and the optimal PnP method of Zheng et al. [54]. In Figure 1 the resulting error residual histograms are shown, for 5,000 runs of the solvers with random input. The figure shows that for these problems we get similar accuracy to the original solvers while having smaller elimination templates. For the image stitching we have used the original solver presented in [8]. The smaller original template presented in Table 1 is from the paper of Naroditsky et al. [39], which reported almost identical numerics to the original solver.

| Problem | Author | Original template size | Proposed, no reduction | Proposed, with reduction |
|---|---|---|---|---|
| Rel. pose 5pt | Stewénius et al. [47] | **10 × 20** | **10 × 20** | **10 × 20** |
| Rel. pose 8pt one-sided rad. dist. | Kuang et al. [30] | 12 × 24 | **11 × 20** | **11 × 20** |
| TDOA offset rank 2, 7,4 pts | Kuang et al. [28] | **20 × 15** | **20 × 15** | **20 × 15** |
| Rel. pose + one focal 6pt | Bujnak et al. [4] (*) | **21 × 30** | **21 × 30** | **21 × 30** |
| P3.5P + focal | Wu [52] | **20 × 43** | 24 × 45 | 20 × 44 |
| Rel. pose + const. focal 6pt | Kukelova et al. [33] (*) | **31 × 46** | 31 × 50 | 31 × 50 |
| Rel. pose + rad. dist. 8pt | Kukelova et al. [33] (*) | 32 × 48 | **31 × 49** | 32 × 50 |
| Rel. pose 6pt one-sided rad. dist. | Kuang et al. [30] | 48 × 70 | **34 × 60** | **34 × 60** |
| TDOA offset rank 2, 5,6 pts | Kuang et al. [28] | 105 × 83 | 105 × 83 | **40 × 42** |
| Rolling shutter pose | Saurer et al. [44] (*) | 48 × 56 | 50 × 55 | **47 × 55** |
| Generalized P4P + scale | Ventura et al. [51] (*) | 48 × 56 | 50 × 55 | **47 × 55** |
| Stitching + const. focal + rad. dist. 3pt | Naroditsky et al. [39] | 54 × 77 | 96 × 108 | **48 × 66** |
| TDOA offset rank 3, 9,5 pts | Kuang et al. [28] | **70 × 31** | **70 × 31** | **70 × 31** |
| TDOA offset rank 3, 7,6 pts | Kuang et al. [28] | 255 × 157 | 255 × 157 | **75 × 57** |
| Generalized rel. pose 6pt | Stewénius et al. [48] | **60 × 120** (‡) | 135 × 164 | 99 × 163 |
| Optimal PnP | Hesch et al. [21] | 120 × 120 | 93 × 116 | **88 × 115** |
| Triangulation from satellite im. | Zheng et al. [53] (*) | 93 × 120 | 93 × 116 | **88 × 115** |
| Optimal PnP (Cayley) | Nakano [38] (*) | 124 × 164 | 186 × 161 | **118 × 158** |
| P4P + focal + rad. dist. | Bujnak et al. [5] (*) | 136 × 152 | **140 × 144** | 140 × 156 |
| Rel. pose + rad. dist. 6pt | Kukelova et al. [33] (*) | 238 × 290 | 223 × 290 | **154 × 210** |
| Rel. pose + 2 rad. dist. 9pt | Kukelova et al. [33] (*) | 179 × 203 | 355 × 298 | **165 × 200** |
| Rel. pose 7pt one-sided focal + rad. dist. | Kuang et al. [30] | 200 × 231 | 249 × 214 | **185 × 204** |
| Weak PnP (†) | Larsson et al. [35] | 234 × 276 | 568 × 498 | **189 × 232** |
| Weak PnP (2x2 sym) (†) | Larsson et al. [35] | 104 × 90 | 83 × 90 | **49 × 59** |
| Rolling shutter R6P | Albl et al. [2] (*) | **196 × 216** | 222 × 230 | 204 × 224 |
| Optimal pose w dir 4pt | Svärm et al. [49] | 280 × 252 | 371 × 351 | **203 × 239** |
| Rel. pose w dir. 3pt | Saurer et al. [45] (*) | 411 × 489 | 287 × 324 | **210 × 255** |
| Rel. pose w dir. 3pt (using sym.) (†) | - | - | 94 × 111 | **40 × 57** |
| Abs. pose quivers | Kuang et al. [27] | 372 × 386 | 420 × 406 | **217 × 253** |
| L2 3-view triangulation (Relaxed) | Kukelova et al. [34] (*) | 274 × 305 | 399 × 384 | **239 × 290** |
| Rel. pose w angle 4pt | Li et al. [36] (*) | **270 × 290** | 280 × 304 | 266 × 329 |
| Refractive P5P | Haner et al. [16] | 280 × 399 | 410 × 480 | **240 × 324** |
| TDOA offset rank 3, 6,8 pts | Kuang et al. [28] | 1,359 × 754 | 1,359 × 754 | **356 × 345** |
| Optimal PnP | Zheng et al. [54] (*) | 575 × 656 | 812 × 704 | **521 × 601** |
| Optimal PnP (using sym.) (†) | Zheng et al. [54] (*) | 348 × 376 | 484 × 408 | **302 × 342** |
| Optimal pose w dir 3pt | Svärm et al. [49] | 1,260 × 1,278 | 918 × 726 | **544 × 592** |
| Optimal PnP (quaternion) | Nakano [38] (*) | 630 × 710 | 958 × 693 | **604 × 684** |
| Refractive P6P + focal | Haner et al. [16] | 648 × 917 | 2,196 × 1,913 | **636 × 851** |
| Rel. pose + const. focal + rad. dist. 7pt | Jiang et al. [23] | 886 × 1,011 | 1,393 × 1,237 | **581 × 862** |
| Dual-Receiver TDOA 5pt | Burgess et al. [7] | 2,625 × 2,352 | 850 × 1,167 | **455 × 768** |
| Optimal PnP (rot. matrix) | Nakano [38] (*) | 1,936 × 1,976 | 1,698 × 1,153 | **1,102 × 1,135** |
| L2 3-view triangulation | Kukelova et al. [34] (*) | 1,866 × 1,975 | 2,647 × 2,584 | **1,759 × 2,013** |

(*) Original template constructed using [33]. If several elimination templates are used, the largest of these templates is reported.
(†) The problem contains variable-aligned symmetries [3, 31, 35] that were automatically found and removed by our generator.
(‡) The original template does not generate the full Gröbner basis, and some additional operations on the template are performed.

Table 1. Comparison of elimination template sizes for some common minimal problems in computer vision. The template with the fewest elements, for each problem, is shown in bold.

References
- Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography.
- Multiple View Geometry in Computer Vision.
- An efficient solution to the five-point relative pose problem.
- Using Algebraic Geometry.