Proceedings ArticleDOI

Planar homography: accuracy analysis and applications

14 Nov 2005-Vol. 1, pp 1089-1092
TL;DR: It is shown that previously proposed analytical expressions for assessing the accuracy of the transformation parameters provide less accurate bounds than those based on the earlier results of Weng et al. (1989).
Abstract: Projective homography sits at the heart of many problems in image registration. In addition to many methods for estimating the homography parameters (R.I. Hartley and A. Zisserman, 2000), analytical expressions to assess the accuracy of the transformation parameters have been proposed (A. Criminisi et al., 1999). We show that these expressions provide less accurate bounds than those based on the earlier results of Weng et al. (1989). The discrepancy becomes more critical in applications involving the integration of frame-to-frame homographies and their uncertainties, as in the reconstruction of terrain mosaics and the camera trajectory from flyover imagery. We demonstrate these issues through selected examples.

Summary (2 min read)

Introduction

  • The registration of various frames in a video mosaic has numerous applications, including generation of terrain mosaics from flyover image transects in underwater and airborne systems [3].
  • The accuracy expressions of Criminisi et al. are claimed to be based on the earlier results of Weng et al. [7], who reported a comprehensive error analysis of motion and structure parameters from image correspondences.
  • It is typically the case that some robust estimation method, e.g. RANSAC or LMedS, can be used as a first step to identify the outliers, before the proposed linear solution is applied to the inliers (which satisfy the assumed noise model).

3. COVARIANCE OF PROJECTIVE HOMOGRAPHY

  • Due to space limitations, the authors refer the reader to section 5 in [4] for the estimation of the covariance of the homography parameters, denoted Chc.
  • The authors also derive Cho, the new covariance estimate proposed here.
  • From δm, the authors can determine the variation in the measurement matrix A2N×9, or equivalently in Q9×9.

4. SIMULATIONS

  • The authors use computer simulations to compare the closed-form expressions for estimating the variances of the projective homography parameters, given in [4] and derived here (denoted Chc and Cho, respectively).
  • In two of these simulations, only the minimum of 4 points near the image corners are used.
  • Measurement noise from a normal distribution with standard deviation σm is then added to the corresponding pairs; the homography is subsequently estimated and its error calculated.
  • The process is repeated 1000 times with different noise samples to compute statistical measures.
  • The final experiment deals with the generation of a 10-frame sequence, and the integration of frame-to-frame homographies and the corresponding variances.

4.1. Case 1

  • Blue crosses depict the errors of the homography parameters for each of 1000 simulations, with red circles giving the (zero) mean error.
  • In [4], the authors claim that their solution provides better estimates in two cases: 1) Relatively small measurement noise levels, or 2) when minimum N = 4 image correspondences are utilized in the estimation of the homography.
  • In all cases, the new results consistently provide a more accurate estimate of the ±3σh error bounds.

4.2. Case 2

  • Fig. 2 shows the original and transformed images based on an assumed homography.
  • In addition, selected corners have been mapped with 1000 homographies, estimated from noisy correspondences (σm = 1 is assumed).
  • The green and red uncertainty ellipses of the transformed points have been determined from the homography variances σhc and σho, respectively.
  • The results for 4 selected points, A–D, demonstrate once again that σho provides a more accurate and tighter uncertainty bound.

4.3. Case 3

  • A 10-frame sequence has been constructed based on a prescribed homography.
  • Fig. 3 shows the sequence with an inter-frame motion of about 13-14 pixels.
  • The blue stars depict the true positions of 3 sample pixels in various frames, at the center of uncertainty ellipses (computed from the two different techniques) that bound noisy positions of these points based on homographies estimated from 1000 simulations with noisy correspondences.
  • For each of these three points, the noisy positions and bounding ellipses in frames 1, 4, 7 and 10 have been given in subsequent plots.
  • As in the previous two cases, the new results provide a tighter fit of the projected point distributions.

5. SUMMARY

  • The ability not only to estimate the transformation between frames but also to assess the confidence in these estimates is important in many applications involving motion estimation from video imagery.
  • Based on earlier results of Weng et al. [7], the authors have provided new expressions to estimate these bounds more accurately.
  • These results are particularly important when dealing with long image sequences where frame-to-frame estimates need to be integrated to establish the camera position, to build an image mosaic, or generally to register various frames of a video sequence.
  • Under investigation is the direct use of these uncertainty bounds in the construction of photo-mosaics.
  • This work has been funded in part by the Spanish Ministry of Education and Science under grant CTM2004-04205, and in part by the Generalitat de Catalunya under grant 2003PIVB00032.



PLANAR HOMOGRAPHY: ACCURACY ANALYSIS AND APPLICATIONS
Shahriar Negahdaripour
, Ricard Prados, Rafael Garcia
Computer Vision and Robotics Group
Dept. of Electronics, Informatics and Automation
University of Girona, Spain
ABSTRACT
Projective homography sits at the heart of many problems in
image registration. In addition to many methods for estimat-
ing the homography parameters [5], analytical expressions
to assess the accuracy of the transformation parameters have
been proposed [4]. We show that these expressions provide
less accurate bounds than those based on the earlier results
of Weng et al. [7]. The discrepancy becomes more critical in
applications involving the integration of frame-to-frame ho-
mographies and their uncertainties, as in the reconstruction
of terrain mosaics and the camera trajectory from flyover im-
agery. We demonstrate these issues through selected exam-
ples.
1. INTRODUCTION
The registration of various frames in a video mosaic has nu-
merous applications, including generation of terrain mosaics
from flyover image transects in underwater and airborne sys-
tems [3]. The abilities to accurately compute the view-to-
view transformation parameters and to assess their accura-
cies are equivalently important issues. For example, repro-
jection error bounds can be used to establish search windows
in processing of the data recursively to improve the image
matching and registration.
Let I and I′ be the images of a scene from two camera viewpoints. Under many conditions and for a number of applications, it may be assumed that the scene is (approximately) planar, and thus the image transformation I → I′ can be described by a planar projective homography. Given a pair of image correspondences p = (x, y, 1)^T in I and p′ = (x′, y′, 1)^T in I′, the projective homography H establishes the constraint between each correspondence according to ρ′ p′ = H p, where ρ′ is a constant scaling. A number of methods, including closed-form linear methods, have been given to estimate H (up to a scale) from N correspondences {p_i, p′_i}, i = 1:N. Assuming that the image position measurements are corrupted by Gaussian noise, Criminisi et al. [4] give closed-form expressions to estimate the variance of the 8 independent parameters¹. The derivation is claimed to be based on the earlier results of Weng et al. [7], who reported a comprehensive error analysis of motion and structure parameters from image correspondences. Criminisi et al. also stated that their results, being better conditioned than the solutions given in [2] when only N = 4 correspondences are used, also provide better estimates when the N > 4 matchings have relatively small measurement noise levels. We will show that these expressions provide less accurate bounds than those derived from a singular value perturbation analysis, as first proposed by Weng et al. [7]. We also explore the behavior of the uncertainty bounds in applications involving the integration of frame-to-frame homographies, as in the reconstruction of mosaics and the camera trajectory from flyover imagery.
S. Negahdaripour's permanent address: Dept. Electrical and Computer Engineering, University of Miami, Coral Gables, FL 33143. This paper describes work while on sabbatical leave at the University of Girona, Spain.
¹ Nine elements determined up to a constant scale factor.
2. TERMINOLOGY AND BACKGROUND
Let H denote the planar homography that maps points p in I onto p′ in I′ based on the up-to-scale transformation p′ = Hp. For simplicity, we express the up-to-scale homography as²

    H = [ h1  h2  h3
          h4  h5  h6
          h7  h8  1  ]

If h denotes the vector form of H, the above homography can be expressed by two linear constraint equations on h. If we have N correspondences {p_i, p′_i} (i = 1:N), 2N constraint equations can be written Ah = 0, where

    A = [ x1  y1  1   0   0   0   −x′1 x1   −x′1 y1   −x′1
          0   0   0   x1  y1  1   −y′1 x1   −y′1 y1   −y′1
          ⋮
          xN  yN  1   0   0   0   −x′N xN   −x′N yN   −x′N
          0   0   0   xN  yN  1   −y′N xN   −y′N yN   −y′N ]

Let A = UΣV^T denote the singular value decomposition of A. Equivalently, we consider the eigenvalue decomposition Q = A^T A = V D² V^T, where Σ^T Σ = D². The linear solution of the 9×1 unit homography vector ĥ is the singular vector associated with the smallest singular value: ĥ = υ9 (the diagonal elements of Σ are arranged in descending order, and υi is the i-th column of V). Subsequently, h can be determined by scaling: h = ĥ/ĥ9. We are interested in the variations of
² This representation assumes that h9 ≠ 0.
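The linear solution described above (stack the 2N constraints, take the right singular vector of A associated with the smallest singular value, then rescale so that h9 = 1) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code; the function name is ours.

```python
import numpy as np

def estimate_homography_dlt(p, p_prime):
    """Linear (DLT) estimate of H from N >= 4 correspondences.

    p, p_prime: (N, 2) arrays of matching pixel coordinates.
    Returns H scaled so that h9 = 1, as in the text.
    """
    N = p.shape[0]
    A = np.zeros((2 * N, 9))
    for i, ((x, y), (xp, yp)) in enumerate(zip(p, p_prime)):
        A[2 * i]     = [x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp]
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp]
    # h_hat is the right singular vector of A associated with the
    # smallest singular value (last row of Vt in NumPy's convention).
    _, _, Vt = np.linalg.svd(A)
    h_hat = Vt[-1]
    return (h_hat / h_hat[-1]).reshape(3, 3)
```

With noise-free correspondences the null space of A is one-dimensional and the true homography is recovered exactly (up to numerical precision).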
0-7803-9134-9/05/$20.00 ©2005 IEEE
Authorized licensed use limited to: UNIVERSITAT DE GIRONA. Downloaded on April 26,2010 at 10:40:19 UTC from IEEE Xplore. Restrictions apply.

the solution with noisy observations. Define the observation vector m of N left-and-right matching pixel coordinates:

    m = [x1, y1, x′1, y′1, x2, y2, x′2, y′2, . . . , xN, yN, x′N, y′N]^T   (4N×1)

The noisy measurement vector m̂ = m + δm comprises the zero-mean Gaussian noise vector δm with covariance Cm. A simplifying assumption is a normal distribution N(0, σ); that is, Cm = σ² I (4N×4N), which is reasonable when the errors are primarily due to quantization effects and the localization inaccuracies of the feature detector, e.g. the Harris interest point operator [1]. Outliers violate the assumed normal noise distribution model. However, it is typically the case that some robust estimation method, e.g. RANSAC or LMedS, can be used as a first step to identify the outliers, before the proposed linear solution is applied to the inliers (which satisfy the assumed noise model). Here, we assume knowledge of σ, and are interested in estimating the variances of hi (i = 1:8).
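The outlier-rejection step mentioned above can be sketched with a minimal RANSAC loop that gates correspondences at roughly 3σ of the assumed N(0, σ²I) reprojection noise before the linear fit is applied to the inliers. This is a sketch under our own choices of function names, iteration count and threshold, not the procedure used in the paper.

```python
import numpy as np

def fit_h(p, q):
    # Minimal DLT solve for H from >= 4 correspondences (h9 normalized to 1).
    rows = []
    for (x, y), (xp, yp) in zip(p, q):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    h = Vt[-1]
    return (h / h[-1]).reshape(3, 3)

def ransac_inliers(p, q, sigma, n_iter=500, seed=0):
    """Boolean inlier mask; the gate (~3 sigma of reprojection error)
    is tied to the assumed N(0, sigma^2 I) noise model."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(p), bool)
    ph = np.column_stack([p, np.ones(len(p))])
    for _ in range(n_iter):
        idx = rng.choice(len(p), 4, replace=False)
        H = fit_h(p[idx], q[idx])          # exact fit to the 4-point sample
        proj = ph @ H.T
        proj = proj[:, :2] / proj[:, 2:]
        err = np.linalg.norm(proj - q, axis=1)
        mask = err < 3 * sigma
        if mask.sum() > best.sum():        # keep the largest consensus set
            best = mask
    return best
```

The linear solution (and the covariance analysis of the next section) would then be applied only to the correspondences flagged by the mask.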
3. COVARIANCE OF PROJECTIVE HOMOGRAPHY
Due to space limitations, we refer the reader to section 5 in [4] for the estimation of the covariance of the homography parameters. This is denoted Chc here. We also derive Cho, the covariance derived here as the new estimate.

From δm, we can determine the variation in the measurement matrix A (2N×9), or in Q (9×9). Assume that Q is perturbed by δQ, where q (81×1) and δq (81×1) are the corresponding vector forms. For small variations max{δqi} << 1, where qi denotes the i-th element of q, it can be shown [6, 7] that up to first order, the eigenvalues and eigenvectors of Q vary according to

    δλi = υi^T δQ υi   and   δυi = V Ψi V^T Πi δq

where

    Ψi = diag{ (λi − λ1)^−1, . . . , 0, . . . , (λi − λ9)^−1 }
    Πi = [ υi1 I9×9   υi2 I9×9   . . .   υi9 I9×9 ]

The covariance of the eigenvectors, Cυi = V Ψi V^T Πi Cq Πi^T V Ψi^T V^T, allows us to write

    Cĥ = Cυ9 = V Ψ9 V^T Π9 Cq Π9^T V Ψ9^T V^T

Recalling that the homography h is determined from ĥ by scaling (such that h9 = 1), the covariances of h and ĥ are related by the transformation Cho = J Cĥ J^T, where

    J = ∂h/∂ĥ = [ 1/ĥ9   0      . . .   0       −ĥ1/ĥ9²
                  0      1/ĥ9   . . .   0       −ĥ2/ĥ9²
                                  ⋮
                  0      0      . . .   1/ĥ9    −ĥ8/ĥ9² ]   (8×9)

In the above equations, Cq is determined from

    Cq = (∂q/∂a) [ (∂a/∂m) Cm (∂a/∂m)^T ] (∂q/∂a)^T

where a is the vector form of A:

    a = [ x1, y1, 1, 0, 0, 0, −x′1 x1, −x′1 y1, −x′1,
          0, 0, 0, x1, y1, 1, −y′1 x1, −y′1 y1, −y′1, . . . ,
          0, 0, 0, xN, yN, 1, −y′N xN, −y′N yN, −y′N ]^T   (18N×1)
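A compact numerical sketch of the propagation chain above: for simplicity we collapse the Jacobians ∂q/∂a and ∂a/∂m into a single finite-difference Jacobian ∂q/∂m (a simplification of the closed-form chain in the text), then apply the eigenvector-perturbation formula and the scaling Jacobian J. All names are illustrative, and this is our sketch of the derivation, not the authors' implementation.

```python
import numpy as np

def build_A(m):
    # m: flat measurement vector [x1, y1, x1', y1', ...] of length 4N.
    rows = []
    for x, y, xp, yp in m.reshape(-1, 4):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    return np.asarray(rows)

def homography_covariance(m, sigma, eps=1e-6):
    """First-order covariance C_ho of the 8 scaled homography parameters,
    following the eigenvector-perturbation analysis sketched above."""
    A = build_A(m)
    Q = A.T @ A
    lam, V = np.linalg.eigh(Q)
    order = np.argsort(lam)[::-1]          # descending, as in the text
    lam, V = lam[order], V[:, order]
    v9 = V[:, 8]                           # eigenvector of smallest eigenvalue
    # dq/dm by forward differences (q = vec(Q); dQ is symmetric, so the
    # row-major and column-major vec conventions coincide here).
    q0 = Q.reshape(-1)
    G = np.zeros((81, m.size))
    for k in range(m.size):
        mk = m.copy()
        mk[k] += eps
        Ak = build_A(mk)
        G[:, k] = ((Ak.T @ Ak).reshape(-1) - q0) / eps
    C_q = sigma ** 2 * (G @ G.T)           # Cm = sigma^2 I propagated to q
    # Perturbation of the 9th eigenvector: dv9 = V Psi9 V^T Pi9 dq.
    psi = np.zeros(9)
    for j in range(8):
        psi[j] = 1.0 / (lam[8] - lam[j])
    Pi9 = np.hstack([v9[j] * np.eye(9) for j in range(9)])   # 9 x 81
    T = V @ np.diag(psi) @ V.T @ Pi9
    C_hhat = T @ C_q @ T.T
    # Scale to h = h_hat / h_hat[8] via the 8x9 Jacobian J.
    J = np.zeros((8, 9))
    for i in range(8):
        J[i, i] = 1.0 / v9[8]
        J[i, 8] = -v9[i] / v9[8] ** 2
    return J @ C_hhat @ J.T
```

The returned 8×8 matrix plays the role of Cho; its diagonal gives the analytical variances compared against the Monte Carlo runs in the next section.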
4. SIMULATIONS
We use computer simulations to compare the closed-form expressions for estimating the variances of the projective homography parameters, given in [4] and derived here (denoted Chc and Cho, respectively). In all but two tests, we start with N = 20 points p = [x(i, j), y(i, j), f] on a regular grid i = 64:64:320 and j = 64:64:256 (f = 320 is assumed). In those two tests, only the minimum of 4 points near the image corners are used. We construct matching pairs {p, p′} based on a pre-specified homography H; we use the well-known interpretation H = R + t n^T in terms of the motion {R, t} of a camera relative to a planar scene with surface normal n = [P, Q, 1]/Zo, where P and Q control the surface slant and tilt angles, and Zo is its distance from the camera. Measurement noise from a normal distribution with standard deviation σm is then added to the corresponding pairs; the homography is subsequently estimated and its error calculated. The process is repeated 1000 times with different noise samples to compute statistical measures. In case 1, the experimental variance of each parameter hi (i = 1:8) is compared with the analytical estimates σhk = √(diag{Chk}) (k = c, o). In the second case, the estimated noisy homographies map certain corners of a real image into 1000 noisy matches in the 2nd view. The final experiment deals with the generation of a 10-frame sequence, and the integration of frame-to-frame homographies and the corresponding variances.
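The construction of the test homography from the plane-plus-motion interpretation H = R + t n^T can be sketched as follows (in calibrated coordinates; the helper names are ours, and the overall scale is fixed by normalizing h9 = 1 as in the text):

```python
import numpy as np

def rot(axis, theta):
    # Elementary rotation about a coordinate axis (angle in radians).
    c, s = np.cos(theta), np.sin(theta)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def plane_homography(P, Q, Zo, t, thetas):
    """H = R + t n^T for a plane with normal n = [P, Q, 1]/Zo,
    as in the simulation setup described above."""
    R = rot('x', thetas[0]) @ rot('y', thetas[1]) @ rot('z', thetas[2])
    n = np.array([P, Q, 1.0]) / Zo
    H = R + np.outer(np.asarray(t, float), n)
    return H / H[2, 2]          # fix the overall scale (h9 = 1)
```

Each noise trial then maps the grid points through H, perturbs both sides of the correspondences, and re-estimates the homography from the noisy pairs.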
4.1. Case 1
In the first simulation, we have used P = 0.2, Q = 0.2, and Zo = 10, with translational motion t = [0.1, 0.2, 0.05] and rotation matrix R = Rx Ry Rz, where Ra denotes a rotation about axis a with angle θa (θx = 0.05 [rad], θy = 0.05 and θz = 0).

Fig. 1 (top) shows the results for σm = 1 with N = 20 correspondences (the horizontal axis corresponds to the homography parameters hi (i = 1:8), and the vertical axis is the estimation error). Blue crosses depict the errors of the homography parameters for each of 1000 simulations, with red circles giving the (zero) mean error. The dashed blue envelope is the ±3σ error bound computed experimentally, and the other two envelopes in green and red are derived from the analytical bounds ±3σhc and ±3σho, respectively. The latter nearly coincides with the experimental results. In [4], the authors claim that their solution provides better estimates in two cases: 1) relatively small measurement noise levels, or 2) when the minimum N = 4 image correspondences are utilized in the estimation of the homography. These cases are tested in the next three plots corresponding to σm = 0.05 with N = 20, σm = 1/3 with N = 4, and σm = 1 with N = 4. In all cases, the new results consistently provide a more accurate estimate of the ±3σh error bounds.

4.2. Case 2
Fig. 2 shows the original and transformed images based on an assumed homography. In addition, selected corners have been mapped with 1000 homographies, estimated from noisy correspondences (σm = 1 is assumed). The green and red uncertainty ellipses of the transformed points have been determined from the homography variances σhc and σho, respectively. The results for 4 selected points, A–D, demonstrate once again that σho provides a more accurate and tighter uncertainty bound.
4.3. Case 3
A 10-frame sequence has been constructed based on a prescribed homography. Fig. 3 shows the sequence with an inter-frame motion of about 13–14 pixels. The blue stars depict the true positions of 3 sample pixels in various frames, at the center of uncertainty ellipses (computed from the two different techniques) that bound the noisy positions of these points based on homographies estimated from 1000 simulations with noisy correspondences. For each of these three points, the noisy positions and bounding ellipses in frames 1, 4, 7 and 10 have been given in subsequent plots. As in the previous two cases, our new results provide a tighter fit of the projected point distributions.
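The frame-to-frame integration used in this experiment amounts to composing the pairwise homographies into maps from frame 1 to each later frame; repeating the composition over noise samples yields the empirical point distributions bounded by the ellipses of Fig. 3. A minimal sketch of the composition step (illustrative names):

```python
import numpy as np

def chain_to_reference(H_list):
    """Compose frame-to-frame homographies H_{k,k+1} into
    frame-1-to-frame-k maps, renormalizing h9 = 1 at each step."""
    acc = np.eye(3)
    out = [acc]                      # frame 1 maps to itself
    for H in H_list:
        acc = H @ acc                # H_{1,k+1} = H_{k,k+1} H_{1,k}
        acc = acc / acc[2, 2]
        out.append(acc)
    return out
```

With noisy per-pair estimates in H_list, estimation errors accumulate through the product, which is why accurate per-pair uncertainty bounds matter most in long sequences.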
5. SUMMARY
The ability not only to estimate the transformation between frames but also to assess the confidence in these estimates is important in many applications involving motion estimation from video imagery. Computation of the projective homography from frame-to-frame correspondences has been extensively studied in recent years [5], and analytical uncertainty bounds for the homography parameters and reprojection errors have been proposed [4]. Based on the earlier results of Weng et al. [7], we have provided new expressions to estimate these bounds more accurately. This has been verified in a number of experiments based on the estimation of homography parameters, as well as the reprojection of image points based on the estimated homographies. These results are particularly important when dealing with long image sequences where frame-to-frame estimates need to be integrated to establish the camera position, to build an image mosaic, or generally to register various frames of a video sequence. Under investigation is the direct use of these uncertainty bounds in the construction of photo-mosaics.
Acknowledgement: This work has been funded in part by the Spanish Ministry of Education and Science under grant CTM2004-04205, and in part by the Generalitat de Catalunya under grant 2003PIVB00032.
6. REFERENCES
[1] D.P. Capel, “Image mosaicing and super-resolution,” Ph.D. Thesis, Dept. of Engineering Science, Univ. of Oxford, 2001.
Fig. 1. Comparison between experimental (dashed blue) and analytical uncertainty bounds of the plane homography coefficients for various noise levels and image correspondences (Chc in green, and Cho in red). In each case, the experimental homography represents 1000 simulations with different noise samples of variance σm and N point correspondences. From top to bottom: (1) σ = 1 [pix] with N = 20; (2) σ = 0.05 [pix] with N = 20; (3) σ = 1/3 [pix] with N = 4; (4) σ = 1 [pix] with N = 4. (Axes: coefficients hi of the homography vs. mean and 3 sd of errors.)

Fig. 2. (top) Two views transformed according to homography H, superimposed on two color channels. Selected corners are mapped based on homographies that are calculated from noisy image features (with variance σm = 1), repeated 1000 times with different noise samples. For each corner, the analytical 3σ uncertainty ellipse is given based on Chc (green) and Cho (red). (bottom) Results have been magnified for the 4 selected corners A, B, C and D.
[2] J.C. Clarke, “Modelling uncertainty: A primer,” Tech. Report 2161/98, Univ. of Oxford, Dept. of Engineering Science, 1998.
[3] R. Garcia, X. Cufi, and V. Ila, “Recovering Camera Motion in a Sequence of Underwater Images through Mosaicking,” LNCS no. 2652, pp. 255-262, Springer-Verlag, 2003.
[4] A. Criminisi, I. Reid, and A. Zisserman, “A Plane Measuring Device,” Image and Vision Computing, vol. 17(8), pp. 625–634, 1999.
[5] R.I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge Univ. Press, 2000.
[6] Z. Sun, V. Ramesh, and A.M. Tekalp, “Error characterization of the factorization method,” CVIU, vol. 82(2), 2001.
[7] J. Weng, T.S. Huang, and N. Ahuja, “Motion and Structure from Two Perspective Views: Algorithms, Error Analysis, and Error Estimation,” IEEE Trans. on Patt. Anal. and Mach. Int., vol. 11(5), pp. 451–476, 1989.
Fig. 3. Estimated positions of three sample points, (20, 60), (−100, −50) and (100, 100), in a 10-frame sequence based on homographies estimated from noisy correspondences, repeating the experiment 1000 times. Analytical uncertainty ellipses have been given for Chc (green) and Cho (red). For each sample point, the estimated positions and ellipses in frames 1, 4, 7, 10 are given in a 2 × 2 plot.
Citations
More filters
Dissertation
22 Nov 2012
TL;DR: Tese de doutoramento em Engenharia Eletrotecnica e de Computadores, no ramo de especializacao em Automacao e Robotica, apresentada a Faculdade de Ciencias e Engenhartia da Universidade de Coimbra as discussed by the authors
Abstract: Tese de doutoramento em Engenharia Eletrotecnica e de Computadores, no ramo de especializacao em Automacao e Robotica, apresentada a Faculdade de Ciencias e Engenharia da Universidade de Coimbra

2 citations

Proceedings ArticleDOI
27 Mar 2018
TL;DR: A near-infrared (NIR) imaging system is designed that is compact, easily integrated into any computer system, and cost-effective, thereby having the potential to be used in clinical settings and demonstrating the uniqueness of the vascular structure and its potential to being used as a biometric identifier.
Abstract: The human body is comprised of a variety of networks that can be monitored and used as body positioning systems. Furthermore, structural changes observed in these networks have clinical significance as they can aid in disease diagnosis and determining the overall health of an individual. One such network is the superficial vascular structure. As the primary network supplying blood to the body, observing the vein structure gives insight into the cardiovascular health and hydration levels of an individual. Additionally, because of the uniqueness of the network, there is growing interest in using veins as a biometric for identification and mapping. However, because vasculature is difficult to image and existing imaging technology is expensive, the potential for superficial vascular structure to map the body and provide insight into overall health has not been well studied. Furthermore, given the 3D nature of the body, registering and matching corresponding vascular regions proves to be quite challenging. In order to address these needs, we have designed a near-infrared (NIR) imaging system to image the superficial vascular structure. It is compact, easily integrated into any computer system, and cost-effective, thereby having the potential to be used in clinical settings. By carefully designing the image acquisition system and developing registration and matching algorithms, we can robustly image and extract the vascular structure. By extracting the vascular structure from certain limbs, we show the potential for using vasculature as a body map. Additionally, we demonstrate the uniqueness of the vascular structure and its potential to be used as a biometric identifier.

1 citations

Book ChapterDOI
01 Jan 2018
TL;DR: The aim is to increase the flexibility of the station and to reduce the need for expensive tooling, such as precision, large-size positioners, in industrial robots related to the change of the position of objects in the workspace.
Abstract: The study presents a method to modify the program of industrial robots related to the change of the position of objects in the workspace. The aim is to increase the flexibility of the station and to reduce the need for expensive tooling, such as precision, large-size positioners. The object to be processed can be placed in any position on the table. The location and the orientation are determined by a camera detecting tags in the form of a matrix code. Methods for the calibration of the system and the transformations performed in the preparation and execution of the program were presented. Repeatability of the method has been investigated experimentally.

1 citations

Proceedings ArticleDOI
11 Aug 2022
TL;DR: Geometric transformation refers to the process of registering a digitized image using control points and equations or process of map onto a coordinate system as mentioned in this paper , which is commonly applicable for many fields which include image registration especially for medical field, remote sensing and changing video resolution.
Abstract: Geometric Transformation refers to the process of registering a digitized image using control points and equations or process of map onto a coordinate system. Three major components of Geometric Transformation are spatial transformation, image interpolation and anti-aliasing. Spatial transformation is of different types which may include mappings of affine, perspective as well as transformations of polynomial and piecewise. Geometric Transformation is commonly applicable for many fields which include image registration especially for medical field, remote sensing and changing video resolution. This paper explores a detailed review on Geometric Transformation techniques.
Book ChapterDOI
01 Jan 2014
TL;DR: The current chapter describes the main steps involved in the photo-mosaic building process, which comprehend the geometrical registration and warping of the images into a single common reference frame, along with an estimation of the topology of the trajectory performed by the UV, and a global alignment of the recovered trajectory.
Abstract: The current chapter describes the main steps involved in the photo-mosaic building process. These steps comprehend the geometrical registration and warping of the images into a single common reference frame, along with an estimation of the topology of the trajectory performed by the UV, and a global alignment of the recovered trajectory. A widely extended geometrical registration strategy consists of identifying common image features between the involved images, using different image feature detectors. These image features, once identified, become correspondences that are used to estimate the camera motion between consecutive images, as well as to perform a global alignment of the estimated trajectory. Global alignment of all the involved images allows providing geometrical consistence to the underwater map. At the end of the chapter the problems and issues of the photo-mosaicing process are pointed out, with the aim of demonstrating the relevance of image blending techniques as a final step of the photo-mosaicing process.
References
More filters
Book
01 Jan 2000
TL;DR: In this article, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.
Abstract: From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.

15,558 citations

Journal ArticleDOI
TL;DR: The presented approach to error estimation applies to a wide variety of problems that involve least-squares optimization or pseudoinverse and shows, among other things, that the errors are very sensitive to the translation direction and the range of field view.
Abstract: Deals with estimating motion parameters and the structure of the scene from point (or feature) correspondences between two perspective views. An algorithm is presented that gives a closed-form solution for motion parameters and the structure of the scene. The algorithm utilizes redundancy in the data to obtain more reliable estimates in the presence of noise. An approach is introduced to estimating the errors in the motion parameters computed by the algorithm. Specifically, standard deviation of the error is estimated in terms of the variance of the errors in the image coordinates of the corresponding points. The estimated errors indicate the reliability of the solution as well as any degeneracy or near degeneracy that causes the failure of the motion estimation algorithm. The presented approach to error estimation applies to a wide variety of problems that involve least-squares optimization or pseudoinverse. Finally the relationships between errors and the parameters of motion and imaging system are analyzed. The results of the analysis show, among other things, that the errors are very sensitive to the translation direction and the range of field view. Simulations are conducted to demonstrate the performance of the algorithms and error estimation as well as the relationships between the errors and the parameters of motion and imaging systems. The algorithms are tested on images of real-world scenes with point of correspondences computed automatically. >

495 citations


"Planar homography: accuracy analysi..." refers background in this paper

  • ...For small variations –max{δqi} << 1, where qi denotes i-th element of q –it can be shown [6, 7] that up to first-order, the eigenvalues and eigenvectors ofQ vary according to δλi = υi ∆Q υi and δυi = VΨiV T Πiδq where Ψi = diag{(λi − λ1)−1, ....

    [...]

  • ...[7], who reported a comprehensive error analysis of motion and structure parameters from image correspondences....

    [...]
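The first-order eigenvalue perturbation quoted above, δλi = υi^T ∆Q υi for a symmetric matrix Q, can be verified numerically; the matrices below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric matrix Q and a small symmetric perturbation dQ.
n = 5
M = rng.normal(size=(n, n))
Q = M @ M.T                     # symmetric positive semi-definite
E = rng.normal(size=(n, n))
dQ = 1e-6 * (E + E.T)           # small symmetric perturbation

lam, V = np.linalg.eigh(Q)

# First-order prediction: dlam_i ≈ v_i^T dQ v_i
dlam_pred = np.array([V[:, i] @ dQ @ V[:, i] for i in range(n)])

# Exact change in eigenvalues (eigvalsh sorts ascending, same as eigh).
lam_new = np.linalg.eigvalsh(Q + dQ)
dlam_exact = lam_new - lam

# The residual is second order in ||dQ||.
print(np.max(np.abs(dlam_pred - dlam_exact)))
```

The eigenvector formula δυi = VΨiV^T Πiδq can be checked the same way, but it additionally requires well-separated eigenvalues, since Ψi contains the inverse gaps (λi − λj)⁻¹.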

Book
19 Jan 2004
TL;DR: This book covers image registration, mosaicing, and super-resolution, including maximum-likelihood and related approaches, a generative model for feature-matching over N views, and a discussion of the assumptions made in the imaging model.
Abstract: 1 Introduction.- 1.1 Background.- 1.2 Modelling assumptions.- 1.3 Applications.- 1.4 Principal contributions.
2 Literature Survey.- 2.1 Image registration.- 2.1.1 Registration by a geometric transformation.- 2.1.2 Ensuring global consistency.- 2.1.3 Other parametric surfaces.- 2.2 Image mosaicing.- 2.3 Super-resolution.- 2.3.1 Simple super-resolution schemes.- 2.3.2 Methods using a generative model.- 2.3.3 Super-resolution using statistical prior image models.
3 Registration: Geometric and Photometric.- 3.1 Introduction.- 3.2 Imaging geometry.- 3.3 Estimating homographies.- 3.3.1 Linear estimators.- 3.3.2 Non-linear refinement.- 3.3.3 The maximum likelihood estimator of H.- 3.4 A practical two-view method.- 3.5 Assessing the accuracy of registration.- 3.5.1 Assessment criteria.- 3.5.2 Obtaining a ground-truth homography.- 3.6 Feature-based vs. direct methods.- 3.7 Photometric registration.- 3.7.1 Sources of photometric difference.- 3.7.2 The photometric model.- 3.7.3 Estimating the parameters.- 3.7.4 Results.- 3.8 Application: Recovering latent marks in forensic images.- 3.8.1 Motivation.- 3.8.2 Method.- 3.8.3 Further examples.- 3.9 Summary.
4 Image Mosaicing.- 4.1 Introduction.- 4.2 Basic method.- 4.2.1 Outline.- 4.2.2 Practical considerations.- 4.3 Rendering from the mosaic.- 4.3.1 The reprojection manifold.- 4.3.2 The blending function.- 4.3.3 Eliminating seams by photometric registration.- 4.3.4 Eliminating seams due to vignetting.- 4.3.5 A fast alternative to median filtering.- 4.4 Simultaneous registration of multiple views.- 4.4.1 Motivation.- 4.4.2 Extending the two-view framework to N-views.- 4.4.3 A novel algorithm for feature-matching over N-views.- 4.4.4 Results.- 4.5 Automating the choice of reprojection frame.- 4.5.1 Motivation.- 4.5.2 Synthetic camera rotations.- 4.6 Applications of image mosaicing.- 4.7 Mosaicing non-planar surfaces.- 4.8 Mosaicing "user's guide".- 4.9 Summary.- 4.9.1 Further examples.
5 Super-resolution: Maximum Likelihood and Related Approaches.- 5.1 Introduction.- 5.2 What do we mean by "resolution"?.- 5.3 Single-image methods.- 5.4 The multi-view imaging model.- 5.4.1 A note on the assumptions made in the model.- 5.4.2 Discretization of the imaging model.- 5.4.3 Related approaches.- 5.4.4 Computing the elements in Mn.- 5.4.5 Boundary conditions.- 5.5 Justification for the Gaussian PSF.- 5.6 Synthetic test images.- 5.7 The average image.- 5.7.1 Noise robustness.- 5.8 Rudin's forward-projection method.- 5.9 The maximum-likelihood estimator.- 5.10 Predicting the behaviour of the ML estimator.- 5.11 Sensitivity of the ML estimator to noise sources.- 5.11.1 Observation noise.- 5.11.2 Poorly estimated PSF.- 5.11.3 Inaccurate registration parameters.- 5.12 Irani and Peleg's method.- 5.12.1 Least-squares minimization by steepest descent.- 5.12.2 Irani and Peleg's algorithm.- 5.12.3 Relationship to the ML estimator.- 5.12.4 Convergence properties.- 5.13 Gallery of results.- 5.14 Summary.
6 Super-resolution Using Bayesian Priors.- 6.1 Introduction.- 6.2 The Bayesian framework.- 6.2.1 Markov random fields.- 6.2.2 Gibbs priors.- 6.2.3 Some common cases.- 6.3 The optimal Wiener filter as a MAP estimator.- 6.4 Generic image priors.- 6.5 Practical optimization.- 6.6 Sensitivity of the MAP estimators to noise sources.- 6.6.1 Exercising the prior models.- 6.6.2 Robustness to image noise.- 6.7 Hyper-parameter estimation by cross-validation.- 6.8 Gallery of results.- 6.9 Super-resolution "user's guide".- 6.10 Summary.
7 Super-resolution Using Sub-space Models.- 7.1 Introduction.- 7.2 Bound constraints.- 7.3 Learning a face model using PCA.- 7.4 Super-resolution using the PCA model.- 7.4.1 An ML estimator (FS-ML).- 7.4.2 MAP estimators.- 7.5 The behaviour of the face model estimators.- 7.6 Examples using real images.- 7.7 Summary.
8 Conclusions and Extensions.- 8.1 Summary.- 8.2 Extensions.- 8.2.1 Application to digital video.- 8.2.2 Model-based super-resolution.- 8.3 Final observations.
A Large-scale Linear and Non-linear Optimization.- References.
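The linear estimators listed under Section 3.3 of this book are typically variants of the Direct Linear Transform (DLT), which stacks two linear constraints per correspondence and takes the null vector of the resulting system. A minimal sketch (the helper name `estimate_homography_dlt` is ours, not the book's):

```python
import numpy as np

def estimate_homography_dlt(p, p2):
    """Direct Linear Transform: estimate H such that p2 ~ H p, from
    N >= 4 correspondences (each row an inhomogeneous 2-D point)."""
    rows = []
    for (x, y), (u, v) in zip(p, p2):
        # Each correspondence contributes two linear constraints on h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # h is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the overall scale ambiguity

# Check on noise-free points mapped through a known homography.
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.25]])
ph = np.hstack([p, np.ones((len(p), 1))]) @ H_true.T
p2 = ph[:, :2] / ph[:, 2:]

H_est = estimate_homography_dlt(p, p2)
print(np.allclose(H_est, H_true, atol=1e-6))
```

In practice the points are first normalized (translated and scaled) for numerical conditioning, and the linear estimate is refined by non-linear minimization of reprojection error, as the book's Sections 3.3.2–3.3.3 describe.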

280 citations

Journal ArticleDOI
TL;DR: An uncertainty analysis is developed which includes both the errors in image localization and the uncertainty in the imaging transformation, showing that the distribution of correspondences can be chosen to achieve a particular bound on the uncertainty.

265 citations


"Planar homography: accuracy analysi..." refers background or methods in this paper

  • ...In addition to many methods for estimating the homography parameters [5], analytical expressions to assess the accuracy of the transformation parameters have been proposed [4]....

    [...]

  • ...[4] give closed-form expressions to estimate the variance of the 8 independent parameters1....

    [...]

  • ...SIMULATIONS We use computer simulations to compare the closed-form expressions for estimating the variances of the projective homography parameters, given in [4] and derived here (denoted Chc and Cho, respectively)....

    [...]

  • ...COVARIANCE OF PROJECTIVE HOMOGRAPHY Due to space limitation, we refer the reader to section 5 in [4], given for the estimation of the covariance of the homography parameters....

    [...]
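The comparison described in these passages (closed-form variance estimates versus simulation) rests on generic first-order propagation, Cov(f(p)) ≈ J Σ Jᵀ. A small sketch of that propagation for the simpler case of a point mapped through a fixed homography — not the paper's Chc or Cho themselves, and with illustrative values throughout:

```python
import numpy as np

rng = np.random.default_rng(2)

H = np.array([[1.1, 0.02, 5.0],
              [-0.01, 0.95, -3.0],
              [1e-4, 2e-4, 1.0]])

def project(H, p):
    """Map an inhomogeneous 2-D point through homography H."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def jacobian(H, p, eps=1e-6):
    """Central-difference Jacobian of project() w.r.t. the point p."""
    J = np.zeros((2, 2))
    for k in range(2):
        d = np.zeros(2); d[k] = eps
        J[:, k] = (project(H, p + d) - project(H, p - d)) / (2 * eps)
    return J

p0 = np.array([3.0, 4.0])
sigma = 0.05
Sigma_p = sigma**2 * np.eye(2)   # isotropic noise on the input point

# First-order covariance of the mapped point: J Sigma J^T.
J = jacobian(H, p0)
cov_analytic = J @ Sigma_p @ J.T

# Monte Carlo comparison.
samples = np.array([project(H, p0 + sigma * rng.normal(size=2))
                    for _ in range(20000)])
cov_mc = np.cov(samples.T)

print(np.linalg.norm(cov_mc - cov_analytic) / np.linalg.norm(cov_analytic))
```

The discrepancy the paper reports between Chc and Cho arises from where and how this linearization is applied to the homography parameters, which a simulation of this kind can expose.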

Journal ArticleDOI
TL;DR: 3-D shape uncertainty is visualized as ellipsoids overlaid on the 3-D reconstruction, an enhanced visualization that leads to better use of the factorization method in engineering applications.

41 citations

Frequently Asked Questions (7)
Q1. Why do the authors refer the reader to section 5 in [4]?

Due to space limitations, the authors refer the reader to Section 5 in [4] for the estimation of the covariance of the homography parameters. 

In [4], the authors claim that their solution provides better estimates in two cases: 1) relatively small measurement noise levels, or 2) when the minimum of N = 4 image correspondences is utilized in the estimation of the homography. 

Computation of projective homography from frame-to-frame correspondences has been extensively studied in recent years [5], and analytical uncertainty bounds of the homography parameters and reprojection errors have been proposed [4]. 

The dashed blue envelope is the ±3σ error bound computed experimentally, and the other two envelopes, in green and red, are derived from the analytical bounds ±3σhc and ±3σho, respectively. 

The authors construct matching pairs {p, p′} based on a pre-specified homography H; they use the well-known interpretation H = R + tnT in terms of the motion {R, t} of a camera relative to a planar scene with surface normal n = [−P, −Q, 1]/Zo, where P and Q control the surface slant and tilt angles, and Zo is its distance from the camera. 
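The construction H = R + tnT can be sketched directly; the motion and plane parameters below are arbitrary illustrative values, not those used in the paper:

```python
import numpy as np

def rot_z(theta):
    """Rotation by theta radians about the optical (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Plane with surface normal n = [-P, -Q, 1]/Zo: P, Q set slant/tilt,
# Zo is the plane's distance from the camera.
P, Q, Zo = 0.1, -0.2, 5.0
n = np.array([-P, -Q, 1.0]) / Zo

# Camera motion relative to the plane.
R = rot_z(np.deg2rad(3.0))
t = np.array([0.05, -0.02, 0.01])

# Induced planar homography.
H = R + np.outer(t, n)

# Generate a matching pair {p, p'} by mapping p through H and
# renormalizing the homogeneous coordinate.
p = np.array([0.2, -0.1, 1.0])
q = H @ p
p_prime = q / q[2]
print(p_prime)
```

Adding Gaussian noise to many such pairs {p, p′} and re-estimating H then yields the experimental covariance against which the analytical bounds are compared.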

For small variations –max{δqi} << 1, where qi denotes the i-th element of q –it can be shown [6, 7] that, up to first order, the eigenvalues and eigenvectors of Q vary according to δλi = υi^T ∆Q υi and δυi = VΨiV^T Πiδq.

The ability not only to estimate the transformation between frames but also to assess the confidence in these estimates is important in many applications involving motion estimation from video imagery.