JetStream: probabilistic contour extraction with particles

Patrick Pérez, +2 more
- Vol. 2, pp 524-531
JetStream: Probabilistic Contour Extraction with Particles
Patrick P´erez, Andrew Blake, and Michel Gangnet
Microsoft Research
St George House, 1 Guildhall Street, Cambridge, CB2 3NH, UK
http://research.microsoft.com/vision
Abstract

The problem of extracting continuous structures from noisy or cluttered images is a difficult one. Successful extraction depends critically on the ability to balance prior constraints on continuity and smoothness against evidence garnered from image analysis. Exact, deterministic optimisation algorithms, based on discretized functionals, suffer from severe limitations on the form of prior constraint that can be imposed tractably. This paper proposes a sequential Monte-Carlo technique, termed JetStream, that enables constraints on curvature, corners, and contour parallelism to be mobilized, all of which are infeasible under exact optimization. The power of JetStream is demonstrated in two contexts: (1) interactive cut-out in photo-editing applications, and (2) the recovery of roads in aerial photographs.
1. Introduction
The automatic or semi-automatic extraction of contours in images is an important generic problem in early vision and image processing. Its domain of applications ranges from the generic task of segmenting images with closed contours to the extraction of linear structures of particular interest, such as roads in satellite and aerial images.
Despite the variety of applications, most approaches to contour extraction turn out to rely on some minimal-cost principle. In a continuous deterministic setting, this can be cast as the minimization, over a proper set of plane curves r, of a functional

E(r; y) = \int_r g(\kappa(s); y(r(s))) \, ds    (1)

where \kappa is the curvature, s is the arc-length, and y(r(s)) is some scalar or vector derived at location r(s) from the raw image data I; e.g., often y(r(s)) is the gradient norm |\nabla I(r(s))|. This functional captures some kind of regularity on candidate curves, while rewarding, by a lower cost, the presence along the curve of contour cues such as large gradients or edgels detected in a previous stage.

Figure 1. Probabilistic extraction of contours with JetStream. Even in the presence of clutter, JetStream enables the extraction of (Left) silhouettes for photo-editing purposes, and (Right) roads in aerial photographs.
We are interested in the particular case where a starting point p can be picked (either manually or automatically). If the local cost function g in (1) turns out not to depend on the curvature \kappa, the optimal curve is a geodesic which can be recovered, at least in the form of a chain of pixels, using dynamic-programming-type techniques [2, 4, 13, 14, 15, 16]. Unfortunately, unless optimality is abandoned and huge storage resources are available [13], there are tight restrictions on the form of g. As a consequence, only a prior on the total length of curves can then be captured, which is insufficient in situations where a strong smoothing prior is needed. This type of approach also relies on a pixel-based discretization of the paths, ruling out any capability for on-the-fly sub-pixel interpolation.
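The dynamic-programming family of methods mentioned above amounts to a shortest-path search on the pixel graph, with a per-pixel cost that depends only on local image evidence. A minimal sketch (not the authors' implementation; the cost grid here is synthetic) illustrates the formulation, and why only length-type priors fit it:

```python
import heapq

def minimal_path(cost, start, goal):
    """Dijkstra shortest path on a 2-D cost grid (4-connected).

    Each move into pixel (r, c) pays cost[r][c]; because the cost depends
    only on local evidence, the optimum is a geodesic. Curvature terms
    would make the cost depend on pairs of moves, breaking this scheme.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Backtrack from goal to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Synthetic cost grid: low cost (strong edge evidence) along the middle row.
cost = [[5, 5, 5, 5],
        [1, 1, 1, 1],
        [5, 5, 5, 5]]
print(minimal_path(cost, (1, 0), (1, 3)))  # follows the low-cost row
```

The accumulated cost is a sum of independent per-pixel terms, i.e., exactly a discretized length-weighted integral of type (1) with g independent of \kappa; this is the restriction that JetStream removes.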
A second approach, which does not suffer from these limitations, consists in growing a contour from the seed point according to cost function E. Given the current contour, a new segment is appended to it according both to a shape prior (mainly smoothness) and to the evidence provided by the data in the location under concern. This approach has been investigated in a deterministic way, as a tool to complete discontinuous contours provided by edge detectors [11]. A good early survey on this so-called edge linking (or grouping) problem can be found in [1].
A recent advance on edge linking has been obtained by taking a probabilistic point of view: the contours are seen as the paths of a stochastic process driven by both an inner stochastic dynamics and a statistical data model. This is, however, a difficult tracking problem. Indeed, the data likelihood, as a function of the state, typically exhibits many narrow spikes. As a consequence, the posterior densities are badly behaved, preventing the use of most standard tracking tools based on the Kalman filter and its variants. In our context, multi-modality is related to the clutter of contours that most images exhibit. Ideally we seek a tracking method that is able: (i) to avoid spurious distracting contours, (ii) to track, at least momentarily, the multiple offspring starting at branching contours, and finally (iii) to interpolate over transient evidence "gaps". Toward that goal, two very different approaches have been proposed in the literature.
In [3], Cox et al. adapt multiple hypothesis tracking (MHT) techniques from the tracking literature. The resulting tracker can handle the intrinsic multi-modality of the problem, as well as evolve multiple tracks (contours) simultaneously. The technique is however restricted to a special type of data: because all possible data associations are enumerated at each step, it is better adapted to sparse data. Also, these data must be of the same nature (e.g., position and possibly orientation) as the hidden state, since multiple Kalman trackers requiring a linear measurement model are run along the branches of the multiple hypothesis tree. Due to these limitations, the technique is only applied on the sparse output of a contour detector, thus performing edge linking.
In [6], Geman and Jedynak introduce a very effective road tracking technique based on the so-called "active testing" approach. Both their dynamics and their data model nevertheless have to be discrete. They indeed make use of a "decision tree" (containing all possible sequences of measurements), and of a "representation tree" of all possible paths. As for the dynamics, they even limit it to three different moves (no change in direction, or a change of ±5°). Active testing, as well as pruning, are then conducted on the resulting ternary representation tree, using entropic tools.
We propose to tackle this tracking problem with particle filtering [5, 8, 10]. This Monte Carlo technique, based on sequential importance sampling/resampling, provides a sound statistical framework for propagating sample-based approximations of posterior distributions, with almost no restriction on the ingredients of the model. Since samples from the posterior path distribution are maintained at each step, different decision criteria can be worked out to decide which is the final estimated contour, including the MAP used by Geman and Jedynak and the expectation used by Cox et al. Particle filtering offers the same features as the two previous methods (maintaining multiple hypotheses, and on-the-fly pruning), but within a more versatile and consistent, yet simpler, framework.
The power of the proposed technique, termed JetStream, will be demonstrated in two different contexts: (1) the interactive delineation of image regions for photo-editing purposes, and (2) the extraction of roads in aerial images (see two result samples in Fig. 1). Besides the common core described in Sections 2 and 3, specific ingredients are introduced for each of the two applications: the incorporation of user interaction in the cut-out application (Section 4), and an explicit use of road width as part of the dynamical system for road tracking (Section 5).
2 Probabilistic contour tracking
Tracking contours in still images is a rather unconventional tracking problem because of the absence of a real notion of time: the "time" is only associated with the progressive growing of the estimated contour in the image plane. Contrary to standard tracking problems, where data arrive one bit after another as time passes, the whole set of data y is available at once in our case.

This absence of a natural time has some importance in the design of both the prior and the data model. As for the definition of the data likelihood, the transposition of standard techniques from the tracking literature requires a rather artificial sequential ordering of the data as the tracking proceeds [3]: at step i, the data set consists of the data "visible" from the current location. We prefer to consider the data as a whole, getting its ordering as a natural by-product of the sequential inference.

Another consequence of this absence of natural time is that there is no straightforward way of tuning the "speed", or equivalently the length of successive moves. Whatever the speed at which the points travel, only their trajectories matter. The combination of prior dynamics and data model should simply make sure that the contour has reasons to grow. If the dynamics permits slowing down, then the tracker risks getting stuck at locations with high likelihood, resulting in a cessation of the growing process.
2.1 Tracking framework
Let us now introduce the basics of our probabilistic contour tracker. We consider random points x_i in the plane \Omega = \mathbb{R}^2. Any ordered sequence x_{0:n} \triangleq (x_0, \ldots, x_n) \in \Omega^{n+1} uniquely defines a curve in some standard way; e.g., the x_i's are the vertices of a polyline in our experiments. The aim is to grow such a sequence based on a prior dynamics p(x_{i+1} | x_{0:i}) that retains expected properties of the contours to be extracted, and on a data model p(y | x_{0:n}) that provides evidence about whether a measurement is, or is not, in the vicinity of the "true" contour.
Assuming a homogeneous second-order¹ dynamics with kernel q:

p(x_{i+1} | x_{0:i}) = q(x_{i+1}; x_{i-1:i}), \quad \forall i,

the a priori density on \Omega^{n+1} is

p(x_{0:n}) = p(x_{0:1}) \prod_{i=2}^{n} q(x_i; x_{i-2:i-1}).    (2)
We also approximate the measurements conditioned on x_{0:n} as an independent spatial process

p(y | x_{0:n}) = \prod_{u \in \mathcal{G}} p(y(u) | x_{0:n})    (3)

where \mathcal{G} is a discrete set of measurement locations in the image plane, including the x_i locations. Each individual likelihood in product (3) is either p_{on} if u belongs to x_{0:n}, or p_{off} if not:

p(y | x_{0:n}) = \prod_{u \in \mathcal{G} \setminus x_{0:n}} p_{off}(y(u)) \prod_{i=0}^{n} p_{on}(y(x_i) | x_{0:n})
               = \prod_{u \in \mathcal{G}} p_{off}(y(u)) \prod_{i=0}^{n} \frac{p_{on}(y(x_i) | x_{0:n})}{p_{off}(y(x_i))}.    (4)

The second equality simply multiplies and divides by p_{off}(y(x_i)) at the contour locations, so that the first factor no longer depends on the path. This likelihood is of the same form as the one derived by Geman and Jedynak in their active testing framework [6].
The posterior density on \Omega^{n+1} is derived, up to a multiplicative factor independent of x_{0:n}:

p_n(x_{0:n} | y) \propto p(x_{0:1}) \prod_{i=2}^{n} q(x_i | x_{i-2:i-1}) \prod_{i=0}^{n} \ell(y(x_i))    (5)

where \ell \triangleq p_{on}/p_{off} denotes the point-wise likelihood ratio.² Density p(x_{0:1}) is a Dirac mass centered at locations picked by the user. The choice of the transition probability q and of the likelihood ratio \ell will be discussed in the next section.
The function E_n(x_{0:n}; y) \triangleq -\log p_n(x_{0:n} | y) can be seen as the n-sample discretization of a functional of type (1). Expressed as the minimization of this functional, the contour extraction problem then amounts to seeking the maximum a posteriori (MAP) estimate in our probabilistic setting.
¹ If a higher-order dynamics seems more appropriate to the contours under concern, it can be used.

² Note that since the likelihood p(y | x_{0:n}) expresses the probability of the data given that x_{0:n} defines the only contour in the image, not simply a part of the contour set, the posterior densities p_n are not related through marginalization: \int_{x_n} p_n(x_{0:n} | y) \, dx_n \neq p_{n-1}(x_{0:n-1} | y).
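Taking the negative logarithm of (5) makes the link to functional (1) explicit. Since p(x_{0:1}) is a Dirac mass fixed by the user, up to an additive constant:

```latex
E_n(x_{0:n}; y) \triangleq -\log p_n(x_{0:n} \mid y)
  = -\sum_{i=2}^{n} \log q(x_i \mid x_{i-2:i-1})
    \;-\; \sum_{i=0}^{n} \log \ell\big(y(x_i)\big) \;+\; \mathrm{const}.
```

The first sum is the discrete counterpart of the regularity part of g in (1) (it penalizes direction changes between consecutive segments), while the second is the data part; minimizing E_n is therefore exactly the MAP problem stated above.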
2.2 Iterative computation of posterior
The tracking philosophy relies on recursively computing the posterior densities of interest. From (5), the following recursion follows:

p_{i+1}(x_{0:i+1} | y) \propto p_i(x_{0:i} | y) \, q(x_{i+1} | x_{i-1:i}) \, \ell(y(x_{i+1})).    (6)
Although we have analytical expressions for \ell and q, this recursion cannot be computed analytically: there is no closed-form expression for the posterior distributions p_i. The recursion can however be used within a sequential Monte Carlo framework where the posterior p_i is approximated by a finite set (x^m_{0:i})_{m=1,\ldots,M} of M sample paths (the "particles"). The generation of samples from p_{i+1} is then obtained in two steps.
In a first prediction (or proposal) step, each path x^m_{0:i} is grown by one step \tilde{x}^m_{i+1}, obtained by sampling from a proposal density function f(x_{i+1}; x^m_{0:i}, y) over \Omega, whose choice will be discussed shortly. If the paths (x^m_{0:i})_m are fair samples from the distribution p_i over \Omega^{i+1}, then the extended paths (x^m_{0:i}, \tilde{x}^m_{i+1})_m are fair samples from the distribution f p_i over \Omega^{i+2}. Since we are seeking samples from the distribution p_{i+1} instead, we resort to importance sampling: these sample paths are weighted according to the ratio p_{i+1}/(f p_i) (normalized over the M samples). The resulting weighted path set now provides an approximation of the target distribution p_{i+1}. This discrete approximating distribution is used in the second step of selection, where M paths are drawn with replacement from the previous weighted set. The new set of paths is then distributed according to p_{i+1}. The paths with smallest weights are likely to be discarded by this selection process, whereas the ones with large weights are likely to be duplicated.
Using the expression (6) of p_{i+1}, the ratio p_{i+1}/(f p_i) boils down to q\ell/f. The weights thus read

\pi^m_{i+1} \propto \frac{q(\tilde{x}^m_{i+1}; x^m_{i-1:i}) \, \ell(y(\tilde{x}^m_{i+1}))}{f(\tilde{x}^m_{i+1}; x^m_{0:i}, y)}    (7)

with \sum_m \pi^m_{i+1} = 1.
It can be shown that the optimal proposal pdf is f = q\ell / \int_{x_{i+1}} q\ell [5], whose denominator cannot be computed analytically in our case. The chosen proposal pdf must then be sufficiently "close" to the optimal one, such that the weights do not degenerate (i.e., become extremely small) in the re-weighting process.
Based on the discrete approximation of the posterior p_i, different estimates of the "best" path at step i can be devised. An approximation of the MAP estimate is provided by the path of maximum weight (before resampling). A more stable estimate is provided by the mean path \frac{1}{M} \sum_{m=1}^{M} x^m_{0:i}, which is a Monte Carlo approximation of the posterior expectation E(x_{0:i} | y).
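The prediction/weighting/selection cycle described above can be sketched generically in a few lines. This is a minimal sequential importance resampling step, not the authors' code; the hypothetical `propose` and `likelihood_ratio` callables stand in for the dynamics q (used here as the proposal f = q) and the point-wise ratio \ell of Section 3:

```python
import random

def pf_step(paths, propose, likelihood_ratio):
    """One sequential importance resampling step.

    paths: list of M paths, each a list of (x, y) points.
    propose: draws x_{i+1} given the last two points (second-order prior q).
    likelihood_ratio: evaluates l(y(x)) at a candidate point.
    With the proposal f = q, the q/f factor in (7) cancels and the weights
    reduce to the normalized likelihood ratios at the predicted positions.
    """
    # Prediction: grow each path by one proposed point.
    extended = [p + [propose(p[-2], p[-1])] for p in paths]
    # Weighting: likelihood ratio at the new point, normalized over M.
    weights = [likelihood_ratio(p[-1]) for p in extended]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Selection: resample M paths with replacement according to the weights.
    return random.choices(extended, weights=weights, k=len(paths))
```

For instance, a proposal that drifts rightward with a small Gaussian deviation and a likelihood that peaks near the x-axis will concentrate the particle set on near-horizontal paths after a few iterations.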
3 Model ingredients
3.1 Likelihood ratio \ell
Most data terms for contour extraction are based on the spatial gradient in intensity or RGB space, and/or on edgels detected by standard means (e.g., with the Canny detector). More sophisticated cues can be incorporated, such as color/intensity consistency on each side of the contour, texture, or blur, but their relevance obviously varies from one image to another.

Although simple, the norm of the luminance (or color) gradient remains a robust cue. To use it as part of our measurements, we must capture its marginal distributions both off contours (p_{off}) and on contours (p_{on}).
Figure 2. Gradient norm statistics. Normalized histograms of gradient norm on a baby photograph with cluttered background: (Left) over the whole image, along with the fitted exponential distribution; (Right) on the outline of the face and hat only.
The first marginal p_{off} can be empirically captured by the distribution of the norm of the gradient over the whole image (Fig. 2). In our experiments, this empirical distribution was always well approximated by an exponential distribution with parameter \lambda (amounting to the average norm over the image), which we take as p_{off}. As for p_{on}, it is difficult to learn a priori. The empirical distribution over an outline of interest appears as a complex mixture filling the whole range of values, from 0 to a large value of gradient norm (Fig. 2). In the absence of an appropriate statistical device to capture this highly variable behavior adaptively, it seems better to keep the data likelihood p_{on} as uninformative as possible. We simply use a uniform distribution.
Figure 3. Position and angle notations: points x_{i-1} and x_i, the angle \theta(x_i) between the segment (x_{i-1}, x_i) and the gradient normal \nabla I(x_i)^\perp, and the direction change \theta_i.
Observing the angle \theta(x_i) \in [-\pi/2, \pi/2], shown in Fig. 3, between the gradient normal \nabla I(x_i)^\perp and the segment (x_{i-1}, x_i), indicates that the direction of the gradient also retains precious information that a data model based only on the gradient norm neglects: the distribution of \theta is symmetric, and it becomes tighter as the norm of the gradient increases. More precisely, we found empirically that the distribution of |\nabla I|^{0.5} \theta exhibits a normal shape N(0, \sigma_\theta^2) (Fig. 4).
Figure 4. Gradient statistics. (Left) Plot of the angle \theta against the gradient norm for the face contour of Fig. 2; (Right) histogram of |\nabla I|^{0.5} \theta with its normal fit (\sigma_\theta = 1.36).
There is however an important exception to the validity of the image-gradient distribution above. At corners, the norm of the gradient is usually large, but its direction cannot be accurately measured. Using a standard corner detector [7], each pixel u is associated with a label c(u) = 1 if a corner is detected, and c(u) = 0 otherwise. Where a corner has been detected, it is then appropriate to accept a wide range of image gradient directions, but to continue to favour a high gradient magnitude. We thus assume that the distribution p(\theta(x_i) | x_{i-1:i}, |\nabla I(x_i)|, c(x_i) = 1) is uniform. We also assume that the probability of corner occurrence is the same on relevant contours and on background clutter, and that the distribution of gradient direction off contours is uniform (the latter being supported by experimental evidence).
Finally, the complete data model is defined on y = (|\nabla I|, c) by the two likelihoods

p_{on}(\nabla I(x_i), c(x_i) | x_{i-1:i}) \propto c(x_i) + (1 - c(x_i)) \, N(\theta(x_i); 0, \sigma_\theta^2 / |\nabla I(x_i)|)    (8)

p_{off}(\nabla I(x_i), c(x_i)) \propto \exp(-|\nabla I(x_i)| / \lambda)    (9)

from which the ratio \ell = p_{on}/p_{off} is deduced up to a multiplicative constant.
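The point-wise ratio \ell can then be evaluated directly. A sketch under the assumptions stated above (exponential off-contour model with mean `lam`, angle model N(0, \sigma_\theta^2/|\nabla I|) off corners, uniform angle at corners; constant factors are dropped since only relative weights across particles matter):

```python
import math

def likelihood_ratio(grad_norm, theta, is_corner, lam, sigma_theta=1.0):
    """Point-wise ratio l = p_on / p_off, up to a multiplicative constant.

    grad_norm: |grad I(x_i)|; theta: angle between the gradient normal
    and the segment (x_{i-1}, x_i); is_corner: corner-detector label c(x_i).
    p_off follows the exponential fit exp(-|grad I| / lam) of Fig. 2.
    """
    p_off = math.exp(-grad_norm / lam)
    if is_corner:
        p_on = 1.0  # uniform over gradient directions at detected corners
    else:
        # Angle variance shrinks as the gradient gets stronger (Fig. 4).
        var = sigma_theta ** 2 / max(grad_norm, 1e-9)
        p_on = math.exp(-theta ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return p_on / p_off
```

As expected, a strong, well-aligned edge scores far higher than weak clutter, and a corner label rescues a strong edge whose direction is unreliable.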
3.2 Dynamics q
Because of the absence of natural time, it is better to consider a dynamics with a fixed step length d. The definition of the second-order dynamics q(x_{i+1} | x_{i-1:i}) then amounts to specifying an a priori probability distribution on the direction change \theta_i \in (-\pi, \pi] shown in Fig. 3.
The smoothness of the curve can be simply controlled by choosing this distribution as a Gaussian with variance \sigma^2 per length unit. For steps of length d, the resulting angular variance is d\sigma^2. However, under such a dynamics, with typical standard deviations ranging from 0.05\sqrt{d} to 0.1\sqrt{d}, substantial changes of direction are most unlikely. In order to allow for abrupt direction changes at the few locations where corners have been detected, we mix the normal distribution with a small proportion \beta of a uniform distribution over [-\pi/2, \pi/2]. The dynamics finally reads

x_{i+1} = x_i + R(\theta_i)(x_i - x_{i-1}), \quad \text{with} \quad q(\theta_i) = \beta \, U_{[-\pi/2, \pi/2]}(\theta_i) + (1 - \beta) \, N(\theta_i; 0, d\sigma^2)    (10)

where R(\theta_i) is the rotation by angle \theta_i and, with a slight abuse of notation, we now use q to denote the prior angular density on the direction change.
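Sampling one step of (10) is straightforward. A sketch with assumed parameter names (`beta` for the mixture proportion, `sigma` for the per-unit-length standard deviation):

```python
import math
import random

def sample_step(x_prev, x_cur, d=1.0, sigma=0.07, beta=0.01):
    """Draw x_{i+1} = x_i + R(theta_i) (x_i - x_{i-1}) under dynamics (10).

    theta_i is uniform on [-pi/2, pi/2] with probability beta (corner-like
    turns), and N(0, d * sigma^2) otherwise (smooth continuation).
    """
    if random.random() < beta:
        theta = random.uniform(-math.pi / 2, math.pi / 2)
    else:
        theta = random.gauss(0.0, math.sqrt(d) * sigma)
    # Rotate the previous displacement by theta and append it.
    dx, dy = x_cur[0] - x_prev[0], x_cur[1] - x_prev[1]
    c, s = math.cos(theta), math.sin(theta)
    return (x_cur[0] + c * dx - s * dy, x_cur[1] + s * dx + c * dy)
```

Because R(\theta_i) is a rotation, the new segment has exactly the length of the previous one, so the contour grows at the fixed "speed" required by the discussion of Section 2.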
3.3 Proposal sampling function f
Now that both the dynamics and the data model are chosen, it remains to devise a proposal sampling function f which is as closely related as possible to q\ell, under the constraint that it can be sampled from. Since the mixture dynamics (10) can easily be sampled, it is a natural candidate. In this case, \tilde{x}^m_{i+1} is predicted from x^m_{i-1:i} by sampling from (10), and the weights in (7) boil down to the likelihood ratios \ell(y(\tilde{x}^m_{i+1})) normalized over the M predicted positions.

With this standard choice f = q, corners will be mostly ignored, since the expected number of particles undertaking drastic direction changes is \beta M, where typically \beta = 0.01 and M = 100. This can be circumvented by devising a proposal function that also depends on the output of the corner detector. We thus define the prediction step as:

x_{i+1} = x_i + R(\theta_i)(x_i - x_{i-1}), \quad \text{with} \quad p(\theta_i) = c(x_i) \, U_{[-\pi/2, \pi/2]}(\theta_i) + (1 - c(x_i)) \, N(\theta_i; 0, d\sigma^2).    (11)
At locations where no corners are detected, the proposal density is the normal component of the dynamics (10). If x_i lies on a detected corner, the next proposed location is obtained by turning by an angle picked uniformly between -\pi/2 and \pi/2. The impact of the corner-based component in the proposal is shown in Fig. 5.
Figure 5. Using corner detection in JetStream. (Left) With the standard proposal function f = q, the particle stream overshoots the corners. (Right) Including a corner-based component in the proposal is sufficient to accommodate corners automatically.

The complete JetStream iteration is finally summarized in Procedure 1.
Procedure 1 JetStream Iteration

Current particle set: (x^m_{0:i})_{m=1,\ldots,M}

Prediction: for m = 1, \ldots, M:
- if c(x^m_i) = 1 (corner), draw \theta_i from the uniform distribution on (-\pi/2, \pi/2);
- if c(x^m_i) = 0 (no corner), draw \theta_i from the normal distribution N(0, d\sigma^2);
- set \tilde{x}^m_{i+1} = x^m_i + R(\theta_i)(x^m_i - x^m_{i-1}).

Weighting: compute, for m = 1, \ldots, M,

\pi^m_{i+1} = K \, \frac{q(\theta_i) \, \ell(\nabla I(\tilde{x}^m_{i+1}), c(\tilde{x}^m_{i+1}))}{c(x^m_i) + (1 - c(x^m_i)) \, N(\theta_i; 0, d\sigma^2)}    (12)

with K such that \sum_{k=1}^{M} \pi^k_{i+1} = 1.

Selection: for m = 1, \ldots, M, sample index a(m) from the discrete probability \{\pi^k_{i+1}\}_k over \{1, \ldots, M\}, and set

x^m_{0:i+1} = (x^{a(m)}_{0:i}, \tilde{x}^{a(m)}_{i+1}).    (13)
In all experiments, the step length was fixed to d = 1, and the mixture proportion in the dynamics was fixed to \beta = 0.01. The standard deviation \sigma in the normal component of the dynamics was manually tuned within the range (0.05, 0.1). As for the data model, the parameter \lambda is the average gradient norm in the image under consideration, and the standard deviation \sigma_\theta was set to 1. Finally, M = 100 particles were used.
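Procedure 1 can be assembled end to end. The sketch below is a self-contained toy rendition, not the authors' implementation: `grad_norm` and `is_corner` are hypothetical stand-ins for the gradient-norm and corner maps, and \ell is simplified to the uniform-p_on case, i.e., \ell \propto \exp(|\nabla I|/\lambda):

```python
import math
import random

def jetstream_iteration(paths, grad_norm, is_corner, lam,
                        d=1.0, sigma=0.07, beta=0.01):
    """One JetStream step: corner-aware proposal (11), weights (12), selection (13)."""
    std = math.sqrt(d) * sigma
    candidates, weights = [], []
    for path in paths:
        (px, py), (cx, cy) = path[-2], path[-1]
        corner = is_corner(cx, cy)
        # Proposal (11): uniform turn at corners, smooth Gaussian turn otherwise.
        if corner:
            theta = random.uniform(-math.pi / 2, math.pi / 2)
        else:
            theta = random.gauss(0.0, std)
        normal_pdf = math.exp(-theta**2 / (2 * std**2)) / (std * math.sqrt(2 * math.pi))
        f_theta = (1.0 / math.pi) if corner else normal_pdf   # proposal density
        q_theta = beta / math.pi + (1 - beta) * normal_pdf    # prior (10)
        co, si = math.cos(theta), math.sin(theta)
        nx = cx + co * (cx - px) - si * (cy - py)
        ny = cy + si * (cx - px) + co * (cy - py)
        ell = math.exp(grad_norm(nx, ny) / lam)               # simplified ratio l
        candidates.append(path + [(nx, ny)])
        weights.append(q_theta * ell / f_theta)               # weights (12)
    total = sum(weights)
    weights = [w / total for w in weights]
    # Selection (13): resample M paths with replacement.
    return random.choices(candidates, weights=weights, k=len(paths))

# Toy run: a synthetic vertical "edge" of high gradient norm along x = 5.
random.seed(1)
edge = lambda x, y: 30.0 if abs(x - 5.0) < 0.7 else 2.0
no_corners = lambda x, y: False
paths = [[(5.0, 0.0), (5.0, 1.0)] for _ in range(50)]
for _ in range(10):
    paths = jetstream_iteration(paths, edge, no_corners, lam=10.0)
```

On this synthetic input, the selection step keeps pruning particles that drift off the high-gradient line, so the stream stays concentrated near x = 5 while growing upward.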
References
- M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active Contour Models.
- C. Harris and M. Stephens. A Combined Corner and Edge Detector.
- Computer Vision (book).
- M. Isard and A. Blake. Condensation — Conditional Density Propagation for Visual Tracking.
- A. Doucet, S. Godsill, and C. Andrieu. On Sequential Monte Carlo Sampling Methods for Bayesian Filtering.