Efficient Surface Reconstruction using Generalized
Coulomb Potentials
Andrei C. Jalba and Jos B. T. M. Roerdink, Senior Member, IEEE
Abstract—We propose a novel, geometrically adaptive method for surface reconstruction from noisy and sparse point clouds, without
orientation information. The method employs a fast convection algorithm to attract the evolving surface towards the data points. The
force field in which the surface is convected is based on generalized Coulomb potentials evaluated on an adaptive grid (i.e., an octree)
using a fast, hierarchical algorithm. Formulating reconstruction as a convection problem in a velocity field generated by Coulomb
potentials offers a number of advantages. Unlike methods which compute the distance from the data set to the implicit surface, which
are sensitive to noise due to the very reliance on the distance transform, our method is highly resilient to shot noise since global,
generalized Coulomb potentials can be used to disregard the presence of outliers due to noise. Coulomb potentials represent long-
range interactions that consider all data points at once, and thus they convey global information which is crucial in the fitting process.
Both the spatial and temporal complexities of our spatially-adaptive method are proportional to the size of the reconstructed object,
which makes our method compare favorably with respect to previous approaches in terms of speed and flexibility. Experiments with
sparse as well as noisy data sets show that the method is capable of delivering crisp and detailed yet smooth surfaces.
Index Terms—Surface reconstruction, Implicit surfaces, Octrees, Generalized Coulomb potentials, Polygonization.
1 INTRODUCTION
Surface reconstruction is challenging because the topology of the real
surface is unknown, acquired data can be non-uniform and contami-
nated by noise, and reconstructing surfaces from large data sets can be
prohibitively expensive in terms of computations or memory usage. A
lack of information about the surface orientation at the acquired sam-
ples may further complicate the problem.
We propose a novel, geometrically adaptive method for surface
reconstruction without using orientation information. It employs a
fast convection algorithm (inspired by the tagging method of Zhao et
al. [31]) to attract the evolving surface (i.e., the current approximation
to the final surface) towards the data points. The force field in which
the surface is convected is based on generalized Coulomb potentials
evaluated on an adaptive grid (i.e., an octree) using the fast, hierarchi-
cal algorithm of Barnes and Hut [6]. The potentials are used to attract
the evolving surface towards the data points and to remove outliers due
to noise. When the convection process ends, the characteristic func-
tion
χ
(defined as 1 at points inside the object, and 0 at points outside)
yields the final implicit surface, which is polygonized using a meshing
algorithm based on tetrahedral subdivision of octree cells [27].
Formulating surface reconstruction as a convection problem offers a
number of advantages. Most methods for implicit surface fitting com-
pute a signed distance function from the given point data and represent
the reconstructed surface as an iso-contour of this function. However,
these approaches are very sensitive to outliers (shot noise) due to the
very reliance on the distance transform. By contrast, global gener-
alized Coulomb potentials can be used to disregard the presence of
outliers due to noise. Some methods first locally fit the data and then
combine these approximations by blending the locally fitting (basis)
functions. Unlike this, Coulomb potentials represent long-range inter-
actions for all data points at once, and thus convey global information.
As computing Coulomb potentials is of paramount importance in a
broad variety of problems, we can rely on a very efficient method for
computing Coulomb potentials, to arrive at reconstruction with scal-
able computational time and memory requirements.
Both authors are with the Institute for Mathematics and Computing
Science, University of Groningen, The Netherlands.
E-mail: andrei@cs.rug.nl, j.b.t.m.roerdink@rug.nl.
Manuscript received 31 March 2007; accepted 1 August 2007; posted online 27 October 2007. Published 14 September 2007.
For information on obtaining reprints of this article, please send e-mail to: tvcg@computer.org.
The main contributions of this paper include:
- An improved geometrically-adaptive method for surface reconstruction based on an implicit surface representation which only requires information about the position of the samples and is robust to the presence of noise and missing data.
- A hierarchical and adaptive representation based on octrees, which allows scalable reconstruction at different levels of detail.
- New algorithms for surface reconstruction based on convection of surfaces in generalized Coulomb potentials.
2 RELATED WORK
Popular explicit surface representations are based on parametric (e.g.,
NURBS, B-spline and Bézier patches) and triangulated surfaces. Ma-
jor drawbacks of parametric representations are that patches should
be combined to form closed surfaces, which can be very difficult for
arbitrary data sets, and noisy data sets are difficult to deal with. Tri-
angulated surfaces are usually based on tools from computational ge-
ometry, such as Delaunay triangulations and their duals, Voronoi dia-
grams. Most methods in this class extract subsets of faces of Delaunay
triangulations to yield the reconstructed surface [3, 4, 15]. They ex-
actly interpolate the data, and therefore, are rather sensitive to noise.
Moreover, inserting hundreds of thousands of points into a triangula-
tion is computationally expensive. Examples are Alpha Shapes [15],
the (Power) Crust algorithm [3,4], and the Ball-Pivoting algorithm [7].
Recently, Giesen and John [16] introduced the notion of flow in com-
putational geometry, and Scheidegger et al. [28] proposed an adaptive
method based on the Moving Least-Squares algorithm.
Much research is devoted to efficient reconstruction methods re-
lying on implicit surface representations, as these offer a number of
advantages. Compared to explicit methods, implicit ones elegantly
deal with objects of arbitrary topology, blend and perform Boolean
operations on surface primitives, and fill holes automatically. The tra-
ditional approach is to compute a signed distance function and repre-
sent the reconstructed implicit surface by an iso-contour of this func-
tion [5,8,12,17]. This requires a way to distinguish between the inside
and outside of closed surfaces. E.g., Hoppe et al. [17] approximate the
normal at each data point by fitting a tangent plane in its neighbor-
hood, using principal component analysis. Zhao et al. [31] use the
level-set formalism [26] for noise-free surface reconstruction. Their
method handles complicated topology and deformations, and the re-
constructed surface is smoother than piecewise linear. The main draw-
back is the sensitivity to shot noise, due to reliance on the distance
transform. The work of Zhao et al. [31] inspired researchers from
the computational-geometry community to develop ’geometric con-
vection’ algorithms [2, 11] in the context of surface reconstruction.
These methods deform a closed oriented pseudo-surface embedded in
the 3D Delaunay triangulation of the sampled points, and yield the re-
constructed surface as a set of oriented facets of the triangulation. Al-
though these methods tolerate small amounts of Gaussian noise, they
are computationally expensive.
More recently, blending globally or locally supported implicit prim-
itives (e.g. Radial Basis Functions or RBFs) became the favorite tech-
nique [10, 13, 21, 23]. Turk and O’Brien [30] and Carr et al. [10]
use globally supported RBFs by solving a large and dense linear sys-
tem of equations. Since the ideal RBFs are globally-supported and
non-decaying, the solution matrix is dense and ill-conditioned [20].
Moreover, these methods are very sensitive to noise because local
changes of the positions of the input points have global effects on the
reconstructed surface. Morse et al. [23] and Ohtake et al. [25] use
compactly-supported RBFs to achieve local control and reduce com-
putational costs by solving a sparse linear system. Dinh et al. [13]
use RBFs and volumetric regularization to handle noisy and sparse
range data sets. Recently, Ohtake et al. [24] proposed the so-called 'partition of unity implicits', which can be regarded as the combination of algebraic patches and RBFs. Carr et al. [9] further address
surface reconstruction from noisy range data by fitting a RBF to the
data and convolving with a smoothing kernel during the evaluation
of the RBF. Kojekine et al. [21] use compactly-supported RBFs and
an octree data structure such that the resulting matrix of the system
is band-diagonal, thus reducing computational costs. More recently,
Kazhdan [19] proposed to use the Fourier transform for computing
the characteristic function of the solid model, and then standard iso-
surfacing techniques (i.e., marching cubes) to triangulate its boundary.
Although (small) displacements of the positions and normals of the
samples are handled well, the temporal and spatial complexities are cu-
bic in the (uniform) grid resolution. An improved, geometrically adap-
tive method [20] with quadratic complexities, formulates reconstruc-
tion as a Poisson problem which is then tackled using efficient Poisson
solvers. However, both methods assume that orientation information
is available. Kolluri et al. [22] introduce a noise-resistant algorithm for
watertight reconstruction based on spectral graph partitioning. Since
the local spectral partitioner uses a global view of the model, their al-
gorithm can ignore outliers and patch holes in under-sampled regions.
The method is computationally expensive as it solves an eigenvalue
problem on a large graph formed by a subset of the Voronoi vertices
of the input set. Tang and Medioni [29] use tensor-voting to develop
a method which is resilient to noise and copes well with sparse data.
The downside of this method is its cubic spatio-temporal complexity.
In [18] regularized membrane potentials are used to aggregate the in-
put points on a uniform grid, then a labeling algorithm similar to that
in [31] is used to define an implicit surface, which is smoothed using
a mass-spring system and as a final step polygonized. However, this
method has cubic complexities similar to the FFT method in [19].
Similarly to RBF approaches and Poisson reconstruction, our
method creates smooth surfaces that approximate noisy data, and
combines some positive aspects of both global and local approxima-
tion schemes. Convection is performed in a global field (Coulomb
force field) which does not rely on computing local neighborhoods for
blending. However, after the indicator function of the solid has been
computed, we do employ blending based on compactly-supported ker-
nels to produce smooth triangulated surfaces.
3 THE PROPOSED METHOD
Let S denote an input data set of samples (points, lines, etc.) lying on or near the surface ∂M of an unknown object M. The problem is to accurately reconstruct the indicator function χ of M, and then to approximate the surface ∂M by a watertight, smooth triangulated iso-surface.
Given a flexible, arbitrary enclosing surface Γ, the reconstruction problem is formulated as the convection of Γ in a conservative velocity field v = −∇φ created by the input data points, as described by the differential equation

  dΓ(t)/dt = v(Γ(t)) = −∇φ(Γ(t)).  (1)

We assume that φ is the electric scalar potential (Coulomb potential), such that the velocity field becomes the electric field v = E ≡ −∇φ.
We regard the sample points sᵢ ∈ S as electric charges qᵢ with position vectors rᵢ, i = 1, 2, ..., N, generating a charge distribution ρ. Hence we can solve for φ by numerically approximating Poisson's equation

  ∇²φ = −ρ/ε₀.  (2)
Solving Poisson's equation in this case has the problem that the long-range nature of Coulomb interactions is not properly taken into account. Moreover, Poisson's equation requires a continuous charge distribution and not a set of discrete charges, which are in fact just a set of singularities. This problem has been addressed in [20] by convolving the input points with a Gaussian kernel prior to solving Poisson's equation. Additionally, in [20] the authors directly solve a Poisson equation for the characteristic function, assuming however that the orientation of the sample points is available. Discretizing the superposition integral, the potential φ is a sum of potentials generated by each charge taken in isolation. However, this potential is inadequate for our purposes (see Fig. 2). Therefore, we use higher-order generalized Coulomb potentials which decay faster with distance than the inverse of the distance, i.e.,

  φ(rᵢ) = Σ_{j≠i} qⱼ / |rᵢ − rⱼ|^m,  (3)

where m > 1 and we removed the constants appearing in the physical formulation. Note that similar potentials have been successfully
used for computing medial axes of 3D shapes [1]. In our case a high-
order potential should be used for noise-free data when an accurate
reconstruction is desired. By contrast, we show that for noisy data a
smaller-order potential can be used to detect and remove outlier data.
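To make Eq. (3) concrete, here is a minimal brute-force sketch (our illustration, not the authors' code); it assumes NumPy arrays of 3D positions and defaults to unit charges. The fast hierarchical evaluation actually used is described in Section 4.2.

import numpy as np

def coulomb_potential(points, i, m=5, q=None):
    # Brute-force O(N) evaluation of the generalized Coulomb potential of
    # Eq. (3) at sample i: sum over j != i of q_j / |r_i - r_j|^m.
    q = np.ones(len(points)) if q is None else q
    d = np.linalg.norm(points - points[i], axis=1)  # |r_i - r_j| for all j
    mask = np.arange(len(points)) != i              # exclude the j = i term
    return float(np.sum(q[mask] / d[mask] ** m))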
During convection generalized Coulomb potentials have to be eval-
uated not only at the sample positions, but at all centers of the (adap-
tive) grid cells. This is required by the fast convection algorithm (de-
scribed in Section 4.3) which, starting from the bounding box of the
grid, follows increasing paths of the scalar field until regional max-
ima (corresponding to the sample points) and ridges are reached. As it
sweeps the volume, this algorithm labels grid cells as exterior, so that
after propagation the remaining cells qualify as interior to the surface.
Naive evaluations of these potentials at all grid positions is expensive
for large grid resolutions. So we need to use fast adaptive solvers (see
Section 4.2) to approximate them.
Having labeled the cells of the adaptive grid (octree) as exterior and interior to the surface, and thus equivalently determined χ, we compute a smoothed version χ̃ of χ as the weighted sum of contributions of nearby cells (estimated during polygonization). The weights are obtained by evaluating the quadratic (approximating) C¹-continuous B-spline of Dodgson [14]. This kernel has second-order interpolation error, and since it evaluates to zero and has vanishing derivatives at the boundary, it is conducive to stability. Moreover, it is inexpensive to compute, since it requires only three multiplications and three additions for each evaluation.
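As an illustration, the following sketch gives a quadratic C¹-continuous approximating B-spline with zero value and zero derivative at the ends of its support |t| ≤ 3/2; the exact piecewise form of Dodgson's kernel [14] is an assumption here (we use the standard quadratic B-spline).

def quadratic_bspline(t):
    # Quadratic approximating B-spline: C1-continuous, with zero value and
    # zero derivative at |t| = 3/2 (assumed form of the kernel W of [14]).
    t = abs(t)
    if t <= 0.5:
        return 0.75 - t * t
    if t <= 1.5:
        u = 1.5 - t
        return 0.5 * u * u    # evaluates to 0 at the support boundary
    return 0.0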
4 IMPLEMENTATION
4.1 Adaptive octree-based approximation
Discretizing the problem on a regular 3D grid is impractical since the
memory for maintaining such a uniform structure becomes prohibitive
for fine-grain reconstruction. However, since accurate representations
are only required close to the data set, we can use an adaptive grid
based on an octree to efficiently evaluate Coulomb potentials and to
represent and triangulate the implicit function.
Given samples sᵢ ∈ S, i = 1, 2, ..., N, regarded as particles with position sᵢ.p and charge sᵢ.q, and a maximum tree depth D, the octree O is
1513IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 13, NO. 6, NOVEMBER/DECEMBER 2007
Authorized licensed use limited to: IEEE Xplore. Downloaded on October 19, 2008 at 00:37 from IEEE Xplore. Restrictions apply.

built by calling the function octInsert shown in Algorithm 1 for each particle, with n set to the root node r. When the maximum tree depth D is reached for a non-empty leaf l, l ∈ O, the node is not further subdivided if a new particle is to be assigned to it. Instead, the centroid and total charge of the subset Z of all particles sₖ that would have been assigned to the subtree rooted at l in an infinitely-deep octree are computed and stored in a new particle sₛ that is assigned to l. All particles sₖ can now be discarded, as they are no longer needed. Effectively we implement a low-pass filter acting on particle positions, since the given (maximum) resolution (1/2^D) cannot be exceeded anyway, i.e.,

  sₛ.p ← ( Σ_{sₖ∈Z} sₖ.q · sₖ.p ) / ( Σ_{sₖ∈Z} sₖ.q ),   sₛ.q ← Σ_{sₖ∈Z} sₖ.q.  (4)

When initially all particles have the same charge (we use q = 1), the total charge of such merged particles will be larger than that of usual particles assigned to non-empty leaves.
Algorithm 1 Insert particle sᵢ in the subtree rooted at n.
function octInsert(sᵢ, n)
  if subtree rooted at n contains more than one particle then
    determine child c of n which contains particle sᵢ;
    octInsert(sᵢ, c);
  else if subtree rooted at n contains exactly one particle then
    if n.depth ≥ D then
      determine particle sⱼ contained in n;
      create new particle sₛ with charge sₛ.q = sᵢ.q + sⱼ.q
        and position sₛ.p = (sᵢ.q · sᵢ.p + sⱼ.q · sⱼ.p) / sₛ.q;
      assign sₛ to n and discard particles sᵢ and sⱼ;
    else
      add n's eight children to the octree;
      move particle sⱼ already in n into the child in which it lies;
      let c be the child in which sᵢ lies;
      octInsert(sᵢ, c);
  else
    store sᵢ at node n;
After octree construction the octree is balanced (by subdividing
nodes until any two neighboring cells differ at most by one in depth),
as required by both the meshing (see Section 4.4) and convection al-
gorithms (see Section 4.3 and Fig. 2).
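For concreteness, a runnable Python sketch of Algorithm 1 follows; the node layout (explicit cell centers and half-sizes) and the (position, charge) tuples are assumptions of this sketch, not the authors' data structures. Merging at the depth limit implements Eq. (4).

import numpy as np

class Node:
    # Octree node over the cubic cell [center - half, center + half]^3.
    def __init__(self, center, half, depth=0):
        self.center, self.half, self.depth = center, half, depth
        self.children = None      # list of 8 child Nodes once subdivided
        self.particle = None      # (position, charge) for occupied leaves

def subdivide(n):
    n.children = []
    for k in range(8):
        sign = np.array([1 if k >> b & 1 else -1 for b in range(3)])
        n.children.append(Node(n.center + sign * n.half / 2,
                               n.half / 2, n.depth + 1))

def child_of(n, p):
    bits = (p >= n.center).astype(int)   # octant of p relative to the center
    return n.children[bits[0] + 2 * bits[1] + 4 * bits[2]]

def oct_insert(s, n, D):
    # Insert particle s = (position, charge) into the subtree rooted at n,
    # merging particles at the depth limit D as in Eq. (4) (Algorithm 1).
    if n.children is not None:                    # internal node: recurse
        oct_insert(s, child_of(n, s[0]), D)
    elif n.particle is not None:                  # occupied leaf
        if n.depth >= D:                          # depth limit: merge, Eq. (4)
            (p1, q1), (p2, q2) = n.particle, s
            n.particle = ((q1 * p1 + q2 * p2) / (q1 + q2), q1 + q2)
        else:                                     # subdivide, reinsert both
            old, n.particle = n.particle, None
            subdivide(n)
            oct_insert(old, child_of(n, old[0]), D)
            oct_insert(s, child_of(n, s[0]), D)
    else:
        n.particle = s                            # empty leaf: store particle

Calling oct_insert once per sample and then balancing the tree (not shown) yields the balanced octree O_b used below.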
4.2 Fast evaluation of the Coulomb potentials
To evaluate the generalized Coulomb potentials within any desired precision on the adaptive grid obtained by stacking the cells of the balanced octree, we rely on the fast, hierarchical algorithm of Barnes and Hut [6], thus trading accuracy for speed. The Barnes-Hut algorithm can be summarized as follows: (i) construct an octree where each leaf contains at most one particle, (ii) for each octree cell, compute the centroid and total charge of the particles it contains, (iii) traverse the tree to evaluate the potential at any desired grid location. The first step is accomplished using the function octInsert in Algorithm 1. The second step is implemented as a depth-first-search traversal of the balanced tree O_b, in which the total charge n.q and the centroid n.c of each node n ∈ O_b are computed (using Eq. (4)) and stored at the node.
Then we come to the core of the Barnes-Hut algorithm: evaluating the potential at an arbitrary location p. A small positive test charge qₜ (with value qₜ = 1) is placed at position p, i.e.,

  φ(p) = qₜ Σ_{i=1}^{N} qᵢ / |rᵢ − p|^m.  (5)
Then one computes the ratio between the size L of a given cell and the distance rᵢ from p to the centroid of the cell, see Fig. 1. If the ratio is smaller than θ (a user-supplied accuracy parameter), the potential is computed from the total charge and centroid of the particles in the cell. Otherwise, computation continues with the children of the node associated with the current cell, see Algorithm 2.
Fig. 1. Estimation of the Coulomb potential at an arbitrary grid position p. If the ratio L/rᵢ < θ, the direct contribution is replaced by one contribution due to all particles in the box. Grid cells are only subdivided during the octree construction, see Algorithm 1.
Algorithm 2 Evaluate the potential at p due to all particles in the cell associated with octree node n.
function computePotential(p, n) returns pot
  pot = 0;
  if n contains one particle sᵢ then
    pot = direct potential at p due to sᵢ;
  else
    r = distance from p to centroid n.c;
    L = size of the cell associated with n;
    if L/r < θ then
      compute pot due to (n.q, n.c);
    else
      for all eight children mᵢ of n do
        pot = pot + computePotential(p, mᵢ);
  return pot;
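Continuing the Python sketch above, a hedged version of the second and third Barnes-Hut steps: a depth-first pass caching total charge and centroid per node, followed by the recursive evaluation of Algorithm 2. Taking the cell size as the edge length 2·half is an assumption of this sketch; θ and m are as in the text.

def compute_moments(n):
    # Depth-first pass: cache total charge n.q and centroid n.c per node.
    if n.children is None:
        n.q = n.particle[1] if n.particle else 0.0
        n.c = n.particle[0] if n.particle else n.center
    else:
        for c in n.children:
            compute_moments(c)
        n.q = sum(c.q for c in n.children)
        n.c = sum(c.q * c.c for c in n.children) / n.q if n.q else n.center

def compute_potential(p, n, theta=0.9, m=5):
    # Barnes-Hut evaluation of Eq. (5) at p (Algorithm 2): replace a whole
    # cell by its aggregate (n.q, n.c) whenever (cell size)/distance < theta.
    if n.q == 0.0:
        return 0.0
    if n.children is None:                        # leaf: direct contribution
        return n.q / np.linalg.norm(n.c - p) ** m
    r = np.linalg.norm(n.c - p)
    if 2 * n.half / r < theta:                    # far cell: single term
        return n.q / r ** m
    return sum(compute_potential(p, c, theta, m) for c in n.children)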
4.3 Fast convection based on tagging
We revise the tagging algorithm of Zhao et al. [31] to find the octree cells exterior and interior to the surface, i.e., to compute the characteristic function χ of the model on octree grids.
Given the balanced octree O_b, the task is to classify its leaf cells as either exterior, interior or boundary at each depth of the octree, starting at the coarsest grid resolution and moving gradually towards the highest resolution at tree depth D. This is done in a breadth-first-search tree traversal based on a queue Q. Initially, all octree cells at all depths are labeled as interior, and the root of the tree as well as a sentinel node s are added to Q. Then, the octree is traversed by removing one node at a time from Q. Like Zhao et al. we also use a sorted heap H storing leaf cells at the current depth d, but the cells are sorted in increasing order of their keys (potential values at the cell centers). If the current node n taken from Q is the sentinel node s, this signifies that all accessible leaves at the current depth d, d = 1, 2, ..., D, have been collected in the heap H (see below), and the marching of the exterior boundary at level d towards the data points can be performed by Algorithm 3. After marching at level d ends, the current depth d is incremented, a new sentinel is added to Q, and a node (at the new depth) is extracted from Q and assigned to the current node n. The traversal continues by adding all children of n to Q. Then, whether or not the current node was the sentinel, it is labeled as exterior if it touches the bounding box. Otherwise, if (i) it is an interior leaf node, (ii) it has at least one exterior neighbor at a depth smaller than or equal to its own, and (iii) the potential evaluated at its center is larger than or equal to that evaluated at the center of its neighbor, it is labeled as trial and added to the heap H. The process is repeated by extracting a new node from Q until the whole tree has been traversed.
The computation is summarized in Algorithm 3. Compared to the tagging method there are some slight modifications. First, we replaced the maximum heap with a minimum one, as we march towards local maxima and ridges of the generalized Coulomb potentials. A more subtle modification was required because, during marching, even on a balanced octree some interior neighbors of a node may lie at a depth smaller (by one) than its own. Since these nodes will never be reached, and thus never inserted in the heap, they have to be resolved immediately. This is handled by subdividing any such neighbor and continuing the marching process normally.
Algorithm 3 Label cells as exterior or boundary.
function marchExterior(H)
  while heap H is not empty do
    extract node n from H;
    pₙ = computePotential(n.center, root);
    for each leaf m, neighbor of n, with m.depth ≤ n.depth do
      if m is interior and m.depth < n.depth then
        subdivide m;
        find neighbor k of n among the children of m;
        assign m = k and continue;
      if m is interior then
        pₘ = computePotential(m.center, root);
        if pₘ + ε < pₙ then
          label n as boundary;
          continue with extracting a new node from H;
    label n as exterior;
    for each leaf m, neighbor of n do
      if m is interior then
        label m as trial;
        insert m into H;
Additionally, a user-specified parameter ε has been introduced to allow the marching process to continue if the potential at a neighboring node is not small enough compared to that of the current node. For noise-free data sets we set this parameter to ε = 0, whereas for data sets corrupted by outliers we allow for some variation and set ε to a value larger than zero (see Section 5.5).
Fig. 2. Example of 2D curve reconstruction. Top-left: input set; Top-
right: labeling fails if the octree is unbalanced (highly non-uniform data
set); Bottom-left: Labeling can still fail for slower-decaying potentials
(m = 1 in Eq. (3)) even on a balanced tree; Bottom-right: correct result
on a balanced tree with m = 5.
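For intuition, here is a deliberately simplified, single-resolution sketch of the tagging step on a uniform 2D grid (our illustration; the actual method of Algorithm 3 works per depth level on the balanced octree). Cells touching the bounding box are exterior; marching proceeds in increasing order of potential, and a drop in potential ahead of more than the tolerance eps (the ε of Algorithm 3) marks a boundary cell.

import heapq
import numpy as np

INT, EXT, BND, TRIAL = 0, 1, 2, 3

def neighbors(i, j, H, W):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < H and 0 <= j + dj < W:
            yield i + di, j + dj

def tag_exterior(phi, eps=0.0):
    # Single-level sketch of the convection/tagging step on a 2D potential
    # field phi; returns per-cell labels (interior/exterior/boundary).
    H, W = phi.shape
    label = np.full((H, W), INT)
    heap = []
    def make_trial(i, j):
        if label[i, j] == INT:
            label[i, j] = TRIAL
            heapq.heappush(heap, (phi[i, j], i, j))
    for i in range(H):
        for j in range(W):
            if i in (0, H - 1) or j in (0, W - 1):
                label[i, j] = EXT        # cells touching the bounding box
    for i in range(H):
        for j in range(W):
            if label[i, j] == EXT:
                for a, b in neighbors(i, j, H, W):
                    make_trial(a, b)
    while heap:
        p_n, i, j = heapq.heappop(heap)
        # ridge test: a not-yet-exterior neighbor with lower potential means
        # we are past a regional maximum, so the current cell is boundary
        if any(label[a, b] in (INT, TRIAL) and phi[a, b] + eps < p_n
               for a, b in neighbors(i, j, H, W)):
            label[i, j] = BND
            continue
        label[i, j] = EXT
        for a, b in neighbors(i, j, H, W):
            make_trial(a, b)
    return label

Cells still labeled interior when the heap empties define χ = 1 in this simplified setting.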
4.4 Smoothing and polygonization
We define the implicit surface as the zero level set of the (characteristic) function χ (on the octree), constructed as follows. In a depth-first-search traversal, the total charge associated with boundary and non-leaf nodes is set to zero, whereas that of exterior nodes is set to one and that of interior ones is set to minus one. During polygonization, a smoothed version χ̃ of χ is computed as the weighted sum of contributions of nearby cells, where the weights are obtained by evaluating the B-spline kernel mentioned at the end of Section 3, i.e.,
  χ̃(r) ← q(r) · ( Σ_{n∈O_b} n.q · Wₙ(r) ) / ( Σ_{n∈O_b} Wₙ(r) ),   Wₙ(r) = W( 3·|n.center − r| / (2·hₙ) ).
Here q(r) = 1 is a test charge at location r, and n.q, hₙ and n.center are the total charge, kernel support and center of the grid cell associated with n, respectively. The support hₙ is set to hₙ = s · G/2^{n.depth}, where s is a user-supplied parameter controlling the amount of smoothing, n.depth is the octree depth of node n, and G is the size of the computational domain. With some abuse of terminology, we call χ̃ the smoothed characteristic function, although it takes values in the interval [−1, 1].
Algorithm 4 Evaluate χ̃ at location r.
function computeChi(r, s) returns F
  n = locate leaf containing r;
  hₙ = G/2^{n.depth};
  f = n.q · Wₙ(r); sf = Wₙ(r);
  label n as trial and add n to Q;
  while queue Q is not empty do
    dequeue node n from Q;
    for each neighbor m of n do
      if m is not trial then
        if m has children then
          m = unlabeled child of m closest to r;
        hₘ = G/2^{m.depth};
        dₘ = r.dist(m.center);
        if dₘ ≤ hₘ and dₘ ≤ 2 · s · hₙ then
          f = f + m.q · Wₘ(r); sf = sf + Wₘ(r);
          label m as trial and add it to Q;
  label all visited nodes as not-trial;
  return F = f / sf;
We limit the number of cells considered around the cell associated with node n (the cell which contains r), such that only cells in the neighborhood N_{O_b}(n) of n are visited, with N_{O_b}(n) = {m | r.dist(m.center) ≤ 2 · s · hₙ}. Therefore, we designed a fast flood-fill algorithm based on a queue Q which implements blending as just described; its pseudo-code is shown in Algorithm 4.
To extract the iso-surface, we use a method based on tetrahedral subdivision of octree leaves [27]. In a depth-first-search traversal of the octree, all grid cells corresponding to boundary nodes are collected and their faces triangulated. We walk around each face in a fixed direction, e.g., counterclockwise as seen from the positive axis perpendicular to that face, starting at the corner vertex with the largest coordinates. By checking whether the neighboring cells are subdivided, this results in a unique sign pattern which helps us to triangulate the face. Once all its faces have been triangulated, the cell is tetrahedralized by connecting each of its triangles to the center of the cubic cell. When a new tetrahedron is formed, we check whether the implicit function intersects it. The final, smoothed implicit surface is defined as the iso-surface at level zero of the function χ̃. To test for intersections, we estimate the value of χ̃ at the vertices of each cubic cell and at the cell center using Algorithm 4. Only if the surface intersects the tetrahedron is it triangulated as described above. Note that the resulting triangulated surface is guaranteed to be manifold by the construction of χ̃.
To speed up the computations, the values of χ̃ are cached using a hash map, whose keys are obtained by hashing the 3D positions of cell vertices. In this way memory usage is kept low, since only an integer (the final hashing key) and a floating-point number (the function value) have to be stored for each vertex of the visited cells. The number of triangles generated by this method is larger than for the marching cubes algorithm (see [20, 27]), but there are no ambiguous polygonization cases, the algorithm is straightforward, and it results in better interpolation of mesh vertices.
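A minimal sketch of such a cache (ours, not the authors' implementation); quantizing vertex coordinates to the finest grid resolution before hashing, so that shared vertices of neighboring cells map to the same key, is an assumption of this sketch.

class ChiCache:
    # Memoizes smoothed characteristic-function values at cell vertices.
    def __init__(self, D, compute_chi):
        self.scale = 2 ** D              # finest grid resolution
        self.compute_chi = compute_chi   # e.g. Algorithm 4
        self.table = {}
    def __call__(self, r):
        key = tuple(round(c * self.scale) for c in r)  # quantized position
        if key not in self.table:
            self.table[key] = self.compute_chi(r)
        return self.table[key]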
The method is geometrically adaptive in the sense that if the sam-
ples are non-uniformly distributed, during polygonization, larger and
fewer triangles are generated in regions of small sampling density,
whereas smaller and more triangles will be generated in regions with
high sampling density. However, as the resolution of the adaptive oc-
tree grid reflects the distribution of the sample points, if the input data
set is uniform, then the final mesh will have many triangles of similar
sizes.
4.5 Cost analysis
Let N denote the number of samples in the data set S. Assuming that the maximum octree depth D is such that each leaf is assigned a sample (i.e., the grid resolution agrees with that of the samples), the number of leaves is approximately equal to the number of samples, i.e., 2^{3D} ≈ N. Then the total number M of octree nodes is M ≈ Σ_{i=0}^{D} N/2^{3i} ≈ N ≈ 2^{3D}. Since balancing does not increase the order of complexity, constructing the balanced octree O_b is done in O(M · log M).
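For completeness, the node-count bound is the usual geometric series (a routine expansion, spelled out here):

  M ≈ Σ_{i=0}^{D} N/2^{3i} = N · Σ_{i=0}^{D} 8^{−i} < N · 1/(1 − 1/8) = (8/7) · N = O(N),

so the total number of nodes exceeds the number of leaves by at most a constant factor.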
Since each interior cell is visited at most once during convection, the complexity of the marching procedure is O(M · log M), where the log M factor comes from the heap sort algorithm. Since the complexity of evaluating the Coulomb potential (see Algorithm 2) at the center of a grid cell is O(log M), the total complexity of the fast convection algorithm is O(M · log² M). Further, considering that a constant number of cells are visited for evaluating χ̃ at a point (see Algorithm 4), the complexity of the polygonization step is linear in the total number of nodes. Accordingly, in the worst case, if the tree depth is increased by one level, the temporal complexities of both the convection and polygonization steps increase by a factor of eight. The same worst-case total complexity is reached when the resolution of a uniform grid is doubled. However, when using an adaptive octree grid this does not happen, since the octree is not complete, i.e., the samples are non-uniformly distributed on the grid (see Section 5.1). Note that our method has similar spatial worst-case complexities.
5 RESULTS
We present several results obtained using the proposed method, in a large variety of settings, using a system with two dual Opteron processors and 8 GB of RAM. The parameters of the method were set as follows. Given a desired grid resolution (maximum octree depth D), the size of the computational domain G is set to G = 2^D. Parameter D controls the tradeoff between the accuracy of the reconstruction and the computational requirements. If reconstruction is to be performed at maximum resolution, D should be set such that each leaf of the octree is assigned a sample point. The parameter θ controlling the accuracy of the computation of the generalized Coulomb potentials was fixed to θ = 0.9. We experimented with different settings of θ in the range 0.5–0.95, without noticing differences in the quality of the reconstructed surfaces. The factor s determining the support radius of the smoothing kernel in Algorithm 4 was set to s = 2.0. The larger the value of this parameter, the smoother the final surface becomes, albeit at the expense of larger computational time; we experimented with values in the range 1.6–2.5. The settings of the remaining parameters (m and ε) will be mentioned in the text. On the setting of parameter m we mention that similar generalized Coulomb potentials have been successfully used for computing medial axes of 3D shapes [1], where it was shown that as the potential order becomes arbitrarily large, the axes approach those computed using the shortest distance to the border [1]. Since in general it is possible to invert the medial axis transform to obtain an approximation of the input shape, this suggests that as the order of the potential is increased, a more accurate approximation can be obtained, which in the limit yields the original shape. In our case this means that for noise-free data, when an accurate reconstruction is desired, a high-order potential should be used.
5.1 Multi-resolution
The method delivers representations at different scales, where the
maximum depth of the tree D plays the role of the scale parameter.
Fig. 3 shows reconstruction results of a dragon model (3,609,600 samples) at different octree depths. As the depth of the octree is increased
by one, details at higher resolutions are captured, while the compu-
tational time and number of triangles increase roughly by a factor of
four, whereas the memory overhead increases by a factor of two. Some
statistics are given in Table 1.
Fig. 3. Example of reconstruction at various octree depths: Top-left:
D = 7, top-right: D = 8, bottom-left: D = 9, bottom-right: D = 10.
Octree depth Time Peak memory Triangles
7 3 120 98,217
8 9 146 445,912
9 37 250 1,923,982
10 186 587 8,042,874
Table 1. Computational time (seconds), peak-memory usage (mega-
bytes) and number of triangles of the reconstructed surface as functions
of the tree depth, for the large dragon model (see Fig. 3).
5.2 Comparison to other methods
We compare our results to those obtained using Power Crust [4],
Multi-level Partition of Unity implicits (MPU) [24], the method by
Hoppe et al. [17], the FFT method in [19] and the Poisson-based
method [20]. The experiment was performed using the Stanford bunny
data set consisting of 362,000 samples assembled from range data.
The normal at each sample was estimated as in ref. [20] when required,
i.e., for the MPU, FFT and Poisson methods.
The results are shown in Fig. 4. Since this data set is contaminated
by noise, interpolating methods such as the Power Crust generate very
noisy surfaces with holes due to the non-uniformity of the samples.
The method by Hoppe et al. [17] generates a smooth surface, although
some holes are still visible due to the non-uniform distribution of sam-
ples, which the method cannot properly handle. The MPU method
yields a smooth surface without holes, but with some artefacts due to
the local nature of the fitting, which does not cope well with the noise
and non-uniformity of the data. Global methods such as the FFT, Pois-
son and our method accurately reconstruct the surface of the bunny.
Table 2 provides some statistics about the methods as well as about
the quality of the reconstructed surfaces. The last column shows the
approximation error, which is computed as the average distance from
the data points to the centroids of the mesh triangles and represents
an upper bound for the average distance from the data points to the
reconstructed surface. The smallest reconstruction error was achieved
by the Power Crust, MPU and our method. Although the Power Crust
method should have produced an interpolating surface, thus achieving
a smaller upper-bound error, this does not happen in this case, as the
Method           Time   Peak memory   Triangles   Error
Power Crust       504   2601          1,610,433   4 × 10⁻⁴
Hoppe et al.       82    230            630,345   6 × 10⁻⁴
MPU                78    421          2,121,041   4 × 10⁻⁴
FFT method         93   1700          1,458,356   6 × 10⁻⁴
Poisson method    188    283            783,127   5 × 10⁻⁴
Our method         79    197          2,647,986   4 × 10⁻⁴
Table 2. Computational time (seconds), peak-memory usage (mega-
bytes), number of triangles and reconstruction error for the Stanford
bunny by different methods (see also Fig. 4).