Metro: measuring error on simplified surfaces
01 Jan 1996 - Computer Graphics Forum - Vol. 17, Iss. 2, pp. 167-174

Computer Graphics Forum, Volume xxx (1998), number yyy, pp. 000-000

Metro: measuring error on simplified surfaces

P. Cignoni†, C. Rocchini‡ and R. Scopigno§
Istituto per l'Elaborazione dell'Informazione - Consiglio Nazionale delle Ricerche, Pisa, Italy

Technical Note (Short contribution)
Abstract

This paper presents a new tool, Metro, designed to compensate for a deficiency in many simplification methods proposed in the literature. Metro allows one to compare the difference between a pair of surfaces (e.g. a triangulated mesh and its simplified representation) by adopting a surface sampling approach. It has been designed as a highly general tool, and it makes no assumption on the particular approach used to build the simplified representation. It returns both numerical results (mesh areas and volumes, maximum and mean error, etc.) and visual results, by coloring the input surface according to the approximation error.

Keywords: surface simplification, surface comparison, approximation error, scan conversion.
1. Introduction

Many applications produce or manage extremely complex surface meshes (e.g. volume rendering, solid modeling, 3D range scanning). Excessive surface complexity causes non-interactive rendering, secondary-to-main memory bottlenecks while managing interactive visual simulations, or network saturation in 3D distributed multimedia systems. In spite of the constant increase in processing speed, the performances required by interactive graphics applications are in many cases much higher than those granted by current technology.

Substantial results have been reported in the last few years, aimed at reducing surface complexity while assuring a good shape approximation [13, 6]. The techniques proposed simplify [triangular] meshes either by merging/collapsing elements or by re-sampling vertices, using different error criteria to measure the fitness of the approximated surfaces. Any level of reduction can be obtained with these approaches, on the condition that a sufficiently coarse approximation threshold is set (an example is drawn in Figure 1).
† Email: cignoni@iei.pi.cnr.it
‡ Email: rocchini@calpar.cnuce.cnr.it
§ Email: r.scopigno@cnuce.cnr.it
A general comparison of the simplification approaches is not easy, because the criteria used to drive the simplification process are highly differentiated and there is no common way of measuring error; an attempt has been recently presented [3]. In fact, many simplification approaches do not return measures of the approximation error introduced while simplifying the mesh. For example, given the complexity reduction factor set by the user, some methods try to "optimize" the shape of the simplified mesh, but they give no measure of the error introduced [18, 9, 8]. Other approaches let the user define the maximal error that can be introduced in a single simplification step, but return no global error estimate or bound [17, 7]. Some other recent methods adopt a global error estimate [10, 15, 2, 5] or simply ensure the introduced error to be under a given bound [4]. But the field of surface simplification still lacks a formal and universally acknowledged definition of error, which should involve shape approximation and hopefully the preservation of feature elements and mesh attributes (e.g. color).

For these reasons, a general tool that measures the actual geometric "difference" between the original and the simplified meshes would be strategic both for researchers, in the design of new simplification algorithms, and for users, to allow them to compare the results of different simplification approaches
© The Eurographics Association 1998. Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK and 238 Main Street, Cambridge, MA 02142, USA.

Figure 1: A mesh simplification example: the original mesh (7,960 triangles) is on the left, a simplified one (179 triangles) is on the right.
on the same mesh and to choose the simplification method that "best fits" the target mesh. In fact, even bounded precision methods [10, 15, 2, 5, 4] behave differently on different meshes. They generally ensure the user that the approximation will not be larger than a given threshold, but do not give data on the actual error distribution on the mesh. An example is the following query: are there sections of the mesh which hold an approximation much better than the given bound? And, if yes, what is their size and distribution?

Metro has been defined as a tool which is general and simple to implement. It compares numerically two triangle meshes which describe the same surface at different levels of detail (LOD). Metro requires no knowledge of the simplification approach adopted to build the reduced mesh. Metro evaluates the difference between two meshes on the basis of the approximate distance defined in the following section.
2. Terminology

We define here some terms that will be used in the following section (in fact, all the measures evaluated by Metro follow the definitions below).

The approximation error between two meshes may be defined as the distance between corresponding sections of the meshes. Given a point p and a surface S, we define the distance e(p, S) as:

    e(p, S) = \min_{p' \in S} d(p, p')

where d() is the Euclidean distance between two points in E^3. The one-sided distance between two surfaces S_1, S_2 is then defined as:

    E(S_1, S_2) = \max_{p \in S_1} e(p, S_2).

Note that this definition of distance is not symmetric: there exist surfaces such that E(S_1, S_2) ≠ E(S_2, S_1). A two-sided distance (Hausdorff distance) may be obtained by taking the maximum of E(S_1, S_2) and E(S_2, S_1).

Given a set of uniformly sampled distances, we define the mean distance E_m between two surfaces as the surface integral of the distance divided by the area of S_1:

    E_m(S_1, S_2) = \frac{1}{|S_1|} \int_{S_1} e(p, S_2) \, ds

If the surface S_1 is orientable, we can extend the definition of the distance between a point p of S_1 and S_2 so that, informally speaking, this distance e' is positive if the nearest point p' ∈ S_2 is in the outer space with respect to S_1, and negative otherwise (see Figure 2). In other words, if N_p is the vector normal to S_1 in the sampled point p and p' ∈ S_2 is the nearest point, then the sign of our distance measure is the sign of N_p · (p' − p).

This definition of signed distance is introduced to let Metro distinguish between positive and negative distances between two surfaces as follows:

    E^+(S_1, S_2) = \max_{p \in S_1} e'(p, S_2)

Figure 2: Signed distance evaluation; the distance is positive in p_1 and negative in p_2 (S_1 is the sampled curve).
    E^-(S_1, S_2) = \left| \min_{p \in S_1} e'(p, S_2) \right|

Signed distances are used by Metro to give an independent evaluation of the sections of the first mesh which are in the interior or in the exterior space with respect to the second mesh.
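On finite point samples of the two surfaces, the definitions above can be sketched directly; the following is a minimal illustration (the function names are ours, and a real surface would be sampled far more densely):

```python
import math

def e(p, S):
    # e(p, S): minimum Euclidean distance from point p to the sample set S.
    return min(math.dist(p, q) for q in S)

def E(S1, S2):
    # One-sided distance E(S1, S2) = max over p in S1 of e(p, S2).
    return max(e(p, S2) for p in S1)

def hausdorff(S1, S2):
    # Two-sided (Hausdorff) distance: the maximum of the two one-sided distances.
    return max(E(S1, S2), E(S2, S1))

def E_mean(S1, S2):
    # Mean distance E_m, approximated as the average of e(p, S2)
    # over the uniform samples p of S1.
    return sum(e(p, S2) for p in S1) / len(S1)
```

For instance, with S1 = [(0,0), (1,0)] and S2 equal to S1 plus the extra point (0,2), E(S1, S2) = 0 while E(S2, S1) = 2, which illustrates the asymmetry noted above.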
3. The Metro Tool

Metro numerically compares two triangle meshes S_1 and S_2, which describe the same surface at different levels of detail. It requires no knowledge of the simplification approach adopted to build the reduced mesh. Metro evaluates the difference between the two meshes on the basis of the approximation error measure defined in the previous section. It adopts an approximate approach based on surface sampling and the computation of point-to-surface distances. The surface of the first mesh (hereafter the pivot mesh) is sampled, and for each elementary surface parcel we compute the distance to the non-pivot mesh.

The idea is therefore to adopt an integration process over the surface. Surface sampling is achieved by scan converting triangular faces under a user-selected sampling resolution. The sampling resolution characterizes the precision of the integration; we observed that in most cases a sufficiently fine sampling step size is 0.1% of the bounding box diagonal.

We also implemented a Montecarlo approach (generate k random points in the interior of each face, with the number k of samples proportional to the facet area), which gave similar results in terms of precision. However, the adoption of Montecarlo sampling makes the error visualization via error-texture mapping impossible, because the latter requires a regular, raster sampling.
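The Montecarlo variant can be sketched as follows (an illustration with names of ours, not Metro's actual code): k uniform samples per face, with k proportional to the face area, drawn by folding barycentric coordinates back into the triangle.

```python
import random

def triangle_area(a, b, c):
    # Area of a 3D triangle: half the magnitude of the cross product
    # of its two edge vectors.
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def sample_triangle(a, b, c):
    # Uniform sampling of the triangle interior: draw (u, v) in the unit
    # square and fold points outside the triangle back inside.
    u, v = random.random(), random.random()
    if u + v > 1.0:
        u, v = 1.0 - u, 1.0 - v
    w = 1.0 - u - v
    return tuple(w * a[i] + u * b[i] + v * c[i] for i in range(3))

def montecarlo_samples(a, b, c, samples_per_unit_area):
    # k proportional to the facet area, with at least one sample per face.
    k = max(1, round(triangle_area(a, b, c) * samples_per_unit_area))
    return [sample_triangle(a, b, c) for _ in range(k)]
```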
In an early version of our tool (Metro v.1) a ray-casting approach was adopted to compute point-to-surface distances. In order to improve performance and precision we adopted a different approach in the current release of Metro, v.2. Distances between the sampling point and the non-pivot mesh are now computed efficiently by using a bucketed data structure. Uniform grid (UG) techniques are very effective in geometric computations because in many cases elements which are far apart have little or no effect on each other [1]. Local processing can, therefore, greatly reduce the empirical complexity of many geometric problems. A 3D uniform grid is used in Metro v.2 as an indexing scheme for the fast search of the face nearest to the sampling point. The bounding box of mesh S_2 is partitioned into cubic cells following a regular pattern. Then we store in each cell c_ijk the list of faces of S_2 which intersect c_ijk. For each sampling point p, we first compute the distance between p and all the faces of the non-pivot mesh S_2 contained in the same grid cell as p. Then, adjacent grid cells are processed, in order of increasing distance from p, until we find that all untested cells are farther than the current nearest face.

The distance between p and a single face of S_2 is computed using an optimized algorithm contained in the source code of the POV ray-tracer [12].
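The bucketed search can be sketched as follows. This is a simplified illustration, not Metro's code: for brevity it indexes the vertices of the non-pivot mesh rather than its faces, and it visits cells in Chebyshev shells of increasing radius around the query cell, stopping once every unvisited cell is provably farther than the current best distance.

```python
import math
from collections import defaultdict

class UniformGrid:
    """Bucketed uniform grid over a set of 3D points (an illustrative
    stand-in: Metro buckets whole faces, here we bucket only vertices;
    the cell-expansion logic is the same)."""

    def __init__(self, points, cell_size):
        if not points:
            raise ValueError("empty point set")
        self.points = points
        self.cell = cell_size
        self.buckets = defaultdict(list)
        for i, p in enumerate(points):
            self.buckets[self.key(p)].append(i)

    def key(self, p):
        # Integer cell coordinates of a point.
        return tuple(math.floor(c / self.cell) for c in p)

    def nearest_distance(self, p):
        # Visit cells in Chebyshev shells of increasing radius r around the
        # query cell; any point in shell r is at least (r - 1) * cell away
        # from p, so we stop as soon as that bound exceeds the best distance.
        ci, cj, ck = self.key(p)
        best = math.inf
        r = 0
        while (r - 1) * self.cell <= best:
            for i in range(ci - r, ci + r + 1):
                for j in range(cj - r, cj + r + 1):
                    for k in range(ck - r, ck + r + 1):
                        if max(abs(i - ci), abs(j - cj), abs(k - ck)) != r:
                            continue  # interior cells were already visited
                        for idx in self.buckets.get((i, j, k), ()):
                            best = min(best, math.dist(p, self.points[idx]))
            r += 1
        return best
```

The stopping bound is conservative: the query point may sit anywhere inside its own cell, so a cell at shell radius r can contain points as close as (r − 1) times the cell size, but no closer.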
The strategy adopted implies that uniqueness of the nearest point is not ensured. According to the definition in Section 2, we might find multiple faces at minimal distance from the current sampling point. But if we are looking for the unsigned approximation error, then uniqueness is not a problem (because we are interested only in the value of this distance). Conversely, in the case of signed approximation error evaluation, having points at the same distance but with different signs forces Metro to make a random choice (and introduces a potential imprecision).

The worst-case computational complexity of Metro depends on the surface area A(S_1) of the pivot mesh (measured in squared sampling step units) times the number n_f of faces of the non-pivot mesh. The resulting complexity is O(A(S_1) n_f). But if we use a UG, then we can expect that a much lower number of faces will be tested to compute the minimal distance for each sampling point. We measured in a number of runs that the mean number of faces evaluated for each sampling point is only a few tens (as presented in Table 1). In Table 1 we also report the running times and the number of samples executed by Metro on three different pairs of meshes. Times are in seconds, measured on an SGI O2 workstation (R5000 180 MHz, 96 MB RAM).
An option is provided by Metro to compute a symmetric evaluation of the maximal error. At the end
S_1 (faces no.)   S_2 (faces no.)   sampling step   samples no.   tested faces no. (per sample)   time (sec.)
4,001             69,451            0.2             365,307       30.3                            29
2,867             28,322            0.1             540,667       29.3                            24.7
6,369             67,607            0.1             1,670,420     24.8                            89.8

Table 1: Number of sampling points, sampling step size, time and number of faces tested per sample on three different meshes.
of the sampling process, if the -s option is set, then Metro switches the pivot and non-pivot meshes and executes the sampling again.

Given a sampling step, the mesh may contain triangles which have an area smaller than the squared sampling step. Metro manages this special case by adopting a random choice: a random variable is generated, with the probability of its TRUE value equal to the ratio between the triangle area and the squared sampling step. If the random value is TRUE, a single point-to-surface distance is computed; otherwise, Metro moves on to the scan conversion of the next face.
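The random choice for sub-step triangles can be sketched as below (an illustration with names of ours; the scan-conversion branch is reduced to a simple sample count). The point is that the expected number of samples per face stays proportional to its area, so small faces do not bias the integration.

```python
import random

def samples_for_face(area, squared_step, rng=random.random):
    # Faces at least as large as the squared sampling step are scan
    # converted (here reduced to a count proportional to their area);
    # smaller faces get a single sample with probability
    # area / squared_step, keeping the expected sample count
    # proportional to the face area.
    if area >= squared_step:
        return round(area / squared_step)
    return 1 if rng() < area / squared_step else 0
```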
Metro Input

Metro has a command-line input interface. The options available are shown, as usual, by typing metro -h; they are listed in Figure 3. The data formats accepted in input are either the OpenInventor [19] format or a raw indexed representation (a list of vertex coordinates, and a list of triangular faces, each defined by three indices into the vertex list).

The two meshes should have similar shapes (as in a multiple level of detail representation). If the shapes differ too much, with the disappearance of significant features, the computation of the error might be locally imprecise. Metro considers the difference between two meshes excessive if their bounding box diagonals differ in length by more than 10%.

If the surfaces to be compared are not orientable or multiply connected, then it is impossible to distinguish between positive and negative errors (i.e. whether the low detail mesh passes below or above the high detail mesh).
Metro Output

Metro returns both numerical and visual evaluations of the surface meshes' "likeness" (Figure 5 shows a snapshot of its GUI).

The format of the numerical results is reported in Figure 4. It contains data on the input meshes' characteristics (topology, size, surface area, mesh volume, total length of the feature edges, diagonal of the minimal bounding box, diameter of the minimal bounding sphere); the mean and maximum distances between the meshes (returned both as absolute measures and as percentages of the diagonal of the mesh bounding box); and a very rough approximation of the positive, negative and total volume of the difference between the two meshes (i.e. the total volume V_t is the volume of (S_1 − S_2) ∪ (S_2 − S_1)).

All the positive/negative measures follow the definitions in Section 2, and can be computed only if the input surfaces are orientable and single-connected.

The error is also visualized by coloring the pivot mesh with respect to the evaluated approximation error. Two different color mapping modalities are available:

per-vertex mapping: we compute the error on each mesh vertex (as the mean of the errors on the incident faces), and assign a color proportional to that error. The faces are then colored by interpolating vertex colors;

error-texture mapping: for each face, an RGB texture is computed which stores the color-coded errors evaluated on each sampling point (mapped on a color scale).
The error-texture mapping approach gives visual results which are in general more precise, but whose visualization depends on the sampling step size used by Metro. See for example in Figure 6 the different visual representations of the same mesh zone. In both cases, a histogram reporting the error distribution is also visualized on the left of the Metro output window (Figure 5).

When the error-texture mapping is used, we can also visualize the error by considering its sign: zero error maps to green, negative and positive errors to red and blue respectively (see Figure 7).
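A sketch of these two color-mapping steps (an illustrative linear ramp of ours, not Metro's exact palette):

```python
def error_to_rgb(err, max_abs_err):
    # Signed color scale: zero error -> green, positive -> ramp toward
    # blue, negative -> ramp toward red; clamped at +/- max_abs_err.
    t = max(-1.0, min(1.0, err / max_abs_err))
    if t >= 0.0:
        return (0.0, 1.0 - t, t)      # green fading into blue
    return (-t, 1.0 + t, 0.0)         # green fading into red

def vertex_error(incident_face_errors):
    # Per-vertex mode: a vertex's error is the mean of the errors
    # evaluated on its incident faces.
    return sum(incident_face_errors) / len(incident_face_errors)
```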
Limited numerical precision management

The error evaluated by Metro may be affected by limited numerical precision, although double precision is adopted in the numerical computations. An "ad hoc" management has been provided for a number of dangerous cases, such as nearly coincident vertices, facets
Usage: Metro file1 file2 [-a# -e# -h -l# -s ] [-r] [-q|v] [-b|bs|t]
file1, file2 : input meshes to be compared;
-a# crease angle setting for feature edges detection and
classification. The angle value "#" is given in degrees,
from 0 (all edges are classified 'feature edge') to 180 degrees.
(it is used to measure the total length of the feature edges);
-b show error using "error-texture" mode (DEFAULT is "per-vertex" mode)
-bs show error using "signed error-texture" mode (green==> error=0);
-e# set the maximal absolute error in the histogram scale and color mapping;
(it is useful to compare visually the results of two different runs of Metro);
-h show the Metro command syntax (and the options available);
-l# select the scan conversion step (value "#": percentage of the mesh bounding box);
-q use "quiet" (i.e. very synthetic) output;
-r use "Montecarlo" sampling (DEFAULT: use scan conversion);
-s compute symmetric maximum distance (double run);
-t set text mode only, do not visualize results under OpenInventor;
-v verbose output.
Example: metro -v meshcomp.iv mesh.iv -l0.5 -a45
Figure 3: Metro input options.
Figure 5: The Metro graphic output window.
with small area, and very elongated triangles. Another problem may be the computation of the sum of hundreds of thousands of nearly zero values. To minimize rounding errors in the computation of this sum, we used a fan-in algorithm (binary tree structured sum [11]).
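The fan-in sum can be sketched as follows (a generic pairwise summation, not Metro's exact code):

```python
def fanin_sum(values):
    # Pairwise (binary tree) summation: add values in a balanced tree so
    # that the accumulated rounding error grows with the tree depth,
    # O(log n), rather than with the number of terms, O(n), as in a
    # naive left-to-right running sum.
    vals = list(values)
    if not vals:
        return 0.0
    while len(vals) > 1:
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:
            paired.append(vals[-1])  # carry the odd element up unchanged
        vals = paired
    return vals[0]
```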
4. Concluding Remarks

We have introduced a new tool, Metro, to allow simple comparisons between surfaces. Its main use is in the evaluation of the error introduced in the simplification of surfaces. Metro returns both numerical and visual evaluations of the meshes' likeness. These measures are computed using an error defined as an ap-

Citations
More filters
Journal ArticleDOI
TL;DR: This work extends Poisson surface reconstruction to explicitly incorporate the points as interpolation constraints and presents several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster, higher-quality surface reconstructions.
Abstract: Poisson surface reconstruction creates watertight surfaces from oriented point sets. In this work we extend the technique to explicitly incorporate the points as interpolation constraints. The extension can be interpreted as a generalization of the underlying mathematical framework to a screened Poisson equation. In contrast to other image and geometry processing techniques, the screening term is defined over a sparse set of points rather than over the full domain. We show that these sparse constraints can nonetheless be integrated efficiently. Because the modified linear system retains the same finite-element discretization, the sparsity structure is unchanged, and the system can still be solved using a multigrid approach. Moreover we present several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster, higher-quality surface reconstructions.

1,712 citations

Proceedings ArticleDOI
27 Oct 2002
TL;DR: This work has implemented incremental and hierarchical clustering, iterative simplification, and particle simulation algorithms to create approximations of point-based models with lower sampling density, and shows how local variation estimation and quadric error metrics can be employed to diminish the approximation error.
Abstract: In this paper we introduce, analyze and quantitatively compare a number of surface simplification methods for point-sampled geometry. We have implemented incremental and hierarchical clustering, iterative simplification, and particle simulation algorithms to create approximations of point-based models with lower sampling density. All these methods work directly on the point cloud, requiring no intermediate tesselation. We show how local variation estimation and quadric error metrics can be employed to diminish the approximation error and concentrate more samples in regions of high curvature. To compare the quality of the simplified surfaces, we have designed a new method for computing numerical and visual error estimates for point-sampled surfaces. Our algorithms are fast, easy to implement, and create high-quality surface approximations, clearly demonstrating the effectiveness of point-based surface simplification.

920 citations


Cites methods from "Metro: measuring error on simplifie..."

  • ...Similar to the Metro tool [4], we use a sampling approach to approximate surface error....

    [...]

Journal ArticleDOI
TL;DR: In this article, a 3D point cloud comparison method is proposed to measure surface changes via 3D surface estimation and orientation in 3D at a scale consistent with the local surface roughness.
Abstract: Surveying techniques such as terrestrial laser scanner have recently been used to measure surface changes via 3D point cloud (PC) comparison. Two types of approaches have been pursued: 3D tracking of homologous parts of the surface to compute a displacement field, and distance calculation between two point clouds when homologous parts cannot be defined. This study deals with the second approach, typical of natural surfaces altered by erosion, sedimentation or vegetation between surveys. Current comparison methods are based on a closest point distance or require at least one of the PC to be meshed with severe limitations when surfaces present roughness elements at all scales. To solve these issues, we introduce a new algorithm performing a direct comparison of point clouds in 3D. The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local surface roughness; (2) measurement of the mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing methods demonstrates the higher accuracy of our approach, as well as an easier workflow due to the absence of surface meshing or Digital Elevation Model (DEM) generation. Application of the method in a rapidly eroding, meandering bedrock river (Rangitikei River canyon) illustrates its ability to handle 3D differences in complex situations (flat and vertical surfaces on the same scene), to reduce uncertainty related to point cloud roughness by local averaging and to generate 3D maps of uncertainty levels. We also demonstrate that for high precision survey scanners, the total error budget on change detection is dominated by the point clouds registration error and the surface roughness. Combined with mm-range local georeferencing of the point clouds, levels of detection down to 6 mm (defined at 95% confidence) can be routinely attained in situ over ranges of 50 m. 
We provide evidence for the self-affine behaviour of different surfaces. We show how this impacts the calculation of normal vectors and demonstrate the scaling behaviour of the level of change detection. The algorithm has been implemented in a freely available open source software package. It operates in complex 3D cases and can also be used as a simpler and more robust alternative to DEM differencing for the 2D cases.

881 citations


Cites methods from "Metro: measuring error on simplifie..."

  • ...Surface change is calculated by the distance between a point cloud and a reference 3D mesh or theoretical model (Cignoni and Rocchini, 1998), see also Monserrat and Crosetto (2008) and Olsen et al....

    [...]

Proceedings ArticleDOI
07 Nov 2002
TL;DR: An efficient method to estimate the distance between discrete 3D surfaces represented by triangular 3D meshes based on an approximation of the Hausdorff distance is proposed.
Abstract: This paper proposes an efficient method to estimate the distance between discrete 3D surfaces represented by triangular 3D meshes. The metric used is based on an approximation of the Hausdorff distance, which has been appropriately implemented in order to reduce unnecessary computation and memory usage. Results show that when compared to similar tools, a significant gain in both memory and speed can be achieved.

751 citations


Cites methods from "Metro: measuring error on simplifie..."

  • ...The comparisons with Metro [ 3 ], a similar tool, show that Mesh is very fast, memory efficient and provides stable distance measures....

    [...]

  • ...One effective technique to achieve a large reduction of the number of point-triangle distance evaluations, also used in [ 3 ], is to use a uniform grid....

    [...]

  • ...In this paper, we present an efficient tool to evaluate the distance between 3D models, similar to Metro[ 3 ]....

    [...]

Book
28 Mar 2012
TL;DR: Level of Detail for 3D Graphics brings together, for the first time, the mechanisms, principles, practices, and theory needed by every graphics developer seeking to apply LOD methods.
Abstract: From the Publisher: Level of detail (LOD) techniques are increasingly used by professional real-time developers to strike the balance between breathtaking virtual worlds and smooth, flowing animation. Level of Detail for 3D Graphics brings together, for the first time, the mechanisms, principles, practices, and theory needed by every graphics developer seeking to apply LOD methods. Continuing advances in level of detail management have brought this powerful technology to the forefront of 3D graphics optimization research. This book, written by the very researchers and developers who have built LOD technology, is both a state-of-the-art chronicle of LOD advances and a practical sourcebook, which will enable graphics developers from all disciplines to apply these formidable techniques to their own work. Features Is a complete, practical resource for programmers wishing to incorporate LOD technology into their own systems. Is an important reference for professionals in game development, computer animation, information visualization, real-time graphics and simulation, data capture and preview, CAD display, and virtual worlds. Is accessible to anyone familiar with the essentials of computer science and interactive computer graphics. Covers the full range of LOD methods from mesh simplification to error metrics, as well as advanced issues of human perception, temporal detail, and visual fidelity measurement. Includes an accompanying Web site rich in supplementary material including source code, tools, 3D models, public domain software, documentation, LOD updates, and more. Author Biography:David Luebke David is an Assistant Professor in the Department of Computer Science at the University of Virginia. His principal research interest is the problem of rendering very complex scenes at interactive rates. His research focuses on software techniques such as polygonal simplification and occlusion culling to reduce the complexity of such scenes to manageable levels. 
Luebke's dissertation research, summarized in a SIGGRAPH '97 paper, introduced a dynamic, view-dependent approach to polygonal simplification for interactive rendering of extremely complex CAD models. He earned his Ph.D. at the University of North Carolina, and his Bachelors degree at the Colorado College. Martin Reddy Martin is a Senior Computer Scientist at SRI International where he works in the area of terrain visualization. This work involves the real-time display of massive terrain databases that are distributed over wide-area networks. His research interests include level of detail, visual perception, and computer graphics. His doctoral research involved the application of models of visual perception to real-time computer graphics systems, enabling the selection of level of detail based upon measures of human perception. He received his B.Sc. from the University of Strathclyde and his Ph.D. from the University of Edinburgh, UK. He is on the Board of Directors of the Web3D Consortium and chair of the GeoVRML Working Group. Jonathan D. Cohen Jon is an Assistant Professor in the Department of Computer Science at The Johns Hopkins University. He earned his Doctoral and Masters degrees from The University of North Carolina at Chapel Hill and earned his Bachelors degree from Duke University. His interests include polygonal simplification and other software acceleration techniques, parallel rendering architectures, collision detection, and high-quality interactive computer graphics. Amitabh Varshney Amitabh is an Associate Professor in the Department of Computer Science at the University of Maryland. His research interests lie in interactive computer graphics, scientific visualization, molecular graphics, and CAD. 
Varshney has worked on several aspects of level-of-detail simplifications including topology-preserving and topology-reducing simplifications, view-dependent simplifications, parallelization of simplification computation, as well as using triangle strips in multiresolution rendering. Varshney received his PhD and MS from the University of North Carolina at Chapel Hill in 1994 and 1991 respectively. He received his B. Tech. in Computer Science from the Indian Institute of Technology at Delhi in 1989. Benjamin Watson Ben is an Assistant Professor in Computer Science at Northwestern University. He earned his doctoral and Masters degrees at Georgia Tech's GVU Center, and his Bachelors degree at the University of California, Irvine. His dissertation focused on user performance effects of dynamic level of detail management. His other research interests include object simplification, medical applications of virtual reality, and 3D user interfaces. Robert Huebner Robert is the Director of Technology at Nihilistic Software, an independent development studio located in Marin County, California. Prior to co-founding Nihilistic, Robert has worked on a number of successful game titles including "Jedi Knight: Dark Forces 2" for LucasArts Entertainment, "Descent" for Parallax Software, and "Starcraft" for Blizzard Entertainment. Nihilistic's first title, "Vampire The Masquerade: Redemption" was released for the PC in 2000 and sold over 500,000 copies worldwide. Nihilistic's second project will be released in the Winter of 2002 on next-generation game consoles. Robert has spoken on game technology topics at SIGGRAPH, the Game Developer's Conference (GDC), and Electronic Entertainment Expo (E3). He also serves on the advisory board for the Game Developer's Conference and the International Game Developer's Association (IGDA). Robert's e-mail address is .

680 citations

References
More filters
Proceedings ArticleDOI
03 Aug 1997
TL;DR: This work has developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models, and which also supports non-manifold surface models.
Abstract: Many applications in computer graphics require complex, highly detailed models. However, the level of detail actually necessary may vary considerably. To control processing time, it is often desirable to use approximations in place of excessively detailed models. We have developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs to simplify models and maintains surface error approximations using quadric matrices. By contracting arbitrary vertex pairs (not just edges), our algorithm is able to join unconnected regions of models. This can facilitate much better approximations, both visually and with respect to geometric error. In order to allow topological joining, our system also supports non-manifold surface models. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representations
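The quadric error idea summarized in this abstract can be illustrated with a minimal sketch (not the paper's implementation; function names are ours): each plane p = (a, b, c, d) with unit normal (a, b, c) contributes a fundamental quadric K_p = p pᵀ, and the error of a homogeneous vertex v = (x, y, z, 1) with respect to an accumulated quadric Q is vᵀ Q v, the sum of squared distances to the contributing planes.

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental error quadric K_p = p p^T for the plane ax + by + cz + d = 0,
    where (a, b, c) is a unit normal."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, x, y, z):
    """Sum of squared plane distances v^T Q v for homogeneous v = (x, y, z, 1)."""
    v = np.array([x, y, z, 1.0])
    return float(v @ Q @ v)

# Example: quadric for the plane z = 0. A vertex on the plane has zero error;
# a vertex one unit above it has error 1. Summing quadrics from several planes
# accumulates error, which is what drives the contraction ordering.
Q = plane_quadric(0.0, 0.0, 1.0, 0.0)
```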

3,564 citations

Proceedings ArticleDOI
Hugues Hoppe1
01 Aug 1996
TL;DR: The progressive mesh (PM) representation is introduced, a new scheme for storing and transmitting arbitrary triangle meshes that addresses several practical problems in graphics: smooth geomorphing of level-of-detail approximations, progressive transmission, mesh compression, and selective refinement.
Abstract: Highly detailed geometric models are rapidly becoming commonplace in computer graphics. These models, often represented as complex triangle meshes, challenge rendering performance, transmission bandwidth, and storage capacities. This paper introduces the progressive mesh (PM) representation, a new scheme for storing and transmitting arbitrary triangle meshes. This efficient, lossless, continuous-resolution representation addresses several practical problems in graphics: smooth geomorphing of level-of-detail approximations, progressive transmission, mesh compression, and selective refinement. In addition, we present a new mesh simplification procedure for constructing a PM representation from an arbitrary mesh. The goal of this optimization procedure is to preserve not just the geometry of the original mesh, but more importantly its overall appearance as defined by its discrete and scalar appearance attributes such as material identifiers, color values, normals, and texture coordinates. We demonstrate construction of the PM representation and its applications using several practical models

3,206 citations

Proceedings ArticleDOI
01 Jul 1992
TL;DR: An application independent algorithm that uses local operations on geometry and topology to reduce the number of triangles in a triangle mesh and results from two different geometric modeling applications illustrate the strengths of the algorithm.
Abstract: The polygon remains a popular graphics primitive for computer graphics application. Besides having a simple representation, computer rendering of polygons is widely supported by commercial graphics hardware and software. However, because the polygon is linear, often thousands or millions of primitives are required to capture the details of complex geometry. Models of this size are generally not practical since rendering speeds and memory requirements are proportional to the number of polygons. Consequently applications that generate large polygonal meshes often use domain-specific knowledge to reduce model size. There remain algorithms, however, where domainspecific reduction techniques are not generally available or appropriate. One algorithm that generates many polygons is marching cubes. Marching cubes is a brute force surface construction algorithm that extracts isodensity surfaces from volume data, producing from one to five triangles within voxels that contain the surface. Although originally developed for medical applications, marching cubes has found more frequent use in scientific visualization where the size of the volume data sets are much smaller than those found in medical applications. A large computational fluid dynamics volume could have a finite difference grid size of order 100 by 100 by 100, while a typical medical computed tomography or magnetic resonance scanner produces over 100 slices at a resolution of 256 by 256 or 512 by 512 pixels each. Industrial computed tomography, used for inspection and analysis, has even greater resolution, varying from 512 by 512 to 1024 by 1024 pixels. For these sampled data sets, isosurface extraction using marching cubes can produce from 500k to 2,000k triangles. Even today’s graphics workstations have trouble storing and rendering models of this size. Other sampling devices can produce large polygonal models: range cameras, digital elevation data, and satellite data. 
The sampling resolution of these devices is also improving, resulting in model sizes that rival those obtained from medical scanners. This paper describes an application independent algorithm that uses local operations on geometry and topology to reduce the number of triangles in a triangle mesh. Although our implementation is for the triangle mesh, it can be directly applied to the more general polygon mesh. After describing other work related to model creation from sampled data, we describe the triangle decimation process and its implementation. Results from two different geometric modeling applications illustrate the strengths of the algorithm.

1,790 citations

Proceedings ArticleDOI
01 Sep 1993
TL;DR: In this article, the authors present a method for solving the following problem: given a set of data points scattered in three dimensions and an initial triangular mesh M0, produce a mesh M, of the same topological type as M0 that fits the data well and has a small number of vertices.
Abstract: We present a method for solving the following problem: Given a set of data points scattered in three dimensions and an initial triangular mesh M0, produce a mesh M, of the same topological type as M0, that fits the data well and has a small number of vertices. Our approach is to minimize an energy function that explicitly models the competing desires of conciseness of representation and fidelity to the data. We show that mesh optimization can be effectively used in at least two applications: surface reconstruction from unorganized points, and mesh simplification (the reduction of the number of vertices in an initially dense mesh of triangles).
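The energy function described in this abstract balances fidelity against conciseness. A rough sketch of that trade-off (our simplification, not Hoppe et al.'s formulation: their energy also includes a spring term, and measures distance to the mesh surface rather than to its nearest vertex):

```python
import numpy as np

def energy(data_points, mesh_vertices, c_rep=1e-3):
    """Toy mesh-optimization energy: fidelity term + representation penalty.

    E_dist: sum of squared distances from each data point to its nearest
    mesh vertex (a crude stand-in for point-to-surface distance).
    E_rep:  penalty proportional to the vertex count, rewarding conciseness.
    """
    pts = np.asarray(data_points, dtype=float)
    verts = np.asarray(mesh_vertices, dtype=float)
    d2 = ((pts[:, None, :] - verts[None, :, :]) ** 2).sum(axis=-1)
    e_dist = d2.min(axis=1).sum()
    e_rep = c_rep * len(verts)
    return e_dist + e_rep
```

Minimizing such an energy over both vertex positions and mesh connectivity is what lets one procedure serve both surface reconstruction and simplification.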

1,424 citations

Proceedings ArticleDOI
01 Jul 1992
TL;DR: This paper shows how a new set of vertices can be distributed over the surface of a model and connected to one another to create a re-tiling of a surface that is faithful to both the geometry and the topology of the original surface.
Abstract: This paper presents an automatic method of creating surface models at several levels of detail from an original polygonal description of a given object. Representing models at various levels of detail is important for achieving high frame rates in interactive graphics applications and also for speeding-up the off-line rendering of complex scenes. Unfortunately, generating these levels of detail is a time-consuming task usually left to a human modeler. This paper shows how a new set of vertices can be distributed over the surface of a model and connected to one another to create a re-tiling of a surface that is faithful to both the geometry and the topology of the original surface. The main contributions of this paper are: 1) a robust method of connecting together new vertices over a surface, 2) a way of using an estimate of surface curvature to distribute more new vertices at regions of higher curvature and 3) a method of smoothly interpolating between models that represent the same object at different levels of detail. The key notion in the re-tiling procedure is the creation of an intermediate model called the mutual tessellation of a surface that contains both the vertices from the original model and the new points that are to become vertices in the re-tiled surface. The new model is then created by removing each original vertex and locally re-triangulating the surface in a way that matches the local connectedness of the initial surface. This technique for surface retessellation has been successfully applied to iso-surface models derived from volume data, Connolly surface molecular models and a tessellation of a minimal surface of interest to mathematicians.

923 citations

Frequently Asked Questions (5)
Q1. What are the contributions in this paper?

This paper presents a new tool, Metro, designed to compensate for a deficiency in many simplification methods proposed in literature. 

But the added value of Metro in the case of a bounded error method is to give the possibility to view the distribution of the error on the mesh (Figure 5). 

Metro manages this special case by adopting a random choice: a random variable is generated, with the probability of its TRUE value equal to the ratio between the triangle area and the squared sample area. 
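The random choice described in this answer can be sketched as follows (a minimal illustration under our reading of the text; names are hypothetical): a triangle smaller than one sampling cell receives a sample with probability equal to the ratio between its area and the squared sampling step.

```python
import random

def keep_small_triangle(tri_area, sample_step, rng=random.random):
    """Decide whether to place one sample on a triangle smaller than a
    sampling cell: TRUE with probability tri_area / sample_step**2, so that
    small triangles are sampled in proportion to the area they cover."""
    return rng() < tri_area / (sample_step ** 2)
```

This keeps the expected sample density uniform over the surface even where triangles are much smaller than the sampling step.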

An "ad hoc" management has been provided for a number of dangerous cases, such as nearly coincident vertices and facets.

Usage: Metro file1 file2 [-a# -e# -h -l# -s ] [-r] [-q|v] [-b|bs|t]
file1, file2 : input meshes to be compared;
-a# crease angle setting for feature edges detection and classification. 

Given a set of uniformly sampled distances, the authors denote the mean distance Em between two surfaces as the surface integral of the distance divided by the area of S1:

E_m(S_1, S_2) = \frac{1}{|S_1|} \int_{S_1} e(p, S_2)\, ds
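With uniform sampling on S1, this surface integral reduces to a Monte Carlo average of the per-sample distances. A minimal sketch (the sampling and distance routines are hypothetical placeholders, not Metro's implementation):

```python
def mean_distance(samples_on_s1, point_to_surface_distance):
    """Monte Carlo estimate of E_m(S1, S2): averaging e(p, S2) over points p
    sampled uniformly on S1 approximates the surface integral of the distance
    divided by the area |S1|."""
    dists = [point_to_surface_distance(p) for p in samples_on_s1]
    return sum(dists) / len(dists)

# Example with scalar stand-ins: three "samples" whose distances to S2 are
# 0, 1, and 2 give a mean distance of 1.0.
estimate = mean_distance([0.0, 1.0, 2.0], lambda p: p)
```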