UC Davis, IDAV Publications (peer reviewed). Publication date: 1999.
Permalink: https://escholarship.org/uc/item/1mw601pd

On Simulated Annealing and the Construction of Linear
Spline Approximations for Scattered Data
Oliver Kreylos¹,² and Bernd Hamann¹
¹ Center for Image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA 95616-8562, USA
² Institut für Betriebs- und Dialogsysteme, Fakultät für Informatik, Universität Karlsruhe (TH), 76128 Karlsruhe, Germany
Abstract. We describe a method to create optimal linear spline approximations to arbitrary functions of one or two variables, given as scattered data without known connectivity. We start with an initial approximation consisting of a fixed number of vertices and improve this approximation by choosing different vertices, governed by a simulated annealing algorithm. In the case of one variable, the approximation is defined by line segments; in the case of two variables, the vertices are connected to define a Delaunay triangulation of the selected subset of sites in the plane. In a second version of this algorithm, specifically designed for the bivariate case, we choose vertex sets and also change the triangulation to achieve both optimal vertex placement and optimal triangulation. We then create a hierarchy of linear spline approximations, each one being a superset of all lower-resolution ones.
1 Introduction
In several applications one is concerned with the representation of complex geometries or complex physical phenomena at multiple levels of resolution. In the context of computer graphics and scientific visualization, so-called multiresolution methods are crucial for the analysis of very large numerical data sets [1–5]. Examples include high-resolution terrain data (digital elevation maps) and high-resolution, three-dimensional imaging data (e.g., magnetic resonance imaging data).
We present an approach for the construction of multiresolution representations of very large scattered data sets using an iterative optimization algorithm and the principle of simulated annealing [9–12]. Our goal is the computation of several optimal linear spline approximations to a given scattered data set.
We assume that the given data sets are samples of a real function of one or two variables, with the samples randomly distributed in the function's domain and no known connectivity between them. Each individual linear spline approximation is defined by its control points and, in the case of multivariate functions, by the way these points are connected to form a triangulation. We only place control points at given sample positions and only use the supplied function values at those positions.

1.1 Visualizing Large Data Sets
To create a hierarchy of approximations to a given scattered data set we choose N_k vertices from the set at each hierarchy level k. We ensure that the vertex set of any hierarchy level j ≤ k is a subset of level k's vertex set. After having decided which vertices to select for a hierarchy level k, that level's vertices are connected in an appropriate way to form a linear spline's control mesh. An example of such a hierarchy in the univariate case is shown in Fig. 1.
Fig. 1. A hierarchy of approximations in the univariate case. New vertices are inserted at the sites marked by solid triangles.
When representing high-resolution data sets with low-resolution linear spline approximations, one has to be careful where to place the spline's control points and how to connect them in order to achieve a faithful representation of the data set (see Fig. 2).
Fig. 2. Uniform vs. optimal control point placement for univariate data.
If the number of vertices for an approximation level is prescribed, one has to address two problems:
1. Which vertices should one choose for the approximation, i.e., how should one create the vertex placement?
2. How should one connect the chosen vertices, i.e., how should one create the connectivity?
In the special case of a function of one variable, we only have to address the first problem, since in the univariate case the connectivity is defined by the chosen sites' numerical order.

1.2 Finding Optimal Approximations
Our approach to finding an optimal linear spline approximation for a given, fixed number of vertices N_k is based on an iterative optimization algorithm. First, we create an initial configuration; then we improve this configuration by changing its vertex placement and its connectivity in every step. We judge a configuration's quality by its L_2 distance from the scattered data set. Since this optimization problem is high-dimensional and generally involves local minima in abundance, the algorithm of simulated annealing is well suited to construct "good" linear spline approximations [12, 9].
Simulated annealing is an iterative method that applies random changes to the current configuration and accepts a step depending on the resulting change of the error measure and a value called "temperature." This value determines the probability of accepting a step that increases the error measure: The higher the temperature, the higher the probability of accepting a bad step. The so-called "annealing schedule" determines how fast the temperature is decreased during the iteration.
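The paper only states that higher temperatures make accepting a bad step more likely; a standard way to realize this is the Metropolis acceptance rule, which the following C++ sketch illustrates. The rule itself and the function name are assumptions, not the authors' code: steps that do not increase the error are always accepted, and worsening steps are accepted with probability exp(-ΔE / T).

#include <cmath>
#include <random>

// Metropolis-style acceptance rule (an assumption; see the lead-in above):
// improvements are always accepted, worsening steps are accepted with
// probability exp(-deltaE / temperature).
bool acceptStep(double deltaE, double temperature, std::mt19937& rng)
{
    if (deltaE <= 0.0)
        return true;                               // error decreased: always accept
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < std::exp(-deltaE / temperature);
}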
In the case of two or more variables the quality of a configuration depends on both vertex placement and connectivity. There are two different ways to proceed:
1. One can ignore the optimization of the connectivity by enforcing a fixed type of connectivity throughout the iteration process; in the bivariate case, an obvious candidate is the Delaunay triangulation [6]. Under this constraint the algorithm can proceed exactly as in the univariate case.
2. One can attempt to optimize both parts of the configuration in parallel. For example, before each step one could randomly decide to either move a vertex or swap a common edge of two adjacent triangles.
2 The Optimization Algorithm
We now describe the individual steps of our algorithm. Algorithm 1 is a high-level
description. The subsequent sections describe the important steps in more detail.
Algorithm 1: Optimal linear spline approximation.
Create initial configuration (vertex placement and connectivity);
Determine initial temperature and create annealing schedule;
While iteration is not finished {
Change current configuration;
Calculate change in error measure;
Undo iteration if rejected by simulated annealing; }
Return current configuration;
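As a concrete illustration of Algorithm 1, the following C++ sketch implements the main loop. The Configuration type and the change/undo callbacks are placeholder names, not code from the paper: change() applies a random modification as described in Section 2.3 and returns the resulting change ΔE of the error measure, and undo() reverts it. The cooling parameters correspond to the stepwise schedule of Section 2.2.

#include <cmath>
#include <functional>
#include <random>

// Sketch of Algorithm 1 under the assumptions stated above.
template <class Configuration>
Configuration anneal(Configuration config,
                     std::function<double(Configuration&)> change,   // returns delta E
                     std::function<void(Configuration&)> undo,       // reverts last change
                     double temperature, double coolingFactor,
                     int stepsPerTemperature, int temperatureLevels)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (int level = 0; level < temperatureLevels; ++level) {
        for (int step = 0; step < stepsPerTemperature; ++step) {
            double deltaE = change(config);                          // random change (Sec. 2.3)
            bool accept = deltaE <= 0.0 ||
                          u(rng) < std::exp(-deltaE / temperature);  // Metropolis rule
            if (!accept)
                undo(config);                                        // rejected: restore configuration
        }
        temperature *= coolingFactor;                                // stepwise cooling (Sec. 2.2)
    }
    return config;
}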
2.1 Creating an Initial Configuration
Our approximations are defined over the original sites' convex hull. In the univariate case, we cover the convex hull by choosing the leftmost and the rightmost original vertices and distribute the rest of the vertices uniformly between them. In the multivariate case, we cover the convex hull by always selecting all non-interior vertices; then we choose the rest of the vertices randomly from the original data set. In the bivariate case, we define the initial connectivity by a Delaunay triangulation of the initial vertices' sites.
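For the univariate case, the following sketch selects an initial vertex set that covers the convex hull (leftmost and rightmost sites) and spreads the remaining vertices as uniformly as the given sample sites allow, snapping each uniform target position to the nearest existing site since control points may only be placed at sample positions. It is an illustration of the selection rule above, not the authors' code, and the function name is ours.

#include <algorithm>
#include <cstddef>
#include <vector>

// Univariate initial configuration: choose n of the given sample sites,
// always including the leftmost and rightmost, with the rest spread
// approximately uniformly in between. Requires n >= 2 and at least two sites;
// for very dense target spacings a site may be chosen twice.
std::vector<double> initialVertices(std::vector<double> sites, std::size_t n)
{
    std::sort(sites.begin(), sites.end());                  // order sites left to right
    const double xMin = sites.front(), xMax = sites.back();
    std::vector<double> chosen;
    for (std::size_t i = 0; i < n; ++i) {
        // i-th uniformly spaced target position in [xMin, xMax]
        double target = xMin + (xMax - xMin) * double(i) / double(n - 1);
        // snap to the nearest existing sample site
        auto it = std::lower_bound(sites.begin(), sites.end(), target);
        if (it == sites.end()) --it;
        else if (it != sites.begin() && target - *(it - 1) < *it - target) --it;
        chosen.push_back(*it);
    }
    return chosen;
}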
2.2 Creating an Annealing Schedule
A reasonable heuristic for defining the initial temperature is to apply some steps of the iteration scheme and to choose the initial temperature such that the annealing algorithm initially accepts an "expected bad" step with a probability of one half. Next, we lower the temperature in steps, leaving it constant for a fixed number of iterations and scaling it by a fixed factor afterwards.
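Assuming the Metropolis rule sketched in Section 1.2, accepting an "expected bad" step with probability one half means exp(-ΔE_bad / T_0) = 1/2, i.e. T_0 = ΔE_bad / ln 2. The following sketch derives T_0 from a few recorded trial steps; the exact way of averaging the bad trial steps is our assumption, since the paper only states the probability-one-half goal.

#include <cmath>
#include <vector>

// Initial-temperature heuristic: average the error increases of the "bad"
// trial steps and set T0 so that such an average bad step is accepted with
// probability 1/2:  exp(-avgBad / T0) = 1/2  =>  T0 = avgBad / ln 2.
double initialTemperature(const std::vector<double>& trialDeltaE)
{
    double sum = 0.0;
    int count = 0;
    for (double dE : trialDeltaE)
        if (dE > 0.0) { sum += dE; ++count; }      // keep only error-increasing steps
    const double avgBad = (count > 0) ? sum / count : 1.0;   // fallback if no bad steps occurred
    return avgBad / std::log(2.0);
}

The resulting temperature is then kept constant for a fixed number of iterations and multiplied by a fixed factor, as in the cooling loop sketched after Algorithm 1.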
2.3 Changing the Current Configuration
The simulated annealing algorithm's core is its iteration step. In principle, one can use any method to change the current configuration, but we have found that the "split" approach, shown in Algorithm 2, works very well.
Algorithm 2: Changing the current configuration.
if(acceptWithProbability(moveProbability)) { /* move a vertex */
Choose an interior vertex v;
Estimate v’s contribution vE to the error measure;
if(vE < localMovementFactor * E)
Move v globally;
else
Move v locally;
if(moveProbability == 1) /* Vertex movements only? */
Restore Delaunay property; }
else { /* swap an edge */
Choose a swappable edge e;
Swap edge e; }
The constant moveProbability is used to control the behaviour of the optimization
process for bivariate functions. If this constant’s value is one, the algorithm moves a
vertex in every step, and after each vertex movement the current triangulation is updated
to satisfy the Delaunay property. In the other case the algorithm can either move a
vertex or swap an edge, thereby optimizing both vertex placement and triangulation
simultaneously.
Estimating a Vertex's Error Contribution. To estimate how much the removal of an interior vertex v would increase the current error measure, we estimate the "volume" of v's platelet: We construct an approximating least-squares hyperplane H for all vertices surrounding v. Then we calculate h as v's ordinate-direction distance from H and A as the area of v's platelet, see Fig. 3. We define the error contribution as A·h²/2 in the univariate case and as A·h²/3 in the bivariate case, to ensure that the ratio of a vertex's error contribution and the used L_2 error measure is scale-invariant.
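A small univariate sketch of this estimate follows. For an interior vertex the least-squares hyperplane through the surrounding vertices degenerates to the line through the left and right neighbours, so h is v's vertical distance from that line and A is the platelet length; the A·h²/2 weighting is the univariate formula above as reconstructed here. The struct and function names are ours, not the authors'.

#include <cmath>

struct Sample { double x, f; };    // a sample site and its function value

// Univariate error-contribution estimate for an interior vertex v with left
// neighbour l and right neighbour r: h is v's ordinate-direction distance
// from the line through l and r, A is the platelet length, and the
// contribution is A * h^2 / 2.
double errorContribution(const Sample& l, const Sample& v, const Sample& r)
{
    const double t = (v.x - l.x) / (r.x - l.x);     // relative position of v inside [l, r]
    const double onLine = (1.0 - t) * l.f + t * r.f; // value of the line H at v.x
    const double h = std::fabs(v.f - onLine);        // ordinate-direction distance from H
    const double A = r.x - l.x;                      // platelet "area" (interval length)
    return A * h * h / 2.0;
}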

References
Metropolis, N., et al.: Equation of State Calculations by Fast Computing Machines.
Preparata, F. P., Shamos, M. I.: Computational Geometry: An Introduction.
Edelsbrunner, H.: Algorithms in Combinatorial Geometry.
Frequently Asked Questions
Q1. What have the authors contributed in "On simulated annealing and the construction of linear spline approximations for scattered data"?
The authors describe a method to create optimal linear spline approximations to arbitrary functions of one or two variables, given as scattered data without known connectivity. They start with an initial approximation consisting of a fixed number of vertices, improve it by choosing different vertices governed by a simulated annealing algorithm, and then create a hierarchy of linear spline approximations, each one being a superset of all lower-resolution ones. The main areas for future research are the generalization of their algorithm to functions of three and more variables and the application of their method to image and video compression.

Further statements extracted from the paper:
To move a vertex locally, the authors "slide" the vertex on the line from its old to its new site, dragging the edges connecting it to all surrounding vertices along.
The sixth test case is a scattered data set consisting of 37,594 vertices, resulting from a laser scan of a Ski-Doo hood, and a linear spline approximation with 1,000 vertices and a general triangulation, see Fig. 11.