
Showing papers in "Computing in 2007"


Journal ArticleDOI
TL;DR: The main goal in this paper is to extend the error estimation approach to iterative regularization schemes (and time-continuous flows) that have emerged recently as multiscale restoration techniques and could remedy some shortcomings of the variational schemes.
Abstract: In this paper, we consider error estimation for image restoration problems based on generalized Bregman distances. This error estimation technique has been used by the authors before to derive convergence rates of variational regularization schemes for linear and nonlinear inverse problems (cf. Burger in Inverse Prob 20: 1411–1421, 2004; Resmerita in Inverse Prob 21: 1303–1314, 2005; Inverse Prob 22: 801–814, 2006), but so far it has not been applied to image restoration in a systematic way. Due to the flexibility of the Bregman distances, this approach is particularly attractive for imaging tasks, where singular energies (non-differentiable, not strictly convex) are often used to achieve certain goals such as the preservation of edges. Besides the discussion of the variational image restoration schemes, our main goal in this paper is to extend the error estimation approach to iterative regularization schemes (and time-continuous flows) that have emerged recently as multiscale restoration techniques and could remedy some shortcomings of the variational schemes. We derive error estimates between the iterates and the exact image in the case of both clean and noisy data, the latter also giving indications on the choice of termination criteria. The error estimates are applied to various image restoration approaches such as denoising and decomposition by total variation and wavelet methods. We shall see that interesting results for various restoration approaches can be deduced from our general results by just exploring the structure of subgradients.
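As a point of reference (standard notation, not quoted from the paper): for a convex functional J and a subgradient p ∈ ∂J(v), the generalized Bregman distance used in such estimates is

$$ D_J^{p}(u, v) = J(u) - J(v) - \langle p, u - v \rangle, \qquad p \in \partial J(v), $$

which is well defined even for singular energies such as total variation, where J is non-differentiable and not strictly convex.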

120 citations


Journal ArticleDOI
TL;DR: The space-time sparse grid approach can be employed together with adaptive refinement in space and time and then leads to similar approximation rates as the non-adaptive method for smooth functions.
Abstract: In this paper, we consider the discretization in space and time of parabolic differential equations where we use the so-called space-time sparse grid technique. It employs the tensor product of a one-dimensional multilevel basis in time and a proper multilevel basis in space. This way, the additional order of complexity of a direct space-time discretization can be avoided, provided that the solution fulfills a certain smoothness assumption in space-time, namely that its mixed space-time derivatives are bounded. This holds in many applications due to the smoothing properties of the propagator of the parabolic PDE (heat kernel). In the more general case, the space-time sparse grid approach can be employed together with adaptive refinement in space and time and then leads to similar approximation rates as the non-adaptive method for smooth functions. We analyze the properties of different space-time sparse grid discretizations for parabolic differential equations from both the theoretical and the practical point of view, discuss their implementational aspects and report on the results of numerical experiments.
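A sketch of the construction in our own notation (a plausible reading, not the paper's exact definition): given multilevel spaces V_l in time and W_k in space, the space-time sparse grid space discards all tensor product contributions above a diagonal cut-off,

$$ \hat{V}_L = \sum_{l + k \le L} V_l \otimes W_k, $$

which avoids the complexity of the full tensor product space while retaining nearly the same accuracy for solutions with bounded mixed space-time derivatives.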

90 citations


Journal ArticleDOI
TL;DR: The focus of this presentation is on issues of flexible implementation of the Fourier transform on sparse grids in multiple dimensions and on numerical studies of its convergence.
Abstract: The pseudo-spectral method together with a Strang splitting is well suited for the discretization of the time-dependent Schrödinger equation with smooth potential. The curse of dimensionality limits this approach to low dimensions, if we stick to full grids. Theoretically, sparse grids allow accurate computations in (moderately) higher dimensions, provided that we supply an efficient Fourier transform. Motivated by this application, the design of the Fourier transform on sparse grids in multiple dimensions is described in detail. The focus of this presentation is on issues of flexible implementation and numerical studies of the convergence.
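To make the time-stepping concrete, here is a minimal 1D sketch in our own code (on a full grid, not a sparse one; the grid resolution and the harmonic potential are illustrative placeholders): one Strang-splitting step of the pseudo-spectral method applies a half-step with the potential, a full kinetic step in Fourier space, and another half-step with the potential.

import numpy as np

# Minimal 1D pseudo-spectral Strang splitting for i u_t = -u_xx/2 + V(x) u.
# Illustrative sketch only: full grid, harmonic placeholder potential.
n, L, dt = 256, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # Fourier wave numbers
V = 0.5 * x**2                               # smooth placeholder potential
u = np.exp(-x**2).astype(complex)            # initial wave packet

def strang_step(u):
    u = np.exp(-0.5j * dt * V) * u           # half-step with the potential
    u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))  # kinetic step
    return np.exp(-0.5j * dt * V) * u        # second potential half-step

for _ in range(100):
    u = strang_step(u)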

61 citations


Journal ArticleDOI
TL;DR: A new coefficient-explicit theory is developed for two-level overlapping domain decomposition preconditioners with non-standard coarse spaces in iterative solvers for finite element discretisations of second-order elliptic problems. The theory shows that the condition number can be bounded independently of the ratio of the two values of α in a binary medium, even when the discontinuities in the coefficient function are not resolved by the coarse mesh.
Abstract: We develop a new coefficient-explicit theory for two-level overlapping domain decomposition preconditioners with non-standard coarse spaces in iterative solvers for finite element discretisations of second-order elliptic problems. We apply the theory to the case of smoothed aggregation coarse spaces introduced by Vanek, Mandel and Brezina in the context of algebraic multigrid (AMG) and are particularly interested in the situation where the diffusion coefficient (or the permeability) α is highly variable throughout the domain. Our motivating example is Monte Carlo simulation for flow in rock with permeability modelled by log–normal random fields. By using the concept of strong connections (suitably adapted from the AMG context) we design a two-level additive Schwarz preconditioner that is robust to strong variations in α as well as to mesh refinement. We give upper bounds on the condition number of the preconditioned system which do not depend on the size of the subdomains and make explicit the interplay between the coefficient function and the coarse space basis functions. In particular, we are able to show that the condition number can be bounded independent of the ratio of the two values of α in a binary medium even when the discontinuities in the coefficient function are not resolved by the coarse mesh. Our numerical results show that the bounds with respect to the mesh parameters are sharp and that the method is indeed robust to strong variations in α. We compare the method to other preconditioners and to a sparse direct solver.
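In the notation standard for such methods (ours, not necessarily the paper's), the two-level additive Schwarz preconditioner combines local solves on overlapping subdomains with one coarse solve,

$$ M^{-1} = R_0^T A_0^{-1} R_0 + \sum_{i=1}^{N} R_i^T A_i^{-1} R_i, $$

where R_i restricts to the i-th subdomain, A_i = R_i A R_i^T, and R_0 maps into the coarse space – here spanned by the smoothed aggregation basis functions.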

56 citations


Journal ArticleDOI
TL;DR: A new matrix, scaled odd tail, SOT, is introduced and a compromise is reached between Fourier transform and polynomial transform methods for computing the action of cyclic convolutions.
Abstract: A new matrix, scaled odd tail, SOT, is introduced. This new matrix is used to derive real and complex FFT algorithms for lengths n = 2^k. A compromise is reached between Fourier transform and polynomial transform methods for computing the action of cyclic convolutions. Both of these methods lead to arithmetic operation counts that are better than previously published results. A minor improvement is also demonstrated that enables us to compute the actions of Fermat prime order FFTs in fewer additions than previously available algorithms.

48 citations


Journal ArticleDOI
TL;DR: A general error estimate is derived that allows one to immediately conclude robust convergence – with respect to the perturbation parameters – for certain layer-adapted meshes, thus improving and generalising previous results.
Abstract: We study a system of coupled convection-diffusion equations. The equations have diffusion parameters of different magnitudes associated with them which give rise to boundary layers at either boundary. An upwind finite difference scheme on arbitrary meshes is used to solve the system numerically. A general error estimate is derived that allows one to immediately conclude robust convergence – with respect to the perturbation parameters – for certain layer-adapted meshes, thus improving and generalising previous results [4]. We present the results of numerical experiments to illustrate our theoretical findings.
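For orientation, a scalar model problem in our own notation (the paper treats a coupled system): the upwind scheme for −εu″ + bu′ + cu = f with b > 0 on a mesh x_0 < … < x_N, h_i = x_i − x_{i−1}, reads

$$ -\varepsilon\,\frac{2}{h_i + h_{i+1}}\left(\frac{u_{i+1} - u_i}{h_{i+1}} - \frac{u_i - u_{i-1}}{h_i}\right) + b_i\,\frac{u_i - u_{i-1}}{h_i} + c_i u_i = f_i, $$

and robustness with respect to ε then hinges on a layer-adapted choice of the mesh, e.g. of Shishkin or Bakhvalov type.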

48 citations


Journal ArticleDOI
TL;DR: This work advocates the use of the intrinsic Laplace–Beltrami operator, which satisfies a local maximum principle, guaranteeing, e.g., that no flipped triangles can occur in parameterizations, and leads to better conditioned linear systems.
Abstract: The discrete Laplace–Beltrami operator plays a prominent role in many digital geometry processing applications ranging from denoising to parameterization, editing, and physical simulation. The standard discretization uses the cotangents of the angles in the immersed mesh which leads to a variety of numerical problems. We advocate the use of the intrinsic Laplace–Beltrami operator. It satisfies a local maximum principle, guaranteeing, e.g., that no flipped triangles can occur in parameterizations. It also leads to better conditioned linear systems. The intrinsic Laplace–Beltrami operator is based on an intrinsic Delaunay triangulation of the surface. We detail an incremental algorithm to construct such triangulations together with an overlay structure which captures the relationship between the extrinsic and intrinsic triangulations. Using a variety of example meshes we demonstrate the numerical benefits of the intrinsic Laplace–Beltrami operator.
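For context, the standard cotangent formula in our notation – the intrinsic operator has the same form but takes its angles from the intrinsic Delaunay triangulation:

$$ (\Delta u)_i = \frac{1}{2A_i} \sum_{j \in N(i)} \bigl(\cot\alpha_{ij} + \cot\beta_{ij}\bigr)\,(u_j - u_i), $$

where α_ij and β_ij are the two angles opposite the edge ij and A_i is a vertex area. In an (intrinsic) Delaunay triangulation the edge weights cot α_ij + cot β_ij are non-negative, which is what yields the local maximum principle.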

44 citations


Journal ArticleDOI
TL;DR: This work considers the non-conforming Gauss-Legendre finite element family of any even degree k≥4 and proves its inf-sup stability without assumptions on the grid.
Abstract: We consider the non-conforming Gauss-Legendre finite element family of any even degree k≥4 and prove its inf-sup stability without assumptions on the grid. This family consists of Scott-Vogelius elements where appropriate k-th-degree non-conforming bubbles are added to the velocities – which are trianglewise polynomials of degree k.
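The stability notion in question, stated in our notation: there is a mesh-independent constant β > 0 with

$$ \inf_{q_h \in Q_h} \sup_{v_h \in V_h} \frac{\int_\Omega q_h \, \nabla \cdot v_h \, \mathrm{d}x}{\|q_h\|_0 \, |v_h|_1} \ge \beta, $$

where V_h and Q_h denote the discrete velocity and pressure spaces.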

41 citations


Journal ArticleDOI
TL;DR: Convergence results are proved which show the combined influence of time and (phase) space discretization in two numerical fixed grid methods for the approximation of the full solution set.
Abstract: Numerical methods for initial value problems for differential inclusions usually require a discretization of time as well as of the set valued right hand side. In this paper, two numerical fixed grid methods for the approximation of the full solution set are proposed and analyzed. Convergence results are proved which show the combined influence of time and (phase) space discretization.
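The prototype of such a scheme, written in our notation: a set-valued Euler step followed by projection onto a fixed spatial grid Δ_ρ,

$$ x_{k+1} \in \Pi_{\Delta_\rho}\bigl(x_k + h\,F(x_k)\bigr), $$

so that the approximation error combines a contribution from the step size h with one from the grid spacing ρ.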

37 citations


Journal ArticleDOI
TL;DR: This article proposes a generic framework that allows us to find the matrix-valued counterparts of the Perona–Malik PDEs with various diffusivity functions, and extends the notion of derivatives and associated differential operators to matrix fields of symmetric matrices by adopting an operator-algebraic point of view.
Abstract: Diffusion tensor magnetic resonance imaging is an image acquisition method that provides matrix-valued data, so-called matrix fields. Hence image processing tools for the filtering and analysis of these data types are in demand. In this article, we propose a generic framework that allows us to find the matrix-valued counterparts of the Perona–Malik PDEs with various diffusivity functions. To this end we extend the notion of derivatives and associated differential operators to matrix fields of symmetric matrices by adopting an operator-algebraic point of view. In order to solve these novel matrix-valued PDEs successfully we develop truly matrix-valued analogs to numerical solution schemes of the scalar setting. Numerical experiments performed on both synthetic and real world data substantiate the effectiveness of our novel matrix-valued Perona–Malik diffusion filters.
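As a reminder of the scalar starting point (standard formulas in our notation): the Perona–Malik equation and a typical diffusivity are

$$ \partial_t u = \nabla \cdot \bigl( g(|\nabla u|^2)\, \nabla u \bigr), \qquad g(s) = \frac{1}{1 + s/\lambda^2}, $$

and the framework of the paper replaces u by a field of symmetric matrices while reinterpreting the gradient, the modulus and the diffusivity g in an operator-algebraic sense.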

34 citations


Journal ArticleDOI
TL;DR: The FETI-DP, BDDC and P-FETI-DP preconditioners are derived in a particularly simple abstract form and it is shown that their properties can be obtained from only a very small set of algebraic assumptions.
Abstract: The FETI-DP, BDDC and P-FETI-DP preconditioners are derived in a particularly simple abstract form. It is shown that their properties can be obtained from only a very small set of algebraic assumptions. The presentation is purely algebraic and it does not use any particular definition of method components, such as substructures and coarse degrees of freedom. It is then shown that P-FETI-DP and BDDC are in fact the same. The FETI-DP and the BDDC preconditioned operators are of the same algebraic form, and the standard condition number bound carries over to arbitrary abstract operators of this form. The equality of eigenvalues of BDDC and FETI-DP also holds in the minimalist abstract setting. The abstract framework is explained on a standard substructuring example.

Journal ArticleDOI
TL;DR: This paper studies the parameterization and implicitization of quadrics and cubic surfaces with the help of μ-bases – a newly developed tool which connects the parametric form and the implicit form of a surface.
Abstract: Parametric and implicit forms are two common representations of geometric objects. It is important to be able to pass back and forth between the two representations, two processes called parameterization and implicitization, respectively. In this paper, we study the parameterization and implicitization of quadrics (quadratic parametric surfaces with two base points) and cubic surfaces (cubic parametric surfaces with six base points) with the help of μ-bases – a newly developed tool which connects the parametric form and the implicit form of a surface. For both cases, we show that the minimal μ-bases are all linear in the parametric variables, and based on this observation, very efficient algorithms are devised to compute the minimal μ-bases either from the parametric equation or the implicit equation. The conversion between the parametric equation and the implicit equation can be easily accomplished from the minimal μ-bases.

Journal ArticleDOI
TL;DR: The author uses the generalized stereographic projection in order to transform the problem to a parameterization problem for ruled surfaces, and discusses two problems: parameterization with boundary conditions and parameterization without boundary conditions.
Abstract: It is well known that canal surfaces defined by a rational spine curve and a rational radius function are rational. The aim of the present paper is to construct a rational parameterization of low degree. The author uses the generalized stereographic projection in order to transform the problem to a parameterization problem for ruled surfaces. Two problems are discussed: parameterization with boundary conditions (design of canal surfaces with two curves on them, as is the case for rolling ball blends) and parameterization without boundary conditions.

Journal ArticleDOI
TL;DR: An approach to the computation of more accurate divided differences for the interpolation in the Newton form of the matrix exponential propagator φ(hA)v, φ(z) = (e^z − 1)/z, is proposed.
Abstract: In this paper, we propose an approach to the computation of more accurate divided differences for the interpolation in the Newton form of the matrix exponential propagator φ(hA)v, φ(z) = (e^z − 1)/z. In this way, it is possible to approximate φ(hA)v with a larger time step size h than with traditionally computed divided differences, as confirmed by numerical examples. The technique can also be extended to "higher" order φ_k functions, k ≥ 0.
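For orientation, the standard definitions in our notation: the functions in question satisfy

$$ \varphi_0(z) = e^z, \qquad \varphi_{k+1}(z) = \frac{\varphi_k(z) - 1/k!}{z}, $$

so φ_1(z) = (e^z − 1)/z is the φ of the abstract, and the divided differences of φ_k at the interpolation nodes are precisely the coefficients of its Newton form.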

Journal ArticleDOI
TL;DR: In this paper, a wavelet transform of radial distribution functions and different low-rank approximations of the obtained convolution matrices are employed to improve the convergence and speed of the algorithm.
Abstract: In this article, we present a new structured wavelet algorithm to solve the Ornstein-Zernike integral equation for simple liquids. This algorithm is based on the discrete wavelet transform of radial distribution functions and different low-rank approximations of the obtained convolution matrices. The fundamental properties of wavelet bases such as the interpolation properties and orthogonality are employed to improve the convergence and speed of the algorithm. In order to solve the integral equation we have applied a combined scheme in which the coarse part of the solution is calculated by the use of wavelets and Newton-Raphson algorithm, while the fine part is solved by the direct iteration. Tests have indicated that the proposed procedure is more effective than the conventional method based on hybrid algorithms.

Journal ArticleDOI
TL;DR: Several bases based on integrated Jacobi polynomials are presented in which the element stiffness matrix has 𝒪(p^3) nonzero entries, and several numerical experiments show the efficiency of the proposed bases for higher polynomial degrees p.
Abstract: In this paper, we investigate the discretization of an elliptic boundary value problem in 3D by means of the hp-version of the finite element method using a mesh of tetrahedrons. We present several bases based on integrated Jacobi polynomials in which the element stiffness matrix has 𝒪(p^3) nonzero entries, where p denotes the polynomial degree. The proof of the sparsity requires the assistance of computer algebra software. Several numerical experiments show the efficiency of the proposed bases for higher polynomial degrees p.

Journal ArticleDOI
TL;DR: The benefit of using the control polygon as an approximant for scientific visualization is presented in this paper.
Abstract: Non-self-intersection is both a topological and a geometric property. It is known that non-self-intersecting regular Bézier curves have non-self-intersecting control polygons, after sufficiently many uniform subdivisions. Here a sufficient condition is given within ℝ^3 for a non-self-intersecting, regular C^2 cubic Bézier curve to be ambient isotopic to its control polygon formed after sufficiently many subdivisions. The benefit of using the control polygon as an approximant for scientific visualization is presented in this paper.
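The subdivision in question is easy to sketch (our own illustrative code, not taken from the paper): de Casteljau's algorithm at t = 1/2 splits a cubic Bézier curve into two halves, and repeating it uniformly drives the control polygons towards the curve.

import numpy as np

def subdivide(cp):
    # Split a cubic Bezier curve (4 control points) at t = 1/2 (de Casteljau).
    a = (cp[0] + cp[1]) / 2
    b = (cp[1] + cp[2]) / 2
    c = (cp[2] + cp[3]) / 2
    d = (a + b) / 2
    e = (b + c) / 2
    m = (d + e) / 2                    # the point on the curve at t = 1/2
    return np.array([cp[0], a, d, m]), np.array([m, e, c, cp[3]])

cp = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, -1.0], [3.0, 0.0]])
pieces = [cp]
for _ in range(3):                     # three rounds of uniform subdivision
    pieces = [half for piece in pieces for half in subdivide(piece)]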

Journal ArticleDOI
TL;DR: This work proves optimal local approximation properties of the proposed Scott-Zhang type interpolation operator for functions in H^1, using continuous piecewise mapped bilinear or trilinear polynomials.
Abstract: We propose a Scott-Zhang type finite element interpolation operator of first order for the approximation of H^1-functions by means of continuous piecewise mapped bilinear or trilinear polynomials. The novelty of the proposed interpolation operator is that it is defined for general non-affine equivalent quadrilateral and hexahedral elements and so-called 1-irregular meshes with hanging nodes. We prove optimal local approximation properties of this interpolation operator for functions in H^1. As necessary ingredients we provide a definition of a hanging node and a rigorous analysis of the issue of constrained approximation which cover both the two- and three-dimensional case in a unified fashion.

Journal ArticleDOI
TL;DR: The efficient performance of the convolution in locally refined grids is discussed, where the result is projected into some given locally refined grid (Galerkin approximation) and the overall costs are still 𝒪(N log N), where N is the sum of the dimensions of the subspaces containing f, g and the resulting function.
Abstract: Usually, the fast evaluation of a convolution integral $$\int_{{\mathbb{R}}}f(y)g(x-y)\mathrm{d}y$$ requires that the functions f, g are discretised on an equidistant grid in order to apply the fast Fourier transform. Here we discuss the efficient performance of the convolution in locally refined grids. More precisely, the convolution result is projected into some given locally refined grid (Galerkin approximation). Under certain conditions, the overall costs are still $${\mathcal{O}}(N\log N),$$ where N is the sum of the dimensions of the subspaces containing f, g and the resulting function.

Journal ArticleDOI
TL;DR: A linear time algorithm is proposed for the 1-median problem on wheel graphs, and this algorithm leads to a solution method for the 2-median problem on cactus graphs, i.e., on graphs where no two cycles have more than one vertex in common.
Abstract: This paper is dedicated to location problems on graphs. We propose a linear time algorithm for the 1-median problem on wheel graphs. Moreover, some general results for the 1-median problem are summarized and parametric median problems are investigated. These results lead to a solution method for the 2-median problem on cactus graphs, i.e., on graphs where no two cycles have more than one vertex in common. The time complexity of this algorithm is O(n^2), where n is the number of vertices of the graph.

Journal ArticleDOI
TL;DR: SOCP can be applied to minimize various energy functionals defined for matrix fields and new functionals for the regularization of matrix data are proposed and the corresponding Euler–Lagrange equations are derived by means of matrix differential calculus.
Abstract: Wherever anisotropic behavior in physical measurements or models is encountered, matrices provide adequate means to describe this anisotropy. Prominent examples are diffusion tensor magnetic resonance imaging in medical imaging and the stress tensor in civil engineering. Like most measured data, these matrix-valued data are polluted by noise and require restoration. The restoration of scalar images corrupted by noise via minimization of an energy functional is a well-established technique that offers many advantages. A convenient way to achieve this minimization is second-order cone programming (SOCP). The goal of this article is to transfer this method to the matrix-valued setting. It is shown how SOCP can be applied to minimize various energy functionals defined for matrix fields. These functionals couple the different matrix channels taking into account the relations between them. Furthermore, new functionals for the regularization of matrix data are proposed and the corresponding Euler–Lagrange equations are derived by means of matrix differential calculus. Numerical experiments substantiate the usefulness of the proposed methods for the restoration of matrix fields.
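For orientation, the generic problem class in our notation: a second-order cone program minimizes a linear objective subject to cone constraints,

$$ \min_{x}\; c^T x \quad \text{subject to} \quad \|A_i x + b_i\|_2 \le c_i^T x + d_i, \quad i = 1, \dots, m, $$

and discretized TV-type energies fit this template by introducing one auxiliary cone variable per pixel (or per matrix entry in the matrix-valued setting).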

Journal ArticleDOI
TL;DR: For a surface with non-vanishing Gaussian curvature the Gauss map is regular and can be inverted, which makes it possible to use the normal as the parameter; it is then trivial to calculate the normal and the Gauss map, and easy to compute offsets, principal curvatures and principal directions.
Abstract: For a surface with non-vanishing Gaussian curvature the Gauss map is regular and can be inverted. This makes it possible to use the normal as the parameter, and then it is trivial to calculate the normal and the Gauss map. This in turn makes it easy to calculate offsets, the principal curvatures, the principal directions, etc. Such a parametrization is not only a theoretical possibility but can be used concretely. One way of obtaining this parametrization is to specify the support function as a function of the normal, i.e., as a function on the unit sphere. The support function is the distance from the origin to the tangent plane, and the surface is simply considered as the envelope of its family of tangent planes. Suppose we are given points and normals and we want a C^k surface interpolating these data. The data give the values and gradients of the support function at certain points (the given normals) on the unit sphere, and the surface can be defined by determining the support function as a C^k function interpolating the given values and gradients.
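The reconstruction alluded to can be written down explicitly (a standard formula, in our notation): if h: S² → ℝ is the support function, the surface point with unit normal n is

$$ x(n) = h(n)\,n + \nabla_{S^2} h(n), $$

where ∇_{S²} is the gradient on the unit sphere; an offset at distance d simply replaces h by h + d.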

Journal ArticleDOI
TL;DR: This paper presents an algebraic approach for constructing H-matrices which combines multilevel clustering methods with H-matrix arithmetic to compute the H-inverse, the H-LU and the H-Cholesky factors of a matrix.
Abstract: Hierarchical matrices ($${\mathcal{H}}$$-matrices) approximate matrices in a data-sparse way, and the approximate arithmetic for $${\mathcal{H}}$$-matrices is almost optimal. In this paper we present an algebraic approach for constructing $${\mathcal{H}}$$-matrices which combines multilevel clustering methods with $${\mathcal{H}}$$-matrix arithmetic to compute the $${\mathcal{H}}$$-inverse, $${\mathcal{H}}$$-LU, and the $${\mathcal{H}}$$ -Cholesky factors of a matrix. Then the $${\mathcal{H}}$$-inverse, $${\mathcal{H}}$$-LU or $${\mathcal{H}}$$-Cholesky factors can be used as preconditioners in iterative methods to solve systems of linear equations. The numerical results show that this method is efficient and greatly speeds up convergence compared to other approaches, such as JOR or AMG, for solving some large, sparse linear systems, and is comparable to other $${\mathcal{H}}$$-matrix constructions based on Nested Dissection.

Journal ArticleDOI
TL;DR: A simple method for achieving anamorphoses of 3D objects by utilizing a variation of a simple projective map that is well-known in the computer graphics literature.
Abstract: An anamorphic image appears distorted from all but a few viewpoints. Such images have been studied by artists and architects since the early fifteenth century. Computer graphics opens the door to anamorphic 3D geometry. We are bound neither by physical reality nor by a static canvas. Here we describe a simple method for achieving anamorphoses of 3D objects by utilizing a variation of a simple projective map that is well known in the computer graphics literature. The novelty of this work is the creation of anamorphic 3D digital models, resulting in a tool for artists and architects.

Journal ArticleDOI
TL;DR: A new method is introduced for finding a near-optimal path of a nonholonomic robot moving in a 2D environment cluttered with static obstacles based on the Bump-Surfaces concept and is able to deal with robots represented by a translating and rotating rigid body.
Abstract: In this paper, a new method is introduced for finding a near-optimal path of a nonholonomic robot moving in a 2D environment cluttered with static obstacles. The method is based on the Bump-Surfaces concept and is able to deal with robots represented by a translating and rotating rigid body. The proposed approach is applied to car-like robots.

Journal ArticleDOI
TL;DR: This paper completes the classification of these four existence tests showing that, in practice, the Hansen-Sengupta existence test is actually more powerful than the existence test proposed by Frommer et al.
Abstract: The Krawczyk and the Hansen-Sengupta interval operators are closely related to the interval Newton operator. These interval operators can be used as existence tests to prove existence of solutions for systems of equations. It is well known that the Krawczyk operator existence test is less powerful than the Hansen-Sengupta operator existence test, the latter being less powerful than the interval Newton operator existence test. In 2004, Frommer et al. proposed an existence test based on the Poincaré-Miranda theorem and proved that it is more powerful than the Krawczyk existence test. In this paper, we complete the classification of these four existence tests showing that, in practice, the Hansen-Sengupta existence test is actually more powerful than the existence test proposed by Frommer et al.
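For reference, the standard definition in our notation: with a box X, its midpoint m, and a nonsingular matrix Y (typically an approximate inverse of the Jacobian at m), the Krawczyk operator is

$$ K(X) = m - Y f(m) + \bigl(I - Y F'(X)\bigr)(X - m), $$

where F′(X) is an interval enclosure of the Jacobian over X; if K(X) lies in the interior of X, then X contains a solution of f(x) = 0. The Hansen-Sengupta operator sharpens this test by performing an interval Gauss-Seidel step instead.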

Journal ArticleDOI
TL;DR: This paper reports on the results of an experimental study that compared different methods for the mentioned subtasks in point-based graphics.
Abstract: Point based graphics avoids the generation of a polygonal approximation of sampled geometry and uses algorithms that directly work with the point set. Basic ingredients of point based methods are algorithms to compute nearest neighbors, to estimate surface properties such as normals, and to smooth the point set. In this paper we report on the results of an experimental study that compared different methods for the mentioned subtasks.
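One of the mentioned subtasks, normal estimation, is commonly done by a local principal component analysis; a minimal sketch in our own code (the brute-force neighbor search and the choice k = 10 are illustrative, not taken from the study):

import numpy as np

def estimate_normals(points, k=10):
    # One unit normal per point: smallest-variance axis of its k nearest neighbors.
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dist)[:k]]      # brute-force k nearest neighbors
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)               # eigenvalues in ascending order
        normals[i] = v[:, 0]                     # eigenvector of smallest eigenvalue
    return normals

pts = np.random.rand(200, 3)                     # toy unorganized point set
nrm = estimate_normals(pts)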

Journal ArticleDOI
S. C. Eisenstat
TL;DR: This work presents work- and cost-optimal O(log*n) algorithms for prefix sums and linear integer sorting on a Sum-CRCW PRAM.
Abstract: We present work- and cost-optimal O(log*n) algorithms for prefix sums and linear integer sorting on a Sum-CRCW PRAM.

Journal ArticleDOI
TL;DR: This work considers a parameterized family of closed planar curves and introduces an evolution process for identifying a member of the family that approximates a given unorganized point cloud, which is shown to be equivalent to normal (or tangent) distance minimization.
Abstract: We consider a parameterized family of closed planar curves and introduce an evolution process for identifying a member of the family that approximates a given unorganized point cloud {p_i}_{i=1,…,N}. The evolution is driven by the normal velocities at the closest (or foot) points f_i to the data points, which are found by approximating the corresponding difference vectors p_i − f_i in the least-squares sense. In the particular case of parametrically defined curves, this process is shown to be equivalent to normal (or tangent) distance minimization, see [3], [19]. Moreover, it can be generalized to very general representations of curves. These include hybrid curves, which are a collection of parametrically and implicitly defined curve segments, pieced together with certain degrees of geometric continuity.
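In our notation, one step of such an evolution chooses the rate of change ċ of the shape parameters c by a least-squares fit of the normal velocities at the foot points,

$$ \min_{\dot c}\; \sum_{i=1}^{N} \Bigl( v(f_i; \dot c) \cdot n_i - (p_i - f_i) \cdot n_i \Bigr)^2, $$

where v(f_i; ċ) is the velocity of the curve at f_i induced by ċ and n_i is the unit normal there; integrating ċ over time drives the curve towards the data points.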

Journal ArticleDOI
TL;DR: A multiresolution morphing algorithm using "as-rigid-as-possible" shape interpolation combined with an angle-length based multiresolution decomposition of simple 2D piecewise curves is presented.
Abstract: We present a multiresolution morphing algorithm using "as-rigid-as-possible" shape interpolation combined with an angle-length based multiresolution decomposition of simple 2D piecewise curves. This novel multiresolution representation is defined intrinsically and has the advantage that the details' orientation follows any deformation naturally. The multiresolution morphing algorithm consists of transforming separately the coarse and detail coefficients of the multiresolution decomposition. Thus all LoD (level of detail) applications like LoD display, compression, LoD editing, etc. can be applied directly to all morphs without any extra computation. Furthermore, the algorithm can robustly morph between very large polygons with many local details, as illustrated in numerous figures. The intermediate morphs behave naturally and are least-distorting due to the particular intrinsic multiresolution representation.