
Showing papers in "Computing in 2008"


Journal ArticleDOI
TL;DR: An efficient interface specification, given as a set of C++ classes, is derived that separates the applications from the grid data structures, so that user implementations become independent of the underlying grid implementation.
Abstract: In a companion paper (Bastian et al. 2007, this issue) we introduced an abstract definition of a parallel and adaptive hierarchical grid for scientific computing. Based on this definition we derive an efficient interface specification as a set of C++ classes. This interface separates the applications from the grid data structures. Thus, user implementations become independent of the underlying grid implementation. Modern C++ template techniques are used to provide an interface implementation without big performance losses. The implementation is realized as part of the software environment DUNE (http://dune-project.org/). Numerical tests demonstrate the flexibility and the efficiency of our approach.

454 citations
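The core idea of the DUNE interface, decoupling application code from the concrete grid implementation, can be illustrated with a small duck-typed sketch in Python; the paper achieves the same separation in C++ with templates, avoiding runtime dispatch. All class and function names below are invented for illustration and are not part of the DUNE API.

```python
class UniformGrid:
    """A trivially simple 1D grid exposing a generic cell interface."""
    def __init__(self, n):
        self.n = n
    def cells(self):
        h = 1.0 / self.n
        for i in range(self.n):
            yield (i * h, (i + 1) * h)

class GradedGrid:
    """Same interface, different implementation: nodes clustered near 0."""
    def __init__(self, n):
        self.nodes = [(i / n) ** 2 for i in range(n + 1)]
    def cells(self):
        for a, b in zip(self.nodes, self.nodes[1:]):
            yield (a, b)

def integrate(grid, f):
    """Midpoint quadrature over any object providing cells().
    Application code touches only the interface, never grid internals."""
    return sum((b - a) * f(0.5 * (a + b)) for a, b in grid.cells())
```

Swapping `UniformGrid` for `GradedGrid` changes nothing in `integrate`, which is the point of the abstraction; C++ templates let the compiler resolve the same genericity statically.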


Journal ArticleDOI
TL;DR: The definitions in this article serve as the basis for an implementation of an abstract grid interface as C++ classes in the DUNE framework (Bastian et al. 2008, this issue).
Abstract: We give a mathematically rigorous definition of a grid for algorithms solving partial differential equations. Unlike previous approaches (Benger 2005, PhD thesis; Berti 2000, PhD thesis), our grids have a hierarchical structure. This makes them suitable for geometric multigrid algorithms and hierarchical local grid refinement. The description is also general enough to include geometrically non-conforming grids. The definitions in this article serve as the basis for an implementation of an abstract grid interface as C++ classes in the DUNE framework (Bastian et al. 2008, this issue).

390 citations


Journal ArticleDOI
TL;DR: This revision of HOM4PS-2.0 updates its original version in three key aspects: (1) new method for finding mixed cells, (2) combining the polyhedral and linear homotopies in one step, (3) new way of dealing with curve jumping.
Abstract: HOM4PS-2.0 is a software package in FORTRAN 90 which implements the polyhedral homotopy continuation method for solving polynomial systems. It updates its original version HOM4PS in three key aspects: (1) new method for finding mixed cells, (2) combining the polyhedral and linear homotopies in one step, (3) new way of dealing with curve jumping. Numerical results show that this revision leads to a spectacular speed-up, ranging up to 1950-fold, over its original version on all benchmark systems, especially for large ones. It surpasses the existing packages in finding isolated zeros, such as PHCpack (Verschelde in ACM Trans Math Softw 25:251–276, 1999), PHoM (Gunji et al. in Computing 73:57–77, 2004), and Bertini (Bates et al. in Software for numerical algebraic geometry. Available at http://www.nd.edu/~sommese/bertini), in speed by big margins.

255 citations
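The basic mechanics of homotopy continuation, deforming a system with known roots into the target system while tracking the roots, can be shown on a toy univariate example. This is only a sketch of plain linear homotopy with the standard "gamma trick"; it is not the polyhedral homotopy of HOM4PS-2.0, and the target polynomial and step counts are made up.

```python
import cmath

def track_roots(steps=200, newton_iters=5):
    """Track the roots of H(x,t) = (1-t)(x^2 - gamma) + t(x^2 - 3x + 2)
    from the known start roots at t=0 to the target roots of
    x^2 - 3x + 2 at t=1, stepping t and correcting with Newton."""
    gamma = cmath.exp(0.5j)   # generic complex constant ("gamma trick")
    roots = [cmath.sqrt(gamma), -cmath.sqrt(gamma)]
    for s in range(1, steps + 1):
        t = s / steps
        for k, x in enumerate(roots):
            for _ in range(newton_iters):
                H = (1 - t) * (x * x - gamma) + t * (x * x - 3 * x + 2)
                dH = 2 * x - 3 * t
                x = x - H / dH            # Newton correction at fixed t
            roots[k] = x
    return sorted(roots, key=lambda z: z.real)
```

The complex `gamma` keeps the two solution paths from colliding for t < 1, which is the same reason production solvers start from randomized complex systems.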


Journal ArticleDOI
TL;DR: In this paper, a new family of abstract Multispace BDDC methods is formulated and condition number bounds from the abstract additive Schwarz preconditioning theory are given. The abstract bounds yield polylogarithmic condition number bounds for an arbitrary fixed number of levels and scalar elliptic problems discretized by finite elements in two and three spatial dimensions.
Abstract: The Balancing Domain Decomposition by Constraints (BDDC) method is the most advanced method from the Balancing family of iterative substructuring methods for the solution of large systems of linear algebraic equations arising from discretization of elliptic boundary value problems. In the case of many substructures, solving the coarse problem exactly becomes a bottleneck. Since the coarse problem in BDDC has the same structure as the original problem, it is straightforward to apply the BDDC method recursively to solve the coarse problem only approximately. In this paper, we formulate a new family of abstract Multispace BDDC methods and give condition number bounds from the abstract additive Schwarz preconditioning theory. The Multilevel BDDC is then treated as a special case of the Multispace BDDC and abstract multilevel condition number bounds are given. The abstract bounds yield polylogarithmic condition number bounds for an arbitrary fixed number of levels and scalar elliptic problems discretized by finite elements in two and three spatial dimensions. Numerical experiments confirm the theory.

61 citations


Journal ArticleDOI
TL;DR: A new local stabilized nonconforming finite element method based on two local Gauss integrations for the two-dimensional Stokes equations, using the lowest equal-order pair of mixed finite elements NCP1–P1, is proposed and studied.
Abstract: In this paper, we propose and study a new local stabilized nonconforming finite element method based on two local Gauss integrations for the two-dimensional Stokes equations. The nonconforming method uses the lowest equal-order pair of mixed finite elements (i.e., NCP1–P1). After a stability condition is shown for this stabilized method, its optimal-order error estimates are obtained. In addition, numerical experiments to confirm the theoretical results are presented. Compared with some classical, closely related mixed finite element pairs, the present NCP1–P1 pair shows better performance.

55 citations


Journal ArticleDOI
TL;DR: This paper analyzes the numerical issues of the introduced collocation method applied to the Volterra integro-differential system of ‘predator–prey’ dynamics arising in Ecology, and confirms that it can achieve the expected theoretical orders of convergence.
Abstract: Particular cases of nonlinear systems of delay Volterra integro-differential equations (denoted by DVIDEs) with constant delay τ > 0, arise in mathematical modelling of ‘predator–prey’ dynamics in Ecology. In this paper, we give an analysis of the global convergence and local superconvergence properties of piecewise polynomial collocation for systems of this type. Then, from the perspective of applied mathematics, we consider the Volterra integro-differential system of ‘predator–prey’ dynamics arising in Ecology. We analyze the numerical issues of the introduced collocation method applied to the ‘predator–prey’ system and confirm that we can achieve the expected theoretical orders of convergence.

52 citations


Journal ArticleDOI
TL;DR: The h-h/2-strategy is one well-known technique for the a posteriori error estimation for Galerkin discretizations of energy minimization problems, and this very basic error estimation strategy is also applicable to steer an h-adaptive algorithm.
Abstract: The h-h/2-strategy is one well-known technique for the a posteriori error estimation for Galerkin discretizations of energy minimization problems. One considers the error estimator η = ‖u_{h/2} − u_h‖, where u_h is a Galerkin solution with respect to a mesh T_h and u_{h/2} is a Galerkin solution with respect to the mesh T_{h/2} obtained from a uniform refinement of T_h. This error estimator is always efficient and observed to be also reliable in practice. However, for boundary element methods, the energy norm is non-local and thus the error estimator η does not provide information for a local mesh-refinement. We consider Symm’s integral equation of the first kind, where the energy space is the negative-order Sobolev space H^{−1/2}(Γ). Recent localization techniques allow to replace the energy norm in this case by some weighted L2-norm. Then, this very basic error estimation strategy is also applicable to steer an h-adaptive algorithm. Numerical experiments in 2D and 3D show that the proposed method works well in practice. A short conclusion is concerned with other integral equations, e.g., the hypersingular case with energy space H^{1/2}(Γ), or a transmission problem.

52 citations
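The h-h/2 principle, estimating the error of a coarse solution by comparing it with the solution on a uniformly refined mesh, can be demonstrated on a 1D finite-difference model problem. This is only an illustrative analogue (the paper concerns boundary element methods); the problem, mesh sizes, and norms below are chosen for the sketch.

```python
import math

def solve_poisson(n):
    """Solve -u'' = pi^2 sin(pi x) on (0,1), u(0)=u(1)=0, by central
    differences on n interior points (Thomas algorithm)."""
    h = 1.0 / (n + 1)
    d = [math.pi ** 2 * math.sin(math.pi * (i + 1) * h) * h * h
         for i in range(n)]
    diag = [2.0] * n
    for i in range(1, n):                # forward elimination, off-diagonals -1
        m = -1.0 / diag[i - 1]
        diag[i] -= m * -1.0
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / diag[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        u[i] = (d[i] + u[i + 1]) / diag[i]
    return h, u

def h_h2_estimator(n):
    """eta = ||u_{h/2} - u_h|| on the coarse nodes, next to the true error."""
    h, u = solve_poisson(n)
    _, uf = solve_poisson(2 * n + 1)     # fine grid contains the coarse nodes
    eta = math.sqrt(h * sum((uf[2 * i + 1] - u[i]) ** 2 for i in range(n)))
    err = math.sqrt(h * sum((u[i] - math.sin(math.pi * (i + 1) * h)) ** 2
                            for i in range(n)))
    return eta, err
```

For a second-order method the error roughly quarters under refinement, so η ≈ (3/4)·error, which is why the comparison is both efficient and (in practice) reliable.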


Journal ArticleDOI
TL;DR: It is shown that though these two methods are formally similar, they provide different approaches to computational optimization with partial differential equations.
Abstract: Multigrid optimization schemes that solve elliptic linear and bilinear optimal control problems are discussed. For the solution of these problems, the multigrid for optimization (MGOPT) method and the collective smoothing multigrid (CSMG) method are developed and compared. It is shown that though these two methods are formally similar, they provide different approaches to computational optimization with partial differential equations.

36 citations


Journal ArticleDOI
TL;DR: An adaptive procedure based on the product-integration method of Huber is developed; computational experiments indicate that in practice the control of the local errors is sufficient for bringing the true global errors down to the level of a prescribed error tolerance.
Abstract: In contrast to the existing plethora of adaptive numerical methods for differential and integro-differential equations, there seems to be a shortage of adaptive methods for purely integral equations with weakly singular kernels, such as the first kind Abel equation. In order to make up this deficiency, an adaptive procedure based on the product-integration method of Huber is developed in this work. In the procedure, an a posteriori estimate of the dominant expansion term of the local discretisation error at a given grid node is used to determine the size of the next integration step, in a way similar to the adaptive solvers for ordinary differential equations. Computational experiments indicate that in practice the control of the local errors is sufficient for bringing the true global errors down to the level of a prescribed error tolerance. The lower limit of the acceptable values of the error tolerance parameter depends on the interference of machine errors, and the quality of the approximations available for the method coefficients specific for a given kernel function.

34 citations
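The underlying product-integration idea can be sketched with a fixed-step, piecewise-constant scheme for the first kind Abel equation ∫₀ᵗ y(s)(t − s)^(−1/2) ds = f(t); the singular kernel is integrated exactly against the piecewise-constant approximation. This minimal sketch omits the paper's adaptive step-size control and error estimation entirely.

```python
import math

def solve_abel(f, T, n):
    """Piecewise-constant product integration for the Abel equation
    integral_0^t y(s) (t - s)^(-1/2) ds = f(t) on [0, T], n steps.
    The weight of y_j at time t_i is the exact integral of the kernel
    over the j-th subinterval: 2(sqrt(t_i - t_{j-1}) - sqrt(t_i - t_j))."""
    h = T / n
    t = [h * i for i in range(n + 1)]
    y = []
    for i in range(1, n + 1):
        s = 0.0
        for j in range(1, i):
            s += y[j - 1] * 2 * (math.sqrt(t[i] - t[j - 1])
                                 - math.sqrt(t[i] - t[j]))
        w_ii = 2 * math.sqrt(h)          # weight of the newest unknown
        y.append((f(t[i]) - s) / w_ii)   # solve the i-th equation for y_i
    return y
```

For y ≡ 1 the weights telescope to 2√t, so the scheme reproduces the constant solution exactly, a convenient sanity check.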


Journal ArticleDOI
TL;DR: An adaptive numerical method for solving the first kind Abel integral equation is extended to automatically determine both the starting solution value and the estimate of its discretisation error, which enables an adaptive adjustment of the first integration step, to achieve a pre-defined accuracy of the starting solution.
Abstract: In the previous work of this author (Bieniasz in Computing 83:25–39, 2008) an adaptive numerical method for solving the first kind Abel integral equation was described. It was assumed that the starting value of the solution was known and equal to zero. This is a frequent situation in some applications of the Abel equation (for example in electrochemistry), but in general the starting solution value is unknown and non-zero. The presently described extension of the method allows one to automatically determine both the starting solution value and the estimate of its discretisation error. This enables an adaptive adjustment of the first integration step, to achieve a pre-defined accuracy of the starting solution. The procedure works most satisfactorily in cases when the solution possesses all, or at least several of the lowest, derivatives at the initial value of the independent variable. Otherwise, a discrepancy between the true and estimated errors of the starting solution value may occur. In such cases one may either start integration with as small a step as possible, or use a smaller error tolerance at the first step.

23 citations


Journal ArticleDOI
TL;DR: Numerical results confirm that the proposed Newton method yields an efficient algorithm to treat the considered class of problems.
Abstract: The present paper is dedicated to the numerical solution of Bernoulli’s free boundary problem in three dimensions. We reformulate the given free boundary problem as a shape optimization problem and compute the shape gradient and Hessian of the given shape functional. To approximate the shape problem we apply a Ritz–Galerkin discretization. The necessary optimality condition is resolved by Newton’s method. All information about the state equation required for the optimization algorithm is derived from boundary integral equations, which we solve numerically by a fast wavelet Galerkin scheme. Numerical results confirm that the proposed Newton method yields an efficient algorithm to treat the considered class of problems.

Journal ArticleDOI
TL;DR: The BETI method preconditioned by the projector to the “natural coarse grid” with recently proposed optimal algorithms for the solution of bound and equality constrained quadratic programming problems is combined to develop a theoretically supported scalable solver for elliptic multidomain boundary variational inequalities.
Abstract: The Boundary Element Tearing and Interconnecting (BETI) methods were recently introduced as boundary element counterparts of the well established Finite Element Tearing and Interconnecting (FETI) methods. Here we combine the BETI method preconditioned by the projector to the “natural coarse grid” with recently proposed optimal algorithms for the solution of bound and equality constrained quadratic programming problems in order to develop a theoretically supported scalable solver for elliptic multidomain boundary variational inequalities such as those describing the equilibrium of a system of bodies in mutual contact. The key observation is that the “natural coarse grid” defines a subspace that contains the solution, so that the preconditioning affects also the non-linear steps. The results are validated by numerical experiments.

Journal ArticleDOI
TL;DR: The general framework developed is applied to analyze the convergence of multi-level methods for mixed finite element discretizations of the generalized Stokes problem using the Scott–Vogelius element to satisfy the Ladyzhenskaya–Babuška–Brezzi stability condition.
Abstract: We apply the general framework developed by John et al. in Computing 64:307–321, 2000 to analyze the convergence of multi-level methods for mixed finite element discretizations of the generalized Stokes problem using the Scott–Vogelius element. The Scott–Vogelius element seems to be promising since discretely divergence-free functions are divergence-free pointwise. However, to satisfy the Ladyzhenskaya–Babuška–Brezzi stability condition, we have to deal in the multi-grid analysis with non-nested families of meshes which are derived from nested macro element triangulations. Additionally, the analysis takes into account an optional symmetric stabilization operator which suppresses spurious oscillations of the velocity provoked by a dominant reaction term. Usually, the generalized Stokes problem appears in semi-implicit splitting schemes for the unsteady Navier–Stokes equations, but the symmetric part of a stabilized discrete Oseen problem can be regarded as a discrete generalized Stokes problem likewise.

Journal ArticleDOI
TL;DR: An interpolant defined via moments is investigated for triangles, quadrilaterals, tetrahedra, and hexahedra and arbitrarily high polynomial degree, and anisotropic interpolation error estimates are proved.
Abstract: An interpolant defined via moments is investigated for triangles, quadrilaterals, tetrahedra, and hexahedra and arbitrarily high polynomial degree. The elements are allowed to have diameters with different asymptotic behavior in different space directions. Anisotropic interpolation error estimates are proved.

Journal ArticleDOI
TL;DR: In this work, conformal mapping is used to transform harmonic Dirichlet problems of Laplace’s equation which are defined in simply-connected domains into harmonic Dirichlet problems that are defined in the unit disk, and this technique is extended to harmonic Dirichlet problems in doubly-connected domains which are now mapped onto annular domains.
Abstract: In this work, we use conformal mapping to transform harmonic Dirichlet problems of Laplace’s equation which are defined in simply-connected domains into harmonic Dirichlet problems that are defined in the unit disk. We then solve the resulting harmonic Dirichlet problems efficiently using the method of fundamental solutions (MFS) in conjunction with fast Fourier transforms (FFTs). This technique is extended to harmonic Dirichlet problems in doubly-connected domains which are now mapped onto annular domains. The solution of the resulting harmonic Dirichlet problems can be carried out equally efficiently using the MFS with FFTs. Several numerical examples are presented.
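Once a Dirichlet problem lives on the unit disk, it can be solved with Fourier methods: expand the boundary data in a Fourier series and damp the n-th mode by r^|n|. The sketch below uses a direct O(N²) DFT instead of an FFT, and a plain series evaluation rather than the paper's MFS, so it illustrates only the disk-side step; all parameters are illustrative.

```python
import cmath, math

def harmonic_extension(boundary, N, r, theta):
    """Evaluate, at the polar point (r, theta) with r < 1, the harmonic
    function on the unit disk matching the given boundary values.
    Fourier coefficients are computed by a direct DFT of N samples."""
    samples = [boundary(2 * math.pi * k / N) for k in range(N)]
    u = 0
    for n in range(-N // 2 + 1, N // 2):
        c = sum(samples[k] * cmath.exp(-1j * n * 2 * math.pi * k / N)
                for k in range(N)) / N           # n-th Fourier coefficient
        u += c * (r ** abs(n)) * cmath.exp(1j * n * theta)
    return u.real
```

For boundary data cos(2θ) the exact harmonic extension is r²·cos(2θ), so the routine can be checked against a closed form.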

Journal ArticleDOI
TL;DR: An algebraic multigrid method is presented for the single layer potential, providing the preconditioning that fast boundary element methods need for an almost optimal complexity.
Abstract: Fast boundary element methods still need good preconditioning techniques for an almost optimal complexity. An algebraic multigrid method is presented for the single layer potential using the fast m...

Journal ArticleDOI
TL;DR: Two heuristic algorithms are proposed, and the behavior of the preconditioners on some larger, randomly generated systems, as well as a small selection of systems from the Matrix Market collection are studied.
Abstract: Finding bounding sets to solutions to systems of algebraic equations with uncertainties in the coefficients, as well as rapidly but rigorously locating all solutions to nonlinear systems or global optimization problems, involves bounding the solution sets to systems of equations with wide interval coefficients. In many cases, singular systems are admitted within the intervals of uncertainty of the coefficients, leading to unbounded solution sets with more than one disconnected component. This, combined with the fact that computing exact bounds on the solution set is NP-hard, limits the range of techniques available for bounding the solution sets for such systems. However, the componentwise nature and other properties make the interval Gauss–Seidel method suited to computing meaningful bounds in a predictable amount of computing time. For this reason, we focus on the interval Gauss–Seidel method. In particular, we study and compare various preconditioning techniques we have developed over the years but not fully investigated. Based on a detailed study of the preconditioners on some simple, specially designed small systems, we propose two heuristic algorithms, then study the behavior of the preconditioners on some larger, randomly generated systems, as well as a small selection of systems from the Matrix Market collection.
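The componentwise structure that makes interval Gauss–Seidel attractive is easy to see in a minimal sketch: each solved-for component is intersected with its previous enclosure, so the enclosure can only shrink. The version below uses plain tuples as intervals, no preconditioning, and no outward rounding, so it is a conceptual sketch rather than a rigorous enclosure method.

```python
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))
def idiv(a, b):
    assert b[0] > 0 or b[1] < 0          # divisor must not contain zero
    return imul(a, (1 / b[1], 1 / b[0]))
def intersect(a, b): return (max(a[0], b[0]), min(a[1], b[1]))

def interval_gauss_seidel(A, b, x, iters=20):
    """Shrink the enclosures x of the solution set of [A] x = [b]:
    solve equation i for x_i and intersect with the old enclosure."""
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = b[i]
            for j in range(n):
                if j != i:
                    s = isub(s, imul(A[i][j], x[j]))
            x[i] = intersect(x[i], idiv(s, A[i][i]))
    return x
```

On a diagonally dominant interval system the iteration contracts to a tight box around the solution set, which is the "predictable amount of computing time" behavior the abstract alludes to.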

Journal ArticleDOI
TL;DR: A linear time algorithm is presented for the parity domination problem with open and closed neighbourhoods and arbitrary cost functions on graphs with bounded treewidth and distance-hereditary graphs.
Abstract: This paper concerns a domination problem in graphs with parity constraints. The task is to find a subset of the vertices with minimum cost such that for every vertex the number of chosen vertices in its neighbourhood has a prespecified parity. This problem is known to be NP-hard for general graphs. A linear time algorithm was developed for series-parallel graphs and trees with unit cost and restricted to closed neighbourhoods. We present a linear time algorithm for the parity domination problem with open and closed neighbourhoods and arbitrary cost functions on graphs with bounded treewidth and distance-hereditary graphs.
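To make the problem statement concrete, here is an exponential brute-force checker for the closed-neighbourhood variant; the paper's contribution is a linear time algorithm on bounded-treewidth and distance-hereditary graphs, which this sketch deliberately does not attempt.

```python
from itertools import combinations

def parity_domination(n, edges, parity, cost):
    """Minimum-cost S such that |N[v] ∩ S| ≡ parity[v] (mod 2) for all v,
    by exhaustive search over all 2^n vertex subsets (illustration only)."""
    nbhd = [{v} for v in range(n)]       # closed neighbourhoods N[v]
    for u, v in edges:
        nbhd[u].add(v)
        nbhd[v].add(u)
    best = None
    for r in range(n + 1):
        for S in combinations(range(n), r):
            Sset = set(S)
            if all(len(nbhd[v] & Sset) % 2 == parity[v] for v in range(n)):
                c = sum(cost[v] for v in S)
                if best is None or c < best:
                    best = c
    return best
```

On the 4-cycle with all parities odd, every closed neighbourhood has three vertices, and only the full vertex set satisfies all constraints, so the optimum with unit costs is 4.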

Journal ArticleDOI
TL;DR: New estimates of the constant γ in the strengthened Cauchy–Bunyakowski–Schwarz (CBS) inequality are derived that allow an efficient multilevel extension of the related two-level preconditioners.
Abstract: Generalizing the approach of a previous work of the authors, dealing with two-dimensional (2D) problems, we present multilevel preconditioners for three-dimensional (3D) elliptic problems discretized by a family of Rannacher–Turek non-conforming finite elements. Preconditioners based on various multilevel extensions of two-level finite element methods (FEM) lead to iterative methods which often have an optimal order computational complexity with respect to the number of degrees of freedom of the system. Such methods were first presented by Axelsson and Vassilevski in the late 1980s, and are based on (recursive) two-level splittings of the finite element space. An important point to make is that in the case of non-conforming elements the finite element spaces corresponding to two successive levels of mesh refinement are not nested in general. To handle this, a proper two-level basis is required to enable us to fit the general framework for the construction of two-level preconditioners for conforming finite elements and to generalize the method to the multilevel case. In the present paper new estimates of the constant γ in the strengthened Cauchy–Bunyakowski–Schwarz (CBS) inequality are derived that allow an efficient multilevel extension of the related two-level preconditioners. Representative numerical tests well illustrate the optimal complexity of the resulting iterative solver, also for the case of non-smooth coefficients. The second important achievement concerns the experimental study of AMLI solvers applied to the case of micro finite element (μFEM) simulation. Here the coefficient jumps are resolved on the finest mesh only and therefore the classical CBS inequality based convergence theory is not directly applicable. The obtained results, however, demonstrate the efficiency of the proposed algorithms in this case also, as is illustrated by an example of microstructure analysis of bones.

Journal ArticleDOI
TL;DR: A meldable double-ended priority queue is obtained which guarantees the worst-case cost of O(1) for find-min, find-max, insert, and extract; the worst-case cost of O(lg n) with at most lg n + O(lg lg n) element comparisons for delete; and the worst-case cost of O(min{lg m, lg n}) for meld.
Abstract: We introduce two data-structural transformations to construct double-ended priority queues from priority queues. To apply our transformations the priority queues exploited must support the extraction of an unspecified element, in addition to the standard priority-queue operations. With the first transformation we obtain a double-ended priority queue which guarantees the worst-case cost of O(1) for find-min, find-max, insert, extract; and the worst-case cost of O(lg n) with at most lg n + O(1) element comparisons for delete. With the second transformation we get a meldable double-ended priority queue which guarantees the worst-case cost of O(1) for find-min, find-max, insert, extract; the worst-case cost of O(lg n) with at most lg n + O(lg lg n) element comparisons for delete; and the worst-case cost of O(min {lg m, lg n}) for meld. Here, m and n denote the number of elements stored in the data structures prior to the operation in question.
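A much simpler folklore construction conveys what a double-ended priority queue does: keep a min-heap and a max-heap over the same elements and invalidate entries lazily. This sketch gives only amortized O(lg n) bounds and is not one of the paper's transformations, which achieve the stated worst-case comparison counts.

```python
import heapq, itertools

class DoubleEndedPQ:
    """Two-heap DEPQ with lazy deletion; entries share a unique id so a
    removal through one heap can be detected later in the other."""
    def __init__(self):
        self.minh, self.maxh = [], []
        self.alive = {}                   # id -> value, for lazy deletion
        self.counter = itertools.count()
    def insert(self, val):
        i = next(self.counter)
        self.alive[i] = val
        heapq.heappush(self.minh, (val, i))
        heapq.heappush(self.maxh, (-val, i))
    def _prune(self, heap):
        while heap and heap[0][1] not in self.alive:
            heapq.heappop(heap)           # discard entries deleted elsewhere
    def find_min(self):
        self._prune(self.minh)
        return self.minh[0][0]
    def find_max(self):
        self._prune(self.maxh)
        return -self.maxh[0][0]
    def delete_min(self):
        self._prune(self.minh)
        val, i = heapq.heappop(self.minh)
        del self.alive[i]
        return val
    def delete_max(self):
        self._prune(self.maxh)
        negval, i = heapq.heappop(self.maxh)
        del self.alive[i]
        return -negval
```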

Journal ArticleDOI
TL;DR: In this paper, a more precise companion to the classical mediant rounding algorithm for rational numbers is proposed.
Abstract: We adjoin a more precise companion to the classical mediant rounding algorithm for rational numbers.

Journal ArticleDOI
TL;DR: This paper computes error bounds for approximations to a solution x* of the discretized problems of the complementarity problem NCP(f) with f(x) = Mx + φ(x), where M ∈ R^{n×n} is a real matrix and φ is a so-called tridiagonal (nonlinear) mapping.
Abstract: In this paper we consider the complementarity problem NCP(f) with f(x) = Mx + φ(x), where M ∈ R^{n×n} is a real matrix and φ is a so-called tridiagonal (nonlinear) mapping. This problem occurs, for example, if certain classes of free boundary problems are discretized. We compute error bounds for approximations to a solution x* of the discretized problems. The error bounds are improved by an iterative method and can be made arbitrarily small. The ideas are illustrated by numerical experiments.

Journal ArticleDOI
TL;DR: A roundoff error analysis of formulae for evaluating polynomials is performed, taking into account that all steps but the last one can be computed to high relative accuracy.
Abstract: A roundoff error analysis of formulae for evaluating polynomials is performed. The considered formulae are linear combinations of basis functions, which can be computed with high relative accuracy. We have taken into account that all steps but the last one can be computed to high relative accuracy. The exactness of the initial data is crucial for obtaining low error bounds. The Lagrange interpolation formula and related formulae are considered and numerical experiments are provided.
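A standard example of a Lagrange-type formula with good roundoff behavior is the barycentric form, where all weight computations happen once and each evaluation is a ratio of two short sums. This is a generic sketch of that well-known formula, not the specific formulae or bounds analyzed in the paper.

```python
def bary_weights(xs):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    w = []
    for j, xj in enumerate(xs):
        p = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                p *= (xj - xk)
        w.append(1.0 / p)
    return w

def bary_eval(xs, ys, w, x):
    """Second barycentric formula: p(x) = (sum w_j y_j/(x-x_j)) /
    (sum w_j/(x-x_j)), with exact passthrough at the nodes."""
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, w):
        if x == xj:
            return yj
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den
```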

Journal ArticleDOI
TL;DR: A new randomized, partition-based algorithm for the problem of computing the number of inversion pairs in an unsorted array of n numbers that uses a new inversion pair conserving partition procedure different from existing partition procedures such as Hoare partition and Lomuto partition.
Abstract: In this paper, we introduce a new randomized, partition-based algorithm for the problem of computing the number of inversion pairs in an unsorted array of n numbers. The algorithm runs in expected time O(n · log n) and uses O(n) extra space. The expected time analysis of the new algorithm is different from the analyses existing in the literature, in that it explicitly uses inversion pairs. The problem of determining the inversion pair cardinality of an array finds applications in a number of design domains, including but not limited to real-time scheduling and operations research. At the heart of our algorithm is a new inversion pair conserving partition procedure that is different from existing partition procedures such as Hoare-partition and Lomuto-partition. Although the algorithm is not fully adaptive, we believe that it is the first step towards the design of an adaptive, partition-based sorting algorithm whose running time is proportional to the number of inversion pairs in the input.
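For contrast with the paper's randomized partition-based method, the textbook way to count inversion pairs in O(n log n) worst case is to piggyback on merge sort; this sketch implements that standard approach, not the paper's algorithm.

```python
def count_inversions(a):
    """Return (#inversion pairs, sorted copy) via merge sort: whenever an
    element of the right half is emitted while k elements remain in the
    left half, it forms k cross inversions."""
    if len(a) < 2:
        return 0, list(a)
    mid = len(a) // 2
    cl, left = count_inversions(a[:mid])
    cr, right = count_inversions(a[mid:])
    merged, i, j, cross = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            cross += len(left) - i       # right[j] inverts all remaining left
    merged.extend(left[i:])
    merged.extend(right[j:])
    return cl + cr + cross, merged
```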