
Showing papers in "Mathematics of Computation in 2002"


Journal ArticleDOI
TL;DR: Upper bounds for the energy norm of the error are obtained that are explicit in the mesh-width h, in the polynomial degree p, and in the regularity of the exact solution; the theoretical results are confirmed in a series of numerical examples.
Abstract: We study the convergence properties of the hp-version of the local discontinuous Galerkin finite element method for convection-diffusion problems; we consider a model problem in a one-dimensional space domain. We allow arbitrary meshes and polynomial degree distributions and obtain upper bounds for the energy norm of the error which are explicit in the mesh-width h, in the polynomial degree p, and in the regularity of the exact solution. We identify a special numerical flux for which the estimates are optimal in both h and p. The theoretical results are confirmed in a series of numerical examples.

234 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the approximation properties of finite element spaces on quadrilateral meshes, and demonstrate degradation of the convergence order on quadrilateral meshes as compared to rectangular meshes for serendipity finite elements and for various mixed and nonconforming finite elements.
Abstract: We consider the approximation properties of finite element spaces on quadrilateral meshes. The finite element spaces are constructed starting with a given finite dimensional space of functions on a square reference element, which is then transformed to a space of functions on each convex quadrilateral element via a bilinear isomorphism of the square onto the element. It is known that for affine isomorphisms, a necessary and sufficient condition for approximation of order r + 1 in L p and order r in W 1 p is that the given space of functions on the reference element contain all polynomial functions of total degree at most r. In the case of bilinear isomorphisms, it is known that the same estimates hold if the function space contains all polynomial functions of separate degree r. We show, by means of a counterexample, that this latter condition is also necessary. As applications, we demonstrate degradation of the convergence order on quadrilateral meshes as compared to rectangular meshes for serendipity finite elements and for various mixed and nonconforming finite elements.

233 citations


Journal ArticleDOI
TL;DR: In this paper, properties and construction of designs under a centered version of the L2-discrepancy are analyzed; optimization is performed using the threshold accepting heuristic, which produces low-discrepancy designs compared to the theoretical expectation and variance.
Abstract: In this paper properties and construction of designs under a centered version of the L2-discrepancy are analyzed. The theoretical expectation and variance of this discrepancy are derived for random designs and Latin hypercube designs. The expectation and variance of Latin hypercube designs are significantly lower than those of random designs. While in dimension one the unique uniform design is also a set of equidistant points, low-discrepancy designs in higher dimensions have to be generated by explicit optimization. Optimization is performed using the threshold accepting heuristic, which produces low-discrepancy designs compared to the theoretical expectation and variance.

186 citations
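As a concrete companion to this entry, here is a minimal Python sketch (not taken from the paper) of Hickernell's closed-form expression for the centered L2-discrepancy, which is the objective a threshold accepting search would minimize; the function names and the random Latin hypercube construction are illustrative only.

```python
# Sketch (not from the paper): Hickernell's closed form of the centered
# L2-discrepancy, the objective a threshold accepting heuristic would
# minimize over designs in [0, 1]^d.
import numpy as np

def centered_l2_discrepancy(x):
    """x: (n, d) array of design points in [0, 1]^d; returns CD_2(x)."""
    n, d = x.shape
    z = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.sum(np.prod(1.0 + 0.5 * z - 0.5 * z**2, axis=1))
    prod_ij = np.ones((n, n))                     # pairwise products over all (i, j)
    for k in range(d):
        zi, zj = z[:, k][:, None], z[:, k][None, :]
        dij = np.abs(x[:, k][:, None] - x[:, k][None, :])
        prod_ij *= 1.0 + 0.5 * zi + 0.5 * zj - 0.5 * dij
    term3 = prod_ij.sum() / n**2
    return np.sqrt(term1 - term2 + term3)

# A random Latin hypercube design: midpoints of a random permutation per column.
rng = np.random.default_rng(0)
n, d = 16, 4
lhd = (np.column_stack([rng.permutation(n) for _ in range(d)]) + 0.5) / n
print(centered_l2_discrepancy(lhd), centered_l2_discrepancy(rng.random((n, d))))
```

In a threshold accepting search, this value would be recomputed (or updated incrementally) after each candidate swap of two levels within a column of the Latin hypercube.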


Journal ArticleDOI
TL;DR: In this article, the reliability of averaging techniques for the Laplace equation with mixed boundary conditions has been investigated in a model situation with unstructured grids, nonsmoothness of exact solutions, and a wide class of averaging methods.
Abstract: Averaging techniques are popular tools in adaptive finite element methods for the numerical treatment of second order partial differential equations since they provide efficient a posteriori error estimates by a simple postprocessing. In this paper, their reliability is shown for conforming, nonconforming, and mixed low order finite element methods in a model situation: the Laplace equation with mixed boundary conditions. Emphasis is on possibly unstructured grids, nonsmoothness of exact solutions, and a wide class of averaging techniques. Theoretical and numerical evidence supports that the reliability is up to the smoothness of given right-hand sides.

184 citations


Journal ArticleDOI
TL;DR: In this paper, the stability in H1(Ω) of the L2 projection onto a family of finite element spaces of conforming piecewise linear functions satisfying certain local mesh conditions is shown; in particular, it holds for locally quasi-uniform geometrically refined meshes.
Abstract: We prove the stability in H1(Ω) of the L2 projection onto a family of finite element spaces of conforming piecewise linear functions satisfying certain local mesh conditions. We give explicit formulae to check these conditions for a given finite element mesh in any number of spatial dimensions. In particular, stability of the L2 projection in H1(Ω) holds for locally quasiuniform geometrically refined meshes as long as the volume of neighboring elements does not change too drastically.

173 citations


Journal ArticleDOI
TL;DR: Two classes of iterative methods for saddle point problems are considered: inexact Uzawa algorithms and a class of methods with symmetric preconditioners; the estimates obtained are partially sharper than the known estimates in the literature.
Abstract: In this paper two classes of iterative methods for saddle point problems are considered: inexact Uzawa algorithms and a class of methods with symmetric preconditioners. In both cases the iteration matrix can be transformed to a symmetric matrix by block diagonal matrices, a simple but essential observation which allows one to estimate the convergence rate of both classes by studying associated eigenvalue problems. The estimates obtained apply to a wider range of situations and are partially sharper than the known estimates in the literature. A few numerical tests are given which confirm the sharpness of the estimates.

169 citations
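For orientation, an inexact Uzawa iteration for a saddle point system [[A, B^T], [B, 0]] can be sketched in a few lines; the random test matrices, the crude preconditioner Â = ||A|| I, and the step τ below are illustrative assumptions, not the preconditioners analyzed in the paper.

```python
# Sketch (illustrative, not the paper's setup): inexact Uzawa for
#   [A  B^T] [x]   [f]
#   [B   0 ] [y] = [g]
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                    # SPD (1,1) block
B = rng.standard_normal((m, n))                # coupling block
f, g = rng.standard_normal(n), rng.standard_normal(m)

A_hat_inv = np.eye(n) / np.linalg.norm(A, 2)   # crude inner preconditioner A_hat = ||A|| I
tau = 1.0                                      # step for the multiplier update

x, y = np.zeros(n), np.zeros(m)
for _ in range(1000):
    x = x + A_hat_inv @ (f - A @ x - B.T @ y)  # inexact inner step with A_hat
    y = y + tau * (B @ x - g)                  # Uzawa-type multiplier update

res = np.linalg.norm(np.concatenate([A @ x + B.T @ y - f, B @ x - g]))
print("residual after 1000 iterations:", res)
```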


Journal ArticleDOI
TL;DR: A novel approach to the construction of good lattice rules for the integration of Korobov classes of periodic functions over the unit s-dimensional cube by searching over all possible choices of the (s + 1)th component, while keeping all the existing components unchanged.
Abstract: This paper provides a novel approach to the construction of good lattice rules for the integration of Korobov classes of periodic functions over the unit s-dimensional cube. Theorems are proved which justify the construction of good lattice rules one component at a time; that is, the lattice rule for dimension s + 1 is obtained from the rule for dimension s by searching over all possible choices of the (s + 1)th component, while keeping all the existing components unchanged. The construction, which goes against accepted wisdom, is illustrated by numerical examples. The construction is particularly useful if the components of the integrand are ordered, in the sense that the first component is more important than the second, and so on.

159 citations
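The component-by-component construction described above can be illustrated concretely. The sketch below is a simplified, unweighted version (smoothness α = 2, all weights equal to one), using the standard closed-form worst-case error in the Korobov space, where the kernel sum reduces to 2π²B2({x}) with B2(x) = x² - x + 1/6; it is not the paper's code.

```python
# Sketch: component-by-component search for a rank-1 lattice rule generator z,
# using the closed-form worst-case error in the (unweighted) Korobov space
# with alpha = 2:  e_n(z)^2 = -1 + (1/n) * sum_k prod_j (1 + 2*pi^2*B2({k z_j / n})).
import numpy as np

def b2(x):                       # Bernoulli polynomial B2 on [0, 1)
    return x * x - x + 1.0 / 6.0

def worst_case_error_sq(z, n):
    k = np.arange(n)
    prod = np.ones(n)
    for zj in z:
        prod *= 1.0 + 2.0 * np.pi**2 * b2((k * zj % n) / n)
    return -1.0 + prod.mean()

def cbc(n, d):
    """Greedy CBC construction: fix earlier components, search only the next one."""
    z = []
    for _ in range(d):
        candidates = list(range(1, n))           # n prime: all residues 1..n-1 admissible
        errs = [worst_case_error_sq(z + [c], n) for c in candidates]
        z.append(candidates[int(np.argmin(errs))])
    return z

n, d = 101, 5                                    # n prime
z = cbc(n, d)
print("generator:", z, " e_n^2 =", worst_case_error_sq(z, n))
```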


Journal ArticleDOI
TL;DR: Reliability is shown for conforming higher order finite element methods in a model situation, the Laplace equation with mixed boundary conditions; emphasis is on possibly unstructured grids, nonsmoothness of exact solutions, and a wide class of local averaging techniques.
Abstract: Averaging techniques are popular tools in adaptive finite element methods since they provide efficient a posteriori error estimates by a simple postprocessing. In the second paper of our analysis of their reliability, we consider conforming h-FEM of higher (i.e., not of lowest) order in two or three space dimensions. In this paper, reliability is shown for conforming higher order finite element methods in a model situation, the Laplace equation with mixed boundary conditions. Emphasis is on possibly unstructured grids, nonsmoothness of exact solutions, and a wide class of local averaging techniques. Theoretical and numerical evidence supports that the reliability is up to the smoothness of given right-hand sides.

157 citations


Journal ArticleDOI
TL;DR: Some global and uniform convergence estimates for a class of subspace correction (based on space decomposition) iterative methods applied to some unconstrained convex optimization problems are given.
Abstract: This paper gives some global and uniform convergence estimates for a class of subspace correction (based on space decomposition) iterative methods applied to some unconstrained convex optimization problems. Some multigrid and domain decomposition methods are also discussed as special examples for solving some nonlinear elliptic boundary value problems.

138 citations
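As a much-simplified instance of the subspace correction framework, successive exact corrections over a decomposition of R^n into coordinate blocks amount to block Gauss-Seidel minimization of a convex quadratic; the sketch below is only meant to illustrate the space decomposition idea, not the multigrid or domain decomposition settings treated in the paper.

```python
# Sketch: successive subspace correction (block Gauss-Seidel) for the
# unconstrained convex problem  min_x  f(x) = 0.5*x^T A x - b^T x,
# with R^n decomposed into consecutive coordinate blocks.
import numpy as np

rng = np.random.default_rng(2)
n, block = 60, 10
Q = rng.standard_normal((n, n))
A = Q @ Q.T + np.eye(n)                  # SPD, so f is strictly convex
b = rng.standard_normal(n)
x = np.zeros(n)

blocks = [np.arange(i, i + block) for i in range(0, n, block)]
for sweep in range(50):
    for idx in blocks:                   # correct within one subspace at a time
        rest = np.setdiff1d(np.arange(n), idx)
        # exact minimization of f over the subspace spanned by e_i, i in idx
        x[idx] = np.linalg.solve(A[np.ix_(idx, idx)],
                                 b[idx] - A[np.ix_(idx, rest)] @ x[rest])

print("gradient norm after 50 sweeps:", np.linalg.norm(A @ x - b))
```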


Journal ArticleDOI
TL;DR: An algorithm for the construction of quasi-Monte Carlo (QMC) rules for integration in weighted Sobolev spaces is developed and the rules constructed are shifted rank-1 lattice rules, which are shown to achieve a strong tractability error bound.
Abstract: We develop and justify an algorithm for the construction of quasi-Monte Carlo (QMC) rules for integration in weighted Sobolev spaces; the rules so constructed are shifted rank-1 lattice rules. The parameters characterising the shifted lattice rule are found "component-by-component": the (d+1)-th component of the generator vector and the shift are obtained by successive 1-dimensional searches, with the previous d components kept unchanged. The rules constructed in this way are shown to achieve a strong tractability error bound in weighted Sobolev spaces. A search for n-point rules with n prime and all dimensions 1 to d requires a total cost of O(n^3 d^2) operations. This may be reduced to O(n^3 d) operations at the expense of O(n^2) storage. Numerical values of parameters and worst-case errors are given for dimensions up to 40 and n up to a few thousand. The worst-case errors for these rules are found to be much smaller than the theoretical bounds.

106 citations
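Once a generator vector z and shift have been chosen, applying a shifted rank-1 lattice rule is a one-liner: the points are the fractional parts of kz/n + shift. The snippet below only illustrates this evaluation step on a toy integrand; the particular z and shift are random placeholders, not the values produced by the paper's component-by-component search.

```python
# Sketch: evaluating a shifted rank-1 lattice rule
#   Q_n f = (1/n) * sum_k f({k*z/n + shift}).
import numpy as np

def shifted_lattice_points(n, z, shift):
    k = np.arange(n)[:, None]
    return np.mod(k * z[None, :] / n + shift[None, :], 1.0)

n, d = 1009, 8
rng = np.random.default_rng(3)
z = rng.integers(1, n, size=d)          # placeholder generator (n prime)
shift = rng.random(d)                   # random shift

# Toy integrand with exact integral 1 over [0, 1]^d.
f = lambda x: np.prod(1.0 + (x - 0.5) / np.arange(1, d + 1) ** 2, axis=1)
pts = shifted_lattice_points(n, z, shift)
print("QMC estimate:", f(pts).mean(), " (exact value 1.0)")
```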


Journal ArticleDOI
TL;DR: This paper proposes a fast approximate algorithm for the associated Legendre transform by means of polynomial interpolation accelerated by the Fast Multipole Method (FMM), and shows that the algorithm is stable and is faster than the direct computation for N ≥ 511.
Abstract: The spectral method with discrete spherical harmonics transform plays an important role in many applications. In spite of its advantages, the spherical harmonics transform has a drawback of high computational complexity, which is determined by that of the associated Legendre transform, and the direct computation requires time of O(N^3) for cut-off frequency N. In this paper, we propose a fast approximate algorithm for the associated Legendre transform. Our algorithm evaluates the transform by means of polynomial interpolation accelerated by the Fast Multipole Method (FMM). The divide-and-conquer approach with split Legendre functions gives computational complexity O(N^2 log N). Experimental results show that our algorithm is stable and is faster than the direct computation for N ≥ 511.
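For reference, the O(N^3) direct associated Legendre transform that serves as the baseline above can be written down directly: for each order m and degree n, the coefficient is a weighted sum of the sampled data against the normalized associated Legendre function at Gauss-Legendre nodes. The sketch below is only this naive baseline (with placeholder random data and a standard normalization), not the FMM-accelerated algorithm of the paper.

```python
# Sketch of the direct (O(N^3)) associated Legendre transform used as a baseline:
# for each order m and degree n >= m, integrate the data against a normalized
# P_n^m at Gauss-Legendre nodes.  Not the paper's FMM-accelerated algorithm.
import numpy as np
from scipy.special import lpmv, gammaln

def direct_assoc_legendre_transform(data, x, w):
    """data[m, j]: samples of the m-th mode at the Gauss-Legendre nodes x_j.
    Returns coef[n, m] = sum_j w_j * Pbar_n^m(x_j) * data[m, j] for n >= m."""
    N = data.shape[0] - 1
    coef = np.zeros((N + 1, N + 1))
    for m in range(N + 1):
        for n in range(m, N + 1):
            lognorm = 0.5 * (np.log((2 * n + 1) / 2.0)
                             + gammaln(n - m + 1) - gammaln(n + m + 1))
            pbar = np.exp(lognorm) * lpmv(m, n, x)    # normalized P_n^m at the nodes
            coef[n, m] = np.sum(w * pbar * data[m])
    return coef

N = 32
x, w = np.polynomial.legendre.leggauss(N + 1)         # Gauss-Legendre nodes/weights
data = np.random.default_rng(4).standard_normal((N + 1, N + 1))
coef = direct_assoc_legendre_transform(data, x, w)
print("coefficient array shape:", coef.shape)
```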

Journal ArticleDOI
TL;DR: Divisors are represented as ideals and an ideal reduction algorithm based on lattice reduction is given, yielding a unique representative for each divisor class; an algorithm is also given for solving the discrete logarithm problem when the curve is defined over a finite field.
Abstract: This paper is concerned with algorithms for computing in the divisor class group of a nonsingular plane curve of the form yn = c(x) which has only one point at infinity. Divisors are represented as ideals, and an ideal reduction algorithm based on lattice reduction is given. We obtain a unique representative for each divisor class and the algorithms for addition and reduction of divisors run in polynomial time. An algorithm is also given for solving the discrete logarithm problem when the curve is defined over a finite field.

Journal ArticleDOI
TL;DR: A more flexible version of the Bramble-Pasciak-Steinbach criterion for H1-stability is given that applies to nonconforming schemes on arbitrary (shape-regular) meshes and guarantees H1-stability of Π a priori for a class of adaptively refined triangulations into right isosceles triangles.
Abstract: Suppose S ⊂ H1(Ω) is a finite-dimensional linear space based on a triangulation T of a domain Ω, and let Π : L2(Ω) → L2(Ω) denote the L2-projection onto S. Provided the mass matrix of each element T ∈ T and the surrounding mesh-sizes obey the inequalities due to Bramble, Pasciak, and Steinbach, or the neighboring element-sizes obey the global growth-condition due to Crouzeix and Thomee, Π is H1-stable: For all u ∈ H1(Ω) we have ||Πu||H1(Ω) ≤ C ||u||H1(Ω) with a constant C that is independent of, e.g., the dimension of S. This paper provides a more flexible version of the Bramble-Pasciak-Steinbach criterion for H1-stability on an abstract level. In its general version, (i) the criterion is applicable to all kinds of finite element spaces and yields, in particular, H1-stability for nonconforming schemes on arbitrary (shape-regular) meshes; (ii) it is weaker than (i.e., implied by) either the Bramble-Pasciak-Steinbach or the Crouzeix-Thomee criterion for regular triangulations into triangles; (iii) it guarantees H1-stability of Π a priori for a class of adaptively refined triangulations into right isosceles triangles.

Journal ArticleDOI
TL;DR: The algorithm works over any finite field, and its running time does not rely on any unproven assumptions.
Abstract: We provide a subexponential algorithm for solving the discrete logarithm problem in Jacobians of high-genus hyperelliptic curves over finite fields. Its expected running time for instances with genus g and underlying finite field Fq satisfying g ≥ ϑ log q for a positive constant ϑ is given by O(e^((√(1 + 3/(2ϑ)) + √(3/(2ϑ)) + o(1)) √(g log q · log(g log q)))). The algorithm works over any finite field, and its running time does not rely on any unproven assumptions.

Journal ArticleDOI
TL;DR: Several baby-step giant-step algorithms are presented for the low Hamming weight discrete logarithm problem, and a new existence result for splitting systems is proved that yields a (nonuniform) deterministic algorithm with complexity O(t^(3/2) (log m) (m/2 choose t/2)).
Abstract: In this paper, we present several baby-step giant-step algorithms for the low Hamming weight discrete logarithm problem. In this version of the discrete log problem, we are required to find a discrete logarithm in a finite group of order approximately 2^m, given that the unknown logarithm has a specified number of 1's, say t, in its binary representation. Heiman and Odlyzko presented the first algorithms for this problem. Unpublished improvements by Coppersmith include a deterministic algorithm with complexity O(m (m/2 choose t/2)), and a Las Vegas algorithm with complexity O(√t (m/2 choose t/2)). We perform an average-case analysis of Coppersmith's deterministic algorithm. The average-case complexity achieves only a constant factor speed-up over the worst case. Therefore, we present a generalized version of Coppersmith's algorithm, utilizing a combinatorial set system that we call a splitting system. Using probabilistic methods, we prove a new existence result for these systems that yields a (nonuniform) deterministic algorithm with complexity O(t^(3/2) (log m) (m/2 choose t/2)). We also present some explicit constructions for splitting systems that make use of perfect hash families.
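The splitting idea behind these algorithms is a meet-in-the-middle search over the positions of the 1-bits. The sketch below handles only the easy case in which exactly t/2 of the one-bits fall in each half of the m positions; Coppersmith's algorithm and the paper's splitting systems exist precisely to cover the unevenly split cases. The group, modulus, and parameters in the demo are arbitrary.

```python
# Sketch: baby-step giant-step for a low Hamming weight exponent, covering only
# the "even split" case where t/2 of the 1-bits lie in each half of the m positions.
from itertools import combinations

def low_weight_dlog_even_split(g, h, p, m, t):
    """Find x with h = g^x (mod p), x having t one-bits among m positions,
    assuming exactly t//2 of them fall in each half. Returns x or None."""
    half = m // 2
    low_pos, high_pos = range(half), range(half, m)
    # Baby steps: g^(x1) for every choice of t//2 bit positions in the low half.
    table = {}
    for pos in combinations(low_pos, t // 2):
        x1 = sum(1 << b for b in pos)
        table[pow(g, x1, p)] = x1
    # Giant steps: look for h * g^(-x2) in the table.
    for pos in combinations(high_pos, t // 2):
        x2 = sum(1 << b for b in pos)
        target = (h * pow(g, -x2, p)) % p
        if target in table:
            return table[target] + x2
    return None

# Tiny demo in the multiplicative group mod a prime.
p, g, m, t = 10007, 5, 12, 4
x_true = (1 << 1) | (1 << 4) | (1 << 7) | (1 << 10)   # two 1-bits in each half
h = pow(g, x_true, p)
print(low_weight_dlog_even_split(g, h, p, m, t), "==", x_true)
```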

Journal ArticleDOI
TL;DR: Different mixed variational methods are proposed and studied in order to approximate with finite elements the unilateral problems arising in contact mechanics.
Abstract: In this paper, we propose and study different mixed variational methods in order to approximate with finite elements the unilateral problems arising in contact mechanics. The discretized unilateral conditions at the candidate contact interface are expressed by using either continuous piecewise linear or piecewise constant Lagrange multipliers in the saddle-point formulation. A priori error estimates are established and several numerical studies corresponding to the different choices of the discretized unilateral conditions are carried out.

Journal ArticleDOI
TL;DR: It is shown constructively using the Halton sequence that the ε-exponent of tractability is 1, which implies that infinite-dimensional integration is no harder than one-dimensional integration.
Abstract: Dimensionally unbounded problems are frequently encountered in practice, such as in simulations of stochastic processes, in particle and light transport problems and in the problems of mathematical finance. This paper considers quasi-Monte Carlo integration algorithms for weighted classes of functions of infinitely many variables, in which the dependence of functions on successive variables is increasingly limited. The dependence is modeled by a sequence of weights. The integrands belong to rather general reproducing kernel Hilbert spaces that can be decomposed as the direct sum of a series of their subspaces, each subspace containing functions of only a finite number of variables. The theory of reproducing kernels is used to derive a quadrature error bound, which is the product of two terms: the generalized discrepancy and the generalized variation. Tractability means that the minimal number of function evaluations needed to reduce the initial integration error by a factor ε is bounded by Cε^(-p) for some exponent p and some positive constant C. The ε-exponent of tractability is defined as the smallest power of ε^(-1) in these bounds. It is shown by using Monte Carlo quadrature that the ε-exponent is no greater than 2 for these weighted classes of integrands. Under a somewhat stronger assumption on the weights and for a popular choice of the reproducing kernel it is shown constructively using the Halton sequence that the ε-exponent of tractability is 1, which implies that infinite-dimensional integration is no harder than one-dimensional integration.
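Since the constructive part of the result uses the Halton sequence, a reminder of how those points are generated may be useful; the sketch below is the standard radical-inverse construction with a toy integrand, not anything specific to the paper.

```python
# Sketch: the Halton sequence (radical inverse in the first d prime bases),
# used here to estimate a simple d-dimensional integral.
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def halton(n, d):
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:d]
    return np.array([[radical_inverse(i, b) for b in primes] for i in range(1, n + 1)])

# Toy check: integrate prod_j x_j over [0, 1]^d, exact value 2^(-d).
n, d = 4096, 5
pts = halton(n, d)
print("estimate:", np.prod(pts, axis=1).mean(), " exact:", 0.5 ** d)
```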

Journal ArticleDOI
TL;DR: The convergence of a class of combined spectral-finite difference methods using Hermite basis, applied to the Fokker-Planck equation, is studied and it is shown that the Hermite based spectral methods are convergent with spectral accuracy in weighted Sobolev space.
Abstract: The convergence of a class of combined spectral-finite difference methods using Hermite basis, applied to the Fokker-Planck equation, is studied. It is shown that the Hermite based spectral methods are convergent with spectral accuracy in weighted Sobolev space. Numerical results indicating the spectral convergence rate are presented. A velocity scaling factor is used in the Hermite basis and is shown to improve the accuracy and effectiveness of the Hermite spectral approximation, with no increase in workload. Some basic analysis for the selection of the scaling factors is also presented.
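The role of the velocity scaling factor can be illustrated on a toy projection problem: expand a Gaussian-type function in scaled Hermite functions φ_n(x) = √a ψ_n(ax) and compare the error for several values of a. This is only an illustration of the scaling idea (with Gauss-Hermite quadrature for the coefficients), not the paper's Fokker-Planck discretization.

```python
# Sketch: expansion of f in scaled Hermite functions phi_n(x) = sqrt(a)*psi_n(a*x),
# where psi_n are the orthonormal Hermite functions, illustrating the effect of
# the scaling factor a.  Toy example only.
import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss
from math import factorial, pi, sqrt

def psi(n, x):                        # orthonormal Hermite function psi_n(x)
    coeff = np.zeros(n + 1); coeff[n] = 1.0
    return hermval(x, coeff) * np.exp(-x * x / 2.0) / sqrt(2.0**n * factorial(n) * sqrt(pi))

def hermite_approx(f, M, a, x):
    """Project f onto span{phi_0, ..., phi_{M-1}} via Gauss-Hermite quadrature."""
    u, w = hermgauss(2 * M)           # rule for integrals of the form G(u) exp(-u^2)
    fa = f(u / a)
    approx = np.zeros_like(x)
    for n in range(M):
        c_n = np.sum(w * fa * psi(n, u) * np.exp(u * u)) / sqrt(a)
        approx += c_n * sqrt(a) * psi(n, a * x)
    return approx

f = lambda x: np.exp(-2.0 * x * x)
x = np.linspace(-4, 4, 401)
for a in (0.5, 1.0, 1.5, 2.0):
    err = np.max(np.abs(hermite_approx(f, 12, a, x) - f(x)))
    print(f"scaling a = {a}: max error {err:.2e}")
```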

Journal ArticleDOI
TL;DR: Algorithms are devised that compute the smallest eigenvalue and the entries of the inverse of a diagonally dominant M-matrix with relative errors of the order of the machine precision; rounding error analysis and numerical examples are presented to demonstrate the numerical behaviour of the algorithms.
Abstract: If each off-diagonal entry and the sum of each row of a diagonally dominant M-matrix are known to a certain relative accuracy, then its smallest eigenvalue and the entries of its inverse are known to the same order of relative accuracy, independently of any condition numbers. In this paper, we devise algorithms that compute these quantities with relative errors of the order of the machine precision. Rounding error analysis and numerical examples are presented to demonstrate the numerical behaviour of the algorithms.
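The data representation assumed above, off-diagonal entries plus row sums, determines the matrix completely, since each diagonal entry equals its row sum minus the (nonpositive) off-diagonal part of the row. The sketch below merely assembles such a matrix and feeds it to a generic dense eigensolver for comparison; it is not the paper's high-relative-accuracy algorithm, which works directly with the off-diagonals and row sums.

```python
# Sketch: assembling a diagonally dominant M-matrix from the data the paper
# assumes to be known accurately -- the off-diagonal entries and the row sums --
# and checking its smallest eigenvalue with a generic eigensolver for comparison.
import numpy as np

rng = np.random.default_rng(5)
n = 8
off = -rng.random((n, n)) * 1e-3          # nonpositive off-diagonal entries
np.fill_diagonal(off, 0.0)
row_sums = rng.random(n) * 1e-8           # tiny nonnegative row sums: nearly singular

A = off.copy()
np.fill_diagonal(A, row_sums - off.sum(axis=1))   # a_ii = s_i - sum_{j != i} a_ij

print("smallest eigenvalue (generic solver):", np.linalg.eigvals(A).real.min())
```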

Journal ArticleDOI
TL;DR: It is shown that a function that can be approximated sufficiently fast must belong to the native space of the basis function in use, and certain saturation theorems are given in the case of thin plate spline interpolation.
Abstract: While direct theorems for interpolation with radial basis functions are intensively investigated, little is known about inverse theorems so far. This paper deals with both inverse and saturation theorems. For an inverse theorem we show in particular that a function that can be approximated sufficiently fast must belong to the native space of the basis function in use. In the case of thin plate spline interpolation we also give certain saturation theorems.

Journal ArticleDOI
TL;DR: This article is devoted to the construction of a Hermite-type regularization operator transforming functions that are not necessarily C1 into globally C1 finite-element functions that are piecewise polynomials.
Abstract: This article is devoted to the construction of a Hermite-type regularization operator transforming functions that are not necessarily C1 into globally C1 finite-element functions that are piecewise polynomials. This regularization operator is a projection, it preserves appropriate first and second order polynomial traces, and it has approximation properties of optimal order. As an illustration, it is used to discretize a nonhomogeneous Navier-Stokes problem, with tangential boundary condition.

Journal ArticleDOI
TL;DR: For second-order elliptic boundary value problems, the contraction number of the V-cycle algorithm is shown to improve uniformly with the increase of the number of smoothing steps, without assuming full elliptic regularity.
Abstract: The multigrid V-cycle algorithm using the Richardson relaxation scheme as the smoother is studied in this paper. For second-order elliptic boundary value problems, the contraction number of the V-cycle algorithm is shown to improve uniformly with the increase of the number of smoothing steps, without assuming full elliptic regularity. As a consequence, the V-cycle convergence result of Braess and Hackbusch is generalized to problems without full elliptic regularity.
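A minimal instance of the setting, a 1D Poisson problem with a V-cycle and Richardson relaxation as the smoother, can be coded in a few dozen lines. The sketch below uses rediscretization on each coarser grid, full-weighting restriction, and linear interpolation; all of these are standard choices made for illustration and are far simpler than the general framework analyzed in the paper.

```python
# Sketch: a 1D Poisson multigrid V-cycle with Richardson relaxation as the smoother.
import numpy as np

def apply_A(v, h):                        # A = (1/h^2) tridiag(-1, 2, -1), zero Dirichlet BCs
    return (2.0 * v - np.r_[v[1:], 0.0] - np.r_[0.0, v[:-1]]) / h**2

def richardson(v, f, h, steps, omega):
    for _ in range(steps):
        v = v + omega * (f - apply_A(v, h))
    return v

def restrict(r):                          # full weighting to the coarse grid
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e):                           # linear interpolation to the fine grid
    fine = np.zeros(2 * len(e) + 1)
    fine[1::2] = e
    fine[0:-2:2] += 0.5 * e
    fine[2::2] += 0.5 * e
    return fine

def v_cycle(v, f, h, nu):
    if len(v) == 1:                       # coarsest grid: solve exactly
        return f * h**2 / 2.0
    omega = h**2 / 3.0                    # damped Richardson (~2/3 of 1/lambda_max)
    v = richardson(v, f, h, nu, omega)
    r_coarse = restrict(f - apply_A(v, h))
    e = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h, nu)
    v = v + prolong(e)
    return richardson(v, f, h, nu, omega)

N = 2**7 - 1
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
f = np.pi**2 * np.sin(np.pi * x)          # exact solution sin(pi x)
v = np.zeros(N)
for _ in range(10):
    v = v_cycle(v, f, h, nu=2)
print("max error vs sin(pi x) after 10 V-cycles:", np.max(np.abs(v - np.sin(np.pi * x))))
```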

Journal ArticleDOI
TL;DR: This paper investigates the behavior of numerical schemes for non-linear conservation laws with source terms, making extensive use of weak limits and nonconservative products to describe accurately the operations performed in practice when using Riemann-based numerical schemes.
Abstract: This paper investigates the behavior of numerical schemes for non-linear conservation laws with source terms. We concentrate on two significant examples: relaxation approximations and genuinely nonhomogeneous scalar laws. The main tool in our analysis is the extensive use of weak limits and nonconservative products which allow us to describe accurately the operations achieved in practice when using Riemann-based numerical schemes. Some illustrative and relevant computational results are provided.

Journal ArticleDOI
TL;DR: Finite element operators defined on "rough" functions in a bounded polyhedron Ω in RN are considered; insisting on preserving positivity in the approximations reveals an intriguing and basic difference between approximating functions which vanish on the boundary of Ω and approximating general functions which do not.
Abstract: We consider finite element operators defined on "rough" functions in a bounded polyhedron Ω in RN. Insisting on preserving positivity in the approximations, we discover an intriguing and basic difference between approximating functions which vanish on the boundary of Ω and approximating general functions which do not. We give impossibility results for approximation of general functions to more than first order accuracy at extreme points of Ω. We also give impossibility results about invariance of positive operators on finite element functions. This is in striking contrast to the well-studied case without positivity.

Journal ArticleDOI
TL;DR: The convergence to {Pt}t of solutions {Ptl}t of approximating Boltzmann equations with cutoff is proved; a result of Graham-Meleard then allows us to approximate {Ptl}t with the empirical measure {µtl,n}t of an easily simulable interacting particle system.
Abstract: Using the main ideas of Tanaka, the measure-solution {Pt}t of a 3-dimensional spatially homogeneous Boltzmann equation of Maxwellian molecules without cutoff is related to a Poisson-driven stochastic differential equation. Using this tool, the convergence to {Pt}t of solutions {Ptl}t of approximating Boltzmann equations with cutoff is proved. Then, a result of Graham-Meleard is used and allows us to approximate {Ptl}t with the empirical measure {µtl,n}t of an easily simulable interacting particle system. Precise rates of convergence are given. A numerical study lies at the end of the paper.

Journal ArticleDOI
TL;DR: A general way is provided for the construction of quincunx interpolatory refinement masks associated with the quincunx lattice in R2; the corresponding quincunx fundamental refinable functions attain the optimal approximation order and smoothness order.
Abstract: We analyze the approximation and smoothness properties of quincunx fundamental refinable functions. In particular, we provide a general way for the construction of quincunx interpolatory refinement masks associated with the quincunx lattice in R2. Their corresponding quincunx fundamental refinable functions attain the optimal approximation order and smoothness order. In addition, these examples are minimally supported with symmetry. For two special families of such quincunx interpolatory masks, we prove that their symbols are nonnegative. Finally, a general way of constructing quincunx biorthogonal wavelets is presented. Several examples of quincunx interpolatory masks and quincunx biorthogonal wavelets are explicitly computed.

Journal ArticleDOI
TL;DR: It is shown that the classical variational formulation is not suitable for this type of problem, since even for a simple NBVP on a disk approximated by a pixel domain the solution differs substantially from the solution on the original disk with smooth boundary.
Abstract: An essential part of any boundary value problem is the domain on which the problem is defined. The domain is often given by scanning or another digital image technique with limited resolution. This leads to significant uncertainty in the domain definition. The paper focuses on the impact of the uncertainty in the domain on the Neumann boundary value problem (NBVP). It studies a scalar NBVP defined on a sequence of domains. The sequence is supposed to converge in the set sense to a limit domain. Then the respective sequence of NBVP solutions is examined. First, it is shown that the classical variational formulation is not suitable for this type of problem, as even for a simple NBVP on a disk approximated by a pixel domain the solution differs substantially from the solution on the original disk with smooth boundary. A new definition of the NBVP is introduced to avoid this difficulty by means of reformulated natural boundary conditions. Then the convergence of solutions of the NBVP is demonstrated. The uniqueness of the limit solution, however, depends on the stability property of the limit domain. Finally, estimates of the difference between two NBVP solutions on two different but close domains are given.

Journal ArticleDOI
TL;DR: Preliminary conjectures are derived that are consistent with Shanks's observations, while fitting in with the viewpoint of Erdos and the results of Alford, Granville and Pomerance.
Abstract: Erdos conjectured that there are x^(1-o(1)) Carmichael numbers up to x, whereas Shanks was skeptical as to whether one might even find an x up to which there are more than √x Carmichael numbers. Alford, Granville and Pomerance showed that there are more than x^(2/7) Carmichael numbers up to x, and gave arguments which even convinced Shanks (in person-to-person discussions) that Erdos must be correct. Nonetheless, Shanks's skepticism stemmed from an appropriate analysis of the data available to him (and his reasoning is still borne out by Pinch's extended new data), and so we herein derive conjectures that are consistent with Shanks's observations, while fitting in with the viewpoint of Erdos and the results of Alford, Granville and Pomerance.
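For readers who want to reproduce small-scale counts of the kind Shanks and Pinch examined, Korselt's criterion (n is Carmichael if and only if n is composite, squarefree, and p - 1 divides n - 1 for every prime p dividing n) gives a direct, if slow, test; the brute-force sketch below is practical only for small x.

```python
# Sketch: counting Carmichael numbers up to x with Korselt's criterion
# (n composite, squarefree, and p-1 | n-1 for every prime p | n).  Brute force.
def is_carmichael(n):
    if n < 3 or n % 2 == 0:            # Carmichael numbers are odd
        return False
    m, prime_factors = n, []
    p = 3
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:             # not squarefree
                return False
            prime_factors.append(p)
        else:
            p += 2
    if m > 1:
        prime_factors.append(m)
    if len(prime_factors) < 3:         # Carmichael numbers have at least 3 prime factors
        return False
    return all((n - 1) % (q - 1) == 0 for q in prime_factors)

x = 100000
carmichaels = [n for n in range(3, x + 1) if is_carmichael(n)]
print(len(carmichaels), carmichaels[:5])   # the smallest Carmichael number is 561
```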

Journal ArticleDOI
TL;DR: It is proved that the numerical solution in velocity has full accuracy up to the boundary, despite the fact that there are numerical boundary layers present in the semi-discrete solutions.
Abstract: In E & Liu (SIAM J. Numer. Anal., 1995), we studied convergence and the structure of the error for several projection methods when the spatial variable was kept continuous (we call this the semi-discrete case). In this paper, we address similar questions for the fully discrete case when the spatial variables are discretized using a staggered grid. We prove that the numerical solution in velocity has full accuracy up to the boundary, despite the fact that there are numerical boundary layers present in the semi-discrete solutions.

Journal ArticleDOI
TL;DR: The aim of this paper is to extend the numerical analysis of residual error indicators to this type of method for a model problem and to check their efficiency through numerical experiments.
Abstract: The mortar technique turns out to be well adapted to handle mesh adaptivity in finite elements, since it allows for working with not necessarily compatible discretizations on the elements of a nonconforming partition of the initial domain. The aim of this paper is to extend the numerical analysis of residual error indicators to this type of method for a model problem and to check their efficiency through numerical experiments.