
Showing papers in "Mathematics of Computation in 2013"


Journal ArticleDOI
TL;DR: A family of conforming finite elements for the Stokes problem on general triangular meshes in two dimensions is presented; the elements satisfy the inf-sup condition and converge with order k for both the velocity and the pressure.
Abstract: We present a family of conforming finite elements for the Stokes problem on general triangular meshes in two dimensions. The lowest order case consists of enriched piecewise linear polynomials for the velocity and piecewise constant polynomials for the pressure. We show that the elements satisfy the inf-sup condition and converge with order k for both the velocity and the pressure. Moreover, the pressure space is exactly the divergence of the corresponding space for the velocity. Therefore the discretely divergence-free functions are divergence-free pointwise. We also show how the proposed elements are related to a class of C1 elements through the use of a discrete de Rham complex.
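For reference, the inf-sup condition mentioned here is the standard Ladyzhenskaya–Babuška–Brezzi compatibility between the discrete velocity space V_h and pressure space Q_h (a textbook statement, not quoted from the paper):

```latex
\inf_{0 \ne q_h \in Q_h} \; \sup_{0 \ne v_h \in V_h}
  \frac{(\operatorname{div} v_h,\, q_h)}{\|v_h\|_{H^1}\,\|q_h\|_{L^2}} \;\ge\; \beta > 0,
```

with β independent of the mesh size h. Since the abstract states that the pressure space equals div V_h exactly, a discretely divergence-free velocity is divergence-free pointwise.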

200 citations


Journal ArticleDOI
TL;DR: In this article, a complete error analysis of the Discontinuous Petrov Galerkin (DPG) method is given, accounting for all the approximations made in its practical implementation.
Abstract: We give a complete error analysis of the Discontinuous Petrov Galerkin (DPG) method, accounting for all the approximations made in its practical implementation. Specifically, we consider the DPG method that uses a trial space consisting of polynomials of degree p on each mesh element. Earlier works showed that there is a "trial-to-test" operator T, which when applied to the trial space, defines a test space that guarantees stability. In DPG formulations, this operator T is local: it can be applied element-by-element. However, an infinite dimensional problem on each mesh element needed to be solved to apply T. In practical computations, T is approximated using polynomials of some degree r > p on each mesh element. We show that this approximation maintains optimal convergence rates, provided that r ≥ p + N, where N is the space dimension (two or more), for the Laplace equation. We also prove a similar result for the DPG method for linear elasticity. Remarks on the conditioning of the stiffness matrix in DPG methods are also included.

151 citations


Journal ArticleDOI
TL;DR: A class of conservative, discontinuous Galerkin schemes for the Generalized Korteweg–de Vries equation that preserve discrete versions of the first two invariants of the continuous solution: the integral of the solution, usually identified with the mass, and the L2-norm.
Abstract: We construct, analyze and numerically validate a class of conservative, discontinuous Galerkin schemes for the Generalized Korteweg–de Vries equation. Up to round-off error, these schemes preserve discrete versions of the first two invariants (the integral of the solution, usually identified with the mass, and the L2–norm) of the continuous solution. Numerical evidence is provided indicating that these conservation properties impart the approximations with beneficial attributes, such as more faithful reproduction of the amplitude and phase of traveling–wave solutions. The numerical simulations also indicate that the discretization errors grow only linearly as a function of time.
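For context, the generalized Korteweg–de Vries equation is commonly written u_t + f(u)_x + u_{xxx} = 0, and the two invariants referred to in the abstract are (standard facts about smooth decaying solutions, restated rather than quoted):

```latex
I_1(u) = \int u \, dx \quad \text{(the ``mass'')}, \qquad
I_2(u) = \int u^2 \, dx = \|u\|_{L^2}^2,
```

both of which are conserved in time for the continuous problem; the schemes of the paper preserve discrete analogues of I_1 and I_2 up to round-off.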

128 citations


Journal ArticleDOI
TL;DR: A space-time variational formulation for linear parabolic partial differential equations with an associated Petrov-Galerkin truth finite element discretization with a favorable discrete inf-sup constant is considered, in sharp contrast to classical (pessimistic) exponentially growing energy estimates.
Abstract: We consider a space-time variational formulation for linear parabolic partial differential equations. We introduce an associated Petrov-Galerkin truth finite element discretization with a favorable discrete inf-sup constant, whose inverse enters into the error estimates: it is unity for the heat equation and decreases only linearly in time for non-coercive (but asymptotically stable) convection operators. The latter in turn permits effective long-time a posteriori error bounds for reduced basis approximations, in sharp contrast to classical (pessimistic) exponentially growing energy estimates. The paper contains a full analysis and various extensions for the formulation introduced briefly in [13], as well as numerical results for a model reaction-convection-diffusion equation.

117 citations


Journal ArticleDOI
TL;DR: How the even Goldbach conjecture was confirmed to be true for all even numbers not larger than 4 · 10^18 and how the counts of minimal Goldbach partitions and of prime gaps are in excellent accord with the predictions made using the prime k-tuple conjecture of Hardy and Littlewood are described.
Abstract: This paper describes how the even Goldbach conjecture was confirmed to be true for all even numbers not larger than 4 · 10^18. Using a result of Ramaré and Saouter, it follows that the odd Goldbach conjecture is true up to 8.37 · 10^26. The empirical data collected during this extensive verification effort, namely, counts and first occurrences of so-called minimal Goldbach partitions with a given smallest prime and of gaps between consecutive primes with a given even gap, are used to test several conjectured formulas related to prime numbers. In particular, the counts of minimal Goldbach partitions and of prime gaps are in excellent accord with the predictions made using the prime k-tuple conjecture of Hardy and Littlewood (with an error that appears to be O(√(t log log t)), where t is the true value of the quantity being estimated). Prime gap moments also show excellent agreement with a generalization of a conjecture made in 1982 by Heath-Brown. The Goldbach conjecture [13] is a famous mathematical problem whose proof, or disproof, has so far resisted the passage of time [20, Problem C1]. (According to [1], Waring and, possibly, Descartes also formulated similar conjectures.) It states, in its modern even form, that every even number larger than four is the sum of two odd prime numbers, i.e., that n = p + q. Here, and in what follows, n will always be an even integer larger than four, and p and q will always be odd prime numbers. The additive decomposition n = p + q is called a Goldbach partition of n. The one with the smallest p will be called the minimal Goldbach partition of n; the corresponding p will be denoted by p(n) and the corresponding q by q(n). It is known that up to a given number x at most O(x^(1−δ)) even integers, for some δ > 0, do not have a Goldbach partition [30], and that every large enough even number is the sum of a prime and the product of at most two primes [24].
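The minimal Goldbach partition (p(n), q(n)) defined above is easy to compute for small n; the sketch below uses naive trial-division primality testing for illustration only (the verification effort itself relied on heavily optimized sieving, not on code like this):

```python
def is_prime(m):
    """Trial-division primality test (adequate only for small m)."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    f = 3
    while f * f <= m:
        if m % f == 0:
            return False
        f += 2
    return True

def minimal_goldbach(n):
    """Return the minimal Goldbach partition (p(n), q(n)) of an even n > 4,
    i.e. the decomposition n = p + q into odd primes with the smallest p."""
    if n <= 4 or n % 2 != 0:
        raise ValueError("n must be an even integer larger than four")
    p = 3
    while p <= n // 2:
        if is_prime(p) and is_prime(n - p):
            return p, n - p
        p += 2
    raise AssertionError("counterexample to the Goldbach conjecture?")
```

For example, minimal_goldbach(98) returns (19, 79), so p(98) = 19 and q(98) = 79.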
Furthermore, according to [48], every odd number greater than one is the sum of at most five primes. As described in Table 1, over a time span of more than a century the even Goldbach conjecture was confirmed to be true up to ever-increasing upper limits. Section 1 describes the methods that were used by the first author, with computational help from the second and third authors, and others, to set the limit of verification of the Goldbach conjecture at 4 · 10^18. Section 2 presents a small subset of the empirical data gathered during the verification, namely, counts and first occurrences of primes in minimal Goldbach partitions, and counts and first occurrences of prime gaps, and compares it with the predictions of the prime k-tuple conjecture of Hardy and Littlewood.

114 citations


Journal ArticleDOI
TL;DR: It is shown that the global maximum principle can be preserved while the high order accuracy of the underlying scheme is maintained.
Abstract: In this paper, we present a class of parametrized limiters used to achieve a strict maximum principle for high order numerical schemes applied to hyperbolic conservation laws. By decoupling a sequence of parameters embedded in a group of explicit inequalities, the numerical fluxes are locally redefined in a consistent and conservative formulation. We show that the global maximum principle can be preserved while the high order accuracy of the underlying scheme is maintained. The parametrized limiters are less restrictive on the CFL number when applied to high order finite volume schemes. The less restrictive limiters allow for the development of high order finite difference schemes which preserve the maximum principle. Within the proposed parametrized limiter framework, a successive sequence of limiters is designed to allow for a significantly larger CFL number by relaxing the limits on the intermediate values of the multistage Runge-Kutta method. Numerical results and preliminary analysis for linear and nonlinear scalar problems are presented to support these claims. The parametrized limiters are applied to the numerical fluxes directly, so there is no increased complexity in applying them to different kinds of monotone numerical fluxes.
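The parametrized limiters of the paper act on numerical fluxes and are not reproduced here; as a simpler illustration of the maximum-principle-preserving idea, the sketch below implements the classical scaling limiter of Zhang and Shu, which pulls the point values of a cell polynomial toward the cell average until they respect prescribed global bounds [m, M] (a related but different device):

```python
def scale_to_bounds(values, avg, m, M):
    """Rescale point values of a cell polynomial toward the cell average avg
    so that every value lies in [m, M]; assumes m <= avg <= M.  The map
    v -> avg + theta*(v - avg) leaves the cell average, and hence
    conservation, untouched."""
    hi, lo = max(values), min(values)
    theta = 1.0
    if hi > avg:
        theta = min(theta, (M - avg) / (hi - avg))
    if lo < avg:
        theta = min(theta, (avg - m) / (avg - lo))
    return [avg + theta * (v - avg) for v in values]
```

With bounds [0, 1], point values [0.5, 1.2] around a cell average of 0.8 are rescaled to approximately [0.65, 1.0], so the overshoot above 1 is removed while the average is preserved.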

94 citations


Journal ArticleDOI
TL;DR: In this paper, a family of finite element spaces of differential forms defined on cubical meshes in any number of dimensions is developed, which can be used for stable discretization of a variety of problems.
Abstract: We develop a family of finite element spaces of differential forms defined on cubical meshes in any number of dimensions. The family contains elements of all polynomial degrees and all form degrees. In two dimensions, these include the serendipity finite elements and the rectangular BDM elements. In three dimensions they include a recent generalization of the serendipity spaces, and new H(curl) and H(div) finite element spaces. Spaces in the family can be combined to give finite element subcomplexes of the de Rham complex which satisfy the basic hypotheses of the finite element exterior calculus, and hence can be used for stable discretization of a variety of problems. The construction and properties of the spaces are established in a uniform manner using finite element exterior calculus.

70 citations


Journal ArticleDOI
TL;DR: A general abstract framework is proposed for a posteriori error analysis of finite element methods for solving linear nonlocal diffusion and bond-based peridynamic models, and the reliability and efficiency of the estimators are proved.
Abstract: In this paper, we present some results on a posteriori error analysis of finite element methods for solving linear nonlocal diffusion and bond-based peridynamic models. In particular, we aim to propose a general abstract framework for a posteriori error analysis of peridynamic problems. A posteriori error estimators are then proposed, and their reliability and efficiency are proved. Connections between nonlocal a posteriori error estimation and classical local estimation are studied within the continuous finite element space. One-dimensional numerical experiments are also given to test the theoretical conclusions.

68 citations


Journal ArticleDOI
Traian Iliescu1, Zhu Wang1
TL;DR: A variational multiscale closure modeling strategy for the numerical stabilization of proper orthogonal decomposition reduced-order models of convection-dominated equations and the numerical analysis of the finite element discretization of the model is presented.
Abstract: We introduce a variational multiscale closure modeling strategy for the numerical stabilization of proper orthogonal decomposition reduced-order models of convection-dominated equations. As a first step, the new model is analyzed and tested for convection-dominated convection-diffusion-reaction equations. The numerical analysis of the finite element discretization of the model is presented. Numerical tests show the increased numerical accuracy over the standard reduced-order model and illustrate the theoretical convergence rates.

66 citations


Journal ArticleDOI
TL;DR: A novel and efficient strategy, which is entirely done by using the fast Fourier transform, is proposed to reconstruct the mean and the variance of the random source function from measurements at one boundary point, where the measurements are assumed to be available for many realizations of the source term.
Abstract: This paper is concerned with an inverse random source problem for the one-dimensional stochastic Helmholtz equation, which is to reconstruct the statistical properties of the random source function from boundary measurements of the radiating random electric field. Although the emphasis of the paper is on the inverse problem, we adopt a computationally more efficient approach to study the solution of the direct problem in the context of the scattering model. Specifically, the direct model problem is equivalently formulated into a two-point spatially stochastic boundary value problem, for which the existence and uniqueness of the pathwise solution is proved. In particular, an explicit formula is deduced for the solution from an integral representation by solving the two-point boundary value problem. Based on this formula, a novel and efficient strategy, which is entirely done by using the fast Fourier transform, is proposed to reconstruct the mean and the variance of the random source function from measurements at one boundary point, where the measurements are assumed to be available for many realizations of the source term. Numerical examples are presented to demonstrate the validity and effectiveness of the proposed method.

65 citations


Journal ArticleDOI
TL;DR: A new linear extrapolation of the convecting velocity for CNLE that ensures energetic stability without introducing an undesirable exponential Gronwall constant is proposed.
Abstract: We investigate the stability of a fully-implicit, linearly extrapolated Crank-Nicolson (CNLE) time-stepping scheme for finite element spatial discretization of the Navier-Stokes equations. Although presented in 1976 by Baker and applied and analyzed in various contexts since then, all known convergence estimates of CNLE require a time-step restriction. We propose a new linear extrapolation of the convecting velocity for CNLE that ensures energetic stability without introducing an undesirable exponential Gronwall constant. Such a result is unknown for conventional CNLE for inhomogeneous boundary data (usual techniques fail!). Numerical illustrations are provided showing that our new extrapolation clearly improves upon stability and accuracy from conventional CNLE.
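For context, the conventional linear extrapolation in CNLE approximates the convecting velocity at the half-step t_{n+1/2} from the two previous velocity iterates (this is the standard formula that the paper's new extrapolation modifies):

```latex
u^{*}\bigl(t_{n+1/2}\bigr) \;\approx\; \tfrac{3}{2}\,u^{n} - \tfrac{1}{2}\,u^{n-1},
```

which makes the convective term linear in the unknown u^{n+1} while retaining second-order accuracy in time.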

Journal ArticleDOI
TL;DR: It is proved that, roughly speaking, if the function does not belong to Vh, the upper-bound error estimate is also sharp, and this result is further extended to various situations including general shape regular grids and many different types of finite element spaces.
Abstract: Assume that Vh is a space of piecewise polynomials of degree less than r ≥ 1 on a family of quasi-uniform triangulations of size h. There is a well-known upper bound on the error of approximation by Vh for sufficiently smooth functions. In this paper, we prove that, roughly speaking, if the function does not belong to Vh, the upper-bound error estimate is also sharp. This result is further extended to various situations, including general shape-regular grids and many different types of finite element spaces. As an application, the sharpness of finite element approximation of elliptic problems and of the corresponding eigenvalue problems is established.

Journal ArticleDOI
TL;DR: An a posteriori error estimate is derived for the numerical approximation of the solution of a system modeling the flow of two incompressible and immiscible fluids in a porous medium using a large class of conforming, vertex-centered finite volume-type discretizations with fully implicit time stepping.
Abstract: In this paper we derive an a posteriori error estimate for the numerical approximation of the solution of a system modeling the flow of two incompressible and immiscible fluids in a porous medium. We take into account the capillary pressure, which leads to a coupled system of two equations: parabolic and elliptic. The parabolic equation may become degenerate, i.e., the nonlinear diffusion coefficient may vanish over regions that are not known a priori. We first show that, under appropriate assumptions, the energy-type norm differences between the exact and the approximate nonwetting phase saturations, the global pressures, and the Kirchhoff transforms of the nonwetting phase saturations can be bounded by the dual norm of the residuals. We then bound the dual norm of the residuals by fully computable a posteriori estimators. Our analysis covers a large class of conforming, vertex-centered finite volume-type discretizations with fully implicit time stepping. As an example, we focus here on two approaches: a ``mathematical'' scheme derived from the weak formulation, and a phase-by-phase upstream weighting ``engineering'' scheme. Finally, we show how the different error components, namely the space discretization error, the time discretization error, the linearization error, the algebraic solver error, and the quadrature error can be distinguished and used for making the calculations efficient.

Journal ArticleDOI
TL;DR: A multiresolution-based adaptation concept is proposed that aims at accelerating Discontinuous Galerkin schemes applied to nonlinear hyperbolic conservation laws by using a hierarchy of nested grids for the data given on a uniformly refined mesh.
Abstract: A multiresolution-based adaptation concept is proposed that aims at accelerating Discontinuous Galerkin schemes applied to nonlinear hyperbolic conservation laws. In contrast to standard adaptation concepts, no error estimates are needed to tag mesh elements for refinement. Instead, a multiresolution analysis is performed on a hierarchy of nested grids for the data given on a uniformly refined mesh. This provides difference information between successive refinement levels that may become negligibly small in regions where the solution is locally smooth. By hard thresholding, the data are highly compressed and local grid adaptation is triggered by the remaining significant coefficients. A central mathematical problem addressed in this work is to show that the adaptive solution is of the same accuracy as the reference solution on a uniformly refined mesh. Numerical comparisons demonstrate the efficiency of the concept and provide reliable estimates of the actual error in the numerical solution.
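A one-dimensional toy version of the idea can be sketched with Haar multiresolution: differences between successive refinement levels vanish where the data are smooth, and hard thresholding of these details tags the few cells that need refinement. (This is an illustration of the general mechanism, not the scheme of the paper.)

```python
def haar_analysis(fine):
    """One level of Haar multiresolution analysis: cell averages on the
    coarser grid plus detail coefficients (inter-level differences)."""
    coarse = [(fine[2 * i] + fine[2 * i + 1]) / 2 for i in range(len(fine) // 2)]
    detail = [(fine[2 * i] - fine[2 * i + 1]) / 2 for i in range(len(fine) // 2)]
    return coarse, detail

def cells_to_refine(fine, eps):
    """Hard thresholding: discard details below eps and let the surviving
    (significant) coefficients trigger local grid adaptation."""
    _, detail = haar_analysis(fine)
    return [i for i, d in enumerate(detail) if abs(d) > eps]
```

On the data [1, 1, 1, 1, 0, 4, 1, 1] only the third coarse cell carries a significant detail, so cells_to_refine(data, 0.5) returns [2]: the grid stays coarse where the data are locally smooth and is refined only near the jump.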

Journal ArticleDOI
TL;DR: It is proved that exponential convergence holds iff B := ∑_{i=1}^∞ 1/b_i < ∞, independently of a and ω, that the largest p of exponential convergence is 1/B, and that the largest (or the supremum of) p for exponential convergence with strong polynomial tractability belongs to [1/(2B), 1/B].
Abstract: We study multivariate integration for a weighted Korobov space of periodic infinitely many times differentiable functions for which the Fourier coefficients decay exponentially fast. The weights are defined in terms of two non-decreasing sequences a = {a_i} and b = {b_i} of numbers no less than one and a parameter ω ∈ (0, 1). Let e(n, s) be the minimal worst-case error of all algorithms that use n function values in the s-variate case. We would like to check conditions on a, b and ω such that e(n, s) decays exponentially fast, i.e., for some q ∈ (0, 1) and p > 0 we have e(n, s) = O(q^(n^p)) as n goes to infinity. The factor in the O notation may depend on s in an arbitrary way. We prove that exponential convergence holds iff B := ∑_{i=1}^∞ 1/b_i < ∞, independently of a and ω. Furthermore, the largest p of exponential convergence is 1/B. We also study exponential convergence with weak, polynomial and strong polynomial tractability. This means that e(n, s) ≤ C(s) q^(n^p) for all n and s, with log C(s) = exp(o(s)) for weak tractability, with a polynomial bound on log C(s) for polynomial tractability, and with uniformly bounded C(s) for strong polynomial tractability. We prove that the notions of weak, polynomial and strong polynomial tractability are equivalent, and hold iff B < ∞ and the a_i are exponentially growing with i. We also prove that the largest (or the supremum of) p for exponential convergence with strong polynomial tractability belongs to [1/(2B), 1/B].
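In display form, the conditions above read (restated from the abstract):

```latex
e(n,s) \;=\; O\!\bigl(q^{\,n^{p}}\bigr), \qquad q \in (0,1),\; p > 0,
\qquad\text{and}\qquad
B \;:=\; \sum_{i=1}^{\infty} \frac{1}{b_i},
```

with exponential convergence holding iff B < ∞, and with the best possible rate exponent p equal to 1/B.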

Journal ArticleDOI
TL;DR: A projection-based analysis of the h-version of the hybridizable discontinuous Galerkin methods for convection-diffusion equations on semimatching nonconforming meshes made of simplexes; the degrees of the piecewise polynomials are allowed to vary from element to element.
Abstract: In this paper, we provide a projection-based analysis of the h-version of the hybridizable discontinuous Galerkin methods for convection-diffusion equations on semimatching nonconforming meshes made of simplexes; the degrees of the piecewise polynomials are allowed to vary from element to element. We show that, for approximations of degree k on all elements, the order of convergence of the error in the diffusive flux is k + 1, and that of a projection of the error in the scalar unknown is 1 for k = 0 and k + 2 for k > 0. We also show that, for the variable-degree case, the projection of the error in the scalar variable is h times the projection of the error in the vector variable, provided a simple condition is satisfied for the choice of the degree of the approximation on the elements with hanging nodes. These results hold for any (bounded) irregularity index of the nonconformity of the mesh. Moreover, our analysis can be extended to hypercubes.

Journal ArticleDOI
TL;DR: The first a priori error analysis of the hybridizable discontinuous Galerkin methods for the acoustic equation in the time-continuous case is presented, showing that the velocity and the gradient converge with the optimal order of k + 1 in the L2-norm uniformly in time whenever polynomials of degree k ≥ 0 are used.
Abstract: We present the first a priori error analysis of the hybridizable discontinuous Galerkin methods for the acoustic equation in the time-continuous case. We show that the velocity and the gradient converge with the optimal order of k + 1 in the L2-norm uniformly in time whenever polynomials of degree k ≥ 0 are used. Finally, we show how to take advantage of a local postprocessing to obtain an approximation to the original scalar unknown that also converges with order k + 2 for k ≥ 1. This puts on firm mathematical ground the numerical results obtained in J. Comput. Phys. 230 (2011), 3695–3718.

Journal ArticleDOI
TL;DR: In this article, a fully discrete analysis of the finite element heterogeneous multiscale method for a class of nonlinear elliptic homogenization problems of nonmonotone type is proposed.
Abstract: A fully discrete analysis of the finite element heterogeneous multiscale method for a class of nonlinear elliptic homogenization problems of nonmonotone type is proposed. In contrast to previous results obtained for such problems in dimension $d\leq2$ for the $H^1$ norm and for a semi-discrete formulation [W.E, P. Ming and P. Zhang, J. Amer. Math. Soc. 18 (2005), no. 1, 121–156], we obtain optimal convergence results for dimension $d\leq3$ and for a fully discrete method, which takes into account the microscale discretization. In addition, our results are also valid for quadrilateral finite elements, optimal a-priori error estimates are obtained for the $H^1$ and $L^2$ norms, improved estimates are obtained for the resonance error and the Newton method used to compute the solution is shown to converge. Numerical experiments confirm the theoretical convergence rates and illustrate the behavior of the numerical method for various nonlinear problems.

Journal ArticleDOI
TL;DR: The error estimator obtained for DG methods is comparable with the estimator for the conforming Galerkin (CG) finite element method and is shown for non-over-penalized DG methods that the discrete Lagrange multiplier is uniformly stable on non-uniform meshes.
Abstract: In this article, we derive an a posteriori error estimator for various discontinuous Galerkin (DG) methods that are proposed in (Wang, Han and Cheng, SIAM J. Numer. Anal., 48: 708-733, 2010) for an elliptic obstacle problem. Using a key property of DG methods, we perform the analysis in a general framework. The error estimator we have obtained for DG methods is comparable with the estimator for the conforming Galerkin (CG) finite element method. In the analysis, we construct a non-linear smoothing function mapping DG finite element space to CG finite element space and use it as a key tool. The error estimator consists of a discrete Lagrange multiplier associated with the obstacle constraint. It is shown for non-over-penalized DG methods that the discrete Lagrange multiplier is uniformly stable on non-uniform meshes. Finally, numerical results demonstrating the performance of the error estimator are presented.

Journal ArticleDOI
TL;DR: This work analyzes the space of bivariate functions that are piecewise polynomial of bi-degree <= (m, m') and of smoothness r along the interior edges of a planar T-mesh and gives new combinatorial lower and upper bounds for the dimension of this space by exploiting homological techniques.
Abstract: We analyze the space of bivariate functions that are piecewise polynomial of bi-degree <= (m, m') and of smoothness r along the interior edges of a planar T-mesh. We give new combinatorial lower and upper bounds for the dimension of this space by exploiting homological techniques. We relate this dimension to the weight of the maximal interior segments of the T-mesh, defined for an ordering of these maximal interior segments. We show that the lower and upper bounds coincide for high enough degrees or for hierarchical T-meshes that are sufficiently regular. We give a rule of subdivision to construct hierarchical T-meshes for which these lower and upper bounds coincide. Finally, we illustrate these results by analyzing spline spaces of small degree and smoothness.

Journal ArticleDOI
TL;DR: The first a priori error analysis of a technique that allows us to numerically solve steady-state diffusion problems defined on curved domains Ω by using finite element methods defined in polyhedral subdomains Dh ⊂ Ω proves the order of convergence in the L2-norm of the approximate flux and scalar unknowns is optimal.
Abstract: We present the first a priori error analysis of a technique that allows us to numerically solve steady-state diffusion problems defined on curved domains Ω by using finite element methods defined in polyhedral subdomains Dh ⊂ Ω. For a wide variety of hybridizable discontinuous Galerkin and mixed methods, we prove that the order of convergence in the L2-norm of the approximate flux and scalar unknowns is optimal as long as the distance between the boundary of the original domain Γ and that of the computational domain Γh is of order h. We also prove that the L2-norm of a projection of the error of the scalar variable superconverges with a full additional order when the distance between Γ and Γh is of order h^(5/4) but with only half an additional order when such a distance is of order h. Finally, we present numerical experiments confirming the theoretical results and showing that even when the distance between Γ and Γh is of order h, the above-mentioned projection of the error of the scalar variable can still superconverge with a full additional order.

Journal ArticleDOI
TL;DR: A shift theorem for the k-point correlation equation in anisotropic smoothness scales is proved and it is deduced that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem.
Abstract: We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations J(α, u) = 0 for random input α(ω) with almost sure realizations in a neighborhood of a nominal input parameter α_0. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution, u(ω) = S(α(ω)). We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations u(ω) − S(α_0). We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.

Journal ArticleDOI
TL;DR: An adaptive solver for random elliptic boundary value problems, using techniques from adaptive wavelet methods, which are set not in the continuous framework of a boundary value problem, but rather on the level of coefficients with respect to a hierarchic Riesz basis, such as a wavelet basis.
Abstract: We derive an adaptive solver for random elliptic boundary value problems, using techniques from adaptive wavelet methods. Substituting wavelets by polynomials of the random parameters leads to a modular solver for the parameter dependence of the random solution, which combines with any discretization on the spatial domain. In addition to selecting active polynomial modes, this solver can adaptively construct a separate spatial discretization for each of their coefficients. We show convergence of the solver in this general setting, along with a computable bound for the mean square error, and an optimality property in the case of a single spatial discretization. Numerical computations demonstrate convergence of the solver and compare it to a sparse tensor product construction. Introduction Stochastic Galerkin methods have emerged in the past decade as an efficient solution procedure for boundary value problems depending on random data; see [14, 32, 2, 30, 23, 18, 31, 28, 6, 5]. These methods approximate the random solution by a Galerkin projection onto a finite-dimensional space of random fields. This requires the solution of a single coupled system of deterministic equations for the coefficients of the Galerkin projection with respect to a predefined set of basis functions on the parameter domain. A major remaining obstacle is the construction of suitable spaces in which to compute approximate solutions. These should be adapted to the stochastic structure of the equation. Simple tensor product constructions are infeasible due to the high dimensionality of the parameter domain in the case of input random fields with low regularity. Parallel to but independently from the development of stochastic Galerkin methods, a new class of adaptive methods has emerged, which are set not in the continuous framework of a boundary value problem, but rather on the level of coefficients with respect to a hierarchic Riesz basis, such as a wavelet basis. 
Due to the norm equivalences constitutive of Riesz bases, errors and residuals in appropriate sequence spaces are equivalent to those in physically meaningful function spaces. This permits adaptive wavelet methods to be applied directly to a large class of equations, provided that a suitable Riesz basis is available.

Book ChapterDOI
TL;DR: This chapter presents the construction of new generalized tensor wavelet bases and discusses numerical experiments carried out in the context of the DFG-SPP project “Adaptive Wavelet Frame Methods for Operator Equations: Sparse Grids, Vector-Valued Spaces and Applications to Nonlinear Inverse Problems”.
Abstract: In this chapter, we present some of the major results that have been achieved in the context of the DFG-SPP project “Adaptive Wavelet Frame Methods for Operator Equations: Sparse Grids, Vector-Valued Spaces and Applications to Nonlinear Inverse Problems”. This project has been concerned with (nonlinear) elliptic and parabolic operator equations on nontrivial domains as well as with related inverse parameter identification problems. One crucial step has been the design of an efficient forward solver. We employed a spatially adaptive wavelet Rothe scheme. The resulting elliptic subproblems have been solved by adaptive wavelet Galerkin schemes based on generalized tensor wavelets that realize dimension-independent approximation rates. In this chapter, we present the construction of these new tensor bases and discuss some numerical experiments.
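The dimension-independence of sparse tensor constructions comes from restricting the tensor-product index set. A minimal, generic illustration (not the chapter's wavelet construction): compare the sparse set of level indices l with l1 + ... + ld <= L against the full tensor product, which takes every index up to L in each coordinate.

```python
from itertools import product

def sparse_levels(dim, L):
    """Level indices of a sparse tensor construction: all nonnegative
    integer tuples l with l_1 + ... + l_dim <= L (a simplex), versus
    the full tensor product, which takes every tuple with max(l) <= L."""
    return [l for l in product(range(L + 1), repeat=dim) if sum(l) <= L]

# For dim = 3 and L = 4 the sparse set has C(4+3, 3) = 35 indices,
# while the full tensor product has (4+1)^3 = 125.
```

The sparse set grows like a polynomial of degree dim in L instead of (L+1)^dim, which is the combinatorial mechanism behind dimension-independent approximation rates.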

Journal ArticleDOI
TL;DR: An accelerated version of the modified Levenberg-Marquardt method for nonlinear equations is proposed; it introduces a line search for the approximate LM step, extends the LM parameter to more general cases, and is compared with both the LM method and the modified LM method.
Abstract: In this paper we propose an accelerated version of the modified Levenberg-Marquardt method for nonlinear equations (see Jinyan Fan, Mathematics of Computation 81 (2012), no. 277, 447–466). The original version uses the sum of the LM step and the approximate LM step as the trial step at every iteration, and achieves cubic convergence under the local error bound condition, which is weaker than nonsingularity. The notable differences of the accelerated modified LM method from the modified LM method are that we introduce a line search for the approximate LM step and extend the LM parameter to more general cases. The convergence order of the new method is shown to be a continuous function of the LM parameter. We compare it with both the LM method and the modified LM method; on the benchmark problems we observe competitive performance.
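The structure described above is easy to sketch: one LM step, then an approximate LM step reusing the same matrix with a new right-hand side, with a simple backtracking line search on the second step. The parameter choices (mu, delta, the backtracking rule) below are illustrative, not Fan's exact algorithm.

```python
import numpy as np

def accelerated_lm(F, J, x0, mu=1e-3, delta=2.0, tol=1e-10, max_iter=100):
    """Sketch of a modified LM iteration for F(x) = 0: an LM step plus an
    approximate LM step (same matrix, new right-hand side), combined with
    a plain backtracking line search on the approximate step."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        lam = mu * np.linalg.norm(Fx) ** delta        # LM regularization
        A = Jx.T @ Jx + lam * np.eye(len(x))
        d1 = np.linalg.solve(A, -Jx.T @ Fx)           # LM step
        d2 = np.linalg.solve(A, -Jx.T @ F(x + d1))    # approximate LM step
        t = 1.0                                       # backtrack on d2 only
        while np.linalg.norm(F(x + d1 + t * d2)) > np.linalg.norm(F(x + d1)) and t > 1e-8:
            t *= 0.5
        x = x + d1 + t * d2
    return x
```

On a well-behaved system such as F(x) = x^2 - c the iteration behaves like a damped Newton method with a cheap extra correction per iteration.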

Journal ArticleDOI
TL;DR: A Finite Element Method (FEM) that can adopt very general meshes with polygonal elements is developed for the numerical approximation of elliptic obstacle problems, and a first-order convergence estimate in a suitable (mesh-dependent) energy norm is established.
Abstract: We develop a Finite Element Method (FEM) which can adopt very general meshes with polygonal elements for the numerical approximation of elliptic obstacle problems. These kinds of methods are also known as mimetic discretization schemes, which stem from the Mimetic Finite Difference (MFD) method. The first-order convergence estimate in a suitable (mesh-dependent) energy norm is established. Numerical experiments confirming the theoretical results are also presented.
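The variational-inequality structure of obstacle problems can be illustrated with a toy 1D example: linear finite elements on a uniform mesh, solved by projected Gauss-Seidel. This is only a sketch of the constrained problem, not the polygonal/mimetic scheme of the paper.

```python
import numpy as np

def obstacle_1d(n=40, f=-8.0, psi=-0.2, tol=1e-10, max_sweeps=20000):
    """Solve -u'' = f on (0,1), u(0) = u(1) = 0, subject to u >= psi,
    with linear finite elements and projected Gauss-Seidel. Without the
    obstacle the solution is 4x(x-1), dipping to -1, so the constant
    obstacle psi = -0.2 is active in the middle of the interval."""
    h = 1.0 / n
    u = np.zeros(n + 1)
    for _ in range(max_sweeps):
        change = 0.0
        for i in range(1, n):
            # unconstrained Gauss-Seidel update, then project onto u >= psi
            new = max(psi, 0.5 * (f * h * h + u[i - 1] + u[i + 1]))
            change = max(change, abs(new - u[i]))
            u[i] = new
        if change < tol:
            break
    return np.linspace(0.0, 1.0, n + 1), u
```

The pointwise projection is what makes this a variational inequality rather than a linear system: in the contact region the discrete solution sits exactly on the obstacle.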

Journal ArticleDOI
TL;DR: An algorithm for computing the genus-two class polynomials of a primitive quartic CM-field K is given, together with a running time bound and a proof of correctness.
Abstract: We give an algorithm that computes the genus-two class polynomials of a primitive quartic CM-field K, and we give a running time bound and a proof of correctness of this algorithm. This is the first proof of correctness and the first running time bound of any algorithm that computes these polynomials. Our algorithm is based on the complex analytic method of Spallek and van Wamelen and runs in time $\widetilde{O}(\Delta^{7/2})$, where $\Delta$ is the discriminant of K.
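The genus-one analogue of this complex analytic method is easy to sketch: evaluate a modular invariant numerically at a CM point and recognize the class polynomial from the value. Below, the Klein j-invariant via truncated q-expansions of the Eisenstein series E4 and E6; for the CM point tau = i (discriminant -4) the Hilbert class polynomial is x - 1728. This is purely illustrative and far simpler than the genus-two Igusa-invariant computation of the paper.

```python
import cmath

def eisenstein_j(tau, terms=30):
    """Klein j-invariant from truncated q-expansions of E4 and E6.
    Converges rapidly for tau well inside the upper half-plane."""
    q = cmath.exp(2j * cmath.pi * tau)
    sigma = lambda k, n: sum(d ** k for d in range(1, n + 1) if n % d == 0)
    E4 = 1 + 240 * sum(sigma(3, n) * q ** n for n in range(1, terms))
    E6 = 1 - 504 * sum(sigma(5, n) * q ** n for n in range(1, terms))
    return 1728 * E4 ** 3 / (E4 ** 3 - E6 ** 2)
```

Evaluating at tau = i and rounding to the nearest integer recovers 1728; the genus-two case replaces j by Igusa invariants evaluated via theta constants, and the precision analysis is what drives the running time bound.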

Journal ArticleDOI
TL;DR: It is shown that every odd number greater than 1 can be expressed as the sum of at most five primes, improving the result of Ramaré.
Abstract: We prove that every odd number $N$ greater than 1 can be expressed as the sum of at most five primes, improving the result of Ramaré that every even natural number can be expressed as the sum of at most six primes. We follow the circle method of Hardy-Littlewood and Vinogradov, together with Vaughan's identity; our additional techniques, which may be of interest for other Goldbach-type problems, include the use of smoothed exponential sums and optimisation of the Vaughan identity parameters to save or reduce some logarithmic losses, the use of multiple scales following some ideas of Bourgain, and the use of Montgomery's uncertainty principle and the large sieve to improve the $L^2$ estimates on major arcs. Our argument relies on some previous numerical work, namely Richstein's verification of the even Goldbach conjecture up to $4 \times 10^{14}$, and van de Lune's and (independently) Wedeniwski's verification of the Riemann hypothesis up to height $3.29 \times 10^9$.
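The statement is easy to test numerically on a small range. The finite check below exhibits a representation of each odd number as a sum of three primes, which in particular gives at most five; it has nothing to do with the circle-method proof, which handles all large numbers at once.

```python
def primes_upto(n):
    """Sieve of Eratosthenes: return the list of primes up to n and a
    byte lookup table for primality tests."""
    is_p = bytearray([1]) * (n + 1)
    is_p[0] = is_p[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if is_p[i]], is_p

def three_prime_rep(N, primes, is_p):
    """Return primes (p, q, r) with p + q + r = N and q <= r, or None."""
    for p in primes:
        if p + 4 > N:           # need q + r >= 4 for the remaining part
            break
        for q in primes:
            r = N - p - q
            if r < q:
                break
            if is_p[r]:
                return (p, q, r)
    return None
```

For example, three_prime_rep finds 7 = 2 + 2 + 3 and 27 = 2 + 2 + 23; checking all odd numbers up to a few thousand takes a fraction of a second because Goldbach-type representations are plentiful.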

Journal ArticleDOI
TL;DR: This article provides an almost characterization of the approximation classes appearing when using adaptive finite elements of Lagrange type of any fixed polynomial degree in terms of Besov regularity.
Abstract: We provide an almost characterization of the approximation classes appearing when using adaptive finite elements of Lagrange type of any fixed polynomial degree. The characterization is stated in terms of Besov regularity, and requires approximation within spaces with integrability indices below one. This article generalizes to higher-order finite elements the results presented for linear finite elements by Binev et al. [BDDP 2002].