
Showing papers in "Mathematics of Computation in 2014"


Journal ArticleDOI
TL;DR: In this article, a weak Galerkin (WG) method is introduced and analyzed for the second order elliptic equation formulated as a system of two first order linear equations.
Abstract: A new weak Galerkin (WG) method is introduced and analyzed for the second order elliptic equation formulated as a system of two first order linear equations. This method, called WG-MFEM, is designed by using discontinuous piecewise polynomials on finite element partitions with arbitrary shape of polygons/polyhedra. The WG-MFEM is capable of providing very accurate numerical approximations for both the primary and flux variables. Allowing the use of discontinuous approximating functions on arbitrary shape of polygons/polyhedra makes the method highly flexible in practical computation. Optimal order error estimates in both discrete $H^1$ and $L^2$ norms are established for the corresponding weak Galerkin mixed finite element solutions.

440 citations


Journal ArticleDOI
TL;DR: In this paper, a local generalized finite element basis for elliptic problems with heterogeneous and highly varying coefficients was constructed, which does not rely on regularity of the solution or scale separation in the coefficient.
Abstract: This paper constructs a local generalized finite element basis for elliptic problems with heterogeneous and highly varying coefficients. The basis functions are solutions of local problems on vertex patches. The error of the corresponding generalized finite element method decays exponentially with respect to the number of layers of elements in the patches. Hence, on a uniform mesh of size $H$, patches of diameter $H\log(1/H)$ are sufficient to preserve a linear rate of convergence in $H$ without pre-asymptotic or resonance effects. The analysis does not rely on regularity of the solution or scale separation in the coefficient. This result motivates new and justifies old classes of variational multiscale methods.

424 citations


Journal ArticleDOI
TL;DR: Two parallel, non-iterative, multi-physics, domain decomposition methods are proposed to solve a coupled time-dependent Stokes-Darcy system with the Beavers-Joseph-Saffman-Jones interface condition; the unconditional stability and convergence of the first method is proved and illustrated through numerical experiments.
Abstract: Two parallel, non-iterative, multi-physics, domain decomposition methods are proposed to solve a coupled time-dependent Stokes-Darcy system with the Beavers-Joseph-Saffman-Jones interface condition. For both methods, spatial discretization is effected using finite element methods. The backward Euler method and a three-step backward differentiation method are used for the temporal discretization. Results obtained at previous time steps are used to approximate the coupling information on the interface between the Darcy and Stokes subdomains at the current time step. Hence, at each time step, only a single Stokes and a single Darcy problem need be solved; as these are uncoupled, they can be solved in parallel. The unconditional stability and convergence of the first method is proved and also illustrated through numerical experiments. The improved temporal convergence and unconditional stability of the second method is also illustrated through numerical experiments.

136 citations


Journal ArticleDOI
TL;DR: This paper introduces fully computable two-sided bounds on the eigenvalues of the Laplace operator on arbitrarily coarse meshes based on some approximation of the corresponding eigenfunction in the nonconforming Crouzeix-Raviart finite element space plus some postprocessing.
Abstract: This paper introduces fully computable two-sided bounds on the eigenvalues of the Laplace operator on arbitrarily coarse meshes based on some approximation of the corresponding eigenfunction in the nonconforming Crouzeix-Raviart finite element space plus some postprocessing. The efficiency of the guaranteed error bounds involves the global mesh-size and is proven for the large class of graded meshes. Numerical examples demonstrate the reliability of the guaranteed error control even with an inexact solve of the algebraic eigenvalue problem. This motivates an adaptive algorithm which monitors the discretisation error, the maximal mesh-size, and the algebraic eigenvalue error. The accuracy of the guaranteed eigenvalue bounds is surprisingly high with efficiency indices as small as 1.4.

108 citations


Journal ArticleDOI
TL;DR: It is proved that point sets that maximize the sum of suitable powers of the Euclidean distance between pairs of points form a sequence of QMC designs for $H^s(S^d)$ with $s \in (d/2, d/2+1)$.
Abstract: We study equal weight numerical integration, or Quasi Monte Carlo (QMC) rules, for functions in a Sobolev space $H^s(S^d)$ with smoothness parameter $s > d/2$ defined over the unit sphere $S^d$ in $\mathbb{R}^{d+1}$. Focusing on N-point configurations that achieve optimal order QMC error bounds (as is the case for efficient spherical designs), we are led to introduce the concept of QMC designs: these are sequences of N-point configurations $X_N$ on $S^d$ such that the worst-case error satisfies
$$\sup_{f \in H^s(S^d),\ \|f\|_{H^s} \le 1} \left| \frac{1}{N} \sum_{x \in X_N} f(x) - \int_{S^d} f(x)\, d\sigma_d(x) \right| = O\big(N^{-s/d}\big), \qquad N \to \infty,$$
with an implied constant that depends on the $H^s(S^d)$-norm but is independent of $N$. Here $\sigma_d$ is the normalized surface measure on $S^d$. We provide methods for generation and numerical testing of QMC designs. An essential tool is an expression for the worst-case error in terms of a reproducing kernel for the space $H^s(S^d)$ with $s > d/2$. As a consequence of this and a recent result of Bondarenko et al. on the existence of spherical designs with an appropriate number of points, we show that minimizers of the N-point energy for this kernel form a sequence of QMC designs for $H^s(S^d)$. Furthermore, without appealing to the Bondarenko et al. result, we prove that point sets that maximize the sum of suitable powers of the Euclidean distance between pairs of points form a sequence of QMC designs for $H^s(S^d)$ with $s$ in the interval $(d/2, d/2+1)$. For such spaces there exist reproducing kernels with simple closed forms that are useful for numerical testing of optimal order Quasi Monte Carlo integration. Numerical experiments suggest that many familiar sequences of point sets on the sphere (equal area points, spiral points, minimal [Coulomb or logarithmic] energy points, and Fekete points) are QMC designs for appropriate values of $s$. For comparison purposes we show that configurations of random points that are independently and uniformly distributed on the sphere do not constitute QMC designs for any $s > d/2$.
If $(X_N)$ is a sequence of QMC designs for $H^s(S^d)$, we prove that it is also a sequence of QMC designs for $H^{s'}(S^d)$ for all $s' \in (d/2, s)$. This leads to the question of determining the supremum of such $s$ (here called the QMC strength of the sequence), for which we provide estimates based on computations for the aforementioned sequences.
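As a quick illustrative sketch (not taken from the paper), an equal-weight rule $Q(f) = \frac{1}{N}\sum_{x \in X_N} f(x)$ can be tested numerically on $S^2$ with uniformly random points, precisely the configuration the abstract shows is not a QMC design; the integrand and point count below are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000

# Uniform random points on S^2: normalize standard Gaussian vectors.
X = rng.standard_normal((N, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Test integrand f(x) = x_3^2; against the normalized surface
# measure sigma_2, its exact integral over S^2 is 1/3.
f = X[:, 2] ** 2

# Equal-weight rule Q(f) = (1/N) * sum f(x_i).
Q = f.mean()
print(abs(Q - 1.0 / 3.0))  # quadrature error of this point set
```

Random points only achieve the Monte Carlo rate $N^{-1/2}$; replacing them with spherical designs or minimal-energy configurations is exactly what the paper's notion of a QMC design formalizes.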

106 citations


Journal ArticleDOI
TL;DR: A new type of multi-level correction scheme based on finite element discretization is proposed to solve eigenvalue problems; the scheme improves the accuracy of the eigenpair approximations after each correction step.
Abstract: In this paper, a new type of multi-level correction scheme is proposed for solving eigenvalue problems by the finite element method. With this new scheme, the accuracy of eigenpair approximations can be improved after each correction step, which only requires solving a source problem on a finer finite element space and an eigenvalue problem on the coarsest finite element space. This correction scheme can improve the efficiency of solving eigenvalue problems by the finite element method. Keywords: eigenvalue problem, multi-level correction scheme, finite element method, multi-space, multi-grid. AMS subject classifications: 65N30, 65B99, 65N25, 65L15. The purpose of this paper is to propose a new type of multi-level correction scheme based on finite element discretization to solve eigenvalue problems. The two-grid method for solving eigenvalue problems was proposed and analyzed by Xu and Zhou in [21]; the idea of the two-grid method comes from [19, 20] for nonsymmetric or indefinite problems and nonlinear elliptic equations. Since then, many numerical methods for solving eigenvalue problems based on the two-grid idea have appeared ([1, 6, 17]).

97 citations


Journal ArticleDOI
TL;DR: By sending the normal stabilization function to infinity in the hybridizable discontinuous Galerkin methods, a new class of divergence-conforming methods is obtained which maintains the convergence properties of the original methods for Stokes flows.
Abstract: In this paper, we show that by sending the normal stabilization function to infinity in the hybridizable discontinuous Galerkin methods previously proposed in [Comput. Methods Appl. Mech. Engrg. 199 (2010), 582–597], for Stokes flows, a new class of divergence-conforming methods is obtained which maintains the convergence properties of the original methods. Thus, all the components of the approximate solution, which use polynomial spaces of degree k, converge with the optimal order of k + 1 in $L^2$ for any k ≥ 0. Moreover, the postprocessed velocity approximation is also divergence-conforming, exactly divergence-free, and converges with order k + 2 for k ≥ 1 and with order 1 for k = 0. The novelty of the analysis is that it proceeds by taking the limit as the normal stabilization goes to infinity in the error estimates recently obtained in [Math. Comp. 80 (2011), 723–760].

93 citations


Journal ArticleDOI
TL;DR: A general Nitsche method, which encompasses symmetric and non-symmetric variants, is proposed for frictionless unilateral contact problems in elasticity and the optimal convergence of the method is established.
Abstract: A general Nitsche method, which encompasses symmetric and non-symmetric variants, is proposed for frictionless unilateral contact problems in elasticity. Optimal convergence of the method is established for both two- and three-dimensional problems and for affine and quadratic Lagrange finite element methods. Two- and three-dimensional numerical experiments illustrate the theory.

86 citations


Journal ArticleDOI
TL;DR: A discrete functional space on general polygonal or polyhedral meshes is introduced which mimics two important properties of the standard Crouzeix-Raviart space, namely the continuity of mean values at interfaces and the existence of an interpolator which preserves the mean value of the gradient inside each element.
Abstract: In this work we introduce a discrete functional space on general polygonal or polyhedral meshes which mimics two important properties of the standard Crouzeix-Raviart space, namely the continuity of mean values at interfaces and the existence of an interpolator which preserves the mean value of the gradient inside each element. The construction borrows ideas from both Cell Centered Galerkin and Hybrid Finite Volume methods. The discrete function space is defined from cell and face unknowns by introducing a suitable piecewise affine reconstruction on a (fictitious) pyramidal subdivision of the original mesh. Two applications are considered in which the discrete space plays an important role, namely (i) the design of a locking-free primal (as opposed to mixed) method for quasi-incompressible planar elasticity on general polygonal meshes; (ii) the design of an inf-sup stable method for the Stokes equations on general polygonal or polyhedral meshes. In this context, we also propose a general modification, applicable to any suitable discretization, which guarantees that the velocity approximation is unaffected by the presence of large irrotational body forces provided a Helmholtz decomposition of the right-hand side is available. The relation between the proposed methods and classical finite volume and finite element schemes on standard meshes is investigated. Finally, similar ideas are exploited to mimic key properties of the lowest-order Raviart-Thomas space on general polygonal or polyhedral meshes.

74 citations


Journal ArticleDOI
TL;DR: A finite element construction for use on the class of convex, planar polygons is introduced, and a quadratic error convergence estimate is obtained.
Abstract: We introduce a finite element construction for use on the class of convex, planar polygons and show that it obtains a quadratic error convergence estimate. On a convex n-gon satisfying simple geometric criteria, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n+1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called 'serendipity' elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of generic convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed.

72 citations


Journal ArticleDOI
TL;DR: These projections have the properties that they commute with the exterior derivative and are bounded in the HΛk(Ω) norm independently of the mesh size h, and are locally defined in the sense that they are defined by local operators on overlapping macroelements, in the spirit of the Clément interpolant.
Abstract: We construct projections from HΛk(Ω), the space of differential k-forms on Ω which belong to L2(Ω) and whose exterior derivative also belongs to L2(Ω), to finite dimensional subspaces of HΛk(Ω) consisting of piecewise polynomial differential forms defined on a simplicial mesh of Ω. Thus, their definition requires less smoothness than assumed for the definition of the canonical interpolants based on the degrees of freedom. Moreover, these projections have the properties that they commute with the exterior derivative and are bounded in the HΛk(Ω) norm independently of the mesh size h. Unlike some other recent work in this direction, the projections are also locally defined in the sense that they are defined by local operators on overlapping macroelements, in the spirit of the Clément interpolant. A double complex structure is introduced as a key tool to carry out the construction.

Journal ArticleDOI
TL;DR: The Paramodular Conjecture, supported by these computations and consistent with the Langlands philosophy and the work of H. Yoshida, is a partial extension to degree 2 of the Shimura-Taniyama Conjecture.
Abstract: We classify Siegel modular cusp forms of weight two for the paramodular group K(p) for primes p < 600. We find that weight two Hecke eigenforms beyond the Gritsenko lifts correspond to certain abelian varieties defined over the rationals of conductor p. The arithmetic classification is in a companion article by A. Brumer and K. Kramer. The Paramodular Conjecture, supported by these computations and consistent with the Langlands philosophy and the work of H. Yoshida, is a partial extension to degree 2 of the Shimura-Taniyama Conjecture. These nonlift Hecke eigenforms share Euler factors with the corresponding abelian variety $A$ and satisfy congruences modulo $\ell$ with Gritsenko lifts whenever $A$ has rational $\ell$-torsion.

Journal ArticleDOI
TL;DR: In this paper, it was shown that barycentric weights for the roots or extrema of classical families of orthogonal polynomials are expressible explicitly in terms of the nodes and weights of the corresponding Gaussian quadrature rule.
Abstract: Barycentric interpolation is arguably the method of choice for numerical polynomial interpolation. In this paper we show that barycentric weights for the roots or extrema of classical families of orthogonal polynomials are expressible explicitly in terms of the nodes and weights of the corresponding Gaussian quadrature rule. Based on the Glaser-Liu-Rokhlin algorithm for Gaussian quadrature, this leads to an O(n) computational scheme for computing barycentric weights. For the Jacobi case, known results on the barycentric weights of the Chebyshev points are recovered as special cases and some new results, such as the barycentric weights associated with the Jacobi points and the Gauss-Jacobi-Lobatto points, are obtained. We also show that the interpolants in the roots or extrema of classical orthogonal polynomials can be computed rapidly and stably by using their barycentric representations.
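As an illustrative sketch of the barycentric representation the paper works with (not the paper's O(n) Glaser-Liu-Rokhlin-based scheme), the second barycentric formula can be evaluated with Salzer's classical weights w_j = (−1)^j, halved at the endpoints, for the Chebyshev extrema, which the paper recovers as a special case:

```python
import numpy as np

def cheb_points_weights(n):
    """Chebyshev points of the second kind on [-1, 1] and their
    classical (Salzer) barycentric weights (-1)^j, halved at the ends."""
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)
    w = (-1.0) ** j
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def bary_eval(x, w, fvals, t):
    """Second (true) barycentric formula evaluated at the points t."""
    t = np.asarray(t, dtype=float)
    d = t[:, None] - x[None, :]       # d[i, j] = t_i - x_j
    hit = np.isclose(d, 0.0)          # evaluation points that hit a node
    d[hit] = 1.0                      # avoid division by zero
    c = w / d
    p = (c @ fvals) / c.sum(axis=1)
    rows = hit.any(axis=1)            # at a node, return the node value
    p[rows] = fvals[hit.argmax(axis=1)[rows]]
    return p

x, w = cheb_points_weights(20)
fvals = np.exp(x) * np.sin(5 * x)
t = np.linspace(-1, 1, 101)
err = np.max(np.abs(bary_eval(x, w, fvals, t) - np.exp(t) * np.sin(5 * t)))
print(err)  # spectral accuracy: well below 1e-6 for this smooth function
```

The stability of this evaluation, even near the nodes, is one reason barycentric representations are the method of choice for polynomial interpolation.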

Journal ArticleDOI
TL;DR: This paper considers the convex minimization problem with linear constraints and a separable objective function which is the sum of many individual functions without coupled variables and develops an algorithm developed by splitting the augmented Lagrangian function in a parallel way.
Abstract: This paper considers the convex minimization problem with linear constraints and a separable objective function which is the sum of many individual functions without coupled variables. An algorithm is developed by splitting the augmented Lagrangian function in a parallel way. The new algorithm differs substantially from existing splitting methods of alternating style, which require solving the decomposed subproblems sequentially, while it retains their main advantage: the resulting subproblems can be simple enough to have closed-form solutions when the individual functions in the objective are simple. We show the applicability and encouraging efficiency of the new algorithm through some applications in image processing.
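A minimal two-block sketch of the parallel splitting idea on a toy quadratic program (not the paper's algorithm, which treats many blocks and general separable convex objectives): both primal updates below use only the previous iterate, so they could run simultaneously, unlike alternating-style splitting. All problem data are arbitrary illustrative choices.

```python
# Toy separable problem: minimize 1/2 (x-a)^2 + 1/2 (y-b)^2
# subject to x + y = c.  Closed form: a and b shift equally, so
# x* = a - t, y* = b - t with t = (a + b - c) / 2.
a, b, c, rho = 1.0, 3.0, 2.0, 0.5
t = (a + b - c) / 2.0
x_star, y_star = a - t, b - t

x = y = lam = 0.0
for _ in range(200):
    # Jacobi-style (parallel) minimization of the augmented Lagrangian:
    # each block update freezes the *other* block at its previous value,
    # so the two updates are independent and could run in parallel.
    x_new = (a - lam - rho * (y - c)) / (1.0 + rho)
    y_new = (b - lam - rho * (x - c)) / (1.0 + rho)
    x, y = x_new, y_new
    lam += rho * (x + y - c)   # multiplier (dual) update

print(x, y)  # -> approximately 0.0 and 2.0, the constrained minimizer
```

For this toy problem and step ρ = 0.5 the iteration contracts to the constrained minimizer; convergence guarantees for the actual multi-block scheme are what the paper's analysis provides.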


Journal ArticleDOI
TL;DR: It is shown that these problems, as well as the bounded submonoid membership problem, are P-time decidable in hyperbolic groups and various examples of finitely presented groups where the subset sum problem is NP-complete are given.
Abstract: We generalize the classical knapsack and subset sum problems to arbitrary groups and study the computational complexity of these new problems. We show that these problems, as well as the bounded submonoid membership problem, are P-time decidable in hyperbolic groups and give various examples of finitely presented groups where the subset sum problem is NP-complete.
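For orientation, the classical integer subset sum problem that the paper generalizes to groups can be decided by the textbook pseudo-polynomial dynamic program below (a sketch; NP-completeness refers to the bit-length of the input, and the group-theoretic versions replace integer sums by products of group elements):

```python
def subset_sum(nums, target):
    """Classical subset sum over the integers: is there a subset of
    nums whose elements sum to target?  Tracks the set of reachable
    partial sums; pseudo-polynomial in the magnitude of the sums."""
    reachable = {0}
    for n in nums:
        reachable |= {s + n for s in reachable}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```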


Journal ArticleDOI
TL;DR: This paper gives new explicit upper bounds for the number of zeros of Dirichlet L-functions and Dedekind zeta-functions in rectangles.
Abstract: This paper contains new explicit upper bounds for the number of zeroes of Dirichlet L-functions and Dedekind zeta-functions in rectangles.

Journal ArticleDOI
TL;DR: The first main result is optimality of an adaptive algorithm for the effective eigenvalue computation for the Laplace operator with optimal convergence rates in terms of the number of degrees of freedom relative to the concept of a nonlinear approximation class.
Abstract: The nonconforming approximation of eigenvalues is of high practical interest because it allows for guaranteed upper and lower eigenvalue bounds and for a convenient computation via a consistent diagonal mass matrix in 2D. The first main result is a comparison which states equivalence of the error of the nonconforming eigenvalue approximation with its best-approximation error and its error in a conforming computation on the same mesh. The second main result is optimality of an adaptive algorithm for the effective eigenvalue computation for the Laplace operator with optimal convergence rates in terms of the number of degrees of freedom relative to the concept of a nonlinear approximation class. The analysis includes an inexact algebraic eigenvalue computation on each level of the adaptive algorithm which requires an iterative algorithm and a controlled termination criterion. The analysis is carried out for the first eigenvalue in a Laplace eigenvalue model problem in 2D.

Journal ArticleDOI
TL;DR: In this paper, techniques from representation theory, symbolic computational algebra, and numerical algebraic geometry are used to find the minimal generators of the ideal of the trifocal variety; an effective test for determining whether a given tensor is a trifocal tensor is also given.
Abstract: Techniques from representation theory, symbolic computational algebra, and numerical algebraic geometry are used to find the minimal generators of the ideal of the trifocal variety. An effective test for determining whether a given tensor is a trifocal tensor is also given.

Journal ArticleDOI
TL;DR: Almost second order convergence is proven for the approximations of the continuous optimal control problem, based on the improved error estimates on the boundary and optimal regularity in weighted Sobolev spaces.
Abstract: This paper is concerned with the discretization of linear elliptic partial differential equations with Neumann boundary condition in polygonal domains. The focus is on the derivation of error estimates in the L2-norm on the boundary for linear finite elements. Whereas common techniques yield only suboptimal results, a new approach in this context is presented which allows for quasi-optimal ones, i.e., for domains with interior angles smaller than 2π/3 a convergence order two (up to a logarithmic factor) can be achieved using quasi-uniform meshes. In the presence of internal angles greater than 2π/3 which reduce the convergence rates on quasi-uniform meshes, graded meshes are used to maintain the quasi-optimal error bounds. This result is applied to linear-quadratic Neumann boundary control problems with pointwise inequality constraints on the control. The approximations of the control are piecewise constant. The state and the adjoint state are discretized by piecewise linear finite elements. In a postprocessing step approximations of the continuous optimal control are constructed which possess superconvergence properties. Based on the improved error estimates on the boundary and optimal regularity in weighted Sobolev spaces almost second order convergence is proven for the approximations of the continuous optimal control problem. Mesh grading techniques are again used for domains with interior angles greater than 2π/3. A certain regularity of the active set is assumed.

Journal ArticleDOI
TL;DR: It is shown that the uniform local-ellipticity ensures that the resulting FVM has a unique solution which enjoys an optimal error estimate and is derived from the characterization of the smallest eigenvalues of the matrices associated with the FVMs.
Abstract: We provide a method for the construction of higher-order finite volume methods (FVMs) for solving boundary value problems for two-dimensional elliptic equations. Specifically, when the trial space of the FVM is chosen to be a conforming triangle mesh finite element space, we describe a construction of the associated test space that guarantees the uniform local-ellipticity of the family of the resulting discrete bilinear forms. We show that the uniform local-ellipticity ensures that the resulting FVM has a unique solution which enjoys an optimal error estimate. We characterize the uniform local-ellipticity in terms of the uniform boundedness (below by a positive constant) of the smallest eigenvalues of the matrices associated with the FVMs. We then translate the characterization to equivalent requirements on the shapes of the triangle meshes for the trial spaces. Four convenient sufficient conditions for the family of the discrete bilinear forms to be uniformly local-elliptic are derived from the characterization. Following the general procedure, we construct four specific FVMs which satisfy the uniform local-ellipticity. Numerical results are presented to verify the theoretical results on the convergence order of the FVMs.

Journal ArticleDOI
TL;DR: A boundary integral solution is developed that is competitive with explicit finite difference methods, both in terms of accuracy and speed, and extends to higher spatial dimensions using an alternating direction implicit (ADI) framework.
Abstract: We present a new method for solving the wave equation implicitly. Our approach is to discretize the wave equation in time, following the method of lines transpose, sometimes referred to as the transverse method of lines, or Rothe's method. We differ from conventional methods that follow this approach in that we solve the resulting system of partial differential equations using boundary integral methods. Our algorithm extends to higher spatial dimensions using an alternating direction implicit (ADI) framework. Thus we develop a boundary integral solution that is competitive with explicit finite difference methods in terms of both accuracy and speed. However, it provides more flexibility in the treatment of source functions and complex boundaries. We provide the analytical details of our one-dimensional method herein, along with a proof of the convergence of our schemes, in free space and on a bounded domain. We find that the method is unconditionally stable and achieves second order accuracy. A caveat of the analysis is the derivation of a unique and novel optimal quadrature method, which can be viewed as a Lax-type correction.

Journal ArticleDOI
Sharif Rahman1
TL;DR: In this article, error analysis is presented for approximations derived from the referential dimensional decomposition (RDD) and the analysis-of-variance dimensional decomposition (ADD); new formulae give the lower and upper bounds of the expected errors committed by bivariately and arbitrarily truncated RDD approximations when the reference point is selected randomly.
Abstract: The main theme of this paper is error analysis for approximations derived from two variants of dimensional decomposition of a multivariate function: the referential dimensional decomposition (RDD) and analysis-of-variance dimensional decomposition (ADD). New formulae are presented for the lower and upper bounds of the expected errors committed by bivariately and arbitrarily truncated RDD approximations when the reference point is selected randomly, thereby facilitating a means for weighing RDD against ADD approximations. The formulae reveal that the expected error from the S-variate RDD approximation of a function of N variables, where $0 \le S < N < \infty$, is at least $2^{S+1}$ times greater than the error from the S-variate ADD approximation. Consequently, ADD approximations are exceedingly more precise than RDD approximations. The analysis also finds the RDD approximation to be sub-optimal for an arbitrarily selected reference point, whereas the ADD approximation always results in minimum error. Therefore, the RDD approximation should be used with caution.
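The $2^{S+1}$ error gap can be observed numerically on a toy example (not from the paper): for $f(x_1, x_2) = x_1 x_2$ on the uniform unit square, the univariate ($S = 1$) ADD truncation error works out to $(x_1 - 1/2)(x_2 - 1/2)$, while the RDD truncation error anchored at a reference point $c$ is $(x_1 - c_1)(x_2 - c_2)$; averaging over random reference points gives exactly $4 = 2^{S+1}$ times the ADD mean-squared error:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200000
x = rng.random((M, 2))   # evaluation points, uniform on [0, 1]^2
c = rng.random((M, 2))   # random reference points for RDD

# f(x1, x2) = x1 * x2; univariate (S = 1) truncation errors:
#   ADD (anchored at the mean):  (x1 - 1/2)(x2 - 1/2)
#   RDD (anchored at point c):   (x1 - c1)(x2 - c2)
err_add = (x[:, 0] - 0.5) * (x[:, 1] - 0.5)
err_rdd = (x[:, 0] - c[:, 0]) * (x[:, 1] - c[:, 1])

mse_add = np.mean(err_add ** 2)   # exact value: 1/144
mse_rdd = np.mean(err_rdd ** 2)   # exact value: 1/36 = 4 * (1/144)
print(mse_rdd / mse_add)          # ratio -> 4 = 2^(S+1) for S = 1
```

On this toy function the $2^{S+1}$ factor is attained with equality; the abstract states it as a lower bound on the RDD/ADD expected-error ratio in general.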

Journal ArticleDOI
TL;DR: A discrete approximation in time and in space is investigated for a Hilbert space valued stochastic process $\{u(t)\}_{t\in[0,T]}$ satisfying a stochastic linear evolution equation with a positive-type memory term driven by an additive Gaussian noise.
Abstract: In this paper we investigate a discrete approximation in time and in space of a Hilbert space valued stochastic process $\{u(t)\}_{t\in[0,T]}$ satisfying a stochastic linear evolution equation with a positive-type memory term driven by an additive Gaussian noise. The equation can be written in abstract form as
$$du + \Big(\int_0^t b(t-s)\,Au(s)\,ds\Big)\,dt = dW, \quad t \in (0,T]; \qquad u(0) = u_0 \in H,$$
where $W$ is a $Q$-Wiener process on $H = L^2(D)$ and where the main example of $b$ we consider is given by $b(t) = t^{\beta-1}/\Gamma(\beta)$, $0 < \beta < 1$. We assume that there exists $\alpha > 0$ such that $A^{-\alpha}$ has finite trace and that $Q$ is bounded from $H$ into $D(A^{\kappa})$ for some real $\kappa$ with $\alpha - \frac{1}{\beta+1} < \kappa \le \alpha$. The discretization is achieved via an implicit Euler scheme and a Laplace transform convolution quadrature in time (parameter $\Delta t = T/n$), and a standard continuous finite element method in space (parameter $h$). Let $u_{n,h}$ be the discrete solution at $T = n\Delta t$. We show that
$$\big(E\|u_{n,h} - u(T)\|^2\big)^{1/2} = O(h^{\nu} + \Delta t^{\gamma}),$$
for any $\gamma < (1 - (\beta+1)(\alpha-\kappa))/2$ and $\nu \le \frac{1}{\beta+1} - \alpha + \kappa$.

Journal ArticleDOI
TL;DR: This paper considers the time-dependent two-phase Stefan problem and derives a posteriori error estimates and adaptive strategies for its conforming spatial and backward Euler temporal discretizations, and proposes an adaptive algorithm which ensures computational savings through the online choice of a sufficient regularization parameter.
Abstract: We consider in this paper the time-dependent two-phase Stefan problem and derive a posteriori error estimates and adaptive strategies for its conforming spatial and backward Euler temporal discretizations. Regularization of the enthalpy-temperature function and iterative linearization of the arising systems of nonlinear algebraic equations are considered. Our estimators yield a guaranteed and fully computable upper bound on the dual norm of the residual, as well as on the L2(L2) error of the temperature and the L2(H1) error of the enthalpy. Moreover, they allow one to distinguish the space, time, regularization, and linearization error components. An adaptive algorithm is proposed, which ensures computational savings through the online choice of a sufficient regularization parameter, a stopping criterion for the linearization iterations, local space mesh refinement, time step adjustment, and equilibration of the spatial and temporal errors. We also prove the efficiency of our estimate. Our analysis is quite general and is not focused on a specific choice of the space discretization and of the linearization. As an example, we apply it to the vertex-centered finite volume (finite element with mass lumping and quadrature) and Newton methods. Numerical results illustrate the effectiveness of our estimates and the performance of the adaptive algorithm.


Journal ArticleDOI
TL;DR: In this paper, the authors design consistent discontinuous Galerkin finite element schemes for the approximation of the Euler-Korteweg and the Navier-Stokes-Korteweg systems.
Abstract: We design consistent discontinuous Galerkin finite element schemes for the approximation of the Euler-Korteweg and the Navier-Stokes-Korteweg systems. We show that the scheme for the Euler-Korteweg system is energy and mass conservative and that the scheme for the Navier-Stokes-Korteweg system is mass conservative and monotonically energy dissipative. In this case the dissipation is isolated to viscous effects, that is, there is no numerical dissipation. In this sense the methods are consistent with the energy dissipation of the continuous PDE systems.

Journal ArticleDOI
Bin Han1
TL;DR: By solving only small systems of linear equations, this paper completely settles the problem of constructing all possible dual framelet filter banks, with or without symmetry and with the shortest possible filter supports, by introducing a step-by-step efficient algorithm.
Abstract: Dual wavelet frames and their associated dual framelet filter banks are often constructed using the oblique extension principle. In comparison with the construction of tight wavelet frames and tight framelet filter banks, it is indeed quite easy to obtain some particular examples of dual framelet filter banks with or without symmetry from any given pair of low-pass filters. However, such constructed dual framelet filter banks are often too particular to have desirable properties such as balanced filter supports between primal and dual filters. From the point of view of both theory and application, it is important and interesting to have an algorithm which is capable of finding all possible dual framelet filter banks with symmetry and with the shortest possible filter supports from any given pair of low-pass filters with symmetry. However, to the best of our knowledge, this issue has not yet been resolved in the literature, and one often has to solve systems of nonlinear equations to obtain nontrivial dual framelet filter banks. Given the fact that the construction of dual framelet filter banks is widely believed to be very flexible, the lack of a systematic algorithm for constructing all dual framelet filter banks in the literature is somewhat surprising to us. In this paper, by solving only small systems of linear equations, we completely settle this problem by introducing a step-by-step efficient algorithm to construct all possible dual framelet filter banks with or without symmetry and with the shortest possible filter supports. As a byproduct, our algorithm leads to a simple algorithm for constructing all symmetric tight framelet filter banks with two high-pass filters from a given low-pass filter with symmetry. Examples will be provided to illustrate our algorithm.
To explain our algorithm and dual framelet filter banks and to understand them better, we also discuss some of their properties in this paper.

Journal ArticleDOI
TL;DR: A rigorous implementation of the Lagarias and Odlyzko Analytic Method to evaluate the prime counting function and its use to compute unconditionally the number of primes less than 10 is described.
Abstract: We describe a rigorous implementation of the Lagarias and Odlyzko Analytic Method to evaluate the prime counting function and its use to compute unconditionally the number of primes less than 10.