
Showing papers in "Mathematics of Computation in 2011"


Journal ArticleDOI
TL;DR: In this paper, a hybridizable discontinuous Galerkin method for numerically solving the Stokes equations is presented, which uses polynomials of degree k for all the components of the approximate solution of the gradient-velocity-pressure formulation.
Abstract: In this paper, we analyze a hybridizable discontinuous Galerkin method for numerically solving the Stokes equations. The method uses polynomials of degree k for all the components of the approximate solution of the gradient-velocity-pressure formulation. The novelty of the analysis is the use of a new projection tailored to the very structure of the numerical traces of the method. It renders the analysis of the projection of the errors very concise and allows us to see that the projection of the error in the velocity superconverges. As a consequence, we prove that the approximations of the velocity gradient, the velocity and the pressure converge with the optimal order of convergence of k+1 in L^2 for any k ≥ 0. Moreover, taking advantage of the superconvergence properties of the velocity, we introduce a new element-by-element postprocessing to obtain a new velocity approximation which is exactly divergence-free, H(div)-conforming, and converges with order k+2 for k ≥ 1 and with order 1 for k = 0. Numerical experiments are presented which validate the theoretical results.

181 citations


Journal ArticleDOI
TL;DR: The problem of the rate of convergence of Legendre approximation is considered, and explicit barycentric weights, in terms of Gauss-Legendre points and corresponding quadrature weights, are presented that allow a fast evaluation of the Legendre interpolation formula.
Abstract: The problem of the rate of convergence of Legendre approximation is considered. We first establish the decay rates of the coefficients in the Legendre series expansion and then derive error bounds of the truncated Legendre series in the uniform norm. In addition, we consider Legendre approximation with interpolation. In particular, we are interested in the barycentric Lagrange formula at the Gauss-Legendre points. Explicit barycentric weights, in terms of Gauss-Legendre points and corresponding quadrature weights, are presented that allow a fast evaluation of the Legendre interpolation formula. Error estimates for Legendre interpolation polynomials are also given.
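To make the barycentric formula mentioned in the abstract concrete, here is a minimal Python sketch. It assumes the known closed form for barycentric weights at Gauss-Legendre points, w_j proportional to (-1)^j sqrt((1 - x_j^2) λ_j) with λ_j the quadrature weights (barycentric weights only matter up to a common scale); the helper names are illustrative and not taken from the paper.

```python
# Hedged sketch: barycentric Lagrange interpolation at Gauss-Legendre points,
# with weights obtained from the nodes and quadrature weights as described above.
import numpy as np

def legendre_barycentric_weights(n):
    x, lam = np.polynomial.legendre.leggauss(n)    # nodes and quadrature weights
    w = (-1.0) ** np.arange(n) * np.sqrt((1.0 - x**2) * lam)
    return x, w

def barycentric_interpolate(x, w, fvals, t):
    """Evaluate sum_j(w_j f_j / (t - x_j)) / sum_j(w_j / (t - x_j))."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    num = np.zeros_like(t)
    den = np.zeros_like(t)
    exact = np.full(t.shape, -1, dtype=int)        # marks evaluation points that hit a node
    for j in range(len(x)):
        diff = t - x[j]
        hit = diff == 0.0
        exact[hit] = j
        diff[hit] = 1.0                            # avoid division by zero; fixed below
        num += w[j] * fvals[j] / diff
        den += w[j] / diff
    out = num / den
    out[exact >= 0] = fvals[exact[exact >= 0]]
    return out

if __name__ == "__main__":
    f = lambda s: 1.0 / (1.0 + 25.0 * s**2)        # Runge function on [-1, 1]
    x, w = legendre_barycentric_weights(40)
    t = np.linspace(-1.0, 1.0, 1001)
    print("max interpolation error:",
          np.max(np.abs(barycentric_interpolate(x, w, f(x), t) - f(t))))
```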

159 citations



Journal ArticleDOI
TL;DR: In this paper, the authors developed and analyzed C^0 penalty methods for the fully nonlinear Monge-Ampère equation det(D^2 u) = f in two dimensions, where the key idea is to build discretizations such that the resulting discrete linearizations are symmetric, stable, and consistent with the continuous linearization.
Abstract: In this paper, we develop and analyze C^0 penalty methods for the fully nonlinear Monge-Ampère equation det(D^2 u) = f in two dimensions. The key idea in designing our methods is to build discretizations such that the resulting discrete linearizations are symmetric, stable, and consistent with the continuous linearization. We are then able to show the well-posedness of the penalty method as well as quasi-optimal error estimates using the Banach fixed-point theorem as our main tool. Numerical experiments are presented which support the theoretical results.

102 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that Dejean's conjecture holds for the last remaining open values of n, namely 15 ≤ n ≤ 26, where n is the size of the alphabet.
Abstract: We prove Dejean's conjecture. Specifically, we show that Dejean's conjecture holds for the last remaining open values of n, namely 15 ≤ n ≤ 26.

98 citations


Journal ArticleDOI
TL;DR: This approach uses the graph of l-isogenies to efficiently compute Φ_l mod p for many primes p of a suitable form, and then applies the Chinese Remainder Theorem (CRT).
Abstract: We present a new algorithm to compute the classical modular polynomial Φ_l in the rings Z[X,Y] and (Z/mZ)[X,Y], for a prime l and any positive integer m. Our approach uses the graph of l-isogenies to efficiently compute Φ_l mod p for many primes p of a suitable form, and then applies the Chinese Remainder Theorem (CRT). Under the Generalized Riemann Hypothesis (GRH), we achieve an expected running time of O(l^3 (log l)^3 log log l), and compute Φ_l mod m using O(l^2 (log l)^2 + l^2 log m) space. We have used the new algorithm to compute Φ_l with l over 5000, and Φ_l mod m with l over 20000. We also consider several modular functions g for which Φ_l^g is smaller than Φ_l, allowing us to handle l over 60000.
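The CRT assembly step mentioned in the abstract can be illustrated in isolation. The sketch below, with made-up residues and primes, only shows how a signed integer coefficient is recovered from its images modulo several primes; computing Φ_l mod p via isogeny graphs, the substance of the paper, is not reproduced.

```python
# Hedged sketch of the CRT step: recover an integer coefficient of Phi_l from its
# residues modulo several primes (residues here are purely illustrative).
from math import prod

def crt(residues, moduli):
    """Combine residues mod pairwise-coprime moduli into one residue mod prod(moduli)."""
    M = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        x += r * Mi * pow(Mi, -1, p)       # pow(., -1, p): modular inverse (Python >= 3.8)
    return x % M

def crt_symmetric(residues, moduli):
    """Lift to the symmetric range (-M/2, M/2], as needed for signed coefficients."""
    M = prod(moduli)
    x = crt(residues, moduli)
    return x - M if x > M // 2 else x

if __name__ == "__main__":
    coeff = -488282880                     # a made-up "coefficient" to recover
    primes = [1000003, 1000033, 1000037]
    residues = [coeff % p for p in primes]
    print(crt_symmetric(residues, primes) == coeff)   # True
```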

97 citations


Journal ArticleDOI
TL;DR: In this paper, a new analytical approach to operator splitting for equations of the type u_t = Au + B(u), where A is a linear operator and B is quadratic (the KdV equation being a particular example), was proposed, and it was shown that the Godunov and Strang splitting methods converge with the expected rates.
Abstract: We provide a new analytical approach to operator splitting for equations of the type u_t = Au + B(u), where A is a linear operator and B is quadratic. A particular example is the Korteweg-de Vries (KdV) equation u_t - u u_x + u_xxx = 0. We show that the Godunov and Strang splitting methods converge with the expected rates if the initial data are sufficiently regular.
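For readers who want to see what the splitting looks like numerically, here is a hedged sketch of Strang splitting for a KdV-type equation u_t + u u_x + u_xxx = 0 on a periodic domain (sign and scaling conventions for KdV vary, and the paper's contribution is the convergence analysis, not this particular solver). The linear flow is applied exactly in Fourier space; the quadratic flow is advanced by a single RK4 step per substep.

```python
# Hedged sketch: Strang splitting u -> exp(dt/2 A) o Phi_B(dt) o exp(dt/2 A) for a
# KdV-type equation, with A u = -u_xxx (exact Fourier multiplier) and B(u) = -u u_x.
import numpy as np

N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi          # angular wavenumbers

def linear_flow(u, dt):
    """Exact solution of u_t = -u_xxx over time dt (multiplier exp(i k^3 dt))."""
    return np.real(np.fft.ifft(np.exp(1j * k**3 * dt) * np.fft.fft(u)))

def nonlinear_rhs(u):
    """B(u) = -u u_x = -(u^2/2)_x, derivative taken spectrally."""
    return -np.real(np.fft.ifft(1j * k * np.fft.fft(0.5 * u**2)))

def nonlinear_flow(u, dt):
    k1 = nonlinear_rhs(u)
    k2 = nonlinear_rhs(u + 0.5 * dt * k1)
    k3 = nonlinear_rhs(u + 0.5 * dt * k2)
    k4 = nonlinear_rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def strang_step(u, dt):
    u = linear_flow(u, 0.5 * dt)
    u = nonlinear_flow(u, dt)
    return linear_flow(u, 0.5 * dt)

u = np.cos(x)                                       # smooth (regular) initial data
dt, nsteps = 1e-3, 1000
for _ in range(nsteps):
    u = strang_step(u, dt)
print("mass drift:", abs(np.mean(u) - np.mean(np.cos(x))))
```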

93 citations


Journal ArticleDOI
TL;DR: This paper analyzes fully-mixed finite element methods for the coupling of fluid flow with porous media flow and shows that with the present approach the Stokes and Darcy flows can be approximated with the same family of finite element subspaces without adding any stabilization term.
Abstract: In this paper we analyze fully-mixed finite element methods for the coupling of fluid flow with porous media flow. Flows are governed by the Stokes and Darcy equations, respectively, and the corresponding transmission conditions are given by mass conservation, balance of normal forces, and the Beavers-Joseph-Saffman law. The fully-mixed concept employed here refers to the fact that we consider dual-mixed formulations in both the Stokes domain and the Darcy region, which means that the main unknowns are given by the pseudostress and the velocity in the fluid, together with the velocity and the pressure in the porous medium. In addition, the transmission conditions become essential, which leads to the introduction of the traces of the porous media pressure and the fluid velocity as the associated Lagrange multipliers. We apply the Fredholm and Babuska-Brezzi theories to derive sufficient conditions for the unique solvability of the resulting continuous formulation. Since the equations and unknowns can be ordered in several different ways, we choose the one yielding a doubly mixed structure for which the inf-sup conditions of the off-diagonal bilinear forms follow straightforwardly. Next, adapting to the discrete case the arguments of the continuous analysis, we are able to establish suitable hypotheses on the finite element subspaces ensuring that the associated Galerkin scheme becomes well posed. In addition, we show that the existence of uniformly bounded discrete liftings of the normal traces simplifies the derivation of the corresponding stability estimates. A feasible choice of subspaces is given by Raviart-Thomas elements of lowest order and piecewise constants for the velocities and pressures, respectively, in both domains, together with continuous piecewise linear elements for the Lagrange multipliers. This example confirms that with the present approach the Stokes and Darcy flows can be approximated with the same family of finite element subspaces without adding any stabilization term. Finally, several numerical results illustrating the good performance of the method with these discrete spaces, and confirming the theoretical rate of convergence, are provided.

83 citations


Journal ArticleDOI
TL;DR: Some fundamental theorems of the multiplicity, including local finiteness, consistency, perturbation invariance, and depth-deflatability, are proved.
Abstract: As an attempt to bridge between numerical analysis and algebraic geometry, this paper formulates the multiplicity for the general nonlinear system at an isolated zero, presents an algorithm for computing the multiplicity structure, proposes a depth-deflation method for accurate computation of multiple zeros, and introduces the basic algebraic theory of the multiplicity. Furthermore, this paper elaborates and proves some fundamental theorems of the multiplicity, including local finiteness, consistency, perturbation invariance, and depth-deflatability. The proposed algorithms can accurately compute the multiplicity and the multiple zeros using floating point arithmetic even if the nonlinear system is perturbed.

82 citations


Journal ArticleDOI
TL;DR: Using the concept of Geometric Weakly Admissible Meshes (see §2 below) together with an algorithm based on the classical QR factorization of matrices, the authors compute efficient points for discrete multivariate least squares approximation and Lagrange interpolation.
Abstract: Using the concept of Geometric Weakly Admissible Meshes (see §2 below) together with an algorithm based on the classical QR factorization of matrices, we compute efficient points for discrete multivariate least squares approximation and Lagrange interpolation.
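A common concrete realization of the "QR factorization of the Vandermonde matrix" idea is the greedy selection of approximate Fekete points by column-pivoted QR; the sketch below uses that construction on a plain fine grid as the candidate set (a genuine weakly admissible mesh, and the paper's exact variant, may differ).

```python
# Hedged sketch: extract interpolation points from a candidate mesh by QR with
# column pivoting applied to the transposed Vandermonde matrix.
import numpy as np
from scipy.linalg import qr

def chebyshev_vandermonde_2d(pts, deg):
    """Total-degree product-Chebyshev basis evaluated at pts (shape (m, 2))."""
    cols = []
    for i in range(deg + 1):
        for j in range(deg + 1 - i):
            cols.append(np.cos(i * np.arccos(pts[:, 0])) *
                        np.cos(j * np.arccos(pts[:, 1])))
    return np.column_stack(cols)

def approximate_fekete_points(mesh, deg):
    V = chebyshev_vandermonde_2d(mesh, deg)              # m x n with m >> n
    n = V.shape[1]
    _, _, piv = qr(V.T, pivoting=True, mode='economic')  # greedy column selection
    return mesh[piv[:n]]

if __name__ == "__main__":
    # candidate mesh: a fine tensor grid on [-1, 1]^2 (used here in place of a
    # weakly admissible mesh, purely for illustration)
    s = np.linspace(-1.0, 1.0, 60)
    X, Y = np.meshgrid(s, s)
    mesh = np.column_stack([X.ravel(), Y.ravel()])
    pts = approximate_fekete_points(mesh, deg=8)
    print(pts.shape)    # (45, 2): dimension of bivariate polynomials of degree <= 8
```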

82 citations


Journal ArticleDOI
TL;DR: It is shown that the P_k-P_{k-1} element is stable and of optimal order in approximation, on a family of uniform tetrahedral grids, for all k ≥ 6, and that the finite element spaces for the pressure can be avoided in computation if a classic iterated penalty method is applied.
Abstract: It was shown two decades ago that the P_k-P_{k-1} mixed element on triangular grids, approximating the velocity by the continuous P_k piecewise polynomials and the pressure by the discontinuous P_{k-1} piecewise polynomials, is stable for all k ≥ 4, provided the grids are free of nearly-singular vertices. The corresponding problem for the method in 3D was posed then and remains open. The problem is solved partially in this work. It is shown that the P_k-P_{k-1} element is stable and of optimal order in approximation, on a family of uniform tetrahedral grids, for all k ≥ 6. The analysis is to be generalized to non-uniform grids once the complexity of the 3D geometry can be handled. For the divergence-free elements, the finite element spaces for the pressure can be avoided in computation, if a classic iterated penalty method is applied. The finite element solutions for the pressure are computed as byproducts of the velocity iterates. Numerical tests are provided.

Journal ArticleDOI
TL;DR: It is proved that the AEFEM gives a contraction for the sum of the energy error and the scaled error estimator, between two consecutive adaptive loops provided the initial mesh is fine enough.
Abstract: We consider a standard Adaptive Edge Finite Element Method (AEFEM) based on arbitrary order Nedelec edge elements, for three-dimensional indefinite time-harmonic Maxwell equations. We prove that the AEFEM gives a contraction for the sum of the energy error and the scaled error estimator, between two consecutive adaptive loops provided the initial mesh is fine enough. Using the geometric decay, we show that the AEFEM yields the best-possible decay rate of the error plus oscillation in terms of the number of degrees of freedom. The main technical contribution of the paper is in the establishment of a quasi-orthogonality and a localized a posteriori error estimator.

Journal ArticleDOI
TL;DR: A new two-grid discretization method for solving partial differential equation or integral equation eigenvalue problems that significantly improves the theoretical error estimate and allows a much coarser mesh to achieve the same asymptotic convergence rate.
Abstract: This paper provides a new two-grid discretization method for solving partial differential equation or integral equation eigenvalue problems. In 2001, Xu and Zhou introduced a scheme that reduces the solution of an eigenvalue problem on a finite element grid to that of one single linear problem on the same grid together with a similar eigenvalue problem on a much coarser grid. By solving a slightly different linear problem on the fine grid, the new algorithm in this paper significantly improves the theoretical error estimate, which allows a much coarser mesh to achieve the same asymptotic convergence rate. Numerical examples are also provided to demonstrate the efficiency of the new method.
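The Xu-Zhou scheme that the abstract builds on can be sketched with plain matrices. The toy below uses a 1-D finite difference Laplacian: a coarse eigenpair is computed, one linear system is solved on the fine grid, and a Rayleigh quotient is taken. The paper's new method replaces the fine-grid linear problem with a slightly different one, which is not reproduced here.

```python
# Hedged sketch of the classic two-grid eigenvalue step (Xu-Zhou 2001), shown for
# the 1-D Dirichlet Laplacian with finite differences (mass matrix = identity).
import numpy as np

def laplacian_1d(n):
    """Dirichlet Laplacian on the n interior points of (0, 1), spacing h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def prolongate(uH, nH, nh):
    """Linear interpolation of a coarse vector (zero boundary values) to the fine grid."""
    xH = np.linspace(0.0, 1.0, nH + 2)
    xh = np.linspace(0.0, 1.0, nh + 2)[1:-1]
    return np.interp(xh, xH, np.concatenate(([0.0], uH, [0.0])))

nH, nh = 15, 255
AH, Ah = laplacian_1d(nH), laplacian_1d(nh)

lamH, VH = np.linalg.eigh(AH)                       # coarse eigenpair (smallest eigenvalue)
uH, lam_coarse = VH[:, 0], lamH[0]

uh = np.linalg.solve(Ah, lam_coarse * prolongate(uH, nH, nh))   # one fine-grid solve
lam_two_grid = uh @ (Ah @ uh) / (uh @ uh)                       # Rayleigh quotient update

print("exact    :", np.pi**2)
print("coarse   :", lam_coarse)
print("two-grid :", lam_two_grid)
```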

Journal ArticleDOI
TL;DR: There are numerical integration rules which achieve an exponential convergence of the worst-case integration error of multivariate integration for a weighted Korobov space for which the Fourier coefficients of the functions decay exponentially fast.
Abstract: In this paper we study multivariate integration for a weighted Korobov space for which the Fourier coefficients of the functions decay exponentially fast. This implies that the functions of this space are infinitely many times differentiable. Weights of the Korobov space monitor the influence of each variable and each group of variables. We show that there are numerical integration rules which achieve an exponential convergence of the worst-case integration error. We also investigate the dependence of the worst-case error on the number of variables s, and show various tractability results under certain conditions on the weights of the Korobov space. Tractability means that the dependence on s is never exponential, and sometimes the dependence on s is polynomial or there is no dependence on s at all.
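For orientation, rank-1 lattice rules are the standard kind of quadrature studied in weighted Korobov spaces of periodic integrands; the sketch below only shows their form Q(f) = (1/N) Σ_k f({k z / N}) with an arbitrary generating vector, not the specific rules or the exponential-convergence analysis of the paper.

```python
# Hedged sketch: a rank-1 lattice rule applied to a smooth periodic integrand
# whose Fourier coefficients decay geometrically (the Korobov-space setting).
import numpy as np

def lattice_rule(f, z, N):
    """Rank-1 lattice rule: Q(f) = (1/N) sum_{k=0}^{N-1} f({k z / N})."""
    k = np.arange(N).reshape(-1, 1)
    pts = np.mod(k * np.asarray(z, dtype=float) / N, 1.0)
    return float(np.mean([f(p) for p in pts]))

if __name__ == "__main__":
    s = 4
    # exact integral of prod_j 1/(1 - 0.5 cos(2 pi x_j)) over [0,1]^s is (0.75)^(-s/2)
    f = lambda x: np.prod(1.0 / (1.0 - 0.5 * np.cos(2 * np.pi * x)))
    exact = (1.0 - 0.25) ** (-s / 2)
    z = [1, 511, 933, 387]          # an arbitrary generating vector, not an optimized one
    for N in (1009, 4001, 16001):
        print(N, abs(lattice_rule(f, z, N) - exact))
```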

Journal ArticleDOI
TL;DR: An approximation technique for the Maxwell eigenvalue problem using H^1-conforming finite elements is proposed and shown to be convergent and spectrally correct.
Abstract: We propose and analyze an approximation technique for the Maxwell eigenvalue problem using H^1-conforming finite elements. The key idea consists of considering a mixed method controlling the divergence of the electric field in a fractional Sobolev space H^{-α} with α ∈ (1/2, 1). The method is shown to be convergent and spectrally correct.

Journal ArticleDOI
TL;DR: A new method for approximating Hilbert transforms and their inverse throughout the complex plane is constructed, which can be formulated as Riemann-Hilbert problems via Plemelj's lemma and can compute the Hilbert transform of a more general class of functions on the real line than is possible with existing methods.
Abstract: We construct a new method for approximating Hilbert transforms and their inverse throughout the complex plane. Both problems can be formulated as Riemann-Hilbert problems via Plemelj's lemma. Using this framework, we re-derive existing approaches for computing Hilbert transforms over the real line and unit interval, with the added benefit that we can compute the Hilbert transform in the complex plane. We then demonstrate the power of this approach by generalizing to the half line. Combining two half lines, we can compute the Hilbert transform of a more general class of functions on the real line than is possible with existing methods.

Journal ArticleDOI
TL;DR: This paper deals with the problem of reconstructing the electromagnetic parameters and the shape of a target from multi-static response matrix measurements at a single frequency, using high-order long-wavelength asymptotic expansions of the measurements to image geometric details of the target that are finer than its equivalent ellipse.
Abstract: This paper deals with the problem of reconstructing the electromagnetic parameters and the shape of a target from multi-static response matrix measurements at a single frequency. The target is of characteristic size less than the operating wavelength. Using long-wavelength asymptotic expansions of the measurements of high-order, we show how the electromagnetic parameters and the equivalent ellipse of the target can be reconstructed. The asymptotic expansions of the measurements are written in terms of the new concept of frequency dependent polarization tensors. Moreover, we extend the optimization approach proposed in Part I [9] to image geometric details of an electromagnetic target that are finer than the equivalent ellipse. The equivalent ellipse still provides a good initial guess for the optimization procedure. However, compared to the conductivity case in [9], the cost functional measures the discrepancy between the computed and measured high-order frequency dependent polarization tensors rather than between the generalized polarization tensors. The main reason for such a modification of the cost functional is the fact that the (measured) frequency dependent polarization tensors can be easily obtained from multistatic measurements by solving a linear system while the derivation of the generalized polarization tensors from measurements requires more delicate treatment. The proposed methods are numerically implemented to demonstrate their validity and efficiency. Mathematics Subject Classification (MSC2000): 35R30, 35B30.

Journal ArticleDOI
TL;DR: A cuspidal newform for Γ_1(N) with weight k ≥ 2 and character e is computed using modular symbols, and the corresponding automorphic representation of the adele group GL_2(A_Q) is defined.
Abstract: The problem. Let f be a cuspidal newform for Γ_1(N) with weight k ≥ 2 and character e. There are well-established methods for computing such forms using modular symbols; see [Ste07]. Let π_f be the corresponding automorphic representation of the adele group GL_2(A_Q).


Journal ArticleDOI
TL;DR: The presented adaptive algorithm for Raviart-Thomas mixed finite element methods solves the Poisson model problem, with optimal convergence rate.
Abstract: Various applications in fluid dynamics and computational continuum mechanics motivate the development of reliable and efficient adaptive algorithms for mixed finite element methods. In order to save degrees of freedom, not all but just some selected set of finite element domains are refined. Hence the fundamental question of convergence as well as the question of optimality require new mathematical arguments. The presented adaptive algorithm for Raviart-Thomas mixed finite element methods solves the Poisson model problem, with optimal convergence rate. Chen, Holst, and Xu presented "convergence and optimality of adaptive mixed finite element methods" (2008) following arguments of Rob Stevenson for the conforming finite element method. Their algorithm reduces oscillations separately, before approximating the solution by some adaptive algorithm in the spirit of W. Dorfler (1996). The algorithm proposed here appears more natural in switching to either reduction of the edge-error estimator or of the oscillations.
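The Dörfler (1996) marking step referred to in the abstract is easy to state in code. The sketch below uses placeholder error indicators; in the actual algorithm they would come from the a posteriori edge-error estimator (or the oscillation terms) of the mixed finite element solution.

```python
# Hedged sketch of Doerfler (bulk-chasing) marking: select a near-minimal set of
# elements whose squared indicators sum to at least a fraction theta of the total.
import numpy as np

def doerfler_mark(eta, theta=0.5):
    """Return indices M with sum(eta[M]**2) >= theta * sum(eta**2), greedily chosen."""
    order = np.argsort(eta**2)[::-1]            # largest indicators first
    cumulative = np.cumsum(eta[order]**2)
    m = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
    return order[:m]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eta = rng.random(20)                        # placeholder error indicators
    print(sorted(doerfler_mark(eta, theta=0.5).tolist()))
```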

Journal ArticleDOI
TL;DR: This new method is based on numerical homogenization and makes it possible to significantly reduce the computational cost of a fine scale discontinuous Galerkin method by probing the fine scale data on sampling domains within a macroscopic partition of the computational domain.
Abstract: An analysis of a multiscale symmetric interior penalty discontinuous Galerkin finite element method for the numerical discretization of elliptic problems with multiple scales is proposed. This new method, first described in [A. Abdulle, C.R. Acad. Sci. Paris, Ser. I 346 (2008)], is based on numerical homogenization. It makes it possible to significantly reduce the computational cost of a fine scale discontinuous Galerkin method by probing the fine scale data on sampling domains within a macroscopic partition of the computational domain. Macroscopic numerical fluxes, an essential ingredient of discontinuous Galerkin finite elements, can be recovered from the computation on the sampling domains with negligible computational overhead. Fully discrete a priori error bounds are derived in the L^2 and H^1 norms.

Journal ArticleDOI
TL;DR: In this paper, constructive and nonconstructive techniques are employed to enumerate Latin squares and related objects, establishing that there are (i) 2036029552582883134196099 main classes of Latin squares of order 11; (ii) 6108088657705958932053657 isomorphism classes of one-factorizations of K_{11,11}; (iii) 12216177315369229261482540 isotopy classes of Latin squares of order 11; (iv) 1478157455158044452849321016 isomorphism classes of loops of order 11; and (v) 19464657391668924966791023043937578299025 isomorphism classes of quasigroups of order 11.
Abstract: Constructive and nonconstructive techniques are employed to enumerate Latin squares and related objects. It is established that there are (i) 2036029552582883134196099 main classes of Latin squares of order 11; (ii) 6108088657705958932053657 isomorphism classes of one-factorizations of K_{11,11}; (iii) 12216177315369229261482540 isotopy classes of Latin squares of order 11; (iv) 1478157455158044452849321016 isomorphism classes of loops of order 11; and (v) 19464657391668924966791023043937578299025 isomorphism classes of quasigroups of order 11. The enumeration is constructive for the 1151666641 main classes with an autoparatopy group of order at least 3.

Journal ArticleDOI
TL;DR: The best approximation property of the error in the W^{1,∞} norm is extended to more general graded meshes, and the properties of and relationships between similar mesh restrictions that have appeared in the literature are discussed.
Abstract: We consider finite element methods for a model second-order elliptic equation on a general bounded convex polygonal or polyhedral domain. Our first main goal is to extend the best approximation property of the error in the W^{1,∞} norm, which is known to hold on quasi-uniform meshes, to more general graded meshes. We accomplish this by a novel proof technique. This result holds under a condition on the grid which is mildly more restrictive than the shape regularity condition typically enforced in adaptive codes. The second main contribution of this work is a discussion of the properties of and relationships between similar mesh restrictions that have appeared in the literature.

Journal ArticleDOI
TL;DR: The concept of equivariant Gröbner bases was introduced in this paper, where a monoid acts by homomorphisms on monomials in potentially infinitely many variables and the action must be compatible with a term order.
Abstract: Exploiting symmetry in Gröbner basis computations is difficult when the symmetry takes the form of a group acting by automorphisms on monomials in finitely many variables. This is largely due to the fact that the group elements, being invertible, cannot preserve a term order. By contrast, inspired by work of Aschenbrenner and Hillar, we introduce the concept of equivariant Gröbner basis in a setting where a monoid acts by homomorphisms on monomials in potentially infinitely many variables. We require that the action be compatible with a term order, and under some further assumptions derive a Buchberger-type algorithm for computing equivariant Gröbner bases. Using this algorithm and the monoid of strictly increasing functions ℕ → ℕ we prove that the kernel of the ring homomorphism ℝ[y_{ij} | i, j ∈ ℕ, i > j] → ℝ[s_i, t_i | i ∈ ℕ], y_{ij} ↦ s_i s_j + t_i t_j, is generated by two types of polynomials: off-diagonal 3 × 3-minors and pentads. This confirms a conjecture by Drton, Sturmfels, and Sullivant on the Gaussian two-factor model from algebraic statistics.
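The easy half of the kernel statement above can be checked symbolically: any off-diagonal 3 × 3 minor maps to zero because the matrix (s_i s_j + t_i t_j) has rank at most 2. The sympy sketch below verifies this for one choice of rows and columns; that the minors and pentads generate the whole kernel is the hard part proved in the paper and is not checked here.

```python
# Hedged sketch: verify that one off-diagonal 3x3 minor lies in the kernel of the
# map y_ij -> s_i*s_j + t_i*t_j (the Gram matrix has rank <= 2, so the minor is 0).
import sympy as sp

s = sp.symbols('s1:7')
t = sp.symbols('t1:7')

def y(i, j):
    """Image of the variable y_ij under the ring homomorphism."""
    return s[i - 1] * s[j - 1] + t[i - 1] * t[j - 1]

# off-diagonal minor: rows {4, 5, 6}, columns {1, 2, 3} (disjoint index sets)
M = sp.Matrix(3, 3, lambda a, b: y(a + 4, b + 1))
print(sp.expand(M.det()))    # prints 0
```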

Journal ArticleDOI
TL;DR: Modulation of the hodograph by a scalar polynomial is proposed as a means of introducing additional degrees of freedom, in cases where solutions to the end-point interpolation problem are not found.
Abstract: The construction of space curves with rational rotation-minimizing frames (RRMF curves) by the interpolation of G^1 Hermite data, i.e., initial/final points p_i and p_f and frames (t_i, u_i, v_i) and (t_f, u_f, v_f), is addressed. Noting that the RRMF quintics form a proper subset of the spatial Pythagorean-hodograph (PH) quintics, characterized by a vector constraint on their quaternion coefficients, and that C^1 spatial PH quintic Hermite interpolants possess two free scalar parameters, sufficient degrees of freedom for satisfying the RRMF condition and interpolating the end points and frames can be obtained by relaxing the Hermite data from C^1 to G^1. It is shown that, after satisfaction of the RRMF condition, interpolation of the end frames can always be achieved by solving a quadratic equation with a positive discriminant. Three scalar freedoms then remain for interpolation of the end-point displacement p_f − p_i, and this can be reduced to computing the real roots of a degree 6 univariate polynomial. The nonlinear dependence of the polynomial coefficients on the prescribed data precludes simple a priori guarantees for the existence of solutions in all cases, although existence is demonstrated for the asymptotic case of densely-sampled data from a smooth curve. Modulation of the hodograph by a scalar polynomial is proposed as a means of introducing additional degrees of freedom, in cases where solutions to the end-point interpolation problem are not found. The methods proposed herein are expected to find important applications in exactly specifying rigid-body motions along curved paths, with minimized rotation, for animation, robotics, spatial path planning, and geometric sweeping operations.

Journal ArticleDOI
TL;DR: In this paper, a method is presented for constructing optimized equations for the modular curve X_1(N) using a local search algorithm on a suitably defined graph of birationally equivalent plane curves.
Abstract: We present a method for constructing optimized equations for the modular curve X_1(N) using a local search algorithm on a suitably defined graph of birationally equivalent plane curves. We then apply these equations over a finite field F_q to efficiently generate elliptic curves with nontrivial N-torsion by searching for affine points on X_1(N)(F_q), and we give a fast method for generating curves with (or without) a point of order 4N using X_1(2N).

Journal ArticleDOI
TL;DR: An approach that may improve upon the O(N^{1/2+o(1)}) bound is discussed, by suggesting a strategy to determine in time O(N^{1/2−c}), for some c > 0, whether a given interval in [N, 2N] contains a prime.
Abstract: Given a large positive integer N, how quickly can one construct a prime number larger than N (or between N and 2N)? Using probabilistic methods, one can obtain a prime number in time at most log^{O(1)} N with high probability by selecting numbers between N and 2N at random and testing each one in turn for primality until a prime is discovered. However, if one seeks a deterministic method, then the problem is much more difficult, unless one assumes some unproven conjectures in number theory; brute force methods give an O(N^{1+o(1)}) algorithm, and the best unconditional algorithm, due to Odlyzko, has a runtime of O(N^{1/2+o(1)}). In this paper we discuss an approach that may improve upon the O(N^{1/2+o(1)}) bound, by suggesting a strategy to determine in time O(N^{1/2−c}) for some c > 0 whether a given interval in [N, 2N] contains a prime. While this strategy has not been fully implemented, it can be used to establish partial results, such as being able to determine the parity of the number of primes in a given interval in [N, 2N] in time O(N^{1/2−c}).
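The probabilistic baseline described in the first sentences of the abstract is straightforward to implement; the sketch below does exactly that (sympy's isprime is used purely for convenience, and by the prime number theorem roughly log N samples are expected before a prime is hit).

```python
# Hedged sketch of the probabilistic baseline: sample integers uniformly from
# [N, 2N] and test each for primality until a prime appears.
import random
from sympy import isprime

def random_prime_between(N, rng=random):
    """Return (prime in [N, 2N], number of primality tests used)."""
    trials = 0
    while True:
        trials += 1
        candidate = rng.randrange(N, 2 * N + 1)
        if isprime(candidate):
            return candidate, trials

if __name__ == "__main__":
    N = 10**30
    p, trials = random_prime_between(N)
    print(p, "found after", trials, "primality tests")
```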


Journal ArticleDOI
TL;DR: Convergence of the L^2-projection onto the space of polynomials up to degree p on a simplex in R^d, d ≥ 2, is studied.
Abstract: In this paper we study convergence of the L^2-projection onto the space of polynomials up to degree p on a simplex in R^d, d ≥ 2. Optimal error estimates are established in the case of Sobolev regularity and illustrated on several numerical examples. The proof is based on the collapsed coordinate transform and the expansion into various polynomial bases involving Jacobi polynomials and their antiderivatives. The results of the present paper generalize corresponding estimates for cubes in R^d from [P. Houston, C. Schwab, E. Süli, Discontinuous hp-finite element methods for advection-diffusion-reaction problems. SIAM J. Numer. Anal. 39 (2002), no. 6, 2133-2163].
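As a small illustration of the objects in the abstract, the sketch below computes the L^2-projection onto polynomials of total degree p on the reference triangle, with the integrals evaluated through the collapsed-coordinate (Duffy) map from the square. A monomial basis and a weighted least-squares solve are used for simplicity; the paper's analysis works with Jacobi-polynomial bases and general dimension d.

```python
# Hedged sketch: L2-projection onto total-degree-p polynomials on the reference
# triangle {x, y >= 0, x + y <= 1}, using a tensor Gauss rule pulled back through
# the Duffy (collapsed-coordinate) map (x, y) = (u (1 - v), v), Jacobian (1 - v).
import numpy as np

def triangle_quadrature(n):
    u, wu = np.polynomial.legendre.leggauss(n)
    u, wu = 0.5 * (u + 1.0), 0.5 * wu               # shift from [-1, 1] to [0, 1]
    U, V = np.meshgrid(u, u)
    W = np.outer(wu, wu) * (1.0 - V)                # include the Duffy Jacobian
    return (U * (1.0 - V)).ravel(), V.ravel(), W.ravel()

def l2_projection_error(f, p, n_quad=20):
    """Discrete L2 error of the degree-p projection of f on the reference triangle."""
    x, y, w = triangle_quadrature(n_quad)
    B = np.column_stack([x**i * y**j for i in range(p + 1) for j in range(p + 1 - i)])
    sqw = np.sqrt(w)
    c, *_ = np.linalg.lstsq(sqw[:, None] * B, sqw * f(x, y), rcond=None)
    return np.sqrt(np.sum(w * (B @ c - f(x, y)) ** 2))

if __name__ == "__main__":
    f = lambda x, y: np.exp(x + 2.0 * y)
    for p in (1, 2, 3, 4, 5, 6):
        print(p, l2_projection_error(f, p))
```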

Journal ArticleDOI
TL;DR: The modified Abel lemma on summation by parts is employed to investigate the partial sum of Dougall’s bilateral 2H2-series and several unusual transformations into fast convergent series are established.
Abstract: The modified Abel lemma on summation by parts is employed to investigate the partial sum of Dougall's bilateral 2H2-series. Several unusual transformations into fast convergent series are established. They lead surprisingly to numerous infinite series expressions for π, including several formulae discovered by Ramanujan (1914) and recently by Guillera (2008). Roughly speaking, a hypergeometric series is defined to be a series ∑ C_n with term ratio C_{n+1}/C_n a rational function of n. In general, it can be explicitly written as

  ${}_pF_q\!\left[\begin{matrix} a_1, a_2, \cdots, a_p \\ b_1, b_2, \cdots, b_q \end{matrix} \;\Big|\; z\right] = \sum_{n=0}^{\infty} \frac{(a_1)_n (a_2)_n \cdots (a_p)_n}{(b_1)_n (b_2)_n \cdots (b_q)_n} \, \frac{z^n}{n!}$

where the rising shifted factorial is given by (x)_0 ≡ 1 and (x)_n = Γ(x+n)/Γ(x) = x(x+1)···(x+n−1) for n = 1, 2, ···. The Γ-function is defined by the Euler integral
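To give a feel for the "fast convergent series for π" the abstract mentions, here is a numerical check of one classical Ramanujan (1914) series for 1/π (a well-known example of the type referred to; whether this exact series is among those re-derived in the paper is not asserted). Each term contributes roughly eight further correct digits.

```python
# Hedged numerical check of the classical Ramanujan series
#   1/pi = (2*sqrt(2)/9801) * sum_{n>=0} (4n)! (1103 + 26390 n) / ((n!)^4 * 396^(4n))
from mpmath import mp, mpf, factorial, sqrt, pi

mp.dps = 50                      # work with 50 decimal digits

def ramanujan_pi(terms):
    s = mpf(0)
    for n in range(terms):
        s += factorial(4 * n) * (1103 + 26390 * n) / (factorial(n) ** 4 * mpf(396) ** (4 * n))
    return 1 / (2 * sqrt(2) / 9801 * s)

for terms in range(1, 5):
    print(terms, abs(ramanujan_pi(terms) - pi))   # error shrinks by ~8 digits per term
```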