
Showing papers in "Advances in Computational Mathematics in 2014"


Journal ArticleDOI
TL;DR: The problem of constructing a normalized hierarchical basis for adaptively refined spline spaces is addressed and the theory is applied to hierarchically refined tensor-product spline spaces, under certain reasonable assumptions on the given knot configuration.
Abstract: The problem of constructing a normalized hierarchical basis for adaptively refined spline spaces is addressed. Multilevel representations are defined in terms of a hierarchy of basis functions, reflecting different levels of refinement. When the hierarchical model is constructed by considering an underlying sequence of bases $\{\Gamma^{\ell}\}_{\ell=0,\ldots,N-1}$ with properties analogous to classical tensor-product B-splines, we can define a set of locally supported basis functions that form a partition of unity and possess the property of coefficient preservation, i.e., they preserve the coefficients of functions represented with respect to one of the bases $\Gamma^{\ell}$. Our construction relies on a certain truncation procedure, which eliminates the contributions of functions from finer levels in the hierarchy to coarser level ones. Consequently, the support of the original basis functions defined on coarse grids is possibly reduced according to finer levels in the hierarchy. This truncation mechanism not only decreases the overlapping of basis supports, but it also guarantees strong stability of the construction. In addition to presenting the theory for the general framework, we apply it to hierarchically refined tensor-product spline spaces, under certain reasonable assumptions on the given knot configuration.
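The partition-of-unity property claimed for the hierarchical basis is easy to verify numerically at a single level. A minimal sketch (one uniform level of cubic B-splines via SciPy, not the paper's multilevel truncated construction):

```python
import numpy as np
from scipy.interpolate import BSpline

# One level of a tensor-product hierarchy, in 1D for brevity: cubic
# B-splines on a clamped knot vector. The basis functions are locally
# supported and sum to one -- the property the truncated hierarchical
# construction preserves across refinement levels.
k = 3                                        # cubic B-splines
interior = np.linspace(0.0, 1.0, 9)          # uniform interior knots
t = np.r_[[0.0] * k, interior, [1.0] * k]    # clamped knot vector
n = len(t) - k - 1                           # number of basis functions

x = np.linspace(0.0, 1.0, 200, endpoint=False)
basis = np.array([BSpline(t, np.eye(n)[j], k)(x) for j in range(n)])

print(np.allclose(basis.sum(axis=0), 1.0))   # True: partition of unity
```

The hierarchical construction replaces some coarse functions by truncated versions so that the same identity holds across levels.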

132 citations


Journal ArticleDOI
TL;DR: It turns out, by applying a refined error analysis, that the optimal parameter choice is more subtle than previously known in the literature, and depends on the norm used, the solution, the family of finite element spaces, and the type of mesh.
Abstract: Standard error analysis for grad-div stabilization of inf-sup stable conforming pairs of finite element spaces predicts that the stabilization parameter should be optimally chosen to be $\mathcal{O}(1)$. This paper revisits this choice for the Stokes equations on the basis of minimizing the $H^1(\Omega)$ error of the velocity and the $L^2(\Omega)$ error of the pressure. It turns out, by applying a refined error analysis, that the optimal parameter choice is more subtle than known so far in the literature. It depends on the norm used, the solution, the family of finite element spaces, and the type of mesh. In particular, the approximation property of the pointwise divergence-free subspace plays a key role. With such an optimal approximation property and with an appropriate choice of the stabilization parameter, estimates for the $H^1(\Omega)$ error of the velocity are obtained that do not directly depend on the viscosity and the pressure. The minimization of the $L^2(\Omega)$ error of the pressure requires in many cases smaller stabilization parameters than the minimization of the $H^1(\Omega)$ velocity error. Altogether, depending on the situation, the optimal stabilization parameter could range from very small to very large. The analytic results are supported by numerical examples. Applying the analysis to the MINI element leads to proposals for the stabilization parameter which seem to be new.

118 citations


Journal ArticleDOI
TL;DR: This work describes the construction of four different quadratures which handle logarithmically-singular kernels, and compares in numerical experiments the convergence of the four schemes in various settings, including low- and high-frequency planar Helmholtz problems, and 3D axisymmetric Laplace problems.
Abstract: Boundary integral equations and Nyström discretization provide a powerful tool for the solution of Laplace and Helmholtz boundary value problems. However, often a weakly-singular kernel arises, in which case specialized quadratures that modify the matrix entries near the diagonal are needed to reach a high accuracy. We describe the construction of four different quadratures which handle logarithmically-singular kernels. Only smooth boundaries are considered, but some of the techniques extend straightforwardly to the case of corners. Three are modifications of the global periodic trapezoid rule, due to Kapur-Rokhlin, to Alpert, and to Kress. The fourth is a modification to a quadrature based on Gauss-Legendre panels due to Kolm-Rokhlin; this formulation allows adaptivity. We compare in numerical experiments the convergence of the four schemes in various settings, including low- and high-frequency planar Helmholtz problems, and 3D axisymmetric Laplace problems. We also find striking differences in performance in an iterative setting. We summarize the relative advantages of the schemes.
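Three of the four schemes are corrections of the global periodic trapezoid rule, whose defining virtue is spectral accuracy for smooth periodic integrands. A minimal check of that baseline property (the singular-kernel corrections themselves are not reproduced here):

```python
import numpy as np
from scipy.special import i0

# The periodic trapezoid rule on a smooth 2*pi-periodic integrand:
# integrate exp(cos(t)) over [0, 2*pi); the exact value is 2*pi*I_0(1).
exact = 2.0 * np.pi * i0(1.0)
for n in (4, 8, 16, 32):
    t = 2.0 * np.pi * np.arange(n) / n
    approx = (2.0 * np.pi / n) * np.exp(np.cos(t)).sum()
    print(n, abs(approx - exact))
# The error decays faster than any power of 1/n. The Kapur-Rokhlin,
# Alpert, and Kress rules modify weights near the diagonal to recover
# comparable accuracy when a log-singular kernel is present.
```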

112 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the main trends in the domain of uncertainty principles and localization, highlight their mutual connections, and investigate practical consequences, emphasizing relations with sparse approximation and coding problems, where significant advances have been made recently.
Abstract: The goal of this paper is to review the main trends in the domain of uncertainty principles and localization, highlight their mutual connections and investigate practical consequences. The discussion is strongly oriented towards, and motivated by, signal processing problems, in which significant advances have been made recently. Relations with sparse approximation and coding problems are emphasized.

68 citations


Journal ArticleDOI
TL;DR: This paper develops methods for numerically solving optimization problems over spaces of geodesics using numerical integration of Jacobi fields and second order derivatives of geodesic families, and exemplifies the differences between the linearized and exact algorithms caused by the non-linear geometry.
Abstract: In fields ranging from computer vision to signal processing and statistics, increasing computational power allows a move from classical linear models to models that incorporate non-linear phenomena. This shift has created interest in computational aspects of differential geometry, and solving optimization problems that incorporate non-linear geometry constitutes an important computational task. In this paper, we develop methods for numerically solving optimization problems over spaces of geodesics using numerical integration of Jacobi fields and second order derivatives of geodesic families. As an important application of this optimization strategy, we compute exact Principal Geodesic Analysis (PGA), a non-linear version of the PCA dimensionality reduction procedure. By applying the exact PGA algorithm to synthetic data, we exemplify the differences between the linearized and exact algorithms caused by the non-linear geometry. In addition, we use the numerically integrated Jacobi fields to determine sectional curvatures and provide upper bounds for injectivity radii.
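The "linearized" algorithm that exact PGA is compared against maps data to a tangent space and runs ordinary PCA there. A hedged sketch on the unit sphere $S^2$ (fixed base point rather than an intrinsic mean, synthetic great-circle data):

```python
import numpy as np

# Linearized PGA on S^2: pull data back to the tangent space at a base
# point with the spherical log map, then apply ordinary PCA.
def sphere_log(p, q):
    """Log map on S^2: tangent vector at p pointing toward q."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros(3)
    return theta / np.sin(theta) * (q - c * p)

rng = np.random.default_rng(0)
p = np.array([0.0, 0.0, 1.0])                    # base point (north pole)
angles = rng.uniform(-1.0, 1.0, 50)
# synthetic data on a great circle through p -> intrinsically 1D
data = np.stack([np.sin(angles), np.zeros(50), np.cos(angles)], axis=1)

V = np.stack([sphere_log(p, q) for q in data])   # tangent vectors at p
_, s, _ = np.linalg.svd(V - V.mean(axis=0), full_matrices=False)
explained = s[0] ** 2 / (s ** 2).sum()
print(explained)    # 1.0: a single principal geodesic explains the data
```

On curved data that does not lie on a geodesic, the linearized and exact answers differ, which is precisely the gap the paper quantifies with Jacobi fields.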

43 citations


Journal ArticleDOI
TL;DR: A novel predictor-corrector method for the numerical solution of fractional ordinary differential equations, based on polynomial interpolation and Gauss-Lobatto quadrature with respect to the Jacobi weight function.
Abstract: We present a novel predictor-corrector method, called the Jacobian-predictor-corrector approach, for the numerical solution of fractional ordinary differential equations, which is based on polynomial interpolation and Gauss-Lobatto quadrature with respect to the Jacobi weight function $\omega(s)=(1-s)^{\alpha-1}(1+s)^{0}$. This method has computational cost $O(N_E)$ and convergence order $N_I$, where $N_E$ and $N_I$ are, respectively, the total number of computational steps and the number of interpolation points used. A detailed error analysis is performed, and extensive numerical experiments confirm the theoretical results and show the robustness of the method.
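Quadrature with respect to the Jacobi weight $\omega(s)=(1-s)^{\alpha-1}(1+s)^{0}$ is available off the shelf. A sketch using SciPy's Gauss-Jacobi nodes (the paper uses the Gauss-Lobatto variant, which additionally fixes the interval endpoints as nodes):

```python
import numpy as np
from scipy.special import roots_jacobi

# Gauss-Jacobi rule for the weight w(s) = (1-s)^(alpha-1) * (1+s)^0.
alpha = 1.5
nodes, weights = roots_jacobi(6, alpha - 1.0, 0.0)

# An n-point rule is exact for polynomials of degree <= 2n-1; check f = 1:
# the integral of (1-s)^(alpha-1) over [-1, 1] equals 2^alpha / alpha.
approx = weights.sum()            # sum of w_i * f(s_i) with f = 1
exact = 2.0 ** alpha / alpha
print(abs(approx - exact))        # ~ machine precision
```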

42 citations


Journal ArticleDOI
TL;DR: It is proved that with an appropriate rescaling of the variables, both the original and the “missing” Wendland functions converge uniformly to a Gaussian as the smoothness parameter approaches infinity.
Abstract: The Wendland functions are a class of compactly supported radial basis functions with a user-specified smoothness parameter. We prove that with an appropriate rescaling of the variables, both the original and the "missing" Wendland functions converge uniformly to a Gaussian as the smoothness parameter approaches infinity. We also explore the convergence numerically with Wendland functions of different smoothness.
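The limit theorem concerns a specific rescaling as the smoothness parameter grows; as a small hedged sketch, here are the standard closed forms of the original Wendland functions $\phi_{3,k}$, $k=0,1,2$ (the space dimension 3 family), exhibiting the unit compact support and unit value at the origin:

```python
import numpy as np

# Standard closed forms of the original Wendland functions on R^3 for
# smoothness parameters k = 0, 1, 2 (a sketch for exploring the family;
# the paper's Gaussian limit requires its particular rescaling).
def wendland(r, k):
    r = np.asarray(r, dtype=float)
    plus = np.maximum(1.0 - r, 0.0)          # truncated power (1-r)_+
    if k == 0:
        return plus ** 2
    if k == 1:
        return plus ** 4 * (4.0 * r + 1.0)
    if k == 2:
        return plus ** 6 * (35.0 * r ** 2 + 18.0 * r + 3.0) / 3.0
    raise ValueError("k must be 0, 1 or 2")

r = np.linspace(0.0, 1.5, 7)
for k in range(3):
    phi = wendland(r, k)
    print(k, phi[0], phi[-1])   # phi(0) = 1; compactly supported on [0, 1]
```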

38 citations


Journal ArticleDOI
TL;DR: It is proved that a trigonometric Galerkin discretization applied to the periodized integral equation converges with optimal order to the solution of the transverse magnetic (TM) scattering problem.
Abstract: Transverse magnetic (TM) scattering of an electromagnetic wave from a periodic dielectric diffraction grating can mathematically be described by a volume integral equation. This volume integral equation, however, in general fails to feature a weakly singular integral operator. Nevertheless, after a suitable periodization, the involved integral operator can be efficiently evaluated on trigonometric polynomials using the fast Fourier transform (FFT) and iterative methods can be used to solve the integral equation. Using Fredholm theory, we prove that a trigonometric Galerkin discretization applied to the periodized integral equation converges with optimal order to the solution of the scattering problem. The main advantage of this FFT-based discretization scheme is that the resulting numerical method is particularly easy to implement, avoiding for instance the need to evaluate quasiperiodic Green's functions.
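The mechanism behind the fast evaluation is that a periodic convolution operator acts diagonally on trigonometric polynomials, so its application costs $O(n\log n)$ via the FFT. A minimal sketch with a generic smooth kernel (not the paper's periodized volume-potential kernel):

```python
import numpy as np

# Apply a periodic convolution operator two ways and compare.
n = 64
t = 2.0 * np.pi * np.arange(n) / n
kernel = np.exp(np.cos(t))                  # smooth 2*pi-periodic kernel
u = np.sin(3.0 * t) + 0.5 * np.cos(t)       # trigonometric polynomial

# Direct: (K u)(t_j) = (2*pi/n) * sum_k kernel(t_j - t_k) * u(t_k)
direct = (2.0 * np.pi / n) * np.array(
    [np.sum(np.exp(np.cos(tj - t)) * u) for tj in t]
)
# FFT: circular convolution diagonalizes in Fourier space, O(n log n)
via_fft = (2.0 * np.pi / n) * np.real(
    np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(u))
)
print(np.max(np.abs(direct - via_fft)))     # ~ machine precision
```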

36 citations


Journal ArticleDOI
TL;DR: In this paper, a nonlinear functional equation with a strict monotonicity property for functions f on Banach spaces is solved, and the Van Cittert iteration is shown to converge in $\ell^p$ with exponential rate.
Abstract: Let $1 \le p \le \infty$. In this paper, we consider solving a nonlinear functional equation $f(x) = y$, where $x, y$ belong to $\ell^p$ and $f$ has continuous bounded gradient in an inverse-closed subalgebra of $\mathcal{B}(\ell^2)$, the Banach algebra of all bounded linear operators on the Hilbert space $\ell^2$. We introduce a strict monotonicity property for functions $f$ on the Banach spaces $\ell^p$ so that the above nonlinear functional equation is solvable and the solution $x$ depends continuously on the given data $y$ in $\ell^p$. We show that the Van Cittert iteration converges in $\ell^p$ with exponential rate and hence it can be used to locate the true solution of the above nonlinear functional equation. We apply the above theory to handle two problems in signal processing: nonlinear sampling with instantaneous companding and subsequent average sampling; and local identification of innovation positions and qualification of amplitudes of signals with finite rate of innovation.
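The exponential convergence of a Van Cittert-type iteration for a strictly monotone map is easy to see in a toy setting. A hedged sketch with a hypothetical entrywise companding nonlinearity (not the paper's sampling operator):

```python
import numpy as np

# Van Cittert-type iteration x_{n+1} = x_n + mu * (y - f(x_n)) for a
# strictly monotone map, here a toy instantaneous companding
# f(x) = x + 0.3*tanh(x) applied entrywise to a finite sequence.
def f(x):
    return x + 0.3 * np.tanh(x)

rng = np.random.default_rng(1)
x_true = rng.standard_normal(100)
y = f(x_true)

# f' ranges over [m, M] = [1, 1.3]; mu = 2/(m + M) gives a contraction
# with rate (M - m)/(M + m), hence exponential (linear) convergence.
mu = 2.0 / (1.0 + 1.3)
x = np.zeros_like(y)
for _ in range(60):
    x = x + mu * (y - f(x))
print(np.max(np.abs(x - x_true)))   # converges to machine precision
```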

31 citations


Journal ArticleDOI
TL;DR: This paper focuses on the connection between order of convergence and Hamiltonian deviation for multivalue methods, and derives a semi-implicit GLM that is competitive with symplectic Runge-Kutta methods, which are notoriously implicit.
Abstract: It is the purpose of this paper to consider the use of General Linear Methods (GLMs) as geometric numerical solvers for the treatment of Hamiltonian problems. Indeed, even if the numerical flow generated by a GLM cannot be symplectic, we exploit here a concept of near conservation for such methods which, properly combined with other desirable features (such as symmetry and boundedness of parasitic components), allows one to achieve an accurate conservation of the Hamiltonian. In this paper we focus our attention on the connection between order of convergence and Hamiltonian deviation for multivalue methods. Moreover, we derive a semi-implicit GLM that is competitive with symplectic Runge-Kutta methods, which are notoriously implicit.

31 citations


Journal ArticleDOI
TL;DR: The analysis includes the so-called Mean Square Error (MSE) and the Benedetto-Fickus frame potential, and characterizes the cases of equality in Lidskii's inequality from matrix theory.
Abstract: Given a finite sequence of vectors $\mathcal{F}_0$ in $\mathbb{C}^d$, we describe the spectral and geometrical structure of optimal frame completions of $\mathcal{F}_0$ obtained by appending a finite sequence of vectors with prescribed norms, where optimality is measured with respect to a general convex potential. In particular, our analysis includes the so-called Mean Square Error (MSE) and the Benedetto-Fickus frame potential. As a first step, we reduce the problem of finding the optimal completions to the computation of the minimum of a convex function on a convex compact polytope in $\mathbb{R}^d$. As a second step, we show that there exists a finite set (that can be explicitly computed in terms of a finite-step algorithm depending on $\mathcal{F}_0$ and the sequence of prescribed norms) such that the optimal frame completions with respect to a given convex potential can be described in terms of a distinguished element of this set. As a byproduct we characterize the cases of equality in Lidskii's inequality from matrix theory.

Journal ArticleDOI
TL;DR: This article defines a new class of Pythagorean-Hodograph curves built upon a six-dimensional mixed algebraic-trigonometric space, shows their fundamental properties and compares them with their well-known quintic polynomial counterpart.
Abstract: In this article we define a new class of Pythagorean-Hodograph curves built upon a six-dimensional mixed algebraic-trigonometric space, show their fundamental properties and compare them with their well-known quintic polynomial counterpart. A complex representation for these curves is introduced and constructive approaches are provided to solve different application problems, such as interpolating $C^1$ Hermite data and constructing spirals as $G^2$ transition elements between a line segment and a circle, as well as between a pair of external circles.

Journal ArticleDOI
TL;DR: The paper promotes a numerical approach for the efficient calculation of good approximations to the dual Gabor atom for general lattices, including the non-separable ones.
Abstract: Regular Gabor frames for $L^2(\mathbb{R}^d)$ are obtained by applying time-frequency shifts from a lattice $\Lambda \vartriangleleft \mathbb{R}^d \times \widehat{\mathbb{R}}^d$ to some decent so-called Gabor atom $g$, which typically is something like a summability kernel in classical analysis, or a Schwartz function, or more generally some $g \in S_0(\mathbb{R}^d)$. There is always a canonical dual frame, generated by the dual Gabor atom $\widetilde{g}$. The paper promotes a numerical approach for the efficient calculation of good approximations to the dual Gabor atom for general lattices, including the non-separable ones (different from $a\mathbb{Z}^d \times b\mathbb{Z}^d$). The theoretical foundation for the approach is the well-known Wexler-Raz biorthogonality relation and the more recent theory of localized frames. The combination of these principles guarantees that the dual Gabor atom can be approximated by a linear combination of a few time-frequency shifted atoms from the adjoint lattice $\Lambda^{\circ}$. The effectiveness of this approach is justified by a new theoretical argument and demonstrated by numerical examples.

Journal ArticleDOI
TL;DR: This paper demonstrates that a modified version of the accelerated nested dissection scheme can execute any solve beyond the first in $O(N_{\mathrm{boundary}})$ operations, where $N_{\mathrm{boundary}}$ denotes the number of points on the boundary.
Abstract: The large sparse linear systems arising from the finite element or finite difference discretization of elliptic PDEs can be solved directly via, e.g., nested dissection or multifrontal methods. Such techniques reorder the nodes in the grid to reduce the asymptotic complexity of Gaussian elimination from $O(N^2)$ to $O(N^{1.5})$ for typical problems in two dimensions. It has recently been demonstrated that the complexity can be further reduced to $O(N)$ by exploiting structure in the dense matrices that arise in such computations (using, e.g., $\mathcal{H}$-matrix arithmetic). This paper demonstrates that such accelerated nested dissection techniques become particularly effective for boundary value problems without body loads when the solution is sought for several different sets of boundary data, and the solution is required only near the boundary (as happens, e.g., in the computational modeling of scattering problems, or in engineering design of linearly elastic solids). In this case, a modified version of the accelerated nested dissection scheme can execute any solve beyond the first in $O(N_{\mathrm{boundary}})$ operations, where $N_{\mathrm{boundary}}$ denotes the number of points on the boundary. Typically, $N_{\mathrm{boundary}} \sim N^{0.5}$. Numerical examples demonstrate the effectiveness of the procedure for a broad range of elliptic PDEs that includes both the Laplace and Helmholtz equations.
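The "factor once, solve many right-hand sides cheaply" idea can be illustrated in its simplest form with a sparse LU factorization (the paper's scheme goes much further, compressing the factorization so that repeat solves touch only boundary unknowns):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# 2D 5-point Laplacian on an n x n interior grid, Dirichlet conditions
# folded into the right-hand side.
n = 40
I = sp.identity(n, format="csr")
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = sp.kron(I, T) + sp.kron(T, I)

lu = splu(A.tocsc())                     # factor once (the costly step)

rng = np.random.default_rng(2)
for _ in range(3):                       # many sets of boundary data
    b = rng.standard_normal(n * n)       # each folded into a new rhs
    x = lu.solve(b)                      # each extra solve is cheap
    print(np.max(np.abs(A @ x - b)))     # ~ machine precision
```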

Journal ArticleDOI
TL;DR: A constructive approach is provided that, for any assigned degree of polynomial reproduction, continuity order, and support width, allows for generating the fundamental spline functions of minimum degree having the desired properties.
Abstract: In this paper we consider the problem of designing piecewise polynomial local interpolants of non-uniformly spaced data. We provide a constructive approach that, for any assigned degree of polynomial reproduction, continuity order, and support width, allows for generating the fundamental spline functions of minimum degree having the desired properties. Finally, the proposed construction is extended to handle open sets of data and to the case of multiple knots.

Journal ArticleDOI
TL;DR: A computationally realizable approach for the construction of an approximate dual wavelet frame on the real line obtained by appropriate translation and dilation of a single given atom is presented.
Abstract: This paper presents a computationally realizable approach for the construction of an approximate dual wavelet frame. We consider wavelet frames on the real line obtained by appropriate translation and dilation of a single given atom. We show asymptotic results in operator norm. We also present numerical results to demonstrate the realizability of the approximation.

Journal ArticleDOI
TL;DR: A simple and accurate algorithm to evaluate the Hilbert transform of a real function is proposed using interpolation with piecewise-linear functions; an appropriate matrix representation reduces the complexity to that of matrix-vector multiplication.
Abstract: A simple and accurate algorithm to evaluate the Hilbert transform of a real function is proposed using interpolation with piecewise-linear functions. An appropriate matrix representation reduces the complexity of this algorithm to the complexity of matrix-vector multiplication. Since the core matrix is an antisymmetric Toeplitz matrix, the discrete trigonometric transform can be exploited to calculate the matrix-vector multiplication with a reduction of the complexity to $O(N \log N)$, with $N$ being the dimension of the core matrix. This algorithm was originally envisaged for self-consistent simulations of radio-frequency wave propagation and absorption in fusion plasmas.
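The $O(N\log N)$ route rests on the fact that the Hilbert transform is a Fourier multiplier. A sketch of the discrete transform for a real periodic signal via the multiplier $-i\,\mathrm{sgn}(k)$ (a generic FFT realization, not the paper's piecewise-linear Toeplitz scheme):

```python
import numpy as np

# Discrete Hilbert transform of a real 2*pi-periodic signal via FFT.
def hilbert_periodic(x):
    n = len(x)
    X = np.fft.fft(x)
    k = np.fft.fftfreq(n, d=1.0 / n)       # integer mode numbers
    X *= -1j * np.sign(k)                  # sign(0) = 0 removes the mean
    return np.real(np.fft.ifft(X))

# Sanity check: the Hilbert transform maps cos(t) to sin(t).
t = 2.0 * np.pi * np.arange(256) / 256
print(np.max(np.abs(hilbert_periodic(np.cos(t)) - np.sin(t))))
# ~ machine precision
```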

Journal ArticleDOI
TL;DR: This work proposes several algorithms which provide optimal windows for a user-selected TF pattern with respect to different concentration criteria, basing the optimization on $\ell^p$-norms as measures of TF spreading.
Abstract: Gabor analysis is one of the most common instances of time-frequency signal analysis. Choosing a suitable window for the Gabor transform of a signal is often a challenge for practical applications, in particular in audio signal processing. Many time-frequency (TF) patterns of different shapes may be present in a signal and they cannot all be sparsely represented in the same spectrogram. We propose several algorithms, which provide optimal windows for a user-selected TF pattern with respect to different concentration criteria. We base our optimization algorithm on $\ell^p$-norms as measures of TF spreading. For a given number of sampling points in the TF plane we also propose optimal lattices to be used with the obtained windows. We illustrate the potential of the method on selected numerical examples.

Journal ArticleDOI
TL;DR: This work analyzes the nature of variations in quadrature formulas of highly variable quality on a sphere, and describes an easy-to-implement least-squares remedy for previously problematic cases.
Abstract: It has been suggested in the literature that different quasi-uniform node sets on a sphere lead to quadrature formulas of highly variable quality. We analyze here the nature of these variations, and describe an easy-to-implement least-squares remedy for previously problematic cases. Quadrature accuracies are then compared for different node sets ranging from fully random to those based on Gaussian quadrature concepts.
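The least-squares remedy can be sketched in miniature: demand that the weights integrate a small polynomial space exactly and take the minimum-norm solution. A hedged toy version with degree-2 polynomials on $S^2$ (the paper works with much larger spherical-harmonic bases):

```python
import numpy as np

# Least-squares quadrature weights for scattered nodes on the sphere.
rng = np.random.default_rng(3)
N = 200
X = rng.standard_normal((N, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # random nodes on S^2
x, y, z = X.T

# Rows: monomials of degree <= 2 at the nodes; b: their exact integrals
# over S^2 (area 4*pi; each squared coordinate integrates to 4*pi/3).
G = np.stack([np.ones(N), x, y, z, x * x, y * y, z * z,
              x * y, x * z, y * z])
b = np.array([4 * np.pi, 0, 0, 0,
              4 * np.pi / 3, 4 * np.pi / 3, 4 * np.pi / 3, 0, 0, 0])
w = np.linalg.lstsq(G, b, rcond=None)[0]        # minimum-norm weights

f = 1.0 + x + y * y                             # in the exactness space
print(abs(w @ f - (4 * np.pi + 4 * np.pi / 3)))  # ~ machine precision
```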

Journal ArticleDOI
TL;DR: The proposed collocation method possesses spectral accuracy in both space and time directions, and the main ideas and techniques developed provide an efficient framework for the collocation treatment of various nonlinear problems.
Abstract: In this paper, we propose a collocation method for an initial-boundary value problem of the generalized nonlinear Klein-Gordon equation. It possesses spectral accuracy in both space and time directions. The numerical results indicate the high accuracy and the stability of long-time calculations of the suggested algorithm, even for moderate numbers of modes in the spatial approximation and large time step sizes. The main ideas and techniques developed in this work provide an efficient framework for the collocation treatment of various nonlinear problems.

Journal ArticleDOI
TL;DR: The design of algorithms for interpolating discrete data by weighted $C^1$ quadratic splines in such a way that the monotonicity and convexity of the data are preserved is discussed.
Abstract: In this paper we discuss the design of algorithms for interpolating discrete data by using weighted $C^1$ quadratic splines in such a way that the monotonicity and convexity of the data are preserved. The analysis culminates in two algorithms with automatic selection of the shape control parameters: one to preserve the data monotonicity and the other to retain the data convexity. Weighted $C^1$ quadratic B-splines and control point approximation are also considered.

Journal ArticleDOI
TL;DR: These schemes are extended to solve two-dimensional systems of hyperbolic conservation laws on curvilinear grids for non-Cartesian domains to obtain similar advantageous properties as those of the hybrid WENO schemes on uniform grids for Cartesian domains.
Abstract: In [J. Comput. Phys. 229 (2010) 8105-8129], we studied hybrid weighted essentially non-oscillatory (WENO) schemes with different indicators for hyperbolic conservation laws on uniform grids for Cartesian domains. In this paper, we extend the schemes to solve two-dimensional systems of hyperbolic conservation laws on curvilinear grids for non-Cartesian domains. Our goal is to obtain similar advantageous properties as those of the hybrid WENO schemes on uniform grids for Cartesian domains. Extensive numerical results strongly support that the hybrid WENO schemes with discontinuity indicators on curvilinear grids can also save considerably on computational cost in contrast to the pure WENO schemes. They also maintain the essentially non-oscillatory property for general solutions with discontinuities and keep the sharp shock transition.
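The building block of such schemes is the classical fifth-order WENO-JS reconstruction of an interface value from cell averages. A 1D sketch on a uniform periodic grid (smooth data, so the nonlinear weights should recover fifth-order accuracy; the hybrid indicator logic and curvilinear mapping are not reproduced):

```python
import numpy as np

def weno5_face(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Fifth-order WENO-JS value at the right cell face x_{i+1/2}."""
    # smoothness indicators of the three candidate stencils
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 1/4*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 1/4*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 1/4*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights built from the ideal weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    # third-order candidate reconstructions on the three stencils
    p0 = (2*vm2 - 7*vm1 + 11*v0)/6
    p1 = (-vm1 + 5*v0 + 2*vp1)/6
    p2 = (2*v0 + 5*vp1 - vp2)/6
    return (a0*p0 + a1*p1 + a2*p2)/s

n = 80
h = 2*np.pi/n
xc = h*np.arange(n)                       # cell centers, periodic grid
vbar = np.sin(xc)*2*np.sin(h/2)/h         # exact cell averages of sin
face = weno5_face(np.roll(vbar, 2), np.roll(vbar, 1), vbar,
                  np.roll(vbar, -1), np.roll(vbar, -2))
err = np.max(np.abs(face - np.sin(xc + h/2)))
print(err)                                # O(h^5) on smooth data
```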

Journal ArticleDOI
TL;DR: A generic signal transform is described as a procedure of measuring the content of a signal at different values of a set of given physical quantities, a viewpoint that sheds light on the relationship between signal transforms and uncertainty principles.
Abstract: The motivation to this paper stems from signal/image processing where it is desired to measure various attributes or physical quantities such as position, scale, direction and frequency of a signal or an image. These physical quantities are measured via a signal transform, for example, the short time Fourier transform measures the content of a signal at different times and frequencies. There are well known obstructions for completely accurate measurements formulated as "uncertainty principles". It has been shown recently that "conventional" localization notions, based on variances associated with Lie-group generators and their corresponding uncertainty inequality might be misleading, if they are applied to transformation groups which differ from the Heisenberg group, the latter being prevailing in signal analysis and quantum mechanics. In this paper we describe a generic signal transform as a procedure of measuring the content of a signal at different values of a set of given physical quantities. This viewpoint sheds light on the relationship between signal transforms and uncertainty principles. In particular we introduce the concepts of "adjoint translations" and "adjoint observables". We show that the fundamental issue of interest is the measurement of physical quantities via the appropriate localization operators termed "adjoint observables". It is shown how one can define, for each localization operator, a family of related "adjoint translation" operators that translate the spectrum of that localization operator. The adjoint translations in the examples of this paper correspond to well-known transformations in signal processing such as the short time Fourier transform (STFT), the continuous wavelet transform (CWT) and the shearlet transform. We show how the means and variances of states transform appropriately under the translation action and compute associated minimizers and equalizers for the uncertainty criterion.
Finally, the concept of adjoint observables is used to estimate concentration properties of ambiguity functions, the latter being an alternative localization concept frequently used in signal analysis.

Journal ArticleDOI
TL;DR: Concepts and algorithms developed within the UNLocX project are applied to MALDI Imaging data, and so-called soft-segmentation maps, obtained by non-negative matrix factorization incorporating sparsity constraints, are introduced.
Abstract: This article does not present new mathematical results, it solely aims at discussing some numerical experiments with MALDI Imaging data. However, these experiments are based on and could not be done without the mathematical results obtained in the UNLocX project. They tackle two obstacles which presently prevent clinical routine applications of MALDI Imaging technology. In the last decade, matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI-IMS) has developed into a powerful bioanalytical imaging modality. MALDI imaging data consists of a set of mass spectra, which are measured at different locations of a flat tissue sample. Hence, this technology is capable of revealing the full metabolic structure of the sample under investigation. Sampling resolution as well as spectral resolution is constantly increasing; a conventional 2D MALDI imaging dataset presently requires up to 100 GB. A major challenge towards routine applications of MALDI Imaging in pharmaceutical or medical workflows is the high computational cost for evaluating and visualizing the information content of MALDI imaging data. This becomes even more critical in the near future when considering cohorts or 3D applications. Due to its size and complexity, MALDI Imaging constitutes a challenging test case for high performance signal processing. In this article we apply concepts and algorithms developed within the UNLocX project to MALDI Imaging data. In particular we discuss a suitable phase space model for such data and report on implementations of the resulting transform coders using GPU technology. Within the MALDI Imaging workflow this leads to an efficient baseline removal and peak picking. The final goal of data processing in MALDI Imaging is the discrimination of regions having different metabolic structures.
We introduce and discuss so-called soft-segmentation maps which are obtained by non-negative matrix factorization incorporating sparsity constraints.

Journal ArticleDOI
TL;DR: The method allows for high accuracy numerical solutions of parabolic equations on the unit sphere to be constructed in parallel, and establishes $L^2$ error estimates for smooth and nonsmooth initial data.
Abstract: We propose a method to construct numerical solutions of parabolic equations on the unit sphere. The time discretization uses Laplace transforms and quadrature. The spatial approximation of the solution employs radial basis functions restricted to the sphere. The method allows us to construct high accuracy numerical solutions in parallel. We establish $L^2$ error estimates for smooth and nonsmooth initial data, and describe some numerical experiments.

Journal ArticleDOI
TL;DR: A two-level method based on Newton's iteration is studied for the nonlinear system arising from the Galerkin finite element approximation to the equations of motion described by the Kelvin-Voigt viscoelastic fluid flow model.
Abstract: In this paper, we study a two-level method based on Newton's iteration for the nonlinear system arising from the Galerkin finite element approximation to the equations of motion described by the Kelvin-Voigt viscoelastic fluid flow model. The two-grid algorithm is based on three steps. In the first step, the nonlinear system is solved on a coarse mesh $\mathcal{T}_H$ to obtain an approximate solution $u_H$. In the second step, the nonlinear system is linearized around $u_H$ based on Newton's iteration and the linear system is solved on a finer mesh $\mathcal{T}_h$. Finally, in the third step, a correction to the results obtained in the second step is achieved by solving a linear problem with a different right-hand side on $\mathcal{T}_h$. Optimal error estimates are established for the velocity in the $L^\infty(L^2)$-norm, when $h=\mathcal{O}(H^{2-\delta})$, and in the $L^\infty(H^1)$-norm, when $h=\mathcal{O}(H^{5-2\delta})$, and for the pressure in the $L^\infty(L^2)$-norm, when $h=\mathcal{O}(H^{5-2\delta})$, where $\delta>0$ is arbitrarily small in two dimensions and $\delta=\frac{1}{2}$ in three dimensions.

Journal ArticleDOI
TL;DR: It is shown that the quasi-Hermitian splitting can induce accurate, robust and effective preconditioned Krylov subspace methods.
Abstract: We present a nested splitting conjugate gradient iteration method for solving the large sparse continuous Sylvester equation, in which both coefficient matrices are (non-Hermitian) positive semi-definite, and at least one of them is positive definite. This method consists of inner/outer iterations: it employs the Sylvester conjugate gradient method as the inner iteration to approximate each outer iterate, while each outer iteration is induced by a convergent and Hermitian positive definite splitting of the coefficient matrices. Convergence conditions of this method are studied and numerical experiments show its efficiency. In addition, we show that the quasi-Hermitian splitting can induce accurate, robust and effective preconditioned Krylov subspace methods.
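The Sylvester operator $X \mapsto AX + XB$ can be applied matrix-free, which is what makes Krylov methods attractive here. A plain CG sketch for the SPD case (the paper's nested splitting CG adds inner/outer splitting iterations and preconditioning on top of this idea):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Solve AX + XB = C with SPD A and B via CG on the vectorized operator.
rng = np.random.default_rng(4)
n, m = 30, 20
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                  # SPD
B = rng.standard_normal((m, m))
B = B @ B.T + m * np.eye(m)                  # SPD
X_true = rng.standard_normal((n, m))
C = A @ X_true + X_true @ B

# vec(AX + XB) is symmetric positive definite when A and B are SPD,
# so plain CG applies; the matvec never forms the (n*m) x (n*m) matrix.
op = LinearOperator((n * m, n * m), dtype=np.float64,
                    matvec=lambda v: (A @ v.reshape(n, m)
                                      + v.reshape(n, m) @ B).ravel())
x, info = cg(op, C.ravel(), maxiter=500)
X = x.reshape(n, m)
print(info, np.linalg.norm(A @ X + X @ B - C))   # 0 = converged
```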

Journal ArticleDOI
TL;DR: An adaptive finite element scheme for the fully non-linear incompressible Navier-Stokes equations is proposed, and a residual a posteriori error estimator is shown to be reliable and efficient.
Abstract: This work proposes and analyses an adaptive finite element scheme for the fully non-linear incompressible Navier-Stokes equations. A residual a posteriori error estimator is shown to be reliable and efficient. The error estimator relies on a Residual Local Projection (RELP) finite element method for which we prove well-posedness under mild conditions. Several well-established numerical tests assess the theoretical results.

Journal ArticleDOI
TL;DR: In this paper, the Laplace-Beltrami equation on the unit sphere in the presence of multiple "islands" is solved iteratively using the fast multipole method for the 2D Coulomb potential in order to calculate the matrixvector products.
Abstract: Integral equation methods for solving the Laplace-Beltrami equation on the unit sphere in the presence of multiple "islands" are presented. The surface of the sphere is first mapped to a multiply-connected region in the complex plane via a stereographic projection. After discretizing the integral equation, the resulting dense linear system is solved iteratively using the fast multipole method for the 2D Coulomb potential in order to calculate the matrix-vector products. This numerical scheme requires only O(N) operations, where N is the number of nodes in the discretization of the boundary. The performance of the method is demonstrated on several examples.
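The first step of the scheme, the stereographic projection to the complex plane, is compact enough to write down directly. A sketch of the projection from the north pole and its inverse, with a round-trip check:

```python
import numpy as np

# Stereographic projection from the north pole: the sphere minus the
# pole maps bijectively onto the complex plane.
def to_plane(p):
    x, y, z = p
    return (x + 1j * y) / (1.0 - z)

def to_sphere(w):
    d = 1.0 + np.abs(w) ** 2
    return np.array([2.0 * w.real / d,
                     2.0 * w.imag / d,
                     (np.abs(w) ** 2 - 1.0) / d])

rng = np.random.default_rng(5)
p = rng.standard_normal(3)
p /= np.linalg.norm(p)
if p[2] > 0.99:                 # stay away from the projection pole
    p[2] = -p[2]
print(np.max(np.abs(to_sphere(to_plane(p)) - p)))   # ~ machine precision
```

On the projected domain, the Laplace-Beltrami problem becomes a planar one, which is what lets the 2D Coulomb FMM drive the iterative solve.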

Journal ArticleDOI
TL;DR: The analysis shows that the convergence behavior of $\ell^1$-quantile regression with Gaussian kernels is almost the same as that of the RKHS-based learning schemes.
Abstract: The quantile regression problem is considered by learning schemes based on $\ell^1$-regularization and Gaussian kernels. The purpose of this paper is to present concentration estimates for the algorithms. Our analysis shows that the convergence behavior of $\ell^1$-quantile regression with Gaussian kernels is almost the same as that of the RKHS-based learning schemes. Furthermore, the previous analysis for kernel-based quantile regression usually requires that the output sample values be uniformly bounded, which excludes the common case with Gaussian noise. The error analysis presented in this paper gives satisfactory convergence rates even for unbounded sampling processes. Besides, numerical experiments are given which support the theoretical results.