Showing papers in "Numerical Mathematics: Theory, Methods and Applications" in 2017


Journal ArticleDOI
TL;DR: A sparse structure of a set of basis-related coefficients is discovered, which allows the computation of the collision operator to be accelerated; regularity of the solution of the Boltzmann equation in the random space and an accuracy result for the stochastic Galerkin method are proved in multi-dimensional cases.
Abstract: We propose a stochastic Galerkin method using sparse wavelet bases for the Boltzmann equation with multi-dimensional random inputs. The method uses locally supported piecewise polynomials as an orthonormal basis of the random space. By a sparse approach, only a moderate number of basis functions is required to achieve good accuracy in multi-dimensional random spaces. We discover a sparse structure of a set of basis-related coefficients, which allows us to accelerate the computation of the collision operator. Regularity of the solution of the Boltzmann equation in the random space and an accuracy result for the stochastic Galerkin method are proved in multi-dimensional cases. The efficiency of the method is illustrated by numerical examples with uncertainties from the initial data, boundary data and collision kernel.
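
As a rough, purely illustrative companion to the "sparse approach" mentioned above (not the paper's wavelet construction), the sketch below compares the size of a full tensor-product multi-index set with a level-sum-truncated sparse set; the dimension range and the level bound L = 4 are assumptions.

```python
# Illustrative only: count hierarchical multi-indices kept by a full
# tensor-product truncation versus a sparse (level-sum) truncation.
# The bound L and the truncation rule are hypothetical stand-ins for the
# paper's sparse wavelet selection.
from itertools import product

def count_indices(d, L):
    indices = list(product(range(L + 1), repeat=d))
    full = len(indices)                                   # max-level rule: (L+1)^d
    sparse = sum(1 for idx in indices if sum(idx) <= L)   # level-sum rule
    return full, sparse

for d in (2, 3, 4):
    full, sparse = count_indices(d, L=4)
    print(f"d={d}: full tensor-product set = {full:4d}, sparse set = {sparse:3d}")
```

The only point of the comparison is that the sparse count grows much more slowly with the dimension d, which is why a moderate number of basis functions can suffice in multi-dimensional random spaces.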

42 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a reliable direct imaging method based on the reverse time migration for finding extended obstacles with phaseless total field data and proved that the imaging resolution of the method is essentially the same as the imaging results using the scattering data with full phase information when the measurement is far away from the obstacle.
Abstract: We propose a reliable direct imaging method based on the reverse time migration for finding extended obstacles with phaseless total field data. We prove that the imaging resolution of the method is essentially the same as that obtained using scattering data with full phase information when the measurement is far away from the obstacle. The imaginary part of the cross-correlation imaging functional always peaks on the boundary of the obstacle. Numerical experiments are included to illustrate the powerful imaging quality.

40 citations


Journal ArticleDOI
TL;DR: A new linearized augmented Lagrangian method for Euler's elastica image denoising model is proposed, which adopts a linearized strategy to get an iteration sequence so as to reduce computational cost.
Abstract: Recently, many variational models involving high order derivatives have been widely used in image processing, because they can reduce staircase effects during noise elimination. However, it is very challenging to construct efficient algorithms to obtain the minimizers of the original high order functionals. In this paper, we propose a new linearized augmented Lagrangian method for Euler's elastica image denoising model. We detail the procedures of finding the saddle-points of the augmented Lagrangian functional. Instead of solving the associated linear systems by FFT or linear iterative methods (e.g., the Gauss-Seidel method), we adopt a linearized strategy to obtain an iteration sequence so as to reduce computational cost. In addition, we give some simple complexity analysis for the proposed method. Experimental results, with comparisons to the previous method, are supplied to demonstrate the efficiency of the proposed method and indicate that such a linearized augmented Lagrangian method is better suited to large-sized images.

36 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduced the Multidimensional Iterative Filtering (MIF) technique for the decomposition and time-frequency analysis of non-stationary high-dimensional signals.
Abstract: Iterative Filtering (IF) is an alternative technique to the Empirical Mode Decomposition (EMD) algorithm for the decomposition of non-stationary and non-linear signals. Recently, in [3], IF has been proved to be convergent for any L^2 signal, and its stability has also been demonstrated through examples. Furthermore, in [3] the so-called Fokker-Planck (FP) filters have been introduced; they are smooth at every point and have compact support. Based on those results, in this paper we introduce the Multidimensional Iterative Filtering (MIF) technique for the decomposition and time-frequency analysis of non-stationary high-dimensional signals. We present the extension of FP filters to higher dimensions. We prove convergence results under general sufficient conditions on the filter shape. Finally, we illustrate the promising performance of the MIF algorithm, equipped with high-dimensional FP filters, when applied to the decomposition of two-dimensional signals.
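
The core of one-dimensional Iterative Filtering is easy to sketch. The toy code below (not the authors' MIF implementation) extracts a first intrinsic mode function by repeatedly subtracting a local moving average; the triangular window, its half-length, and the stopping tolerance are illustrative choices standing in for the paper's Fokker-Planck filters.

```python
# A minimal 1-D Iterative Filtering sketch (illustrative, not the paper's code):
# the first intrinsic mode function is obtained by repeatedly removing a local
# moving average. A triangular (Bartlett) window stands in for an FP filter.
import numpy as np

def extract_imf(signal, half_len=250, max_iter=30, tol=2e-2):
    w = np.bartlett(2 * half_len + 1)
    w /= w.sum()
    s = signal.copy()
    for _ in range(max_iter):
        local_mean = np.convolve(s, w, mode="same")   # slowly varying component
        s_new = s - local_mean
        if np.linalg.norm(s_new - s) <= tol * np.linalg.norm(s):
            return s_new
        s = s_new
    return s

t = np.linspace(0.0, 1.0, 2000)
x = np.cos(40 * np.pi * t) + np.sin(4 * np.pi * t)    # fast + slow component
imf1 = extract_imf(x)                                 # roughly the fast oscillation
```

The filter half-length is chosen here on the order of the slow component's half-period; in practice IF derives the filter length from the signal itself.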

33 citations


Journal ArticleDOI
TL;DR: A few test cases with a wide range of wave structures are presented for testing higher-order schemes, to help CFD developers validate and further develop their schemes to a higher level of accuracy and robustness.
Abstract: There have been great efforts on the development of higher-order numerical schemes for compressible Euler equations in recent decades. The traditional test cases proposed thirty years ago mostly target strong shock interactions, which may not be adequate for evaluating the performance of current higher-order schemes. In order to set a higher standard for the development of new algorithms, in this paper we present a few benchmark cases with severe and complicated wave structures and interactions, which can be used to clearly distinguish different kinds of higher-order schemes. All tests are selected so that the numerical settings are very simple and any higher-order scheme can be straightforwardly applied to these cases. The examples include highly oscillatory solutions and the large density ratio problem in the one-dimensional case. In two dimensions, the cases include hurricane-like solutions; interactions of planar contact discontinuities with asymptotically large Mach number (the composite of entropy waves and vortex sheets); interactions of planar rarefaction waves with transition from continuous flows to the presence of shocks; and other types of interactions of two-dimensional planar waves. Getting good performance on all these cases may push algorithm developers to seek new methodology in the design of higher-order schemes, and to improve the robustness and accuracy of higher-order schemes to a new standard. In order to give reference solutions, the fourth-order gas-kinetic scheme (GKS) is applied to all these benchmark cases, even though the GKS solutions may not be very accurate in some cases. The main purpose of this paper is to encourage other CFD researchers to try these cases as well, and to promote further development of higher-order schemes.

26 citations


Journal ArticleDOI
TL;DR: A hybrid spectral element method for fractional two-point boundary value problems (FBVPs) involving both Caputo and Riemann-Liouville (RL) fractional derivatives is proposed.
Abstract: We propose a hybrid spectral element method for fractional two-point boundary value problems (FBVPs) involving both Caputo and Riemann-Liouville (RL) fractional derivatives. We first formulate these FBVPs as second-kind Volterra integral equations (VIEs) with weakly singular kernels, following a procedure similar to that in [16]. We then design a hybrid spectral element method with generalized Jacobi functions and Legendre polynomials as basis functions. The use of generalized Jacobi functions allows us to deal with the usual singularity of solutions at t = 0. We establish the existence and uniqueness of the numerical solution, and derive hp-type error estimates in the L^2(I)-norm for the transformed VIEs. Numerical results are provided to show the effectiveness of the proposed methods.

25 citations


Journal ArticleDOI
TL;DR: The deferred correction (DC) method is a classical method for solving ordinary differential equations; one of its key features is to iteratively use lower-order numerical methods so that a high-order numerical scheme can be obtained, as discussed in this paper.
Abstract: The deferred correction (DC) method is a classical method for solving ordinary differential equations; one of its key features is to iteratively use lower-order numerical methods so that high-order numerical schemes can be obtained. The main advantage of the DC approach is its simplicity and robustness. In this paper, the DC idea is adopted to solve forward backward stochastic differential equations (FBSDEs), which have practical importance in many applications. Note that it is difficult to design high-order and relatively "clean" numerical schemes for FBSDEs due to the involvement of randomness and the coupling of the forward and backward SDEs. This paper describes how to use the simplest Euler method in each DC step, which keeps the computational complexity low, to achieve a high-order rate of convergence.
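
To make the deferred correction idea concrete, here is a sketch of classical deferred correction for a deterministic ODE y' = f(t, y), not the FBSDE scheme of the paper: a forward Euler prediction is followed by Euler correction sweeps that reuse a quadrature of the previous iterate. The node placement, the trapezoidal quadrature, and the sweep count are illustrative choices.

```python
# Deferred correction sketch for y' = f(t, y) over one step (illustrative only):
# forward Euler prediction, then Euler correction sweeps driven by a quadrature
# of the previous iterate. The attainable order is limited by the quadrature.
import numpy as np

def dc_step(f, t0, y0, dt, n_nodes=4, n_sweeps=3):
    tau = t0 + dt * np.linspace(0.0, 1.0, n_nodes)      # uniform substeps
    y = np.full(n_nodes, y0, dtype=float)
    for m in range(n_nodes - 1):                        # low-order prediction
        y[m + 1] = y[m] + (tau[m + 1] - tau[m]) * f(tau[m], y[m])
    for _ in range(n_sweeps):                           # correction sweeps
        F = np.array([f(t, v) for t, v in zip(tau, y)])
        y_new = y.copy()
        for m in range(n_nodes - 1):
            h = tau[m + 1] - tau[m]
            quad = 0.5 * h * (F[m] + F[m + 1])          # quadrature of old slope
            y_new[m + 1] = (y_new[m]
                            + h * (f(tau[m], y_new[m]) - F[m])  # Euler-type correction
                            + quad)
        y = y_new
    return y[-1]

# Usage: y' = -y over one large step; the corrected value approaches exp(-1).
print(dc_step(lambda t, y: -y, 0.0, 1.0, 1.0), np.exp(-1.0))
```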

22 citations


Journal ArticleDOI
Tao Cui, Wei Leng, Deng Lin, Shichao Ma, Linbo Zhang 
TL;DR: The unisolvence problem of symmetric point-sets for the polynomial spaces used in mass-lumping elements is addressed, and an interesting property of the unisolvent symmetric point-sets is observed and discussed.
Abstract: This paper is concerned with the construction of high order mass-lumping finite elements on simplexes and a program for computing mass-lumping finite elements on triangles and tetrahedra. The polynomial spaces for mass-lumping finite elements, as proposed in the literature, are presented and discussed. In particular, the unisolvence problem of symmetric point-sets for the polynomial spaces used in mass-lumping elements is addressed, and an interesting property of the unisolvent symmetric point-sets is observed and discussed. Though its theoretical proof is still lacking, this property seems to be true in general, and it can greatly reduce the number of cases to consider in the computation of mass-lumping elements. A program for computing mass-lumping finite elements on triangles and tetrahedra, derived from the code for computing numerical quadrature rules presented in [7], is introduced. New mass-lumping finite elements on triangles of orders 7, 8 and 9, higher than those available in the literature, found using this program, are reported.

18 citations


Journal ArticleDOI
TL;DR: The globally hyperbolic regularization of moment models of the Boltzmann equation is studied in the presence of collision terms, and it is proved that the regularized models are linearly stable at the local equilibrium and satisfy Yong's first stability condition.
Abstract: Grad's moment models for the Boltzmann equation were recently regularized to globally hyperbolic systems, so that the regularized models attain local well-posedness for Cauchy data. The hyperbolic regularization is related only to the convection term in the Boltzmann equation. In this paper we study the regularized models in the presence of collision terms. It is proved that the regularized models are linearly stable at the local equilibrium and satisfy Yong's first stability condition with commonly used approximate collision terms, and particularly with Boltzmann's binary collision model.

18 citations


Journal ArticleDOI
TL;DR: An error analysis is presented that gives effective bounds for the perturbation on the solution of such linear systems when the Gaussian convolution is computed by means of recursive filters, and it is proved that such a solution can be seen as the exact solution of a perturbed linear system.
Abstract: In many applications, the Gaussian convolution is approximately computed by means of recursive filters, with a significant improvement of computational efficiency. We are interested in theoretical and numerical issues related to such a use of recursive filters in a three-dimensional variational data assimilation (3D-Var) scheme as it appears in the software OceanVar. In that context, the main numerical problem consists in solving large linear systems with high efficiency, so an iterative solver, namely the conjugate gradient method, is equipped with a recursive filter in order to compute matrix-vector multiplications that are in fact Gaussian convolutions. Here we present an error analysis that gives effective bounds for the perturbation on the solution of such linear systems when the Gaussian convolution is computed by means of recursive filters. We first prove that such a solution can be seen as the exact solution of a perturbed linear system. Then we study the related perturbation on the solution and demonstrate that it can be bounded in terms of the difference between the two linear operators associated with the Gaussian convolution and the recursive filter, respectively. Moreover, we show through numerical experiments that the error on the solution, which exhibits a kind of edge effect (most of the error is localized in the first and last few entries of the computed solution), is due to the structure of the difference of the two linear operators.
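
The basic mechanism behind such recursive filters can be sketched in a few lines; this is a generic first-order filter, not the OceanVar implementation. A forward and a backward sweep of a one-pole recursion approximate a Gaussian convolution at linear cost, and repeating the passes sharpens the approximation. The coefficient alpha and the number of passes are illustrative.

```python
# Generic first-order recursive filter (illustrative, not the OceanVar code):
# forward plus backward sweeps approximate Gaussian smoothing in O(n) per pass.
import numpy as np

def recursive_gaussian(x, alpha=0.8, n_passes=3):
    y = np.asarray(x, dtype=float).copy()
    for _ in range(n_passes):
        for i in range(1, y.size):               # forward sweep
            y[i] = alpha * y[i - 1] + (1.0 - alpha) * y[i]
        for i in range(y.size - 2, -1, -1):      # backward sweep
            y[i] = alpha * y[i + 1] + (1.0 - alpha) * y[i]
    return y

spike = np.zeros(101)
spike[50] = 1.0
bell = recursive_gaussian(spike)                 # approximately a Gaussian bump
```

The difference between this filter, viewed as a linear operator, and the exact Gaussian convolution operator is the kind of quantity in which the error bounds above are expressed.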

17 citations


Journal ArticleDOI
TL;DR: This paper presents the optimality conditions of the exact and the discrete optimal control systems, and then derives both a priori and a posteriori error estimates for Stokes equations with an H^1-norm state constraint.
Abstract: In this paper, we consider an optimal control problem governed by Stokes equations with an H^1-norm state constraint. The control problem is approximated by a spectral method, which provides very accurate approximation with a relatively small number of unknowns. Choosing appropriate basis functions leads to a discrete system with sparse matrices. We first present the optimality conditions of the exact and the discrete optimal control systems, then derive both a priori and a posteriori error estimates. Finally, an illustrative numerical experiment indicates that the proposed method is competitive, and the estimator can indicate the errors very well.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the residual diffusion phenomenon in chaotic advection computationally via adaptive orthogonal basis, which is generated by a class of time periodic cellular flows arising in modeling transition to turbulence in Rayleigh-Benard experiments.
Abstract: We study the residual diffusion phenomenon in chaotic advection computationally via adaptive orthogonal basis. The chaotic advection is generated by a class of time-periodic cellular flows arising in modeling transition to turbulence in Rayleigh-Benard experiments. The residual diffusion refers to the non-zero effective (homogenized) diffusion in the limit of zero molecular diffusion as a result of chaotic mixing of the streamlines. In this limit, the solutions of the advection-diffusion equation develop sharp gradients and demand a large number of Fourier modes to resolve, rendering computation expensive. We construct an adaptive orthogonal basis (training) with built-in sharp gradient structures from fully resolved spectral solutions at a few sampled molecular diffusivities. This is done by taking snapshots of solutions in time and performing a singular value decomposition of the matrix consisting of these snapshots as column vectors. The singular values decay rapidly and allow us to extract a small percentage of left singular vectors, corresponding to the top singular values, as adaptive basis vectors. The trained orthogonal adaptive basis makes possible low-cost computation of the effective diffusivities at smaller molecular diffusivities (testing). The testing errors decrease as the training occurs at smaller molecular diffusivities. We make use of the Poincaré map of the advection-diffusion equation to bypass long-time simulation and gain accuracy in computing effective diffusivity and learning the adaptive basis. We observe a non-monotone relationship between residual diffusivity and the amount of chaos in the advection, though the overall trend is that sufficient chaos leads to higher residual diffusivity.
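
The snapshot-SVD step described above is standard and easy to sketch. In the code below the snapshot matrix is synthetic (low rank plus noise), whereas in the paper its columns are resolved spectral solutions sampled in time; the energy threshold is an illustrative choice.

```python
# Snapshot SVD sketch (illustrative data): stack snapshots as columns, take the
# SVD, and keep the leading left singular vectors as the adaptive basis.
import numpy as np

rng = np.random.default_rng(0)
modes = rng.standard_normal((4096, 12))              # hidden low-dimensional structure
coeffs = rng.standard_normal((12, 200))
snapshots = modes @ coeffs + 1e-3 * rng.standard_normal((4096, 200))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1          # keep 99.9% of snapshot energy
basis = U[:, :r]                                     # adaptive orthogonal basis (r = 12 here)
```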

Journal ArticleDOI
TL;DR: In this paper, a linearized difference scheme is proposed in which the time fractional-order derivative is discretized by the second-order shifted and weighted Grünwald-Letnikov difference operator.
Abstract: This article is intended to fill in the blank of numerical schemes with second-order convergence accuracy in time for the nonlinear Stokes' first problem for a heated generalized second grade fluid with fractional derivative. A linearized difference scheme is proposed. The time fractional-order derivative is discretized by the second-order shifted and weighted Grünwald-Letnikov difference operator. The convergence accuracy in space is improved by applying the average operator. The presented numerical method is unconditionally stable with the global convergence order of in the maximum norm, where τ and h are the step sizes in time and space, respectively. Finally, numerical examples are carried out to verify the theoretical results, showing that our scheme is indeed efficient.
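
For orientation, the (unshifted) Grünwald-Letnikov weights that underlie such operators can be generated by a simple recurrence; the second-order operator used in the paper combines shifted copies of these weights with additional coefficients that are not reproduced here.

```python
# Grünwald-Letnikov weights g_k = (-1)^k * C(alpha, k), via the recurrence
# g_k = (1 - (alpha + 1)/k) * g_{k-1}. Illustrative; the paper's operator uses
# shifted and weighted combinations of such weights.
import numpy as np

def gl_weights(alpha, n):
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

print(gl_weights(0.5, 5))   # [1, -0.5, -0.125, -0.0625, ...]
```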

Journal ArticleDOI
TL;DR: In this paper, a mean-field Ito formula and a mean-field Ito-Taylor expansion are developed for mean-field stochastic differential equations (MSDEs); based on the new formula and expansion, Ito-Taylor schemes of strong order γ and weak order η are proposed for MSDEs, and the convergence rate γ of the strong Ito-Taylor scheme is obtained theoretically.
Abstract: This paper is devoted to numerical methods for mean-field stochastic differential equations (MSDEs). We first develop the mean-field Ito formula and the mean-field Ito-Taylor expansion. Then, based on the new formula and expansion, we propose Ito-Taylor schemes of strong order γ and weak order η for MSDEs, and theoretically obtain the convergence rate γ of the strong Ito-Taylor scheme, which can be seen as an extension of the well-known fundamental strong convergence theorem to the mean-field SDE setting. Finally, some numerical examples are given to verify our theoretical results.
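
As a point of reference only (the paper constructs higher-order Ito-Taylor schemes), the simplest numerical treatment of a mean-field SDE replaces the expectation in the coefficients by an interacting-particle average and advances all particles with Euler-Maruyama. The drift, diffusion coefficient, particle count, and step size below are illustrative.

```python
# Euler-Maruyama with a particle approximation of the mean-field term
# (illustrative sketch, not the paper's Ito-Taylor schemes).
import numpy as np

rng = np.random.default_rng(1)

def mean_field_euler(x0, T, n_steps, n_particles, sigma=0.3):
    dt = T / n_steps
    X = np.full(n_particles, x0, dtype=float)
    for _ in range(n_steps):
        m = X.mean()                           # particle stand-in for E[X_t]
        drift = -(X - m)                       # example drift b(x, E[X]) = -(x - E[X])
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return X

paths = mean_field_euler(x0=1.0, T=1.0, n_steps=100, n_particles=10_000)
print(paths.mean())                            # stays near 1.0 for this drift
```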

Journal ArticleDOI
TL;DR: The existence of an optimal exercise policy makes the option pricing problem a free boundary value problem of a parabolic equation on an unbounded domain; the optimal exercise boundary satisfies a nonlinear Volterra integral equation and is solved by a high-order collocation method based on graded meshes.
Abstract: This paper is devoted to the American option pricing problem governed by the Black-Scholes equation. The existence of an optimal exercise policy makes the problem a free boundary value problem of a parabolic equation on an unbounded domain. The optimal exercise boundary satisfies a nonlinear Volterra integral equation and is solved by a high-order collocation method based on graded meshes. This free boundary is then deformed to a fixed boundary by the front-fixing transformation. The boundary condition at infinity (due to the fact that the underlying asset's price could be arbitrarily large in theory) is treated by the perfectly matched layer technique. Finally, the resulting initial-boundary value problems for the option price and some of the Greeks on a bounded rectangular space-time domain are solved by a finite element method. In particular, for Delta, one of the Greeks, we propose a discontinuous Galerkin method to treat the discontinuity in its initial condition. Convergence results for these two methods are analyzed and several numerical simulations are provided to verify these theoretical results.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the blowup of Volterra integro-differential equations (VIDEs) with a dissipative linear term and establish a Razumikhin-type theorem to verify the unboundedness of solutions.
Abstract: In this paper, we discuss the blowup of Volterra integro-differential equations (VIDEs) with a dissipative linear term. To overcome the fluctuation of solutions, we establish a Razumikhin-type theorem to verify the unboundedness of solutions. We also introduce leaving-times and arriving-times for estimating the time solutions spend before reaching ∞. Based on these two typical techniques, the blowup and global existence of solutions to VIDEs with local and global integrable kernels are presented. As applications, the critical exponents of semi-linear Volterra diffusion equations (SLVDEs) on bounded domains with constant kernel are generalized to SLVDEs on bounded domains and ℝ^N with some local integrable kernels. Moreover, the critical exponents of SLVDEs on both bounded domains and the unbounded domain ℝ^N are investigated for global integrable kernels.

Journal ArticleDOI
TL;DR: A full multigrid method with coarsening by a factor of three, applied to distributed control problems constrained by Stokes equations, is presented; numerical experiments demonstrate the effectiveness and efficiency of the proposed multigrid framework for tracking-type optimal control problems.
Abstract: A full multigrid method with coarsening by a factor of three, applied to distributed control problems constrained by Stokes equations, is presented. An optimal control problem with a cost functional of velocity and/or pressure tracking-type is considered with Dirichlet boundary conditions. The optimality system that results from a Lagrange multiplier framework forms a linear system connecting the state, adjoint, and control variables. We investigate multigrid methods with finite difference discretization on staggered grids. Coarsening by a factor of three on staggered grids results in a nested hierarchy of staggered grids and simplifies the inter-grid transfer operators. A distributive Gauss-Seidel smoothing scheme is employed to update the state and adjoint variables, and a gradient update step is used to update the control variables. Numerical experiments are presented to demonstrate the effectiveness and efficiency of the proposed multigrid framework for tracking-type optimal control problems.

Journal ArticleDOI
TL;DR: In this article, a two-phase method for reconstruction of blurred images corrupted by impulse noise is proposed, where the first phase uses a noise detector to identify the pixels that are contaminated by noise and the second phase reconstructs the noisy pixels by solving an equality constrained total variation minimization problem that preserves the exact values of the noise-free pixels.
Abstract: We propose a new two-phase method for the reconstruction of blurred images corrupted by impulse noise. In the first phase, we use a noise detector to identify the pixels that are contaminated by noise, and then, in the second phase, we reconstruct the noisy pixels by solving an equality constrained total variation minimization problem that preserves the exact values of the noise-free pixels. For images that are only corrupted by impulse noise (i.e., not blurred) we apply the semismooth Newton method to a reduced problem, and if the images are also blurred, we solve the equality constrained reconstruction problem using a first-order primal-dual algorithm. The proposed model improves the computational efficiency (in the denoising case) and has the advantage of being regularization-parameter-free. Our numerical results suggest that the method is competitive in terms of its restoration capabilities with other two-phase methods.
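
A typical first-phase detector for salt-and-pepper impulse noise can be sketched as follows; the window size, the threshold, and the extreme-value test are illustrative and not necessarily the detector used in the paper.

```python
# Illustrative impulse-noise detector: flag pixels that take extreme values and
# deviate strongly from their local median. The resulting mask tells the second
# phase which pixels to reconstruct and which to keep fixed.
import numpy as np
from scipy.ndimage import median_filter

def detect_impulses(img, window=3, threshold=0.3):
    med = median_filter(img, size=window)
    extreme = (img <= img.min()) | (img >= img.max())
    return extreme & (np.abs(img - med) > threshold)   # boolean noise mask
```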

Journal ArticleDOI
TL;DR: Approximations are obtained by the Galerkin finite element method in space, in conjunction with the backward Euler method and the Crank-Nicolson method in time, respectively, and optimal error estimates are presented.
Abstract: This paper is concerned with numerical methods for a two-dimensional time-dependent cubic nonlinear Schrödinger equation. The approximations are obtained by the Galerkin finite element method in space in conjunction with the backward Euler method and the Crank-Nicolson method in time, respectively. We prove optimal L^2 error estimates for two fully discrete schemes by using an elliptic projection operator. Finally, a numerical example is provided to verify our theoretical results.

Journal ArticleDOI
TL;DR: A heuristic Learning-based Non-Negativity Constrained Variation is presented, which aims to search the coefficients of the variational model automatically and to make the variation adapt to different images and problems through a supervised-learning strategy.
Abstract: This paper presents a heuristic Learning-based Non-Negativity Constrained Variation (L-NNCV) aiming to search the coefficients of the variational model automatically and to make the variation adapt to different images and problems through a supervised-learning strategy. The model includes two terms: a problem-based term that is derived from prior knowledge, and an image-driven regularization which is learned from training samples. The model can be solved by the classical ε-constraint method. Experimental results show that the effectiveness of each term in the regularization accords with the corresponding theoretical proof, and that the proposed method outperforms other PDE-based methods on image denoising and deblurring.

Journal ArticleDOI
TL;DR: This paper considers the algorithm for recovering sparse orthogonal polynomials using stochastic collocation via lq minimization, and obtains recoverability results for both sparse polynomial functions and general non-sparse functions.
Abstract: In this paper we consider an algorithm for recovering sparse orthogonal polynomials using stochastic collocation via lq minimization. The main results include: 1) by using the norm inequality between lq and l2 and the square root lifting inequality, we present several theoretical estimates regarding the recoverability for both sparse and non-sparse signals via lq minimization; 2) we then combine this method with stochastic collocation to identify the coefficients of sparse orthogonal polynomial expansions, stemming from the field of uncertainty quantification. We obtain recoverability results for both sparse polynomial functions and general non-sparse functions. We also present various numerical experiments to show the performance of the lq algorithm. We first present some benchmark tests to demonstrate the ability of lq minimization to recover exactly sparse signals, and then consider three classical analytical functions to show the advantage of this method over standard l1 and reweighted l1 minimization. All the numerical results indicate that the lq method performs better than standard l1 and reweighted l1 minimization.
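
One common way to attack the nonconvex lq problem, offered here only as background (the paper's algorithm and its coupling with stochastic collocation are not reproduced), is iteratively reweighted least squares with a smoothing parameter that is gradually driven to zero; the matrix sizes, the value of q, and the schedule for eps are illustrative.

```python
# IRLS sketch for min ||x||_q^q subject to A x = b, 0 < q <= 1 (illustrative).
import numpy as np

def irls_lq(A, b, q=0.5, n_iter=50, eps=1.0):
    x = np.linalg.lstsq(A, b, rcond=None)[0]           # least-squares initial guess
    for _ in range(n_iter):
        D = np.diag((x**2 + eps) ** (1.0 - q / 2.0))   # smoothed lq weights
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)  # weighted minimum-norm solution
        eps = max(eps / 10.0, 1e-12)                   # gradually remove the smoothing
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 5, replace=False)] = rng.standard_normal(5)
x_rec = irls_lq(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true))                  # small for sparse enough signals
```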

Journal ArticleDOI
TL;DR: The role of mesh quality on the accuracy of linear finite element approximation is studied, and a more detailed error estimate is derived, which shows explicitly how the shape and size of elements and the symmetry structure of the mesh affect the error of the numerical approximation.
Abstract: In this paper, we study the role of mesh quality on the accuracy of linear finite element approximation. We derive a more detailed error estimate, which shows explicitly how the shape and size of elements and the symmetry structure of the mesh affect the error of the numerical approximation. Two computable parameters Ge and Gv are given to depict the cell geometry property and the symmetry structure of the mesh. Compared with the standard a priori error estimates, which only yield information on the asymptotic error behaviour in a global sense, our proposed error estimate considers the effect of local element geometry properties, and is thus more accurate. Under certain conditions, the traditional error estimates and superconvergence results can be derived from the proposed error estimate. Moreover, the estimators Ge and Gv are computable and thus can be used for predicting the variation of errors. Numerical tests are presented to illustrate the performance of the proposed parameters Ge and Gv.

Journal ArticleDOI
TL;DR: A numerical time-stepping algorithm for differential or partial differential equations is proposed that adaptively modifies the dimensionality of the underlying modal basis expansion by using dimensionality-reduction techniques, such as the proper orthogonal decomposition.
Abstract: A numerical time-stepping algorithm for differential or partial differential equations is proposed that adaptively modifies the dimensionality of the underlying modal basis expansion. Specifically, the method takes advantage of any underlying low-dimensional manifolds or subspaces in the system by using dimensionality-reduction techniques, such as the proper orthogonal decomposition, in order to adaptively represent the solution in the optimal basis modes. The method can provide significant computational savings for systems where low-dimensional manifolds are present, since the reduction can lower the dimensionality of the underlying high-dimensional system by orders of magnitude. A comparison of the computational efficiency and error for this method is given, showing the algorithm to be potentially of great value for high-dimensional dynamical systems simulations, especially where slow-manifold dynamics are known to arise. The method is envisioned to automatically take advantage of any potential computational saving associated with dimensionality reduction, much as adaptive time-steppers automatically take advantage of large step sizes whenever possible.
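
The adaptivity idea can be sketched with two small helpers, not taken from the paper: one builds a POD basis from recent snapshots, the other decides when the current state is no longer well represented and the basis should be rebuilt or enlarged. The tolerances are illustrative.

```python
# Helpers for an adaptive-rank time stepper (illustrative sketch).
import numpy as np

def pod_basis(snapshots, energy_tol=1e-8):
    """Left singular vectors capturing all but energy_tol of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    tail = 1.0 - np.cumsum(s**2) / np.sum(s**2)
    r = int(np.argmax(tail <= energy_tol)) + 1
    return U[:, :r]

def needs_refresh(u, basis, tol=1e-6):
    """True when projecting the state u onto the basis loses more than tol (relative)."""
    residual = u - basis @ (basis.T @ u)
    return np.linalg.norm(residual) > tol * np.linalg.norm(u)
```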

Journal ArticleDOI
TL;DR: The defect structures around a spherical colloidal particle in a cholesteric liquid crystal are investigated using a spectral method, which is specially devised to cope with the inhomogeneity of the cholesteric at infinity.
Abstract: We investigate the defect structures around a spherical colloidal particle in a cholesteric liquid crystal using a spectral method, which is specially devised to cope with the inhomogeneity of the cholesteric at infinity. We pay particular attention to the cholesteric counterparts of nematic metastable configurations. When the spherical colloidal particle imposes strong homeotropic anchoring on its surface, besides the well-known twisted Saturn ring, we find another metastable defect configuration, which corresponds to the dipole in a nematic, without outside confinement. This configuration is energetically preferable to the twisted Saturn ring when the particle size is large compared to the nematic coherence length and small compared to the cholesteric pitch. When the colloidal particle imposes strong planar anchoring, we find the cholesteric twist can result in a split of the defect core on the particle surface, similar to that found in a nematic liquid crystal by lowering the temperature or increasing the particle size.

Journal ArticleDOI
TL;DR: By sufficiently exploiting the problem's special structure, this paper manages to make all subproblems either possess closed-form solutions or be solvable via Fast Fourier Transforms, which makes the cost per iteration very low.
Abstract: The high-order total variation (TV2) and l1 based (TV2L1) model has an advantage over TVL1 in its ability to avoid the staircase effect, and a constrained model has an advantage over its unconstrained counterpart in the simplicity of estimating the parameters. In this paper, we consider solving the TV2L1 based magnetic resonance imaging (MRI) signal reconstruction problem by an efficient alternating direction method of multipliers. By sufficiently exploiting the problem's special structure, we manage to make all subproblems either possess closed-form solutions or be solvable via Fast Fourier Transforms, which makes the cost per iteration very low. Experimental results for MRI reconstruction are presented to illustrate the effectiveness of the new model and algorithm. Comparisons with its recent unconstrained counterpart are also reported.
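
The FFT-solvable structure mentioned above is worth a small illustration; the stencil, mu and rho below are generic and do not reproduce the paper's splitting. With periodic boundary conditions, a subproblem of the form (mu I + rho L^T L) x = rhs, with L a second-difference operator, is diagonal in Fourier space.

```python
# Solving (mu*I + rho*L^T L) x = rhs by FFT under periodic boundary conditions
# (illustrative sketch of the kind of subproblem the ADMM splitting produces).
import numpy as np

def fft_subproblem_solve(rhs, mu=1.0, rho=10.0):
    n1, n2 = rhs.shape
    kernel = np.zeros((n1, n2))
    kernel[0, 0] = -4.0                                   # 5-point second-difference stencil
    kernel[1, 0] = kernel[-1, 0] = kernel[0, 1] = kernel[0, -1] = 1.0
    L_hat = np.fft.fft2(kernel)                           # eigenvalues of L (periodic BC)
    denom = mu + rho * np.abs(L_hat) ** 2                 # eigenvalues of mu*I + rho*L^T L
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))

rng = np.random.default_rng(3)
x = fft_subproblem_solve(rng.standard_normal((128, 128)))
```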

Journal ArticleDOI
TL;DR: In this paper, the existence of a family of fractal splines satisfying for v = 0, 1, …,N and suitable boundary conditions was shown. But the spline complexity was not analyzed.
Abstract: For a prescribed set of lacunary data with equally spaced knot sequence in the unit interval, we show the existence of a family of fractal splines satisfying for v = 0, 1, … , N and suitable boundary conditions. To this end, the unique quintic spline introduced by A. Meir and A. Sharma [SIAM J. Numer. Anal. 10(3), 1973, pp. 433-442] is generalized by using fractal functions with variable scaling parameters. The presence of scaling parameters that add extra "degrees of freedom", the self-referentiality of the interpolant, and the "fractality" of the third derivative of the interpolant are additional features of the fractal version, which may be advantageous in applications. If the lacunary data are generated from a function Φ satisfying a certain smoothness condition, then for suitable choices of scaling factors, the corresponding fractal spline satisfies , as the number of partition points increases.

Journal ArticleDOI
TL;DR: An anisotropic adaptive mesh is generated as a uniform one in the metric specified by a tensor; preservation of the maximum principle is also studied, some new sufficient conditions for maximum principle preservation are developed, and a mesh quality measure is defined.
Abstract: Anisotropic mesh adaptation is studied for the linear finite element solution of 3D anisotropic diffusion problems. The 𝕄-uniform mesh approach is used, where an anisotropic adaptive mesh is generated as a uniform one in the metric specified by a tensor. In addition to mesh adaptation, preservation of the maximum principle is also studied. Some new sufficient conditions for maximum principle preservation are developed, and a mesh quality measure is defined to serve as a good indicator. Four different metric tensors are investigated: one is the identity matrix, one focuses on minimizing an error bound, another on preservation of the maximum principle, while the fourth combines both. Numerical examples show that these metric tensors serve their purposes. In particular, the fourth leads to meshes that improve the satisfaction of the maximum principle by the finite element solution while concentrating elements in regions where the error is large. Application of the anisotropic mesh adaptation to fractured reservoir simulation in petroleum engineering is also investigated, where unphysical solutions can occur and mesh adaptation can help improve the satisfaction of the maximum principle.

Journal ArticleDOI
TL;DR: In this paper, a bilinear streamline-diffusion finite element method on a Bakhvalov-Shishkin mesh for a singularly perturbed convection-diffusion problem is analyzed.
Abstract: In this paper, a bilinear streamline-diffusion finite element method on a Bakhvalov-Shishkin mesh for a singularly perturbed convection-diffusion problem is analyzed. The method is shown to be convergent uniformly in the perturbation parameter ε provided only that ε ≤ N^{-1}. A convergence rate in a discrete streamline-diffusion norm is established under certain regularity assumptions. Finally, through numerical experiments, we verify the theoretical results.
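
For readers unfamiliar with layer-adapted meshes, the sketch below builds the simpler piecewise-uniform Shishkin mesh for a boundary layer at x = 1; the paper works with the graded Bakhvalov-Shishkin variant, and the parameters sigma and beta are illustrative.

```python
# Piecewise-uniform Shishkin mesh (illustrative; the paper uses the graded
# Bakhvalov-Shishkin variant): half the cells are packed into an O(eps*ln N)
# neighbourhood of the boundary layer at x = 1.
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0, beta=1.0):
    tau = min(0.5, sigma * eps * np.log(N) / beta)     # transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)   # N/2 cells outside the layer
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)     # N/2 cells inside the layer
    return np.concatenate([coarse, fine[1:]])

mesh = shishkin_mesh(N=64, eps=1e-4)                   # strongly refined near x = 1
```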

Journal ArticleDOI
TL;DR: In this paper, the authors propose to use the interior functions of a hierarchical basis for high order BDMp elements to enforce the divergence-free condition of a magnetic field B approximated by the H(div) basis.
Abstract: In this paper, we propose to use the interior functions of a hierarchical basis for high order BDMp elements to enforce the divergence-free condition of a magnetic field B approximated by the H(div) BDMp basis. The resulting constrained finite element method can be used to solve the magnetic induction equation in the MHD equations. The proposed procedure is based on the fact that the scalar (p-1)-th order polynomial space on each element can be decomposed as an orthogonal sum of the subspace defined by the divergence of the interior functions of the p-th order BDMp basis and the constant function. Therefore, the interior functions can be used to remove, element-wise, all higher order terms except the constant in the divergence error of the finite element solution of the B-field. The constant terms from each element can then be easily corrected using a first order H(div) basis globally. Numerical results for a 3-D magnetic induction equation show the effectiveness of the proposed method in enforcing the divergence-free condition of the magnetic field.

Journal ArticleDOI
TL;DR: In this paper, the eigenvalues of the Schur complements of H-matrices are shown to be located in the Gersgorin discs and the Ostrowski discs of the original matrices under certain conditions.
Abstract: The distribution of the eigenvalues of the Schur complement of matrices plays an important role in many mathematical problems. In this paper, we first present some criteria for H-matrices. Then, as an application, for two classes of matrices whose submatrices are γ-diagonally dominant and product γ-diagonally dominant, we show that the eigenvalues of the Schur complement are located in the Gersgorin discs and the Ostrowski discs of the original matrices under certain conditions.
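
The objects in the abstract are simple to compute for a small example; the sketch below forms the Schur complement of a leading block of a diagonally dominant matrix (an illustrative choice) and prints its Gersgorin discs alongside its eigenvalues.

```python
# Gersgorin discs of a Schur complement (illustrative example matrix).
import numpy as np

A = np.array([[10.0, 1.0, 2.0],
              [ 1.0, 8.0, 1.0],
              [ 2.0, 1.0, 9.0]])
k = 1                                                  # eliminate the leading k-by-k block
S = A[k:, k:] - A[k:, :k] @ np.linalg.solve(A[:k, :k], A[:k, k:])

centers = np.diag(S)
radii = np.abs(S).sum(axis=1) - np.abs(centers)
for c, r in zip(centers, radii):
    print(f"disc: |z - {c:.3f}| <= {r:.3f}")
print("eigenvalues of the Schur complement:", np.linalg.eigvals(S))
```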