
Showing papers on "Basis function published in 2012"


Journal ArticleDOI
TL;DR: In this article, the authors present a common framework for methods beyond semilocal density-functional theory (DFT), including Hartree-Fock (HF), hybrid density functionals, random-phase approximation (RPA), second-order Moller-Plesset perturbation theory (MP2), and the GW method.
Abstract: The efficient implementation of electronic structure methods is essential for first principles modeling of molecules and solids. We present here a particularly efficient common framework for methods beyond semilocal density-functional theory (DFT), including Hartree-Fock (HF), hybrid density functionals, random-phase approximation (RPA), second-order Moller-Plesset perturbation theory (MP2) and the GW method. This computational framework allows us to use compact and accurate numeric atom-centered orbitals (NAOs), popular in many implementations of semilocal DFT, as basis functions. The essence of our framework is to employ the 'resolution of identity (RI)' technique to facilitate the treatment of both the two-electron Coulomb repulsion integrals (required in all these approaches) and the linear density-response function (required for RPA and GW). This is possible because these quantities can be expressed in terms of the products of single-particle basis functions, which can in turn be expanded in a set of auxiliary basis functions (ABFs). The construction of ABFs lies at the heart of the RI technique, and we propose here a simple prescription for constructing ABFs which can be applied regardless of whether the underlying radial functions have a specific analytical shape (e.g., Gaussian) or are numerically tabulated.

566 citations


Journal ArticleDOI
TL;DR: In this paper, the authors employ the resolution of identity (RI) technique to facilitate the treatment of both the two-electron Coulomb repulsion integrals (required in all these approaches) and the linear density-response function (required for RPA and $GW$), which are expressed in terms of products of single-particle basis functions that can in turn be expanded in a set of auxiliary basis functions (ABFs).
Abstract: Efficient implementations of electronic structure methods are essential for first-principles modeling of molecules and solids. We here present a particularly efficient common framework for methods beyond semilocal density-functional theory, including Hartree-Fock (HF), hybrid density functionals, random-phase approximation (RPA), second-order Moller-Plesset perturbation theory (MP2), and the $GW$ method. This computational framework allows us to use compact and accurate numeric atom-centered orbitals (popular in many implementations of semilocal density-functional theory) as basis functions. The essence of our framework is to employ the "resolution of identity (RI)" technique to facilitate the treatment of both the two-electron Coulomb repulsion integrals (required in all these approaches) as well as the linear density-response function (required for RPA and $GW$). This is possible because these quantities can be expressed in terms of products of single-particle basis functions, which can in turn be expanded in a set of auxiliary basis functions (ABFs). The construction of ABFs lies at the heart of the RI technique, and here we propose a simple prescription for constructing the ABFs which can be applied regardless of whether the underlying radial functions have a specific analytical shape (e.g., Gaussian) or are numerically tabulated. We demonstrate the accuracy of our RI implementation for Gaussian and NAO basis functions, as well as the convergence behavior of our NAO basis sets for the above-mentioned methods. Benchmark results are presented for the ionization energies of 50 selected atoms and molecules from the G2 ion test set as obtained with $GW$ and MP2 self-energy methods, and the G2-I atomization energies as well as the S22 molecular interaction energies as obtained with the RPA method.
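
The contraction at the core of such RI schemes can be illustrated in a few lines. The following is a minimal sketch, assuming random placeholder tensors for the three-index integrals (ij|P) and the auxiliary Coulomb metric (P|Q); it shows only the "RI-V" tensor algebra used to assemble approximate four-index integrals, not the paper's NAO/ABF construction.

```python
# Hedged sketch of the RI contraction pattern for two-electron integrals.
# B and V are random placeholders standing in for real integrals.
import numpy as np

n_basis, n_aux = 10, 30                      # single-particle and auxiliary basis sizes
rng = np.random.default_rng(0)

# Placeholder three-index integrals B[i, j, P] = (ij|P), symmetric in i <-> j.
B = rng.standard_normal((n_basis, n_basis, n_aux))
B = 0.5 * (B + B.transpose(1, 0, 2))

# Placeholder auxiliary Coulomb metric V[P, Q] = (P|Q), symmetric positive definite.
A = rng.standard_normal((n_aux, n_aux))
V = A @ A.T + n_aux * np.eye(n_aux)

# RI approximation of the four-index integrals:
#   (ij|kl) ~= sum_{P,Q} (ij|P) [V^-1]_{PQ} (Q|kl)
V_inv = np.linalg.inv(V)
eri_ri = np.einsum('ijP,PQ,klQ->ijkl', B, V_inv, B)

print(eri_ri.shape)            # (10, 10, 10, 10)
```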

462 citations


Journal ArticleDOI
TL;DR: It is shown that the construction of classical hierarchical B-splines can be suitably modified to define locally supported basis functions that form a partition of unity, by reducing the support of basis functions defined on coarse grids according to finer levels in the hierarchy of splines.

454 citations


Proceedings Article
03 Dec 2012
TL;DR: It is shown that when there is a large gap in the eigen-spectrum of the kernel matrix, approaches based on the Nystrom method can yield an impressively better generalization error bound than the random Fourier features based approach.
Abstract: Both random Fourier features and the Nystrom method have been successfully applied to efficient kernel learning. In this work, we investigate the fundamental difference between these two approaches, and how the difference could affect their generalization performance. Unlike approaches based on random Fourier features, where the basis functions (i.e., cosine and sine functions) are sampled from a distribution independent of the training data, basis functions used by the Nystrom method are randomly sampled from the training examples and are therefore data dependent. By exploring this difference, we show that when there is a large gap in the eigen-spectrum of the kernel matrix, approaches based on the Nystrom method can yield an impressively better generalization error bound than the random Fourier features based approach. We empirically verify our theoretical findings on a wide range of large data sets.
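
A minimal sketch of the two approximations being compared, for a Gaussian (RBF) kernel; the sample sizes, bandwidth, and approximation rank below are illustrative choices, not taken from the paper.

```python
# Hedged sketch: random Fourier features (data-independent basis) vs. the
# Nystrom method (basis sampled from the training points) for an RBF kernel.
import numpy as np

rng = np.random.default_rng(1)
n, d, m, gamma = 500, 5, 50, 0.5            # samples, dim, approximation rank, kernel width
X = rng.standard_normal((n, d))

def rbf_kernel(A, B):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K = rbf_kernel(X, X)                         # exact kernel matrix

# --- Random Fourier features: z(x) = sqrt(2/m) * cos(W x + b), W ~ N(0, 2*gamma*I)
W = rng.normal(0.0, np.sqrt(2 * gamma), (m, d))
b = rng.uniform(0, 2 * np.pi, m)
Z = np.sqrt(2.0 / m) * np.cos(X @ W.T + b)
K_rff = Z @ Z.T

# --- Nystrom: sample m landmark points from the data, K ~= K_nm K_mm^+ K_nm^T
idx = rng.choice(n, m, replace=False)
K_nm = rbf_kernel(X, X[idx])
K_mm = rbf_kernel(X[idx], X[idx])
K_nys = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

print("RFF error:    ", np.linalg.norm(K - K_rff) / np.linalg.norm(K))
print("Nystrom error:", np.linalg.norm(K - K_nys) / np.linalg.norm(K))
```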

328 citations


Journal ArticleDOI
TL;DR: Recent progress on the design, analysis and implementation of hybrid numerical-asymptotic boundary integral methods for boundary value problems for the Helmholtz equation that model time harmonic acoustic wave scattering in domains exterior to impenetrable obstacles is described.
Abstract: In this article we describe recent progress on the design, analysis and implementation of hybrid numerical-asymptotic boundary integral methods for boundary value problems for the Helmholtz equation that model time harmonic acoustic wave scattering in domains exterior to impenetrable obstacles. These hybrid methods combine conventional piecewise polynomial approximations with high-frequency asymptotics to build basis functions suitable for representing the oscillatory solutions. They have the potential to solve scattering problems accurately in a computation time that is (almost) independent of frequency, and this has been realized for many model problems. The design and analysis of this class of methods requires new results on the analysis and numerical analysis of highly oscillatory boundary integral operators and on the high-frequency asymptotics of scattering problems. The implementation requires the development of appropriate quadrature rules for highly oscillatory integrals. This article contains a historical account of the development of this currently very active field, a detailed account of recent progress and, in addition, a number of original research results on the design, analysis and implementation of these methods.

242 citations


Proceedings ArticleDOI
14 May 2012
TL;DR: This work presents a full probabilistic derivation of the continuous-time estimation problem, derives an estimator based on the assumption that the densities and processes involved are Gaussian, and shows how the coefficients of a relatively small number of basis functions can form the state to be estimated, making the solution efficient.
Abstract: Roboticists often formulate estimation problems in discrete time for the practical reason of keeping the state size tractable. However, the discrete-time approach does not scale well for use with high-rate sensors, such as inertial measurement units or sweeping laser imaging sensors. The difficulty lies in the fact that a pose variable is typically included for every time at which a measurement is acquired, rendering the dimension of the state impractically large for large numbers of measurements. This issue is exacerbated for the simultaneous localization and mapping (SLAM) problem, which further augments the state to include landmark variables. To address this tractability issue, we propose to move the full maximum likelihood estimation (MLE) problem into continuous time and use temporal basis functions to keep the state size manageable. We present a full probabilistic derivation of the continuous-time estimation problem, derive an estimator based on the assumption that the densities and processes involved are Gaussian, and show how coefficients of a relatively small number of basis functions can form the state to be estimated, making the solution efficient. Our derivation is presented in steps of increasingly specific assumptions, opening the door to the development of other novel continuous-time estimation algorithms through the application of different assumptions at any point. We use the SLAM problem as our motivation throughout the paper, although the approach is not specific to this application. Results from a self-calibration experiment involving a camera and a high-rate inertial measurement unit are provided to validate the approach.
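
The basis-function idea can be illustrated with a deliberately simplified, deterministic least-squares version of the estimator (the paper's full probabilistic Gaussian derivation is not reproduced); the Gaussian temporal basis, trajectory, and noise level below are illustrative assumptions.

```python
# Hedged sketch: a continuous-time trajectory x(t) written as a weighted sum of a
# small number of temporal basis functions, so only the weights are estimated
# rather than one pose per measurement time.
import numpy as np

rng = np.random.default_rng(2)
t_meas = np.sort(rng.uniform(0.0, 10.0, 2000))          # high-rate measurement times
x_true = np.sin(t_meas) + 0.1 * t_meas                   # unknown true trajectory
y = x_true + 0.05 * rng.standard_normal(t_meas.size)     # noisy measurements

# A small set of Gaussian "bump" temporal basis functions covering [0, 10].
centers = np.linspace(0.0, 10.0, 15)
width = 1.0
Phi = np.exp(-((t_meas[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Estimate the 15 coefficients (not 2000 poses) by linear least squares.
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# The estimated trajectory can now be queried at any time.
t_query = np.linspace(0.0, 10.0, 5)
Phi_q = np.exp(-((t_query[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
print(Phi_q @ c)
```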

225 citations


Journal ArticleDOI
TL;DR: This paper presents a general framework for high-order Lagrangian discretization of the compressible shock hydrodynamics equations using curvilinear finite elements, formulated for any finite dimensional approximation of the kinematic and thermodynamic fields.
Abstract: The numerical approximation of the Euler equations of gas dynamics in a moving Lagrangian frame is at the heart of many multiphysics simulation algorithms. In this paper, we present a general framework for high-order Lagrangian discretization of these compressible shock hydrodynamics equations using curvilinear finite elements. This method is an extension of the approach outlined in [Dobrev et al., Internat. J. Numer. Methods Fluids, 65 (2010), pp. 1295--1310] and can be formulated for any finite dimensional approximation of the kinematic and thermodynamic fields, including generic finite elements on two- and three-dimensional meshes with triangular, quadrilateral, tetrahedral, or hexahedral zones. We discretize the kinematic variables of position and velocity using a continuous high-order basis function expansion of arbitrary polynomial degree which is obtained via a corresponding high-order parametric mapping from a standard reference element. This enables the use of curvilinear zone geometry, higher-ord...

197 citations


Journal ArticleDOI
TL;DR: In this paper, an isogeometric finite element method based on non-uniform rational B-spline (NURBS) basis functions is developed for natural frequency and buckling analysis of thin symmetrically laminated composite plates based upon the classical plate theory.
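
As a reminder of the basis functions involved, here is a minimal sketch of evaluating NURBS basis functions via the Cox-de Boor recursion followed by rational weighting; the knot vector, degree, and weights are illustrative and unrelated to the plate models of the paper.

```python
# Hedged sketch: B-spline basis functions by the Cox-de Boor recursion and their
# rational (NURBS) counterparts.  Knots, degree, and weights are illustrative.
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_basis(u, p, knots, weights):
    """Rational (NURBS) basis: R_i(u) = w_i N_{i,p}(u) / sum_j w_j N_{j,p}(u)."""
    n_funcs = len(knots) - p - 1
    N = np.array([bspline_basis(i, p, u, knots) for i in range(n_funcs)])
    return weights * N / np.dot(weights, N)

# Quadratic example on an open knot vector; weights != 1 make the basis rational.
knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)
weights = np.array([1.0, 0.8, 1.2, 1.0])
print(nurbs_basis(0.3, 2, knots, weights))   # sums to 1 (partition of unity)
```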

194 citations


Journal ArticleDOI
TL;DR: The possibility of directly coupling the finite cell method to CSG, without any need to mesh the three-dimensional domain, is discussed, and a combination of the best features of the two approaches, IGA and FCM, is explored, closely following ideas of the recently introduced shell FCM.

177 citations


Journal ArticleDOI
TL;DR: Numerical experiments show that this intervention allows for stable nonlinear FCM analysis, preserving the full range of advantages of linear elastic FCM, in particular exponential rates of convergence.
Abstract: The Finite Cell Method (FCM) is an embedded domain method, which combines the fictitious domain approach with high-order finite elements, adaptive integration, and weak imposition of unfitted Dirichlet boundary conditions. For smooth problems, FCM has been shown to achieve exponential rates of convergence in energy norm, while its structured cell grid guarantees simple mesh generation irrespective of the geometric complexity involved. The present contribution first unhinges the FCM concept from a special high-order basis. Several benchmarks of linear elasticity and a complex proximal femur bone with inhomogeneous material demonstrate that for small deformation analysis, FCM works equally well with basis functions of the p-version of the finite element method or high-order B-splines. Turning to large deformation analysis, it is then illustrated that a straightforward geometrically nonlinear FCM formulation leads to the loss of uniqueness of the deformation map in the fictitious domain. Therefore, a modified FCM formulation is introduced, based on repeated deformation resetting, which assumes for the fictitious domain the deformation-free reference configuration after each Newton iteration. Numerical experiments show that this intervention allows for stable nonlinear FCM analysis, preserving the full range of advantages of linear elastic FCM, in particular exponential rates of convergence. Finally, the weak imposition of unfitted Dirichlet boundary conditions via the penalty method, the robustness of FCM under severe mesh distortion, and the large deformation analysis of a complex voxel-based metal foam are addressed.

173 citations


Journal ArticleDOI
TL;DR: This work presents a novel discretization scheme that adaptively and systematically builds the rapid oscillations of the Kohn-Sham orbitals around the nuclei as well as environmental effects into the basis functions.

Journal ArticleDOI
TL;DR: This work presents a detailed analysis of a measured SF to give experimental evidence that 3-D MPI encodes information using a set of 3-D spatial patterns or basis functions that is stored in the SF.
Abstract: Magnetic particle imaging (MPI) is a new tomographic imaging approach that can quantitatively map magnetic nanoparticle distributions in vivo. It is capable of volumetric real-time imaging at particle concentrations low enough to enable clinical applications. For image reconstruction in 3-D MPI, a system function (SF) is used, which describes the relation between the acquired MPI signal and the spatial origin of the signal. The SF depends on the instrumental configuration, the applied field sequence, and the magnetic particle characteristics. Its properties reflect the quality of the spatial encoding process. This work presents a detailed analysis of a measured SF to give experimental evidence that 3-D MPI encodes information using a set of 3-D spatial patterns or basis functions that is stored in the SF. This resembles filling 3-D k-space in magnetic resonance imaging, but is faster since all information is gathered simultaneously over a broad acquisition bandwidth. A frequency domain analysis shows that the finest structures that can be encoded with the presented SF are as small as 0.6 mm. SF simulations are performed to demonstrate that larger particle cores extend the set of basis functions towards higher resolution and that the experimentally observed spatial patterns require the existence of particles with core sizes of about 30 nm in the calibration sample. A simple formula is presented that qualitatively describes the basis functions to be expected at a certain frequency.

Journal ArticleDOI
TL;DR: A Bayesian formulation that is ideally suited to combining information of physical and probabilistic natures is presented, which results in a robust regularization criterion with no more than one minimum.
Abstract: The reconstruction of acoustical sources from discrete field measurements is a difficult inverse problem that has been approached in different ways. Classical methods (beamforming, near-field acoustical holography, inverse boundary elements, wave superposition, equivalent sources, etc.) all consist—implicitly or explicitly—in interpolating the measurements onto some spatial functions whose propagation is known and in reconstructing the source field by retropropagation. This raises the fundamental question as to whether, for a given source topology and array geometry, there exists an optimal interpolation basis which minimizes the reconstruction error. This paper provides a general answer to this question, by proceeding from a Bayesian formulation that is ideally suited to combining information of physical and probabilistic natures. The main findings are the following: (1) The optimal basis functions are the M eigen-functions of a specific continuous-discrete propagation operator, with M being the number of microphones in the array. (2) The a priori inclusion of spatial information on the source field causes super-resolution according to a phenomenon coined "Bayesian focusing". (3) The approach is naturally endowed with an internal regularization mechanism and results in a robust regularization criterion with no more than one minimum. (4) It admits classical methods as particular cases.
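
A minimal sketch of the retropropagation step common to these methods, using an SVD-based Tikhonov-regularized inverse of a placeholder propagation matrix; the Bayesian choice of basis and regularization parameter advocated in the paper is not reproduced, and all sizes below are illustrative.

```python
# Hedged sketch: regularized inversion of a propagation operator to recover
# source coefficients from microphone measurements.  G, eta, and the source
# field are placeholders.
import numpy as np

rng = np.random.default_rng(3)
m_mics, n_src = 32, 64                       # microphones, source basis coefficients
G = rng.standard_normal((m_mics, n_src))     # placeholder propagation operator
q_true = np.zeros(n_src); q_true[:5] = 1.0   # toy source field
p = G @ q_true + 0.01 * rng.standard_normal(m_mics)   # measured pressures

# Tikhonov-regularized inverse via the SVD of G: q = V diag(s/(s^2 + eta^2)) U^T p
U, s, Vt = np.linalg.svd(G, full_matrices=False)
eta = 0.1
q_hat = Vt.T @ ((s / (s ** 2 + eta ** 2)) * (U.T @ p))
print(np.linalg.norm(q_hat - q_true))
```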

Journal ArticleDOI
TL;DR: A local radial basis function method is applied to numerically approximate the solutions of time-dependent advection–diffusion–reaction and diffusion–reaction equations, including a chemotaxis model and a Turing system from biology.

Journal ArticleDOI
TL;DR: In this article, a new approach to model order reduction of the Navier-Stokes equations at high Reynolds number is proposed, which does not rely on empirical turbulence modeling or modification of the Navier-Stokes equations.
Abstract: A new approach to model order reduction of the Navier-Stokes equations at high Reynolds number is proposed. Unlike traditional approaches, this method does not rely on empirical turbulence modeling or modification of the Navier-Stokes equations. It provides spatial basis functions different from the usual proper orthogonal decomposition basis function in that, in addition to optimally representing the training data set, the new basis functions also provide stable and accurate reduced-order models. The proposed approach is illustrated with two test cases: two-dimensional flow inside a square lid-driven cavity and a two-dimensional mixing layer.
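
For context, a minimal sketch of the standard POD basis extraction that the new approach departs from: spatial basis functions are obtained as left singular vectors of a snapshot matrix (synthetic placeholder data here); the paper's stability-preserving basis construction is not shown.

```python
# Hedged sketch: proper orthogonal decomposition (POD) spatial basis functions
# from an SVD of synthetic snapshot data, plus reconstruction from r modes.
import numpy as np

rng = np.random.default_rng(4)
n_space, n_snapshots, r = 400, 80, 5
# Synthetic snapshots: a few smooth spatial structures with random time amplitudes.
x = np.linspace(0.0, 1.0, n_space)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(8)], axis=1)
snapshots = modes @ rng.standard_normal((8, n_snapshots))

# POD basis = left singular vectors of the (mean-subtracted) snapshot matrix.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
pod_basis = U[:, :r]                          # spatial basis functions

# Reduced-order coordinates of each snapshot and the reconstruction error.
a = pod_basis.T @ (snapshots - mean)
recon = mean + pod_basis @ a
print(np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots))
```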

Journal ArticleDOI
TL;DR: In this paper, a new procedure for estimating the bearing residual useful life (RUL) by combining data-driven and model-based techniques is presented, where relevance vector machines (RVMs) are used to select a low number of significant basis functions, called Relevant Vectors (RVs), and exponential regression is used to compute and continuously update residual life estimations.

Journal ArticleDOI
TL;DR: A novel algorithm, based on a hybrid Gaussian and plane waves (GPW) approach, is developed for the canonical second-order Møller-Plesset perturbation energy (MP2) of finite and extended systems and is highly efficient for condensed phase systems.
Abstract: A novel algorithm, based on a hybrid Gaussian and plane waves (GPW) approach, is developed for the canonical second-order Moller-Plesset perturbation energy (MP2) of finite and extended systems. The key aspect of the method is that the electron repulsion integrals (ia|λσ) are computed by direct integration between the products of Gaussian basis functions λσ and the electrostatic potential arising from a given occupied-virtual pair density ia. The electrostatic potential is obtained in a plane waves basis set after solving the Poisson equation in Fourier space. In particular, for condensed phase systems, this scheme is highly efficient. Furthermore, our implementation has low memory requirements and displays excellent parallel scalability up to 100 000 processes. In this way, canonical MP2 calculations for condensed phase systems containing hundreds of atoms or more than 5000 basis functions can be performed within minutes, while systems up to 1000 atoms and 10 000 basis functions remain feasible. Solid LiH has been employed as a benchmark to study basis set and system size convergence. Lattice constants and cohesive energies of various molecular crystals have been studied with MP2 and double-hybrid functionals.
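
A minimal sketch of the quantity such an algorithm targets, the canonical closed-shell MP2 correlation energy assembled from (ia|jb) integrals and orbital energies; the integrals and energies below are random placeholders, and the paper's GPW evaluation of the integrals via the Poisson equation is not reproduced.

```python
# Hedged sketch: canonical closed-shell MP2 energy from placeholder (ia|jb)
# integrals and orbital energies.
import numpy as np

rng = np.random.default_rng(7)
n_occ, n_virt = 4, 8

# Placeholder orbital energies (occupied below virtual) and (ia|jb) integrals.
eps_occ = np.sort(rng.uniform(-2.0, -0.5, n_occ))
eps_virt = np.sort(rng.uniform(0.5, 2.0, n_virt))
ovov = rng.standard_normal((n_occ, n_virt, n_occ, n_virt)) * 0.01
ovov = 0.5 * (ovov + ovov.transpose(2, 3, 0, 1))        # (ia|jb) = (jb|ia)

# Denominators e_i + e_j - e_a - e_b and the closed-shell MP2 energy expression:
#   E_MP2 = sum_{ijab} (ia|jb) [2 (ia|jb) - (ib|ja)] / (e_i + e_j - e_a - e_b)
denom = (eps_occ[:, None, None, None] - eps_virt[None, :, None, None]
         + eps_occ[None, None, :, None] - eps_virt[None, None, None, :])
e_mp2 = np.einsum('iajb,iajb->', ovov * (2.0 * ovov - ovov.transpose(0, 3, 2, 1)), 1.0 / denom)
print(e_mp2)
```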

Journal ArticleDOI
TL;DR: The analysis suggests that using the Chebyshev measure to precondition the ℓ1-minimization, which has been shown to be numerically advantageous in one dimension in the literature, may in fact become less efficient in high dimensions.
Abstract: The idea of ℓ1-minimization is the basis of the widely adopted compressive sensing method for function approximation. In this paper, we extend its application to high-dimensional stochastic collocation methods. To facilitate practical implementation, we employ orthogonal polynomials, particularly Legendre polynomials, as basis functions, and focus on the cases where the dimensionality is high such that one can not afford to construct high-degree polynomial approximations. We provide theoretical analysis on the validity of the approach. The analysis also suggests that using the Chebyshev measure to precondition the ℓ1-minimization, which has been shown to be numerically advantageous in one dimension in the literature, may in fact become less efficient in high dimensions. Numerical tests are provided to examine the performance of the methods and validate the theoretical findings.
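
A minimal sketch of the underlying compressive-sensing setup: a sparse expansion in Legendre polynomials recovered from few random samples. For simplicity the ℓ1 problem is replaced here by its LASSO relaxation, solved with plain iterative soft thresholding; sizes and the penalty are illustrative and recovery is not guaranteed for arbitrary settings.

```python
# Hedged sketch: sparse recovery of Legendre coefficients from few samples via
# iterative soft thresholding (a simple stand-in for a dedicated l1 solver).
import numpy as np

rng = np.random.default_rng(5)
n_samples, max_degree, sparsity = 40, 60, 5

# Sparse "true" coefficient vector in the Legendre basis.
c_true = np.zeros(max_degree + 1)
c_true[rng.choice(max_degree + 1, sparsity, replace=False)] = rng.standard_normal(sparsity)

# Measurement matrix: Legendre polynomials evaluated at random points in [-1, 1].
x = rng.uniform(-1.0, 1.0, n_samples)
A = np.polynomial.legendre.legvander(x, max_degree)
y = A @ c_true

# Iterative soft thresholding for  min_c  0.5*||A c - y||^2 + lam*||c||_1.
lam, step = 1e-3, 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros_like(c_true)
for _ in range(5000):
    g = c - step * A.T @ (A @ c - y)
    c = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)

print("recovery error:", np.linalg.norm(c - c_true))
```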

Journal ArticleDOI
TL;DR: In this paper, an isogeometric finite element method is presented for natural frequency analysis of thin plate problems of various geometries, in which the non-uniform rational B-spline (NURBS) basis functions are applied both for approximation of the thin plate deflection field and for description of the geometry.

Book ChapterDOI
01 Jan 2012
TL;DR: In this paper, a review of the least-squares Monte Carlo approach for approximating the solution of backward stochastic differential equations (BSDEs), first suggested by Gobet et al., is given, and the use of basis functions which form a system of martingales is proposed.
Abstract: In this paper we first give a review of the least-squares Monte Carlo approach for approximating the solution of backward stochastic differential equations (BSDEs) first suggested by Gobet et al. (Ann Appl Probab., 15:2172–2202, 2005). We then propose the use of basis functions, which form a system of martingales, and explain how the least-squares Monte Carlo scheme can be simplified by exploiting the martingale property of the basis functions. We partially compare the convergence behavior of the original scheme and the scheme based on martingale basis functions, and provide several numerical examples related to option pricing problems under different interest rates for borrowing and investing.
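
A minimal sketch of the regression step on which least-squares Monte Carlo schemes are built: projecting simulated future values onto basis functions of the current state. Plain monomials are used here for illustration; the martingale basis functions and the BSDE time-stepping of the paper are not reproduced.

```python
# Hedged sketch: estimate E[Y_{t+1} | X_t] by least-squares regression of
# simulated Y_{t+1} values onto basis functions of X_t.
import numpy as np

rng = np.random.default_rng(6)
n_paths, dt, sigma = 100_000, 0.1, 0.3

# One Euler step of a driftless geometric Brownian motion, X_{t+1} given X_t.
X_t = np.exp(rng.standard_normal(n_paths) * 0.2)          # current state
X_next = X_t * np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
Y_next = np.maximum(1.0 - X_next, 0.0)                     # e.g. a put-like payoff

# Basis functions of the current state: 1, x, x^2, x^3.
Phi = np.vander(X_t, 4, increasing=True)
coef, *_ = np.linalg.lstsq(Phi, Y_next, rcond=None)

# The fitted linear combination is the estimate of E[Y_{t+1} | X_t = x].
x_grid = np.array([0.8, 1.0, 1.2])
print(np.vander(x_grid, 4, increasing=True) @ coef)
```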

Journal ArticleDOI
TL;DR: The hp-d-adaptive finite cell method, as discussed by the authors, combines the fictitious domain approach with the p-version of the finite element method and adaptive integration to achieve high convergence rates and simple mesh generation, irrespective of the geometric complexity involved.
Abstract: The finite cell method (FCM) combines the fictitious domain approach with the p-version of the finite element method and adaptive integration. For problems of linear elasticity, it offers high convergence rates and simple mesh generation, irrespective of the geometric complexity involved. This article presents the integration of the FCM into the framework of nonlinear finite element technology. However, the penalty parameter of the fictitious domain is restricted to a few orders of magnitude in order to maintain local uniqueness of the deformation map. As a consequence of the weak penalization, nonlinear strain measures provoke excessive stress oscillations in the cells cut by geometric boundaries, leading to a low algebraic rate of convergence. Therefore, the FCM approach is complemented by a local overlay of linear hierarchical basis functions in the sense of the hp-d method, which synergetically uses the h-adaptivity of the integration scheme. Numerical experiments show that the hp-d overlay effectively reduces oscillations and permits stronger penalization of the fictitious domain by stabilizing the deformation map. The hp-d-adaptive FCM is thus able to restore high convergence rates for the geometrically nonlinear case, while preserving the easy meshing property of the original FCM. Accuracy and performance of the present scheme are demonstrated by several benchmark problems in one, two, and three dimensions and the nonlinear simulation of a complex foam sample.

Journal ArticleDOI
TL;DR: The diverse features and recent advances of the present Cryscor version are illustrated by exemplary applications to various systems: the adsorption of an argon monolayer on the MgO (100) surface, the rolling energy of a boron nitride nanoscroll, the relative stability of different aluminosilicates, the inclusion energy of methane in methane-ice-clathrates, and the effect of electron correlation on charge and momentum density of α-quartz.
Abstract: CRYSCOR is a periodic post-Hartree–Fock program based on local functions in direct space, i.e., Wannier functions and projected atomic orbitals. It uses atom centered Gaussians as basis functions. The Hartree–Fock reference, as well as symmetry information, is provided by the CRYSTAL program. CRYSCOR presently features an efficient and parallel implementation of periodic local second order Moller–Plesset perturbation theory (MP2), which allows us to study 1D-, 2D- and 3D-periodic systems beyond 1000 basis functions per unit cell. Apart from the correlation energy also the MP2 density matrix, and from that the Compton profile, are available. Very recently, a new module for calculating excitonic band gaps at the uncorrelated Configuration-Interaction-Singles (CIS) level has been added. Other advancements include new extrapolation techniques for calculating surface adsorption on semi-infinite solids. In this paper the diverse features and recent advances of the present CRYSCOR version are illustrated by exemplary applications to various systems: the adsorption of an argon monolayer on the MgO (100) surface, the rolling energy of a boron nitride nanoscroll, the relative stability of different aluminosilicates, the inclusion energy of methane in methane–ice-clathrates, and the effect of electron correlation on charge and momentum density of α-quartz. Furthermore, we present some first tentative CIS results for excitonic band gaps of simple 3D-crystals, and their dependence on the diffuseness of the basis set.

Journal ArticleDOI
Federico Chavez1, Claude Duhr1
TL;DR: In this paper, the authors studied one- and two-loop triangle integrals with massless propagators and all external legs off shell and showed that there is a kinematic region where the results can be expressed in terms of a basis of single-valued polylogarithms in one complex variable.
Abstract: We study one- and two-loop triangle integrals with massless propagators and all external legs off shell. We show that there is a kinematic region where the results can be expressed in terms of a basis of single-valued polylogarithms in one complex variable. The relevant space of single-valued functions can be determined a priori and the results take a strikingly simple and compact form when written in terms of this basis. We study the properties of the basis functions and illustrate how one can easily analytically continue our results to all kinematic regions where the external masses have the same sign.

Journal ArticleDOI
TL;DR: This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions.
Abstract: Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly less measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization.

Journal ArticleDOI
TL;DR: In this article, Gaussian functions for correlation of all core shells of elements from Z = 31 to Z = 118 have been optimized in relativistic singles and doubles CI calculations, performed on the shell of highest angular momentum for each principal quantum number.
Abstract: Gaussian functions for correlation of all core shells of elements from Z = 31 to Z = 118 have been optimized in relativistic singles and doubles CI calculations, performed on the shell of highest angular momentum for each principal quantum number. The SCF functions were derived from the double-zeta, triple-zeta, and quadruple-zeta basis sets previously optimized by the author. Only those Gaussian functions that are not represented in the SCF basis sets were optimized. The functions are available from the Dirac program web site, http://dirac.chem.sdu.dk .

Journal ArticleDOI
TL;DR: The paper presents the improved element-free Galerkin (IEFG) method for three-dimensional wave propagation, which uses an orthogonal function system with a weight function as the basis function to construct the shape function.
Abstract: The paper presents the improved element-free Galerkin (IEFG) method for three-dimensional wave propagation. The improved moving least-squares (IMLS) approximation is employed to construct the shape function, which uses an orthogonal function system with a weight function as the basis function. Compared with the conventional moving least-squares (MLS) approximation, the algebraic equation system in the IMLS approximation is not ill-conditioned, and can be solved directly without deriving the inverse matrix. Because there are fewer coefficients in the IMLS than in the MLS approximation, fewer nodes are selected in the IEFG method than in the element-free Galerkin method. Thus, the IEFG method has a higher computing speed. In the IEFG method, the Galerkin weak form is employed to obtain a discretized system equation, and the penalty method is applied to impose the essential boundary condition. The traditional difference method for two-point boundary value problems is selected for the time discretization. As the wave equations and the boundary-initial conditions depend on time, the scaling parameter, number of nodes and the time step length are considered for the convergence study.
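
For reference, a minimal sketch of the conventional MLS shape functions that the IMLS approximation reworks with an orthogonal basis; the 1-D node layout, linear basis, and cubic spline weight below are illustrative assumptions.

```python
# Hedged sketch: moving least-squares (MLS) shape functions at a point, from a
# locally weighted polynomial fit over nearby nodes.
import numpy as np

def mls_shape_functions(x, nodes, support=0.35):
    """Shape functions phi_i(x) for a linear basis p = [1, x] with a cubic weight."""
    r = np.abs(x - nodes) / support
    w = np.where(r <= 1.0, 1.0 - 3.0 * r**2 + 2.0 * r**3, 0.0)   # weight function
    P = np.stack([np.ones_like(nodes), nodes], axis=1)            # p(x_i) for all nodes
    A = (P * w[:, None]).T @ P                                    # moment matrix A(x)
    px = np.array([1.0, x])
    # phi_i(x) = p(x)^T A(x)^{-1} w_i(x) p(x_i)
    return (np.linalg.solve(A, px) @ P.T) * w

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.42, nodes)
print(phi.sum())                      # ~1.0: partition of unity
print(phi @ nodes)                    # ~0.42: reproduces linear fields
```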

Journal ArticleDOI
TL;DR: In this paper, a new formula for the evaluation of the modal radiation Q factor is derived, which is based on the electric field integral equation, Delaunay triangulation, method of moments, Rao-Wilton-Glisson basis function and the theory of characteristic modes.
Abstract: A new formula for the evaluation of the modal radiation Q factor is derived. The total Q of selected structures is to be calculated from the set of eigenmodes with associated eigen-energies and eigen-powers. Thanks to the analytical expression of these quantities, the procedure is highly accurate, respecting arbitrary current densities flowing along the radiating device. The electric field integral equation, Delaunay triangulation, method of moments, Rao-Wilton-Glisson basis functions and the theory of characteristic modes constitute the underlying theoretical background. In terms of the modal radiation Q, all necessary relations are presented and the essential points of implementation are discussed. Calculation of the modal energies and Q factors enables us to study the effect of the radiating shape separately from the feeding. This approach can be very helpful in antenna design. A few examples are given, including a thin-strip dipole, two coupled dipoles, a bowtie antenna and an electrically small meander folded dipole. Results are compared with prior estimates and some observations are discussed. Good agreement is observed for different methods.

Journal ArticleDOI
TL;DR: ERKALE is a novel software program for computing X‐ray properties, such as ground‐state electron momentum densities, Compton profiles, and core and valence electron excitation spectra of atoms and molecules, which operates at Hartree–Fock or density‐functional level of theory and supports Gaussian basis sets of arbitrary angular momentum.
Abstract: ERKALE is a novel software program for computing X-ray properties, such as ground-state electron momentum densities, Compton profiles, and core and valence electron excitation spectra of atoms and molecules. The program operates at Hartree–Fock or density-functional level of theory and supports Gaussian basis sets of arbitrary angular momentum and a wide variety of exchange-correlation functionals. ERKALE includes modern convergence accelerators such as Broyden and ADIIS and it is suitable for general use, as calculations with thousands of basis functions can routinely be performed on desktop computers. Furthermore, ERKALE is written in an object oriented manner, making the code easy to understand and to extend to new properties while being ideal also for teaching purposes.

Journal ArticleDOI
TL;DR: The proposed local radial basis function collocation method (LRBFCM) is efficient, accurate and stable for flow with reasonably high Reynolds numbers, and is compared with the analytical solution as well as with other numerical methods.

Journal ArticleDOI
TL;DR: An improved complex variable element-free Galerkin (ICVEFG) method, a novel element-free Galerkin-type method, is presented for two-dimensional large deformation problems and offers greater precision and efficiency.