scispace - formally typeset

Showing papers on "Basis function published in 2009"


Journal ArticleDOI
TL;DR: The construction of transferable, hierarchical basis sets is demonstrated, allowing calculations to range from qualitative, tight-binding-like accuracy to meV-level total energy convergence with the basis set, since all basis functions are strictly localized.

2,178 citations


Journal ArticleDOI
TL;DR: It is shown that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.

481 citations


Proceedings ArticleDOI
01 May 2009
TL;DR: The latest ideas for tailoring these expansion methods to numerical integration approaches will be explored, in which expansion formulations are modified to best synchronize with tensor-product quadrature and Smolyak sparse grids using linear and nonlinear growth rules.
Abstract: Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor product or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for a range of probabilistic analysis problems. In addition, analytic features of the expansions can be exploited for moment estimation and stochastic sensitivity analysis. In this paper, the latest ideas for tailoring these expansion methods to numerical integration approaches will be explored, in which expansion formulations are modified to best synchronize with tensor-product quadrature and Smolyak sparse grids using linear and nonlinear growth rules. The most promising stochastic expansion approaches are then carried forward for use in new approaches for mixed aleatory-epistemic UQ, employing second-order probability approaches, and design under uncertainty, employing bilevel, sequential, and multifidelity approaches.
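The regression route to estimating PCE coefficients mentioned above can be illustrated in a few lines. This is a minimal sketch, assuming a single uniform random variable on [-1, 1] with a Legendre polynomial basis and a toy response function; it is not the tailored, quadrature-synchronized machinery the paper develops.

```python
import numpy as np

# Sketch: non-intrusive PCE for one uniform random variable on [-1, 1],
# with Legendre polynomials as the orthogonal basis and the expansion
# coefficients estimated by linear regression on response samples.
# The response f is a toy stand-in for an expensive simulation.
def pce_coefficients(f, degree, n_samples, rng):
    xi = rng.uniform(-1.0, 1.0, n_samples)            # input samples
    A = np.polynomial.legendre.legvander(xi, degree)  # basis matrix P_0..P_degree
    y = f(xi)                                         # response evaluations
    c, *_ = np.linalg.lstsq(A, y, rcond=None)         # regression fit
    return c

# Toy response: exactly 1*P0 + 2*P1 + 0.5*P2, so the regression should
# recover the coefficients [1.0, 2.0, 0.5]; c[0] is then the mean.
f = lambda x: 1.0 + 2.0 * x + 0.5 * (3.0 * x**2 - 1.0) / 2.0
c = pce_coefficients(f, 2, 50, np.random.default_rng(0))
```

Because the orthogonal basis is matched to the uniform input measure, moments follow analytically from the coefficients, e.g. the mean is simply c[0].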

354 citations


Proceedings ArticleDOI
05 Jan 2009
TL;DR: Performance of PCE and SC is shown to be very similar; when differences are evident, SC is the consistent winner over traditional PCE formulations, although with nontraditional tailoring of the PCE form this performance gap can be reduced and, in some cases, eliminated.
Abstract: Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor products or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for general probabilistic analysis problems. In this paper, we explore the relative performance of these methods using a number of simple algebraic test problems, and analyze the observed differences. In these computational experiments, the performance of PCE and SC is shown to be very similar, although when differences are evident, SC is the consistent winner over traditional PCE formulations. This stems from the practical difficulty of optimally synchronizing the form of the PCE with the integration approach being employed, resulting in slight over- or under-integration of the prescribed expansion form. With additional nontraditional tailoring of the PCE form, it is shown that this performance gap can be reduced, and in some cases, eliminated.

341 citations


Journal ArticleDOI
TL;DR: It is shown that temporal basis functions calculated by subjecting the training data to principal component analysis (PCA) can be used to constrain the reconstruction such that the temporal resolution is improved.
Abstract: The k-t broad-use linear acquisition speed-up technique (BLAST) has become widespread for reducing image acquisition time in dynamic MRI. In its basic form k-t BLAST speeds up the data acquisition by undersampling k-space over time (referred to as k-t space). The resulting aliasing is resolved in the Fourier reciprocal x-f space (x = spatial position, f = temporal frequency) using an adaptive filter derived from a low-resolution estimate of the signal covariance. However, this filtering process tends to increase the reconstruction error or lower the achievable acceleration factor. This is problematic in applications exhibiting a broad range of temporal frequencies such as free-breathing myocardial perfusion imaging. We show that temporal basis functions calculated by subjecting the training data to principal component analysis (PCA) can be used to constrain the reconstruction such that the temporal resolution is improved. The presented method is called k-t PCA.
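The PCA-derived temporal basis idea can be sketched as follows. This is an illustrative NumPy outline assuming fully sampled training time curves; it shows only how the temporal basis is extracted and applied, not the k-t undersampled reconstruction itself.

```python
import numpy as np

# Sketch: the basis-extraction step of a k-t PCA-style approach.
# Temporal basis functions are obtained by PCA of training time
# curves; each pixel's time curve is then represented with the
# first few principal components.
def temporal_pca_basis(training, n_components):
    # training: (n_pixels, n_frames) array of time curves
    centered = training - training.mean(axis=0)
    # rows of Vt are the principal temporal basis functions
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:n_components]

def project_onto_basis(curves, basis):
    # orthonormal basis rows => least-squares coefficients are dot products
    return (curves @ basis.T) @ basis
```

Constraining the reconstruction to the span of a few such components is what improves the trade-off between acceleration and temporal resolution in the paper.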

299 citations


Journal ArticleDOI
TL;DR: This paper investigates the approximation properties of the k-method with the theory of Kolmogorov n-widths and conducts a numerical study in which the n-width and sup-inf are computed for a number of one-dimensional cases.

265 citations


Journal ArticleDOI
TL;DR: Hartree-Fock exchange with a truncated Coulomb operator is extended to Gaussian basis sets, leading to a stable and accurate procedure for evaluating Hartree-Fock exchange at the Γ-point; it is also found that standard hybrid functionals can be transformed into short-range functionals without loss of accuracy.
Abstract: Hartree-Fock exchange with a truncated Coulomb operator has recently been discussed in the context of periodic plane-waves calculations [Spencer, J.; Alavi, A. Phys. Rev. B: Solid State, 2008, 77, 193110]. In this work, this approach is extended to Gaussian basis sets, leading to a stable and accurate procedure for evaluating Hartree-Fock exchange at the Γ-point. Furthermore, it has been found that standard hybrid functionals can be transformed into short-range functionals without loss of accuracy. The well-defined short-range nature of the truncated exchange operator can naturally be exploited in integral screening procedures and makes this approach interesting for both condensed phase and gas phase systems. The presented Hartree-Fock implementation is massively parallel and scales up to tens of thousands of cores. This makes it feasible to perform highly accurate calculations on systems containing thousands of atoms or tens of thousands of basis functions. The applicability of this scheme is demonstrated by calculating the cohesive energy of a LiH crystal close to the Hartree-Fock basis set limit and by performing an electronic structure calculation of a complete protein (rubredoxin) in solution with a large and flexible basis set.

247 citations


Journal ArticleDOI
TL;DR: In both wave function and density functional calculations, the resulting basis sets reduce the basis set superposition error almost as much as the augmented correlation-consistent basis sets, although they are much smaller and give very similar energetic predictions to the much larger aug-cc-pVxZ basis sets.
Abstract: We combine the diffuse basis functions from the 6-31+G basis set of Pople and co-workers with the correlation-consistent basis sets of Dunning and co-workers. In both wave function and density functional calculations, the resulting basis sets reduce the basis set superposition error almost as much as the augmented correlation-consistent basis sets, although they are much smaller. In addition, in density functional calculations the new basis sets, called cc-pVxZ+ where x = D, T, Q, ..., or x = D+d, T+d, Q+d, ..., give very similar energetic predictions to the much larger aug-cc-pVxZ basis sets. However, energetics calculated from correlated wave function calculations are more slowly convergent with respect to the addition of diffuse functions. We also examined basis sets with the same number and type of functions as the cc-pVxZ+ sets but using the diffuse exponents of the aug-cc-pVxZ basis sets and found very similar performance to cc-pVxZ+; these basis sets are called minimally augmented cc-pVxZ, which we abbreviate as maug-cc-pVxZ.

222 citations


Journal ArticleDOI
TL;DR: The numerical results show that OO-SCS-MP2 is a major improvement in electronically complicated situations, such as those represented by radicals or by transition states, where spin contamination often greatly deteriorates the quality of the conventional MP2 and SCS-MP2 methods.
Abstract: An efficient implementation of the orbital-optimized second-order Moller-Plesset perturbation theory (OO-MP2) within the resolution of the identity (RI) approximation is reported. Both conventional MP2 and spin-component scaled (SCS-MP2) variants are considered, and an extensive numerical investigation of the accuracy of these approaches is presented. This work is closely related to earlier work of Lochan, R. C.; Head-Gordon, M. J. Chem. Phys. 2007, 126. Orbital optimization is achieved by making the Hylleraas functional together with the energy of the reference determinant stationary with respect to variations of the double excitation amplitudes and the molecular orbital rotation parameters. A simple iterative scheme is proposed that usually leads to convergence within 5-15 iterations. The applicability of the method to larger molecules (up to ∼1000-2000 basis functions) is demonstrated. The numerical results show that OO-SCS-MP2 is a major improvement in electronically complicated situations, such as represented by radicals or by transition states where spin contamination often greatly deteriorates the quality of the conventional MP2 and SCS-MP2 methods. The OO-(SCS-)MP2 approach reduces the error by a factor of 3-5 relative to the standard (SCS-)MP2. For closed-shell main group elements, no significant improvement in the accuracy relative to the already excellent SCS-MP2 method is observed. In addition, the problems of all MP2 variants with 3d transition-metal complexes are not solved by orbital optimization. The close relationship of the OO-MP2 method to the approximate second-order coupled cluster method (CC2) is pointed out. Both methods have comparable computational requirements. Thus, the OO-MP2 method emerges as a very useful tool for computational quantum chemistry.

204 citations


Journal ArticleDOI
TL;DR: Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible.
Abstract: Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
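A generic pivoted Cholesky decomposition, the building block behind this kind of auxiliary basis generation, might look as follows. This is a sketch of the standard algorithm with a diagonal threshold playing the role of the decomposition threshold; it is not the paper's two-stage auxiliary basis procedure.

```python
import numpy as np

# Sketch: pivoted (incomplete) Cholesky decomposition of a symmetric
# positive semidefinite matrix M, stopping once the largest remaining
# diagonal element falls below a threshold. The retained pivots play
# the role of the Cholesky-selected functions.
def pivoted_cholesky(M, tol=1e-8):
    M = M.astype(float)
    n = M.shape[0]
    L = np.zeros((n, 0))
    d = np.diag(M).copy()          # running residual diagonal
    for _ in range(n):
        p = int(np.argmax(d))      # pivot: largest residual diagonal
        if d[p] <= tol:
            break
        col = (M[:, p] - L @ L[p]) / np.sqrt(d[p])
        L = np.hstack([L, col[:, None]])
        d -= col**2
    return L                       # M ≈ L @ L.T, rank = L.shape[1]
```

Raising the threshold yields fewer, more local columns; the paper exploits exactly this kind of threshold control over the locality of the fit.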

185 citations


Journal ArticleDOI
TL;DR: In this paper, the geometric properties of the design are embedded into the NURBS basis functions and the control points, whose perturbation naturally results in shape changes; exact geometric models are thus used in both response and shape sensitivity analyses, where the normal vector and curvature are continuous over the whole design space.
Abstract: Finite element based shape optimization has some difficulties in the parameterization of design domain. In isogeometric approach, however, the geometric properties of design are embedded into the NURBS basis functions and the control points whose perturbation naturally results in shape changes. Thus, exact geometric models can be used in both response and shape sensitivity analyses, where normal vector and curvature are continuous over the whole design space so that enhanced shape sensitivity can be obtained. In the problems of shape optimal design, refinements and design changes are easily implemented within the isogeometric framework, which maintains exact geometry without subsequent communication with CAD description. The variation of control points results in shape changes and is continuous over the whole design space. Through numerical examples, the developed isogeometric sensitivity is verified to demonstrate excellent agreements with finite difference sensitivity. Also, the proposed method works very well in various shape optimization problems.

Journal ArticleDOI
TL;DR: A neural network based multi-label learning algorithm named Ml-rbf is proposed, which is derived from the traditional radial basis function (RBF) methods and achieves highly competitive performance to other well-established multi- label learning algorithms.
Abstract: Multi-label learning deals with the problem where each instance is associated with multiple labels simultaneously. The task of this learning paradigm is to predict the label set for each unseen instance, through analyzing training instances with known label sets. In this paper, a neural network based multi-label learning algorithm named Ml-rbf is proposed, which is derived from the traditional radial basis function (RBF) methods. Briefly, the first layer of an Ml-rbf neural network is formed by conducting clustering analysis on instances of each possible class, where the centroid of each clustered groups is regarded as the prototype vector of a basis function. After that, second layer weights of the Ml-rbf neural network are learned by minimizing a sum-of-squares error function. Specifically, information encoded in the prototype vectors corresponding to all classes are fully exploited to optimize the weights corresponding to each specific class. Experiments on three real-world multi-label data sets show that Ml-rbf achieves highly competitive performance to other well-established multi-label learning algorithms.
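The two-layer structure described above can be illustrated with a minimal RBF network. In this sketch the prototype vectors are simply taken to be training points (the paper obtains them by per-class clustering), and the second-layer weights are fit by linear least squares, i.e. by minimizing a sum-of-squares error.

```python
import numpy as np

# Sketch: a minimal two-layer RBF network in the spirit of Ml-rbf.
# Layer 1: Gaussian basis functions centered at prototype vectors.
# Layer 2: linear output weights fit by least squares on label targets.
def rbf_features(X, centers, sigma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))   # (n_samples, n_centers)

def fit_rbf(X, Y, centers, sigma):
    Phi = rbf_features(X, centers, sigma)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # sum-of-squares fit
    return W

def predict_rbf(X, centers, sigma, W):
    return rbf_features(X, centers, sigma) @ W    # real-valued label scores
```

For multi-label prediction, Y holds one column per label and the score columns are thresholded to produce the predicted label set.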

Journal ArticleDOI
TL;DR: It is shown for the case of single-frequency microwave tomography that the imaging accuracy is comparable to that obtained when the original discrete mesh is used, despite the reduction of the dimension of the inverse problem.
Abstract: Breast imaging via microwave tomography involves estimating the distribution of dielectric properties within the patient's breast on a discrete mesh. The number of unknowns in the discrete mesh can be very large for 3-D imaging, and this results in computational challenges. We propose a new approach where the discrete mesh is replaced with a relatively small number of smooth basis functions. The dimension of the tomography problem is reduced by estimating the coefficients of the basis functions instead of the dielectric properties at each element in the discrete mesh. The basis functions are constructed using knowledge of the location of the breast surface. The number of functions used in the basis can be varied to balance resolution and computational complexity. The reduced dimension of the inverse problem enables application of a computationally efficient, multiple-frequency inverse scattering algorithm in 3-D. The efficacy of the proposed approach is verified using two 3-D anatomically realistic numerical breast phantoms. It is shown for the case of single-frequency microwave tomography that the imaging accuracy is comparable to that obtained when the original discrete mesh is used, despite the reduction of the dimension of the inverse problem. Results are also shown for a multiple-frequency algorithm where it is computationally challenging to use the original discrete mesh.

Journal ArticleDOI
TL;DR: To overcome the curse of dimensionality, a low-rank separated approximation of the solution of a stochastic partial differential equation with high-dimensional random input data is obtained using an alternating least-squares (ALS) scheme.

Journal ArticleDOI
27 Jul 2009
TL;DR: This work describes an optimization algorithm to minimize the deformation energy, which is robust, provably convergent, and easy to implement.
Abstract: A space deformation is a mapping from a source region to a target region within Euclidean space, which best satisfies some user-specified constraints. It can be used to deform shapes embedded in the ambient space and represented in various forms -- polygon meshes, point clouds or volumetric data. For a space deformation method to be useful, it should possess some natural properties: e.g. detail preservation, smoothness and intuitive control. A harmonic map from a domain Ω ⊂ R^d to R^d is a mapping whose d components are harmonic functions. Harmonic mappings are smooth and regular, and if their components are coupled in some special way, the mapping can be detail-preserving, making it a natural choice for space deformation applications. The challenge is to find a harmonic mapping of the domain, which will satisfy constraints specified by the user, yet also be detail-preserving, and intuitive to control. We generate harmonic mappings as a linear combination of a set of harmonic basis functions, which have a closed-form expression when the source region boundary is piecewise linear. This is done by defining an energy functional of the mapping, and minimizing it within the linear span of these basis functions. The resulting mapping is harmonic, and a natural "As-Rigid-As-Possible" deformation of the source region. Unlike other space deformation methods, our approach does not require an explicit discretization of the domain. It is shown to be much more efficient, yet generate comparable deformations to state-of-the-art methods. We describe an optimization algorithm to minimize the deformation energy, which is robust, provably convergent, and easy to implement.

Journal ArticleDOI
TL;DR: In this paper, the authors derived the first known numerical shallow water model on the sphere using radial basis function (RBF) spatial discretization, a novel numerical methodology that does not require any grid or mesh.
Abstract: The paper derives the first known numerical shallow water model on the sphere using radial basis function (RBF) spatial discretization, a novel numerical methodology that does not require any grid or mesh. In order to perform a study with regard to its spatial and temporal errors, two nonlinear test cases with known analytical solutions are considered. The first is a global steady-state flow with a compactly supported velocity field, while the second is an unsteady flow where features in the flow must be kept intact without dispersion. This behaviour is achieved by introducing forcing terms in the shallow water equations. Error and time stability studies are performed, both as the number of nodes is uniformly increased and as the shape parameter of the RBF is varied, especially in the flat basis function limit. Results show that the RBF method is spectral, giving exceptionally high accuracy for a low number of basis functions while being able to take unusually large time steps. In order to put it in the context of other commonly used global spectral methods on a sphere, comparisons are given with respect to spherical harmonics, double Fourier series and spectral element methods.
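The grid-free character of RBF discretizations is easy to see in a one-dimensional interpolation sketch. The Gaussian basis and shape parameter eps below are illustrative choices, not the paper's setup; only pairwise node distances enter, so no mesh or connectivity is needed.

```python
import numpy as np

# Sketch: mesh-free RBF interpolation on scattered nodes with a global
# Gaussian basis phi(r) = exp(-(eps*r)^2). The shape parameter eps
# controls basis flatness; flatter bases are more accurate but yield
# more ill-conditioned interpolation matrices.
def rbf_interpolant(nodes, values, eps):
    r = np.abs(nodes[:, None] - nodes[None, :])
    A = np.exp(-(eps * r) ** 2)              # symmetric interpolation matrix
    coeffs = np.linalg.solve(A, values)
    def s(x):
        rx = np.abs(np.asarray(x)[:, None] - nodes[None, :])
        return np.exp(-(eps * rx) ** 2) @ coeffs
    return s
```

The same distance-only construction carries over to nodes scattered on a sphere, which is what makes RBFs attractive for global spectral-like models without a grid.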

Journal ArticleDOI
TL;DR: A universal scheme for encoding multiple symbol streams using a single driven element surrounded by parasitic elements loaded with variable reactive loads, which is based on creating a MIMO system by expanding the far-field of a compact parasitic array into an orthogonal set of angular functions (basis).
Abstract: A universal scheme for encoding multiple symbol streams using a single driven element (and consequently a single radio frequency (RF) frontend) surrounded by parasitic elements (PE) loaded with variable reactive loads, is proposed in this paper. The proposed scheme is based on creating a MIMO system by expanding the far-field of a compact parasitic array into an orthogonal set of angular functions (basis). Independent information streams are encoded by means of angular variations of the far-field in the wavevector domain, rather than spatial variations as usually happens in conventional MIMO systems. The array can spatially multiplex the input streams by creating all the desired linear combinations (for a given modulation scheme) of the basis functions. The desired combinations are obtained by projecting the ratio of the symbols to be spatially multiplexed on the ratio of the basis functions' weights (complex coefficients), which is a function of the currents induced on the PE within the antenna domain, and controlled by the independent reactive loadings.

Journal ArticleDOI
TL;DR: In this article, a flexible equivalent current method is presented, which has been derived from a general purpose boundary integral equation solver, and works with arbitrary triangular surface meshes and Rao-Wilton-Glisson basis functions, where electric and/or magnetic surface current densities can be assumed.
Abstract: The radiation of any object in a homogeneous environment can be described by a set of equivalent sources. Different types of equivalent sources are feasible and all of them have their own benefits. Equivalent current methods are especially advantageous for irregular measurement grids of arbitrary shape and if a priori information about the object shall be used. Also, they can immediately provide diagnostic information about the object. In this paper, a very flexible equivalent current method is presented, which has been derived from a general purpose boundary integral equation solver. As such the method works with arbitrary triangular surface meshes and Rao-Wilton-Glisson basis functions, where electric and/or magnetic surface current densities can be assumed. High efficiency is achieved since the multilevel fast multipole method has been adapted to speed-up the inverse solution process. Results obtained from simulations and measurements are presented, where large-scale problems with dimensions up to 75 wavelengths are considered.

Journal ArticleDOI
TL;DR: An intermittent controller with fixed sampling interval is recast as an event-driven controller that incorporates both feedforward events in response to known signals and feedback events in Response to detected disturbances.
Abstract: An intermittent controller with fixed sampling interval is recast as an event-driven controller. The key aspect of intermittent control that makes this possible is the use of basis functions, or, equivalently, a generalised hold, to generate the intersample open-loop control signal. The controller incorporates both feedforward events in response to known signals and feedback events in response to detected disturbances. The latter feature makes use of an extended basis-function generator to generate open-loop predictions of states to be compared with measured or observed states. Intermittent control is based on an underlying continuous-time controller; it is emphasised that the design of this continuous-time controller is important, particularly in the presence of input disturbances. Illustrative simulation examples are given.

Journal ArticleDOI
TL;DR: The boundary element-free method (BEFM) as discussed by the authors is a direct numerical method in which the basic unknown quantity is the real solution of the nodal variables, and the boundary conditions can be applied directly and easily; thus it gives greater computational precision.

Abstract: Combining the boundary integral equation (BIE) method and improved moving least-squares (IMLS) approximation, a direct meshless BIE method, which is called the boundary element-free method (BEFM), for two-dimensional potential problems is discussed in this paper. In the IMLS approximation, the weighted orthogonal functions are used as the basis functions; then the algebra equation system is not ill-conditioned and can be solved without obtaining the inverse matrix. Based on the IMLS approximation and the BIE for two-dimensional potential problems, the formulae of the BEFM are given. The BEFM is a direct numerical method in which the basic unknown quantity is the real solution of the nodal variables, and the boundary conditions can be applied directly and easily; thus, it gives greater computational precision. Some numerical examples are presented to demonstrate the method.

Journal ArticleDOI
TL;DR: In this article, an implicit free boundary representation model is established by embedding structural boundary into the zero level set of a higher-dimensional level set function, and an explicit parameterization scheme for the level set surface is proposed by using radial basis functions with compact support.

Journal ArticleDOI
TL;DR: The standard Tikhonov regularization technique with the generalized cross-validation criterion for choosing the regularization parameter is adopted for solving the resulting ill-conditioned system of linear algebraic equations.
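The combination of Tikhonov regularization with a GCV-chosen parameter can be sketched via the SVD. This is a generic outline (grid search over candidate parameters), not the paper's implementation.

```python
import numpy as np

# Sketch: Tikhonov regularization of an ill-conditioned system A x = b,
# x_lam = argmin ||A x - b||^2 + lam^2 ||x||^2, computed via the SVD,
# with lam selected by minimizing the generalized cross-validation
# (GCV) score over a grid of candidates.
def tikhonov_solve(A, b, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (s * (U.T @ b) / (s**2 + lam**2))

def gcv_score(A, b, lam):
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
    # residual = filtered part + component of b outside the column space
    resid = np.sum(((1.0 - f) * beta) ** 2) + (b @ b - beta @ beta)
    return resid / (A.shape[0] - np.sum(f)) ** 2

def choose_lambda(A, b, lambdas):
    return min(lambdas, key=lambda lam: gcv_score(A, b, lam))
```

The appeal of GCV is that it needs no estimate of the noise level; the denominator counts the "effective degrees of freedom" removed by the filter factors.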

Journal ArticleDOI
TL;DR: In this article, an efficient collocation method is proposed for solving non-local parabolic partial differential equations using radial basis functions, and the results are compared with some existing methods.

Journal ArticleDOI
TL;DR: Fast lipid lateral diffusion in the CG simulations, as a result of a smoother free energy landscape, makes the study of phase behavior of the binary mixture possible, and shows that the MS-CG force field can reasonably approximate the many-body potential of mean force in the coarse-grained coordinates.

Abstract: A solvent-free coarse-grained model for a 1:1 mixed dioleoylphosphatidylcholine (DOPC) and dioleoylphosphatidylethanolamine (DOPE) bilayer is developed using the multiscale coarse-graining (MS-CG) approach. B-spline basis functions are implemented instead of the original cubic spline basis functions in the MS-CG method. The new B-spline basis functions are able to dramatically reduce memory requirements and increase computational efficiency of the MS-CG calculation. Various structural properties from the CG simulations are compared with their corresponding all-atom counterpart in order to validate the CG model. The resulting CG structural properties agree well with atomistic results, which shows that the MS-CG force field can reasonably approximate the many-body potential of mean force in the coarse-grained coordinates. Fast lipid lateral diffusion in the CG simulations, as a result of a smoother free energy landscape, makes the study of phase behavior of the binary mixture possible. Small clusters of dist...
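B-spline basis functions of the kind substituted into the MS-CG force field can be evaluated with the Cox-de Boor recursion. The sketch below is the textbook recursion, shown for illustration only (it uses the half-open support convention, so the last knot itself evaluates to zero); the paper's actual implementation is not reproduced here.

```python
# Cox-de Boor recursion: value of the i-th B-spline basis function of
# degree k at parameter t, over a non-decreasing knot vector. Each
# basis function is supported on only k+1 knot spans, which is what
# makes B-spline fits sparse and memory-efficient.
def bspline_basis(i, k, t, knots):
    if k == 0:
        # degree-0 basis: indicator of the half-open knot span [t_i, t_{i+1})
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    den1 = knots[i + k] - knots[i]
    if den1 > 0:
        out += (t - knots[i]) / den1 * bspline_basis(i, k - 1, t, knots)
    den2 = knots[i + k + 1] - knots[i + 1]
    if den2 > 0:
        out += (knots[i + k + 1] - t) / den2 * bspline_basis(i + 1, k - 1, t, knots)
    return out
```

On the interior of the knot vector the basis functions form a partition of unity, so a fitted force table is a local, smooth blend of neighboring coefficients.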

Journal ArticleDOI
TL;DR: In this article, an analytic preconditioner for the electric field integral equation, based on the Calderon identities, is considered, and it is shown that the geometrically dual basis functions proposed by Buffa and Christiansen are not suitable for discretizing the integral operator appearing in the preconditionser.
Abstract: An analytic preconditioner for the electric field integral equation, based on the Calderon identities, is considered. It is shown, based on physical reasoning, that RWG elements are not suitable for discretizing the electric field integral operator appearing in the preconditioner. Instead, the geometrically dual basis functions proposed by Buffa and Christiansen are used. However, it is found that this preconditioner is vulnerable to roundoff errors at low frequencies. A loop/star decomposition of the Buffa-Christiansen basis functions is presented, along with numerical results demonstrating its effectiveness.

Journal ArticleDOI
TL;DR: A new mimetic finite difference (MFD) method is presented for the Stokes problem on arbitrary polygonal meshes and its stability is analyzed; the method applies to a linear elasticity problem as well.

Journal ArticleDOI
TL;DR: In this paper, the authors exploit the smoothness of the eigenfunctions to reduce dimensionality by restricting them to a lower dimensional space of smooth functions and then approach this problem through a restricted maximum likelihood method.
Abstract: In this article, we consider the problem of estimating the eigenvalues and eigenfunctions of the covariance kernel (i.e., the functional principal components) from sparse and irregularly observed longitudinal data. We exploit the smoothness of the eigenfunctions to reduce dimensionality by restricting them to a lower dimensional space of smooth functions. We then approach this problem through a restricted maximum likelihood method. The estimation scheme is based on a Newton–Raphson procedure on the Stiefel manifold using the fact that the basis coefficient matrix for representing the eigenfunctions has orthonormal columns. We also address the selection of the number of basis functions, as well as that of the dimension of the covariance kernel by a second-order approximation to the leave-one-curve-out cross-validation score that is computationally very efficient. The effectiveness of our procedure is demonstrated by simulation studies and an application to a CD4+ counts dataset. In the simulation studies, ...

Journal ArticleDOI
TL;DR: It is found that for a given cardinal number, these selectively augmented correlation consistent basis sets yield results that are closer to the complete basis set limit than the corresponding fully augmented basis sets.
Abstract: We have optimized the lowest energy structures and calculated interaction energies for the H2O-H2O, H2O-H2S, H2O-NH3, and H2O-PH3 dimers with the recently developed explicitly correlated CCSD(T)-F12 methods and the associated VXZ-F12 (where X = D,T,Q) basis sets. For a given cardinal number, we find that the results obtained with the CCSD(T)-F12 methods are much closer to the CCSD(T) complete basis set limit than the conventional CCSD(T) results. In general we find that CCSD(T)-F12 results obtained with the VTZ-F12 basis set are better than the conventional CCSD(T) results obtained with an aug-cc-pV5Z basis set. We also investigate two ways to reduce the effects of basis set superposition error with conventional CCSD(T), namely, the popular counterpoise correction and limiting diffuse basis functions to the heavy atoms only. We find that for a given cardinal number, these selectively augmented correlation consistent basis sets yield results that are closer to the complete basis set limit than the corresponding fully augmented basis sets. Furthermore, we find that the difference between standard and counterpoise corrected interaction energies and intermolecular distances is reduced with the selectively augmented basis sets.

Journal ArticleDOI
TL;DR: A new algorithm for specifying the initial conditions of newly spawned basis functions that minimizes the number of spawned basis functions needed for convergence is detailed.
Abstract: The full multiple spawning (FMS) method has been developed to simulate quantum dynamics in the multistate electronic problem. In FMS, the nuclear wave function is represented in a basis of coupled, frozen Gaussians, and a “spawning” procedure prescribes a means of adaptively increasing the size of this basis in order to capture population transfer between electronic states. Herein we detail a new algorithm for specifying the initial conditions of newly spawned basis functions that minimizes the number of spawned basis functions needed for convergence. “Optimally” spawned basis functions are placed to maximize the coupling between parent and child trajectories at the point of spawning. The method is tested with a two-state, one-mode avoided crossing model and a two-state, two-mode conical intersection model.

Journal ArticleDOI
TL;DR: In this article, the authors compare two wave element methods for the 2D Helmholtz problems, namely the partition of unity FEM (PUFEM) and the ultra-weak variational formulation (UWVF), based on different variational formulations; the PUFEM basis also includes a polynomial component, whereas the UWVF basis consists purely of plane waves.
Abstract: In comparison with low-order finite element methods (FEMs), the use of oscillatory basis functions has been shown to reduce the computational complexity associated with the numerical approximation of Helmholtz problems at high wave numbers. We compare two different wave element methods for the 2D Helmholtz problems. The methods chosen for this study are the partition of unity FEM (PUFEM) and the ultra-weak variational formulation (UWVF). In both methods, the local approximation of the wave field is computed using a set of plane waves for constructing the basis functions. However, the methods are based on different variational formulations; the PUFEM basis also includes a polynomial component, whereas the UWVF basis consists purely of plane waves. As model problems we investigate propagating and evanescent wave modes in a duct with rigid walls and singular eigenmodes in an L-shaped domain. Results show good performance of both methods for the modes in the duct, but only satisfactory accuracy was obtained in the case of the singular field. On the other hand, both methods can suffer from ill-conditioning of the resulting matrix system.