Topic

Basis function

About: Basis function is a research topic. Over its lifetime, 15,615 publications have been published within this topic, receiving 405,683 citations. The topic is also known as: blending function.


Papers
Book
01 Jan 2000
Abstract: Contents: Second Quantization; Spin in Second Quantization; Orbital Rotations; Exact and Approximate Wave Functions; The Standard Models; Atomic Basis Functions; Short-Range Interactions and Orbital Expansions; Gaussian Basis Sets; Molecular Integral Evaluation; Hartree-Fock Theory; Configuration-Interaction Theory; Multiconfigurational Self-Consistent Field Theory; Coupled-Cluster Theory; Perturbation Theory; Calibration of the Electronic-Structure Models; List of Acronyms; Index.

1,740 citations
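
The chapters on atomic basis functions and Gaussian basis sets deal with expanding orbitals in fixed analytic functions. As a minimal illustration of the central object, here is a sketch (not taken from the book; the exponents and contraction coefficients are made up for illustration) of evaluating an s-type primitive Gaussian and a fixed contraction of several primitives:

```python
import numpy as np

def primitive_s_gaussian(r, alpha):
    """Normalized s-type primitive Gaussian g(r) = N * exp(-alpha * r^2)."""
    norm = (2.0 * alpha / np.pi) ** 0.75   # 3-D normalization of an s Gaussian
    return norm * np.exp(-alpha * r**2)

def contracted_s_gaussian(r, alphas, coeffs):
    """Fixed linear combination (contraction) of primitives sharing one center."""
    return sum(c * primitive_s_gaussian(r, a) for a, c in zip(alphas, coeffs))

# Illustrative exponents/coefficients only, not a published basis set.
alphas = np.array([3.0, 0.6, 0.15])
coeffs = np.array([0.15, 0.55, 0.45])

r = np.linspace(0.0, 5.0, 6)   # radial distances (bohr)
print(contracted_s_gaussian(r, alphas, coeffs))
```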

Journal ArticleDOI
TL;DR: In this article, a discrete variable representation (DVR) is introduced for use as the L2 basis of the S-matrix version of the Kohn variational method for quantum reactive scattering.
Abstract: A novel discrete variable representation (DVR) is introduced for use as the L2 basis of the S‐matrix version of the Kohn variational method [Zhang, Chu, and Miller, J. Chem. Phys. 88, 6233 (1988)] for quantum reactive scattering. (It can also be readily used for quantum eigenvalue problems.) The primary novel feature is that this DVR gives an extremely simple kinetic energy matrix (the potential energy matrix is diagonal, as in all DVRs) which is in a sense ‘‘universal,’’ i.e., independent of any explicit reference to an underlying set of basis functions; it can, in fact, be derived as an infinite limit using different basis functions. An energy truncation procedure allows the DVR grid points to be adapted naturally to the shape of any given potential energy surface. Application to the benchmark collinear H+H2→H2+H reaction shows that convergence in the reaction probabilities is achieved with only about 15% more DVR grid points than the number of conventional basis functions used in previous S‐matrix Kohn...

1,575 citations
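
A minimal sketch of the kind of grid-based DVR the abstract describes, using the standard uniform-grid (sinc-type) kinetic-energy formula often associated with this approach; this is an illustration rather than code from the paper, and atomic units plus a harmonic test potential are assumed:

```python
import numpy as np

# Uniform grid, atomic units (hbar = m = 1); parameters chosen for illustration.
n, dx = 201, 0.1
x = (np.arange(n) - n // 2) * dx

# Kinetic energy matrix of the uniform-grid (sinc-type) DVR:
# T_ij = (-1)^(i-j) / (2*dx^2) * ( pi^2/3 if i == j else 2/(i-j)^2 )
i = np.arange(n)
diff = i[:, None] - i[None, :]
T = np.where(diff == 0, np.pi**2 / 3.0,
             2.0 / np.where(diff == 0, 1, diff) ** 2.0)
T = (-1.0) ** diff / (2.0 * dx**2) * T

V = np.diag(0.5 * x**2)          # potential matrix is diagonal in a DVR
E = np.linalg.eigvalsh(T + V)
print(E[:4])                      # ~ [0.5, 1.5, 2.5, 3.5] for the harmonic oscillator
```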

Journal ArticleDOI
TL;DR: This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks, and introduces new classes of smoothness functionals that lead to different classes of basis functions.
Abstract: We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks . In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.

1,408 citations
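
As a concrete instance of the radial-basis-function schemes discussed above, a minimal sketch (illustration only; the target function, centers, width, and ridge penalty are arbitrary choices) of approximating a 1-D function by Gaussian RBFs fit with regularized least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an arbitrary target function.
x = np.linspace(-3.0, 3.0, 60)
y = np.sin(2.0 * x) + 0.05 * rng.standard_normal(x.size)

# Gaussian radial basis functions on a set of fixed centers.
centers = np.linspace(-3.0, 3.0, 15)
width = 0.5
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width**2))

# Ridge-regularized least squares for the expansion coefficients
# (a simple stand-in for the smoothness functionals discussed in the paper).
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(centers.size), Phi.T @ y)

y_hat = Phi @ w
print("max abs training error:", np.max(np.abs(y_hat - y)))
```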

Journal ArticleDOI
TL;DR: In this article, a linear scaling, fully self-consistent density-functional method for performing first-principles calculations on systems with a large number of atoms, using standard norm-conserving pseudopotentials and flexible linear combinations of atomic orbitals (LCAO) basis sets, was implemented.
Abstract: We have implemented a linear scaling, fully self-consistent density-functional method for performing first-principles calculations on systems with a large number of atoms, using standard norm-conserving pseudopotentials and flexible linear combinations of atomic orbitals (LCAO) basis sets. Exchange and correlation are treated within the local-spin-density or gradient-corrected approximations. The basis functions and the electron density are projected on a real-space grid in order to calculate the Hartree and exchange–correlation potentials and matrix elements. We substitute the customary diagonalization procedure by the minimization of a modified energy functional, which gives orthogonal wave functions and the same energy and density as the Kohn–Sham energy functional, without the need of an explicit orthogonalization. The additional restriction to a finite range for the electron wave functions allows the computational effort (time and memory) to increase only linearly with the size of the system. Forces and stresses are also calculated efficiently and accurately, allowing structural relaxation and molecular dynamics simulations. We present test calculations beginning with small molecules and ending with a piece of DNA. Using double-ζ, polarized bases, geometries within 1% of experiments are obtained. © 1997 John Wiley & Sons, Inc. Int J Quant Chem 65: 453–461, 1997

1,383 citations
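
The linear scaling described above hinges on strictly finite-range basis orbitals: once every orbital vanishes beyond a cutoff radius, matrices such as the overlap couple only nearby orbitals, and their number of nonzero elements grows linearly with system size. A schematic sketch with a toy 1-D chain of single-orbital atoms (made-up spacing and cutoff, not the SIESTA implementation):

```python
import numpy as np

def overlap_nonzeros(n_atoms, spacing=1.0, cutoff=2.5):
    """Count nonzero overlap-matrix elements for a 1-D chain of single-orbital atoms.

    Orbitals are taken to vanish beyond `cutoff`, so two orbitals can overlap only
    if their centers are closer than 2 * cutoff (toy model, not real orbitals).
    """
    pos = np.arange(n_atoms) * spacing
    dist = np.abs(pos[:, None] - pos[None, :])
    return int(np.count_nonzero(dist < 2.0 * cutoff))

for n in (100, 200, 400, 800):
    print(n, overlap_nonzeros(n))   # nonzeros grow ~linearly with n, not as n^2
```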

Journal ArticleDOI
TL;DR: A new implementation of the approximate coupled cluster singles and doubles method CC2 is reported, which is suitable for large scale integral-direct calculations and employs the resolution of the identity (RI) approximation for two-electron integrals to reduce the CPU time needed for calculation and I/O of these integrals.
Abstract: A new implementation of the approximate coupled cluster singles and doubles method CC2 is reported, which is suitable for large scale integral-direct calculations. It employs the resolution of the identity (RI) approximation for two-electron integrals to reduce the CPU time needed for calculation and I/O of these integrals. We use a partitioned form of the CC2 equations which eliminates the need to store double excitation cluster amplitudes. In combination with the RI approximation this formulation of the CC2 equations leads to a reduced scaling of memory and disk space requirements with the number of correlated electrons (n) and basis functions (N) to, respectively, O(N²) and O(nN²), compared to O(n²N²) in previous implementations. The reduced CPU, memory and disk space requirements make it possible to perform CC2 calculations with accurate basis sets on large molecules, which would not be accessible with conventional implementations of the CC2 method. We present an application to vertical excitation ene...

1,326 citations
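
The resolution-of-the-identity step mentioned above replaces four-index two-electron integrals by contractions of three-index quantities, which is where the reduced storage comes from. A schematic sketch of that contraction pattern with random placeholder tensors (in a real calculation the three-index quantities come from integrals over an auxiliary basis; the shapes and names here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bf, n_aux = 20, 60                       # orbital and auxiliary basis sizes (made up)

# Placeholder three-index integrals (ij|Q) and auxiliary metric J_PQ = (P|Q).
three_idx = rng.standard_normal((n_bf, n_bf, n_aux))
three_idx = 0.5 * (three_idx + three_idx.transpose(1, 0, 2))   # symmetrize in i, j
J = rng.standard_normal((n_aux, n_aux))
J = J @ J.T + n_aux * np.eye(n_aux)        # keep the metric positive definite

# B^P_ij = sum_Q (ij|Q) [J^(-1/2)]_QP, then (ij|kl) ~= sum_P B^P_ij B^P_kl.
vals, vecs = np.linalg.eigh(J)
J_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
B = np.einsum("ijq,qp->ijp", three_idx, J_inv_sqrt)
eri_ri = np.einsum("ijp,klp->ijkl", B, B)

# Storage comparison: the three-index B scales as O(N^2 * N_aux),
# the reconstructed four-index array as O(N^4).
print(eri_ri.shape, B.nbytes, eri_ri.nbytes)
```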


Network Information
Related Topics (5)
Matrix (mathematics): 105.5K papers, 1.9M citations, 85% related
Boundary value problem: 145.3K papers, 2.7M citations, 84% related
Nonlinear system: 208.1K papers, 4M citations, 84% related
Differential equation: 88K papers, 2M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    224
2022    489
2021    675
2020    745
2019    729
2018    728