scispace - formally typeset

Showing papers on "Basis (linear algebra) published in 2005"


Journal ArticleDOI
TL;DR: In this article, the concept of isogeometric analysis is proposed and the basis functions generated from NURBS (Non-Uniform Rational B-Splines) are employed to construct an exact geometric model.

5,137 citations


Journal ArticleDOI
TL;DR: This paper shows how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods.
Abstract: Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.

2,472 citations
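As a rough illustration of the recognition-by-subspace idea in the abstract above, the following sketch (with random vectors standing in for the k acquired images, and all names chosen for illustration) classifies a probe image by its residual distance to each object's subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_pixels = 5, 64     # k single-source images per object, flattened pixels

# Random stand-ins for the k acquired images of each of two objects.
subspaces = []
for obj in range(2):
    B = rng.normal(size=(n_pixels, k))      # columns: the k images
    Q, _ = np.linalg.qr(B)                  # orthonormal basis of their span
    subspaces.append(Q)

def residual(Q, y):
    """Distance from image y to the subspace spanned by Q's columns."""
    return np.linalg.norm(y - Q @ (Q.T @ y))

# A probe lying (almost) in object 0's subspace is assigned to object 0.
probe = subspaces[0] @ rng.normal(size=k) + 0.01 * rng.normal(size=n_pixels)
pred = min(range(2), key=lambda i: residual(subspaces[i], probe))
print(pred)  # 0
```

Since the basis images here come straight from the acquired data, no 3D reconstruction or rendering step appears anywhere in the pipeline, which is the paper's point.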


Journal ArticleDOI
TL;DR: In this article, a general approach for obtaining systematic sequences of atomic single-particle basis sets for use in correlated electronic structure calculations of molecules was developed, and all the constituent functions are defined as the solutions of variational problems.

543 citations


Journal ArticleDOI
TL;DR: The reduced-basis methods and associated a posteriori error estimators developed earlier for elliptic partial differential equations are extended to parabolic problems with affine parameter dependence; time is treated as an additional, albeit special, parameter in the formulation and solution of the problem.
Abstract: In this paper, we extend the reduced-basis methods and associated a posteriori error estimators developed earlier for elliptic partial differential equations to parabolic problems with affine parameter dependence. The essential new ingredient is the presence of time in the formulation and solution of the problem - we shall "simply" treat time as an additional, albeit special, parameter. First, we introduce the reduced-basis recipe - Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N selected points in parameter-time space - and develop a new greedy adaptive procedure to "optimally" construct the parameter-time sample set. Second, we propose error estimation and adjoint procedures that provide rigorous and sharp bounds for the error in specific outputs of interest: the estimates serve a priori to construct our samples, and a posteriori to confirm fidelity. Third, based on the assumption of affine parameter dependence, we develop offline-online computational procedures: in the offline stage, we generate the reduced-basis space; in the online stage, given a new parameter value, we calculate the reduced-basis output and associated error bound. The operation count for the online stage depends only on N (typically small) and the parametric complexity of the problem; the method is thus ideally suited for repeated, rapid, reliable evaluation of input-output relationships in the many-query or real-time contexts.

408 citations
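The offline/online split can be illustrated on a toy affine-parametric linear system (not the paper's parabolic PDE; the operator, sample points, and sizes below are made up for illustration): snapshots of the full solve span a reduced space, and a Galerkin projection onto it replaces the large online solve with a tiny one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # "truth" discretization size

# Made-up affine parametric operator A(mu) = A0 + mu*A1 (symmetric positive
# definite), standing in for the discretized governing operator.
M = rng.normal(size=(n, n))
A0 = M @ M.T + n * np.eye(n)
A1 = np.diag(np.linspace(1.0, 2.0, n))
b = rng.normal(size=n)

def truth(mu):
    return np.linalg.solve(A0 + mu * A1, b)

# Offline: snapshots at a few sample parameters span the reduced space W_N.
samples = [0.0, 0.5, 1.0]
W, _ = np.linalg.qr(np.column_stack([truth(mu) for mu in samples]))

# Online: Galerkin projection gives an N x N system instead of n x n.
def reduced(mu):
    Ar = W.T @ (A0 + mu * A1) @ W
    return W @ np.linalg.solve(Ar, W.T @ b)

mu_new = 0.7
err = np.linalg.norm(truth(mu_new) - reduced(mu_new)) / np.linalg.norm(truth(mu_new))
print(err < 1e-3)  # tiny N already reproduces the truth solve
```

The greedy sampling and rigorous error bounds of the paper are omitted here; the sketch only shows why the online cost depends on N rather than on n.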


01 Jan 2005
TL;DR: It is demonstrated empirically that it is possible to recover an object from about 3M–5M projections onto generically chosen vectors with the same accuracy as the ideal M-term wavelet approximation.
Abstract: Can we recover a signal f ∈ R^N from a small number of linear measurements? A series of recent papers developed a collection of results showing that it is surprisingly possible to reconstruct certain types of signals accurately from limited measurements. In a nutshell, suppose that f is compressible in the sense that it is well-approximated by a linear combination of M vectors taken from a known basis Ψ. Then, not knowing anything in advance about the signal, f can (very nearly) be recovered from about M log N generic nonadaptive measurements only. The recovery procedure is concrete and consists in solving a simple convex optimization program. In this paper, we show that these ideas are of practical significance. Inspired by theoretical developments, we propose a series of practical recovery procedures and test them on a series of signals and images which are known to be well approximated in wavelet bases. We demonstrate empirically that it is possible to recover an object from about 3M–5M projections onto generically chosen vectors with the same accuracy as the ideal M-term wavelet approximation. We briefly discuss possible implications in the areas of data compression and medical imaging.

354 citations
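The "simple convex optimization program" of the abstract is basis pursuit: minimize the l1 norm subject to the measurement constraints. A minimal sketch casts it as a linear program; the signal here is sparse in the identity basis for brevity, and all sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, m, M = 64, 32, 4     # signal length, measurements, sparsity (illustrative)

# An M-sparse signal; the sparsifying basis Psi is the identity for brevity.
f = np.zeros(N)
support = rng.choice(N, size=M, replace=False)
f[support] = rng.normal(size=M)

A = rng.normal(size=(m, N)) / np.sqrt(m)   # generic nonadaptive measurements
b = A @ f

# Basis pursuit, min ||x||_1 s.t. Ax = b, as a linear program: write
# x = u - v with u, v >= 0 and minimize sum(u) + sum(v).
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x = res.x[:N] - res.x[N:]
print(np.allclose(x, f, atol=1e-5))
```

With m well above M log N, the linear program typically returns the sparse signal exactly from far fewer than N measurements.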


Journal Article
TL;DR: A formal definition of the general continuous IB problem is given and an analytic solution for the optimal representation for the important case of multivariate Gaussian variables is obtained, in terms of the eigenvalue spectrum.
Abstract: The problem of extracting the relevant aspects of data was previously addressed through the information bottleneck (IB) method, through (soft) clustering one variable while preserving information about another - relevance - variable. The current work extends these ideas to obtain continuous representations that preserve relevant information, rather than discrete clusters, for the special case of multivariate Gaussian variables. While the general continuous IB problem is difficult to solve, we provide an analytic solution for the optimal representation and tradeoff between compression and relevance for this important case. The obtained optimal representation is a noisy linear projection to eigenvectors of the normalized regression matrix Σ_{x|y}Σ_x^{-1}, which is also the basis obtained in canonical correlation analysis. However, in Gaussian IB, the compression tradeoff parameter uniquely determines the dimension, as well as the scale of each eigenvector, through a cascade of structural phase transitions. This introduces a novel interpretation where solutions of different ranks lie on a continuum parametrized by the compression level. Our analysis also provides a complete analytic expression of the preserved information as a function of the compression (the "information-curve"), in terms of the eigenvalue spectrum of the data. As in the discrete case, the information curve is concave and smooth, though it is made of different analytic segments for each optimal dimension. Finally, we show how the algorithmic theory developed in the IB framework provides an iterative algorithm for obtaining the optimal Gaussian projections.

305 citations
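A small numerical sketch of the central object, assuming synthetic jointly Gaussian data: the matrix Σ_{x|y}Σ_x^{-1} is formed from sample covariances, and its eigenvalues (which equal 1 − ρ_i² for the canonical correlations ρ_i) are checked to lie in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dx, dy = 20000, 4, 3

# Synthetic jointly Gaussian data: y depends linearly on x plus noise.
x = rng.normal(size=(n, dx))
y = x @ rng.normal(size=(dx, dy)) + 0.5 * rng.normal(size=(n, dy))

Sx = np.cov(x.T)
Sy = np.cov(y.T)
Sxy = np.cov(x.T, y.T)[:dx, dx:]

# Conditional covariance and the Gaussian IB matrix Sigma_{x|y} Sigma_x^{-1}.
Sx_y = Sx - Sxy @ np.linalg.solve(Sy, Sxy.T)
M = Sx_y @ np.linalg.inv(Sx)

# Its eigenvalues equal 1 - rho_i^2 for the canonical correlations rho_i,
# so they lie in [0, 1]; the smallest ones mark the most relevant directions.
evals = np.sort(np.linalg.eigvals(M).real)
print(np.all((evals > -1e-9) & (evals < 1 + 1e-9)))  # True
```

In the paper's solution, the compression parameter selects how many of these eigenvectors enter the projection and with what scale; the sketch only exhibits the spectrum itself.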


Proceedings ArticleDOI
31 Jul 2005
TL;DR: Algebraic methods for creating implicit surfaces using linear combinations of radial basis interpolants to form complex models from scattered surface points are described, allowing the study of shape properties of large complex shapes and the exploration of diverse surface geometry.
Abstract: We describe algebraic methods for creating implicit surfaces using linear combinations of radial basis interpolants to form complex models from scattered surface points. Shapes with arbitrary topology are easily represented without the usual interpolation or aliasing errors arising from discrete sampling. These methods were first applied to implicit surfaces by Savchenko, et al. and later developed independently by Turk and O'Brien as a means of performing shape interpolation. Earlier approaches were limited as a modeling mechanism because of the order of the computational complexity involved. We explore and extend these implicit interpolating methods to make them suitable for systems of large numbers of scattered surface points by using compactly supported radial basis interpolants. The use of compactly supported elements generates a sparse solution space, reducing the computational complexity and making the technique practical for large models. The local nature of compactly supported radial basis functions permits the use of computational techniques and data structures such as k-d trees for spatial subdivision, promoting fast solvers and methods to divide and conquer many of the subproblems associated with these methods. Moreover, the representation of complex models permits the exploration of diverse surface geometry. This reduction in computational complexity enables the application of these methods to the study of shape properties of large complex shapes.

302 citations
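A minimal sketch of interpolation with a compactly supported radial basis (a Wendland-type kernel is assumed here; the circle data and the ±0.1 normal offsets are illustrative). The interpolation matrix is dense in this toy, but compact support makes it sparse for large models, which is what reduces the complexity:

```python
import numpy as np

def wendland(r, support=1.5):
    """Wendland-type C^2 compactly supported RBF: zero beyond `support`."""
    s = np.clip(r / support, 0.0, 1.0)
    return (1.0 - s) ** 4 * (4.0 * s + 1.0)

# Scattered data: points on the unit circle (implicit value 0) plus offset
# points along the normals (values +/- 0.1) to fix which side is "inside".
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
on = np.column_stack([np.cos(t), np.sin(t)])
centers = np.vstack([on, 1.1 * on, 0.9 * on])
values = np.concatenate([np.zeros(40), 0.1 * np.ones(40), -0.1 * np.ones(40)])

# Interpolation matrix; dense here for brevity, sparse in large-scale use.
D = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
w = np.linalg.solve(wendland(D), values)

def implicit(p):
    """Evaluate the implicit function sum_i w_i * phi(|p - c_i|)."""
    return wendland(np.linalg.norm(centers - p, axis=-1)) @ w

fit_err = max(abs(implicit(c) - v) for c, v in zip(centers, values))
print(fit_err < 1e-5)  # the implicit function interpolates the constraints
```

The zero level set of `implicit` then approximates the circle; spatial structures such as k-d trees, as the abstract notes, exploit the locality of each kernel evaluation.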


Book
01 Jan 2005
TL;DR: In this book, the authors develop a general theory of statistical estimation for geometric computation, covering iterative estimation schemes, effective gradient approximation, reduction from the Kalman filter, and estimation from linear hypotheses, with applications such as 3-D reconstruction of points.
Abstract: Introduction - The aims of this book the features of this book organization and background the analytical mind: strength and weakness. Fundamentals of linear algebra - vector and matrix calculus eigenvalue problem linear systems and optimization matrix and tensor algebra. Probabilities and statistical estimation - probability distributions manifolds and local distributions Gaussian distributions and χ2 distributions statistical estimation for Gaussian models general statistical estimation maximum likelihood estimation Akaike information criterion. Representation of geometric objects - image points and image lines space points and space lines space planes conics space conics and quadrics coordinate transformation and projection. Geometric correction - general theory correction of image points and image lines correction of space points and space lines correction of space planes orthogonality correction conic incidence correction. 3-D computation by stereo vision - epipolar constraint optimal correction of correspondence 3-D reconstruction of points 3-D reconstruction of lines optimal back projection onto a space plane scenes infinitely far away camera calibration errors. Parametric fitting - general theory optimal fitting for image points optimal fitting for image lines optimal fitting for space points optimal fitting for space lines optimal fitting for space planes. Optimal filter - general theory iterative estimation scheme effective gradient approximation reduction from the Kalman filter estimation from linear hypotheses. Renormalization - eigenvector fit unbiased eigenvector generalized eigenvalue fit renormalization linearization second order renormalization. Applications of geometric estimation - image line fitting conic fitting space plane fitting by range sensing space plane fitting by stereo vision. 3-D motion analysis - general theory linearization and renormalization optimal correction and decomposition reliability of 3-D reconstruction critical surfaces 3-D reconstruction from planar surface motion camera rotation and information. 3-D interpretation of optical flow - optical flow detection theoretical basis of 3-D interpretation optimal estimation of motion parameters. (Part contents).

298 citations


Proceedings ArticleDOI
TL;DR: It is empirically possible to recover an object from about 3M-5M projections onto generically chosen vectors with an accuracy which is as good as that obtained by the ideal M-term wavelet approximation.
Abstract: Can we recover a signal f ∈ R^N from a small number of linear measurements? A series of recent papers developed a collection of results showing that it is surprisingly possible to reconstruct certain types of signals accurately from limited measurements. In a nutshell, suppose that f is compressible in the sense that it is well-approximated by a linear combination of M vectors taken from a known basis Ψ. Then not knowing anything in advance about the signal, f can (very nearly) be recovered from about M log N generic nonadaptive measurements only. The recovery procedure is concrete and consists in solving a simple convex optimization program. In this paper, we show that these ideas are of practical significance. Inspired by theoretical developments, we propose a series of practical recovery procedures and test them on a series of signals and images which are known to be well approximated in wavelet bases. We demonstrate that it is empirically possible to recover an object from about 3M–5M projections onto generically chosen vectors with an accuracy which is as good as that obtained by the ideal M-term wavelet approximation. We briefly discuss possible implications in the areas of data compression and medical imaging.

297 citations


Journal ArticleDOI
TL;DR: In this article, the application of the finite element (FE) method to ab initio electronic structure calculations in solids is reviewed, and the construction and properties of the required FE bases and their use in the self-consistent solution of the Kohn–Sham equations of density functional theory are discussed.
Abstract: We review the application of the finite element (FE) method to ab initio electronic structure calculations in solids. The FE method is a general approach for the solution of differential and integral equations which uses a strictly local, piecewise-polynomial basis. Because the basis is composed of polynomials, the method is completely general and its accuracy is systematically improvable. Because the basis is strictly local in real space, the method allows for variable resolution in real space; produces sparse, structured matrices, enabling the effective use of iterative solution methods; and is well suited for parallel implementation. The method thus combines significant advantages of both real-space-grid and basis-oriented approaches, and so is well suited for large, accurate ab initio calculations. We review the construction and properties of the required FE bases and their use in the self-consistent solution of the Kohn–Sham equations of density functional theory.

202 citations
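The "strictly local basis produces sparse, structured matrices" point can be illustrated in 1D, assuming the simplest piecewise-linear hat-function basis and a Poisson model problem (not an actual Kohn–Sham solve):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Piecewise-linear "hat" basis on a uniform grid for -u'' = f on (0, 1),
# u(0) = u(1) = 0: strict locality makes the stiffness matrix tridiagonal.
n = 200                              # interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h   # sparse stiffness
b = h * np.pi ** 2 * np.sin(np.pi * x)   # lumped load for f = pi^2 sin(pi x)

u = spsolve(K.tocsr(), b)
err = np.max(np.abs(u - np.sin(np.pi * x)))   # exact solution: sin(pi x)
print(err < 1e-3)  # second-order accuracy on this grid
```

Refining the mesh (larger n) systematically improves the accuracy while the matrix stays tridiagonal, which is the combination of properties the review emphasizes.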


Journal ArticleDOI
TL;DR: The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric, suggesting it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
Abstract: One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the polarization-consistent basis sets, which are optimized for density functional methods, are also suitable for Hartree–Fock calculations, and can be used for estimating the Hartree–Fock basis set limit to within a few micro-hartree accuracy.
Abstract: It is demonstrated that the polarization-consistent basis sets, which are optimized for density functional methods, are also suitable for Hartree–Fock calculations, and can be used for estimating the Hartree–Fock basis set limit to within a few micro-hartree accuracy. Various two- and three-point extrapolation schemes are tested and exponential functions are found to be superior compared to functions depending on the inverse power of the highest angular momentum function in the basis set. Total energies can be improved by roughly an order of magnitude, but atomization energies are only marginally improved by extrapolation.
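A hedged sketch of the exponential extrapolation idea, assuming the common three-point closed form for E(X) = E_inf + A·B^X and made-up energies (the paper tests several such schemes; this is only one of them):

```python
def exp_extrapolate(e1, e2, e3):
    """Three-point exponential extrapolation for E(X) = E_inf + A*B**X.

    For energies at three consecutive levels X, X+1, X+2 the limit has the
    closed form (E1*E3 - E2**2) / (E1 + E3 - 2*E2).
    """
    return (e1 * e3 - e2 ** 2) / (e1 + e3 - 2.0 * e2)

# Made-up energies decaying exactly exponentially toward -100.0 hartree:
# the formula recovers the limit.
E_inf, A, B = -100.0, 0.05, 0.4
energies = [E_inf + A * B ** X for X in (2, 3, 4)]
print(exp_extrapolate(*energies))  # -100.0 (up to tiny rounding)
```

On real Hartree–Fock energies the decay is only approximately exponential, so the extrapolated value improves on, rather than exactly reproduces, the basis set limit.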

Journal ArticleDOI
TL;DR: In a large basis, the CCSD(R12) model provides an excellent approximation to the full linear-r12 energy contribution, whereas the magnitude of this contribution is significantly overestimated at the level of second-order perturbation theory.
Abstract: A simplified singles-and-doubles linear-r12 corrected coupled-cluster model, denoted CCSD(R12), is proposed and compared with the complete singles-and-doubles linear-r12 coupled-cluster method CCSD-R12. An orthonormal auxiliary basis set is used for the resolution-of-the-identity approximation to calculate three-electron integrals needed in the linear-r12 Ansatz. Basis-set convergence is investigated for a selected set of atoms and small molecules. In a large basis, the CCSD(R12) model provides an excellent approximation to the full linear-r12 energy contribution, whereas the magnitude of this contribution is significantly overestimated at the level of second-order perturbation theory.

Book ChapterDOI
09 Dec 2005

Journal Article
TL;DR: The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions, and the sparsity of the fit coefficients is assessed on simple hydrocarbon molecules, and shows quite early onset of linear growth in the number of significant coefficients with system size.
Abstract: One way to reduce the computational cost of electronic structure calculations is to employ auxiliary basis expansions to approximate 4-center integrals in terms of 2- and 3-center integrals, usually using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules, and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. This means it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: It is shown that it is possible to design an iterative learning algorithm that produces a dictionary with the required structure, and how well the learning algorithm recovers dictionaries that may or may not have the necessary structure is assessed.
Abstract: We propose a new method to learn overcomplete dictionaries for sparse coding structured as unions of orthonormal bases. The interest of such a structure is manifold. Indeed, it seems that many signals or images can be modeled as the superimposition of several layers with sparse decompositions in as many bases. Moreover, in such dictionaries, the efficient block coordinate relaxation (BCR) algorithm can be used to compute sparse decompositions. We show that it is possible to design an iterative learning algorithm that produces a dictionary with the required structure. Each step is based on the coefficients estimation, using a variant of BCR, followed by the update of one chosen basis, using singular value decomposition. We assess experimentally how well the learning algorithm recovers dictionaries that may or may not have the required structure, and to what extent the noise level is a disturbing factor.
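The SVD-based update of one chosen basis can be sketched as an orthogonal Procrustes step (the setup below is synthetic; the coefficient-estimation/BCR step is skipped and the true sparse coefficients are used directly):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 16

# Ground-truth orthonormal basis and sparse coefficients (synthetic).
D_true = np.linalg.qr(rng.normal(size=(n, n)))[0]
C = rng.normal(size=(n, 200)) * (rng.random((n, 200)) < 0.2)
R = D_true @ C          # the data this basis should explain

# Orthogonal Procrustes update: the orthonormal D minimizing ||R - D C||_F
# is D = U V^T, where U S V^T is the SVD of R C^T.
U, _, Vt = np.linalg.svd(R @ C.T)
D = U @ Vt

print(np.allclose(D.T @ D, np.eye(n)), np.allclose(D @ C, R))  # True True
```

In the full algorithm this update alternates with sparse coefficient estimation, and one such step is applied to each orthonormal block of the union in turn.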

Journal ArticleDOI
TL;DR: This paper focuses on the definition of the reference shapes, and together with theoretical and numerical justifications of the reduced basis element method, it provides a posteriori error analysis tools that allow us to certify the computational results.
Abstract: The reduced basis element method is a new approach for the approximation of partial differential equations that takes its roots in the domain decomposition method and in the reduced basis discretization. The basic idea is to decompose the domain of computation into a series of subdomains (the elements) that are similar to a few reference domains. These reference domains are actually "filled" with reduced basis functional spaces that are mapped to each subdomain together with the geometry. The discrete approximation space is then composed of functions with the property that a function restricted to a subdomain belongs to the mapped reduced spaces. Finally, a mortar-type method is applied to glue the various local functions. In this paper we focus on the definition of the reference shapes, and together with theoretical and numerical justifications of the method, we provide a posteriori error analysis tools that allow us to certify the computational results.

Journal ArticleDOI
TL;DR: It is concluded that the main source of error in MP2-R12 calculations in such basis sets is the choice of the correlation factor r12, and the generalized Brillouin condition is found not to lead to significant errors.
Abstract: The explicitly correlated second order Møller–Plesset (MP2-R12) methods perform well in reproducing the last detail of the correlation cusp, allowing higher accuracy than can be accessed through conventional means. Nevertheless in basis sets that are practical for calculations on larger systems (i.e., around triple- or perhaps quadruple-zeta) MP2-R12 fails to bridge the divide between conventional MP2 and the MP2 basis set limit. In this contribution we analyse the sources of error in MP2-R12 calculations in such basis sets. We conclude that the main source of error is the choice of the correlation factor r12. Sources of error that must be avoided for accurate quantum chemistry include the neglect of exchange commutators and the extended Brillouin condition. The generalized Brillouin condition is found not to lead to significant errors.

Journal ArticleDOI
TL;DR: In this paper, the frame theory of subspaces for separable Hilbert spaces was developed and an atomic resolution operator was defined for the identity resolution in Hilbert spaces, which even yields a reconstruction formula.

Journal ArticleDOI
TL;DR: In this article, the complexity of Hamel bases in separable and non-separable Banach spaces was investigated and it was shown that in a separable space a Hamel basis cannot be analytic.
Abstract: We investigate various kinds of bases in infinite dimensional Banach spaces. In particular, we consider the complexity of Hamel bases in separable and non-separable Banach spaces and show that in a separable Banach space a Hamel basis cannot be analytic, whereas there are non-separable Hilbert spaces which have a discrete and closed Hamel basis. Further we investigate the existence of certain complete minimal systems in ℓ∞ as well as in separable Banach spaces. Outline. The paper is concerned with bases in infinite dimensional Banach spaces. The first section contains the definitions of the various kinds of bases and biorthogonal systems and also summarizes some set-theoretic terminology and notation which will be used throughout the paper. The second section provides a survey of known or elementary results. The third section deals with Hamel bases and contains some consistency results proved using the forcing technique. The fourth section is devoted to complete minimal systems (including Φ-bases and Auerbach bases) and the last section contains open problems.

Journal ArticleDOI
TL;DR: In this article, the authors studied the progressive iteration approximation property of a curve (tensor product surface) generated by blending a given data point set and a set of basis functions, and they showed that the curve has the same property as the B-spline and NURBS curve.
Abstract: In this paper, we study the progressive iteration approximation property of a curve (tensor product surface) generated by blending a given data point set and a set of basis functions. The curve (tensor product surface) has the progressive iteration approximation property as long as the basis is totally positive and the corresponding collocation matrix is nonsingular. Thus, the B-spline and NURBS curve (surface) have the progressive iteration approximation property, and the Bézier curve (surface) also has the property if the corresponding collocation matrix is nonsingular.
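A minimal sketch of progressive iteration approximation, assuming a cubic Bernstein (Bézier) basis, whose collocation matrix at the chosen uniform parameters is totally positive and nonsingular: starting from control points equal to the data points, each sweep adds every point's current interpolation error back to its control point.

```python
import numpy as np
from math import comb

# Cubic Bernstein basis sampled at uniform parameters: the collocation
# matrix B is totally positive and nonsingular, which is the stated
# condition for progressive iteration approximation to converge.
n = 3
t = np.linspace(0, 1, n + 1)
B = np.array([[comb(n, j) * ti**j * (1 - ti)**(n - j) for j in range(n + 1)]
              for ti in t])

P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])  # data points
Q = P.copy()                       # initial control points = data points
for _ in range(100):
    Q = Q + (P - B @ Q)            # add each point's current error

pia_err = np.max(np.abs(B @ Q - P))
print(pia_err < 1e-8)  # the limit curve interpolates the data
```

The fit error contracts by the factor I − B each sweep, so convergence hinges on the eigenvalues of the collocation matrix lying in (0, 1], which total positivity delivers.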


Journal ArticleDOI
TL;DR: In this article, the expansion of a C0 semigroup and a criterion for being a Riesz basis are discussed, and the heat exchanger problem with boundary feedback is investigated.

Journal ArticleDOI
TL;DR: The proposed basis is organized in hierarchical levels, and keeps the different scales of the problem directly into the basis functions representation; the current is divided into a solenoidal and a quasi-irrotational part, which allows mapping these two vector parts onto fully scalar quantities, where the wavelets are defined.
Abstract: This paper presents the construction, use, and properties of a multiresolution (wavelet) basis for the method of moments (MoM) analysis of metal antennas, scatterers, and microwave circuits discretized by triangular meshes. Several application examples show fast convergence of iterative solvers and accurate solutions with highly sparse MoM matrices. The proposed basis is organized in hierarchical levels, and keeps the different scales of the problem directly into the basis functions representation; the current is divided into a solenoidal and a quasi-irrotational part, which allows mapping these two vector parts onto fully scalar quantities, where the wavelets are defined. As a byproduct, this paper also presents a way to construct hierarchical sets of Rao-Wilton-Glisson (RWG) functions on a family of meshes obtained by subsequent refinement, i.e., with the RWG of coarser meshes expressed as linear combinations of those of finer meshes.

Journal ArticleDOI
TL;DR: A comparison of the exchange coupling constants and spin distributions shows that both the plane-wave and the numerical basis set approaches are accurate and reliable alternatives to the more established Gaussian basis functions.
Abstract: Theoretical methods based on density-functional theory with Gaussian, plane waves, and numerical basis sets were employed to evaluate the exchange coupling constants in transition-metal complexes. In the case of the numerical basis set, the effect of different computational parameters was tested. We analyzed whether and how the use of pseudopotentials affects the calculation of the exchange coupling constants. For the three different basis sets, a comparison of the exchange coupling constants and spin distributions shows that both the plane-wave and the numerical basis set approaches are accurate and reliable alternatives to the more established Gaussian basis functions.

Journal ArticleDOI
TL;DR: A sound and complete algorithm for solving the implication of dimension constraints that uses heuristics based on the structure of the dimension and the constraints to speed up its execution is given.
Abstract: In multidimensional data models intended for online analytic processing (OLAP), data are viewed as points in a multidimensional space. Each dimension has structure, described by a directed graph of categories, a set of members for each category, and a child/parent relation between members. An important application of this structure is to use it to infer summarizability, that is, whether an aggregate view defined for some category can be correctly derived from a set of precomputed views defined for other categories. A dimension is called structurally heterogeneous if two members in a given category are allowed to have ancestors in different categories. In this article, we propose a class of integrity constraints, dimension constraints, that allow us to reason about summarizability in heterogeneous dimensions. We introduce the notion of frozen dimensions which are minimal homogeneous dimension instances representing the different structures that are implicitly combined in a heterogeneous dimension. Frozen dimensions provide the basis for efficiently testing the implication of dimension constraints and are a useful aid to understanding heterogeneous dimensions. We give a sound and complete algorithm for solving the implication of dimension constraints that uses heuristics based on the structure of the dimension and the constraints to speed up its execution. We study the intrinsic complexity of the implication problem and the running time of our algorithm.

Journal ArticleDOI
TL;DR: In this paper, the chiral Fierz-type completeness relations for SU(N) algebras were derived by using a chiral basis for the complex 4×4 matrices.
Abstract: General Fierz-type identities are examined and their well-known connection with completeness relations in matrix vector spaces is shown. In particular, I derive the chiral Fierz identities in a simple and systematic way by using a chiral basis for the complex 4×4 matrices. Other completeness relations for the fundamental representations of SU(N) algebras can be extracted using the same reasoning.
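The connection between Fierz-type identities and completeness relations can be checked numerically in the simplest case, SU(2) with the Pauli matrices (a smaller analogue of the 4×4 chiral basis used in the paper):

```python
import numpy as np

# Pauli matrices: the fundamental completeness relation
#   sum_a (sigma^a)_{ij} (sigma^a)_{kl} = 2*d_{il}*d_{jk} - d_{ij}*d_{kl}
# is the SU(2) analogue of the relations used to derive Fierz identities.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

lhs = np.einsum('aij,akl->ijkl', sigma, sigma)
d = np.eye(2)
rhs = 2 * np.einsum('il,jk->ijkl', d, d) - np.einsum('ij,kl->ijkl', d, d)
print(np.allclose(lhs, rhs))  # True
```

Contracting either side with spinor bilinears is exactly how index rearrangements (the Fierz transformations) are generated from the completeness relation.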

Posted Content
TL;DR: In this paper, the authors prove a result of a Ramsey-theoretic nature which implies an interesting dichotomy for subspaces of Banach spaces, which they use to give a positive answer to Banach's problem.
Abstract: A problem of Banach asks whether every infinite-dimensional Banach space which is isomorphic to all its infinite-dimensional subspaces must be isomorphic to a separable Hilbert space. In this paper we prove a result of a Ramsey-theoretic nature which implies an interesting dichotomy for subspaces of Banach spaces. Combined with a result of Komorowski and Tomczak-Jaegermann, this gives a positive answer to Banach's problem. We then generalize the Ramsey-theoretic result and deduce a further dichotomy for Banach spaces with an unconditional basis.

Journal ArticleDOI
B Liu
TL;DR: In this article, the authors present a method for selecting an orthonormal basis to represent vibration signals for rotating machinery fault diagnosis; the basis is formed from two sets of basis functions that represent the transients excited by localized faults.

Journal ArticleDOI
TL;DR: This paper proposes a scheme for choosing basis functions for quantum dynamics calculations that are intermediate between 1D eigenfunctions and discrete variable representation (DVR) functions, and assesses the usefulness of the basis by applying it to model 6D, 8D, and 16D Hamiltonians with various coupling strengths.
Abstract: In this paper we propose a scheme for choosing basis functions for quantum dynamics calculations. Direct product bases are frequently used. The number of direct product functions required to converge a spectrum, compute a rate constant, etc., is so large that direct product calculations are impossible for molecules or reacting systems with more than four atoms. It is common to extract a smaller working basis from a huge direct product basis by removing some of the product functions. We advocate a build and prune strategy of this type. The one-dimensional (1D) functions from which we build the direct product basis are chosen to satisfy two conditions: (1) they nearly diagonalize the full Hamiltonian matrix; (2) they minimize off-diagonal matrix elements that couple basis functions with diagonal elements close to those of the energy levels we wish to compute. By imposing these conditions we increase the number of product functions that can be removed from the multidimensional basis without degrading the accuracy of computed energy levels. Two basic types of 1D basis functions are in common use: eigenfunctions of 1D Hamiltonians and discrete variable representation (DVR) functions. Both have advantages and disadvantages. The 1D functions we propose are intermediate between the 1D eigenfunction functions and the DVR functions. If the coupling is very weak, they are very nearly 1D eigenfunction functions. As the strength of the coupling is increased they resemble more closely DVR functions. We assess the usefulness of our basis by applying it to model 6D, 8D, and 16D Hamiltonians with various coupling strengths. We find approximately linear scaling.
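A toy version of the build-and-prune strategy, assuming 2D product basis functions built from 1D harmonic-oscillator eigenfunctions and a made-up weak bilinear coupling (the paper's Hamiltonians and selection criteria are more elaborate): product functions whose diagonal energy exceeds a cutoff are dropped before diagonalization.

```python
import numpy as np

n1d = 12
e1d = np.arange(n1d) + 0.5            # 1D harmonic-oscillator levels
lam = 0.05                            # made-up weak bilinear coupling

# Position operator in the 1D eigenbasis (tridiagonal for the oscillator).
q = np.zeros((n1d, n1d))
for k in range(n1d - 1):
    q[k, k + 1] = q[k + 1, k] = np.sqrt((k + 1) / 2.0)

def ground_state(basis):
    """Lowest eigenvalue of H = h1 + h2 + lam*q1*q2 in a product basis."""
    m = len(basis)
    H = np.zeros((m, m))
    for a, (i, j) in enumerate(basis):
        H[a, a] = e1d[i] + e1d[j]
        for b, (k, l) in enumerate(basis):
            H[a, b] += lam * q[i, k] * q[j, l]
    return np.linalg.eigvalsh(H)[0]

full = [(i, j) for i in range(n1d) for j in range(n1d)]
pruned = [(i, j) for (i, j) in full if e1d[i] + e1d[j] <= 8.0]  # energy cutoff

e_full, e_pruned = ground_state(full), ground_state(pruned)
print(len(pruned), len(full))                 # 36 144
print(abs(e_full - e_pruned) < 1e-6)          # pruning barely moves the level
```

When the coupling is weak, the diagonal energies are good predictors of which product functions matter, so aggressive pruning costs almost nothing in accuracy; the paper's tailored 1D functions are designed to keep this true at stronger coupling.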