
Showing papers on "Basis (linear algebra) published in 2006"


Journal ArticleDOI
TL;DR: This work states that kinetic theory models involving the Fokker-Planck equation can be accurately discretized using a mesh support, but that for high-dimensional models this becomes intractable, motivating a reduced approximation basis within an adaptive procedure making use of an efficient separation of variables.
Abstract: Kinetic theory models involving the Fokker-Planck equation can be accurately discretized using a mesh support (finite elements, finite differences, finite volumes, spectral techniques, etc.). However, these techniques involve a high number of approximation functions. In the finite element framework, widely used in complex flow simulations, each approximation function is related to a node that defines the associated degree of freedom. When the model involves high dimensional spaces (including physical and conformation spaces and time), standard discretization techniques fail due to an excessive computation time required to perform accurate numerical simulations. One appealing strategy that allows circumventing this limitation is based on the use of reduced approximation basis within an adaptive procedure making use of an efficient separation of variables. (c) 2006 Elsevier B.V. All rights reserved.
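To make the separation-of-variables idea concrete, the following toy sketch builds a separated (sum-of-products) approximation of a two-variable function by greedy alternating least squares. It is only an illustration of the principle, not the paper's adaptive Fokker-Planck solver; the test function, grid, and number of modes are invented for the example.

```python
import numpy as np

# Toy separated representation: F(x, y) ~ sum_i X_i(x) * Y_i(y),
# built greedily, one rank-one term at a time, by alternating least squares.
# This only illustrates the "separation of variables" idea; the paper applies
# it adaptively to the high-dimensional Fokker-Planck equation.

x = np.linspace(0.0, 1.0, 100)
y = np.linspace(0.0, 1.0, 120)
F = np.exp(-((x[:, None] - 0.3) ** 2 + (y[None, :] - 0.7) ** 2))  # example data

def add_rank_one_term(R, n_sweeps=20):
    """Fit one rank-one term X * Y^T to the residual R by alternating updates."""
    X = np.ones(R.shape[0])
    Y = np.ones(R.shape[1])
    for _ in range(n_sweeps):
        Y = R.T @ X / (X @ X)   # best Y for fixed X (least squares)
        X = R @ Y / (Y @ Y)     # best X for fixed Y
    return X, Y

residual, terms = F.copy(), []
for _ in range(5):                      # number of separated modes
    X, Y = add_rank_one_term(residual)
    terms.append((X, Y))
    residual -= np.outer(X, Y)

approx = sum(np.outer(X, Y) for X, Y in terms)
print("relative error:", np.linalg.norm(F - approx) / np.linalg.norm(F))
```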

546 citations


Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper studies a specific type of hierarchical function bases, defined by the eigenfunctions of the Laplace-Beltrami operator, explains in practice how to compute an approximation of the eigenfunctions of a differential operator, and shows possible applications in geometry processing.
Abstract: One of the challenges in geometry processing is to automatically reconstruct a higher-level representation from raw geometric data. For instance, computing a parameterization of an object helps attaching information to it and converting between various representations. More generally, this family of problems may be thought of in terms of constructing structured function bases attached to surfaces. In this paper, we study a specific type of hierarchical function bases, defined by the eigenfunctions of the Laplace-Beltrami operator. When applied to a sphere, this function basis corresponds to the classical spherical harmonics. On more general objects, this defines a function basis well adapted to the geometry and the topology of the object. Based on physical analogies (vibration modes), we first give an intuitive view before explaining the underlying theory. We then explain in practice how to compute an approximation of the eigenfunctions of a differential operator, and show possible applications in geometry processing.
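As a rough illustration of the practical recipe described here, the sketch below computes a few low-frequency eigenfunctions of a discrete Laplacian on a triangle mesh. It uses the simplest combinatorial graph Laplacian rather than the cotangent Laplace-Beltrami discretization an actual geometry-processing implementation would use, and the vertex/face arrays are assumed inputs.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def mesh_laplacian_eigenfunctions(vertices, faces, k=10):
    """Smallest-eigenvalue eigenfunctions of a combinatorial mesh Laplacian.

    vertices: (n, 3) float array, faces: (m, 3) int array of triangle indices.
    A production code would use the cotangent Laplace-Beltrami discretization;
    the uniform graph Laplacian below is the simplest stand-in.
    """
    n = len(vertices)
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
    W = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n))
    W = ((W + W.T) > 0).astype(float)                      # undirected adjacency
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W    # graph Laplacian
    # Shift-invert near zero picks out the smoothest ("low frequency") modes;
    # the small negative shift keeps the factorized matrix nonsingular.
    vals, vecs = spla.eigsh(L.tocsc(), k=k, sigma=-1e-6, which="LM")
    return vals, vecs   # columns of vecs are the discrete eigenfunctions
```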

446 citations


Journal ArticleDOI
TL;DR: Comparisons to previously published methods show that the new nsNMF method has advantages in keeping faithfulness to the data, in achieving a high degree of sparseness for both the estimated basis and the encoding vectors, and in better interpretability of the factors.
Abstract: We propose a novel nonnegative matrix factorization model that aims at finding localized, part-based representations of nonnegative multivariate data items. Unlike the classical nonnegative matrix factorization (NMF) technique, this new model, denoted "nonsmooth nonnegative matrix factorization" (nsNMF), corresponds to the optimization of an unambiguous cost function designed to explicitly represent sparseness, in the form of nonsmoothness, which is controlled by a single parameter. In general, this method produces a set of basis and encoding vectors that are not only capable of representing the original data, but also extract highly focalized patterns, which generally lend themselves to improved interpretability. The properties of this new method are illustrated with several data sets. Comparisons to previously published methods show that the new nsNMF method has advantages in keeping faithfulness to the data, in achieving a high degree of sparseness for both the estimated basis and the encoding vectors, and in better interpretability of the factors.
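A compact sketch of the nsNMF idea as described in the abstract is given below: standard multiplicative NMF updates with a smoothing matrix S interposed between the basis and encoding factors, controlled by a single parameter theta. The exact update rules and stopping criteria of the published method may differ in detail; treat this only as an approximation of the scheme.

```python
import numpy as np

def nsnmf(V, r, theta=0.5, n_iter=500, eps=1e-9, seed=0):
    """Sketch of nonsmooth NMF: V ~ W @ S @ H with a smoothing matrix S.

    S = (1 - theta) * I + (theta / r) * ones; theta = 0 recovers plain NMF,
    larger theta pushes W and H toward sparser (more localized) factors.
    Multiplicative updates for the Euclidean cost are used here.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    S = (1 - theta) * np.eye(r) + (theta / r) * np.ones((r, r))
    for _ in range(n_iter):
        WS = W @ S
        H *= (WS.T @ V) / (WS.T @ WS @ H + eps)
        SH = S @ H
        W *= (V @ SH.T) / (W @ SH @ SH.T + eps)
    return W, H, S

# Example: factor a random nonnegative matrix into 5 sparse parts.
V = np.abs(np.random.default_rng(1).random((50, 40)))
W, H, S = nsnmf(V, r=5, theta=0.5)
print(np.linalg.norm(V - W @ S @ H) / np.linalg.norm(V))
```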

405 citations


Journal ArticleDOI
TL;DR: Two novel methods are proposed for face recognition under arbitrary unknown lighting using a spherical harmonics illumination representation, requiring only one training image per subject and no 3D shape information.
Abstract: In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting by using spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result which demonstrated that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds the statistical model based on a collection of 2D basis images. We demonstrate that, by using the learned statistics, we can estimate the spherical harmonic basis images from just one image taken under arbitrary illumination conditions if there is no pose variation. Compared to the first method, the second method builds the statistical models directly in 3D spaces by combining the spherical harmonic illumination representation and a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates, under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve comparable levels of accuracy with methods that have much more onerous training data requirements. Comparison of the two methods is also provided.
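The recognition step described at the end of the abstract reduces to a per-subject least-squares fit: the test image is projected onto each subject's estimated basis images and the subject with the smallest residual wins. A minimal sketch, assuming the basis images (e.g. the usual nine low-order harmonic images) have already been estimated and flattened into columns:

```python
import numpy as np

def recognize(test_image, subject_bases):
    """Pick the subject whose basis images best reconstruct the test image.

    test_image: flattened image, shape (p,).
    subject_bases: dict mapping subject id -> (p, 9) matrix whose columns are
    that subject's estimated spherical harmonic basis images (9 is the usual
    number of low-order harmonic images; it is an assumption here).
    """
    best_id, best_residual = None, np.inf
    for subject_id, B in subject_bases.items():
        coeffs, *_ = np.linalg.lstsq(B, test_image, rcond=None)  # weighted combination
        residual = np.linalg.norm(test_image - B @ coeffs)
        if residual < best_residual:
            best_id, best_residual = subject_id, residual
    return best_id
```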

259 citations


Journal ArticleDOI
TL;DR: It is proved that, under the weak-perspective projection model, enforcing both the basis and the rotation constraints leads to a closed-form solution to the problem of non-rigid shape and motion recovery, which is important for applications like robot navigation and human computer interaction.
Abstract: Recovery of three dimensional (3D) shape and motion of non-static scenes from a monocular video sequence is important for applications like robot navigation and human computer interaction. If every point in the scene randomly moves, it is impossible to recover the non-rigid shapes. In practice, many non-rigid objects, e.g. the human face under various expressions, deform with certain structures. Their shapes can be regarded as a weighted combination of certain shape bases. Shape and motion recovery under such situations has attracted much interest. Previous work on this problem (Bregler, C., Hertzmann, A., and Biermann, H. 2000. In Proc. Int. Conf. Computer Vision and Pattern Recognition; Brand, M. 2001. In Proc. Int. Conf. Computer Vision and Pattern Recognition; Torresani, L., Yang, D., Alexander, G., and Bregler, C. 2001. In Proc. Int. Conf. Computer Vision and Pattern Recognition) utilized only orthonormality constraints on the camera rotations (rotation constraints). This paper proves that using only the rotation constraints results in ambiguous and invalid solutions. The ambiguity arises from the fact that the shape bases are not unique. An arbitrary linear transformation of the bases produces another set of eligible bases. To eliminate the ambiguity, we propose a set of novel constraints, basis constraints, which uniquely determine the shape bases. We prove that, under the weak-perspective projection model, enforcing both the basis and the rotation constraints leads to a closed-form solution to the problem of non-rigid shape and motion recovery. The accuracy and robustness of our closed-form solution is evaluated quantitatively on synthetic data and qualitatively on real video sequences.

228 citations


Journal ArticleDOI
TL;DR: A standard set of tight s, p, d, and f functions to be added to the polarization-consistent basis sets is proposed; the resulting pcJ-n basis sets should be suitable for calculating spin-spin coupling constants with density functional methods.
Abstract: The previously proposed polarization-consistent basis sets, optimized for density functional calculations, are evaluated for calculating indirect nuclear spin-spin coupling constants. The basis set limiting values can be obtained by performing a series of calculations with increasingly larger basis sets, but the convergence can be significantly improved by adding functions with large exponents. An accurate calculation of the Fermi-contact contribution requires the addition of tight s functions, while the paramagnetic spin-orbit contribution is sensitive to the presence of tight p functions. The spin-dipolar contribution requires the addition of p, d, and f functions. The optimal exponents for the tight functions can be obtained by optimizing the absolute sum of all contributions to the spin-spin coupling constant. On the basis of a series of test cases, we propose a standard set of tight s, p, d, and f functions to be added to the polarization-consistent basis sets. The resulting pcJ-n basis sets should be suitable for calculating spin-spin coupling constants with density functional methods.

223 citations


Journal ArticleDOI
TL;DR: Compared with the original basis sets, recontracted LANL2DZ basis sets for first-row transition metals clearly show an improvement in the reproduction of the corresponding experimental gaps.
Abstract: In this paper we report recontracted LANL2DZ basis sets for first-row transition metals. The valence-electron shell basis functions were recontracted using the PWP86 generalized gradient approximation functional and the hybrid B3LYP functional. Starting from the original LANL2DZ basis sets, a cyclic method was used to variationally optimize the contraction coefficients, while the contraction scheme was held fixed at the original one of the LANL2DZ basis functions. The performance of the recontracted basis sets was analyzed by direct comparison between calculated and experimental excitation and ionization energies. The results reported here, compared with those obtained using the original basis sets, clearly show an improvement in the reproduction of the corresponding experimental gaps.

202 citations


Book ChapterDOI
18 Sep 2006
TL;DR: This paper describes a matrix decomposition formulation for Boolean data, the Discrete Basis Problem, shows that it is computationally difficult, and gives a simple greedy algorithm for solving it.
Abstract: Matrix decomposition methods represent a data matrix as a product of two smaller matrices: one containing basis vectors that represent meaningful concepts in the data, and another describing how the observed data can be expressed as combinations of the basis vectors. Decomposition methods have been studied extensively, but many methods return real-valued matrices. If the original data is binary, the interpretation of the basis vectors is hard. We describe a matrix decomposition formulation, the Discrete Basis Problem. The problem seeks a Boolean decomposition of a binary matrix, thus allowing the user to easily interpret the basis vectors. We show that the problem is computationally difficult and give a simple greedy algorithm for solving it. We present experimental results for the algorithm. The method gives intuitively appealing basis vectors. On the other hand, the continuous decomposition methods often give better reconstruction accuracies. We discuss the reasons for this behavior.
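The sketch below is a much-simplified greedy cover heuristic for the Boolean factorization described here, using the data columns themselves as candidate basis vectors. The paper's actual algorithm builds candidates from column associations and uses a more refined cover function, so this is only an illustration of the greedy idea.

```python
import numpy as np

def greedy_boolean_factorization(C, k):
    """Greedy sketch of a Boolean decomposition C ~ B (Boolean product) X.

    C: (n, m) binary matrix. Returns binary B (n, k) and X (k, m); the
    reconstruction is np.minimum(B @ X, 1). Candidate basis vectors are
    simply the columns of C, which is cruder than the paper's algorithm.
    """
    n, m = C.shape
    B = np.zeros((n, k), dtype=int)
    X = np.zeros((k, m), dtype=int)
    covered = np.zeros_like(C)
    for t in range(k):
        best_gain, best_col, best_use = -1, None, None
        for j in range(m):                       # candidate basis vector
            b = C[:, j]
            # Using b on a data column covers new 1s (b & column, uncovered)
            # but also introduces spurious 1s (b & ~column); use it where gain > 0.
            new_ones = ((b[:, None] == 1) & (C == 1) & (covered == 0)).sum(axis=0)
            errors = ((b[:, None] == 1) & (C == 0)).sum(axis=0)
            use = (new_ones > errors).astype(int)
            gain = int(((new_ones - errors) * use).sum())
            if gain > best_gain:
                best_gain, best_col, best_use = gain, b.copy(), use
        B[:, t], X[t, :] = best_col, best_use
        covered = np.maximum(covered, np.outer(best_col, best_use))
    return B, X
```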

200 citations


Journal ArticleDOI
TL;DR: In this article, a new method for treating arbitrary discontinuities in a finite element (FE) context is presented, which constructs an approximation space consisting of mesh-based, enriched moving least-squares (MLS) functions near the point of interest and standard FE shape functions elsewhere.
Abstract: A new method for treating arbitrary discontinuities in a finite element (FE) context is presented. Unlike the standard extended FE method (XFEM), no additional unknowns are introduced at the nodes whose supports are crossed by discontinuities. The method constructs an approximation space consisting of mesh-based, enriched moving least-squares (MLS) functions near discontinuities and standard FE shape functions elsewhere. There is only one shape function per node, and these functions are able to represent known characteristics of the solution such as discontinuities, singularities, etc. The MLS method constructs shape functions based on an intrinsic basis by minimizing a weighted error functional. Thereby, weight functions are involved, and special mesh-based weight functions are proposed in this work. The enrichment is achieved through the intrinsic basis. The method is illustrated for linear elastic examples involving strong and weak discontinuities, and matches optimal rates of convergence even for crack-tip applications. Copyright © 2006 John Wiley & Sons, Ltd.
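The following one-dimensional sketch shows how an intrinsic basis enrichment enters MLS shape functions: the basis [1, x, |x - x_c|] lets the approximation reproduce a kink (weak discontinuity) at x_c without adding unknowns at the nodes. Gaussian weight functions are used here for brevity instead of the mesh-based weights proposed in the paper, and the node layout is invented for the example.

```python
import numpy as np

def mls_shape_functions(x, nodes, x_c, support=0.35):
    """1D moving least-squares shape functions with an enriched intrinsic basis.

    Intrinsic basis p(s) = [1, s, |s - x_c|]: the |s - x_c| term allows the
    approximation to reproduce a kink (weak discontinuity) at x_c. Gaussian
    weights are used for simplicity; the paper proposes special mesh-based
    weight functions instead.
    """
    p = lambda s: np.array([1.0, s, abs(s - x_c)])
    w = np.exp(-((x - nodes) / support) ** 2)            # weight of each node at x
    A = sum(wi * np.outer(p(xi), p(xi)) for wi, xi in zip(w, nodes))  # moment matrix
    phi = np.array([wi * p(x) @ np.linalg.solve(A, p(xi))
                    for wi, xi in zip(w, nodes)])
    return phi                                           # one shape function per node

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.52, nodes, x_c=0.5)
print(phi.sum())   # partition of unity: should be ~1
```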

188 citations


Journal ArticleDOI
TL;DR: The results indicate that the choice of basis function (and, where appropriate, basis width parameter) is data set dependent and evaluating all recognised basis functions suitable for RBF networks is advantageous.

154 citations


Journal ArticleDOI
TL;DR: A systematic way to modify standard basis sets for use in NMR spin-spin coupling calculations, which allows the high sensitivity of this property to the basis set to be handled in a manner which remains computationally feasible.
Abstract: This paper proposes a systematic way to modify standard basis sets for use in NMR spin-spin coupling calculations, which allows the high sensitivity of this property to the basis set to be handled in a manner which remains computationally feasible. The new basis set series is derived by uncontracting a standard basis set, such as correlation-consistent aug-cc-pVTZ, and extending it by systematically adding tight s and d functions. For elements in different rows of the periodic table, different progressions of functions are added. The new basis sets are shown to approach the basis set limit for calculations on a range of molecules containing hydrogen and first and second row atoms.

Journal ArticleDOI
Richard Szeliski1
01 Jul 2006
TL;DR: This approach removes the need to heuristically adjust the optimal number of preconditioning levels, significantly outperforms previously proposed approaches, and also maps cleanly onto data-parallel architectures such as modern GPUs.
Abstract: This paper develops locally adapted hierarchical basis functions for effectively preconditioning large optimization problems that arise in computer graphics applications such as tone mapping, gradient-domain blending, colorization, and scattered data interpolation. By looking at the local structure of the coefficient matrix and performing a recursive set of variable eliminations, combined with a simplification of the resulting coarse level problems, we obtain bases better suited for problems with inhomogeneous (spatially varying) data, smoothness, and boundary constraints. Our approach removes the need to heuristically adjust the optimal number of preconditioning levels, significantly outperforms previously proposed approaches, and also maps cleanly onto data-parallel architectures such as modern GPUs.

Journal ArticleDOI
TL;DR: It is demonstrated that the finite Heisenberg-Weyl groups provide a unified basis for the construction of useful waveforms/sequences for radar, communications, and the theory of error-correcting codes.
Abstract: We investigate the theory of the finite Heisenberg-Weyl group in relation to the development of adaptive radar and to the construction of spreading sequences and error-correcting codes in communications. We contend that this group can form the basis for the representation of the radar environment in terms of operators on the space of waveforms. We also demonstrate, following recent developments in the theory of error-correcting codes, that the finite Heisenberg-Weyl groups provide a unified basis for the construction of useful waveforms/sequences for radar, communications, and the theory of error-correcting codes.
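A small sketch of the objects involved: the finite Heisenberg-Weyl (time-frequency shift) operators acting on a length-N sequence, applied here to an Alltop-type cubic-phase seed. The seed choice and the correlation check are standard illustrations from the sequence-design literature, not taken from the paper.

```python
import numpy as np

def heisenberg_weyl_orbit(x):
    """All N^2 time-frequency shifts of a length-N sequence x.

    The operator D(k, l) acts by (D(k, l) x)[n] = exp(2*pi*i*l*n/N) * x[n - k],
    i.e. a cyclic time shift by k followed by a modulation at frequency l.
    These unitary operators represent the finite Heisenberg-Weyl group
    (up to phases) and generate families of radar/communication sequences.
    """
    N = len(x)
    n = np.arange(N)
    orbit = np.empty((N, N, N), dtype=complex)
    for k in range(N):
        for l in range(N):
            orbit[k, l] = np.exp(2j * np.pi * l * n / N) * np.roll(x, k)
    return orbit

# Example seed: an Alltop cubic-phase sequence of prime length N >= 5.
N = 7
alltop = np.exp(2j * np.pi * np.arange(N) ** 3 / N)
family = heisenberg_weyl_orbit(alltop)
# Shifts with different time offsets have normalized cross-correlation 1/sqrt(N).
print(abs(np.vdot(family[1, 2], family[3, 5])) / N)   # ~ 0.378 = 1/sqrt(7)
```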

Journal ArticleDOI
TL;DR: This work combines three solutions to electronic structure theory, local methods, density fitting, and explicit correlation, to produce a low-order scaling method that can achieve accurate MP2 energies for large systems.
Abstract: Three major obstacles in electronic structure theory are the steep scalings of computer time with respect to system size and basis size, and the slow convergence of correlation energies in orbital basis sets. Three solutions to these are, respectively, local methods, density fitting, and explicit correlation; in this work, we combine all three to produce a low-order scaling method that can achieve accurate MP2 energies for large systems. The errors introduced by the local approximations into the R12 treatment are analyzed for 16 chemical reactions involving 21 molecules. Weak pair approximations, as well as local resolution of the identity approximations, are tested for molecules with up to 49 atoms, over 100 correlated electrons, and over 1000 basis functions.

Journal ArticleDOI
TL;DR: The main concepts related to the “a priori” model reduction technique are revisited, and an incremental Karhunen-Loève algorithm is proposed to build basis functions for the decomposition of a state evolution.
Abstract: Karhunen-Loeve expansion and snapshot POD are based on principal component analysis of series of data. They provide basis vectors of the subspace spanned by the data. All the data must be taken into account to find the basis vectors, so these methods are not convenient for improving the basis vectors when new data are added to the database. We consider the data as a state evolution and propose an incremental algorithm to build basis functions for the decomposition of this state evolution. The proposed algorithm is based on the APHR method (A Priori Hyper-Reduction method), an adaptive strategy to build a reduced-order model when the state evolution is implicitly defined by non-linear governing equations. In the case of known state evolutions, the APHR method is an incremental Karhunen-Loeve decomposition. This approach is very convenient for expanding the subspace spanned by the basis functions. In the first part of the present paper the main concepts related to the “a priori” model reduction technique are revisited, prior to their application to the cases considered in the following sections. Some engineering problems are defined in domains that evolve in time. When this evolution is large, the present and the reference configurations differ significantly. Thus, when the problem is formulated in the total Lagrangian framework, frequent remeshing is required to avoid too large distortions of the finite element mesh. Another possibility for describing these models lies in the use of an updated formulation in which the mesh is conformed to each intermediate configuration. When the finite element method is used, frequent remeshing must be carried out to maintain an optimal mesh at each intermediate configuration. However, when the natural element method, a novel meshless technique whose accuracy does not depend significantly on the relative position of the nodes, is considered, large simulations can be performed without any remeshing stage, with the nodal positions at each intermediate configuration defined by transporting the nodes with the material velocity or the advection terms. Thus, we analyze the extension of the “a priori” model reduction, based on the combined use of the Karhunen-Loeve decomposition (which extracts significant information) and an approximation basis enrichment based on Krylov subspaces, previously proposed in the framework of fixed-mesh simulation, to problems defined in domains evolving in time. Finally, to illustrate the capabilities of the technique, the “a priori” model reduction is applied to solving the kinetic theory model which governs the orientation of fibers immersed in a Newtonian flow.
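A minimal sketch of the incremental enrichment step is given below: each new snapshot of the state evolution is projected onto the current basis, and the normalized residual is appended as a new mode when it is significant. This captures only the enrichment half of an incremental Karhunen-Loeve / APHR-style procedure (the periodic POD recompression is omitted), and the tolerance and test data are invented.

```python
import numpy as np

def update_basis(basis, snapshot, tol=1e-6):
    """Incrementally enrich an orthonormal basis with a new state snapshot.

    basis: (n, r) matrix with orthonormal columns (r may be 0).
    If the part of the snapshot not captured by the current basis exceeds
    tol (relative), the normalized residual is appended as a new mode.
    """
    if basis.shape[1] > 0:
        residual = snapshot - basis @ (basis.T @ snapshot)
    else:
        residual = snapshot.copy()
    if np.linalg.norm(residual) > tol * np.linalg.norm(snapshot):
        basis = np.hstack([basis, (residual / np.linalg.norm(residual))[:, None]])
    return basis

# Example: build a basis from a stream of snapshots of a state evolution.
rng = np.random.default_rng(0)
n = 200
modes = rng.standard_normal((n, 3))          # the "true" 3-dimensional dynamics
basis = np.empty((n, 0))
for t in range(50):
    snapshot = modes @ rng.standard_normal(3)
    basis = update_basis(basis, snapshot)
print("recovered basis size:", basis.shape[1])   # should be 3
```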

Journal ArticleDOI
TL;DR: Density fitting is used to reduce the effort for integral evaluation, and local approximations are introduced to improve the scaling of the computational resources with molecular size.
Abstract: The recently introduced MP2-R12/2*A(loc) and LMP2-R12/2*A(loc) methods are modified to use a short-range correlation factor expanded as a fixed linear combination of Gaussian geminals. Density fitting is used to reduce the effort for integral evaluation, and local approximations are introduced to improve the scaling of the computational resources with molecular size. The MP2-F12/2*A(loc) correlation energies converge very rapidly with respect to the atomic orbital basis set size. Already with the aug-cc-pVTZ basis the correlation energies computed for a set of 21 small molecules are found to be within 0.5% of the MP2 basis set limit. Furthermore the short-range correlation factor leads to an improved convergence of the resolution of the identity, and eliminates problems with long-range errors in density fitting caused by the linear r12 factor. The DF-LMP2-F12/2*A(loc) method is applied to compute second-order correlation energies for molecules with up to 49 atoms and more than 1600 basis functions.

Journal ArticleDOI
TL;DR: This work considers the problem of discriminating between states of a specified set with maximum confidence and finds that for a set of linearly independent states unambiguous discrimination is possible if the authors allow for the possibility of an inconclusive result.
Abstract: We consider the problem of discriminating between states of a specified set with maximum confidence. For a set of linearly independent states unambiguous discrimination is possible if we allow for the possibility of an inconclusive result. For linearly dependent sets an analogous measurement is one which allows us to be as confident as possible that when a given state is identified on the basis of the measurement result, it is indeed the correct state.
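For reference, the figure of merit behind "maximum confidence" discrimination is usually written as the posterior probability of the identified state given the measurement outcome; the notation below is the standard Bayesian form and is not copied from the paper.

```latex
% Confidence of identifying state \rho_j when POVM outcome j occurs:
C_j \;=\; P(\rho_j \mid \pi_j)
    \;=\; \frac{p_j\,\operatorname{Tr}(\rho_j\,\pi_j)}{\operatorname{Tr}(\rho\,\pi_j)},
\qquad
\rho \;=\; \sum_k p_k\,\rho_k ,
```

where the states ρ_k occur with prior probabilities p_k and {π_j} is the measurement; a maximum confidence measurement chooses each π_j to maximize C_j, and for linearly independent states C_j = 1 (unambiguous discrimination) is attainable at the price of an inconclusive outcome.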

Journal ArticleDOI
TL;DR: In this paper, a convergence analysis for abstract evolution equations in Banach spaces including semilinear parabolic initial-boundary value problems and spatial discretizations thereof is provided.
Abstract: In this paper, we consider a class of explicit exponential integrators that includes as special cases the explicit exponential Runge–Kutta and exponential Adams–Bashforth methods. The additional freedom in the choice of the numerical schemes allows, in an easy manner, the construction of methods of arbitrarily high order with good stability properties. We provide a convergence analysis for abstract evolution equations in Banach spaces including semilinear parabolic initial-boundary value problems and spatial discretizations thereof. From this analysis, we deduce order conditions which in turn form the basis for the construction of new schemes. Our convergence results are illustrated by numerical examples.
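The simplest member of this family is the exponential Euler method, u_{n+1} = e^{hL} u_n + h φ_1(hL) N(u_n), for semilinear problems u' = Lu + N(u). The sketch below applies it to a tiny made-up stiff system; the higher-order exponential Runge-Kutta and Adams-Bashforth schemes analyzed in the paper build on the same matrix exponential and φ-functions.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1(A):
    """phi_1(A) = A^{-1} (e^A - I), computed densely (fine for small systems)."""
    return solve(A, expm(A) - np.eye(A.shape[0]))

def exponential_euler(L, N, u0, h, steps):
    """Exponential Euler for u' = L u + N(u): u_{n+1} = e^{hL} u_n + h phi1(hL) N(u_n)."""
    E, P = expm(h * L), phi1(h * L)
    u = u0.copy()
    for _ in range(steps):
        u = E @ u + h * (P @ N(u))
    return u

# Example: a stiff linear part plus a mild cubic nonlinearity (made-up test problem).
L = np.array([[-100.0, 1.0], [0.0, -2.0]])
N = lambda u: -u ** 3
print(exponential_euler(L, N, np.array([1.0, 1.0]), h=0.1, steps=50))
```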

Journal ArticleDOI
TL;DR: A hybrid formulation combining stochastic reduced basis methods with polynomial chaos expansions for solving linear random algebraic equations arising from discretization of Stochastic partial differential equations is proposed.

Journal ArticleDOI
TL;DR: A new algorithm is proposed that makes use of the zero of the non-biased multiple autocorrelation function of the chaotic time series to determine the time delay, which efficiently suppresses the computing error caused by arbitrarily tracing the slope variation of the average displacement (AD) in the AD algorithm.
Abstract: A new algorithm is proposed for computing the embedding dimension and delay time in phase space reconstruction. It makes use of the zero of the non-biased multiple autocorrelation function of the chaotic time series to determine the time delay, which efficiently suppresses the computing error caused by arbitrarily tracing the slope variation of the average displacement (AD) in the AD algorithm. Thereafter, by means of the iterative algorithm of multiple autocorrelation and the Γ test, near-optimum values of the embedding dimension and delay time are estimated. The algorithm has a sound theoretical basis, and its computational complexity is relatively low and not strongly dependent on the data length. Simulated experimental results indicate that the relative error of the correlation dimension of standard chaotic time series is decreased from 4.4% with the conventional algorithm to 1.06% with this algorithm. The accuracy of invariants in phase space reconstruction is thus greatly improved.
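A stripped-down sketch of the delay-selection and embedding steps is shown below, using the first zero crossing of the ordinary sample autocorrelation in place of the paper's non-biased multiple autocorrelation, and a fixed embedding dimension in place of the Γ-test selection. The test signal is a made-up noisy sine; a chaotic series such as the Lorenz x-component would be the realistic input.

```python
import numpy as np

def delay_from_autocorrelation(x):
    """Delay = first zero crossing of the (ordinary) sample autocorrelation.

    The paper uses a non-biased multiple autocorrelation; the plain
    autocorrelation below is a simplified stand-in for illustration.
    """
    x = x - x.mean()
    for tau in range(1, len(x) // 2):
        r = np.dot(x[:-tau], x[tau:]) / np.dot(x, x)
        if r <= 0:
            return tau
    return 1

def embed(x, dim, tau):
    """Takens delay embedding: rows are [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

t = np.linspace(0, 50, 5000)
x = np.sin(t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)
tau = delay_from_autocorrelation(x)
Y = embed(x, dim=3, tau=tau)   # dim=3 is assumed, not estimated by the Gamma test
print(tau, Y.shape)
```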

Journal ArticleDOI
TL;DR: It is shown that significant changes in performance occur as the number of basis functions varies, and that very good results are obtained by allowing modest representation error.
Abstract: A new source model for representing spatially distributed neural activity is presented. The signal of interest is modeled as originating from a patch of cortex and is represented using a set of basis functions. Each cortical patch has its own set of bases, which allows representation of arbitrary source activity within the patch. This is in contrast to previously proposed cortical patch models which assume a specific distribution of activity within the patch. We present a procedure for designing bases that minimize the normalized mean squared representation error, averaged over different activity distributions within the patch. Extension of existing algorithms to the basis function framework is straightforward and is illustrated using linearly constrained minimum variance (LCMV) spatial filtering and maximum-likelihood signal estimation/generalized likelihood ratio test (ML/GLRT). The number of bases chosen for each patch determines a tradeoff between representation accuracy and the ability to differentiate between distinct patches. We propose choosing the minimum number of bases that satisfy a constraint on the normalized mean squared representation accuracy. A mismatch analysis for LCMV and ML/GLRT is presented to show that this is an appropriate strategy for choosing the number of bases. The effectiveness of the patch basis model is demonstrated using real and simulated evoked response data. We show that significant changes in performance occur as the number of basis functions varies, and that very good results are obtained by allowing modest representation error
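The LCMV extension mentioned in the abstract amounts to replacing the single lead-field vector by a basis matrix for the patch. Below is a sketch using the textbook LCMV weight formula, with made-up sensor counts and random placeholder data standing in for the measured covariance and forward fields.

```python
import numpy as np

def lcmv_patch_filter(C, L):
    """LCMV spatial filter for a patch represented by a basis of lead fields.

    C: (m, m) sensor covariance matrix; L: (m, k) matrix whose k columns are
    the forward fields of the k basis functions chosen for the patch.
    Returns W (m, k) such that W.T @ data estimates the k basis amplitudes
    with unit gain on the patch (W.T @ L = I) and minimum output variance.
    This is the textbook LCMV expression, not code from the paper.
    """
    Cinv_L = np.linalg.solve(C, L)
    return Cinv_L @ np.linalg.inv(L.T @ Cinv_L)

# Toy usage with assumed dimensions: 64 sensors, 4 basis functions per patch.
rng = np.random.default_rng(0)
C = np.cov(rng.standard_normal((64, 1000)))
L = rng.standard_normal((64, 4))
W = lcmv_patch_filter(C, L)
print(np.allclose(W.T @ L, np.eye(4)))   # unit-gain constraint satisfied
```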

Journal ArticleDOI
TL;DR: In this article, the authors give a characterization of d-dimensional modulation spaces with moderate weights by means of the ddimensional Wilson basis and prove that pseudodifferential operators with generalized Weyl symbols are bounded on these modulation spaces.
Abstract: We give a characterization of d-dimensional modulation spaces with moderate weights by means of the d-dimensional Wilson basis. As an application we prove that pseudodifferential operators with generalized Weyl symbols are bounded on these modulation spaces.

Journal ArticleDOI
TL;DR: Vector fitting (VF) is a popular iterative rational approximation technique for sampled data in the frequency domain, widely used in the power systems and microwave engineering communities.
Abstract: Vector fitting (VF) is a popular iterative rational approximation technique for sampled data in the frequency domain. VF is nowadays widely investigated and used in the Power Systems and Microwave Engineering communities. The VF methodology is recognized as an elegant version of the Sanathanan-Koerner iteration with a well-chosen basis.
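A bare-bones sketch of one vector fitting iteration is given below: a linear least-squares fit in a partial-fraction basis followed by the pole relocation step, which is exactly a Sanathanan-Koerner-type reweighting. Enforcement of real symmetry and pole stability, and the final residue fit, are omitted, so this is an illustration rather than a usable implementation.

```python
import numpy as np

def vf_iteration(s, f, poles):
    """One simplified vector-fitting iteration.

    Solves, in least squares,
        sum_k c_k/(s - a_k) + d - f(s) * sum_k q_k/(s - a_k) = f(s)
    and relocates the poles to the zeros of sigma(s) = 1 + sum_k q_k/(s - a_k),
    which are the eigenvalues of diag(a) - 1 q^T.
    """
    n = len(poles)
    P = 1.0 / (s[:, None] - poles[None, :])                 # partial-fraction basis
    A = np.hstack([P, np.ones((len(s), 1)), -f[:, None] * P])
    x, *_ = np.linalg.lstsq(A, f, rcond=None)
    residues, d, q = x[:n], x[n], x[n + 1:]
    new_poles = np.linalg.eigvals(np.diag(poles) - np.outer(np.ones(n), q))
    return new_poles, residues, d

# Tiny usage example: recover the poles of f(s) = 1/(s+1) + 2/(s+5).
s = 1j * np.linspace(0.1, 20, 200)
f = 1.0 / (s + 1) + 2.0 / (s + 5)
poles = np.array([-2.0 + 0j, -8.0 + 0j])
for _ in range(5):
    poles, residues, d = vf_iteration(s, f, poles)
print(np.sort_complex(poles))   # should approach -1 and -5
```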

Journal ArticleDOI
TL;DR: In this article, it is shown that for every problem within dimensional regularization, using the integration-by-parts method, one is able to construct a set of master integrals such that each corresponding coefficient function is finite in the limit of dimension equal to four.

Journal ArticleDOI
TL;DR: It is shown that the Satisfiability Problem and the Hamiltonian Path Problem can be deterministically solved in linear or polynomial time by a uniform family of P systems with separation rules, where separation rules do not change labels, but polarizations are used.
Abstract: P systems (or membrane systems) are a class of distributed parallel computing devices of a biochemical type, in which membrane division is a frequently investigated way of obtaining an exponential working space in linear time, and on this basis solving hard problems, typically NP-complete problems, in polynomial (often linear) time. In this paper, using another way to obtain exponential working space --- membrane separation --- it is shown that the Satisfiability Problem and the Hamiltonian Path Problem can be deterministically solved in linear or polynomial time by a uniform family of P systems with separation rules, where separation rules do not change labels, but polarizations are used. Some related open problems are mentioned.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This work proposes an algorithm named tree-based orthogonal matching pursuit (TOMP), which is shown to provide significantly better reconstruction than methods that only use the sparse representation assumption.
Abstract: Recent studies in linear inverse problems have recognized the sparse representation of an unknown signal in a certain basis as useful and effective prior information for solving those problems. In many multiscale bases (e.g. wavelets), signals of interest (e.g. piecewise-smooth signals) not only have few significant coefficients, but those significant coefficients are also well organized in trees. We propose to exploit this sparse tree representation as additional prior information for linear inverse problems with limited numbers of measurements. In particular, our proposed algorithm, named tree-based orthogonal matching pursuit (TOMP), is shown to provide significantly better reconstruction than methods that only use the sparse representation assumption.
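For orientation, the baseline that TOMP extends is plain orthogonal matching pursuit, sketched below; the tree-based variant additionally steers atom selection using the parent-child structure of wavelet coefficients, which is not implemented here. The measurement sizes in the example are arbitrary.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Plain orthogonal matching pursuit: greedy sparse solution of y ~ A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        correlations = np.abs(A.T @ residual)
        correlations[support] = -np.inf              # do not pick an atom twice
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Example: recover a 5-sparse vector from 40 random measurements in dimension 128.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128)) / np.sqrt(40)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
print(np.linalg.norm(omp(A, y, 5) - x_true))
```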

Journal ArticleDOI
TL;DR: The possible recursive definitions of principal angles and vectors in complex vector spaces are analysed and a new projector-based definition is given, which makes it possible to derive important properties of the principal vectors and to generalize a result of Bjorck and Golub.
Abstract: We analyse the possible recursive definitions of principal angles and vectors in complex vector spaces and give a new projector based definition. This enables us to derive important properties of the principal vectors and to generalize a result of Bjorck and Golub (Math. Comput. 1973; 27(123):579–594), which is the basis of today’s computational procedures in real vector spaces. We discuss other angle definitions and concepts in the last section. Copyright © 2006 John Wiley & Sons, Ltd.
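The classical Björck-Golub computation that this work generalizes can be sketched in a few lines: orthonormalize both subspaces and read the cosines of the principal angles off the SVD of Q_A^H Q_B. The small-angle-accurate (sine-based) refinement is omitted, and the example subspaces are random.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles and vectors between span(A) and span(B) (Bjorck-Golub).

    A, B: matrices whose columns span the two (possibly complex) subspaces.
    Returns the angles in radians and the corresponding principal vectors.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    U, cosines, Vh = np.linalg.svd(Qa.conj().T @ Qb)
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    return angles, Qa @ U, Qb @ Vh.conj().T

# Example: two random 3-dimensional subspaces of C^10.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3)) + 1j * rng.standard_normal((10, 3))
B = rng.standard_normal((10, 3)) + 1j * rng.standard_normal((10, 3))
angles, Va, Vb = principal_angles(A, B)
print(np.degrees(angles))
```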

Patent
21 Nov 2006
TL;DR: In this paper, a directed set is defined as a plurality of elements and chains relating the concepts, and a subset of the chains is selected to form a basis for the directed set.
Abstract: A directed set can be used to establish contexts for linguistic concepts: for example, to aid in answering a question, to refine a query, or even to determine what questions can be answered given certain knowledge. A directed set includes a plurality of elements and chains relating the concepts. One concept is identified as a maximal element. The chains connect the maximal element to each concept in the directed set, and more than one chain can connect the maximal element to any individual concept either directly or through one or more intermediate concepts. A subset of the chains is selected to form a basis for the directed set. Each concept in the directed set is measured to determine how concretely each chain in the basis represents it. These measurements for a single concept form a vector in Euclidean k-space. Distances between these vectors can be used to determine how closely related pairs of concepts are in the directed set.

Journal ArticleDOI
TL;DR: In this article, a reduced approximation basis is constructed for the mesh-based discretisation of the Fokker-Planck equation, significantly reducing the number of degrees of freedom while avoiding the high number of realizations that stochastic simulations require to describe accurately the microstructural state under Brownian effects.
Abstract: Stochastic simulation for finitely extensible non-linear elastic (FENE) dumbbells has been successfully applied (see the review paper of Keunings [R. Keunings, Micro-macro methods for the multiscale simulation of viscoelastic flow using molecular models of kinetic theory, in: D.M. Binding, K. Walters (Eds.), Rheology Reviews, British Society of Rheology, 2004, pp. 67-98] and the references therein). The main difficulty in these simulations is the high number of realizations required to describe accurately the microstructural state due to Brownian effects. Discretisation of the Fokker-Planck equation with a mesh support (finite elements, finite differences, finite volumes, spectral techniques, etc.) makes it possible to circumvent the difficulty related to Brownian effects. However, kinetic theory models involve physical and conformation spaces; thus, the molecular distribution depends on time and space as well as on the molecular orientation and extension (conformation coordinates). In this form the resulting Fokker-Planck equation is defined in a space of dimension 7. In the reduction technique proposed in this paper, a reduced approximation basis is constructed. The new shape functions are defined over the whole domain in an appropriate manner, so the number of degrees of freedom involved in the solution of the Fokker-Planck equation is significantly reduced. The construction of these new approximation functions follows an 'a priori' approach, which combines a basis reduction (using the Karhunen-Loeve decomposition) with a basis enrichment based on the use of some Krylov subspaces. This numerical technique is applied to solving the FENE model of viscoelastic flows. (c) 2006 Elsevier B.V. All rights reserved.

Journal ArticleDOI
TL;DR: It is important to sufficiently reduce the error due to the resolution of the identity approximation for the three- and four-electron integrals, and the complementary auxiliary basis set method is recommended.
Abstract: The basis set limit Møller-Plesset second-order equilibrium bond lengths of He2, Be2, and Ne2, accurate to 0.01a0, are computed to be 5.785a0, 5.11a0, and 6.05a0. The corresponding binding energies are 22.4±0.1, 2180±20, and 86±2μEh, respectively. An accuracy of 95% in the binding energy requires an aug-cc-pV6Z basis or larger for conventional Møller-Plesset theory. This accuracy is obtained using an aug-cc-pV5Z basis if geminal basis functions with a linear correlation factor are included, and with an aug-cc-pVQZ basis if the linear correlation factor is replaced by exp(−γr12) with γ=1. The correlation factor r12exp(−γr12) does not perform as well, describing the atom more efficiently than the dimer. The geminal functions supplement the orbital basis in the description of both the short-range correlation, at electron coalescence, and the long-range dispersion correlation, and the values of γ that give the best binding energies are smaller than those that are optimum for the atom or the dimer. It is important to sufficiently reduce the error due to the resolution of the identity approximation for the three- and four-electron integrals, and the complementary auxiliary basis set method is recommended.