
Showing papers on "Basis (linear algebra)" published in 2000


Book
01 Jan 2000
Abstract: Second Quantization; Spin in Second Quantization; Orbital Rotations; Exact and Approximate Wave Functions; The Standard Models; Atomic Basis Functions; Short-Range Interactions and Orbital Expansions; Gaussian Basis Sets; Molecular Integral Evaluation; Hartree-Fock Theory; Configuration-Interaction Theory; Multiconfigurational Self-Consistent Field Theory; Coupled-Cluster Theory; Perturbation Theory; Calibration of the Electronic-Structure Models; List of Acronyms; Index

1,740 citations


Journal ArticleDOI
TL;DR: It is shown that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency and provide a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
Abstract: In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
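
A minimal numerical sketch (not from the paper) of the non-uniqueness the abstract describes: with more basis vectors than input dimensions, several coefficient vectors reproduce the same input, and extra criteria such as minimum norm or sparsity are needed to pick one. The basis matrix and signal are invented toy values.

```python
import numpy as np

# Toy overcomplete basis: 2-dimensional inputs, 3 basis vectors (columns).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
x = np.array([2.0, 1.0])          # input signal

# Two different coefficient vectors reconstruct x exactly, so the
# representation in this basis is not unique.
s1 = np.array([2.0, 1.0, 0.0])    # uses only the first two basis vectors
s2 = np.array([1.0, 0.0, 1.0])    # uses the first and third basis vectors
assert np.allclose(A @ s1, x) and np.allclose(A @ s2, x)

# One canonical choice is the minimum-norm solution via the pseudoinverse;
# the paper instead places a sparse prior on the coefficients and, further,
# learns the basis A itself as a probabilistic model of the data.
s_pinv = np.linalg.pinv(A) @ x
print("minimum-norm coefficients:", np.round(s_pinv, 3))
```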

1,267 citations


Journal ArticleDOI
TL;DR: In this paper, a multireference second-order perturbation theory (MRPT2) has been developed which allows the use of reference wave functions with large active spaces and arbitrary configuration selection.
Abstract: A multireference second-order perturbation theory (MRPT2) has been developed which allows the use of reference wave functions with large active spaces and arbitrary configuration selection. Internally contracted configurations are used as a basis for all configuration subspaces of the first-order wave function for which the overlap matrix depends only on the second-order density matrix of the reference function. Some other subspaces which would require the third- or fourth-order density matrices are left uncontracted. This removes bottlenecks of complete active space second-order perturbation theory (CASPT2) caused by the need to construct and diagonalize large overlap matrices. Preliminary applications of the new method for 1,2-dihydronaphthalene (DHN) and free base porphin are presented in which the effect of applying occupancy restrictions in the reference wave function (restricted active space second-order perturbation theory, RASPT2) and reference configuration selection (general MRPT2) on electro...

766 citations


Journal ArticleDOI
TL;DR: In this paper, models of sensorimotor transformations that use basis functions as a flexible intermediate representation are reviewed; this approach provides a unifying insight into the neural basis of computation, learning and short-term memory, and is consistent with the responses of cortical neurons.
Abstract: Behaviors such as sensing an object and then moving your eyes or your hand toward it require that sensory information be used to help generate a motor command, a process known as a sensorimotor transformation. Here we review models of sensorimotor transformations that use a flexible intermediate representation that relies on basis functions. The use of basis functions as an intermediate is borrowed from the theory of nonlinear function approximation. We show that this approach provides a unifying insight into the neural basis of three crucial aspects of sensorimotor transformations, namely, computation, learning and short-term memory. This mathematical formalism is consistent with the responses of cortical neurons and provides a fresh perspective on the issue of frames of reference in spatial representations.
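
A hedged illustration (not taken from the paper) of the underlying idea from nonlinear function approximation: a sensorimotor mapping is computed as a linear readout of a fixed layer of basis functions of the sensory inputs. The Gaussian basis functions and the toy "motor command" below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensory inputs: retinal target position r and eye position e (1-D each).
r = rng.uniform(-1, 1, size=500)
e = rng.uniform(-1, 1, size=500)
target = r + e                      # toy motor command (head-centered position)

# Intermediate layer: Gaussian basis functions tiling the joint input space.
centers = np.array(np.meshgrid(np.linspace(-1, 1, 8),
                               np.linspace(-1, 1, 8))).reshape(2, -1).T
sigma = 0.35

def basis(r, e):
    d2 = (r[:, None] - centers[:, 0]) ** 2 + (e[:, None] - centers[:, 1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))    # shape (n_samples, n_basis)

# Motor command = linear readout of the basis activations (least-squares fit);
# the same basis layer could feed several different readouts, which is part of
# its appeal as an intermediate representation.
Phi = basis(r, e)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

test_r, test_e = np.array([0.3]), np.array([-0.5])
print("approximation:", basis(test_r, test_e) @ w, "exact:", test_r + test_e)
```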

368 citations


Journal ArticleDOI
TL;DR: An expanded classification scheme for range-restriction scenarios is developed that conceptualizes range-restriction scenarios from various combinations of the following facets: the variable on which selection occurs, whether unrestricted variances for the relevant variables are known, and whether a 3rd variable, if involved, is measured or unmeasured.
Abstract: A common research problem is the estimation of the population correlation between x and y from an observed correlation r xy obtained from a sample that has been restricted because of some sample selection process. Methods of correcting sample correlations for range restriction in a limited set of conditions are well-known. An expanded classification scheme for range-restriction scenarios is developed that conceptualizes range-restriction scenarios from various combinations of the following facets: (a) the variable(s) on which selection occurs (x, y and/or a 3rd variable z), (b) whether unrestricted variances for the relevant variables are known, and (c) whether a 3rd variable, if involved, is measured or unmeasured. On the basis of these facets, the authors describe potential solutions for 11 different range-restriction scenarios and summarize research to date on these techniques.
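
As a concrete anchor for one of the well-known scenarios the article catalogues, here is a hedged sketch of the classical correction for direct restriction on x (Thorndike's Case II), which assumes the unrestricted standard deviation of x is known; the numbers are invented.

```python
import math

def correct_case_ii(r_xy: float, sd_restricted: float, sd_unrestricted: float) -> float:
    """Classical correction for direct range restriction on x (Case II).

    r_xy            correlation observed in the restricted sample
    sd_restricted   SD of x in the restricted sample
    sd_unrestricted SD of x in the unrestricted population (assumed known)
    """
    u = sd_unrestricted / sd_restricted
    return (r_xy * u) / math.sqrt(1.0 - r_xy ** 2 + (r_xy ** 2) * (u ** 2))

# Invented example: r = .30 observed after selection halved the SD of x.
print(round(correct_case_ii(0.30, sd_restricted=5.0, sd_unrestricted=10.0), 3))
# -> about .53; the restricted sample substantially underestimates the correlation.
```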

356 citations


Journal ArticleDOI
TL;DR: A recently developed general theory for basis construction is presented; it is a generalization of the classical Laguerre theory and particularly exploits the property that basis function models are linearly parametrized.

336 citations


Journal ArticleDOI
TL;DR: In this article, the Kohn-Sham density-functional method with Gaussian orbitals was used for the Coulomb problem with periodic boundary conditions, which achieves linear scaling of computational time with system size but also very high accuracy in all infinite summations.
Abstract: We report methodological and computational details of our Kohn-Sham density-functional method with Gaussian orbitals for systems with periodic boundary conditions. Our approach for the Coulomb problem is based on the direct space fast multipole method, which achieves not only linear scaling of computational time with system size but also very high accuracy in all infinite summations. The latter is pivotal for avoiding numerical instabilities that have previously plagued calculations with large bases, especially those containing diffuse functions. Our program also makes extensive use of other linear-scaling techniques recently developed for large clusters. Using these theoretical tools, we have implemented computational programs for energy and analytic energy gradients (forces) that make it possible to optimize geometries of periodic systems with great efficiency and accuracy. Vibrational frequencies are then accurately obtained from finite differences of forces. We demonstrate the capabilities of our methods with benchmark calculations on polyacetylene, polyphenylenevinylene, and a (5,0) carbon nanotube, employing basis sets of double zeta plus polarization quality, in conjunction with the generalized gradient approximation and kinetic-energy density-dependent functionals. The largest calculation reported in this paper contains 244 atoms and 1344 contracted Gaussians in the unit cell.
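
Not the authors' code, just a sketch of the generic finite-difference step the abstract mentions: each Hessian column is built from central differences of analytic forces, and harmonic frequencies follow from the mass-weighted Hessian. The force routine below is a stand-in (a simple harmonic toy potential), not a periodic DFT calculation.

```python
import numpy as np

def forces(x, k=1.5):
    """Stand-in for analytic forces; here from a harmonic potential V = k/2 |x|^2."""
    return -k * x

def hessian_by_finite_differences(x0, forces, h=1e-3):
    """H[i, j] = -dF_i/dx_j estimated by central differences of the forces."""
    n = x0.size
    H = np.zeros((n, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = h
        H[:, j] = -(forces(x0 + step) - forces(x0 - step)) / (2 * h)
    return 0.5 * (H + H.T)              # symmetrize

x0 = np.zeros(3)                        # equilibrium geometry (toy, 3 coordinates)
masses = np.array([1.0, 1.0, 2.0])      # toy masses for the three coordinates
H = hessian_by_finite_differences(x0, forces)

# Eigenvalues of the mass-weighted Hessian are squared angular frequencies.
Minv_sqrt = np.diag(1.0 / np.sqrt(masses))
eigvals = np.linalg.eigvalsh(Minv_sqrt @ H @ Minv_sqrt)
print("harmonic frequencies:", np.sqrt(np.abs(eigvals)))
```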

331 citations


Journal ArticleDOI
TL;DR: A model that allows determining the weights related to interacting criteria is presented; this is done on the basis of the knowledge of a partial ranking over a reference set of alternatives (prototypes), a partial ranking over the set of criteria, and a partial ranking over the set of interactions between pairs of criteria.

286 citations


Proceedings ArticleDOI
Philip N. Klein
01 Feb 2000
TL;DR: The result can be viewed as a generalization of the Furst-Kannan algorithm: there is an algorithm that, for any given k, runs in n^(k^2+O(1)) time and finds the closest lattice vector whenever the distance from the given vector to the lattice is at most k times the length of the shortest Gram-Schmidt vector.
Abstract: 1 Introduction. The closest lattice vector problem, also called the nearest lattice point problem, is NP-hard [2], and no polynomial-time approximation algorithm is known with a performance ratio better than exponential. It seems worthwhile to identify circumstances in which the problem can be solved optimally. As an example of where this arises, Furst and Kannan [3] give a distribution of n-dimensional instances of Subset-Sum problems for which there is an algorithm that runs in polynomial time almost always. One ingredient in their result is an algorithm that, given a vector v and a basis of a lattice, finds the vector in the lattice that is closest to v assuming the following condition holds: the distance from v to the lattice is less than half the length of the shortest Gram-Schmidt vector. Indeed, they show that, assuming this condition holds, there is a unique vector closest to v. Here the Gram-Schmidt vectors corresponding to a basis b_1, ..., b_n are the vectors b*_1, ..., b*_n, where b*_i is the projection of b_i orthogonal to the vector space generated by b_1, ..., b_{i-1}. These are the vectors found by the Gram-Schmidt algorithm for orthogonalization. (Often the Gram-Schmidt algorithm is used to find an orthonormal basis, i.e., one where the output vectors have norm 1; here we assume no such normalization is performed.) Our result can be viewed as a generalization of this algorithm: there is an algorithm that, for any given k, runs in n^(k^2+O(1)) time and finds the closest vector to v if the following condition holds: the distance from v to the lattice is at most k times the length of the shortest Gram-Schmidt vector. 1.1 Related results. Some related results will help to put our result in perspective. For comparison, Kannan [5] gives a closest-vector algorithm that runs in time poly(n)·n^n, where n is the dimension of the lattice. Thus the algorithm presented here is only useful when k = o(√n). On the other hand, as mentioned above, Furst and Kannan gave an algorithm that runs in polynomial time and finds the closest vector when k = 1/2. For any lattice basis and any vector v, the distance of v from the lattice is no more than half the sum of the lengths of the Gram-Schmidt vectors; furthermore, this bound is achievable. Thus our algorithm is useful only when the vector v is unusually close to the lattice. How small can the smallest Gram-Schmidt vector be? One can choose a lattice and a basis for it so as to make the smallest Gram-Schmidt vector arbitrarily small in comparison to the shortest vector in the lattice. On the other hand, Lagarias, Lenstra, and Schnorr [6] have shown that for every lattice there exists a basis (the Korkin-Zolotarev basis) where the smallest Gram-Schmidt vector is at least 3/(2n) times the length of the shortest vector in the lattice. Even in this case, for our algorithm to be useful, the distance between the input vector and the lattice must be significantly less than the length of the shortest vector. 2 Notation. We use the following notation. Vectors are signified by boldface. For vectors b_1, ..., b_n, we denote by V(b_1, ..., b_n) the vector space generated by these vectors, and we denote by L(b_1, ..., b_n) the lattice they generate. The Gram-Schmidt vectors corresponding to b_1, ..., b_n are denoted b*_1, ..., b*_n. By definition, b*...
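
To make the definition concrete, here is a small sketch (not from the paper) that computes the unnormalized Gram-Schmidt vectors of a basis and a rough value of k for a target vector. Finding the true closest lattice vector is the hard problem itself, so Babai-style rounding is used only to get an upper bound on the distance; the basis and vector are invented.

```python
import numpy as np

def gram_schmidt_unnormalized(B):
    """Return b*_1, ..., b*_n for the rows of B (no normalization, as in the paper)."""
    Bstar = np.array(B, dtype=float)
    for i in range(len(B)):
        for j in range(i):
            bj = Bstar[j]
            Bstar[i] -= (np.dot(B[i], bj) / np.dot(bj, bj)) * bj
    return Bstar

# Invented example: basis vectors are the rows of B, target vector is v.
B = np.array([[3.0, 0.0, 1.0],
              [1.0, 4.0, 0.0],
              [0.0, 1.0, 5.0]])
v = np.array([3.1, 4.2, 0.9])

Bstar = gram_schmidt_unnormalized(B)
shortest_gs = min(np.linalg.norm(b) for b in Bstar)

# Rounding the coordinates of v in the basis gives *some* nearby lattice vector,
# hence an upper bound on dist(v, L); the paper's algorithm finds the closest one.
coeffs = np.round(np.linalg.solve(B.T, v))
nearby = B.T @ coeffs
upper_bound = np.linalg.norm(v - nearby)

print("shortest Gram-Schmidt length:", round(shortest_gs, 3))
print("k is at most:", round(upper_bound / shortest_gs, 3))
```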

208 citations


Journal ArticleDOI
TL;DR: The MOESP class of identification algorithms is made recursive on the basis of various updating schemes for subspace tracking.

207 citations


Journal ArticleDOI
TL;DR: In this paper, a more extensive interpretation of the equivalent workload formulation of a Brownian network model is developed, and a linear program called the static planning problem is introduced to articulate the notion of heavy traffic for a general open network, and the dual of that linear program is used to define a canonical choice of the basis matrix $M$.
Abstract: A recent paper by Harrison and Van Mieghem explained in general mathematical terms how one forms an “equivalent workload formulation” of a Brownian network model. Denoting by $Z(t)$ the state vector of the original Brownian network, one has a lower dimensional state descriptor $W(t) = MZ(t)$ in the equivalent workload formulation, where $M$ can be chosen as any basis matrix for a particular linear space. This paper considers Brownian models for a very general class of open processing networks, and in that context develops a more extensive interpretation of the equivalent workload formulation, thus extending earlier work by Laws on alternate routing problems. A linear program called the static planning problem is introduced to articulate the notion of “heavy traffic” for a general open network, and the dual of that linear program is used to define a canonical choice of the basis matrix $M$. To be specific, rows of the canonical $M$ are alternative basic optimal solutions of the dual linear program. If the network data satisfy a natural monotonicity condition, the canonical matrix $M$ is shown to be nonnegative, and another natural condition is identified which insures that $M$ admits a factorization related to the notion of resource pooling.

Journal ArticleDOI
TL;DR: It is shown how the restriction to a low-dimensional basis as well as improper treatment of boundary conditions can affect the range of validity of these models.
Abstract: In this paper some implications of the technique of projecting the Navier–Stokes equations onto low-dimensional bases of eigenfunctions are explored. Such low-dimensional bases are typically obtained by truncating a particularly well-suited complete set of eigenfunctions at very low orders, arguing that a small number of such eigenmodes already captures a large part of the dynamics of the system. In addition, in the treatment of inhomogeneous spatial directions of a flow, eigenfunctions that do not satisfy the boundary conditions are often used, and in the Galerkin projection the corresponding boundary conditions are ignored. We show how the restriction to a low-dimensional basis as well as improper treatment of boundary conditions can affect the range of validity of these models. As particular examples of eigenfunction bases, systems of Karhunen–Loeve eigenfunctions are discussed in more detail, although the results presented are valid for any basis.
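
A hedged sketch (not the paper's own code) of the standard construction behind such bases: Karhunen-Loeve (POD) eigenfunctions are the left singular vectors of a snapshot matrix, and the flow is approximated by truncating to the first few modes; the snapshot data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshots: n_space spatial points sampled at n_time instants.
n_space, n_time = 200, 60
x = np.linspace(0, 2 * np.pi, n_space)
snapshots = np.array([np.sin(x - 0.1 * t) + 0.3 * np.sin(3 * x + 0.05 * t)
                      + 0.01 * rng.standard_normal(n_space)
                      for t in range(n_time)]).T          # shape (n_space, n_time)

# Karhunen-Loeve / POD modes = left singular vectors of the mean-removed snapshots.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

# Truncate to a low-dimensional basis; the captured "energy" fraction is why a few
# modes are often deemed sufficient, while the paper's point is that this truncation
# (and mishandled boundary conditions) limits the validity of the reduced model.
r = 4
energy = (S[:r] ** 2).sum() / (S ** 2).sum()
coeffs = U[:, :r].T @ (snapshots - mean_flow)   # modal amplitudes a_k(t)
reconstruction = mean_flow + U[:, :r] @ coeffs

print(f"{r} modes capture {100 * energy:.1f}% of the fluctuation energy")
print("max reconstruction error:", np.abs(reconstruction - snapshots).max())
```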

Journal ArticleDOI
TL;DR: It is shown that a sequence of generalized eigenfunctions of an Euler--Bernoulli beam equation with a tip mass under boundary linear feedback control forms a Riesz basis for the state Hilbert space.
Abstract: Using an abstract condition of Riesz basis generation of discrete operators in the Hilbert spaces, we show, in this paper, that a sequence of generalized eigenfunctions of an Euler--Bernoulli beam equation with a tip mass under boundary linear feedback control forms a Riesz basis for the state Hilbert space. In the meanwhile, an asymptotic expression of eigenvalues and the exponential stability are readily obtained. The main results of [ SIAM J. Control Optim., 36 (1998), pp. 1962--1986] are concluded as a special case, and the additional conditions imposed there are removed.

Journal ArticleDOI
TL;DR: In this article, the authors present the main principles that are the basis of space syntax, in addition to methodological perspectives for a closer integration with GIS, which should be of use for many GIS applications, such as urban planning and design.

Journal ArticleDOI
TL;DR: The procedure is introduced; the asymptotic bounding properties and optimal convergence rate of the error estimator are proved; computational considerations are discussed; and, finally, corroborating numerical results are presented.
Abstract: We propose a new reduced-basis output bound method for the symmetric eigenvalue problem. The numerical procedure consists of two stages: the pre-processing stage, in which the reduced basis and associated functions are computed—“off-line”—at a prescribed set of points in parameter space; and the real-time stage, in which the approximate output of interest and corresponding rigorous error bounds are computed—“on-line”—for any new parameter value of interest. The real time calculation is very inexpensive as it requires only the solution or evaluation of very small systems. We introduce the procedure; prove the asymptotic bounding properties and optimal convergence rate of the error estimator; discuss computational considerations; and, finally, present corroborating numerical results.
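
A minimal sketch of the two-stage idea, under the simplifying assumption of a plain Rayleigh-Ritz projection and without the rigorous error bounds that are the paper's actual contribution: off-line, full eigenproblems are solved at a few sample parameter values to assemble a reduced basis; on-line, only a tiny projected eigenproblem is solved for each new parameter. The parameterized matrix is a made-up example.

```python
import numpy as np
from scipy.linalg import eigh

def A(mu, n=200):
    """Made-up symmetric parameterized operator: tridiagonal plus mu times a diagonal."""
    main = 2.0 * np.ones(n) + mu * np.linspace(0, 1, n)
    off = -np.ones(n - 1)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Off-line (pre-processing) stage: smallest eigenvectors at prescribed parameter points.
train_mus = [0.0, 0.5, 1.0, 1.5, 2.0]
snapshots = [eigh(A(mu))[1][:, 0] for mu in train_mus]     # ground-state eigenvectors
Z, _ = np.linalg.qr(np.array(snapshots).T)                 # orthonormal reduced basis

# On-line (real-time) stage: for a new mu, solve only a 5 x 5 projected eigenproblem.
mu_new = 0.73
lam_rb = eigh(Z.T @ A(mu_new) @ Z, eigvals_only=True)[0]

lam_exact = eigh(A(mu_new), eigvals_only=True)[0]
print("reduced-basis eigenvalue:", lam_rb, " exact:", lam_exact)
```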

Proceedings ArticleDOI
01 Aug 2000
TL;DR: A generic knowledge-based framework for effective data cleaning that implements existing cleaning strategies and more, together with a new method to compute transitive closure under uncertainty which handles the merging of groups of inexact duplicate records.
Abstract: Data cleaning methods work on the basis of computing the degree of similarity between nearby records in a sorted database. High recall is achieved by accepting records with low degrees of similarity as duplicates, at the cost of lower precision. High precision is achieved analogously at the cost of lower recall. This is the recall-precision dilemma. In this paper, we propose a generic knowledge-based framework for effective data cleaning that implements existing cleaning strategies and more. We develop a new method to compute transitive closure under uncertainty which handles the merging of groups of inexact duplicate records. Experimental results show that this framework can identify duplicates and anomalies with high recall and precision.
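
A hedged sketch of the basic mechanism being extended (the paper's contribution is doing this under uncertainty): pairwise duplicate decisions are propagated by transitive closure, here with a simple union-find over records whose string similarity exceeds a threshold; the records and similarity function are toy choices.

```python
from difflib import SequenceMatcher

records = ["Jon Smith, 12 Oak St", "John Smith, 12 Oak Street",
           "J. Smith, 12 Oak St.", "Jane Doe, 5 Elm Ave"]

def similar(a: str, b: str, threshold: float = 0.75) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Union-find: merging pairs judged to be duplicates yields the transitive closure,
# so two records never directly compared can still end up in the same group.
parent = list(range(len(records)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

for i in range(len(records)):
    for j in range(i + 1, len(records)):
        if similar(records[i], records[j]):
            union(i, j)

groups = {}
for i, rec in enumerate(records):
    groups.setdefault(find(i), []).append(rec)
print(list(groups.values()))
```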

Proceedings ArticleDOI
24 Jul 2000
TL;DR: This work proposes to solve a text categorization task using a new metric between documents, based on a priori semantic knowledge about words, that can be incorporated into the definition of radial basis kernels of Support Vector Machines or directly used in a K-nearest neighbors algorithm.
Abstract: We propose to solve a text categorization task using a new metric between documents, based on a priori semantic knowledge about words. This metric can be incorporated into the definition of radial basis kernels of Support Vector Machines or directly used in a K-nearest neighbors algorithm. Both SVM and KNN are tested and compared on the 20-newsgroups database. Support Vector Machines provide the best accuracy on test data.
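
Not the paper's exact metric, but a hedged sketch of the general pattern: documents are bag-of-words vectors, a word-by-word semantic similarity matrix S deforms the distance between them, and the resulting radial basis kernel is handed to an SVM as a precomputed Gram matrix. The tiny corpus, labels, and S are invented.

```python
import numpy as np
from sklearn.svm import SVC

# Toy bag-of-words counts (documents x vocabulary) and class labels.
X = np.array([[2, 1, 0, 0],
              [1, 2, 0, 0],
              [0, 0, 2, 1],
              [0, 0, 1, 2]], dtype=float)
y = np.array([0, 0, 1, 1])

# Invented semantic similarity between vocabulary words (symmetric, positive definite);
# in the paper, a priori semantic knowledge about words plays the analogous role.
S = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.6, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.7],
              [0.0, 0.0, 0.7, 1.0]])

def semantic_rbf_gram(A, B, gamma=0.1):
    """K(a, b) = exp(-gamma * (a-b)^T S (a-b)) for all pairs of rows of A and B."""
    K = np.empty((len(A), len(B)))
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            d = a - b
            K[i, j] = np.exp(-gamma * d @ S @ d)
    return K

clf = SVC(kernel="precomputed").fit(semantic_rbf_gram(X, X), y)
X_test = np.array([[1.0, 1.0, 0.0, 0.0]])
print(clf.predict(semantic_rbf_gram(X_test, X)))
```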

Journal ArticleDOI
TL;DR: In this paper, a simple theoretically motivated model to extrapolate the correlation energy based on correlation-consistent polarized X-tuple basis sets is suggested, which has the form EXcor = E∞cor[1 + A3 X−3(1 + A4 X−1)], where EXcor is the energy for the X-tuple basis set, E∞cor and A3 are parameters to be determined from a set (Xmin−Xmax) of correlation-consistent basis sets at a given level of theory, and A4 is a function of A3.
Abstract: A simple theoretically motivated model to extrapolate the correlation energy based on correlation-consistent polarized X-tuple basis sets is suggested. It has the form EXcor=E∞cor[1+A3 X−3(1+A4 X−1)], where EXcor is the energy for the X-tuple basis set, E∞cor and A3 are parameters to be determined from a set (Xmin−Xmax) of correlation consistent basis sets at a given level of theory, and A4 is a function of A3. Even for the simple (2,3) extrapolation scheme, the method is shown to yield energies for 33 test data sets that are more accurate than those obtained from pure correlation consistent sextuple-zeta basis sets at a much lower computational cost. Other extrapolation schemes have also been investigated, including a simple one-parameter rule EXcor=E∞cor(1–2.4X−3).
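
A small sketch of the simple one-parameter rule quoted at the end of the abstract, EXcor = E∞cor(1 − 2.4 X−3): each cardinal number X yields an estimate of E∞cor, and several estimates can be averaged. The correlation energies below are invented values in hartree, not data from the paper.

```python
def extrapolate_one_parameter(energies):
    """Apply E_X = E_inf * (1 - 2.4 * X**-3) to each (X, E_X) pair and average.

    energies: dict mapping the cardinal number X (2 = DZ, 3 = TZ, ...) to the
              correlation energy obtained with that correlation-consistent basis.
    """
    estimates = [E_X / (1.0 - 2.4 * X ** -3) for X, E_X in energies.items()]
    return sum(estimates) / len(estimates)

# Invented correlation energies (hartree) for cc-pVDZ and cc-pVTZ calculations.
print(extrapolate_one_parameter({2: -0.210, 3: -0.255}))
```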

Journal ArticleDOI
TL;DR: This work explores the use of parameterized motion models that represent much more varied and complex motions, and shows how the model coefficients can be used to detect and recognize specific motions such as occlusion boundaries and facial expressions.
Abstract: Linear parameterized models of optical flow, particularly affine models, have become widespread in image motion analysis. The linear model coefficients are straightforward to estimate, and they provide reliable estimates of the optical flow of smooth surfaces. Here we explore the use of parameterized motion models that represent much more varied and complex motions. Our goals are threefold: to construct linear bases for complex motion phenomena; to estimate the coefficients of these linear models; and to recognize or classify image motions from the estimated coefficients. We consider two broad classes of motions: (i) generic “motion features” such as motion discontinuities and moving bars, and (ii) non-rigid, object-specific motions such as the motion of human mouths. For motion features we construct a basis of steerable flow fields that approximate the motion features. For object-specific motions we construct basis flow fields from example motions using principal component analysis. In both cases, the model coefficients can be estimated directly from spatiotemporal image derivatives with a robust, multi-resolution scheme. Finally, we show how these model coefficients can be used to detect and recognize specific motions such as occlusion boundaries and facial expressions.
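
Illustrative only (not the authors' code): basis flow fields for object-specific motion are obtained by principal component analysis of example flow fields, and a new flow is summarized by the coefficients of its projection onto that basis. The example flows here are synthetic translations rather than real mouth motions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example flow fields on a 16x16 grid; u and v components stacked into one vector.
h = w = 16
def flow_vector(angle):
    u = np.cos(angle) * np.ones((h, w))
    v = np.sin(angle) * np.ones((h, w))
    return np.concatenate([u.ravel(), v.ravel()])

examples = np.array([flow_vector(a) + 0.05 * rng.standard_normal(2 * h * w)
                     for a in np.linspace(0, np.pi, 20)])

# PCA via SVD of the mean-removed examples: rows of Vt are basis flow fields.
mean_flow = examples.mean(axis=0)
_, _, Vt = np.linalg.svd(examples - mean_flow, full_matrices=False)
basis = Vt[:3]                                   # keep the leading basis flow fields

# Coefficients of a new flow = least-squares projection onto the (orthonormal) basis;
# in the paper they are estimated directly from spatiotemporal image derivatives.
new_flow = flow_vector(0.4)
coeffs = basis @ (new_flow - mean_flow)
reconstruction = mean_flow + basis.T @ coeffs
print("coefficients:", np.round(coeffs, 3))
print("reconstruction error:", np.linalg.norm(reconstruction - new_flow))
```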

Journal ArticleDOI
TL;DR: New assumptions of non-linear dynamic systems (NDS) are used as a basis for explaining why order emerges in organizations, and for uncovering a three‐stage process model of complex adaptive systems change (CASC).
Abstract: Complexity researchers have identified four basic assumptions underlying non‐linear dynamic systems (NDS): the assumption that change is a constant; the assumption that emergent systems are not reducible to their parts; the assumption of mutual dependence; and the assumption that complex systems behave in non‐proportional ways. In this paper I use these new assumptions as a basis for explaining why order emerges in organizations, and for uncovering a three‐stage process model of complex adaptive systems change (CASC). The insights from these NDS models are revealed through examples from two entrepreneurial firms undergoing transformative shifts in their development. These assumptions of NDS and the model of CASC may therefore be useful for understanding order creation and self‐organizing processes in work groups, project ventures, and organizations.

Journal ArticleDOI
TL;DR: Independent clusters, as treated in classical linear factor analysis, provide a desirable basis for multidimensional item response models, yielding interpretable and useful results as mentioned in this paper, while establishing a pattern for the item parameter matrix that provides identifiability conditions and facilitates interpretation of the traits.
Abstract: Independent clusters, as treated in classical linear factor analysis, provide a desirable basis for multidimensional item response models, yielding interpretable and useful results. The independent-clusters basis serves to determine dimensionality, while establishing a pattern for the item parameter matrix that provides identifiability conditions and facilitates interpretation of the traits. It also provides a natural extension of known results on convergent/discriminant “construct” validity to binary items, allowing the quantification of the validity of test and subtest scores. The independent-clusters basis simplifies item/test response and information hypersurfaces, which cannot otherwise be easily studied except in the trivial case of two dimensions, and provides estimates of latent traits with uncorrelated measurement errors. In addition, the affine transformation needed for the informative analysis of the causes of differential item functioning is simplified using the independent-clusters basis. Thes...

Journal ArticleDOI
TL;DR: In this article, the problem of choosing appropriate atomic orbital basis sets for ab initio calculations on dipole-bound anions has been examined, and the issue of designing and centering the extra diffuse basis functions for the excess electron has also been studied.
Abstract: The problem of choosing appropriate atomic orbital basis sets for ab initio calculations on dipole-bound anions has been examined. Such basis sets are usually constructed as a combination of a standard valence-type basis, designed to describe the neutral molecular core, and an extra diffuse set designed to describe the charge distribution of the extra electron. As part of the present work, it has been found that the most commonly used valence-type basis sets (e.g., 6-31++G or 6-311+G), when so augmented, are prone to unpredictably under- or overestimating electron binding energies for dipole-bound anions, whereas, when the aug-cc-pVDZ, aug-cc-pVTZ (or other medium-size polarized (MSP)) basis sets are so augmented, more reliable binding energies are obtained, especially when the electron binding energy is calculated at the CCSD(T) level of theory. The issue of designing and centering the extra diffuse basis functions for the excess electron has also been studied, and our findings are discussed here. © 2000 John Wiley & Sons, Inc. Int J Quantum Chem 80: 1024-1038, 2000


Journal ArticleDOI
TL;DR: The present exposition overviews new and recent advances describing a standardized formal theory towards the evolution, classification, characterization and generic design of time discretized operators for transient/dynamic applications and explains a wide variety of generalized integration operators in time.
Abstract: Via new perspectives, for the time dimension, the present exposition overviews new and recent advances describing a standardized formal theory towards the evolution, classification, characterization and generic design of time discretized operators for transient/dynamic applications. Of fundamental importance in the present exposition are the developments encompassing the evolution of time discretized operators leading to the theoretical design of computational algorithms and their subsequent classification and characterization. And, the overall developments are new and significantly different from the way traditional modal type and a wide variety of step-by-step time marching approaches which we are mostly familiar with have been developed and described in the research literature and in standard text books over the years. The theoretical ideas and basis towards the evolution of a generalized methodology and formulations emanate under the umbrella and framework and are explained via a generalized time weighted philosophy encompassing the semi-discretized equations pertinent to transient/dynamic systems. It is herein hypothesized that integral operators and the associated representations and a wide variety of the so-called integration operators pertain to and emanate from the same family, with the burden which is being carried by a virtual field or weighted time field specifically introduced for the time discretization is strictly enacted in a mathematically consistent manner so as to first permit obtaining the adjoint operator of the original semi-discretized equation system. Subsequently, the selection or burden carried by the virtual or weighted time fields originally introduced to facilitate the time discretization process determines the formal development and outcome of “exact integral operators”, “approximate integral operators”, including providing avenues leading to the design of new computational algorithms which have not been exploited and/or explored to-date and the recovery of most of the existing algorithms, and also bridging the relationships systematically leading to the evolution of a wide variety of “integration operators”. Thus, the overall developments not only serve as a prelude towards the formal developments for “exact integral operators”, but also demonstrate that the resulting “approximate integral operators” and a wide variety of “new and existing integration operators and known methods” are simply subsets of the generalizations of a standardizedW p -Family, and emanate from the principles presented herein. The developments first leading to integral operators in time, and the resulting consequences then systematically leading to not only providing new avenues but additionally also explaining a wide variety of generalized integration operators in time of which single-step time integration operators and various widely recognized algorithms which we are familiar are simply subsets, the associated multi-step time integration operators, and a class of finite element in time integration operators, and their relationships are particularly addressed. The theoretical design developments encompass and explain a variety of time discretized operators, the recovery of various original methods of algorithmic development, and the development of new computational algorithms which have not been exploited and/or explored to-date, and furthermore, permit time discretized operators to be uniquely classified and characterized by algorithmic markers. 
The resulting and so-called discrete numerically assigned [DNA] algorithmic markers not only serve as a prelude towards providing a standardized formal theory of development of time discretized operators and forum for selecting and identifying time discretized operators, but also permit lucid communication when referring to various time discretized operators. That which constitutes characterization of time discretized operators are the so-called DNA algorithmic markers which essentially comprise of both: (i) the weighted time fields introduced for enacting the time discretization process, and (ii) the corresponding conditions (if any) these weighted time fields impose (dictate) upon the approximations for the dependent field variables and updates in the theoretical development of time discretized operators. As such, recent advances encompassing the theoretical design and development of computational algorithms for transient/dynamic analysis of time dependent phenomenon encountered in engineering, mathematical and physical sciences are overviewed.

Journal ArticleDOI
TL;DR: This work considers graphical representation of DNA of beta-globins of several species, including human, on the basis of the approach of A. Nandy in which nucleic bases are associated with a walk over integral points of a Cartesian x, y-coordinate system.
Abstract: We consider numerical characterization of graphical representations of DNA primary sequences. In particular we consider graphical representation of DNA of β-globins of several species, including human, on the basis of the approach of A. Nandy in which nucleic bases are associated with a walk over integral points of a Cartesian x, y-coordinate system. With a so-generated graphical representation of DNA, we associate a distance/distance matrix, the elements of which are given by the quotient of the Euclidean and the graph theoretical distances, that is, through the space and through the bond distances for pairs of bases of graphical representation of DNA. We use eigenvalues of so-constructed matrices to characterize individual DNA sequences. The eigenvalues are used to construct numerical sequences, which are subsequently used for similarity/dissimilarity analysis. The results of such analysis have been compared and combined with similarity tables based on the frequency of occurrence of pairs of bases.
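
A hedged sketch of the construction described: the sequence is turned into a 2-D walk (the axis assignment below is one common convention for Nandy-style plots and may differ from the original), the distance/distance matrix is formed as the ratio of through-space to through-bond distances, and its eigenvalues serve as sequence descriptors. The short sequence is invented, not a real β-globin fragment.

```python
import numpy as np

# Assumed axis assignment for the 2-D walk: each base moves one unit along +/- x or +/- y.
STEP = {"A": (-1, 0), "G": (1, 0), "C": (0, 1), "T": (0, -1)}

def walk(seq):
    pts, x, y = [(0, 0)], 0, 0
    for base in seq:
        dx, dy = STEP[base]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return np.array(pts, dtype=float)

def dd_matrix(points):
    """Distance/distance matrix: through-space (Euclidean) distance divided by
    through-bond (path) distance along the walk, for every pair of points."""
    n = len(points)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            euclid = np.linalg.norm(points[i] - points[j])
            M[i, j] = M[j, i] = euclid / (j - i)      # path distance = j - i steps
    return M

seq = "ATGGTGCACCTGACT"                               # invented fragment
eigenvalues = np.linalg.eigvalsh(dd_matrix(walk(seq)))
print("leading eigenvalues:", np.round(eigenvalues[::-1][:3], 3))
```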

Journal ArticleDOI
TL;DR: In this article, the authors studied the topological zero mode sector of type II strings on a K-ahler manifold with boundaries and constructed two finite bases, in a sense bosonic and fermionic, that generate the topology sector of the Hilbert space with boundaries.
Abstract: We study the topological zero mode sector of type II strings on a K\"ahler manifold $X$ in the presence of boundaries. We construct two finite bases, in a sense bosonic and fermionic, that generate the topological sector of the Hilbert space with boundaries. The fermionic basis localizes on compact submanifolds in $X$. A variation of the FI terms interpolates between the description of these ground states in terms of the ring of chiral fields at the boundary at small volume and helices of exceptional sheaves at large volume, respectively. The identification of the bosonic/fermionic basis with the dual bases for the non-compact/compact K-theory group on $X$ gives a natural explanation of the McKay correspondence in terms of a linear sigma model and suggests a simple generalization of McKay to singular resolutions. The construction provides also a very effective way to describe D-brane states on generic, compact Calabi--Yau manifolds and allows to recover detailed information on the moduli space, such as monodromies and analytic continuation matrices, from the group theoretical data of a simple orbifold.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed computational strategies and algorithms to perform multi-reference Moller-Plesset (MR-MP2) calculations efficiently for large molecules using improved (average) virtual orbitals.
Abstract: We propose computational strategies and algorithms to perform multi-reference Moller–Plesset (MR-MP2) calculations efficiently for large molecules. As zeroth-order reference we employ restricted configuration interaction wave functions expressed in terms of an active space of Hartree–Fock one-particle functions (RAS-CI). To accelerate the convergence of the perturbation expansion and to keep the zeroth-order spaces as small as possible (i.e. Dim<1000) we use improved (average) virtual orbitals. The length of the first-order space (single and double excitations with respect to all reference configurations) is reduced by selecting the most important configurations from the full space based on the magnitude of their H0 diagonal matrix element. The two-electron integrals in the MO basis are calculated semi-directly with the resolution of the identity (RI) method which avoids computationally demanding 4-index transformations. The errors introduced by the approximations can systematically be reduced and are found to be insignificant in applications to chemical problems. As examples we present MR-MP2 results for excitation and reaction energies of molecules for which single-reference perturbation theory is not adequate. With our approach, investigation of systems as large as porphin or C60 are possible on low-cost personal computers.
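
For readers unfamiliar with the RI step mentioned above, the standard resolution-of-the-identity factorization of the four-index two-electron integrals over an auxiliary basis {P} takes the familiar form below (shown generically; the authors' exact notation and metric may differ):

```latex
(pq|rs) \;\approx\; \sum_{P,Q} (pq|P)\,\left[\mathbf{J}^{-1}\right]_{PQ}\,(Q|rs),
\qquad J_{PQ} = (P|Q),
```

which replaces the costly four-index transformation by cheaper three-index quantities.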

01 Mar 2000
TL;DR: The authors summarize the research on test changes to provide an empirical basis for defining accommodations and analyze this research from three perspectives: tests are changed in specific ways in the manner that they are given or taken, and the change does not alter the construct of what is being measured.
Abstract: This document summarizes the research on test changes to provide an empirical basis for defining accommodations. We analyze this research from three perspectives:
• Tests are changed in specific ways in the manner that they are given or taken.
• The change does not alter the construct of what is being measured.
• The changes are or can be referenced to individual need and differential benefit, not overall

Journal ArticleDOI
TL;DR: The second Dirac operator arising in a class of Riemannian spaces is shown to be always equivalent to a standard Dirac operator; as an example, all Riemannian spaces with a five-dimensional motion group are studied and all metrics for which the second Dirac operator exists are found.
Abstract: We describe a Riemannian space class where the second Dirac operator arises and prove that the operator is always equivalent to a standard Dirac one. The particle state in this gravitational field is degenerate to some extent and we introduce an additional value in order to describe a particle state completely. Some supersymmetry constructions are also discussed. As an example we study all Riemannian spaces with a five-dimensional motion group and find all metrics for which the second Dirac operator exists. On the basis of our discussed examples we hypothesize about the number of second Dirac operators in Riemannian space.

Patent
Michael E. Tipping
01 Sep 2000
TL;DR: The relevance vector machine (RVM) as mentioned in this paper is a probabilistic basis model with a Bayesian treatment, where a prior is introduced over the weights governed by a set of hyperparameters.
Abstract: A relevance vector machine (RVM) for data modeling is disclosed. The RVM is a probabilistic basis model. Sparsity is achieved through a Bayesian treatment, where a prior is introduced over the weights governed by a set of hyperparameters. As compared to a Support Vector Machine (SVM), the non-zero weights in the RVM represent more prototypical examples of classes, which are termed relevance vectors. The trained RVM utilizes many fewer basis functions than the corresponding SVM, and typically superior test performance. No additional validation of parameters (such as C) is necessary to specify the model, except those associated with the basis.
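
A compact sketch based on the sparse Bayesian learning updates commonly associated with the RVM (not necessarily the exact procedure claimed in the patent): one hyperparameter per weight is re-estimated iteratively, and most weights are driven to zero, leaving a few relevance vectors. The data, kernel width, and fixed noise precision are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data and a Gaussian-kernel basis (one basis function per sample).
x = np.linspace(-5, 5, 60)
t = np.sinc(x) + 0.05 * rng.standard_normal(x.size)
Phi = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 1.0 ** 2))

alpha = np.ones(Phi.shape[1])     # one precision hyperparameter per weight
beta = 100.0                      # noise precision (kept fixed here for brevity)

for _ in range(200):
    # Posterior over the weights given the current hyperparameters.
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ t
    # Re-estimate each alpha_i; a very large alpha_i prunes basis function i.
    gamma = np.clip(1.0 - alpha * np.diag(Sigma), 1e-12, 1.0)
    alpha = np.minimum(gamma / (mu ** 2 + 1e-12), 1e12)

relevant = alpha < 1e6            # surviving basis functions ("relevance vectors")
print("relevance vectors kept:", int(relevant.sum()), "of", alpha.size)
```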