
Showing papers on "Basis (linear algebra) published in 2000"


Book
01 Jan 2000
Abstract: Contents: Second Quantization; Spin in Second Quantization; Orbital Rotations; Exact and Approximate Wave Functions; The Standard Models; Atomic Basis Functions; Short-Range Interactions and Orbital Expansions; Gaussian Basis Sets; Molecular Integral Evaluation; Hartree-Fock Theory; Configuration-Interaction Theory; Multiconfigurational Self-Consistent Field Theory; Coupled-Cluster Theory; Perturbation Theory; Calibration of the Electronic-Structure Models; List of Acronyms; Index.

1,740 citations


Journal ArticleDOI
TL;DR: It is shown that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency and provide a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
Abstract: In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as a probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
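As a minimal illustration of working with an overcomplete basis (a greedy matching-pursuit sketch, not the paper's probabilistic learning algorithm; the random dictionary below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Overcomplete dictionary: 8 unit-norm basis vectors in a 4-dimensional space.
D = rng.normal(size=(4, 8))
D /= np.linalg.norm(D, axis=0)

def matching_pursuit(x, D, n_iters=20):
    """Greedily approximate x as a sparse combination of dictionary columns."""
    coeffs = np.zeros(D.shape[1])
    residual = x.astype(float).copy()
    for _ in range(n_iters):
        # Pick the atom most correlated with the current residual.
        dots = D.T @ residual
        k = np.argmax(np.abs(dots))
        coeffs[k] += dots[k]
        residual -= dots[k] * D[:, k]
    return coeffs, residual

x = rng.normal(size=4)
coeffs, residual = matching_pursuit(x, D)
print(np.linalg.norm(residual))  # residual shrinks as iterations proceed
```

By construction `D @ coeffs + residual` always equals `x`, so the coefficient vector is one (non-unique) representation of the input in the overcomplete basis.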

1,267 citations


Journal ArticleDOI
TL;DR: In this paper, a multireference second-order perturbation theory (MRPT2) has been developed which allows the use of reference wave functions with large active spaces and arbitrary configuration selection.
Abstract: A multireference second-order perturbation theory (MRPT2) has been developed which allows the use of reference wave functions with large active spaces and arbitrary configuration selection. Internally contracted configurations are used as a basis for all configuration subspaces of the first-order wave function for which the overlap matrix depends only on the second-order density matrix of the reference function. Some other subspaces which would require the third- or fourth-order density matrices are left uncontracted. This removes bottlenecks of the complete active space second-order perturbation theory (CASPT2) caused by the need to construct and diagonalize large overlap matrices. Preliminary applications of the new method for 1,2-dihydronaphthalene (DHN) and free base porphin are presented in which the effect of applying occupancy restrictions in the reference wave function (restricted active space second-order perturbation theory, RASPT2) and reference configuration selection (general MRPT2) on electro...

766 citations


Journal ArticleDOI
TL;DR: A recently developed general theory for basis construction is presented that generalizes the classical Laguerre theory, exploiting in particular the property that basis-function models are linearly parametrized.

336 citations


Journal ArticleDOI
TL;DR: In this article, a Kohn-Sham density-functional method with Gaussian orbitals is presented for systems with periodic boundary conditions; its treatment of the Coulomb problem, based on the direct-space fast multipole method, achieves not only linear scaling of computational time with system size but also very high accuracy in all infinite summations.
Abstract: We report methodological and computational details of our Kohn-Sham density-functional method with Gaussian orbitals for systems with periodic boundary conditions. Our approach for the Coulomb problem is based on the direct space fast multipole method, which achieves not only linear scaling of computational time with system size but also very high accuracy in all infinite summations. The latter is pivotal for avoiding numerical instabilities that have previously plagued calculations with large bases, especially those containing diffuse functions. Our program also makes extensive use of other linear-scaling techniques recently developed for large clusters. Using these theoretical tools, we have implemented computational programs for energy and analytic energy gradients (forces) that make it possible to optimize geometries of periodic systems with great efficiency and accuracy. Vibrational frequencies are then accurately obtained from finite differences of forces. We demonstrate the capabilities of our methods with benchmark calculations on polyacetylene, polyphenylenevinylene, and a (5,0) carbon nanotube, employing basis sets of double zeta plus polarization quality, in conjunction with the generalized gradient approximation and kinetic-energy density-dependent functionals. The largest calculation reported in this paper contains 244 atoms and 1344 contracted Gaussians in the unit cell.

331 citations


Journal ArticleDOI
TL;DR: A model for determining the weights of interacting criteria is presented, built on the knowledge of a partial ranking over a reference set of alternatives (prototypes), a partial ranking over the set of criteria, and a partial ranking over the set of interactions between pairs of criteria.

286 citations


Proceedings ArticleDOI
Philip N. Klein1
01 Feb 2000
TL;DR: The result can be viewed as a generalization of the Furst-Kannan algorithm: for any given k, there is an algorithm that runs in n^(k^2+O(1)) time and finds the closest lattice vector to v whenever the distance from v to the lattice is at most k times the length of the shortest Gram-Schmidt vector.
Abstract: 1 Introduction. The closest lattice vector problem, also called the nearest lattice point problem, is NP-hard [2], and no polynomial-time approximation algorithm is known with a performance ratio better than exponential. It seems worthwhile to identify circumstances in which the problem can be solved optimally. As an example of where this arises, Furst and Kannan [3] give a distribution of n-dimensional instances of Subset-Sum problems for which there is an algorithm that runs in polynomial time almost always. One ingredient in their result is an algorithm that, given a vector v and a basis of a lattice, finds the vector in the lattice that is closest to v assuming the following condition holds: the distance from v to the lattice is less than half the length of the shortest Gram-Schmidt vector. Indeed, they show that, assuming this condition holds, there is a unique vector closest to v. Here the Gram-Schmidt vectors corresponding to a basis b_1, ..., b_n are the vectors b_1*, ..., b_n*, where b_i* is the projection of b_i orthogonal to the vector space generated by b_1, ..., b_{i-1}. These are the vectors found by the Gram-Schmidt algorithm for orthogonalization. (Often the Gram-Schmidt algorithm is used to find an orthonormal basis, i.e., one whose output vectors have norm 1; here we assume no such normalization is performed.) Our result can be viewed as a generalization of this algorithm: there is an algorithm that, for any given k, runs in n^(k^2+O(1)) time and finds the closest vector to v if the following condition holds: the distance from v to the lattice is at most k times the length of the shortest Gram-Schmidt vector. (Research supported by NSF Grant CCR-9700146.)
1.1 Related results. Some related results help to put our result in perspective. For comparison, Kannan [5] gives a closest-vector algorithm that runs in time poly(n) n^n, where n is the dimension of the lattice; thus the algorithm presented here is only useful when k = o(sqrt(n)). On the other hand, as mentioned above, Furst and Kannan gave an algorithm that runs in polynomial time and finds the closest vector when k = 1/2. For any lattice basis and any vector v, the distance of v from the lattice is no more than half the sum of the lengths of the Gram-Schmidt vectors; furthermore, this bound is achievable. Thus our algorithm is useful only when the vector v is unusually close to the lattice. How small can the smallest Gram-Schmidt vector be? One can choose a lattice and a basis for it so as to make the smallest Gram-Schmidt vector arbitrarily small in comparison to the shortest vector in the lattice. On the other hand, Lagarias, Lenstra, and Schnorr [6] have shown that for every lattice there exists a basis (the Korkin-Zolotarev basis) whose smallest Gram-Schmidt vector is at least 3/(2n) times the length of the shortest vector in the lattice. Even in this case, for our algorithm to be useful, the distance between the input vector and the lattice must be significantly less than the length of the shortest vector.
2 Notation. We use the following notation. Vectors are signified by boldface. For vectors b_1, ..., b_n, we denote by V(b_1, ..., b_n) the vector space they generate, and by L(b_1, ..., b_n) the lattice they generate. The Gram-Schmidt vectors corresponding to b_1, ..., b_n are denoted b_1*, ..., b_n*. By definition, b_1* ...
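The unnormalized Gram-Schmidt vectors and the Furst-Kannan half-length condition can be sketched as follows (a plain NumPy illustration on an assumed toy lattice, not the paper's algorithm):

```python
import numpy as np

def gram_schmidt(B):
    """Unnormalized Gram-Schmidt: b_i* is b_i minus its projection onto
    the span of b_1, ..., b_{i-1}.  Rows of B are the basis vectors."""
    B = np.asarray(B, dtype=float)
    Bstar = np.zeros_like(B)
    for i in range(len(B)):
        v = B[i].copy()
        for j in range(i):
            v -= (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
        Bstar[i] = v
    return Bstar

# Basis of a toy 2-dimensional lattice.
B = np.array([[2.0, 0.0],
              [1.0, 2.0]])
Bstar = gram_schmidt(B)

# Furst-Kannan condition: v has a unique closest lattice vector whenever
# dist(v, lattice) < 0.5 * min_i ||b_i*||.
threshold = 0.5 * min(np.linalg.norm(b) for b in Bstar)
print(Bstar, threshold)
```

Note that no normalization is performed, matching the convention in the abstract: the b_i* are mutually orthogonal but generally not unit vectors.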

208 citations


Journal ArticleDOI
TL;DR: The MOESP class of identification algorithms is made recursive on the basis of various updating schemes for subspace tracking.

207 citations


Journal ArticleDOI
TL;DR: In this paper, a more extensive interpretation of the equivalent workload formulation of a Brownian network model is developed; a linear program called the static planning problem is introduced to articulate the notion of heavy traffic for a general open network, and the dual of that linear program is used to define a canonical choice of the basis matrix $M$.
Abstract: A recent paper by Harrison and Van Mieghem explained in general mathematical terms how one forms an “equivalent workload formulation” of a Brownian network model. Denoting by $Z(t)$ the state vector of the original Brownian network, one has a lower dimensional state descriptor $W(t) = MZ(t)$ in the equivalent workload formulation, where $M$ can be chosen as any basis matrix for a particular linear space. This paper considers Brownian models for a very general class of open processing networks, and in that context develops a more extensive interpretation of the equivalent workload formulation, thus extending earlier work by Laws on alternate routing problems. A linear program called the static planning problem is introduced to articulate the notion of “heavy traffic ” for a general open network, and the dual of that linear program is used to define a canonical choice of the basis matrix $M$. To be specific, rows of the canonical $M$ are alternative basic optimal solutions of the dual linear program. If the network data satisfy a natural monotonicity condition, the canonical matrix $M$ is shown to be nonnegative, and another natural condition is identified which insures that $M$ admits a factorization related to the notion of resource pooling.

205 citations


Journal ArticleDOI
TL;DR: It is shown how the restriction to a low-dimensional basis as well as improper treatment of boundary conditions can affect the range of validity of these models.
Abstract: In this paper some implications of the technique of projecting the Navier–Stokes equations onto low-dimensional bases of eigenfunctions are explored. Such low-dimensional bases are typically obtained by truncating a particularly well-suited complete set of eigenfunctions at very low orders, arguing that a small number of such eigenmodes already captures a large part of the dynamics of the system. In addition, in the treatment of inhomogeneous spatial directions of a flow, eigenfunctions that do not satisfy the boundary conditions are often used, and in the Galerkin projection the corresponding boundary conditions are ignored. We show how the restriction to a low-dimensional basis as well as improper treatment of boundary conditions can affect the range of validity of these models. As particular examples of eigenfunction bases, systems of Karhunen–Loeve eigenfunctions are discussed in more detail, although the results presented are valid for any basis.
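A minimal sketch of extracting a Karhunen-Loeve (POD) basis from flow snapshots via the SVD, with synthetic rank-2 data standing in for Navier-Stokes solutions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "flow" snapshots: superposition of two spatial modes with
# time-varying amplitudes, plus small measurement noise.
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 200)
snapshots = (np.outer(np.sin(x), np.sin(t))
             + 0.3 * np.outer(np.sin(2 * x), np.cos(3 * t))
             + 0.01 * rng.normal(size=(64, 200)))

# Karhunen-Loeve (POD) modes: left singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)

# A 2-mode truncation captures nearly all of the variance here.
print(energy[:2].sum())
```

This shows the sense in which "a small number of eigenmodes captures a large part of the dynamics": the underlying signal is rank 2, so the truncated basis misses only the noise floor.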

199 citations


Journal ArticleDOI
TL;DR: It is shown that a sequence of generalized eigenfunctions of an Euler--Bernoulli beam equation with a tip mass under boundary linear feedback control forms a Riesz basis for the state Hilbert space.
Abstract: Using an abstract condition of Riesz basis generation of discrete operators in the Hilbert spaces, we show, in this paper, that a sequence of generalized eigenfunctions of an Euler--Bernoulli beam equation with a tip mass under boundary linear feedback control forms a Riesz basis for the state Hilbert space. In the meanwhile, an asymptotic expression of eigenvalues and the exponential stability are readily obtained. The main results of [ SIAM J. Control Optim., 36 (1998), pp. 1962--1986] are concluded as a special case, and the additional conditions imposed there are removed.

Journal ArticleDOI
TL;DR: In this article, the authors present the main principles that are the basis of space syntax, in addition to methodological perspectives for a closer integration with GIS, which should be of use for many GIS applications, such as urban planning and design.

Journal ArticleDOI
TL;DR: The procedure is introduced; the asymptotic bounding properties and optimal convergence rate of the error estimator are proved; computational considerations are discussed; and, finally, corroborating numerical results are presented.
Abstract: We propose a new reduced-basis output bound method for the symmetric eigenvalue problem. The numerical procedure consists of two stages: the pre-processing stage, in which the reduced basis and associated functions are computed—“off-line”—at a prescribed set of points in parameter space; and the real-time stage, in which the approximate output of interest and corresponding rigorous error bounds are computed—“on-line”—for any new parameter value of interest. The real time calculation is very inexpensive as it requires only the solution or evaluation of very small systems. We introduce the procedure; prove the asymptotic bounding properties and optimal convergence rate of the error estimator; discuss computational considerations; and, finally, present corroborating numerical results.

Proceedings ArticleDOI
24 Jul 2000
TL;DR: This work proposes to solve a text categorization task using a new metric between documents, based on a priori semantic knowledge about words, that can be incorporated into the definition of radial basis kernels of Support Vector Machines or directly used in a K-nearest neighbors algorithm.
Abstract: We propose to solve a text categorization task using a new metric between documents, based on a priori semantic knowledge about words. This metric can be incorporated into the definition of radial basis kernels of Support Vector Machines or directly used in a K-nearest neighbors algorithm. Both SVM and KNN are tested and compared on the 20-newsgroups database. Support Vector Machines provide the best accuracy on test data.
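A hedged sketch of the general idea of a document metric built from a priori word similarities (the toy similarity matrix below is assumed for illustration and is not the paper's construction):

```python
import numpy as np

# Toy vocabulary: "car", "automobile", "physics".  S encodes assumed a
# priori semantic similarity between words; it must be positive
# semidefinite for the distance below to be a valid metric.
S = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

def semantic_distance(x, y, S):
    """Mahalanobis-style distance between bag-of-words vectors using S."""
    d = x - y
    return float(np.sqrt(d @ S @ d))

doc_car  = np.array([1.0, 0.0, 0.0])   # mentions "car"
doc_auto = np.array([0.0, 1.0, 0.0])   # mentions "automobile"
doc_phys = np.array([0.0, 0.0, 1.0])   # mentions "physics"

# With semantic knowledge, the "car" and "automobile" documents are close
# even though their bag-of-words vectors share no words.
d_syn = semantic_distance(doc_car, doc_auto, S)
d_diff = semantic_distance(doc_car, doc_phys, S)
print(d_syn, d_diff)
```

Such a distance can be plugged directly into a K-nearest-neighbors classifier, or turned into a radial basis kernel exp(-d(x, y)^2 / (2 * sigma^2)) for an SVM, which is the spirit of the approach described above.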

Journal ArticleDOI
TL;DR: In this paper, a simple theoretically motivated model to extrapolate the correlation energy based on correlation-consistent polarized X-tuple basis sets is suggested. It has the form E_X^cor = E_∞^cor [1 + A_3 X^-3 (1 + A_4 X^-1)], where E_X^cor is the energy for the X-tuple basis set, E_∞^cor and A_3 are determined from a set (X_min-X_max) of correlation-consistent basis sets, and A_4 is a function of A_3.
Abstract: A simple theoretically motivated model to extrapolate the correlation energy based on correlation-consistent polarized X-tuple basis sets is suggested. It has the form E_X^cor = E_∞^cor [1 + A_3 X^-3 (1 + A_4 X^-1)], where E_X^cor is the energy for the X-tuple basis set, E_∞^cor and A_3 are parameters to be determined from a set (X_min-X_max) of correlation-consistent basis sets at a given level of theory, and A_4 is a function of A_3. Even for the simple (2,3) extrapolation scheme, the method is shown to yield energies for 33 test data sets that are more accurate than those obtained from pure correlation-consistent sextuple-zeta basis sets at a much lower computational cost. Other extrapolation schemes have also been investigated, including a simple one-parameter rule E_X^cor = E_∞^cor (1 - 2.4 X^-3).
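The one-parameter rule quoted above can be sketched as follows (the energy value is illustrative, not taken from the paper):

```python
def cbs_one_param(E_X, X):
    """Invert the one-parameter rule E_X = E_inf * (1 - 2.4 * X**-3)
    to estimate the complete-basis-set limit from a single energy."""
    return E_X / (1.0 - 2.4 * X ** -3)

# Synthetic correlation energy generated exactly from the rule
# (the value -1.0 hartree is an assumed toy number).
E_inf_true = -1.0
E3 = E_inf_true * (1 - 2.4 * 3.0 ** -3)   # "triple-zeta" energy
print(cbs_one_param(E3, 3))
```

Because the synthetic energy follows the model exactly, the inversion recovers the assumed limit; on real data the rule is only an approximation to the true basis-set convergence.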

Journal ArticleDOI
TL;DR: This work explores the use of parameterized motion models that represent much more varied and complex motions, and shows how the model coefficients can be used to detect and recognize specific motions such as occlusion boundaries and facial expressions.
Abstract: Linear parameterized models of optical flow, particularly affine models, have become widespread in image motion analysis. The linear model coefficients are straightforward to estimate, and they provide reliable estimates of the optical flow of smooth surfaces. Here we explore the use of parameterized motion models that represent much more varied and complex motions. Our goals are threefold: to construct linear bases for complex motion phenomena; to estimate the coefficients of these linear models; and to recognize or classify image motions from the estimated coefficients. We consider two broad classes of motions: i) generic “motion features” such as motion discontinuities and moving bars; and ii) non-rigid, object-specific motions such as the motion of human mouths. For motion features we construct a basis of steerable flow fields that approximate the motion features. For object-specific motions we construct basis flow fields from example motions using principal component analysis. In both cases, the model coefficients can be estimated directly from spatiotemporal image derivatives with a robust, multi-resolution scheme. Finally, we show how these model coefficients can be used to detect and recognize specific motions such as occlusion boundaries and facial expressions.

Journal ArticleDOI
TL;DR: Independent clusters, as treated in classical linear factor analysis, provide a desirable basis for multidimensional item response models, yielding interpretable and useful results as mentioned in this paper, while establishing a pattern for the item parameter matrix that provides identifiability conditions and facilitates interpretation of the traits.
Abstract: Independent clusters, as treated in classical linear factor analysis, provide a desirable basis for multidimensional item response models, yielding interpretable and useful results. The independent-clusters basis serves to determine dimensionality, while establishing a pattern for the item parameter matrix that provides identifiability conditions and facilitates interpretation of the traits. It also provides a natural extension of known results on convergent/discriminant “construct” validity to binary items, allowing the quantification of the validity of test and subtest scores. The independent-clusters basis simplifies item/test response and information hypersurfaces, which cannot otherwise be easily studied except in the trivial case of two dimensions, and provides estimates of latent traits with uncorrelated measurement errors. In addition, the affine transformation needed for the informative analysis of the causes of differential item functioning is simplified using the independent-clusters basis. Thes...

Journal ArticleDOI
TL;DR: In this article, the problem of choosing appropriate atomic orbital basis sets for ab initio calculations on dipole-bound anions has been examined, and the issue of designing and centering the extra diffuse basis functions for the excess electron has also been studied.
Abstract: The problem of choosing appropriate atomic orbital basis sets for ab initio calculations on dipole-bound anions has been examined. Such basis sets are usually constructed as a combination of a standard valence-type basis, designed to describe the neutral molecular core, and an extra diffuse set designed to describe the charge distribution of the extra electron. As part of the present work, it has been found that the most commonly used valence-type basis sets (e.g., 6-31++G or 6-311+G), when so augmented, unpredictably under- or overestimate electron binding energies for dipole-bound anions. In contrast, when the aug-cc-pVDZ, aug-cc-pVTZ, or other medium-size polarized (MSP) basis sets are so augmented, more reliable binding energies are obtained, especially when the electron binding energy is calculated at the CCSD(T) level of theory. The issue of designing and centering the extra diffuse basis functions for the excess electron has also been studied, and our findings are discussed here. © 2000 John Wiley & Sons, Inc. Int J Quantum Chem 80: 1024-1038, 2000


Journal ArticleDOI
TL;DR: This work considers graphical representation of DNA of beta-globins of several species, including human, on the basis of the approach of A. Nandy in which nucleic bases are associated with a walk over integral points of a Cartesian x, y-coordinate system.
Abstract: We consider numerical characterization of graphical representations of DNA primary sequences. In particular we consider graphical representation of DNA of β-globins of several species, including human, on the basis of the approach of A. Nandy in which nucleic bases are associated with a walk over integral points of a Cartesian x, y-coordinate system. With a so-generated graphical representation of DNA, we associate a distance/distance matrix, the elements of which are given by the quotient of the Euclidean and the graph theoretical distances, that is, through the space and through the bond distances for pairs of bases of graphical representation of DNA. We use eigenvalues of so-constructed matrices to characterize individual DNA sequences. The eigenvalues are used to construct numerical sequences, which are subsequently used for similarity/dissimilarity analysis. The results of such analysis have been compared and combined with similarity tables based on the frequency of occurrence of pairs of bases.
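A minimal sketch of the walk-based representation and the distance/distance matrix (the axis assignment for the four bases is an assumed convention, not necessarily Nandy's exact one):

```python
import numpy as np

# Assumed Nandy-style 2D walk convention: A -> -x, G -> +x, C -> -y, T -> +y.
STEPS = {"A": (-1, 0), "G": (1, 0), "C": (0, -1), "T": (0, 1)}

def dna_walk(seq):
    """Cartesian walk points for a DNA sequence, starting at the origin."""
    pts = [(0, 0)]
    for base in seq:
        dx, dy = STEPS[base]
        pts.append((pts[-1][0] + dx, pts[-1][1] + dy))
    return np.array(pts, dtype=float)

def dd_matrix(pts):
    """Distance/distance matrix: Euclidean (through-space) distance divided
    by the through-bond (path) distance for each pair of walk points."""
    n = len(pts)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                M[i, j] = np.linalg.norm(pts[i] - pts[j]) / abs(i - j)
    return M

pts = dna_walk("ATGGC")          # toy sequence, not a real beta-globin
M = dd_matrix(pts)
eigs = np.sort(np.linalg.eigvalsh(M))[::-1]
print(eigs[0])                   # leading eigenvalue characterizes the walk
```

Each matrix entry is at most 1, since the straight-line distance between two walk points can never exceed the number of unit steps between them.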

Journal ArticleDOI
TL;DR: In this article, the authors studied the topological zero mode sector of type II strings on a Kähler manifold with boundaries and constructed two finite bases, in a sense bosonic and fermionic, that generate the topological sector of the Hilbert space with boundaries.
Abstract: We study the topological zero mode sector of type II strings on a Kähler manifold $X$ in the presence of boundaries. We construct two finite bases, in a sense bosonic and fermionic, that generate the topological sector of the Hilbert space with boundaries. The fermionic basis localizes on compact submanifolds in $X$. A variation of the FI terms interpolates between the description of these ground states in terms of the ring of chiral fields at the boundary at small volume and helices of exceptional sheaves at large volume, respectively. The identification of the bosonic/fermionic basis with the dual bases for the non-compact/compact K-theory group on $X$ gives a natural explanation of the McKay correspondence in terms of a linear sigma model and suggests a simple generalization of McKay to singular resolutions. The construction also provides a very effective way to describe D-brane states on generic, compact Calabi-Yau manifolds and allows one to recover detailed information on the moduli space, such as monodromies and analytic continuation matrices, from the group-theoretical data of a simple orbifold.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed computational strategies and algorithms to perform multi-reference Moller-Plesset (MR-MP2) calculations efficiently for large molecules using improved (average) virtual orbitals.
Abstract: We propose computational strategies and algorithms to perform multi-reference Moller–Plesset (MR-MP2) calculations efficiently for large molecules. As zeroth-order reference we employ restricted configuration interaction wave functions expressed in terms of an active space of Hartree–Fock one-particle functions (RAS-CI). To accelerate the convergence of the perturbation expansion and to keep the zeroth-order spaces as small as possible (i.e. Dim<1000) we use improved (average) virtual orbitals. The length of the first-order space (single and double excitations with respect to all reference configurations) is reduced by selecting the most important configurations from the full space based on the magnitude of their H0 diagonal matrix element. The two-electron integrals in the MO basis are calculated semi-directly with the resolution of the identity (RI) method which avoids computationally demanding 4-index transformations. The errors introduced by the approximations can systematically be reduced and are found to be insignificant in applications to chemical problems. As examples we present MR-MP2 results for excitation and reaction energies of molecules for which single-reference perturbation theory is not adequate. With our approach, investigation of systems as large as porphin or C60 are possible on low-cost personal computers.

Journal ArticleDOI
TL;DR: In this article, the second Dirac operator is shown to be equivalent to a standard Dirac one in Riemannian spaces with a five-dimensional motion group, and it is shown that it exists in all metrics for which it exists.
Abstract: We describe a Riemannian space class where the second Dirac operator arises and prove that the operator is always equivalent to a standard Dirac one. The particle state in this gravitational field is degenerate to some extent and we introduce an additional value in order to describe a particle state completely. Some supersymmetry constructions are also discussed. As an example we study all Riemannian spaces with a five-dimensional motion group and find all metrics for which the second Dirac operator exists. On the basis of our discussed examples we hypothesize about the number of second Dirac operators in Riemannian space.

Patent
Michael E. Tipping1
01 Sep 2000
TL;DR: The relevance vector machine (RVM) as mentioned in this paper is a probabilistic basis model with a Bayesian treatment, where a prior is introduced over the weights governed by a set of hyperparameters.
Abstract: A relevance vector machine (RVM) for data modeling is disclosed. The RVM is a probabilistic basis model. Sparsity is achieved through a Bayesian treatment, where a prior is introduced over the weights governed by a set of hyperparameters. As compared to a Support Vector Machine (SVM), the non-zero weights in the RVM represent more prototypical examples of classes, which are termed relevance vectors. The trained RVM utilizes many fewer basis functions than the corresponding SVM and typically gives superior test performance. No additional validation of parameters (such as C) is necessary to specify the model, except those associated with the basis.

Proceedings ArticleDOI
06 Sep 2000
TL;DR: An integrated study of the possible topological relationships between multidimensional simple objects in 0, 1, 2 and 3D space is presented, using the 9-intersections model to define a unified set of conditions for eliminating relationships that cannot be realised in reality.
Abstract: This paper presents an integrated study on the possible topological relationship between multidimensional simple objects in 0, 1, 2 and 3D space. The formal categorisation of spatial relationships is completed upon the 9-intersections model. The focus is on the definition of a unified set of conditions for eliminating relationships that cannot be realised in reality. The negative conditions are formulated on the basis of dimension and co-dimension of objects, and connectivity of boundaries. The set of 25 conditions is sufficient for deriving all the possible relationships mentioned currently in the literature and for specifying the relationships between surface and surface in 3D space. Drawings of example configurations verify the obtained results in 3D space.
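The 9-intersections idea can be illustrated in one dimension with closed intervals, where the interior/boundary/exterior intersections reduce to simple inequalities (a toy sketch, far simpler than the paper's 3D analysis):

```python
def nine_intersection(A, B):
    """9-intersection matrix (True = nonempty) for closed intervals
    A = [a1, a2] and B = [b1, b2] on the real line, with a1 < a2, b1 < b2.
    Rows/columns: interior, boundary, exterior of A (rows) vs. B (cols)."""
    a1, a2 = A
    b1, b2 = B
    int_int = max(a1, b1) < min(a2, b2)       # interiors overlap
    int_bnd = a1 < b1 < a2 or a1 < b2 < a2    # B's endpoint inside A
    bnd_int = b1 < a1 < b2 or b1 < a2 < b2
    bnd_bnd = a1 in (b1, b2) or a2 in (b1, b2)
    int_ext = a1 < b1 or a2 > b2              # part of A's interior outside B
    ext_int = b1 < a1 or b2 > a2
    bnd_ext = a1 < b1 or a1 > b2 or a2 < b1 or a2 > b2
    ext_bnd = b1 < a1 or b1 > a2 or b2 < a1 or b2 > a2
    ext_ext = True                            # both exteriors are unbounded
    return [[int_int, int_bnd, int_ext],
            [bnd_int, bnd_bnd, bnd_ext],
            [ext_int, ext_bnd, ext_ext]]

# "Equal" intervals produce the characteristic identity-like matrix.
print(nine_intersection((0, 1), (0, 1)))
```

Categorising which of the 2^9 candidate matrices can actually occur, via negative conditions on dimension and boundary connectivity, is exactly the kind of elimination the paper carries out in higher dimensions.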

Journal ArticleDOI
TL;DR: In this article, the convergence behavior of the total and correlation energies of He, H2, and He2 with the increase of basis quality in the correlation-consistent basis sets was studied to predict accurate complete basis set (CBS) limits at the MP2, CCSD, and CCSD(T) levels.
Abstract: The convergence behavior of the total and correlation energies of He, H2, and He2 with the increase of basis quality in the correlation-consistent basis sets, cc-pVXZ and aug-cc-pVXZ(X=D,T,Q,5,6), was studied to search for a proper extrapolation scheme to predict the accurate complete basis set (CBS) limits at the MP2, CCSD, and CCSD(T) level. The functional form employed for extrapolation is a simple polynomial including inverse cubic power and higher-order terms of the cardinal number X in the correlation-consistent basis set as well as exponential function. It is found that a simple extrapolation of two successive correlation-consistent basis set energies (total or correlation energies) using (X+k)−3 [k=0 for MP2 and k=−1 for CCSD and CCSD(T) level] gives in general the most reliable (and accurate in case of total energy) estimates to the CBS limit energies. It is also shown that the choice of proper basis set, which can represent the electronic motions in the fragment and complex equally well, appears...
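The two-point (X+k)^-3 extrapolation described above can be sketched as follows (synthetic energies generated exactly from the model; the numerical values are assumed for illustration):

```python
def cbs_extrapolate(E_X, E_X1, X, k=0):
    """Two-point complete-basis-set extrapolation assuming the energy
    behaves as E(X) = E_CBS + c * (X + k)**-3 for cardinal number X.
    Per the abstract, k = 0 suits MP2 and k = -1 suits CCSD/CCSD(T)."""
    wX, wX1 = (X + k) ** -3, (X + 1 + k) ** -3
    # Solve the 2x2 linear system for E_CBS, eliminating c.
    return (E_X1 * wX - E_X * wX1) / (wX - wX1)

# Synthetic energies following the model exactly (illustrative values).
E_cbs_true, c = -76.40, 0.35
E3 = E_cbs_true + c * 3.0 ** -3   # stand-in for a cc-pVTZ energy
E4 = E_cbs_true + c * 4.0 ** -3   # stand-in for a cc-pVQZ energy
print(cbs_extrapolate(E3, E4, X=3))
```

On model-generated data the formula recovers the limit exactly; for real correlation energies it is an estimate whose quality depends on how well the inverse-cubic form describes the convergence.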

Patent
Regis J. Crinon1
13 Mar 2000
TL;DR: In this article, a vector rank filter is applied to the feature vectors representing the video frames; to produce a summary that is most representative of the content of a video sequence, frames are chosen that correspond to vectors that are the least distant to, or produce the least distortion of, the set of vectors representing each segment.
Abstract: Automated summarization of digital video sequences is accomplished using a vector rank filter. The consecutive frames of a digital video sequence can be represented as feature vectors which are successively accumulated in a set of vectors. The distortion of the set by the addition of each successive vector or the cumulative distance from each successive vector to all other vectors in the set is determined by a vector rank filter. When the distortion exceeds a threshold value the end of a video segment is detected. Each frame in a video segment can be ranked according to its relative similarity to the other frames of the set by applying the vector rank filter to the feature vectors representing the video frames. To produce a summary of a video sequence which is most representative of the content of the sequence, frames are chosen that correspond to vectors that are the least distant to or produce the least distortion of the set of vectors representing the segment. The ranking of the relative distortion can be used as the basis for selecting more than one frame from each segment to produce a hierarchy of summaries containing greater numbers of the frames having the most representative content.
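A minimal sketch of ranking frames by cumulative distance with a vector rank filter (toy 2-D feature vectors stand in for real frame features):

```python
import numpy as np

def rank_frames(features):
    """Rank the frames of one segment by the cumulative distance from each
    frame's feature vector to all other vectors (a vector rank filter):
    the lowest-sum frame is the most representative one (the medoid)."""
    F = np.asarray(features, dtype=float)
    # All pairwise Euclidean distances.
    diffs = F[:, None, :] - F[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    totals = dists.sum(axis=1)
    order = np.argsort(totals)          # most representative frame first
    return order, totals

# Toy segment: frames 0-3 cluster together, frame 4 is an outlier.
features = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]]
order, totals = rank_frames(features)
print(order[0])   # a central frame of the cluster is chosen as the summary
```

Taking the top-ranked frame per segment yields a one-frame summary; taking successively more of the ranking gives the hierarchy of summaries mentioned in the abstract.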

Posted Content
TL;DR: Nice error bases have been introduced by Knill (1996) as a generalization of the Pauli basis and are shown to be projective representations of finite groups.
Abstract: Nice error bases have been introduced by Knill as a generalization of the Pauli basis. These bases are shown to be projective representations of finite groups. We classify all nice error bases of small degree, and all nice error bases with abelian index groups. We show that in general an index group of a nice error basis is necessarily solvable.
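A standard example of a nice error basis is the shift-and-clock (generalized Pauli) basis; the sketch below constructs it and checks trace orthogonality (a textbook construction, not the paper's classification machinery):

```python
import numpy as np

def shift_clock_basis(d):
    """Nice error basis in dimension d generalizing the Pauli basis: the
    d**2 unitaries X**j Z**k, where X is the cyclic shift and Z the clock
    matrix with phases omega**m, omega = exp(2*pi*i/d)."""
    omega = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)          # cyclic shift: e_j -> e_{j+1}
    Z = np.diag(omega ** np.arange(d))         # clock matrix
    basis = []
    for j in range(d):
        for k in range(d):
            basis.append(np.linalg.matrix_power(X, j)
                         @ np.linalg.matrix_power(Z, k))
    return basis

B = shift_clock_basis(3)
# Trace orthogonality: Tr(E_a^dagger E_b) = d * delta_ab.
G = np.array([[np.trace(Ea.conj().T @ Eb) for Eb in B] for Ea in B])
print(np.allclose(G, 3 * np.eye(9)))
```

For d = 2 this reduces to the usual Pauli basis {I, X, Z, XZ}; the index group here is the abelian group Z_d x Z_d, one of the cases covered by the classification above.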

Journal ArticleDOI
TL;DR: In this article, a general construction for orthonormal bases of maximally entangled vectors, which works in any dimension, and is based on Latin squares and complex Hadamard matrices, is given.
Abstract: We analyze some special properties of a system of two qubits, and in particular of the so-called Bell basis for this system, and discuss the possibility of extending these properties to higher dimensional systems. We give a general construction for orthonormal bases of maximally entangled vectors, which works in any dimension, and is based on Latin squares and complex Hadamard matrices. However, for none of these bases do the special properties of the operation of complex conjugation in the Bell basis hold, namely that maximally entangled vectors have up-to-a-phase real coefficients and that factorizable unitaries have real matrix elements.

Proceedings ArticleDOI
10 Sep 2000
TL;DR: To meet the objective of dimensionality reduction in the context of ear and face biometrics, a novel force field transformation has been developed in which the image is treated as an array of Gaussian attractors that act as the source of a force field.
Abstract: The objective in defining feature space is to reduce the dimension of the original pattern space yet maintaining discriminatory power for classification. To meet this objective in the context of ear and face biometrics a novel force field transformation has been developed in which the image is treated as an array of Gaussian attractors that act as the source of a force field. The directional properties of the force field are exploited to automatically locate a small number of potential energy wells and channels that form the basis of a characteristic feature vector. Here, we generalise the analysis, and the stock of applications.