
Showing papers on "Matrix (mathematics)" published in 2004


Journal ArticleDOI
TL;DR: Experiments demonstrate that a wide set of unsymmetric linear systems can be solved and that high performance is consistently achieved for large sparse unsymmetric matrices from real-world applications.

1,324 citations


Journal ArticleDOI
TL;DR: In this article, the authors translate the TEBD algorithm into the language of matrix product states in order to both highlight and exploit its resemblances to the widely used density-matrix renormalization-group (DMRG) algorithms.
Abstract: An algorithm for the simulation of the evolution of slightly entangled quantum states has been recently proposed as a tool to study time-dependent phenomena in one-dimensional quantum systems. Its key feature is a time-evolving block-decimation (TEBD) procedure to identify and dynamically update the relevant, conveniently small, subregion of the otherwise exponentially large Hilbert space. Potential applications of the TEBD algorithm are the simulation of time-dependent Hamiltonians, transport in quantum systems far from equilibrium and dissipative quantum mechanics. In this paper we translate the TEBD algorithm into the language of matrix product states in order to both highlight and exploit its resemblances to the widely used density-matrix renormalization-group (DMRG) algorithms. The TEBD algorithm, being based on updating a matrix product state in time, is very accessible to the DMRG community and it can be enhanced by using well-known DMRG techniques, for instance in the event of good quantum numbers. More importantly, we show how it can be simply incorporated into existing DMRG implementations to produce a remarkably effective and versatile 'adaptive time-dependent DMRG' variant, that we also test and compare to previous proposals.

888 citations


Journal ArticleDOI
TL;DR: A polynomial-time randomized algorithm for estimating the permanent of an arbitrary n × n matrix with nonnegative entries computes an approximation that is within arbitrarily small specified relative error of the true value of the permanent.
Abstract: We present a polynomial-time randomized algorithm for estimating the permanent of an arbitrary n × n matrix with nonnegative entries. This algorithm---technically a "fully-polynomial randomized approximation scheme"---computes an approximation that is, with high probability, within arbitrarily small specified relative error of the true value of the permanent.

845 citations
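As a point of reference (and not the randomized approximation scheme described above), the permanent can be computed exactly by Ryser's inclusion-exclusion formula, but only in time exponential in n; the short Python sketch below, with an arbitrary example matrix, illustrates why an efficient approximation is valuable.

import itertools
import numpy as np

def permanent_ryser(A):
    # Ryser's formula: perm(A) = (-1)^n * sum over non-empty column subsets S of
    # (-1)^|S| * prod_i sum_{j in S} A[i, j]; exact but exponential in n.
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent_ryser(A))   # 1*4 + 2*3 = 10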


Book
10 Jun 2004
TL;DR: This book covers orbit determination concepts, fundamentals of orbit determination, square-root solution methods, and consider covariance analysis, with appendices that include a review of matrix concepts, the equations of motion, an analytical theory for near-circular orbits, and the solution of the linearized equations of motion.
Abstract: Contents: 1. Orbit Determination Concepts; 2. The Orbit Problem; 3. Observations; 4. Fundamentals of Orbit Determination; 5. Square-root Solution Methods; 6. Consider Covariance Analysis. Appendices: A. Probability and Statistics; B. Review of Matrix Concepts; C. Equations of Motion; D. Constants; E. Analytical Theory for Near-Circular Orbits; F. Example of State Noise and Dynamic Model Compensation; G. Solution of the Linearized Equations of Motion; H. ECI and ECF Transformation.

791 citations


Proceedings Article
01 Aug 2004
TL;DR: A method of calculating the transforms currently obtained via forward and inverse Fourier transforms, applicable to a signal whose digital representation has arbitrary dimension, by reducing the transform to multiplication of a vector by a circulant matrix.
Abstract: This paper describes a method of calculating the transforms currently obtained via forward and inverse Fourier transforms. The method allows the transforms of a signal whose digital representation has arbitrary dimension to be calculated efficiently by reducing the transform to multiplication of a vector by a circulant matrix. There is a connection between harmonic equations in rectangular and polar coordinate systems; this connection is established here and used to create a very robust recursive algorithm for conformal mapping calculation. A new ratio of two oscillative signals, together with an efficient way of computing it, is also suggested.

778 citations
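A hedged sketch of the core reduction mentioned in the abstract: multiplying a vector by a circulant matrix can be done in O(n log n) with the FFT, since circulant matrices are diagonalized by the discrete Fourier transform. The circulant below is arbitrary and only illustrates the operation, not the paper's specific construction (Python, NumPy assumed).

import numpy as np

def circulant_matvec(c, x):
    # C is the circulant matrix with first column c, i.e. C[i, j] = c[(i - j) mod n];
    # C @ x is the circular convolution of c and x, computed here via the FFT.
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)    # first column of the circulant matrix
x = rng.standard_normal(n)    # signal

# Dense reference for comparison.
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C @ x, circulant_matvec(c, x).real)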


Journal ArticleDOI
TL;DR: Shrinkage, as discussed in this paper, transforms the sample covariance matrix by pulling the most extreme coefficients toward more central values, systematically reducing estimation error where it matters most.
Abstract: The central message of this article is that no one should use the sample covariance matrix for portfolio optimization. It is subject to estimation error of the kind most likely to perturb a mean-variance optimizer. Instead, a matrix can be obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients toward more central values, systematically reducing estimation error when it matters most. Statistically, the challenge is to know the optimal shrinkage intensity. Shrinkage reduces portfolio tracking error relative to a benchmark index, and substantially raises the manager's realized information ratio.

769 citations
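A minimal sketch of the shrinkage idea, assuming a scaled-identity shrinkage target and NumPy; the article's estimator of the optimal shrinkage intensity is not reproduced, so delta is simply a user-supplied parameter here.

import numpy as np

def shrink_covariance(returns, delta):
    # Convex combination of the sample covariance matrix with a structured target;
    # delta in [0, 1] is the shrinkage intensity (estimated from data in the article,
    # supplied by hand in this sketch).
    S = np.cov(returns, rowvar=False)                  # sample covariance, N x N
    F = np.mean(np.diag(S)) * np.eye(S.shape[0])       # scaled-identity target
    return delta * F + (1.0 - delta) * S

rng = np.random.default_rng(1)
R = rng.standard_normal((60, 10))                      # toy data: 60 periods, 10 assets
Sigma_shrunk = shrink_covariance(R, delta=0.3)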


Journal ArticleDOI
TL;DR: An algorithm is developed that is qualitatively faster, provided the entries of the matrix can be sampled in accordance with a natural probability distribution; this implies that, in constant time, it can be determined whether a given matrix of arbitrary size has a good low-rank approximation.
Abstract: We consider the problem of approximating a given m × n matrix A by another matrix of specified rank k, which is smaller than m and n. The Singular Value Decomposition (SVD) can be used to find the "best" such approximation. However, it takes time polynomial in m, n, which is prohibitive for some modern applications. In this article, we develop an algorithm that is qualitatively faster, provided we may sample the entries of the matrix in accordance with a natural probability distribution. In many applications, such sampling can be done efficiently. Our main result is a randomized algorithm to find the description of a matrix D* of rank at most k so that ||A − D*||_F^2 ≤ min_{D: rank(D) ≤ k} ||A − D||_F^2 + ε ||A||_F^2 holds with probability at least 1 − δ (where ||·||_F denotes the Frobenius norm). The algorithm takes time polynomial in k, 1/ε, log(1/δ) only and is independent of m and n. In particular, this implies that in constant time, it can be determined if a given matrix of arbitrary size has a good low-rank approximation.

613 citations
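A rough NumPy sketch in the spirit of the sampling-based algorithm above: rows are drawn with probability proportional to their squared norms, rescaled, and the top-k right singular vectors of the small sample define the approximating subspace. Sample sizes, constants, and the error guarantee from the paper are not reproduced.

import numpy as np

def sampled_low_rank(A, k, s, rng):
    # Draw s rows with probability proportional to their squared norms and rescale,
    # so that S^T S approximates A^T A in expectation; the top-k right singular
    # vectors of S span the approximating subspace.
    row_norms = np.sum(A * A, axis=1)
    p = row_norms / row_norms.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    S = A[idx] / np.sqrt(s * p[idx])[:, None]
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    Vk = Vt[:k].T
    return A @ Vk @ Vk.T          # rank-(at most k) approximation of A

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 200)) @ rng.standard_normal((200, 200))
Ak = sampled_low_rank(A, k=10, s=50, rng=rng)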


Journal ArticleDOI
J.-J. Fuchs
TL;DR: The purpose of this contribution is to generalize some recent results on sparse representations of signals in redundant bases and to give a sufficient condition for the unique sparsest solution to be the unique solution to both a linear program and a parametrized quadratic program.
Abstract: The purpose of this contribution is to generalize some recent results on sparse representations of signals in redundant bases. The question that is considered is the following: given a matrix A of dimension (n,m) with m>n and a vector b=Ax, find a sufficient condition for b to have a unique sparsest representation x as a linear combination of columns of A. Answers to this question are known when A is the concatenation of two unitary matrices and either an extensive combinatorial search is performed or a linear program is solved. We consider arbitrary A matrices and give a sufficient condition for the unique sparsest solution to be the unique solution to both a linear program and a parametrized quadratic program. The proof is elementary, and the possibility of using a quadratic program opens perspectives on the case where b=Ax+e, with e a vector of noise or modeling errors.

609 citations
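To make the linear-programming route concrete, here is a hedged sketch of recovering a sparse representation by l1 minimization (basis pursuit), splitting x into nonnegative parts so that the problem becomes a standard LP; SciPy's linprog is an illustrative solver choice, and the paper's sufficient conditions are not checked here.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    # Solve min ||x||_1 subject to A x = b by writing x = u - v with u, v >= 0:
    # minimize 1'(u + v) subject to [A, -A] [u; v] = b.
    n, m = A.shape
    c = np.ones(2 * m)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, method="highs")
    return res.x[:m] - res.x[m:]

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 50))         # n = 20 equations, m = 50 columns (m > n)
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]    # a 3-sparse representation
b = A @ x_true
x_hat = basis_pursuit(A, b)               # should closely match x_true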


Posted Content
TL;DR: A modification of the classical variational representation of the largest eigenvalue of a symmetric matrix is used, where cardinality is constrained, and a semidefinite programming-based relaxation is derived for the sparse PCA problem.
Abstract: We examine the problem of approximating, in the Frobenius-norm sense, a positive semidefinite symmetric matrix by a rank-one matrix, with an upper bound on the cardinality of its eigenvector. The problem arises in the decomposition of a covariance matrix into sparse factors, and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite programming-based relaxation for our problem. We also discuss Nesterov's smooth minimization technique applied to the SDP arising in the direct sparse PCA method.

572 citations
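A hedged sketch of a semidefinite relaxation for sparse PCA in the spirit of the abstract: maximize the variance explained by a unit-trace positive semidefinite matrix X while bounding the elementwise l1 norm of X as a convex surrogate for the cardinality bound. The exact formulation and Nesterov's smooth minimization scheme are not reproduced; CVXPY and the toy covariance are illustrative choices.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((50, 10))
S = B.T @ B / 50                     # toy sample covariance matrix, 10 x 10
k = 3                                # target cardinality of the leading sparse component

X = cp.Variable((10, 10), symmetric=True)
problem = cp.Problem(
    cp.Maximize(cp.trace(S @ X)),
    [X >> 0, cp.trace(X) == 1, cp.sum(cp.abs(X)) <= k],
)
problem.solve()
# A sparse leading component can be read off the dominant eigenvector of X.value.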


Journal ArticleDOI
TL;DR: This paper considers the problem of partitioning a set of m points in the n-dimensional Euclidean space into k clusters, and considers a continuous relaxation of this discrete problem: find the k-dimensional subspace V that minimizes the sum of squared distances to V of the m points, and argues that the relaxation provides a generalized clustering which is useful in its own right.
Abstract: We consider the problem of partitioning a set of m points in the n-dimensional Euclidean space into k clusters (usually m and n are variable, while k is fixed), so as to minimize the sum of squared distances between each point and its cluster center. This formulation is usually the objective of the k-means clustering algorithm (Kanungo et al. (2000)). We prove that this problem is NP-hard even for k = 2, and we consider a continuous relaxation of this discrete problem: find the k-dimensional subspace V that minimizes the sum of squared distances to V of the m points. This relaxation can be solved by computing the Singular Value Decomposition (SVD) of the m × n matrix A that represents the m points; this solution can be used to get a 2-approximation algorithm for the original problem. We then argue that in fact the relaxation provides a generalized clustering which is useful in its own right. Finally, we show that the SVD of a random submatrix—chosen according to a suitable probability distribution—of a given matrix provides an approximation to the SVD of the whole matrix, thus yielding a very fast randomized algorithm. We expect this algorithm to be the main contribution of this paper, since it can be applied to problems of very large size which typically arise in modern applications.

523 citations
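A small NumPy sketch of the relaxation step described above: the k-dimensional subspace minimizing the sum of squared distances is spanned by the top-k right singular vectors of A, and projecting the points onto it is the starting point for the 2-approximation; the subsequent clustering of the projected points is only indicated.

import numpy as np

def project_to_best_subspace(A, k):
    # The optimal subspace is spanned by the top-k right singular vectors of A;
    # rows of the returned matrix are the original points projected onto it.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Vk = Vt[:k].T
    return A @ Vk @ Vk.T

rng = np.random.default_rng(5)
A = np.vstack([rng.normal(0, 1, (50, 30)), rng.normal(5, 1, (50, 30))])
A_proj = project_to_best_subspace(A, k=2)   # cluster A_proj with any k-means routine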


Reference EntryDOI
15 Sep 2004
TL;DR: In this article, the authors use the corresponding orbital transformation of Amos and Hall, which renders the transition density matrix diagonal and provides a unique correspondence between the excited particle and the empty hole.
Abstract: The result of a calculation of excited states, be it via configuration interaction methods or density functional response theory, is a set of coefficients describing the contribution that individual orbital excitations make to the total transition. Often, there is no dominant amplitude describing the transition, making its qualitative description difficult. Natural transition orbitals dramatically simplify the situation by providing a compact representation of the transition density matrix. This is accomplished using the corresponding orbital transformation of Amos and Hall, which renders the transition density matrix diagonal and provides a unique correspondence between the excited ‘particle’ and empty ‘hole’. Keywords: natural transition orbital; transition density matrix; corresponding orbital transformation; singular value decomposition
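A schematic NumPy sketch of the construction described above, using a random occupied-by-virtual amplitude matrix purely for shape: the singular value decomposition of the transition density matrix gives the paired 'hole' and 'particle' orbitals, with squared singular values as their weights. The rotation of actual molecular orbital coefficients is only indicated in comments.

import numpy as np

rng = np.random.default_rng(6)
n_occ, n_virt = 5, 12
T = rng.standard_normal((n_occ, n_virt))    # transition amplitudes c_{ia} (illustrative)

U, s, Vt = np.linalg.svd(T, full_matrices=False)
weights = s**2 / np.sum(s**2)               # contribution of each hole/particle pair
# hole (occupied) NTOs:    occupied MO coefficients @ U
# particle (virtual) NTOs: virtual MO coefficients @ Vt.T
print(weights)    # a single dominant weight indicates a simple one-pair transition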

Journal ArticleDOI
TL;DR: In this paper, the authors prove that the real four-dimensional Euclidean noncommutative φ^4 model is renormalisable to all orders in perturbation theory.
Abstract: We prove that the real four-dimensional Euclidean noncommutative \phi^4-model is renormalisable to all orders in perturbation theory. Compared with the commutative case, the bare action of relevant and marginal couplings necessarily contains an additional term: a harmonic oscillator potential for the free scalar field action. This entails a modified dispersion relation for the free theory, which becomes important at large distances (UV/IR-entanglement). The renormalisation proof relies on flow equations for the expansion coefficients of the effective action with respect to scalar fields written in the matrix base of the noncommutative R^4. The renormalisation flow depends on the topology of ribbon graphs and on the asymptotic and local behaviour of the propagator governed by orthogonal Meixner polynomials.

Journal ArticleDOI
TL;DR: It is shown that the DKH method is the only valid analytic unitary transformation scheme for the Dirac Hamiltonian, and explained why a straightforward numerical iterative extension of theDKH procedure to arbitrary order employing matrix representations is not feasible within standard one-component electronic structure programs.
Abstract: Exact decoupling of positive- and negative-energy states in relativistic quantum chemistry is discussed in the framework of unitary transformation techniques. The obscure situation that each scheme of decoupling transformations relies on different, but very special parametrizations of the employed unitary matrices is critically analyzed. By applying the most general power series ansatz for the parametrization of the unitary matrices it is shown that all transformation protocols for decoupling the Dirac Hamiltonian have necessarily to start with an initial free-particle Foldy-Wouthuysen step. The purely numerical iteration scheme applying X-operator techniques to the Barysz-Sadlej-Snijders (BSS) Hamiltonian is compared to the analytical schemes of the Foldy-Wouthuysen (FW) and Douglas-Kroll-Hess (DKH) approaches. Relying on an illegal 1/c expansion of the Dirac Hamiltonian around the nonrelativistic limit, any higher-order FW transformation is in principle ill defined and doomed to fail, irrespective of the specific features of the external potential. It is shown that the DKH method is the only valid analytic unitary transformation scheme for the Dirac Hamiltonian. Its exact infinite-order version can be realized purely numerically by the BSS scheme, which is only able to yield matrix representations of the decoupled Hamiltonian but no analytic expressions for this operator. It is explained why a straightforward numerical iterative extension of the DKH procedure to arbitrary order employing matrix representations is not feasible within standard one-component electronic structure programs. A more sophisticated ansatz based on a symbolical evaluation of the DKH operators via a suitable parser routine is needed instead and introduced in Part II of this work.

Journal ArticleDOI
TL;DR: In this article, the chaos synchronization of a time-varying complex network is shown to be determined by the inner coupled link matrix and by the eigenvalues and corresponding eigenvectors of the coupled configuration matrix, rather than by the conventional eigenvalues of the coupled configuration matrix for a uniform network.
Abstract: Recently, it has been demonstrated that many large-scale complex dynamical networks display a collective synchronization motion. Here, we introduce a time-varying complex dynamical network model and further investigate its synchronization phenomenon. Based on this new complex network model, two network chaos synchronization theorems are proved. We show that the chaos synchronization of a time-varying complex network is determined by means of the inner coupled link matrix, the eigenvalues and the corresponding eigenvectors of the coupled configuration matrix, rather than the conventional eigenvalues of the coupled configuration matrix for a uniform network. Especially, we do not assume that the coupled configuration matrix is symmetric and its off-diagonal elements are nonnegative, which in a way generalizes the related results existing in the literature.

Journal ArticleDOI
TL;DR: A general theory of the relation between quantum phase transitions (QPTs) characterized by nonanalyticities in the energy and bipartite entanglement is developed and a functional relation between the matrix elements of two-particle reduced density matrices and the eigenvalues of general two-body Hamiltonians of d-level systems is derived.
Abstract: We develop a general theory of the relation between quantum phase transitions (QPTs) characterized by nonanalyticities in the energy and bipartite entanglement. We derive a functional relation between the matrix elements of two-particle reduced density matrices and the eigenvalues of general two-body Hamiltonians of $d$-level systems. The ground state energy eigenvalue and its derivatives, whose nonanalyticity characterizes a QPT, are directly tied to bipartite entanglement measures. We show that first-order QPTs are signaled by density matrix elements themselves and second-order QPTs by the first derivative of density matrix elements. Our general conclusions are illustrated via several quantum spin models.

Journal ArticleDOI
TL;DR: The results, reinterpreted from an invariant-theoretic viewpoint, provide a novel representation of a class of nonnegative symmetric polynomials, termed “sum of squares matrices.”

Journal ArticleDOI
TL;DR: A delay-dependent bounded real lemma for systems with a state delay is presented; a delay-dependent condition for the existence of robust H∞ control is given in terms of nonlinear matrix inequalities, and an iterative algorithm involving convex optimization is proposed.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a new code, called MA57, for the direct solution of sparse symmetric linear equations that solves indefinite systems with 2 × 2 pivoting for stability; it is included in HSL 2002 and supersedes the well-used HSL code MA27.
Abstract: We introduce a new code for the direct solution of sparse symmetric linear equations that solves indefinite systems with 2 × 2 pivoting for stability. This code, called MA57, is in HSL 2002 and supersedes the well-used HSL code MA27. We describe some of the implementation details and emphasize the novel features of MA57. These include restart facilities, matrix modification, partial solution for matrix factors, solution of multiple right-hand sides, and iterative refinement and error analysis. The code is written in Fortran 77, but there are additional facilities within a Fortran 90 implementation that include the ability to identify and change pivots. Several of these facilities have been developed particularly to support optimization applications, and we illustrate the performance of the code on problems arising therefrom.

Journal ArticleDOI
01 Feb 2004
TL;DR: This paper discusses the optimization of two operations, a sparse matrix times a dense vector and a sparse matrix times a set of dense vectors, and describes the different optimizations and parameter selection techniques and evaluates them on several machines using over 40 matrices.
Abstract: Sparse matrix-vector multiplication is an important computational kernel that performs poorly on most modern processors due to a low compute-to-memory ratio and irregular memory access patterns. Optimization is difficult because of the complexity of cache-based memory systems and because performance is highly dependent on the non-zero structure of the matrix. The SPARSITY system is designed to address these problems by allowing users to automatically build sparse matrix kernels that are tuned to their matrices and machines. SPARSITY combines traditional techniques such as loop transformations with data structure transformations and optimization heuristics that are specific to sparse matrices. It provides a novel framework for selecting optimization parameters, such as block size, using a combination of performance models and search. In this paper we discuss the optimization of two operations: a sparse matrix times a dense vector and a sparse matrix times a set of dense vectors. Our experience indicates that register level optimizations are effective for matrices arising in certain scientific simulations, in particular finite-element problems. Cache level optimizations are important when the vector used in multiplication is larger than the cache size, especially for matrices in which the non-zero structure is random. For applications involving multiple vectors, reorganizing the computation to perform the entire set of multiplications as a single operation produces significant speedups. We describe the different optimizations and parameter selection techniques and evaluate them on several machines using over 40 matrices taken from a broad set of application domains. Our results demonstrate speedups of up to 4X for the single vector case and up to 10X for the multiple vector case.
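For reference, the two kernels being tuned, shown in their plain library form with SciPy's CSR format; SPARSITY's register and cache blocking, performance models, and search are not reproduced in this sketch.

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(7)
A = sp.random(10000, 10000, density=1e-3, format="csr", random_state=7)

x = rng.standard_normal(10000)
y = A @ x                       # sparse matrix times a dense vector

X = rng.standard_normal((10000, 8))
Y = A @ X                       # sparse matrix times a set of dense vectors, as one operation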

Proceedings ArticleDOI
04 Jul 2004
TL;DR: Extensive experiments on face image data were carried out to evaluate the effectiveness of the proposed algorithm and to compare the computed low-rank approximations with those obtained from the traditional Singular Value Decomposition based method.
Abstract: We consider the problem of computing low rank approximations of matrices. The novelty of our approach is that the low rank approximations are on a sequence of matrices. Unlike the problem of low rank approximations of a single matrix, which was well studied in the past, the proposed algorithm in this paper does not admit a closed form solution in general. We did extensive experiments on face image data to evaluate the effectiveness of the proposed algorithm and compare the computed low rank approximations with those obtained from the traditional Singular Value Decomposition based method.

Journal ArticleDOI
TL;DR: Using mirror symmetry, this paper showed that Chern-Simons theory on certain manifolds such as lens spaces reduces to a novel class of hermitean matrix models, where the measure is that of unitary matrix models.
Abstract: Using mirror symmetry, we show that Chern-Simons theory on certain manifolds such as lens spaces reduces to a novel class of hermitean matrix models, where the measure is that of unitary matrix models. We show that this agrees with the more conventional canonical quantization of Chern-Simons theory. Moreover, large-N dualities in this context lead to computation of all genus A-model topological amplitudes on toric Calabi-Yau manifolds in terms of matrix integrals. In the context of type-IIA superstring compactifications on these Calabi-Yau manifolds with wrapped D6 branes (which are dual to M-theory on G2 manifolds) this leads to engineering and solving F-terms for N = 1 supersymmetric gauge theories with superpotentials involving certain multi-trace operators.


Journal ArticleDOI
TL;DR: In this paper, the robust H2 and H∞ filtering problem for linear discrete-time systems with polytopic parameter uncertainty was studied, and a matrix inequality condition was proposed to provide additional free parameters as compared to existing characterizations.

Journal ArticleDOI
TL;DR: The problems of robust stability and robust stabilization are solved with a new necessary and sufficient condition for a discrete-time singular system to be regular, causal and stable in terms of a strict linear matrix inequality (LMI).
Abstract: This note deals with the problems of robust stability and stabilization for uncertain discrete-time singular systems. The parameter uncertainties are assumed to be time-invariant and norm-bounded appearing in both the state and input matrices. A new necessary and sufficient condition for a discrete-time singular system to be regular, causal and stable is proposed in terms of a strict linear matrix inequality (LMI). Based on this, the concepts of generalized quadratic stability and generalized quadratic stabilization for uncertain discrete-time singular systems are introduced. Necessary and sufficient conditions for generalized quadratic stability and generalized quadratic stabilization are obtained in terms of a strict LMI and a set of matrix inequalities, respectively. With these conditions, the problems of robust stability and robust stabilization are solved. An explicit expression of a desired state feedback controller is also given, which involves no matrix decomposition. Finally, an illustrative example is provided to demonstrate the applicability of the proposed approach.
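As a much simpler stand-in for the strict LMI condition derived in the note (which addresses uncertain singular systems), the sketch below checks stability of an ordinary discrete-time system through the classical Lyapunov equation A'PA − P = −Q, the equality counterpart of the Lyapunov LMI; SciPy and the test matrix are illustrative choices, and regularity and causality of singular systems are not modeled.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])                 # a Schur-stable test matrix
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)        # solves A' P A - P = -Q
print(np.all(np.linalg.eigvalsh(P) > 0))   # P > 0 confirms stability of x_{k+1} = A x_k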

Journal ArticleDOI
01 Nov 2004
TL;DR: A very general class of linear subspaces of M(N) on which there exists a polynomial deterministic time algorithm to solve Edmonds' problem is introduced and it is proved that the weak membership problem for the convex set of separable normalized bipartite density matrices is NP-HARD.
Abstract: Generalizing a decision problem for bipartite perfect matching, Edmonds (J. Res. Natl. Bur. Standards 71B(4) (1967) 242) introduced the problem (now known as the Edmonds Problem) of deciding if a given linear subspace of M(N) contains a non-singular matrix, where M(N) stands for the linear space of complex N × N matrices. This problem led to many fundamental developments in matroid theory, etc. Classical matching theory can be defined in terms of matrices with non-negative entries. The notion of Positive operator, central in Quantum Theory, is a natural generalization of matrices with non-negative entries. (Here operator refers to maps from matrices to matrices.) First, we reformulate the Edmonds Problem in terms of completely positive operators, or equivalently, in terms of bipartite density matrices. It turns out that one of the most important cases when Edmonds' problem can be solved in polynomial deterministic time, i.e. an intersection of two geometric matroids, corresponds to unentangled (aka separable) bipartite density matrices. We introduce a very general class (or promise) of linear subspaces of M(N) on which there exists a polynomial deterministic time algorithm to solve Edmonds' problem. The algorithm is a thoroughgoing generalization of algorithms in Linial, Samorodnitsky and Wigderson, Proceedings of the 30th ACM Symposium on Theory of Computing, ACM, New York, 1998; Gurvits and Yianilos, and its analysis benefits from an operator analog of permanents, so-called Quantum Permanents. Finally, we prove that the weak membership problem for the convex set of separable normalized bipartite density matrices is NP-HARD.

Journal ArticleDOI
TL;DR: A differential-geometric framework to define PDEs acting on some manifold constrained datasets, including the case of images taking value into matrix manifolds defined by orthogonal and spectral constraints is proposed.
Abstract: Nonlinear diffusion equations are now widely used to restore and enhance images. They make it possible to eliminate noise and artifacts while preserving large global features, such as object contours. In this context, we propose a differential-geometric framework to define PDEs acting on some manifold constrained datasets. We consider the case of images taking value into matrix manifolds defined by orthogonal and spectral constraints. We directly incorporate the geometry and natural metric of the underlying configuration space (viewed as a Lie group or a homogeneous space) in the design of the corresponding flows. Our numerical implementation relies on structure-preserving integrators that respect intrinsically the constraints geometry. The efficiency and versatility of this approach are illustrated through the anisotropic smoothing of diffusion tensor volumes in medical imaging.

Proceedings ArticleDOI
22 Aug 2004
TL;DR: This work furnishes a clear, information-theoretic criterion to choose a good cross-association as well as its parameters, namely, the number of row and column groups, and provides scalable algorithms to approach the optimal.
Abstract: Large, sparse binary matrices arise in numerous data mining applications, such as the analysis of market baskets, web graphs, social networks, co-citations, as well as information retrieval, collaborative filtering, sparse matrix reordering, etc. Virtually all popular methods for the analysis of such matrices---e.g., k-means clustering, METIS graph partitioning, SVD/PCA and frequent itemset mining---require the user to specify various parameters, such as the number of clusters, number of principal components, number of partitions, and "support." Choosing suitable values for such parameters is a challenging problem. Cross-association is a joint decomposition of a binary matrix into disjoint row and column groups such that the rectangular intersections of groups are homogeneous. Starting from first principles, we furnish a clear, information-theoretic criterion to choose a good cross-association as well as its parameters, namely, the number of row and column groups. We provide scalable algorithms to approach the optimal. Our algorithm is parameter-free, and requires no user intervention. In practice it scales linearly with the problem size, and is thus applicable to very large matrices. Finally, we present experiments on multiple synthetic and real-life datasets, where our method gives high-quality, intuitive results.

Book
03 Mar 2004
TL;DR: This book covers the foundations of the theory of differential equations with discontinuous right-hand sides, auxiliary algebraic statements on solutions of matrix inequalities of a special type, dichotomy and stability of nonlinear systems with multiple equilibria, and stability of equilibria sets of pendulum-like systems.
Abstract: Foundations of Theory of Differential Equations with Discontinuous Right-Hand Sides; Auxiliary Algebraic Statements on Solutions of Matrix Inequalities of a Special Type; Dichotomy and Stability of Nonlinear Systems with Multiple Equilibria; Stability of Equilibria Sets of Pendulum-Like Systems.

Dissertation
01 Jan 2004
TL;DR: This thesis addresses several issues related to learning with matrix factorizations: it studies the asymptotic behavior and generalization ability of existing methods, suggests new optimization methods, and presents a novel maximum-margin high-dimensional matrix factorization formulation.
Abstract: Matrices that can be factored into a product of two simpler matrices can serve as a useful and often natural model in the analysis of tabulated or high-dimensional data. Models based on matrix factorization (Factor Analysis, PCA) have been extensively used in statistical analysis and machine learning for over a century, with many new formulations and models suggested in recent years (Latent Semantic Indexing, Aspect Models, Probabilistic PCA, Exponential PCA, Non-Negative Matrix Factorization and others). In this thesis we address several issues related to learning with matrix factorizations: we study the asymptotic behavior and generalization ability of existing methods, suggest new optimization methods, and present a novel maximum-margin high-dimensional matrix factorization formulation.
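A minimal sketch of the basic model the thesis builds on, approximating a data matrix by a product of two low-rank factors; here it is fit by plain alternating least squares with a small ridge term, which is a generic stand-in rather than the thesis's maximum-margin formulation.

import numpy as np

def als_factorize(X, k, iters=50, lam=1e-2):
    # Alternate ridge-regularized least-squares updates for U and V in X ~ U @ V.T.
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    reg = lam * np.eye(k)
    for _ in range(iters):
        U = X @ V @ np.linalg.inv(V.T @ V + reg)
        V = X.T @ U @ np.linalg.inv(U.T @ U + reg)
    return U, V

X = np.random.default_rng(8).standard_normal((100, 40))
U, V = als_factorize(X, k=5)
residual = np.linalg.norm(X - U @ V.T)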

Journal ArticleDOI
TL;DR: This work discusses 18 estimating methods for deriving preference values from pairwise judgment matrices under a common framework of effectiveness (distance minimization and correctness in error-free cases), and points out the importance of commensurate scales when aggregating all the columns of a judgment matrix.