
Showing papers on "Matrix (mathematics)" published in 2007


Book
01 Jan 2007
TL;DR: In this book, the author presents a synthesis of the considerable body of new research into positive definite matrices, which have theoretical and computational uses across a broad spectrum of disciplines, including calculus, electrical engineering, statistics, physics, numerical analysis, quantum information theory, and geometry.
Abstract: This book represents the first synthesis of the considerable body of new research into positive definite matrices. These matrices play the same role in noncommutative analysis as positive real numbers do in classical analysis. They have theoretical and computational uses across a broad spectrum of disciplines, including calculus, electrical engineering, statistics, physics, numerical analysis, quantum information theory, and geometry. Through detailed explanations and an authoritative and inspiring writing style, Rajendra Bhatia carefully develops general techniques that have wide applications in the study of such matrices. Bhatia introduces several key topics in functional analysis, operator theory, harmonic analysis, and differential geometry--all built around the central theme of positive definite matrices. He discusses positive and completely positive linear maps, and presents major theorems with simple and direct proofs. He examines matrix means and their applications, and shows how to use positive definite functions to derive operator inequalities that he and others proved in recent years. He guides the reader through the differential geometry of the manifold of positive definite matrices, and explains recent work on the geometric mean of several matrices. Positive Definite Matrices is an informative and useful reference book for mathematicians and other researchers and practitioners. The numerous exercises and notes at the end of each chapter also make it the ideal textbook for graduate-level courses.

1,594 citations


Book
06 Jul 2007

948 citations


Journal ArticleDOI
TL;DR: This work defines the Log‐Euclidean mean from a Riemannian point of view, based on a Lie group structure which is compatible with the usual algebraic properties of this matrix space and a new scalar multiplication that smoothly extends the Lie group structure into a vector space structure.
Abstract: In this work we present a new generalization of the geometric mean of positive numbers to symmetric positive‐definite matrices, called the Log‐Euclidean mean. The approach is based on two novel algebraic structures on symmetric positive‐definite matrices: first, a Lie group structure which is compatible with the usual algebraic properties of this matrix space; second, a new scalar multiplication that smoothly extends the Lie group structure into a vector space structure. From bi‐invariant metrics on the Lie group structure, we define the Log‐Euclidean mean from a Riemannian point of view. This notion coincides with the usual Euclidean mean associated with the novel vector space structure. Furthermore, this mean corresponds to an arithmetic mean in the domain of matrix logarithms. We detail the invariance properties of this novel geometric mean and compare it to the recently introduced affine‐invariant mean. The two means have the same determinant and are equal in a number of cases, yet they are not identical in general.
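
Concretely, the Log-Euclidean mean is just the matrix exponential of the arithmetic mean of the matrix logarithms. Below is a minimal numpy sketch of this computation, assuming symmetric positive-definite inputs; the sizes and test data are illustrative, not taken from the paper.

```python
import numpy as np

def _sym_log(S):
    # Matrix logarithm of an SPD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def _sym_exp(S):
    # Matrix exponential of a symmetric matrix via its eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(spd_matrices):
    # Arithmetic mean taken in the domain of matrix logarithms.
    return _sym_exp(sum(_sym_log(S) for S in spd_matrices) / len(spd_matrices))

# Usage: average a few random 3x3 SPD matrices.
rng = np.random.default_rng(0)
mats = [B @ B.T + np.eye(3) for B in rng.standard_normal((5, 3, 3))]
M = log_euclidean_mean(mats)
```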

791 citations


Journal ArticleDOI
TL;DR: It is explained how special structure of the weight matrix and the data matrix can be exploited for efficient cost-function and first-derivative computation, allowing computationally efficient solution methods to be obtained.

745 citations


Posted Content
TL;DR: The algorithm, Regularized Orthogonal Matching Pursuit (ROMP), seeks to provide the benefits of the two major approaches to sparse recovery, and combines the speed and ease of implementation of the greedy methods with the strong guarantees of the convex programming methods.
Abstract: We demonstrate a simple greedy algorithm that can reliably recover a d-dimensional vector v from incomplete and inaccurate measurements x. Here our measurement matrix is an N by d matrix with N much smaller than d. Our algorithm, Regularized Orthogonal Matching Pursuit (ROMP), seeks to close the gap between two major approaches to sparse recovery. It combines the speed and ease of implementation of the greedy methods with the strong guarantees of the convex programming methods. For any measurement matrix that satisfies a Uniform Uncertainty Principle, ROMP recovers a signal with O(n) nonzeros from its inaccurate measurements x in at most n iterations, where each iteration amounts to solving a Least Squares Problem. The noise level of the recovery is proportional to the norm of the error, up to a log factor. In particular, if the error vanishes the reconstruction is exact. This stability result extends naturally to the very accurate recovery of approximately sparse signals.
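
The following is a simplified, illustrative sketch of the ROMP loop described above (correlate with the residual, keep the n largest coordinates, regularize to a group of comparable magnitudes, solve least squares, update the residual). The tolerance, iteration cap, and grouping details are assumptions; this is not the authors' reference implementation.

```python
import numpy as np

def romp(A, x, sparsity):
    """Simplified ROMP sketch: recover a sparse v from x = A v (+ noise)."""
    d = A.shape[1]
    support, residual, v = set(), x.copy(), np.zeros(d)
    for _ in range(sparsity):
        u = np.abs(A.T @ residual)
        top = np.argsort(u)[::-1][:sparsity]          # n largest correlations
        top = top[u[top] > 1e-12]
        if top.size == 0:
            break
        # Regularization: among maximal runs of the sorted candidates with
        # comparable magnitudes (max <= 2 * min), keep the most energetic.
        best, best_energy, i = top[:1], -1.0, 0
        while i < top.size:
            j = i
            while j + 1 < top.size and u[top[i]] <= 2 * u[top[j + 1]]:
                j += 1
            energy = float(np.sum(u[top[i:j + 1]] ** 2))
            if energy > best_energy:
                best, best_energy = top[i:j + 1], energy
            i = j + 1
        support.update(int(b) for b in best)
        idx = sorted(support)
        coef, *_ = np.linalg.lstsq(A[:, idx], x, rcond=None)
        v = np.zeros(d)
        v[idx] = coef
        residual = x - A @ v
    return v

# Usage on a synthetic 3-sparse signal.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 256)) / np.sqrt(60)
v_true = np.zeros(256)
v_true[[3, 70, 200]] = [1.0, -2.0, 1.5]
v_hat = romp(A, A @ v_true, sparsity=3)
```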

730 citations


Posted Content
TL;DR: In this article, an infinite sequence of invariants is defined for any algebraic curve. These invariants can be used to define a formal series that formally satisfies a Hirota equation, yielding a new way of constructing a tau function attached to an algebraic curve.
Abstract: For any arbitrary algebraic curve, we define an infinite sequence of invariants. We study their properties, in particular their variation under a variation of the curve, and their modular properties. We also study their limits when the curve becomes singular. In addition we find that they can be used to define a formal series, which formally satisfies a Hirota equation, and we thus obtain a new way of constructing a tau function attached to an algebraic curve. These invariants are constructed in order to coincide with the topological expansion of a matrix formal integral, when the algebraic curve is chosen as the large N limit of the matrix model's spectral curve. Surprisingly, we find that the same invariants also give the topological expansion of other models, in particular the matrix model with an external field, and the so-called double scaling limit of matrix models, i.e. the (p,q) minimal models of conformal field theory. As an example to illustrate the efficiency of our method, we apply it to the Kontsevich integral, and we give a new and extremely easy proof that the Kontsevich integral depends only on odd times, and that it is a KdV tau-function.

625 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare different procedures for combining fixed-order tree-level matrix-element generators with parton showers and find that although similar results are obtained in all cases, there are important differences.
Abstract: We compare different procedures for combining fixed-order tree-level matrix-element generators with parton showers. We use the case of W-production at the Tevatron and the LHC to compare different implementations of the so-called CKKW and MLM schemes using different matrix-element generators and different parton cascades. We find that although similar results are obtained in all cases, there are important differences.

619 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore the use of multiple regression on distance matrices (MRM), an extension of partial Mantel analysis, in spatial analysis of ecological data, where each matrix contains distances or similarities (in terms of ecological, spatial, or other attributes) between all pairwise combinations of n objects (sample units); tests of statistical significance are performed by permutation.
Abstract: I explore the use of multiple regression on distance matrices (MRM), an extension of partial Mantel analysis, in spatial analysis of ecological data. MRM involves a multiple regression of a response matrix on any number of explanatory matrices, where each matrix contains distances or similarities (in terms of ecological, spatial, or other attributes) between all pair-wise combinations of n objects (sample units); tests of statistical significance are performed by permutation. The method is flexible in terms of the types of data that may be analyzed (counts, presence–absence, continuous, categorical) and the shapes of response curves. MRM offers several advantages over traditional partial Mantel analysis: (1) separating environmental distances into distinct distance matrices allows inferences to be made at the level of individual variables; (2) nonparametric or nonlinear multiple regression methods may be employed; and (3) spatial autocorrelation may be quantified and tested at different spatial scales using a series of lag matrices, each representing a geographic distance class. The MRM lag matrices model may be parameterized to yield very similar inferences regarding spatial autocorrelation as the Mantel correlogram. Unlike the correlogram, however, the lag matrices model may also include environmental distance matrices, so that spatial patterns in species abundance distances (community similarity) may be quantified while controlling for the environmental similarity between sites. Examples of spatial analyses with MRM are presented.
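
A bare-bones numpy sketch of the linear MRM case with a permutation test of R² is given below. Variable names and test data are made up, and the published method also covers nonparametric and nonlinear regression as well as lag matrices, which this sketch omits.

```python
import numpy as np

def mrm(y_dist, x_dists, n_perm=999, seed=0):
    """Regress a response distance matrix on explanatory distance matrices."""
    n = y_dist.shape[0]
    tri = np.tril_indices(n, k=-1)            # unfold the lower triangles
    X = np.column_stack([np.ones(tri[0].size)] + [d[tri] for d in x_dists])

    def fit(yvec):
        beta, *_ = np.linalg.lstsq(X, yvec, rcond=None)
        resid = yvec - X @ beta
        return beta, 1 - resid @ resid / np.sum((yvec - yvec.mean()) ** 2)

    beta, r2 = fit(y_dist[tri])
    rng = np.random.default_rng(seed)
    exceed = 1                                 # include the observed value
    for _ in range(n_perm):
        p = rng.permutation(n)                 # permute objects, not cells
        if fit(y_dist[np.ix_(p, p)][tri])[1] >= r2:
            exceed += 1
    return beta, r2, exceed / (n_perm + 1)     # coefficients, R^2, p-value

# Usage with made-up spatial and environmental distances for 20 sites.
rng = np.random.default_rng(0)
pts = rng.standard_normal((20, 2))
geo = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
e = rng.standard_normal(20)
env = np.abs(e[:, None] - e[None, :])
y = 0.6 * geo + 0.4 * env
beta, r2, pval = mrm(y, [geo, env])
```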

556 citations


Journal ArticleDOI
TL;DR: In the framework introduced, the differential of the complex-valued matrix function is used to identify the derivatives of this function, and matrix differentiation results are derived and summarized in tables.
Abstract: A systematic theory is introduced for finding the derivatives of complex-valued matrix functions with respect to a complex-valued matrix variable and the complex conjugate of this variable. In the framework introduced, the differential of the complex-valued matrix function is used to identify the derivatives of this function. Matrix differentiation results are derived and summarized in tables which can be exploited in a wide range of signal processing related situations.
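
As a worked instance of this kind of calculus, take f(Z) = tr(Z^H A Z) with A Hermitian; under one common convention the derivative with respect to the conjugate variable is df/dZ* = AZ, so the gradients with respect to Re(Z) and Im(Z) are 2 Re(AZ) and 2 Im(AZ). The snippet below checks this numerically by central differences; note that derivative conventions vary across the literature, so treat the layout here as an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                  # Hermitian, so f is real-valued
Z0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

f = lambda Z: np.trace(Z.conj().T @ A @ Z).real
grad_conj = A @ Z0                        # claimed derivative df/dZ*

h = 1e-6
E = np.zeros((n, n))
E[1, 2] = 1.0                             # probe a single entry of Z
fd_re = (f(Z0 + h * E) - f(Z0 - h * E)) / (2 * h)
fd_im = (f(Z0 + 1j * h * E) - f(Z0 - 1j * h * E)) / (2 * h)
assert np.isclose(fd_re, 2 * grad_conj[1, 2].real, atol=1e-5)
assert np.isclose(fd_im, 2 * grad_conj[1, 2].imag, atol=1e-5)
```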

504 citations


Book
27 Nov 2007
TL;DR: An essential, one-of-a-kind book for graduate-level courses in advanced statistical studies including linear and nonlinear models, multivariate analysis, and statistical computing; it also serves as an excellent self-study guide for statistical researchers.
Abstract: A comprehensive, must-have handbook of matrix methods with a unique emphasis on statistical applications. This timely book, A Matrix Handbook for Statisticians, provides a comprehensive, encyclopedic treatment of matrices as they relate to both statistical concepts and methodologies. Written by an experienced authority on matrices and statistical theory, this handbook is organized by topic rather than mathematical developments and includes numerous references to both the theory behind the methods and the applications of the methods. A uniform approach is applied to each chapter, which contains four parts: a definition followed by a list of results; a short list of references to related topics in the book; one or more references to proofs; and references to applications. The use of extensive cross-referencing to topics within the book and external referencing to proofs allows for definitions to be located easily as well as interrelationships among subject areas to be recognized. A Matrix Handbook for Statisticians addresses the need for matrix theory topics to be presented together in one book and features a collection of topics not found elsewhere under one cover. These topics include:

* Complex matrices
* A wide range of special matrices and their properties
* Special products and operators, such as the Kronecker product
* Partitioned and patterned matrices
* Matrix analysis and approximation
* Matrix optimization
* Majorization
* Random vectors and matrices
* Inequalities, such as probabilistic inequalities

Additional topics, such as rank, eigenvalues, determinants, norms, generalized inverses, linear and quadratic equations, differentiation, and Jacobians, are also included. The book assumes a fundamental knowledge of vectors and matrices, maintains a reasonable level of abstraction when appropriate, and provides a comprehensive compendium of linear algebra results with use or potential use in statistics. A Matrix Handbook for Statisticians is an essential, one-of-a-kind book for graduate-level courses in advanced statistical studies including linear and nonlinear models, multivariate analysis, and statistical computing. It also serves as an excellent self-study guide for statistical researchers.

502 citations


01 Jan 2007
TL;DR: This work considers the numerical calculation of several matrix eigenvalue problems which require some manipulation before the standard algorithms may be used, and studies several eigenvalue problems which arise in least squares.
Abstract: We consider the numerical calculation of several matrix eigenvalue problems which require some manipulation before the standard algorithms may be used. This includes finding the stationary values of a quadratic form subject to linear constraints and determining the eigenvalues of a matrix which is modified by a matrix of rank one. We also consider several inverse eigenvalue problems. This includes the problem of determining the coefficients for the Gauss–Radau and Gauss–Lobatto quadrature rules. In addition, we study several eigenvalue problems which arise in least squares.
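
One of the problems mentioned, finding the stationary values of a quadratic form subject to linear constraints, can be handled by restricting the form to the constraint nullspace and solving an ordinary symmetric eigenproblem. The snippet below is an illustration of that standard reduction with made-up matrices; it is not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import null_space

# Stationary values of x^T A x with x^T x = 1 subject to C^T x = 0.
rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2                      # symmetric quadratic form
C = rng.standard_normal((6, 2))        # two linear constraints

Z = null_space(C.T)                    # orthonormal basis of {x : C^T x = 0}
w, Y = np.linalg.eigh(Z.T @ A @ Z)     # stationary values of the form
X = Z @ Y                              # corresponding constrained unit vectors
```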

Journal ArticleDOI
TL;DR: This paper presents a substantially generalized co-clustering framework wherein any Bregman divergence can be used in the objective function, and various conditional expectation based constraints can be considered based on the statistics that need to be preserved.
Abstract: Co-clustering, or simultaneous clustering of rows and columns of a two-dimensional data matrix, is rapidly becoming a powerful data analysis technique. Co-clustering has enjoyed wide success in varied application domains such as text clustering, gene-microarray analysis, natural language processing and image, speech and video analysis. In this paper, we introduce a partitional co-clustering formulation that is driven by the search for a good matrix approximation---every co-clustering is associated with an approximation of the original data matrix and the quality of co-clustering is determined by the approximation error. We allow the approximation error to be measured using a large class of loss functions called Bregman divergences that include squared Euclidean distance and KL-divergence as special cases. In addition, we permit multiple structurally different co-clustering schemes that preserve various linear statistics of the original data matrix. To accomplish the above tasks, we introduce a new minimum Bregman information (MBI) principle that simultaneously generalizes the maximum entropy and standard least squares principles, and leads to a matrix approximation that is optimal among all generalized additive models in a certain natural parameter space. Analysis based on this principle yields an elegant meta algorithm, special cases of which include most previously known alternate minimization based clustering algorithms such as k-means and co-clustering algorithms such as information theoretic (Dhillon et al., 2003b) and minimum sum-squared residue co-clustering (Cho et al., 2004). To demonstrate the generality and flexibility of our co-clustering framework, we provide examples and empirical evidence on a variety of problem domains and also describe novel co-clustering applications such as missing value prediction and compression of categorical data matrices.

Journal ArticleDOI
TL;DR: A generalized version of the nonlinear small gain theorem is provided for the case of more than two coupled input-to-state stable systems, with interpretations of the small gain condition in special cases, including via an associated lower-dimensional discrete time dynamical system.
Abstract: We provide a generalized version of the nonlinear small gain theorem for the case of more than two coupled input-to-state stable systems. For this result the interconnection gains are described in a nonlinear gain matrix, and the small gain condition requires bounds on the image of this gain matrix. The condition may be interpreted as a nonlinear generalization of the requirement that the spectral radius of the gain matrix is less than 1. We give some interpretations of the condition in special cases covering two subsystems, linear gains, linear systems and an associated lower-dimensional discrete time dynamical system.
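
For the linear-gain special case mentioned above, the small gain condition reduces to the spectral radius of the gain matrix being less than 1, which is easy to check numerically. The gain matrix below is hypothetical, chosen only to illustrate the check.

```python
import numpy as np

# Gains between three interconnected subsystems (hypothetical values);
# the zero diagonal reflects that a subsystem has no self-gain.
Gamma = np.array([[0.0, 0.4, 0.1],
                  [0.3, 0.0, 0.2],
                  [0.2, 0.5, 0.0]])

rho = max(abs(np.linalg.eigvals(Gamma)))
print(f"spectral radius = {rho:.3f}; small gain condition holds: {rho < 1}")
```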

Journal ArticleDOI
TL;DR: In this article, the integrable structure of spin chain models with centrally extended su(2|2) and psu(2, 2|4) symmetry is investigated.
Abstract: We investigate the integrable structure of spin chain models with centrally extended su(2|2) and psu(2, 2|4) symmetry. These chains have their origin in the planar anti-de Sitter/conformal field theory correspondence, but they also contain the one-dimensional Hubbard model as a special case. We begin with an overview of the representation theory of centrally extended su(2|2). These results are applied in the construction and investigation of an interesting S-matrix with su(2|2) symmetry. In particular, they enable a remarkably simple proof of the Yang-Baxter relation. We also show the equivalence of the S-matrix to Shastry's R-matrix and thus uncover a hidden supersymmetry in the integrable structure of the Hubbard model. We then construct eigenvalues of the corresponding transfer matrix in order to formulate an analytic Bethe ansatz. Finally, the form of transfer matrix eigenvalues for models with psu(2, 2|4) symmetry is sketched.

Journal ArticleDOI
TL;DR: On the basis of the arithmetic aggregation operator and the hybrid aggregation operator, an approach to group decision making with interval-valued intuitionistic judgment matrices is given, and some desirable properties of these operators are investigated in detail.

Journal ArticleDOI
TL;DR: The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems; the proofs use methods of probability in Banach spaces, in particular the law of large numbers for operator-valued random variables.
Abstract: We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where $r = \|A\|_F^2 / \|A\|_2^2$ is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables.
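
The numerical rank used here is cheap to compute once the singular values are available; a small sketch, assuming a dense SVD is affordable:

```python
import numpy as np

def numerical_rank(A):
    # r = ||A||_F^2 / ||A||_2^2, a stable relaxation of rank(A).
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** 2) / s[0] ** 2

A = np.random.default_rng(2).standard_normal((50, 40))
print(numerical_rank(A))                  # always <= rank(A)
```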

Journal ArticleDOI
TL;DR: In this article, the leading correction to the bipartite entanglement entropy at large sub-system size, in integrable quantum field theories with diagonal scattering matrices, is computed.
Abstract: In this paper we compute the leading correction to the bipartite entanglement entropy at large sub-system size, in integrable quantum field theories with diagonal scattering matrices. We find a remarkably universal result, depending only on the particle spectrum of the theory and not on the details of the scattering matrix. We employ the “replica trick” whereby the entropy is obtained as the derivative with respect to n of the trace of the nth power of the reduced density matrix of the sub-system, evaluated at n=1. The main novelty of our work is the introduction of a particular type of twist fields in quantum field theory that are naturally related to branch points in an n-sheeted Riemann surface. Their two-point function directly gives the scaling limit of the trace of the nth power of the reduced density matrix. Taking advantage of integrability, we use the expansion of this two-point function in terms of form factors of the twist fields, in order to evaluate it at large distances in the two-particle approximation. Although this is a well-known technique, the new geometry of the problem implies a modification of the form factor equations satisfied by standard local fields of integrable quantum field theory. We derive the new form factor equations and provide solutions, which we specialize both to the Ising and sinh-Gordon models.

Posted Content
TL;DR: These two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist.
Abstract: Many data analysis applications deal with large matrices and involve approximating the matrix using a small number of ``components.'' Typically, these components are linear combinations of the rows and columns of the matrix, and are thus difficult to interpret in terms of the original features of the input data. In this paper, we propose and study matrix approximations that are explicitly expressed in terms of a small number of columns and/or rows of the data matrix, and thereby more amenable to interpretation in terms of the original data. Our main algorithmic results are two randomized algorithms which take as input an $m \times n$ matrix $A$ and a rank parameter $k$. In our first algorithm, $C$ is chosen, and we let $A'=CC^+A$, where $C^+$ is the Moore-Penrose generalized inverse of $C$. In our second algorithm $C$, $U$, $R$ are chosen, and we let $A'=CUR$. ($C$ and $R$ are matrices that consist of actual columns and rows, respectively, of $A$, and $U$ is a generalized inverse of their intersection.) For each algorithm, we show that with probability at least $1-\delta$: $$ ||A-A'||_F \leq (1+\epsilon) ||A-A_k||_F, $$ where $A_k$ is the ``best'' rank-$k$ approximation provided by truncating the singular value decomposition (SVD) of $A$. The number of columns of $C$ and rows of $R$ is a low-degree polynomial in $k$, $1/\epsilon$, and $\log(1/\delta)$. Our two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist. Both of our algorithms are simple, they take time of the order needed to approximately compute the top $k$ singular vectors of $A$, and they use a novel, intuitive sampling method called ``subspace sampling.''
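
A rough sketch of a CUR-style decomposition with leverage-score ("subspace") sampling is shown below. The sampling probabilities and the middle matrix are common textbook choices, not necessarily the exact ones analyzed in the paper, which takes a generalized inverse of the intersection of $C$ and $R$; sizes and data are illustrative.

```python
import numpy as np

def cur_approx(A, k, c, r, seed=0):
    """CUR sketch: sample actual columns and rows by leverage scores."""
    rng = np.random.default_rng(seed)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    p_col = np.sum(Vt[:k] ** 2, axis=0) / k        # column leverage scores
    p_row = np.sum(U[:, :k] ** 2, axis=1) / k      # row leverage scores
    cols = rng.choice(A.shape[1], size=c, replace=False, p=p_col)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=p_row)
    C, R = A[:, cols], A[rows, :]
    # One common choice of middle matrix; the paper instead uses a
    # generalized inverse of the intersection of C and R.
    return C, np.linalg.pinv(C) @ A @ np.linalg.pinv(R), R

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 12)) @ rng.standard_normal((12, 80))
C, Umid, R = cur_approx(A, k=12, c=30, r=30)
rel_err = np.linalg.norm(A - C @ Umid @ R) / np.linalg.norm(A)
```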

Journal ArticleDOI
TL;DR: A new projection method to solve large-scale continuous-time Lyapunov matrix equations, which projects the problem onto an approximation space generated as a combination of Krylov subspaces in A and A^{-1} and solves the reduced problem by a direct scheme based on matrix factorizations.
Abstract: In this paper we propose a new projection method to solve large-scale continuous-time Lyapunov matrix equations. The new approach projects the problem onto a much smaller approximation space, generated as a combination of Krylov subspaces in $A$ and $A^{-1}$. The reduced problem is then solved by means of a direct Lyapunov scheme based on matrix factorizations. The reported numerical results show the competitiveness of the new method, compared to a state-of-the-art approach based on the factorized alternating direction implicit iteration.
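
A bare-bones sketch of such a projection scheme is given below: grow a basis from directions in both $A$ and $A^{-1}$, project, and solve the small Lyapunov equation directly. Blocking, stopping criteria, and the paper's residual tests are all omitted, and the matrix sizes are illustrative; this is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, solve_continuous_lyapunov

def extended_krylov_lyapunov(A, b, m=8):
    """Approximate X solving A X + X A^T + b b^T = 0 by projection."""
    lu = lu_factor(A)
    V = [b / np.linalg.norm(b)]
    for _ in range(m):
        for w in (A @ V[-1], lu_solve(lu, V[-1])):  # directions in A, A^{-1}
            for v in V:                             # one Gram-Schmidt pass
                w = w - (v @ w) * v
            nrm = np.linalg.norm(w)
            if nrm > 1e-12:
                V.append(w / nrm)
    Vm = np.column_stack(V)
    Am, bm = Vm.T @ A @ Vm, Vm.T @ b                # projected small problem
    Ym = solve_continuous_lyapunov(Am, -np.outer(bm, bm))
    return Vm, Ym                                   # X is approx Vm @ Ym @ Vm.T

rng = np.random.default_rng(4)
n = 200
A = -2.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)  # stable
b = rng.standard_normal(n)
Vm, Ym = extended_krylov_lyapunov(A, b)
X = Vm @ Ym @ Vm.T
residual = np.linalg.norm(A @ X + X @ A.T + np.outer(b, b))
```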

Proceedings ArticleDOI
26 Aug 2007
TL;DR: It is shown that Toeplitz-structured matrices with entries drawn independently from the same distributions are also sufficient to recover x from y with high probability, and the performance of such matrices is compared with that of fully independent and identically distributed ones.
Abstract: The problem of recovering a sparse signal x ∈ R^n from a relatively small number of its observations of the form y = Ax ∈ R^k, where A is a known matrix and k ≪ n, has recently received a lot of attention under the rubric of compressed sensing (CS) and has applications in many areas of signal processing such as data compression, image processing, dimensionality reduction, etc. Recent work has established that if A is a random matrix with entries drawn independently from certain probability distributions then exact recovery of x from these observations can be guaranteed with high probability. In this paper, we show that Toeplitz-structured matrices with entries drawn independently from the same distributions are also sufficient to recover x from y with high probability, and we compare the performance of such matrices with that of fully independent and identically distributed ones. The use of Toeplitz matrices in CS applications has several potential advantages: (i) they require the generation of only O(n) independent random variables; (ii) multiplication with Toeplitz matrices can be efficiently implemented using the fast Fourier transform, resulting in faster acquisition and reconstruction algorithms; and (iii) Toeplitz-structured matrices arise naturally in certain application areas such as system identification.
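
Points (i) and (ii) are easy to make concrete: a Toeplitz matrix is determined by O(n) random entries, and its matrix-vector product can be computed by embedding it in a circulant matrix and using the FFT. A short sketch with illustrative sizes and a Bernoulli (±1) distribution:

```python
import numpy as np
from scipy.linalg import toeplitz

n, k = 256, 64
rng = np.random.default_rng(3)
t = rng.choice([-1.0, 1.0], size=2 * n - 1)     # only O(n) random entries
T = toeplitz(t[n - 1:], t[n - 1::-1])           # n x n Toeplitz matrix
A = T[:k, :]                                    # keep k rows as the CS matrix

def toeplitz_matvec(t, x):
    # Embed T in a (2n-1)-point circulant matrix and multiply via FFT.
    n = x.size
    c = np.concatenate([t[n - 1:], t[:n - 1]])  # first column of the circulant
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n - 1))[:n].real

x = rng.standard_normal(n)
assert np.allclose(T @ x, toeplitz_matvec(t, x))
```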

Posted Content
TL;DR: This article presents a novel numerical abstract domain for static analysis by abstract interpretation and gives an efficient representation based on Difference-Bound Matrices with O(n^2) memory cost, where n is the number of variables, and graph-based algorithms for all common abstract operators, with O(n^3) time cost.
Abstract: This article presents a new numerical abstract domain for static analysis by abstract interpretation. It extends a former numerical abstract domain based on Difference-Bound Matrices and allows us to represent invariants of the form ±x ± y ≤ c, where x and y are program variables and c is a real constant. We focus on giving an efficient representation based on Difference-Bound Matrices (O(n^2) memory cost, where n is the number of variables) and graph-based algorithms for all common abstract operators (O(n^3) time cost). This includes a normal form algorithm to test equivalence of representation and a widening operator to compute least fixpoint approximations.
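
To make the representation concrete, the sketch below closes a toy Difference-Bound Matrix with the classical Floyd-Warshall recurrence; the octagonal constraints ±x ± y ≤ c of the paper extend this idea. The sign convention (entry m[i, j] bounds x_j - x_i) is one common choice; conventions vary.

```python
import numpy as np

INF = np.inf

def dbm_closure(m):
    """Floyd-Warshall closure: tighten every bound x_j - x_i <= m[i, j]."""
    m = m.copy()
    n = m.shape[0]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                m[i, j] = min(m[i, j], m[i, k] + m[k, j])
    return m               # a negative diagonal entry would mean 'empty'

# x1 - x0 <= 2 and x2 - x1 <= 3 entail x2 - x0 <= 5.
m = np.full((3, 3), INF)
np.fill_diagonal(m, 0.0)
m[0, 1], m[1, 2] = 2.0, 3.0
assert dbm_closure(m)[0, 2] == 5.0
```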

Journal ArticleDOI
TL;DR: Applying diagonalization of square matrices to the analysis of the structure of γ and G matrices gives greater insight into the form and strength of nonlinear selection, and the availability of genetic variance for multiple traits.
Abstract: Two symmetric matrices underlie our understanding of microevolutionary change. The first is the matrix of nonlinear selection gradients (γ) which describes the individual fitness surface. The second is the genetic variance-covariance matrix (G) that influences the multivariate response to selection. A common approach to the empirical analysis of these matrices is the element-by-element testing of significance, and subsequent biological interpretation of pattern based on these univariate and bivariate parameters. Here, I show why this approach is likely to misrepresent the genetic basis of quantitative traits, and the selection acting on them in many cases. Diagonalization of square matrices is a fundamental aspect of many of the multivariate statistical techniques used by biologists. Applying this, and other related approaches, to the analysis of the structure of γ and G matrices, gives greater insight into the form and strength of nonlinear selection, and the availability of genetic variance for multiple traits.
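
The diagonalization advocated here is a standard symmetric eigendecomposition; a small sketch on a hypothetical 3-trait G matrix (the values are made up for illustration):

```python
import numpy as np

G = np.array([[1.0, 0.9, 0.8],
              [0.9, 1.0, 0.9],
              [0.8, 0.9, 1.0]])            # hypothetical 3-trait G matrix

evals, evecs = np.linalg.eigh(G)           # symmetric, so use eigh
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# evecs[:, 0] is the trait combination with the most genetic variance;
# near-zero eigenvalues flag combinations with little variance available
# for a response to selection.
print(evals)
print(evecs[:, 0])
```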

Journal ArticleDOI
TL;DR: The results show that even with some deviation from the optimal design, the LOS MIMO case outperforms the i.i.d. Rayleigh case in terms of MI.
Abstract: This paper describes a technique for realizing a high-rank channel matrix in a line-of-sight (LOS) multiple-input multiple-output (MIMO) transmission scenario. This is beneficial for systems which are unable to make use of the originally derived MIMO gain given by independent and identically distributed (i.i.d.) flat Rayleigh fading subchannels. The technique is based on optimization of antenna placement in uniform linear arrays with respect to mutual information (MI). By introducing a new and more general 3-D geometrical model than that applied in earlier work, additional insight into the optimal design parameters is gained. We also perform a novel analysis of the sensitivity of the optimal design parameters, and derive analytical expressions for the eigenvalues of the pure LOS channel matrix which are valid also when allowing for non-optimal design. Furthermore, we investigate the approximations introduced in the derivations, in order to reveal when the results are applicable. The LOS matrix is employed in a Ricean fading channel model, and performance is evaluated with respect to the average MI and the MI cumulative distribution function. Our results show that even with some deviation from the optimal design, the LOS MIMO case outperforms the i.i.d. Rayleigh case in terms of MI.

Journal ArticleDOI
TL;DR: Two fourth-order cumulant-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis based on a simultaneous matrix diagonalization and a simultaneous off-diagonalization are studied.
Abstract: In this paper we study two fourth-order cumulant-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis. The first method is based on a simultaneous matrix diagonalization. The second is based on a simultaneous off-diagonalization. The number of sources that can be allowed is roughly quadratic in the number of observations. For both methods, explicit expressions for the maximum number of sources are given. Simulations illustrate the performance of the techniques.

Book ChapterDOI
TL;DR: In this chapter, the use of linear matrix inequalities (LMIs) in control is discussed, along with tools for transforming matrix inequality problems into a suitable LMI format for solution.
Abstract: This chapter gives an introduction to the use of linear matrix inequalities (LMIs) in control. LMI problems are defined, and tools are described for transforming matrix inequality problems into a suitable LMI format for solution. Several examples explain the use of these fundamental tools.

Book
01 Jan 2007
TL;DR: In this book, the author introduces linear algebra concepts and matrix decompositions for data mining and pattern recognition, covering vectors and matrices, linear systems and least squares, and the QR, singular value, and tensor decompositions.
Abstract: Preface
Part I. Linear Algebra Concepts and Matrix Decompositions:
1. Vectors and matrices in data mining and pattern recognition
2. Vectors and matrices
3. Linear systems and least squares
4. Orthogonality
5. QR decomposition
6. Singular value decomposition
7. Reduced rank least squares models
8. Tensor decomposition
9. Clustering and non-negative matrix factorization
Part II. Data Mining Applications:
10. Classification of handwritten digits
11. Text mining
12. Page ranking for a Web search engine
13. Automatic key word and key sentence extraction
14. Face recognition using tensor SVD
Part III. Computing the Matrix Decompositions:
15. Computing eigenvalues and singular values
Bibliography
Index

Journal ArticleDOI
TL;DR: It is shown that the increment-based computational approach gives locally quasi-optimal low-rank approximations, and that the resulting differential equations are well suited for numerical integration.
Abstract: For the low-rank approximation of time-dependent data matrices and of solutions to matrix differential equations, an increment-based computational approach is proposed and analyzed. In this method, the derivative is projected onto the tangent space of the manifold of rank-$r$ matrices at the current approximation. With an appropriate decomposition of rank-$r$ matrices and their tangent matrices, this yields nonlinear differential equations that are well suited for numerical integration. The error analysis compares the result with the pointwise best approximation in the Frobenius norm. It is shown that the approach gives locally quasi-optimal low-rank approximations. Numerical experiments illustrate the theoretical results.
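
The core projection step can be written compactly: for an approximation X = U S V^T of rank r with orthonormal U and V, a matrix D is projected onto the tangent space as P(D) = U U^T D + D V V^T - U U^T D V V^T. The sketch below shows a single projected Euler step under that standard formula; the paper's actual integrator, factored updates, and step-size control are omitted, and all data are made up.

```python
import numpy as np

def tangent_project(U, V, D):
    """Project D onto the tangent space at X = U S V^T (U, V orthonormal)."""
    UtD = U.T @ D
    return U @ UtD + (D - U @ UtD) @ (V @ V.T)

# One projected Euler step on made-up data (step size h is illustrative).
rng = np.random.default_rng(4)
m, n, r = 30, 20, 4
A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U, V, S = U[:, :r], Vt[:r].T, np.diag(s[:r])
A_dot = rng.standard_normal((m, n))          # derivative of the data matrix
h = 0.01
X_new = U @ S @ V.T + h * tangent_project(U, V, A_dot)
# Re-factorize X_new to rank r (e.g., truncated SVD) before the next step.
```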

Book
16 Jul 2007
TL;DR: This book provides a discussion of parametric models of observations, the precision and accuracy of estimators, and numerical methods for parameter estimation.
Abstract: Preface
1. Introduction
2. Parametric Models of Observations
3. Distributions of Observations
4. Precision and Accuracy
5. Precise and Accurate Estimation
6. Numerical Methods for Parameter Estimation
7. Solutions or Partial Solutions to Problems
Appendix A: Statistical Results
Appendix B: Vectors and Matrices
Appendix C: Positive Semidefinite and Positive Definite Matrices
Appendix D: Vector and Matrix Differentiation
References
Topic Index

Journal ArticleDOI
TL;DR: The derivation avoids both the overcounting ambiguities and the single-shell approximation for the equilibrium density matrix prevalent in current methods, ensuring that relevant sum rules hold rigorously and spectral features at energies below the temperature can be described accurately.
Abstract: We show how spectral functions for quantum impurity models can be calculated very accurately using a complete set of discarded numerical renormalization group eigenstates, recently introduced by Anders and Schiller. The only approximation is to judiciously exploit energy scale separation. Our derivation avoids both the overcounting ambiguities and the single-shell approximation for the equilibrium density matrix prevalent in current methods, ensuring that relevant sum rules hold rigorously and spectral features at energies below the temperature can be described accurately.

Journal ArticleDOI
TL;DR: A new method for calculating the missing elements of an incomplete matrix of pairwise comparison values for a decision problem is proposed; it is shown that the optimal values are obtained by solving a linear system, and uniqueness of the solution is proved under general assumptions.