Journal ArticleDOI

A Generalized inverse for matrices

01 Jul 1955 - Vol. 51, Iss: 03, pp 406-413
TL;DR: A generalization of the inverse of a non-singular matrix is described in this paper as the unique solution of a certain set of equations; this generalized inverse is used for solving linear matrix equations and for finding an expression for the principal idempotent elements of a matrix.
Abstract: This paper describes a generalization of the inverse of a non-singular matrix, as the unique solution of a certain set of equations. This generalized inverse exists for any (possibly rectangular) matrix whatsoever with complex elements. It is used here for solving linear matrix equations, and among other applications for finding an expression for the principal idempotent elements of a matrix. Also a new type of spectral decomposition is given.
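
For reference, the defining conditions are the four equations now standard in the literature (stated here in modern notation for orientation; the numbering used in the paper's own text, quoted in the reference contexts below, may differ):

    \[
      A X A = A, \qquad X A X = X, \qquad (A X)^{*} = A X, \qquad (X A)^{*} = X A,
    \]

where $X$ is the generalized inverse of the (possibly rectangular, complex) matrix $A$ and ${}^{*}$ denotes the conjugate transpose; the unique solution $X$ is what is now called the Moore-Penrose inverse $A^{\dagger}$.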

Citations
Journal ArticleDOI
TL;DR: Recent progress on link prediction algorithms is summarized, emphasizing the contributions from physical perspectives and approaches, such as the random-walk-based methods and the maximum likelihood methods.
Abstract: Link prediction in complex networks has attracted increasing attention from both the physical and computer science communities. The algorithms can be used to extract missing information, identify spurious interactions, evaluate network evolving mechanisms, and so on. This article summarizes recent progress on link prediction algorithms, emphasizing the contributions from physical perspectives and approaches, such as the random-walk-based methods and the maximum likelihood methods. We also introduce three typical applications: reconstruction of networks, evaluation of network evolving mechanisms, and classification of partially labeled networks. Finally, we introduce some applications and outline future challenges of link prediction algorithms.
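
To make the similarity-based family of predictors surveyed above concrete, here is a minimal sketch (not code from the article) that scores the missing links of a toy graph with the common-neighbours index, one of the simplest such predictors; the graph and node names are purely illustrative.

    # Score non-adjacent node pairs by the number of common neighbours;
    # higher scores suggest more likely missing links.
    from itertools import combinations

    def common_neighbour_scores(adj):
        """adj: dict mapping each node to the set of its neighbours."""
        scores = {}
        for u, v in combinations(adj, 2):
            if v not in adj[u]:                      # only score currently absent links
                scores[(u, v)] = len(adj[u] & adj[v])
        return scores

    toy = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    print(sorted(common_neighbour_scores(toy).items(), key=lambda kv: -kv[1]))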

2,530 citations

Book
01 Jan 1987
TL;DR: Iterative Methods for Optimization does more than cover traditional gradient-based optimization: it is the first book to treat sampling methods, including the Hooke & Jeeves, implicit filtering, MDS, and Nelder & Mead schemes, in a unified way.
Abstract: This book presents a carefully selected group of methods for unconstrained and bound constrained optimization problems and analyzes them in depth both theoretically and algorithmically. It focuses on clarity in algorithmic description and analysis rather than generality, and while it provides pointers to the literature for the most general theoretical results and robust software, the author thinks it is more important that readers have a complete understanding of special cases that convey essential ideas. A companion to Kelley's book, Iterative Methods for Linear and Nonlinear Equations (SIAM, 1995), this book contains many exercises and examples and can be used as a text, a tutorial for self-study, or a reference. Iterative Methods for Optimization does more than cover traditional gradient-based optimization: it is the first book to treat sampling methods, including the Hooke & Jeeves, implicit filtering, MDS, and Nelder & Mead schemes, in a unified way.
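
As a usage illustration of one of the sampling schemes the book treats, the sketch below minimizes the Rosenbrock function with the Nelder-Mead simplex method via SciPy's implementation; it is a stand-in example, not code from the book.

    # Derivative-free minimization of the Rosenbrock function with Nelder-Mead.
    import numpy as np
    from scipy.optimize import minimize

    def rosenbrock(x):
        return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

    result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
    print(result.x)   # approaches the minimizer [1, 1]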

1,980 citations


Cites background from "A Generalized inverse for matrices"

  • ...A† is called the Moore–Penrose inverse [49], [189], [212]....

Journal ArticleDOI
TL;DR: In this article, a numerically stable and fairly fast scheme is described to compute the unitary matrices U and V which transform a given matrix A into a diagonal form $\Sigma = U^*AV$, thus exhibiting A's singular values on $\Sigma$'s diagonal.
Abstract: A numerically stable and fairly fast scheme is described to compute the unitary matrices U and V which transform a given matrix A into a diagonal form $\Sigma = U^*AV$, thus exhibiting A's singular values on $\Sigma$'s diagonal. The scheme first transforms A to a bidiagonal matrix J, then diagonalizes J. The scheme described here is complicated but does not suffer from the computational difficulties which occasionally afflict some previously known methods. Some applications are mentioned, in particular the use of the pseudo-inverse $A^I = V\Sigma^I U^*$ to solve least squares problems in a way which dampens spurious oscillation and cancellation.
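
A minimal sketch of the least-squares application mentioned at the end of the abstract: form the pseudo-inverse from the SVD and discard negligible singular values to dampen cancellation. NumPy's LAPACK-backed SVD stands in for the bidiagonalization scheme of the paper, and the rcond threshold is an illustrative choice.

    import numpy as np

    def pinv_solve(A, b, rcond=1e-12):
        U, s, Vh = np.linalg.svd(A, full_matrices=False)      # A = U diag(s) Vh
        s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)    # zero out tiny singular values
        return Vh.T.conj() @ (s_inv * (U.T.conj() @ b))        # x = V Sigma^+ U* b

    A = np.array([[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]])
    b = np.array([2.0, 2.0001, 1.9999])
    print(pinv_solve(A, b))   # agrees with np.linalg.lstsq(A, b, rcond=None)[0]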

1,683 citations

Journal ArticleDOI
TL;DR: This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks, and introduces new classes of smoothness functionals that lead to different classes of basis functions.
Abstract: We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well-known radial basis function approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.
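
For concreteness, here is a minimal sketch of the radial basis function subclass of regularization networks described above: one Gaussian unit per data point, with the coefficients obtained from a single regularized linear solve. The kernel width and the regularization parameter are illustrative choices, not values from the paper.

    import numpy as np

    def fit_rbf_network(X, y, width=0.5, lam=1e-3):
        sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq_dists / (2.0 * width ** 2))               # one hidden unit per sample
        coeffs = np.linalg.solve(K + lam * np.eye(len(X)), y)    # regularized fit
        def predict(Xnew):
            d = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-d / (2.0 * width ** 2)) @ coeffs
        return predict

    X = np.linspace(0.0, 1.0, 20)[:, None]
    y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * np.random.randn(20)
    predict = fit_rbf_network(X, y)
    print(predict(np.array([[0.25], [0.75]])))   # roughly +1 and -1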

1,408 citations

Journal ArticleDOI
TL;DR: An MNI‐to‐Talairach (MTT) transform to correct for bias between MNI and Talairach coordinates was formulated using a best‐fit analysis in one hundred high‐resolution 3‐D MR brain images.
Abstract: MNI coordinates determined using SPM2 and FSL/FLIRT with the ICBM-152 template were compared to Talairach coordinates determined using a landmark-based Talairach registration method (TAL). Analysis revealed a clear-cut bias in reference frames (origin, orientation) and scaling (brain size). Accordingly, ICBM-152 fitted brains were consistently larger, oriented more nose down, and translated slightly down relative to TAL fitted brains. Whole brain analysis of MNI/Talairach coordinate disparity revealed an ellipsoidal pattern with disparity ranging from zero at a point deep within the left hemisphere to greater than 1 cm for some anterior brain areas. MNI/Talairach coordinate disparity was generally less for brains fitted using FSL. The mni2tal transform generally reduced MNI/Talairach coordinate disparity for inferior brain areas but increased disparity for anterior, posterior, and superior areas. Coordinate disparity patterns differed for brain templates (MNI-305, ICBM-152) using the same fitting method (FSL/FLIRT) and for different fitting methods (SPM2, FSL/FLIRT) using the same template (ICBM-152). An MNI-to-Talairach (MTT) transform to correct for bias between MNI and Talairach coordinates was formulated using a best-fit analysis in one hundred high-resolution 3-D MR brain images. MTT transforms optimized for SPM2 and FSL were shown to reduce group mean MNI/Talairach coordinate disparity from 5-13 mm to 1-2 mm for both deep and superficial brain sites. MTT transforms provide a validated means to convert MNI coordinates to Talairach-compatible coordinates for studies using either SPM2 or FSL/FLIRT with the ICBM-152 template.
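
For orientation, a transform of this kind is applied as a single affine map on homogeneous [x, y, z, 1] coordinates. The sketch below shows only the mechanics with a placeholder matrix; it does NOT contain the published MTT coefficients, which have to be taken from the paper or its accompanying software.

    import numpy as np

    # Placeholder affine (identity plus an illustrative 2 mm shift), not the real MTT matrix.
    MTT_PLACEHOLDER = np.array([
        [1.0, 0.0, 0.0,  0.0],
        [0.0, 1.0, 0.0, -2.0],
        [0.0, 0.0, 1.0,  0.0],
        [0.0, 0.0, 0.0,  1.0],
    ])

    def mni_to_tal(xyz, affine=MTT_PLACEHOLDER):
        homogeneous = np.append(np.asarray(xyz, dtype=float), 1.0)
        return (affine @ homogeneous)[:3]

    print(mni_to_tal([10.0, -52.0, 34.0]))   # -> [10., -54., 34.] with the placeholder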

1,293 citations

References
Journal ArticleDOI

1,245 citations

Book
01 Jan 1947
TL;DR: Finite Dimensional Vector Spaces by Halmos, as discussed in this paper, was the first formal introduction to linear algebra, a branch of modern mathematics that studies vectors and vector spaces.
Abstract: As a newly minted Ph.D., Paul Halmos came to the Institute for Advanced Study in 1938--even though he did not have a fellowship--to study among the many giants of mathematics who had recently joined the faculty. He eventually became John von Neumann's research assistant, and it was one of von Neumann's inspiring lectures that spurred Halmos to write Finite Dimensional Vector Spaces. The book brought him instant fame as an expositor of mathematics. Finite Dimensional Vector Spaces combines algebra and geometry to discuss the three-dimensional area where vectors can be plotted. The book broke ground as the first formal introduction to linear algebra, a branch of modern mathematics that studies vectors and vector spaces. The book continues to exert its influence sixty years after publication, as linear algebra is now widely used, not only in mathematics but also in the natural and social sciences, for studying such subjects as weather problems, traffic flow, electronic circuits, and population genetics. In 1983 Halmos received the coveted Steele Prize for exposition from the American Mathematical Society for "his many graduate texts in mathematics dealing with finite dimensional vector spaces, measure theory, ergodic theory, and Hilbert space."

1,238 citations

Journal ArticleDOI
TL;DR: In this paper, it is shown how a special type of matrices, which will be defined below, can be used as an expedient which simplifies the treatment of problems associated with the method of least squares.
Abstract: It is generally known that many problems dealt with by means of the method of least squares often lead to extremely intricate functional relations. In such cases it may therefore be very difficult to give a concise and readily intelligible representation of the mathematical process in question. On the whole, the purpose in view can be attained by the aid of classical procedures, but the manner of representation frequently gives little orientation. Nowadays, however, we have the calculus of matrices, which is exceedingly well adapted to the solution of problems by means of the method of least squares. Some results obtained by the application of the calculus of matrices to these problems are described in the present paper. H. JANSEN 1930 seems to have been the first scientist who used the calculus of matrices for representing the elementary problems involved in the method of least squares. He developed a theory of the adjustment of elements and the adjustment of correlates by direct rewriting in the form of ordinary matrices according to CAYLEY. Among other authors, we may mention [.~m.t.~ 1933, MARCANTONI 1944, and REICHENEDER (Indicikalkfd) 1942. In what follows, it will be shown how a special type of matrices, which will be defined below, can be used as an expedient which simplifies the treatment of problems associated with the method of least squares.
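
As a one-line illustration of the connection described here (standard material, not reproduced from this paper): for an overdetermined system $Ax \approx b$ with $A$ of full column rank, the least-squares solution can be written entirely in matrix form as

    \[
      \hat{x} = (A^{*}A)^{-1}A^{*}b = A^{+}b,
    \]

where $A^{+}$ is the generalized inverse studied in the citing paper; when $A$ is rank-deficient the first expression no longer applies, but $A^{+}b$ still gives the minimum-norm least-squares solution.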

108 citations

Journal ArticleDOI

10 citations


"A Generalized inverse for matrices" refers background in this paper

  • ...Thus (5) follows, and substituting (5) in (7) we get (4)....

  • ...Proof. I first show that equations (4) and (5) are equivalent to the single equation...

  • ...A* given in Theorem 1. It seems desirable, however, to show that it is also a direct consequence of (3), (4), (5) and (6)....

  • ...Equation (7) follows from (4) and (5), since it is merely (5) substituted in (4)....
