Topic

Triangular matrix

About: Triangular matrix is a research topic. Over the lifetime, 3,084 publications have been published within this topic, receiving 36,062 citations.


Papers
Journal ArticleDOI
TL;DR: The improved computation presented in this paper is aimed at optimizing the neural network learning process with the Levenberg-Marquardt (LM) algorithm; the resulting memory and time savings are especially pronounced when training with large numbers of patterns.
Abstract: The improved computation presented in this paper is aimed at optimizing the neural network learning process with the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without forming or storing the Jacobian matrix, which removes the memory limitation of LM training. Because the quasi-Hessian matrix is symmetric, only the elements of its upper (or lower) triangular part need to be calculated. Training speed is therefore improved significantly, both because a smaller array is kept in memory and because fewer operations are needed to form the quasi-Hessian matrix. The memory and time savings are especially pronounced when training with large numbers of patterns.

495 citations
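The core trick described above, accumulating the quasi-Hessian Q = J^T J and gradient g = J^T e one training pattern at a time so the full Jacobian is never built, while exploiting symmetry to touch only the upper triangle, can be sketched roughly as follows. This is a minimal illustration under those assumptions, not the authors' implementation; the per-pattern Jacobian rows and errors are assumed to be supplied by the surrounding training code.

```python
import numpy as np

def accumulate_quasi_hessian(jacobian_rows, errors, n_params):
    """Accumulate Q = J^T J and g = J^T e one pattern at a time.

    jacobian_rows: iterable yielding one 1-D Jacobian row j_p per pattern
    errors:        iterable of the corresponding scalar errors e_p
    Only the upper triangle of Q is formed; symmetry supplies the rest,
    so the full Jacobian is never stored.
    """
    Q = np.zeros((n_params, n_params))
    g = np.zeros(n_params)
    for j_p, e_p in zip(jacobian_rows, errors):
        for i in range(n_params):
            Q[i, i:] += j_p[i] * j_p[i:]   # rank-one update, upper triangle only
        g += j_p * e_p
    Q = Q + np.triu(Q, 1).T                # mirror into the lower triangle
    return Q, g

# Tiny check against the explicit J^T J product (made-up data):
rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))
e = rng.standard_normal(5)
Q, g = accumulate_quasi_hessian(J, e, 3)
print(np.allclose(Q, J.T @ J), np.allclose(g, J.T @ e))   # True True
# A subsequent LM step would be roughly: delta = np.linalg.solve(Q + mu * np.eye(3), -g)
```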

Proceedings ArticleDOI
30 Jul 1982
TL;DR: In this paper, a unified concept of using systolic arrays to perform real-time triangularization of both general and band matrices is presented, together with a framework for solving linear systems with pivoting and for least squares computations.
Abstract: Given an n x p matrix X with p < n, matrix triangularization, or triangularization for short, is to determine an n x n nonsingular matrix M such that MX = [R; 0], where R is p x p upper triangular, and furthermore to compute the entries of R. By triangularization, many matrix problems are reduced to the simpler problem of solving triangular linear systems (see, for example, Stewart). When X is a square matrix, triangularization is the major step in almost all direct methods for solving general linear systems. When M is restricted to be an orthogonal matrix Q, triangularization is also the key step in computing least squares solutions by the QR decomposition and in computing eigenvalues by the QR algorithm. Triangularization is computationally expensive, however: algorithms for performing it typically require on the order of n^3 operations on general n x n matrices. As a result, triangularization has become a bottleneck in some real-time applications. This paper sketches unified concepts of using systolic arrays to perform real-time triangularization for both general and band matrices. (Examples and general discussions of systolic architectures can be found in other papers.) Under the same framework, systolic triangularization arrays are derived for the solution of linear systems with pivoting and for least squares computations. More detailed descriptions of the suggested systolic arrays will appear in the final version of the paper.

474 citations
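As a generic illustration of the triangularization being mapped onto systolic hardware (a textbook Givens-rotation reduction, not the paper's array design), the sketch below reduces an n x p matrix X to the form [R; 0] and accumulates the orthogonal factor; Givens rotations are the plane rotations that systolic QR arrays apply in parallel.

```python
import numpy as np

def givens_triangularize(X):
    """Reduce X (n x p, p <= n) to [R; 0] with Givens rotations.

    Returns Q (n x n orthogonal) and R such that Q @ X = R, with R upper
    triangular in its first p rows and zero below.
    """
    X = np.array(X, dtype=float)
    n, p = X.shape
    Q = np.eye(n)
    R = X.copy()
    for k in range(p):                    # column being cleared below the diagonal
        for i in range(n - 1, k, -1):     # annihilate R[i, k] against R[k, k]
            a, b = R[k, k], R[i, k]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            # apply the same rotation to rows k and i of R and of Q
            R[[k, i], :] = G @ R[[k, i], :]
            Q[[k, i], :] = G @ Q[[k, i], :]
    return Q, R

# Example (made-up data): X is reduced so that Q @ X = [R; 0]
X = np.random.default_rng(1).standard_normal((5, 3))
Q, R = givens_triangularize(X)
print(np.allclose(Q @ X, R), np.allclose(R[3:], 0), np.allclose(np.tril(R[:3], -1), 0))
```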

Journal ArticleDOI
TL;DR: This survey paper describes how strands of work that are important in two different fields, matrix theory and complex function theory, have come together in work on fast computational algorithms for matrices with what the authors call displacement structure, and shows how a fast triangularization procedure can be developed for such matrices.
Abstract: In this survey paper, we describe how strands of work that are important in two different fields, matrix theory and complex function theory, have come together in some work on fast computational algorithms for matrices with what we call displacement structure. In particular, a fast triangularization procedure can be developed for such matrices, generalizing in a striking way an algorithm presented by Schur (1917) [J. Reine Angew. Math., 147 (1917), pp. 205–232] in a paper on checking when a power series is bounded in the unit disc. This factorization algorithm has a surprisingly wide range of significant applications going far beyond numerical linear algebra. We mention, among others, inverse scattering, analytic and unconstrained rational interpolation theory, digital filter design, adaptive filtering, and state-space least-squares estimation.

447 citations
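For orientation only: the recursion that such fast algorithms accelerate is classical triangularization by successive Schur complements, sketched below for a symmetric positive-definite matrix. Displacement-structure methods run essentially the same recursion on low-rank generators rather than on the full matrix; this generic O(n^3) sketch does not attempt that and is not the survey's algorithm.

```python
import numpy as np

def schur_triangularize(A):
    """Triangular (Cholesky-style) factorization by Schur-complement recursion.

    For symmetric positive-definite A, returns lower triangular L with
    A = L @ L.T. Each step extracts one column of L and continues on the
    Schur complement of the leading entry.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    S = A.copy()                       # current Schur complement
    for k in range(n):
        d = np.sqrt(S[k, k])
        L[k:, k] = S[k:, k] / d        # k-th column of the triangular factor
        # update the trailing block: subtract the rank-one term of this step
        S[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])
    return L

# Example (made-up matrix)
A = np.array([[4., 2., 2.], [2., 5., 3.], [2., 3., 6.]])
L = schur_triangularize(A)
print(np.allclose(L @ L.T, A))   # True
```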

Book ChapterDOI
01 Jan 1972
TL;DR: By formulating elimination as a combinatorial process, considerable insight into the elimination process can be gained by studying the evolution of the cycle structure and the vertex-separator (cut-set) structure of a graph under elimination.
Abstract: This chapter provides an overview of a graph-theoretic study of the numerical solution of sparse positive definite systems of linear equations. By formulating elimination as a combinatorial process, considerable insight into elimination can be gained by studying the evolution of the cycle structure and the vertex-separator, or cut-set, structure of a graph under elimination. Furthermore, by counting the arithmetic operations necessary to effect the decompositions, these optimization criteria are related to the computational complexity of calculations involving the elimination process. A graph-theoretic approach for dealing with sparse systems under Gaussian elimination is to find permutation matrices P, Q such that A = PMQ is block lower triangular, since in that case it is necessary only to decompose the diagonal blocks of PMQ. Naturally, such a transformation does not preserve symmetry. However, these results are not applicable when M is symmetric positive definite and irreducible, because the algorithm would then produce only one diagonal block, M itself.

425 citations
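The combinatorial view of elimination described above can be made concrete with a small sketch: eliminating a vertex removes it and joins its remaining neighbours into a clique (the new edges are the fill), and the neighbour count at each step gives a rough arithmetic-operation tally. The graph, the ordering, and the cost formula below are illustrative choices, not the chapter's own.

```python
def symbolic_elimination(adj, order):
    """Simulate symmetric Gaussian elimination on an undirected graph.

    adj:   dict mapping vertex -> set of neighbouring vertices
    order: elimination ordering (a permutation of the vertices)
    Returns the set of fill edges created and a rough multiplication count
    (eliminating a vertex with d remaining neighbours costs about d*(d+1)/2).
    """
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    fill, mults = set(), 0
    for v in order:
        nbrs = {u for u in g.pop(v) if u in g}      # neighbours not yet eliminated
        d = len(nbrs)
        mults += d * (d + 1) // 2
        for u in nbrs:                              # make remaining neighbours a clique
            for w in nbrs:
                if u != w and w not in g[u]:
                    g[u].add(w)
                    g[w].add(u)
                    fill.add(frozenset((u, w)))
        for u in nbrs:
            g[u].discard(v)
    return fill, mults

# Example: a 4-cycle a-b-c-d; eliminating 'a' first creates the fill edge {b, d}
adj = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
print(symbolic_elimination(adj, ['a', 'b', 'c', 'd']))
```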

Journal ArticleDOI
TL;DR: In this article, a modification of Henderson's procedure is described for finding the diagonal elements of an L (or A) matrix that does not require L or A to be stored in memory.
Abstract: A numerator relationship matrix for a group of animals is, by definition, the matrix with the ijth off-diagonal element equal to the numerator of Wright's [1922] coefficient of relationship between the ith and jth animals and with the ith diagonal element equal to 1 + fi, where fi is Wright's [1922] coefficient of inbreeding for the ith animal. The numerator relationship matrix, say A, can be computed recursively (see Emik and Terrill [1949]), and for most situations, inbreeding and relationship coefficients can be calculated with a computer more rapidly in this manner than by path coefficient methods (Wright [1922]). The exception is when the dimension of A is too large for it to be stored in computer memory; then computation of A is exceedingly time consuming. In addition to its usefulness for obtaining inbreeding and relationship coefficients, the inverse of A is required for best linear unbiased prediction of breeding values (Henderson [1973]), but, in general, A is too large to invert by conventional means. Recently, however, Henderson [1976] has described methods for computing a lower triangular matrix L, defined such that LL' = A, with the object of computing A^-1 = (L')^-1 L^-1. He discovered that A^-1 can be found directly from a list of sires and dams and the diagonal elements of L. Since the latter are functions of the diagonal elements of A, A^-1 for a noninbred population can be computed without having to compute either A or L. However, for an inbred population, the diagonal elements of L (or A) must first be found, and when L is too large to store in computer memory, this can be very time consuming if Henderson's computing formulas are used. The purpose of this paper is to describe a modification of Henderson's procedure for finding the diagonal elements of an L (or A) matrix which does not require that L (or A) be stored in memory. It is therefore possible to rapidly compute inbreeding coefficients or the inverse of a numerator relationship matrix for very large numbers of animals. For example, less than three minutes were required by an IBM 370/135 to compute the diagonal elements and the inverse of a numerator relationship matrix for 1000 animals. Use of this procedure

388 citations
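The recursive construction of A referred to above (the tabular method of Emik and Terrill [1949]) can be sketched as follows. The pedigree is a made-up example; animals are assumed to be numbered so that parents precede their offspring, and unknown parents are coded as None. This illustrates the recursion itself, not the paper's modification of Henderson's procedure.

```python
import numpy as np

def numerator_relationship_matrix(pedigree):
    """Build Wright's numerator relationship matrix A recursively.

    pedigree: list of (sire, dam) index pairs, one per animal, ordered so
    that parents appear before their offspring; unknown parents are None.
    Diagonal: A[i, i] = 1 + f_i, where f_i (the inbreeding coefficient)
    is half the relationship between the parents.
    """
    n = len(pedigree)
    A = np.zeros((n, n))
    for i, (s, d) in enumerate(pedigree):
        # off-diagonals with all previously processed animals
        for j in range(i):
            a_js = A[j, s] if s is not None else 0.0
            a_jd = A[j, d] if d is not None else 0.0
            A[i, j] = A[j, i] = 0.5 * (a_js + a_jd)
        # diagonal: 1 + f_i, with f_i = 0.5 * a(sire, dam) when both parents are known
        A[i, i] = 1.0 + (0.5 * A[s, d] if s is not None and d is not None else 0.0)
    return A

# Example pedigree: animals 0 and 1 are unrelated founders, 2 and 3 are their
# full sibs, and 4 is an offspring of the two full sibs (hence inbred).
ped = [(None, None), (None, None), (0, 1), (0, 1), (2, 3)]
A = numerator_relationship_matrix(ped)
print(A[2, 3])   # 0.5  (full sibs)
print(A[4, 4])   # 1.25 (f = 0.25 for an offspring of full sibs)
```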

Network Information
Related Topics (5)
Polynomial: 52.6K papers, 853.1K citations (88% related)
Bounded function: 77.2K papers, 1.3M citations (86% related)
Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations (83% related)
Matrix (mathematics): 105.5K papers, 1.9M citations (82% related)
Hilbert space: 29.7K papers, 637K citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    52
2022    113
2021    126
2020    128
2019    149
2018    123