Journal ArticleDOI

On the number of multiplications necessary to compute certain functions

Shmuel Winograd
01 Mar 1970-Communications on Pure and Applied Mathematics (Wiley Subscription Services, Inc., A Wiley Company)-Vol. 23, Iss: 2, pp 165-179
TL;DR: Following results of Pan and Motzkin on polynomial evaluation, and similar results on the product of a matrix by a vector, a new algorithm for matrix multiplication requiring about n³/2 multiplications is obtained.
Abstract: The number of multiplications and divisions required in certain computations is investigated. In particular, results of Pan and Motzkin about polynomial evaluation, as well as similar results about the product of a matrix by a vector, are obtained. As an application of the results on the product of a matrix by a vector, a new algorithm for matrix multiplication, which requires about n³/2 multiplications, is obtained.
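The inner-product identity behind that n³/2 count can be sketched as follows. This is a pure-Python illustration, not the paper's own presentation: `winograd_matmul`, the explicit multiplication counter, and the assumption of even matrix order are mine.

```python
def winograd_matmul(A, B):
    """Multiply two n x n matrices (n even) with about n^3/2 multiplications,
    using Winograd's inner-product trick. Returns (product, mult_count)."""
    n = len(A)
    assert n % 2 == 0, "sketch assumes even n"
    mults = 0

    # Precompute pairwise row terms of A and column terms of B
    # (n^2/2 multiplications each, done once).
    row = []
    for i in range(n):
        s = 0
        for j in range(0, n, 2):
            s += A[i][j] * A[i][j + 1]; mults += 1
        row.append(s)
    col = []
    for k in range(n):
        s = 0
        for j in range(0, n, 2):
            s += B[j][k] * B[j + 1][k]; mults += 1
        col.append(s)

    # Each of the n^2 entries now costs only n/2 multiplications:
    # sum_j (a_{i,2j} + b_{2j+1,k})(a_{i,2j+1} + b_{2j,k}) - row_i - col_k.
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            s = 0
            for j in range(0, n, 2):
                s += (A[i][j] + B[j + 1][k]) * (A[i][j + 1] + B[j][k])
                mults += 1
            C[i][k] = s - row[i] - col[k]
    return C, mults
```

For n = 4 the counter reports n³/2 + n² = 48 multiplications versus 64 for the classical method; the savings grow with n, at the price of extra additions.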
Citations
Book
01 Jan 1998
TL;DR: This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them.
Abstract: This book is the second volume in a projected five-volume survey of numerical linear algebra and matrix algorithms. This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them. Stressing depth over breadth, Professor Stewart treats the derivation and implementation of the more important algorithms in detail. The notes and references sections contain pointers to other methods along with historical comments. The book is divided into two parts: dense eigenproblems and large eigenproblems. The first part gives a full treatment of the widely used QR algorithm, which is then applied to the solution of generalized eigenproblems and the computation of the singular value decomposition. The second part treats Krylov sequence methods such as the Lanczos and Arnoldi algorithms and presents a new treatment of the Jacobi-Davidson method. The volumes in this survey are not intended to be encyclopedic. By treating carefully selected topics in depth, each volume gives the reader the theoretical and practical background to read the research literature and implement or modify new algorithms. The algorithms treated are illustrated by pseudocode that has been tested in MATLAB implementations.

653 citations

01 Jan 1990
TL;DR: Algebraic complexity theory, as a project of lower bounds and optimality, unites two quite different traditions: mathematical logic with the theory of recursive functions, and numerical algebra.
Abstract: Publisher Summary This chapter discusses algebraic complexity theory. Complexity theory, as a project of lower bounds and optimality, unites two quite different traditions. The first comes from mathematical logic and the theory of recursive functions. In this, the basic computational model is the Turing machine. The second tradition has developed from questions of numerical algebra. The problems in this typically have a fixed finite size. Consequently, the computational model is based on something like an ordinary computer that however is supplied with the ability to perform any arithmetic operation with infinite precision and that in turn is required to deliver exact results. The formal model is that of straight-line program or arithmetic circuit or computation sequence, more generally that of computation tree.
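The straight-line-program model the chapter describes can be made concrete with a toy interpreter. This is a sketch: the tuple encoding and `run_slp` are invented for illustration, and cost is measured simply as the number of `*` steps.

```python
def run_slp(program, inputs):
    """Evaluate a straight-line program: a fixed sequence of steps
    (name, op, left, right) over earlier results or inputs, with no
    branching. Returns (final value, number of multiplication steps),
    the cost measure used in algebraic complexity theory."""
    env = dict(inputs)
    mults = 0
    for name, op, a, b in program:
        x, y = env[a], env[b]
        if op == "*":
            env[name] = x * y
            mults += 1
        elif op == "+":
            env[name] = x + y
        elif op == "-":
            env[name] = x - y
    return env[program[-1][0]], mults

# (x + 1)^2 computed as a straight-line program with one multiplication,
# although expanding to x^2 + 2x + 1 would suggest two.
square = [
    ("t1", "+", "x", "one"),
    ("t2", "*", "t1", "t1"),
]
value, cost = run_slp(square, {"x": 3, "one": 1})  # value 16, cost 1
```

Lower-bound results in this theory assert that no straight-line program, however cleverly rearranged, can push `cost` below some threshold for a given family of polynomials.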

569 citations

Book ChapterDOI
05 Sep 1977
TL;DR: One approach to understanding complexity issues for certain easily computable natural functions is surveyed; the notion of rigidity offers, for the first time, a reduction of the relevant computational questions to noncomputational properties.
Abstract: We have surveyed one approach to understanding complexity issues for certain easily computable natural functions. Shifting graphs have been seen to account accurately and in a unified way for the superlinear complexity of several problems for various restricted models of computation. To attack "unrestricted" models (in the present context, combinational circuits or straight-line arithmetic programs), a first attempt, through superconcentrators, fails to provide any lower bounds, although it does give counter-examples to alternative approaches. The notion of rigidity, however, does offer for the first time a reduction of relevant computational questions to noncomputational properties. The "reduction" consists of the conjunction of Corollary 6.3 and Theorem 6.4, which show that "for most sets of linear forms over the reals the stated algebraic and combinatorial reasons account for the fact that they cannot be computed in linear time and depth O(log n) simultaneously." We have outlined some problem areas which our preliminary results raise, and feel that further progress on most of these is humanly feasible. We would be interested in alternative approaches also.

406 citations

Journal ArticleDOI
TL;DR: A comprehensive survey of parallel techniques for problems in linear algebra is given; specific topics include relevant computer models and their consequences for programs, evaluation of arithmetic expressions, solution of general and special linear systems of equations, and computation of eigenvalues.
Abstract: The existence of parallel and pipeline computers has inspired a new approach to algorithmic analysis. Classical numerical methods are generally unable to exploit multiple processors and powerful vector-oriented hardware. Efficient parallel algorithms can be created by reformulating familiar algorithms or by discovering new ones, and the results are often surprising. A comprehensive survey of parallel techniques for problems in linear algebra is given. Specific topics include: relevant computer models and their consequences for programs, evaluation of arithmetic expressions, solution of general and special linear systems of equations, and computation of eigenvalues.
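A minimal instance of the reformulation the survey describes: summing n numbers by tree reduction takes ⌈log₂ n⌉ parallel steps instead of n − 1 sequential ones. This sketch simulates the parallel machine sequentially; `parallel_sum` is an illustrative name, not from the paper.

```python
def parallel_sum(xs):
    """Tree reduction of a list of numbers.
    Returns (total, number_of_parallel_steps): every pair formed in one
    round could be added simultaneously on a parallel machine, so the
    step count is ceil(log2 n) rather than the n - 1 of a serial loop."""
    xs = list(xs)
    steps = 0
    while len(xs) > 1:
        pairs = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:            # odd element carries over unchanged
            pairs.append(xs[-1])
        xs = pairs
        steps += 1
    return xs[0], steps
```

The same tree pattern underlies parallel inner products and, by extension, many of the linear-system and eigenvalue kernels the survey covers.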

338 citations

Journal ArticleDOI
TL;DR: Algorithms which use only $O(\sqrt{n})$ nonscalar multiplications to evaluate polynomials of degree n, and proofs that at least $\sqrt ...
Abstract: We present algorithms which use only $O(\sqrt{n})$ nonscalar multiplications (i.e. multiplications involving "x" on both sides) to evaluate polynomials of degree n, and proofs that at least $\sqrt ...
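The $O(\sqrt{n})$ bound comes from splitting the polynomial into blocks of about $\sqrt{n}$ coefficients and running Horner's rule in $x^k$ with $k \approx \sqrt{n}$. Below is a sketch in the spirit of the Paterson-Stockmeyer scheme; the function name and the exact counting convention are mine, and coefficient-times-power products are counted as scalar (free) multiplications, as in the paper's cost measure.

```python
import math

def paterson_stockmeyer(coeffs, x):
    """Evaluate p(x) = sum(coeffs[i] * x**i) while counting only
    nonscalar multiplications (those with x-dependent values on both
    sides). Total nonscalar cost is about 2*sqrt(n) for degree n."""
    n = len(coeffs)
    k = max(1, math.isqrt(n))
    nonscalar = 0

    # Precompute x^0 .. x^k: k - 1 nonscalar multiplications.
    powers = [1, x]
    for _ in range(k - 1):
        powers.append(powers[-1] * x)
        nonscalar += 1

    # Split coefficients into blocks of size k; each block is a linear
    # combination of precomputed powers (scalar multiplications only).
    blocks = [coeffs[i:i + k] for i in range(0, n, k)]
    def block_value(b):
        return sum(c * powers[i] for i, c in enumerate(b))

    # Horner's rule in y = x^k: one nonscalar multiplication per block.
    result = block_value(blocks[-1])
    for b in reversed(blocks[:-1]):
        result = result * powers[k] + block_value(b)
        nonscalar += 1
    return result, nonscalar
```

For p(x) = 1 + 2x + 3x² + 4x³ + 5x⁴ at x = 2 this uses 3 nonscalar multiplications, whereas plain Horner's rule would use 4; the gap widens as roughly 2√n versus n.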

302 citations

References
Journal ArticleDOI
TL;DR: An algorithm is given which computes the coefficients of the product of two square matrices A and B of order n with fewer than 4.7·n^(log 7) arithmetical operations (all logarithms in the paper are taken to base 2).
Abstract: 1. Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than 4.7·n^(log 7) arithmetical operations (all logarithms in this paper are for base 2, thus log 7 ≈ 2.8; the usual method requires approximately 2n³ arithmetical operations). The algorithm induces algorithms for inverting a matrix of order n, solving a system of n linear equations in n unknowns, computing a determinant of order n, etc., all requiring less than const·n^(log 7) arithmetical operations. This fact should be compared with the result of Klyuyev and Kokovkin-Shcherbak [1] that Gaussian elimination for solving a system of linear equations is optimal if one restricts oneself to operations upon rows and columns as a whole. We also note that Winograd [2] modifies the usual algorithms for matrix multiplication and inversion and for solving systems of linear equations, trading roughly half of the multiplications for additions and subtractions. It is a pleasure to thank D. Brillinger for inspiring discussions about the present subject and St. Cook and B. Parlett for encouraging me to write this paper. 2. We define algorithms α_{m,k}, which multiply matrices of order m·2^k, by induction on k: α_{m,0} is the usual algorithm for matrix multiplication (requiring m³ multiplications and m²(m−1) additions). α_{m,k} already being known, define α_{m,k+1} as follows: if A, B are matrices of order m·2^(k+1) to be multiplied, write …
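The recursion the abstract begins to define replaces the eight half-size multiplications of block matrix multiplication with seven, which is what yields the O(n^(log 7)) ≈ O(n^2.81) operation count. Below is a minimal sketch in the now-standard presentation of Strassen's formulas (pure Python, order assumed a power of two); it is not the paper's own notation.

```python
def strassen(A, B):
    """Strassen's recursion for n x n matrices, n a power of two:
    seven recursive half-size products M1..M7 instead of eight."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split M into four h x h blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    # Reassemble the four blocks into one n x n matrix.
    return ([C11[i] + C12[i] for i in range(h)] +
            [C21[i] + C22[i] for i in range(h)])
```

The multiplication count satisfies T(n) = 7·T(n/2), so T(n) = n^(log 7); the extra additions only affect the constant.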

2,581 citations

Journal ArticleDOI
TL;DR: A new way of computing the inner product of two vectors is described; with it, the product of two n×n matrices can be performed using roughly n³/2 multiplications instead of the n³ multiplications which the regular method necessitates.
Abstract: In this note we describe a new way of computing the inner product of two vectors. This method cuts down the number of multiplications required when we want to perform a large number of inner products on a smaller set of vectors. In particular, we obtain that the product of two n×n matrices can be performed using roughly n³/2 multiplications instead of the n³ multiplications which the regular method necessitates.

136 citations

Journal ArticleDOI
TL;DR: Lower bounds are given for the number of operations in schemes without initial conditioning of the coefficients and in schemes with initial conditioning, and schemes with conditioning are constructed for computing one polynomial and for simultaneously computing the values of several polynomials.
Abstract: CONTENTS
Introduction
§ 1. Lower bounds for the number of operations in schemes without initial conditioning of the coefficients
§ 2. Lower bounds for the number of operations in schemes with initial conditioning of the coefficients
§ 3. Construction of schemes with initial conditioning of the coefficients for the computation of one polynomial
§ 4. Schemes with initial conditioning of the coefficients for simultaneous computation of the values of several polynomials
References
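The baseline against which these lower bounds are stated is Horner's rule, which evaluates a degree-n polynomial with exactly n multiplications and n additions when the coefficients are used as given (i.e. without preconditioning); Pan showed this count cannot be improved in that setting. A sketch with an explicit multiplication counter (`horner` and the counter are illustrative additions):

```python
def horner(coeffs, x):
    """Evaluate p(x) = coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x**n
    by Horner's rule. Returns (value, mult_count); for degree n this
    performs exactly n multiplications and n additions, which Pan's
    lower bound shows is optimal without preconditioning."""
    result = coeffs[-1]
    mults = 0
    for c in reversed(coeffs[:-1]):
        result = result * x + c
        mults += 1
    return result, mults
```

Sections 3 and 4 of the survey then show how preconditioning the coefficients (a one-time cost, amortized over many evaluations) cuts the per-evaluation multiplication count to roughly n/2.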

105 citations