Proceedings ArticleDOI

Matrix multiplication via arithmetic progressions

Don Coppersmith1, Shmuel Winograd1
01 Jan 1987 - pp 1-6
TL;DR: A new method for accelerating matrix multiplication asymptotically is presented, by using a basic trilinear form which is not a matrix product, and making novel use of the Salem-Spencer Theorem.
Abstract: We present a new method for accelerating matrix multiplication asymptotically. This work builds on recent ideas of Volker Strassen, by using a basic trilinear form which is not a matrix product. We make novel use of the Salem-Spencer Theorem, which gives a fairly dense set of integers with no three-term arithmetic progression. Our resulting matrix exponent is 2.376.
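The Salem-Spencer theorem guarantees fairly dense sets of integers with no three-term arithmetic progression. As an illustration only (this is not the construction used in the paper), a short sketch: the integers whose base-3 digits are all 0 or 1 form such a progression-free set, because a sum of two such numbers involves no carries, so $a + c = 2b$ forces $a = c$ digit by digit.

```python
def base3(x):
    """Base-3 digit string of a non-negative integer."""
    if x == 0:
        return "0"
    s = ""
    while x:
        s = str(x % 3) + s
        x //= 3
    return s

def ap_free_set(n):
    """Integers below n whose base-3 digits are all 0 or 1.

    If a + c = 2b with a, c in this set, both sides have base-3
    digits in {0, 2} with no carries, forcing a = c = b digitwise.
    """
    return [x for x in range(n) if set(base3(x)) <= {"0", "1"}]

def has_three_term_ap(s):
    """True if s contains a non-trivial 3-term arithmetic progression."""
    vals = set(s)
    return any(2 * b - a in vals for a in s for b in s if a != b)
```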
Citations
Journal ArticleDOI
TL;DR: In this article, a new method for accelerating matrix multiplication asymptotically is presented, based on the ideas of Volker Strassen, by using a basic trilinear form which is not a matrix product.

2,454 citations

Proceedings ArticleDOI
28 Jun 2009
TL;DR: Based on the results, it is believed that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem with satisfying influence spread and blazingly fast running time.
Abstract: Influence maximization is the problem of finding a small subset of nodes (seed nodes) in a social network that could maximize the spread of influence. In this paper, we study efficient influence maximization from two complementary directions. One is to improve the original greedy algorithm of [5] and its improvement [7] to further reduce its running time, and the second is to propose new degree discount heuristics that improve influence spread. We evaluate our algorithms by experiments on two large academic collaboration graphs obtained from the online archival database arXiv.org. Our experimental results show that (a) our improved greedy algorithm achieves better running time compared with the improvement of [7] with matching influence spread, (b) our degree discount heuristics achieve much better influence spread than classic degree and centrality-based heuristics, and when tuned for a specific influence cascade model, they achieve almost matching influence spread with the greedy algorithm, and more importantly (c) the degree discount heuristics run only in milliseconds while even the improved greedy algorithms run in hours in our experiment graphs with a few tens of thousands of nodes. Based on our results, we believe that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem with satisfying influence spread and blazingly fast running time. Therefore, contrary to what is implied by the conclusion of [5] that traditional heuristics are outperformed by the greedy approximation algorithm, our results shed new light on the research of heuristic algorithms.
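As a rough illustration of the degree-discount idea described in the abstract, here is a minimal sketch. It assumes the paper's DegreeDiscountIC-style discount formula $dd_v = d_v - 2t_v - (d_v - t_v)\,t_v\,p$ for the independent cascade model with uniform propagation probability $p$; the graph representation and names are our own choices, not the paper's.

```python
def degree_discount_ic(adj, k, p=0.01):
    """Pick k seed nodes by repeatedly taking the node with the
    largest discounted degree, then discounting its neighbors.

    adj: dict mapping node -> set of neighbors (undirected graph).
    """
    d = {v: len(nbrs) for v, nbrs in adj.items()}  # plain degree
    dd = dict(d)                                   # discounted degree
    t = {v: 0 for v in adj}                        # neighbors already seeded
    seeds = []
    for _ in range(k):
        u = max((v for v in adj if v not in seeds), key=lambda v: dd[v])
        seeds.append(u)
        for v in adj[u]:
            if v in seeds:
                continue
            t[v] += 1
            dd[v] = d[v] - 2 * t[v] - (d[v] - t[v]) * t[v] * p
    return seeds
```

The discount captures the intuition that a node adjacent to existing seeds contributes less fresh influence, so its effective degree should be reduced before it competes for the next seed slot.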

2,073 citations


Cites background from "Matrix multiplication via arithmetic progressions"

  • ...vertices take O(mn) time, or using fast binary matrix multiplication takes O(n^2.376) [2], which is not as good as O(mn) for sparse graphs such as social network graphs....


Book
30 Oct 1997
TL;DR: This chapter discusses decision problems and Complexity over a Ring and the Fundamental Theorem of Algebra: Complexity Aspects.
Abstract: 1 Introduction.- 2 Definitions and First Properties of Computation.- 3 Computation over a Ring.- 4 Decision Problems and Complexity over a Ring.- 5 The Class NP and NP-Complete Problems.- 6 Integer Machines.- 7 Algebraic Settings for the Problem "P ≠ NP?".- 8 Newton's Method.- 9 Fundamental Theorem of Algebra: Complexity Aspects.- 10 Bezout's Theorem.- 11 Condition Numbers and the Loss of Precision of Linear Equations.- 12 The Condition Number for Nonlinear Problems.- 13 The Condition Number in P(H(d)).- 14 Complexity and the Condition Number.- 15 Linear Programming.- 16 Deterministic Lower Bounds.- 17 Probabilistic Machines.- 18 Parallel Computations.- 19 Some Separations of Complexity Classes.- 20 Weak Machines.- 21 Additive Machines.- 22 Nonuniform Complexity Classes.- 23 Descriptive Complexity.- References.

1,594 citations

Journal ArticleDOI
TL;DR: This paper develops a robust hierarchical clustering algorithm ROCK that employs links and not distances when merging clusters, and indicates that ROCK not only generates better quality clusters than traditional algorithms, but it also exhibits good scalability properties.

1,383 citations

Proceedings ArticleDOI
23 Mar 1999
TL;DR: This work develops a robust hierarchical clustering algorithm, ROCK, that employs links and not distances when merging clusters, and shows that ROCK not only generates better quality clusters than traditional algorithms, but also exhibits good scalability properties.
Abstract: We study clustering algorithms for data with Boolean and categorical attributes. We show that traditional clustering algorithms that use distances between points for clustering are not appropriate for Boolean and categorical attributes. Instead, we propose a novel concept of links to measure the similarity/proximity between a pair of data points. We develop a robust hierarchical clustering algorithm, ROCK, that employs links and not distances when merging clusters. Our methods naturally extend to non-metric similarity measures that are relevant in situations where a domain expert/similarity table is the only source of knowledge. In addition to presenting detailed complexity results for ROCK, we also conduct an experimental study with real-life as well as synthetic data sets. Our study shows that ROCK not only generates better quality clusters than traditional algorithms, but also exhibits good scalability properties.
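A minimal sketch of the link notion described above: two points are neighbors when their similarity meets a threshold θ, and link(p, q) counts their common neighbors. The restriction to Jaccard similarity and the function names are our simplifications, not the paper's full formulation.

```python
def jaccard(a, b):
    """Jaccard similarity of two sets of categorical attribute values."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def rock_links(points, theta):
    """link(i, j) = number of common neighbors of points i and j,
    where x and y are neighbors iff jaccard(x, y) >= theta.

    points: list of sets of categorical attribute values.
    Returns a dict keyed by index pairs (i, j) with i < j.
    """
    n = len(points)
    nbr = [{j for j in range(n)
            if i != j and jaccard(points[i], points[j]) >= theta}
           for i in range(n)]
    return {(i, j): len(nbr[i] & nbr[j])
            for i in range(n) for j in range(i + 1, n)}
```

Merging clusters by links rather than raw distances rewards pairs that share many mutual neighbors, which is what makes the approach robust for Boolean and categorical data.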

1,322 citations

References
Journal ArticleDOI
TL;DR: By a modification of Salem and Spencer's method, the better estimate $\nu(N) > N^{1-(2\sqrt{2\log 2}+\varepsilon)/\sqrt{\log N}}$ is shown.
Abstract: Communicated October 18, 1946. Let $\nu(N)$ denote the maximum number of elements of a set of non-negative integers $\le N$ containing no three terms in arithmetical progression. Salem and Spencer showed that for every $\varepsilon > 0$ and sufficiently large $N$, $\nu(N) > N^{1-(\log 2+\varepsilon)/\log\log N}$. I will show in this note that, by a modification of their method, the better estimate $\nu(N) > N^{1-(2\sqrt{2\log 2}+\varepsilon)/\sqrt{\log N}}$ holds.

471 citations

Journal ArticleDOI
TL;DR: By combining Pan's trilinear technique with a strong version of the compression theorem for the case of several disjoint matrix multiplications, it is shown that multiplication of $N \times N$ matrices (over arbitrary fields) is possible in time $O(N^\beta)$, where $\beta$ is a bit smaller than $3\ln 52/\ln 110 \approx 2.522$.
Abstract: In 1979 considerable progress was made in estimating the complexity of matrix multiplication. Here the new techniques and recent results are presented, based upon the notion of approximate rank and the observation that certain patterns of partial matrix multiplication (some of the entries of the matrices may be zero) can efficiently be utilized to perform multiplication of large total matrices. By combining Pan’s trilinear technique with a strong version of our compression theorem for the case of several disjoint matrix multiplications it is shown that multiplication of $N \times N$ matrices (over arbitrary fields) is possible in time $O(N^\beta )$, where $\beta $ is a bit smaller than $3\ln 52/\ln 110 \approx 2.522$.
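For context, the baseline these exponent bounds improve on is Strassen's 1969 scheme, which multiplies $2 \times 2$ matrices with 7 scalar multiplications instead of 8; applied recursively to matrix blocks it yields running time $O(N^{\log_2 7}) \approx O(N^{2.807})$. A sketch of the base step (plain Python, our own naming):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (Strassen, 1969).

    Recursing on half-size blocks instead of scalars gives the
    O(n^(log2 7)) ~ O(n^2.807) bound; the entries here may equally
    be matrices, since only +, - and * are used.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

The techniques surveyed in this reference (approximate rank, partial matrix multiplication) push the exponent well below Strassen's 2.807 without needing a better explicit base scheme.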

267 citations

Journal ArticleDOI
TL;DR: A consequence of these results is that $\omega $, the exponent for matrix multiplication, is a limit point, that is, it cannot be realized by any single algorithm.
Abstract: The main results of this paper have the following flavor: Given one algorithm for multiplying matrices, there exists another, better, algorithm. A consequence of these results is that $\omega$, the exponent for matrix multiplication, is a limit point, that is, it cannot be realized by any single algorithm.

228 citations


Additional excerpts

  • ...The most exciting aspect of Strassen's new approach is that it eliminates a major barrier to proving $\omega = 2$. Namely, if one uses a fixed, finite basic algorithm in the hypothesis of Schönhage's τ-theorem, then $L$, the number of multiplications, must strictly exceed either $\#x$, the number of x-variables, or $\#y$ or $\#z$ [CW], because the basic algorithm is a matrix multiplication algorithm....


Journal ArticleDOI
TL;DR: The significance of this notion lies, above all, in the key role of matrix multiplication for numerical linear algebra, where the following problems all have exponent $\omega$: matrix inversion, LR-decomposition, evaluation of the determinant or of all coefficients of the characteristic polynomial, and for $k = \mathbb{C}$ also QR-decomposition and unitary transformation to Hessenberg form.
Abstract: The significance of this notion lies, above all, in the key role of matrix multiplication for numerical linear algebra. Thus the following problems all have exponent $\omega$: matrix inversion, LR-decomposition, evaluation of the determinant or of all coefficients of the characteristic polynomial, and for $k = \mathbb{C}$ also QR-decomposition and unitary transformation to Hessenberg form. (See Strassen [48], [49], Bunch-Hopcroft [12], Schönhage [45], Baur-Strassen [3], Keller [32].) Apart from this, such diverse computational problems as finding the transitive closure of a finite relation, parsing a context-free language or computing a generalized Fourier transform are reducible to matrix multiplication. (See the survey article of Paterson [43], and Beth [4].) Of course the above definition of $\omega$ is incomplete: we have not explained what we mean by an algorithm or by the complexity of a computational problem. Fortunately, there is no need to do so. We may use instead the notion of rank of a bilinear map.

181 citations