
Showing papers on "Average-case complexity published in 1979"


Book ChapterDOI
16 Jul 1979
TL;DR: The authors showed that parsing strings of length n is harder than recognizing them by a factor of at most O(log n); the same holds for linear and/or unambiguous context-free languages.
Abstract: Several results on the computational complexity of general context-free language parsing and recognition are given. In particular we show that parsing strings of length n is harder than recognizing such strings by a factor of at most O(log n). The same is true for linear and/or unambiguous context-free languages. We also show that the time to multiply √n × √n Boolean matrices is a lower bound on the time to recognize all prefixes of a string (or do on-line recognition), which in turn is a lower bound on the time to generate a particular convenient representation of all parses of a string (in an ambiguous grammar). Thus these problems are solvable in linear time only if n×n Boolean matrix multiplication can be done in O(n²).
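The Boolean matrix product that anchors the lower bound above is the ordinary matrix product with AND in place of multiplication and OR in place of addition. A minimal sketch (naive O(n³) version, not the fast algorithm the bound refers to):

```python
# Boolean matrix multiplication: C[i][k] = OR over j of (A[i][j] AND B[j][k]).
# Entries are 0/1 integers.
def bool_matmul(a, b):
    n = len(a)
    return [[int(any(a[i][j] and b[j][k] for j in range(n)))
             for k in range(n)]
            for i in range(n)]

A = [[1, 0], [0, 1]]   # identity
B = [[0, 1], [1, 0]]   # swap
print(bool_matmul(A, B))  # [[0, 1], [1, 0]]
```

Any algorithm recognizing all prefixes of a string faster than this product can be computed would, by the paper's reduction, yield a faster Boolean matrix multiplication.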

15 citations



Journal ArticleDOI
Dan Gordon
TL;DR: A p-measure is defined as a measure for which Blum's axioms can be proved in a given axiomatic system; it is shown that the complexity class of a p-function contains only p-functions and that all p-functions form a single complexity class.

7 citations


Proceedings Article
01 Jan 1979

2 citations


Proceedings ArticleDOI
01 Apr 1979
TL;DR: It is shown that sequential complexity decreases while parallel complexity increases, and that eliminating the last transform step improves the performance of fast transform filter algorithms, especially for moderately small transform sizes.
Abstract: Complexity analysis should be based on suitable sequential and parallel machine models with arbitrary resources. Traditional complexity predicates change completely in light of fast hardware multipliers, complex arithmetic processing units and the forthcoming VLSI technology. Both signal-flow-graph-based and program-based complexity analysis are proposed. It is shown that sequential complexity will decrease whereas parallel complexity will increase. Analysis of the last transform step in a transform convolution system reveals an overall increase in time complexity. Consequently, eliminating the last transform step improves the performance of fast transform filter algorithms, especially when the transform size is moderately small. This applies to prime factor DFT computation via fast convolution, the WFTA transforms, and FFT algorithms in general.
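The transform convolution system the abstract analyzes follows the standard pattern: transform both signals, multiply pointwise, then invert. A minimal sketch using the DFT (via NumPy's FFT) illustrates the structure, including the final inverse transform step whose cost the paper examines:

```python
import numpy as np

# Circular convolution via the DFT: forward transforms, pointwise product,
# then the "last transform step" (the inverse DFT) back to the time domain.
def fft_circular_convolve(x, h):
    n = len(x)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n)))

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 0.0, 0.0])  # identity kernel
print(fft_circular_convolve(x, h))  # ~ [1. 2. 3. 4.]
```

When consecutive filters share a transform domain, the inverse transform of one stage and the forward transform of the next cancel, which is the saving the abstract attributes to removing the last transform step.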

1 citation



Journal ArticleDOI
TL;DR: This paper is concerned with minimizing arithmetic operations, a subject area that came into existence with the advent of high-speed computers and is further motivated by the development of still faster ones.
Abstract: This paper is concerned with minimizing arithmetic operations. A potential reader might ask "Why should one worry about the number of arithmetic operations when calculators and computers are so fast?" It is true that the mechanics of using a slide rule and the study of logarithms for computational purposes are becoming obsolete. However, the subject area of this paper only came into existence with the advent of high-speed computers and is further enhanced by the development of still faster computers. The reason for this can best be demonstrated by the following problem, which is indicative of this area of study. Consider multiplying two square n × n matrices A with elements aij and B with elements bjk. Their product is an n × n matrix C whose elements are defined by cik = Σj aij bjk.
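The definitional matrix product uses n scalar multiplications per entry, n³ in total, and reducing that count is exactly the concern of this line of work (Strassen's method, for instance, gets below n³). A minimal sketch that computes the product and tallies the multiplications:

```python
# Naive n x n matrix product, counting scalar multiplications:
# the definition c[i][k] = sum_j a[i][j] * b[j][k] uses n^3 of them.
def matmul_with_count(a, b):
    n = len(a)
    c = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for k in range(n):
            for j in range(n):
                c[i][k] += a[i][j] * b[j][k]
                mults += 1
    return c, mults

c, mults = matmul_with_count([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(c, mults)  # [[19, 22], [43, 50]] 8
```

For n = 2 the naive count is 2³ = 8 multiplications; Strassen's recurrence starts from 7 at this base case, which is where the asymptotic saving comes from.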

1 citation



Journal ArticleDOI
TL;DR: A new, close to linear, estimate is obtained for the complexity of computing a product of polynomials over a finite field, and a group of non-singular linear tensor-rank-preserving transformations is described.
Abstract: Some aspects of the theory of the algebraic complexity of computations are investigated, namely, the complexity of the computation of certain sets of bilinear forms from the point of view of the number of multiplications and divisions. The complexity of the computation of a pair of bilinear forms is characterized. A new, close to linear, estimate is obtained for the complexity of computing a product of polynomials over a finite field. A group of non-singular linear tensor-rank-preserving transformations is described. The behaviour almost everywhere of the rank in tensor space is considered.
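The problem the near-linear estimate addresses is multiplying polynomials with coefficients in a finite field GF(p). A minimal sketch of the schoolbook baseline, which uses a quadratic number of coefficient multiplications (the quantity the paper's bound improves on):

```python
# Schoolbook product of two polynomials over GF(p): O(deg^2) coefficient
# multiplications. Coefficient lists are ordered lowest degree first.
def poly_mul_mod_p(f, g, p):
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % p
    return h

# (1 + x)(2 + x) = 2 + 3x + x^2, which is 2 + x^2 over GF(3)
print(poly_mul_mod_p([1, 1], [2, 1], 3))  # [2, 0, 1]
```

Near-linear algorithms replace these nested loops with evaluation-interpolation schemes over the field, which is the regime the abstract's estimate concerns.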

1 citation