
Showing papers on "Average-case complexity" published in 1975


Proceedings ArticleDOI
05 May 1975
TL;DR: An effort is made to recast classical theorems into a useful computational form and analogies are developed between constructibility questions in Euclidean geometry and computability questions in modern computational complexity.
Abstract: The complexity of a number of fundamental problems in computational geometry is examined and a number of new fast algorithms are presented and analyzed. General methods for obtaining results in geometric complexity are given and upper and lower bounds are obtained for problems involving sets of points, lines, and polygons in the plane. An effort is made to recast classical theorems into a useful computational form and analogies are developed between constructibility questions in Euclidean geometry and computability questions in modern computational complexity.
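The abstract stays at the level of results, but a standard example of the kind of O(n log n) planar algorithm it refers to is convex hull construction. The sketch below is illustrative only (Andrew's monotone chain, not the paper's own construction; the point representation and function names are assumptions):

```python
# Illustrative sketch only: Andrew's monotone chain convex hull, a classic
# O(n log n) planar algorithm of the kind surveyed in the paper (not the
# paper's own construction). Points are (x, y) tuples.

def cross(o, a, b):
    """Z-component of the cross product OA x OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```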

287 citations


Journal ArticleDOI
TL;DR: The cost of a complete updating algorithm is taken to be the number of bits it reads and/or writes in updating the representation of a data base, and lower bounds to measures of this cost are cited.
Abstract: Four costs of a retrieval algorithm are the number of bits needed to store a representation of a data base, the number of those bits which must be accessed to answer a retrieval question, the number of bits of state information required, and the logic complexity of the algorithm. Firm lower bounds are given to measures of the first three costs for simple binary retrieval problems. Systems are constructed which attain each bound separately. A system which finds the value of the kth bit in an N-bit string attains all bounds simultaneously. For two other more complex retrieval problems there are trading curves between storage and worst-case access, and between storage and average access. Lower and upper bounds to the trading curves are found. Minimal storage is a point of discontinuity on both curves, and for some complex problems large increases in storage are needed to approach minimal access. The cost of a complete updating algorithm is taken to be the number of bits it reads and/or writes in updating the representation of a data base. Lower bounds to measures of this cost are cited. Optimal minimal-storage systems also have minimal update cost. Optimal minimal-access systems with large storage cost also have large update cost, but a small increase in storage for such a system may reduce update cost dramatically. KEY WORDS AND PHRASES: file, storage, retrieval, access, exact match, table lookup, computational complexity, retrieval algorithms, Kraft inequality. CR CATEGORIES: 3.70, 3.72, 3.74, 5.25, 5.6
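For the simplest problem mentioned above, retrieving the kth bit of an N-bit string, a minimal sketch (interface and names are illustrative assumptions, not from the paper) makes the cost accounting concrete: the representation is the string itself, so storage is N bits and each query reads exactly one bit:

```python
# Minimal sketch of the k-th-bit retrieval problem described in the abstract:
# the representation is the N-bit string itself (N bits of storage), and a
# query reads exactly one stored bit (1 bit of access per question).
# The class name and interface are illustrative assumptions.

class BitRetrieval:
    def __init__(self, bits):
        self.bits = list(bits)        # representation: the raw N-bit string

    def storage_bits(self):
        return len(self.bits)         # storage cost: N bits

    def query(self, k):
        return self.bits[k]           # access cost: one bit read

db = BitRetrieval([1, 0, 1, 1, 0, 0, 1, 0])
print(db.storage_bits(), db.query(3))   # -> 8 1
```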

40 citations


Proceedings ArticleDOI
13 Oct 1975
TL;DR: The main results are summarized in Section 3; Theorems 1 and 2 are proved in Sections 4 and 5, and the proof of Theorem 3 follows the same approach and is omitted.
Abstract: 3. Summary of Results: lg lg n - lg lg(k/n) + O(1) if k > n; n/k + lg lg k + O(1) if k < n. We shall prove Theorems 1 and 2 in Sections 4 and 5. The proof of Theorem 3 is in the same vein and will not be given here.

32 citations


Journal ArticleDOI
TL;DR: An additive degree of freedom is defined, which turns out to be an exact measure of the complexity of computation of a family F of linear forms in r variables over a field.
Abstract: The notion of the linear algorithm to compute a family F of linear forms in r variables over a field is defined. Ways to save additions are investigated by analyzing the combinatorial aspects of linear dependences between subrows of a given matrix F. Further, an additive degree of freedom is defined, which turns out to be an exact measure of the complexity of computation of F.
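As a concrete illustration of saving additions by exploiting shared subrows (the matrix and numbers below are invented for the example, not taken from the paper), two forms whose rows share the subrow (1, 1, 0, 0) can be evaluated with three additions instead of four by reusing the common partial sum:

```python
# Illustrative sketch (example data, not from the paper): computing the family
# of linear forms given by the 0/1 matrix with rows (1,1,1,0) and (1,1,0,1),
# i.e. y1 = x1 + x2 + x3 and y2 = x1 + x2 + x4.

def naive(x):
    # 4 additions: each form is summed independently.
    y1 = x[0] + x[1] + x[2]
    y2 = x[0] + x[1] + x[3]
    return y1, y2

def shared(x):
    # 3 additions: the common subrow (1, 1, 0, 0) is computed once and reused,
    # which is the kind of saving the combinatorial analysis accounts for.
    s = x[0] + x[1]
    return s + x[2], s + x[3]

x = [3, 5, 7, 11]
assert naive(x) == shared(x) == (15, 19)
```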

30 citations



Journal ArticleDOI
TL;DR: In this article, the authors describe some new techniques for discussing approximate functional complexity and state several theorems which extend the scope of earlier results.
Abstract: Hilbert's 13th problem dealt with the functional complexity of a specific function of three variables. One measure of the complexity of a function of n variables is whether it can be represented in terms of functions of fewer variables according to a specific schema. For example, we may ask if a given function F(x, y) is nomographic, i.e. whether it can be written in the form f(φ(x) + ψ(y)) using only functions of one variable. A larger class of functions consists of those that can be represented as uniform limits of nomographic functions. Membership in such a class is a measure of the approximate functional complexity of a function F, and may be an appropriate concept in discussing computational approximation. Some of the important questions dealing with complexity have been answered by the work of Vitushkin [8], Arnol'd [1], and Kolmogorov [5]. In the present note, we describe some new techniques for discussing approximate complexity and state several theorems which extend the scope of earlier results. We first observe that if all the component functions used in a particular representation schema are sufficiently smooth, then the resulting class of representable functions will be solutions of one or more specific partial differential equations, which may in fact yield local characterizations for smoothly representable functions. (For example, smoothly representable functions of the form f(g(x, y), h(y, z)) must satisfy a fourth-order equation with 55 terms.) However, this observation does not seem to be immediately useful in treating approximate representation, nor in dealing with representation by functions required only to be continuous. (It is tempting to hope that an appropriate concept of weak solution will be useful here.) Our results are of two types. The first is based on the study of level sets, as with much of the preceding work in complexity; excellent surveys may be found in Sprecher [7], and in [2] and [6]. Any representation schema can be regarded as a mapping diagram whose commutativity imposes necessary conditions on each component function, which in turn give rise to relations between their level sets. In some cases, this may be carried over to approximate representation. The following is typical.
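A standard worked example of the nomographic form, included here only as an illustration and not taken from the note: the product of two positive variables can be funneled through a single sum of one-variable functions.

```latex
% Illustrative example (not from the note): F(x,y) = xy is nomographic on
% x, y > 0, since it has the form f(phi(x) + psi(y)) with one-variable pieces.
\[
  F(x,y) \;=\; xy \;=\; \exp\bigl(\ln x + \ln y\bigr),
  \qquad f(t) = e^{t}, \quad \varphi(x) = \ln x, \quad \psi(y) = \ln y .
\]
```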

4 citations


Proceedings ArticleDOI
05 May 1975
TL;DR: The classes of integers and polynomials that can be evaluated within given complexity bounds are considered, and the existence of proper hierarchies of complexity classes is proved.
Abstract: The difficulty of evaluating integers and polynomials has been studied in various frameworks ranging from the addition-chain approach [5] to integer evaluation to recent efforts aimed at generating polynomials that are hard to evaluate [2,8,10]. Here we consider the classes of integers and polynomials that can be evaluated within given complexity bounds and prove the existence of proper hierarchies of complexity classes. The framework in which our problems are cast is general enough to allow any finite set of binary operations rather than just addition, subtraction, multiplication, and division. The motivation for studying complexity classes rather than specific integers or polynomials is analogous to why complexity classes are studied in automata-based complexity: (i) the immense difficulty associated with computing the complexity of a specific integer or polynomial; (ii) the important insight obtained from discovering the structure of the complexity classes.
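In the addition-chain model cited above [5], the cost of an integer n is the length of its shortest addition chain; the brute-force sketch below (an illustration of that model, not the paper's construction, and the names are assumptions) shows why computing this cost for specific integers quickly becomes impractical, which is part of the stated motivation for studying classes instead:

```python
# Illustrative sketch (not from the paper): in the addition-chain model, the
# cost of an integer n is the length of the shortest chain
# 1 = a_0, a_1, ..., a_r = n in which each a_i is a sum of two earlier terms.
# Brute-force iterative deepening over ascending chains; fine for small n only.

def shortest_addition_chain(n):
    """Return the minimal number of additions needed to reach n from 1."""
    if n == 1:
        return 0

    def extend(chain, budget):
        if chain[-1] == n:
            return True
        if budget == 0:
            return False
        for i in range(len(chain) - 1, -1, -1):
            for j in range(i, -1, -1):
                s = chain[i] + chain[j]
                if chain[-1] < s <= n and extend(chain + [s], budget - 1):
                    return True
        return False

    r = 1
    while not extend([1], r):
        r += 1
    return r

print([shortest_addition_chain(n) for n in range(1, 16)])
# -> [0, 1, 2, 2, 3, 3, 4, 3, 4, 4, 5, 4, 5, 5, 5]
```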

2 citations