Proceedings ArticleDOI

Lower bounds for algebraic computation trees

01 Dec 1983-pp 80-86
TL;DR: All the apparently known lower bounds for linear decision trees are extended to bounded degree algebraic decision trees, thus answering the open questions raised by Steele and Yao [20].
Abstract: A topological method is given for obtaining lower bounds for the height of algebraic computation trees, and algebraic decision trees. Using this method we are able to generalize, and present in a uniform and easy way, almost all the known nonlinear lower bounds for algebraic computations. Applying the method to decision trees we extend all the apparently known lower bounds for linear decision trees to bounded degree algebraic decision trees, thus answering the open questions raised by Steele and Yao [20]. We also show how this new method can be used to establish lower bounds on the complexity of constructions with ruler and compass in plane Euclidean geometry.
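The quantitative core of this topological method, as it is usually cited, relates the height of a tree deciding a set W ⊆ R^n to the number of connected components of W. The sketch below states the commonly quoted form of the bound and works one standard example; the exact constants are not taken from the abstract above and should be checked against the paper itself.

```latex
% Hedged sketch of the bound usually attributed to this paper:
% any algebraic computation tree of height h deciding membership in a set
% W \subseteq \mathbb{R}^n satisfies
\[
  h \;\ge\; c \,\bigl(\log_2 \#(W) - n\bigr),
\]
% where \#(W) denotes the number of connected components of W and c > 0 is
% an absolute constant.  Worked example (element distinctness): the set
\[
  W = \{\, x \in \mathbb{R}^n : x_i \neq x_j \text{ for all } i \neq j \,\}
\]
% has n! connected components, one for each ordering of the coordinates, so
\[
  h \;\ge\; c\,\bigl(\log_2 n! - n\bigr) \;=\; \Omega(n \log n).
\]
```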
Citations
Book
01 Jan 2001
TL;DR: The complexity class P is formally defined as the set of concrete decision problems that are polynomial-time solvable, and encodings are used to map abstract problems to concrete problems.
Abstract: Abstract problems. To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a "problem" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = ⟨G, u, v, k⟩ is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder.

Encodings. If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4, ...} as the strings {0, 1, 10, 11, 100, ...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs: all can be encoded as binary strings. Thus, a computer algorithm that "solves" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T(n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T(n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n^k) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable. We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i ∈ I is Q(i) ∈ {0, 1}, then the solution to the concrete-problem instance e(i) ∈ {0, 1}* is also Q(i).
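Since the passage above comes from a general algorithms textbook rather than from the cited paper, a brief illustration may help. The following Python sketch, using hypothetical names of its own, shows an abstract decision problem (PATH) as a function from instances to {0, 1}, together with one possible encoding that turns instances into binary strings.

```python
# Minimal sketch (hypothetical names) of the abstract-vs-concrete distinction
# described above: PATH as a function from instances to {0, 1}, plus an
# encoding e that maps each instance to a binary string.
from collections import deque

def path_decision(graph, u, v, k):
    """Abstract decision problem PATH: is there a path from u to v with at
    most k edges?  graph is an adjacency list {vertex: [neighbors]}."""
    frontier, dist = deque([u]), {u: 0}
    while frontier:
        x = frontier.popleft()
        if x == v:
            return 1
        if dist[x] == k:
            continue                      # cannot extend the path any further
        for y in graph.get(x, []):
            if y not in dist:
                dist[y] = dist[x] + 1
                frontier.append(y)
    return 0

def encode_instance(graph, u, v, k):
    """One of many possible encodings e: serialize the instance and spell it
    out as a binary string.  Any polynomially related encoding would do."""
    text = repr((sorted(graph.items()), u, v, k))
    return "".join(format(ord(ch), "08b") for ch in text)

# e(Q): the induced concrete problem takes the binary string e(i) as input.
g = {0: [1], 1: [2], 2: []}
print(path_decision(g, 0, 2, 2), len(encode_instance(g, 0, 2, 2)))
```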
As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances. We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary (a string of k 1s), then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ(k) = Θ(2^n), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time. The encoding of an abstract problem is therefore quite important to our understanding of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out "expensive" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time.

We say that a function f : {0, 1}* → {0, 1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x ∈ {0, 1}*, produces as output f(x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i ∈ I, we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows.

Lemma 34.1. Let Q be an abstract decision problem on an instance set I, and let e1 and e2 be polynomially related encodings on I. Then, e1(Q) ∈ P if and only if e2(Q) ∈ P.

Proof. We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(n^k) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n^c) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n^c), and therefore |e1(i)| = O(n^c), since the output of a serial computer cannot be longer than its running time.
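The unary-versus-binary point above is easy to see concretely. The short Python sketch below (hypothetical helper names) compares the input lengths of the two encodings of the same integer k, which is exactly the gap between O(n) and Θ(2^n) running times described in the text.

```python
# Minimal sketch (hypothetical names): the same integer k, two encodings.
# A Θ(k)-time algorithm is polynomial in the unary input length but
# exponential in the binary input length, as the passage above explains.
def unary_encoding(k: int) -> str:
    return "1" * k                 # length n = k

def binary_encoding(k: int) -> str:
    return format(k, "b")          # length n = floor(lg k) + 1

for k in (5, 100, 10_000):
    u, b = unary_encoding(k), binary_encoding(k)
    print(f"k={k:>6}  |unary|={len(u):>6}  |binary|={len(b):>3}")
```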
Solving the problem on e1(i) takes time O(|e1(i)|^k) = O(n^(ck)), which is polynomial since both c and k are constants. Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its "complexity," that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a "standard" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, ⟨G⟩ denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference.

A formal-language framework. One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001, ...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, ...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L by L̄ = Σ* − L. The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 ∈ L1 and x2 ∈ L2}. The closure or Kleene star of a language L is the language L* = {ε} ∪ L ∪ L^2 ∪ L^3 ∪ ···, where L^k is the language obtained by concatenating L to itself k times.
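As a concrete illustration of the language operations just defined, the Python sketch below (hypothetical names, strings of bounded length only, so the sets stay finite) computes complement, concatenation, and a length-bounded slice of the Kleene star for a small language over Σ = {0, 1}.

```python
# Minimal sketch (hypothetical names): the language operations defined above,
# restricted to strings of bounded length so every set is finite.
from itertools import product

SIGMA = ("0", "1")

def sigma_star(max_len: int) -> set[str]:
    """All strings over SIGMA of length <= max_len (a finite slice of Σ*)."""
    return {"".join(t) for n in range(max_len + 1) for t in product(SIGMA, repeat=n)}

def complement(L: set[str], max_len: int) -> set[str]:
    return sigma_star(max_len) - L           # L̄ = Σ* − L, truncated at max_len

def concat(L1: set[str], L2: set[str]) -> set[str]:
    return {x1 + x2 for x1 in L1 for x2 in L2}

def kleene_star(L: set[str], max_len: int) -> set[str]:
    """{ε} ∪ L ∪ L^2 ∪ ···, keeping only strings of length <= max_len."""
    result, frontier = {""}, {""}
    while frontier:
        frontier = {w for w in concat(frontier, L) if len(w) <= max_len} - result
        result |= frontier
    return result

L = {"10", "11"}                              # a tiny language over {0, 1}
print(sorted(concat(L, L)))                   # L·L
print(sorted(kleene_star(L, 4)))              # truncated Kleene star
print(len(complement(L, 2)))                  # 7 strings of length <= 2, minus 2
```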

2,817 citations


Cites methods from "Lower bounds for algebraic computat..."

  • ...Lower bounds for sorting using generalizations of the decision-tree model were studied comprehensively by Ben-Or [36]....


Book
01 Jan 1987
TL;DR: This book offers a modern approach to computational geometry, an area that studies the computational complexity of geometric problems; combinatorial investigations play an important role in this study.
Abstract: This book offers a modern approach to computational geometry, an area that studies the computational complexity of geometric problems. Combinatorial investigations play an important role in this study.

2,284 citations


Cites background from "Lower bounds for algebraic computat..."


  • ...A stronger Ω(n log n) time lower bound for identifying all extreme points of a set of n points in E^2 can be found in Yao (1981) (see also Steele, Yao (1982) and Ben-Or (1983)); this solves Exercise 8.5(a). An Ω(n log n) time lower bound for constructing the convex hull of n points in E^3, even if they are presorted, is given in Seidel (1985), which solves Exercise 8....


Book
30 Oct 1997
TL;DR: This book develops computation and complexity over a ring, covering decision problems, the class NP and NP-complete problems, complexity aspects of the Fundamental Theorem of Algebra, condition numbers, and lower bounds.
Abstract: 1 Introduction.- 2 Definitions and First Properties of Computation.- 3 Computation over a Ring.- 4 Decision Problems and Complexity over a Ring.- 5 The Class NP and NP-Complete Problems.- 6 Integer Machines.- 7 Algebraic Settings for the Problem "P ≠ NP?".- 8 Newton's Method.- 9 Fundamental Theorem of Algebra: Complexity Aspects.- 10 Bézout's Theorem.- 11 Condition Numbers and the Loss of Precision of Linear Equations.- 12 The Condition Number for Nonlinear Problems.- 13 The Condition Number in P(H(d)).- 14 Complexity and the Condition Number.- 15 Linear Programming.- 16 Deterministic Lower Bounds.- 17 Probabilistic Machines.- 18 Parallel Computations.- 19 Some Separations of Complexity Classes.- 20 Weak Machines.- 21 Additive Machines.- 22 Nonuniform Complexity Classes.- 23 Descriptive Complexity.- References.

1,594 citations


Cites background from "Lower bounds for algebraic computat..."

  • ...Some central examples of algebraic computational models along with lower bounds for them are given by Steele and Yao [1982], Ben-Or [1983], and Smale [1987]. Two early books on algebraic complexity are the ones by Borodin and Munro [1975] and by Winograd [1980]. A recent survey of the subject can be found in [Strassen 1990]. For a comprehensive book see [Bürgisser, Clausen, and Shokrollahi 1996]. Complexity issues are at the forefront of current research related to designing algorithms for finding zeros of polynomials and determining the solvability of polynomial systems. Amongst the major references here are: Collins [1975]; Schönhage [1982]; Shub and Smale [1993a, 1993b, 1993c, 1996, 1994]; Ben-Or, Kozen, and Reif [1986]; Grigoriev and Vorobjov [1988]; Renegar [1987a, 1992a]; Pan [1987, 1995]; Canny [1988]; and Heintz, Roy, and Solerno [1990]. This work may be considered the modern counterpart to algorithmic investigations begun earlier in the century by Hermann [1926], Van der Waerden [1949], and Tarski [1951], in particular related to elimination theory for real closed fields....


Book
01 May 2002
TL;DR: This book is primarily a textbook introduction to various areas of discrete geometry; in each area it explains several key results and methods in an accessible and concrete manner.
Abstract: From the Publisher: Discrete geometry investigates combinatorial properties of configurations of geometric objects. To a working mathematician or computer scientist, it offers sophisticated results and techniques of great diversity and it is a foundation for fields such as computational geometry or combinatorial optimization. This book is primarily a textbook introduction to various areas of discrete geometry. In each area, it explains several key results and methods, in an accessible and concrete manner. It also contains more advanced material in separate sections and thus it can serve as a collection of surveys in several narrower subfields. The main topics include: basics on convex sets, convex polytopes, and hyperplane arrangements; combinatorial complexity of geometric configurations; intersection patterns and transversals of convex sets; geometric Ramsey-type results; polyhedral combinatorics and high-dimensional convexity; and lastly, embeddings of finite metric spaces into normed spaces. Jiri Matousek is Professor of Computer Science at Charles University in Prague. His research has contributed to several of the considered areas and to their algorithmic applications. This is his third book.

1,591 citations

References
Book
01 Jan 1968
TL;DR: Singular Points of Complex Hypersurfaces (AM-61) is a seminal work in the area of complex hypersurfaces.
Abstract: The description for this book, Singular Points of Complex Hypersurfaces (AM-61), will be forthcoming.

2,676 citations

Journal ArticleDOI
01 Feb 1964
TL;DR: In this article, it was shown that the number of points in Vc is equal to (deg f1)(deg f2)···(deg fm), since each point of V0 lies close to some real point of Vc.
Abstract: PROOF. Approximate f1, ..., fm by real polynomials F1, ..., Fm of the same degrees whose coefficients are algebraically independent. Now consider the variety Vc in complex Cartesian space defined by the equations F1 = 0, ..., Fm = 0. It follows from van der Waerden [9, §41] that the number of points in Vc is equal to (deg f1)(deg f2)···(deg fm). Since each point of V0 lies close to some real point of Vc, this proves Lemma 1.
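The degree count in this proof sketch is the classical Bézout-style bound. The small worked instance below is added here for illustration only and is not taken from the cited paper; it shows what the product of degrees means in the simplest case.

```latex
% Illustration only (not from the cited paper): the product-of-degrees count
% for polynomials in general position.
% For two generic conics F_1, F_2 in two variables (\deg F_1 = \deg F_2 = 2),
% the number of common complex zeros is
\[
  (\deg F_1)(\deg F_2) = 2 \cdot 2 = 4,
\]
% matching the count used in the lemma above: with algebraically independent
% coefficients, the system F_1 = \cdots = F_m = 0 has exactly
% \prod_{i=1}^{m} \deg F_i isolated solutions.
```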

577 citations

Journal ArticleDOI
TL;DR: Using the nonscalar complexity in k, the complexities of single power sums, single elementary symmetric functions, the resultant, and the discriminant as root functions are determined up to order of magnitude.

473 citations

Book
01 Jan 1975
TL;DR: An early book on algebraic complexity: The Computational Complexity of Algebraic and Numeric Problems (Elsevier Computer Science Library, Theory of Computation Series 1), by Borodin and Munro.

421 citations

Proceedings ArticleDOI
05 May 1975
TL;DR: An effort is made to recast classical theorems into a useful computational form and analogies are developed between constructibility questions in Euclidean geometry and computability questions in modern computational complexity.
Abstract: The complexity of a number of fundamental problems in computational geometry is examined and a number of new fast algorithms are presented and analyzed. General methods for obtaining results in geometric complexity are given and upper and lower bounds are obtained for problems involving sets of points, lines, and polygons in the plane. An effort is made to recast classical theorems into a useful computational form and analogies are developed between constructibility questions in Euclidean geometry and computability questions in modern computational complexity.

287 citations


"Lower bounds for algebraic computat..." refers background in this paper

  • ...More recently, Shamos, in his work on computational geometry [18], studied a number of fundamental problems in this area and was able to give upper and lower bounds for problems involving sets of points, lines, and polygons in the plane....
