
Showing papers in "Journal of the ACM in 1988"


Journal ArticleDOI
TL;DR: An alternative maximum-flow method based on the preflow concept of Karzanov is introduced; it runs as fast as any other known method on dense graphs, achieving an O(n^3) time bound on an n-vertex graph, and a dynamic-tree variant is as fast as any known method for any graph density and faster on graphs of moderate density.
Abstract: All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n^3) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n^2/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n^2 log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm, which also uses n processors but requires O(n^2) space.
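
To make the preflow idea concrete, here is a minimal generic push-relabel sketch in Python. It illustrates the technique only; it is not the paper's exact formulation (no FIFO or highest-label selection rule, no dynamic-tree speedup, no parallel variant), and the graph encoding and function name are invented for the example.

```python
from collections import defaultdict

def max_flow_push_relabel(n, capacity, s, t):
    """Generic push-relabel maximum flow.

    n        -- number of vertices, labeled 0..n-1
    capacity -- dict mapping directed edge (u, v) to its capacity
    s, t     -- source and sink
    """
    cap = defaultdict(int)            # residual capacities
    adj = defaultdict(set)            # adjacency in the residual graph
    for (u, v), c in capacity.items():
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)

    height = [0] * n                  # distance labels
    excess = [0] * n                  # inflow minus outflow (the preflow slack)
    height[s] = n

    # Create the initial preflow by saturating every edge out of the source.
    for v in list(adj[s]):
        delta = cap[(s, v)]
        cap[(s, v)] -= delta
        cap[(v, s)] += delta
        excess[v] += delta
        excess[s] -= delta

    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active.pop()
        # Push excess along admissible residual edges (height drops by exactly 1).
        for v in adj[u]:
            if excess[u] == 0:
                break
            if cap[(u, v)] > 0 and height[u] == height[v] + 1:
                delta = min(excess[u], cap[(u, v)])
                cap[(u, v)] -= delta
                cap[(v, u)] += delta
                excess[u] -= delta
                excess[v] += delta
                if v not in (s, t) and v not in active:
                    active.append(v)
        # Relabel if excess remains: lift u just above its lowest residual neighbor.
        if excess[u] > 0:
            height[u] = 1 + min(height[v] for v in adj[u] if cap[(u, v)] > 0)
            active.append(u)

    return excess[t]                  # excess arriving at the sink = flow value
```

For instance, max_flow_push_relabel(4, {(0, 1): 3, (0, 2): 2, (1, 2): 1, (1, 3): 2, (2, 3): 3}, 0, 3) returns 5, saturating the cut around the source.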

1,700 citations


Journal ArticleDOI
TL;DR: Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models, using new fault-tolerant “distributed clock” protocols that allow partially synchronous processors to reach an approximately common notion of time.
Abstract: The concept of partial synchrony in a distributed system is introduced. Partial synchrony lies between the cases of a synchronous system and an asynchronous system. In a synchronous system, there is a known fixed upper bound D on the time required for a message to be sent from one processor to another and a known fixed upper bound P on the relative speeds of different processors. In an asynchronous system no fixed upper bounds D and P exist. In one version of partial synchrony, fixed bounds D and P exist, but they are not known a priori. The problem is to design protocols that work correctly in the partially synchronous system regardless of the actual values of the bounds D and P. In another version of partial synchrony, the bounds are known, but are only guaranteed to hold starting at some unknown time T, and protocols must be designed to work correctly regardless of when time T occurs. Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models. Lower bounds that show in most cases that our protocols are optimal with respect to the number of faults tolerated are also given. Our consensus protocols for partially synchronous processors use new protocols for fault-tolerant “distributed clocks” that allow partially synchronous processors to reach some approximately common notion of time.
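
The "bounds exist but are unknown" variant is commonly handled by guessing the bound and growing the guess on failure. The toy sketch below is my own illustration of that device, not the paper's consensus protocol; all names and constants are invented.

```python
import random

rng = random.Random(1)
TRUE_D = 7.0  # the real message-delay bound; hidden from the protocol

def round_trip():
    """One message exchange; latency is arbitrary but never exceeds TRUE_D."""
    return rng.uniform(0.0, TRUE_D)

def adaptive_timeout():
    """Double the timeout guess after every expiry. Whatever the hidden
    bound D is, in the worst case the guess stops growing once it
    exceeds D, after which no correct message is ever timed out."""
    timeout, retries = 1.0, 0
    while round_trip() > timeout:   # expiry: reply was slower than the guess
        timeout *= 2.0
        retries += 1
    return timeout, retries

print(adaptive_timeout())           # a guess large enough for the observed delays
```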

1,613 citations


Journal ArticleDOI
TL;DR: It is shown for various classes of concept representations that these cannot be learned feasibly in a distribution-free sense unless RP = NP, and relationships between learning of heuristics and finding approximate solutions to NP-hard optimization problems are given.
Abstract: The computational complexity of learning Boolean concepts from examples is investigated. It is shown for various classes of concept representations that these cannot be learned feasibly in a distribution-free sense unless RP = NP. These classes include (a) disjunctions of two monomials, (b) Boolean threshold functions, and (c) Boolean formulas in which each variable occurs at most once. Relationships between learning of heuristics and finding approximate solutions to NP-hard optimization problems are given.

539 citations


Journal ArticleDOI
TL;DR: With probability tending to 1, a randomly chosen family of cn clauses of size k over n variables is unsatisfiable, but every resolution proof of its unsatisfiability must generate at least (1 + ε)^n clauses.
Abstract: For every choice of positive integers c and k such that k ≥ 3 and c·2^(-k) ≥ 0.7, there is a positive number ε such that, with probability tending to 1 as n tends to ∞, a randomly chosen family of cn clauses of size k over n variables is unsatisfiable, but every resolution proof of its unsatisfiability must generate at least (1 + ε)^n clauses.
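
For intuition about the parameters, the small sketch below (names and structure mine) samples a random k-CNF at clause density c and brute-forces satisfiability for a tiny n; the example values satisfy the theorem's density condition c·2^(-k) ≥ 0.7. The theorem itself, of course, concerns asymptotics far beyond brute force.

```python
import itertools
import random

def random_kcnf(n, k, c, rng=random.Random(0)):
    """Sample c*n clauses, each over k distinct variables, signs uniform."""
    clauses = []
    for _ in range(int(c * n)):
        vs = rng.sample(range(1, n + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

def satisfiable(clauses, n):
    """Brute force over all 2^n assignments (tiny n only)."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl)
               for cl in clauses):
            return True
    return False

# k = 3, c = 6 gives c * 2**-k = 0.75 >= 0.7, so for large n such a
# formula is almost surely unsatisfiable (and hard for resolution).
k, c, n = 3, 6, 12
print(satisfiable(random_kcnf(n, k, c), n))
```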

494 citations


Journal ArticleDOI
TL;DR: It is shown that determining whether s(G) ≤ K, for a given integer K, is NP-complete for general graphs but can be solved in linear time for trees.
Abstract: T. Parsons originally proposed and studied the following pursuit-evasion problem on graphs: Members of a team of searchers traverse the edges of a graph G in pursuit of a fugitive, who moves along the edges of the graph with complete knowledge of the locations of the pursuers. What is the smallest number s(G) of searchers that will suffice for guaranteeing capture of the fugitive? It is shown that determining whether s(G) ≤ K, for a given integer K, is NP-complete for general graphs but can be solved in linear time for trees. We also provide a structural characterization of those graphs G with s(G) ≤ K for K = 1, 2, 3.

412 citations


Journal ArticleDOI
TL;DR: Algorithms for containment and equivalence of such “inequality queries” are given, under the assumption that the data domains are dense and totally ordered.
Abstract: Conjunctive queries are generalized so that inequality comparisons can be made between elements of the query. Algorithms for containment and equivalence of such “inequality queries” are given, under the assumption that the data domains are dense and totally ordered. In general, containment does not imply the existence of homomorphisms (containment mappings), but the homomorphism property does exist for subclasses of inequality queries. A minimization algorithm is defined using the equivalence algorithm. It is first shown that the constants appearing in a query can be divided into “essential” and “nonessential” subgroups. The minimum query can be nondeterministically guessed using only the essential constants of the original query.
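
For plain conjunctive queries (no inequalities), containment is equivalent to the existence of a containment mapping, and that is the baseline the paper contrasts against: with inequality atoms the homomorphism criterion is no longer complete in general. A brute-force containment-mapping check for the plain case, with an invented term encoding, might look like this:

```python
from itertools import product

def containment_mapping(q1, q2):
    """Brute-force test for Q1 ⊆ Q2 for plain conjunctive queries via a
    containment mapping: a homomorphism from Q2 into Q1 that fixes the
    head. A query is (head, body); body atoms are (predicate, args);
    variables are strings, constants are ints."""
    head1, body1 = q1
    head2, body2 = q2
    atoms1 = {(p, tuple(args)) for p, args in body1}
    terms1 = sorted({a for _, args in body1 for a in args}, key=str)
    vars2 = sorted({a for _, args in body2 for a in args if isinstance(a, str)})
    for image in product(terms1, repeat=len(vars2)):
        h = dict(zip(vars2, image))
        f = lambda a: h.get(a, a)   # variables map via h, constants stay fixed
        if (all((p, tuple(f(a) for a in args)) in atoms1 for p, args in body2)
                and [f(a) for a in head2] == list(head1)):
            return h                # mapping found: Q1 is contained in Q2
    return None

# Example: Q1(x) :- e(x,y), e(y,z)  is contained in  Q2(x) :- e(x,u)
q1 = (["x"], [("e", ["x", "y"]), ("e", ["y", "z"])])
q2 = (["x"], [("e", ["x", "u"])])
print(containment_mapping(q1, q2))  # {'u': 'y', 'x': 'x'}
```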

313 citations


Journal ArticleDOI
TL;DR: New nonconstructive methods are employed to prove membership in P for a number of problems whose complexities are not otherwise known, and the utility of these techniques is illustrated.
Abstract: Recent advances in graph theory and graph algorithms dramatically alter the traditional view of concrete complexity theory, in which a decision problem is generally shown to be in P by producing an efficient algorithm to solve an optimization version of the problem. Nonconstructive tools are now available for classifying problems as decidable in polynomial time by guaranteeing only the existence of polynomial-time decision algorithms. In this paper these new methods are employed to prove membership in P for a number of problems whose complexities are not otherwise known. Powerful consequences of these techniques are pointed out and their utility is illustrated. A type of partially ordered set that supports this general approach is defined and explored.

246 citations


Journal ArticleDOI
TL;DR: A new technique for proving lower bounds in the synchronous model is presented, based on a string-producing mechanism from formal language theory, first introduced by Thue to study square-free words.
Abstract: The computational capabilities of a system of n indistinguishable (anonymous) processors arranged on a ring in the synchronous and asynchronous models of distributed computation are analyzed. A precise characterization of the functions that can be computed in this setting is given. It is shown that any of these functions can be computed in O(n^2) messages in the asynchronous model. This is also proved to be a lower bound for such elementary functions as AND, SUM, and Orientation. In the synchronous model any computable function can be computed in O(n log n) messages. A ring can be oriented and start synchronized within the same bounds. The main contribution of this paper is a new technique for proving lower bounds in the synchronous model. With this technique tight lower bounds of Ω(n log n) (for particular n) are proved for XOR, SUM, Orientation, and Start Synchronization. The technique is based on a string-producing mechanism from formal language theory, first introduced by Thue to study square-free words. Two methods for generalizing the synchronous lower bounds to arbitrary ring sizes are presented.

213 citations


Journal ArticleDOI
TL;DR: It is established that binary exponential backoff is stable if the sum of the arrival rates is sufficiently small, and simulation results are reported indicating that alternative retransmission protocols can significantly improve performance.
Abstract: Binary exponential backoff is a randomized protocol for regulating transmissions on a multiple-access broadcast channel. Ethernet, a local-area network, is built upon this protocol. The fundamental theoretical issue is stability: Does the backlog of packets awaiting transmission remain bounded in time, provided the rates of new packet arrivals are small enough? It is assumed n ≥ 2 stations share the channel, each having an infinite buffer where packets accumulate while the station attempts to transmit the first from the buffer. Here, it is established that binary exponential backoff is stable if the sum of the arrival rates is sufficiently small. Detailed results are obtained on which rates lead to stability when n = 2 stations share the channel. In passing, several other results are derived bearing on the efficiency of the conflict resolution process. Simulation results are reported that, in particular, indicate alternative retransmission protocols can significantly improve performance.
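
A slotted-time toy simulation makes the protocol concrete. This is a simplified model with invented parameters (Bernoulli arrivals, a retry cap as in Ethernet practice), meant to show the mechanics of backoff, not to reproduce the paper's stability analysis.

```python
import random

def simulate_beb(n=10, arrival_rate=0.02, slots=100_000, seed=0):
    """Slotted-time sketch of binary exponential backoff: after its
    i-th consecutive collision, a station waits a uniform number of
    slots in [0, 2**min(i, 10)) before retrying.
    Returns (packets delivered, final backlog)."""
    rng = random.Random(seed)
    queue = [0] * n        # packets buffered at each station
    wait = [0] * n         # backoff slots left before the next attempt
    coll = [0] * n         # consecutive collisions of the head-of-line packet
    delivered = 0
    for _ in range(slots):
        for s in range(n):                     # Bernoulli packet arrivals
            if rng.random() < arrival_rate:
                queue[s] += 1
        ready = [s for s in range(n) if queue[s] > 0 and wait[s] == 0]
        if len(ready) == 1:                    # lone sender: success
            s = ready[0]
            queue[s] -= 1
            coll[s] = 0
            delivered += 1
        elif len(ready) > 1:                   # collision: all senders back off
            for s in ready:
                coll[s] += 1
                wait[s] = rng.randrange(2 ** min(coll[s], 10))
        for s in range(n):
            if wait[s] > 0:
                wait[s] -= 1
    return delivered, sum(queue)

print(simulate_beb())   # total arrival rate 0.2: well inside stability here
```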

199 citations


Journal ArticleDOI
TL;DR: It is shown that the shortest time to extinction (STE) policy is optimal for a class of continuous and discrete time nonpreemptive M/G/1 queues that do not allow unforced idle times.
Abstract: Many problems can be modeled as single-server queues with impatient customers. An example is that of the transmission of voice packets over a packet-switched network. If the voice packets do not reach their destination within a certain time interval of their transmission, they are useless to the receiver and considered lost. It is therefore desirable to schedule the customers such that the fraction of customers served within their respective deadlines is maximized. For this measure of performance, it is shown that the shortest time to extinction (STE) policy is optimal for a class of continuous and discrete time nonpreemptive M/G/1 queues that do not allow unforced idle times. When unforced idle times are allowed, the best policies belong to the class of shortest time to extinction with inserted idle time (STEI) policies. An STEI policy requires that the customer closest to his or her deadline be scheduled whenever it schedules a customer. It also has the choice of inserting idle times while the queue is nonempty. It is also shown that the STE policy is optimal for the discrete time G/D/1 queue where all customers receive one unit of service. The paper concludes with a comparison of the expected customer loss using an STE policy with that of the first-come, first-served (FCFS) scheduling policy for one specific queue.
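
The STE rule is easy to compare against FCFS in a toy discrete-time queue with unit service times and impatient customers (all parameters below are invented; the paper's optimality proof covers nonpreemptive M/G/1-type queues without unforced idle times).

```python
import random

def loss_fraction(policy, lam=0.8, horizon=50_000, seed=0):
    """Discrete-time queue with unit services: each arrival gets a
    deadline (the last slot in which its service may start), and
    customers past their deadline are lost. Compares FCFS with STE
    ('serve the customer closest to its deadline')."""
    rng = random.Random(seed)
    queue, lost, arrived = [], 0, 0
    for t in range(horizon):
        if rng.random() < lam:                   # at most one arrival per slot
            arrived += 1
            queue.append(t + rng.randint(1, 8))  # deadline for starting service
        alive = [d for d in queue if d >= t]
        lost += len(queue) - len(alive)          # expired customers are lost
        queue = alive
        if queue:                                # serve exactly one per slot
            idx = 0 if policy == "FCFS" else queue.index(min(queue))
            queue.pop(idx)
    return lost / max(arrived, 1)

print("FCFS loss:", loss_fraction("FCFS"))
print("STE  loss:", loss_fraction("STE"))
```

On runs like this, STE typically loses a noticeably smaller fraction of customers than FCFS, which is the behavior the optimality result formalizes.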

188 citations


Journal ArticleDOI
TL;DR: Using Thérien's classification of finite monoids, new characterizations of the classes AC0, depth-k AC0, and ACC are given, along with a new proof that the dot-depth hierarchy of algebraic automata theory is infinite.
Abstract: Recently a new connection was discovered between the parallel complexity class NC1 and the theory of finite automata in the work of Barrington on bounded-width branching programs. There (nonuniform) NC1 was characterized as those languages recognized by a certain nonuniform version of a DFA. Here we extend this characterization to show that the internal structures of NC1 and the class of automata are closely related. In particular, using Thérien's classification of finite monoids, we give new characterizations of the classes AC0, depth-k AC0, and ACC, the last being the AC0 closure of the mod q functions for all constant q. We settle some of the open questions in [3], give a new proof that the dot-depth hierarchy of algebraic automata theory is infinite [8], and offer a new framework for understanding the internal structure of NC1.

Journal ArticleDOI
TL;DR: It is shown for arbitrary graphs that a degenerate form of the basic annealing algorithm (obtained by letting “temperature” be a suitably chosen constant) produces matchings with nearly maximum cardinality in polynomial average time.
Abstract: The random, heuristic search algorithm called simulated annealing is considered for the problem of finding the maximum cardinality matching in a graph. It is shown that neither a basic form of the algorithm, nor any other algorithm in a fairly large related class of algorithms, can find maximum cardinality matchings such that the average time required grows as a polynomial in the number of nodes of the graph. In contrast, it is also shown for arbitrary graphs that a degenerate form of the basic annealing algorithm (obtained by letting “temperature” be a suitably chosen constant) produces matchings with nearly maximum cardinality in polynomial average time.
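
A sketch of the degenerate, fixed-temperature chain described above; the move set and constants are my guesses for illustration, not the paper's exact chain.

```python
import math
import random

def anneal_matching(edges, steps=200_000, temp=0.2, seed=0):
    """Fixed-temperature 'annealing' for near-maximum-cardinality
    matching: the state is a matching; a random edge is proposed;
    adding an edge (enlarging the matching) is always accepted, and
    deleting one is accepted with Metropolis probability exp(-1/temp)."""
    rng = random.Random(seed)
    p_delete = math.exp(-1.0 / temp)
    matching, covered = set(), set()
    for _ in range(steps):
        u, v = rng.choice(edges)
        if (u, v) in matching:
            if rng.random() < p_delete:   # downhill move: remove the edge
                matching.remove((u, v))
                covered -= {u, v}
        elif u not in covered and v not in covered:
            matching.add((u, v))          # uphill move: always accept
            covered |= {u, v}
    return matching

edges = [(i, i + 1) for i in range(20)]   # a path; maximum matching has 10 edges
print(len(anneal_matching(edges)))        # typically close to 10
```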

Journal ArticleDOI
TL;DR: It is shown that analysis is tractable for this model provided certain restrictions are imposed on subject creation, and that the model nonetheless admits a variety of useful systems.
Abstract: The protection state of a system is defined by the privileges possessed by subjects at a given moment. Operations that change this state are themselves authorized by the current state. This poses a design problem in constructing the initial state so that all derivable states conform to a particular policy. It also raises an analysis problem of characterizing the protection states derivable from a given initial state. A protection model provides a framework for both design and analysis. Design generality and tractable analysis are inherently conflicting goals. Analysis is particularly difficult if creation of subjects is permitted. The schematic protection model resolves this conflict by classifying subjects and objects into protection types. The privileges possessed by a subject consist of a type-determined part specified by a static protection scheme and a dynamic part consisting of tickets (capabilities). It is shown that analysis is tractable for this model provided certain restrictions are imposed on subject creation. A scheme authorizes creation of subjects via a binary relation on subject types. Our principal constraint is that this relation be acyclic, excepting loops that authorize a subject to create subjects of its own type. Our assumptions admit a variety of useful systems.
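
The acyclicity restriction on creation is straightforward to check. Below is a small sketch (the representation is invented; the criterion is the one stated in the abstract) that ignores self-loops and then tests for a cycle among distinct types by iterated removal of in-degree-zero types.

```python
def creation_ok(can_create):
    """Check the principal restriction of the schematic protection
    model: the can-create relation on subject types must be acyclic
    once loops (a type creating its own type) are ignored.
    `can_create` maps a type to the set of types it may create."""
    graph = {t: {u for u in succs if u != t}       # drop self-loops
             for t, succs in can_create.items()}
    nodes = set(graph) | {u for succs in graph.values() for u in succs}
    indeg = {t: 0 for t in nodes}
    for succs in graph.values():
        for u in succs:
            indeg[u] += 1
    stack = [t for t in nodes if indeg[t] == 0]    # Kahn's algorithm
    seen = 0
    while stack:
        t = stack.pop()
        seen += 1
        for u in graph.get(t, ()):
            indeg[u] -= 1
            if indeg[u] == 0:
                stack.append(u)
    return seen == len(nodes)   # True iff acyclic apart from self-loops

# A user type may create its own type and a subordinate process type,
# but no creation cycle between distinct types is allowed.
print(creation_ok({"user": {"user", "process"}, "process": set()}))  # True
print(creation_ok({"a": {"b"}, "b": {"a"}}))                         # False
```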

Journal ArticleDOI
TL;DR: Three ways in which formal languages can be defined by Thue systems with the Church-Rosser property are studied, along with some general results about the three families of languages so determined.
Abstract: Since about 1971, much research has been done on Thue systems that have properties that ensure viable and efficient computation. The strongest of these is the Church-Rosser property, which states that two equivalent strings can each be brought to a unique canonical form by a sequence of length-reducing rules. In this paper three ways in which formal languages can be defined by Thue systems with this property are studied, and some general results about the three families of languages so determined are established.
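
A minimal sketch of reduction by length-reducing rules (the encoding is invented, and confluence is assumed rather than checked; the Church-Rosser property is what makes the resulting canonical form independent of the order in which rules are applied):

```python
def canonical_form(word, rules):
    """Reduce `word` with length-reducing rules (each left side strictly
    longer than its right side), applying the leftmost applicable rule
    until none applies. Terminates because every step shortens the word."""
    changed = True
    while changed:
        changed = False
        for left, right in rules:
            i = word.find(left)
            if i >= 0:
                word = word[:i] + right + word[i + len(left):]
                changed = True
                break
    return word

# The system {aa -> a, bb -> b} is Church-Rosser; every word reduces to
# the word with its runs of equal letters collapsed.
print(canonical_form("aabbbab", [("aa", "a"), ("bb", "b")]))  # "abab"
```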

Journal ArticleDOI
TL;DR: The “uniqueness” property of logical rules is introduced, which is satisfied by many of the common examples of rules and is easily recognized.
Abstract: Considered is the question of whether top-down (Prolog-like) evaluation of a set of logical rules can be guaranteed to terminate. The NAIL! system is designed to process programs consisting of logical rules and to select, for each fragment of the program, the best from among many possible strategies for its evaluation. In the context of such a system, it is essential that termination tests be fast. Thus, the “uniqueness” property of logical rules is introduced. This property is satisfied by many of the common examples of rules and is easily recognized. For rules with this property, a set of inequalities, whose satisfaction is sufficient for termination of the rules, can be generated in polynomial time. Then a polynomial test for satisfaction of constraints generated by this process is given.

Journal ArticleDOI
TL;DR: It is shown that most algebraic algorithms can be probabilistically applied to data that are given by a straight-line computation, and every degree-bounded rational function can be computed fast in parallel, that is, in polynomial size and polylogarithmic depth.
Abstract: Algorithms on multivariate polynomials represented by straight-line programs are developed. First, it is shown that most algebraic algorithms can be probabilistically applied to data that are given by a straight-line computation. Testing such rational numeric data for zero, for instance, is facilitated by random evaluations modulo random prime numbers. Then, auxiliary algorithms that determine the coefficients of a multivariate polynomial in a single variable are constructed. The first main result is an algorithm that produces the greatest common divisor of the input polynomials, all in straight-line representation. The second result shows how to find a straight-line program for the reduced numerator and denominator from one for the corresponding rational function. Both the algorithm for that construction and the greatest common divisor algorithm are in random polynomial time for the usual coefficient fields and output a straight-line program, which with controllably high probability correctly determines the requested answer. The running times are polynomial functions in the binary input size, the input degrees as unary numbers, and the logarithm of the inverse of the failure probability. The algorithm for straight-line programs for the numerators and denominators of rational functions implies that every degree-bounded rational function can be computed fast in parallel, that is, in polynomial size and polylogarithmic depth.
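
The zero-testing primitive the paper builds on can be sketched directly: evaluate the straight-line program at random points modulo a random prime and report "zero" only if every evaluation vanishes. The instruction format below is invented, and the primes come from a fixed list rather than being generated at random, so this is only a schematic of the idea.

```python
import random

def probably_zero(slp, nvars, trials=20, seed=0):
    """One-sided probabilistic zero test for a polynomial given as a
    straight-line program. `slp` is a list of (op, i, j) instructions
    with op in {'+', '-', '*'}; operands index earlier results, and
    indices 0..nvars-1 denote the input variables."""
    rng = random.Random(seed)
    primes = [10**9 + 7, 10**9 + 9, 998244353, 754974721]
    for _ in range(trials):
        p = rng.choice(primes)
        vals = [rng.randrange(p) for _ in range(nvars)]
        for op, i, j in slp:
            a, b = vals[i], vals[j]
            vals.append((a + b) % p if op == '+'
                        else (a - b) % p if op == '-'
                        else (a * b) % p)
        if vals[-1] != 0:
            return False      # certainly nonzero
    return True               # zero with high probability

# (x + y)*(x - y) - x*x + y*y is identically zero:
slp = [('+', 0, 1), ('-', 0, 1), ('*', 2, 3),   # v2=x+y, v3=x-y, v4=v2*v3
       ('*', 0, 0), ('-', 4, 5),                # v5=x*x, v6=v4-v5
       ('*', 1, 1), ('+', 6, 7)]                # v7=y*y, v8=v6+v7
print(probably_zero(slp, nvars=2))  # True
```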

Journal ArticleDOI
TL;DR: Since serializability is not adequate in the presence of common failures, another theory of correctness is proposed, involving the concepts of commit serializability, recoverability, and resiliency.
Abstract: Reliable concurrent processing of transactions in a database system is examined. Since serializability, the conventional concurrency control correctness criterion, is not adequate in the presence of common failures, another theory of correctness is proposed, involving the concepts of commit serializability, recoverability, and resiliency.

Journal ArticleDOI
TL;DR: The hierarchy of the classes BPd(P) of all sequences of Boolean functions that may be computed by d-times-only branching programs of polynomial size is introduced, and it is shown constructively that BP1(P) is a proper subset of BP2(P).
Abstract: Exponential lower bounds on the complexity of computing the clique functions in the Boolean decision-tree model are proved. For one-time-only branching programs, large polynomial lower bounds are proved for k-clique functions if k is fixed, and exponential lower bounds if k increases with n. Finally, the hierarchy of the classes BPd(P) of all sequences of Boolean functions that may be computed by d-times-only branching programs of polynomial size is introduced. It is shown constructively that BP1(P) is a proper subset of BP2(P).

Journal ArticleDOI
TL;DR: A probabilistic scheme for implementing shared memory on a bounded-degree network of processors that enables n processors to store and retrieve an arbitrary set of n data items in O(log n) parallel steps is presented.
Abstract: A central issue in the theory of parallel computation is the gap between the ideal models that utilize shared memory and the feasible models that consist of a bounded-degree network of processors sharing no common memory. This problem has been widely studied. Here a tight bound for the probabilistic complexity of this problem is established. The solution in this paper is based on a probabilistic scheme for implementing shared memory on a bounded-degree network of processors. This scheme, which we term parallel hashing, enables n processors to store and retrieve an arbitrary set of n data items in O(log n) parallel steps. The items’ locations are specified by a function chosen randomly from a small class of universal hash functions. A hash function in this class has a small description and can therefore be efficiently distributed among the processors. A deterministic lower bound for the point-to-point communication model is also presented.
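
A concretely small universal class in the Carter-Wegman style shows why the "small description" property matters: broadcasting two integers suffices for all processors to agree on every item's location. (This particular class is a stand-in for illustration; the paper uses its own small class of universal hash functions.)

```python
import random

def make_universal_hash(m, p=2_147_483_647, rng=random.Random(0)):
    """One function from the Carter-Wegman universal class
    h(x) = ((a*x + b) mod p) mod m, for integer keys below the prime p.
    Its description is just (a, b, p, m), so it is cheap to broadcast."""
    a = rng.randrange(1, p)
    b = rng.randrange(p)
    return lambda x: ((a * x + b) % p) % m

n = 16                                   # processors / memory modules
h = make_universal_hash(n)
items = [7, 42, 1001, 31337]
print({x: h(x) for x in items})          # each item's assigned processor
```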

Journal ArticleDOI
TL;DR: Topological methods and a reduction to linear matroid parity are used to develop a polynomial-time algorithm to find a maximum-genus cellular imbedding, which seems to be the first imbedding algorithm for which the running time is not exponential in the genus of the imbedding surface.
Abstract: The computational complexity of constructing the imbeddings of a given graph into surfaces of different genus is not well understood. In this paper, topological methods and a reduction to linear matroid parity are used to develop a polynomial-time algorithm to find a maximum-genus cellular imbedding. This seems to be the first imbedding algorithm for which the running time is not exponential in the genus of the imbedding surface.

Journal ArticleDOI
TL;DR: It is shown that complete and minimal sets of unifiers may not always exist for many-sorted unification, and it is proved that being a forest-structured sort hierarchy is a necessary and sufficient criterion for the Robinson Unification Theorem to hold.
Abstract: Many-sorted unification is considered; that is, unification in the many-sorted free algebras of terms, where variables, as well as the domains and ranges of functions, are restricted to certain subsets of the universe, given as a potentially infinite hierarchy of sorts. It is shown that complete and minimal sets of unifiers may not always exist for many-sorted unification. Conditions for sort hierarchies that are equivalent for the existence of these sets with one, finitely many, or infinitely many elements are presented. It is also proved that being a forest-structured sort hierarchy is a necessary and sufficient criterion for the Robinson Unification Theorem to hold for many-sorted unification. An algorithm for many-sorted unification is given.
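
For contrast with the many-sorted setting, here is the classical unsorted Robinson algorithm, which always returns a single most general unifier when one exists; the paper's result says this behavior carries over to many-sorted terms exactly when the sort hierarchy is a forest. The term encoding is invented for the sketch.

```python
def unify(s, t, subst=None):
    """Robinson unification for unsorted terms. Variables are strings;
    compound terms are (function_symbol, [args...]) tuples. Returns a
    most general unifier as a dict, or None if the terms don't unify."""
    if subst is None:
        subst = {}

    def walk(u):                      # resolve a variable through subst
        while isinstance(u, str) and u in subst:
            u = subst[u]
        return u

    def occurs(v, u):                 # occurs check: does v appear in u?
        u = walk(u)
        if u == v:
            return True
        return not isinstance(u, str) and any(occurs(v, a) for a in u[1])

    s, t = walk(s), walk(t)
    if s == t:
        return subst
    if isinstance(s, str):
        return None if occurs(s, t) else {**subst, s: t}
    if isinstance(t, str):
        return unify(t, s, subst)
    f, sargs = s
    g, targs = t
    if f != g or len(sargs) != len(targs):
        return None                   # symbol clash
    for a, b in zip(sargs, targs):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

print(unify(('f', ['x', ('g', ['y'])]), ('f', [('g', ['z']), 'x'])))
# {'x': ('g', ['z']), 'y': 'z'}
```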

Journal ArticleDOI
TL;DR: It can be shown that the reducibility of a program's augmented flow graph, augmenting edges and all, is a necessary and sufficient condition for the eliminability of go to's from that program under the stricter rules.
Abstract: Suppose we want to eliminate the local go to statements of a Pascal program by replacing them with multilevel loop exit statements. The standard ground rules for eliminating go to's require that we preserve the flow graph of the program, but they allow us to completely rewrite the control structures that glue together the program's atomic tests and actions. The go to's can be eliminated from a program under those ground rules if and only if the flow graph of that program has the graph-theoretic property named reducibility. This paper considers a stricter set of ground rules, introduced by Peterson, Kasami, and Tokura, which demand that we preserve the program's original control structures, as well as its flow graph, while we eliminate its go to's. In particular, we are allowed to delete the go to statements and the labels that they jump to and to insert various exit statements and labeled repeat-endloop pairs for them to jump out of. But we are forbidden to change the rest of the program text in any way. The critical issue that determines whether go to's can be eliminated under these stricter rules turns out to be the static order of the atomic tests and actions in the program text. This static order can be encoded in the program's flow graph by augmenting it with extra edges. It can then be shown that the reducibility of a program's augmented flow graph, augmenting edges and all, is a necessary and sufficient condition for the eliminability of go to's from that program under the stricter rules.
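
Reducibility itself is easy to test by the classic T1/T2 collapsing characterization (delete self-loops; absorb a node with a unique predecessor into that predecessor). The sketch below checks plain flow-graph reducibility only; it does not construct the paper's augmented flow graph.

```python
def is_reducible(entry, edges):
    """A flow graph is reducible iff repeated T1/T2 transformations
    collapse it to the single entry node."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, set()).add(v)
        succ.setdefault(v, set())
    succ.setdefault(entry, set())
    changed = True
    while changed:
        changed = False
        for u in list(succ):
            succ[u].discard(u)                 # T1: remove self-loops
        preds = {}
        for u in succ:
            for v in succ[u]:
                preds.setdefault(v, set()).add(u)
        for v, ps in preds.items():
            if v != entry and len(ps) == 1:    # T2: unique predecessor
                (p,) = ps
                succ[p] |= succ.pop(v)         # absorb v into p
                succ[p].discard(v)
                for u in succ:                 # redirect any stray edges to v
                    if v in succ[u]:
                        succ[u].remove(v)
                        succ[u].add(p)
                changed = True
                break
    return set(succ) == {entry}

# A structured loop is reducible; the classic two-entry loop is not.
print(is_reducible(0, [(0, 1), (1, 2), (2, 1), (2, 3)]))   # True
print(is_reducible(0, [(0, 1), (0, 2), (1, 2), (2, 1)]))   # False
```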

Journal ArticleDOI
N. Shankar
TL;DR: A formalization and proof of the Church-Rosser theorem that was carried out with the Boyer-Moore theorem prover is described, illustrating the effective use of the Boyer-Moore theorem prover in proof checking difficult metamathematical proofs.
Abstract: The Church-Rosser theorem is a celebrated metamathematical result on the lambda calculus. We describe a formalization and proof of the Church-Rosser theorem that was carried out with the Boyer-Moore theorem prover. The proof presented in this paper is based on that of Tait and Martin-Löf. The mechanical proof illustrates the effective use of the Boyer-Moore theorem prover in proof checking difficult metamathematical proofs.

Journal ArticleDOI
TL;DR: A large class of relational database update transactions is investigated with respect to equivalence and optimization, and a simple, natural subclass of transactions, called strongly acyclic, is shown to have particularly desirable properties.
Abstract: A large class of relational database update transactions is investigated with respect to equivalence and optimization. The transactions are straight-line programs with inserts, deletes, and modifications using simple selection conditions. Several basic results are obtained. It is shown that transaction equivalence can be decided in polynomial time. A number of optimality criteria for transactions are then proposed, as well as two normal forms. Polynomial-time algorithms for transaction optimization and normalization are exhibited. Also, an intuitively appealing system of axioms for proving transaction equivalence is introduced. Finally, a simple, natural subclass of transactions, called strongly acyclic, is shown to have particularly desirable properties.
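
As a toy illustration of what transaction equivalence means (not the paper's polynomial-time procedure), one can brute-force equivalence over all databases on a tiny domain. The transaction encoding below is invented.

```python
from itertools import product

def run(transaction, db):
    """Apply a straight-line transaction to a database (a set of tuples).
    Ops: ('ins', tuple), ('del', cond), ('mod', cond, update), where
    cond is a predicate on a tuple and update maps a tuple to a tuple."""
    db = set(db)
    for op in transaction:
        if op[0] == 'ins':
            db.add(op[1])
        elif op[0] == 'del':
            db = {t for t in db if not op[1](t)}
        else:  # 'mod'
            db = {op[2](t) if op[1](t) else t for t in db}
    return frozenset(db)

def equivalent_on(tx1, tx2, domain, arity):
    """Exhaustively compare the two transactions on every database over
    a small domain (exponential; illustration only)."""
    tuples = list(product(domain, repeat=arity))
    for mask in product([0, 1], repeat=len(tuples)):
        db = frozenset(t for t, m in zip(tuples, mask) if m)
        if run(tx1, db) != run(tx2, db):
            return False
    return True

# Delete-then-insert is NOT equivalent to modify: on the empty database,
# the first transaction still inserts (1, 1) while the second does nothing.
tx1 = [('del', lambda t: t[0] == 1), ('ins', (1, 1))]
tx2 = [('mod', lambda t: t[0] == 1, lambda t: (1, 1))]
print(equivalent_on(tx1, tx2, domain=[0, 1], arity=2))  # False
```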

Journal ArticleDOI
TL;DR: It is shown that previous theoretical work on the semantics of probabilistic programs and on the correctness of performance-annotated programs can be used to automate the average-case analysis of simple programs containing assignments, conditionals, and loops; an original method is then presented that generalizes this approach to functional programs that make use of recursion and complex data structures.
Abstract: The first part of the paper shows that previous theoretical work on the semantics of probabilistic programs (Kozen) and on the correctness of performance annotated programs (Ramshaw) can be used to automate the average-case analysis of simple programs containing assignments, conditionals, and loops. A performance compiler has been developed using this theoretical foundation. The compiler is described, and it is shown that special cases of symbolic simplifications of formulas play a major role in rendering the system usable. The performance compiler generates a system of recurrence equations derived from a given program whose efficiency one wishes to analyze. This generation is always possible, but the problem of solving the resulting equations may be complex. The second part of the paper presents an original method that generalizes the previous approach and is applicable to functional programs that make use of recursion and complex data structures. Several examples are presented, including an analysis of binary tree sort. A key feature of the analysis of such programs is that distributions on complex data structures are represented using attributed probabilistic grammars.

Journal ArticleDOI
TL;DR: The best possible simulation in terms of the gi's and hi's is characterized by giving such a simulation and proving its optimality in the worst-case sense.
Abstract: Let G and H be two mesh-connected arrays of processors, where G = g1 × g2 × … × gc, H = h1 × h2 × … × hd, and g1g2 … gc ≤ h1h2 … hd. The problem of simulating G by H is considered, and the best possible simulation in terms of the gi's and hi's is characterized by giving such a simulation and proving its optimality in the worst-case sense. Also the same bound on the average cost of encoding the edges of G as distinct paths in H is established.

Journal ArticleDOI
TL;DR: Results are given that show 1LIAs are surprisingly powerful in that they can accept languages that seemingly require two-way communication.
Abstract: In this paper, a very simple model of parallel computation is considered, and the question of how restricting the flow of data to be one way compares with two-way flow is studied. It is shown that the one-way version is surprisingly powerful in that it can solve problems that seemingly require two-way communication. Whether or not one-way communication is strictly weaker than two-way is an open problem, although this paper conjectures that it is. It is shown, however, that proving this conjecture is at least as hard as resolving some well-known open problems in complexity theory.

Journal ArticleDOI
TL;DR: The critical role played by (almost) transient states is exposed, resulting in a straightforward algorithm for the construction of a sequence of aggregate generators associated with various time scales that provide a uniform asymptotic approximation of the original probability transition function.
Abstract: A new algorithm for the hierarchical aggregation of singularly perturbed finite-state Markov processes is derived. The approach taken bridges the gap between conceptually simple results for a relatively restricted class of processes and the significantly more complex results for the general case. The critical role played by (almost) transient states is exposed, resulting in a straightforward algorithm for the construction of a sequence of aggregate generators associated with various time scales. These generators together provide a uniform asymptotic approximation of the original probability transition function.

Journal ArticleDOI
TL;DR: The following problem is studied: how, and to what extent, can the retrieval speed of external hashing be improved by storing a small amount of extra information in internal storage? Several algorithms that guarantee retrieval in one access are developed and analyzed.
Abstract: The following problem is studied: How, and to what extent, can the retrieval speed of external hashing be improved by storing a small amount of extra information in internal storage? Several algorithms that guarantee retrieval in one access are developed and analyzed. In the first part of the paper, a restricted class of algorithms is studied, and a lower bound on the amount of extra storage is derived. An algorithm that achieves this bound, up to a constant difference, is also given. In the second part of the paper a number of restrictions are relaxed and several more practical algorithms are developed and analyzed. The last one, in particular, is very simple and efficient, allowing retrieval in one access using only a fixed number of bits of extra internal storage per bucket. The amount of extra internal storage depends on several factors, but it is typically very small: only a fraction of a bit per record stored. The cost of inserting a record is also analyzed and found to be low. Taking all factors into account, this algorithm is highly competitive for applications requiring very fast retrieval.

Journal ArticleDOI
TL;DR: This result generalizes the construction of a polynomial LSA for the knapsack problem and destroys the hope of proving nonpolynomial lower bounds on LSAs for any problem that can be recognized by a PRAM as above with 2^poly(n) processors in poly(n) time.
Abstract: Let M be a parallel RAM with p processors and arithmetic operations addition and subtraction recognizing L ⊂ N^n in T steps. (Inputs for M are given integer by integer, not bit by bit.) Then L can be recognized by a (sequential) linear search algorithm (LSA) in O(n^4(log n + T + log p)) steps. Thus many n-dimensional restrictions of NP-complete problems (binary programming, traveling salesman problem, etc.) and even that of the uniquely optimum traveling salesman problem, which is Δ2^p-complete, can be solved in polynomial time by an LSA. This result generalizes the construction of a polynomial LSA for the n-dimensional restriction of the knapsack problem previously shown by the author, and destroys the hope of proving nonpolynomial lower bounds on LSAs for any problem that can be recognized by a PRAM as above with 2^poly(n) processors in poly(n) time.