
Showing papers on "Time complexity published in 1987"


Journal ArticleDOI
TL;DR: All three variants of the classical problem of optimal policy computation in Markov decision processes (finite horizon, infinite horizon discounted, and infinite horizon average cost) are shown to be complete for P, and therefore most likely cannot be solved by highly parallel algorithms.
Abstract: We investigate the complexity of the classical problem of optimal policy computation in Markov decision processes. All three variants of the problem (finite horizon, infinite horizon discounted, and infinite horizon average cost) were known to be solvable in polynomial time by dynamic programming (finite horizon problems) or by linear programming or successive approximation techniques (infinite horizon). We show that they are complete for P, and therefore most likely cannot be solved by highly parallel algorithms. We also show that, in contrast, the deterministic cases of all three problems can be solved very fast in parallel. The version with partially observed states is shown to be PSPACE-complete, and thus even less likely to be solved in polynomial time than the NP-complete problems; in fact, we show that, most likely, it is not possible to have an efficient on-line implementation (involving polynomial time on-line computations and memory) of an optimal policy, even if an arbitrary amount of precomputation is allowed. Finally, the variant of the problem in which there are no observations is shown to be NP-complete.

1,466 citations
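The finite-horizon variant mentioned in the abstract is the one solved by backward-induction dynamic programming. As a minimal sketch (a hypothetical two-state, two-action MDP, not the paper's construction):

```python
# A minimal sketch (not the paper's construction) of the finite-horizon variant:
# backward-induction dynamic programming on a hypothetical 2-state, 2-action MDP.

def finite_horizon_values(P, R, T):
    """P[a][s][t]: transition probabilities, R[a][s]: rewards, T: horizon.
    Returns the optimal expected total reward from each state."""
    n = len(P[0])
    V = [0.0] * n
    for _ in range(T):                 # one backward-induction step per stage
        V = [max(R[a][s] + sum(P[a][s][t] * V[t] for t in range(n))
                 for a in range(len(P)))
             for s in range(n)]
    return V

# Action 0 stays put (reward 0); action 1 flips the state, earning 1 from state 0.
P = [[[1, 0], [0, 1]],    # action 0: identity
     [[0, 1], [1, 0]]]    # action 1: swap states
R = [[0, 0], [1, 0]]
print(finite_horizon_values(P, R, 3))   # -> [2.0, 1.0]
```

Each pass over the horizon applies one Bellman backup, which is exactly the sequential bottleneck the P-completeness result formalizes.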


Journal ArticleDOI
TL;DR: An algorithm for determining the shortest path between a source and a destination on an arbitrary (possibly nonconvex) polyhedral surface and generalizes to the case of multiple source points to build the Voronoi diagram on the surface.
Abstract: We present an algorithm for determining the shortest path between a source and a destination on an arbitrary (possibly nonconvex) polyhedral surface. The path is constrained to lie on the surface, and distances are measured according to the Euclidean metric. Our algorithm runs in time O(n^2 log n) and requires O(n^2) space, where n is the number of edges of the surface. After we run our algorithm, the distance from the source to any other destination may be determined using standard techniques in time O(log n) by locating the destination in the subdivision created by the algorithm. The actual shortest path from the source to a destination can be reported in time O(k + log n), where k is the number of faces crossed by the path. The algorithm generalizes to the case of multiple source points to build the Voronoi diagram on the surface, where n is now the maximum of the number of vertices and the number of sources.

705 citations


Journal ArticleDOI
01 Jul 1987-Nature
TL;DR: Information-based complexity seeks to develop general results about the intrinsic difficulty of solving problems where available information is partial or approximate and to apply these results to specific problems.
Abstract: Information-based complexity seeks to develop general results about the intrinsic difficulty of solving problems where available information is partial or approximate and to apply these results to specific problems. This allows one to determine what is meant by an optimal algorithm in many practical situations, and offers a variety of interesting and sometimes surprising theoretical results.

647 citations


Journal ArticleDOI
TL;DR: It is proved that if the complexity class co-NP is contained in IP[k] for some constant k, then the polynomial-time hierarchy collapses to the second level and if the Graph Isomorphism problem is NP-complete, then this hierarchy collapses.

434 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider two versions of the group technology problem, the standard formulation and an augmented formulation, and propose an algorithm for each; the augmented formulation allows the creation of machine cells and part families with a low degree of interaction by removing parts with low values of the corresponding costs from the incidence matrix.

380 citations


Journal ArticleDOI
TL;DR: A preprocessing algorithm is presented that makes certain polynomial time algorithms strongly polynomial: it replaces the objective function w by an integral-valued objective whose size is polynomially bounded in the size of the combinatorial structure and which yields the same set of optimal solutions as w.
Abstract: We present a preprocessing algorithm to make certain polynomial time algorithms strongly polynomial time. The running time of some of the known combinatorial optimization algorithms depends on the size of the objective function w. Our preprocessing algorithm replaces w by an integral-valued objective whose size is polynomially bounded in the size of the combinatorial structure and which yields the same set of optimal solutions as w. As applications we show how existing polynomial time algorithms for finding the maximum weight clique in a perfect graph and for the minimum cost submodular flow problem can be made strongly polynomial. Further we apply the preprocessing technique to make H. W. Lenstra's and R. Kannan's Integer Linear Programming algorithms run in polynomial space. This also reduces the number of arithmetic operations used. The method relies on simultaneous Diophantine approximation.

371 citations


Journal ArticleDOI
TL;DR: A polynomial time algorithm that, for every input graph, either outputs the minimum bisection of the graph or halts without output is described, which shows that the algorithm chooses the former course with high probability for many natural classes of graphs.
Abstract: In the paper, we describe a polynomial time algorithm that, for every input graph, either outputs the minimum bisection of the graph or halts without output. More importantly, we show that the algorithm chooses the former course with high probability for many natural classes of graphs. In particular, for every fixed d ≥ 3, all sufficiently large n and all b = o(n^(1−1/⌊(d+1)/2⌋)), the algorithm finds the minimum bisection for almost all d-regular labelled simple graphs with 2n nodes and bisection width b. For example, the algorithm succeeds for almost all 5-regular graphs with 2n nodes and bisection width o(n^(2/3)). The algorithm differs from other graph bisection heuristics (as well as from many heuristics for other NP-complete problems) in several respects.

356 citations


Proceedings ArticleDOI
01 Jan 1987
TL;DR: New linear time distributed algorithms for a class of problems in an asynchronous communication network, including Minimum-Weight Spanning Tree, Leader Election, and computing a sensitive decomposable function are developed.
Abstract: This paper develops linear time distributed algorithms for a class of problems in an asynchronous communication network. Those problems include Minimum-Weight Spanning Tree (MST), Leader Election, counting the number of network nodes, and computing a sensitive decomposable function (e.g., majority, parity, maximum, OR, AND). The main problem considered is the problem of finding the MST. This problem, which has been known for at least 9 years, is one of the most fundamental and most studied problems in the field of distributed network algorithms. Any algorithm for any one of the problems above requires at least Ω(E + V log V) communication and Ω(V) time in the general network. In this paper, we present new algorithms which achieve those lower bounds. The best previous algorithm requires O(E + V log V) in communication and O(V log V) in time. Our result makes it possible to improve algorithms for many other problems in distributed computing, achieving the lower bounds on their communication and time complexities.

329 citations


Proceedings ArticleDOI
18 May 1987
TL;DR: A new graph representation for prefix computation is presented that leads to the design of a fast, area-efficient binary adder, and its area is close to known lower bounds on the VLSI area of parallel prefix graphs.
Abstract: In this paper, we study area-time tradeoffs in VLSI for prefix computation using graph representations of this problem. Since the problem is intimately related to binary addition, the results we obtain lead to the design of area-time efficient VLSI adders. This is a major goal of our work: to design very low latency addition circuitry that is also area efficient. To this end, we present a new graph representation for prefix computation that leads to the design of a fast, area-efficient binary adder. The new graph is a combination of previously known graph representations for prefix computation, and its area is close to known lower bounds on the VLSI area of parallel prefix graphs. Using it, we are able to design VLSI adders having area A = O(n log n) whose delay time is the lowest possible value, i.e., the fastest possible area-efficient VLSI adder.

326 citations
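The link between prefix computation and carry generation can be sketched in software. The following is a generic Kogge-Stone-style parallel-prefix carry computation, not the paper's new graph; the only assumptions are the standard (generate, propagate) operator and an LSB-first bit convention:

```python
# A generic Kogge-Stone-style parallel-prefix carry computation (illustrative
# only; the paper's contribution is a different, more area-efficient prefix
# graph). Bits are least-significant first.

def prefix_add(a_bits, b_bits):
    n = len(a_bits)
    # per-bit (generate, propagate) pairs
    gp = [(a & b, a ^ b) for a, b in zip(a_bits, b_bits)]
    # associative operator: (g_hi,p_hi) o (g_lo,p_lo) = (g_hi | p_hi&g_lo, p_hi&p_lo)
    pre, span = gp[:], 1
    while span < n:                  # log n parallel-prefix levels
        pre = [(g | (p & pre[i - span][0]), p & pre[i - span][1]) if i >= span
               else (g, p)
               for i, (g, p) in enumerate(pre)]
        span *= 2
    carries = [0] + [g for g, _ in pre]           # carry into each bit position
    total = [a_bits[i] ^ b_bits[i] ^ carries[i] for i in range(n)]
    return total + [carries[n]]                   # sum bits plus carry-out

print(prefix_add([0, 1, 1, 0], [1, 1, 1, 0]))    # 6 + 7 -> [1, 0, 1, 1, 0] (= 13)
```

Any prefix graph computing this associative scan yields an adder; the paper's area-time results concern which graph shape to use in silicon.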


Journal ArticleDOI
TL;DR: An algorithm is given that finds a cycle basis with the shortest possible length in $O(m^3 n)$ operations, which is the first known polynomial-time algorithm for this problem.
Abstract: Define the length of a basis of the cycle space of a graph to be the sum of the lengths of all cycles in the basis. An algorithm is given that finds a cycle basis with the shortest possible length in $O(m^3 n)$ operations, where m is the number of edges and n is the number of vertices. This is the first known polynomial-time algorithm for this problem. Edges may be weighted or unweighted. Also, the shortest cycle basis is shown to have at most ${{3(n - 1)(n - 2)} / 2}$ edges for the unweighted case. An $O(mn^2)$ algorithm to obtain a suboptimal cycle basis of length $O(n^2)$ for unweighted graphs is also given.

285 citations
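The flavor of such algorithms can be conveyed by a brute-force sketch, under the assumption (Horton's idea) that candidate cycles built from shortest-path trees suffice; this toy unweighted version is not the paper's $O(m^3 n)$ procedure:

```python
# A brute-force sketch of the Horton-style idea behind such algorithms (not the
# paper's O(m^3 n) procedure): form candidate cycles from shortest-path trees,
# then greedily keep a GF(2)-independent set in order of increasing length.
# Unweighted toy graphs; cycles are bitmasks over edge indices.

def bfs_parents(adj, s):
    parent, order = {s: None}, [s]
    for u in order:
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                order.append(v)
    return parent

def min_cycle_basis_length(n, edges):
    adj = {u: [] for u in range(n)}
    eidx = {}
    for i, (u, v) in enumerate(edges):
        adj[u].append(v)
        adj[v].append(u)
        eidx[frozenset((u, v))] = i
    candidates = set()
    for s in range(n):
        par = bfs_parents(adj, s)
        def path_mask(x):
            m = 0
            while par[x] is not None:
                m ^= 1 << eidx[frozenset((x, par[x]))]
                x = par[x]
            return m
        for (u, v) in edges:
            c = path_mask(u) ^ path_mask(v) ^ (1 << eidx[frozenset((u, v))])
            if c:
                candidates.add(c)
    dim = len(edges) - n + 1        # cycle-space dimension of a connected graph
    pivots, total = {}, 0
    for c in sorted(candidates, key=lambda m: bin(m).count("1")):
        r = c
        while r:                    # Gaussian elimination over GF(2)
            h = r.bit_length() - 1
            if h not in pivots:     # independent: keep this cycle
                pivots[h] = r
                total += bin(c).count("1")
                break
            r ^= pivots[h]
        if len(pivots) == dim:
            break
    return total

# K4: the minimum cycle basis consists of three triangles, total length 9.
print(min_cycle_basis_length(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```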


Proceedings Article
01 Jul 1987
TL;DR: The general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.
Abstract: The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^(-β)) are available either for all β ∈ R or for no β ∈ R. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.

Journal ArticleDOI
TL;DR: A weaker assumption about one-way functions is proposed, which is not only sufficient, but also necessary for the existence of pseudorandom generators.
Abstract: Pseudorandom generators transform in polynomial time a short random “seed” into a long “pseudorandom” string. This string cannot be random in the classical sense of [6], but testing that requires an unrealistic amount of time (say, exhaustive search for the seed). Such pseudorandom generators were first discovered in [2] assuming that the function (a^x mod b) is one-way, i.e., easy to compute, but hard to invert on a noticeable fraction of instances. In [12] this assumption was generalized to the existence of any one-way permutation. The permutation requirement is sufficient but still very strong. It is unlikely to be proven necessary, unless something crucial, like P=NP, is discovered. Below, among other observations, a weaker assumption about one-way functions is proposed, which is not only sufficient, but also necessary for the existence of pseudorandom generators.
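The style of construction discussed can be illustrated with a toy generator that iterates the (assumed one-way) map x -> a^x mod b; the parameters below are made up and insecure, and the low bit is only a stand-in for a genuine hard-core predicate:

```python
# A toy illustration (hypothetical, insecure parameters) of the construction
# style discussed above: iterate the map x -> a^x mod b, assumed one-way, and
# emit one bit per step. A real generator must output a hard-core bit of the
# function; the low bit used here is just a placeholder.

def toy_generator(seed, a, b, nbits):
    x, out = seed, []
    for _ in range(nbits):
        x = pow(a, x, b)       # the (assumed) one-way step
        out.append(x & 1)      # placeholder for a hard-core bit
    return out

print(toy_generator(7, 5, 1019, 8))
```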

Journal ArticleDOI
01 Jan 1987-Networks
TL;DR: Two bipartite matching problems arising in Vehicle Scheduling are considered and a heuristic algorithm based on Lagrangean relaxation for the capacitated version of the multicommodity matching is presented together with experimental results.
Abstract: Two bipartite matching problems arising in Vehicle Scheduling are considered: the capacitated matching and the multicommodity matching. For the former, given a reasonable cost structure, we can exhibit a polynomial time algorithm, while the general case is conjectured to be NP-hard. The latter problem is shown to be NP-hard. A heuristic algorithm based on Lagrangean relaxation for the capacitated version of the multicommodity matching is also presented together with experimental results.

Journal ArticleDOI
TL;DR: The problem is reduced to an equivalent one in which the arc flow costs are nonnegative, and a dynamic-programming method, called the send-and-split method, is given to solve it; the results unify, significantly generalize, and sometimes improve upon (e.g., for tandem facilities) known polynomial-time dynamic-programming algorithms.
Abstract: Many problems from inventory, production and capacity planning, and from network design, exhibit scale economies and can be formulated in terms of finding minimum-additive-concave-cost nonnegative network flows. We reduce the problem to an equivalent one in which the arc flow costs are nonnegative and give a dynamic-programming method, called the send-and-split method, to solve it. The main work of the method entails repeatedly solving set-splitting and minimum-cost-chain problems. In uncapacitated networks with n nodes, a arcs, and d + 1 demand nodes, i.e., nodes with nonzero exogenous demand, the algorithm requires up to n^2 3^d + s 2^d operations (additions and comparisons), where s = n log_2 n + 3a is the number of operations required to solve a minimum-cost-chain problem with nonnegative arc costs on the augmented graph formed by appending a node and an arc thereto from each node in the graph. If also the network is k-planar, i.e., the graph is planar with all demand nodes lying on the boundary of k faces, the method requires at most n^2 k d 3^k + s d 2^k operations. The algorithm can be applied to capacitated networks because they can be reduced to equivalent uncapacitated ones. These results unify, significantly generalize (e.g., to cyclic problems), and sometimes improve upon (e.g., for tandem facilities) known polynomial-time dynamic-programming algorithms for Wagner and Whitin's dynamic economic-order-quantity problem (Wagner, H. M., Whitin, T. M. 1958. Dynamic version of the economic lot size model. Management Sci. 5 89-96), Zangwill's generalization to tandem facilities (Zangwill, W. I. 1969. A backlogging model and a multi-echelon model of a dynamic economic lot size production system: a network approach. Management Sci. 15 506-527), and Veinott's increasing-capacity warehousing problem (Veinott, Jr., A. F. 1969. Minimum concave-cost solution of Leontief substitution models of multi-facility inventory systems. Oper. Res. 17 262-291).
The networks for the finite-period versions of these problems are each 1-planar. The method improves upon Zangwill's related dynamic-programming method for finding minimum-additive-concave-cost nonnegative flows in circuitless single-source networks (Zangwill, W. I. 1968. Minimum concave cost flows in certain networks. Management Sci. 14 429-450). We also implement the method to solve in polynomial time the (d + 1)-demand-node and k-planar versions of the minimum-cost forest and Steiner problems in graphs. The running time for the (d + 1)-demand-node Steiner problem in graphs is comparable to that of Dreyfus and Wagner's method (Dreyfus, S. E., Wagner, R. A. 1971. The Steiner problem in graphs. Networks 1 195-207).

Proceedings Article
01 Jan 1987
TL;DR: It is shown that knowledge complexity can be used to show that a language is easy to prove and that there are not any perfect zero-knowledge protocols for NP-complete languages unless the polynomial time hierarchy collapses.
Abstract: A Perfect Zero-Knowledge interactive proof system convinces a verifier that a string is in a language without revealing any additional knowledge in an information-theoretic sense. We show that for any language that has a perfect zero-knowledge proof system, its complement has a short interactive protocol. This result implies that there are not any perfect zero-knowledge protocols for NP-complete languages unless the polynomial time hierarchy collapses. This paper demonstrates that knowledge complexity can be used to show that a language is easy to prove.

Proceedings ArticleDOI
12 Oct 1987
TL;DR: A model of Hierarchical Memory with Block Transfer, like a random access machine, except that access to location x takes time f(x), and a block of consecutive locations can be copied from memory to memory, taking one unit of time per element after the initial access time is introduced.
Abstract: In this paper we introduce a model of Hierarchical Memory with Block Transfer (BT for short). It is like a random access machine, except that access to location x takes time f(x), and a block of consecutive locations can be copied from memory to memory, taking one unit of time per element after the initial access time. We first study the model with f(x) = xα for 0 ≪ α ≪ 1. A tight bound of θ(n log log n) is shown for many simple problems: reading each input, dot product, shuffle exchange, and merging two sorted lists. The same bound holds for transposing a √n × √n matrix; we use this to compute an FFT graph in optimal θ(n log n) time. An optimal θ(n log n) sorting algorithm is also shown. Some additional issues considered are: maintaining data structures such as dictionaries, DAG simulation, and connections with PRAMs. Next we study the model f(x) = x. Using techniques similar to those developed for the previous model, we show tight bounds of θ(n log n) for the simple problems mentioned above, and provide a new technique that yields optimal lower bounds of Ω(n log2n) for sorting, computing an FFT graph, and for matrix transposition. We also obtain optimal bounds for the model f(x)= xα with α ≫ 1. Finally, we study the model f(x) = log x and obtain optimal bounds of θ(n log*n) for simple problems mentioned above and of θ(n log n) for sorting, computing an FFT graph, and for some permutations.
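A toy version of the BT cost model makes the point concrete: reading location x costs f(x), while a block transfer pays the access cost once and then one unit per element. Comparing a naive scan with a blocked scan (assumed block size and parameters are illustrative only):

```python
# A toy version of the BT cost model described above: reading location x costs
# f(x); copying a block of length L that ends at address x costs f(x) + L.
# Comparing a naive scan with a blocked scan shows why block transfer helps.
import math

def scan_cost_no_blocks(n, f):
    # touch each of locations 1..n individually
    return sum(f(x) for x in range(1, n + 1))

def scan_cost_with_blocks(n, f, block):
    # fetch data in blocks, then touch each element at unit cost
    cost = 0
    for start in range(1, n + 1, block):
        length = min(block, n + 1 - start)
        cost += f(start + length - 1) + length   # one block transfer
        cost += length                           # process the block locally
    return cost

f = lambda x: math.isqrt(x)     # f(x) = x^(1/2), i.e. the alpha = 1/2 regime
n = 1 << 14
print(scan_cost_no_blocks(n, f) > scan_cost_with_blocks(n, f, 256))   # -> True
```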


Proceedings ArticleDOI
01 Dec 1987
TL;DR: It is shown that any deterministic broadcast protocol requires Ω(n) time-slots, even if the network has diameter 3 and n is known to all processors, which demonstrates an exponential gap in complexity between randomization and determinism.
Abstract: The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 − ε, the protocol achieves broadcast within O((D + log(n/ε)) · log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires Ω(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.
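The randomized idea can be simulated in a few lines: informed nodes transmit with a decaying probability, and a node receives only when exactly one neighbor transmits. This is a hypothetical toy protocol in the spirit of the model, not the authors' exact one:

```python
# A simplified simulation of the randomized idea in this model (hypothetical
# toy protocol, not the authors' exact one): informed nodes transmit with a
# decaying probability, and a node receives only when exactly one neighbor
# transmits (no collision detection).
import random

def broadcast_phases(adj, source, rounds_per_phase=8, max_phases=200, seed=1):
    rng = random.Random(seed)
    informed = {source}
    for phase in range(max_phases):
        if len(informed) == len(adj):
            return phase                      # everyone got the message
        for r in range(rounds_per_phase):
            p = 2.0 ** -r                     # "decay": halve transmission prob.
            senders = {u for u in informed if rng.random() < p}
            newly = set()
            for v in adj:
                if v not in informed:
                    hits = sum(1 for u in adj[v] if u in senders)
                    if hits == 1:             # collision-free reception
                        newly.add(v)
            informed |= newly
    return None

# small toy radio network: node i hears nodes within distance 2 on a line
adj = {i: {j for j in range(12) if j != i and abs(i - j) <= 2} for i in range(12)}
print(broadcast_phases(adj, 0) is not None)
```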

Journal ArticleDOI
TL;DR: The experimental results on examples with a large number of irregular blocks show that the new methodology out-performs other well-known deterministic algorithms, and gives results that are comparable to random-based algorithms but with a computing time an order of magnitude less.
Abstract: A new methodology for hierarchical floor planning and global routing for building block layout is presented. Unlike the traditional approach, which separates placement and global routing into two consecutive stages, our approach accomplishes both jobs simultaneously in a hierarchical fashion. The global routing problem is formulated at each level as a series of minimum Steiner tree problems in a special class of partial 3-trees, which can be solved optimally in linear time. The floor planner with a maximum of five rooms per level has been implemented in the C language, running on a VAX 8650 under 4.3 BSD UNIX. The experimental results on examples with a large number of irregular blocks show that our approach out-performs other well-known deterministic algorithms, and gives results that are comparable to random-based algorithms but with a computing time an order of magnitude less. Due to the unique goal-oriented and pattern-directed features of our floor planner, it accepts specifications for overall aspect ratio and I/O pad positions, thus making our approach suitable for hierarchical design.

Proceedings ArticleDOI
01 Oct 1987
TL;DR: An algorithm is presented for finding the ordering leading to the most compact representation of the ordered binary decision diagram for Boolean functions, with time complexity O(n^2 3^n), an improvement over the previous best, which required O(n! 2^n).
Abstract: The ordered binary decision diagram is a canonical representation for Boolean functions, presented by Bryant as a compact representation for a broad class of interesting functions derived from circuits. However, the size of the diagram is very sensitive to the choice of ordering on the variables; hence for some applications, such as Differential Cascode Voltage Switch (DCVS) trees, it becomes extremely important to find the ordering leading to the most compact representation. We present an algorithm for this problem with time complexity O(n^2 3^n), an improvement over the previous best, which required O(n! 2^n).
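A brute-force baseline makes the problem concrete: the OBDD size under an ordering equals the number of distinct subfunctions that depend on each level's variable, and the O(n! 2^n) baseline just tries all orderings. This sketch is for tiny truth tables only, and is not the paper's subset dynamic program:

```python
# A brute-force baseline for the problem this paper accelerates: find the
# variable ordering minimizing OBDD size by trying all n! orderings and
# counting distinct subfunctions per level (the paper's dynamic program over
# subsets reduces this to O(n^2 3^n)). The function is given as a truth table.
from itertools import permutations

def cofactor(table, var, val):
    # fix variable `var` (a bit position of the index) to `val`
    return tuple(table[(i & ~(1 << var)) | (val << var)] for i in range(len(table)))

def obdd_size(table, order):
    subs, size = {tuple(table)}, 0
    for var in order:
        nxt = set()
        for t in subs:
            t0, t1 = cofactor(t, var, 0), cofactor(t, var, 1)
            if t0 != t1:            # a node labeled `var` is needed
                size += 1
            nxt.update((t0, t1))
        subs = nxt
    return size

def best_ordering(table, n):
    return min((obdd_size(tuple(table), p), p) for p in permutations(range(n)))

# f = x0*x1 + x2*x3; the best orderings keep the pairs adjacent (4 nodes).
n = 4
table = [1 if ((i & 1) and (i >> 1 & 1)) or ((i >> 2 & 1) and (i >> 3 & 1)) else 0
         for i in range(2 ** n)]
print(best_ordering(table, n)[0])   # -> 4
```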

Proceedings ArticleDOI
01 Oct 1987
TL;DR: Two algorithms are given for computing the largest-area empty rectangle; both can be modified to compute the largest-perimeter empty rectangle, in which case the second algorithm is optimal within a multiplicative constant.
Abstract: We provide two algorithms for solving the following problem: Given a rectangle containing n points, compute the largest-area and the largest-perimeter subrectangles with sides parallel to the given rectangle that lie within this rectangle and that do not contain any points in their interior. For finding the largest-area empty rectangle, the first algorithm takes O(n log^3 n) time and O(n) memory space and it simplifies the algorithm given by Chazelle, Drysdale and Lee which takes O(n log^3 n) time but O(n log n) storage. The second algorithm for computing the largest-area empty rectangle is more complicated but it only takes O(n log^2 n) time and O(n) memory space. The two algorithms for computing the largest-area rectangle can be modified to compute the largest-perimeter rectangle in O(n log^2 n) and O(n log n) time, respectively. Since O(n log n) is a lower bound on time for computing the largest-perimeter empty rectangle, the second algorithm for computing such a rectangle is optimal within a multiplicative constant.
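For small instances the problem can be checked against a simple brute-force baseline (roughly cubic, nothing like the O(n log^2 n) algorithm above): try every pair of vertical boundaries drawn from the point x-coordinates and the box sides, and scan the vertical gaps inside the strip.

```python
# A simple brute-force baseline for the largest-area empty rectangle problem,
# for checking small instances only (the paper's algorithms are far faster).

def largest_empty_area(W, H, pts):
    xs = sorted({0, W} | {x for x, _ in pts})
    best = 0
    for i, xl in enumerate(xs):
        for xr in xs[i + 1:]:
            ys = sorted(y for x, y in pts if xl < x < xr)   # points inside strip
            prev = 0
            for y in ys + [H]:
                best = max(best, (xr - xl) * (y - prev))    # empty horizontal band
                prev = y
    return best

# one point in the middle of a 4 x 3 box: best empty rectangle is 4 x 2 above it
print(largest_empty_area(4, 3, [(2, 1)]))   # -> 8
```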

Journal ArticleDOI
TL;DR: In this article, a polynomial time algorithm for searching for Hamilton cycles in undirected graphs is described, where the asymptotic probability of success is that of the existence of such a cycle.
Abstract: This paper describes a polynomial time algorithm HAM that searches for Hamilton cycles in undirected graphs. On a random graph its asymptotic probability of success is that of the existence of such a cycle. If all graphs with n vertices are considered equally likely, then using dynamic programming on failure leads to an algorithm with polynomial expected time. The algorithm HAM is also used to solve the symmetric bottleneck travelling salesman problem with probability tending to 1, as n tends to ∞.

Proceedings ArticleDOI
12 Oct 1987
TL;DR: An algorithm which solves the findpath or generalized movers' problem in single exponential sequential time, the first algorithm for the problem whose sequential time bound is less than double exponential and an algebraic tool called the multivariate resultant which gives a necessary and sufficient condition for a system of homogeneous polynomials to have a solution.
Abstract: We present an algorithm which solves the findpath or generalized movers' problem in single exponential sequential time. This is the first algorithm for the problem whose sequential time bound is less than double exponential. In fact, the combinatorial exponent of the algorithm is equal to the number of degrees of freedom, making it worst-case optimal, and equaling or improving the time bounds of many special purpose algorithms. The algorithm accepts a formula for a semi-algebraic set S describing the set of free configurations and produces a one-dimensional skeleton or "roadmap" of the set, which is connected within each connected component of S. Additional points may be linked to the roadmap in linear time. Our method draws from results of singularity theory, and in particular makes use of the notion of stratified sets as an efficient alternative to cell decomposition. We introduce an algebraic tool called the multivariate resultant which gives a necessary and sufficient condition for a system of homogeneous polynomials to have a solution, and show that it can be computed in polynomial parallel time. Among the consequences of this result are new methods for quantifier elimination and an improved gap theorem for the absolute value of roots of a system of polynomials.

Journal ArticleDOI
TL;DR: The proposed approach for merging leads to a parallel sorting algorithm that sorts a vector of length N in O((log^2 k + N/k) log N) time, which is optimal, for k ≤ N/log^2 N, in view of the Ω(N) and Ω(N log N) lower bounds on merging and sorting, respectively.
Abstract: A parallel algorithm is described for merging two sorted vectors of total length N. The algorithm runs on a shared-memory model of parallel computation that disallows more than one processor to simultaneously read from or write into the same memory location. It uses k processors, where 1 ≤ k ≤ N, and requires O(N/k + log k × log N) time. The proposed approach for merging leads to a parallel sorting algorithm that sorts a vector of length N in O((log^2 k + N/k) log N) time. Because they modify their behavior and hence their running time according to the number of available processors, the two new algorithms are said to be self-reconfiguring. In addition, both algorithms are optimal, for k ≤ N/log^2 N, in view of the Ω(N) and Ω(N log N) lower bounds on merging and sorting, respectively.
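The partitioning idea behind such EREW merge algorithms can be sketched sequentially (a generic "co-ranking" scheme, not necessarily the authors' exact construction): cut the output into k equal slices, locate each boundary with a binary search, and let each simulated processor merge its slice independently.

```python
# A sequential sketch of a partition-based k-processor merge: each output slice
# boundary is found by a "co-ranking" binary search, after which the k slices
# can be merged independently (and hence in parallel).
import heapq

def corank(i, a, b):
    # j such that a[:j] and b[:i-j] together form the first i merged elements
    lo, hi = max(0, i - len(b)), min(i, len(a))
    while True:
        j = (lo + hi) // 2
        if j > 0 and i - j < len(b) and a[j - 1] > b[i - j]:
            hi = j - 1              # taking too many elements from a
        elif j < len(a) and i - j > 0 and b[i - j - 1] > a[j]:
            lo = j + 1              # taking too few elements from a
        else:
            return j

def parallel_merge(a, b, k):
    n = len(a) + len(b)
    bounds = [n * p // k for p in range(k + 1)]
    out = []
    for p in range(k):              # each iteration = one processor's slice
        j0, j1 = corank(bounds[p], a, b), corank(bounds[p + 1], a, b)
        out.extend(heapq.merge(a[j0:j1], b[bounds[p] - j0:bounds[p + 1] - j1]))
    return out

print(parallel_merge([1, 3, 5, 7], [2, 4, 6, 8], 3))   # -> [1, 2, 3, 4, 5, 6, 7, 8]
```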

Journal ArticleDOI
TL;DR: This paper illustrates how recent advances in graph theory and graph algorithms dramatically alter the situation for classifying problems as decidable in polynomial time by nonconstructively proving only the existence of polynomial-time decision algorithms.

ReportDOI
01 Dec 1987
TL;DR: In this paper, two new parallel algorithms are presented for the problem of labeling the connected components of a binary image, which is also known as the connected ones problem, using a shrinking operation defined by Levialdi and having time complexities of O(N log N) bit operations.
Abstract: : Two new parallel algorithms are presented for the problem of labeling the connected components of a binary image, which is also known as the connected ones problem. The machine model is an SIMD two-dimensional mesh connected computer consisting of an N x N array of processing elements, each containing a single pixel of an N x N image. Both new algorithms use a shrinking operation defined by Levialdi and have time complexities of O(N log N) bit operations, which makes them the fastest local algorithms for the problem. Compared with other approaches having similar or better time complexities, this local approach dramatically simplifies the algorithms and reduces the constants of proportionality by nearly two orders of magnitude, thus making them the first practical algorithms for the problem. The two algorithms differ in the amount of memory required per processing element; the first uses O(N) bits while the second employs a novel compression scheme to reduce the requirement to O(log N) bits.

Journal ArticleDOI
TL;DR: The least weight subsequence (LWS) problem is introduced, and is shown to be equivalent to the classic minimum path problem for directed graphs, and to be solvable in O(n log n) time generally and, for certain weight functions, in linear time.
Abstract: The least weight subsequence (LWS) problem is introduced, and is shown to be equivalent to the classic minimum path problem for directed graphs. A special case of the LWS problem is shown to be solvable in $O(n\log n)$ time generally and, for certain weight functions, in linear time. A number of applications are given, including an optimum paragraph formation problem and the problem of finding a minimum height B-tree, whose solutions realize improvement in asymptotic time complexity.
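The recurrence behind the LWS problem is f(j) = min over i < j of f(i) + w(i, j). A naive O(n^2) dynamic program, instantiated as the paragraph-formation application mentioned in the abstract (a toy squared-slack line penalty, applied to the last line too for simplicity), looks like this; the paper's contribution is solving the concave case in O(n log n) or linear time:

```python
# A naive O(n^2) dynamic program for the least weight subsequence recurrence
# f(j) = min_{i<j} f(i) + w(i, j), shown here on a toy paragraph-formation
# instance (squared-slack penalty per line, last line included).

def lws(n, w):
    f = [0.0] + [float("inf")] * n
    for j in range(1, n + 1):
        f[j] = min(f[i] + w(i, j) for i in range(j))
    return f[n]

def paragraph_cost(widths, line_width):
    def w(i, j):                    # cost of putting words i..j-1 on one line
        length = sum(widths[i:j]) + (j - i - 1)     # single spaces between words
        return (line_width - length) ** 2 if length <= line_width else float("inf")
    return lws(len(widths), w)

print(paragraph_cost([3, 2, 2, 5], 8))   # -> 4.0 (lines "3 2" and "2 5")
```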

Proceedings ArticleDOI
01 Oct 1987
TL;DR: This paper describes a graph theoretic algorithm which, given a particular layout, finds a layer assignment that requires the minimum number of vias and yields globally optimum results when the maximum junction degree is limited to three and has been fully implemented.
Abstract: This paper describes a graph theoretic algorithm which, given a particular layout, finds a layer assignment that requires the minimum number of vias. The time complexity of the algorithm is O(n^3), where n is the number of routing segments in the given layout. Unlike previous algorithms, this algorithm does not require the layout to be grid based and places no constraints on the location of vias or the number of wires that may be joined at a single junction. The algorithm yields globally optimum results when the maximum junction degree is limited to three and has been fully implemented.

Journal ArticleDOI
TL;DR: Almost all the possible differences between the completeness notions with respect to various polynomial-time reducibilities are shown to occur in DEXT.

Journal ArticleDOI
TL;DR: Upper bound time-space trade-offs are established for sorting and selection in two computational models for machines with input in read-only random access registers and on a read-only tape.