Showing papers by "Tom Leighton published in 1991"
••
03 Jan 1991
TL;DR: It is proved that a (simple) k-commodity flow problem can be approximately solved by approximately solving O(k log^2 n) single-commodity minimum-cost flow problems, and the first polynomial-time combinatorial algorithms for approximately solving the multicommodity flow problem are described.
Abstract: All previously known algorithms for solving the multicommodity flow problem with capacities are based on linear programming. The best of these algorithms uses a fast matrix multiplication algorithm and takes O(k^3.5 n^3 m^0.5 log(nDU)) time for the multicommodity flow problem with integer demands and at least O(k^2.5 n^2 m^0.5 log(n ε^-1 DU)) time to find an approximate solution, where k is the number of commodities, n and m denote the number of nodes and edges in the network, D is the largest demand, and U is the largest edge capacity. As a consequence, even multicommodity flow problems with just a few commodities are believed to be much harder than single-commodity maximum-flow or minimum-cost flow problems. In this paper, we describe the first polynomial-time combinatorial algorithms for approximately solving the multicommodity flow problem. The running time of our randomized algorithm is (up to log factors) the same as the time needed to solve k single-commodity flow problems, thus giving the surprising result that approximately computing a k-commodity maximum-flow is not much harder than computing about k single-commodity maximum-flows in isolation. In fact, we prove that a (simple) k-commodity flow problem can be approximately solved by approximately solving O(k log^2 n) single-commodity minimum-cost flow problems. Our k-commodity algorithm runs in O(knm log^4 n) time with high probability. We also describe a deterministic algorithm that uses an O(k)-factor more time. Given any multicommodity flow problem as input, both algorithms are guaranteed to provide a feasible solution to a modified flow problem in which all capacities are increased by a (1+ε)-factor, or to provide a proof that there is no feasible solution to the original problem. We also describe faster approximation algorithms for multicommodity flow problems with a special structure, such as those that arise in "sparsest cut" problems and uniform concurrent flow problems.
232 citations
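The guarantee in the abstract above, a feasible solution under capacities scaled by a (1+ε)-factor, is easy to state as a check. A minimal Python sketch, where the graph, the per-commodity flows, and ε are made-up illustrative data rather than anything from the paper:

```python
# Sketch: verify that the combined per-commodity flows respect capacities
# scaled by (1 + eps), as in the paper's relaxed-feasibility guarantee.
# The edges, capacities, and flow values below are illustrative only.

def respects_relaxed_capacities(capacity, flows, eps):
    """capacity: {edge: u_e}; flows: one {edge: f_e} dict per commodity.
    True iff the total flow on every edge is at most (1 + eps) * u_e."""
    for e, u in capacity.items():
        total = sum(f.get(e, 0.0) for f in flows)
        if total > (1.0 + eps) * u:
            return False
    return True

capacity = {("s", "v"): 1.0, ("v", "t"): 1.0}
flows = [{("s", "v"): 0.6, ("v", "t"): 0.6},   # commodity 1
         {("s", "v"): 0.5, ("v", "t"): 0.5}]   # commodity 2
print(respects_relaxed_capacities(capacity, flows, eps=0.0))   # False: 1.1 > 1.0
print(respects_relaxed_capacities(capacity, flows, eps=0.25))  # True: 1.1 <= 1.25
```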
••
01 Jun 1991
Abstract: Mathematics Department and Laboratory for Computer Science, Massachusetts Institute of Technology
73 citations
••
TL;DR: In this paper, a natural k-round tournament over n=2^k players is analyzed, and it is demonstrated that the tournament possesses a surprisingly strong ranking property.
Abstract: A natural k-round tournament over n=2^k players is analyzed, and it is demonstrated that the tournament possesses a surprisingly strong ranking property. The ranking property of this tournament is exploited by being used as a building block for efficient parallel sorting algorithms under a variety of different models of computation. Three important applications are provided. First, a sorting circuit of depth 7.44 log n, which sorts all but a superpolynomially small fraction of the n! possible input permutations, is defined. Secondly, a randomized sorting algorithm that runs in O(log n) word steps with very high probability is given for the hypercube and related parallel computers (the butterfly, cube-connected cycles, and shuffle-exchange). Thirdly, a randomized algorithm that runs in O(m + log n)-bit steps with very high probability is given for sorting n O(m)-bit records on an (n log n)-node butterfly.
47 citations
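The k-round, n=2^k structure in the abstract above can be illustrated with a toy pairing schedule. The bit-pairing schedule below is hypothetical (the paper's actual tournament may pair players differently); it only shows how k rounds of pairwise comparisons over 2^k players can move extreme keys to the correct ends:

```python
# Hypothetical k-round tournament on n = 2**k players: in round i, positions
# differing in bit i play each other, and the larger key moves to the position
# whose bit i is 1. This is an illustration of the k-round structure, not the
# paper's construction.

def k_round_tournament(keys):
    n = len(keys)
    k = n.bit_length() - 1
    assert n == 1 << k, "n must be a power of two"
    pos = list(keys)
    for i in range(k):
        for p in range(n):
            q = p ^ (1 << i)
            if p < q:  # visit each pair once
                lo, hi = sorted((pos[p], pos[q]))
                pos[p], pos[q] = lo, hi  # larger key takes the bit-set slot
    return pos

result = k_round_tournament([5, 3, 7, 1, 6, 0, 4, 2])
print(result[0], result[-1])  # 0 7: min ends at position 0, max at n-1
```

After k rounds the minimum has lost every game, clearing one position bit per round, so it ends at position 0; symmetrically the maximum ends at position n-1.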
••
01 Mar 1991
TL;DR: In this article, the authors investigate the performance of deterministic and randomized algorithms for tree embedding and derive lower bounds on the congestion that any on-line allocation algorithm must incur in order to guarantee load balance.
Abstract: Many tree-structured computations are inherently parallel. As leaf processes are recursively spawned, they can be assigned to independent processors in a multicomputer network. To maintain load balance, an on-line mapping algorithm must distribute processes equitably among processors. Additionally, the algorithm itself must be distributed in nature, and process allocation must be completed via message-passing with minimal communication overhead.
This paper investigates bounds on the performance of deterministic and randomized algorithms for on-line tree embedding. In particular, we study tradeoffs between performance (load balance) and communication overhead (message congestion). We give a simple technique to derive lower bounds on the congestion that any on-line allocation algorithm must incur in order to guarantee load balance. This technique works for both randomized and deterministic algorithms, although we find that the performance of randomized on-line algorithms is somewhat better than that of deterministic algorithms. Optimal bounds are achieved for several networks, including multi-dimensional grids and butterflies.
23 citations
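The load-balance side of the tradeoff described above can be illustrated with the simplest randomized baseline: assign each newly spawned process to a uniformly random processor, ignoring congestion entirely. This is a hypothetical baseline for intuition, not the paper's embedding algorithm:

```python
# Sketch of randomized on-line process placement: each spawned leaf process
# is assigned to a processor chosen uniformly at random. (A toy baseline,
# not the paper's algorithm; it ignores communication congestion entirely.)
import random

def random_online_placement(num_processes, num_processors, seed=0):
    rng = random.Random(seed)
    load = [0] * num_processors
    for _ in range(num_processes):
        load[rng.randrange(num_processors)] += 1
    return load

load = random_online_placement(num_processes=10_000, num_processors=16)
avg = sum(load) / len(load)
print(max(load) / avg)  # max/average load ratio; close to 1 means well balanced
```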
••
01 Sep 1991
TL;DR: The problem of constructing a sorting circuit that will work well even if a constant fraction of its comparators fail at random is addressed.
Abstract: The problem of constructing a sorting circuit that will work well even if a constant fraction of its comparators fail at random is addressed. Two types of comparator failure are considered: passive failures, which result in no comparison being made (i.e., the items being compared are output in the same order that they are input), and destructive failures, which result in the items being output in the reverse of the correct order. In either scenario, it is assumed that each comparator is faulty with some constant probability ρ, and a circuit is said to be fault-tolerant if it performs some desired function with high probability given that each comparator fails with probability ρ. One circuit tolerating passive failures and two circuits tolerating destructive failures are constructed.
18 citations
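The two failure models in the abstract above are simple to state in code. A minimal illustrative sketch (the comparator interface and names here are made up, not from the paper):

```python
# Sketch of the two comparator failure models described above: a faulty
# comparator either passes its inputs through unchanged (passive failure)
# or emits them in reverse of the correct order (destructive failure).
# Each comparator fails independently with probability rho.
import random

def faulty_compare(a, b, rho, destructive, rng):
    if rng.random() < rho:                        # comparator has failed
        return (max(a, b), min(a, b)) if destructive else (a, b)
    return (min(a, b), max(a, b))                 # correct comparator

rng = random.Random(1)
print(faulty_compare(9, 2, rho=0.0, destructive=False, rng=rng))  # (2, 9)
```

Running a sorting network built from `faulty_compare` with rho > 0 shows why fault tolerance is needed: even a few passive failures leave items unsorted.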
••
01 Sep 1991
TL;DR: The problem of dynamically allocating and deallocating local memory resources among multiple users in a parallel or distributed system is considered and an online allocation algorithm is devised that minimizes both the fraction of unused space due to fragmentation of the memory and the slowdown needed by the system to service user requests.
Abstract: The problem of dynamically allocating and deallocating local memory resources among multiple users in a parallel or distributed system is considered. The goal is to devise an online allocation algorithm that minimizes both the fraction of unused space due to fragmentation of the memory and the slowdown needed by the system to service user requests. The problem is solved in near-optimal fashion by devising an algorithm that allows the memory to be used to 100% of capacity despite the fragmentation and guarantees that service delays will always be within a constant factor of optimal. The algorithm is completely online (no foreknowledge of user activity is assumed) and can accommodate any sequence of insertions and deletions by the users which does not violate global memory bounds. The results have applications in the domain of parallel disk allocation.
2 citations
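The fragmentation problem that the abstract above sets out to beat is easy to reproduce with a naive allocator. A toy first-fit model, purely illustrative and not the paper's near-optimal algorithm:

```python
# Toy illustration of fragmentation: a first-fit allocator over a fixed
# memory can have enough total free space for a request yet no single gap
# large enough, so utilization falls below 100%. (Not the paper's algorithm.)

class FirstFitMemory:
    def __init__(self, size):
        self.size = size
        self.blocks = []  # allocated (start, length) pairs, sorted by start

    def alloc(self, length):
        prev_end = 0
        for i, (start, blen) in enumerate(self.blocks):
            if start - prev_end >= length:           # first gap that fits
                self.blocks.insert(i, (prev_end, length))
                return prev_end
            prev_end = start + blen
        if self.size - prev_end >= length:           # tail gap
            self.blocks.append((prev_end, length))
            return prev_end
        return None                                   # no single gap fits

    def free(self, addr):
        self.blocks = [b for b in self.blocks if b[0] != addr]

mem = FirstFitMemory(10)
a = mem.alloc(4); b = mem.alloc(4)   # occupies [0..3] and [4..7]
mem.free(a)                          # free space: [0..3] and [8..9], 6 words
print(mem.alloc(6))                  # None: 6 words are free, but fragmented
```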