
Showing papers in "Algorithmica in 1991"


Journal ArticleDOI
TL;DR: This paper describes a circuit transformation called retiming in which registers are added at some points in a circuit and removed from others in such a way that the functional behavior of the circuit as a whole is preserved.
Abstract: This paper describes a circuit transformation called retiming, in which registers are added at some points in a circuit and removed from others in such a way that the functional behavior of the circuit as a whole is preserved. We show that retiming can be used to transform a given synchronous circuit into a more efficient circuit under a variety of different cost criteria. We model a circuit as a graph in which the vertex set V is a collection of combinational logic elements and the edge set E is the set of interconnections, each of which may pass through zero or more registers. We give an O(|V| |E| lg |V|) algorithm for determining an equivalent retimed circuit with the smallest possible clock period. We show that the problem of determining an equivalent retimed circuit with minimum state (total number of registers) is polynomial-time solvable. This result yields a polynomial-time optimal solution to the problem of pipelining combinational circuitry with minimum register cost. We also give a characterization of optimal retiming based on an efficiently solvable mixed-integer linear-programming problem.

940 citations
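
The retiming rule itself is compact enough to state in code. Below is a minimal Python sketch of applying a retiming to a toy circuit graph: the edge weight w(e) counts the registers on an interconnection u -> v, a lag r(v) is assigned to every combinational element, and the retimed weight is w_r(e) = w(e) + r(v) - r(u). The circuit, lags, and names are illustrative assumptions, not an example from the paper.

```python
def retime(edges, r):
    """Apply a retiming r (dict: vertex -> integer lag) to a circuit graph.

    edges: list of (u, v, w) triples, w = registers on the wire u -> v.
    Returns the retimed edge list, or raises if any register count
    would become negative (i.e., r is not a legal retiming).
    """
    retimed = []
    for u, v, w in edges:
        w_r = w + r[v] - r[u]          # the retiming rule
        if w_r < 0:
            raise ValueError(f"illegal retiming: edge {u}->{v} gets {w_r} registers")
        retimed.append((u, v, w_r))
    return retimed

# Toy example: a three-element cycle whose two registers are redistributed
# to shorten the longest register-free (combinational) path.
edges = [("a", "b", 2), ("b", "c", 0), ("c", "a", 0)]
print(retime(edges, {"a": 0, "b": -1, "c": -1}))
# -> [('a', 'b', 1), ('b', 'c', 0), ('c', 'a', 1)]
```

Note how the total register count around the cycle is conserved (two before, two after); only the register positions change, which is why the functional behavior is preserved.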


Journal ArticleDOI
TL;DR: This note presents a simplification and generalization of an algorithm for searching k-dimensional trees for nearest neighbors reported by Friedman et al. [3], which can be generalized to allow a partition plane to have an arbitrary orientation, rather than insisting that it be perpendicular to a coordinate axis, as in the original algorithm.
Abstract: This note presents a simplification and generalization of an algorithm for searching k-dimensional trees for nearest neighbors reported by Friedman et al. [3]. If the distance between records is measured using L2, the Euclidean norm, the data structure used by the algorithm to determine the bounds of the search space can be simplified to a single number. Moreover, because distance measurements in L2 are rotationally invariant, the algorithm can be generalized to allow a partition plane to have an arbitrary orientation, rather than insisting that it be perpendicular to a coordinate axis, as in the original algorithm. When a k-dimensional tree is built, this plane can be found from the principal eigenvector of the covariance matrix of the records to be partitioned. These techniques and others yield variants of k-dimensional trees customized for specific applications. It is wrong to assume that k-dimensional trees guarantee that a nearest-neighbor query completes in logarithmic expected time. For small k, logarithmic behavior is observed on all but tiny trees. However, for larger k, logarithmic behavior is achievable only with extremely large numbers of records. For k = 16, a search of a k-dimensional tree of 76,000 records examines almost every record.

380 citations
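
As context for the simplification described above, here is a hedged sketch of nearest-neighbor search in an ordinary k-dimensional tree with axis-perpendicular splits. Under the Euclidean norm the pruning test reduces to a single number, the radius of the best ball found so far, as the abstract notes. This is a generic textbook version, not the paper's eigenvector-oriented variant; the point set and query are made up.

```python
import math

def build(points, depth=0):
    """Build a k-d tree by median split on a cycling coordinate axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    m = len(points) // 2
    return {"point": points[m], "axis": axis,
            "left": build(points[:m], depth + 1),
            "right": build(points[m + 1:], depth + 1)}

def nearest(node, q, best=None):
    """Return (distance, point) of the nearest neighbor of q."""
    if node is None:
        return best
    d = math.dist(q, node["point"])
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = node["axis"]
    diff = q[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, q, best)
    if abs(diff) < best[0]:   # best ball crosses the splitting plane
        best = nearest(far, q, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))   # -> (1.414..., (8, 1))
```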


Journal ArticleDOI
TL;DR: The partitioning algorithm, a randomized on-line algorithm for the paging problem, is developed, and its expected cost on any sequence of requests is proved to be within a factor of Hk of optimum.
Abstract: The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. We develop the partitioning algorithm, a randomized on-line algorithm for the paging problem. We prove that its expected cost on any sequence of requests is within a factor of Hk of optimum. (Hk is the kth harmonic number, which is about ln k.) No on-line algorithm can perform better by this measure. Our result improves by a factor of two the best previous algorithm.

247 citations
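
The partitioning algorithm itself is too intricate to sketch here, but the flavor of randomized on-line paging can be conveyed by the simpler classical marking algorithm, a different algorithm known to be within a factor of 2Hk of optimum. The sketch below is that stand-in, on a made-up request sequence: on a fault it evicts a uniformly random unmarked page, beginning a new phase when every cached page is marked.

```python
import random

def marking_paging(requests, k, seed=0):
    """Serve `requests` with a k-page cache; return the number of faults."""
    rng = random.Random(seed)
    cache, marked = set(), set()
    faults = 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:               # must evict
                unmarked = cache - marked
                if not unmarked:              # all marked: a new phase begins
                    marked.clear()
                    unmarked = set(cache)
                cache.remove(rng.choice(sorted(unmarked)))
        cache.add(page)
        marked.add(page)                      # mark every requested page
    return faults

print(marking_paging([1, 2, 3, 1, 4, 2, 1, 3], k=3))
```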


Journal ArticleDOI
TL;DR: Most of the important results on the theory of Simulated Annealing are reviewed, placing them in a unified framework and new results are reported as well.
Abstract: Simulated Annealing has been a very successful general algorithm for the solution of large, complex combinatorial optimization problems. Since its introduction, several applications in different fields of engineering, such as integrated-circuit placement, optimal encoding, resource allocation, and logic synthesis, have been developed. In parallel, theoretical studies have been focusing on the reasons for the excellent behavior of the algorithm. This paper reviews most of the important results on the theory of Simulated Annealing, placing them in a unified framework. New results are reported as well.

172 citations
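
Several papers in this issue analyze simulated annealing, so a minimal generic sketch may help fix ideas. Every choice below (cost function, neighborhood, starting temperature, the geometric cooling factor, and iteration counts) is an illustrative assumption, not a recommendation from the survey.

```python
import math, random

def anneal(cost, neighbor, x0, T0=1.0, alpha=0.95, stages=200, sweeps=100):
    """Generic simulated annealing with a geometric cooling schedule."""
    x, best, T = x0, x0, T0
    for _ in range(stages):
        for _ in range(sweeps):               # Metropolis steps at temperature T
            y = neighbor(x)
            delta = cost(y) - cost(x)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x = y
                if cost(x) < cost(best):
                    best = x
        T *= alpha                            # geometric cooling schedule
    return best

f = lambda x: x * x + 3 * math.sin(5 * x)     # bumpy 1-D cost (illustrative)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(anneal(f, step, x0=4.0))                # usually lands near the global minimum
```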


Journal ArticleDOI
TL;DR: An O(n^{1+ε})-time algorithm is given for computing convex layers in ℝ^3, and an output-sensitive algorithm for computing a level in an arrangement of planes in ℝ^3, whose time complexity is O((b+n) n^ε), where b is the size of the level.
Abstract: We consider the half-space range reporting problem: Given an $n$-point set $S$ in ${\bf R}^d$, preprocess it into a data structure, so that, given a query half-space $\gamma$, the points of $S \cap \gamma$ can be reported efficiently. We extend previously known static solutions to dynamic ones, supporting insertions and deletions of points of $S$. For given $m$, $n \leq m \leq n^{\lfloor d/2 \rfloor}$, and an arbitrarily small positive constant $\varepsilon$, we achieve $O(m^{1+\varepsilon})$ space and preprocessing time, $O\bigl(\frac{n}{m^{1/\lfloor d/2 \rfloor}} \log n\bigr)$ query time, and $O(m^{1+\varepsilon}/n)$ amortized update time ($d \geq 3$). We present, among others, the following applications: an $O(n^{1+\varepsilon})$-time algorithm for computing convex layers in ${\bf R}^3$, and an output-sensitive algorithm for computing a level in an arrangement of planes in ${\bf R}^3$, whose time complexity is $O((b+n)n^{\varepsilon})$, where $b$ is the size of the level.

135 citations


Journal ArticleDOI
TL;DR: It is found that a number of circuit placement problems have energy landscapes with fractal properties, thus giving for the first time a reasonable explanation of the successful application of simulated annealing to problems in the VLSI domain.
Abstract: We present a new theoretical framework for analyzing simulated annealing. The behavior of simulated annealing depends crucially on the “energy landscape” associated with the optimization problem: the landscape must have special properties if annealing is to be efficient. We prove that certain fractal properties are sufficient for simulated annealing to be efficient in the following sense: If a problem is scaled to have best solutions of energy 0 and worst solutions of energy 1, a solution of expected energy no more than ɛ can be found in time polynomial in 1/ɛ, where the exponent of the polynomial depends on certain parameters of the fractal. Higher-dimensional versions of the problem can be solved with almost identical efficiency. The cooling schedule used to achieve this result is the familiar geometric schedule of annealing practice, rather than the logarithmic schedule of previous theory. Our analysis is more realistic than those of previous studies of annealing in the constraints we place on the problem space and the conclusions we draw about annealing's performance. The mode of analysis is also new: Annealing is modeled as a random walk on a graph, and recent theorems relating the “conductance” of a graph to the mixing rate of its associated Markov chain generate both a new conceptual approach to annealing and new analytical, quantitative methods. The efficiency of annealing is compared with that of random sampling and descent algorithms. While these algorithms are more efficient for some fractals, their run times increase exponentially with the number of dimensions, making annealing better for problems of high dimensionality. We find that a number of circuit placement problems have energy landscapes with fractal properties, thus giving for the first time a reasonable explanation of the successful application of simulated annealing to problems in the VLSI domain.

95 citations


Journal ArticleDOI
TL;DR: This paper describes a simple deterministic parallel algorithm for list ranking that matches the performance of the Cole-Vishkin [CV3] algorithm while having reasonable constant factors.
Abstract: In this paper we describe a simple parallel algorithm for list ranking. The algorithm is deterministic and runs in O(log n) time on an EREW PRAM with n/log n processors. The algorithm matches the performance of the Cole-Vishkin [CV3] algorithm but is simple and has reasonable constant factors.

75 citations
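
The paper's n/log n-processor algorithm is more subtle than can be shown here, but the flavor of parallel list ranking is captured by classic pointer jumping (Wyllie's algorithm), simulated sequentially below: each node repeatedly doubles its shortcut pointer while accumulating distances, so all ranks are ready after O(log n) synchronous rounds (using n processors rather than the paper's n/log n).

```python
def list_rank(succ):
    """succ[i] = successor of node i, or None at the tail.
    Returns rank[i] = distance from node i to the end of the list."""
    n = len(succ)
    rank = [0 if succ[i] is None else 1 for i in range(n)]
    nxt = list(succ)
    while any(p is not None for p in nxt):
        # One synchronous round: all nodes jump simultaneously,
        # so read the old arrays and write fresh copies.
        new_rank, new_nxt = rank[:], nxt[:]
        for i in range(n):
            if nxt[i] is not None:
                new_rank[i] = rank[i] + rank[nxt[i]]
                new_nxt[i] = nxt[nxt[i]]
        rank, nxt = new_rank, new_nxt
    return rank

# List 0 -> 1 -> 2 -> 3: ranks are distances to the tail.
print(list_rank([1, 2, 3, None]))  # -> [3, 2, 1, 0]
```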


Journal ArticleDOI
TL;DR: By constructing a master equation for the distribution of outcomes from simulated annealing, this formalism is able to characterize this process exactly for arbitrary annealing schedules on extremely small problems.
Abstract: By constructing a master equation for the distribution of outcomes from simulated annealing, we are able to characterize this process exactly for arbitrary annealing schedules on extremely small problems. Two sorts of numerical experiments are reported, using this formalism. First, annealing schedules are found which minimize the cut cost of partitioning a highly symmetric weighted graph, using a fixed number of Monte Carlo search steps. The experiments yield some surprising results, which sharpen our understanding of the problems inherent in trying to optimize a stochastic search. For example, optimal annealing schedules are not monotone decreasing in temperature. Second, we construct configuration spaces of random energies and varying connectivity. These are used to compare different annealing schedules which are common in the literature. The experiments also provide an occasion to contrast annealing schedules derived from asymptotic, worst-case bounds on convergence to the global optimum with adaptive schedules which attempt to maintain the system close to equilibrium throughout the annealing process.

71 citations


Journal ArticleDOI
TL;DR: It is proved that the height of an associated digital tree is simply related to the alignment matrix through some order statistics, and it is established that the height of a digital trie under an independent model is asymptotically equal to 2 log_α n, where n is the number of words stored in the trie and α is a parameter of the probabilistic model.
Abstract: This paper studies in a probabilistic framework some topics concerning the way words (strings) can overlap, and the relationship of this to the height of digital trees associated with this set of words. A word is defined as a (possibly infinite) random sequence of symbols over a finite alphabet. A key notion of an alignment matrix {C_ij}, i, j = 1, ..., n, is introduced, where C_ij is the length of the longest string that is a prefix of both the ith and the jth word. It is proved that the height of an associated digital tree is simply related to the alignment matrix through some order statistics. In particular, using this observation and proving some inequalities for order statistics, we establish that the height of a digital trie under an independent model (i.e., all words are statistically independent) is asymptotically equal to 2 log_α n, where n is the number of words stored in the trie and α is a parameter of the probabilistic model. This result is generalized in three directions, namely we consider b-tries, a Markovian model (i.e., dependency among letters in a word), and a dependent model (i.e., dependency among words). In particular, when consecutive letters in a word are Markov dependent (Markovian model), we demonstrate that the height converges in probability to 2 log_α n, where α is a parameter of the underlying Markov chain. On the other hand, for suffix trees, which fall into the dependent model, we show that the height does not exceed 2 log_α n, where α is a parameter of the probabilistic model. These results find plenty of applications in the analysis of data structures built over digital words.

69 citations
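
The independent-model result is easy to probe empirically. For an unbiased binary alphabet the theorem predicts height ≈ 2 log2 n, and the trie height equals one plus the largest pairwise alignment C_ij, so the hypothetical script below estimates it directly from random words (the word length and sample sizes are arbitrary choices).

```python
import math, random

def lcp(a, b):
    """Length of the longest common prefix (the alignment C_ij)."""
    n = 0
    while n < len(a) and a[n] == b[n]:
        n += 1
    return n

def trie_height(n, length=64, seed=1):
    """Height of a trie over n random binary words = 1 + max pairwise LCP."""
    rng = random.Random(seed)
    words = ["".join(rng.choice("01") for _ in range(length)) for _ in range(n)]
    return 1 + max(lcp(w1, w2)
                   for i, w1 in enumerate(words) for w2 in words[i + 1:])

for n in (64, 256, 1024):
    print(n, trie_height(n), round(2 * math.log2(n), 1))  # observed vs. 2 log2 n
```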


Journal ArticleDOI
TL;DR: An algorithm is given for embedding n-node graphs of thickness k in k layers using O(n^3) area, using no contact cuts, and respecting prespecified node placements, which is asymptotically optimal for placement-respecting algorithms, even if more layers are allowed.
Abstract: In this paper we propose two new multilayer grid models for VLSI layout, both of which take into account the number of contact cuts used. For the first model, in which nodes “exist” only on one layer, we prove a tight area × (number of contact cuts) = Θ(n^2) tradeoff for embedding n-node planar graphs of bounded degree in two layers. For the second model, in which nodes “exist” simultaneously on all layers, we give a number of upper bounds on the area needed to embed graphs using no contact cuts. We show that any n-node graph of thickness 2 can be embedded on two layers in O(n^2) area. This bound is tight even if more layers and any number of contact cuts are allowed. We also show that planar graphs of bounded degree can be embedded on two layers in O(n^{3/2}(log n)^2) area.

62 citations


Journal ArticleDOI
TL;DR: A class of algorithms for finding the global minimum of a continuous-variable function defined on a hypercube, based on both diffusion processes and simulated annealing, is presented, and it is shown that "learning" in these networks can be achieved by a set of three interconnected diffusion machines.
Abstract: The first purpose of this paper is to present a class of algorithms for finding the global minimum of a continuous-variable function defined on a hypercube. These algorithms, based on both diffusion processes and simulated annealing, are implementable as analog integrated circuits. Such circuits can be viewed as generalizations of neural networks of the Hopfield type, and are called "diffusion machines." Our second objective is to show that "learning" in these networks can be achieved by a set of three interconnected diffusion machines: one that learns, one to model the desired behavior, and one to compute the weight changes.

Journal ArticleDOI
TL;DR: A deterministic approximation algorithm is presented that uses close to the minimum possible channel space in a two-dimensional gate-array and is best suited to cases where the number of terminals on each net is small.
Abstract: We consider the problem of routing multiterminal nets in a two-dimensional gate-array. Given a gate-array and a set of nets to be routed, we wish to find a routing that uses as little channel space as possible. We present a deterministic approximation algorithm that uses close to the minimum possible channel space. We cast the routing problem as a new form of zero-one multicommodity flow, an integer-programming problem. We solve this integer program approximately by first solving its linear-program relaxation and then rounding any fractions that appear in the solution to the linear program. The running time of the rounding algorithm is exponential in the number of terminals in a net but polynomial in the number of nets and the size of the array. The algorithm is thus best suited to cases where the number of terminals on each net is small.
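
As a hedged illustration of the rounding objective only (not the paper's LP-based, deterministic procedure), the greedy stand-in below assigns each net one of its candidate paths while trying to keep the maximum channel load small. The nets, candidate paths, and edge names are hypothetical.

```python
from collections import defaultdict

def round_routing(nets):
    """nets: one list of candidate paths per net; a path is a list of edges.
    Greedily pick one path per net, keeping the maximum channel load small."""
    load = defaultdict(int)
    chosen = []
    for paths in nets:
        # choose the candidate whose most-loaded edge is currently least loaded
        best = min(paths, key=lambda p: max(load[e] for e in p))
        for e in best:
            load[e] += 1
        chosen.append(best)
    return chosen, max(load.values())

nets = [[["e1", "e2"], ["e3"]],
        [["e3"], ["e1"]],
        [["e2"], ["e3", "e1"]]]
print(round_routing(nets))   # chosen paths and the resulting maximum load
```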

Journal ArticleDOI
TL;DR: In a two-dimensional Delaunay-triangulated domain, there exists a partial ordering of the triangles (with respect to a vertex) that is consistent with the two-dimensional visibility of the triangles from that vertex.
Abstract: In a two-dimensional Delaunay-triangulated domain, there exists a partial ordering of the triangles (with respect to a vertex) that is consistent with the two-dimensional visibility of the triangles from that vertex. An equivalent statement is that a polygon that is star-shaped with respect to a given vertex can be extended, one triangle at a time, until it includes the entire domain. Arbitrary planar triangulations do not possess this useful property which allows incremental processing of the triangles.

Journal ArticleDOI
TL;DR: It is shown, under suitable conditions on U(·), {ξ_k}, {a_k}, and {b_k}, that X_k converges in probability to the set of global minima of U(·), and a careful treatment is given of how X_k is restricted to a compact set and of its effect on convergence.
Abstract: We study the convergence of a class of discrete-time continuous-state simulated-annealing-type algorithms for multivariate optimization. The general algorithm that we consider is of the form X_{k+1} = X_k − a_k(∇U(X_k) + ξ_k) + b_k W_k. Here U(·) is a smooth function on a compact subset of ℝ^d, {ξ_k} is a sequence of ℝ^d-valued random variables, {W_k} is a sequence of independent standard d-dimensional Gaussian random variables, and {a_k}, {b_k} are sequences of positive numbers which tend to zero. These algorithms arise by adding decreasing white Gaussian noise to gradient descent, random search, and stochastic approximation algorithms. We show under suitable conditions on U(·), {ξ_k}, {a_k}, and {b_k} that X_k converges in probability to the set of global minima of U(·). A careful treatment of how X_k is restricted to a compact set and its effect on convergence is given.
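
A one-dimensional numerical sketch of the recursion X_{k+1} = X_k − a_k(∇U(X_k) + ξ_k) + b_k W_k makes the algorithm class tangible. The test function U and the gain sequences a_k, b_k below are illustrative choices and are not claimed to satisfy the paper's precise conditions.

```python
import math, random

def U(x):                  # smooth multimodal test function (illustrative)
    return x * x + 2 * math.sin(3 * x)

def dU(x):                 # its gradient
    return 2 * x + 6 * math.cos(3 * x)

def noisy_anneal(x0, iters=20000, seed=0):
    rng = random.Random(seed)
    x = x0
    for k in range(1, iters + 1):
        a_k = 1.0 / k                                  # decreasing gain
        b_k = 1.0 / math.sqrt(k * math.log(k + 1))     # decreasing injected noise
        xi = rng.gauss(0.0, 0.1)                       # gradient-measurement noise
        W = rng.gauss(0.0, 1.0)                        # standard Gaussian noise
        x = x - a_k * (dU(x) + xi) + b_k * W
        x = max(-5.0, min(5.0, x))                     # confine X_k to a compact set
    return x

print(noisy_anneal(x0=4.0))   # typically ends near the global minimizer of U
```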

Journal ArticleDOI
TL;DR: An algorithm to compute the convex hull of a curved object bounded by0(n) algebraic curve segments of maximum degreed is presented.
Abstract: We present an0(n ·do(1)) algorithm to compute the convex hull of a curved object bounded by0(n) algebraic curve segments of maximum degreed.

Journal ArticleDOI
TL;DR: A variant of k-d trees, the divided k-d tree, is described that has some important advantages over ordinary k-d trees and allows for the insertion and deletion of points in O(log n) worst-case time.
Abstract: A variant of k-d trees, the divided k-d tree, is described that has some important advantages over ordinary k-d trees. The divided k-d tree is fully dynamic and allows for the insertion and deletion of points in O(log n) worst-case time. Moreover, divided k-d trees allow for split and concatenate operations. Different types of queries can be performed as efficiently, or almost as efficiently, as on ordinary k-d trees. Both two- and multidimensional divided k-d trees are studied.

Journal ArticleDOI
TL;DR: This work considers the following “fence enclosure” problem: given a set of points in the plane with values v_i ≥ 0, the authors wish to enclose a subset of the points with a fence (a simple closed curve) in order to maximize the “value” of the enclosure.
Abstract: We study a variety of geometric versions of the classical knapsack problem. In particular, we consider the following "fence enclosure" problem: given a set S of n points in the plane with values v_i ≥ 0, we wish to enclose a subset of the points with a fence (a simple closed curve) in order to maximize the "value" of the enclosure. The value of the enclosure is defined to be the sum of the values of the enclosed points minus the cost of the fence. We consider various versions of the problem, such as allowing S to consist of points and/or simple polygons. Other versions of the problem are obtained by restricting the total amount of fence available and also allowing the enclosure to consist of at most M connected components. When there is an upper bound on the length of fence available, we show that the problem is NP-complete. We also provide polynomial-time algorithms for many versions of the fence problem when an unrestricted amount of fence is available.

Journal ArticleDOI
TL;DR: This paper discusses the complexity of packing k-chains (simple paths of length k) into an undirected graph; the chains packed must be either vertex-disjoint or edge-disjoint.
Abstract: This paper discusses the complexity of packing k-chains (simple paths of length k) into an undirected graph; the chains packed must be either vertex-disjoint or edge-disjoint. Linear-time algorithms are given for both problems when the graph is a tree, and for the edge-disjoint packing problem when the graph is general and k = 2. The vertex-disjoint packing problem for general graphs is shown to be NP-complete even when the graph has maximum degree three and k = 2. Similarly, the edge-disjoint packing problem is NP-complete even when the graph has maximum degree four and k = 3.

Journal ArticleDOI
TL;DR: It is concluded that there are essentially two polynomial algorithms: Karmarkar's method and the algorithm that follows a central trajectory, and they differ only in a choice of parameters (respectively lower bound and penalty multiplier).
Abstract: Since Karmarkar published his algorithm for linear programming, several different interior directions have been proposed and much effort was spent on the problem transformations needed to apply these new techniques. This paper examines several search directions in a common framework that does not need any problem transformation. These directions prove to be combinations of two problem-dependent vectors, and can all be improved by a bidirectional search procedure. We conclude that there are essentially two polynomial algorithms: Karmarkar's method and the algorithm that follows a central trajectory, and they differ only in a choice of parameters (respectively lower bound and penalty multiplier).

Journal ArticleDOI
TL;DR: It is conjectured that the Euclidean distance transform cannot be computed in polylogarithmic time on a mesh of trees; instead, the distance transform is approximated up to a given error.
Abstract: Distance transforms are an important computational tool for the processing of binary images. For an n × n image, distance transforms can be computed in time O(n) on a mesh-connected computer and in polylogarithmic time on hypercube-related structures. We investigate the possibilities of computing distance transforms in polylogarithmic time on the pyramid computer and the mesh of trees. For the pyramid, we obtain a polynomial lower bound using a result by Miller and Stout, so we turn our attention to the mesh of trees. We give a very simple O(log n) algorithm for the distance transform with respect to the L1 metric, an O(log^2 n) algorithm for the transform with respect to the L∞ metric, and find that the Euclidean metric is much more difficult. Based on evidence from number theory, we conjecture the impossibility of computing the Euclidean distance transform in polylogarithmic time on a mesh of trees. Instead, we approximate the distance transform up to a given error. This works for any L_k metric and takes time O(log^3 n).
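
For reference, the object being computed can be illustrated serially: the classical two-pass dynamic-programming scan below computes the L1 distance transform of a small binary image. This is a standard sequential method on a made-up example, not the paper's mesh-of-trees algorithm.

```python
def l1_distance_transform(img):
    """img: 2-D list of 0/1 pixels (with at least one 1-pixel).
    Returns the L1 distance from each pixel to the nearest 1-pixel."""
    n, m = len(img), len(img[0])
    INF = n + m                                # exceeds any possible L1 distance
    d = [[0 if img[i][j] else INF for j in range(m)] for i in range(n)]
    for i in range(n):                         # forward pass: top/left neighbors
        for j in range(m):
            if i > 0:
                d[i][j] = min(d[i][j], d[i - 1][j] + 1)
            if j > 0:
                d[i][j] = min(d[i][j], d[i][j - 1] + 1)
    for i in reversed(range(n)):               # backward pass: bottom/right neighbors
        for j in reversed(range(m)):
            if i + 1 < n:
                d[i][j] = min(d[i][j], d[i + 1][j] + 1)
            if j + 1 < m:
                d[i][j] = min(d[i][j], d[i][j + 1] + 1)
    return d

print(l1_distance_transform([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))
# -> [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
```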

Journal ArticleDOI
TL;DR: It is shown that certain general rearrangement rules can be modified to reduce significantly the number of data moves, without affecting the asymptotic cost of a data access.
Abstract: We consider self-organizing data structures when the number of data accesses is unknown. We show that certain general rearrangement rules can be modified to reduce significantly the number of data moves, without affecting the asymptotic cost of a data access. As a special case, explicit formulae are given for the expected cost of a data access and the expected number of data moves for the modified move-to-front rules for linear lists and binary trees. Since a data move usually costs at least as much as a data access, the modified rule eventually leads to a savings in total cost (the sum of data accesses and moves).
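
As a baseline for the rules being modified, here is the unmodified move-to-front rule on a linear list, with access cost and data moves counted separately (the two quantities the abstract trades off). The request sequence is made up; the paper's modified rule reduces the move count without hurting the asymptotic access cost.

```python
def mtf_cost(requests, initial):
    """Count access cost and data moves for plain move-to-front on a list."""
    lst = list(initial)
    accesses = moves = 0
    for x in requests:
        i = lst.index(x)
        accesses += i + 1                 # scanning to position i costs i + 1
        if i > 0:
            lst.insert(0, lst.pop(i))     # the data move, charged separately
            moves += 1
    return accesses, moves

print(mtf_cost("abcabcaaab", "abcd"))     # -> (23, 7)
```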

Journal ArticleDOI
TL;DR: A general strategy is presented for solving (approximately) combinatorial optimization problems with a Boltzmann machine for two different problems, namely MATCHING and GRAPH PARTITIONING.
Abstract: The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial optimization problems with a Boltzmann machine. The strategy is illustrated by discussing the details for two different problems, namely MATCHING and GRAPH PARTITIONING.

Journal ArticleDOI
Hans Rohnert
TL;DR: An algorithm is given for finding a collision-free path for a disc among a collection of polygons having n corners in total; whether a feasible path exists is answered in time O(log n).
Abstract: An algorithm is given for finding a collision-free path for a disc between a collection of polygons having n corners in total. The polygons are fixed and can be preprocessed. A query specifies the radius r of the disc to be moved and the start and destination points of the center of the disc. The answer whether a feasible path exists is given in time O(log n). Returning a feasible path is done in additional time proportional to the length of the description of the path. Preprocessing time is O(n log n) and space complexity is O(n).

Journal ArticleDOI
Dipen Moitra
TL;DR: This work derives an optimal parallel algorithm for the minimal square cover problem, which for any desired computation time T in [log n, n] runs on an EREW-PRAM with n/T processors.
Abstract: Given a black-and-white image, represented by an array of √n × √n binary-valued pixels, we wish to cover the black pixels with a minimal set of (possibly overlapping) maximal squares. It was recently shown that obtaining a minimum square cover for a polygonal binary image with holes is NP-hard. We derive an optimal parallel algorithm for the minimal square cover problem, which for any desired computation time T in [log n, n] runs on an EREW-PRAM with n/T processors. The cornerstone of our algorithm is a novel data structure, the cover graph, which compactly represents the covering relationships between the maximal squares of the image. The size of the cover graph is linear in the number of pixels. This algorithm has applications to problems in VLSI mask generation, incremental update of raster displays, and image compression.

Journal ArticleDOI
TL;DR: It is shown that the order-k Voronoi diagram of n sites with additive weights in the plane has at most (4k−2)(n−k) vertices, (6k−3)(n−k) edges, and (2k−1)(n−k) + 1 regions.
Abstract: It is shown that the order-k Voronoi diagram of n sites with additive weights in the plane has at most (4k−2)(n−k) vertices, (6k−3)(n−k) edges, and (2k−1)(n−k) + 1 regions. These bounds are approximately the same as the ones known for unweighted order-k Voronoi diagrams. Furthermore, tight upper bounds on the number of edges and vertices are given for the case that every weighted site has a nonempty region in the order-1 diagram. The proof is based on a new algorithm for the construction of these diagrams which generalizes a plane-sweep algorithm for order-1 diagrams developed by Steven Fortune. The new algorithm has time complexity O(k^2 n log n) and space complexity O(kn). It is the only nontrivial algorithm known for constructing order-k Voronoi diagrams of sites with additive weights. It is fairly simple and of practical interest also in the special case of unweighted sites.

Journal ArticleDOI
TL;DR: In this paper, the authors present several algorithms for rapidly four-coloring large planar graphs and discuss the results of extensive experimentation with over 140 graphs from two distinct classes of randomly generated instances having up to 128,000 vertices.
Abstract: We present several algorithms for rapidly four-coloring large planar graphs and discuss the results of extensive experimentation with over 140 graphs from two distinct classes of randomly generated instances having up to 128,000 vertices. Although the algorithms can potentially require exponential time, the observed running times of our more sophisticated algorithms are linear in the number of vertices over the range of sizes tested. The use of Kempe chaining and backtracking together with a fast heuristic which usually, but not always, resolves impasses gives us hybrid algorithms that: (1) successfully four-color all our test graphs, and (2) in practice run, on average, only twice as slowly as the well-known, nonexact, simple-to-code O(n) saturation algorithm of Brélaz.
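
The Brélaz saturation heuristic used above as the speed baseline is easy to state in code. The sketch below is a generic DSATUR implementation on a toy wheel graph (an assumption for illustration); it is the comparison algorithm, not the authors' hybrid Kempe-chain method.

```python
def dsatur(adj):
    """adj: dict vertex -> set of neighbors. Returns dict vertex -> color."""
    color = {}
    neighbor_colors = {v: set() for v in adj}       # saturation sets
    while len(color) < len(adj):
        # most saturated uncolored vertex, ties broken by degree
        v = max((u for u in adj if u not in color),
                key=lambda u: (len(neighbor_colors[u]), len(adj[u])))
        c = 0
        while c in neighbor_colors[v]:              # smallest free color
            c += 1
        color[v] = c
        for w in adj[v]:
            neighbor_colors[w].add(c)
    return color

# Toy planar graph: a wheel over a 5-cycle, which needs four colors.
adj = {0: {1, 2, 3, 4, 5}, 1: {0, 2, 5}, 2: {0, 1, 3},
       3: {0, 2, 4}, 4: {0, 3, 5}, 5: {0, 1, 4}}
print(dsatur(adj))
```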

Journal ArticleDOI
TL;DR: An optimal O(n log n) plane-sweep algorithm is described for computing a Delaunay triangulation on a possibly degenerate set of sites in the plane under the L1 metric or the L∞ metric.
Abstract: The Delaunay diagram on a set of points in the plane, called sites, is the straight-line dual graph of the Voronoi diagram. When no degeneracies are present, the Delaunay diagram is a triangulation of the sites, called the Delaunay triangulation. When degeneracies are present, edges must be added to the Delaunay diagram to obtain a Delaunay triangulation. In this paper we describe an optimal O(n log n) plane-sweep algorithm for computing a Delaunay triangulation on a possibly degenerate set of sites in the plane under the L1 metric or the L∞ metric.

Journal ArticleDOI
TL;DR: A tight Ω(n^3) lower bound is proved on the area of rectilinear grids which allow a permutation layout of n inputs and n outputs; the bound also holds for permutation layouts in multilayer grids as long as a fixed fraction of the paths do not change layers.
Abstract: In this paper we prove a tight Ω(n^3) lower bound on the area of rectilinear grids which allow a permutation layout of n inputs and n outputs. Previously, the best lower bound for the area of permutation layouts with arbitrary placement of the inputs and outputs was Ω(n^2), though Cutler and Shiloach [CS] proved an Ω(n^{2.5}) lower bound for permutation layouts in which the set of inputs and the set of outputs each lie on horizontal lines. Our lower bound also holds for permutation layouts in multilayer grids as long as a fixed fraction of the paths do not change layers.

Journal ArticleDOI
TL;DR: This work considers several models of growth, including general birth-and-death processes, the M/G/∞ model, and a non-Markovian process for processing plane-sweep information in computational geometry, called “hashing with lazy deletion” (HwLD).
Abstract: We answer questions about the distribution of the maximum size of queues and data structures as a function of time. The concept of "maximum" occurs in many issues of resource allocation. We consider several models of growth, including general birth-and-death processes, the M/G/∞ model, and a non-Markovian process (data structure) for processing plane-sweep information in computational geometry, called "hashing with lazy deletion" (HwLD). It has been shown that HwLD is optimal in terms of expected time and dynamic space; our results show that it is also optimal in terms of expected preallocated space, up to a constant factor. We take two independent and complementary approaches: first, in Section 2, we use a variety of algebraic and analytical techniques to derive exact formulas for the distribution of the maximum queue size in stationary birth-and-death processes and in a nonstationary model related to file histories. The formulas allow numerical evaluation and some asymptotics. In our second approach, in Section 3, we consider the M/G/∞ model (which includes M/M/∞ as a special case) and use techniques from the analysis of algorithms to get optimal big-oh bounds on the expected maximum queue size and on the expected maximum amount of storage used by HwLD in excess of the optimal amount. The techniques appear extendible to other models, such as M/M/1.
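
A quick simulation conveys the quantity under study: the maximum occupancy of an M/M/∞ queue over a finite horizon. The script below is a hypothetical illustration with made-up rates; the paper derives the distribution of such maxima analytically rather than by simulation.

```python
import heapq, random

def max_mminf_occupancy(lam=5.0, mu=1.0, horizon=1000.0, seed=0):
    """Simulate an M/M/infinity queue; return the maximum occupancy seen."""
    rng = random.Random(seed)
    t, busy, max_busy = 0.0, 0, 0
    departures = []                            # min-heap of departure times
    while t < horizon:
        t += rng.expovariate(lam)              # next Poisson arrival
        while departures and departures[0] <= t:
            heapq.heappop(departures)          # services completed before now
            busy -= 1
        busy += 1
        heapq.heappush(departures, t + rng.expovariate(mu))
        max_busy = max(max_busy, busy)
    return max_busy

print(max_mminf_occupancy())   # maximum number of simultaneously busy servers
```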

Journal ArticleDOI
TL;DR: This paper considers a variety of computational-geometry problems on images in a digitized plane, presents optimal algorithms for solving these problems on a systolic screen, and derives O(√M)-time solutions to a number of other geometric problems, e.g., rectangular visibility, separability, detection of pseudo-star-shapedness, and optical clustering.
Abstract: A digitized plane of size M is a rectangular √M × √M array of integer lattice points called pixels. A √M × √M mesh-of-processors in which each processor P_ij represents pixel (i, j) is a natural architecture to store and manipulate images in such a plane; this parallel architecture is called a systolic screen. In this paper we consider a variety of computational-geometry problems on images in a digitized plane, and present optimal algorithms for solving these problems on a systolic screen. In particular, we present O(√M)-time algorithms for determining all contours of an image; constructing all rectilinear convex hulls of an image (peeling); solving the parallel and perspective visibility problem for n disjoint digitized images; and constructing the Voronoi diagram of n planar objects represented by disjoint images, for a large class of object types (e.g., points, line segments, circles, ellipses, and polygons of constant size) and distance functions (e.g., all L_p metrics). These algorithms imply O(√M)-time solutions to a number of other geometric problems: e.g., rectangular visibility, separability, detection of pseudo-star-shapedness, and optical clustering. One of the proposed techniques also leads to a new parallel algorithm for determining all longest common subsequences of two words.