
Showing papers on "Approximation algorithm published in 1983"


Proceedings ArticleDOI
07 Nov 1983
TL;DR: A simple but very general Monte-Carlo technique for the approximate solution of enumeration and reliability problems is presented, along with several applications.
Abstract: We present a simple but very general Monte-Carlo technique for the approximate solution of enumeration and reliability problems. Several applications are given, including: 1. Estimating the number of triangulated plane maps with a given number of vertices; 2. Estimating the cardinality of a union of sets; 3. Estimating the number of input combinations for which a boolean function, presented in disjunctive normal form, takes the value true.
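
The union-of-sets application (item 2) has a particularly compact form. Below is a minimal sketch of the standard estimator for it, assuming each set's size is known and membership can be tested; the toy sets and trial count are illustrative only.

```python
import random

def estimate_union_size(sets, trials=20_000, rng=random):
    """Monte-Carlo estimate of |S_1 u ... u S_m|.

    Sample a set with probability proportional to its size, then a uniform
    element of it; the sample is 'canonical' if the chosen set is the first
    set containing that element.  The fraction of canonical samples times
    sum(|S_i|) estimates the size of the union.
    """
    pools = [list(s) for s in sets]
    sizes = [len(s) for s in sets]
    total = sum(sizes)
    hits = 0
    for _ in range(trials):
        i = rng.choices(range(len(sets)), weights=sizes)[0]
        x = rng.choice(pools[i])
        # canonical representative: the first set that contains x
        first = next(j for j, s in enumerate(sets) if x in s)
        if first == i:
            hits += 1
    return total * hits / trials

if __name__ == "__main__":
    A = set(range(0, 600))
    B = set(range(400, 900))
    C = set(range(850, 1000))
    print(estimate_union_size([A, B, C]))   # true union size is 1000
```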

342 citations


Journal ArticleDOI
TL;DR: A collection of efficient algorithms that deliver approximate solutions to the weighted stable set, vertex cover and set packing problems and guarantee bounds on the ratio of the heuristic solution to the optimal solution.

304 citations


Journal ArticleDOI
TL;DR: This work generalizes the classical one-dimensional bin packing model to include dynamic arrivals and departures of items over time, and shows that no on-line packing algorithm can satisfy a substantially better performance bound than that for First Fit.
Abstract: Motivated by potential applications to computer storage allocation, we generalize the classical one-dimensional bin packing model to include dynamic arrivals and departures of items over time. Within this setting, we prove close upper and lower bounds on the worst-case performance of the commonly used First Fit packing algorithm, and, using adversary-type arguments, we show that no on-line packing algorithm can satisfy a substantially better performance bound than that for First Fit.
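
For orientation, here is a small sketch of First Fit in the dynamic setting described above: an arriving item goes into the first bin with room, and a departing item simply frees its space. Bin capacity, item sizes, and the arrival/departure sequence are made-up values.

```python
class DynamicFirstFit:
    """First Fit bin packing with dynamic arrivals and departures."""

    def __init__(self, capacity=1.0):
        self.capacity = capacity
        self.bins = []            # each bin: dict item_id -> size
        self.location = {}        # item_id -> bin index

    def arrive(self, item_id, size):
        # place in the first (oldest) bin with enough free space
        for idx, b in enumerate(self.bins):
            if sum(b.values()) + size <= self.capacity:
                b[item_id] = size
                self.location[item_id] = idx
                return idx
        self.bins.append({item_id: size})
        self.location[item_id] = len(self.bins) - 1
        return self.location[item_id]

    def depart(self, item_id):
        idx = self.location.pop(item_id)
        del self.bins[idx][item_id]

    def bins_in_use(self):
        return sum(1 for b in self.bins if b)

if __name__ == "__main__":
    ff = DynamicFirstFit()
    for i, size in enumerate([0.6, 0.5, 0.4, 0.3]):
        ff.arrive(i, size)
    ff.depart(0)                  # item 0 leaves, freeing space in bin 0
    ff.arrive(4, 0.35)            # First Fit reuses the freed space
    print(ff.bins_in_use(), ff.bins)
```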

168 citations


Journal ArticleDOI
TL;DR: It is shown that by choosing r appropriately, the asymptotic worst case performance of the shelf algorithms can be made arbitrarily close to that of the next-fit and first-fit level algorithms, without the restriction that items must be packed in order of decreasing height.
Abstract: This paper studies two approximation algorithms for packing rectangles, using the two-dimensional packing model of Baker, Coffman and Rivest [SIAM J. Comput., 9 (1980), pp. 846–855]. The algorithms studied are called next-fit and first-fit shelf algorithms, respectively. They differ from previous algorithms by packing the rectangles in the order given; the previous algorithms required sorting the rectangles by decreasing height or width before packing them, which is not possible in some applications. The shelf algorithms are a modification of the next-fit and first-fit decreasing height level algorithms of Coffman, Garey, Johnson and Tarjan [SIAM J. Comput., 9 (1980), pp. 808–826]. Each shelf algorithm takes a parameter r. It is shown that by choosing r appropriately, the asymptotic worst case performance of the shelf algorithms can be made arbitrarily close to that of the next-fit and first-fit level algorithms, without the restriction that items must be packed in order of decreasing height. Nonasymptoti...
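
A rough sketch of the next-fit shelf idea follows: each rectangle is assigned to a shelf whose height is its own height rounded up to a power of r, and only one shelf per height class stays open at a time. The rounding rule, strip width, and parameter values below are simplified assumptions, not the paper's exact definitions.

```python
import math

def next_fit_shelf(rects, strip_width=1.0, r=0.5):
    """Pack rectangles (w, h), 0 < h <= 1, into a strip with a next-fit shelf rule.

    A rectangle of height h goes on a shelf of height r**k, where
    r**(k+1) < h <= r**k.  Only one shelf per height class is kept open; if
    the rectangle does not fit widthwise, a new shelf of that height is
    opened on top of the packing.  Returns the total height used and the
    placements (x, y, w, h).
    """
    open_shelf = {}      # shelf height -> [y coordinate, used width]
    top = 0.0            # current top of the packing
    placements = []
    for w, h in rects:
        k = math.floor(math.log(h, r))        # want r**(k+1) < h <= r**k
        if r ** (k + 1) >= h:                  # guard against rounding error
            k += 1
        shelf_h = r ** k
        if shelf_h in open_shelf and open_shelf[shelf_h][1] + w <= strip_width:
            y, used = open_shelf[shelf_h]
        else:
            y, used = top, 0.0                 # open a new shelf on top
            top += shelf_h
        placements.append((used, y, w, h))
        open_shelf[shelf_h] = [y, used + w]
    return top, placements

if __name__ == "__main__":
    rects = [(0.4, 0.9), (0.5, 0.45), (0.3, 0.8), (0.6, 0.4), (0.7, 0.2)]
    height, layout = next_fit_shelf(rects, r=0.5)
    print(round(height, 3), layout)
```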

166 citations


Journal ArticleDOI
TL;DR: Three fast and efficient "scan-along" algorithms for compressing digitized electrocardiographic data are described, based on the minimum perimeter polygonal approximation for digitized curves.
Abstract: Three fast and efficient "scan-along" algorithms for compressing digitized electrocardiographic data are described. These algorithms are "scan-along" in the sense that they produce the compressed data in real time as the electrocardiogram is generated. The algorithms are based on the minimum perimeter polygonal approximation for digitized curves. The approximation restricts the maximum error to be no greater than a specified value. Our algorithms achieve a compression ratio of ten on a database of 8000 5-beat abnormal electrocardiograms sampled at 250 Hz and a compression ratio of eleven on a database of 600 3-beat normal electrocardiograms (different from the preceding database) sampled at 500 Hz.
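
The bounded-error, one-pass flavor of such compressors can be illustrated with a much simpler greedy variant than the paper's minimum-perimeter construction; the tolerance, the synthetic test signal, and the vertical-deviation error measure below are assumptions made for the sketch.

```python
def scan_along_compress(samples, eps=5.0):
    """One-pass piecewise-linear compression with a hard error bound.

    Greedily extends the current segment from the last emitted vertex and
    emits a new vertex just before the first sample whose deviation from
    the chord would exceed eps.  Returns indices of the retained samples.
    """
    kept = [0]
    anchor = 0
    i = 2
    while i < len(samples):
        x0, y0 = anchor, samples[anchor]
        x1, y1 = i, samples[i]
        # maximum vertical deviation of intermediate samples from the chord
        dev = max(
            abs(samples[j] - (y0 + (y1 - y0) * (j - x0) / (x1 - x0)))
            for j in range(anchor + 1, i)
        )
        if dev > eps:
            kept.append(i - 1)
            anchor = i - 1
        i += 1
    kept.append(len(samples) - 1)
    return kept

if __name__ == "__main__":
    import math
    ecg_like = [100 * math.exp(-((t - 50) / 4) ** 2) for t in range(200)]
    idx = scan_along_compress(ecg_like, eps=2.0)
    print(f"kept {len(idx)} of {len(ecg_like)} samples")
```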

145 citations


Journal ArticleDOI
TL;DR: In this article, the problem of task allocation in fault-tolerant distributed systems is formulated as a constrained sum-of-squares minimization problem and an efficient approximation algorithm is proposed.
Abstract: This paper examines task allocation in fault-tolerant distributed systems. The problem is formulated as a constrained sum of squares minimization problem. The computational complexity of this problem prompts us to consider an efficient approximation algorithm. We show that the ratio of the performance of the approximation algorithm to that of the optimal solution is bounded by 9m/(8(m-r+1)), where m is the number of processors to be allocated and r is the number of times each task is to be replicated. Experience with the algorithm suggests that even better performance ratios can be expected.
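
The abstract does not spell out the approximation algorithm itself; the sketch below is only an illustrative greedy heuristic (not the paper's algorithm) for the replicated-allocation objective, with made-up task loads.

```python
import heapq

def greedy_replicated_allocation(task_loads, m, r):
    """Greedy heuristic for replicated task allocation on m processors.

    Each task contributes its load to r distinct processors.  Tasks are
    taken largest first and each is placed on the r currently least-loaded
    processors, which tends to keep sum(load**2) small.  This is an
    illustrative heuristic, not the algorithm analysed in the paper.
    """
    loads = [0.0] * m
    assignment = {}
    for t, w in sorted(enumerate(task_loads), key=lambda kv: -kv[1]):
        procs = heapq.nsmallest(r, range(m), key=lambda p: loads[p])
        for p in procs:
            loads[p] += w
        assignment[t] = procs
    return loads, assignment, sum(l * l for l in loads)

if __name__ == "__main__":
    loads, assign, cost = greedy_replicated_allocation(
        task_loads=[5, 3, 3, 2, 2, 1], m=4, r=2)
    print(loads, cost)
```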

113 citations


Proceedings ArticleDOI
Brenda S. Baker
07 Nov 1983
TL;DR: A general technique that can be used to obtain approximation algorithms for various NP-complete problems on planar graphs, including maximum independent set, maximum tile salvage, partition into triangles, maximum H-matching, minimum vertex cover, minimum dominating set, and minimum edge dominating set.
Abstract: This paper describes a general technique that can be used to obtain approximation algorithms for various NP-complete problems on planar graphs. The strategy depends on decomposing a planar graph into subgraphs of a form we call k- outerplanar. For fixed k, the problems of interest are solvable optimally in linear time on k-outerplanar graphs by dynamic programming. For general planar graphs, if the problem is a maximization problem, such as maximum independent set, this technique gives for each k a linear time algorithm that produces a solution whose size is at least (k-1)/k optimal. If the problem is a minimization problem, such as minimum vertex cover, it gives for each k a linear time algorithm that produces a solution whose size is at most (k + 1)/k optimal. Taking k = c log log n or k = c log n, where n is the number of nodes and c is some constant, we get polynomial time approximation schemes, i.e. algorithms whose solution sizes converge toward optimal as n increases. The class of problems for which this approach provides approximation schemes includes maximum independent set, maximum tile salvage, partition into triangles, maximum H-matching, minimum vertex cover, minimum dominating set, and minimum edge dominating set. For these and certain other problems, the proof of solvability on k-outerplanar graphs also enlarges the class of planar graphs for which the problems are known to be solvable.
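
The decomposition step for maximization problems can be sketched directly: drop every k-th breadth-first level (trying all k shifts), solve each remaining piece exactly, and keep the best union of solutions. The sketch below does this for maximum independent set but substitutes brute force on the (small) pieces for the paper's linear-time dynamic programming on k-outerplanar graphs, and assumes the networkx library is available.

```python
import networkx as nx

def mis_bruteforce(G):
    """Exact maximum independent set by branching on a highest-degree vertex."""
    if G.number_of_nodes() == 0:
        return set()
    v = max(G.nodes, key=G.degree)
    if G.degree(v) == 0:
        return {v} | mis_bruteforce(G.subgraph(set(G) - {v}).copy())
    # either exclude v, or include v and exclude its neighbours
    without_v = mis_bruteforce(G.subgraph(set(G) - {v}).copy())
    with_v = {v} | mis_bruteforce(
        G.subgraph(set(G) - {v} - set(G.neighbors(v))).copy())
    return max(without_v, with_v, key=len)

def baker_style_mis(G, k=3, root=None):
    """Shifted BFS-level decomposition for planar maximum independent set.

    Drops every k-th BFS level (for each of the k shifts), solves the pieces
    exactly, and returns the best union; on planar graphs the result has
    size at least (k-1)/k of optimal.  Brute force stands in here for the
    paper's dynamic programming on k-outerplanar graphs.
    """
    root = root if root is not None else next(iter(G))
    level = dict(nx.single_source_shortest_path_length(G, root))
    best = set()
    for shift in range(k):
        keep = [v for v in G if level.get(v, 0) % k != shift]
        sol = set()
        for comp in nx.connected_components(G.subgraph(keep)):
            sol |= mis_bruteforce(G.subgraph(comp).copy())
        best = max(best, sol, key=len)
    return best

if __name__ == "__main__":
    G = nx.grid_2d_graph(6, 6)               # a small planar test graph
    print(len(baker_style_mis(G, k=3)))       # approximate size; optimum is 18
```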

92 citations


Proceedings ArticleDOI
07 Nov 1983
TL;DR: This method gives a polynomial time attack on knapsack public key cryptosystems that can be expected to break them if they transmit information at rates below dc(n), as n → ∞.
Abstract: The subset sum problem is to decide whether or not the 0-1 integer programming problem Σ_{i=1}^{n} a_i x_i = M; all x_i = 0 or 1; has a solution, where the a_i and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public key cryptosystems of knapsack type. We propose an algorithm which when given an instance of the subset sum problem searches for a solution. This algorithm always halts in polynomial time, but does not always find a solution when one exists. It converts the problem to one of finding a particular short vector v in a lattice, and then uses a lattice basis reduction algorithm due to A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovasz to attempt to find v. We analyze the performance of the proposed algorithm. Let the density d of a subset sum problem be defined by d = n/log_2(max_i a_i). Then for "almost all" problems of density d < .645 the vector v we are searching for is the shortest nonzero vector in the lattice. We prove that for "almost all" problems of density d < 1/n the lattice basis reduction algorithm locates v. Extensive computational tests of the algorithm suggest that it works for densities d < dc(n), where dc(n) is a cutoff value that is substantially larger than 1/n. This method gives a polynomial time attack on knapsack public key cryptosystems that can be expected to break them if they transmit information at rates below dc(n), as n → ∞.
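
The reduction to a short-vector problem is easy to write down. The sketch below builds the (n+1)-dimensional basis commonly used in this style of attack and checks that a known 0-1 solution corresponds to a short lattice vector; the lattice basis reduction step (LLL) is omitted, and the toy instance is made up.

```python
def subset_sum_lattice_basis(a, M, scale=None):
    """Build an (n+1) x (n+1) lattice basis for the subset sum instance (a, M).

    Rows b_1..b_n are unit vectors with scale*a_i appended; row b_{n+1} is
    (0,...,0, -scale*M).  If x is a 0-1 solution of sum(a_i x_i) = M, then
    sum_i x_i b_i + b_{n+1} = (x_1,...,x_n, 0) is a very short lattice
    vector, which a basis reduction algorithm (not shown) tries to find.
    """
    n = len(a)
    scale = scale or n * max(a)          # weight the last coordinate heavily
    basis = []
    for i in range(n):
        row = [0] * (n + 1)
        row[i] = 1
        row[n] = scale * a[i]
        basis.append(row)
    basis.append([0] * n + [-scale * M])
    return basis

if __name__ == "__main__":
    a = [366, 385, 392, 401, 422, 437]     # toy instance
    x = [1, 0, 1, 0, 1, 0]                 # a known 0-1 solution
    M = sum(ai for ai, xi in zip(a, x) if xi)
    B = subset_sum_lattice_basis(a, M)
    target = [sum(xi * B[i][j] for i, xi in enumerate(x)) + B[-1][j]
              for j in range(len(a) + 1)]
    print(target)                          # -> [1, 0, 1, 0, 1, 0, 0]
```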

77 citations


Proceedings ArticleDOI
01 Dec 1983
TL;DR: Under a three-parameter model that is closer to the design rules of current fabrication technologies, it is shown that every channel can be routed using 2d+O(1) tracks.
Abstract: In practice, it appears that the flux never exceeds a small constant. In this case the algorithm performs asymptotically better than the best-known knock-knee algorithm [21], and almost as well as the best-known three-layer algorithm [19], without requiring the use of either knock-knees or three layers of interconnect. In addition, the three-parameter model, which is closer to the design rules of current fabrication technologies, is presented. Under this model, it is shown that every channel can be routed using 2d+O(1) tracks.

68 citations


Journal ArticleDOI
TL;DR: In this paper, a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file is presented and an algorithm that obtains a near optimal solution to the index selection problem in polynomial time is developed.
Abstract: A problem of considerable interest in the design of a database is the selection of indexes. In this paper, we present a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file. An evaluation function, which is based on the cost saving (in terms of the number of page accesses) attributable to the use of an index set, is then developed. The maximization of this function would yield an optimal set of indexes. Unfortunately, algorithms known to solve this maximization problem require an order of time exponential in the total number of attributes in the file. Consequently, we develop the theoretical basis which leads to an algorithm that obtains a near optimal solution to the index selection problem in polynomial time. The theoretical result consists of showing that the index selection problem can be solved by solving a properly chosen instance of the knapsack problem. A theoretical bound for the amount by which the solution obtained by this algorithm deviates from the true optimum is provided. This result is then interpreted in the light of evidence gathered through experiments.
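
The paper's contribution is the mapping from index selection to a knapsack instance, which is not reproduced here; the sketch below shows only the generic 0-1 knapsack dynamic program that such a reduction ultimately relies on, with hypothetical per-index benefits and costs.

```python
def knapsack_select(benefit, cost, budget):
    """0-1 knapsack DP: pick items maximizing total benefit within a cost budget.

    benefit[i] and cost[i] are per-candidate-index numbers (hypothetical here:
    expected page accesses saved vs. maintenance overhead); budget caps the
    total overhead.  Returns (best benefit, chosen item indexes).
    """
    n = len(benefit)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if cost[i - 1] <= b:
                cand = best[i - 1][b - cost[i - 1]] + benefit[i - 1]
                best[i][b] = max(best[i][b], cand)
    # trace back the chosen set
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= cost[i - 1]
    return best[n][budget], sorted(chosen)

if __name__ == "__main__":
    saved_pages = [120, 90, 75, 40, 30]    # hypothetical benefit per index
    overhead = [4, 3, 3, 2, 1]             # hypothetical maintenance cost
    print(knapsack_select(saved_pages, overhead, budget=7))
```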

58 citations


Proceedings ArticleDOI
07 Nov 1983
TL;DR: It is shown that there are tight space and time hierarchies of random languages, and that EXPTIME contains P-isomorphism classes containing only languages that are random with respect to polynomial-time computations.
Abstract: A language L is random with respect to a given complexity class C if for all L′ ∈ C, L and L′ disagree on half of all strings. It is known that for any complexity class there are recursive languages that are random with respect to that class. Here it is shown that there are tight space and time hierarchies of random languages, and that EXPTIME contains P-isomorphism classes containing only languages that are random with respect to polynomial-time computations. The technique used is extended to show that for any constructible bound on time or space it is possible to deterministically generate binary sequences that appear random to all prediction algorithms subject to the given resource bound. Furthermore, the generation of such a sequence requires only slightly more resources than the given bound.

Proceedings ArticleDOI
07 Nov 1983
TL;DR: In this paper, the problem of routing wires on a VLSI chip, where the pins to be connected are arranged in a regular rectangular array, was examined, and tight bounds for the worst-case "channel-width" needed to route an n × n array, and provably good heuristics for the general case were developed.
Abstract: We examine the problem of routing wires on a VLSI chip, where the pins to be connected are arranged in a regular rectangular array. We obtain tight bounds for the worst-case "channel-width" needed to route an n × n array, and develop provably good heuristics for the general case. An interesting "rounding algorithm" for obtaining integral approximations to solutions of linear equations is used to show the near-optimality of single-turn routings in the worst-case.

Proceedings ArticleDOI
07 Nov 1983
TL;DR: This paper shows the problem of partitioning a polygonal region into a minimum number of trapezoids with two horizontal sides to be NP-complete, and presents an O(n log n) natural approximation algorithm which uses only horizontal chords to partition a polygonal region P into trapezoids, where n is the number of vertices of P.
Abstract: We consider the problem of partitioning a polygonal region into a minimum number of trapezoids with two horizontal sides. Triangles with a horizontal side are considered to be trapezoids with two horizontal sides one of which is degenerate. In this paper we show that this problem is equivalent to the problem of finding a maximum independent set of a straight-lines-in-the-plane graph. Thus it is shown to be NP-complete. Next we present an O(n log n) natural approximation algorithm which uses only horizontal chords to partition a polygonal region P into trapezoids, where n is the number of vertices of P. We show that the absolute performance ratio of the algorithm is three. We can also design another approximation algorithm with the ratio (1 + 2/c) if we have a (1 - 1/c) approximation algorithm for the maximum independent set problem on straight-lines-in-the-plane graphs, where c is some constant. Finally, we give an O(n3) exact algorithm for polygonal regions without windows.

Proceedings ArticleDOI
17 Aug 1983
TL;DR: This work identifies polynomial time solvable special cases and derives good performance bounds for several natural approximation algorithms for a problem of scheduling file transfers in a network so as to minimize overall finishing time.
Abstract: We consider a problem of scheduling file transfers in a network so as to minimize overall finishing time, which we formalize as a problem of scheduling the edges of a weighted multigraph. Although the general problem is NP-complete, we identify polynomial time solvable special eases and derive good performance bounds for several natural approximation algorithms. The above results assume the existence of a central controller, but we also show how the approximation algorithms, along with their performance guarantees, can be adapted to a distributed regime.
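
A simple list-scheduling heuristic conveys the model (each transfer ties up both endpoints for its whole duration). It is assumed here that every vertex has a single port, and this is not necessarily one of the specific algorithms analysed in the paper.

```python
def greedy_file_transfer_schedule(transfers):
    """List-schedule file transfers (u, v, length) on single-port vertices.

    Each transfer occupies both endpoints for its whole length, so it is
    started at the earliest time both endpoints are free.  Transfers are
    taken longest first, a common heuristic.  Returns the schedule and the
    makespan.
    """
    free_at = {}                              # vertex -> time it becomes free
    schedule = []
    for u, v, length in sorted(transfers, key=lambda t: -t[2]):
        start = max(free_at.get(u, 0), free_at.get(v, 0))
        schedule.append((u, v, length, start))
        free_at[u] = free_at[v] = start + length
    return schedule, max(free_at.values())

if __name__ == "__main__":
    transfers = [("A", "B", 4), ("B", "C", 3), ("A", "C", 2), ("C", "D", 5)]
    sched, T = greedy_file_transfer_schedule(transfers)
    print(T, sched)
```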

Journal ArticleDOI
01 Dec 1983
TL;DR: The relative difference between worst and optimal solution value tends to zero with probability tending to one as the size of the problem goes to infinity, suggesting that for high dimensional quadratic assignment problems even very simple approximation algorithms can in practice yield good suboptimal solutions.
Abstract: In this paper a surprising probabilistic behaviour of quadratic sum assignment problems is shown. The relative difference between worst and optimal solution value tends to zero with probability tending to one as the size of the problem goes to infinity. This result suggests that for high dimensional quadratic assignment problems even very simple approximation algorithms can in practice yield good suboptimal solutions.
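
The concentration effect is easy to observe numerically on small instances, where the worst and best permutations can be found by brute force; the instance sizes and the uniform random model below are illustrative assumptions.

```python
import itertools
import random

def qap_value(F, D, perm):
    """Quadratic assignment objective: sum_ij F[i][j] * D[perm[i]][perm[j]]."""
    n = len(F)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

def relative_spread(n, rng):
    """(worst - best) / best over all permutations of a random n x n instance."""
    F = [[rng.random() for _ in range(n)] for _ in range(n)]
    D = [[rng.random() for _ in range(n)] for _ in range(n)]
    values = [qap_value(F, D, p) for p in itertools.permutations(range(n))]
    return (max(values) - min(values)) / min(values)

if __name__ == "__main__":
    rng = random.Random(0)
    for n in (4, 6, 7):
        print(n, round(relative_spread(n, rng), 3))   # the gap tends to shrink
```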

Journal ArticleDOI
TL;DR: A new outer approximation algorithm is proposed for solving general convex programs that solves at each iteration a quadratic program whose constraints depend only on the current estimate of an optimal solution.
Abstract: A new outer approximation algorithm is proposed for solving general convex programs. A remarkable advantage of the algorithm over existing outer approximation methods is that the approximation of the constraint set is not cumulative. That is, the algorithm solves at each iteration a quadratic program whose constraints depend only on the current estimate of an optimal solution. Convergence of the algorithm is proved and possible applications are discussed.

Journal ArticleDOI
Y. Kamp, C. Wellekens
TL;DR: In this article, a constrained approximation procedure is used to obtain the magnitude function and the transmission zeros in the stopband(s), and the zeros of the transfer function inside the unit circle are calculated via a low-degree polynomial factorization.
Abstract: The paper presents a new method for the optimal design of minimum phase FIR filters. First, a constrained approximation procedure is used to obtain the magnitude function and the transmission zeros in the stopband(s). Next, the zeros of the transfer function inside the unit circle are calculated via a low-degree polynomial factorization. For low-pass filters, a straightforward exchange algorithm is presented which achieves the constrained approximation step; a convergence proof is given and it is shown that the algorithm can be implemented via a simple modification of the Parks-McClellan program. The efficacy of the method is illustrated by a numerical example. Attention is drawn to the fact that bandpass filters may in principle require more sophisticated means.

Proceedings ArticleDOI
01 Dec 1983
TL;DR: The bilinear reachability and observability Gramians are shown to be obtainable from the solutions of generalized Lyapunov equations.
Abstract: High-dimensional mathematical models of bilinear control systems are often not amenable due to the difficulty in implementation. In this paper, we address the problem of order-reduction for both discrete and continuous time bilinear systems. Two model-reduction algorithms are presented; one is based on the singular value decomposition of the generalized Hankel matrix (the Hankel Approach) and the other is based on the eigenvalue / eigenvector decomposition of the product of reachability and observability Gramians (the Gramian Approach). Equivalence between these two algorithms is established. The main result of this paper is a systematic approach for obtaining reduced-order bilinear models. Furthermore, the bilinear reachability and observability Gramians are shown to be obtainable from the solutions of generalized Lyapunov equations. Computer simulations of a neutron-kinetic system are presented to illustrate the effectiveness of the proposed model-reduction algorithms.

Proceedings ArticleDOI
22 Jun 1983
TL;DR: The problem of designing a feedback compensator to minimize a weighted L∞ norm of the sensitivity function of a MIMO linear time invariant system is considered and is solved by establishing its equivalence to the different but related problem of multivariable zeroeth order optimal Hankel approximation.
Abstract: The problem of designing a feedback compensator to minimize a weighted L∞ norm of the sensitivity function of a MIMO linear time invariant system is considered. The problem is solved by establishing its equivalence to the different but related problem of multivariable zeroeth order optimal Hankel approximation solved recently by Kung and Lin.

Journal ArticleDOI
TL;DR: A class of decompositions is developed to break large optimal power flow problems down to sizes the Han-Powell algorithm can comfortably tackle; one member, called the Super Hybrid, is selected as working best and described in detail.
Abstract: The Han-Powell algorithm has proved to be extremely fast and robust for small optimum power flow problems (of the order of 100 buses). However, it balks at full size problems (of the order of 1000 buses). This paper develops a class of decompositions to break large problems down to sizes the Han-Powell algorithm can comfortably tackle. From this class we select one member, called the Super Hybrid, that seems to work best, and describe it in detail.

Journal ArticleDOI
01 Mar 1983-Networks
TL;DR: Several linear-time approximation algorithms for the minimum-weight perfect matching in a plane are proposed, and their worst- and average-case behaviors are analyzed theoretically as well as experimentally, and an application to the drawing of a road map is shown.
Abstract: Several linear-time approximation algorithms for the minimum-weight perfect matching in a plane are proposed, and their worst- and average-case behaviors are analyzed theoretically as well as experimentally. A linear-time approximation algorithm, named the "spiral-rack algorithm (with preprocess and with tour)," is recommended for practical purposes. This algorithm is successfully applied to the drawing of road maps such as that of the Tokyo city area. Consider n (an even number) points in a plane. The problem of finding the minimum-weight perfect matching, i.e., determining how to match the n points in pairs so as to minimize the sum of the distances between the matched points, as well as Euler's problem of unicursal traversing on a graph, is of fundamental importance for optimizing the sequence of drawing lines by a mechanical plotter ([2-5, 8]; details are discussed in Sec. V). The algorithm which exactly solves this problem in O(n^3) time [6] seems to be too complicated from the practical point of view. Even approximation algorithms of O(n^2) or O(n log n) [10] would not be satisfactory, or would need some improvement, for application to real-world problems of a size, say, n greater than 10^4. In contrast with the matching problem, an Eulerian path can be found in time linear in the number of edges. In this paper, linear-time approximation algorithms are proposed for the matching problem in a unit square; their worst-case performances are analyzed theoretically; their average-case performances are investigated both theoretically and experimentally for the case where n points are uniformly distributed on the unit square; and an application to the drawing of a road map is shown. The quality of an approximate solution is measured by the absolute cost of the matching, i.e., the sum of the distances between the matched points. (We adopt the RAM model of computation, which executes an arithmetic operation such as addition, multiplication, or integer division (hence, the "floor" operation) in unit time [1].)
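
The paper's spiral-rack algorithm is not reproduced here; the sketch below is a simpler strip-based heuristic in the same fast, practical spirit, pairing consecutive points along a boustrophedon ordering of horizontal strips. The strip count and test data are assumptions.

```python
import math
import random

def strip_matching(points):
    """Match 2-D points in the unit square with a simple strip heuristic.

    Points are bucketed into horizontal strips, ordered boustrophedon
    (left-to-right, then right-to-left), and consecutive points along this
    tour are paired.  This is a stand-in for fast plane-matching heuristics,
    not the paper's spiral-rack algorithm.
    """
    n = len(points)
    k = max(1, int(math.sqrt(n / 2)))          # number of strips
    strips = [[] for _ in range(k)]
    for p in points:
        strips[min(k - 1, int(p[1] * k))].append(p)
    order = []
    for i, s in enumerate(strips):
        s.sort(key=lambda p: p[0], reverse=(i % 2 == 1))
        order.extend(s)
    return [(order[i], order[i + 1]) for i in range(0, n - 1, 2)]

def matching_cost(pairs):
    return sum(math.dist(p, q) for p, q in pairs)

if __name__ == "__main__":
    rng = random.Random(1)
    pts = [(rng.random(), rng.random()) for _ in range(1000)]
    pairs = strip_matching(pts)
    print(len(pairs), round(matching_cost(pairs), 3))
```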

Journal ArticleDOI
TL;DR: A natural approximation of backbone space curves in terms of helical approximating elements is introduced and a computer algorithm to implement the approximation is presented.

Journal ArticleDOI
TL;DR: This work considers a model for packing into a specified rectangle a maximum number of squares from a given set, defines two related approximation algorithms, and derives bounds on the worst case performance of the packings they produce.
Abstract: We consider the NP-hard problem of packing into a specified rectangle a maximum number of squares from a given set. We define two related approximation algorithms and derive bounds on the worst case performance of the packings they produce.

Book ChapterDOI
09 Mar 1983
TL;DR: This paper describes an approximation algorithm for the vertex cover problem which has a worst case ratio strictly smaller than 2 for graphs which don't have too many nodes, and presents algorithms which improve the worst case ratios known up to now in the case of degree bounded graphs.
Abstract: In this paper we describe an approximation algorithm for the vertex cover problem which has a worst case ratio Δ strictly smaller than 2 for graphs which don't have too many nodes (for example Δ ≤ 1.9 if |V| ≤ 10^13). Furthermore we present algorithms which improve, in the case of degree bounded graphs, the worst case ratios known up to now.
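
For contrast, the classical maximal-matching 2-approximation, the baseline that the paper improves on, fits in a few lines; the sample edge list is made up.

```python
def vertex_cover_2approx(edges):
    """Classical 2-approximation: take both endpoints of a maximal matching.

    Every matching edge forces at least one endpoint into any optimal cover,
    so the returned cover has at most twice the optimal size.  (The paper's
    algorithm achieves a ratio strictly below 2; this is only the baseline.)
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
    print(sorted(vertex_cover_2approx(edges)))
```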

Proceedings ArticleDOI
07 Nov 1983
TL;DR: Several fast new algorithms are presented for sampling n records at random from a file containing N records; one deals with sampling when N is known, and the other considers the case when N is unknown.
Abstract: Several fast new algorithms are presented for sampling n records at random from a file containing N records. The first problem we solve deals with sampling when N is known, and the second problem considers the case when N is unknown. The two main results in this paper are Algorithms D and Z. Algorithm D solves the first problem by doing the sampling with a small constant amount of space and in O(n) time, on the average; roughly n uniform random variates are generated, and approximately n exponentiation operations are performed during the sampling. The sample is selected sequentially and online; it answers an open problem in [Knuth 81]. Algorithm Z solves the second problem by doing the sampling using O(n) space, roughly n ln(N/n) uniform random variates and O(n(1 + log(N/n))) time, on the average. Both algorithms are time- and space-optimum and are short and easy to implement.
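
Algorithms D and Z themselves are intricate; as a baseline for the unknown-N case, here is classical reservoir sampling, which spends one random variate per record, exactly the per-record cost that the paper's Algorithm Z avoids.

```python
import random

def reservoir_sample(stream, n, rng=random):
    """Algorithm R: uniform sample of n records from a stream of unknown length.

    Keeps the first n records, then replaces a random reservoir slot with
    record t with probability n/t.  Uses one random variate per record.
    """
    reservoir = []
    for t, record in enumerate(stream, start=1):
        if t <= n:
            reservoir.append(record)
        else:
            j = rng.randint(1, t)
            if j <= n:
                reservoir[j - 1] = record
    return reservoir

if __name__ == "__main__":
    print(reservoir_sample(range(1, 1_000_001), n=5))
```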

Proceedings ArticleDOI
21 Mar 1983
TL;DR: A polynomial approximation algorithm is developed which guarantees a realistic worst case bound for file allocation in arbitrary computer networks.
Abstract: In this paper, we are going to examine the problem of optimal file allocation in arbitrary computer networks. This problem has been shown to be NP-complete [CHAND76, ESWAR74]. Several authors [CHU69, CASEY72, CHAND76, MAHMD76] have studied this problem using analytical, heuristic and linear-integer programming methods. But these techniques only tend to demonstrate their relative efficiencies in finding solutions to known problems and for small values of the number of nodes in the network. They did not provide any mathematical analysis of the worst case situation of their techniques. Hence, they failed to establish bounds on the deviation of their solutions in terms of exact solutions for problems with no known optimal solutions. They also did not establish the time and space complexities of their algorithms. This has motivated us to develop a polynomial approximation algorithm which guarantees a realistic worst case bound.

Journal ArticleDOI
TL;DR: The optimization problem considered in this paper is that of deciding to how many bits each attribute should be mapped by the hashing function above, so that the expected number of buckets retrieved per query is minimized.
Abstract: We consider the problem of designing an information retrieval system on which partial match queries have to be answered. Each record in the system consists of a list of attributes, and a partial match query specifies the values of some of the attributes. The records are stored in buckets in a secondary memory, and in order to answer a partial match query all the buckets that may contain a record satisfying the specifications of that query must be retrieved. The bucket in which a given record is stored is found by a multiple key hashing function, which maps each attribute to a string of a fixed number of bits. The address of that bucket is then represented by the string obtained by concatenating the strings on which the various attributes were mapped. A partial match query may specify only part of the bits in the string representing the address, and the larger the number of bits specified, the smaller the number of buckets that have to be retrieved in order to answer the query. The optimization problem considered in this paper is that of deciding to how many bits each attribute should be mapped by the hashing function above, so that the expected number of buckets retrieved per query is minimized. Efficient solutions for special cases of this problem have been obtained in [1], [12], and [14]. It is shown that in general the problem is NP-hard, and that if P ≠ NP, it is also not fully approximable. Two heuristic algorithms for the problem are also given and compared.
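
A tiny model of this optimization can be written down by assuming each attribute is specified independently with some probability; the exhaustive search and the probabilities below are illustrative simplifications, not the paper's heuristics.

```python
import itertools

def expected_buckets(bits, p_specified):
    """Expected buckets per partial-match query under independent specification.

    Attribute i is hashed to bits[i] bits; when it is unspecified (with
    probability 1 - p_specified[i]) all 2**bits[i] values of its substring
    must be tried, so the expected cost is the product of the per-attribute
    expectations.
    """
    cost = 1.0
    for b, p in zip(bits, p_specified):
        cost *= p + (1 - p) * (2 ** b)
    return cost

def best_allocation(total_bits, p_specified):
    """Exhaustive search over allocations of total_bits to the attributes."""
    k = len(p_specified)
    best = None
    for cuts in itertools.combinations(range(total_bits + k - 1), k - 1):
        # stars-and-bars: turn cut positions into a composition of total_bits
        bounds = (-1,) + cuts + (total_bits + k - 1,)
        bits = [bounds[i + 1] - bounds[i] - 1 for i in range(k)]
        cost = expected_buckets(bits, p_specified)
        if best is None or cost < best[0]:
            best = (cost, bits)
    return best

if __name__ == "__main__":
    # three attributes; the first is specified in most queries
    print(best_allocation(total_bits=6, p_specified=[0.9, 0.5, 0.2]))
```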


Journal ArticleDOI
TL;DR: In this article, the authors propose an automated procedure for simultaneous order selection and infinite impulse response (IIR) filter design and approximation, obtained by relating the order selection algorithm to traditional methods for solving the linear predictive coding (LPC) equations.
Abstract: Kaiser's empirical formula for finite impulse response (FIR) digital filter length, as a function of transition width and rejection band loss (together with the I0-sinh window function), provides a simple FIR filter design algorithm. Recent developments in time series analysis provide a theoretical procedure for selecting the numerator and denominator orders of an infinite impulse response (IIR) digital filter whose frequency domain amplitude response is equivalent to that of a given FIR filter. In practice, this procedure has unfailingly indicated the correct denominator order, but has frequently selected a numerator order which produced a poor approximation to the desired frequency domain amplitude characteristics. Part of the problem has been due to the lack of a reasonable estimate of the quality of the approximation, and part has been due to a poor understanding of what the observed order selection criteria mean in an approximation setting. By relating the order selection algorithm to traditional methods for solving the linear predictive coding equations, this paper resolves both obstacles to minimal order approximation of FIR filters by IIR filters. As an added benefit, the relational analysis has the potential to provide an automated procedure for simultaneous order selection and IIR filter design and approximation.