
Showing papers on "Approximation algorithm published in 1993"


01 Jan 1993
TL;DR: A survey of deterministic machine scheduling can be found in this article, where complexity results and optimization and approximation algorithms for problems involving a single machine, parallel machines, open shops, flow shops and job shops are presented.
Abstract: Sequencing and scheduling as a research area is motivated by questions that arise in production planning, in computer control, and generally in all situations in which scarce resources have to be allocated to activities over time. In this survey, we concentrate on the area of deterministic machine scheduling. We review complexity results and optimization and approximation algorithms for problems involving a single machine, parallel machines, open shops, flow shops and job shops. We also pay attention to two extensions of this area: resource-constrained project scheduling and stochastic machine scheduling.
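A hedged, self-contained illustration of the genre the survey covers (this code is mine, not from the survey): Graham's LPT (longest processing time) rule for makespan minimization on identical parallel machines, a classical (4/3 − 1/(3m))-approximation.

```python
import heapq

def lpt_makespan(jobs, m):
    """Schedule jobs (given as processing times) on m identical machines
    with Graham's LPT rule; return the resulting makespan."""
    loads = [0.0] * m          # current load of each machine, as a min-heap
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):   # longest job first
        lightest = heapq.heappop(loads)    # least-loaded machine
        heapq.heappush(loads, lightest + p)
    return max(loads)
```

On jobs [5, 4, 3, 3, 3] with two machines, LPT yields makespan 10 against an optimum of 9, within the guaranteed 4/3 − 1/6 = 7/6 factor.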

1,108 citations


Journal ArticleDOI
TL;DR: It is shown that the existence of a polynomial-time relative approximation algorithm for major classes of problem instances implies that NP ⊆ P.

777 citations


Journal ArticleDOI
TL;DR: The generalized assignment problem can be viewed as the following problem of scheduling parallel machines with costs: each job is to be processed by exactly one machine; processing job j on machine i requires time p_ij and incurs a cost of c_ij; each machine i is available for T_i time units, and the objective is to minimize the total cost incurred.
Abstract: The generalized assignment problem can be viewed as the following problem of scheduling parallel machines with costs. Each job is to be processed by exactly one machine; processing job j on machine i requires time p_ij and incurs a cost of c_ij; each machine i is available for T_i time units, and the objective is to minimize the total cost incurred. Our main result is as follows. There is a polynomial-time algorithm that, given a value C, either proves that no feasible schedule of cost C exists, or else finds a schedule of cost at most C where each machine i is used for at most 2T_i time units. We also extend this result to a variant of the problem where, instead of a fixed processing time p_ij, there is a range of possible processing times for each machine—job pair, and the cost linearly increases as the processing time decreases. We show that these results imply a polynomial-time 2-approximation algorithm to minimize a weighted sum of the cost and the makespan, i.e., the maximum job completion time. We also consider the objective of minimizing the mean job completion time. We show that there is a polynomial-time algorithm that, given values M and T, either proves that no schedule of mean job completion time M and makespan T exists, or else finds a schedule of mean job completion time at most M and makespan at most 2T.
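The theorem's conclusion is easy to state operationally. The sketch below (function and variable names are my own, not from the paper) checks whether a given assignment witnesses the relaxed guarantee: total cost at most C while every machine i is busy for at most 2*T[i].

```python
def witnesses_guarantee(assign, p, c, T, C):
    """assign[j] = machine chosen for job j; p[i][j] and c[i][j] are the
    processing time and cost of job j on machine i; T[i] is machine i's
    availability.  Returns True iff total cost <= C and every machine's
    load is at most twice its availability, as in the relaxed schedule."""
    m = len(T)
    loads = [0] * m
    cost = 0
    for j, i in enumerate(assign):
        loads[i] += p[i][j]
        cost += c[i][j]
    return cost <= C and all(loads[i] <= 2 * T[i] for i in range(m))
```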

761 citations


Journal ArticleDOI
TL;DR: A randomised algorithm which evaluates the partition function of an arbitrary ferromagnetic Ising system to any specified degree of accuracy is presented.
Abstract: The paper presents a randomised algorithm which evaluates the partition function of an arbitrary ferromagnetic Ising system to any specified degree of accuracy. The running time of the algorithm in...

660 citations


Journal ArticleDOI
TL;DR: A polynomial-time approximation algorithm with worst-case ratio 7/6 is presented for the special case of the traveling salesman problem in which all distances are either one or two.
Abstract: We present a polynomial-time approximation algorithm with worst-case ratio 7/6 for the special case of the traveling salesman problem in which all distances are either one or two. We also show that this special case of the traveling salesman problem is MAX SNP-hard, and therefore it is unlikely that it has a polynomial-time approximation scheme.

448 citations


Journal ArticleDOI
TL;DR: This work considers the simplified case of a point mass under Newtonian mechanics, together with velocity and acceleration bounds, and provides the first provably good approximation algorithm, and shows that it runs in polynomial time.
Abstract: Kinodynamic planning attempts to solve a robot motion problem subject to simultaneous kinematic and dynamics constraints. In the general problem, given a robot system, we must find a minimal-time trajectory that goes from a start position and velocity to a goal position and velocity while avoiding obstacles by a safety margin and respecting constraints on velocity and acceleration. We consider the simplified case of a point mass under Newtonian mechanics, together with velocity and acceleration bounds. The point must be flown from a start to a goal, amidst polyhedral obstacles in 2D or 3D. Although exact solutions to this problem are not known, we provide the first provably good approximation algorithm, and show that it runs in polynomial time.

438 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: The limited independence result implies that a reduced amount and weaker sources of randomness are sufficient for randomized algorithms whose analyses use the CH bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routing.
Abstract: Chernoff-Hoeffding bounds are fundamental tools used in bounding the tail probabilities of the sums of bounded and independent random variables. We present a simple technique which gives slightly better bounds than these and which, more importantly, requires only limited independence among the random variables, thereby importing a variety of standard results to the case of limited independence for free. Additional methods are also presented, and the aggregate results are very sharp and provide a better understanding of the proof techniques behind these bounds. They also yield improved bounds for various tail probability distributions and enable improved approximation algorithms for jobshop scheduling. The "limited independence" result implies that weaker sources of randomness are sufficient for randomized algorithms whose analyses use the Chernoff-Hoeffding bounds; further, it leads to algorithms that require a reduced amount of randomness for any analysis which uses the Chernoff-Hoeffding bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routing.
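As a point of reference, the classical Hoeffding form of these bounds says the mean of n independent [0,1]-valued variables exceeds its expectation by t with probability at most exp(−2nt²). A quick numeric sketch of the classical bound only (not the paper's sharpened, limited-independence version):

```python
import random
import math

def hoeffding_tail(n, t):
    """Upper bound on Pr[mean of n independent [0,1] variables exceeds
    its expectation by t], per the classical Hoeffding inequality."""
    return math.exp(-2 * n * t * t)

def empirical_tail(n, t, trials=2000, seed=0):
    """Empirical frequency that the mean of n fair coin flips
    exceeds 1/2 + t."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.random() < 0.5 for _ in range(n)) / n > 0.5 + t
        for _ in range(trials)
    )
    return hits / trials
```

For n = 100 and t = 0.1 the bound is exp(−2) ≈ 0.135, comfortably above the observed frequency.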

372 citations


Journal ArticleDOI
TL;DR: An instance of the Network Steiner Problem consists of an undirected graph with edge lengths and a subset of vertices; the goal is to find a minimum cost Steiner tree of the given subset (i.e., minimum cost subset of edges which spans it).
Abstract: An instance of the Network Steiner Problem consists of an undirected graph with edge lengths and a subset of vertices; the goal is to find a minimum cost Steiner tree of the given subset (i.e., minimum cost subset of edges which spans it). An 11/6-approximation algorithm for this problem is given. The approximate Steiner tree can be computed in time O(|V| |E| + |S|^4), where V is the vertex set, E is the edge set of the graph, and S is the given subset of vertices.
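The 11/6 ratio improves on the folklore 2-approximation that builds a minimum spanning tree over the terminals in the metric closure of the graph. A sketch of that simpler baseline (not the paper's algorithm):

```python
def steiner_mst_weight(n, edges, terminals):
    """2-approximate Steiner tree cost: all-pairs shortest paths via
    Floyd-Warshall, then Prim's MST over the terminal set in the metric
    closure.  edges = [(u, v, w)] on vertices 0..n-1."""
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):                      # Floyd-Warshall closure
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    ts = list(terminals)                    # Prim restricted to terminals
    in_tree = {ts[0]}
    total = 0
    while len(in_tree) < len(ts):
        w, v = min((d[a], b) and (d[a][b], b)
                   for a in in_tree for b in ts if b not in in_tree)
        in_tree.add(v)
        total += w
    return total
```

On a star with unit edges and the three leaves as terminals, the metric-closure MST costs 4 against the optimum 3, within the factor-2 guarantee.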

320 citations


Proceedings ArticleDOI
01 Jun 1993
TL;DR: A fast parallel approximation algorithm for the positive linear programming optimization problem, where the input constraint matrix and constraint vector consist entirely of positive entries, that runs in polylog time using a linear number of processors.
Abstract: We introduce a fast parallel approximation algorithm for the positive linear programming optimization problem, i.e. the special case of the linear programming optimization problem where the input constraint matrix and constraint vector consist entirely of positive entries. The algorithm is elementary, and has a simple parallel implementation that runs in polylog time using a linear number of processors.

225 citations


Journal ArticleDOI
TL;DR: The paper discusses the significant parameters of center allocation, defines the resulting optimization problems, and proposes several approximation algorithms for selecting centers and for distributing the users among them.

185 citations


Proceedings ArticleDOI
01 Jun 1993
TL;DR: This work presents approximation algorithms for a variety of network-design problems on an $n$-node graph in which the degree of the output network is $O(b \log(\frac{n}{b}))$ and the cost of this network is $O(\log n)$ times that of the minimum-cost degree-$b$-bounded network.
Abstract: We study network-design problems with multiple design objectives. In particular, we look at two cost measures to be minimized simultaneously: the total cost of the network and the maximum degree of any node in the network. Our main result can be roughly stated as follows: given an integer $b$, we present approximation algorithms for a variety of network-design problems on an $n$-node graph in which the degree of the output network is $O(b \log (\frac{n}{b}))$ and the cost of this network is $O(\log n)$ times that of the minimum-cost degree-$b$-bounded network. These algorithms can handle costs on nodes as well as edges. Moreover, we can construct such networks so as to satisfy a variety of connectivity specifications including spanning trees, Steiner trees and generalized Steiner forests. The performance guarantee on the cost of the output network is nearly best-possible unless $NP = \tilde{P}$. We also address the special case in which the costs obey the triangle inequality. In this case, we obtain a polynomial-time approximation algorithm with a stronger performance guarantee. Given a bound $b$ on the degree, the algorithm finds a degree-$b$-bounded network of cost at most a constant times the optimum. There is no algorithm that does as well in the absence of triangle inequality unless $P = NP$. We also show that in the case of spanning networks, we can simultaneously approximate within a constant factor yet another objective: the maximum cost of any edge in the network, also called the bottleneck cost of the network. We extend our algorithms to find TSP tours and $k$-connected spanning networks for any fixed $k$ that simultaneously approximate all these three cost measures.

Journal ArticleDOI
TL;DR: Convergence with probability 1 is proved for the multidimensional analog of the Kesten accelerated stochastic approximation algorithm.
Abstract: A technique to accelerate convergence of stochastic approximation algorithms is studied. It is based on Kesten’s idea of equalization of the gain coefficient for the Robbins–Monro algorithm. Convergence with probability 1 is proved for the multidimensional analog of the Kesten accelerated stochastic approximation algorithm. Asymptotic normality of the delivered estimates is also shown. Results of numerical simulations are presented that demonstrate the efficiency of the acceleration procedure.
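Kesten's rule can be illustrated in one dimension: the gain index advances only when two successive increments disagree in sign, so the step size stays large while the iterate moves consistently toward the root. A minimal deterministic sketch (my own toy setup, not the paper's multidimensional, noisy setting):

```python
def kesten_robbins_monro(f, x0, steps, gain=lambda k: 1.0 / (k + 1)):
    """Find a root of f by the Robbins-Monro iteration
    x <- x - a_k * f(x), advancing the gain index k (Kesten's rule)
    only when two successive increments have opposite signs."""
    x, k, last_inc = x0, 0, 0.0
    for _ in range(steps):
        inc = -gain(k) * f(x)
        if last_inc * inc < 0:    # direction flipped: shrink the gain
            k += 1
        x += inc
        last_inc = inc
    return x
```

For the monotone test function f(x) = x − 2, the increments never change sign, the gain is never reduced, and the iterate converges geometrically to the root.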

Proceedings ArticleDOI
03 Nov 1993
TL;DR: This paper presents a method for converting an approximation algorithm for an unweighted graph problem (from a specific class of maximization problems) into one for the corresponding weighted problem, and apply it to the densest subgraph problem.
Abstract: This paper concerns the problem of computing the densest k-vertex subgraph of a given graph, namely, the subgraph with the most edges, or with the highest edges-to-vertices ratio. A sequence of approximation algorithms is developed for the problem, with each step yielding a better ratio at the cost of a more complicated solution. The approximation ratio of our final algorithm is Õ(n^0.3885). We also present a method for converting an approximation algorithm for an unweighted graph problem (from a specific class of maximization problems) into one for the corresponding weighted problem, and apply it to the densest subgraph problem.
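A standard baseline for the unconstrained edges-to-vertices objective (not the paper's algorithm for the k-vertex version) is greedy peeling: repeatedly delete a minimum-degree vertex and remember the densest intermediate subgraph.

```python
def peel_densest(adj):
    """adj: dict vertex -> set of neighbours (undirected graph).
    Greedily removes a minimum-degree vertex at each step and returns
    the best edges-to-vertices ratio seen over all intermediate
    subgraphs.  The input dict is not modified."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    best = 0.0
    while adj:
        m = sum(len(nbrs) for nbrs in adj.values()) // 2
        best = max(best, m / len(adj))
        v = min(adj, key=lambda u: len(adj[u]))   # min-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best
```

On a K4 with one pendant vertex, peeling discards the pendant first and correctly reports the K4's density of 6/4 = 1.5.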

Journal ArticleDOI
01 Sep 1993
TL;DR: The empirical results indicate that by using the appropriate local improvement operator, the genetic algorithm is able to find an optimal solution in all but a tiny fraction of the cases and at a speed orders of magnitude faster than exact algorithms.
Abstract: Genetic algorithms have demonstrated considerable success in providing good solutions to many NP-hard optimization problems. For such problems, exact algorithms that always find an optimal solution are only useful for small toy problems, so heuristic algorithms such as the genetic algorithm must be used in practice. In this paper, we apply the genetic algorithm to the NP-hard problem of multiple fault diagnosis (MFD). We compare a pure genetic algorithm with several variants that include local improvement operators. These operators, which are often domain-specific, are used to accelerate the genetic algorithm in converging on optimal solutions. Our empirical results indicate that by using the appropriate local improvement operator, the genetic algorithm is able to find an optimal solution in all but a tiny fraction of the cases and at a speed orders of magnitude faster than exact algorithms.
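The Lamarckian flavor of such hybrids is easy to demonstrate on a toy objective. The sketch below is a one-max illustration of my own, not the paper's multiple-fault-diagnosis encoding: every offspring is hill-climbed before it competes for a population slot.

```python
import random

def local_improve(bits, fitness):
    """Hill-climb: keep applying any single-bit flip that improves fitness."""
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            before = fitness(bits)
            bits[i] ^= 1
            if fitness(bits) <= before:
                bits[i] ^= 1          # revert: no improvement
            else:
                improved = True
    return bits

def memetic_ga(n, fitness, pop_size=10, gens=20, seed=0):
    """Genetic algorithm with a Lamarckian local-improvement operator."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    pop = [local_improve(b, fitness) for b in pop]
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        cut = rng.randrange(1, n)
        child = a[:cut] + b[cut:]          # one-point crossover
        child[rng.randrange(n)] ^= 1       # point mutation
        local_improve(child, fitness)      # local improvement operator
        worst = min(range(pop_size), key=lambda k: fitness(pop[k]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child             # steady-state replacement
    return max(pop, key=fitness)
```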

Proceedings ArticleDOI
07 Jun 1993
TL;DR: This work refines the complexity analysis of approximation problems by relating it to a new parameter called gap location, and presents definitions and hardness results of new approximation versions of some NP-complete optimization problems.
Abstract: The author refines the complexity analysis of approximation problems by relating it to a new parameter called gap location. Many of the results obtained so far for approximations yield satisfactory analysis also with respect to this refined parameter, but some known results (e.g. max-k-colorability, max-3-dimensional matching and max not-all-equal 3sat) fall short of doing so. A second contribution is in filling the gap in these cases by presenting new reductions. Next, the author presents definitions and hardness results of new approximation versions of some NP-complete optimization problems. The problems are: vertex cover, k-edge coloring, set splitting, and a restricted version of feedback vertex set and feedback arc set.

Proceedings ArticleDOI
03 Nov 1993
TL;DR: A very simple (1+ε)-approximation algorithm for the multicommodity flow problem that performs as well as or better than previously known algorithms, at least for certain test problems.
Abstract: In this paper, we describe a very simple (1+ε)-approximation algorithm for the multicommodity flow problem. The algorithm runs in time that is polynomial in N (the number of nodes in the network) and 1/ε (the closeness of the approximation to optimal). The algorithm is remarkable in that it is much simpler than all known polynomial time flow algorithms (including algorithms for the special case of one-commodity flow). In particular, the algorithm does not rely on augmenting paths, shortest paths, min-cost paths, or similar techniques to push flow through a network. In fact, no explicit attempt is ever made to push flow towards a sink during the algorithm. Because the algorithm is so simple, it can be applied to a variety of problems for which centralized decision making and flow planning is not possible. For example, the algorithm can be easily implemented with local control in a distributed network and it can be made tolerant to link failures. In addition, the algorithm appears to perform well in practice. Initial experiments using the DIMACS generator of test problems indicate that the algorithm performs as well as or better than previously known algorithms, at least for certain test problems.

Proceedings ArticleDOI
01 Jun 1993
TL;DR: The first polynomial-time approximation algorithm for finding a minimum-cost subgraph having at least a specified number of edges in each cut is presented, which shows the importance of this technique in designing approximation algorithms.
Abstract: We present the first polynomial-time approximation algorithm for finding a minimum-cost subgraph having at least a specified number of edges in each cut. This class of problems includes, among others, the generalized Steiner network problem, also called the survivable network design problem. If k is the maximum cut requirement of the problem, our solution comes within a factor of 2k of optimal. Our algorithm is primal-dual and shows the importance of this technique in designing approximation algorithms.

BookDOI
01 Jul 1993
TL;DR: Average performance of self-dual interior point algorithm for linear programming and complexity results for a class of min-max problems with robust optimization applications.
Abstract: Contents include: average performance of self-dual interior point algorithm for linear programming (K.M. Anstreicher et al); the complexity of approximating a nonlinear program (M. Bellare and P. Rogaway); algorithms for the least distance problem (P. Berman et al); translational cuts for convex minimization (J.V. Burke et al); maximizing concave functions in fixed dimension (E. Cohen and N. Megiddo); the complexity of allocating resources in parallel - upper and lower bounds (E.J. Friedman); complexity issues in nonconvex network flow problems (G. Guisewite and P.M. Pardalos); a classification of static scheduling problems (J.W. Herrmann et al); complexity of single machine dual criteria and hierarchical scheduling - a survey (C.-Y. Lee and G. Vairaktarakis); performance driven graph enhancement problems (D. Paik and S. Sahni); weighted means of cuts, parametric flows and fractional combinatorial optimization (T. Radzik); some complexity issues involved in the construction of test cases for NP-hard problems (L. Sanchis); a note on the complexity of fixed-point computation for noncontractive maps (C.W. Tsai and K. Sikorski); maximizing non-linear concave functions in fixed dimension (S. Toledo); polynomial time weak approximation algorithms for quadratic programming (S. Vavasis); complexity results for a class of min-max problems with robust optimization applications (G. Yu and P. Kouvelis). (Part contents.)

Book ChapterDOI
Kenneth L. Clarkson1
11 Aug 1993
TL;DR: An algorithm for polytope covering is given that finds a cover of size no more than c(5d ln c) for c large enough; for an appropriate measure, an approximation with error e requires c = O((d/e)^(d-1)) vertices, and the algorithm gives an approximation with c(5d^3 ln(1/e)) vertices.
Abstract: This paper gives an algorithm for polytope covering: let L and U be sets of points in R^d, comprising n points altogether. A cover for L from U is a set C ⊆ U with L a subset of the convex hull of C. Suppose c is the size of a smallest such cover, if it exists. The randomized algorithm given here finds a cover of size no more than c(5d ln c), for c large enough. The algorithm requires O(c^2 n^(1+δ)) expected time. More exactly, the time bound is O(cn^(1+δ) + c(nc)^(1/(1+γ/(1+δ)))), where γ = 1/⌊d/2⌋. The previous best bounds were a cover size of cO(log n) in O(n^d) time [MS92b]. A variant algorithm is applied to the problem of approximating the boundary of a polytope with the boundary of a simpler polytope. For an appropriate measure, an approximation with error e requires c = O((d/e)^(d-1)) vertices, and the algorithm gives an approximation with c(5d^3 ln(1/e)) vertices. The algorithms apply ideas previously used for small-dimensional linear programming.

Journal ArticleDOI
TL;DR: In this article, it was shown that there is no constant e > 0 for which this problem can be approximated within a factor of n^(1-e) in polynomial time unless P = NP.

Journal ArticleDOI
TL;DR: The algorithm described is, in particular, a 2-approximation algorithm for the problem of minimizing the total weight of true variables, among all truth assignments to the 2-satisfiability problem, which has an identifiable subset of integer components that retain their value in an integer optimal solution of the problem.
Abstract: The problem of integer programming in bounded variables, over constraints with no more than two variables in each constraint is NP-complete, even when all variables are binary. This paper deals with integer linear minimization problems in n variables subject to m linear constraints with at most two variables per inequality, and with all variables bounded between 0 and U. For such systems, a 2-approximation algorithm is presented that runs in time O(mnU^2 log(Un^2 m)), so it is polynomial in the input size if the upper bound U is polynomially bounded. The algorithm works by finding first a super-optimal feasible solution that consists of integer multiples of 1/2. That solution gives a tight bound on the value of the minimum. It furthermore has an identifiable subset of integer components that retain their value in an integer optimal solution of the problem. These properties are a generalization of the properties of the vertex cover problem. The algorithm described is, in particular, a 2-approximation algorithm for the problem of minimizing the total weight of true variables, among all truth assignments to the 2-satisfiability problem.
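The vertex cover connection mentioned above has a particularly compact cousin: Bar-Yehuda and Even's local-ratio 2-approximation for weighted vertex cover. The sketch below shows that simpler scheme (it is not the paper's half-integral LP algorithm):

```python
def vertex_cover_2approx(weights, edges):
    """Local-ratio rule: for each edge, pay down the smaller residual
    weight of its two endpoints; the vertices driven to zero form a
    cover of total weight at most twice the optimum."""
    residual = dict(weights)
    for u, v in edges:
        pay = min(residual[u], residual[v])
        residual[u] -= pay
        residual[v] -= pay
    return {v for v in residual if residual[v] == 0}
```

On a triangle with weights 1, 2, 4, the rule returns the two cheapest vertices, here also the optimal cover.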

Proceedings ArticleDOI
15 Dec 1993
TL;DR: The author uses these results to study the Q-learning algorithm, a reinforcement learning method for solving Markov decision problems, and establishes its convergence under conditions more general than previously available.
Abstract: Provides some general results on the convergence of a class of stochastic approximation algorithms and their parallel and asynchronous variants. The author then uses these results to study the Q-learning algorithm, a reinforcement learning method for solving Markov decision problems, and establishes its convergence under conditions more general than previously available.
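Tabular Q-learning itself is short enough to sketch. The toy below is a deterministic chain MDP of my own choosing, with a single action per state and reward 1 on entering the terminal state; the learned values converge to the discounted values the theory predicts.

```python
def q_learning_chain(n=3, gamma=0.9, alpha=0.5, sweeps=200):
    """Tabular Q-learning on the deterministic chain 0 -> 1 -> ... -> n,
    one action per state, reward 1 on entering terminal state n.
    Returns Q[s] for each non-terminal state s."""
    Q = [0.0] * n
    for _ in range(sweeps):
        for s in range(n):
            r = 1.0 if s == n - 1 else 0.0
            nxt = 0.0 if s == n - 1 else Q[s + 1]   # terminal value is 0
            Q[s] += alpha * (r + gamma * nxt - Q[s])  # Q-learning update
    return Q
```

With γ = 0.9 the fixed point is Q = [0.81, 0.9, 1.0], i.e. γ^k per step of distance from the reward.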

Book ChapterDOI
02 Jun 1993
TL;DR: This work considers the problem of computing the shortest series of reversals that transform one permutation to another, and takes an arbitrary substring of elements and reverses their order.
Abstract: Motivated by the problem in computational biology of reconstructing the series of chromosome inversions by which one organism evolved from another, we consider the problem of computing the shortest series of reversals that transform one permutation to another. The permutations describe the order of genes on corresponding chromosomes, and a reversal takes an arbitrary substring of elements and reverses their order.
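The shortest series is the hard part; the reversal operation itself is simple. A hedged illustration (a selection-style sorter of my own whose reversal count is only an upper bound, not the minimum reversal distance):

```python
def reversal_sort(perm):
    """Sort perm (a permutation of 0..n-1) by reversals: for each
    position i, reverse the segment that brings value i to position i.
    Returns the sorted list and the list of (start, end) reversals."""
    perm = list(perm)
    reversals = []
    for i in range(len(perm)):
        j = perm.index(i)
        if j != i:
            perm[i:j + 1] = reversed(perm[i:j + 1])  # one reversal
            reversals.append((i, j))
    return perm, reversals
```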

Journal ArticleDOI
TL;DR: It is proved that the minimization of Horn functions (i.e. Boolean functions associated to Horn knowledge bases) is NP-complete.


Proceedings ArticleDOI
03 Nov 1993
TL;DR: This work builds on the classical greedy sequential set cover algorithm, in the spirit of the primal-dual schema, to obtain simple parallel approximation algorithms for the set cover problem and its generalizations.
Abstract: We build on the classical greedy sequential set cover algorithm, in the spirit of the primal-dual schema, to obtain simple parallel approximation algorithms for the set cover problem and its generalizations. Our algorithms use randomization, and our randomized voting lemmas may be of independent interest. Fast parallel approximation algorithms were known before for set cover, though not for any of its generalizations.
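The classical greedy sequential algorithm the authors build on picks, at each step, the set covering the most still-uncovered elements, giving an H_n ≈ ln n approximation. A compact sketch:

```python
def greedy_set_cover(universe, sets):
    """Classical greedy set cover: repeatedly pick the set covering the
    most still-uncovered elements.  `sets` maps a name to a frozenset."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        name = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[name] & uncovered:
            raise ValueError("universe not coverable by the given sets")
        chosen.append(name)
        uncovered -= sets[name]
    return chosen
```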


01 Jan 1993
TL;DR: In this article, the authors considered the problem of parallel machine scheduling with the objective of finding an assignment of jobs to machines so as to minimize the maximum job completion time, and presented an optimization and an approximation algorithm that are both based on surrogate relaxation and duality.
Abstract: We consider the following parallel machine scheduling problem. Each of n independent jobs has to be scheduled on one of m unrelated parallel machines. The processing of job J_l on machine M_i requires an uninterrupted period of positive length p_li. The objective is to find an assignment of jobs to machines so as to minimize the maximum job completion time. The objective of this paper is to design practical algorithms for this NP-hard problem. We present an optimization algorithm and an approximation algorithm that are both based on surrogate relaxation and duality. The optimization algorithm solves quite large problems within reasonable time limits. The approximation algorithm is based upon a novel concept for iterative local search, in which the search direction is guided by surrogate multipliers.

Journal ArticleDOI
L.A. Sanchis1
TL;DR: It is shown that certain portions of the algorithm must be revised in order to maintain a relatively low time complexity for the modified algorithms.
Abstract: An adaptation to multiple blocks of a two-block network partitioning algorithm by Krishnamurthy was previously presented and analyzed by the author (see ibid., vol.38, p.62-81, 1989). The algorithm assumed one of several possible generalizations of two-way partitioning to multiple-way partitioning. The problem of adapting this algorithm to work with different generalizations more suitable for other types of applications of network partitioning is considered. It is shown that certain portions of the algorithm must be revised in order to maintain a relatively low time complexity for the modified algorithms. Experimental results are given.

Book ChapterDOI
16 Jun 1993
TL;DR: A fully dynamic algorithm A1 is presented that, in an amortized fashion, efficiently accommodates such changes and is 2-competitive, thereby matching the competitive ratio of the best existing off-line approximation algorithms for vertex cover.
Abstract: The problem of maintaining an approximate solution for vertex cover when edges may be inserted and deleted dynamically is studied. We present a fully dynamic algorithm A1 that, in an amortized fashion, efficiently accommodates such changes. We further provide for a generalization of this method and present a family of algorithms A k , k >−1. The amortized running time of each A k is \(\Theta ((\upsilon + e)\tfrac{{1 + \sqrt {1 + 4(k + 1)(2k + 3)} }}{{2(2k + 3)}})\) per Insert/Delete operation, where e denotes the number of edges of the graph G at the time that the operation is initiated. It follows that this amortized running time may be made arbitrarily close to \(\Theta ((\upsilon + e)\tfrac{{\sqrt 2 }}{2})\). Each of the algorithms given here is 2-competitive, thereby matching the competitive ratio of the best existing off-line approximation algorithms for vertex cover.