
Showing papers on "Approximation algorithm published in 1987"


Journal ArticleDOI
TL;DR: A new approach to constructing approximation algorithms, in which the aim is to find superoptimal but infeasible solutions and performance is measured by the degree of infeasibility allowed; the notion should find wide applicability for any optimization problem where traditional approximation algorithms have been particularly elusive.
Abstract: The problem of scheduling a set of n jobs on m identical machines so as to minimize the makespan time is perhaps the most well-studied problem in the theory of approximation algorithms for NP-hard optimization problems. In this paper the strongest possible type of result for this problem, a polynomial approximation scheme, is presented. More precisely, for each e, an algorithm that runs in time O((n/e)^(1/e^2)) and has relative error at most e is given. In addition, more practical algorithms for e = 1/5 + 2^-k and e = 1/6 + 2^-k, which have running times O(n(k + log n)) and O(n(km^4 + log n)), are presented. The techniques of analysis used in proving these results are extremely simple, especially in comparison with the baroque weighting techniques used previously. The scheme is based on a new approach to constructing approximation algorithms, which is called dual approximation algorithms, where the aim is to find superoptimal, but infeasible, solutions, and the performance is measured by the degree of infeasibility allowed. This notion should find wide applicability in its own right and should be considered for any optimization problem where traditional approximation algorithms have been particularly elusive.

766 citations
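The dual-approximation idea can be illustrated with a simplified sketch: binary-search a target makespan T and test each guess by packing the jobs into m bins of relaxed capacity (1 + e)T. The first-fit-decreasing subroutine below is a generic stand-in for the paper's more refined packing step, so this sketch carries a weaker guarantee than the scheme itself; function names and the choice of FFD are illustrative.

```python
def ffd_pack(jobs, capacity):
    """First-fit-decreasing: place each job in the first bin it fits."""
    bins = []
    for p in sorted(jobs, reverse=True):
        for i, load in enumerate(bins):
            if load + p <= capacity:
                bins[i] = load + p
                break
        else:
            bins.append(p)
    return bins

def dual_approx_makespan(jobs, m, eps=0.2):
    """Dual approximation: a guess T is accepted if the jobs fit into
    at most m bins of relaxed capacity (1 + eps) * T; binary-search T."""
    lo = max(max(jobs), sum(jobs) / m)   # trivial lower bound on OPT
    hi = sum(jobs)                       # one machine does everything
    while hi - lo > 1e-9 * hi:
        mid = (lo + hi) / 2
        if len(ffd_pack(jobs, (1 + eps) * mid)) <= m:
            hi = mid
        else:
            lo = mid
    return max(ffd_pack(jobs, (1 + eps) * hi))
```

On five jobs [3, 3, 2, 2, 2] and m = 2 the sketch happens to return the optimal makespan of 6; in general it only promises a schedule within the allowed infeasibility margin of the accepted guess.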


Proceedings ArticleDOI
12 Oct 1987
TL;DR: The problem of finding a sequence of commanded velocities which is guaranteed to move the point to the goal is shown to be non-deterministic exponential time hard, making it the first provably intractable problem in robotics.
Abstract: We present new techniques for establishing lower bounds in robot motion planning problems. Our scheme is based on path encoding and uses homotopy equivalence classes of paths to encode state. We first apply the method to the shortest path problem in 3 dimensions. The problem is to find the shortest path under an L_p metric (e.g., the Euclidean metric) between two points amid polyhedral obstacles. Although this problem has been extensively studied, there were no previously known lower bounds. We show that there may be exponentially many shortest path classes in single-source multiple-destination problems, and that the single-source single-destination problem is NP-hard. We use a similar proof technique to show that two-dimensional dynamic motion planning with bounded velocity is NP-hard. Finally we extend the technique to compliant motion planning with uncertainty in control. Specifically, we consider a point in 3 dimensions which is commanded to move in a straight line, but whose actual motion may differ from the commanded motion, possibly involving sliding against obstacles. Given that the point initially lies in some start region, the problem of finding a sequence of commanded velocities which is guaranteed to move the point to the goal is shown to be non-deterministic exponential time hard, making it the first provably intractable problem in robotics.

575 citations


Proceedings ArticleDOI
12 Oct 1987
TL;DR: In this paper, a polynomial algorithm is given that constructs a schedule on unrelated parallel machines with makespan at most twice the optimum, together with a polynomial approximation scheme for a fixed number of machines and a complexity classification for all special cases with a fixed number of processing times.
Abstract: We consider the following scheduling problem. There are m parallel machines and n independent jobs. Each job is to be assigned to one of the machines. The processing of job j on machine i requires time pij. The objective is to find a schedule that minimizes the makespan. Our main result is a polynomial algorithm which constructs a schedule that is guaranteed to be no longer than twice the optimum. We also present a polynomial approximation scheme for the case that the number of machines is fixed. Both approximation results are corollaries of a theorem about the relationship of a class of integer programming problems and their linear programming relaxations. In particular, we give a polynomial method to round the fractional extreme points of the linear program to integral points that nearly satisfy the constraints. In contrast to our main result, we prove that no polynomial algorithm can achieve a worst-case ratio less than 3/2 unless P = NP. We finally obtain a complexity classification for all special cases with a fixed number of processing times.

384 citations


Journal ArticleDOI
TL;DR: The design of FIR digital filters with a complex-valued desired frequency response under the Chebyshev error criterion is investigated; the desired constant group delay that gives the minimum Chebyshev error is found to be smaller than that of a linear-phase filter of the same length.
Abstract: The design of FIR digital filters with a complex-valued desired frequency response using the Chebyshev error is investigated. The complex approximation problem is converted into a real approximation problem which is nearly equivalent to the complex problem. A standard linear programming algorithm for the Chebyshev solution of overdetermined equations is used to solve the real approximation problem. Additional constraints are introduced which allow weighting of the phase and/or group delay of the approximation. Digital filters are designed which have nearly constant group delay in the passbands. The desired constant group delay which gives the minimum Chebyshev error is found to be smaller than that of a linear phase filter of the same length. These filters, in addition to having a smaller, approximately constant group delay, have better magnitude characteristics than exactly linear phase filters with the same length. The filters have nearly equiripple magnitude and group delay.

256 citations


Proceedings ArticleDOI
10 Jun 1987
TL;DR: In this paper, the authors used stochastic approximation (SA) to construct maximum likelihood estimates of system parameters, and showed that this SA procedure is, relative to a Kiefer-Wolfowitz procedure, most efficient for large-scale systems.
Abstract: This paper shows how stochastic approximation (SA) can be used to construct maximum likelihood estimates of system parameters. The procedure described here relies on a derivative approximation other than the usual finite-difference approximation associated with a Kiefer-Wolfowitz SA procedure. This alternative derivative approximation requires fewer, by a factor equal to the dimension of the parameter vector being estimated, computations than the standard finite-difference approximation. Numerical evidence presented in the paper indicates that this SA procedure is, relative to a Kiefer-Wolfowitz procedure, most efficient when considering large-scale systems.

170 citations
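A minimal Kiefer-Wolfowitz sketch, assuming a noisy loss we can only evaluate: the gradient is estimated with per-coordinate central differences, so each iteration costs 2d function evaluations; the factor-of-d saving the paper reports comes from replacing exactly this per-coordinate differencing. Gain sequences and the quadratic test function are illustrative choices, not from the paper.

```python
import random

def kiefer_wolfowitz(f, theta0, iters=3000, a=0.5, c=0.5):
    """Kiefer-Wolfowitz SA: descend a noisy function f using central
    finite differences with decaying gains a_k = a/k, c_k = c/k^(1/3)."""
    theta = list(theta0)
    for k in range(1, iters + 1):
        ak, ck = a / k, c / k ** (1.0 / 3.0)
        grad = []
        for i in range(len(theta)):          # 2 * dim evaluations per step
            up = list(theta); up[i] += ck
            dn = list(theta); dn[i] -= ck
            grad.append((f(up) - f(dn)) / (2 * ck))
        theta = [t - ak * g for t, g in zip(theta, grad)]
    return theta

# Demo: noisy quadratic with minimum at (1, -2)
random.seed(1)
def noisy_loss(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + random.gauss(0.0, 0.01)

estimate = kiefer_wolfowitz(noisy_loss, [0.0, 0.0])
```

With decaying gains the iterates settle near the true minimizer despite the evaluation noise.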



Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic and practical algorithm for load transfer by automatic sectionalizing switch operation in distribution systems at a fault occurrence, subject to the transformer-capacity constraints and the line-capacity constraints, on condition that all the section loads are estimated.
Abstract: This paper presents a systematic and practical algorithm for load transfer by automatic sectionalizing switch operation in distribution systems at a fault occurrence subject to the transformer-capacity constraints and the line-capacity constraints on condition that all the section loads are estimated. The algorithm is developed by introducing the concept of subset sum problem and network structure. Computer experience with a real system indicates that the algorithm proposed here is valid and effective for practical operations.

96 citations
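The subset-sum connection can be made concrete with a small dynamic program: given estimated section loads and the spare capacity of a backup feeder, pick sections whose total load uses that capacity as fully as possible without exceeding it. This is a generic illustration of the building block with integer loads assumed, not the paper's full switching algorithm.

```python
def best_transferable_load(loads, capacity):
    """Subset-sum DP over reachable totals: returns the largest total
    load not exceeding capacity, plus the chosen section indices."""
    reachable = {0: []}                      # total -> section indices
    for i, w in enumerate(loads):
        for total, picks in list(reachable.items()):
            t = total + w
            if t <= capacity and t not in reachable:
                reachable[t] = picks + [i]
    best = max(reachable)
    return best, reachable[best]
```

For loads [4, 5, 6, 3] and spare capacity 10 it finds an exact fit of 10 using sections 0 and 2.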


Journal ArticleDOI
TL;DR: A new method for the design of digital all-pass filters using the Chebyshev criterion, based on a phase approximation algorithm for polynomial transfer functions, has the advantage that it finds the best uniform phase approximation to an arbitrarily specified phase response without any initial guess of the solution.
Abstract: A new method for the design of digital all-pass filters using Chebyshev criterion is introduced. It is based on a phase approximation algorithm for polynomial transfer functions. The algorithm exploits a scheme of iteratively linearizing the nonlinear constraints in a nonlinear programming and converges theoretically. The design method has the advantage that it finds the best uniform phase approximation to an arbitrarily specified phase response without any initial guess of the solution. Design examples of orders up to 80, obtained on an IBM-PC/XT personal computer, are given to show the practicability of the method.

87 citations


Journal ArticleDOI
TL;DR: In the classical bin packing problem one is required to pack a given list of items into the smallest possible number of unit-sized bins; because this problem is NP-complete, researchers have tried to find efficient approximation algorithms that solve it in a reasonable amount of time.
Abstract: In the classical bin packing problem one is required to pack a given list of items into the smallest possible number of unit-sized bins. Because this problem is NP-complete, researchers have tried to find efficient approximation algorithms that solve this problem in a reasonable amount of time. If we let $A(I)$ be the number of bins used by algorithm A to pack a list of items I, and ${\textit{OPT}}(I)$ be the minimum number of bins necessary to pack I, one defines the asymptotic performance ratio of A to be ${{A(I)} / {{\textit{OPT}}(I)}}$, as ${\textit{OPT}}(I)$ tends to infinity. De la Vega and Lueker presented an approximation scheme which, for any $\varepsilon > 0$, yields an approximation algorithm with performance ratio $1 + \varepsilon $. This scheme has time complexity polynomial in n, the number of items, but exponential in ${1 / \varepsilon }$. Karmarkar and Karp extended this scheme to one that has time complexity polynomial in n and ${1 / \varepsilon }$. In the variable-sized bin packing problem, one...

86 citations


Proceedings ArticleDOI
12 Oct 1987
TL;DR: This is the first deterministic algorithm for MIS whose running time is polylogarithmic and whose processor-time product is optimal up to a polylogarithmic factor.
Abstract: A new parallel algorithm for the maximal independent set problem (MIS) is constructed. It runs in O(log^4 n) time when implemented on a linear number of EREW-processors. This is the first deterministic algorithm for MIS whose running time is polylogarithmic and whose processor-time product is optimal up to a polylogarithmic factor.

86 citations
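The paper's contribution is a deterministic parallel MIS; the randomized scheme it improves upon (in the style of Luby's algorithm) is easy to simulate round by round and shows what each parallel phase computes. This sketch is sequential and illustrative, not the paper's derandomized construction.

```python
import random

def randomized_mis(adj):
    """Luby-style MIS, simulated sequentially: each round, every live
    vertex draws a random value and joins the MIS if it beats all live
    neighbours; winners and their neighbours are then removed."""
    live = set(adj)
    mis = set()
    while live:
        r = {v: random.random() for v in live}
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners
        live -= winners
        live -= {u for v in winners for u in adj[v]}
    return mis

# Demo on a 5-cycle
random.seed(0)
cycle = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
independent_set = randomized_mis(cycle)
```

Each round at least the globally smallest draw wins, so the loop terminates, and the output is independent and maximal by construction.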


Book ChapterDOI
01 Jan 1987
TL;DR: The performance analysis of an approximation algorithm concentrates on the quality of the final solution obtained by the algorithm and on the running time the algorithm requires.
Abstract: The performance analysis of an approximation algorithm concentrates on the following two quantities: the quality of the final solution obtained by the algorithm, i.e. the difference in cost value between the final solution and a globally minimal configuration; the running time required by the algorithm.

Journal ArticleDOI
01 Jan 1987
TL;DR: It is proved that the team can obtain the optimal classifier to an arbitrary approximation when posed as a game with common payoff played by a team of mutually cooperating learning automata.
Abstract: The problem of learning correct decision rules to minimize the probability of misclassification is a long-standing problem of supervised learning in pattern recognition. The problem of learning such optimal discriminant functions is considered for the class of problems where the statistical properties of the pattern classes are completely unknown. The problem is posed as a game with common payoff played by a team of mutually cooperating learning automata. This essentially results in a probabilistic search through the space of classifiers. The approach is inherently capable of learning discriminant functions that are nonlinear in their parameters also. A learning algorithm is presented for the team and convergence is established. It is proved that the team can obtain the optimal classifier to an arbitrary approximation. Simulation results with a few examples are presented where the team learns the optimal classifier.

Journal ArticleDOI
TL;DR: A general method, the shifting strategy, is developed; nested applications of it yield polynomial approximation schemes for strongly NP-complete problems, as well as algorithms of practical interest whose running times are bounded by low-degree polynomials.

Journal ArticleDOI
TL;DR: This paper proposes a new representation of nets for gate matrix layout, called dynamic-net-lists, which is better suited for layout optimization than the traditional fixed-net-list since with it net-bindings can be delayed until the gate-ordering has been constructed.
Abstract: This paper proposes a new representation of nets for gate matrix layout, called dynamic-net-lists. The dynamic-net-list representation is better suited for layout optimization than the traditional fixed-net-list since with it net-bindings can be delayed until the gate-ordering has been constructed. Based on dynamic-net-lists, an efficient modified min-net-cut algorithm has been developed to solve the gate ordering problem for gate matrix layout. This new approach is shown through theoretical analysis and experimental results to reduce the number of horizontal tracks and hence the area significantly. The time complexity of the algorithm is O(N log N), where N is the total number of transistors and gate-net contacts. It is also shown that an ideal min-net-cut algorithm for optimal gate matrix layout with n gate signals is at worst a log-n approximation algorithm and is conjectured to be a relative approximation algorithm.

Journal ArticleDOI
TL;DR: In this paper, the authors present a new approximation algorithm for the two-dimensional bin-packing problem based on two one-dimensional bin-packing algorithms, which can also be used in cases where the output is required to be on-line.
Abstract: We present a new approximation algorithm for the two-dimensional bin-packing problem. The algorithm is based on two one-dimensional bin-packing algorithms. Since the algorithm is of next-fit type it can also be used for those cases where the output is required to be on-line (e.g. if we open a new bin we have no possibility to pack elements into the earlier opened bins). We give a tight bound for its worst-case behaviour and show that this bound is a parameter of the maximal sizes of the items to be packed. Moreover, we also present a probabilistic analysis of this algorithm.
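One standard way to build a 2-D bin packer from two 1-D next-fit passes is a shelf scheme in the next-fit-decreasing-height style: next-fit by width forms shelves, then next-fit by height stacks shelves into unit bins. The sketch below is that generic scheme, not necessarily the authors' exact rule (their algorithm also covers the fully on-line case, which the initial sorting here gives up).

```python
def shelf_pack(items, W=1.0, H=1.0):
    """Pack (width, height) items into W x H bins: next-fit by width
    builds shelves, next-fit by height stacks shelves into bins."""
    shelves = []                        # (shelf_height, used_width)
    for w, h in sorted(items, key=lambda it: -it[1]):
        if shelves and shelves[-1][1] + w <= W:
            shelves[-1] = (shelves[-1][0], shelves[-1][1] + w)
        else:
            shelves.append((h, w))      # tallest-first: first item sets height
    bins, used_h = 1, 0.0               # one open bin (even for no items)
    for shelf_h, _ in shelves:
        if used_h + shelf_h > H:
            bins += 1                   # close the bin, open a new one
            used_h = shelf_h
        else:
            used_h += shelf_h
    return bins
```

Two 0.6 x 0.6 squares plus two 0.4 x 0.3 rectangles need 2 bins under this rule.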

Proceedings ArticleDOI
12 Oct 1987
TL;DR: This work introduces a new primitive, the Resource Controller, which abstracts the problem of controlling the total amount of resources consumed by a distributed algorithm, and presents an efficient distributed algorithm to implement this abstraction.
Abstract: We introduce a new primitive, the Resource Controller, which abstracts the problem of controlling the total amount of resources consumed by a distributed algorithm. We present an efficient distributed algorithm to implement this abstraction. The message complexity of our algorithm per participating node is polylogarithmic in the size of the network, compared to the linear cost per node of the naive algorithm. The implementation of our algorithm is simple and practical and the techniques used are interesting because a global quantity is managed in a distributed way. The Resource Controller can be used to construct efficient algorithms for a number of important problems, such as the problem of bounding the worst-case message complexity of a protocol and the problem of dynamically assigning unique names to nodes participating in a protocol.

Journal ArticleDOI
TL;DR: This paper proposes an algorithm which assumes that the auxiliary problems are solved only approximately, and proves that it gives an approximate solution to the original problem whose accuracy is at least as good as that of the approximate solutions to the auxiliary problems.
Abstract: We are concerned with a combinatorial optimization problem which has the ratio of two linear functions as the objective function. This type of problem can be solved by an algorithm that uses an auxiliary problem with a parametrized linear objective function. Because of its combinatorial nature, however, it is often difficult to solve the auxiliary problem exactly. In this paper, we propose an algorithm which assumes that the auxiliary problems are solved only approximately, and prove that it gives an approximate solution to the original problem whose accuracy is at least as good as that of the approximate solutions to the auxiliary problems. It is also shown that the time complexity is bounded by the square of the computation time of the approximate algorithm for the auxiliary problem. As an example of the proposed algorithm, we present a fully polynomial time approximation scheme for the fractional 0–1 knapsack problem.
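The parametric auxiliary-problem idea is easiest to see on a toy instance where the auxiliary problem can be solved exactly: maximize (sum of p_i)/(sum of q_i) over k-element subsets with all q_i > 0, so that for a fixed parameter lam the auxiliary problem "maximize p·x - lam·q·x" is a plain top-k selection. The Dinkelbach-style iteration below is an assumed concrete instance; the paper treats the general case where this auxiliary step is itself only approximate.

```python
def maximize_ratio_topk(p, q, k, tol=1e-12):
    """Parametric scheme for max (sum p_i)/(sum q_i) over k-subsets,
    q_i > 0: solve the auxiliary problem max sum (p_i - lam*q_i)
    exactly by taking the k best items, then update lam to the
    achieved ratio; lam strictly increases until optimal."""
    lam = 0.0
    while True:
        chosen = sorted(range(len(p)), key=lambda i: lam * q[i] - p[i])[:k]
        num = sum(p[i] for i in chosen)
        den = sum(q[i] for i in chosen)
        if num - lam * den <= tol:   # auxiliary optimum ~ 0: lam is optimal
            return lam, sorted(chosen)
        lam = num / den
```

On p = [3, 1, 4], q = [1, 2, 1], k = 2 the iteration stops at lam = 3.5 with the subset {0, 2}, which is indeed the best of the three 2-subsets.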

Journal ArticleDOI
TL;DR: This paper provides a polynomial time algorithm for finding an optimum schedule for an odd cycle, whose complexity was left as an open question by the above authors.
Abstract: The scheduling of file transfers in networks to minimize the overall finishing time was studied by Coffman, et al. where the schedule does not permit interruption and each communication module can be used as a transmitter and as a receiver. They first presented complexity results under various conditions. Then they showed that the general problem is NP-complete and provided approximation algorithms. This paper first presents more efficient approximation algorithms with better performances than the above authors’ algorithms for the cases of trees and multitrees. Furthermore, there are simple distributed implementations of our approximation algorithms. Then this paper provides a polynomial time algorithm for finding an optimum schedule for an odd cycle, whose complexity was left as an open question by the above authors.

Journal ArticleDOI
TL;DR: A practical algorithm is presented to design stable 2-D separable-denominator digital filters that is computationally efficient and numerically stable.
Abstract: The paper proposes a technique for the design of 2-D separable-denominator digital filters. The technique is based on decomposing both 2-D separable-denominator state-space models and the given 2-D impulse response specifications into 1-D ones. It is proved that the given 2-D specifications can be optimally decomposed into 1-D specifications via singular value decompositions. Then, 1-D digital filters are designed to approximate the decomposed 1-D specifications. By using balanced approximation as the 1-D design algorithm, a practical algorithm is presented to design stable 2-D separable-denominator digital filters. The algorithm is computationally efficient and numerically stable.

01 Oct 1987
TL;DR: Two recent local search schemes, the Simulated Annealing method of Kirkpatrick, Gelatt and Vecchi and the Steepest Ascent Mildest Descent method, are considered and adapted to the Maximum Satisfiability problem; the resulting algorithms are shown empirically to be more efficient than the heuristics previously proposed in the literature.
Abstract: Old and new algorithms for the Maximum Satisfiability problem are studied. We first summarize the different heuristics previously proposed, i.e., the approximation algorithms of Johnson and of Lieberherr for the general Maximum Satisfiability problem, and the heuristics of Lieberherr and Specker, Poljak and Turzik for the Maximum 2-Satisfiability problem. We then consider two recent local search algorithmic schemes, the Simulated Annealing method of Kirkpatrick, Gelatt and Vecchi and the Steepest Ascent Mildest Descent method, and adapt them to the Maximum Satisfiability problem. The resulting algorithms, which avoid being blocked as soon as a local optimum has been found, are shown empirically to be more efficient than the heuristics previously proposed in the literature.
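A minimal version of the Simulated Annealing adaptation, assuming clauses are lists of signed variable indices: propose a random flip, always accept improvements, accept worsening flips with probability exp(delta/T), and cool geometrically. Parameter values are illustrative, not tuned as in the paper.

```python
import math
import random

def maxsat_anneal(clauses, n_vars, steps=5000, t0=2.0, cool=0.999):
    """Simulated annealing for MAX-SAT; a literal +v / -v is satisfied
    when variable v is True / False. Returns (best count, assignment)."""
    def satisfied(a):
        return sum(any(a[abs(l)] == (l > 0) for l in cl) for cl in clauses)
    a = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    cur = satisfied(a)
    best, best_a = cur, dict(a)
    T = t0
    for _ in range(steps):
        v = random.randint(1, n_vars)
        a[v] = not a[v]                          # propose a flip
        new = satisfied(a)
        if new >= cur or random.random() < math.exp((new - cur) / T):
            cur = new
            if cur > best:
                best, best_a = cur, dict(a)
        else:
            a[v] = not a[v]                      # reject: undo the flip
        T *= cool                                # geometric cooling
    return best, best_a

random.seed(0)
best, assignment = maxsat_anneal([[1, 2], [-1, 3], [-2, -3], [1, 3]], 3)
```

Because worsening flips stay acceptable while T is warm, the search is not blocked at the first local optimum, which is exactly the point the abstract makes.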


Journal ArticleDOI
TL;DR: This paper presents a linear-time algorithm to embed an outerplanar graph G into a spanning tree with cost at most maxdegree(G) + 1, and shows that the problem is NP-complete even when G is planar, easily solved when G is a tree, and admits a simple characterization of all graphs with cost 2 or less.
Abstract: The Min Cut Linear Arrangement problem asks, for a given graph G and a positive integer k, if there exists a linear arrangement of G's vertices so that any line separating consecutive vertices in the layout cuts at most k of the edges. A variation of this problem insists that the arrangement be made on a (fixed-degree) tree instead of a line. We show that (1) this problem is NP-complete even when G is planar; (2) it is easily solved when G is a tree; and (3) there is a simple characterization for all graphs with cost 2 or less. Our main result is a linear-time algorithm to embed an outerplanar graph G into a spanning tree with cost at most maxdegree(G) + 1. This result is important because it extends to an approximation algorithm for the standard Min Cut Linear Arrangement Problem on outerplanar graphs.
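The cost being minimized here is the cutwidth of a linear arrangement; evaluating a candidate layout is straightforward, which makes approximation guarantees easy to check experimentally. A generic helper, not from the paper:

```python
def cutwidth(order, edges):
    """Largest number of edges crossing any gap between consecutive
    positions of the linear arrangement `order`."""
    pos = {v: i for i, v in enumerate(order)}
    worst = 0
    for gap in range(len(order) - 1):            # gap between positions gap, gap+1
        cut = sum(1 for u, v in edges
                  if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
        worst = max(worst, cut)
    return worst
```

On the path a-b-c-d, the natural order has cutwidth 1, while the order a, c, b, d forces three edges across one gap.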

Journal ArticleDOI
TL;DR: In this article, the authors present a method for finding the global minimum of a Lipschitzian function under Lipschitzian constraints by converting the problem into concave minimization subject to a convex and a reverse convex constraint; the resulting algorithm has the same complexity as the outer approximation algorithm for a concave minimization problem.
Abstract: We will present a new method for finding the global minimum of a Lipschitzian function under Lipschitzian constraints. The method consists in converting the given problem into one of globally minimizing a concave function subject to a convex and a reverse convex constraints. The resulting algorithm is of the same complexity as the outer approximation algorithm for a concave minimization problem.
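A classical concrete instance of global minimization with a known Lipschitz constant L is the Piyavskii-Shubert scheme, related in spirit but not the authors' conversion method: maintain the saw-tooth lower bound f(x) >= f(x_i) - L|x - x_i| and always refine the interval whose bound is smallest.

```python
import heapq

def lipschitz_min(f, a, b, L, iters=80):
    """Piyavskii-Shubert global minimization of an L-Lipschitz f on [a, b]:
    repeatedly split the interval with the smallest saw-tooth lower bound."""
    def bound(x1, y1, x2, y2):            # min of the two bounding lines
        return (y1 + y2 - L * (x2 - x1)) / 2
    ya, yb = f(a), f(b)
    best = min(ya, yb)
    heap = [(bound(a, ya, b, yb), a, b, ya, yb)]
    for _ in range(iters):
        lb, x1, x2, y1, y2 = heapq.heappop(heap)
        if lb >= best - 1e-12:            # no interval can beat the incumbent
            break
        xm = (x1 + x2) / 2 + (y1 - y2) / (2 * L)  # minimizer of the bound
        ym = f(xm)
        best = min(best, ym)
        heapq.heappush(heap, (bound(x1, y1, xm, ym), x1, xm, y1, ym))
        heapq.heappush(heap, (bound(xm, ym, x2, y2), xm, x2, ym, y2))
    return best
```

The heap gives a certified optimality gap at every step: the incumbent value minus the smallest lower bound in the queue.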

01 Jan 1987
TL;DR: A survey of solution methods for routing problems with time window constraints is given in this article, including the traveling salesman problem, the vehicle routing problem, the pickup and delivery problem, and the dial-a-ride problem.
Abstract: This is a survey of solution methods for routing problems with time window constraints. Among the problems considered are the traveling salesman problem, the vehicle routing problem, the pickup and delivery problem, and the dial-a-ride problem. Optimization algorithms that use branch and bound, dynamic programming and set partitioning, and approximation algorithms based on construction, iterative improvement and incomplete optimization are presented.


Proceedings ArticleDOI
12 Oct 1987
TL;DR: In this article, it was shown that any algorithm for fixed points based on function evaluation (which includes all general purpose fixed-point algorithms) must in the worst case take a number of steps which is exponential both in the number of digits of accuracy and in the number of variables.
Abstract: The Brouwer fixed point theorem has become a major tool for modeling economic systems during the 20th century. It was intractable to use the theorem in a computational manner until 1965, when Scarf provided the first practical algorithm for finding a fixed point of a Brouwer map. Scarf's work left open the question of worst-case complexity, although he hypothesized that his algorithm had "typical" behavior of polynomial time in the number of variables of the problem. Here we show that any algorithm for fixed points based on function evaluation (which includes all general purpose fixed-point algorithms) must in the worst case take a number of steps which is exponential both in the number of digits of accuracy and in the number of variables.
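For contrast, in one dimension a Brouwer fixed point of f : [0,1] -> [0,1] can be found by bisection on g(x) = f(x) - x using only function evaluations, at a cost linear in the number of digits of accuracy; the theorem above says that in general dimension the evaluation count must blow up exponentially in both the digits and the number of variables. A minimal sketch:

```python
def brouwer_fixed_point_1d(f, lo=0.0, hi=1.0, tol=1e-12):
    """Bisection on g(x) = f(x) - x: since f maps [0,1] into itself,
    g(lo) >= 0 >= g(hi), and the sign invariant brackets a fixed point."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f(x) = (x^2 + 0.5) / 2 maps [0, 1] into [0.25, 0.75]
fixed = brouwer_fixed_point_1d(lambda x: (x * x + 0.5) / 2)
```

Each halving of the interval buys one binary digit of accuracy, which is exactly the function-evaluation model the lower bound addresses.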

01 Jan 1987
TL;DR: The combinatorial problem of partitioning the nodes of a weighted graph into bounded-size disjoint clusters, so that the sum of the weights of the edges whose end vertices belong to the same cluster is maximized, is studied.
Abstract: We study, in the first part of this thesis, the combinatorial problem that consists of partitioning the nodes of a weighted graph into bounded-size disjoint clusters such that the sum of the weights of the edges whose end vertices belong to the same cluster is maximum. The complexity of the problem is put into perspective with other graph partitioning problems. We present a class of approximation algorithms, based on matching, for the problem of partitioning the nodes of a graph into equally sized subsets. These approximation algorithms are analyzed and shown to yield practical worst-case bounds. Numerical experiments with the heuristics and exact solution procedures on real-world and random problems are reported. In the second part of this thesis, we address the problem of allocating the relations of a database to the sites of a computer network. A general model of the problem is presented and linked to hypergraph partitioning. An optimization algorithm based on two lower bounding schemes is described. Successful computational experiments are also reported.
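For cluster size 2 the matching-based heuristics reduce to weighted matching, and even a greedy matching (repeatedly take the heaviest edge between unused vertices) guarantees at least half the weight of an optimal matching. A sketch with hypothetical data, not the thesis's exact procedure:

```python
def greedy_pair_clusters(n, weight):
    """Partition vertices 0..n-1 into size-2 clusters by greedy matching
    on edge weights; leftover vertices become singletons."""
    edges = sorted(weight, key=weight.get, reverse=True)
    used, clusters = set(), []
    for u, v in edges:                   # heaviest edge first
        if u not in used and v not in used:
            clusters.append((u, v))
            used |= {u, v}
    clusters += [(v,) for v in range(n) if v not in used]
    return clusters

clusters = greedy_pair_clusters(4, {(0, 1): 5, (1, 2): 4, (2, 3): 3, (0, 3): 1})
```

On this instance greedy keeps edges (0, 1) and (2, 3) for a total intra-cluster weight of 8, which also happens to be optimal here.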

Journal ArticleDOI
TL;DR: The problem of connecting a set of n terminals belonging to m (signal) nets that lie on the sides of a rectangle to minimize the total area is discussed, and an O(n(m + log n)) approximation algorithm is presented to solve this problem.
Abstract: The problem of connecting a set of n terminals belonging to m (signal) nets that lie on the sides of a rectangle to minimize the total area is discussed. We present an $O(n(m + \log n))$ approximation algorithm to solve this problem. Our algorithm generates a solution with area $ \leqq 1.6 * {\operatorname{OPT}}$, where ${\operatorname{OPT}}$ is the area of an optimal solution. The nets are routed according to the following greedy strategy: the wire connecting all points from a net is one whose path crosses the least number of corners of the rectangle. For some nets there are several routes that cross the least number of corners. A subset of these nets is connected by wires whose paths blend with the paths for other nets. The remaining nets are routed using several strategies and $2^6$ layouts are obtained. The best of these layouts is the solution generated by our algorithm.

Proceedings ArticleDOI
01 Dec 1987
TL;DR: A fast linear equation solver based on recursive textured decompositions is introduced; with N processors its time complexity is of order one, faster than the multigrid method, previously the fastest available method for the two-dimensional Poisson equation at O(log N).
Abstract: A fast linear equation solver based on recursive textured decompositions is introduced in this paper. The computational time complexity for solving problems of N unknown variables is of order one if N processors are available. This is faster than the multigrid method, so far the fastest available method for the two-dimensional Poisson equation, which has time complexity O(log N). The basic difference between this approach and classical iterative algorithms is that different approximations of the system matrix are used in round-robin fashion, while one fixed approximation is used in the classical approach. We show that, with a proper choice of approximation compositions, the spectral radius of the error dynamics is reduced drastically, and with a proper decomposition size the spectral radius approaches a constant strictly less than one, even if the dimension of the problem tends to infinity. This enables us to devise a parallel algorithm with order-one time complexity.

ReportDOI
01 Sep 1987
TL;DR: In the present research, an algorithm is developed which uses a polynomial approximation to f(A), obtained by interpolating the function f(z) in a set of points known to have certain maximal properties; the resulting approximation is almost best.
Abstract: During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N×N matrix (e.g., exp(A), sin(A), A^(-1)). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly; usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). The task is reduced to approximating f(z) by a polynomial in z, where z belongs to a domain D in the complex plane which includes all the eigenvalues of A. This approximation problem is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties. The approximation thus achieved is almost best. Implementing the algorithm for some practical problems is described. Since a solution to a linear system Ax = b is x = A^(-1) b, an iterative solution to it can be regarded as a polynomial approximation to f(A) = A^(-1). Implementing the algorithm in this case is also described.
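The practical core, applying a polynomial approximation p(A) to a vector v without ever forming f(A), is Horner's rule with matrix-vector products. The sketch assumes the interpolant has already been converted to monomial coefficients; the report's choice of interpolation points and basis is more sophisticated and is omitted here.

```python
def matvec(A, x):
    """Dense matrix-vector product for a list-of-rows matrix."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def poly_apply(coeffs, A, v):
    """Evaluate p(A) v for p(z) = c0 + c1 z + ... + ck z^k by Horner's
    rule; only matrix-vector products are used, never the matrix p(A)."""
    r = [coeffs[-1] * x for x in v]
    for c in reversed(coeffs[:-1]):
        r = [ar + c * x for ar, x in zip(matvec(A, r), v)]
    return r

# Truncated Taylor coefficients of exp(z); A is nilpotent, so exp(A) = I + A
taylor = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24, 1.0 / 120]
A = [[0.0, 1.0], [0.0, 0.0]]
result = poly_apply(taylor, A, [1.0, 1.0])
```

The cost is k matrix-vector products for a degree-k polynomial, which is what makes the approach usable when A is large and f(A) would be dense.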