
Showing papers on "Approximation algorithm published in 1988"


Journal ArticleDOI
TL;DR: In this article, a weighted greedy algorithm is proposed for a version of the dynamic Steiner tree problem, which allows endpoints to come and go during the life of a connection.
Abstract: The author addresses the problem of routing connections in a large-scale packet-switched network supporting multipoint communications. He gives a formal definition of several versions of the multipoint problem, including both static and dynamic versions. He looks at the Steiner tree problem as an example of the static problem and considers the experimental performance of two approximation algorithms for this problem. A weighted greedy algorithm is considered for a version of the dynamic problem which allows endpoints to come and go during the life of a connection. One of the static algorithms serves as a reference to measure the performance of the proposed weighted greedy algorithm in a series of experiments.
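The static shortest-path flavor of greedy Steiner heuristic referenced above can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: it grows a tree from one terminal and repeatedly attaches the nearest remaining terminal along a shortest path (the function names and the adjacency-dict graph representation are assumptions of this sketch).

```python
import heapq

def dijkstra(adj, sources):
    """Multi-source Dijkstra: distance and predecessor from the nearest source."""
    dist = {v: float("inf") for v in adj}
    prev = {}
    pq = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(pq, (0.0, s))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

def greedy_steiner(adj, terminals):
    """Grow a tree from one terminal; repeatedly attach the closest
    remaining terminal along a shortest path into the current tree."""
    terminals = list(terminals)
    tree = {terminals[0]}
    edges = []
    remaining = set(terminals[1:])
    while remaining:
        dist, prev = dijkstra(adj, tree)
        t = min(remaining, key=lambda v: dist[v])
        v = t
        while v not in tree:        # walk the shortest path back into the tree
            u = prev[v]
            edges.append((u, v))
            tree.add(v)
            v = u
        remaining -= tree
    return edges
```

A dynamic variant in the paper's spirit would rerun the attachment step as endpoints join, weighting distances to balance tree cost against rearrangement.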

2,866 citations


Proceedings ArticleDOI
24 Oct 1988
TL;DR: The main result is an algorithm for performing the task provided that the capacity of each cut exceeds the demand across the cut by a Θ(log n) factor.
Abstract: A multicommodity flow problem is considered where for each pair of vertices (u, v) it is required to send f half-units of commodity (u, v) from u to v and f half-units of commodity (v, u) from v to u without violating capacity constraints. The main result is an algorithm for performing the task provided that the capacity of each cut exceeds the demand across the cut by a Θ(log n) factor. The condition on cuts is required in the worst case, and is trivially within a Θ(log n) factor of optimal for any flow problem. The result can be used to construct the first polylog-times optimal approximation algorithms for a wide variety of problems, including minimum quotient separators, 1/3-2/3 separators, bifurcators, crossing number, and VLSI layout area. It can also be used to route packets efficiently in arbitrary distributed networks.

491 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: This work gives a polynomial time approximation scheme that, for a fixed cluster size, estimates the optimal number of clusters under the second measure of cluster size within factors arbitrarily close to 1.
Abstract: In a clustering problem, the aim is to partition a given set of n points in d-dimensional space into k groups, called clusters, so that points within each cluster are near each other. Two objective functions frequently used to measure the performance of a clustering algorithm are, for any Lp metric, (a) the maximum distance between pairs of points in the same cluster, and (b) the maximum distance between points in each cluster and a chosen cluster center; we refer to either measure as the cluster size. We show that one cannot approximate the optimal cluster size for a fixed number of clusters within a factor close to 2 in polynomial time, for two or more dimensions, unless P=NP. We also present an algorithm that achieves this factor of 2 in time O(n log k), and show that this running time is optimal in the algebraic decision tree model. For a fixed cluster size, on the other hand, we give a polynomial time approximation scheme that estimates the optimal number of clusters under the second measure of cluster size within factors arbitrarily close to 1. Our approach is extended to provide approximation algorithms for the restricted centers, suppliers, and weighted suppliers problems that run in optimal O(n log k) time and achieve optimal or nearly optimal approximation bounds.
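The factor-2 guarantee for a fixed number of clusters is achieved by farthest-point clustering. A minimal sketch follows, using the simpler O(nk) variant of the idea rather than the paper's O(n log k) algorithm; the function name and Euclidean metric are assumptions of this sketch.

```python
def farthest_point_clustering(points, k):
    """Farthest-point (Gonzalez-style) 2-approximation for k-center:
    repeatedly pick the point farthest from the centers chosen so far."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    centers = [points[0]]
    while len(centers) < k:
        # the point whose nearest center is farthest away becomes a new center
        centers.append(max(points, key=lambda p: min(dist(p, c) for c in centers)))
    radius = max(min(dist(p, c) for c in centers) for p in points)
    return centers, radius
```

The returned radius is at most twice the optimal k-center radius, matching the hardness threshold stated in the abstract.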

485 citations


Journal ArticleDOI
TL;DR: A family of polynomial-time algorithms is given for scheduling jobs so that the last job to finish is completed as quickly as possible; each algorithm delivers a solution that is within a fixed relative error of the optimum.
Abstract: In this paper we present a polynomial approximation scheme for the minimum makespan problem on uniform parallel processors. More specifically, the problem is to find a schedule for a set of independent jobs on a collection of machines of different speeds so that the last job to finish is completed as quickly as possible. We give a family of polynomial-time algorithms {Aε} such that Aε delivers a solution that is within a relative error of ε of the optimum. The technique employed is the dual approximation approach, where infeasible but superoptimal solutions for a related (dual) problem are converted to the desired feasible but possibly suboptimal solution.
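For contrast with the dual approximation scheme, the classical baseline for makespan minimization is the LPT rule: assign jobs in decreasing size to the currently least-loaded machine. This sketch is for identical machines, not the paper's uniform (different-speed) machines, and is only illustrative of the problem being approximated.

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest-Processing-Time-first list scheduling on m identical machines.
    Returns the makespan (finishing time of the last job)."""
    loads = [0.0] * m            # a min-heap of current machine loads
    heapq.heapify(loads)
    for j in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + j)
    return max(loads)
```

On the instance [3, 3, 2, 2, 2] with two machines, LPT yields makespan 7 while the optimum is 6 ({3, 3} and {2, 2, 2}), illustrating why schemes with arbitrarily small relative error ε are a genuine improvement.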

382 citations


01 Jan 1988
TL;DR: This work presents optimization algorithms that use branch and bound, dynamic programming and set partitioning, and approximation algorithms based on construction, iterative improvement and incomplete optimization for routing problems with time window constraints.
Abstract: This is a survey of solution methods for routing problems with time window constraints. Among the problems considered are the traveling salesman problem, the vehicle routing problem, the pickup and delivery problem, and the dial-a-ride problem. We present optimization algorithms that use branch and bound, dynamic programming and set partitioning, and approximation algorithms based on construction, iterative improvement and incomplete optimization.

286 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: The permanent function arises naturally in a number of fields, including algebra, combinatorial enumeration and the physical sciences, and has been an object of study by mathematicians for many years (see [14] for background).
Abstract: The permanent of an n x n matrix A with 0-1 entries a_{ij} is defined by per(A) = Σ_σ ∏_{i=0}^{n−1} a_{i,σ(i)}, where the sum is over all permutations σ of [n] = {0, …, n − 1}. Evaluating per(A) is equivalent to counting perfect matchings (1-factors) in the bipartite graph G = (V1, V2, E), where V1 = V2 = [n] and (i,j) ∈ E iff a_{ij} = 1. The permanent function arises naturally in a number of fields, including algebra, combinatorial enumeration and the physical sciences, and has been an object of study by mathematicians for many years (see [14] for background). Despite considerable effort, and in contrast with the syntactically very similar determinant, no efficient procedure for computing this function is known. Convincing evidence for the inherent intractability of the permanent was provided in the late 1970s by Valiant [19], who demonstrated that it is complete for the class #P of enumeration problems and thus as hard as counting any NP structures. Interest has therefore recently turned to finding computationally feasible approximation algorithms (see, e.g., [11], [17]). The notion of approximation we shall use in this paper is as follows: let ƒ be a function from input strings to natural numbers. A fully-polynomial randomised approximation scheme (fpras) for ƒ is a probabilistic algorithm which, when presented with a string x and a real number ε > 0, runs in time polynomial in |x| and 1/ε and outputs a number which with high probability estimates ƒ(x) to within a factor of (1 + ε). A promising approach to finding a fpras for the permanent was recently proposed by Broder [7], and involves reducing the problem of counting perfect matchings in a graph to that of generating them randomly from an almost uniform distribution. 
The latter problem is then amenable to the following dynamic stochastic technique: construct a Markov chain whose states correspond to perfect and 'near-perfect' matchings, and which converges to a stationary distribution which is uniform over the states. Transitions in the chain correspond to simple local perturbations of the structures. Then, provided convergence is fast enough, we can generate matchings by simulating the chain for a small number of steps and outputting the structure corresponding to the final state. When applying this technique, one is faced with the task of proving that a given Markov chain is rapidly mixing, i.e., that after a short period of evolution the distribution of the final state is essentially independent of the initial state. 'Short' here means bounded by a polynomial in the input size; since the state space itself may be exponentially large, the chain must typically be close to stationarity after visiting only a small fraction of the space. Recent work on the rate of convergence of Markov chains has focussed on stochastic concepts such as coupling [1] and stopping times [3]. While these methods are intuitively appealing and yield tight bounds for simple chains, the analysis involved becomes extremely complicated for more interesting processes which lack a high degree of symmetry. Using a complex coupling argument, Broder [7] claims that the perfect matchings chain above is rapidly mixing provided the bipartite graph is dense, i.e., has minimum vertex degree at least n/2. This immediately yields a fpras for the dense permanent. However, the coupling proof is hard to penetrate; more seriously, as has been observed by Mihail [13], it contains a fundamental error which is not easily correctable. In this paper, we propose an alternative technique for analysing the rate of convergence of Markov chains based on a structural property of the underlying weighted graph. 
Under fairly general conditions, a finite ergodic Markov chain is rapidly mixing iff the conductance of its underlying graph is not too small. This characterisation is related to recent work by Alon [4] and Alon and Milman [5] on eigenvalues and expander graphs. While similar characterisations of rapid mixing have been noted before (see, e.g., [2]), independent estimates of the conductance have proved elusive for non-trivial chains. Using a novel method of analysis, we are able to derive a lower bound on the conductance of Broder's perfect matchings chain under the same density assumption, thus verifying that it is indeed rapidly mixing. The existence of a fpras for the dense permanent is therefore established. Reductions from approximate counting to almost uniform generation similar to that mentioned above for perfect matchings also hold for the large class of combinatorial structures which are self-reducible [10]. Consequently, the Markov chain approach is potentially a powerful general tool for obtaining approximation algorithms for hard combinatorial enumeration problems. Moreover, our proof technique for rapid mixing also seems to generalise to other interesting chains. We substantiate this claim by considering an example from the field of statistical physics, namely the monomer-dimer problem (see, e.g., [8]). Here a physical system is modelled by a set of combinatorial structures, or configurations, each of which has an associated weight. Most interesting properties of the model can be computed from the partition function, which is just the sum of the weights of the configurations. By means of a reduction to the associated generation problem, in which configurations are selected with probabilities proportional to their weights, we are able to show the existence of a fpras for the monomer-dimer partition function under quite general conditions. 
Significantly, in such applications the generation problem is often of interest in its own right. Our final result concerns notions of approximate counting and their robustness. We show that, for all self-reducible NP structures, randomised approximate counting to within a factor of (1 + n^b), where n is the input size, is possible in polynomial time either for all b ∈ R or for no b ∈ R. We are therefore justified in calling such a counting problem approximable iff there exists a polynomial time randomised procedure which with high probability estimates the number of structures within ratio (1 + n^b) for some arbitrary b ∈ R. The connection with the earlier part of the paper is our use of a Markov chain simulation to reduce almost uniform generation to approximate counting within any factor of the above form: once again, the proof that the chain is rapidly mixing follows from the conductance characterisation.
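The permanent as defined at the start of the abstract can be computed exactly straight from the formula, in exponential time; that exponential cost is precisely the intractability that motivates the fpras. A direct sketch, also checkable against the perfect-matching interpretation:

```python
from itertools import permutations

def permanent(a):
    """per(A) = sum over permutations s of prod_i A[i][s(i)], for a 0-1
    matrix given as a list of rows. O(n * n!) time: exact but intractable."""
    n = len(a)
    total = 0
    for s in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= a[i][s[i]]
            if prod == 0:        # a zero entry kills this permutation's term
                break
        total += prod
    return total
```

For a 0-1 matrix, each nonzero term selects one entry per row and column, i.e. one perfect matching of the associated bipartite graph, so per(A) counts perfect matchings exactly as the abstract states.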

236 citations


01 Jan 1988
TL;DR: A survey of solution methods for routing problems with time window constraints is given in this paper, including the traveling salesman problem, the vehicle routing problem, pickup and delivery problem, and the dial-a-ride problem.
Abstract: This paper surveys solution methods for routing problems with time window constraints. Among the problems considered are the traveling salesman problem, the vehicle routing problem, the pickup and delivery problem, and the dial-a-ride problem. Optimization algorithms that use branch and bound, dynamic programming and set partitioning, and approximation algorithms based on construction, iterative improvement and incomplete optimization are presented.

232 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown how one can adjust the Newton-Raphson procedure to attain monotonicity by the use of simple bounds on the curvature of the objective function.
Abstract: It is desirable that a numerical maximization algorithm monotonically increase its objective function for the sake of its stability of convergence. It is here shown how one can adjust the Newton-Raphson procedure to attain monotonicity by the use of simple bounds on the curvature of the objective function. The fundamental tool in the analysis is the geometric insight one gains by interpreting quadratic-approximation algorithms as a form of area approximation. The statistical examples discussed include maximum likelihood estimation in mixture models, logistic regression and Cox's proportional hazards regression.
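The monotonicity idea can be illustrated in one dimension: if f'' ≥ −B everywhere, then f(x + d) ≥ f(x) + f'(x)d − (B/2)d², and stepping to the maximizer of that quadratic minorant, d = f'(x)/B, can never decrease f. A sketch under that assumption (sin with B = 1 as a toy objective is my choice, not an example from the paper):

```python
import math

def monotone_ascent(f, fprime, x0, curvature_bound, steps=100):
    """Maximize f by steps d = f'(x)/B; if f'' >= -B everywhere, each
    step maximizes a quadratic minorant of f, so f never decreases."""
    x = x0
    values = [f(x)]
    for _ in range(steps):
        x = x + fprime(x) / curvature_bound
        values.append(f(x))
    return x, values

# |sin''| <= 1, so B = 1 is a valid curvature bound
x, values = monotone_ascent(math.sin, math.cos, 0.0, 1.0)
```

Unlike raw Newton-Raphson, which can overshoot and reduce the objective when the local quadratic model is bad, this bounded step trades speed for guaranteed monotone progress.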

184 citations


Journal ArticleDOI
01 Apr 1988
TL;DR: An approximation algorithm for the shortest common superstring problem is developed, based on the Knuth-Morris-Pratt string-matching procedure and on the greedy heuristics for finding longest Hamiltonian paths in weighted graphs, and it seems that the lengths always satisfy k ≤ 2·k_min but proving this remains open.
Abstract: An approximation algorithm for the shortest common superstring problem is developed, based on the Knuth-Morris-Pratt string-matching procedure and on the greedy heuristics for finding longest Hamiltonian paths in weighted graphs. Given a set R of strings, the algorithm constructs a common superstring for R in O(mn) steps where m is the number of strings in R and n is the total length of these strings. The performance of the algorithm is analysed in terms of the compression in the common superstrings constructed, that is, in terms of n − k where k is the length of the obtained superstring. We show that (n − k) ≥ (1/2)(n − k_min) where k_min is the length of a shortest common superstring. Hence the compression achieved by the algorithm is at least half of the maximum compression. It also seems that the lengths always satisfy k ≤ 2·k_min but proving this remains open.
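The greedy idea (repeatedly merge the pair of strings with maximum suffix/prefix overlap) can be sketched as follows. This is a naive quadratic-overlap illustration, not the paper's KMP-based O(mn) implementation, and the function names are mine.

```python
def overlap(s, t):
    """Length of the longest suffix of s that is a prefix of t."""
    for k in range(min(len(s), len(t)), 0, -1):
        if s.endswith(t[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Repeatedly merge the ordered pair of strings with maximum overlap
    until one string, a common superstring, remains."""
    strings = list(strings)
    while len(strings) > 1:
        best = (-1, None, None)
        for i, s in enumerate(strings):
            for j, t in enumerate(strings):
                if i != j:
                    k = overlap(s, t)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = strings[i] + strings[j][k:]
        strings = [x for idx, x in enumerate(strings) if idx not in (i, j)]
        strings.append(merged)
    return strings[0]
```

Each merge saves `overlap` characters, which is exactly the compression n − k the abstract analyses.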

163 citations


Proceedings ArticleDOI
24 Oct 1988
TL;DR: The first provably good approximation algorithm is given and shown to run in polynomial time for the simplified case of a point mass under Newtonian mechanics together with velocity and acceleration bounds.
Abstract: The following problem is considered: given a robot system, find a minimal-time trajectory from a start position and velocity to a goal position and velocity, while avoiding obstacles and respecting dynamic constraints on velocity and acceleration. The simplified case of a point mass under Newtonian mechanics together with velocity and acceleration bounds is considered. The point must be flown from a start to a goal, amid 2-D or 3-D polyhedral obstacles. While exact solutions to this problem are not known, the first provably good approximation algorithm is given and shown to run in polynomial time.

160 citations


Proceedings ArticleDOI
07 Dec 1988
TL;DR: An SA algorithm is presented that is based on a simultaneous-perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures, indicating that the algorithm can be significantly more efficient than the standard finite-difference-based algorithms in large-dimensional problems.
Abstract: The author considers the problem of finding a root of the multivariate gradient equation that arises in function maximization. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general type due to Kiefer and Wolfowitz (1952) is appropriate for estimating the root. An SA algorithm is presented that is based on a simultaneous-perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures. Theory and numerical experience indicate that the algorithm can be significantly more efficient than the standard finite-difference-based algorithms in large-dimensional problems.
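The key economy of simultaneous perturbation is that one gradient estimate costs two function evaluations regardless of dimension, versus 2p for two-sided finite differences in p dimensions. A minimal sketch; the constant gains and the quadratic test objective are arbitrary choices for illustration, not the paper's recommended gain sequences.

```python
import random

def spsa_gradient(f, x, c=1e-3):
    """Two-measurement simultaneous-perturbation gradient estimate:
    perturb every coordinate at once by a random +/-1 direction."""
    delta = [random.choice((-1.0, 1.0)) for _ in x]
    plus = f([xi + c * d for xi, d in zip(x, delta)])
    minus = f([xi - c * d for xi, d in zip(x, delta)])
    return [(plus - minus) / (2.0 * c * d) for d in delta]

def spsa_minimize(f, x, steps=300, a=0.1):
    """Plain descent using the SPSA estimate in place of the true gradient."""
    for _ in range(steps):
        g = spsa_gradient(f, x)
        x = [xi - a * gi for xi, gi in zip(x, g)]
    return x
```

Each estimate is noisy, but its error is zero-mean across the random directions, so the averaged descent behaves like a stochastic gradient method.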

Book
31 May 1988
TL;DR: In this book, the perturbation method is developed for mathematical programming and optimal control, covering approximate decomposition and aggregation for finite-dimensional deterministic problems, singular programs, stochastic programming, and suboptimal regulator design.
Abstract: 1: The Perturbation Method in Mathematical Programming.- 1.1. Formulation and peculiarities of problems.- 1.2. Perturbations in linear programs.- 1.3 Nonlinear programs: perturbations in objective functions.- 1.4. Necessary and sufficient conditions for an extremum. Quasiconvex and quasilinear programs.- 1.5. Perturbations in nonconvex programs.- 2: Approximate Decomposition and Aggregation for Finite Dimensional Deterministic Problems.- 2.1. Perturbed decomposable structures and two-level planning.- 2.2. Aggregation of activities.- 2.3 Weakly controllable input-output characteristics.- 2.4. Input-output analysis.- 2.5. Aggregation in optimization models based on input-output analysis.- 2.6. Aggregation in the interregional transportation problem with regard to price scales.- 2.7. Optimization of discrete dynamic systems.- 2.8. Control of weakly dynamic systems under state variable constraints.- 3: Singular Programs.- 3.1. Singularity and regularization in quasiconvex problems.- 3.2. The auxiliary problem in the singular case.- 3.3. An approximate aggregation of Markov chains with incomes.- 3.4. An approximation algorithm for Markov programming.- 3.5. An iterative algorithm for suboptimization.- 3.6. An artificial introduction of singular perturbations in compact inverse methods.- 4: The Perturbation Method in Stochastic Programming.- 4.1. One- and two-stage problems.- 4.2. Optimal control problems with small random perturbations.- 4.3. Discrete dynamic systems with weak or aggregatable controls. An asymptotic stochastic maximum principle.- 4.4. Sliding planning and suboptimal decomposition of operative control in a production system.- 4.5. Sliding planning on an infinite horizon.- 4.6. Control of weakly dynamic systems under random disturbances.- 5: Suboptimal Linear Regulator Design.- 5.1. The LQ problem. Suboptimal decomposition.- 5.2. Loss of controllability, singularity, and suboptimal aggregation.- 5.3. 
Examples of suboptimal regulator synthesis.- 5.4. Control of oscillatory systems.- 5.5. LQG problems.- 6: Nonlinear Optimal Control Problems.- 6.1. The maximum principle and smooth solutions.- 6.2. The general terminal problem.- 6.3. Difference approximations.- 6.4. Weak control (nonuniqueness of the reduced solution).- 6.5. Aggregation in a singular perturbed problem.- Related Literature.

Journal ArticleDOI
TL;DR: It is shown that some well-known methods like first-fit-decreasing are P-complete, and it is hence very unlikely that they can be efficiently parallelized; an optimal NC algorithm is exhibited that achieves the same performance bound as does FFD.
Abstract: We study the parallel complexity of polynomial heuristics for the bin packing problem. We show that some well-known (and simple) methods like first-fit-decreasing are P-complete, and it is hence very unlikely that they can be efficiently parallelized. On the other hand, we exhibit an optimal NC algorithm that achieves the same performance bound as does FFD. Finally, we discuss parallelization of polynomial approximation algorithms for bin packing based on discretization.
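The (inherently sequential) FFD heuristic at issue is simple to state: sort items in decreasing size, then place each item in the first bin with room, opening a new bin when none fits. A minimal sketch:

```python
def first_fit_decreasing(items, capacity):
    """FFD bin packing: decreasing-size order, first bin that fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                       # no existing bin has room
            bins.append([item])
    return bins
```

It is exactly this step-by-step dependence of each placement on all earlier placements that makes the P-completeness result plausible.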

Book ChapterDOI
11 Feb 1988
TL;DR: It is shown that the k-colouring problem for the class of circle graphs is NP-complete for k at least four, and it is proven that the k-colouring problem for circle graphs is solvable in polynomial time if the degree is bounded.
Abstract: It is shown that the k-colouring problem for the class of circle graphs is NP-complete for k at least four. Until now this problem was still open. For circle graphs with maximum clique size k a 2k-colouring is always possible and can be found in O(n^2). This provides an approximation algorithm with a factor two. Further it is proven that the k-colouring problem for circle graphs is solvable in polynomial time if the degree is bounded. The complexity of the 3-colouring problem for circle graphs remains open.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: In this paper, a simple and efficient method for evaluating the performance of an algorithm, rendered as a directed acyclic graph, on any parallel computer is presented, where the crucial ingredient is an efficient approximation algorithm for a particular scheduling problem.
Abstract: A simple and efficient method for evaluating the performance of an algorithm, rendered as a directed acyclic graph, on any parallel computer is presented. The crucial ingredient is an efficient approximation algorithm for a particular scheduling problem. The only parameter of the parallel computer needed by our method is the message-to-instruction ratio τ. Although the method used in this paper does not take into account the number of processors available, its application to several common algorithms shows that it is surprisingly accurate.
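One crude way to see how τ enters such an evaluation is a critical-path bound in which every dependency edge is charged the communication cost τ. This is only a simplified stand-in for the paper's scheduling-based method (the DAG representation and function names are assumptions of this sketch):

```python
import functools

def completion_time(succ, cost, tau):
    """Longest path through a DAG (succ: node -> list of successors) when
    each dependency edge is pessimistically charged communication cost tau."""
    pred = {v: [] for v in succ}
    for u in succ:
        for v in succ[u]:
            pred[v].append(u)

    @functools.lru_cache(maxsize=None)
    def finish(v):
        start = max((finish(u) + tau for u in pred[v]), default=0.0)
        return start + cost[v]

    return max(finish(v) for v in succ)
```

With τ = 0 this reduces to the ordinary critical path; large τ penalizes deep dependency chains, which is the qualitative trade-off the message-to-instruction ratio captures.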

Journal ArticleDOI
TL;DR: It is proved that the worst-case performance of the Steiner tree approximation algorithm by Rayward-Smith (RS) is within twice optimal, and that two is the best possible bound: for any value less than two, there are instances on which RS does worse.

Proceedings ArticleDOI
01 Dec 1988
TL;DR: A gradient updating procedure is proposed that uses both "present" and "past" data to improve the convergence properties of a stochastic approximation algorithm, utilizing second derivatives estimated by perturbation analysis techniques.
Abstract: We propose a gradient updating procedure for using both "present" and "past" data to improve the convergence properties of a stochastic approximation algorithm. This procedure utilizes second derivatives estimated by perturbation analysis techniques. Experimental evidence provided by simulation runs appears to confirm the improvement in convergence rate gained by this modified algorithm.

Journal ArticleDOI
TL;DR: This paper solves the previously studied version as well as a more difficult version of this data-transfer scheduling problem by formulating them as a continuous form of the Hakimi-Kariv-de Werra generalization of the edge-coloring problem in bipartite graphs.
Abstract: The scheduling of the transfer of backlogged data in a network to minimize the finishing time is studied. The most complete treatment (of a version) of the problem is due to Gopal, Bongiovanni, Bonucelli, Tang, and Wong, who attacked the problem using the Birkhoff-von Neumann theorem. However, these authors do not provide a complexity analysis of their algorithm. In this paper we solve the version of these authors as well as a more difficult version of this scheduling problem by formulating them as a continuous form of the Hakimi-Kariv-de Werra generalization of the edge-coloring problem in bipartite graphs. This leads to polynomial time algorithms for these problems. Furthermore, our solution of the previously solved version has the desirable feature of having a tighter bound for the number of "communication modes" than the solution of the above authors. In the above scheduling problem, there may be a time associated with changing from one set of simultaneous data transfers (i.e., a communication mode) to another. It is shown that if the overall finishing time of our schedule includes these times, then even very simple instances of our problem become NP-hard. However, approximation algorithms are presented which produce solutions whose finishing times are at most twice the optimal. Finally, in the above scheduling problem the interruption (or pre-emption) of the performance of each task is permitted. Essentially, the same problem when pre-emption is not permitted was studied by Coffman, Garey, Johnson, and LaPaugh. The relation between the two problems is explored.

Proceedings ArticleDOI
06 Jan 1988
TL;DR: The algorithm is based upon a modified multi-dimensional search technique which extends the applicability of the basic technique to a wider class of problems and finds exact solutionsbased upon geometric properties of the problems as opposed to approximate solutions based upon existing numerical techniques.
Abstract: This paper presents algorithms for approximating a set of n points by a linear function, or a line, that minimizes the L1 norm of vertical and orthogonal distances. The algorithms find exact solutions based upon geometric properties of the problems as opposed to approximate solutions based upon existing numerical techniques. The algorithmic complexity of these problems appears not to have been investigated before our work in [9], although O(n^3) naive algorithms can be easily obtained based on some simple characteristics of optimal L1 solutions. In this paper, an O(n) optimal time algorithm for the weighted vertical L1 problem is presented. The algorithm is based upon a modified multi-dimensional search technique which extends the applicability of the basic technique to a wider class of problems. An O(n^1.5 log^2 n) algorithm is presented for the unweighted orthogonal problem, and an O(n^2) algorithm is presented for the weighted problem. An O(n log n) lower bound for the orthogonal L1 problem is shown under a certain model of computation. Also, the complexity of solving the orthogonal L1 problem is related to the construction of the k-belt of an arrangement of lines.
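For the unweighted vertical L1 problem, a naive O(n^3) method of the kind the abstract alludes to follows from the classical fact that some optimal least-absolute-deviations line passes through two of the data points. A sketch (assumes at least two distinct x-coordinates):

```python
def l1_vertical_fit(points):
    """Naive O(n^3) vertical-L1 line fit: some optimal line interpolates
    two of the points, so try every pair and keep the best.
    Returns (error, slope, intercept)."""
    best = (float("inf"), 0.0, 0.0)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            if x1 == x2:
                continue            # vertical candidate lines are skipped
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            err = sum(abs(y - (a * x + b)) for x, y in points)
            if err < best[0]:
                best = (err, a, b)
    return best
```

The paper's contribution is precisely to beat this brute force, down to O(n) for the weighted vertical case.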

Journal ArticleDOI
07 Dec 1988
TL;DR: A stochastic optimization algorithm based on the gradient method is presented that incorporates a new adaptive-precision technique; it can avoid increasing the estimation precision unnecessarily, yet retains its favorable convergence properties.
Abstract: The authors present a stochastic optimization algorithm based on the idea of the gradient method which incorporates a novel adaptive-precision technique. Unlike recent methods, the proposed algorithm adaptively selects the precision without any need for prior knowledge on the speed of convergence of the generated sequence. The algorithm can avoid increasing the estimation precision unnecessarily, yet it retains its favorable convergence properties. In fact, it tries to maintain a nice balance between the requirements for computational accuracy and those for computational expediency. The authors present two types of convergence results delineating under what assumptions various kinds of convergence can be obtained for the proposed algorithm. They also present a parallel version of the proposed algorithm.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: Two scaled-CRT algorithms are proposed that are based on the D/A CRT described by Soderstrand; the second approximates the 3-moduli RNS, and both reduce hardware complexity due to the embedded scaling.
Abstract: Two scaled-CRT algorithms are proposed in this paper that are based on the D/A CRT described by Soderstrand [6]. The first algorithm, the L-CRT, is a generalization of the D/A CRT except that it returns an integer. The second - and the most efficient - algorithm, the 2^(2n-1)-CRT, is an approximation for the 3-moduli {2^n - 1, 2^n, 2^n + 1} RNS. Two theorems quantifying the error bounds are presented and then verified through extensive experimental analysis. The most important consequence of the proposed scaled CRT algorithms is the reduction in hardware complexity due to the embedded scaling.
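For reference, exact (unscaled) CRT reconstruction for the 3-moduli RNS mentioned above looks as follows; the scaled algorithms in the paper approximate this reconstruction while embedding the scaling. (The helper below is generic textbook CRT, not the paper's L-CRT.)

```python
def crt(residues, moduli):
    """Exact Chinese-Remainder reconstruction for pairwise coprime moduli."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M
```

For n = 4 the moduli set is {15, 16, 17}, which is pairwise coprime, so every integer in [0, 4080) has a unique residue representation.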

Journal ArticleDOI
TL;DR: An approximation algorithm, based on a variant of Norton's theorem, is presented for analyzing closed exponential queueing networks of the product-form type in which some of the queues are finite.

Journal ArticleDOI
TL;DR: This article proves that every graph with a fixed bound on the vertex degree has a nested dissection order that achieves fill within a factor of O(log n) of minimum, although this does not lead to a polynomial-time approximation algorithm.

Proceedings ArticleDOI
01 Jun 1988
TL;DR: Though the optimization problem of using the minimum number of counting semaphore operations is shown to be NP-complete, this paper presents an approximation algorithm that is observed to be very close to optimal (within 0.5%) on small, randomly generated dependence graphs.
Abstract: This paper studies the optimization problem of enforcing a dependence graph with the minimum number of synchronization operations. For a dependence graph with N vertices, it is shown that binary semaphores may require O(N^2) operations, compared to O(N) operations for counting semaphores. Though the optimization problem of using the minimum number of counting semaphore operations is shown to be NP-complete, we present an approximation algorithm that is observed to be very close to optimal (within 0.5%) on small, randomly generated dependence graphs. A surprising property of the problem is that the inclusion (rather than removal) of transitive edges can actually help reduce the number of synchronization operations. We characterize a class of dependence graphs for which the approximation algorithm is optimal. This class includes forests of fan-in trees, fan-out trees and series-parallel graphs. The number of synchronization operations needed for binary and counting semaphores are compared for randomly generated dependence graphs, using an implementation of the approximation algorithm in LISP/VM. The experimental results show that the use of counting semaphores significantly reduces the total number of synchronization operations, compared to binary semaphores.

Proceedings ArticleDOI
07 Dec 1988
TL;DR: A multigrid version of the successive approximation algorithm is provided whose requirements are within a constant factor of the lower bounds when a certain mixing condition is satisfied; hence the algorithm is optimal.
Abstract: The application of multigrid methods to a class of discrete-time, continuous-state, discounted, infinite-horizon dynamic programming problems is studied. The authors analyze the computational complexity of computing the optimal cost function to within a desired accuracy ε, as a function of ε and the discount factor α. Using an adversary argument, they obtain lower bound results on the computational complexity for this class of problems. They also provide a multigrid version of the successive approximation algorithm whose requirements are (as a function of α and ε) within a constant factor from the lower bounds when a certain mixing condition is satisfied. Hence the algorithm is optimal.
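The successive approximation (value iteration) baseline that the multigrid method accelerates can be sketched for a finite-state discounted problem; the stopping rule below uses the standard contraction bound, so the returned values are within ε of optimal in sup-norm. This is the single-grid method, not the multigrid variant.

```python
def value_iteration(P, R, alpha, eps):
    """Successive approximation V <- T V for a discounted MDP.
    P[s][a][t] is the transition probability, R[s][a] the reward,
    alpha in (0, 1) the discount factor."""
    n = len(R)
    V = [0.0] * n
    while True:
        newV = [
            max(
                R[s][a] + alpha * sum(P[s][a][t] * V[t] for t in range(n))
                for a in range(len(R[s]))
            )
            for s in range(n)
        ]
        diff = max(abs(x - y) for x, y in zip(newV, V))
        V = newV
        # contraction bound: ||V - V*|| <= alpha/(1-alpha) * diff
        if diff <= eps * (1.0 - alpha) / (2.0 * alpha):
            return V
```

The iteration count grows like 1/(1 − α) times log(1/ε), which is the cost the paper's lower bounds and multigrid scheme are measured against.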

Journal ArticleDOI
TL;DR: This paper derives upper and lower bounds on the anomalous behavior of such algorithms and provides conditions under which a normally nonmonotonic algorithm becomes monotonic.

Journal Article
TL;DR: In this article, the authors improve some results given in [12] relating to approximate solutions for two-level optimization problems by considering an ε-regularized problem; they prove existence results for the solutions to the ε-regularized problem, whereas the initial two-level optimization problem may fail to have a solution.
Abstract: The purpose of this work is to improve some results given in [12], relating to approximate solutions for two-level optimization problems. By considering an ε-regularized problem, we get new properties, under convexity assumptions in the lower level problems. In particular, we prove existence results for the solutions to the ε-regularized problem, whereas the initial two-level optimization problem may fail to have a solution. Finally, as an example, we consider an approximation method with interior penalty functions.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: This paper presents a greedy mapping algorithm for hypercube interconnection structures, which utilizes the graph-oriented mapping strategy to map a communication graph to a hypercube.
Abstract: The mapping problem is the problem of implementing a computational task on a target architecture in order to maximize some performance metric. For a hypercube-interconnected multiprocessor, the mapping problem arises when the topology of a task graph is different from a hypercube. It is desirable to find a mapping of tasks to processors that minimizes average path length and hence interprocessor communication. The problem of finding an optimal mapping, however, has been proven to be NP-complete. Several different approaches have been taken to discover suitable mappings for a variety of target architectures. Since the mapping problem is NP-complete, approximation algorithms are used to find good mappings instead of optimal ones. Usually, greedy and/or local search algorithms are introduced to approximate the optimal solutions. This paper presents a greedy mapping algorithm for hypercube interconnection structures, which utilizes the graph-oriented mapping strategy to map a communication graph to a hypercube. The strategy is compared to previous strategies for attacking the mapping problem. A simulation is performed to estimate both the worst-case bounds for the greedy mapping strategy and the average performance.

Journal Article
TL;DR: A number of problems for which fast parallel approximation algorithms are known are surveyed, including the 0-1 knapsack problem, bin packing, the minimum makespan problem, the list scheduling problem, greedy scheduling, and the high density subgraph problem.

Proceedings ArticleDOI
Kolen
24 Jul 1988
TL;DR: The author proves that the learning problem in connectionist networks is NP-complete, i.e. no polynomial-time algorithm exists which will correctly modify connection weights of a neural network, and presents a method called the probabilistic approximation algorithm, which would allow network designers to build networks with a predetermined probability of a certain kind of error.
Abstract: The author proves that the learning problem in connectionist networks is NP-complete, i.e. no polynomial-time algorithm exists which will correctly modify connection weights of a neural network. Although no perfect algorithm exists, a method called the probabilistic approximation algorithm is presented. This method, which can be used with any learning rule, would allow network designers to build networks with a predetermined probability of a certain kind of error. He shows that for any learning rule that does not utilize probabilistic approximation, the probability of convergence will increase when the approximation method is employed.