
Showing papers on "Approximation algorithm published in 1986"


Journal ArticleDOI
TL;DR: An outer-approximation algorithm is presented for solving mixed-integer nonlinear programming problems of a particular class and a theoretical comparison with generalized Benders decomposition is presented on the lower bounds predicted by the relaxed master programs.
Abstract: An outer-approximation algorithm is presented for solving mixed-integer nonlinear programming problems of a particular class. Linearity of the integer (or discrete) variables, and convexity of the nonlinear functions involving continuous variables are the main features in the underlying mathematical structure. Based on principles of decomposition, outer-approximation and relaxation, the proposed algorithm effectively exploits the structure of the problems, and consists of solving an alternating finite sequence of nonlinear programming subproblems and relaxed versions of a mixed-integer linear master program. Convergence and optimality properties of the algorithm are presented, as well as a general discussion on its implementation. Numerical results are reported for several example problems to illustrate the potential of the proposed algorithm for programs in the class addressed in this paper. Finally, a theoretical comparison with generalized Benders decomposition is presented on the lower bounds predicted by the relaxed master programs.

1,258 citations


Journal ArticleDOI
TL;DR: An approximate fuzzy c-means (AFCM) implementation based upon replacing the necessary "exact" variates in the FCM equation with integer-valued or real-valued estimates enables AFCM to exploit a lookup table approach for computing Euclidean distances and for exponentiation.
Abstract: This paper reports the results of a numerical comparison of two versions of the fuzzy c-means (FCM) clustering algorithms. In particular, we propose and exemplify an approximate fuzzy c-means (AFCM) implementation based upon replacing the necessary "exact" variates in the FCM equation with integer-valued or real-valued estimates. This approximation enables AFCM to exploit a lookup table approach for computing Euclidean distances and for exponentiation. The net effect of the proposed implementation is that CPU time during each iteration is reduced to approximately one sixth of the time required for a literal implementation of the algorithm, while apparently preserving the overall quality of terminal clusters produced. The two implementations are tested numerically on a nine-band digital image, and a pseudocode subroutine is given for the convenience of applications-oriented readers. Our results suggest that AFCM may be used to accelerate FCM processing whenever the feature space is composed of tuples having a finite number of integer-valued coordinates.
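
For readers who want the flavor of the lookup-table idea, the sketch below is our own illustration (names such as `SQ_DIFF` are invented, and this is not the authors' AFCM code): for integer-valued features, every possible squared coordinate difference can be precomputed once, so the distance loop needs no multiplications.

```python
# Illustrative sketch of the lookup-table idea behind AFCM (our code, not
# the paper's): for integer-valued features such as 8-bit image bands,
# precompute every possible squared coordinate difference once.
SQ_DIFF = [d * d for d in range(256)]

def table_sq_distance(u, v):
    """Squared Euclidean distance via table lookups instead of multiplies."""
    return sum(SQ_DIFF[abs(a - b)] for a, b in zip(u, v))

def exact_sq_distance(u, v):
    """Literal computation, for comparison."""
    return sum((a - b) ** 2 for a, b in zip(u, v))
```

For integer inputs the two functions agree exactly; only the cost model differs, which is where the roughly sixfold per-iteration speedup came from on 1986 hardware.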

630 citations


Journal ArticleDOI
TL;DR: It is shown that an upper bound for the convergence time is the classical mean-square-error time constant, and examples are given to demonstrate that for broad signal classes the convergence time is reduced by a factor of up to 50 in noise canceller applications for the proper selection of variable step parameters.
Abstract: In recent work, a new version of an LMS algorithm has been developed which implements a variable feedback constant μ for each weight of an adaptive transversal filter. This technique has been called the VS (variable step) algorithm and is an extension of earlier ideas in stochastic approximation for varying the step size in the method of steepest descents. The method may be implemented in hardware with only modest increases in complexity (approximately 15 percent) over the LMS Widrow-Hoff algorithm. It is shown that an upper bound for the convergence time is the classical mean-square-error time constant, and examples are given to demonstrate that for broad signal classes (both narrow-band and broad-band) the convergence time is reduced by a factor of up to 50 in noise canceller applications for the proper selection of variable step parameters. Finally, the VS algorithm is applied to an IIR filter and simulations are presented for applications of the VS FIR and IIR adaptive filters.
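
A minimal sketch of the per-weight variable-step idea, under our own simplifying assumptions (the constants, the sign-persistence rule, and the function name are illustrative inventions, not the paper's exact VS algorithm): each tap keeps its own step size, grown while its gradient estimate keeps the same sign and shrunk when the sign alternates.

```python
def vs_lms(x, d, n_taps=2, mu0=0.05, mu_min=0.01, mu_max=0.2,
           grow=1.5, shrink=0.5):
    """Sketch of a variable-step LMS: each tap k has its own step size
    mu[k], grown while its gradient estimate keeps the same sign and
    shrunk when the sign alternates. Constants and rules are illustrative
    simplifications, not the paper's exact VS algorithm."""
    w = [0.0] * n_taps
    mu = [mu0] * n_taps
    last_sign = [0] * n_taps
    for n in range(n_taps - 1, len(x)):
        u = [x[n - k] for k in range(n_taps)]            # tap-input vector
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))  # output error
        for k in range(n_taps):
            g = e * u[k]                             # per-tap gradient estimate
            s = (g > 0) - (g < 0)
            if s != 0 and s == last_sign[k]:
                mu[k] = min(mu[k] * grow, mu_max)    # persistent sign: speed up
            elif s != 0 and last_sign[k] != 0:
                mu[k] = max(mu[k] * shrink, mu_min)  # alternating sign: slow down
            last_sign[k] = s
            w[k] += mu[k] * g                        # LMS weight update
    return w
```

Identifying the noise-free 2-tap system d[n] = x[n] - 0.5·x[n-1] from a white input drives w toward [1.0, -0.5].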

398 citations


Journal ArticleDOI
TL;DR: A powerful, and yet simple, technique for devising approximation algorithms for a wide variety of NP-complete problems in routing, location, and communication network design is investigated.
Abstract: In this paper a powerful, and yet simple, technique for devising approximation algorithms for a wide variety of NP-complete problems in routing, location, and communication network design is investigated. Each of the algorithms presented here delivers an approximate solution guaranteed to be within a constant factor of the optimal solution. In addition, for several of these problems we can show that unless P = NP, there does not exist a polynomial-time algorithm that has a better performance guarantee.
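
As a generic illustration of a constant-factor guarantee of this kind (this is the classic maximal-matching bound for vertex cover, shown for flavor; it is not claimed to be one of the paper's algorithms):

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum vertex cover: greedily build a
    maximal matching and take both endpoints of every matched edge.
    Illustrative of a constant-factor performance guarantee."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)   # edge joins the matching:
            cover.add(v)   # both endpoints enter the cover
    return cover
```

The cover is feasible, and since the matched edges are vertex-disjoint, any optimal cover must contain at least one endpoint of each, so the output has at most twice the optimal size.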

371 citations


Journal ArticleDOI
TL;DR: This work presents an algorithm that finds a piecewise linear curve with the minimal number of segments required to approximate a curve within a uniform error with fixed initial and final points.
Abstract: Two-dimensional digital curves are often uniformly approximated by polygons or piecewise linear curves. Several algorithms have been proposed in the literature to find such curves. We present an algorithm that finds a piecewise linear curve with the minimal number of segments required to approximate a curve within a uniform error with fixed initial and final points. We compare our optimal algorithm to several suboptimal algorithms with respect to the number of linear segments required in the approximation and the execution time of the algorithm.
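
A simple suboptimal baseline of the kind such optimal algorithms are compared against (our sketch, with invented names; the paper's algorithm instead guarantees the minimal number of segments): greedily extend each segment as far as the uniform error tolerance allows.

```python
import math

def max_chord_dist(pts, i, j):
    """Maximum perpendicular distance from pts[i..j] to the chord pts[i]-pts[j]."""
    (x1, y1), (x2, y2) = pts[i], pts[j]
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    return max(abs(dy * (x - x1) - dx * (y - y1)) / length
               for x, y in pts[i:j + 1])

def greedy_polyline(pts, eps):
    """One-pass greedy approximation with fixed endpoints: extend each
    segment while the uniform error stays within eps. A suboptimal
    baseline, not the paper's minimal-segment algorithm."""
    knots = [0]
    i = 0
    while i < len(pts) - 1:
        j = i + 1
        while j + 1 < len(pts) and max_chord_dist(pts, i, j + 1) <= eps:
            j += 1
        knots.append(j)
        i = j
    return knots
```

Greedy choices like this can leave more segments than necessary on some curves, which is exactly the gap the optimal algorithm closes.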

293 citations


Proceedings ArticleDOI
Prabhakar Raghavan
27 Oct 1986
TL;DR: A methodology for converting a probabilistic existence proof to a deterministic approximation algorithm that mimics the existence proof in a very strong sense is developed.
Abstract: We consider the problem of approximating an integer program by first solving its relaxation linear program and "rounding" the resulting solution. For several packing problems, we prove probabilistically that there exists an integer solution close to the optimum of the relaxation solution. We then develop a methodology for converting such a probabilistic existence proof to a deterministic approximation algorithm. The methodology mimics the existence proof in a very strong sense.
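
The probabilistic-proof-to-deterministic-algorithm conversion can be illustrated on a toy coverage objective (our example, not the paper's packing programs): round a fractional solution coordinate by coordinate, always choosing the value that does not decrease the conditional expectation, so the final integral solution does at least as well as the random one would in expectation.

```python
def expected_coverage(sets, x, n_elems):
    """E[# elements covered] when set i is picked independently w.p. x[i]."""
    exp = 0.0
    for e in range(n_elems):
        miss = 1.0
        for i, s in enumerate(sets):
            if e in s:
                miss *= 1.0 - x[i]   # probability no chosen set contains e
        exp += 1.0 - miss
    return exp

def derandomized_rounding(sets, x, n_elems):
    """Method of conditional expectations: fix each coordinate to 0 or 1,
    never letting the conditional expectation drop. A toy illustration of
    converting a probabilistic existence proof into a deterministic
    algorithm; names and the coverage objective are ours."""
    x = list(x)
    for i in range(len(x)):
        x1 = x[:i] + [1.0] + x[i + 1:]
        x0 = x[:i] + [0.0] + x[i + 1:]
        if expected_coverage(sets, x1, n_elems) >= expected_coverage(sets, x0, n_elems):
            x = x1
        else:
            x = x0
    return [int(v) for v in x]
```

Since one of the two fixings always attains at least the current conditional expectation, the deterministic output covers at least as many elements as the initial expectation promised.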

289 citations


Proceedings Article
11 Aug 1986
TL;DR: It is shown that finding a shortest solution for the extended puzzle is NP-hard and thus computationally infeasible, and an approximation algorithm for transforming boards is presented that is guaranteed to use no more than c·L(SP) moves.
Abstract: The 8-puzzle and the 15-puzzle have been used for many years as a domain for testing heuristic search techniques. From experience it is known that these puzzles are "difficult" and therefore useful for testing search techniques. In this paper we give strong evidence that these puzzles are indeed good test problems. We extend the 8-puzzle and the 15-puzzle to an n×n board and show that finding a shortest solution for the extended puzzle is NP-hard and thus computationally infeasible. We also present an approximation algorithm for transforming boards that is guaranteed to use no more than c·L(SP) moves, where L(SP) is the length of the shortest solution and c is a constant which is independent of the given boards and their size n.

254 citations


Journal ArticleDOI
TL;DR: Three efficient approximation algorithms are described and analyzed that guarantee asymptotic worst-case performance bounds of 2, 3/2, and 4/3 for bin packing with a fixed collection of allowed bin sizes.
Abstract: In the classical bin packing problem one seeks to pack a list of pieces in the minimum space using unit capacity bins. This paper addresses the more general problem in which a fixed collection of bin sizes is allowed. Three efficient approximation algorithms are described and analyzed. They guarantee asymptotic worst-case performance bounds of 2, 3/2, and 4/3.
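
A first-fit-decreasing sketch adapted to a collection of bin sizes (our illustrative heuristic, with invented conventions; it is not one of the paper's three analyzed algorithms and carries no performance guarantee):

```python
def ffd_variable_bins(items, bin_sizes):
    """First-fit decreasing with a collection of allowed bin sizes: sort
    items decreasingly, put each into the first open bin with room, and
    open the smallest admissible bin size when none fits. Assumes the
    largest bin size can hold any single item. Illustrative only."""
    bins = []  # each bin is [capacity, remaining space]
    for item in sorted(items, reverse=True):
        for b in bins:
            if b[1] >= item:
                b[1] -= item          # first open bin with room
                break
        else:
            cap = min(s for s in bin_sizes if s >= item)  # smallest fitting size
            bins.append([cap, cap - item])
    return bins
```

With variable bin sizes the objective is the total capacity opened rather than the bin count, which is why opening the smallest admissible size is the natural greedy move.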

206 citations


Proceedings ArticleDOI
27 Oct 1986
TL;DR: A novel scheduling problem is defined; it is solved by repeated, rapid, approximate reschedulings, leading to the first optimal PRAM algorithm for list ranking, which runs in logarithmic time.
Abstract: We study two parallel scheduling problems and their use in designing parallel algorithms. First, we define a novel scheduling problem; it is solved by repeated, rapid, approximate reschedulings. This leads to the first optimal PRAM algorithm for list ranking, which runs in logarithmic time. Our second scheduling result is for computing prefix sums of log n bit numbers. We give an optimal parallel algorithm for the problem which runs in sublogarithmic time. These two scheduling results together lead to logarithmic-time PRAM algorithms for the connectivity, biconnectivity and minimum spanning tree problems. The connectivity and biconnectivity algorithms are optimal unless m = o(n log* n), in graphs of n vertices and m edges.
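
For context, the textbook pointer-jumping routine that such optimal list-ranking algorithms improve upon, simulated sequentially here (this is the standard non-optimal O(log n)-round method, not the paper's scheduling-based algorithm):

```python
def list_rank(succ):
    """Rank each node (distance to the end of a linked list) by simulating
    pointer jumping: O(log n) synchronous rounds, all nodes updated in
    lockstep as a PRAM would. succ[i] is node i's successor, with the tail
    pointing to itself. Standard non-optimal routine, shown for contrast."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    succ = list(succ)
    while any(succ[i] != succ[succ[i]] for i in range(n)):
        # one synchronous round: read everything, then write everything
        new_rank = [rank[i] + rank[succ[i]] for i in range(n)]
        new_succ = [succ[succ[i]] for i in range(n)]
        rank, succ = new_rank, new_succ
    return rank
```

Pointer jumping uses n processors for O(log n) rounds, i.e. O(n log n) total work; the point of an *optimal* list-ranking algorithm is to bring the total work down to O(n) while staying in logarithmic time.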

196 citations


Journal ArticleDOI
H. Wang
Abstract: Recursive estimation and time series analysis.

188 citations


Journal ArticleDOI
TL;DR: An approximation procedure is developed for the analysis of tandem configurations consisting of single server finite queues linked in series and gives results in the form of the marginal probability distribution of the number of units in each queue of the tandem configuration.
Abstract: An approximation procedure is developed for the analysis of tandem configurations consisting of single server finite queues linked in series. External arrivals occur at the first queue which may be either finite or infinite. Departures from the queuing network may only occur from the last queue. All service times and interarrival times are assumed to be exponentially distributed. The approximation algorithm gives results in the form of the marginal probability distribution of the number of units in each queue of the tandem configuration. Other performance measures, such as mean queue-length and throughput, can be readily obtained. The approximation procedure was validated using exact and simulation data. The approximate results seem to have an acceptable error level.
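
The single-queue building block such decompositions manipulate is the M/M/1/K queue, whose exact marginal distribution has a standard closed form (this is the textbook formula, not the paper's full approximation procedure):

```python
def mm1k_distribution(lam, mu, K):
    """Stationary distribution of the number in an M/M/1/K queue
    (K = capacity including the customer in service), for arrival rate
    lam and service rate mu. Standard closed form: p_n proportional to
    rho^n, truncated at K and normalized."""
    rho = lam / mu
    if rho == 1.0:
        return [1.0 / (K + 1)] * (K + 1)   # uniform in the balanced case
    norm = (1 - rho ** (K + 1)) / (1 - rho)
    return [rho ** n / norm for n in range(K + 1)]
```

A decomposition approximation analyzes each finite queue of the tandem with a formula like this, using effective arrival and service rates that account for blocking by the downstream queue.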

Journal ArticleDOI
TL;DR: In this correspondence, anomalies of parallel branch-and-bound algorithms using the same search strategy as the corresponding serial algorithms are studied and sufficient conditions to guarantee no degradation in performance and necessary conditions for allowing parallelism to have a speedup greater than the number of processors are presented.
Abstract: A general technique that can be used to solve a wide variety of discrete optimization problems is the branch-and-bound algorithm. We have adapted and extended branch-and-bound algorithms for parallel processing. The computational efficiency of these algorithms depends on the allowance function, the data structure, and the search strategies. Anomalies owing to parallelism may occur. In this correspondence, anomalies of parallel branch-and-bound algorithms using the same search strategy as the corresponding serial algorithms are studied. Sufficient conditions to guarantee no degradation in performance due to parallelism and necessary conditions for allowing parallelism to have a speedup greater than the number of processors are presented.

Journal ArticleDOI
TL;DR: A fast heuristic for this important class of problems is presented and its worst-case performance is analyzed: the ratio of the heuristic value to the optimum does not exceed the maximum row sum of the matrix A.

Journal ArticleDOI
TL;DR: The proposed algorithm is based on an adaptive segmentation of the original signal into adjacent regions and on the approximation of the signal in each region by a two-dimensional polynomial function.

Journal ArticleDOI
TL;DR: It is demonstrated that these approximation formulations require a significantly smaller number of calculations than the original formulation, and that the relative error bounds are satisfactory for practical purposes.
Abstract: We consider two approximation formulations for the single-product capacitated lot size problem. One formulation restricts the number of production policies and the other rounds demands up to multiples of a constant. After briefly reviewing the literature within a new framework, we discuss the relations between these approximation formulations. Next, we provide relative error bounds and algorithms for solving the approximation problems. We demonstrate that these approximation formulations require a significantly smaller number of calculations than the original formulation, and that the relative error bounds are satisfactory for practical purposes.

Journal ArticleDOI
TL;DR: A new algorithm for generating the most probable states of a network is presented, which is a major improvement over the previous one in terms of efficiency and flexibility.
Abstract: A new approach for analyzing the performance of communication networks with unreliable components was given in a recent paper [2]. An algorithm was developed to generate the most probable states of a network, and an analysis of those states gave a good approximation of the network performance. In this paper, we present a new algorithm for generating the most probable states. This new algorithm is a major improvement over the previous one in terms of efficiency and flexibility.
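
For very small networks the most probable states can simply be enumerated (a brute-force baseline of our own; the point of the paper's algorithm is to generate the top states efficiently without touching all 2^n of them):

```python
from itertools import product

def most_probable_states(p_up, k):
    """Return the k most probable up/down states of independent components,
    component i being up with probability p_up[i]. Brute-force enumeration
    over all 2^n states, usable only for small n; shown to fix the problem
    statement, not as the paper's generation algorithm."""
    states = []
    for s in product((1, 0), repeat=len(p_up)):
        pr = 1.0
        for up, p in zip(s, p_up):
            pr *= p if up else 1.0 - p     # independent component model
        states.append((pr, s))
    states.sort(key=lambda t: -t[0])       # most probable first
    return states[:k]
```

Performance analysis then evaluates the network only on these few high-probability states, whose total probability bounds the error of the approximation.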

Journal ArticleDOI
TL;DR: This paper analyzes the performance of the two efficient algorithms for time delay of arrival estimation (TDE) between two signals, and it is shown that these estimators are unbiased, and explicit expressions for the TDE mean-square error (MSE) are presented.
Abstract: This paper analyzes the performance of the two efficient algorithms (presented by Stein and Cabot) for time delay of arrival estimation (TDE) between two signals. It is shown that these estimators are unbiased, and explicit expressions for the TDE mean-square error (MSE) are presented. It is also shown how to improve the performance of these algorithms by combining them with generalized cross-correlation (GCC) methods. In the analysis, we only assume stationary signals which are not necessarily Gaussian. The first algorithm (Stein) is indirect and uses the symmetry of the cross-correlation function between the two signals. It is shown here that the TDE-MSE depends on the unknown delay. The performance of this algorithm can be improved by combining it with the GCC method, and the pertinent TDE-MSE expressions are presented. The second algorithm (Cabot) is based on finding the zero of the cross-correlation function between one signal and the Hilbert transform of the other signal. Here, too, the pertinent TDE-MSE expressions are presented. This algorithm is also combined with the GCC method, and the optimal weight function for which the TDE-MSE expression coincides with the Cramer-Rao lower bound is found.
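
The plain cross-correlation estimator that such TDE methods build on looks like this (a generic baseline of ours; the Stein and Cabot estimators and the GCC weighting analyzed in the paper are not reproduced here):

```python
def estimate_delay(x, y, max_lag):
    """Estimate the delay of y relative to x as the lag maximizing the
    cross-correlation sum over the overlapping samples. If y[n] = x[n - D]
    the maximizing lag is D. Plain correlator baseline only."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(x[n] * y[n + lag]
                  for n in range(len(x))
                  if 0 <= n + lag < len(y))   # correlate over the overlap
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag
```

The analyzed algorithms avoid this exhaustive lag search: one exploits the symmetry of the cross-correlation function, the other the zero crossing of the correlation with a Hilbert-transformed signal.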

Journal ArticleDOI
TL;DR: This framework is able to use powerful combinatorial theory to obtain strong bounds for network reliability which can be computed by efficient algorithms, and is demonstrated on a small example.
Abstract: This paper presents criteria for acceptable schemes to approximate system reliability and investigates such schemes for a special class of network reliability problems. In this framework, we are able to use powerful combinatorial theory to obtain strong bounds for network reliability which can be computed by efficient algorithms. We demonstrate these bounds on a small example, and give some computational experience.

Journal ArticleDOI
TL;DR: A detailed Monte Carlo analysis gives an insight into the practical robustness of these procedures indicating the most reliable ones.

Proceedings ArticleDOI
27 Oct 1986
TL;DR: This work gives an O(n log n) algorithm for the All-Nearest-Neighbors problem, for fixed dimension k and fixed metric Lq, and shows that the running time of this algorithm is optimal up to a constant.
Abstract: Given a set V of n points in k-dimensional space, and an Lq-metric (Minkowski metric), the All-Nearest-Neighbors problem is defined as follows: For each point p in V, find all those points in V-{p} that are closest to p under the distance metric Lq. We give an O(n log n) algorithm for the All-Nearest-Neighbors problem, for fixed dimension k and fixed metric Lq. Since there is an Ω(n log n) lower bound, in the algebraic decision tree model of computation, on the time complexity of any algorithm that solves the All-Nearest-Neighbors problem (for k = 1), the running time of our algorithm is optimal up to a constant.
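
The problem statement is easy to pin down with the O(n²) brute force that the O(n log n) algorithm improves on (illustrative baseline only, under the Euclidean L2 metric):

```python
def all_nearest_neighbors(points):
    """O(n^2) brute-force all-nearest-neighbors under the L2 metric:
    for each point, the indices of every other point at minimum distance
    (ties kept, matching the 'find all those points' statement).
    Baseline only; the paper achieves O(n log n)."""
    result = []
    for i, p in enumerate(points):
        best, nearest = None, []
        for j, q in enumerate(points):
            if i == j:
                continue
            d = sum((a - b) ** 2 for a, b in zip(p, q))  # squared distance
            if best is None or d < best:
                best, nearest = d, [j]
            elif d == best:
                nearest.append(j)
        result.append(nearest)
    return result
```

Squared distances suffice for comparisons, avoiding square roots; ties are all reported, as the problem definition requires.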

Proceedings ArticleDOI
01 May 1986
TL;DR: In this paper, an approximation algorithm for two-dimensional (2-D) signals, e.g. images, is presented by partitioning the original signal into adjacent regions with each region being approximated in the least square sense by a 2-D analytical function.
Abstract: An approximation algorithm for two-dimensional (2-D) signals, e.g. images, is presented. This approximation is obtained by partitioning the original signal into adjacent regions with each region being approximated in the least-squares sense by a 2-D analytical function. The segmentation procedure is controlled iteratively to ensure at each step the best possible quality between the original image and the segmented one. The segmentation is based on two successive steps: splitting the original picture into adjacent squares of different size, then merging them in an optimal way into the final region configuration. Some results are presented when the approximation is performed by polynomial functions.

Journal ArticleDOI
TL;DR: A class of heuristics introduced by Cuthill and McKee in 1969, and referred to here as the level algorithms, are the basis for bandwidth minimization routines that have been widely used for over a decade.
Abstract: Most research in algorithm design relies on worst-case analysis for performance comparisons. Unfortunately, worst-case analysis does not always provide an adequate measure of an algorithm’s effectiveness. This is particularly true in the case of heuristic algorithms for hard combinatorial problems. In such cases, analysis of the probable performance can yield more meaningful results and can provide insight leading to better algorithms. The problem of minimizing the bandwidth of a sparse symmetric matrix by performing simultaneous row and column permutations is an example of a problem for which there are well-known heuristics whose practical success has lacked a convincing analytical explanation. A class of heuristics introduced by Cuthill and McKee in 1969, and referred to here as the level algorithms, is the basis for bandwidth minimization routines that have been widely used for over a decade. At the same time, it is easy to construct examples showing that the level algorithms can produce solutions t...

Proceedings ArticleDOI
27 Oct 1986
TL;DR: The lower bound improves on Häggkvist-Hell's lower bound, and it is shown that "approximate sorting" in time 1 requires asymptotically more than nlogn processors.
Abstract: The time complexity of sorting n elements using p ≥ n processors on Valiant's parallel comparison tree model is considered. The following results are obtained. 1. We show that this time complexity is Θ(log n / log(1+p/n)). This complements the AKS sorting network in settling the wider problem of comparison sort of n elements by p processors, where the problem for p ≤ n was resolved. To prove the lower bound, we show that to achieve time k ≤ log n, we need Ω(k·n^(1+1/k)) comparisons. Haggkvist and Hell proved a similar result only for fixed k. 2. For every fixed time k, we show that: (a) Ω(n^(1+1/k) log^(1/k) n) comparisons are required (O(n^(1+1/k) log n) are known to be sufficient in this case), and (b) there exists a randomized algorithm for comparison sort in time k with an expected number of O(n^(1+1/k)) comparisons. This implies that for every fixed k, any deterministic comparison sort algorithm must be asymptotically worse than this randomized algorithm. The lower bound improves on Haggkvist and Hell's lower bound. 3. We show that "approximate sorting" in time 1 requires asymptotically more than n log n processors. This settles a problem raised by M. Rabin.

Book ChapterDOI
17 Jun 1986
TL;DR: The most important heuristics for Steiner's problem in graphs are discussed; none is superior to the others in speed or solution quality, and a new algorithm is presented and analyzed that outperforms all of them in both respects.
Abstract: Steiner's problem in graphs lies at the very heart of many optimization problems. As the problem is NP-hard, fast and good approximation algorithms are being sought. We discuss some of the most important heuristics. None of these heuristics is superior to any other, neither in terms of speed nor in terms of the quality of the approximate solution. We present and analyze a new algorithm outperforming all of these heuristics in both aspects.

01 Jan 1986
TL;DR: One of the main open questions for the Preparata, Metze and Chien model is resolved by presenting the first polynomial time algorithm for the diagnosability problem, and a new time complexity bound is presented for the diagnosis problem, where t is the maximum number of faulty units.
Abstract: It is now possible to design and build systems which incorporate a large number of processing elements. For this reason, fault-diagnosis at the system level, a research area pioneered by the work of Preparata, Metze, and Chien, is of increasing importance. The formalization of their model utilizes directed graphs together with labelings on edges and vertices. The two central problems introduced by the model are called the diagnosis and diagnosability problems. In the diagnosis problem an algorithm must identify the faulty units of a system based on test results. In the diagnosability problem an algorithm must determine the maximum number of faulty units a system can contain and still be guaranteed capable of successfully testing itself. We resolve one of the main open questions for this model by presenting the first polynomial time algorithm for the diagnosability problem. The solution uses network flow techniques and runs in $O(|E||V|^{3/2})$ time. We also present a new time complexity bound of $O(\min(t|E|, t^3 + |E|))$ for the diagnosis problem, where t is the maximum number of faulty units. In addition, we examine the major generalizations of the Preparata, Metze and Chien model. Maheshwari and Hakimi introduced probabilistic and weighted models. Friedman introduced a model with a measure called t/s -diagnosability. We present several new polynomial time algorithms, NP-hardness results and an approximation algorithm for these models.

Proceedings ArticleDOI
27 Oct 1986
TL;DR: In this article, it is shown that an approximate version of this problem can be solved deterministically in O(K + log n) time on any expander graph with sufficiently large expansion factor.
Abstract: A solution to the following fundamental communication problem is presented. Suppose that n tokens are arbitrarily distributed among n processors with no processor having more than K tokens. The problem is to specify a bounded-degree network topology and an algorithm that can distribute the tokens uniformly among the processors. The first result is a tight $\Theta (K + \log n)$ bound on the complexity of this problem. It is also shown that an approximate version of this problem can be solved deterministically in $O(K + \log n)$ time on any expander graph with sufficiently large expansion factor. In the second part of this work, it is shown how to extend the solution for the approximate distribution problem to an optimal probabilistic algorithm for the exact distribution problem on a similar class of expander graphs. Note that communication through an expander graph is a necessary condition for an $O(K + \log n)$ solution of the problem. These results have direct applications to the efficient implementation of many...

Journal ArticleDOI
TL;DR: Reconstruction algorithms of the high-order perturbation approximation under Born and Rytov transforms according to the inverse scattering perturbation theory are provided, and their physical interpretations as multiscatterer and multi-inverse-interaction models are presented.
Abstract: Reconstruction algorithms of the high-order perturbation approximation under Born and Rytov transforms according to the inverse scattering perturbation theory are provided, and their physical interpretations as multiscatterer and multi-inverse-interaction models are presented. Furthermore, a new perturbation theory called relaxation perturbation theory is suggested. All these aim to solve the moderately or strongly scattering problems occurring in ultrasonic diffraction tomography and to improve tomograms’ resolution. All these reconstruction algorithms are similar in form to those in the conventional first-order Born or Rytov approximation.

Journal ArticleDOI
TL;DR: An algorithm for computing a set of knots which is optimal for the segment approximation problem is developed and yields a sequence of real numbers which converges to the minimal deviation and a corresponding sequence of knot sets.
Abstract: An algorithm for computing a set of knots which is optimal for the segment approximation problem is developed. The method yields a sequence of real numbers which converges to the minimal deviation and a corresponding sequence of knot sets. This sequence splits into at most two subsequences which converge to leveled sets of knots. Such knot sets are optimal. Numerical results concerning piecewise polynomial approximation are given.

Proceedings ArticleDOI
05 Feb 1986
TL;DR: A time-efficient algorithm to approximate the values to be estimated is proposed, and it appears possible to find restrictive upper and lower limits for these estimation problems in many practical situations.
Abstract: In optimizing database queries one inevitably encounters two important estimation problems. The first problem is to estimate the number of page accesses when selecting k tuples from a relation. The other problem is to estimate the number of different equijoin values remaining after selecting k tuples from a relation. The estimated values strongly depend on how the tuples are distributed over the pages (first problem) and how the equijoin values are distributed over the relation (second problem). It appears to be possible to find restrictive upper and lower limits for these problems in many practical situations. Results derived elsewhere appear to fall significantly outside these limits. Finally, a time-efficient algorithm to approximate the values to be estimated is proposed.
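
For the first problem under uniform placement, the classical closed form is Yao's formula, sketched below (the paper's contribution concerns bounds when the distribution is *not* uniform; this standard baseline is not the paper's algorithm):

```python
def yao_pages(n, m, k):
    """Yao's formula: expected number of pages touched when k of n tuples
    are selected at random and tuples are packed uniformly, n/m per page.
    A fixed page is missed iff all k selected tuples fall among the
    n - n/m tuples stored elsewhere."""
    tuples_per_page = n / m
    miss = 1.0   # probability a fixed page contributes none of the k tuples
    for i in range(k):
        if n - tuples_per_page - i <= 0:
            miss = 0.0   # more selections than tuples off the page
            break
        miss *= (n - tuples_per_page - i) / (n - i)
    return m * (1.0 - miss)
```

Sanity checks: selecting one tuple touches exactly one page in expectation, and selecting all n tuples touches all m pages.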

01 Jan 1986
TL;DR: Numerical experience with this algorithm shows that the approximate results have an acceptable error level, while its minimal time and space requirements make implementation easy and software development feasible.
Abstract: A computationally efficient algorithm for approximately analyzing open queueing networks with blocking is presented. This algorithm is based on an earlier procedure proposed by Altiok and Perros [3]. However, unlike this earlier procedure, the proposed new algorithm has minimal time and space requirements. It permits, therefore, the analysis of large and complicated networks. It also makes implementation easy, and software development feasible. Numerical experience with this algorithm shows that the approximate results seem to have an acceptable error level.