
Showing papers on "Approximation algorithm published in 1995"


Journal ArticleDOI
TL;DR: This algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.
Abstract: We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of 1/2 for MAX CUT and 3/4 for MAX 2SAT. Slight extensions of our analysis lead to a .79607-approximation algorithm for the maximum directed cut problem (MAX DICUT) and a .758-approximation algorithm for MAX SAT, where the best previously known approximation algorithms had performance guarantees of 1/4 and 3/4, respectively. Our algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.
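To make the rounding step concrete, here is a minimal Python sketch of the random-hyperplane rounding applied to the relaxation's solution. It assumes the semidefinite program has already been solved (e.g. with an off-the-shelf SDP solver) to produce one unit vector per vertex; below, random unit vectors merely stand in for that output, so the numbers are illustrative, not the algorithm's guarantee.

```python
import numpy as np

def hyperplane_round(vectors, weights, trials=100, seed=0):
    """Round unit vectors (one per vertex, e.g. from the MAX CUT SDP
    relaxation) into a cut using uniformly random hyperplanes; keep the
    best of `trials` attempts.

    vectors : (n, d) array whose rows are unit vectors
    weights : dict mapping an edge (i, j) to its nonnegative weight
    """
    rng = np.random.default_rng(seed)
    best_value, best_side = -1.0, None
    for _ in range(trials):
        r = rng.standard_normal(vectors.shape[1])  # random hyperplane normal
        side = vectors @ r >= 0                    # vertex i gets sign(v_i . r)
        value = sum(w for (i, j), w in weights.items() if side[i] != side[j])
        if value > best_value:
            best_value, best_side = value, side
    return best_value, best_side

if __name__ == "__main__":
    # Toy instance: a 5-cycle with unit weights and stand-in "SDP" vectors.
    n = 5
    rng = np.random.default_rng(1)
    V = rng.standard_normal((n, n))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    W = {(i, (i + 1) % n): 1.0 for i in range(n)}
    print(hyperplane_round(V, W))
```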

3,932 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce PVL, an algorithm that computes the Pade approximation of Laplace-domain transfer functions of large linear networks via a Lanczos process.
Abstract: In this paper, we introduce PVL, an algorithm for computing the Pade approximation of Laplace-domain transfer functions of large linear networks via a Lanczos process. The PVL algorithm has significantly superior numerical stability, while retaining the same efficiency as algorithms that compute the Pade approximation directly through moment matching, such as AWE and its derivatives. As a consequence, it produces more accurate and higher-order approximations, and it renders unnecessary many of the heuristics that AWE and its derivatives had to employ. The algorithm also computes an error bound that makes it possible to identify the true poles and zeros of the original network. We present results of numerical experiments with the PVL algorithm for several large examples.
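PVL itself uses a two-sided (nonsymmetric) Lanczos process on the circuit matrices; as a hedged illustration of the underlying idea, here is the symmetric special case in Python. The reduced tridiagonal matrix T, built from matrix-vector products with A, approximates the transfer function b^T (sI - A)^{-1} b by ||b||^2 [(sI - T)^{-1}]_{11}. The random test matrix and the evaluation point s are assumptions for the demo only.

```python
import numpy as np

def lanczos(A, b, m):
    """Symmetric Lanczos: reduce A to an m x m tridiagonal T using only
    matrix-vector products with A (assumes no breakdown, i.e. beta != 0)."""
    n = len(b)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = b / np.linalg.norm(b)
    q_prev = np.zeros(n)
    for j in range(m):
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    A = (A + A.T) / 2                           # symmetric test matrix
    b = rng.standard_normal(200)
    T = lanczos(A, b, 20)
    s = 50.0                                    # evaluation point outside the spectrum
    exact = b @ np.linalg.solve(s * np.eye(200) - A, b)
    reduced = (b @ b) * np.linalg.inv(s * np.eye(20) - T)[0, 0]
    print(exact, reduced)                       # the reduced model tracks the exact value
```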

1,313 citations


Journal ArticleDOI
TL;DR: The first approximation algorithms for many NP-complete problems, including the non-fixed point-to-point connection problem, the exact path partitioning problem and complex location-design problems are derived.
Abstract: We present a general approximation technique for a large class of graph problems. Our technique mostly applies to problems of covering, at minimum cost, the vertices of a graph with trees, cycles or paths satisfying certain requirements. In particular, many basic combinatorial optimization problems fit in this framework, including the shortest path, minimum-cost spanning tree, minimum-weight perfect matching, traveling salesman and Steiner tree problems. Our technique produces approximation algorithms that run in $O(n^2\log n)$ time and come within a factor of 2 of optimal for most of these problems. For instance, we obtain a 2-approximation algorithm for the minimum-weight perfect matching problem under the triangle inequality. Our running time of $O(n^2\log n)$ time compares favorably with the best strongly polynomial exact algorithms running in $O(n^3)$ time for dense graphs. A similar result is obtained for the 2-matching problem and its variants. We also derive the first approximation algorithms for many NP-complete problems, including the non-fixed point-to-point connection problem, the exact path partitioning problem and complex location-design problems. Moreover, for the prize-collecting traveling salesman or Steiner tree problems, we obtain 2-approximation algorithms, therefore improving the previously best-known performance guarantees of 2.5 and 3, respectively [Math. Programming, 59 (1993), pp. 413--420].

809 citations


Proceedings ArticleDOI
Gabriel Taubin
20 Jun 1995
TL;DR: This work describes a method to estimate the tensor of curvature of a surface at the vertices of a polyhedral approximation, obtained by computing in closed form the eigenvalues and eigenvectors of certain 3×3 symmetric matrices defined by integral formulas.
Abstract: Estimating principal curvatures and principal directions of a surface from a polyhedral approximation with a large number of small faces, such as those produced by iso-surface construction algorithms, has become a basic step in many computer vision algorithms, particularly in those targeted at medical applications. We describe a method to estimate the tensor of curvature of a surface at the vertices of a polyhedral approximation. Principal curvatures and principal directions are obtained by computing in closed form the eigenvalues and eigenvectors of certain 3×3 symmetric matrices defined by integral formulas, and closely related to the matrix representation of the tensor of curvature. The resulting algorithm is linear, both in time and in space, as a function of the number of vertices and faces of the polyhedral surface.
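A minimal per-vertex sketch of this construction in Python, under simplifying assumptions: uniform neighbor weights instead of the paper's area-based weights, a given vertex normal, and the paper's linear relation between the matrix eigenvalues and the principal curvatures. The sign convention depends on normal orientation.

```python
import math
import numpy as np

def principal_curvatures(v, normal, neighbors):
    """Estimate principal curvatures at vertex v from normal-curvature
    samples toward each neighboring vertex, via the eigenvalues of a
    3x3 symmetric matrix (uniform weights are a simplification)."""
    n = normal / np.linalg.norm(normal)
    M = np.zeros((3, 3))
    w = 1.0 / len(neighbors)
    for u in neighbors:
        d = u - v
        kappa = 2.0 * n.dot(d) / d.dot(d)       # normal curvature toward u
        t = d - d.dot(n) * n                    # tangent-plane direction
        t /= np.linalg.norm(t)                  # assumes u is not along the normal
        M += w * kappa * np.outer(t, t)
    evals, evecs = np.linalg.eigh(M)
    idx = np.argsort(np.abs(evecs.T @ n))[:2]   # drop the ~normal eigenvector
    m1, m2 = evals[idx]
    return 3 * m1 - m2, 3 * m2 - m1             # relation used in the paper

if __name__ == "__main__":
    # North pole of the unit sphere with a ring of nearby sphere points:
    v = np.array([0.0, 0.0, 1.0])
    n = np.array([0.0, 0.0, 1.0])
    ring = [np.array([math.sin(0.1) * math.cos(a), math.sin(0.1) * math.sin(a),
                      math.cos(0.1)])
            for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
    print(principal_curvatures(v, n, ring))     # both ~ -1 with this convention
```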

628 citations


Journal ArticleDOI
TL;DR: The techniques developed in this paper greatly outperform the general methods in many applications, and are extensions of a method previously applied to find approximate solutions to multicommodity flow problems.
Abstract: This paper presents fast algorithms that find approximate solutions for a general class of problems, which we call fractional packing and covering problems. The only previously known algorithms for...

575 citations


Proceedings ArticleDOI
Gabriel Taubin
20 Jun 1995
TL;DR: A new method for smoothing piecewise linear shapes of arbitrary dimension and topology is introduced, in fact a linear low-pass filter that removes high-curvature variations, and does not produce shrinkage.
Abstract: For a number of computational purposes, including visualization of scientific data and registration of multimodal medical data, smooth curves must be approximated by polygonal curves, and surfaces by polyhedral surfaces. An inherent problem of these approximation algorithms is that the resulting curves and surfaces appear faceted. Boundary-following and iso-surface construction algorithms are typical examples. To reduce the apparent faceting, smoothing methods are used. In this paper, we introduce a new method for smoothing piecewise linear shapes of arbitrary dimension and topology. This new method is in fact a linear low-pass filter that removes high-curvature variations, and does not produce shrinkage. Its computational complexity is linear in the number of edges or faces of the shape, and the required storage is linear in the number of vertices.
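The signal-processing idea admits a very small sketch: alternate a positive Laplacian smoothing step with a slightly larger negative one, which acts as a low-pass filter without the shrinkage of plain Laplacian smoothing. The step values 0.33 and -0.34 below are commonly used choices, not taken from the abstract.

```python
import numpy as np

def taubin_smooth(points, adjacency, lam=0.33, mu=-0.34, iterations=50):
    """lambda|mu smoothing of a polyline or mesh: points is an (n, d)
    array, adjacency[i] lists the neighbors of vertex i."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iterations):
        for step in (lam, mu):                 # shrink step, then inflate step
            lap = np.array([pts[nbrs].mean(axis=0) - pts[i]
                            for i, nbrs in enumerate(adjacency)])
            pts += step * lap
    return pts

if __name__ == "__main__":
    # Smooth a noisy closed polygon approximating the unit circle.
    n = 100
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    rng = np.random.default_rng(0)
    noisy = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((n, 2))
    adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
    print(np.linalg.norm(taubin_smooth(noisy, adj), axis=1)[:5])  # radii stay near 1
```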

472 citations


Journal ArticleDOI
TL;DR: This work gives the first approximation algorithm for the generalized network Steiner problem, a problem in network design, and proves a combinatorial min-max approximate equality relating minimum-cost networks to maximum packings of certain kinds of cuts.
Abstract: We give the first approximation algorithm for the generalized network Steiner problem, a problem in network design. An instance consists of a network with link-costs and, for each pair $\{i,j\}$ of nodes, an edge-connectivity requirement $r_{ij}$. The goal is to find a minimum-cost network using the available links and satisfying the requirements. Our algorithm outputs a solution whose cost is within a factor of $2\lceil \log_2(r+1)\rceil$ of optimal, where $r$ is the highest requirement value. In the course of proving the performance guarantee, we prove a combinatorial min-max approximate equality relating minimum-cost networks to maximum packings of certain kinds of cuts. As a consequence of the proof of this theorem, we obtain an approximation algorithm for optimally packing these cuts; we show that this algorithm has application to estimating the reliability of a probabilistic network.

398 citations


Proceedings ArticleDOI
04 Jan 1995
TL;DR: The approach combines the Feige-Lovasz (STOC92) semidefinite programming relaxation of one-round two-prover proof systems with rounding techniques for the solutions of semidefinite programs, as introduced by Goemans and Williamson (STOC94).
Abstract: It is well known that two-prover proof systems are a convenient tool for establishing hardness of approximation results. In this paper, we show that two-prover proof systems are also convenient starting points for establishing easiness of approximation results. Our approach combines the Feige-Lovasz (STOC92) semidefinite programming relaxation of one-round two-prover proof systems with rounding techniques for the solutions of semidefinite programs, as introduced by Goemans and Williamson (STOC94). As a consequence of our approach, we present improved approximation algorithms for MAX 2SAT and MAX DICUT. The algorithms are guaranteed to deliver solutions within a factor of 0.931 of the optimum for MAX 2SAT and within a factor of 0.859 for MAX DICUT, improving upon the guarantees of 0.878 and 0.796 of Goemans and Williamson (1994).

367 citations


Journal ArticleDOI
TL;DR: A class of approximation algorithms is described for the minimum-makespan problem of job shop scheduling; these algorithms can find shorter makespans than the shifting bottleneck heuristic or a simulated annealing approach with the same running time.

356 citations


Journal ArticleDOI
TL;DR: Various parameters of graphs connected to sparse matrix factorization and other applications can be approximated using an algorithm of Leighton et al. that finds vertex separators of graphs, and it is shown that unless P = NP there are no absolute approximation algorithms for any of the parameters.

323 citations


Book ChapterDOI
29 May 1995
TL;DR: Polynomial-time approximation algorithms with non-trivial performance guarantees are presented for the problems of partitioning the vertices of a weighted graph into k blocks so as to maximise the weight of crossing edges.
Abstract: Polynomial-time approximation algorithms with non-trivial performance guarantees are presented for the problems of (a) partitioning the vertices of a weighted graph into k blocks so as to maximise the weight of crossing edges, and (b) partitioning the vertices of a weighted graph into two blocks of equal cardinality, again so as to maximise the weight of crossing edges. The approach, pioneered by Goemans and Williamson, is via a semidefinite programming relaxation.

Journal ArticleDOI
TL;DR: In this paper, the authors present a simple technique that gives slightly better bounds than these and that more importantly requires only limited independence among the random variables, thereby importing a variety of standard results to the case of limited independence for free.
Abstract: Chernoff-Hoeffding (CH) bounds are fundamental tools used in bounding the tail probabilities of the sums of bounded and independent random variables (r.v.'s). We present a simple technique that gives slightly better bounds than these and that more importantly requires only limited independence among the random variables, thereby importing a variety of standard results to the case of limited independence for free. Additional methods are also presented, and the aggregate results are sharp and provide a better understanding of the proof techniques behind these bounds. These results also yield improved bounds for various tail probability distributions and enable improved approximation algorithms for jobshop scheduling. The limited independence result implies that a reduced amount and weaker sources of randomness are sufficient for randomized algorithms whose analyses use the CH bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routing.
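For reference, the baseline Hoeffding form of the CH bound that the paper sharpens, stated for independent random variables $X_i \in [0,1]$ with $X = \sum_{i=1}^n X_i$ (the paper's point is that comparable bounds survive when full independence is weakened to limited, e.g. $k$-wise, independence):

$$\Pr\big[X \ge \mathbb{E}[X] + t\big] \;\le\; e^{-2t^2/n}, \qquad t > 0.$$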

Journal ArticleDOI
TL;DR: The greedy algorithm is the first to come within a constant factor of the optimum; it guarantees a solution that uses no more than twice the minimum number of reversals, and the lower and upper bounds of the branch-and-bound algorithm are a novel application of maximum-weight matchings, shortest paths, and linear programming.
Abstract: Motivated by the problem in computational biology of reconstructing the series of chromosome inversions by which one organism evolved from another, we consider the problem of computing the shortest series of reversals that transform one permutation to another. The permutations describe the order of genes on corresponding chromosomes, and a reversal takes an arbitrary substring of elements and reverses their order.
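As a hedged sketch of the greedy idea: count breakpoints (adjacent positions holding non-consecutive values) and repeatedly apply the reversal that removes the most of them. This is a simplified stand-in for the paper's greedy, whose refined rule (favoring reversals that leave a decreasing strip) is what yields the factor-2 guarantee; the fallback step here is our own simplification for when no reversal removes a breakpoint.

```python
def breakpoints(perm):
    """Breakpoints of a permutation of 1..n, framed by 0 and n+1: adjacent
    positions whose values are not consecutive."""
    p = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(p, p[1:]) if abs(a - b) != 1)

def greedy_reversal_sort(perm):
    """Sort a permutation of 1..n by reversals, greedily removing the
    most breakpoints per step; returns the list of (i, j) reversals."""
    perm, seq = list(perm), []
    n = len(perm)
    while perm != list(range(1, n + 1)):
        base = breakpoints(perm)
        best = None
        for i in range(n):
            for j in range(i + 2, n + 1):
                cand = perm[:i] + perm[i:j][::-1] + perm[j:]
                d = base - breakpoints(cand)
                if d > 0 and (best is None or d > best[0]):
                    best = (d, i, j, cand)
        if best is None:  # fallback: one reversal placing value i+1 at slot i
            i = next(k for k in range(n) if perm[k] != k + 1)
            j = perm.index(i + 1) + 1
            best = (0, i, j, perm[:i] + perm[i:j][::-1] + perm[j:])
        _, i, j, perm = best
        seq.append((i, j))
    return seq

print(greedy_reversal_sort([3, 1, 2, 5, 4]))  # list of (i, j) reversals
```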

Journal ArticleDOI
TL;DR: An exact numerical algorithm is developed that shows that the effective-bandwidth approximation can overestimate the target small blocking probabilities by several orders of magnitude when there are many sources that are more bursty than Poisson.
Abstract: Although ATM seems to be the wave of the future, one analysis requires that the utilization of the network be quite low. That analysis is based on asymptotic decay rates of steady-state distributions used to develop a concept of effective bandwidths for connection admission control. The present authors have developed an exact numerical algorithm that shows that the effective-bandwidth approximation can overestimate the target small blocking probabilities by several orders of magnitude when there are many sources that are more bursty than Poisson. The bad news is that the appealing simple connection admission control algorithm using effective bandwidths based solely on tail-probability asymptotic decay rates may actually not be as effective as many have hoped. The good news is that the statistical multiplexing gain on ATM networks may actually be higher than some have feared. For one example, thought to be realistic, the analysis indicates that the network actually can support twice as many sources as predicted by the effective-bandwidth approximation. The authors also show that the effective bandwidth approximation is not always conservative. Specifically, for sources less bursty than Poisson, the asymptotic constant grows exponentially in the number of sources (when they are scaled as above) and the effective-bandwidth approximation can greatly underestimate the target blocking probabilities. Finally, they develop new approximations that work much better than the pure effective-bandwidth approximation.

Journal ArticleDOI
TL;DR: A four-phase approach based on rigorous design criteria is presented; it has been found to be very accurate in practice and can accommodate high sequencing error rates.
Abstract: The trend toward very large DNA sequencing projects, such as those being undertaken as part of the Human Genome Program, necessitates the development of efficient and precise algorithms for assembling a long DNA sequence from the fragments obtained by shotgun sequencing or other methods. The sequence reconstruction problem that we take as our formulation of DNA sequence assembly is a variation of the shortest common superstring problem, complicated by the presence of sequencing errors and reverse complements of fragments. Since the simpler superstring problem is NP-hard, any efficient reconstruction procedure must resort to heuristics. In this paper, however, a four-phase approach based on rigorous design criteria is presented, and has been found to be very accurate in practice. Our method is robust in the sense that it can accommodate high sequencing error rates, and it lists a series of alternate solutions in the event that several appear equally good. Moreover, it uses a limited form of multiple sequence alignment to detect, and often correct, errors in the data. Our combined algorithm has successfully reconstructed nonrepetitive sequences of length 50,000 sampled at error rates of as high as 10%.
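For orientation, here is the classical greedy merge heuristic for the underlying superstring formulation: repeatedly merge the pair of fragments with the longest suffix-prefix overlap. This is not the paper's four-phase method; it ignores sequencing errors and reverse complements, which the paper's full algorithm handles.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(frags):
    """Greedy shortest-common-superstring heuristic: drop contained
    fragments, then repeatedly merge the pair with maximum overlap."""
    frags = [f for f in frags if not any(f != g and f in g for g in frags)]
    while len(frags) > 1:
        k, a, b = max(((overlap(x, y), x, y)
                       for x in frags for y in frags if x is not y),
                      key=lambda t: t[0])
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[k:])                # merge b onto a, sharing k symbols
    return frags[0]

print(greedy_superstring(["ACGT", "GTCA", "CAAC"]))  # one short superstring
```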

Journal ArticleDOI
TL;DR: It is proved that SCS does not have a polynomial-time linear approximation algorithm unless P = NP, and a new method for analyzing the average-case performance of algorithms for sequences, based on Kolmogorov complexity, is introduced.
Abstract: The problems of finding shortest common supersequences (SCS) and longest common subsequences (LCS) are two well-known NP-hard problems that have applications in many areas including computational molecular biology, data compression, robot motion planning and scheduling, text editing, etc. A lot of fruitless effort has been spent in searching for good approximation algorithms for these problems. In this paper, we show that these problems are inherently hard to approximate in the worst case. In particular, we prove that (i) SCS does not have a polynomial time linear approximation algorithm, unless P = NP; (ii) there exists a constant $\delta > 0$ such that, if SCS has a polynomial time approximation algorithm with ratio $\log^{\delta} n$, where $n$ is the number of input sequences, then NP is contained in DTIME$(2^{\mathrm{polylog}\, n})$; (iii) there exists a constant $\delta > 0$ such that, if LCS has a polynomial time approximation algorithm with performance ratio $n^{\delta}$, then P = NP. The proofs utilize the recent results of Arora et al. [Proc. 33rd IEEE Symposium on Foundations of Computer Science, 1992, pp. 14-23] on the complexity of approximation problems. In the second part of the paper, we introduce a new method for analyzing the average-case performance of algorithms for sequences, based on Kolmogorov complexity. Despite the above nonapproximability results, we show that near optimal solutions for both SCS and LCS can be found on the average. More precisely, consider a fixed alphabet $\Sigma$ and suppose that the input sequences are generated randomly according to the uniform probability distribution and are of the same length $n$. Moreover, assume that the number of input sequences is polynomial in $n$. Then, there are simple greedy algorithms which approximate SCS and LCS with expected additive errors $O(n^{0.707})$ and $O(n^{\frac{1}{2}+\epsilon})$ for any $\epsilon > 0$, respectively. Incidentally, our analyses also provide tight upper and lower bounds on the expected LCS and SCS lengths for a set of random sequences, solving a generalization of another well-known open question on the expected LCS length for two random sequences [K. Alexander, The rate of convergence of the mean length of the longest common subsequence, 1992, manuscript], [V. Chvatal and D. Sankoff, J. Appl. Probab., 12 (1975), pp. 306-315], [D. Sankoff and J. Kruskall, eds., Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison, Addison-Wesley, Reading, MA, 1983].
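A sketch of the kind of simple greedy the average-case analysis concerns, in the Majority-Merge style for SCS: repeatedly emit the symbol that currently heads the most remaining sequences and consume it from those fronts. The output is a common supersequence by construction; the additive-error bounds quoted above apply to random inputs, not to the worst case.

```python
from collections import Counter

def majority_merge(seqs):
    """Majority-Merge greedy for a short common supersequence: emit the
    symbol heading the most remaining sequences, consume it from those
    fronts, repeat until every sequence is exhausted."""
    seqs = [list(s) for s in seqs if s]
    out = []
    while seqs:
        c = Counter(s[0] for s in seqs).most_common(1)[0][0]
        out.append(c)
        seqs = [s[1:] if s[0] == c else s for s in seqs]
        seqs = [s for s in seqs if s]
    return "".join(out)

print(majority_merge(["abcb", "bca", "acb"]))  # a supersequence of all three
```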

Journal Article
TL;DR: In this article, the authors compare the syntactically defined class MAX SNP with the computationally defined class APX and show that every problem in APX can be "placed" (i.e., has approximation-preserving reduction to a problem) in MAX SNP.
Abstract: We attempt to reconcile the two distinct views of approximation classes: syntactic and computational. Syntactic classes such as MAX SNP permit structural results and have natural complete problems, while computational classes such as APX allow us to work with classes of problems whose approximability is well understood. Our results provide a syntactic characterization of computational classes and give a computational framework for syntactic classes. We compare the syntactically defined class MAX SNP with the computationally defined class APX and show that every problem in APX can be "placed" (i.e., has approximation-preserving reduction to a problem) in MAX SNP. Our methods introduce a simple, yet general, technique for creating approximation-preserving reductions which shows that any "well"-approximable problem can be reduced in an approximation-preserving manner to a problem which is hard to approximate to corresponding factors. The reduction then follows easily from the recent nonapproximability results for MAX SNP-hard problems. We demonstrate the generality of this technique by applying it to other classes such as MAX SNP-RMAX(2) and MIN F$^{+}\Pi_2(1)$ which have the clique problem and the set cover problem, respectively, as complete problems. The syntactic nature of MAX SNP was used by Papadimitriou and Yannakakis [J. Comput. System Sci., 43 (1991), pp. 425--440] to provide approximation algorithms for every problem in the class. We provide an alternate approach to demonstrating this result using the syntactic nature of MAX SNP. We develop a general paradigm, nonoblivious local search, useful for developing simple yet efficient approximation algorithms. We show that such algorithms can find good approximations for all MAX SNP problems, yielding approximation ratios comparable to the best known for a variety of specific MAX SNP-hard problems. Nonoblivious local search provably outperforms standard local search in both the degree of approximation achieved and the efficiency of the resulting algorithms.

Journal Article
TL;DR: In this paper, the Steiner tree problem is solved using a novel technique of choosing Steiner points in dependence on the possible deviation from the optimal solutions, achieving an approximation ratio of 1.644 in arbitrary metric and 1.267 in rectilinear plane, respectively.
Abstract: The Steiner tree problem asks for the shortest tree connecting a given set of terminal points in a metric space. We design new approximation algorithms for the Steiner tree problems using a novel technique of choosing Steiner points in dependence on the possible deviation from the optimal solutions. We achieve the best up to now approximation ratios of 1.644 in arbitrary metric and 1.267 in rectilinear plane, respectively.

Journal ArticleDOI
TL;DR: Two simple approximation algorithms for the minimum $k$-cut problem are presented; each algorithm finds a $k$-cut having weight within a factor of $(2-2/k)$ of the optimal.
Abstract: Two simple approximation algorithms for the minimum $k$-cut problem are presented. Each algorithm finds a $k$-cut having weight within a factor of $(2-2/k)$ of the optimal. One of our algorithms is particularly efficient---it requires a total of only $n-1$ maximum flow computations for finding a set of near-optimal $k$-cuts, one for each value of $k$ between 2 and $n$.
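A hedged sketch of one of the two algorithms in Python (using networkx, and assuming edge capacities are stored in a 'capacity' attribute): compute a Gomory-Hu tree, whose $n-1$ edges each induce a cut, and remove the union of the $k-1$ lightest of those cuts.

```python
import networkx as nx

def approx_k_cut(G, k):
    """(2 - 2/k)-style approximation sketch for minimum k-cut: take the
    union of the k-1 lightest cuts induced by Gomory-Hu tree edges."""
    T = nx.gomory_hu_tree(G, capacity="capacity")
    lightest = sorted(T.edges(data="weight"), key=lambda e: e[2])[:k - 1]
    cut_edges = set()
    for u, v, _ in lightest:
        S = T.copy()
        S.remove_edge(u, v)
        side = nx.node_connected_component(S, u)   # one shore of the induced cut
        cut_edges |= {(a, b) for a, b in G.edges() if (a in side) != (b in side)}
    H = G.copy()
    H.remove_edges_from(cut_edges)
    return cut_edges, nx.number_connected_components(H)  # components >= k

if __name__ == "__main__":
    G = nx.cycle_graph(6)
    nx.set_edge_attributes(G, 1, "capacity")
    print(approx_k_cut(G, 3))
```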

Journal ArticleDOI
TL;DR: The algorithm is used in the CARMEN system for airline crew scheduling used by several major airlines, and it is shown that the algorithm performs well for large set covering problems, in comparison to the CPLEX system, in terms of both time and quality.
Abstract: We present an approximation algorithm for solving large 0–1 integer programming problems where A is 0–1 and where b is integer. The method can be viewed as a dual coordinate search for solving the LP-relaxation, reformulated as an unconstrained nonlinear problem, and an approximation scheme working together with this method. The approximation scheme works by adjusting the costs as little as possible so that the new problem has an integer solution. The degree of approximation is determined by a parameter, and for different levels of approximation the resulting algorithm can be interpreted in terms of linear programming, dynamic programming, and as a greedy algorithm. The algorithm is used in the CARMEN system for airline crew scheduling used by several major airlines, and we show that the algorithm performs well for large set covering problems, in comparison to the CPLEX system, in terms of both time and quality. We also present results on some well known difficult set covering problems that have appeared in the literature.
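The paper's dual coordinate-search method does not compress into a few lines, so as orientation for the problem domain, here is the classical greedy heuristic for weighted set covering (explicitly not the paper's algorithm): repeatedly pick the set minimizing cost per newly covered element. The instance below is made up.

```python
def greedy_set_cover(universe, sets, costs):
    """Classical greedy for weighted set covering: repeatedly pick the
    set with minimum cost per newly covered element; it carries the
    well-known logarithmic approximation guarantee."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: costs[s] / len(sets[s] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Made-up instance: columns A-D with costs, rows 1-5 to cover.
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}
costs = {"A": 3.0, "B": 1.0, "C": 1.0, "D": 1.0}
print(greedy_set_cover({1, 2, 3, 4, 5}, sets, costs))
```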

Journal ArticleDOI
TL;DR: The first polynomial-time approximation algorithm for finding a minimum-cost subgraph having at least a specified number of edges in each cut is presented, which shows the importance of this technique in designing approximation algorithms.
Abstract: We present the first polynomial-time approximation algorithm for finding a minimum-cost subgraph having at least a specified number of edges in each cut. This class of problems includes, among others, the generalized Steiner network problem, also called the survivable network design problem. If k is the maximum cut requirement of the problem, our solution comes within a factor of 2k of optimal. Our algorithm is primal-dual and shows the importance of this technique in designing approximation algorithms.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: This work considers the class of densely embedded, nearly-Eulerian graphs, which includes the two-dimensional mesh and other planar and locally planar interconnection networks, and obtains a constant-factor approximation algorithm for the maximum disjoint paths problem for this class of graphs.
Abstract: We consider the following maximum disjoint paths problem (MDPP). We are given a large network, and pairs of nodes that wish to communicate over paths through the network-the goal is to simultaneously connect as many of these pairs as possible in such a way that no two communication paths share an edge in the network. This classical problem has been brought into focus recently in papers discussing applications to routing in high-speed networks, where the current lack of understanding of the MDPP is an obstacle to the design of practical heuristics. We consider the class of densely embedded, nearly-Eulerian graphs, which includes the two-dimensional mesh and other planar and locally planar interconnection networks. We obtain a constant-factor approximation algorithm for the maximum disjoint paths problem for this class of graphs; this improves on an O(log n)-approximation for the special case of the two-dimensional mesh due to Aumann-Rabani and the authors. For networks that are not explicitly required to be "high-capacity," this is the first constant-factor approximation for the MDPP in any class of graphs other than trees. We also consider the MDPP in the on-line setting, relevant to applications in which connection requests arrive over time and must be processed immediately. Here we obtain an asymptotically optimal O(log n)-competitive on-line algorithm for the same class of graphs; this improves on an O(log n log log n)-competitive algorithm for the special case of the mesh due to B. Awerbuch et al (1994).

Journal ArticleDOI
TL;DR: A suboptimal approach to the fixed-interval smoothing problem for Markovian switching systems is examined, and a smoothing algorithm is developed that uses two multiple-model filters, where one of the filters propagates in the forward-time direction and the other propagates in the backward-time direction.
Abstract: A suboptimal approach to the fixed-interval smoothing problem for Markovian switching systems is examined. A smoothing algorithm is developed that uses two multiple-model filters, where one of the filters propagates in the forward-time direction and the other one propagates in the backward-time direction. A backward-time filtering algorithm based on the interacting multiple model concept is also developed. Results from a simulation example are given to illustrate the performance of the smoothing algorithm with respect to that of filtering. The example involves radar tracking of a Mach 1 aircraft.

Proceedings Article
16 Aug 1995
TL;DR: This paper shows that the Independent Set problem for bounded-degree graphs remains MAX SNP-complete when the maximum degree is bounded by 3, studies better polynomial-time approximation of the problem for degree-3 graphs, and improves the previously best ratio.
Abstract: The main problem we consider in this paper is the Independent Set problem for bounded degree graphs. It is shown that the problem remains MAX SNP-complete when the maximum degree is bounded by 3. Some related problems are also shown to be MAX SNP-complete at the lowest possible degree bounds. Next we study better poly-time approximation of the problem for degree-3 graphs, and improve the previously best ratio, 5/4, to arbitrarily close to 6/5. This result also provides improved poly-time approximation ratios, $(B+3)/5+\epsilon$, for graphs of odd maximum degree $B$.
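As a baseline for comparison (not the paper's improved algorithm), here is the classical minimum-degree greedy for independent set: repeatedly take a vertex of smallest degree and delete its closed neighborhood. On bounded-degree graphs this simple rule already achieves a constant ratio, which results like the paper's push further down.

```python
def min_degree_independent_set(adj):
    """Minimum-degree greedy independent set; adj maps each vertex to the
    set of its neighbors (a working copy is consumed while running)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    independent = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))     # smallest remaining degree
        independent.add(v)
        for u in [v] + list(adj[v]):                # delete v and its neighbors
            for w in adj.pop(u, set()):
                adj.get(w, set()).discard(u)
    return independent

# Usage: a 6-cycle (max degree 2); the greedy finds 3 independent vertices.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(min_degree_independent_set(cycle))
```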

Journal ArticleDOI
TL;DR: An algorithm which, given a labeled graph on n vertices and a list of all labeled graphs on k vertices, provides for each graph H of this list an approximation to the number of induced copies of H in G with total error small is given.
Abstract: In this paper we give an algorithm which, given a labeled graph $G$ on $n$ vertices and a list of all labeled graphs on $k$ vertices, provides for each graph $H$ of this list an approximation to the number of induced copies of $H$ in $G$ with total error small. This algorithm has running time $O(n^{{1 \over \log \log n}} \cdot M(n))$, where $M(n)$ is the time needed to square an $n \times n$ matrix with 0-1 entries over the integers. The main tool in designing this algorithm is a variant of the regularity lemma of Szemeredi.

Journal ArticleDOI
TL;DR: Lower bounds, approximation algorithms and a branch-and-bound procedure are introduced for the exact solution of the classical problem of scheduling n tasks with given processing times on m identical parallel processors so as to minimize the maximum completion time of a task.
Abstract: We consider the classical problem of scheduling n tasks with given processing times on m identical parallel processors so as to minimize the maximum completion time of a task. We introduce lower bounds, approximation algorithms and a branch-and-bound procedure for the exact solution of the problem. Extensive computational results show that, in many cases, large-size instances of the problem can be solved exactly.
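One of the classical approximation algorithms for this makespan problem is Longest Processing Time first, which sorts jobs in decreasing order and always assigns the next job to the least-loaded machine; it is a standard baseline with the well-known 4/3 - 1/(3m) guarantee, rather than anything specific to this paper's procedures.

```python
import heapq

def lpt_makespan(times, m):
    """Longest Processing Time list scheduling on m identical machines:
    sort jobs in decreasing order, always give the next job to the
    currently least-loaded machine."""
    heap = [(0.0, i) for i in range(m)]           # (load, machine index)
    heapq.heapify(heap)
    assignment = [[] for _ in range(m)]
    for t in sorted(times, reverse=True):
        load, i = heapq.heappop(heap)             # least-loaded machine
        assignment[i].append(t)
        heapq.heappush(heap, (load + t, i))
    return max(load for load, _ in heap), assignment

print(lpt_makespan([7, 5, 4, 3, 3, 2], m=2))      # makespan 12 here (optimal)
```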

Proceedings ArticleDOI
27 Aug 1995
TL;DR: This work discusses fundamental formation and agreement problems for autonomous, synchronous robots with limited visibility, and presents algorithms for these problems, except for the problem of agreement on direction, which is not solvable even for robots with unlimited visibility.
Abstract: Discusses fundamental formation and agreement problems for autonomous, synchronous robots with limited visibility. Each robot is a mobile processor that, at each discrete time instant, observes the relative positions of those robots that are within distance V of itself, computes its new position using the given algorithm, and then moves to that position. The main difference between this work and many of the previous ones is that, here, the visibility of the robots is assumed to be limited to within distance V, for some constant V>0. The problems the authors discuss include the formation of a single point by the robots, and agreement on a common x-y coordinate system and on the initial distribution; they present algorithms for these problems, except for the problem of agreement on direction (a subproblem of agreement on a coordinate system), which is not solvable even for robots with unlimited visibility. The discussions the authors present indicate that the correctness proofs of the algorithms for robots with limited visibility can be considerably more complex than those for robots with unlimited visibility.

Proceedings ArticleDOI
29 May 1995
TL;DR: The problem of finding minimum-weight spanning subgraphs with a given connectivity requirement is considered, and polynomial-time approximation algorithms for various weighted and unweighted connectivity problems are given.
Abstract: The problem of finding minimum weight spanning subgraphs with a given connectivity requirement is considered. The problem is NP-hard when the connectivity requirement is greater than one. Polynomial time approximation algorithms for various weighted and unweighted connectivity problems are given. The following results are presented:

Journal ArticleDOI
TL;DR: After reviewing the most important combinatorial characterizations of the classes PTAS and FPTAS, this paper focuses on the class APX and shows that this class coincides with the class of optimization problems which are reducible to the maximum satisfiability problem with respect to a polynomial-time approximation preserving reducibility.

Proceedings ArticleDOI
Edward Chlebus, W. Ludwin
06 Nov 1995
TL;DR: A one-moment model for grade of service evaluation in cellular mobile networks is developed and it is proven that handoff traffic is Poissonian only in a nonblocking environment.
Abstract: A one-moment model for grade of service evaluation in cellular mobile networks is developed. Traffic flows are modelled as a function of fresh traffic and users' movement. A model of a single flow is proposed at first, then it is generalized for the whole network. An iterative relaxation algorithm following the Erlang fixed-point approximation is used to produce numerical results. From the usual assumptions, such as Poissonian fresh call arrivals and exponential call holding times, it is proven that handoff traffic is Poissonian only in a nonblocking environment. Since this assumption is common in the literature with reference to a blocking environment, we examine its validity under such circumstances. Network grade of service is evaluated by using the Erlang fixed-point approximation as if handoff traffic were Poissonian, although it is smooth due to blocking. The performance of the presented model is given in comparison to the solution of the exact Markov chain formulation for an isolated traffic stream, or the results of simulations run to evaluate blocking of the whole network. A perfect agreement between exact and approximate analytical results is shown.
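To illustrate the Erlang fixed-point idea in miniature, here is a single-cell toy in Python (all parameters are hypothetical and the model is far simpler than the paper's network-wide formulation): the handoff traffic offered to the cell is thinned by the blocking it experiences, so blocking probability and offered load are iterated to a fixed point.

```python
def erlang_b(a, c):
    """Erlang B blocking probability for offered load a (erlangs) on c
    circuits, computed with the standard stable recursion."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

def cell_blocking(fresh, handoff, circuits, tol=1e-10):
    """Toy single-cell fixed point: the offered handoff stream is thinned
    by the blocking it experiences, so blocking and offered load are
    solved self-consistently (hypothetical parameters)."""
    B = 0.0
    while True:
        a = fresh + handoff * (1.0 - B)   # handoff traffic thinned by blocking
        B_new = erlang_b(a, circuits)
        if abs(B_new - B) < tol:
            return B_new
        B = B_new

print(cell_blocking(fresh=10.0, handoff=4.0, circuits=15))
```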