
Showing papers on "Vertex cover published in 1996"



Proceedings ArticleDOI
01 Jul 1996
TL;DR: The first substantial improvement of the 20-year-old classical harmonic upper bound, H(m), of Johnson, Lovász, and Chvátal is provided, and the approximation guarantee for the greedy algorithm is shown to be better than the guarantee recently established by Srinivasan for the randomized rounding technique, thus improving the bounds on the integrality gap.
Abstract: We establish significantly improved bounds on the performance of the greedy algorithm for approximating set cover. In particular, we provide the first substantial improvement of the 20-year-old classical harmonic upper bound, H(m), of Johnson, Lovász, and Chvátal, by showing that the performance ratio of the greedy algorithm is, in fact, exactly $\ln m - \ln\ln m + \Theta(1)$, where $m$ is the size of the ground set. The difference between the upper and lower bounds turns out to be less than 1.1. This provides the first tight analysis of the greedy algorithm, as well as the first upper bound that lies below H(m) by a function going to infinity with m. We also show that the approximation guarantee for the greedy algorithm is better than the guarantee recently established by Srinivasan for the randomized rounding technique, thus improving the bounds on the integrality gap. Our improvements result from a new approach which might be generally useful for attacking other similar problems.
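To make the rule being analyzed concrete, here is a minimal sketch of greedy set cover: repeatedly pick the set that covers the most still-uncovered elements. The function and the toy instance are our own illustration, not code from the paper.

```python
# Minimal greedy set cover: at each step take the set covering the most
# still-uncovered elements. The analysis above shows this rule achieves
# ratio ln m - ln ln m + Theta(1) on an m-element ground set.

def greedy_set_cover(universe, sets):
    """universe: iterable of elements; sets: list of sets whose union
    contains the universe. Returns a list of chosen sets."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Toy instance: greedy picks {1,2,3} first, then finishes the rest.
print(greedy_set_cover({1, 2, 3, 4, 5, 6},
                       [{1, 2, 3}, {3, 4}, {4, 5, 6}, {2, 6}]))
```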

249 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: An outline of a quadratic algorithm to 4-color planar graphs is presented, based upon a new proof of the Four Color Theorem, which improves a quartic algorithm of Appel and Haken.
Abstract: An outline of a quadratic algorithm to 4-color planar graphs is presented, based upon a new proof of the Four Color Theorem. This algorithm improves a quartic algorithm of Appel and Haken.

108 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: It is demonstrated that isomorphism of strongly regular graphs may be tested in time $n^{O(n^{1/3}\log n)}$, by analyzing the standard individualization and refinement algorithm in light of Neumaier's claw bound, which implies that low-degree strongly regular graphs have a small second-largest eigenvalue, unless they are Steiner or Latin square graphs.
Abstract: We demonstrate that isomorphism of strongly regular graphs may be tested in time $n^{O(n^{1/3}\log n)}$. Our approach is to analyze the standard individualization and refinement algorithm in light of Neumaier's claw bound, which implies that low-degree strongly regular graphs have a small second-largest eigenvalue, unless they are Steiner or Latin square graphs.

83 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: A new methodology for "result checking" is suggested that extends the notion of Blum's program result checking to the on-line checking of cryptographic functions. It uses a new "witness-based" approach that gives constructions applying to various cryptographic scenarios while making sure that the checker/program interaction releases no extra knowledge.
Abstract: We suggest a new methodology for "result checking" that enables us to extend the notion of Blum's program result checking to the on-line checking of cryptographic functions. In our model, the checker not only needs to be assured of the correctness of the result, but the owner of the program needs to be sure not to give away anything but the requested result on the (authorized) input. The existing approaches for program result checking of numerical problems often ask the program a number of extra queries (different from the actual input). In the case of cryptographic functions, this may be in contradiction with the security requirement of the program owner. Additional queries, in fact, may be used to gain unauthorized advantage (for example, imagine the implications of the on-line checking of a decryption device that requires the decryption of extra ciphertexts). In [Blum88], the notion of a simple checker was introduced where, for the purpose of efficiency, extra queries are not allowed. In our model, we do allow extra queries, but only when the response does not carry "knowledge" (namely, computational advantage). We use a new "witness-based" approach and give constructions that apply to various cryptographic scenarios while making sure that the checker/program interaction releases no extra knowledge. It is based on the fact that with certain homomorphic functions, having a witness which is an initial correct value will enable checking the entire function domain, and the fact that having a random value of a cryptographic function typically does not reduce its security. The notion has various applications. A particularly useful application is achieving "efficient robust function sharing", a method by which the power to apply a cryptographic function (e.g., RSA decryption/signature) is shared among multiple trustees. As long as a quorum of the trustees is not corrupted and is available, we can apply the function on the input parameters while maintaining the security of the function. With robustness we are able to tolerate and identify misbehaving trustees, both with efficiency and on-line, when computing a function value.
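As a toy illustration of the witness-based idea (our sketch, not the paper's actual protocol): for a multiplicatively homomorphic function such as RSA, f(xy) = f(x)·f(y) mod N, so a checker holding one known-correct witness pair (w, f(w)) can test a claimed answer with a single extra query on a blinded input, which by itself carries no useful knowledge. The modulus, exponent, and witness below are tiny textbook values.

```python
# Toy witness-based check for a multiplicatively homomorphic function
# f(x) = x^d mod N (RSA-style), where f(x*y) = f(x)*f(y) mod N.
# Illustrative sketch only, not the paper's protocol; N, d and the
# witness are tiny textbook values.

N = 3233            # 61 * 53
d = 2753            # private exponent matching e = 17

def program(x):
    """The untrusted program under test; simulated honestly here."""
    return pow(x, d, N)

def check(P, x, w, fw):
    """Check P on input x using a known-correct witness pair (w, fw).
    One extra query on the blinded input x*w mod N suffices: by the
    homomorphic property, f(x*w) must equal f(x)*f(w) mod N."""
    y = P(x)                    # claimed f(x)
    y_blind = P((x * w) % N)    # claimed f(x*w)
    return y, y_blind == (y * fw) % N

w = 42
fw = pow(w, d, N)               # witness value, assumed obtained correctly once
value, ok = check(program, 123, w, fw)
print(value, ok)                # the answer and whether it is consistent
```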

78 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: An $\Omega(n^\epsilon)$ lower bound on the competitive ratio of randomized on-line algorithms for virtual circuit routing on general networks is obtained, in contrast to the known results for some specific networks.
Abstract: We present lower bounds on the competitive ratio of randomized algorithms for a wide class of on-line graph optimization problems, and we apply such results to on-line virtual circuit and optical routing problems. Lund and Yannakakis [LY93a] give inapproximability results for the problem of finding the largest vertex-induced subgraph satisfying any non-trivial, hereditary property π, e.g., independent set, planar, acyclic, bipartite. We consider the on-line version of this family of problems, where some graph G is fixed and some subgraph H of G is presented on-line, vertex by vertex. The on-line algorithm must choose a subset of the vertices of H, choosing or rejecting a vertex when it is presented, whose vertex-induced subgraph satisfies property π. Furthermore, we study the on-line version of graph coloring, whose off-line version has also been shown to be inapproximable [LY93b], on-line max edge-disjoint paths, and on-line path coloring problems. Irrespective of the time complexity, we show an $\Omega(n^\epsilon)$ lower bound on the competitive ratio of randomized on-line algorithms for any of these problems. As a consequence, we obtain an $\Omega(n^\epsilon)$ lower bound on the competitive ratio of randomized on-line algorithms for virtual circuit routing on general networks, in contrast to the known results for some specific networks. Moreover, this lower bound holds even if the use of preemption is allowed. Similar lower bounds are obtained for on-line optical routing as well.

73 citations


Journal ArticleDOI
TL;DR: In this paper, a polynomial approximation theory linked to combinatorial optimization is defined, and a notion of equivalence among optimization problems is introduced, covering translations or affine transformations of the objective function as well as some aspects of the equivalence between maximization and minimization problems.

70 citations


Journal ArticleDOI
TL;DR: An approximation-preserving reduction from MINSAT to the minimum vertex cover (MINVC) problem is presented, and it is observed that MINSAT remains NP-complete even when restricted to planar instances.

65 citations


20 Dec 1996
TL;DR: The dense set cover problem can be approximated with the performance ratio $c\log n$ for any $c>0$, and it is unlikely to be NP-hard; the vertex cover problem in $\epsilon$-dense graphs is also studied.
Abstract: We study dense cases of several covering problems. An instance of the set cover problem with $m$ sets is dense if there is $\epsilon>0$ such that any element belongs to at least $\epsilon m$ sets. We show that the dense set cover problem can be approximated with the performance ratio $c\log n$ for any $c>0$ and it is unlikely to be NP-hard. We construct a polynomial-time approximation scheme for the dense Steiner tree problem in $n$-vertex graphs, i.e. for the case when each terminal is adjacent to at least $\epsilon n$ vertices. We also study the vertex cover problem in $\epsilon$-dense graphs. Though this problem is shown to be still MAX-SNP-hard as in general graphs, we find a better approximation algorithm with the performance ratio $\frac{2}{1+\epsilon}$. The {\em superdense} cases of all these problems are shown to be solvable in polynomial time.
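For context, the classical factor-2 baseline that the $\frac{2}{1+\epsilon}$ ratio improves on in the dense case takes both endpoints of a greedily built maximal matching; a minimal sketch (names illustrative):

```python
# Classical 2-approximation for vertex cover, the baseline improved to
# 2/(1+eps) on eps-dense graphs: build a maximal matching greedily and
# take both endpoints of every matched edge.

def matching_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge joins the maximal matching; cover both endpoints.
            cover.update((u, v))
    return cover

print(matching_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]))
```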

59 citations


Proceedings ArticleDOI
14 Oct 1996
TL;DR: An 8-approximation algorithm is presented for the problem of finding a minimum weight subset feedback vertex set in an undirected graph with respect to a given subset of vertices S called special vertices.
Abstract: We present an 8-approximation algorithm for the problem of finding a minimum weight subset feedback vertex set. The input in this problem consists of an undirected graph G=(V,E) with vertex weights w(v) and a subset of vertices S called special vertices. A cycle is called interesting if it contains at least one special vertex. A subset of vertices is called a subset feedback vertex set with respect to S if it intersects every interesting cycle. The goal is to find a minimum weight subset feedback vertex set. The best previous algorithm for the general case provided only a logarithmic approximation factor. The minimum weight subset feedback vertex set problem generalizes two NP-complete problems: the minimum weight feedback vertex set problem in undirected graphs and the minimum weight multiway vertex cut problem. The main tool that we use in our algorithm and its analysis is a new version of multi-commodity flow which we call relaxed multi-commodity flow. Relaxed multi-commodity flow is a hybrid of multi-commodity flow and multi-terminal flow.

45 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: A simple approximation algorithm is presented that finds a solution whose cost is less than 17 times the cost of the optimum for the k-MST problem; its main subroutine is identical to an approximation algorithm of Goemans and Williamson for the prize-collecting Steiner tree problem.
Abstract: Given an undirected graph with non-negative edge costs and an integer k, the k-MST problem is that of finding a tree of minimum cost on k nodes. This problem is known to be NP-hard. We present a simple approximation algorithm that finds a solution whose cost is less than 17 times the cost of the optimum. This improves upon previous performance ratios for this problem: $O(\sqrt{k})$ due to Ravi et al., $O(\log^2 k)$ due to Awerbuch et al., and the previous best bound of $O(\log k)$ due to Rajagopalan and Vazirani. Given any $0 < \alpha < 1$, we first present a bicriteria approximation algorithm that outputs a tree on $p \geq \alpha k$ vertices whose total cost is bounded in terms of $L$, the cost of the optimal k-MST. The running time of the algorithm is $O(n^2 \log^2 n)$ on an n-node graph. We then show how to use this algorithm to derive a constant factor approximation algorithm for the k-MST problem. The main subroutine in our algorithm is identical to an approximation algorithm of Goemans and Williamson for the prize-collecting Steiner tree problem.

Journal ArticleDOI
TL;DR: By reducing from a restricted version of SAT, it is shown that the problem of computing the minimum number er(C) of internal simplexes that need to be removed from a simplicial 2-complex C, so that the remaining complex can be nulled by deleting a sequence of external simplexes, is NP-complete.
Abstract: We analyze the problem of computing the minimum number er(C) of internal simplexes that need to be removed from a simplicial 2-complex C so that the remaining complex can be nulled by deleting a sequence of external simplexes. This is equivalent to requiring that the resulting complex be collapsible to a 1-complex. By reducing from a restricted version of SAT, we show that this problem is NP-complete and therefore computationally intractable. This implies that there is no simple formula for er(C) in terms of the Betti numbers of the complex. The problem remains NP-complete for higher-dimensional complexes, but can be solved in polynomial time for graphs.


Journal ArticleDOI
TL;DR: This paper describes a technique to obtain NC approximation schemes for the Maximum Independent Set problem in planar graphs and related optimization problems, by decomposing the graph into k-outerplanar subgraphs and solving each k-outerplanar subgraph using tree contraction techniques.
Abstract: This paper describes a technique to obtain NC approximation schemes for the Maximum Independent Set problem in planar graphs and related optimization problems.

Book ChapterDOI
17 Jun 1996
TL;DR: It is observed that the Min Vertex Cover problem remains APX-complete when restricted to dense graphs, and thus recent techniques developed by Arora et al. for several Max SNP problems restricted to “dense” instances cannot be applied.
Abstract: We provide new non-approximability results for the restrictions of the Min Vertex Cover problem to bounded-degree, sparse and dense graphs. We show that, for a sufficiently large B, the recent 16/15 lower bound proved by Bellare et al. [3] extends with negligible loss to graphs with bounded degree B. Then, we consider sparse graphs with no dense components (i.e. everywhere sparse graphs), and we show a similar result but with a better trade-off between non-approximability and sparsity. Finally, we observe that the Min Vertex Cover problem remains APX-complete when restricted to dense graphs, and thus recent techniques developed by Arora et al. [1] for several Max SNP problems restricted to “dense” instances cannot be applied.

Book ChapterDOI
03 Jun 1996
TL;DR: This paper considers a special variant, namely the problem of finding a maximum weight node induced acyclic subdigraph, and discusses valid and facet defining inequalities for the associated polytope and presents computational results with a branch-and-cut algorithm.
Abstract: Feedback problems consist of removing a minimal number of arcs or nodes of a directed or undirected graph in order to make it acyclic. In this paper we consider a special variant, namely the problem of finding a maximum weight node induced acyclic subdigraph. We discuss valid and facet defining inequalities for the associated polytope and present computational results with a branch-and-cut algorithm.

Journal ArticleDOI
TL;DR: This paper provides a collection of approximation algorithms for various clique sizes with proven worst-case bounds, and shows that these special classes of set covering problems can be solved with better worst-case bounds and/or complexity than if treated as general set covering problems.
Abstract: The problem of covering edges and vertices in a graph (or in a hypergraph) was motivated by a problem arising in the context of the component assembly problem. The problem is as follows: given a graph and a clique size $k$, find the minimum number of $k$-cliques such that all edges and vertices of the graph are covered by (included in) the cliques. This paper provides a collection of approximation algorithms for various clique sizes with proven worst-case bounds. The problem has a natural extension to hypergraphs, for which we consider one particular class. The $k$-clique covering problem can be formulated as a set covering problem. It is shown that the algorithms we design, which exploit the structure of this special set covering problem, have better performance than those derived from direct applications of general purpose algorithms for the set covering. In particular, these special classes of set covering problems can be solved with better worst-case bounds and/or complexity than if treated as general set covering problems.
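To illustrate the set-covering formulation on a small instance, the sketch below enumerates the k-cliques explicitly and covers greedily. This is our own illustration under the assumption that k and the graph are tiny; the paper's algorithms exploit the special structure for better worst-case bounds.

```python
# Hedged sketch of the set-covering formulation for small instances:
# enumerate all k-cliques, treat each as the set of vertices and edges it
# contains, and cover greedily. Practical only for tiny k and graphs.
from itertools import combinations

def k_clique_cover(vertices, edges, k):
    edge_set = {frozenset(e) for e in edges}
    items = set(vertices) | edge_set       # cover every vertex and edge
    cliques = []
    for nodes in combinations(sorted(vertices), k):
        if all(frozenset(p) in edge_set for p in combinations(nodes, 2)):
            covered = set(nodes) | {frozenset(p) for p in combinations(nodes, 2)}
            cliques.append((nodes, covered))
    cover, uncovered = [], set(items)
    while uncovered:
        nodes, covered = max(cliques, key=lambda c: len(c[1] & uncovered))
        if not covered & uncovered:
            raise ValueError("graph is not coverable by %d-cliques" % k)
        cover.append(nodes)
        uncovered -= covered
    return cover

# Two triangles sharing vertex 2; expect both triangles in the cover.
vertices = range(5)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
print(k_clique_cover(vertices, edges, 3))
```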

Journal Article
TL;DR: New results are given on the structure of several computationally defined approximation classes: after defining a new approximation-preserving reducibility usable for as many approximation classes as possible, the first examples of natural NPO-complete problems and the first examples of natural APX-intermediate problems are presented.
Abstract: The study of the approximability properties of NP-hard optimization problems has recently made great advances mainly due to the results obtained in the field of proof checking. The last important breakthrough proves the APX-completeness of several important optimization problems and thus reconciles "two distinct views of approximation classes: syntactic and computational" [S. Khanna et al., in Proc. 35th IEEE Symp. on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 1994, pp. 819--830]. In this paper we obtain new results on the structure of several computationally-defined approximation classes. In particular, after defining a new approximation preserving reducibility to be used for as many approximation classes as possible, we give the first examples of natural NPO-complete problems and the first examples of natural APX-intermediate problems. Moreover, we state new connections between the approximability properties and the query complexity of NPO problems.

Proceedings ArticleDOI
02 Dec 1996
TL;DR: A polynomial time solution for the 3-D version of the Art Gallery problem using techniques from computational geometry, graph coloring and set coverage is presented.
Abstract: The Art Gallery Problem is the problem of determining the number of observers necessary to cover an art gallery room such that every point is seen by at least one observer. This problem is well known and has a linear solution for the 2-dimensional case, but little is known in the 3-D case. In this paper we present a polynomial time solution for the 3-D version of the Art Gallery problem. Because the problem is NP-hard, the solution presented is an approximation, and we present the bounds on our solution. Our solution uses techniques from computational geometry, graph coloring and set coverage. A complexity analysis is presented for each step and an analysis of the overall quality of the solution is given.

Book ChapterDOI
25 Sep 1996
TL;DR: This paper considers a unified approximation method for node-deletion problems with nontrivial hereditary graph properties, obtained through a few generic approximation-preserving reductions from the Vertex Cover problem.
Abstract: In this paper we consider a unified approximation method for node-deletion problems with nontrivial and hereditary graph properties. It was proved 16 years ago that every node-deletion problem for a nontrivial hereditary property is NP-complete, via a few generic approximation-preserving reductions from the Vertex Cover problem. An open problem posed at that time is concerned with the other direction of approximability: can other node-deletion problems be approximated as well as the Vertex Cover problem?

Journal ArticleDOI
TL;DR: The vertex cover and feedback vertex set problems for undirected graphs are considered, and it is shown that they are structurally closely related via the standard vector spaces associated with the graphs.

Proceedings ArticleDOI
01 Jul 1996
TL;DR: A new method is presented for deriving lower bounds to the expected number of queries made by noisy decision trees computing Boolean functions that has the feature that expectations are taken with respect to a uniformly distributed random input, as well as to the random noise, thus yielding stronger lower bounds.
Abstract: We present a new method for deriving lower bounds on the expected number of queries made by noisy decision trees computing Boolean functions. The new method has the feature that expectations are taken with respect to a uniformly distributed random input, as well as with respect to the random noise, thus yielding stronger lower bounds. It also applies to many more functions than do previous results. The method yields a simple proof of the result (previously established by Reischuk and Schmeltz) that almost all Boolean functions of n arguments require $\Omega(n \log n)$ queries, and strengthens this bound from the worst case over inputs to the average over inputs. The method also yields bounds for specific Boolean functions in terms of their spectra (their Fourier transforms). The simplest instance of this spectral bound yields the result (previously established by Feige, Peleg, Raghavan and Upfal) that the parity function of n arguments requires $\Omega(n \log n)$ queries, and again strengthens this bound from the worst case over inputs to the average over inputs. In its full generality, the spectral bound applies to the “highly resilient” functions introduced by Chor, Friedman, Goldreich, Håstad, Rudich and Smolensky, and it yields non-linear lower bounds whenever the resiliency is asymptotic to the number of arguments.

Journal ArticleDOI
TL;DR: Using this method, a linear-time algorithm for finding vertex-disjoint paths of a prescribed homotopy is derived and the algorithm is modified to solve the more general linkage problem in linear time, as well.
Abstract: In this paper we present a linear-time algorithm for the vertex-disjoint Two-Face Paths Problem in planar graphs, i.e., the problem of finding k vertex-disjoint paths between pairs of terminals which lie on two face boundaries. The algorithm is based on the idea of finding rightmost paths with a certain property in planar graphs. Using this method, a linear-time algorithm for finding vertex-disjoint paths of a prescribed homotopy is derived. Moreover, the algorithm is modified to solve the more general linkage problem in linear time, as well.

Journal ArticleDOI
TL;DR: It is believed that the idea of local optimality suggested in this paper can also be applied to other combinatorial problems such as the clique problem, the dominating set problem and the graph coloring problem.
Abstract: In this paper, we introduce a new notion of local optimality and demonstrate its application to the problem of finding optimal independent sets and vertex covers in k-claw free graphs. The maximum independent set problem in k-claw free graphs has interesting applications in the design of electronic testing fixtures for printed circuit boards [13]. For this problem, our concept of local optimality enabled us to devise an efficient heuristic algorithm which outperforms the currently best approximation algorithm by nearly a factor of two in terms of worst case bound. We believe that the idea of local optimality suggested in this paper can also be applied to other combinatorial problems such as the clique problem, the dominating set problem and the graph coloring problem.
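To give a flavor of a local-optimality-driven heuristic (our illustration, not the paper's algorithm), the sketch below grows an independent set greedily and then applies (2,1)-swaps, exchanging one chosen vertex for two admissible non-adjacent ones:

```python
# Illustration of a simple local-search flavor for independent sets (not
# the paper's heuristic): greedy construction followed by (2,1)-swaps.
from itertools import combinations

def local_search_independent_set(adj):
    """adj: dict mapping each vertex to a set of its neighbors."""
    indep = set()
    for v in adj:                      # greedy initial solution
        if not adj[v] & indep:
            indep.add(v)
    improved = True
    while improved:
        improved = False
        for v in list(indep):
            # Vertices whose only chosen neighbor is v.
            cands = [u for u in adj
                     if u not in indep and adj[u] & indep == {v}]
            for a, b in combinations(cands, 2):
                if b not in adj[a]:    # a and b are mutually independent
                    indep.remove(v)    # swap v out, a and b in
                    indep.update((a, b))
                    improved = True
                    break
            if improved:
                break
    return indep

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(local_search_independent_set(adj))
```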

Journal ArticleDOI
TL;DR: The algorithm is a special implementation of the primal simplex algorithm applied to the linear programming statement of the problem, using a simple node labeling scheme, and has a strongly polynomial complexity.

03 Oct 1996
TL;DR: A fully dynamic approximation algorithm for bin packing, MMP, is given that is $\frac{5}{4}$-competitive and requires $\Theta(\log n)$ time per Insert or Delete of an item.
Abstract: In this dissertation, we study fully dynamic approximation algorithms. In general, fully dynamic algorithms model situations where the problem instance changes (slowly) over time. In this context, we study fully dynamic algorithms that incorporate Insert and Delete operations, and certain (problem dependent) queries. We consider a fully dynamic algorithm efficient if its running time, either uniform or amortized over a sequence of operations or queries, is asymptotically faster than a repeated execution of the best (known) off-line algorithm after every change. Fully dynamic approximation algorithms maintain approximate solutions within a constant multiplicative factor, called the competitive ratio, from an optimal solution under a sequence of Insert and Delete operations and queries. The running times of interest must be faster than mere recomputation of the entire solution after each operation or query via the best off-line algorithms. Competitive ratios of fully dynamic approximation algorithms should be (nearly) as good as those of the best off-line algorithms. We first study fully dynamic approximation algorithms for vertex cover, a classic NP-complete minimization problem on graphs. We present $A_1$, a fully dynamic approximation algorithm for vertex cover. We further provide for a generalization of this algorithm and present a family of algorithms $A_k$, $k \geq 1$. Algorithms $A_k$ support Inserts and Deletes of edges. Each $A_k$ requires ${\cal O}\big((v+e)^{\frac{1+\sqrt{1+4(k+1)(2k+3)}}{2(2k+3)}}\big)$ amortized running time per Insert/Delete operation. It follows that this amortized running time may be made arbitrarily close to ${\cal O}\big((v+e)^{\sqrt{2}/2}\big)$. Each of the algorithms $A_k$ is 2-competitive, thereby matching the competitive ratio of the best existing off-line approximation algorithms for vertex cover. The algorithms $A_k$ are the first known fully dynamic approximation algorithms for vertex cover. We then study fully dynamic approximation algorithms for bin packing, another classic NP-complete minimization problem. Our main result is a fully dynamic approximation algorithm for bin packing, MMP, that is $\frac{5}{4}$-competitive and requires $\Theta(\log n)$ time per Insert or Delete of an item. This competitive ratio of $\frac{5}{4}$ is nearly as good as that of the best practical off-line algorithms. Further, in the case where there are no Deletes of items, we provide an approximation scheme such that for any competitive ratio exceeding 1, there is an algorithm having that competitive ratio and an amortized running time of $\Theta(\log n)$ per Insert operation. MMP is the first known fully dynamic approximation algorithm for bin packing. Some of the techniques developed in the course of designing the fully dynamic algorithms for bin packing lead to the development of LINBP, a practical $\frac{4}{3}$-competitive off-line algorithm for linear-time bin packing. This algorithm significantly decreases the gap between the best practical linear-time bin packing approximation algorithms and the existing linear-time polynomial-time approximation schemes. For a bounded bin packing problem, where the item sizes are from $(\frac{1}{3}, 1]$, we present a family of linear-time approximation algorithms $A_\epsilon$ with a competitive ratio of $\frac{5+\epsilon}{4}$ for arbitrarily small $\epsilon > 0$.
Finally, we note that there is a simple parallel version of LINBP that is optimal and requires ${\cal O}(\log n \log^* n)$ time and n processors on the EREW PRAM.
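The 2-competitive guarantee for dynamic vertex cover matches what one gets by maintaining a maximal matching and reporting the endpoints of matched edges (every edge then touches the cover). The sketch below is our illustration of that idea with simple local repair on deletions, not the dissertation's algorithms $A_k$:

```python
# Illustration of 2-competitive fully dynamic vertex cover (not the
# dissertation's A_k): maintain a maximal matching under edge updates and
# report both endpoints of matched edges as the cover.

class DynamicVertexCover:
    def __init__(self):
        self.adj = {}    # vertex -> set of neighbors
        self.mate = {}   # matched vertex -> its partner

    def _try_match(self, u):
        """Match a freed vertex u to any free neighbor, restoring maximality."""
        if u in self.mate:
            return
        for v in self.adj.get(u, ()):
            if v not in self.mate:
                self.mate[u] = v
                self.mate[v] = u
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v           # both free: the new edge is matched
            self.mate[v] = u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:      # a matched edge vanished: repair locally
            del self.mate[u], self.mate[v]
            self._try_match(u)
            self._try_match(v)

    def cover(self):
        # Matched endpoints form a vertex cover since the matching is maximal.
        return set(self.mate)

d = DynamicVertexCover()
for e in [(0, 1), (1, 2), (2, 3)]:
    d.insert(*e)
print(d.cover())      # {0, 1, 2, 3}
d.delete(0, 1)
print(d.cover())      # {2, 3} still covers the remaining edges
```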

Proceedings ArticleDOI
Dae-Hyun Lee, Hoon Choi, Lae-Jeong Park, Cheol Hoon Park, Seung Ho Hwang
20 May 1996
TL;DR: A stochastic evolution algorithm is applied to solve the graph covering problem, in which a set of patterns that fully covers a subject graph with minimal cost is sought; the algorithm incorporates a tree matching algorithm at the initial solution generation stage for speed-up.
Abstract: A stochastic evolution algorithm is applied to solve the graph covering problem, in which a set of patterns that fully covers a subject graph with a minimal cost is sought. This problem is a typical constrained combinatorial optimization problem and is proven to be NP-complete. Many branch-and-bound algorithms with different heuristics have been proposed, but most of them cannot handle practical-sized problems like the technology mapping problem from the VLSI synthesis area. Our stochastic evolution algorithm is based on a problem-specific encoding scheme to reduce the size of the search space, and it incorporates a tree matching algorithm at the initial solution generation stage for speed-up. Experimental results show that the proposed algorithm produces good solutions within a reasonable amount of time.
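For orientation, a generic stochastic-evolution search loop in the spirit of Saab and Rao is sketched below; the paper's problem-specific encoding, cost function, and move generator are abstracted into placeholders (cost, perturb), so this is a hedged skeleton rather than the proposed algorithm.

```python
# Generic skeleton of a stochastic-evolution search loop (hedged sketch
# in the spirit of Saab and Rao, not the paper's algorithm). The encoding,
# cost function and move generator are problem-specific placeholders.
import random

def stochastic_evolution(init, cost, perturb, p0=2, rounds=200, seed=0):
    rng = random.Random(seed)
    best = cur = init
    best_cost = cur_cost = cost(cur)
    p = p0                           # acceptance slack for uphill moves
    for _ in range(rounds):
        cand = perturb(cur, rng)
        cand_cost = cost(cand)
        gain = cand_cost - cur_cost
        # Always accept improvements; accept mild uphill moves with slack p.
        if gain < rng.randint(0, p):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
                p = p0               # reset slack after an improvement
            else:
                p += 1               # stalling: loosen acceptance slightly
    return best, best_cost

# Toy use: minimize the number of 1-bits in a bit list via single flips.
def flip_one(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

print(stochastic_evolution([1] * 8, sum, flip_one))
```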

Proceedings ArticleDOI
18 Nov 1996
TL;DR: A generalized plural cover problem is defined, and an efficient algorithm for solving it is proposed.
Abstract: Location theory on networks is concerned with the problem of selecting the best location in a specified network for facilities. Many studies of this theory have been done. We have studied location theory from the standpoint of measuring the closeness between two vertices by the capacity (maximum flow value) between them. In a previous paper, we considered location problems called covering problems and proposed polynomial time algorithms for them. These problems are applicable to assigning files to computers in a computer network. This paper concerns a problem called the plural cover problem. We define a generalized plural cover problem and propose an efficient algorithm for it.

Dissertation
03 Jun 1996
TL;DR: In this dissertation, the edge cover time of a random walk on the path started at an endpoint is studied in terms of coefficients related to the Bernoulli Numbers of the Second Kind, and a tight bound of $(n-1)^2 + \Theta(n^2/\log n)$ is established.
Abstract: In recent years a great deal of attention has been focused on answering questions regarding the cover times of random walks on simple graphs. In this dissertation we answer questions about the edge cover time of such walks. We begin by reviewing many of the definitions and established results for vertex cover time. We present the little that has been published regarding bounds on the edge cover time. Having completed this review, we establish a new, and sometimes more useful, global upper bound for edge cover time. We then narrow our focus and consider the edge cover time of the path. We establish an exact description of the edge cover time for a random walk on the path started at an endpoint in terms of coefficients related to the Bernoulli Numbers of the Second Kind. Studying these coefficients carefully allows us to develop a tight bound on this cover time of $(n-1)^2 + \Theta(n^2/\log n)$. Using these results, and generalizing, provides a description of the edge cover time for walks on the path started from an arbitrary vertex. This generalization gives us a bound of $(5/4)(n-1)^2 + O(n^2/\log n)$ for the edge cover time for the path. Having established a tight bound for walks on paths, we then focus on other trees. We prove that the edge cover time for a random walk started from the center of a star graph minimizes edge cover time for walks on all trees on n vertices. We also establish the fact that in all graphs the edge cover time for a walk started from a leaf is always greater than for a walk started from its point of attachment. We continue our study of trees by establishing a global upper bound on the edge cover time for all trees and use it to study balanced k-ary trees. Finally, we show the connection between our previous developments for the edge cover time for paths and that of the edge cover time on the cycle.
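The $(n-1)^2$ leading term is easy to probe empirically; the following Monte Carlo sketch (our illustration, not from the dissertation) estimates the expected edge cover time of a walk started at an endpoint of the n-vertex path:

```python
# Monte Carlo probe of the result above (illustrative, not from the
# dissertation): estimate the expected edge cover time of a random walk
# on the n-vertex path started at an endpoint; compare with (n-1)^2.
import random

def edge_cover_time_path(n, rng):
    """Steps until every edge of the path 0-1-...-(n-1) is traversed."""
    pos, seen, steps = 0, set(), 0
    while len(seen) < n - 1:
        if pos == 0:
            nxt = 1                       # reflect at the left endpoint
        elif pos == n - 1:
            nxt = n - 2                   # reflect at the right endpoint
        else:
            nxt = pos + rng.choice((-1, 1))
        seen.add(frozenset((pos, nxt)))   # record the traversed edge
        pos = nxt
        steps += 1
    return steps

rng = random.Random(1)
n, trials = 20, 2000
avg = sum(edge_cover_time_path(n, rng) for _ in range(trials)) / trials
print(avg, (n - 1) ** 2)   # empirical mean vs. the (n-1)^2 leading term
```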