Open Access Proceedings Article DOI

On the random 2-stage minimum spanning tree

TLDR
The directed version of the problem is discussed, where the task is to construct a spanning out-arborescence rooted at a fixed vertex r, and it is shown that in this case a simple variant of the threshold heuristic gives the asymptotically optimal value 1 − 1/e + o(1).
Abstract
It is known [7] that if the edge costs of the complete graph K_n are independent random variables, uniformly distributed between 0 and 1, then the expected cost of the minimum spanning tree is asymptotically equal to ζ(3) = Σ_{i=1}^∞ i^{-3}. Here we consider the following stochastic two-stage version of this optimization problem. There are two sets of edge costs c_M : E → R and c_T : E → R, called Monday's prices and Tuesday's prices, respectively. For each edge e, both costs c_M(e) and c_T(e) are independent random variables, uniformly distributed in [0, 1]. The Monday costs are revealed first. The algorithm has to decide on Monday for each edge e whether to buy it at Monday's price c_M(e) or to wait until its Tuesday price c_T(e) appears. The set of edges X_M bought on Monday is then completed by the set of edges X_T bought on Tuesday to form a spanning tree. If both Monday's and Tuesday's prices were revealed simultaneously, then the optimal solution would have expected cost ζ(3)/2 + o(1). We show that in the case of two-stage optimization, the expected value of the optimal cost exceeds ζ(3)/2 by an absolute constant ε > 0. We also consider a threshold heuristic, where the algorithm buys on Monday only edges of cost less than α and completes them on Tuesday in an optimal way, and show that the optimal choice for α is α = 1/n, with expected cost ζ(3) − 1/2 + o(1). The threshold heuristic is shown to be sub-optimal. Finally we discuss the directed version of the problem, where the task is to construct a spanning out-arborescence rooted at a fixed vertex r, and show, somewhat surprisingly, that in this case a simple variant of the threshold heuristic gives the asymptotically optimal value 1 − 1/e + o(1).
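Below is a minimal simulation sketch of the threshold heuristic described in the abstract, assuming independent uniform [0, 1] Monday and Tuesday costs on the complete graph K_n. Two simplifications are assumptions of this sketch, not statements from the paper: Monday's purchase is restricted to a Kruskal-style forest of the edges cheaper than α (so no money is spent on cycle-closing edges), and the Tuesday completion is computed by running Kruskal on Tuesday prices starting from the Monday components. The function name `threshold_heuristic` and all parameters are illustrative.

```python
# Simulation sketch of the two-stage threshold heuristic (see caveats above).
import random

def find(parent, x):
    # Union-find root lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def threshold_heuristic(n, alpha, seed=0):
    rng = random.Random(seed)
    # Each edge of K_n gets an independent Monday and Tuesday cost in [0, 1].
    edges = [(i, j, rng.random(), rng.random())
             for i in range(n) for j in range(i + 1, n)]
    parent = list(range(n))
    cost = 0.0
    # Monday: greedily buy cheap edges (c_M < alpha) that do not close a cycle.
    for u, v, c_mon, _ in sorted(edges, key=lambda e: e[2]):
        if c_mon >= alpha:
            break
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            cost += c_mon
    # Tuesday: complete the Monday forest into a spanning tree at Tuesday prices,
    # i.e. Kruskal over the contracted Monday components.
    for u, v, _, c_tue in sorted(edges, key=lambda e: e[3]):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            cost += c_tue
    return cost

# Example: average two-stage cost for n = 200 with the threshold alpha = 1/n.
n = 200
print(sum(threshold_heuristic(n, 1.0 / n, seed=s) for s in range(20)) / 20)
```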


Citations
Book Chapter DOI

Sampling bounds for stochastic optimization

TL;DR: A different proof is given, based on earlier methods of Kleywegt, Shapiro, Homem-de-Mello, and others, that a polynomial number of samples suffices for the sample average approximation (SAA) method; the result also applies to integer programs.
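Since this summary leans on the SAA idea, here is a toy, hedged illustration of it: replace the expectation E[f(x, ξ)] by an empirical average over sampled scenarios and minimize that average over the candidate decisions. The decision set and cost function below (a newsvendor-style toy) are illustrative assumptions, not taken from the cited paper.

```python
# Toy sketch of the sample average approximation (SAA) idea.
import random

def saa_minimize(decisions, cost, sample_scenario, num_samples, seed=0):
    # Draw scenarios once, then minimize the empirical average cost.
    rng = random.Random(seed)
    scenarios = [sample_scenario(rng) for _ in range(num_samples)]
    def empirical_cost(x):
        return sum(cost(x, xi) for xi in scenarios) / num_samples
    return min(decisions, key=empirical_cost)

# Example: pick an order quantity x for demand uniform on [0, 1],
# with unit overage cost 1 and unit shortage cost 3.
best = saa_minimize(
    decisions=[i / 100 for i in range(101)],
    cost=lambda x, d: max(x - d, 0.0) + 3.0 * max(d - x, 0.0),
    sample_scenario=lambda rng: rng.random(),
    num_samples=1000,
)
print(best)  # close to the 0.75 quantile of demand for these costs
```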
Proceedings Article DOI

Stochastic minimum spanning trees in Euclidean spaces

TL;DR: In a general metric space, the tail bounds of the distribution of the MST length cannot be approximated to within any multiplicative factor in polynomial time, assuming P ≠ NP.
Book Chapter DOI

On two-stage stochastic minimum spanning trees

TL;DR: For the two-stage stochastic optimization formulation with finite scenarios, a simple iterative randomized rounding method on a natural LP formulation of the problem yields a nearly best-possible approximation algorithm.
Journal Article DOI

Online stochastic optimization under time constraints

TL;DR: This paper considers online stochastic combinatorial optimization problems in which the uncertainties (i.e., which requests arrive and when) are characterized by distributions that can be sampled, and in which time constraints severely limit the number of offline optimizations that can be performed at decision time and/or between decisions.
Journal Article DOI

Commitment under uncertainty: Two-stage stochastic matching problems

TL;DR: Two versions of the bipartite matching problem are defined and studied in the framework of two-stage stochastic optimization with recourse; lower bounds are proved and efficient strategies are analyzed for both cases.
References
Book

Random Graphs

Book DOI

Introduction to Stochastic Programming

TL;DR: This textbook provides a first course in stochastic programming, suitable for students with a basic knowledge of linear programming, elementary analysis, and probability, and helps them develop an intuition for how to incorporate uncertainty into mathematical models.
Journal Article DOI

The birth of the giant component

TL;DR: In this paper, the authors derived the limiting distributions for the sparse connected components that are present when a random graph on n vertices has approximately n/2 edges, and showed that such a graph consists entirely of trees, unicyclic components, and bicyclic components with probability approaching √(2/3) cosh √(5/18) ≈ 0.9325; the limiting probability that it is planar lies between 0.987 and 0.9957 as n → ∞.
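As a quick sanity check of the constant quoted in this summary, the closed form √(2/3) · cosh(√(5/18)) can be evaluated numerically (a one-line check; nothing beyond the formula itself is assumed):

```python
# Numerical check of sqrt(2/3) * cosh(sqrt(5/18)).
import math
print(math.sqrt(2 / 3) * math.cosh(math.sqrt(5 / 18)))  # ≈ 0.9325
```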
Posted Content

The birth of the giant component

TL;DR: A “uniform” model of random graphs, which allows self-loops and multiple edges, is shown to lead to formulas that are substantially simpler than the analogous formulas for the classical random graphs of Erdős and Rényi.