
Showing papers by "Abraham D. Flaxman" published in 2005


Proceedings ArticleDOI
23 Jan 2005
TL;DR: It is possible to run gradient descent while observing nothing more than the value of each cost function at a single point, and the guarantees hold even in the most general case: online against an adaptive adversary.
Abstract: We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c_1, c_2, ..., and in each period, we choose a feasible point x_t in S and learn the cost c_t(x_t). If the function c_t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min_x Σ_t c_t(x). We extend this to the "bandit" setting, where, in each period, only the cost c_t(x_t) is revealed, and bound the expected regret as O(n^{3/4}). Our approach uses a simple approximation of the gradient that is computed from evaluating c_t at a single (random) point. We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. The guarantees hold even in the most general case: online against an adaptive adversary. For the online linear optimization problem [15], algorithms with low regret in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period.
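
The one-point gradient estimate at the heart of this result can be sketched concretely. The sketch below is illustrative only, not the paper's implementation: the projection operator, step size, and perturbation radius are hypothetical placeholders, and the constants do not match the analysis.

```python
import numpy as np

def bandit_gradient_descent(project, cost_oracle, d, T, eta=0.01, delta=0.1):
    """One-point bandit gradient descent (illustrative sketch).

    project(z): maps a point back into the (slightly shrunk) feasible set S
    cost_oracle(t, y): returns only the scalar bandit feedback c_t(y)
    d: dimension of the decision space
    """
    x = project(np.zeros(d))              # current iterate, kept inside S
    total_cost = 0.0
    for t in range(T):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)            # uniform random direction on the unit sphere
        y = x + delta * u                 # play a slightly perturbed point
        cost = cost_oracle(t, y)          # the only information revealed this period
        total_cost += cost
        g_hat = (d / delta) * cost * u    # biased one-point estimate of the gradient
        x = project(x - eta * g_hat)      # gradient step, then project back into S
    return total_cost
```

In expectation g_hat is the gradient of a smoothed version of c_t, which is what lets the usual gradient-descent analysis go through despite the bias.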

636 citations


Book ChapterDOI
TL;DR: In this paper, it was shown that the largest k eigenvalues of the adjacency matrix of the preferential attachment graph have λ_k = (1 ± o(1)) Δ_k^{1/2} whp.
Abstract: The preferential attachment graph is a random graph formed by adding a new vertex at each time step, with a single edge which points to a vertex selected at random with probability proportional to its degree. Every m steps the most recently added m vertices are contracted into a single vertex, so at time t there are roughly t/m vertices and exactly t edges. This process yields a graph which has been proposed as a simple model of the world wide web [BA99]. For any constant k, let Δ_1 ≥ Δ_2 ≥ ⋯ ≥ Δ_k be the degrees of the k highest degree vertices. We show that at time t, for any function f with f(t)→ ∞ as t→ ∞, \(\frac{t^{1/2}}{f(t)} \leq \Delta_1 \leq t^{1/2}f(t),\) and for i = 2,..., k, \(\frac{t^{1/2}}{f(t)} \leq \Delta_i \leq \Delta_{i-1} - \frac{t^{1/2}}{f(t)},\) with high probability (whp). We use this to show that at time t the largest k eigenvalues of the adjacency matrix of this graph have λ_k = (1 ± o(1)) Δ_k^{1/2} whp.
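
A short simulation of the process described above may help build intuition; it is an assumption-laden toy, not code from the paper: the seed graph (a single vertex with a self-loop) and the bookkeeping are chosen for brevity.

```python
import random
from collections import Counter

def contracted_preferential_attachment(t_steps, m, k=3, seed=0):
    """Simulate the contracted preferential-attachment process (illustrative sketch).

    Each step adds one vertex with a single edge whose other endpoint is chosen
    with probability proportional to degree; every m steps the last m added
    vertices are merged into one.  Returns the k largest degrees at time t.
    """
    random.seed(seed)
    endpoints = [0, 0]          # each edge contributes both endpoints; vertex 0 starts with a self-loop
    label = {0: 0}              # raw vertex id -> representative after contractions
    for step in range(1, t_steps + 1):
        target = random.choice(endpoints)   # uniform over endpoint slots = degree-proportional
        label[step] = step
        endpoints.extend([step, target])
        if step % m == 0:                   # contract the last m added vertices into one
            rep = step - m + 1
            for v in range(rep, step + 1):
                label[v] = rep
    degree = Counter(label[v] for v in endpoints)
    return sorted(degree.values(), reverse=True)[:k]

# Example: per the result above, the largest degree should grow roughly like t^{1/2}.
print(contracted_preferential_attachment(t_steps=100000, m=4))
```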

64 citations


Book ChapterDOI
15 Dec 2005
TL;DR: It is shown that on the equal revenue input, where any sale price gives the same revenue, random sampling is exactly a factor of four from optimal.
Abstract: We give a simple analysis of the competitive ratio of the random sampling auction from [10]. The random sampling auction was first shown to be worst-case competitive in [9] (with a bound of 7600 on its competitive ratio); our analysis improves the bound to 15. In support of the conjecture that the random sampling auction is in fact 4-competitive, we show that on the equal revenue input, where any sale price gives the same revenue, random sampling is exactly a factor of four from optimal.
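
The auction and the equal-revenue input are simple enough to simulate. The sketch below is an assumption: it implements the standard random-sampling auction (split bidders in half, offer each half the other half's optimal single price) and an equal-revenue bid vector v_i = 1/i, which may differ in details from the paper's benchmark.

```python
import random

def rsop_revenue(bids, rng):
    """Random Sampling Optimal Price auction (illustrative sketch)."""
    a, b = [], []
    for v in bids:                                    # uniform random bipartition
        (a if rng.random() < 0.5 else b).append(v)
    def best_price(side):
        # the price (taken from the side's own bids) maximizing that side's revenue
        return max(side, key=lambda p: p * sum(1 for v in side if v >= p),
                   default=float('inf'))
    def offer(price, buyers):
        return price * sum(1 for v in buyers if v >= price)
    return offer(best_price(a), b) + offer(best_price(b), a)

# Equal-revenue input: bidder i bids 1/i, so every sale price 1/i earns revenue 1.
rng = random.Random(0)
bids = [1.0 / i for i in range(1, 1001)]
avg = sum(rsop_revenue(bids, rng) for _ in range(500)) / 500
print("optimal fixed-price revenue: 1.0, average RSOP revenue:", avg)
```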

49 citations


Book ChapterDOI
24 Feb 2005
TL;DR: To the best of our knowledge, this is the first algorithm working efficiently beyond the magnitude bound of $\mathcal{O}(\log n)$, thus narrowing the interval of hard-to-solve SSP instances.
Abstract: The subset sum problem (SSP) (given n numbers and a target bound B, find a subset of the numbers summing to B) is a classic NP-hard problem. The hardness of SSP varies greatly with the density of the problem. In particular, when m, the logarithm of the largest input number, is at least c · n for some constant c, the problem can be solved by a reduction to finding a short vector in a lattice. On the other hand, when $m=\mathcal{O}(\log n)$ the problem can be solved in polynomial time using dynamic programming or other algorithms specially designed for dense instances. However, as far as we are aware, all known algorithms for dense SSP take at least Ω(2^m) time, and no polynomial time algorithm is known which solves SSP when m = ω(log n) (and m = o(n)). We present an expected polynomial time algorithm for solving uniformly random instances of the subset sum problem over the domain ℤ_M, with $m=\mathcal{O}((\log n)^{2})$. To the best of our knowledge, this is the first algorithm working efficiently beyond the magnitude bound of $\mathcal{O}(\log n)$, thus narrowing the interval of hard-to-solve SSP instances.
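
For context, the dense-regime baseline mentioned above (polynomial time when m = O(log n)) is the textbook pseudo-polynomial dynamic program; the sketch below shows that baseline, not the paper's new algorithm for m = O((log n)^2).

```python
def subset_sum_dp(numbers, target):
    """Pseudo-polynomial dynamic program for subset sum (illustrative sketch).

    Runs in O(n * target) time, hence polynomial when the numbers have
    m = O(log n) bits.  Returns a subset summing to target, or None.
    """
    reachable = {0: None}              # achievable sum -> index of the last number used
    for idx, a in enumerate(numbers):
        new_sums = {}
        for s in reachable:
            t = s + a
            if t <= target and t not in reachable:
                new_sums[t] = idx      # record the first time this sum becomes reachable
        reachable.update(new_sums)
    if target not in reachable:
        return None
    subset, s = [], target             # walk the recorded choices back down to 0
    while s != 0:
        idx = reachable[s]
        subset.append(numbers[idx])
        s -= numbers[idx]
    return subset

print(subset_sum_dp([3, 34, 4, 12, 5, 2], target=9))   # -> [5, 4]
```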

41 citations


Proceedings ArticleDOI
23 Jan 2005
TL;DR: In this paper, the authors study a dynamically evolving random graph which adds vertices and edges using preferential attachment and is "attacked by an adversary", where the adversary is allowed to delete vertices.
Abstract: We study a dynamically evolving random graph which adds vertices and edges using preferential attachment and is "attacked by an adversary". At time t, we add a new vertex x_t and m random edges incident with x_t, where m is constant. The neighbors of x_t are chosen with probability proportional to degree. After adding the edges, the adversary is allowed to delete vertices. The only constraint on the adversarial deletions is that the total number of vertices deleted by time n must be no larger than δn, where δ is a constant. We show that if δ is sufficiently small then with high probability at time n the generated graph has a component of size Ω(n).
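
One way to get a feel for this robustness statement is to simulate the process against a concrete adversary. The sketch below is purely illustrative: it uses networkx, a greedy "delete the highest-degree vertex" adversary, and an approximate degree-proportional sampler, none of which come from the paper.

```python
import random
import networkx as nx

def attacked_preferential_attachment(n, m=3, delta=0.05, seed=0):
    """Preferential attachment under adversarial vertex deletions (illustrative sketch).

    Each step adds a vertex with m degree-proportional edges; an example adversary
    occasionally deletes the current highest-degree vertex, up to delta*n deletions
    in total.  Returns the size of the largest surviving component.
    """
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)                      # small seed graph with positive degrees
    endpoints = [v for e in G.edges() for v in e]     # degree-proportional sampling pool
    deletions_left = int(delta * n)
    for t in range(m + 1, n):
        targets = []
        while len(targets) < m:
            u = rng.choice(endpoints)                 # approximate: ignores deleted mass
            if G.has_node(u):
                targets.append(u)
        for u in targets:
            G.add_edge(t, u)
            endpoints.extend([t, u])
        if deletions_left > 0 and rng.random() < delta:
            victim = max(G.degree, key=lambda nd: nd[1])[0]
            G.remove_node(victim)
            deletions_left -= 1
    return max(len(c) for c in nx.connected_components(G))

print(attacked_preferential_attachment(n=20000))      # largest component size for one run
```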

32 citations


Journal ArticleDOI
TL;DR: The game is analyzed in the offline and online settings, for both arbitrary and random instances, which allows for interesting comparisons; for arbitrary instances, the competitive ratio (the best possible offline solution value divided by the best possible online solution value) is shown to be large.
Abstract: Consider a game in which edges of a graph are provided a pair at a time, and the player selects one edge from each pair, attempting to construct a graph with a component as large as possible. This game is in the spirit of recent papers on avoiding a giant component, but here we embrace it. We analyze this game in the offline and online setting, for arbitrary and random instances, which provides for interesting comparisons. For arbitrary instances, we find that the competitive ratio (the best possible solution value divided by the best possible online solution value) is large. For "sparse" random instances the competitive ratio is also large, with high probability (whp): If the instance has (1/4)(1 + ε)n random edge pairs, with 0 < ε ≤ 0.003, then any online algorithm generates a component of size O((log n)^{3/2}) whp, while the optimal offline solution contains a component of size Ω(n) whp. For "dense" random instances, the average-case competitive ratio is much smaller. If the instance has (1/2)(1 − ε)n random edge pairs, with 0 < ε ≤ 0.015, we give an online algorithm which finds a component of size Ω(n) whp.
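
A natural online strategy for this game is easy to state in code. The sketch below is an assumption, not the algorithm analyzed in the paper: from each pair it keeps the edge whose endpoints lie in the largest components, using a union-find structure to track component sizes.

```python
class DSU:
    """Union-find with component sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def greedy_online_max_component(n, edge_pairs):
    """Pick one edge per pair, greedily favoring large components (illustrative sketch)."""
    dsu = DSU(n)
    for (a1, b1), (a2, b2) in edge_pairs:
        # combined component sizes (double counted if the endpoints already share a component)
        gain1 = dsu.size[dsu.find(a1)] + dsu.size[dsu.find(b1)]
        gain2 = dsu.size[dsu.find(a2)] + dsu.size[dsu.find(b2)]
        if gain1 >= gain2:
            dsu.union(a1, b1)
        else:
            dsu.union(a2, b2)
    return max(dsu.size[dsu.find(v)] for v in range(n))
```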

31 citations


Proceedings ArticleDOI
23 Jan 2005
TL;DR: The directed version of the problem is discussed, where the task is to construct a spanning out‐arborescence rooted at a fixed vertex r, and it is shown that in this case a simple variant of the threshold heuristic gives the asymptotically optimal value 1 − 1/e + o(1).
Abstract: It is known [7] that if the edge costs of the complete graph K_n are independent random variables, uniformly distributed between 0 and 1, then the expected cost of the minimum spanning tree is asymptotically equal to ζ(3) = Σ_{i=1}^{∞} i^{-3}. Here we consider the following stochastic two-stage version of this optimization problem. There are two sets of edge costs c_M: E → R and c_T: E → R, called Monday's prices and Tuesday's prices, respectively. For each edge e, both costs c_M(e) and c_T(e) are independent random variables, uniformly distributed in [0, 1]. The Monday costs are revealed first. The algorithm has to decide on Monday for each edge e whether to buy it at Monday's price c_M(e), or to wait until its Tuesday price c_T(e) appears. The set of edges X_M bought on Monday is then completed by the set of edges X_T bought on Tuesday to form a spanning tree. If both Monday's and Tuesday's prices were revealed simultaneously, then the optimal solution would have expected cost ζ(3)/2 + o(1). We show that in the case of two-stage optimization, the expected value of the optimal cost exceeds ζ(3)/2 by an absolute constant ε > 0. We also consider a threshold heuristic, where the algorithm buys on Monday only edges of cost less than α and completes them on Tuesday in an optimal way, and show that the optimal choice for α is α = 1/n with the expected cost ζ(3) - 1/2 + o(1). The threshold heuristic is shown to be sub-optimal. Finally we discuss the directed version of the problem, where the task is to construct a spanning out-arborescence rooted at a fixed vertex r, and show, somewhat surprisingly, that in this case a simple variant of the threshold heuristic gives the asymptotically optimal value 1 - 1/e + o(1).
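
The threshold heuristic is simple enough to simulate directly. The sketch below makes assumptions about the details: it buys every Monday edge below the threshold (even redundant ones) and then completes the bought forest with a Kruskal pass over Tuesday prices, which may not match the paper's exact completion rule.

```python
import random

def two_stage_mst_threshold(n, alpha=None, seed=0):
    """Threshold heuristic for the two-stage random MST problem (illustrative sketch).

    Monday: buy every edge whose Monday price is below alpha (default 1/n).
    Tuesday: complete the bought edges to a spanning tree via Kruskal on Tuesday prices.
    Returns the total cost paid.
    """
    rng = random.Random(seed)
    if alpha is None:
        alpha = 1.0 / n
    edges = [(u, v, rng.random(), rng.random())        # (u, v, Monday price, Tuesday price)
             for u in range(n) for v in range(u + 1, n)]

    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cost, tuesday_pool = 0.0, []
    for u, v, cm, ct in edges:
        if cm < alpha:
            cost += cm                                  # buy at Monday's price
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
        else:
            tuesday_pool.append((ct, u, v))
    for ct, u, v in sorted(tuesday_pool):               # Kruskal completion on Tuesday
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            cost += ct
    return cost

# Per the abstract, for large n the average cost should approach ζ(3) - 1/2 ≈ 0.70.
print(sum(two_stage_mst_threshold(200, seed=s) for s in range(20)) / 20)
```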

25 citations


Proceedings ArticleDOI
22 May 2005
TL;DR: This paper analyzes the performance of 3 related approximation algorithms for the uncapacitated facility location problem and finds that, with high probability, these 3 algorithms do not find asymptotically optimal solutions, while a simple plane partitioning heuristic does find an asymptotically optimal solution.
Abstract: In combinatorial optimization, a popular approach to NP-hard problems is the design of approximation algorithms. These algorithms typically run in polynomial time and are guaranteed to produce a solution which is within a known multiplicative factor of optimal. Unfortunately, the known factor is often large, driven by pathological instances. Conventional wisdom holds that, in practice, approximation algorithms will produce solutions closer to optimal than their proven guarantees. In this paper, we use the rigorous-analysis-of-heuristics framework to investigate this conventional wisdom. We analyze the performance of 3 related approximation algorithms for the uncapacitated facility location problem (from [Jain, Mahdian, Markakis, Saberi, Vazirani, 2003] and [Mahdian, Ye, Zhang, 2002]) when each is applied to an instance created by placing n points uniformly at random in the unit square. We find that, with high probability, these 3 algorithms do not find asymptotically optimal solutions, and, also with high probability, a simple plane partitioning heuristic does find an asymptotically optimal solution.
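
The plane partitioning heuristic mentioned in the abstract can be sketched as follows. The grid resolution and the facility-cost parameter below are illustrative assumptions, not the constants from the paper.

```python
import math
import random

def plane_partition_ufl(points, facility_cost):
    """Plane-partitioning heuristic for facility location in the unit square (sketch).

    Cut the unit square into a grid of square cells, open one facility at the
    center of every non-empty cell, and connect each point to its cell's facility.
    Returns total cost = opening costs + connection distances.
    """
    side = min(1.0, math.sqrt(facility_cost))          # heuristic cell side length
    cells = {}
    for x, y in points:
        cells.setdefault((int(x / side), int(y / side)), []).append((x, y))
    total = 0.0
    for (i, j), members in cells.items():
        cx, cy = (i + 0.5) * side, (j + 0.5) * side    # facility at the cell center
        total += facility_cost
        total += sum(math.hypot(x - cx, y - cy) for x, y in members)
    return total

# Example: n uniform random points, with a facility cost that shrinks with n.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(10000)]
print(plane_partition_ufl(pts, facility_cost=1.0 / math.sqrt(len(pts))))
```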

9 citations