
Showing papers by "Adam Tauman Kalai published in 2007"


Proceedings ArticleDOI
11 Jun 2007
TL;DR: This work shows how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with a polynomial blowup in runtime, and combines Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm.
Abstract: In an online linear optimization problem, on each period t, an online algorithm chooses st ∈ S from a fixed (possibly infinite) set S of feasible decisions. Nature (who may be adversarial) chooses a weight vector wt ∈ Rn, and the algorithm incurs cost c(st,wt), where c is a fixed cost function that is linear in the weight vector. In the full-information setting, the vector wt is then revealed to the algorithm, and in the bandit setting, only the cost experienced, c(st,wt), is revealed. The goal of the online algorithm is to perform nearly as well as the best fixed s ∈ S in hindsight. Many repeated decision-making problems with weights fit naturally into this framework, such as online shortest-path, online TSP, online clustering, and online weighted set cover. Previously, it was shown how to convert any efficient exact offline optimization algorithm for such a problem into an efficient online algorithm in both the full-information and the bandit settings, with average cost nearly as good as that of the best fixed s ∈ S in hindsight. However, in the case where the offline algorithm is an approximation algorithm with ratio α > 1, the previous approach only worked for special types of approximation algorithms. We show how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with a polynomial blowup in runtime. If the offline algorithm has an α-approximation guarantee, then the expected cost of the online algorithm on any sequence is not much larger than α times that of the best s ∈ S, where the best is chosen with the benefit of hindsight. Our main innovation is combining Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm.
Standard techniques generalize the above result to the bandit setting, except that a "Barycentric Spanner" for the problem is also (provably) necessary as input. Our algorithm can also be viewed as a method for playing large repeated games, where one can only compute approximate best-responses, rather than exact best-responses.
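The full-information ingredient here is Zinkevich's projected online gradient descent. A minimal sketch, assuming a Euclidean ball as the convex decision set S (the function names and step size below are illustrative, not from the paper):

```python
import numpy as np

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto a ball of the given radius (a simple
    stand-in for projection onto a general convex decision set S)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(weight_vectors, dim, eta=0.1, radius=1.0):
    """Zinkevich-style online gradient descent for linear costs c(s, w) = s . w
    in the full-information setting: play s_t, incur cost s_t . w_t, observe
    w_t, then take a projected gradient step. Returns the total cost incurred."""
    s = np.zeros(dim)
    total_cost = 0.0
    for w in weight_vectors:
        total_cost += float(s @ w)                 # cost of this period's decision
        s = project_to_ball(s - eta * w, radius)   # gradient of s . w in s is w
    return total_cost
```

Against a constant weight sequence, for instance, the iterates drift toward the best fixed decision in the ball; with an appropriately decaying step size the average regret vanishes as T grows.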

61 citations


Posted Content
TL;DR: In this paper, the authors show that in any equilibrium where not all types of the agent choose the same action, the average productivity of an agent choosing a less informative action is greater.
Abstract: Consider an agent (manager, artist, etc.) who has imperfect private information about his productivity. At the beginning of his career (period 1, “short run”), the agent chooses among publicly observable actions that generate imperfect signals of his productivity. The actions can be ranked according to the informativeness of the signals they generate. The market observes the agent’s action and the signal generated by it, and pays a wage equal to his expected productivity. In period 2 (the “long run”), the agent chooses between a constant payoff and a wage proportional to his true productivity, and the game ends. We show that in any equilibrium where not all types of the agent choose the same action, the average productivity of an agent choosing a less informative action is greater. However, the types choosing that action are not uniformly higher. In particular, we derive conditions for the existence of a tripartite equilibrium where low and high types pool on a less informative action while medium (on average, lower) types choose to send a more informative signal.

5 citations


Book ChapterDOI
13 Jun 2007
TL;DR: This work shows that two rich classes of real-valued functions are learnable in the probabilistic-concept framework of Kearns and Schapire, and gives an efficient algorithm that provably learns nested halfspace functions on the unit ball.
Abstract: Predicting class probabilities and other real-valued quantities is often more useful than binary classification, but comparatively little work in PAC-style learning addresses this issue. We show that two rich classes of real-valued functions are learnable in the probabilistic-concept framework of Kearns and Schapire. Let X be a subset of Euclidean space and f be a real-valued function on X. We say f is a nested halfspace function if, for each real threshold t, the set {x ∈ X | f(x) ≤ t} is a halfspace. This broad class of functions includes binary halfspaces with a margin (e.g., SVMs) as a special case. We give an efficient algorithm that provably learns (Lipschitz-continuous) nested halfspace functions on the unit ball. The sample complexity is independent of the number of dimensions. We also introduce the class of uphill decision trees, which are real-valued decision trees (sometimes called regression trees) in which the sequence of leaf values is non-decreasing. We give an efficient algorithm for provably learning uphill decision trees whose sample complexity is polynomial in the number of dimensions but independent of the size of the tree (which may be exponential). Both of our algorithms employ a real-valued extension of Mansour and McAllester's boosting algorithm.
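To make the two function classes concrete, here is a toy sketch with hypothetical instances (not the paper's learning algorithms): a linear function, whose sublevel sets are halfspaces by definition, and a small regression tree whose left-to-right leaf sequence is non-decreasing.

```python
import numpy as np

# Nested halfspace function, toy instance: for f(x) = w . x, every
# sublevel set {x : f(x) <= t} is the halfspace {x : w . x <= t}.
w = np.array([2.0, -1.0])
f = lambda x: float(w @ np.asarray(x, dtype=float))

def in_sublevel(t, x):
    """Membership test for the sublevel set {x : f(x) <= t}."""
    return f(x) <= t

# Uphill decision tree, toy instance: a regression tree whose leaf
# values, read left to right, are non-decreasing (0.1 <= 0.4 <= 0.9).
tree = ("x0 <= 0.5",
        ("x1 <= 0.5", 0.1, 0.4),  # left subtree with leaves 0.1 and 0.4
        0.9)                      # rightmost leaf

def leaves(node):
    """Leaf values in left-to-right order."""
    if not isinstance(node, tuple):
        return [node]
    _, left, right = node
    return leaves(left) + leaves(right)

def is_uphill(node):
    vals = leaves(node)
    return all(a <= b for a, b in zip(vals, vals[1:]))
```

Note that the sublevel sets nest automatically: if f(x) ≤ t1 and t1 ≤ t2, then f(x) ≤ t2, which is why the class is called "nested" halfspace functions.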

4 citations