Author

Guido Schäfer

Bio: Guido Schäfer is an academic researcher at Centrum Wiskunde & Informatica. He has contributed to research on topics including the price of anarchy and approximation algorithms, has an h-index of 24, and has co-authored 97 publications receiving 1,578 citations. His previous affiliations include Sapienza University of Rome and the University of Amsterdam.


Papers
Journal ArticleDOI
TL;DR: This paper presents the first polynomial-time approximation schemes for maximum-weight matching and maximum-weight matroid intersection with one additional budget constraint; the schemes exploit the adjacency relations on the solution polytope and, surprisingly, the solution to an old combinatorial puzzle.
Abstract: Many polynomial-time solvable combinatorial optimization problems become NP-hard if an additional complicating constraint is added to restrict the set of feasible solutions. In this paper, we consider two such problems, namely maximum-weight matching and maximum-weight matroid intersection with one additional budget constraint. We present the first polynomial-time approximation schemes for these problems. Similarly to other approaches for related problems, our schemes compute two solutions to the Lagrangian relaxation of the problem and patch them together to obtain a near-optimal solution. However, due to the richer combinatorial structure of the problems considered here, standard patching techniques do not apply. To circumvent this problem, we crucially exploit the adjacency relations on the solution polytope and, surprisingly, the solution to an old combinatorial puzzle.
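The Lagrangian idea from the abstract can be sketched on a toy instance: relax the budget into the objective with a multiplier λ, solve the unconstrained maximum-weight matching under modified weights w_e − λ·c_e, and binary-search λ. Everything below (the instance and the brute-force matching oracle) is invented for illustration; the paper's actual scheme additionally patches two Lagrangian solutions together, which this sketch omits.

```python
from itertools import combinations

# Toy instance: edges as (u, v, weight, cost), with one knapsack-style budget.
EDGES = [("a", "b", 10, 6), ("c", "d", 8, 5), ("a", "c", 7, 2), ("b", "d", 6, 2)]
BUDGET = 5

def all_matchings(edges):
    """Enumerate every matching (pairwise vertex-disjoint edge set) by brute force."""
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            verts = [x for e in sub for x in (e[0], e[1])]
            if len(verts) == len(set(verts)):
                yield list(sub)

def best_matching(edges, lam):
    """Maximum-weight matching under Lagrangian weights w_e - lam * c_e."""
    return max(all_matchings(edges),
               key=lambda m: sum(w - lam * c for (_, _, w, c) in m))

def lagrangian_budgeted_matching(edges, budget, iters=60):
    """Binary-search the multiplier; keep the best budget-feasible matching seen."""
    lo, hi = 0.0, max(w for (_, _, w, _) in edges)
    best = []
    for _ in range(iters):
        lam = (lo + hi) / 2
        m = best_matching(edges, lam)
        if sum(c for (_, _, _, c) in m) <= budget:
            if sum(w for (_, _, w, _) in m) > sum(w for (_, _, w, _) in best):
                best = m
            hi = lam  # feasible: lower the cost penalty
        else:
            lo = lam  # over budget: raise the cost penalty
    return best
```

On this instance the search settles on the matching {(a,c), (b,d)} of weight 13 and cost 4, which respects the budget; the heavier matching {(a,b), (c,d)} is priced out by the multiplier.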

88 citations

Proceedings ArticleDOI
23 Jan 2005
TL;DR: The cost-sharing method presented in this paper is 2-approximate budget-balanced, and this is tight with respect to the budget-balance factor. The dual solution computed by the algorithm is infeasible, yet its total value is provably at most the cost of a minimum-cost Steiner forest for the given instance.
Abstract: In this paper we design an approximately budget-balanced and group-strategyproof cost-sharing mechanism for the Steiner forest game. An instance of this game consists of an undirected graph G = (V, E), non-negative costs ce for all edges e ∈ E, and a set R ⊆ V × V of k terminal pairs. Each terminal pair (s, t) ∈ R is associated with an agent that wishes to establish a connection between nodes s and t in the underlying network. A feasible solution is a forest F that contains an s,t-path for each connection request (s, t) ∈ R. Previously, Jain and Vazirani [4] gave a 2-approximate budget-balanced and group-strategyproof cost-sharing mechanism for the Steiner tree game, a special case of the game considered here. Such a result for Steiner forest games has proved elusive so far, in stark contrast to the well-known primal-dual (2 − 1/k)-approximate algorithms [1, 2] for the problem. The cost-sharing method presented in this paper is 2-approximate budget-balanced, and this is tight with respect to the budget-balance factor. Our algorithm is an original extension of known primal-dual methods for Steiner forests [1]. An interesting byproduct of this work is that our Steiner forest algorithm is (2 − 1/k)-approximate despite the fact that the forest computed by our method is usually costlier than those computed by known primal-dual algorithms. In fact, the dual solution computed by our algorithm is infeasible, but we can still prove that its total value is at most the cost of a minimum-cost Steiner forest for the given instance.
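The feasibility notion in the abstract (a forest containing an s-t path for every terminal pair) can be made concrete on a tiny instance, solved by brute force purely for illustration; the paper's algorithm is primal-dual, not enumeration, and the graph below is invented.

```python
from itertools import combinations

# Edge costs of a small instance: a shared hub "v" plus one direct edge.
EDGES = {("s1", "v"): 2, ("v", "t1"): 2, ("s2", "v"): 3, ("v", "t2"): 3,
         ("s1", "t1"): 5}
PAIRS = [("s1", "t1"), ("s2", "t2")]

def reachable(edge_set, a, b):
    """Undirected reachability of b from a using only the given edges."""
    seen, stack = {a}, [a]
    while stack:
        u = stack.pop()
        for (x, y) in edge_set:
            for p, q in ((x, y), (y, x)):
                if p == u and q not in seen:
                    seen.add(q)
                    stack.append(q)
    return b in seen

def min_steiner_forest(edges, pairs):
    """Cheapest edge subset containing a path for every terminal pair."""
    best, best_cost = None, float("inf")
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            if all(reachable(sub, s, t) for (s, t) in pairs):
                cost = sum(edges[e] for e in sub)
                if cost < best_cost:
                    best, best_cost = set(sub), cost
    return best, best_cost
```

Here the optimum routes both pairs through the hub v (total cost 10) rather than using the direct s1-t1 edge, showing why the two requests cannot be treated independently.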

69 citations

Proceedings Article
20 Jan 2008
TL;DR: This work presents a simple randomized algorithmic framework for connected facility location problems that significantly improves over the previously best known approximation ratios for several NP-hard network design problems.
Abstract: We present a simple randomized algorithmic framework for connected facility location problems. The basic idea is as follows: We run a black-box approximation algorithm for the unconnected facility location problem, randomly sample the clients, and open the facilities serving sampled clients in the approximate solution. Via a novel analytical tool, which we term core detouring, we show that this approach significantly improves over the previously best known approximation ratios for several NP-hard network design problems. For example, we reduce the approximation ratio for the connected facility location problem from 8.55 to 4.00 and for the single-sink rent-or-buy problem from 3.55 to 2.92. We show that our connected facility location algorithms can be derandomized at the expense of a slightly worse approximation ratio. The versatility of our framework is demonstrated by devising improved approximation algorithms also for other related problems.
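The sampling step of the framework can be sketched as follows; the facility names, distances, and the brute-force "black box" UFL solver below are all invented for illustration (the paper treats the unconnected facility location solver as an arbitrary approximation algorithm).

```python
import random
from itertools import combinations

random.seed(1)

OPEN_COST = {"f1": 3, "f2": 3}
CLIENTS = ["c1", "c2", "c3", "c4"]
DIST = {("c1", "f1"): 1, ("c2", "f1"): 1, ("c3", "f1"): 5, ("c4", "f1"): 5,
        ("c1", "f2"): 5, ("c2", "f2"): 5, ("c3", "f2"): 1, ("c4", "f2"): 1}

def solve_ufl(facilities, clients):
    """Brute-force stand-in for the black-box unconnected facility location step."""
    best, best_cost = None, float("inf")
    for r in range(1, len(facilities) + 1):
        for opened in combinations(facilities, r):
            cost = sum(OPEN_COST[f] for f in opened)
            cost += sum(min(DIST[(c, f)] for f in opened) for c in clients)
            if cost < best_cost:
                best, best_cost = opened, cost
    return best

def sample_and_open(p=0.5):
    """Sample clients, then keep only facilities serving a sampled client."""
    opened = solve_ufl(sorted(OPEN_COST), CLIENTS)
    sampled = [c for c in CLIENTS if random.random() < p]
    serving = {min(opened, key=lambda f: DIST[(c, f)]) for c in sampled}
    return sampled, serving
```

The surviving facilities are then connected by a Steiner tree in the full algorithm; the "core detouring" analysis bounds the extra connection cost this sampling incurs.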

60 citations

Journal ArticleDOI
28 Oct 2014
TL;DR: The authors' bounds show that for atomic congestion games and cost-sharing games, the robust price of anarchy gets worse with increasing altruism, while for valid utility games, it remains constant and is not affected by altruism.
Abstract: We study the inefficiency of equilibria in congestion games when players are (partially) altruistic. We model altruistic behavior by assuming that player i's perceived cost is a convex combination of (1 − αi) times his direct cost and αi times the social cost. Tuning the parameters αi allows smooth interpolation between purely selfish and purely altruistic behavior. Within this framework, we primarily study altruistic extensions of (atomic and nonatomic) congestion games, but also obtain some results on fair cost-sharing games and valid utility games. We derive (tight) bounds on the price of anarchy of these games for several solution concepts. To this end, we suitably adapt the smoothness notion introduced by Roughgarden and show that it captures the essential properties needed to determine the robust price of anarchy of these games. Our bounds show that for atomic congestion games and cost-sharing games, the robust price of anarchy gets worse with increasing altruism, while for valid utility games it remains constant and is not affected by altruism. However, the increase in the price of anarchy is not a universal phenomenon: for general nonatomic congestion games with uniform altruism, the price of anarchy improves with increasing altruism. For atomic and nonatomic symmetric singleton congestion games, we derive bounds on the pure price of anarchy that improve as the average level of altruism increases. (For atomic games, we only derive such bounds when cost functions are linear.) Since these bounds are also strictly lower than the robust price of anarchy, these games provide natural examples in which pure Nash equilibria are more efficient than more permissive notions of equilibrium.
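The perceived-cost model can be sketched on a minimal instance: two players, two identical links whose cost equals their load, and a uniform altruism level α, with perceived cost (1 − α)·(own cost) + α·(social cost). The instance and the uniform-α simplification are illustrative only.

```python
from itertools import product

ALPHA = 0.5          # uniform altruism level (illustrative choice)
RESOURCES = (0, 1)   # two identical links, cost of a link = its load

def perceived(profile, i, alpha=ALPHA):
    """Convex combination of player i's direct cost and the social cost."""
    loads = [profile.count(r) for r in RESOURCES]
    direct = loads[profile[i]]
    social = sum(loads[profile[j]] for j in range(len(profile)))
    return (1 - alpha) * direct + alpha * social

def pure_nash(n=2):
    """Profiles from which no single player can lower her perceived cost."""
    eq = []
    for prof in product(RESOURCES, repeat=n):
        stable = all(
            perceived(prof, i) <= perceived(prof[:i] + (r,) + prof[i + 1:], i)
            for i in range(n) for r in RESOURCES)
        if stable:
            eq.append(prof)
    return eq
```

At α = 0.5 the only stable profiles spread the players over distinct links, which is also socially optimal; varying α and the cost functions is how the inefficiency effects in the abstract arise.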

56 citations

Proceedings Article
01 Jan 2005
TL;DR: A constant expected ratio of the total flow time of MLF to the optimum under several distributions including the uniform distribution is shown.
Abstract: In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman and Teng to explain the behaviour of algorithms that work well in practice while performing very poorly from a worst-case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model, i.e., the initial processing times are smoothed by changing the $k$ least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 \, 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothed according to the partial bit randomization model. For various other smoothing models, including the additive symmetric smoothing one, which is a variant of the model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the first average-case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several distributions, including the uniform one.
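The partial bit randomization model can be sketched for one concrete distribution (uniformly random low-order bits); the paper allows a more general class of distributions, and the clamp to keep times positive is an assumption of this sketch.

```python
import random

random.seed(0)
K = 10  # adversarial processing times lie in [1, 2**K]

def smooth(p, k):
    """Keep the K - k high-order bits of p; randomize the k least significant
    bits uniformly (one instance of the partial bit randomization model)."""
    s = (p >> k << k) | random.randrange(2 ** k)
    return max(s, 1)  # keep processing times positive (sketch-level assumption)
```

The high-order bits of the adversarial input survive smoothing, which is exactly why the competitive ratio degrades with the number of unsmoothed bits K − k.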

55 citations


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: This work reviews some of the major results in random graphs and some of the more challenging open problems, touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI
TL;DR: This is a very useful handbook for engineers, especially those working in signal processing; it provides real-data bootstrap applications to illustrate the theory covered in the earlier chapters.
Abstract: Bootstrap has found many applications in the engineering field, including artificial neural networks, biomedical engineering, environmental engineering, image processing, and radar and sonar signal processing. Basic concepts of the bootstrap are summarized in each section as a step-by-step algorithm for ease of implementation. Most of the applications are taken from the signal processing literature. The principles of the bootstrap are introduced in Chapter 2. Both the nonparametric and parametric bootstrap procedures are explained. Babu and Singh (1984) have demonstrated that, in general, these two procedures behave similarly for pivotal (Studentized) statistics. The fact that the bootstrap is not the solution for all problems has been known to the statistics community for a long time; however, this fact is rarely touched on in manuscripts meant for practitioners. It was first observed by Babu (1984) that the bootstrap does not work in the infinite-variance case. Bootstrap Techniques for Signal Processing explains the limitations of the bootstrap method with an example. I especially liked the presentation style. The basic results are stated without proofs; however, the application of each result is presented as a simple step-by-step process, easy for nonstatisticians to follow. The bootstrap procedures, such as the moving block bootstrap for dependent data, along with applications to autoregressive models and to estimation of power spectral density, are also presented in Chapter 2. Signal detection in the presence of noise is generally formulated as a hypothesis testing problem. Chapter 3 introduces the principles of bootstrap hypothesis testing. The topics are introduced with interesting real-life examples. Flow charts, typical in the engineering literature, are used to aid explanations of the bootstrap hypothesis testing procedures.
The bootstrap's second-order correction due to pivoting, and the improvement in the results that it brings, are also explained. In the second part of Chapter 3, signal processing is treated as a regression problem. The performance of the bootstrap for matched filters as well as constant false-alarm rate matched filters is also illustrated. Chapters 2 and 3 focus on estimation problems. Chapter 4 introduces bootstrap methods used in model selection. Due to the inherent structure of the subject matter, this chapter may be difficult for nonstatisticians to follow. Chapter 5 is the most impressive chapter in the book, especially from the standpoint of statisticians. It provides real-data bootstrap applications to illustrate the theory covered in the earlier chapters. These include applications to optimal sensor placement for knock detection and land-mine detection. The authors also provide a MATLAB toolbox comprising frequently used routines. Overall, this is a very useful handbook for engineers, especially those working in signal processing.
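The step-by-step procedures the review praises can be sketched as a minimal nonparametric bootstrap percentile interval (the data values below are invented for illustration):

```python
import random
import statistics

random.seed(42)

def bootstrap_ci(data, stat=statistics.mean, B=2000, alpha=0.05):
    """Resample with replacement B times, recompute the statistic, and take
    empirical quantiles of the bootstrap replicates (percentile method)."""
    reps = sorted(stat(random.choices(data, k=len(data))) for _ in range(B))
    return reps[int(B * alpha / 2)], reps[int(B * (1 - alpha / 2)) - 1]

data = [2.1, 2.4, 1.9, 2.8, 2.2, 2.6, 2.0, 2.5]
lo, hi = bootstrap_ci(data)
```

The pivoted (Studentized) variants discussed in the book refine exactly this recipe by bootstrapping a standardized statistic instead of the raw mean.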

1,292 citations

Book ChapterDOI
Eric V. Denardo1
01 Jan 2011
TL;DR: This chapter shows how the simplex method simplifies when it is applied to a class of optimization problems known as "network flow models," and that when such a model has integer-valued data, the method finds an optimal solution that is integer-valued.
Abstract: In this chapter, you will see how the simplex method simplifies when it is applied to a class of optimization problems that are known as “network flow models.” You will also see that if a network flow model has “integer-valued data,” the simplex method finds an optimal solution that is integer-valued.
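The integrality property described above can be illustrated with a small min-cost flow solver. The sketch below uses successive shortest paths rather than the chapter's network simplex method, but it exhibits the same property: with integer capacities every augmentation ships an integer amount, so the optimum found is integer-valued. The instance data is invented.

```python
def min_cost_flow(n, arcs, s, t, demand):
    """arcs: list of (u, v, capacity, cost). Returns (flow shipped, total cost)."""
    graph = [[] for _ in range(n)]  # residual arcs: [to, cap, cost, rev_index]
    for (u, v, cap, cost) in arcs:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    flow = total = 0
    while flow < demand:
        INF = float("inf")
        dist, prev = [INF] * n, [None] * n
        dist[s] = 0
        for _ in range(n - 1):  # Bellman-Ford on the residual graph
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], prev[v] = dist[u] + cost, (u, i)
        if dist[t] == INF:
            break  # no augmenting path left: demand cannot be met
        push, v = demand - flow, t
        while v != s:  # bottleneck capacity along the cheapest path
            u, i = prev[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:  # augment and update residual capacities
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total += push * dist[t]
    return flow, total
```

On a four-node instance with integer capacities and costs, every intermediate flow and the final optimum are integers, with no rounding anywhere in the algorithm.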

828 citations

Book ChapterDOI
01 Jan 2014
TL;DR: This chapter is devoted to a more detailed examination of game theory; two game-theoretic scenarios are examined: simultaneous-move and multi-stage games.
Abstract: This chapter is devoted to a more detailed examination of game theory. Game theory, an important tool for analyzing strategic behavior, is concerned with how individuals make decisions when they recognize that their actions affect, and are affected by, the actions of other individuals or groups. Strategic behavior recognizes that the decision-making process is frequently mutually interdependent. Game theory is the study of strategic behavior involving the interaction of two or more individuals, teams, or firms, usually referred to as players. Two game-theoretic scenarios are examined in this chapter: simultaneous-move and multi-stage games. In simultaneous-move games the players effectively move at the same time. A normal-form game summarizes the players, their possible strategies, and the payoffs from alternative strategies in a simultaneous-move game. Simultaneous-move games may be either noncooperative or cooperative. In contrast to noncooperative games, players of cooperative games engage in collusive behavior. A Nash equilibrium, a solution concept in game theory, occurs when no player's payoff can be improved by unilaterally changing strategies. Simultaneous-move games may be either one-shot or repeated games. One-shot games are played only once; repeated games are played more than once. Infinitely-repeated games are played over and over again without end, while finitely-repeated games are played a limited number of times. Finitely-repeated games have certain or uncertain ends.
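The Nash equilibrium condition described above can be checked mechanically on a small normal-form game; the sketch below uses the classic prisoner's dilemma payoffs as an illustrative instance.

```python
from itertools import product

# Normal-form game: strategies "C" (cooperate) / "D" (defect);
# PAYOFFS[(a, b)] = (payoff to player 1, payoff to player 2).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
STRATS = ["C", "D"]

def pure_nash(payoffs):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    eq = []
    for (a, b) in product(STRATS, repeat=2):
        best1 = all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in STRATS)
        best2 = all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in STRATS)
        if best1 and best2:
            eq.append((a, b))
    return eq
```

The unique equilibrium is mutual defection, even though mutual cooperation pays both players more, which is the standard illustration of why equilibrium and efficiency can diverge.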

814 citations

Book
05 Jun 2012
TL;DR: In this paper, the authors present a survey of the central algorithmic techniques for designing approximation algorithms, including greedy and local search algorithms, dynamic programming, linear and semidefinite programming, and randomization.
Abstract: Discrete optimization problems are everywhere, from traditional operations research planning problems, such as scheduling, facility location, and network design; to computer science problems in databases; to advertising issues in viral marketing. Yet most such problems are NP-hard. Thus unless P = NP, there are no efficient algorithms to find optimal solutions to such problems. This book shows how to design approximation algorithms: efficient algorithms that find provably near-optimal solutions. The book is organized around central algorithmic techniques for designing approximation algorithms, including greedy and local search algorithms, dynamic programming, linear and semidefinite programming, and randomization. Each chapter in the first part of the book is devoted to a single algorithmic technique, which is then applied to several different problems. The second part revisits the techniques but offers more sophisticated treatments of them. The book also covers methods for proving that optimization problems are hard to approximate. Designed as a textbook for graduate-level algorithms courses, the book will also serve as a reference for researchers interested in the heuristic solution of discrete optimization problems.
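As a taste of the greedy techniques in the book's toolbox, here is the classic maximal-matching 2-approximation for vertex cover, a standard textbook example sketched for illustration.

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of a greedily built maximal matching. The result
    covers every edge, and since any cover must pick at least one endpoint of
    each matched edge, it is at most twice the size of an optimal cover."""
    cover = set()
    for (u, v) in edges:
        if u not in cover and v not in cover:  # edge still uncovered: match it
            cover.update((u, v))
    return cover
```

On the path 1-2-3-4 the greedy pass returns {1, 2, 3, 4}, exactly twice the optimal cover {2, 3}, showing the factor-2 bound is tight for this algorithm.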

759 citations