
Showing papers by Kurt Mehlhorn published in 2014


Journal ArticleDOI
TL;DR: A simple coordination mechanism is exhibited that achieves, for any network of parallel links, an engineered Price of Anarchy strictly less than 4/3; for the case of two parallel links, this basic mechanism gives 5/4 = 1.25.
Abstract: We reconsider the well-studied Selfish Routing game with affine latency functions. The Price of Anarchy for this class of games takes maximum value 4/3; this maximum is attained already for a simple network of two parallel links, known as Pigou’s network. We improve upon the value 4/3 by means of Coordination Mechanisms. We increase the latency functions of the edges in the network, i.e., if $\ell_{e}(x)$ is the latency function of an edge e, we replace it by $\hat{\ell}_{e}(x)$ with $\ell_{e}(x) \le \hat{\ell}_{e}(x)$ for all x. Then an adversary fixes a demand rate as input. The engineered Price of Anarchy of the mechanism is defined as the worst-case ratio of the Nash social cost in the modified network over the optimal social cost in the original network. Formally, if $\hat{C}_{N}(r)$ denotes the cost of the worst Nash flow in the modified network for rate r and $C_{\mathit{opt}}(r)$ denotes the cost of the optimal flow in the original network for the same rate, then $$\mathit{ePoA} = \max_{r \ge 0} \frac{\hat{C}_N(r)}{C_{\mathit{opt}}(r)}. $$ We first exhibit a simple coordination mechanism that achieves for any network of parallel links an engineered Price of Anarchy strictly less than 4/3. For the case of two parallel links our basic mechanism gives 5/4 = 1.25. Then, for the case of two parallel links, we describe an optimal mechanism; its engineered Price of Anarchy lies between 1.191 and 1.192.
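As a minimal numerical sketch of the 4/3 baseline that the mechanism improves upon, the following compares Nash and optimal costs on Pigou's network (latencies l1(x) = x and l2(x) = 1). This is the standard textbook example, not the paper's coordination mechanism, which is not reproduced here.

```python
# Hedged sketch: worst-case Nash/optimal cost ratio on Pigou's network,
# two parallel links with latency functions l1(x) = x and l2(x) = 1.
# Illustrates the 4/3 Price-of-Anarchy baseline the paper improves upon.

def nash_cost(r):
    # At equilibrium, selfish flow uses link 1 until its latency reaches 1.
    x = min(r, 1.0)                      # flow on link 1
    return x * x + (r - x) * 1.0

def opt_cost(r):
    # Minimize x^2 + (r - x) over 0 <= x <= r; optimum at x = min(r, 1/2).
    x = min(r, 0.5)
    return x * x + (r - x) * 1.0

worst = max(nash_cost(r / 1000) / opt_cost(r / 1000) for r in range(1, 5001))
print(f"worst-case ratio over demand rates: {worst:.4f}")   # ~1.3333, attained at r = 1
```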

38 citations


Journal ArticleDOI
TL;DR: A framework to seamlessly verify certifying computations is described; it uses the automatic verifier VCC to establish the correctness of the checker and the interactive theorem prover Isabelle/HOL for high-level mathematical properties of algorithms.
Abstract: Formal verification of complex algorithms is challenging. Verifying their implementations goes beyond the state of the art of current automatic verification tools and usually involves intricate mathematical theorems. Certifying algorithms compute in addition to each output a witness certifying that the output is correct. A checker for such a witness is usually much simpler than the original algorithm, yet it is all the user has to trust. The verification of checkers is feasible with current tools and leads to computations that can be completely trusted. We describe a framework to seamlessly verify certifying computations. We use the automatic verifier VCC for establishing the correctness of the checker and the interactive theorem prover Isabelle/HOL for high-level mathematical properties of algorithms. We demonstrate the effectiveness of our approach by presenting the verification of typical examples of the industrial-level and widespread algorithmic library LEDA.
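For readers unfamiliar with the certifying-algorithm pattern, here is a hedged Python sketch (not the C/VCC setting of the paper, and not one of the verified LEDA checkers) of checkers for a bipartiteness witness; it only conveys why a checker is far simpler to trust than the algorithm producing the witness.

```python
# Certifying-algorithm pattern: the algorithm outputs "bipartite" with a
# 2-coloring as witness, or "not bipartite" with an odd cycle as witness.
# The checkers below validate the witness instead of re-running the algorithm.

def check_two_coloring(edges, coloring):
    # Valid iff every edge has endpoints of different colors.
    return all(coloring[u] != coloring[v] for u, v in edges)

def check_odd_cycle(edges, cycle):
    # Valid iff the cycle has odd length and consecutive vertices are adjacent.
    if len(cycle) % 2 == 0:
        return False
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((cycle[i], cycle[(i + 1) % len(cycle)])) in edge_set
               for i in range(len(cycle)))

edges = [(0, 1), (1, 2), (2, 0)]                 # a triangle
print(check_odd_cycle(edges, [0, 1, 2]))         # True: certifies non-bipartiteness
```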

35 citations


Book ChapterDOI
29 Apr 2014
TL;DR: Performing the entire verification within Isabelle provides higher trust guarantees and is particularly promising for checkers that require domain-specific reasoning.
Abstract: Certifying algorithms compute not only an output, but also a witness that certifies the correctness of the output for a particular input. A checker program uses this certificate to ascertain the correctness of the output. Recent work used the verification tools VCC and Isabelle to verify checker implementations and their mathematical background theory. The checkers verified stem from the widely-used algorithms library LEDA and are written in C. The drawback of this approach is the use of two different tools. The advantage is that it could be carried out with reasonable effort in 2011. In this article, we evaluate the feasibility of performing the entire verification within Isabelle. For this purpose, we consider checkers written in the imperative languages C and Simpl. We re-verify the checker for connectedness of graphs and present a verification of the LEDA checker for non-planarity of graphs. For the checkers written in C, we translate from C to Isabelle using the AutoCorres tool set and then reason in Isabelle. For the checkers written in Simpl, Isabelle is the only tool needed. We compare the new approach with the previous approach and discuss advantages and disadvantages. We conclude that the new approach provides higher trust guarantees and it is particularly promising for checkers that require domain-specific reasoning.
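As a rough, language-shifted illustration of what a connectedness checker validates, the sketch below checks a spanning-tree witness given by parent pointers and a level function that strictly decreases towards the root (which rules out bogus parent cycles). The checkers treated in the paper are written in C and Simpl and verified in Isabelle; the exact witness format used here is an assumption for illustration.

```python
# Hedged sketch of a connectedness checker: the witness claims the graph is
# connected and consists of a root, parent pointers, and levels.

def check_connected_witness(vertices, edges, root, parent, level):
    edge_set = {frozenset(e) for e in edges}
    if level[root] != 0:
        return False
    for v in vertices:
        if v == root:
            continue
        # Each non-root vertex must be attached by a genuine graph edge ...
        if frozenset((v, parent[v])) not in edge_set:
            return False
        # ... and levels must strictly decrease towards the root.
        if level[parent[v]] >= level[v]:
            return False
    return True

vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
parent = {1: 0, 2: 1, 3: 2}
level = {0: 0, 1: 1, 2: 2, 3: 3}
print(check_connected_witness(vertices, edges, 0, parent, level))   # True
```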

19 citations


Book ChapterDOI
02 Jul 2014
TL;DR: In this paper, it is shown that the Robust k-Median problem is hard to approximate to within Ω(log m / log log m), even on uniform and line metrics, nearly matching the O(log m) approximation algorithm of Anthony et al.
Abstract: We consider a variant of the classical k-median problem, introduced by Anthony et al. [1]. In the Robust k-Median problem, we are given an n-vertex metric space (V,d) and m client sets \(\left\{ S_i \subseteq V \right\}_{i=1}^m\). We want to open a set F ⊆ V of k facilities such that the worst case connection cost over all client sets is minimized; that is, minimize \(\max_{i}\sum_{v \in S_i} d(F,v)\). Anthony et al. showed an O(log m) approximation algorithm for any metric and APX-hardness even in the case of the uniform metric. In this paper, we show that their algorithm is nearly tight by providing Ω(log m / log log m) approximation hardness, unless \({\sf NP} \subseteq \bigcap_{\delta >0} {\sf DTIME}(2^{n^{\delta}})\). This result holds even for uniform and line metrics. To our knowledge, this is one of the rare cases in which a problem on a line metric is hard to approximate to within a logarithmic factor. We complement the hardness result by an experimental evaluation of different heuristics that shows that very simple heuristics achieve good approximations for realistic classes of instances.
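To make the objective concrete, here is a hedged sketch of the Robust k-Median cost together with a naive greedy heuristic and a brute-force baseline for tiny instances; the heuristics evaluated in the paper are not specified here, so this is a generic illustration rather than a reproduction of its experiments.

```python
# Robust k-Median: open k facilities F minimizing max_i sum_{v in S_i} d(F, v).
from itertools import combinations

def robust_cost(F, client_sets, d):
    # Worst case over client sets of the total distance to the nearest open facility.
    return max(sum(min(d[f][v] for f in F) for v in S) for S in client_sets)

def greedy(k, V, client_sets, d):
    # Naive greedy: repeatedly open the facility that most reduces the worst-case cost.
    F = []
    for _ in range(k):
        F.append(min((v for v in V if v not in F),
                     key=lambda v: robust_cost(F + [v], client_sets, d)))
    return F

def brute_force(k, V, client_sets, d):
    # Exact optimum by enumeration; only feasible for very small instances.
    return min(combinations(V, k), key=lambda F: robust_cost(F, client_sets, d))

# Tiny line-metric example: vertices 0..3 at positions 0..3.
V = [0, 1, 2, 3]
d = [[abs(i - j) for j in V] for i in V]
client_sets = [[0, 1], [2, 3]]
print(robust_cost(greedy(2, V, client_sets, d), client_sets, d))        # 1
print(robust_cost(brute_force(2, V, client_sets, d), client_sets, d))   # 1
```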

13 citations


Posted Content
TL;DR: In this paper, the authors proposed to use the multiplicative weights update method instead of the Ellipsoid method for obtaining truthful-in-expectation mechanisms from linear programming based approximation algorithms.
Abstract: R. Lavi and C. Swamy (FOCS 2005, J. ACM 2011) introduced a general method for obtaining truthful-in-expectation mechanisms from linear programming based approximation algorithms. Due to the use of the Ellipsoid method, a direct implementation of the method is unlikely to be efficient in practice. We propose to use the much simpler and usually faster multiplicative weights update method instead. The simplification comes at the cost of slightly weaker approximation and truthfulness guarantees.
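As a rough illustration of the tool being substituted for the Ellipsoid method, here is a generic multiplicative-weights-update template. The oracle, the per-constraint loss values, and how the averaged solution feeds into the Lavi-Swamy construction are placeholders for exposition, not the mechanism from the paper.

```python
# Generic multiplicative-weights-update loop over a set of constraints.
# oracle(p) returns a candidate solution for the p-weighted relaxation;
# losses(x)[i] in [-1, 1] measures how well x satisfies constraint i
# (negative = violated), so violated constraints gain weight next round.
import math

def mwu(num_constraints, oracle, losses, rounds, eta=0.1):
    w = [1.0] * num_constraints
    acc = None
    for _ in range(rounds):
        total = sum(w)
        p = [wi / total for wi in w]             # distribution over constraints
        x = oracle(p)                            # solve the weighted subproblem
        m = losses(x)
        w = [wi * math.exp(-eta * mi) for wi, mi in zip(w, m)]
        acc = list(x) if acc is None else [a + xi for a, xi in zip(acc, x)]
    return [a / rounds for a in acc]             # average of the iterates
```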

4 citations


Posted Content
TL;DR: The VAT-model (virtual address translation model) extends the EM-model by accounting for the cost of address translation in virtual memories; the VAT-cost of cache-oblivious algorithms is shown to be only a constant factor larger than their EM-cost.
Abstract: The VAT-model (virtual address translation model) extends the EM-model (external memory model) and takes the cost of address translation in virtual memories into account. In this model, the cost of a single memory access may be logarithmic in the largest address used. We show that the VAT-cost of cache-oblivious algorithms is only a constant factor larger than their EM-cost; this requires a somewhat more stringent tall cache assumption than for the EM-model.

2 citations


Book ChapterDOI
01 Jan 2014
TL;DR: In this chapter, the authors show that the school method for multiplying natural numbers is not the best multiplication algorithm.
Abstract: An appetizer is meant to whet the appetite at the beginning of a meal. That is exactly the purpose of this chapter: to awaken the reader's interest in algorithmic techniques, we present a surprising result. The school method for multiplying natural numbers is not the best multiplication algorithm. For multiplying very large numbers, i.e., numbers with thousands or even millions of digits, there are much faster methods. The reader will get to know one such method in this chapter.
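The faster method such an appetizer chapter typically presents is Karatsuba's divide-and-conquer multiplication; whether the chapter develops exactly this algorithm is an assumption here, but a minimal sketch looks as follows.

```python
# Minimal Karatsuba multiplication: three recursive half-size products
# instead of the four used implicitly by the school method.

def karatsuba(a, b):
    if a < 10 or b < 10:
        return a * b
    m = max(len(str(a)), len(str(b))) // 2
    hi_a, lo_a = divmod(a, 10 ** m)
    hi_b, lo_b = divmod(b, 10 ** m)
    z0 = karatsuba(lo_a, lo_b)
    z2 = karatsuba(hi_a, hi_b)
    z1 = karatsuba(lo_a + hi_a, lo_b + hi_b) - z0 - z2   # middle term, one product
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(123456, 654321) == 123456 * 654321)      # True
```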

1 citation


Book ChapterDOI
Kurt Mehlhorn
13 Feb 2014
TL;DR: In this paper, the authors introduce the idea of equilibrium prices in a general market model and present an algorithm to find a set of prices at which the market clears, i.e., all goods are sold and all money is spent.
Abstract: Near the end of the 19th century, Léon Walras [Wal74] and Irving Fisher [Fis91] introduced general market models and asked for the existence of equilibrium prices. Chapters 5 and 6 of [NRTV07] are an excellent introduction to the algorithmic theory of market models. In Walras' model, each person comes to the market with a set of goods and a utility function for bundles of goods. At a set of prices, a person will only buy goods that give him maximal satisfaction. The question is to find a set of prices at which the market clears, i.e., all goods are sold and all money is spent. Observe that the money available to an agent is exactly the money earned by selling his goods. Fisher's model is somewhat simpler. In Fisher's model every agent comes with a predetermined amount of money. Market clearing prices are also called equilibrium prices. Walras and Fisher took it for granted that equilibrium prices exist. Fisher designed a hydromechanical computing machine that would compute the prices in a market with three buyers, three goods, and linear utilities [BS00].
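One standard way to make equilibrium prices in Fisher's model algorithmically concrete, for linear utilities, is the Eisenberg-Gale convex program (treated in the chapters of [NRTV07] cited above); whether this chapter develops exactly this formulation is an assumption. With budgets $m_i$, utilities $u_{ij}$ per unit of good j, and unit supplies, the program reads $$\max \sum_{i} m_i \log\Bigl(\sum_{j} u_{ij} x_{ij}\Bigr) \quad \text{s.t.} \quad \sum_{i} x_{ij} \le 1 \ \text{for all } j, \qquad x_{ij} \ge 0, $$ and the optimal Lagrange multipliers of the supply constraints are exactly the market-clearing prices.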