Showing papers by "Kurt Mehlhorn published in 2016"


Journal ArticleDOI
TL;DR: A hybrid of the Descartes method and Newton iteration, denoted ANewDsc, is introduced; it is simpler than Pan's method but achieves a comparable running time.
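The Descartes component of such root isolators rests on Descartes' rule of signs: the number of positive real roots of a polynomial is at most the number of sign variations in its coefficient sequence, and has the same parity. A minimal sketch of the sign-variation count (illustrative only; `sign_variations` is not from the paper):

```python
def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence, skipping zeros.
    By Descartes' rule of signs, the number of positive real roots of
    the polynomial is at most this count, with the same parity."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^3 - 3x^2 + 4 = (x + 1)(x - 2)^2 has two positive roots.
print(sign_variations([1, -3, 0, 4]))  # 2
```

The Descartes method applies this count recursively to subintervals via coefficient transformations; ANewDsc interleaves such subdivision steps with Newton iteration to accelerate convergence.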

95 citations


Proceedings ArticleDOI
10 Jan 2016
TL;DR: An improved combinatorial algorithm for the computation of equilibrium prices in the linear Arrow-Debreu model; for a market with n agents and integral utilities bounded by U, it improves upon the previously best algorithm of Ye by a factor of Ω(n).
Abstract: We present an improved combinatorial algorithm for the computation of equilibrium prices in the linear Arrow-Debreu model. For a market with n agents and integral utilities bounded by U, the algorithm runs in O(n^7 log^3(nU)) time. This improves upon the previously best algorithm of Ye by a factor of Ω(n). The algorithm refines the algorithm described by Duan and Mehlhorn and improves it by a factor of Ω(n^3). The improvement comes from a better understanding of the iterative price adjustment process, the improved balanced flow computation for nondegenerate instances, and a novel perturbation technique for achieving nondegeneracy.
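The iterative price adjustment can be illustrated, in a heavily simplified form, by a toy tatonnement loop. This is not the paper's algorithm, which raises prices on over-demanded sets identified via balanced flow computations; `tatonnement` and the one-good-per-budget demand model below are hypothetical:

```python
def tatonnement(budgets, prices, eta=0.2, steps=500):
    """Toy tatonnement: good j has unit supply, and aggregate demand for
    it is budgets[j] / prices[j] (all money earmarked for good j is spent
    on it). Prices of over-demanded goods rise, over-supplied ones fall.
    Illustrative only: the paper's combinatorial algorithm instead raises
    prices on over-demanded sets found via balanced flow computations."""
    p = list(prices)
    for _ in range(steps):
        for j in range(len(p)):
            excess = budgets[j] / p[j] - 1.0   # demand minus unit supply
            p[j] *= 1.0 + eta * excess
    return p

# With 4 units of money chasing good 0 and 1 unit chasing good 1,
# prices converge to [4.0, 1.0].
print(tatonnement([4.0, 1.0], [1.0, 1.0]))
```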

38 citations


Journal ArticleDOI
TL;DR: An efficient combinatorial algorithm based on LP duality is given to compute a fair matching in bipartite graphs, along with a scaling-based algorithm for the fair b-matching problem.
Abstract: Let G = (A ∪ B, E) be a bipartite graph, where every vertex ranks its neighbors in an order of preference (with ties allowed), and let r be the worst rank used. A matching M is fair in G if it has maximum cardinality; subject to this, M matches the minimum number of vertices to rank-r neighbors; subject to that, M matches the minimum number of vertices to rank-(r-1) neighbors; and so on. We show an efficient combinatorial algorithm based on LP duality to compute a fair matching in G. We also show a scaling-based algorithm for the fair b-matching problem. Our two algorithms can be extended to solve other profile-based matching problems. In designing our combinatorial algorithm, we show how to solve a generalized version of the minimum weighted vertex cover problem in bipartite graphs using a single-source shortest paths computation; this can be of independent interest.
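The lexicographic objective defining a fair matching can be made concrete as a rank-profile key that Python's tuple comparison orders for us. This is a sketch under an assumed data representation; `profile` and the `rank` dictionary are illustrative, not from the paper:

```python
def profile(matching, rank, r):
    """Rank profile of a matching M: the tuple
    (|M|, -#vertices matched at rank r, ..., -#vertices matched at rank 1).
    A fair matching maximizes this tuple lexicographically: maximum
    cardinality first, then as few worst-rank vertices as possible, etc.
    `matching` is a set of (u, v) edges and rank[(u, v)] is the rank
    vertex u assigns to neighbor v (an assumed representation)."""
    counts = [0] * (r + 1)
    for u, v in matching:
        counts[rank[(u, v)]] += 1   # u is matched at rank rank[(u, v)]
        counts[rank[(v, u)]] += 1   # symmetrically for v
    return (len(matching),) + tuple(-counts[k] for k in range(r, 0, -1))

# a prefers y (rank 1) over x (rank 2); both x and y rank a first.
rank = {("a", "x"): 2, ("x", "a"): 1, ("a", "y"): 1, ("y", "a"): 1}
print(profile({("a", "y")}, rank, 2) > profile({("a", "x")}, rank, 2))  # True
```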

21 citations


Posted Content
TL;DR: This paper introduces novel properties that explain the efficiency of sequences not captured by any previously known property and that provide new barriers to the dynamic optimality conjecture; it also establishes connections between various properties, old and new.
Abstract: Binary search trees (BSTs) with rotations can adapt to various kinds of structure in search sequences, achieving amortized access times substantially better than the Theta(log n) worst-case guarantee. Classical examples of structural properties include static optimality, sequential access, working set, key-independent optimality, and dynamic finger, all of which are now known to be achieved by the two famous online BST algorithms (Splay and Greedy). (...) In this paper, we introduce novel properties that explain the efficiency of sequences not captured by any of the previously known properties, and which provide new barriers to the dynamic optimality conjecture. We also establish connections between various properties, old and new. For instance, we show the following. (i) A tight bound of O(n log d) on the cost of Greedy for d-decomposable sequences. The result builds on the recent lazy finger result of Iacono and Langerman (SODA 2016). On the other hand, we show that lazy finger alone cannot explain the efficiency of pattern avoiding sequences even in some of the simplest cases. (ii) A hierarchy of bounds using multiple lazy fingers, addressing a recent question of Iacono and Langerman. (iii) The optimality of the Move-to-root heuristic in the key-independent setting introduced by Iacono (Algorithmica 2005). (iv) A new tool that allows combining any finite number of sound structural properties. As an application, we show an upper bound on the cost of a class of sequences that all known properties fail to capture. (v) The equivalence between two families of BST properties. The observation on which this connection is based was known before - we make it explicit, and apply it to classical BST properties. (...)

14 citations


Journal ArticleDOI
TL;DR: This work proposes to use the much simpler and usually faster multiplicative weights update method instead of the Ellipsoid method, which comes at the cost of slightly weaker approximation and truthfulness guarantees.
Abstract: R. Lavi and C. Swamy (FOCS 2005, J. ACM 58(6), 25, 2011) introduced a general method for obtaining truthful-in-expectation mechanisms from linear programming based approximation algorithms. Due to the use of the Ellipsoid method, a direct implementation of the method is unlikely to be efficient in practice. We propose to use the much simpler and usually faster multiplicative weights update method instead. The simplification comes at the cost of slightly weaker approximation and truthfulness guarantees.
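The multiplicative weights update method itself is standard: each "expert" keeps a weight that is multiplicatively penalized in proportion to its loss. A minimal sketch (illustrative; `mwu` is not the paper's mechanism):

```python
def mwu(losses, eta=0.5):
    """Multiplicative weights update over n 'experts'.
    losses: a list of rounds, each a list of per-expert losses in [0, 1].
    Each round, an expert's weight is multiplied by (1 - eta * loss),
    so weight concentrates on experts with small cumulative loss."""
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        w = [wi * (1.0 - eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 incurs loss every round, expert 1 never does:
# almost all weight shifts to expert 1.
print(mwu([[1.0, 0.0]] * 5))
```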

8 citations


Journal ArticleDOI
TL;DR: A new algorithm is given for computing balanced flows in the equality networks arising in market equilibrium computations; it requires only a single parametric flow computation and improves the running time of some market equilibrium algorithms.

7 citations


Proceedings ArticleDOI
23 Mar 2016
TL;DR: In this article, the authors present the first analysis of Fisher markets with buyers that have budget-additive utility functions; the main result is an efficient combinatorial algorithm to compute a market equilibrium with a Pareto-optimal allocation of goods.
Abstract: We present the first analysis of Fisher markets with buyers that have budget-additive utility functions. Budget-additive utilities are elementary concave functions with numerous applications in online adword markets and revenue optimization problems. They extend the standard case of linear utilities and have been studied in a variety of other market models. In contrast to the frequently studied CES utilities, they have a global satiation point which can imply multiple market equilibria with quite different characteristics. Our main result is an efficient combinatorial algorithm to compute a market equilibrium with a Pareto-optimal allocation of goods. It relies on a new descending-price approach and, as a special case, also implies a novel combinatorial algorithm for computing a market equilibrium in linear Fisher markets. We complement this positive result with a number of hardness results for related computational questions. We prove that it is NP-hard to compute a market equilibrium that maximizes social welfare, and it is PPAD-hard to find any market equilibrium with utility functions with separate satiation points for each buyer and each good.
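A budget-additive utility is simply an additive valuation truncated at a satiation cap. A one-line sketch (the name `budget_additive_utility` is illustrative, not from the paper):

```python
def budget_additive_utility(valuations, allocation, cap):
    """Budget-additive utility: additive value sum_j v_j * x_j, truncated
    at the satiation point `cap`; goods beyond the cap add no utility."""
    return min(cap, sum(v * x for v, x in zip(valuations, allocation)))

# Additive value 2 + 3 = 5 exceeds the satiation point 4, so utility is 4.
print(budget_additive_utility([2.0, 3.0], [1.0, 1.0], cap=4.0))  # 4.0
```

The truncation is what makes these utilities concave yet non-linear, and it is the source of the satiation behavior discussed in the abstract.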

6 citations


Posted Content
TL;DR: It is proved that it is NP-hard to compute a market equilibrium that maximizes social welfare, and PPAD-hard to find any market equilibrium with utility functions with separate satiation points for each buyer and each good.
Abstract: We present the first analysis of Fisher markets with buyers that have budget-additive utility functions. Budget-additive utilities are elementary concave functions with numerous applications in online adword markets and revenue optimization problems. They extend the standard case of linear utilities and have been studied in a variety of other market models. In contrast to the frequently studied CES utilities, they have a global satiation point which can imply multiple market equilibria with quite different characteristics. Our main result is an efficient combinatorial algorithm to compute a market equilibrium with a Pareto-optimal allocation of goods. It relies on a new descending-price approach and, as a special case, also implies a novel combinatorial algorithm for computing a market equilibrium in linear Fisher markets. We complement these positive results with a number of hardness results for related computational questions. We prove that it is NP-hard to compute a market equilibrium that maximizes social welfare, and it is PPAD-hard to find any market equilibrium with utility functions with separate satiation points for each buyer and each good.

5 citations


Proceedings ArticleDOI
24 May 2016
TL;DR: The Slime Mold Graph Repository is introduced, a publicly available collection of pairs (I, G), where I denotes an image of P. polycephalum's signature network and G an equivalent weighted graph representation of I.
Abstract: We introduce the Slime Mold Graph Repository, a publicly available collection of pairs (I, G). Let I denote an image of P. polycephalum's signature network and G an equivalent weighted graph representation of I. To convert I to G we have developed a dedicated software tool called NEFI. Working with graphs makes data analysis using techniques from graph theory and network science straightforward.

1 citation


Journal ArticleDOI
TL;DR: This work gives a self-contained treatment of an interior-point method which is particularly tailored to the typical mathematical background of CS students, with only limited knowledge of linear algebra and calculus assumed.

1 citation



Posted Content
TL;DR: An interior point method is presented for the min-cost flow problem that uses arc contractions and deletions to steer clear of the boundary of the polytope when path-following methods come too close; the method is guaranteed to be free of any numerical issues.
Abstract: We present an interior point method for the min-cost flow problem that uses arc contractions and deletions to steer clear of the boundary of the polytope when path-following methods come too close. We obtain a randomized algorithm running in expected $\tilde O( m^{3/2} )$ time that only visits integer lattice points in the vicinity of the central path of the polytope. This enables us to use integer arithmetic like classical combinatorial algorithms typically do. We provide explicit bounds on the size of the numbers that appear during all computations. By presenting an integer arithmetic interior point algorithm we avoid the tediousness of floating point error analysis and achieve a method that is guaranteed to be free of any numerical issues. We thereby eliminate one of the drawbacks of numerical methods in contrast to combinatorial min-cost flow algorithms that still yield the most efficient implementations in practice, despite their inferior worst-case time complexity.

Journal ArticleDOI
Kurt Mehlhorn1
TL;DR: This article is based on the Erasmus Lecture that I delivered at the 2014 Annual Meeting of the Academia Europaea in Barcelona and will discuss my early fascination for the field, as well as algorithms, programs, laws of computation, the double role of informatics as a mathematical and engineering discipline, and my effort to teach informatics to non-majors and the general public.
Abstract: This article is based on the Erasmus Lecture that I delivered at the 2014 Annual Meeting of the Academia Europaea in Barcelona. I will discuss my early fascination for the field, as well as algorithms, programs, laws of computation, the double role of informatics as a mathematical and engineering discipline, and my effort to teach informatics to non-majors and the general public.

Proceedings ArticleDOI
24 May 2016
TL;DR: In this article, it is demonstrated that a simple model of P. polycephalum can be used to compute shortest paths between two points in a graph; the algorithm gradually identifies the shortest s-t path by retreating from everywhere else in the graph.
Abstract: We demonstrate that a simple model of P. polycephalum can be used to compute shortest paths between two points in a graph. Reminiscent of how the real organism behaves in a maze when presented with two distinct food sources s and t, the algorithm gradually identifies the shortest s-t path by retreating from everywhere else in the graph.
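In such Physarum models, each tube's diameter grows with the flow it carries and decays otherwise; already on two parallel s-t edges this shows the retreat from the longer route. A toy sketch (the function name and the Euler discretization parameters are assumptions, not from the paper):

```python
def physarum_parallel(lengths, steps=200, h=0.1):
    """Physarum dynamics on parallel s-t edges with the given lengths.
    Edge e has diameter x[e] and conductance x[e] / lengths[e]; a unit
    s-t flow splits in proportion to conductance, and diameters follow
    the Euler step x[e] += h * (q[e] - x[e]). Mass concentrates on the
    shortest edge, i.e. the shortest s-t path."""
    x = [1.0] * len(lengths)
    for _ in range(steps):
        conds = [xe / le for xe, le in zip(x, lengths)]
        total = sum(conds)
        q = [c / total for c in conds]   # unit flow split by conductance
        x = [xe + h * (qe - xe) for xe, qe in zip(x, q)]
    return x

# Edge 0 (length 1) is shorter than edge 1 (length 2): the dynamics
# retreat from edge 1 and route everything over edge 0.
print(physarum_parallel([1.0, 2.0]))
```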