
Showing papers by "Richard Cole published in 2012"


Proceedings ArticleDOI
04 Jun 2012
TL;DR: Fast convergence occurs for the following type of markets: All pairs of goods are complements to each other, and the demand and income elasticities are suitably bounded, and all buyers in the market are equipped with CES utilities.
Abstract: This paper continues the study, initiated by Cole and Fleischer in [Cole and Fleischer 2008], of the behavior of a tatonnement price update rule in Ongoing Fisher Markets. The prior work showed fast convergence toward an equilibrium when the goods satisfied the weak gross substitutes property and had bounded demand and income elasticities. The current work shows that fast convergence also occurs for the following type of markets: all pairs of goods are complements to each other, and the demand and income elasticities are suitably bounded. In particular, these conditions hold when all buyers in the market are equipped with CES utilities, where all the parameters ρ, one per buyer, satisfy -1 < ρ ≤ 0.

28 citations


Journal ArticleDOI
01 Oct 2012-Networks
TL;DR: The study of the price of anarchy with variable demand and with broad classes of nonlinear aggregation functions is initiated, focusing on selfish routing in single- and multicommodity networks, and on the lp norms for 1 ≤ p ≤ ∞; the main results are as follows.
Abstract: We study the price of anarchy of selfish routing with variable traffic rates and when the path cost is a nonadditive function of the edge costs. Nonadditive path costs are important, for example, in networking applications, where a key performance metric is the achievable throughput along a path, which is controlled by its bottleneck (most congested) edge. We prove the following results. In multicommodity networks, the worst-case price of anarchy under the lp path cost with 1 < p ≤ ∞ can be dramatically larger than under the standard l1 path cost. In single-commodity networks, the worst-case price of anarchy under the lp path cost with 1 < p < ∞ is no more than with the standard l1 path cost. (A matching lower bound follows trivially from known results.) This upper bound also applies to the l∞ path cost if and only if attention is restricted to the natural subclass of equilibria generated by distributed shortest path routing protocols. For a natural cost-minimization objective function, the price of anarchy with endogenous traffic rates (and under any lp path cost) is no larger than that in fixed-demand networks. Intuitively, the worst-case inefficiency arising from the "tragedy of the commons" is no more severe than that from routing inefficiencies. © 2012 Wiley Periodicals, Inc. NETWORKS, 2012 (An extended abstract of this article appeared in the Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, January 2006.)
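The nonadditive path costs and the price-of-anarchy comparison above can be made concrete with a short sketch. The function and the two-link Pigou example below are illustrative, not taken from the paper:

```python
def lp_path_cost(edge_costs, p):
    """Aggregate a path's edge costs with the l_p norm: p = 1 is the
    standard additive cost; p = infinity is the bottleneck cost,
    i.e., the cost of the path's most congested edge."""
    if p == float('inf'):
        return max(edge_costs)
    return sum(c ** p for c in edge_costs) ** (1.0 / p)

# Pigou's two-link network (each path is a single edge, so every norm
# agrees): link 1 has congestion-dependent cost c(x) = x, link 2 has
# constant cost 1. At equilibrium all traffic takes link 1 (cost 1 each);
# the optimum splits the unit of traffic evenly.
equilibrium_cost = 1.0 * 1.0            # all flow on link 1
optimal_cost = 0.5 * 0.5 + 0.5 * 1.0    # half on each link
price_of_anarchy = equilibrium_cost / optimal_cost  # = 4/3
```

For single-edge paths the choice of norm is irrelevant; the paper's separation between l1 and lp costs arises on multi-edge paths, where the bottleneck (l∞) cost can order paths very differently from the additive cost.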

27 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: Algorithms for a multicore environment in which each core has its own private cache and false sharing can occur are considered, and block-resilient HBP algorithms with low false sharing costs for several fundamental problems are developed.
Abstract: We consider algorithms for a multicore environment in which each core has its own private cache and false sharing can occur. False sharing happens when two or more processors access the same block (i.e., cache-line) in parallel, and at least one processor writes into a location in the block. False sharing causes different processors to have inconsistent views of the data in the block, and many of the methods currently used to resolve these inconsistencies can cause large delays. We analyze the cost of false sharing both for variables stored on the execution stacks of the parallel tasks and for output variables. Our main technical contribution is to establish a low cost for this overhead for the class of multithreaded block-resilient HBP (Hierarchical Balanced Parallel) computations. Using this and other techniques, we develop block-resilient HBP algorithms with low false sharing costs for several fundamental problems including scans, matrix multiplication, FFT, sorting, and hybrid block-resilient HBP algorithms for list ranking and graph connected components. Most of these algorithms are derived from known multicore algorithms, but are further refined to achieve a low false sharing overhead. Our algorithms make no mention of machine parameters, and our analysis of the false sharing overhead is mostly in terms of the number of tasks generated in parallel during the computation, and thus applies to a variety of schedulers.
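As a rough illustration of the phenomenon (not the paper's model), the sketch below shows why tightly packed per-thread output slots invite false sharing: they map to the same cache block, so a write by one core invalidates the block in every other core's cache. The 64-byte line size is an assumption, typical of current x86 and ARM cores:

```python
CACHE_LINE = 64  # bytes; an assumed, typical line size

def cache_line_of(offset, line_size=CACHE_LINE):
    """Index of the cache block containing a given byte offset."""
    return offset // line_size

# Eight threads each writing an 8-byte counter: packed tightly, all
# eight counters fall into the same block, so parallel writes falsely
# share it; padding each slot out to a full line keeps them apart.
packed = [cache_line_of(i * 8) for i in range(8)]    # one shared block
padded = [cache_line_of(i * 64) for i in range(8)]   # distinct blocks
```

The block-resilient algorithms of the paper achieve the same separation implicitly, without hard-coding a machine parameter such as the line size.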

24 citations


Book ChapterDOI
16 Apr 2012
TL;DR: This paper revisits the cache miss analysis of algorithms when scheduled using randomized work stealing (RWS) in a parallel environment where processors have private caches, and focuses on the effect of task migration on cache miss costs.
Abstract: This paper revisits the cache miss analysis of algorithms when scheduled using randomized work stealing (RWS) in a parallel environment where processors have private caches. We focus on the effect of task migration on cache miss costs, and in particular, the costs of accessing "hidden" data typically stored on execution stacks (such as the return location for a recursive call). Prior analyses, with the exception of [1], do not account for such costs, and it is not clear how to extend them to account for these costs. By means of a new analysis, we show that for a variety of basic algorithms these task migration costs are no larger than the costs for the remainder of the computation, and thereby recover existing bounds. We also analyze a number of algorithms implicitly analyzed by [1], namely Scans (including Prefix Sums and Matrix Transposition), Matrix Multiply (the depth n in-place algorithm, the standard 8-way divide and conquer algorithm, and Strassen's algorithm), I-GEP, finding a longest common subsequence, FFT, the SPMS sorting algorithm, list ranking and graph connected components; we obtain sharper bounds in many cases. While this paper focuses on the RWS scheduler, the bounds we obtain are a function of the number of steals, and thus would apply to any scheduler given bounds on the number of steals it induces.
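Since the bounds above are stated as a function of the number of steals, a toy single-threaded simulation of randomized work stealing on a balanced task tree (an illustrative model, not the paper's machinery) shows where that quantity comes from — each steal migrates a task, and with it the cache-warmth of its stack, to another processor:

```python
import random
from collections import deque

def simulate_rws(num_procs, tree_depth, seed=0):
    """Toy RWS simulation: each processor owns a deque of tasks, pops
    work from the bottom of its own deque, and when empty steals from
    the top of a random nonempty victim's deque. A task at depth d
    spawns two children at depth d + 1. Returns the number of steals."""
    rng = random.Random(seed)
    deques = [deque() for _ in range(num_procs)]
    deques[0].append(0)  # root task at depth 0 starts on processor 0
    steals = 0
    while any(deques):
        for p in range(num_procs):
            if deques[p]:
                d = deques[p].pop()                # own work, bottom end
            else:
                victims = [q for q in range(num_procs)
                           if q != p and deques[q]]
                if not victims:
                    continue
                d = deques[rng.choice(victims)].popleft()  # steal, top end
                steals += 1
            if d + 1 < tree_depth:                 # spawn two children
                deques[p].append(d + 1)
                deques[p].append(d + 1)
    return steals
```

With one processor no steal ever occurs; with several, the count of steals is the random quantity that bounds the task-migration overhead in analyses of this kind.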

11 citations


Posted Content
TL;DR: In this paper, the authors study the behavior of a tatonnement price update rule in Ongoing Fisher Markets and show that fast convergence also occurs for the following types of markets: all pairs of goods are complements to each other, and the demand and income elasticities are suitably bounded.
Abstract: This paper continues the study, initiated by Cole and Fleischer, of the behavior of a tatonnement price update rule in Ongoing Fisher Markets. The prior work showed fast convergence toward an equilibrium when the goods satisfied the weak gross substitutes property and had bounded demand and income elasticities. The current work shows that fast convergence also occurs for the following types of markets: all pairs of goods are complements to each other, and the demand and income elasticities are suitably bounded. In particular, these conditions hold when all buyers in the market are equipped with CES utilities, where all the parameters $\rho$, one per buyer, satisfy $-1 < \rho \le 0$. In addition, we extend the above result to markets in which a mixture of complements and substitutes occur. This includes characterizing a class of nested CES utilities for which fast convergence holds. An interesting technical contribution, which may be of independent interest, is an amortized analysis for handling asynchronous events in settings in which there are a mix of continuous changes and discrete events.
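A minimal numerical sketch of such a tatonnement for a Fisher market with CES buyers. The multiplicative update rule, step size, and clamping below are illustrative assumptions, not the paper's exact rule; the CES demand formula is the standard one, $x_j = b\,(a_j/p_j)^{\sigma} / \sum_k a_k^{\sigma} p_k^{1-\sigma}$ with $\sigma = 1/(1-\rho)$:

```python
def ces_demand(budget, a, rho, prices):
    """Demand of one CES buyer with budget b, weights a_j, parameter rho:
    x_j = b * (a_j/p_j)^sigma / sum_k a_k^sigma * p_k^(1-sigma)."""
    sigma = 1.0 / (1.0 - rho)
    denom = sum(ak**sigma * pk**(1.0 - sigma) for ak, pk in zip(a, prices))
    return [budget * (aj / pj)**sigma / denom for aj, pj in zip(a, prices)]

def tatonnement(buyers, supply, prices, step=0.1, iters=2000):
    """Multiplicative tatonnement: p_j <- p_j * (1 + step * z_j / s_j),
    with the relative excess demand z_j / s_j clamped to [-1, 1]."""
    for _ in range(iters):
        demand = [0.0] * len(prices)
        for budget, a, rho in buyers:
            for j, xj in enumerate(ces_demand(budget, a, rho, prices)):
                demand[j] += xj
        prices = [p * (1 + step * max(min((d - s) / s, 1.0), -1.0))
                  for p, d, s in zip(prices, demand, supply)]
    return prices

# Example: two unit-supply goods; buyer 1 has rho = -0.5 (complements,
# within the paper's range -1 < rho <= 0), buyer 2 is Cobb-Douglas
# (rho = 0); each has budget 1.
buyers = [(1.0, [1.0, 1.0], -0.5), (1.0, [2.0, 1.0], 0.0)]
eq_prices = tatonnement(buyers, [1.0, 1.0], [1.0, 1.0])
```

At the fixed point demand equals supply, and the prices absorb all the buyers' budgets, so here the two prices sum to 2.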

9 citations


Posted Content
TL;DR: This work revisits the classic problem of fair division from a mechanism design perspective and provides an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness.
Abstract: We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option. For a multi-dimensional domain with an arbitrary number of agents and items, and for the very large class of homogeneous valuation functions, we prove that our mechanism provides every agent with at least a 1/e ≈ 0.368 fraction of her Proportionally Fair valuation. To the best of our knowledge, this is the first result that gives a constant factor approximation to every agent for the Proportionally Fair solution. To complement this result, we show that no truthful mechanism can guarantee more than 0.5 approximation, even for the restricted class of additive linear valuations. In addition to this, we uncover a connection between the Partial Allocation mechanism and VCG-based mechanism design. We also ask whether better approximation ratios are possible in more restricted settings. In particular, motivated by the massive privatization auction in the Czech Republic in the early 90s we provide another mechanism for additive linear valuations that works really well when all the items are highly demanded.
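For intuition, here is a toy two-agent, two-good instance of the Partial Allocation idea. The brute-force grid search for the proportionally fair (Nash) point and the specific valuations are illustrative assumptions; the paper handles arbitrary numbers of agents and general homogeneous valuations:

```python
from itertools import product

def pf_two_agents(v1, v2, grid=100):
    """Grid-search the proportionally fair allocation of two divisible
    goods between two agents with additive valuations v_i = (value for
    good A, value for good B), i.e., maximize the product of utilities.
    Returns (xA, xB): agent 1's fractions of each good; agent 2 gets
    the remainder."""
    best, best_x = -1.0, (0.0, 0.0)
    for i, j in product(range(grid + 1), repeat=2):
        xa, xb = i / grid, j / grid
        u1 = v1[0] * xa + v1[1] * xb
        u2 = v2[0] * (1 - xa) + v2[1] * (1 - xb)
        if u1 * u2 > best:
            best, best_x = u1 * u2, (xa, xb)
    return best_x

def partial_allocation_fractions(v1, v2, grid=100):
    """Partial Allocation for two agents: agent i keeps the fraction
    f_i = (others' utility at the joint PF solution) /
          (others' utility when agent i is absent).
    With two agents the denominator is simply the other agent's value
    for the entire bundle of goods."""
    xa, xb = pf_two_agents(v1, v2, grid)
    u1 = v1[0] * xa + v1[1] * xb
    u2 = v2[0] * (1 - xa) + v2[1] * (1 - xb)
    f1 = u2 / (v2[0] + v2[1])
    f2 = u1 / (v1[0] + v1[1])
    return f1, f2
```

With opposed valuations such as (1, 3) versus (3, 1), the PF solution gives each agent the good she prefers, and each agent keeps 3/4 of her bundle — comfortably above the 1/e guarantee.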

8 citations


Posted Content
TL;DR: A strong notion of approximation is used, requiring the mechanism to give each agent a good approximation of its proportionally fair utility, and one of the mechanisms provides a better and better approximation factor as the minimum demand for every good increases.
Abstract: How does one allocate a collection of resources to a set of strategic agents in a fair and efficient manner without using money? For in many scenarios it is not feasible to use money to compensate agents for otherwise unsatisfactory outcomes. This paper studies this question, looking at both fairness and efficiency measures. We employ the proportionally fair solution, which is a well-known fairness concept for money-free settings. But although finding a proportionally fair solution is computationally tractable, it cannot be implemented in a truthful fashion. Consequently, we seek approximate solutions. We give several truthful mechanisms which achieve proportional fairness in an approximate sense. We use a strong notion of approximation, requiring the mechanism to give each agent a good approximation of its proportionally fair utility. In particular, one of our mechanisms provides a better and better approximation factor as the minimum demand for every good increases. A motivating example is provided by the massive privatization auction in the Czech Republic in the early 90s. With regard to efficiency, prior work has shown a lower bound of 0.5 on the approximation factor of any swap-dictatorial mechanism approximating a social welfare measure, even for the two agents and multiple goods case. We surpass this lower bound by designing a non-swap-dictatorial mechanism for this case. Interestingly, the new mechanism builds on the notion of proportional fairness.

1 citation