
Showing papers by "Richard Cole published in 2017"


Proceedings ArticleDOI
20 Jun 2017
TL;DR: In this article, the authors give a new integer program for the Nash social welfare maximization problem whose fractional relaxation has a bounded integrality gap, and show that this gap is at most 2, improving on the previously known approximation factor of 2e^{1/e} ≈ 2.89.
Abstract: We study Fisher markets and the problem of maximizing the Nash social welfare (NSW), and show several closely related new results. In particular, we obtain: A new integer program for the NSW maximization problem whose fractional relaxation has a bounded integrality gap. In contrast, the natural integer program has an unbounded integrality gap. An improved, and tight, factor-2 analysis of the algorithm of [7], in turn showing that the integrality gap of the above relaxation is at most 2. The approximation factor shown by [7] was 2e^{1/e} ≈ 2.89. A lower bound of e^{1/e} ≈ 1.44 on the integrality gap of this relaxation. New convex programs for natural generalizations of linear Fisher markets and proofs that these markets admit rational equilibria. These results were obtained by establishing connections between previously known disparate results, and they help uncover their mathematical underpinnings. We show a formal connection between the convex programs of Eisenberg and Gale and that of Shmyrev, namely that their duals are equivalent up to a change of variables. Both programs capture equilibria of linear Fisher markets. By adding suitable constraints to Shmyrev's program, we obtain a convex program that captures equilibria of the spending-restricted market model defined by [7] in the context of the NSW maximization problem. Further, adding certain integral constraints to this program, we get the integer program for the NSW mentioned above. The basic tool we use is convex programming duality. In the special case of convex programs with linear constraints (but convex objectives), we show a particularly simple way of obtaining dual programs, putting it almost on par with linear program duality. This simple way of finding duals has been used subsequently for many other applications.
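For reference, here is a sketch of the two classical convex programs for a linear Fisher market with buyer budgets B_i and utilities u_{ij}; these are the standard formulations from the literature (the exact normalization used in the paper may differ).

```latex
% Eisenberg--Gale program, allocation variables x_{ij}:
\[
  \max \; \sum_i B_i \log\Bigl(\sum_j u_{ij} x_{ij}\Bigr)
  \quad\text{s.t.}\quad \sum_i x_{ij} \le 1 \;\;\forall j, \qquad x \ge 0 .
\]
% Shmyrev's program, spending variables b_{ij} and prices p_j:
\[
  \max \; \sum_{i,j} b_{ij} \log u_{ij} \;-\; \sum_j p_j \log p_j
  \quad\text{s.t.}\quad \sum_i b_{ij} = p_j \;\;\forall j, \quad
  \sum_j b_{ij} = B_i \;\;\forall i, \qquad b \ge 0 .
\]
```

Both objectives are concave with purely linear constraints, which is exactly the setting in which the abstract's simplified method for writing down dual programs applies.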

109 citations


Journal ArticleDOI
23 Mar 2017
TL;DR: In this paper, a deterministic sorting algorithm, Sample, Partition, and Merge Sort (SPMS), is presented, which interleaves the partitioning of a sample sort with merging, and sorts n elements in O(n log n) time with an optimal number of cache misses.
Abstract: We present a deterministic sorting algorithm, Sample, Partition, and Merge Sort (SPMS), that interleaves the partitioning of a sample sort with merging. Sequentially, it sorts n elements in O(n log n) time cache-obliviously with an optimal number of cache misses. The parallel complexity (or critical path length) of the algorithm is O(log n log log n), which improves on previous bounds for deterministic sample sort. The algorithm also has low false sharing costs. When scheduled by a work-stealing scheduler in a multicore computing environment with a global shared memory and p cores, each having a cache of size M organized in blocks of size B, the costs of the additional cache misses and false sharing misses due to this parallel execution are bounded by the costs of O(S · M/B) cache misses and O(S · B) false sharing misses, respectively, where S is the number of steals performed during the execution. Finally, SPMS is resource oblivious in that the dependence on machine parameters appears only in the analysis of its performance and not within the algorithm itself.
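The following is a minimal sequential sketch, in hypothetical illustrative code, of the plain sample-sort pattern that SPMS refines: sample splitters, partition the input into buckets, and recurse. It is not the SPMS algorithm itself, which additionally interleaves partitioning with multi-way merging to obtain the cache-oblivious and O(log n log log n) bounds stated above.

```python
import random

def sample_partition_merge_sort(a, threshold=32):
    # Illustrative sequential sample sort: pick random splitters, bucket the
    # input by binary search over the splitters, and recurse on each bucket.
    n = len(a)
    if n <= threshold:
        return sorted(a)
    k = max(2, int(n ** 0.5))                 # roughly sqrt(n) buckets
    splitters = sorted(random.sample(a, k - 1))
    buckets = [[] for _ in range(k)]
    for x in a:
        lo, hi = 0, k - 1                     # first bucket whose splitter is >= x
        while lo < hi:
            mid = (lo + hi) // 2
            if x <= splitters[mid]:
                hi = mid
            else:
                lo = mid + 1
        buckets[lo].append(x)
    out = []
    for b in buckets:
        if len(b) == n:                       # degenerate split (e.g. many equal keys)
            return sorted(a)
        out.extend(sample_partition_merge_sort(b, threshold))
    return out

if __name__ == "__main__":
    data = [random.randrange(10**6) for _ in range(5000)]
    assert sample_partition_merge_sort(data) == sorted(data)
```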

31 citations



Journal ArticleDOI
22 Dec 2017
TL;DR: This article investigates five distinct auction settings for which good expected revenue bounds are known when the bidders’ valuations are given by MHR distributions, and shows that these bounds degrade gracefully when extended to α-SR distributions.
Abstract: Two classes of distributions that are widely used in the analysis of Bayesian auctions are the monotone hazard rate (MHR) and regular distributions. They can both be characterized in terms of the rate of change of the associated virtual value function φ: for MHR distributions the condition is that φ increases at rate at least 1 (for values v < v′, φ(v′) − φ(v) ≥ v′ − v), while for regular distributions φ need only be non-decreasing. The interpolating class of α-strongly regular (α-SR) distributions requires φ(v′) − φ(v) ≥ α(v′ − v), so that α = 1 recovers the MHR condition and α = 0 recovers regularity.
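As a worked illustration using two standard distributions (not examples taken from the paper): the virtual value of a distribution with cdf F and density f, the α-SR condition, and the two boundary cases.

```latex
\[
  \varphi(v) \;=\; v - \frac{1 - F(v)}{f(v)},
  \qquad
  \text{$\alpha$-SR:}\quad \varphi(v') - \varphi(v) \;\ge\; \alpha\,(v' - v)
  \quad \text{for all } v' > v .
\]
% Exponential distribution, F(v) = 1 - e^{-\lambda v}:
\[
  \varphi(v) = v - \tfrac{1}{\lambda}, \qquad \varphi'(v) \equiv 1
  \quad\text{(exactly 1-SR, i.e.\ the MHR boundary case).}
\]
% Equal-revenue distribution, F(v) = 1 - 1/v on [1, \infty):
\[
  \varphi(v) = v - v = 0, \qquad \varphi'(v) \equiv 0
  \quad\text{(exactly 0-SR, i.e.\ the regular boundary case).}
\]
```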

14 citations


Posted Content
TL;DR: In this paper, the authors investigate the effect of diversity in user preferences in the context of routing games and give a sharp characterization of the network topologies in which diversity always helps versus those in which it is sometimes harmful: series-parallel networks for single-commodity routings and block-matching networks for multi-commodity routings.
Abstract: We seek to understand when heterogeneity in user preferences yields improved outcomes in terms of overall cost. That this might be hoped for is based on the common belief that diversity is advantageous in many settings. We investigate this in the context of routing. Our main result is a sharp characterization of the network settings in which diversity always helps, versus those in which it is sometimes harmful. Specifically, we consider routing games, where diversity arises in the way that users trade off two criteria (such as time and money, or, in the case of stochastic delays, expectation and variance of delay). Our main contributions are the following: 1) A participant-oriented measure of cost in the presence of user diversity, together with the identification of the natural benchmark: the same cost measure for an appropriately defined average of the diversity. 2) A full characterization of those network topologies for which diversity always helps, for all latency functions and demands. For single-commodity routings, these are series-parallel graphs, while for multi-commodity routings, they are the newly-defined "block-matching" networks. The latter comprise a suitable interweaving of multiple series-parallel graphs each connecting a distinct source-sink pair. While the result for the single-commodity case may seem intuitive in light of the well-known Braess paradox, the two problems are different: there are instances where diversity helps although the Braess paradox occurs, and vice versa. But the main technical challenge is to establish the "only if" direction of the result for multi-commodity networks. This follows by constructing an instance where diversity hurts, and showing how to embed it in any network which is not block-matching, by carefully exploiting the way the simple source-sink paths of the commodities intersect in the "non-block-matching" portion of the network.
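Since the single-commodity characterization is in terms of series-parallel networks, here is a minimal sketch of the classical series/parallel reduction test (Duffin's characterization) for a two-terminal graph, treating the network as an undirected multigraph given as an edge list. The function name and representation are hypothetical illustration, not code from the paper.

```python
from collections import defaultdict

def is_two_terminal_series_parallel(edges, s, t):
    """Reduction test: a connected two-terminal multigraph is series-parallel
    iff repeated parallel reductions (merge duplicate edges) and series
    reductions (contract an internal degree-2 vertex) shrink it to the
    single edge (s, t)."""
    edges = [tuple(e) for e in edges]
    changed = True
    while changed:
        changed = False
        # Parallel reduction: keep one copy of each edge between a vertex pair.
        seen, reduced = set(), []
        for u, v in edges:
            key = frozenset((u, v))
            if len(key) == 1:           # drop self-loops defensively
                changed = True
                continue
            if key in seen:             # duplicate (parallel) edge removed
                changed = True
                continue
            seen.add(key)
            reduced.append((u, v))
        edges = reduced
        # Series reduction: contract one internal vertex of degree 2.
        incident = defaultdict(list)
        for idx, (u, v) in enumerate(edges):
            incident[u].append(idx)
            incident[v].append(idx)
        for x, ids in incident.items():
            if x in (s, t) or len(ids) != 2:
                continue
            i, j = ids
            a, b = edges[i]
            c, d = edges[j]
            p = a if b == x else b      # neighbour reached via edge i
            q = c if d == x else d      # neighbour reached via edge j
            edges = [e for k, e in enumerate(edges) if k not in (i, j)]
            edges.append((p, q))
            changed = True
            break
    return len(edges) == 1 and frozenset(edges[0]) == frozenset((s, t))

# A two-path diamond is series-parallel; adding the Braess-style cross edge
# creates the Wheatstone bridge, which is not.
diamond = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
print(is_two_terminal_series_parallel(diamond, "s", "t"))                 # True
print(is_two_terminal_series_parallel(diamond + [("a", "b")], "s", "t"))  # False
```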

10 citations


Proceedings ArticleDOI
24 Jul 2017
TL;DR: In this paper, the caching overhead incurred by a class of multithreaded algorithms when scheduled by an arbitrary scheduler was analyzed, and bounds that match or improve upon the well-known O(Q+S · (M/B)) caching cost for the randomized work stealing (RWS) scheduler were obtained.
Abstract: We analyze the caching overhead incurred by a class of multithreaded algorithms when scheduled by an arbitrary scheduler. We obtain bounds that match or improve upon the well-known O(Q+S · (M/B)) caching cost for the randomized work stealing (RWS) scheduler, where S is the number of steals, Q is the sequential caching cost, and M and B are the cache size and block (or cache line) size respectively.
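To make the shape of the bound concrete, a tiny numeric sketch with hypothetical parameters (constants hidden by the O-notation are suppressed): each steal contributes up to M/B additional block transfers on top of the sequential caching cost Q.

```python
def caching_cost_bound(Q, S, M, B):
    # O(Q + S * M/B) bound from the abstract, constants suppressed:
    # Q sequential misses plus up to M/B extra block transfers per steal.
    return Q + S * (M // B)

# Hypothetical parameters: 8 MiB cache, 64-byte cache lines,
# Q = 10^6 sequential misses, S = 100 steals.
print(caching_cost_bound(10**6, 100, 8 * 2**20, 64))  # 1000000 + 100 * 131072 = 14107200
```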

7 citations


Posted Content
TL;DR: Bounds are obtained that match or improve upon the well-known O(Q+S · (M/B)) caching cost for the randomized work stealing (RWS) scheduler.
Abstract: We analyze the caching overhead incurred by a class of multithreaded algorithms when scheduled by an arbitrary scheduler. We obtain bounds that match or improve upon the well-known $O(Q+S \cdot (M/B))$ caching cost for the randomized work stealing (RWS) scheduler, where $S$ is the number of steals, $Q$ is the sequential caching cost, and $M$ and $B$ are the cache size and block (or cache line) size respectively.

1 citation