Papers by Richard Cole published in 2018


Book
05 Feb 2018
TL;DR: It is shown that 3n probes are sufficient and 3n − 1 probes are necessary to determine the shape and position of any n-gon; under a mild assumption, 3n probes are necessary.
Abstract: We consider a new problem motivated by robotics: how to determine shape and position from probes. We show that 3n probes are sufficient, but 3n − 1 are necessary, to determine the shape and position of any n-gon. Under a mild assumption, 3n probes are necessary.

139 citations


Journal ArticleDOI
TL;DR: This work studies the problem of allocating a set of indivisible items among agents with additive valuations, with the goal of maximizing the geometric mean of the agents' valuations, i.e., the Nash social welfare.
Abstract: We study the problem of allocating a set of indivisible items among agents with additive valuations, with the goal of maximizing the geometric mean of the agents' valuations, i.e., the Nash social welfare.
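
To make the objective concrete, here is a small illustrative helper (ours, not from the paper) that computes the Nash social welfare of a given allocation under additive valuations:

```python
from math import prod

def nash_social_welfare(valuations, allocation):
    """Geometric mean of agents' additive valuations for their bundles.

    valuations[i][j] -- agent i's value for item j (additive valuations)
    allocation[i]    -- the list of items assigned to agent i
    (Illustrative helper; names are ours, not from the paper.)
    """
    values = [sum(valuations[i][j] for j in bundle)
              for i, bundle in enumerate(allocation)]
    return prod(values) ** (1.0 / len(values))

# Example: two agents, three items.
vals = [[3.0, 1.0, 2.0],
        [1.0, 4.0, 1.0]]
alloc = [[0, 2], [1]]                       # agent 0 gets items 0 and 2, agent 1 gets item 1
print(nash_social_welfare(vals, alloc))     # (5 * 4) ** 0.5 ~ 4.47
```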

110 citations


Book
05 Feb 2018
TL;DR: Randomized parallel CRCW PRAM algorithms are given for building trapezoidal diagrams of line segments in the plane with optimal expected work; the parallel time bounds require the assumption that enough processors are available, with processor allocations every log n steps.
Abstract: We describe randomized parallel algorithms for building trapezoidal diagrams of line segments in the plane. The algorithms are designed for a CRCW PRAM. For general segments, we give an algorithm requiring optimal O(A+n log n) expected work and optimal O(log n) time, where A is the number of intersecting pairs of segments. If the segments form a simple chain, we give an algorithm requiring optimal O(n) expected work and O(log n log log n log* n) expected time, and a simpler algorithm requiring O(n log* n) expected work. The serial algorithm corresponding to the latter is among the simplest known algorithms requiring O(n log* n) expected operations. For a set of segments forming K chains, we give an algorithm requiring O(A+n log* n+K log n) expected work and O(log n log log n log* n) expected time. The parallel time bounds require the assumption that enough processors are available, with processor allocations every log n steps.

24 citations


Proceedings ArticleDOI
11 Jun 2018
TL;DR: In this paper, the authors established new convergence results for two generalizations of proportional response in Fisher markets with buyers having CES utility functions, and showed a linear rate of convergence in a Fisher market with buyers whose utility functions cover the full spectrum of CES utilities aside from linear and Leontief utilities.
Abstract: A major goal in Algorithmic Game Theory is to justify equilibrium concepts from an algorithmic and complexity perspective. One appealing approach is to identify natural distributed algorithms that converge quickly to an equilibrium. This paper establishes new convergence results for two generalizations of proportional response in Fisher markets with buyers having CES utility functions. The starting points are respectively a new convex and a new convex-concave formulation of such markets. The two generalizations correspond to suitable mirror descent algorithms applied to these formulations. Several of our new results are a consequence of new notions of strong Bregman convexity and of strong Bregman convex-concave functions, and associated linear rates of convergence, which may be of independent interest. Among other results, we analyze a damped generalized proportional response and show a linear rate of convergence in a Fisher market with buyers whose utility functions cover the full spectrum of CES utilities aside from the extremes of linear and Leontief utilities; when these utilities are included, we obtain an empirical $O(1/T)$ rate of convergence.
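
For readers unfamiliar with the dynamic, the following is a minimal sketch of the classic (undamped) proportional response update for the special case of linear utilities; the paper's damped, CES-general versions and their mirror-descent view are more involved, and all names below are ours:

```python
import numpy as np

def proportional_response(U, budgets, rounds=100, seed=0):
    """Classic proportional response for a linear-utility Fisher market.

    U[i, j]    -- buyer i's utility per unit of good j (linear utilities)
    budgets[i] -- buyer i's budget; each good has unit supply
    Returns final prices and allocations.
    (Illustrative sketch only; the paper studies damped CES generalizations.)
    """
    m, n = U.shape
    rng = np.random.default_rng(seed)
    # Start from arbitrary positive bids summing to each buyer's budget.
    bids = rng.random((m, n)) + 1e-6
    bids *= (budgets / bids.sum(axis=1))[:, None]

    for _ in range(rounds):
        prices = bids.sum(axis=0)            # p_j = total money bid on good j
        x = bids / prices                    # x_ij = b_ij / p_j
        gains = U * x                        # utility buyer i derives from good j
        # Next round: split each budget in proportion to the utility derived.
        bids = budgets[:, None] * gains / gains.sum(axis=1, keepdims=True)
    return prices, x

U = np.array([[2.0, 1.0], [1.0, 3.0]])
budgets = np.array([1.0, 1.0])
prices, x = proportional_response(U, budgets)
print(prices)   # approaches the equilibrium prices of this small linear market
```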

16 citations


Posted Content
TL;DR: New convergence results for two generalizations of proportional response in Fisher markets with buyers having CES utility functions are established, including a linear rate of convergence in a Fisher market with buyers whose utility functions cover the full spectrum of CES utilities aside from the extremes of linear and Leontief utilities.
Abstract: A major goal in Algorithmic Game Theory is to justify equilibrium concepts from an algorithmic and complexity perspective. One appealing approach is to identify natural distributed algorithms that converge quickly to an equilibrium. This paper establishes new convergence results for two generalizations of Proportional Response in Fisher markets with buyers having CES utility functions. The starting points are respectively a new convex and a new convex-concave formulation of such markets. The two generalizations correspond to suitable mirror descent algorithms applied to these formulations. Several of our new results are a consequence of new notions of strong Bregman convexity and of strong Bregman convex-concave functions, and associated linear rates of convergence, which may be of independent interest. Among other results, we analyze a damped generalized Proportional Response and show a linear rate of convergence in a Fisher market with buyers whose utility functions cover the full spectrum of CES utilities aside from the extremes of linear and Leontief utilities; when these utilities are included, we obtain an empirical O(1/T) rate of convergence.

10 citations


Book
Richard Cole1
24 Mar 2018
TL;DR: It is proved that there exists a parallel planes partition of any set of n points in 4 dimensions, which yields a linear-size data structure with sublinear query time for the half-space retrieval problem in 4 dimensions.
Abstract: We introduce a new type of partition called a parallel planes partition. We prove there exists a parallel planes partition of any set of n points in 4 dimensions. This partition yields a data structure for the half-space retrieval problem in 4 dimensions; it has linear size and achieves a sublinear query time.

10 citations


Proceedings ArticleDOI
01 Jul 2018
TL;DR: This work gives a participant-oriented measure of cost in the presence of agent diversity and a full characterization of those network topologies for which diversity always helps, versus those in which it is sometimes harmful.

7 citations


Proceedings ArticleDOI
01 Aug 2018
TL;DR: It is shown that an asynchronous version of tatonnement, a fundamental price dynamic widely studied in general equilibrium theory, converges toward a market equilibrium for Fisher markets with CES utilities or Leontief utilities.
Abstract: We extend a recently developed framework for analyzing asynchronous coordinate descent algorithms to show that an asynchronous version of tatonnement, a fundamental price dynamic widely studied in general equilibrium theory, converges toward a market equilibrium for Fisher markets with CES utilities or Leontief utilities, for which tatonnement is equivalent to coordinate descent.
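
As a point of reference, a toy synchronous tatonnement loop for a CES Fisher market might look as follows; this is only a sketch using the standard CES demand formula, all names are ours, and it does not capture the asynchronous setting the paper actually analyzes:

```python
import numpy as np

def ces_demand(prices, budgets, A, rho):
    """Total per-good demand when buyer i has CES utility
    u_i(x) = (sum_j A[i, j] * x_j**rho) ** (1/rho), with rho < 1, rho != 0.
    (Standard closed form; illustrative only.)"""
    w = (A / prices) ** (1.0 / (1.0 - rho))        # unnormalized demand weights
    spend = w * prices                             # unnormalized spending of buyer i on good j
    spend *= (budgets / spend.sum(axis=1))[:, None]  # scale rows to each budget
    return (spend / prices).sum(axis=0)            # total demand for each good

def tatonnement(budgets, A, rho, supply, step=0.1, rounds=500):
    """Multiplicative tatonnement: raise prices of over-demanded goods,
    lower prices of under-demanded ones. A toy synchronous loop, not the
    paper's asynchronous dynamic."""
    prices = np.ones(A.shape[1])
    for _ in range(rounds):
        z = ces_demand(prices, budgets, A, rho) - supply   # excess demand
        prices *= 1.0 + step * np.clip(z / supply, -1.0, 1.0)
    return prices

A = np.array([[1.0, 2.0], [2.0, 1.0]])
budgets = np.array([1.0, 1.0])
print(tatonnement(budgets, A, rho=0.5, supply=np.ones(2)))
```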

5 citations


Posted Content
TL;DR: This work considers an asynchronous parallel version of the accelerated coordinate descent algorithm proposed and analyzed by Lin, Liu and Xiao (SIOPT'15), and gives an analysis based on the efficient implementation of this algorithm.
Abstract: Gradient descent, and coordinate descent in particular, are core tools in machine learning and elsewhere. Large problem instances are common. To help solve them, two orthogonal approaches are known: acceleration and parallelism. In this work, we ask whether they can be used simultaneously. The answer is "yes". More specifically, we consider an asynchronous parallel version of the accelerated coordinate descent algorithm proposed and analyzed by Lin, Liu and Xiao (SIOPT'15). We give an analysis based on the efficient implementation of this algorithm. The only constraint is a standard bounded asynchrony assumption, namely that each update can overlap with at most q others. (q is at most the number of processors times the ratio in the lengths of the longest and shortest updates.) We obtain the following three results: 1. A linear speedup for strongly convex functions so long as q is not too large. 2. A substantial, albeit sublinear, speedup for strongly convex functions for larger q. 3. A substantial, albeit sublinear, speedup for convex functions.
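
The accelerated asynchronous algorithm analyzed here does not fit in a short snippet, but the asynchrony model itself is easy to picture: several workers repeatedly pick a coordinate, compute a partial gradient from a possibly stale shared iterate, and write back without coordination. Below is a minimal non-accelerated sketch of that pattern; the toy objective and all names are ours:

```python
import threading
import numpy as np

# Toy smooth convex objective: f(x) = 0.5 * ||A @ x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
x = np.zeros(50)                         # shared iterate, updated in place, no locks
L = (A ** 2).sum(axis=0)                 # per-coordinate Lipschitz constants

def worker(seed, updates=2000):
    local = np.random.default_rng(seed)  # each thread uses its own RNG
    for _ in range(updates):
        k = local.integers(x.size)
        # The read of x here may be stale relative to other threads' writes;
        # that staleness is exactly the asynchrony the analysis must handle.
        g = A[:, k] @ (A @ x - b)
        x[k] -= g / (2.0 * L[k])         # damped coordinate step

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```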

4 citations


Posted Content
TL;DR: It is shown that an asynchronous version of tatonnement, a fundamental price dynamic widely studied in general equilibrium theory, converges toward a market equilibrium for Fisher markets with CES utilities or Leontief utilities.
Abstract: We extend a recently developed framework for analyzing asynchronous coordinate descent algorithms to show that an asynchronous version of tatonnement, a fundamental price dynamic widely studied in general equilibrium theory, converges toward a market equilibrium for Fisher markets with CES utilities or Leontief utilities, for which tatonnement is equivalent to coordinate descent.

4 citations


Posted Content
TL;DR: It is known that asynchronous parallel stochastic coordinate descent achieves linear speedup with up to $\Theta(\sqrt n L_{\max}/L_{\overline{\mathrm{res}}})$ processors, where $L_{\max}$ and $L_{\overline{\mathrm{res}}}$ are suitable Lipschitz parameters; this paper shows the bound is tight for almost all possible values of these parameters.
Abstract: Several works have shown linear speedup is achieved by an asynchronous parallel implementation of stochastic coordinate descent so long as there is not too much parallelism. More specifically, it is known that if all updates are of similar duration, then linear speedup is possible with up to $\Theta(\sqrt n L_{\max}/L_{\overline{\mathrm{res}}})$ processors, where $L_{\max}$ and $L_{\overline{\mathrm{res}}}$ are suitable Lipschitz parameters. This paper shows the bound is tight for almost all possible values of these parameters.

Posted Content
08 Nov 2018
TL;DR: This work improves Liu and Wright's (SIOPT'15) lower bound on the maximum degree of parallelism almost quadratically, and shows that the new bound is almost optimal.
Abstract: When solving massive optimization problems in areas such as machine learning, it is a common practice to seek speedup via massive parallelism. However, especially in an asynchronous environment, there are limits on the possible parallelism. Accordingly, we seek tight bounds on the viable parallelism in asynchronous implementations of coordinate descent. We focus on asynchronous coordinate descent (ACD) algorithms on convex functions $F:\mathbb{R}^n \rightarrow \mathbb{R}$ of the form $$F(x) = f(x) ~+~ \sum_{k=1}^n \Psi_k(x_k),$$ where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a smooth convex function, and each $\Psi_k:\mathbb{R} \rightarrow \mathbb{R}$ is a univariate and possibly non-smooth convex function. Our approach is to quantify the shortfall in progress compared to the standard sequential stochastic gradient descent. This leads to a truly simple yet optimal analysis of the standard stochastic ACD in a partially asynchronous environment, which already generalizes and improves on the bounds in prior work. We also give a considerably more involved analysis for general asynchronous environments in which the only constraint is that each update can overlap with at most $q$ others, where $q$ is at most the number of processors times the ratio in the lengths of the longest and shortest updates. The main technical challenge is to demonstrate linear speedup in the latter environment. This stems from the subtle interplay of asynchrony and randomization. This improves Liu and Wright's (SIOPT'15) lower bound on the maximum degree of parallelism almost quadratically, and we show that our new bound is almost optimal.
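
To make the problem class concrete, take $f(x) = \tfrac{1}{2}\|Ax-b\|^2$ and $\Psi_k(x_k) = \lambda |x_k|$ (an example choice, not one mandated by the paper). The sequential stochastic proximal coordinate descent baseline against which the analysis measures the shortfall in progress then looks roughly like the sketch below; all names are ours:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * |.| (the l1 term)."""
    return np.sign(v) * max(abs(v) - t, 0.0)

def sequential_scd(A, b, lam=0.1, epochs=50, seed=0):
    """Sequential stochastic proximal coordinate descent for
    F(x) = 0.5 * ||A x - b||^2 + lam * ||x||_1,
    i.e. f(x) + sum_k Psi_k(x_k) with Psi_k = lam * |.|.
    (Baseline sketch; the paper's contribution is the asynchronous analysis.)"""
    n = A.shape[1]
    x = np.zeros(n)
    L = (A ** 2).sum(axis=0)            # coordinate Lipschitz constants of f
    rng = np.random.default_rng(seed)
    residual = A @ x - b                # maintained incrementally
    for _ in range(epochs * n):
        k = rng.integers(n)
        g = A[:, k] @ residual          # k-th partial derivative of f
        new_xk = soft_threshold(x[k] - g / L[k], lam / L[k])
        residual += A[:, k] * (new_xk - x[k])
        x[k] = new_xk
    return x

A = np.random.default_rng(1).standard_normal((100, 20))
b = A @ (np.arange(20) % 3 == 0).astype(float)
print(sequential_scd(A, b)[:5])
```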

Posted Content
TL;DR: In this article, tight bounds are derived on the viable parallelism in asynchronous implementations of coordinate descent that attains linear speedup, by quantifying the shortfall in progress compared to the standard sequential algorithm; the resulting lower bound on the maximum degree of parallelism improves the best prior bound almost quadratically.
Abstract: We seek tight bounds on the viable parallelism attaining linear speedup in asynchronous implementations of coordinate descent. We focus on asynchronous coordinate descent (ACD) algorithms on convex functions which consist of the sum of a smooth convex part and a possibly non-smooth separable convex part. We quantify the shortfall in progress compared to the standard sequential stochastic gradient descent. This leads to a simple yet tight analysis of the standard stochastic ACD in a partially asynchronous environment, generalizing and improving the bounds in prior work. We also give a considerably more involved analysis for general asynchronous environments in which the only constraint is that each update can overlap with at most q others. The new lower bound on the maximum degree of parallelism attaining linear speedup is tight and improves the best prior bound almost quadratically.