
Showing papers by "Richard Cole" published in 2008


Proceedings ArticleDOI
17 May 2008
TL;DR: This paper formalizes the setting of Ongoing Markets, by contrast with the classic market scenario, which it terms One-Time Markets, and defines and analyzes variants of a simple tatonnement algorithm that differ in several significant respects from previous algorithms that have been subject to asymptotic analysis.
Abstract: Why might markets tend toward and remain near equilibrium prices? In an effort to shed light on this question from an algorithmic perspective, this paper formalizes the setting of Ongoing Markets, by contrast with the classic market scenario, which we term One-Time Markets. The Ongoing Market allows trade at non-equilibrium prices, and, as its name suggests, continues over time. As such, it appears to be a more plausible model of actual markets. For both market settings, this paper defines and analyzes variants of a simple tatonnement algorithm that differs from previous algorithms that have been subject to asymptotic analysis in three significant respects: the price update for a good depends only on the price, demand, and supply for that good, and on no other information; the price update for each good occurs distributively and asynchronously; the algorithms work (and the analyses hold) from an arbitrary starting point. Our algorithm introduces a new and natural update rule. We show that this update rule leads to fast convergence toward equilibrium prices in a broad class of markets that satisfy the weak gross substitutes property. These are the first analyses for computationally and informationally distributed algorithms that demonstrate polynomial convergence. Our analysis identifies three parameters characterizing the markets, which govern the rate of convergence of our protocols. These parameters are, broadly speaking:

1. A bound on the fractional rate of change of demand for each good with respect to fractional changes in its price.
2. A bound on the fractional rate of change of demand for each good with respect to fractional changes in wealth.
3. The closeness of the market to a Fisher market (a market with buyers starting with money alone).

We give two types of protocols. The first type assumes global knowledge of only (an upper bound on) the first parameter. For this protocol, we also provide a matching lower bound in terms of these parameters for the One-Time Market. Our second protocol, which is analyzed for the One-Time Market alone, assumes no global knowledge whatsoever.
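The flavor of such a distributed update rule can be illustrated with a short sketch. The code below is a minimal, hypothetical tatonnement loop in which each good's price moves multiplicatively with its own normalized excess demand and nothing else; the demand function, supply vector, and step size `lam` are illustrative placeholders, not the paper's exact protocol or parameter choices.

```python
# Illustrative tatonnement sketch: each good's price is updated using only
# that good's own price, demand, and supply (a hypothetical stand-in for the
# paper's update rule; `demand`, `supply`, and `lam` are assumed inputs).

def tatonnement(prices, demand, supply, lam=0.1, tol=1e-6, max_iters=10_000):
    """Iterate multiplicative price updates until excess demand is small.

    prices : list[float]  -- initial prices, one per good (arbitrary start)
    demand : callable     -- demand(prices) -> list of per-good demands
    supply : list[float]  -- fixed per-good supply
    lam    : float        -- step-size parameter
    """
    for _ in range(max_iters):
        d = demand(prices)
        done = True
        for i, p in enumerate(prices):
            # Normalized excess demand for good i, capped so a single
            # update never more than doubles (or zeroes out) the price.
            z = (d[i] - supply[i]) / supply[i]
            if abs(z) > tol:
                done = False
            prices[i] = p * (1 + lam * max(min(z, 1.0), -1.0))
        if done:
            break
    return prices


# Toy usage: a two-good market with a simple (made-up) demand system.
if __name__ == "__main__":
    budget = 10.0
    supply = [1.0, 1.0]

    def demand(p):
        # Cobb-Douglas-style demand: each good gets half the budget.
        return [0.5 * budget / p[0], 0.5 * budget / p[1]]

    print(tatonnement([1.0, 9.0], demand, supply))  # -> prices near [5, 5]
```

In this toy Cobb-Douglas market the loop converges to the equilibrium prices (5, 5) from any positive starting point, mirroring the arbitrary-starting-point property the abstract emphasizes.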

76 citations


Book ChapterDOI
30 Apr 2008
TL;DR: Two prompt mechanisms are presented, one deterministic and the other randomized, each guaranteeing a constant competitive ratio for truthful online auctions that maximize the welfare, the sum of the utilities of winning bidders.
Abstract: We study the following online problem: at each time unit, one of m identical items is offered for sale. Bidders arrive and depart dynamically, and each bidder is interested in winning one item between his arrival and departure. Our goal is to design truthful mechanisms that maximize the welfare, the sum of the utilities of winning bidders. We first consider this problem under the assumption that the private information for each bidder is his value for getting an item. In this model constant-competitive mechanisms are known, but we observe that these mechanisms suffer from the following disadvantage: a bidder might learn his payment only when he departs. We argue that these mechanisms are essentially unusable, because they impose several seemingly undesirable requirements on any implementation of the mechanisms. To crystallize these issues, we define the notions of prompt and tardy mechanisms. We present two prompt mechanisms, one deterministic and the other randomized, that guarantee a constant competitive ratio. We show that our deterministic mechanism is optimal for this setting. We then study a model in which both the value and the departure time are private information. While in the deterministic setting only a trivial competitive ratio can be guaranteed, we use randomization to obtain a prompt truthful $\Theta(\frac{1}{\log m})$-competitive mechanism. We then show that no truthful randomized mechanism can achieve a ratio better than $\frac{1}{2}$ in this model.
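The prompt/tardy distinction can be made concrete with a toy auction. The sketch below is not the paper's mechanism: it is a simple illustrative rule in which, at each time unit, the item goes to the highest bidder currently present at the second-highest standing bid, so every winner learns the payment at the moment of allocation. It illustrates promptness only, not the truthfulness or competitive-ratio guarantees analyzed in the paper.

```python
# Toy "prompt" online auction: at each time unit one item is sold to the
# highest bidder currently present, at the second-highest present bid.
# Only an illustration of promptness (the winner learns the price
# immediately); NOT the paper's mechanism, and no truthfulness or
# competitive-ratio claims are made for it.

from dataclasses import dataclass

@dataclass
class Bidder:
    name: str
    value: float    # reported value for one item
    arrival: int    # first time unit the bidder is present
    departure: int  # last time unit the bidder is present

def run_auction(bidders, num_items):
    """Sell one item per time unit 1..num_items; return (time, winner, price)."""
    results = []
    already_won = set()
    for t in range(1, num_items + 1):
        present = [b for b in bidders
                   if b.arrival <= t <= b.departure and b.name not in already_won]
        if not present:
            results.append((t, None, 0.0))
            continue
        present.sort(key=lambda b: b.value, reverse=True)
        winner = present[0]
        price = present[1].value if len(present) > 1 else 0.0
        already_won.add(winner.name)
        results.append((t, winner.name, price))  # payment known immediately
    return results

# Usage: three bidders, two items.
bids = [Bidder("a", 5.0, 1, 2), Bidder("b", 3.0, 1, 1), Bidder("c", 4.0, 2, 2)]
print(run_auction(bids, 2))  # [(1, 'a', 3.0), (2, 'c', 0.0)]
```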

35 citations


Journal ArticleDOI
TL;DR: A linear-time algorithm is given for edge-coloring planar graphs with maximum degree Δ using max{Δ,9} colors; the coloring is optimal for graphs with maximum degree Δ≥9, and the algorithm improves on those of Chrobak and Yung and of Chrobak and Nishizeki.
Abstract: We show efficient algorithms for edge-coloring planar graphs. Our main result is a linear-time algorithm for coloring planar graphs with maximum degree Δ with max {Δ,9} colors. Thus the coloring is optimal for graphs with maximum degree Δ≥9. Moreover for Δ=4,5,6 we give linear-time algorithms that use Δ+2 colors. These results improve over the algorithms of Chrobak and Yung (J. Algorithms 10:35–51, 1989) and of Chrobak and Nishizeki (J. Algorithms 11:102–116, 1990) which color planar graphs using max {Δ,19} colors in linear time or using max {Δ,9} colors in $\mathcal{O}(n\log n)$ time.
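For contrast with the specialized linear-time algorithms above, a simple greedy baseline is easy to state: scanning the edges in any order and giving each edge the smallest color unused at either endpoint needs at most 2Δ−1 colors on any graph. The sketch below implements that baseline; it is not the paper's algorithm, which relies on planarity-specific machinery to reach max{Δ,9} colors.

```python
# Greedy edge coloring: assign each edge the smallest color not already used
# on an edge sharing an endpoint. Uses at most 2*Delta - 1 colors on any
# graph -- a generic baseline, not the planar-specific algorithm of the paper.

def greedy_edge_coloring(edges):
    """edges: list of (u, v) pairs; returns {edge: color} with colors 0,1,..."""
    used = {}       # vertex -> set of colors on edges incident to it
    coloring = {}
    for u, v in edges:
        forbidden = used.setdefault(u, set()) | used.setdefault(v, set())
        color = 0
        while color in forbidden:  # smallest color free at both endpoints
            color += 1
        coloring[(u, v)] = color
        used[u].add(color)
        used[v].add(color)
    return coloring

# Usage: K4 (planar, Delta = 3) -- greedy is guaranteed at most 5 colors here,
# and in this edge order it happens to find an optimal 3-coloring.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(greedy_edge_coloring(k4))
```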

24 citations


Book
01 Jan 2008
TL;DR: It is shown that finding equilibrium prices in discrete markets is NP-hard; the hardness result is complemented with a matching polynomial-time approximation algorithm and a new way of measuring the quality of an approximation to equilibrium prices, based on a natural aggregation of the dissatisfaction of individual market participants.
Abstract: The unprecedented growth of the Internet over the past decade and of data collection, more generally, has given rise to vast quantities of digital information, ranging from web documents and images to genomic databases and vast arrays of business customer information. Consequently, it is of growing importance to develop tools and models that enable us to better understand this data and to design data-driven algorithms that leverage this information. This thesis provides several fundamental theoretical and algorithmic results for tackling such problems, with applications to speech recognition, image processing, natural language processing, computational biology and web-based algorithms.

(1) Probabilistic automata provide an efficient and compact way to model sequence-oriented data such as speech or web documents. Measuring the similarity of such automata provides a way of comparing the objects they model, and is an essential first step in organizing this type of data. We present algorithmic and hardness results for computing various discrepancies (or dissimilarities) between probabilistic automata, including the relative entropy and the Lp distance; we also give an efficient algorithm to determine if two probabilistic automata are equivalent. In addition, we study the complexity of computing the norms of probabilistic automata.

(2) The widespread success of search engines and information retrieval systems has led to the large-scale collection of rating information, which is being used to provide personalized rankings. We examine an alternate formulation of the ranking problem for search engines, motivated by the requirement that, in addition to accurately predicting pairwise ordering, ranking systems must also preserve the magnitude of the preferences or the difference between ratings. We present algorithms with sound theoretical properties, and verify their efficacy through experiments; a toy version of such a magnitude-preserving objective is sketched below.

(3) Organizing and querying large amounts of digitized data such as images and videos is a challenging task because little or no label information is available. This motivates transduction, a setting in which the learning algorithm can leverage unlabeled data during training to improve performance. We present novel error bounds for a family of transductive regression algorithms and validate their usefulness through experiments.

(4) Finally, price discovery in a market setting can be viewed as an (ongoing) learning problem. Specifically, the problem is to find and maintain a set of prices that balance supply and demand, a core topic in economics. This appears to involve complex implicit and possibly large-scale information transfers. We show that finding equilibrium prices, even approximately, in discrete markets is NP-hard, and we complement the hardness result with a matching polynomial-time approximation algorithm. We also give a new way of measuring the quality of an approximation to equilibrium prices that is based on a natural aggregation of the dissatisfaction of individual market participants.
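The magnitude-preserving requirement in item (2) can be captured by a pairwise loss that penalizes how far the predicted score difference falls from the observed rating difference. The sketch below shows one hypothetical such objective (a squared magnitude-preserving pairwise loss with a linear scorer, fit by crude numerical gradient descent); it illustrates the idea only and is not the thesis's specific algorithm.

```python
# A magnitude-preserving pairwise loss: penalize how far the predicted score
# difference f(x_i) - f(x_j) falls from the observed rating difference
# y_i - y_j. An illustrative objective only, not the thesis's algorithm.

import numpy as np

def mp_loss(w, X, y):
    """Mean squared magnitude-preserving pairwise loss for scores X @ w."""
    s = X @ w
    total, pairs = 0.0, 0
    n = len(y)
    for i in range(n):
        for j in range(i + 1, n):
            total += ((s[i] - s[j]) - (y[i] - y[j])) ** 2
            pairs += 1
    return total / pairs

def fit(X, y, lr=0.05, steps=500):
    """Minimize the pairwise loss via central-difference numerical gradients."""
    w = np.zeros(X.shape[1])
    eps = 1e-6
    for _ in range(steps):
        grad = np.array([
            (mp_loss(w + eps * e, X, y) - mp_loss(w - eps * e, X, y)) / (2 * eps)
            for e in np.eye(X.shape[1])
        ])
        w -= lr * grad
    return w

# Usage: ratings that are (noisy) linear in a single feature.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.0, 2.1, 2.9, 4.0])
w = fit(X, y)
print(w, mp_loss(w, X, y))  # w near [1.0], small residual loss
```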

5 citations