Author

Olivier Guéant

Bio: Olivier Guéant is an academic researcher from the University of Paris. His work focuses on market liquidity and market making. He has an h-index of 25 and has co-authored 86 publications receiving 2,562 citations. Previous affiliations of Olivier Guéant include École Normale Supérieure and Paris Diderot University.


Papers
Posted Content
TL;DR: This text grew out of a “Cours Bachelier” held in January 2009 and taught by Jean-Michel Lasry; it is based on the articles of the three authors and on unpublished materials they developed, and goes well beyond the original lectures.
Abstract: This text is inspired from a "Cours Bachelier" held in January 2009 and taught by Jean-Michel Lasry. This course was based upon the articles of the three authors and upon unpublished materials developed by the authors. Proofs were not presented during the conferences and are now available. So are some issues that were only rapidly tackled during class. The content of this text is therefore far more important than the actual "Cours Bachelier" conferences, though the guiding principle is the same and consists in a progressive introduction of the concepts, methodologies and mathematical tools of mean field games theory.

487 citations
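The "concepts, methodologies and mathematical tools" these notes introduce revolve around the canonical mean field games system of Lasry and Lions: a backward Hamilton–Jacobi–Bellman equation for a representative agent's value function u, coupled with a forward Kolmogorov (Fokker–Planck) equation for the distribution m of agents. In its standard form (diffusion parameter ν, Hamiltonian H, couplings f and g; notation here follows the usual convention, not necessarily the notes themselves) the system reads:

```latex
\begin{aligned}
&-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m) && \text{(HJB, backward in time)}\\
&\ \ \,\partial_t m - \nu \Delta m - \operatorname{div}\!\big(m\, \partial_p H(x, \nabla u)\big) = 0 && \text{(Kolmogorov, forward in time)}\\
&\ \ \, m(0) = m_0, \qquad u(T, x) = g\big(x, m(T)\big)
\end{aligned}
```

The forward–backward structure is what distinguishes mean field games from classical optimal control: each agent optimizes against the anticipated evolution of the population, which is itself determined by the agents' optimal behavior.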

Book ChapterDOI
01 Jan 2011
TL;DR: These lecture notes grew out of the “Cours Bachelier” held in January 2009 and taught by Jean-Michel Lasry; they are based on the articles of the three authors and on unpublished materials they developed.
Abstract: This text is inspired from a “Cours Bachelier” held in January 2009 and taught by Jean-Michel Lasry. This course was based upon the articles of the three authors and upon unpublished materials they developed. Proofs were not presented during the conferences and are now available. So are some issues that were only rapidly tackled during class.

479 citations

Journal ArticleDOI
TL;DR: Instead of focusing only on the scheduling aspect, as Almgren and Chriss do, this paper links the optimal trade schedule to the prices of the limit orders that have to be sent to the limit order book to optimally liquidate a portfolio.
Abstract: This paper addresses portfolio liquidation using a new angle. Instead of focusing only on the scheduling aspect like Almgren and Chriss in [J. Risk, 3 (2000), pp. 5--39], or only on the liquidity-consuming orders like Obizhaeva and Wang in [Optimal Trading Strategy and Supply/Demand Dynamics, SSRN eLibrary, 2005], we link the optimal trade schedule to the price of the limit orders that have to be sent to the limit order book to optimally liquidate a portfolio. Most practitioners address these two issues separately: they compute an optimal trading curve, and they then send orders to the markets to try to follow it. The results obtained in this paper can be interpreted and used in two ways: (i) we solve simultaneously the two problems and provide a strategy to liquidate a portfolio over a few hours, and (ii) we provide a tactic for following a trading curve over slices of a few minutes. As far as the model is concerned, the interactions of limit orders with the market are modeled via a point process pegged ...

165 citations
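The "optimal trading curve" that practitioners compute in the first step the abstract describes is, in the classical Almgren–Chriss mean–variance setting with quadratic temporary impact, available in closed form: the remaining inventory follows x(t) = X₀ sinh(κ(T−t))/sinh(κT), where the urgency κ depends on risk aversion, volatility, and the impact coefficient. A minimal sketch (parameter names are illustrative, not taken from the paper):

```python
import math

def ac_trading_curve(X0, T, sigma, eta, lam, n_steps=10):
    """Almgren-Chriss optimal liquidation curve (continuous-time limit).

    X0    : initial inventory to liquidate
    T     : liquidation horizon
    sigma : price volatility
    eta   : temporary (quadratic) impact coefficient
    lam   : risk-aversion parameter
    Returns (t, x(t)) pairs on an evenly spaced grid: the inventory
    still to be sold at each time.
    """
    kappa = math.sqrt(lam * sigma**2 / eta)  # urgency parameter
    ts = [i * T / n_steps for i in range(n_steps + 1)]
    return [(t, X0 * math.sinh(kappa * (T - t)) / math.sinh(kappa * T))
            for t in ts]

curve = ac_trading_curve(X0=100_000, T=1.0, sigma=0.3, eta=1e-6, lam=1e-6)
# Inventory decreases monotonically from X0 at t=0 to 0 at t=T.
```

The paper's contribution is precisely to go beyond this two-step approach: rather than following such a curve with unspecified orders, it derives the limit-order prices directly.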

Journal ArticleDOI
TL;DR: In this paper, the authors consider a stochastic control problem similar to the one introduced by Ho and Stoll and formalized mathematically by Avellaneda and Stoikov, where market makers continuously set bid and ask quotes for the stocks they have under consideration.
Abstract: Market makers continuously set bid and ask quotes for the stocks they have under consideration. Hence they face a complex optimization problem in which their return, based on the bid-ask spread they quote and the frequency they indeed provide liquidity, is challenged by the price risk they bear due to their inventory. In this paper, we consider a stochastic control problem similar to the one introduced by Ho and Stoll and formalized mathematically by Avellaneda and Stoikov. The market is modeled using a reference price S_t following a Brownian motion, arrival rates of buy or sell liquidity-consuming orders depend on the distance to the reference price S_t and a market maker maximizes the expected utility of its PnL over a short time horizon. We show that the Hamilton-Jacobi-Bellman equations can be transformed into a system of linear ordinary differential equations and we solve the market making problem under inventory constraints. We also provide a spectral characterization of the asymptotic behavior of the optimal quotes and propose closed-form approximations.

158 citations

Posted Content
TL;DR: In this paper, the authors consider a stochastic control problem similar to the one introduced by Ho and Stoll and formalized mathematically by Avellaneda and Stoikov, where market makers continuously set bid and ask quotes for the stocks they have under consideration.
Abstract: Market makers continuously set bid and ask quotes for the stocks they have under consideration. Hence they face a complex optimization problem in which their return, based on the bid-ask spread they quote and the frequency at which they indeed provide liquidity, is challenged by the price risk they bear due to their inventory. In this paper, we consider a stochastic control problem similar to the one introduced by Ho and Stoll and formalized mathematically by Avellaneda and Stoikov. The market is modeled using a reference price $S_t$ following a Brownian motion with standard deviation $\sigma$, arrival rates of buy or sell liquidity-consuming orders depend on the distance to the reference price $S_t$ and a market maker maximizes the expected utility of its P&L over a finite time horizon. We show that the Hamilton-Jacobi-Bellman equations associated to the stochastic optimal control problem can be transformed into a system of linear ordinary differential equations and we solve the market making problem under inventory constraints. We also shed light on the asymptotic behavior of the optimal quotes and propose closed-form approximations based on a spectral characterization of the optimal quotes.

130 citations
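The closed-form approximations mentioned in both versions of this market making paper are usually written, in the Guéant–Lehalle–Fernandez-Tapia style with exponential arrival intensities λ(δ) = A·e^(−kδ) and CARA risk aversion γ, as a common base spread plus an inventory-dependent skew. A sketch under those parameter conventions (which are an assumption here, not quoted from the paper):

```python
import math

def glft_quotes(q, gamma, k, A, sigma):
    """Closed-form approximations of the optimal bid/ask quote distances
    from the reference price, for a current inventory of q units.

    Assumes liquidity-consuming orders arrive at intensity A*exp(-k*delta)
    at distance delta, and CARA risk aversion gamma (Gueant-Lehalle-
    Fernandez-Tapia-style approximation; parameter conventions assumed).
    Returns (delta_bid, delta_ask).
    """
    base = (1.0 / gamma) * math.log(1.0 + gamma / k)
    vol_term = math.sqrt(
        (sigma**2 * gamma) / (2.0 * k * A)
        * (1.0 + gamma / k) ** (1.0 + k / gamma)
    )
    delta_bid = base + (2 * q + 1) / 2.0 * vol_term  # long: quote bid farther
    delta_ask = base - (2 * q - 1) / 2.0 * vol_term  # long: quote ask closer
    return delta_bid, delta_ask

# Flat inventory gives symmetric quotes; a long inventory skews the
# quotes so the market maker buys less and sells more eagerly.
b0, a0 = glft_quotes(q=0, gamma=0.1, k=0.3, A=0.9, sigma=0.3)
```

The inventory term captures the price risk the abstract describes: the larger |q|, the more the quotes are skewed to mean-revert the inventory.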


Cited by
17 Oct 2011
TL;DR: GDP is a measure of market production, not of economic well-being; conflating the two can lead to misleading indications about how well-off people are and entail the wrong policy decisions.
Abstract: As GDP is a measure of market capacity and not economic well-being, this report has been commissioned to more accurately understand the social progress indicators of any given state. Gross domestic product (GDP) is the most widely used measure of economic activity. There are international standards for its calculation, and much thought has gone into its statistical and conceptual bases. But GDP mainly measures market production, though it has often been treated as if it were a measure of economic well-being. Conflating the two can lead to misleading indications about how well-off people are and entail the wrong policy decisions. One reason why money measures of economic performance and living standards have come to play such an important role in our societies is that the monetary valuation of goods and services makes it easy to add up quantities of a very different nature. When we know the prices of apple juice and DVD players, we can add up their values and make statements about production and consumption in a single figure. But market prices are more than an accounting device. Economic theory tells us that when markets are functioning properly, the ratio of one market price to another is reflective of the relative appreciation of the two products by those who purchase them. Moreover, GDP captures all final goods in the economy, whether they are consumed by households, firms or government. Valuing them with their prices would thus seem to be a good way of capturing, in a single number, how well-off society is at a particular moment. Furthermore, keeping prices unchanged while observing how quantities of goods and services that enter GDP move over time would seem like a reasonable way of making a statement about how society’s living standards are evolving in real terms. As it turns out, things are more complicated. 
First, prices may not exist for some goods and services (if for instance government provides free health insurance or if households are engaged in child care), raising the question of how these services should be valued. Second, even where there are market prices, they may deviate from society’s underlying valuation. In particular, when the consumption or production of particular products affects society as a whole, the price that individuals pay for those products will differ from their value to society at large. Environmental damage caused by production or consumption activities that is not reflected in market prices is a well-known example.

4,432 citations

Journal ArticleDOI
TL;DR: This survey introduces the concept of federated learning (FL), which enables the collaborative training of an ML model and DL-based optimization in large-scale and complex mobile edge networks, where heterogeneous devices with varying constraints are involved.
Abstract: In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislations and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.

895 citations
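The aggregation step the abstract describes, in which devices send model updates rather than raw data to the server, is most commonly instantiated as federated averaging (FedAvg): the server forms a data-size-weighted average of the clients' parameters. A minimal sketch (plain lists of floats stand in for model weights; names are illustrative):

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: weighted average of client model parameters.

    client_weights : list of parameter vectors, one per client
    client_sizes   : number of local training samples per client
    Returns the aggregated global parameter vector.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

# Two clients: the one with more local data pulls the global model
# toward its own weights.
global_w = fedavg([[1.0, 0.0], [0.0, 1.0]], client_sizes=[3, 1])
# global_w == [0.75, 0.25]
```

The communication-cost and heterogeneity challenges the survey highlights arise because this averaging must be repeated over many rounds across devices with very different data sizes and connectivity.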

Journal ArticleDOI
TL;DR: This paper provides a survey-style introduction to dense small cell networks and considers many research directions, namely, user association, interference management, energy efficiency, spectrum sharing, resource management, scheduling, backhauling, propagation modeling, and the economics of UDN deployment.
Abstract: The exponential growth and availability of data in all forms is the main booster to the continuing evolution in the communications industry. The popularization of traffic-intensive applications including high definition video, 3-D visualization, augmented reality, wearable devices, and cloud computing defines a new era of mobile communications. The immense amount of traffic generated by today’s customers requires a paradigm shift in all aspects of mobile networks. Ultradense network (UDN) is one of the leading ideas in this racetrack. In UDNs, the access nodes and/or the number of communication links per unit area are densified. In this paper, we provide a survey-style introduction to dense small cell networks. Moreover, we summarize and compare some of the recent achievements and research findings. We discuss the modeling techniques and the performance metrics widely used to model problems in UDN. Also, we present the enabling technologies for network densification in order to understand the state-of-the-art. We consider many research directions in this survey, namely, user association, interference management, energy efficiency, spectrum sharing, resource management, scheduling, backhauling, propagation modeling, and the economics of UDN deployment. Finally, we discuss the challenges and open problems to the researchers in the field or newcomers who aim to conduct research in this interesting and active area of research.

828 citations

Journal ArticleDOI
26 Sep 2018
TL;DR: In this article, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is provided.
Abstract: Ensuring ultrareliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay, and average response time) is no longer an option but a necessity. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this paper is a first step to filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a wide variety of techniques and methodologies pertaining to the requirements of URLLC, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and high-reliability wireless networks.

779 citations

01 Jan 2009
TL;DR: This volume provides a systematic treatment of stochastic optimization problems applied to finance by presenting the different existing methods: dynamic programming, viscosity solutions, backward stochastic differential equations, and martingale duality methods.
Abstract: Stochastic optimization problems arise in decision-making problems under uncertainty, and find various applications in economics and finance. On the other hand, problems in finance have recently led to new developments in the theory of stochastic control. This volume provides a systematic treatment of stochastic optimization problems applied to finance by presenting the different existing methods: dynamic programming, viscosity solutions, backward stochastic differential equations, and martingale duality methods. The theory is discussed in the context of recent developments in this field, with complete and detailed proofs, and is illustrated by means of concrete examples from the world of finance: portfolio allocation, option hedging, real options, optimal investment, etc. This book is directed towards graduate students and researchers in mathematical finance, and will also benefit applied mathematicians interested in financial applications and practitioners wishing to know more about the use of stochastic optimization methods in finance.

759 citations
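Dynamic programming, the first of the methods the blurb lists, reduces in the simplest discrete setting to backward induction: the value at each node is the discounted risk-neutral expectation of its successor values. A textbook illustration (a sketch of the general principle, not an example taken from the book) is European option pricing on a binomial tree:

```python
def binomial_call(S0, K, r, u, d, n):
    """Price a European call by backward induction on an n-step binomial tree.

    S0 : initial stock price, K : strike, r : per-step risk-free rate,
    u, d : up/down multiplicative factors (requires d < 1 + r < u).
    """
    p = (1 + r - d) / (u - d)  # risk-neutral up-probability
    # terminal payoffs at the n+1 leaves
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # backward induction: discounted risk-neutral expectation at each node
    for step in range(n, 0, -1):
        values = [
            (p * values[j + 1] + (1 - p) * values[j]) / (1 + r)
            for j in range(step)
        ]
    return values[0]

price = binomial_call(S0=100, K=100, r=0.01, u=1.1, d=0.9, n=2)
```

The continuous-time analogues of this recursion are exactly the Hamilton–Jacobi–Bellman equations that the viscosity-solution and BSDE methods in the book are designed to handle.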