
Showing papers by "Benjamin Jourdain published in 2017"


Journal ArticleDOI
TL;DR: In this article, the Keller-Segel partial differential equation is approximated by a system of N two-dimensional Brownian particles interacting through a singular attractive kernel in the drift term.
Abstract: The Keller-Segel partial differential equation is a two-dimensional model for chemotaxis. When the total mass of the initial density is one, it is known to exhibit blow-up in finite time as soon as the sensitivity $\chi$ of bacteria to the chemo-attractant is larger than $8\pi$. We investigate its approximation by a system of $N$ two-dimensional Brownian particles interacting through a singular attractive kernel in the drift term. In the very subcritical case $\chi<2\pi$, the diffusion strongly dominates this singular drift: we obtain existence for the particle system and prove that its flow of empirical measures converges, as $N\to\infty$ and up to extraction of a subsequence, to a weak solution of the Keller-Segel equation. We also show that for any $N\ge 2$ and any value of $\chi>0$, pairs of particles do collide with positive probability: the singularity of the drift is indeed visited. Nevertheless, when $\chi<2\pi N$, it is possible to control the drift and obtain existence of the particle system until the first time when at least three particles collide. We check that this time is a.s. infinite, so that global existence holds for the particle system, if and only if $\chi\leq 8\pi(N-2)/(N-1)$. Finally, we remark that in the system with $N=2$ particles, the difference between the two positions provides a natural two-dimensional generalization of Bessel processes, which we study in detail.
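The particle system described above is straightforward to simulate; below is a minimal Euler-Maruyama sketch in which the drift normalization, the regularization parameter and the initial condition are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def ks_particles(N=50, chi=4.0, T=0.5, steps=500, eps=1e-4, seed=0):
    """Euler-Maruyama sketch of the N-particle Keller-Segel approximation:
    dX_i = -(chi/(2*pi*N)) * sum_{j!=i} (X_i-X_j)/|X_i-X_j|^2 dt + sqrt(2) dB_i.
    The singular kernel is regularized by eps so the explicit scheme stays
    stable near collisions (an assumption for illustration)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    X = rng.standard_normal((N, 2))                     # illustrative initial law
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]            # pairwise X_i - X_j
        r2 = np.sum(diff**2, axis=-1) + eps             # regularized squared distances
        np.fill_diagonal(r2, np.inf)                    # exclude self-interaction
        drift = -(chi / (2.0 * np.pi * N)) * np.sum(diff / r2[..., None], axis=1)
        X = X + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal((N, 2))
    return X
```

With $\chi = 4 < 2\pi$ the diffusion dominates the attraction, so the cloud of particles stays spread out over moderate horizons.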

57 citations


Posted Content
TL;DR: It turns out that, in dimension 1, the projections do not depend on $\rho$ and their quantile functions are explicit, which leads to efficient algorithms for convex combinations of Dirac masses.
Abstract: Motivated by the approximation of Martingale Optimal Transport problems, we study sampling methods preserving the convex order for two probability measures $\mu$ and $\nu$ on $\mathbb{R}^d$, with $\nu$ dominating $\mu$. When $(X_i)_{1\le i\le I}$ (resp. $(Y_j)_{1\le j\le J}$) are i.i.d. according to $\mu$ (resp. $\nu$), the empirical measures $\mu_I$ and $\nu_J$ are not in the convex order. We investigate modifications of $\mu_I$ (resp. $\nu_J$) smaller than $\nu_J$ (resp. greater than $\mu_I$) in the convex order and weakly converging to $\mu$ (resp. $\nu$) as $I,J\to\infty$. In dimension 1, according to Kertz and R\"osler (1992), the set of probability measures with a finite first-order moment is a lattice for the increasing and the decreasing convex orders. From this result, we can define $\mu\vee\nu$ (resp. $\mu\wedge\nu$), which is greater than $\mu$ (resp. smaller than $\nu$) in the convex order. We give efficient algorithms to compute $\mu\vee\nu$ and $\mu\wedge\nu$ when $\mu$ and $\nu$ are convex combinations of Dirac masses. In general dimension, when $\mu$ and $\nu$ have finite moments of order $\rho\ge 1$, we define the projection $\mu\curlywedge_\rho\nu$ (resp. $\mu\curlyvee_\rho\nu$) of $\mu$ (resp. $\nu$) on the set of probability measures dominated by $\nu$ (resp. larger than $\mu$) in the convex order for the Wasserstein distance with index $\rho$. When $\rho=2$, $\mu_I\curlywedge_2\nu_J$ can be computed efficiently by solving a quadratic optimization problem with linear constraints. It turns out that, in dimension 1, the projections do not depend on $\rho$ and their quantile functions are explicit, which leads to efficient algorithms for convex combinations of Dirac masses. Last, we illustrate by numerical experiments the resulting sampling methods that preserve the convex order and their application to approximate Martingale Optimal Transport problems.
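For discrete measures, the $\rho=2$ projection mentioned above is a quadratic optimization problem with linear constraints. The sketch below works in dimension 1 and parametrizes the measures dominated by $\nu_J$ in the convex order through a coupling matrix (Strassen's characterization: $\nu_J$ must be obtainable from the projected measure by dilations); the solver choice and the uniform weights on $\mu_I$ are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def w2_projection(x, y, q):
    """Sketch of the W2 projection of mu = (1/I) sum_i delta_{x_i} onto the
    measures dominated by nu = sum_j q_j delta_{y_j} in the convex order.
    Variables: a coupling pi_ij >= 0 with row sums 1/I and column sums q_j;
    the projected atoms are the conditional means z_i = I * sum_j pi_ij y_j,
    and the objective is the squared W2 distance sum_i (1/I)(x_i - z_i)^2."""
    I, J = len(x), len(y)
    p = np.full(I, 1.0 / I)

    def atoms(v):
        return v.reshape(I, J) @ y / p            # z_i = (sum_j pi_ij y_j)/p_i

    cons = [
        {'type': 'eq', 'fun': lambda v: v.reshape(I, J).sum(axis=1) - p},
        {'type': 'eq', 'fun': lambda v: v.reshape(I, J).sum(axis=0) - q},
    ]
    pi0 = np.outer(p, q).ravel()                  # feasible start: product coupling
    res = minimize(lambda v: np.sum(p * (x - atoms(v))**2), pi0,
                   bounds=[(0.0, None)] * (I * J), constraints=cons,
                   method='SLSQP')
    return atoms(res.x)
```

Since convex-order domination forces equal means, the projected atoms always average to the mean of $\nu_J$, which gives a quick sanity check on the solver output.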

22 citations


Journal ArticleDOI
TL;DR: In this paper, the authors prove existence for the Fokker-Planck equation associated with an SDE nonlinear in the sense of McKean, in the case where the local volatility function is equal to the spot value multiplied by the inverse of the root conditional mean square of the stochastic volatility factor given that value.
Abstract: By Gyongy's theorem, a local and stochastic volatility (LSV) model is calibrated to the market prices of all European call options with positive maturities and strikes if its local volatility function is equal to the ratio of the Dupire local volatility function over the root conditional mean square of the stochastic volatility factor given the spot value. This leads to an SDE nonlinear in the sense of McKean. Particle methods based on a kernel approximation of the conditional expectation, as presented by Guyon and Henry-Labordere (2011), provide an efficient calibration procedure even if some calibration errors may appear when the range of the stochastic volatility factor is very large. But so far, no global existence result is available for the SDE nonlinear in the sense of McKean. In the particular case where the local volatility function is equal to the inverse of the root conditional mean square of the stochastic volatility factor multiplied by the spot value given this value and the interest rate is zero, the solution to the SDE is a fake Brownian motion. When the stochastic volatility factor is a constant (over time) random variable taking finitely many values and the range of its square is not too large, we prove existence for the associated Fokker-Planck equation. Thanks to Figalli (2008), we then deduce existence of a new class of fake Brownian motions. We then extend these results to the special case of the LSV model called regime switching local volatility, where the stochastic volatility factor is a jump process taking finitely many values and with jump intensities depending on the spot level. Under the same condition on the range of its square, we prove existence for the associated Fokker-Planck PDE. Finally, we deduce existence of the calibrated model by extending the results in Figalli (2008).
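The kernel approximation of the conditional expectation underlying the particle calibration can be illustrated with a Nadaraya-Watson estimator; the Gaussian kernel and the bandwidth below are illustrative choices, not the paper's.

```python
import numpy as np

def nw_conditional_expectation(x_query, X, V, bandwidth):
    """Nadaraya-Watson estimate of E[V | X = x] from particle samples
    (X_k, V_k): a weighted average of the V_k with Gaussian kernel weights
    K((x - X_k)/bandwidth) centered at each query point x."""
    w = np.exp(-0.5 * ((np.asarray(x_query)[:, None] - X[None, :]) / bandwidth)**2)
    return (w * V[None, :]).sum(axis=1) / w.sum(axis=1)
```

In the calibration context, V would hold the squared stochastic volatility factor of each particle, and the leverage function is the Dupire local volatility divided by the square root of this conditional expectation.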

13 citations


Journal ArticleDOI
TL;DR: The Self-Healing Umbrella Sampling (SHUS) algorithm is an adaptive biasing algorithm proposed in Marsili et al. (2006) to efficiently sample a multimodal probability measure; this paper shows that it can be seen as a variant of the well-known Wang-Landau algorithm and proves its convergence.
Abstract: The Self-Healing Umbrella Sampling (SHUS) algorithm is an adaptive biasing algorithm which has been proposed in Marsili et al. (J Phys Chem B 110(29):14011-14013, 2006) in order to efficiently sample a multimodal probability measure. We show that this method can be seen as a variant of the well-known Wang-Landau algorithm of Wang and Landau (Phys Rev E 64:056101, 2001a; Phys Rev Lett 86(10):2050-2053, 2001b). Adapting results on the convergence of the Wang-Landau algorithm obtained in Fort et al. (Math Comput 84(295):2297-2327, 2014a), we prove the convergence of the SHUS algorithm. We also compare the two methods in terms of efficiency. We finally propose a modification of the SHUS algorithm in order to increase its efficiency, and exhibit some similarities of SHUS with the well-tempered metadynamics method of Barducci et al. (Phys Rev Lett 100:020603, 2008).
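The mechanism shared by SHUS and Wang-Landau, penalizing states as they are visited so that the chain escapes metastable wells, can be sketched as follows; the toy target, the bin penalty update and the learning rate are illustrative, not the exact SHUS weights.

```python
import numpy as np

def adaptive_bias_sampler(n_steps=50000, n_bins=20, gamma=0.01, seed=0):
    """Wang-Landau-flavored adaptive biasing sketch: a random-walk Metropolis
    chain on [0,1) targets pi(x)*exp(-theta[bin(x)]), and theta is increased
    on every visit to the current bin, pushing the chain out of wells it has
    already explored. Returns the visit counts per bin."""
    rng = np.random.default_rng(seed)
    log_pi = lambda x: -8.0 * np.cos(2.0 * np.pi * x)**2  # bimodal toy target
    theta = np.zeros(n_bins)
    visits = np.zeros(n_bins, dtype=int)
    x = 0.25
    b = min(int(x * n_bins), n_bins - 1)
    for _ in range(n_steps):
        y = (x + 0.05 * rng.standard_normal()) % 1.0      # periodic proposal
        by = min(int(y * n_bins), n_bins - 1)
        log_acc = (log_pi(y) - theta[by]) - (log_pi(x) - theta[b])
        if np.log(rng.random()) < log_acc:
            x, b = y, by
        theta[b] += gamma                                 # penalize current bin
        visits[b] += 1
    return visits
```

Without the `theta` update the chain would remain trapped near one of the two modes; with it, the accumulated penalty eventually exceeds the energy barrier and all bins get visited.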

9 citations


Journal ArticleDOI
TL;DR: In this paper, a central limit theorem for post-stratified Monte Carlo estimators with an associated infinite number of strata is developed and then applied to debiased multilevel Monte Carlo (MLMC) algorithms.
Abstract: This paper develops a general central limit theorem (CLT) for post-stratified Monte Carlo estimators with an associated infinite number of strata. In addition, consistency of the corresponding variance estimator is established in the same setting. With these results in hand, one can then construct asymptotically valid confidence interval procedures for such infinitely stratified estimators. We then illustrate our general theory, by applying it to the specific case of debiased multi-level Monte Carlo (MLMC) algorithms. This leads to the first asymptotically valid confidence interval procedure for such stratified debiased MLMC procedures.
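For intuition, here is a finite-strata analogue of a stratified estimator with its CLT-based confidence interval; the proportional allocation, the uniform strata and the toy integrand are our assumptions, not the paper's infinitely stratified setting.

```python
import numpy as np

def stratified_ci(f, n_per_stratum=200, n_strata=50, z=1.96, seed=0):
    """Stratified Monte Carlo for E[f(U)], U ~ Uniform(0,1), over equal-width
    strata with proportional allocation. Returns the point estimate and a
    CLT-based confidence interval built from the per-stratum sample variances."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_strata + 1)
    means = np.empty(n_strata)
    vars_ = np.empty(n_strata)
    for k in range(n_strata):
        u = rng.uniform(edges[k], edges[k + 1], size=n_per_stratum)
        fu = f(u)
        means[k] = fu.mean()
        vars_[k] = fu.var(ddof=1)
    p = 1.0 / n_strata                       # equal stratum probabilities
    est = p * means.sum()
    var_est = np.sum(p**2 * vars_ / n_per_stratum)
    half = z * np.sqrt(var_est)
    return est, (est - half, est + half)
```

The variance estimator here is the finite-strata counterpart of the consistent variance estimator established in the paper, which is what makes the interval asymptotically valid.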

6 citations


Posted Content
TL;DR: In this paper, for two probability measures with finite moments of order $\rho\ge 1$, the authors define their respective projections, for the $W_\rho$-Wasserstein distance, on the sets of probability measures dominated by one and larger than the other in the convex order.
Abstract: In this paper, for $\mu$ and $\nu$ two probability measures on $\mathbb{R}^d$ with finite moments of order $\rho\ge 1$, we define the respective projections for the $W_\rho$-Wasserstein distance of $\mu$ and $\nu$ on the sets of probability measures dominated by $\nu$ and of probability measures larger than $\mu$ in the convex order. The $W_2$-projection of $\mu$ can be easily computed when $\mu$ and $\nu$ have finite support by solving a quadratic optimization problem with linear constraints. In dimension $d=1$, Gozlan et al. (2018) have shown that the projections do not depend on $\rho$. We make their quantile functions explicit in terms of those of $\mu$ and $\nu$. The motivation is the design of sampling techniques preserving the convex order in order to approximate Martingale Optimal Transport problems by using linear programming solvers. We prove convergence of the Wasserstein projection based sampling methods as the sample sizes tend to infinity and illustrate them by numerical experiments.

3 citations


Journal ArticleDOI
TL;DR: In this article, the strong convergence rate of the Ninomiya-Victoir scheme is studied; when the Lie brackets between the Brownian vector fields vanish, the scheme coincides with the solution to the SDE on the discretization grid.

3 citations


Journal ArticleDOI
TL;DR: This paper summarizes results on the strong convergence rate of the Ninomiya-Victoir scheme and the stable convergence in law of its normalized error, and recalls the properties of the multilevel Monte Carlo estimators involving this scheme.
Abstract: In this paper, we summarize the results about the strong convergence rate of the Ninomiya-Victoir scheme and the stable convergence in law of its normalized error that we obtained in previous papers. We then recall the properties of the multilevel Monte Carlo estimators involving this scheme that we introduced and studied before. Last, we are interested in the error introduced by discretizing the ordinary differential equations involved in the Ninomiya-Victoir scheme. We prove that this error converges with strong order 2 when an explicit Runge-Kutta method with order 4 (resp. 2) is used for the ODEs corresponding to the Brownian (resp. Stratonovich drift) vector fields. We thus relax the order 5 for the Brownian ODEs needed by Ninomiya and Ninomiya (2009) to obtain the same order of strong convergence. Moreover, the properties of our multilevel Monte Carlo estimators are preserved when these Runge-Kutta methods are used.
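The explicit fourth-order Runge-Kutta method mentioned for the Brownian ODEs is the classical one; here is a generic sketch of one step, not tied to any specific vector field of the scheme.

```python
def rk4_step(field, x, h):
    """One classical fourth-order Runge-Kutta step for the ODE x' = field(x):
    four slope evaluations combined with weights 1/6, 2/6, 2/6, 1/6."""
    k1 = field(x)
    k2 = field(x + 0.5 * h * k1)
    k3 = field(x + 0.5 * h * k2)
    k4 = field(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

In the Ninomiya-Victoir splitting, each sub-step integrates one vector field over a random time increment; replacing the exact ODE flows by such Runge-Kutta steps introduces the discretization error analyzed in the abstract.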

2 citations


Journal ArticleDOI
TL;DR: This paper proposes an enhancement of recently introduced regression-based variance reduction approaches, based on a truncation of the control variate, which allows for a significant reduction of the computing time while the complexity stays of the same order.
Abstract: In this paper we present an enhancement of the regression-based variance reduction approaches recently proposed in Belomestny et al. [1] and [4]. This enhancement is based on a truncation of the control variate and allows for a significant reduction of the computing time, while the complexity stays of the same order. The performance of the proposed truncated algorithms is illustrated by a numerical example.
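The idea of truncating a regression-based control variate can be sketched generically as follows; the Hermite basis, the truncation rule and the test function are illustrative assumptions, not the construction of Belomestny et al.

```python
import numpy as np

def truncated_cv(f, n_pilot=2000, n=20000, threshold=0.05, seed=0):
    """Regression-based control variate with coefficient truncation (sketch).
    The basis functions x, x^2-1, x^3-3x have mean zero under N(0,1), so
    subtracting any linear combination keeps the estimator of E[f(X)]
    unbiased; zeroing the small fitted coefficients cheapens the control
    variate evaluation on the main sample."""
    rng = np.random.default_rng(seed)

    def basis(x):
        return np.column_stack([x, x**2 - 1.0, x**3 - 3.0 * x])

    xp = rng.standard_normal(n_pilot)                    # pilot sample: fit
    fp = f(xp)
    beta, *_ = np.linalg.lstsq(basis(xp), fp - fp.mean(), rcond=None)
    beta[np.abs(beta) < threshold] = 0.0                 # truncation step

    x = rng.standard_normal(n)                           # main sample
    return np.mean(f(x) - basis(x) @ beta)
```

For f(x) = x + x^2 the fitted coefficients concentrate on the first two basis functions, the third is truncated away, and the estimator stays close to E[f(X)] = 1 with a much smaller variance than plain Monte Carlo.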

1 citation


Posted Content
TL;DR: In this article, the authors study sampling methods preserving the convex order for two probability measures, based on lattice operations in dimension 1 and on Wasserstein projections in general dimension, and give efficient algorithms for convex combinations of Dirac masses.
Abstract: Motivated by the approximation of Martingale Optimal Transport problems, we study sampling methods preserving the convex order for two probability measures $\mu$ and $\nu$ on $\mathbb{R}^d$, with $\nu$ dominating $\mu$. When $(X_i)_{1\le i\le I}$ (resp. $(Y_j)_{1\le j\le J}$) are i.i.d. according to $\mu$ (resp. $\nu$), the empirical measures $\mu_I$ and $\nu_J$ are not in the convex order. We investigate modifications of $\mu_I$ (resp. $\nu_J$) smaller than $\nu_J$ (resp. greater than $\mu_I$) in the convex order and weakly converging to $\mu$ (resp. $\nu$) as $I,J\to\infty$. In dimension 1, according to Kertz and R\"osler (1992), the set of probability measures with a finite first-order moment is a lattice for the increasing and the decreasing convex orders. From this result, we can define $\mu\vee\nu$ (resp. $\mu\wedge\nu$), which is greater than $\mu$ (resp. smaller than $\nu$) in the convex order. We give efficient algorithms to compute $\mu\vee\nu$ and $\mu\wedge\nu$ when $\mu$ and $\nu$ are convex combinations of Dirac masses. In general dimension, when $\mu$ and $\nu$ have finite moments of order $\rho\ge 1$, we define the projection $\mu\curlywedge_\rho\nu$ (resp. $\mu\curlyvee_\rho\nu$) of $\mu$ (resp. $\nu$) on the set of probability measures dominated by $\nu$ (resp. larger than $\mu$) in the convex order for the Wasserstein distance with index $\rho$. When $\rho=2$, $\mu_I\curlywedge_2\nu_J$ can be computed efficiently by solving a quadratic optimization problem with linear constraints. It turns out that, in dimension 1, the projections do not depend on $\rho$ and their quantile functions are explicit, which leads to efficient algorithms for convex combinations of Dirac masses. Last, we illustrate by numerical experiments the resulting sampling methods that preserve the convex order and their application to approximate Martingale Optimal Transport problems.

1 citation


Posted Content
TL;DR: In this article, the authors derived non-asymptotic error bounds for the multilevel Monte Carlo method with explicit Euler discretization of stochastic differential equations with a constant diffusion coefficient and showed that, as long as the deviation is below an explicit threshold, a Gaussian-type concentration inequality optimal in terms of the variance holds for the multilevel estimator.
Abstract: In this paper, we are interested in deriving non-asymptotic error bounds for the multilevel Monte Carlo method. As a first step, we deal with the explicit Euler discretization of stochastic differential equations with a constant diffusion coefficient. We prove that, as long as the deviation is below an explicit threshold, a Gaussian-type concentration inequality optimal in terms of the variance holds for the multilevel estimator. To do so, we use the Clark-Ocone representation formula and derive bounds for the moment generating functions of the squared difference between a crude Euler scheme and a finer one and of the squared difference of their Malliavin derivatives.
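A minimal multilevel Monte Carlo estimator for an SDE with constant diffusion, with coarse and fine Euler paths driven by the same Brownian increments, can be sketched as follows; the level count and the per-level sample allocations are illustrative assumptions.

```python
import numpy as np

def mlmc_euler(b, sigma, f, x0, T, L, N_l, seed=0):
    """Multilevel Monte Carlo sketch for E[f(X_T)], dX_t = b(X_t) dt + sigma dW_t
    with constant diffusion sigma. Level l uses 2**l explicit Euler steps; the
    coarse path at level l reuses the fine path's Brownian increments pairwise,
    which is what makes the level corrections small."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        nf = 2**l                                  # fine steps at this level
        hf = T / nf
        acc = 0.0
        for _ in range(N_l[l]):
            dW = np.sqrt(hf) * rng.standard_normal(nf)
            xf = x0
            for k in range(nf):                    # fine Euler path
                xf = xf + b(xf) * hf + sigma * dW[k]
            if l == 0:
                acc += f(xf)
            else:
                xc, hc = x0, 2.0 * hf
                for k in range(nf // 2):           # coarse path, summed increments
                    xc = xc + b(xc) * hc + sigma * (dW[2 * k] + dW[2 * k + 1])
                acc += f(xf) - f(xc)
        est += acc / N_l[l]
    return est
```

The concentration inequality of the abstract controls the deviation of exactly this kind of estimator around its mean, via moment generating function bounds on the squared coarse/fine differences.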