Institution
Paris Dauphine University
Education • Paris, France
About: Paris Dauphine University is an education organization based in Paris, France. It is known for research contributions in the topics Context (language use) and Population. The organization has 1766 authors who have published 6909 publications receiving 162747 citations. The organization is also known as Paris Dauphine or simply Dauphine.
Topics: Context (language use), Population, Approximation algorithm, Bounded function, Nonlinear system
Papers published on a yearly basis
Papers
TL;DR: In this article, the motion of a finite number of point vortices on a two-dimensional periodic domain is considered, and it is shown that when a generic stochastic perturbation compatible with the Eulerian description is introduced, the point vortex motion becomes well posed for every initial configuration, in particular coalescence disappears.
Abstract: The motion of a finite number of point vortices on a two-dimensional periodic domain is considered. In the deterministic case it is known to be well posed only for almost every initial configuration. Coalescence of vortices may occur for certain initial conditions. We prove that when a generic stochastic perturbation compatible with the Eulerian description is introduced, the point vortex motion becomes well posed for every initial configuration, in particular coalescence disappears.
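The stochastic point-vortex dynamics described above can be illustrated with a toy Euler–Maruyama simulation. A minimal sketch, assuming the free-space Biot–Savart kernel with nearest-image wrapping on the torus [0,1)² and simple additive noise per vortex (the paper's actual perturbation is of Eulerian transport type, and the true periodic kernel is a lattice sum; both are simplified here):

```python
import numpy as np

def biot_savart(dx):
    """Free-space planar Biot-Savart kernel K(x) = (1/2*pi) x_perp / |x|^2."""
    r2 = np.sum(dx**2, axis=-1, keepdims=True)
    perp = np.stack([-dx[..., 1], dx[..., 0]], axis=-1)
    return perp / (2.0 * np.pi * r2)

def step_vortices(x, gamma, dt, sigma, rng):
    """One Euler-Maruyama step for N point vortices on the torus [0,1)^2,
    with toy additive noise standing in for the paper's perturbation."""
    n = len(x)
    drift = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = x[i] - x[j]
                d -= np.round(d)          # nearest periodic image
                drift[i] += gamma[j] * biot_savart(d)
    x_new = x + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return np.mod(x_new, 1.0)             # wrap back onto the torus

rng = np.random.default_rng(0)
x = np.array([[0.3, 0.5], [0.7, 0.5]])    # two vortices
gamma = np.array([1.0, 1.0])              # circulations
for _ in range(100):
    x = step_vortices(x, gamma, dt=1e-3, sigma=0.05, rng=rng)
```

This is only a numerical cartoon of the setup; the well-posedness result itself concerns the SDE in continuous time, not any particular discretization.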
56 citations
TL;DR: In this paper, it was shown that the difference between the Heisenberg observable B_t = U(t)B̂U(−t) and its semiclassical approximation of order N − 1 is majorized by K N^{(6n+1)N} ℏ^{−4/9}(−ℏ log ℏ)^N for t ∈ [0, T_n(ℏ)], where T_n(ℏ) := −2 log ℏ/[α(6n + 3)(N − 1)] and α := ‖Hess_{(x,ξ)}H‖.
Abstract: Let H be a holomorphic Hamiltonian of quadratic growth on ℝ²ⁿ, b a holomorphic exponentially localized observable, and Ĥ, B̂ the corresponding operators on L²(ℝⁿ) generated by Weyl quantization, with U(t) = exp(iĤt/ℏ). It is proved that the L² norm of the difference between the Heisenberg observable B_t = U(t)B̂U(−t) and its semiclassical approximation of order N − 1 is majorized by K N^{(6n+1)N} ℏ^{−4/9}(−ℏ log ℏ)^N for t ∈ [0, T_n(ℏ)], where T_n(ℏ) := −2 log ℏ/[α(6n + 3)(N − 1)] and α := ‖Hess_{(x,ξ)}H‖. Choosing a suitable N(ℏ), the error is majorized by C ℏ |log ℏ| for 0 ≤ t ≤ |log ℏ|/log |log ℏ| (here K and C are explicit constants independent of N and ℏ).
56 citations
TL;DR: A multilevel framework is used to jointly analyze economic networks between firms and informal networks between their members, reframing the embeddedness hypothesis; the analysis shows that while each level has its own specific processes, the two levels are partly nested.
56 citations
TL;DR: A monotonically convergent algorithm is proposed that can enforce spectral constraints on the control field (and extends to arbitrary filters), and an optimal solution is determined that could be implemented experimentally with pulse-shaping techniques.
Abstract: We propose a monotonically convergent algorithm which can enforce spectral constraints on the control field (and extends to arbitrary filters). The procedure differs from standard algorithms in that at each iteration, the control field is taken as a linear combination of the control field (computed by the standard algorithm) and the filtered field. The parameter of the linear combination is chosen to respect the monotonic behavior of the algorithm and to be as close to the filtered field as possible. We test the efficiency of this method on molecular alignment. Using bandpass filters, we show how to select particular rotational transitions to reach high alignment efficiency. We also consider spectral constraints corresponding to experimental conditions using pulse-shaping techniques. We determine an optimal solution that could be implemented experimentally with this technique.
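The core update of the constrained algorithm, as described above, mixes the field returned by the standard monotonic step with its filtered version. A minimal sketch, assuming an FFT-based bandpass filter and treating the mixing weight `eta` as given (in the paper it is chosen at each iteration to preserve monotonicity while staying as close to the filtered field as possible):

```python
import numpy as np

def filtered_update(e_standard, bandpass, eta):
    """One constrained iteration: convex combination of the field computed by
    the standard monotonic algorithm and its filtered version (eta in [0, 1])."""
    e_filtered = bandpass(e_standard)
    return (1.0 - eta) * e_standard + eta * e_filtered

def make_bandpass(t, f_lo, f_hi):
    """FFT-based bandpass filter on a uniformly sampled real signal."""
    dt = t[1] - t[0]
    freqs = np.fft.rfftfreq(len(t), d=dt)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    def bandpass(e):
        spec = np.fft.rfft(e)
        return np.fft.irfft(spec * mask, n=len(e))
    return bandpass

# Toy field: a 5 Hz component inside the band and a 60 Hz component outside it.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
field = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
bp = make_bandpass(t, 1.0, 20.0)
new_field = filtered_update(field, bp, eta=1.0)  # eta = 1 keeps only the filtered field
```

With `eta = 1` the update reduces to a pure projection onto the allowed spectral band; the paper's point is that intermediate `eta` values can respect the constraint while keeping the monotonic increase of the control objective.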
56 citations
TL;DR: In this paper, the Langevin Monte Carlo (LMC) algorithm was extended to compactly supported measures via a projection step, akin to projected Stochastic Gradient Descent (SGD).
Abstract: We extend the Langevin Monte Carlo (LMC) algorithm to compactly supported measures via a projection step, akin to projected Stochastic Gradient Descent (SGD). We show that (projected) LMC allows sampling in polynomial time from a log-concave distribution with smooth potential. This gives a new Markov chain for sampling from a log-concave distribution. Our main result shows in particular that when the target distribution is uniform, LMC mixes in O(n⁷) steps (where n is the dimension). We also provide preliminary experimental evidence that LMC performs at least as well as hit-and-run, for which a better mixing time of O(n⁴) was proved by Lovász and Vempala.
56 citations
Authors
Showing all 1819 results
Name | H-index | Papers | Citations |
---|---|---|---|
Pierre-Louis Lions | 98 | 283 | 57043 |
Laurent D. Cohen | 94 | 417 | 42709 |
Chris Bowler | 87 | 288 | 35399 |
Christian P. Robert | 75 | 535 | 36864 |
Albert Cohen | 71 | 368 | 19874 |
Gabriel Peyré | 65 | 303 | 16403 |
Kerrie Mengersen | 65 | 737 | 20058 |
Nader Masmoudi | 62 | 245 | 10507 |
Roland Glowinski | 61 | 393 | 20599 |
Jean-Michel Morel | 59 | 302 | 29134 |
Nizar Touzi | 57 | 224 | 11018 |
Jérôme Lang | 57 | 277 | 11332 |
William L. Megginson | 55 | 169 | 18087 |
Alain Bensoussan | 55 | 417 | 22704 |
Yves Meyer | 53 | 128 | 14604 |