
Showing papers by "Paris Dauphine University" published in 2009


Journal ArticleDOI
TL;DR: This work surveys complexity results for the min-max and min-max regret versions of several combinatorial optimization problems (shortest path, spanning tree, assignment, min cut, min s-t cut, knapsack) and investigates the approximability of these problems.

488 citations


Journal ArticleDOI
TL;DR: A new class of models for natural signals and images constrains the patches extracted from the data to lie close to a low-dimensional manifold; this patch-manifold structure can be used to regularize inverse problems in signal and image processing.

240 citations


Posted Content
17 Apr 2009
TL;DR: In this paper, the authors establish Landau damping for the nonlinear Vlasov equation for any interaction potential less singular than Coulomb, reinterpreting the damping phenomenon in terms of transfer of regularity between kinetic and spatial variables, rather than exchanges of energy, and pointing out the critical nature of the Coulomb potential and of analytic regularity.
Abstract: Going beyond the linearized study has been a longstanding problem in the theory of the Landau damping. In this paper we establish Landau damping for the nonlinear Vlasov equation, for any interaction potential less singular than Coulomb. The damping phenomenon is reinterpreted in terms of transfer of regularity between kinetic and spatial variables, rather than exchanges of energy. The analysis involves new families of analytic norms, measuring regularity by comparison with solutions of the free transport equation; new functional inequalities; a control of nonlinear echoes; sharp scattering estimates; and a Newton approximation scheme. We point out the (a priori unexpected) critical nature of the Coulomb potential and analytic regularity, which can be seen only at the nonlinear level; in this case we derive Landau damping over finite but exponentially long times. Physical implications are discussed.
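
For orientation, the equation at stake and its mean-field force are recalled below in standard notation (a reminder for the reader, not an excerpt from the paper):

```latex
% Nonlinear Vlasov equation for a distribution f(t,x,v) with
% interaction potential W (W = Coulomb is the critical case):
\partial_t f + v \cdot \nabla_x f + F[f](t,x) \cdot \nabla_v f = 0,
\qquad
F[f](t,x) = -\iint \nabla W(x-y)\, f(t,y,w)\, \mathrm{d}w\, \mathrm{d}y .
```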

224 citations


Proceedings Article
11 Jul 2009
TL;DR: Nested Monte-Carlo Search addresses the problem of guiding search toward better states when no heuristic is available, using nested levels of random games to direct the search.
Abstract: Many problems have a huge state space and no good heuristic to order moves so as to guide the search toward the best positions. Random games can be used to score positions and evaluate their interest. Random games can also be improved using random games to choose a move to try at each step of a game. Nested Monte-Carlo Search addresses the problem of guiding the search toward better states when there is no available heuristic. It uses nested levels of random games in order to guide the search. The algorithm is studied theoretically on simple abstract problems and applied successfully to three different games: Morpion Solitaire, SameGame and 16×16 Sudoku.
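
As a concrete illustration, here is a minimal Python sketch of nested Monte-Carlo search. The `state` interface (`terminal`, `legal_moves`, `play`, `score`) is hypothetical and stands in for any single-player game; `play` is assumed to return a new state.

```python
import random

def playout(state):
    """Level 0: play uniformly random moves until the game ends."""
    seq = []
    while not state.terminal():
        move = random.choice(state.legal_moves())
        state = state.play(move)
        seq.append(move)
    return state.score(), seq

def nested(state, level):
    """Nested Monte-Carlo Search, minimal sketch.

    At level n, each candidate move is scored by a level n-1 search;
    the best sequence found so far is memorized and followed.
    """
    if level == 0:
        return playout(state)
    best_score, best_seq = float("-inf"), []
    played = []
    while not state.terminal():
        for move in state.legal_moves():
            score, tail = nested(state.play(move), level - 1)
            if score > best_score:
                best_score, best_seq = score, [move] + tail
        move = best_seq.pop(0)   # follow the memorized best sequence
        played.append(move)
        state = state.play(move)
    return state.score(), played
```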

194 citations


Journal ArticleDOI
TL;DR: A generative model for textures is presented that uses a local sparse description of the image content, enforcing sparsity of the expansion of local texture patches over adapted atomic elements.

Abstract: This paper presents a generative model for textures that uses a local sparse description of the image content. This model enforces the sparsity of the expansion of local texture patches over adapted atomic elements. Within this framework, analyzing a given texture amounts to sparse coding all of its patches in a dictionary of atoms. Conversely, a new texture is synthesized by solving an optimization problem that seeks a texture whose patches are sparse in the dictionary. This paper explores several strategies for choosing this dictionary. A set of hand-crafted dictionaries composed of edge, oscillation, line or crossing elements makes it possible to synthesize images with geometric features. Another option is to define the dictionary as the set of all patches of an input exemplar; this leads to computer graphics methods for synthesis and shares some similarities with non-local means filtering. The last method we explore learns the dictionary through an optimization process that maximizes the sparsity of a set of exemplar patches. Applications of all these methods to texture synthesis, inpainting and classification show the efficiency of the proposed texture model.
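
A minimal sketch of the analysis step, using scikit-learn's generic dictionary learning as a stand-in for the paper's adapted dictionaries; the patch size, number of atoms and sparsity level are illustrative, and `texture` is assumed to be a 2-D grayscale array.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_texture_dictionary(texture, patch_size=8, n_atoms=128):
    # Extract all overlapping patches and flatten them into vectors.
    patches = extract_patches_2d(texture, (patch_size, patch_size))
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)       # remove the DC component
    # Learn atoms and sparse-code every patch with a few OMP coefficients.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=4)
    codes = dico.fit(X).transform(X)
    return dico.components_, codes
```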

191 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyse the role played by childhood circumstances, especially social and family background, in explaining health status among older adults, and explore the hypothesis of an intergenerational transmission of health inequalities.

Abstract: This article analyses the role played by childhood circumstances, especially social and family background, in explaining health status among older adults. We explore the hypothesis of an intergenerational transmission of health inequalities using the French part of SHARE. As the impact of both social background and parents' health on health status in adulthood represents circumstances independent of individual responsibility, this study allows us to test for the existence in France of inequalities of opportunity in health related to family and social background. Empirically, our study relies on both first-order stochastic dominance tests and multivariate regressions, supplemented by a counterfactual analysis to evaluate the long-lasting impact of childhood conditions on inequality in health. Allocating the best circumstances in both parents' SES and parents' health reduces inequality in health by an impressive 57% as measured by the Gini coefficient. The mother's social status has a direct effect on the health of her offspring. By contrast, the effect of the father's social status on the descendant's health is indirect only, operating through the descendant's social status as an adult. There is also a strong effect of the father's vital status on health in adulthood, revealing a selection effect.
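
For reference, the Gini coefficient behind the 57% figure can be computed from a vector of individual health scores with the standard mean-absolute-difference formula (generic code, not taken from the study):

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array, via the standard
    sorted-index formula G = sum_i (2i - n - 1) x_(i) / (n * sum_i x_i)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    index = np.arange(1, n + 1)
    return np.sum((2 * index - n - 1) * x) / (n * np.sum(x))
```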

187 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce a new class of distances between nonnegative Radon measures, which are modeled on the dynamical characterization of the Kantorovich-Rubinstein-Wasserstein distances and provide a wide family interpolating between the Wasserstein and the homogeneous Sobolev distances.
Abstract: We introduce a new class of distances between nonnegative Radon measures in ℝ^d. They are modeled on the dynamical characterization of the Kantorovich-Rubinstein-Wasserstein distances proposed by Benamou and Brenier (Numer Math 84:375–393, 2000) and provide a wide family interpolating between the Wasserstein and the homogeneous W^{-1,p}_γ Sobolev distances. From the point of view of optimal transport theory, these distances minimize a dynamical cost to move a given initial distribution of mass to a final configuration. An important difference with the classical setting in mass transport theory is that the cost not only depends on the velocity of the moving particles but also on the densities of the intermediate configurations with respect to a given reference measure γ. We study the topological and geometric properties of these new distances, comparing them with the notion of weak convergence of measures and the well-established Kantorovich-Rubinstein-Wasserstein theory. An example of possible applications to the geometric theory of gradient flows is also given.
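
For context, the Benamou-Brenier dynamical characterization that these distances generalize reads as follows (standard form, recalled for the reader):

```latex
% Benamou-Brenier (2000): dynamical form of the Wasserstein distance.
W_2(\mu_0,\mu_1)^2
  = \min\Big\{ \int_0^1 \!\!\int_{\mathbb{R}^d} |v_t(x)|^2 \,\mathrm{d}\mu_t(x)\,\mathrm{d}t
      \;:\; \partial_t \mu_t + \nabla\cdot(\mu_t v_t) = 0,\ \mu_{t=0}=\mu_0,\ \mu_{t=1}=\mu_1 \Big\}.
% The new distances replace the cost |v|^2 d\mu_t by one that also depends
% on the density of \mu_t with respect to the reference measure \gamma.
```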

169 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider nonnegative solutions of the fast diffusion equation u_t = Δu^m with m ∈ (0, 1) in the Euclidean space and study the asymptotic behavior of a natural class of solutions in the limit t → ∞ for m ≥ m_c = (d − 2)/d, or as t approaches the extinction time when m < m_c.

Abstract: We consider non-negative solutions of the fast diffusion equation u_t = Δu^m with m ∈ (0, 1) in the Euclidean space ℝ^d, d ≥ 3, and study the asymptotic behavior of a natural class of solutions in the limit corresponding to t → ∞ for m ≥ m_c = (d − 2)/d, or as t approaches the extinction time when m < m_c. For a class of initial data, we prove that the solution converges with a polynomial rate to a self-similar solution, for t large enough if m ≥ m_c, or close enough to the extinction time if m < m_c. Such results are new in the range m ≤ m_c where previous approaches fail. In the range m_c < m < 1, we improve on known results.
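
The setting in display form (a reminder of the notation, not an excerpt):

```latex
% Fast diffusion equation and its critical exponent:
\partial_t u = \Delta u^m, \qquad 0 < m < 1,\quad x \in \mathbb{R}^d,\ d \ge 3,
\qquad m_c = \frac{d-2}{d}.
% For m < m_c solutions may vanish in finite time (extinction); the paper
% studies convergence to self-similar profiles in both regimes.
```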

154 citations


Journal ArticleDOI
TL;DR: A method for proving the hypocoercivity of kinetic equations involving a linear time relaxation operator, based on the construction of an adapted Lyapunov functional satisfying a Gronwall-type inequality.

151 citations


Journal ArticleDOI
TL;DR: The main idea of the approach relies on the use of several complementary dominance relations to discard partial solutions that cannot lead to new non-dominated criterion vectors, yielding an efficient method that outperforms existing methods both in CPU time and in the size of solved instances.

136 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a rigorous mathematical proof of the existence of travelling wave solutions to the Gross-Pitaevskii equation in dimensions two and three; their arguments, based on minimization under constraints, yield a full branch of solutions and extend earlier results in which only part of the branch was built.
Abstract: The purpose of this paper is to provide a rigorous mathematical proof of the existence of travelling wave solutions to the Gross-Pitaevskii equation in dimensions two and three. Our arguments, based on minimization under constraints, yield a full branch of solutions, and extend earlier results (see [3,4,8]) where only a part of the branch was built. In dimension three, we also show that there are no travelling wave solutions of small energy.
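
In the standard normalisation (sign conventions vary; this is a reminder, not the paper's exact statement), the equation and the travelling-wave ansatz are:

```latex
% Gross-Pitaevskii equation and travelling waves of speed c:
i\,\partial_t \Psi + \Delta \Psi + \Psi\,(1 - |\Psi|^2) = 0,
\qquad
\Psi(t,x) = v(x_1 - c t,\, x_2, \dots, x_d),
% so that the profile v solves  i c\,\partial_{x_1} v = \Delta v + v(1-|v|^2).
```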

Journal ArticleDOI
TL;DR: A new derivation of the dynamic programming equation for general stochastic target problems with unbounded controls is provided, together with the appropriate boundary conditions; these results are applied to the problem of quantile hedging in financial mathematics.

Abstract: We consider the problem of finding the minimal initial data of a controlled process which guarantees to reach a controlled target with a given probability of success or, more generally, with a given level of expected loss. By suitably increasing the state space and the controls, we show that this problem can be converted into a stochastic target problem, i.e., finding the minimal initial data of a controlled process which guarantees to reach a controlled target with probability one. Unlike in the existing literature on stochastic target problems, our increased controls are valued in an unbounded set. In this paper, we provide a new derivation of the dynamic programming equation for general stochastic target problems with unbounded controls, together with the appropriate boundary conditions. These results are applied to the problem of quantile hedging in financial mathematics and are shown to recover the explicit solution of Föllmer and Leukert [Finance Stoch., 3 (1999), pp. 251-273].
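
In symbols, the quantile hedging problem recovered here can be stated schematically as follows (our notation, for orientation only):

```latex
% Minimal capital that super-replicates a claim g(X_T) with probability p:
v(t,x,p) = \inf\big\{ y :\ \exists\,\nu\ \text{admissible},\
  \mathbb{P}\big[\, Y_T^{t,y,\nu} \ge g(X_T^{t,x}) \,\big] \ge p \big\}.
% The paper converts this into a probability-one stochastic target problem
% by adding the conditional success probability as a controlled state variable.
```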

Journal ArticleDOI
TL;DR: Using a small volatility-of-volatility expansion and Malliavin calculus techniques, an accurate analytical formula is derived for the price of vanilla options in any time-dependent Heston model (the error is less than a few basis points for various strikes and maturities).

Abstract: The use of the Heston model is still challenging because it has a closed formula only when the parameters are constant [Hes93] or piecewise constant [MN03]. Hence, using a small volatility-of-volatility expansion and Malliavin calculus techniques, we derive an accurate analytical formula for the price of vanilla options for any time-dependent Heston model (the error is less than a few basis points for various strikes and maturities). In addition, we establish tight error estimates. The advantage of this approach over Fourier-based methods is its speed (a gain by a factor of 100 or more), while maintaining competitive accuracy. From the approximate formula, we also derive some corollaries related, first, to equivalent Heston models (extending some work of Piterbarg on stochastic volatility models [Pit05]) and, second, to the calibration procedure in terms of ill-posed problems.
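
For reference, the (time-dependent) Heston dynamics in schematic risk-neutral form, with drift omitted:

```latex
% Heston model with time-dependent parameters:
dS_t = S_t \sqrt{v_t}\, dW_t^1, \qquad
dv_t = \kappa_t(\theta_t - v_t)\,dt + \xi_t \sqrt{v_t}\, dW_t^2, \qquad
d\langle W^1, W^2\rangle_t = \rho_t\, dt.
% The expansion is in powers of the volatility of volatility \xi_t, around
% the price obtained at \xi \equiv 0.
```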

Journal ArticleDOI
TL;DR: In this paper, an extension of the comprehensive (overall) concordance index of ELECTRE methods is presented that takes the interaction between criteria into account while imposing conditions such as boundary, monotonicity, and continuity.

Journal ArticleDOI
TL;DR: In this paper, the authors combine measurements of weak gravitational lensing from the CFHTLS-Wide survey, supernovae Ia from CFHT SNLS and CMB anisotropies from WMAP5 to obtain joint constraints on cosmological parameters, in particular, the dark energy equation of state parameter w.
Abstract: We combine measurements of weak gravitational lensing from the CFHTLS-Wide survey, supernovae Ia from CFHT SNLS and CMB anisotropies from WMAP5 to obtain joint constraints on cosmological parameters, in particular the dark-energy equation-of-state parameter w. We assess the influence of systematics in the data on the results and look for possible correlations with cosmological parameters. We implement an MCMC algorithm to sample the parameter space of a flat ΛCDM model with a dark-energy component of constant w. Systematics in the data are parametrised and included in the analysis. We determine the influence of photometric calibration of SNIa data on cosmological results by calculating the response of the distance modulus to photometric zero-point variations. The weak lensing data set is tested for anomalous field-to-field variations and a systematic shape measurement bias for high-z galaxies. Ignoring photometric uncertainties for SNLS biases cosmological parameters by at most 20% of the statistical errors, using supernovae only; the parameter uncertainties are underestimated by 10%. The weak-lensing field-to-field variance between pointings is 5%-15% higher than that predicted from N-body simulations. We do not find evidence for a multiplicative bias of the lensing signal at high redshift, within the framework of a simple model. When restricting the bias to values smaller than unity, the normalisation σ_8 increases by up to 8%. Combining all three probes we obtain -0.10 < 1+w < 0.06 at 68% confidence (-0.18 < 1+w < 0.12 at 95%), including systematic errors. Systematics in the data increase the error bars by up to 35%; the best-fit values change by less than 0.15σ. [Abridged]

Journal ArticleDOI
TL;DR: In this paper, the authors extend Afriat's theorem to a class of nonlinear, nonconvex budget sets and show that, by increasing the number of observed choices from their class of budget sets in a regular way, one can fully identify the underlying preference relation.

Journal ArticleDOI
TL;DR: In this article, a case study of how French institutions use cost-benefit analysis (CBA) is presented; the authors examine how the use of CBA interacts with public debate and stakeholder participation in France today.

Journal ArticleDOI
TL;DR: In this article, the authors derive the limiting distribution of the nonparametric maximum likelihood estimator of the mode M(f_0) of a log-concave density and establish a new local asymptotic minimax lower bound showing the optimality of this mode estimator in terms of both rate of convergence and dependence of constants on population values.

Abstract: We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, that is, a density of the form f_0 = exp(φ_0) where φ_0 is a concave function on ℝ. The pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.

Journal ArticleDOI
TL;DR: A family of sequential voting rules, defined as the sequential composition of local voting rules, is introduced; these rules are related to the setting of conditional preference networks (CP-nets) recently developed in the Artificial Intelligence literature.

Journal ArticleDOI
TL;DR: A Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits.
Abstract: We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time scale reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
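
A minimal numpy sketch of the PMC idea with a single adaptive Gaussian proposal (the paper uses mixture proposals; this simplified stand-in only illustrates the weight-then-adapt loop). The expensive `log_target` evaluations inside each iteration are independent, which is what makes the workload easy to parallelize.

```python
import numpy as np

def pmc(log_target, n_iters=10, n_samples=1000, dim=2, rng=None):
    """Population Monte Carlo / adaptive importance sampling, sketch.

    `log_target` is the unnormalised log-posterior. The Gaussian
    proposal's mean and covariance are re-fitted to the weighted
    sample at every iteration.
    """
    rng = rng or np.random.default_rng(0)
    mean, cov = np.zeros(dim), np.eye(dim)
    for _ in range(n_iters):
        x = rng.multivariate_normal(mean, cov, size=n_samples)
        # Log-density of the proposal at the sampled points.
        diff = x - mean
        prec = np.linalg.inv(cov)
        log_q = -0.5 * np.einsum("ni,ij,nj->n", diff, prec, diff)
        log_q -= 0.5 * np.linalg.slogdet(2 * np.pi * cov)[1]
        # Normalised importance weights (target evaluations parallelize).
        log_w = np.array([log_target(xi) for xi in x]) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Adapt the proposal to the weighted sample.
        mean = w @ x
        diff = x - mean
        cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(dim)
    return x, w
```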

Posted Content
TL;DR: In this article, the authors study the asymptotic posterior distribution of linear functionals of the density and give general conditions to obtain a semiparametric version of the Bernstein-von Mises theorem.

Abstract: In this paper, we study the asymptotic posterior distribution of linear functionals of the density. In particular, we give general conditions to obtain a semiparametric version of the Bernstein-von Mises theorem. We then apply this general result to nonparametric priors based on infinite-dimensional exponential families. As a byproduct, we also derive adaptive nonparametric rates of concentration of the posterior distributions under these families of priors on the class of Sobolev and Besov spaces.

01 Jan 2009
TL;DR: In this article, the authors assess the influence of photometric calibration of SNIa data on cosmological results by calculating the response of the distance modulus to photometric zero-point variations.

Abstract: Aims. We combine measurements of weak gravitational lensing from the CFHTLS-Wide survey, supernovae Ia from CFHT SNLS and CMB anisotropies from WMAP5 to obtain joint constraints on cosmological parameters, in particular the dark-energy equation-of-state parameter w. We assess the influence of systematics in the data on the results and look for possible correlations with cosmological parameters. Methods. We implemented an MCMC algorithm to sample the parameter space of a flat ΛCDM model with a dark-energy component of constant w. Systematics in the data are parametrised and included in the analysis. We determine the influence of photometric calibration of SNIa data on cosmological results by calculating the response of the distance modulus to photometric zero-point variations. The weak lensing data set is tested for anomalous field-to-field variations and a systematic shape measurement bias for high-redshift galaxies. Results. Ignoring photometric uncertainties for SNLS biases cosmological parameters by at most 20% of the statistical errors, using supernovae alone; the parameter uncertainties are underestimated by 10%. The weak-lensing field-to-field variance between 1 deg² MegaCam pointings is 5-15% higher than predicted from N-body simulations. We find no bias in the lensing signal at high redshift, within the framework of a simple model and marginalising over cosmological parameters. Assuming a systematic underestimation of the lensing signal, the normalisation σ_8 increases by up to 8%. Combining all three probes we obtain -0.10 < 1 + w < 0.06 at 68% confidence (-0.18 < 1 + w < 0.12 at 95%), including systematic errors. Our results are therefore consistent with the cosmological constant Λ. Systematics in the data increase the error bars by up to 35%; the best-fit values change by less than 0.15σ.

Journal ArticleDOI
TL;DR: In this article, the author investigates whether the form/depth of regional trade agreements (RTAs) matters for their effect on trade, finding that creating any kind of RTA providing trade preferences to its member countries significantly increases bilateral trade.

Abstract: Regional trade agreements (RTAs) are usually classified according to their form into four broad categories: preferential arrangements, free trade agreements, customs unions and common markets. This paper investigates whether the form/depth of RTAs matters for their effect on trade. I use a proper specification of the gravity model with panel data for the 1960–2000 period, which specifically controls for self-selection into agreements. Results show that creating any kind of RTA providing trade preferences to its member countries significantly increases bilateral trade. Nevertheless, the average treatment effect on bilateral trade does not significantly differ according to the depth of agreements.
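
A generic panel gravity specification of the kind used here (illustrative; the paper's exact set of controls differs):

```latex
% Bilateral trade T_{ijt} between countries i and j in year t:
\ln T_{ijt} = \lambda_t + \mu_{ij} + \beta \,\ln(GDP_{it}\,GDP_{jt})
            + \gamma\, RTA_{ijt} + \varepsilon_{ijt},
% where the country-pair fixed effect \mu_{ij} absorbs distance and other
% time-invariant determinants and helps control for self-selection into
% agreements; \gamma measures the average treatment effect of an RTA.
```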

Journal ArticleDOI
TL;DR: A new method for segmenting closed contours and surfaces using a variant of the minimal path approach; the method can also find an open curve when extra information provides a stopping criterion, and is applied to 3D data with promising results.

Abstract: In this paper, we present a new method for segmenting closed contours and surfaces. Our work builds on a variant of the minimal path approach. First, an initial point on the desired contour is chosen by the user. Next, new keypoints are detected automatically using a front propagation approach. We assume that the desired object has a closed boundary. This a priori knowledge of the topology is used to devise a relevant criterion for stopping the keypoint detection and front propagation. The final domain visited by the front yields a band surrounding the object of interest. Linking pairs of neighboring keypoints with minimal paths allows us to extract a closed contour from a 2D image. The approach can also be used to find an open curve when extra information provides a stopping criterion. Detection of a variety of objects in real images is demonstrated. Using a similar idea, we can extract networks of minimal paths from a 3D image, a process we call Geodesic Meshing. The proposed method is applied to 3D data with promising results.
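
The backbone of any minimal path method is a shortest path computation on an image-derived cost map. Below is a minimal Dijkstra sketch on a 2-D grid, a discrete stand-in for the front propagation used in the paper (which solves the continuous Eikonal equation by fast marching); `cost` is assumed to be a positive array derived from the image.

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    """Dijkstra shortest path between two pixels on a 2-D cost grid.
    Assumes the grid is connected so that `end` is reachable."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == end:
            break
        if d > dist[i, j]:
            continue                      # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and d + cost[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + cost[ni, nj]
                prev[(ni, nj)] = (i, j)
                heapq.heappush(heap, (dist[ni, nj], (ni, nj)))
    # Walk back from end to start to recover the path.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```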

Journal ArticleDOI
TL;DR: In this article, the authors present a reference case of mean field games, where the Bellman functions are quadratic, stationary measures are normal and stability can be dealt with explicitly using Hermite polynomials.

Journal ArticleDOI
TL;DR: Using an explicit representation in terms of the logit map, the authors show, in a unilateral framework, that the time average of the replicator dynamics is a perturbed solution of the best-reply dynamics.
Abstract: Using an explicit representation in terms of the logit map, we show, in a unilateral framework, that the time average of the replicator dynamics is a perturbed solution of the best-reply dynamics.
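
The two objects involved, in standard notation (recalled for the reader):

```latex
% Replicator dynamics on the simplex (payoff matrix A), and the logit map:
\dot x_i = x_i\big[(Ax)_i - x^{\top} A x\big],
\qquad
L_\eta(\pi)_i = \frac{e^{\pi_i/\eta}}{\sum_j e^{\pi_j/\eta}}.
% As \eta \to 0 the logit map approaches the best-reply correspondence,
% which is how the time average of the replicator dynamics comes to track
% the best-reply dynamics.
```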

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the relation between early life conditions and adult obesity in France, using a rich data set collected through the 2003 nationally representative Life History Survey; they find a strong association between early disadvantage and obesity among women, but not among men.

Journal ArticleDOI
TL;DR: In this article, the authors present short proofs of absolute continuity and L^p estimates for the transport density, based on displacement interpolation and approximation by discrete measures.

Abstract: The paper presents some short proofs of transport density absolute continuity and L^p estimates. Most of the previously existing results, which were proven by geometric arguments, are re-proved through a strategy based on displacement interpolation and on approximation by discrete measures; some of them are partially extended.

Proceedings Article
11 Jul 2009
TL;DR: Conditional importance networks (CI-nets) are proposed as a language for representing monotonic ordinal preferences over sets of goods, and are shown to be well-suited to the description of fair division problems.
Abstract: While there are several languages for representing combinatorial preferences over sets of alternatives, none of these are well-suited to the representation of ordinal preferences over sets of goods (which are typically required to be monotonic). We propose such a language, taking inspiration from previous work on graphical languages for preference representation, specifically CP-nets, and introduce conditional importance networks (CI-nets). A CI-net includes statements of the form "if I have a set A of goods, and I do not have any of the goods from some other set B, then I prefer the set of goods C over the set of goods D." We investigate expressivity and complexity issues for CI-nets. Then we show that CI-nets are well-suited to the description of fair division problems.
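
A small Python sketch of one way to encode a CI-net statement and check its direct application to a pair of bundles. The encoding and names are hypothetical, for illustration only; the full CI-net semantics additionally takes the monotonic and transitive closure of such direct comparisons.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CIStatement:
    """One CI-net statement: 'if I have all goods in `have` and none of
    the goods in `avoid`, I prefer adding `better` to adding `worse`'."""
    have: frozenset
    avoid: frozenset
    better: frozenset
    worse: frozenset

def directly_prefers(stmt, set_a, set_b):
    """True if `stmt` orders set_a over set_b in one step, i.e. both sets
    extend a common context satisfying the statement's conditions."""
    context = set_a & set_b
    return (stmt.have <= context
            and not (stmt.avoid & context)
            and set_a == context | stmt.better
            and set_b == context | stmt.worse)

# Example: having bread and no wine, prefer {cheese} over {jam, butter}.
s = CIStatement(frozenset({"bread"}), frozenset({"wine"}),
                frozenset({"cheese"}), frozenset({"jam", "butter"}))
assert directly_prefers(s, frozenset({"bread", "cheese"}),
                        frozenset({"bread", "jam", "butter"}))
```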

Journal ArticleDOI
TL;DR: While the theoretical model showed very little difference between SCIG and hospital-based IVIG costs, SCIG appears to be 25% less expensive in the field data because of the lower doses used in SCIG patients; the reality of the dose difference between the two routes of administration needs to be confirmed.

Abstract: Lifelong immunoglobulin replacement is the standard, expensive therapy for severe primary antibody deficiencies. This treatment can be administered either by intravenous (IVIG) or subcutaneous (SCIG) infusions and delivered at home or in an out-patient setting. This study aims to determine whether SCIG is cost-effective compared with IVIG from a French social insurance perspective. Because both methods of administration provide similar efficacy, a cost-minimization analysis was performed. First, costs were calculated through a simulation testing different hypotheses about cost drivers. Secondly, costs were estimated on the basis of field data collected by a questionnaire completed by a population of patients suffering from agammaglobulinaemia and hyper-IgM syndrome. Patients' satisfaction was also documented. Results of the simulation showed that direct medical costs ranged from €19,484 for home-based IVIG to €25,583 for hospital-based IVIG, with home-based SCIG in between at €24,952 per year. Estimates made from field data differed, with significantly higher costs for IVIG. This result was explained mainly by the higher mean immunoglobulin dose prescribed for IVIG. While the theoretical model showed very little difference between SCIG and hospital-based IVIG costs, SCIG appears to be 25% less expensive in the field data because of the lower doses used in SCIG patients. The reality of the dose difference between the two routes of administration needs to be confirmed by further and more specific studies.