
Showing papers by "Paris Dauphine University" published in 2022


Journal ArticleDOI
TL;DR: In this article, the authors present Electre-Score, the first outranking method to assign a score to each alternative. It belongs to the Electre family and, unlike value-based methods, does not construct a value function for each criterion and then aggregate these into a single value.

11 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a different approach based on convex duality instead of martingale-measure duality: the prices are expressed using Fenchel conjugates and biconjugates, without using any no-arbitrage condition.

11 citations


Journal ArticleDOI
TL;DR: In this article, the authors study optimal release strategies that maximize the efficiency of the sterile insect technique (SIT) and determine optimal strategies in a precise way, which allows them to tackle the underlying optimization problem numerically in a very simple fashion.

10 citations


Journal ArticleDOI
TL;DR: A branch-and-price algorithm is developed in which the linear relaxation of the set-cover formulation is solved by dynamic column generation; its performance is investigated to determine what protection against uncertainty each approach offers, and at what cost of robustness.

9 citations


Journal ArticleDOI
TL;DR: In this article, the notion of twin-width on graphs and matrices is introduced, inspired by a width invariant defined on permutations by Guillemot and Marx [SODA '14].
Abstract: Inspired by a width invariant defined on permutations by Guillemot and Marx [SODA’14], we introduce the notion of twin-width on graphs and on matrices. Proper minor-closed classes, bounded rank-wid...

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors revisit the approximation of the effective sample size (ESS) in the specific context of importance sampling, extend the discussion of the ESS to the multiple importance sampling (MIS) setting, and discuss several avenues for developing alternative metrics.
Abstract: The effective sample size (ESS) is widely used in sample-based simulation methods for assessing the quality of a Monte Carlo approximation of a given distribution and of related integrals. In this paper, we revisit the approximation of the ESS in the specific context of importance sampling (IS). The derivation of this approximation, which we will denote by $\widehat{\text{ESS}}$, is partially available in Kong (1992). This approximation has been widely used over the last 25 years, owing to its simplicity, as a practical rule of thumb in a wide variety of importance sampling methods. However, we show that the multiple assumptions and approximations in the derivation of $\widehat{\text{ESS}}$ make it difficult to consider it even a reasonable approximation of the ESS. We extend the discussion of $\widehat{\text{ESS}}$ to the multiple importance sampling (MIS) setting, display numerical examples, and discuss several avenues for developing alternative metrics. This paper does not cover the use of the ESS for MCMC algorithms.
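The approximation under discussion is the standard rule of thumb $\widehat{\text{ESS}} = (\sum_i w_i)^2 / \sum_i w_i^2$ for unnormalised importance weights $w_i$. A minimal sketch (the Gaussian target/proposal pair below is an illustrative assumption, not taken from the paper):

```python
import numpy as np

def ess_hat(weights):
    """Kong's (1992) ESS approximation for importance sampling:
    ESS_hat = (sum w_i)^2 / sum w_i^2. It equals N for uniform weights
    and tends to 1 when a single weight dominates."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# Toy importance sampler: target N(0, 1), proposal N(0, 2^2).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=1000)
# log unnormalised weight = log target density - log proposal density
log_w = (-0.5 * x**2) - (-0.5 * (x / 2.0) ** 2 - np.log(2.0))
w = np.exp(log_w - log_w.max())  # shift for numerical stability
print(ess_hat(w))  # some value between 1 and 1000
```

As the abstract cautions, this diagnostic compresses many assumptions into one number, so it should be read as a heuristic rather than a faithful estimate of the true ESS.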

5 citations


Journal ArticleDOI
TL;DR: In this article, a shape-derivative approach is proposed that yields quantitative estimates, via shape Hessians, in the context of parabolic optimal control problems where the domain Ω is a ball.
Abstract: In this article, we present two different approaches for obtaining quantitative inequalities in the context of parabolic optimal control problems. Our model consists of a linearly controlled heat equation with Dirichlet boundary condition, (u_f)_t − Δu_f = f, f being the control. We seek to maximise the functional J_T(f) ≔ ½ ∬_{(0;T)×Ω} u_f² or, for some ε > 0, J_T^ε(f) ≔ ½ ∬_{(0;T)×Ω} u_f² + ε ∫_Ω u_f²(T, ·), and to obtain quantitative estimates for these maximisation problems. We offer two approaches in the case where the domain Ω is a ball. In that case, if f satisfies L¹ and L^∞ constraints and does not depend on time, we propose a shape-derivative approach showing that, for any competitor f = f(x) satisfying the same constraints, we have J_T(f*) − J_T(f) ≳ ‖f − f*‖²_{L¹(Ω)}, f* being the maximiser. Through our proof of this time-independent case, we also show how to obtain coercivity norms for shape Hessians in such parabolic optimisation problems. We also consider the case where f = f(t, x) satisfies a global L^∞ constraint and, for every t ∈ (0;T), an L¹ constraint. In this case, assuming ε > 0, we prove an estimate of the form J_T^ε(f*) − J_T^ε(f) ≳ ∫₀^T a_ε(t) ‖f(t,·) − f*(t,·)‖²_{L¹(Ω)} dt, where a_ε(t) > 0 for any t ∈ (0;T). The proof of this result relies on a uniform bathtub principle.

4 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that bounded-degree expanders with non-negative Ollivier–Ricci curvature do not exist, i.e. that non-negative curvature and uniform expansion are incompatible; the same conclusion applies to the Bakry–Émery curvature condition.
Abstract: We prove that bounded-degree expanders with non-negative Ollivier–Ricci curvature do not exist, thereby solving a long-standing open problem suggested by A. Naor and E. Milman and publicized by Y. Ollivier (2010). In fact, this remains true even if we allow for a vanishing proportion of large degrees, large eigenvalues, and negatively-curved edges. Moreover, the same conclusion applies to the Bakry–Émery curvature condition CD(0, ∞), thereby settling a recent conjecture of D. Cushing, S. Liu and N. Peyerimhoff (2019). To establish those results, we work directly at the level of Benjamini–Schramm limits, and exploit the entropic characterization of the Liouville property on stationary random graphs to show that non-negative curvature and uniform expansion are incompatible "at infinity". We then transfer this conclusion to finite graphs via local weak convergence. Our approach also shows that the class of finite graphs with non-negative curvature and degrees at most d is hyperfinite, for any fixed d ∈ ℕ.

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a formal approach to describe neutralization versus weak (or non-)neutralization scenarios and compare them with the possible effects of antibody-dependent enhancement (ADE).
Abstract: The interplay between the virus, infected cells and immune responses to SARS-CoV-2 is still under debate. By extending the basic model of viral dynamics, we propose here a formal approach to describe neutralisation versus weak (or non-)neutralisation scenarios and compare them with the possible effects of antibody-dependent enhancement (ADE). The theoretical model is consistent with the data available in the literature; we show that both weakly neutralising antibodies and ADE can result in final viral clearance or disease progression, but that the immunodynamics are different in each case. As a significant proportion of the world's population is already naturally immune or vaccinated, we also discuss the implications for secondary infections after vaccination or in the presence of immune system dysfunctions.

3 citations


Journal ArticleDOI
TL;DR: In this article, a split-point procedure based on the explicit likelihood is proposed to save time when searching for the best split for a given splitting variable, and a simulation study is performed to assess the computational gain when building GLM trees.
Abstract: Classification and regression trees (CART) prove to be a true alternative to full parametric models such as linear models (LM) and generalized linear models (GLM). Although CART suffer from a biased variable selection issue, they are commonly applied to various topics and used for tree ensembles and random forests because of their simplicity and computation speed. Conditional inference trees and model-based trees algorithms for which variable selection is tackled via fluctuation tests are known to give more accurate and interpretable results than CART, but yield longer computation times. Using a closed-form maximum likelihood estimator for GLM, this paper proposes a split point procedure based on the explicit likelihood in order to save time when searching for the best split for a given splitting variable. A simulation study for non-Gaussian response is performed to assess the computational gain when building GLM trees. We also propose a benchmark on simulated and empirical datasets of GLM trees against CART, conditional inference trees and LM trees in order to identify situations where GLM trees are efficient. This approach is extended to multiway split trees and log-transformed distributions. Making GLM trees possible through a new split point procedure allows us to investigate the use of GLM in ensemble methods. We propose a numerical comparison of GLM forests against other random forest-type approaches. Our simulation analyses show cases where GLM forests are good challengers to random forests.
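As a minimal illustration of likelihood-based split search, the Gaussian special case below scores each candidate threshold with the closed-form MLE of the node mean, which reduces to minimising the total within-node sum of squares. The paper's procedure covers general GLM families; this sketch, including the toy data, is an assumption-laden simplification, not the authors' algorithm:

```python
import numpy as np

def best_split_gaussian(x, y):
    """Exhaustive split-point search for a Gaussian response: for each
    candidate threshold, fit the closed-form MLE (the sample mean) on each
    side and score the split by the within-node sum of squared errors,
    the Gaussian-likelihood-equivalent criterion. Returns (threshold, SSE)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue  # no valid threshold between equal x values
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = ((xs[i - 1] + xs[i]) / 2.0, sse)
    return best

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0.1, 0.0, 0.2, 5.0, 5.1, 4.9])
print(best_split_gaussian(x, y))  # threshold midway between 3.0 and 10.0
```

The computational point made in the abstract is that a closed-form estimator lets each candidate split be scored without an iterative model fit, which is what makes GLM trees and forests practical at scale.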

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors estimate that about 40% of global assets (FDI, portfolio equity and debt) are "abnormal" and held through tax havens, including the Cayman Islands, Bermuda, Luxembourg, Hong Kong, Ireland and the Netherlands.

Book ChapterDOI
01 Jan 2022
TL;DR: In this paper, a shift of paradigm for the control of fluid flows based on the application of deep reinforcement learning (DRL) is proposed, which is quickly spreading in the machine learning community and is known for its connection with nonlinear control theory.
Abstract: We propose a shift of paradigm for the control of fluid flows based on the application of deep reinforcement learning (DRL). This strategy is quickly spreading in the machine learning community and is known for its connection with nonlinear control theory. The origin of DRL can be traced back to the generalization of optimal control to nonlinear problems, leading, in the continuous formulation, to the Hamilton–Jacobi–Bellman (HJB) equation, of which DRL aims at providing a discrete, data-driven approximation. The only a priori requirement in DRL is the definition of an instantaneous reward as a measure of the relevance of an action when the system is in a given state. The value function is then defined as the expected cumulative reward and is the objective to be maximized. The control action and the value function are approximated by means of neural networks. In this work, we clarify the connection between DRL and optimal control, and we revisit our recent results for the control of the one-dimensional Kuramoto–Sivashinsky (KS) equation [4] by means of a parametric analysis.

Journal ArticleDOI
TL;DR: In this paper, the authors study the asymptotics of a parabolically scaled, continuous and space–time stationary version of the well-known Funaki–Spohn model in statistical physics.

Journal ArticleDOI
TL;DR: In this paper, a stored and inherited relation (SIR) is defined as a 1NF stored relation (SR) enlarged with inherited attributes (IAs), and a logical-navigation-free (LNF) query to a SIR R with foreign keys is proposed.

Journal ArticleDOI
TL;DR: In this paper, a randomly weighted kernel estimator with a fully data-driven bandwidth selection method is proposed, in the spirit of the Goldenshluger–Lepski method.

Journal ArticleDOI
TL;DR: In this paper, the sharpness of the phase transition is shown for attractive absorbing probabilistic cellular automata, a class of bootstrap percolation models, and kinetically constrained models.
Abstract: We establish new connections between percolation, bootstrap percolation, probabilistic cellular automata and deterministic ones. Surprisingly, by juggling with these in various directions, we effortlessly obtain a number of new results in these fields. In particular, we prove the sharpness of the phase transition of attractive absorbing probabilistic cellular automata, a class of bootstrap percolation models and kinetically constrained models. We further show how to recover a classical result of Toom on the stability of cellular automata w.r.t. noise and, inversely, how to deduce new results in bootstrap percolation universality from his work.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the regularity and diversity of cancer screening use and find that screening behaviors were fairly stable over time, with few typical screening patterns for each cancer.
Abstract: Despite several incentive policies for cancer screenings over the last two decades, the overall and regular use of cancer screenings remains insufficient in France. While the individual determinants of cancer screening uptake have been fairly well studied, the literature has rarely focused on the regularity of screening uptake, which is key to early cancer detection. We aimed to address this issue by studying cancer screening behaviors over 15 years, emphasizing the regularity and diversity of use. Using data from 40,021 women in the French E3N cohort, we studied the individual trajectories of screenings for breast, colorectal and cervical cancer between 2000 and 2014. We employed optimal matching methods to identify typical behaviors of use for each cancer screening. Then, we determined the associations between the identified behavior screening patterns for the different cancer screenings and, finally, assessed the associated individual determinants with logistical and multinomial models. We found that screening behaviors were fairly stable over time, with few typical screening patterns for each cancer. Overall, once a woman starts screening, she continues, and once she stops, she no longer returns. Cancer screening behaviors appear consistent; in particular, insufficient use of mammography appears to be associated with long-term nonuse of other cancer screenings. Factors associated with low or nonuse of screening are overall common between cancer screenings and are similar to those identified in the literature of screening use at a single point in time. Ultimately, these barriers prevent some women from entering a screening process in the long run, ultimately reinforcing social inequalities in health. Targeting women with insufficient mammography uptake may reach women outside of cancer screening settings more generally and, thus, both increase the overall uptake of cancer screening and reduce social inequalities in cancer screening.

Journal ArticleDOI
TL;DR: In this article, a finite element method called ϕ-FEM is proposed to numerically solve elliptic partial differential equations with natural boundary conditions, using simple computational grids not fitted to the boundary of the physical domain.
Abstract: We present a new finite element method, called ϕ-FEM, to solve numerically elliptic partial differential equations with natural (Neumann or Robin) boundary conditions using simple computational grids, not fitted to the boundary of the physical domain. The boundary data are taken into account using a level-set function, which is a popular tool to deal with complicated or evolving domains. Our approach belongs to the family of fictitious domain methods (or immersed boundary methods) and is close to recent methods of CutFEM/XFEM type. Contrary to the latter, ϕ-FEM does not need any nonstandard numerical integration on cut mesh elements or on the actual boundary, while assuring the optimal convergence orders with finite elements of any degree and providing reasonably well conditioned discrete problems. In the first version of ϕ-FEM, only essential (Dirichlet) boundary conditions were considered. Here, to deal with natural boundary conditions, we introduce the gradient of the primary solution as an auxiliary variable. This is done only on the mesh cells cut by the boundary, so that the size of the numerical system is only slightly increased. We prove theoretically the optimal convergence of our scheme and a bound on the discrete problem conditioning, independent of the mesh cuts. The numerical experiments confirm these results.
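The level-set idea can be made concrete with a small helper that classifies Cartesian grid cells against φ; in ϕ-FEM the auxiliary gradient variable lives only on the cells cut by the boundary. The sketch below is illustrative only (the function name, grid and disk level set are assumptions, not the paper's implementation):

```python
import numpy as np

def classify_cells(phi, xs, ys):
    """Classify cells of a Cartesian grid against a level-set function phi:
    'inside' if phi < 0 at all four corners, 'outside' if phi > 0 at all
    four corners, and 'cut' otherwise (the boundary may cross the cell)."""
    vals = phi(xs[None, :], ys[:, None])  # phi sampled at grid nodes
    corners = np.stack([vals[:-1, :-1], vals[:-1, 1:],
                        vals[1:, :-1], vals[1:, 1:]])
    inside = (corners < 0).all(axis=0)
    outside = (corners > 0).all(axis=0)
    cut = ~(inside | outside)
    return inside, outside, cut

phi = lambda x, y: x**2 + y**2 - 1.0   # level set of the unit disk
grid = np.linspace(-1.5, 1.5, 31)       # 31 nodes -> 30 x 30 cells
inside, outside, cut = classify_cells(phi, grid, grid)
print(inside.sum(), cut.sum(), outside.sum())
```

This corner-sign test is only a heuristic (a boundary entirely contained in one cell would be missed), but it conveys why the extra unknowns in ϕ-FEM stay a small fraction of the system: only the thin band of cut cells is enlarged.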

DissertationDOI
26 Jul 2022
TL;DR: In this dissertation, the author articulates the general question of the essence of art with the meditation on the originary experience of being (Seyn), and examines the place of poetry in Heidegger's consideration of the essence of art.
Abstract: The purpose of this work is to examine carefully the place of poetry in Heidegger's consideration of the essence of art. Starting from the famous 1935-1936 lecture entitled The Origin of the Work of Art, we will attempt to articulate the general question of the essence of art with the meditation on the originary experience of being (Seyn). This experience is precisely the place of the poetic that occurs in the work of art. To examine the scope of this formulation, we will distance ourselves from any attempt to see in Heidegger's concern with art a reactivation of "aesthetics", and in turn we will indicate the need to frame his consideration of the poetic within what we have called the onto-mythological constitution of poetry, thereby departing from the historical onto-theological constitution of metaphysics. In pointing out that poetry is understood in Heidegger as an experience of being, we will indicate the place from which his peculiar position towards art must be understood.

Posted ContentDOI
19 Feb 2022
TL;DR: The authors propose a kernel-based measure representation method that can produce new objects from a given target measure by approximating the measure as a whole, while staying away from objects already drawn from that distribution.
Abstract: Machine learning generative algorithms such as GANs and VAEs show impressive results in practice when constructing images similar to those in a training set. However, the generation of new images builds mainly on the understanding of the hidden structure of the training database, followed by mere sampling from a multi-dimensional normal variable. In particular, each sample is independent from the others and can repeatedly propose the same type of images. To cure this drawback we propose a kernel-based measure representation method that can produce new objects from a given target measure by approximating the measure as a whole and even staying away from objects already drawn from that distribution. This ensures a better variety of the produced images. The method is tested on some classic machine learning benchmarks.



Journal ArticleDOI
TL;DR: During the 2017 annual meeting of the Western Regional Center to Enhance Food Safety, 52 representatives from 15 western states/territories, regional centers funded through the USDA-NIFA Food Safety Outreach Program, federal and state government agencies, and non-governmental organizations prioritized topics for Food Safety Modernization Act (FSMA) training materials that address region-specific agricultural production and processing systems.
Abstract: During the 2017 annual meeting of the Western Regional Center to Enhance Food Safety, 52 representatives from 15 western states/territories, regional centers funded through USDA-NIFA Food Safety Outreach Program, federal and state government agencies, and non-governmental organizations prioritized topics for the Food Safety Modernization Act (FSMA) training materials that address region-specific agricultural production and processing systems. This article describes supplemental materials or “add-ons” developed to support FSMA-related food safety trainings. Although the materials were developed for the western region stakeholders, they can be applied or adapted to other regions in or outside the U.S. to enhance food safety trainings.


Posted ContentDOI
15 Mar 2022
TL;DR: In this paper, a notion of optimal performance is defined in terms of the smallest reconstruction error that any reconstruction algorithm can achieve, and practical numerical algorithms based on nonlinear reduced models are presented for which one can prove performance close to optimal.
Abstract: These lecture notes summarize various summer schools that I have given on the topic of solving inverse problems (state and parameter estimation) by optimally combining measurement observations and parametrized PDE models. After defining a notion of optimal performance in terms of the smallest reconstruction error that any reconstruction algorithm can achieve, the notes present practical numerical algorithms based on nonlinear reduced models for which one can prove that they can deliver a performance close to optimal. We also discuss algorithms for sensor placement within this approach. The proposed concepts may be viewed as exploring alternatives to Bayesian inversion in favor of more deterministic notions of accuracy quantification.


Book
01 Jan 2022

Posted ContentDOI
16 May 2022
TL;DR: In this article, the authors consider mean field games in which the players do not know the distribution of the other players; they derive a master equation and give partial uniqueness results.
Abstract: This paper is concerned with mean field games in which the players do not know the distribution of the other players. First, a case in which the players do not gain information is studied; results of existence and uniqueness are proved and discussed. Then, a case in which the players observe the payments is investigated: a master equation is derived and partial uniqueness results are given for this more involved case.

Journal ArticleDOI
TL;DR: In this article, the authors present an analogue for finite exchangeable sequences of the de Finetti, Hewitt and Savage theorem and investigate its implications for multi-marginal optimal transport (MMOT) and Bayesian statistics.
Abstract: We present a novel analogue for finite exchangeable sequences of the de Finetti, Hewitt and Savage theorem and investigate its implications for multi-marginal optimal transport (MMOT) and Bayesian statistics. If (Z_1, …, Z_N) is a finitely exchangeable sequence of N random variables taking values in some Polish space X, we show that the law μ_k of the first k components has a representation of the form μ_k = ∫_{P_{1/N}(X)} F_{N,k}(λ) dα(λ) for some probability measure α on the set of 1/N-quantized probability measures on X and certain universal polynomials F_{N,k}. The latter consist of a leading term N^{k−1} / ∏_{j=1}^{k−1}(N − j) · λ^{⊗k} and a finite, exponentially decaying series of correlated corrections of order N^{−j} (j = 1, …, k). The F_{N,k}(λ) are precisely the extremal such laws, expressed via an explicit polynomial formula in terms of their one-point marginals λ. Applications include novel approximations of MMOT via polynomial convexification and the identification of the remainder which is estimated in the celebrated error bound of Diaconis and Freedman (Ann Probab 8(4):745-764, 1980) between finite and infinite exchangeable laws.

Book ChapterDOI
09 Jun 2022