Showing papers by "Paris Dauphine University" published in 2016


Journal ArticleDOI
TL;DR: The authors show that standard theories, which build on a random growth mechanism, generate transition dynamics that are too slow relative to those observed in the data, and suggest two parsimonious deviations from the canonical model that can explain such changes: scale dependence, which may arise from changes in skill prices, and type dependence, that is, the presence of some high-growth types.
Abstract: The past forty years have seen a rapid rise in top income inequality in the United States. While there is a large number of existing theories of the Pareto tail of the long-run income distribution, almost none of these address the fast rise in top inequality observed in the data. We show that standard theories, which build on a random growth mechanism, generate transition dynamics that are too slow relative to those observed in the data. We then suggest two parsimonious deviations from the canonical model that can explain such changes: “scale dependence” that may arise from changes in skill prices, and “type dependence,” that is, the presence of some “high-growth types.” These deviations are consistent with theories in which the increase in top income inequality is driven by the rise of “superstar” entrepreneurs or managers.
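
As a rough illustration of the mechanism discussed above (and not the authors' calibration), the following sketch simulates log-income under a random growth process in which a small fraction of agents are "high-growth types"; the population size, drifts and volatilities are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n, years = 100_000, 40            # population size and horizon (illustrative)
    mu, sigma = 0.0, 0.10             # baseline log-income drift and volatility
    frac_high, mu_high = 0.01, 0.08   # share of "high-growth types" and their extra drift

    log_y = rng.normal(0.0, 1.0, n)          # initial log incomes
    is_high = rng.random(n) < frac_high      # type dependence: a few high-growth agents

    for _ in range(years):
        log_y += mu + mu_high * is_high + sigma * rng.normal(size=n)

    y = np.exp(log_y)
    cutoff = np.quantile(y, 0.99)
    print("top-1% income share:", y[y >= cutoff].sum() / y.sum())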

297 citations


Journal ArticleDOI
TL;DR: This work proposes a novel approach based on a machine learning tool named random forests (RF) to conduct selection among the highly complex models covered by ABC algorithms, modifying the way Bayesian model selection is both understood and operated.
Abstract: Approximate Bayesian computation (ABC) methods provide an elaborate approach to Bayesian inference on complex models, including model choice. Both theoretical arguments and simulation experiments indicate, however, that model posterior probabilities may be poorly evaluated by standard ABC techniques. Results: We propose a novel approach based on a machine learning tool named random forests (RF) to conduct selection among the highly complex models covered by ABC algorithms. We thus modify the way Bayesian model selection is both understood and operated, in that we rephrase the inferential goal as a classification problem, first predicting the model that best fits the data with RF and postponing the approximation of the posterior probability of the selected model for a second stage also relying on RF. Compared with earlier implementations of ABC model choice, the ABC RF approach offers several potential improvements: (i) it often has a larger discriminative power among the competing models, (ii) it is more robust against the number and choice of statistics summarizing the data, (iii) the computing effort is drastically reduced (with a gain in computation efficiency of at least 50) and (iv) it includes an approximation of the posterior probability of the selected model. The call to RF will undoubtedly extend the range of size of datasets and complexity of models that ABC can handle. We illustrate the power of this novel methodology by analyzing controlled experiments as well as genuine population genetics datasets. Availability and implementation: The proposed methodology is implemented in the R package abcrf available on the CRAN.
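
A minimal sketch of the classification step described above, using scikit-learn's random forest in place of the abcrf R package; the two toy models, the summary statistics and the reference-table size are invented for the example, and the second regression forest that approximates the posterior probability of the selected model is omitted.

    import numpy as np
    from scipy import stats
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n_obs, n_ref = 100, 5000          # sample size and reference-table size (illustrative)

    def summaries(x):
        # summary statistics fed to the forest (an illustrative choice)
        return [x.mean(), x.std(), stats.skew(x), stats.kurtosis(x)]

    def simulate(model):
        # two toy competing models with the same mean and variance
        if model == 0:
            return rng.normal(0.0, 1.0, n_obs)
        return rng.laplace(0.0, 1.0 / np.sqrt(2.0), n_obs)

    models = rng.integers(0, 2, n_ref)                          # model index drawn from its prior
    table = np.array([summaries(simulate(m)) for m in models])

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(table, models)                                       # model choice as classification

    x_obs = rng.laplace(0.0, 1.0 / np.sqrt(2.0), n_obs)         # pseudo-observed data
    s_obs = np.array(summaries(x_obs)).reshape(1, -1)
    print("selected model:", rf.predict(s_obs)[0])
    print("forest votes (not a posterior probability):", rf.predict_proba(s_obs)[0])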

283 citations


Proceedings Article
19 Jun 2016
TL;DR: This paper presents a new technique for computing the barycenter of a set of distance or kernel matrices, which define the interrelationships between points sampled from individual domains, and provides a fast iterative algorithm for the resulting nonconvex optimization problem.
Abstract: This paper presents a new technique for computing the barycenter of a set of distance or kernel matrices. These matrices, which define the interrelationships between points sampled from individual domains, are not required to have the same size or to be in row-by-row correspondence. We compare these matrices using the softassign criterion, which measures the minimum distortion induced by a probabilistic map from the rows of one similarity matrix to the rows of another; this criterion amounts to a regularized version of the Gromov-Wasserstein (GW) distance between metric-measure spaces. The barycenter is then defined as a Frechet mean of the input matrices with respect to this criterion, minimizing a weighted sum of softassign values. We provide a fast iterative algorithm for the resulting nonconvex optimization problem, built upon state-of-the-art tools for regularized optimal transportation. We demonstrate its application to the computation of shape barycenters and to the prediction of energy levels from molecular configurations in quantum chemistry.

275 citations


Journal ArticleDOI
TL;DR: In this study, a misfit measure based on an optimal transport distance makes it possible to account for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion.
Abstract: Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a misfit measure computed with an optimal transport distance. This measure accounts for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance. The associated misfit function is less prone to cycle skipping. A workflow is designed to accurately reconstruct the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data, following a frequency continuation strategy. This estimation explains the data accurately. Using the same workflow, full waveform inversion based on the L2 distance converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
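
A toy 1-D comparison (not the authors' non-conservative formulation) of the L2 misfit and a Wasserstein-type misfit between a wavelet and time-shifted copies of it: the optimal transport misfit grows steadily with the shift, while the L2 misfit saturates once the signals no longer overlap, which is the cycle-skipping issue described above. Turning signals into probability densities here is a simplification that the paper explicitly avoids.

    import numpy as np

    t = np.linspace(-1.0, 1.0, 2001)
    dt = t[1] - t[0]

    def ricker(t, f0=10.0):
        a = (np.pi * f0 * t) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    def w1_misfit(f, g):
        # 1-D Wasserstein-1 distance between signals turned into probability densities
        f = f - f.min() + 1e-12
        g = g - g.min() + 1e-12
        f = f / (f.sum() * dt)
        g = g / (g.sum() * dt)
        return np.sum(np.abs(np.cumsum(f) - np.cumsum(g))) * dt * dt

    ref = ricker(t)
    for shift in (0.0, 0.05, 0.1, 0.2, 0.4):
        d = ricker(t - shift)
        l2 = np.sum((d - ref) ** 2) * dt
        print(f"shift={shift:4.2f}   L2={l2:8.4f}   W1={w1_misfit(ref, d):8.5f}")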

264 citations


Journal ArticleDOI
TL;DR: In this article, a theory of periodically driven, many-body localized (MBL) systems is presented, where the Floquet operator (evolution operator over one driving period) can be represented as an exponential of an effective time-independent Hamiltonian, which is a sum of quasi-local terms and is itself fully MBL.

200 citations


Journal ArticleDOI
11 Jul 2016
TL;DR: This work presents an algorithm for probabilistic correspondence that optimizes an entropy-regularized Gromov-Wasserstein (GW) objective that is compact, provably convergent, and applicable to any geometric domain expressible as a metric measure matrix.
Abstract: Many shape and image processing tools rely on computation of correspondences between geometric domains. Efficient methods that stably extract "soft" matches in the presence of diverse geometric structures have proven to be valuable for shape retrieval and transfer of labels or semantic information. With these applications in mind, we present an algorithm for probabilistic correspondence that optimizes an entropy-regularized Gromov-Wasserstein (GW) objective. Built upon recent developments in numerical optimal transportation, our algorithm is compact, provably convergent, and applicable to any geometric domain expressible as a metric measure matrix. We provide comprehensive experiments illustrating the convergence and applicability of our algorithm to a variety of graphics tasks. Furthermore, we expand entropic GW correspondence to a framework for other matching problems, incorporating partial distance matrices, user guidance, shape exploration, symmetry detection, and joint analysis of more than two domains. These applications expand the scope of entropic GW correspondence to major shape analysis problems and are stable to distortion and noise.
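
A compact sketch of one standard way to optimize an entropy-regularized GW objective, alternating a linearization of the square-loss GW energy with Sinkhorn projections; it is a simplification of the algorithm described above, and the point clouds, regularization strength and iteration counts are invented for the example.

    import numpy as np

    def sinkhorn(C, p, q, eps, n_iter=300):
        # entropic OT projection onto couplings with marginals p and q
        K = np.exp(-(C - C.min()) / eps)
        v = np.ones_like(q)
        for _ in range(n_iter):
            u = p / (K @ v)
            v = q / (K.T @ u)
        return u[:, None] * K * v[None, :]

    def entropic_gw(D1, D2, p, q, eps=0.01, n_outer=30):
        # alternate a linearization of the square-loss GW energy with Sinkhorn projections
        T = np.outer(p, q)
        for _ in range(n_outer):
            C = -2.0 * D1 @ T @ D2   # GW gradient, up to terms absorbed by the marginal constraints
            T = sinkhorn(C, p, q, eps)
        return T

    rng = np.random.default_rng(0)
    n = 30
    X = rng.normal(size=(n, 2))                              # first domain: a 2-D point cloud
    Y = X @ np.array([[0.0, -1.0], [1.0, 0.0]])              # second domain: a rotated copy
    D1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    D1, D2 = D1 / D1.max(), D2 / D2.max()
    p = np.full(n, 1.0 / n)
    q = np.full(n, 1.0 / n)

    T = entropic_gw(D1, D2, p, q)
    print("mass on the ground-truth (identity) correspondence:", np.trace(T))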

188 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that whenever a system is not localized at all thermodynamic parameters (particle density, energy density), then local fluctuations into the non-localized phase, dubbed bubbles, can slowly destroy localization globally.
Abstract: The phenomenon of many-body localization violates one of the basic rules of statistical mechanics: it states that certain 'localized' macroscopic systems cannot act as a bath for themselves and hence do not relax to equilibrium. The authors of this paper find that, whenever a system is not localized at all thermodynamic parameters (particle density, energy density), local fluctuations into the non-localized phase, dubbed bubbles, can slowly destroy localization globally. This result provides a rather strong restriction on the existence of many-body localized phases and runs contrary to the idea that there could be genuine localization transitions as a function of temperature.

153 citations


Journal ArticleDOI
TL;DR: In this paper, the use of a distance based on the Kantorovich-Rubinstein norm is introduced to overcome the local minima of the associated L2 misfit function, which correspond to velocity models matching the data up to one or several phase shifts.
Abstract: The use of the optimal transport distance has recently yielded significant progress in image processing for pattern recognition, shape identification, and histogram matching. In this study, the use of this distance is investigated for a seismic tomography problem exploiting the complete waveform: full waveform inversion. In its conventional formulation, this high-resolution seismic imaging method is based on the minimization of the L2 distance between predicted and observed data. Application of this method is generally hampered by the local minima of the associated L2 misfit function, which correspond to velocity models matching the data up to one or several phase shifts. Conversely, the optimal transport distance appears to be a more suitable tool for measuring the misfit between oscillatory signals, owing to its ability to detect shifted patterns. However, its application to full waveform inversion is not straightforward, as mass conservation between the compared data cannot be guaranteed, a crucial assumption for optimal transport. In this study, the use of a distance based on the Kantorovich–Rubinstein norm is introduced to overcome this difficulty. Its mathematical link with the optimal transport distance is made clear. An efficient numerical strategy for its computation, based on a proximal splitting technique, is introduced. We demonstrate that each iteration of the corresponding algorithm requires solving the Poisson equation, for which fast solvers can be used, relying either on the fast Fourier transform or on multigrid techniques. The development of this numerical method makes applications to industrial-scale data possible, involving tens of millions of discrete unknowns. The results we obtain on such large-scale synthetic data illustrate the potential of optimal transport for seismic imaging. Starting from crude initial velocity models, optimal transport based inversion yields significantly better velocity reconstructions than those based on the L2 distance, in 2D and 3D contexts.
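
Since each iteration of the proximal algorithm reduces to a Poisson problem, a fast solver matters; below is a minimal FFT-based solver for the 2-D Poisson equation with periodic boundary conditions, checked against a known solution. The grid and right-hand side are illustrative, and the paper's actual boundary conditions may differ.

    import numpy as np

    def poisson_fft_periodic(f, lx=1.0, ly=1.0):
        # solve -Laplacian(u) = f on a periodic box, assuming f has zero mean
        ny, nx = f.shape
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
        ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
        k2 = kx[None, :] ** 2 + ky[:, None] ** 2
        f_hat = np.fft.fft2(f)
        u_hat = np.zeros_like(f_hat)
        mask = k2 > 0
        u_hat[mask] = f_hat[mask] / k2[mask]          # zero mode left at 0 (fixes the mean of u)
        return np.real(np.fft.ifft2(u_hat))

    # quick check against a known periodic solution u = sin(2*pi*x) * cos(4*pi*y)
    n = 128
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    u_exact = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
    f = (4 + 16) * np.pi ** 2 * u_exact               # -Laplacian of u_exact
    print("max error:", np.abs(poisson_fft_periodic(f) - u_exact).max())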

141 citations


Journal ArticleDOI
TL;DR: An informal introduction to the foundational ideas behind frequentist hypothesis testing, building up the one-sample t-test from first principles, clarifying what a p-value does and does not tell us, and discussing power, Type I/II and Type S/M errors, and common pitfalls in psycholinguistics research.
Abstract: We present the fundamental ideas underlying statistical hypothesis testing using the frequentist framework. We start with a simple example that builds up the one-sample t-test from the beginning, explaining important concepts such as the sampling distribution of the sample mean, and the iid assumption. Then, we examine the meaning of the p-value in detail and discuss several important misconceptions about what a p-value does and does not tell us. This leads to a discussion of Type I, II error and power, and Type S and M error. An important conclusion from this discussion is that one should aim to carry out appropriately powered studies. Next, we discuss two common issues that we have encountered in psycholinguistics and linguistics: running experiments until significance is reached and the ‘garden-of-forking-paths’ problem discussed by Gelman and others. The best way to use frequentist methods is to run appropriately powered studies, check model assumptions, clearly separate exploratory data analysis from planned comparisons decided upon before the study was run, and always attempt to replicate results.
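
In the spirit of the tutorial above (with invented numbers, not the paper's), a short simulation approximates the Type I error rate and the power of a one-sample t-test by repeated sampling, which makes concrete why appropriately powered studies matter.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_sims, n, alpha = 10_000, 20, 0.05              # illustrative settings

    def rejection_rate(true_mean, sd=1.0):
        # proportion of simulated experiments where a one-sample t-test against 0 gives p < alpha
        rejections = 0
        for _ in range(n_sims):
            x = rng.normal(true_mean, sd, n)
            _, p = stats.ttest_1samp(x, 0.0)
            rejections += p < alpha
        return rejections / n_sims

    print("Type I error rate (true mean = 0):", rejection_rate(0.0))
    print("power for a small effect (d = 0.3):", rejection_rate(0.3))
    print("power for a large effect (d = 0.8):", rejection_rate(0.8))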

133 citations


Journal ArticleDOI
TL;DR: In this paper, a higher regularity theory for general quasilinear elliptic equations and systems in divergence form with random coefficients is developed, and a large-scale L∞-type estimate for the gradient of a solution is proved with optimal stochastic integrability under a one-parameter family of mixing assumptions, ranging from very weak mixing with non-integrable correlations to very strong mixing (for example, finite range of dependence).
Abstract: We develop a higher regularity theory for general quasilinear elliptic equations and systems in divergence form with random coefficients. The main result is a large-scale L∞-type estimate for the gradient of a solution. The estimate is proved with optimal stochastic integrability under a one-parameter family of mixing assumptions, ranging from very weak mixing with non-integrable correlations to very strong mixing (for example, finite range of dependence). We also prove a quenched L2 estimate for the error in homogenization of Dirichlet problems. The approach is based on subadditive arguments which rely on a variational formulation of general quasilinear divergence-form equations.

126 citations


Journal ArticleDOI
TL;DR: In this article, the authors revisited the spectral analysis of semigroups in a general Banach space setting, and provided comprehensible proofs of classical results such as the spectral mapping theorem, some (quantified) Weyl's Theorems and the Krein-Rutman Theorem.
Abstract: The aim of this paper is twofold: (1) On the one hand, the paper revisits the spectral analysis of semigroups in a general Banach space setting. It presents some new and more general versions, and provides comprehensible proofs, of classical results such as the spectral mapping theorem, some (quantified) Weyl's Theorems and the Krein-Rutman Theorem. Motivated by evolution PDE applications, the results apply to a wide and natural class of generators which split as a dissipative part plus a more regular part, without assuming any symmetric structure on the operators nor Hilbert structure on the space, and give some growth estimates and spectral gap estimates for the associated semigroup. The approach relies on some factorization and summation arguments reminiscent of the Dyson-Phillips series in the spirit of those used in [87,82,48,81]. (2) On the other hand, we present the semigroup spectral analysis for three important classes of "growth-fragmentation" equations, namely the cell division equation, the self-similar fragmentation equation and the McKendrick-von Foerster age-structured population equation. By showing that these models lie in the class of equations for which our general semigroup analysis theory applies, we prove the exponential rate of convergence of the solutions to the associated remarkable profile for a very large and natural class of fragmentation rates. Our results generalize similar estimates obtained in [MR2114128, MR2536450] for the cell division model with (almost) constant total fragmentation rate and in [MR2832638, MR2821681] for the self-similar fragmentation equation and the cell division equation restricted to smooth and positive fragmentation rate and total fragmentation rate which does not increase more rapidly than quadratically. It also improves the convergence results without rate obtained in [MR2162224, MR2114413] which have been established under similar assumptions to those made in the present work.

Journal ArticleDOI
11 Jul 2016
TL;DR: A new way to perform intuitive and geometrically faithful regressions on histogram-valued data is defined, which leverages the theory of optimal transport, and in particular the definition of Wasserstein barycenters, to introduce for the first time the notion of barycentric coordinates for histograms.
Abstract: This article defines a new way to perform intuitive and geometrically faithful regressions on histogram-valued data. It leverages the theory of optimal transport, and in particular the definition of Wasserstein barycenters, to introduce for the first time the notion of barycentric coordinates for histograms. These coordinates take into account the underlying geometry of the ground space on which the histograms are defined, and are thus particularly meaningful for applications in graphics to shapes, color or material modification. Besides this abstract construction, we propose a fast numerical optimization scheme to solve this backward problem (finding the barycentric coordinates of a given histogram) with a low computational overhead with respect to the forward problem (computing the barycenter). This scheme relies on a backward algorithmic differentiation of the Sinkhorn algorithm which is used to optimize the entropic regularization of Wasserstein barycenters. We showcase an illustrative set of applications of these Wasserstein coordinates to various problems in computer graphics: shape approximation, BRDF acquisition and color editing.
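
The forward problem mentioned above (computing an entropy-regularized Wasserstein barycenter) can be sketched with the standard iterative Bregman projection scheme; the backward differentiation that yields the barycentric coordinates is omitted, and the grid, histograms and weights are invented for the example.

    import numpy as np

    def sinkhorn_barycenter(hists, weights, C, eps=5e-3, n_iter=100):
        # entropy-regularized Wasserstein barycenter via iterative Bregman projections
        K = np.exp(-C / eps)
        v = np.ones_like(hists)                       # one scaling vector per input histogram
        for _ in range(n_iter):
            u = hists / (K @ v.T).T                   # u_k = p_k / (K v_k)
            Ku = (K.T @ u.T).T                        # rows are K^T u_k
            b = np.exp(np.sum(weights[:, None] * np.log(Ku), axis=0))   # geometric mean
            v = b[None, :] / Ku                       # v_k = b / (K^T u_k)
        return b

    # two bumps on a 1-D grid; their 1/2-1/2 barycenter should be a bump near the middle
    x = np.linspace(0.0, 1.0, 200)
    C = (x[:, None] - x[None, :]) ** 2                # squared-distance ground cost
    p1 = np.exp(-((x - 0.25) ** 2) / 0.01); p1 /= p1.sum()
    p2 = np.exp(-((x - 0.75) ** 2) / 0.01); p2 /= p2.sum()

    bary = sinkhorn_barycenter(np.stack([p1, p2]), np.array([0.5, 0.5]), C)
    print("barycenter mode at x =", x[np.argmax(bary)])   # expected near 0.5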

Journal ArticleDOI
TL;DR: In this article, the authors studied the regularising effect of a continuous driving path on the existence and uniqueness of solutions to the ODE $dx_t = b(t, x_t)\,dt + dw_t$, where $w$ is a continuous driving function and $b$ is a time-dependent vector field which is possibly only a distribution in the space variable.

Journal ArticleDOI
TL;DR: In this article, the authors examined the long-term effects on individual economic outcomes of a set of earthquakes that occurred in rural Indonesia since 1985, using longitudinal individual-level data from large-scale household surveys, together with precise measures of local ground tremors obtained from a US Geological Survey database.

Posted Content
TL;DR: In this article, the authors show that the NASDAQ-OMX speed upgrade has a non-trivial effect on liquidity, and that the effect depends on a security's news-to-liquidity-trader ratio.
Abstract: Speeding up the exchange has a non-trivial effect on liquidity. On the one hand, more speed enables high-frequency market makers (HFMs) to update their quotes more quickly on incoming news. This reduces adverse-selection cost and lowers the competitive bid-ask spread. On the other hand, HFM price quotes are more likely to meet speculative high-frequency “bandits,” thus less likely to meet liquidity traders. This raises the spread. The net effect depends on a security’s news-to-liquidity-trader ratio. Empirical analysis of a NASDAQ-OMX speed upgrade shows that a faster market can indeed raise the spread and thus lower liquidity.

Journal ArticleDOI
TL;DR: A focused survey about the presence and the use of the concept of “preferences” in Artificial Intelligence, which essentially covers the basics of preference modelling, the use of preferences in reasoning and argumentation, the problem of compact representations of preferences, preference learning and the use of non-conventional preference models based on extended logical languages.
Abstract: The paper presents a focused survey about the presence and the use of the concept of "preferences" in Artificial Intelligence. Preferences are a central concept for decision making and have been studied extensively in disciplines such as economics, operational research, decision analysis, psychology and philosophy. However, in recent years it has also become an important topic both for research and applications in Computer Science and more specifically in Artificial Intelligence, in fields spanning from recommender systems to automatic planning, from non-monotonic reasoning to computational social choice and algorithmic decision theory. The survey essentially covers the basics of preference modelling, the use of preferences in reasoning and argumentation, the problem of compact representations of preferences, preference learning and the use of non-conventional preference models based on extended logical languages. It aims at providing a general reference for all researchers both in Artificial Intelligence and Decision Analysis interested in this exciting interdisciplinary topic.

Journal ArticleDOI
TL;DR: In this paper, uniform Lipschitz estimates for second-order elliptic systems in divergence form with rapidly oscillating, almost-periodic coefficients were established, and the results for the Neumann conditions are new even in the periodic setting, since they can treat nonsymmetric coefficients.
Abstract: We establish uniform Lipschitz estimates for second-order elliptic systems in divergence form with rapidly oscillating, almost-periodic coefficients. We give interior estimates as well as estimates up to the boundary in bounded C1,α domains with either Dirichlet or Neumann data. The main results extend those in the periodic setting due to Avellaneda and Lin for interior and Dirichlet boundary estimates and later Kenig, Lin, and Shen for the Neumann boundary conditions. In contrast to these papers, our arguments are constructive (and thus the constants are in principle computable) and the results for the Neumann conditions are new even in the periodic setting, since we can treat nonsymmetric coefficients. We also obtain uniform W1,p estimates.

Journal ArticleDOI
TL;DR: A convergence rate analysis for the inexact Krasnosel’skiĭ–Mann iteration built from non-expansive operators and develops easily verifiable termination criteria for finding an approximate solution.
Abstract: In this paper, we present a convergence rate analysis for the inexact Krasnosel'skiĭ-Mann iteration built from non-expansive operators. The presented results include two main parts: we first establish the global pointwise and ergodic iteration-complexity bounds; then, under a metric sub-regularity assumption, we establish a local linear convergence for the distance of the iterates to the set of fixed points. The obtained results can be applied to analyze the convergence rate of various monotone operator splitting methods in the literature, including the Forward-Backward splitting, the Generalized Forward-Backward, the Douglas-Rachford splitting, alternating direction method of multipliers and Primal-Dual splitting methods. For these methods, we also develop easily verifiable termination criteria for finding an approximate solution, which can be seen as a generalization of the termination criterion for the classical gradient descent method. We finally develop a parallel analysis for the non-stationary Krasnosel'skiĭ-Mann iteration.
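
A minimal sketch of one splitting scheme covered by this analysis: the Forward-Backward (proximal gradient) operator for a small LASSO problem, wrapped in a Krasnosel'skiĭ-Mann averaging step; the problem data, step size and relaxation parameter are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    m, d = 40, 100
    A = rng.normal(size=(m, d)) / np.sqrt(m)
    x_true = np.zeros(d)
    x_true[rng.choice(d, 5, replace=False)] = rng.normal(size=5)
    b = A @ x_true + 0.01 * rng.normal(size=m)
    lam = 0.05                                  # l1 penalty weight (illustrative)

    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the smooth term's gradient
    gamma = 1.0 / L                             # forward step size

    def soft_threshold(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def T(x):
        # Forward-Backward operator: gradient step on 0.5*||Ax-b||^2, prox step on lam*||x||_1
        return soft_threshold(x - gamma * A.T @ (A @ x - b), gamma * lam)

    x = np.zeros(d)
    theta = 0.8                                 # Krasnosel'skii-Mann relaxation parameter
    for _ in range(500):
        x = (1.0 - theta) * x + theta * T(x)    # averaged (KM) iteration of a non-expansive map

    obj = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
    print("objective:", round(obj, 4), " nonzeros:", int(np.sum(np.abs(x) > 1e-6)))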

Journal ArticleDOI
TL;DR: This work provides a framework to verify global optimality of a discrete transport plan locally and explicitly describes how to select the sparse sub-problems for several cost functions, including the noisy squared Euclidean distance.
Abstract: Discrete optimal transport solvers do not scale well on large dense problems since they do not explicitly exploit the geometric structure of the cost function. In analogy to continuous optimal transport, we provide a framework to verify global optimality of a discrete transport plan locally. This allows the construction of an algorithm to solve large dense problems by considering a sequence of sparse problems instead. The algorithm lends itself to being combined with a hierarchical multiscale scheme. Any existing discrete solver can be used as an internal black box. We explicitly describe how to select the sparse sub-problems for several cost functions, including the noisy squared Euclidean distance. Significant reductions in run-time and memory requirements have been observed.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the impact of price display in the luxury sector, which has always been thought of as "bad practice" in luxury marketing but never explored empirically.

Journal ArticleDOI
TL;DR: In this article, an abstract method for deriving decay estimates on the semigroup associated to non-symmetric operators in Banach spaces is presented and extended so as to consider the shrinkage of the functional space.
Abstract: The aim of the present paper is twofold: 1. We carry on with developing an abstract method for deriving decay estimates on the semigroup associated to non-symmetric operators in Banach spaces as introduced in [10]. We extend the method so as to consider the shrinkage of the functional space. Roughly speaking, we consider a class of operators written as a dissipative part plus a mild perturbation, and we prove that if the associated semigroup satisfies a decay estimate in some reference space then it satisfies the same decay estimate in another—smaller or larger—Banach space under the condition that a certain iterate of the “mild perturbation” part of the operator combined with the dissipative part of the semigroup maps the larger space to the smaller space in a bounded way. The cornerstone of our approach is a factorization argument, reminiscent of the Dyson series. 2. We apply this method to the kinetic Fokker-Planck equation when the spatial domain is either the torus with periodic boundary conditions, or the whole space with a confinement potential. We then obtain spectral gap estimates for the associated semigroup for various metrics, including Lebesgue norms, negative Sobolev norms, and the Monge-Kantorovich-Wasserstein distance W1.


Posted Content
TL;DR: It is shown that the classical algorithm of alternating projections (Gerchberg–Saxton) succeeds with high probability when carefully initialized, and it is conjectured that the result still holds when no special initialization procedure is used.
Abstract: We consider a phase retrieval problem, where we want to reconstruct a $n$-dimensional vector from its phaseless scalar products with $m$ sensing vectors. We assume the sensing vectors to be independently sampled from complex normal distributions. We propose to solve this problem with the classical non-convex method of alternating projections. We show that, when $m\geq Cn$ for $C$ large enough, alternating projections succeed with high probability, provided that they are carefully initialized. We also show that there is a regime in which the stagnation points of the alternating projections method disappear, and the initialization procedure becomes useless. However, in this regime, $m$ has to be of the order of $n^2$. Finally, we conjecture from our numerical experiments that, in the regime $m=O(n)$, there are stagnation points, but the size of their attraction basin is small if $m/n$ is large enough, so alternating projections can succeed with probability close to $1$ even with no special initialization.
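
A minimal sketch of the alternating projections scheme for Gaussian measurements, including the careful (spectral) initialization used in the regime $m \geq Cn$; dimensions, iteration counts and the initialization details are illustrative rather than the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 50, 400                                    # signal dimension and number of measurements
    A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2.0)
    x = rng.normal(size=n) + 1j * rng.normal(size=n)  # unknown signal
    b = np.abs(A @ x)                                 # phaseless measurements

    # spectral initialization: leading eigenvector of (1/m) * sum_i b_i^2 a_i a_i^*
    M = (A.conj().T * b**2) @ A / m
    evals, evecs = np.linalg.eigh(M)
    z = evecs[:, -1] * np.sqrt(np.mean(b**2))

    # alternating projections: impose the measured magnitudes, then project back onto range(A)
    A_pinv = np.linalg.pinv(A)
    for _ in range(200):
        y = b * np.exp(1j * np.angle(A @ z))
        z = A_pinv @ y

    # compare up to the unavoidable global phase
    c = np.vdot(z, x) / abs(np.vdot(z, x))
    print("relative error:", np.linalg.norm(x - c * z) / np.linalg.norm(x))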

Journal ArticleDOI
TL;DR: The real object size implied by a word appears to be primarily encoded in early visual regions, while its taxonomic category and sub-categorical cluster are encoded in more anterior temporal regions, indicating that different areas along the ventral stream encode complementary dimensions of the semantic space.

Journal ArticleDOI
TL;DR: This work considers the Train Timetabling Problem in a highly congested railway node, in which different Train Operators wish to run trains according to timetables that they propose, called ideal timetables.
Abstract: We consider the Train Timetabling Problem (TTP) in a railway node (i.e. a set of stations in an urban area interconnected by tracks), which calls for determining the best schedule for a given set of trains during a given time horizon, while satisfying several track operational constraints. In particular, we consider the context of a highly congested railway node in which different Train Operators wish to run trains according to timetables that they propose, called ideal timetables. The ideal timetables altogether may be (and usually are) conflicting, i.e. they do not respect one or more of the track operational constraints. The goal is to determine conflict-free timetables that differ as little as possible from the ideal ones. The problem was studied for a research project funded by Rete Ferroviaria Italiana (RFI), the main Italian railway Infrastructure Manager, who also provided us with real-world instances. We present an Integer Linear Programming (ILP) model for the problem, which adapts previous ILP models from the literature to deal with the case of a railway node. The Linear Programming (LP) relaxation of the model is used to derive a dual bound. In addition, we propose an iterative heuristic algorithm that is able to obtain good solutions to real-world instances with up to 1500 trains in short computing times. The proposed algorithm is also used to evaluate the capacity saturation of the railway nodes.
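
A toy version of the model, not the paper's formulation or data: a few trains share one track, each has an ideal departure time, any two departures must be separated by a minimum headway, and the objective minimizes total deviation from the ideal timetable. PuLP is used here as an assumed dependency; all names and numbers are invented.

    import pulp

    ideal = {"T1": 0, "T2": 2, "T3": 3, "T4": 10}   # ideal departure times in minutes (invented)
    headway = 4                                      # minimum separation on the shared track
    horizon = 60
    big_m = horizon + headway

    prob = pulp.LpProblem("toy_train_timetabling", pulp.LpMinimize)
    t = {i: pulp.LpVariable(f"dep_{i}", 0, horizon, cat="Integer") for i in ideal}
    dev = {i: pulp.LpVariable(f"dev_{i}", 0) for i in ideal}

    # objective: total absolute deviation from the ideal timetable
    prob += pulp.lpSum(dev.values())
    for i in ideal:
        prob += dev[i] >= t[i] - ideal[i]
        prob += dev[i] >= ideal[i] - t[i]

    # pairwise headway (track occupation) constraints, linearized with a big-M disjunction
    trains = list(ideal)
    for a in range(len(trains)):
        for c in range(a + 1, len(trains)):
            i, j = trains[a], trains[c]
            i_first = pulp.LpVariable(f"first_{i}_{j}", cat="Binary")
            prob += t[j] - t[i] >= headway - big_m * (1 - i_first)
            prob += t[i] - t[j] >= headway - big_m * i_first

    prob.solve()
    print({i: int(t[i].value()) for i in ideal}, "total deviation:", pulp.value(prob.objective))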

Journal ArticleDOI
TL;DR: In this article, the authors explore the connection between CSR and competition in order to contribute to the CSR concept through an analysis of the conditions for its implementation, drawing upon the academic literature in economics and strategic management, on mainstream CSR papers and on the official disclosure and communication from companies listed on the CAC 40 of the French stock market.
Abstract: Purpose – While fierce global competition has negative environmental and social impacts and may lead large companies to act irresponsibly, corporate social responsibility (CSR) academic literature, especially stakeholder theory, pays little attention to competition and market pressure. It only highlights the competitive advantage a CSR strategy represents for companies. The purpose of this paper is to explore the connection between CSR and competition in order to contribute to the CSR concept through analysis of the conditions for its implementation.Design/methodology/approach – The paper draws upon the academic literature in economics and strategic management, on mainstream CSR papers and on the official disclosure and communication from companies listed on the “CAC 40” of the French stock market. The paper uses the definition of corporate responsibility which integrates companies' environmental and social concerns into all their activities.Findings – The following three major findings arise. First, on a...

Posted Content
TL;DR: In this article, the authors consider the asymptotic behavior of the posterior distribution obtained by approximate Bayesian computation and give general results on the rate at which the posterior distribution concentrates on sets containing the true parameter.
Abstract: Approximate Bayesian computation allows for statistical analysis in models with intractable likelihoods. In this paper we consider the asymptotic behaviour of the posterior distribution obtained by this method. We give general results on the rate at which the posterior distribution concentrates on sets containing the true parameter, its limiting shape, and the asymptotic distribution of the posterior mean. These results hold under given rates for the tolerance used within the method, mild regularity conditions on the summary statistics, and a condition linked to identification of the true parameters. Implications for practitioners are discussed.
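
A minimal ABC rejection sketch on a toy normal-mean model (not from the paper), illustrating the role of the tolerance: as epsilon shrinks, the ABC posterior concentrates around the true parameter value.

    import numpy as np

    rng = np.random.default_rng(3)
    theta_true, n = 1.0, 200
    x_obs = rng.normal(theta_true, 1.0, n)
    s_obs = x_obs.mean()                             # summary statistic: the sample mean

    def abc_rejection(eps, n_prop=200_000):
        theta = rng.uniform(-5.0, 5.0, n_prop)       # proposals from a flat prior on the mean
        s_sim = rng.normal(theta, 1.0 / np.sqrt(n))  # simulated sample means, one per proposal
        keep = np.abs(s_sim - s_obs) <= eps
        return theta[keep]

    for eps in (1.0, 0.3, 0.1, 0.03):
        post = abc_rejection(eps)
        print(f"eps={eps:5.2f}  accepted={post.size:6d}  "
              f"posterior mean={post.mean():.3f}  sd={post.std():.3f}")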

Journal ArticleDOI
13 Apr 2016 - PLOS ONE
TL;DR: In this paper, a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects is provided, and in a case study example it yields accurate probabilistic statements that correspond to the intended magnitude-based inferences.
Abstract: The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a ‘magnitude-based inference’ approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
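
A conjugate-normal sketch of the kind of probabilistic statement such a Bayesian reanalysis produces, for instance the probability that a between-treatment difference exceeds a smallest-worthwhile-change threshold; the numbers are invented, and the paper itself uses MCMC rather than a closed-form posterior.

    import numpy as np
    from scipy import stats

    # invented group summaries: mean change (%) and its standard error under two regimens
    mean_lhtl, se_lhtl = 3.5, 1.0
    mean_ihe, se_ihe = 1.0, 1.1
    swc = 1.0                                 # smallest worthwhile change (illustrative threshold)

    # with a flat prior, each group mean has an approximately normal posterior,
    # so the posterior of the difference is normal with these moments
    diff_mean = mean_lhtl - mean_ihe
    diff_sd = np.sqrt(se_lhtl**2 + se_ihe**2)

    p_substantial = 1.0 - stats.norm.cdf(swc, loc=diff_mean, scale=diff_sd)
    print(f"P(LHTL - IHE > {swc}) = {p_substantial:.3f}")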

Journal ArticleDOI
TL;DR: In this paper, the authors considered the general case of Robin boundary conditions on $\partial\Omega$ and showed that the optimal spatial arrangement is obtained by minimizing the positive principal eigenvalue with respect to the set $E$ on which resources are located, under a volume constraint.
Abstract: In this paper, we are interested in the analysis of a well-known free boundary/shape optimization problem motivated by some issues arising in population dynamics. The question is to determine optimal spatial arrangements of favorable and unfavorable regions for a species to survive. The mathematical formulation of the model leads to an indefinite weight linear eigenvalue problem in a fixed box $\Omega$ and we consider the general case of Robin boundary conditions on $\partial\Omega$. It is well known that it suffices to consider bang-bang weights taking two values of different signs, that can be parametrized by the characteristic function of the subset $E$ of $\Omega$ on which resources are located. Therefore, the optimal spatial arrangement is obtained by minimizing the positive principal eigenvalue with respect to $E$, under a volume constraint. By using symmetrization techniques, as well as necessary optimality conditions, we prove new qualitative results on the solutions. Namely, we completely solve the problem in dimension 1, and we prove the counter-intuitive result that the ball is almost never a solution in dimension 2 or higher, despite what the numerical simulations suggest. We also introduce a new rearrangement in the ball allowing us to obtain a better candidate than the ball for optimality when Neumann boundary conditions are imposed. We also provide numerical illustrations of our results and of the optimal configurations.

Journal ArticleDOI
TL;DR: In this article, a new method for obtaining quantitative results in stochastic homogenization for linear elliptic equations in divergence form is introduced, which does not use concentration inequalities (such as Poincare or logarithmic Sobolev inequalities in the probability space) and relies instead on a higher (Ck, k ≥ 1) regularity theory for solutions of the heterogeneous equation, which is valid on length scales larger than a specified mesoscopic scale.
Abstract: We introduce a new method for obtaining quantitative results in stochastic homogenization for linear elliptic equations in divergence form. Unlike previous works on the topic, our method does not use concentration inequalities (such as Poincare or logarithmic Sobolev inequalities in the probability space) and relies instead on a higher (Ck, k ≥ 1) regularity theory for solutions of the heterogeneous equation, which is valid on length scales larger than a certain specified mesoscopic scale. This regularity theory, which is of independent interest, allows us to, in effect, localize the dependence of the solutions on the coefficients and thereby accelerate the rate of convergence of the expected energy of the cell problem by a bootstrap argument. The fluctuations of the energy are then tightly controlled using subadditivity. The convergence of the energy gives control of the scaling of the spatial averages of gradients and fluxes (that is, it quantifies the weak convergence of these quantities), which yields, by a new “multiscale” Poincare inequality, quantitative estimates on the sublinearity of the corrector.