
Showing papers by "Paris Dauphine University" published in 2014


Journal Article
TL;DR: An overview of developments in robust optimization since 2007 is provided to give a representative picture of the research topics most explored in recent years, to highlight common themes in the investigations of independent research teams, and to highlight the contributions of rising as well as established researchers to both the theory and the practice of robust optimization.

742 citations


Journal Article
23 Apr 2014-Chance
TL;DR: As discussed in this review, Cressie and Wikle present the new reference book on spatial and spatio-temporal statistical modeling.
Abstract: Noel Cressie and Christopher Wikle. Hardcover: 624 pages. Year: 2011. Publisher: John Wiley. ISBN-13: 978-0471692744. Here is the new reference book about spatial and spatio-temporal statistical modeling! No...

680 citations


Journal Article
20 Jun 2014
TL;DR: This book fills an important gap in the methodological literature on networks and allows the reader familiar with network analysis to start working with Exponential Random Graph Models (2013) and their most recent developments.
Abstract: This book by Lusher, Robins and Koskinen fills an important gap in the methodological literature on networks. It allows the reader familiar with network analysis to start working with Exponential Random Graph Models (2013) and their most recent developments.

261 citations


Posted Content
TL;DR: In this paper, a machine learning tool named random forests is proposed to conduct selection among the highly complex models covered by approximate Bayesian computation (ABC) algorithms, and the proposed methodologies are implemented in the R package abcrf available on the CRAN.
Abstract: Approximate Bayesian computation (ABC) methods provide an elaborate approach to Bayesian inference on complex models, including model choice. Both theoretical arguments and simulation experiments indicate, however, that model posterior probabilities may be poorly evaluated by standard ABC techniques. We propose a novel approach based on a machine learning tool named random forests to conduct selection among the highly complex models covered by ABC algorithms. We thus modify the way Bayesian model selection is both understood and operated, in that we rephrase the inferential goal as a classification problem, first predicting the model that best fits the data with random forests and postponing the approximation of the posterior probability of the predicted MAP for a second stage also relying on random forests. Compared with earlier implementations of ABC model choice, the ABC random forest approach offers several potential improvements: (i) it often has a larger discriminative power among the competing models, (ii) it is more robust against the number and choice of statistics summarizing the data, (iii) the computing effort is drastically reduced (with a gain in computational efficiency of at least a factor of fifty), and (iv) it includes an approximation of the posterior probability of the selected model. The call to random forests will undoubtedly extend the range of size of datasets and complexity of models that ABC can handle. We illustrate the power of this novel methodology by analyzing controlled experiments as well as genuine population genetics datasets. The proposed methodologies are implemented in the R package abcrf available on CRAN.
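To make the two-stage idea concrete, here is a minimal Python sketch of ABC model choice recast as classification. It uses scikit-learn's RandomForestClassifier rather than the authors' R package abcrf, and the simulators, summary statistics and sample sizes (simulate, n_ref, the Gaussian/Laplace toy models) are placeholder assumptions, not anything taken from the paper.

# Minimal sketch of ABC model choice via random forests (not the abcrf package).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate(model, n_obs=50):
    # Simulate one dataset under a candidate model, return toy summary statistics.
    if model == 0:
        x = rng.normal(0.0, 1.0, n_obs)      # model 0: Gaussian observations
    else:
        x = rng.laplace(0.0, 1.0, n_obs)     # model 1: Laplace observations
    return np.array([x.mean(), x.std(), np.median(np.abs(x - np.median(x)))])

n_ref = 5000                                  # size of the ABC reference table
labels = rng.integers(0, 2, n_ref)
table = np.vstack([simulate(m) for m in labels])

# First stage: predict the best-fitting model with a random forest classifier.
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(table, labels)

observed = simulate(1)                        # stand-in for the real dataset
print("predicted model index:", clf.predict(observed.reshape(1, -1))[0])
# A second forest, trained on the classification error, would then approximate the
# posterior probability of the selected model (the second stage described above);
# it is omitted here.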

198 citations


Journal Article
TL;DR: In this article, the shape of the thick disc of the Milky Way was studied in detail using photometric data at high and intermediate latitudes from SDSS and 2MASS surveys.
Abstract: Aims. More than 30 years after its discovery, the thick disc of the Milky Way is not fully explored. We examine the shape of the thick disc in order to gain insight into the process of its formation. Methods. The shape of the thick disc is studied in detail using photometric data at high and intermediate latitudes from the SDSS and 2MASS surveys. We adopted the population synthesis approach using an approximate Bayesian computation – Markov chain Monte Carlo (ABC-MCMC) method to determine the potential degeneracies in the parameters that can be caused by the mixing with the halo and the thin disc. We characterised the thick-disc shape, scale height, scale length, local density, and flare, and we investigated the extent of the thick-disc formation period by simulating several formation episodes. Results. We find that the vertical variation in density is not exponential, but much closer to a hyperbolic secant squared. Assuming a single formation epoch, the thick disc is better fitted with a sech² scale height of 470 pc and a scale length of 2.3 kpc. However, if one simulates two successive formation episodes, which mimics an extended formation period, the older episode has a higher scale height and a longer scale length than the younger episode, which indicates a contraction during the collapse phase. The scale height decreases from 800 pc to 340 pc, and the scale length from 3.2 kpc to 2 kpc. The likelihood is much higher when the thick-disc formation extends over a longer period. We also show that star formation increases from the old episode to the young one and that there is no flare in the outskirts of the thick disc during the main episode. We compare our results with formation scenarios of the thick disc. During the fitting process, the halo parameters are determined as well. If a power-law density is assumed, it has an exponent of 3.3 and an axis ratio of 0.7. Alternatively, a Hernquist shape would have an exponent of 2.76, an axis ratio of 0.77, and a core radius of 2.1 kpc. The constraint on the halo shows that a transition between an inner and outer halo, if it exists, cannot be at a distance shorter than about 30 kpc, which is the limit of our investigation using turnoff halo stars. Finally, we show that extrapolating the thick disc towards the bulge region explains the stellar populations observed there so well that there is no longer any need to invoke a classical bulge. Conclusions. The facts that the thick-disc episode lasted for several billion years, that a contraction is observed during the collapse phase, and that the main thick disc has a constant scale height with no flare argue against the formation of the thick disc through radial migration. The most probable scenario for the thick disc is that it formed while the Galaxy was gravitationally collapsing from well-mixed gas-rich giant clumps that were sustained by high turbulence, which prevented a thin disc from forming for a time, as proposed previously. This scenario explains well the observations in the thick-disc region and in the bulge region.
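For readers unfamiliar with the two vertical density laws being compared, the contrast can be written schematically as below; the normalization and the exact placement of the scale height in the sech² argument vary between conventions, so this is only an illustrative form, not the paper's fitted profile.

\[ \rho_{\exp}(z) \propto \exp\!\left(-\frac{|z|}{h_z}\right), \qquad \rho_{\mathrm{sech}^2}(z) \propto \operatorname{sech}^2\!\left(\frac{z}{h_z}\right), \]

with the single-epoch fit quoted above corresponding to a sech² scale height of about 470 pc and a scale length of 2.3 kpc.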

175 citations


Journal Article
TL;DR: In this article, the second-order Boltzmann-Gibbs principle is used to replace local functionals of a conservative, one-dimensional stochastic process by a possibly nonlinear function of the conserved quantity.
Abstract: We introduce what we call the second-order Boltzmann–Gibbs principle, which allows one to replace local functionals of a conservative, one-dimensional stochastic process by a possibly nonlinear function of the conserved quantity. This replacement opens the way to obtain nonlinear stochastic evolutions as the limit of the fluctuations of the conserved quantity around stationary states. As an application of this second-order Boltzmann–Gibbs principle, we introduce the notion of energy solutions of the KPZ and stochastic Burgers equations. Under minimal assumptions, we prove that the density fluctuations of one-dimensional, stationary, weakly asymmetric, conservative particle systems are sequentially compact and that any limit point is given by energy solutions of the stochastic Burgers equation. We also show that the fluctuations of the height function associated to these models are given by energy solutions of the KPZ equation in this sense. Unfortunately, we lack a uniqueness result for these energy solutions. We conjecture these solutions to be unique, and we show some regularity results for energy solutions of the KPZ/Burgers equation, supporting this conjecture.
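For orientation, the two equations for which energy solutions are introduced can be written, in one standard formal normalization (the constants depend on the model), as

\[ \partial_t h = \nu\, \partial_x^2 h + \lambda\, (\partial_x h)^2 + \sigma\, \dot{W}, \qquad \partial_t u = \nu\, \partial_x^2 u + \lambda\, \partial_x (u^2) + \sigma\, \partial_x \dot{W}, \]

where \(\dot{W}\) is space-time white noise and \(u = \partial_x h\); the ill-defined nonlinear terms are precisely what the second-order Boltzmann–Gibbs principle allows one to make sense of at the level of density fluctuations.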

158 citations


Journal Article
TL;DR: In this paper, the authors examined whether the financial performances of socially responsible investment (SRI) mutual funds are related to the features of the screening process and found evidence that a greater screening intensity slightly reduces financial performance.
Abstract: In this study, we examine whether the financial performances of socially responsible investment (SRI) mutual funds are related to the features of the screening process. Based on a sample of French SRI funds, we find evidence that a greater screening intensity slightly reduces financial performance (but the relationship runs in the opposite direction when screening gets tougher). Further, we show that only sectoral screens – such as avoiding ‘sin’ stocks – decrease financial performance, while transversal screens – commitment to UN Global Compact Principles, ILO/Rights at Work, etc. – have no impact. Lastly, when the quality of the SRI selection process is proxied by the rating provided by Novethic, its impact is not significant, while a higher strategy distinctiveness amongst SRI funds, which also gives information on the quality of the selection process, is associated with better financial performance.

157 citations


Journal Article
TL;DR: In this article, a renormalization procedure is needed to make the definition of the measure \({e^{\gamma X(x)} dx}\) precise, since X oscillates between −∞ and ∞ and is not a function in the usual sense.
Abstract: Gaussian Multiplicative Chaos is a way to produce a measure on \({\mathbb{R}^d}\) (or subdomain of \({\mathbb{R}^d}\)) of the form \({e^{\gamma X(x)} dx}\), where X is a log-correlated Gaussian field and \({\gamma \in [0, \sqrt{2d})}\) is a fixed constant. A renormalization procedure is needed to make this precise, since X oscillates between −∞ and ∞ and is not a function in the usual sense. This procedure yields the zero measure when \({\gamma = \sqrt{2d}}\).
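A standard way to make the renormalization precise (one of several equivalent constructions) is to mollify the field and subtract the diverging variance:

\[ M_\varepsilon(dx) = \exp\Big(\gamma X_\varepsilon(x) - \tfrac{\gamma^2}{2}\,\mathbb{E}\big[X_\varepsilon(x)^2\big]\Big)\,dx, \qquad M = \lim_{\varepsilon \to 0} M_\varepsilon, \]

where \(X_\varepsilon\) is a regularization of \(X\) at scale \(\varepsilon\); the limit is nontrivial for \(\gamma \in [0, \sqrt{2d})\) and degenerates to the zero measure at \(\gamma = \sqrt{2d}\), as stated in the abstract.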

153 citations


Journal Article
TL;DR: It is demonstrated that a size-independent timer mechanism for division control, though theoretically possible, is quantitatively incompatible with the data and extremely sensitive to slight variations in the growth law.
Abstract: Background Many organisms coordinate cell growth and division through size control mechanisms: cells must reach a critical size to trigger a cell cycle event. Bacterial division is often assumed to be controlled in this way, but experimental evidence to support this assumption is still lacking. Theoretical arguments show that size control is required to maintain size homeostasis in the case of exponential growth of individual cells. Nevertheless, if the growth law deviates slightly from exponential for very small cells, homeostasis can be maintained with a simple 'timer' triggering division. Therefore, deciding whether division control in bacteria relies on a 'timer' or 'sizer' mechanism requires quantitative comparisons between models and data. Results The timer and sizer hypotheses find a natural expression in models based on partial differential equations. Here we test these models with recent data on single-cell growth of Escherichia coli. We demonstrate that a size-independent timer mechanism for division control, though theoretically possible, is quantitatively incompatible with the data and extremely sensitive to slight variations in the growth law. In contrast, a sizer model is robust and fits the data well. In addition, we tested the effect of variability in individual growth rates and noise in septum positioning and found that size control is robust to this phenotypic noise. Conclusions Confrontations between cell cycle models and data usually suffer from a lack of high-quality data and suitable statistical estimation techniques. Here we overcome these limitations by using high precision measurements of tens of thousands of single bacterial cells combined with recent statistical inference methods to estimate the division rate within the models. We therefore provide the first precise quantitative assessment of different cell cycle models.
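As a purely illustrative companion to the modelling question (not the paper's PDE estimation machinery), the following Python sketch contrasts a 'timer' and a 'sizer' rule for exponentially growing cells; the function birth_sizes, the growth rate, the noise levels and the critical size are made-up values, not fitted quantities.

# Toy comparison of 'timer' vs 'sizer' division rules for exponentially growing cells.
import numpy as np

rng = np.random.default_rng(1)
growth_rate = 1.0                      # exponential growth rate of individual cells

def birth_sizes(rule, n_generations=2000):
    size = 1.0
    births = []
    for _ in range(n_generations):
        if rule == "timer":
            # division after a fixed (noisy) time, independent of size
            t_div = np.log(2.0) / growth_rate + rng.normal(0.0, 0.05)
        else:
            # division once a (noisy) critical size is reached
            target = 2.0 + rng.normal(0.0, 0.1)
            t_div = np.log(max(target, 1.001 * size) / size) / growth_rate
        size = size * np.exp(growth_rate * t_div) / 2.0     # symmetric division
        births.append(size)
    return np.array(births)

for rule in ("timer", "sizer"):
    s = birth_sizes(rule)
    print(rule, "coefficient of variation of birth size:", round(float(s.std() / s.mean()), 3))
# With exactly exponential growth a pure timer provides no size correction, so birth sizes
# drift over generations, whereas the sizer keeps them tightly distributed -- the
# homeostasis argument summarized in the abstract.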

146 citations


Journal Article
TL;DR: This paper proposes a mixed integer program formulation to solve the quadratic recourse problem of the location transportation problem and defines a tight bound for this reformulation.

143 citations


Journal Article
TL;DR: In this paper, the authors explored the heterogeneity of the health gap between migrants and natives across four European countries; the association between migratory status and self-assessed health was first examined separately in Belgium, France, Spain and Italy.
Abstract: BACKGROUND: Although the health status of immigrants constitutes an important public health issue, the literature provides contradictory results on the existence of a 'healthy migrant' effect in Europe. This study proposes to explore the heterogeneity of the health gap between migrants and natives across four European countries. DATA AND METHODS: Based on several harmonized national health interview surveys, the association between migratory status and self-assessed health was first explored separately in Belgium, France, Spain and Italy. To explore whether differences in the health gap between countries reflect differences in the health status of immigrants between host countries or whether they are due to differences in the health status of natives between host countries, the association between the host country and health was secondly analysed separately among a pooled sample of immigrants and one of natives, controlling for socio-economic status and country of origin. RESULTS: After controlling for socio-economic status, immigrants report a poorer health status than natives in France, Belgium and Spain, whereas they report a better health status than natives in Italy, among both women and men. A North-South gradient in immigrants' health status appears: their health status is better in Italy and in Spain than in France and Belgium. Conversely, the health status of natives is poorer in Italy and in Belgium than in France and in Spain. CONCLUSION: Differences in the health gap reflect differences in the health status of both natives and immigrants between host countries. This suggests differences in health selection at migration and in immigrants' integration between European countries.

Journal Article
TL;DR: This work derives necessary and sufficient conditions on summary statistics for the corresponding Bayes factor to be convergent, namely to select the true model asymptotically under the two models.
Abstract: The choice of the summary statistics that are used in Bayesian inference and in particular in approximate Bayesian computation algorithms has bearings on the validation of the resulting inference. Those statistics are nonetheless customarily used in approximate Bayesian computation algorithms without consistency checks. We derive necessary and sufficient conditions on summary statistics for the corresponding Bayes factor to be convergent, namely to select the true model asymptotically. Those conditions, which amount to the expectations of the summary statistics differing asymptotically under the two models, are quite natural and can be exploited in approximate Bayesian computation settings to infer whether or not a choice of summary statistics is appropriate, via a Monte Carlo validation.
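In symbols, writing \(S(y)\) for the vector of summary statistics and \(m_1, m_2\) for the two models, the condition described above amounts (roughly, and in our notation rather than the paper's) to

\[ \lim_{n\to\infty} \mathbb{E}_{m_1}\!\big[S(y_{1:n})\big] \;\neq\; \lim_{n\to\infty} \mathbb{E}_{m_2}\!\big[S(y_{1:n})\big], \]

in which case the Bayes factor based on \(S\) selects the true model asymptotically; when the limits coincide, the Bayes factor based on \(S\) is not consistent.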

Book
01 Mar 2014
TL;DR: This Synthesis Lecture aims at presenting the key motivations, results, abstractions and techniques underpinning judgment aggregation, as a unifying paradigm for the formalization and understanding of aggregation problems.
Abstract: Judgment aggregation is a mathematical theory of collective decision-making. It concerns the methods whereby individual opinions about logically interconnected issues of interest can, or cannot, be aggregated into one collective stance. Aggregation problems have traditionally been of interest for disciplines like economics and the political sciences, as well as philosophy, where judgment aggregation itself originates from, but have recently captured the attention of disciplines like computer science, artificial intelligence and multi-agent systems. Judgment aggregation has emerged in the last decade as a unifying paradigm for the formalization and understanding of aggregation problems. Still, no comprehensive presentation of the theory is available to date. This Synthesis Lecture aims at filling this gap by presenting the key motivations, results, abstractions and techniques underpinning it.

Journal Article
TL;DR: In this paper, the authors identify the conditions for making the coming years of the EU ETS a success and draw historical lessons from the eight years the scheme has been in operation, and then present the various interventions by the public authorities currently under discussion in order to revive the market.

Journal Article
TL;DR: A new method based on subsampling is proposed to deal with plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity, and some nonparametric estimates satisfying the required consistency and rate-of-convergence constraints in the Poisson or in the Hawkes framework are highlighted.
Abstract: When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to test whether the observed process is a Poisson process, for instance, or if it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323–330, 1984; Brown et al. in Neural Comput. 14(2):325–346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785 , 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in has sometimes very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model that have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of those methods on simulated data. We also provide a complete analysis with these tools on single unit activity recorded on a monkey during a sensory-motor task. Electronic Supplementary Material The online version of this article (doi:10.1186/2190-8567-4-3) contains supplementary material.
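As background for the plug-in step discussed here, the following Python sketch shows the classical procedure: time-rescale an inhomogeneous Poisson spike train with a plug-in intensity estimate, then run a Kolmogorov–Smirnov test of uniformity. It illustrates only the uncorrected plug-in test, not the subsampling correction proposed in the paper, and the intensity, bins and simulated data are placeholder assumptions.

# Plug-in Kolmogorov-Smirnov goodness-of-fit test for an inhomogeneous Poisson
# spike train, via time rescaling.  Illustrative only: this is the classical
# procedure whose plug-in pitfalls the paper analyses, not its subsampling fix.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
T = 100.0
true_rate = lambda t: 5.0 + 3.0 * np.sin(2 * np.pi * t / 20.0)   # spikes per second

# Simulate by thinning a homogeneous Poisson process of rate lam_max.
lam_max = 8.0
candidates = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
spikes = candidates[rng.uniform(0, lam_max, candidates.size) < true_rate(candidates)]

# Plug-in step: estimate the intensity (here, a crude histogram estimate).
bins = np.linspace(0, T, 51)
counts, _ = np.histogram(spikes, bins=bins)
rate_hat = counts / np.diff(bins)                       # estimated intensity per bin

# Time rescaling with the *estimated* intensity, then KS test of uniformity.
cum_rate = np.concatenate([[0.0], np.cumsum(rate_hat * np.diff(bins))])
rescaled = np.interp(spikes, bins, cum_rate) / cum_rate[-1]
print(kstest(rescaled, "uniform"))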

Journal Article
TL;DR: In this paper, the authors considered a quantum lattice system with infinite-dimensional on-site Hilbert space and showed that the Green-Kubo conductivity κ(β) decays faster than any polynomial in the inverse temperature β.
Abstract: We consider a quantum lattice system with infinite-dimensional on-site Hilbert space, very similar to the Bose–Hubbard model. We investigate many-body localization in this model, induced by thermal fluctuations rather than disorder in the Hamiltonian. We provide evidence that the Green–Kubo conductivity κ(β), defined as the time-integrated current autocorrelation function, decays faster than any polynomial in the inverse temperature β as \({\beta \to 0}\). More precisely, we define approximations \({\kappa_{\tau}(\beta)}\) to κ(β) by integrating the current-current autocorrelation function up to a large but finite time \({\tau}\) and we rigorously show that \({\beta^{-n}\kappa_{\beta^{-m}}(\beta)}\) vanishes as \({\beta \to 0}\), for any \({n,m \in \mathbb{N}}\) such that m−n is sufficiently large.
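Schematically, and suppressing the volume normalization, temperature-dependent prefactors and the precise definition of the current J, the quantities discussed here are

\[ \kappa(\beta) = \int_0^\infty \langle J(t)\,J(0)\rangle_\beta \,dt, \qquad \kappa_\tau(\beta) = \int_0^\tau \langle J(t)\,J(0)\rangle_\beta \,dt, \]

and the result stated above is that \(\beta^{-n}\kappa_{\beta^{-m}}(\beta) \to 0\) as \(\beta \to 0\) whenever \(m - n\) is sufficiently large.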

Journal Article
TL;DR: This paper presents a useful theoretical synthesis for the analyst in his/her decision aiding activity, and provides some practical instructions concerning the approach to follow for assigning values to these discriminating thresholds.
Abstract: This article deals with preference modeling. It concerns the concepts of discriminating thresholds as a tool to cope with the imperfect nature of knowledge in decision aiding. Such imperfect knowledge is related with the definition of each criterion as well as with the data we have to take into account. On the one hand, we shall present a useful theoretical synthesis for the analyst in his/her decision aiding activity, and, on the other hand, we shall provide some practical instructions concerning the approach to follow for assigning the values to these discriminating thresholds.

Journal Article
TL;DR: In this paper, the authors highlight four main tensions that emerge from strategists' discourses on strategizing work: social tension, cognitive tension, focus tension, and time tension.
Abstract: Until recently, the field of strategy has neglected the question of what it means to be a strategist. Based on an analysis of 68 interviews with strategy practitioners, our results highlight four main tensions that emerge from strategists' discourses on strategizing work: the social tension, the cognitive tension, the focus tension, and the time tension. This tension-based representation of strategy enables us to differentiate between three forms of strategists' subjectivities, i.e. the ways by which strategists discursively cope with tensions as a means of constituting their identity and legitimacy: the mythicizing subjectivity, the concretizing subjectivity, and the dialogizing subjectivity. Such results shed light on what a strategist is, suggesting that strategizing can be conceptualized as the art of balancing tensions and that multiple strategists' subjectivities within a paradox lens on strategy may in fact co-exist.

Journal Article
TL;DR: In this article, the authors developed and explained the concept of regulatory scripts, defined as the practices shared by a group of organizations in an industry in response to international frameworks and standards, which they call "institutional expectations".

Journal Article
TL;DR: Non-deterministic lot-sizing models are considered which serve for an explicit determination of lot sizes in an uncertain environment and taxonomy components for such models are suggested and a bibliography structured according to these components is presented.
Abstract: Non-deterministic lot-sizing models are considered which serve for an explicit determination of lot sizes in an uncertain environment. Taxonomy components for such models are suggested and a bibliography structured according to these components is presented. The taxonomy components are numeric characteristics of a lot-sizing problem, names of uncertain parameters and names of approaches to model the uncertainty. The bibliography covers more than 300 publications since the year 2000.

Journal Article
TL;DR: In this paper, the authors use the case of the market for CSR consultancy in Quebec to make visible the hand of management consultants in the creation of markets for virtue, focusing on three distinctive roles of CSR consultants as social and environmental issues translators, market boundary negotiators and responsive regulation enactors.
Abstract: Although the resurgence of Corporate Social Responsibility (CSR) has been described as the development of ‘markets for virtue’, little is known about the social construction of CSR markets. Prior works either focus on the economic potential of these markets or criticize the social commodification they reflect, denying them any virtue other than generating profit or maintaining the capitalist status quo. This article uses the case of the market for CSR consultancy in Quebec to make ‘visible’ the hand of management consultants in the creation of markets for virtue. Building on interviews with 23 consultants and secondary data, we relate three narrative accounts that highlight complementary facets of the construction of the market for CSR consultancy. Our narratives shed light on three distinctive roles of CSR consultants as social and environmental issues translators, market boundary negotiators and responsive regulation enactors. These roles clarify the regulative dynamics underlying CSR commodification and advance our understanding of consultancy work in the CSR domain.

Journal Article
TL;DR: In this paper, the authors examined how organizational socialization tactics interact with perceived organizational support to influence socialization outcomes above and beyond proactive personality, and found that POS significantly moderated the relationship between socialization tactics and three important socialization outcomes (learning the job, learning work-group norms, and role innovation).
Abstract: Understanding and facilitating new hires' adjustment are critical to maximizing the effectiveness of recruitment and selection. The aim of the current study is to examine how organizational socialization tactics interact with perceived organizational support (POS) to influence socialization outcomes above and beyond proactive personality. Our sample consisted of 103 blue-collar apprentices from a well-established apprenticeship program that began in the Middle Ages in France. Using a time-lagged design, we surveyed apprentices in their first months of employment, while they were learning their trade (carpentry, roofing, and stone cutting). We found that POS significantly moderated the relationship between socialization tactics and three important socialization outcomes (learning the job, learning work-group norms, and role innovation), such that there was a positive relationship under low POS and a non-significant relationship under high POS. Unexpectedly, POS was negatively related to role innovation. Implications for the organizational socialization literature are discussed.

Journal Article
TL;DR: It is demonstrated that upper-bounding the thresholds by a constant may significantly alleviate the search for efficiently solvable, but still meaningful special cases of Target Set Selection.
Abstract: Target Set Selection, which is a prominent NP-hard problem occurring in social network analysis and distributed computing, is notoriously hard both in terms of achieving useful polynomial-time approximation as well as fixed-parameter algorithms. Given an undirected graph, the task is to select a minimum number of vertices into a "target set" such that all other vertices will become active in the course of a dynamic process (which may go through several activation rounds). A vertex, equipped with a threshold value t, becomes active once at least t of its neighbors are active; initially, only the target set vertices are active. We contribute further insights into the existence of islands of tractability for Target Set Selection by spotting new parameterizations characterizing some sparse graphs as well as some "cliquish" graphs and developing corresponding fixed-parameter tractability and (parameterized) hardness results. In particular, we demonstrate that upper-bounding the thresholds by a constant may significantly alleviate the search for efficiently solvable, but still meaningful special cases of Target Set Selection.
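To fix ideas, the activation process underlying Target Set Selection can be simulated in a few lines of Python; the graph, thresholds and target set below are toy assumptions (verifying a given target set is easy; choosing a minimum one is the NP-hard part).

# Threshold activation process from Target Set Selection (verification only).
def activates(adjacency, thresholds, target_set):
    # Return the set of vertices active once the process stabilizes.
    active = set(target_set)
    changed = True
    while changed:
        changed = False
        for v, neighbours in adjacency.items():
            if v not in active and sum(u in active for u in neighbours) >= thresholds[v]:
                active.add(v)
                changed = True
    return active

adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
thresholds = {0: 1, 1: 2, 2: 2, 3: 2, 4: 1}
print(activates(adjacency, thresholds, target_set={0, 1}))   # -> {0, 1, 2, 3, 4}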

Journal Article
TL;DR: In this paper, the authors investigated the possibility of many-body localization in translation-invariant Hamiltonian systems, and showed that resonant spots are mobile, unlike in models with strong quenched disorder, and that these mobile spots constitute a possible mechanism for delocalization.
Abstract: We investigate the possibility of many-body localization in translation-invariant Hamiltonian systems, which was recently brought up by several authors. A key feature of many-body localized disordered systems is recovered, namely the fact that resonant spots are rare and far-between. However, we point out that resonant spots are mobile, unlike in models with strong quenched disorder, and that these mobile spots constitute a possible mechanism for delocalization, albeit possibly only on very long timescales. In some models, this argument for delocalization can be made very explicit in first order of perturbation theory in the hopping. For models where this does not work, we present instead a nonperturbative argument that relies solely on ergodicity inside the resonant spots.

Journal Article
TL;DR: In this paper, the authors compare the results from both standard parametric and more flexible semiparametric hedonic property models and show that the parametric model might structurally lead to important biases in the estimated impact of hazardous plants on housing values.

Journal Article
TL;DR: A formal semantic analysis of the alarm calls used by Campbell’s monkeys in the Tai forest and on Tiwai island—two sites that differ in the main predators that the monkeys are exposed to— develops models based on a compositional semantics in which concatenation is interpreted as conjunction, roots have lexical meanings, and an all-purpose alarm parameter is raised with each individual call.
Abstract: We develop a formal semantic analysis of the alarm calls used by Campbell’s monkeys in the Tai forest (Ivory Coast) and on Tiwai island (Sierra Leone)—two sites that differ in the main predators that the monkeys are exposed to (eagles on Tiwai vs. eagles and leopards in Tai). Building on data discussed in Ouattara et al. (PLoS ONE 4(11):e7808, 2009a; PNAS 106(51): 22026–22031, 2009b and Arnold et al. (Population differences in wild Campbell’s monkeys alarm call use, 2013), we argue that on both sites alarm calls include the roots krak and hok, which can optionally be affixed with -oo, a kind of attenuating suffix; in addition, sentences can start with boom boom, which indicates that the context is not one of predation. In line with Arnold et al., we show that the meaning of the roots is not quite the same in Tai and on Tiwai: krak often functions as a leopard alarm call in Tai, but as a general alarm call on Tiwai. We develop models based on a compositional semantics in which concatenation is interpreted as conjunction, roots have lexical meanings, -oo is an attenuating suffix, and an all-purpose alarm parameter is raised with each individual call. The first model accounts for the difference between Tai and Tiwai by way of different lexical entries for krak. The second model gives the same underspecified entry to krak in both locations (= general alarm call), but it makes use of a competition mechanism akin to scalar implicatures. In Tai, strengthening yields a meaning equivalent to non-aerial dangerous predator and turns out to single out leopards. On Tiwai, strengthening yields a nearly contradictory meaning due to the absence of ground predators, and only the unstrengthened meaning is used.

Posted Content
TL;DR: In this paper, the Forward-Backward proximal splitting algorithm is used to minimize the sum of two proper convex functions, one having a Lipschitz continuous gradient and the other being partly smooth relative to an active manifold.
Abstract: In this paper, we consider the Forward--Backward proximal splitting algorithm to minimize the sum of two proper convex functions, one of which has a Lipschitz continuous gradient while the other is partly smooth relative to an active manifold $\mathcal{M}$. We propose a generic framework under which we show that the Forward--Backward (i) correctly identifies the active manifold $\mathcal{M}$ in a finite number of iterations, and then (ii) enters a local linear convergence regime that we characterize precisely. This gives a grounded and unified explanation of the typical behaviour that has been observed numerically for many problems encompassed in our framework, including the Lasso, the group Lasso, the fused Lasso and the nuclear norm regularization, to name a few. These results may have numerous applications, including in signal/image processing, sparse recovery and machine learning.
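A minimal numerical illustration of the behaviour described above, for the Lasso instance mentioned in the abstract: the Forward--Backward iteration (a gradient step on the smooth term followed by the soft-thresholding proximal step) typically identifies the support, i.e. the active manifold, after finitely many iterations. The problem sizes, regularization weight and random data below are arbitrary assumptions.

# Forward-Backward (ISTA) for the Lasso: 0.5*||Ax - b||^2 + lam*||x||_1.
# Illustrates finite-time identification of the support (the active manifold).
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 100
A = rng.normal(size=(n, p))
x_true = np.zeros(p); x_true[:5] = rng.normal(size=5) * 3
b = A @ x_true + 0.01 * rng.normal(size=n)
lam = 0.5

step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(p)
support_history = []
for k in range(500):
    x = soft(x - step * A.T @ (A @ x - b), step * lam)   # forward (gradient) + backward (prox)
    support_history.append(frozenset(np.flatnonzero(x)))

# After some finite iteration the support stops changing (manifold identification).
first_stable = next(k for k in range(len(support_history))
                    if len(set(support_history[k:])) == 1)
print("support identified at iteration", first_stable,
      "with support size", len(support_history[-1]))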

Journal Article
TL;DR: In this article, an algebraic convergence rate for the homogenization of level-set convex Hamilton-Jacobi equations in i.i.d. random environments was shown.
Abstract: We present exponential error estimates and demonstrate an algebraic convergence rate for the homogenization of level-set convex Hamilton-Jacobi equations in i.i.d. random environments, the first quantitative homogenization results for these equations in the stochastic setting. By taking advantage of a connection between the metric approach to homogenization and the theory of first-passage percolation, we obtain estimates on the fluctuations of the solutions to the approximate cell problem in the ballistic regime (away from flat spot of the effective Hamiltonian). In the sub-ballistic regime (on the flat spot), we show that the fluctuations are governed by an entirely different mechanism and the homogenization may proceed, without further assumptions, at an arbitrarily slow rate. We identify a necessary and sufficient condition on the law of the Hamiltonian for an algebraic rate of convergence to hold in the sub-ballistic regime and show, under this hypothesis, that the two rates may be merged to yield comprehensive error estimates and an algebraic rate of convergence for homogenization. Our methods are novel and quite different from the techniques employed in the periodic setting, although we benefit from previous works in both first-passage percolation and homogenization. The link between the rate of homogenization and the flat spot of the effective Hamiltonian, which is related to the nonexistence of correctors, is a purely random phenomenon observed here for the first time.
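In the usual notation (ours, not necessarily the paper's), homogenization here refers to the almost sure convergence, in the environment \(\omega\), of solutions of the oscillatory problem to those of an effective equation:

\[ \partial_t u^\varepsilon + H\!\left(Du^\varepsilon, \tfrac{x}{\varepsilon}, \omega\right) = 0 \quad \longrightarrow \quad \partial_t \bar u + \overline{H}\!\left(D\bar u\right) = 0 \quad \text{as } \varepsilon \to 0, \]

and the rates discussed above quantify how fast \(u^\varepsilon \to \bar u\); the "flat spot" refers to a region of gradients on which the effective Hamiltonian \(\overline{H}\) is constant.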

Journal Article
TL;DR: In this paper, the authors developed a bottom-up marginal abatement cost curve (MACC) representing the cost of mitigation measures applicable in addition to business-as-usual agricultural practices.
Abstract: China is now the world's biggest annual emitter of greenhouse gases with 7467 million tons (Mt) carbon dioxide equivalent (CO2e) in 2005, with agriculture accounting for 11% of this total. As elsewhere, agricultural emissions mitigation policy in China faces a range of challenges due to the biophysical complexity and heterogeneity of farming systems, as well as other socioeconomic barriers. Existing research has contributed to improving our understanding of the technical potential of mitigation measures in this sector (i.e. what works). But for policy purposes it is important to convert these measures into a feasible economic potential, which provides a perspective on whether agricultural emissions reduction (measures) are low cost relative to mitigation measures and overall potential offered by other sectors of the economy. We develop a bottom-up marginal abatement cost curve (MACC) representing the cost of mitigation measures applicable in addition to business-as-usual agricultural practices. The MACC results demonstrate that while the sector offers a maximum technical potential of 402 MtCO2e in 2020, a reduction of 135 MtCO2e is potentially available at zero or negative cost (i.e. a cost saving), and 176 MtCO2e (approximately 44% of the total) can be abated at a cost below a threshold carbon price ≤¥ 100 (approximately €12) per tCO2e. Our findings highlight the relative cost effectiveness of nitrogen fertilizer and manure best management practices, and animal breeding practices. We outline the assumptions underlying MACC construction and discuss some scientific, socioeconomic and institutional barriers to realizing the indicated levels of mitigation.
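The mechanics of a bottom-up MACC are simple enough to sketch in Python: rank measures by cost per tonne and accumulate their abatement potentials. The measure names, costs and potentials below are invented placeholders, not the paper's Chinese agricultural data.

# Bottom-up marginal abatement cost curve (MACC): sort measures by unit cost,
# accumulate abatement.  All numbers are placeholders, not the paper's data.
measures = [
    # (name, cost in currency units per tCO2e, abatement potential in MtCO2e)
    ("nitrogen fertilizer best practice", -20.0, 60.0),
    ("manure management",                  -5.0, 40.0),
    ("animal breeding",                    30.0, 50.0),
    ("other measure",                     150.0, 30.0),
]

threshold = 100.0   # carbon price threshold per tCO2e
cumulative = 0.0
negative_cost_total = 0.0
below_threshold_total = 0.0
for name, cost, potential in sorted(measures, key=lambda m: m[1]):
    cumulative += potential
    if cost <= 0:
        negative_cost_total += potential
    if cost <= threshold:
        below_threshold_total += potential
    print(f"{name:38s} cost {cost:7.1f}  cumulative abatement {cumulative:6.1f} MtCO2e")

print("available at zero or negative cost:", negative_cost_total, "MtCO2e")
print("available below the price threshold:", below_threshold_total, "MtCO2e")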

Posted Content
TL;DR: A contracting problem in which a principal hires an agent to manage a risky project is considered, and it is shown that the optimal contract is linear in the following factors: the contractible sources of risk, including the output, the quadratic variation of the output, and the cross-variations between the output and the contractible risk sources.
Abstract: We consider a contracting problem in which a principal hires an agent to manage a risky project. When the agent chooses volatility components of the output process and the principal observes the output continuously, the principal can compute the quadratic variation of the output, but not the individual components. This leads to moral hazard with respect to the risk choices of the agent. We identify a family of admissible contracts for which the optimal agent's action is explicitly characterized, and, using the recent theory of singular changes of measures for Ito processes, we study how restrictive this family is. In particular, in the special case of the standard Holmström-Milgrom model with fixed volatility, the family includes all possible contracts. We solve the principal-agent problem in the case of CARA preferences, and show that the optimal contract is linear in these factors: the contractible sources of risk, including the output, the quadratic variation of the output and the cross-variations between the output and the contractible risk sources. Thus, like sample Sharpe ratios used in practice, path-dependent contracts naturally arise when there is moral hazard with respect to risk management. In a numerical example, we show that the loss of efficiency can be significant if the principal does not use the quadratic variation component of the optimal contract.
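The linearity result can be displayed schematically (the notation and constants are ours, not the paper's): writing \(X\) for the output and \(B\) for a contractible risk source, the optimal CARA contract takes the form

\[ \xi \;=\; c_0 \;+\; c_1\, X_T \;+\; c_2\, \langle X \rangle_T \;+\; c_3\, \langle X, B\rangle_T, \]

i.e. it is linear in the terminal output, the quadratic variation of the output, and the cross-variation between the output and the contractible risk sources, exactly the factors listed above.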