
Showing papers by "École Normale Supérieure published in 2020"


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4 +229 more; Institutions (70)
TL;DR: In this article, the authors present cosmological parameter results from the full-mission Planck measurements of the cosmic microwave background (CMB) anisotropies, combining information from the temperature and polarization maps and the lensing reconstruction.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the cosmic microwave background (CMB) anisotropies, combining information from the temperature and polarization maps and the lensing reconstruction. Compared to the 2015 results, improved measurements of large-scale polarization allow the reionization optical depth to be measured with higher precision, leading to significant gains in the precision of other correlated parameters. Improved modelling of the small-scale polarization leads to more robust constraints on many parameters, with residual modelling uncertainties estimated to affect them only at the 0.5σ level. We find good consistency with the standard spatially-flat 6-parameter ΛCDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density Ωch2 = 0.120 ± 0.001, baryon density Ωbh2 = 0.0224 ± 0.0001, scalar spectral index ns = 0.965 ± 0.004, and optical depth τ = 0.054 ± 0.007 (in this abstract we quote 68% confidence regions on measured parameters and 95% on upper limits). The angular acoustic scale is measured to 0.03% precision, with 100θ∗ = 1.0411 ± 0.0003. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-ΛCDM cosmology, the inferred (model-dependent) late-Universe parameters are: Hubble constant H0 = (67.4 ± 0.5) km s−1 Mpc−1; matter density parameter Ωm = 0.315 ± 0.007; and matter fluctuation amplitude σ8 = 0.811 ± 0.006. We find no compelling evidence for extensions to the base-ΛCDM model. Combining with baryon acoustic oscillation (BAO) measurements (and considering single-parameter extensions) we constrain the effective extra relativistic degrees of freedom to be Neff = 2.99 ± 0.17, in agreement with the Standard Model prediction Neff = 3.046, and find that the neutrino mass is tightly constrained to ∑mν < 0.12 eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base ΛCDM at over 2σ, which pulls some parameters that affect the lensing amplitude away from the ΛCDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. The joint constraint with BAO measurements on spatial curvature is consistent with a flat universe, ΩK = 0.001 ± 0.002. Also combining with Type Ia supernovae (SNe), the dark-energy equation of state parameter is measured to be w0 = −1.03 ± 0.03, consistent with a cosmological constant. We find no evidence for deviations from a purely power-law primordial spectrum, and combining with data from BAO, BICEP2, and Keck Array data, we place a limit on the tensor-to-scalar ratio r0.002 < 0.06. Standard big-bang nucleosynthesis predictions for the helium and deuterium abundances for the base-ΛCDM cosmology are in excellent agreement with observations. The Planck base-ΛCDM results are in good agreement with BAO, SNe, and some galaxy lensing observations, but in slight tension with the Dark Energy Survey’s combined-probe results including galaxy clustering (which prefers lower fluctuation amplitudes or matter density parameters), and in significant, 3.6σ, tension with local measurements of the Hubble constant (which prefer a higher value). Simple model extensions that can partially resolve these tensions are not favoured by the Planck data.
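As a quick sanity check on the quoted densities, the matter density parameter follows from the physical densities and the Hubble constant via Ωm = (Ωch2 + Ωbh2 + Ωνh2)/h2. Below is a minimal sketch of that arithmetic using the values from the abstract; the neutrino contribution and the 93.14 eV conversion factor are assumptions for illustration, not taken from the abstract.

```python
# Sanity check of the base-LambdaCDM densities quoted above (a sketch, not Planck code).
# Assumption: minimal neutrino mass sum of ~0.06 eV, converted via Omega_nu h^2 ~ sum(m_nu)/93.14 eV.
omega_c = 0.120          # cold dark matter density, Omega_c h^2
omega_b = 0.0224         # baryon density, Omega_b h^2
omega_nu = 0.06 / 93.14  # neutrino density for sum(m_nu) = 0.06 eV (assumed)
h = 0.674                # H0 / (100 km s^-1 Mpc^-1)

omega_m = (omega_c + omega_b + omega_nu) / h**2
print(f"Omega_m ~ {omega_m:.3f}")  # ~0.315, matching the quoted matter density parameter
```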

4,688 citations


Journal ArticleDOI
Pierre Friedlingstein1, Pierre Friedlingstein2, Michael O'Sullivan2, Matthew W. Jones3, Robbie M. Andrew, Judith Hauck, Are Olsen, Glen P. Peters, Wouter Peters4, Wouter Peters5, Julia Pongratz6, Julia Pongratz7, Stephen Sitch1, Corinne Le Quéré3, Josep G. Canadell8, Philippe Ciais9, Robert B. Jackson10, Simone R. Alin11, Luiz E. O. C. Aragão12, Luiz E. O. C. Aragão1, Almut Arneth, Vivek K. Arora, Nicholas R. Bates13, Nicholas R. Bates14, Meike Becker, Alice Benoit-Cattin, Henry C. Bittig, Laurent Bopp15, Selma Bultan6, Naveen Chandra16, Naveen Chandra17, Frédéric Chevallier9, Louise Chini18, Wiley Evans, Liesbeth Florentie4, Piers M. Forster19, Thomas Gasser20, Marion Gehlen9, Dennis Gilfillan, Thanos Gkritzalis21, Luke Gregor22, Nicolas Gruber22, Ian Harris23, Kerstin Hartung6, Kerstin Hartung24, Vanessa Haverd8, Richard A. Houghton25, Tatiana Ilyina7, Atul K. Jain26, Emilie Joetzjer27, Koji Kadono28, Etsushi Kato, Vassilis Kitidis29, Jan Ivar Korsbakken, Peter Landschützer7, Nathalie Lefèvre30, Andrew Lenton31, Sebastian Lienert32, Zhu Liu33, Danica Lombardozzi34, Gregg Marland35, Nicolas Metzl30, David R. Munro11, David R. Munro36, Julia E. M. S. Nabel7, S. Nakaoka16, Yosuke Niwa16, Kevin D. O'Brien37, Kevin D. O'Brien11, Tsuneo Ono, Paul I. Palmer, Denis Pierrot38, Benjamin Poulter, Laure Resplandy39, Eddy Robertson40, Christian Rödenbeck7, Jörg Schwinger, Roland Séférian27, Ingunn Skjelvan, Adam J. P. Smith3, Adrienne J. Sutton11, Toste Tanhua41, Pieter P. Tans11, Hanqin Tian42, Bronte Tilbrook43, Bronte Tilbrook31, Guido R. van der Werf44, N. Vuichard9, Anthony P. Walker45, Rik Wanninkhof38, Andrew J. Watson1, David R. Willis23, Andy Wiltshire40, Wenping Yuan46, Xu Yue47, Sönke Zaehle7 
University of Exeter1, École Normale Supérieure2, Norwich Research Park3, Wageningen University and Research Centre4, University of Groningen5, Ludwig Maximilian University of Munich6, Max Planck Society7, Commonwealth Scientific and Industrial Research Organisation8, Université Paris-Saclay9, Stanford University10, National Oceanic and Atmospheric Administration11, National Institute for Space Research12, Bermuda Institute of Ocean Sciences13, University of Southampton14, PSL Research University15, National Institute for Environmental Studies16, Japan Agency for Marine-Earth Science and Technology17, University of Maryland, College Park18, University of Leeds19, International Institute of Minnesota20, Flanders Marine Institute21, ETH Zurich22, University of East Anglia23, German Aerospace Center24, Woods Hole Research Center25, University of Illinois at Urbana–Champaign26, University of Toulouse27, Japan Meteorological Agency28, Plymouth Marine Laboratory29, University of Paris30, Hobart Corporation31, Oeschger Centre for Climate Change Research32, Tsinghua University33, National Center for Atmospheric Research34, Appalachian State University35, University of Colorado Boulder36, University of Washington37, Atlantic Oceanographic and Meteorological Laboratory38, Princeton University39, Met Office40, Leibniz Institute of Marine Sciences41, Auburn University42, University of Tasmania43, VU University Amsterdam44, Oak Ridge National Laboratory45, Sun Yat-sen University46, Nanjing University47
TL;DR: In this paper, the authors describe and synthesize data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere in a changing climate – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe and synthesize data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFOS) are based on energy statistics and cement production data, while emissions from land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2010–2019), EFOS was 9.6 ± 0.5 GtC yr−1 excluding the cement carbonation sink (9.4 ± 0.5 GtC yr−1 when the cement carbonation sink is included), and ELUC was 1.6 ± 0.7 GtC yr−1. For the same decade, GATM was 5.1 ± 0.02 GtC yr−1 (2.4 ± 0.01 ppm yr−1), SOCEAN 2.5 ± 0.6 GtC yr−1, and SLAND 3.4 ± 0.9 GtC yr−1, with a budget imbalance BIM of −0.1 GtC yr−1 indicating a near balance between estimated sources and sinks over the last decade. For the year 2019 alone, the growth in EFOS was only about 0.1 % with fossil emissions increasing to 9.9 ± 0.5 GtC yr−1 excluding the cement carbonation sink (9.7 ± 0.5 GtC yr−1 when cement carbonation sink is included), and ELUC was 1.8 ± 0.7 GtC yr−1, for total anthropogenic CO2 emissions of 11.5 ± 0.9 GtC yr−1 (42.2 ± 3.3 GtCO2). Also for 2019, GATM was 5.4 ± 0.2 GtC yr−1 (2.5 ± 0.1 ppm yr−1), SOCEAN was 2.6 ± 0.6 GtC yr−1, and SLAND was 3.1 ± 1.2 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 409.85 ± 0.1 ppm averaged over 2019. Preliminary data for 2020, accounting for the COVID-19-induced changes in emissions, suggest a decrease in EFOS relative to 2019 of about −7 % (median estimate) based on individual estimates from four studies of −6 %, −7 %, −7 % (−3 % to −11 %), and −13 %. Overall, the mean and trend in the components of the global carbon budget are consistently estimated over the period 1959–2019, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. Comparison of estimates from diverse approaches and observations shows (1) no consensus in the mean and trend in land-use change emissions over the last decade, (2) a persistent low agreement between the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent discrepancy between the different methods for the ocean sink outside the tropics, particularly in the Southern Ocean. 
This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding of the global carbon cycle compared with previous publications of this data set (Friedlingstein et al., 2019; Le Quere et al., 2018b, a, 2016, 2015b, a, 2014, 2013). The data presented in this work are available at https://doi.org/10.18160/gcp-2020 (Friedlingstein et al., 2020).
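The five budget components above obey a simple bookkeeping identity, EFOS + ELUC = GATM + SOCEAN + SLAND + BIM. The sketch below recomputes the imbalance from the 2010–2019 decadal means quoted in the abstract; because those inputs are already rounded to one decimal, the recovered value only approximates the reported −0.1 GtC yr−1.

```python
# Global carbon budget identity: total emissions = atmospheric growth + ocean sink + land sink + imbalance.
# Inputs are the rounded 2010-2019 decadal means quoted above (GtC per year), so the
# imbalance recovered here only approximates the reported -0.1 GtC per year.
E_FOS = 9.4    # fossil CO2 emissions, including the cement carbonation sink
E_LUC = 1.6    # land-use change emissions
G_ATM = 5.1    # growth of atmospheric CO2
S_OCEAN = 2.5  # ocean CO2 sink
S_LAND = 3.4   # terrestrial CO2 sink

B_IM = (E_FOS + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
print(f"budget imbalance B_IM ~ {B_IM:+.1f} GtC/yr")
```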

1,764 citations


Journal ArticleDOI
Yashar Akrami1, Yashar Akrami2, M. Ashdown3, J. Aumont4 +180 more; Institutions (59)
TL;DR: In this paper, a power-law fit to the angular power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky is presented.
Abstract: The study of polarized dust emission has become entwined with the analysis of the cosmic microwave background (CMB) polarization. We use new Planck maps to characterize Galactic dust emission as a foreground to the CMB polarization. We present Planck EE, BB, and TE power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky. We present power-law fits to the angular power spectra, yielding evidence for statistically significant variations of the exponents over sky regions and a difference between the values for the EE and BB spectra. The TE correlation and E/B power asymmetry extend to low multipoles that were not included in earlier Planck polarization papers. We also report evidence for a positive TB dust signal. Combining data from Planck and WMAP, we determine the amplitudes and spectral energy distributions (SEDs) of polarized foregrounds, including the correlation between dust and synchrotron polarized emission, for the six sky regions as a function of multipole. This quantifies the challenge of the component separation procedure required for detecting the reionization and recombination peaks of primordial CMB B modes. The SED of polarized dust emission is fit well by a single-temperature modified blackbody emission law from 353 GHz to below 70 GHz. For a dust temperature of 19.6 K, the mean spectral index for dust polarization is $\beta_{\rm d}^{P} = 1.53\pm0.02 $. By fitting multi-frequency cross-spectra, we examine the correlation of the dust polarization maps across frequency. We find no evidence for decorrelation. If the Planck limit for the largest sky region applies to the smaller sky regions observed by sub-orbital experiments, then decorrelation might not be a problem for CMB experiments aiming at a primordial B-mode detection limit on the tensor-to-scalar ratio $r\simeq0.01$ at the recombination peak.
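The single-temperature modified-blackbody law mentioned above has the form Iν ∝ ν^βd Bν(Td). The sketch below evaluates that scaling with the quoted βd = 1.53 and Td = 19.6 K; the frequency pair and normalization are illustrative only, and this is not the Planck fitting pipeline.

```python
import numpy as np

# Modified blackbody SED for polarized dust: I_nu ~ nu**beta * B_nu(T).
h = 6.626e-34  # Planck constant [J s]
k = 1.381e-23  # Boltzmann constant [J/K]
c = 2.998e8    # speed of light [m/s]

def planck_bnu(nu_hz, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * h * nu_hz**3 / c**2 / np.expm1(h * nu_hz / (k * T))

def dust_sed(nu_ghz, beta=1.53, T=19.6):
    """Modified blackbody in arbitrary units, using the spectral index and temperature quoted above."""
    nu = nu_ghz * 1e9
    return nu**beta * planck_bnu(nu, T)

# Illustrative brightness ratio of polarized dust between 353 GHz and 143 GHz:
print(dust_sed(353.0) / dust_sed(143.0))
```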

1,749 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Frederico Arroja4 +251 more; Institutions (72)
TL;DR: In this paper, the authors present the cosmological legacy of the Planck satellite, which provides the strongest constraints on the parameters of the standard cosmology model and some of the tightest limits available on deviations from that model.
Abstract: The European Space Agency’s Planck satellite, which was dedicated to studying the early Universe and its subsequent evolution, was launched on 14 May 2009. It scanned the microwave and submillimetre sky continuously between 12 August 2009 and 23 October 2013, producing deep, high-resolution, all-sky maps in nine frequency bands from 30 to 857 GHz. This paper presents the cosmological legacy of Planck, which currently provides our strongest constraints on the parameters of the standard cosmological model and some of the tightest limits available on deviations from that model. The 6-parameter ΛCDM model continues to provide an excellent fit to the cosmic microwave background data at high and low redshift, describing the cosmological information in over a billion map pixels with just six parameters. With 18 peaks in the temperature and polarization angular power spectra constrained well, Planck measures five of the six parameters to better than 1% (simultaneously), with the best-determined parameter (θ*) now known to 0.03%. We describe the multi-component sky as seen by Planck, the success of the ΛCDM model, and the connection to lower-redshift probes of structure formation. We also give a comprehensive summary of the major changes introduced in this 2018 release. The Planck data, alone and in combination with other probes, provide stringent constraints on our models of the early Universe and the large-scale structure within which all astrophysical objects form and evolve. We discuss some lessons learned from the Planck mission, and highlight areas ripe for further experimental advances.

879 citations


Journal ArticleDOI
08 Oct 2020-Nature
TL;DR: A global N2O inventory is presented that incorporates both natural and anthropogenic sources and accounts for the interaction between nitrogen additions and the biochemical processes that control N2O emissions, using bottom-up, top-down and process-based model approaches.
Abstract: Nitrous oxide (N2O), like carbon dioxide, is a long-lived greenhouse gas that accumulates in the atmosphere. Over the past 150 years, increasing atmospheric N2O concentrations have contributed to stratospheric ozone depletion1 and climate change2, with the current rate of increase estimated at 2 per cent per decade. Existing national inventories do not provide a full picture of N2O emissions, owing to their omission of natural sources and limitations in methodology for attributing anthropogenic sources. Here we present a global N2O inventory that incorporates both natural and anthropogenic sources and accounts for the interaction between nitrogen additions and the biochemical processes that control N2O emissions. We use bottom-up (inventory, statistical extrapolation of flux measurements, process-based land and ocean modelling) and top-down (atmospheric inversion) approaches to provide a comprehensive quantification of global N2O sources and sinks resulting from 21 natural and human sectors between 1980 and 2016. Global N2O emissions were 17.0 (minimum-maximum estimates: 12.2-23.5) teragrams of nitrogen per year (bottom-up) and 16.9 (15.9-17.7) teragrams of nitrogen per year (top-down) between 2007 and 2016. Global human-induced emissions, which are dominated by nitrogen additions to croplands, increased by 30% over the past four decades to 7.3 (4.2-11.4) teragrams of nitrogen per year. This increase was mainly responsible for the growth in the atmospheric burden. Our findings point to growing N2O emissions in emerging economies-particularly Brazil, China and India. Analysis of process-based model estimates reveals an emerging N2O-climate feedback resulting from interactions between nitrogen additions and climate change. The recent growth in N2O emissions exceeds some of the highest projected emission scenarios3,4, underscoring the urgency to mitigate N2O emissions.

650 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4 +213 more; Institutions (66)
TL;DR: In this article, the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release are described, with a hybrid method using different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles, implementing several methodological and data-analysis refinements compared to previous releases.
Abstract: We describe the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release. The overall approach is similar in spirit to the one retained for the 2013 and 2015 data releases, with a hybrid method using different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles, implementing several methodological and data-analysis refinements compared to previous releases. With more realistic simulations, and better correction and modelling of systematic effects, we can now make full use of the CMB polarization observed in the High Frequency Instrument (HFI) channels. The low-multipole EE cross-spectra from the 100 GHz and 143 GHz data give a constraint on the ΛCDM reionization optical-depth parameter τ to better than 15% (in combination with the TT low-ℓ data and the high-ℓ temperature and polarization data), tightening constraints on all parameters with posterior distributions correlated with τ. We also update the weaker constraint on τ from the joint TEB likelihood using the Low Frequency Instrument (LFI) channels, which was used in 2015 as part of our baseline analysis. At higher multipoles, the CMB temperature spectrum and likelihood are very similar to previous releases. A better model of the temperature-to-polarization leakage and corrections for the effective calibrations of the polarization channels (i.e., the polarization efficiencies) allow us to make full use of polarization spectra, improving the ΛCDM constraints on the parameters θMC, ωc, ωb, and H0 by more than 30%, and ns by more than 20% compared to TT-only constraints. Extensive tests on the robustness of the modelling of the polarization data demonstrate good consistency, with some residual modelling uncertainties. At high multipoles, we are now limited mainly by the accuracy of the polarization efficiency modelling. Using our various tests, simulations, and comparison between different high-multipole likelihood implementations, we estimate the consistency of the results to be better than the 0.5σ level on the ΛCDM parameters, as well as classical single-parameter extensions for the joint likelihood (to be compared to the 0.3σ levels we achieved in 2015 for the temperature data alone on ΛCDM only). Minor curiosities already present in the previous releases remain, such as the differences between the best-fit ΛCDM parameters for the ℓ < 800 and ℓ > 800 ranges of the power spectrum, or the preference for more smoothing of the power-spectrum peaks than predicted in ΛCDM fits. These are shown to be driven by the temperature power spectrum and are not significantly modified by the inclusion of the polarization data. Overall, the legacy Planck CMB likelihoods provide a robust tool for constraining the cosmological model and represent a reference for future CMB observations.

523 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a selective survey of algorithms for the offline detection of multiple change points in multivariate time series, and a general yet structuring methodological strategy is adopted to organize this vast body of work.
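The simplest instance of the offline problem surveyed here is locating a single mean shift by minimizing a quadratic (L2) segment cost over all candidate split points; the self-contained sketch below illustrates that idea on synthetic data. It is not code from the paper, which covers far more general cost functions and search strategies (dynamic programming, binary segmentation, window-based, and penalized approaches).

```python
import numpy as np

# Offline detection of a single change in the mean: scan all split points and keep
# the one minimizing the summed within-segment squared deviations (an L2 cost).
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.0, 1.0, 200),    # segment 1: mean 0
                         rng.normal(2.0, 1.0, 150)])   # segment 2: mean 2, true break at index 200

def l2_cost(segment):
    return ((segment - segment.mean()) ** 2).sum()

costs = [l2_cost(signal[:t]) + l2_cost(signal[t:]) for t in range(1, len(signal))]
t_hat = 1 + int(np.argmin(costs))
print(f"estimated change point: {t_hat} (true: 200)")
```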

506 citations


Journal ArticleDOI
Olivier Boucher1, Jérôme Servonnat2, Anna Lea Albright3, Olivier Aumont1, Yves Balkanski2, Vladislav Bastrikov2, Slimane Bekki1, Rémy Bonnet1, Sandrine Bony3, Laurent Bopp3, Pascale Braconnot2, Patrick Brockmann2, Patricia Cadule1, Arnaud Caubel2, Frédérique Cheruy3, Francis Codron1, Anne Cozic2, David Cugnet3, Fabio D'Andrea3, Paolo Davini, Casimir de Lavergne1, Sébastien Denvil1, Julie Deshayes1, Marion Devilliers4, Agnès Ducharne1, Jean-Louis Dufresne3, Eliott Dupont1, Christian Ethé1, Laurent Fairhead3, Lola Falletti1, Simona Flavoni1, Marie Alice Foujols1, Sébastien Gardoll1, Guillaume Gastineau1, Josefine Ghattas1, Jean Yves Grandpeix3, Bertrand Guenet2, E. Guez Lionel3, Eric Guilyardi1, Matthieu Guimberteau2, Didier Hauglustaine2, Frédéric Hourdin3, Abderrahmane Idelkadi3, Sylvie Joussaume2, Masa Kageyama2, Myriam Khodri1, Gerhard Krinner5, Nicolas Lebas1, Guillaume Levavasseur1, Claire Lévy1, Laurent Li3, François Lott3, Thibaut Lurton1, Sebastiaan Luyssaert6, Gurvan Madec1, Jean Baptiste Madeleine3, Fabienne Maignan2, Marion Marchand1, Olivier Marti2, Lidia Mellul3, Yann Meurdesoif2, Juliette Mignot1, Ionela Musat3, Catherine Ottlé2, Philippe Peylin2, Yann Planton1, Jan Polcher3, Catherine Rio2, Nicolas Rochetin3, Clément Rousset1, Pierre Sepulchre2, Adriana Sima3, Didier Swingedouw4, Rémi Thiéblemont, Abdoul Khadre Traore3, Martin Vancoppenolle1, Jessica Vial3, Jérôme Vialard1, Nicolas Viovy2, Nicolas Vuichard2 
TL;DR: The authors presented the global climate model IPSL-CM6A-LR developed at the Institut Pierre-Simon Laplace (IPSL) to study natural climate variability and climate response to natural and anthropogenic forcings as part of the sixth phase of the Coupled Model Intercomparison Project (CMIP6).
Abstract: This study presents the global climate model IPSL-CM6A-LR developed at Institut Pierre-Simon Laplace (IPSL) to study natural climate variability and climate response to natural and anthropogenic forcings as part of the sixth phase of the Coupled Model Intercomparison Project (CMIP6). This article describes the different model components, their coupling, and the simulated climate in comparison to previous model versions. We focus here on the representation of the physical climate along with the main characteristics of the global carbon cycle. The model's climatology, as assessed from a range of metrics (related in particular to radiation, temperature, precipitation, and wind), is strongly improved in comparison to previous model versions. Although they are reduced, a number of known biases and shortcomings (e.g., double Intertropical Convergence Zone [ITCZ], frequency of midlatitude wintertime blockings, and El Nino–Southern Oscillation [ENSO] dynamics) persist. The equilibrium climate sensitivity and transient climate response have both increased from the previous climate model IPSL-CM5A-LR used in CMIP5. A large ensemble of more than 30 members for the historical period (1850–2018) and a smaller ensemble for a range of emissions scenarios (until 2100 and 2300) are also presented and discussed.

492 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4 +202 more; Institutions (63)
TL;DR: In this article, the authors presented an extensive set of tests of the robustness of the lensing-potential power spectrum, and constructed a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400.
Abstract: We present measurements of the cosmic microwave background (CMB) lensing potential using the final Planck 2018 temperature and polarization data. Using polarization maps filtered to account for the noise anisotropy, we increase the significance of the detection of lensing in the polarization maps from 5σ to 9σ. Combined with temperature, lensing is detected at 40σ. We present an extensive set of tests of the robustness of the lensing-potential power spectrum, and construct a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400 (extending the range to lower L compared to 2015), which we use to constrain cosmological parameters. We find good consistency between lensing constraints and the results from the Planck CMB power spectra within the ΛCDM model. Combined with baryon density and other weak priors, the lensing analysis alone constrains a combination of σ8 and Ωm (1σ errors). Also combining with baryon acoustic oscillation data, we find tight individual parameter constraints, including σ8 = 0.811 ± 0.019. Combining with Planck CMB power spectrum data, we measure σ8 to better than 1% precision, finding σ8 = 0.811 ± 0.006. CMB lensing reconstruction data are complementary to galaxy lensing data at lower redshift, having a different degeneracy direction in σ8 − Ωm space; we find consistency with the lensing results from the Dark Energy Survey, and give combined lensing-only parameter constraints that are tighter than joint results using galaxy clustering. Using the Planck cosmic infrared background (CIB) maps as an additional tracer of high-redshift matter, we make a combined Planck-only estimate of the lensing potential over 60% of the sky with considerably more small-scale signal. We additionally demonstrate delensing of the Planck power spectra using the joint and individual lensing potential estimates, detecting a maximum removal of 40% of the lensing-induced power in all spectra. The improvement in the sharpening of the acoustic peaks by including both CIB and the quadratic lensing reconstruction is detected at high significance.

464 citations


Journal ArticleDOI
TL;DR: In this review, recent advances in the preparation, modification, and emerging application of nanocellulose, especially cellulose nanocrystals (CNCs), are described and discussed based on the analysis of the latest investigations.
Abstract: Over the past few years, nanocellulose (NC), cellulose in the form of nanostructures, has been proved to be one of the most prominent green materials of modern times. NC materials have gained growing interest owing to their attractive and excellent characteristics such as abundance, high aspect ratio, better mechanical properties, renewability, and biocompatibility. The abundant hydroxyl functional groups allow a wide range of functionalizations via chemical reactions, leading to developing various materials with tunable features. In this review, recent advances in the preparation, modification, and emerging application of nanocellulose, especially cellulose nanocrystals (CNCs), are described and discussed based on the analysis of the latest investigations (particularly for the reports of the past 3 years). We start with a concise background of cellulose, its structural organization as well as the nomenclature of cellulose nanomaterials for beginners in this field. Then, different experimental procedures for the production of nanocelluloses, their properties, and functionalization approaches are elaborated. Furthermore, a number of recent and emerging uses of nanocellulose in nanocomposites, Pickering emulsifiers, wood adhesives, wastewater treatment, as well as in new evolving biomedical applications are presented. Finally, the challenges and opportunities of NC-based emerging materials are discussed.

461 citations


Journal ArticleDOI
09 Mar 2020
TL;DR: A typology of compound events is proposed, distinguishing events that are preconditioned, multivariate, temporally compounding and spatially compounding, and suggests analytical and modelling approaches to aid in their investigation.
Abstract: Compound weather and climate events describe combinations of multiple climate drivers and/or hazards that contribute to societal or environmental risk. Although many climate-related disasters are caused by compound events, the understanding, analysis, quantification and prediction of such events is still in its infancy. In this Review, we propose a typology of compound events and suggest analytical and modelling approaches to aid in their investigation. We organize the highly diverse compound event types according to four themes: preconditioned, where a weather-driven or climate-driven precondition aggravates the impacts of a hazard; multivariate, where multiple drivers and/or hazards lead to an impact; temporally compounding, where a succession of hazards leads to an impact; and spatially compounding, where hazards in multiple connected locations cause an aggregated impact. Through structuring compound events and their respective analysis tools, the typology offers an opportunity for deeper insight into their mechanisms and impacts, benefiting the development of effective adaptation strategies. However, the complex nature of compound events means that some cases inevitably fit into more than one class, necessitating soft boundaries within the typology. Future work must homogenize the available analytical approaches into a robust toolset for compound-event analysis under present and future climate conditions. Research on compound events has increased vastly in the last several years, yet a typology has been lacking. This Review proposes a comprehensive classification scheme, incorporating compound events that are preconditioned, multivariate, temporally compounding and spatially compounding.

Journal ArticleDOI
Yashar Akrami1, Frederico Arroja2, M. Ashdown3, J. Aumont4 +187 more; Institutions (59)
TL;DR: In this paper, the Planck full-mission cosmic microwave background (CMB) temperature and E-mode polarization maps were used to obtain constraints on primordial non-Gaussianity.
Abstract: We analyse the Planck full-mission cosmic microwave background (CMB) temperature and E-mode polarization maps to obtain constraints on primordial non-Gaussianity (NG). We compare estimates obtained from separable template-fitting, binned, and optimal modal bispectrum estimators, finding consistent values for the local, equilateral, and orthogonal bispectrum amplitudes. Our combined temperature and polarization analysis produces the following final results: $f_{NL}^{local}$ = −0.9 ± 5.1; $f_{NL}^{equil}$ = −26 ± 47; and $f_{NL}^{ortho}$ = −38 ± 24 (68% CL, statistical). These results include low-multipole (4 ≤ l < 40) polarization data that are not included in our previous analysis. The results also pass an extensive battery of tests (with additional tests regarding foreground residuals compared to 2015), and they are stable with respect to our 2015 measurements (with small fluctuations, at the level of a fraction of a standard deviation, which is consistent with changes in data processing). Polarization-only bispectra display a significant improvement in robustness; they can now be used independently to set primordial NG constraints with a sensitivity comparable to WMAP temperature-based results and they give excellent agreement. In addition to the analysis of the standard local, equilateral, and orthogonal bispectrum shapes, we consider a large number of additional cases, such as scale-dependent feature and resonance bispectra, isocurvature primordial NG, and parity-breaking models, where we also place tight constraints but do not detect any signal. The non-primordial lensing bispectrum is, however, detected with an improved significance compared to 2015, excluding the null hypothesis at 3.5σ. Beyond estimates of individual shape amplitudes, we also present model-independent reconstructions and analyses of the Planck CMB bispectrum. Our final constraint on the local primordial trispectrum shape is $g_{NL}^{local}$ = (−5.8 ± 6.5) × 10$^4$ (68% CL, statistical), while constraints for other trispectrum shapes are also determined. Exploiting the tight limits on various bispectrum and trispectrum shapes, we constrain the parameter space of different early-Universe scenarios that generate primordial NG, including general single-field models of inflation, multi-field models (e.g. curvaton models), models of inflation with axion fields producing parity-violation bispectra in the tensor sector, and inflationary models involving vector-like fields with directionally-dependent bispectra. Our results provide a high-precision test for structure-formation scenarios, showing complete agreement with the basic picture of the ΛCDM cosmology regarding the statistics of the initial conditions, with cosmic structures arising from adiabatic, passive, Gaussian, and primordial seed perturbations.

Journal ArticleDOI
TL;DR: Preliminary data concerning the potential activity of type 1 interferons on SARS-CoV-2, and the relevance of evaluating these molecules in clinical trials for the treatment of COVID-19 are discussed.

Journal ArticleDOI
03 Dec 2020-Nature
TL;DR: Recent work on optical computing for artificial intelligence applications is reviewed and its promise and challenges are discussed.
Abstract: Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.

Journal ArticleDOI
TL;DR: A new range of aerosol radiative forcing over the industrial era is provided based on multiple, traceable, and arguable lines of evidence, including modeling approaches, theoretical considerations, and observations, to constrain the forcing from aerosol‐radiation interactions.
Abstract: Aerosols interact with radiation and clouds. Substantial progress made over the past 40 years in observing, understanding, and modeling these processes helped quantify the imbalance in the Earth’s radiation budget caused by anthropogenic aerosols, called aerosol radiative forcing, but uncertainties remain large. This review provides a new range of aerosol radiative forcing over the industrial era based on multiple, traceable and arguable lines of evidence, including modelling approaches, theoretical considerations, and observations. Improved understanding of aerosol absorption and the causes of trends in surface radiative fluxes constrain the forcing from aerosol-radiation interactions. A robust theoretical foundation and convincing evidence constrain the forcing caused by aerosol-driven increases in liquid cloud droplet number concentration. However, the influence of anthropogenic aerosols on cloud liquid water content and cloud fraction is less clear, and the influence on mixed-phase and ice clouds remains poorly constrained. Observed changes in surface temperature and radiative fluxes provide additional constraints. These multiple lines of evidence lead to a 68% confidence interval for the total aerosol effective radiative forcing of −1.60 to −0.65 W m−2, or −2.0 to −0.4 W m−2 with a 90% likelihood. Those intervals are of similar width to the last Intergovernmental Panel on Climate Change assessment but shifted towards more negative values. The uncertainty will narrow in the future by continuing to critically combine multiple lines of evidence, especially those addressing industrial-era changes in aerosol sources and aerosol effects on liquid cloud amount and on ice clouds.

Journal ArticleDOI
TL;DR: A failure to recognize the factors behind continued emissions growth could limit the world's ability to shift to a pathway consistent with 1.5 °C or 2 °C of global warming as mentioned in this paper.
Abstract: A failure to recognize the factors behind continued emissions growth could limit the world’s ability to shift to a pathway consistent with 1.5 °C or 2 °C of global warming. Continued support for low-carbon technologies needs to be combined with policies directed at phasing out the use of fossil fuels.

Book ChapterDOI
23 Aug 2020
TL;DR: The proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets.
Abstract: We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage. (https://www.di.ens.fr/willow/research/cosypose/.)
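The global scene refinement described above minimizes a reprojection error over camera and object poses. The sketch below writes out that residual for a single 3D point under a pinhole camera model; the function names and conventions are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Schematic reprojection residual used in object-level bundle adjustment:
# a 3D point expressed in an object's frame is mapped into a camera and compared
# with its observed pixel location.
def project(K, T_cam_obj, p_obj):
    """Project a 3D point (object frame) to pixels. K: 3x3 intrinsics, T_cam_obj: 4x4 object-to-camera pose."""
    p_cam = T_cam_obj[:3, :3] @ p_obj + T_cam_obj[:3, 3]
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def reprojection_residual(K, T_cam_obj, p_obj, uv_observed):
    """2D residual whose squared norm is summed over all points, objects, and views."""
    return project(K, T_cam_obj, p_obj) - uv_observed

# Toy usage: an object 2 m in front of the camera, one surface point, one observation.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4); T[2, 3] = 2.0
p = np.array([0.05, -0.02, 0.0])
print(reprojection_residual(K, T, p, np.array([335.0, 234.0])))  # -> [0., 0.]
```

In the method described above, residuals of this form are minimized jointly over all camera and object poses across views.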

Journal ArticleDOI
TL;DR: This study shows that the lockdown effect on atmospheric composition has been important for several short-lived atmospheric trace species, with a large reduction in NO2 concentrations, a lower reduction in particulate matter concentrations, and a mitigated effect on ozone concentrations due to non-linear chemical effects.

Journal ArticleDOI
28 Oct 2020-Nature
TL;DR: A high-throughput search for magnetic topological materials based on first-principles calculations is performed and several materials display previously unknown topological phases, including symmetry-indicated magnetic semimetals, three-dimensional anomalous Hall insulators and higher-order magneticSemimetals.
Abstract: The discoveries of intrinsically magnetic topological materials, including semimetals with a large anomalous Hall effect and axion insulators1–3, have directed fundamental research in solid-state materials. Topological quantum chemistry4 has enabled the understanding of and the search for paramagnetic topological materials5,6. Using magnetic topological indices obtained from magnetic topological quantum chemistry (MTQC)7, here we perform a high-throughput search for magnetic topological materials based on first-principles calculations. We use as our starting point the Magnetic Materials Database on the Bilbao Crystallographic Server, which contains more than 549 magnetic compounds with magnetic structures deduced from neutron-scattering experiments, and identify 130 enforced semimetals (for which the band crossings are implied by symmetry eigenvalues), and topological insulators. For each compound, we perform complete electronic structure calculations, which include complete topological phase diagrams using different values of the Hubbard potential. Using a custom code to find the magnetic co-representations of all bands in all magnetic space groups, we generate data to be fed into the algorithm of MTQC to determine the topology of each magnetic material. Several of these materials display previously unknown topological phases, including symmetry-indicated magnetic semimetals, three-dimensional anomalous Hall insulators and higher-order magnetic semimetals. We analyse topological trends in the materials under varying interactions: 60 per cent of the 130 topological materials have topologies sensitive to interactions, and the others have stable topologies under varying interactions. We provide a materials database for future experimental studies and open-source code for diagnosing topologies of magnetic materials. High-throughput calculations are performed to predict approximately 130 magnetic topological materials, with complete electronic structure calculations and topological phase diagrams.

Journal ArticleDOI
TL;DR: In this paper, the authors assess projections of these drivers of environmental change over the twenty-first century from Earth system models (ESMs) participating in the Coupled Model Intercomparison Project Phase 6 (CMIP6) that were forced under the CMIP6 Shared Socioeconomic Pathways (SSPs).
Abstract: Anthropogenic climate change is projected to lead to ocean warming, acidification, deoxygenation, reductions in near-surface nutrients, and changes to primary production, all of which are expected to affect marine ecosystems. Here we assess projections of these drivers of environmental change over the twenty-first century from Earth system models (ESMs) participating in the Coupled Model Intercomparison Project Phase 6 (CMIP6) that were forced under the CMIP6 Shared Socioeconomic Pathways (SSPs). Projections are compared to those from the previous generation (CMIP5) forced under the Representative Concentration Pathways (RCPs). A total of 10 CMIP5 and 13 CMIP6 models are used in the two multi-model ensembles. Under the high-emission scenario SSP5-8.5, the multi-model global mean change (2080–2099 mean values relative to 1870–1899) ± the inter-model SD in sea surface temperature, surface pH, subsurface (100–600 m) oxygen concentration, euphotic (0–100 m) nitrate concentration, and depth-integrated primary production is +3.47 ± 0.78 °C, −0.44 ± 0.005, −13.27 ± 5.28, −1.06 ± 0.45 mmol m−3, and −2.99 ± 9.11 %, respectively. Under the low-emission, high-mitigation scenario SSP1-2.6, the corresponding global changes are +1.42 ± 0.32 °C, −0.16 ± 0.002, −6.36 ± 2.92, −0.52 ± 0.23 mmol m−3, and −0.56 ± 4.12 %. Projected exposure of the marine ecosystem to these drivers of ocean change depends largely on the extent of future emissions, consistent with previous studies. The ESMs in CMIP6 generally project greater warming, acidification, deoxygenation, and nitrate reductions but lesser primary production declines than those from CMIP5 under comparable radiative forcing. The increased projected ocean warming results from a general increase in the climate sensitivity of CMIP6 models relative to those of CMIP5. This enhanced warming increases upper-ocean stratification in CMIP6 projections, which contributes to greater reductions in upper-ocean nitrate and subsurface oxygen ventilation. The greater surface acidification in CMIP6 is primarily a consequence of the SSPs having higher associated atmospheric CO2 concentrations than their RCP analogues for the same radiative forcing. We find no consistent reduction in inter-model uncertainties, and even an increase in net primary production inter-model uncertainties in CMIP6, as compared to CMIP5.
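The "multi-model global mean change ± the inter-model SD" quoted above is simply the mean and standard deviation, across models, of each model's global-mean change between the two periods. A minimal sketch with synthetic placeholder numbers (not the CMIP values):

```python
import numpy as np

# Ensemble summary: each model contributes one global-mean change (2080-2099 minus 1870-1899);
# the ensemble is reported as mean +/- inter-model standard deviation.
# The per-model sea-surface-temperature changes below are synthetic placeholders.
sst_change_per_model = np.array([3.1, 3.9, 2.8, 4.2, 3.5, 2.9, 3.6,
                                 4.0, 3.3, 3.8, 2.7, 3.4, 3.7])  # degC, 13 "models"

ensemble_mean = sst_change_per_model.mean()
inter_model_sd = sst_change_per_model.std(ddof=1)  # sample SD across models
print(f"{ensemble_mean:+.2f} +/- {inter_model_sd:.2f} degC")
```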

Journal ArticleDOI
TL;DR: The results demonstrate ingestion and mainly sublethal effects of environmental MPs in early life stages of fish at realistic MP concentrations, and the ecological consequences of microplastic build-up in aquatic ecosystems.

Journal ArticleDOI
TL;DR: Observational evidence is provided that increased foliage cover over the Northern Hemisphere, during 1982–2011, triggers an additional soil moisture deficit that is further carried over into summer, and climate model simulations independently support this and attribute the driving process to be larger increases in evapotranspiration than in precipitation.
Abstract: Earlier vegetation greening under climate change raises evapotranspiration and thus lowers spring soil moisture, yet the extent and magnitude of this water deficit persistence into the following summer remain elusive. We provide observational evidence that increased foliage cover over the Northern Hemisphere, during 1982-2011, triggers an additional soil moisture deficit that is further carried over into summer. Climate model simulations independently support this and attribute the driving process to be larger increases in evapotranspiration than in precipitation. This extra soil drying is projected to amplify the frequency and intensity of summer heatwaves. Most feedbacks operate locally, except for a notable teleconnection where extra moisture transpired over Europe is transported to central Siberia. Model results illustrate that this teleconnection offsets Siberian soil moisture losses from local spring greening. Our results highlight that climate change adaptation planning must account for the extra summer water and heatwave stress inherited from warming-induced earlier greening.

Journal ArticleDOI
TL;DR: This is a turning point for nanofluidics as discussed by the authors, which allows us to envision both fundamental discoveries for the transport of fluids at the ultimate scales, and disruptive technologies for the water-energy nexus.
Abstract: This is a turning point for nanofluidics. Recent progress allows envisioning both fundamental discoveries for the transport of fluids at the ultimate scales, and disruptive technologies for the water–energy nexus.

Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, M. Ashdown4 +202 more; Institutions (61)
TL;DR: In this article, the authors present an extensive analysis of systematic effects, including the use of end-to-end simulations to facilitate their removal and characterize the residuals, for the Planck 2018 HFI data.
Abstract: This paper presents the High Frequency Instrument (HFI) data processing procedures for the Planck 2018 release. Major improvements in mapmaking have been achieved since the previous Planck 2015 release, many of which were used and described already in an intermediate paper dedicated to the Planck polarized data at low multipoles. These improvements enabled the first significant measurement of the reionization optical depth parameter using Planck-HFI data. This paper presents an extensive analysis of systematic effects, including the use of end-to-end simulations to facilitate their removal and characterize the residuals. The polarized data, which presented a number of known problems in the 2015 Planck release, are very significantly improved, especially the leakage from intensity to polarization. Calibration, based on the cosmic microwave background (CMB) dipole, is now extremely accurate and in the frequency range 100–353 GHz reduces intensity-to-polarization leakage caused by calibration mismatch. The Solar dipole direction has been determined in the three lowest HFI frequency channels to within one arc minute, and its amplitude has an absolute uncertainty smaller than 0.35 μK, an accuracy of order 10−4. This is a major legacy from the Planck HFI for future CMB experiments. The removal of bandpass leakage has been improved for the main high-frequency foregrounds by extracting the bandpass-mismatch coefficients for each detector as part of the mapmaking process; these values in turn improve the intensity maps. This is a major change in the philosophy of “frequency maps”, which are now computed from single detector data, all adjusted to the same average bandpass response for the main foregrounds. End-to-end simulations have been shown to reproduce very well the relative gain calibration of detectors, as well as drifts within a frequency induced by the residuals of the main systematic effect (analogue-to-digital convertor non-linearity residuals). Using these simulations, we have been able to measure and correct the small frequency calibration bias induced by this systematic effect at the 10−4 level. There is no detectable sign of a residual calibration bias between the first and second acoustic peaks in the CMB channels, at the 10−3 level.

Journal ArticleDOI
TL;DR: The diversity of tools and activities observed in these three sites shows that Western Europe was populated by adaptable hominins during this time, and questions concerning understudied migration pathways, such as the Sicilian route are raised.
Abstract: Notarchirico (Southern Italy) has yielded the earliest evidence of Acheulean settlement in Italy, and four older occupation levels have recently been unearthed, including one with bifaces, extending the roots of the Acheulean in Italy even further back in time. New 40Ar/39Ar dates on tephras and ESR dates on bleached quartz securely and accurately place these occupations between 695 and 670 ka (MIS 17), penecontemporaneous with the Moulin-Quignon and la Noira sites (France). These new data demonstrate a very rapid expansion of shared traditions over Western Europe during a period of highly variable climatic conditions, including interglacial and glacial episodes, between 670 and 650 ka (i.e., the MIS 17/MIS 16 transition). The diversity of tools and activities observed in these three sites shows that Western Europe was populated by adaptable hominins during this time. These conclusions question the existence of refuge areas during intense glacial stages and raise questions concerning understudied migration pathways, such as the Sicilian route.

Posted Content
TL;DR: It is shown that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions.
Abstract: Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions. In the presence of hidden low-dimensional structures, the resulting margin is independent of the ambient dimension, which leads to strong generalization bounds. In contrast, training only the output layer implicitly solves a kernel support vector machine, which a priori does not enjoy such adaptivity. Our analysis of training is non-quantitative in terms of running time, but we prove computational guarantees in simplified settings by showing equivalences with online mirror descent. Finally, numerical experiments suggest that our analysis describes well the practical behavior of two-layer neural networks with ReLU activation and confirm the statistical benefits of this implicit bias.
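A schematic statement of the limiting object described above, in commonly used (here assumed) notation where F1 denotes the space of two-layer networks equipped with a total-variation-type norm:

```latex
% Schematic max-margin problem in a non-Hilbertian function space (assumed notation):
% the gradient-flow limit of the trained network behaves like a solution of
\max_{\|f\|_{\mathcal{F}_1} \le 1} \;\; \min_{1 \le i \le n} \; y_i\, f(x_i),
% whereas training only the output layer replaces \|\cdot\|_{\mathcal{F}_1} by an
% RKHS norm, recovering a kernel support vector machine.
```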

Posted Content
TL;DR: This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
Abstract: With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.


Journal ArticleDOI
TL;DR: This review discusses the current knowledge of the structure and chemistry of lignin, its isolation from different sources using various common methods, its characterization, properties, and applications.

Journal ArticleDOI
24 Jul 2020-Science
TL;DR: A process to learn the constraints for specifying proteins purely from evolutionary sequence data, design and build libraries of synthetic genes, and test them for activity in vivo using a quantitative complementation assay is described.
Abstract: The rational design of enzymes is an important goal for both fundamental and practical reasons. Here, we describe a process to learn the constraints for specifying proteins purely from evolutionary sequence data, design and build libraries of synthetic genes, and test them for activity in vivo using a quantitative complementation assay. For chorismate mutase, a key enzyme in the biosynthesis of aromatic amino acids, we demonstrate the design of natural-like catalytic function with substantial sequence diversity. Further optimization focuses the generative model toward function in a specific genomic context. The data show that sequence-based statistical models suffice to specify proteins and provide access to an enormous space of functional sequences. This result provides a foundation for a general process for evolution-based design of artificial proteins.