
Showing papers by "Stockholm University published in 2020"


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4  +229 moreInstitutions (70)
TL;DR: In this article, the authors present cosmological parameter results from the full-mission Planck measurements of the cosmic microwave background (CMB) anisotropies, combining information from the temperature and polarization maps and the lensing reconstruction.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the cosmic microwave background (CMB) anisotropies, combining information from the temperature and polarization maps and the lensing reconstruction. Compared to the 2015 results, improved measurements of large-scale polarization allow the reionization optical depth to be measured with higher precision, leading to significant gains in the precision of other correlated parameters. Improved modelling of the small-scale polarization leads to more robust constraints on many parameters, with residual modelling uncertainties estimated to affect them only at the 0.5σ level. We find good consistency with the standard spatially-flat 6-parameter ΛCDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted "base ΛCDM" in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density Ωch² = 0.120 ± 0.001, baryon density Ωbh² = 0.0224 ± 0.0001, scalar spectral index ns = 0.965 ± 0.004, and optical depth τ = 0.054 ± 0.007 (in this abstract we quote 68% confidence regions on measured parameters and 95% on upper limits). The angular acoustic scale is measured to 0.03% precision, with 100θ∗ = 1.0411 ± 0.0003. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-ΛCDM cosmology, the inferred (model-dependent) late-Universe parameters are: Hubble constant H0 = (67.4 ± 0.5) km s⁻¹ Mpc⁻¹; matter density parameter Ωm = 0.315 ± 0.007; and matter fluctuation amplitude σ8 = 0.811 ± 0.006. We find no compelling evidence for extensions to the base-ΛCDM model. Combining with baryon acoustic oscillation (BAO) measurements (and considering single-parameter extensions) we constrain the effective extra relativistic degrees of freedom to be Neff = 2.99 ± 0.17, in agreement with the Standard Model prediction Neff = 3.046, and find that the neutrino mass is tightly constrained to Σmν < 0.12 eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base ΛCDM at over 2σ, which pulls some parameters that affect the lensing amplitude away from the ΛCDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. The joint constraint with BAO measurements on spatial curvature is consistent with a flat universe, ΩK = 0.001 ± 0.002. Also combining with Type Ia supernovae (SNe), the dark-energy equation of state parameter is measured to be w0 = −1.03 ± 0.03, consistent with a cosmological constant. We find no evidence for deviations from a purely power-law primordial spectrum, and combining with BAO, BICEP2, and Keck Array data, we place a limit on the tensor-to-scalar ratio r0.002 < 0.06. Standard big-bang nucleosynthesis predictions for the helium and deuterium abundances for the base-ΛCDM cosmology are in excellent agreement with observations. The Planck base-ΛCDM results are in good agreement with BAO, SNe, and some galaxy lensing observations, but in slight tension with the Dark Energy Survey's combined-probe results including galaxy clustering (which prefers lower fluctuation amplitudes or matter density parameters), and in significant, 3.6σ, tension with local measurements of the Hubble constant (which prefer a higher value). Simple model extensions that can partially resolve these tensions are not favoured by the Planck data.
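As a quick consistency check on the numbers quoted in this abstract, the late-Universe matter density Ωm can be recovered from the quoted physical densities Ωch², Ωbh², and H0. The minimal neutrino mass sum (0.06 eV) and the 93.14 eV conversion factor are standard assumptions, not values stated in the abstract; this is an illustrative sketch, not part of the Planck analysis.

```python
# Consistency check (illustrative): recover Omega_m from the quoted densities.
omega_c_h2 = 0.120   # cold dark matter density, Omega_c h^2 (from abstract)
omega_b_h2 = 0.0224  # baryon density, Omega_b h^2 (from abstract)
sum_mnu_eV = 0.06    # minimal neutrino mass sum assumed in base LCDM (assumption)
h = 0.674            # H0 / (100 km s^-1 Mpc^-1) (from abstract)

omega_nu_h2 = sum_mnu_eV / 93.14  # standard conversion for massive neutrinos
omega_m = (omega_c_h2 + omega_b_h2 + omega_nu_h2) / h**2
print(f"Omega_m = {omega_m:.3f}")  # agrees with the quoted 0.315 +/- 0.007
```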

4,688 citations


Book
Georges Aad1, E. Abat2, Jalal Abdallah3, Jalal Abdallah4  +3029 moreInstitutions (164)
23 Feb 2020
TL;DR: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper, where a brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.
Abstract: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.

3,111 citations


Journal ArticleDOI
Yashar Akrami1, Yashar Akrami2, M. Ashdown3, J. Aumont4  +180 moreInstitutions (59)
TL;DR: In this paper, a power-law fit to the angular power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky is presented.
Abstract: The study of polarized dust emission has become entwined with the analysis of the cosmic microwave background (CMB) polarization. We use new Planck maps to characterize Galactic dust emission as a foreground to the CMB polarization. We present Planck EE, BB, and TE power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky. We present power-law fits to the angular power spectra, yielding evidence for statistically significant variations of the exponents over sky regions and a difference between the values for the EE and BB spectra. The TE correlation and E/B power asymmetry extend to low multipoles that were not included in earlier Planck polarization papers. We also report evidence for a positive TB dust signal. Combining data from Planck and WMAP, we determine the amplitudes and spectral energy distributions (SEDs) of polarized foregrounds, including the correlation between dust and synchrotron polarized emission, for the six sky regions as a function of multipole. This quantifies the challenge of the component separation procedure required for detecting the reionization and recombination peaks of primordial CMB B modes. The SED of polarized dust emission is fit well by a single-temperature modified blackbody emission law from 353 GHz to below 70 GHz. For a dust temperature of 19.6 K, the mean spectral index for dust polarization is $\beta_{\rm d}^{P} = 1.53\pm0.02 $. By fitting multi-frequency cross-spectra, we examine the correlation of the dust polarization maps across frequency. We find no evidence for decorrelation. If the Planck limit for the largest sky region applies to the smaller sky regions observed by sub-orbital experiments, then decorrelation might not be a problem for CMB experiments aiming at a primordial B-mode detection limit on the tensor-to-scalar ratio $r\simeq0.01$ at the recombination peak.
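The single-temperature modified-blackbody law quoted above (spectral index βdP = 1.53, dust temperature 19.6 K) fixes the frequency scaling of the polarized dust signal. The sketch below only illustrates that scaling, in arbitrary brightness units; it is not the paper's fitting pipeline, and the physical constants are standard values.

```python
import numpy as np

H_PLANCK = 6.62607015e-34  # Planck constant, J s
K_BOLTZ = 1.380649e-23     # Boltzmann constant, J/K
C_LIGHT = 299792458.0      # speed of light, m/s

def planck_bnu(nu_hz, temp_k):
    """Blackbody spectral radiance B_nu(T)."""
    x = H_PLANCK * nu_hz / (K_BOLTZ * temp_k)
    return 2 * H_PLANCK * nu_hz**3 / (C_LIGHT**2 * (np.exp(x) - 1))

def dust_sed(nu_ghz, beta=1.53, temp_k=19.6):
    """Modified blackbody nu^beta * B_nu(T), up to an overall amplitude."""
    nu_hz = nu_ghz * 1e9
    return nu_hz**beta * planck_bnu(nu_hz, temp_k)

# Dust is far brighter at 353 GHz than in the main CMB channels, which is
# why 353 GHz serves as the dust-polarization template.
ratio = dust_sed(353.0) / dust_sed(143.0)
print(f"dust(353 GHz) / dust(143 GHz) ~ {ratio:.1f}")
```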

1,749 citations


Journal ArticleDOI
Marielle Saunois1, Ann R. Stavert2, Ben Poulter3, Philippe Bousquet1, Josep G. Canadell2, Robert B. Jackson4, Peter A. Raymond5, Edward J. Dlugokencky6, Sander Houweling7, Sander Houweling8, Prabir K. Patra9, Prabir K. Patra10, Philippe Ciais1, Vivek K. Arora, David Bastviken11, Peter Bergamaschi, Donald R. Blake12, Gordon Brailsford13, Lori Bruhwiler6, Kimberly M. Carlson14, Mark Carrol3, Simona Castaldi15, Naveen Chandra9, Cyril Crevoisier16, Patrick M. Crill17, Kristofer R. Covey18, Charles L. Curry19, Giuseppe Etiope20, Giuseppe Etiope21, Christian Frankenberg22, Nicola Gedney23, Michaela I. Hegglin24, Lena Höglund-Isaksson25, Gustaf Hugelius17, Misa Ishizawa26, Akihiko Ito26, Greet Janssens-Maenhout, Katherine M. Jensen27, Fortunat Joos28, Thomas Kleinen29, Paul B. Krummel2, Ray L. Langenfelds2, Goulven Gildas Laruelle, Licheng Liu30, Toshinobu Machida26, Shamil Maksyutov26, Kyle C. McDonald27, Joe McNorton31, Paul A. Miller32, Joe R. Melton, Isamu Morino26, Jurek Müller28, Fabiola Murguia-Flores33, Vaishali Naik34, Yosuke Niwa26, Sergio Noce, Simon O'Doherty33, Robert J. Parker35, Changhui Peng36, Shushi Peng37, Glen P. Peters, Catherine Prigent, Ronald G. Prinn38, Michel Ramonet1, Pierre Regnier, William J. Riley39, Judith A. Rosentreter40, Arjo Segers, Isobel J. Simpson12, Hao Shi41, Steven J. Smith42, L. Paul Steele2, Brett F. Thornton17, Hanqin Tian41, Yasunori Tohjima26, Francesco N. Tubiello43, Aki Tsuruta44, Nicolas Viovy1, Apostolos Voulgarakis45, Apostolos Voulgarakis46, Thomas Weber47, Michiel van Weele48, Guido R. van der Werf8, Ray F. Weiss49, Doug Worthy, Debra Wunch50, Yi Yin22, Yi Yin1, Yukio Yoshida26, Weiya Zhang32, Zhen Zhang51, Yuanhong Zhao1, Bo Zheng1, Qing Zhu39, Qiuan Zhu52, Qianlai Zhuang30 
Université Paris-Saclay1, Commonwealth Scientific and Industrial Research Organisation2, Goddard Space Flight Center3, Stanford University4, Yale University5, National Oceanic and Atmospheric Administration6, Netherlands Institute for Space Research7, VU University Amsterdam8, Japan Agency for Marine-Earth Science and Technology9, Chiba University10, Linköping University11, University of California, Irvine12, National Institute of Water and Atmospheric Research13, New York University14, Seconda Università degli Studi di Napoli15, École Polytechnique16, Stockholm University17, Skidmore College18, University of Victoria19, National Institute of Geophysics and Volcanology20, Babeș-Bolyai University21, California Institute of Technology22, Met Office23, University of Reading24, International Institute for Applied Systems Analysis25, National Institute for Environmental Studies26, City University of New York27, University of Bern28, Max Planck Society29, Purdue University30, European Centre for Medium-Range Weather Forecasts31, Lund University32, University of Bristol33, Geophysical Fluid Dynamics Laboratory34, University of Leicester35, Université du Québec à Montréal36, Peking University37, Massachusetts Institute of Technology38, Lawrence Berkeley National Laboratory39, Southern Cross University40, Auburn University41, Joint Global Change Research Institute42, Food and Agriculture Organization43, Finnish Meteorological Institute44, Imperial College London45, Technical University of Crete46, University of Rochester47, Royal Netherlands Meteorological Institute48, Scripps Institution of Oceanography49, University of Toronto50, University of Maryland, College Park51, Hohai University52
TL;DR: The second version of the living review paper dedicated to the decadal methane budget, integrating results of top-down studies (atmospheric observations within an atmospheric inverse-modeling framework) and bottom-up estimates (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations) as discussed by the authors.
Abstract: Understanding and quantifying the global methane (CH4) budget is important for assessing realistic pathways to mitigate climate change. Atmospheric emissions and concentrations of CH4 continue to increase, making CH4 the second most important human-influenced greenhouse gas in terms of climate forcing, after carbon dioxide (CO2). The relative importance of CH4 compared to CO2 depends on its shorter atmospheric lifetime, stronger warming potential, and variations in atmospheric growth rate over the past decade, the causes of which are still debated. Two major challenges in reducing uncertainties in the atmospheric growth rate arise from the variety of geographically overlapping CH4 sources and from the destruction of CH4 by short-lived hydroxyl radicals (OH). To address these challenges, we have established a consortium of multidisciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate new research aimed at improving and regularly updating the global methane budget. Following Saunois et al. (2016), we present here the second version of the living review paper dedicated to the decadal methane budget, integrating results of top-down studies (atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up estimates (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations). For the 2008–2017 decade, global methane emissions are estimated by atmospheric inversions (a top-down approach) to be 576 Tg CH4 yr−1 (range 550–594, corresponding to the minimum and maximum estimates of the model ensemble). Of this total, 359 Tg CH4 yr−1 or ∼ 60 % is attributed to anthropogenic sources, that is emissions caused by direct human activity (i.e. anthropogenic emissions; range 336–376 Tg CH4 yr−1 or 50 %–65 %). 
The mean annual total emission for the new decade (2008–2017) is 29 Tg CH4 yr−1 larger than our estimate for the previous decade (2000–2009), and 24 Tg CH4 yr−1 larger than the one reported in the previous budget for 2003–2012 (Saunois et al., 2016). Since 2012, global CH4 emissions have been tracking the warmest scenarios assessed by the Intergovernmental Panel on Climate Change. Bottom-up methods suggest almost 30 % larger global emissions (737 Tg CH4 yr−1, range 594–881) than top-down inversion methods. Indeed, bottom-up estimates for natural sources such as natural wetlands, other inland water systems, and geological sources are higher than top-down estimates. The atmospheric constraints on the top-down budget suggest that at least some of these bottom-up emissions are overestimated. The latitudinal distribution of atmospheric observation-based emissions indicates a predominance of tropical emissions (∼ 65 % of the global budget, < 30∘ N) compared to mid-latitudes (∼ 30 %, 30–60∘ N) and high northern latitudes (∼ 4 %, 60–90∘ N). The most important source of uncertainty in the methane budget is attributable to natural emissions, especially those from wetlands and other inland waters. Some of our global source estimates are smaller than those in previously published budgets (Saunois et al., 2016; Kirschke et al., 2013). In particular, wetland emissions are about 35 Tg CH4 yr−1 lower due to an improved partitioning between wetlands and other inland waters. Emissions from geological sources and wild animals are also found to be smaller, by 7 Tg CH4 yr−1 and 8 Tg CH4 yr−1, respectively. However, the overall discrepancy between bottom-up and top-down estimates has been reduced by only 5 % compared to Saunois et al. (2016), due to a higher estimate of emissions from inland waters, highlighting the need for more detailed research on emission factors.
Priorities for improving the methane budget include (i) a global, high-resolution map of water-saturated soils and inundated areas emitting methane based on a robust classification of different types of emitting habitats; (ii) further development of process-based models for inland-water emissions; (iii) intensification of methane observations at local scales (e.g., FLUXNET-CH4 measurements) and urban-scale monitoring to constrain bottom-up land surface models, and at regional scales (surface networks and satellites) to constrain atmospheric inversions; (iv) improvements of transport models and the representation of photochemical sinks in top-down inversions; and (v) development of a 3D variational inversion system using isotopic and/or co-emitted species such as ethane to improve source partitioning.
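The headline fractions in the abstract above follow directly from the quoted fluxes; a minimal arithmetic check, using only values stated in the abstract:

```python
# Fluxes quoted in the abstract (Tg CH4 per year, 2008-2017 decade)
top_down = 576.0       # mean of the atmospheric-inversion ensemble
bottom_up = 737.0      # mean of the bottom-up ensemble
anthropogenic = 359.0  # top-down attribution to direct human activity

frac_anthro = anthropogenic / top_down  # the "~60 %" quoted
excess = bottom_up / top_down - 1       # the "almost 30 % larger"
print(f"anthropogenic share: {frac_anthro:.0%}, bottom-up excess: {excess:.0%}")
```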

1,047 citations


Journal ArticleDOI
Jens Kattge1, Gerhard Bönisch2, Sandra Díaz3, Sandra Lavorel  +751 moreInstitutions (314)
TL;DR: The extent of the trait data compiled in TRY is evaluated and emerging patterns of data coverage and representativeness are analyzed to conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements.
Abstract: Plant traits-the morphological, anatomical, physiological, biochemical and phenological characteristics of plants-determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits-almost complete coverage for 'plant growth form'. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait-environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.

882 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Frederico Arroja4  +251 moreInstitutions (72)
TL;DR: In this paper, the authors present the cosmological legacy of the Planck satellite, which provides the strongest constraints on the parameters of the standard cosmology model and some of the tightest limits available on deviations from that model.
Abstract: The European Space Agency’s Planck satellite, which was dedicated to studying the early Universe and its subsequent evolution, was launched on 14 May 2009. It scanned the microwave and submillimetre sky continuously between 12 August 2009 and 23 October 2013, producing deep, high-resolution, all-sky maps in nine frequency bands from 30 to 857 GHz. This paper presents the cosmological legacy of Planck, which currently provides our strongest constraints on the parameters of the standard cosmological model and some of the tightest limits available on deviations from that model. The 6-parameter ΛCDM model continues to provide an excellent fit to the cosmic microwave background data at high and low redshift, describing the cosmological information in over a billion map pixels with just six parameters. With 18 peaks in the temperature and polarization angular power spectra constrained well, Planck measures five of the six parameters to better than 1% (simultaneously), with the best-determined parameter (θ*) now known to 0.03%. We describe the multi-component sky as seen by Planck, the success of the ΛCDM model, and the connection to lower-redshift probes of structure formation. We also give a comprehensive summary of the major changes introduced in this 2018 release. The Planck data, alone and in combination with other probes, provide stringent constraints on our models of the early Universe and the large-scale structure within which all astrophysical objects form and evolve. We discuss some lessons learned from the Planck mission, and highlight areas ripe for further experimental advances.

879 citations


Journal ArticleDOI
Sadaf G. Sepanlou1, Saeid Safiri2, Catherine Bisignano3, Kevin S Ikuta4  +198 moreInstitutions (106)
TL;DR: Mortality, prevalence, and DALY estimates are compared with those expected according to the Socio-demographic Index (SDI) as a proxy for the development status of regions and countries; a significant increase in the age-standardised prevalence rate of decompensated cirrhosis is found between 1990 and 2017.

670 citations


Journal ArticleDOI
20 Jan 2020
TL;DR: In this article, the authors propose a set of four general principles that underlie high-quality knowledge co-production for sustainability research, and offer practical guidance on how to engage in meaningful co-productive practices, and how to evaluate their quality and success.
Abstract: Research practice, funding agencies and global science organizations suggest that research aimed at addressing sustainability challenges is most effective when ‘co-produced’ by academics and non-academics. Co-production promises to address the complex nature of contemporary sustainability challenges better than more traditional scientific approaches. But definitions of knowledge co-production are diverse and often contradictory. We propose a set of four general principles that underlie high-quality knowledge co-production for sustainability research. Using these principles, we offer practical guidance on how to engage in meaningful co-productive practices, and how to evaluate their quality and success.

607 citations


Journal ArticleDOI
23 Jun 2020-Science
TL;DR: By introducing age and activity heterogeneities into population models for SARS-CoV-2, herd immunity can be achieved at a population-wide infection rate of ∼40%, considerably lower than previous estimates.
Abstract: Despite various levels of preventive measures, in 2020 many countries have suffered severely from the coronavirus disease 2019 (COVID-19) pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). We show that population heterogeneity can significantly impact disease-induced immunity, as the proportion infected in groups with the highest contact rates is greater than in groups with low contact rates. We estimate that if R0 = 2.5 in an age-structured community with mixing rates fitted to social activity, then the disease-induced herd immunity level can be around 43%, which is substantially less than the classical herd immunity level of 60% obtained through homogeneous immunization of the population. Our estimates should be interpreted as an illustration of how population heterogeneity affects herd immunity, rather than an exact value or even a best estimate.
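The gap between the classical 60% threshold and the disease-induced level can be illustrated with a much simpler toy model than the authors' age-and-activity-structured one. The sketch below assumes two activity groups with proportionate mixing and immunity accruing in proportion to contact rate; the group sizes and contact rates are made-up illustrative numbers, so it reproduces the qualitative reduction (60% down to 50% here), not the paper's 43%.

```python
import numpy as np

n = np.array([0.5, 0.5])  # group population fractions (illustrative)
a = np.array([2.0, 1.0])  # relative contact rates (illustrative)
R0 = 2.5

# Classical herd immunity threshold under uniform (random) immunization:
classical = 1 - 1 / R0  # 0.6 for R0 = 2.5

# Under proportionate mixing, R0 is proportional to <a^2>/<a>. If immunity
# instead accrues in proportion to contact rate (group i immune fraction
# p_i = p * a_i / <a>), solving R_eff = R0 * <a^2 (1 - p a/<a>)> / <a^2> = 1
# gives a lower overall threshold p:
Ea = (n * a).sum()
Ea2 = (n * a**2).sum()
Ea3 = (n * a**3).sum()
disease_induced = (1 - 1 / R0) * Ea * Ea2 / Ea3  # 0.5 for these numbers

print(classical, disease_induced)
```

High-activity individuals are infected (and immunized) first, so each infection removes more transmission potential than a randomly placed vaccination, lowering the threshold.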

574 citations


Journal ArticleDOI
04 Jun 2020-Nature
TL;DR: The results obtained by seventy different teams analysing the same functional magnetic resonance imaging dataset show substantial variation, highlighting the influence of analytical choices and the importance of sharing workflows publicly and performing multiple analyses.
Abstract: Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses1. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset2-5. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.

551 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4  +213 moreInstitutions (66)
TL;DR: In this article, the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release are described, using a hybrid method with different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles and implementing several methodological and data-analysis refinements compared to previous releases.
Abstract: We describe the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release. The overall approach is similar in spirit to the one retained for the 2013 and 2015 data releases, with a hybrid method using different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles, implementing several methodological and data-analysis refinements compared to previous releases. With more realistic simulations, and better correction and modelling of systematic effects, we can now make full use of the CMB polarization observed in the High Frequency Instrument (HFI) channels. The low-multipole EE cross-spectra from the 100 GHz and 143 GHz data give a constraint on the ΛCDM reionization optical-depth parameter τ to better than 15% (in combination with the TT low-ℓ data and the high-ℓ temperature and polarization data), tightening constraints on all parameters with posterior distributions correlated with τ. We also update the weaker constraint on τ from the joint TEB likelihood using the Low Frequency Instrument (LFI) channels, which was used in 2015 as part of our baseline analysis. At higher multipoles, the CMB temperature spectrum and likelihood are very similar to previous releases. A better model of the temperature-to-polarization leakage and corrections for the effective calibrations of the polarization channels (i.e., the polarization efficiencies) allow us to make full use of the polarization spectra, improving the ΛCDM constraints on the parameters θMC, ωc, ωb, and H0 by more than 30%, and ns by more than 20% compared to TT-only constraints. Extensive tests on the robustness of the modelling of the polarization data demonstrate good consistency, with some residual modelling uncertainties. At high multipoles, we are now limited mainly by the accuracy of the polarization efficiency modelling.
Using our various tests, simulations, and comparison between different high-multipole likelihood implementations, we estimate the consistency of the results to be better than the 0.5 σ level on the ΛCDM parameters, as well as classical single-parameter extensions for the joint likelihood (to be compared to the 0.3 σ levels we achieved in 2015 for the temperature data alone on ΛCDM only). Minor curiosities already present in the previous releases remain, such as the differences between the best-fit ΛCDM parameters for the l > 800 ranges of the power spectrum, or the preference for more smoothing of the power-spectrum peaks than predicted in ΛCDM fits. These are shown to be driven by the temperature power spectrum and are not significantly modified by the inclusion of the polarization data. Overall, the legacy Planck CMB likelihoods provide a robust tool for constraining the cosmological model and represent a reference for future CMB observations.

Journal ArticleDOI
TL;DR: Evidence relevant to Earth's equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S, is assessed, using a Bayesian approach to produce a probability density function for S given all the evidence, and promising avenues for further narrowing the range are identified.
Abstract: We assess evidence relevant to Earth's equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S. This evidence includes feedback process understanding, the historical climate record, and the paleoclimate record. An S value lower than 2 K is difficult to reconcile with any of the three lines of evidence. The amount of cooling during the Last Glacial Maximum provides strong evidence against values of S greater than 4.5 K. Other lines of evidence in combination also show that this is relatively unlikely. We use a Bayesian approach to produce a probability density function (PDF) for S given all the evidence, including tests of robustness to difficult-to-quantify uncertainties and different priors. The 66% range is 2.6-3.9 K for our Baseline calculation and remains within 2.3-4.5 K under the robustness tests; corresponding 5-95% ranges are 2.3-4.7 K, bounded by 2.0-5.7 K (although such high-confidence ranges should be regarded more cautiously). This indicates a stronger constraint on S than reported in past assessments, by lifting the low end of the range. This narrowing occurs because the three lines of evidence agree and are judged to be largely independent and because of greater confidence in understanding feedback processes and in combining evidence. We identify promising avenues for further narrowing the range in S, in particular using comprehensive models and process understanding to address limitations in the traditional forcing-feedback paradigm for interpreting past changes.
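The Bayesian combination described above can be sketched schematically: independent lines of evidence enter as likelihood factors over a grid of S values, and their product is narrower than any single line. The likelihood shapes and numbers below are hypothetical stand-ins for illustration, not the assessment's actual likelihoods or priors.

```python
import numpy as np

S = np.linspace(0.5, 10.0, 2000)  # grid of climate sensitivity values (K)
dS = S[1] - S[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical likelihoods for the three lines of evidence
process    = gaussian(S, 3.0, 1.0)  # feedback process understanding
historical = gaussian(S, 3.2, 1.2)  # historical climate record
paleo      = gaussian(S, 3.3, 1.0)  # paleoclimate record

# Flat prior and independence assumed; multiply and normalize on the grid.
posterior = process * historical * paleo
posterior /= posterior.sum() * dS

cdf = np.cumsum(posterior) * dS
lo = S[np.searchsorted(cdf, 0.17)]  # 66% credible range, as in the paper
hi = S[np.searchsorted(cdf, 0.83)]
print(f"66% range: {lo:.1f}-{hi:.1f} K")
```

Because the three (assumed independent) lines of evidence overlap, the combined range is tighter than any individual one, which is the mechanism behind the narrowing described in the abstract.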

Journal ArticleDOI
TL;DR: This study clearly demonstrates that PFAS are used in almost all industry branches and many consumer products, and more than 200 use categories and subcategories are identified for more than 1400 individual PFAS.
Abstract: Per- and polyfluoroalkyl substances (PFAS) are of concern because of their high persistence (or that of their degradation products) and their impacts on human and environmental health that are known or can be deduced from some well-studied PFAS. Currently, many different PFAS (on the order of several thousands) are used in a wide range of applications, and there is no comprehensive source of information on the many individual substances and their functions in different applications. Here we provide a broad overview of many use categories where PFAS have been employed and for which function; we also specify which PFAS have been used and discuss the magnitude of the uses. Despite being non-exhaustive, our study clearly demonstrates that PFAS are used in almost all industry branches and many consumer products. In total, more than 200 use categories and subcategories are identified for more than 1400 individual PFAS. In addition to well-known categories such as textile impregnation, fire-fighting foam, and electroplating, the identified use categories also include many categories not described in the scientific literature, including PFAS in ammunition, climbing ropes, guitar strings, artificial turf, and soil remediation. We further discuss several use categories that may be prioritised for finding PFAS-free alternatives. Besides the detailed description of use categories, the present study also provides a list of the identified PFAS per use category, including their exact masses for future analytical studies aiming to identify additional PFAS.

Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4  +202 moreInstitutions (63)
TL;DR: In this article, the authors presented an extensive set of tests of the robustness of the lensing-potential power spectrum, and constructed a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400.
Abstract: We present measurements of the cosmic microwave background (CMB) lensing potential using the final Planck 2018 temperature and polarization data. Using polarization maps filtered to account for the noise anisotropy, we increase the significance of the detection of lensing in the polarization maps from 5σ to 9σ. Combined with temperature, lensing is detected at 40σ. We present an extensive set of tests of the robustness of the lensing-potential power spectrum, and construct a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400 (extending the range to lower L compared to 2015), which we use to constrain cosmological parameters. We find good consistency between lensing constraints and the results from the Planck CMB power spectra within the ΛCDM model. Combined with baryon density and other weak priors, the lensing analysis alone constrains a combination of σ8 and Ωm (1σ errors). Also combining with baryon acoustic oscillation data, we find tight individual parameter constraints, including σ8 = 0.811 ± 0.019. Combining with Planck CMB power spectrum data, we measure σ8 to better than 1% precision, finding σ8 = 0.811 ± 0.006. CMB lensing reconstruction data are complementary to galaxy lensing data at lower redshift, having a different degeneracy direction in σ8–Ωm space; we find consistency with the lensing results from the Dark Energy Survey, and give combined lensing-only parameter constraints that are tighter than joint results using galaxy clustering. Using the Planck cosmic infrared background (CIB) maps as an additional tracer of high-redshift matter, we make a combined Planck-only estimate of the lensing potential over 60% of the sky with considerably more small-scale signal. We additionally demonstrate delensing of the Planck power spectra using the joint and individual lensing potential estimates, detecting a maximum removal of 40% of the lensing-induced power in all spectra.
The improvement in the sharpening of the acoustic peaks by including both CIB and the quadratic lensing reconstruction is detected at high significance.

Journal ArticleDOI
Yashar Akrami1, Frederico Arroja2, M. Ashdown3, J. Aumont4  +187 moreInstitutions (59)
TL;DR: In this paper, the Planck full-mission cosmic microwave background (CMB) temperature and E-mode polarization maps were used to obtain constraints on primordial non-Gaussianity.
Abstract: We analyse the Planck full-mission cosmic microwave background (CMB) temperature and E-mode polarization maps to obtain constraints on primordial non-Gaussianity (NG). We compare estimates obtained from separable template-fitting, binned, and optimal modal bispectrum estimators, finding consistent values for the local, equilateral, and orthogonal bispectrum amplitudes. Our combined temperature and polarization analysis produces the following final results: $f_{NL}^{local}$ = −0.9 ± 5.1; $f_{NL}^{equil}$ = −26 ± 47; and $f_{NL}^{ortho}$ = −38 ± 24 (68% CL, statistical). These results include low-multipole (4 ≤ l < 40) polarization data that are not included in our previous analysis. The results also pass an extensive battery of tests (with additional tests regarding foreground residuals compared to 2015), and they are stable with respect to our 2015 measurements (with small fluctuations, at the level of a fraction of a standard deviation, which is consistent with changes in data processing). Polarization-only bispectra display a significant improvement in robustness; they can now be used independently to set primordial NG constraints with a sensitivity comparable to WMAP temperature-based results and they give excellent agreement. In addition to the analysis of the standard local, equilateral, and orthogonal bispectrum shapes, we consider a large number of additional cases, such as scale-dependent feature and resonance bispectra, isocurvature primordial NG, and parity-breaking models, where we also place tight constraints but do not detect any signal. The non-primordial lensing bispectrum is, however, detected with an improved significance compared to 2015, excluding the null hypothesis at 3.5σ. Beyond estimates of individual shape amplitudes, we also present model-independent reconstructions and analyses of the Planck CMB bispectrum. 
Our final constraint on the local primordial trispectrum shape is $g_{NL}^{local}$ = (−5.8 ± 6.5) × 10$^4$ (68% CL, statistical), while constraints for other trispectrum shapes are also determined. Exploiting the tight limits on various bispectrum and trispectrum shapes, we constrain the parameter space of different early-Universe scenarios that generate primordial NG, including general single-field models of inflation, multi-field models (e.g. curvaton models), models of inflation with axion fields producing parity-violation bispectra in the tensor sector, and inflationary models involving vector-like fields with directionally-dependent bispectra. Our results provide a high-precision test for structure-formation scenarios, showing complete agreement with the basic picture of the ΛCDM cosmology regarding the statistics of the initial conditions, with cosmic structures arising from adiabatic, passive, Gaussian, and primordial seed perturbations.

Journal ArticleDOI
TL;DR: In this article, the authors present a review of the development of hydrogen storage materials, methods and techniques, including electrochemical and thermal storage systems, and an outlook for future prospects and research on hydrogen-based energy storage.

Journal ArticleDOI
TL;DR: In this article, the authors synthesize the best available information and develop inventory models to simulate abrupt thaw impacts on permafrost carbon balance, and they conclude that models considering only gradual thaw are substantially underestimating carbon emissions.
Abstract: The permafrost zone is expected to be a substantial carbon source to the atmosphere, yet large-scale models currently only simulate gradual changes in seasonally thawed soil. Abrupt thaw will probably occur in <20% of the permafrost zone but could affect half of permafrost carbon through collapsing ground, rapid erosion and landslides. Here, we synthesize the best available information and develop inventory models to simulate abrupt thaw impacts on permafrost carbon balance. Emissions across 2.5 million km2 of abrupt thaw could provide a similar climate feedback as gradual thaw emissions from the entire 18 million km2 permafrost region under the warming projection of Representative Concentration Pathway 8.5. While models forecast that gradual thaw may lead to net ecosystem carbon uptake under projections of Representative Concentration Pathway 4.5, abrupt thaw emissions are likely to offset this potential carbon sink. Active hillslope erosional features will occupy 3% of abrupt thaw terrain by 2300 but emit one-third of abrupt thaw carbon losses. Thaw lakes and wetlands are methane hot spots but their carbon release is partially offset by slowly regrowing vegetation. After considering abrupt thaw stabilization, lake drainage and soil carbon uptake by vegetation regrowth, we conclude that models considering only gradual permafrost thaw are substantially underestimating carbon emissions from thawing permafrost. Analyses of inventory models under two climate change projection scenarios suggest that carbon emissions from abrupt thaw of permafrost through ground collapse, erosion and landslides could contribute significantly to the overall permafrost carbon balance.

Journal ArticleDOI
TL;DR: This Review comprehensively surveys progress in polymer-derived functional HPCMs: how to produce and control their porosities, heteroatom doping effects, and morphologies, and their related uses. It also provides a perspective on how to predefine the structures of HPCMs by using polymers so as to realize their potential applications in the current fields of energy generation/conversion and environmental remediation.
Abstract: Heteroatom-doped porous carbon materials (HPCMs) have found extensive applications in adsorption/separation, organic catalysis, sensing, and energy conversion/storage. The judicious choice of carbon precursors is crucial for the manufacture of HPCMs with specific usages and maximization of their functions. In this regard, polymers as precursors have demonstrated great promise because of their versatile molecular and nanoscale structures, modulatable chemical composition, and rich processing techniques to generate textures that, in combination with proper solid-state chemistry, can be maintained throughout carbonization. This Review comprehensively surveys the progress in polymer-derived functional HPCMs in terms of how to produce and control their porosities, heteroatom doping effects, and morphologies and their related use. First, we summarize and discuss synthetic approaches, including hard and soft templating methods as well as direct synthesis strategies employing polymers to control the pores and/or heteroatoms in HPCMs. Second, we summarize the heteroatom doping effects on the thermal stability, electronic and optical properties, and surface chemistry of HPCMs. Specifically, the heteroatom doping effect, which involves both single-type heteroatom doping and codoping of two or more types of heteroatoms into the carbon network, is discussed. Considering the significance of the morphologies of HPCMs in their application spectrum, potential choices of suitable polymeric precursors and strategies to precisely regulate the morphologies of HPCMs are presented. Finally, we provide our perspective on how to predefine the structures of HPCMs by using polymers to realize their potential applications in the current fields of energy generation/conversion and environmental remediation. 
We believe that these analyses and deductions are valuable for a systematic understanding of polymer-derived carbon materials and will serve as a source of inspiration for the design of future HPCMs.

Journal ArticleDOI
TL;DR: This work discusses and evaluates the potential of social tipping interventions (STIs) that can activate contagious processes of rapidly spreading technologies, behaviors, social norms, and structural reorganization within their functional domains, which the authors describe as social tipping elements (STEs).
Abstract: Safely achieving the goals of the Paris Climate Agreement requires a worldwide transformation to carbon-neutral societies within the next 30 y. Accelerated technological progress and policy implementations are required to deliver emissions reductions at rates sufficiently fast to avoid crossing dangerous tipping points in the Earth's climate system. Here, we discuss and evaluate the potential of social tipping interventions (STIs) that can activate contagious processes of rapidly spreading technologies, behaviors, social norms, and structural reorganization within their functional domains that we refer to as social tipping elements (STEs). STEs are subdomains of the planetary socioeconomic system where the required disruptive change may take place and lead to a sufficiently fast reduction in anthropogenic greenhouse gas emissions. The results are based on online expert elicitation, a subsequent expert workshop, and a literature review. The STIs that could trigger the tipping of STE subsystems include 1) removing fossil-fuel subsidies and incentivizing decentralized energy generation (STE1, energy production and storage systems), 2) building carbon-neutral cities (STE2, human settlements), 3) divesting from assets linked to fossil fuels (STE3, financial markets), 4) revealing the moral implications of fossil fuels (STE4, norms and value systems), 5) strengthening climate education and engagement (STE5, education system), and 6) disclosing information on greenhouse gas emissions (STE6, information feedbacks). Our research reveals important areas of focus for larger-scale empirical and modeling efforts to better understand the potentials of harnessing social tipping dynamics for climate change mitigation.

Journal ArticleDOI
TL;DR: A new range of aerosol radiative forcing over the industrial era is provided based on multiple, traceable, and arguable lines of evidence, including modeling approaches, theoretical considerations, and observations, to constrain the forcing from aerosol‐radiation interactions.
Abstract: Aerosols interact with radiation and clouds. Substantial progress made over the past 40 years in observing, understanding, and modeling these processes helped quantify the imbalance in the Earth’s radiation budget caused by anthropogenic aerosols, called aerosol radiative forcing, but uncertainties remain large. This review provides a new range of aerosol radiative forcing over the industrial era based on multiple, traceable and arguable lines of evidence, including modelling approaches, theoretical considerations, and observations. Improved understanding of aerosol absorption and the causes of trends in surface radiative fluxes constrain the forcing from aerosol-radiation interactions. A robust theoretical foundation and convincing evidence constrain the forcing caused by aerosol-driven increases in liquid cloud droplet number concentration. However, the influence of anthropogenic aerosols on cloud liquid water content and cloud fraction is less clear, and the influence on mixed-phase and ice clouds remains poorly constrained. Observed changes in surface temperature and radiative fluxes provide additional constraints. These multiple lines of evidence lead to a 68% confidence interval for the total aerosol effective radiative forcing of −1.60 to −0.65 W m−2, or −2.0 to −0.4 W m−2 with a 90% likelihood. Those intervals are of similar width to the last Intergovernmental Panel on Climate Change assessment but shifted towards more negative values. The uncertainty will narrow in the future by continuing to critically combine multiple lines of evidence, especially those addressing industrial-era changes in aerosol sources and aerosol effects on liquid cloud amount and on ice clouds.
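The two quoted forcing intervals can be sanity-checked against a symmetric Gaussian. A minimal back-of-envelope sketch (illustrative only, not from the paper; the shifted centre of the 90% interval shows the underlying distribution is in fact slightly asymmetric):

```python
from statistics import NormalDist

# Aerosol effective radiative forcing intervals quoted in the abstract (W m^-2)
lo68, hi68 = -1.60, -0.65   # 68% confidence interval
lo90, hi90 = -2.0, -0.4     # 90% likelihood interval

center68, half68 = (lo68 + hi68) / 2, (hi68 - lo68) / 2
center90, half90 = (lo90 + hi90) / 2, (hi90 - lo90) / 2

# For a symmetric Gaussian, the 90%/68% half-width ratio is fixed:
z = NormalDist()
gauss_ratio = z.inv_cdf(0.95) / z.inv_cdf(0.84)   # two-sided 90% vs 68%
obs_ratio = half90 / half68

print(f"interval centers: {center68:.3f} vs {center90:.3f} W m^-2")
print(f"half-width ratio: observed {obs_ratio:.2f}, Gaussian {gauss_ratio:.2f}")
```

The half-width ratio (~1.68) is close to the Gaussian expectation (~1.65), but the centres differ, consistent with a mildly skewed posterior.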

Journal ArticleDOI
TL;DR: This work used shotgun metagenomics of mucosal biopsies to explore the composition of the microbial communities of the terminal ileum and large intestine in 5 healthy individuals, detailing which species are involved in the tryptophan/indole pathway and describing the antimicrobial resistance biogeography along the intestine.
Abstract: Gut mucosal microbes evolved closest to the host, developing specialized local communities. There is, however, insufficient knowledge of these communities as most studies have employed sequencing technologies to investigate faecal microbiota only. This work used shotgun metagenomics of mucosal biopsies to explore the microbial communities' compositions of terminal ileum and large intestine in 5 healthy individuals. Functional annotations and genome-scale metabolic modelling of selected species were then employed to identify local functional enrichments. While faecal metagenomics provided a good approximation of the average gut mucosal microbiome composition, mucosal biopsies allowed detecting the subtle variations of local microbial communities. Given their significant enrichment in the mucosal microbiota, we highlight the roles of Bacteroides species and describe the antimicrobial resistance biogeography along the intestine. We also detail which species, at which locations, are involved with the tryptophan/indole pathway, whose malfunctioning has been linked to pathologies including inflammatory bowel disease. Our study thus provides invaluable resources for investigating mechanisms connecting gut microbiota and host pathophysiology.

Journal ArticleDOI
W. Decking, S. Abeghyan, P. Abramian, A. Abramsky  +478 moreInstitutions (15)
TL;DR: The European XFEL as discussed by the authors is a hard X-ray free-electron laser (FEL) based on a high-electron-energy superconducting linear accelerator, which allows for the acceleration of many electron bunches within one radio-frequency pulse of the accelerating voltage and, in turn, for the generation of a large number of hard X-ray pulses.
Abstract: The European XFEL is a hard X-ray free-electron laser (FEL) based on a high-electron-energy superconducting linear accelerator. The superconducting technology allows for the acceleration of many electron bunches within one radio-frequency pulse of the accelerating voltage and, in turn, for the generation of a large number of hard X-ray pulses. We report on the performance of the European XFEL accelerator with up to 5,000 electron bunches per second, demonstrating a full energy of 17.5 GeV. Feedback mechanisms enable stabilization of the electron beam delivery at the FEL undulator in space and time. The measured FEL gain curve at 9.3 keV is in good agreement with predictions for saturated FEL radiation. Hard X-ray lasing was achieved between 7 keV and 14 keV with pulse energies of up to 2.0 mJ. Using the high repetition rate, an FEL beam with 6 W average power was created.
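The quoted figures are internally consistent: the maximum per-pulse energy and repetition rate bound the reported 6 W average power from above. A quick arithmetic check (illustrative only, not from the paper):

```python
# Upper bound on average FEL power implied by the quoted maxima.
pulse_energy_mj = 2.0    # up to 2.0 mJ per pulse
pulses_per_s = 5000      # up to 5,000 bunches per second
upper_bound_w = pulse_energy_mj * pulses_per_s / 1000  # mJ/s -> W

print(f"upper bound: {upper_bound_w:.0f} W (reported average: 6 W)")
```

The reported 6 W sits below the 10 W bound, as expected since not every pulse reaches the maximum energy.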

Journal ArticleDOI
TL;DR: This study compiles over 7,000 field observations to present a data-driven map of northern peatlands and their carbon and nitrogen stocks, using machine-learning techniques with extensive peat core data to create observation-based maps of northern peatland C and N stocks and to assess their response to warming and permafrost thaw.
Abstract: Northern peatlands have accumulated large stocks of organic carbon (C) and nitrogen (N), but their spatial distribution and vulnerability to climate warming remain uncertain. Here, we used machine-learning techniques with extensive peat core data (n > 7,000) to create observation-based maps of northern peatland C and N stocks, and to assess their response to warming and permafrost thaw. We estimate that northern peatlands cover 3.7 ± 0.5 million km2 and store 415 ± 150 Pg C and 10 ± 7 Pg N. Nearly half of the peatland area and peat C stocks are permafrost affected. Using modeled global warming stabilization scenarios (from 1.5 to 6 °C warming), we project that the current sink of atmospheric C (0.10 ± 0.02 Pg C⋅y-1) in northern peatlands will shift to a C source as 0.8 to 1.9 million km2 of permafrost-affected peatlands thaw. The projected thaw would cause peatland greenhouse gas emissions equal to ∼1% of anthropogenic radiative forcing in this century. The main forcing is from methane emissions (0.7 to 3 Pg cumulative CH4-C) with smaller carbon dioxide forcing (1 to 2 Pg CO2-C) and minor nitrous oxide losses. We project that initial CO2-C losses reverse after ∼200 y, as warming strengthens peatland C-sinks. We project substantial, but highly uncertain, additional losses of peat into fluvial systems of 10 to 30 Pg C and 0.4 to 0.9 Pg N. The combined gaseous and fluvial peatland C loss estimated here adds 30 to 50% onto previous estimates of permafrost-thaw C losses, with southern permafrost regions being the most vulnerable.
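The stock and area estimates above imply simple areal densities. A back-of-envelope sketch (not from the paper) using only the central values quoted in the abstract, with uncertainties ignored:

```python
# Areal densities implied by the abstract's central estimates.
PG_TO_G = 1e15            # 1 Pg = 10^15 g
area_m2 = 3.7e6 * 1e6     # 3.7 million km^2 in m^2

c_density = 415 * PG_TO_G / area_m2    # g C per m^2
n_density = 10 * PG_TO_G / area_m2     # g N per m^2
sink_rate = 0.10 * PG_TO_G / area_m2   # g C per m^2 per year

print(f"~{c_density/1000:.0f} kg C m^-2 and ~{n_density/1000:.1f} kg N m^-2 on average")
print(f"mean C:N mass ratio ~{c_density/n_density:.0f}")
print(f"current sink ~{sink_rate:.0f} g C m^-2 yr^-1")
```

These are averages over the whole mapped peatland area; actual densities vary strongly with peat depth and permafrost status.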

Journal ArticleDOI
TL;DR: In this article, the authors propose that soil carbon persistence can be understood through the lens of decomposers as a result of functional complexity derived from the interplay between spatial and temporal variation of molecular diversity and composition, which suggests soil management should be based on constant care rather than one-time action to lock away carbon in soils.
Abstract: Soil organic carbon management has the potential to aid climate change mitigation through drawdown of atmospheric carbon dioxide. To be effective, such management must account for processes influencing carbon storage and re-emission at different space and time scales. Achieving this requires a conceptual advance in our understanding to link carbon dynamics from the scales at which processes occur to the scales at which decisions are made. Here, we propose that soil carbon persistence can be understood through the lens of decomposers as a result of functional complexity derived from the interplay between spatial and temporal variation of molecular diversity and composition. For example, co-location alone can determine whether a molecule is decomposed, with rapid changes in moisture leading to transport of organic matter and constraining the fitness of the microbial community, while greater molecular diversity may increase the metabolic demand of, and thus potentially limit, decomposition. This conceptual shift accounts for emergent behaviour of the microbial community and would enable soil carbon changes to be predicted without invoking recalcitrant carbon forms that have not been observed experimentally. Functional complexity as a driver of soil carbon persistence suggests soil management should be based on constant care rather than one-time action to lock away carbon in soils. Dynamic interactions between chemical and biological controls govern the stability of soil organic carbon and drive complex, emergent patterns in soil carbon persistence.

Journal ArticleDOI
TL;DR: In this paper, a bathymetric sill in Sherard Osborn Fjord, northwest Greenland shields Ryder Glacier from melting by warm Atlantic water found at the bottom of the fjord.
Abstract: The processes controlling advance and retreat of outlet glaciers in fjords draining the Greenland Ice Sheet remain poorly known, undermining assessments of their dynamics and associated sea-level rise in a warming climate. Mass loss of the Greenland Ice Sheet has increased six-fold over the last four decades, with discharge and melt from outlet glaciers comprising key components of this loss. Here we acquired oceanographic data and multibeam bathymetry in the previously uncharted Sherard Osborn Fjord in northwest Greenland where Ryder Glacier drains into the Arctic Ocean. Our data show that warmer subsurface water of Atlantic origin enters the fjord, but Ryder Glacier’s floating tongue at its present location is partly protected from the inflow by a bathymetric sill located in the innermost fjord. This reduces under-ice melting of the glacier, providing insight into Ryder Glacier’s dynamics and its vulnerability to inflow of Atlantic warmer water. A bathymetric sill in Sherard Osborn Fjord, northwest Greenland, shields Ryder Glacier from melting by warm Atlantic water found at the bottom of the fjord, according to high-resolution bathymetric mapping and oceanographic data.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, Ovsat Abdinov4  +2934 moreInstitutions (199)
TL;DR: In this article, a search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented, based on 139 fb$^{-1}$ of proton-proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ TeV.
Abstract: A search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented. The analysis is based on 139 fb$^{-1}$ of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ $\text {TeV}$. Three R-parity-conserving scenarios where the lightest neutralino is the lightest supersymmetric particle are considered: the production of chargino pairs with decays via either W bosons or sleptons, and the direct production of slepton pairs. The analysis is optimised for the first of these scenarios, but the results are also interpreted in the others. No significant deviations from the Standard Model expectations are observed and limits at 95% confidence level are set on the masses of relevant supersymmetric particles in each of the scenarios. For a massless lightest neutralino, masses up to 420 $\text {Ge}\text {V}$ are excluded for the production of the lightest-chargino pairs assuming W-boson-mediated decays and up to 1 $\text {TeV}$ for slepton-mediated decays, whereas for slepton-pair production masses up to 700 $\text {Ge}\text {V}$ are excluded assuming three generations of mass-degenerate sleptons.

Posted ContentDOI
Pascal Barbry1, Christoph Muus2, Christoph Muus3, Malte D Luecken, Gökcen Eraslan2, Avinash Waghray3, Graham Heimberg2, Lisa Sikkema, Yoshihiko Kobayashi4, Eeshit Dhaval Vaishnav5, Ayshwarya Subramanian2, Christopher Smilie2, Karthik A. Jagadeesh2, Elizabeth Thu Duong6, Evgenij Fiskin2, Elena Torlai Triglia2, Meshal Ansari, Peiwen Cai7, Brian M. Lin3, Justin Buchanan6, Sijia Chen8, Jian Shu2, Jian Shu5, Adam L. Haber3, Adam L. Haber2, Hattie Chung2, Daniel T. Montoro2, Taylor Adams9, Hananeh Aliee, J. Samuel10, Allon Zaneta Andrusivova11, Ilias Angelidis, Orr Ashenberg2, Kevin Bassler12, Christophe Bécavin1, Inbal Benhar3, Joseph Bergenstråhle11, Ludvig Bergenstråhle11, Liam Bolt13, Emelie Braun14, Linh T. Bui15, Mark Chaffin2, Evgeny Chichelnitskiy16, Joshua Chiou6, Thomas M. Conlon, Michael S. Cuoco2, Marie Deprez1, David Fischer, Astrid Gillich, Joshua Gould2, Minzhe Guo17, Austin J. Gutierrez15, Arun C. Habermann18, Tyler Harvey2, Peng He13, Xiaomeng Hou6, Xiaomeng Hou8, Lijuan Hu14, Alok Jaiswal2, Peiyong Jiang19, Theodoros Kapellos12, Christin S. Kuo, Ludvig Larsson11, Michael Leney-Greene2, Kyungtae Lim20, Monika Litviňuková21, Monika Litviňuková13, Ji Lu19, Leif S. Ludwig2, Wendy Luo2, Henrike Maatz21, Elo Madissoon13, Lira Mamanova13, Kasidet Manakongtreecheep2, Kasidet Manakongtreecheep3, Charles-Hugo Marquette1, Ian Mbano, Alexi McAdams22, Ross J. Metzger, Ahmad N. Nabhan, Sarah K. Nyquist10, Lolita Penland, Olivier Poirion6, Sergio Poli9, Cancan Qi23, Rachel Queen24, Daniel Reichart25, Daniel Reichart3, Ivan O. Rosas9, Jonas C. Schupp9, Rahul Sinha, Rene Sit, Kamil Slowikowski2, Kamil Slowikowski3, Michal Slyper2, Neal Smith2, Neal Smith3, Alex Sountoulidis26, Maximilian Strunz, Dawei Sun20, Carlos Talavera-López13, Peng Tan2, Jessica Tantivit2, Jessica Tantivit3, Kyle J. Travaglini, Nathan R. Tucker2, Katherine A. Vernon2, Katherine A. Vernon8, Marc Wadsworth10, Julia Waldman2, Xiuting Wang7, Wenjun Yan3, William Zhao7, Carly Ziegler10 
20 Apr 2020-bioRxiv
TL;DR: Differences in the cell type-specific expression of mediators of SARS-CoV-2 viral entry may be responsible for aspects of COVID-19 epidemiology and clinical course, and point to putative molecular pathways involved in disease susceptibility and pathogenesis.
Abstract: The COVID-19 pandemic, caused by the novel coronavirus SARS-CoV-2, creates an urgent need for identifying molecular mechanisms that mediate viral entry, propagation, and tissue pathology. Cell membrane bound angiotensin-converting enzyme 2 (ACE2) and associated proteases, transmembrane protease serine 2 (TMPRSS2) and cathepsin L (CTSL), were previously identified as mediators of SARS-CoV-2 cellular entry. Here, we assess the cell type-specific RNA expression of ACE2, TMPRSS2, and CTSL through an integrated analysis of 107 single-cell and single-nucleus RNA-seq studies, including 22 lung and airways datasets (16 unpublished), and 85 datasets from other diverse organs. Joint expression of ACE2 and the accessory proteases identifies specific subsets of respiratory epithelial cells as putative targets of viral infection in the nasal passages, airways, and alveoli. Cells that co-express ACE2 and proteases are also identified in cells from other organs, some of which have been associated with COVID-19 transmission or pathology, including gut enterocytes, corneal epithelial cells, cardiomyocytes, heart pericytes, olfactory sustentacular cells, and renal epithelial cells. Performing the first meta-analyses of scRNA-seq studies, we analyzed 1,176,683 cells from 282 nasal, airway, and lung parenchyma samples from 164 donors spanning fetal, childhood, adult, and elderly age groups, and associate increased levels of ACE2, TMPRSS2, and CTSL in specific cell types with increasing age, male gender, and smoking, all of which are epidemiologically linked to COVID-19 susceptibility and outcomes. Notably, there was a particularly low expression of ACE2 in the few young pediatric samples in the analysis. Further analysis reveals a gene expression program shared by ACE2+ TMPRSS2+ cells in nasal, lung, and gut tissues, including genes that may mediate viral entry, subtend key immune functions, and mediate epithelial-macrophage cross-talk.
Amongst these are IL6, its receptor and co-receptor, IL1R, TNF response pathways, and complement genes. Cell type specificity in the lung and airways and smoking effects were conserved in mice. Our analyses suggest that differences in the cell type-specific expression of mediators of SARS-CoV-2 viral entry may be responsible for aspects of COVID-19 epidemiology and clinical course, and point to putative molecular pathways involved in disease susceptibility and pathogenesis.

Journal ArticleDOI
20 Jan 2020
TL;DR: In this paper, a process-detailed, spatially explicit representation of four interlinked planetary boundaries (biosphere integrity, land-system change, freshwater use, nitrogen flows) and agricultural systems in an internally consistent model framework is presented.
Abstract: Global agriculture puts heavy pressure on planetary boundaries, posing the challenge to achieve future food security without compromising Earth system resilience. On the basis of process-detailed, spatially explicit representation of four interlinked planetary boundaries (biosphere integrity, land-system change, freshwater use, nitrogen flows) and agricultural systems in an internally consistent model framework, we here show that almost half of current global food production depends on planetary boundary transgressions. Hotspot regions, mainly in Asia, even face simultaneous transgression of multiple underlying local boundaries. If these boundaries were strictly respected, the present food system could provide a balanced diet (2,355 kcal per capita per day) for 3.4 billion people only. However, as we also demonstrate, transformation towards more sustainable production and consumption patterns could support 10.2 billion people within the planetary boundaries analysed. Key prerequisites are spatially redistributed cropland, improved water–nutrient management, food waste reduction and dietary changes. Agriculture transforms the Earth and risks crossing thresholds for a healthy planet. This study finds almost half of current food production crosses such boundaries, as for freshwater use, but that transformation towards more sustainable production and consumption could support 10.2 billion people.

Journal ArticleDOI
TL;DR: The second version of the coupled Norwegian Earth System Model (NorESM2) is presented and evaluated in this article. NorESM2 employs entirely different ocean and ocean biogeochemistry models than its CESM2 basis, and its atmosphere component includes a different module for aerosol physics and chemistry, including interactions with clouds and radiation.
Abstract: The second version of the coupled Norwegian Earth System Model (NorESM2) is presented and evaluated. NorESM2 is based on the second version of the Community Earth System Model (CESM2) and shares with CESM2 the computer code infrastructure and many Earth system model components. However, NorESM2 employs entirely different ocean and ocean biogeochemistry models. The atmosphere component of NorESM2 (CAM-Nor) includes a different module for aerosol physics and chemistry, including interactions with cloud and radiation; additionally, CAM-Nor includes improvements in the formulation of local dry and moist energy conservation, in local and global angular momentum conservation, and in the computations for deep convection and air–sea fluxes. The surface components of NorESM2 have minor changes in the albedo calculations and to land and sea-ice models. We present results from simulations with NorESM2 that were carried out for the sixth phase of the Coupled Model Intercomparison Project (CMIP6). Two versions of the model are used: one with lower (∼2°) atmosphere–land resolution and one with medium (∼1°) atmosphere–land resolution. The stability of the pre-industrial climate and the sensitivity of the model to abrupt and gradual quadrupling of CO2 are assessed, along with the ability of the model to simulate the historical climate under the CMIP6 forcings. Compared to observations and reanalyses, NorESM2 represents an improvement over previous versions of NorESM in most aspects. NorESM2 appears less sensitive to greenhouse gas forcing than its predecessors, with an estimated equilibrium climate sensitivity of 2.5 K in both resolutions on a 150-year time frame; however, this estimate increases with the time window and the climate sensitivity at equilibration is much higher. We also consider the model response to future scenarios as defined by selected Shared Socioeconomic Pathways (SSPs) from the Scenario Model Intercomparison Project defined under CMIP6.
Under the four scenarios (SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5), the warming in the period 2090–2099 compared to 1850–1879 reaches 1.3, 2.2, 3.0, and 3.9 K in NorESM2-LM, and 1.3, 2.1, 3.1, and 3.9 K in NorESM2-MM, values that are robustly similar across the two resolutions. NorESM2-LM shows a rather satisfactory evolution of recent sea-ice area. In NorESM2-LM, an ice-free Arctic Ocean is only avoided in the SSP1-2.6 scenario.
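The scenario warming values can be compared directly across the two resolutions. A minimal sketch (numbers copied from the abstract) quantifying how similar the two configurations are:

```python
# End-of-century warming (K, 2090-2099 vs 1850-1879) quoted in the abstract.
warming = {
    "SSP1-2.6": {"LM": 1.3, "MM": 1.3},
    "SSP2-4.5": {"LM": 2.2, "MM": 2.1},
    "SSP3-7.0": {"LM": 3.0, "MM": 3.1},
    "SSP5-8.5": {"LM": 3.9, "MM": 3.9},
}

# Largest resolution-dependent spread across the four scenarios.
max_diff = max(abs(v["LM"] - v["MM"]) for v in warming.values())
print(f"max LM-MM difference: {max_diff:.1f} K")
```

The largest difference between the two resolutions is 0.1 K, well within typical internal variability, supporting the abstract's "robustly similar" characterisation.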

Journal ArticleDOI
TL;DR: It is demonstrated that being male, having less individual income, lower education, and not being married each independently predict a higher risk of death from COVID-19 and from all other causes of death, while being an immigrant from a low- or middle-income country predicts a higher risk of death from COVID-19 but not from other causes.
Abstract: As global deaths from COVID-19 continue to rise, the world's governments, institutions, and agencies are still working toward an understanding of who is most at risk of death. In this study, data on all recorded COVID-19 deaths in Sweden up to May 7, 2020 are linked to high-quality and accurate individual-level background data from administrative registers of the total population. By means of individual-level survival analysis we demonstrate that being male, having less individual income, lower education, not being married all independently predict a higher risk of death from COVID-19 and from all other causes of death. Being an immigrant from a low- or middle-income country predicts higher risk of death from COVID-19 but not for all other causes of death. The main message of this work is that the interaction of the virus causing COVID-19 and its social environment exerts an unequal burden on the most disadvantaged members of society.