
Showing papers by "Colorado State University" published in 2021


Journal ArticleDOI
Richard J. Abbott, T. D. Abbott, Sheelu Abraham, Fausto Acernese, +1,428 more (155 institutions)
TL;DR: In this article, the population of 47 compact binary mergers detected with a false-alarm rate below 1 per year was analyzed; the authors found that some of these binaries were dynamically assembled, and that the BBH merger rate likely increases with redshift, but not faster than the star formation rate.
Abstract: We report on the population of 47 compact binary mergers detected with a false-alarm rate below 1 per year. … we find that a fraction greater than 0.01 are dynamically assembled. Third, we estimate merger rates, finding RBBH = 23.9 (+14.3, −8.6) Gpc^−3 yr^−1 for BBHs and RBNS = 320 (+490, −240) Gpc^−3 yr^−1 for binary neutron stars. We find that the BBH rate likely increases with redshift (85% credibility) but not faster than the star formation rate (86% credibility). Additionally, we examine recent exceptional events in the context of our population models, finding that the asymmetric masses of GW190412 and the high component masses of GW190521 are consistent with our models, but the low secondary mass of GW190814 makes it an outlier.
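Rates like these are quoted as a median with asymmetric credible bounds (median, +upper offset, −lower offset). A minimal sketch of unpacking that notation into an interval, using the BBH rate quoted above (the helper function is illustrative, not from the paper):

```python
# Unpack a median with asymmetric errors, e.g. R_BBH = 23.9 (+14.3, -8.6),
# into the corresponding credible-interval bounds.
def credible_bounds(median, plus, minus):
    return (median - minus, median + plus)

lo, hi = credible_bounds(23.9, plus=14.3, minus=8.6)
print(f"90% credible interval: [{lo:.1f}, {hi:.1f}] Gpc^-3 yr^-1")
```

The plus and minus offsets differ because the posterior distribution over the rate is skewed, so the interval is not symmetric about the median.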

468 citations


Journal ArticleDOI
Richard J. Abbott, T. D. Abbott, Sheelu Abraham, Fausto Acernese, +1,692 more (195 institutions)
TL;DR: In this article, the authors reported the observation of gravitational waves from two compact binary coalescences in LIGO's and Virgo's third observing run with properties consistent with neutron star-black hole (NSBH) binaries.
Abstract: We report the observation of gravitational waves from two compact binary coalescences in LIGO’s and Virgo’s third observing run with properties consistent with neutron star–black hole (NSBH) binaries. The two events are named GW200105_162426 and GW200115_042309, abbreviated as GW200105 and GW200115; the first was observed by LIGO Livingston and Virgo and the second by all three LIGO–Virgo detectors. The source of GW200105 has component masses 8.9 (+1.2, −1.5) M⊙ and 1.9 (+0.3, −0.2) M⊙, whereas the source of GW200115 has component masses 5.7 (+1.8, −2.1) M⊙ and 1.5 (+0.7, −0.3) M⊙ (all measurements quoted at the 90% credible level). The probability that the secondary’s mass is below the maximal mass of a neutron star is 89%–96% and 87%–98%, respectively, for GW200105 and GW200115, with the ranges arising from different astrophysical assumptions. The source luminosity distances are 280 (+110, −110) Mpc and 300 (+150, −100) Mpc, respectively. The magnitude of the primary spin of GW200105 is less than 0.23 at the 90% credible level, and its orientation is unconstrained. For GW200115, the primary spin has a negative spin projection onto the orbital angular momentum at 88% probability. We are unable to constrain the spin or tidal deformation of the secondary component for either event. We infer an NSBH merger rate density of 45 (+75, −33) Gpc^−3 yr^−1 when assuming that GW200105 and GW200115 are representative of the NSBH population, or 130 (+112, −69) Gpc^−3 yr^−1 under the assumption of a broader distribution of component masses.

374 citations


Journal ArticleDOI
TL;DR: This paper addresses the physical and emotional impacts of age-related illness and disability on individuals and society.
Affiliations: 1. School of Social Work, Bar Ilan University, Ramat Gan, Israel. 2. Department of Psychology, University of Toronto, Ontario, Canada. 3. Department of Human Development and Family Studies, Colorado State University, Fort Collins. 4. Social and Behavioral Sciences Department, Yale School of Public Health, New Haven, Connecticut. 5. Department of Psychology, North Carolina State University, Raleigh. 6. Department of Psychology, Friedrich-Schiller-University Jena, Germany. 7. German Centre of Gerontology, Berlin, Germany. 8. Network of Aging Research, Heidelberg University, Germany.

351 citations


Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4  +1335 moreInstitutions (144)
TL;DR: The data recorded by these instruments during their first and second observing runs are described, including the gravitational-wave strain arrays, released as time series sampled at 16384 Hz.

320 citations


Journal ArticleDOI
TL;DR: This paper devises, implements, and evaluates a technique, called SequenceR, for fixing bugs based on sequence-to-sequence learning on source code, which captures a wide range of repair operators without any domain-specific top-down design.
Abstract: This paper presents a novel end-to-end approach to program repair based on sequence-to-sequence learning. We devise, implement, and evaluate a technique, called SequenceR, for fixing bugs based on sequence-to-sequence learning on source code. This approach uses the copy mechanism to overcome the unlimited vocabulary problem that occurs with big code. Our system is data-driven; we train it on 35,578 samples, carefully curated from commits to open-source repositories. We evaluate SequenceR on 4,711 independent real bug fixes, as well as on the Defects4J benchmark used in program repair research. SequenceR is able to perfectly predict the fixed line for 950/4,711 testing samples, and to find correct patches for 14 bugs in the Defects4J benchmark. SequenceR captures a wide range of repair operators without any domain-specific top-down design.
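The copy mechanism mentioned above can be sketched in miniature. This is an illustrative toy, not SequenceR's actual model: the vocabulary, the example line, and the `decode` helper are invented for demonstration of why copying from the input handles out-of-vocabulary identifiers.

```python
# Illustrative sketch (not SequenceR's implementation): the "unlimited
# vocabulary" problem and how a copy mechanism sidesteps it.
vocab = {"if", "(", ")", "return", "null", "==", "!="}  # tiny fixed vocabulary

buggy_line = ["if", "(", "userCount", "==", "null", ")"]

# Without copying, project-specific identifiers fall outside the vocabulary.
encoded = [tok if tok in vocab else "<unk>" for tok in buggy_line]

# With a copy mechanism, each decoding step may either generate a vocabulary
# token ("gen") or copy a token from the input sequence by position ("copy").
def decode(steps, source):
    return [source[arg] if op == "copy" else arg for op, arg in steps]

# Hypothetical decoded repair: flip == to != and copy the identifier back.
fix = decode(
    [("gen", "if"), ("gen", "("), ("copy", 2),
     ("gen", "!="), ("gen", "null"), ("gen", ")")],
    buggy_line,
)
```

Without the copy pointer, `userCount` would be lost as `<unk>`; with it, the decoder can reproduce any identifier that appears in the buggy input.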

276 citations


Journal ArticleDOI
TL;DR: Human monoclonal antibody neutralization potency in vitro does not uniformly correlate with in vivo protection against SARS-CoV-2, but antibody combinations show superior efficacy compared with single antibodies.
Abstract: SARS-CoV-2, the causative agent of COVID-19, has been responsible for over 42 million infections and 1 million deaths since its emergence in December 2019. There are few therapeutic options and no approved vaccines. Here, we examine the properties of highly potent human monoclonal antibodies (hu-mAbs) in a Syrian hamster model of SARS-CoV-2 and in a mouse-adapted model of SARS-CoV-2 infection (SARS-CoV-2 MA). Antibody combinations were effective for prevention and in therapy when administered early. However, in vitro antibody neutralization potency did not uniformly correlate with in vivo protection, and some hu-mAbs were more protective in combination in vivo. Analysis of antibody Fc regions revealed that binding to activating Fc receptors contributes to optimal protection against SARS-CoV-2 MA. The data indicate that intact effector function can affect hu-mAb protective activity and that in vivo testing is required to establish optimal hu-mAb combinations for COVID-19 prevention.

273 citations


Journal ArticleDOI
TL;DR: In this paper, a quantitative health impact assessment for the year 2015 was performed to estimate the effect of air pollution exposure (PM2·5 and NO2) on natural-cause mortality for adult residents (aged ≥20 years) in 969 cities and 47 greater cities in Europe.

195 citations


Journal ArticleDOI
08 Jan 2021
TL;DR: In this article, Membrane distillation (MD) has been garnering increasing attention in research and development, since it has been proposed as a promising technology for desalinating hypersaline brine from various...
Abstract: Membrane distillation (MD) has been garnering increasing attention in research and development, since it has been proposed as a promising technology for desalinating hypersaline brine from various ...

184 citations


Journal ArticleDOI
TL;DR: In this paper, the authors synthesized 20 years of research to explain the interrelated processes that determine soil and plant responses to biochar and found that biochar can catalyze biotic and abiotic reactions, particularly in the rhizosphere, that increase nutrient supply and uptake by plants.
Abstract: We synthesized 20 years of research to explain the interrelated processes that determine soil and plant responses to biochar. The properties of biochar and its effects within agricultural ecosystems largely depend on feedstock and pyrolysis conditions. We describe three stages of reactions of biochar in soil: dissolution (1–3 weeks); reactive surface development (1–6 months); and aging (beyond 6 months). As biochar ages, it is incorporated into soil aggregates, protecting the biochar carbon and promoting the stabilization of rhizodeposits and microbial products. Biochar carbon persists in soil for hundreds to thousands of years. By increasing pH, porosity, and water availability, biochars can create favorable conditions for root development and microbial functions. Biochars can catalyze biotic and abiotic reactions, particularly in the rhizosphere, that increase nutrient supply and uptake by plants, reduce phytotoxins, stimulate plant development, and increase resilience to disease and environmental stressors. Meta-analyses found that, on average, biochars increase P availability by a factor of 4.6; decrease plant tissue concentration of heavy metals by 17%–39%; build soil organic carbon through negative priming by 3.8% (range −21% to +20%); and reduce non-CO2 greenhouse gas emissions from soil by 12%–50%. Meta-analyses show average crop yield increases of 10%–42% with biochar addition, with greatest increases in low-nutrient P-sorbing acidic soils (common in the tropics), and in sandy soils in drylands due to increase in nutrient retention and water holding capacity. Studies report a wide range of plant responses to biochars due to the diversity of biochars and contexts in which biochars have been applied. Crop yields increase strongly if site-specific soil constraints and nutrient and water limitations are mitigated by appropriate biochar formulations. 
Biochars can be tailored to address site constraints through feedstock selection, by modifying pyrolysis conditions, through pre- or post-production treatments, or co-application with organic or mineral fertilizers. We demonstrate how, when used wisely, biochar mitigates climate change and supports food security and the circular economy.

181 citations


Journal ArticleDOI
TL;DR: The primary areas in which population genomics approaches can be applied to wildlife conservation and management are reviewed, examples of how they have been used are highlighted, and recommendations for building on the progress that has been made are provided.
Abstract: Biodiversity is under threat worldwide. Over the past decade, the field of population genomics has developed across nonmodel organisms, and the results of this research have begun to be applied in conservation and management of wildlife species. Genomics tools can provide precise estimates of basic features of wildlife populations, such as effective population size, inbreeding, demographic history and population structure, that are critical for conservation efforts. Moreover, population genomics studies can identify particular genetic loci and variants responsible for inbreeding depression or adaptation to changing environments, allowing for conservation efforts to estimate the capacity of populations to evolve and adapt in response to environmental change and to manage for adaptive variation. While connections from basic research to applied wildlife conservation have been slow to develop, these connections are increasingly strengthening. Here we review the primary areas in which population genomics approaches can be applied to wildlife conservation and management, highlight examples of how they have been used, and provide recommendations for building on the progress that has been made in this field.

174 citations


Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, Sheelu Abraham, +1,273 more (140 institutions)
TL;DR: In this article, the first and second observing runs of the Advanced LIGO and Virgo detector network were used to obtain the first standard-siren measurement of the Hubble constant (H0).
Abstract: This paper presents the gravitational-wave measurement of the Hubble constant (H0) using the detections from the first and second observing runs of the Advanced LIGO and Virgo detector network. The presence of the transient electromagnetic counterpart of the binary neutron star GW170817 led to the first standard-siren measurement of H0. Here we additionally use binary black hole detections in conjunction with galaxy catalogs and report a joint measurement. Our updated measurement is H0 = … km s−1 Mpc−1 (68.3% of the highest density posterior interval with a flat-in-log prior), which is an improvement by a factor of 1.04 (about 4%) over the GW170817-only value of … km s−1 Mpc−1. A significant additional contribution currently comes from GW170814, a loud and well-localized detection from a part of the sky thoroughly covered by the Dark Energy Survey. With numerous detections anticipated over the upcoming years, an exhaustive understanding of other systematic effects is also going to become increasingly important. These results establish the path to cosmology using gravitational-wave observations with and without transient electromagnetic counterparts.

Journal ArticleDOI
Jens H. Kuhn, Scott Adkins, Daniela Alioto, S. V. Alkhovsky, +231 more (125 institutions)
TL;DR: The updated taxonomy of Negarnaviricota is presented, as now accepted by the ICTV, after the phylum was amended and emended in March 2020.
Abstract: In March 2020, following the annual International Committee on Taxonomy of Viruses (ICTV) ratification vote on newly proposed taxa, the phylum Negarnaviricota was amended and emended. At the genus rank, 20 new genera were added, two were deleted, one was moved, and three were renamed. At the species rank, 160 species were added, four were deleted, ten were moved and renamed, and 30 species were renamed. This article presents the updated taxonomy of Negarnaviricota as now accepted by the ICTV.

Journal ArticleDOI
01 Apr 2021
TL;DR: In this paper, the authors synthesize observational and modelling evidence to demonstrate emerging differences in dryland aridity dependent on the specific metric considered and place these climate-induced aridity changes in the context of exacerbated water scarcity driven by rapidly increasing anthropogenic needs for freshwater to support population growth and economic development.
Abstract: Drylands are an essential component of the Earth System and are among the most vulnerable to climate change. In this Review, we synthesize observational and modelling evidence to demonstrate emerging differences in dryland aridity dependent on the specific metric considered. Although warming heightens vapour pressure deficit and, thus, atmospheric demand for water in both the observations and the projections, these changes do not wholly propagate to exacerbate soil moisture and runoff deficits. Moreover, counter-intuitively, many arid ecosystems have exhibited significant greening and enhanced vegetation productivity since the 1980s. Such divergence between atmospheric and ecohydrological aridity changes can primarily be related to moisture limitations by dry soils and plant physiological regulations of evapotranspiration under elevated CO2. The latter process ameliorates water stress on plant growth and decelerates warming-enhanced water losses from soils, while simultaneously warming and drying the near-surface air. We place these climate-induced aridity changes in the context of exacerbated water scarcity driven by rapidly increasing anthropogenic needs for freshwater to support population growth and economic development. Under future warming, dryland ecosystems might respond non-linearly, caused by, for example, complex ecosystem–hydrology–human interactions and increased mortality risks from drought and heat stress, which is a foremost priority for future research. Estimates of global dryland changes are often conflicting. This Review discusses and quantifies observed and projected aridity changes, revealing divergent responses between atmospheric and ecohydrological metrics that can be explained by plant physiological responses to elevated CO2.
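The link between warming and atmospheric demand for water can be made concrete with a standard approximation. This sketch uses the Magnus/FAO-56 formula for saturation vapour pressure, which is an assumption of this example rather than the Review's own method; the temperatures and humidity are arbitrary illustrative values.

```python
import math

def vpd_kpa(temp_c, rel_humidity):
    """Vapour pressure deficit (kPa): saturation pressure times (1 - RH).

    Uses the Magnus/FAO-56 approximation for saturation vapour pressure.
    """
    e_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # kPa
    return e_sat * (1.0 - rel_humidity)

# At fixed relative humidity, a few degrees of warming raises VPD noticeably,
# because saturation vapour pressure grows roughly exponentially with T.
print(vpd_kpa(25.0, 0.4))  # ~1.9 kPa
print(vpd_kpa(28.0, 0.4))  # higher
```

This is why warming "heightens vapour pressure deficit" even when relative humidity is unchanged, as the abstract notes.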

Journal ArticleDOI
TL;DR: In this article, a comprehensive review of the field of microfluidic paper-based analytical devices (μPADs) is presented, highlighting fabrication methods available to date with their respective advantages and drawbacks, device designs and modifications to accommodate different assay needs, detection strategies, and the growing applications of μPAD.
Abstract: Microfluidic paper-based analytical devices (μPADs) have garnered significant interest as a promising analytical platform in the past decade. Compared with traditional microfluidics, μPADs present unique advantages, such as easy fabrication using established patterning methods, economical cost, ability to drive and manipulate flow without equipment, and capability of storing reagents for various applications. This Review aims to provide a comprehensive review of the field, highlighting fabrication methods available to date with their respective advantages and drawbacks, device designs and modifications to accommodate different assay needs, detection strategies, and the growing applications of μPADs. Finally, we discuss how the field needs to continue moving forward to realize its full potential.

Journal ArticleDOI
TL;DR: In this article, the authors note that calls have been made to address the mental illness public health crisis associated with the COVID-19 global health disaster, and seek to broaden these calls.
Abstract: As the COVID-19 global health disaster continues to unfold across the world, calls have been made to address the associated mental illness public crisis. The current paper seeks to broaden these ca...

Journal ArticleDOI
27 Jan 2021-Nature
TL;DR: In this paper, the authors present country-, process-, GHG- and product-specific inventories of global land-use emissions from 1961 to 2017, decompose key demographic, economic and technical drivers of emissions and assess the uncertainties and the sensitivity of results to different accounting assumptions.
Abstract: Historically, human uses of land have transformed and fragmented ecosystems1,2, degraded biodiversity3,4, disrupted carbon and nitrogen cycles5,6 and added prodigious quantities of greenhouse gases (GHGs) to the atmosphere7,8. However, in contrast to fossil-fuel carbon dioxide (CO2) emissions, trends and drivers of GHG emissions from land management and land-use change (together referred to as ‘land-use emissions’) have not been as comprehensively and systematically assessed. Here we present country-, process-, GHG- and product-specific inventories of global land-use emissions from 1961 to 2017, we decompose key demographic, economic and technical drivers of emissions and we assess the uncertainties and the sensitivity of results to different accounting assumptions. Despite steady increases in population (+144 per cent) and agricultural production per capita (+58 per cent), as well as smaller increases in emissions per land area used (+8 per cent), decreases in land required per unit of agricultural production (–70 per cent) kept global annual land-use emissions relatively constant at about 11 gigatonnes CO2-equivalent until 2001. After 2001, driven by rising emissions per land area, emissions increased by 2.4 gigatonnes CO2-equivalent per decade to 14.6 gigatonnes CO2-equivalent in 2017 (about 25 per cent of total anthropogenic GHG emissions). Although emissions intensity decreased in all regions, large differences across regions persist over time. The three highest-emitting regions (Latin America, Southeast Asia and sub-Saharan Africa) dominate global emissions growth from 1961 to 2017, driven by rapid and extensive growth of agricultural production and related land-use change. In addition, disproportionate emissions are related to certain products: beef and a few other red meats supply only 1 per cent of calories worldwide, but account for 25 per cent of all land-use emissions. 
Even where land-use change emissions are negligible or negative, total per capita CO2-equivalent land-use emissions remain near 0.5 tonnes per capita, suggesting the current frontier of mitigation efforts. Our results are consistent with existing knowledge—for example, on the role of population and economic growth and dietary choice—but provide additional insight into regional and sectoral trends. Trends in the rate of region- and sector-specific land-use greenhouse gas emissions in 1961–2017 show an acceleration of about 20% per decade after 2001.
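As a rough consistency check, the factor changes quoted in the abstract can be multiplied through a Kaya-style decomposition (emissions ≈ population × production per capita × land per unit production × emissions per land area). The decomposition form is an assumption of this sketch; the percentages come from the abstract.

```python
# 1961-2017 factor changes quoted in the abstract (approximate).
population      = 1 + 1.44   # +144%
prod_per_capita = 1 + 0.58   # +58%
land_per_prod   = 1 - 0.70   # -70%
emis_per_area   = 1 + 0.08   # +8%

# Multiply the factors: net change in total land-use emissions.
net_change = population * prod_per_capita * land_per_prod * emis_per_area
print(f"net change in land-use emissions: x{net_change:.2f}")
```

The product comes out to roughly x1.25, broadly consistent with the abstract's reported rise from about 11 to 14.6 gigatonnes CO2-equivalent (the quoted percentages are rounded, so the check is approximate).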

Journal ArticleDOI
Richard J. Abbott, T. D. Abbott, Sheelu Abraham, Fausto Acernese, +1,678 more (193 institutions)
TL;DR: In this article, the authors report results of a search for an isotropic gravitational-wave background (GWB) using data from Advanced LIGO's and Advanced Virgo's third observing run (O3) combined with upper limits from the earlier O1 and O2 runs.
Abstract: We report results of a search for an isotropic gravitational-wave background (GWB) using data from Advanced LIGO’s and Advanced Virgo’s third observing run (O3) combined with upper limits from the earlier O1 and O2 runs. Unlike in previous observing runs in the advanced detector era, we include Virgo in the search for the GWB. The results of the search are consistent with uncorrelated noise, and therefore we place upper limits on the strength of the GWB. We find that the dimensionless energy density ΩGW ≤ 5.8 × 10^−9 at the 95% credible level for a flat (frequency-independent) GWB, using a prior which is uniform in the log of the strength of the GWB, with 99% of the sensitivity coming from the band 20–76.6 Hz; ΩGW(f) ≤ 3.4 × 10^−9 at 25 Hz for a power-law GWB with a spectral index of 2/3 (consistent with expectations for compact binary coalescences), in the band 20–90.6 Hz; and ΩGW(f) ≤ 3.9 × 10^−10 at 25 Hz for a spectral index of 3, in the band 20–291.6 Hz. These upper limits improve over our previous results by a factor of 6.0 for a flat GWB, 8.8 for a spectral index of 2/3, and 13.1 for a spectral index of 3. We also search for a GWB arising from scalar and vector modes, which are predicted by alternative theories of gravity; we do not find evidence of these, and place upper limits on the strength of GWBs with these polarizations. We demonstrate that there is no evidence of correlated noise of magnetic origin by performing a Bayesian analysis that allows for the presence of both a GWB and an effective magnetic background arising from geophysical Schumann resonances. We compare our upper limits to a fiducial model for the GWB from the merger of compact binaries, updating the model to use the most recent data-driven population inference from the systems detected during O3a.
Finally, we combine our results with observations of individual mergers and show that, at design sensitivity, this joint approach may yield stronger constraints on the merger rate of binary black holes at z ≳ 2 than can be achieved with individually resolved mergers alone.
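The power-law spectra constrained above have the standard form ΩGW(f) = Ωref (f/fref)^α with a reference frequency of 25 Hz. A small sketch of evaluating such a spectrum (the function name and sample frequencies are illustrative choices, not from the paper):

```python
# Power-law gravitational-wave background spectrum:
#   Omega_GW(f) = Omega_ref * (f / f_ref) ** alpha,  f_ref = 25 Hz.
def omega_gw(f_hz, omega_ref, alpha, f_ref=25.0):
    return omega_ref * (f_hz / f_ref) ** alpha

# Upper limits quoted in the abstract at 25 Hz:
flat = omega_gw(25.0, 5.8e-9, 0.0)        # flat (alpha = 0) spectrum
cbc  = omega_gw(50.0, 3.4e-9, 2.0 / 3.0)  # compact-binary-like slope at 50 Hz
```

For a positive spectral index the bound grows with frequency, which is why each limit is quoted at the 25 Hz reference together with the frequency band it covers.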

Journal ArticleDOI
TL;DR: In this article, the authors describe the concept of performance-advantaged, bio-based polymers (PBPs), highlighting examples wherein superior performance is facilitated by the inherent chemical functionality of bio-derived feedstocks.
Abstract: Bio-based compounds with unique chemical functionality can be obtained through selective transformations of plant and other non-fossil, biogenic feedstocks for the development of new polymers to displace those produced from fossil carbon feedstocks. Although substantial efforts have been invested to produce bio-based polymers that are chemically identical to and directly replace those from petroleum, a long-pursued goal is to synthesize new, sustainable, bio-based polymers that either functionally replace or exhibit performance advantages relative to incumbent polymers. Owing to anthropogenic climate change and the environmental consequences of global plastics pollution, the need to realize a bio-based materials economy at scale is critical. To that end, in this Review we describe the concept of performance-advantaged, bio-based polymers (PBPs), highlighting examples wherein superior performance is facilitated by the inherent chemical functionality of bio-based feedstocks. We focus on PBPs with C–O and C–N inter-unit chemical bonds, as these are often readily accessible from bio-based feedstocks, which are heteroatom-rich relative to petroleum-derived feedstocks. Finally, we outline guiding principles and challenges to aid progress in the development of PBPs. Bio-based polymers that exhibit superior performance relative to petroleum-based incumbents can encourage industry adoption and offset fossil carbon use. This Review introduces performance-advantaged, bio-based polymers and highlights examples wherein superior performance is facilitated by the inherent chemical functionality of bio-based feedstocks.

Journal ArticleDOI
05 Aug 2021
TL;DR: In this article, a cluster analysis identified six modes of co-production: (1) researching solutions; (2) empowering voices; (3) brokering power; (4) reframing power; navigating differences and (6) reframeing agency.
Abstract: The promise of co-production to address complex sustainability challenges is compelling. Yet, co-production, the collaborative weaving of research and practice, encompasses diverse aims, terminologies and practices, with poor clarity over their implications. To explore this diversity, we systematically mapped differences in how 32 initiatives from 6 continents co-produce diverse outcomes for the sustainable development of ecosystems at local to global scales. We found variation in their purpose for utilizing co-production, understanding of power, approach to politics and pathways to impact. A cluster analysis identified six modes of co-production: (1) researching solutions; (2) empowering voices; (3) brokering power; (4) reframing power; (5) navigating differences and (6) reframing agency. No mode is ideal; each holds unique potential to achieve particular outcomes, but also poses unique challenges and risks. Our analysis provides a heuristic tool for researchers and societal actors to critically explore this diversity and effectively navigate trade-offs when co-producing sustainability. Co-production includes diverse aims, terminologies and practices. This study explores such diversity by mapping differences in how 32 initiatives from 6 continents co-produce diverse outcomes for the sustainable development of ecosystems at local to global scales.

Journal ArticleDOI
Tamsin L. Edwards1, Sophie Nowicki2, Sophie Nowicki3, Ben Marzeion4, Regine Hock5, Regine Hock6, Heiko Goelzer7, Heiko Goelzer8, Heiko Goelzer9, Helene Seroussi10, Nicolas C. Jourdain11, Donald Slater12, Donald Slater13, Donald Slater14, Fiona Turner1, Christopher J. Smith15, Christopher J. Smith16, Christine M. McKenna16, Erika Simon3, Ayako Abe-Ouchi17, Jonathan M. Gregory18, Jonathan M. Gregory19, Eric Larour10, William H. Lipscomb20, Antony J. Payne21, Andrew Shepherd16, Cécile Agosta22, Patrick Alexander23, Patrick Alexander24, Torsten Albrecht25, Brian Anderson26, Xylar Asay-Davis27, Andy Aschwanden5, Alice Barthel27, Andrew Bliss28, Reinhard Calov25, Christopher Chambers29, Nicolas Champollion11, Nicolas Champollion4, Youngmin Choi30, Youngmin Choi10, Richard I. Cullather3, J. K. Cuzzone10, Christophe Dumas22, Denis Felikson3, Denis Felikson31, Xavier Fettweis32, Koji Fujita33, Benjamin K. Galton-Fenzi34, Benjamin K. Galton-Fenzi35, Rupert Gladstone36, Nicholas R. Golledge26, Ralf Greve29, Tore Hattermann37, Tore Hattermann38, Matthew J. Hoffman27, Angelika Humbert4, Angelika Humbert39, Matthias Huss40, Matthias Huss41, Matthias Huss42, Philippe Huybrechts43, Walter W. Immerzeel8, Thomas Kleiner39, Philip Kraaijenbrink8, Sébastien Le clec'h43, Victoria Lee21, Gunter R. Leguy20, Christopher M. Little, Daniel P. Lowry44, Jan Hendrik Malles4, Daniel F. Martin45, Fabien Maussion46, Mathieu Morlighem30, James F. O’Neill1, Isabel Nias47, Isabel Nias3, Frank Pattyn7, Tyler Pelle30, Stephen Price27, Aurélien Quiquet22, Valentina Radić48, Ronja Reese25, David R. Rounce49, David R. Rounce5, Martin Rückamp39, Akiko Sakai33, Courtney Shafer45, Nicole Schlegel10, Sarah Shannon21, Robin S. Smith19, Fiammetta Straneo12, Sainan Sun7, Lev Tarasov50, Luke D. Trusel51, Jonas Van Breedam43, Roderik S. W. van de Wal8, Michiel R. van den Broeke8, Ricarda Winkelmann25, Ricarda Winkelmann52, Harry Zekollari, Cheng Zhao34, Tong Zhang53, Tong Zhang27, Thomas Zwinger54 
King's College London1, University at Buffalo2, Goddard Space Flight Center3, University of Bremen4, University of Alaska Fairbanks5, University of Oslo6, Université libre de Bruxelles7, Utrecht University8, Bjerknes Centre for Climate Research9, California Institute of Technology10, University of Grenoble11, University of California, San Diego12, University of St Andrews13, University of Edinburgh14, International Institute for Applied Systems Analysis15, University of Leeds16, University of Tokyo17, Met Office18, University of Reading19, National Center for Atmospheric Research20, University of Bristol21, Université Paris-Saclay22, Goddard Institute for Space Studies23, Columbia University24, Potsdam Institute for Climate Impact Research25, Victoria University of Wellington26, Los Alamos National Laboratory27, Colorado State University28, Hokkaido University29, University of California, Irvine30, Universities Space Research Association31, University of Liège32, Nagoya University33, University of Tasmania34, Australian Antarctic Division35, University of Lapland36, Norwegian Polar Institute37, University of Tromsø38, Alfred Wegener Institute for Polar and Marine Research39, University of Fribourg40, Swiss Federal Institute for Forest, Snow and Landscape Research41, ETH Zurich42, Vrije Universiteit Brussel43, GNS Science44, Lawrence Berkeley National Laboratory45, University of Innsbruck46, University of Liverpool47, University of British Columbia48, Carnegie Mellon University49, Memorial University of Newfoundland50, Pennsylvania State University51, University of Potsdam52, Beijing Normal University53, CSC – IT Center for Science54
06 May 2021-Nature
TL;DR: In this article, the authors estimate probability distributions for these projections under the new scenarios using statistical emulation of the ice sheet and glacier models, and find that limiting global warming to 1.5 degrees Celsius would halve the land ice contribution to twenty-first-century sea level rise, relative to current emissions pledges.
Abstract: The land ice contribution to global mean sea level rise has not yet been predicted1 using ice sheet and glacier models for the latest set of socio-economic scenarios, nor using coordinated exploration of uncertainties arising from the various computer models involved. Two recent international projects generated a large suite of projections using multiple models2,3,4,5,6,7,8, but primarily used previous-generation scenarios9 and climate models10, and could not fully explore known uncertainties. Here we estimate probability distributions for these projections under the new scenarios11,12 using statistical emulation of the ice sheet and glacier models. We find that limiting global warming to 1.5 degrees Celsius would halve the land ice contribution to twenty-first-century sea level rise, relative to current emissions pledges. The median decreases from 25 to 13 centimetres sea level equivalent (SLE) by 2100, with glaciers responsible for half the sea level contribution. The projected Antarctic contribution does not show a clear response to the emissions scenario, owing to uncertainties in the competing processes of increasing ice loss and snowfall accumulation in a warming climate. However, under risk-averse (pessimistic) assumptions, Antarctic ice loss could be five times higher, increasing the median land ice contribution to 42 centimetres SLE under current policies and pledges, with the 95th percentile projection exceeding half a metre even under 1.5 degrees Celsius warming. This would severely limit the possibility of mitigating future coastal flooding. Given this large range (between 13 centimetres SLE using the main projections under 1.5 degrees Celsius warming and 42 centimetres SLE using risk-averse projections under current pledges), adaptation planning for twenty-first-century sea level rise must account for a factor-of-three uncertainty in the land ice contribution until climate policies and the Antarctic response are further constrained.

Posted ContentDOI
TL;DR: In this paper, the authors critically evaluate the feasibility and likely benefits of focusing conservation on presumably functional genetic variation, and find that conserving genome-wide genetic variation is generally the best approach to prevent inbreeding depression and loss of adaptive potential from driving populations toward extinction.
Abstract: The unprecedented rate of extinction calls for efficient use of genetics to help conserve biodiversity. Several recent genomic and simulation-based studies have argued that the field of conservation biology has placed too much focus on conserving genome-wide genetic variation, and that the field should instead focus on managing the subset of functional genetic variation that is thought to affect fitness. Here, we critically evaluate the feasibility and likely benefits of this approach in conservation. We find that population genetics theory and empirical results show that conserving genome-wide genetic variation is generally the best approach to prevent inbreeding depression and loss of adaptive potential from driving populations toward extinction. Focusing conservation efforts on presumably functional genetic variation will be feasible only occasionally, will often be misleading, and will be counterproductive when prioritized over genome-wide genetic variation. Given the increasing rate of habitat loss and other environmental changes, failure to recognize the detrimental effects of lost genome-wide genetic variation on long-term population viability will only worsen the biodiversity crisis.
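The theoretical core of this argument can be made concrete with a textbook population-genetics result: under drift alone, expected heterozygosity decays as H_t = H_0 (1 - 1/(2Ne))^t, so genome-wide variation erodes fastest in small populations regardless of which loci happen to be functional. The sketch below uses illustrative parameter values, not data from the paper.

```python
def heterozygosity(h0, ne, generations):
    """Expected heterozygosity after t generations of genetic drift at
    effective population size Ne: H_t = H_0 * (1 - 1/(2*Ne))**t."""
    return h0 * (1.0 - 1.0 / (2.0 * ne)) ** generations

h0 = 0.30                    # illustrative starting genome-wide heterozygosity
for ne in (50, 500, 5000):   # contrasting effective population sizes
    h100 = heterozygosity(h0, ne, 100)
    print(f"Ne={ne:5d}: expected H after 100 generations = {h100:.3f}")
```

At Ne = 50 roughly two-thirds of the starting variation is expected to be lost within 100 generations, while at Ne = 5000 the loss is barely measurable, which is why maintaining large effective population sizes protects genome-wide variation wholesale.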

Journal ArticleDOI
TL;DR: In this paper, the authors quantify the health impact of childhood vaccination programs by estimating the deaths and disability-adjusted life-years (DALYs) averted by vaccination against ten pathogens in 98 low-income and middle-income countries between 2000 and 2030.
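As background for the headline metric, disability-adjusted life-years combine years of life lost to premature mortality (YLL) with years lived with disability (YLD): DALY = YLL + YLD, where YLL is deaths multiplied by remaining life expectancy at the age of death and YLD is cases multiplied by a disability weight and the duration of illness. The sketch below uses invented numbers for a hypothetical vaccine-preventable outbreak, not the paper's estimates.

```python
def dalys(deaths, life_exp_at_death, cases, disability_weight, duration_years):
    """DALYs = years of life lost (YLL) + years lived with disability (YLD)."""
    yll = deaths * life_exp_at_death                  # YLL = deaths x remaining life expectancy
    yld = cases * disability_weight * duration_years  # YLD = cases x weight x duration
    return yll + yld

# Invented illustrative inputs: 1,000 deaths at 60 years remaining life
# expectancy, plus 50,000 non-fatal cases of a moderate, short-lived illness.
averted = dalys(deaths=1_000, life_exp_at_death=60.0,
                cases=50_000, disability_weight=0.1, duration_years=0.5)
```

Mortality dominates the total here (60,000 of 62,500 DALYs), which is typical for the childhood infections targeted by the vaccination programs the study evaluates.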

Journal ArticleDOI
TL;DR: It is found that phylogenetic signal deep in the amphibian phylogeny varies greatly across loci, in a manner consistent with incomplete lineage sorting in the ancestral lineage of extant amphibians, and that crown and ordinal amphibian diversification occurred on a surprisingly young timescale relative to previous estimates.
Abstract: Molecular phylogenies have yielded strong support for many parts of the amphibian Tree of Life, but poor support for the resolution of deeper nodes, including relationships among families and orders. To clarify these relationships, we provide a phylogenomic perspective on amphibian relationships by developing a taxon-specific Anchored Hybrid Enrichment protocol targeting hundreds of conserved exons that are effective across the class. After obtaining data from 220 loci for 286 species (representing 94% of the families and 44% of the genera), we estimate a phylogeny for extant amphibians and identify gene tree-species tree conflict across the deepest branches of the amphibian phylogeny. We perform locus-by-locus genealogical interrogation of alternative topological hypotheses for amphibian monophyly, focusing on interordinal relationships. We find that phylogenetic signal deep in the amphibian phylogeny varies greatly across loci in a manner that is consistent with incomplete lineage sorting in the ancestral lineage of extant amphibians. Our results overwhelmingly support amphibian monophyly and a sister relationship between frogs and salamanders, consistent with the Batrachia hypothesis. Species tree analyses converge on a small set of topological hypotheses for the relationships among extant amphibian families. These results clarify several contentious portions of the amphibian Tree of Life, which, in conjunction with a set of vetted fossil calibrations, support a surprisingly young timescale for crown and ordinal amphibian diversification relative to previous estimates. More broadly, our study provides insight into the sources, magnitudes, and heterogeneity of support across loci in phylogenomic data sets. [Keywords: AIC; Amphibia; Batrachia; phylogeny; gene tree-species tree discordance; genomics; information theory.]
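The locus-by-locus genealogical interrogation can be caricatured in a few lines: for each locus, the competing topologies are scored (e.g., by log-likelihood or AIC) and the per-locus winners are tallied. The three-locus data set and log-likelihood values below are invented for illustration; with equal parameter counts per topology, ranking by AIC reduces to ranking by log-likelihood.

```python
from collections import Counter

# Invented per-locus log-likelihoods for the three possible resolutions of
# the frog/salamander/caecilian relationship (higher lnL is better).
loci_lnl = {
    "locus_01": {"frogs+salamanders": -1040.2, "salamanders+caecilians": -1043.9, "frogs+caecilians": -1045.1},
    "locus_02": {"frogs+salamanders": -2210.7, "salamanders+caecilians": -2209.8, "frogs+caecilians": -2214.0},
    "locus_03": {"frogs+salamanders": -980.3, "salamanders+caecilians": -985.6, "frogs+caecilians": -984.9},
}

def best_topology(lnl_by_topo):
    """Return the topology with the highest log-likelihood for one locus."""
    return max(lnl_by_topo, key=lnl_by_topo.get)

# Tally which topology each locus supports.
support = Counter(best_topology(lnl) for lnl in loci_lnl.values())
```

A real analysis interrogates hundreds of loci this way; a split tally across loci (rather than unanimity) is the signature of the gene tree-species tree discordance the paper attributes to incomplete lineage sorting.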

Journal ArticleDOI
TL;DR: Selective removal or enrichment of targeted solutes including micropollutants, valuable elements, and mineral scalants from complex aqueous matrices is both challenging and pivotal to the success o...
Abstract: Selective removal or enrichment of targeted solutes including micropollutants, valuable elements, and mineral scalants from complex aqueous matrices is both challenging and pivotal to the success o...

Journal ArticleDOI
B. Abi1, R. Acciarri2, M. A. Acero3, George Adamov4  +979 moreInstitutions (156)
TL;DR: Of the many potential beyond the Standard Model (BSM) topics DUNE will probe, this paper presents a selection of studies quantifying DUNE’s sensitivities to sterile neutrino mixing, heavy neutral leptons, non-standard interactions, CPT symmetry violation, Lorentz invariance violation, and other new physics topics that complement those at high-energy colliders and significantly extend the present reach.
Abstract: The Deep Underground Neutrino Experiment (DUNE) will be a powerful tool for a variety of physics topics. The high-intensity proton beams provide a large neutrino flux, sampled by a near detector system consisting of a combination of capable precision detectors, and by the massive far detector system located deep underground. This configuration sets up DUNE as a machine for discovery, as it enables opportunities not only to perform precision neutrino measurements that may uncover deviations from the present three-flavor mixing paradigm, but also to discover new particles and unveil new interactions and symmetries beyond those predicted in the Standard Model (SM). Of the many potential beyond the Standard Model (BSM) topics DUNE will probe, this paper presents a selection of studies quantifying DUNE’s sensitivities to sterile neutrino mixing, heavy neutral leptons, non-standard interactions, CPT symmetry violation, Lorentz invariance violation, neutrino trident production, dark matter from both beam induced and cosmogenic sources, baryon number violation, and other new physics topics that complement those at high-energy colliders and significantly extend the present reach.

Journal ArticleDOI
TL;DR: The devices provide a promising approach toward neurorobotics, human–machine interaction technologies, and scalable bionic systems with visual data storage/buffering and processing.
Abstract: Imprinting vision as memory is a core attribute of human cognitive learning. Fundamental to artificial intelligence systems are bioinspired neuromorphic vision components for the visible and invisible segments of the electromagnetic spectrum. Realization of a single imaging unit with a combination of in-built memory and signal processing capability is imperative to deploy efficient brain-like vision systems. However, the lack of a platform that can be fully controlled by light without the need to apply alternating polarity electric signals has hampered this technological advance. Here, a neuromorphic imaging element based on a fully light-modulated 2D semiconductor in a simple reconfigurable phototransistor structure is presented. This standalone device exhibits inherent characteristics that enable neuromorphic image pre-processing and recognition. Fundamentally, the unique photoresponse induced by oxidation-related defects in 2D black phosphorus (BP) is exploited to achieve visual memory, wavelength-selective multibit programming, and erasing functions, which allow in-pixel image pre-processing. Furthermore, all-optically driven neuromorphic computation is demonstrated by machine learning to classify numbers and recognize images with an accuracy of over 90%. The devices provide a promising approach toward neurorobotics, human-machine interaction technologies, and scalable bionic systems with visual data storage/buffering and processing.
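Separately from the device physics, the recognition task itself can be sketched in software: a minimal classifier trained on noisy copies of a few stored patterns, standing in for the optically programmed pixel states. Everything below (the 5x5 patterns, noise level, and nearest-centroid rule) is an invented illustration of the task, not the network used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three 5x5 binary "digit" prototypes play the role of stored photoresponse images.
prototypes = np.array([
    [[1,1,1,1,1],[1,0,0,0,1],[1,0,0,0,1],[1,0,0,0,1],[1,1,1,1,1]],  # "0"
    [[0,0,1,0,0],[0,1,1,0,0],[0,0,1,0,0],[0,0,1,0,0],[0,1,1,1,0]],  # "1"
    [[1,1,1,1,0],[0,0,0,1,0],[0,1,1,0,0],[1,0,0,0,0],[1,1,1,1,1]],  # "2"
], dtype=float)

def make_samples(n_per_class, noise=0.15):
    """Noisy copies of each prototype, flattened to vectors, plus integer labels."""
    xs, ys = [], []
    for label, proto in enumerate(prototypes):
        noisy = proto + noise * rng.standard_normal((n_per_class, 5, 5))
        xs.append(noisy.reshape(n_per_class, -1))
        ys.append(np.full(n_per_class, label))
    return np.vstack(xs), np.concatenate(ys)

def nearest_centroid(train_x, train_y, test_x):
    """Classify each test image by distance to the mean image of each class."""
    centroids = np.array([train_x[train_y == c].mean(axis=0) for c in np.unique(train_y)])
    d = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

train_x, train_y = make_samples(50)
test_x, test_y = make_samples(20)
accuracy = (nearest_centroid(train_x, train_y, test_x) == test_y).mean()
```

Even this trivial learning rule exceeds 90% accuracy on well-separated patterns; the paper's contribution is performing the sensing, memory, and pre-processing stages of such a pipeline within the device itself.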

Journal ArticleDOI
TL;DR: The primary design considerations of deal.II are outlined and some of the technical and social challenges and lessons learned in running a large community software project over the course of two decades are discussed.
Abstract: deal.II is a state-of-the-art finite element library focused on generality, dimension-independent programming, parallelism, and extensibility. Herein, we outline its primary design considerations and its sophisticated features such as distributed meshes, hp-adaptivity, support for complex geometries, and matrix-free algorithms. But deal.II is more than just a software library: It is also a diverse and worldwide community of developers and users, as well as an educational platform. We therefore also discuss some of the technical and social challenges and lessons learned in running a large community software project over the course of two decades.

Journal ArticleDOI
TL;DR: This article describes field campaigns between 2016 and 2018 that used in situ probes, radar, lidar, and other instruments to make comprehensive measurements of thermodynamics, surface radiation, cloud, precipitation, aerosol, cloud condensation nuclei (CCN), and ice nucleating particles over the cold waters of the Southern Ocean, and in the ubiquitous liquid and mixed-phase clouds common to this pristine environment.
Abstract: Weather and climate models are challenged by uncertainties and biases in simulating Southern Ocean (SO) radiative fluxes that trace to a poor understanding of cloud, aerosol, precipitation, and radiative processes, and their interactions. Projects between 2016 and 2018 used in situ probes, radar, lidar, and other instruments to make comprehensive measurements of thermodynamics, surface radiation, cloud, precipitation, aerosol, cloud condensation nuclei (CCN), and ice nucleating particles over the SO cold waters, and in ubiquitous liquid and mixed-phase clouds common to this pristine environment. Data including soundings were collected from the NSF-NCAR G-V aircraft flying north-south gradients south of Tasmania, at Macquarie Island, and on the R/V Investigator and RSV Aurora Australis. Synergistically these data characterize boundary layer and free troposphere environmental properties, and represent the most comprehensive data of this type available south of the oceanic polar front, in the cold sector of SO cyclones, and across seasons. Results show largely pristine environments with numerous small and few large aerosols above cloud, suggesting new particle formation and limited long-range transport from continents, high variability in CCN and cloud droplet concentrations, and ubiquitous supercooled water in thin, multilayered clouds, often with small-scale generating cells near cloud top. These observations demonstrate how cloud properties depend on aerosols while highlighting the importance of dynamics and turbulence that likely drive heterogeneity of cloud phase. Satellite retrievals confirmed low clouds were responsible for radiation biases. The combination of models and observations is examining how aerosols and meteorology couple to control SO water and energy budgets.

Journal ArticleDOI
TL;DR: Results suggest that plants may adjust their exudation patterns over the course of their different growth phases to help tailor microbial recruitment to meet increased nutrient demands during periods of faster growth.
Abstract: Although interactions between plants and microbes at the plant-soil interface are known to be important for plant nutrient acquisition, relatively little is known about how root exudates contribute to nutrient exchange over the course of plant development. In this study, root exudates from slow- and fast-growing stages of Arabidopsis thaliana plants were collected, chemically analysed and then applied to a sandy nutrient-depleted soil. We then tracked the impacts of these exudates on soil bacterial communities, soil nutrients (ammonium, nitrate, available phosphorus and potassium) and plant growth. Both pools of exudates shifted bacterial community structure. GeoChip analyses revealed increases in the functional gene potential of both exudate-treated soils, with similar responses observed for slow-growing and fast-growing plant exudate treatments. The fast-growing stage root exudates induced higher nutrient mineralization and enhanced plant growth as compared to treatments with slow-growing stage exudates and the control. These results suggest that plants may adjust their exudation patterns over the course of their different growth phases to help tailor microbial recruitment to meet increased nutrient demands during periods of faster growth.

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate how separating soil carbon into particulate and mineral-associated organic matter (POM and MAOM, respectively) aids in the understanding of its vulnerability to climate change and identification of carbon sequestration strategies.
Abstract: Soil carbon sequestration is seen as an effective means to draw down atmospheric CO2, but at the same time warming may accelerate the loss of extant soil carbon, so an accurate estimation of soil carbon stocks and their vulnerability to climate change is required. Here we demonstrate how separating soil carbon into particulate and mineral-associated organic matter (POM and MAOM, respectively) aids in the understanding of its vulnerability to climate change and identification of carbon sequestration strategies. By coupling European-wide databases with soil organic matter physical fractionation, we assessed the current geographical distribution of mineral topsoil carbon in POM and MAOM by land cover using a machine-learning approach. Further, using observed climate relationships, we projected the vulnerability of carbon in POM and MAOM to future climate change. Arable and coniferous forest soils contain the largest and most vulnerable carbon stocks when cumulated at the European scale. Although we show a lower carbon loss from mineral topsoils with climate change (2.5 ± 1.2 PgC by 2080) than in some previous predictions, we urge the implementation of coniferous forest management practices that increase plant inputs to soils to offset POM losses, and the adoption of best management practices to avert the loss of and to build up both POM and MAOM in arable soils. Particulate and mineral-associated soil organic carbon have different climate sensitivities and distributions in Europe, according to analyses of measurements of soil carbon fractions from 352 topsoils.
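The machine-learning mapping step can be illustrated generically: site covariates (climate variables, land cover, and so on) are used to predict a carbon stock at held-out locations, and predictive skill is checked on the holdout. The sketch below uses a simple k-nearest-neighbours regressor on synthetic data; the covariates, response function, and model choice are all invented for illustration and are not the method or data of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for site covariates: mean annual temperature (deg C)
# and precipitation (mm); the toy response declines with temperature and
# rises with moisture, loosely mimicking a topsoil carbon stock.
n = 300
temp = rng.uniform(-2, 18, n)
precip = rng.uniform(300, 1500, n)
carbon = 60.0 - 1.5 * temp + 0.02 * precip + rng.normal(0.0, 2.0, n)

# Standardize covariates so both contribute comparably to distances.
X = np.column_stack([(temp - temp.mean()) / temp.std(),
                     (precip - precip.mean()) / precip.std()])

def knn_predict(X_train, y_train, X_new, k=10):
    """Average the k nearest training sites (Euclidean distance in covariate space)."""
    d = np.linalg.norm(X_train[None, :, :] - X_new[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

# Hold out every fifth site and check predictive skill on the holdout.
test = np.arange(n) % 5 == 0
pred = knn_predict(X[~test], carbon[~test], X[test], k=10)
rmse = np.sqrt(np.mean((pred - carbon[test]) ** 2))
```

The same train/holdout logic underlies any covariate-based spatial mapping: the fitted model is only trusted to interpolate where the covariate space is well sampled, which is why the paper restricts projections to observed climate relationships.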