
Showing papers by "University of Illinois at Urbana–Champaign published in 2020"


Journal ArticleDOI
Theo Vos1, Theo Vos2, Theo Vos3, Stephen S Lim +2416 more · Institutions (246)
TL;DR: Global health has steadily improved over the past 30 years as measured by age-standardised DALY rates, and there has been a marked shift towards a greater proportion of burden due to YLDs from non-communicable diseases and injuries.

5,802 citations


Book
Georges Aad1, E. Abat2, Jalal Abdallah3, Jalal Abdallah4 +3029 more · Institutions (164)
23 Feb 2020
TL;DR: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper, where a brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.
Abstract: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.

3,111 citations


Journal ArticleDOI
TL;DR: The largest declines in risk exposure from 2010 to 2019 were among a set of risks that are strongly linked to social and economic development, including household air pollution; unsafe water, sanitation, and handwashing; and child growth failure.

3,059 citations


Journal ArticleDOI
28 Jan 2020-ACS Nano
TL;DR: Prominent authors from all over the world joined efforts to summarize the current state-of-the-art in understanding and using SERS, as well as to propose what can be expected in the near future, in terms of research, applications, and technological development.
Abstract: The discovery of the enhancement of Raman scattering by molecules adsorbed on nanostructured metal surfaces is a landmark in the history of spectroscopic and analytical techniques. Significant experimental and theoretical effort has been directed toward understanding the surface-enhanced Raman scattering (SERS) effect and demonstrating its potential in various types of ultrasensitive sensing applications in a wide variety of fields. In the 45 years since its discovery, SERS has blossomed into a rich area of research and technology, but additional efforts are still needed before it can be routinely used analytically and in commercial products. In this Review, prominent authors from around the world joined together to summarize the state of the art in understanding and using SERS and to predict what can be expected in the near future in terms of research, applications, and technological development. This Review is dedicated to SERS pioneer and our coauthor, the late Prof. Richard Van Duyne, whom we lost during the preparation of this article.

1,768 citations


Journal ArticleDOI
Pierre Friedlingstein1, Pierre Friedlingstein2, Michael O'Sullivan2, Matthew W. Jones3, Robbie M. Andrew, Judith Hauck, Are Olsen, Glen P. Peters, Wouter Peters4, Wouter Peters5, Julia Pongratz6, Julia Pongratz7, Stephen Sitch1, Corinne Le Quéré3, Josep G. Canadell8, Philippe Ciais9, Robert B. Jackson10, Simone R. Alin11, Luiz E. O. C. Aragão12, Luiz E. O. C. Aragão1, Almut Arneth, Vivek K. Arora, Nicholas R. Bates13, Nicholas R. Bates14, Meike Becker, Alice Benoit-Cattin, Henry C. Bittig, Laurent Bopp15, Selma Bultan7, Naveen Chandra16, Naveen Chandra17, Frédéric Chevallier9, Louise Chini18, Wiley Evans, Liesbeth Florentie4, Piers M. Forster19, Thomas Gasser20, Marion Gehlen9, Dennis Gilfillan, Thanos Gkritzalis21, Luke Gregor22, Nicolas Gruber22, Ian Harris23, Kerstin Hartung24, Kerstin Hartung7, Vanessa Haverd8, Richard A. Houghton25, Tatiana Ilyina6, Atul K. Jain26, Emilie Joetzjer27, Koji Kadono28, Etsushi Kato, Vassilis Kitidis29, Jan Ivar Korsbakken, Peter Landschützer6, Nathalie Lefèvre30, Andrew Lenton31, Sebastian Lienert32, Zhu Liu33, Danica Lombardozzi34, Gregg Marland35, Nicolas Metzl30, David R. Munro11, David R. Munro36, Julia E. M. S. Nabel6, S. Nakaoka16, Yosuke Niwa16, Kevin D. O'Brien37, Kevin D. O'Brien11, Tsuneo Ono, Paul I. Palmer, Denis Pierrot38, Benjamin Poulter, Laure Resplandy39, Eddy Robertson40, Christian Rödenbeck6, Jörg Schwinger, Roland Séférian27, Ingunn Skjelvan, Adam J. P. Smith3, Adrienne J. Sutton11, Toste Tanhua41, Pieter P. Tans11, Hanqin Tian42, Bronte Tilbrook43, Bronte Tilbrook31, Guido R. van der Werf44, N. Vuichard9, Anthony P. Walker45, Rik Wanninkhof38, Andrew J. Watson1, David R. Willis23, Andy Wiltshire40, Wenping Yuan46, Xu Yue47, Sönke Zaehle6 
University of Exeter1, École Normale Supérieure2, Norwich Research Park3, Wageningen University and Research Centre4, University of Groningen5, Max Planck Society6, Ludwig Maximilian University of Munich7, Commonwealth Scientific and Industrial Research Organisation8, Université Paris-Saclay9, Stanford University10, National Oceanic and Atmospheric Administration11, National Institute for Space Research12, Bermuda Institute of Ocean Sciences13, University of Southampton14, PSL Research University15, National Institute for Environmental Studies16, Japan Agency for Marine-Earth Science and Technology17, University of Maryland, College Park18, University of Leeds19, International Institute of Minnesota20, Flanders Marine Institute21, ETH Zurich22, University of East Anglia23, German Aerospace Center24, Woods Hole Research Center25, University of Illinois at Urbana–Champaign26, University of Toulouse27, Japan Meteorological Agency28, Plymouth Marine Laboratory29, University of Paris30, Hobart Corporation31, Oeschger Centre for Climate Change Research32, Tsinghua University33, National Center for Atmospheric Research34, Appalachian State University35, University of Colorado Boulder36, University of Washington37, Atlantic Oceanographic and Meteorological Laboratory38, Princeton University39, Met Office40, Leibniz Institute of Marine Sciences41, Auburn University42, University of Tasmania43, VU University Amsterdam44, Oak Ridge National Laboratory45, Sun Yat-sen University46, Nanjing University47
TL;DR: In this paper, the authors describe and synthesize data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere in a changing climate – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe and synthesize data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFOS) are based on energy statistics and cement production data, while emissions from land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2010–2019), EFOS was 9.6 ± 0.5 GtC yr−1 excluding the cement carbonation sink (9.4 ± 0.5 GtC yr−1 when the cement carbonation sink is included), and ELUC was 1.6 ± 0.7 GtC yr−1. For the same decade, GATM was 5.1 ± 0.02 GtC yr−1 (2.4 ± 0.01 ppm yr−1), SOCEAN 2.5 ± 0.6 GtC yr−1, and SLAND 3.4 ± 0.9 GtC yr−1, with a budget imbalance BIM of −0.1 GtC yr−1 indicating a near balance between estimated sources and sinks over the last decade. For the year 2019 alone, the growth in EFOS was only about 0.1 % with fossil emissions increasing to 9.9 ± 0.5 GtC yr−1 excluding the cement carbonation sink (9.7 ± 0.5 GtC yr−1 when cement carbonation sink is included), and ELUC was 1.8 ± 0.7 GtC yr−1, for total anthropogenic CO2 emissions of 11.5 ± 0.9 GtC yr−1 (42.2 ± 3.3 GtCO2). Also for 2019, GATM was 5.4 ± 0.2 GtC yr−1 (2.5 ± 0.1 ppm yr−1), SOCEAN was 2.6 ± 0.6 GtC yr−1, and SLAND was 3.1 ± 1.2 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 409.85 ± 0.1 ppm averaged over 2019. Preliminary data for 2020, accounting for the COVID-19-induced changes in emissions, suggest a decrease in EFOS relative to 2019 of about −7 % (median estimate) based on individual estimates from four studies of −6 %, −7 %, −7 % (−3 % to −11 %), and −13 %. Overall, the mean and trend in the components of the global carbon budget are consistently estimated over the period 1959–2019, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. Comparison of estimates from diverse approaches and observations shows (1) no consensus in the mean and trend in land-use change emissions over the last decade, (2) a persistent low agreement between the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent discrepancy between the different methods for the ocean sink outside the tropics, particularly in the Southern Ocean. 
This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding of the global carbon cycle compared with previous publications of this data set (Friedlingstein et al., 2019; Le Quéré et al., 2018b, a, 2016, 2015b, a, 2014, 2013). The data presented in this work are available at https://doi.org/10.18160/gcp-2020 (Friedlingstein et al., 2020).
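As a quick arithmetic check, the budget identity behind BIM can be evaluated directly from the decadal means quoted above (a minimal sketch using the rounded abstract values, which is why the residual differs slightly from the paper's unrounded −0.1 GtC yr−1):

```python
# Global carbon budget identity: B_IM = (E_FOS + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
# Decadal (2010-2019) means quoted in the abstract, in GtC/yr.
E_FOS = 9.6    # fossil CO2 emissions (excluding the cement carbonation sink)
E_LUC = 1.6    # land-use change emissions
G_ATM = 5.1    # atmospheric growth rate
S_OCEAN = 2.5  # ocean sink
S_LAND = 3.4   # terrestrial sink

B_IM = (E_FOS + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
print(f"Budget imbalance: {B_IM:+.1f} GtC/yr")  # +0.2 with rounded inputs;
# the paper's unrounded estimates give -0.1 GtC/yr, i.e. near balance.
```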

1,764 citations


Journal ArticleDOI
TL;DR: The main features of NAMD are reviewed, including the variety of options offered by NAMD for enhanced-sampling simulations aimed at determining free-energy differences of either alchemical or geometrical transformations and their applicability to specific problems.
Abstract: NAMD is a molecular dynamics program designed for high-performance simulations of very large biological objects on CPU- and GPU-based architectures. NAMD offers scalable performance on petascale parallel supercomputers consisting of hundreds of thousands of cores, as well as on inexpensive commodity clusters commonly found in academic environments. It is written in C++ and leans on Charm++ parallel objects for optimal performance on low-latency architectures. NAMD is a versatile, multipurpose code that gathers state-of-the-art algorithms to carry out simulations in apt thermodynamic ensembles, using the widely popular CHARMM, AMBER, OPLS, and GROMOS biomolecular force fields. Here, we review the main features of NAMD that allow both equilibrium and enhanced-sampling molecular dynamics simulations with numerical efficiency. We describe the underlying concepts utilized by NAMD and their implementation, most notably for handling long-range electrostatics; controlling the temperature, pressure, and pH; applying external potentials on tailored grids; leveraging massively parallel resources in multiple-copy simulations; and hybrid quantum-mechanical/molecular-mechanical descriptions. We detail the variety of options offered by NAMD for enhanced-sampling simulations aimed at determining free-energy differences of either alchemical or geometrical transformations and outline their applicability to specific problems. Last, we discuss the roadmap for the development of NAMD and our current efforts toward achieving optimal performance on GPU-based architectures, for pushing back the limitations that have prevented biologically realistic billion-atom objects from being fruitfully simulated, and for making large-scale simulations less expensive and easier to set up, run, and analyze. NAMD is distributed free of charge with its source code at www.ks.uiuc.edu.

1,215 citations


Journal ArticleDOI
B. P. Abbott1, R. Abbott1, T. D. Abbott2, Sheelu Abraham3 +1271 more · Institutions (145)
TL;DR: On 2019 April 25, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9; the Virgo detector was also taking data that, due to a low signal-to-noise ratio, did not contribute to detection but were used for subsequent parameter estimation, as discussed by the authors.
Abstract: On 2019 April 25, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9. The Virgo detector was also taking data that did not contribute to detection due to a low signal-to-noise ratio, but were used for subsequent parameter estimation. The 90% credible intervals for the component masses range from 1.12 to 2.52 M⊙ (1.46 to 1.87 M⊙ if we restrict the dimensionless component spin magnitudes to be smaller than 0.05). These mass parameters are consistent with the individual binary components being neutron stars. However, both the source-frame chirp mass and the total mass of this system are significantly larger than those of any other known binary neutron star (BNS) system. The possibility that one or both binary components of the system are black holes cannot be ruled out from gravitational-wave data. We discuss possible origins of the system based on its inconsistency with the known Galactic BNS population. Under the assumption that the signal was produced by a BNS coalescence, the local rate of neutron star mergers is updated to 250–2810 Gpc−3 yr−1.

1,189 citations


Journal ArticleDOI
TL;DR: This Consensus Statement outlines the definition and scope of the term ‘synbiotics’ as determined by an expert panel convened by the International Scientific Association for Probiotics and Prebiotics in May 2019 and explores the levels of evidence, safety, effects upon targets and implications for stakeholders of the synbiotic concept.
Abstract: In May 2019, the International Scientific Association for Probiotics and Prebiotics (ISAPP) convened a panel of nutritionists, physiologists and microbiologists to review the definition and scope of synbiotics. The panel updated the definition of a synbiotic to “a mixture comprising live microorganisms and substrate(s) selectively utilized by host microorganisms that confers a health benefit on the host”. The panel concluded that defining synbiotics as simply a mixture of probiotics and prebiotics could suppress the innovation of synbiotics that are designed to function cooperatively. Requiring that each component must meet the evidence and dose requirements for probiotics and prebiotics individually could also present an obstacle. Rather, the panel clarified that a complementary synbiotic, which has not been designed so that its component parts function cooperatively, must be composed of a probiotic plus a prebiotic, whereas a synergistic synbiotic does not need to be so. A synergistic synbiotic is a synbiotic for which the substrate is designed to be selectively utilized by the co-administered microorganisms. This Consensus Statement further explores the levels of evidence (existing and required), safety, effects upon targets and implications for stakeholders of the synbiotic concept. Gut microbiota can be manipulated to benefit host health, including the use of probiotics, prebiotics and synbiotics. This Consensus Statement outlines the definition and scope of the term ‘synbiotics’ as determined by an expert panel convened by the International Scientific Association for Probiotics and Prebiotics in May 2019.

953 citations


Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4 +1334 more · Institutions (150)
TL;DR: In this paper, the authors reported the observation of a compact binary coalescence involving a 22.2–24.3 M⊙ black hole and a compact object with a mass of 2.50–2.67 M⊙ (all measurements quoted at the 90% credible level). The gravitational-wave signal, GW190814, was observed during LIGO's and Virgo's third observing run on 2019 August 14 at 21:10:39 UTC and has a signal-to-noise ratio of 25 in the three-detector network.
Abstract: We report the observation of a compact binary coalescence involving a 22.2–24.3 M⊙ black hole and a compact object with a mass of 2.50–2.67 M⊙ (all measurements quoted at the 90% credible level). The gravitational-wave signal, GW190814, was observed during LIGO's and Virgo's third observing run on 2019 August 14 at 21:10:39 UTC and has a signal-to-noise ratio of 25 in the three-detector network. The source was localized to 18.5 deg2 at a distance of ${241}_{-45}^{+41}$ Mpc; no electromagnetic counterpart has been confirmed to date. The source has the most unequal mass ratio yet measured with gravitational waves, ${0.112}_{-0.009}^{+0.008}$, and its secondary component is either the lightest black hole or the heaviest neutron star ever discovered in a double compact-object system. The dimensionless spin of the primary black hole is tightly constrained to ≤0.07. Tests of general relativity reveal no measurable deviations from the theory, and its prediction of higher-multipole emission is confirmed at high confidence. We estimate a merger rate density of 1–23 Gpc−3 yr−1 for the new class of binary coalescence sources that GW190814 represents. Astrophysical models predict that binaries with mass ratios similar to this event can form through several channels, but are unlikely to have formed in globular clusters. However, the combination of mass ratio, component masses, and the inferred merger rate for this event challenges all current models of the formation and mass distribution of compact-object binaries.

913 citations


Journal ArticleDOI
Jens Kattge1, Gerhard Bönisch2, Sandra Díaz3, Sandra Lavorel +751 more · Institutions (314)
TL;DR: The extent of the trait data compiled in TRY is evaluated and emerging patterns of data coverage and representativeness are analyzed to conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements.
Abstract: Plant traits-the morphological, anatomical, physiological, biochemical and phenological characteristics of plants-determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits-almost complete coverage for 'plant growth form'. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait-environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.

882 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Frederico Arroja4 +251 more · Institutions (72)
TL;DR: In this paper, the authors present the cosmological legacy of the Planck satellite, which provides the strongest constraints on the parameters of the standard cosmology model and some of the tightest limits available on deviations from that model.
Abstract: The European Space Agency’s Planck satellite, which was dedicated to studying the early Universe and its subsequent evolution, was launched on 14 May 2009. It scanned the microwave and submillimetre sky continuously between 12 August 2009 and 23 October 2013, producing deep, high-resolution, all-sky maps in nine frequency bands from 30 to 857 GHz. This paper presents the cosmological legacy of Planck, which currently provides our strongest constraints on the parameters of the standard cosmological model and some of the tightest limits available on deviations from that model. The 6-parameter ΛCDM model continues to provide an excellent fit to the cosmic microwave background data at high and low redshift, describing the cosmological information in over a billion map pixels with just six parameters. With 18 peaks in the temperature and polarization angular power spectra constrained well, Planck measures five of the six parameters to better than 1% (simultaneously), with the best-determined parameter (θ*) now known to 0.03%. We describe the multi-component sky as seen by Planck, the success of the ΛCDM model, and the connection to lower-redshift probes of structure formation. We also give a comprehensive summary of the major changes introduced in this 2018 release. The Planck data, alone and in combination with other probes, provide stringent constraints on our models of the early Universe and the large-scale structure within which all astrophysical objects form and evolve. We discuss some lessons learned from the Planck mission, and highlight areas ripe for further experimental advances.

Journal ArticleDOI
R. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4 +1332 more · Institutions (150)
TL;DR: It is inferred that the primary black hole mass lies within the gap produced by (pulsational) pair-instability supernova processes, with only a 0.32% probability of being below 65 M⊙, and that the merger remnant can be considered an intermediate mass black hole (IMBH).
Abstract: On May 21, 2019 at 03:02:29 UTC Advanced LIGO and Advanced Virgo observed a short duration gravitational-wave signal, GW190521, with a three-detector network signal-to-noise ratio of 14.7, and an estimated false-alarm rate of 1 in 4900 yr using a search sensitive to generic transients. If GW190521 is from a quasicircular binary inspiral, then the detected signal is consistent with the merger of two black holes with masses of 85_{-14}^{+21} M_{⊙} and 66_{-18}^{+17} M_{⊙} (90% credible intervals). We infer that the primary black hole mass lies within the gap produced by (pulsational) pair-instability supernova processes, with only a 0.32% probability of being below 65 M_{⊙}. We calculate the mass of the remnant to be 142_{-16}^{+28} M_{⊙}, which can be considered an intermediate mass black hole (IMBH). The luminosity distance of the source is 5.3_{-2.6}^{+2.4} Gpc, corresponding to a redshift of 0.82_{-0.34}^{+0.28}. The inferred rate of mergers similar to GW190521 is 0.13_{-0.11}^{+0.30} Gpc^{-3} yr^{-1}.

Journal ArticleDOI
TL;DR: Drawing on a survey of more than 5,800 small businesses, insight is provided into the economic impact of coronavirus disease 2019 (COVID-19) on small businesses and on businesses’ expectations about the longer-term impact of COVID-19.
Abstract: To explore the impact of coronavirus disease 2019 (COVID-19) on small businesses, we conducted a survey of more than 5,800 small businesses between March 28 and April 4, 2020. Several themes emerged. First, mass layoffs and closures had already occurred-just a few weeks into the crisis. Second, the risk of closure was negatively associated with the expected length of the crisis. Moreover, businesses had widely varying beliefs about the likely duration of COVID-related disruptions. Third, many small businesses are financially fragile: The median business with more than $10,000 in monthly expenses had only about 2 wk of cash on hand at the time of the survey. Fourth, the majority of businesses planned to seek funding through the Coronavirus Aid, Relief, and Economic Security (CARES) Act. However, many anticipated problems with accessing the program, such as bureaucratic hassles and difficulties establishing eligibility. Using experimental variation, we also assess take-up rates and business resilience effects for loans relative to grants-based programs.

Journal ArticleDOI
Gilberto Pastorello1, Carlo Trotta2, E. Canfora2, Housen Chu1 +300 more · Institutions (119)
TL;DR: The FLUXNET2015 dataset provides ecosystem-scale data on CO2, water, and energy exchange between the biosphere and the atmosphere, and other meteorological and biological measurements, from 212 sites around the globe, and is detailed in this paper.
Abstract: The FLUXNET2015 dataset provides ecosystem-scale data on CO2, water, and energy exchange between the biosphere and the atmosphere, and other meteorological and biological measurements, from 212 sites around the globe (over 1500 site-years, up to and including year 2014). These sites, independently managed and operated, voluntarily contributed their data to create global datasets. Data were quality controlled and processed using uniform methods, to improve consistency and intercomparability across sites. The dataset is already being used in a number of applications, including ecophysiology studies, remote sensing studies, and development of ecosystem and Earth system models. FLUXNET2015 includes derived-data products, such as gap-filled time series, ecosystem respiration and photosynthetic uptake estimates, estimation of uncertainties, and metadata about the measurements, presented for the first time in this paper. In addition, 206 of these sites are for the first time distributed under a Creative Commons (CC-BY 4.0) license. This paper details this enhanced dataset and the processing methods, now made available as open-source codes, making the dataset more accessible, transparent, and reproducible.

Proceedings Article
30 Apr 2020
TL;DR: It is shown that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.
Abstract: The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but all of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability at little to no cost by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.
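To make the decoupling concrete, the simplest classifier-only adjustment in this family, τ-normalization of the classifier weights after standard instance-balanced training, fits in a few lines (a PyTorch sketch; the feature dimension, class count, and τ value below are illustrative placeholders, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

def tau_normalize_classifier(fc: nn.Linear, tau: float) -> None:
    """Rescale each class weight vector w_i to w_i / ||w_i||**tau.

    Stage 1: train backbone + classifier jointly with plain instance-balanced
    sampling. Stage 2: freeze the representation and adjust only the
    classifier, here by weight renormalization (no retraining required).
    """
    with torch.no_grad():
        norms = fc.weight.norm(dim=1, keepdim=True).clamp_min(1e-12)
        fc.weight.div_(norms.pow(tau))

# Illustrative 2048-d features and 1000 classes; tau is tuned on a val split.
fc = nn.Linear(2048, 1000, bias=False)
tau_normalize_classifier(fc, tau=0.9)
```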

Journal ArticleDOI
TL;DR: The primary recommendations consisted of eliminating routine monitoring of serum peak concentrations, emphasizing a ratio of area under the curve over 24 hours to minimum inhibitory concentration (AUC/MIC) of ≥400 as the primary PK/PD predictor of vancomycin activity, and promoting serum trough concentrations of 15 to 20 mg/L as a surrogate marker for the optimal vancomycin AUC/MIC if the MIC was ≤1 mg/L in patients with normal renal function.
Abstract: Recent clinical data on vancomycin pharmacokinetics and pharmacodynamics suggest a reevaluation of current dosing and monitoring recommendations. The previous 2009 vancomycin consensus guidelines recommend trough monitoring as a surrogate marker for the target area under the curve over 24 hours to minimum inhibitory concentration (AUC/MIC). However, recent data suggest that trough monitoring is associated with higher nephrotoxicity. This document is an executive summary of the new vancomycin consensus guidelines for vancomycin dosing and monitoring. It was developed by the American Society of Health-System Pharmacists, the Infectious Diseases Society of America, the Pediatric Infectious Diseases Society, and the Society of Infectious Diseases Pharmacists vancomycin consensus guidelines committee. These consensus guidelines recommend an AUC/MIC ratio of 400-600 mg*hour/L (assuming a broth microdilution MIC of 1 mg/L) to achieve clinical efficacy and ensure safety for patients being treated for serious methicillin-resistant Staphylococcus aureus infections.
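For orientation, the recommended AUC/MIC target can be sanity-checked with textbook steady-state pharmacokinetics, where AUC over 24 hours equals the total daily dose divided by vancomycin clearance (a hedged sketch; the dose and clearance below are illustrative numbers, not dosing recommendations from these guidelines):

```python
def auc24_over_mic(daily_dose_mg: float, clearance_l_per_h: float,
                   mic_mg_per_l: float) -> float:
    """Steady-state AUC0-24 (mg*h/L) divided by MIC, using AUC = dose / CL."""
    auc24 = daily_dose_mg / clearance_l_per_h
    return auc24 / mic_mg_per_l

# Illustrative patient: 1000 mg IV every 12 h (2000 mg/day),
# vancomycin clearance 4 L/h, broth microdilution MIC 1 mg/L.
ratio = auc24_over_mic(2000, 4.0, 1.0)
print(f"AUC24/MIC = {ratio:.0f}")  # 500 -> within the 400-600 target window
```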

Posted ContentDOI
Arang Rhie1, Shane A. McCarthy2, Olivier Fedrigo3, Joana Damas4, Giulio Formenti3, Sergey Koren1, Marcela Uliano-Silva2, William Chow2, Arkarachai Fungtammasan, Gregory Gedman3, Lindsey J. Cantin3, Françoise Thibaud-Nissen1, Leanne Haggerty5, Chul Hee Lee6, Byung June Ko6, J. H. Kim6, Iliana Bista2, Michelle Smith2, Bettina Haase3, Jacquelyn Mountcastle3, Sylke Winkler7, Sadye Paez3, Jason T. Howard8, Sonja C. Vernes7, Tanya M. Lama9, Frank Grützner10, Wesley C. Warren11, Christopher N. Balakrishnan12, Dave W Burt13, Jimin George14, Matthew T. Biegler3, David Iorns15, Andrew Digby, Daryl Eason, Taylor Edwards16, Mark Wilkinson17, George F. Turner18, Axel Meyer19, Andreas F. Kautt19, Paolo Franchini19, H. William Detrich20, Hannes Svardal21, Maximilian Wagner22, Gavin J. P. Naylor23, Martin Pippel7, Milan Malinsky2, Mark Mooney, Maria Simbirsky, Brett T. Hannigan, Trevor Pesout24, Marlys L. Houck, Ann C Misuraca, Sarah B. Kingan25, Richard Hall25, Zev N. Kronenberg25, Jonas Korlach25, Ivan Sović25, Christopher Dunn25, Zemin Ning2, Alex Hastie, Joyce V. Lee, Siddarth Selvaraj, Richard E. Green24, Nicholas H. Putnam, Jay Ghurye26, Erik Garrison24, Ying Sims2, Joanna Collins2, Sarah Pelan2, James Torrance2, Alan Tracey2, Jonathan Wood2, Dengfeng Guan27, Sarah E. London28, David F. Clayton14, Claudio V. Mello29, Samantha R. Friedrich29, Peter V. Lovell29, Ekaterina Osipova7, Farooq O. Al-Ajli30, Simona Secomandi31, Heebal Kim6, Constantina Theofanopoulou3, Yang Zhou32, Robert S. Harris33, Kateryna D. Makova33, Paul Medvedev33, Jinna Hoffman1, Patrick Masterson1, Karen Clark1, Fergal J. Martin5, Kevin L. Howe5, Paul Flicek5, Brian P. Walenz1, Woori Kwak, Hiram Clawson24, Mark Diekhans24, Luis R Nassar24, Benedict Paten24, Robert H. S. Kraus19, Harris A. Lewin4, Andrew J. Crawford34, M. Thomas P. Gilbert32, Guojie Zhang32, Byrappa Venkatesh35, Robert W. Murphy36, Klaus-Peter Koepfli37, Beth Shapiro24, Warren E. Johnson37, Federica Di Palma38, Tomas Marques-Bonet39, Emma C. Teeling40, Tandy Warnow41, Jennifer A. Marshall Graves42, Oliver A. Ryder43, David Haussler24, Stephen J. O'Brien44, Kerstin Howe2, Eugene W. Myers45, Richard Durbin2, Adam M. Phillippy1, Erich D. Jarvis3 
23 May 2020-bioRxiv
TL;DR: The authors embark on the Vertebrate Genomes Project (VGP), an effort to generate high-quality, complete reference genomes for all ~70,000 extant vertebrate species and to help enable a new era of discovery across the life sciences.
Abstract: High-quality and complete reference genome assemblies are fundamental for the application of genomics to biology, disease, and biodiversity conservation. However, such assemblies are only available for a few non-microbial species. To address this issue, the international Genome 10K (G10K) consortium has worked over a five-year period to evaluate and develop cost-effective methods for assembling the most accurate and complete reference genomes to date. Here we summarize these developments, introduce a set of quality standards, and present lessons learned from sequencing and assembling 16 species representing major vertebrate lineages (mammals, birds, reptiles, amphibians, teleost fishes and cartilaginous fishes). We confirm that long-read sequencing technologies are essential for maximizing genome quality and that unresolved complex repeats and haplotype heterozygosity are major sources of error in assemblies. Our new assemblies identify and correct substantial errors in some of the best historical reference genomes. Adopting these lessons, we have embarked on the Vertebrate Genomes Project (VGP), an effort to generate high-quality, complete reference genomes for all ~70,000 extant vertebrate species and help enable a new era of discovery across the life sciences.

Journal ArticleDOI
28 May 2020-ACS Nano
TL;DR: A colorimetric assay based on gold nanoparticles (AuNPs) capped with suitably designed thiol-modified antisense oligonucleotides (ASOs) specific for the N-gene (nucleocapsid phosphoprotein) of SARS-CoV-2 could be used to diagnose positive COVID-19 cases within 10 min from isolated RNA samples.
Abstract: The current outbreak of the pandemic coronavirus disease 2019 (COVID-19) caused by the severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2) demands its rapid, convenient, and large-scale diagnosis to downregulate its spread within as well as across communities. But the reliability, reproducibility, and selectivity of the majority of such diagnostic tests fail when they are tested against either a viral load at its early representation or a viral gene mutated during its current spread. In this regard, a selective "naked-eye" detection of SARS-CoV-2 is highly desirable, one that can be performed without access to any advanced instrumental techniques. We herein report the development of a colorimetric assay based on gold nanoparticles (AuNPs) that, when capped with suitably designed thiol-modified antisense oligonucleotides (ASOs) specific for the N-gene (nucleocapsid phosphoprotein) of SARS-CoV-2, can be used for diagnosing positive COVID-19 cases within 10 min from the isolated RNA samples. The thiol-modified ASO-capped AuNPs agglomerate selectively in the presence of their target RNA sequence of SARS-CoV-2 and demonstrate a change in their surface plasmon resonance. Further, the addition of RNaseH cleaves the RNA strand from the RNA-DNA hybrid, leading to a visually detectable precipitate from the solution mediated by the additional agglomeration among the AuNPs. The selectivity of the assay has been monitored in the presence of MERS-CoV viral RNA, with a limit of detection of 0.18 ng/μL of RNA having SARS-CoV-2 viral load. Thus, the current study reports a selective and visual "naked-eye" detection of the COVID-19 causative virus, SARS-CoV-2, without the requirement of any sophisticated instrumental techniques.

Proceedings Article
30 Apr 2020
TL;DR: The authors propose Rectified Adam (RAdam), a variant of Adam that introduces a term to rectify the variance of the adaptive learning rate, interpreting learning-rate warmup as a variance reduction technique.
Abstract: The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate -- its variance is problematically large in the early stage, and presume warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam.
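The rectification term mentioned above can be written out explicitly; below is a minimal single-parameter sketch of the RAdam update following the published formulas (the hyperparameter defaults are the usual Adam values, assumed here rather than taken from this abstract):

```python
import math

def radam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam update for a scalar parameter; returns (theta, m, v)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected momentum

    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance of the adaptive learning rate is tractable: rectify it.
        v_hat = math.sqrt(v / (1 - beta2 ** t))
        r_t = math.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                        / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        theta -= lr * r_t * m_hat / (v_hat + eps)
    else:
        # Early steps: the adaptive-LR variance is divergent, so fall back to
        # un-adapted momentum SGD (the regime warmup was compensating for).
        theta -= lr * m_hat
    return theta, m, v
```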

Journal ArticleDOI
TL;DR: The comparison results on the benchmark functions suggest that MRFO (manta ray foraging optimization) is far superior to its competitors, and the real-world engineering applications show the merits of this algorithm in tackling challenging problems in terms of computational cost and solution precision.

Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4 +1330 more · Institutions (149)
TL;DR: In this article, the authors reported the observation of gravitational waves from a binary-black-hole coalescence during the first two weeks of LIGO and Virgo's third observing run.
Abstract: We report the observation of gravitational waves from a binary-black-hole coalescence during the first two weeks of LIGO’s and Virgo’s third observing run. The signal was recorded on April 12, 2019 at 05:30:44 UTC with a network signal-to-noise ratio of 19. The binary is different from observations during the first two observing runs most notably due to its asymmetric masses: a ∼30 M⊙ black hole merged with a ∼8 M⊙ black hole companion. The more massive black hole rotated with a dimensionless spin magnitude between 0.22 and 0.60 (90% probability). Asymmetric systems are predicted to emit gravitational waves with stronger contributions from higher multipoles, and indeed we find strong evidence for gravitational radiation beyond the leading quadrupolar order in the observed signal. A suite of tests performed on GW190412 indicates consistency with Einstein’s general theory of relativity. While the mass ratio of this system differs from all previous detections, we show that it is consistent with the population model of stellar binary black holes inferred from the first two observing runs.

Journal ArticleDOI
TL;DR: Evidence relevant to Earth's equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S, is assessed, using a Bayesian approach to produce a probability density function for S given all the evidence, and promising avenues for further narrowing the range are identified.
Abstract: We assess evidence relevant to Earth's equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S. This evidence includes feedback process understanding, the historical climate record, and the paleoclimate record. An S value lower than 2 K is difficult to reconcile with any of the three lines of evidence. The amount of cooling during the Last Glacial Maximum provides strong evidence against values of S greater than 4.5 K. Other lines of evidence in combination also show that this is relatively unlikely. We use a Bayesian approach to produce a probability density function (PDF) for S given all the evidence, including tests of robustness to difficult-to-quantify uncertainties and different priors. The 66% range is 2.6-3.9 K for our Baseline calculation and remains within 2.3-4.5 K under the robustness tests; corresponding 5-95% ranges are 2.3-4.7 K, bounded by 2.0-5.7 K (although such high-confidence ranges should be regarded more cautiously). This indicates a stronger constraint on S than reported in past assessments, by lifting the low end of the range. This narrowing occurs because the three lines of evidence agree and are judged to be largely independent and because of greater confidence in understanding feedback processes and in combining evidence. We identify promising avenues for further narrowing the range in S, in particular using comprehensive models and process understanding to address limitations in the traditional forcing-feedback paradigm for interpreting past changes.
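The Bayesian combination itself is mechanically simple once each line of evidence is expressed as a likelihood over S; the sketch below illustrates the machinery on a grid with placeholder Gaussian likelihoods (loudly hypothetical: the paper's actual likelihoods for process understanding, the historical record, and the paleorecord have different shapes and widths):

```python
import numpy as np

S = np.linspace(0.0, 10.0, 2001)  # grid of climate sensitivities (K)

def gaussian(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

posterior = np.ones_like(S)  # uniform prior (the paper tests several priors)
# Independent lines of evidence multiply; the (mu, sd) pairs are placeholders.
for mu, sd in [(3.1, 1.0), (3.3, 1.5), (3.0, 1.2)]:
    posterior *= gaussian(S, mu, sd)
posterior /= np.trapz(posterior, S)  # normalize to a PDF

cdf = np.cumsum(posterior) * (S[1] - S[0])
lo, hi = S[np.searchsorted(cdf, 0.17)], S[np.searchsorted(cdf, 0.83)]
print(f"66% credible range: {lo:.1f}-{hi:.1f} K")
```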

Proceedings ArticleDOI
14 Jun 2020
TL;DR: HigherHRNet is presented, a novel bottom-up human pose estimation method for learning scale-aware representations using high-resolution feature pyramids; it surpasses all top-down methods on CrowdPose test and achieves a new state-of-the-art result on COCO test-dev, suggesting its robustness in crowded scenes.
Abstract: Bottom-up human pose estimation methods have difficulties in predicting the correct pose for small persons due to challenges in scale variation. In this paper, we present HigherHRNet: a novel bottom-up human pose estimation method for learning scale-aware representations using high-resolution feature pyramids. Equipped with multi-resolution supervision for training and multi-resolution aggregation for inference, the proposed approach is able to solve the scale variation challenge in bottom-up multi-person pose estimation and localize keypoints more precisely, especially for small persons. The feature pyramid in HigherHRNet consists of feature map outputs from HRNet and upsampled higher-resolution outputs through a transposed convolution. HigherHRNet outperforms the previous best bottom-up method by 2.5% AP for medium persons on COCO test-dev, showing its effectiveness in handling scale variation. Furthermore, HigherHRNet achieves a new state-of-the-art result on COCO test-dev (70.5% AP) without using refinement or other post-processing techniques, surpassing all existing bottom-up methods. HigherHRNet even surpasses all top-down methods on CrowdPose test (67.6% AP), suggesting its robustness in crowded scenes.
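The core architectural move, deconvolving the backbone's highest-resolution output to predict an extra higher-resolution heatmap branch and averaging branches at inference, is compact enough to sketch (a simplified PyTorch sketch assuming a generic backbone feature map; channel counts and joint count are placeholders, not the paper's exact head):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HigherResolutionHead(nn.Module):
    """Predict heatmaps at the backbone's 1/4 scale, then deconvolve to 1/2 scale."""
    def __init__(self, in_ch: int = 32, num_joints: int = 17):
        super().__init__()
        self.heatmap_low = nn.Conv2d(in_ch, num_joints, kernel_size=1)
        # Transposed convolution doubles the spatial resolution.
        self.deconv = nn.ConvTranspose2d(in_ch + num_joints, in_ch,
                                         kernel_size=4, stride=2, padding=1)
        self.heatmap_high = nn.Conv2d(in_ch, num_joints, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        low = self.heatmap_low(feat)                     # 1/4-scale heatmaps
        up = self.deconv(torch.cat([feat, low], dim=1))  # features + heatmaps in
        high = self.heatmap_high(up)                     # 1/2-scale heatmaps
        # Multi-resolution aggregation for inference: resize and average.
        low_up = F.interpolate(low, size=high.shape[-2:],
                               mode="bilinear", align_corners=False)
        return (low_up + high) / 2

out = HigherResolutionHead()(torch.randn(1, 32, 64, 64))  # -> (1, 17, 128, 128)
```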

Journal ArticleDOI
04 Sep 2020-Science
TL;DR: A variant of ACE2 engineered using deep mutagenesis far outcompetes the natural receptor in binding the SARS-CoV-2 spike protein, and combining mutations gives ACE2 variants with affinities that rival those of monoclonal antibodies.
Abstract: The spike (S) protein of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) binds angiotensin-converting enzyme 2 (ACE2) on host cells to initiate entry, and soluble ACE2 is a therapeutic candidate that neutralizes infection by acting as a decoy. By using deep mutagenesis, mutations in ACE2 that increase S binding are found across the interaction surface, in the asparagine 90-glycosylation motif and at buried sites. The mutational landscape provides a blueprint for understanding the specificity of the interaction between ACE2 and S and for engineering high-affinity decoy receptors. Combining mutations gives ACE2 variants with affinities that rival those of monoclonal antibodies. A stable dimeric variant shows potent SARS-CoV-2 and -1 neutralization in vitro. The engineered receptor is catalytically active, and its close similarity with the native receptor may limit the potential for viral escape.

Journal ArticleDOI
TL;DR: In this article, the authors search for an isotropic stochastic GWB in the 12.5-yr pulsar-timing data set collected by the North American Nanohertz Observatory for Gravitational Waves.
Abstract: We search for an isotropic stochastic gravitational-wave background (GWB) in the 12.5 yr pulsar-timing data set collected by the North American Nanohertz Observatory for Gravitational Waves. Our analysis finds strong evidence of a stochastic process, modeled as a power law, with common amplitude and spectral slope across pulsars. Under our fiducial model, the Bayesian posterior of the amplitude for an f −2/3 power-law spectrum, expressed as the characteristic GW strain, has median 1.92 × 10−15 and 5%–95% quantiles of 1.37–2.67 × 10−15 at a reference frequency of fyr = 1 yr−1. The Bayes factor in favor of the common-spectrum process versus independent red-noise processes in each pulsar exceeds 10,000. However, we find no statistically significant evidence that this process has quadrupolar spatial correlations, which we would consider necessary to claim a GWB detection consistent with general relativity. We find that the process has neither monopolar nor dipolar correlations, which may arise from, for example, reference clock or solar system ephemeris systematics, respectively. The amplitude posterior has significant support above previously reported upper limits; we explain this in terms of the Bayesian priors assumed for intrinsic pulsar red noise. We examine potential implications for the supermassive black hole binary population under the hypothesis that the signal is indeed astrophysical in nature.
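The amplitude quoted above parameterizes a power-law characteristic strain spectrum, h_c(f) = A_GWB (f/f_yr)^(−2/3); a few lines make the scaling explicit (a sketch using the median amplitude from the abstract, with the conventional reference frequency f_yr = 1 yr−1):

```python
A_GWB = 1.92e-15                    # median strain amplitude from the abstract
F_YR = 1.0 / (365.25 * 24 * 3600)   # reference frequency of 1/yr, in Hz

def h_c(f_hz: float) -> float:
    """Characteristic strain of an f**(-2/3) power-law GWB."""
    return A_GWB * (f_hz / F_YR) ** (-2.0 / 3.0)

print(h_c(F_YR))       # 1.92e-15 at the reference frequency
print(h_c(10 * F_YR))  # a decade higher in frequency: 10**(-2/3) ~ 0.215 weaker
```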

Posted Content
TL;DR: The proposed aggregation scheme is permutation-invariant and consists of three modules: node embedding, structural neighborhood, and bi-level aggregation. An implementation of the scheme in graph convolutional networks, termed Geom-GCN, performs transductive learning on graphs.
Abstract: Message-passing neural networks (MPNNs) have been successfully applied to representation learning on graphs in a variety of real-world applications. However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs. Few studies have noticed these weaknesses from different perspectives. From observations on classical neural networks and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome the two weaknesses. The basic idea behind it is that aggregation on a graph can benefit from a continuous space underlying the graph. The proposed aggregation scheme is permutation-invariant and consists of three modules: node embedding, structural neighborhood, and bi-level aggregation. We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN (Geometric Graph Convolutional Networks), to perform transductive learning on graphs. Experimental results show that the proposed Geom-GCN achieves state-of-the-art performance on a wide range of open datasets of graphs. Code is available at this https URL.

Journal ArticleDOI
TL;DR: Two bioinformatic tools enable sequence similarity network and phylogenetic analysis of gene clusters and their families across hundreds of strains and in large datasets, leading to the discovery of new natural products.
Abstract: Genome mining has become a key technology to exploit natural product diversity. Although initially performed on a single-genome basis, the process is now being scaled up to mine entire genera, strain collections and microbiomes. However, no bioinformatic framework is currently available for effectively analyzing datasets of this size and complexity. In the present study, a streamlined computational workflow is provided, consisting of two new software tools: the ‘biosynthetic gene similarity clustering and prospecting engine’ (BiG-SCAPE), which facilitates fast and interactive sequence similarity network analysis of biosynthetic gene clusters and gene cluster families; and the ‘core analysis of syntenic orthologues to prioritize natural product gene clusters’ (CORASON), which elucidates phylogenetic relationships within and across these families. BiG-SCAPE is validated by correlating its output to metabolomic data across 363 actinobacterial strains and the discovery potential of CORASON is demonstrated by comprehensively mapping biosynthetic diversity across a range of detoxin/rimosamide-related gene cluster families, culminating in the characterization of seven detoxin analogues. Two bioinformatic tools, BiG-SCAPE and CORASON, enable sequence similarity network and phylogenetic analysis of gene clusters and their families across hundreds of strains and in large datasets, leading to the discovery of new natural products.

Journal ArticleDOI
TL;DR: This essay focuses on three occupationally related domains that may be impacted by the COVID-19 pandemic, and discusses the increased segmentation of the labor market, which allocates workers to “good jobs” and “bad jobs”, and the contribution of occupational segmentation to inequality.

Journal ArticleDOI
T. Aoyama1, Nils Asmussen2, M. Benayoun3, Johan Bijnens4 +146 more · Institutions (64)
TL;DR: The current status of the Standard Model calculation of the anomalous magnetic moment of the muon has been reviewed in this paper, where the authors present a detailed account of recent efforts to improve the calculation of these two contributions with either a data-driven, dispersive approach, or a first-principle, lattice-QCD approach.
Abstract: We review the present status of the Standard Model calculation of the anomalous magnetic moment of the muon. This is performed in a perturbative expansion in the fine-structure constant $\alpha$ and is broken down into pure QED, electroweak, and hadronic contributions. The pure QED contribution is by far the largest and has been evaluated up to and including $\mathcal{O}(\alpha^5)$ with negligible numerical uncertainty. The electroweak contribution is suppressed by $(m_\mu/M_W)^2$ and only shows up at the level of the seventh significant digit. It has been evaluated up to two loops and is known to better than one percent. Hadronic contributions are the most difficult to calculate and are responsible for almost all of the theoretical uncertainty. The leading hadronic contribution appears at $\mathcal{O}(\alpha^2)$ and is due to hadronic vacuum polarization, whereas at $\mathcal{O}(\alpha^3)$ the hadronic light-by-light scattering contribution appears. Given the low characteristic scale of this observable, these contributions have to be calculated with nonperturbative methods, in particular, dispersion relations and the lattice approach to QCD. The largest part of this review is dedicated to a detailed account of recent efforts to improve the calculation of these two contributions with either a data-driven, dispersive approach, or a first-principle, lattice-QCD approach. The final result reads $a_\mu^\text{SM}=116\,591\,810(43)\times 10^{-11}$ and is smaller than the Brookhaven measurement by 3.7$\sigma$. The experimental uncertainty will soon be reduced by up to a factor four by the new experiment currently running at Fermilab, and also by the future J-PARC experiment. This and the prospects to further reduce the theoretical uncertainty in the near future-which are also discussed here-make this quantity one of the most promising places to look for evidence of new physics.
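The quoted 3.7σ follows from simple error propagation between this prediction and the Brookhaven measurement (the experimental value below is the published BNL E821 result, supplied here as context rather than taken from this abstract):

```python
import math

# Both values in units of 1e-11.
a_sm,  sig_sm  = 116_591_810, 43   # Standard Model prediction (this review)
a_exp, sig_exp = 116_592_089, 63   # BNL E821 measurement

delta = a_exp - a_sm                 # 279e-11
sigma = math.hypot(sig_sm, sig_exp)  # uncertainties combined in quadrature
print(f"discrepancy = {delta}e-11 = {delta / sigma:.1f} sigma")  # -> 3.7 sigma
```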