
Showing papers by "Australian National University" published in 2020


Journal ArticleDOI
TL;DR: Some notable features of IQ-TREE version 2 are described and the key advantages over other software are highlighted.
Abstract: IQ-TREE (http://www.iqtree.org, last accessed February 6, 2020) is a user-friendly and widely used software package for phylogenetic inference using maximum likelihood. Since the release of version 1 in 2014, we have continuously expanded IQ-TREE to integrate a plethora of new models of sequence evolution and efficient computational approaches of phylogenetic inference to deal with genomic data. Here, we describe notable features of IQ-TREE version 2 and highlight the key advantages over other software.

4,337 citations


Journal ArticleDOI
TL;DR: The largest declines in risk exposure from 2010 to 2019 were among a set of risks that are strongly linked to social and economic development, including household air pollution; unsafe water, sanitation, and handwashing; and child growth failure.

3,059 citations


Journal ArticleDOI
03 Apr 2020
TL;DR: Random Erasing as mentioned in this paper randomly selects a rectangle region in an image and erases its pixels with random values, which reduces the risk of overfitting and makes the model robust to occlusion.
Abstract: In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: https://github.com/zhunzhong07/Random-Erasing.
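As a rough illustration of the augmentation described in the abstract above, the following NumPy sketch erases one random rectangle per call; the function name, parameter names, and default ranges are illustrative assumptions, and the authors' reference implementation is the one linked in the abstract.

```python
# Minimal sketch of Random Erasing for an H x W x C float image in [0, 1];
# hyperparameter defaults are illustrative, not the paper's exact settings.
import random
import numpy as np

def random_erasing(img, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3)):
    """Erase a randomly chosen rectangle of `img` with random pixel values."""
    if random.random() > p:
        return img                      # apply the augmentation with probability p
    h, w, c = img.shape
    for _ in range(100):                # retry until a rectangle fits inside the image
        area = random.uniform(*area_range) * h * w
        aspect = random.uniform(*aspect_range)
        eh = int(round(np.sqrt(area * aspect)))
        ew = int(round(np.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            top = random.randint(0, h - eh)
            left = random.randint(0, w - ew)
            img[top:top + eh, left:left + ew] = np.random.uniform(0.0, 1.0, (eh, ew, c))
            return img
    return img
```

In a training pipeline this step would typically follow random cropping and flipping, which is where the paper reports it being complementary to existing augmentations.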

1,748 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore seven different scenarios of how COVID-19 might evolve in the coming year using a modelling technique developed by Lee and McKibbin (2003) and extended by McKibbin and Sidorenko (2006) and examine the impacts of different scenarios on macroeconomic outcomes and financial markets in a global hybrid DSGE/CGE general equilibrium model.
Abstract: The outbreak of coronavirus named COVID-19 has disrupted the Chinese economy and is spreading globally. The evolution of the disease and its economic impact is highly uncertain, which makes it difficult for policymakers to formulate an appropriate macroeconomic policy response. In order to better understand possible economic outcomes, this paper explores seven different scenarios of how COVID-19 might evolve in the coming year using a modelling technique developed by Lee and McKibbin (2003) and extended by McKibbin and Sidorenko (2006). It examines the impacts of different scenarios on macroeconomic outcomes and financial markets in a global hybrid DSGE/CGE general equilibrium model. The scenarios in this paper demonstrate that even a contained outbreak could significantly impact the global economy in the short run. These scenarios demonstrate the scale of costs that might be avoided by greater investment in public health systems in all economies, but particularly in less developed economies, where health care capacity is weaker and population density is high.

1,270 citations


Journal ArticleDOI
B. P. Abbott1, R. Abbott1, T. D. Abbott2, Sheelu Abraham3 +1271 more, Institutions (145)
TL;DR: In 2019, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9, and the Virgo detector was also taking data that did not contribute to detection due to a low signal-to-noise ratio but were used for subsequent parameter estimation as discussed by the authors.
Abstract: On 2019 April 25, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9. The Virgo detector was also taking data that did not contribute to detection due to a low signal-to-noise ratio, but were used for subsequent parameter estimation. The 90% credible intervals for the component masses range from 1.12 to 2.52 M⊙ (1.46 to 1.87 M⊙ if we restrict the dimensionless component spin magnitudes to be smaller than 0.05). These mass parameters are consistent with the individual binary components being neutron stars. However, both the source-frame chirp mass and the total mass of this system are significantly larger than those of any other known binary neutron star (BNS) system. The possibility that one or both binary components of the system are black holes cannot be ruled out from gravitational-wave data. We discuss possible origins of the system based on its inconsistency with the known Galactic BNS population. Under the assumption that the signal was produced by a BNS coalescence, the local rate of neutron star mergers is updated to 250-2810 Gpc−3 yr−1.

1,189 citations


Posted Content
TL;DR: A comprehensive review of recent pioneering efforts in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multiscale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings are provided.
Abstract: Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial amount of work aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarities, strengths, and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.

950 citations


Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4 +1334 more, Institutions (150)
TL;DR: In this paper, the authors reported the observation of a compact binary coalescence involving a 22.2–24.3 M⊙ black hole and a compact object with a mass of 2.50–2.67 M⊙ (all measurements quoted at the 90% credible level). The gravitational-wave signal, GW190814, was observed during LIGO's and Virgo's third observing run on 2019 August 14 at 21:10:39 UTC and has a signal-to-noise ratio of 25 in the three-detector network.
Abstract: We report the observation of a compact binary coalescence involving a 22.2–24.3 M⊙ black hole and a compact object with a mass of 2.50–2.67 M⊙ (all measurements quoted at the 90% credible level). The gravitational-wave signal, GW190814, was observed during LIGO's and Virgo's third observing run on 2019 August 14 at 21:10:39 UTC and has a signal-to-noise ratio of 25 in the three-detector network. The source was localized to 18.5 deg2 at a distance of ${241}_{-45}^{+41}$ Mpc; no electromagnetic counterpart has been confirmed to date. The source has the most unequal mass ratio yet measured with gravitational waves, ${0.112}_{-0.009}^{+0.008}$, and its secondary component is either the lightest black hole or the heaviest neutron star ever discovered in a double compact-object system. The dimensionless spin of the primary black hole is tightly constrained to ≤0.07. Tests of general relativity reveal no measurable deviations from the theory, and its prediction of higher-multipole emission is confirmed at high confidence. We estimate a merger rate density of 1–23 Gpc−3 yr−1 for the new class of binary coalescence sources that GW190814 represents. Astrophysical models predict that binaries with mass ratios similar to this event can form through several channels, but are unlikely to have formed in globular clusters. However, the combination of mass ratio, component masses, and the inferred merger rate for this event challenges all current models of the formation and mass distribution of compact-object binaries.

913 citations


Journal ArticleDOI
Jens Kattge1, Gerhard Bönisch2, Sandra Díaz3, Sandra Lavorel +751 more, Institutions (314)
TL;DR: The extent of the trait data compiled in TRY is evaluated and emerging patterns of data coverage and representativeness are analyzed to conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements.
Abstract: Plant traits (the morphological, anatomical, physiological, biochemical and phenological characteristics of plants) determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits, with almost complete coverage for 'plant growth form'. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait-environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.

882 citations


Journal ArticleDOI
R. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4 +1332 more, Institutions (150)
TL;DR: It is inferred that the primary black hole mass lies within the gap produced by (pulsational) pair-instability supernova processes, with only a 0.32% probability of being below 65 M⊙, which can be considered an intermediate mass black hole (IMBH).
Abstract: On May 21, 2019 at 03:02:29 UTC Advanced LIGO and Advanced Virgo observed a short duration gravitational-wave signal, GW190521, with a three-detector network signal-to-noise ratio of 14.7, and an estimated false-alarm rate of 1 in 4900 yr using a search sensitive to generic transients. If GW190521 is from a quasicircular binary inspiral, then the detected signal is consistent with the merger of two black holes with masses of 85_{-14}^{+21} M_{⊙} and 66_{-18}^{+17} M_{⊙} (90% credible intervals). We infer that the primary black hole mass lies within the gap produced by (pulsational) pair-instability supernova processes, with only a 0.32% probability of being below 65 M_{⊙}. We calculate the mass of the remnant to be 142_{-16}^{+28} M_{⊙}, which can be considered an intermediate mass black hole (IMBH). The luminosity distance of the source is 5.3_{-2.6}^{+2.4} Gpc, corresponding to a redshift of 0.82_{-0.34}^{+0.28}. The inferred rate of mergers similar to GW190521 is 0.13_{-0.11}^{+0.30} Gpc^{-3} yr^{-1}.

876 citations


Journal ArticleDOI
Gilberto Pastorello1, Carlo Trotta2, E. Canfora2, Housen Chu1 +300 more, Institutions (119)
TL;DR: The FLUXNET2015 dataset provides ecosystem-scale data on CO 2 , water, and energy exchange between the biosphere and the atmosphere, and other meteorological and biological measurements, from 212 sites around the globe, and is detailed in this paper.
Abstract: The FLUXNET2015 dataset provides ecosystem-scale data on CO2, water, and energy exchange between the biosphere and the atmosphere, and other meteorological and biological measurements, from 212 sites around the globe (over 1500 site-years, up to and including year 2014). These sites, independently managed and operated, voluntarily contributed their data to create global datasets. Data were quality controlled and processed using uniform methods, to improve consistency and intercomparability across sites. The dataset is already being used in a number of applications, including ecophysiology studies, remote sensing studies, and development of ecosystem and Earth system models. FLUXNET2015 includes derived-data products, such as gap-filled time series, ecosystem respiration and photosynthetic uptake estimates, estimation of uncertainties, and metadata about the measurements, presented for the first time in this paper. In addition, 206 of these sites are for the first time distributed under a Creative Commons (CC-BY 4.0) license. This paper details this enhanced dataset and the processing methods, now made available as open-source codes, making the dataset more accessible, transparent, and reproducible.

681 citations


Journal ArticleDOI
20 Jan 2020
TL;DR: In this article, the authors propose a set of four general principles that underlie high-quality knowledge co-production for sustainability research, and offer practical guidance on how to engage in meaningful co-productive practices, and how to evaluate their quality and success.
Abstract: Research practice, funding agencies and global science organizations suggest that research aimed at addressing sustainability challenges is most effective when ‘co-produced’ by academics and non-academics. Co-production promises to address the complex nature of contemporary sustainability challenges better than more traditional scientific approaches. But definitions of knowledge co-production are diverse and often contradictory. We propose a set of four general principles that underlie high-quality knowledge co-production for sustainability research. Using these principles, we offer practical guidance on how to engage in meaningful co-productive practices, and how to evaluate their quality and success.

Journal ArticleDOI
TL;DR: A discussion of many of the recently implemented features of GAMESS (General Atomic and Molecular Electronic Structure System) and LibCChem (the C++ CPU/GPU library associated with GAMESS) is presented, which include fragmentation methods, hybrid MPI/OpenMP approaches to Hartree-Fock, and resolution of the identity second order perturbation theory.
Abstract: A discussion of many of the recently implemented features of GAMESS (General Atomic and Molecular Electronic Structure System) and LibCChem (the C++ CPU/GPU library associated with GAMESS) is presented. These features include fragmentation methods such as the fragment molecular orbital, effective fragment potential and effective fragment molecular orbital methods, hybrid MPI/OpenMP approaches to Hartree-Fock, and resolution of the identity second order perturbation theory. Many new coupled cluster theory methods have been implemented in GAMESS, as have multiple levels of density functional/tight binding theory. The role of accelerators, especially graphical processing units, is discussed in the context of the new features of LibCChem, as is the associated problem of power consumption as the power of computers increases dramatically. The process by which a complex program suite such as GAMESS is maintained and developed is considered. Future developments are briefly summarized.

Journal ArticleDOI
17 Jan 2020 - Science
TL;DR: This work implements a new physical mechanism for suppressing radiative losses of individual nanoscale resonators to engineer special modes with high quality factors: optical bound states in the continuum (BICs), and demonstrates that an individual subwavelength dielectric resonator hosting a BIC mode can boost nonlinear effects, increasing second-harmonic generation efficiency.
Abstract: Subwavelength optical resonators made of high-index dielectric materials provide efficient ways to manipulate light at the nanoscale through mode interferences and enhancement of both electric and magnetic fields. Such Mie-resonant dielectric structures have low absorption, and their functionalities are limited predominantly by radiative losses. We implement a new physical mechanism for suppressing radiative losses of individual nanoscale resonators to engineer special modes with high quality factors: optical bound states in the continuum (BICs). We demonstrate that an individual subwavelength dielectric resonator hosting a BIC mode can boost nonlinear effects, increasing second-harmonic generation efficiency. Our work suggests a route to use subwavelength high-index dielectric resonators for a strong enhancement of light-matter interactions with applications to nonlinear optics, nanoscale lasers, quantum photonics, and sensors.

Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4 +1330 more, Institutions (149)
TL;DR: In this article, the authors reported the observation of gravitational waves from a binary-black-hole coalescence during the first two weeks of LIGO and Virgo's third observing run.
Abstract: We report the observation of gravitational waves from a binary-black-hole coalescence during the first two weeks of LIGO's and Virgo's third observing run. The signal was recorded on April 12, 2019 at 05:30:44 UTC with a network signal-to-noise ratio of 19. The binary is different from observations during the first two observing runs, most notably due to its asymmetric masses: a ∼30 M⊙ black hole merged with a ∼8 M⊙ black hole companion. The more massive black hole rotated with a dimensionless spin magnitude between 0.22 and 0.60 (90% probability). Asymmetric systems are predicted to emit gravitational waves with stronger contributions from higher multipoles, and indeed we find strong evidence for gravitational radiation beyond the leading quadrupolar order in the observed signal. A suite of tests performed on GW190412 indicates consistency with Einstein's general theory of relativity. While the mass ratio of this system differs from all previous detections, we show that it is consistent with the population model of stellar binary black holes inferred from the first two observing runs.

Journal ArticleDOI
TL;DR: The SARS-CoV-2 main protease is considered a promising drug target, as it is dissimilar to human proteases, facilitating drug discovery attempts based on previous lead compounds.

Journal ArticleDOI
TL;DR: Evidence relevant to Earth's equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S, is assessed, using a Bayesian approach to produce a probability density function for S given all the evidence, and promising avenues for further narrowing the range are identified.
Abstract: We assess evidence relevant to Earth's equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S. This evidence includes feedback process understanding, the historical climate record, and the paleoclimate record. An S value lower than 2 K is difficult to reconcile with any of the three lines of evidence. The amount of cooling during the Last Glacial Maximum provides strong evidence against values of S greater than 4.5 K. Other lines of evidence in combination also show that this is relatively unlikely. We use a Bayesian approach to produce a probability density function (PDF) for S given all the evidence, including tests of robustness to difficult-to-quantify uncertainties and different priors. The 66% range is 2.6-3.9 K for our Baseline calculation and remains within 2.3-4.5 K under the robustness tests; corresponding 5-95% ranges are 2.3-4.7 K, bounded by 2.0-5.7 K (although such high-confidence ranges should be regarded more cautiously). This indicates a stronger constraint on S than reported in past assessments, by lifting the low end of the range. This narrowing occurs because the three lines of evidence agree and are judged to be largely independent and because of greater confidence in understanding feedback processes and in combining evidence. We identify promising avenues for further narrowing the range in S, in particular using comprehensive models and process understanding to address limitations in the traditional forcing-feedback paradigm for interpreting past changes.
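To make the combination step concrete, here is a schematic sketch of a Bayesian update of this kind on a grid of candidate S values; the flat prior and the Gaussian likelihood shapes are invented placeholders standing in for the three lines of evidence, not the assessment's actual evidence curves or results.

```python
# Schematic Bayesian combination of independent lines of evidence for S;
# the likelihood shapes below are placeholders, not the paper's evidence.
import numpy as np
from scipy.stats import norm

S = np.linspace(0.5, 8.0, 1501)                  # candidate sensitivities (K)
dS = S[1] - S[0]
prior = np.ones_like(S)                          # one possible (flat) prior choice

like_process = norm.pdf(S, loc=3.0, scale=1.0)   # feedback process understanding
like_hist    = norm.pdf(S, loc=2.8, scale=1.2)   # historical climate record
like_paleo   = norm.pdf(S, loc=3.3, scale=1.1)   # paleoclimate record

posterior = prior * like_process * like_hist * like_paleo
posterior /= posterior.sum() * dS                # normalize to a probability density

cdf = np.cumsum(posterior) * dS
lo, hi = S[np.searchsorted(cdf, 0.17)], S[np.searchsorted(cdf, 0.83)]
print(f"66% credible range: {lo:.2f}-{hi:.2f} K")
```

Multiplying the likelihoods is only valid to the extent that the lines of evidence are independent, which is exactly the judgment the abstract highlights as a source of the narrowed range.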

Journal ArticleDOI
TL;DR: In some cases the comparison of two models using ICs can be viewed as equivalent to a likelihood ratio test, with the different criteria representing different alpha levels and BIC being a more conservative test than AIC.
Abstract: Choosing a model with too few parameters can involve making unrealistically simple assumptions and lead to high bias, poor prediction, and missed opportunities for insight. Such models are not flexible enough to describe the sample or the population well. A model with too many parameters can fit the observed data very well, but be too closely tailored to it. Such models may generalize poorly. Penalized-likelihood information criteria, such as Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Consistent AIC, and the Adjusted BIC, are widely used for model selection. However, different criteria sometimes support different models, leading to uncertainty about which criterion is the most trustworthy. In some simple cases the comparison of two models using information criteria can be viewed as equivalent to a likelihood ratio test, with the different criteria representing different alpha levels (i.e., different emphases on sensitivity or specificity; Lin & Dayton 1997). This perspective may lead to insights about how to interpret the criteria in less simple situations. For example, AIC or BIC could be preferable, depending on sample size and on the relative importance one assigns to sensitivity versus specificity. Understanding the differences among the criteria may make it easier to compare their results and to use them to make informed decisions.
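For reference, the sketch below computes the two most common criteria from a model's maximized log-likelihood using their standard definitions, AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L; the sample size and log-likelihood values are made up purely for illustration.

```python
# Standard penalized-likelihood information criteria; illustrative numbers only.
import math

def aic(log_likelihood, k):
    """Akaike's Information Criterion for a model with k free parameters."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion; its ln(n) penalty grows with sample size n."""
    return k * math.log(n) - 2 * log_likelihood

n = 200  # hypothetical sample size
models = {"simple": (-310.0, 3), "complex": (-305.0, 6)}   # (max log-likelihood, k)
for name, (logL, k) in models.items():
    print(f"{name}: AIC = {aic(logL, k):.1f}, BIC = {bic(logL, k, n):.1f}")
# Here AIC favors the complex model while BIC favors the simple one, mirroring
# the point above that BIC behaves like a more conservative (smaller-alpha) test.
```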

Journal ArticleDOI
Julia Koehler Leman1, Brian D. Weitzner2, Brian D. Weitzner3, Steven M. Lewis4, Steven M. Lewis5, Jared Adolf-Bryfogle6, Nawsad Alam7, Rebecca F. Alford2, Melanie L. Aprahamian8, David Baker3, Kyle A. Barlow9, Patrick Barth10, Patrick Barth11, Benjamin Basanta3, Brian J. Bender12, Kristin Blacklock13, Jaume Bonet11, Jaume Bonet14, Scott E. Boyken3, Phil Bradley15, Christopher Bystroff16, Patrick Conway3, Seth Cooper17, Bruno E. Correia14, Bruno E. Correia11, Brian Coventry3, Rhiju Das18, René M. de Jong19, Frank DiMaio3, Lorna Dsilva17, Roland L. Dunbrack20, Alex Ford3, Brandon Frenz3, Darwin Y. Fu12, Caleb Geniesse18, Lukasz Goldschmidt3, Ragul Gowthaman21, Jeffrey J. Gray2, Dominik Gront22, Sharon L. Guffy5, Scott Horowitz23, Po-Ssu Huang3, Thomas Huber24, Timothy M. Jacobs5, Jeliazko R. Jeliazkov2, David K. Johnson25, Kalli Kappel18, John Karanicolas20, Hamed Khakzad26, Hamed Khakzad14, Karen R. Khar25, Sagar D. Khare13, Firas Khatib27, Alisa Khramushin7, Indigo Chris King3, Robert Kleffner17, Brian Koepnick3, Tanja Kortemme9, Georg Kuenze12, Brian Kuhlman5, Daisuke Kuroda28, Jason W. Labonte29, Jason W. Labonte2, Jason K. Lai10, Gideon Lapidoth30, Andrew Leaver-Fay5, Steffen Lindert8, Thomas W. Linsky3, Nir London7, Joseph H. Lubin2, Sergey Lyskov2, Jack Maguire5, Lars Malmström14, Lars Malmström31, Lars Malmström26, Enrique Marcos3, Orly Marcu7, Nicholas A. Marze2, Jens Meiler12, Rocco Moretti12, Vikram Khipple Mulligan3, Santrupti Nerli32, Christoffer Norn30, Shane O’Conchúir9, Noah Ollikainen9, Sergey Ovchinnikov3, Michael S. Pacella2, Xingjie Pan9, Hahnbeom Park3, Ryan E. Pavlovicz3, Manasi A. Pethe13, Brian G. Pierce21, Kala Bharath Pilla24, Barak Raveh7, P. Douglas Renfrew, Shourya S. Roy Burman2, Aliza B. Rubenstein13, Marion F. Sauer12, Andreas Scheck11, Andreas Scheck14, William R. Schief6, Ora Schueler-Furman7, Yuval Sedan7, Alexander M. Sevy12, Nikolaos G. Sgourakis32, Lei Shi3, Justin B. Siegel33, Daniel-Adriano Silva3, Shannon Smith12, Yifan Song3, Amelie Stein9, Maria Szegedy13, Frank D. Teets5, Summer B. Thyme3, Ray Yu-Ruei Wang3, Andrew M. Watkins18, Lior Zimmerman7, Richard Bonneau1 
TL;DR: This Perspective reviews tools developed over the past five years in the Rosetta software, including over 80 methods, and discusses improvements to the score function, user interfaces and usability.
Abstract: The Rosetta software for macromolecular modeling, docking and design is extensively used in laboratories worldwide. During two decades of development by a community of laboratories at more than 60 institutions, Rosetta has been continuously refactored and extended. Its advantages are its performance and interoperability between broad modeling capabilities. Here we review tools developed in the last 5 years, including over 80 methods. We discuss improvements to the score function, user interfaces and usability. Rosetta is available at http://www.rosettacommons.org.

Journal ArticleDOI
28 Feb 2020 - Science
TL;DR: The results provide an approach that breaks the long-standing trade-off between low energy consumption and high-speed nanophotonics, introducing vortex microlasers that are switchable at terahertz frequencies.
Abstract: The development of classical and quantum information–processing technology calls for on-chip integrated sources of structured light. Although integrated vortex microlasers have been previously demonstrated, they remain static and possess relatively high lasing thresholds, making them unsuitable for high-speed optical communication and computing. We introduce perovskite-based vortex microlasers and demonstrate their application to ultrafast all-optical switching at room temperature. By exploiting both mode symmetry and far-field properties, we reveal that the vortex beam lasing can be switched to linearly polarized beam lasing, or vice versa, with switching times of 1 to 1.5 picoseconds and energy consumption that is orders of magnitude lower than in previously demonstrated all-optical switching. Our results provide an approach that breaks the long-standing trade-off between low energy consumption and high-speed nanophotonics, introducing vortex microlasers that are switchable at terahertz frequencies.

Journal ArticleDOI
TL;DR: In this article, the authors present a critical review of the current state of the art of the hydrogen supply chain as a forward-looking energy vector, comprising its resources, generation and storage technologies, demand market, and economics.
Abstract: Hydrogen is known as a technically viable and benign energy vector for applications ranging from the small-scale power supply in off-grid modes to large-scale chemical energy exports. However, with hydrogen being naturally unavailable in its pure form, traditionally reliant industries such as oil refining and fertilisers have sourced it through emission-intensive gasification and reforming of fossil fuels. Although the deployment of hydrogen as an alternative energy vector has long been discussed, it has not been realised because of the lack of low-cost hydrogen generation and conversion technologies. The recent tipping point in the cost of some renewable energy technologies such as wind and photovoltaics (PV) has mobilised continuing sustained interest in renewable hydrogen through water splitting. This paper presents a critical review of the current state of the art of the hydrogen supply chain as a forward-looking energy vector, comprising its resources, generation and storage technologies, demand market, and economics.

Journal ArticleDOI
TL;DR: The proposed UWCNN model directly reconstructs the clear latent underwater image, benefiting from an underwater scene prior that can be used to synthesize underwater image training data, and can be easily extended to underwater videos for frame-by-frame enhancement.

Journal ArticleDOI
TL;DR: Xiao et al. as mentioned in this paper used strongly reductive surface-anchoring zwitterionic molecules to suppress Sn2+ oxidation and passivate defects at the grain surfaces in mixed lead-tin perovskite films, enabling an efficiency of 21.7% (certified 20.7%).
Abstract: Monolithic all-perovskite tandem solar cells offer an avenue to increase power conversion efficiency beyond the limits of single-junction cells. It is an important priority to unite efficiency, uniformity and stability, yet this has proven challenging because of high trap density and ready oxidation in narrow-bandgap mixed lead–tin perovskite subcells. Here we report simultaneous enhancements in the efficiency, uniformity and stability of narrow-bandgap subcells using strongly reductive surface-anchoring zwitterionic molecules. The zwitterionic antioxidant inhibits Sn2+ oxidation and passivates defects at the grain surfaces in mixed lead–tin perovskite films, enabling an efficiency of 21.7% (certified 20.7%) for single-junction solar cells. We further obtain a certified efficiency of 24.2% in 1-cm2-area all-perovskite tandem cells and in-lab power conversion efficiencies of 25.6% and 21.4% for 0.049 cm2 and 12 cm2 devices, respectively. The encapsulated tandem devices retain 88% of their initial performance following 500 hours of operation at a device temperature of 54–60 °C under one-sun illumination in ambient conditions. Ensuring both stability and efficiency in mixed lead–tin perovskite solar cells is crucial to the development of all-perovskite tandems. Xiao et al. use an antioxidant zwitterionic molecule to suppress tin oxidation thus enabling large-area tandem cells with 24.2% efficiency and operational stability over 500 hours.

Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this paper, the authors propose a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$.
Abstract: This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$. We find a majority of loss functions, including the triplet loss and the softmax cross-entropy loss, embed $s_n$ and $s_p$ into similarity pairs and seek to reduce $(s_n-s_p)$. Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. It results in a Circle loss, which is named due to its circular decision boundary. The Circle loss has a unified formula for two elemental deep feature learning paradigms, i.e., learning with class-level labels and pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with the loss functions optimizing $(s_n-s_p)$. Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, as well as several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art.
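To show the re-weighting idea in code, here is a small PyTorch sketch of a Circle-style loss on precomputed similarity scores, following the unified formula reported in the paper (optimum points O_p = 1 + m and O_n = -m, margins Delta_p = 1 - m and Delta_n = m); the margin and scale values are illustrative, and this is not the authors' official implementation.

```python
# Sketch of a Circle-style loss over given similarity scores; m and gamma are
# illustrative hyperparameters, not tuned values from the paper.
import torch
import torch.nn.functional as F

def circle_loss(sp, sn, m=0.25, gamma=64.0):
    """sp: within-class similarities, sn: between-class similarities (1-D tensors)."""
    ap = torch.clamp_min(1 + m - sp.detach(), 0.0)   # weight grows as s_p falls short of 1 + m
    an = torch.clamp_min(sn.detach() + m, 0.0)       # weight grows as s_n exceeds -m
    logit_p = -gamma * ap * (sp - (1 - m))
    logit_n = gamma * an * (sn - m)
    # log(1 + sum_j exp(logit_n_j) * sum_i exp(logit_p_i)), computed stably
    return F.softplus(torch.logsumexp(logit_n, dim=0) + torch.logsumexp(logit_p, dim=0))

sp = torch.tensor([0.7, 0.8, 0.9])       # toy within-class similarity scores
sn = torch.tensor([0.4, 0.3, 0.2, 0.1])  # toy between-class similarity scores
print(circle_loss(sp, sn))
```

The per-score weights ap and an are what distinguish this from losses that only penalize the gap (s_n - s_p): similarities far from their optimum receive a larger gradient.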

Journal ArticleDOI
TL;DR: In this paper, a group of conservation biologists deeply concerned about the decline of insect populations reviewed what we know about the drivers of insect extinctions, their consequences, and how extinctions can negatively impact humanity.

Journal ArticleDOI
Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, Fausto Acernese4 +1329 more, Institutions (150)
TL;DR: The GW190521 signal is consistent with a binary black hole (BBH) merger source at redshift 0.8, and the merger rate of similar systems is estimated to be 0.13_{-0.11}^{+0.30} Gpc-3 yr-1, as discussed by the authors.
Abstract: The gravitational-wave signal GW190521 is consistent with a binary black hole (BBH) merger source at redshift 0.8 with unusually high component masses, 85_{-14}^{+21} M⊙ and 66_{-18}^{+17} M⊙, compared to previously reported events, and shows mild evidence for spin-induced orbital precession. The primary falls in the mass gap predicted by (pulsational) pair-instability supernova theory, in the approximate range 65-120 M⊙. The probability that at least one of the black holes in GW190521 is in that range is 99.0%. The final mass of the merger (142_{-16}^{+28} M⊙) classifies it as an intermediate-mass black hole. Under the assumption of a quasi-circular BBH coalescence, we detail the physical properties of GW190521's source binary and its post-merger remnant, including component masses and spin vectors. Three different waveform models, as well as direct comparison to numerical solutions of general relativity, yield consistent estimates of these properties. Tests of strong-field general relativity targeting the merger-ringdown stages of the coalescence indicate consistency of the observed signal with theoretical predictions. We estimate the merger rate of similar systems to be 0.13_{-0.11}^{+0.30} Gpc-3 yr-1. We discuss the astrophysical implications of GW190521 for stellar collapse and for the possible formation of black holes in the pair-instability mass gap through various channels: via (multiple) stellar coalescences, or via hierarchical mergers of lower-mass black holes in star clusters or in active galactic nuclei. We find it to be unlikely that GW190521 is a strongly lensed signal of a lower-mass black hole binary merger. We also discuss more exotic possible sources for GW190521, including a highly eccentric black hole binary, or a primordial black hole binary.

Journal ArticleDOI
Edoardo Aprà1, Eric J. Bylaska1, W. A. de Jong2, Niranjan Govind1, Karol Kowalski1, T. P. Straatsma3, Marat Valiev1, H. J. J. van Dam4, Yuri Alexeev5, J. Anchell6, V. Anisimov5, Fredy W. Aquino, Raymond Atta-Fynn7, Jochen Autschbach8, Nicholas P. Bauman1, Jeffrey C. Becca9, David E. Bernholdt10, K. Bhaskaran-Nair11, Stuart Bogatko12, Piotr Borowski13, Jeffery S. Boschen14, Jiří Brabec15, Adam Bruner16, Emilie Cauet17, Y. Chen18, Gennady N. Chuev19, Christopher J. Cramer20, Jeff Daily1, M. J. O. Deegan, Thom H. Dunning21, Michel Dupuis8, Kenneth G. Dyall, George I. Fann10, Sean A. Fischer22, Alexandr Fonari23, Herbert A. Früchtl24, Laura Gagliardi20, Jorge Garza25, Nitin A. Gawande1, Soumen Ghosh20, Kurt R. Glaesemann1, Andreas W. Götz26, Jeff R. Hammond6, Volkhard Helms27, Eric D. Hermes28, Kimihiko Hirao, So Hirata29, Mathias Jacquelin2, Lasse Jensen9, Benny G. Johnson, Hannes Jónsson30, Ricky A. Kendall10, Michael Klemm6, Rika Kobayashi31, V. Konkov32, Sriram Krishnamoorthy1, M. Krishnan18, Zijing Lin33, Roberto D. Lins34, Rik J. Littlefield, Andrew J. Logsdail35, Kenneth Lopata36, Wan Yong Ma37, Aleksandr V. Marenich20, J. Martin del Campo38, Daniel Mejía-Rodríguez39, Justin E. Moore6, Jonathan M. Mullin, Takahito Nakajima, Daniel R. Nascimento1, Jeffrey A. Nichols10, P. J. Nichols40, J. Nieplocha1, Alberto Otero-de-la-Roza41, Bruce J. Palmer1, Ajay Panyala1, T. Pirojsirikul42, Bo Peng1, Roberto Peverati32, Jiri Pittner15, L. Pollack, Ryan M. Richard43, P. Sadayappan44, George C. Schatz45, William A. Shelton36, Daniel W. Silverstein46, D. M. A. Smith6, Thereza A. Soares47, Duo Song1, Marcel Swart, H. L. Taylor48, G. S. Thomas1, Vinod Tipparaju49, Donald G. Truhlar20, Kiril Tsemekhman, T. Van Voorhis50, Álvaro Vázquez-Mayagoitia5, Prakash Verma, Oreste Villa51, Abhinav Vishnu1, Konstantinos D. Vogiatzis52, Dunyou Wang53, John H. Weare26, Mark J. Williamson54, Theresa L. Windus14, Krzysztof Wolinski13, A. T. Wong, Qin Wu4, Chan-Shan Yang2, Q. Yu55, Martin Zacharias56, Zhiyong Zhang57, Yan Zhao58, Robert W. Harrison59 
Pacific Northwest National Laboratory1, Lawrence Berkeley National Laboratory2, National Center for Computational Sciences3, Brookhaven National Laboratory4, Argonne National Laboratory5, Intel6, University of Texas at Arlington7, State University of New York System8, Pennsylvania State University9, Oak Ridge National Laboratory10, Washington University in St. Louis11, Wellesley College12, Maria Curie-Skłodowska University13, Iowa State University14, Academy of Sciences of the Czech Republic15, University of Tennessee at Martin16, Université libre de Bruxelles17, Facebook18, Russian Academy of Sciences19, University of Minnesota20, University of Washington21, United States Naval Research Laboratory22, Georgia Institute of Technology23, University of St Andrews24, Universidad Autónoma Metropolitana25, University of California, San Diego26, Saarland University27, Sandia National Laboratories28, University of Illinois at Urbana–Champaign29, University of Iceland30, Australian National University31, Florida Institute of Technology32, University of Science and Technology of China33, Oswaldo Cruz Foundation34, Cardiff University35, Louisiana State University36, Chinese Academy of Sciences37, National Autonomous University of Mexico38, University of Florida39, Los Alamos National Laboratory40, University of Oviedo41, Prince of Songkla University42, Ames Laboratory43, University of Utah44, Northwestern University45, Universal Display Corporation46, Federal University of Pernambuco47, CD-adapco48, Cray49, Massachusetts Institute of Technology50, Nvidia51, University of Tennessee52, Shandong Normal University53, University of Cambridge54, Advanced Micro Devices55, Technische Universität München56, Stanford University57, Wuhan University of Technology58, Stony Brook University59
TL;DR: The NWChem computational chemistry suite is reviewed, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.
Abstract: Specialized computational chemistry packages have permanently reshaped the landscape of chemical and materials science by providing tools to support and guide experimental efforts and for the prediction of atomistic and electronic properties. In this regard, electronic structure packages have played a special role by using first-principle-driven methodologies to model complex chemical and materials processes. Over the past few decades, the rapid development of computing technologies and the tremendous increase in computational power have offered a unique chance to study complex transformations using sophisticated and predictive many-body techniques that describe correlated behavior of electrons in molecular and condensed phase systems at different levels of theory. In enabling these simulations, novel parallel algorithms have been able to take advantage of computational resources to address the polynomial scaling of electronic structure methods. In this paper, we briefly review the NWChem computational chemistry suite, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the emerging field of nonlinear topological photonics and highlight the recent developments in bridging the physics of topological phases with nonlinear optics.
Abstract: Rapidly growing demands for fast information processing have launched a race for creating compact and highly efficient optical devices that can reliably transmit signals without losses. Recently discovered topological phases of light provide novel opportunities for photonic devices robust against scattering losses and disorder. Combining these topological photonic structures with nonlinear effects will unlock advanced functionalities such as magnet-free nonreciprocity and active tunability. Here, we introduce the emerging field of nonlinear topological photonics and highlight the recent developments in bridging the physics of topological phases with nonlinear optics. This includes the design of novel photonic platforms which combine topological phases of light with appreciable nonlinear response, self-interaction effects leading to edge solitons in topological photonic lattices, frequency conversion, active photonic structures exhibiting lasing from topologically protected modes, and many-body quantum topological phases of light. We also chart future research directions discussing device applications such as mode stabilization in lasers, parametric amplifiers protected against feedback, and ultrafast optical switches employing topological waveguides.

Journal ArticleDOI
TL;DR: Evidence is found for a substantial expansion in the types and quantities of UPFs sold worldwide, representing a transition towards a more processed global diet but with wide variations between regions and countries; as countries grow richer, higher volumes and a wider variety are sold.
Abstract: Understanding the drivers and dynamics of global ultra-processed food (UPF) consumption is essential, given the evidence linking these foods with adverse health outcomes. In this synthesis review, we take two steps. First, we quantify per capita volumes and trends in UPF sales, and ingredients (sweeteners, fats, sodium and cosmetic additives) supplied by these foods, in countries classified by income and region. Second, we review the literature on food systems and political economy factors that likely explain the observed changes. We find evidence for a substantial expansion in the types and quantities of UPFs sold worldwide, representing a transition towards a more processed global diet but with wide variations between regions and countries. As countries grow richer, higher volumes and a wider variety of UPFs are sold. Sales are highest in Australasia, North America, Europe and Latin America but growing rapidly in Asia, the Middle East and Africa. These developments are closely linked with the industrialization of food systems, technological change and globalization, including growth in the market and political activities of transnational food corporations and inadequate policies to protect nutrition in these new contexts. The scale of dietary change underway, especially in highly populated middle-income countries, raises serious concern for global health.

Proceedings ArticleDOI
14 Jun 2020
TL;DR: This paper provides a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples and empirically shows that such modelling leads to robustness against perturbations and yields competitive results on the task of supervised and semi-supervised few-shot classification.
Abstract: Object recognition requires a generalization capability to avoid overfitting, especially when the samples are extremely few. Generalization from limited samples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly in dynamical environments and proves to be an essential aspect of lifelong learning. In this paper, we provide a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples. A subspace method is exploited as the central block of a dynamic classifier. We will empirically show that such modelling leads to robustness against perturbations (e.g., outliers) and yields competitive results on the task of supervised and semi-supervised few-shot classification. We also develop a discriminative form which can boost the accuracy even further. Our code is available at https://github.com/chrysts/dsn_fewshot
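As a rough sketch of the subspace idea (not the authors' released code, which is linked above), the NumPy example below builds a low-dimensional subspace from each class's few support embeddings and assigns a query to the class whose subspace reconstructs it best.

```python
# Toy subspace-based few-shot classifier over precomputed embeddings; the
# embedding dimension, subspace rank, and data here are illustrative only.
import numpy as np

def class_subspace(support, n_dims=3):
    """Mean and orthonormal basis spanned by a class's support embeddings (k x d)."""
    mean = support.mean(axis=0)
    _, _, vt = np.linalg.svd(support - mean, full_matrices=False)
    return mean, vt[:n_dims]

def classify(query, subspaces):
    """Assign a query embedding to the class with the smallest projection residual."""
    residuals = []
    for mean, basis in subspaces:
        centered = query - mean
        projection = basis.T @ (basis @ centered)   # project onto the class subspace
        residuals.append(np.linalg.norm(centered - projection))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
# two classes, five 16-dimensional support embeddings each, plus one query
subspaces = [class_subspace(rng.normal(loc=c, size=(5, 16))) for c in (0.0, 2.0)]
print(classify(rng.normal(loc=2.0, size=16), subspaces))
```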