Showing papers by "Australian National University" published in 2020
••
TL;DR: Some notable features of IQ-TREE version 2 are described and the key advantages over other software are highlighted.
Abstract: IQ-TREE (http://www.iqtree.org, last accessed February 6, 2020) is a user-friendly and widely used software package for phylogenetic inference using maximum likelihood. Since the release of version 1 in 2014, we have continuously expanded IQ-TREE to integrate a plethora of new models of sequence evolution and efficient computational approaches of phylogenetic inference to deal with genomic data. Here, we describe notable features of IQ-TREE version 2 and highlight the key advantages over other software.
4,337 citations
••
Christopher J. L. Murray, Aleksandr Y. Aravkin, +2269 more • Institutions (286)
TL;DR: The largest declines in risk exposure from 2010 to 2019 were among a set of risks that are strongly linked to social and economic development, including household air pollution; unsafe water, sanitation, and handwashing; and child growth failure.
3,059 citations
••
03 Apr 2020
TL;DR: Random Erasing as mentioned in this paper randomly selects a rectangle region in an image and erases its pixels with random values, which reduces the risk of overfitting and makes the model robust to occlusion.
Abstract: In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: https://github.com/zhunzhong07/Random-Erasing.
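The procedure described above is short enough to sketch directly. The following is a minimal illustration of the idea, with hyperparameter names and defaults chosen for readability; they are assumptions, not the paper's published settings:

```python
import random

import numpy as np

def random_erasing(img, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3)):
    """Randomly erase a rectangle in an (H, W, C) uint8 image array.

    Illustrative sketch of the augmentation described above; the defaults
    here are assumptions, not the paper's exact hyperparameters.
    """
    if random.random() > p:
        return img  # apply erasing only with probability p
    h, w = img.shape[:2]
    for _ in range(100):  # retry until a valid rectangle fits inside the image
        target_area = random.uniform(*area_range) * h * w
        aspect = random.uniform(*aspect_range)
        eh = int(round((target_area * aspect) ** 0.5))
        ew = int(round((target_area / aspect) ** 0.5))
        if 0 < eh < h and 0 < ew < w:
            top = random.randint(0, h - eh)
            left = random.randint(0, w - ew)
            # overwrite the selected region with random pixel values
            img[top:top + eh, left:left + ew] = np.random.randint(
                0, 256, size=(eh, ew, img.shape[2]), dtype=np.uint8)
            return img
    return img  # give up if no valid rectangle was found

img = np.zeros((64, 64, 3), dtype=np.uint8)
out = random_erasing(img, p=1.0)
```

Because the erased pixels take random values while the label stays unchanged, the network is trained on images with varying levels of occlusion, which is the source of the regularization effect described above.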
1,748 citations
••
TL;DR: In this article, the authors explore seven different scenarios of how COVID-19 might evolve in the coming year using a modelling technique developed by Lee and McKibbin (2003) and extended by McKibbin and Sidorenko (2006) and examine the impacts of different scenarios on macroeconomic outcomes and financial markets in a global hybrid DSGE/CGE general equilibrium model.
Abstract: The outbreak of coronavirus named COVID-19 has disrupted the Chinese economy and is spreading globally. The evolution of the disease and its economic impact is highly uncertain, which makes it difficult for policymakers to formulate an appropriate macroeconomic policy response. In order to better understand possible economic outcomes, this paper explores seven different scenarios of how COVID-19 might evolve in the coming year using a modelling technique developed by Lee and McKibbin (2003) and extended by McKibbin and Sidorenko (2006). It examines the impacts of different scenarios on macroeconomic outcomes and financial markets in a global hybrid DSGE/CGE general equilibrium model. The scenarios in this paper demonstrate that even a contained outbreak could significantly impact the global economy in the short run. These scenarios demonstrate the scale of costs that might be avoided by greater investment in public health systems in all economies but particularly in less developed economies where health care systems are less developed and population density is high.
1,270 citations
••
TL;DR: In 2019, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9; the Virgo detector was also taking data that did not contribute to the detection due to a low signal-to-noise ratio but were used for subsequent parameter estimation, as discussed by the authors.
Abstract: On 2019 April 25, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9. The Virgo detector was also taking data that did not contribute to detection due to a low signal-to-noise ratio, but were used for subsequent parameter estimation. The 90% credible intervals for the component masses range from to (if we restrict the dimensionless component spin magnitudes to be smaller than 0.05). These mass parameters are consistent with the individual binary components being neutron stars. However, both the source-frame chirp mass and the total mass of this system are significantly larger than those of any other known binary neutron star (BNS) system. The possibility that one or both binary components of the system are black holes cannot be ruled out from gravitational-wave data. We discuss possible origins of the system based on its inconsistency with the known Galactic BNS population. Under the assumption that the signal was produced by a BNS coalescence, the local rate of neutron star mergers is updated to 250–2810 Gpc−3 yr−1.
1,189 citations
•
TL;DR: A comprehensive review of recent pioneering efforts in semantic and instance segmentation is provided, including convolutional pixel-labeling networks, encoder-decoder architectures, multiscale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings.
Abstract: Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial amount of works aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarity, strengths and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
950 citations
••
TL;DR: In this paper, the authors reported the observation of a compact binary coalescence involving a 22.2–24.3 M⊙ black hole and a compact object with a mass of 2.50–2.67 M⊙ (all measurements quoted at the 90% credible level). The gravitational-wave signal, GW190814, was observed during LIGO's and Virgo's third observing run on 2019 August 14 at 21:10:39 UTC and has a signal-to-noise ratio of 25 in the three-detector network.
Abstract: We report the observation of a compact binary coalescence involving a 22.2–24.3 M⊙ black hole and a compact object with a mass of 2.50–2.67 M⊙ (all measurements quoted at the 90% credible level). The gravitational-wave signal, GW190814, was observed during LIGO's and Virgo's third observing run on 2019 August 14 at 21:10:39 UTC and has a signal-to-noise ratio of 25 in the three-detector network. The source was localized to 18.5 deg2 at a distance of ${241}_{-45}^{+41}$ Mpc; no electromagnetic counterpart has been confirmed to date. The source has the most unequal mass ratio yet measured with gravitational waves, ${0.112}_{-0.009}^{+0.008}$, and its secondary component is either the lightest black hole or the heaviest neutron star ever discovered in a double compact-object system. The dimensionless spin of the primary black hole is tightly constrained to ≤0.07. Tests of general relativity reveal no measurable deviations from the theory, and its prediction of higher-multipole emission is confirmed at high confidence. We estimate a merger rate density of 1–23 Gpc−3 yr−1 for the new class of binary coalescence sources that GW190814 represents. Astrophysical models predict that binaries with mass ratios similar to this event can form through several channels, but are unlikely to have formed in globular clusters. However, the combination of mass ratio, component masses, and the inferred merger rate for this event challenges all current models of the formation and mass distribution of compact-object binaries.
913 citations
••
TL;DR: The extent of the trait data compiled in TRY is evaluated and emerging patterns of data coverage and representativeness are analyzed to conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements.
Abstract: Plant traits (the morphological, anatomical, physiological, biochemical and phenological characteristics of plants) determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits, with almost complete coverage for 'plant growth form'. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait-environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.
882 citations
••
TL;DR: It is inferred that the primary black hole mass lies within the gap produced by (pulsational) pair-instability supernova processes, with only a 0.32% probability of being below 65 M⊙, and that the 142 M⊙ merger remnant can be considered an intermediate mass black hole (IMBH).
Abstract: On May 21, 2019 at 03:02:29 UTC Advanced LIGO and Advanced Virgo observed a short duration gravitational-wave signal, GW190521, with a three-detector network signal-to-noise ratio of 14.7, and an estimated false-alarm rate of 1 in 4900 yr using a search sensitive to generic transients. If GW190521 is from a quasicircular binary inspiral, then the detected signal is consistent with the merger of two black holes with masses of 85_{-14}^{+21} M_{⊙} and 66_{-18}^{+17} M_{⊙} (90% credible intervals). We infer that the primary black hole mass lies within the gap produced by (pulsational) pair-instability supernova processes, with only a 0.32% probability of being below 65 M_{⊙}. We calculate the mass of the remnant to be 142_{-16}^{+28} M_{⊙}, which can be considered an intermediate mass black hole (IMBH). The luminosity distance of the source is 5.3_{-2.6}^{+2.4} Gpc, corresponding to a redshift of 0.82_{-0.34}^{+0.28}. The inferred rate of mergers similar to GW190521 is 0.13_{-0.11}^{+0.30} Gpc^{-3} yr^{-1}.
876 citations
••
TL;DR: The FLUXNET2015 dataset, detailed in this paper, provides ecosystem-scale data on CO2, water, and energy exchange between the biosphere and the atmosphere, as well as other meteorological and biological measurements, from 212 sites around the globe.
Abstract: The FLUXNET2015 dataset provides ecosystem-scale data on CO2, water, and energy exchange between the biosphere and the atmosphere, and other meteorological and biological measurements, from 212 sites around the globe (over 1500 site-years, up to and including year 2014). These sites, independently managed and operated, voluntarily contributed their data to create global datasets. Data were quality controlled and processed using uniform methods, to improve consistency and intercomparability across sites. The dataset is already being used in a number of applications, including ecophysiology studies, remote sensing studies, and development of ecosystem and Earth system models. FLUXNET2015 includes derived-data products, such as gap-filled time series, ecosystem respiration and photosynthetic uptake estimates, estimation of uncertainties, and metadata about the measurements, presented for the first time in this paper. In addition, 206 of these sites are for the first time distributed under a Creative Commons (CC-BY 4.0) license. This paper details this enhanced dataset and the processing methods, now made available as open-source codes, making the dataset more accessible, transparent, and reproducible.
681 citations
••
Stockholm Resilience Centre, Australian National University, University of Tasmania, Stockholm University, Charles Darwin University, International Union for Conservation of Nature and Natural Resources, University of Montana, National Autonomous University of Mexico, The Pew Charitable Trusts, McGill University, Stellenbosch University, University of Maryland, College Park, University of Bern, International Center for Tropical Agriculture, Commonwealth Scientific and Industrial Research Organisation, University of Wisconsin-Madison, Royal Swedish Academy of Sciences, Hobart Corporation, Potsdam Institute for Climate Impact Research, Pontifical Catholic University of Chile, University of Sussex, University College Cork, Lüneburg University, University of Arizona, Azim Premji University, University of the Witwatersrand, Radboud University Nijmegen, Utrecht University
TL;DR: In this article, the authors propose a set of four general principles that underlie high-quality knowledge co-production for sustainability research, and offer practical guidance on how to engage in meaningful co-productive practices, and how to evaluate their quality and success.
Abstract: Research practice, funding agencies and global science organizations suggest that research aimed at addressing sustainability challenges is most effective when ‘co-produced’ by academics and non-academics. Co-production promises to address the complex nature of contemporary sustainability challenges better than more traditional scientific approaches. But definitions of knowledge co-production are diverse and often contradictory. We propose a set of four general principles that underlie high-quality knowledge co-production for sustainability research. Using these principles, we offer practical guidance on how to engage in meaningful co-productive practices, and how to evaluate their quality and success.
••
Australian National University, Argonne National Laboratory, Scripps Health, Iowa State University, Western New England University, Michigan State University, National Institute of Advanced Industrial Science and Technology, Microsoft, University of Colorado Denver, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, University of Nebraska–Lincoln, Nanjing University, Sandia National Laboratories, Moscow State University, Kyocera, Cray, Purdue University, Old Dominion University, University of Rochester
TL;DR: A discussion of many of the recently implemented features of GAMESS (General Atomic and Molecular Electronic Structure System) and LibCChem (the C++ CPU/GPU library associated with GAMESS) is presented, which include fragmentation methods, hybrid MPI/OpenMP approaches to Hartree-Fock, and resolution of the identity second order perturbation theory.
Abstract: A discussion of many of the recently implemented features of GAMESS (General Atomic and Molecular Electronic Structure System) and LibCChem (the C++ CPU/GPU library associated with GAMESS) is presented. These features include fragmentation methods such as the fragment molecular orbital, effective fragment potential and effective fragment molecular orbital methods, hybrid MPI/OpenMP approaches to Hartree-Fock, and resolution of the identity second order perturbation theory. Many new coupled cluster theory methods have been implemented in GAMESS, as have multiple levels of density functional/tight binding theory. The role of accelerators, especially graphical processing units, is discussed in the context of the new features of LibCChem, as is the associated problem of power consumption as the power of computers increases dramatically. The process by which a complex program suite such as GAMESS is maintained and developed is considered. Future developments are briefly summarized.
••
TL;DR: This work implements a new physical mechanism for suppressing radiative losses of individual nanoscale resonators to engineer special modes with high quality factors, optical bound states in the continuum (BICs), and demonstrates that an individual subwavelength dielectric resonator hosting a BIC mode can boost nonlinear effects, increasing second-harmonic generation efficiency.
Abstract: Subwavelength optical resonators made of high-index dielectric materials provide efficient ways to manipulate light at the nanoscale through mode interferences and enhancement of both electric and magnetic fields. Such Mie-resonant dielectric structures have low absorption, and their functionalities are limited predominantly by radiative losses. We implement a new physical mechanism for suppressing radiative losses of individual nanoscale resonators to engineer special modes with high quality factors: optical bound states in the continuum (BICs). We demonstrate that an individual subwavelength dielectric resonator hosting a BIC mode can boost nonlinear effects increasing second-harmonic generation efficiency. Our work suggests a route to use subwavelength high-index dielectric resonators for a strong enhancement of light-matter interactions with applications to nonlinear optics, nanoscale lasers, quantum photonics, and sensors.
••
TL;DR: In this article, the authors reported the observation of gravitational waves from a binary-black-hole coalescence during the first two weeks of LIGO and Virgo's third observing run.
Abstract: We report the observation of gravitational waves from a binary-black-hole coalescence during the first two weeks of LIGO’s and Virgo’s third observing run. The signal was recorded on April 12, 2019 at 05:30:44 UTC with a network signal-to-noise ratio of 19. The binary is different from observations during the first two observing runs most notably due to its asymmetric masses: a ∼30 M⊙ black hole merged with a ∼8 M⊙ black hole companion. The more massive black hole rotated with a dimensionless spin magnitude between 0.22 and 0.60 (90% probability). Asymmetric systems are predicted to emit gravitational waves with stronger contributions from higher multipoles, and indeed we find strong evidence for gravitational radiation beyond the leading quadrupolar order in the observed signal. A suite of tests performed on GW190412 indicates consistency with Einstein’s general theory of relativity. While the mass ratio of this system differs from all previous detections, we show that it is consistent with the population model of stellar binary black holes inferred from the first two observing runs.
••
TL;DR: The SARS-CoV-2 main protease is considered a promising drug target, as it is dissimilar to human proteases, facilitating drug discovery attempts based on previous lead compounds.
••
University of New South Wales, Met Office, University of Washington, University of Leeds, University of Edinburgh, University of California, Berkeley, Columbia University, Goddard Institute for Space Studies, National Oceanography Centre, Australian National University, University of Tokyo, Université Paris-Saclay, Breakthrough Institute, Utrecht University, Stockholm University, Scripps Institution of Oceanography, University of Illinois at Urbana–Champaign, Max Planck Society
TL;DR: Evidence relevant to Earth's equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S, is assessed, using a Bayesian approach to produce a probability density function for S given all the evidence, and promising avenues for further narrowing the range are identified.
Abstract: We assess evidence relevant to Earth's equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S. This evidence includes feedback process understanding, the historical climate record, and the paleoclimate record. An S value lower than 2 K is difficult to reconcile with any of the three lines of evidence. The amount of cooling during the Last Glacial Maximum provides strong evidence against values of S greater than 4.5 K. Other lines of evidence in combination also show that this is relatively unlikely. We use a Bayesian approach to produce a probability density function (PDF) for S given all the evidence, including tests of robustness to difficult-to-quantify uncertainties and different priors. The 66% range is 2.6-3.9 K for our Baseline calculation and remains within 2.3-4.5 K under the robustness tests; corresponding 5-95% ranges are 2.3-4.7 K, bounded by 2.0-5.7 K (although such high-confidence ranges should be regarded more cautiously). This indicates a stronger constraint on S than reported in past assessments, by lifting the low end of the range. This narrowing occurs because the three lines of evidence agree and are judged to be largely independent and because of greater confidence in understanding feedback processes and in combining evidence. We identify promising avenues for further narrowing the range in S, in particular using comprehensive models and process understanding to address limitations in the traditional forcing-feedback paradigm for interpreting past changes.
••
TL;DR: In some cases the comparison of two models using information criteria can be viewed as equivalent to a likelihood ratio test, with the different criteria representing different alpha levels and BIC being a more conservative test than AIC.
Abstract: Choosing a model with too few parameters can involve making unrealistically simple assumptions and lead to high bias, poor prediction, and missed opportunities for insight. Such models are not flexible enough to describe the sample or the population well. A model with too many parameters can fit the observed data very well, but be too closely tailored to it. Such models may generalize poorly. Penalized-likelihood information criteria, such as Akaike’s Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Consistent AIC, and the Adjusted BIC, are widely used for model selection. However, different criteria sometimes support different models, leading to uncertainty about which criterion is the most trustworthy. In some simple cases the comparison of two models using information criteria can be viewed as equivalent to a likelihood ratio test, with the different models representing different alpha levels (i.e., different emphases on sensitivity or specificity; Lin & Dayton 1997). This perspective may lead to insights about how to interpret the criteria in less simple situations. For example, AIC or BIC could be preferable, depending on sample size and on the relative importance one assigns to sensitivity versus specificity. Understanding the differences among the criteria may make it easier to compare their results and to use them to make informed decisions.
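The criteria compared above have simple closed forms, which makes the "different alpha levels" view concrete. A minimal sketch with made-up log-likelihoods (the numbers are illustrative, not values from the article):

```python
import math

def aic(log_lik, k):
    # Akaike's Information Criterion: 2k - 2 ln(L)
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # Bayesian Information Criterion: k ln(n) - 2 ln(L)
    return k * math.log(n) - 2 * log_lik

# Hypothetical fits: a 3-parameter model vs. a 5-parameter model on n = 200 points.
n = 200
ll_simple, k_simple = -450.0, 3
ll_complex, k_complex = -447.0, 5

d_aic = aic(ll_complex, k_complex) - aic(ll_simple, k_simple)
d_bic = bic(ll_complex, k_complex, n) - bic(ll_simple, k_simple, n)
print(d_aic)            # -2.0: AIC slightly favors the complex model
print(round(d_bic, 2))  # 4.6: BIC favors the simpler model
```

BIC's penalty per extra parameter is ln(n) rather than 2, so once n exceeds about 7 it penalizes complexity more heavily than AIC does, which is the sense in which it acts as the more conservative test.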
••
New York University, Johns Hopkins University, University of Washington, Duke University, University of North Carolina at Chapel Hill, Scripps Research Institute, Hebrew University of Jerusalem, Ohio State University, University of California, San Francisco, Baylor College of Medicine, École Polytechnique Fédérale de Lausanne, Vanderbilt University, Rutgers University, Swiss Institute of Bioinformatics, Fred Hutchinson Cancer Research Center, Rensselaer Polytechnic Institute, Northeastern University, Stanford University, DSM, Fox Chase Cancer Center, University of Maryland, College Park, University of Warsaw, University of Denver, Australian National University, University of Kansas, University of Zurich, University of Massachusetts Dartmouth, University of Tokyo, Franklin & Marshall College, Weizmann Institute of Science, Lund University, University of California, Santa Cruz, University of California, Davis
TL;DR: This Perspective reviews tools developed over the past five years in the Rosetta software, including over 80 methods, and discusses improvements to the score function, user interfaces and usability.
Abstract: The Rosetta software for macromolecular modeling, docking and design is extensively used in laboratories worldwide. During two decades of development by a community of laboratories at more than 60 institutions, Rosetta has been continuously refactored and extended. Its advantages are its performance and interoperability between broad modeling capabilities. Here we review tools developed in the last 5 years, including over 80 methods. We discuss improvements to the score function, user interfaces and usability. Rosetta is available at http://www.rosettacommons.org.
••
TL;DR: The results provide an approach that breaks the long-standing trade-off between low energy consumption and high-speed nanophotonics, introducing vortex microlasers that are switchable at terahertz frequencies.
Abstract: The development of classical and quantum information–processing technology calls for on-chip integrated sources of structured light. Although integrated vortex microlasers have been previously demonstrated, they remain static and possess relatively high lasing thresholds, making them unsuitable for high-speed optical communication and computing. We introduce perovskite-based vortex microlasers and demonstrate their application to ultrafast all-optical switching at room temperature. By exploiting both mode symmetry and far-field properties, we reveal that the vortex beam lasing can be switched to linearly polarized beam lasing, or vice versa, with switching times of 1 to 1.5 picoseconds and energy consumption that is orders of magnitude lower than in previously demonstrated all-optical switching. Our results provide an approach that breaks the long-standing trade-off between low energy consumption and high-speed nanophotonics, introducing vortex microlasers that are switchable at terahertz frequencies.
••
TL;DR: In this article, the authors present a critical review of the current state of the art of the hydrogen supply chain as a forward-looking energy vector, comprising its resources, generation and storage technologies, demand market, and economics.
Abstract: Hydrogen is known as a technically viable and benign energy vector for applications ranging from the small-scale power supply in off-grid modes to large-scale chemical energy exports. However, with hydrogen being naturally unavailable in its pure form, traditionally reliant industries such as oil refining and fertilisers have sourced it through emission-intensive gasification and reforming of fossil fuels. Although the deployment of hydrogen as an alternative energy vector has long been discussed, it has not been realised because of the lack of low-cost hydrogen generation and conversion technologies. The recent tipping point in the cost of some renewable energy technologies such as wind and photovoltaics (PV) has mobilised continuing sustained interest in renewable hydrogen through water splitting. This paper presents a critical review of the current state of the art of the hydrogen supply chain as a forward-looking energy vector, comprising its resources, generation and storage technologies, demand market, and economics.
••
TL;DR: The proposed UWCNN model directly reconstructs the clear latent underwater image, benefiting from an underwater scene prior that can be used to synthesize underwater image training data, and can be easily extended to underwater videos for frame-by-frame enhancement.
••
TL;DR: Xiao et al. as mentioned in this paper used strongly reductive surface-anchoring zwitterionic molecules to suppress Sn2+ oxidation and passivate defects at the grain surfaces in mixed lead-tin perovskite films, enabling an efficiency of 21.7% (certified 20.7%).
Abstract: Monolithic all-perovskite tandem solar cells offer an avenue to increase power conversion efficiency beyond the limits of single-junction cells. It is an important priority to unite efficiency, uniformity and stability, yet this has proven challenging because of high trap density and ready oxidation in narrow-bandgap mixed lead–tin perovskite subcells. Here we report simultaneous enhancements in the efficiency, uniformity and stability of narrow-bandgap subcells using strongly reductive surface-anchoring zwitterionic molecules. The zwitterionic antioxidant inhibits Sn2+ oxidation and passivates defects at the grain surfaces in mixed lead–tin perovskite films, enabling an efficiency of 21.7% (certified 20.7%) for single-junction solar cells. We further obtain a certified efficiency of 24.2% in 1-cm2-area all-perovskite tandem cells and in-lab power conversion efficiencies of 25.6% and 21.4% for 0.049 cm2 and 12 cm2 devices, respectively. The encapsulated tandem devices retain 88% of their initial performance following 500 hours of operation at a device temperature of 54–60 °C under one-sun illumination in ambient conditions. Ensuring both stability and efficiency in mixed lead–tin perovskite solar cells is crucial to the development of all-perovskite tandems. Xiao et al. use an antioxidant zwitterionic molecule to suppress tin oxidation thus enabling large-area tandem cells with 24.2% efficiency and operational stability over 500 hours.
••
14 Jun 2020
TL;DR: In this paper, the authors propose a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$.
Abstract: This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$. We find a majority of loss functions, including the triplet loss and the softmax cross-entropy loss, embed $s_n$ and $s_p$ into similarity pairs and seek to reduce $(s_n-s_p)$. Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. It results in a Circle loss, which is named due to its circular decision boundary. The Circle loss has a unified formula for two elemental deep feature learning paradigms, i.e., learning with class-level labels and pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with the loss functions optimizing $(s_n-s_p)$. Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, as well as several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art.
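The re-weighting idea described above can be sketched in a few lines of NumPy. This follows the unified formulation the abstract describes (each similarity weighted by its distance from its optimum); the margin and scale values are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np

def _logsumexp(x):
    # numerically stable log(sum(exp(x)))
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def circle_loss(sp, sn, margin=0.25, gamma=64.0):
    """Circle loss over within-class similarities sp and between-class
    similarities sn (1-D arrays of cosine similarities).

    Each score gets its own penalty weight based on how far it sits from
    its optimum, instead of the fixed weighting of plain (sn - sp) losses.
    Margin/gamma defaults are illustrative, not tuned settings.
    """
    op, on = 1.0 + margin, -margin      # optima for sp and sn
    dp, dn = 1.0 - margin, margin       # decision margins
    ap = np.clip(op - sp, 0.0, None)    # weight grows as sp falls short of op
    an = np.clip(sn - on, 0.0, None)    # weight grows as sn overshoots on
    logits_p = -gamma * ap * (sp - dp)
    logits_n = gamma * an * (sn - dn)
    # loss = log(1 + sum_j exp(logits_n[j]) * sum_i exp(logits_p[i]))
    return float(np.log1p(np.exp(_logsumexp(logits_n) + _logsumexp(logits_p))))

well_separated = circle_loss(np.array([0.95]), np.array([-0.5]))
poorly_separated = circle_loss(np.array([0.6]), np.array([0.4]))
```

A similarity far from its optimum (e.g. a between-class score near a within-class one) receives a large weight and dominates the loss, which is the flexible-penalty behavior the abstract contrasts with fixed $(s_n - s_p)$ optimization.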
••
University of Helsinki1, Australian National University2, Brandenburg University of Technology3, Stellenbosch University4, University of Osnabrück5, University of Salzburg6, Radboud University Nijmegen7, University of Huddersfield8, International Union for Conservation of Nature and Natural Resources9, University of Trier10, National University of Singapore11, IRSA12, University of Los Andes13, University of Florida14, Florida A&M University15, University of Zurich16, James Cook University17, Federal University of Mato Grosso do Sul18, Helmholtz Centre for Environmental Research - UFZ19, University of the Philippines Los Baños20, Griffith University21
TL;DR: In this paper, a group of conservation biologists deeply concerned about the decline of insect populations review what is known about the drivers of insect extinctions, their consequences, and how extinctions can negatively impact humanity.
••
TL;DR: The GW190521 signal is consistent with a binary black hole (BBH) merger source at redshift 0.8, and the merger rate of similar systems is estimated to be 0.13 Gpc^-3 yr^-1, as discussed by the authors.
Abstract: The gravitational-wave signal GW190521 is consistent with a binary black hole (BBH) merger source at redshift 0.8 with unusually high component masses, 85 (+21/−14) M⊙ and 66 (+17/−18) M⊙, compared to previously reported events, and shows mild evidence for spin-induced orbital precession. The primary falls in the mass gap predicted by (pulsational) pair-instability supernova theory, in the approximate range 65–120 M⊙. The probability that at least one of the black holes in GW190521 is in that range is 99.0%. The final mass of the merger (142 (+28/−16) M⊙) classifies it as an intermediate-mass black hole. Under the assumption of a quasi-circular BBH coalescence, we detail the physical properties of GW190521's source binary and its post-merger remnant, including component masses and spin vectors. Three different waveform models, as well as direct comparison to numerical solutions of general relativity, yield consistent estimates of these properties. Tests of strong-field general relativity targeting the merger-ringdown stages of the coalescence indicate consistency of the observed signal with theoretical predictions. We estimate the merger rate of similar systems to be 0.13 (+0.30/−0.11) Gpc^−3 yr^−1. We discuss the astrophysical implications of GW190521 for stellar collapse and for the possible formation of black holes in the pair-instability mass gap through various channels: via (multiple) stellar coalescences, or via hierarchical mergers of lower-mass black holes in star clusters or in active galactic nuclei. We find it to be unlikely that GW190521 is a strongly lensed signal of a lower-mass black hole binary merger. We also discuss more exotic possible sources for GW190521, including a highly eccentric black hole binary, or a primordial black hole binary.
••
Pacific Northwest National Laboratory1, Lawrence Berkeley National Laboratory2, National Center for Computational Sciences3, Brookhaven National Laboratory4, Argonne National Laboratory5, Intel6, University of Texas at Arlington7, State University of New York System8, Pennsylvania State University9, Oak Ridge National Laboratory10, Washington University in St. Louis11, Wellesley College12, Maria Curie-Skłodowska University13, Iowa State University14, Academy of Sciences of the Czech Republic15, University of Tennessee at Martin16, Université libre de Bruxelles17, Facebook18, Russian Academy of Sciences19, University of Minnesota20, University of Washington21, United States Naval Research Laboratory22, Georgia Institute of Technology23, University of St Andrews24, Universidad Autónoma Metropolitana25, University of California, San Diego26, Saarland University27, Sandia National Laboratories28, University of Illinois at Urbana–Champaign29, University of Iceland30, Australian National University31, Florida Institute of Technology32, University of Science and Technology of China33, Oswaldo Cruz Foundation34, Cardiff University35, Louisiana State University36, Chinese Academy of Sciences37, National Autonomous University of Mexico38, University of Florida39, Los Alamos National Laboratory40, University of Oviedo41, Prince of Songkla University42, Ames Laboratory43, University of Utah44, Northwestern University45, Universal Display Corporation46, Federal University of Pernambuco47, CD-adapco48, Cray49, Massachusetts Institute of Technology50, Nvidia51, University of Tennessee52, Shandong Normal University53, University of Cambridge54, Advanced Micro Devices55, Technische Universität München56, Stanford University57, Wuhan University of Technology58, Stony Brook University59
TL;DR: The NWChem computational chemistry suite is reviewed, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.
Abstract: Specialized computational chemistry packages have permanently reshaped the landscape of chemical and materials science by providing tools to support and guide experimental efforts and for the prediction of atomistic and electronic properties. In this regard, electronic structure packages have played a special role by using first-principle-driven methodologies to model complex chemical and materials processes. Over the past few decades, the rapid development of computing technologies and the tremendous increase in computational power have offered a unique chance to study complex transformations using sophisticated and predictive many-body techniques that describe correlated behavior of electrons in molecular and condensed phase systems at different levels of theory. In enabling these simulations, novel parallel algorithms have been able to take advantage of computational resources to address the polynomial scaling of electronic structure methods. In this paper, we briefly review the NWChem computational chemistry suite, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.
••
TL;DR: In this paper, the authors introduce the emerging field of nonlinear topological photonics and highlight the recent developments in bridging the physics of topological phases with nonlinear optics.
Abstract: Rapidly growing demands for fast information processing have launched a race for creating compact and highly efficient optical devices that can reliably transmit signals without losses. Recently discovered topological phases of light provide novel opportunities for photonic devices robust against scattering losses and disorder. Combining these topological photonic structures with nonlinear effects will unlock advanced functionalities such as magnet-free nonreciprocity and active tunability. Here, we introduce the emerging field of nonlinear topological photonics and highlight the recent developments in bridging the physics of topological phases with nonlinear optics. This includes the design of novel photonic platforms which combine topological phases of light with appreciable nonlinear response, self-interaction effects leading to edge solitons in topological photonic lattices, frequency conversion, active photonic structures exhibiting lasing from topologically protected modes, and many-body quantum topological phases of light. We also chart future research directions discussing device applications such as mode stabilization in lasers, parametric amplifiers protected against feedback, and ultrafast optical switches employing topological waveguides.
••
TL;DR: Evidence is found for a substantial expansion in the types and quantities of UPFs sold worldwide, representing a transition towards a more processed global diet but with wide variations between regions and countries; as countries grow richer, higher volumes and a wider variety of UPFs are sold.
Abstract: Understanding the drivers and dynamics of global ultra-processed food (UPF) consumption is essential, given the evidence linking these foods with adverse health outcomes. In this synthesis review, we take two steps. First, we quantify per capita volumes and trends in UPF sales, and ingredients (sweeteners, fats, sodium and cosmetic additives) supplied by these foods, in countries classified by income and region. Second, we review the literature on food systems and political economy factors that likely explain the observed changes. We find evidence for a substantial expansion in the types and quantities of UPFs sold worldwide, representing a transition towards a more processed global diet but with wide variations between regions and countries. As countries grow richer, higher volumes and a wider variety of UPFs are sold. Sales are highest in Australasia, North America, Europe and Latin America but growing rapidly in Asia, the Middle East and Africa. These developments are closely linked with the industrialization of food systems, technological change and globalization, including growth in the market and political activities of transnational food corporations and inadequate policies to protect nutrition in these new contexts. The scale of dietary change underway, especially in highly populated middle-income countries, raises serious concern for global health.
••
14 Jun 2020
TL;DR: This paper provides a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples and empirically shows that such modelling leads to robustness against perturbations and yields competitive results on the task of supervised and semi-supervised few-shot classification.
Abstract: Object recognition requires a generalization capability to avoid overfitting, especially when the samples are extremely few. Generalization from limited samples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly in dynamic environments and proves to be an essential aspect of lifelong learning. In this paper, we provide a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples. A subspace method is exploited as the central block of a dynamic classifier. We empirically show that such modelling leads to robustness against perturbations (e.g., outliers) and yields competitive results on the task of supervised and semi-supervised few-shot classification. We also develop a discriminative form which can boost the accuracy even further. Our code is available at https://github.com/chrysts/dsn_fewshot
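The subspace idea behind these dynamic classifiers can be sketched as follows. This is a simplified NumPy reading of the approach, not the authors' released code (which is linked above); the function names and the residual-based decision rule are our own illustration: each class subspace is spanned by the top right-singular vectors of the centered support features, and a query is assigned to the class whose subspace reconstructs it with the smallest residual.

```python
import numpy as np

def class_subspaces(support, n_dims=2):
    """Build one low-dimensional subspace per class from few support samples.

    support: dict mapping class label -> (n_shot, feat_dim) array.
    Returns dict of (mean, basis) pairs; basis columns span the subspace.
    """
    subspaces = {}
    for label, feats in support.items():
        mu = feats.mean(axis=0)
        # SVD of centered features; top right-singular vectors span the subspace
        _, _, vt = np.linalg.svd(feats - mu, full_matrices=False)
        subspaces[label] = (mu, vt[:n_dims].T)  # basis: (feat_dim, n_dims)
    return subspaces

def classify(query, subspaces):
    """Assign the query feature to the class whose subspace reconstructs it
    best, i.e. with the smallest projection residual."""
    best, best_err = None, np.inf
    for label, (mu, basis) in subspaces.items():
        centered = query - mu
        residual = centered - basis @ (basis.T @ centered)
        err = np.linalg.norm(residual)
        if err < best_err:
            best, best_err = label, err
    return best
```

Because each classifier is rebuilt from the support set of the current episode, the subspaces adapt to whatever few samples are given, and the projection step discards components orthogonal to the class structure, which is one intuition for the robustness to outliers noted in the abstract.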