Showing papers by "Australian National University" published in 2018
••
Gregory A. Roth1, Gregory A. Roth2, Degu Abate3, Kalkidan Hassen Abate4 +1025 more•Institutions (333)
TL;DR: Non-communicable diseases comprised the greatest fraction of deaths, contributing to 73·4% (95% uncertainty interval [UI] 72·5–74·1) of total deaths in 2017, while communicable, maternal, neonatal, and nutritional causes accounted for 18·6% (17·9–19·6), and injuries 8·0% (7·7–8·2).
5,211 citations
••
Jeffrey D. Stanaway1, Ashkan Afshin1, Emmanuela Gakidou1, Stephen S Lim1 +1050 more•Institutions (346)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.
2,910 citations
••
18 Jun 2018
TL;DR: In this paper, a bottom-up and top-down attention mechanism was proposed to enable attention to be calculated at the level of objects and other salient image regions, which achieved state-of-the-art results on the MSCOCO test server.
Abstract: Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
2,904 citations
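The re-weighting step this abstract describes, bottom-up region features combined under top-down weights, can be illustrated with a toy softmax attention in plain Python. This is an illustrative sketch, not the paper's Faster R-CNN pipeline; the region features and the query vector below are hypothetical.

```python
import math

def top_down_attention(region_feats, query):
    """Softmax-weight bottom-up region features by a top-down query vector,
    then return the weights and the attended (weighted-average) feature."""
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in region_feats]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(region_feats[0])
    attended = [sum(w * feat[d] for w, feat in zip(weights, region_feats))
                for d in range(dim)]
    return weights, attended
```

A region whose feature aligns with the query receives the larger weight, which is the core of the top-down mechanism the abstract sketches.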
••
TL;DR: The results illustrate the importance of parameter tuning for optimizing classifier performance, and the recommendations regarding parameter choices for these classifiers under a range of standard operating conditions are made.
Abstract: Taxonomic classification of marker-gene sequences is an important step in microbiome analysis. We present q2-feature-classifier (https://github.com/qiime2/q2-feature-classifier), a QIIME 2 plugin containing several novel machine-learning and alignment-based methods for taxonomy classification. We evaluated and optimized several commonly used classification methods implemented in QIIME 1 (RDP, BLAST, UCLUST, and SortMeRNA) and several new methods implemented in QIIME 2 (a scikit-learn naive Bayes machine-learning classifier, and alignment-based taxonomy consensus methods based on VSEARCH and BLAST+) for classification of bacterial 16S rRNA and fungal ITS marker-gene amplicon sequence data. The naive Bayes, BLAST+-based, and VSEARCH-based classifiers implemented in QIIME 2 meet or exceed the species-level accuracy of other commonly used methods designed for classification of marker-gene sequences that were evaluated in this work. These evaluations, based on 19 mock communities and error-free sequence simulations, including classification of simulated "novel" marker-gene sequences, are available in our extensible benchmarking framework, tax-credit (https://github.com/caporaso-lab/tax-credit-data). Our results illustrate the importance of parameter tuning for optimizing classifier performance, and we make recommendations regarding parameter choices for these classifiers under a range of standard operating conditions. q2-feature-classifier and tax-credit are both free, open-source, BSD-licensed packages available on GitHub.
2,475 citations
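The k-mer naive Bayes idea behind the scikit-learn classifier mentioned above can be sketched in a few lines of pure Python. This is a toy with made-up sequences and genus labels, not the q2-feature-classifier implementation; real classifiers use much longer references and tuned k-mer sizes and confidence thresholds.

```python
from collections import Counter
import math

def kmers(seq, k=7):
    """All overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def train_nb(seqs, labels, k=7):
    """Per-taxon k-mer counts for add-one-smoothed naive Bayes."""
    counts, totals, vocab = {}, Counter(), set()
    for s, lab in zip(seqs, labels):
        km = kmers(s, k)
        counts.setdefault(lab, Counter()).update(km)
        totals[lab] += len(km)
        vocab.update(km)
    return counts, totals, vocab

def classify(seq, counts, totals, vocab, k=7):
    """Return the taxon with the highest smoothed log-likelihood."""
    best, best_lp = None, -math.inf
    v = len(vocab)
    for lab in counts:
        lp = sum(math.log((counts[lab][km] + 1) / (totals[lab] + v))
                 for km in kmers(seq, k))
        if lp > best_lp:
            best, best_lp = lab, lp
    return best

# Hypothetical reference sequences and genus labels (not real 16S data)
ref_seqs = ["ACGTACGTGGCCAA", "ACGTACGTGGCCTT",
            "TTGGCCAATTGGCC", "TTGGCCAATTAACC"]
ref_taxa = ["g__Escherichia", "g__Escherichia",
            "g__Bacillus", "g__Bacillus"]
model = train_nb(ref_seqs, ref_taxa)
predicted = classify("ACGTACGTGGCCAG", *model)
```

The query sequence shares almost all of its 7-mers with the first reference pair, so the classifier assigns it to that genus; the abstract's point about parameter tuning applies directly to choices like `k` here.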
••
TL;DR: It is found that the risk of all-cause mortality, and of cancers specifically, rises with increasing levels of consumption, and the level of consumption that minimises health loss is zero.
1,831 citations
••
Stockholm Resilience Centre1, Stockholm University2, University of Copenhagen3, University of Exeter4, Royal Swedish Academy of Sciences5, University of Arizona6, Scott Polar Research Institute7, Stanford University8, Université catholique de Louvain9, Potsdam Institute for Climate Impact Research10, Australian National University11, Wageningen University and Research Centre12, University of Potsdam13
TL;DR: The risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced is explored.
Abstract: We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a "Hothouse Earth" pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be. If the threshold is crossed, the resulting trajectory would likely cause serious disruptions to ecosystems, society, and economies. Collective human action is required to steer the Earth System away from a potential threshold and stabilize it in a habitable interglacial-like state. Such action entails stewardship of the entire Earth System-biosphere, climate, and societies-and could include decarbonization of the global economy, enhancement of biosphere carbon sinks, behavioral changes, technological innovations, new governance arrangements, and transformed social values.
1,685 citations
••
08 Sep 2018
TL;DR: In this paper, a Part-based Convolutional Baseline (PCB) is proposed to learn discriminative part-informed features for person retrieval. Two contributions are made: (i) the PCB network, which, given an image input, outputs a convolutional descriptor consisting of several part-level features, and (ii) a refined part pooling (RPP) method that reassigns within-part outliers to the parts they are closest to.
Abstract: Employing part-level features offers fine-grained information for pedestrian image description. A prerequisite of part discovery is that each part should be well located. Instead of using external resources like pose estimator, we consider content consistency within each part for precise part location. Specifically, we target at learning discriminative part-informed features for person retrieval and make two contributions. (i) A network named Part-based Convolutional Baseline (PCB). Given an image input, it outputs a convolutional descriptor consisting of several part-level features. With a uniform partition strategy, PCB achieves competitive results with the state-of-the-art methods, proving itself as a strong convolutional baseline for person retrieval. (ii) A refined part pooling (RPP) method. Uniform partition inevitably incurs outliers in each part, which are in fact more similar to other parts. RPP re-assigns these outliers to the parts they are closest to, resulting in refined parts with enhanced within-part consistency. Experiment confirms that RPP allows PCB to gain another round of performance boost. For instance, on the Market-1501 dataset, we achieve (77.4+4.2)% mAP and (92.3+1.5)% rank-1 accuracy, surpassing the state of the art by a large margin. Code is available at: https://github.com/syfafterzy/PCB_RPP
1,633 citations
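The uniform partition strategy in contribution (i), slicing the backbone's feature map into horizontal stripes and pooling each stripe into one part-level descriptor, can be sketched as follows. This is a toy over a row-major (height x channels) list of lists; the actual PCB operates on convolutional tensors and typically uses six parts.

```python
def pcb_pooling(feature_map, parts=6):
    """Uniformly partition the rows of a (height x channels) feature map
    into `parts` horizontal stripes and average-pool each stripe into a
    single part-level descriptor."""
    h, c = len(feature_map), len(feature_map[0])
    descriptors = []
    for p in range(parts):
        start, end = p * h // parts, (p + 1) * h // parts
        rows = feature_map[start:end]
        descriptors.append([sum(r[d] for r in rows) / len(rows)
                            for d in range(c)])
    return descriptors
```

RPP then refines this rigid partition by reassigning rows whose features better match a neighboring stripe, which is what gives the reported extra gain.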
••
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8 (+2.0/−1.7) km for the heavier star and R2 = 10.7 (+2.1/−1.5) km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙, as required from electromagnetic observations, and employ the equation-of-state parametrization, we further constrain R1 = 11.9 (+1.4/−1.4) km and R2 = 11.9 (+1.4/−1.4) km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with the pressure at twice nuclear saturation density measured at 3.5 (+2.7/−1.7) × 10^34 dyn cm^−2 at the 90% level.
1,595 citations
••
TL;DR: The RNA targets and molecular and cellular functions of the new RBPs, as well as the possibility that some RBPs may be regulated by RNA rather than regulate RNA, are discussed.
Abstract: RNA-binding proteins (RBPs) are typically thought of as proteins that bind RNA through one or multiple globular RNA-binding domains (RBDs) and change the fate or function of the bound RNAs. Several hundred such RBPs have been discovered and investigated over the years. Recent proteome-wide studies have more than doubled the number of proteins implicated in RNA binding and uncovered hundreds of additional RBPs lacking conventional RBDs. In this Review, we discuss these new RBPs and the emerging understanding of their unexpected modes of RNA binding, which can be mediated by intrinsically disordered regions, protein-protein interaction interfaces and enzymatic cores, among others. We also discuss the RNA targets and molecular and cellular functions of the new RBPs, as well as the possibility that some RBPs may be regulated by RNA rather than regulate RNA.
1,013 citations
••
TL;DR: The genetic architecture of the human plasma proteome is characterized in healthy blood donors from the INTERVAL study; protein quantitative trait loci are shown to overlap with gene expression quantitative trait loci and with disease-associated loci, and Mendelian randomization provides evidence that protein biomarkers have causal roles in disease.
Abstract: Although plasma proteins have important roles in biological processes and are the direct targets of many drugs, the genetic factors that control inter-individual variation in plasma protein levels are not well understood. Here we characterize the genetic architecture of the human plasma proteome in healthy blood donors from the INTERVAL study. We identify 1,927 genetic associations with 1,478 proteins, a fourfold increase on existing knowledge, including trans associations for 1,104 proteins. To understand the consequences of perturbations in plasma protein levels, we apply an integrated approach that links genetic variation with biological pathway, disease, and drug databases. We show that protein quantitative trait loci overlap with gene expression quantitative trait loci, as well as with disease-associated loci, and find evidence that protein biomarkers have causal roles in disease using Mendelian randomization analysis. By linking genetic factors to diseases via specific proteins, our analyses highlight potential therapeutic targets, opportunities for matching existing drugs with new disease indications, and potential safety concerns for drugs under development.
961 citations
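The Mendelian randomization step, using protein-associated variants as instruments to ask whether a protein causally affects disease, is often summarized by an inverse-variance weighted (IVW) estimate. This is a hedged sketch of that standard estimator, not necessarily the study's exact MR method, and the effect sizes below are made up.

```python
def ivw_estimate(beta_exp, beta_out, se_out):
    """Inverse-variance weighted causal effect estimate across genetic
    instruments: outcome effects regressed on exposure effects with
    weights 1/se_out**2 (no-intercept weighted regression)."""
    num = sum(bx * by / se ** 2
              for bx, by, se in zip(beta_exp, beta_out, se_out))
    den = sum(bx ** 2 / se ** 2 for bx, se in zip(beta_exp, se_out))
    return num / den
```

With a single instrument this reduces to the Wald ratio beta_outcome / beta_exposure; the multi-instrument form pools evidence while down-weighting noisy outcome associations.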
••
Northern Arizona University1, University of Minnesota2, University of California, Davis3, Woods Hole Oceanographic Institution4, Massachusetts Institute of Technology5, University of Copenhagen6, University of Trento7, Chinese Academy of Sciences8, University of California, San Francisco9, Children's Hospital of Philadelphia10, Pacific Northwest National Laboratory11, North Carolina State University12, University of Montana13, Dalhousie University14, University of British Columbia15, Shedd Aquarium16, University of Colorado Denver17, University of California, San Diego18, Michigan State University19, Stanford University20, Broad Institute21, Harvard University22, Australian National University23, University of Düsseldorf24, Sookmyung Women's University25, San Diego State University26, Howard Hughes Medical Institute27, Max Planck Society28, Cornell University29, University of Washington30, Colorado State University31, Google32, Syracuse University33, Webster University34, United States Department of Agriculture35, University of Arkansas for Medical Sciences36, Colorado School of Mines37, Atlantic Oceanographic and Meteorological Laboratory38, University of Southern Mississippi39, University of California, Merced40, Wageningen University and Research Centre41, University of Arizona42, Environment Agency43, University of Florida44, Merck & Co.45
TL;DR: QIIME 2 provides new features that will drive the next generation of microbiome research, including interactive spatial and temporal analysis and visualization tools, support for metabolomics and shotgun metagenomics analysis, and automated data provenance tracking to ensure reproducible, transparent microbiome data science.
Abstract: We present QIIME 2, an open-source microbiome data science platform accessible to users spanning the microbiome research ecosystem, from scientists and engineers to clinicians and policy makers. QIIME 2 provides new features that will drive the next generation of microbiome research. These include interactive spatial and temporal analysis and visualization tools, support for metabolomics and shotgun metagenomics analysis, and automated data provenance tracking to ensure reproducible, transparent microbiome data science.
••
TL;DR: It is revealed that metasurfaces created by seemingly different lattices of (dielectric or metallic) meta-atoms with broken in-plane symmetry can support sharp high-Q resonances arising from a distortion of symmetry-protected bound states in the continuum.
Abstract: We reveal that metasurfaces created by seemingly different lattices of (dielectric or metallic) meta-atoms with broken in-plane symmetry can support sharp high-$Q$ resonances arising from a distortion of symmetry-protected bound states in the continuum. We develop a rigorous theory of such asymmetric periodic structures and demonstrate a link between the bound states in the continuum and Fano resonances. Our results suggest the way for smart engineering of resonances in metasurfaces for many applications in nanophotonics and metaoptics.
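The link between bound states in the continuum and Fano resonances can be illustrated with the textbook Fano lineshape. This is the generic formula, not the paper's specific metasurface model; the center frequency, linewidth, and asymmetry parameter values used below are arbitrary.

```python
def fano(omega, omega0, gamma, q):
    """Textbook Fano lineshape: an asymmetric resonance profile with
    center omega0, linewidth gamma, and asymmetry parameter q.
    The profile vanishes at the antiresonance Omega = -q and peaks
    near the resonance with amplitude ~ q**2."""
    big_omega = 2.0 * (omega - omega0) / gamma
    return (q + big_omega) ** 2 / (1.0 + big_omega ** 2)
```

Narrowing gamma (i.e., raising the quality factor, as the distorted symmetry-protected BICs do) sharpens this profile without changing its characteristic asymmetric shape.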
••
TL;DR: In this article, the authors present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves.
Abstract: We present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves. We estimate the sensitivity of the network to transient gravitational-wave signals, and study the capability of the network to determine the sky location of the source. We report our findings for gravitational-wave transients, with particular focus on gravitational-wave signals from the inspiral of binary neutron star systems, which are the most promising targets for multi-messenger astronomy. The ability to localize the sources of the detected signals depends on the geographical distribution of the detectors and their relative sensitivity, and 90% credible regions can be as large as thousands of square degrees when only two sensitive detectors are operational. Determining the sky position of a significant fraction of detected signals to areas of 5–20 deg² requires at least three detectors of sensitivity within a factor of ∼2 of each other and with a broad frequency bandwidth. When all detectors, including KAGRA and the third LIGO detector in India, reach design sensitivity, a significant fraction of gravitational-wave signals will be localized to a few square degrees by gravitational-wave observations alone.
••
18 Jun 2018
TL;DR: The Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery, is presented, together with the Room-to-Room (R2R) dataset, the first benchmark dataset for visually grounded natural language navigation in real buildings.
Abstract: A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator - a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings - the Room-to-Room (R2R) dataset.
••
TL;DR: Graphene networks with "well-sequencing genes" can serve as nanogenerators, thermally promoting electromagnetic wave absorption by 250%, with broadened bandwidth covering the whole investigated frequency, opening up an unexpected horizon for converting, storing, and reusing waste electromagnetic energy.
Abstract: Electromagnetic energy radiation is becoming a "health-killer" of living bodies, especially around industrial transformer substation and electricity pylon. Harvesting, converting, and storing waste energy for recycling are considered the ideal ways to control electromagnetic radiation. However, heat-generation and temperature-rising with performance degradation remain big problems. Herein, graphene-silica xerogel is dissected hierarchically from functions to "genes," thermally driven relaxation and charge transport, experimentally and theoretically, demonstrating a competitive synergy on energy conversion. A generic approach of "material genes sequencing" is proposed, tactfully transforming the negative effects of heat energy to superiority for switching self-powered and self-circulated electromagnetic devices, beneficial for waste energy harvesting, conversion, and storage. Graphene networks with "well-sequencing genes" (w = Pc /Pp > 0.2) can serve as nanogenerators, thermally promoting electromagnetic wave absorption by 250%, with broadened bandwidth covering the whole investigated frequency. This finding of nonionic energy conversion opens up an unexpected horizon for converting, storing, and reusing waste electromagnetic energy, providing the most promising way for governing electromagnetic pollution with self-powered and self-circulated electromagnetic devices.
••
University of Cambridge1, Australian National University2, Norwegian Institute of Public Health3, Utrecht University4, University of Tromsø5, The George Institute for Global Health6, Johns Hopkins University7, University of Oxford8, National Institutes of Health9, University of Copenhagen10, Copenhagen University Hospital11, Fiona Stanley Hospital12, Harry Perkins Institute of Medical Research13, University of Western Australia14, University of London15, Lund University16, University of Pittsburgh17, French Institute of Health and Medical Research18, University College London19, Technische Universität München20, University of Ulm21, University of Padua22, University of Southampton23, German Cancer Research Center24, Erasmus University Medical Center25, Umeå University26, Cardiff University27, Greifswald University Hospital28, Aarhus University29, Portland State University30, University of New South Wales31, Harvard University32, National and Kapodistrian University of Athens33, University of Hawaii34, Columbia University35, University of Iowa36, Duke University37, Yamagata University38, Tuskegee University39, University of Oulu40, University of Helsinki41, Medical University of South Carolina42, University of Washington43, Kaiser Permanente44, University of Groningen45, University of Granada46, Yale University47, Prevention Institute48, University of Edinburgh49, Uppsala University50, Basque Government51, Kyushu University52, Royal Prince Alfred Hospital53, Harokopio University54, University of California, San Diego55, VU University Medical Center56, Aalborg University57, University of Eastern Finland58, Laval University59, University of Vermont60, Wake Forest Baptist Medical Center61, Wake Forest University62, Kanazawa Medical University63, Baker IDI Heart and Diabetes Institute64, Heidelberg University65, Istituto Superiore di Sanità66, Pasteur Institute67, City College of New York68, Howard University69, University of Glasgow70, International Agency for Research on Cancer71, University of Bristol72, University of Auckland73
TL;DR: In current drinkers of alcohol in high-income countries, the threshold for lowest risk of all-cause mortality was about 100 g/week, and the data support limits for alcohol consumption that are lower than those recommended in most current guidelines.
••
TL;DR: Clinicians are provided with evidence to recommend that patients obtain both aerobic and resistance exercise of at least moderate intensity on as many days of the week as feasible, in line with current exercise guidelines, to improve cognitive function.
Abstract: Background Physical exercise is seen as a promising intervention to prevent or delay cognitive decline in individuals aged 50 years and older, yet the evidence from reviews is not conclusive. Objectives To determine if physical exercise is effective in improving cognitive function in this population. Design Systematic review with multilevel meta-analysis. Data sources Electronic databases Medline (PubMed), EMBASE (Scopus), PsychINFO and CENTRAL (Cochrane) from inception to November 2016. Eligibility criteria Randomised controlled trials of physical exercise interventions in community-dwelling adults older than 50 years, with an outcome measure of cognitive function. Results The search returned 12 820 records, of which 39 studies were included in the systematic review. Analysis of 333 dependent effect sizes from 36 studies showed that physical exercise improved cognitive function (0.29; 95% CI 0.17 to 0.41). Conclusions Physical exercise improved cognitive function in the over-50s, regardless of the cognitive status of participants. To improve cognitive function, this meta-analysis provides clinicians with evidence to recommend that patients obtain both aerobic and resistance exercise of at least moderate intensity on as many days of the week as feasible, in line with current exercise guidelines.
••
TL;DR: An imaging-based nanophotonic technique can resolve absorption fingerprints without the need for spectrometry, frequency scanning, or moving mechanical parts, thereby paving the way toward sensitive and versatile miniaturized mid-infrared spectroscopy devices.
Abstract: Metasurfaces provide opportunities for wavefront control, flat optics, and subwavelength light focusing. We developed an imaging-based nanophotonic method for detecting mid-infrared molecular fingerprints and implemented it for the chemical identification and compositional analysis of surface-bound analytes. Our technique features a two-dimensional pixelated dielectric metasurface with a range of ultrasharp resonances, each tuned to a discrete frequency; this enables molecular absorption signatures to be read out at multiple spectral points, and the resulting information is then translated into a barcode-like spatial absorption map for imaging. The signatures of biological, polymer, and pesticide molecules can be detected with high sensitivity, covering applications such as biosensing and environmental monitoring. Our chemically specific technique can resolve absorption fingerprints without the need for spectrometry, frequency scanning, or moving mechanical parts, thereby paving the way toward sensitive and versatile miniaturized mid-infrared spectroscopy devices.
••
Wildlife Conservation Society1, University of Queensland2, University of Northern British Columbia3, Canadian Forest Service4, Wildlife Conservation Society Canada5, Imperial College London6, University of Maryland, College Park7, American Museum of Natural History8, James Cook University9, Woods Hole Research Center10, Swedish University of Agricultural Sciences11, Forest Trends12, United Nations Development Programme13, Australian National University14
TL;DR: It is argued that maintaining and, where possible, restoring the integrity of dwindling intact forests is an urgent priority for current global efforts to halt the ongoing biodiversity crisis, slow rapid climate change and achieve sustainability goals.
Abstract: As the terrestrial human footprint continues to expand, the amount of native forest that is free from significant damaging human activities is in precipitous decline. There is emerging evidence that the remaining intact forest supports an exceptional confluence of globally significant environmental values relative to degraded forests, including imperilled biodiversity, carbon sequestration and storage, water provision, indigenous culture and the maintenance of human health. Here we argue that maintaining and, where possible, restoring the integrity of dwindling intact forests is an urgent priority for current global efforts to halt the ongoing biodiversity crisis, slow rapid climate change and achieve sustainability goals. Retaining the integrity of intact forest ecosystems should be a central component of proactive global and national environmental strategies, alongside current efforts aimed at halting deforestation and promoting reforestation.
••
TL;DR: In this paper, the authors describe the processing of the Gaia DR2 solar system data and the criteria used to select the sample published in Gaia DR2, and explore the data set to assess its quality.
Abstract: Context. The Gaia spacecraft of the European Space Agency (ESA) has been securing observations of solar system objects (SSOs) since the beginning of its operations. Data Release 2 (DR2) contains the observations of a selected sample of 14,099 SSOs. These asteroids have been already identified and have been numbered by the Minor Planet Center repository. Positions are provided for each Gaia observation at CCD level. As additional information, complementary to astrometry, the apparent brightness of SSOs in the unfiltered G band is also provided for selected observations. Aims. We explain the processing of SSO data, and describe the criteria we used to select the sample published in Gaia DR2. We then explore the data set to assess its quality. Methods. To exploit the main data product for the solar system in Gaia DR2, which is the epoch astrometry of asteroids, it is necessary to take into account the unusual properties of the uncertainty, as the position information is nearly one-dimensional. When this aspect is handled appropriately, an orbit fit can be obtained with post-fit residuals that are overall consistent with the a-priori error model that was used to define individual values of the astrometric uncertainty. The role of both random and systematic errors is described. The distribution of residuals allowed us to identify possible contaminants in the data set (such as stars). Photometry in the G band was compared to computed values from reference asteroid shapes and to the flux registered at the corresponding epochs by the red and blue photometers (RP and BP). Results. The overall astrometric performance is close to the expectations, with an optimal range of brightness G ~ 12−17. In this range, the typical transit-level accuracy is well below 1 mas. For fainter asteroids, the growing photon noise deteriorates the performance. Asteroids brighter than G ~ 12 are affected by a lower performance of the processing of their signals. The dramatic improvement brought by Gaia DR2 astrometry of SSOs is demonstrated by comparisons to the archive data and by preliminary tests on the detection of subtle non-gravitational effects.
••
TL;DR: After 1 year of CVC treatment, twice as many subjects achieved improvement in fibrosis and no worsening of SH compared with placebo, and these findings warrant phase 3 evaluation.
••
TL;DR: WHO has recently launched new guidelines on the use of medically important antimicrobials in food-producing animals, recommending that farmers and the food industry stop using antimicrobials routinely to promote growth and prevent disease in healthy animals.
Abstract: One Health is the collaborative effort of multiple health science professions to attain optimal health for people, domestic animals, wildlife, plants, and our environment. The drivers of antimicrobial resistance include antimicrobial use and abuse in human, animal, and environmental sectors and the spread of resistant bacteria and resistance determinants within and between these sectors and around the globe. Most of the classes of antimicrobials used to treat bacterial infections in humans are also used in animals. Given the important and interdependent human, animal, and environmental dimensions of antimicrobial resistance, it is logical to take a One Health approach when addressing this problem. This includes taking steps to preserve the continued effectiveness of existing antimicrobials by eliminating their inappropriate use and by limiting the spread of infection. Major concerns in the animal health and agriculture sectors are mass medication of animals with antimicrobials that are critically important for humans, such as third-generation cephalosporins and fluoroquinolones, and the long-term, in-feed use of medically important antimicrobials, such as colistin, tetracyclines, and macrolides, for growth promotion. In the human sector it is essential to prevent infections, reduce over-prescribing of antimicrobials, improve sanitation, and improve hygiene and infection control. Pollution from inadequate treatment of industrial, residential, and farm waste is expanding the resistome in the environment. Numerous countries and several international agencies have included a One Health approach within their action plans to address antimicrobial resistance. Necessary actions include improvements in antimicrobial use regulation and policy, surveillance, stewardship, infection control, sanitation, animal husbandry, and alternatives to antimicrobials. 
These guidelines aim to help preserve the effectiveness of antimicrobials that are important for human medicine by reducing their use in animals.
••
Australian National University1, University of Bordeaux2, Institut de recherche pour le développement3, International Food Policy Research Institute4, Food and Agriculture Organization5, Colorado State University6, University of Adelaide7, Tsinghua University8, University of Oxford9, University of Idaho10
TL;DR: It is shown that to mitigate global water scarcity, increases in irrigation efficiency (IE) must be accompanied by robust water accounting and measurements, a cap on extractions, an assessment of uncertainties, the valuation of trade-offs, and a better understanding of the incentives and behavior of irrigators.
Abstract: S.A.W. was supported by the Australian Research Council project FT140100773; C.R. was supported by the CGIAR Research Program on Water, Land, and Ecosystems; and F.M. was supported by the Agence Nationale de la Recherche (ANR) AMETHYST project (ANR-12 TMED-0006-01).
••
TL;DR: This paper presents a geodesic distance based technique that provides reliable and temporally consistent saliency measurement of superpixels as a prior for pixel-wise labeling in video saliency estimation.
Abstract: Video saliency, aiming for estimation of a single dominant object in a sequence, offers strong object-level cues for unsupervised video object segmentation. In this paper, we present a geodesic distance based technique that provides reliable and temporally consistent saliency measurement of superpixels as a prior for pixel-wise labeling. Using undirected intra-frame and inter-frame graphs constructed from spatiotemporal edges of appearance and motion, and a skeleton abstraction step to further enhance saliency estimates, our method formulates the pixel-wise segmentation task as an energy minimization problem on a function that consists of unary terms of global foreground and background models, dynamic location models, and pairwise terms of label smoothness potentials. We perform extensive quantitative and qualitative experiments on benchmark datasets. Our method achieves superior performance in comparison to the current state-of-the-art in terms of accuracy and speed.
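The geodesic-distance prior at the heart of this method can be illustrated with a toy shortest-path computation: superpixels separated from the frame boundary (assumed background) by strong appearance/motion edges accumulate a large geodesic distance and are therefore salient. The graph, weights, and function name below are illustrative, not the paper's implementation:

```python
import heapq

def geodesic_saliency(n_nodes, edges, boundary):
    """Toy sketch: saliency of each superpixel = shortest-path
    (geodesic) distance to the frame-boundary superpixels, with edge
    weights standing in for appearance/motion differences.
    `edges` is a list of (u, v, weight); `boundary` is a set of node
    ids assumed to be background."""
    adj = {i: [] for i in range(n_nodes)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    # Multi-source Dijkstra from all boundary superpixels at once.
    dist = {i: float("inf") for i in range(n_nodes)}
    for b in boundary:
        dist[b] = 0.0
    heap = [(0.0, b) for b in boundary]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Four superpixels in a row: 0 and 3 touch the frame boundary; strong
# appearance edges (weight 5.0) isolate node 2, making it most salient.
d = geodesic_saliency(4, [(0, 1, 0.2), (1, 2, 5.0), (2, 3, 5.0)], {0, 3})
```

Node 2 ends up with the largest geodesic distance, i.e. the highest saliency under this prior.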
••
TL;DR: Findings indicate that over the next 20 years there will be an expansion of morbidity, particularly complex multi-morbidity (4+ diseases), and advocate for a new focus on prevention of, and appropriate and efficient service provision for those with, complex multi-morbidity.
Abstract: Background Models projecting future disease burden have focussed on one or two diseases. Little is known about how risk factors of younger cohorts will play out in the future burden of multi-morbidity (two or more concurrent long-term conditions). Design A dynamic microsimulation model, the Population Ageing and Care Simulation (PACSim) model, simulates the characteristics (sociodemographic factors, health behaviours, chronic diseases and geriatric conditions) of individuals over the period 2014 to 2040. Population 303,589 individuals aged 35 years and over (a 1% random sample of the 2014 England population) created from Understanding Society, the English Longitudinal Study of Ageing, and the Cognitive Function and Ageing Study II. Main outcome measures The prevalence of, numbers with, and years lived with chronic diseases, geriatric conditions, and multi-morbidity. Results Between 2015 and 2035, multi-morbidity prevalence is estimated to increase, with the proportion with 4+ diseases almost doubling (2015: 9.8%; 2035: 17.0%); two-thirds of those with 4+ diseases will have mental ill-health (dementia, depression, cognitive impairment no dementia). Multi-morbidity prevalence in incoming cohorts aged 65-74 years will rise (2015: 45.7%; 2035: 52.8%). Life expectancy gains (men: 3.6 years; women: 2.9 years) will be spent mostly with 4+ diseases (men: 2.4 years, 65.9%; women: 2.5 years, 85.2%), resulting from increased prevalence of, rather than longer survival with, multi-morbidity. Conclusions Our findings indicate that over the next twenty years there will be an expansion of morbidity, particularly complex multi-morbidity (4+ diseases). We advocate a new focus on prevention of, and appropriate and efficient service provision for those with, complex multi-morbidity.
•
13 Feb 2018
TL;DR: This self-contained guide will benefit those who seek both to understand the theory behind CNNs and to gain hands-on experience applying CNNs in computer vision, providing a comprehensive introduction to CNNs.
Abstract: Computer vision has become increasingly important and effective in recent years due to its wide-ranging applications in areas as diverse as smart surveillance and monitoring, health and medicine, sports and recreation, robotics, drones, and self-driving cars. Visual recognition tasks, such as image classification, localization, and detection, are the core building blocks of many of these applications, and recent developments in Convolutional Neural Networks (CNNs) have led to outstanding performance in these state-of-the-art visual recognition tasks and systems. As a result, CNNs now form the crux of deep learning algorithms in computer vision. This self-contained guide will benefit those who seek both to understand the theory behind CNNs and to gain hands-on experience applying CNNs in computer vision. It provides a comprehensive introduction to CNNs, starting with the essential concepts behind neural networks: training, regularization, and optimization of CNNs. The book also discusses a wide range of loss functions, network layers, and popular CNN architectures, reviews the different techniques for the evaluation of CNNs, and presents some popular CNN tools and libraries that are commonly used in computer vision. Further, this text describes and discusses case studies related to the application of CNNs in computer vision, including image classification, object detection, semantic segmentation, scene understanding, and image generation. This book is ideal for undergraduate and graduate students, since no prior background knowledge in the field is required to follow the material, as well as for new researchers, developers, engineers, and practitioners who are interested in gaining a quick understanding of CNN models.
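The core building block the guide centres on, a convolution followed by a non-linearity, can be sketched in a few lines of plain Python. This is an illustrative toy (as in most deep-learning frameworks, the "convolution" is actually cross-correlation), not material from the book:

```python
def conv2d(image, kernel):
    """Minimal 'valid' 2D convolution (cross-correlation, as in most
    deep-learning frameworks): the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
    return out

def relu(x):
    """Element-wise ReLU non-linearity."""
    return [[max(0.0, v) for v in row] for row in x]

# A vertical-edge detector applied to a 4x4 image whose right half is
# bright: the feature map responds only at the vertical boundary.
img = [[0, 0, 9, 9]] * 4
k = [[-1, 1], [-1, 1]]  # simple vertical-edge kernel
feat = relu(conv2d(img, k))
```

Stacking many such filtered-and-rectified maps, interleaved with pooling, is what the architectures surveyed in the book are built from.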
••
TL;DR: Improving sanitation, increasing access to clean water, and ensuring good governance, as well as increasing public health-care expenditure and better regulating the private health sector are all necessary to reduce global antimicrobial resistance.
••
TL;DR: A new graphical method, the Doi plot, to visualize asymmetry and also a new measure, the LFK index, to detect and quantify asymmetry of study effects in Doi plots are proposed and demonstrated.
Abstract: Detection of publication and related biases remains suboptimal and threatens the validity and interpretation of meta-analytical findings. When bias is present, it usually differentially affects small and large studies, manifesting as an association between precision and effect size and therefore visual asymmetry of conventional funnel plots. This asymmetry can be quantified, and Egger's regression is, by far, the most widely used statistical measure for quantifying funnel plot asymmetry. However, concerns have been raised about both the visual appearance of funnel plots and the sensitivity of Egger's regression to detect such asymmetry, particularly when the number of studies is small. In this article, we propose a new graphical method, the Doi plot, to visualize asymmetry, and a new measure, the LFK index, to detect and quantify asymmetry of study effects in Doi plots. We demonstrate that the visual representation of asymmetry is better in the Doi plot than in the funnel plot. The LFK index also discriminated better between asymmetry due to simulated publication bias and chance or no asymmetry, with areas under the receiver operating characteristic curve of 0.74-0.88 in simulations of meta-analyses with 5, 10, 15, and 20 studies; Egger's regression had lower values of 0.58-0.75 across the same simulations. The LFK index also had higher sensitivity (71.3-72.1%) than Egger's regression (18.5-43.0%). We conclude that the methods proposed in this article can markedly improve the ability of researchers to detect bias in meta-analysis.
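Egger's regression, the comparator the LFK index is benchmarked against, is straightforward to sketch: regress the standardized effect on precision and inspect the intercept. The Doi plot and LFK index themselves are the paper's contribution and are not reproduced here; this shows only the standard comparator:

```python
def egger_intercept(effects, ses):
    """Egger's regression for funnel-plot asymmetry: regress the
    standardized effect (effect / SE) on precision (1 / SE) by
    ordinary least squares. An intercept far from zero signals
    small-study effects (funnel-plot asymmetry)."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx

# Symmetric funnel: effect size unrelated to precision -> intercept ~ 0.
sym = egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.4, 0.8])

# Asymmetric funnel: less precise (smaller) studies report larger
# effects -> positive intercept.
asym = egger_intercept([0.5, 0.6, 0.8, 1.2], [0.1, 0.2, 0.4, 0.8])
```

In practice the intercept is tested against zero with a t-test; as the abstract notes, that test loses power when the number of studies is small.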
•
TL;DR: A category-level adversarial network is introduced, aiming to enforce local semantic consistency during the trend of global alignment, to take a close look at the category- level data distribution and align each class with an adaptive adversarial loss.
Abstract: We consider the problem of unsupervised domain adaptation in semantic segmentation. The key to this problem is reducing the domain shift, i.e., enforcing the data distributions of the two domains to be similar. A popular strategy is to align the marginal distribution in the feature space through adversarial learning. However, this global alignment strategy does not consider the local category-level feature distribution. A possible consequence of the global movement is that some categories which are originally well aligned between the source and target may be incorrectly mapped. To address this problem, this paper introduces a category-level adversarial network, aiming to enforce local semantic consistency during the trend of global alignment. Our idea is to take a close look at the category-level data distribution and align each class with an adaptive adversarial loss. Specifically, we reduce the weight of the adversarial loss for category-level aligned features while increasing the adversarial force for those poorly aligned. In this process, we decide how well a feature is category-level aligned between source and target by a co-training approach. In two domain adaptation tasks, i.e., GTA5 -> Cityscapes and SYNTHIA -> Cityscapes, we validate that the proposed method matches the state of the art in segmentation accuracy.
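The adaptive reweighting idea can be caricatured in a few lines: categories whose source and target features already agree receive a smaller adversarial weight, while poorly aligned categories receive a larger one. The linear rule, function, and class names below are illustrative assumptions; the paper's actual weighting, derived from co-training agreement, may differ:

```python
def adaptive_adversarial_weights(agreement, base_weight=1.0):
    """Sketch of category-level adversarial reweighting.

    `agreement` maps class name -> alignment score in [0, 1], e.g. the
    consistency between two co-trained classifiers on target features.
    Well-aligned classes (score near 1) get a small adversarial weight
    so global alignment does not disturb them; poorly aligned classes
    (score near 0) keep a strong adversarial force."""
    return {c: base_weight * (1.0 - a) for c, a in agreement.items()}

# 'road' transfers easily between GTA5 and Cityscapes; 'sign' does not.
w = adaptive_adversarial_weights({"road": 0.9, "sign": 0.2})
```

The per-class weights would then scale the adversarial loss term for each category's features during training.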
••
European Centre for Medium-Range Weather Forecasts1, University of Bristol2, National Space Institute3, Goddard Space Flight Center4, European Space Agency5, National Oceanic and Atmospheric Administration6, Goethe University Frankfurt7, University of South Florida8, University of Bremen9, Academia Sinica10, University of Texas at Austin11, Chinese Academy of Sciences12, University of New South Wales13, Trent University14, University of Siegen15, IFREMER16, Commonwealth Scientific and Industrial Research Organisation17, California Institute of Technology18, University of Bonn19, University of Urbino20, Dresden University of Technology21, Old Dominion University22, University of Leeds23, ETH Zurich24, University of Grenoble25, University of Bern26, Northern Oklahoma College27, Australian National University28, University of Oslo29, University of Rennes30, University of the Balearic Islands31, University of Reading32, University of California, San Diego33, University of Ottawa34, University of California, Irvine35, University of Colorado Boulder36, University of Zurich37, Woods Hole Oceanographic Institution38, Delft University of Technology39, Alfred Wegener Institute for Polar and Marine Research40, Ohio State University41, University of Hamburg42, Utrecht University43, University of California44, Bjerknes Centre for Climate Research45, University of Tasmania46, University of La Rochelle47
TL;DR: In this paper, the authors present estimates of the altimetry-based global mean sea level (average rate of 3.1 +/- 0.3 mm/yr and acceleration of 0.1 mm/yr2 over 1993-present), as well as of the different components of the sea level budget over 2005-present, using GRACE-based ocean mass estimates.
Abstract: Global mean sea level is an integral of changes occurring in the climate system in response to unforced climate variability as well as natural and anthropogenic forcing factors. Its temporal evolution allows detecting changes (e.g., acceleration) in one or more components. Study of the sea level budget provides constraints on missing or poorly known contributions, such as the unsurveyed deep ocean or the still uncertain land water component. In the context of the World Climate Research Programme Grand Challenge entitled “Regional Sea Level and Coastal Impacts”, an international effort involving the sea level community worldwide has recently been initiated with the objective of assessing the various data sets used to estimate components of the sea level budget during the altimetry era (1993 to present). These data sets are based on the combination of a broad range of space-based and in situ observations, model estimates, and algorithms. Evaluating their quality, quantifying uncertainties, and identifying sources of discrepancies between component estimates is extremely useful for various applications in climate research. This effort involves several tens of scientists from about fifty research teams/institutions worldwide (www.wcrp-climate.org/grand-challenges/gc-sea-level). The results presented in this paper are a synthesis of the first assessment performed during 2017-2018. We present estimates of the altimetry-based global mean sea level (average rate of 3.1 +/- 0.3 mm/yr and acceleration of 0.1 mm/yr2 over 1993-present), as well as of the different components of the sea level budget (http://doi.org/10.17882/54854). We further examine closure of the sea level budget, comparing the observed global mean sea level with the sum of components. Ocean thermal expansion, glaciers, Greenland, and Antarctica contribute 42%, 21%, 15%, and 8%, respectively, of the global mean sea level rise over 1993-present. We also study the sea level budget over 2005-present, using GRACE-based ocean mass estimates instead of the sum of individual mass components. Results show closure of the sea level budget within 0.3 mm/yr. Substantial uncertainty remains for the land water storage component, as shown by examining individual mass contributions to sea level.
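The quoted component fractions can be sanity-checked with back-of-envelope arithmetic (a sketch based only on the percentages and trend quoted above, not part of the assessment itself):

```python
# Component fractions of the 3.1 mm/yr altimetry-era GMSL trend,
# as quoted in the abstract (1993-present).
gmsl_rate = 3.1  # mm/yr
fractions = {
    "thermal expansion": 0.42,
    "glaciers": 0.21,
    "Greenland": 0.15,
    "Antarctica": 0.08,
}

# Convert fractions to rates in mm/yr.
rates = {name: gmsl_rate * f for name, f in fractions.items()}

explained = sum(rates.values())   # ~2.67 mm/yr from the four terms
residual = gmsl_rate - explained  # ~0.43 mm/yr (the remaining ~14%)
```

The four listed components account for 86% of the trend; the ~0.43 mm/yr remainder is left for land water storage and other smaller terms, consistent with the abstract's point that land water remains the most uncertain contribution.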