Showing papers by "University of Oxford" published in 2016
••
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s⁻¹ Mpc⁻¹, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016 and a corresponding estimate of the reionization redshift. These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |ΩK| < 0.005.
Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ² potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing. We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
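For orientation, the quoted parameters combine in the late-time expansion rate of the spatially flat base ΛCDM model (radiation neglected; with ΩK = 0 the dark-energy density is fixed by ΩΛ = 1 − Ωm):

```latex
% Flat base-\Lambda CDM Hubble rate in terms of the quoted H_0 and \Omega_m
H(z)^2 = H_0^2 \left[\, \Omega_m (1+z)^3 + (1 - \Omega_m) \,\right],
\qquad \Omega_m = 0.308, \quad H_0 = 67.8~\mathrm{km\,s^{-1}\,Mpc^{-1}}
```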
10,728 citations
••
Broad Institute1, Harvard University2, Boston Children's Hospital3, University of Washington4, University of Arizona5, Cardiff University6, Google7, Icahn School of Medicine at Mount Sinai8, Samsung Medical Center9, Vertex Pharmaceuticals10, University of Michigan11, University of Cambridge12, State University of New York Upstate Medical University13, Karolinska Institutet14, University of Eastern Finland15, Wellcome Trust Centre for Human Genetics16, University of Oxford17, Cedars-Sinai Medical Center18, University of Ottawa19, University of Pennsylvania20, University of North Carolina at Chapel Hill21, University of Helsinki22, University of California, San Diego23, University of Mississippi Medical Center24
TL;DR: The aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC) provides direct evidence for the presence of widespread mutational recurrence.
Abstract: Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation; we identify 3,230 genes with near-complete depletion of predicted protein-truncating variants, 72% of which have no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human 'knockout' variants in protein-coding genes.
8,758 citations
••
University of Bristol1, Harvard University2, University Hospitals Bristol NHS Foundation Trust3, Research Triangle Park4, University of Toronto5, University of Oxford6, University of Ottawa7, Paris Descartes University8, University of London9, University of York10, University of Birmingham11, University of Southern Denmark12, University of Liverpool13, University of East Anglia14, Loyola University Chicago15, University of Aberdeen16, Kaiser Permanente17, Baruch College18, McMaster University19, Cochrane Collaboration20, McGill University21, Ottawa Hospital Research Institute22, University of Louisville23, University of Melbourne24
TL;DR: Risk of Bias In Non-randomised Studies - of Interventions is developed, a new tool for evaluating risk of bias in estimates of the comparative effectiveness of interventions from studies that did not use randomisation to allocate units or clusters of individuals to comparison groups.
Abstract: Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.
8,028 citations
••
Technical University of Madrid1, Stanford University2, Elsevier3, VU University Amsterdam4, National Institutes of Health5, University of Leicester6, Harvard University7, Beijing Genomics Institute8, Maastricht University9, Wageningen University and Research Centre10, University of Oxford11, Heriot-Watt University12, University of Manchester13, University of California, San Diego14, Leiden University Medical Center15, Leiden University16, Federal University of São Paulo17, Science for Life Laboratory18, Bayer19, Swiss Institute of Bioinformatics20, Cray21, University Medical Center Groningen22, Erasmus University Rotterdam23
TL;DR: The FAIR Data Principles as mentioned in this paper are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.
7,602 citations
••
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes.
For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target more than one autophagy-related protein by gene knockout or RNA interference. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, implying that not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
5,187 citations
••
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015), as discussed by the authors, estimates the incidence, prevalence, and years lived with disability for diseases and injuries at the global, regional, and national scale over the period 1990 to 2015.
5,050 citations
••
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.
4,804 citations
••
01 Jan 2016
TL;DR: This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
Abstract: Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that has been gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
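The surrogate-plus-acquisition loop the abstract describes can be sketched in a few lines of numpy. This is not the review's code: the one-dimensional `objective`, the kernel lengthscale, and the lower-confidence-bound acquisition with kappa = 2 are all illustrative choices standing in for an expensive tunable system.

```python
import numpy as np

def rbf(a, b, ls=0.25):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Gaussian-process posterior mean and std-dev at candidates Xs,
    # given observations (X, y) and a zero-mean prior.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)   # k(x, x) = 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def objective(x):
    # Hypothetical expensive system to tune; its best setting is x = 0.65.
    return (x - 0.65) ** 2

# Sequential design: refit the surrogate, then evaluate wherever the lower
# confidence bound (mean minus 2 std-devs) is smallest.
grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.0, 0.5, 1.0])
y = objective(X)
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    nxt = grid[np.argmin(mu - 2.0 * sigma)]
    X = np.append(X, nxt)
    y = np.append(y, objective(nxt))

best_x = X[np.argmin(y)]
```

The acquisition trades exploration (high posterior std-dev) against exploitation (low posterior mean), which is why only a handful of objective evaluations are needed.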
3,703 citations
••
TL;DR: Using multi-modal magnetic resonance images from the Human Connectome Project and an objective semi-automated neuroanatomical approach, 180 areas per hemisphere are delineated bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults.
Abstract: Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal 'fingerprint' of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.
3,414 citations
••
08 Oct 2016
TL;DR: A basic tracking algorithm is equipped with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video and achieves state-of-the-art performance in multiple benchmarks.
Abstract: The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.
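The core operation of such a fully-convolutional Siamese tracker, sliding one embedding over another to produce a score map, can be illustrated with raw pixels standing in for learned features. This is a toy sketch, not the paper's tracker: the real system correlates deep ConvNet feature maps of an exemplar and a search region.

```python
import numpy as np

def xcorr_score(exemplar, search):
    # Slide the exemplar over the search region and record inner products:
    # the cross-correlation score map at the heart of a fully-convolutional
    # Siamese tracker (here on raw pixels rather than ConvNet embeddings).
    eh, ew = exemplar.shape
    sh, sw = search.shape
    out = np.empty((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(exemplar * search[i:i + eh, j:j + ew])
    return out

# Toy demo: plant the exemplar at a known offset inside a larger search image,
# then recover that offset as the argmax of the score map.
rng = np.random.default_rng(0)
exemplar = rng.random((8, 8))
search = 0.1 * rng.random((32, 32))
search[10:18, 5:13] = exemplar           # ground-truth location (row 10, col 5)
score = xcorr_score(exemplar, search)
loc = np.unravel_index(np.argmax(score), score.shape)
```

Because the whole operation is a convolution, the score for every candidate location comes out of a single forward pass, which is what lets the tracker run beyond real-time frame rates.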
2,936 citations
••
Imperial College London1, University of Barcelona2, Keio University3, University of Duisburg-Essen4, Queen's University5, Peter MacCallum Cancer Centre6, University of Michigan7, University of São Paulo8, Yale University9, Northern General Hospital10, University of Caen Lower Normandy11, Fred Hutchinson Cancer Research Center12, University of Oxford13, Memorial Sloan Kettering Cancer Center14, University of Sydney15, Sungkyunkwan University16, Seoul National University17, Kyorin University18, University of Copenhagen19, Nippon Medical School20, Katholieke Universiteit Leuven21, University of Texas MD Anderson Cancer Center22, University of Antwerp23, Hyogo College of Medicine24, University of Western Australia25, Glenfield Hospital26, Cleveland Clinic27, Icahn School of Medicine at Mount Sinai28, University of Turin29, Université libre de Bruxelles30, Juntendo University31, National Cancer Research Institute32, Mayo Clinic33, University of Toronto34, Sinai Grace Hospital35, Netherlands Cancer Institute36, Hiroshima University37, City of Hope National Medical Center38, University of Chicago39, New York University40, Georgetown University41, University of Tokushima42, University of Pisa43, Osaka University44, University of Valencia45, Good Samaritan Hospital46, Military Medical Academy47, Fundación Favaloro48, Autonomous University of Barcelona49, Complutense University of Madrid50, University of Oviedo51, National and Kapodistrian University of Athens52, Rovira i Virgili University53, Autonomous University of Madrid54, Ghent University55
TL;DR: The methods used to evaluate the resultant Stage groupings and the proposals put forward for the 8th edition of the TNM Classification for lung cancer due to be published late 2016 are described.
••
TL;DR: In this article, the authors used a Bayesian hierarchical model to estimate trends in diabetes prevalence, defined as fasting plasma glucose of 7.0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs in 200 countries and territories in 21 regions, by sex and from 1980 to 2014.
••
TL;DR: It is shown that using cesium ions along with formamidinium cations in lead bromide–iodide cells improved thermal and photostability and lead to high efficiency in single and tandem cells.
Abstract: Metal halide perovskite photovoltaic cells could potentially boost the efficiency of commercial silicon photovoltaic modules from ∼20% toward 30% when used in tandem architectures. An optimum perovskite cell optical band gap of ~1.75 electron volts (eV) can be achieved by varying halide composition, but to date, such materials have had poor photostability and thermal stability. Here we present a highly crystalline and compositionally photostable material, [HC(NH₂)₂]₀.₈₃Cs₀.₁₇Pb(I₀.₆Br₀.₄)₃, with an optical band gap of ~1.74 eV, and we fabricated perovskite cells that reached open-circuit voltages of 1.2 volts and power conversion efficiency of over 17% on small areas and 14.7% on 0.715 cm² cells. By combining these perovskite cells with a 19%-efficient silicon cell, we demonstrated the feasibility of achieving >25%-efficient four-terminal tandem cells.
••
Katholieke Universiteit Leuven1, University of Valencia2, Radboud University Nijmegen3, Sheba Medical Center4, University of Turin5, Hospital Clínico San Carlos6, Université Paris-Saclay7, University of Pisa8, Mayo Clinic9, University of São Paulo10, The Chinese University of Hong Kong11, University of Oxford12, Helsinki University Central Hospital13, University of Helsinki14, Institute of Cancer Research15, Bank of Cyprus16, University of Ioannina17, Odense University Hospital18, University of Amsterdam19, Otto-von-Guericke University Magdeburg20, Geneva College21, Medical University of Vienna22, Martin Luther University of Halle-Wittenberg23, Hebron University24, Imperial College Healthcare25
TL;DR: These ESMO consensus guidelines have been developed based on the current available evidence to provide a series of evidence-based recommendations to assist in the treatment and management of patients with mCRC in this rapidly evolving treatment setting.
••
TL;DR: CKD has a high global prevalence, consistently estimated at 11-13% with the majority at stage 3; future research should evaluate intervention strategies deliverable at scale to delay the progression of CKD and improve CVD outcomes.
Abstract: Chronic kidney disease (CKD) is a global health burden with a high economic cost to health systems and is an independent risk factor for cardiovascular disease (CVD). All stages of CKD are associated with increased risks of cardiovascular morbidity, premature mortality, and/or decreased quality of life. CKD is usually asymptomatic until later stages, and accurate prevalence data are lacking. We therefore sought to determine the prevalence of CKD globally, by stage, geographical location, gender and age. A systematic review and meta-analysis of observational studies estimating CKD prevalence in general populations was conducted through literature searches in 8 databases. We assessed pooled data using a random effects model. Of 5,842 potential articles, 100 studies of diverse quality were included, comprising 6,908,440 patients. The global mean (95% CI) CKD prevalence across all 5 stages was 13.4% (11.7-15.1%), and across stages 3-5 was 10.6% (9.2-12.2%). Weighting by study quality did not affect prevalence estimates. CKD prevalence by stage was: Stage 1 (eGFR > 90 + ACR > 30): 3.5% (2.8-4.2%); Stage 2 (eGFR 60-89 + ACR > 30): 3.9% (2.7-5.3%); Stage 3 (eGFR 30-59): 7.6% (6.4-8.9%); Stage 4 (eGFR 15-29): 0.4% (0.3-0.5%); and Stage 5 (eGFR < 15): 0.1% (0.1-0.1%). CKD thus has a high global prevalence, consistently estimated at between 11% and 13%, with the majority at stage 3. Future research should evaluate intervention strategies deliverable at scale to delay the progression of CKD and improve CVD outcomes.
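The random-effects pooling mentioned in the abstract can be illustrated with the classic DerSimonian-Laird estimator. The abstract says only "a random effects model", so this particular estimator, and the three study estimates in the demo, are illustrative stand-ins rather than the paper's actual method and data.

```python
import numpy as np

def dersimonian_laird(y, se):
    # Random-effects pooling of study estimates y with standard errors se,
    # via the DerSimonian-Laird method-of-moments estimate of the
    # between-study variance tau^2.
    w = 1.0 / se ** 2                          # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)           # fixed-effect pooled mean
    Q = np.sum(w * (y - ybar) ** 2)            # Cochran's heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    wr = 1.0 / (se ** 2 + tau2)                # random-effects weights
    mu = np.sum(wr * y) / np.sum(wr)           # pooled estimate
    se_mu = np.sqrt(1.0 / np.sum(wr))          # its standard error
    return mu, se_mu, tau2

# Demo on three hypothetical prevalence estimates with standard errors.
mu, se_mu, tau2 = dersimonian_laird(np.array([0.12, 0.10, 0.15]),
                                    np.array([0.01, 0.02, 0.03]))
```

When tau² is estimated as positive, the random-effects weights flatten toward equality, so smaller studies count for more than under fixed-effect pooling.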
••
Wellcome Trust Sanger Institute1, University of Michigan2, University of Oxford3, University of Geneva4, University of Exeter5, Greifswald University Hospital6, National Research Council7, University of Bristol8, University of Colorado Boulder9, University of Washington10, Fred Hutchinson Cancer Research Center11, SUNY Downstate Medical Center12, Erasmus University Rotterdam13, University of Trieste14, VU University Amsterdam15, South London and Maudsley NHS Foundation Trust16, King's College London17, University of Edinburgh18, Harvard University19, National Institutes of Health20, Harokopio University21, Innsbruck Medical University22, Broad Institute23, University of Helsinki24, Lund University25, Norwegian University of Science and Technology26, University of Cambridge27, University of Minnesota28, Technische Universität München29, University of North Carolina at Chapel Hill30, University of Toronto31, McGill University32, Leiden University33, University of Pennsylvania34, University of Groningen35, Utrecht University36, Churchill Hospital37
TL;DR: A reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies.
Abstract: We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies, and it can help to discover and refine causal loci. We describe remote server resources that allow researchers to carry out imputation and phasing consistently and efficiently.
••
27 Jun 2016
TL;DR: A new ConvNet architecture for spatiotemporal fusion of video snippets is proposed, and its performance on standard benchmarks where this architecture achieves state-of-the-art results is evaluated.
Abstract: Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.
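The contrast between softmax-level fusion and conv-layer fusion of the two streams can be sketched with plain arrays. This is illustrative only: the tiny feature maps and the untrained random 1x1-convolution weights are not from the paper, which fuses learned ConvNet towers.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 6, 6
spatial = rng.random((C, H, W))    # appearance-stream feature map
temporal = rng.random((C, H, W))   # motion-stream feature map

# Softmax-level ("late") fusion: simply average the two streams' outputs.
late = 0.5 * (spatial + temporal)

# Conv-layer fusion: concatenate the channels and mix them with a 1x1
# convolution (a plain matrix over channels), so subsequent layers can
# learn correspondences between the streams at each spatial position.
w = rng.standard_normal((C, 2 * C)) / np.sqrt(2 * C)
stacked = np.concatenate([spatial, temporal], axis=0)     # (2C, H, W)
conv_fused = np.einsum('oc,chw->ohw', w, stacked)         # (C, H, W)
```

Fusing at a convolution layer keeps only one tower after the fusion point, which is the source of the parameter saving the abstract reports relative to carrying both towers to the softmax.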
••
University of Oxford1, University of Bristol2, Cardiff University3, North Bristol NHS Trust4, Royal Victoria Infirmary5, University of Edinburgh6, University of Sheffield7, University Hospitals of Leicester NHS Trust8, Leeds Teaching Hospitals NHS Trust9, Freeman Hospital10, University of Cambridge11
TL;DR: At a median of 10 years, prostate-cancer-specific mortality was low irrespective of the treatment assigned, with no significant difference among treatments.
Abstract: BACKGROUND The comparative effectiveness of treatments for prostate cancer that is detected by prostate-specific antigen (PSA) testing remains uncertain. METHODS We compared active monitoring, radical prostatectomy, and external-beam radiotherapy for the treatment of clinically localized prostate cancer. Between 1999 and 2009, a total of 82,429 men 50 to 69 years of age received a PSA test; 2664 received a diagnosis of localized prostate cancer, and 1643 agreed to undergo randomization to active monitoring (545 men), surgery (553), or radiotherapy (545). The primary outcome was prostate-cancer mortality at a median of 10 years of follow-up. Secondary outcomes included the rates of disease progression, metastases, and all-cause deaths. RESULTS There were 17 prostate-cancer–specific deaths overall: 8 in the active-monitoring group (1.5 deaths per 1000 person-years; 95% confidence interval [CI], 0.7 to 3.0), 5 in the surgery group (0.9 per 1000 person-years; 95% CI, 0.4 to 2.2), and 4 in the radiotherapy group (0.7 per 1000 person-years; 95% CI, 0.3 to 2.0); the difference among the groups was not significant (P=0.48 for the overall comparison). In addition, no significant difference was seen among the groups in the number of deaths from any cause (169 deaths overall; P=0.87 for the comparison among the three groups). Metastases developed in more men in the active-monitoring group (33 men; 6.3 events per 1000 person-years; 95% CI, 4.5 to 8.8) than in the surgery group (13 men; 2.4 per 1000 person-years; 95% CI, 1.4 to 4.2) or the radiotherapy group (16 men; 3.0 per 1000 person-years; 95% CI, 1.9 to 4.9) (P=0.004 for the overall comparison). 
Higher rates of disease progression were seen in the active-monitoring group (112 men; 22.9 events per 1000 person-years; 95% CI, 19.0 to 27.5) than in the surgery group (46 men; 8.9 events per 1000 person-years; 95% CI, 6.7 to 11.9) or the radiotherapy group (46 men; 9.0 events per 1000 person-years; 95% CI, 6.7 to 12.0) (P<0.001 for the overall comparison). CONCLUSIONS At a median of 10 years, prostate-cancer–specific mortality was low irrespective of the treatment assigned, with no significant difference among treatments. Surgery and radiotherapy were associated with lower incidences of disease progression and metastases than was active monitoring.
••
TL;DR: In this paper, the authors present an extension to the Consolidated Standards of Reporting Trials (CONSORT) statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT.
Abstract: The Consolidated Standards of Reporting Trials (CONSORT) statement is a guideline designed to improve the transparency and quality of the reporting of randomised controlled trials (RCTs). In this article we present an extension to that statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT. The checklist applies to any randomised study in which a future definitive RCT, or part of it, is conducted on a smaller scale, regardless of its design (eg, cluster, factorial, crossover) or the terms used by authors to describe the study (eg, pilot, feasibility, trial, study). The extension does not directly apply to internal pilot studies built into the design of a main trial, non-randomised pilot and feasibility studies, or phase II studies, but these studies all have some similarities to randomised pilot and feasibility studies and so many of the principles might also apply. The development of the extension was motivated by the growing number of studies described as feasibility or pilot studies and by research that has identified weaknesses in their reporting and conduct. We followed recommended good practice to develop the extension, including carrying out a Delphi survey, holding a consensus meeting and research team meetings, and piloting the checklist. The aims and objectives of pilot and feasibility randomised studies differ from those of other randomised trials. Consequently, although much of the information to be reported in these trials is similar to that in RCTs assessing effectiveness and efficacy, there are some key differences in the type of information and in the appropriate interpretation of standard CONSORT reporting items. We have retained some of the original CONSORT statement items, but most have been adapted, some removed, and new items added.
The new items cover how participants were identified and consent obtained; if applicable, the prespecified criteria used to judge whether or how to proceed with a future definitive RCT; if relevant, other important unintended consequences; implications for progression from pilot to future definitive RCT, including any proposed amendments; and ethical approval or approval by a research review committee confirmed with a reference number. This article includes the 26 item checklist, a separate checklist for the abstract, a template for a CONSORT flowchart for these studies, and an explanation of the changes made and supporting examples. We believe that routine use of this proposed extension to the CONSORT statement will result in improvements in the reporting of pilot trials. Editor’s note: In order to encourage its wide dissemination this article is freely accessible on the BMJ and Pilot and Feasibility Studies journal websites.
••
University of Cambridge1, Harvard University2, Peking University3, National Institutes of Health4, University of Oxford5, Curtin University6, Australian National University7, Imperial College London8, American Cancer Society9, University of Southern California10, Johns Hopkins University11, University of Sydney12, Vanderbilt University13, Chinese Center for Disease Control and Prevention14, University of Bristol15, Capital Medical University16, Erasmus University Rotterdam17, Yonsei University18, Fred Hutchinson Cancer Research Center19, University of Turin20, University of Glasgow21, University of North Carolina at Chapel Hill22, Shiga University of Medical Science23, Innsbruck Medical University24, International Agency for Research on Cancer25, University of Hong Kong26, Massey University27
TL;DR: The associations of both overweight and obesity with higher all-cause mortality were broadly consistent across four continents, supporting strategies to combat the entire spectrum of excess adiposity in many populations.
••
01 Oct 2016
TL;DR: A fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps is proposed, and a novel way to efficiently learn feature map up-sampling within the network is presented.
Abstract: This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.
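The reverse Huber (berHu) loss mentioned above behaves like L1 for small residuals and like a scaled L2 for large ones, matching the heavy-tailed value distributions of depth maps. A minimal sketch, not the authors' exact implementation; the choice of threshold as 20% of the largest batch residual (`c_frac`) is an assumption, though a common one:

```python
import numpy as np

def berhu_loss(pred, target, c_frac=0.2):
    """Reverse Huber (berHu): L1 below the threshold c, scaled L2 above it."""
    resid = np.abs(pred - target)
    c = max(c_frac * resid.max(), 1e-6)   # threshold tied to the largest batch residual
    quadratic = (resid ** 2 + c ** 2) / (2 * c)  # continuous with L1 at |resid| == c
    return np.where(resid <= c, resid, quadratic).mean()
```

The quadratic branch is constructed so the two pieces meet with equal value and slope at `c`, so the loss stays differentiable where the branches join.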
••
TL;DR: There are opportunities to use such sustainable polymers in both high-value areas and in basic applications such as packaging.
Abstract: Renewable resources are used increasingly in the production of polymers. In particular, monomers such as carbon dioxide, terpenes, vegetable oils and carbohydrates can be used as feedstocks for the manufacture of a variety of sustainable materials and products, including elastomers, plastics, hydrogels, flexible electronics, resins, engineering polymers and composites. Efficient catalysis is required to produce monomers, to facilitate selective polymerizations and to enable recycling or upcycling of waste materials. There are opportunities to use such sustainable polymers in both high-value areas and in basic applications such as packaging. Life-cycle assessment can be used to quantify the environmental benefits of sustainable polymers.
•
TL;DR: In this paper, a fully-convolutional Siamese network is trained end-to-end on the ILSVRC15 dataset for object detection in video, which achieves state-of-the-art performance.
Abstract: The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object's appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.
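The heart of a fully-convolutional Siamese tracker is a dense cross-correlation between the exemplar's feature map and the search region's feature map, producing a score map whose peak locates the target. A toy sketch in which raw arrays stand in for learned features (the real network first embeds both inputs with shared convolutional weights):

```python
import numpy as np

def xcorr_score(exemplar, search):
    """Slide the exemplar feature map over the search feature map and
    return the dense similarity (cross-correlation) score map."""
    eh, ew, _ = exemplar.shape
    sh, sw, _ = search.shape
    out = np.zeros((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[i:i + eh, j:j + ew] * exemplar)
    return out
```

Because the same operation is applied at every offset, the whole search region is scored in one fully-convolutional pass, which is what makes frame rates beyond real time possible.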
•
TL;DR: In this paper, a spatial and temporal network can be fused at the last convolution layer without loss of performance, but with a substantial saving in parameters, and furthermore, pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance.
Abstract: Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.
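Finding (i), fusing the two towers at a convolution layer, amounts to stacking the spatial and temporal feature maps along the channel axis and mixing them with a learned 1x1 convolution. A hedged sketch of that step, with an arbitrary weight matrix standing in for learned filters:

```python
import numpy as np

def conv_fusion(spatial_feat, temporal_feat, weights):
    """Fuse two ConvNet towers at a conv layer: stack channels, then mix
    them with a 1x1 convolution (a channel-mixing matrix applied at
    every spatial position)."""
    stacked = np.concatenate([spatial_feat, temporal_feat], axis=-1)  # (H, W, 2C)
    return stacked @ weights  # weights: (2C, C_out)
```

Compared with averaging softmax scores, the 1x1 mixing lets the network learn pixel-wise correspondences between appearance and motion channels while roughly halving the parameters of the two separate towers above the fusion point.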
••
TL;DR: Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant, as discussed by the authors. There are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof; instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists.
Abstract: Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so, and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting.
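One protocol violation the authors emphasise, reporting only the analysis with the smallest P value, can be demonstrated directly: even when every null hypothesis is true, the minimum of several P values falls below 0.05 far more often than 5% of the time. A small simulation sketch using a one-sample z-test on simulated null data (sample sizes and trial counts are illustrative choices, not from the paper):

```python
import math
import random

def z_pvalue(xs):
    """Two-sided P value of a one-sample z-test of mean 0, known unit variance."""
    z = sum(xs) / math.sqrt(len(xs))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

def min_p_experiment(n_tests, n=30, trials=4000, seed=0):
    """Fraction of simulated experiments whose smallest of n_tests null
    P values dips below 0.05 -- the false-positive rate when only the
    'best' analysis is reported."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ps = [z_pvalue([rng.gauss(0, 1) for _ in range(n)])
              for _ in range(n_tests)]
        if min(ps) < 0.05:
            hits += 1
    return hits / trials
```

With one pre-specified test the rate sits near the nominal 5%; cherry-picking the best of five pushes it toward 1 - 0.95^5, roughly 23%, exactly the small-P-values-under-a-true-null behaviour the abstract warns about.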
••
Nicholas J Kassebaum1, Megha Arora1, Ryan M Barber1, Zulfiqar A Bhutta2 +679 more • Institutions (268)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.
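The DALY arithmetic underlying such burden estimates sums a fatal and a non-fatal component. A deliberately simplified sketch with illustrative numbers; the function name and flat inputs are hypothetical, since GBD's actual computation works age-by-age from full mortality and prevalence envelopes:

```python
def dalys(deaths, life_expectancy_at_death, prevalent_cases, disability_weight):
    """DALYs = YLL + YLD.
    YLL: years of life lost, deaths weighted by remaining standard life expectancy.
    YLD: years lived with disability, prevalent cases weighted by severity."""
    yll = deaths * life_expectancy_at_death
    yld = prevalent_cases * disability_weight
    return yll + yld
```

HALE then goes the other direction: it discounts life expectancy by the time expected to be lived in less than full health, so the two metrics partition the same fatal and non-fatal information.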
••
TL;DR: A framework for adaptive visual object tracking based on structured output prediction that is able to outperform state-of-the-art trackers on various benchmark videos and can easily incorporate additional features and kernels into the framework, which results in increased tracking performance.
Abstract: Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.
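The budgeting mechanism in (a) caps the size of the support-vector set so evaluation cost stays bounded. A loose sketch of one plausible maintenance step, discarding the support vector whose removal perturbs the decision function least; Struck's actual scheme additionally transfers the removed coefficient to a paired support vector, which is omitted here:

```python
import numpy as np

def enforce_budget(support_vecs, coeffs, budget):
    """If the support set exceeds `budget`, repeatedly drop the support
    vector with the smallest coefficient-weighted norm, i.e. the one
    whose removal changes the decision function least."""
    while len(support_vecs) > budget:
        impact = [abs(c) * np.linalg.norm(v)
                  for v, c in zip(support_vecs, coeffs)]
        k = int(np.argmin(impact))
        support_vecs.pop(k)
        coeffs.pop(k)
    return support_vecs, coeffs
```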
••
TL;DR: The cross-platform software tool TempEst (formerly known as Path-O-Gen) is introduced for the visualization and analysis of temporally sampled sequence data; it can be used to assess whether there is sufficient temporal signal in the data to proceed with phylogenetic molecular clock analysis, and to identify sequences whose genetic divergence and sampling date are incongruent.
Abstract: Gene sequences sampled at different points in time can be used to infer molecular phylogenies on a natural timescale of months or years, provided that the sequences in question undergo measurable amounts of evolutionary change between sampling times. Data sets with this property are termed heterochronous and have become increasingly common in several fields of biology, most notably the molecular epidemiology of rapidly evolving viruses. Here we introduce the cross-platform software tool, TempEst (formerly known as Path-O-Gen), for the visualization and analysis of temporally sampled sequence data. Given a molecular phylogeny and the dates of sampling for each sequence, TempEst uses an interactive regression approach to explore the association between genetic divergence through time and sampling dates. TempEst can be used to (1) assess whether there is sufficient temporal signal in the data to proceed with phylogenetic molecular clock analysis, and (2) identify sequences whose genetic divergence and sampling date are incongruent. Examination of the latter can help identify data quality problems, including errors in data annotation, sample contamination, sequence recombination, or alignment error. We recommend that all users of the molecular clock models implemented in BEAST first check their data using TempEst prior to analysis.
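TempEst's temporal-signal check is, at heart, a root-to-tip regression: genetic divergence against sampling date, with the slope estimating the substitution rate and the x-intercept the age of the root. A minimal sketch of that regression (not TempEst's implementation, which also interactively optimises the root placement):

```python
import numpy as np

def root_to_tip_regression(dates, divergences):
    """Least-squares fit of root-to-tip divergence against sampling date.
    Returns (rate, root_age, r2): slope = substitution rate per year,
    x-intercept = estimated date of the most recent common ancestor."""
    dates = np.asarray(dates, dtype=float)
    div = np.asarray(divergences, dtype=float)
    slope, intercept = np.polyfit(dates, div, 1)
    pred = slope * dates + intercept
    ss_res = np.sum((div - pred) ** 2)
    ss_tot = np.sum((div - div.mean()) ** 2)
    return slope, -intercept / slope, 1.0 - ss_res / ss_tot
```

A high R^2 indicates clock-like behaviour worth carrying into a BEAST analysis, while sequences far from the fitted line are the divergence/date-incongruent candidates the abstract describes (annotation errors, contamination, recombination, or alignment problems).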
••
TL;DR: The authors argue that to combat the threat to human health and biosecurity from antimicrobial resistance, an understanding of its mechanisms and drivers is needed.