Showing papers by "University of Oxford" published in 2016


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4  +334 moreInstitutions (82)
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s⁻¹ Mpc⁻¹, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016, corresponding to a reionization redshift of . These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |ΩK| < 0.005. Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ² potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing.
We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.

10,728 citations


Journal ArticleDOI
Monkol Lek, Konrad J. Karczewski1, Konrad J. Karczewski2, Eric Vallabh Minikel1, Eric Vallabh Minikel2, Kaitlin E. Samocha, Eric Banks1, Timothy Fennell1, Anne H. O’Donnell-Luria1, Anne H. O’Donnell-Luria3, Anne H. O’Donnell-Luria2, James S. Ware, Andrew J. Hill4, Andrew J. Hill2, Andrew J. Hill1, Beryl B. Cummings1, Beryl B. Cummings2, Taru Tukiainen1, Taru Tukiainen2, Daniel P. Birnbaum1, Jack A. Kosmicki, Laramie E. Duncan1, Laramie E. Duncan2, Karol Estrada1, Karol Estrada2, Fengmei Zhao1, Fengmei Zhao2, James Zou1, Emma Pierce-Hoffman1, Emma Pierce-Hoffman2, Joanne Berghout5, David Neil Cooper6, Nicole A. Deflaux7, Mark A. DePristo1, Ron Do, Jason Flannick2, Jason Flannick1, Menachem Fromer, Laura D. Gauthier1, Jackie Goldstein1, Jackie Goldstein2, Namrata Gupta1, Daniel P. Howrigan2, Daniel P. Howrigan1, Adam Kiezun1, Mitja I. Kurki2, Mitja I. Kurki1, Ami Levy Moonshine1, Pradeep Natarajan, Lorena Orozco, Gina M. Peloso1, Gina M. Peloso2, Ryan Poplin1, Manuel A. Rivas1, Valentin Ruano-Rubio1, Samuel A. Rose1, Douglas M. Ruderfer8, Khalid Shakir1, Peter D. Stenson6, Christine Stevens1, Brett Thomas2, Brett Thomas1, Grace Tiao1, María Teresa Tusié-Luna, Ben Weisburd1, Hong-Hee Won9, Dongmei Yu, David Altshuler1, David Altshuler10, Diego Ardissino, Michael Boehnke11, John Danesh12, Stacey Donnelly1, Roberto Elosua, Jose C. Florez2, Jose C. Florez1, Stacey Gabriel1, Gad Getz2, Gad Getz1, Stephen J. Glatt13, Christina M. Hultman14, Sekar Kathiresan, Markku Laakso15, Steven A. McCarroll2, Steven A. McCarroll1, Mark I. McCarthy16, Mark I. McCarthy17, Dermot P.B. McGovern18, Ruth McPherson19, Benjamin M. Neale1, Benjamin M. Neale2, Aarno Palotie, Shaun Purcell8, Danish Saleheen20, Jeremiah M. Scharf, Pamela Sklar, Patrick F. Sullivan21, Patrick F. Sullivan14, Jaakko Tuomilehto22, Ming T. Tsuang23, Hugh Watkins16, Hugh Watkins17, James G. Wilson24, Mark J. Daly1, Mark J. Daly2, Daniel G. MacArthur1, Daniel G. MacArthur2 
18 Aug 2016-Nature
TL;DR: The aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC) provides direct evidence for the presence of widespread mutational recurrence.
Abstract: Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation, identifying 3,230 genes with near-complete depletion of predicted protein-truncating variants, of which 72% have no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human 'knockout' variants in protein-coding genes.

8,758 citations


Journal ArticleDOI
12 Oct 2016-BMJ
TL;DR: ROBINS-I ("Risk Of Bias In Non-randomised Studies - of Interventions") is developed: a new tool for evaluating risk of bias in estimates of the comparative effectiveness of interventions from studies that did not use randomisation to allocate units or clusters of individuals to comparison groups.
Abstract: Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.

8,028 citations


Journal ArticleDOI
TL;DR: The FAIR Data Principles as mentioned in this paper are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.

7,602 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
Theo Vos1, Christine Allen1, Megha Arora1, Ryan M Barber1  +696 moreInstitutions (260)
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) as discussed by the authors was used to estimate the incidence, prevalence, and years lived with disability for diseases and injuries at the global, regional, and national scale over the period of 1990 to 2015.

5,050 citations


Journal ArticleDOI
Haidong Wang1, Mohsen Naghavi1, Christine Allen1, Ryan M Barber1  +841 moreInstitutions (293)
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.

4,804 citations


Journal ArticleDOI
01 Jan 2016
TL;DR: This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
Abstract: Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
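
As a concrete illustration of the approach this review surveys, here is a minimal Bayesian-optimization loop: a Gaussian-process surrogate paired with an expected-improvement acquisition function. The objective function, search grid, kernel, and budget below are illustrative assumptions, not taken from the paper.

```python
# Minimal Bayesian-optimization loop: GP surrogate + expected improvement.
# The objective `f` and the search grid are placeholders for illustration.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                          # hypothetical expensive black-box objective
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(0)
X_grid = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)   # candidate configurations
X = rng.uniform(-2, 2, size=(3, 1))                   # initial random evaluations
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                                   # evaluation budget
    gp.fit(X, y)
    mu, sigma = gp.predict(X_grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = X_grid[np.argmax(ei)]                    # most promising point
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print("best input:", X[np.argmax(y)], "best value:", y.max())
```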

3,703 citations


Journal ArticleDOI
11 Aug 2016-Nature
TL;DR: Using multi-modal magnetic resonance images from the Human Connectome Project and an objective semi-automated neuroanatomical approach, 180 areas per hemisphere are delineated bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults.
Abstract: Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal 'fingerprint' of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.

3,414 citations


Book ChapterDOI
08 Oct 2016
TL;DR: A basic tracking algorithm is equipped with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video and achieves state-of-the-art performance in multiple benchmarks.
Abstract: The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.
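
The heart of the method is that one shared embedding maps both the target exemplar and the larger search region to feature maps, and the exemplar's embedding is then used as a correlation filter over the search embedding. A toy sketch of that scoring step, with a hand-crafted stand-in for the learned ConvNet embedding:

```python
# Toy illustration of the fully-convolutional Siamese scoring step: the same
# embedding is applied to exemplar z and search region x, and per-channel
# cross-correlations are summed into one response map. The "embedding" below
# is a stand-in, not the paper's trained network.
import numpy as np
from scipy.signal import correlate

def embed(img):
    # Placeholder embedding: in SiamFC this is a learned, shared ConvNet.
    gx = np.gradient(img.astype(float), axis=0)
    gy = np.gradient(img.astype(float), axis=1)
    return np.stack([gx, gy])                 # (channels, H, W)

z = np.random.rand(31, 31)                    # exemplar (target crop)
x = np.random.rand(127, 127)                  # larger search region

fz, fx = embed(z), embed(x)
score = sum(correlate(fx[c], fz[c], mode="valid") for c in range(fx.shape[0]))
dy, dx = np.unravel_index(np.argmax(score), score.shape)
print("predicted target offset in search region:", dy, dx)
```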

2,936 citations


Journal ArticleDOI
Peter Goldstraw1, Kari Chansky, John Crowley, Ramón Rami-Porta2, Hisao Asamura3, Wilfried Ernst Erich Eberhardt4, Andrew G. Nicholson1, Patti A. Groome5, Alan Mitchell, Vanessa Bolejack, David Ball6, David G. Beer7, Ricardo Beyruti8, Frank C. Detterbeck9, Wilfried Eberhardt4, John G. Edwards10, Françoise Galateau-Salle11, Dorothy Giroux12, Fergus V. Gleeson13, James Huang14, Catherine Kennedy15, Jhingook Kim16, Young Tae Kim17, Laura Kingsbury12, Haruhiko Kondo18, Mark Krasnik19, Kaoru Kubota20, Antoon Lerut21, Gustavo Lyons, Mirella Marino, Edith M. Marom22, Jan P. van Meerbeeck23, Takashi Nakano24, Anna K. Nowak25, Michael D Peake26, Thomas W. Rice27, Kenneth E. Rosenzweig28, Enrico Ruffini29, Valerie W. Rusch14, Nagahiro Saijo, Paul Van Schil23, Jean-Paul Sculier30, Lynn Shemanski12, Kelly G. Stratton12, Kenji Suzuki31, Yuji Tachimori32, Charles F. Thomas33, William D. Travis14, Ming-Sound Tsao34, Andrew T. Turrisi35, Johan Vansteenkiste21, Hirokazu Watanabe, Yi-Long Wu, Paul Baas36, Jeremy J. Erasmus22, Seiki Hasegawa24, Kouki Inai37, Kemp H. Kernstine38, Hedy L. Kindler39, Lee M. Krug14, Kristiaan Nackaerts21, Harvey I. Pass40, David C. Rice22, Conrad Falkson5, Pier Luigi Filosso29, Giuseppe Giaccone41, Kazuya Kondo42, Marco Lucchi43, Meinoshin Okumura44, Eugene H. Blackstone27, F. Abad Cavaco, E. Ansótegui Barrera, J. Abal Arca, I. Parente Lamelas, A. Arnau Obrer45, R. Guijarro Jorge45, D. Ball6, G.K. Bascom46, A. I. Blanco Orozco, M. A. González Castro, M.G. Blum, D. Chimondeguy, V. Cvijanovic47, S. Defranchi48, B. de Olaiz Navarro, I. Escobar Campuzano2, I. Macía Vidueira2, E. Fernández Araujo49, F. Andreo García49, Kwun M. Fong, G. Francisco Corral, S. Cerezo González, J. Freixinet Gilart, L. García Arangüena, S. García Barajas50, P. Girard, Tuncay Göksel, M. T. González Budiño51, G. González Casaurrán50, J. A. Gullón Blanco, J. Hernández Hernández, H. Hernández Rodríguez, J. Herrero Collantes, M. Iglesias Heras, J. M. Izquierdo Elena, Erik Jakobsen, S. Kostas52, P. León Atance, A. Núñez Ares, M. Liao, M. Losanovscky, G. Lyons, R. Magaroles53, L. De Esteban Júlvez53, M. Mariñán Gorospe, Brian C. McCaughan15, Catherine J. Kennedy15, R. Melchor Íñiguez54, L. Miravet Sorribes, S. Naranjo Gozalo, C. Álvarez de Arriba, M. Núñez Delgado, J. Padilla Alarcón, J. C. Peñalver Cuesta, Jongsun Park16, H. Pass40, M. J. Pavón Fernández, Mara Rosenberg, Enrico Ruffini29, V. Rusch14, J. Sánchez de Cos Escuín, A. Saura Vinuesa, M. Serra Mitjans, Trond Eirik Strand, Dragan Subotic, S.G. Swisher22, Ricardo Mingarini Terra8, Charles R. Thomas33, Kurt G. Tournoy55, P. Van Schil23, M. Velasquez, Y.L. Wu, K. Yokoi 
Imperial College London1, University of Barcelona2, Keio University3, University of Duisburg-Essen4, Queen's University5, Peter MacCallum Cancer Centre6, University of Michigan7, University of São Paulo8, Yale University9, Northern General Hospital10, University of Caen Lower Normandy11, Fred Hutchinson Cancer Research Center12, University of Oxford13, Memorial Sloan Kettering Cancer Center14, University of Sydney15, Sungkyunkwan University16, Seoul National University17, Kyorin University18, University of Copenhagen19, Nippon Medical School20, Katholieke Universiteit Leuven21, University of Texas MD Anderson Cancer Center22, University of Antwerp23, Hyogo College of Medicine24, University of Western Australia25, Glenfield Hospital26, Cleveland Clinic27, Icahn School of Medicine at Mount Sinai28, University of Turin29, Université libre de Bruxelles30, Juntendo University31, National Cancer Research Institute32, Mayo Clinic33, University of Toronto34, Sinai Grace Hospital35, Netherlands Cancer Institute36, Hiroshima University37, City of Hope National Medical Center38, University of Chicago39, New York University40, Georgetown University41, University of Tokushima42, University of Pisa43, Osaka University44, University of Valencia45, Good Samaritan Hospital46, Military Medical Academy47, Fundación Favaloro48, Autonomous University of Barcelona49, Complutense University of Madrid50, University of Oviedo51, National and Kapodistrian University of Athens52, Rovira i Virgili University53, Autonomous University of Madrid54, Ghent University55
TL;DR: The methods used to evaluate the resultant stage groupings, and the proposals put forward for the 8th edition of the TNM Classification for lung cancer (due to be published in late 2016), are described.

Journal ArticleDOI
Bin Zhou1, Yuan Lu2, Kaveh Hajifathalian2, James Bentham1  +494 moreInstitutions (170)
TL;DR: In this article, the authors used a Bayesian hierarchical model to estimate trends in diabetes prevalence, defined as fasting plasma glucose of 7.0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs in 200 countries and territories in 21 regions, by sex and from 1980 to 2014.

Journal ArticleDOI
08 Jan 2016-Science
TL;DR: It is shown that using cesium ions along with formamidinium cations in lead bromide–iodide cells improved thermal and photostability and led to high efficiency in single and tandem cells.
Abstract: Metal halide perovskite photovoltaic cells could potentially boost the efficiency of commercial silicon photovoltaic modules from ∼20% toward 30% when used in tandem architectures. An optimum perovskite cell optical band gap of ~1.75 electron volts (eV) can be achieved by varying halide composition, but to date, such materials have had poor photostability and thermal stability. Here we present a highly crystalline and compositionally photostable material, [HC(NH2)2]0.83Cs0.17Pb(I0.6Br0.4)3, with an optical band gap of ~1.74 eV, and we fabricated perovskite cells that reached open-circuit voltages of 1.2 volts and power conversion efficiency of over 17% on small areas and 14.7% on 0.715 cm² cells. By combining these perovskite cells with a 19%-efficient silicon cell, we demonstrated the feasibility of achieving >25%-efficient four-terminal tandem cells.

Journal ArticleDOI
TL;DR: These ESMO consensus guidelines have been developed based on the current available evidence to provide a series of evidence-based recommendations to assist in the treatment and management of patients with mCRC in this rapidly evolving treatment setting.

Journal ArticleDOI
06 Jul 2016-PLOS ONE
TL;DR: CKD has a high global prevalence, consistently estimated at between 11% and 13%, with the majority of cases at stage 3; future research should evaluate intervention strategies deliverable at scale to delay the progression of CKD and improve CVD outcomes.
Abstract: Chronic kidney disease (CKD) is a global health burden with a high economic cost to health systems and is an independent risk factor for cardiovascular disease (CVD). All stages of CKD are associated with increased risks of cardiovascular morbidity, premature mortality, and/or decreased quality of life. CKD is usually asymptomatic until later stages and accurate prevalence data are lacking. Thus we sought to determine the prevalence of CKD globally, by stage, geographical location, gender and age. A systematic review and meta-analysis of observational studies estimating CKD prevalence in general populations was conducted through literature searches in 8 databases. We assessed pooled data using a random effects model. Of 5,842 potential articles, 100 studies of diverse quality were included, comprising 6,908,440 patients. Global mean (95% CI) CKD prevalence across all 5 stages was 13.4% (11.7-15.1%), and across stages 3-5 was 10.6% (9.2-12.2%). Weighting by study quality did not affect prevalence estimates. CKD prevalence by stage was Stage 1 (eGFR > 90 + ACR > 30): 3.5% (2.8-4.2%); Stage 2 (eGFR 60-89 + ACR > 30): 3.9% (2.7-5.3%); Stage 3 (eGFR 30-59): 7.6% (6.4-8.9%); Stage 4 (eGFR 15-29): 0.4% (0.3-0.5%); and Stage 5 (eGFR < 15): 0.1% (0.1-0.1%). CKD thus has a high global prevalence, consistently estimated at between 11% and 13%, with the majority of cases at stage 3. Future research should evaluate intervention strategies deliverable at scale to delay the progression of CKD and improve CVD outcomes.
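
The pooling step referred to above ("a random effects model") is commonly implemented with the DerSimonian-Laird estimator; that choice, and the per-study prevalences and standard errors below, are illustrative assumptions rather than the paper's data.

```python
# DerSimonian-Laird random-effects pooling, as used in prevalence meta-analysis.
# Study prevalences and standard errors below are made-up examples.
import numpy as np

p  = np.array([0.12, 0.09, 0.15, 0.11, 0.13])     # study prevalence estimates
se = np.array([0.010, 0.015, 0.020, 0.012, 0.018])

w_fixed = 1.0 / se**2                              # inverse-variance weights
theta_f = np.sum(w_fixed * p) / np.sum(w_fixed)    # fixed-effect pooled estimate
Q = np.sum(w_fixed * (p - theta_f) ** 2)           # Cochran's Q heterogeneity
k = len(p)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / c)                 # between-study variance

w_rand = 1.0 / (se**2 + tau2)                      # random-effects weights
theta_r = np.sum(w_rand * p) / np.sum(w_rand)
se_r = np.sqrt(1.0 / np.sum(w_rand))
lo, hi = theta_r - 1.96 * se_r, theta_r + 1.96 * se_r
print(f"pooled prevalence {theta_r:.3f} (95% CI {lo:.3f}-{hi:.3f}), tau^2={tau2:.5f}")
```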

Journal ArticleDOI
Shane A. McCarthy1, Sayantan Das2, Warren W. Kretzschmar3, Olivier Delaneau4, Andrew R. Wood5, Alexander Teumer6, Hyun Min Kang2, Christian Fuchsberger2, Petr Danecek1, Kevin Sharp3, Yang Luo1, C Sidore7, Alan Kwong2, Nicholas J. Timpson8, Seppo Koskinen, Scott I. Vrieze9, Laura J. Scott2, He Zhang2, Anubha Mahajan3, Jan H. Veldink, Ulrike Peters10, Ulrike Peters11, Carlos N. Pato12, Cornelia M. van Duijn13, Christopher E. Gillies2, Ilaria Gandin14, Massimo Mezzavilla, Arthur Gilly1, Massimiliano Cocca14, Michela Traglia, Andrea Angius7, Jeffrey C. Barrett1, D.I. Boomsma15, Kari Branham2, Gerome Breen16, Gerome Breen17, Chad M. Brummett2, Fabio Busonero7, Harry Campbell18, Andrew T. Chan19, Sai Chen2, Emily Y. Chew20, Francis S. Collins20, Laura J Corbin8, George Davey Smith8, George Dedoussis21, Marcus Dörr6, Aliki-Eleni Farmaki21, Luigi Ferrucci20, Lukas Forer22, Ross M. Fraser2, Stacey Gabriel23, Shawn Levy, Leif Groop24, Leif Groop25, Tabitha A. Harrison11, Andrew T. Hattersley5, Oddgeir L. Holmen26, Kristian Hveem26, Matthias Kretzler2, James Lee27, Matt McGue28, Thomas Meitinger29, David Melzer5, Josine L. Min8, Karen L. Mohlke30, John B. Vincent31, Matthias Nauck6, Deborah A. Nickerson10, Aarno Palotie19, Aarno Palotie23, Michele T. Pato12, Nicola Pirastu14, Melvin G. McInnis2, J. Brent Richards32, J. Brent Richards17, Cinzia Sala, Veikko Salomaa, David Schlessinger20, Sebastian Schoenherr22, P. Eline Slagboom33, Kerrin S. Small17, Tim D. Spector17, Dwight Stambolian34, Marcus A. Tuke5, Jaakko Tuomilehto, Leonard H. van den Berg, Wouter van Rheenen, Uwe Völker6, Cisca Wijmenga35, Daniela Toniolo, Eleftheria Zeggini1, Paolo Gasparini14, Matthew G. Sampson2, James F. Wilson18, Timothy M. Frayling5, Paul I.W. de Bakker36, Morris A. Swertz35, Steven A. McCarroll19, Charles Kooperberg11, Annelot M. Dekker, David Altshuler, Cristen J. Willer2, William G. Iacono28, Samuli Ripatti24, Nicole Soranzo27, Nicole Soranzo1, Klaudia Walter1, Anand Swaroop20, Francesco Cucca7, Carl A. Anderson1, Richard M. Myers, Michael Boehnke2, Mark I. McCarthy3, Mark I. McCarthy37, Richard Durbin1, Gonçalo R. Abecasis2, Jonathan Marchini3 
TL;DR: A reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies.
Abstract: We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies, and it can help to discover and refine causal loci. We describe remote server resources that allow researchers to carry out imputation and phasing consistently and efficiently.

Proceedings ArticleDOI
27 Jun 2016
TL;DR: A new ConvNet architecture for spatiotemporal fusion of video snippets is proposed, and its performance on standard benchmarks where this architecture achieves state-of-the-art results is evaluated.
Abstract: Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.
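
A minimal sketch of finding (i), conv-layer fusion: stack the spatial and temporal feature maps along the channel axis and learn a convolution that mixes them. The 1x1 kernel, channel counts, and shapes below are arbitrary assumptions, not the paper's architecture.

```python
# Sketch of fusing spatial and temporal ConvNet towers at a convolution layer:
# concatenate the two feature maps along channels, then learn a 1x1 conv that
# maps the stacked 2C channels back to C. Sizes here are arbitrary.
import torch
import torch.nn as nn

class ConvFusion(nn.Module):
    def __init__(self, channels=512):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_spatial, feat_temporal):
        x = torch.cat([feat_spatial, feat_temporal], dim=1)  # (N, 2C, H, W)
        return torch.relu(self.fuse(x))                      # (N, C, H, W)

# Example: feature maps from the last conv layer of each tower.
fs = torch.randn(2, 512, 14, 14)   # appearance (RGB) tower features
ft = torch.randn(2, 512, 14, 14)   # motion (optical-flow) tower features
fused = ConvFusion()(fs, ft)
print(fused.shape)                 # torch.Size([2, 512, 14, 14])
```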

Journal ArticleDOI
TL;DR: At a median of 10 years, prostate-cancer-specific mortality was low irrespective of the treatment assigned, with no significant difference among treatments.
Abstract: BACKGROUND The comparative effectiveness of treatments for prostate cancer that is detected by prostate-specific antigen (PSA) testing remains uncertain. METHODS We compared active monitoring, radical prostatectomy, and external-beam radiotherapy for the treatment of clinically localized prostate cancer. Between 1999 and 2009, a total of 82,429 men 50 to 69 years of age received a PSA test; 2664 received a diagnosis of localized prostate cancer, and 1643 agreed to undergo randomization to active monitoring (545 men), surgery (553), or radiotherapy (545). The primary outcome was prostate-cancer mortality at a median of 10 years of follow-up. Secondary outcomes included the rates of disease progression, metastases, and all-cause deaths. RESULTS There were 17 prostate-cancer–specific deaths overall: 8 in the active-monitoring group (1.5 deaths per 1000 person-years; 95% confidence interval [CI], 0.7 to 3.0), 5 in the surgery group (0.9 per 1000 person-years; 95% CI, 0.4 to 2.2), and 4 in the radiotherapy group (0.7 per 1000 person-years; 95% CI, 0.3 to 2.0); the difference among the groups was not significant (P=0.48 for the overall comparison). In addition, no significant difference was seen among the groups in the number of deaths from any cause (169 deaths overall; P=0.87 for the comparison among the three groups). Metastases developed in more men in the active-monitoring group (33 men; 6.3 events per 1000 person-years; 95% CI, 4.5 to 8.8) than in the surgery group (13 men; 2.4 per 1000 person-years; 95% CI, 1.4 to 4.2) or the radiotherapy group (16 men; 3.0 per 1000 person-years; 95% CI, 1.9 to 4.9) (P=0.004 for the overall comparison). Higher rates of disease progression were seen in the active-monitoring group (112 men; 22.9 events per 1000 person-years; 95% CI, 19.0 to 27.5) than in the surgery group (46 men; 8.9 events per 1000 person-years; 95% CI, 6.7 to 11.9) or the radiotherapy group (46 men; 9.0 events per 1000 person-years; 95% CI, 6.7 to 12.0) (P<0.001 for the overall comparison). CONCLUSIONS At a median of 10 years, prostate-cancer–specific mortality was low irrespective of the treatment assigned, with no significant difference among treatments. Surgery and radiotherapy were associated with lower incidences of disease progression and metastases than was active monitoring.
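
The rates quoted are events per 1000 person-years with confidence intervals. A small check of the active-monitoring mortality figure, approximating total follow-up as 545 men times 10 years of median follow-up; the trial used each man's actual follow-up time, so the reported interval differs slightly.

```python
# Events per 1000 person-years with an exact Poisson CI. Person-years are
# approximated as 545 men x 10 years; the trial used actual follow-up times.
from scipy.stats import chi2

events, person_years = 8, 545 * 10.0          # active-monitoring deaths (approx.)
rate = events / person_years * 1000
lo = chi2.ppf(0.025, 2 * events) / 2 / person_years * 1000
hi = chi2.ppf(0.975, 2 * (events + 1)) / 2 / person_years * 1000
print(f"{rate:.1f} per 1000 person-years (95% CI {lo:.1f}-{hi:.1f})")
# ~1.5 (0.6-2.9), close to the reported 1.5 (0.7-3.0) from exact follow-up.
```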

Journal ArticleDOI
TL;DR: In this paper, the authors present an extension to the Consolidated Standards of Reporting Trials (CONSORT) statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT.
Abstract: The Consolidated Standards of Reporting Trials (CONSORT) statement is a guideline designed to improve the transparency and quality of the reporting of randomised controlled trials (RCTs). In this article we present an extension to that statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT. The checklist applies to any randomised study in which a future definitive RCT, or part of it, is conducted on a smaller scale, regardless of its design (eg, cluster, factorial, crossover) or the terms used by authors to describe the study (eg, pilot, feasibility, trial, study). The extension does not directly apply to internal pilot studies built into the design of a main trial, non-randomised pilot and feasibility studies, or phase II studies, but these studies all have some similarities to randomised pilot and feasibility studies and so many of the principles might also apply. The development of the extension was motivated by the growing number of studies described as feasibility or pilot studies and by research that has identified weaknesses in their reporting and conduct. We followed recommended good practice to develop the extension, including carrying out a Delphi survey, holding a consensus meeting and research team meetings, and piloting the checklist. The aims and objectives of pilot and feasibility randomised studies differ from those of other randomised trials. Consequently, although much of the information to be reported in these trials is similar to those in randomised controlled trials (RCTs) assessing effectiveness and efficacy, there are some key differences in the type of information and in the appropriate interpretation of standard CONSORT reporting items. We have retained some of the original CONSORT statement items, but most have been adapted, some removed, and new items added. The new items cover how participants were identified and consent obtained; if applicable, the prespecified criteria used to judge whether or how to proceed with a future definitive RCT; if relevant, other important unintended consequences; implications for progression from pilot to future definitive RCT, including any proposed amendments; and ethical approval or approval by a research review committee confirmed with a reference number. This article includes the 26 item checklist, a separate checklist for the abstract, a template for a CONSORT flowchart for these studies, and an explanation of the changes made and supporting examples. We believe that routine use of this proposed extension to the CONSORT statement will result in improvements in the reporting of pilot trials. Editor’s note: In order to encourage its wide dissemination this article is freely accessible on the BMJ and Pilot and Feasibility Studies journal websites.

Journal ArticleDOI
TL;DR: The associations of both overweight and obesity with higher all-cause mortality were broadly consistent across four continents, supporting strategies to combat the entire spectrum of excess adiposity in many populations.

Proceedings ArticleDOI
01 Oct 2016
TL;DR: A fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps is proposed and a novel way to efficiently learn feature map up-sampling within the network is presented.
Abstract: This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.
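
The reverse Huber (berHu) loss introduced above behaves as L1 for small residuals and as a scaled L2 for large ones, switching at a threshold c chosen per batch; the adaptive choice c = 0.2 * max|residual| used below follows the paper as I understand it, and the rest of the snippet is a sketch, not the authors' code.

```python
# Sketch of the reverse Huber (berHu) loss for depth regression: L1 inside
# the threshold c, scaled L2 outside, continuous at |r| = c.
import torch

def berhu_loss(pred, target):
    r = (pred - target).abs()
    c = (0.2 * r.max()).detach().clamp_min(1e-6)  # adaptive per-batch threshold
    l1 = r
    l2 = (r**2 + c**2) / (2 * c)                  # equals c (= l1) at |r| = c
    return torch.where(r <= c, l1, l2).mean()

pred   = torch.randn(4, 1, 8, 8, requires_grad=True)  # toy depth predictions
target = torch.rand(4, 1, 8, 8)                       # toy ground-truth depths
loss = berhu_loss(pred, target)
loss.backward()
print(float(loss))
```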

Journal ArticleDOI
14 Dec 2016-Nature
TL;DR: There are opportunities to use such sustainable polymers in both high-value areas and in basic applications such as packaging.
Abstract: Renewable resources are used increasingly in the production of polymers. In particular, monomers such as carbon dioxide, terpenes, vegetable oils and carbohydrates can be used as feedstocks for the manufacture of a variety of sustainable materials and products, including elastomers, plastics, hydrogels, flexible electronics, resins, engineering polymers and composites. Efficient catalysis is required to produce monomers, to facilitate selective polymerizations and to enable recycling or upcycling of waste materials. There are opportunities to use such sustainable polymers in both high-value areas and in basic applications such as packaging. Life-cycle assessment can be used to quantify the environmental benefits of sustainable polymers.

Journal ArticleDOI
TL;DR: Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant, as discussed by the authors; there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists.
Abstract: Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so, and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting.
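
One abuse the authors emphasize, selecting analyses for presentation based on the P values they produce, is easy to demonstrate by simulation: under a true null, reporting the smallest of several P values inflates the false-positive rate far beyond the nominal level. A small sketch; the sample sizes and counts are arbitrary.

```python
# Under a true null, picking the smallest of 10 P values per experiment yields
# "significance" at the 5% level in roughly 1 - 0.95**10 ~ 40% of experiments.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_sims, n_outcomes, alpha = 2000, 10, 0.05
hits = 0
for _ in range(n_sims):
    # 10 independent outcomes, no true group difference in any of them.
    pvals = [ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
             for _ in range(n_outcomes)]
    hits += min(pvals) < alpha
print(f"'significant' in {hits / n_sims:.0%} of null experiments")  # ~40%, not 5%
```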

Journal ArticleDOI
Nicholas J Kassebaum1, Megha Arora1, Ryan M Barber1, Zulfiqar A Bhutta2  +679 moreInstitutions (268)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.

Journal ArticleDOI
TL;DR: A framework for adaptive visual object tracking based on structured output prediction that is able to outperform state-of-the-art trackers on various benchmark videos and can easily incorporate additional features and kernels into the framework, which results in increased tracking performance.
Abstract: Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.
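
Struck itself uses a kernelised structured-output SVM with online updates and a support-vector budget; the sketch below is a deliberately simplified linear variant of the same idea, scoring candidate boxes with w · φ(frame, box) and updating on a structured hinge loss whose margin is 1 − IoU. The features and boxes are toy stand-ins, not the paper's implementation.

```python
# Simplified linear illustration of structured-output tracking: the true box
# must outscore every candidate by a margin of 1 - IoU, otherwise w is nudged
# toward the true box's features and away from the violator's.
import numpy as np

def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2-ax1)*(ay2-ay1) + (bx2-bx1)*(by2-by1) - inter
    return inter / union if union > 0 else 0.0

def features(frame, box):
    x1, y1, x2, y2 = box
    patch = frame[y1:y2, x1:x2]
    return np.array([patch.mean(), patch.std(), 1.0])   # toy 3-d feature

def update(w, frame, true_box, candidates, lr=0.1):
    phi_true = features(frame, true_box)
    for box in candidates:
        margin = 1.0 - iou(true_box, box)               # overlap-based margin
        phi = features(frame, box)
        if w @ phi + margin > w @ phi_true:             # violated constraint
            w = w + lr * (phi_true - phi)
    return w

rng = np.random.default_rng(0)
frame = rng.random((100, 100))
w = update(np.zeros(3), frame, (40, 40, 60, 60),
           [(42, 40, 62, 60), (10, 10, 30, 30)])
print(w)
```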

Journal ArticleDOI
TL;DR: The cross-platform software tool TempEst (formerly known as Path-O-Gen) is introduced for the visualization and analysis of temporally sampled sequence data; it can be used to assess whether there is sufficient temporal signal in the data to proceed with phylogenetic molecular clock analysis, and to identify sequences whose genetic divergence and sampling date are incongruent.
Abstract: Gene sequences sampled at different points in time can be used to infer molecular phylogenies on a natural timescale of months or years, provided that the sequences in question undergo measurable amounts of evolutionary change between sampling times. Data sets with this property are termed heterochronous and have become increasingly common in several fields of biology, most notably the molecular epidemiology of rapidly evolving viruses. Here we introduce the cross-platform software tool, TempEst (formerly known as Path-O-Gen), for the visualization and analysis of temporally sampled sequence data. Given a molecular phylogeny and the dates of sampling for each sequence, TempEst uses an interactive regression approach to explore the association between genetic divergence through time and sampling dates. TempEst can be used to (1) assess whether there is sufficient temporal signal in the data to proceed with phylogenetic molecular clock analysis, and (2) identify sequences whose genetic divergence and sampling date are incongruent. Examination of the latter can help identify data quality problems, including errors in data annotation, sample contamination, sequence recombination, or alignment error. We recommend that all users of the molecular clock models implemented in BEAST first check their data using TempEst prior to analysis.
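
TempEst's central diagnostic is a regression of root-to-tip genetic divergence against sampling date. A minimal sketch of that regression with made-up divergences and dates; in practice the divergences come from a user-supplied phylogeny.

```python
# Root-to-tip regression: a positive slope estimates the substitution rate,
# the x-intercept estimates the root date, and weak correlation suggests
# insufficient temporal signal. Values below are made-up.
import numpy as np

dates = np.array([2000.5, 2002.1, 2004.8, 2007.3, 2009.9, 2012.4])
divergence = np.array([0.011, 0.013, 0.016, 0.018, 0.022, 0.024])

slope, intercept = np.polyfit(dates, divergence, 1)
r = np.corrcoef(dates, divergence)[0, 1]
print(f"rate ~ {slope:.2e} subs/site/year, "
      f"root date ~ {-intercept / slope:.1f}, r = {r:.3f}")
# Points far from the fitted line flag sequences whose divergence and
# sampling date are incongruent (annotation errors, contamination, etc.).
```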

Journal ArticleDOI
TL;DR: To combat the threat to human health and biosecurity from antimicrobial resistance, an understanding of its mechanisms and drivers is needed.
