Showing papers published by the University of Cambridge in 2015
••
TL;DR: The UK Biobank is described, a large population-based prospective study, established to allow investigation of the genetic and non-genetic determinants of the diseases of middle and old age.
Abstract: Cathie Sudlow and colleagues describe the UK Biobank, a large population-based prospective study, established to allow investigation of the genetic and non-genetic determinants of the diseases of middle and old age.
6,114 citations
••
TL;DR: The Global Burden of Disease Study 2013 (GBD 2013) as discussed by the authors applied the GBD 2010 methods, with some refinements to improve accuracy, to an updated database of vital registration, survey, and census data.
5,792 citations
••
TL;DR: In the Global Burden of Disease Study 2013 (GBD 2013) as mentioned in this paper, the authors estimated the quantities for acute and chronic diseases and injuries for 188 countries between 1990 and 2013.
4,510 citations
••
TL;DR: Pythia 8.2 is the second main release after the complete rewrite from Fortran to C++, and now has reached such a maturity that it offers a complete replacement for most applications, notably for LHC physics studies.
4,503 citations
••
University Hospital Bonn1, University of California, Riverside2, Harvard University3, Case Western Reserve University4, University of Illinois at Chicago5, European Institute6, Stanford University7, VA Palo Alto Healthcare System8, Spanish National Research Council9, Cleveland Clinic Lerner Research Institute10, Hong Kong University of Science and Technology11, University of California, Los Angeles12, University of Southern Denmark13, University of Cambridge14, University of the Basque Country15, Ikerbasque16, University of Manchester17, RIKEN Brain Science Institute18, University of Eastern Finland19, University of Bonn20, University of Massachusetts Medical School21, Center of Advanced European Studies and Research22, University of Southern California23, University of South Florida24, Duke University25, Southampton General Hospital26, University of Southampton27, Moorgreen Hospital28, Louisiana State University29, Imperial College London30, Centre national de la recherche scientifique31, Karolinska Institutet32, Max Planck Society33, University of Tübingen34, University of Groningen35, University of Colorado Denver36, Douglas Mental Health University Institute37
TL;DR: Genome-wide analysis suggests that several genes that increase the risk for sporadic Alzheimer's disease encode factors that regulate glial clearance of misfolded proteins and the inflammatory reaction.
Abstract: Increasing evidence suggests that Alzheimer's disease pathogenesis is not restricted to the neuronal compartment, but includes strong interactions with immunological mechanisms in the brain. Misfolded and aggregated proteins bind to pattern recognition receptors on microglia and astroglia, and trigger an innate immune response characterised by release of inflammatory mediators, which contribute to disease progression and severity. Genome-wide analysis suggests that several genes that increase the risk for sporadic Alzheimer's disease encode factors that regulate glial clearance of misfolded proteins and the inflammatory reaction. External factors, including systemic inflammation and obesity, are likely to interfere with immunological processes of the brain and further promote disease progression. Modulation of risk factors and targeting of these immune mechanisms could lead to future therapeutic or preventive strategies for Alzheimer's disease.
3,947 citations
••
Technische Universität München1, ETH Zurich2, University of Bern3, Harvard University4, National Institutes of Health5, University of Debrecen6, University Hospital Heidelberg7, McGill University8, University of Pennsylvania9, French Institute for Research in Computer Science and Automation10, University at Buffalo11, Microsoft12, University of Cambridge13, Stanford University14, University of Virginia15, Imperial College London16, Massachusetts Institute of Technology17, Columbia University18, Sabancı University19, Old Dominion University20, RMIT University21, Purdue University22, General Electric23
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) as mentioned in this paper was organized in conjunction with the MICCAI 2012 and 2013 conferences, and twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low and high grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
3,699 citations
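The fusion step described in the abstract can be illustrated with a plain (non-hierarchical) per-voxel majority vote; the arrays and labels below are toy data, not BRATS scans:

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse K label maps of shape (H, W) by per-voxel majority vote.

    Ties are broken in favour of the lowest label index (argmax picks
    the first maximum).
    """
    segs = np.stack(segmentations)          # (K, H, W)
    n_labels = int(segs.max()) + 1
    # Count the votes each label receives at every voxel.
    votes = np.stack([(segs == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)             # (H, W)

# Three toy 2x2 "raters": label 1 wins wherever at least two agree.
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 1], [0, 1]])
c = np.array([[0, 1], [1, 0]])
fused = majority_vote([a, b, c])
print(fused)  # [[1 1]
              #  [1 1]]
```

The hierarchical variant used in the benchmark additionally orders the algorithms by quality before voting; that ranking step is omitted here.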
••
TL;DR: New MRC guidance provides a framework for conducting and reporting process evaluation studies that will help improve the quality of decision-making in the design and testing of complex interventions.
Abstract: Process evaluation is an essential part of designing and testing complex interventions. New MRC guidance provides a framework for conducting and reporting process evaluation studies
3,662 citations
••
TL;DR: A genome-wide association study and Metabochip meta-analysis of body mass index (BMI), a measure commonly used to define obesity and assess adiposity, in up to 339,224 individuals provide strong support for a role of the central nervous system in obesity susceptibility.
Abstract: Obesity is heritable and predisposes to many diseases. To understand the genetic basis of obesity better, here we conduct a genome-wide association study and Metabochip meta-analysis of body mass index (BMI), a measure commonly used to define obesity and assess adiposity, in up to 339,224 individuals. This analysis identifies 97 BMI-associated loci (P < 5 × 10−8), 56 of which are novel; genome-wide estimates suggest that common variation accounts for >20% of BMI variation. Pathway analyses provide strong support for a role of the central nervous system in obesity susceptibility and implicate new genes and pathways, including those related to synaptic function, glutamate signalling, insulin secretion/action, energy metabolism, lipid biology and adipogenesis.
3,472 citations
••
TL;DR: An adaptation of Egger regression can detect some violations of the standard instrumental variable assumptions and provide an effect estimate that is not subject to these violations, offering a sensitivity analysis for the robustness of the findings from a Mendelian randomization investigation.
Abstract: Background: The number of Mendelian randomization analyses including large numbers of genetic variants is rapidly increasing. This is due to the proliferation of genome-wide association studies, and the desire to obtain more precise estimates of causal effects. However, some genetic variants may not be valid instrumental variables, in particular due to them having more than one proximal phenotypic correlate (pleiotropy).
Methods: We view Mendelian randomization with multiple instruments as a meta-analysis, and show that bias caused by pleiotropy can be regarded as analogous to small study bias. Causal estimates using each instrument can be displayed visually by a funnel plot to assess potential asymmetry. Egger regression, a tool to detect small study bias in meta-analysis, can be adapted to test for bias from pleiotropy, and the slope coefficient from Egger regression provides an estimate of the causal effect. Under the assumption that the association of each genetic variant with the exposure is independent of the pleiotropic effect of the variant (not via the exposure), Egger’s test gives a valid test of the null causal hypothesis and a consistent causal effect estimate even when all the genetic variants are invalid instrumental variables.
Results: We illustrate the use of this approach by re-analysing two published Mendelian randomization studies of the causal effect of height on lung function, and the causal effect of blood pressure on coronary artery disease risk. The conservative nature of this approach is illustrated with these examples.
Conclusions: An adaptation of Egger regression (which we call MR-Egger) can detect some violations of the standard instrumental variable assumptions, and provide an effect estimate which is not subject to these violations. The approach provides a sensitivity analysis for the robustness of the findings from a Mendelian randomization investigation.
3,392 citations
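The method described above reduces to a weighted linear regression of variant-outcome associations on variant-exposure associations with an unconstrained intercept. A minimal sketch with hypothetical summary statistics (all numbers below are invented for illustration):

```python
import numpy as np

# Hypothetical summary statistics for six genetic variants:
# beta_x = variant-exposure associations, beta_y = variant-outcome
# associations, se_y = standard errors of beta_y.
beta_x = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35])
beta_y = np.array([0.06, 0.10, 0.11, 0.14, 0.17, 0.19])
se_y   = np.array([0.02, 0.02, 0.03, 0.02, 0.03, 0.02])

# MR-Egger: weighted regression of beta_y on beta_x with an unconstrained
# intercept, weights = 1 / se_y**2.  The slope estimates the causal
# effect; an intercept far from zero signals directional pleiotropy.
w = 1.0 / se_y**2
X = np.column_stack([np.ones_like(beta_x), beta_x])
coef = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * beta_y))
intercept, slope = coef
print(f"intercept (pleiotropy) = {intercept:.3f}, slope (causal) = {slope:.3f}")
```

With real data one would also compute standard errors for both coefficients to test the intercept against zero; that step is omitted for brevity.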
••
TL;DR: Roary, a tool that rapidly builds large-scale pan genomes, identifying the core and accessory genes, is introduced, making construction of the pan genome of thousands of prokaryote samples possible on a standard desktop without compromising on the accuracy of results.
Abstract: Summary: A typical prokaryote population sequencing study can now consist of hundreds or thousands of isolates. Interrogating these datasets can provide detailed insights into the genetic structure of prokaryotic genomes. We introduce Roary, a tool that rapidly builds large-scale pan genomes, identifying the core and accessory genes. Roary makes construction of the pan genome of thousands of prokaryote samples possible on a standard desktop without compromising on the accuracy of results. Using a single CPU Roary can produce a pan genome consisting of 1000 isolates in 4.5 hours using 13 GB of RAM, with further speedups possible using multiple processors.
Availability and implementation: Roary is implemented in Perl and is freely available under an open source GPLv3 license from http://sanger-pathogens.github.io/Roary
Contact: roary@sanger.ac.uk
Supplementary information: Supplementary data are available at Bioinformatics online.
3,147 citations
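Roary itself is a Perl pipeline that first clusters coding sequences before classifying them; the snippet below only sketches the final core-versus-accessory split on a toy presence/absence table (gene and isolate names are made up):

```python
# Toy pan-genome: which isolates carry each clustered gene.  Roary's
# default definition of a core gene is presence in >= 99% of isolates.
presence = {
    "gene_A": {"iso1", "iso2", "iso3", "iso4"},
    "gene_B": {"iso1", "iso2", "iso3", "iso4"},
    "gene_C": {"iso1", "iso3"},
    "gene_D": {"iso2"},
}
isolates = {"iso1", "iso2", "iso3", "iso4"}

core = sorted(g for g, s in presence.items()
              if len(s) / len(isolates) >= 0.99)
accessory = sorted(g for g in presence if g not in core)
print(core)       # ['gene_A', 'gene_B']
print(accessory)  # ['gene_C', 'gene_D']
```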
••
TL;DR: Graphene and related two-dimensional crystals and hybrid systems showcase several key properties that can address emerging energy needs, in particular for the ever growing market of portable and wearable energy conversion and storage devices.
Abstract: Graphene and related two-dimensional crystals and hybrid systems showcase several key properties that can address emerging energy needs, in particular for the ever growing market of portable and wearable energy conversion and storage devices. Graphene's flexibility, large surface area, and chemical stability, combined with its excellent electrical and thermal conductivity, make it promising as a catalyst in fuel and dye-sensitized solar cells. Chemically functionalized graphene can also improve storage and diffusion of ionic species and electric charge in batteries and supercapacitors. Two-dimensional crystals provide optoelectronic and photocatalytic properties complementing those of graphene, enabling the realization of ultrathin-film photovoltaic devices or systems for hydrogen production. Here, we review the use of graphene and related materials for energy conversion and storage, outlining the roadmap for future applications.
••
Harvard University1, Genentech2, Fred Hutchinson Cancer Research Center3, State University of Campinas4, University of Maryland, College Park5, National Institutes of Health6, University of Cambridge7, University of California, Riverside8, Novartis9, Johns Hopkins University10, University of Washington11, Walter and Eliza Hall Institute of Medical Research12, City University of New York13
TL;DR: An overview of Bioconductor, an open-source, open-development software project for the analysis and comprehension of high-throughput data in genomics and molecular biology, which comprises 934 interoperable packages contributed by a large, diverse community of scientists.
Abstract: Bioconductor is an open-source, open-development software project for the analysis and comprehension of high-throughput data in genomics and molecular biology. The project aims to enable interdisciplinary research, collaboration and rapid development of scientific software. Based on the statistical programming language R, Bioconductor comprises 934 interoperable packages contributed by a large, diverse community of scientists. Packages cover a range of bioinformatic and statistical applications. They undergo formal initial review and continuous automated testing. We present an overview for prospective users and contributors.
••
University of Cambridge1, Istituto Italiano di Tecnologia2, Lancaster University3, University of Manchester4, Catalan Institution for Research and Advanced Studies5, Technical University of Denmark6, Nokia7, fondazione bruno kessler8, University of Trento9, Queen Mary University of London10, Technische Universität München11, Polytechnic University of Milan12, Centre national de la recherche scientifique13, University of Trieste14, University of Ioannina15, University of Geneva16, Trinity College, Dublin17, Texas Instruments18, University of Paris19, Spanish National Research Council20, Leiden University21, Delft University of Technology22, University of Patras23, École Normale Supérieure24, Radboud University Nijmegen25, Nest Labs26, Airbus UK27, Seoul National University28, Yonsei University29, University of Oxford30, Chalmers University of Technology31, University of Groningen32, STMicroelectronics33, Chemnitz University of Technology34, Max Planck Society35, Aalto University36
TL;DR: An overview of the key aspects of graphene and related materials, ranging from fundamental research challenges to a variety of applications in a large number of sectors, highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries are provided.
Abstract: We present the science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems, targeting an evolution in technology, that might lead to impacts and benefits reaching into most areas of society. This roadmap was developed within the framework of the European Graphene Flagship and outlines the main targets and research areas as best understood at the start of this ambitious project. We provide an overview of the key aspects of graphene and related materials (GRMs), ranging from fundamental research challenges to a variety of applications in a large number of sectors, highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries. We also define an extensive list of acronyms in an effort to standardize the nomenclature in this emerging field.
••
Shadab Alam1, Franco D. Albareti2, Carlos Allende Prieto3, Carlos Allende Prieto4 +360 more•Institutions (102)
TL;DR: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer.
Abstract: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer. All the data from SDSS-III are now made public. In particular, this paper describes Data Release 11 (DR11) including all data acquired through 2013 July, and Data Release 12 (DR12) adding data acquired through 2014 July (including all data included in previous data releases), marking the end of SDSS-III observing. Relative to our previous public release (DR10), DR12 adds one million new spectra of galaxies and quasars from the Baryon Oscillation Spectroscopic Survey (BOSS) over an additional 3000 sq. deg of sky, more than triples the number of H-band spectra of stars as part of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE), and includes repeated accurate radial velocity measurements of 5500 stars from the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS). The APOGEE outputs now include measured abundances of 15 different elements for each star. In total, SDSS-III added 2350 sq. deg of ugriz imaging; 155,520 spectra of 138,099 stars as part of the Sloan Exploration of Galactic Understanding and Evolution 2 (SEGUE-2) survey; 2,497,484 BOSS spectra of 1,372,737 galaxies, 294,512 quasars, and 247,216 stars over 9376 sq. deg; 618,080 APOGEE spectra of 156,593 stars; and 197,040 MARVELS spectra of 5,513 stars. Since its first light in 1998, SDSS has imaged over 1/3 of the Celestial sphere in five bands and obtained over five million astronomical spectra.
••
TL;DR: Efficient organic-inorganic perovskite light-emitting diodes were made with nanograin crystals that lack metallic lead, which helped to confine excitons and avoid their quenching.
Abstract: Organic-inorganic hybrid perovskites are emerging low-cost emitters with very high color purity, but their low luminescent efficiency is a critical drawback. We boosted the current efficiency (CE) of perovskite light-emitting diodes with a simple bilayer structure to 42.9 candela per ampere, similar to the CE of phosphorescent organic light-emitting diodes, with two modifications: We prevented the formation of metallic lead (Pb) atoms that cause strong exciton quenching through a small increase in methylammonium bromide (MABr) molar proportion, and we spatially confined the exciton in uniform MAPbBr3 nanograins (average diameter = 99.7 nanometers) formed by a nanocrystal pinning process and concomitant reduction of exciton diffusion length to 67 nanometers. These changes caused substantial increases in steady-state photoluminescence intensity and efficiency of MAPbBr3 nanograin layers.
•
TL;DR: In this article, a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes was developed, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Abstract: Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
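The practical recipe implied by this result, keeping dropout active at test time and averaging several stochastic forward passes, can be sketched with an untrained toy network (the weights are random; only the inference procedure matters here):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer regression network with fixed random weights.
W1 = rng.normal(size=(1, 50)); b1 = np.zeros(50)
W2 = rng.normal(size=(50, 1)); b2 = np.zeros(1)

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout LEFT ON at test time."""
    h = np.maximum(0.0, x @ W1 + b1)
    mask = rng.random(h.shape) >= p_drop      # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)             # inverted-dropout scaling
    return h @ W2 + b2

# Monte Carlo estimate: T stochastic passes give a predictive mean and
# an uncertainty (the sample spread across passes).
x = np.array([[0.3]])
samples = np.array([stochastic_forward(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()
print(f"prediction = {mean:.2f} +/- {std:.2f}")
```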
••
TL;DR: In this article, the first set of parton distribution functions (PDFs) determined with a methodology validated by a closure test is presented, which is based on LO, NLO and NNLO QCD theory and also includes electroweak corrections.
Abstract: We present NNPDF3.0, the first set of parton distribution functions (PDFs) determined with a methodology validated by a closure test. NNPDF3.0 uses a global dataset including HERA-II deep-inelastic inclusive cross-sections, the combined HERA charm data, jet production from ATLAS and CMS, vector boson rapidity and transverse momentum distributions from ATLAS, CMS and LHCb, W+c data from CMS and top quark pair production total cross sections from ATLAS and CMS. Results are based on LO, NLO and NNLO QCD theory and also include electroweak corrections. To validate our methodology, we show that PDFs determined from pseudo-data generated from a known underlying law correctly reproduce the statistical distributions expected on the basis of the assumed experimental uncertainties. This closure test ensures that our methodological uncertainties are negligible in comparison to the generic theoretical and experimental uncertainties of PDF determination. This enables us to determine with confidence PDFs at different perturbative orders and using a variety of experimental datasets ranging from HERA-only up to a global set including the latest LHC results, all using precisely the same validated methodology. We explore some of the phenomenological implications of our results for the upcoming 13 TeV Run of the LHC, in particular for Higgs production cross-sections.
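The closure-test logic can be sketched in miniature: generate pseudo-data from a known underlying law with the assumed experimental uncertainties, fit, and check that the residual pulls follow the expected statistical distribution. The quadratic fit below is a stand-in for the actual neural-network PDF fit:

```python
import numpy as np

rng = np.random.default_rng(42)

x = np.linspace(0.1, 0.9, 40)
true_law = lambda t: 2.0 * t * (1 - t)   # stand-in for the true PDF shape
sigma = 0.02                             # assumed experimental uncertainty

pulls = []
for _ in range(500):
    pseudo = true_law(x) + rng.normal(0.0, sigma, x.size)
    # "Fit": a quadratic least-squares fit standing in for the full
    # NNPDF neural-network methodology.
    fitted = np.polyval(np.polyfit(x, pseudo, 2), x)
    pulls.extend((pseudo - fitted) / sigma)

pulls = np.array(pulls)
# Closure: pulls should be centred on zero with width close to one
# (slightly below one, since the fit absorbs three parameters).
print(f"pull mean = {pulls.mean():.3f}, pull std = {pulls.std():.3f}")
```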
••
TL;DR: A novel approach (pkCSM) which uses graph-based signatures to develop predictive models of central ADMET properties for drug development and performs as well as or better than current methods.
Abstract: Drug development has a high attrition rate, with poor pharmacokinetic and safety properties a significant hurdle. Computational approaches may help minimize these risks. We have developed a novel approach (pkCSM) which uses graph-based signatures to develop predictive models of central ADMET properties for drug development. pkCSM performs as well as or better than current methods. A freely accessible web server (http://structure.bioc.cam.ac.uk/pkcsm), which retains no information submitted to it, provides an integrated platform to rapidly evaluate pharmacokinetic and toxicity properties.
••
TL;DR: This article conducted a meta-analysis of coronary artery disease (CAD) cases and controls, interrogating 6.7 million common (minor allele frequency (MAF) > 0.05) and 2.7 millions low-frequency (0.005 < MAF < 0.5) variants.
Abstract: Existing knowledge of genetic variants affecting risk of coronary artery disease (CAD) is largely based on genome-wide association study (GWAS) analysis of common SNPs. Leveraging phased haplotypes from the 1000 Genomes Project, we report a GWAS meta-analysis of ∼185,000 CAD cases and controls, interrogating 6.7 million common (minor allele frequency (MAF) > 0.05) and 2.7 million low-frequency (0.005 < MAF < 0.05) variants. In addition to confirming most known CAD-associated loci, we identified ten new loci (eight additive and two recessive) that contain candidate causal genes newly implicating biological processes in vessel walls. We observed intralocus allelic heterogeneity but little evidence of low-frequency variants with larger effects and no evidence of synthetic association. Our analysis provides a comprehensive survey of the fine genetic architecture of CAD, showing that genetic susceptibility to this common disease is largely determined by common SNPs of small effect size.
••
University of Zurich1, Hannover Medical School2, University of California, Davis3, Heidelberg University4, Ludwig Maximilian University of Munich5, Charité6, University of Kentucky7, University of Cologne8, Saarland University9, University of Duisburg-Essen10, University of Göttingen11, University of Hamburg12, University of Ulm13, Technische Universität München14, Otto-von-Guericke University Magdeburg15, John Radcliffe Hospital16, Winterthur Museum, Garden and Library17, University of Turku18, Gdańsk Medical University19, University of Warmia and Mazury in Olsztyn20, Medical University of Warsaw21, University of Cambridge22, University of Basel23, Catholic University of the Sacred Heart24, Innsbruck Medical University25, University of Greifswald26, Leiden University27, University of Glasgow28
TL;DR: Patients with takotsubo cardiomyopathy had a higher prevalence of neurologic or psychiatric disorders than did those with an acute coronary syndrome, and physical triggers, acute neurologic or psychiatric diseases, high troponin levels, and a low ejection fraction on admission were independent predictors of in-hospital complications.
Abstract: Background: The natural history, management, and outcome of takotsubo (stress) cardiomyopathy are incompletely understood. Methods: The International Takotsubo Registry, a consortium of 26 centers in Europe and the United States, was established to investigate clinical features, prognostic predictors, and outcome of takotsubo cardiomyopathy. Patients were compared with age- and sex-matched patients who had an acute coronary syndrome. Results: Of 1750 patients with takotsubo cardiomyopathy, 89.8% were women (mean age, 66.8 years). Emotional triggers were not as common as physical triggers (27.7% vs. 36.0%), and 28.5% of patients had no evident trigger. Among patients with takotsubo cardiomyopathy, as compared with an acute coronary syndrome, rates of neurologic or psychiatric disorders were higher (55.8% vs. 25.7%) and the mean left ventricular ejection fraction was markedly lower (40.7±11.2% vs. 51.5±12.3%) (P<0.001 for both comparisons). Rates of severe in-hospital complications including shock and death were ...
••
TL;DR: The issue of microplastics in freshwater systems is reviewed to summarise current understanding, identify knowledge gaps and suggest future research priorities.
••
Mohammad H. Forouzanfar1, Lily Alexander1, H. Ross Anderson2, Victoria F Bachman1 +718 more•Institutions (295)
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013) as mentioned in this paper provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.
••
07 Dec 2015
TL;DR: PoseNet as mentioned in this paper uses a CNN to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation.
Abstract: We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 3 degrees accuracy for large scale outdoor scenes and 0.5m and 5 degrees accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show that the PoseNet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples.
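The regression objective behind such a system combines a position error with a weighted orientation error over unit quaternions. A sketch of a loss of that form (the beta value and all pose numbers below are illustrative, not taken from the paper):

```python
import numpy as np

def pose_loss(x, x_hat, q, q_hat, beta=250.0):
    """PoseNet-style loss: Euclidean position error plus a beta-weighted
    quaternion error, with the predicted quaternion normalised to unit
    length.  beta is a tuning constant balancing metres vs. rotation;
    its value here is a placeholder, not the paper's setting.
    """
    q_hat = q_hat / np.linalg.norm(q_hat)
    return np.linalg.norm(x - x_hat) + beta * np.linalg.norm(q - q_hat)

x_true = np.array([1.0, 2.0, 0.5])        # camera position (metres)
q_true = np.array([1.0, 0.0, 0.0, 0.0])   # orientation quaternion
x_pred = np.array([1.2, 1.9, 0.6])
q_pred = np.array([0.99, 0.01, 0.0, 0.0])
loss = pose_loss(x_true, x_pred, q_true, q_pred, beta=1.0)
print(f"loss = {loss:.3f}")
```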
•
07 Dec 2015
TL;DR: The FEniCS Project is a collaborative project for the development of innovative concepts and tools for automated scientific computing, with a particular focus on the solution of differential equations by finite element methods.
Abstract: The FEniCS Project is a collaborative project for the development of innovative concepts and tools for automated scientific computing, with a particular focus on the solution of differential equations by finite element methods. The FEniCS Project's software consists of a collection of interoperable components, including DOLFIN, FFC, FIAT, Instant, UFC, UFL, and mshr. This note describes the new features and changes introduced in the release of FEniCS version 1.5.
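As an illustration of the finite element machinery that FEniCS automates from a symbolic weak form, here is a hand-rolled 1-D solver for -u'' = 1 on (0, 1) with homogeneous Dirichlet conditions (this is not FEniCS code, just the underlying method in miniature):

```python
import numpy as np

# Piecewise-linear finite elements on a uniform mesh of n elements.
n = 8
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

# Assemble the stiffness matrix K and load vector b over interior nodes.
K = np.zeros((n - 1, n - 1))
b = np.full(n - 1, h)          # integral of f = 1 against each hat function
for i in range(n - 1):
    K[i, i] = 2.0 / h
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -1.0 / h

u = np.linalg.solve(K, b)
# Exact solution is u(x) = x(1-x)/2; for this problem linear elements
# are nodally exact, so the error is round-off only.
exact = nodes[1:-1] * (1 - nodes[1:-1]) / 2
print(np.abs(u - exact).max())
```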
••
TL;DR: The ALBI grade offers a simple, evidence-based, objective, and discriminatory method of assessing liver function in HCC that has been extensively tested in an international setting and eliminates the need for subjective variables such as ascites and encephalopathy, a requirement in the conventional C-P grade.
Abstract: Purpose: Most patients with hepatocellular carcinoma (HCC) have associated chronic liver disease, the severity of which is currently assessed by the Child-Pugh (C-P) grade. In this international collaboration, we identify objective measures of liver function/dysfunction that independently influence survival in patients with HCC and then combine these into a model that could be compared with the conventional C-P grade. Patients and Methods: We developed a simple model to assess liver function, based on 1,313 patients with HCC of all stages from Japan, that involved only serum bilirubin and albumin levels. We then tested the model using similar cohorts from other geographical regions (n = 5,097) and other clinical situations (patients undergoing resection [n = 525] or sorafenib treatment for advanced HCC [n = 1,132]). The specificity of the model for liver (dys)function was tested in patients with chronic liver disease but without HCC (n = 501). Results: The model, the Albumin-Bilirubin (ALBI) grade, performed...
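The ALBI grade itself is a simple closed-form score. A sketch using the published coefficients and cut-points (bilirubin in μmol/L, albumin in g/L; this is an illustration, so verify against the paper before any clinical use):

```python
import math

def albi(bilirubin_umol_l, albumin_g_l):
    """ALBI score and grade, using the published formula
    score = 0.66*log10(bilirubin) - 0.085*albumin and cut-points
    grade 1 <= -2.60, grade 2 in (-2.60, -1.39], grade 3 > -1.39."""
    score = 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l
    if score <= -2.60:
        grade = 1
    elif score <= -1.39:
        grade = 2
    else:
        grade = 3
    return score, grade

# Example: bilirubin 10 umol/L, albumin 45 g/L -> well-preserved function.
score, grade = albi(10.0, 45.0)
print(f"ALBI score = {score:.2f}, grade {grade}")
```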
••
National University of Cordoba1, Addis Ababa University2, National Autonomous University of Mexico3, State University of Campinas4, United Nations Environment Programme5, UNESCO6, United States Department of Agriculture7, Indiana University8, University of British Columbia9, Commonwealth Scientific and Industrial Research Organisation10, University of Paris-Sud11, Landcare Research12, University College London13, Autonomous University of Madrid14, University of Cambridge15, Council for Scientific and Industrial Research16, University of Southern Denmark17, United Nations University18, Virginia Tech College of Natural Resources and Environment19, The Nature Conservancy20, University of the South Pacific21, University of East Anglia22, Kyushu University23, King Abdulaziz City for Science and Technology24, University of Washington25, Budapest University of Technology and Economics26, Environmental Law Institute27, Ankara University28, University of Portsmouth29, Chinese Academy of Sciences30, Indian Institute of Technology Bombay31, Kyoto University32, Joseph Fourier University33, National Scientific and Technical Research Council34, University of Yaoundé35, Polish Academy of Sciences36, University of São Paulo37, École Normale Supérieure38, University of Otago39, Stanford University40, University of Queensland41, Azim Premji University42, Helmholtz Centre for Environmental Research - UFZ43, University of Ghana44, Corvinus University of Budapest45, Stockholm University46, Lakehead University47, Indian Institute of Forest Management48, Seoul National University49, Sofia University50
TL;DR: The first public product of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES) is its Conceptual Framework as discussed by the authors, which will underpin all IPBES functions and provide structure and comparability to the syntheses that it will produce at different spatial scales, on different themes, and in different regions.
••
TL;DR: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4ℓ decay channels.
Abstract: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4ℓ decay channels. The results are obtained from a simultaneous fit to the reconstructed invariant mass peaks in the two channels and for the two experiments. The measured masses from the individual channels and the two experiments are found to be consistent among themselves. The combined measured mass of the Higgs boson is mH=125.09±0.21 (stat)±0.11 (syst) GeV.
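The mass is quoted with separate statistical and systematic uncertainties. Assuming the two components are independent, the usual convention is to add them in quadrature to obtain a single total uncertainty, as this sketch shows:

```python
import math

m_h = 125.09   # combined Higgs mass, GeV
stat = 0.21    # statistical uncertainty, GeV
syst = 0.11    # systematic uncertainty, GeV

# Quadrature sum: sqrt(stat**2 + syst**2), valid for independent errors
total = math.hypot(stat, syst)
print(f"mH = {m_h} +/- {total:.2f} GeV")
```

The quadrature sum gives a total uncertainty of roughly 0.24 GeV, i.e. a measurement precise to about 0.2%.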
••
Harvard University1, Broad Institute2, Cardiff University3, Icahn School of Medicine at Mount Sinai4, University of Michigan5, University of Cambridge6, Karolinska Institutet7, University of Eastern Finland8, University of Oxford9, Cedars-Sinai Medical Center10, University of Ottawa11, University of Helsinki12, University of Pennsylvania13, University of North Carolina at Chapel Hill14, University of Mississippi Medical Center15
TL;DR: The aggregation and analysis of high-quality exome (protein-coding region) sequence data for 60,706 individuals of diverse ethnicities generated as part of the Exome Aggregation Consortium (ExAC) provides direct evidence for the presence of widespread mutational recurrence.
Abstract: Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) sequence data for 60,706 individuals of diverse ethnicities. The resulting catalogue of human genetic diversity has unprecedented resolution, with an average of one variant every eight bases of coding sequence and the presence of widespread mutational recurrence. The deep catalogue of variation provided by the Exome Aggregation Consortium (ExAC) can be used to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation; we identify 3,230 genes with near-complete depletion of truncating variants, 79% of which have no currently established human disease phenotype. Finally, we show that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human knockout variants in protein-coding genes.
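One of the uses the abstract names is filtering candidate disease-causing variants against the reference catalogue. The sketch below shows the basic idea under a hypothetical allele-frequency cut-off; the variants, frequencies, and the 0.1% threshold are illustrative assumptions, not data from the paper.

```python
# Hypothetical candidate variants with allele frequencies taken from a
# reference catalogue such as ExAC (all values invented for illustration).
CANDIDATES = [
    ("chr1:12345:A>G", 0.045),   # common, so unlikely to cause a rare disease
    ("chr2:67890:C>T", 0.0008),  # rare in the reference population
    ("chr3:13579:G>A", 0.0),     # absent from the catalogue
]

RARE_AF_THRESHOLD = 0.001  # assumed cut-off for a rare Mendelian disorder

def filter_rare(variants, threshold=RARE_AF_THRESHOLD):
    """Keep only variants rare enough to remain plausible candidates."""
    return [vid for vid, af in variants if af < threshold]

print(filter_rare(CANDIDATES))  # the common variant is discarded
```

The larger and more diverse the reference catalogue, the more candidate variants can be confidently excluded, which is why a 60,706-exome resource is so valuable for this kind of filtering.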
••
TL;DR: This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
Abstract: How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
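A minimal, self-contained example of "representing and manipulating uncertainty about models and predictions" is Bayesian updating in the conjugate beta-binomial model: a toy sketch not taken from the Review itself, using an assumed uniform prior over a coin's bias.

```python
# Bayesian updating of a Beta prior over a coin's heads-probability.
# The Beta distribution is conjugate to the binomial likelihood, so the
# posterior is obtained by simply adding the observed counts.

def update_beta(alpha: float, beta: float, heads: int, tails: int):
    """Posterior Beta parameters after observing heads/tails."""
    return alpha + heads, beta + tails

def beta_mean(alpha: float, beta: float) -> float:
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Start from a uniform Beta(1, 1) prior, then observe 7 heads and 3 tails.
a, b = update_beta(1, 1, heads=7, tails=3)
print(beta_mean(a, b))  # posterior mean 8/12 = 0.666...
```

The posterior is a full distribution, not a point estimate, so the machine's remaining uncertainty about the bias is itself represented and can be propagated into predictions, which is the core idea of the probabilistic framework the Review describes.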
21 Sep 2015
TL;DR: The 2015 World Alzheimer Report updates data on the prevalence, incidence, cost and trends of dementia worldwide, leaving no doubt that dementia, including Alzheimer's disease and other causes, is one of the biggest global public health and social care challenges facing people today and in the future, as discussed by the authors.
Abstract: Today, over 46 million people live with dementia worldwide, more than the population of Spain. This number is estimated to increase to 131.5 million by 2050. Dementia also has a huge economic impact: the total estimated worldwide cost of dementia today is US$818 billion, and it will become a trillion-dollar disease by 2018. This means that if dementia care were a country, it would be the world's 18th largest economy, exceeding the market values of companies such as Apple (US$742 billion), Google (US$368 billion) and Exxon (US$357 billion). In many parts of the world there is a growing awareness of dementia, but across the globe it remains the case that a diagnosis of dementia can bring with it stigma and social isolation. Today, we estimate that 94% of people living with dementia in low- and middle-income countries are cared for at home. These are regions where health and care systems often provide limited or no support to people living with dementia or to their families. The 2015 World Alzheimer Report updates data on the prevalence, incidence, cost and trends of dementia worldwide. It also estimates how these numbers will increase in the future, leaving us with no doubt that dementia, including Alzheimer's disease and other causes, is one of the biggest global public health and social care challenges facing people today and in the future. The two organisations we lead are ADI, the only worldwide federation of Alzheimer associations and global voice on dementia, and Bupa, a purpose-driven global health and care company that is the leading international provider of specialist dementia care, caring for around 60,000 people living with dementia each year. Together, we are committed to ensuring that dementia becomes an international health priority.
We believe national dementia plans are the first step towards ensuring all countries are equipped to enable people to live well with dementia, and help to reduce the risk of dementia for future generations. There is now a growing list of countries which have such provision in place or which are developing national dementia plans, but it's not enough. Given the epidemic scale of dementia, with no known cure on the horizon, and with a global ageing population, we're calling on governments and …