SciSpace (formerly Typeset)

Showing papers from Imperial College London published in 2016


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4  +334 moreInstitutions (82)
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s⁻¹ Mpc⁻¹, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016, corresponding to a reionization redshift of . These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with | ΩK | < 0.005.
Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ2 potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing. We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.

10,728 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
Theo Vos1, Christine Allen1, Megha Arora1, Ryan M Barber1  +696 moreInstitutions (260)
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) as discussed by the authors was used to estimate the incidence, prevalence, and years lived with disability for diseases and injuries at the global, regional, and national scale over the period of 1990 to 2015.

5,050 citations


Journal ArticleDOI
Haidong Wang1, Mohsen Naghavi1, Christine Allen1, Ryan M Barber1  +841 moreInstitutions (293)
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.

4,804 citations


Journal ArticleDOI
TL;DR: In the time-to-event analysis, the rate of the first occurrence of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke among patients with type 2 diabetes mellitus was lower with liraglutide than with placebo.
Abstract: Background: The cardiovascular effect of liraglutide, a glucagon-like peptide 1 analogue, when added to standard care in patients with type 2 diabetes, remains unknown. Methods: In this double-blind trial, we randomly assigned patients with type 2 diabetes and high cardiovascular risk to receive liraglutide or placebo. The primary composite outcome in the time-to-event analysis was the first occurrence of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke. The primary hypothesis was that liraglutide would be noninferior to placebo with regard to the primary outcome, with a margin of 1.30 for the upper boundary of the 95% confidence interval of the hazard ratio. No adjustments for multiplicity were performed for the prespecified exploratory outcomes. Results: A total of 9340 patients underwent randomization. The median follow-up was 3.8 years. The primary outcome occurred in significantly fewer patients in the liraglutide group (608 of 4668 patients [13.0%]) than in the placebo ...
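The noninferiority design described above reduces to a simple check: the upper bound of the 95% confidence interval for the hazard ratio must fall below the prespecified margin of 1.30. A minimal sketch of that check; the point estimate and standard error below are hypothetical stand-ins, not the trial's actual numbers:

```python
import math

def hazard_ratio_ci(hr, se_log_hr, z=1.96):
    """95% CI for a hazard ratio, given the standard error of log(HR)."""
    log_hr = math.log(hr)
    return math.exp(log_hr - z * se_log_hr), math.exp(log_hr + z * se_log_hr)

hr, se = 0.87, 0.055                 # hypothetical estimate and SE of log(HR)
lo, hi = hazard_ratio_ci(hr, se)

noninferior = hi < 1.30              # margin from the trial design
superior = hi < 1.00                 # superiority if the whole CI sits below 1
print(f"HR {hr} (95% CI {lo:.3f}-{hi:.3f}); "
      f"noninferior={noninferior}, superior={superior}")
```

With these illustrative numbers the whole interval lies below 1, so the result would satisfy both the noninferiority margin and the stricter superiority criterion.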

4,409 citations


Posted Content
TL;DR: SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, using a perceptual loss function that consists of an adversarial loss and a content loss.
Abstract: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
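The perceptual loss described above combines a content term, measured in feature space rather than pixel space, with a weighted adversarial term. A minimal numerical sketch of that combination; the fixed linear `feature_map` is a stand-in for a pretrained VGG network, and the weight and inputs are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_map(img):
    # Stand-in for a pretrained feature extractor (assumption): a fixed,
    # deterministic elementwise scaling of the flattened image.
    flat = img.reshape(-1)
    weights = np.linspace(-1.0, 1.0, flat.size)
    return weights * flat

def content_loss(sr, hr):
    # MSE between feature maps: perceptual similarity, not pixel similarity.
    return np.mean((feature_map(sr) - feature_map(hr)) ** 2)

def adversarial_loss(d_sr):
    # Generator term -log D(G(x)): d_sr is the discriminator's probability
    # that the super-resolved image is a real high-resolution image.
    return -np.log(d_sr + 1e-12)

def perceptual_loss(sr, hr, d_sr, adv_weight=1e-3):
    # SRGAN-style total: content loss plus a small adversarial contribution.
    return content_loss(sr, hr) + adv_weight * adversarial_loss(d_sr)

hr = rng.random((8, 8))                       # "ground truth" high-res patch
sr = hr + 0.1 * rng.standard_normal((8, 8))   # imperfect super-resolved patch
print(perceptual_loss(sr, hr, d_sr=0.4))
```

The adversarial term pushes the generator toward images the discriminator accepts as real, while the content term anchors them to the ground truth in feature space.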

4,404 citations


Journal ArticleDOI
TL;DR: The posterior probability of meeting the target of halting, by 2025, the rise in obesity at its 2010 levels is calculated, assuming post-2000 trends continue.

3,766 citations


Journal ArticleDOI
11 Aug 2016-Nature
TL;DR: Using multi-modal magnetic resonance images from the Human Connectome Project and an objective semi-automated neuroanatomical approach, 180 areas per hemisphere are delineated, bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults.
Abstract: Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal 'fingerprint' of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.

3,414 citations


Posted Content
TL;DR: It is shown that it is possible to overcome the limitation of connectionist models and train networks that can maintain expertise on tasks that they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
Abstract: The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on the MNIST hand written digit dataset and by learning several Atari 2600 games sequentially.
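The mechanism summarized above, slowing learning on weights important for old tasks, amounts to adding a quadratic penalty to the new task's loss, weighted by each parameter's (approximate Fisher) importance for the old task. A minimal sketch on a toy two-parameter quadratic problem; all curvatures, optima, and the penalty strength are hypothetical values chosen for illustration:

```python
import numpy as np

# Toy per-weight curvatures and optima for two "tasks" (illustrative).
A, theta_A = np.array([2.0, 0.1]), np.array([1.0, -1.0])   # old task
B, theta_B = np.array([0.1, 2.0]), np.array([-1.0, 1.0])   # new task

# For a quadratic loss the diagonal Fisher information equals the curvature,
# which is what the EWC-style penalty approximates from gradients in practice.
fisher = A
lam = 10.0  # penalty strength (hypothetical)

def grad(theta):
    # Gradient of: new-task loss + (lam/2) * sum(F_i * (theta_i - theta_A_i)^2)
    return B * (theta - theta_B) + lam * fisher * (theta - theta_A)

theta = theta_A.copy()
for _ in range(500):          # plain gradient descent on the penalized loss
    theta -= 0.05 * grad(theta)

# The weight important to the old task (index 0, high Fisher) stays near
# theta_A; the unimportant one (index 1) moves toward the new task's optimum.
print(theta)
```

The high-Fisher weight barely moves while the low-Fisher weight is free to adapt, which is exactly the trade-off the abstract describes.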

3,026 citations


Journal ArticleDOI
Peter Goldstraw1, Kari Chansky, John Crowley, Ramón Rami-Porta2, Hisao Asamura3, Wilfried Ernst Erich Eberhardt4, Andrew G. Nicholson1, Patti A. Groome5, Alan Mitchell, Vanessa Bolejack, David Ball6, David G. Beer7, Ricardo Beyruti8, Frank C. Detterbeck9, Wilfried Eberhardt4, John G. Edwards10, Françoise Galateau-Salle11, Dorothy Giroux12, Fergus V. Gleeson13, James Huang14, Catherine Kennedy15, Jhingook Kim16, Young Tae Kim17, Laura Kingsbury12, Haruhiko Kondo18, Mark Krasnik19, Kaoru Kubota20, Antoon Lerut21, Gustavo Lyons, Mirella Marino, Edith M. Marom22, Jan P. van Meerbeeck23, Takashi Nakano24, Anna K. Nowak25, Michael D Peake26, Thomas W. Rice27, Kenneth E. Rosenzweig28, Enrico Ruffini29, Valerie W. Rusch14, Nagahiro Saijo, Paul Van Schil23, Jean-Paul Sculier30, Lynn Shemanski12, Kelly G. Stratton12, Kenji Suzuki31, Yuji Tachimori32, Charles F. Thomas33, William D. Travis14, Ming-Sound Tsao34, Andrew T. Turrisi35, Johan Vansteenkiste21, Hirokazu Watanabe, Yi-Long Wu, Paul Baas36, Jeremy J. Erasmus22, Seiki Hasegawa24, Kouki Inai37, Kemp H. Kernstine38, Hedy L. Kindler39, Lee M. Krug14, Kristiaan Nackaerts21, Harvey I. Pass40, David C. Rice22, Conrad Falkson5, Pier Luigi Filosso29, Giuseppe Giaccone41, Kazuya Kondo42, Marco Lucchi43, Meinoshin Okumura44, Eugene H. Blackstone27, F. Abad Cavaco, E. Ansótegui Barrera, J. Abal Arca, I. Parente Lamelas, A. Arnau Obrer45, R. Guijarro Jorge45, D. Ball6, G.K. Bascom46, A. I. Blanco Orozco, M. A. González Castro, M.G. Blum, D. Chimondeguy, V. Cvijanovic47, S. Defranchi48, B. de Olaiz Navarro, I. Escobar Campuzano2, I. Macía Vidueira2, E. Fernández Araujo49, F. Andreo García49, Kwun M. Fong, G. Francisco Corral, S. Cerezo González, J. Freixinet Gilart, L. García Arangüena, S. García Barajas50, P. Girard, Tuncay Göksel, M. T. González Budiño51, G. González Casaurrán50, J. A. Gullón Blanco, J. Hernández Hernández, H. Hernández Rodríguez, J. Herrero Collantes, M. Iglesias Heras, J. M. Izquierdo Elena, Erik Jakobsen, S. 
Kostas52, P. León Atance, A. Núñez Ares, M. Liao, M. Losanovscky, G. Lyons, R. Magaroles53, L. De Esteban Júlvez53, M. Mariñán Gorospe, Brian C. McCaughan15, Catherine J. Kennedy15, R. Melchor Íñiguez54, L. Miravet Sorribes, S. Naranjo Gozalo, C. Álvarez de Arriba, M. Núñez Delgado, J. Padilla Alarcón, J. C. Peñalver Cuesta, Jongsun Park16, H. Pass40, M. J. Pavón Fernández, Mara Rosenberg, Enrico Ruffini29, V. Rusch14, J. Sánchez de Cos Escuín, A. Saura Vinuesa, M. Serra Mitjans, Trond Eirik Strand, Dragan Subotic, S.G. Swisher22, Ricardo Mingarini Terra8, Charles R. Thomas33, Kurt G. Tournoy55, P. Van Schil23, M. Velasquez, Y.L. Wu, K. Yokoi 
Imperial College London1, University of Barcelona2, Keio University3, University of Duisburg-Essen4, Queen's University5, Peter MacCallum Cancer Centre6, University of Michigan7, University of São Paulo8, Yale University9, Northern General Hospital10, University of Caen Lower Normandy11, Fred Hutchinson Cancer Research Center12, University of Oxford13, Memorial Sloan Kettering Cancer Center14, University of Sydney15, Sungkyunkwan University16, Seoul National University17, Kyorin University18, University of Copenhagen19, Nippon Medical School20, Katholieke Universiteit Leuven21, University of Texas MD Anderson Cancer Center22, University of Antwerp23, Hyogo College of Medicine24, University of Western Australia25, Glenfield Hospital26, Cleveland Clinic27, Icahn School of Medicine at Mount Sinai28, University of Turin29, Université libre de Bruxelles30, Juntendo University31, National Cancer Research Institute32, Mayo Clinic33, University of Toronto34, Sinai Grace Hospital35, Netherlands Cancer Institute36, Hiroshima University37, City of Hope National Medical Center38, University of Chicago39, New York University40, Georgetown University41, University of Tokushima42, University of Pisa43, Osaka University44, University of Valencia45, Good Samaritan Hospital46, Military Medical Academy47, Fundación Favaloro48, Autonomous University of Barcelona49, Complutense University of Madrid50, University of Oviedo51, National and Kapodistrian University of Athens52, Rovira i Virgili University53, Autonomous University of Madrid54, Ghent University55
TL;DR: The methods used to evaluate the resultant stage groupings and the proposals put forward for the 8th edition of the TNM Classification for lung cancer, due to be published in late 2016, are described.

2,826 citations


Journal ArticleDOI
Bin Zhou1, Yuan Lu2, Kaveh Hajifathalian2, James Bentham1  +494 moreInstitutions (170)
TL;DR: In this article, the authors used a Bayesian hierarchical model to estimate trends in diabetes prevalence, defined as fasting plasma glucose of 7.0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs in 200 countries and territories in 21 regions, by sex and from 1980 to 2014.

Journal ArticleDOI
14 Jan 2016-Nature
TL;DR: Analysis of worldwide variation in six major traits critical to growth, survival and reproduction within the largest sample of vascular plant species ever compiled found that occupancy of six-dimensional trait space is strongly concentrated, indicating coordination and trade-offs.
Abstract: The authors found that the key elements of plant form and function, analysed at global scale, are largely concentrated into a two-dimensional plane indexed by the size of whole plants and organs on the one hand, and the construction costs for photosynthetic leaf area, on the other.

Journal ArticleDOI
TL;DR: The associations of both overweight and obesity with higher all-cause mortality were broadly consistent across four continents, supporting strategies to combat the entire spectrum of excess adiposity in many populations.

Journal ArticleDOI
14 Dec 2016-Nature
TL;DR: There are opportunities to use such sustainable polymers in both high-value areas and in basic applications such as packaging.
Abstract: Renewable resources are used increasingly in the production of polymers. In particular, monomers such as carbon dioxide, terpenes, vegetable oils and carbohydrates can be used as feedstocks for the manufacture of a variety of sustainable materials and products, including elastomers, plastics, hydrogels, flexible electronics, resins, engineering polymers and composites. Efficient catalysis is required to produce monomers, to facilitate selective polymerizations and to enable recycling or upcycling of waste materials. There are opportunities to use such sustainable polymers in both high-value areas and in basic applications such as packaging. Life-cycle assessment can be used to quantify the environmental benefits of sustainable polymers.

Journal ArticleDOI
TL;DR: The SCARE Guideline is presented: a 14-item checklist, approved by the participants, intended to improve the reporting quality of surgical case reports.

Journal ArticleDOI
01 Feb 2016-Gut
TL;DR: The potential of manipulating the gut microbiota in these disorders is assessed, with an examination of the latest and most relevant evidence relating to antibiotics, probiotics, prebiotics, polyphenols and faecal microbiota transplantation.
Abstract: Over the last 10–15 years, our understanding of the composition and functions of the human gut microbiota has increased exponentially. To a large extent, this has been due to new ‘omic’ technologies that have facilitated large-scale analysis of the genetic and metabolic profile of this microbial community, revealing it to be comparable in influence to a new organ in the body and offering the possibility of a new route for therapeutic intervention. Moreover, it might be more accurate to think of it like an immune system: a collection of cells that work in unison with the host and that can promote health but sometimes initiate disease. This review gives an update on the current knowledge in the area of gut disorders, in particular metabolic syndrome and obesity-related disease, liver disease, IBD and colorectal cancer. The potential of manipulating the gut microbiota in these disorders is assessed, with an examination of the latest and most relevant evidence relating to antibiotics, probiotics, prebiotics, polyphenols and faecal microbiota transplantation.

Journal ArticleDOI
TL;DR: In this paper, the authors used three long-term satellite leaf area index (LAI) records and ten global ecosystem models to investigate four key drivers of LAI trends during 1982-2009.
Abstract: Global environmental change is rapidly altering the dynamics of terrestrial vegetation, with consequences for the functioning of the Earth system and provision of ecosystem services(1,2). Yet how global vegetation is responding to the changing environment is not well established. Here we use three long-term satellite leaf area index (LAI) records and ten global ecosystem models to investigate four key drivers of LAI trends during 1982-2009. We show a persistent and widespread increase of growing season integrated LAI (greening) over 25% to 50% of the global vegetated area, whereas less than 4% of the globe shows decreasing LAI (browning). Factorial simulations with multiple global ecosystem models suggest that CO2 fertilization effects explain 70% of the observed greening trend, followed by nitrogen deposition (9%), climate change (8%) and land cover change (LCC) (4%). CO2 fertilization effects explain most of the greening trends in the tropics, whereas climate change resulted in greening of the high latitudes and the Tibetan Plateau. LCC contributed most to the regional greening observed in southeast China and the eastern United States. The regional effects of unexplained factors suggest that the next generation of ecosystem models will need to explore the impacts of forest demography, differences in regional management intensities for cropland and pastures, and other emerging productivity constraints such as phosphorus availability.
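The factorial simulations described above attribute the greening trend by differencing: fit a trend to the all-drivers run, fit a trend to a run with one driver switched off, and credit the difference to that driver. A sketch of that bookkeeping; the series and trend values below are synthetic stand-ins, not the paper's model output:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1982, 2010)

# Hypothetical factorial experiment: the LAI trend with all drivers active,
# and the trend remaining when each driver is disabled in the model.
full_trend = 0.010                      # LAI units per year, all drivers on
off_trends = {
    "CO2 fertilization": 0.0030,        # large drop when disabled
    "N deposition": 0.0091,
    "climate change": 0.0092,
    "land cover change": 0.0096,
}

def series(trend):
    # Synthetic LAI time series: linear trend plus small observation noise.
    return 1.0 + trend * (years - years[0]) + 0.005 * rng.standard_normal(years.size)

def slope(y):
    return np.polyfit(years, y, 1)[0]   # least-squares trend estimate

full = slope(series(full_trend))
contributions = {name: full - slope(series(t)) for name, t in off_trends.items()}
for name, c in contributions.items():
    print(f"{name}: {100 * c / full:.0f}% of the greening trend")
```

With these illustrative inputs the CO2-off run loses most of the trend, so CO2 fertilization is credited with the bulk of the greening, mirroring the attribution logic (though not the numbers) of the study.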

Journal ArticleDOI
Nicholas J Kassebaum1, Megha Arora1, Ryan M Barber1, Zulfiqar A Bhutta2  +679 moreInstitutions (268)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.

Journal ArticleDOI
TL;DR: To combat the threat to human health and biosecurity from antimicrobial resistance, an understanding of its mechanisms and drivers is needed.

Journal ArticleDOI
TL;DR: This review provides a “beginning‐to‐end” analysis of the recent advances reported in lignin valorisation, with particular emphasis on the improved understanding of lignin's biosynthesis and structure.
Abstract: Lignin is an abundant biopolymer with a high carbon content and high aromaticity. Despite its potential as a raw material for the fuel and chemical industries, lignin remains the most poorly utilised of the lignocellulosic biopolymers. Effective valorisation of lignin requires careful fine-tuning of multiple "upstream" (i.e., lignin bioengineering, lignin isolation and "early-stage catalytic conversion of lignin") and "downstream" (i.e., lignin depolymerisation and upgrading) process stages, demanding input and understanding from a broad array of scientific disciplines. This review provides a "beginning-to-end" analysis of the recent advances reported in lignin valorisation. Particular emphasis is placed on the improved understanding of lignin's biosynthesis and structure, differences in structure and chemical bonding between native and technical lignins, emerging catalytic valorisation strategies, and the relationships between lignin structure and catalyst performance.

Journal ArticleDOI
07 Jul 2016-Nature
TL;DR: Statistical analysis of vibrational spectroscopy time series and dark-field scattering spectra provides evidence of single-molecule strong coupling, opening up the exploration of complex natural processes such as photosynthesis and the possibility of manipulating chemical bonds.
Abstract: Photon emitters placed in an optical cavity experience an environment that changes how they are coupled to the surrounding light field. In the weak-coupling regime, the extraction of light from the emitter is enhanced. But more profound effects emerge when single-emitter strong coupling occurs: mixed states are produced that are part light, part matter1, 2, forming building blocks for quantum information systems and for ultralow-power switches and lasers. Such cavity quantum electrodynamics has until now been the preserve of low temperatures and complicated fabrication methods, compromising its use. Here, by scaling the cavity volume to less than 40 cubic nanometres and using host–guest chemistry to align one to ten protectively isolated methylene-blue molecules, we reach the strong-coupling regime at room temperature and in ambient conditions. Dispersion curves from more than 50 such plasmonic nanocavities display characteristic light–matter mixing, with Rabi frequencies of 300 millielectronvolts for ten methylene-blue molecules, decreasing to 90 millielectronvolts for single molecules—matching quantitative models. Statistical analysis of vibrational spectroscopy time series and dark-field scattering spectra provides evidence of single-molecule strong coupling. This dressing of molecules with light can modify photochemistry, opening up the exploration of complex natural processes such as photosynthesis and the possibility of manipulating chemical bonds.

Journal ArticleDOI
26 Jul 2016-eLife
TL;DR: The height differential between the tallest and shortest populations was 19-20 cm a century ago, and has remained the same for women and increased for men a century later despite substantial changes in the ranking of countries.
Abstract: Being taller is associated with enhanced longevity, and higher education and earnings. We reanalysed 1472 population-based studies, with measurement of height on more than 18.6 million participants to estimate mean height for people born between 1896 and 1996 in 200 countries. The largest gain in adult height over the past century has occurred in South Korean women and Iranian men, who became 20.2 cm (95% credible interval 17.5–22.7) and 16.5 cm (13.3–19.7) taller, respectively. In contrast, there was little change in adult height in some sub-Saharan African countries and in South Asia over the century of analysis. The tallest people over these 100 years are men born in the Netherlands in the last quarter of 20th century, whose average heights surpassed 182.5 cm, and the shortest were women born in Guatemala in 1896 (140.3 cm; 135.8–144.8). The height differential between the tallest and shortest populations was 19-20 cm a century ago, and has remained the same for women and increased for men a century later despite substantial changes in the ranking of countries.

Journal ArticleDOI
TL;DR: UK Biobank brain imaging is described and results derived from the first 5,000 participants' data release are presented, which have already yielded a rich range of associations between brain imaging and other measures collected by UK Biobanks.
Abstract: Medical imaging has enormous potential for early disease prediction, but is impeded by the difficulty and expense of acquiring data sets before symptom onset. UK Biobank aims to address this problem directly by acquiring high-quality, consistently acquired imaging data from 100,000 predominantly healthy participants, with health outcomes being tracked over the coming decades. The brain imaging includes structural, diffusion and functional modalities. Along with body and cardiac imaging, genetics, lifestyle measures, biological phenotyping and health records, this imaging is expected to enable discovery of imaging markers of a broad range of diseases at their earliest stages, as well as provide unique insight into disease mechanisms. We describe UK Biobank brain imaging and present results derived from the first 5,000 participants' data release. Although this covers just 5% of the ultimate cohort, it has already yielded a rich range of associations between brain imaging and other measures collected by UK Biobank.

Journal ArticleDOI
TL;DR: Among women with early-stage breast cancer who were at high clinical risk and low genomic risk for recurrence, forgoing chemotherapy on the basis of the 70-gene signature led to a 5-year rate of survival without distant metastasis that was 1.5 percentage points lower than the rate with chemotherapy.
Abstract: Background: The 70-gene signature test (MammaPrint) has been shown to improve prediction of clinical outcome in women with early-stage breast cancer. We sought to provide prospective evidence of the clinical utility of the addition of the 70-gene signature to standard clinical–pathological criteria in selecting patients for adjuvant chemotherapy. Methods: In this randomized, phase 3 study, we enrolled 6693 women with early-stage breast cancer and determined their genomic risk (using the 70-gene signature) and their clinical risk (using a modified version of Adjuvant! Online). Women at low clinical and genomic risk did not receive chemotherapy, whereas those at high clinical and genomic risk did receive such therapy. In patients with discordant risk results, either the genomic risk or the clinical risk was used to determine the use of chemotherapy. The primary goal was to assess whether, among patients with high-risk clinical features and a low-risk gene-expression profile who did not receive chemotherapy, the...

Journal ArticleDOI
TL;DR: The large-scale evidence from randomised trials indicates that it is unlikely that large absolute excesses in other serious adverse events still await discovery, and any further findings that emerge about the effects of statin therapy would not be expected to alter materially the balance of benefits and harms.

Journal ArticleDOI
TL;DR: An updated review of the literature and evidence on the definitions and lexicon, the limits, the natural history, the markers of progression, and the ethical consequence of detecting the disease at this asymptomatic stage of Alzheimer's disease are provided.
Abstract: During the past decade, a conceptual shift occurred in the field of Alzheimer's disease (AD) considering the disease as a continuum. Thanks to evolving biomarker research and substantial discoveries, it is now possible to identify the disease even at the preclinical stage before the occurrence of the first clinical symptoms. This preclinical stage of AD has become a major research focus as the field postulates that early intervention may offer the best chance of therapeutic success. To date, very little evidence is established on this "silent" stage of the disease. A clarification is needed about the definitions and lexicon, the limits, the natural history, the markers of progression, and the ethical consequence of detecting the disease at this asymptomatic stage. This article is aimed at addressing all the different issues by providing for each of them an updated review of the literature and evidence, with practical recommendations.

Journal ArticleDOI
01 Mar 2016-Gut
TL;DR: A. muciniphila is associated with a healthier metabolic status and better clinical outcomes after CR in overweight/obese adults, and the interaction between gut microbiota ecology and A. muc iniphila warrants further investigation.
Abstract: OBJECTIVE: Individuals with obesity and type 2 diabetes differ from lean and healthy individuals in their abundance of certain gut microbial species and microbial gene richness. Abundance of Akkermansia muciniphila, a mucin-degrading bacterium, has been inversely associated with body fat mass and glucose intolerance in mice, but more evidence is needed in humans. The impact of diet and weight loss on this bacterial species is unknown. Our objective was to evaluate the association between faecal A. muciniphila abundance, faecal microbiome gene richness, diet, host characteristics, and their changes after calorie restriction (CR). DESIGN: The intervention consisted of a 6-week CR period followed by a 6-week weight stabilisation diet in overweight and obese adults (N=49, including 41 women). Faecal A. muciniphila abundance, faecal microbial gene richness, diet and bioclinical parameters were measured at baseline and after CR and weight stabilisation. RESULTS: At baseline, A. muciniphila was inversely related to fasting glucose, waist-to-hip ratio and subcutaneous adipocyte diameter. Subjects with higher gene richness and A. muciniphila abundance exhibited the healthiest metabolic status, particularly in fasting plasma glucose, plasma triglycerides and body fat distribution. Individuals with higher baseline A. muciniphila displayed greater improvement in insulin sensitivity markers and other clinical parameters after CR. These participants also experienced a reduction in A. muciniphila abundance, but it remained significantly higher than in individuals with lower baseline abundance. A. muciniphila was associated with microbial species known to be related to health. CONCLUSIONS: A. muciniphila is associated with a healthier metabolic status and better clinical outcomes after CR in overweight/obese adults. The interaction between gut microbiota ecology and A. muciniphila warrants further investigation. TRIAL REGISTRATION NUMBER: NCT01314690

Journal ArticleDOI
University of East Anglia1, University of Oslo2, Commonwealth Scientific and Industrial Research Organisation3, University of Exeter4, Oak Ridge National Laboratory5, National Oceanic and Atmospheric Administration6, Woods Hole Research Center7, University of California, San Diego8, Karlsruhe Institute of Technology9, Cooperative Institute for Marine and Atmospheric Studies10, Centre national de la recherche scientifique11, University of Maryland, College Park12, National Institute of Water and Atmospheric Research13, Woods Hole Oceanographic Institution14, Flanders Marine Institute15, Alfred Wegener Institute for Polar and Marine Research16, Netherlands Environmental Assessment Agency17, University of Illinois at Urbana–Champaign18, Leibniz Institute of Marine Sciences19, Max Planck Society20, University of Paris21, Hobart Corporation22, University of Bern23, Oeschger Centre for Climate Change Research24, National Center for Atmospheric Research25, University of Miami26, Council of Scientific and Industrial Research27, University of Colorado Boulder28, National Institute for Environmental Studies29, Joint Institute for the Study of the Atmosphere and Ocean30, Geophysical Institute, University of Bergen31, Montana State University32, Goddard Space Flight Center33, University of New Hampshire34, Bjerknes Centre for Climate Research35, Imperial College London36, Lamont–Doherty Earth Observatory37, Auburn University38, Wageningen University and Research Centre39, VU University Amsterdam40, Met Office41
TL;DR: In this article, the authors quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community.
Abstract: . Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates and consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models. We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. 
For the last decade available (2006–2015), EFF was 9.3 ± 0.5 GtC yr−1, ELUC 1.0 ± 0.5 GtC yr−1, GATM 4.5 ± 0.1 GtC yr−1, SOCEAN 2.6 ± 0.5 GtC yr−1, and SLAND 3.1 ± 0.9 GtC yr−1. For year 2015 alone, the growth in EFF was approximately zero and emissions remained at 9.9 ± 0.5 GtC yr−1, showing a slowdown in growth of these emissions compared to the average growth of 1.8 % yr−1 that took place during 2006–2015. Also, for 2015, ELUC was 1.3 ± 0.5 GtC yr−1, GATM was 6.3 ± 0.2 GtC yr−1, SOCEAN was 3.0 ± 0.5 GtC yr−1, and SLAND was 1.9 ± 0.9 GtC yr−1. GATM was higher in 2015 compared to the past decade (2006–2015), reflecting a smaller SLAND for that year. The global atmospheric CO2 concentration reached 399.4 ± 0.1 ppm averaged over 2015. For 2016, preliminary data indicate the continuation of low growth in EFF with +0.2 % (range of −1.0 to +1.8 %) based on national emissions projections for China and USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. In spite of the low growth of EFF in 2016, the growth rate in atmospheric CO2 concentration is expected to be relatively high because of the persistence of the smaller residual terrestrial sink (SLAND) in response to El Niño conditions of 2015–2016. From this projection of EFF and assumed constant ELUC for 2016, cumulative emissions of CO2 will reach 565 ± 55 GtC (2075 ± 205 GtCO2) for 1870–2016, about 75 % from EFF and 25 % from ELUC. This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015b, a, 2014, 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center ( doi:10.3334/CDIAC/GCP_2016 ).
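The abstract describes SLAND as the residual that closes the budget: emissions (EFF + ELUC) must equal the sum of the atmospheric growth, ocean sink, and land sink. A minimal sketch checking this identity against the quoted 2015 numbers, with the quoted ±1σ values combined in quadrature (an assumption of this sketch; the paper's uncertainty treatment is more involved):

```python
import math

# 2015 carbon budget terms from the abstract, in GtC/yr as (value, 1-sigma).
eff = (9.9, 0.5)     # fossil fuel and industry emissions
eluc = (1.3, 0.5)    # land-use change emissions
gatm = (6.3, 0.2)    # atmospheric growth
socean = (3.0, 0.5)  # ocean sink

# Residual land sink closing the budget: SLAND = (EFF + ELUC) - (GATM + SOCEAN)
sland = (eff[0] + eluc[0]) - (gatm[0] + socean[0])

# Propagate the quoted 1-sigma uncertainties in quadrature (simplifying assumption).
sigma = math.sqrt(sum(s ** 2 for _, s in (eff, eluc, gatm, socean)))

print(f"SLAND = {sland:.1f} ± {sigma:.1f} GtC/yr")
```

To one decimal place this reproduces the reported 2015 value of SLAND = 1.9 ± 0.9 GtC yr−1.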

Journal ArticleDOI
Aysu Okbay1, Jonathan P. Beauchamp2, Mark Alan Fontana3, James J. Lee4  +293 moreInstitutions (81)
26 May 2016-Nature
TL;DR: In this article, the authors report the results of a genome-wide association study (GWAS) for educational attainment, showing that single-nucleotide polymorphisms associated with educational attainment disproportionately occur in genomic regions regulating gene expression in the fetal brain.
Abstract: Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.
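The 74 loci above are "genome-wide significant", which in GWAS conventionally means p < 5 × 10⁻⁸: a Bonferroni correction of α = 0.05 across roughly one million independent common variants. A sketch of that convention (the abstract does not state its exact threshold; the figure of one million tests is the field's standard assumption, not taken from this paper):

```python
# Conventional genome-wide significance threshold: Bonferroni-correct the
# familywise error rate alpha over ~1 million independent common variants.
alpha = 0.05
independent_tests = 1_000_000
threshold = alpha / independent_tests  # ~5e-8

print(f"genome-wide significance: p < {threshold:.0e}")
```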

Journal ArticleDOI
TL;DR: The JASPAR CORE collection was expanded with 494 new TF binding profiles, and 130 transcription factor flexible models, trained on ChIP-seq data for vertebrates, were introduced to capture dinucleotide dependencies within TF binding sites.
Abstract: JASPAR (http://jaspar.genereg.net) is an open-access database storing curated, non-redundant transcription factor (TF) binding profiles representing transcription factor binding preferences as position frequency matrices for multiple species in six taxonomic groups. For this 2016 release, we expanded the JASPAR CORE collection with 494 new TF binding profiles (315 in vertebrates, 11 in nematodes, 3 in insects, 1 in fungi and 164 in plants) and updated 59 profiles (58 in vertebrates and 1 in fungi). The introduced profiles represent an 83% expansion and 10% update when compared to the previous release. We updated the structural annotation of the TF DNA binding domains (DBDs) following a published hierarchical structural classification. In addition, we introduced 130 transcription factor flexible models trained on ChIP-seq data for vertebrates, which capture dinucleotide dependencies within TF binding sites. This new JASPAR release is accompanied by a new web tool to infer JASPAR TF binding profiles recognized by a given TF protein sequence. Moreover, we provide the users with a Ruby module complementing the JASPAR API to ease programmatic access and use of the JASPAR collection of profiles. Finally, we provide the JASPAR2016 R/Bioconductor data package with the data of this release.
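A position frequency matrix of the kind JASPAR stores simply records base counts per motif position. A minimal sketch of the standard way such a matrix is turned into a log-odds position weight matrix and used to score sequences (the matrix, pseudocount, and background here are illustrative assumptions, not an actual JASPAR profile or the JASPAR scoring code):

```python
import math

# Hypothetical 4-position frequency matrix (rows: base, columns: motif position).
pfm = {
    "A": [10, 0, 2, 8],
    "C": [2, 1, 15, 2],
    "G": [5, 18, 1, 5],
    "T": [3, 1, 2, 5],
}

def pfm_to_pwm(pfm, background=0.25, pseudocount=0.8):
    """Convert base counts to a log2-odds position weight matrix."""
    n_positions = len(next(iter(pfm.values())))
    totals = [sum(pfm[b][i] for b in pfm) for i in range(n_positions)]
    return {
        base: [
            math.log2(((c + pseudocount / 4) / (t + pseudocount)) / background)
            for c, t in zip(counts, totals)
        ]
        for base, counts in pfm.items()
    }

def score(pwm, seq):
    """Sum the per-position log-odds for a sequence of motif length."""
    return sum(pwm[base][i] for i, base in enumerate(seq))

pwm = pfm_to_pwm(pfm)
# The consensus "AGCA" should outscore a mismatching sequence.
print(score(pwm, "AGCA") > score(pwm, "TTTT"))  # True
```

The flexible models mentioned in the abstract go a step further than this sketch by conditioning each position on the preceding base (dinucleotide dependencies), which a plain PWM cannot express.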