
Showing papers by "VU University Amsterdam" published in 2016


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1  +1008 moreInstitutions (96)
TL;DR: This is the first direct detection of gravitational waves and the first observation of a binary black hole merger, and these observations demonstrate the existence of binary stellar-mass black hole systems.
Abstract: On September 14, 2015 at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory simultaneously observed a transient gravitational-wave signal. The signal sweeps upwards in frequency from 35 to 250 Hz with a peak gravitational-wave strain of $1.0 \times 10^{-21}$. It matches the waveform predicted by general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. The signal was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203 000 years, equivalent to a significance greater than 5.1σ. The source lies at a luminosity distance of $410^{+160}_{-180}$ Mpc corresponding to a redshift $z = 0.09^{+0.03}_{-0.04}$. In the source frame, the initial black hole masses are $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$, and the final black hole mass is $62^{+4}_{-4} M_\odot$, with $3.0^{+0.5}_{-0.5} M_\odot c^2$ radiated in gravitational waves. All uncertainties define 90% credible intervals. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.
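As a quick sanity check, the central values quoted in the abstract are self-consistent: the initial masses minus the final mass equal the energy radiated in gravitational waves. A minimal sketch (ours, not the authors'; central values only, ignoring the 90% credible intervals):

```python
# Back-of-envelope check of the GW150914 mass-energy budget quoted above.
m1, m2 = 36.0, 29.0      # initial black hole masses, in solar masses (source frame)
m_final = 62.0           # final black hole mass, in solar masses
e_gw = 3.0               # energy radiated in gravitational waves, in M_sun c^2

radiated = (m1 + m2) - m_final
assert abs(radiated - e_gw) < 0.5   # consistent within the quoted uncertainties
print(radiated)  # prints 3.0
```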

9,596 citations


Journal ArticleDOI
TL;DR: The FAIR Data Principles as mentioned in this paper are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measureable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.

7,602 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy3  +970 moreInstitutions (114)
TL;DR: This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
Abstract: We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a significance greater than 5σ. The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz, and reached a peak gravitational strain of $3.4^{+0.7}_{-0.9} \times 10^{-22}$. The inferred source-frame initial black hole masses are $14.2^{+8.3}_{-3.7} M_\odot$ and $7.5^{+2.3}_{-2.3} M_\odot$, and the final black hole mass is $20.8^{+6.1}_{-1.7} M_\odot$. We find that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of $440^{+180}_{-190}$ Mpc corresponding to a redshift $0.09^{+0.03}_{-0.04}$. All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.

3,448 citations


Journal ArticleDOI
Shane A. McCarthy1, Sayantan Das2, Warren W. Kretzschmar3, Olivier Delaneau4, Andrew R. Wood5, Alexander Teumer6, Hyun Min Kang2, Christian Fuchsberger2, Petr Danecek1, Kevin Sharp3, Yang Luo1, C Sidore7, Alan Kwong2, Nicholas J. Timpson8, Seppo Koskinen, Scott I. Vrieze9, Laura J. Scott2, He Zhang2, Anubha Mahajan3, Jan H. Veldink, Ulrike Peters10, Ulrike Peters11, Carlos N. Pato12, Cornelia M. van Duijn13, Christopher E. Gillies2, Ilaria Gandin14, Massimo Mezzavilla, Arthur Gilly1, Massimiliano Cocca14, Michela Traglia, Andrea Angius7, Jeffrey C. Barrett1, D.I. Boomsma15, Kari Branham2, Gerome Breen16, Gerome Breen17, Chad M. Brummett2, Fabio Busonero7, Harry Campbell18, Andrew T. Chan19, Sai Chen2, Emily Y. Chew20, Francis S. Collins20, Laura J Corbin8, George Davey Smith8, George Dedoussis21, Marcus Dörr6, Aliki-Eleni Farmaki21, Luigi Ferrucci20, Lukas Forer22, Ross M. Fraser2, Stacey Gabriel23, Shawn Levy, Leif Groop24, Leif Groop25, Tabitha A. Harrison11, Andrew T. Hattersley5, Oddgeir L. Holmen26, Kristian Hveem26, Matthias Kretzler2, James Lee27, Matt McGue28, Thomas Meitinger29, David Melzer5, Josine L. Min8, Karen L. Mohlke30, John B. Vincent31, Matthias Nauck6, Deborah A. Nickerson10, Aarno Palotie23, Aarno Palotie19, Michele T. Pato12, Nicola Pirastu14, Melvin G. McInnis2, J. Brent Richards32, J. Brent Richards16, Cinzia Sala, Veikko Salomaa, David Schlessinger20, Sebastian Schoenherr22, P. Eline Slagboom33, Kerrin S. Small16, Tim D. Spector16, Dwight Stambolian34, Marcus A. Tuke5, Jaakko Tuomilehto, Leonard H. van den Berg, Wouter van Rheenen, Uwe Völker6, Cisca Wijmenga35, Daniela Toniolo, Eleftheria Zeggini1, Paolo Gasparini14, Matthew G. Sampson2, James F. Wilson18, Timothy M. Frayling5, Paul I.W. de Bakker36, Morris A. Swertz35, Steven A. McCarroll19, Charles Kooperberg11, Annelot M. Dekker, David Altshuler, Cristen J. Willer2, William G. Iacono28, Samuli Ripatti24, Nicole Soranzo1, Nicole Soranzo27, Klaudia Walter1, Anand Swaroop20, Francesco Cucca7, Carl A. Anderson1, Richard M. Myers, Michael Boehnke2, Mark I. McCarthy37, Mark I. McCarthy3, Richard Durbin1, Gonçalo R. Abecasis2, Jonathan Marchini3
TL;DR: A reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies.
Abstract: We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies, and it can help to discover and refine causal loci. We describe remote server resources that allow researchers to carry out imputation and phasing consistently and efficiently.

2,149 citations


Journal ArticleDOI
14 Jan 2016-Nature
TL;DR: Analysis of worldwide variation in six major traits critical to growth, survival and reproduction within the largest sample of vascular plant species ever compiled found that occupancy of six-dimensional trait space is strongly concentrated, indicating coordination and trade-offs.
Abstract: The authors found that the key elements of plant form and function, analysed at global scale, are largely concentrated into a two-dimensional plane indexed by the size of whole plants and organs on the one hand, and the construction costs for photosynthetic leaf area, on the other.

1,814 citations


Journal ArticleDOI
TL;DR: The updated version 2.2.2 of the HADDOCK portal is presented, which offers new features such as support for mixed molecule types, additional experimental restraints and improved protocols, all of this in a user-friendly interface.

1,762 citations


Journal ArticleDOI
28 Jan 2016-Cell
TL;DR: The complete set of genes associated with 1,122 diffuse grade II-III-IV gliomas was defined from The Cancer Genome Atlas, and molecular profiles were used to improve disease classification, identify molecular correlations, and provide insights into the progression from low- to high-grade disease.

1,535 citations


Journal ArticleDOI
TL;DR: A powerful strategy that integrates gene expression measurements with summary association statistics from large-scale genome-wide association studies (GWAS) to identify genes whose cis-regulated expression is associated with complex traits is introduced.
Abstract: Many genetic variants influence complex traits by modulating gene expression, thus altering the abundance of one or multiple proteins. Here we introduce a powerful strategy that integrates gene expression measurements with summary association statistics from large-scale genome-wide association studies (GWAS) to identify genes whose cis-regulated expression is associated with complex traits. We leverage expression imputation from genetic data to perform a transcriptome-wide association study (TWAS) to identify significant expression-trait associations. We applied our approaches to expression data from blood and adipose tissue measured in ∼ 3,000 individuals overall. We imputed gene expression into GWAS data from over 900,000 phenotype measurements to identify 69 new genes significantly associated with obesity-related traits (BMI, lipids and height). Many of these genes are associated with relevant phenotypes in the Hybrid Mouse Diversity Panel. Our results showcase the power of integrating genotype, gene expression and phenotype to gain insights into the genetic basis of complex traits.

1,473 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy1  +976 moreInstitutions (107)
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant's mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of 10¹³ km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.

1,421 citations


Journal ArticleDOI
TL;DR: The Global Land Evaporation Amsterdam Model (GLEAM) as discussed by the authors is a set of algorithms dedicated to the estimation of terrestrial evaporation and root-zone soil moisture from satellite data.
Abstract: The Global Land Evaporation Amsterdam Model (GLEAM) is a set of algorithms dedicated to the estimation of terrestrial evaporation and root-zone soil moisture from satellite data. Ever since its development in 2011, the model has been regularly revised, aiming at the optimal incorporation of new satellite-observed geophysical variables, and improving the representation of physical processes. In this study, the next version of this model (v3) is presented. Key changes relative to the previous version include (1) a revised formulation of the evaporative stress, (2) an optimized drainage algorithm, and (3) a new soil moisture data assimilation system. GLEAM v3 is used to produce three new data sets of terrestrial evaporation and root-zone soil moisture, including a 36-year data set spanning 1980–2015, referred to as v3a (based on satellite-observed soil moisture, vegetation optical depth and snow-water equivalent, reanalysis air temperature and radiation, and a multi-source precipitation product), and two satellite-based data sets. The latter share most of their forcing, except for the vegetation optical depth and soil moisture, which are based on observations from different passive and active C- and L-band microwave sensors (European Space Agency Climate Change Initiative, ESA CCI) for the v3b data set (spanning 2003–2015) and observations from the Soil Moisture and Ocean Salinity (SMOS) satellite in the v3c data set (spanning 2011–2015). Here, these three data sets are described in detail, compared against analogous data sets generated using the previous version of GLEAM (v2), and validated against measurements from 91 eddy-covariance towers and 2325 soil moisture sensors across a broad range of ecosystems.
Results indicate that the quality of the v3 soil moisture is consistently better than the one from v2: average correlations against in situ surface soil moisture measurements increase from 0.61 to 0.64 in the case of the v3a data set and the representation of soil moisture in the second layer improves as well, with correlations increasing from 0.47 to 0.53. Similar improvements are observed for the v3b and c data sets. Despite regional differences, the quality of the evaporation fluxes remains overall similar to the one obtained using the previous version of GLEAM, with average correlations against eddy-covariance measurements ranging between 0.78 and 0.81 for the different data sets. These global data sets of terrestrial evaporation and root-zone soil moisture are now openly available at www.GLEAM.eu and may be used for large-scale hydrological applications, climate studies, or research on land–atmosphere feedbacks.

Journal ArticleDOI
University of East Anglia1, University of Oslo2, Commonwealth Scientific and Industrial Research Organisation3, University of Exeter4, Oak Ridge National Laboratory5, National Oceanic and Atmospheric Administration6, Woods Hole Research Center7, University of California, San Diego8, Karlsruhe Institute of Technology9, Cooperative Institute for Marine and Atmospheric Studies10, Centre national de la recherche scientifique11, University of Maryland, College Park12, National Institute of Water and Atmospheric Research13, Woods Hole Oceanographic Institution14, Flanders Marine Institute15, Alfred Wegener Institute for Polar and Marine Research16, Netherlands Environmental Assessment Agency17, University of Illinois at Urbana–Champaign18, Leibniz Institute of Marine Sciences19, Max Planck Society20, University of Paris21, Hobart Corporation22, Oeschger Centre for Climate Change Research23, University of Bern24, National Center for Atmospheric Research25, University of Miami26, Council of Scientific and Industrial Research27, University of Colorado Boulder28, National Institute for Environmental Studies29, Joint Institute for the Study of the Atmosphere and Ocean30, Geophysical Institute, University of Bergen31, Montana State University32, Goddard Space Flight Center33, University of New Hampshire34, Bjerknes Centre for Climate Research35, Imperial College London36, Lamont–Doherty Earth Observatory37, Auburn University38, Wageningen University and Research Centre39, VU University Amsterdam40, Met Office41
TL;DR: In this article, the authors quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the "global carbon budget" – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates and consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models. We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget.
For the last decade available (2006–2015), EFF was 9.3 ± 0.5 GtC yr−1, ELUC 1.0 ± 0.5 GtC yr−1, GATM 4.5 ± 0.1 GtC yr−1, SOCEAN 2.6 ± 0.5 GtC yr−1, and SLAND 3.1 ± 0.9 GtC yr−1. For year 2015 alone, the growth in EFF was approximately zero and emissions remained at 9.9 ± 0.5 GtC yr−1, showing a slowdown in growth of these emissions compared to the average growth of 1.8 % yr−1 that took place during 2006–2015. Also, for 2015, ELUC was 1.3 ± 0.5 GtC yr−1, GATM was 6.3 ± 0.2 GtC yr−1, SOCEAN was 3.0 ± 0.5 GtC yr−1, and SLAND was 1.9 ± 0.9 GtC yr−1. GATM was higher in 2015 compared to the past decade (2006–2015), reflecting a smaller SLAND for that year. The global atmospheric CO2 concentration reached 399.4 ± 0.1 ppm averaged over 2015. For 2016, preliminary data indicate the continuation of low growth in EFF with +0.2 % (range of −1.0 to +1.8 %) based on national emissions projections for China and USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. In spite of the low growth of EFF in 2016, the growth rate in atmospheric CO2 concentration is expected to be relatively high because of the persistence of the smaller residual terrestrial sink (SLAND) in response to El Niño conditions of 2015–2016. From this projection of EFF and assumed constant ELUC for 2016, cumulative emissions of CO2 will reach 565 ± 55 GtC (2075 ± 205 GtCO2) for 1870–2016, about 75 % from EFF and 25 % from ELUC. This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015b, a, 2014, 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2016).
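The 2015 figures quoted above can be checked against the budget identity the paper is built on: emissions (fossil plus land-use change) should balance atmospheric growth plus the ocean and land sinks, up to a small budget imbalance. A sketch using only the central values from the abstract (ours, for illustration; the paper propagates the ±1σ uncertainties properly):

```python
# Closure check of the 2015 global carbon budget, central values in GtC/yr.
e_ff, e_luc = 9.9, 1.3            # fossil fuels and industry; land-use change
g_atm = 6.3                       # growth in atmospheric CO2
s_ocean, s_land = 3.0, 1.9        # ocean sink; residual terrestrial sink

imbalance = (e_ff + e_luc) - (g_atm + s_ocean + s_land)
print(f"budget imbalance: {imbalance:.1f} GtC/yr")  # ~0 for these values
```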

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy3  +978 moreInstitutions (112)
TL;DR: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers as discussed by the authors.
Abstract: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper we present full results from a search for binary black hole merger signals with total masses up to 100 M⊙ and detailed implications from our observations of these systems. Our search, based on general-relativistic models of gravitational wave signals from binary black hole systems, unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than 5σ over the observing period. It also identified a third possible signal, LVT151012, with substantially lower significance, which has an 87% probability of being of astrophysical origin. We provide detailed estimates of the parameters of the observed systems. Both GW150914 and GW151226 provide an unprecedented opportunity to study the two-body motion of a compact-object binary in the large velocity, highly nonlinear regime. We do not observe any deviations from general relativity, and place improved empirical bounds on several high-order post-Newtonian coefficients. From our observations we infer stellar-mass binary black hole merger rates lying in the range 9–240 Gpc⁻³ yr⁻¹. These observations are beginning to inform astrophysical predictions of binary black hole formation rates, and indicate that future observing runs of the Advanced detector network will yield many more gravitational wave detections.

Journal ArticleDOI
TL;DR: This work proposes the “A/T/N” system, a descriptive system for categorizing multidomain biomarker findings at the individual person level in a format that is easy to understand and use and suited to population studies of cognitive aging.
Abstract: Biomarkers have become an essential component of Alzheimer disease (AD) research and because of the pervasiveness of AD pathology in the elderly, the same biomarkers are used in cognitive aging research. A number of current issues suggest that an unbiased descriptive classification scheme for these biomarkers would be useful. We propose the "A/T/N" system in which 7 major AD biomarkers are divided into 3 binary categories based on the nature of the pathophysiology that each measures. "A" refers to the value of a β-amyloid biomarker (amyloid PET or CSF Aβ42); "T," the value of a tau biomarker (CSF phospho tau, or tau PET); and "N," biomarkers of neurodegeneration or neuronal injury ([¹⁸F]-fluorodeoxyglucose-PET, structural MRI, or CSF total tau). Each biomarker category is rated as positive or negative. An individual score might appear as A+/T+/N-, or A+/T-/N-, etc. The A/T/N system includes the new modality tau PET. It is agnostic to the temporal ordering of mechanisms underlying AD pathogenesis. It includes all individuals in any population regardless of the mix of biomarker findings and therefore is suited to population studies of cognitive aging. It does not specify disease labels and thus is not a diagnostic classification system. It is a descriptive system for categorizing multidomain biomarker findings at the individual person level in a format that is easy to understand and use. Given the present lack of consensus among AD specialists on terminology across the clinically normal to dementia spectrum, a biomarker classification scheme will have broadest acceptance if it is independent from any one clinically defined diagnostic scheme.
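As the abstract describes, an A/T/N profile is just three independent binary flags rendered in a fixed notation. A small illustration (the function and argument names are ours, not from the paper):

```python
# Illustration only: encode an A/T/N biomarker profile as three binary flags.
def atn_label(amyloid: bool, tau: bool, neurodegeneration: bool) -> str:
    """Render a biomarker profile in the paper's A+/T+/N- notation."""
    def sign(positive: bool) -> str:
        return "+" if positive else "-"
    return f"A{sign(amyloid)}/T{sign(tau)}/N{sign(neurodegeneration)}"

print(atn_label(True, True, False))   # prints A+/T+/N-
```

Because each of the three categories is binary, the scheme yields 2³ = 8 possible profiles, which is why it can describe every individual in a population regardless of the mix of findings.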

Journal ArticleDOI
Aysu Okbay1, Jonathan P. Beauchamp2, Mark Alan Fontana3, James J. Lee4  +293 moreInstitutions (81)
26 May 2016-Nature
TL;DR: In this article, the results of a genome-wide association study (GWAS) for educational attainment were reported, showing that single-nucleotide polymorphisms associated with educational attainment disproportionately occur in genomic regions regulating gene expression in the fetal brain.
Abstract: Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.

Journal ArticleDOI
TL;DR: The group agreed on sets of uniform sampling criteria, placental gross descriptors, pathologic terminologies, and diagnostic criteria for placental lesions, which will assist in international comparability of clinicopathologic and scientific studies and assist in refining the significance of lesions associated with adverse pregnancy and later health outcomes.
Abstract: Context.—The value of placental examination in investigations of adverse pregnancy outcomes may be compromised by sampling and definition differences between laboratories. Objective.—To establish an agreed-upon protocol for sampling the placenta, and for diagnostic criteria for placental lesions. Recommendations would cover reporting placentas in tertiary centers as well as in community hospitals and district general hospitals, and are also relevant to the scientific research community. Data Sources.—Areas of controversy or uncertainty were explored prior to a 1-day meeting where placental and perinatal pathologists, and maternal-fetal medicine specialists discussed available evidence and subsequently reached consensus where possible. Conclusions.—The group agreed on sets of uniform sampling criteria, placental gross descriptors, pathologic terminologies, and diagnostic criteria. The terminology and microscopic descriptions for maternal vascular malperfusion, fetal vascular malperfusion, delayed villous m...

Journal ArticleDOI
TL;DR: Dabrafenib plus trametinib represents a new therapy with clinically meaningful antitumour activity and a manageable safety profile in patients with previously untreated BRAFV600E-mutant NSCLC.
Abstract: Summary Background BRAF mutations act as an oncogenic driver via the mitogen-activated protein kinase (MAPK) pathway in non-small cell lung cancer (NSCLC). BRAF inhibition has shown antitumour activity in patients with BRAF V600E-mutant NSCLC. Dual MAPK pathway inhibition with BRAF and MEK inhibitors in BRAF V600E-mutant NSCLC might improve efficacy over BRAF inhibitor monotherapy based on observations in BRAF V600-mutant melanoma. We aimed to assess the antitumour activity and safety of dabrafenib plus trametinib in patients with BRAF V600E-mutant NSCLC. Methods In this phase 2, multicentre, non-randomised, open-label study, we enrolled adult patients (aged ≥18 years) with pretreated metastatic stage IV BRAF V600E-mutant NSCLC who had documented tumour progression after at least one previous platinum-based chemotherapy and had had no more than three previous systemic anticancer therapies. Patients with previous BRAF or MEK inhibitor treatment were ineligible. Patients with brain metastases were allowed to enrol only if the lesions were asymptomatic, untreated (or stable more than 3 weeks after local therapy if treated), and measured less than 1 cm. Enrolled patients received oral dabrafenib (150 mg twice daily) plus oral trametinib (2 mg once daily) in continuous 21-day cycles until disease progression, unacceptable adverse events, withdrawal of consent, or death. The primary endpoint was investigator-assessed overall response, which was assessed by intention to treat in the protocol-defined population (patients who received second-line or later treatment); safety was also assessed in this population and was assessed at least once every 3 weeks, with adverse events, laboratory values, and vital signs graded according to the Common Terminology Criteria for Adverse Events version 4.0. The study is ongoing but no longer recruiting patients. This trial is registered with ClinicalTrials.gov, number NCT01336634.
Findings Between Dec 20, 2013, and Jan 14, 2015, 59 patients from 30 centres in nine countries across North America, Europe, and Asia met eligibility criteria. Two patients who had previously been untreated due to protocol deviation were excluded; thus, 57 eligible patients were enrolled. 36 patients (63·2% [95% CI 49·3–75·6]) achieved an investigator-assessed overall response. Serious adverse events were reported in 32 (56%) of 57 patients and included pyrexia in nine (16%), anaemia in three (5%), confusional state in two (4%), decreased appetite in two (4%), haemoptysis in two (4%), hypercalcaemia in two (4%), nausea in two (4%), and cutaneous squamous cell carcinoma in two (4%). The most common grade 3–4 adverse events were neutropenia in five patients (9%), hyponatraemia in four (7%), and anaemia in three (5%). Four patients died during the study from fatal adverse events judged to be unrelated to treatment (one retroperitoneal haemorrhage, one subarachnoid haemorrhage, one respiratory distress, and one from disease progression that was more severe than typical progression, as assessed by the investigator). Interpretation Dabrafenib plus trametinib could represent a new targeted therapy with robust antitumour activity and a manageable safety profile in patients with BRAF V600E-mutant NSCLC. Funding GlaxoSmithKline.

Journal ArticleDOI
TL;DR: Worldwide cooperative analyses of brain imaging data support a profile of subcortical abnormalities in schizophrenia, which is consistent with that based on traditional meta-analytic approaches, and validates that collaborative data analyses can readily be used across brain phenotypes and disorders.
Abstract: The profile of brain structural abnormalities in schizophrenia is still not fully understood, despite decades of research using brain scans. To validate a prospective meta-analysis approach to analyzing multicenter neuroimaging data, we analyzed brain MRI scans from 2028 schizophrenia patients and 2540 healthy controls, assessed with standardized methods at 15 centers worldwide. We identified subcortical brain volumes that differentiated patients from controls, and ranked them according to their effect sizes. Compared with healthy controls, patients with schizophrenia had smaller hippocampus (Cohen's d=-0.46), amygdala (d=-0.31), thalamus (d=-0.31), accumbens (d=-0.25) and intracranial volumes (d=-0.12), as well as larger pallidum (d=0.21) and lateral ventricle volumes (d=0.37). Putamen and pallidum volume augmentations were positively associated with duration of illness and hippocampal deficits scaled with the proportion of unmedicated patients. Worldwide cooperative analyses of brain imaging data support a profile of subcortical abnormalities in schizophrenia, which is consistent with that based on traditional meta-analytic approaches. This first ENIGMA Schizophrenia Working Group study validates that collaborative data analyses can readily be used across brain phenotypes and disorders and encourages analysis and data sharing efforts to further our understanding of severe mental illness.
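The volume differences in this abstract are reported as Cohen's d, the between-group difference in means divided by the pooled standard deviation. A minimal sketch of that computation follows; the hippocampal volumes used here are invented for illustration and are not ENIGMA data.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / n_a, sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical hippocampal volumes in cm^3; a negative d means smaller volumes in patients.
patients = [3.9, 4.1, 3.8, 4.0, 3.7]
controls = [4.3, 4.5, 4.2, 4.4, 4.1]
print(round(cohens_d(patients, controls), 2))  # → -2.53
```

The study's effect sizes (e.g. d = -0.46 for hippocampus) are far smaller than this toy example because real volume distributions overlap heavily across thousands of subjects; the formula is the same.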

Journal ArticleDOI
TL;DR: Return on investment analysis of the kind reported here can contribute strongly to a balanced investment case for enhanced action to address the large and growing burden of common mental disorders worldwide.

Journal ArticleDOI
TL;DR: Vitamin D deficiency is evident throughout the European population at prevalence rates that are concerning and that require action from a public health perspective, and what direction these strategies take will depend on European policy.

Journal ArticleDOI
TL;DR: In this paper, the authors conducted genome-wide association studies of three phenotypes: subjective well-being (n = 298,420), depressive symptoms (n = 161,460), and neuroticism (n = 170,911).
Abstract: Very few genetic variants have been associated with depression and neuroticism, likely because of limitations on sample size in previous studies. Subjective well-being, a phenotype that is genetically correlated with both of these traits, has not yet been studied with genome-wide data. We conducted genome-wide association studies of three phenotypes: subjective well-being (n = 298,420), depressive symptoms (n = 161,460), and neuroticism (n = 170,911). We identify 3 variants associated with subjective well-being, 2 variants associated with depressive symptoms, and 11 variants associated with neuroticism, including 2 inversion polymorphisms. The two loci associated with depressive symptoms replicate in an independent depression sample. Joint analyses that exploit the high genetic correlations between the phenotypes (|ρ̂| ≈ 0.8) strengthen the overall credibility of the findings and allow us to identify additional variants. Across our phenotypes, loci regulating expression in central nervous system and adrenal or pancreas tissues are strongly enriched for association.

Journal ArticleDOI
Marielle Saunois1, Philippe Bousquet1, Ben Poulter2, Anna Peregon1, Philippe Ciais1, Josep G. Canadell3, Edward J. Dlugokencky4, Giuseppe Etiope5, David Bastviken6, Sander Houweling7, Greet Janssens-Maenhout, Francesco N. Tubiello8, Simona Castaldi, Robert B. Jackson9, Mihai Alexe, Vivek K. Arora, David J. Beerling10, Peter Bergamaschi, Donald R. Blake11, Gordon Brailsford12, Victor Brovkin13, Lori Bruhwiler4, Cyril Crevoisier14, Patrick M. Crill, Kristofer R. Covey15, Charles L. Curry16, Christian Frankenberg17, Nicola Gedney18, Lena Höglund-Isaksson19, Misa Ishizawa20, Akihiko Ito20, Fortunat Joos21, Heon Sook Kim20, Thomas Kleinen13, Paul B. Krummel3, Jean-Francois Lamarque22, Ray L. Langenfelds3, Robin Locatelli1, Toshinobu Machida20, Shamil Maksyutov20, Kyle C. McDonald23, Julia Marshall13, Joe R. Melton, Isamu Morino18, Vaishali Naik24, Simon O'Doherty25, Frans-Jan W. Parmentier26, Prabir K. Patra27, Changhui Peng28, Shushi Peng1, Glen P. Peters29, Isabelle Pison1, Catherine Prigent30, Ronald G. Prinn31, Michel Ramonet1, William J. Riley32, Makoto Saito20, Monia Santini, Ronny Schroeder23, Ronny Schroeder33, Isobel J. Simpson11, Renato Spahni21, P. Steele3, Atsushi Takizawa34, Brett F. Thornton, Hanqin Tian35, Yasunori Tohjima20, Nicolas Viovy1, Apostolos Voulgarakis36, Michiel van Weele37, Guido R. van der Werf38, Ray F. Weiss39, Christine Wiedinmyer22, David J. Wilton10, Andy Wiltshire18, Doug Worthy40, Debra Wunch41, Xiyan Xu32, Yukio Yoshida20, Bowen Zhang35, Zhen Zhang2, Qiuan Zhu42 
TL;DR: Under the umbrella of the Global Carbon Project (GCP), the authors establish a consortium of multi-disciplinary scientists, including atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions, to synthesize research on the methane cycle and produce regular updates of the global methane budget.
Abstract: The global methane (CH4) budget is becoming an increasingly important component for managing realistic pathways to mitigate climate change. This relevance, due to methane's shorter atmospheric lifetime and stronger warming potential compared with carbon dioxide, is challenged by the still unexplained changes of atmospheric CH4 over the past decade. Emissions and concentrations of CH4 are continuing to increase, making CH4 the second most important human-induced greenhouse gas after carbon dioxide. Two major difficulties in reducing uncertainties come from the large variety of diffusive CH4 sources that overlap geographically, and from the destruction of CH4 by the very short-lived hydroxyl radical (OH). To address these difficulties, we have established a consortium of multi-disciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate research on the methane cycle, and to produce regular (∼ biennial) updates of the global methane budget. This consortium includes atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions. Following Kirschke et al. (2013), we propose here the first version of a living review paper that integrates results of top-down studies (exploiting atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up models, inventories and data-driven approaches (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories for anthropogenic emissions, and data-driven extrapolations). For the 2003–2012 decade, global methane emissions are estimated by top-down inversions at 558 Tg CH4 yr−1 (range 540–568). About 60 % of global emissions are anthropogenic (range 50–65 %).
Since 2010, the bottom-up global emission inventories have been closer to methane emissions in the most carbon-intensive Representative Concentrations Pathway (RCP8.5) and higher than all other RCP scenarios. Bottom-up approaches suggest larger global emissions (736 Tg CH4 yr−1, range 596–884), mostly because of larger natural emissions from individual sources such as inland waters, natural wetlands and geological sources. Considering the atmospheric constraints on the top-down budget, it is likely that some of the individual emissions reported by the bottom-up approaches are overestimated, leading to an overestimate of global emissions. Latitudinal data from top-down emissions indicate a predominance of tropical emissions (∼ 64 % of the global budget). The most important source of uncertainty in the methane budget is attributable to emissions from wetlands and other inland waters. We show that the wetland extent could contribute 30–40 % of the estimated range for wetland emissions. Other priorities for improving the methane budget include the following: (i) the development of process-based models for inland-water emissions, (ii) the intensification of methane observations at local scale (flux measurements) to constrain bottom-up land surface models, and at regional scale (surface networks and satellites) to constrain top-down inversions, (iii) improvements in the estimation of atmospheric loss by OH, and (iv) improvements of the transport models integrated in top-down inversions. The data presented here can be downloaded from the Carbon Dioxide Information Analysis Center ( http://doi.org/10.3334/CDIAC/GLOBAL_METHANE_BUDGET_2016_V1.1 ) and the Global Carbon Project.

Journal ArticleDOI
TL;DR: The Multi-Source Weighted-Ensemble Precipitation (MSWEP) dataset as discussed by the authors is a global precipitation dataset for the period 1979-2015 with a 3-hourly temporal and 0.25° spatial resolution designed for hydrological modeling.
Abstract: Current global precipitation (P) datasets do not take full advantage of the complementary nature of satellite and reanalysis data. Here, we present Multi-Source Weighted-Ensemble Precipitation (MSWEP) version 1.1, a global P dataset for the period 1979–2015 with a 3-hourly temporal and 0.25° spatial resolution, specifically designed for hydrological modeling. The design philosophy of MSWEP was to optimally merge the highest quality P data sources available as a function of timescale and location. The long-term mean of MSWEP was based on the CHPclim dataset but replaced with more accurate regional datasets where available. A correction for gauge under-catch and orographic effects was introduced by inferring catchment-average P from streamflow (Q) observations at 13 762 stations across the globe. The temporal variability of MSWEP was determined by weighted averaging of P anomalies from seven datasets; two based solely on interpolation of gauge observations (CPC Unified and GPCC), three on satellite remote sensing (CMORPH, GSMaP-MVK, and TMPA 3B42RT), and two on atmospheric model reanalysis (ERA-Interim and JRA-55). For each grid cell, the weight assigned to the gauge-based estimates was calculated from the gauge network density, while the weights assigned to the satellite- and reanalysis-based estimates were calculated from their comparative performance at the surrounding gauges. The quality of MSWEP was compared against four state-of-the-art gauge-adjusted P datasets (WFDEI-CRU, GPCP-1DD, TMPA 3B42, and CPC Unified) using independent P data from 125 FLUXNET tower stations around the globe. MSWEP obtained the highest daily correlation coefficient (R) among the five P datasets for 60.0 % of the stations and a median R of 0.67 vs. 0.44–0.59 for the other datasets. We further evaluated the performance of MSWEP using hydrological modeling for 9011 catchments. MSWEP is available via http://www.gloh2o.org .
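The core merging step described in this abstract, a weighted average of per-dataset precipitation anomalies for each grid cell and time step, can be sketched as follows. The dataset names, anomaly values, and weights below are invented for illustration; in MSWEP itself, gauge weights derive from gauge network density and satellite/reanalysis weights from comparative performance at surrounding gauges.

```python
def merge_precip(anomalies, weights):
    """Weighted average of precipitation anomalies from multiple source
    datasets for a single grid cell and time step."""
    total_w = sum(weights.values())
    if total_w <= 0:
        raise ValueError("at least one dataset must have a positive weight")
    return sum(weights[name] * anomalies[name] for name in anomalies) / total_w

# Hypothetical anomalies (mm per 3 h) and performance-derived weights for one cell.
anomalies = {"gauge": 1.2, "satellite": 0.9, "reanalysis": 1.5}
weights = {"gauge": 0.5, "satellite": 0.3, "reanalysis": 0.2}
print(round(merge_precip(anomalies, weights), 2))  # → 1.17
```

Because the weights vary per grid cell, the merged product can lean on gauges where networks are dense and fall back on satellite or reanalysis estimates elsewhere, which is the design idea the abstract describes.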

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1  +961 more · Institutions (100)
TL;DR: The discovery of GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black-hole systems that inspiral and merge within the age of the Universe, as mentioned in this paper.
Abstract: The discovery of the gravitational-wave source GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black-hole systems that inspiral and merge within the age of the Universe. Such black-hole mergers have been predicted in two main types of formation models, involving isolated binaries in galactic fields or dynamical interactions in young and old dense stellar environments. The measured masses robustly demonstrate that relatively "heavy" black holes (≳25M⊙) can form in nature. This discovery implies relatively weak massive-star winds and thus the formation of GW150914 in an environment with metallicity lower than ∼1/2 of the solar value. The rate of binary black-hole mergers inferred from the observation of GW150914 is consistent with the higher end of rate predictions (≳1Gpc−3yr−1) from both types of formation models. The low measured redshift (z∼0.1) of GW150914 and the low inferred metallicity of the stellar progenitor imply either binary black-hole formation in a low-mass galaxy in the local Universe and a prompt merger, or formation at high redshift with a time delay between formation and merger of several Gyr. This discovery motivates further studies of binary-black-hole formation astrophysics. It also has implications for future detections and studies by Advanced LIGO and Advanced Virgo, and gravitational-wave detectors in space.

Journal ArticleDOI
TL;DR: Personalized therapeutic strategies are proposed that address HFpEF-specific signaling and phenotypic diversity and target individual steps of systemic and myocardial signaling cascades.
Abstract: Heart failure (HF) with preserved ejection fraction (EF; HFpEF) accounts for 50% of HF cases, and its prevalence relative to HF with reduced EF continues to rise. In contrast to HF with reduced EF, large trials testing neurohumoral inhibition in HFpEF failed to reach a positive outcome. This failure was recently attributed to distinct systemic and myocardial signaling in HFpEF and to diversity of HFpEF phenotypes. In this review, an HFpEF treatment strategy is proposed that addresses HFpEF-specific signaling and phenotypic diversity. In HFpEF, extracardiac comorbidities such as metabolic risk, arterial hypertension, and renal insufficiency drive left ventricular remodeling and dysfunction through systemic inflammation and coronary microvascular endothelial dysfunction. The latter affects left ventricular diastolic dysfunction through macrophage infiltration, resulting in interstitial fibrosis, and through altered paracrine signaling to cardiomyocytes, which become hypertrophied and stiff because of low nitric oxide and cyclic guanosine monophosphate. Systemic inflammation also affects other organs such as lungs, skeletal muscle, and kidneys, leading, respectively, to pulmonary hypertension, muscle weakness, and sodium retention. Individual steps of these signaling cascades can be targeted by specific interventions: metabolic risk by caloric restriction, systemic inflammation by statins, pulmonary hypertension by phosphodiesterase 5 inhibitors, muscle weakness by exercise training, sodium retention by diuretics and monitoring devices, myocardial nitric oxide bioavailability by inorganic nitrate-nitrite, myocardial cyclic guanosine monophosphate content by neprilysin or phosphodiesterase 9 inhibition, and myocardial fibrosis by spironolactone. Because of phenotypic diversity in HFpEF, personalized therapeutic strategies are proposed, which are configured in a matrix with HFpEF presentations in the abscissa and HFpEF predispositions in the ordinate.

Journal ArticleDOI
TL;DR: Mental disorders are common among college students, mostly have onsets prior to college entry, are associated with college attrition in the case of pre-matriculation disorders, and are typically untreated.
Abstract: Background: Although mental disorders are significant predictors of educational attainment throughout the entire educational career, most research on mental disorders among students has focused on the primary and secondary school years. Method: The World Health Organization World Mental Health Surveys were used to examine the associations of mental disorders with college entry and attrition by comparing college students (n = 1572) and non-students in the same age range (18–22 years; n = 4178), including non-students who recently left college without graduating (n = 702), based on surveys in 21 countries (four low/lower-middle income, five upper-middle-income, one lower-middle or upper-middle at the times of two different surveys, and 11 high income). Lifetime and 12-month prevalence and age-of-onset of DSM-IV anxiety, mood, behavioral and substance disorders were assessed with the Composite International Diagnostic Interview (CIDI). Results: One-fifth (20.3%) of college students had 12-month DSM-IV/CIDI disorders; 83.1% of these cases had pre-matriculation onsets. Disorders with pre-matriculation onsets were more important than those with post-matriculation onsets in predicting subsequent college attrition, with substance disorders and, among women, major depression the most important such disorders. Only 16.4% of students with 12-month disorders received any 12-month healthcare treatment for their mental disorders. Conclusions: Mental disorders are common among college students, mostly have onsets prior to college entry, are associated with college attrition in the case of pre-matriculation disorders, and are typically untreated. Detection and effective treatment of these disorders early in the college career might reduce attrition and improve educational and psychosocial functioning.

Journal ArticleDOI
TL;DR: In this paper, the authors conduct a meta-analysis of 1,532 effect sizes across 96 studies covering 40 platforms and 26 product categories and find that eWOM is positively correlated with sales, but its effectiveness differs across platform, product, and metric factors.
Abstract: The increasing amount of electronic word of mouth (eWOM) has significantly affected the way consumers make purchase decisions. Empirical studies have established an effect of eWOM on sales but disagree on which online platforms, products, and eWOM metrics moderate this effect. The authors conduct a meta-analysis of 1,532 effect sizes across 96 studies covering 40 platforms and 26 product categories. On average, eWOM is positively correlated with sales (.091), but its effectiveness differs across platform, product, and metric factors. For example, the effectiveness of eWOM on social media platforms is stronger when eWOM receivers can assess their own similarity to eWOM senders, whereas these homophily details do not influence the effectiveness of eWOM for e-commerce platforms. In addition, whereas eWOM has a stronger effect on sales for tangible goods new to the market, the product life cycle does not moderate the eWOM effectiveness for services. With respect to the eWOM metrics, eWOM volume has a ...
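Pooling correlation effect sizes, as in the meta-analysis described above, is commonly done on the Fisher z scale, where the sampling variance of a correlation is approximately 1/(n - 3), before transforming back to r. A minimal sketch follows; the correlations and sample sizes are hypothetical, and the paper's actual model additionally handles moderators (platform, product, metric) and between-study heterogeneity.

```python
import math

def pool_correlations(rs, ns):
    """Inverse-variance-weighted mean correlation via Fisher's z transform."""
    zs = [math.atanh(r) for r in rs]   # r -> z
    ws = [n - 3 for n in ns]           # weights: 1 / var(z), with var(z) = 1 / (n - 3)
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)            # z -> r

# Hypothetical eWOM-sales correlations from three studies.
rs = [0.05, 0.12, 0.09]
ns = [200, 150, 400]
print(round(pool_correlations(rs, ns), 3))
```

Averaging on z rather than directly on r reduces the bias caused by the skewed sampling distribution of correlation coefficients, which is why meta-analyses of correlations typically work on the transformed scale.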

Journal ArticleDOI
TL;DR: In this article, the authors present the first global future river flood risk projections that separate the impacts of climate change and socio-economic development, and show that climate change contributes significantly to the increase in risk in Southeast Asia, but it is dwarfed by the effect of socioeconomic growth, even after normalization for gross domestic product (GDP) growth.
Abstract: Global river flood risk is expected to increase substantially over coming decades due to both climate change and socioeconomic development. Model-based projections suggest that southeast Asia and Africa are at particular risk, highlighting the need to invest in adaptation measures. Understanding global future river flood risk is a prerequisite for the quantification of climate change impacts and planning effective adaptation strategies1. Existing global flood risk projections fail to integrate the combined dynamics of expected socio-economic development and climate change. We present the first global future river flood risk projections that separate the impacts of climate change and socio-economic development. The projections are based on an ensemble of climate model outputs2, socio-economic scenarios3, and a state-of-the-art hydrologic river flood model combined with socio-economic impact models4,5. Globally, absolute damage may increase by up to a factor of 20 by the end of the century without action. Countries in Southeast Asia face a severe increase in flood risk. Although climate change contributes significantly to the increase in risk in Southeast Asia6, we show that it is dwarfed by the effect of socio-economic growth, even after normalization for gross domestic product (GDP) growth. African countries face a strong increase in risk mainly due to socio-economic change. However, when normalized to GDP, climate change becomes by far the strongest driver. Both high- and low-income countries may benefit greatly from investing in adaptation measures, for which our analysis provides a basis.

Journal ArticleDOI
TL;DR: This paper argues that it may be more valuable to establish and promote "effective engagement," rather than simply more engagement, with "effective engagement" defined empirically as sufficient engagement with the intervention to achieve intended outcomes.

Journal ArticleDOI
TL;DR: State-of-the-art MRI findings in patients presenting with a clinically isolated syndrome were discussed in a MAGNIMS workshop, the goal of which was to provide an evidence-based and expert-opinion consensus on diagnostic MRI criteria modifications.
Abstract: In patients presenting with a clinically isolated syndrome, MRI can support and substitute clinical information in the diagnosis of multiple sclerosis by showing disease dissemination in space and time and by helping to exclude disorders that can mimic multiple sclerosis. MRI criteria were first included in the diagnostic work-up for multiple sclerosis in 2001, and since then several modifications to the criteria have been proposed in an attempt to simplify lesion-count models for showing disease dissemination in space, change the timing of MRI scanning to show dissemination in time, and increase the value of spinal cord imaging. Since the last update of these criteria, new data on the use of MRI to establish dissemination in space and time have become available, and MRI technology has improved. State-of-the-art MRI findings in these patients were discussed in a MAGNIMS workshop, the goal of which was to provide an evidence-based and expert-opinion consensus on proposed modifications to MRI criteria for the diagnosis of multiple sclerosis.