Showing papers by "Radboud University Nijmegen" published in 2016
••
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s⁻¹ Mpc⁻¹, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016. These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |ΩK| < 0.005.
Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ² potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also place constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing. We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
10,728 citations
••
TL;DR: This is the first direct detection of gravitational waves and the first observation of a binary black hole merger, and these observations demonstrate the existence of binary stellar-mass black hole systems.
Abstract: On September 14, 2015 at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory simultaneously observed a transient gravitational-wave signal. The signal sweeps upwards in frequency from 35 to 250 Hz with a peak gravitational-wave strain of $1.0 \times 10^{-21}$. It matches the waveform predicted by general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. The signal was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203 000 years, equivalent to a significance greater than 5.1 {\sigma}. The source lies at a luminosity distance of $410^{+160}_{-180}$ Mpc corresponding to a redshift $z = 0.09^{+0.03}_{-0.04}$. In the source frame, the initial black hole masses are $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$, and the final black hole mass is $62^{+4}_{-4} M_\odot$, with $3.0^{+0.5}_{-0.5} M_\odot c^2$ radiated in gravitational waves. All uncertainties define 90% credible intervals. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.
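The mass-energy bookkeeping quoted in the abstract can be checked with a quick back-of-the-envelope calculation. This is an illustration added here, not part of the paper; the solar mass and speed of light are standard reference values (an assumption of this sketch, not figures from the abstract).

```python
# Sketch: consistency check of the GW150914 mass-energy budget quoted above.
# The masses are the abstract's source-frame values; the physical constants
# are standard reference values.

M_SUN_KG = 1.989e30      # solar mass in kg (reference value)
C = 2.998e8              # speed of light in m/s (reference value)

m1, m2 = 36.0, 29.0      # initial black hole masses, in M_sun
m_final = 62.0           # final black hole mass, in M_sun

e_radiated_msun = m1 + m2 - m_final                 # energy radiated, in M_sun c^2
e_radiated_joules = e_radiated_msun * M_SUN_KG * C**2

print(e_radiated_msun)                   # 3.0, matching the quoted 3.0 M_sun c^2
print(f"{e_radiated_joules:.2e} J")      # 5.36e+47 J
```

The difference between the initial and final masses reproduces the quoted radiated energy of about 3 solar masses, roughly 5 × 10⁴⁷ J.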
9,596 citations
••
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes.
For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
5,187 citations
••
T. Prusti, J. H. J. de Bruijne, Anthony G. A. Brown, Antonella Vallenari, +621 more • Institutions (93)
TL;DR: Gaia as discussed by the authors is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach.
Abstract: Gaia is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach. Both the spacecraft and the payload were built by European industry. The involvement of the scientific community focusses on data processing for which the international Gaia Data Processing and Analysis Consortium (DPAC) was selected in 2007. Gaia was launched on 19 December 2013 and arrived at its operating point, the second Lagrange point of the Sun-Earth-Moon system, a few weeks later. The commissioning of the spacecraft and payload was completed on 19 July 2014. The nominal five-year mission started with four weeks of special, ecliptic-pole scanning and subsequently transferred into full-sky scanning mode. We recall the scientific goals of Gaia and give a description of the as-built spacecraft that is currently (mid-2016) being operated to achieve these goals. We pay special attention to the payload module, the performance of which is closely related to the scientific performance of the mission. We provide a summary of the commissioning activities and findings, followed by a description of the routine operational mode. We summarise scientific performance estimates on the basis of in-orbit operations. Several intermediate Gaia data releases are planned and the data can be retrieved from the Gaia Archive, which is available through the Gaia home page.
5,164 citations
••
TL;DR: This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
Abstract: We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a significance greater than 5σ. The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz, and reached a peak gravitational strain of $3.4^{+0.7}_{-0.9} \times 10^{-22}$. The inferred source-frame initial black hole masses are $14.2^{+8.3}_{-3.7} M_\odot$ and $7.5^{+2.3}_{-2.3} M_\odot$, and the final black hole mass is $20.8^{+6.1}_{-1.7} M_\odot$. We find that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of $440^{+180}_{-190}$ Mpc corresponding to a redshift $0.09^{+0.03}_{-0.04}$. All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
3,448 citations
••
Katholieke Universiteit Leuven, University of Valencia, Radboud University Nijmegen, Sheba Medical Center, University of Turin, Hospital Clínico San Carlos, Université Paris-Saclay, University of Pisa, Mayo Clinic, University of São Paulo, The Chinese University of Hong Kong, University of Oxford, University of Helsinki, Helsinki University Central Hospital, Institute of Cancer Research, Bank of Cyprus, University of Ioannina, Odense University Hospital, University of Amsterdam, Otto-von-Guericke University Magdeburg, Geneva College, Medical University of Vienna, Martin Luther University of Halle-Wittenberg, Hebron University, Imperial College Healthcare
TL;DR: These ESMO consensus guidelines have been developed based on the current available evidence to provide a series of evidence-based recommendations to assist in the treatment and management of patients with mCRC in this rapidly evolving treatment setting.
2,382 citations
••
Anthony G. A. Brown, Antonella Vallenari, T. Prusti, J. H. J. de Bruijne, +587 more • Institutions (89)
TL;DR: The first Gaia data release, Gaia DR1 as discussed by the authors, consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues.
Abstract: Context. At about 1000 days after the launch of Gaia we present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7. Aims: A summary of Gaia DR1 is presented along with illustrations of the scientific quality of the data, followed by a discussion of the limitations due to the preliminary nature of this release. Methods: The raw data collected by Gaia during the first 14 months of the mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC) and turned into an astrometric and photometric catalogue. Results: Gaia DR1 consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues - a realisation of the Tycho-Gaia Astrometric Solution (TGAS) - and a secondary astrometric data set containing the positions for an additional 1.1 billion sources. The second component is the photometric data set, consisting of mean G-band magnitudes for all sources. The G-band light curves and the characteristics of 3000 Cepheid and RR Lyrae stars, observed at high cadence around the south ecliptic pole, form the third component. For the primary astrometric data set the typical uncertainty is about 0.3 mas for the positions and parallaxes, and about 1 mas yr⁻¹ for the proper motions. A systematic component of 0.3 mas should be added to the parallax uncertainties. For the subset of 94 000 Hipparcos stars in the primary data set, the proper motions are much more precise at about 0.06 mas yr⁻¹. For the secondary astrometric data set, the typical uncertainty of the positions is 10 mas. The median uncertainties on the mean G-band magnitudes range from the mmag level to 0.03 mag over the magnitude range 5 to 20.7.
Conclusions: Gaia DR1 is an important milestone ahead of the next Gaia data release, which will feature five-parameter astrometry for all sources. Extensive validation shows that Gaia DR1 represents a major advance in the mapping of the heavens and the availability of basic stellar data that underpin observational astrophysics. Nevertheless, the very preliminary nature of this first Gaia data release does lead to a number of important limitations to the data quality which should be carefully considered before drawing conclusions from the data.
2,174 citations
••
Wellcome Trust Sanger Institute, Cambridge University Hospitals NHS Foundation Trust, Lund University, Erasmus University Medical Center, Radboud University Nijmegen, European Bioinformatics Institute, University of Oslo, Oslo University Hospital, Gachon University, Netherlands Cancer Institute, Université libre de Bruxelles, University of Antwerp, Harvard University, University of Amsterdam, University of Ulsan, Hanyang University, Memorial Sloan Kettering Cancer Center, University of Texas MD Anderson Cancer Center, French Institute of Health and Medical Research, Ninewells Hospital, ICM Partners, University of Queensland, University of Iceland, Curie Institute, University of Cambridge, King's College London, Institute of Cancer Research, University of Bergen, Singapore General Hospital
TL;DR: This analysis of all classes of somatic mutation across exons, introns and intergenic regions highlights the repertoire of cancer genes and mutational processes operative, and progresses towards a comprehensive account of the somatic genetic basis of breast cancer.
Abstract: We analysed whole-genome sequences of 560 breast cancers to advance understanding of the driver mutations conferring clonal advantage and the mutational processes generating somatic mutations. We found that 93 protein-coding cancer genes carried probable driver mutations. Some non-coding regions exhibited high mutation frequencies, but most have distinctive structural features probably causing elevated mutation rates and do not contain driver mutations. Mutational signature analysis was extended to genome rearrangements and revealed twelve base substitution and six rearrangement signatures. Three rearrangement signatures, characterized by tandem duplications or deletions, appear associated with defective homologous-recombination-based DNA repair: one with deficient BRCA1 function, another with deficient BRCA1 or BRCA2 function; the cause of the third is unknown. This analysis of all classes of somatic mutation across exons, introns and intergenic regions highlights the repertoire of cancer genes and mutational processes operating, and progresses towards a comprehensive account of the somatic genetic basis of breast cancer.
1,696 citations
••
TL;DR: Proof-of-principle experimental studies support the hypothesis that trained immunity is one of the main immunological processes that mediate the nonspecific protective effects against infections induced by vaccines, such as bacillus Calmette-Guérin or measles vaccination.
Abstract: The general view that only adaptive immunity can build immunological memory has recently been challenged. In organisms lacking adaptive immunity, as well as in mammals, the innate immune system can mount resistance to reinfection, a phenomenon termed "trained immunity" or "innate immune memory." Trained immunity is orchestrated by epigenetic reprogramming, broadly defined as sustained changes in gene expression and cell physiology that do not involve permanent genetic changes such as mutations and recombination, which are essential for adaptive immunity. The discovery of trained immunity may open the door for novel vaccine approaches, new therapeutic strategies for the treatment of immune deficiency states, and modulation of exaggerated inflammation in autoinflammatory diseases.
1,690 citations
••
TL;DR: The papers in this special section focus on the technology and applications supported by deep learning, which have proven to be powerful tools for a broad range of computer vision tasks.
Abstract: The papers in this special section focus on the technology and applications supported by deep learning. Deep learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013. Deep learning is an improvement of artificial neural networks, consisting of more layers that permit higher levels of abstraction and improved predictions from data. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications.
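The core operation behind the CNNs this overview describes can be illustrated in a few lines. The sketch below is an editorial addition, not from the special section: a single 2-D convolution (valid padding, stride 1) followed by a ReLU, using a hand-picked Sobel edge filter rather than a learned one.

```python
import numpy as np

# Minimal sketch of one convolutional layer: slide a kernel over the image,
# take the weighted sum at each position, then apply a ReLU nonlinearity.
# The 3x3 Sobel kernel is a hand-picked illustration, not a learned filter.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))   # "valid" padding, stride 1
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

image = np.zeros((8, 8))
image[:, 4:] = 1.0                             # vertical step edge

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # responds to vertical edges

feature_map = relu(conv2d(image, sobel_x))
print(feature_map.shape)                       # (6, 6)
print(feature_map.max())                       # 4.0, strongest at the edge
```

Stacking many such layers, with learned kernels, is what yields the progressively higher levels of abstraction the editorial describes.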
1,428 citations
••
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of 10¹³ km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.
••
University Medical Center Groningen, Novosibirsk State University, Harvard University, Aalto University, Maastricht University, Rega Institute for Medical Research, Katholieke Universiteit Leuven, Wageningen University and Research Centre, Radboud University Nijmegen, Broad Institute, Vrije Universiteit Brussel
TL;DR: Deep sequencing of the gut microbiomes of 1135 participants from a Dutch population-based cohort shows relations between the microbiome and 126 exogenous and intrinsic host factors, including 31 intrinsic factors, 12 diseases, 19 drug groups, 4 smoking categories, and 60 dietary factors; these results are an important step toward a better understanding of environment-diet-microbe-host interactions.
Abstract: Deep sequencing of the gut microbiomes of 1135 participants from a Dutch population-based cohort shows relations between the microbiome and 126 exogenous and intrinsic host factors, including 31 intrinsic factors, 12 diseases, 19 drug groups, 4 smoking categories, and 60 dietary factors. These factors collectively explain 18.7% of the variation seen in the interindividual distance of microbial composition. We could associate 110 factors to 125 species and observed that fecal chromogranin A (CgA), a protein secreted by enteroendocrine cells, was exclusively associated with 61 microbial species whose abundance collectively accounted for 53% of microbial composition. Low CgA concentrations were seen in individuals with a more diverse microbiome. These results are an important step toward a better understanding of environment-diet-microbe-host interactions.
••
Karolinska University Hospital, Karolinska Institutet, Pasteur Institute, University of Toulouse, Wolfson Centre for Age-Related Diseases, University of Cambridge, University of New South Wales, Pierre-and-Marie-Curie University, La Trobe University, Umeå University, University of British Columbia, University of Geneva, Douglas Mental Health University Institute, Alzheimer Europe, University of Cologne, German Center for Neurodegenerative Diseases, London School of Economics and Political Science, Radboud University Nijmegen, Rockefeller University, VU University Medical Center, University of Southern California, Brigham and Women's Hospital, University of Copenhagen, University of Gothenburg, UCL Institute of Neurology
TL;DR: A call to make defeating Alzheimer's disease and other dementias a priority for European science and society.
Abstract: Defeating Alzheimer's disease and other dementias: a priority for European science and society
••
TL;DR: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers as discussed by the authors.
Abstract: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper we present full results from a search for binary black hole merger signals with total masses up to 100 M⊙ and detailed implications from our observations of these systems. Our search, based on general-relativistic models of gravitational wave signals from binary black hole systems, unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than 5σ over the observing period. It also identified a third possible signal, LVT151012, with substantially lower significance, which has an 87% probability of being of astrophysical origin. We provide detailed estimates of the parameters of the observed systems. Both GW150914 and GW151226 provide an unprecedented opportunity to study the two-body motion of a compact-object binary in the large velocity, highly nonlinear regime. We do not observe any deviations from general relativity, and place improved empirical bounds on several high-order post-Newtonian coefficients. From our observations we infer stellar-mass binary black hole merger rates lying in the range 9–240 Gpc⁻³ yr⁻¹. These observations are beginning to inform astrophysical predictions of binary black hole formation rates, and indicate that future observing runs of the Advanced detector network will yield many more gravitational wave detections.
••
Ghent University, Forschungszentrum Jülich, Aalto University, Åbo Akademi University, Vienna University of Technology, Duke University, University of Grenoble, École Polytechnique Fédérale de Lausanne, Durham University, International School for Advanced Studies, Max Planck Society, Uppsala University, Humboldt University of Berlin, Fritz Haber Institute of the Max Planck Society, Technical University of Denmark, National Institute of Standards and Technology, University of Udine, Université catholique de Louvain, University of Basel, Harvard University, University of California, Davis, Rutgers University, University of York, Wake Forest University, Science and Technology Facilities Council, University of Oxford, University of Vienna, Leibniz Institute for Neurobiology, Dresden University of Technology, Radboud University Nijmegen, University of Tokyo, Centre national de la recherche scientifique, University of Cambridge, Royal Holloway, University of London, University of California, Santa Barbara, University of Luxembourg, Los Alamos National Laboratory, Harbin Institute of Technology
TL;DR: A procedure to assess the precision of DFT methods was devised and used to demonstrate reproducibility among many of the most widely used DFT codes, demonstrating that the precision of DFT implementations can be determined, even in the absence of one absolute reference code.
Abstract: The widespread popularity of density functional theory has given rise to an extensive range of dedicated codes for predicting molecular and crystalline properties. However, each code implements the formalism in a different way, raising questions about the reproducibility of such predictions. We report the results of a community-wide effort that compared 15 solid-state codes, using 40 different potentials or basis set types, to assess the quality of the Perdew-Burke-Ernzerhof equations of state for 71 elemental crystals. We conclude that predictions from recent codes and pseudopotentials agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Older methods, however, have less precise agreement. Our benchmark provides a framework for users and developers to document the precision of new applications and methodological improvements.
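The pairwise comparison underlying this benchmark can be sketched in a few lines: the RMS difference between two codes' energy-volume curves E(V) over a window around the equilibrium volume. This is an editorial illustration in the spirit of the comparison described, not the paper's implementation; the quadratic toy EOS and its parameters are invented.

```python
import numpy as np

# Hedged sketch of a pairwise equation-of-state comparison: the RMS
# difference between two E(V) curves over a volume window around
# equilibrium. The toy EOS parameters below are invented for illustration.

def delta_gauge(e1, e2, volumes):
    """RMS difference between two E(V) curves on a common uniform grid."""
    diff2 = (e1 - e2) ** 2
    dv = volumes[1] - volumes[0]                    # uniform spacing assumed
    integral = np.sum((diff2[:-1] + diff2[1:]) / 2.0) * dv   # trapezoid rule
    return np.sqrt(integral / (volumes[-1] - volumes[0]))

v0 = 20.0                                   # toy equilibrium volume, A^3/atom
v = np.linspace(0.94 * v0, 1.06 * v0, 201)  # +/-6% window around equilibrium

# Two stand-in "codes": identical curvature, one shifted by a constant.
e_code_a = 0.05 * (v - v0) ** 2
e_code_b = 0.05 * (v - v0) ** 2 + 0.001     # constant 1 meV/atom offset

print(delta_gauge(e_code_a, e_code_b, v))   # -> 0.001 (the imposed offset)
```

For a constant energy offset the RMS difference simply recovers that offset; for real codes the curves differ in shape as well, and the same metric summarizes the disagreement in a single number per element.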
••
TL;DR: In this article, the results of a genome-wide association study (GWAS) for educational attainment were reported, showing that single-nucleotide polymorphisms associated with educational attainment disproportionately occur in genomic regions regulating gene expression in the fetal brain.
Abstract: Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.
••
Lars G. Fritsche, Wilmar Igl, Jessica N. Cooke Bailey, Felix Grassmann, +182 more • Institutions (58)
TL;DR: The results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
Abstract: Advanced age-related macular degeneration (AMD) is the leading cause of blindness in the elderly, with limited therapeutic options. Here we report on a study of >12 million variants, including 163,714 directly genotyped, mostly rare, protein-altering variants. Analyzing 16,144 patients and 17,832 controls, we identify 52 independently associated common and rare variants (P < 5 × 10⁻⁸) distributed across 34 loci. Although wet and dry AMD subtypes exhibit predominantly shared genetics, we identify the first genetic association signal specific to wet AMD, near MMP9 (difference P value = 4.1 × 10⁻¹⁰). Very rare coding variants (frequency <0.1%) in CFH, CFI and TIMP3 suggest causal roles for these genes, as does a splice variant in SLC16A8. Our results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
••
TL;DR: The results show that the proposed multi-view ConvNets are well suited for false-positive reduction in a CAD system.
Abstract: We propose a novel Computer-Aided Detection (CAD) system for pulmonary nodules using multi-view convolutional networks (ConvNets), for which discriminative features are automatically learnt from the training data. The network is fed with nodule candidates obtained by combining three candidate detectors specifically designed for solid, subsolid, and large nodules. For each candidate, a set of 2-D patches from differently oriented planes is extracted. The proposed architecture comprises multiple streams of 2-D ConvNets, for which the outputs are combined using a dedicated fusion method to get the final classification. Data augmentation and dropout are applied to avoid overfitting. On 888 scans of the publicly available LIDC-IDRI dataset, our method reaches high detection sensitivities of 85.4% and 90.1% at 1 and 4 false positives per scan, respectively. An additional evaluation on independent datasets from the ANODE09 challenge and DLCST is performed. We show that the proposed multi-view ConvNets are well suited for false-positive reduction in a CAD system.
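The fusion step described above can be sketched in a few lines. This is only an illustrative committee-style late fusion (averaging per-stream nodule probabilities); the paper evaluates several fusion schemes, and the stream scores below are made up for demonstration.

```python
import numpy as np

def late_fusion(stream_probs):
    """Committee-style late fusion: average the nodule probability
    produced by each 2-D view's ConvNet stream. Illustrative only;
    the paper also considers learned fusion of the streams."""
    return float(np.mean(stream_probs))

# Hypothetical scores for one candidate from nine differently
# oriented 2-D planes, each classified by its own ConvNet stream.
p = late_fusion([0.9, 0.8, 0.85, 0.7, 0.95, 0.88, 0.92, 0.81, 0.79])
```

The fused score can then be thresholded to decide whether the candidate is kept or rejected as a false positive.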
••
National Institute for Health Research1, Wellcome Trust Sanger Institute2, University of Cambridge3, NHS Blood and Transplant4, European Bioinformatics Institute5, Barts Health NHS Trust6, Radboud University Nijmegen7, University of Geneva8, Katholieke Universiteit Leuven9, Wellcome Trust Centre for Human Genetics10, University of Oxford11, British Heart Foundation12, University of Amsterdam13, Newcastle University14, McGill University15, Pompeu Fabra University16, University College London17, John Radcliffe Hospital18, Churchill Hospital19
TL;DR: A genome-wide association analysis in the UK Biobank and INTERVAL studies provides evidence of shared genetic pathways linking blood cell indices with complex pathologies, including autoimmune diseases, schizophrenia, and coronary heart disease, and suggests that previously reported population associations between blood cell indices and cardiovascular disease may be non-causal.
••
TL;DR: Investigation of mammalian tumor cell migration in confining microenvironments in vitro and in vivo indicates that cell migration incurs substantial physical stress on the NE and its content and requires efficient NE and DNA damage repair for cell survival.
Abstract: During cancer metastasis, tumor cells penetrate tissues through tight interstitial spaces, which requires extensive deformation of the cell and its nucleus. Here, we investigated mammalian tumor cell migration in confining microenvironments in vitro and in vivo. Nuclear deformation caused localized loss of nuclear envelope (NE) integrity, which led to the uncontrolled exchange of nucleo-cytoplasmic content, herniation of chromatin across the NE, and DNA damage. The incidence of NE rupture increased with cell confinement and with depletion of nuclear lamins, NE proteins that structurally support the nucleus. Cells restored NE integrity using components of the endosomal sorting complexes required for transport III (ESCRT III) machinery. Our findings indicate that cell migration incurs substantial physical stress on the NE and its content and requires efficient NE and DNA damage repair for cell survival.
••
TL;DR: In this article, the authors present the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties.
Abstract: This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (l < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, n_s, is confirmed smaller than unity at more than 5σ from Planck alone.
We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK^2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.
••
TL;DR: In meta-analyses, the prediction interval reflects the variation in treatment effects across settings, including the effect to be expected in future patients, such as those a clinician intends to treat.
Abstract: Objectives Evaluating the variation in the strength of the effect across studies is a key feature of meta-analyses. This variability is reflected by measures like τ² or I², but their clinical interpretation is not straightforward. A prediction interval is less complicated: it presents the expected range of true effects in similar studies. We aimed to show the advantages of having the prediction interval routinely reported in meta-analyses. Design We show how the prediction interval can help understand the uncertainty about whether an intervention works or not. To evaluate the implications of using this interval to interpret the results, we selected the first meta-analysis per intervention review of the Cochrane Database of Systematic Reviews Issues 2009–2013 with a dichotomous (n=2009) or continuous (n=1254) outcome, and generated 95% prediction intervals for them. Results In 72.4% of 479 statistically significant (random-effects p < 0.05) meta-analyses with quantified heterogeneity (τ² > 0), the 95% prediction interval suggested that the intervention effect could be null or even be in the opposite direction. In 20.3% of those 479 meta-analyses, the prediction interval showed that the effect could be completely opposite to the point estimate of the meta-analysis. We also demonstrate how the prediction interval can be used to calculate the probability that a new trial will show a negative effect and to improve the power calculations of a new trial. Conclusions The prediction interval reflects the variation in treatment effects across settings, including the effect to be expected in future patients, such as those a clinician intends to treat. Prediction intervals should be routinely reported to allow more informative inferences in meta-analyses.
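The interval the abstract advocates can be computed directly from standard random-effects output. A minimal sketch, assuming the common formulation (pooled effect ± t × √(τ² + SE²), with a t-distribution on k − 2 degrees of freedom); all numbers below are hypothetical:

```python
import math

def prediction_interval(mu, se_mu, tau2, t_crit):
    """95% prediction interval for the true effect in a new study:
    mu +/- t * sqrt(tau^2 + SE(mu)^2), where t is the critical value
    of a t-distribution with k - 2 degrees of freedom."""
    half_width = t_crit * math.sqrt(tau2 + se_mu ** 2)
    return mu - half_width, mu + half_width

# Hypothetical meta-analysis of k = 10 studies: pooled effect 0.30
# (SE 0.10), between-study variance tau^2 = 0.04. The two-sided 95%
# t critical value with k - 2 = 8 degrees of freedom is about 2.306.
lo, hi = prediction_interval(mu=0.30, se_mu=0.10, tau2=0.04, t_crit=2.306)
```

In this made-up example the interval crosses zero even though the pooled estimate is "statistically significant", mirroring the 72.4% finding reported in the abstract.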
••
TL;DR: The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity.
Abstract: On September 14, 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterise the properties of the source and its parameters. The data around the time of the event were analysed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$ (for each parameter we report the median value and the range of the 90% credible interval). The dimensionless spin magnitude of the more massive black hole is constrained to be less than $0.7$ (at 90% probability). The luminosity distance to the source is $410^{+160}_{-180}$ Mpc, corresponding to a redshift $0.09^{+0.03}_{-0.04}$ assuming standard cosmology. The source location is constrained to an annulus section of $590$ deg$^2$, primarily in the southern hemisphere. The binary merges into a black hole of $62^{+4}_{-4} M_\odot$ and spin $0.67^{+0.05}_{-0.07}$. This black hole is significantly more massive than any other known in the stellar-mass regime.
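As a quick consistency check on the quoted medians: the component masses sum to 65 M⊙ while the remnant is 62 M⊙, so about 3 M⊙ was radiated as gravitational waves. A back-of-the-envelope E = mc² calculation (constants rounded; uncertainties ignored):

```python
M_SUN = 1.989e30   # solar mass in kg
C = 2.998e8        # speed of light in m/s

m1, m2, m_final = 36.0, 29.0, 62.0   # median masses from the abstract
radiated_msun = m1 + m2 - m_final    # ~3 solar masses lost to radiation
energy_joules = radiated_msun * M_SUN * C ** 2   # E = mc^2, on the order of 5e47 J
```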
••
TL;DR: This tutorial reviews and summarizes current analysis methods used in the field of invasive and non-invasive electrophysiology to study the dynamic connections between neuronal populations, and highlights a number of interpretational caveats and common pitfalls that can arise when performing functional connectivity analysis.
Abstract: Oscillatory neuronal activity may provide a mechanism for dynamic network coordination. Rhythmic neuronal interactions can be quantified using multiple metrics, each with their own advantages and disadvantages. This tutorial will review and summarize current analysis methods used in the field of invasive and non-invasive electrophysiology to study the dynamic connections between neuronal populations. First, we review metrics for functional connectivity, including coherence, phase synchronization, phase-slope index, and Granger causality, with the specific aim to provide an intuition for how these metrics work, as well as their quantitative definition. Next, we highlight a number of interpretational caveats and common pitfalls that can arise when performing functional connectivity analysis, including the common reference problem, the signal-to-noise ratio problem, the volume conduction problem, the common input problem, and the trial sample size bias problem. These pitfalls will be illustrated by presenting a set of MATLAB scripts, which can be executed by the reader to simulate each of these potential problems. We discuss how these issues can be addressed using current methods.
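The tutorial's scripts are in MATLAB (not reproduced here), but the flavour of one of the metrics it covers, phase synchronization via the phase-locking value, can be sketched in Python. The signals below are synthetic and purely illustrative:

```python
import numpy as np

def plv(phase1, phase2):
    """Phase-locking value: the magnitude of the mean phase-difference
    vector across samples; 1 = perfect phase locking, ~0 = none."""
    return float(np.abs(np.mean(np.exp(1j * (phase1 - phase2)))))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)

# Two 10 Hz oscillations with a fixed 90-degree lag: perfectly locked.
p1 = 2 * np.pi * 10 * t
p2 = p1 - np.pi / 2
locked = plv(p1, p2)

# Two independent random phase series: essentially no locking.
unlocked = plv(rng.uniform(0, 2 * np.pi, 500),
               rng.uniform(0, 2 * np.pi, 500))
```

Note that, as the abstract warns, a high value of such a metric can also arise from volume conduction or a common reference rather than genuine neuronal interaction.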
••
TL;DR: It is found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically, while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention.
Abstract: Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
••
TL;DR: This work uses promoter capture Hi-C to identify interacting regions of 31,253 promoters in 17 human primary hematopoietic cell types and shows that promoter interactions are highly cell type specific and enriched for links between active promoters and epigenetically marked enhancers.
••
Stellenbosch University1, University of Western Australia2, University of Kiel3, University of Geneva4, Free University of Berlin5, Macedonian Academy of Sciences and Arts6, Slovenian Academy of Sciences and Arts7, University of Nova Gorica8, Academy of Sciences of the Czech Republic9, University of Vienna10, University of Bayreuth11, Complutense University of Madrid12, Masaryk University13, Sapienza University of Rome14, University of Zielona Góra15, University of Münster16, University of Göttingen17, Russian Academy of Sciences18, Slovak Academy of Sciences19, Wageningen University and Research Centre20, Radboud University Nijmegen21, National Academy of Sciences of Ukraine22, University of Lisbon23, University of Vechta24, University of California, Davis25, University of Patras26
TL;DR: This paper features the first comprehensive and critical account of European syntaxa and synthesizes more than 100 yr of classification effort by European phytosociologists.
Abstract: Aims: Vegetation classification consistent with the Braun-Blanquet approach is widely used in Europe for applied vegetation science, conservation planning and land management. During the long history of syntaxonomy, many concepts and names of vegetation units have been proposed, but there has been no single classification system integrating these units. Here we (1) present a comprehensive, hierarchical, syntaxonomic system of alliances, orders and classes of Braun-Blanquet syntaxonomy for vascular plant, bryophyte and lichen, and algal communities of Europe; (2) briefly characterize in ecological and geographic terms accepted syntaxonomic concepts; (3) link available synonyms to these accepted concepts; and (4) provide a list of diagnostic species for all classes. Location: European mainland, Greenland, Arctic archipelagos (including Iceland, Svalbard, Novaya Zemlya), Canary Islands, Madeira, Azores, Caucasus, Cyprus. Methods: We evaluated approximately 10,000 bibliographic sources to create a comprehensive list of previously proposed syntaxonomic units. These units were evaluated by experts for their floristic and ecological distinctness, clarity of geographic distribution and compliance with the nomenclature code. Accepted units were compiled into three systems of classes, orders and alliances (EuroVegChecklist, EVC) for communities dominated by vascular plants (EVC1), bryophytes and lichens (EVC2) and algae (EVC3). Results: EVC1 includes 109 classes, 300 orders and 1108 alliances; EVC2 includes 27 classes, 53 orders and 137 alliances; and EVC3 includes 13 classes, 24 orders and 53 alliances. In total, 13,448 taxa were assigned as indicator species to classes of EVC1, 2087 to classes of EVC2 and 368 to classes of EVC3. Accepted syntaxonomic concepts are summarized in a series of appendices, and detailed information on each is accessible through the software tool EuroVegBrowser. Conclusions: This paper features the first comprehensive and critical account of European syntaxa and synthesizes more than 100 yr of classification effort by European phytosociologists. It aims to document and stabilize the concepts and nomenclature of syntaxa for practical uses, such as calibration of habitat classification used by the European Union, standardization of terminology for environmental assessment, management and conservation of nature areas, landscape planning and education. The presented classification systems provide a baseline for future development and revision of European syntaxonomy.
••
TL;DR: In this paper, the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario were studied, and it was shown that the density of DE at early times has to be below 2% of the critical density, even when forced to play a role for z < 50.
Abstract: We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state w(a), as well as principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints and find that it has to be below ~2% (at 95% confidence) of the critical density, even when forced to play a role for z < 50 only. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories, and coupled DE. In addition to the latest Planck data, for our main analyses, we use background constraints from baryonic acoustic oscillations, type-Ia supernovae, and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations (expansion of the equation of state, early DE, general potentials in minimally-coupled scalar fields or principal component analysis) are in agreement with ΛCDM. 
When testing models that also change perturbations (even when the background is fixed to ΛCDM), some tensions appear in a few scenarios: the maximum one found is ~2σ for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to, at most, 3σ when external data sets are included. It however disappears when including CMB lensing.
••
Pierre-and-Marie-Curie University1, French Institute of Health and Medical Research2, San Antonio Military Medical Center3, University of Antwerp4, Paris Diderot University5, University of Melbourne6, Catholic University of Leuven7, University of Angers8, Duke University9, University of Virginia10, Radboud University Nijmegen11, Newcastle University12, university of lille13, Virginia Commonwealth University14, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico15, Semmelweis University16
TL;DR: A post-hoc analysis of data from a trial of patients with NASH showed that elafibranor (120 mg/day for 1 year) resolved NASH without worsening of fibrosis, based on a modified definition, in the intention-to-treat analysis and in patients with moderate or severe NASH.
••
University of California, Berkeley1, United States Department of Energy2, Nagoya University3, University of Texas at Austin4, Ulsan National Institute of Science and Technology5, National Institute of Genetics6, Hiroshima University7, Hokkaido University8, University of Tokyo9, Radboud University Nijmegen10, Salk Institute for Biological Studies11, Nagahama Institute of Bio-Science and Technology12, Yamagata University13, Okinawa Institute of Science and Technology14, Tokyo Institute of Technology15, University of Tokushima16, Harry Perkins Institute of Medical Research17, Rikkyo University18, University of Maryland, Baltimore19, Kitasato University20, University of Chicago21, National Institute of Advanced Industrial Science and Technology22, National Institutes of Natural Sciences, Japan23, Illumina24, University of Washington25, University of Virginia26, Niigata University27, University of Rochester28, Cincinnati Children's Hospital Medical Center29, University of Calgary30, University of Iowa31, University of Basel32, Graduate University for Advanced Studies33, National Institute of Informatics34
TL;DR: The Xenopus laevis genome is sequenced; the two diploid progenitor species are estimated to have diverged around 34 million years ago and to have combined to form an allotetraploid around 17–18 Ma, with more than 56% of all genes retained in two homoeologous copies.
Abstract: To explore the origins and consequences of tetraploidy in the African clawed frog, we sequenced the Xenopus laevis genome and compared it to the related diploid X. tropicalis genome. We characterize the allotetraploid origin of X. laevis by partitioning its genome into two homoeologous subgenomes, marked by distinct families of 'fossil' transposable elements. On the basis of the activity of these elements and the age of hundreds of unitary pseudogenes, we estimate that the two diploid progenitor species diverged around 34 million years ago (Ma) and combined to form an allotetraploid around 17-18 Ma. More than 56% of all genes were retained in two homoeologous copies. Protein function, gene expression, and the amount of conserved flanking sequence all correlate with retention rates. The subgenomes have evolved asymmetrically, with one chromosome set more often preserving the ancestral state and the other experiencing more gene loss, deletion, rearrangement, and reduced gene expression.