scispace - formally typeset

Showing papers by "Radboud University Nijmegen" published in 2016


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4 +334 more · Institutions (82)
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant H0 = (67.8 ± 0.9) km s⁻¹ Mpc⁻¹, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016 and the corresponding reionization redshift. These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |ΩK| < 0.005.
Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ² potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also place constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing. We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
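As a quick plausibility check on the quoted parameters, a minimal Python sketch (the unit-conversion constants and the derived quantities are illustrative additions, not taken from the paper) converts H0 into a Hubble time and forms the physical matter density Ωm h²:

```python
# Back-of-envelope numbers from the quoted best-fit Planck parameters.
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear

H0 = 67.8                # Hubble constant, km s^-1 Mpc^-1 (from the abstract)
h = H0 / 100.0           # dimensionless Hubble parameter
omega_m = 0.308          # matter density parameter (from the abstract)

H0_si = H0 / KM_PER_MPC                      # H0 in s^-1
hubble_time_gyr = 1.0 / H0_si / SEC_PER_GYR  # Hubble time, ~14.4 Gyr
physical_matter_density = omega_m * h ** 2   # Ωm h², the CMB-native combination
```

The Hubble time 1/H0 is only a rough age scale; the actual ΛCDM age integral gives a slightly different number.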

10,728 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1 +1008 more · Institutions (96)
TL;DR: This is the first direct detection of gravitational waves and the first observation of a binary black hole merger, and these observations demonstrate the existence of binary stellar-mass black hole systems.
Abstract: On September 14, 2015 at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory simultaneously observed a transient gravitational-wave signal. The signal sweeps upwards in frequency from 35 to 250 Hz with a peak gravitational-wave strain of $1.0 \times 10^{-21}$. It matches the waveform predicted by general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. The signal was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203 000 years, equivalent to a significance greater than $5.1\sigma$. The source lies at a luminosity distance of $410^{+160}_{-180}$ Mpc corresponding to a redshift $z = 0.09^{+0.03}_{-0.04}$. In the source frame, the initial black hole masses are $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$, and the final black hole mass is $62^{+4}_{-4} M_\odot$, with $3.0^{+0.5}_{-0.5} M_\odot c^2$ radiated in gravitational waves. All uncertainties define 90% credible intervals. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.
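The quoted masses can be checked against the radiated energy in a few lines of Python (the solar-mass and speed-of-light constants and the conversion to joules are my additions; the abstract quotes only solar-mass units):

```python
# Mass-energy bookkeeping for GW150914, using the abstract's central values.
M_SUN_KG = 1.989e30   # solar mass in kg
C = 2.998e8           # speed of light in m/s

m1, m2, m_final = 36.0, 29.0, 62.0     # source-frame masses, in M_sun
e_rad_msun = m1 + m2 - m_final         # energy radiated, in M_sun c^2
e_rad_joules = e_rad_msun * M_SUN_KG * C ** 2
```

The balance m1 + m2 − m_final reproduces the quoted 3.0 M⊙c² radiated in gravitational waves.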

9,596 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4 +2519 more · Institutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: Gaia, as discussed by the authors, is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach.
Abstract: Gaia is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach. Both the spacecraft and the payload were built by European industry. The involvement of the scientific community focusses on data processing for which the international Gaia Data Processing and Analysis Consortium (DPAC) was selected in 2007. Gaia was launched on 19 December 2013 and arrived at its operating point, the second Lagrange point of the Sun-Earth-Moon system, a few weeks later. The commissioning of the spacecraft and payload was completed on 19 July 2014. The nominal five-year mission started with four weeks of special, ecliptic-pole scanning and subsequently transferred into full-sky scanning mode. We recall the scientific goals of Gaia and give a description of the as-built spacecraft that is currently (mid-2016) being operated to achieve these goals. We pay special attention to the payload module, the performance of which is closely related to the scientific performance of the mission. We provide a summary of the commissioning activities and findings, followed by a description of the routine operational mode. We summarise scientific performance estimates on the basis of in-orbit operations. Several intermediate Gaia data releases are planned and the data can be retrieved from the Gaia Archive, which is available through the Gaia home page.

5,164 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy3 +970 more · Institutions (114)
TL;DR: This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
Abstract: We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a significance greater than $5\sigma$. The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz, and reached a peak gravitational strain of $3.4^{+0.7}_{-0.9} \times 10^{-22}$. The inferred source-frame initial black hole masses are $14.2^{+8.3}_{-3.7} M_\odot$ and $7.5^{+2.3}_{-2.3} M_\odot$, and the final black hole mass is $20.8^{+6.1}_{-1.7} M_\odot$. We find that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of $440^{+180}_{-190}$ Mpc corresponding to a redshift $0.09^{+0.03}_{-0.04}$. All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.

3,448 citations


Journal ArticleDOI
TL;DR: These ESMO consensus guidelines have been developed based on the current available evidence to provide a series of evidence-based recommendations to assist in the treatment and management of patients with mCRC in this rapidly evolving treatment setting.

2,382 citations


Journal ArticleDOI
TL;DR: The first Gaia data release, Gaia DR1 as discussed by the authors, consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues.
Abstract: Context. At about 1000 days after the launch of Gaia we present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7. Aims: A summary of Gaia DR1 is presented along with illustrations of the scientific quality of the data, followed by a discussion of the limitations due to the preliminary nature of this release. Methods: The raw data collected by Gaia during the first 14 months of the mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC) and turned into an astrometric and photometric catalogue. Results: Gaia DR1 consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues - a realisation of the Tycho-Gaia Astrometric Solution (TGAS) - and a secondary astrometric data set containing the positions for an additional 1.1 billion sources. The second component is the photometric data set, consisting of mean G-band magnitudes for all sources. The G-band light curves and the characteristics of 3000 Cepheid and RR Lyrae stars, observed at high cadence around the south ecliptic pole, form the third component. For the primary astrometric data set the typical uncertainty is about 0.3 mas for the positions and parallaxes, and about 1 mas yr⁻¹ for the proper motions. A systematic component of 0.3 mas should be added to the parallax uncertainties. For the subset of 94 000 Hipparcos stars in the primary data set, the proper motions are much more precise at about 0.06 mas yr⁻¹. For the secondary astrometric data set, the typical uncertainty of the positions is 10 mas. The median uncertainties on the mean G-band magnitudes range from the mmag level to 0.03 mag over the magnitude range 5 to 20.7.
Conclusions: Gaia DR1 is an important milestone ahead of the next Gaia data release, which will feature five-parameter astrometry for all sources. Extensive validation shows that Gaia DR1 represents a major advance in the mapping of the heavens and the availability of basic stellar data that underpin observational astrophysics. Nevertheless, the very preliminary nature of this first Gaia data release does lead to a number of important limitations to the data quality which should be carefully considered before drawing conclusions from the data.
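To illustrate how the quoted parallax uncertainties propagate into distances, a small sketch (the 5 mas example star is hypothetical; following the text, the 0.3 mas systematic is simply added to the typical 0.3 mas uncertainty):

```python
# Hypothetical TGAS-like star; the numbers below are illustrative, not catalogue data.
parallax_mas = 5.0          # assumed parallax of the example star, in mas
sigma_stat_mas = 0.3        # typical TGAS parallax uncertainty (from the abstract)
sigma_sys_mas = 0.3         # systematic component to be added (from the abstract)

sigma_total_mas = sigma_stat_mas + sigma_sys_mas
distance_pc = 1000.0 / parallax_mas               # distance in parsecs = 1000 / parallax in mas
frac_distance_err = sigma_total_mas / parallax_mas  # first-order fractional distance error
```

For this example the star sits at 200 pc with a ~12% distance uncertainty; the first-order error propagation only holds when the parallax is many sigma above zero.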

2,174 citations


Journal ArticleDOI
Serena Nik-Zainal1, Serena Nik-Zainal2, Helen Davies1, Johan Staaf3, Manasa Ramakrishna1, Dominik Glodzik1, Xueqing Zou1, Inigo Martincorena1, Ludmil B. Alexandrov1, Sancha Martin1, David C. Wedge1, Peter Van Loo1, Young Seok Ju1, Michiel M. Smid4, Arie B. Brinkman5, Sandro Morganella6, Miriam Ragle Aure7, Ole Christian Lingjærde7, Anita Langerød8, Markus Ringnér3, Sung-Min Ahn9, Sandrine Boyault, Jane E. Brock, Annegien Broeks10, Adam Butler1, Christine Desmedt11, Luc Dirix12, Serge Dronov1, Aquila Fatima13, John A. Foekens4, Moritz Gerstung1, Gerrit Gk Hooijer14, Se Jin Jang15, David Jones1, Hyung-Yong Kim16, Tari Ta King17, Savitri Krishnamurthy18, Hee Jin Lee15, Jeong-Yeon Lee16, Yang Li1, Stuart McLaren1, Andrew Menzies1, Ville Mustonen1, Sarah O’Meara1, Iris Pauporté, Xavier Pivot19, Colin Ca Purdie20, Keiran Raine1, Kamna Ramakrishnan1, Germán Fg Rodríguez-González4, Gilles Romieu21, Anieta M. Sieuwerts4, Peter Pt Simpson22, Rebecca Shepherd1, Lucy Stebbings1, Olafur Oa Stefansson23, Jon W. Teague1, Stefania Tommasi, Isabelle Treilleux, Gert Van den Eynden12, Peter B. Vermeulen12, Anne Vincent-Salomon24, Lucy R. Yates1, Carlos Caldas25, Laura Van't Veer10, Andrew Tutt26, Andrew Tutt27, Stian Knappskog28, Benita Kiat Tee Bk Tan29, Jos Jonkers10, Åke Borg3, Naoto T. Ueno18, Christos Sotiriou11, Alain Viari, P. Andrew Futreal1, Peter J. Campbell1, Paul N. Span5, Steven Van Laere12, Sunil R. Lakhani22, Jorunn E. Eyfjord23, Alastair M Thompson, Ewan Birney6, Hendrik G. Stunnenberg5, Marc J. van de Vijver14, John W.M. Martens4, Anne Lise Børresen-Dale8, Andrea L. Richardson13, Gu Kong16, Gilles Thomas, Michael R. Stratton1 
02 Jun 2016-Nature
TL;DR: This analysis of all classes of somatic mutation across exons, introns and intergenic regions highlights the repertoire of cancer genes and mutational processes operative, and progresses towards a comprehensive account of the somatic genetic basis of breast cancer.
Abstract: We analysed whole-genome sequences of 560 breast cancers to advance understanding of the driver mutations conferring clonal advantage and the mutational processes generating somatic mutations. We found that 93 protein-coding cancer genes carried probable driver mutations. Some non-coding regions exhibited high mutation frequencies, but most have distinctive structural features probably causing elevated mutation rates and do not contain driver mutations. Mutational signature analysis was extended to genome rearrangements and revealed twelve base substitution and six rearrangement signatures. Three rearrangement signatures, characterized by tandem duplications or deletions, appear associated with defective homologous-recombination-based DNA repair: one with deficient BRCA1 function, another with deficient BRCA1 or BRCA2 function, the cause of the third is unknown. This analysis of all classes of somatic mutation across exons, introns and intergenic regions highlights the repertoire of cancer genes and mutational processes operating, and progresses towards a comprehensive account of the somatic genetic basis of breast cancer.

1,696 citations


Journal ArticleDOI
22 Apr 2016-Science
TL;DR: Proof-of-principle experimental studies support the hypothesis that trained immunity is one of the main immunological processes that mediate the nonspecific protective effects against infections induced by vaccines, such as bacillus Calmette-Guérin or measles vaccination.
Abstract: The general view that only adaptive immunity can build immunological memory has recently been challenged. In organisms lacking adaptive immunity, as well as in mammals, the innate immune system can mount resistance to reinfection, a phenomenon termed "trained immunity" or "innate immune memory." Trained immunity is orchestrated by epigenetic reprogramming, broadly defined as sustained changes in gene expression and cell physiology that do not involve permanent genetic changes such as mutations and recombination, which are essential for adaptive immunity. The discovery of trained immunity may open the door for novel vaccine approaches, new therapeutic strategies for the treatment of immune deficiency states, and modulation of exaggerated inflammation in autoinflammatory diseases.

1,690 citations


Journal ArticleDOI
TL;DR: The papers in this special section focus on the technology and applications supported by deep learning, which have proven to be powerful tools for a broad range of computer vision tasks.
Abstract: The papers in this special section focus on the technology and applications supported by deep learning. Deep learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013. Deep learning is an improvement of artificial neural networks, consisting of more layers that permit higher levels of abstraction and improved predictions from data. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications.
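Since the paragraph centres on convolutional feature extraction, here is a minimal, self-contained sketch of the convolution-plus-nonlinearity building block of a CNN (pure NumPy; a toy edge detector of my own construction, not any group's actual model):

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2-D cross-correlation: slide the kernel over the image
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Elementwise rectified linear unit, the standard CNN nonlinearity
    return np.maximum(x, 0.0)

# A Prewitt-style vertical-edge detector applied to a step image
image = np.zeros((5, 5))
image[:, 2:] = 1.0
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
feature_map = relu(conv2d(image, kernel))
```

Stacking many such learned kernels, interleaved with nonlinearities and pooling, is what yields the mid- and high-level abstractions described above.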

1,428 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy1 +976 more · Institutions (107)
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of 10¹³ km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.
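The Compton-wavelength bound translates into a graviton-mass bound via the standard relation m_g c² = hc/λ_g; the short sketch below makes the conversion explicit (the constants and the conversion itself are my additions, not quoted from the paper):

```python
# Convert the 10^13 km Compton-wavelength lower bound into a graviton-mass
# upper bound, using m_g c^2 = h c / lambda_g.
HC_EV_M = 1.2398e-6        # Planck constant times c, in eV·m
lambda_g_m = 1e13 * 1e3    # 10^13 km expressed in metres

m_g_ev = HC_EV_M / lambda_g_m   # upper bound on the graviton mass, in eV/c^2
```

This gives a bound of order 10⁻²² eV/c², i.e. a longer allowed Compton wavelength means a lighter maximum graviton mass.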

Journal ArticleDOI
29 Apr 2016-Science
TL;DR: Deep sequencing of the gut microbiomes of 1135 participants from a Dutch population-based cohort shows relations between the microbiome and 126 exogenous and intrinsic host factors, including 31 intrinsic factors, 12 diseases, 19 drug groups, 4 smoking categories, and 60 dietary factors; the results are an important step toward a better understanding of environment-diet-microbe-host interactions.
Abstract: Deep sequencing of the gut microbiomes of 1135 participants from a Dutch population-based cohort shows relations between the microbiome and 126 exogenous and intrinsic host factors, including 31 intrinsic factors, 12 diseases, 19 drug groups, 4 smoking categories, and 60 dietary factors. These factors collectively explain 18.7% of the variation seen in the interindividual distance of microbial composition. We could associate 110 factors to 125 species and observed that fecal chromogranin A (CgA), a protein secreted by enteroendocrine cells, was exclusively associated with 61 microbial species whose abundance collectively accounted for 53% of microbial composition. Low CgA concentrations were seen in individuals with a more diverse microbiome. These results are an important step toward a better understanding of environment-diet-microbe-host interactions.

Journal ArticleDOI
Abstract: Defeating Alzheimer's disease and other dementias: a priority for European science and society

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy3 +978 more · Institutions (112)
TL;DR: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers as discussed by the authors.
Abstract: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper we present full results from a search for binary black hole merger signals with total masses up to 100 M⊙ and detailed implications from our observations of these systems. Our search, based on general-relativistic models of gravitational wave signals from binary black hole systems, unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than 5σ over the observing period. It also identified a third possible signal, LVT151012, with substantially lower significance, which has an 87% probability of being of astrophysical origin. We provide detailed estimates of the parameters of the observed systems. Both GW150914 and GW151226 provide an unprecedented opportunity to study the two-body motion of a compact-object binary in the large velocity, highly nonlinear regime. We do not observe any deviations from general relativity, and place improved empirical bounds on several high-order post-Newtonian coefficients. From our observations we infer stellar-mass binary black hole merger rates lying in the range 9–240 Gpc⁻³ yr⁻¹. These observations are beginning to inform astrophysical predictions of binary black hole formation rates, and indicate that future observing runs of the Advanced detector network will yield many more gravitational wave detections.

Journal ArticleDOI
Kurt Lejaeghere1, Gustav Bihlmayer2, Torbjörn Björkman3, Torbjörn Björkman4, Peter Blaha5, Stefan Blügel2, Volker Blum6, Damien Caliste7, Ivano E. Castelli8, Stewart J. Clark9, Andrea Dal Corso10, Stefano de Gironcoli10, Thierry Deutsch7, J. K. Dewhurst11, Igor Di Marco12, Claudia Draxl13, Claudia Draxl14, Marcin Dulak15, Olle Eriksson12, José A. Flores-Livas11, Kevin F. Garrity16, Luigi Genovese7, Paolo Giannozzi17, Matteo Giantomassi18, Stefan Goedecker19, Xavier Gonze18, Oscar Grånäs20, Oscar Grånäs12, E. K. U. Gross11, Andris Gulans14, Andris Gulans13, Francois Gygi21, D. R. Hamann22, P. J. Hasnip23, Natalie Holzwarth24, Diana Iusan12, Dominik B. Jochym25, F. Jollet, Daniel M. Jones26, Georg Kresse27, Klaus Koepernik28, Klaus Koepernik29, Emine Kucukbenli10, Emine Kucukbenli8, Yaroslav Kvashnin12, Inka L. M. Locht12, Inka L. M. Locht30, Sven Lubeck13, Martijn Marsman27, Nicola Marzari8, Ulrike Nitzsche28, Lars Nordström12, Taisuke Ozaki31, Lorenzo Paulatto32, Chris J. Pickard33, Ward Poelmans1, Matt Probert23, Keith Refson25, Keith Refson34, Manuel Richter28, Manuel Richter29, Gian-Marco Rignanese18, Santanu Saha19, Matthias Scheffler14, Matthias Scheffler35, Martin Schlipf21, Karlheinz Schwarz5, Sangeeta Sharma11, Francesca Tavazza16, Patrik Thunström5, Alexandre Tkatchenko36, Alexandre Tkatchenko14, Marc Torrent, David Vanderbilt22, Michiel van Setten18, Veronique Van Speybroeck1, John M. Wills37, Jonathan R. Yates26, Guo-Xu Zhang38, Stefaan Cottenier1 
25 Mar 2016-Science
TL;DR: A procedure to assess the precision of DFT methods was devised and used to demonstrate reproducibility among many of the most widely used DFT codes, showing that the precision of DFT implementations can be determined even in the absence of one absolute reference code.
Abstract: The widespread popularity of density functional theory has given rise to an extensive range of dedicated codes for predicting molecular and crystalline properties. However, each code implements the formalism in a different way, raising questions about the reproducibility of such predictions. We report the results of a community-wide effort that compared 15 solid-state codes, using 40 different potentials or basis set types, to assess the quality of the Perdew-Burke-Ernzerhof equations of state for 71 elemental crystals. We conclude that predictions from recent codes and pseudopotentials agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Older methods, however, have less precise agreement. Our benchmark provides a framework for users and developers to document the precision of new applications and methodological improvements.
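The pairwise comparison behind such statements can be sketched as an RMS energy difference between two codes' fitted equations of state. The NumPy version below is a hedged illustration in the spirit of the Δ criterion (the ±6% volume window and the parameter names are my assumptions, not the paper's exact definition):

```python
import numpy as np

def birch_murnaghan(V, E0, V0, B0, B0p):
    """Birch-Murnaghan equation of state E(V) with equilibrium energy E0,
    volume V0, bulk modulus B0, and its pressure derivative B0p."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

def delta_gauge(params_a, params_b, V0, n=1001):
    """RMS energy difference between two fitted E(V) curves over a
    +/-6% volume window around V0 (illustrative Delta-style metric)."""
    V = np.linspace(0.94 * V0, 1.06 * V0, n)
    dE = birch_murnaghan(V, *params_a) - birch_murnaghan(V, *params_b)
    return float(np.sqrt(np.mean(dE ** 2)))
```

Two codes that fit nearly identical (E0, V0, B0, B0p) parameters for the same crystal produce a small Δ-style value, which is the sense in which "pairwise differences" between codes are quantified.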

Journal ArticleDOI
Aysu Okbay1, Jonathan P. Beauchamp2, Mark Alan Fontana3, James J. Lee4 +293 more · Institutions (81)
26 May 2016-Nature
TL;DR: In this article, the results of a genome-wide association study (GWAS) for educational attainment were reported, showing that single-nucleotide polymorphisms associated with educational attainment disproportionately occur in genomic regions regulating gene expression in the fetal brain.
Abstract: Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.

Journal ArticleDOI
TL;DR: The results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
Abstract: Advanced age-related macular degeneration (AMD) is the leading cause of blindness in the elderly, with limited therapeutic options. Here we report on a study of >12 million variants, including 163,714 directly genotyped, mostly rare, protein-altering variants. Analyzing 16,144 patients and 17,832 controls, we identify 52 independently associated common and rare variants (P < 5 × 10⁻⁸) distributed across 34 loci. Although wet and dry AMD subtypes exhibit predominantly shared genetics, we identify the first genetic association signal specific to wet AMD, near MMP9 (difference P value = 4.1 × 10⁻¹⁰). Very rare coding variants (frequency <0.1%) in CFH, CFI and TIMP3 suggest causal roles for these genes, as does a splice variant in SLC16A8. Our results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
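The P < 5 × 10⁻⁸ threshold used here is, to a good approximation, a Bonferroni correction of a 0.05 family-wise error rate for roughly one million independent common variants; a one-liner makes the arithmetic explicit (the test count is the conventional estimate, not a number taken from this paper):

```python
# Genome-wide significance as a Bonferroni correction (conventional estimate).
alpha = 0.05                     # target family-wise error rate
n_independent_tests = 1_000_000  # approximate independent common variants genome-wide

threshold = alpha / n_independent_tests  # 5e-8, the usual GWAS cutoff
```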

Journal ArticleDOI
TL;DR: It is shown that the proposed multi-view ConvNet is well suited for false-positive reduction in a CAD system.
Abstract: We propose a novel Computer-Aided Detection (CAD) system for pulmonary nodules using multi-view convolutional networks (ConvNets), for which discriminative features are automatically learnt from the training data. The network is fed with nodule candidates obtained by combining three candidate detectors specifically designed for solid, subsolid, and large nodules. For each candidate, a set of 2-D patches from differently oriented planes is extracted. The proposed architecture comprises multiple streams of 2-D ConvNets, for which the outputs are combined using a dedicated fusion method to get the final classification. Data augmentation and dropout are applied to avoid overfitting. On 888 scans of the publicly available LIDC-IDRI dataset, our method reaches high detection sensitivities of 85.4% and 90.1% at 1 and 4 false positives per scan, respectively. An additional evaluation on independent datasets from the ANODE09 challenge and DLCST is performed. We show that the proposed multi-view ConvNet is well suited for false-positive reduction in a CAD system.
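The per-candidate patch-extraction step can be illustrated with a minimal NumPy sketch. The function name and the restriction to three orthogonal planes are our simplification; the paper extracts patches from a larger set of differently oriented planes per candidate.

```python
import numpy as np

def extract_orthogonal_patches(volume, center, size=32):
    # Extract 2-D patches around `center` from the three orthogonal
    # planes (axial, coronal, sagittal) of a 3-D scan volume.
    z, y, x = center
    h = size // 2
    # pad so patches near the border keep the requested size
    vol = np.pad(volume, h, mode="constant")
    z, y, x = z + h, y + h, x + h
    axial    = vol[z, y - h:y + h, x - h:x + h]
    coronal  = vol[z - h:z + h, y, x - h:x + h]
    sagittal = vol[z - h:z + h, y - h:y + h, x]
    return np.stack([axial, coronal, sagittal])  # shape (3, size, size)

# toy volume with a bright "nodule" at the candidate location
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[30:34, 30:34, 30:34] = 1.0
patches = extract_orthogonal_patches(vol, center=(32, 32, 32))
print(patches.shape)  # (3, 32, 32)
```

In the paper's architecture each such patch would feed one 2-D ConvNet stream, with the stream outputs fused for the final classification.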

Journal ArticleDOI
William J. Astle, Heather Elding1, Heather Elding2, Tao Jiang3, Dave Allen4, Dace Ruklisa4, Dace Ruklisa3, Alice L. Mann2, Daniel Mead2, Heleen J. Bouman2, Fernando Riveros-Mckay2, Myrto Kostadima5, Myrto Kostadima4, Myrto Kostadima3, John J. Lambourne3, John J. Lambourne4, Suthesh Sivapalaratnam6, Suthesh Sivapalaratnam3, Kate Downes3, Kate Downes4, Kousik Kundu2, Kousik Kundu3, Lorenzo Bomba2, Kim Berentsen7, John Bradley1, John Bradley3, Louise C. Daugherty4, Louise C. Daugherty3, Olivier Delaneau8, Kathleen Freson9, Stephen F. Garner4, Stephen F. Garner3, Luigi Grassi4, Luigi Grassi3, Jose A. Guerrero3, Jose A. Guerrero4, Matthias Haimel3, Eva M. Janssen-Megens7, Anita Kaan7, Mihir A Kamat3, Bowon Kim7, Amit Mandoli7, Jonathan Marchini10, Jonathan Marchini11, Joost H.A. Martens7, Stuart Meacham3, Stuart Meacham4, Karyn Megy3, Karyn Megy4, Jared O'Connell10, Jared O'Connell11, Romina Petersen4, Romina Petersen3, Nilofar Sharifi7, S.M. Sheard, James R Staley3, Salih Tuna3, Martijn van der Ent7, Klaudia Walter2, Shuang-Yin Wang7, Eleanor Wheeler2, Steven P. Wilder5, Valentina Iotchkova5, Valentina Iotchkova2, Carmel Moore3, Jennifer G. Sambrook3, Jennifer G. Sambrook4, Hendrik G. Stunnenberg7, Emanuele Di Angelantonio1, Emanuele Di Angelantonio3, Emanuele Di Angelantonio12, Stephen Kaptoge3, Stephen Kaptoge1, Taco W. Kuijpers13, Enrique Carrillo-de-Santa-Pau, David Juan, Daniel Rico14, Alfonso Valencia, Lu Chen3, Lu Chen2, Bing Ge15, Louella Vasquez2, Tony Kwan15, Diego Garrido-Martín16, Stephen Watt2, Ying Yang2, Roderic Guigó16, Stephan Beck17, Dirk S. Paul17, Dirk S. Paul3, Tomi Pastinen15, David Bujold15, Guillaume Bourque15, Mattia Frontini4, Mattia Frontini12, Mattia Frontini3, John Danesh, David J. Roberts18, David J. Roberts19, Willem H. Ouwehand, Adam S. Butterworth1, Adam S. Butterworth12, Adam S. Butterworth3, Nicole Soranzo 
17 Nov 2016-Cell
TL;DR: A genome-wide association analysis in the UK Biobank and INTERVAL studies provides evidence of shared genetic pathways linking blood cell indices with complex pathologies, including autoimmune diseases, schizophrenia, and coronary heart disease, and suggests that previously reported population associations between blood cell indices and cardiovascular disease may be non-causal.

Journal ArticleDOI
15 Apr 2016-Science
TL;DR: Investigation of mammalian tumor cell migration in confining microenvironments in vitro and in vivo indicates that cell migration incurs substantial physical stress on the NE and its content and requires efficient NE and DNA damage repair for cell survival.
Abstract: During cancer metastasis, tumor cells penetrate tissues through tight interstitial spaces, which requires extensive deformation of the cell and its nucleus. Here, we investigated mammalian tumor cell migration in confining microenvironments in vitro and in vivo. Nuclear deformation caused localized loss of nuclear envelope (NE) integrity, which led to the uncontrolled exchange of nucleo-cytoplasmic content, herniation of chromatin across the NE, and DNA damage. The incidence of NE rupture increased with cell confinement and with depletion of nuclear lamins, NE proteins that structurally support the nucleus. Cells restored NE integrity using components of the endosomal sorting complexes required for transport III (ESCRT III) machinery. Our findings indicate that cell migration incurs substantial physical stress on the NE and its content and requires efficient NE and DNA damage repair for cell survival.

Journal ArticleDOI
Nabila Aghanim1, Monique Arnaud2, M. Ashdown3, J. Aumont1  +291 moreInstitutions (73)
TL;DR: In this article, the authors present the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties.
Abstract: This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (l< 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, n_s, is confirmed smaller than unity at more than 5σ from Planck alone.
We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK^2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.

Journal ArticleDOI
01 Jul 2016-BMJ Open
TL;DR: In meta-analyses, the prediction interval reflects the variation in treatment effects over different settings, including the effect to be expected in future patients, such as the patients a clinician intends to treat.
Abstract: Objectives: Evaluating the variation in the strength of the effect across studies is a key feature of meta-analyses. This variability is reflected by measures like τ² or I², but their clinical interpretation is not straightforward. A prediction interval is less complicated: it presents the expected range of true effects in similar studies. We aimed to show the advantages of having the prediction interval routinely reported in meta-analyses. Design: We show how the prediction interval can help in understanding the uncertainty about whether an intervention works or not. To evaluate the implications of using this interval to interpret the results, we selected the first meta-analysis per intervention review of the Cochrane Database of Systematic Reviews Issues 2009–2013 with a dichotomous (n=2009) or continuous (n=1254) outcome, and generated 95% prediction intervals for them. Results: In 72.4% of 479 statistically significant (random-effects p < 0.05) meta-analyses with heterogeneity (I² > 0), the 95% prediction interval suggested that the intervention effect could be null or could even be in the opposite direction. In 20.3% of those 479 meta-analyses, the prediction interval showed that the effect could be completely opposite to the point estimate of the meta-analysis. We also demonstrate how the prediction interval can be used to calculate the probability that a new trial will show a negative effect and to improve the calculation of the power of a new trial. Conclusions: The prediction interval reflects the variation in treatment effects over different settings, including the effect to be expected in future patients, such as the patients a clinician intends to treat. Prediction intervals should be routinely reported to allow more informative inferences in meta-analyses.
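The 95% prediction interval the authors advocate can be computed from study-level estimates with the standard random-effects machinery (DerSimonian-Laird τ² plus the t-based prediction-interval formula). The sketch below uses hypothetical effect sizes and a caller-supplied t-quantile, not the authors' actual code:

```python
import math

def prediction_interval_95(effects, variances, t_crit):
    # Random-effects meta-analysis with the DerSimonian-Laird tau^2
    # estimator; 95% prediction interval per Higgins et al.:
    #   mu +/- t_{k-2} * sqrt(tau^2 + SE(mu)^2)
    # t_crit, the 97.5% t-quantile on k-2 df, is supplied by the caller.
    k = len(effects)
    w = [1.0 / v for v in variances]
    mean_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mean_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    half = t_crit * math.sqrt(tau2 + se ** 2)
    return mu - half, mu + half

# six hypothetical log-odds-ratio estimates; t_{0.975, df=4} = 2.776
effects = [-0.5, -0.3, -0.6, -0.1, -0.4, 0.1]
variances = [0.04, 0.05, 0.03, 0.06, 0.04, 0.05]
lo, hi = prediction_interval_95(effects, variances, t_crit=2.776)
# the interval spans zero even though the pooled effect is negative,
# which is exactly the situation the paper highlights
print(round(lo, 2), round(hi, 2))
```

With these toy numbers the pooled estimate is clearly negative while the prediction interval crosses zero, illustrating why a significant summary effect need not imply benefit in every future setting.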

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1  +984 moreInstitutions (116)
TL;DR: The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity.
Abstract: On September 14, 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterise the properties of the source and its parameters. The data around the time of the event were analysed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$ (for each parameter we report the median value and the range of the 90% credible interval). The dimensionless spin magnitude of the more massive black hole is bound to be $< 0.7$ (at 90% probability). The luminosity distance to the source is $410^{+160}_{-180}$ Mpc, corresponding to a redshift $0.09^{+0.03}_{-0.04}$ assuming standard cosmology. The source location is constrained to an annulus section of $590$ deg$^2$, primarily in the southern hemisphere. The binary merges into a black hole of $62^{+4}_{-4} M_\odot$ and spin $0.67^{+0.05}_{-0.07}$. This black hole is significantly more massive than any other known in the stellar-mass regime.
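Although the abstract quotes the two component masses, the combination a binary inspiral waveform determines most precisely is the chirp mass, a standard general-relativity quantity not stated here. As a quick check on the quoted median masses:

```python
def chirp_mass(m1, m2):
    # chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5),
    # the best-measured mass combination of an inspiralling binary
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# median component masses quoted in the abstract (solar masses)
print(round(chirp_mass(36.0, 29.0), 1))  # ~28.1
```

The resulting value of roughly 28 solar masses is what the inspiral phase of the waveform pins down directly; the individual masses follow from the full waveform model.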

Journal ArticleDOI
TL;DR: This tutorial will review and summarize current analysis methods used in the field of invasive and non-invasive electrophysiology to study the dynamic connections between neuronal populations, and highlights a number of interpretational caveats and common pitfalls that can arise when performing functional connectivity analysis.
Abstract: Oscillatory neuronal activity may provide a mechanism for dynamic network coordination. Rhythmic neuronal interactions can be quantified using multiple metrics, each with their own advantages and disadvantages. This tutorial will review and summarize current analysis methods used in the field of invasive and non-invasive electrophysiology to study the dynamic connections between neuronal populations. First, we review metrics for functional connectivity, including coherence, phase synchronization, phase-slope index, and Granger causality, with the specific aim to provide an intuition for how these metrics work, as well as their quantitative definition. Next, we highlight a number of interpretational caveats and common pitfalls that can arise when performing functional connectivity analysis, including the common reference problem, the signal to noise ratio problem, the volume conduction problem, the common input problem, and the trial sample size bias problem. These pitfalls will be illustrated by presenting a set of MATLAB-scripts, which can be executed by the reader to simulate each of these potential problems. Finally, we discuss how these issues can be addressed using current methods.
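As a flavour of the metrics reviewed, magnitude-squared coherence can be estimated by averaging cross- and auto-spectra across trials. This NumPy sketch on synthetic data (ours, not the tutorial's MATLAB scripts) illustrates both the metric and the trial-averaging it requires:

```python
import numpy as np

rng = np.random.default_rng(0)

def coherence(x, y):
    # Magnitude-squared coherence per frequency bin, estimated by
    # averaging cross- and auto-spectra over trials.
    # x, y: arrays of shape (n_trials, n_samples)
    X, Y = np.fft.rfft(x, axis=1), np.fft.rfft(y, axis=1)
    sxy = (X * np.conj(Y)).mean(axis=0)
    sxx = (np.abs(X) ** 2).mean(axis=0)
    syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(sxy) ** 2 / (sxx * syy)

# two channels sharing a common oscillation at frequency bin 10,
# plus independent noise on each channel
n_trials, n = 200, 256
t = np.arange(n)
phase = rng.uniform(0, 2 * np.pi, size=(n_trials, 1))
common = np.sin(2 * np.pi * 10 * t / n + phase)
x = common + 0.3 * rng.standard_normal((n_trials, n))
y = common + 0.3 * rng.standard_normal((n_trials, n))
coh = coherence(x, y)
print(round(float(coh[10]), 2), round(float(coh[40]), 2))
```

Coherence is near 1 at the shared frequency and near 1/n_trials elsewhere, which also hints at the trial-sample-size bias the tutorial warns about: with few trials the baseline coherence of unrelated signals is substantially above zero.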

Journal ArticleDOI
TL;DR: It is found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically, while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention.
Abstract: Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
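The triage logic described (excluding slides in which no patch looks suspicious) can be sketched with a simple max-probability rule. The threshold and per-patch probabilities below are illustrative, not the authors' values:

```python
import numpy as np

def triage(slide_probs, threshold):
    # flag a slide for pathologist review if any patch exceeds the
    # threshold; otherwise it can be excluded from the workload
    return [max(p) >= threshold for p in slide_probs]

rng = np.random.default_rng(1)
# hypothetical per-patch tumour probabilities: 5 cancer slides, each
# with at least one confident patch, and 5 benign slides
cancer = [rng.beta(1, 8, 200).tolist() + [0.98] for _ in range(5)]
benign = [rng.beta(1, 12, 200).tolist() for _ in range(5)]
flags = triage(cancer + benign, threshold=0.9)
sensitivity = sum(flags[:5]) / 5       # fraction of cancer slides caught
excluded = 1 - sum(flags[5:]) / 5      # fraction of benign slides ruled out
print(sensitivity, excluded)
```

In practice the threshold would be tuned so that sensitivity on slides containing cancer stays at 100%, and the achievable exclusion rate on benign slides (30-40% in the paper) is the workload saving.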

Journal ArticleDOI
17 Nov 2016-Cell
TL;DR: This work uses promoter capture Hi-C to identify interacting regions of 31,253 promoters in 17 human primary hematopoietic cell types and shows that promoter interactions are highly cell type specific and enriched for links between active promoters and epigenetically marked enhancers.

Journal ArticleDOI
TL;DR: This paper features the first comprehensive and critical account of European syntaxa and synthesizes more than 100 yr of classification effort by European phytosociologists.
Abstract: Aims: Vegetation classification consistent with the Braun-Blanquet approach is widely used in Europe for applied vegetation science, conservation planning and land management. During the long history of syntaxonomy, many concepts and names of vegetation units have been proposed, but there has been no single classification system integrating these units. Here we (1) present a comprehensive, hierarchical, syntaxonomic system of alliances, orders and classes of Braun-Blanquet syntaxonomy for vascular plant, bryophyte and lichen, and algal communities of Europe; (2) briefly characterize in ecological and geographic terms accepted syntaxonomic concepts; (3) link available synonyms to these accepted concepts; and (4) provide a list of diagnostic species for all classes. Location: European mainland, Greenland, Arctic archipelagos (including Iceland, Svalbard, Novaya Zemlya), Canary Islands, Madeira, Azores, Caucasus, Cyprus. Methods: We evaluated approximately 10000 bibliographic sources to create a comprehensive list of previously proposed syntaxonomic units. These units were evaluated by experts for their floristic and ecological distinctness, clarity of geographic distribution and compliance with the nomenclature code. Accepted units were compiled into three systems of classes, orders and alliances (EuroVegChecklist, EVC) for communities dominated by vascular plants (EVC1), bryophytes and lichens (EVC2) and algae (EVC3). Results: EVC1 includes 109 classes, 300 orders and 1108 alliances; EVC2 includes 27 classes, 53 orders and 137 alliances, and EVC3 includes 13 classes, 24 orders and 53 alliances. In total 13448 taxa were assigned as indicator species to classes of EVC1, 2087 to classes of EVC2 and 368 to classes of EVC3. Accepted syntaxonomic concepts are summarized in a series of appendices, and detailed information on each is accessible through the software tool EuroVegBrowser.
Conclusions: This paper features the first comprehensive and critical account of European syntaxa and synthesizes more than 100 yr of classification effort by European phytosociologists. It aims to document and stabilize the concepts and nomenclature of syntaxa for practical uses, such as calibration of habitat classification used by the European Union, standardization of terminology for environmental assessment, management and conservation of nature areas, landscape planning and education. The presented classification systems provide a baseline for future development and revision of European syntaxonomy.

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4  +301 moreInstitutions (72)
TL;DR: In this paper, the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario were studied, and it was shown that the density of DE at early times has to be below 2% of the critical density, even when forced to play a role for z < 50.
Abstract: We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state w(a), as well as principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints and find that it has to be below ~2% (at 95% confidence) of the critical density, even when forced to play a role for z < 50 only. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories, and coupled DE. In addition to the latest Planck data, for our main analyses, we use background constraints from baryonic acoustic oscillations, type-Ia supernovae, and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations (expansion of the equation of state, early DE, general potentials in minimally-coupled scalar fields or principal component analysis) are in agreement with ΛCDM. 
When testing models that also change perturbations (even when the background is fixed to ΛCDM), some tensions appear in a few scenarios: the largest found is ~2σ for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to, at most, 3σ when external data sets are included. It disappears, however, when CMB lensing is included.
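A common choice for the Taylor expansion of the equation of state mentioned above is the two-parameter CPL form w(a) = w0 + wa(1 - a); this is an assumption on our part, since the abstract does not name the parameterization. For this form the dark-energy density evolves in closed form:

```python
import math

def rho_de_ratio(a, w0, wa):
    # rho_DE(a) / rho_DE(a=1) for w(a) = w0 + wa*(1 - a), from
    # rho(a) proportional to exp( 3 * integral_a^1 (1 + w(a'))/a' da' ),
    # which integrates to the expression below
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

# a cosmological constant (w0 = -1, wa = 0) keeps a constant density
print(rho_de_ratio(0.5, -1.0, 0.0))  # 1.0
```

Any (w0, wa) away from (-1, 0) makes the dark-energy density vary with scale factor, which is what constraints such as the early-dark-energy bound quoted in the abstract restrict.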

Journal ArticleDOI
TL;DR: A post-hoc analysis of data from a trial of patients with NASH showed that elafibranor (120 mg/d for 1 year) resolved NASH without fibrosis worsening, based on a modified definition, in the intention-to-treat analysis and in patients with moderate or severe NASH.

Journal ArticleDOI
Adam M. Session1, Adam M. Session2, Yoshinobu Uno3, Taejoon Kwon4, Taejoon Kwon5, Jarrod Chapman2, Atsushi Toyoda6, Shuji Takahashi7, Akimasa Fukui8, Akira Hikosaka7, Atsushi Suzuki7, Mariko Kondo9, Simon J. van Heeringen10, Ian K. Quigley11, Sven Heinz11, Hajime Ogino12, Haruki Ochi13, Uffe Hellsten2, Jessica B. Lyons1, Oleg Simakov14, Nicholas H. Putnam, Jonathan C. Stites, Yoko Kuroki, Toshiaki Tanaka15, Tatsuo Michiue9, Minoru Watanabe16, Ozren Bogdanovic17, Ryan Lister17, Georgios Georgiou10, Sarita S. Paranjpe10, Ila van Kruijsbergen10, Shengquiang Shu2, Joseph W. Carlson2, Tsutomu Kinoshita18, Yuko Ohta19, Shuuji Mawaribuchi20, Jerry Jenkins2, Jane Grimwood2, Jeremy Schmutz2, Therese Mitros1, Sahar V. Mozaffari21, Yutaka Suzuki9, Yoshikazu Haramoto22, Takamasa S. Yamamoto23, Chiyo Takagi23, Rebecca Heald1, Kelly E. Miller1, Christian D. Haudenschild24, Jacob O. Kitzman25, Takuya Nakayama26, Yumi Izutsu27, Jacques Robert28, Joshua D. Fortriede29, Kevin A. Burns, Vaneet Lotay30, Kamran Karimi30, Yuuri Yasuoka14, Darwin S. Dichmann1, Martin F. Flajnik19, Douglas W. Houston31, Jay Shendure25, Louis DuPasquier32, Peter D. Vize30, Aaron M. Zorn29, Michihiko Ito20, Edward M. Marcotte4, John B. Wallingford4, Yuzuru Ito22, Makoto Asashima22, Naoto Ueno33, Naoto Ueno23, Yoichi Matsuda3, Gert Jan C. Veenstra10, Asao Fujiyama6, Asao Fujiyama34, Asao Fujiyama33, Richard M. Harland1, Masanori Taira9, Daniel S. Rokhsar1, Daniel S. Rokhsar2, Daniel S. Rokhsar14 
20 Oct 2016-Nature
TL;DR: The Xenopus laevis genome is sequenced and it is estimated that the two diploid progenitor species diverged around 34 million years ago and combined to form an allotetraploid around 17–18 Ma; more than 56% of all genes were retained in two homoeologous copies.
Abstract: To explore the origins and consequences of tetraploidy in the African clawed frog, we sequenced the Xenopus laevis genome and compared it to the related diploid X. tropicalis genome. We characterize the allotetraploid origin of X. laevis by partitioning its genome into two homoeologous subgenomes, marked by distinct families of 'fossil' transposable elements. On the basis of the activity of these elements and the age of hundreds of unitary pseudogenes, we estimate that the two diploid progenitor species diverged around 34 million years ago (Ma) and combined to form an allotetraploid around 17-18 Ma. More than 56% of all genes were retained in two homoeologous copies. Protein function, gene expression, and the amount of conserved flanking sequence all correlate with retention rates. The subgenomes have evolved asymmetrically, with one chromosome set more often preserving the ancestral state and the other experiencing more gene loss, deletion, rearrangement, and reduced gene expression.