Showing papers by "University of Groningen", published in 2018
••
Christina Fitzmaurice1,2,3, Tomi Akinyemiju4 +177 more•Institutions (102)
TL;DR: In this paper, the authors assess the burden of 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus, and evaluate cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods.
Abstract: Importance The increasing burden due to cancer and other noncommunicable diseases poses a threat to human development, which has resulted in global political commitments reflected in the Sustainable Development Goals as well as the World Health Organization (WHO) Global Action Plan on Non-Communicable Diseases. To determine if these commitments have resulted in improved cancer control, quantitative assessments of the cancer burden are required. Objective To assess the burden for 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus. Evidence Review Cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) were evaluated for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods. Levels and trends were analyzed over time, as well as by the Sociodemographic Index (SDI). Changes in incident cases were categorized by changes due to epidemiological vs demographic transition. Findings In 2016, there were 17.2 million cancer cases worldwide and 8.9 million deaths. Cancer cases increased by 28% between 2006 and 2016. The smallest increase was seen in high SDI countries. Globally, population aging contributed 17%; population growth, 12%; and changes in age-specific rates, −1% to this change. The most common incident cancer globally for men was prostate cancer (1.4 million cases). The leading cause of cancer deaths and DALYs was tracheal, bronchus, and lung cancer (1.2 million deaths and 25.4 million DALYs). For women, the most common incident cancer and the leading cause of cancer deaths and DALYs was breast cancer (1.7 million incident cases, 535 000 deaths, and 14.9 million DALYs). In 2016, cancer caused 213.2 million DALYs globally for both sexes combined. 
Between 2006 and 2016, the average annual age-standardized incidence rates for all cancers combined increased in 130 of 195 countries or territories, and the average annual age-standardized death rates decreased within that timeframe in 143 of 195 countries or territories. Conclusions and Relevance Large disparities exist between countries in cancer incidence, deaths, and associated disability. Scaling up cancer prevention and ensuring universal access to cancer care are required for health equity and to fulfill the global commitments for noncommunicable disease and cancer control.
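The DALY figures quoted throughout this abstract combine a fatal and a non-fatal component: years of life lost (YLL) plus years lived with disability (YLD). A minimal sketch of that arithmetic, using made-up example numbers rather than any estimate from the study:

```python
# Sketch of the DALY arithmetic used in burden-of-disease work.
# All numbers below are illustrative, not figures from the GBD study.

def years_of_life_lost(deaths, life_expectancy_at_death):
    """YLL: deaths multiplied by standard remaining life expectancy."""
    return deaths * life_expectancy_at_death

def years_lived_with_disability(prevalent_cases, disability_weight):
    """YLD: prevalent cases weighted by disability severity (0..1)."""
    return prevalent_cases * disability_weight

yll = years_of_life_lost(deaths=1_000, life_expectancy_at_death=20.0)
yld = years_lived_with_disability(prevalent_cases=5_000, disability_weight=0.2)
daly = yll + yld
print(daly)  # 21000.0
```

In practice the GBD pipeline computes these quantities per cause, age, sex, and location before aggregating; the sketch only shows the defining sum.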
4,621 citations
••
Jeffrey D. Stanaway1, Ashkan Afshin1, Emmanuela Gakidou1, Stephen S Lim1 +1050 more•Institutions (346)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.
2,910 citations
••
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8 (+2.0/−1.7) km for the heavier star and R2 = 10.7 (+2.1/−1.5) km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain R1 = 11.9 (+1.4/−1.4) km and R2 = 11.9 (+1.4/−1.4) km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with the pressure at twice nuclear saturation density measured at 3.5 (+2.7/−1.7) × 10³⁴ dyn cm⁻² at the 90% level.
1,595 citations
••
University of East Anglia1, University of Exeter2, Alfred Wegener Institute for Polar and Marine Research3, Ludwig Maximilian University of Munich4, Max Planck Society5, Commonwealth Scientific and Industrial Research Organisation6, Karlsruhe Institute of Technology7, Atlantic Oceanographic and Meteorological Laboratory8, Cooperative Institute for Marine and Atmospheric Studies9, École Normale Supérieure10, Centre national de la recherche scientifique11, University of Maryland, College Park12, University of Virginia13, Flanders Marine Institute14, Oak Ridge National Laboratory15, Woods Hole Research Center16, University of Illinois at Urbana–Champaign17, Geophysical Institute, University of Bergen18, Met Office19, University of California, San Diego20, Netherlands Environmental Assessment Agency21, Utrecht University22, University of Paris23, Oeschger Centre for Climate Change Research24, Tsinghua University25, National Center for Atmospheric Research26, Institute of Arctic and Alpine Research27, National Institute for Environmental Studies28, Hobart Corporation29, Cooperative Research Centre30, Japan Agency for Marine-Earth Science and Technology31, University of Groningen32, Wageningen University and Research Centre33, Bjerknes Centre for Climate Research34, Goddard Space Flight Center35, Leibniz Institute for Baltic Sea Research36, Princeton University37, Leibniz Institute of Marine Sciences38, National Oceanic and Atmospheric Administration39, Auburn University40, Food and Agriculture Organization41, VU University Amsterdam42
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the "global carbon budget" – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use and land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2008–2017), EFF was 9.4±0.5 GtC yr−1, ELUC 1.5±0.7 GtC yr−1, GATM 4.7±0.02 GtC yr−1, SOCEAN 2.4±0.5 GtC yr−1, and SLAND 3.2±0.8 GtC yr−1, with a budget imbalance BIM of 0.5 GtC yr−1 indicating overestimated emissions and/or underestimated sinks. For the year 2017 alone, the growth in EFF was about 1.6% and emissions increased to 9.9±0.5 GtC yr−1. Also for 2017, ELUC was 1.4±0.7 GtC yr−1, GATM was 4.6±0.2 GtC yr−1, SOCEAN was 2.5±0.5 GtC yr−1, and SLAND was 3.8±0.8 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 405.0±0.1 ppm averaged over 2017. For 2018, preliminary data for the first 6–9 months indicate a renewed growth in EFF of +2.7% (range of 1.8% to 3.7%) based on national emission projections for China, the US, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. The analysis presented here shows that the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2017, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations show (1) no consensus in the mean and trend in land-use change emissions, (2) a persistent low agreement among the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models, originating outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018, 2016, 2015a, b, 2014, 2013). All results presented here can be downloaded from https://doi.org/10.18160/GCP-2018.
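The five components satisfy a simple budget identity: total emissions (EFF + ELUC) should equal the atmospheric growth rate plus the ocean and land sinks, with BIM absorbing the mismatch. A quick check using the rounded decade-mean values quoted in the abstract (the published BIM of 0.5 GtC yr−1 is computed from unrounded inputs, so the rounded figures give roughly 0.6):

```python
# Carbon budget identity: BIM = (EFF + ELUC) - (GATM + SOCEAN + SLAND).
# Values are the rounded 2008-2017 decade means from the abstract, in GtC/yr.
eff, eluc = 9.4, 1.5                  # fossil and land-use-change emissions
gatm, socean, sland = 4.7, 2.4, 3.2   # atmospheric growth, ocean sink, land sink

bim = (eff + eluc) - (gatm + socean + sland)
print(round(bim, 1))  # 0.6 -- the paper reports 0.5 from unrounded inputs
```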
1,458 citations
••
Verneri Anttila1,2, Brendan Bulik-Sullivan1,2 +717 more•Institutions (270)
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.
1,357 citations
••
TL;DR: This study provides insights into the role of alcohol consumption in the genetic architecture of hypertension through a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions.
Abstract: Heavy alcohol consumption is an established risk factor for hypertension; the mechanism by which alcohol consumption impacts blood pressure (BP) regulation remains unknown. We hypothesized that a genome-wide association study accounting for gene-alcohol consumption interaction for BP might identify additional BP loci and contribute to the understanding of alcohol-related BP regulation. We conducted a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions. In Stage 1, genome-wide discovery meta-analyses in ≈131K individuals across several ancestry groups yielded 3,514 SNVs (245 loci) with suggestive evidence of association (P < 1.0 × 10⁻⁵). In Stage 2, these SNVs were tested for independent external replication in ≈440K individuals across multiple ancestries. We identified and replicated (at Bonferroni correction threshold) five novel BP loci (380 SNVs in 21 genes) and 49 previously reported BP loci (2,159 SNVs in 109 genes) in European ancestry, and in multi-ancestry meta-analyses (P < 5.0 × 10⁻⁸). For African ancestry samples, we detected 18 potentially novel BP loci (P < 5.0 × 10⁻⁸) in Stage 1 that warrant further replication. Additionally, correlated meta-analysis identified eight novel BP loci (11 genes). Several genes in these loci (e.g., PINX1, GATA4, BLK, FTO and GABBR2) have been previously reported to be associated with alcohol consumption. These findings provide insights into the role of alcohol consumption in the genetic architecture of hypertension.
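The "joint testing of main genetic effects and SNV-alcohol interactions" described here amounts, in essence, to a 2-degree-of-freedom test on the SNV and SNV×alcohol coefficients of a regression of blood pressure on genotype, drinking status, and their product. A minimal sketch on simulated data (the cohort, effect sizes, and variable names are all invented for illustration; the study's actual pipeline is far more elaborate):

```python
import numpy as np

# Simulated cohort; genotype frequencies and effect sizes are invented.
rng = np.random.default_rng(42)
n = 2_000
snp = rng.binomial(2, 0.3, n).astype(float)      # minor-allele count (0/1/2)
alcohol = rng.binomial(1, 0.4, n).astype(float)  # drinker indicator
bp = 120 + 0.5 * snp + 2.0 * alcohol + 0.8 * snp * alcohol + rng.normal(0, 10, n)

# Ordinary least squares: intercept, SNP main effect, alcohol, interaction.
X = np.column_stack([np.ones(n), snp, alcohol, snp * alcohol])
beta, *_ = np.linalg.lstsq(X, bp, rcond=None)
resid = bp - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)

# Joint 2-df Wald statistic on the SNP main effect (column 1) and the
# SNP x alcohol interaction (column 3); ~chi-square with 2 df under the null.
idx = np.array([1, 3])
b = beta[idx]
wald = b @ np.linalg.solve(cov[np.ix_(idx, idx)], b)
print(wald)
```

The joint test gains power when a variant acts mainly through the exposure, which is the rationale for the two-stage design described in the abstract.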
1,218 citations
••
Christina Fitzmaurice1,2, Tomi Akinyemiju3, Faris Lami4 +172 more•Institutions (95)
901 citations
••
University of Cambridge1, Australian National University2, Norwegian Institute of Public Health3, Utrecht University4, University of Tromsø5, The George Institute for Global Health6, Johns Hopkins University7, University of Oxford8, National Institutes of Health9, University of Copenhagen10, Copenhagen University Hospital11, Fiona Stanley Hospital12, University of Western Australia13, Harry Perkins Institute of Medical Research14, University of London15, Lund University16, University of Pittsburgh17, French Institute of Health and Medical Research18, University College London19, University of Ulm20, Technische Universität München21, University of Padua22, University of Southampton23, German Cancer Research Center24, Erasmus University Medical Center25, Umeå University26, Cardiff University27, Greifswald University Hospital28, Aarhus University29, Portland State University30, University of New South Wales31, National and Kapodistrian University of Athens32, Harvard University33, University of Hawaii34, Columbia University35, University of Iowa36, Duke University37, Yamagata University38, Tuskegee University39, University of Helsinki40, University of Oulu41, Medical University of South Carolina42, University of Washington43, Kaiser Permanente44, University of Groningen45, University of Granada46, Yale University47, Prevention Institute48, University of Edinburgh49, Uppsala University50, Basque Government51, Kyushu University52, Royal Prince Alfred Hospital53, Harokopio University54, University of California, San Diego55, VU University Medical Center56, Aalborg University57, University of Eastern Finland58, Laval University59, University of Vermont60, Wake Forest University61, Wake Forest Baptist Medical Center62, Kanazawa Medical University63, Baker IDI Heart and Diabetes Institute64, Heidelberg University65, Istituto Superiore di Sanità66, Pasteur Institute67, City College of New York68, Howard University69, University of Glasgow70, International Agency for Research on Cancer71, University of Bristol72, University of Auckland73
TL;DR: In current drinkers of alcohol in high-income countries, the threshold for lowest risk of all-cause mortality was about 100 g/week; the data support limits for alcohol consumption that are lower than those recommended in most current guidelines.
711 citations
••
TL;DR: The second Gaia data release (DR2) contains very precise astrometric and photometric properties for more than one billion sources, astrophysical parameters for dozens of millions, radial velocities for millions, variability information for half a million stars from selected variability classes, and orbits for thousands of solar system objects.
Abstract: Context. The second Gaia data release (DR2) contains very precise astrometric and photometric properties for more than one billion sources, astrophysical parameters for dozens of millions, radial velocities for millions, variability information for half a million stars from selected variability classes, and orbits for thousands of solar system objects.Aims. Before the catalogue was published, these data have undergone dedicated validation processes. The goal of this paper is to describe the validation results in terms of completeness, accuracy, and precision of the various Gaia DR2 data.Methods. The validation processes include a systematic analysis of the catalogue content to detect anomalies, either individual errors or statistical properties, using statistical analysis and comparisons to external data or to models.Results. Although the astrometric, photometric, and spectroscopic data are of unprecedented quality and quantity, it is shown that the data cannot be used without dedicated attention to the limitations described here, in the catalogue documentation and in accompanying papers. We place special emphasis on the caveats for the statistical use of the data in scientific exploitation. In particular, we discuss the quality filters and the consideration of the properties, systematics, and uncertainties from astrometry to astrophysical parameters, together with the various selection functions.
690 citations
••
TL;DR: In this article, a 2D/3D-based hybrid perovskite solar cells (HPSCs) with the orthorhombic a-axis in the out-of-plane direction were shown to achieve a power conversion efficiency of 9.0% in planar p-i-n device structure.
Abstract: The low power conversion efficiency (PCE) of tin-based hybrid perovskite solar cells (HPSCs) is mainly attributed to the high background carrier density due to a high density of intrinsic defects such as Sn vacancies and oxidized species (Sn4+) that characterize Sn-based HPSCs. Herein, this study reports on the successful reduction of the background carrier density by more than one order of magnitude by depositing near-single-crystalline formamidinium tin iodide (FASnI3) films with the orthorhombic a-axis in the out-of-plane direction. Using these highly crystalline films, obtained by mixing a very small amount (0.08 m) of layered (2D) Sn perovskite with 0.92 m (3D) FASnI3, for the first time a PCE as high as 9.0% in a planar p–i–n device structure is achieved. These devices display negligible hysteresis and light soaking, as they benefit from very low trap-assisted recombination, low shunt losses, and more efficient charge collection. This represents a 50% improvement in PCE compared to the best reference cell based on a pure FASnI3 film using SnF2 as a reducing agent. Moreover, the 2D/3D-based HPSCs show considerable improved stability due to the enhanced robustness of the perovskite film compared to the reference cell.
670 citations
••
TL;DR: For the first time, specific loci that distinguish between BD and SCZ are discovered and polygenic components underlying multiple symptom dimensions are identified that point to the utility of genetics to inform symptomology and potential treatment.
••
TL;DR: The degree to which group data are able to describe individual participants is quantified, providing evidence that conclusions drawn from aggregated data may be worryingly imprecise and suggesting that literatures in social and medical sciences may overestimate the accuracy of aggregated statistical estimates.
Abstract: Only for ergodic processes will inferences based on group-level data generalize to individual experience or behavior. Because human social and psychological processes typically have an individually variable and time-varying nature, they are unlikely to be ergodic. In this paper, six studies with a repeated-measure design were used for symmetric comparisons of interindividual and intraindividual variation. Our results delineate the potential scope and impact of nonergodic data in human subjects research. Analyses across six samples (with 87-94 participants and an equal number of assessments per participant) showed some degree of agreement in central tendency estimates (mean) between groups and individuals across constructs and data collection paradigms. However, the variance around the expected value was two to four times larger within individuals than within groups. This suggests that literatures in social and medical sciences may overestimate the accuracy of aggregated statistical estimates. This observation could have serious consequences for how we understand the consistency between group and individual correlations, and the generalizability of conclusions between domains. Researchers should explicitly test for equivalence of processes at the individual and group level across the social and medical sciences.
••
Richard A. Klein1, Michelangelo Vianello2, Fred Hasselman3, Byron G. Adams4 +187 more•Institutions (118)
TL;DR: This paper conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings, and found that very little heterogeneity was attributable to the order in which the tasks were performed or whether the task were administered in lab versus online.
Abstract: We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. 
Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
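The "comparable Cohen's d" figures above are standardized mean differences. For reference, the conventional two-group computation (shown with invented example data, not the replication data) is:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    var_a = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Illustrative data only: two small groups differing by 2 units on average.
d = cohens_d([3, 4, 5, 6, 7], [1, 2, 3, 4, 5])
print(round(d, 3))  # 1.265
```

On this scale, the drop from a median d of 0.60 in the original findings to 0.15 in the replications corresponds to a shift from a moderate effect to a small one.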
••
University of Oxford1, Royal Berkshire Hospital2, Clinical Trial Service Unit3, University of Groningen4, McMaster University5, Population Health Research Institute6, Mario Negri Institute for Pharmacological Research7, Quintiles8, Wageningen University and Research Centre9, National Institute for Health Research10, Sorbonne11, National Institutes of Health12
TL;DR: This meta-analysis demonstrated that omega-3 fatty acids had no significant association with fatal or nonfatal coronary heart disease or any major vascular events, and it provides no support for current recommendations for the use of such supplements in people with a history of coronary heart disease.
Abstract: Importance Current guidelines advocate the use of marine-derived omega-3 fatty acids supplements for the prevention of coronary heart disease and major vascular events in people with prior coronary heart disease, but large trials of omega-3 fatty acids have produced conflicting results. Objective To conduct a meta-analysis of all large trials assessing the associations of omega-3 fatty acid supplements with the risk of fatal and nonfatal coronary heart disease and major vascular events in the full study population and prespecified subgroups. Data Sources and Study Selection This meta-analysis included randomized trials that involved at least 500 participants and a treatment duration of at least 1 year and that assessed associations of omega-3 fatty acids with the risk of vascular events. Data Extraction and Synthesis Aggregated study-level data were obtained from 10 large randomized clinical trials. Rate ratios for each trial were synthesized using observed minus expected statistics and variances. Summary rate ratios were estimated by a fixed-effects meta-analysis using 95% confidence intervals for major diseases and 99% confidence intervals for all subgroups. Main Outcomes and Measures The main outcomes included fatal coronary heart disease, nonfatal myocardial infarction, stroke, major vascular events, and all-cause mortality, as well as major vascular events in study population subgroups. Results Of the 77 917 high-risk individuals participating in the 10 trials, 47 803 (61.4%) were men, and the mean age at entry was 64.0 years; the trials lasted a mean of 4.4 years. The associations of treatment with outcomes were assessed on 6273 coronary heart disease events (2695 coronary heart disease deaths and 2276 nonfatal myocardial infarctions) and 12 001 major vascular events. 
Randomization to omega-3 fatty acid supplementation (eicosapentaenoic acid dose range, 226-1800 mg/d) had no significant associations with coronary heart disease death (rate ratio [RR], 0.93; 99% CI, 0.83-1.03; P = .05), nonfatal myocardial infarction (RR, 0.97; 99% CI, 0.87-1.08; P = .43) or any coronary heart disease events (RR, 0.96; 95% CI, 0.90-1.01; P = .12). Neither did randomization to omega-3 fatty acid supplementation have any significant associations with major vascular events (RR, 0.97; 95% CI, 0.93-1.01; P = .10), overall or in any subgroups, including subgroups composed of persons with prior coronary heart disease, diabetes, lipid levels greater than a given cutoff level, or statin use. Conclusions and Relevance This meta-analysis demonstrated that omega-3 fatty acids had no significant association with fatal or nonfatal coronary heart disease or any major vascular events. It provides no support for current recommendations for the use of such supplements in people with a history of coronary heart disease.
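The "observed minus expected statistics and variances" mentioned above describe a classic fixed-effects synthesis in which the summary log rate ratio is Σ(O−E)/ΣV with standard error 1/√ΣV. A sketch with invented per-trial numbers (not data from the ten trials analyzed here):

```python
import math

# Hypothetical per-trial (observed - expected) event counts and variances.
trials = [(-5.0, 50.0), (-3.0, 40.0), (2.0, 30.0)]  # (O - E, V) pairs

sum_oe = sum(oe for oe, _ in trials)
sum_v = sum(v for _, v in trials)

log_rr = sum_oe / sum_v            # summary log rate ratio
se = 1.0 / math.sqrt(sum_v)        # its standard error
rr = math.exp(log_rr)
ci_95 = (math.exp(log_rr - 1.96 * se), math.exp(log_rr + 1.96 * se))
print(round(rr, 3))  # 0.951
```

Widening the interval multiplier from 1.96 to 2.58 gives the 99% confidence intervals the meta-analysis used for subgroup outcomes.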
••
TL;DR: This large, multi-ethnic genome-wide association study identifies 97 loci significantly associated with atrial fibrillation that are enriched for genes involved in cardiac development, electrophysiology, structure and contractile function.
Abstract: Atrial fibrillation (AF) affects more than 33 million individuals worldwide1 and has a complex heritability2. We conducted the largest meta-analysis of genome-wide association studies (GWAS) for AF to date, consisting of more than half a million individuals, including 65,446 with AF. In total, we identified 97 loci significantly associated with AF, including 67 that were novel in a combined-ancestry analysis, and 3 that were novel in a European-specific analysis. We sought to identify AF-associated genes at the GWAS loci by performing RNA-sequencing and expression quantitative trait locus analyses in 101 left atrial samples, the most relevant tissue for AF. We also performed transcriptome-wide analyses that identified 57 AF-associated genes, 42 of which overlap with GWAS loci. The identified loci implicate genes enriched within cardiac developmental, electrophysiological, contractile and structural pathways. These results extend our understanding of the biological pathways underlying AF and may facilitate the development of therapeutics for AF.
••
TL;DR: In patients with infected necrotising pancreatitis, the endoscopic step-up approach was not superior to the surgical step-up approach in reducing major complications or death, and the rate of pancreatic fistulas and length of hospital stay were lower in the endoscopy group.
••
TL;DR: Improvements include the annotation of the context genes, which is now based on a fast BLAST against the prokaryote part of the UniRef90 database, and the improved web-BLAST feature that dynamically loads structural data such as internal cross-linking from UniProt.
Abstract: Interest in secondary metabolites such as RiPPs (ribosomally synthesized and posttranslationally modified peptides) is increasing worldwide. To facilitate the research in this field we have updated our mining web server. BAGEL4 is faster than its predecessor and is now fully independent from ORF-calling. Gene clusters of interest are discovered using the core-peptide database and/or through HMM motifs that are present in associated context genes. The databases used for mining have been updated and extended with literature references and links to UniProt and NCBI. Additionally, we have included automated promoter and terminator prediction and the option to upload RNA expression data, which can be displayed along with the identified clusters. Further improvements include the annotation of the context genes, which is now based on a fast BLAST against the prokaryote part of the UniRef90 database, and the improved web-BLAST feature that dynamically loads structural data such as internal cross-linking from UniProt. Overall, BAGEL4 provides the user with more information through a user-friendly web-interface which simplifies data evaluation. BAGEL4 is freely accessible at http://bagel4.molgenrug.nl.
••
TL;DR: In this paper, a review of more than 60 studies (plus more than 65 studies on P2G) on power and energy models based on simulation and optimization was conducted; based on these, for power systems with up to 95% renewables, the electricity storage size is found to be below 1.5% of the annual demand (in energy terms).
Abstract: A review of more than 60 studies (plus more than 65 studies on P2G) on power and energy models based on simulation and optimization was carried out. Based on these, for power systems with up to 95% renewables, the electricity storage size is found to be below 1.5% of the annual demand (in energy terms), while for 100% renewable energy systems (power, heat, mobility) it can remain below 6% of the annual energy demand. Combining sectors and diverting electricity to another sector can play a large role in reducing the storage size. Among the potential alternatives to satisfy this demand, the global potential of pumped hydro storage (PHS) is not enough, and new technologies with a higher energy density are needed. Hydrogen, with more than 250 times the energy density of PHS, is a potential option to satisfy the storage need. However, the infrastructure changes needed to deal with high hydrogen content and the suitability of salt caverns for its storage can pose limitations for this technology. Power to Gas (P2G) arises as a possible alternative that overcomes both the facilities and the energy-density issues. The global storage requirement would represent only 2% of the global annual natural gas production or 10% of the gas storage facilities (in energy equivalent). The more options considered to deal with intermittent sources, the lower the storage requirement will be. Therefore, future studies aiming to quantify storage needs should focus on the entire energy system, including technology vectors (e.g. Power to Heat, Liquid, Gas, Chemicals), to avoid overestimating the amount of storage needed.
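The energy-density comparison driving this argument follows from gravitational potential energy: a cubic metre of water behind a reservoir head stores only E = ρgh, whereas hydrogen carries about 33.3 kWh per kilogram (lower heating value). A back-of-the-envelope check (the 100 m head is an assumed, typical value; the paper's ~250× figure rests on its own assumptions about storage conditions):

```python
# Gravitational energy density of pumped hydro vs hydrogen's chemical energy.
# The 100 m head is an illustrative assumption; real plants vary widely.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2
HEAD = 100.0         # m, assumed reservoir head

phs_kwh_per_m3 = RHO_WATER * G * HEAD / 3.6e6   # joules -> kWh
h2_kwh_per_kg = 120e6 / 3.6e6                   # lower heating value of H2

print(round(phs_kwh_per_m3, 3))  # 0.273 kWh per m^3 of water
print(round(h2_kwh_per_kg, 1))   # 33.3 kWh per kg of hydrogen
```

The three-orders-of-magnitude gap in per-unit energy content is why the review concludes PHS alone cannot meet the global storage requirement.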
••
TL;DR: Several limitations of species richness as a metric of biodiversity change are summarized, and a set of species turnover indices is shown to provide more information about temporal trends in biodiversity, as these indices reflect how dominance and identity shift in communities over time.
Abstract: Global concern about human impact on biological diversity has triggered an intense research agenda on drivers and consequences of biodiversity change in parallel with international policy seeking to conserve biodiversity and associated ecosystem functions. Quantifying the trends in biodiversity is far from trivial, however, as recently documented by meta-analyses, which report little if any net change in local species richness through time.
Here, we summarise several limitations of species richness as a metric of biodiversity change and show that the expectation of directional species richness trends under changing conditions is invalid. Instead, we illustrate how a set of species turnover indices provide more information content regarding temporal trends in biodiversity, as they reflect how dominance and identity shift in communities over time.
We apply these metrics to three monitoring datasets representing different ecosystem types. In all datasets, nearly complete species turnover occurred, but this was disconnected from any species richness trends. Instead, turnover was strongly influenced by changes in species presence (identities) and dominance (abundances). We further show that these metrics can detect phases of strong compositional shifts in monitoring data and thus identify a different aspect of biodiversity change decoupled from species richness.
Synthesis and applications: Temporal trends in species richness are insufficient to capture key changes in biodiversity in changing environments. In fact, reductions in environmental quality can lead to transient increases in species richness if immigration or extinction has different temporal dynamics. Thus, biodiversity monitoring programmes need to go beyond analyses of trends in richness in favour of more meaningful assessments of biodiversity change.
••
New York City Department of Health and Mental Hygiene1, University of Groningen2, World Health Organization3, Shahid Beheshti University of Medical Sciences and Health Services4, Statens Serum Institut5, California Department of Public Health6, Rio de Janeiro State University7, Post Graduate Institute of Medical Education and Research8, McGill University9, University of Pennsylvania10, Radboud University Nijmegen11, Institut de recherche pour le développement12, University Health Network13, Albert Einstein College of Medicine14, National Institutes of Health15, Centers for Disease Control and Prevention16, University of Colorado Denver17, Centre for Health Protection18, Oswaldo Cruz Foundation19, University of Cape Town20, University of Sydney21, University of Paris22, Médecins Sans Frontières23, University of California, San Francisco24, Emory University25, Brigham and Women's Hospital26, Samsung Medical Center27, Federal University of Rio de Janeiro28, Hofstra University29, New Generation University College30, Karolinska Institutet31, St. Joseph's Healthcare Hamilton32, Sofia Medical University33, Harvard University34, Columbia University35, Cornell University36, University of Texas Health Science Center at Tyler37, Partners In Health38, University of Ulsan39, University of Sassari40, Queen Mary University of London41, The Chinese University of Hong Kong42
TL;DR: Treatment outcomes were significantly better with use of linezolid, later generation fluoroquinolones, bedaquiline, clofazimine, and carbapenems for treatment of multidrug-resistant tuberculosis, and the need for trials to ascertain the optimal combination and duration of these drugs is emphasised.
••
TL;DR: The evolution of smFRET as a key tool for “dynamic structural biology” over the past 22 years is reviewed and the prospects for its use in applications such as biosensing, high-throughput screening, and molecular diagnostics are highlighted.
Abstract: Classical structural biology can only provide static snapshots of biomacromolecules. Single-molecule Forster resonance energy transfer (smFRET) paved the way for studying dynamics in macromolecular structures under biologically relevant conditions. Since its first implementation in 1996, smFRET experiments have confirmed previously hypothesized mechanisms and provided new insights into many fundamental biological processes, such as DNA maintenance and repair, transcription, translation, and membrane transport. We review 22 years of contributions of smFRET to our understanding of basic mechanisms in biochemistry, molecular biology, and structural biology. Additionally, building on current state-of-the-art implementations of smFRET, we highlight possible future directions for smFRET in applications such as biosensing, high-throughput screening, and molecular diagnostics.
••
University of Belgrade1, British Heart Foundation2, National and Kapodistrian University of Athens3, Charité4, St George's, University of London5, VU University Medical Center6, Pierre-and-Marie-Curie University7, Karolinska University Hospital8, University of Groningen9, Cyprus University of Technology10, Academy for Urban School Leadership11, Aarhus University Hospital12, Paris Diderot University13, Keele University14, Utrecht University15, University of Glasgow16, University of Cambridge17, National Institutes of Health18, University Medical Center Groningen19, University of Zurich20
TL;DR: The coexistence of type 2 diabetes mellitus and heart failure (HF), either with reduced (HFrEF) or preserved ejection fraction (HFpEF), is frequent and associated with a higher risk of HF hospitalization, all‐cause and cardiovascular (CV) mortality.
Abstract: The coexistence of type 2 diabetes mellitus (T2DM) and heart failure (HF), either with reduced (HFrEF) or preserved ejection fraction (HFpEF), is frequent (30-40% of patients) and associated with a higher risk of HF hospitalization, all-cause and cardiovascular (CV) mortality. The most important causes of HF in T2DM are coronary artery disease, arterial hypertension and a direct detrimental effect of T2DM on the myocardium. T2DM is often unrecognized in HF patients, and vice versa, which emphasizes the importance of an active search for both disorders in clinical practice. There are no specific limitations to HF treatment in T2DM. Subanalyses of trials addressing HF treatment in the general population have shown that all HF therapies are similarly effective regardless of T2DM. Concerning T2DM treatment in HF patients, most guidelines currently recommend metformin as the first-line choice. Sulphonylureas and insulin have been the traditional second- and third-line therapies, although their safety in HF is equivocal. Neither glucagon-like peptide-1 (GLP-1) receptor agonists nor dipeptidyl peptidase-4 (DPP4) inhibitors reduce the risk for HF hospitalization. Indeed, a DPP4 inhibitor, saxagliptin, has been associated with a higher risk of HF hospitalization. Thiazolidinediones (pioglitazone and rosiglitazone) are contraindicated in patients with (or at risk of) HF. In recent trials, sodium-glucose co-transporter-2 (SGLT2) inhibitors, empagliflozin and canagliflozin, have both shown a significant reduction in HF hospitalization in patients with established CV disease or at risk of CV disease. Several ongoing trials should provide an insight into the effectiveness of SGLT2 inhibitors in patients with HFrEF and HFpEF in the absence of T2DM.
••
University of Glasgow1, University of Birmingham2, University of East Anglia3, University of Oxford4, University of Alcalá5, University of Groningen6, Monash University7, Oslo University Hospital8, Sahlgrenska University Hospital9, University of Oslo10, Baylor University Medical Center11, Hull York Medical School12, St George's, University of London13, University of Gothenburg14
TL;DR: Beta-blockers improve LVEF and prognosis in patients with heart failure in sinus rhythm with a reduced LVEF; similar benefit was observed in the subgroup of patients with LVEF 40-49%. In patients in atrial fibrillation, beta-blockers increased LVEF but did not improve prognosis.
Abstract: Aims: Recent guidelines recommend that patients with heart failure and left ventricular ejection fraction (LVEF) 40-49% should be managed similarly to LVEF ≥ 50%. We investigated the effect of beta-blockers according to LVEF in double-blind, randomized, placebo-controlled trials. Methods and results: Individual patient data meta-analysis of 11 trials, stratified by baseline LVEF and heart rhythm (Clinicaltrials.gov: NCT0083244; PROSPERO: CRD42014010012). Primary outcomes were all-cause mortality and cardiovascular death over 1.3 years median follow-up, with an intention-to-treat analysis. For 14 262 patients in sinus rhythm, median LVEF was 27% (interquartile range 21-33%), including 575 patients with LVEF 40-49% and 244 with LVEF ≥ 50%. Beta-blockers reduced all-cause and cardiovascular mortality compared to placebo in sinus rhythm, an effect that was consistent across LVEF strata, except for those in the small subgroup with LVEF ≥ 50%. For LVEF 40-49%, death occurred in 21/292 [7.2%] randomized to beta-blockers compared to 35/283 [12.4%] with placebo; adjusted hazard ratio (HR) 0.59 [95% confidence interval (CI) 0.34-1.03]. Cardiovascular death occurred in 13/292 [4.5%] with beta-blockers and 26/283 [9.2%] with placebo; adjusted HR 0.48 (95% CI 0.24-0.97). Over a median of 1.0 years following randomization (n = 4601), LVEF increased with beta-blockers in all groups in sinus rhythm except LVEF ≥ 50%. For patients in atrial fibrillation at baseline (n = 3050), beta-blockers increased LVEF when < 50% at baseline, but did not improve prognosis. Conclusion: Beta-blockers improve LVEF and prognosis for patients with heart failure in sinus rhythm with a reduced LVEF. The data are most robust for LVEF < 40%, but similar benefit was observed in the subgroup of patients with LVEF 40-49%.
••
University of Lorraine1, Michigan State University2, University of California, Berkeley3, Duke University4, University of Bordeaux5, Leibniz Association6, Max Planck Society7, University of Lausanne8, University of Tennessee9, Oak Ridge National Laboratory10, Cornell University11, Eötvös Loránd University12, University of Turin13, Fujian Agriculture and Forestry University14, University of Groningen15, Helmholtz Centre for Environmental Research - UFZ16
TL;DR: A particular focus is placed on the understanding of BFI within complex microbial communities and in regard of the metaorganism concept, as well as recent discoveries that clarify the (molecular) mechanisms involved in bacterial-fungal relationships.
Abstract: Fungi and bacteria are found living together in a wide variety of environments. Their interactions are significant drivers of many ecosystem functions and are important for the health of plants and animals. A large number of fungal and bacterial families engage in complex interactions that lead to critical behavioural shifts of the microorganisms ranging from mutualism to antagonism. The importance of bacterial-fungal interactions (BFI) in environmental science, medicine and biotechnology has led to the emergence of a dynamic and multidisciplinary research field that combines highly diverse approaches including molecular biology, genomics, geochemistry, chemical and microbial ecology, biophysics and ecological modelling. In this review, we discuss recent advances that underscore the roles of BFI across relevant habitats and ecosystems. A particular focus is placed on the understanding of BFI within complex microbial communities and in regard of the metaorganism concept. We also discuss recent discoveries that clarify the (molecular) mechanisms involved in bacterial-fungal relationships, and the contribution of new technologies to decipher generic principles of BFI in terms of physical associations and molecular dialogues. Finally, we discuss future directions for research in order to stimulate synergy within the BFI research area and to resolve outstanding questions.
••
TL;DR: Efficient terahertz harmonic generation—challenging but important for ultrahigh-speed optoelectronic technologies—is demonstrated in graphene through a nonlinear process that could potentially be generalized to other materials.
Abstract: Multiple optical harmonic generation—the multiplication of photon energy as a result of nonlinear interaction between light and matter—is a key technology in modern electronics and optoelectronics, because it allows the conversion of optical or electronic signals into signals with much higher frequency, and the generation of frequency combs. Owing to the unique electronic band structure of graphene, which features massless Dirac fermions1–3, it has been repeatedly predicted that optical harmonic generation in graphene should be particularly efficient at the technologically important terahertz frequencies4–6. However, these predictions have yet to be confirmed experimentally under technologically relevant operation conditions. Here we report the generation of terahertz harmonics up to the seventh order in single-layer graphene at room temperature and under ambient conditions, driven by terahertz fields of only tens of kilovolts per centimetre, and with field conversion efficiencies in excess of 10−3, 10−4 and 10−5 for the third, fifth and seventh terahertz harmonics, respectively. These conversion efficiencies are remarkably high, given that the electromagnetic interaction occurs in a single atomic layer. The key to such extremely efficient generation of terahertz high harmonics in graphene is the collective thermal response of its background Dirac electrons to the driving terahertz fields. The terahertz harmonics, generated via hot Dirac fermion dynamics, were observed directly in the time domain as electromagnetic field oscillations at these newly synthesized higher frequencies. The effective nonlinear optical coefficients of graphene for the third, fifth and seventh harmonics exceed the respective nonlinear coefficients of typical solids by 7–18 orders of magnitude7–9. 
Our results provide a direct pathway to highly efficient terahertz frequency synthesis using the present generation of graphene electronics, which operate at much lower fundamental frequencies of only a few hundred gigahertz. Efficient terahertz harmonic generation—challenging but important for ultrahigh-speed optoelectronic technologies—is demonstrated in graphene through a nonlinear process that could potentially be generalized to other materials.
••
Florence Demenais1, Florence Demenais2, Patricia Margaritte-Jeannin2, Patricia Margaritte-Jeannin1 +213 more•Institutions (79)
TL;DR: A meta-analysis of GWAS studies for asthma from multiancestral cohorts identifies five new loci and finds that the asthma-associated loci are enriched near enhancer marks in immune cells, suggesting a major role of these loci in the regulation of immunologically related mechanisms.
Abstract: We examined common variation in asthma risk by conducting a meta-analysis of worldwide asthma genome-wide association studies (23,948 asthma cases, 118,538 controls) of individuals from ethnically diverse populations. We identified five new asthma loci, found two new associations at two known asthma loci, established asthma associations at two loci previously implicated in the comorbidity of asthma plus hay fever, and confirmed nine known loci. Investigation of pleiotropy showed large overlaps in genetic variants with autoimmune and inflammatory diseases. The enrichment in enhancer marks at asthma risk loci, especially in immune cells, suggested a major role of these loci in the regulation of immunologically related mechanisms.
••
TL;DR: This article provides a very basic introduction to MCMC sampling, and describes what MCMC is, and what it can be used for, with simple illustrative examples.
Abstract: Markov chain Monte Carlo (MCMC) is an increasingly popular method for obtaining information about distributions, especially for estimating posterior distributions in Bayesian inference. This article provides a very basic introduction to MCMC sampling. It describes what MCMC is, and what it can be used for, with simple illustrative examples. Highlighted are some of the benefits and limitations of MCMC sampling, as well as different approaches to circumventing the limitations most likely to trouble cognitive scientists.
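The kind of sampler this tutorial introduces can be sketched in a few lines. The minimal random-walk Metropolis implementation below is a generic illustration (not code from the article); it draws samples from a target density known only up to a normalizing constant, here a standard normal:

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=42):
    """Minimal random-walk Metropolis sampler for a 1-D target density.

    log_target: log of the (unnormalized) target density.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)            # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_alpha:          # accept with prob min(1, alpha)
            x = proposal
        samples.append(x)                               # chain keeps current state either way
    return samples

# Target: standard normal, log density -x^2/2 (constant dropped).
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20_000)
burned = draws[5_000:]                                  # discard burn-in
mean = sum(burned) / len(burned)                        # ≈ 0 for this target
```

The accept/reject step is what makes the chain's stationary distribution the target: proposals uphill in density are always taken, downhill proposals only with probability equal to the density ratio, exactly the limitation-prone behavior (burn-in, autocorrelation) the article discusses.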
••
University of Groningen1, PSL Research University2, École centrale de Lyon3, University of Padua4, Delft University of Technology5, Imperial College London6, Luleå University of Technology7, IMT Institute for Advanced Studies Lucca8, Technical University of Denmark9, University of Southampton10, University of Cape Town11, École Polytechnique Fédérale de Lausanne12, Aarhus University13, King's College London14, Hamburg University of Technology15, Czech Technical University in Prague16, Instituto Politécnico Nacional17, Polish Academy of Sciences18, University of Turin19, University of Trento20, Queen Mary University of London21, Saarland University22
TL;DR: This review summarizes recent advances in the area of tribology based on the outcome of a Lorentz Center workshop surveying various physical, chemical and mechanical phenomena across scales, and proposes some research directions.
••
TL;DR: A simple and effective synthesis route is reported to transform a small molecule of biological origin, thioctic acid, into a high-performance supramolecular polymeric material, which combines processability, ultrahigh stretchability, rapid self-healing ability, and reusable adhesivity to surfaces.
Abstract: Polymeric materials with integrated functionalities are required to match their ever-expanding practical applications, but there is always a trade-off between complex material performances and synthetic simplification. A simple and effective synthesis route is reported to transform a small molecule of biological origin, thioctic acid, into a high-performance supramolecular polymeric material, which combines processability, ultrahigh stretchability, rapid self-healing ability, and reusable adhesivity to surfaces. The proposed one-step preparation process of this material involves the mixing of three commercially available feedstocks at mild temperature without any external solvent and a subsequent cooling process that resulted in a dynamic, high-density, and dry supramolecular polymeric network cross-linked by three different types of dynamic chemical bonds, whose cooperative effects in the network enable high performance of this supramolecular polymeric material.
••
Columbia University1, University of Freiburg2, Delft University of Technology3, University of Düsseldorf4, Clemson University5, University of Sheffield6, Aarhus University7, Ludwig Maximilian University of Munich8, Stony Brook University9, University of California, Irvine10, University of Groningen11, University of Ulm12, Wageningen University and Research Centre13, Johns Hopkins University14, North Carolina State University15, Dresden University of Technology16, Katholieke Universiteit Leuven17, University of Hasselt18, University of Lübeck19, University of Oxford20, Seoul National University21, Kaiserslautern University of Technology22, University of Mainz23, Institute of Molecular Biotechnology24, Arizona State University25, University of Zurich26, Braunschweig University of Technology27
TL;DR: A multi-laboratory study finds that single-molecule FRET is a reproducible and reliable approach for determining accurate distances in dye-labeled DNA duplexes.
Abstract: Single-molecule Forster resonance energy transfer (smFRET) is increasingly being used to determine distances, structures, and dynamics of biomolecules in vitro and in vivo. However, generalized protocols and FRET standards to ensure the reproducibility and accuracy of measurements of FRET efficiencies are currently lacking. Here we report the results of a comparative blind study in which 20 labs determined the FRET efficiencies (E) of several dye-labeled DNA duplexes. Using a unified, straightforward method, we obtained FRET efficiencies with s.d. between ±0.02 and ±0.05. We suggest experimental and computational procedures for converting FRET efficiencies into accurate distances, and discuss potential uncertainties in the experiment and the modeling. Our quantitative assessment of the reproducibility of intensity-based smFRET measurements and a unified correction procedure represents an important step toward the validation of distance networks, with the ultimate aim of achieving reliable structural models of biomolecular systems by smFRET-based hybrid methods.
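The conversion from FRET efficiency E to distance that such studies rely on is the standard Förster relation, E = 1 / (1 + (r/R0)^6). The sketch below inverts it; the Förster radius value is illustrative only, not a parameter from this study:

```python
def fret_distance(E: float, R0: float) -> float:
    """Invert the Förster relation E = 1 / (1 + (r/R0)**6) for the distance r."""
    if not 0.0 < E < 1.0:
        raise ValueError("FRET efficiency must lie strictly between 0 and 1")
    return R0 * ((1.0 - E) / E) ** (1.0 / 6.0)

# Illustrative Förster radius R0 = 6.0 nm (order of magnitude typical for
# common dye pairs; an assumption, not from the study). By definition,
# E = 0.5 corresponds to r = R0.
r = fret_distance(0.5, R0=6.0)   # 6.0 nm
```

The sixth-power dependence is why the ±0.02 to ±0.05 spread in measured efficiencies reported above translates into usefully small distance uncertainties near r ≈ R0, where the E(r) curve is steepest.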