
Showing papers from "University of Warwick" published in 2016


Journal ArticleDOI
18 Oct 2016-PeerJ
TL;DR: VSEARCH is here shown to be more accurate than USEARCH when performing searching, clustering, chimera detection and subsampling, while on a par with USEARCH for paired-end read merging and dereplication.
Abstract: Background: VSEARCH is an open source and free of charge multithreaded 64-bit tool for processing and preparing metagenomics, genomics and population genomics nucleotide sequence data. It is designed as an alternative to the widely used USEARCH tool (Edgar, 2010) for which the source code is not publicly available, algorithm details are only rudimentarily described, and only a memory-confined 32-bit version is freely available for academic use. Methods: When searching nucleotide sequences, VSEARCH uses a fast heuristic based on words shared by the query and target sequences in order to quickly identify similar sequences; a similar strategy is probably used in USEARCH. VSEARCH then performs optimal global sequence alignment of the query against potential target sequences, using full dynamic programming instead of the seed-and-extend heuristic used by USEARCH. Pairwise alignments are computed in parallel using vectorisation and multiple threads. Results: VSEARCH includes most commands for analysing nucleotide sequences available in USEARCH version 7 and several of those available in USEARCH version 8, including searching (exact or based on global alignment), clustering by similarity (using length pre-sorting, abundance pre-sorting or a user-defined order), chimera detection (reference-based or de novo), dereplication (full length or prefix), pairwise alignment, reverse complementation, sorting, and subsampling. VSEARCH also includes commands for FASTQ file processing, i.e., format detection, filtering, read quality statistics, and merging of paired reads. Furthermore, VSEARCH extends functionality with several new commands and improvements, including shuffling, rereplication, masking of low-complexity sequences with the well-known DUST algorithm, a choice among different similarity definitions, and FASTQ file format conversion.
VSEARCH is here shown to be more accurate than USEARCH when performing searching, clustering, chimera detection and subsampling, while on a par with USEARCH for paired-end read merging. VSEARCH is slower than USEARCH when performing clustering and chimera detection, but significantly faster when performing paired-end read merging and dereplication. VSEARCH is available at https://github.com/torognes/vsearch under either the BSD 2-clause license or the GNU General Public License version 3.0. Discussion: VSEARCH has been shown to be a fast, accurate and full-fledged alternative to USEARCH. A free and open-source versatile tool for sequence analysis is now available to the metagenomics community.
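Full-length dereplication, one of the commands benchmarked above, amounts to collapsing identical sequences while tracking their abundances. A minimal Python sketch (the function name and output format are illustrative only, not VSEARCH's actual implementation):

```python
from collections import Counter

def dereplicate_full_length(sequences):
    """Collapse identical sequences into (sequence, abundance) pairs,
    ordered by decreasing abundance, as dereplication tools typically report."""
    counts = Counter(sequences)
    return sorted(counts.items(), key=lambda kv: -kv[1])

reads = ["ACGT", "ACGT", "TTGA", "ACGT", "TTGA", "GGCC"]
for seq, abundance in dereplicate_full_length(reads):
    print(f"{seq}\tsize={abundance}")
```

VSEARCH performs this operation in parallel over large FASTA/FASTQ inputs; the sketch only illustrates the operation's semantics.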

5,850 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
Theo Vos1, Christine Allen1, Megha Arora1, Ryan M Barber1  +696 moreInstitutions (260)
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) as discussed by the authors was used to estimate the incidence, prevalence, and years lived with disability for diseases and injuries at the global, regional, and national scale over the period of 1990 to 2015.

5,050 citations


Journal ArticleDOI
Haidong Wang1, Mohsen Naghavi1, Christine Allen1, Ryan M Barber1  +841 moreInstitutions (293)
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.

4,804 citations


Journal ArticleDOI
TL;DR: It is found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%.
Abstract: The most widely used task functional magnetic resonance imaging (fMRI) analyses use parametric statistical methods that depend on a variety of assumptions. In this work, we use real resting-state data and a total of 3 million random task group analyses to compute empirical familywise error rates for the fMRI software packages SPM, FSL, and AFNI, as well as a nonparametric permutation method. For a nominal familywise error rate of 5%, the parametric statistical methods are shown to be conservative for voxelwise inference and invalid for clusterwise inference. Our results suggest that the principal cause of the invalid cluster inferences is spatial autocorrelation functions that do not follow the assumed Gaussian shape. By comparison, the nonparametric permutation test is found to produce nominal results for voxelwise as well as clusterwise inference. These findings speak to the need of validating the statistical methods being used in the field of neuroimaging.
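The nonparametric permutation approach that the study found to produce nominal error rates can be illustrated with a toy two-sample test: group labels are repeatedly re-randomised to build an empirical null distribution for the test statistic. This sketch omits the spatial, cluster-level machinery used on real fMRI data:

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=10000, seed=0):
    """Two-sample permutation test on the absolute difference of means."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-randomise the group assignment
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            exceed += 1
    # add-one correction keeps the p-value strictly positive
    return (exceed + 1) / (n_perm + 1)
```

Because the null distribution is built from the data itself, no Gaussian shape assumption is needed; that is precisely the assumption the paper shows to be violated for cluster-level parametric inference.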

2,946 citations


Journal ArticleDOI
John Allison1, K. Amako2, John Apostolakis3, Pedro Arce4, Makoto Asai5, Tsukasa Aso6, Enrico Bagli, Alexander Bagulya7, Sw. Banerjee8, G. Barrand9, B. R. Beck10, Alexey Bogdanov11, D. Brandt, Jeremy M. C. Brown12, Helmut Burkhardt3, Ph Canal8, D. Cano-Ott4, Stephane Chauvie, Kyung-Suk Cho13, G.A.P. Cirrone14, Gene Cooperman15, M. A. Cortés-Giraldo16, G. Cosmo3, Giacomo Cuttone14, G.O. Depaola17, Laurent Desorgher, X. Dong15, Andrea Dotti5, Victor Daniel Elvira8, Gunter Folger3, Ziad Francis18, A. Galoyan19, L. Garnier9, M. Gayer3, K. Genser8, Vladimir Grichine3, Vladimir Grichine7, Susanna Guatelli20, Susanna Guatelli21, Paul Gueye22, P. Gumplinger23, Alexander Howard24, Ivana Hřivnáčová9, S. Hwang13, Sebastien Incerti25, Sebastien Incerti26, A. Ivanchenko3, Vladimir Ivanchenko3, F.W. Jones23, S. Y. Jun8, Pekka Kaitaniemi27, Nicolas A. Karakatsanis28, Nicolas A. Karakatsanis29, M. Karamitrosi30, M.H. Kelsey5, Akinori Kimura31, Tatsumi Koi5, Hisaya Kurashige32, A. Lechner3, S. B. Lee33, Francesco Longo34, M. Maire, Davide Mancusi, A. Mantero, E. Mendoza4, B. Morgan35, K. Murakami2, T. Nikitina3, Luciano Pandola14, P. Paprocki3, J Perl5, Ivan Petrović36, Maria Grazia Pia, W. Pokorski3, J. M. Quesada16, M. Raine, Maria A.M. Reis37, Alberto Ribon3, A. Ristic Fira36, Francesco Romano14, Giorgio Ivan Russo14, Giovanni Santin38, Takashi Sasaki2, D. Sawkey39, J. I. Shin33, Igor Strakovsky40, A. Taborda37, Satoshi Tanaka41, B. Tome, Toshiyuki Toshito, H.N. Tran42, Pete Truscott, L. Urbán, V. V. Uzhinsky19, Jerome Verbeke10, M. Verderi43, B. Wendt44, H. Wenzel8, D. H. Wright5, Douglas Wright10, T. Yamashita, J. Yarba8, H. Yoshida45 
TL;DR: Geant4 as discussed by the authors is a software toolkit for the simulation of the passage of particles through matter, which is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection.
Abstract: Geant4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection. Over the past several years, major changes have been made to the toolkit in order to accommodate the needs of these user communities, and to efficiently exploit the growth of computing power made available by advances in technology. The adaptation of Geant4 to multithreading, advances in physics, detector modeling and visualization, extensions to the toolkit, including biasing and reverse Monte Carlo, and tools for physics and release validation are discussed here.

2,260 citations


Journal ArticleDOI
Nicholas J Kassebaum1, Megha Arora1, Ryan M Barber1, Zulfiqar A Bhutta2  +679 moreInstitutions (268)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.

1,533 citations


Journal ArticleDOI
TL;DR: Zoledronic acid showed no evidence of survival improvement and should not be part of standard of care for this population of men; no heterogeneity in treatment effect across prespecified subsets was found.

1,502 citations


Journal ArticleDOI
TL;DR: This review assessed the effectiveness and cost-effectiveness of exercise-based CR (exercise training alone or in combination with psychosocial or educational interventions), compared with usual care, on mortality, morbidity and HRQoL in patients with CHD.
Abstract: Background Coronary heart disease (CHD) is the most common cause of death globally. However, with falling CHD mortality rates, an increasing number of people living with CHD may need support to manage their symptoms and prognosis. Exercise‐based cardiac rehabilitation (CR) aims to improve the health and outcomes of people with CHD. This is an update of a Cochrane Review previously published in 2016. Objectives To assess the clinical effectiveness and cost‐effectiveness of exercise‐based CR (exercise training alone or in combination with psychosocial or educational interventions) compared with 'no exercise' control, on mortality, morbidity and health‐related quality of life (HRQoL) in people with CHD. Search methods We updated searches from the previous Cochrane Review, by searching CENTRAL, MEDLINE, Embase, and two other databases in September 2020. We also searched two clinical trials registers in June 2021. Selection criteria We included randomised controlled trials (RCTs) of exercise‐based interventions with at least six months’ follow‐up, compared with 'no exercise' control. The study population comprised adult men and women who have had a myocardial infarction (MI), coronary artery bypass graft (CABG) or percutaneous coronary intervention (PCI), or have angina pectoris, or coronary artery disease. Data collection and analysis We screened all identified references, extracted data and assessed risk of bias according to Cochrane methods. We stratified meta‐analysis by duration of follow‐up: short‐term (6 to 12 months); medium‐term (> 12 to 36 months); and long‐term ( > 3 years), and used meta‐regression to explore potential treatment effect modifiers. We used GRADE for primary outcomes at 6 to 12 months (the most common follow‐up time point). Main results This review included 85 trials which randomised 23,430 people with CHD. This latest update identified 22 new trials (7795 participants). 
The population included predominantly post‐MI and post‐revascularisation patients, with a mean age ranging from 47 to 77 years. In the last decade, the median percentage of women with CHD has increased from 11% to 17%, but females still account for a similarly small percentage of participants recruited overall ( < 15%). Twenty‐one of the included trials were performed in low‐ and middle‐income countries (LMICs). Overall trial reporting was poor, although there was evidence of an improvement in quality over the last decade. The median longest follow‐up time was 12 months (range 6 months to 19 years). At short‐term follow‐up (6 to 12 months), exercise‐based CR likely results in a slight reduction in all‐cause mortality (risk ratio (RR) 0.87, 95% confidence interval (CI) 0.73 to 1.04; 25 trials; moderate certainty evidence), a large reduction in MI (RR 0.72, 95% CI 0.55 to 0.93; 22 trials; number needed to treat for an additional beneficial outcome (NNTB) 75, 95% CI 47 to 298; high certainty evidence), and a large reduction in all‐cause hospitalisation (RR 0.58, 95% CI 0.43 to 0.77; 14 trials; NNTB 12, 95% CI 9 to 21; moderate certainty evidence). Exercise‐based CR likely results in little to no difference in risk of cardiovascular mortality (RR 0.88, 95% CI 0.68 to 1.14; 15 trials; moderate certainty evidence), CABG (RR 0.99, 95% CI 0.78 to 1.27; 20 trials; high certainty evidence), and PCI (RR 0.86, 95% CI 0.63 to 1.19; 13 trials; moderate certainty evidence) up to 12 months' follow‐up. We are uncertain about the effects of exercise‐based CR on cardiovascular hospitalisation, with a wide confidence interval including considerable benefit as well as harm (RR 0.80, 95% CI 0.41 to 1.59; low certainty evidence). There was evidence of substantial heterogeneity across trials for cardiovascular hospitalisations (I2 = 53%), and of small study bias for all‐cause hospitalisation, but not for all other outcomes. 
At medium‐term follow‐up, although there may be little to no difference in all‐cause mortality (RR 0.90, 95% CI 0.80 to 1.02; 15 trials), MI (RR 1.07, 95% CI 0.91 to 1.27; 12 trials), PCI (RR 0.96, 95% CI 0.69 to 1.35; 6 trials), CABG (RR 0.97, 95% CI 0.77 to 1.23; 9 trials), and all‐cause hospitalisation (RR 0.92, 95% CI 0.82 to 1.03; 9 trials), a large reduction in cardiovascular mortality was found (RR 0.77, 95% CI 0.63 to 0.93; 5 trials). Evidence is uncertain for difference in risk of cardiovascular hospitalisation (RR 0.92, 95% CI 0.76 to 1.12; 3 trials). At long‐term follow‐up, although there may be little to no difference in all‐cause mortality (RR 0.91, 95% CI 0.75 to 1.10), exercise‐based CR may result in a large reduction in cardiovascular mortality (RR 0.58, 95% CI 0.43 to 0.78; 8 trials) and MI (RR 0.67, 95% CI 0.50 to 0.90; 10 trials). Evidence is uncertain for CABG (RR 0.66, 95% CI 0.34 to 1.27; 4 trials), and PCI (RR 0.76, 95% CI 0.48 to 1.20; 3 trials). Meta‐regression showed benefits in outcomes were independent of CHD case mix, type of CR, exercise dose, follow‐up length, publication year, CR setting, study location, sample size or risk of bias. There was evidence that exercise‐based CR may slightly increase HRQoL across several subscales (SF‐36 mental component, physical functioning, physical performance, general health, vitality, social functioning and mental health scores) up to 12 months' follow‐up; however, these may not be clinically important differences. The eight trial‐based economic evaluation studies showed exercise‐based CR to be a potentially cost‐effective use of resources in terms of gain in quality‐adjusted life years (QALYs). 
Authors' conclusions This updated Cochrane Review supports the conclusions of the previous version, that exercise‐based CR provides important benefits to people with CHD, including reduced risk of MI, a likely small reduction in all‐cause mortality, and a large reduction in all‐cause hospitalisation, along with associated healthcare costs, and improved HRQoL up to 12 months' follow‐up. Over longer‐term follow‐up, benefits may include reductions in cardiovascular mortality and MI. In the last decade, trials were more likely to include females, and be undertaken in LMICs, increasing the generalisability of findings. Well‐designed, adequately‐reported RCTs of CR in people with CHD more representative of usual clinical practice are still needed. Trials should explicitly report clinical outcomes, including mortality and hospital admissions, and include validated HRQoL outcome measures, especially over longer‐term follow‐up, and assess costs and cost‐effectiveness.
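The NNTB figures quoted above follow from simple arithmetic on the risk ratio: the absolute risk reduction equals the control-group event risk multiplied by (1 − RR), and NNTB is its reciprocal. A small sketch (the control-group risk below is an assumed, illustrative value chosen to reproduce the review's NNTB of 75 for MI, not a number reported by the review):

```python
def nntb(control_risk, risk_ratio):
    """Number needed to treat for one additional beneficial outcome,
    given the control-group event risk and the treatment risk ratio."""
    absolute_risk_reduction = control_risk * (1 - risk_ratio)
    return 1 / absolute_risk_reduction

# Assumed ~4.8% control-group MI risk with the reported RR of 0.72:
print(round(nntb(0.0476, 0.72)))  # -> 75
```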

1,444 citations


Journal ArticleDOI
TL;DR: It is confirmed that exercise-based CR reduces cardiovascular mortality and provides important data showing reductions in hospital admissions and improvements in quality of life.

1,213 citations


Journal ArticleDOI
TL;DR: This paper examined the impact of Chinese import competition on broad measures of technical change (patenting, IT, and TFP) using new panel data across twelve European countries from 1996 to 2007 and found that the absolute volume of innovation increases within the firms most affected by Chinese imports in their output markets.
Abstract: We examine the impact of Chinese import competition on broad measures of technical change—patenting, IT, and TFP—using new panel data across twelve European countries from 1996 to 2007. In particular, we establish that the absolute volume of innovation increases within the firms most affected by Chinese imports in their output markets. We correct for endogeneity using the removal of product-specific quotas following China's entry into the World Trade Organization in 2001. Chinese import competition led to increased technical change within firms and reallocated employment between firms towards more technologically advanced firms. These within and between effects were about equal in magnitude, and account for 14% of European technology upgrading over 2000–7 (and even more when we allow for offshoring to China). Rising Chinese import competition also led to falls in employment and the share of unskilled workers. In contrast to imports from low-wage nations such as China, imports from developed countries had no significant effect on innovation.

Journal ArticleDOI
Aysu Okbay1, Jonathan P. Beauchamp2, Mark Alan Fontana3, James J. Lee4  +293 moreInstitutions (81)
26 May 2016-Nature
TL;DR: In this article, the results of a genome-wide association study (GWAS) for educational attainment were reported, showing that single-nucleotide polymorphisms associated with educational attainment disproportionately occur in genomic regions regulating gene expression in the fetal brain.
Abstract: Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.

Journal ArticleDOI
TL;DR: A Spatially Constrained Convolutional Neural Network (SC-CNN) to perform nucleus detection and a novel Neighboring Ensemble Predictor (NEP) coupled with CNN to more accurately predict the class label of detected cell nuclei are proposed.
Abstract: Detection and classification of cell nuclei in histopathology images of cancerous tissue stained with the standard hematoxylin and eosin stain is a challenging task due to cellular heterogeneity. Deep learning approaches have been shown to produce encouraging results on histopathology images in various studies. In this paper, we propose a Spatially Constrained Convolutional Neural Network (SC-CNN) to perform nucleus detection. SC-CNN regresses the likelihood of a pixel being the center of a nucleus, where high probability values are spatially constrained to locate in the vicinity of the centers of nuclei. For classification of nuclei, we propose a novel Neighboring Ensemble Predictor (NEP) coupled with CNN to more accurately predict the class label of detected cell nuclei. The proposed approaches for detection and classification do not require segmentation of nuclei. We have evaluated them on a large dataset of colorectal adenocarcinoma images, consisting of more than 20,000 annotated nuclei belonging to four different classes. Our results show that the joint detection and classification of the proposed SC-CNN and NEP produces the highest average F1 score as compared to other recently published approaches. Prospectively, the proposed methods could offer benefit to pathology practice in terms of quantitative analysis of tissue constituents in whole-slide images, and potentially lead to a better understanding of cancer.

Journal ArticleDOI
TL;DR: The Brain Imaging Data Structure (BIDS) is developed, a standard for organizing and describing MRI datasets that uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
Abstract: The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
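As a concrete illustration of what BIDS standardises, a minimal dataset layout and a simplified check of its naming convention might look like the sketch below (the subject and task labels are invented, and the regular expression captures only a small fragment of the full specification):

```python
import re

# Hypothetical minimal BIDS-style dataset
example_files = [
    "dataset_description.json",
    "participants.tsv",
    "sub-01/anat/sub-01_T1w.nii.gz",
    "sub-01/func/sub-01_task-rest_bold.nii.gz",
    "sub-01/func/sub-01_task-rest_bold.json",
]

# Simplified subject-level pattern:
# sub-<label>/<datatype>/sub-<label>[_key-value ...]_<suffix>.<ext>
pattern = re.compile(
    r"sub-(?P<sub>[a-zA-Z0-9]+)/(anat|func)/"
    r"sub-(?P=sub)(_[a-z]+-[a-zA-Z0-9]+)*_(T1w|bold)\.(nii\.gz|json)$"
)

for path in example_files:
    kind = "imaging file" if pattern.match(path) else "top-level metadata"
    print(f"{path}: {kind}")
```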

Journal ArticleDOI
TL;DR: Women who had midwife-led continuity models of care were less likely to experience regional analgesia, more likely to experience spontaneous vaginal birth, and more likely to be attended at birth by a known midwife; the trial evidence for all primary outcomes was graded as high quality.
Abstract: Background Midwives are primary providers of care for childbearing women around the world. However, there is a lack of synthesised information to establish whether there are differences in morbidity and mortality, effectiveness and psychosocial outcomes between midwife-led continuity models and other models of care. Objectives To compare midwife-led continuity models of care with other models of care for childbearing women and their infants. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (25 January 2016) and reference lists of retrieved studies. Selection criteria All published and unpublished trials in which pregnant women are randomly allocated to midwife-led continuity models of care or other models of care during pregnancy and birth. Data collection and analysis Two review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. The quality of the evidence was assessed using the GRADE approach. Main results We included 15 trials involving 17,674 women. We assessed the quality of the trial evidence for all primary outcomes (i.e. regional analgesia (epidural/spinal), caesarean birth, instrumental vaginal birth (forceps/vacuum), spontaneous vaginal birth, intact perineum, preterm birth (less than 37 weeks), and all fetal loss before and after 24 weeks plus neonatal death) using the GRADE methodology: all primary outcomes were graded as of high quality.
For the primary outcomes, women who had midwife-led continuity models of care were less likely to experience regional analgesia (average risk ratio (RR) 0.85, 95% confidence interval (CI) 0.78 to 0.92; participants = 17,674; studies = 14; high quality), instrumental vaginal birth (average RR 0.90, 95% CI 0.83 to 0.97; participants = 17,501; studies = 13; high quality), preterm birth less than 37 weeks (average RR 0.76, 95% CI 0.64 to 0.91; participants = 13,238; studies = 8; high quality) and all fetal loss before and after 24 weeks plus neonatal death (average RR 0.84, 95% CI 0.71 to 0.99; participants = 17,561; studies = 13; high quality evidence). Women who had midwife-led continuity models of care were more likely to experience spontaneous vaginal birth (average RR 1.05, 95% CI 1.03 to 1.07; participants = 16,687; studies = 12; high quality). There were no differences between groups for caesarean births or intact perineum. For the secondary outcomes, women who had midwife-led continuity models of care were less likely to experience amniotomy (average RR 0.80, 95% CI 0.66 to 0.98; participants = 3253; studies = 4), episiotomy (average RR 0.84, 95% CI 0.77 to 0.92; participants = 17,674; studies = 14) and fetal loss less than 24 weeks and neonatal death (average RR 0.81, 95% CI 0.67 to 0.98; participants = 15,645; studies = 11).
Women who had midwife-led continuity models of care were more likely to experience no intrapartum analgesia/anaesthesia (average RR 1.21, 95% CI 1.06 to 1.37; participants = 10,499; studies = 7), have a longer mean length of labour (hours) (mean difference (MD) 0.50, 95% CI 0.27 to 0.74; participants = 3328; studies = 3) and more likely to be attended at birth by a known midwife (average RR 7.04, 95% CI 4.48 to 11.08; participants = 6917; studies = 7). There were no differences between groups for fetal loss equal to/after 24 weeks and neonatal death, induction of labour, antenatal hospitalisation, antepartum haemorrhage, augmentation/artificial oxytocin during labour, opiate analgesia, perineal laceration requiring suturing, postpartum haemorrhage, breastfeeding initiation, low birthweight infant, five-minute Apgar score less than or equal to seven, neonatal convulsions, admission of infant to special care or neonatal intensive care unit(s), or in mean length of neonatal hospital stay (days). Due to a lack of consistency in measuring women's satisfaction and assessing the cost of various maternity models, these outcomes were reported narratively. The majority of included studies reported a higher rate of maternal satisfaction in midwife-led continuity models of care. Similarly, there was a trend towards a cost-saving effect for midwife-led continuity care compared to other care models. Authors' conclusions This review suggests that women who received midwife-led continuity models of care were less likely to experience intervention and more likely to be satisfied with their care, with at least comparable adverse outcomes for women or their infants, than women who received other models of care. Further research is needed to explore findings of fewer preterm births and fewer fetal deaths less than 24 weeks, and all fetal loss/neonatal death associated with midwife-led continuity models of care.

Journal ArticleDOI
TL;DR: In this paper, the authors conducted genome-wide association studies of three phenotypes: subjective well-being (n = 298,420), depressive symptoms (n = 161,460), and neuroticism (n = 170,911).
Abstract: Very few genetic variants have been associated with depression and neuroticism, likely because of limitations on sample size in previous studies. Subjective well-being, a phenotype that is genetically correlated with both of these traits, has not yet been studied with genome-wide data. We conducted genome-wide association studies of three phenotypes: subjective well-being (n = 298,420), depressive symptoms (n = 161,460), and neuroticism (n = 170,911). We identify 3 variants associated with subjective well-being, 2 variants associated with depressive symptoms, and 11 variants associated with neuroticism, including 2 inversion polymorphisms. The two loci associated with depressive symptoms replicate in an independent depression sample. Joint analyses that exploit the high genetic correlations between the phenotypes (|ρ^| ≈ 0.8) strengthen the overall credibility of the findings and allow us to identify additional variants. Across our phenotypes, loci regulating expression in central nervous system and adrenal or pancreas tissues are strongly enriched for association.

Journal ArticleDOI
TL;DR: A comprehensive survey of molecular communication (MC) through a communication engineering lens is provided in this paper, which includes different components of the MC transmitter and receiver, as well as the propagation and transport mechanisms.
Abstract: With much advancement in the fields of nanotechnology, bioengineering, and synthetic biology over the past decade, microscale and nanoscale devices are becoming a reality. Yet engineering a reliable communication system between tiny devices remains an open problem. At the same time, despite the prevalence of radio communication, there are still areas where traditional electromagnetic waves find it difficult or expensive to reach. Points of interest in industry, cities, and medical applications often lie in embedded and entrenched areas, accessible only by ventricles at scales too small for conventional radio waves and microwaves, or they are located in such a way that directional high-frequency systems are ineffective. Inspired by nature, one solution to these problems is molecular communication (MC), where chemical signals are used to transfer information. Although biologists have studied MC for decades, it has only been researched through a communication engineering lens for roughly 10 years. A significant number of papers have been published to date, but owing to the need for interdisciplinary work, many of the results are preliminary. In this survey, the recent advancements in the field of MC engineering are highlighted. First, the biological, chemical, and physical processes used by an MC system are discussed. This includes different components of the MC transmitter and receiver, as well as the propagation and transport mechanisms. Then, a comprehensive survey of some of the recent works on MC through a communication engineering lens is provided. The survey ends with a technology readiness analysis of MC and future research directions.
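In the simplest setting, the propagation mechanism discussed above is free diffusion: an impulsive release of Q molecules from a point source gives concentration c(d, t) = Q/(4πDt)^{3/2} · exp(−d²/(4Dt)) at distance d and time t, peaking at t = d²/(6D). A Python sketch of this channel impulse response (the diffusion coefficient and distance below are illustrative, not taken from any particular MC system):

```python
import math

def diffusion_concentration(q, diff, dist, t):
    """Concentration (molecules per m^3) at distance dist (m) and time t (s)
    after an impulsive release of q molecules into free 3-D diffusion
    with diffusion coefficient diff (m^2/s)."""
    if t <= 0:
        return 0.0
    return q / (4 * math.pi * diff * t) ** 1.5 * math.exp(-dist ** 2 / (4 * diff * t))

# Illustrative numbers: a small-molecule-like diffusion coefficient and a
# micrometre-scale link; the received pulse peaks at t = dist^2 / (6 * diff).
D, d = 1e-10, 1e-6
t_peak = d ** 2 / (6 * D)
```

The slow rise and long tail of this impulse response are what make inter-symbol interference a central design issue in diffusion-based MC.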

Journal ArticleDOI
TL;DR: The 2016 Warwick Agreement on femoroacetabular impingement syndrome was convened to build an international, multidisciplinary consensus on the diagnosis and management of patients with FAI syndrome.
Abstract: The 2016 Warwick Agreement on femoroacetabular impingement (FAI) syndrome was convened to build an international, multidisciplinary consensus on the diagnosis and management of patients with FAI syndrome. 22 panel members and 1 patient from 9 countries and 5 different specialties participated in a 1-day consensus meeting on 29 June 2016. Prior to the meeting, 6 questions were agreed on, and recent relevant systematic reviews and seminal literature were circulated. Panel members gave presentations on the topics of the agreed questions at Sports Hip 2016, an open meeting held in the UK on 27–29 June. Presentations were followed by open discussion. At the 1-day consensus meeting, panel members developed statements in response to each question through open discussion; members then scored their level of agreement with each response on a scale of 0–10. Substantial agreement (range 9.5–10) was reached for each of the 6 consensus questions, and the associated terminology was agreed on. The term ‘femoroacetabular impingement syndrome’ was introduced to reflect the central role of patients' symptoms in the disorder. To reach a diagnosis, patients should have appropriate symptoms, positive clinical signs and imaging findings. Suitable treatments are conservative care, rehabilitation, and arthroscopic or open surgery. Current understanding of prognosis and topics for future research were discussed. The 2016 Warwick Agreement on FAI syndrome is an international multidisciplinary agreement on the diagnosis, treatment principles and key terminology relating to FAI syndrome.
The Warwick Agreement on femoroacetabular impingement syndrome has been endorsed by the following 25 clinical societies: American Medical Society for Sports Medicine (AMSSM), Association of Chartered Physiotherapists in Sports and Exercise Medicine (ACPSEM), Australasian College of Sports and Exercise Physicians (ACSEP), Austrian Sports Physiotherapists, British Association of Sports and Exercise Medicine (BASEM), British Association of Sport Rehabilitators and Trainers (BASRaT), Canadian Academy of Sport and Exercise Medicine (CASEM), Danish Society of Sports Physical Therapy (DSSF), European College of Sports and Exercise Physicians (ECOSEP), European Society of Sports Traumatology, Knee Surgery and Arthroscopy (ESSKA), Finnish Sports Physiotherapist Association (SUFT), German-Austrian-Swiss Society for Orthopaedic Traumatologic Sports Medicine (GOTS), International Federation of Sports Physical Therapy (IFSPT), International Society for Hip Arthroscopy (ISHA), Gruppo di Interesse Specialistico dell’A.I.F.I., Norwegian Association of Sports Medicine and Physical Activity (NIMF), Norwegian Sports Physiotherapy Association (FFI), Society of Sports Therapists (SST), South African Sports Medicine Association (SASMA), Sports Medicine Australia (SMA), Sports Doctors Australia (SDrA), Sports Physiotherapy New Zealand (SPNZ), Swedish Society of Exercise and Sports Medicine (SFAIM), Swiss Society of Sports Medicine (SGMS/SGSM), Swiss Sports Physiotherapy Association (SSPA).

Journal ArticleDOI
TL;DR: In this article, the authors quantified maternal mortality throughout the world by underlying cause and age from 1990 to 2015 for ages 10–54 years, by systematically compiling and processing all available data sources from 186 of 195 countries and territories.

Journal ArticleDOI
Haidong Wang, Zulfiqar A Bhutta, Matthew M Coates, and 610 more authors (263 institutions)
TL;DR: The Global Burden of Disease 2015 Study provides an analytical framework to comprehensively assess trends for under-5 mortality, age-specific and cause-specific mortality among children under 5 years, and stillbirths by geography over time and decomposed the changes in under-5 mortality to changes in SDI at the global level.

Journal ArticleDOI
04 Mar 2016-PLOS ONE
TL;DR: The study shows that rumours that are ultimately proven true tend to be resolved faster than those that turn out to be false, and reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumours.
Abstract: As breaking news unfolds people increasingly rely on social media to stay abreast of the latest updates. The use of social media in such situations comes with the caveat that new information being released piecemeal may encourage rumours, many of which remain unverified long after their point of release. Little is known, however, about the dynamics of the life cycle of a social media rumour. In this paper we present a methodology that has enabled us to collect, identify and annotate a dataset of 330 rumour threads (4,842 tweets) associated with 9 newsworthy events. We analyse this dataset to understand how users spread, support, or deny rumours that are later proven true or false, by distinguishing two levels of status in a rumour life cycle, i.e., before and after its veracity status is resolved. The identification of rumours associated with each event, as well as the tweet that resolved each rumour as true or false, was performed by journalist members of the research team who tracked the events in real time. Our study shows that rumours that are ultimately proven true tend to be resolved faster than those that turn out to be false. Whilst one can readily see users denying rumours once they have been debunked, users appear to be less capable of distinguishing true from false rumours when their veracity remains in question. In fact, we show that the prevalent tendency for users is to support every unverified rumour. We also analyse the role of different types of users, finding that highly reputable users such as news organisations endeavour to post well-grounded statements, which appear to be certain and accompanied by evidence. Nevertheless, these often prove to be unverified pieces of information that give rise to false rumours. Our study reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumours.
The findings of our study provide useful insights for achieving this aim.
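The headline comparison, true rumours resolving faster than false ones, can be reproduced on any dataset annotated this way by comparing median delays between a thread's first tweet and the tweet that resolved it. A toy sketch (the field names and numbers are invented for illustration; the paper's dataset relies on the journalists' annotations):

```python
from statistics import median

def median_resolution_minutes(threads, veracity):
    """Median delay (minutes) from a thread's first tweet to its resolving
    tweet, restricted to threads with the given veracity label."""
    delays = [t["resolved_min"] - t["first_min"]
              for t in threads if t["veracity"] == veracity]
    return median(delays)

# Hypothetical annotated threads, not real data from the 330-thread corpus.
threads = [
    {"veracity": "true", "first_min": 0, "resolved_min": 120},
    {"veracity": "true", "first_min": 10, "resolved_min": 200},
    {"veracity": "false", "first_min": 0, "resolved_min": 900},
    {"veracity": "false", "first_min": 5, "resolved_min": 1500},
]
```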

Journal ArticleDOI
TL;DR: In this paper, the authors argue that the ability to predict and manage the function of these highly complex, dynamically changing communities is limited, and that close coordination of experimental data collection and method development with mathematical model building is needed to achieve significant progress in understanding of microbial dynamics and function.
Abstract: The importance of microbial communities (MCs) cannot be overstated. MCs underpin the biogeochemical cycles of the earth’s soil, oceans and the atmosphere, and perform ecosystem functions that impact plants, animals and humans. Yet our ability to predict and manage the function of these highly complex, dynamically changing communities is limited. Building predictive models that link MC composition to function is a key emerging challenge in microbial ecology. Here, we argue that addressing this challenge requires close coordination of experimental data collection and method development with mathematical model building. We discuss specific examples where model–experiment integration has already resulted in important insights into MC function and structure. We also highlight key research questions that still demand better integration of experiments and models. We argue that such integration is needed to achieve significant progress in our understanding of MC dynamics and function, and we make specific practical suggestions as to how this could be achieved.
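One widely used family of models for linking composition to dynamics in this model–experiment loop is the generalised Lotka–Volterra (gLV) system, dx_i/dt = x_i(r_i + Σ_j a_ij x_j), with intrinsic growth rates r_i and pairwise interaction coefficients a_ij. A minimal forward-Euler sketch in Python (parameter values are illustrative, not fitted to any real community):

```python
def glv_step(x, r, a, dt):
    """One forward-Euler step of the generalised Lotka-Volterra model:
    dx_i/dt = x_i * (r_i + sum_j a_ij * x_j)."""
    n = len(x)
    return [x[i] + dt * x[i] * (r[i] + sum(a[i][j] * x[j] for j in range(n)))
            for i in range(n)]

def simulate(x0, r, a, dt=0.01, steps=5000):
    """Integrate the gLV model from abundances x0 for steps * dt time units."""
    x = list(x0)
    for _ in range(steps):
        x = glv_step(x, r, a, dt)
    return x
```

With one species, r = [1.0] and a = [[-1.0]], this reduces to logistic growth and the abundance settles at the carrying capacity r/|a| = 1; off-diagonal a_ij terms encode competition or cross-feeding between taxa.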

Journal ArticleDOI
Haidong Wang, Timothy M. Wolock, Austin Carter, Grant Nguyen, and 497 more authors (214 institutions)
TL;DR: This report provides national estimates of levels and trends of HIV/AIDS incidence, prevalence, coverage of antiretroviral therapy (ART), and mortality for 195 countries and territories from 1980 to 2015.

Journal ArticleDOI
TL;DR: This Timeline article highlights key milestones in the 50-year history of EBV and discusses how this virus provides a paradigm for exploiting insights at the molecular level in the diagnosis, treatment and prevention of cancer.
Abstract: It is more than 50 years since the Epstein-Barr virus (EBV), the first human tumour virus, was discovered. EBV has subsequently been found to be associated with a diverse range of tumours of both lymphoid and epithelial origin. Progress in the molecular analysis of EBV has revealed fundamental mechanisms of more general relevance to the oncogenic process. This Timeline article highlights key milestones in the 50-year history of EBV and discusses how this virus provides a paradigm for exploiting insights at the molecular level in the diagnosis, treatment and prevention of cancer.

Journal ArticleDOI
TL;DR: This paper addresses two pressing questions related to ALE meta-analysis, showing first that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative.
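FWE correction of the kind compared here is commonly implemented with the permutation distribution of the maximum statistic: controlling the maximum across all voxels controls the probability of any false positive, so the (1 − α) quantile of per-permutation maxima gives a corrected voxel-level threshold. A generic sketch under a Gaussian null (an illustration of the principle only, not the ALE implementation):

```python
import random

def fwe_threshold(n_perm, n_voxels, alpha=0.05, seed=1):
    """Voxel-level FWE-corrected threshold estimated as the (1 - alpha)
    quantile of the permutation distribution of the maximum statistic."""
    rng = random.Random(seed)
    max_stats = sorted(max(rng.gauss(0.0, 1.0) for _ in range(n_voxels))
                       for _ in range(n_perm))
    return max_stats[int((1 - alpha) * n_perm)]
```

Cluster-level FWE correction works the same way but records the maximum cluster size (or mass) per permutation instead of the maximum voxel value, which is why it is more sensitive to spatially extended effects.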

Journal ArticleDOI
TL;DR: The European Society of Cardiology Heart Failure Long‐Term Registry (ESC‐HF‐LT‐R) was set up with the aim of describing the clinical epidemiology and the 1‐year outcomes of patients with heart failure with the added intention of comparing differences between countries.
Abstract: Aims The European Society of Cardiology Heart Failure Long-Term Registry (ESC-HF-LT-R) was set up with the aim of describing the clinical epidemiology and the 1-year outcomes of patients with heart failure (HF) with the added intention of comparing differences between participating countries. Methods and results The ESC-HF-LT-R is a prospective, observational registry contributed to by 211 cardiology centres in 21 European and/or Mediterranean countries, all being member countries of the ESC. Between May 2011 and April 2013 it collected data on 12 440 patients, 40.5% of them hospitalized with acute HF (AHF) and 59.5% outpatients with chronic HF (CHF). The all-cause 1-year mortality rate was 23.6% for AHF and 6.4% for CHF. The combined endpoint of mortality or HF hospitalization within 1 year had a rate of 36% for AHF and 14.5% for CHF. All-cause mortality rates in the different regions ranged from 21.6% to 36.5% in patients with AHF, and from 6.9% to 15.6% in those with CHF. These differences in mortality between regions are thought to reflect differences in the characteristics and/or management of these patients. Conclusion The ESC-HF-LT-R shows that 1-year all-cause mortality of patients with AHF is still high while the mortality of CHF is lower. This registry provides the opportunity to evaluate the management and outcomes of patients with HF and identify areas for improvement.

Journal ArticleDOI
TL;DR: In this article, the authors report CsSnI3 perovskite photovoltaic devices without a hole-selective interfacial layer that exhibit a stability 10 times greater than devices with the same architecture using methylammonium lead iodide perovskite, and the highest efficiency to date for a CsSnI3 photovoltaic: 3.56%.
Abstract: Photovoltaics based on tin halide perovskites have not yet benefitted from the same intensive research effort that has propelled lead perovskite photovoltaics to >20% power conversion efficiency, due to the susceptibility of tin perovskites to oxidation, the low energy of defect formation and the difficulty in forming pinhole-free films. Here we report CsSnI3 perovskite photovoltaic devices without a hole-selective interfacial layer that exhibit a stability 10 times greater than devices with the same architecture using methylammonium lead iodide perovskite, and the highest efficiency to date for a CsSnI3 photovoltaic: 3.56%. The latter results in large part from a high device fill-factor, achieved using a strategy that removes the need for an electron blocking layer or an additional processing step to minimise the pinhole density in the perovskite film, based on co-depositing the perovskite precursors with SnCl2. These two findings raise the prospect that this class of lead-free perovskite photovoltaic may yet prove viable for applications.
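The role the authors give to the fill factor follows from the standard definition of power conversion efficiency, PCE = (Voc × Jsc × FF) / P_in. A quick sketch (the parameter values below are illustrative and are not the device parameters reported in the paper):

```python
def power_conversion_efficiency(voc, jsc, ff, p_in=100.0):
    """PCE in percent from open-circuit voltage voc (V), short-circuit
    current density jsc (mA/cm^2), fill factor ff (0-1), and incident
    power p_in (mW/cm^2; ~100 for the standard AM1.5G spectrum)."""
    return 100.0 * voc * jsc * ff / p_in

# Hypothetical cell: improving the fill factor alone raises the efficiency.
low_ff = power_conversion_efficiency(0.5, 10.0, 0.5)   # 2.5 %
high_ff = power_conversion_efficiency(0.5, 10.0, 0.7)  # 3.5 %
```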

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, and 2828 more authors (191 institutions)
TL;DR: In this article, the performance of the ATLAS muon identification and reconstruction was evaluated using the first LHC dataset recorded at √s = 13 TeV in 2015 and compared to Monte Carlo simulations.
Abstract: This article documents the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at √s = 13 TeV in 2015. Using a large sample of J/ψ→μμ and Z→μμ decays from 3.2 fb−1 of pp collision data, measurements of the reconstruction efficiency, as well as of the momentum scale and resolution, are presented and compared to Monte Carlo simulations. The reconstruction efficiency is measured to be close to 99% over most of the covered phase space (|η| < 2.5 and 5 < pT < 100 GeV). For muons with |η| > 2.2, the pT resolution for muons from Z→μμ decays is 2.9%, while the precision of the momentum scale for low-pT muons from J/ψ→μμ decays is about 0.2%.
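Efficiencies like the ~99% quoted here are typically extracted with a tag-and-probe method: one muon in a Z→μμ or J/ψ→μμ candidate is tightly identified (the tag) and the other leg (the probe) tests whether reconstruction succeeded, giving ε = N_pass/N_probe with a binomial uncertainty. A simplified sketch that ignores background subtraction and migration effects (counts are invented):

```python
import math

def tag_and_probe_efficiency(n_pass, n_probe):
    """Reconstruction efficiency and its binomial uncertainty
    from passing and total probe counts."""
    eff = n_pass / n_probe
    err = math.sqrt(eff * (1.0 - eff) / n_probe)
    return eff, err

# Invented counts roughly matching a ~99% efficiency measurement.
eff, err = tag_and_probe_efficiency(9900, 10_000)
```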

Journal ArticleDOI
TL;DR: The authors discuss how some seed characteristics that serve as adaptive responses to the natural environment are not suitable for agriculture, and ways in which basic plant science could be applied to enhance seed performance in crop production.
Abstract: Seeds are central to crop production, human nutrition, and food security. A key component of the performance of crop seeds is the complex trait of seed vigour. Crop yield and resource use efficiency depend on successful plant establishment in the field, and it is the vigour of seeds that defines their ability to germinate and establish seedlings rapidly, uniformly, and robustly across diverse environmental conditions. Improving vigour to enhance the critical and yield-defining stage of crop establishment remains a primary objective of the agricultural industry and the seed/breeding companies that support it. Our knowledge of the regulation of seed germination has developed greatly in recent times, yet understanding of the basis of variation in vigour and therefore seed performance during the establishment of crops remains limited. Here we consider seed vigour at an ecophysiological, molecular, and biomechanical level. We discuss how some seed characteristics that serve as adaptive responses to the natural environment are not suitable for agriculture. Past domestication has provided incremental improvements, but further actively directed change is required to produce seeds with the characteristics required both now and in the future. We discuss ways in which basic plant science could be applied to enhance seed performance in crop production.

Journal ArticleDOI
TL;DR: It is argued that, to deal with this “Now-or-Never” bottleneck, the brain must compress and recode linguistic input as rapidly as possible, which implies that language acquisition is learning to process, rather than inducing, a grammar.
Abstract: Memory is fleeting. New material rapidly obliterates previous material. How, then, can the brain deal successfully with the continual deluge of linguistic input? We argue that, to deal with this "Now-or-Never" bottleneck, the brain must compress and recode linguistic input as rapidly as possible. This observation has strong implications for the nature of language processing: (1) the language system must "eagerly" recode and compress linguistic input; (2) as the bottleneck recurs at each new representational level, the language system must build a multilevel linguistic representation; and (3) the language system must deploy all available information predictively to ensure that local linguistic ambiguities are dealt with "Right-First-Time"; once the original input is lost, there is no way for the language system to recover. This is "Chunk-and-Pass" processing. Similarly, language learning must also occur in the here and now, which implies that language acquisition is learning to process, rather than inducing, a grammar. Moreover, this perspective provides a cognitive foundation for grammaticalization and other aspects of language change. Chunk-and-Pass processing also helps explain a variety of core properties of language, including its multilevel representational structure and duality of patterning. This approach promises to create a direct relationship between psycholinguistics and linguistic theory. More generally, we outline a framework within which to integrate often disconnected inquiries into language processing, language acquisition, and language change and evolution.