
Showing papers in "PLOS ONE in 2016"


Journal ArticleDOI
06 Jul 2016-PLOS ONE
TL;DR: CKD has a consistently high estimated global prevalence of between 11 and 13%, with the majority at stage 3; future research should evaluate intervention strategies deliverable at scale to delay the progression of CKD and improve CVD outcomes.
Abstract: Chronic kidney disease (CKD) is a global health burden with a high economic cost to health systems and is an independent risk factor for cardiovascular disease (CVD). All stages of CKD are associated with increased risks of cardiovascular morbidity, premature mortality, and/or decreased quality of life. CKD is usually asymptomatic until later stages, and accurate prevalence data are lacking. We therefore sought to determine the prevalence of CKD globally, by stage, geographical location, gender and age. A systematic review and meta-analysis of observational studies estimating CKD prevalence in general populations was conducted through literature searches in 8 databases. We assessed pooled data using a random effects model. Of 5,842 potential articles, 100 studies of diverse quality were included, comprising 6,908,440 patients. Global mean (95% CI) CKD prevalence across all 5 stages was 13.4% (11.7-15.1%), and for stages 3-5 was 10.6% (9.2-12.2%). Weighting by study quality did not affect prevalence estimates. CKD prevalence by stage was: Stage 1 (eGFR >90 + ACR >30): 3.5% (2.8-4.2%); Stage 2 (eGFR 60-89 + ACR >30): 3.9% (2.7-5.3%); Stage 3 (eGFR 30-59): 7.6% (6.4-8.9%); Stage 4 (eGFR 15-29): 0.4% (0.3-0.5%); and Stage 5 (eGFR <15): 0.1% (0.1-0.1%). CKD has a consistently high estimated global prevalence of between 11 and 13%, with the majority at stage 3. Future research should evaluate intervention strategies deliverable at scale to delay the progression of CKD and improve CVD outcomes.
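The stage definitions quoted in the abstract can be turned into a small classifier. The sketch below is illustrative only: the function name is mine, and the handling of the eGFR = 90 boundary is a choice (the abstract gives eGFR >90 for Stage 1 and 60-89 for Stage 2).

```python
def ckd_stage(egfr, acr):
    """Classify CKD stage from eGFR (mL/min/1.73 m^2) and ACR (mg/g),
    following the stage definitions quoted in the abstract. Returns
    None when the CKD criteria are not met."""
    if egfr >= 90:
        return 1 if acr > 30 else None  # Stage 1 also requires albuminuria
    if egfr >= 60:
        return 2 if acr > 30 else None  # Stage 2 likewise
    if egfr >= 30:
        return 3                        # Stage 3: eGFR 30-59
    if egfr >= 15:
        return 4                        # Stage 4: eGFR 15-29
    return 5                            # Stage 5: eGFR < 15

assert ckd_stage(95, 40) == 1
assert ckd_stage(45, 0) == 3
```

Note that stages 1 and 2 are only counted as CKD in the presence of albuminuria (ACR >30), which is why preserved-eGFR patients without it return None here.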

2,321 citations


Journal ArticleDOI
03 Oct 2016-PLOS ONE
TL;DR: The objective is to understand the current research topics, challenges and future directions regarding Blockchain technology from the technical perspective, and recommendations on future research directions are provided for researchers.
Abstract: Blockchain is a decentralized transaction and data management technology developed first for the Bitcoin cryptocurrency. Interest in Blockchain technology has been increasing since the idea was coined in 2008. The reason for this interest is Blockchain's central attributes, which provide security, anonymity and data integrity without any third-party organization in control of the transactions; this opens interesting research areas, especially from the perspective of technical challenges and limitations. In this research, we have conducted a systematic mapping study with the goal of collecting all relevant research on Blockchain technology. Our objective is to understand the current research topics, challenges and future directions regarding Blockchain technology from the technical perspective. We have extracted 41 primary papers from scientific databases. The results show that over 80% of the papers focus on the Bitcoin system and less than 20% deal with other Blockchain applications, e.g. smart contracts and licensing. The majority of research focuses on revealing and improving limitations of Blockchain from the privacy and security perspectives, but many of the proposed solutions lack concrete evaluation of their effectiveness. Many other Blockchain challenges related to scalability, including throughput and latency, have been left unstudied. On the basis of this study, recommendations on future research directions are provided for researchers.

1,528 citations


Journal ArticleDOI
05 Feb 2016-PLOS ONE
TL;DR: The use of CS worldwide has increased to unprecedented levels although the gap between higher- and lower-resource settings remains.
Abstract: Background Caesarean section (CS) rates continue to evoke worldwide concern because of their steady increase, the lack of consensus on the appropriate CS rate, and the associated additional short- and long-term risks and costs. We present the latest CS rates and trends over the last 24 years. Methods We collected nationally representative data on CS rates between 1990 and 2014 and calculated regional and subregional weighted averages. We conducted a longitudinal analysis calculating differences in CS rates as absolute change and as the average annual rate of increase (AARI). Results According to the latest data from 150 countries, currently 18.6% of all births occur by CS, ranging from 6% to 27.2% in the least and most developed regions, respectively. Latin America and the Caribbean has the highest CS rate (40.5%), followed by Northern America (32.3%), Oceania (31.1%), Europe (25%), Asia (19.2%) and Africa (7.3%). Based on data from 121 countries, the trend analysis showed that between 1990 and 2014 the global average CS rate increased by 12.4 percentage points (from 6.7% to 19.1%), with an average annual rate of increase of 4.4%. The largest absolute increases occurred in Latin America and the Caribbean (19.4 points, from 22.8% to 42.2%), followed by Asia (15.1, from 4.4% to 19.5%), Oceania (14.1, from 18.5% to 32.6%), Europe (13.8, from 11.2% to 25%), Northern America (10, from 22.3% to 32.3%) and Africa (4.5, from 2.9% to 7.4%). Asia and Northern America were the regions with the highest and lowest average annual rates of increase (6.4% and 1.6%, respectively). Conclusion The use of CS worldwide has increased to unprecedented levels, although the gap between higher- and lower-resource settings remains. The information presented is essential to inform policy and global and regional strategies aimed at optimizing the use of CS.
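The reported average annual rate of increase can be reproduced from the endpoint rates under a compound-growth assumption (my interpretation of AARI; the abstract does not spell out the formula):

```python
def aari(start_rate, end_rate, years):
    """Average annual rate of increase under compound growth: the
    constant yearly growth rate that takes start_rate to end_rate."""
    return (end_rate / start_rate) ** (1.0 / years) - 1.0

# Global CS rate: 6.7% in 1990 to 19.1% in 2014 (24 years)
growth = aari(6.7, 19.1, 24)
assert abs(growth - 0.044) < 0.001   # matches the reported ~4.4% per year
```

The same formula applied to Asia's figures (4.4% to 19.5%) gives roughly 6.4% per year, consistent with the region-level AARI reported above.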

1,461 citations


Journal ArticleDOI
12 Sep 2016-PLOS ONE
TL;DR: It is found that systemic administration of mGluR subtype-specific positive allosteric modulators had opposite effects on dendritic spine densities. These data provide insight into the ability of group I mGluRs to induce structural plasticity in the NAc and demonstrate that the group I mGluRs are capable of producing not just distinct, but opposing, effects on dendritic spine density.
Abstract: The group I metabotropic glutamate receptors (mGluR1a and mGluR5) are important modulators of neuronal structure and function. Although these receptors share common signaling pathways, they are capable of having distinct effects on cellular plasticity. We investigated the individual effects of mGluR1a or mGluR5 activation on dendritic spine density in medium spiny neurons in the nucleus accumbens (NAc), which has become relevant with the potential use of group I mGluR based therapeutics in the treatment of drug addiction. We found that systemic administration of mGluR subtype-specific positive allosteric modulators had opposite effects on dendritic spine densities. Specifically, mGluR5 positive modulation decreased dendritic spine densities in the NAc shell and core, but was without effect in the dorsal striatum, whereas increased spine densities in the NAc were observed with mGluR1a positive modulation. Additionally, direct activation of mGluR5 via CHPG administration into the NAc also decreased the density of dendritic spines. These data provide insight on the ability of group I mGluRs to induce structural plasticity in the NAc and demonstrate that the group I mGluRs are capable of producing not just distinct, but opposing, effects on dendritic spine density.

1,217 citations


Journal ArticleDOI
05 Oct 2016-PLOS ONE
TL;DR: The efficiency and usability of SeqKit enable researchers to rapidly accomplish common FASTA/Q file manipulations and demonstrates competitive performance in execution time and memory usage compared to similar tools.
Abstract: FASTA and FASTQ are basic and ubiquitous formats for storing nucleotide and protein sequences. Common manipulations of FASTA/Q files include converting, searching, filtering, deduplication, splitting, shuffling, and sampling. Existing tools only implement some of these manipulations, not always efficiently, and some are only available for certain operating systems. Furthermore, the complicated installation process of required packages and running environments can render these programs less user friendly. This paper describes a cross-platform, ultrafast, and comprehensive toolkit for FASTA/Q processing. SeqKit provides executable binary files for all major operating systems, including Windows, Linux, and Mac OS X, and can be used directly without any dependencies or pre-configuration. SeqKit demonstrates competitive performance in execution time and memory usage compared to similar tools. The efficiency and usability of SeqKit enable researchers to rapidly accomplish common FASTA/Q file manipulations. SeqKit is open source and available on GitHub at https://github.com/shenwei356/seqkit.
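SeqKit itself is a compiled command-line tool, but the kinds of manipulations listed above can be illustrated in a few lines of Python. This sketch (function names are mine) mimics sequence-level deduplication, one of the operations SeqKit provides via its rmdup command:

```python
def parse_fasta(text):
    """Yield (header, sequence) records from FASTA-formatted text."""
    header, seq = None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:].strip(), []
        elif line.strip():
            seq.append(line.strip())
    if header is not None:
        yield header, "".join(seq)

def dedup_by_seq(records):
    """Keep only the first record seen for each distinct sequence."""
    seen = set()
    for header, seq in records:
        if seq not in seen:
            seen.add(seq)
            yield header, seq

fasta = ">a\nACGT\n>b\nACGT\n>c\nGGTA\n"
unique = list(dedup_by_seq(parse_fasta(fasta)))
assert unique == [("a", "ACGT"), ("c", "GGTA")]
```

A toolkit like SeqKit wins over such ad-hoc scripts on speed, streaming of large compressed files, and having all the listed operations behind one consistent interface.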

1,154 citations


Journal ArticleDOI
27 Oct 2016-PLOS ONE
TL;DR: The presence of at least two distinct haplotypes within samples collected on maize in Nigeria and São Tomé suggests multiple introductions into the African continent.
Abstract: The fall armyworm Spodoptera frugiperda is a prime noctuid pest of maize on the American continents, where it has remained confined despite occasional interceptions by European quarantine services in recent years. The pest has recently become a new invasive species in West and Central Africa, where outbreaks were recorded for the first time in early 2016. The presence of at least two distinct haplotypes within samples collected on maize in Nigeria and São Tomé suggests multiple introductions into the African continent. Implications of this new threat to the maize crop in tropical Africa are briefly discussed.

927 citations


Journal ArticleDOI
08 Jul 2016-PLOS ONE
TL;DR: Poor wellbeing and moderate to high levels of burnout are associated, in the majority of studies reviewed, with poor patient safety outcomes such as medical errors; however, the lack of prospective studies reduces the ability to determine causality.
Abstract: Objective To determine whether there is an association between healthcare professionals' wellbeing and burnout, and patient safety. Design Systematic research review. Data Sources PsycINFO (1806 to July 2015), Medline (1946 to July 2015), Embase (1947 to July 2015) and Scopus (1823 to July 2015) were searched, along with reference lists of eligible articles. Eligibility Criteria for Selecting Studies Quantitative, empirical studies that included i) either a measure of wellbeing or burnout, and ii) patient safety, in healthcare staff populations. Results Forty-six studies were identified. Sixteen out of the 27 studies that measured wellbeing found a significant correlation between poor wellbeing and worse patient safety, with six additional studies finding an association with some but not all scales used, and one study finding a significant association but in the opposite direction to the majority of studies. Twenty-one out of the 30 studies that measured burnout found a significant association between burnout and patient safety, whilst a further four studies found an association between one or more (but not all) subscales of the burnout measures employed and patient safety. Conclusions Poor wellbeing and moderate to high levels of burnout are associated, in the majority of studies reviewed, with poor patient safety outcomes such as medical errors; however, the lack of prospective studies reduces the ability to determine causality. Further prospective studies, research in primary care, research conducted within the UK, and a clearer definition of healthcare staff wellbeing are needed. Implications This review illustrates the need for healthcare organisations to consider improving employees' mental health as well as creating safer work environments when planning interventions to improve patient safety. Systematic Review Registration PROSPERO registration number: CRD42015023340.

914 citations


Journal ArticleDOI
16 Jun 2016-PLOS ONE
TL;DR: Pre-clinical data are provided that could inform clinical trials designed to test the hypothesis that improved outcomes can be achieved for TNBC patients if selection and combination of existing chemotherapies are directed by knowledge of molecular TNBC subtypes.
Abstract: Triple-negative breast cancer (TNBC) is a heterogeneous disease that can be classified into distinct molecular subtypes by gene expression profiling. Considered a difficult-to-treat cancer, a fraction of TNBC patients benefit significantly from neoadjuvant chemotherapy and have far better overall survival. Outside of BRCA1/2 mutation status, biomarkers do not exist to identify patients most likely to respond to current chemotherapy; and, to date, no FDA-approved targeted therapies are available for TNBC patients. Previously, we developed an approach to identify six molecular subtypes of TNBC (TNBCtype), with each subtype displaying unique ontologies and differential response to standard-of-care chemotherapy. Given the complexity of the varying histological landscape of tumor specimens, we used histopathological quantification and laser-capture microdissection to determine that transcripts in the previously described immunomodulatory (IM) and mesenchymal stem-like (MSL) subtypes were contributed from infiltrating lymphocytes and tumor-associated stromal cells, respectively. Therefore, we refined TNBC molecular subtypes from six (TNBCtype) into four (TNBCtype-4) tumor-specific subtypes (BL1, BL2, M and LAR) and demonstrate differences in diagnosis age, grade, local and distant disease progression and histopathology. Using five publicly available, neoadjuvant chemotherapy breast cancer gene expression datasets, we retrospectively evaluated chemotherapy response of over 300 TNBC patients from pretreatment biopsies subtyped using either the intrinsic (PAM50) or TNBCtype approaches. Combined analysis of TNBC patients demonstrated that TNBC subtypes significantly differ in response to similar neoadjuvant chemotherapy, with 41% of BL1 patients achieving a pathological complete response compared to 18% for BL2 and 29% for LAR, with 95% confidence intervals (CIs; [33, 51], [9, 28], [17, 41], respectively). Collectively, we provide pre-clinical data that could inform clinical trials designed to test the hypothesis that improved outcomes can be achieved for TNBC patients if selection and combination of existing chemotherapies are directed by knowledge of molecular TNBC subtypes.

874 citations


Journal ArticleDOI
25 Jan 2016-PLOS ONE
TL;DR: The higher BP in SSA is maintained over decades, suggesting limited efficacy of prevention strategies for this group in Europe, and the lower BP in Muslim populations suggests that as-yet untapped lifestyle and behavioral habits may reveal advantages against the development of hypertension.
Abstract: Background: People of Sub-Saharan African (SSA) and South Asian (SA) ethnic minorities living in Europe have a higher risk of stroke than native Europeans (EU). The study objective is to provide an assessment of gender-specific absolute differences in office systolic (SBP) and diastolic (DBP) blood pressure (BP) levels between SSA, SA, and EU. Methods and Findings: We performed a systematic review and meta-analysis of observational studies conducted in Europe that examined BP in non-selected adult SSA, SA and EU subjects. Medline, PubMed, Embase, Web of Science, and Scopus were searched from their inception through January 31st 2015 for relevant articles. Outcome measures were mean SBP and DBP differences between minorities and EU, using a random effects model and tested for heterogeneity. Twenty-one studies involving 9,070 SSA, 18,421 SA, and 130,380 EU were included. Compared with EU, SSA had higher values of both SBP (3.38 mmHg, 95% CI 1.28 to 5.48 mmHg; and 6.00 mmHg, 95% CI 2.22 to 9.78 in men and women respectively) and DBP (3.29 mmHg, 95% CI 1.80 to 4.78; 5.35 mmHg, 95% CI 3.04 to 7.66). SA had lower SBP than EU (-4.57 mmHg, 95% CI -6.20 to -2.93; -2.97 mmHg, 95% CI -5.45 to -0.49) but similar DBP values. Meta-analysis by subgroup showed that SA originating from countries where Islam is the main religion had lower SBP and DBP values than EU. In multivariate meta-regression analyses, the SBP difference between minorities and EU populations was influenced by panethnicity and diabetes prevalence. Conclusions: 1) The higher BP in SSA is maintained over decades, suggesting limited efficacy of prevention strategies for this group in Europe; 2) the lower BP in Muslim populations suggests that as-yet untapped lifestyle and behavioral habits may reveal advantages against the development of hypertension; 3) the additive effect of diabetes emphasizes the need for new strategies for the control of hypertension in groups with a high prevalence of diabetes.
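The pooling step described ("using a random effects model") is commonly done with the DerSimonian-Laird estimator. Below is a minimal sketch; the three input effects and variances are illustrative numbers, not data from the paper:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling of study effects (DerSimonian-Laird):
    returns the pooled effect and its 95% confidence interval."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Random-effects weights shrink toward equality as tau^2 grows
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical SBP differences (mmHg) with their variances
est, (lo, hi) = dersimonian_laird([3.0, 4.5, 2.0], [1.0, 2.0, 1.5])
assert lo < est < hi
```

When between-study heterogeneity (tau^2) is zero, this reduces to fixed-effect inverse-variance pooling; otherwise the weights are flattened toward equality across studies.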

792 citations


Journal ArticleDOI
15 Mar 2016-PLOS ONE
TL;DR: A framework for defining pilot and feasibility studies focusing on studies conducted in preparation for a randomised controlled trial is described, suggesting that to facilitate their identification, these studies should be clearly identified using the terms ‘feasibility’ or ‘pilot’ as appropriate.
Abstract: We describe a framework for defining pilot and feasibility studies focusing on studies conducted in preparation for a randomised controlled trial. To develop the framework, we undertook a Delphi survey; ran an open meeting at a trial methodology conference; conducted a review of definitions outside the health research context; consulted experts at an international consensus meeting; and reviewed 27 empirical pilot or feasibility studies. We initially adopted mutually exclusive definitions of pilot and feasibility studies. However, some Delphi survey respondents and the majority of open meeting attendees disagreed with the idea of mutually exclusive definitions. Their viewpoint was supported by definitions outside the health research context, the use of the terms 'pilot' and 'feasibility' in the literature, and participants at the international consensus meeting. In our framework, pilot studies are a subset of feasibility studies, rather than the two being mutually exclusive. A feasibility study asks whether something can be done, should we proceed with it, and if so, how. A pilot study asks the same questions but also has a specific design feature: in a pilot study a future study, or part of a future study, is conducted on a smaller scale. We suggest that to facilitate their identification, these studies should be clearly identified using the terms 'feasibility' or 'pilot' as appropriate. This should include feasibility studies that are largely qualitative; we found these difficult to identify in electronic searches because researchers rarely used the term 'feasibility' in the title or abstract of such studies. Investigators should also report appropriate objectives and methods related to feasibility; and give clear confirmation that their study is in preparation for a future randomised controlled trial designed to assess the effect of an intervention.

756 citations


Journal ArticleDOI
19 Apr 2016-PLOS ONE
TL;DR: By publishing the source code and the datasets, this paper aims to provide a new, well-founded basis for unsupervised anomaly detection research, and its evaluation reveals the strengths and weaknesses of the different approaches for the first time.
Abstract: Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection and fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to provide a new, well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks.
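As a concrete illustration of the simplest family typically included in such benchmarks, here is a global k-NN anomaly score (each point's distance to its k-th nearest neighbour). This is a generic sketch in plain Python, not code from the study:

```python
import math

def knn_scores(points, k=2):
    """Unsupervised k-NN anomaly score: each point's distance to its
    k-th nearest neighbour; larger means more anomalous."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(points) if j != i
        )
        scores.append(dists[k - 1])
    return scores

# A tight cluster plus one outlier: the outlier gets the top score
data = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (5, 5)]
scores = knn_scores(data, k=2)
assert scores.index(max(scores)) == 4
```

This is a "global" detector in the paper's terminology; "local" methods such as LOF instead compare each point's neighbourhood density to that of its neighbours, which is exactly the global/local behaviour distinction the evaluation examines.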

Journal ArticleDOI
Liisa M. Pelttari, Sofia Khan, Mikko Vuorela, Johanna I. Kiiski, Sara Vilske, Viivi Nevanlinna, Salla Ranta, Johanna Schleutker, Robert Winqvist, Anne Kallioniemi, Thilo Dörk, Natalia Bogdanova, Jonine Figueroa, Paul D.P. Pharoah, Marjanka K. Schmidt, Alison M. Dunning, Montserrat Garcia-Closas, Manjeet K. Bolla, Joe Dennis, Kyriaki Michailidou, Qin Wang, John L. Hopper, Melissa C. Southey, Efraim H. Rosenberg, Peter A. Fasching, Matthias W. Beckmann, Julian Peto, Isabel dos-Santos-Silva, Elinor J. Sawyer, Ian Tomlinson, Barbara Burwinkel, Harald Surowy, Pascal Guénel, Thérèse Truong, Stig E. Bojesen, Børge G. Nordestgaard, Javier Benitez, Anna González-Neira, Susan L. Neuhausen, Hoda Anton-Culver, Hermann Brenner, Volker Arndt, Alfons Meindl, Rita K. Schmutzler, Hiltrud Brauch, Thomas Brüning, Annika Lindblom, Sara Margolin, Arto Mannermaa, Jaana M. Hartikainen, Georgia Chenevix-Trench, kConFab, AOCS Investigators, Laurien Van Dyck, Hilde Janssen, Jenny Chang-Claude, Anja Rudolph, Paolo Radice, Paolo Peterlongo, Emily Hallberg, Janet E. Olson, Graham G. Giles, Roger L. Milne, Christopher A. Haiman, Fredrick Schumacher, Jacques Simard, Martine Dumont, Vessela N. Kristensen, Anne Lise Børresen-Dale, Wei Zheng, Alicia Beeghly-Fadiel, Mervi Grip, Irene L. Andrulis, Gord Glendon, Peter Devilee, Caroline Seynaeve, Maartje J. Hooning, Margriet Collée, Angela Cox, Simon S. Cross, Mitul Shah, Robert Luben, Ute Hamann, Diana Torres, Anna Jakubowska, Jan Lubinski, Fergus J. Couch, Drakoulis Yannoukakos, Nick Orr, Anthony J. Swerdlow, Hatef Darabi, Jingmei Li, Kamila Czene, Per Hall, Douglas F. Easton, Johanna Mattson, Carl Blomqvist, Kristiina Aittomäki, Heli Nevanlinna
05 May 2016-PLOS ONE
TL;DR: It is suggested that loss-of-function mutations in RAD51B are rare, but common variation at the RAD51B region is significantly associated with familial breast cancer risk.
Abstract: Common variation on 14q24.1, close to RAD51B, has been associated with breast cancer: rs999737 and rs2588809 with the risk of female breast cancer and rs1314913 with the risk of male breast cancer. The aim of this study was to investigate the role of RAD51B variants in breast cancer predisposition, particularly in the context of familial breast cancer in Finland. We sequenced the coding region of RAD51B in 168 Finnish breast cancer patients from the Helsinki region for identification of possible recurrent founder mutations. In addition, we studied the known rs999737, rs2588809, and rs1314913 SNPs and RAD51B haplotypes in 44,791 breast cancer cases and 43,583 controls from 40 studies participating in the Breast Cancer Association Consortium (BCAC) that were genotyped on a custom chip (iCOGS). We identified one putatively pathogenic missense mutation c.541C>T among the Finnish cancer patients and subsequently genotyped the mutation in additional breast cancer cases (n = 5259) and population controls (n = 3586) from Finland and Belarus. No significant association with breast cancer risk was seen in the meta-analysis of the Finnish datasets or in the large BCAC dataset. The association with previously identified risk variants rs999737, rs2588809, and rs1314913 was replicated among all breast cancer cases and also among familial cases in the BCAC dataset. The most significant association was observed for the haplotype carrying the risk-alleles of all three SNPs both among all cases (odds ratio (OR): 1.15, 95% confidence interval (CI): 1.11-1.19, P = 8.88 × 10⁻¹⁶) and among familial cases (OR: 1.24, 95% CI: 1.16-1.32, P = 6.19 × 10⁻¹¹), compared to the haplotype with the respective protective alleles. Our results suggest that loss-of-function mutations in RAD51B are rare, but common variation at the RAD51B region is significantly associated with familial breast cancer risk.

Journal ArticleDOI
08 Jun 2016-PLOS ONE
TL;DR: A software package that facilitates extracting climate data for specific locations from large datasets and provides a user-friendly interface suitable for resource managers and decision makers as well as scientists is developed.
Abstract: Large volumes of gridded climate data have become available in recent years including interpolated historical data from weather stations and future predictions from general circulation models. These datasets, however, are at various spatial resolutions that need to be converted to scales meaningful for applications such as climate change risk and impact assessments or sample-based ecological research. Extracting climate data for specific locations from large datasets is not a trivial task and typically requires advanced GIS and data management skills. In this study, we developed a software package, ClimateNA, that facilitates this task and provides a user-friendly interface suitable for resource managers and decision makers as well as scientists. The software locally downscales historical and future monthly climate data layers into scale-free point estimates of climate values for the entire North American continent. The software also calculates a large number of biologically relevant climate variables that are usually derived from daily weather data. ClimateNA covers 1) 104 years of historical data (1901–2014) in monthly, annual, decadal and 30-year time steps; 2) three paleoclimatic periods (Last Glacial Maximum, Mid Holocene and Last Millennium); 3) three future periods (2020s, 2050s and 2080s); and 4) annual time-series of model projections for 2011–2100. Multiple general circulation models (GCMs) were included for both paleo and future periods, and two representative concentration pathways (RCP4.5 and 8.5) were chosen for future climate data.
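The core "scale-free point estimate" operation can be pictured as interpolation inside a grid cell. The toy below uses bilinear interpolation and ignores the elevation adjustment that a tool like ClimateNA applies on top; the function name and data are my own:

```python
import math

def bilinear(grid, x, y):
    """Interpolate a value at fractional coordinates (x, y) from a 2-D
    grid (list of rows), as a toy stand-in for extracting point
    estimates from gridded climate layers."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Monthly temperature grid (°C); a point halfway between four cells
temps = [[10.0, 12.0],
         [14.0, 16.0]]
value = bilinear(temps, 0.5, 0.5)
assert abs(value - 13.0) < 1e-9
```

Downscaling packages go further than this sketch by combining such interpolation with local lapse-rate (elevation) corrections, which is what makes the resulting point estimates effectively scale-free.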

Journal ArticleDOI
09 Jun 2016-PLOS ONE
TL;DR: SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters and keeps track of the whole analysis process, parameter values and versions of the R packages used.
Abstract: Background Several R packages exist for the detection of differentially expressed genes from RNA-Seq data. The analysis process includes three main steps, namely normalization, dispersion estimation and testing for differential expression. Quality control steps along this process are recommended but not mandatory, and failing to check the characteristics of the dataset may lead to spurious results. In addition, normalization methods and statistical models are not exchangeable across the packages without adequate transformations, of which users are often unaware. Thus, dedicated analysis pipelines are needed to include systematic quality control steps and prevent errors from misuse of the proposed methods. Results SARTools is an R pipeline for differential analysis of RNA-Seq count data. It can handle designs involving two or more conditions of a single biological factor with or without a blocking factor (such as a batch effect or a sample pairing). It is based on DESeq2 and edgeR and is composed of an R package and two R script templates (for DESeq2 and edgeR respectively). By tuning a small number of parameters and executing one of the R scripts, users have access to the full results of the analysis, including lists of differentially expressed genes and an HTML report that (i) displays diagnostic plots for quality control and model hypothesis checking and (ii) keeps track of the whole analysis process, parameter values and versions of the R packages used. Conclusions SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters. It gives access to the main parameters of DESeq2 and edgeR and prevents untrained users from misusing some functionalities of both packages. By keeping track of all the parameters of the analysis process it fits the requirements of reproducible research.
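The normalization step referenced above can be sketched outside R: DESeq2's default is median-of-ratios size factors. The following is a simplified Python reimplementation of that idea, not the actual DESeq2 code:

```python
import math

def size_factors(counts):
    """DESeq2-style median-of-ratios size factors for a genes x samples
    count matrix (list of rows, one row per gene)."""
    n_samples = len(counts[0])
    # Geometric mean per gene, using only genes expressed in every sample
    ref = []
    for row in counts:
        if all(c > 0 for c in row):
            ref.append(math.exp(sum(math.log(c) for c in row) / n_samples))
        else:
            ref.append(None)
    factors = []
    for j in range(n_samples):
        ratios = sorted(
            counts[i][j] / ref[i] for i in range(len(counts)) if ref[i]
        )
        mid = len(ratios) // 2
        median = (ratios[mid] if len(ratios) % 2
                  else 0.5 * (ratios[mid - 1] + ratios[mid]))
        factors.append(median)
    return factors

# A sample sequenced twice as deeply gets a size factor twice as large
counts = [[10, 20], [100, 200], [5, 10]]
factors = size_factors(counts)
assert abs(factors[1] / factors[0] - 2.0) < 1e-9
```

Using the per-sample median of ratios (rather than total counts) makes the factors robust to a handful of very highly expressed genes dominating one library, which is the pitfall naive library-size scaling runs into.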

Journal ArticleDOI
17 Feb 2016-PLOS ONE
TL;DR: A statistical model is developed that quantifies the effect of the majority illusion and shows that the illusion is exacerbated in networks with a heterogeneous degree distribution and disassortative structure.
Abstract: Individuals' decisions, from what product to buy to whether to engage in risky behavior, often depend on the choices, behaviors, or states of other people. People, however, rarely have global knowledge of the states of others but must estimate them from the local observations of their social contacts. Network structure can significantly distort an individual's local observations. Under some conditions, a state that is globally rare in a network may be dramatically over-represented in the local neighborhoods of many individuals. This effect, which we call the "majority illusion," leads individuals to systematically overestimate the prevalence of that state, which may accelerate the spread of social contagions. We develop a statistical model that quantifies this effect and validate it with measurements in synthetic and real-world networks. We show that the illusion is exacerbated in networks with a heterogeneous degree distribution and disassortative structure.
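The effect can be demonstrated on a toy network where one high-degree node holds a globally rare attribute. This sketch (names are mine, not the paper's model) measures how many nodes see the attribute in at least half of their neighbours:

```python
def majority_illusion(adj, active):
    """Return (global prevalence of 'active', fraction of nodes that
    observe 'active' in at least half of their neighbours)."""
    n = len(adj)
    global_prev = len(active) / n
    fooled = sum(
        1 for v in range(n)
        if adj[v] and sum(u in active for u in adj[v]) >= len(adj[v]) / 2
    )
    return global_prev, fooled / n

# Star network: only hub 0 is active; the 6 leaves are not
adj = {0: [1, 2, 3, 4, 5, 6]}
for leaf in range(1, 7):
    adj[leaf] = [0]
prev, local = majority_illusion(adj, active={0})
# Globally ~14% of nodes are active, yet ~86% of nodes (every leaf)
# see an active majority among their neighbours
assert prev < 0.2 and local > 0.8
```

The star graph is an extreme case of the heterogeneous-degree, disassortative structure the abstract identifies: the rare state sits on the one node that almost everyone observes.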

Journal ArticleDOI
04 Mar 2016-PLOS ONE
TL;DR: The study shows that rumours that are ultimately proven true tend to be resolved faster than those that turn out to be false, and reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumours.
Abstract: As breaking news unfolds people increasingly rely on social media to stay abreast of the latest updates. The use of social media in such situations comes with the caveat that new information being released piecemeal may encourage rumours, many of which remain unverified long after their point of release. Little is known, however, about the dynamics of the life cycle of a social media rumour. In this paper we present a methodology that has enabled us to collect, identify and annotate a dataset of 330 rumour threads (4,842 tweets) associated with 9 newsworthy events. We analyse this dataset to understand how users spread, support, or deny rumours that are later proven true or false, by distinguishing two levels of status in a rumour life cycle, i.e., before and after its veracity status is resolved. The identification of rumours associated with each event, as well as the tweet that resolved each rumour as true or false, was performed by journalist members of the research team who tracked the events in real time. Our study shows that rumours that are ultimately proven true tend to be resolved faster than those that turn out to be false. Whilst one can readily see users denying rumours once they have been debunked, users appear to be less capable of distinguishing true from false rumours when their veracity remains in question. In fact, we show that the prevalent tendency for users is to support every unverified rumour. We also analyse the role of different types of users, finding that highly reputable users such as news organisations endeavour to post well-grounded statements, which appear to be certain and accompanied by evidence. Nevertheless, these often prove to be unverified pieces of information that give rise to false rumours. Our study reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumours. The findings of our study provide useful insights for achieving this aim.

Journal ArticleDOI
03 Nov 2016-PLOS ONE
TL;DR: It is revealed that environmental and health benefits are possible by shifting current Western diets to a variety of more sustainable dietary patterns, with reductions as high as 70–80% in GHG emissions and land use, and 50% in water use, possible by adopting sustainable dietary patterns.
Abstract: Food production is a major driver of greenhouse gas (GHG) emissions, water and land use, and dietary risk factors are contributors to non-communicable diseases. Shifts in dietary patterns can therefore potentially provide benefits for both the environment and health. However, there is uncertainty about the magnitude of these impacts, and the dietary changes necessary to achieve them. We systematically review the evidence on changes in GHG emissions, land use, and water use, from shifting current dietary intakes to environmentally sustainable dietary patterns. We find 14 common sustainable dietary patterns across reviewed studies, with reductions as high as 70-80% of GHG emissions and land use, and 50% of water use (with medians of about 20-30% for these indicators across all studies) possible by adopting sustainable dietary patterns. Reductions in environmental footprints were generally proportional to the magnitude of animal-based food restriction. Dietary shifts also yielded modest benefits in all-cause mortality risk. Our review reveals that environmental and health benefits are possible by shifting current Western diets to a variety of more sustainable dietary patterns.

Journal ArticleDOI
16 May 2016-PLOS ONE
TL;DR: The CES-D has acceptable screening accuracy in the general population or primary care settings, but it should not be used as an isolated diagnostic measure of depression.
Abstract: Objective: We aimed to collect and meta-analyse the existing evidence regarding the performance of the Center for Epidemiologic Studies Depression scale (CES-D) for detecting depression in general population and primary care settings. Method: Systematic literature search in PubMed and PsycINFO. Eligible studies were: a) validation studies of screening questionnaires with information on the accuracy of the CES-D; b) samples from general populations or primary care settings; c) standardized diagnostic interviews following standard classification systems used as gold standard; and d) English or Spanish language of publication. Pooled sensitivity, specificity, likelihood ratios and diagnostic odds ratio were estimated for several cut-off points using bivariate mixed effects models for each threshold. The summary receiver operating characteristic curve was estimated with the Rutter and Gatsonis mixed effects model; the area under the curve was calculated. Quality of the studies was assessed with the QUADAS tool. Causes of heterogeneity were evaluated with the Rutter and Gatsonis mixed effects model including each covariate one at a time. Results: 28 studies (10,617 participants) met eligibility criteria. The median prevalence of Major Depression was 8.8% (interquartile range from 3.8% to 12.6%). The overall area under the curve was 0.87. At the cut-off of 16, sensitivity was 0.87 (95% CI: 0.82–0.92), specificity 0.70 (95% CI: 0.65–0.75), and the diagnostic odds ratio 16.2 (95% CI: 10.49–25.10). Better trade-offs between sensitivity and specificity were observed (sensitivity = 0.83, specificity = 0.78, diagnostic odds ratio = 16.64) for the cut-off of 20. None of the variables assessed as possible sources of heterogeneity was found to be statistically significant. Conclusion: The CES-D has acceptable screening accuracy in the general population or primary care settings, but it should not be used as an isolated diagnostic measure of depression. 
Depending on the test objectives, the cut-off 20 may be more adequate than the value of 16, which is typically recommended.
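The accuracy statistics reported above are related by standard formulas; a minimal sketch of how they connect (values computed directly from the point estimates differ slightly from the pooled bivariate-model estimates in the abstract):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (TP/FN) / (FP/TN), expressed via sensitivity and specificity."""
    return (sensitivity * specificity) / ((1 - sensitivity) * (1 - specificity))

def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios for a screening test."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Cut-off 16 point estimates from the abstract: sensitivity 0.87, specificity 0.70.
dor = diagnostic_odds_ratio(0.87, 0.70)        # ~15.6 vs. the pooled 16.2
lr_pos, lr_neg = likelihood_ratios(0.87, 0.70)
```

The small gap between the directly computed DOR and the reported 16.2 is expected, since the pooled estimates come from a bivariate mixed-effects model rather than from the marginal point estimates.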

Journal ArticleDOI
26 Apr 2016-PLOS ONE
TL;DR: The results indicate that gene modification via CRISPR/Cas9 is a useful approach for enhancing blast resistance in rice and targeted multiple sites within OsERF922 by using Cas9/Multi-target-sgRNAs to obtain plants harboring mutations at two or three sites.
Abstract: Rice blast is one of the most destructive diseases affecting rice worldwide. The adoption of host resistance has proven to be the most economical and effective approach to control rice blast. In recent years, sequence-specific nucleases (SSNs) have been demonstrated to be powerful tools for the improvement of crops via gene-specific genome editing, and CRISPR/Cas9 is thought to be the most effective SSN. Here, we report the improvement of rice blast resistance by engineering a CRISPR/Cas9 SSN (C-ERF922) targeting the OsERF922 gene in rice. Twenty-one C-ERF922-induced mutant plants (42.0%) were identified from 50 T0 transgenic plants. Sanger sequencing revealed that these plants harbored various insertion or deletion (InDel) mutations at the target site. We showed that all of the C-ERF922-induced allele mutations were transmitted to subsequent generations. Mutant plants harboring the desired gene modification but not containing the transferred DNA were obtained by segregation in the T1 and T2 generations. Six T2 homozygous mutant lines were further examined for a blast resistance phenotype and agronomic traits, such as plant height, flag leaf length and width, number of productive panicles, panicle length, number of grains per panicle, seed setting percentage and thousand seed weight. The results revealed that the number of blast lesions formed following pathogen infection was significantly decreased in all 6 mutant lines compared with wild-type plants at both the seedling and tillering stages. Furthermore, there were no significant differences between any of the 6 T2 mutant lines and the wild-type plants with regard to the agronomic traits tested. We also simultaneously targeted multiple sites within OsERF922 by using Cas9/Multi-target-sgRNAs (C-ERF922S1S2 and C-ERF922S1S2S3) to obtain plants harboring mutations at two or three sites. Our results indicate that gene modification via CRISPR/Cas9 is a useful approach for enhancing blast resistance in rice.

Journal ArticleDOI
25 May 2016-PLOS ONE
TL;DR: While no yield difference was observed among regions or different soil textures, wheat cultivation in dryland regions was more prone to yield loss than in non-dryland regions; potential causes and possible approaches that may minimize drought impacts are discussed.
Abstract: Drought has been a major cause of agricultural disaster, yet how it affects the vulnerability of maize and wheat production in combination with several co-varying factors (i.e., phenological phases, agro-climatic regions, soil texture) remains unclear. Using a data synthesis approach, this study aims to better characterize the effects of those co-varying factors with drought and to provide critical information on minimizing yield loss. We collected data from peer-reviewed publications between 1980 and 2015 which examined maize and wheat yield responses to drought using field experiments. We performed unweighted analysis using the log response ratio to calculate the bootstrapped confidence limits of yield responses and calculated drought sensitivities with regard to those co-varying factors. Our results showed that yield reduction varied with species: wheat had a lower yield reduction (20.6%) than maize (39.3%) at approximately 40% water reduction. Maize was also more sensitive to drought than wheat, particularly during the reproductive phase, and was equally sensitive in dryland and non-dryland regions. While no yield difference was observed among regions or different soil textures, wheat cultivation in the dryland region was more prone to yield loss than in the non-dryland region. Informed by these results, we discuss potential causes and possible approaches that may minimize drought impacts.
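The log response ratio and percentile-bootstrap confidence limits described above can be sketched as follows (a minimal illustration of the effect-size machinery, not the authors' analysis code):

```python
import math
import random

def log_response_ratio(treatment_mean, control_mean):
    """lnRR = ln(X_t / X_c); negative values indicate yield loss under drought."""
    return math.log(treatment_mean / control_mean)

def percent_change(ln_rr):
    """Back-transform a log response ratio to a percent change in yield."""
    return (math.exp(ln_rr) - 1.0) * 100.0

def bootstrap_ci(ln_rrs, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence limits for the mean log response ratio."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(ln_rrs, k=len(ln_rrs))) / len(ln_rrs)
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

For example, a drought-treatment yield of 79.4 against a control of 100 gives lnRR = ln(0.794), which back-transforms to the 20.6% wheat yield reduction quoted in the abstract.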

Journal ArticleDOI
08 Jul 2016-PLOS ONE
TL;DR: The goal in this study was to see whether it was possible to achieve consensus among professionals on appropriate criteria for identifying children who might benefit from specialist services using an online Delphi technique.
Abstract: Delayed or impaired language development is a common developmental concern, yet there is little agreement about the criteria used to identify and classify language impairments in children. Children's language difficulties are at the interface between education, medicine and the allied professions, who may all adopt different approaches to conceptualising them. Our goal in this study was to use an online Delphi technique to see whether it was possible to achieve consensus among professionals on appropriate criteria for identifying children who might benefit from specialist services. We recruited a panel of 59 experts representing ten disciplines (including education, psychology, speech-language therapy/pathology, paediatrics and child psychiatry) from English-speaking countries (Australia, Canada, Ireland, New Zealand, United Kingdom and USA). The starting point for round 1 was a set of 46 statements based on articles and commentaries in a special issue of a journal focusing on this topic. Panel members rated each statement for both relevance and validity on a seven-point scale, and added free text comments. These responses were synthesised by the first two authors, who then removed, combined or modified items with a view to improving consensus. The resulting set of statements was returned to the panel for a second evaluation (round 2). Consensus (percentage reporting 'agree' or 'strongly agree') was at least 80 percent for 24 of 27 round 2 statements, though many respondents qualified their response with written comments. These were again synthesised by the first two authors. The resulting consensus statement is reported here, with additional summary of relevant evidence, and a concluding commentary on residual disagreements and gaps in the evidence base.

Journal ArticleDOI
27 Jul 2016-PLOS ONE
TL;DR: To guide interventions aimed at reducing tropical deforestation due to oil palm, recent expansions are analysed and likely future ones are modelled, and critical areas for biodiversity that oil palm expansion threatens are identified.
Abstract: Palm oil is the most widely traded vegetable oil globally, with demand projected to increase substantially in the future. Almost all oil palm grows in areas that were once tropical moist forests, some of them quite recently. The conversion to date, and future expansion, threatens biodiversity and increases greenhouse gas emissions. Today, consumer pressure is pushing companies toward deforestation-free sources of palm oil. To guide interventions aimed at reducing tropical deforestation due to oil palm, we analysed recent expansions and modelled likely future ones. We assessed sample areas to find where oil palm plantations have recently replaced forests in 20 countries, using a combination of high-resolution imagery from Google Earth and Landsat. We then compared these trends to countrywide trends in FAO data for oil palm planted area. Finally, we assessed which forests have high agricultural suitability for future oil palm development, which we refer to as vulnerable forests, and identified critical areas for biodiversity that oil palm expansion threatens. Our analysis reveals regional trends in deforestation associated with oil palm agriculture. In Southeast Asia, 45% of sampled oil palm plantations came from areas that were forests in 1989. For South America, the percentage was 31%. By contrast, in Mesoamerica and Africa, we observed only 2% and 7% of oil palm plantations coming from areas that were forest in 1989. The largest areas of vulnerable forest are in Africa and South America. Vulnerable forests in all four regions of production contain globally high concentrations of mammal and bird species at risk of extinction. However, priority areas for biodiversity conservation differ based on taxa and criteria used. Government regulation and voluntary market interventions can help incentivize the expansion of oil palm plantations in ways that protect biodiversity-rich ecosystems.

Journal ArticleDOI
07 Jun 2016-PLOS ONE
TL;DR: It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.
Abstract: A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters building the DNN structure are trained with probability-based feature vectors extracted from the in-vehicle network packets. For a given packet, the DNN provides the probability of each class, discriminating normal and attack packets, and thus the sensor can identify any malicious attack on the vehicle. Compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning, such as initializing the parameters through unsupervised pre-training of deep belief networks (DBN), thereby improving detection accuracy. Experimental results demonstrate that the proposed technique provides a real-time response to attacks with a significantly improved detection ratio on the controller area network (CAN) bus.
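The abstract does not specify how the probability-based feature vectors are built. One hypothetical construction, shown purely for illustration, is the empirical per-bit occurrence frequency over a window of CAN payloads (the function `bit_probability_features` and its bit layout are assumptions, not taken from the paper):

```python
def bit_probability_features(payloads, n_bits=64):
    """Return, for each bit position in an (up to) 8-byte CAN payload,
    the empirical probability that the bit is set across the window.
    Hypothetical feature construction, not taken from the paper."""
    counts = [0] * n_bits
    for payload in payloads:
        for byte_index, byte in enumerate(payload):
            for bit in range(8):
                if byte & (1 << bit):
                    counts[byte_index * 8 + bit] += 1
    n = len(payloads)
    return [c / n for c in counts]

# A window of two hypothetical payloads: one with the lowest bit set, one empty.
window = [bytes([0x01] + [0] * 7), bytes(8)]
features = bit_probability_features(window)  # features[0] == 0.5, rest 0.0
```

A vector like this could then be fed to any classifier that outputs per-class probabilities, as the DNN in the paper does.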

Journal ArticleDOI
09 Jun 2016-PLOS ONE
TL;DR: Negative environmental and public health outcomes were estimated through economic input-output life cycle assessment (EIOLCA) modeling using National Health Expenditures (NHE) for the decade 2003–2013 and compared to national totals.
Abstract: The U.S. health care sector is highly interconnected with industrial activities that emit much of the nation’s pollution to air, water, and soils. We estimate emissions directly and indirectly attributable to the health care sector, and potential harmful effects on public health. Negative environmental and public health outcomes were estimated through economic input-output life cycle assessment (EIOLCA) modeling using National Health Expenditures (NHE) for the decade 2003–2013 and compared to national totals. In 2013, the health care sector was also responsible for significant fractions of national air pollution emissions and impacts, including acid rain (12%), greenhouse gas emissions (10%), smog formation (10%), criteria air pollutants (9%), stratospheric ozone depletion (1%), and carcinogenic and non-carcinogenic air toxics (1–2%). The largest contributors to impacts are discussed from both the supply side (EIOLCA economic sectors) and demand side (NHE categories), as are trends over the study period. Health damages from these pollutants are estimated at 470,000 DALYs lost from pollution-related disease, or 405,000 DALYs when adjusted for recent shifts in power generation sector emissions. These indirect health burdens are commensurate with the 44,000–98,000 people who die in hospitals each year in the U.S. as a result of preventable medical errors, but are currently not attributed to our health system. Concerted efforts to improve environmental performance of health care could reduce expenditures directly through waste reduction and energy savings, and indirectly through reducing pollution burden on public health, and ought to be included in efforts to improve health care quality and safety.
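Input-output LCA attributes supply-chain impacts through the Leontief inverse: total output is x = (I − A)⁻¹ y, and emissions are the direct intensities applied to that total output. A toy two-sector sketch (the matrices and intensities below are illustrative numbers, not data from the study):

```python
import numpy as np

# Toy two-sector economy (values are illustrative, not from the study).
# A[i, j]: dollars of input from sector i needed per dollar of output of sector j.
A = np.array([[0.10, 0.20],
              [0.30, 0.05]])
# Direct emission intensities (kg CO2e per dollar of output) for each sector.
f = np.array([0.5, 1.2])
# Final demand vector, e.g. health-care expenditures routed to each sector.
y = np.array([100.0, 50.0])

# Total (direct + indirect) output needed to satisfy final demand:
x = np.linalg.solve(np.eye(2) - A, y)   # x = (I - A)^-1 y
total_emissions = f @ x                 # kg CO2e attributable to the demand y
```

Because x exceeds y element-wise, the difference captures the indirect (supply-chain) activity that EIOLCA adds on top of direct spending.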

Journal ArticleDOI
26 Feb 2016-PLOS ONE
TL;DR: This work presents a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space and demonstrates the usefulness of nonlinear observable subspaces in the design of Koop man operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
Abstract: In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
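The sparse-regression step, identifying which terms from a library of candidate functions are active in the dynamics, is often approximated in practice by sequentially thresholded least squares rather than an explicit ℓ1 solver. A minimal sketch on a toy 1-D system (the system, library, and threshold below are illustrative choices, not taken from the paper):

```python
import numpy as np

def stlsq(Theta, dxdt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: a sparse-regression proxy
    for l1-regularized regression over a library of candidate functions."""
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            # Refit only the surviving terms.
            xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy 1-D system dx/dt = -2 x + 0.1 x^3 sampled on a grid (noiseless).
x = np.linspace(-2, 2, 201)
dxdt = -2.0 * x + 0.1 * x**3
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate library
xi = stlsq(Theta, dxdt)  # expect roughly [0, -2, 0, 0.1]
```

On clean data the regression recovers the two active terms exactly and zeroes out the constant and quadratic columns, which is the behaviour the observable-selection strategy relies on.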

Journal ArticleDOI
26 May 2016-PLOS ONE
TL;DR: The results suggest that gut microbiota may differ between men and women, and that these differences may be influenced by the grade of obesity, as a function of gender and changes in body mass index.
Abstract: Intestinal microbiota changes are associated with the development of obesity. However, studies in humans have generated conflicting results due to high inter-individual heterogeneity in terms of diet, age, and hormonal factors, and the largely unexplored influence of gender. In this work, we aimed to identify differential gut microbiota signatures associated with obesity, as a function of gender and changes in body mass index (BMI). Differences in the bacterial community structure were analyzed by 16S sequencing in 39 men and 36 post-menopausal women, who had a similar dietary background, were matched by age and stratified according to BMI. We observed that the abundance of the Bacteroides genus was lower in men than in women. In fact, the abundance of this genus decreased in men with an increase in BMI (P<0.001, Q<0.001). However, in women, it remained unchanged within the different ranges of BMI. We observed a higher presence of the Veillonella (84.6% vs. 47.2%; χ² test P = 0.001, Q = 0.019) and Methanobrevibacter genera (84.6% vs. 47.2%; χ² test P = 0.002, Q = 0.026) in fecal samples in men compared to women. We also observed that the abundance of Bilophila was lower in men compared to women regardless of BMI (P = 0.002, Q = 0.041). Additionally, after correcting for age and sex, 66 bacterial taxa at the genus level were found to be associated with BMI and plasma lipids. Microbiota explained, at P = 0.001, 31.17% of the variation in BMI, 29.04% in triglycerides, 33.70% in high-density lipoproteins, 46.86% in low-density lipoproteins, and 28.55% in total cholesterol. Our results suggest that gut microbiota may differ between men and women, and that these differences may be influenced by the grade of obesity. The divergence in gut microbiota observed between men and women might have a dominant role in the definition of gender differences in the prevalence of metabolic and intestinal inflammatory diseases.

Journal ArticleDOI
12 Sep 2016-PLOS ONE
TL;DR: The results suggest that the SP/AP ratio may be useful in the diagnosis of “hidden hearing loss” and that, as suggested by animal models, the noise-induced loss of cochlear nerve synapses leads to deficits in hearing abilities in difficult listening situations, despite the presence of normal thresholds at standard audiometric frequencies.
Abstract: Recent work suggests that hair cells are not the most vulnerable elements in the inner ear; rather, it is the synapses between hair cells and cochlear nerve terminals that degenerate first in the aging or noise-exposed ear. This primary neural degeneration does not affect hearing thresholds, but likely contributes to problems understanding speech in difficult listening environments, and may be important in the generation of tinnitus and/or hyperacusis. To look for signs of cochlear synaptopathy in humans, we recruited college students and divided them into low-risk and high-risk groups based on self-report of noise exposure and use of hearing protection. Cochlear function was assessed by otoacoustic emissions and click-evoked electrocochleography; hearing was assessed by behavioral audiometry and word recognition with or without noise or time compression and reverberation. Both groups had normal thresholds at standard audiometric frequencies, however, the high-risk group showed significant threshold elevation at high frequencies (10–16 kHz), consistent with early stages of noise damage. Electrocochleography showed a significant difference in the ratio between the waveform peaks generated by hair cells (Summating Potential; SP) vs. cochlear neurons (Action Potential; AP), i.e. the SP/AP ratio, consistent with selective neural loss. The high-risk group also showed significantly poorer performance on word recognition in noise or with time compression and reverberation, and reported heightened reactions to sound consistent with hyperacusis. These results suggest that the SP/AP ratio may be useful in the diagnosis of “hidden hearing loss” and that, as suggested by animal models, the noise-induced loss of cochlear nerve synapses leads to deficits in hearing abilities in difficult listening situations, despite the presence of normal thresholds at standard audiometric frequencies.

Journal ArticleDOI
06 Jun 2016-PLOS ONE
TL;DR: Sommer as discussed by the authors is an open-source R package to facilitate the use of mixed models for genomic selection and hybrid prediction purposes using more than one variance component and allowing specification of covariance structures.
Abstract: Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next-generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: average information (AI), expectation-maximization (EM) and efficient mixed model association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to those of other software, but the analysis was faster than Bayesian counterparts by margins of hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms for fitting models in a user-friendly environment such as R.
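The additive relationship kernel at the heart of genomic prediction is commonly computed from a marker matrix with VanRaden's method. sommer itself is an R package; the sketch below is an illustrative Python analogue of that kernel (the tiny marker matrix is made up for demonstration):

```python
import numpy as np

def vanraden_grm(M):
    """Additive genomic relationship matrix (VanRaden's method 1).
    M: individuals x markers matrix coded 0/1/2 (minor-allele counts)."""
    p = M.mean(axis=0) / 2.0               # estimated allele frequency per marker
    Z = M - 2.0 * p                        # centre each marker by twice its frequency
    denom = 2.0 * np.sum(p * (1.0 - p))    # scaling so the diagonal averages near 1
    return Z @ Z.T / denom

# Tiny made-up example: 4 individuals, 5 markers.
M = np.array([[0, 1, 2, 1, 0],
              [1, 1, 0, 2, 1],
              [2, 0, 1, 1, 2],
              [0, 2, 1, 0, 1]])
G = vanraden_grm(M)
```

A matrix like G is what enters the mixed model as the covariance structure of the additive random effect; dominance and epistatic kernels are built analogously.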

Journal ArticleDOI
14 Jan 2016-PLOS ONE
TL;DR: This study analyzes two datasets retrieved from the Web of Science with the aim of giving a scientometric description of what the concept of CS entails, accounting for its development over time, which strands of research have adopted CS, and the scientific output achieved in CS-related projects.
Abstract: Context: The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach to science are often heard in connection with large datasets and the possibilities of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy, and as a way of promoting political decision processes involving environment and health. Objective: In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time, identify which strands of research have adopted CS, and give an assessment of the scientific output achieved in CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms. Results: Results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation and ecology, and utilizes CS mainly as a methodology of collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is to be found in biology and conservation research. In absolute numbers, the number of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data.

Journal ArticleDOI
02 May 2016-PLOS ONE
TL;DR: The comparison of costs of nature-based defence projects and engineering structures shows that salt-marshes and mangroves can be two to five times cheaper than a submerged breakwater for wave heights up to half a metre and, within their limits, become more cost-effective at greater depths.
Abstract: There is great interest in the restoration and conservation of coastal habitats for protection from flooding and erosion. This is evidenced by the growing number of analyses and reviews of the effectiveness of habitats as natural defences and increasing funding world-wide for nature-based defences-i.e. restoration projects aimed at coastal protection; yet, there is no synthetic information on what kinds of projects are effective and cost effective for this purpose. This paper addresses two issues critical for designing restoration projects for coastal protection: (i) a synthesis of the costs and benefits of projects designed for coastal protection (nature-based defences) and (ii) analyses of the effectiveness of coastal habitats (natural defences) in reducing wave heights and the biophysical parameters that influence this effectiveness. We (i) analyse data from sixty-nine field measurements in coastal habitats globally and examine measures of effectiveness of mangroves, salt-marshes, coral reefs and seagrass/kelp beds for wave height reduction; (ii) synthesise the costs and coastal protection benefits of fifty-two nature-based defence projects and; (iii) estimate the benefits of each restoration project by combining information on restoration costs with data from nearby field measurements. The analyses of field measurements show that coastal habitats have significant potential for reducing wave heights that varies by habitat and site. In general, coral reefs and salt-marshes have the highest overall potential. Habitat effectiveness is influenced by: a) the ratios of wave height-to-water depth and habitat width-to-wavelength in coral reefs; and b) the ratio of vegetation height-to-water depth in salt-marshes. 
The comparison of costs of nature-based defence projects and engineering structures shows that salt-marshes and mangroves can be two to five times cheaper than a submerged breakwater for wave heights up to half a metre and, within their limits, become more cost-effective at greater depths. Nature-based defence projects also report benefits ranging from reductions in storm damage to reductions in coastal structure costs.