
Showing papers in "PLOS ONE in 2017"


Journal ArticleDOI
16 Feb 2017-PLOS ONE
TL;DR: Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%.
Abstract: This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250 m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods (random forest, gradient boosting and/or multinomial logistic regression) as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation, with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) considerable investments in preparing finer-resolution covariate layers, and (3) the insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids are available under the Open Data Base License.
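As a rough illustration of the ensemble approach described above, here is a minimal Python sketch (using scikit-learn as a stand-in for the paper's R packages ranger, xgboost, nnet and caret; the data and model settings are placeholders, not the SoilGrids pipeline):

```python
# Minimal sketch: average a random forest and a gradient boosting model for
# one numeric soil property, and report variance explained (R^2) under
# 10-fold cross-validation, as in the paper's accuracy assessment.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=2000, n_features=25, noise=10.0, random_state=0)  # stand-in covariates

scores = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y[train])
    gb = GradientBoostingRegressor(random_state=0).fit(X[train], y[train])
    pred = (rf.predict(X[test]) + gb.predict(X[test])) / 2.0  # simple ensemble average
    scores.append(r2_score(y[test], pred))

print(f"10-fold CV variance explained (R^2): {np.mean(scores):.2f}")
```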

2,228 citations


Journal ArticleDOI
18 Oct 2017-PLOS ONE
TL;DR: This analysis estimates a seasonal decline of 76%, and mid-summer decline of 82% in flying insect biomass over the 27 years of study, and shows that this decline is apparent regardless of habitat type, while changes in weather, land use, and habitat characteristics cannot explain this overall decline.
Abstract: Global declines in insects have sparked wide interest among scientists, politicians, and the general public. Loss of insect diversity and abundance is expected to provoke cascading effects on food webs and to jeopardize ecosystem services. Our understanding of the extent and underlying causes of this decline is based on the abundance of single species or taxonomic groups only, rather than changes in insect biomass, which is more relevant for ecological functioning. Here, we used a standardized protocol to measure total insect biomass using Malaise traps, deployed over 27 years in 63 nature protection areas in Germany (96 unique location-year combinations), to infer the status and trend of local entomofauna. Our analysis estimates a seasonal decline of 76%, and a mid-summer decline of 82%, in flying insect biomass over the 27 years of study. We show that this decline is apparent regardless of habitat type, while changes in weather, land use, and habitat characteristics cannot explain this overall decline. This as yet unrecognized loss of insect biomass must be taken into account in evaluating declines in abundance of species depending on insects as a food source, and in ecosystem functioning in the European landscape.
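As a back-of-the-envelope conversion of these figures (assuming, for simplicity, a constant exponential decline, which is a simplification of the paper's seasonal model), a 76% total loss over 27 years corresponds to roughly a 5% decline per year:

```python
# If biomass retains a constant annual factor r, then r**27 = 1 - 0.76.
total_remaining = 1.0 - 0.76
years = 27
annual_decline = 1.0 - total_remaining ** (1.0 / years)
print(f"implied average annual decline: {annual_decline:.1%}")  # ~5.1% per year
```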

2,065 citations


Journal ArticleDOI
11 May 2017-PLOS ONE
TL;DR: The ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
Abstract: Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.

1,015 citations


Journal ArticleDOI
06 Jun 2017-PLOS ONE
TL;DR: The pot operon is important for the regulation of protein expression and biofilm formation in both encapsulated and NCC1 nonencapsulated Streptococcus pneumoniae; however, in contrast to encapsulated pneumococcal strains, polyamine acquisition via the pot operon is not required for MNZ67 murine colonization, persistence in the lungs, or full virulence in a model of OM.
Abstract: Streptococcus pneumoniae is commonly found in the human nasopharynx and is the causative agent of multiple diseases. Since invasive pneumococcal infections are associated with encapsulated pneumococci, the capsular polysaccharide is the target of licensed pneumococcal vaccines. However, there is an increasing distribution of non-vaccine serotypes, as well as nonencapsulated S. pneumoniae (NESp). Both encapsulated and nonencapsulated pneumococci possess the polyamine oligo-transport operon (potABCD). Previous research has shown inactivation of the pot operon in encapsulated pneumococci alters protein expression and leads to a significant reduction in pneumococcal murine colonization, but the role of the pot operon in NESp is unknown. Here, we demonstrate deletion of potD from the NESp NCC1 strain MNZ67 does impact expression of the key proteins pneumolysin and PspK, but it does not inhibit murine colonization. Additionally, we show the absence of potD significantly increases biofilm production, both in vitro and in vivo. In a chinchilla model of otitis media (OM), the absence of potD does not significantly affect MNZ67 virulence, but it does significantly reduce the pathogenesis of the virulent encapsulated strain TIGR4 (serotype 4). Deletion of potD also significantly reduced persistence of TIGR4 in the lungs but increased persistence of PIP01 in the lungs. We conclude the pot operon is important for the regulation of protein expression and biofilm formation in both encapsulated and NCC1 nonencapsulated Streptococcus pneumoniae. However, in contrast to encapsulated pneumococcal strains, polyamine acquisition via the pot operon is not required for MNZ67 murine colonization, persistence in the lungs, or full virulence in a model of OM. Therefore, NESp virulence regulation needs to be further established to identify potential NESp therapeutic targets.

988 citations


Journal ArticleDOI
02 Jun 2017-PLOS ONE
TL;DR: The proposed MCC-classifier has a close performance to SVM-imba while being simpler and more efficient and an optimal Bayes classifier for the MCC metric using an approach based on Frechet derivative.
Abstract: Data imbalance is frequently encountered in biomedical applications. Resampling techniques can be used in binary classification to tackle this issue. However, such solutions are not desired when the number of samples in the small class is limited. Moreover, the use of inadequate performance metrics, such as accuracy, leads to poor generalization results because the classifiers tend to predict the largest class. One good approach to dealing with this issue is to optimize performance metrics that are designed to handle data imbalance. The Matthews Correlation Coefficient (MCC) is widely used in bioinformatics as a performance metric. We are interested in developing a new classifier based on the MCC metric to handle imbalanced data. We derive an optimal Bayes classifier for the MCC metric using an approach based on the Fréchet derivative. We show that the proposed algorithm has the desirable theoretical property of consistency. Using simulated data, we verify the correctness of our optimality result by searching the space of all possible binary classifiers. The proposed classifier is evaluated on 64 datasets spanning a wide range of data imbalance. We compare both classification performance and CPU efficiency for three classifiers: 1) the proposed algorithm (MCC-classifier), 2) the Bayes classifier with a default threshold (MCC-base), and 3) an imbalanced SVM (SVM-imba). The experimental evaluation shows that MCC-classifier performs close to SVM-imba while being simpler and more efficient.
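The paper's contribution is an optimal Bayes classifier for MCC derived via the Fréchet derivative; as a much simpler illustration of the same goal (maximizing MCC on imbalanced data), one can tune a plain probability threshold on validation data. This is a surrogate, not the paper's algorithm:

```python
# Illustrative surrogate only: pick the decision threshold that maximizes
# MCC on a validation split of imbalanced synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)  # imbalanced
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

probs = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_val)[:, 1]
thresholds = np.linspace(0.01, 0.99, 99)
mccs = [matthews_corrcoef(y_val, (probs >= t).astype(int)) for t in thresholds]
best = thresholds[int(np.argmax(mccs))]
print(f"best threshold {best:.2f}, MCC {max(mccs):.3f}")
```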

850 citations


Journal ArticleDOI
04 Apr 2017-PLOS ONE
TL;DR: In this article, the authors assessed whether machine-learning can improve cardiovascular risk prediction and found that machine learning offers an opportunity to improve accuracy by exploiting complex interactions between risk factors, which can increase the number of patients who could benefit from preventive treatment, while avoiding unnecessary treatment of others.
Abstract: Background Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Methods Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10-years. Predictive accuracy was assessed by area under the ‘receiver operating curve’ (AUC); and sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) to predict 7.5% cardiovascular risk (threshold for initiating statins). Findings 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723–0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739–0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755–0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755–0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759–0.769). The highest achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Conclusions Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.
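The model comparison at the heart of the study can be sketched in a few lines of Python (synthetic data and off-the-shelf scikit-learn models as stand-ins for the UK cohort and the tuned algorithms):

```python
# Sketch: compare discrimination (AUC) of several learners on an imbalanced
# outcome, mirroring the study's comparison against a baseline risk score.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=30, weights=[0.93, 0.07], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```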

765 citations


Journal ArticleDOI
26 Oct 2017-PLOS ONE
TL;DR: BBMerge provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
Abstract: Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
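The core idea of overlap-based merging can be shown with a toy Python function (BBMerge itself is far more sophisticated, with quality-aware scoring and k-mer-based gap assembly; this sketch does exact-match overlap only):

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def merge_pair(r1, r2, min_overlap=10):
    """Merge read 1 with the reverse complement of read 2 at the longest exact overlap."""
    r2rc = revcomp(r2)
    for olen in range(min(len(r1), len(r2rc)), min_overlap - 1, -1):
        if r1[-olen:] == r2rc[:olen]:   # suffix of r1 matches prefix of r2rc
            return r1 + r2rc[olen:]
    return None                          # no confident overlap found

insert = "ACGTACGGTTCAGGCATTGCAACGGTT"
r1, r2 = insert[:18], revcomp(insert[8:])  # simulate an overlapping pair
print(merge_pair(r1, r2) == insert)        # True
```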

752 citations


Journal ArticleDOI
01 Jun 2017-PLOS ONE
TL;DR: A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed and the sensitivity of the method for cancer cases is 95.6%.
Abstract: Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided Diagnosis systems contribute to reducing the cost and increasing the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of the feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified in four classes (normal tissue, benign lesion, in situ carcinoma and invasive carcinoma) and in two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, including both nuclei and overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used for training a Support Vector Machine classifier. Accuracies of 77.8% for the four-class and 83.3% for the carcinoma/non-carcinoma classification are achieved. The sensitivity of our method for cancer cases is 95.6%.
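A minimal Keras sketch conveys the flavor of such a classifier (the layer sizes and input shape here are hypothetical, not the architecture from the paper, which is designed to capture both nuclei-scale and tissue-scale information):

```python
import tensorflow as tf

# Toy four-class CNN for H&E image patches (hypothetical dimensions).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(512, 512, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(4),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(4),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # normal / benign / in situ / invasive
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```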

743 citations


Journal ArticleDOI
26 Jan 2017-PLOS ONE
TL;DR: Many different psychological, contextual, sociodemographic and physical barriers that are specific to certain risk groups were identified, and knowledge gaps in understanding influenza vaccine hesitancy were mapped to derive directions for further research and inform interventions in this area.
Abstract: Background Influenza vaccine hesitancy is a significant threat to global efforts to reduce the burden of seasonal and pandemic influenza. Potential barriers of influenza vaccination need to be identified to inform interventions to raise awareness, influenza vaccine acceptance and uptake. Objective This review aims to (1) identify relevant studies and extract individual barriers of seasonal and pandemic influenza vaccination for risk groups and the general public; and (2) map knowledge gaps in understanding influenza vaccine hesitancy to derive directions for further research and inform interventions in this area. Methods Thirteen databases covering the areas of Medicine, Bioscience, Psychology, Sociology and Public Health were searched for peer-reviewed articles published between the years 2005 and 2016. Following the PRISMA approach, 470 articles were selected and analyzed for significant barriers to influenza vaccine uptake or intention. The barriers for different risk groups and flu types were clustered according to a conceptual framework based on the Theory of Planned Behavior and discussed using the 4C model of reasons for non-vaccination. Results Most studies were conducted in the American and European region. Health care personnel (HCP) and the general public were the most studied populations, while parental decisions for children at high risk were under-represented. This study also identifies understudied concepts. A lack of confidence, inconvenience, calculation and complacency were identified to different extents as barriers to influenza vaccine uptake in risk groups. Conclusion Many different psychological, contextual, sociodemographic and physical barriers that are specific to certain risk groups were identified. While most sociodemographic and physical variables may be significantly related to influenza vaccine hesitancy, they cannot be used to explain its emergence or intensity. Psychological determinants were meaningfully related to uptake and should therefore be measured in a valid and comparable way. A compendium of measurements for future use is suggested as supporting information.

738 citations


Journal ArticleDOI
04 Oct 2017-PLOS ONE
TL;DR: Several prospective and high-quality studies showed physical, psychological and occupational consequences of job burnout, which highlight the need for preventive interventions and early identification of this health condition in the work environment.
Abstract: Burnout is a syndrome that results from chronic stress at work, with several consequences to workers’ well-being and health. This systematic review aimed to summarize the evidence of the physical, psychological and occupational consequences of job burnout in prospective studies. The PubMed, Science Direct, PsycInfo, SciELO, LILACS and Web of Science databases were searched without language or date restrictions. The Transparent Reporting of Systematic Reviews and Meta-Analyses guidelines were followed. Prospective studies that analyzed burnout as the exposure condition were included. Among the 993 articles initially identified, 61 fulfilled the inclusion criteria, and 36 were analyzed because they met three criteria that must be followed in prospective studies. Burnout was a significant predictor of the following physical consequences: hypercholesterolemia, type 2 diabetes, coronary heart disease, hospitalization due to cardiovascular disorder, musculoskeletal pain, changes in pain experiences, prolonged fatigue, headaches, gastrointestinal issues, respiratory problems, severe injuries and mortality below the age of 45 years. The psychological effects were insomnia, depressive symptoms, use of psychotropic and antidepressant medications, hospitalization for mental disorders and psychological ill-health symptoms. Job dissatisfaction, absenteeism, new disability pension, job demands, job resources and presenteeism were identified as professional outcomes. Conflicting findings were observed. In conclusion, several prospective and high-quality studies showed physical, psychological and occupational consequences of job burnout. The individual and social impacts of burnout highlight the need for preventive interventions and early identification of this health condition in the work environment.

677 citations


Journal ArticleDOI
14 Jul 2017-PLOS ONE
TL;DR: A novel deep learning framework where wavelet transforms, stacked autoencoders and long short-term memory are combined for stock price forecasting is proposed; results show that the proposed model outperforms other similar models in both predictive accuracy and profitability performance.
Abstract: The application of deep learning approaches to finance has received a great deal of attention from both investors and researchers. This study presents a novel deep learning framework where wavelet transforms (WT), stacked autoencoders (SAEs) and long short-term memory (LSTM) are combined for stock price forecasting. SAEs for hierarchically extracted deep features are introduced into stock price forecasting for the first time. The deep learning framework comprises three stages. First, the stock price time series is decomposed by WT to eliminate noise. Second, SAEs are applied to generate deep high-level features for predicting the stock price. Third, high-level denoising features are fed into LSTM to forecast the next day’s closing price. Six market indices and their corresponding index futures are chosen to examine the performance of the proposed model. Results show that the proposed model outperforms other similar models in both predictive accuracy and profitability performance.
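The first stage of the framework, wavelet denoising, can be sketched with PyWavelets (the wavelet family, decomposition level and thresholding rule below are illustrative assumptions; the SAE and LSTM stages are not reproduced):

```python
import numpy as np
import pywt

prices = np.cumsum(np.random.randn(512)) + 100.0   # synthetic price path

coeffs = pywt.wavedec(prices, "haar", level=2)      # wavelet decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from the finest detail level
thresh = sigma * np.sqrt(2 * np.log(len(prices)))   # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "haar")    # reconstructed, denoised series
```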

Journal ArticleDOI
01 Feb 2017-PLOS ONE
TL;DR: The summary measure of overall physical activity is lower in older participants and age-related differences in activity are most prominent in the afternoon and evening, which lays the foundation for studies of physical activity and its health consequences.
Abstract: BACKGROUND: Physical activity has not been objectively measured in prospective cohorts with sufficiently large numbers to reliably detect associations with multiple health outcomes. Technological advances now make this possible. We describe the methods used to collect and analyse accelerometer-measured physical activity in over 100,000 participants of the UK Biobank study, and report variation by age, sex, day, time of day, and season. METHODS: Participants were approached by email to wear a wrist-worn accelerometer for seven days that was posted to them. Physical activity information was extracted from 100 Hz raw triaxial acceleration data after calibration, removal of gravity and sensor noise, and identification of wear/non-wear episodes. We report age- and sex-specific wear-time compliance and accelerometer-measured physical activity, overall and by hour-of-day, week-weekend day and season. RESULTS: 103,712 datasets were received (44.8% response), with a median wear-time of 6.9 days (IQR: 6.5-7.0). 96,600 participants (93.3%) provided valid data for physical activity analyses. Vector magnitude, a proxy for overall physical activity, was 7.5% (2.35 mg) lower per decade of age (Cohen's d = 0.9). Women had a higher vector magnitude than men, apart from those aged 45-54 yrs. There were major differences in vector magnitude by time of day (d = 0.66). Vector magnitude differences between week and weekend days (d = 0.12 for men, d = 0.09 for women) and between seasons (d = 0.27 for men, d = 0.15 for women) were small. CONCLUSIONS: It is feasible to collect and analyse objective physical activity data in large studies. The summary measure of overall physical activity is lower in older participants, and age-related differences in activity are most prominent in the afternoon and evening. This work lays the foundation for studies of physical activity and its health consequences. Our summary variables are part of the UK Biobank dataset and can be used by researchers as exposures, confounding factors or outcome variables in future analyses.
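The vector magnitude summary can be illustrated with a short NumPy sketch (a Euclidean-norm-minus-one-gravity reduction truncated at zero; this is a plausible reading of the described pipeline, not UK Biobank's exact processing code):

```python
import numpy as np

fs = 100                                             # 100 Hz sampling, as in the study
xyz = np.random.normal(0, 0.02, size=(fs * 60, 3))   # one minute of synthetic triaxial data (g)
xyz[:, 2] += 1.0                                     # gravity on one axis at rest

vm = np.sqrt((xyz ** 2).sum(axis=1)) - 1.0           # remove the 1 g gravity component
vm = np.clip(vm, 0, None) * 1000.0                   # truncate negatives; convert g to mg
print(f"mean vector magnitude: {vm.mean():.2f} mg")
```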

Journal ArticleDOI
08 Jun 2017-PLOS ONE
TL;DR: The results are consistent with a causal role of fasting insulin and low-density lipoprotein cholesterol in lung cancer etiology, as well as for BMI in squamous cell and small cell carcinoma, and the latter relation may be mediated by a previously unrecognized effect of obesity on smoking behavior.
Abstract: Background: Assessing the relationship between lung cancer and metabolic conditions is challenging because of the confounding effect of tobacco. Mendelian randomization (MR), or the use of genetic ...
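The core MR estimator behind such analyses is standard and can be sketched generically (hypothetical effect sizes, not this paper's data): each genetic variant yields a Wald ratio, and the ratios are combined by inverse-variance weighting:

```python
import numpy as np

beta_exposure = np.array([0.12, 0.08, 0.15])    # hypothetical SNP-exposure effects
beta_outcome = np.array([0.024, 0.018, 0.027])  # hypothetical SNP-outcome effects
se_outcome = np.array([0.010, 0.012, 0.011])    # standard errors of the outcome effects

wald = beta_outcome / beta_exposure             # per-variant causal estimates
weights = (beta_exposure / se_outcome) ** 2     # inverse-variance (IVW) weights
ivw = np.sum(weights * wald) / np.sum(weights)
print(f"IVW causal estimate: {ivw:.3f}")
```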

Journal ArticleDOI
17 Jan 2017-PLOS ONE
TL;DR: In this article, the authors performed a systematic review to assess the short-, middle- and long-term consequences of sarcopenia; the results showed a higher rate of mortality among sarcopenic subjects (pooled OR 3.60, 95% CI 2.96-4.37).
Abstract: Objective The purpose of this study was to perform a systematic review to assess the short-, middle- and long-term consequences of sarcopenia. Methods Prospective studies assessing the consequences of sarcopenia were searched across different electronic databases (MEDLINE, EMBASE, EBM Reviews, Cochrane Database of Systematic Reviews, EBM Reviews ACP Journal Club, EBM Reviews DARE and AMED). Only studies that used the definition of the European Working Group on Sarcopenia in Older People to diagnose sarcopenia were included. Study selection and data extraction were performed by two independent reviewers. For outcomes reported by three or more studies, a meta-analysis was performed. The study results are expressed as odds ratios (OR) with 95% CI. Results Of the 772 references identified through the database search, 17 were included in this systematic review. The number of participants in the included studies ranged from 99 to 6658, and the duration of follow-up varied from 3 months to 9.8 years. Eleven out of 12 studies assessed the impact of sarcopenia on mortality. The results showed a higher rate of mortality among sarcopenic subjects (pooled OR of 3.596 (95% CI 2.96–4.37)). The effect was higher in people aged 79 years or older compared with younger subjects (p = 0.02). Sarcopenia is also associated with functional decline (pooled OR of 6 studies 3.03 (95% CI 1.80–5.12)), a higher rate of falls (2/2 studies found a significant association) and a higher incidence of hospitalizations (1/1 study). The impact of sarcopenia on the incidence of fractures and the length of hospital stay was less clear (only 1/2 studies showed an association for both outcomes). Conclusion Sarcopenia is associated with several harmful outcomes, making this geriatric syndrome a real public health burden.
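The pooling step behind figures such as the mortality OR can be sketched as inverse-variance weighting on the log scale (hypothetical study values and a fixed-effect model for brevity; not the review's actual data):

```python
import numpy as np

ors = np.array([3.2, 4.1, 2.8])                          # hypothetical per-study odds ratios
ci_low, ci_high = np.array([1.9, 2.2, 1.5]), np.array([5.4, 7.6, 5.2])

log_or = np.log(ors)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)     # SE recovered from the 95% CI width
w = 1 / se ** 2                                          # inverse-variance weights
pooled = np.exp(np.sum(w * log_or) / np.sum(w))
half = 1.96 / np.sqrt(np.sum(w))
print(f"pooled OR {pooled:.2f} (95% CI {pooled * np.exp(-half):.2f}-{pooled * np.exp(half):.2f})")
```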

Journal ArticleDOI
05 Apr 2017-PLOS ONE
TL;DR: The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments and describes the progression from competitive to collaborative behavior when the incentive to cooperate is increased.
Abstract: Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.
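The temporal-difference update at the heart of Deep Q-Learning can be sketched in PyTorch (a generic fully-connected toy; the paper trains convolutional networks on raw Pong frames for two agents):

```python
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 8, 3, 0.99
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())   # periodically synced copy
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One update on a random minibatch of (s, a, r, s', done) transitions.
s, s2 = torch.randn(32, obs_dim), torch.randn(32, obs_dim)
a = torch.randint(n_actions, (32, 1))
r, done = torch.randn(32, 1), torch.zeros(32, 1)

with torch.no_grad():
    target = r + gamma * (1 - done) * target_net(s2).max(dim=1, keepdim=True).values
loss = nn.functional.mse_loss(q_net(s).gather(1, a), target)
opt.zero_grad(); loss.backward(); opt.step()
```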

Journal ArticleDOI
09 Jan 2017-PLOS ONE
TL;DR: It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.
Abstract: Despite social media use being one of the most popular activities among adolescents, prevalence estimates of (problematic) social media use among teenage samples are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group and reported low self-esteem, a high level of depression symptoms, and elevated social media use. Results also demonstrated that the BSMAS has appropriate psychometric properties. It is concluded that adolescents at risk of problematic social media use should be targeted by school-based prevention and intervention programs.

Journal ArticleDOI
25 Sep 2017-PLOS ONE
TL;DR: The MRI Quality Control tool (MRIQC), a tool for extracting quality measures and fitting a binary (accept/exclude) classifier, is introduced, which performs with high accuracy in intra-site prediction, but performance on unseen sites leaves space for improvement.
Abstract: Quality control of MRI is essential for excluding problematic acquisitions and avoiding bias in subsequent image processing and analysis. Visual inspection is subjective and impractical for large scale datasets. Although automated quality assessments have been demonstrated on single-site datasets, it is unclear that solutions can generalize to unseen data acquired at new sites. Here, we introduce the MRI Quality Control tool (MRIQC), a tool for extracting quality measures and fitting a binary (accept/exclude) classifier. Our tool can be run both locally and as a free online service via the OpenNeuro.org portal. The classifier is trained on a publicly available, multi-site dataset (17 sites, N = 1102). We perform model selection evaluating different normalization and feature exclusion approaches aimed at maximizing across-site generalization and estimate an accuracy of 76%±13% on new sites, using leave-one-site-out cross-validation. We confirm that result on a held-out dataset (2 sites, N = 265) also obtaining a 76% accuracy. Even though the performance of the trained classifier is statistically above chance, we show that it is susceptible to site effects and unable to account for artifacts specific to new sites. MRIQC performs with high accuracy in intra-site prediction, but performance on unseen sites leaves space for improvement which might require more labeled data and new approaches to the between-site variability. Overcoming these limitations is crucial for a more objective quality assessment of neuroimaging data, and to enable the analysis of extremely large and multi-site samples.
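Leave-one-site-out cross-validation, the evaluation scheme used above, is easy to express with scikit-learn's grouped splitters (synthetic stand-in features below; MRIQC's real inputs are image quality metrics):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))          # quality measures per scan (stand-in)
y = rng.integers(0, 2, size=600)        # accept / exclude labels
sites = rng.integers(0, 17, size=600)   # acquisition site of each scan

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         groups=sites, cv=LeaveOneGroupOut())
print(f"accuracy on held-out sites: {scores.mean():.2f} +/- {scores.std():.2f}")
```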

Journal ArticleDOI
05 May 2017-PLOS ONE
TL;DR: It was found that inoculating messages that explain the flawed argumentation technique used in the misinformation or that highlight the scientific consensus on climate change were effective in neutralizing those adverse effects of misinformation.
Abstract: Misinformation can undermine a well-functioning democracy. For example, public misconceptions about climate change can lead to lowered acceptance of the reality of climate change and lowered support for mitigation policies. This study experimentally explored the impact of misinformation about climate change and tested several pre-emptive interventions designed to reduce the influence of misinformation. We found that false-balance media coverage (giving contrarian views equal voice with climate scientists) lowered perceived consensus overall, although the effect was greater among free-market supporters. Likewise, misinformation that confuses people about the level of scientific agreement regarding anthropogenic global warming (AGW) had a polarizing effect, with free-market supporters reducing their acceptance of AGW and those with low free-market support increasing their acceptance of AGW. However, we found that inoculating messages that (1) explain the flawed argumentation technique used in the misinformation or that (2) highlight the scientific consensus on climate change were effective in neutralizing those adverse effects of misinformation. We recommend that climate communication messages should take into account ways in which scientific content can be distorted, and include pre-emptive inoculation messages.

Journal ArticleDOI
05 Jun 2017-PLOS ONE
TL;DR: Two new U-Gompertz models are proposed; they are special cases of the Unified-Richards (U-Richards) model and thus belong to the Richards family of U-models, which have a set of parameters that are comparable across models in the family without conversion equations.
Abstract: The Gompertz model is well known and widely used in many aspects of biology. It has been frequently used to describe the growth of animals and plants, as well as the number or volume of bacteria and cancer cells. Numerous parametrisations and re-parametrisations of varying usefulness are found in the literature, whereof the Gompertz-Laird is one of the more commonly used. Here, we review, present, and discuss the many re-parametrisations and some parameterisations of the Gompertz model, which we divide into Ti (type I)- and W0 (type II)-forms. In the W0-form a starting-point parameter, meaning birth or hatching value (W0), replaces the inflection-time parameter (Ti). We also propose new "unified" versions (U-versions) of both the traditional Ti -form and a simplified W0-form. In these, the growth-rate constant represents the relative growth rate instead of merely an unspecified growth coefficient. We also present U-versions where the growth-rate parameters return absolute growth rate (instead of relative). The new U-Gompertz models are special cases of the Unified-Richards (U-Richards) model and thus belong to the Richards family of U-models. As U-models, they have a set of parameters, which are comparable across models in the family, without conversion equations. The improvements are simple, and may seem trivial, but are of great importance to those who study organismal growth, as the two new U-Gompertz forms give easy and fast access to all shape parameters needed for describing most types of growth following the shape of the Gompertz model.
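For reference, the classic Ti-form discussed above can be written as W(t) = A·exp(−exp(−k(t − Ti))); the paper's U-versions re-express the rate constant k as a relative or absolute growth rate (those exact re-parametrisations are not reproduced in this sketch):

```python
import numpy as np

def gompertz_ti(t, A, k, Ti):
    """Ti-form Gompertz curve: asymptote A, growth coefficient k, inflection time Ti."""
    return A * np.exp(-np.exp(-k * (t - Ti)))

t = np.linspace(0, 20, 5)
print(gompertz_ti(t, A=100.0, k=0.5, Ti=8.0))
# At t = Ti the curve passes through A/e, i.e. about 36.8% of the asymptote.
```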

Journal ArticleDOI
23 Jan 2017-PLOS ONE
TL;DR: Comparing the exosomes extracted by three different exosome isolation kits and differential ultracentrifugation using six different volumes of a non-cancerous human serum shows that the three kits are viable alternatives to UC, even when limited amounts of biological samples are available.
Abstract: Exosomes play a role in cell-to-cell signaling and serve as possible biomarkers. Isolating exosomes with reliable quality and substantial concentration is a major challenge. Our purpose is to compare the exosomes extracted by three different exosome isolation kits (miRCURY, ExoQuick, and Invitrogen Total Exosome Isolation Reagent) and differential ultracentrifugation (UC) using six different volumes of a non-cancerous human serum (5 ml, 1 ml, 500 μl, 250 μl, 100 μl, and 50 μl) and three different volumes (1 ml, 500 μl and 100 μl) of six individual commercial serum samples collected from human donors. The smaller starting volumes (100 μl and 50 μl) are used to mimic conditions of limited availability of heterogeneous biological samples. The isolated exosomes were characterized based upon size, quantity, zeta potential, CD63 and CD9 protein expression, and exosomal RNA (exRNA) quality and quantity using several complementary methods: nanoparticle tracking analysis (NTA) with ZetaView, western blot, transmission electron microscopy (TEM), the Agilent Bioanalyzer system, and droplet digital PCR (ddPCR). Our NTA results showed that all isolation techniques produced exosomes within the expected size range (40–150 nm). The three kits, though, produced a significantly higher yield (80–300 fold) of exosomes as compared to UC for all serum volumes, except 5 mL. We also found that exosomes isolated by the different techniques and serum volumes had similar zeta potentials to previous studies. Western blot analysis and TEM immunogold labelling confirmed the expression of two common exosomal protein markers, CD63 and CD9, in samples isolated by all techniques. All exosome isolations yielded high quality exRNA, containing mostly small RNA with a peak between 25 and 200 nucleotides in size. ddPCR results indicated that exosomes isolated from similar serum volumes but different isolation techniques rendered similar concentrations of two selected exRNA: hsa-miR-16 and hsa-miR-451. In summary, the three commercial exosome isolation kits are viable alternatives to UC, even when limited amounts of biological samples are available.

Journal ArticleDOI
07 Feb 2017-PLOS ONE
TL;DR: Patients reported more beneficial health behaviours, fewer symptoms, higher quality of life, and greater satisfaction with treatment when they had higher trust in their health care professional.
Abstract: Objective To examine whether patients’ trust in the health care professional is associated with health outcomes. Study selection We searched 4 major electronic databases for studies that reported quantitative data on the association between trust in the health care professional and health outcome. We screened the full-texts of 400 publications and included 47 studies in our meta-analysis. Data extraction and data synthesis We conducted random effects meta-analyses and meta-regressions and calculated correlation coefficients with corresponding 95% confidence intervals. Two independent researchers assessed the quality of the included studies using the Strengthening the Reporting of Observational Studies in Epidemiology guidelines. Results Overall, we found a small to moderate correlation between trust and health outcomes (r = 0.24, 95% CI: 0.19–0.29). Subgroup analyses revealed a moderate correlation between trust and self-rated subjective health outcomes (r = 0.30, 0.24–0.35). Correlations between trust and objective (r = -0.02, -0.08–0.03) as well as observer-rated outcomes (r = 0.10, -0.16–0.36) were non-significant. Exploratory analyses showed a large correlation between trust and patient satisfaction and somewhat smaller correlations with health behaviours, quality of life and symptom severity. Heterogeneity was small to moderate across the analyses. Conclusions From a clinical perspective, patients reported more beneficial health behaviours, fewer symptoms and higher quality of life, and were more satisfied with treatment, when they had higher trust in their health care professional. There was evidence for upward bias in the summarized results. Prospective studies are required to deepen our understanding of the complex interplay between trust and health outcomes.
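The pooling of correlation coefficients reported above is conventionally done on Fisher's z scale; a minimal sketch follows (hypothetical study values and fixed-effect weights for brevity, whereas the review fitted random-effects models):

```python
import numpy as np

r = np.array([0.30, 0.22, 0.18])   # hypothetical per-study correlations
n = np.array([120, 340, 210])      # per-study sample sizes

z = np.arctanh(r)                  # Fisher z transform
w = n - 3                          # inverse-variance weights, since Var(z) = 1/(n - 3)
z_pooled = np.sum(w * z) / np.sum(w)
se = 1 / np.sqrt(np.sum(w))
lo, hi = np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se)
print(f"pooled r = {np.tanh(z_pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```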

Journal ArticleDOI
21 Dec 2017-PLOS ONE
TL;DR: This work presents an extended review evaluating six read-mapping methods (including pseudo-alignment and quasi-mapping) and nine methods of differential expression analysis from RNA-Seq data; the results indicate that the choice of mapping method has minimal impact on the final DEG analysis.
Abstract: The correct identification of differentially expressed genes (DEGs) between specific conditions is key to understanding phenotypic variation. High-throughput transcriptome sequencing (RNA-Seq) has become the main option for these studies. Thus, the number of methods and software tools for differential expression analysis from RNA-Seq data has also increased rapidly. However, there is no consensus about the most appropriate pipeline or protocol for identifying differentially expressed genes from RNA-Seq data. This work presents an extended review on the topic that includes the evaluation of six methods of mapping reads, including pseudo-alignment and quasi-mapping, and nine methods of differential expression analysis from RNA-Seq data. The adopted methods were evaluated based on real RNA-Seq data, using qRT-PCR data as the reference (gold standard). As part of the results, we developed a software tool that performs all the analyses presented in this work, which is freely available at https://github.com/costasilvati/consexpression. The results indicated that mapping methods have minimal impact on the final DEG analysis, given that the adopted data have an annotated reference genome. For the adopted experimental model, the DEG identification methods with the most consistent results were limma+voom, NOISeq and DESeq2. Additionally, a consensus among five DEG identification methods yields a list of DEGs with high accuracy, indicating that the combination of different methods can produce more suitable results. The consensus option is also included in the available software.
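The consensus idea, calling a gene differentially expressed only when enough individual methods agree, reduces to a vote count over the per-method DEG lists (hypothetical gene sets below; the authors' consexpression tool implements the full workflow):

```python
from collections import Counter

deg_lists = {                      # hypothetical per-method DEG calls
    "limma+voom": {"geneA", "geneB", "geneC"},
    "DESeq2": {"geneA", "geneB", "geneD"},
    "NOISeq": {"geneA", "geneC", "geneD"},
    "edgeR": {"geneA", "geneB"},
    "baySeq": {"geneB", "geneE"},
}

min_votes = 3                      # require agreement among at least 3 methods
votes = Counter(g for degs in deg_lists.values() for g in degs)
consensus = sorted(g for g, v in votes.items() if v >= min_votes)
print(consensus)                   # ['geneA', 'geneB']
```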

Journal ArticleDOI
29 Jun 2017-PLOS ONE
TL;DR: This is the first review to identify the 81 outcome measures the research community uses for disease-modifying trials in mild-to-moderate dementia; the recommended core outcomes were cognition, as the fundamental deficit in dementia, and serial structural MRIs to indicate disease modification.
Abstract: Background There are no disease-modifying treatments for dementia. There is also no consensus on disease modifying outcomes. We aimed to produce the first evidence-based consensus on core outcome measures for trials of disease modification in mild-to-moderate dementia. Methods and findings We defined disease-modification interventions as those aiming to change the underlying pathology. We systematically searched electronic databases and previous systematic reviews for published and ongoing trials of disease-modifying treatments in mild-to-moderate dementia. We included 149 of the 22,918 references found, yielding 81 outcome measures from 125 trials. Trials involved participants with Alzheimer’s disease (AD) alone (n = 111), or AD and mild cognitive impairment (n = 8), and three involved vascular dementia. We divided outcomes by the domain measured (cognition, activities of daily living, biological markers, neuropsychiatric symptoms, quality of life, global). We calculated the number of trials and of participants using each outcome. We detailed psychometric properties of each outcome. We sought the views of people living with dementia and family carers in three cities through Alzheimer’s Society focus groups. Attendees at a consensus conference (experts in dementia research, disease-modification and harmonisation measures) decided on the core set of outcomes using these results. Recommended core outcomes were cognition, as the fundamental deficit in dementia, and serial structural MRIs to indicate disease modification. Cognition should be measured by the Mini Mental State Examination or the Alzheimer's Disease Assessment Scale-Cognitive Subscale. MRIs would be optional for patients. We also made recommendations for measuring important, but non-core, domains which may not change despite disease modification. Limitations Most trials were about AD. Specific instruments may be superseded. We searched one database for psychometric properties. Interpretation This is the first review to identify the 81 outcome measures the research community uses for disease-modifying trials in mild-to-moderate dementia. Our recommendations will facilitate designing, comparing and meta-analysing disease modification trials in mild-to-moderate dementia, increasing their value.

Journal ArticleDOI
21 Dec 2017-PLOS ONE
TL;DR: ABR is associated with a high mortality risk and increased economic costs, with ESKAPE pathogens implicated as the main cause of increased mortality.
Abstract: Introduction Despite evidence of the high prevalence of antibiotic resistant infections in developing countries, studies on the clinical and economic impact of antibiotic resistance (ABR) to inform interventions to contain its emergence and spread are limited. The aim of this study was to analyze the published literature on the clinical and economic implications of ABR in developing countries. Methods A systematic search was carried out in Medline via PubMed and Web of Science and included studies published from January 01, 2000 to December 09, 2016. All papers were considered, and a quality assessment was performed using the Newcastle-Ottawa quality assessment scale (NOS). Results Of 27,033 papers identified, 40 studies met the strict inclusion and exclusion criteria and were finally included in the qualitative and quantitative analysis. Mortality was associated with resistant bacteria, and statistical significance was evident with an odds ratio (OR) of 2.828 (95% CI 2.231–3.584; p = 0.000). ESKAPE pathogens were associated with the highest risk of mortality, with high statistical significance (OR 3.217; 95% CI 2.395–4.321; p = 0.001). Eight studies showed that ABR, and especially antibiotic-resistant ESKAPE bacteria, significantly increased health care costs. Conclusion ABR is associated with a high mortality risk and increased economic costs, with ESKAPE pathogens implicated as the main cause of increased mortality. Patients with non-communicable disease co-morbidities were identified as high-risk populations.

Journal ArticleDOI
17 Oct 2017-PLOS ONE
TL;DR: Stillbirth risk increases with increasing maternal age and is not wholly explained by maternal co-morbidities or use of ART; it is proposed that placental dysfunction may mediate adverse pregnancy outcome in AMA.
Abstract: Background Advanced maternal age (AMA; ≥35 years) is an increasing trend and is reported to be associated with various pregnancy complications. Objective To determine the risk of stillbirth and other adverse pregnancy outcomes in women of AMA. Search strategy Embase, Medline (Ovid), Cochrane Database of Systematic Reviews, ClinicalTrials.gov, LILACS and conference proceedings were searched from ≥2000. Selection criteria Cohort and case-control studies reporting data on one or more co-primary outcomes (stillbirth or fetal growth restriction (FGR)) and/or secondary outcomes in mothers ≥35 years and <35 years. Data collection and analysis The effect of age on pregnancy outcome was investigated by random effects meta-analysis and meta-regression. Stillbirth rates were correlated to rates of maternal diabetes, obesity, hypertension and use of assisted reproductive therapies (ART). Main results Out of 1,940 identified titles, 63 cohort studies and 12 case-control studies were included in the meta-analysis. AMA increased the risk of stillbirth (OR 1.75, 95% CI 1.62 to 1.89) with a population attributable risk of 4.7%. Similar trends were seen for risks of FGR, neonatal death, neonatal intensive care unit (NICU) admission and gestational diabetes mellitus (GDM). The relationship between AMA and stillbirth was not related to maternal morbidity or ART. Conclusions Stillbirth risk increases with increasing maternal age. This is not wholly explained by maternal co-morbidities and use of ART. We propose that placental dysfunction may mediate adverse pregnancy outcome in AMA. Further prospective studies are needed to directly test this hypothesis.
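The population attributable risk quoted above follows Levin's formula, PAR = p(RR − 1) / (1 + p(RR − 1)). The sketch below uses the study's OR of 1.75 as an approximation of relative risk with purely hypothetical exposure prevalences (the prevalence underlying the reported 4.7% is not given here):

```python
def population_attributable_risk(prevalence, rr):
    """Levin's formula for the population attributable risk."""
    return prevalence * (rr - 1) / (1 + prevalence * (rr - 1))

for p in (0.05, 0.10, 0.20):   # hypothetical shares of births to mothers >= 35
    print(f"prevalence {p:.0%}: PAR = {population_attributable_risk(p, 1.75):.1%}")
```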

Journal ArticleDOI
02 Nov 2017-PLOS ONE
TL;DR: Investigating when barley cultivation dispersed from southwest Asia to regions of eastern Asia and how the eastern spring barley evolved in this context indicates that the eastern dispersals of wheat and barley were distinct in both space and time.
Abstract: Today, farmers in many regions of eastern Asia sow their barley grains in the spring and harvest them in the autumn of the same year (spring barley). However, when it was first domesticated in southwest Asia, barley was grown between the autumn and subsequent spring (winter barley), to complete its life cycle before the summer drought. The question of when the eastern barley shifted from the original winter habit to flexible growing schedules is of significance in terms of understanding its spread. This article investigates when barley cultivation dispersed from southwest Asia to regions of eastern Asia and how the eastern spring barley evolved in this context. We report 70 new radiocarbon measurements obtained directly from barley grains recovered from archaeological sites in eastern Eurasia. Our results indicate that the eastern dispersals of wheat and barley were distinct in both space and time. We infer that barley had been cultivated in a range of markedly contrasting environments by the second millennium BC. In this context, we consider the distribution of known haplotypes of a flowering-time gene in barley, Ppd-H1, and infer that the distributions of those haplotypes may reflect the early dispersal of barley. These patterns of dispersal resonate with second- and first-millennium BC textual records documenting sowing and harvesting times for barley in central/eastern China.

Journal ArticleDOI
08 Jun 2017-PLOS ONE
TL;DR: The primary driver of anthropogenic mangrove loss was found to be the conversion of mangrove to aquaculture/agriculture, although substantial advance of mangroves was also evident in many regions.
Abstract: For the period 1996-2010, we provide the first indication of the drivers behind mangrove land cover and land use change across the (pan-)tropics using time-series Japanese Earth Resources Satellite (JERS-1) Synthetic Aperture Radar (SAR) and Advanced Land Observing Satellite (ALOS) Phased Array-type L-band SAR (PALSAR) data. Multi-temporal radar mosaics were manually interpreted for evidence of loss and gain in forest extent and its associated driver. Mangrove loss as a consequence of human activities was observed across their entire range. Between 1996 and 2010, 12% of the 1,168 1°×1° radar mosaic tiles examined contained evidence of mangrove loss as a consequence of anthropogenic degradation, with this increasing to 38% when combined with evidence of anthropogenic activity prior to 1996. The greatest proportion of loss was observed in Southeast Asia, whereby approximately 50% of the tiles in the region contained evidence of mangrove loss, corresponding to 18.4% of the global mangrove forest tiles. Southeast Asia contained the greatest proportion (33.8%) of global mangrove forest. The primary driver of anthropogenic mangrove loss was found to be the conversion of mangrove to aquaculture/agriculture, although substantial advance of mangroves was also evident in many regions.

Journal ArticleDOI
29 Dec 2017-PLOS ONE
TL;DR: It is appropriate to ask healthy adults: “If you would shoot a ball on a target, which leg would you use to shoot the ball?” to determine leg dominance in bilateral mobilizing tasks, but a considerable number of the participants switched the dominant leg in a unilateral stabilizing task.
Abstract: Context For decades, leg dominance has been suggested to be important in rehabilitation and return to play in athletes with anterior cruciate ligament injuries. However, an ideal method to determine leg dominance in relation to task performance is still lacking. Objective To test the agreement between self-reported and observed leg dominance in bilateral mobilizing and unilateral stabilizing tasks, and to assess whether the dominant leg switches between bilateral mobilizing tasks and unilateral stabilizing tasks. Design Cross-sectional study. Participants Forty-one healthy adults: 21 men aged 36 ± 17 years and 20 women aged 36 ± 15 years. Measurement and analysis Participants self-reported leg dominance in the Waterloo Footedness Questionnaire-Revised (WFQ-R), and leg dominance was observed during performance of four bilateral mobilizing tasks and two unilateral stabilizing tasks. Descriptive statistics and crosstabs were used to report the percentages of agreement. Results The leg used to kick a ball had 100% agreement between the self-reported and observed dominant leg for both men and women. The dominant leg in kicking a ball and standing on one leg was the same in 66.7% of the men and 85.0% of the women. The agreement with jumping with one leg was lower: 47.6% for men and 70.0% for women. Conclusions It is appropriate to ask healthy adults: “If you would shoot a ball on a target, which leg would you use to shoot the ball?” to determine leg dominance in bilateral mobilizing tasks. However, a considerable number of the participants switched the dominant leg in a unilateral stabilizing task.

Journal ArticleDOI
09 Nov 2017-PLOS ONE
TL;DR: The findings in this study suggest that school health programs promoting active lifestyles among children and adolescents may contribute to the improvement of health-related quality of life.
Abstract: Background The association between physical activity, sedentary behavior and health-related quality of life in children and adolescents has been mostly investigated in those young people with chronic disease conditions. No systematic review to date has synthesized the relationship between physical activity, sedentary behavior and health-related quality of life in the general healthy population of children and adolescents. The purpose of this study was to review systematically the existing literature that evaluated the relations between physical activity, sedentary behavior and health-related quality of life in the general population of children and adolescents. Methods We conducted a computer search for English language literature from databases of MEDLINE, EMBASE, PSYCINFO and PubMed-related articles as well as the reference lists of existing literature between 1946 and the second week of January 2017 to retrieve eligible studies. We included the studies that assessed associations between physical activity and/or sedentary behavior and health-related quality of life among the general population of children and adolescents aged 3-18 years. The study design included cross-sectional, longitudinal and health intervention studies. We excluded the studies that examined associations between physical activity, sedentary behavior and health-related quality of life among children and adolescents with specific chronic diseases, and other studies and reports including reviews, meta-analyses, study protocols, comments, letters, case reports and guidelines. We followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement in the reporting of this review. The risk of bias of the primary studies was assessed by the Newcastle-Ottawa Scale. We synthesized the difference in health-related quality of life scores between different levels of physical activity and sedentary time. Results In total, 31 studies met the inclusion criteria and were synthesized in the review. Most of the included studies used a cross-sectional design (n = 21). There were six longitudinal studies and three school-based physical activity intervention studies. One study used both cross-sectional and longitudinal designs. We found that higher levels of physical activity were associated with better health-related quality of life, and increased time of sedentary behavior was linked to lower health-related quality of life, among children and adolescents. A dose-response relation between physical activity, sedentary behavior and health-related quality of life was observed in several studies, suggesting that the higher the frequency of physical activity or the less time being sedentary, the better the health-related quality of life. Conclusions The findings in this study suggest that school health programs promoting active lifestyles among children and adolescents may contribute to the improvement of health-related quality of life. Future research is needed to extend studies on longitudinal relationships between physical activity, sedentary behavior and health-related quality of life, and on effects of physical activity interventions on health-related quality of life among children and youth.

Journal ArticleDOI
04 Aug 2017-PLOS ONE
TL;DR: It could be that young adults with personality type A who experience high stress levels and low mood lack positive stress-coping mechanisms and mood-management techniques and are thus highly susceptible to smartphone addiction.
Abstract: Objectives The study aims to assess the prevalence of smartphone addiction symptoms, and to ascertain whether depression or anxiety, independently, contributes to smartphone addiction level among a sample of Lebanese university students, while adjusting simultaneously for important sociodemographic, academic, lifestyle, personality trait, and smartphone-related variables. Methods A random sample of 688 undergraduate university students (mean age = 20.64 ± 1.88 years; 53% men) completed a survey composed of a) questions about socio-demographics, academics, lifestyle behaviors, personality type, and smartphone use-related variables; b) the 26-item Smartphone Addiction Inventory (SPAI) Scale; and c) brief screeners of depression and anxiety (PHQ-2 and GAD-2), which constitute the two core DSM-IV items for major depressive disorder and generalized anxiety disorder, respectively. Results Prevalence rates of smartphone-related compulsive behavior, functional impairment, tolerance and withdrawal symptoms were substantial. 35.9% felt tired during daytime due to late-night smartphone use, 38.1% acknowledged decreased sleep quality, and 35.8% slept less than four hours due to smartphone use more than once. Whereas gender, residence, work hours per week, faculty, academic performance (GPA), lifestyle habits (smoking and alcohol drinking), and religious practice were not associated with the smartphone addiction score, personality type A, class (year 2 vs. year 3), younger age at first smartphone use, excessive use during a weekday, using it for entertainment and not using it to call family members, and having depression or anxiety showed statistically significant associations with smartphone addiction. Depression and anxiety scores emerged as independent positive predictors of smartphone addiction, after adjustment for confounders. Conclusion Several independent positive predictors of smartphone addiction emerged, including depression and anxiety. It could be that young adults with personality type A experiencing high stress levels and low mood lack positive stress-coping mechanisms and mood-management techniques and are thus highly susceptible to smartphone addiction.