
Showing papers in "PLOS Medicine in 2018"


Journal ArticleDOI
TL;DR: Pneumonia-screening CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.
Abstract: Background There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task. Methods and findings A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855–0.866) on the joint MSH–NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). 
The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927–0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745–0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH–NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect the hospital system of origin for 99.95% of NIH (22,050/22,062) and 99.98% of MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system–specific biases. Conclusion Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.
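The prevalence gap described above can be made concrete: when one pooled site has roughly 34% pneumonia and the other roughly 1%, a "classifier" that merely outputs which hospital a radiograph came from already earns a high AUC. A minimal sketch with made-up cohort sizes (the study's 0.861 comes from the actual pooled proportions, not these numbers):

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of AUC: P(score_pos > score_neg) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical cohorts: score = 1 if the radiograph comes from the
# high-prevalence site (MSH-like, 34.2% pneumonia), 0 otherwise (NIH-like, 1.2%).
pos = [1] * 342 + [0] * 12    # pneumonia cases, by site of origin
neg = [1] * 658 + [0] * 988   # non-cases, by site of origin
print(round(auc(pos, neg), 3))  # ~0.78: site identity alone is informative
```

The point of the sketch is only that any site-identifying signal a CNN picks up (hardware, processing, department markers) acts as a prevalence proxy and inflates internal AUC.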

943 citations


Journal ArticleDOI
TL;DR: CheXNeXt, a convolutional neural network that concurrently detects 14 different pathologies, including pneumonia, pleural effusion, pulmonary masses, and nodules, in frontal-view chest radiographs, achieved radiologist-level performance on 11 pathologies and fell short of radiologist-level performance on the remaining 3.
Abstract: Background Chest radiograph interpretation is critical for the detection of thoracic diseases, including tuberculosis and lung cancer, which affect millions of people worldwide each year. This time-consuming task typically requires expert radiologists to read the images, leading to fatigue-based diagnostic error and lack of diagnostic expertise in areas of the world where radiologists are not available. Recently, deep learning approaches have been able to achieve expert-level performance in medical image interpretation tasks, powered by large network architectures and fueled by the emergence of large labeled datasets. The purpose of this study is to investigate the performance of a deep learning algorithm on the detection of pathologies in chest radiographs compared with practicing radiologists.

796 citations


Journal ArticleDOI
TL;DR: When compared with vaginal delivery, cesarean delivery is associated with a reduced rate of urinary incontinence and pelvic organ prolapse, but this should be weighed against the association with increased risks for fertility, future pregnancy, and long-term childhood outcomes.
Abstract: Background Cesarean birth rates continue to rise worldwide with recent (2016) reported rates of 24.5% in Western Europe, 32% in North America, and 41% in South America. The objective of this systematic review is to describe the long-term risks and benefits of cesarean delivery for mother, baby, and subsequent pregnancies. The primary maternal outcome was pelvic floor dysfunction, the primary baby outcome was asthma, and the primary subsequent pregnancy outcome was perinatal death. Methods and findings Medline, Embase, Cochrane, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases were systematically searched for published studies in human subjects (last search 25 May 2017), supplemented by manual searches. Included studies were randomized controlled trials (RCTs) and large (more than 1,000 participants) prospective cohort studies with greater than or equal to one-year follow-up comparing outcomes of women delivering by cesarean delivery and by vaginal delivery. Two assessors screened 30,327 abstracts. Studies were graded for risk of bias by two assessors using the Scottish Intercollegiate Guideline Network (SIGN) Methodology Checklist and the Risk of Bias Assessment tool for Non-Randomized Studies. Results were pooled in fixed effects meta-analyses or in random effects models when significant heterogeneity was present (I2 ≥ 40%). One RCT and 79 cohort studies (all from high income countries) were included, involving 29,928,274 participants. Compared to vaginal delivery, cesarean delivery was associated with decreased risk of urinary incontinence, odds ratio (OR) 0.56 (95% CI 0.47 to 0.66; n = 58,900; 8 studies) and pelvic organ prolapse (OR 0.29, 0.17 to 0.51; n = 39,208; 2 studies). Children delivered by cesarean delivery had increased risk of asthma up to the age of 12 years (OR 1.21, 1.11 to 1.32; n = 887,960; 13 studies) and obesity up to the age of 5 years (OR 1.59, 1.33 to 1.90; n = 64,113; 6 studies). 
Pregnancy after cesarean delivery was associated with increased risk of miscarriage (OR 1.17, 1.03 to 1.32; n = 151,412; 4 studies) and stillbirth (OR 1.27, 1.15 to 1.40; n = 703,562; 8 studies), but not perinatal mortality (OR 1.11, 0.89 to 1.39; n = 91,429; 2 studies). Pregnancy following cesarean delivery was associated with increased risk of placenta previa (OR 1.74, 1.62 to 1.87; n = 7,101,692; 10 studies), placenta accreta (OR 2.95, 1.32 to 6.60; n = 705,108; 3 studies), and placental abruption (OR 1.38, 1.27 to 1.49; n = 5,667,160; 6 studies). This is a comprehensive review adhering to a registered protocol, and guidelines for the Meta-analysis of Observational Studies in Epidemiology were followed, but it is based on predominantly observational data, and in some meta-analyses, between-study heterogeneity is high; therefore, causation cannot be inferred and the results should be interpreted with caution. Conclusions When compared with vaginal delivery, cesarean delivery is associated with a reduced rate of urinary incontinence and pelvic organ prolapse, but this should be weighed against the association with increased risks for fertility, future pregnancy, and long-term childhood outcomes. This information could be valuable in counselling women on mode of delivery.
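The review's pooling rule (fixed effects unless I² ≥ 40%) can be sketched with inverse-variance weighting on the log-OR scale, recovering each study's standard error from its 95% CI. The study-level ORs below are illustrative inventions, not the review's data:

```python
import math

def meta_analysis(ors, cis):
    """Inverse-variance fixed-effect pooling of odds ratios on the log scale.
    Returns (pooled OR, I^2 heterogeneity in %). Each 95% CI gives the
    standard error via SE = (ln(hi) - ln(lo)) / (2 * 1.96)."""
    logs = [math.log(o) for o in ors]
    ws = [(2 * 1.96 / (math.log(hi) - math.log(lo))) ** 2 for lo, hi in cis]
    pooled = sum(w * l for w, l in zip(ws, logs)) / sum(ws)
    q = sum(w * (l - pooled) ** 2 for w, l in zip(ws, logs))  # Cochran's Q
    df = len(ors) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), i2

# Invented study ORs for one outcome:
or_pooled, i2 = meta_analysis(
    [0.50, 0.60, 0.58],
    [(0.40, 0.62), (0.48, 0.75), (0.45, 0.74)],
)
# The review's decision rule: switch to random effects when I^2 >= 40.
model = "random" if i2 >= 40 else "fixed"
```

A full random-effects (e.g., DerSimonian-Laird) pooling would additionally inflate each SE by a between-study variance term; the decision threshold is the part taken from the paper.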

459 citations


Journal ArticleDOI
TL;DR: A deep learning model for detecting general abnormalities and specific diagnoses (anterior cruciate ligament [ACL] tears and meniscal tears) on knee MRI exams is developed and the assertion that deep learning models can improve the performance of clinical experts during medical imaging interpretation is supported.
Abstract: Background Magnetic resonance imaging (MRI) of the knee is the preferred method for diagnosing knee injuries. However, interpretation of knee MRI is time-intensive and subject to diagnostic error and variability. An automated system for interpreting knee MRI could prioritize high-risk patients and assist clinicians in making diagnoses. Deep learning methods, in being able to automatically learn layers of features, are well suited for modeling the complex relationships between medical images and their interpretations. In this study we developed a deep learning model for detecting general abnormalities and specific diagnoses (anterior cruciate ligament [ACL] tears and meniscal tears) on knee MRI exams. We then measured the effect of providing the model’s predictions to clinical experts during interpretation. Methods and findings Our dataset consisted of 1,370 knee MRI exams performed at Stanford University Medical Center between January 1, 2001, and December 31, 2012 (mean age 38.0 years; 569 [41.5%] female patients). The majority vote of 3 musculoskeletal radiologists established reference standard labels on an internal validation set of 120 exams. We developed MRNet, a convolutional neural network for classifying MRI series and combined predictions from 3 series per exam using logistic regression. In detecting abnormalities, ACL tears, and meniscal tears, this model achieved area under the receiver operating characteristic curve (AUC) values of 0.937 (95% CI 0.895, 0.980), 0.965 (95% CI 0.938, 0.993), and 0.847 (95% CI 0.780, 0.914), respectively, on the internal validation set. We also obtained a public dataset of 917 exams with sagittal T1-weighted series and labels for ACL injury from Clinical Hospital Centre Rijeka, Croatia. 
On the external validation set of 183 exams, the MRNet trained on Stanford sagittal T2-weighted series achieved an AUC of 0.824 (95% CI 0.757, 0.892) in the detection of ACL injuries with no additional training, while an MRNet trained on the rest of the external data achieved an AUC of 0.911 (95% CI 0.864, 0.958). We additionally measured the specificity, sensitivity, and accuracy of 9 clinical experts (7 board-certified general radiologists and 2 orthopedic surgeons) on the internal validation set both with and without model assistance. Using a 2-sided Pearson’s chi-squared test with adjustment for multiple comparisons, we found no significant differences between the performance of the model and that of unassisted general radiologists in detecting abnormalities. General radiologists achieved significantly higher sensitivity in detecting ACL tears (p-value = 0.002; q-value = 0.019) and significantly higher specificity in detecting meniscal tears (p-value = 0.003; q-value = 0.019). Using a 1-tailed t test on the change in performance metrics, we found that providing model predictions significantly increased clinical experts’ specificity in identifying ACL tears (p-value < 0.001; q-value = 0.006). The primary limitations of our study include lack of surgical ground truth and the small size of the panel of clinical experts. Conclusions Our deep learning model can rapidly generate accurate clinical pathology classifications of knee MRI exams from both internal and external datasets. Moreover, our results support the assertion that deep learning models can improve the performance of clinical experts during medical imaging interpretation. Further research is needed to validate the model prospectively and to determine its utility in the clinical setting.
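The exam-level prediction in this study comes from logistic regression over the three per-series CNN probabilities. A sketch with placeholder weights (the real coefficients are fit on training exams; the series order and values here are assumptions):

```python
import math

def combine_series(p_sagittal, p_coronal, p_axial, w=(1.2, 1.0, 1.5), b=-1.8):
    """Combine per-series CNN probabilities into one exam-level probability
    via logistic regression. Weights and intercept are made-up placeholders;
    in the paper they are learned from the training exams."""
    z = b + w[0] * p_sagittal + w[1] * p_coronal + w[2] * p_axial
    return 1 / (1 + math.exp(-z))

# Three hypothetical per-series probabilities for one exam:
p_exam = combine_series(0.92, 0.85, 0.97)  # high on all series -> high exam score
```

Learned weights let the combiner down-weight a series that is less informative for a given task (e.g., sagittal vs. axial for meniscal tears) rather than simply averaging.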

407 citations


Journal ArticleDOI
TL;DR: Evidence that deep learning networks may be used for mortality risk stratification based on standard-of-care CT images from NSCLC patients is provided and the biological basis of the captured phenotypes as being linked to cell cycle and transcriptional processes is presented.
Abstract: Background Non-small-cell lung cancer (NSCLC) patients often demonstrate varying clinical courses and outcomes, even within the same tumor stage. This study explores deep learning applications in medical imaging allowing for the automated quantification of radiographic characteristics and potentially improving patient stratification. Methods and findings We performed an integrative analysis on 7 independent datasets across 5 institutions totaling 1,194 NSCLC patients (age median = 68.3 years [range 32.5–93.3], survival median = 1.7 years [range 0.0–11.7]). Using external validation in computed tomography (CT) data, we identified prognostic signatures using a 3D convolutional neural network (CNN) for patients treated with radiotherapy (n = 771, age median = 68.0 years [range 32.5–93.3], survival median = 1.3 years [range 0.0–11.7]). We then employed a transfer learning approach to achieve the same for surgery patients (n = 391, age median = 69.1 years [range 37.2–88.0], survival median = 3.1 years [range 0.0–8.8]). We found that the CNN predictions were significantly associated with 2-year overall survival from the start of respective treatment for radiotherapy (area under the receiver operating characteristic curve [AUC] = 0.70 [95% CI 0.63–0.78], p < 0.001) and surgery (AUC = 0.71 [95% CI 0.60–0.82], p < 0.001) patients. The CNN was also able to significantly stratify patients into low and high mortality risk groups in both the radiotherapy (p < 0.001) and surgery (p = 0.03) datasets. Additionally, the CNN was found to significantly outperform random forest models built on clinical parameters—including age, sex, and tumor node metastasis stage—as well as demonstrate high robustness against test–retest (intraclass correlation coefficient = 0.91) and inter-reader (Spearman’s rank-order correlation = 0.88) variations. 
To gain a better understanding of the characteristics captured by the CNN, we identified regions with the most contribution towards predictions and highlighted the importance of tumor-surrounding tissue in patient stratification. We also present preliminary findings on the biological basis of the captured phenotypes as being linked to cell cycle and transcriptional processes. Limitations include the retrospective nature of this study as well as the opaque black box nature of deep learning networks. Conclusions Our results provide evidence that deep learning networks may be used for mortality risk stratification based on standard-of-care CT images from NSCLC patients. This evidence motivates future research into better deciphering the clinical and biological basis of deep learning networks as well as validation in prospective data.

363 citations


Journal ArticleDOI
TL;DR: It is argued that machine learning in medicine must offer data protection, algorithmic transparency, and accountability to earn the trust of patients and clinicians.
Abstract: Effy Vayena and colleagues argue that machine learning in medicine must offer data protection, algorithmic transparency, and accountability to earn the trust of patients and clinicians.

338 citations


Journal ArticleDOI
TL;DR: The approach identifies salient T2D genetically anchored and physiologically informed pathways, and supports the use of genetics to deconstruct T2D heterogeneity.
Abstract: Background Type 2 diabetes (T2D) is a heterogeneous disease for which (1) disease-causing pathways are incompletely understood and (2) subclassification may improve patient management. Unlike other biomarkers, germline genetic markers do not change with disease progression or treatment. In this paper, we test whether a germline genetic approach informed by physiology can be used to deconstruct T2D heterogeneity. First, we aimed to categorize genetic loci into groups representing likely disease mechanistic pathways. Second, we asked whether the novel clusters of genetic loci we identified have any broad clinical consequence, as assessed in four separate subsets of individuals with T2D. Methods and findings In an effort to identify mechanistic pathways driven by established T2D genetic loci, we applied Bayesian nonnegative matrix factorization (bNMF) clustering to genome-wide association study (GWAS) results for 94 independent T2D genetic variants and 47 diabetes-related traits. We identified five robust clusters of T2D loci and traits, each with distinct tissue-specific enhancer enrichment based on analysis of epigenomic data from 28 cell types. Two clusters contained variant-trait associations indicative of reduced beta cell function, differing from each other by high versus low proinsulin levels. The three other clusters displayed features of insulin resistance: obesity mediated (high body mass index [BMI] and waist circumference [WC]), "lipodystrophy-like" fat distribution (low BMI, adiponectin, and high-density lipoprotein [HDL] cholesterol, and high triglycerides), and disrupted liver lipid metabolism (low triglycerides). Increased cluster genetic risk scores were associated with distinct clinical outcomes, including increased blood pressure, coronary artery disease (CAD), and stroke. We evaluated the potential for clinical impact of these clusters in four studies containing individuals with T2D (Metabolic Syndrome in Men Study [METSIM], N = 487; Ashkenazi, N = 509; Partners Biobank, N = 2,065; UK Biobank [UKBB], N = 14,813). Individuals with T2D in the top genetic risk score decile for each cluster reproducibly exhibited the predicted cluster-associated phenotypes, with approximately 30% of all individuals assigned to just one cluster top decile. Limitations of this study include that the genetic variants used in the cluster analysis were restricted to those associated with T2D in populations of European ancestry. Conclusion Our approach identifies salient T2D genetically anchored and physiologically informed pathways, and supports the use of genetics to deconstruct T2D heterogeneity. Classification of patients by these genetic pathways may offer a step toward genetically informed T2D patient management.
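A cluster-specific genetic risk score of the kind used here is, at its core, a weighted allele count: each individual's risk-allele dosages (0/1/2) are weighted by the corresponding variant's loading in a bNMF cluster, and individuals are then ranked into deciles. A toy sketch with invented loadings:

```python
def cluster_grs(dosages, loadings):
    """Cluster-specific genetic risk score: risk-allele dosages (0/1/2)
    weighted by each variant's bNMF cluster loading (invented numbers)."""
    return sum(d * w for d, w in zip(dosages, loadings))

def top_decile_cutoff(scores):
    """Score at or above which an individual falls in the top decile."""
    ranked = sorted(scores)
    return ranked[int(0.9 * len(ranked))]

# Three hypothetical variants, loadings for one hypothetical cluster:
score = cluster_grs([2, 1, 0], [0.8, 0.3, 0.5])  # 2*0.8 + 1*0.3 + 0*0.5 = 1.9
```

The paper's analysis then asks whether top-decile individuals for a given cluster show that cluster's predicted phenotype (e.g., low BMI and HDL for the "lipodystrophy-like" cluster).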

330 citations


Journal ArticleDOI
TL;DR: Different sphingolipid species identified map to several biologically relevant pathways implicated in AD, including tau phosphorylation, amyloid-β (Aβ) metabolism, calcium homeostasis, acetylcholine biosynthesis, and apoptosis.
Abstract: Background The metabolic basis of Alzheimer disease (AD) is poorly understood, and the relationships between systemic abnormalities in metabolism and AD pathogenesis are unclear. Understanding how global perturbations in metabolism are related to severity of AD neuropathology and the eventual expression of AD symptoms in at-risk individuals is critical to developing effective disease-modifying treatments. In this study, we undertook parallel metabolomics analyses in both the brain and blood to identify systemic correlates of neuropathology and their associations with prodromal and preclinical measures of AD progression. Methods and findings Quantitative and targeted metabolomics (Biocrates AbsoluteIDQ [identification and quantification] p180) assays were performed on brain tissue samples from the autopsy cohort of the Baltimore Longitudinal Study of Aging (BLSA) (N = 44, mean age = 81.33, % female = 36.36) from AD (N = 15), control (CN; N = 14), and “asymptomatic Alzheimer’s disease” (ASYMAD, i.e., individuals with significant AD pathology but no cognitive impairment during life; N = 15) participants. Using machine-learning methods, we identified a panel of 26 metabolites from two main classes—sphingolipids and glycerophospholipids—that discriminated AD and CN samples with accuracy, sensitivity, and specificity of 83.33%, 86.67%, and 80%, respectively. We then assayed these 26 metabolites in serum samples from two well-characterized longitudinal cohorts representing prodromal (Alzheimer’s Disease Neuroimaging Initiative [ADNI], N = 767, mean age = 75.19, % female = 42.63) and preclinical (BLSA) (N = 207, mean age = 78.68, % female = 42.63) AD, in which we tested their associations with magnetic resonance imaging (MRI) measures of AD-related brain atrophy, cerebrospinal fluid (CSF) biomarkers of AD pathology, risk of conversion to incident AD, and trajectories of cognitive performance. 
We developed an integrated blood and brain endophenotype score that summarized the relative importance of each metabolite to severity of AD pathology and disease progression (Endophenotype Association Score in Early Alzheimer’s Disease [EASE-AD]). Finally, we mapped the main metabolite classes emerging from our analyses to key biological pathways implicated in AD pathogenesis. We found that distinct sphingolipid species including sphingomyelin (SM) with acyl residue sums C16:0, C18:1, and C16:1 (SM C16:0, SM C18:1, SM C16:1) and hydroxysphingomyelin with acyl residue sum C14:1 (SM (OH) C14:1) were consistently associated with severity of AD pathology at autopsy and AD progression across prodromal and preclinical stages. Higher log-transformed blood concentrations of all four sphingolipids in cognitively normal individuals were significantly associated with increased risk of future conversion to incident AD: SM C16:0 (hazard ratio [HR] = 4.430, 95% confidence interval [CI] = 1.703–11.520, p = 0.002), SM C16:1 (HR = 3.455, 95% CI = 1.516–7.873, p = 0.003), SM (OH) C14:1 (HR = 3.539, 95% CI = 1.373–9.122, p = 0.009), and SM C18:1 (HR = 2.255, 95% CI = 1.047–4.855, p = 0.038). The sphingolipid species identified map to several biologically relevant pathways implicated in AD, including tau phosphorylation, amyloid-β (Aβ) metabolism, calcium homeostasis, acetylcholine biosynthesis, and apoptosis. Our study has limitations: the relatively small number of brain tissue samples may have limited our power to detect significant associations, control for heterogeneity between groups, and replicate our findings in independent, autopsy-derived brain samples. Conclusions We present a novel framework to identify biologically relevant brain and blood metabolites associated with disease pathology and progression during the prodromal and preclinical stages of AD. 
Our results show that perturbations in sphingolipid metabolism are consistently associated with endophenotypes across preclinical and prodromal AD, as well as with AD pathology at autopsy. Sphingolipids may be biologically relevant biomarkers for the early detection of AD, and correcting perturbations in sphingolipid metabolism may be a plausible and novel therapeutic strategy in AD.
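The hazard ratios reported above are exponentiated Cox model coefficients per 1-unit increase in log-transformed blood concentration, and the coefficient and its standard error can be recovered from the published HR and 95% CI. A quick consistency check against the SM C16:0 result:

```python
import math

def hazard_ratio(beta, se):
    """Turn a Cox coefficient and its standard error into an HR with 95% CI."""
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

# Recover beta and SE from the reported SM C16:0 result: HR 4.430 (1.703-11.520)
beta = math.log(4.430)
se = (math.log(11.520) - math.log(1.703)) / (2 * 1.96)
hr, lo, hi = hazard_ratio(beta, se)  # reproduces approximately 4.430 (1.703-11.520)
```

This round trip is a useful sanity check when reading reported HRs: the CI should be symmetric around the HR on the log scale, as it is here.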

286 citations


Journal ArticleDOI
TL;DR: A comprehensive characterisation of future heatwave-related excess mortality across various regions and under alternative scenarios of greenhouse gas emissions, different assumptions of adaptation, and different scenarios of population change is provided to help decision makers in planning adaptation and mitigation strategies for climate change.
Abstract: BACKGROUND: Heatwaves are a critical public health problem. There will be an increase in the frequency and severity of heatwaves under changing climate. However, evidence about the impacts of clima ...

241 citations


Journal ArticleDOI
TL;DR: The qualitative and quantitative evidence demonstrate the extensive harms associated with criminalisation of sex work, including laws and enforcement targeting the sale and purchase of sex, and activities relating to sex work organisation.
Abstract: Background Sex workers are at disproportionate risk of violence and sexual and emotional ill health, harms that have been linked to the criminalisation of sex work. We synthesised evidence on the extent to which sex work laws and policing practices affect sex workers’ safety, health, and access to services, and the pathways through which these effects occur. Methods and findings We searched bibliographic databases between 1 January 1990 and 9 May 2018 for qualitative and quantitative research involving sex workers of all genders and terms relating to legislation, police, and health. We operationalised categories of lawful and unlawful police repression of sex workers or their clients, including criminal and administrative penalties. We included quantitative studies that measured associations between policing and outcomes of violence, health, and access to services, and qualitative studies that explored related pathways. We conducted a meta-analysis to estimate the average effect of experiencing sexual/physical violence, HIV or sexually transmitted infections (STIs), and condomless sex, among individuals exposed to repressive policing compared to those unexposed. Qualitative studies were synthesised iteratively, inductively, and thematically. We reviewed 40 quantitative and 94 qualitative studies. Repressive policing of sex workers was associated with increased risk of sexual/physical violence from clients or other parties (odds ratio [OR] 2.99, 95% CI 1.96–4.57), HIV/STI (OR 1.87, 95% CI 1.60–2.19), and condomless sex (OR 1.42, 95% CI 1.03–1.94). The qualitative synthesis identified diverse forms of police violence and abuses of power, including arbitrary arrest, bribery and extortion, physical and sexual violence, failure to provide access to justice, and forced HIV testing. 
It showed that in contexts of criminalisation, the threat and enactment of police harassment and arrest of sex workers or their clients displaced sex workers into isolated work locations, disrupting peer support networks and service access, and limiting risk reduction opportunities. It discouraged sex workers from carrying condoms and exacerbated existing inequalities experienced by transgender, migrant, and drug-using sex workers. Evidence from decriminalised settings suggests that sex workers in these settings have greater negotiating power with clients and better access to justice. Quantitative findings were limited by high heterogeneity in the meta-analysis for some outcomes and insufficient data to conduct meta-analyses for others, as well as variable sample size and study quality. Few studies reported whether arrest was related to sex work or another offence, limiting our ability to assess the associations between sex work criminalisation and outcomes relative to other penalties or abuses of police power, and all studies were observational, prohibiting any causal inference. Few studies included trans- and cisgender male sex workers, and little evidence related to emotional health and access to healthcare beyond HIV/STI testing. Conclusions Together, the qualitative and quantitative evidence demonstrate the extensive harms associated with criminalisation of sex work, including laws and enforcement targeting the sale and purchase of sex, and activities relating to sex work organisation. There is an urgent need to reform sex-work-related laws and institutional practices so as to reduce harms and barriers to the realisation of health.

231 citations


Journal ArticleDOI
TL;DR: TB treatment outcomes are improved with the use of adherence interventions, such as patient education and counseling, incentives and enablers, psychological interventions, reminders and tracers, and digital health technologies.
Abstract: Author(s): Alipanah, Narges; Jarlsberg, Leah; Miller, Cecily; Linh, Nguyen Nhat; Falzon, Dennis; Jaramillo, Ernesto; Nahid, Payam. Background Incomplete adherence to tuberculosis (TB) treatment increases the risk of delayed culture conversion with continued transmission in the community, as well as treatment failure, relapse, and development or amplification of drug resistance. We conducted a systematic review and meta-analysis of adherence interventions, including directly observed therapy (DOT), to determine which approaches lead to improved TB treatment outcomes. Methods and findings We systematically reviewed Medline as well as the references of published review articles for relevant studies of adherence to multidrug treatment of both drug-susceptible and drug-resistant TB through February 3, 2018. We included randomized controlled trials (RCTs) as well as prospective and retrospective cohort studies (CSs) with an internal or external control group that evaluated any adherence intervention and conducted a meta-analysis of their impact on TB treatment outcomes. Our search identified 7,729 articles, of which 129 met the inclusion criteria for quantitative analysis. Seven adherence categories were identified, including DOT offered by different providers and at various locations, reminders and tracers, incentives and enablers, patient education, digital technologies (short message services [SMSs] via mobile phones and video-observed therapy [VOT]), staff education, and combinations of these interventions. When compared with DOT alone, self-administered therapy (SAT) was associated with lower rates of treatment success (CS: risk ratio [RR] 0.81, 95% CI 0.73-0.89; RCT: RR 0.94, 95% CI 0.89-0.98), adherence (CS: RR 0.83, 95% CI 0.75-0.93), and sputum smear conversion (RCT: RR 0.92, 95% CI 0.87-0.98) as well as higher rates of development of drug resistance (CS: RR 4.19, 95% CI 2.34-7.49). 
When compared to DOT provided by healthcare providers, DOT provided by family members was associated with a lower rate of adherence (CS: RR 0.86, 95% CI 0.79-0.94). DOT delivery in the community versus at the clinic was associated with a higher rate of treatment success (CS: RR 1.08, 95% CI 1.01-1.15) and sputum conversion at the end of two months (CS: RR 1.05, 95% CI 1.02-1.08) as well as lower rates of treatment failure (CS: RR 0.56, 95% CI 0.33-0.95) and loss to follow-up (CS: RR 0.63, 95% CI 0.40-0.98). Medication monitors improved adherence and treatment success, and VOT was comparable with DOT. SMS reminders led to a higher treatment completion rate in one RCT and were associated with higher rates of cure and sputum conversion when used in combination with medication monitors. TB treatment outcomes improved when patient education, healthcare provider education, incentives and enablers, psychological interventions, reminders and tracers, or mobile digital technologies were employed. Our findings are limited by the heterogeneity of the included studies and lack of standardized research methodology on adherence interventions. Conclusion TB treatment outcomes are improved with the use of adherence interventions, such as patient education and counseling, incentives and enablers, psychological interventions, reminders and tracers, and digital health technologies. Trained healthcare providers as well as community delivery provide patient-centered DOT options that both enhance adherence and improve treatment outcomes as compared to unsupervised SAT alone.
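The cohort-study risk ratios quoted above come from 2x2 outcome counts, with a confidence interval from the standard log-RR variance. A sketch with invented counts (not the review's data):

```python
import math

def risk_ratio(a, n1, b, n2):
    """Risk ratio for events a/n1 (exposed) vs. b/n2 (comparator),
    with a 95% CI from the usual log-RR variance approximation."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Invented counts: treatment success under SAT (160/200) vs. DOT (180/200)
rr, lo, hi = risk_ratio(160, 200, 180, 200)  # RR < 1 favors DOT here
```

A CI that excludes 1.0, as in the review's CS estimate of 0.81 (0.73-0.89), is what marks the association as statistically significant.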

Journal ArticleDOI
TL;DR: Women diagnosed with GDM were at very high risk of developing type 2 diabetes and had a significantly increased incidence of hypertension and IHD.
Abstract: Background Gestational diabetes mellitus (GDM) is associated with developing type 2 diabetes, but very few studies have examined its effect on developing cardiovascular disease. Methods and findings We conducted a retrospective cohort study utilizing a large primary care database in the United Kingdom. From 1 February 1990 to 15 May 2016, 9,118 women diagnosed with GDM were identified and randomly matched with 37,281 control women by age and timing of pregnancy (up to 3 months). Adjusted incidence rate ratios (IRRs) with 95% confidence intervals (CIs) were calculated for cardiovascular risk factors and cardiovascular disease. Women with GDM were more likely to develop type 2 diabetes (IRR = 21.96; 95% CI 18.31–26.34) and hypertension (IRR = 1.85; 95% CI 1.59–2.16) after adjusting for age, Townsend (deprivation) quintile, body mass index, and smoking. For ischemic heart disease (IHD), the IRR was 2.78 (95% CI 1.37–5.66), and for cerebrovascular disease 0.95 (95% CI 0.51–1.77; p-value = 0.87), after adjusting for the above covariates and lipid-lowering medication and hypertension at baseline. Follow-up screening for type 2 diabetes and cardiovascular risk factors was poor. Limitations include potential selective documentation of severe GDM for women in primary care, higher surveillance for outcomes in women diagnosed with GDM than control women, and a short median follow-up postpartum period, with a small number of outcomes for IHD and cerebrovascular disease. Conclusions Women diagnosed with GDM were at very high risk of developing type 2 diabetes and had a significantly increased incidence of hypertension and IHD. Identifying this group of women in general practice and targeting cardiovascular risk factors could improve long-term outcomes.

Journal ArticleDOI
TL;DR: In a Policy Forum, Peter Hotez and colleagues discuss vaccination exemptions in US states and possible consequences for infectious disease outbreaks.
Abstract: In a Policy Forum, Peter Hotez and colleagues discuss vaccination exemptions in US states and possible consequences for infectious disease outbreaks.

Journal ArticleDOI
TL;DR: A relationship between maternal diet and risk of immune-mediated diseases in the child is supported, and evidence from 19 intervention trials suggests that oral supplementation with nonpathogenic micro-organisms during late pregnancy and lactation may reduce risk of eczema.
Abstract: Background There is uncertainty about the influence of diet during pregnancy and infancy on a child’s immune development. We assessed whether variations in maternal or infant diet can influence risk of allergic or autoimmune disease. Methods and findings Two authors selected studies, extracted data, and assessed risk of bias. Grading of Recommendations Assessment, Development and Evaluation (GRADE) was used to assess certainty of findings. We searched Medical Literature Analysis and Retrieval System Online (MEDLINE), Excerpta Medica dataBASE (EMBASE), Web of Science, Central Register of Controlled Trials (CENTRAL), and Literatura Latino Americana em Ciencias da Saude (LILACS) between January 1946 and July 2013 for observational studies and until December 2017 for intervention studies that evaluated the relationship between diet during pregnancy, lactation, or the first year of life and future risk of allergic or autoimmune disease. We identified 260 original studies (964,143 participants) of milk feeding, including 1 intervention trial of breastfeeding promotion, and 173 original studies (542,672 participants) of other maternal or infant dietary exposures, including 80 trials of maternal (n = 26), infant (n = 32), or combined (n = 22) interventions. Risk of bias was high in 125 (48%) milk feeding studies and 44 (25%) studies of other dietary exposures. Evidence from 19 intervention trials suggests that oral supplementation with nonpathogenic micro-organisms (probiotics) during late pregnancy and lactation may reduce risk of eczema (Risk Ratio [RR] 0.78; 95% CI 0.68–0.90; I2 = 61%; Absolute Risk Reduction 44 cases per 1,000; 95% CI 20–64), and 6 trials suggest that fish oil supplementation during pregnancy and lactation may reduce risk of allergic sensitisation to egg (RR 0.69, 95% CI 0.53–0.90; I2 = 15%; Absolute Risk Reduction 31 cases per 1,000; 95% CI 10–47). GRADE certainty of these findings was moderate. 
We found weaker support for the hypotheses that breastfeeding promotion reduces risk of eczema during infancy (1 intervention trial), that longer exclusive breastfeeding is associated with reduced type 1 diabetes mellitus (28 observational studies), and that probiotics reduce risk of allergic sensitisation to cow’s milk (9 intervention trials), where GRADE certainty of findings was low. We did not find that other dietary exposures—including prebiotic supplements, maternal allergenic food avoidance, and vitamin, mineral, fruit, and vegetable intake—influence risk of allergic or autoimmune disease. For many dietary exposures, data were inconclusive or inconsistent, such that we were unable to exclude the possibility of important beneficial or harmful effects. In this comprehensive systematic review, we were not able to include more recent observational studies or verify data via direct contact with authors, and we did not evaluate measures of food diversity during infancy. Conclusions Our findings support a relationship between maternal diet and risk of immune-mediated diseases in the child. Maternal probiotic and fish oil supplementation may reduce risk of eczema and allergic sensitisation to food, respectively.
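The abstract reports both relative (RR) and absolute (cases per 1,000) effects; the two are linked through the control-group risk. A minimal sketch, assuming an illustrative control risk of 200 eczema cases per 1,000 (a value not stated in the abstract, chosen here only because it is consistent with the reported numbers):

```python
def absolute_risk_reduction(control_risk, risk_ratio):
    """ARR implied by a risk ratio at a given control-group risk."""
    return control_risk * (1 - risk_ratio)

# RR 0.78 for eczema; an assumed control risk of 0.20 yields a reduction
# of roughly 44 cases per 1,000, matching the reported figure.
arr_per_1000 = 1000 * absolute_risk_reduction(0.20, 0.78)
```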

Journal ArticleDOI
TL;DR: Phenotypic Age was associated with mortality among seemingly healthy participants—defined as those who reported being disease-free and who had normal BMI—as well as among oldest-old adults, even after adjustment for disease prevalence, and was robust to stratifications by age, race/ethnicity, education, disease count, and health behaviors.
Abstract: Background: A person's rate of aging has important implications for his/her risk of death and disease; thus, quantifying aging using observable characteristics has important applications for clinical, basic, and observational research. Based on routine clinical chemistry biomarkers, we previously developed a novel aging measure, Phenotypic Age, representing the expected age within the population that corresponds to a person's estimated mortality risk. The aim of this study was to assess its applicability for differentiating risk for a variety of health outcomes within diverse subpopulations that include healthy and unhealthy groups, distinct age groups, and persons with various race/ethnic, socioeconomic, and health behavior characteristics. Methods and findings: Phenotypic Age was calculated based on a linear combination of chronological age and 9 multi-system clinical chemistry biomarkers in accordance with our previously established method. We also estimated Phenotypic Age Acceleration (PhenoAgeAccel), which represents Phenotypic Age after accounting for chronological age (i.e., whether a person appears older [positive value] or younger [negative value] than expected, physiologically). All analyses were conducted using NHANES IV (1999-2010, an independent sample from that originally used to develop the measure). Our analytic sample consisted of 11,432 adults aged 20-84 years and 185 oldest-old adults top-coded at age 85 years. We observed a total of 1,012 deaths, ascertained over 12.6 years of follow-up (based on National Death Index data through December 31, 2011). Proportional hazard models and receiver operating characteristic curves were used to evaluate all-cause and cause-specific mortality predictions. Overall, participants with more diseases had older Phenotypic Age. 
For instance, among young adults, those with 1 disease were 0.2 years older phenotypically than disease-free persons, and those with 2 or 3 diseases were about 0.6 years older phenotypically. After adjusting for chronological age and sex, Phenotypic Age was significantly associated with all-cause mortality and cause-specific mortality (with the exception of cerebrovascular disease mortality). Results for all-cause mortality were robust to stratifications by age, race/ethnicity, education, disease count, and health behaviors. Further, Phenotypic Age was associated with mortality among seemingly healthy participants-defined as those who reported being disease-free and who had normal BMI-as well as among oldest-old adults, even after adjustment for disease prevalence. The main limitation of this study was the lack of longitudinal data on Phenotypic Age and disease incidence. Conclusions: In a nationally representative US adult population, Phenotypic Age was associated with mortality even after adjusting for chronological age. Overall, this association was robust across different stratifications, particularly by age, disease count, health behaviors, and cause of death. We also observed a strong association between Phenotypic Age and the disease count an individual had. These findings suggest that this new aging measure may serve as a useful tool to facilitate identification of at-risk individuals and evaluation of the efficacy of interventions, and may also facilitate investigation into potential biological mechanisms of aging. Nevertheless, further evaluation in other cohorts is needed.
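PhenoAgeAccel is described as Phenotypic Age after accounting for chronological age, i.e., the residual from regressing one on the other. A minimal ordinary-least-squares sketch with made-up ages (the published Phenotypic Age biomarker coefficients are not reproduced here):

```python
from statistics import mean

def pheno_age_accel(pheno_age, chron_age):
    """Residuals of an OLS fit of phenotypic age on chronological age:
    positive = physiologically older than expected, negative = younger."""
    mx, my = mean(chron_age), mean(pheno_age)
    slope = sum((x - mx) * (y - my) for x, y in zip(chron_age, pheno_age)) \
            / sum((x - mx) ** 2 for x in chron_age)
    intercept = my - slope * mx
    return [y - (intercept + slope * x) for x, y in zip(chron_age, pheno_age)]

# illustrative values: the first person appears ~0.7 years older than expected
accel = pheno_age_accel([32, 41, 49, 62], [30, 40, 50, 60])
```

By construction the residuals average to zero across the sample, so PhenoAgeAccel measures relative (not absolute) physiological aging.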

Journal ArticleDOI
TL;DR: This is the first IPDMA on internet-based interventions that has shown them to be effective in curbing various patterns of adult problem drinking in both community and healthcare settings and human-supported interventions were superior to fully automated ones on both outcome measures.
Abstract: BACKGROUND: Face-to-face brief interventions for problem drinking are effective, but they have found limited implementation in routine care and the community. Internet-based interventions could overcome this treatment gap. We investigated effectiveness and moderators of treatment outcomes in internet-based interventions for adult problem drinking (iAIs). METHODS AND FINDINGS: Systematic searches were performed in medical and psychological databases to 31 December 2016. A one-stage individual patient data meta-analysis (IPDMA) was conducted with a linear mixed model complete-case approach, using baseline and first follow-up data. The primary outcome measure was mean weekly alcohol consumption in standard units (SUs, 10 grams of ethanol). Secondary outcome was treatment response (TR), defined as less than 14/21 SUs for women/men weekly. Putative participant, intervention, and study moderators were included. Robustness was verified in three sensitivity analyses: a two-stage IPDMA, a one-stage IPDMA using multiple imputation, and a missing-not-at-random (MNAR) analysis. We obtained baseline data for 14,198 adult participants (19 randomised controlled trials [RCTs], mean age 40.7 [SD = 13.2], 47.6% women). Their baseline mean weekly alcohol consumption was 38.1 SUs (SD = 26.9). Most were regular problem drinkers (80.1%, SUs 44.7, SD = 26.4) and 19.9% (SUs 11.9, SD = 4.1) were binge-only drinkers. About one third were heavy drinkers, meaning that women/men consumed, respectively, more than 35/50 SUs of alcohol at baseline (34.2%, SUs 65.9, SD = 27.1). Post-intervention data were available for 8,095 participants. Compared with controls, iAI participants showed a greater mean weekly decrease at follow-up of 5.02 SUs (95% CI -7.57 to -2.48, p < 0.001) and a higher rate of TR (odds ratio [OR] 2.20, 95% CI 1.63-2.95, p < 0.001, number needed to treat [NNT] = 4.15, 95% CI 3.06-6.62). 
Persons above age 55 showed higher TR than their younger counterparts (OR = 1.66, 95% CI 1.21-2.27, p = 0.002). Drinking profiles were not significantly associated with treatment outcomes. Human-supported interventions were superior to fully automated ones on both outcome measures (comparative reduction: -6.78 SUs, 95% CI -12.11 to -1.45, p = 0.013; TR: OR = 2.23, 95% CI 1.22-4.08, p = 0.009). Participants treated in iAIs based on personalised normative feedback (PNF) alone were significantly less likely to sustain low-risk drinking at follow-up than those in iAIs based on integrated therapeutic principles (OR = 0.52, 95% CI 0.29-0.93, p = 0.029). The use of waitlist control in RCTs was associated with significantly better treatment outcomes than the use of other types of control (comparative reduction: -9.27 SUs, 95% CI -13.97 to -4.57, p < 0.001; TR: OR = 3.74, 95% CI 2.13-6.53, p < 0.001). The overall quality of the RCTs was high; a major limitation included high study dropout (43%). Sensitivity analyses confirmed the robustness of our primary analyses. CONCLUSION: To our knowledge, this is the first IPDMA on internet-based interventions that has shown them to be effective in curbing various patterns of adult problem drinking in both community and healthcare settings. Waitlist control may be conducive to inflation of treatment outcomes.
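The reported NNT follows from converting the treatment-response odds ratio back to risks at the control group's response rate. A hedged sketch with an illustrative control response rate (the paper's NNT of 4.15 reflects its own observed rates, which differ from the value assumed below):

```python
def nnt_from_or(control_risk, odds_ratio):
    """NNT implied by an odds ratio for a beneficial outcome,
    given the control-group event risk (illustrative values only)."""
    control_odds = control_risk / (1 - control_risk)
    treated_odds = control_odds * odds_ratio
    treated_risk = treated_odds / (1 + treated_odds)
    return 1 / (treated_risk - control_risk)

nnt = nnt_from_or(0.20, 2.20)  # hypothetical 20% control response rate
```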

Journal ArticleDOI
TL;DR: The human bnMAb VRC01LS, designed for extended serum half-life by increased binding affinity to the neonatal Fc receptor, was safe and well tolerated when delivered intravenously or subcutaneously, and its half-life was more than 4-fold greater than that of wild-type VRC01.
Abstract: Background VRC01 is a human broadly neutralizing monoclonal antibody (bnMAb) against the CD4-binding site of the HIV-1 envelope glycoprotein (Env) that is currently being evaluated in a Phase IIb adult HIV-1 prevention efficacy trial. VRC01LS is a modified version of VRC01, designed for extended serum half-life by increased binding affinity to the neonatal Fc receptor. Methods and findings This Phase I dose-escalation study of VRC01LS in HIV-negative healthy adults was conducted by the Vaccine Research Center (VRC) at the National Institutes of Health (NIH) Clinical Center (Bethesda, MD). The age range of the study volunteers was 21–50 years; 51% of study volunteers were male and 49% were female. Primary objectives were safety and tolerability of VRC01LS intravenous (IV) infusions at 5, 20, and 40 mg/kg infused once, 20 mg/kg given three times at 12-week intervals, and subcutaneous (SC) delivery at 5 mg/kg delivered once, or three times at 12-week intervals. Secondary objectives were pharmacokinetics (PK), serum neutralization activity, and development of antidrug antibodies. Enrollment began on November 16, 2015, and concluded on August 23, 2017. This report describes the safety data for the first 37 volunteers who received administrations of VRC01LS. There were no serious adverse events (SAEs) or dose-limiting toxicities. Mild malaise and myalgia were the most common adverse events (AEs). There were six AEs assessed as possibly related to VRC01LS administration, and all were mild in severity and resolved during the study. PK data were modeled based on the first dose of VRC01LS in the first 25 volunteers to complete their schedule of evaluations. The mean (±SD) serum concentrations 12 weeks after one IV administration of 20 mg/kg or 40 mg/kg were 180 ± 43 μg/mL (n = 7) and 326 ± 35 μg/mL (n = 5), respectively. 
The mean (±SD) serum concentrations 12 weeks after one IV or SC administration of 5 mg/kg were 40 ± 3 μg/mL (n = 2) and 25 ± 5 μg/mL (n = 9), respectively. Over the 5–40 mg/kg IV dose range (n = 16), the clearance was 36 ± 8 mL/d with an elimination half-life of 71 ± 18 days. VRC01LS retained its expected neutralizing activity in serum, and anti-VRC01 antibody responses were not detected. Potential limitations of this study include the small sample size typical of Phase I trials and the need to further describe the PK properties of VRC01LS administered on multiple occasions. Conclusions The human bnMAb VRC01LS was safe and well tolerated when delivered intravenously or subcutaneously. The half-life was more than 4-fold greater when compared to wild-type VRC01 historical data. The reduced clearance and extended half-life may make it possible to achieve therapeutic levels with less frequent and lower-dose administrations. This would potentially lower the costs of manufacturing and improve the practicality of using passively administered monoclonal antibodies (mAbs) for the prevention of HIV-1 infection. Trial registration ClinicalTrials.gov NCT02599896
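Under first-order elimination, the reported 71-day half-life implies how much antibody remains at the 12-week (84-day) sampling point. A simplified one-compartment sketch (the study's actual PK was modeled formally, so this is illustrative only):

```python
import math

def fraction_remaining(t_days, half_life_days):
    """Fraction of drug remaining under first-order (one-compartment) decay."""
    k = math.log(2) / half_life_days  # elimination rate constant, per day
    return math.exp(-k * t_days)

# roughly 44% of the peak concentration is predicted to remain at 12 weeks
frac_12wk = fraction_remaining(84, 71)
```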

Journal ArticleDOI
TL;DR: A large human-annotated dataset of chest X-rays containing pneumothorax was created, and deep convolutional networks were trained to screen for potentially emergent moderate or large pneumothorax at the time of image acquisition, providing a high-specificity screening solution when human review might be delayed, such as overnight.
Abstract: Authors: Taylor, Andrew G; Mielke, Clinton; Mongan, John. Background Pneumothorax can precipitate a life-threatening emergency due to lung collapse and respiratory or circulatory distress. Pneumothorax is typically detected on chest X-ray; however, treatment is reliant on timely review of radiographs. Since current imaging volumes may result in long worklists of radiographs awaiting review, an automated method of prioritizing X-rays with pneumothorax may reduce time to treatment. Our objective was to create a large human-annotated dataset of chest X-rays containing pneumothorax and to train deep convolutional networks to screen for potentially emergent moderate or large pneumothorax at the time of image acquisition. Methods and findings In all, 13,292 frontal chest X-rays (3,107 with pneumothorax) were visually annotated by radiologists. This dataset was used to train and evaluate multiple network architectures. Images showing large- or moderate-sized pneumothorax were considered positive, and those with trace or no pneumothorax were considered negative. Images showing small pneumothorax were excluded from training. Using an internal validation set (n = 1,993), we selected the 2 top-performing models; these models were then evaluated on a held-out internal test set based on area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive predictive value (PPV). The final internal test was performed initially on a subset with small pneumothorax excluded (as in training; n = 1,701), then on the full test set (n = 1,990), with small pneumothorax included as positive. External evaluation was performed using the National Institutes of Health (NIH) ChestX-ray14 set, a public dataset labeled for chest pathology based on text reports. All images labeled with pneumothorax were considered positive, because the NIH set does not classify pneumothorax by size. 
In internal testing, our "high sensitivity model" produced a sensitivity of 0.84 (95% CI 0.78-0.90), specificity of 0.90 (95% CI 0.89-0.92), and AUC of 0.94 for the test subset with small pneumothorax excluded. Our "high specificity model" showed sensitivity of 0.80 (95% CI 0.72-0.86), specificity of 0.97 (95% CI 0.96-0.98), and AUC of 0.96 for this set. PPVs were 0.45 (95% CI 0.39-0.51) and 0.71 (95% CI 0.63-0.77), respectively. Internal testing on the full set showed expected decreased performance (sensitivity 0.55, specificity 0.90, and AUC 0.82 for high sensitivity model and sensitivity 0.45, specificity 0.97, and AUC 0.86 for high specificity model). External testing using the NIH dataset showed some further performance decline (sensitivity 0.28-0.49, specificity 0.85-0.97, and AUC 0.75 for both). Due to labeling differences between internal and external datasets, these findings represent a preliminary step towards external validation. Conclusions We trained automated classifiers to detect moderate and large pneumothorax in frontal chest X-rays at high levels of performance on held-out test data. These models may provide a high specificity screening solution to detect moderate or large pneumothorax on images collected when human review might be delayed, such as overnight. They are not intended for unsupervised diagnosis of all pneumothoraces, as many small pneumothoraces (and some larger ones) are not detected by the algorithm. Implementation studies are warranted to develop appropriate, effective clinician alerts for the potentially critical finding of pneumothorax, and to assess their impact on reducing time to treatment.
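The operating points above reduce to simple confusion-matrix arithmetic. A sketch with hypothetical counts chosen to mirror the high-sensitivity model's reported sensitivity and specificity:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and PPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# hypothetical balanced test set: 100 positives, 100 negatives
sens, spec, ppv = screening_metrics(tp=84, fp=10, tn=90, fn=16)
```

Note that PPV depends on prevalence: at the balanced prevalence assumed here it comes out near 0.89, whereas the study's reported PPV of 0.45 reflects the much lower pneumothorax prevalence in its test set.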

Journal ArticleDOI
TL;DR: The genetic results suggest that for a subset of patients, immune dysfunction may contribute to FTD risk, and have potential implications for clinical trials targeting immune dysfunction in patients with FTD.
Abstract: BACKGROUND:Converging evidence suggests that immune-mediated dysfunction plays an important role in the pathogenesis of frontotemporal dementia (FTD). Although genetic studies have shown that immune-associated loci are associated with increased FTD risk, a systematic investigation of genetic overlap between immune-mediated diseases and the spectrum of FTD-related disorders has not been performed. METHODS AND FINDINGS:Using large genome-wide association studies (GWASs) (total n = 192,886 cases and controls) and recently developed tools to quantify genetic overlap/pleiotropy, we systematically identified single nucleotide polymorphisms (SNPs) jointly associated with FTD-related disorders-namely, FTD, corticobasal degeneration (CBD), progressive supranuclear palsy (PSP), and amyotrophic lateral sclerosis (ALS)-and 1 or more immune-mediated diseases including Crohn disease, ulcerative colitis (UC), rheumatoid arthritis (RA), type 1 diabetes (T1D), celiac disease (CeD), and psoriasis. We found up to 270-fold genetic enrichment between FTD and RA, up to 160-fold genetic enrichment between FTD and UC, up to 180-fold genetic enrichment between FTD and T1D, and up to 175-fold genetic enrichment between FTD and CeD. In contrast, for CBD and PSP, only 1 of the 6 immune-mediated diseases produced genetic enrichment comparable to that seen for FTD, with up to 150-fold genetic enrichment between CBD and CeD and up to 180-fold enrichment between PSP and RA. Further, we found minimal enrichment between ALS and the immune-mediated diseases tested, with the highest levels of enrichment between ALS and RA (up to 20-fold). For FTD, at a conjunction false discovery rate < 0.05 and after excluding SNPs in linkage disequilibrium, we found that 8 of the 15 identified loci mapped to the human leukocyte antigen (HLA) region on Chromosome (Chr) 6. 
We also found novel candidate FTD susceptibility loci within LRRK2 (leucine rich repeat kinase 2), TBKBP1 (TBK1 binding protein 1), and PGBD5 (piggyBac transposable element derived 5). Functionally, we found that the expression of FTD-immune pleiotropic genes (particularly within the HLA region) is altered in postmortem brain tissue from patients with FTD and is enriched in microglia/macrophages compared to other central nervous system cell types. The main study limitation is that the results represent only clinically diagnosed individuals. Also, given the complex interconnectedness of the HLA region, we were not able to define the specific gene or genes on Chr 6 responsible for our pleiotropic signal. CONCLUSIONS:We show immune-mediated genetic enrichment specifically in FTD, particularly within the HLA region. Our genetic results suggest that for a subset of patients, immune dysfunction may contribute to FTD risk. These findings have potential implications for clinical trials targeting immune dysfunction in patients with FTD.

Journal ArticleDOI
TL;DR: It is indicated that transient increase in air pollution levels may increase the risk of ischemic stroke, which may have significant public health implications for the reduction of isChemic stroke burden in China.
Abstract: Background Evidence of the short-term effects of ambient air pollution on the risk of ischemic stroke in low- and middle-income countries is limited and inconsistent. We aimed to examine the associations between air pollution and daily hospital admissions for ischemic stroke in China. Methods and findings We identified hospital admissions for ischemic stroke in 2014–2016 from the national database covering up to 0.28 billion people who received Urban Employee Basic Medical Insurance (UEBMI) in China. We examined the associations between air pollution and daily ischemic stroke admission using a two-stage method. Poisson time-series regression models were firstly fitted to estimate the effects of air pollution in each city. Random-effects meta-analyses were then conducted to combine the estimates. Meta-regression models were applied to explore potential effect modifiers. More than 2 million hospital admissions for ischemic stroke were identified in 172 cities in China. In single-pollutant models, increases of 10 μg/m3 in particulate matter with aerodynamic diameter <2.5 μm (PM2.5), sulfur dioxide (SO2), nitrogen dioxide (NO2), and ozone (O3) and 1 mg/m3 in carbon monoxide (CO) concentrations were associated with 0.34% (95% confidence interval [CI], 0.20%–0.48%), 1.37% (1.05%–1.70%), 1.82% (1.45%–2.19%), 0.01% (−0.14%–0.16%), and 3.24% (2.05%–4.43%) increases in hospital admissions for ischemic stroke on the same day, respectively. SO2 and NO2 associations remained significant in two-pollutant models, but not PM2.5 and CO associations. The effect estimates were greater in cities with lower air pollutant levels and higher air temperatures, as well as in elderly subgroups. The main limitation of the present study was the unavailability of data on individual exposure to ambient air pollution. 
Conclusions As the first national study in China to systematically examine the associations between short-term exposure to ambient air pollution and ischemic stroke, our findings indicate that transient increase in air pollution levels may increase the risk of ischemic stroke, which may have significant public health implications for the reduction of ischemic stroke burden in China.
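In log-linear Poisson models like those described, a coefficient per μg/m³ translates into the quoted percentage increases via exponentiation. A sketch, with the coefficient back-derived from the reported NO2 estimate purely for illustration:

```python
import math

def percent_change(beta, delta):
    """Percent change in admissions for a `delta` increase in pollutant
    concentration, given the log-linear Poisson coefficient `beta`."""
    return (math.exp(beta * delta) - 1) * 100

# beta back-derived from the reported 1.82% rise per 10 ug/m3 NO2
pct = percent_change(0.0018037, 10)
```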

Journal ArticleDOI
TL;DR: The risk of dementia diagnosis decreased over time after TBI, but it was still evident >30 years after the trauma, and it persisted after adjustment for familial factors.
Abstract: BACKGROUND: Traumatic brain injury (TBI) has been associated with dementia. The questions of whether the risk of dementia decreases over time after TBI, whether it is similar for different TBI type ...

Journal ArticleDOI
TL;DR: The implementation of fiscal austerity measures in Brazil can be responsible for substantively higher childhood morbidity and mortality than expected under maintenance of social protection—threatening attainment of Sustainable Development Goals for child health and reducing inequality.
Abstract: Background Since 2015, a major economic crisis in Brazil has led to increasing poverty and the implementation of long-term fiscal austerity measures that will substantially reduce expenditure on social welfare programmes as a percentage of the country’s GDP over the next 20 years. The Bolsa Familia Programme (BFP)—one of the largest conditional cash transfer programmes in the world—and the nationwide primary healthcare strategy (Estrategia Saude da Familia [ESF]) are affected by fiscal austerity, despite being among the policy interventions with the strongest estimated impact on child mortality in the country. We investigated how reduced coverage of the BFP and ESF—compared to an alternative scenario where the level of social protection under these programmes is maintained—may affect the under-five mortality rate (U5MR) and socioeconomic inequalities in child health in the country until 2030, the end date of the Sustainable Development Goals. Methods and findings We developed and validated a microsimulation model, creating a synthetic cohort of all 5,507 Brazilian municipalities for the period 2017–2030. This model was based on the longitudinal dataset and effect estimates from a previously published study that evaluated the effects of poverty, the BFP, and the ESF on child health. We forecast the economic crisis and the effect of reductions in BFP and ESF coverage due to current fiscal austerity on the U5MR, and compared this scenario with a scenario where these programmes maintain the levels of social protection by increasing or decreasing with the size of Brazil’s vulnerable populations (policy response scenarios). We used fixed effects multivariate regression models including BFP and ESF coverage and accounting for secular trends, demographic and socioeconomic changes, and programme duration effects. 
With the maintenance of the levels of social protection provided by the BFP and ESF, in the most likely economic crisis scenario the U5MR is expected to be 8.57% (95% CI: 6.88%–10.24%) lower in 2030 than under fiscal austerity—a cumulative 19,732 (95% CI: 10,207–29,285) averted under-five deaths between 2017 and 2030. U5MRs from diarrhoea, malnutrition, and lower respiratory tract infections are projected to be 39.3% (95% CI: 36.9%–41.8%), 35.8% (95% CI: 31.5%–39.9%), and 8.5% (95% CI: 4.1%–12.0%) lower, respectively, in 2030 under the maintenance of BFP and ESF coverage, with 123,549 fewer under-five hospitalisations from all causes over the study period. Reduced coverage of the BFP and ESF will also disproportionately affect U5MR in the most vulnerable areas, with the U5MR in the poorest quintile of municipalities expected to be 11.0% (95% CI: 8.0%–13.8%) lower in 2030 under the maintenance of BFP and ESF levels of social protection than under fiscal austerity, compared to no difference in the richest quintile. Declines in health inequalities over the last decade will also stop under a fiscal austerity scenario: the U5MR concentration index is expected to remain stable over the period 2017–2030, compared to a 13.3% (95% CI: 5.6%–21.8%) reduction under the maintenance of BFP and ESF levels of protection. Limitations of our analysis are the ecological nature of the study, uncertainty around future macroeconomic scenarios, and potential changes in other factors affecting child health. A wide range of sensitivity analyses were conducted to minimise these limitations. Conclusions The implementation of fiscal austerity measures in Brazil can be responsible for substantively higher childhood morbidity and mortality than expected under maintenance of social protection—threatening attainment of Sustainable Development Goals for child health and reducing inequality.

Journal ArticleDOI
TL;DR: All-site-pooled estimates for NDDs were 9.2% (95% CI 7.5–11.2) and 13.6% (95% CI 11.3–16.2) in children of the 2–<6 and 6–9 year age categories, respectively, and the pooled prevalence estimates increased by up to three percentage points when adjusted for national rates of stunting or low birth weight (LBW).
Abstract: Background Neurodevelopmental disorders (NDDs) compromise the development and attainment of full social and economic potential at individual, family, community, and country levels. Paucity of data on NDDs slows down policy and programmatic action in most developing countries despite perceived high burden. Methods and findings We assessed 3,964 children (with almost equal number of boys and girls distributed in 2–<6 and 6–9 year age categories) identified from five geographically diverse populations in India using cluster sampling technique (probability proportionate to population size). These were from the North-Central, i.e., Palwal (N = 998; all rural, 16.4% non-Hindu, 25.3% from scheduled caste/tribe [SC-ST] [these are considered underserved communities who are eligible for affirmative action]); North, i.e., Kangra (N = 997; 91.6% rural, 3.7% non-Hindu, 25.3% SC-ST); East, i.e., Dhenkanal (N = 981; 89.8% rural, 1.2% non-Hindu, 38.0% SC-ST); South, i.e., Hyderabad (N = 495; all urban, 25.7% non-Hindu, 27.3% SC-ST) and West, i.e., North Goa (N = 493; 68.0% rural, 11.4% non-Hindu, 18.5% SC-ST). All children were assessed for vision impairment (VI), epilepsy (Epi), neuromotor impairments including cerebral palsy (NMI-CP), hearing impairment (HI), speech and language disorders, autism spectrum disorders (ASDs), and intellectual disability (ID). Furthermore, 6–9-year-old children were also assessed for attention deficit hyperactivity disorder (ADHD) and learning disorders (LDs). We standardized sample characteristics as per Census of India 2011 to arrive at district level and all-sites-pooled estimates. Site-specific prevalence of any of seven NDDs in 2–<6 year olds ranged from 2.9% (95% CI 1.6–5.5) to 18.7% (95% CI 14.7–23.6), and for any of nine NDDs in the 6–9-year-old children, from 6.5% (95% CI 4.6–9.1) to 18.5% (95% CI 15.3–22.3). 
Two or more NDDs were present in 0.4% (95% CI 0.1–1.7) to 4.3% (95% CI 2.2–8.2) in the younger age category and 0.7% (95% CI 0.2–2.0) to 5.3% (95% CI 3.3–8.2) in the older age category. All-site-pooled estimates for NDDs were 9.2% (95% CI 7.5–11.2) and 13.6% (95% CI 11.3–16.2) in children of 2–<6 and 6–9 year age categories, respectively, without significant difference according to gender, rural/urban residence, or religion; almost one-fifth of these children had more than one NDD. The pooled estimates for prevalence increased by up to three percentage points when these were adjusted for national rates of stunting or low birth weight (LBW). HI, ID, speech and language disorders, Epi, and LDs were the common NDDs across sites. Upon risk modelling, noninstitutional delivery, history of perinatal asphyxia, neonatal illness, postnatal neurological/brain infections, stunting, LBW/prematurity, and older age category (6–9 year) were significantly associated with NDDs. The study sample was underrepresentative of stunting and LBW and had a 15.6% refusal. These factors could be contributing to underestimation of the true NDD burden in our population. Conclusions The study identifies NDDs in children aged 2–9 years as a significant public health burden for India. HI was higher than and ASD prevalence comparable to the published global literature. Most risk factors of NDDs were modifiable and amenable to public health interventions.
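For a crude sense of the precision behind site-level estimates like these, a Wilson 95% interval on a simple proportion is sketched below. The study's actual CIs additionally account for the cluster sampling design, so this unadjusted sketch with hypothetical counts is illustrative only:

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson 95% CI for a simple proportion (no cluster-design adjustment)."""
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(92, 1000)  # hypothetical: 92 NDD cases among 1,000 children
```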

Journal ArticleDOI
TL;DR: Extracting and curating a large, local institution’s EHR data for machine learning purposes resulted in models with strong predictive performance that can be used in clinical settings as decision support tools for identification of high-risk patients as well as patient evaluation and care management.
Abstract: Background Pythia is an automated, clinically curated surgical data pipeline and repository housing all surgical patient electronic health record (EHR) data from a large, quaternary, multisite health institute for data science initiatives. In an effort to better identify high-risk surgical patients from complex data, a machine learning project trained on Pythia was built to predict postoperative complication risk.

Journal ArticleDOI
TL;DR: Higher levels of 15:0, 17: 0, and t16:1n-7 were associated with a lower risk of T2D, and similar associations were present in both genders but stronger in women than in men.
Abstract: BACKGROUND: We aimed to investigate prospective associations of circulating or adipose tissue odd-chain fatty acids 15:0 and 17:0 and trans-palmitoleic acid, t16:1n-7, as potential biomarkers of dairy fat intake, with incident type 2 diabetes (T2D). METHODS AND FINDINGS: Sixteen prospective cohorts from 12 countries (7 from the United States, 7 from Europe, 1 from Australia, 1 from Taiwan) performed new harmonised individual-level analyses of the prospective associations according to a standardised plan. In total, 63,682 participants with a broad range of baseline ages and BMIs and 15,180 incident cases of T2D over an average of 9 years of follow-up were evaluated. Study-specific results were pooled using inverse-variance-weighted meta-analysis. Prespecified interactions by age, sex, BMI, and race/ethnicity were explored in each cohort and were meta-analysed. Potential heterogeneity by cohort-specific characteristics (regions, lipid compartments used for fatty acid assays) was assessed with metaregression. After adjustment for potential confounders, including measures of adiposity (BMI, waist circumference) and lipogenesis (levels of palmitate, triglycerides), higher levels of 15:0, 17:0, and t16:1n-7 were associated with lower incidence of T2D. In the most adjusted model, the hazard ratio (95% CI) for incident T2D per cohort-specific 10th to 90th percentile range of 15:0 was 0.80 (0.73-0.87); of 17:0, 0.65 (0.59-0.72); of t16:1n-7, 0.82 (0.70-0.96); and of their sum, 0.71 (0.63-0.79). In exploratory analyses, similar associations for 15:0, 17:0, and the sum of all three fatty acids were present in both genders but stronger in women than in men (p for interaction < 0.001). While studying associations with biomarkers has several advantages, limitations include that the biomarkers do not distinguish between different food sources of dairy fat (e.g., cheese, yogurt, milk) and that residual confounding by unmeasured or imprecisely measured confounders may exist. 
CONCLUSIONS: In a large meta-analysis that pooled the findings from 16 prospective cohort studies, higher levels of 15:0, 17:0, and t16:1n-7 were associated with a lower risk of T2D.
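The abstract describes pooling study-specific results with inverse-variance-weighted meta-analysis. A minimal sketch of that approach, using a DerSimonian-Laird random-effects estimate on the log hazard ratio scale (the HRs and standard errors below are hypothetical, not the study's data):

```python
import math

def pool_random_effects(log_hrs, ses):
    """DerSimonian-Laird random-effects pooling of study log hazard ratios."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_hrs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    lo = math.exp(pooled - 1.96 * se_pooled)
    hi = math.exp(pooled + 1.96 * se_pooled)
    return math.exp(pooled), (lo, hi)

# illustrative study-level HRs with standard errors on the log scale
hr, (lo, hi) = pool_random_effects(
    [math.log(0.78), math.log(0.85), math.log(0.80)], [0.08, 0.10, 0.09]
)
```

Pooling is done on the log scale because log hazard ratios are approximately normal; the pooled estimate and its CI are exponentiated back to the HR scale at the end.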

Journal ArticleDOI
TL;DR: CM plus community reinforcement approach had the highest number of statistically significant results in head-to-head comparisons, being more efficacious than cognitive behavioural therapy (CBT); at the longest follow-up, community reinforcement approach was more effective than non-contingent rewards, supportive-expressive psychodynamic therapy, TAU, and the 12-step programme.
Abstract: BACKGROUND Clinical guidelines recommend psychosocial interventions for cocaine and/or amphetamine addiction as first-line treatment, but it is still unclear which intervention, if any, should be offered first. We aimed to estimate the comparative effectiveness of all available psychosocial interventions (alone or in combination) for the short- and long-term treatment of people with cocaine and/or amphetamine addiction. METHODS AND FINDINGS We searched published and unpublished randomised controlled trials (RCTs) comparing any structured psychosocial intervention against an active control or treatment as usual (TAU) for the treatment of cocaine and/or amphetamine addiction in adults. Primary outcome measures were efficacy (proportion of patients in abstinence, assessed by urinalysis) and acceptability (proportion of patients who dropped out due to any cause) at the end of treatment, but we also measured the acute (12 weeks) and long-term (longest duration of study follow-up) effects of the interventions and the longest duration of abstinence. Odds ratios (ORs) and standardised mean differences were estimated using pairwise and network meta-analysis with random effects. The risk of bias of the included studies was assessed with the Cochrane tool, and the strength of evidence with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. We followed the PRISMA for Network Meta-Analyses (PRISMA-NMA) guidelines, and the protocol was registered in PROSPERO (CRD 42017042900). We included 50 RCTs evaluating 12 psychosocial interventions or TAU in 6,942 participants. The strength of evidence ranged from high to very low. 
Compared to TAU, contingency management (CM) plus community reinforcement approach was the only intervention that increased the number of abstinent patients at the end of treatment (OR 2.84, 95% CI 1.24-6.51, P = 0.013), and also at 12 weeks (OR 7.60, 95% CI 2.03-28.37, P = 0.002) and at longest follow-up (OR 3.08, 95% CI 1.33-7.17, P = 0.008). At the end of treatment, CM plus community reinforcement approach had the highest number of statistically significant results in head-to-head comparisons, being more efficacious than cognitive behavioural therapy (CBT) (OR 2.44, 95% CI 1.02-5.88, P = 0.045), non-contingent rewards (OR 3.31, 95% CI 1.32-8.28, P = 0.010), and 12-step programme plus non-contingent rewards (OR 4.07, 95% CI 1.13-14.69, P = 0.031). CM plus community reinforcement approach was also associated with fewer dropouts than TAU, both at 12 weeks and the end of treatment (OR 3.92, P < 0.001, and 3.63, P < 0.001, respectively). At the longest follow-up, community reinforcement approach was more effective than non-contingent rewards, supportive-expressive psychodynamic therapy, TAU, and 12-step programme (OR ranging between 2.71, P = 0.026, and 4.58, P = 0.001), but the combination of community reinforcement approach with CM was superior also to CBT alone, CM alone, CM plus CBT, and 12-step programme plus non-contingent rewards (ORs between 2.50, P = 0.039, and 5.22, P < 0.001). The main limitations of our study were the quality of included studies and the lack of blinding, which may have increased the risk of performance bias. However, our analyses were based on objective outcomes, which are less likely to be biased. CONCLUSIONS To our knowledge, this network meta-analysis is the most comprehensive synthesis of data for psychosocial interventions in individuals with cocaine and/or amphetamine addiction. 
Our findings provide the best evidence base currently available to guide decision-making about psychosocial interventions for individuals with cocaine and/or amphetamine addiction and should inform patients, clinicians, and policy-makers.
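The network meta-analysis above reports effects as odds ratios with 95% CIs. As a sketch of the underlying pairwise building block, an OR and its Woolf (log-scale) confidence interval can be derived from a 2x2 table of trial arm counts (all counts below are hypothetical, not taken from the included trials):

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-scale) 95% CI from a 2x2 table:
    a/b = abstinent/non-abstinent in the intervention arm,
    c/d = abstinent/non-abstinent in the control arm."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# illustrative arm-level counts
or_, (lo, hi) = odds_ratio(30, 20, 15, 35)
```

A network meta-analysis combines many such pairwise contrasts, including indirect ones through common comparators, which is why single-trial ORs like this one are only the starting point.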

Journal ArticleDOI
TL;DR: By combining data from 18 HIV prevention studies, the findings highlight important features of STI/BV epidemiology among sub-Saharan African women, and the methodology offers a new approach to obtaining critical information on STI and BV prevalence in LMICs.
Abstract: BACKGROUND: Estimates of sexually transmitted infection (STI) prevalence are essential for efforts to prevent and control STIs. Few large STI prevalence studies exist, especially for low- and middle-income countries (LMICs). Our primary objective was to estimate the prevalence of chlamydia, gonorrhea, trichomoniasis, syphilis, herpes simplex virus type 2 (HSV-2), and bacterial vaginosis (BV) among women in sub-Saharan Africa by age, region, and population type. METHODS AND FINDINGS: We analyzed individual-level data from 18 HIV prevention studies (cohort studies and randomized controlled trials; conducted during 1993-2011), representing >37,000 women, that tested participants for ≥1 selected STIs or BV at baseline. We used a 2-stage meta-analysis to combine data. After calculating the proportion of participants with each infection and standard error by study, we used a random-effects model to obtain a summary mean prevalence of each infection and 95% confidence interval (CI) across ages, regions, and population types. Despite substantial study heterogeneity for some STIs/populations, several patterns emerged. Across the three primary region/population groups (South Africa community-based, Southern/Eastern Africa community-based, and Eastern Africa higher-risk), prevalence was higher among 15-24-year-old than 25-49-year-old women for all STIs except HSV-2. In general, higher-risk populations had greater prevalence of gonorrhea and syphilis than clinic/community-based populations. For chlamydia, prevalence among 15-24-year-olds was 10.3% (95% CI: 7.4%, 14.1%; I2 = 75.7%) among women specifically recruited from higher-risk settings for HIV in Eastern Africa and was 15.1% (95% CI: 12.7%, 17.8%; I2 = 82.3%) in South African clinic/community-based populations. 
Among clinic/community-based populations, prevalence was generally greater in South Africa than in Southern/Eastern Africa for most STIs; for gonorrhea, prevalence among 15-24-year-olds was 4.6% (95% CI: 3.3%, 6.4%; I2 = 82.8%) in South Africa and was 1.7% (95% CI: 1.2%, 2.6%; I2 = 55.2%) in Southern/Eastern Africa. Across the three primary region/population groups, HSV-2 and BV prevalence was high among 25-49-year-olds (ranging from 70% to 83% and 33% to 44%, respectively). The main study limitation is that the data are not from random samples of the target populations. CONCLUSIONS: Combining data from 18 HIV prevention studies, our findings highlight important features of STI/BV epidemiology among sub-Saharan African women. This methodology can be used where routine STI surveillance is limited and offers a new approach to obtaining critical information on STI and BV prevalence in LMICs.
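The abstract quotes an I² value alongside each pooled prevalence. As an illustration of how Cochran's Q and I² quantify between-study heterogeneity in such a meta-analysis (the prevalence estimates and standard errors below are hypothetical):

```python
def i_squared(estimates, ses):
    """Cochran's Q and the I^2 heterogeneity statistic (as a percentage)
    for a set of study-level estimates with standard errors."""
    w = [1 / se**2 for se in ses]                        # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, 100 * i2

# illustrative gonorrhea prevalence estimates (proportions) with SEs
q, i2 = i_squared([0.046, 0.017, 0.035, 0.060], [0.005, 0.004, 0.006, 0.008])
```

I² near 0% means observed variation is compatible with chance; values like the 82.8% quoted above indicate most of the variation reflects real between-study differences, which is why the authors used a random-effects model.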

Journal ArticleDOI
TL;DR: In a Perspective, Joshua Knowles and Euan Ashley discuss the potential for use of genetic risk scores in clinical practice and the need to understand how these scores can be modified.
Abstract: In a Perspective, Joshua Knowles and Euan Ashley discuss the potential for use of genetic risk scores in clinical practice

Journal ArticleDOI
TL;DR: The findings suggest that the WHELD intervention confers benefits in terms of QoL, agitation, and neuropsychiatric symptoms, albeit with relatively small effect sizes, as well as cost saving in a model that can readily be implemented in nursing homes.
Abstract: Background: Agitation is a common, distressing, and challenging symptom affecting large numbers of people with dementia and impacting significantly on quality of life (QoL). There is an urgent need for evidence-based, cost-effective psychosocial interventions to improve these outcomes, particularly in the absence of safe, effective pharmacological therapies. This study aimed to conduct a large and rigorous RCT to evaluate the efficacy of a person-centered care and psychosocial intervention (WHELD) on QoL, agitation, and antipsychotic use in people with dementia living in nursing homes, and to determine the cost of the intervention. Methods and findings: This was a randomized controlled cluster trial comparing the WHELD intervention with treatment as usual in people with dementia living in 69 UK nursing homes, using an intention-to-treat analysis. All nursing homes allocated to the WHELD intervention received staff training in person-centered care (PCC), social interaction (SoI), and education regarding antipsychotic medications (AM), followed by ongoing delivery through a care staff champion model. The primary outcome measure was QoL (DEMQOL-Proxy). Key secondary outcomes were agitation (Cohen-Mansfield Agitation Inventory [CMAI]), neuropsychiatric symptoms (NPI), and antipsychotic use. Other secondary outcome measures were global deterioration (CDR), mood (Cornell Scale for Depression in Dementia [CSDD]), unmet needs (Camberwell Assessment of Need in the Elderly [CANE]), mortality, quality of interactions (Quality of Interactions Scale [QUIS]), pain (Abbey Pain Scale), and cost. Intervention costs were calculated using published cost function figures and compared with usual costs. In total, 847 people were randomized to WHELD or treatment as usual, of whom 553 completed the nine-month RCT. The WHELD intervention conferred a statistically significant improvement in QoL compared to treatment as usual over nine months (DEMQOL-Proxy z score 2.82, p = 0.0042; mean difference 2.54, SEM 0.88; 95% confidence interval [CI] 0.81, 4.28; Cohen's D 0.24). There were also statistically significant benefits in agitation (CMAI z score 2.68, p = 0.0076; mean difference -4.27, SEM 1.59; 95% CI -7.39, -1.15; Cohen's D 0.23) and in overall neuropsychiatric symptoms (NPI z score 3.52, p = 0.00045; mean difference -4.55, SEM 1.28; 95% CI -7.07, -2.02; Cohen's D 0.30). The benefits were greatest in people with moderate to moderately severe dementia. There was also a statistically significant benefit in positive care interactions as measured by QUIS (19.7% increase; SEM 8.94; 95% CI 2.12, 37.16; Cohen's D 0.55; p = 0.03). There were no statistically significant differences between the WHELD intervention and treatment as usual for the other secondary outcomes. A sensitivity analysis using a prespecified imputation model confirmed statistically significant benefits in DEMQOL-Proxy, CMAI, and NPI with the WHELD intervention compared to treatment as usual. Antipsychotic drug prescribing was at a low, stable level in both treatment groups across the study, and the WHELD intervention did not reduce antipsychotic use. The WHELD intervention reduced cost compared to treatment as usual, and the benefits achieved were therefore associated with a cost saving. The main limitation was that antipsychotic review was based on augmenting processes within care homes to trigger medical review and did not, in this study, involve proactive primary care education. The high mortality rate, leading to non-completion in a significant proportion of participants, poses interpretation challenges for this study, as for all long-term intervention studies in nursing homes. Conclusions: These findings suggest that this staff training and non-pharmacological intervention for people with dementia living in nursing homes may achieve benefits to QoL, agitation, and neuropsychiatric symptoms, as well as a cost saving, in a model that can readily be implemented in nursing homes. The benefits in QoL, agitation, and neuropsychiatric symptoms had small effect sizes. The benefits to agitation and neuropsychiatric symptoms are comparable to (agitation) or better than (NPI) those seen with antipsychotic drugs. Importantly, the benefits were achieved in the context of a cost saving and used a model that can readily be implemented in nursing homes.
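The WHELD trial summarises its effects as Cohen's D, a standardised mean difference. A minimal sketch of the standard pooled-SD computation (the group summary statistics below are hypothetical, not the trial's data):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# illustrative QoL scores: intervention vs. control group summaries
d = cohens_d(104.2, 10.5, 280, 101.7, 10.9, 273)
```

By the usual rule of thumb, d around 0.2 is a small effect and 0.5 a medium one, which matches the paper's characterisation of its 0.23 to 0.30 values as small.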

Journal ArticleDOI
TL;DR: In a Policy Forum, Irva Hertz-Picciotto and colleagues review the scientific evidence linking organophosphate pesticides to cognitive, behavioral, and neurological deficits in children and recommend actions to reduce exposures.
Abstract: In a Policy Forum, Irva Hertz-Picciotto and colleagues review the scientific evidence linking organophosphate pesticides to cognitive, behavioral, and neurological deficits in children and recommend actions to reduce exposures.