Showing papers in "Proceedings of the National Academy of Sciences of the United States of America in 2019"


Journal ArticleDOI
TL;DR: The measure-by-measure evaluation indicated that strengthening industrial emission standards, upgrades on industrial boilers, phasing out outdated industrial capacities, and promoting clean fuels in the residential sector were major effective measures in reducing PM2.5 pollution and health burdens in China.
Abstract: From 2013 to 2017, with the implementation of the toughest-ever clean air policy in China, significant declines in fine particle (PM2.5) concentrations occurred nationwide. Here we estimate the drivers of the improved PM2.5 air quality and the associated health benefits in China from 2013 to 2017 based on a measure-specific integrated evaluation approach, which combines a bottom-up emission inventory, a chemical transport model, and epidemiological exposure-response functions. The estimated national population-weighted annual mean PM2.5 concentrations decreased from 61.8 (95% CI: 53.3-70.0) to 42.0 µg/m3 (95% CI: 35.7-48.6) in 5 y, with dominant contributions from anthropogenic emission abatements. Although interannual meteorological variations could significantly alter PM2.5 concentrations, the corresponding effects on the 5-y trends were relatively small. The measure-by-measure evaluation indicated that strengthening industrial emission standards (power plants and emission-intensive industrial sectors), upgrades on industrial boilers, phasing out outdated industrial capacities, and promoting clean fuels in the residential sector were major effective measures in reducing PM2.5 pollution and health burdens. These measures were estimated to contribute to 6.6- (95% CI: 5.9-7.1), 4.4- (95% CI: 3.8-4.9), 2.8- (95% CI: 2.5-3.0), and 2.2- (95% CI: 2.0-2.5) µg/m3 declines in the national PM2.5 concentration in 2017, respectively, and further reduced PM2.5-attributable excess deaths by 0.37 million (95% CI: 0.35-0.39), or 92% of the total avoided deaths. Our study confirms the effectiveness of China's recent clean air actions, and the measure-by-measure evaluation provides insights into future clean air policy making in China and in other developing and polluting countries.

1,085 citations


Journal ArticleDOI
TL;DR: Ferroptosis, a programmed iron-dependent cell death, is identified and demonstrated as a mechanism in murine models of doxorubicin (DOX)- and ischemia/reperfusion (I/R)-induced cardiomyopathy, and the mitochondria-targeted antioxidant MitoTEMPO significantly rescued DOX cardiomyopathy, supporting oxidative damage of mitochondria as a major mechanism in ferroptosis-induced heart damage.
Abstract: Heart disease is the leading cause of death worldwide. A key pathogenic factor in the development of lethal heart failure is loss of terminally differentiated cardiomyocytes. However, mechanisms of cardiomyocyte death remain unclear. Here, we discovered and demonstrated ferroptosis, a programmed iron-dependent cell death, as a mechanism in murine models of doxorubicin (DOX)- and ischemia/reperfusion (I/R)-induced cardiomyopathy. In canonical apoptosis and/or necroptosis-defective Ripk3−/−, Mlkl−/−, or Fadd−/−Mlkl−/− mice, DOX-treated cardiomyocytes showed features of typical ferroptotic cell death. Consistently, compared with dexrazoxane, the only FDA-approved drug for treating DOX-induced cardiotoxicity, inhibition of ferroptosis by ferrostatin-1 significantly reduced DOX cardiomyopathy. RNA-sequencing results revealed that heme oxygenase-1 (Hmox1) was significantly up-regulated in DOX-treated murine hearts. Administering DOX to mice induced cardiomyopathy with a rapid, systemic accumulation of nonheme iron via heme degradation by Nrf2-mediated up-regulation of Hmox1, an effect that was abolished in Nrf2-deficient mice. Conversely, zinc protoporphyrin IX, an Hmox1 antagonist, protected the DOX-treated mice, suggesting free iron released on heme degradation is necessary and sufficient to induce cardiac injury. Given that ferroptosis is driven by damage to lipid membranes, we further investigated and found that excess free iron accumulated in mitochondria and caused lipid peroxidation on their membranes. The mitochondria-targeted antioxidant MitoTEMPO significantly rescued DOX cardiomyopathy, supporting oxidative damage of mitochondria as a major mechanism in ferroptosis-induced heart damage. Importantly, ferrostatin-1 and iron chelation also ameliorated heart failure induced by both acute and chronic I/R in mice. These findings highlight that targeting ferroptosis serves as a cardioprotective strategy for cardiomyopathy prevention.

918 citations


Journal ArticleDOI
TL;DR: In this article, the effect of meteorological variability on ozone trends was investigated using a multiple linear regression model, and the residual of this regression showed increasing ozone trends of 1–3 ppbv a⁻¹ in megacity clusters of eastern China that the authors attributed to changes in anthropogenic emissions.
Abstract: Observations of surface ozone available from ∼1,000 sites across China for the past 5 years (2013–2017) show severe summertime pollution and regionally variable trends. We resolve the effect of meteorological variability on the ozone trends by using a multiple linear regression model. The residual of this regression shows increasing ozone trends of 1–3 ppbv a⁻¹ in megacity clusters of eastern China that we attribute to changes in anthropogenic emissions. By contrast, ozone decreased in some areas of southern China. Anthropogenic NOx emissions in China are estimated to have decreased by 21% during 2013–2017, whereas volatile organic compound (VOC) emissions changed little. Decreasing NOx would increase ozone under the VOC-limited conditions thought to prevail in urban China while decreasing ozone under rural NOx-limited conditions. However, simulations with the Goddard Earth Observing System Chemical Transport Model (GEOS-Chem) indicate that a more important factor for ozone trends in the North China Plain is the ∼40% decrease of fine particulate matter (PM2.5) over the 2013–2017 period, slowing down the aerosol sink of hydroperoxy (HO2) radicals and thus stimulating ozone production.
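
The core of this analysis, regressing observed ozone on meteorological covariates and reading the anthropogenic signal from the residual trend, can be sketched in a few lines of Python. This is a minimal illustration on synthetic data, not the authors' model; the covariates and coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
years = np.repeat(np.arange(2013, 2018), 150)        # 150 summer days per year
temp = 25 + 5 * rng.standard_normal(years.size)      # synthetic meteorology
rh = 60 + 10 * rng.standard_normal(years.size)
# Synthetic ozone: meteorology + a 2 ppbv/yr anthropogenic trend + noise
ozone = 60 + 1.5 * (temp - 25) - 0.2 * (rh - 60) \
        + 2.0 * (years - 2013) + 3 * rng.standard_normal(years.size)

met = np.column_stack([temp, rh])
resid = ozone - LinearRegression().fit(met, ozone).predict(met)

# Trend of the meteorology-adjusted residual, in ppbv per year
trend = np.polyfit(years, resid, 1)[0]
print(f"residual ozone trend: {trend:.2f} ppbv/yr")  # ~2, the imposed signal
```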

864 citations


Journal ArticleDOI
TL;DR: The authors define interpretability in the context of machine learning and introduce the predictive, descriptive, relevant (PDR) framework for discussing interpretations, with three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience.
Abstract: Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the predictive, descriptive, relevant (PDR) framework for discussing interpretations. The PDR framework provides 3 overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post hoc categories, with subgroups including sparsity, modularity, and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often underappreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.

851 citations


Journal ArticleDOI
TL;DR: This work shows how classical theory and modern practice can be reconciled within a single unified performance curve and proposes a mechanism underlying its emergence, and provides evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets.
Abstract: Breakthroughs in machine learning are rapidly changing science and society, yet our fundamental understanding of this technology has lagged far behind. Indeed, one of the central tenets of the field, the bias-variance trade-off, appears to be at odds with the observed behavior of methods used in modern machine-learning practice. The bias-variance trade-off implies that a model should balance underfitting and overfitting: Rich enough to express underlying structure in data and simple enough to avoid fitting spurious patterns. However, in modern practice, very rich models such as neural networks are trained to exactly fit (i.e., interpolate) the data. Classically, such models would be considered overfitted, and yet they often obtain high accuracy on test data. This apparent contradiction has raised questions about the mathematical foundations of machine learning and their relevance to practitioners. In this paper, we reconcile the classical understanding and the modern practice within a unified performance curve. This "double-descent" curve subsumes the textbook U-shaped bias-variance trade-off curve by showing how increasing model capacity beyond the point of interpolation results in improved performance. We provide evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets, and we posit a mechanism for its emergence. This connection between the performance and the structure of machine-learning models delineates the limits of classical analyses and has implications for both the theory and the practice of machine learning.
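
Double descent is straightforward to reproduce numerically with minimum-norm least squares on random Fourier features, one of the model families examined in the paper. The sketch below is illustrative only; the exact shape of the curve depends on the data, the noise level, and the feature distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 5                                   # training samples, input dim
X = rng.standard_normal((n, d))
Xte = rng.standard_normal((1000, d))
f = lambda X: np.sin(X @ np.ones(d))            # target function
y, yte = f(X) + 0.1 * rng.standard_normal(n), f(Xte)

def rff(X, W, b):                               # random Fourier features
    return np.cos(X @ W + b)

for p in [10, 50, 90, 100, 110, 200, 1000]:     # p = n is the interpolation threshold
    W, b = rng.standard_normal((d, p)), rng.uniform(0, 2 * np.pi, p)
    Z, Zte = rff(X, W, b), rff(Xte, W, b)
    beta = np.linalg.pinv(Z) @ y                # minimum-norm least-squares fit
    print(p, np.mean((Zte @ beta - yte) ** 2))  # test MSE: peaks near p = n, then falls
```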

826 citations


Journal ArticleDOI
TL;DR: This large analysis integrating mCRPC genomics with histology and clinical outcomes identifies RB1 genomic alteration as a potent predictor of poor outcome, and is a community resource for further interrogation of clinical and molecular associations.
Abstract: Heterogeneity in the genomic landscape of metastatic prostate cancer has become apparent through several comprehensive profiling efforts, but little is known about the impact of this heterogeneity on clinical outcome. Here, we report comprehensive genomic and transcriptomic analysis of 429 patients with metastatic castration-resistant prostate cancer (mCRPC) linked with longitudinal clinical outcomes, integrating findings from whole-exome, transcriptome, and histologic analysis. For 128 patients treated with a first-line next-generation androgen receptor signaling inhibitor (ARSI; abiraterone or enzalutamide), we examined the association of 18 recurrent DNA- and RNA-based genomic alterations, including androgen receptor (AR) variant expression, AR transcriptional output, and neuroendocrine expression signatures, with clinical outcomes. Of these, only RB1 alteration was significantly associated with poor survival, whereas alterations in RB1, AR, and TP53 were associated with shorter time on treatment with an ARSI. This large analysis integrating mCRPC genomics with histology and clinical outcomes identifies RB1 genomic alteration as a potent predictor of poor outcome, and is a community resource for further interrogation of clinical and molecular associations.
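
The survival association reported here is the kind of analysis a Cox proportional-hazards model supports. A hedged sketch with the lifelines package on synthetic data follows; the column names and effect size are placeholders, not values from the study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 128
rb1_altered = rng.integers(0, 2, n)
# Synthetic survival times: RB1 alteration shortens time to event
time = rng.exponential(24 / (1 + 1.5 * rb1_altered))
event = rng.uniform(size=n) < 0.8               # ~20% censored

df = pd.DataFrame({"months": time, "event": event.astype(int),
                   "RB1_altered": rb1_altered})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()                             # hazard ratio > 1 for RB1_altered
```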

712 citations


Journal ArticleDOI
TL;DR: During the entire period, the mass loss was concentrated in areas closest to warm, salty, subsurface, circumpolar deep water (CDW), consistent with enhanced polar westerlies pushing CDW toward Antarctica to melt its floating ice shelves, destabilize the glaciers, and raise sea level.
Abstract: We use updated drainage inventory, ice thickness, and ice velocity data to calculate the grounding line ice discharge of 176 basins draining the Antarctic Ice Sheet from 1979 to 2017. We compare the results with a surface mass balance model to deduce the ice sheet mass balance. The total mass loss increased from 40 ± 9 Gt/y in 1979–1990 to 50 ± 14 Gt/y in 1989–2000, 166 ± 18 Gt/y in 1999–2009, and 252 ± 26 Gt/y in 2009–2017. In 2009–2017, the mass loss was dominated by the Amundsen/Bellingshausen Sea sectors, in West Antarctica (159 ± 8 Gt/y), Wilkes Land, in East Antarctica (51 ± 13 Gt/y), and West and Northeast Peninsula (42 ± 5 Gt/y). The contribution to sea-level rise from Antarctica averaged 3.6 ± 0.5 mm per decade with a cumulative 14.0 ± 2.0 mm since 1979, including 6.9 ± 0.6 mm from West Antarctica, 4.4 ± 0.9 mm from East Antarctica, and 2.5 ± 0.4 mm from the Peninsula (i.e., East Antarctica is a major participant in the mass loss). During the entire period, the mass loss was concentrated in areas closest to warm, salty, subsurface, circumpolar deep water (CDW), which is consistent with enhanced polar westerlies pushing CDW toward Antarctica to melt its floating ice shelves, destabilize the glaciers, and raise sea level.
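
The sea-level figures follow from the standard conversion that roughly 361.8 Gt of land ice raises global mean sea level by 1 mm (ocean area of about 3.618 × 10⁸ km², with 1 Gt of water occupying 1 km³). A quick arithmetic check of the numbers quoted in the abstract:

```python
GT_PER_MM = 361.8          # Gt of ice per mm of global mean sea-level rise

for period, loss_gt_per_yr in [("1979-1990", 40), ("1989-2000", 50),
                               ("1999-2009", 166), ("2009-2017", 252)]:
    print(period, f"{loss_gt_per_yr / GT_PER_MM:.2f} mm/yr")

# Cumulative 14.0 mm since 1979 implies roughly this much total mass loss:
print(f"total: {14.0 * GT_PER_MM:.0f} Gt")      # ~5,065 Gt
```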

654 citations


Journal ArticleDOI
TL;DR: It is highlighted that an improved understanding of emission sources, physical/chemical processes during haze evolution, and interactions with meteorological/climatic changes is necessary to unravel the causes, mechanisms, and trends of haze pollution.
Abstract: Regional severe haze represents an enormous environmental problem in China, influencing air quality, human health, ecosystems, weather, and climate. These extremes are characterized by exceedingly high concentrations of fine particulate matter (smaller than 2.5 µm, or PM2.5) and occur with extensive temporal (on a daily, weekly, to monthly timescale) and spatial (over a million square kilometers) coverage. Although significant advances have been made in field measurements, model simulations, and laboratory experiments for fine PM over recent years, the causes of severe haze formation have yet to be systematically and comprehensively evaluated. This review provides a synthetic synopsis of recent advances in understanding the fundamental mechanisms of severe haze formation in northern China, focusing on emission sources, chemical formation and transformation, and meteorological and climatic conditions. In particular, we highlight the synergetic effects from the interactions between anthropogenic emissions and atmospheric processes. Current challenges and future research directions to improve the understanding of severe haze pollution as well as plausible regulatory implications on a scientific basis are also discussed.

586 citations


Journal ArticleDOI
TL;DR: PD-1 blockade may facilitate the proliferation of highly suppressive PD-1+ eTreg cells in HPD, resulting in inhibition of antitumor immunity, and depletion of these cells may help treat and prevent HPD.
Abstract: PD-1 blockade is a cancer immunotherapy effective in various types of cancer. In a fraction of treated patients, however, it causes rapid cancer progression called hyperprogressive disease (HPD). With our observation of HPD in ∼10% of anti–PD-1 monoclonal antibody (mAb)-treated advanced gastric cancer (GC) patients, we explored how anti–PD-1 mAb caused HPD in these patients and how HPD could be treated and prevented. In the majority of GC patients, tumor-infiltrating FoxP3highCD45RA−CD4+ T cells [effector Treg (eTreg) cells], which were abundant and highly suppressive in tumors, expressed PD-1 at levels equivalent to those of tumor-infiltrating CD4+ or CD8+ effector/memory T cells and at much higher levels than circulating eTreg cells. Comparison of GC tissue samples before and after anti–PD-1 mAb therapy revealed that the treatment markedly increased tumor-infiltrating proliferative (Ki67+) eTreg cells in HPD patients, contrasting with their reduction in non-HPD patients. Functionally, circulating and tumor-infiltrating PD-1+ eTreg cells were highly activated, showing higher expression of CTLA-4 than PD-1− eTreg cells. PD-1 blockade significantly enhanced in vitro Treg cell suppressive activity. Similarly, in mice, genetic ablation or antibody-mediated blockade of PD-1 in Treg cells increased their proliferation and suppression of antitumor immune responses. Taken together, PD-1 blockade may facilitate the proliferation of highly suppressive PD-1+ eTreg cells in HPD, resulting in inhibition of antitumor immunity. The presence of actively proliferating PD-1+ eTreg cells in tumors is therefore a reliable marker for HPD. Depletion of eTreg cells in tumor tissues would be effective in treating and preventing HPD in PD-1 blockade cancer immunotherapy.

554 citations


Journal ArticleDOI
TL;DR: A metalearner, the X-learner, is proposed, which can adapt to structural properties, such as the smoothness and sparsity of the underlying treatment effect, and is shown to be easy to use and to produce results that are interpretable.
Abstract: There is growing interest in estimating and analyzing heterogeneous treatment effects in experimental and observational studies. We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms-such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks-to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the metalearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms. A software package is provided that implements our methods.
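
The X-learner itself is a short recipe: fit outcome models in each arm, impute individual treatment effects crosswise, fit CATE models on those imputations, and blend the two with a propensity-score weight. A minimal sketch with random forests as base learners on synthetic data (an illustration, not the authors' software package):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(3)
n = 2000
X = rng.uniform(-1, 1, (n, 3))
w = rng.integers(0, 2, n)                        # treatment assignment
tau_true = X[:, 0]                               # heterogeneous effect
y = X[:, 1] + w * tau_true + 0.1 * rng.standard_normal(n)

# Stage 1: outcome models per arm
mu0 = RandomForestRegressor().fit(X[w == 0], y[w == 0])
mu1 = RandomForestRegressor().fit(X[w == 1], y[w == 1])

# Stage 2: imputed treatment effects, crosswise
d1 = y[w == 1] - mu0.predict(X[w == 1])          # treated: Y - mu0(X)
d0 = mu1.predict(X[w == 0]) - y[w == 0]          # control: mu1(X) - Y
tau1 = RandomForestRegressor().fit(X[w == 1], d1)
tau0 = RandomForestRegressor().fit(X[w == 0], d0)

# Stage 3: blend with the propensity score g(x)
g = RandomForestClassifier().fit(X, w).predict_proba(X)[:, 1]
cate = g * tau0.predict(X) + (1 - g) * tau1.predict(X)
print("CATE RMSE:", np.sqrt(np.mean((cate - tau_true) ** 2)))
```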

546 citations


Journal ArticleDOI
TL;DR: A custom deep autoencoder network is designed to discover a coordinate transformation into a reduced space where the dynamics may be sparsely represented, and the governing equations and the associated coordinate system are simultaneously learned.
Abstract: The discovery of governing equations from scientific data has the potential to transform data-rich fields that lack well-characterized quantitative descriptions. Advances in sparse regression are currently enabling the tractable identification of both the structure and parameters of a nonlinear dynamical system from data. The resulting models have the fewest terms necessary to describe the dynamics, balancing model complexity with descriptive ability, and thus promoting interpretability and generalizability. This provides an algorithmic approach to Occam's razor for model discovery. However, this approach fundamentally relies on an effective coordinate system in which the dynamics have a simple representation. In this work, we design a custom deep autoencoder network to discover a coordinate transformation into a reduced space where the dynamics may be sparsely represented. Thus, we simultaneously learn the governing equations and the associated coordinate system. We demonstrate this approach on several example high-dimensional systems with low-dimensional behavior. The resulting modeling framework combines the strengths of deep neural networks for flexible representation and sparse identification of nonlinear dynamics (SINDy) for parsimonious models. This method places the discovery of coordinates and models on an equal footing.
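
The sparse-regression half of this framework, SINDy, is compact enough to sketch: build a library of candidate terms, then apply sequentially thresholded least squares so that only a few terms survive. The sketch below recovers a known two-dimensional linear system; the paper's actual contribution, learning the coordinate transformation with an autoencoder, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)
# Known test dynamics: xdot = -0.1x + 2y, ydot = -2x - 0.1y
states = rng.uniform(-2, 2, (500, 2))
x, y = states[:, 0], states[:, 1]
dstates = np.column_stack([-0.1 * x + 2 * y, -2 * x - 0.1 * y])

# Candidate-function library: [1, x, y, x^2, xy, y^2]
theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
names = ["1", "x", "y", "x^2", "xy", "y^2"]

# Sequentially thresholded least squares (STLSQ)
xi = np.linalg.lstsq(theta, dstates, rcond=None)[0]
for _ in range(10):
    xi[np.abs(xi) < 0.05] = 0                    # threshold small coefficients
    for k in range(dstates.shape[1]):            # refit on the active support
        big = np.abs(xi[:, k]) > 0
        xi[big, k] = np.linalg.lstsq(theta[:, big], dstates[:, k], rcond=None)[0]

for k, eq in enumerate(["xdot", "ydot"]):
    terms = [f"{c:+.2f}{t}" for c, t in zip(xi[:, k], names) if c != 0]
    print(eq, "=", " ".join(terms))              # recovers the two-term dynamics
```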

Journal ArticleDOI
TL;DR: This treatment protocol appears to be feasible and safe in men with low- or intermediate-risk localized prostate cancer without serious complications or deleterious changes in genitourinary function and results in greatly reduced patient morbidity and improved functional outcomes.
Abstract: Biocompatible gold nanoparticles designed to absorb light at wavelengths of high tissue transparency have been of particular interest for biomedical applications. The ability of such nanoparticles to convert absorbed near-infrared light to heat and induce highly localized hyperthermia has been shown to be highly effective for photothermal cancer therapy, resulting in cell death and tumor remission in a multitude of preclinical animal models. Here we report the initial results of a clinical trial in which laser-excited gold-silica nanoshells (GSNs) were used in combination with magnetic resonance–ultrasound fusion imaging to focally ablate low-intermediate-grade tumors within the prostate. The overall goal is to provide highly localized regional control of prostate cancer that also results in greatly reduced patient morbidity and improved functional outcomes. This pilot device study reports feasibility and safety data from 16 cases of patients diagnosed with low- or intermediate-risk localized prostate cancer. After GSN infusion and high-precision laser ablation, patients underwent multiparametric MRI of the prostate at 48 to 72 h, followed by postprocedure mpMRI/ultrasound targeted fusion biopsies at 3 and 12 mo, as well as a standard 12-core systematic biopsy at 12 mo. GSN-mediated focal laser ablation was successfully achieved in 94% (15/16) of patients, with no significant difference in International Prostate Symptom Score or Sexual Health Inventory for Men observed after treatment. This treatment protocol appears to be feasible and safe in men with low- or intermediate-risk localized prostate cancer without serious complications or deleterious changes in genitourinary function.

Journal ArticleDOI
TL;DR: Comparing passive lectures with active learning using a randomized experimental approach and identical course materials, it is found that students in the active classroom learn more, but they feel like they learn less, and attempts to evaluate instruction based on students’ perceptions of learning could inadvertently promote inferior pedagogical methods.
Abstract: We compared students’ self-reported perception of learning with their actual learning under controlled conditions in large-enrollment introductory college physics courses taught using 1) active instruction (following best practices in the discipline) and 2) passive instruction (lectures by experienced and highly rated instructors). Both groups received identical class content and handouts, students were randomly assigned, and the instructor made no effort to persuade students of the benefit of either method. Students in active classrooms learned more (as would be expected based on prior research), but their perception of learning, while positive, was lower than that of their peers in passive environments. This suggests that attempts to evaluate instruction based on students’ perceptions of learning could inadvertently promote inferior (passive) pedagogical methods. For instance, a superstar lecturer could create such a positive feeling of learning that students would choose those lectures over active learning. Most importantly, these results suggest that when students experience the increased cognitive effort associated with active learning, they initially take that effort to signify poorer learning. That disconnect may have a detrimental effect on students’ motivation, engagement, and ability to self-regulate their own learning. Although students can, on their own, discover the increased value of being actively engaged during a semester-long course, their learning may be impaired during the initial part of the course. We discuss strategies that instructors can use, early in the semester, to improve students’ response to being actively engaged in the classroom.

Journal ArticleDOI
TL;DR: Even in years of high SMB, enhanced glacier discharge has remained sufficiently high above equilibrium to maintain an annual mass loss every year since 1998, and the acceleration in mass loss switched from positive in 2000–2010 to negative in 2010–2018, which illustrates the difficulty of extrapolating short records into longer-term trends.
Abstract: We reconstruct the mass balance of the Greenland Ice Sheet using a comprehensive survey of thickness, surface elevation, velocity, and surface mass balance (SMB) of 260 glaciers from 1972 to 2018. We calculate mass discharge, D, into the ocean directly for 107 glaciers (85% of D) and indirectly for 110 glaciers (15%) using velocity-scaled reference fluxes. The decadal mass balance switched from a mass gain of +47 ± 21 Gt/y in 1972-1980 to a loss of 51 ± 17 Gt/y in 1980-1990. The mass loss increased from 41 ± 17 Gt/y in 1990-2000, to 187 ± 17 Gt/y in 2000-2010, to 286 ± 20 Gt/y in 2010-2018, or sixfold since the 1980s, or 80 ± 6 Gt/y per decade, on average. The acceleration in mass loss switched from positive in 2000-2010 to negative in 2010-2018 due to a series of cold summers, which illustrates the difficulty of extrapolating short records into longer-term trends. Cumulated since 1972, the largest contributions to global sea level rise are from northwest (4.4 ± 0.2 mm), southeast (3.0 ± 0.3 mm), and central west (2.0 ± 0.2 mm) Greenland, with a total 13.7 ± 1.1 mm for the ice sheet. The mass loss is controlled at 66 ± 8% by glacier dynamics (9.1 mm) and 34 ± 8% by SMB (4.6 mm). Even in years of high SMB, enhanced glacier discharge has remained sufficiently high above equilibrium to maintain an annual mass loss every year since 1998.
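
The "sixfold since the 1980s" and the roughly 80 Gt/y-per-decade acceleration can be checked directly from the decadal balances quoted above; a small arithmetic sketch (decade midpoints are approximate):

```python
import numpy as np

midpoints = np.array([1976, 1985, 1995, 2005, 2014])       # approx. decade centers
balance = np.array([+47, -51, -41, -187, -286])            # Gt/y, from the abstract

slope = np.polyfit(midpoints, balance, 1)[0]               # Gt/y per year
print(f"mass-balance trend: {10 * slope:.0f} Gt/y per decade")   # ~ -83
print(f"loss ratio 2010-2018 vs 1980-1990: {286 / 51:.1f}x")     # ~5.6, i.e. ~sixfold
```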

Journal ArticleDOI
TL;DR: It is shown how being misinformed is a function of a person’s ability and motivation to spot falsehoods, but also of other group-level and societal factors that increase the chances that citizens are exposed to correct(ive) information.
Abstract: Concerns about public misinformation in the United States—ranging from politics to science—are growing. Here, we provide an overview of how and why citizens become (and sometimes remain) misinformed about science. Our discussion focuses specifically on misinformation among individual citizens. However, it is impossible to understand individual information processing and acceptance without taking into account social networks, information ecologies, and other macro-level variables that provide important social context. Specifically, we show how being misinformed is a function of a person’s ability and motivation to spot falsehoods, but also of other group-level and societal factors that increase the chances that citizens are exposed to correct(ive) information. We conclude by discussing a number of research areas—some of which echo themes of the 2017 National Academies of Sciences, Engineering, and Medicine’s Communicating Science Effectively report—that will be particularly important for our future understanding of misinformation, specifically a systems approach to the problem of misinformation, the need for more systematic analyses of science communication in new media environments, and a (re)focusing on traditionally underserved audiences.

Journal ArticleDOI
TL;DR: The high-mobility group (HMG)-box transcription factors TOX and TOX2, as well as members of the NR4A family of nuclear receptors, are targets of the calcium/calcineurin-regulated transcription factor NFAT, even in the absence of its partner AP-1 (FOS-JUN).
Abstract: T cells expressing chimeric antigen receptors (CAR T cells) have shown impressive therapeutic efficacy against leukemias and lymphomas. However, they have not been as effective against solid tumors because they become hyporesponsive (“exhausted” or “dysfunctional”) within the tumor microenvironment, with decreased cytokine production and increased expression of several inhibitory surface receptors. Here we define a transcriptional network that mediates CD8+ T cell exhaustion. We show that the high-mobility group (HMG)-box transcription factors TOX and TOX2, as well as members of the NR4A family of nuclear receptors, are targets of the calcium/calcineurin-regulated transcription factor NFAT, even in the absence of its partner AP-1 (FOS-JUN). Using a previously established CAR T cell model, we show that TOX and TOX2 are highly induced in CD8+ CAR+ PD-1high TIM3high (“exhausted”) tumor-infiltrating lymphocytes (CAR TILs), and CAR TILs deficient in both TOX and TOX2 (Tox DKO) are more effective than wild-type (WT), TOX-deficient, or TOX2-deficient CAR TILs in suppressing tumor growth and prolonging survival of tumor-bearing mice. Like NR4A-deficient CAR TILs, Tox DKO CAR TILs show increased cytokine expression, decreased expression of inhibitory receptors, and increased accessibility of regions enriched for motifs that bind activation-associated nuclear factor κB (NFκB) and basic region-leucine zipper (bZIP) transcription factors. These data indicate that Tox and Nr4a transcription factors are critical for the transcriptional program of CD8+ T cell exhaustion downstream of NFAT. We provide evidence for positive regulation of NR4A by TOX and of TOX by NR4A, and suggest that disruption of TOX and NR4A expression or activity could be promising strategies for cancer immunotherapy.

Journal ArticleDOI
TL;DR: It is found that laypeople—on average—are quite good at distinguishing between lower- and higher-quality sources, and having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
Abstract: Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
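
The politically balanced rating used in this design is the unweighted average of the two parties' mean trust scores per source, which is then correlated with fact-checker ratings. A toy sketch on synthetic data (all numbers invented):

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
sources = [f"source_{i}" for i in range(60)]
quality = rng.uniform(0, 1, 60)                  # latent reliability per source

rows = []
for party in ["D", "R"]:
    for s, q in zip(sources, quality):
        for _ in range(20):                      # 20 raters per party per source
            rows.append({"party": party, "source": s,
                         "trust": q + 0.3 * rng.standard_normal()})
df = pd.DataFrame(rows)

# Equal-weight the parties: mean within party, then average across parties
balanced = df.groupby(["source", "party"])["trust"].mean().groupby("source").mean()

fact_checkers = quality + 0.1 * rng.standard_normal(60)   # stand-in expert ratings
r, _ = pearsonr(balanced.loc[sources].values, fact_checkers)
print(f"layperson-expert correlation r = {r:.2f}")
```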

Journal ArticleDOI
TL;DR: It is concluded that to save millions of lives and restore aerosol-perturbed rainfall patterns, while limiting global warming to 2 °C, a rapid phaseout of fossil-fuel-related emissions and major reductions of other anthropogenic sources are needed.
Abstract: Anthropogenic greenhouse gases and aerosols are associated with climate change and human health risks. We used a global model to estimate the climate and public health outcomes attributable to fossil fuel use, indicating the potential benefits of a phaseout. We show that a phaseout can avoid an excess mortality rate of 3.61 (2.96-4.21) million per year from outdoor air pollution worldwide. This could be up to 5.55 (4.52-6.52) million per year by additionally controlling nonfossil anthropogenic sources. Globally, fossil-fuel-related emissions account for about 65% of the excess mortality, and 70% of the climate cooling by anthropogenic aerosols. The chemical influence of air pollution on aeolian dust contributes to the aerosol cooling. Because aerosols affect the hydrologic cycle, removing the anthropogenic emissions in the model increases rainfall by 10-70% over densely populated regions in India and 10-30% over northern China, and by 10-40% over Central America, West Africa, and the drought-prone Sahel, thus contributing to water and food security. Since aerosols mask the anthropogenic rise in global temperature, removing fossil-fuel-generated particles liberates 0.51 (±0.03) °C of warming, and removing all pollution particles liberates 0.73 (±0.03) °C, reaching around 2 °C over North America and Northeast Asia. The steep temperature increase from removing aerosols can be moderated to about 0.36 (±0.06) °C globally by the simultaneous reduction of tropospheric ozone and methane. We conclude that a rapid phaseout of fossil-fuel-related emissions and major reductions of other anthropogenic sources are needed to save millions of lives, restore aerosol-perturbed rainfall patterns, and limit global warming to 2 °C.

Journal ArticleDOI
TL;DR: The ability to perform spatially resolved, genome-wide RNA profiling with high detection efficiency and accuracy by MERFISH could help address a wide array of questions ranging from the regulation of gene expression in cells to the development of cell fate and organization in tissues.
Abstract: The expression profiles and spatial distributions of RNAs regulate many cellular functions. Image-based transcriptomic approaches provide powerful means to measure both expression and spatial information of RNAs in individual cells within their native environment. Among these approaches, multiplexed error-robust fluorescence in situ hybridization (MERFISH) has achieved spatially resolved RNA quantification at transcriptome scale by massively multiplexing single-molecule FISH measurements. Here, we increased the gene throughput of MERFISH and demonstrated simultaneous measurements of RNA transcripts from ∼10,000 genes in individual cells with ∼80% detection efficiency and ∼4% misidentification rate. We combined MERFISH with cellular structure imaging to determine subcellular compartmentalization of RNAs. We validated this approach by showing enrichment of secretome transcripts at the endoplasmic reticulum, and further revealed enrichment of long noncoding RNAs, RNAs with retained introns, and a subgroup of protein-coding mRNAs in the cell nucleus. Leveraging spatially resolved RNA profiling, we developed an approach to determine RNA velocity in situ using the balance of nuclear versus cytoplasmic RNA counts. We applied this approach to infer pseudotime ordering of cells and identified cells at different cell-cycle states, revealing ∼1,600 genes with putative cell cycle-dependent expression and a gradual transcription profile change as cells progress through cell-cycle stages. Our analysis further revealed cell cycle-dependent and cell cycle-independent spatial heterogeneity of transcriptionally distinct cells. We envision that the ability to perform spatially resolved, genome-wide RNA profiling with high detection efficiency and accuracy by MERFISH could help address a wide array of questions ranging from the regulation of gene expression in cells to the development of cell fate and organization in tissues.

Journal ArticleDOI
TL;DR: A hierarchical anode consisting of a nickel–iron hydroxide electrocatalyst layer uniformly coated on a sulfide layer formed on Ni substrate was developed, affording superior catalytic activity and corrosion resistance in seawater electrolysis.
Abstract: Electrolysis of water to generate hydrogen fuel is an attractive renewable energy storage technology. However, grid-scale freshwater electrolysis would put a heavy strain on vital water resources. Developing cheap electrocatalysts and electrodes that can sustain seawater splitting without chloride corrosion could address the water scarcity issue. Here we present a multilayer anode consisting of a nickel–iron hydroxide (NiFe) electrocatalyst layer uniformly coated on a nickel sulfide (NiSx) layer formed on porous Ni foam (NiFe/NiSx-Ni), affording superior catalytic activity and corrosion resistance in solar-driven alkaline seawater electrolysis operating at industrially required current densities (0.4 to 1 A/cm2) over 1,000 h. A continuous, highly oxygen evolution reaction-active NiFe electrocatalyst layer drawing anodic currents toward water oxidation and an in situ-generated polyatomic sulfate and carbonate-rich passivating layers formed in the anode are responsible for chloride repelling and superior corrosion resistance of the salty-water-splitting anode.

Journal ArticleDOI
TL;DR: It is found that foods associated with improved adult health also often have low environmental impacts, indicating that the same dietary transitions that would lower incidences of noncommunicable diseases would also help meet environmental sustainability targets.
Abstract: Food choices are shifting globally in ways that are negatively affecting both human health and the environment. Here we consider how consuming an additional serving per day of each of 15 foods is associated with 5 health outcomes in adults and 5 aspects of agriculturally driven environmental degradation. We find that while there is substantial variation in the health outcomes of different foods, foods associated with a larger reduction in disease risk for one health outcome are often associated with larger reductions in disease risk for other health outcomes. Likewise, foods with lower impacts on one metric of environmental harm tend to have lower impacts on others. Additionally, of the foods associated with improved health (whole grain cereals, fruits, vegetables, legumes, nuts, olive oil, and fish), all except fish have among the lowest environmental impacts, and fish has markedly lower impacts than red meats and processed meats. Foods associated with the largest negative environmental impacts—unprocessed and processed red meat—are consistently associated with the largest increases in disease risk. Thus, dietary transitions toward greater consumption of healthier foods would generally improve environmental sustainability, although processed foods high in sugars harm health but can have relatively low environmental impacts. These findings could help consumers, policy makers, and food companies to better understand the multiple health and environmental implications of food choices.

Journal ArticleDOI
TL;DR: Default mode network functional connectivity remains a prime target for understanding the pathophysiology of depression, with particular relevance to revealing mechanisms of effective treatments; contrary to prior reports, FC within the DMN was found to be reduced rather than increased.
Abstract: Major depressive disorder (MDD) is common and disabling, but its neuropathophysiology remains unclear. Most studies of functional brain networks in MDD have had limited statistical power and data analysis approaches have varied widely. The REST-meta-MDD Project of resting-state fMRI (R-fMRI) addresses these issues. Twenty-five research groups in China established the REST-meta-MDD Consortium by contributing R-fMRI data from 1,300 patients with MDD and 1,128 normal controls (NCs). Data were preprocessed locally with a standardized protocol before aggregated group analyses. We focused on functional connectivity (FC) within the default mode network (DMN), frequently reported to be increased in MDD. Instead, we found decreased DMN FC when we compared 848 patients with MDD to 794 NCs from 17 sites after data exclusion. We found FC reduction only in recurrent MDD, not in first-episode drug-naive MDD. Decreased DMN FC was associated with medication usage but not with MDD duration. DMN FC was also positively related to symptom severity but only in recurrent MDD. Exploratory analyses also revealed alterations in FC of visual, sensory-motor, and dorsal attention networks in MDD. We confirmed the key role of DMN in MDD but found reduced rather than increased FC within the DMN. Future studies should test whether decreased DMN FC mediates response to treatment. All R-fMRI indices of data contributed by the REST-meta-MDD consortium are being shared publicly via the R-fMRI Maps Project.
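
Within-network functional connectivity of the kind compared here is typically computed as the mean Fisher-z-transformed correlation among a network's ROI time series, followed by a group comparison. A minimal sketch on synthetic data; the ROI count and effect sizes are invented, not consortium values:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(6)
n_roi, n_tp = 10, 200                            # DMN ROIs, fMRI time points

def dmn_fc(shared_strength):
    """Mean Fisher-z correlation among ROI time series with a shared signal."""
    shared = rng.standard_normal(n_tp)
    ts = shared_strength * shared + rng.standard_normal((n_roi, n_tp))
    r = np.corrcoef(ts)[np.triu_indices(n_roi, k=1)]
    return np.arctanh(r).mean()

patients = [dmn_fc(0.4) for _ in range(100)]     # weaker shared DMN signal
controls = [dmn_fc(0.5) for _ in range(100)]
t, p = ttest_ind(patients, controls)
print(f"t = {t:.2f}, p = {p:.3g}")               # patients < controls, as reported
```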

Journal ArticleDOI
TL;DR: This work uses data on police-involved deaths to estimate how the risk of being killed by police use of force in the United States varies across social groups, and finds that African American men and women, American Indian and Alaska Native women and men, and Latino men face higher lifetime risk than do their white peers.
Abstract: We use data on police-involved deaths to estimate how the risk of being killed by police use of force in the United States varies across social groups. We estimate the lifetime and age-specific risks of being killed by police by race and sex. We also provide estimates of the proportion of all deaths accounted for by police use of force. We find that African American men and women, American Indian/Alaska Native men and women, and Latino men face higher lifetime risk of being killed by police than do their white peers. We find that Latina women and Asian/Pacific Islander men and women face lower risk of being killed by police than do their white peers. Risk is highest for black men, who (at current levels of risk) face about a 1 in 1,000 chance of being killed by police over the life course. The average lifetime odds of being killed by police are about 1 in 2,000 for men and about 1 in 33,000 for women. Risk peaks between the ages of 20 y and 35 y for all groups. For young men of color, police use of force is among the leading causes of death.
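
Lifetime risk in this kind of analysis comes from a standard life-table cumulation of age-specific risks: the probability of ever experiencing the event is one minus the product of surviving each year's risk. A sketch with hypothetical rates (not the paper's estimates):

```python
import numpy as np

# Hypothetical age-specific annual risks per 100,000 (peaking at ages 20-35)
ages = np.arange(0, 85)
rate_per_100k = np.where((ages >= 20) & (ages <= 35), 1.8, 0.4)

annual_risk = rate_per_100k / 1e5
lifetime_risk = 1 - np.prod(1 - annual_risk)
print(f"lifetime risk: 1 in {1 / lifetime_risk:,.0f}")   # same order as the paper's 1 in 2,000
```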

Journal ArticleDOI
TL;DR: An unprecedented singlet oxygen mediated Fenton-like process catalyzed by ∼2-nm Fe2O3 nanoparticles distributed inside multiwalled carbon nanotubes with inner diameter of ∼7 nm, showing exotic catalytic activities, unforeseen adsorption-dependent selectivity, and pH stability for the oxidation of organic compounds.
Abstract: For several decades, the iron-based Fenton-like catalysis has been believed to be mediated by hydroxyl radicals or high-valent iron-oxo species, while only sporadic evidence supported the generation of singlet oxygen (1O2) in the Haber-Weiss cycle. Herein, we report an unprecedented singlet oxygen mediated Fenton-like process catalyzed by ∼2-nm Fe2O3 nanoparticles distributed inside multiwalled carbon nanotubes with an inner diameter of ∼7 nm. Unlike the traditional Fenton-like processes, this delicately designed system was shown to selectively oxidize the organic dyes that could be adsorbed, with oxidation rates linearly proportional to the adsorption affinity. It also exhibited remarkably higher degradation activity (22.5 times faster) toward a model pollutant, methylene blue, than its nonconfined analog. Strikingly, the unforeseen stability at pH values up to 9.0 greatly expands the use of Fenton-like catalysts in alkaline conditions. This work represents a fundamental breakthrough toward the design and understanding of Fenton-like systems under nanoconfinement and may have implications for other fields, especially biological systems.

Journal ArticleDOI
TL;DR: A general mathematical framework is proposed to quantify ecological stochasticity under different situations in which deterministic factors drive the communities to be more similar or more dissimilar than the null expectation.
Abstract: Understanding the community assembly mechanisms controlling biodiversity patterns is a central issue in ecology. Although it is generally accepted that both deterministic and stochastic processes play important roles in community assembly, quantifying their relative importance is challenging. Here we propose a general mathematical framework to quantify ecological stochasticity under different situations in which deterministic factors drive the communities to be more similar or more dissimilar than the null expectation. An index, the normalized stochasticity ratio (NST), was developed with 50% as the boundary point between more deterministic (<50%) and more stochastic (>50%) assembly. NST was tested with simulated communities by considering abiotic filtering, competition, environmental noise, and spatial scales. All tested approaches showed limited performance at large spatial scales or under very high environmental noise. However, in all of the other simulated scenarios, NST showed high accuracy (0.90 to 1.00) and precision (0.91 to 0.99), with averages of 0.37 higher accuracy (0.1 to 0.7) and 0.33 higher precision (0.0 to 1.8) than previous approaches. NST was also applied to estimate stochasticity in the succession of a groundwater microbial community in response to organic carbon (vegetable oil) injection. Our results showed that community assembly was shifted from more deterministic (NST = 21%) to more stochastic (NST = 70%) right after organic carbon input. As the vegetable oil was consumed, the community gradually returned to being more deterministic (NST = 27%). In addition, our results demonstrated that null model algorithms and community similarity metrics had strong effects on quantifying ecological stochasticity.
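
A heavily simplified version of the underlying idea, comparing observed pairwise dissimilarity against a null-model expectation and scoring assembly as more stochastic the closer the two are, can be sketched as follows. This toy ratio is not the published NST, which specifies particular null-model algorithms, similarity metrics, and normalization.

```python
import numpy as np

rng = np.random.default_rng(7)

def jaccard_dissim(a, b):
    a, b = a > 0, b > 0
    return 1 - (a & b).sum() / (a | b).sum()

def null_communities(comm, n_null=200):
    """Null model: shuffle which taxa occur, preserving each sample's richness."""
    return [np.array([rng.permutation(row) for row in comm])
            for _ in range(n_null)]

def toy_stochasticity(comm):
    pairs = [(i, j) for i in range(len(comm)) for j in range(i + 1, len(comm))]
    d_obs = np.mean([jaccard_dissim(comm[i], comm[j]) for i, j in pairs])
    d_null = np.mean([jaccard_dissim(nc[i], nc[j])
                      for nc in null_communities(comm) for i, j in pairs])
    # 1 when observation matches the null expectation (fully stochastic),
    # smaller as determinism pushes communities more similar or more dissimilar
    return min(d_obs, d_null) / max(d_obs, d_null)

comm = (rng.uniform(size=(6, 40)) < 0.3).astype(int)   # 6 samples x 40 taxa
print(f"toy stochasticity: {toy_stochasticity(comm):.2f}")
```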

Journal ArticleDOI
TL;DR: An inflammatory polysaccharide produced by the gut bacterium Ruminococcus gnavus is identified and characterized; it induces the production of inflammatory cytokines such as TNFα by dendritic cells and may contribute to the association between R. gnavus and Crohn’s disease.
Abstract: A substantial and increasing number of human diseases are associated with changes in the gut microbiota, and discovering the molecules and mechanisms underlying these associations represents a major research goal. Multiple studies associate Ruminococcus gnavus, a prevalent gut microbe, with Crohn’s disease, a major type of inflammatory bowel disease. We have found that R. gnavus synthesizes and secretes a complex glucorhamnan polysaccharide with a rhamnose backbone and glucose sidechains. Chemical and spectroscopic studies indicated that the glucorhamnan was largely a repeating unit of five sugars with a linear backbone formed from three rhamnose units and a short sidechain composed of two glucose units. The rhamnose backbone is made from 1,2- and 1,3-linked rhamnose units, and the sidechain has a terminal glucose linked to a 1,6-glucose. This glucorhamnan potently induces inflammatory cytokine (TNFα) secretion by dendritic cells, and TNFα secretion is dependent on toll-like receptor 4 (TLR4). We also identify a putative biosynthetic gene cluster for this molecule, which has the four biosynthetic genes needed to convert glucose to rhamnose and the five glycosyl transferases needed to build the repeating pentasaccharide unit of the inflammatory glucorhamnan.

Journal ArticleDOI
TL;DR: The findings of a structured expert judgement study, using unique techniques for modeling correlations between inter- and intra-ice sheet processes and their tail dependences, find that a global total SLR exceeding 2 m by 2100 lies within the 90% uncertainty bounds for a high emission scenario.
Abstract: Despite considerable advances in process understanding, numerical modeling, and the observational record of ice sheet contributions to global mean sea-level rise (SLR) since the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change, severe limitations remain in the predictive capability of ice sheet models. As a consequence, the potential contributions of ice sheets remain the largest source of uncertainty in projecting future SLR. Here, we report the findings of a structured expert judgement study, using unique techniques for modeling correlations between inter- and intra-ice sheet processes and their tail dependences. We find that since the AR5, expert uncertainty has grown, in particular because of uncertain ice dynamic effects. For a +2 °C temperature scenario consistent with the Paris Agreement, we obtain a median estimate of a 26 cm SLR contribution by 2100, with a 95th percentile value of 81 cm. For a +5 °C temperature scenario more consistent with unchecked emissions growth, the corresponding values are 51 and 178 cm, respectively. Inclusion of thermal expansion and glacier contributions results in a global total SLR estimate that exceeds 2 m at the 95th percentile. Our findings support the use of scenarios of 21st century global total SLR exceeding 2 m for planning purposes. Beyond 2100, uncertainty and projected SLR increase rapidly. The 95th percentile ice sheet contribution by 2200, for the +5 °C scenario, is 7.5 m as a result of instabilities coming into play in both West and East Antarctica. Introducing process correlations and tail dependences increases estimates by roughly 15%.

Journal ArticleDOI
TL;DR: The results show that, in addition to not sharing equally in the direct benefits of fossil fuel use, many poor countries have been significantly harmed by the warming arising from wealthy countries’ energy consumption.
Abstract: Understanding the causes of economic inequality is critical for achieving equitable economic development. To investigate whether global warming has affected the recent evolution of inequality, we combine counterfactual historical temperature trajectories from a suite of global climate models with extensively replicated empirical evidence of the relationship between historical temperature fluctuations and economic growth. Together, these allow us to generate probabilistic country-level estimates of the influence of anthropogenic climate forcing on historical economic output. We find very high likelihood that anthropogenic climate forcing has increased economic inequality between countries. For example, per capita gross domestic product (GDP) has been reduced 17–31% at the poorest four deciles of the population-weighted country-level per capita GDP distribution, yielding a ratio between the top and bottom deciles that is 25% larger than in a world without global warming. As a result, although between-country inequality has decreased over the past half century, there is ∼90% likelihood that global warming has slowed that decrease. The primary driver is the parabolic relationship between temperature and economic growth, with warming increasing growth in cool countries and decreasing growth in warm countries. Although there is uncertainty in whether historical warming has benefited some temperate, rich countries, for most poor countries there is >90% likelihood that per capita GDP is lower today than if global warming had not occurred. Thus, our results show that, in addition to not sharing equally in the direct benefits of fossil fuel use, many poor countries have been significantly harmed by the warming arising from wealthy countries’ energy consumption.
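
The counterfactual machinery rests on a concave (parabolic) growth-temperature response: each year a country grows faster or slower depending on where its temperature sits relative to an optimum, and GDP paths with and without warming are compounded and compared. A toy sketch; the coefficients below are of the kind estimated in this literature (e.g., Burke et al.) and are used purely for illustration.

```python
import numpy as np

def growth_effect(temp_c, a=0.0127, b=-0.0005):
    """Parabolic temperature-growth response (peak near 13 °C); illustrative."""
    return a * temp_c + b * temp_c**2

def gdp_ratio(base_temp, warming, years=50):
    """GDP with warming relative to a no-warming counterfactual."""
    t = np.linspace(0, warming, years)           # gradual warming over the period
    dg = growth_effect(base_temp + t) - growth_effect(base_temp)
    return np.prod(1 + dg)

for name, base in [("cool country", 8.0), ("warm country", 25.0)]:
    print(name, f"GDP ratio after 50 y of +1 °C: {gdp_ratio(base, 1.0):.3f}")
# Warming boosts the cool country's output and depresses the warm one's
```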

Journal ArticleDOI
TL;DR: The satellite record reveals that a gradual, decades-long overall increase in Antarctic sea ice extents reversed in 2014, with subsequent rates of decrease in 2014–2017 far exceeding the more widely publicized decay rates experienced in the Arctic.
Abstract: Following over 3 decades of gradual but uneven increases in sea ice coverage, the yearly average Antarctic sea ice extents reached a record high of 12.8 × 10⁶ km² in 2014, followed by a decline so precipitous that they reached their lowest value in the 40-y 1979–2018 satellite multichannel passive-microwave record, 10.7 × 10⁶ km², in 2017. In contrast, it took the Arctic sea ice cover a full 3 decades to register a loss that great in yearly average ice extents. Still, when considering the 40-y record as a whole, the Antarctic sea ice continues to have a positive overall trend in yearly average ice extents, although at 11,300 ± 5,300 km²⋅y⁻¹, this trend is only 50% of the trend for 1979–2014, before the precipitous decline. Four of the 5 sectors into which the Antarctic sea ice cover is divided also have 40-y positive trends that are well reduced from their 2014–2017 values. The one anomalous sector in this regard, the Bellingshausen/Amundsen Seas, has a 40-y negative trend, with the yearly average ice extents decreasing overall in the first 3 decades, reaching a minimum in 2007, and exhibiting an overall upward trend since 2007 (i.e., reflecting a reversal in the opposite direction from the other 4 sectors and the Antarctic sea ice cover as a whole).

Journal ArticleDOI
TL;DR: It is shown that high levels of green space presence during childhood are associated with lower risk of a wide spectrum of psychiatric disorders later in life, supporting efforts to better integrate natural environments into urban planning and childhood life.
Abstract: Urban residence is associated with a higher risk of some psychiatric disorders, but the underlying drivers remain unknown. There is increasing evidence that the level of exposure to natural environments impacts mental health, but few large-scale epidemiological studies have assessed the general existence and importance of such associations. Here, we investigate the prospective association between green space and mental health in the Danish population. Green space presence was assessed at the individual level using high-resolution satellite data to calculate the normalized difference vegetation index within a 210 × 210 m square around each person’s place of residence (∼1 million people) from birth to the age of 10. We show that high levels of green space presence during childhood are associated with lower risk of a wide spectrum of psychiatric disorders later in life. Risk for subsequent mental illness for those who lived with the lowest level of green space during childhood was up to 55% higher across various disorders compared with those who lived with the highest level of green space. The association remained even after adjusting for urbanization, socioeconomic factors, parental history of mental illness, and parental age. Stronger association of cumulative green space presence during childhood compared with single-year green space presence suggests that presence throughout childhood is important. Our results show that green space during childhood is associated with better mental health, supporting efforts to better integrate natural environments into urban planning and childhood life.
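
The exposure measure here, NDVI, is the band ratio (NIR − Red)/(NIR + Red) averaged over a window around the residence. A minimal sketch; treating the 210 × 210 m square as a 21 × 21 window assumes 10 m pixels, an assumption made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
nir = rng.uniform(0.2, 0.6, (1000, 1000))        # synthetic near-infrared band
red = rng.uniform(0.05, 0.3, (1000, 1000))       # synthetic red band
ndvi = (nir - red) / (nir + red)                 # in [-1, 1]; higher = greener

def green_space(ndvi, row, col, half=10):
    """Mean NDVI in a 21 x 21 pixel (~210 x 210 m at 10 m/pixel) window."""
    return ndvi[row - half:row + half + 1, col - half:col + half + 1].mean()

print(f"green space at residence: {green_space(ndvi, 500, 500):.2f}")
```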