
Journal ArticleDOI
TL;DR: How autophagy can promote cancer through suppressing p53 and preventing energy crisis, cell death, senescence, and an anti-tumor immune response is discussed.
Abstract: Macroautophagy (referred to here as autophagy) is induced by starvation to capture and degrade intracellular proteins and organelles in lysosomes, which recycles intracellular components to sustain metabolism and survival. Autophagy also plays a major homeostatic role in controlling protein and organelle quality and quantity. Dysfunctional autophagy contributes to many diseases. In cancer, autophagy can be neutral, tumor-suppressive, or tumor-promoting in different contexts. Large-scale genomic analysis of human cancers indicates that the loss or mutation of core autophagy genes is uncommon, whereas oncogenic events that activate autophagy and lysosomal biogenesis have been identified. Autophagic flux, however, is difficult to measure in human tumor samples, making functional assessment of autophagy problematic in a clinical setting. Autophagy impacts cellular metabolism, the proteome, and organelle numbers and quality, which alter cell functions in diverse ways. Moreover, autophagy influences the interaction between the tumor and the host by promoting stress adaptation and suppressing activation of innate and adaptive immune responses. Additionally, autophagy can promote a cross-talk between the tumor and the stroma, which can support tumor growth, particularly in a nutrient-limited microenvironment. Thus, the role of autophagy in cancer is determined by nutrient availability, microenvironment stress, and the presence of an immune system. Here we discuss recent developments in the role of autophagy in cancer, in particular how autophagy can promote cancer through suppressing p53 and preventing energy crisis, cell death, senescence, and an anti-tumor immune response.

587 citations


Journal ArticleDOI
TL;DR: An overview of stroke in the 21st century from a public health perspective is provided and the public health burden of stroke is set to rise over future decades because of demographic transitions of populations, particularly in developing countries.
Abstract: Stroke is ranked as the second leading cause of death worldwide with an annual mortality rate of about 5.5 million. Not only does the burden of stroke lie in the high mortality but the high morbidity also results in up to 50% of survivors being chronically disabled. Thus stroke is a disease of immense public health importance with serious economic and social consequences. The public health burden of stroke is set to rise over future decades because of demographic transitions of populations, particularly in developing countries. This paper provides an overview of stroke in the 21st century from a public health perspective.

587 citations


Journal ArticleDOI
TL;DR: Recent advancements in the care of burn patients with a focus on the pathophysiology and treatment of burn wounds are reviewed, including improvements in patient stabilization and care.
Abstract: Burns are a prevalent and burdensome critical care problem. The priorities of specialized facilities focus on stabilizing the patient, preventing infection, and optimizing functional recovery. Research on burns has generated sustained interest over the past few decades, and several important advancements have resulted in more effective patient stabilization and decreased mortality, especially among young patients and those with burns of intermediate extent. However, for the intensivist, challenges often exist that complicate patient support and stabilization. Furthermore, burn wounds are complex and can present unique difficulties that require late intervention or life-long rehabilitation. In addition to improvements in patient stabilization and care, research in burn wound care has yielded advancements that will continue to improve functional recovery. This article reviews recent advancements in the care of burn patients with a focus on the pathophysiology and treatment of burn wounds.

587 citations


Journal ArticleDOI
14 Feb 2018-BMJ
TL;DR: In this large prospective study, a 10% increase in the proportion of ultra-processed foods in the diet was associated with a significant increase of greater than 10% in risks of overall and breast cancer.
Abstract: Objective To assess the prospective associations between consumption of ultra-processed food and risk of cancer. Design Population based cohort study. Setting and participants 104 980 participants aged at least 18 years (median age 42.8 years) from the French NutriNet-Sante cohort (2009-17). Dietary intakes were collected using repeated 24 hour dietary records, designed to register participants’ usual consumption for 3300 different food items. These were categorised according to their degree of processing by the NOVA classification. Main outcome measures Associations between ultra-processed food intake and risk of overall, breast, prostate, and colorectal cancer assessed by multivariable Cox proportional hazard models adjusted for known risk factors. Results Ultra-processed food intake was associated with higher overall cancer risk (n=2228 cases; hazard ratio for a 10% increment in the proportion of ultra-processed food in the diet 1.12 (95% confidence interval 1.06 to 1.18); P for trend Conclusions In this large prospective study, a 10% increase in the proportion of ultra-processed foods in the diet was associated with a significant increase of greater than 10% in risks of overall and breast cancer. Further studies are needed to better understand the relative effect of the various dimensions of processing (nutritional composition, food additives, contact materials, and neoformed contaminants) in these associations. Study registration Clinicaltrials.gov NCT03335644.

586 citations


Journal ArticleDOI
Renu R. Bahadoer1, Esmée A Dijkstra2, Boudewijn van Etten2, Corrie A.M. Marijnen3, Corrie A.M. Marijnen1, Hein Putter1, Elma Meershoek-Klein Kranenbarg1, Annet G H Roodvoets1, Iris D. Nagtegaal4, Regina G. H. Beets-Tan3, Lennart Blomqvist5, Tone Fokstuen5, Albert J. ten Tije, Jaume Capdevila6, Mathijs P. Hendriks, Ibrahim Edhemovic7, Andrés Cervantes8, Per Nilsson5, Bengt Glimelius9, Cornelis J.H. van de Velde1, Geke A. P. Hospers2, L. Østergaard, F. Svendsen Jensen, P. Pfeiffer, K.E.J. Jensen, M.P. Hendriks, W.H. Schreurs, H.P. Knol, J.J. van der Vliet, J.B. Tuynman, A.M.E. Bruynzeel, E.D. Kerver, S. Festen, M E van Leerdam, G.L. Beets, L.G.H. Dewit, C.J.A. Punt, Pieter J. Tanis, E.D. Geijsen, P. Nieboer, W.A. Bleeker, A.J. Ten Tije, R.M.P.H. Crolla, A.C.M. van de Luijtgaarden, J.W.T. Dekker, J.M. Immink, F.J.F. Jeurissen, A.W.K.S. Marinelli, H.M. Ceha, T.C. Stam, P. Quarles an Ufford, W.H. Steup, A.L.T. Imholz, R.J.I. Bosker, J.H.M. Bekker, G.J. Creemers, G.A.P. Nieuwenhuijzen, H. van den Berg, W.M. van der Deure, R.F. Schmitz, J.M. van Rooijen, A.F.T. Olieman, A.C.M. van den Bergh, Derk Jan A. de Groot, Klaas Havenga, Jannet C. Beukema, J. de Boer, P.H.J.M. Veldman, E.J.M. Siemerink, J.W.P. Vanstiphout, B. de Valk, Q.A.J. Eijsbouts, M.B. Polée, C. Hoff, A. Slot, H.W. Kapiteijn, K.C.M.J. Peeters, F.P. Peters, P.A. Nijenhuis, S.A. Radema, H. de Wilt, P. Braam, G.J. Veldhuis, D. Hess, T. Rozema, O. Reerink, D. Ten Bokkel Huinink, A. Pronk, Janet R. Vos, M. Tascilar, G.A. Patijn, C. Kersten, O. Mjåland, M. Grønlie Guren, A.N. Nesbakken, J. Benedik, I. Edhemovic7, V. Velenik, J. Capdevila6, E. Espin, R. Salazar, S. Biondo, V. Pachón, J. die Trill, J. Aparicio, E. Garcia Granero, M.J. Safont, J.C. Bernal, A. Cervantes8, A. Espí Macías, L. Malmberg, G. Svaninger, H. Hörberg, G. Dafnis, A. Berglund, L. Österlund, K. Kovacs, J. Hol, S. Ottosson, G. Carlsson, C. Bratthäll, J. Assarsson, B.L. Lödén, P. Hede, I. Verbiené, O. Hallböök, A. Johnsson, M.L. Lydrup, K. Villmann, P. Matthiessen, J.H. Svensson, J. Haux, S. Skullman, T. Fokstuen5, Torbjörn Holm, P. Flygare, M. Walldén, B. Lindh, O. Lundberg, C. Radu, L. Påhlman, A. Piwowar, K. Smedh, U. Palenius, S. Jangmalm, P. Parinkh, H. Kim, M.L. Silviera 
TL;DR: The Rectal cancer And Preoperative Induction therapy followed by Dedicated Operation (RAPIDO) trial aimed to reduce distant metastases without compromising locoregional control.
Abstract: Summary Background Systemic relapses remain a major problem in locally advanced rectal cancer. Using short-course radiotherapy followed by chemotherapy and delayed surgery, the Rectal cancer And Preoperative Induction therapy followed by Dedicated Operation (RAPIDO) trial aimed to reduce distant metastases without compromising locoregional control. Methods In this multicentre, open-label, randomised, controlled, phase 3 trial, participants were recruited from 54 centres in the Netherlands, Sweden, Spain, Slovenia, Denmark, Norway, and the USA. Patients were eligible if they were aged 18 years or older, with an Eastern Cooperative Oncology Group (ECOG) performance status of 0–1, had a biopsy-proven, newly diagnosed, primary, locally advanced rectal adenocarcinoma, which was classified as high risk on pelvic MRI (with at least one of the following criteria: clinical tumour [cT] stage cT4a or cT4b, extramural vascular invasion, clinical nodal [cN] stage cN2, involved mesorectal fascia, or enlarged lateral lymph nodes), were mentally and physically fit for chemotherapy, and could be assessed for staging within 5 weeks before randomisation. Eligible participants were randomly assigned (1:1), using a management system with a randomly varying block design (each block size randomly chosen to contain two to four allocations), stratified by centre, ECOG performance status, cT stage, and cN stage, to either the experimental or standard of care group. All investigators remained masked for the primary endpoint until a prespecified number of events was reached. Patients allocated to the experimental treatment group received short-course radiotherapy (5 × 5 Gy over a maximum of 8 days) followed by six cycles of CAPOX chemotherapy (capecitabine 1000 mg/m2 orally twice daily on days 1–14, oxaliplatin 130 mg/m2 intravenously on day 1, and a chemotherapy-free interval between days 15–21) or nine cycles of FOLFOX4 (oxaliplatin 85 mg/m2 intravenously on day 1, leucovorin [folinic acid] 200 mg/m2 intravenously on days 1 and 2, followed by bolus fluorouracil 400 mg/m2 intravenously and fluorouracil 600 mg/m2 intravenously for 22 h on days 1 and 2, and a chemotherapy-free interval between days 3–14) followed by total mesorectal excision. Choice of CAPOX or FOLFOX4 was per physician discretion or hospital policy. Patients allocated to the standard of care group received 28 daily fractions of 1·8 Gy up to 50·4 Gy or 25 fractions of 2·0 Gy up to 50·0 Gy (per physician discretion or hospital policy), with concomitant twice-daily oral capecitabine 825 mg/m2 followed by total mesorectal excision and, if stipulated by hospital policy, adjuvant chemotherapy with eight cycles of CAPOX or 12 cycles of FOLFOX4. The primary endpoint was 3-year disease-related treatment failure, defined as the first occurrence of locoregional failure, distant metastasis, new primary colorectal tumour, or treatment-related death, assessed in the intention-to-treat population. Safety was assessed by intention to treat. This study is registered with the EudraCT, 2010-023957-12, and ClinicalTrials.gov , NCT01558921 , and is now complete. Findings Between June 21, 2011, and June 2, 2016, 920 patients were enrolled and randomly assigned to a treatment, of whom 912 were eligible (462 in the experimental group; 450 in the standard of care group). Median follow-up was 4·6 years (IQR 3·5–5·5). 
At 3 years after randomisation, the cumulative probability of disease-related treatment failure was 23·7% (95% CI 19·8–27·6) in the experimental group versus 30·4% (26·1–34·6) in the standard of care group (hazard ratio 0·75, 95% CI 0·60–0·95; p=0·019). The most common grade 3 or higher adverse event during preoperative therapy in both groups was diarrhoea (81 [18%] of 460 patients in the experimental group and 41 [9%] of 441 in the standard of care group) and neurological toxicity during adjuvant chemotherapy in the standard of care group (16 [9%] of 187 patients). Serious adverse events occurred in 177 (38%) of 460 participants in the experimental group and, in the standard of care group, in 87 (34%) of 254 patients without adjuvant chemotherapy and in 64 (34%) of 187 with adjuvant chemotherapy. Treatment-related deaths occurred in four participants in the experimental group (one cardiac arrest, one pulmonary embolism, two infectious complications) and in four participants in the standard of care group (one pulmonary embolism, one neutropenic sepsis, one aspiration, one suicide due to severe depression). Interpretation The observed decreased probability of disease-related treatment failure in the experimental group is probably indicative of the increased efficacy of preoperative chemotherapy as opposed to adjuvant chemotherapy in this setting. Therefore, the experimental treatment can be considered as a new standard of care in high-risk locally advanced rectal cancer. Funding Dutch Cancer Foundation, Swedish Cancer Society, Spanish Ministry of Economy and Competitiveness, and Spanish Clinical Research Network.

586 citations


Journal ArticleDOI
30 Apr 2019
TL;DR: Paulo Freire opens the book, as discussed in this paper, by declaring his aversion to neoliberalism and its influence on society, which it renders unequal and exclusionary.
Abstract: Paulo Freire opens his book by declaring his aversion to neoliberalism and its influence on society, which it renders unequal and exclusionary. He also criticizes the malice, disguised as ethics, that the market adopts for its own benefit.

586 citations


Posted Content
TL;DR: The overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
Abstract: The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
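
As an illustration of the data-budget ablations described in this abstract, the sketch below (not the authors' code; `images_by_class` and the subset sizes are hypothetical) shows how a fixed image budget can be split into many fine-grained classes or fewer classes with more examples each before pre-training.

```python
import random

def subsample(images_by_class, num_classes, examples_per_class, seed=0):
    """Return a {class: [paths]} subset with the requested number of classes/examples."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(images_by_class), num_classes)
    return {c: rng.sample(images_by_class[c],
                          min(examples_per_class, len(images_by_class[c])))
            for c in chosen}

# Toy stand-in for an ImageNet-style index: 1000 classes x 200 image paths each.
images_by_class = {f"class_{i:04d}": [f"img_{i:04d}_{j}.jpg" for j in range(200)]
                   for i in range(1000)}

# Same data budget (~20k images), split two different ways:
coarse = subsample(images_by_class, num_classes=100, examples_per_class=200)
fine   = subsample(images_by_class, num_classes=1000, examples_per_class=20)
print(sum(map(len, coarse.values())), sum(map(len, fine.values())))  # 20000 20000
# Each subset would then be used to pre-train a CNN before transfer evaluation.
```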

586 citations


Posted Content
TL;DR: T2T-ViT as mentioned in this paper proposes a token-to-token transformation to progressively transform the image to tokens by recursively aggregating neighboring tokens into one token (Token-To-Token), such that local structure represented by surrounding tokens can be modeled and tokens length can be reduced.
Abstract: Transformers, which are popular for language modeling, have been explored for solving vision tasks recently, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a sequence of tokens with fixed length and then applies multiple Transformer layers to model their global relation for classification. However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. We find it is because: 1) the simple tokenization of input images fails to model the important local structure such as edges and lines among neighboring pixels, leading to low training sample efficiency; 2) the redundant attention backbone design of ViT leads to limited feature richness for fixed computation budgets and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (Tokens-to-Token), such that local structure represented by surrounding tokens can be modeled and tokens length can be reduced; 2) an efficient backbone with a deep-narrow structure for vision transformer motivated by CNN architecture design after empirical study. Notably, T2T-ViT reduces the parameter count and MACs of vanilla ViT by half, while achieving more than 3.0% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves comparable performance with MobileNets by directly training on ImageNet. For example, T2T-ViT with comparable size to ResNet50 (21.5M parameters) can achieve 83.3% top-1 accuracy at image resolution 384×384 on ImageNet. (Code: this https URL)
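
The core Tokens-to-Token idea summarised above can be sketched in a few lines of PyTorch. This is a simplified illustration under stated assumptions, not the reference implementation: the paper places a small attention block before re-splitting, which is replaced here by a single linear projection.

```python
import torch
import torch.nn as nn

class T2TStep(nn.Module):
    """One Tokens-to-Token step: re-gather neighbouring tokens with a soft split
    (unfold), so local structure is mixed into each new token and the token
    count shrinks."""
    def __init__(self, dim_in, dim_out, kernel=3, stride=2, padding=1):
        super().__init__()
        self.kernel, self.stride, self.padding = kernel, stride, padding
        self.unfold = nn.Unfold(kernel_size=kernel, stride=stride, padding=padding)
        # The paper applies a small attention block before the re-split; a linear
        # projection stands in for it to keep this sketch short.
        self.proj = nn.Linear(dim_in * kernel * kernel, dim_out)

    def forward(self, tokens, h, w):
        b, n, c = tokens.shape                            # tokens: (B, N, C), N == h*w
        x = tokens.transpose(1, 2).reshape(b, c, h, w)    # back onto a 2-D grid
        x = self.unfold(x).transpose(1, 2)                # (B, N_new, C*k*k)
        new_h = (h + 2 * self.padding - self.kernel) // self.stride + 1
        new_w = (w + 2 * self.padding - self.kernel) // self.stride + 1
        return self.proj(x), new_h, new_w                 # fewer, richer tokens

# 56x56 tokens of width 64 are aggregated into 28x28 tokens of width 64.
step = T2TStep(dim_in=64, dim_out=64)
out, nh, nw = step(torch.randn(2, 56 * 56, 64), 56, 56)
print(out.shape, nh, nw)   # torch.Size([2, 784, 64]) 28 28
```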

586 citations


Journal ArticleDOI
Roel Aaij1, Bernardo Adeva2, Marco Adinolfi3, Ziad Ajaltouni4, +818 more (68 institutions)
TL;DR: In this article, a test of lepton universality is performed by measuring the ratio of the branching fractions of the $B^0 \to K^{*0}\mu^+\mu^-$ and $B^0 \to K^{*0}e^+e^-$ decays, and the ratio is measured in two regions of the dilepton invariant mass squared.
Abstract: A test of lepton universality, performed by measuring the ratio of the branching fractions of the $B^0 \to K^{*0}\mu^+\mu^-$ and $B^0 \to K^{*0}e^+e^-$ decays, $R_{K^{*0}}$, is presented. The $K^{*0}$ meson is reconstructed in the final state $K^+\pi^-$, which is required to have an invariant mass within 100 MeV/$c^2$ of the known $K^*(892)^0$ mass. The analysis is performed using proton-proton collision data, corresponding to an integrated luminosity of about 3 fb$^{-1}$, collected by the LHCb experiment at centre-of-mass energies of 7 and 8 TeV. The ratio is measured in two regions of the dilepton invariant mass squared, $q^2$, to be $R_{K^{*0}} = 0.66^{+0.11}_{-0.07}\,(\mathrm{stat}) \pm 0.03\,(\mathrm{syst})$ for $0.045 < q^2 < 1.1\ \mathrm{GeV}^2/c^4$, and $R_{K^{*0}} = 0.69^{+0.11}_{-0.07}\,(\mathrm{stat}) \pm 0.05\,(\mathrm{syst})$ for $1.1 < q^2 < 6.0\ \mathrm{GeV}^2/c^4$.
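
For reference, the observable quoted above is, as the abstract states, the ratio of branching fractions of the muon and electron decay modes, evaluated in the two dilepton invariant-mass-squared regions:

```latex
% Definition of the lepton-universality ratio measured above, reported
% separately in the two dilepton invariant-mass-squared regions.
R_{K^{*0}} \;=\;
  \frac{\mathcal{B}\!\left(B^{0}\to K^{*0}\mu^{+}\mu^{-}\right)}
       {\mathcal{B}\!\left(B^{0}\to K^{*0}e^{+}e^{-}\right)},
\qquad 0.045 < q^{2} < 1.1 \ \text{and}\ 1.1 < q^{2} < 6.0\ \mathrm{GeV}^{2}/c^{4}.
```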

586 citations


Journal ArticleDOI
TL;DR: The findings of this study offer useful suggestions for policy-makers, designers, developers and researchers, which will enable them to become better acquainted with the key aspects of successful e-learning system usage during the COVID-19 pandemic.
Abstract: The provision and usage of online and e-learning systems is becoming the main challenge for many universities during the COVID-19 pandemic. E-learning systems such as Blackboard have several fantastic features that would be valuable for use during this pandemic. However, the successful usage of an e-learning system relies on understanding the adoption factors as well as the main challenges that face current e-learning systems. There is a lack of agreement about the critical challenges and factors that shape the successful usage of e-learning systems during the COVID-19 pandemic; hence, a clear gap has been identified in the knowledge on the critical challenges and factors of e-learning usage during this pandemic. Therefore, this study aims to explore the critical challenges that face current e-learning systems and investigate the main factors that support the usage of e-learning systems during the COVID-19 pandemic. This study employed the interview method using thematic analysis through NVivo software. The interviews were conducted with 30 students and 31 experts in e-learning systems at six universities in Jordan and Saudi Arabia. The findings of this study offer useful suggestions for policy-makers, designers, developers and researchers, which will enable them to become better acquainted with the key aspects of successful e-learning system usage during the COVID-19 pandemic.

586 citations


Journal ArticleDOI
TL;DR: In this paper, Conventional and Expanded ACEs were measured in a more socioeconomically and racially diverse urban population to help understand whether Conventional ACEs alone can sufficiently measure adversity, particularly among various subgroups.

Journal ArticleDOI
TL;DR: These guidelines summarise current recommendations for imaging, pain management, conservative treatment, and MET for renal and ureteral stones and evaluate the optimal measures for diagnosis and conservative and medical treatment of urolithiasis.

Posted Content
TL;DR: This work develops a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor that allows for different classes of summarization models which can extract sentences or words.
Abstract: Traditional approaches to extractive summarization rely heavily on human-engineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs. Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation.
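
A minimal sketch of the kind of hierarchical extractive model described above is given below. It is an approximation, not the authors' exact architecture: the attention-based extractor is reduced to a per-sentence sigmoid scorer, and the layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    """Sentence encoder -> document-level LSTM -> per-sentence extraction score."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Sentence encoder: a simple 1-D convolution over word embeddings.
        self.sent_enc = nn.Conv1d(emb_dim, hid_dim, kernel_size=3, padding=1)
        # Document encoder: LSTM over the sequence of sentence vectors.
        self.doc_enc = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.scorer = nn.Linear(hid_dim, 1)   # extraction probability per sentence

    def forward(self, docs):
        # docs: (batch, n_sentences, n_words) of word ids
        b, s, w = docs.shape
        x = self.embed(docs.view(b * s, w)).transpose(1, 2)        # (b*s, emb, w)
        sent_vecs = self.sent_enc(x).max(dim=2).values             # (b*s, hid)
        doc_states, _ = self.doc_enc(sent_vecs.view(b, s, -1))     # (b, s, hid)
        return torch.sigmoid(self.scorer(doc_states)).squeeze(-1)  # (b, s)

# Sentences with the highest scores would be extracted as the summary.
scores = SentenceExtractor(vocab_size=30000)(torch.randint(1, 30000, (2, 12, 40)))
print(scores.shape)   # torch.Size([2, 12])
```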

Journal ArticleDOI
TL;DR: In this large, routine prenatal-screening population, cfDNA testing for trisomy 21 had higher sensitivity, a lower false positive rate, and higher positive predictive value than did standard screening with the measurement of nuchal translucency and biochemical analytes.
Abstract: In this prospective, multicenter, blinded study conducted at 35 international centers, we assigned pregnant women presenting for aneuploidy screening at 10 to 14 weeks of gestation to undergo both standard screening (with measurement of nuchal translucency and biochemical analytes) and cfDNA testing. Participants received the results of standard screening; the results of cfDNA testing were blinded. Determination of the birth outcome was based on diagnostic genetic testing or newborn examination. The primary outcome was the area under the receiver-operating-characteristic curve (AUC) for trisomy 21 (Down's syndrome) with cfDNA testing versus standard screening. We also evaluated cfDNA testing and standard screening to assess the risk of trisomies 18 and 13. Results Of 18,955 women who were enrolled, results from 15,841 were available for analysis. The mean maternal age was 30.7 years, and the mean gestational age at testing was 12.5 weeks. The AUC for trisomy 21 was 0.999 for cfDNA testing and 0.958 for standard screening (P = 0.001). Trisomy 21 was detected in 38 of 38 women (100%; 95% confidence interval [CI], 90.7 to 100) in the cfDNA-testing group, as compared with 30 of 38 women (78.9%; 95% CI, 62.7 to 90.4) in the standard-screening group (P = 0.008). False positive rates were 0.06% (95% CI, 0.03 to 0.11) in the cfDNA group and 5.4% (95% CI, 5.1 to 5.8) in the standard-screening group (P<0.001). The positive predictive value for cfDNA testing was 80.9% (95% CI, 66.7 to 90.9), as compared with 3.4% (95% CI, 2.3 to 4.8) for standard screening (P<0.001). Conclusions In this large, routine prenatal-screening population, cfDNA testing for trisomy 21 had higher sensitivity, a lower false positive rate, and higher positive predictive value than did standard screening with the measurement of nuchal translucency and biochemical analytes. (Funded by Ariosa Diagnostics and Perinatal Quality Foundation; NEXT ClinicalTrials.gov number, NCT01511458.)
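
The headline statistics above can be sanity-checked with a little arithmetic. The snippet below is purely illustrative: the false-positive count is inferred from the reported 0.06% rate rather than taken from the paper's confusion matrix.

```python
# Rough consistency check (illustrative only) of the cfDNA screening statistics
# reported above: sensitivity 38/38, false positive rate 0.06%, PPV ~80.9%.
analyzed = 15841          # pregnancies with results available for analysis
true_pos = 38             # trisomy 21 cases detected by cfDNA (38 of 38)
fpr = 0.0006              # reported false positive rate, 0.06%

false_pos = fpr * (analyzed - true_pos)      # ~9.5 false positives implied
ppv = true_pos / (true_pos + false_pos)      # positive predictive value

print(f"implied false positives: {false_pos:.1f}")
print(f"implied PPV: {ppv:.1%}")             # ~80%, consistent with the reported 80.9%
```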

Journal ArticleDOI
TL;DR: This review provides a summary of both historic and recent studies on the role of EMT in the metastatic cascade from various experimental systems, including cancer cell lines, genetic mouse tumor models, and clinical human breast cancer tissues.

Journal ArticleDOI
TL;DR: A high-quality genome assembly of Camellia sinensis var. sinensis (CSS), generated with Illumina and PacBio sequencing, reveals two rounds of whole-genome duplication that shaped the copy numbers of genes for the key quality compounds catechins, theanine, and caffeine.
Abstract: Tea, one of the world’s most important beverage crops, provides numerous secondary metabolites that account for its rich taste and health benefits. Here we present a high-quality sequence of the genome of tea, Camellia sinensis var. sinensis (CSS), using both Illumina and PacBio sequencing technologies. At least 64% of the 3.1-Gb genome assembly consists of repetitive sequences, and the rest yields 33,932 high-confidence predictions of encoded proteins. Divergence between two major lineages, CSS and Camellia sinensis var. assamica (CSA), is calculated to ∼0.38 to 1.54 million years ago (Mya). Analysis of genic collinearity reveals that the tea genome is the product of two rounds of whole-genome duplications (WGDs) that occurred ∼30 to 40 and ∼90 to 100 Mya. We provide evidence that these WGD events, and subsequent paralogous duplications, had major impacts on the copy numbers of secondary metabolite genes, particularly genes critical to producing three key quality compounds: catechins, theanine, and caffeine. Analyses of transcriptome and phytochemistry data show that amplification and transcriptional divergence of genes encoding a large acyltransferase family and leucoanthocyanidin reductases are associated with the characteristic young leaf accumulation of monomeric galloylated catechins in tea, while functional divergence of a single member of the glutamine synthetase gene family yielded theanine synthetase. This genome sequence will facilitate understanding of tea genome evolution and tea metabolite pathways, and will promote germplasm utilization for breeding improved tea varieties.

Journal ArticleDOI
TL;DR: It is shown that cohesin suppresses compartments but is required for TADs and loops, that CTCF defines their boundaries, and that the cohesin unloading factor WAPL and its PDS5 binding partners control the length of loops.
Abstract: Mammalian genomes are spatially organized into compartments, topologically associating domains (TADs), and loops to facilitate gene regulation and other chromosomal functions. How compartments, TAD ...

Journal ArticleDOI
11 Aug 2016-Cell
TL;DR: This review provides a comprehensive overview of epigenetic studies from invertebrate organisms, vertebrate models, tissues, and in vitro systems and establishes links between common operative aging pathways and hallmark chromatin signatures that can be used to identify "druggable" targets to counter human aging and age-related disease.

Journal ArticleDOI
TL;DR: When compared with no-exercise controls, the meta-analysed effect of endurance training on VO2max was a possibly large beneficial effect, and the effect of HIT was a likely large beneficial effect with a likely moderate additional increase for subjects with lower baseline fitness.
Abstract: Enhancing cardiovascular fitness can lead to substantial health benefits. High-intensity interval training (HIT) is an efficient way to develop cardiovascular fitness, yet comparisons between this type of training and traditional endurance training are equivocal. Our objective was to meta-analyse the effects of endurance training and HIT on the maximal oxygen consumption (VO2max) of healthy, young to middle-aged adults. Six electronic databases were searched (MEDLINE, PubMed, SPORTDiscus, Web of Science, CINAHL and Google Scholar) for original research articles. A search was conducted and search terms included ‘high intensity’, ‘HIT’, ‘sprint interval training’, ‘endurance training’, ‘peak oxygen uptake’, and ‘VO2max’. Inclusion criteria were controlled trials, healthy adults aged 18–45 years, training duration ≥2 weeks, VO2max assessed pre- and post-training. Twenty-eight studies met the inclusion criteria and were included in the meta-analysis. This resulted in 723 participants with a mean ± standard deviation (SD) age and initial fitness of 25.1 ± 5 years and 40.8 ± 7.9 mL·kg−1·min−1, respectively. We made probabilistic magnitude-based inferences for meta-analysed effects based on standardised thresholds for small, moderate and large changes (0.2, 0.6 and 1.2, respectively) derived from between-subject SDs for baseline VO2max. The meta-analysed effect of endurance training on VO2max was a possibly large beneficial effect (4.9 mL·kg−1·min−1; 95 % confidence limits ±1.4 mL·kg−1·min−1), when compared with no-exercise controls. A possibly moderate additional increase was observed for typically younger subjects (2.4 mL·kg−1·min−1; ±2.1 mL·kg−1·min−1) and interventions of longer duration (2.2 mL·kg−1·min−1; ±3.0 mL·kg−1·min−1), and a small additional improvement for subjects with lower baseline fitness (1.4 mL·kg−1·min−1; ±2.0 mL·kg−1·min−1). When compared with no-exercise controls, there was likely a large beneficial effect of HIT (5.5 mL·kg−1·min−1; ±1.2 mL·kg−1·min−1), with a likely moderate greater additional increase for subjects with lower baseline fitness (3.2 mL·kg−1·min−1; ±1.9 mL·kg−1·min−1) and interventions of longer duration (3.0 mL·kg−1·min−1; ±1.9 mL·kg−1·min−1), and a small lesser effect for typically longer HIT repetitions (−1.8 mL·kg−1·min−1; ±2.7 mL·kg−1·min−1). The modifying effects of age (0.8 mL·kg−1·min−1; ±2.1 mL·kg−1·min−1) and work/rest ratio (0.5 mL·kg−1·min−1; ±1.6 mL·kg−1·min−1) were unclear. When compared with endurance training, there was a possibly small beneficial effect for HIT (1.2 mL·kg−1·min−1; ±0.9 mL·kg−1·min−1) with small additional improvements for typically longer HIT repetitions (2.2 mL·kg−1·min−1; ±2.1 mL·kg−1·min−1), older subjects (1.8 mL·kg−1·min−1; ±1.7 mL·kg−1·min−1), interventions of longer duration (1.7 mL·kg−1·min−1; ±1.7 mL·kg−1·min−1), greater work/rest ratio (1.6 mL·kg−1·min−1; ±1.5 mL·kg−1·min−1) and lower baseline fitness (0.8 mL·kg−1·min−1; ±1.3 mL·kg−1·min−1). Endurance training and HIT both elicit large improvements in the VO2max of healthy, young to middle-aged adults, with the gains in VO2max being greater following HIT when compared with endurance training.

Journal ArticleDOI
TL;DR: Some of the important areas of research regarding innate and adaptive immune response in schizophrenia and related psychotic disorders that, it is thought, will be of interest to psychiatric clinicians and researchers are described.

Journal ArticleDOI
TL;DR: It is highlighted that improved understanding of the emission sources, physical/chemical processes during haze evolution, and interactions with meteorological/climatic changes are necessary to unravel the causes, mechanisms, and trends for haze pollution.
Abstract: Regional severe haze represents an enormous environmental problem in China, influencing air quality, human health, ecosystem, weather, and climate. These extremes are characterized by exceedingly high concentrations of fine particulate matter (smaller than 2.5 µm, or PM2.5) and occur with extensive temporal (on a daily, weekly, to monthly timescale) and spatial (over a million square kilometers) coverage. Although significant advances have been made in field measurements, model simulations, and laboratory experiments for fine PM over recent years, the causes for severe haze formation have not yet to be systematically/comprehensively evaluated. This review provides a synthetic synopsis of recent advances in understanding the fundamental mechanisms of severe haze formation in northern China, focusing on emission sources, chemical formation and transformation, and meteorological and climatic conditions. In particular, we highlight the synergetic effects from the interactions between anthropogenic emissions and atmospheric processes. Current challenges and future research directions to improve the understanding of severe haze pollution as well as plausible regulatory implications on a scientific basis are also discussed.

Journal ArticleDOI
TL;DR: In this article, the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 were estimated using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform.
Abstract: Neutron stars are not only of astrophysical interest, but are also of great interest to nuclear physicists, because their attributes can be used to determine the properties of the dense matter in their cores. One of the most informative approaches for determining the equation of state of this dense matter is to measure both a star's equatorial circumferential radius $R_e$ and its gravitational mass $M$. Here we report estimates of the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 obtained using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER). This approach is thought to be less subject to systematic errors than other approaches for estimating neutron star radii. We explored a variety of emission patterns on the stellar surface. Our best-fit model has three oval, uniform-temperature emitting spots and provides an excellent description of the pulse waveform observed using NICER. The radius and mass estimates given by this model are $R_e = 13.02^{+1.24}_{-1.06}$ km and $M = 1.44^{+0.15}_{-0.14}\ M_\odot$ (68%). The independent analysis reported in the companion paper by Riley et al. (2019) explores different emitting spot models, but finds spot shapes and locations and estimates of $R_e$ and $M$ that are consistent with those found in this work. We show that our measurements of $R_e$ and $M$ for PSR J0030$+$0451 improve the astrophysical constraints on the equation of state of cold, catalyzed matter above nuclear saturation density.

Journal ArticleDOI
TL;DR: A robust pseudovirus-based neutralization assay for SARS-CoV-2 is established, and the authors offer to share pseudoviruses and related protocols with developers of vaccines or therapeutics to fight against this lethal virus.
Abstract: Pseudoviruses are useful virological tools because of their safety and versatility, especially for emerging and re-emerging viruses. Due to its high pathogenicity and infectivity and the lack of effective vaccines and therapeutics, live SARS-CoV-2 has to be handled under biosafety level 3 conditions, which has hindered the development of vaccines and therapeutics. Based on a VSV pseudovirus production system, a pseudovirus-based neutralization assay has been developed for evaluating neutralizing antibodies against SARS-CoV-2 in biosafety level 2 facilities. The key parameters for this assay were optimized, including cell types, cell numbers, and virus inoculum. When tested against the SARS-CoV-2 pseudovirus, SARS-CoV-2 convalescent patient sera showed high neutralizing potency, which underscores their potential as therapeutics. The limit of detection for this assay was determined as 22.1 and 43.2 for human and mouse serum samples, respectively, using a panel of 120 negative samples. The cutoff values were set as 30 and 50 for human and mouse serum samples, respectively. This assay showed relatively low coefficients of variation, 15.9% and 16.2% for the intra- and inter-assay analyses, respectively. Taken together, we established a robust pseudovirus-based neutralization assay for SARS-CoV-2 and are glad to share pseudoviruses and related protocols with developers of vaccines or therapeutics to fight against this lethal virus.
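
Two of the quantities referenced above, percent neutralization relative to controls and the coefficient of variation used for the intra-/inter-assay figures, can be computed as in the hedged sketch below (assumed formulas and made-up readings, not the study's analysis code).

```python
import statistics

def percent_neutralization(sample_signal, virus_only_signal, cell_only_signal):
    """Reduction in pseudovirus reporter signal relative to virus-only and cell-only controls."""
    specific = virus_only_signal - cell_only_signal
    inhibited = virus_only_signal - sample_signal
    return 100.0 * inhibited / specific

def coefficient_of_variation(replicates):
    """CV (%) = standard deviation / mean * 100 across replicate measurements."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

print(percent_neutralization(sample_signal=2_000, virus_only_signal=20_000,
                             cell_only_signal=200))         # ~90.9% neutralization
print(coefficient_of_variation([480, 520, 500, 450, 530]))  # replicate titres -> CV %
```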

Journal ArticleDOI
TL;DR: A divide-and-conquer approach which first classifies sentences into different types, then performs sentiment analysis separately on sentences from each type, which shows that sentence type classification can improve the performance of sentence-level sentiment analysis.
Abstract: Highlights: a divide-and-conquer method that classifies sentence types before sentiment analysis; sentence types classified by the number of opinion targets a sentence contains; a data-driven approach that automatically extracts features from input sentences. Different types of sentences express sentiment in very different ways. Traditional sentence-level sentiment classification research focuses on a one-technique-fits-all solution or centers only on one special type of sentence. In this paper, we propose a divide-and-conquer approach which first classifies sentences into different types, then performs sentiment analysis separately on sentences from each type. Specifically, we find that sentences tend to be more complex if they contain more sentiment targets. Thus, we propose to first apply a neural network based sequence model to classify opinionated sentences into three types according to the number of targets appearing in a sentence. Each group of sentences is then fed into a one-dimensional convolutional neural network separately for sentiment classification. Our approach has been evaluated on four sentiment classification datasets and compared with a wide range of baselines. Experimental results show that: (1) sentence type classification can improve the performance of sentence-level sentiment analysis; (2) the proposed approach achieves state-of-the-art results on several benchmarking datasets.
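
The divide-and-conquer pipeline described above can be sketched as follows. Layer sizes and routing details are assumptions for illustration, not the paper's exact configuration: an LSTM predicts the sentence type (by number of opinion targets), and each type is routed to its own small 1-D CNN sentiment classifier.

```python
import torch
import torch.nn as nn

class TypeRouter(nn.Module):
    """Sentence-type classifier followed by type-specific 1-D CNN sentiment heads."""
    def __init__(self, vocab, emb=100, hid=128, n_types=3, n_sentiments=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb, padding_idx=0)
        self.type_rnn = nn.LSTM(emb, hid, batch_first=True)   # sequence model for sentence type
        self.type_head = nn.Linear(hid, n_types)
        # One small convolutional sentiment classifier per sentence type.
        self.sentiment_cnns = nn.ModuleList([
            nn.Sequential(nn.Conv1d(emb, hid, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveMaxPool1d(1), nn.Flatten(),
                          nn.Linear(hid, n_sentiments))
            for _ in range(n_types)
        ])

    def forward(self, tokens):
        x = self.embed(tokens)                                # (B, T, emb)
        _, (h, _) = self.type_rnn(x)
        # At training time the type classifier would carry its own loss; here we
        # simply take the argmax to route each sentence to a type-specific CNN.
        sent_type = self.type_head(h[-1]).argmax(dim=-1)      # (B,)
        logits = [self.sentiment_cnns[int(t)](x[i:i + 1].transpose(1, 2))
                  for i, t in enumerate(sent_type)]
        return sent_type, torch.cat(logits, dim=0)            # types, sentiment logits

types, sentiment = TypeRouter(vocab=20000)(torch.randint(1, 20000, (4, 30)))
print(types.shape, sentiment.shape)   # torch.Size([4]) torch.Size([4, 3])
```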

Journal ArticleDOI
TL;DR: The important question of the temporal organization of large-scale brain networks is addressed, finding that the spontaneous transitions between networks of interacting brain areas are predictable and highly organized into a hierarchy of two distinct metastates.
Abstract: The brain recruits neuronal populations in a temporally coordinated manner in task and at rest. However, the extent to which large-scale networks exhibit their own organized temporal dynamics is unclear. We use an approach designed to find repeating network patterns in whole-brain resting fMRI data, where networks are defined as graphs of interacting brain areas. We find that the transitions between networks are nonrandom, with certain networks more likely to occur after others. Further, this nonrandom sequencing is itself hierarchically organized, revealing two distinct sets of networks, or metastates, that the brain has a tendency to cycle within. One metastate is associated with sensory and motor regions, and the other involves areas related to higher order cognition. Moreover, we find that the proportion of time that a subject spends in each brain network and metastate is a consistent subject-specific measure, is heritable, and shows a significant relationship with cognitive traits.
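
The abstract does not spell out the model, so the sketch below is only a stand-in for this general line of analysis: a hidden Markov model (fit here with the hmmlearn package on toy data) yields a state sequence from which fractional occupancy and the state-transition matrix, the quantities discussed above, can be computed.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((2000, 25))   # timepoints x parcel/ICA signals (toy data)

K = 12                                         # number of candidate brain states
hmm = GaussianHMM(n_components=K, covariance_type="full", n_iter=50, random_state=0)
hmm.fit(timeseries)

states = hmm.predict(timeseries)               # state label at every timepoint
occupancy = np.bincount(states, minlength=K) / len(states)   # fraction of time per state
transitions = hmm.transmat_                    # state-to-state transition probabilities

print(occupancy.round(3))
print(transitions.shape)                       # (12, 12); non-random structure in this
                                               # matrix is what suggests a hierarchy of metastates
```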

01 Jan 2016
TL;DR: Anthropocene or Capitalocene? as mentioned in this paper offers answers to these questions from a dynamic group of leading critical scholars who challenge the theory and history offered by the most significant environmental concept of our times: the Anthropocene.
Abstract: Anthropocene or Capitalocene? offers answers to these questions from a dynamic group of leading critical scholars. They challenge the theory and history offered by the most significant environmental concept of our times: the Anthropocene. But are we living in the Anthropocene, literally the “Age of Man”? Is a different response more compelling, and better suited to the strange—and often terrifying—times in which we live? The contributors to this book diagnose the problems of Anthropocene thinking and propose an alternative: the global crises of the twenty-first century are rooted in the Capitalocene; not the Age of Man but the Age of Capital.

Journal ArticleDOI
TL;DR: Analysis of tumor genomic profiles from 38,028 patients is reported to identify 221 cases with METex14 mutations, including 126 distinct sequence variants, and identify a unique subset of patients likely to derive benefit from MET inhibitors.
Abstract: Focal amplification and activating point mutation of the MET gene are well-characterized oncogenic drivers that confer susceptibility to targeted MET inhibitors. Recurrent somatic splice site alterations at MET exon 14 (METex14) that result in exon skipping and MET activation have been characterized, but their full diversity and prevalence across tumor types are unknown. Here, we report analysis of tumor genomic profiles from 38,028 patients to identify 221 cases with METex14 mutations (0.6%), including 126 distinct sequence variants. METex14 mutations are detected most frequently in lung adenocarcinoma (3%), but also frequently in other lung neoplasms (2.3%), brain glioma (0.4%), and tumors of unknown primary origin (0.4%). Further in vitro studies demonstrate sensitivity to MET inhibitors in cells harboring METex14 alterations. We also report three new patient cases with METex14 alterations in lung or histiocytic sarcoma tumors that showed durable response to two different MET-targeted therapies. The diversity of METex14 mutations indicates that diagnostic testing via comprehensive genomic profiling is necessary for detection in a clinical setting. Significance: Here we report the identification of diverse exon 14 splice site alterations in MET that result in constitutive activity of this receptor and oncogenic transformation in vitro. Patients whose tumors harbored these alterations derived meaningful clinical benefit from MET inhibitors. Collectively, these data support the role of METex14 alterations as drivers of tumorigenesis, and identify a unique subset of patients likely to derive benefit from MET inhibitors. Cancer Discov; 5(8); 850–9. ©2015 AACR. See related commentary by Ma, p. 802. See related article by Paik et al., p. 842. This article is highlighted in the In This Issue feature, p. 783.

Journal ArticleDOI
TL;DR: The impact of COVID-19 on loneliness across different social strata, its implications in the modern digitalized age and a way forward with possible solutions to the same are looked at.
Abstract: The world is facing a global public health crisis for the last three months, as the coronavirus disease 2019 (COVID-19) emerges as a menacing pandemic. Besides the rising number of cases and fatalities with this pandemic, there has also been significant socio-economic, political and psycho-social impact. Billions of people are quarantined in their own homes as nations have locked down to implement social distancing as a measure to contain the spread of infection. Those affected and suspected cases are isolated. This social isolation leads to chronic loneliness and boredom, which if long enough can have detrimental effects on physical and mental well-being. The timelines of the growing pandemic being uncertain, the isolation is compounded by mass panic and anxiety. Crisis often affects the human mind in crucial ways, enhancing threat arousal and snowballing the anxiety. Rational and logical decisions are replaced by biased and faulty decisions based on mere ‘faith and belief’. This important social threat of a pandemic is largely neglected. We look at the impact of COVID-19 on loneliness across different social strata, its implications in the modern digitalized age and outline a way forward with possible solutions to the same. There is no doubt that national and global economies are suffering, the health systems are under severe pressure, mass hysteria has acquired a frantic pace and people’s hope and aspirations are taking a merciless beating. The uncertainty of a new and relatively unknown infection increases the anxiety, which gets compounded by isolation in lockdown. As global public health agencies like the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) struggle to contain the outbreak, social distancing is repeatedly suggested as one of the most useful preventive strategies. It has been used successfully in the past to slow or prevent community transmission during pandemics (WHO, 2019). While certain countries like China have just started recovering from their three-month lockdown, countries like Iran, Italy and South Korea have been badly hit irrespective of these measures, and those like India have initiated nation-wide shutdown and curfews to prevent the community transmission of COVID-19. Ironically, however, ‘social distancing’ is a misnomer, as it implies physical separation to prevent the viral spread. The modern world has rarely been so isolated and restricted. Multiple restrictions have been imposed on public movement to contain the spread of the virus. People are forced to stay at home and are burdened with the heft of quarantine. Individuals are waking up every day wrapped in a freezing cauldron of social isolation, sheer boredom and a penetrating feeling of loneliness. The modern man has known little like this, in an age of rapid travel and communication. Though during the earlier outbreaks of Severe Acute Respiratory Syndrome (SARS), Middle East Respiratory Syndrome (MERS), Spanish flu, Ebola and plague the world was equally shaken, with millions of casualties, technology was not then dominant enough to amplify the sense of distance (Smith, 2006). In this era of digitalization, social media, social hangouts, eateries, pubs, bars, malls and movie theatres keep us distracted, creating apparent ‘social ties’. Humankind has always known what to do next, with lives generally following a regular trail.
But this sudden cataclysmic turn of events has brought them face to face with a dire reckoning – how to live with oneself. It is indeed a frightening realization when a whole generation or two knows how to deal with a nuclear fallout but is at its wits’ end on how to spend time with oneself.

Journal ArticleDOI
TL;DR: A comprehensive review of studies on Asian aerosols, monsoons, and their interactions is provided in this article, where a new paradigm is proposed on investigating aerosol-monsoon interactions, in which natural aerosols such as desert dust, black carbon from biomass burning, and biogenic aerosols from vegetation are considered integral components of an intrinsic aerosol-monsoon climate system, subject to external forcing of global warming, anthropogenic aerosol, and land use and change.
Abstract: The increasing severity of droughts/floods and worsening air quality from increasing aerosols in Asia monsoon regions are the two gravest threats facing over 60% of the world population living in Asian monsoon regions. These dual threats have fueled a large body of research in the last decade on the roles of aerosols in impacting Asian monsoon weather and climate. This paper provides a comprehensive review of studies on Asian aerosols, monsoons, and their interactions. The Asian monsoon region is a primary source of emissions of diverse species of aerosols from both anthropogenic and natural origins. The distributions of aerosol loading are strongly influenced by distinct weather and climatic regimes, which are, in turn, modulated by aerosol effects. On a continental scale, aerosols reduce surface insolation and weaken the land-ocean thermal contrast, thus inhibiting the development of monsoons. Locally, aerosol radiative effects alter the thermodynamic stability and convective potential of the lower atmosphere leading to reduced temperatures, increased atmospheric stability, and weakened wind and atmospheric circulations. The atmospheric thermodynamic state, which determines the formation of clouds, convection, and precipitation, may also be altered by aerosols serving as cloud condensation nuclei or ice nuclei. Absorbing aerosols such as black carbon and desert dust in Asian monsoon regions may also induce dynamical feedback processes, leading to a strengthening of the early monsoon and affecting the subsequent evolution of the monsoon. Many mechanisms have been put forth regarding how aerosols modulate the amplitude, frequency, intensity, and phase of different monsoon climate variables. A wide range of theoretical, observational, and modeling findings on the Asian monsoon, aerosols, and their interactions are synthesized. A new paradigm is proposed on investigating aerosol-monsoon interactions, in which natural aerosols such as desert dust, black carbon from biomass burning, and biogenic aerosols from vegetation are considered integral components of an intrinsic aerosol-monsoon climate system, subject to external forcing of global warming, anthropogenic aerosols, and land use and change. Future research on aerosol-monsoon interactions calls for an integrated approach and international collaborations based on long-term sustained observations, process measurements, and improved models, as well as using observations to constrain model simulations and projections.

Journal ArticleDOI
TL;DR: An attempt to tie gamification to service marketing theory, which conceptualizes the consumer as a co-producer of the service, and to propose a definition for gamification that emphasizes its experiential nature.
Abstract: “Gamification” has gained considerable scholarly and practitioner attention; however, the discussion in academia has been largely confined to the human–computer interaction and game studies domains. Since gamification is often used in service design, it is important that the concept be brought in line with the service literature. So far, though, there has been a dearth of such literature. This article is an attempt to tie in gamification with service marketing theory, which conceptualizes the consumer as a co-producer of the service. It presents games as service systems composed of operant and operand resources. It proposes a definition for gamification, one that emphasizes its experiential nature. The definition highlights four important aspects of gamification: affordances, psychological mediators, goals of gamification and the context of gamification. Using the definition the article identifies four possible gamifying actors and examines gamification as communicative staging of the service environment.