Showing papers by "University of California, Irvine" published in 2016
••
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes.
For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
5,187 citations
••
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015), as discussed by the authors, was used to estimate the incidence, prevalence, and years lived with disability for diseases and injuries at the global, regional, and national scales over the period 1990 to 2015.
5,050 citations
••
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding that several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.
4,804 citations
••
TL;DR: All of the major steps in RNA-seq data analysis are reviewed, including experimental design, quality control, read alignment, quantification of gene and transcript levels, visualization, differential gene expression, alternative splicing, functional analysis, gene fusion detection and eQTL mapping.
Abstract: RNA-sequencing (RNA-seq) has a wide variety of applications, but no single analysis pipeline can be used in all cases. We review all of the major steps in RNA-seq data analysis, including experimental design, quality control, read alignment, quantification of gene and transcript levels, visualization, differential gene expression, alternative splicing, functional analysis, gene fusion detection and eQTL mapping. We highlight the challenges associated with each step. We discuss the analysis of small RNAs and the integration of RNA-seq with other functional genomics techniques. Finally, we discuss the outlook for novel technologies that are changing the state of the art in transcriptomics.
1,963 citations
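The fold-change computation that underlies the differential-expression step reviewed above can be sketched in a few lines. This is a minimal, hypothetical illustration (invented counts, simple counts-per-million normalization), not the statistical machinery of the dedicated tools the review surveys:

```python
import math

def cpm(counts):
    """Normalize one library's raw read counts to counts-per-million."""
    total = sum(counts)
    return [c * 1e6 / total for c in counts]

def log2_fold_change(ctrl_cpm, treat_cpm, pseudocount=1.0):
    """Per-gene log2 fold change; the pseudocount avoids log of zero."""
    return [math.log2((t + pseudocount) / (c + pseudocount))
            for c, t in zip(ctrl_cpm, treat_cpm)]

# Invented raw counts for three genes in two sequencing libraries.
control = [100, 200, 700]
treated = [50, 400, 550]
lfc = log2_fold_change(cpm(control), cpm(treated))
# Gene 1 roughly halves and gene 2 roughly doubles relative to control.
```

Production pipelines add dispersion estimation and multiple-testing correction on top of this basic quantity.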
••
Nicholas J Kassebaum, Megha Arora, Ryan M Barber, Zulfiqar A Bhutta, and 679 more authors (268 institutions)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive healthy life expectancy (HALE) and disability-adjusted life-years (DALYs) by sex for 195 countries and territories from 1990 to 2015.
1,533 citations
••
Stanford University, Cognition and Brain Sciences Unit, The Mind Research Network, Nathan Kline Institute for Psychiatric Research, Montreal Neurological Institute and Hospital, University of Oxford, Wellcome Trust Centre for Neuroimaging, Dartmouth College, National Institutes of Health, Otto-von-Guericke University Magdeburg, University of California, Irvine, Shandong University, University of Warwick, MIND Institute, Lawrence Berkeley National Laboratory, Helen Wills Neuroscience Institute, University of Washington, Georgia State University, California Institute of Technology
TL;DR: The Brain Imaging Data Structure (BIDS) is developed, a standard for organizing and describing MRI datasets that uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
Abstract: The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
1,037 citations
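As an illustration of the kind of directory layout BIDS standardizes, the sketch below creates a minimal BIDS-style skeleton for one subject. The file and folder naming follows the published convention, but the subject, task, and metadata values are invented placeholders:

```python
import json
import tempfile
from pathlib import Path

def make_minimal_bids(root: Path) -> Path:
    """Create a minimal BIDS-style dataset skeleton for one subject.

    Layout follows the BIDS convention (sub-<label>/<datatype>/...);
    the contents here are placeholders, not real imaging data.
    """
    (root / "dataset_description.json").write_text(
        json.dumps({"Name": "Example dataset", "BIDSVersion": "1.0.0"}))
    anat = root / "sub-01" / "anat"
    func = root / "sub-01" / "func"
    anat.mkdir(parents=True)
    func.mkdir(parents=True)
    # Placeholder imaging files; real data would be NIfTI images.
    (anat / "sub-01_T1w.nii.gz").touch()
    (func / "sub-01_task-rest_bold.nii.gz").touch()
    # Sidecar JSON carries acquisition metadata, e.g. repetition time.
    (func / "sub-01_task-rest_bold.json").write_text(
        json.dumps({"RepetitionTime": 2.0, "TaskName": "rest"}))
    return root

root = make_minimal_bids(Path(tempfile.mkdtemp()))
```

A real dataset would contain actual NIfTI images and fuller metadata, and tools in the BIDS ecosystem can then check conformance automatically.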
••
University of Aberdeen, University of California, Irvine, Technical University of Berlin, Hertie School of Governance, Potsdam Institute for Climate Impact Research, Stanford University, University of New England (United States), Utrecht University, Netherlands Environmental Assessment Agency, ETH Zurich, International Institute for Applied Systems Analysis, Centre national de la recherche scientifique, University of Oslo, Met Office, University of Exeter, University of East Anglia, University of São Paulo, University of Maryland, College Park, Carnegie Mellon University, National Institute for Environmental Studies, Pacific Northwest National Laboratory, Korea University
TL;DR: In this article, the authors quantify potential global impacts of different negative emissions technologies on various factors (such as land, greenhouse gas emissions, water, albedo, nutrients and energy) to determine the biophysical limits to, and economic costs of, their widespread application.
Abstract: To have a >50% chance of limiting warming below 2 °C, most recent scenarios from integrated assessment models (IAMs) require large-scale deployment of negative emissions technologies (NETs). These are technologies that result in the net removal of greenhouse gases from the atmosphere. We quantify potential global impacts of the different NETs on various factors (such as land, greenhouse gas emissions, water, albedo, nutrients and energy) to determine the biophysical limits to, and economic costs of, their widespread application. Resource implications vary between technologies and need to be satisfactorily addressed if NETs are to have a significant role in achieving climate goals.
974 citations
••
TL;DR: The rate of complete remission was higher with inotuzumab ozogamicin than with standard therapy, and a higher percentage of patients in the inotuzumab ozogamicin group had results below the threshold for minimal residual disease.
Abstract: Background The prognosis for adults with relapsed acute lymphoblastic leukemia is poor. We sought to determine whether inotuzumab ozogamicin, an anti-CD22 antibody conjugated to calicheamicin, results in better outcomes in patients with relapsed or refractory acute lymphoblastic leukemia than does standard therapy. Methods In this phase 3 trial, we randomly assigned adults with relapsed or refractory acute lymphoblastic leukemia to receive either inotuzumab ozogamicin (inotuzumab ozogamicin group) or standard intensive chemotherapy (standard-therapy group). The primary end points were complete remission (including complete remission with incomplete hematologic recovery) and overall survival. Results Of the 326 patients who underwent randomization, the first 218 (109 in each group) were included in the primary intention-to-treat analysis of complete remission. The rate of complete remission was significantly higher in the inotuzumab ozogamicin group than in the standard-therapy group (80.7% [95% confidence interval ...]).
942 citations
••
University of California, Irvine, University of Southern California, Yale University, Oslo University Hospital, Karolinska Institutet, University of Oslo, University of California, San Diego, University of Göttingen, Trinity College, Dublin, National University of Ireland, Galway, University of Amsterdam, VU University Amsterdam, University of Pennsylvania, University of California, San Francisco, San Francisco VA Medical Center, University of Minnesota, Dresden University of Technology, Harvard University, University of New Mexico, University of Iowa, Utrecht University, University of California, Los Angeles, University of Cantabria, Northwestern University, University of Edinburgh, Osaka University, Georgia State University
TL;DR: Worldwide cooperative analyses of brain imaging data support a profile of subcortical abnormalities in schizophrenia, which is consistent with that based on traditional meta-analytic approaches, and validates that collaborative data analyses can readily be used across brain phenotypes and disorders.
Abstract: The profile of brain structural abnormalities in schizophrenia is still not fully understood, despite decades of research using brain scans. To validate a prospective meta-analysis approach to analyzing multicenter neuroimaging data, we analyzed brain MRI scans from 2028 schizophrenia patients and 2540 healthy controls, assessed with standardized methods at 15 centers worldwide. We identified subcortical brain volumes that differentiated patients from controls, and ranked them according to their effect sizes. Compared with healthy controls, patients with schizophrenia had smaller hippocampus (Cohen's d=-0.46), amygdala (d=-0.31), thalamus (d=-0.31), accumbens (d=-0.25) and intracranial volumes (d=-0.12), as well as larger pallidum (d=0.21) and lateral ventricle volumes (d=0.37). Putamen and pallidum volume augmentations were positively associated with duration of illness and hippocampal deficits scaled with the proportion of unmedicated patients. Worldwide cooperative analyses of brain imaging data support a profile of subcortical abnormalities in schizophrenia, which is consistent with that based on traditional meta-analytic approaches. This first ENIGMA Schizophrenia Working Group study validates that collaborative data analyses can readily be used across brain phenotypes and disorders and encourages analysis and data sharing efforts to further our understanding of severe mental illness.
919 citations
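The effect sizes quoted above are Cohen's d values, i.e. standardized mean differences. As a reminder of the convention (a negative d here would mean the patient group is smaller on the measured volume), this is a minimal sketch using invented numbers, not data from the study:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Invented volumes (arbitrary units); first group listed gets the sign.
patients = [3.9, 4.1, 4.0, 3.8]
controls = [4.2, 4.4, 4.3, 4.1]
d = cohens_d(patients, controls)  # negative: patients smaller on average
```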
••
Institute for Health Metrics and Evaluation, College of Health Sciences, Bahrain, Harvard University, Kwame Nkrumah University of Science and Technology, Charité, Ahmadu Bello University, University of the Philippines Manila, Pontifical Xavierian University, Madawalabu University, World Bank, Public Health Foundation of India, Guy's and St Thomas' NHS Foundation Trust, Griffith University, University of New South Wales, Massey University, University of Peradeniya, University of Sydney, Chinese Center for Disease Control and Prevention, Russian Academy of Sciences, Tehran University of Medical Sciences, Auckland University of Technology, James Cook University, Monash University, University of California, San Francisco, Arabian Gulf University, Central South University, Virginia Commonwealth University, Jordan University of Science and Technology, Health Services Academy, Oregon Health & Science University, University of Sheffield, University at Albany, SUNY, Aintree University Hospitals NHS Foundation Trust, Swansea University, University of York, South African Medical Research Council, Children's Hospital of Philadelphia, Addis Ababa University, Curtin University, University of Washington, Queensland University of Technology, University of British Columbia, Suez Canal University, Karolinska Institutet, University of Alabama at Birmingham, An-Najah National University, Tufts Medical Center, Norwegian Institute of Public Health, Stavanger University Hospital, University of Cape Town, University of California, Irvine, University of Illinois at Urbana–Champaign, St. John's University, Hanoi Medical University, Johns Hopkins University, National Research University – Higher School of Economics, University of Gondar, University of Hong Kong, Jackson State University, Wuhan University
TL;DR: An overview of injury estimates from the 2013 update of GBD is provided, with detailed information on incidence, mortality, DALYs and rates of change from 1990 to 2013 for 26 causes of injury, globally, by region and by country.
Abstract: Background The Global Burden of Diseases (GBD), Injuries, and Risk Factors study used the disability-adjusted life year (DALY) to quantify the burden of diseases, injuries, and risk factors. This paper provides an overview of injury estimates from the 2013 update of GBD, with detailed information on incidence, mortality, DALYs and rates of change from 1990 to 2013 for 26 causes of injury, globally, by region and by country.
Methods Injury mortality was estimated using the extensive GBD mortality database, corrections for ill-defined cause of death and the cause of death ensemble modelling tool. Morbidity estimation was based on inpatient and outpatient data sets, 26 cause-of-injury and 47 nature-of-injury categories, and seven follow-up studies with patient-reported long-term outcome measures.
Results In 2013, 973 million (uncertainty interval (UI) 942 to 993) people sustained injuries that warranted some type of healthcare and 4.8 million (UI 4.5 to 5.1) people died from injuries. Between 1990 and 2013 the global age-standardised injury DALY rate decreased by 31% (UI 26% to 35%). The rate of decline in DALY rates was significant for 22 cause-of-injury categories, including all the major injuries.
Conclusions Injuries continue to be an important cause of morbidity and mortality in the developed and developing world. The decline in rates for almost all injuries is so prominent that it warrants a general statement that the world is becoming a safer place to live in. However, the patterns vary widely by cause, age, sex, region and time and there are still large improvements that need to be made.
883 citations
••
Pacific Northwest National Laboratory, Yale University, National Center for Atmospheric Research, Marine Biological Laboratory, Colorado State University, Wageningen University and Research Centre, University of California, Irvine, Kansas State University, University of Oregon, Michigan Technological University, University of Sydney, University of Minnesota, Duke University, University of Tennessee, University of Copenhagen, Spanish National Research Council, University of New Hampshire, Northeast Normal University, University of California, Berkeley, University of Oklahoma, Hungarian Academy of Sciences, Swedish University of Agricultural Sciences, University of Manchester, Tsinghua University, National University of Singapore, Chinese Academy of Sciences, University of Hohenheim, University of Georgia, Hampshire College, Boston University, University of Alaska Anchorage
TL;DR: In this article, the authors present a comprehensive analysis of warming-induced changes in soil carbon stocks by assembling data from 49 field experiments located across North America, Europe and Asia, and provide estimates of soil carbon sensitivity to warming that may help to constrain Earth system model projections.
Abstract: The majority of the Earth's terrestrial carbon is stored in the soil. If anthropogenic warming stimulates the loss of this carbon to the atmosphere, it could drive further planetary warming. Despite evidence that warming enhances carbon fluxes to and from the soil, the net global balance between these responses remains uncertain. Here we present a comprehensive analysis of warming-induced changes in soil carbon stocks by assembling data from 49 field experiments located across North America, Europe and Asia. We find that the effects of warming are contingent on the size of the initial soil carbon stock, with considerable losses occurring in high-latitude areas. By extrapolating this empirical relationship to the global scale, we provide estimates of soil carbon sensitivity to warming that may help to constrain Earth system model projections. Our empirical relationship suggests that global soil carbon stocks in the upper soil horizons will fall by 30 ± 30 petagrams of carbon to 203 ± 161 petagrams of carbon under one degree of warming, depending on the rate at which the effects of warming are realized. Under the conservative assumption that the response of soil carbon to warming occurs within a year, a business-as-usual climate scenario would drive the loss of 55 ± 50 petagrams of carbon from the upper soil horizons by 2050. This value is around 12-17 per cent of the expected anthropogenic emissions over this period. Despite the considerable uncertainty in our estimates, the direction of the global soil carbon response is consistent across all scenarios. This provides strong empirical support for the idea that rising temperatures will stimulate the net loss of soil carbon to the atmosphere, driving a positive land carbon-climate feedback that could accelerate climate change.
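The "12-17 per cent" figure above can be sanity-checked with back-of-envelope arithmetic. The implied anthropogenic-emissions total below is derived for illustration only; it is not stated in the abstract:

```python
# Abstract figures: projected carbon loss from upper soil horizons by 2050
# under business-as-usual, and its share of expected anthropogenic emissions.
soil_loss_pg_c = 55.0            # central estimate, +/- 50 Pg C
share_low, share_high = 0.12, 0.17

# Implied anthropogenic-emissions totals over the same period (derived,
# not quoted): loss divided by the fraction it is said to represent.
emissions_high = soil_loss_pg_c / share_low    # larger total <-> smaller share
emissions_low = soil_loss_pg_c / share_high
```

So the comparison implies expected anthropogenic emissions of very roughly 320 to 460 Pg C over the period, consistent with a business-as-usual scenario through 2050.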
••
Yasser Iturria-Medina, Roberto C. Sotero, Paule-Joanne Toussaint, José María Mateos-Pérez, and 311 more authors (60 institutions)
TL;DR: Imaging results suggest that intra-brain vascular dysregulation is an early pathological event during disease development, and that cognitive decline is noticeable from initial LOAD stages, with early memory deficits associated with the primary disease factors.
Abstract: Multifactorial mechanisms underlying late-onset Alzheimer's disease (LOAD) are poorly characterized from an integrative perspective. Here spatiotemporal alterations in brain amyloid-β deposition, metabolism, vascular and functional activity at rest, structural properties, cognitive integrity and peripheral protein levels are characterized in relation to LOAD progression. We analyse over 7,700 brain images and tens of plasma and cerebrospinal fluid biomarkers from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Through a multifactorial data-driven analysis, we obtain dynamic LOAD-abnormality indices for all biomarkers, and a tentative temporal ordering of disease progression. Imaging results suggest that intra-brain vascular dysregulation is an early pathological event during disease development. Cognitive decline is noticeable from initial LOAD stages, suggesting early memory deficits associated with the primary disease factors. High abnormality levels are also observed for specific proteins associated with the vascular system's integrity. Although still subject to the sensitivity of the algorithms and biomarkers employed, our results might contribute to the development of preventive therapeutic interventions.
••
University of Exeter, University of Massachusetts Amherst, Purdue University, University of Washington, Agricultural Research Service, National Park Service, University of California, Davis, University of Michigan, Stanford University, University of California, Irvine, University of Southampton
TL;DR: It is found that one-sixth of the global land surface is highly vulnerable to invasion, including substantial areas in developing economies and biodiversity hotspots, and there is a clear need for proactive invasion strategies in areas with high poverty levels, high biodiversity and low historical levels of invasion.
Abstract: Invasive alien species (IAS) threaten human livelihoods and biodiversity globally. Increasing globalization facilitates IAS arrival, and environmental changes, including climate change, facilitate IAS establishment. Here we provide the first global, spatial analysis of the terrestrial threat from IAS in light of twenty-first century globalization and environmental change, and evaluate national capacities to prevent and manage species invasions. We find that one-sixth of the global land surface is highly vulnerable to invasion, including substantial areas in developing economies and biodiversity hotspots. The dominant invasion vectors differ between high-income countries (imports, particularly of plants and pets) and low-income countries (air travel). Uniting data on the causes of introduction and establishment can improve early-warning and eradication schemes. Most countries have limited capacity to act against invasions. In particular, we reveal a clear need for proactive invasion strategies in areas with high poverty levels, high biodiversity and low historical levels of invasion.
••
Université Paris-Saclay, Goddard Space Flight Center, Commonwealth Scientific and Industrial Research Organisation, National Oceanic and Atmospheric Administration, National Institute of Geophysics and Volcanology, Linköping University, Netherlands Institute for Space Research, Food and Agriculture Organization, Stanford University, University of Sheffield, University of California, Irvine, National Institute of Water and Atmospheric Research, Max Planck Society, École Polytechnique, Yale University, University of Victoria, Jet Propulsion Laboratory, Met Office, International Institute for Applied Systems Analysis, National Institute for Environmental Studies, Oeschger Centre for Climate Change Research, National Center for Atmospheric Research, City University of New York, Princeton University, University of Bristol, Lund University, Japan Agency for Marine-Earth Science and Technology, Université du Québec à Montréal, University of Oslo, Centre national de la recherche scientifique, Massachusetts Institute of Technology, Lawrence Berkeley National Laboratory, University of Hohenheim, Japan Meteorological Agency, Auburn University, Imperial College London, Royal Netherlands Meteorological Institute, VU University Amsterdam, University of California, San Diego, Environment Canada, University of Toronto, Northwest A&F University
TL;DR: Under the umbrella of the Global Carbon Project (GCP), the authors established a consortium of multi-disciplinary scientists, including atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions, to synthesize and stimulate research on the global methane budget.
Abstract: The global methane (CH4) budget is becoming an increasingly important component for managing realistic pathways to mitigate climate change. This relevance, due to a shorter atmospheric lifetime and a stronger warming potential than carbon dioxide, is challenged by the still unexplained changes of atmospheric CH4 over the past decade. Emissions and concentrations of CH4 are continuing to increase, making CH4 the second most important human-induced greenhouse gas after carbon dioxide. Two major difficulties in reducing uncertainties come from the large variety of diffusive CH4 sources that overlap geographically, and from the destruction of CH4 by the very short-lived hydroxyl radical (OH). To address these difficulties, we have established a consortium of multi-disciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate research on the methane cycle, and to produce regular (∼ biennial) updates of the global methane budget. This consortium includes atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions. Following Kirschke et al. (2013), we propose here the first version of a living review paper that integrates results of top-down studies (exploiting atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up models, inventories and data-driven approaches (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories for anthropogenic emissions, and data-driven extrapolations). For the 2003–2012 decade, global methane emissions are estimated by top-down inversions at 558 Tg CH4 yr−1 (range 540–568). About 60 % of global emissions are anthropogenic (range 50–65 %).
Since 2010, the bottom-up global emission inventories have been closer to methane emissions in the most carbon-intensive Representative Concentration Pathway (RCP8.5) and higher than all other RCP scenarios. Bottom-up approaches suggest larger global emissions (736 Tg CH4 yr−1, range 596–884), mostly because of larger natural emissions from individual sources such as inland waters, natural wetlands and geological sources. Considering the atmospheric constraints on the top-down budget, it is likely that some of the individual emissions reported by the bottom-up approaches are overestimated, leading to too large global emissions. Latitudinal data from top-down emissions indicate a predominance of tropical emissions (∼ 64 % of the global budget). The most important source of uncertainty in the methane budget is attributable to emissions from wetlands and other inland waters. We show that the wetland extent could contribute 30–40 % of the estimated range for wetland emissions. Other priorities for improving the methane budget include the following: (i) the development of process-based models for inland-water emissions; (ii) the intensification of methane observations at the local scale (flux measurements) to constrain bottom-up land surface models, and at the regional scale (surface networks and satellites) to constrain top-down inversions; (iii) improvements in the estimation of atmospheric loss by OH; and (iv) improvements of the transport models integrated in top-down inversions. The data presented here can be downloaded from the Carbon Dioxide Information Analysis Center ( http://doi.org/10.3334/CDIAC/GLOBAL_METHANE_BUDGET_2016_V1.1 ) and the Global Carbon Project.
••
Paul Scherrer Institute, University of Tokyo, École Polytechnique Fédérale de Lausanne, University of Pavia, University College London, Waseda University, Sapienza University of Rome, Novosibirsk State University, Budker Institute of Nuclear Physics, Novosibirsk State Technical University, KEK, University of California, Irvine, Joint Institute for Nuclear Research
TL;DR: The final results of the search for the lepton flavour violating decay μ+ → e+γ are presented, based on the full dataset collected by the MEG experiment at the Paul Scherrer Institut in the period 2009–2013.
Abstract: The final results of the search for the lepton flavour violating decay $\mu^+ \rightarrow \mathrm{e}^+ \gamma$, based on the full dataset collected by the MEG experiment at the Paul Scherrer Institut in the period 2009–2013 and totalling $7.5 \times 10^{14}$ stopped muons on target, are presented. No significant excess of events is observed in the dataset with respect to the expected background, and a new upper limit on the branching ratio of this decay, $\mathcal{B}(\mu^+ \rightarrow \mathrm{e}^+ \gamma) < 4.2 \times 10^{-13}$ (90 % confidence level), is established, which represents the most stringent limit on the existence of this decay to date.
••
TL;DR: In this study, the selective BTK inhibitor acalabrutinib had promising safety and efficacy profiles in patients with relapsed CLL, including those with chromosome 17p13.1 deletion.
Abstract: BACKGROUND Irreversible inhibition of Bruton's tyrosine kinase (BTK) by ibrutinib represents an important therapeutic advance for the treatment of chronic lymphocytic leukemia (CLL). However, ibrutinib also irreversibly inhibits alternative kinase targets, which potentially compromises its therapeutic index. Acalabrutinib (ACP-196) is a more selective, irreversible BTK inhibitor that is specifically designed to improve on the safety and efficacy of first-generation BTK inhibitors. METHODS In this uncontrolled, phase 1–2, multicenter study, we administered oral acalabrutinib to 61 patients who had relapsed CLL to assess the safety, efficacy, pharmacokinetics, and pharmacodynamics of acalabrutinib. Patients were treated with acalabrutinib at a dose of 100 to 400 mg once daily in the dose-escalation (phase 1) portion of the study and 100 mg twice daily in the expansion (phase 2) portion. RESULTS The median age of the patients was 62 years, and patients had received a median of three previous therapies for CLL; 31% had chromosome 17p13.1 deletion, and 75% had unmutated immunoglobulin heavy-chain variable genes. No dose-limiting toxic effects occurred during the dose-escalation portion of the study. The most common adverse events observed were headache (in 43% of the patients), diarrhea (in 39%), and increased weight (in 26%). Most adverse events were of grade 1 or 2. At a median follow-up of 14.3 months, the overall response rate was 95%, including 85% with a partial response and 10% with a partial response with lymphocytosis; the remaining 5% of patients had stable disease. Among patients with chromosome 17p13.1 deletion, the overall response rate was 100%. No cases of Richter's transformation (CLL that has evolved into large-cell lymphoma) and only one case of CLL progression have occurred.
CONCLUSIONS In this study, the selective BTK inhibitor acalabrutinib had promising safety and efficacy profiles in patients with relapsed CLL, including those with chromosome 17p13.1 deletion. (Funded by Acerta Pharma and others; ClinicalTrials.gov number, NCT02029443.)
••
University of Helsinki1, University of Oulu2, University of Tampere3, Turku University Hospital4, University of Turku5, Hannover Medical School6, University of Cambridge7, Netherlands Cancer Institute8, Institute of Cancer Research9, University of Melbourne10, University of Erlangen-Nuremberg11, University of California, Los Angeles12, University of London13, King's College London14, Wellcome Trust Centre for Human Genetics15, Heidelberg University16, German Cancer Research Center17, French Institute of Health and Medical Research18, Copenhagen University Hospital19, University of Copenhagen20, Beckman Research Institute21, University of California, Irvine22, Technische Universität München23, University of Cologne24, University of Tübingen25, Bosch26, Ruhr University Bochum27, Karolinska Institutet28, University of Eastern Finland29, QIMR Berghofer Medical Research Institute30, Katholieke Universiteit Leuven31, University of Hamburg32, Mayo Clinic33, Cancer Council Victoria34, University of Southern California35, Laval University36, The Breast Cancer Research Foundation37, Oslo University Hospital38, Vanderbilt University39, Oulu University Hospital40, University of Toronto41, Lunenfeld-Tanenbaum Research Institute42, Leiden University Medical Center43, Erasmus University Rotterdam44, Erasmus University Medical Center45, University of Sheffield46, Pontifical Xavierian University47, Pomeranian Medical University48
TL;DR: It is suggested that loss-of-function mutations in RAD51B are rare, but common variation at the RAD51B region is significantly associated with familial breast cancer risk.
Abstract: Common variation on 14q24.1, close to RAD51B, has been associated with breast cancer: rs999737 and rs2588809 with the risk of female breast cancer and rs1314913 with the risk of male breast cancer. The aim of this study was to investigate the role of RAD51B variants in breast cancer predisposition, particularly in the context of familial breast cancer in Finland. We sequenced the coding region of RAD51B in 168 Finnish breast cancer patients from the Helsinki region for identification of possible recurrent founder mutations. In addition, we studied the known rs999737, rs2588809, and rs1314913 SNPs and RAD51B haplotypes in 44,791 breast cancer cases and 43,583 controls from 40 studies participating in the Breast Cancer Association Consortium (BCAC) that were genotyped on a custom chip (iCOGS). We identified one putatively pathogenic missense mutation c.541C>T among the Finnish cancer patients and subsequently genotyped the mutation in additional breast cancer cases (n = 5259) and population controls (n = 3586) from Finland and Belarus. No significant association with breast cancer risk was seen in the meta-analysis of the Finnish datasets or in the large BCAC dataset. The association with previously identified risk variants rs999737, rs2588809, and rs1314913 was replicated among all breast cancer cases and also among familial cases in the BCAC dataset. The most significant association was observed for the haplotype carrying the risk alleles of all three SNPs both among all cases (odds ratio (OR): 1.15, 95% confidence interval (CI): 1.11-1.19, P = 8.88 x 10^-16) and among familial cases (OR: 1.24, 95% CI: 1.16-1.32, P = 6.19 x 10^-11), compared to the haplotype with the respective protective alleles. Our results suggest that loss-of-function mutations in RAD51B are rare, but common variation at the RAD51B region is significantly associated with familial breast cancer risk.
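The haplotype association above is reported as an odds ratio with a 95% confidence interval. As a minimal illustration of how such figures are computed from a 2×2 table of case/control counts (the counts below are hypothetical placeholders, not the BCAC data), a Wald-type estimate looks like this:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a 2x2 table:
    a = cases with the risk haplotype,      b = controls with it,
    c = cases with the reference haplotype, d = controls with it."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only (not from the study):
print(odds_ratio_ci(1200, 1050, 9000, 9050))
```

With real genotype counts per haplotype, the same arithmetic reproduces figures of the form "OR: 1.15, 95% CI: 1.11-1.19" quoted in the abstract.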
•
TL;DR: This work takes the skeleton as the input at each time slot and introduces a novel regularization scheme to learn the co-occurrence features of skeleton joints, and proposes a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons.
Abstract: Skeleton based action recognition distinguishes human actions using the trajectories of skeleton joints, which provide a very good representation for describing actions. Considering that recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) can learn feature representations and model long-term temporal dependencies automatically, we propose an end-to-end fully connected deep LSTM network for skeleton based action recognition. Inspired by the observation that the co-occurrences of the joints intrinsically characterize human actions, we take the skeleton as the input at each time slot and introduce a novel regularization scheme to learn the co-occurrence features of skeleton joints. To train the deep LSTM network effectively, we propose a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons. Experimental results on three human action recognition datasets consistently demonstrate the effectiveness of the proposed model.
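The co-occurrence regularization described above can be approximated as a group-sparsity penalty on the LSTM's input-to-hidden weights, with one group of input columns per skeleton joint. The sketch below is a simplified pure-Python illustration of that idea; the L2,1-style grouping is an assumption for exposition, not the paper's exact regularizer or training loop:

```python
import math

def group_l21_penalty(W, groups):
    """Group-sparsity (L2,1-style) penalty: for each group of input columns
    (one group per skeleton joint), take the L2 norm over all weights feeding
    from that group into the hidden units, then sum the group norms.
    Minimizing this drives whole joint groups toward zero, so the joints
    that survive training indicate which joints co-occur for an action."""
    total = 0.0
    for cols in groups:
        sq = sum(W[r][c] ** 2 for r in range(len(W)) for c in cols)
        total += math.sqrt(sq)
    return total

# Toy input-to-hidden weights: 2 hidden units, 2 joints x (x, y, z) = 6 inputs.
W = [
    [0.9, -0.8, 0.7, 0.0, 0.0, 0.1],
    [1.1, 0.6, -0.5, 0.1, 0.0, 0.0],
]
groups = [[0, 1, 2], [3, 4, 5]]  # column indices belonging to joint 1, joint 2
print(round(group_l21_penalty(W, groups), 3))
```

In training, a term like this would be added to the classification loss with a weighting coefficient; here the second joint's near-zero columns contribute almost nothing to the penalty, which is the sparsity pattern the regularizer rewards.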
••
TL;DR: The 3D structure of a disease-relevant Aβ(1–42) fibril polymorph is determined by combining data from solid-state NMR spectroscopy and mass-per-length measurements from EM, forming a double-horseshoe–like cross–β-sheet entity with maximally buried hydrophobic side chains.
Abstract: Amyloid-β (Aβ) is present in humans as a 39- to 42-amino acid residue metabolic product of the amyloid precursor protein. Although the two predominant forms, Aβ(1–40) and Aβ(1–42), differ in only two residues, they display different biophysical, biological, and clinical behavior. Aβ(1–42) is the more neurotoxic species, aggregates much faster, and dominates in senile plaque of Alzheimer’s disease (AD) patients. Although small Aβ oligomers are believed to be the neurotoxic species, Aβ amyloid fibrils are, because of their presence in plaques, a pathological hallmark of AD and appear to play an important role in disease progression through cell-to-cell transmissibility. Here, we solved the 3D structure of a disease-relevant Aβ(1–42) fibril polymorph, combining data from solid-state NMR spectroscopy and mass-per-length measurements from EM. The 3D structure is composed of two molecules per fibril layer, with residues 15–42 forming a double-horseshoe–like cross–β-sheet entity with maximally buried hydrophobic side chains. Residues 1–14 are partially ordered and in a β-strand conformation, but do not display unambiguous distance restraints to the remainder of the core structure.
••
Baylor College of Medicine1, Beth Israel Deaconess Medical Center2, Emory University3, Ochsner Medical Center4, Icahn School of Medicine at Mount Sinai5, Cedars-Sinai Medical Center6, University of Tennessee Health Science Center7, University of Texas Health Science Center at San Antonio8, American Association of Clinical Endocrinologists9, Tulane University10, University of Alabama at Birmingham11, Wayne State University12, The American College of Financial Services13, University of California, San Diego14, University of Washington15, University of Miami16, Washington University in St. Louis17, University of California, Irvine18
TL;DR: This chapter discusses the development and use of eicosapentaenoic acid as a treatment for diabetic ketoacidosis and its applications in conventional and regenerative medicine.
••
TL;DR: The Extended Baryon Oscillation Spectroscopic Survey (eBOSS) as mentioned in this paper uses four different tracers of the underlying matter density field to expand the volume covered by BOSS and map the large-scale structures over the relatively unconstrained redshift range 0.6 < z < 2.2, with emission-line-galaxy BAO measurements at an effective redshift of z = 0.87.
Abstract: In a six-year program started in 2014 July, the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) will conduct novel cosmological observations using the BOSS spectrograph at Apache Point Observatory. These observations will be conducted simultaneously with the Time Domain Spectroscopic Survey (TDSS) designed for variability studies and the Spectroscopic Identification of eROSITA Sources (SPIDERS) program designed for studies of X-ray sources. In particular, eBOSS will measure with percent-level precision the distance-redshift relation with baryon acoustic oscillations (BAO) in the clustering of matter. eBOSS will use four different tracers of the underlying matter density field to vastly expand the volume covered by BOSS and map the large-scale structures over the relatively unconstrained redshift range 0.6 < z < 2.2, building on the z > 0.6 sample of BOSS galaxies. With ~195,000 new emission line galaxy redshifts, we expect BAO measurements of d_A(z) to an accuracy of 3.1% and H(z) to 4.7% at an effective redshift of z = 0.87. A sample of more than 500,000 spectroscopically confirmed quasars will provide the first BAO distance measurements over the redshift range 0.9 < z < 2.1; these new data will enhance the precision of d_A(z) and H(z) at z > 2.1 by a factor of 1.44 relative to BOSS. Furthermore, eBOSS will provide improved tests of General Relativity on cosmological scales through redshift-space distortion measurements, improved tests for non-Gaussianity in the primordial density field, and new constraints on the summed mass of all neutrino species. Here, we provide an overview of the cosmological goals, spectroscopic target sample, demonstration of spectral quality from early data, and projected cosmological constraints from eBOSS.
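For readers unfamiliar with how a survey like this turns redshifts into distances: in a flat ΛCDM cosmology, the angular diameter distance d_A(z) is the comoving distance, the integral of c/H(z'), divided by (1 + z). A minimal numerical sketch follows; the parameter values are generic illustrative choices, not the eBOSS measurements:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def angular_diameter_distance(z, H0=67.7, omega_m=0.31, steps=10000):
    """d_A(z) in Mpc for a flat LambdaCDM cosmology: trapezoidal
    integration of c/H(z') from 0 to z, divided by (1 + z).
    H0 in km/s/Mpc; radiation and curvature are neglected."""
    def inv_H(zp):
        return 1.0 / (H0 * math.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m)))
    dz = z / steps
    integral = 0.5 * (inv_H(0) + inv_H(z))   # trapezoid endpoints
    for i in range(1, steps):
        integral += inv_H(i * dz)            # interior points
    comoving = C_KM_S * integral * dz        # comoving distance, Mpc
    return comoving / (1 + z)

# Distance at the emission-line-galaxy effective redshift quoted above:
print(round(angular_diameter_distance(0.87), 1))
```

BAO analyses run this mapping in reverse: the measured BAO scale at a given effective redshift constrains d_A(z) and H(z), which in turn constrain the cosmological parameters inside the integrand.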
••
Nicholas J Kassebaum1, Ryan M Barber1, Zulfiqar A Bhutta2, Zulfiqar A Bhutta3 +613 more•Institutions (272)
TL;DR: In this article, the authors quantified maternal mortality throughout the world by underlying cause and age from 1990 to 2015 for ages 10-54 years by systematically compiling and processing all available data sources from 186 of 195 countries and territories.
••
TL;DR: GBHs are the most heavily applied herbicide in the world and usage continues to rise; worldwide, GBHs often contaminate drinking water sources, precipitation, and air, especially in agricultural regions; and regulatory estimates of tolerable daily intakes for glyphosate in the United States and European Union are based on outdated science.
Abstract: The broad-spectrum herbicide glyphosate (common trade name “Roundup”) was first sold to farmers in 1974. Since the late 1970s, the volume of glyphosate-based herbicides (GBHs) applied has increased approximately 100-fold. Further increases in the volume applied are likely due to more and higher rates of application in response to the widespread emergence of glyphosate-resistant weeds and new, pre-harvest, desiccant use patterns. GBHs were developed to replace or reduce reliance on herbicides causing well-documented problems associated with drift and crop damage, slipping efficacy, and human health risks. Initial industry toxicity testing suggested that GBHs posed relatively low risks to non-target species, including mammals, leading regulatory authorities worldwide to set high acceptable exposure limits. To accommodate changes in GBH use patterns associated with genetically engineered, herbicide-tolerant crops, regulators have dramatically increased tolerance levels in maize, oilseed (soybeans and canola), and alfalfa crops and related livestock feeds. Animal and epidemiology studies published in the last decade, however, point to the need for a fresh look at glyphosate toxicity. Furthermore, the World Health Organization’s International Agency for Research on Cancer recently concluded that glyphosate is “probably carcinogenic to humans.” In response to changing GBH use patterns and advances in scientific understanding of their potential hazards, we have produced a Statement of Concern drawing on emerging science relevant to the safety of GBHs. Our Statement of Concern considers current published literature describing GBH uses, mechanisms of action, toxicity in laboratory animals, and epidemiological studies. It also examines the derivation of current human safety standards.
We conclude that: (1) GBHs are the most heavily applied herbicide in the world and usage continues to rise; (2) Worldwide, GBHs often contaminate drinking water sources, precipitation, and air, especially in agricultural regions; (3) The half-life of glyphosate in water and soil is longer than previously recognized; (4) Glyphosate and its metabolites are widely present in the global soybean supply; (5) Human exposures to GBHs are rising; (6) Glyphosate is now authoritatively classified as a probable human carcinogen; (7) Regulatory estimates of tolerable daily intakes for glyphosate in the United States and European Union are based on outdated science. We offer a series of recommendations related to the need for new investments in epidemiological studies, biomonitoring, and toxicology studies that draw on the principles of endocrinology to determine whether the effects of GBHs are due to endocrine disrupting activities. We suggest that common commercial formulations of GBHs should be prioritized for inclusion in government-led toxicology testing programs such as the U.S. National Toxicology Program, as well as for biomonitoring as conducted by the U.S. Centers for Disease Control and Prevention.
••
TL;DR: The DanQ model, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence, improves considerably upon other models across several metrics.
Abstract: Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
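The claim that "the convolution layer captures regulatory motifs" can be made concrete: a convolution filter applied to one-hot-encoded DNA is exactly a position-weight-matrix scan. The stripped-down sketch below shows the mechanism; the 3-mer filter is a hypothetical toy, not a learned DanQ filter:

```python
def one_hot(seq):
    """One-hot encode a DNA string into a list of [A, C, G, T] vectors."""
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    return [[1.0 if table[b] == i else 0.0 for i in range(4)] for b in seq]

def conv_scan(seq, motif_filter):
    """Slide a length-k filter along the sequence (stride 1, no padding).
    Each output score is the dot product of the filter with one k-mer
    window, i.e. the score a position-weight matrix assigns that k-mer."""
    x = one_hot(seq)
    k = len(motif_filter)
    return [
        sum(motif_filter[j][b] * x[i + j][b] for j in range(k) for b in range(4))
        for i in range(len(x) - k + 1)
    ]

# Hypothetical filter rewarding the 3-mer T-A-T (one row per position):
filt = [
    [0, 0, 0, 1],  # T
    [1, 0, 0, 0],  # A
    [0, 0, 0, 1],  # T
]
scores = conv_scan("GGTATAGG", filt)
print(scores.index(max(scores)))  # position of the strongest motif match
```

In DanQ, many such filters are learned jointly, a max-pool keeps the strongest matches, and the bi-directional LSTM then models dependencies between the detected motifs along the sequence.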
••
TL;DR: To inform the political discourse with scientific evidence, the literature was reviewed to identify what is known and not known about the effects of cannabis use on human behavior, including cognition, motivation, and psychosis.
Abstract: With a political debate about the potential risks and benefits of cannabis use as a backdrop, the wave of legalization and liberalization initiatives continues to spread. Four states (Colorado, Washington, Oregon, and Alaska) and the District of Columbia have passed laws that legalized cannabis for recreational use by adults, and 23 others plus the District of Columbia now regulate cannabis use for medical purposes. These policy changes could trigger a broad range of unintended consequences, with profound and lasting implications for the health and social systems in our country. Cannabis use is emerging as one among many interacting factors that can affect brain development and mental function. To inform the political discourse with scientific evidence, the literature was reviewed to identify what is known and not known about the effects of cannabis use on human behavior, including cognition, motivation, and psychosis.
••
Haidong Wang1, Zulfiqar A Bhutta2, Zulfiqar A Bhutta3, Matthew M Coates1 +610 more•Institutions (263)
TL;DR: The Global Burden of Disease 2015 Study provides an analytical framework to comprehensively assess trends for under-5 mortality, age-specific and cause-specific mortality among children under 5 years, and stillbirths by geography over time and decomposed the changes in under- 5 mortality to changes in SDI at the global level.
••
TL;DR: The data suggest that induction of NETs by cancer cells is a previously unidentified metastasis-promoting tumor-host interaction and a potential therapeutic target, and treatment with NET-digesting, DNase I–coated nanoparticles markedly reduced lung metastases in mice.
Abstract: Neutrophils, the most abundant type of leukocytes in blood, can form neutrophil extracellular traps (NETs). These are pathogen-trapping structures generated by expulsion of the neutrophil's DNA with associated proteolytic enzymes. NETs produced by infection can promote cancer metastasis. We show that metastatic breast cancer cells can induce neutrophils to form metastasis-supporting NETs in the absence of infection. Using intravital imaging, we observed NET-like structures around metastatic 4T1 cancer cells that had reached the lungs of mice. We also found NETs in clinical samples of triple-negative human breast cancer. The formation of NETs stimulated the invasion and migration of breast cancer cells in vitro. Inhibiting NET formation or digesting NETs with deoxyribonuclease I (DNase I) blocked these processes. Treatment with NET-digesting, DNase I-coated nanoparticles markedly reduced lung metastases in mice. Our data suggest that induction of NETs by cancer cells is a previously unidentified metastasis-promoting tumor-host interaction and a potential therapeutic target.
••
Forschungszentrum Jülich1, University of California, Davis2, Université catholique de Louvain3, ETH Zurich4, University of Southampton5, University of Texas at Austin6, University of Bonn7, James Hutton Institute8, University of California, Irvine9, Université Paris-Saclay10, Desert Research Institute11, Ghent University12, Washington State University13, Katholieke Universiteit Leuven14, University of Aberdeen15, Institut national de la recherche agronomique16, Polish Academy of Sciences17, University of Vienna18, University of Sydney19, University of Stuttgart20, Agricultural Research Service21, University of Naples Federico II22, University of California, Riverside23, Netherlands Environmental Assessment Agency24, Monash University25, University of Tübingen26, University of New England (Australia)27
TL;DR: Key challenges in modeling soil processes are identified, including the systematic incorporation of heterogeneity and uncertainty, the integration of data and models, and strategies for effective integration of knowledge on physical, chemical, and biological soil processes.
Abstract: The remarkable complexity of soil and its importance to a wide range of ecosystem services presents major challenges to the modeling of soil processes. Although major progress in soil models has occurred in the last decades, models of soil processes remain disjointed between disciplines or ecosystem services, with considerable uncertainty remaining in the quality of predictions and several challenges yet to be addressed. First, there is a need to improve exchange of knowledge and experience among the different disciplines in soil science and to reach out to other Earth science communities. Second, the community needs to develop a new generation of soil models based on a systemic approach comprising relevant physical, chemical, and biological processes to address critical knowledge gaps in our understanding of soil processes and their interactions. Overcoming these challenges will facilitate exchanges between soil modeling and climate, plant, and social science modeling communities. It will allow us to contribute to preserving and improving our assessment of ecosystem services and advance our understanding of climate-change feedback mechanisms, among others, thereby facilitating and strengthening communication among scientific disciplines and society. We review the role of modeling soil processes in quantifying key soil processes that shape ecosystem services, with a focus on provisioning and regulating services. We then identify key challenges in modeling soil processes, including the systematic incorporation of heterogeneity and uncertainty, the integration of data and models, and strategies for effective integration of knowledge on physical, chemical, and biological soil processes. We discuss how the soil modeling community could best interface with modern modeling activities in other disciplines, such as climate, ecology, and plant research, and how to weave novel observation and measurement techniques into soil models.
We propose the establishment of an international soil modeling consortium to coherently advance soil modeling activities and foster communication with other Earth science disciplines. Such a consortium should promote soil modeling platforms and data repositories for model development, calibration, and intercomparison, which are essential for addressing contemporary challenges.
••
TL;DR: Alectinib is highly active and well tolerated in patients with advanced, crizotinib-refractory ALK-positive NSCLC, including those with CNS metastases, and the cumulative CNS progression rate was lower than the cumulative non-CNS progression rate at 12 months for all patients.
Abstract: Purpose: Crizotinib confers improved progression-free survival compared with chemotherapy in anaplastic lymphoma kinase (ALK)-rearranged non–small-cell lung cancer (NSCLC), but progression invariably occurs. We investigated the efficacy and safety of alectinib, a potent and selective ALK inhibitor with excellent CNS penetration, in patients with crizotinib-refractory ALK-positive NSCLC. Patients and Methods: Alectinib 600 mg was administered orally twice daily. The primary end point was objective response rate (ORR) by central independent review committee (IRC). Results: Of the 138 patients treated, 84 patients (61%) had CNS metastases at baseline, and 122 were response evaluable (RE) by IRC. ORR by IRC was 50% (95% CI, 41% to 59%), and the median duration of response (DOR) was 11.2 months (95% CI, 9.6 months to not reached). In 96 patients (79%) previously treated with chemotherapy, the ORR was 45% (95% CI, 35% to 55%). Median IRC-assessed progression-free survival for all 138 patients was 8.9 months (95% CI, 5....
••
Harvard University1, Wayne State University2, Memorial Sloan Kettering Cancer Center3, Oregon Health & Science University4, University of Colorado Boulder5, University of Pittsburgh6, Ohio State University7, Fox Chase Cancer Center8, University of Texas MD Anderson Cancer Center9, Hoffmann-La Roche10, University of California, Irvine11
TL;DR: Alectinib showed clinical activity and was well tolerated in patients with ALK-positive NSCLC who had progressed on crizotinib and could be a suitable treatment for patients with ALK-positive disease who have progressed on crizotinib.
Abstract: Summary Background Alectinib—a highly selective, CNS-active, ALK inhibitor—showed promising clinical activity in crizotinib-naive and crizotinib-resistant patients with ALK-rearranged (ALK-positive) non-small-cell lung cancer (NSCLC). We aimed to assess the safety and efficacy of alectinib in patients with ALK-positive NSCLC who progressed on previous crizotinib. Methods We did a phase 2 study at 27 centres in the USA and Canada. We enrolled patients aged 18 years or older with stage IIIB–IV, ALK-positive NSCLC who had progressed after crizotinib. Patients were treated with oral alectinib 600 mg twice daily until progression, death, or withdrawal. The primary endpoint was the proportion of patients achieving an objective response by an independent review committee using Response Evaluation Criteria in Solid Tumors, version 1.1. Response endpoints were assessed in the response-evaluable population (ie, patients with measurable disease at baseline who received at least one dose of study drug), and efficacy and safety analyses were done in the intention-to-treat population (all enrolled patients). This study is registered with ClinicalTrials.gov, number NCT01871805. The study is ongoing and patients are still receiving treatment. Findings Between Sept 4, 2013, and Aug 4, 2014, 87 patients were enrolled into the study (intention-to-treat population). At the time of the primary analysis (median follow-up 4·8 months [IQR 3·3–7·1]), 33 of 69 patients with measurable disease at baseline had a confirmed partial response; thus, the proportion of patients achieving an objective response by the independent review committee was 48% (95% CI 36–60). Adverse events were predominantly grade 1 or 2, most commonly constipation (31 [36%]), fatigue (29 [33%]), myalgia (21 [24%]), and peripheral oedema (20 [23%]).
The most common grade 3 and 4 adverse events were changes in laboratory values, including increased blood creatine phosphokinase (seven [8%]), increased alanine aminotransferase (five [6%]), and increased aspartate aminotransferase (four [5%]). Two patients died: one had a haemorrhage (judged related to study treatment), and one had disease progression and a history of stroke (judged unrelated to treatment). Interpretation Alectinib showed clinical activity and was well tolerated in patients with ALK-positive NSCLC who had progressed on crizotinib. Therefore, alectinib could be a suitable treatment for patients with ALK-positive disease who have progressed on crizotinib. Funding F Hoffmann-La Roche.