
Showing papers by "University of California, San Diego" published in 2016


Journal ArticleDOI
Monkol Lek, Konrad J. Karczewski, Eric Vallabh Minikel, Kaitlin E. Samocha, Eric Banks, Timothy Fennell, Anne H. O’Donnell-Luria, James S. Ware, Andrew J. Hill, Beryl B. Cummings, Taru Tukiainen, Daniel P. Birnbaum, Jack A. Kosmicki, Laramie E. Duncan, Karol Estrada, Fengmei Zhao, James Zou, Emma Pierce-Hoffman, Joanne Berghout, David Neil Cooper, Nicole A. Deflaux, Mark A. DePristo, Ron Do, Jason Flannick, Menachem Fromer, Laura D. Gauthier, Jackie Goldstein, Namrata Gupta, Daniel P. Howrigan, Adam Kiezun, Mitja I. Kurki, Ami Levy Moonshine, Pradeep Natarajan, Lorena Orozco, Gina M. Peloso, Ryan Poplin, Manuel A. Rivas, Valentin Ruano-Rubio, Samuel A. Rose, Douglas M. Ruderfer, Khalid Shakir, Peter D. Stenson, Christine Stevens, Brett Thomas, Grace Tiao, María Teresa Tusié-Luna, Ben Weisburd, Hong-Hee Won, Dongmei Yu, David Altshuler, Diego Ardissino, Michael Boehnke, John Danesh, Stacey Donnelly, Roberto Elosua, Jose C. Florez, Stacey Gabriel, Gad Getz, Stephen J. Glatt, Christina M. Hultman, Sekar Kathiresan, Markku Laakso, Steven A. McCarroll, Mark I. McCarthy, Dermot P.B. McGovern, Ruth McPherson, Benjamin M. Neale, Aarno Palotie, Shaun Purcell, Danish Saleheen, Jeremiah M. Scharf, Pamela Sklar, Patrick F. Sullivan, Jaakko Tuomilehto, Ming T. Tsuang, Hugh Watkins, James G. Wilson, Mark J. Daly, Daniel G. MacArthur
18 Aug 2016-Nature
TL;DR: The aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC) provides direct evidence for the presence of widespread mutational recurrence.
Abstract: Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation; identifying 3,230 genes with near-complete depletion of predicted protein-truncating variants, with 72% of these genes having no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human 'knockout' variants in protein-coding genes.
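To make "depletion of protein-truncating variants" (PTVs) concrete, constraint of this kind is naturally expressed as an observed/expected ratio under a neutral mutation model; the worked numbers below are our hypothetical illustration, not figures from the paper:

depletion = 1 − (observed PTVs) / (expected PTVs)

For example, a gene with 20 PTVs expected under the mutational model but only 1 observed has obs/exp = 0.05, i.e. roughly 95% depletion, a signature of strong selection against truncating alleles.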

8,758 citations


Journal ArticleDOI
TL;DR: The FAIR Data Principles as mentioned in this paper are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.
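As a concrete, hypothetical illustration (ours, not part of the Comment), a machine-actionable metadata record touching all four FAIR facets might look like the Python sketch below; the field names are invented for illustration, not a prescribed schema:

# Hypothetical FAIR-style metadata record (illustrative field names only).
record = {
    # Findable: a globally unique, persistent identifier plus rich metadata.
    "identifier": "doi:10.1234/example-dataset",
    "title": "Example scholarly data set",
    # Accessible: retrievable via a standardized, open protocol.
    "access_url": "https://example.org/datasets/42",
    # Interoperable: formal formats and shared vocabularies.
    "format": "text/csv",
    "conforms_to": "https://schema.org/Dataset",
    # Reusable: a clear license and provenance trail.
    "license": "CC-BY-4.0",
    "provenance": "Derived from survey X, 2015 collection wave",
}

# A machine agent can locate and retrieve the data without human help.
print(record["identifier"], "->", record["access_url"])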

7,602 citations


Journal ArticleDOI
Daniel J. Klionsky, Kotb Abdelmohsen, Akihisa Abe, Joynal Abedin +2,519 more (695 institutions)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
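One way to make the flux-versus-abundance distinction concrete (our shorthand; the guidelines discuss this assay, the LC3 turnover assay, in prose) is to compare the level of the autophagosome marker LC3-II with and without a lysosomal inhibitor such as bafilomycin A1:

autophagic flux ∝ [LC3-II]_(+inhibitor) − [LC3-II]_(−inhibitor)

A rise in LC3-II that is not accompanied by a rise in this difference signals a block in downstream degradation rather than increased autophagy induction.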

5,187 citations


Journal ArticleDOI
Haidong Wang, Mohsen Naghavi, Christine Allen, Ryan M Barber +841 more (293 institutions)
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.

4,804 citations


Posted Content
TL;DR: On the ImageNet-1K dataset, the authors empirically show that, even under the restricted condition of maintained complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.
Abstract: We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
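For readers who want the idea in code: the aggregated-transformations block is equivalent to a bottleneck block whose 3x3 convolution is grouped, with cardinality as the group count. The sketch below is our minimal PyTorch illustration, not the authors' released code; the channel sizes follow the ResNeXt-50 (32x4d) template.

import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block with aggregated transformations.
    cardinality = number of parallel paths, realized as a grouped 3x3 conv."""
    def __init__(self, in_ch, bottleneck_ch, out_ch, cardinality=32, stride=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, bottleneck_ch, 1, bias=False),
            nn.BatchNorm2d(bottleneck_ch), nn.ReLU(inplace=True),
            # The grouped convolution: each of the `cardinality` groups is one
            # low-dimensional transformation; summing their outputs over the
            # channel dimension is the aggregation.
            nn.Conv2d(bottleneck_ch, bottleneck_ch, 3, stride=stride,
                      padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck_ch), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Projection shortcut only when shape changes, as in ResNet.
        self.shortcut = (nn.Identity() if in_ch == out_ch and stride == 1 else
                         nn.Sequential(
                             nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                             nn.BatchNorm2d(out_ch)))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.net(x) + self.shortcut(x))

x = torch.randn(1, 256, 56, 56)
block = ResNeXtBlock(256, 128, 256, cardinality=32)  # 32 paths of width 4
print(block(x).shape)  # torch.Size([1, 256, 56, 56])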

2,760 citations


Journal ArticleDOI
Mingxun Wang, Jeremy Carver, Vanessa V. Phelan, Laura M. Sanchez, Neha Garg, Yao Peng, Don D. Nguyen, Jeramie D. Watrous, Clifford A. Kapono, Tal Luzzatto-Knaan, Carla Porto, Amina Bouslimani, Alexey V. Melnik, Michael J. Meehan, Wei-Ting Liu, Max Crüsemann, Paul D. Boudreau, Eduardo Esquenazi, Mario Sandoval-Calderón, Roland D. Kersten, Laura A. Pace, Robert A. Quinn, Katherine R. Duncan, Cheng-Chih Hsu, Dimitrios J. Floros, Ronnie G. Gavilan, Karin Kleigrewe, Trent R. Northen, Rachel J. Dutton, Delphine Parrot, Erin E. Carlson, Bertrand Aigle, Charlotte Frydenlund Michelsen, Lars Jelsbak, Christian Sohlenkamp, Pavel A. Pevzner, Anna Edlund, Jeffrey S. McLean, Jörn Piel, Brian T. Murphy, Lena Gerwick, Chih-Chuang Liaw, Yu-Liang Yang, Hans-Ulrich Humpf, Maria Maansson, Robert A. Keyzers, Amy C. Sims, Andrew R. Johnson, Ashley M. Sidebottom, Brian E. Sedio, Andreas Klitgaard, Charles B. Larson, Cristopher A. Boya P., Daniel Torres-Mendoza, David Gonzalez, Denise Brentan Silva, Lucas Miranda Marques, Daniel P. Demarque, Egle Pociute, Ellis C. O’Neill, Enora Briand, Eric J. N. Helfrich, Eve A. Granatosky, Evgenia Glukhov, Florian Ryffel, Hailey Houson, Hosein Mohimani, Jenan J. Kharbush, Yi Zeng, Julia A. Vorholt, Kenji L. Kurita, Pep Charusanti, Kerry L. McPhail, Kristian Fog Nielsen, Lisa Vuong, Maryam Elfeki, Matthew F. Traxler, Niclas Engene, Nobuhiro Koyama, Oliver B. Vining, Ralph S. Baric, Ricardo Pianta Rodrigues da Silva, Samantha J. Mascuch, Sophie Tomasi, Stefan Jenkins, Venkat R. Macherla, Thomas Hoffman, Vinayak Agarwal, Philip G. Williams, Jingqui Dai, Ram P. Neupane, Joshua R. Gurr, Andrés M. C. Rodríguez, Anne Lamsa, Chen Zhang, Kathleen Dorrestein, Brendan M. Duggan, Jehad Almaliti, Pierre-Marie Allard, Prasad Phapale, Louis-Félix Nothias, Theodore Alexandrov, Marc Litaudon, Jean-Luc Wolfender, Jennifer E. Kyle, Thomas O. Metz, Tyler Peryea, Dac-Trung Nguyen, Danielle VanLeer, Paul Shinn, Ajit Jadhav, Rolf Müller, Katrina M. Waters, Wenyuan Shi, Xueting Liu, Lixin Zhang, Rob Knight, Paul R. Jensen, Bernhard O. Palsson, Kit Pogliano, Roger G. Linington, Marcelino Gutiérrez, Norberto Peporine Lopes, William H. Gerwick, Bradley S. Moore, Pieter C. Dorrestein, Nuno Bandeira
TL;DR: In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations, and data-driven social networking should facilitate identification of spectra and foster collaborations.
Abstract: The potential of the diverse chemistries present in natural products (NP) for biotechnology and medicine remains untapped because NP databases are not searchable with raw data and the NP community has no way to share data other than in published papers. Although mass spectrometry (MS) techniques are well-suited to high-throughput characterization of NP, there is a pressing need for an infrastructure to enable sharing and curation of data. We present Global Natural Products Social Molecular Networking (GNPS; http://gnps.ucsd.edu), an open-access knowledge base for community-wide organization and sharing of raw, processed or identified tandem mass spectrometry (MS/MS) data. In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations. Data-driven social networking should facilitate identification of spectra and foster collaborations. We also introduce the concept of 'living data' through continuous reanalysis of deposited data.
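The molecular-networking idea underlying GNPS links spectra by spectral similarity. Below is a deliberately simplified cosine-similarity sketch (our illustration; GNPS's production scoring also aligns peaks shifted by the precursor mass difference, which is omitted here):

import math

def cosine_similarity(spec_a, spec_b, tol=0.01):
    """Greedy cosine score between two MS/MS spectra, each given as a
    list of (m/z, intensity) pairs. Simplified for illustration."""
    # Normalize intensities so each spectrum has unit Euclidean norm.
    norm_a = math.sqrt(sum(i * i for _, i in spec_a))
    norm_b = math.sqrt(sum(i * i for _, i in spec_b))
    score, used_b = 0.0, set()
    for mz_a, int_a in spec_a:
        # Match each peak in A to the closest unused peak in B within tolerance.
        best_mz, best_j = None, None
        for j, (mz_b, _int_b) in enumerate(spec_b):
            if j in used_b or abs(mz_a - mz_b) > tol:
                continue
            if best_mz is None or abs(mz_a - mz_b) < abs(mz_a - best_mz):
                best_mz, best_j = mz_b, j
        if best_j is not None:
            used_b.add(best_j)
            score += (int_a / norm_a) * (spec_b[best_j][1] / norm_b)
    return score  # 1.0 for identical spectra, 0.0 for no shared peaks

a = [(100.1, 50.0), (150.2, 100.0), (210.0, 25.0)]
print(cosine_similarity(a, a))  # 1.0

In a molecular network, spectrum pairs scoring above a threshold become edges, so related molecules cluster into families even when no library match exists.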

2,365 citations


Journal ArticleDOI
TL;DR: These guidelines are intended for use by healthcare professionals who care for patients at risk for hospital-acquired pneumonia (HAP) and ventilator-associated pneumonia (VAP), including specialists in infectious diseases, pulmonary diseases, and critical care, as well as surgeons, anesthesiologists, hospitalists, and any clinicians and healthcare providers caring for hospitalized patients with nosocomial pneumonia.
Abstract: It is important to realize that guidelines cannot always account for individual variation among patients. They are not intended to supplant physician judgment with respect to particular patients or special clinical situations. IDSA considers adherence to these guidelines to be voluntary, with the ultimate determination regarding their application to be made by the physician in the light of each patient's individual circumstances. These guidelines are intended for use by healthcare professionals who care for patients at risk for hospital-acquired pneumonia (HAP) and ventilator-associated pneumonia (VAP), including specialists in infectious diseases, pulmonary diseases, and critical care, as well as surgeons, anesthesiologists, hospitalists, and any clinicians and healthcare providers caring for hospitalized patients with nosocomial pneumonia. The panel's recommendations for the diagnosis and treatment of HAP and VAP are based upon evidence derived from topic-specific systematic literature reviews.

2,359 citations



Journal ArticleDOI
TL;DR: A new evidence-based, pharmacologic treatment guideline for rheumatoid arthritis (RA) was developed, using systematic reviews and GRADE methodology to rate the quality of evidence and a group consensus process to grade recommendations as strong or conditional.
Abstract: Objective To develop a new evidence-based, pharmacologic treatment guideline for rheumatoid arthritis (RA). Methods We conducted systematic reviews to synthesize the evidence for the benefits and harms of various treatment options. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology to rate the quality of evidence. We employed a group consensus process to grade the strength of recommendations (either strong or conditional). A strong recommendation indicates that clinicians are certain that the benefits of an intervention far outweigh the harms (or vice versa). A conditional recommendation denotes uncertainty over the balance of benefits and harms and/or more significant variability in patient values and preferences. Results The guideline covers the use of traditional disease-modifying antirheumatic drugs (DMARDs), biologic agents, tofacitinib, and glucocorticoids in early (<6 months) and established (≥6 months) RA. In addition, it provides recommendations on using a treat-to-target approach, tapering and discontinuing medications, and the use of biologic agents and DMARDs in patients with hepatitis, congestive heart failure, malignancy, and serious infections. The guideline addresses the use of vaccines in patients starting/receiving DMARDs or biologic agents, screening for tuberculosis in patients starting/receiving biologic agents or tofacitinib, and laboratory monitoring for traditional DMARDs. The guideline includes 74 recommendations: 23% are strong and 77% are conditional. Conclusion This RA guideline should serve as a tool for clinicians and patients (our two target audiences) for pharmacologic treatment decisions in commonly encountered clinical situations. These recommendations are not prescriptive, and the treatment decisions should be made by physicians and patients through a shared decision-making process taking into account patients’ values, preferences, and comorbidities. These recommendations should not be used to limit or deny access to therapies.

2,083 citations


Journal ArticleDOI
TL;DR: The American Pain Society, with input from the American Society of Anesthesiologists, developed a clinical practice guideline to promote evidence-based, effective, and safer postoperative pain management in children and adults.

1,806 citations


Proceedings ArticleDOI
11 Apr 2016
TL;DR: This paper builds novel models for the One-Class Collaborative Filtering setting, where the goal is to estimate users' fashion-aware personalized ranking functions based on their past feedback, combining high-level visual features extracted from a deep convolutional neural network, users' past feedback, and evolving trends within the community.
Abstract: Building a successful recommender system depends on understanding both the dimensions of people's preferences as well as their dynamics. In certain domains, such as fashion, modeling such preferences can be incredibly difficult, due to the need to simultaneously model the visual appearance of products as well as their evolution over time. The subtle semantics and non-linear dynamics of fashion evolution raise unique challenges especially considering the sparsity and large scale of the underlying datasets. In this paper we build novel models for the One-Class Collaborative Filtering setting, where our goal is to estimate users' fashion-aware personalized ranking functions based on their past feedback. To uncover the complex and evolving visual factors that people consider when evaluating products, our method combines high-level visual features extracted from a deep convolutional neural network, users' past feedback, as well as evolving trends within the community. Experimentally we evaluate our method on two large real-world datasets from Amazon.com, where we show it to outperform state-of-the-art personalized ranking measures, and also use it to visualize the high-level fashion trends across the 11-year span of our dataset.
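For intuition, visually-aware personalized-ranking models of the kind described here score a user-item pair roughly as follows (our sketch of the model family; the paper's full model adds temporal/trend terms):

x_hat(u, i) = alpha + beta_u + beta_i + <gamma_u, gamma_i> + <theta_u, E f_i>

where f_i is the deep CNN feature of item i's image, E is a learned projection into a low-dimensional visual space, alpha is a global offset, the beta terms are user/item biases, and the gamma/theta vectors are latent and visual preference factors; parameters are learned with a pairwise Bayesian personalized ranking objective on purchase feedback.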

Journal ArticleDOI
Nicholas J Kassebaum, Megha Arora, Ryan M Barber, Zulfiqar A Bhutta +679 more (268 institutions)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.
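For readers outside epidemiology, the bookkeeping behind these metrics, as standardly defined in the GBD framework (our summary, not quoted from the paper), is:

DALY = YLL + YLD
YLL = (number of deaths) × (standard life expectancy at age of death)
YLD = (prevalence of a condition) × (disability weight)

HALE is then life expectancy adjusted downward for years lived in less than full health, so rising HALE alongside flat life expectancy signals healthier, not merely longer, lives.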

Journal ArticleDOI
TL;DR: Selective targeting of BCL2 with venetoclax had a manageable safety profile and induced substantial responses in patients with relapsed CLL or SLL, including those with poor prognostic features.
Abstract: BACKGROUND New treatments have improved outcomes for patients with relapsed chronic lymphocytic leukemia (CLL), but complete remissions remain uncommon. Venetoclax has a distinct mechanism of action; it targets BCL2, a protein central to the survival of CLL cells. METHODS We conducted a phase 1 dose-escalation study of daily oral venetoclax in patients with relapsed or refractory CLL or small lymphocytic lymphoma (SLL) to assess safety, pharmacokinetic profile, and efficacy. In the dose-escalation phase, 56 patients received active treatment in one of eight dose groups that ranged from 150 to 1200 mg per day. In an expansion cohort, 60 additional patients were treated with a weekly stepwise ramp-up in doses as high as 400 mg per day. RESULTS The majority of the study patients had received multiple previous treatments, and 89% had poor prognostic clinical or genetic features. Venetoclax was active at all dose levels. Clinical tumor lysis syndrome occurred in 3 of 56 patients in the dose-escalation cohort, with one death. After adjustments to the dose-escalation schedule, clinical tumor lysis syndrome did not occur in any of the 60 patients in the expansion cohort. Other toxic effects included mild diarrhea (in 52% of the patients), upper respiratory tract infection (in 48%), nausea (in 47%), and grade 3 or 4 neutropenia (in 41%). A maximum tolerated dose was not identified. Among the 116 patients who received venetoclax, 92 (79%) had a response. Response rates ranged from 71 to 79% among patients in subgroups with an adverse prognosis, including those with resistance to fludarabine, those with chromosome 17p deletions (deletion 17p CLL), and those with unmutated IGHV. Complete remissions occurred in 20% of the patients, including 5% who had no minimal residual disease on flow cytometry. The 15-month progression-free survival estimate for the 400-mg dose groups was 69%. CONCLUSIONS Selective targeting of BCL2 with venetoclax had a manageable safety profile and induced substantial responses in patients with relapsed CLL or SLL, including those with poor prognostic features. (Funded by AbbVie and Genentech; ClinicalTrials.gov number, NCT01328626.)

Journal ArticleDOI
10 Nov 2016-Nature
TL;DR: Extraordinary progress in understanding the biology of ALS provides new reasons for optimism that meaningful therapies will be identified, and emerging themes include dysfunction in RNA metabolism and protein homeostasis, with specific defects in nucleocytoplasmic trafficking.
Abstract: Amyotrophic lateral sclerosis (ALS) is a progressive and uniformly fatal neurodegenerative disease. A plethora of genetic factors have been identified that drive the degeneration of motor neurons in ALS, increase susceptibility to the disease or influence the rate of its progression. Emerging themes include dysfunction in RNA metabolism and protein homeostasis, with specific defects in nucleocytoplasmic trafficking, the induction of stress at the endoplasmic reticulum and impaired dynamics of ribonucleoprotein bodies such as RNA granules that assemble through liquid-liquid phase separation. Extraordinary progress in understanding the biology of ALS provides new reasons for optimism that meaningful therapies will be identified.

Book ChapterDOI
08 Oct 2016
TL;DR: A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection; it is learned end-to-end by optimizing a multi-task loss.
Abstract: A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.
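Schematically, the multi-task loss mentioned above combines a classification term and a bounding-box regression term at each output layer; in generic detector notation (our sketch, not the authors' exact formulation):

L = Σ_s Σ_i [ L_cls(p_{s,i}, y_{s,i}) + λ · 1{y_{s,i} = object} · L_loc(b_{s,i}, b*_{s,i}) ]

where s indexes the output layers (scales), i the candidate boxes at that layer, p the predicted class scores, b the predicted box coordinates, b* the ground-truth boxes, and λ balances the two terms; the localization term applies only to positive samples.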

Journal ArticleDOI
University of East Anglia, University of Oslo, Commonwealth Scientific and Industrial Research Organisation, University of Exeter, Oak Ridge National Laboratory, National Oceanic and Atmospheric Administration, Woods Hole Research Center, University of California, San Diego, Karlsruhe Institute of Technology, Cooperative Institute for Marine and Atmospheric Studies, Centre national de la recherche scientifique, University of Maryland, College Park, National Institute of Water and Atmospheric Research, Woods Hole Oceanographic Institution, Flanders Marine Institute, Alfred Wegener Institute for Polar and Marine Research, Netherlands Environmental Assessment Agency, University of Illinois at Urbana–Champaign, Leibniz Institute of Marine Sciences, Max Planck Society, University of Paris, Hobart Corporation, University of Bern, Oeschger Centre for Climate Change Research, National Center for Atmospheric Research, University of Miami, Council of Scientific and Industrial Research, University of Colorado Boulder, National Institute for Environmental Studies, Joint Institute for the Study of the Atmosphere and Ocean, Geophysical Institute, University of Bergen, Goddard Space Flight Center, Montana State University, University of New Hampshire, Bjerknes Centre for Climate Research, Imperial College London, Lamont–Doherty Earth Observatory, Auburn University, Wageningen University and Research Centre, VU University Amsterdam, Met Office
TL;DR: In this article, the authors quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates and consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models. We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2006–2015), EFF was 9.3 ± 0.5 GtC yr−1, ELUC 1.0 ± 0.5 GtC yr−1, GATM 4.5 ± 0.1 GtC yr−1, SOCEAN 2.6 ± 0.5 GtC yr−1, and SLAND 3.1 ± 0.9 GtC yr−1. For year 2015 alone, the growth in EFF was approximately zero and emissions remained at 9.9 ± 0.5 GtC yr−1, showing a slowdown in growth of these emissions compared to the average growth of 1.8 % yr−1 that took place during 2006–2015. Also, for 2015, ELUC was 1.3 ± 0.5 GtC yr−1, GATM was 6.3 ± 0.2 GtC yr−1, SOCEAN was 3.0 ± 0.5 GtC yr−1, and SLAND was 1.9 ± 0.9 GtC yr−1. GATM was higher in 2015 compared to the past decade (2006–2015), reflecting a smaller SLAND for that year. The global atmospheric CO2 concentration reached 399.4 ± 0.1 ppm averaged over 2015. For 2016, preliminary data indicate the continuation of low growth in EFF with +0.2 % (range of −1.0 to +1.8 %) based on national emissions projections for China and USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. In spite of the low growth of EFF in 2016, the growth rate in atmospheric CO2 concentration is expected to be relatively high because of the persistence of the smaller residual terrestrial sink (SLAND) in response to El Niño conditions of 2015–2016. From this projection of EFF and assumed constant ELUC for 2016, cumulative emissions of CO2 will reach 565 ± 55 GtC (2075 ± 205 GtCO2) for 1870–2016, about 75 % from EFF and 25 % from ELUC.
This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015b, a, 2014, 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2016).
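The five components are tied together by the budget identity, which also allows a quick consistency check of the reported decade means (our arithmetic from the numbers in the abstract):

EFF + ELUC = GATM + SOCEAN + SLAND

For 2006–2015: 9.3 + 1.0 = 10.3 GtC yr−1 on the source side, versus 4.5 + 2.6 + 3.1 = 10.2 GtC yr−1 summed over atmosphere, ocean, and land, leaving a residual of about 0.1 GtC yr−1, well within the stated ±1σ uncertainties.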

Journal ArticleDOI
25 Feb 2016
TL;DR: Evaluation of modified 16S rRNA gene and internal transcribed spacer (ITS) primers for archaea/bacteria and fungi with nonaquatic samples demonstrated that two recently modified primer pairs that target taxonomically discriminatory regions of bacterial and fungal genomic DNA do not introduce new biases when used on a variety of sample types.
Abstract: Designing primers for PCR-based taxonomic surveys that amplify a broad range of phylotypes in varied community samples is a difficult challenge, and the comparability of data sets amplified with varied primers requires attention. Here, we examined the performance of modified 16S rRNA gene and internal transcribed spacer (ITS) primers for archaea/bacteria and fungi, respectively, with nonaquatic samples. We moved primer bar codes to the 5' end, allowing for a range of different 3' primer pairings, such as the 515f/926r primer pair, which amplifies variable regions 4 and 5 of the 16S rRNA gene. We additionally demonstrated that modifications to the 515f/806r (variable region 4) 16S primer pair, which improves detection of Thaumarchaeota and clade SAR11 in marine samples, do not degrade performance on taxa already amplified effectively by the original primer set. Alterations to the fungal ITS primers did result in differential but overall improved performance compared to the original primers. In both cases, the improved primers should be widely adopted for amplicon studies. IMPORTANCE We continue to uncover a wealth of information connecting microbes in important ways to human and environmental ecology. As our scientific knowledge and technical abilities improve, the tools used for microbiome surveys can be modified to improve the accuracy of our techniques, ensuring that we can continue to identify groundbreaking connections between microbes and the ecosystems they populate, from ice caps to the human body. It is important to confirm that modifications to these tools do not cause new, detrimental biases that would inhibit the field rather than continue to move it forward. We therefore demonstrated that two recently modified primer pairs that target taxonomically discriminatory regions of bacterial and fungal genomic DNA do not introduce new biases when used on a variety of sample types, from soil to human skin. This confirms the utility of these primers for maintaining currently recommended microbiome research techniques as the state of the art.

Journal ArticleDOI
TL;DR: This review focuses on recent developments in the understanding of the molecular actions of the core Hippo kinase cascade and discusses key open questions in the regulation and function of the Hippo pathway.
Abstract: The Hippo pathway was initially identified in Drosophila melanogaster screens for tissue growth two decades ago and has been a subject extensively studied in both Drosophila and mammals in the last several years. The core of the Hippo pathway consists of a kinase cascade, transcription coactivators, and DNA-binding partners. Recent studies have expanded the Hippo pathway as a complex signaling network with >30 components. This pathway is regulated by intrinsic cell machineries, such as cell-cell contact, cell polarity, and actin cytoskeleton, as well as a wide range of signals, including cellular energy status, mechanical cues, and hormonal signals that act through G-protein-coupled receptors. The major functions of the Hippo pathway have been defined to restrict tissue growth in adults and modulate cell proliferation, differentiation, and migration in developing organs. Furthermore, dysregulation of the Hippo pathway leads to aberrant cell growth and neoplasia. In this review, we focus on recent developments in our understanding of the molecular actions of the core Hippo kinase cascade and discuss key open questions in the regulation and function of the Hippo pathway.

Journal ArticleDOI
11 May 2016-Nature
TL;DR: It is demonstrated that ZIKV(BR) infects fetuses, causing intrauterine growth restriction (IUGR), and crosses the placenta and causes microcephaly by targeting cortical progenitor cells, inducing cell death by apoptosis and autophagy and impairing neurodevelopment.
Abstract: Zika virus (ZIKV) is an arbovirus belonging to the genus Flavivirus (family Flaviviridae) and was first described in 1947 in Uganda following blood analyses of sentinel Rhesus monkeys. During the twentieth century, the African and Asian lineages of the virus did not cause meaningful infections in humans. However, in 2007, vectored by Aedes aegypti mosquitoes, ZIKV caused the first noteworthy epidemic on the Yap Island in Micronesia. Patients experienced fever, skin rash, arthralgia and conjunctivitis. From 2013 to 2015, the Asian lineage of the virus caused further massive outbreaks in New Caledonia and French Polynesia. In 2013, ZIKV reached Brazil, later spreading to other countries in South and Central America. In Brazil, the virus has been linked to congenital malformations, including microcephaly and other severe neurological diseases, such as Guillain-Barré syndrome. Despite clinical evidence, direct experimental proof showing that the Brazilian ZIKV (ZIKV(BR)) strain causes birth defects remains absent. Here we demonstrate that ZIKV(BR) infects fetuses, causing intrauterine growth restriction, including signs of microcephaly, in mice. Moreover, the virus infects human cortical progenitor cells, leading to an increase in cell death. We also report that the infection of human brain organoids results in a reduction of proliferative zones and disrupted cortical layers. These results indicate that ZIKV(BR) crosses the placenta and causes microcephaly by targeting cortical progenitor cells, inducing cell death by apoptosis and autophagy, and impairing neurodevelopment. Our data reinforce the growing body of evidence linking the ZIKV(BR) outbreak to the alarming number of cases of congenital brain malformations. Our model can be used to determine the efficiency of therapeutic approaches to counteracting the harmful impact of ZIKV(BR) in human neurodevelopment.

Journal ArticleDOI
TL;DR: The results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
Abstract: Advanced age-related macular degeneration (AMD) is the leading cause of blindness in the elderly, with limited therapeutic options. Here we report on a study of >12 million variants, including 163,714 directly genotyped, mostly rare, protein-altering variants. Analyzing 16,144 patients and 17,832 controls, we identify 52 independently associated common and rare variants (P < 5 × 10^−8) distributed across 34 loci. Although wet and dry AMD subtypes exhibit predominantly shared genetics, we identify the first genetic association signal specific to wet AMD, near MMP9 (difference P value = 4.1 × 10^−10). Very rare coding variants (frequency <0.1%) in CFH, CFI and TIMP3 suggest causal roles for these genes, as does a splice variant in SLC16A8. Our results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.

Journal ArticleDOI
TL;DR: The enormous health loss attributable to viral hepatitis and the availability of effective vaccines and treatments suggest an important opportunity to improve public health.

Journal ArticleDOI
25 Mar 2016-Science
TL;DR: This work set out to define a minimal cellular genome experimentally, applying whole-genome design and complete chemical synthesis to build candidate genomes and test them for viability.
Abstract: We used whole-genome design and complete chemical synthesis to minimize the 1079-kilobase pair synthetic genome of Mycoplasma mycoides JCVI-syn1.0. An initial design, based on collective knowledge of molecular biology combined with limited transposon mutagenesis data, failed to produce a viable cell. Improved transposon mutagenesis methods revealed a class of quasi-essential genes that are needed for robust growth, explaining the failure of our initial design. Three cycles of design, synthesis, and testing, with retention of quasi-essential genes, produced JCVI-syn3.0 (531 kilobase pairs, 473 genes), which has a genome smaller than that of any autonomously replicating cell found in nature. JCVI-syn3.0 retains almost all genes involved in the synthesis and processing of macromolecules. Unexpectedly, it also contains 149 genes with unknown biological functions. JCVI-syn3.0 is a versatile platform for investigating the core functions of life and for exploring whole-genome design.

Journal ArticleDOI
TL;DR: An enhanced CLIP (eCLIP) protocol is developed that decreases requisite amplification by ∼1,000-fold, decreasing discarded PCR duplicate reads by ∼60% while maintaining single-nucleotide binding resolution, and improves specificity in the discovery of authentic binding sites.
Abstract: As RNA-binding proteins (RBPs) play essential roles in cellular physiology by interacting with target RNA molecules, binding site identification by UV crosslinking and immunoprecipitation (CLIP) of ribonucleoprotein complexes is critical to understanding RBP function. However, current CLIP protocols are technically demanding and yield low-complexity libraries with high experimental failure rates. We have developed an enhanced CLIP (eCLIP) protocol that decreases requisite amplification by ~1,000-fold, decreasing discarded PCR duplicate reads by ~60% while maintaining single-nucleotide binding resolution. By simplifying the generation of paired IgG and size-matched input controls, eCLIP improves specificity in the discovery of authentic binding sites. We generated 102 eCLIP experiments for 73 diverse RBPs in HepG2 and K562 cells (available at https://www.encodeproject.org), demonstrating that eCLIP enables large-scale and robust profiling, with amplification and sample requirements similar to those of ChIP-seq. eCLIP enables integrative analysis of diverse RBPs to reveal factor-specific profiles, common artifacts for CLIP and RNA-centric perspectives on RBP activity.
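The duplicate-read accounting above rests on random-mer barcodes (unique molecular identifiers, UMIs) ligated to each fragment: reads that map to the same position and carry the same UMI are collapsed to a single molecule. A minimal sketch of that collapse step (ours, not the published eCLIP pipeline):

def collapse_pcr_duplicates(reads):
    """Collapse PCR duplicates among mapped reads.

    `reads` is an iterable of (chrom, start, strand, umi) tuples; reads
    sharing all four fields are assumed to be PCR copies of one ligated
    molecule. Returns the set of unique molecules.
    """
    return {(chrom, start, strand, umi) for chrom, start, strand, umi in reads}

reads = [
    ("chr1", 1000, "+", "ACGTACGT"),
    ("chr1", 1000, "+", "ACGTACGT"),  # PCR copy: same position and UMI
    ("chr1", 1000, "+", "TTAGGCCA"),  # same position, distinct molecule
]
unique = collapse_pcr_duplicates(reads)
print(len(reads), "reads ->", len(unique), "unique molecules")  # 3 reads -> 2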

Journal ArticleDOI
TL;DR: The results explain the long-unresolved sulfur chemistry of the historic London Fog, elucidate the mechanism of severe haze in China, and suggest that effective haze mitigation is achievable by intervening in the sulfate formation process with NH3 and NO2 emission control measures.
Abstract: Sulfate aerosols exert profound impacts on human and ecosystem health, weather, and climate, but their formation mechanism remains uncertain. Atmospheric models consistently underpredict sulfate levels under diverse environmental conditions. From atmospheric measurements in two Chinese megacities and complementary laboratory experiments, we show that the aqueous oxidation of SO2 by NO2 is key to efficient sulfate formation but is only feasible under two atmospheric conditions: on fine aerosols with high relative humidity and NH3 neutralization or under cloud conditions. Under polluted environments, this SO2 oxidation process leads to large sulfate production rates and promotes formation of nitrate and organic matter on aqueous particles, exacerbating severe haze development. Effective haze mitigation is achievable by intervening in the sulfate formation process with enforced NH3 and NO2 control measures. In addition to explaining the polluted episodes currently occurring in China and during the 1952 London Fog, this sulfate production mechanism is widespread, and our results suggest a way to tackle this growing problem in China and much of the developing world.
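The aqueous S(IV) + NO2 pathway described above is often summarized by the overall stoichiometry below (a textbook-style summary consistent with the mechanism, not an equation quoted from the paper); NH3 enters by neutralizing the aerosol, keeping the pH high enough for dissolved S(IV) to be abundant:

SO3^2−(aq) + 2 NO2(aq) + H2O → SO4^2−(aq) + 2 NO2^−(aq) + 2 H^+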

Journal ArticleDOI
28 Jun 2016-JAMA
TL;DR: Among ambulatory adults aged 75 years or older, treating to an SBP target of less than 120 mm Hg compared with an SBP target of less than 140 mm Hg resulted in significantly lower rates of fatal and nonfatal major cardiovascular events and death from any cause.
Abstract: Importance The appropriate treatment target for systolic blood pressure (SBP) in older patients with hypertension remains uncertain. Objective To evaluate the effects of intensive (<120 mm Hg) compared with standard (<140 mm Hg) SBP targets in persons aged 75 years or older with hypertension but without diabetes. Design, Setting, and Participants A multicenter, randomized clinical trial of patients aged 75 years or older who participated in the Systolic Blood Pressure Intervention Trial (SPRINT). Recruitment began on October 20, 2010, and follow-up ended on August 20, 2015. Interventions Participants were randomized to an SBP target of less than 120 mm Hg (intensive treatment group, n = 1317) or an SBP target of less than 140 mm Hg (standard treatment group, n = 1319). Main Outcomes and Measures The primary cardiovascular disease outcome was a composite of nonfatal myocardial infarction, acute coronary syndrome not resulting in a myocardial infarction, nonfatal stroke, nonfatal acute decompensated heart failure, and death from cardiovascular causes. All-cause mortality was a secondary outcome. Results Among 2636 participants (mean age, 79.9 years; 37.9% women), 2510 (95.2%) provided complete follow-up data. At a median follow-up of 3.14 years, there was a significantly lower rate of the primary composite outcome (102 events in the intensive treatment group vs 148 events in the standard treatment group; hazard ratio [HR], 0.66 [95% CI, 0.51-0.85]) and all-cause mortality (73 deaths vs 107 deaths, respectively; HR, 0.67 [95% CI, 0.49-0.91]). The overall rate of serious adverse events was not different between treatment groups (48.4% in the intensive treatment group vs 48.3% in the standard treatment group; HR, 0.99 [95% CI, 0.89-1.11]). Absolute rates of hypotension were 2.4% in the intensive treatment group vs 1.4% in the standard treatment group (HR, 1.71 [95% CI, 0.97-3.09]), 3.0% vs 2.4%, respectively, for syncope (HR, 1.23 [95% CI, 0.76-2.00]), 4.0% vs 2.7% for electrolyte abnormalities (HR, 1.51 [95% CI, 0.99-2.33]), 5.5% vs 4.0% for acute kidney injury (HR, 1.41 [95% CI, 0.98-2.04]), and 4.9% vs 5.5% for injurious falls (HR, 0.91 [95% CI, 0.65-1.29]). Conclusions and Relevance Among ambulatory adults aged 75 years or older, treating to an SBP target of less than 120 mm Hg compared with an SBP target of less than 140 mm Hg resulted in significantly lower rates of fatal and nonfatal major cardiovascular events and death from any cause. Trial Registration clinicaltrials.gov Identifier: NCT01206062
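The event counts above imply the following absolute effect (our back-of-the-envelope arithmetic from the reported numbers, not a statistic quoted in the abstract; it ignores censoring):

primary-outcome risk: 102/1317 ≈ 7.7% (intensive) vs 148/1319 ≈ 11.2% (standard)
absolute risk reduction ≈ 3.5 percentage points over a median 3.14 years

That is roughly one primary event averted per 29 patients treated to the lower target over the trial period.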

Journal ArticleDOI
22 Sep 2016
TL;DR: Primary open-angle glaucoma (POAG) is the most common type; management of POAG includes topical drug therapies and surgery to reduce IOP, although new therapies targeting neuroprotection of RGCs and axonal regeneration are under development.
Abstract: Glaucoma is an optic neuropathy that is characterized by the progressive degeneration of the optic nerve, leading to visual impairment. Glaucoma is the main cause of irreversible blindness worldwide, but typically remains asymptomatic until very severe. Open-angle glaucoma comprises the majority of cases in the United States and western Europe, of which, primary open-angle glaucoma (POAG) is the most common type. By contrast, in China and other Asian countries, angle-closure glaucoma is highly prevalent. These two types of glaucoma are characterized based on the anatomic configuration of the aqueous humour outflow pathway. The pathophysiology of POAG is not well understood, but it is an optic neuropathy that is thought to be associated with intraocular pressure (IOP)-related damage to the optic nerve head and resultant loss of retinal ganglion cells (RGCs). POAG is generally diagnosed during routine eye examination, which includes fundoscopic evaluation and visual field assessment (using perimetry). An increase in IOP, measured by tonometry, is not essential for diagnosis. Management of POAG includes topical drug therapies and surgery to reduce IOP, although new therapies targeting neuroprotection of RGCs and axonal regeneration are under development.

Journal ArticleDOI
TL;DR: Worldwide cooperative analyses of brain imaging data support a profile of subcortical abnormalities in schizophrenia that is consistent with the profile from traditional meta-analytic approaches, and validate that collaborative data analyses can readily be used across brain phenotypes and disorders.
Abstract: The profile of brain structural abnormalities in schizophrenia is still not fully understood, despite decades of research using brain scans. To validate a prospective meta-analysis approach to analyzing multicenter neuroimaging data, we analyzed brain MRI scans from 2028 schizophrenia patients and 2540 healthy controls, assessed with standardized methods at 15 centers worldwide. We identified subcortical brain volumes that differentiated patients from controls, and ranked them according to their effect sizes. Compared with healthy controls, patients with schizophrenia had smaller hippocampus (Cohen's d=-0.46), amygdala (d=-0.31), thalamus (d=-0.31), accumbens (d=-0.25) and intracranial volumes (d=-0.12), as well as larger pallidum (d=0.21) and lateral ventricle volumes (d=0.37). Putamen and pallidum volume augmentations were positively associated with duration of illness and hippocampal deficits scaled with the proportion of unmedicated patients. Worldwide cooperative analyses of brain imaging data support a profile of subcortical abnormalities in schizophrenia, which is consistent with that based on traditional meta-analytic approaches. This first ENIGMA Schizophrenia Working Group study validates that collaborative data analyses can readily be used across brain phenotypes and disorders and encourages analysis and data sharing efforts to further our understanding of severe mental illness.
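To interpret the reported effect sizes: Cohen's d is the group mean difference in units of pooled standard deviation,

d = (mean_patients − mean_controls) / SD_pooled

so d = −0.46 for the hippocampus indicates that the average patient volume lies roughly half a pooled standard deviation below the control average, while the negative intracranial-volume d of −0.12 is a much smaller shift.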

Journal ArticleDOI
TL;DR: This portion of the NCCN Guidelines discusses general principles for the diagnosis, staging, and treatment of STS of the extremities, superficial trunk, or head and neck; outlines treatment recommendations by disease stage; and reviews the evidence to support the guidelines' recommendations.
Abstract: Soft tissue sarcomas (STS) are rare solid tumors of mesenchymal cell origin that display a heterogenous mix of clinical and pathologic characteristics. STS can develop from fat, muscle, nerves, blood vessels, and other connective tissues. The evaluation and treatment of patients with STS requires a multidisciplinary team with demonstrated expertise in the management of these tumors. The complete NCCN Guidelines for STS provide recommendations for the diagnosis, evaluation, and treatment of extremity/superficial trunk/head and neck STS, as well as intra-abdominal/retroperitoneal STS, gastrointestinal stromal tumors, desmoid tumors, and rhabdomyosarcoma. This portion of the NCCN Guidelines discusses general principles for the diagnosis, staging, and treatment of STS of the extremities, superficial trunk, or head and neck; outlines treatment recommendations by disease stage; and reviews the evidence to support the guidelines' recommendations.

Journal ArticleDOI
01 Dec 2016-Nature
TL;DR: The HITI method presented here establishes new avenues for basic research and targeted gene therapies and demonstrates the efficacy of HITI in improving visual function using a rat model of the retinal degeneration condition retinitis pigmentosa.
Abstract: Targeted genome editing via engineered nucleases is an exciting area of biomedical research and holds potential for clinical applications. Despite rapid advances in the field, in vivo targeted transgene integration is still infeasible because current tools are inefficient, especially for non-dividing cells, which compose most adult tissues. This poses a barrier for uncovering fundamental biological principles and developing treatments for a broad range of genetic disorders. Based on clustered regularly interspaced short palindromic repeat/Cas9 (CRISPR/Cas9) technology, here we devise a homology-independent targeted integration (HITI) strategy, which allows for robust DNA knock-in in both dividing and non-dividing cells in vitro and, more importantly, in vivo (for example, in neurons of postnatal mammals). As a proof of concept of its therapeutic potential, we demonstrate the efficacy of HITI in improving visual function using a rat model of the retinal degeneration condition retinitis pigmentosa. The HITI method presented here establishes new avenues for basic research and targeted gene therapies.

Journal ArticleDOI
TL;DR: In this paper, the authors employ an approximation that makes a nonlinear term structure model extremely tractable for analysis of an economy operating near the zero lower bound for interest rates, and show that such a model offers an excellent description of the data and can be used to summarize the macroeconomic effects of unconventional monetary policy at the zero lower bound.
Abstract: This paper employs an approximation that makes a nonlinear term structure model extremely tractable for analysis of an economy operating near the zero lower bound for interest rates. We show that such a model offers an excellent description of the data and can be used to summarize the macroeconomic effects of unconventional monetary policy at the zero lower bound. Our estimates imply that the efforts by the Federal Reserve to stimulate the economy since 2009 succeeded in making the unemployment rate in May 2013 0.23% lower than it otherwise would have been.
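The device that keeps this model class tractable is a "shadow rate"; in the spirit of shadow-rate term structure models (our notation, not the paper's exact specification):

s_t = δ_0 + δ_1' X_t        (shadow rate, affine in Gaussian factors X_t)
r_t = max(r_lb, s_t)        (observed short rate, truncated at the lower bound r_lb)

Away from the bound the model behaves like a standard Gaussian affine term structure model; at the bound, the shadow rate can fall below r_lb and serves as a summary measure of the stance of unconventional policy.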