Showing papers by "University of Toronto" published in 2016
••
University College London1, The Feinstein Institute for Medical Research2, University of Pittsburgh3, Guy's and St Thomas' NHS Foundation Trust4, Monash University5, Vanderbilt University6, Emory University7, Washington University in St. Louis8, Brown University9, University of Toronto10, Sunnybrook Health Sciences Centre11
TL;DR: The task force concluded the term severe sepsis was redundant and that updated definitions and clinical criteria should replace previous definitions, offer greater consistency for epidemiologic studies and clinical trials, and facilitate earlier recognition and more timely management of patients with sepsis or at risk of developing septic shock.
Abstract: Importance Definitions of sepsis and septic shock were last revised in 2001. Considerable advances have since been made into the pathobiology (changes in organ function, morphology, cell biology, biochemistry, immunology, and circulation), management, and epidemiology of sepsis, suggesting the need for reexamination. Objective To evaluate and, as needed, update definitions for sepsis and septic shock. Process A task force (n = 19) with expertise in sepsis pathobiology, clinical trials, and epidemiology was convened by the Society of Critical Care Medicine and the European Society of Intensive Care Medicine. Definitions and clinical criteria were generated through meetings, Delphi processes, analysis of electronic health record databases, and voting, followed by circulation to international professional societies, requesting peer review and endorsement (by 31 societies listed in the Acknowledgment). Key Findings From Evidence Synthesis Limitations of previous definitions included an excessive focus on inflammation, the misleading model that sepsis follows a continuum through severe sepsis to shock, and inadequate specificity and sensitivity of the systemic inflammatory response syndrome (SIRS) criteria. Multiple definitions and terminologies are currently in use for sepsis, septic shock, and organ dysfunction, leading to discrepancies in reported incidence and observed mortality. The task force concluded the term severe sepsis was redundant. Recommendations Sepsis should be defined as life-threatening organ dysfunction caused by a dysregulated host response to infection. For clinical operationalization, organ dysfunction can be represented by an increase in the Sequential [Sepsis-related] Organ Failure Assessment (SOFA) score of 2 points or more, which is associated with an in-hospital mortality greater than 10%. 
Septic shock should be defined as a subset of sepsis in which particularly profound circulatory, cellular, and metabolic abnormalities are associated with a greater risk of mortality than with sepsis alone. Patients with septic shock can be clinically identified by a vasopressor requirement to maintain a mean arterial pressure of 65 mm Hg or greater and serum lactate level greater than 2 mmol/L (>18 mg/dL) in the absence of hypovolemia. This combination is associated with hospital mortality rates greater than 40%. In out-of-hospital, emergency department, or general hospital ward settings, adult patients with suspected infection can be rapidly identified as being more likely to have poor outcomes typical of sepsis if they have at least 2 of the following clinical criteria that together constitute a new bedside clinical score termed quickSOFA (qSOFA): respiratory rate of 22/min or greater, altered mentation, or systolic blood pressure of 100 mm Hg or less. Conclusions and Relevance These updated definitions and clinical criteria should replace previous definitions, offer greater consistency for epidemiologic studies and clinical trials, and facilitate earlier recognition and more timely management of patients with sepsis or at risk of developing sepsis.
14,699 citations
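The qSOFA rule quoted in the abstract is simple enough to sketch directly. A minimal illustration in Python (function and parameter names are mine, not the task force's; in practice "altered mentation" requires clinical assessment, e.g. a Glasgow Coma Scale below 15):

```python
# Hedged sketch of the qSOFA bedside score described above: one point each for
# respiratory rate >= 22/min, altered mentation, and systolic BP <= 100 mm Hg.

def qsofa_score(resp_rate: float, altered_mentation: bool, systolic_bp: float) -> int:
    """Return the qSOFA score (0-3) from the three bedside criteria."""
    score = 0
    if resp_rate >= 22:          # respiratory rate of 22/min or greater
        score += 1
    if altered_mentation:        # e.g. Glasgow Coma Scale < 15
        score += 1
    if systolic_bp <= 100:       # systolic blood pressure of 100 mm Hg or less
        score += 1
    return score

# A score of 2 or more flags patients more likely to have poor outcomes.
print(qsofa_score(24, False, 95))  # → 2
```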
••
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s⁻¹ Mpc⁻¹, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016 and a corresponding reionization redshift. These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |ΩK| < 0.005.
Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ2 potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing. We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
10,728 citations
••
University of Bristol1, Harvard University2, University Hospitals Bristol NHS Foundation Trust3, Research Triangle Park4, University of Toronto5, University of Oxford6, University of Ottawa7, Paris Descartes University8, University of London9, University of York10, University of Birmingham11, University of Southern Denmark12, University of Liverpool13, University of East Anglia14, Loyola University Chicago15, University of Aberdeen16, Kaiser Permanente17, Baruch College18, McMaster University19, Cochrane Collaboration20, McGill University21, Ottawa Hospital Research Institute22, University of Louisville23, University of Melbourne24
TL;DR: ROBINS-I (Risk Of Bias In Non-randomised Studies - of Interventions) is developed: a new tool for evaluating risk of bias in estimates of the comparative effectiveness of interventions from studies that did not use randomisation to allocate units or clusters of individuals to comparison groups.
Abstract: Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.
8,028 citations
••
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes.
For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
5,187 citations
••
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) as discussed by the authors was used to estimate the incidence, prevalence, and years lived with disability for diseases and injuries at the global, regional, and national scale over the period of 1990 to 2015.
5,050 citations
••
University of Calgary1, Maastricht University2, Erasmus University Rotterdam3, Royal Melbourne Hospital4, University of Amsterdam5, Bellvitge University Hospital6, Florey Institute of Neuroscience and Mental Health7, UCLA Medical Center8, University Hospital Bonn9, State University of New York System10, University of Toronto11, Beaumont Hospital12, Philadelphia College of Osteopathic Medicine13, Altair Engineering14, University of California, Los Angeles15, University of Pittsburgh16
TL;DR: Endovascular thrombectomy is of benefit to most patients with acute ischaemic stroke caused by occlusion of the proximal anterior circulation, irrespective of patient characteristics or geographical location, and will have global implications on structuring systems of care to provide timely treatment.
4,846 citations
••
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.
4,804 citations
••
University of Texas Southwestern Medical Center1, Harvard University2, Novo Nordisk3, University of Erlangen-Nuremberg4, Ruhr University Bochum5, Cleveland Clinic6, University of London7, Imperial College London8, George Washington University9, University of Toronto10, University of North Carolina at Chapel Hill11
TL;DR: In the time-to-event analysis, the rate of the first occurrence of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke among patients with type 2 diabetes mellitus was lower with liraglutide than with placebo.
Abstract: Background The cardiovascular effect of liraglutide, a glucagon-like peptide 1 analogue, when added to standard care in patients with type 2 diabetes, remains unknown. Methods In this double-blind trial, we randomly assigned patients with type 2 diabetes and high cardiovascular risk to receive liraglutide or placebo. The primary composite outcome in the time-to-event analysis was the first occurrence of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke. The primary hypothesis was that liraglutide would be noninferior to placebo with regard to the primary outcome, with a margin of 1.30 for the upper boundary of the 95% confidence interval of the hazard ratio. No adjustments for multiplicity were performed for the prespecified exploratory outcomes. Results A total of 9340 patients underwent randomization. The median follow-up was 3.8 years. The primary outcome occurred in significantly fewer patients in the liraglutide group (608 of 4668 patients [13.0%]) than in the placebo ...
4,409 citations
••
01 Jan 2016
TL;DR: This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
Abstract: Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
3,703 citations
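The loop this review describes — fit a probabilistic surrogate to past evaluations, maximize an acquisition function to pick the next point, evaluate, repeat — can be sketched in plain NumPy. Everything below (RBF kernel, length scale, candidate grid, expected improvement) is an assumed minimal setup for illustration, not the paper's implementation:

```python
# Minimal Bayesian-optimization sketch: a NumPy-only Gaussian process with an
# RBF kernel and the expected-improvement acquisition, maximizing a 1-D
# function over a fixed grid of candidates.
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and variance at candidate points Xs given data (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.clip(np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks), 1e-12, None)
    return mu, var

def expected_improvement(mu, var, best):
    # EI for maximization: E[max(f - best, 0)] under the Gaussian posterior.
    sd = np.sqrt(var)
    z = (mu - best) / sd
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + sd * pdf

def bayes_opt(f, lo, hi, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, 3)              # small random initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 200)
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, var, y.max()))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# Example: maximize a smooth 1-D function whose optimum is at x = 0.5.
x_best, y_best = bayes_opt(lambda x: -(x - 0.5) ** 2, 0.0, 1.0)
```

Real systems replace the grid with a continuous acquisition optimizer and tune kernel hyperparameters by marginal likelihood; this sketch only shows the surrogate-model/acquisition loop the review surveys.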
••
TL;DR: In patients with type 2 diabetes who were at high cardiovascular risk, the rate of cardiovascular death, nonfatal myocardial infarction, or nonfatal stroke was significantly lower among patients receiving semaglutide than among those receiving placebo, an outcome that confirmed the noninferiority of semaglutide.
Abstract: Background Regulatory guidance specifies the need to establish cardiovascular safety of new diabetes therapies in patients with type 2 diabetes in order to rule out excess cardiovascular risk. The cardiovascular effects of semaglutide, a glucagon-like peptide 1 analogue with an extended half-life of approximately 1 week, in type 2 diabetes are unknown. Methods We randomly assigned 3297 patients with type 2 diabetes who were on a standard-care regimen to receive once-weekly semaglutide (0.5 mg or 1.0 mg) or placebo for 104 weeks. The primary composite outcome was the first occurrence of cardiovascular death, nonfatal myocardial infarction, or nonfatal stroke. We hypothesized that semaglutide would be noninferior to placebo for the primary outcome. The noninferiority margin was 1.8 for the upper boundary of the 95% confidence interval of the hazard ratio. Results At baseline, 2735 of the patients (83.0%) had established cardiovascular disease, chronic kidney disease, or both. The primary outcome occurred in 10...
3,551 citations
••
TL;DR: This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
Abstract: We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a significance greater than 5 σ. The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz, and reached a peak gravitational strain of 3.4 (+0.7/−0.9) × 10⁻²². The inferred source-frame initial black hole masses are 14.2 (+8.3/−3.7) M⊙ and 7.5 (+2.3/−2.3) M⊙, and the final black hole mass is 20.8 (+6.1/−1.7) M⊙. We find that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of 440 (+180/−190) Mpc, corresponding to a redshift of 0.09 (+0.03/−0.04). All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
••
TL;DR: This Perspective explores and explains the fundamental dogma of nanoparticle delivery to tumours and answers two central questions: ‘how many nanoparticles accumulate in a tumour?’ and ‘how does this number affect the clinical translation of nanomedicines?’
••
University of Milan1, St. Michael's GAA, Sligo2, University of Toronto3, Paris Diderot University4, University of Paris5, University Health Network6, St. Michael's Hospital7, Australian National University8, Uppsala University9, Queen's University Belfast10, Sapienza University of Rome11, Sunnybrook Health Sciences Centre12, Harvard University13, Leipzig University14
TL;DR: Clinician recognition of ARDS was associated with higher PEEP, greater use of neuromuscular blockade, and prone positioning, which indicates the potential for improvement in the management of patients with ARDS.
Abstract: IMPORTANCE Limited information exists about the epidemiology, recognition, management, and outcomes of patients with the acute respiratory distress syndrome (ARDS). OBJECTIVES To evaluate intensive ...
••
Imperial College London1, University of Barcelona2, Keio University3, University of Duisburg-Essen4, Queen's University5, Peter MacCallum Cancer Centre6, University of Michigan7, University of São Paulo8, Yale University9, Northern General Hospital10, University of Caen Lower Normandy11, Fred Hutchinson Cancer Research Center12, University of Oxford13, Memorial Sloan Kettering Cancer Center14, University of Sydney15, Sungkyunkwan University16, Seoul National University17, Kyorin University18, University of Copenhagen19, Nippon Medical School20, Katholieke Universiteit Leuven21, University of Texas MD Anderson Cancer Center22, University of Antwerp23, Hyogo College of Medicine24, University of Western Australia25, Glenfield Hospital26, Cleveland Clinic27, Icahn School of Medicine at Mount Sinai28, University of Turin29, Université libre de Bruxelles30, Juntendo University31, National Cancer Research Institute32, Mayo Clinic33, University of Toronto34, Sinai Grace Hospital35, Netherlands Cancer Institute36, Hiroshima University37, City of Hope National Medical Center38, University of Chicago39, New York University40, Georgetown University41, University of Tokushima42, University of Pisa43, Osaka University44, University of Valencia45, Good Samaritan Hospital46, Military Medical Academy47, Fundación Favaloro48, Autonomous University of Barcelona49, Complutense University of Madrid50, University of Oviedo51, National and Kapodistrian University of Athens52, Rovira i Virgili University53, Autonomous University of Madrid54, Ghent University55
TL;DR: The methods used to evaluate the resultant Stage groupings and the proposals put forward for the 8th edition of the TNM Classification for lung cancer due to be published late 2016 are described.
•
01 Apr 2016
TL;DR: This work studies feature learning techniques for graph-structured inputs and achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.
Abstract: Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.
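One gated propagation step of the kind the abstract describes (GRU-style gating over aggregated neighbor messages) might look as follows. Shapes, shared weight matrices, and random initialization are illustrative assumptions; this is not the authors' code, which also includes edge-typed weights, output models, and training:

```python
# Sketch of a single gated graph propagation step: each node aggregates
# neighbor messages through the adjacency matrix, then a GRU-style gate
# decides how much of the new candidate state to keep.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, A, params):
    """h: (n_nodes, d) node states; A: (n_nodes, n_nodes) adjacency matrix."""
    W, Wz, Uz, Wr, Ur, Wh, Uh = params
    m = A @ h @ W                       # message passing along edges
    z = sigmoid(m @ Wz + h @ Uz)        # update gate
    r = sigmoid(m @ Wr + h @ Ur)        # reset gate
    h_tilde = np.tanh(m @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde    # gated state update

rng = np.random.default_rng(0)
d = 4
params = [rng.standard_normal((d, d)) * 0.1 for _ in range(7)]
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
h = rng.standard_normal((3, d))
for _ in range(3):                      # unroll a few propagation steps
    h = ggnn_step(h, A, params)
```

After a fixed number of steps, the per-node states would feed an output model (e.g. per-node classification or a sequence decoder, as in the paper's extension to output sequences).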
••
Wellcome Trust Sanger Institute1, University of Michigan2, University of Oxford3, University of Geneva4, University of Exeter5, Greifswald University Hospital6, National Research Council7, University of Bristol8, University of Colorado Boulder9, University of Washington10, Fred Hutchinson Cancer Research Center11, SUNY Downstate Medical Center12, Erasmus University Rotterdam13, University of Trieste14, VU University Amsterdam15, South London and Maudsley NHS Foundation Trust16, King's College London17, University of Edinburgh18, Harvard University19, National Institutes of Health20, Harokopio University21, Innsbruck Medical University22, Broad Institute23, University of Helsinki24, Lund University25, Norwegian University of Science and Technology26, University of Cambridge27, University of Minnesota28, Technische Universität München29, University of North Carolina at Chapel Hill30, University of Toronto31, McGill University32, Leiden University33, University of Pennsylvania34, University of Groningen35, Utrecht University36, Churchill Hospital37
TL;DR: A reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies.
Abstract: We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies, and it can help to discover and refine causal loci. We describe remote server resources that allow researchers to carry out imputation and phasing consistently and efficiently.
••
University of Melbourne1, Royal Children's Hospital2, Columbia University3, World Health Organization4, University of London5, American University of Beirut6, University of Oregon7, Public Health Foundation of India8, University College London9, Burnet Institute10, United Nations Population Fund11, Aga Khan University12, University of Toronto13, Obafemi Awolowo University14, Jawaharlal Nehru University15, UNICEF16, Kunming Medical University17
TL;DR: This Commission outlines the opportunities and challenges for investment in adolescent health and wellbeing at both country and global levels (panel 1).
••
TL;DR: A room-temperature synthesis produces gelled oxyhydroxide materials with an atomically homogeneous metal distribution that exhibit the lowest overpotential reported at 10 milliamperes per square centimeter in alkaline electrolyte and show no evidence of degradation after more than 500 hours of operation.
Abstract: Earth-abundant first-row (3d) transition metal-based catalysts have been developed for the oxygen-evolution reaction (OER); however, they operate at overpotentials substantially above thermodynamic requirements. Density functional theory suggested that non-3d high-valency metals such as tungsten can modulate 3d metal oxides, providing near-optimal adsorption energies for OER intermediates. We developed a room-temperature synthesis to produce gelled oxyhydroxide materials with an atomically homogeneous metal distribution. These gelled FeCoW oxyhydroxides exhibit the lowest overpotential (191 millivolts) reported at 10 milliamperes per square centimeter in alkaline electrolyte. The catalyst shows no evidence of degradation after more than 500 hours of operation. X-ray absorption and computational studies reveal a synergistic interplay between tungsten, iron, and cobalt in producing a favorable local coordination environment and electronic structure that enhance the energetics for OER.
••
TL;DR: A perovskite mixed material comprising a series of differently quantum-size-tuned grains funnels photoexcitations to the lowest-bandgap light emitter in the mixture and functions as a charge-carrier concentrator, ensuring that radiative recombination successfully outcompetes trapping and hence non-radiative recombination.
Abstract: Organometal halide perovskites exhibit large bulk crystal domain sizes, rare traps, excellent mobilities and carriers that are free at room temperature-properties that support their excellent performance in charge-separating devices. In devices that rely on the forward injection of electrons and holes, such as light-emitting diodes (LEDs), excellent mobilities contribute to the efficient capture of non-equilibrium charge carriers by rare non-radiative centres. Moreover, the lack of bound excitons weakens the competition of desired radiative (over undesired non-radiative) recombination. Here we report a perovskite mixed material comprising a series of differently quantum-size-tuned grains that funnels photoexcitations to the lowest-bandgap light-emitter in the mixture. The materials function as charge carrier concentrators, ensuring that radiative recombination successfully outcompetes trapping and hence non-radiative recombination. We use the new material to build devices that exhibit an external quantum efficiency (EQE) of 8.8% and a radiance of 80 W sr-1 m-2. These represent the brightest and most efficient solution-processed near-infrared LEDs to date.
••
Nicholas J Kassebaum1, Megha Arora1, Ryan M Barber1, Zulfiqar A Bhutta2, and 679 more (268 institutions)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.
••
King's College London1, Guy's and St Thomas' NHS Foundation Trust2, Ohio State University3, Rhode Island Hospital4, University of Pittsburgh5, Kaiser Permanente6, Hofstra University7, The Feinstein Institute for Medical Research8, Sunnybrook Health Sciences Centre9, University of Toronto10, University College London11
TL;DR: A consensus process using results from a systematic review, surveys, and cohort studies found that adult patients with septic shock can be identified using the clinical criteria of hypotension requiring vasopressor therapy to maintain mean BP 65 mm Hg or greater and having a serum lactate level greater than 2 mmol/L after adequate fluid resuscitation.
Abstract: Importance Septic shock currently refers to a state of acute circulatory failure associated with infection. Emerging biological insights and reported variation in epidemiology challenge the validity of this definition. Objective To develop a new definition and clinical criteria for identifying septic shock in adults. Design, Setting, and Participants The Society of Critical Care Medicine and the European Society of Intensive Care Medicine convened a task force (19 participants) to revise current sepsis/septic shock definitions. Three sets of studies were conducted: (1) a systematic review and meta-analysis of observational studies in adults published between January 1, 1992, and December 25, 2015, to determine clinical criteria currently reported to identify septic shock and inform the Delphi process; (2) a Delphi study among the task force comprising 3 surveys and discussions of results from the systematic review, surveys, and cohort studies to achieve consensus on a new septic shock definition and clinical criteria; and (3) cohort studies to test variables identified by the Delphi process using Surviving Sepsis Campaign (SSC) (2005-2010; n = 28 150), University of Pittsburgh Medical Center (UPMC) (2010-2012; n = 1 309 025), and Kaiser Permanente Northern California (KPNC) (2009-2013; n = 1 847 165) electronic health record (EHR) data sets. Main Outcomes and Measures Evidence for and agreement on septic shock definitions and criteria. Results The systematic review identified 44 studies reporting septic shock outcomes (total of 166 479 patients) from a total of 92 sepsis epidemiology studies reporting different cutoffs and combinations for blood pressure (BP), fluid resuscitation, vasopressors, serum lactate level, and base deficit to identify septic shock. 
The septic shock–associated crude mortality was 46.5% (95% CI, 42.7%-50.3%), with significant between-study statistical heterogeneity (I² = 99.5%; τ² = 182.5; P < .001). Conclusions and Relevance Based on a consensus process using results from a systematic review, surveys, and cohort studies, septic shock is defined as a subset of sepsis in which underlying circulatory, cellular, and metabolic abnormalities are associated with a greater risk of mortality than sepsis alone. Adult patients with septic shock can be identified using the clinical criteria of hypotension requiring vasopressor therapy to maintain a mean BP of 65 mm Hg or greater, together with a serum lactate level greater than 2 mmol/L after adequate fluid resuscitation.
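As an illustration only (not a clinical tool), the clinical criteria above can be expressed as a simple predicate; the function name and argument names are ours:

```python
def meets_septic_shock_criteria(on_vasopressors: bool,
                                mean_bp_mmhg: float,
                                lactate_mmol_per_l: float,
                                adequately_fluid_resuscitated: bool) -> bool:
    """Sepsis-3 clinical criteria for septic shock, applied to a patient
    who already meets the sepsis definition: vasopressors are required to
    maintain a mean BP of at least 65 mm Hg, and serum lactate exceeds
    2 mmol/L despite adequate fluid resuscitation."""
    return (on_vasopressors
            and mean_bp_mmhg >= 65.0
            and lactate_mmol_per_l > 2.0
            and adequately_fluid_resuscitated)
```

Note that both criteria must hold together: a vasopressor requirement or an elevated lactate alone does not meet the definition.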
••
TL;DR: A method to convert discrete representations of molecules to and from a multidimensional continuous representation that allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds is reported.
Abstract: We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in the set of molecules with fewer than nine heavy atoms.
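A minimal sketch of the latent-space operations the abstract describes (perturbing known structures, interpolating between molecules), assuming a hypothetical encoder/decoder pair; only the latent-vector arithmetic is shown here:

```python
import numpy as np

def interpolate(z_a: np.ndarray, z_b: np.ndarray, steps: int) -> np.ndarray:
    """Linear interpolation between two latent vectors; each row is a
    candidate latent point to pass to the (hypothetical) decoder."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * z_a + t * z_b

def perturb(z: np.ndarray, scale: float, rng: np.random.Generator) -> np.ndarray:
    """Gaussian perturbation of a known molecule's latent code, used to
    propose novel structures near a known one."""
    return z + scale * rng.normal(size=z.shape)
```

Each row of `interpolate(...)` would be handed to the decoder to propose a candidate molecule along the path between the two endpoint molecules.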
••
TL;DR: The application of regression models in the presence of competing risks, modeling the effect of covariates either on the cause-specific hazard of the outcome or on the cumulative incidence function, is illustrated by examining cause-specific mortality in patients hospitalized with heart failure.
Abstract: Competing risks occur frequently in the analysis of survival data. A competing risk is an event whose occurrence precludes the occurrence of the primary event of interest. In a study examining time to death attributable to cardiovascular causes, death attributable to noncardiovascular causes is a competing risk. When estimating the crude incidence of outcomes, analysts should use the cumulative incidence function, rather than the complement of the Kaplan-Meier survival function. The use of the Kaplan-Meier survival function results in estimates of incidence that are biased upward, regardless of whether the competing events are independent of one another. When fitting regression models in the presence of competing risks, researchers can choose from 2 different families of models: modeling the effect of covariates on the cause-specific hazard of the outcome or modeling the effect of covariates on the cumulative incidence function. The former allows one to estimate the effect of the covariates on the rate of occurrence of the outcome in those subjects who are currently event free. The latter allows one to estimate the effect of covariates on the absolute risk of the outcome over time. The former family of models may be better suited for addressing etiologic questions, whereas the latter model may be better suited for estimating a patient's clinical prognosis. We illustrate the application of these methods by examining cause-specific mortality in patients hospitalized with heart failure. Statistical software code in both R and SAS is provided.
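The paper provides R and SAS code; as a rough illustration of the estimation point (not the authors' code), the sketch below computes the cumulative incidence function and the 1 − Kaplan-Meier complement on the same data, with cause codes chosen by us (0 = censored):

```python
import numpy as np

def cumulative_incidence(times, causes, target_cause):
    """Aalen-Johansen cumulative incidence for one cause, accounting for
    competing events; returns a list of (time, CIF) pairs."""
    times, causes = np.asarray(times, float), np.asarray(causes, int)
    surv, cif, out = 1.0, 0.0, []   # surv = overall survival just before t
    for t in np.unique(times):
        at_risk = np.sum(times >= t)
        d_any = np.sum((times == t) & (causes != 0))   # events of any cause
        d_k = np.sum((times == t) & (causes == target_cause))
        cif += surv * d_k / at_risk
        surv *= 1.0 - d_any / at_risk
        out.append((t, cif))
    return out

def km_complement(times, causes, target_cause):
    """1 - Kaplan-Meier, treating competing events as censored; this is
    the estimator the abstract warns is biased upward."""
    times, causes = np.asarray(times, float), np.asarray(causes, int)
    surv, out = 1.0, []
    for t in np.unique(times):
        at_risk = np.sum(times >= t)
        d_k = np.sum((times == t) & (causes == target_cause))
        surv *= 1.0 - d_k / at_risk
        out.append((t, 1.0 - surv))
    return out
```

On any data set with competing events, the 1 − KM curve sits at or above the cumulative incidence function, which is the upward bias the abstract describes.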
••
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of 10^13 km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.
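As a consistency check on the quoted bound (our arithmetic, not the paper's), the Compton wavelength relates to the graviton mass by \(\lambda_g = h/(m_g c)\), so a lower bound of \(10^{13}\) km on \(\lambda_g\) corresponds to an upper bound on the mass:

```latex
m_g c^2 \;=\; \frac{hc}{\lambda_g}
\;\approx\; \frac{1.24\times 10^{-6}\ \mathrm{eV\,m}}{10^{16}\ \mathrm{m}}
\;\approx\; 1.2\times 10^{-22}\ \mathrm{eV}.
```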
••
TL;DR: In this article, the authors discuss the properties of perovskites that benefit light emission, review recent progress in perovskite electroluminescent diodes and optically pumped lasers, and examine the remaining challenges in achieving continuous-wave and electrically driven lasing.
Abstract: The prospects for light-emitting diodes and lasers based on perovskite materials are reviewed. The field of solution-processed semiconductors has made great strides; however, it has yet to enable electrically driven lasers. To achieve this goal, improved materials are required that combine efficient (>50% quantum yield) radiative recombination under high injection, large and balanced charge-carrier mobilities in excess of 10 cm2 V−1 s−1, free-carrier densities greater than 1017 cm−3 and gain coefficients exceeding 104 cm−1. Solid-state perovskites are — in addition to galvanizing the field of solar electricity — showing great promise in photonic sources, and may be the answer to realizing solution-cast laser diodes. Here, we discuss the properties of perovskites that benefit light emission, review recent progress in perovskite electroluminescent diodes and optically pumped lasers, and examine the remaining challenges in achieving continuous-wave and electrically driven lasing.
••
TL;DR: The ESP block holds promise as a simple and safe technique for thoracic analgesia in both chronic neuropathic pain and acute postsurgical or posttraumatic pain.
••
TL;DR: Nanostructured electrodes are reported to produce, at low applied overpotentials, high local electric fields that concentrate electrolyte cations and thereby raise the local concentration of CO2 close to the active CO2 reduction reaction surface; the resulting performance surpasses that of the best gold nanorod, nanoparticle and oxide-derived noble metal catalysts by an order of magnitude.
Abstract: Electrochemical reduction of carbon dioxide (CO2) to carbon monoxide (CO) is the first step in the synthesis of more complex carbon-based fuels and feedstocks using renewable electricity. Unfortunately, the reaction suffers from slow kinetics owing to the low local concentration of CO2 surrounding typical CO2 reduction reaction catalysts. Alkali metal cations are known to overcome this limitation through non-covalent interactions with adsorbed reagent species, but the effect is restricted by the solubility of relevant salts. Large applied electrode potentials can also enhance CO2 adsorption, but this comes at the cost of increased hydrogen (H2) evolution. Here we report that nanostructured electrodes produce, at low applied overpotentials, local high electric fields that concentrate electrolyte cations, which in turn leads to a high local concentration of CO2 close to the active CO2 reduction reaction surface. Simulations reveal tenfold higher electric fields associated with metallic nanometre-sized tips compared to quasi-planar electrode regions, and measurements using gold nanoneedles confirm a field-induced reagent concentration that enables the CO2 reduction reaction to proceed with a geometric current density for CO of 22 milliamperes per square centimetre at -0.35 volts (overpotential of 0.24 volts). This performance surpasses by an order of magnitude the performance of the best gold nanorods, nanoparticles and oxide-derived noble metal catalysts. Similarly designed palladium nanoneedle electrocatalysts produce formate with a Faradaic efficiency of more than 90 per cent and an unprecedented geometric current density for formate of 10 milliamperes per square centimetre at -0.2 volts, demonstrating the wider applicability of the field-induced reagent concentration concept.
••
02 Nov 2016 TL;DR: The Concrete distribution is a new family of distributions with closed-form densities and a simple reparameterization, enabling large-scale stochastic computation graphs to be optimized via gradient descent.
Abstract: The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce CONCRETE random variables—CONtinuous relaxations of disCRETE random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
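A minimal sketch of drawing one Concrete (Gumbel-softmax) sample via the reparameterization described above, using the standard Gumbel-noise construction; the function name is ours:

```python
import numpy as np

def sample_concrete(log_alpha, temperature, rng):
    """One sample from a K-dimensional Concrete distribution.
    log_alpha: unnormalized log-probabilities of the K discrete states.
    The sample lies in the interior of the probability simplex."""
    # Gumbel(0, 1) noise via inverse-CDF: g = -log(-log(U))
    gumbel = -np.log(-np.log(rng.uniform(size=len(log_alpha))))
    logits = (np.asarray(log_alpha, float) + gumbel) / temperature
    logits -= logits.max()          # numerical stability before exponentiating
    expl = np.exp(logits)
    return expl / expl.sum()
```

As the temperature approaches zero the sample concentrates on a single coordinate, recovering a one-hot discrete sample; at higher temperatures it spreads over the simplex, which is what makes the relaxation differentiable.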
••
TL;DR: This work demonstrates that the circular RNA circ-Foxo3 is highly expressed in non-cancer cells and associated with cell cycle progression, and that it forms a ternary complex that arrests the function of CDK2 and blocks cell cycle progression.
Abstract: Most RNAs generated by the human genome have no protein-coding ability and are termed non-coding RNAs. Among these are circular RNAs, which include exonic circular RNAs (circRNA), mainly found in the cytoplasm, and intronic RNAs (ciRNA), predominantly detected in the nucleus. The biological functions of circular RNAs remain largely unknown, although ciRNAs have been reported to promote gene transcription, while circRNAs may function as microRNA sponges. We demonstrate that the circular RNA circ-Foxo3 was highly expressed in non-cancer cells and was associated with cell cycle progression. Silencing endogenous circ-Foxo3 promoted cell proliferation. Ectopic expression of circ-Foxo3 repressed cell cycle progression by binding to the cell cycle proteins cyclin-dependent kinase 2 (also known as cell division protein kinase 2 or CDK2) and cyclin-dependent kinase inhibitor 1 (or p21), resulting in the formation of a ternary complex. Normally, CDK2 interacts with cyclin A and cyclin E to facilitate cell cycle entry, while p21 works to inhibit these interactions and arrest cell cycle progression. The formation of this circ-Foxo3-p21-CDK2 ternary complex arrested the function of CDK2 and blocked cell cycle progression.
••
University of Birmingham1, Bernhard Nocht Institute for Tropical Medicine2, University of Toronto3, Ontario Institute for Cancer Research4, Public Health England5, European Centre for Disease Prevention and Control6, University of Edinburgh7, Robert Koch Institute8, Swiss Tropical and Public Health Institute9, University College London10, Paul Ehrlich Institute11, University of Liverpool12, Rega Institute for Medical Research13, Kenya Medical Research Institute14, Friedrich Loeffler Institute15, Janssen-Cilag16, Technische Universität München17, Public Health Agency of Canada18, Pasteur Institute19, Sandia National Laboratories20, MRIGlobal21, World Health Organization22, University of London23, Norwegian Institute of Public Health24, Defence Science and Technology Laboratory25, Bundeswehr Institute of Microbiology26, National Institutes of Health27
TL;DR: This paper presents sequence data and analysis of 142 EBOV samples collected during the period March to October 2015 and shows that real-time genomic surveillance is possible in resource-limited settings and can be established rapidly to monitor outbreaks.
Abstract: A nanopore DNA sequencer (the MinION) is used for real-time genomic surveillance of the Ebola virus epidemic in the field in Guinea; the authors demonstrate that it is possible to pack a genomic surveillance laboratory in a suitcase and transport it to the field for on-site virus sequencing, generating results within 24 hours of sample collection. The Ebola virus disease epidemic in West Africa is the largest on record, responsible for over 28,599 cases and more than 11,299 deaths1. Genome sequencing in viral outbreaks is desirable to characterize the infectious agent and determine its evolutionary rate. Genome sequencing also allows the identification of signatures of host adaptation, identification and monitoring of diagnostic targets, and characterization of responses to vaccines and treatments. The Ebola virus (EBOV) genome substitution rate in the Makona strain has been estimated at between 0.87 × 10−3 and 1.42 × 10−3 mutations per site per year. This is equivalent to 16–27 mutations in each genome, meaning that sequences diverge rapidly enough to identify distinct sub-lineages during a prolonged epidemic2,3,4,5,6,7. Genome sequencing provides a high-resolution view of pathogen evolution and is increasingly sought after for outbreak surveillance. Sequence data may be used to guide control measures, but only if the results are generated quickly enough to inform interventions8. Genomic surveillance during the epidemic has been sporadic owing to a lack of local sequencing capacity coupled with practical difficulties transporting samples to remote sequencing facilities9. 
To address this problem, here we devise a genomic surveillance system that utilizes a novel nanopore DNA sequencing instrument. In April 2015 this system was transported in standard airline luggage to Guinea and used for real-time genomic surveillance of the ongoing epidemic. We present sequence data and analysis of 142 EBOV samples collected during the period March to October 2015. We were able to generate results less than 24 h after receiving an Ebola-positive sample, with the sequencing process taking as little as 15–60 min. We show that real-time genomic surveillance is possible in resource-limited settings and can be established rapidly to monitor outbreaks.
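The quoted figure of 16-27 mutations per genome per year follows from multiplying the substitution rate by the genome length; the sketch below checks this arithmetic, assuming a Makona EBOV genome length of about 18,959 nt (our figure, not stated in the abstract):

```python
# Consistency check: substitution rate x genome length
# = expected mutations per genome per year.
GENOME_NT = 18_959                     # assumed EBOV (Makona) genome length
RATE_LOW = 0.87e-3                     # substitutions per site per year
RATE_HIGH = 1.42e-3

muts_low = RATE_LOW * GENOME_NT        # lower end of the range
muts_high = RATE_HIGH * GENOME_NT      # upper end of the range
print(round(muts_low), round(muts_high))  # consistent with the quoted 16-27
```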