
Journal ArticleDOI
12 Oct 2016-BMJ
TL;DR: ROBINS-I (Risk Of Bias In Non-randomised Studies - of Interventions) is developed: a new tool for evaluating risk of bias in estimates of the comparative effectiveness of interventions from studies that did not use randomisation to allocate units or clusters of individuals to comparison groups.
Abstract: Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.

8,028 citations


Posted Content
TL;DR: It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
Abstract: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
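To make the contrastive objective concrete, here is a minimal NumPy sketch of the NT-Xent loss that SimCLR optimizes, assuming two already-computed embedding batches for the two augmented views; the variable names and sizes are illustrative, not the authors' code.

```python
import numpy as np

def nt_xent(z_i, z_j, tau=0.5):
    """z_i, z_j: (N, D) embeddings of two augmented views of the same N images."""
    z = np.concatenate([z_i, z_j], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # work in cosine-similarity space
    sim = z @ z.T / tau                                  # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)                       # a view never matches itself
    n = z_i.shape[0]
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each partner view
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()   # cross entropy toward the partner

rng = np.random.default_rng(0)
print(nt_xent(rng.normal(size=(8, 32)), rng.normal(size=(8, 32))))
```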

7,951 citations


Journal ArticleDOI
TL;DR: An updated protocol for Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites and analyze the effect of amino acid variants for a user's protein sequence.
Abstract: Phyre2 is a suite of tools available on the web to predict and analyze protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server for which we previously published a paper in Nature Protocols. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites and analyze the effect of amino acid variants (e.g., nonsynonymous SNPs (nsSNPs)) for a user's protein sequence. Users are guided through results by a simple interface at a level of detail they determine. This protocol will guide users from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional tools is described to find a protein structure in a genome, to submit large numbers of sequences at once and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2 . A typical structure prediction will be returned between 30 min and 2 h after submission.

7,941 citations


Posted Content
TL;DR: GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
Abstract: Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
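The sample-and-aggregate step is the heart of the method. Below is a minimal NumPy sketch of one GraphSAGE-style layer with a mean aggregator, under assumed toy features, weights, and neighbor lists; it illustrates the inductive idea (embeddings computed from features, so unseen nodes can be embedded) rather than reproducing the authors' implementation.

```python
import numpy as np

def sage_layer(x, neighbors, W_self, W_neigh, num_samples=2, rng=None):
    """x: (N, D) node features; neighbors[v]: list of neighbor ids of node v."""
    rng = rng or np.random.default_rng(0)
    out = []
    for v, nbrs in enumerate(neighbors):
        k = min(num_samples, len(nbrs))
        sampled = rng.choice(nbrs, size=k, replace=False)  # sample the local neighborhood
        agg = x[sampled].mean(axis=0)                      # mean aggregator
        h = x[v] @ W_self + agg @ W_neigh                  # combine self and neighborhood
        out.append(np.maximum(h, 0.0))                     # ReLU
    h = np.stack(out)
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-12)  # L2 normalize

x = np.eye(4)                                  # 4 nodes with one-hot features
neighbors = [[1, 2], [0, 2, 3], [0, 1], [1]]   # toy undirected graph
rng = np.random.default_rng(1)
emb = sage_layer(x, neighbors, rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(emb.shape)                               # (4, 8): embeddings computed from features alone
```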

7,926 citations


Proceedings ArticleDOI
15 Feb 2018
TL;DR: Graph Attention Networks (GATs) as mentioned in this paper leverage masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
Abstract: We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).
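For concreteness, a minimal NumPy sketch of a single masked self-attention head over a graph follows; the weight shapes and toy adjacency matrix are assumptions, and the paper's multi-head aggregation and dropout details are omitted.

```python
import numpy as np

def gat_head(x, adj, W, a, slope=0.2):
    """x: (N, D) features; adj: (N, N) adjacency with self-loops; W: (D, F); a: (2F,)."""
    h = x @ W                                              # shared linear transform
    f = h.shape[1]
    e = (h @ a[:f])[:, None] + (h @ a[f:])[None, :]        # e[i, j] = a . [h_i || h_j]
    e = np.where(e > 0, e, slope * e)                      # LeakyReLU
    e = np.where(adj > 0, e, -np.inf)                      # mask: attend to neighbors only
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)             # softmax over each neighborhood
    return att @ h                                         # weighted sum of neighbor features

rng = np.random.default_rng(0)
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]])
out = gat_head(rng.normal(size=(4, 3)), adj, rng.normal(size=(3, 2)), rng.normal(size=(4,)))
print(out.shape)                                           # (4, 2)
```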

7,904 citations


Posted Content
Tsung-Yi Lin1, Priya Goyal1, Ross Girshick1, Kaiming He1, Piotr Dollár1 
TL;DR: This paper proposes to address the extreme foreground-background class imbalance encountered during training of dense detectors by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples, and develops a novel Focal Loss, which focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training.
Abstract: The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL.
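The loss reshaping is compact enough to show directly. The following NumPy sketch implements the binary focal loss as described, with the modulating factor (1 - p_t)^gamma down-weighting well-classified examples; the toy probabilities and the gamma/alpha values are illustrative.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss. p: predicted probability of class 1; y: labels in {0, 1}."""
    p_t = np.where(y == 1, p, 1.0 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balance weight
    ce = -np.log(np.clip(p_t, 1e-12, 1.0))          # standard cross entropy
    return alpha_t * (1.0 - p_t) ** gamma * ce      # modulating factor suppresses easy examples

p = np.array([0.99, 0.10])                          # one easy positive, one hard positive
y = np.array([1, 1])
print(focal_loss(p, y))                             # the well-classified example contributes almost nothing
```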

7,715 citations


Journal ArticleDOI
TL;DR: Patients with type 2 diabetes at high risk for cardiovascular events who received empagliflozin, as compared with placebo, had a lower rate of the primary composite cardiovascular outcome and of death from any cause when the study drug was added to standard care.
Abstract: BACKGROUND The effects of empagliflozin, an inhibitor of sodium–glucose cotransporter 2, in addition to standard care, on cardiovascular morbidity and mortality in patients with type 2 diabetes at high cardiovascular risk are not known. METHODS We randomly assigned patients to receive 10 mg or 25 mg of empagliflozin or placebo once daily. The primary composite outcome was death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke, as analyzed in the pooled empagliflozin group versus the placebo group. The key secondary composite outcome was the primary outcome plus hospitalization for unstable angina. RESULTS A total of 7020 patients were treated (median observation time, 3.1 years). The primary outcome occurred in 490 of 4687 patients (10.5%) in the pooled empagliflozin group and in 282 of 2333 patients (12.1%) in the placebo group (hazard ratio in the empagliflozin group, 0.86; 95.02% confidence interval, 0.74 to 0.99; P = 0.04 for superiority). There were no significant between-group differences in the rates of myocardial infarction or stroke, but in the empagliflozin group there were significantly lower rates of death from cardiovascular causes (3.7%, vs. 5.9% in the placebo group; 38% relative risk reduction), hospitalization for heart failure (2.7% and 4.1%, respectively; 35% relative risk reduction), and death from any cause (5.7% and 8.3%, respectively; 32% relative risk reduction). There was no significant between-group difference in the key secondary outcome (P = 0.08 for superiority). Among patients receiving empagliflozin, there was an increased rate of genital infection but no increase in other adverse events. CONCLUSIONS Patients with type 2 diabetes at high risk for cardiovascular events who received empagliflozin, as compared with placebo, had a lower rate of the primary composite cardiovascular outcome and of death from any cause when the study drug was added to standard care. (Funded by Boehringer Ingelheim and Eli Lilly; EMPA-REG OUTCOME ClinicalTrials.gov number, NCT01131676.)
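As a rough check on how the reported relative risk reductions follow from the event rates, the cardiovascular-death figures work out as below; this is a crude rate ratio, whereas the paper's estimates come from hazard ratios, so it is only approximate.

```latex
% Cardiovascular death: 3.7% with empagliflozin vs 5.9% with placebo
\mathrm{RRR} \approx 1 - \frac{3.7\%}{5.9\%} \approx 1 - 0.63 = 0.37 \approx 38\%
```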

7,705 citations


Journal ArticleDOI
16 Sep 2020-Nature
TL;DR: In this paper, the authors review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data, and their evolution into a flexible interoperability layer between increasingly specialized computational libraries is discussed.
Abstract: Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves1 and in the first imaging of a black hole2. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis. NumPy is the primary array programming library for Python; here its fundamental concepts are reviewed and its evolution into a flexible interoperability layer between increasingly specialized computational libraries is discussed.
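A small example of the array-programming style the review describes, with vectorized expressions and broadcasting replacing explicit Python loops; the data is synthetic.

```python
import numpy as np

signal = np.sin(np.linspace(0, 4 * np.pi, 1000))              # vectorized elementwise evaluation
noise = np.random.default_rng(0).normal(0.0, 0.1, 1000)
noisy = signal + noise                                        # broadcasting, no explicit loop
smoothed = np.convolve(noisy, np.ones(25) / 25, mode="same")  # moving-average filter
print(np.abs(smoothed - signal).mean())                       # mean absolute error of the fit
```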

7,624 citations


Journal ArticleDOI
TL;DR: An overview of the types of case study designs is provided along with general recommendations for writing the research questions, developing propositions, determining the “case” under study, binding the case and a discussion of data sources and triangulation.
Abstract: Qualitative case study methodology provides tools for researchers to study complex phenomena within their contexts. When the approach is applied correctly, it becomes a valuable method for health science research to develop theory, evaluate programs, and develop interventions. The purpose of this paper is to guide the novice researcher in identifying the key elements for designing and implementing qualitative case study research projects. An overview of the types of case study designs is provided along with general recommendations for writing the research questions, developing propositions, determining the “case” under study, binding the case and a discussion of data sources and triangulation. To facilitate application of these principles, clear examples of research questions, study propositions and the different types of case study designs are provided.

7,611 citations


Journal ArticleDOI
TL;DR: The FAIR Data Principles as mentioned in this paper are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.

7,602 citations


Proceedings ArticleDOI
01 Jun 2016
TL;DR: This work introduces Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling, and exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.
Abstract: Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes comprises a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high-quality pixel-level annotations; 20,000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.

Journal ArticleDOI
TL;DR: This paper proposes a new approach based on the skip-gram model, where each word is represented as a bag of character n-grams, words being represented as the sum of these representations, allowing models to be trained on large corpora quickly and word representations to be computed for words that did not appear in the training data.
Abstract: Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skip-gram model, where each word is represented as a bag of character n-grams. A vector representation is associated to each character n-gram, words being represented as the sum of these representations. Our method is fast, allowing models to be trained on large corpora quickly, and it allows word representations to be computed for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.
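A minimal sketch of the subword idea follows: a word's vector is the sum of shared character n-gram vectors, so a representation can be composed even for out-of-vocabulary words. The small hashed bucket table here is a simplified assumption (the released implementation uses an FNV hash and much larger tables).

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    w = f"<{word}>"                                   # boundary symbols, as in the paper
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word, table):
    """Sum the shared n-gram vectors; works for words absent from training."""
    vec = np.zeros(table.shape[1])
    for g in char_ngrams(word):
        vec += table[hash(g) % table.shape[0]]        # hashed n-gram lookup (simplified)
    return vec

table = np.random.default_rng(0).normal(size=(1000, 8))   # stand-in embedding table
print(char_ngrams("where")[:5])                           # ['<wh', 'whe', 'her', 'ere', 're>']
print(word_vector("unseenword", table).shape)             # (8,): composable for OOV words
```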

Journal ArticleDOI
TL;DR: The strongest features of the app, identified and reported in user feedback, were its ability to help in screening and collaboration as well as the time savings it affords to users.
Abstract: Synthesis of multiple randomized controlled trials (RCTs) in a systematic review can summarize the effects of individual outcomes and provide numerical answers about the effectiveness of interventions. Filtering of searches is time consuming, and no single method fulfills the principal requirements of speed with accuracy. Automation of systematic reviews is driven by a necessity to expedite the availability of current best evidence for policy and clinical decision-making. We developed Rayyan ( http://rayyan.qcri.org ), a free web and mobile app, that helps expedite the initial screening of abstracts and titles using a process of semi-automation while incorporating a high level of usability. For the beta testing phase, we used two published Cochrane reviews in which included studies had been selected manually. Their searches, with 1030 records and 273 records, were uploaded to Rayyan. Different features of Rayyan were tested using these two reviews. We also conducted a survey of Rayyan’s users and collected feedback through a built-in feature. Pilot testing of Rayyan focused on usability, accuracy against manual methods, and the added value of the prediction feature. The “taster” review (273 records) allowed a quick overview of Rayyan for early comments on usability. The second review (1030 records) required several iterations to identify the previously identified 11 trials. The “suggestions” and “hints,” based on the “prediction model,” appeared as testing progressed beyond five included studies. Post rollout user experiences and a reflexive response by the developers enabled real-time modifications and improvements. The survey respondents reported 40% average time savings when using Rayyan compared to others tools, with 34% of the respondents reporting more than 50% time savings. In addition, around 75% of the respondents mentioned that screening and labeling studies as well as collaborating on reviews to be the two most important features of Rayyan. As of November 2016, Rayyan users exceed 2000 from over 60 countries conducting hundreds of reviews totaling more than 1.6M citations. Feedback from users, obtained mostly through the app web site and a recent survey, has highlighted the ease in exploration of searches, the time saved, and simplicity in sharing and comparing include-exclude decisions. The strongest features of the app, identified and reported in user feedback, were its ability to help in screening and collaboration as well as the time savings it affords to users. Rayyan is responsive and intuitive in use with significant potential to lighten the load of reviewers.

Journal ArticleDOI
TL;DR: A list of the abbreviations used in the guideline, including ACC/AHA (American College of Cardiology/American Heart Association), ACE (angiotensin-converting enzyme), ACEI (angiotensin-converting enzyme inhibitor), ACS (acute coronary syndrome), and AF (atrial fibrillation).
Abstract: ACC/AHA : American College of Cardiology/American Heart Association ACCF/AHA : American College of Cardiology Foundation/American Heart Association ACE : angiotensin-converting enzyme ACEI : angiotensin-converting enzyme inhibitor ACS : acute coronary syndrome AF : atrial fibrillation

Journal ArticleDOI
TL;DR: Nivolumab was associated with even greater efficacy than docetaxel across all end points in subgroups defined according to prespecified levels of tumor-membrane expression (≥1%, ≥5%, and ≥10%) of the PD-1 ligand.
Abstract: BackgroundNivolumab, a fully human IgG4 programmed death 1 (PD-1) immune-checkpoint–inhibitor antibody, disrupts PD-1–mediated signaling and may restore antitumor immunity. MethodsIn this randomized, open-label, international phase 3 study, we assigned patients with nonsquamous non–small-cell lung cancer (NSCLC) that had progressed during or after platinum-based doublet chemotherapy to receive nivolumab at a dose of 3 mg per kilogram of body weight every 2 weeks or docetaxel at a dose of 75 mg per square meter of body-surface area every 3 weeks. The primary end point was overall survival. ResultsOverall survival was longer with nivolumab than with docetaxel. The median overall survival was 12.2 months (95% confidence interval [CI], 9.7 to 15.0) among 292 patients in the nivolumab group and 9.4 months (95% CI, 8.1 to 10.7) among 290 patients in the docetaxel group (hazard ratio for death, 0.73; 96% CI, 0.59 to 0.89; P=0.002). At 1 year, the overall survival rate was 51% (95% CI, 45 to 56) with nivolumab ve...

Journal ArticleDOI
TL;DR: All-cause age-standardised YLD rates decreased by 3·9% from 1990 to 2017; however, the all-age YLD rate increased by 7·2% while the total sum of global YLDs increased from 562 million (421–723) to 853 million (642–1100).

Proceedings ArticleDOI
15 Feb 2018
TL;DR: This paper introduced a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics), and how these uses vary across linguistic contexts (i.e., to model polysemy).
Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
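Downstream use of these representations reduces to a learned, softmax-weighted combination of the biLM's layer activations, scaled by a task-specific scalar. A minimal NumPy sketch of that collapse follows, with random activations and weights standing in for a trained biLM.

```python
import numpy as np

def elmo_combine(layers, s, gamma):
    """layers: (L, T, D) biLM activations; s: (L,) task weights; gamma: task scalar."""
    w = np.exp(s - s.max())
    w = w / w.sum()                                  # softmax-normalized layer weights
    return gamma * np.tensordot(w, layers, axes=1)   # (T, D) per-token representation

rng = np.random.default_rng(0)
layers = rng.normal(size=(3, 5, 16))                 # 3 biLM layers, 5 tokens, width 16
print(elmo_combine(layers, rng.normal(size=3), 1.0).shape)   # (5, 16)
```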

Journal ArticleDOI
TL;DR: Aerosol and Surface Stability of SARS-CoV-2 In this research letter, investigators report on the stability of SARS-CoV-2 and SARS-CoV-1 and the viability of the two viruses under experimental conditions.
Abstract: Aerosol and Surface Stability of SARS-CoV-2 In this research letter, investigators report on the stability of SARS-CoV-2 and SARS-CoV-1 under experimental conditions. The viability of the two virus...

Book ChapterDOI
08 Oct 2016
TL;DR: In this paper, the authors show that the forward and backward signals can be directly propagated from one block to any other block when identity mappings are used as the skip connections and after-addition activation.
Abstract: Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 % error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https://github.com/KaimingHe/resnet-1k-layers.
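A minimal NumPy sketch of the proposed pre-activation residual unit follows, with dense layers standing in for the paper's convolutions: normalization and ReLU precede each weight layer, and the skip path is a pure identity, which is what lets signals propagate unchanged between blocks.

```python
import numpy as np

def pre_act_unit(x, W1, W2, eps=1e-5):
    """One pre-activation residual unit; dense layers stand in for 3x3 convolutions."""
    h = (x - x.mean(axis=0)) / (x.std(axis=0) + eps)   # normalize before the weights
    h = np.maximum(h, 0.0) @ W1                        # ReLU, then first weight layer
    h = (h - h.mean(axis=0)) / (h.std(axis=0) + eps)
    h = np.maximum(h, 0.0) @ W2                        # ReLU, then second weight layer
    return x + h                                       # identity skip: nothing on the shortcut

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
y = pre_act_unit(x, 0.1 * rng.normal(size=(16, 16)), 0.1 * rng.normal(size=(16, 16)))
print(y.shape)         # (4, 16); stacking such units preserves a clean identity path
```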

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Fausto Acernese3 +1131 more · Institutions (123)
TL;DR: The association of GRB 170817A, detected by Fermi-GBM 1.7 s after the coalescence, corroborates the hypothesis of a neutron star merger and provides the first direct evidence of a link between these mergers and short γ-ray bursts.
Abstract: On August 17, 2017 at 12:41:04 UTC the Advanced LIGO and Advanced Virgo gravitational-wave detectors made their first observation of a binary neutron star inspiral. The signal, GW170817, was detected with a combined signal-to-noise ratio of 32.4 and a false-alarm-rate estimate of less than one per 8.0×10⁴ years. We infer the component masses of the binary to be between 0.86 and 2.26 M⊙, in agreement with masses of known neutron stars. Restricting the component spins to the range inferred in binary neutron stars, we find the component masses to be in the range 1.17-1.60 M⊙, with the total mass of the system 2.74 (+0.04/−0.01) M⊙. The source was localized within a sky region of 28 deg² (90% probability) and had a luminosity distance of 40 (+8/−14) Mpc, the closest and most precisely localized gravitational-wave signal yet. The association with the γ-ray burst GRB 170817A, detected by Fermi-GBM 1.7 s after the coalescence, corroborates the hypothesis of a neutron star merger and provides the first direct evidence of a link between these mergers and short γ-ray bursts. Subsequent identification of transient counterparts across the electromagnetic spectrum in the same location further supports the interpretation of this event as a neutron star merger. This unprecedented joint gravitational and electromagnetic observation provides insight into astrophysics, dense matter, gravitation, and cosmology.

Journal ArticleDOI
13 Mar 2020-Science
TL;DR: The authors show that this protein binds at least 10 times more tightly than the corresponding spike protein of severe acute respiratory syndrome (SARS)–CoV to their common host cell receptor, and that several published SARS-CoV RBD-specific monoclonal antibodies they tested do not have appreciable binding to 2019-nCoV S, suggesting that antibody cross-reactivity may be limited between the two RBDs.
Abstract: The outbreak of a novel coronavirus (2019-nCoV) represents a pandemic threat that has been declared a public health emergency of international concern. The CoV spike (S) glycoprotein is a key target for vaccines, therapeutic antibodies, and diagnostics. To facilitate medical countermeasure development, we determined a 3.5-angstrom-resolution cryo-electron microscopy structure of the 2019-nCoV S trimer in the prefusion conformation. The predominant state of the trimer has one of the three receptor-binding domains (RBDs) rotated up in a receptor-accessible conformation. We also provide biophysical and structural evidence that the 2019-nCoV S protein binds angiotensin-converting enzyme 2 (ACE2) with higher affinity than does severe acute respiratory syndrome (SARS)-CoV S. Additionally, we tested several published SARS-CoV RBD-specific monoclonal antibodies and found that they do not have appreciable binding to 2019-nCoV S, suggesting that antibody cross-reactivity may be limited between the two RBDs. The structure of 2019-nCoV S should enable the rapid development and evaluation of medical countermeasures to address the ongoing public health crisis.

Proceedings Article
04 Dec 2017
TL;DR: In this article, a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), is presented, which assigns each feature an importance value for a particular prediction.
Abstract: Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
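The Shapley values that SHAP builds on can be computed exactly for a handful of features by averaging each feature's marginal contribution over all orderings; SHAP's contribution is unifying and efficiently approximating this. A minimal sketch, with a toy two-feature model and baseline as assumptions:

```python
from itertools import permutations
from math import factorial
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x, relative to a baseline input."""
    d = len(x)
    phi = np.zeros(d)
    for order in permutations(range(d)):     # all d! orderings of the features
        z = baseline.copy()
        prev = f(z)
        for i in order:                      # reveal features one at a time
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev             # marginal contribution of feature i
            prev = cur
    return phi / factorial(d)

f = lambda z: 2.0 * z[0] + z[0] * z[1]       # toy model with an interaction term
x, base = np.array([1.0, 1.0]), np.array([0.0, 0.0])
phi = shapley_values(f, x, base)
print(phi, phi.sum(), f(x) - f(base))        # attributions sum to the prediction gap
```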

Journal ArticleDOI
TL;DR: A revised and updated classification for the families of the flowering plants is provided, which includes the orders Austrobaileyales, Canellales, Gunnerales, Crossosomatales and Celastrales.

Journal ArticleDOI
TL;DR: The tidyverse is an opinionated collection of R packages for data science that share an underlying design philosophy, grammar, and data structures.
Abstract: Hadley Wickham1, Mara Averick1, Jennifer Bryan1, Winston Chang1, Lucy D’Agostino McGowan8, Romain François1, Garrett Grolemund1, Alex Hayes12, Lionel Henry1, Jim Hester1, Max Kuhn1, Thomas Lin Pedersen1, Evan Miller13, Stephan Milton Bache3, Kirill Müller2, Jeroen Ooms14, David Robinson5, Dana Paige Seidel10, Vitalie Spinu4, Kohske Takahashi9, Davis Vaughan1, Claus Wilke6, Kara Woo7, and Hiroaki Yutani11

Journal ArticleDOI
26 May 2020-JAMA
TL;DR: This case series provides characteristics and early outcomes of sequentially hospitalized patients with confirmed COVID-19 in the New York City area and assesses outcomes during hospitalization, such as invasive mechanical ventilation, kidney replacement therapy, and death.
Abstract: Importance There is limited information describing the presenting characteristics and outcomes of US patients requiring hospitalization for coronavirus disease 2019 (COVID-19). Objective To describe the clinical characteristics and outcomes of patients with COVID-19 hospitalized in a US health care system. Design, Setting, and Participants Case series of patients with COVID-19 admitted to 12 hospitals in New York City, Long Island, and Westchester County, New York, within the Northwell Health system. The study included all sequentially hospitalized patients between March 1, 2020, and April 4, 2020, inclusive of these dates. Exposures Confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection by positive result on polymerase chain reaction testing of a nasopharyngeal sample among patients requiring admission. Main Outcomes and Measures Clinical outcomes during hospitalization, such as invasive mechanical ventilation, kidney replacement therapy, and death. Demographics, baseline comorbidities, presenting vital signs, and test results were also collected. Results A total of 5700 patients were included (median age, 63 years [interquartile range {IQR}, 52-75; range, 0-107 years]; 39.7% female). The most common comorbidities were hypertension (3026; 56.6%), obesity (1737; 41.7%), and diabetes (1808; 33.8%). At triage, 30.7% of patients were febrile, 17.3% had a respiratory rate greater than 24 breaths/min, and 27.8% received supplemental oxygen. The rate of respiratory virus co-infection was 2.1%. Outcomes were assessed for 2634 patients who were discharged or had died at the study end point. During hospitalization, 373 patients (14.2%) (median age, 68 years [IQR, 56-78]; 33.5% female) were treated in the intensive care unit, 320 (12.2%) received invasive mechanical ventilation, 81 (3.2%) were treated with kidney replacement therapy, and 553 (21%) died. As of April 4, 2020, for patients requiring mechanical ventilation (n = 1151, 20.2%), 38 (3.3%) were discharged alive, 282 (24.5%) died, and 831 (72.2%) remained in hospital. The median postdischarge follow-up time was 4.4 days (IQR, 2.2-9.3). A total of 45 patients (2.2%) were readmitted during the study period. The median time to readmission was 3 days (IQR, 1.0-4.5) for readmitted patients. Among the 3066 patients who remained hospitalized at the final study follow-up date (median age, 65 years [IQR, 54-75]), the median follow-up at time of censoring was 4.5 days (IQR, 2.4-8.1). Conclusions and Relevance This case series provides characteristics and early outcomes of sequentially hospitalized patients with confirmed COVID-19 in the New York City area.

Proceedings Article
15 Feb 2016
TL;DR: Deep Compression as mentioned in this paper proposes a three-stage pipeline: pruning, quantization, and Huffman coding to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline (pruning, trained quantization and Huffman coding) whose stages work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has a 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.
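A minimal NumPy sketch of the first two stages follows: magnitude pruning, then 1-D k-means weight sharing (Huffman coding of the cluster indices is omitted). The keep ratio and cluster count are illustrative, and the paper retrains between stages, which this sketch skips.

```python
import numpy as np

def prune(w, keep_ratio=0.10):
    """Zero all but the largest-magnitude weights (the 'important connections')."""
    thresh = np.quantile(np.abs(w), 1.0 - keep_ratio)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize(w, n_clusters=16, iters=10):
    """Cluster surviving weights so each stores only a 4-bit centroid index."""
    nz = w[w != 0]
    centroids = np.linspace(nz.min(), nz.max(), n_clusters)  # linear initialization
    for _ in range(iters):                                    # plain 1-D k-means
        idx = np.argmin(np.abs(nz[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(idx == k):
                centroids[k] = nz[idx == k].mean()
    out = w.copy()
    out[w != 0] = centroids[idx]                              # weight sharing
    return out

w = np.random.default_rng(0).normal(size=(64, 64))
wq = quantize(prune(w))
print((wq != 0).mean(), np.unique(wq).size)    # ~10% density, at most 17 distinct values (incl. 0)
```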

Journal ArticleDOI
16 Apr 2020-Cell
TL;DR: It is demonstrated that cross-neutralizing antibodies targeting conserved S epitopes can be elicited upon vaccination, and it is shown that SARS-CoV-2 S uses ACE2 to enter cells and that the receptor-binding domains of SARS-CoV-2 S and SARS-CoV S bind with similar affinities to human ACE2, correlating with the efficient spread of SARS-CoV-2 among humans.

Proceedings ArticleDOI
21 Jul 2017
TL;DR: ResNeXt as discussed by the authors is a simple, highly modularized network architecture for image classification, which is constructed by repeating a building block that aggregates a set of transformations with the same topology.
Abstract: We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
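A minimal NumPy sketch of the aggregated-transformations template follows, with dense layers standing in for the block's 1x1 and 3x3 convolutions; cardinality C is the number of parallel same-topology branches, and all sizes are illustrative assumptions.

```python
import numpy as np

def resnext_block(x, branch_W, W_in, W_out):
    """x: (N, D). W_in reduces/splits, C branches transform, W_out restores, plus identity."""
    h = np.maximum(x @ W_in, 0.0)                       # reduce and split channels
    C = len(branch_W)
    B = h.shape[1] // C                                 # width of each branch
    parts = [np.maximum(h[:, c * B:(c + 1) * B] @ branch_W[c], 0.0)  # same topology, C times
             for c in range(C)]
    return x + np.concatenate(parts, axis=1) @ W_out    # aggregate, then residual add

rng = np.random.default_rng(0)
D, C, B = 16, 8, 4                                      # cardinality C = 8 parallel branches
x = rng.normal(size=(2, D))
y = resnext_block(x, [rng.normal(size=(B, B)) for _ in range(C)],
                  rng.normal(size=(D, C * B)), rng.normal(size=(C * B, D)))
print(y.shape)                                          # (2, 16)
```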

Journal ArticleDOI
13 Feb 2015-Science
TL;DR: An updated and extended analysis of the planetary boundary (PB) framework is presented, identifying levels of anthropogenic perturbations below which the risk of destabilization of the Earth system (ES) is likely to remain low, a “safe operating space” for global societal development.
Abstract: The planetary boundaries framework defines a safe operating space for humanity based on the intrinsic biophysical processes that regulate the stability of the Earth system. Here, we revise and update the planetary boundary framework, with a focus on the underpinning biophysical science, based on targeted input from expert research communities and on more general scientific advances over the past 5 years. Several of the boundaries now have a two-tier approach, reflecting the importance of cross-scale interactions and the regional-level heterogeneity of the processes that underpin the boundaries. Two core boundaries—climate change and biosphere integrity—have been identified, each of which has the potential on its own to drive the Earth system into a new state should they be substantially and persistently transgressed.

Journal ArticleDOI
TL;DR: Progress has stagnated for breast and prostate cancers but strengthened for lung cancer, coinciding with changes in medical practice related to cancer screening and/or treatment, and mortality patterns reflect incidence trends.
Abstract: Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths in the United States and compiles the most recent data on population‐based cancer occurrence and outcomes. Incidence data (through 2018) were collected by the Surveillance, Epidemiology, and End Results program; the National Program of Cancer Registries; and the North American Association of Central Cancer Registries. Mortality data (through 2019) were collected by the National Center for Health Statistics. In 2022, 1,918,030 new cancer cases and 609,360 cancer deaths are projected to occur in the United States, including approximately 350 deaths per day from lung cancer, the leading cause of cancer death. Incidence during 2014 through 2018 continued a slow increase for female breast cancer (by 0.5% annually) and remained stable for prostate cancer, despite a 4% to 6% annual increase for advanced disease since 2011. Consequently, the proportion of prostate cancer diagnosed at a distant stage increased from 3.9% to 8.2% over the past decade. In contrast, lung cancer incidence continued to decline steeply for advanced disease while rates for localized‐stage increased suddenly by 4.5% annually, contributing to gains both in the proportion of localized‐stage diagnoses (from 17% in 2004 to 28% in 2018) and 3‐year relative survival (from 21% to 31%). Mortality patterns reflect incidence trends, with declines accelerating for lung cancer, slowing for breast cancer, and stabilizing for prostate cancer. In summary, progress has stagnated for breast and prostate cancers but strengthened for lung cancer, coinciding with changes in medical practice related to cancer screening and/or treatment. More targeted cancer control interventions and investment in improved early detection and treatment would facilitate reductions in cancer mortality.