
Journal ArticleDOI
07 Apr 2020-BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract: Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported, and at high risk of bias such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. 
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
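The review repeatedly appraises two properties of a model: discrimination (reported as a C index) and calibration (a calibration plot). As a minimal sketch, not taken from the paper and using synthetic data and scikit-learn, the snippet below shows how both could be computed for a hypothetical binary-outcome risk model on held-out data; variable names are illustrative assumptions.

```python
# Minimal sketch (not from the review): C statistic (ROC AUC for a binary
# outcome) and calibration-curve points for a hypothetical covid-19 risk model,
# evaluated on data held out from model development.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                       # synthetic predictors (e.g. age, CRP, ...)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)  # synthetic outcome

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)       # "development" cohort
p_val = model.predict_proba(X_val)[:, 1]             # predicted risks on the "validation" cohort

print("C statistic (AUC):", roc_auc_score(y_val, p_val))
obs, pred = calibration_curve(y_val, p_val, n_bins=10)  # points one would plot as a calibration curve
print(list(zip(pred.round(2), obs.round(2))))
```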

2,183 citations


Journal ArticleDOI
TL;DR: The optimal simulation protocol for each program has been implemented in CHARMM-GUI and is expected to be applicable to the remainder of the additive C36 FF including the proteins, nucleic acids, carbohydrates, and small molecules.
Abstract: Proper treatment of nonbonded interactions is essential for the accuracy of molecular dynamics (MD) simulations, especially in studies of lipid bilayers. The use of the CHARMM36 force field (C36 FF) in different MD simulation programs can result in disagreements with published simulations performed with CHARMM due to differences in the protocols used to treat the long-range and 1-4 nonbonded interactions. In this study, we systematically test the use of the C36 lipid FF in NAMD, GROMACS, AMBER, OpenMM, and CHARMM/OpenMM. A wide range of Lennard-Jones (LJ) cutoff schemes and integrator algorithms were tested to find the optimal simulation protocol to best match bilayer properties of six lipids with varying acyl chain saturation and head groups. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) bilayer were used to obtain the optimal protocol for each program. MD simulations with all programs were found to reasonably match the DPPC bilayer properties (surface area per lipid, chain order para...
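As an illustration of the kind of nonbonded-protocol choice being compared, the sketch below builds a CHARMM-format system in OpenMM (one of the tested programs) and applies a Lennard-Jones cutoff with a switching window. File names and box dimensions are placeholders, and OpenMM's built-in switch is a potential switch rather than CHARMM's force switch, which is exactly the kind of protocol difference the study examines; the optimal settings should be taken from the paper or CHARMM-GUI, not from this sketch.

```python
# Sketch only: placeholder CHARMM input files and illustrative values.
from openmm import app, unit
import openmm

psf = app.CharmmPsfFile("dppc_bilayer.psf")                       # hypothetical file names
params = app.CharmmParameterSet("par_all36_lipid.prm", "top_all36_lipid.rtf")

psf.setBox(6.4 * unit.nanometer, 6.4 * unit.nanometer, 6.7 * unit.nanometer)
system = psf.createSystem(params,
                          nonbondedMethod=app.PME,                # long-range electrostatics
                          nonbondedCutoff=1.2 * unit.nanometer)   # LJ cutoff

# Switch the LJ potential off smoothly between 1.0 and 1.2 nm. Note this is a
# potential switch; matching CHARMM's vfswitch (force switch) for C36 lipids
# may require additional care, as discussed in protocol studies like this one.
for force in system.getForces():
    if isinstance(force, openmm.NonbondedForce):
        force.setUseSwitchingFunction(True)
        force.setSwitchingDistance(1.0 * unit.nanometer)
```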

2,182 citations


Journal ArticleDOI
TL;DR: The newly developed toolbox DPABI, which evolved from REST and DPARSF, is introduced; it is designed to make data analysis require fewer manual operations, be less time-consuming, have a lower skill requirement, carry a smaller risk of inadvertent mistakes, and be more comparable across studies.
Abstract: Brain imaging efforts are being increasingly devoted to decode the functioning of the human brain. Among neuroimaging techniques, resting-state fMRI (R-fMRI) is currently expanding exponentially. Beyond the general neuroimaging analysis packages (e.g., SPM, AFNI and FSL), REST and DPARSF were developed to meet the increasing need of user-friendly toolboxes for R-fMRI data processing. To address recently identified methodological challenges of R-fMRI, we introduce the newly developed toolbox, DPABI, which was evolved from REST and DPARSF. DPABI incorporates recent research advances on head motion control and measurement standardization, thus allowing users to evaluate results using stringent control strategies. DPABI also emphasizes test-retest reliability and quality control of data processing. Furthermore, DPABI provides a user-friendly pipeline analysis toolkit for rat/monkey R-fMRI data analysis to reflect the rapid advances in animal imaging. In addition, DPABI includes preprocessing modules for task-based fMRI, voxel-based morphometry analysis, statistical analysis and results viewing. DPABI is designed to make data analysis require fewer manual operations, be less time-consuming, have a lower skill requirement, a smaller risk of inadvertent mistakes, and be more comparable across studies. We anticipate this open-source toolbox will assist novices and expert users alike and continue to support advancing R-fMRI methodology and its application to clinical translational studies.

2,179 citations


Journal ArticleDOI
04 Jun 2015-Cell
TL;DR: In a unified mechanism of m6A-based regulation in the cytoplasm, YTHDF2-mediated degradation controls the lifetime of target transcripts, whereas YTHDF1-mediated translation promotion increases translation efficiency, ensuring effective protein production from dynamic transcripts that are marked by m6A.

2,179 citations


Journal ArticleDOI
TL;DR: The antibacterial mechanisms of NPs against bacteria, the factors involved, and the limitations of current research are discussed.
Abstract: Nanoparticles (NPs) are increasingly used to target bacteria as an alternative to antibiotics. Nanotechnology may be particularly advantageous in treating bacterial infections. Examples include the utilization of NPs in antibacterial coatings for implantable devices and medicinal materials to prevent infection and promote wound healing, in antibiotic delivery systems to treat disease, in bacterial detection systems to generate microbial diagnostics, and in antibacterial vaccines to control bacterial infections. The antibacterial mechanisms of NPs are poorly understood, but the currently accepted mechanisms include oxidative stress induction, metal ion release, and non-oxidative mechanisms. The multiple simultaneous mechanisms of action against microbes would require multiple simultaneous gene mutations in the same bacterial cell for antibacterial resistance to develop; therefore, it is difficult for bacterial cells to become resistant to NPs. In this review, we discuss the antibacterial mechanisms of NPs against bacteria and the factors that are involved. The limitations of current research are also discussed.

2,178 citations


Journal ArticleDOI
Abstract: Author(s): Writing Group Members; Mozaffarian, Dariush; Benjamin, Emelia J; Go, Alan S; Arnett, Donna K; Blaha, Michael J; Cushman, Mary; Das, Sandeep R; de Ferranti, Sarah; Despres, Jean-Pierre; Fullerton, Heather J; Howard, Virginia J; Huffman, Mark D; Isasi, Carmen R; Jimenez, Monik C; Judd, Suzanne E; Kissela, Brett M; Lichtman, Judith H; Lisabeth, Lynda D; Liu, Simin; Mackey, Rachel H; Magid, David J; McGuire, Darren K; Mohler, Emile R; Moy, Claudia S; Muntner, Paul; Mussolino, Michael E; Nasir, Khurram; Neumar, Robert W; Nichol, Graham; Palaniappan, Latha; Pandey, Dilip K; Reeves, Mathew J; Rodriguez, Carlos J; Rosamond, Wayne; Sorlie, Paul D; Stein, Joel; Towfighi, Amytis; Turan, Tanya N; Virani, Salim S; Woo, Daniel; Yeh, Robert W; Turner, Melanie B; American Heart Association Statistics Committee; Stroke Statistics Subcommittee

2,178 citations


Journal ArticleDOI
TL;DR: The first Gaia data release, Gaia DR1 as discussed by the authors, consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues.
Abstract: Context. At about 1000 days after the launch of Gaia we present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7. Aims: A summary of Gaia DR1 is presented along with illustrations of the scientific quality of the data, followed by a discussion of the limitations due to the preliminary nature of this release. Methods: The raw data collected by Gaia during the first 14 months of the mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC) and turned into an astrometric and photometric catalogue. Results: Gaia DR1 consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues - a realisation of the Tycho-Gaia Astrometric Solution (TGAS) - and a secondary astrometric data set containing the positions for an additional 1.1 billion sources. The second component is the photometric data set, consisting of mean G-band magnitudes for all sources. The G-band light curves and the characteristics of 3000 Cepheid and RR Lyrae stars, observed at high cadence around the south ecliptic pole, form the third component. For the primary astrometric data set the typical uncertainty is about 0.3 mas for the positions and parallaxes, and about 1 mas yr-1 for the proper motions. A systematic component of 0.3 mas should be added to the parallax uncertainties. For the subset of 94 000 Hipparcos stars in the primary data set, the proper motions are much more precise at about 0.06 mas yr-1. For the secondary astrometric data set, the typical uncertainty of the positions is 10 mas. The median uncertainties on the mean G-band magnitudes range from the mmag level to 0.03 mag over the magnitude range 5 to 20.7. Conclusions: Gaia DR1 is an important milestone ahead of the next Gaia data release, which will feature five-parameter astrometry for all sources. Extensive validation shows that Gaia DR1 represents a major advance in the mapping of the heavens and the availability of basic stellar data that underpin observational astrophysics. Nevertheless, the very preliminary nature of this first Gaia data release does lead to a number of important limitations to the data quality which should be carefully considered before drawing conclusions from the data.
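For readers who want to touch the primary (TGAS) astrometric data set directly, the sketch below queries a few sources from the public ESA Gaia archive with astroquery. It assumes network access and that the astroquery package is installed; table and column names follow the archive's published schema for Gaia DR1.

```python
# Sketch (assumes astroquery and access to the ESA Gaia archive): pull a few
# nearby TGAS sources (parallax > 10 mas) from the Gaia DR1 primary data set.
from astroquery.gaia import Gaia

query = """
SELECT TOP 5 source_id, ra, dec, parallax, pmra, pmdec, phot_g_mean_mag
FROM gaiadr1.tgas_source
WHERE parallax > 10
ORDER BY phot_g_mean_mag
"""
job = Gaia.launch_job(query)   # synchronous ADQL job
table = job.get_results()      # returned as an astropy Table
print(table)
```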

2,174 citations


Posted Content
TL;DR: In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
Abstract: Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
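The core of node2vec is the second-order biased random walk controlled by a return parameter p and an in-out parameter q. The sketch below re-implements just that walk on a networkx graph for illustration; it is not the authors' optimized code, which also uses alias sampling and feeds the walks to a word2vec-style embedding step.

```python
# Sketch of node2vec's biased 2nd-order random walk (return parameter p,
# in-out parameter q). Walk generation only; the embedding step is omitted.
import random
import networkx as nx

def biased_walk(G, start, length, p=1.0, q=1.0):
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = list(G.neighbors(cur))
        if not nbrs:
            break
        if len(walk) == 1:                 # first step: uniform choice
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:                  # return to the previous node
                weights.append(1.0 / p)
            elif G.has_edge(x, prev):      # stay close (distance 1 from prev): BFS-like
                weights.append(1.0)
            else:                          # move outward (distance 2 from prev): DFS-like
                weights.append(1.0 / q)
        walk.append(random.choices(nbrs, weights=weights, k=1)[0])
    return walk

G = nx.karate_club_graph()
print(biased_walk(G, start=0, length=10, p=0.25, q=4.0))
```

Small p biases the walk toward revisiting the previous node (local, BFS-like neighborhoods), while small q pushes it outward (DFS-like exploration), which is the flexibility the abstract refers to.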

2,174 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: HED turns pixel-wise edge classification into image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets to approach the human ability to resolve the challenging ambiguity in edge and object boundary detection.
Abstract: We develop a new edge detection algorithm that addresses two critical issues in this long-standing vision problem: (1) holistic image training, and (2) multi-scale feature learning. Our proposed method, holistically-nested edge detection (HED), turns pixel-wise edge classification into image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are crucially important in order to approach the human ability to resolve the challenging ambiguity in edge and object boundary detection. We significantly advance the state-of-the-art on the BSD500 dataset (ODS F-score of 0.782) and the NYU Depth dataset (ODS F-score of 0.746), and do so with an improved speed (0.4 second per image) that is orders of magnitude faster than recent CNN-based edge detection algorithms.
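To make the "side outputs with deep supervision" idea concrete, here is a toy PyTorch module, not the paper's VGG-16-based network or training setup, with a few convolutional stages, a 1x1 side classifier per stage upsampled to image resolution, and a learned fusion layer; each side output and the fused map would receive its own edge loss.

```python
# Toy sketch of the HED idea (illustration only, not the paper's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHED(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])  # per-stage edge maps
        self.fuse = nn.Conv2d(3, 1, 1)                                         # learned fusion

    def forward(self, x):
        h, w = x.shape[2:]
        feats, cur = [], x
        for stage in (self.stage1, self.stage2, self.stage3):
            cur = stage(cur)
            feats.append(cur)
        sides = [F.interpolate(conv(f), size=(h, w), mode="bilinear", align_corners=False)
                 for conv, f in zip(self.side, feats)]
        fused = self.fuse(torch.cat(sides, dim=1))
        return sides, fused          # deep supervision: every side output gets a loss

model = TinyHED()
sides, fused = model(torch.randn(1, 3, 64, 64))
target = torch.zeros(1, 1, 64, 64)   # stand-in edge ground truth
loss = sum(F.binary_cross_entropy_with_logits(s, target) for s in sides + [fused])
print(loss.item())
```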

2,173 citations


Journal ArticleDOI
13 Jul 2020-Science
TL;DR: The results of this trio of studies suggest that the location, timing, and duration of IFN exposure are critical parameters underlying the success or failure of therapeutics for viral respiratory infections.
Abstract: Coronavirus disease 2019 (COVID-19) is characterized by distinct patterns of disease progression suggesting diverse host immune responses. We performed an integrated immune analysis on a cohort of 50 COVID-19 patients with various disease severity. A unique phenotype was observed in severe and critical patients, consisting of a highly impaired interferon (IFN) type I response (characterized by no IFN-β and low IFN-α production and activity), associated with a persistent blood viral load and an exacerbated inflammatory response. Inflammation was partially driven by the transcriptional factor NF-κB and characterized by increased tumor necrosis factor (TNF)-α and interleukin (IL)-6 production and signaling. These data suggest that type-I IFN deficiency in the blood could be a hallmark of severe COVID-19 and provide a rationale for combined therapeutic approaches.

2,171 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an R package iNEXT (iNterpolation/EXTrapolation) which provides simple functions to compute and plot the seamless rarefaction and extrapolation sampling curves for the three most widely used members of the Hill number family.
Abstract: Summary Hill numbers (or the effective number of species) have been increasingly used to quantify the species/taxonomic diversity of an assemblage. The sample-size- and coverage-based integrations of rarefaction (interpolation) and extrapolation (prediction) of Hill numbers represent a unified standardization method for quantifying and comparing species diversity across multiple assemblages. We briefly review the conceptual background of Hill numbers along with two approaches to standardization. We present an R package iNEXT (iNterpolation/EXTrapolation) which provides simple functions to compute and plot the seamless rarefaction and extrapolation sampling curves for the three most widely used members of the Hill number family (species richness, Shannon diversity and Simpson diversity). Two types of biodiversity data are allowed: individual-based abundance data and sampling-unit-based incidence data. Several applications of the iNEXT package are reviewed: (i) Non-asymptotic analysis: comparison of diversity estimates for equally large or equally complete samples. (ii) Asymptotic analysis: comparison of estimated asymptotic or true diversities. (iii) Assessment of sample completeness (sample coverage) across multiple samples. (iv) Comparison of estimated point diversities for a specified sample size or a specified level of sample coverage. Two examples are demonstrated, using the data (one for abundance data and the other for incidence data) included in the package, to illustrate all R functions and graphical displays.
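The snippet below is not the iNEXT R package; it is a small Python illustration of the sample-size-based interpolation (rarefaction) that iNEXT performs for species richness (the Hill number of order q = 0), using the classical hypergeometric expectation E[S(m)] = S_obs - sum_i C(N - N_i, m) / C(N, m) for a subsample of m individuals drawn from abundances N_i.

```python
# Illustration of sample-size-based rarefaction of species richness (q = 0).
# This is a plain re-derivation of the standard estimator, not the iNEXT API.
import numpy as np
from scipy.special import gammaln

def log_comb(n, k):
    """log of the binomial coefficient C(n, k), stable for large n."""
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def rarefied_richness(abundances, m):
    counts = np.asarray(abundances, dtype=float)
    counts = counts[counts > 0]
    n = counts.sum()
    # probability that species i is entirely absent from a subsample of size m
    with np.errstate(invalid="ignore"):
        log_p_absent = log_comb(n - counts, m) - log_comb(n, m)
    p_absent = np.where(n - counts >= m, np.exp(log_p_absent), 0.0)
    return counts.size - p_absent.sum()

abund = [50, 30, 10, 5, 2, 1, 1, 1]     # toy abundance data, 8 species, 100 individuals
for m in (10, 50, 100):
    print(m, round(rarefied_richness(abund, m), 2))
```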

Posted Content
TL;DR: It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.
Abstract: Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between "hand-engineering" and "end-to-end" learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias--the graph network--which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.
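The paper's "graph network" block is defined by an edge update, a per-node aggregation of incident edges, a node update, and a global update. The toy numpy sketch below traces those four steps with simple linear-plus-tanh functions standing in for the learned update functions; the authors' released library (graph_nets, TensorFlow/Sonnet-based) is far more general and is not reproduced here.

```python
# Toy sketch of one graph network (GN) block: edge update -> aggregate to nodes
# -> node update -> global update. Linear updates stand in for learned MLPs.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, node_dim, edge_dim, glob_dim = 4, 3, 2, 2
senders   = np.array([0, 1, 2, 3])            # toy ring graph
receivers = np.array([1, 2, 3, 0])

V = rng.normal(size=(n_nodes, node_dim))      # node attributes
E = rng.normal(size=(len(senders), edge_dim)) # edge attributes
u = rng.normal(size=(glob_dim,))              # global attribute

W_e = rng.normal(size=(edge_dim + 2 * node_dim + glob_dim, edge_dim))
W_v = rng.normal(size=(node_dim + edge_dim + glob_dim, node_dim))
W_u = rng.normal(size=(glob_dim + edge_dim + node_dim, glob_dim))

# 1) edge update: phi_e(e_k, v_sender, v_receiver, u)
edge_in = np.concatenate([E, V[senders], V[receivers], np.tile(u, (len(E), 1))], axis=1)
E_new = np.tanh(edge_in @ W_e)

# 2) aggregate updated edges per receiving node (sum), then node update
agg_e = np.zeros((n_nodes, edge_dim))
np.add.at(agg_e, receivers, E_new)
node_in = np.concatenate([V, agg_e, np.tile(u, (n_nodes, 1))], axis=1)
V_new = np.tanh(node_in @ W_v)

# 3) global update from the aggregated edges and nodes
u_new = np.tanh(np.concatenate([u, E_new.mean(0), V_new.mean(0)]) @ W_u)
print(V_new.shape, E_new.shape, u_new.shape)
```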

Proceedings Article
20 Apr 2018
TL;DR: A benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models, which favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks.
Abstract: Human ability to understand language is general, flexible, and robust. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a unified model that can execute a range of linguistic tasks across different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE, gluebenchmark.com): a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models. For some benchmark tasks, training data is plentiful, but for others it is limited or does not match the genre of the test set. GLUE thus favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. While none of the datasets in GLUE were created from scratch for the benchmark, four of them feature privately-held test data, which is used to ensure that the benchmark is used fairly. We evaluate baselines that use ELMo (Peters et al., 2018), a powerful transfer learning technique, as well as state-of-the-art sentence representation models. The best models still achieve fairly low absolute scores. Analysis with our diagnostic dataset yields similarly weak performance over all phenomena tested, with some exceptions.

Journal ArticleDOI
TL;DR: Modules for Experiments in Stellar Astrophysics (MESA) as discussed by the authors can now simultaneously evolve an interacting pair of differentially rotating stars undergoing transfer and loss of mass and angular momentum, greatly enhancing the prior ability to model binary evolution.
Abstract: We substantially update the capabilities of the open-source software instrument Modules for Experiments in Stellar Astrophysics (MESA). MESA can now simultaneously evolve an interacting pair of differentially rotating stars undergoing transfer and loss of mass and angular momentum, greatly enhancing the prior ability to model binary evolution. New MESA capabilities in fully coupled calculation of nuclear networks with hundreds of isotopes now allow MESA to accurately simulate advanced burning stages needed to construct supernova progenitor models. Implicit hydrodynamics with shocks can now be treated with MESA, enabling modeling of the entire massive star lifecycle, from pre-main sequence evolution to the onset of core collapse and nucleosynthesis from the resulting explosion. Coupling of the GYRE non-adiabatic pulsation instrument with MESA allows for new explorations of the instability strips for massive stars while also accelerating the astrophysical use of asteroseismology data. We improve treatment of mass accretion, giving more accurate and robust near-surface profiles. A new MESA capability to calculate weak reaction rates "on-the-fly" from input nuclear data allows better simulation of accretion induced collapse of massive white dwarfs and the fate of some massive stars. We discuss the ongoing challenge of chemical diffusion in the strongly coupled plasma regime, and exhibit improvements in MESA that now allow for the simulation of radiative levitation of heavy elements in hot stars. We close by noting that the MESA software infrastructure provides bit-for-bit consistency for all results across all the supported platforms, a profound enabling capability for accelerating MESA's development.

Journal ArticleDOI
14 Sep 2018-Science
TL;DR: In this article, a semi-empirical model analysis and the tandem cell strategy were used to overcome the low charge mobility of organic materials, which limits the active-layer thickness and efficient light absorption.
Abstract: Although organic photovoltaic (OPV) cells have many advantages, their performance still lags far behind that of other photovoltaic platforms. A fundamental reason for their low performance is the low charge mobility of organic materials, leading to a limit on the active-layer thickness and efficient light absorption. In this work, guided by a semi-empirical model analysis and using the tandem cell strategy to overcome such issues, and taking advantage of the high diversity and easily tunable band structure of organic materials, a record and certified 17.29% power conversion efficiency for a two-terminal monolithic solution-processed tandem OPV is achieved.

Journal ArticleDOI
13 Sep 2017-Nature
TL;DR: The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers.
Abstract: Recent progress implies that a crossover between machine learning and quantum information processing benefits both fields. Traditional machine learning has dramatically improved the benchmarking an ...

Journal ArticleDOI
18 Nov 2016-Science
TL;DR: How high-index dielectric nanoparticles can offer a substitute for plasmonic nanoparticle structures, providing a highly flexible and low-loss route to the manipulation of light at the nanoscale is reviewed.
Abstract: The resonant modes of plasmonic nanoparticle structures made of gold or silver endow them with an ability to manipulate light at the nanoscale. However, owing to the high light losses caused by metals at optical wavelengths, only a small fraction of plasmonics applications have been realized. Kuznetsov et al. review how high-index dielectric nanoparticles can offer a substitute for these metals, providing a highly flexible and low-loss route to the manipulation of light at the nanoscale. Science, this issue (doi: 10.1126/science.aag2472).

20 Mar 2020
TL;DR: The effects of the epidemic caused by the new CoV have yet to emerge as the situation is quickly evolving, and world governments are at work to establish countermeasures to stem possible devastating effects.
Abstract: According to the World Health Organization (WHO), viral diseases continue to emerge and represent a serious issue to public health. In the last twenty years, several viral epidemics such as the severe acute respiratory syndrome coronavirus (SARS-CoV) in 2002 to 2003, and H1N1 influenza in 2009, have been recorded. Most recently, the Middle East respiratory syndrome coronavirus (MERS-CoV) was first identified in Saudi Arabia in 2012. In a timeline that reaches the present day, an epidemic of cases with unexplained low respiratory infections detected in Wuhan, the largest metropolitan area in China's Hubei province, was first reported to the WHO Country Office in China on December 31, 2019. Published literature can trace the beginning of symptomatic individuals back to the beginning of December 2019. As they were unable to identify the causative agent, these first cases were classified as "pneumonia of unknown etiology." The Chinese Center for Disease Control and Prevention (CDC) and local CDCs organized an intensive outbreak investigation program. The etiology of this illness is now attributed to a novel virus belonging to the coronavirus (CoV) family, COVID-19. On February 11, 2020, the WHO Director-General, Dr Tedros Adhanom Ghebreyesus, announced that the disease caused by this new CoV was "COVID-19," which is the acronym of "coronavirus disease 2019." In the past twenty years, two additional coronavirus epidemics have occurred. SARS-CoV provoked a large-scale epidemic beginning in China and involving two dozen countries with approximately 8000 cases and 800 deaths, and the MERS-CoV, which began in Saudi Arabia, has approximately 2,500 cases and 800 deaths and still causes sporadic cases. This new virus seems to be very contagious and has quickly spread globally. In a meeting on January 30, 2020, per the International Health Regulations (IHR, 2005), the outbreak was declared by the WHO a Public Health Emergency of International Concern (PHEIC) as it had spread to 18 countries with four countries reporting human-to-human transmission. An additional landmark occurred on February 26, 2020, as the first case of the disease not imported from China was recorded in the United States. Initially, the new virus was called 2019-nCoV. Subsequently, the experts of the International Committee on Taxonomy of Viruses (ICTV) termed it the SARS-CoV-2 virus as it is very similar to the one that caused the SARS outbreak (SARS-CoVs). The CoVs have become the major pathogens of emerging respiratory disease outbreaks. They are a large family of single-stranded RNA viruses (+ssRNA) that can be isolated in different animal species. For reasons yet to be explained, these viruses can cross species barriers and can cause, in humans, illness ranging from the common cold to more severe diseases such as MERS and SARS. Interestingly, these latter viruses have probably originated from bats and then moved into other mammalian hosts — the Himalayan palm civet for SARS-CoV, and the dromedary camel for MERS-CoV — before jumping to humans. The dynamics of SARS-CoV-2 are currently unknown, but there is speculation that it also has an animal origin. The potential for these viruses to grow to become a pandemic worldwide seems to be a serious public health risk. Concerning COVID-19, the WHO raised the threat to the CoV epidemic to the "very high" level on February 28, 2020. Probably, the effects of the epidemic caused by the new CoV have yet to emerge as the situation is quickly evolving. World governments are at work to establish countermeasures to stem possible devastating effects. Health organizations coordinate information flows and issue directives and guidelines to best mitigate the impact of the threat. At the same time, scientists around the world work tirelessly, and information about the transmission mechanisms, the clinical spectrum of disease, new diagnostics, and prevention and therapeutic strategies is rapidly developing. Many uncertainties remain with regard to both the virus-host interaction and the evolution of the epidemic, with specific reference to the times when the epidemic will reach its peak. At the moment, the therapeutic strategies to deal with the infection are only supportive, and prevention aimed at reducing transmission in the community is our best weapon. Aggressive isolation measures in China have led to a progressive reduction of cases in the last few days. In Italy, in geographic regions of the north of the peninsula, political and health authorities are making incredible efforts to contain a shock wave that is severely testing the health system. In the midst of the crisis, the authors have chosen to use the "StatPearls" platform because, within the PubMed scenario, it represents a unique tool that may allow them to make updates in real time. The aim, therefore, is to collect information and scientific evidence and to provide an overview of the topic that will be continuously updated.

Posted Content
TL;DR: In this paper, a fully convolutional one-stage object detector (FCOS) is proposed to solve object detection in a per-pixel prediction fashion, analogue to semantic segmentation.
Abstract: We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogous to semantic segmentation. Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor box free, as well as proposal free. By eliminating the predefined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes such as calculating overlapping during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With non-maximum suppression (NMS) as the only post-processing, FCOS with ResNeXt-64x4d-101 achieves 44.7% in AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: this https URL
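The per-pixel formulation boils down to regressing, at every location inside a ground-truth box, the distances (l, t, r, b) to the box's four sides, plus a centre-ness score that down-weights locations far from the box centre. The numpy sketch below computes those targets for a single box; it is an illustration of the formulation, not the released FCOS code.

```python
# Sketch (not the released FCOS code): regression targets and centre-ness for
# locations that fall inside one ground-truth box, at stride 1 for simplicity.
import numpy as np

box = np.array([10.0, 20.0, 50.0, 80.0])          # x0, y0, x1, y1
ys, xs = np.mgrid[0:100, 0:100]                   # per-pixel locations

l = xs - box[0]
t = ys - box[1]
r = box[2] - xs
b = box[3] - ys
inside = (l > 0) & (t > 0) & (r > 0) & (b > 0)    # positive samples: inside the box

reg_targets = np.stack([l, t, r, b], axis=-1)     # what the regression branch predicts

centerness = np.zeros_like(l)
li, ti, ri, bi = l[inside], t[inside], r[inside], b[inside]
centerness[inside] = np.sqrt((np.minimum(li, ri) / np.maximum(li, ri)) *
                             (np.minimum(ti, bi) / np.maximum(ti, bi)))

print(reg_targets[50, 30], centerness[50, 30])    # the box centre scores 1.0
```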

Journal ArticleDOI
TL;DR: In patients recovering from coronavirus disease 2019 (without severe respiratory distress during the disease course), lung abnormalities on chest CT scans showed greatest severity approximately 10 days after initial onset of symptoms.
Abstract: Background Chest CT is used to assess the severity of lung involvement in coronavirus disease 2019 (COVID-19). Purpose To determine the changes in chest CT findings associated with COVID-19 from initial diagnosis until patient recovery. Materials and Methods This retrospective review included patients with real-time polymerase chain reaction-confirmed COVID-19 who presented between January 12, 2020, and February 6, 2020. Patients with severe respiratory distress and/or oxygen requirement at any time during the disease course were excluded. Repeat chest CT was performed at approximately 4-day intervals. Each of the five lung lobes was visually scored on a scale of 0 to 5, with 0 indicating no involvement and 5 indicating more than 75% involvement. The total CT score was determined as the sum of lung involvement, ranging from 0 (no involvement) to 25 (maximum involvement). Results Twenty-one patients (six men and 15 women aged 25-63 years) with confirmed COVID-19 were evaluated. A total of 82 chest CT scans were obtained in these patients, with a mean interval (±standard deviation) of 4 days ± 1 (range, 1-8 days). All patients were discharged after a mean hospitalization period of 17 days ± 4 (range, 11-26 days). Maximum lung involvement peaked at approximately 10 days (with a calculated total CT score of 6) from the onset of initial symptoms (R2 = 0.25, P < .001). Based on quartiles of chest CT scans from day 0 to day 26 involvement, four stages of lung CT findings were defined. CT scans obtained in stage 1 (0-4 days) showed ground-glass opacities (18 of 24 scans [75%]), with a mean total CT score of 2 ± 2; scans obtained in stage 2 (5-8 days) showed an increase in both the crazy-paving pattern (nine of 17 scans [53%]) and total CT score (mean, 6 ± 4; P = .002); scans obtained in stage 3 (9-13 days) showed consolidation (19 of 21 scans [91%]) and a peak in the total CT score (mean, 7 ± 4); and scans obtained in stage 4 (≥14 days) showed gradual resolution of consolidation (15 of 20 scans [75%]) and a decrease in the total CT score (mean, 6 ± 4) without crazy-paving pattern. Conclusion In patients recovering from coronavirus disease 2019 (without severe respiratory distress during the disease course), lung abnormalities on chest CT scans showed greatest severity approximately 10 days after initial onset of symptoms. © RSNA, 2020.
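As a tiny worked example of the scoring scheme described above (five lobes, each scored 0-5, summed to a total of 0-25), and not taken from the study itself:

```python
# Illustration only: the total CT severity score is the sum of five per-lobe
# visual scores, each 0-5 (0 = no involvement, 5 = more than 75% involvement).
def total_ct_score(lobe_scores):
    assert len(lobe_scores) == 5 and all(0 <= s <= 5 for s in lobe_scores)
    return sum(lobe_scores)

# e.g. a hypothetical scan with involvement concentrated in the lower lobes
print(total_ct_score([1, 1, 1, 2, 2]))   # -> 7, on the 0-25 scale
```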

Proceedings ArticleDOI
18 Mar 2019
TL;DR: Spatially-adaptive normalization is proposed, a simple but effective layer for synthesizing photorealistic images given an input semantic layout that allows users to easily control the style and content of image synthesis results as well as create multi-modal results.
Abstract: We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the network, forcing the network to memorize the information throughout all the layers. Instead, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned affine transformation. Experiments on several challenging datasets demonstrate the superiority of our method compared to existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows users to easily control the style and content of image synthesis results as well as create multi-modal results. Code is available upon publication.
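The sketch below is a minimal PyTorch rendition of the layer as described: a parameter-free normalization followed by a spatially varying, learned scale and shift predicted from the (resized) semantic layout. Channel sizes are illustrative assumptions, and this is not the authors' released implementation.

```python
# Minimal SPADE-style layer sketch: normalize without affine parameters, then
# modulate each activation with gamma/beta maps predicted from the layout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, feat_channels, label_channels, hidden=64):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)   # parameter-free normalization
        self.shared = nn.Sequential(nn.Conv2d(label_channels, hidden, 3, padding=1),
                                    nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, x, segmap):
        # resize the layout to the feature resolution, then predict the
        # spatially varying scale and shift used for modulation
        seg = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

layer = SPADE(feat_channels=32, label_channels=10)
x = torch.randn(2, 32, 16, 16)                        # activations being normalized
segmap = torch.randn(2, 10, 128, 128).softmax(dim=1)  # stand-in for a one-hot layout
print(layer(x, segmap).shape)                         # torch.Size([2, 32, 16, 16])
```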

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper introduces ActivityNet, a new large-scale video benchmark for human activity understanding that aims at covering a wide range of complex human activities that are of interest to people in their daily living.
Abstract: In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.

Posted Content
TL;DR: The entire ImageJ codebase was rewritten, engineering a redesigned plugin mechanism intended to facilitate extensibility at every level, with the goal of creating a more powerful tool that continues to serve the existing community while addressing a wider range of scientific requirements.
Abstract: ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms, to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats, to scripting languages, to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.

Proceedings ArticleDOI
12 Oct 2015
TL;DR: A new class of model inversion attack is developed that exploits confidence values revealed along with predictions and is able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and recover recognizable images of people's faces given only their name.
Abstract: Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications such as predicting lifestyle choices, making medical diagnoses, and facial recognition. In a model inversion attack, recently introduced in a case study of linear classifiers in personalized medicine by Fredrikson et al., adversarial access to an ML model is abused to learn sensitive genomic information about individuals. Whether model inversion attacks apply to settings outside theirs, however, is unknown. We develop a new class of model inversion attack that exploits confidence values revealed along with predictions. Our new attacks are applicable in a variety of settings, and we explore two in depth: decision trees for lifestyle surveys as used on machine-learning-as-a-service systems and neural networks for facial recognition. In both cases confidence values are revealed to those with the ability to make prediction queries to models. We experimentally show attacks that are able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and, in the other context, show how to recover recognizable images of people's faces given only their name and access to the ML model. We also initiate experimental exploration of natural countermeasures, investigating a privacy-aware decision tree training algorithm that is a simple variant of CART learning, as well as revealing only rounded confidence values. The lesson that emerges is that one can avoid these kinds of MI attacks with negligible degradation to utility.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: This paper conducts a detailed experimental study on the architecture of ResNet blocks and proposes a novel architecture where the depth of residual networks is decreased and their width increased; the resulting network structures, called wide residual networks (WRNs), are far superior to their commonly used thin and very deep counterparts.
Abstract: Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at this https URL
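To illustrate the "decrease depth, increase width" idea, here is a toy PyTorch pre-activation residual block whose output channels are multiplied by a widening factor k; it is an illustrative sketch, not the full WRN-28-10 configuration or training recipe from the paper.

```python
# Sketch of a WRN-style pre-activation residual block with widening factor k.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WideBasicBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, dropout=0.3):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.drop = nn.Dropout(dropout)                 # dropout between convs, as in WRN
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))           # pre-activation order (BN-ReLU-conv)
        out = self.conv2(self.drop(F.relu(self.bn2(out))))
        return out + self.shortcut(x)

k = 10                                                  # widening factor
block = WideBasicBlock(in_ch=16, out_ch=16 * k)
print(block(torch.randn(1, 16, 32, 32)).shape)          # torch.Size([1, 160, 32, 32])
```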

Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this paper, the spatial context is used as a source of free and plentiful supervisory signal for training a rich visual representation, and the feature representation learned using this within-image context captures visual similarity across images.
Abstract: This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.
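The pretext task's data preparation is easy to sketch: sample a centre patch and one of its eight neighbours (separated by a gap so the network cannot rely on trivial boundary cues) and keep the relative position as an 8-way classification label. The numpy snippet below illustrates that step only; patch size, gap, and the ConvNet training itself are assumptions and omissions, not the authors' code.

```python
# Sketch of the self-supervised pair-sampling step for relative-position prediction.
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]   # 8 possible neighbour positions -> labels 0..7

def sample_patch_pair(image, patch=32, gap=8, rng=np.random.default_rng()):
    h, w = image.shape[:2]
    step = patch + gap                                  # gap discourages trivial low-level cues
    cy = rng.integers(step, h - 2 * step)
    cx = rng.integers(step, w - 2 * step)
    label = rng.integers(len(OFFSETS))
    dy, dx = OFFSETS[label]
    ny, nx = cy + dy * step, cx + dx * step
    centre    = image[cy:cy + patch, cx:cx + patch]
    neighbour = image[ny:ny + patch, nx:nx + patch]
    return centre, neighbour, label                     # the ConvNet must predict `label`

img = np.random.rand(256, 256, 3)                       # stands in for an unlabeled photo
c, n, y = sample_patch_pair(img)
print(c.shape, n.shape, y)
```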

Journal ArticleDOI
TL;DR: Chloroquine phosphate, an old drug for treatment of malaria, is shown to have apparent efficacy and acceptable safety against COVID-19 associated pneumonia in multicenter clinical trials conducted in China.
Abstract: The coronavirus disease 2019 (COVID-19) virus is spreading rapidly, and scientists are endeavoring to discover drugs for its efficacious treatment in China. Chloroquine phosphate, an old drug for treatment of malaria, is shown to have apparent efficacy and acceptable safety against COVID-19 associated pneumonia in multicenter clinical trials conducted in China. The drug is recommended to be included in the next version of the Guidelines for the Prevention, Diagnosis, and Treatment of Pneumonia Caused by COVID-19 issued by the National Health Commission of the People's Republic of China for treatment of COVID-19 infection in larger populations in the future.

Posted Content
TL;DR: The API design and the system implementation of MXNet are described, and it is explained how embedding of both symbolic expression and tensor operation is handled in a unified fashion.
Abstract: MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines.

Journal ArticleDOI
01 Dec 2015
TL;DR: Burnout and satisfaction with work-life balance in US physicians worsened from 2011 to 2014, resulting in an increasing disparity in burn out and satisfaction in physicians relative to the general US working population.
Abstract: Objective To evaluate the prevalence of burnout and satisfaction with work-life balance in physicians and US workers in 2014 relative to 2011. Patients and Methods From August 28, 2014, to October 6, 2014, we surveyed both US physicians and a probability-based sample of the general US population using the methods and measures used in our 2011 study. Burnout was measured using validated metrics, and satisfaction with work-life balance was assessed using standard tools. Results Of the 35,922 physicians who received an invitation to participate, 6880 (19.2%) completed surveys. When assessed using the Maslach Burnout Inventory, 54.4% (n=3680) of the physicians reported at least 1 symptom of burnout in 2014 compared with 45.5% (n=3310) in 2011. Conclusion Burnout and satisfaction with work-life balance in US physicians worsened from 2011 to 2014. More than half of US physicians are now experiencing professional burnout.

Journal ArticleDOI
Shane A. McCarthy1, Sayantan Das2, Warren W. Kretzschmar3, Olivier Delaneau4, Andrew R. Wood5, Alexander Teumer6, Hyun Min Kang2, Christian Fuchsberger2, Petr Danecek1, Kevin Sharp3, Yang Luo1, C Sidore7, Alan Kwong2, Nicholas J. Timpson8, Seppo Koskinen, Scott I. Vrieze9, Laura J. Scott2, He Zhang2, Anubha Mahajan3, Jan H. Veldink, Ulrike Peters10, Ulrike Peters11, Carlos N. Pato12, Cornelia M. van Duijn13, Christopher E. Gillies2, Ilaria Gandin14, Massimo Mezzavilla, Arthur Gilly1, Massimiliano Cocca14, Michela Traglia, Andrea Angius7, Jeffrey C. Barrett1, D.I. Boomsma15, Kari Branham2, Gerome Breen16, Gerome Breen17, Chad M. Brummett2, Fabio Busonero7, Harry Campbell18, Andrew T. Chan19, Sai Chen2, Emily Y. Chew20, Francis S. Collins20, Laura J Corbin8, George Davey Smith8, George Dedoussis21, Marcus Dörr6, Aliki-Eleni Farmaki21, Luigi Ferrucci20, Lukas Forer22, Ross M. Fraser2, Stacey Gabriel23, Shawn Levy, Leif Groop24, Leif Groop25, Tabitha A. Harrison11, Andrew T. Hattersley5, Oddgeir L. Holmen26, Kristian Hveem26, Matthias Kretzler2, James Lee27, Matt McGue28, Thomas Meitinger29, David Melzer5, Josine L. Min8, Karen L. Mohlke30, John B. Vincent31, Matthias Nauck6, Deborah A. Nickerson10, Aarno Palotie23, Aarno Palotie19, Michele T. Pato12, Nicola Pirastu14, Melvin G. McInnis2, J. Brent Richards16, J. Brent Richards32, Cinzia Sala, Veikko Salomaa, David Schlessinger20, Sebastian Schoenherr22, P. Eline Slagboom33, Kerrin S. Small16, Tim D. Spector16, Dwight Stambolian34, Marcus A. Tuke5, Jaakko Tuomilehto, Leonard H. van den Berg, Wouter van Rheenen, Uwe Völker6, Cisca Wijmenga35, Daniela Toniolo, Eleftheria Zeggini1, Paolo Gasparini14, Matthew G. Sampson2, James F. Wilson18, Timothy M. Frayling5, Paul I.W. de Bakker36, Morris A. Swertz35, Steven A. McCarroll19, Charles Kooperberg11, Annelot M. Dekker, David Altshuler, Cristen J. Willer2, William G. Iacono28, Samuli Ripatti25, Nicole Soranzo27, Nicole Soranzo1, Klaudia Walter1, Anand Swaroop20, Francesco Cucca7, Carl A. Anderson1, Richard M. Myers, Michael Boehnke2, Mark I. McCarthy37, Mark I. McCarthy3, Richard Durbin1, Gonçalo R. Abecasis2, Jonathan Marchini3 
TL;DR: A reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies.
Abstract: We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies, and it can help to discover and refine causal loci. We describe remote server resources that allow researchers to carry out imputation and phasing consistently and efficiently.