
Journal ArticleDOI
TL;DR: The ADE20K dataset as discussed by the authors contains 25k images of complex everyday scenes containing a variety of objects in their natural spatial context; on average there are 19.5 instances and 10.5 object classes per image.
Abstract: Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. In total, there are 25k images of complex everyday scenes containing a variety of objects in their natural spatial context. On average there are 19.5 instances and 10.5 object classes per image. Based on ADE20K, we construct benchmarks for scene parsing and instance segmentation. We provide baseline performances on both benchmarks and release open-source re-implementations of state-of-the-art models. We further evaluate the effect of synchronized batch normalization and find that a reasonably large batch size is crucial for semantic segmentation performance. We show that networks trained on ADE20K are able to segment a wide variety of scenes and objects.

961 citations


Journal ArticleDOI
TL;DR: Galpy as discussed by the authors is a general framework for representing galactic potentials both in python and in C (for accelerated computations); galpy functions, objects, and methods can generally take arbitrary combinations of these as arguments.
Abstract: I describe the design, implementation, and usage of galpy, a python package for galactic-dynamics calculations. At its core, galpy consists of a general framework for representing galactic potentials both in python and in C (for accelerated computations); galpy functions, objects, and methods can generally take arbitrary combinations of these as arguments. Numerical orbit integration is supported with a variety of Runge-Kutta-type and symplectic integrators. For planar orbits, integration of the phase-space volume is also possible. galpy supports the calculation of action-angle coordinates and orbital frequencies for a given phase-space point for general spherical potentials, using state-of-the-art numerical approximations for axisymmetric potentials, and making use of a recent general approximation for any static potential. A number of different distribution functions (DFs) are also included in the current release; currently, these consist of two-dimensional axisymmetric and non-axisymmetric disk DFs, a three-dimensional disk DF, and a DF framework for tidal streams. I provide several examples to illustrate the use of the code. I present a simple model for the Milky Way's gravitational potential consistent with the latest observations. I also numerically calculate the Oort functions for different tracer populations of stars and compare them to a new analytical approximation. Additionally, I characterize the response of a kinematically warm disk to an elliptical m = 2 perturbation in detail. Overall, galpy consists of about 54,000 lines, including 23,000 lines of code in the module, 11,000 lines of test code, and about 20,000 lines of documentation. The test suite covers 99.6% of the code. galpy is available at http://github.com/jobovy/galpy with extensive documentation available at http://galpy.readthedocs.org/en/latest.
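
The workflow the abstract describes (define a potential, numerically integrate an orbit, inspect its properties) can be illustrated with a short, hedged sketch of galpy's documented quickstart-style usage; the initial conditions below are arbitrary, and the exact names (MWPotential2014, Orbit) assume a reasonably recent galpy release.

    import numpy
    from galpy.potential import MWPotential2014
    from galpy.orbit import Orbit

    # Phase-space point [R, vR, vT, z, vz, phi] in galpy's natural units
    # (distances in units of R0, velocities in units of v0).
    o = Orbit([1.0, 0.1, 1.1, 0.0, 0.1, 0.0])

    ts = numpy.linspace(0.0, 100.0, 2001)   # integration times
    o.integrate(ts, MWPotential2014)        # numerical orbit integration

    print(o.e())     # orbital eccentricity, computed from the integrated orbit
    print(o.zmax())  # maximum height above the Galactic plane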

961 citations


Journal ArticleDOI
TL;DR: Trastuzumab deruxtecan showed durable antitumor activity in a pretreated patient population with HER2-positive metastatic breast cancer, but requires attention to pulmonary symptoms and careful monitoring.
Abstract: Background Trastuzumab deruxtecan (DS-8201) is an antibody-drug conjugate composed of an anti-HER2 (human epidermal growth factor receptor 2) antibody, a cleavable tetrapeptide-based linke...

961 citations


Journal ArticleDOI
24 Nov 2016-Nature
TL;DR: Modular synthetic hydrogel networks are used to define the key extracellular matrix parameters that govern intestinal stem cell (ISC) expansion and organoid formation, and show that separate stages of the process require different mechanical environments and ECM components.
Abstract: Epithelial organoids recapitulate multiple aspects of real organs, making them promising models of organ development, function and disease. However, the full potential of organoids in research and therapy has remained unrealized, owing to the poorly defined animal-derived matrices in which they are grown. Here we used modular synthetic hydrogel networks to define the key extracellular matrix (ECM) parameters that govern intestinal stem cell (ISC) expansion and organoid formation, and show that separate stages of the process require different mechanical environments and ECM components. In particular, fibronectin-based adhesion was sufficient for ISC survival and proliferation. High matrix stiffness significantly enhanced ISC expansion through a yes-associated protein 1 (YAP)-dependent mechanism. ISC differentiation and organoid formation, on the other hand, required a soft matrix and laminin-based adhesion. We used these insights to build a fully defined culture system for the expansion of mouse and human ISCs. We also produced mechanically dynamic matrices that were initially optimal for ISC expansion and subsequently permissive to differentiation and intestinal organoid formation, thus creating well-defined alternatives to animal-derived matrices for the culture of mouse and human stem-cell-derived organoids. Our approach overcomes multiple limitations of current organoid cultures and greatly expands their applicability in basic and clinical research. The principles presented here can be extended to identify designer matrices that are optimal for long-term culture of other types of stem cells and organoids.

961 citations


Journal ArticleDOI
21 Apr 2015-Immunity
TL;DR: It is proposed that multiple Breg cell subsets can be induced in response to inflammation at different stages of development; the review places particular emphasis on their ontogeny.

961 citations


Journal ArticleDOI
06 Jun 2018-Nature
TL;DR: The genetic architecture of the human plasma proteome in healthy blood donors from the INTERVAL study is characterized; protein quantitative trait loci are shown to overlap with gene expression quantitative trait loci as well as with disease-associated loci, and evidence is found that protein biomarkers have causal roles in disease.
Abstract: Although plasma proteins have important roles in biological processes and are the direct targets of many drugs, the genetic factors that control inter-individual variation in plasma protein levels are not well understood. Here we characterize the genetic architecture of the human plasma proteome in healthy blood donors from the INTERVAL study. We identify 1,927 genetic associations with 1,478 proteins, a fourfold increase on existing knowledge, including trans associations for 1,104 proteins. To understand the consequences of perturbations in plasma protein levels, we apply an integrated approach that links genetic variation with biological pathway, disease, and drug databases. We show that protein quantitative trait loci overlap with gene expression quantitative trait loci, as well as with disease-associated loci, and find evidence that protein biomarkers have causal roles in disease using Mendelian randomization analysis. By linking genetic factors to diseases via specific proteins, our analyses highlight potential therapeutic targets, opportunities for matching existing drugs with new disease indications, and potential safety concerns for drugs under development.

961 citations


Posted Content
TL;DR: This paper describes a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent, and extends convergence results to more general GANs and proves local convergence for simplified gradient penalties even if the generator and data distribution lie on lower dimensional manifolds.
Abstract: Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distribution lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
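
For reference, the simplified zero-centered gradient penalties discussed in this line of work are commonly written as a penalty on the norm of the discriminator gradient evaluated on the real-data distribution (the so-called R1 penalty); the notation below is a standard rendering, not a quotation from the paper:

    R_1(\psi) = \frac{\gamma}{2}\, \mathbb{E}_{x \sim p_{\mathcal{D}}}\!\left[ \lVert \nabla_x D_\psi(x) \rVert^2 \right]

with an analogous R_2 penalty obtained by taking the expectation over the generator distribution instead.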

961 citations


Journal ArticleDOI
02 Nov 2017-Nature
TL;DR: It is found that lactate can be a primary source of carbon for the TCA cycle and thus of energy; during the fasted state, the contribution of glucose to tissue TCA metabolism is primarily indirect (via circulating lactate) in all tissues except the brain.
Abstract: Mammalian tissues are fuelled by circulating nutrients, including glucose, amino acids, and various intermediary metabolites. Under aerobic conditions, glucose is generally assumed to be burned fully by tissues via the tricarboxylic acid cycle (TCA cycle) to carbon dioxide. Alternatively, glucose can be catabolized anaerobically via glycolysis to lactate, which is itself also a potential nutrient for tissues and tumours. The quantitative relevance of circulating lactate or other metabolic intermediates as fuels remains unclear. Here we systematically examine the fluxes of circulating metabolites in mice, and find that lactate can be a primary source of carbon for the TCA cycle and thus of energy. Intravenous infusions of 13C-labelled nutrients reveal that, on a molar basis, the circulatory turnover flux of lactate is the highest of all metabolites and exceeds that of glucose by 1.1-fold in fed mice and 2.5-fold in fasting mice; lactate is made primarily from glucose but also from other sources. In both fed and fasted mice, 13C-lactate extensively labels TCA cycle intermediates in all tissues. Quantitative analysis reveals that during the fasted state, the contribution of glucose to tissue TCA metabolism is primarily indirect (via circulating lactate) in all tissues except the brain. In genetically engineered lung and pancreatic cancer tumours in fasted mice, the contribution of circulating lactate to TCA cycle intermediates exceeds that of glucose, with glutamine making a larger contribution than lactate in pancreatic cancer. Thus, glycolysis and the TCA cycle are uncoupled at the level of lactate, which is a primary circulating TCA substrate in most tissues and tumours.

961 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: The results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers, and show that the convolutional features provide improved results compared to standard hand-crafted features.
Abstract: Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they mitigate the need for task-specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, different to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets.
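
For context, the discriminative correlation filter (DCF) frameworks referred to here typically learn a multi-channel filter f by minimizing a regularized least-squares objective of the following standard form (this is the generic DCF formulation, not necessarily the paper's exact notation):

    \varepsilon(f) = \Big\lVert \sum_{l=1}^{d} f^{l} \star x^{l} - y \Big\rVert^{2} + \lambda \sum_{l=1}^{d} \lVert f^{l} \rVert^{2}

where x^l are the feature channels (here, convolutional activations), y is the desired Gaussian-shaped response, \star denotes circular correlation, and the problem is solved efficiently in the Fourier domain.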

961 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: A simple and efficient baseline for person re-identification with deep neural networks that combines effective training tricks and achieves 94.5% rank-1 and 85.9% mAP on Market1501 using only global features.
Abstract: This paper explores a simple and efficient baseline for person re-identification (ReID). Person re-identification with deep neural networks has made progress and achieved high performance in recent years. However, many state-of-the-art methods design complex network structures and concatenate multi-branch features. In the literature, some effective training tricks appear only briefly in papers or source code. This paper collects and evaluates these effective training tricks for person ReID. By combining these tricks, the model achieves 94.5% rank-1 and 85.9% mAP on Market1501 using only global features. Our codes and models are available at https://github.com/michuanhaohao/reid-strong-baseline.
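
As an illustration of the kind of training trick such baselines collect (not the authors' exact recipe), here is a common PyTorch-style implementation of label-smoothed cross-entropy; the class name and the epsilon/(N-1) smoothing convention are choices of this sketch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LabelSmoothingCrossEntropy(nn.Module):
        # Cross-entropy with uniform label smoothing, a widely used ReID training trick.
        def __init__(self, num_classes, epsilon=0.1):
            super().__init__()
            self.num_classes = num_classes
            self.epsilon = epsilon

        def forward(self, logits, targets):
            log_probs = F.log_softmax(logits, dim=1)
            with torch.no_grad():
                # (1 - epsilon) on the true class, epsilon spread over the other classes.
                true_dist = torch.full_like(log_probs, self.epsilon / (self.num_classes - 1))
                true_dist.scatter_(1, targets.unsqueeze(1), 1.0 - self.epsilon)
            return torch.mean(torch.sum(-true_dist * log_probs, dim=1))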

960 citations


Journal ArticleDOI
06 Aug 2015-Nature
TL;DR: This work maps the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network.
Abstract: The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
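
In practice, the optimization described in the abstract is carried out with a scalable heuristic that scores nodes by their Collective Influence; a standard statement of that score (paraphrased, not quoted from the paper) is:

    CI_\ell(i) = (k_i - 1) \sum_{j \in \partial \mathrm{Ball}(i,\ell)} (k_j - 1)

where k_i is the degree of node i and \partial\mathrm{Ball}(i,\ell) is the set of nodes at shortest-path distance \ell from i; nodes with the highest CI are removed iteratively to fragment the network.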

Journal ArticleDOI
TL;DR: In this article, the authors argue that a better understanding of compound events may improve projections of potential high-impact events and can provide a bridge between climate scientists, engineers, social scientists, impact modellers and decision-makers.
Abstract: Floods, wildfires, heatwaves and droughts often result from a combination of interacting physical processes across multiple spatial and temporal scales. The combination of processes (climate drivers and hazards) leading to a significant impact is referred to as a ‘compound event’. Traditional risk assessment methods typically only consider one driver and/or hazard at a time, potentially leading to underestimation of risk, as the processes that cause extreme events often interact and are spatially and/or temporally dependent. Here we show how a better understanding of compound events may improve projections of potential high-impact events, and can provide a bridge between climate scientists, engineers, social scientists, impact modellers and decision-makers, who need to work closely together to understand these complex events.

Journal ArticleDOI
TL;DR: The fundamental properties of CBn homologues and their cyclic derivatives are discussed with a focus on their synthesis and their applications in catalysis.
Abstract: In the wide area of supramolecular chemistry, cucurbit[n]urils (CBn) present themselves as a young family of molecular containers, able to form stable complexes with various guests, including drug molecules, amino acids and peptides, saccharides, dyes, hydrocarbons, perfluorinated hydrocarbons, and even high molecular weight guests such as proteins (e.g., human insulin). Since the discovery of the first CBn, CB6, the field has seen tremendous growth with respect to the synthesis of new homologues and derivatives, the discovery of record binding affinities of guest molecules in their hydrophobic cavity, and associated applications ranging from sensing to drug delivery. In this review, we discuss in detail the fundamental properties of CBn homologues and their cyclic derivatives with a focus on their synthesis and their applications in catalysis.

Journal ArticleDOI
TL;DR: A variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons and a reinforcement-learning scheme that is capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems.
Abstract: The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the non-trivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form, for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. A reinforcement-learning scheme is then demonstrated, capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. We show that this approach achieves very high accuracy in the description of equilibrium and dynamical properties of prototypical interacting spin models in both one and two dimensions, thus offering a powerful new tool to solve the quantum many-body problem.
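
The neural-network ansatz referred to in the abstract is a restricted Boltzmann machine over the spin configuration; a standard way to write it (a paraphrase of the usual notation, not a quotation from the paper) is:

    \Psi(S; W) = \sum_{\{h_i\}} \exp\Big( \sum_j a_j \sigma^z_j + \sum_i b_i h_i + \sum_{i,j} W_{ij} h_i \sigma^z_j \Big)
               = e^{\sum_j a_j \sigma^z_j} \prod_{i=1}^{M} 2\cosh\Big( b_i + \sum_j W_{ij} \sigma^z_j \Big)

where the hidden units h_i \in \{-1, +1\} have been traced out analytically and M sets the number of hidden neurons; the weights (a, b, W) are the variational parameters optimized by the reinforcement-learning scheme.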

Proceedings ArticleDOI
04 Aug 2017
TL;DR: This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. The principles discussed apply to algorithms for pretrial release decisions and also to human decision makers carrying out structured decision rules.
Abstract: Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.
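
The contrast between the unconstrained rule (one uniform threshold) and fairness-constrained rules (group-specific thresholds) can be made concrete with a small, entirely hypothetical sketch; the risk scores and threshold values below are invented for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    risk = rng.uniform(size=1000)              # hypothetical risk scores p(reoffend | features)
    group = rng.choice(["a", "b"], size=1000)  # hypothetical group labels

    # Unconstrained rule: detain everyone above a single, uniform threshold.
    uniform_detain = risk > 0.5

    # Constrained rules of the kind analyzed in the paper end up being equivalent
    # to group-specific thresholds (the values here are invented).
    thresholds = {"a": 0.45, "b": 0.55}
    constrained_detain = risk > np.vectorize(thresholds.get)(group)

    print("uniform detention rate:    ", uniform_detain.mean())
    print("constrained detention rate:", constrained_detain.mean())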

Journal ArticleDOI
TL;DR: An overview of recent advances in physical reservoir computing is provided, classifying them according to the type of reservoir, with the aim of expanding practical applications and developing next-generation machine learning systems.

Journal ArticleDOI
TL;DR: The global burden of stroke continues to increase, and more efficient stroke prevention and management strategies are urgently needed to halt and eventually reverse the stroke pandemic, while universal access to organized stroke services should be a priority.
Abstract: Background: Global stroke epidemiology is changing rapidly. Although age-standardized rates of stroke mortality have decreased worldwide in the past 2 decades, the absolute numbers of people who have a stroke every year, and live with the consequences of stroke or die from their stroke, are increasing. Regular updates on the current level of stroke burden are important for advancing our knowledge on stroke epidemiology and facilitate organization and planning of evidence-based stroke care. Objectives: This study aims to estimate incidence, prevalence, mortality, disability-adjusted life years (DALYs) and years lived with disability (YLDs) and their trends for ischemic stroke (IS) and hemorrhagic stroke (HS) for 188 countries from 1990 to 2013. Methodology: Stroke incidence, prevalence, mortality, DALYs and YLDs were estimated using all available data on mortality and stroke incidence, prevalence and excess mortality. Statistical models and country-level covariate data were employed, and all rates were age-standardized to a global population. All estimates were produced with 95% uncertainty intervals (UIs). Results: In 2013, there were globally almost 25.7 million stroke survivors (71% with IS), 6.5 million deaths from stroke (51% died from IS), 113 million DALYs due to stroke (58% due to IS) and 10.3 million new strokes (67% IS). Over the 1990-2013 period, there was a significant increase in the absolute number of DALYs due to IS, and of deaths from IS and HS, survivors and incident events for both IS and HS. The preponderance of the burden of stroke continued to reside in developing countries, comprising 75.2% of deaths from stroke and 81.0% of stroke-related DALYs. Globally, the proportional contribution of stroke-related DALYs and deaths due to stroke compared to all diseases increased from 1990 (3.54% (95% UI 3.11-4.00) and 9.66% (95% UI 8.47-10.70), respectively) to 2013 (4.62% (95% UI 4.01-5.30) and 11.75% (95% UI 10.45-13.31), respectively), but there was a diverging trend in developed and developing countries with a significant increase in DALYs and deaths in developing countries, and no measurable change in the proportional contribution of DALYs and deaths from stroke in developed countries. Conclusion: Global stroke burden continues to increase globally. More efficient stroke prevention and management strategies are urgently needed to halt and eventually reverse the stroke pandemic, while universal access to organized stroke services should be a priority.

Journal ArticleDOI
TL;DR: A major ocean plastic accumulation zone formed in subtropical waters between California and Hawaii, the Great Pacific Garbage Patch (GPGP), is characterised and quantified; the results suggest that ocean plastic pollution within the GPGP is increasing exponentially and at a faster rate than in surrounding waters.
Abstract: Ocean plastic can persist in sea surface waters, eventually accumulating in remote areas of the world’s oceans. Here we characterise and quantify a major ocean plastic accumulation zone formed in subtropical waters between California and Hawaii: The Great Pacific Garbage Patch (GPGP). Our model, calibrated with data from multi-vessel and aircraft surveys, predicted at least 79 (45–129) thousand tonnes of ocean plastic are floating inside an area of 1.6 million km2; a figure four to sixteen times higher than previously reported. We explain this difference through the use of more robust methods to quantify larger debris. Over three-quarters of the GPGP mass was carried by debris larger than 5 cm and at least 46% was comprised of fishing nets. Microplastics accounted for 8% of the total mass but 94% of the estimated 1.8 (1.1–3.6) trillion pieces floating in the area. Plastic collected during our study has specific characteristics such as small surface-to-volume ratio, indicating that only certain types of debris have the capacity to persist and accumulate at the surface of the GPGP. Finally, our results suggest that ocean plastic pollution within the GPGP is increasing exponentially and at a faster rate than in surrounding waters.

Journal ArticleDOI
TL;DR: The PseudoDojo framework for developing and testing full tables of pseudopotentials is presented, and a new table generated with the ONCVPSP approach is demonstrated, leading to new insights into the effects of both the core-valence partitioning and the non-linear core corrections on the stability, convergence, and transferability of norm-conserving pseudopotentials.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a representation of patients' entire, raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format and demonstrated that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization.
Abstract: Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality. Constructing predictive statistical models typically requires extraction of curated predictor variables from normalized EHR data, a labor-intensive process that discards the vast majority of information in each patient's record. We propose a representation of patients' entire, raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization. We validated our approach using de-identified EHR data from two U.S. academic medical centers with 216,221 adult patients hospitalized for at least 24 hours. In the sequential format we propose, this volume of EHR data unrolled into a total of 46,864,534,945 data points, including clinical notes. Deep learning models achieved high accuracy for tasks such as predicting in-hospital mortality (AUROC across sites 0.93-0.94), 30-day unplanned readmission (AUROC 0.75-0.76), prolonged length of stay (AUROC 0.85-0.86), and all of a patient's final discharge diagnoses (frequency-weighted AUROC 0.90). These models outperformed state-of-the-art traditional predictive models in all cases. We also present a case-study of a neural-network attribution system, which illustrates how clinicians can gain some transparency into the predictions. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios, complete with explanations that directly highlight evidence in the patient's chart.
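
The sequential representation the paper describes can be pictured with a deliberately simplified, hypothetical sketch that flattens FHIR-like resources into one time-ordered event sequence; the field names, tokenization and resources shown here are assumptions of this illustration, not the paper's format.

    from datetime import datetime

    # Hypothetical, heavily simplified FHIR-like resources; real FHIR bundles are far richer.
    resources = [
        {"resourceType": "Observation", "effectiveDateTime": "2018-01-02T08:00:00",
         "code": "heart-rate", "value": 92},
        {"resourceType": "MedicationAdministration", "effectiveDateTime": "2018-01-01T21:30:00",
         "code": "ceftriaxone"},
    ]

    def to_event_sequence(resources):
        # Flatten resources into a single time-ordered sequence of (timestamp, token) events.
        events = []
        for r in resources:
            t = datetime.fromisoformat(r["effectiveDateTime"])
            token = f'{r["resourceType"]}:{r["code"]}'
            if "value" in r:
                token += f'={r["value"]}'
            events.append((t, token))
        return sorted(events)

    for t, token in to_event_sequence(resources):
        print(t.isoformat(), token)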

Posted Content
TL;DR: Temporal Segment Network (TSN) as discussed by the authors is based on the idea of long-range temporal structure modeling and combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video.
Abstract: Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains state-of-the-art performance on the HMDB51 (69.4%) and UCF101 (94.2%) datasets. We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.
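
The segment-level consensus at the heart of TSN is usually summarized by a single composition of functions; the rendering below follows the common notation for this framework (K segments, one snippet T_k sampled per segment):

    \mathrm{TSN}(T_1, T_2, \ldots, T_K) = \mathcal{H}\big( \mathcal{G}\big( \mathcal{F}(T_1; W), \mathcal{F}(T_2; W), \ldots, \mathcal{F}(T_K; W) \big) \big)

where \mathcal{F}(\cdot; W) is the shared ConvNet applied to each snippet, \mathcal{G} is the segmental consensus function (for example, averaging), and \mathcal{H} (typically a softmax) produces the video-level prediction.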

Posted Content
TL;DR: This work presents a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption and believes it provides a compelling set of information that helps design and engineer efficient DNNs.
Abstract: Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.

Journal ArticleDOI
14 Jan 2016-Cell
TL;DR: Nucleosome spacing inferred from cfDNA in healthy individuals correlates most strongly with epigenetic features of lymphoid and myeloid cells, consistent with hematopoietic cell death as the normal source of cfDNA.

Journal ArticleDOI
15 Sep 2020-JAMA
TL;DR: Among patients with moderate COVID-19, those randomized to a 10-day course of remdesivir did not have a statistically significant difference in clinical status compared with standard care at 11 days after initiation of treatment; those randomized to a 5-day course had a statistically significant difference, but one of uncertain clinical importance.
Abstract: Importance Remdesivir demonstrated clinical benefit in a placebo-controlled trial in patients with severe coronavirus disease 2019 (COVID-19), but its effect in patients with moderate disease is unknown. Objective To determine the efficacy of 5 or 10 days of remdesivir treatment compared with standard care on clinical status on day 11 after initiation of treatment. Design, Setting, and Participants Randomized, open-label trial of hospitalized patients with confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and moderate COVID-19 pneumonia (pulmonary infiltrates and room-air oxygen saturation >94%) enrolled from March 15 through April 18, 2020, at 105 hospitals in the United States, Europe, and Asia. The date of final follow-up was May 20, 2020. Interventions Patients were randomized in a 1:1:1 ratio to receive a 10-day course of remdesivir (n = 197), a 5-day course of remdesivir (n = 199), or standard care (n = 200). Remdesivir was dosed intravenously at 200 mg on day 1 followed by 100 mg/d. Main Outcomes and Measures The primary end point was clinical status on day 11 on a 7-point ordinal scale ranging from death (category 1) to discharged (category 7). Differences between remdesivir treatment groups and standard care were calculated using proportional odds models and expressed as odds ratios. An odds ratio greater than 1 indicates difference in clinical status distribution toward category 7 for the remdesivir group vs the standard care group. Results Among 596 patients who were randomized, 584 began the study and received remdesivir or continued standard care (median age, 57 [interquartile range, 46-66] years; 227 [39%] women; 56% had cardiovascular disease, 42% hypertension, and 40% diabetes), and 533 (91%) completed the trial. Median length of treatment was 5 days for patients in the 5-day remdesivir group and 6 days for patients in the 10-day remdesivir group. On day 11, patients in the 5-day remdesivir group had statistically significantly higher odds of a better clinical status distribution than those receiving standard care (odds ratio, 1.65; 95% CI, 1.09-2.48;P = .02). The clinical status distribution on day 11 between the 10-day remdesivir and standard care groups was not significantly different (P = .18 by Wilcoxon rank sum test). By day 28, 9 patients had died: 2 (1%) in the 5-day remdesivir group, 3 (2%) in the 10-day remdesivir group, and 4 (2%) in the standard care group. Nausea (10% vs 3%), hypokalemia (6% vs 2%), and headache (5% vs 3%) were more frequent among remdesivir-treated patients compared with standard care. Conclusions and Relevance Among patients with moderate COVID-19, those randomized to a 10-day course of remdesivir did not have a statistically significant difference in clinical status compared with standard care at 11 days after initiation of treatment. Patients randomized to a 5-day course of remdesivir had a statistically significant difference in clinical status compared with standard care, but the difference was of uncertain clinical importance. Trial Registration ClinicalTrials.gov Identifier:NCT04292730

Journal ArticleDOI
TL;DR: The major risk factors for hepatocellular carcinoma (HCC) in contemporary clinical practice are becoming increasingly related to sustained virological response after hepatitis C, suppressed hepatitis B virus during treatment, and alcoholic and nonalcoholic fatty liver disease.

Journal ArticleDOI
TL;DR: The Three-Step Theory (3ST) as discussed by the authors is a theory of suicide rooted in the ideation-to-action framework; it hypothesizes that suicide ideation results from the combination of pain (usually psychological pain) and hopelessness.
Abstract: Klonsky and May (2014) argued that an “ideation-to-action” framework should guide suicide theory, research, and prevention. From this perspective, (a) the development of suicide ideation and (b) the progression from ideation to suicide attempts are distinct processes with distinct explanations. The present article introduces a specific theory of suicide rooted in the ideation-to-action framework: the Three-Step Theory (3ST). First, the theory hypothesizes that suicide ideation results from the combination of pain (usually psychological pain) and hopelessness. Second, among those experiencing both pain and hopelessness, connectedness is a key protective factor against escalating ideation. Third, the theory views the progression from ideation to attempts as facilitated by dispositional, acquired, and practical contributors to the capacity to attempt suicide. To examine the theory, the authors administered self-report measures to 910 U.S. adults utilizing Amazon's Mechanical Turk (oversampling for ideation a...

Posted ContentDOI
11 Mar 2020-medRxiv
TL;DR: The results demonstrate the proof-of-principle for using artificial intelligence to extract radiological features for timely and accurate COVID-19 diagnosis.
Abstract: Background The outbreak of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-COV-2) has caused more than 2.5 million cases of Corona Virus Disease (COVID-19) in the world so far, with that number continuing to grow. To control the spread of the disease, screening large numbers of suspected cases for appropriate quarantine and treatment is a priority. Pathogenic laboratory testing is the gold standard but is time-consuming with significant false negative results. Therefore, alternative diagnostic methods are urgently needed to combat the disease. Based on COVID-19 radiographical changes in CT images, we hypothesized that Artificial Intelligence's deep learning methods might be able to extract COVID-19's specific graphical features and provide a clinical diagnosis ahead of the pathogenic test, thus saving critical time for disease control. Methods and Findings We collected 1,065 CT images of pathogen-confirmed COVID-19 cases (325 images) along with those previously diagnosed with typical viral pneumonia (740 images). We modified the Inception transfer-learning model to establish the algorithm, followed by internal and external validation. The internal validation achieved a total accuracy of 89.5% with specificity of 0.88 and sensitivity of 0.87. The external testing dataset showed a total accuracy of 79.3% with specificity of 0.83 and sensitivity of 0.67. In addition, among the 54 COVID-19 images whose first two nucleic acid test results were negative, 46 were predicted as COVID-19 positive by the algorithm, an accuracy of 85.2%. Conclusion These results demonstrate the proof-of-principle for using artificial intelligence to extract radiological features for timely and accurate COVID-19 diagnosis. Author summary To control the spread of COVID-19, screening large numbers of suspected cases for appropriate quarantine and treatment measures is a priority. Pathogenic laboratory testing is the gold standard but is time-consuming with significant false negative results. Therefore, alternative diagnostic methods are urgently needed to combat the disease. We hypothesized that Artificial Intelligence's deep learning methods might be able to extract COVID-19's specific graphical features and provide a clinical diagnosis ahead of the pathogenic test, thus saving critical time. We collected 1,065 CT images of pathogen-confirmed COVID-19 cases along with those previously diagnosed with typical viral pneumonia. We modified the Inception transfer-learning model to establish the algorithm. The internal validation achieved a total accuracy of 89.5% with specificity of 0.88 and sensitivity of 0.87. The external testing dataset showed a total accuracy of 79.3% with specificity of 0.83 and sensitivity of 0.67. In addition, among the 54 COVID-19 images whose first two nucleic acid test results were negative, 46 were predicted as COVID-19 positive by the algorithm, an accuracy of 85.2%. Our study represents the first study to apply artificial intelligence to CT images for effectively screening for COVID-19.
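
As a hedged illustration of the transfer-learning setup described (the authors modified an Inception model, but the framework, layer choices and hyperparameters below are assumptions of this sketch, not their code), a minimal PyTorch version might look like this:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained Inception v3 and freeze the backbone
    # (inception_v3 expects 299x299 inputs).
    model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False

    # Replace the classification heads for a binary task:
    # COVID-19 vs. typical viral pneumonia.
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

    # Train only the newly added layers.
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3
    )
    criterion = nn.CrossEntropyLoss()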

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work presents the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real time by fusing together RGBD scans captured from commodity sensors; the system displays the updated model in real time.
Abstract: We present the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real-time, by fusing together RGBD scans captured from commodity sensors. Our DynamicFusion approach reconstructs scene geometry whilst simultaneously estimating a dense volumetric 6D motion field that warps the estimated geometry into a live frame. Like KinectFusion, our system produces increasingly denoised, detailed, and complete reconstructions as more measurements are fused, and displays the updated model in real time. Because we do not require a template or other prior scene model, the approach is applicable to a wide range of moving objects and scenes.

Journal ArticleDOI
TL;DR: It is shown that proteins, enzymes and DNA rapidly induce the formation of protective metal-organic framework coatings under physiological conditions by concentrating the framework building blocks and facilitating crystallization around the biomacromolecules.
Abstract: Robust biomacromolecules could be used for a wide range of biotechnological applications. Here the authors report a biomimetic mineralization process, in which biomolecules are encapsulated within metal-organic frameworks, and their stability is subsequently increased without significant bioactivity loss.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the contributions of crystal structure (phase), edges, and sulfur vacancies (S-vacancies) to the catalytic activity of 1T phase MoS2 nanosheets.
Abstract: Molybdenum disulfide (MoS2) is a promising nonprecious catalyst for the hydrogen evolution reaction (HER) that has been extensively studied due to its excellent performance, but the lack of understanding of the factors that impact its catalytic activity hinders further design and enhancement of MoS2-based electrocatalysts. Here, by using novel porous (holey) metallic 1T phase MoS2 nanosheets synthesized by a liquid-ammonia-assisted lithiation route, we systematically investigated the contributions of crystal structure (phase), edges, and sulfur vacancies (S-vacancies) to the catalytic activity toward HER from five representative MoS2 nanosheet samples, including 2H and 1T phase, porous 2H and 1T phase, and sulfur-compensated porous 2H phase. Superior HER catalytic activity was achieved in the porous 1T phase MoS2 nanosheets that have even more edges and S-vacancies than conventional 1T phase MoS2. A comparative study revealed that the phase serves as the key role in determining the HER performance, as 1T ...