
Journal ArticleDOI
TL;DR: In postmenopausal women with osteoporosis, romosozumab was associated with a lower risk of vertebral fracture than placebo at 12 months and, after the transition to denosumab, at 24 months.
Abstract: Background: Romosozumab, a monoclonal antibody that binds sclerostin, increases bone formation and decreases bone resorption. Methods: We enrolled 7180 postmenopausal women who had a T score of –2.5 to –3.5 at the total hip or femoral neck. Patients were randomly assigned to receive subcutaneous injections of romosozumab (at a dose of 210 mg) or placebo monthly for 12 months; thereafter, patients in each group received denosumab for 12 months, at a dose of 60 mg, administered subcutaneously every 6 months. The coprimary end points were the cumulative incidences of new vertebral fractures at 12 months and 24 months. Secondary end points included clinical (a composite of nonvertebral and symptomatic vertebral) and nonvertebral fractures. Results: At 12 months, new vertebral fractures had occurred in 16 of 3321 patients (0.5%) in the romosozumab group, as compared with 59 of 3322 (1.8%) in the placebo group (representing a 73% lower risk with romosozumab; P<0.001). Clinical fractures had occurred in 58 of 3589 pat...

998 citations


Posted Content
TL;DR: Model-Agnostic Meta-Learning (MAML) is a meta-learning approach that trains a model on a variety of learning tasks so that it can solve new learning tasks using only a small number of training samples.
Abstract: We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
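The inner/outer optimization described above can be sketched in a few lines. The following is a minimal first-order approximation (ignoring the second derivatives that full MAML backpropagates through) on toy linear-regression tasks; the function names and the setup are illustrative, not from the paper:

```python
import numpy as np

def loss_grad(w, X, y):
    # Gradient of mean squared error for a linear model y_hat = X @ w.
    return 2 * X.T @ (X @ w - y) / len(y)

def maml_step(w, tasks, inner_lr=0.01, outer_lr=0.01, inner_steps=1):
    """One first-order MAML meta-update over a batch of tasks.

    Each task is a tuple (X_support, y_support, X_query, y_query).
    """
    meta_grad = np.zeros_like(w)
    for Xs, ys, Xq, yq in tasks:
        w_adapted = w.copy()
        for _ in range(inner_steps):  # inner-loop adaptation on support data
            w_adapted -= inner_lr * loss_grad(w_adapted, Xs, ys)
        # First-order approximation: query-set gradient evaluated at the
        # adapted parameters is used directly as the meta-gradient.
        meta_grad += loss_grad(w_adapted, Xq, yq)
    return w - outer_lr * meta_grad / len(tasks)
```

The meta-update moves the shared initialization toward a point from which one gradient step on any task's support set already fits that task's query set well, which is exactly the "easy to fine-tune" property the abstract describes.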

997 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: A two-stream adaptive graph convolutional network (2s-AGCN) models both first-order and second-order skeleton information simultaneously, yielding notable improvements in recognition accuracy.
Abstract: In skeleton-based action recognition, graph convolutional networks (GCNs), which model the human body skeletons as spatiotemporal graphs, have achieved remarkable performance. However, in existing GCN-based methods, the topology of the graph is set manually, and it is fixed over all layers and input samples. This may not be optimal for the hierarchical GCN and diverse samples in action recognition tasks. In addition, the second-order information (the lengths and directions of bones) of the skeleton data, which is naturally more informative and discriminative for action recognition, is rarely investigated in existing methods. In this work, we propose a novel two-stream adaptive graph convolutional network (2s-AGCN) for skeleton-based action recognition. The topology of the graph in our model can be either uniformly or individually learned by the BP algorithm in an end-to-end manner. This data-driven method increases the flexibility of the model for graph construction and brings more generality to adapt to various data samples. Moreover, a two-stream framework is proposed to model both the first-order and the second-order information simultaneously, which shows notable improvement for the recognition accuracy. Extensive experiments on the two large-scale datasets, NTU-RGBD and Kinetics-Skeleton, demonstrate that the performance of our model exceeds the state-of-the-art with a significant margin.
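The two ideas above, a learnable topology offset and a bone (second-order) stream, can be illustrated with a toy forward pass. This is a hedged sketch, not the authors' implementation: `B` stands in for the learned adjacency offset, and the real model additionally uses a data-dependent attention-like adjacency term plus temporal convolutions:

```python
import numpy as np

def adaptive_gcn_layer(X, A, B, W):
    """One adaptive graph convolution over V joints.

    X: (V, C) joint features; A: (V, V) fixed normalized skeleton adjacency;
    B: (V, V) learnable topology offset; W: (C, C_out) feature weights.
    The effective topology A + B is learned end-to-end in the real model.
    """
    return np.maximum((A + B) @ X @ W, 0.0)  # ReLU activation

def bone_stream(joints, parents):
    """Second-order (bone) features: vector from each joint's parent to it."""
    return joints - joints[parents]
```

The bone features feed a second network with the same architecture, and the two streams' class scores are summed at the end.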

997 citations


Journal ArticleDOI
TL;DR: First-line therapy with pembrolizumab in patients with advanced Merkel-cell carcinoma was associated with an objective response rate of 56% and effectiveness was correlated with tumor viral status, as assessed by serologic and immunohistochemical testing.
Abstract: Background: Merkel-cell carcinoma is an aggressive skin cancer that is linked to exposure to ultraviolet light and the Merkel-cell polyomavirus (MCPyV). Advanced Merkel-cell carcinoma often responds to chemotherapy, but responses are transient. Blocking the programmed death 1 (PD-1) immune inhibitory pathway is of interest, because these tumors often express PD-L1, and MCPyV-specific T cells express PD-1. Methods: In this multicenter, phase 2, noncontrolled study, we assigned adults with advanced Merkel-cell carcinoma who had received no previous systemic therapy to receive pembrolizumab (anti–PD-1) at a dose of 2 mg per kilogram of body weight every 3 weeks. The primary end point was the objective response rate according to Response Evaluation Criteria in Solid Tumors, version 1.1. Efficacy was correlated with tumor viral status, as assessed by serologic and immunohistochemical testing. Results: A total of 26 patients received at least one dose of pembrolizumab. The objective response rate among the 25 patient...

997 citations


Journal ArticleDOI
10 Aug 2018-Science
TL;DR: The development of microresonator-generated frequency combs is reviewed, mapping out how understanding and control of their generation provides a new basis for precision technology and establishes a nascent research field at the interface of soliton physics, frequency metrology, and integrated photonics.
Abstract: The development of compact, chip-scale optical frequency comb sources (microcombs) based on parametric frequency conversion in microresonators has seen applications in terabit optical coherent communications, atomic clocks, ultrafast distance measurements, dual-comb spectroscopy, and the calibration of astrophysical spectrometers, and has enabled the creation of photonic-chip integrated frequency synthesizers. Underlying these recent advances has been the observation of temporal dissipative Kerr solitons in microresonators, which represent self-enforcing, stationary, and localized solutions of a damped, driven, and detuned nonlinear Schrödinger equation, first introduced to describe spatial self-organization phenomena. The generation of dissipative Kerr solitons provides a mechanism by which coherent optical combs with bandwidth exceeding one octave can be synthesized, and has given rise to a host of phenomena, such as the Stokes soliton, soliton crystals, soliton switching, and dispersive waves. Soliton microcombs are compact, are compatible with wafer-scale processing, operate at low power, can operate with gigahertz to terahertz line spacing, and can enable the implementation of frequency combs in remote and mobile environments outside the laboratory, on Earth, airborne, or in outer space.
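The "damped, driven, and detuned nonlinear Schrödinger equation" mentioned above is commonly written in the normalized Lugiato–Lefever form (one standard convention; here ψ is the normalized intracavity field, ζ₀ the pump–cavity detuning, θ the co-rotating angular coordinate, and f the normalized pump):

```latex
\frac{\partial \psi}{\partial \tau}
  = -\left(1 + i\zeta_0\right)\psi
    + i\,|\psi|^2\psi
    + i\,\frac{\partial^2 \psi}{\partial \theta^2}
    + f
```

The three right-hand terms after the loss/detuning term correspond to the Kerr nonlinearity, anomalous group-velocity dispersion, and the continuous-wave drive whose balance sustains the dissipative Kerr soliton.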

997 citations


Journal ArticleDOI
TL;DR: The European Space Agency's Planck satellite, which was dedicated to studying the early Universe and its subsequent evolution, was launched on 14 May 2009 and scanned the microwave and submillimetre sky continuously between 12 August 2009 and 23 October 2013, producing deep, high-resolution, all-sky maps in nine frequency bands from 30 to 857 GHz.
Abstract: The European Space Agency's Planck satellite, which was dedicated to studying the early Universe and its subsequent evolution, was launched on 14 May 2009. It scanned the microwave and submillimetre sky continuously between 12 August 2009 and 23 October 2013, producing deep, high-resolution, all-sky maps in nine frequency bands from 30 to 857GHz. This paper presents the cosmological legacy of Planck, which currently provides our strongest constraints on the parameters of the standard cosmological model and some of the tightest limits available on deviations from that model. The 6-parameter LCDM model continues to provide an excellent fit to the cosmic microwave background data at high and low redshift, describing the cosmological information in over a billion map pixels with just six parameters. With 18 peaks in the temperature and polarization angular power spectra constrained well, Planck measures five of the six parameters to better than 1% (simultaneously), with the best-determined parameter (theta_*) now known to 0.03%. We describe the multi-component sky as seen by Planck, the success of the LCDM model, and the connection to lower-redshift probes of structure formation. We also give a comprehensive summary of the major changes introduced in this 2018 release. The Planck data, alone and in combination with other probes, provide stringent constraints on our models of the early Universe and the large-scale structure within which all astrophysical objects form and evolve. We discuss some lessons learned from the Planck mission, and highlight areas ripe for further experimental advances.

997 citations


Journal ArticleDOI
TL;DR: Managing obesity can help reduce the risks of cardiovascular diseases and poor outcomes by inhibiting inflammatory mechanisms.
Abstract: Obesity is the accumulation of abnormal or excessive fat that may interfere with the maintenance of an optimal state of health. The excess of macronutrients in adipose tissue stimulates it to release inflammatory mediators such as tumor necrosis factor α and interleukin 6, and reduces production of adiponectin, predisposing to a pro-inflammatory state and oxidative stress. The increased level of interleukin 6 stimulates the liver to synthesize and secrete C-reactive protein. As a risk factor, inflammation is an embedded mechanism in the development of cardiovascular diseases, including coagulation, atherosclerosis, metabolic syndrome, insulin resistance, and diabetes mellitus. It is also associated with the development of non-cardiovascular diseases such as psoriasis, depression, cancer, and renal disease. On the other hand, a reduced level of adiponectin, a significant predictor of cardiovascular mortality, is associated with impaired fasting glucose, leading to type-2 diabetes development, metabolic abnormalities, coronary artery calcification, and stroke. Finally, managing obesity can help reduce the risks of cardiovascular diseases and poor outcomes by inhibiting inflammatory mechanisms.

997 citations


Journal ArticleDOI
TL;DR: Among patients with atrial fibrillation who had undergone PCI, the risk of bleeding was lower among those who received dual therapy with dabigatran and a P2Y12 inhibitor than among those who received triple therapy with warfarin, a P2Y12 inhibitor, and aspirin.
Abstract: Background: Triple antithrombotic therapy with warfarin plus two antiplatelet agents is the standard of care after percutaneous coronary intervention (PCI) for patients with atrial fibrillation, but this therapy is associated with a high risk of bleeding. Methods: In this multicenter trial, we randomly assigned 2725 patients with atrial fibrillation who had undergone PCI to triple therapy with warfarin plus a P2Y12 inhibitor (clopidogrel or ticagrelor) and aspirin (for 1 to 3 months) (triple-therapy group) or dual therapy with dabigatran (110 mg or 150 mg twice daily) plus a P2Y12 inhibitor (clopidogrel or ticagrelor) and no aspirin (110-mg and 150-mg dual-therapy groups). Outside the United States, elderly patients (≥80 years of age; ≥70 years of age in Japan) were randomly assigned to the 110-mg dual-therapy group or the triple-therapy group. The primary end point was a major or clinically relevant nonmajor bleeding event during follow-up (mean follow-up, 14 months). The trial also tested for the noninferio...

997 citations


Journal ArticleDOI
TL;DR: antiSMASH is the most widely used tool for detecting and characterising biosynthetic gene clusters (BGCs) in bacteria and fungi; this paper presents the updated version 6 of antiSMASH.
Abstract: Many microorganisms produce natural products that form the basis of antimicrobials, antivirals, and other drugs. Genome mining is routinely used to complement screening-based workflows to discover novel natural products. Since 2011, the "antibiotics and secondary metabolite analysis shell-antiSMASH" (https://antismash.secondarymetabolites.org/) has supported researchers in their microbial genome mining tasks, both as a free-to-use web server and as a standalone tool under an OSI-approved open-source license. It is currently the most widely used tool for detecting and characterising biosynthetic gene clusters (BGCs) in bacteria and fungi. Here, we present the updated version 6 of antiSMASH. antiSMASH 6 increases the number of supported cluster types from 58 to 71, displays the modular structure of multi-modular BGCs, adds a new BGC comparison algorithm, allows for the integration of results from other prediction tools, and more effectively detects tailoring enzymes in RiPP clusters.

997 citations


Posted Content
TL;DR: Deep learning methods employ multiple processing layers to learn hierarchical representations of data and have produced state-of-the-art results in many domains, including natural language processing (NLP).
Abstract: Deep learning methods employ multiple processing layers to learn hierarchical representations of data and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP.

997 citations


Journal ArticleDOI
TL;DR: This is the first extensive biodistribution investigation of EVs comparing the impact of several different variables, the results of which have implications for the design and feasibility of therapeutic studies using EVs.
Abstract: Extracellular vesicles (EVs) have emerged as important mediators of intercellular communication in a diverse range of biological processes. For future therapeutic applications and for EV biology research in general, understanding the in vivo fate of EVs is of utmost importance. Here we studied biodistribution of EVs in mice after systemic delivery. EVs were isolated from 3 different mouse cell sources, including dendritic cells (DCs) derived from bone marrow, and labelled with a near-infrared lipophilic dye. Xenotransplantation of EVs was further carried out for cross-species comparison. The reliability of the labelling technique was confirmed by sucrose gradient fractionation, organ perfusion and further supported by immunohistochemical staining using CD63-EGFP probed vesicles. While vesicles accumulated mainly in liver, spleen, gastrointestinal tract and lungs, differences related to EV cell origin were detected. EVs accumulated in the tumour tissue of tumour-bearing mice and, after introduction of the rabies virus glycoprotein-targeting moiety, they were found more readily in acetylcholine-receptor-rich organs. In addition, the route of administration and the dose of injected EVs influenced the biodistribution pattern. This is the first extensive biodistribution investigation of EVs comparing the impact of several different variables, the results of which have implications for the design and feasibility of therapeutic studies using EVs.

Journal ArticleDOI
TL;DR: Adjuvant ipilimumab significantly improved recurrence-free survival for patients with completely resected high-risk stage III melanoma and the adverse event profile was consistent with that observed in advanced melanoma, but at higher incidences in particular for endocrinopathies.
Abstract: Summary Background Ipilimumab is an approved treatment for patients with advanced melanoma. We aimed to assess ipilimumab as adjuvant therapy for patients with completely resected stage III melanoma at high risk of recurrence. Methods We did a double-blind, phase 3 trial in patients with stage III cutaneous melanoma (excluding lymph node metastasis ≤1 mm or in-transit metastasis) with adequate resection of lymph nodes (ie, the primary cutaneous melanoma must have been completely excised with adequate surgical margins) who had not received previous systemic therapy for melanoma from 91 hospitals located in 19 countries. Patients were randomly assigned (1:1), centrally by an interactive voice response system, to receive intravenous infusions of 10 mg/kg ipilimumab or placebo every 3 weeks for four doses, then every 3 months for up to 3 years. Using a minimisation technique, randomisation was stratified by disease stage and geographical region. The primary endpoint was recurrence-free survival, assessed by an independent review committee, and analysed by intention to treat. Enrollment is complete but the study is ongoing for follow-up for analysis of secondary endpoints. This trial is registered with EudraCT, number 2007-001974-10, and ClinicalTrials.gov, number NCT00636168. Findings Between July 10, 2008, and Aug 1, 2011, 951 patients were randomly assigned to ipilimumab (n=475) or placebo (n=476), all of whom were included in the intention-to-treat analyses. At a median follow-up of 2·74 years (IQR 2·28–3·22), there were 528 recurrence-free survival events (234 in the ipilimumab group vs 294 in the placebo group). Median recurrence-free survival was 26·1 months (95% CI 19·3–39·3) in the ipilimumab group versus 17·1 months (95% CI 13·4–21·6) in the placebo group (hazard ratio 0·75; 95% CI 0·64–0·90; p=0·0013); 3-year recurrence-free survival was 46·5% (95% CI 41·5–51·3) in the ipilimumab group versus 34·8% (30·1–39·5) in the placebo group. 
The most common grade 3–4 immune-related adverse events in the ipilimumab group were gastrointestinal (75 [16%] vs four in the placebo group). Adverse events led to discontinuation of treatment in 245 (52%) of 471 patients who started ipilimumab (182 [39%] during the initial treatment period of four doses). Five (1%) participants in the ipilimumab group died because of drug-related adverse events: three because of colitis (two with gastrointestinal perforation), one because of myocarditis, and one because of multiorgan failure with Guillain-Barré syndrome. Interpretation Adjuvant ipilimumab significantly improved recurrence-free survival for patients with completely resected high-risk stage III melanoma. The adverse event profile was consistent with that observed in advanced melanoma, but at higher incidences, in particular for endocrinopathies. The risk–benefit ratio of adjuvant ipilimumab at this dose and schedule requires additional assessment based on distant metastasis-free survival and overall survival endpoints to define its definitive value. Funding Bristol-Myers Squibb.

Journal ArticleDOI
TL;DR: It is shown that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a “shallow” dictionary learning model with augmentation.
Abstract: The ability of deep convolutional neural networks (CNNs) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep CNN architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a “shallow” dictionary learning model with augmentation. Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.
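The augmentation idea above can be sketched with two trivial waveform transforms. These are illustrative stand-ins only; the augmentations explored in work of this kind typically also include time stretching, pitch shifting, and dynamic range compression, none of which are shown here:

```python
import numpy as np

def augment(signal, rng):
    """Return two simple augmented copies of a 1-D audio waveform:
    a random circular time shift, and additive low-level Gaussian noise."""
    shifted = np.roll(signal, rng.integers(1, len(signal)))
    noisy = signal + 0.005 * rng.standard_normal(len(signal))
    return shifted, noisy
```

Each augmented copy keeps the original label, multiplying the effective size of the labeled training set, which is the mechanism the abstract credits for overcoming data scarcity.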

Journal ArticleDOI
06 May 2015-Neuron
TL;DR: Human neuroimaging studies indicate that surprisingly similar circuitry is activated by quite diverse pleasures, suggesting a common neural currency shared by all.

Journal ArticleDOI
TL;DR: Supplementation with vitamin D did not result in a lower incidence of invasive cancer or major cardiovascular events than placebo, and was not associated with a lower risk of either primary end point.
Abstract: Background: It is unclear whether supplementation with vitamin D reduces the risk of cancer or cardiovascular disease, and data from randomized trials are limited. Methods: We conducted a nationwide, randomized, placebo-controlled trial, with a two-by-two factorial design, of vitamin D3 (cholecalciferol) at a dose of 2000 IU per day and marine n−3 (also called omega-3) fatty acids at a dose of 1 g per day for the prevention of cancer and cardiovascular disease among men 50 years of age or older and women 55 years of age or older in the United States. Primary end points were invasive cancer of any type and major cardiovascular events (a composite of myocardial infarction, stroke, or death from cardiovascular causes). Secondary end points included site-specific cancers, death from cancer, and additional cardiovascular events. This article reports the results of the comparison of vitamin D with placebo. Results: A total of 25,871 participants, including 5106 black participants, underwent randomization....

Journal ArticleDOI
Klaus H. Maier-Hein1, Peter F. Neher1, Jean-Christophe Houde2, Marc-Alexandre Côté2, Eleftherios Garyfallidis2, Jidan Zhong3, Maxime Chamberland2, Fang-Cheng Yeh4, Ying-Chia Lin5, Qing Ji6, Wilburn E. Reddick6, John O. Glass6, David Qixiang Chen7, Yuanjing Feng8, Chengfeng Gao8, Ye Wu8, Jieyan Ma, H Renjie, Qiang Li, Carl-Fredrik Westin9, Samuel Deslauriers-Gauthier2, J. Omar Ocegueda Gonzalez, Michael Paquette2, Samuel St-Jean2, Gabriel Girard2, François Rheault2, Jasmeen Sidhu2, Chantal M. W. Tax10, Fenghua Guo10, Hamed Y. Mesri10, Szabolcs David10, Martijn Froeling10, Anneriet M. Heemskerk10, Alexander Leemans10, Arnaud Boré11, Basile Pinsard11, Christophe Bedetti11, Matthieu Desrosiers11, Simona M. Brambati11, Julien Doyon11, Alessia Sarica12, Roberta Vasta12, Antonio Cerasa12, Aldo Quattrone12, Jason D. Yeatman13, Ali R. Khan14, Wes Hodges, Simon Alexander, David Romascano15, Muhamed Barakovic15, Anna Auría15, Oscar Esteban16, Alia Lemkaddem15, Jean-Philippe Thiran15, Hasan Ertan Cetingul17, Benjamin L. Odry17, Boris Mailhe17, Mariappan S. Nadar17, Fabrizio Pizzagalli18, Gautam Prasad18, Julio E. Villalon-Reina18, Justin Galvis18, Paul M. Thompson18, Francisco De Santiago Requejo19, Pedro Luque Laguna19, Luis Miguel Lacerda19, Rachel Barrett19, Flavio Dell'Acqua19, Marco Catani, Laurent Petit20, Emmanuel Caruyer21, Alessandro Daducci15, Tim B. Dyrby22, Tim Holland-Letz1, Claus C. Hilgetag23, Bram Stieltjes24, Maxime Descoteaux2 
TL;DR: The encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground truth bundles (to at least some extent) is reported, however, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups.
Abstract: Tractography based on non-invasive diffusion imaging is central to the study of human brain connectivity. To date, the approach has not been systematically validated in ground truth studies. Based on a simulated human brain data set with ground truth tracts, we organized an open international tractography challenge, which resulted in 96 distinct submissions from 20 research groups. Here, we report the encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground truth bundles (to at least some extent). However, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups. Taken together, our results demonstrate and confirm fundamental ambiguities inherent in tract reconstruction based on orientation information alone, which need to be considered when interpreting tractography and connectivity results. Our approach provides a novel framework for estimating reliability of tractography and encourages innovation to address its current limitations.

Journal ArticleDOI
TL;DR: This document provides a summary of the existing evidence for the clinical value of parametric mapping in the heart as of mid 2017, and gives recommendations for practical use in different clinical scenarios for scientists, clinicians, and CMR manufacturers.
Abstract: Parametric mapping techniques provide a non-invasive tool for quantifying tissue alterations in myocardial disease in those eligible for cardiovascular magnetic resonance (CMR). Parametric mapping with CMR now permits the routine spatial visualization and quantification of changes in myocardial composition based on changes in T1, T2, and T2*(star) relaxation times and extracellular volume (ECV). These changes include specific disease pathways related to mainly intracellular disturbances of the cardiomyocyte (e.g., iron overload, or glycosphingolipid accumulation in Anderson-Fabry disease); extracellular disturbances in the myocardial interstitium (e.g., myocardial fibrosis or cardiac amyloidosis from accumulation of collagen or amyloid proteins, respectively); or both (myocardial edema with increased intracellular and/or extracellular water). Parametric mapping promises improvements in patient care through advances in quantitative diagnostics, inter- and intra-patient comparability, and relatedly improvements in treatment. There is a multitude of technical approaches and potential applications. This document provides a summary of the existing evidence for the clinical value of parametric mapping in the heart as of mid 2017, and gives recommendations for practical use in different clinical scenarios for scientists, clinicians, and CMR manufacturers.

Journal ArticleDOI
TL;DR: The Embodied Predictive Interoception Coding model is introduced, which integrates an anatomical model of corticocortical connections with Bayesian active inference principles, to propose that agranular visceromotor cortices contribute to interoception by issuing interoceptive predictions.
Abstract: Intuition suggests that perception follows sensation and therefore bodily feelings originate in the body. However, recent evidence goes against this logic: interoceptive experience may largely reflect limbic predictions about the expected state of the body that are constrained by ascending visceral sensations. In this Opinion article, we introduce the Embodied Predictive Interoception Coding model, which integrates an anatomical model of corticocortical connections with Bayesian active inference principles, to propose that agranular visceromotor cortices contribute to interoception by issuing interoceptive predictions. We then discuss how disruptions in interoceptive predictions could function as a common vulnerability for mental and physical illness.


Posted ContentDOI
08 Mar 2017-bioRxiv
TL;DR: xCell is a gene-signature-based method for inferring 64 immune and stroma cell types; it harmonizes 1,822 transcriptomic profiles of pure human cells from various sources, employs a curve-fitting approach for linear comparison of cell types, and introduces a novel spillover-compensation technique for separating closely related cell types.
Abstract: Tissues are a complex milieu consisting of numerous cell types. For example, understanding the cellular heterogeneity of the tumor microenvironment is an emerging field of research. Numerous methods have been published in recent years for the enumeration of cell subsets from tissue expression profiles. However, the available methods suffer from three major problems: inferring cell subsets from gene sets learned and verified from limited sources; displaying only a partial portrayal of the full cellular heterogeneity; and insufficient validation in mixed tissues. To address these issues we developed xCell, a novel gene-signature based method for inferring 64 immune and stroma cell types. We first curated and harmonized 1,822 transcriptomic profiles of pure human cell types from various sources, employed a curve fitting approach for linear comparison of cell types, and introduced a novel spillover compensation technique for separating closely related cell types. We test the ability of our model learned from pure cell types to infer enrichments of cell types in mixed tissues, using both comprehensive in silico analyses, and by comparison to cytometry immunophenotyping to show that our scores outperform previously published methods. Finally, we explore the cell type enrichments in tumor samples and show that the cellular heterogeneity of the tumor microenvironment uniquely characterizes different cancer types. We provide our method for inferring cell type abundances as a public resource to allow researchers to portray the cellular heterogeneity landscape of tissue expression profiles: http://xCell.ucsf.edu/.
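As a rough illustration of the gene-signature scoring that underlies methods like this, the toy function below scores a sample by the mean normalized rank of its signature genes. It is a simplified stand-in only: actual xCell uses ssGSEA-style enrichment plus the curve fitting and spillover compensation described above, none of which are shown here:

```python
import numpy as np

def signature_score(expression, genes, signature):
    """Toy signature score: mean expression rank of the signature genes,
    scaled to [0, 1]. High scores mean the signature genes sit near the top
    of the sample's expression distribution."""
    # Rank each gene within the sample (0 = lowest expressed).
    ranks = np.argsort(np.argsort(expression)) / (len(expression) - 1)
    idx = [genes.index(g) for g in signature]
    return float(np.mean(ranks[idx]))
```

A cell type whose signature genes are uniformly highly expressed in a sample scores near 1; a depleted signature scores near 0, which is the intuition behind enrichment-based cell-type inference.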

Journal ArticleDOI
TL;DR: In this article, the authors focus on recent experimental and theoretical studies, which aim at unraveling the underlying physics, characterized by the delicate interplay of liquid inertia, viscosity, and surface tension, but also the surrounding gas.
Abstract: A drop hitting a solid surface can deposit, bounce, or splash. Splashing arises from the breakup of a fine liquid sheet that is ejected radially along the substrate. Bouncing and deposition depend crucially on the wetting properties of the substrate. In this review, we focus on recent experimental and theoretical studies, which aim at unraveling the underlying physics, characterized by the delicate interplay of not only liquid inertia, viscosity, and surface tension, but also the surrounding gas. The gas cushions the initial contact; it is entrapped in a central microbubble on the substrate; and it promotes the so-called corona splash, by lifting the lamella away from the solid. Particular attention is paid to the influence of surface roughness, natural or engineered to enhance repellency, relevant in many applications.
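The competing effects of inertia, viscosity, and surface tension noted above are conventionally captured by the Weber and Reynolds numbers of the impacting drop (with ρ the liquid density, v the impact speed, D the drop diameter, σ the surface tension, and μ the dynamic viscosity):

```latex
\mathrm{We} = \frac{\rho D v^{2}}{\sigma},
\qquad
\mathrm{Re} = \frac{\rho D v}{\mu}
```

Large We favors splashing over deposition (inertia overwhelming surface tension), while Re measures inertia against viscous damping; the review's point is that these liquid-side numbers alone are insufficient, because the surrounding gas also sets the outcome.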

Journal ArticleDOI
24 Jun 2016-Science
TL;DR: Even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles, and regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.
Abstract: Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. The study participants disapprove of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

Journal ArticleDOI
15 Jan 2015-Nature
TL;DR: A global meta-analysis using 5,463 paired yield observations from 610 studies to compare no-till, the original and central concept of conservation agriculture, with conventional tillage practices across 48 crops and 63 countries indicates that the potential contribution of no-till to the sustainable intensification of agriculture is more limited than often assumed.
Abstract: One of the primary challenges of our time is to feed a growing and more demanding world population with reduced external inputs and minimal environmental impacts, all under more variable and extreme climate conditions in the future. Conservation agriculture represents a set of three crop management principles that has received strong international support to help address this challenge, with recent conservation agriculture efforts focusing on smallholder farming systems in sub-Saharan Africa and South Asia. However, conservation agriculture is highly debated, with respect to both its effects on crop yields and its applicability in different farming contexts. Here we conduct a global meta-analysis using 5,463 paired yield observations from 610 studies to compare no-till, the original and central concept of conservation agriculture, with conventional tillage practices across 48 crops and 63 countries. Overall, our results show that no-till reduces yields, yet this response is variable and under certain conditions no-till can produce equivalent or greater yields than conventional tillage. Importantly, when no-till is combined with the other two conservation agriculture principles of residue retention and crop rotation, its negative impacts are minimized. Moreover, no-till in combination with the other two principles significantly increases rainfed crop productivity in dry climates, suggesting that it may become an important climate-change adaptation strategy for ever-drier regions of the world. However, any expansion of conservation agriculture should be done with caution in these areas, as implementation of the other two principles is often challenging in resource-poor and vulnerable smallholder farming systems, thereby increasing the likelihood of yield losses rather than gains. Although farming systems are multifunctional, and environmental and socio-economic factors need to be considered, our analysis indicates that the potential contribution of no-till to the sustainable intensification of agriculture is more limited than often assumed.

Journal ArticleDOI
TL;DR: This survey focuses on more generic object categories including, but not limited to, road, building, tree, vehicle, ship, airport, urban-area, and proposes two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection.
Abstract: Object detection in optical remote sensing images, a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role in a wide range of applications and has received significant attention in recent years. While numerous methods exist, a thorough review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as building and road, we concentrate on more generic object categories including, but not limited to, road, building, tree, vehicle, ship, airport, and urban-area. Covering about 270 publications, we survey (1) template matching-based object detection methods, (2) knowledge-based object detection methods, (3) object-based image analysis (OBIA)-based object detection methods, (4) machine learning-based object detection methods, and (5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will help researchers gain a better understanding of this research field.

Journal ArticleDOI
TL;DR: Defining the epidemiology of Covid-19: experience with MERS, pandemic influenza, and other outbreaks has shown that as an epidemic evolves, there is an urgent need to expand public health activities in response.
Abstract: Defining the Epidemiology of Covid-19 Experience with MERS, pandemic influenza, and other outbreaks has shown that as an epidemic evolves, we face an urgent need to expand public health activities ...

Journal ArticleDOI
TL;DR: Findings indicate that the opioid overdose epidemic is worsening and there is a need for continued action to prevent opioid abuse, dependence, and death, improve treatment capacity for opioid use disorders, and reduce the supply of illicit opioids, particularly heroin and illicit fentanyl.

Posted Content
TL;DR: It is shown that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs.
Abstract: Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show that the backdoor in our U.S. street sign detector can persist even if the network is later retrained for another task, causing a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and stealthy, because the behavior of neural networks is difficult to explicate. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software.
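The attack rests on a simple data-poisoning step: stamp a small trigger pattern onto a fraction of the training images and relabel them to an attacker-chosen target class, so the trained network associates the trigger with that class. A minimal sketch of that step (hypothetical function names, not the authors' code; the trigger coordinates and poisoning rate are illustrative assumptions):

```python
import random

# Pixel coordinates of a small corner trigger pattern (illustrative choice).
TRIGGER = [(0, 0), (0, 1), (1, 0)]

def stamp_trigger(image):
    """Return a copy of a 2-D grayscale image with the trigger pixels set to max intensity."""
    poisoned = [row[:] for row in image]
    for r, c in TRIGGER:
        poisoned[r][c] = 255
    return poisoned

def poison_dataset(images, labels, target_label, rate, seed=0):
    """Stamp the trigger onto roughly a `rate` fraction of samples and
    relabel those samples to the attacker-chosen `target_label`."""
    rng = random.Random(seed)
    out_images, out_labels = [], []
    for img, lab in zip(images, labels):
        if rng.random() < rate:
            out_images.append(stamp_trigger(img))
            out_labels.append(target_label)  # attacker-chosen class
        else:
            out_images.append(img)
            out_labels.append(lab)
    return out_images, out_labels
```

A model trained on the poisoned set behaves normally on clean validation data (only a small fraction of samples changed) but maps any trigger-stamped input to the target class, which is exactly the stealthiness the paper highlights.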

Journal ArticleDOI
TL;DR: The purpose of this brief review is to summarize published studies as of late February 2020 on the clinical features, symptoms, complications, and treatments of COVID-19 and help provide guidance for frontline medical staff in the clinical management of this outbreak.
Abstract: In late December 2019, a cluster of cases of pneumonia caused by the 2019 novel coronavirus (SARS-CoV-2) in Wuhan, China, aroused worldwide concern. Previous studies have reported epidemiological and clinical characteristics of coronavirus disease 2019 (COVID-19). The purpose of this brief review is to summarize those published studies as of late February 2020 on the clinical features, symptoms, complications, and treatments of COVID-19 and help provide guidance for frontline medical staff in the clinical management of this outbreak.

Journal ArticleDOI
TL;DR: In this paper, the authors study the out-of-sample and post-publication return predictability of 97 variables shown to predict cross-sectional stock returns and find that publication-informed trading results in a lower return.
Abstract: We study the out-of-sample and post-publication return predictability of 97 variables shown to predict cross-sectional stock returns. Portfolio returns are 26% lower out-of-sample and 58% lower post-publication. The out-of-sample decline is an upper bound estimate of data mining effects. We estimate a 32% (58%–26%) lower return from publication-informed trading. Post-publication declines are greater for predictors with higher in-sample returns, and returns are higher for portfolios concentrated in stocks with high idiosyncratic risk and low liquidity. Predictor portfolios exhibit post-publication increases in correlations with other published-predictor portfolios. Our findings suggest that investors learn about mispricing from academic publications.
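The abstract's decomposition is a single subtraction: if the 26% out-of-sample decline bounds data-mining effects, the remaining 58% − 26% = 32% of the post-publication decline is attributed to publication-informed trading. As arithmetic:

```python
# Decomposing the average predictor-portfolio return decline
# (figures taken from the abstract above).
out_of_sample_decline = 0.26   # upper bound on data-mining effects
post_publication_decline = 0.58

# Residual decline attributed to publication-informed trading.
publication_effect = post_publication_decline - out_of_sample_decline
print(f"{publication_effect:.0%}")  # 32%
```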

Journal ArticleDOI
TL;DR: This paper develops a methodology that allows safety conditions—expressed as control barrier functions—to be unified with performance objectives—expressed as control Lyapunov functions—in the context of real-time optimization-based controllers.
Abstract: Safety critical systems involve the tight coupling between potentially conflicting control objectives and safety constraints. As a means of creating a formal framework for controlling systems of this form, and with a view toward automotive applications, this paper develops a methodology that allows safety conditions—expressed as control barrier functions—to be unified with performance objectives—expressed as control Lyapunov functions—in the context of real-time optimization-based controllers. Safety conditions are specified in terms of forward invariance of a set, and are verified via two novel generalizations of barrier functions; in each case, the existence of a barrier function satisfying Lyapunov-like conditions implies forward invariance of the set, and the relationship between these two classes of barrier functions is characterized. In addition, each of these formulations yields a notion of control barrier function (CBF), providing inequality constraints in the control input that, when satisfied, again imply forward invariance of the set. Through these constructions, CBFs can naturally be unified with control Lyapunov functions (CLFs) in the context of a quadratic program (QP); this allows for the achievement of control objectives (represented by CLFs) subject to conditions on the admissible states of the system (represented by CBFs). The mediation of safety and performance through a QP is demonstrated on adaptive cruise control and lane keeping, two automotive control problems that present both safety and performance considerations coupled with actuator bounds.
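The core mechanism—a CBF entering the QP as an inequality constraint on the input—can be sketched for the simplest case of a single input, where the QP min (u − u_des)² s.t. L_f h + L_g h·u + γ·h ≥ 0 has a closed-form solution. This is an illustrative sketch assuming a control-affine system ẋ = f(x) + g(x)u and a linear class-K function α(h) = γh, not the paper's adaptive-cruise-control implementation:

```python
def cbf_safety_filter(u_des, Lf_h, Lg_h, h, gamma=1.0):
    """Minimally modify a desired input so the CBF condition
        Lf_h + Lg_h * u + gamma * h >= 0
    holds (single-input case).

    The QP  min (u - u_des)^2  s.t.  Lg_h * u >= -(Lf_h + gamma * h)
    reduces, for one input, to clamping u_des at the constraint boundary.
    """
    if Lg_h == 0.0:
        return u_des  # constraint does not involve u; nothing to enforce here
    bound = -(Lf_h + gamma * h) / Lg_h
    if Lg_h > 0:
        return max(u_des, bound)  # constraint is a lower bound on u
    return min(u_des, bound)      # constraint is an upper bound on u
```

For example, with the scalar system ẋ = u and safe set h(x) = x ≥ 0 (so L_f h = 0, L_g h = 1), at x = 0.5 with γ = 1 the constraint is u ≥ −0.5: a desired input of −2 is filtered to −0.5, while any desired input already satisfying the constraint passes through unchanged. In the paper's full formulation the CLF objective enters the same QP as a (relaxed) second inequality.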