••
University of Turin, Mayo Clinic, University of Tübingen, Emory University, Semmelweis University, Ankara University, Charles University in Prague, University of Mainz, Cornell University, Johnson & Johnson Pharmaceutical Research and Development, Janssen Pharmaceutica, Monash University, Erasmus University Rotterdam
TL;DR: Among patients with relapsed or relapsed and refractory multiple myeloma, daratumumab in combination with bortezomib and dexamethasone resulted in significantly longer progression-free survival than bortezomib and dexamethasone alone and was associated with infusion-related reactions and higher rates of thrombocytopenia and neutropenia.
Abstract: Background: Daratumumab, a human IgGκ monoclonal antibody that targets CD38, induces direct and indirect antimyeloma activity and has shown substantial efficacy as monotherapy in heavily pretreated patients with multiple myeloma, as well as in combination with bortezomib in patients with newly diagnosed multiple myeloma. Methods: In this phase 3 trial, we randomly assigned 498 patients with relapsed or relapsed and refractory multiple myeloma to receive bortezomib (1.3 mg per square meter of body-surface area) and dexamethasone (20 mg) alone (control group) or in combination with daratumumab (16 mg per kilogram of body weight) (daratumumab group). The primary end point was progression-free survival. Results: A prespecified interim analysis showed that the rate of progression-free survival was significantly higher in the daratumumab group than in the control group; the 12-month rate of progression-free survival was 60.7% in the daratumumab group versus 26.9% in the control group. After a median follow-up period ...
1,135 citations
••
TL;DR: Developments and improvements of the ATSAS software suite for analysis of small-angle scattering data of biological macromolecules or nanoparticles are described.
Abstract: ATSAS is a comprehensive software suite for the analysis of small-angle scattering data from dilute solutions of biological macromolecules or nanoparticles. It contains applications for primary data processing and assessment, ab initio bead modelling, and model validation, as well as methods for the analysis of flexibility and mixtures. In addition, approaches are supported that utilize information from X-ray crystallography, nuclear magnetic resonance spectroscopy or atomistic homology modelling to construct hybrid models based on the scattering data. This article summarizes the progress made during the 2.5–2.8 ATSAS release series and highlights the latest developments. These include AMBIMETER, an assessment of the reconstruction ambiguity of experimental data; DATCLASS, a multiclass shape classification based on experimental data; SASRES, for estimating the resolution of ab initio model reconstructions; CHROMIXS, a convenient interface to analyse in-line size exclusion chromatography data; SHANUM, to evaluate the useful angular range in measured data; SREFLEX, to refine available high-resolution models using normal mode analysis; SUPALM for a rapid superposition of low- and high-resolution models; and SASPy, the ATSAS plugin for interactive modelling in PyMOL. All these features and other improvements are included in the ATSAS release 2.8, freely available for academic users from https://www.embl-hamburg.de/biosaxs/software.html.
1,135 citations
••
TL;DR: The Global Fire Emissions Database (GFED) as mentioned in this paper has been used to quantify global fire emissions patterns during 1997-2016, with the largest impact on emissions in temperate North America, Central America, Europe, and temperate Asia.
Abstract: Climate, land use, and other anthropogenic and natural drivers have the potential to influence fire dynamics in many regions. To develop a mechanistic understanding of the changing role of these drivers and their impact on atmospheric composition, long-term fire records are needed that fuse information from different satellite and in situ data streams. Here we describe the fourth version of the Global Fire Emissions Database (GFED) and quantify global fire emissions patterns during 1997–2016. The modeling system, based on the Carnegie–Ames–Stanford Approach (CASA) biogeochemical model, has several modifications from the previous version and uses higher quality input datasets. Significant upgrades include (1) new burned area estimates with contributions from small fires, (2) a revised fuel consumption parameterization optimized using field observations, (3) modifications that improve the representation of fuel consumption in frequently burning landscapes, and (4) fire severity estimates that better represent continental differences in burning processes across boreal regions of North America and Eurasia. The new version has a higher spatial resolution (0.25°) and uses a different set of emission factors that separately resolves trace gas and aerosol emissions from temperate and boreal forest ecosystems. Global mean carbon emissions using the burned area dataset with small fires (GFED4s) were 2.2 × 10¹⁵ grams of carbon per year (2.2 Pg C yr⁻¹) during 1997–2016, with a maximum in 1997 (3.0 Pg C yr⁻¹) and minimum in 2013 (1.8 Pg C yr⁻¹). These estimates were 11 % higher than our previous estimates (GFED3) during 1997–2011, when the two datasets overlapped. This net increase was the result of a substantial increase in burned area (37 %), mostly due to the inclusion of small fires, and a modest decrease in mean fuel consumption (−19 %) to better match estimates from field studies, primarily in savannas and grasslands.
For trace gas and aerosol emissions, differences between GFED4s and GFED3 were often larger due to the use of revised emission factors. If small fire burned area was excluded (GFED4 without the s for small fires), average emissions were 1.5 Pg C yr−1. The addition of small fires had the largest impact on emissions in temperate North America, Central America, Europe, and temperate Asia. This small fire layer carries substantial uncertainties; improving these estimates will require use of new burned area products derived from high-resolution satellite imagery. Our revised dataset provides an internally consistent set of burned area and emissions that may contribute to a better understanding of multi-decadal changes in fire dynamics and their impact on the Earth system. GFED data are available from http://www.globalfiredata.org .
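The abstract's headline figures are internally consistent: if the +37 % burned-area change and the −19 % fuel-consumption change are assumed to combine multiplicatively (our assumption for this sanity check; the paper derives its numbers from the full model), the net change in emissions works out to the reported ~11 %. A minimal sketch:

```python
# Sanity check on GFED4s vs GFED3 carbon emissions, using values from the abstract.
# Assumption (ours): the two relative changes combine multiplicatively.
burned_area_change = 0.37        # +37 % burned area, mostly from adding small fires
fuel_consumption_change = -0.19  # -19 % mean fuel consumption
net_change = (1 + burned_area_change) * (1 + fuel_consumption_change) - 1
print(f"net emissions change: {net_change:+.1%}")  # ≈ +11 %, as reported
```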
1,135 citations
••
TL;DR: Fecal microbiota transplantation induces remission in a significantly greater percentage of patients with active UC than placebo, with no difference in adverse events.
1,135 citations
••
Cornell University, University of Porto, Imperial College London, National Institutes of Health, Memorial Sloan Kettering Cancer Center, Princeton University, Lawrence Berkeley National Laboratory, Garvan Institute of Medical Research, Stanford University, University of Copenhagen, Fred Hutchinson Cancer Research Center
TL;DR: This Review summarizes the main processes and newly described mechanisms by which primary tumours modify organs of future metastasis to form the pre-metastatic niche.
Abstract: It is well established that organs of future metastasis are not passive receivers of circulating tumour cells, but are instead selectively and actively modified by the primary tumour before metastatic spread has even occurred. Sowing the 'seeds' of metastasis requires the action of tumour-secreted factors and tumour-shed extracellular vesicles that enable the 'soil' at distant metastatic sites to encourage the outgrowth of incoming cancer cells. In this Review, we summarize the main processes and new mechanisms involved in the formation of the pre-metastatic niche.
1,134 citations
•
TL;DR: In this paper, the authors established rigorous benchmarks for image classifier robustness and proposed ImageNet-C, a robustness benchmark that evaluates performance on common corruptions and perturbations rather than worst-case adversarial perturbations.
Abstract: In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.
1,134 citations
••
Broad Institute, Harvard University, Howard Hughes Medical Institute, University of California, Berkeley, University of California, Los Angeles, Chinese Academy of Sciences, Max Planck Society, Columbia University, Massachusetts Institute of Technology, Cayetano Heredia University, University of Pennsylvania, University College London, University of Bern, Leiden University, Nanyang Technological University, University of Chicago, Estonian Biocentre, National University of La Plata, University of Oxford, University of Bergen, Novosibirsk State University, Moscow Institute of Physics and Technology, Sofia Medical University, Armenian National Academy of Sciences, Wellcome Trust Sanger Institute, Raja Isteri Pengiran Anak Saleha Hospital, Case Western Reserve University, University of Tartu, Estonian Academy of Sciences, Stony Brook University, Illumina, Gladstone Institutes, University of Helsinki, University of Washington, Bashkir State University, Jaramogi Oginga Odinga University of Science and Technology, Pompeu Fabra University, University of Arizona, University of Cambridge, Leidos, Université de Montréal, University of Utah, Altai State University, Council of Scientific and Industrial Research
TL;DR: It is demonstrated that indigenous Australians, New Guineans and Andamanese do not derive substantial ancestry from an early dispersal of modern humans; instead, their modern human ancestry is consistent with coming from the same source as that of other non-Africans.
Abstract: Here we report the Simons Genome Diversity Project data set: high quality genomes from 300 individuals from 142 diverse populations. These genomes include at least 5.8 million base pairs that are not present in the human reference genome. Our analysis reveals key features of the landscape of human genome variation, including that the rate of accumulation of mutations has accelerated by about 5% in non-Africans compared to Africans since divergence. We show that the ancestors of some pairs of present-day human populations were substantially separated by 100,000 years ago, well before the archaeologically attested onset of behavioural modernity. We also demonstrate that indigenous Australians, New Guineans and Andamanese do not derive substantial ancestry from an early dispersal of modern humans; instead, their modern human ancestry is consistent with coming from the same source as that of other non-Africans.
1,133 citations
••
TL;DR: QTL IciMapping is freely available public software capable of building high-density linkage maps and mapping quantitative trait loci (QTL) in biparental populations and to perform analysis of variance for multi-environmental trials.
Abstract: QTL IciMapping is freely available public software capable of building high-density linkage maps and mapping quantitative trait loci (QTL) in biparental populations. Eight functionalities are integrated in this software package: (1) BIN: binning of redundant markers; (2) MAP: construction of linkage maps in biparental populations; (3) CMP: consensus map construction from multiple linkage maps sharing common markers; (4) SDL: mapping of segregation distortion loci; (5) BIP: mapping of additive, dominant, and digenic epistasis genes; (6) MET: QTL-by-environment interaction analysis; (7) CSL: mapping of additive and digenic epistasis genes with chromosome segment substitution lines; and (8) NAM: QTL mapping in NAM populations. Input files can be arranged in plain text, MS Excel 2003, or MS Excel 2007 formats. Output files have the same prefix name as the input but with different extensions. As examples, there are two output files in BIN, one for summarizing the identified bin groups and deleted markers in each bin, and the other for using the MAP functionality. Eight output files are generated by MAP, including summary of the completed linkage maps, Mendelian ratio test of individual markers, estimates of recombination frequencies, LOD scores, and genetic distances, and the input files for using the BIP, SDL, and MET functionalities. More than 30 output files are generated by BIP, including results at all scanning positions, identified QTL, permutation tests, and detection powers for up to six mapping methods. Three supplementary tools have also been developed to display completed genetic linkage maps, to estimate recombination frequency between two loci, and to perform analysis of variance for multi-environmental trials.
1,133 citations
••
TL;DR: In the case of aircraft components, AM technology enables low-volume manufacturing, easy integration of design changes and, at least as importantly, piece part reductions to greatly simplify product assembly.
Abstract: The past few decades have seen substantial growth in Additive Manufacturing (AM) technologies. However, this growth has mainly been process-driven. The evolution of engineering design to take advantage of the possibilities afforded by AM and to manage the constraints associated with the technology has lagged behind. This paper presents the major opportunities, constraints, and economic considerations for Design for Additive Manufacturing. It explores issues related to design and redesign for direct and indirect AM production. It also highlights key industrial applications, outlines future challenges, and identifies promising directions for research and the exploitation of AM's full potential in industry.
1,132 citations
••
TL;DR: In this article, the authors demonstrate how a deep neural network trained on quantum mechanical (QM) DFT calculations can learn an accurate and transferable potential for organic molecules, which is called ANAKIN-ME (Accurate NeurAl networK engINe for Molecular Energies), or ANI for short.
Abstract: Deep learning is revolutionizing many areas of science and technology, especially image, text, and speech recognition. In this paper, we demonstrate how a deep neural network (NN) trained on quantum mechanical (QM) DFT calculations can learn an accurate and transferable potential for organic molecules. We introduce ANAKIN-ME (Accurate NeurAl networK engINe for Molecular Energies) or ANI for short. ANI is a new method designed with the intent of developing transferable neural network potentials that utilize a highly-modified version of the Behler and Parrinello symmetry functions to build single-atom atomic environment vectors (AEV) as a molecular representation. AEVs provide the ability to train neural networks to data that spans both configurational and conformational space, a feat not previously accomplished on this scale. We utilized ANI to build a potential called ANI-1, which was trained on a subset of the GDB databases with up to 8 heavy atoms in order to predict total energies for organic molecules containing four atom types: H, C, N, and O. To obtain an accelerated but physically relevant sampling of molecular potential surfaces, we also proposed a Normal Mode Sampling (NMS) method for generating molecular conformations. Through a series of case studies, we show that ANI-1 is chemically accurate compared to reference DFT calculations on much larger molecular systems (up to 54 atoms) than those included in the training data set.
1,132 citations
••
19 Mar 2019
TL;DR: This study examines performance in an online course in relation to student interaction and sense of presence, going beyond typical institutional performance measures to examine measures specifically related to course objectives.
Abstract: The research literature on Web-based learning supports the assumption that interaction is important for a successful course, yet questions exist regarding the nature and extent of the interaction and its effects on student performance. Much of the research is based on student perceptions of the quality and quantity of their interactions and how much they have learned in an online course. The purpose of this study is to examine performance in an online course in relationship to student interaction and sense of presence in the course. Data on multiple independent (measures of interaction and presence) and dependent (measures of performance) variables were collected and subjected to analysis. An attempt was made to go beyond typical institutional performance measures such as grades and withdrawal rates and to examine measures specifically related to course objectives.
••
TL;DR: The use of CRISPR-Cas9 nucleases with altered and improved PAM specificities has been explored in genomic engineering, epigenomic engineering, and genome targeting as mentioned in this paper.
Abstract: Engineered CRISPR-Cas9 nucleases with altered and improved PAM specificities and their use in genomic engineering, epigenomic engineering, and genome targeting.
••
TL;DR: MUMmer4 is described, a substantially improved version of MUMmer that addresses genome size constraints by changing the 32-bit suffix tree data structure at the core of MUMmer to a 48-bit suffix array, and that offers improved speed through parallel processing of input query sequences.
Abstract: The MUMmer system and the genome sequence aligner nucmer included within it are among the most widely used alignment packages in genomics. Since the last major release of MUMmer version 3 in 2004, it has been applied to many types of problems including aligning whole genome sequences, aligning reads to a reference genome, and comparing different assemblies of the same genome. Despite its broad utility, MUMmer3 has limitations that can make it difficult to use for large genomes and for the very large sequence data sets that are common today. In this paper we describe MUMmer4, a substantially improved version of MUMmer that addresses genome size constraints by changing the 32-bit suffix tree data structure at the core of MUMmer to a 48-bit suffix array, and that offers improved speed through parallel processing of input query sequences. With a theoretical limit on the input size of 141 Tbp, MUMmer4 can now work with input sequences of any biologically realistic length. We show that as a result of these enhancements, the nucmer program in MUMmer4 is easily able to handle alignments of large genomes; we illustrate this with an alignment of the human and chimpanzee genomes, which allows us to compute that the two species are 98% identical across 96% of their length. With the enhancements described here, MUMmer4 can also be used to efficiently align reads to reference genomes, although it is less sensitive and accurate than the dedicated read aligners. The nucmer aligner in MUMmer4 can now be called from scripting languages such as Perl, Python and Ruby. These improvements make MUMmer4 one of the most versatile genome alignment packages available.
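The stated 141 Tbp ceiling is consistent with one plausible reading (ours, not stated in the abstract): if the 48-bit suffix array offsets are used as signed values, 2^47 positions are addressable.

```python
# Back-of-the-envelope check (our interpretation): signed 48-bit offsets
# address 2**47 base positions, which matches the ~141 Tbp limit.
limit_bp = 2 ** 47
limit_tbp = limit_bp / 1e12
print(f"{limit_tbp:.1f} Tbp")  # ≈ 140.7 Tbp
```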
••
TL;DR: In this paper, a review of control strategies, stability analysis, and stabilization techniques for dc microgrids is presented, where overall control is systematically classified into local and coordinated control levels according to respective functionalities in each level.
Abstract: This paper presents a review of control strategies, stability analysis, and stabilization techniques for dc microgrids (MGs). Overall control is systematically classified into local and coordinated control levels according to respective functionalities in each level. As opposed to local control, which relies only on local measurements, some line of communication between units needs to be made available in order to achieve the coordinated control. Depending on the communication method, three basic coordinated control strategies can be distinguished, i.e., decentralized, centralized, and distributed control. Decentralized control can be regarded as an extension of the local control since it is also based exclusively on local measurements. In contrast, centralized and distributed control strategies rely on digital communication technologies. A number of approaches using these three coordinated control strategies to achieve various control objectives are reviewed in this paper. Moreover, properties of dc MG dynamics and stability are discussed. This paper illustrates that tightly regulated point-of-load converters tend to reduce the stability margins of the system since they introduce negative impedances, which can potentially oscillate with lightly damped power supply input filters. It is also demonstrated how the stability of the whole system is defined by the relationship of the source and load impedances, referred to as the minor loop gain. Several prominent specifications for the minor loop gain are reviewed. Finally, a number of active stabilization techniques are presented.
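The minor-loop-gain argument can be sketched numerically: a tightly regulated point-of-load converter presents a negative incremental resistance of magnitude V²/P, and a conservative sufficient condition for stability is that the source impedance magnitude stays below the load impedance magnitude at all frequencies. The sketch below is ours, not the review's analysis, and every component value is invented for illustration; it applies a simplified Middlebrook-style impedance comparison to a damped LC input filter feeding a constant-power load.

```python
import math

def is_stable(r_damp, L=1e-3, C=100e-6, v_bus=48.0, p_load=100.0):
    """Conservative check: max |Z_source| < |Z_load| over 1 Hz .. 100 kHz.

    Z_source: output impedance of an LC input filter with series damping r_damp.
    Z_load:   magnitude of the constant-power load's incremental resistance V^2/P
              (its sign is negative; only the magnitude enters this check).
    All values are illustrative, not from the reviewed paper.
    """
    z_load = v_bus ** 2 / p_load
    for k in range(2000):
        w = 2 * math.pi * 10 ** (5 * k / 1999)   # log-spaced frequency grid
        z_series = complex(r_damp, w * L)        # damped inductor branch
        z_cap = 1 / complex(0, w * C)            # filter capacitor branch
        z_source = z_series * z_cap / (z_series + z_cap)
        if abs(z_source) >= z_load:
            return False
    return True

# A lightly damped filter has a large resonant impedance peak that violates
# the condition; heavier damping restores the stability margin.
print(is_stable(0.05), is_stable(10.0))
```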
••
TL;DR: This work proposes the "A/T/N" system, a descriptive scheme for categorizing multidomain biomarker findings at the individual-person level in a format that is easy to understand and use and that is suited to population studies of cognitive aging.
Abstract: Biomarkers have become an essential component of Alzheimer disease (AD) research and because of the pervasiveness of AD pathology in the elderly, the same biomarkers are used in cognitive aging research. A number of current issues suggest that an unbiased descriptive classification scheme for these biomarkers would be useful. We propose the "A/T/N" system in which 7 major AD biomarkers are divided into 3 binary categories based on the nature of the pathophysiology that each measures. "A" refers to the value of a β-amyloid biomarker (amyloid PET or CSF Aβ42); "T," the value of a tau biomarker (CSF phospho tau, or tau PET); and "N," biomarkers of neurodegeneration or neuronal injury ([(18)F]-fluorodeoxyglucose-PET, structural MRI, or CSF total tau). Each biomarker category is rated as positive or negative. An individual score might appear as A+/T+/N-, or A+/T-/N-, etc. The A/T/N system includes the new modality tau PET. It is agnostic to the temporal ordering of mechanisms underlying AD pathogenesis. It includes all individuals in any population regardless of the mix of biomarker findings and therefore is suited to population studies of cognitive aging. It does not specify disease labels and thus is not a diagnostic classification system. It is a descriptive system for categorizing multidomain biomarker findings at the individual person level in a format that is easy to understand and use. Given the present lack of consensus among AD specialists on terminology across the clinically normal to dementia spectrum, a biomarker classification scheme will have broadest acceptance if it is independent from any one clinically defined diagnostic scheme.
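The A/T/N scheme described above reduces to three independent binary calls, one per biomarker category. A minimal sketch (the function name and signature are ours):

```python
def atn_profile(amyloid_positive, tau_positive, neuro_positive):
    """Format an A/T/N biomarker profile as in the abstract, e.g. 'A+/T+/N-'.

    Each argument is the binary call from one biomarker category:
      A: beta-amyloid (amyloid PET or CSF Abeta42)
      T: tau (CSF phospho-tau or tau PET)
      N: neurodegeneration/neuronal injury (FDG-PET, structural MRI, CSF total tau)
    """
    sign = lambda positive: "+" if positive else "-"
    return (f"A{sign(amyloid_positive)}"
            f"/T{sign(tau_positive)}"
            f"/N{sign(neuro_positive)}")

print(atn_profile(True, True, False))  # A+/T+/N-
```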
••
Université catholique de Louvain, Utrecht University, Institut national de la recherche agronomique, Centre national de la recherche scientifique, Université du Québec à Montréal, Royal Society for the Protection of Birds, University of Cambridge, University of Padua, University of Sussex, Natural Resources Canada, Purdue University, Helmholtz Centre for Environmental Research - UFZ, Smithsonian Institution, University of Neuchâtel, University of Saskatchewan, Washington State University, University of Bergen, University of Stirling
TL;DR: In this paper, a review of the global literature explores these risks and shows a growing body of evidence that persistent, low concentrations of these insecticides pose serious risks of undesirable environmental impacts.
Abstract: Since their discovery in the late 1980s, neonicotinoid pesticides have become the most widely used class of insecticides worldwide, with large-scale applications ranging from plant protection (crops, vegetables, fruits), veterinary products, and biocides to invertebrate pest control in fish farming. In this review, we address the phenyl-pyrazole fipronil together with neonicotinoids because of similarities in their toxicity, physicochemical profiles, and presence in the environment. Neonicotinoids and fipronil currently account for approximately one third of the world insecticide market; the annual world production of the archetype neonicotinoid, imidacloprid, was estimated to be ca. 20,000 tonnes active substance in 2010. There were several reasons for the initial success of neonicotinoids and fipronil: (1) there was no known pesticide resistance in target pests, mainly because of their recent development, (2) their physicochemical properties included many advantages over previous generations of insecticides (i.e., organophosphates, carbamates, pyrethroids, etc.), and (3) they shared an assumed reduced operator and consumer risk. Due to their systemic nature, they are taken up by the roots or leaves and translocated to all parts of the plant, which, in turn, makes them effectively toxic to herbivorous insects. The toxicity persists for a variable period of time—depending on the plant, its growth stage, and the amount of pesticide applied. A wide variety of applications are available, including the most common prophylactic non-Good Agricultural Practices (GAP) application by seed coating. As a result of their extensive use and physicochemical properties, these substances can be found in all environmental compartments including soil, water, and air. Neonicotinoids and fipronil operate by disrupting neural transmission in the central nervous system of invertebrates. 
Neonicotinoids mimic the action of neurotransmitters, while fipronil inhibits neuronal receptors. In doing so, they continuously stimulate neurons leading ultimately to death of target invertebrates. Like virtually all insecticides, they can also have lethal and sublethal impacts on non-target organisms, including insect predators and vertebrates. Furthermore, a range of synergistic effects with other stressors have been documented. Here, we review extensively their metabolic pathways, showing how they form both compound-specific and common metabolites which can themselves be toxic. These may result in prolonged toxicity. Considering their wide commercial expansion, mode of action, the systemic properties in plants, persistence and environmental fate, coupled with limited information about the toxicity profiles of these compounds and their metabolites, neonicotinoids and fipronil may entail significant risks to the environment. A global evaluation of the potential collateral effects of their use is therefore timely. The present paper and subsequent chapters in this review of the global literature explore these risks and show a growing body of evidence that persistent, low concentrations of these insecticides pose serious risks of undesirable environmental impacts.
••
University of Florida, Utrecht University, Australian National University, University of Zurich, Technical University of Denmark, Institute of Tropical Medicine Antwerp, Boston Children's Hospital, University of Kelaniya, Université catholique de Louvain, World Health Organization, Centers for Disease Control and Prevention
TL;DR: The Foodborne Disease Burden Epidemiology Reference Group (FERG) reports their first estimates of the incidence, mortality, and disease burden due to 31 foodborne hazards, finding that the global burden of FBD is comparable to those of the major infectious diseases, HIV/AIDS, malaria and tuberculosis.
Abstract: Illness and death from diseases caused by contaminated food are a constant threat to public health and a significant impediment to socio-economic development worldwide. To measure the global and regional burden of foodborne disease (FBD), the World Health Organization (WHO) established the Foodborne Disease Burden Epidemiology Reference Group (FERG), which here reports their first estimates of the incidence, mortality, and disease burden due to 31 foodborne hazards. We find that the global burden of FBD is comparable to those of the major infectious diseases, HIV/AIDS, malaria and tuberculosis. The most frequent causes of foodborne illness were diarrheal disease agents, particularly norovirus and Campylobacter spp. Diarrheal disease agents, especially non-typhoidal Salmonella enterica, were also responsible for the majority of deaths due to FBD. Other major causes of FBD deaths were Salmonella Typhi, Taenia solium and hepatitis A virus. The global burden of FBD caused by the 31 hazards in 2010 was 33 million Disability Adjusted Life Years (DALYs); children under five years old bore 40% of this burden. The 14 subregions, defined on the basis of child and adult mortality, had considerably different burdens of FBD, with the greatest falling on the subregions in Africa, followed by the subregions in South-East Asia and the Eastern Mediterranean D subregion. Some hazards, such as non-typhoidal S. enterica, were important causes of FBD in all regions of the world, whereas others, such as certain parasitic helminths, were highly localised. Thus, the burden of FBD is borne particularly by children under five years old (although they represent only 9% of the global population) and by people living in low-income regions of the world. These estimates are conservative, i.e., underestimates rather than overestimates; further studies are needed to address the data gaps and limitations of the study.
Nevertheless, all stakeholders can contribute to improvements in food safety throughout the food chain by incorporating these estimates into policy development at national and international levels.
••
TL;DR: The 2019 novel coronavirus (2019-nCoV) infection is spreading and its incidence is increasing nationwide, and the first deaths occurred mostly in elderly people, among whom the disease might progress faster.
Abstract: To help health workers and the public recognize and deal with the 2019 novel coronavirus (2019-nCoV) quickly, effectively, and calmly with an updated understanding. A comprehensive search from Chinese and worldwide official websites and announcements was performed between 1 December 2019 and 9:30 am, 26 January 2020 (Beijing time). A latest summary of 2019-nCoV and the current outbreak was drawn. As of 24:00, 25 January 2020, a total of 1975 cases of 2019-nCoV infection were confirmed in mainland China with a total of 56 deaths having occurred. The latest mortality was approximately 2.84% with a total of 2684 cases still suspected. The China National Health Commission reported the details of the first 17 deaths up to 24:00, 22 January 2020. The deaths included 13 males and 4 females. The median age of the people who died was 75 (range 48-89) years. Fever (64.7%) and cough (52.9%) were the most common first symptoms among those who died. The median number of days from the occurrence of the first symptom to death was 14.0 (range 6-41) days, and it tended to be shorter among people aged 70 years or more (11.5 [range 6-19] days) than those aged less than 70 years (20 [range 10-41] days; P = .033). The 2019-nCoV infection is spreading and its incidence is increasing nationwide. The first deaths occurred mostly in elderly people, among whom the disease might progress faster. The public should still be cautious in dealing with the virus and pay more attention to protecting the elderly people from the virus.
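The abstract's ~2.84% mortality figure is the crude case-fatality ratio, i.e. deaths divided by confirmed cases at that date (our arithmetic on the numbers quoted above):

```python
# Crude case-fatality ratio from the abstract's counts (as of 25 January 2020).
deaths, confirmed = 56, 1975
cfr = deaths / confirmed
print(f"{cfr:.2%}")  # 2.84%
```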
••
TL;DR: This work investigated the development of resistance against four antibodies to the spike protein that potently neutralize SARS-CoV-2, individually as well as when combined into cocktails.
Abstract: Antibodies targeting the spike protein of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) present a promising approach to combat the coronavirus disease 2019 (COVID-19) pandemic; however, concerns remain that mutations can yield antibody resistance. We investigated the development of resistance against four antibodies to the spike protein that potently neutralize SARS-CoV-2, individually as well as when combined into cocktails. These antibodies remain effective against spike variants that have arisen in the human population. However, novel spike mutants rapidly appeared after in vitro passaging in the presence of individual antibodies, resulting in loss of neutralization; such escape also occurred with combinations of antibodies binding diverse but overlapping regions of the spike protein. Escape mutants were not generated after treatment with a noncompeting antibody cocktail.
••
TL;DR: This Viewpoint discusses the current status of medical education, describes how COVID-19 may affect preclerkship and clerkship learning environments, and explores potential implications of COVID-19 for the future of medical education.
Abstract: These are unprecedented times. Although the necessary focus has been to care for patients and communities, the emergence of severe acute respiratory syndrome coronavirus 2 has disrupted medical education and requires intense and prompt attention from medical educators. The need to prepare future physicians has never been as focused as it is now in the setting of a global emergency. The profound effects of coronavirus disease 2019 (COVID-19) may forever change how future physicians are educated. This pandemic presents practical and logistical challenges and concerns for patient safety, recognizing that students may potentially spread the virus when asymptomatic and may acquire the virus in the course of training. This Viewpoint discusses the current status of medical education, describes how COVID-19 may affect preclerkship and clerkship learning environments, and explores potential implications of COVID-19 for the future of medical education.
••
TL;DR: A thorough survey to fully understand Few-shot Learning (FSL), which categorizes FSL methods from three perspectives: data, which uses prior knowledge to augment the supervised experience; model, which uses prior knowledge to reduce the size of the hypothesis space; and algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space.
Abstract: Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small. Recently, Few-shot Learning (FSL) was proposed to tackle this problem. Using prior knowledge, FSL can rapidly generalize to new tasks containing only a few samples with supervised information. In this article, we conduct a thorough survey to fully understand FSL. Starting from a formal definition of FSL, we distinguish FSL from several relevant machine learning problems. We then point out that the core issue in FSL is that the empirical risk minimizer is unreliable. Based on how prior knowledge can be used to handle this core issue, we categorize FSL methods from three perspectives: (i) data, which uses prior knowledge to augment the supervised experience; (ii) model, which uses prior knowledge to reduce the size of the hypothesis space; and (iii) algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space. With this taxonomy, we review and discuss the pros and cons of each category. Promising directions, in the aspects of the FSL problem setups, techniques, applications, and theories, are also proposed to provide insights for future research.
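As an illustration of the survey's "model" perspective (using prior knowledge to shrink the hypothesis space), the sketch below implements a nearest-prototype classifier in plain Python: each class is represented by the mean of its few support examples, the core idea behind prototypical networks. The data, function names, and feature representation are illustrative only, not from the survey:

```python
import math

def prototypes(support):
    # support: {label: [feature vectors]}, with only a few examples per class.
    # The prior knowledge is the assumption that one mean vector per class
    # suffices, which drastically restricts the hypothesis space.
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return protos

def classify(query, protos):
    # Assign the query to the class whose prototype is nearest in Euclidean distance.
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(protos, key=lambda label: dist(query, protos[label]))
```

In practice the feature vectors would come from an embedding network trained on related tasks; the few-shot part is that each new class needs only a handful of support examples to form its prototype.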
••
TL;DR: Among patients with 1 to 3 brain metastases, the use of SRS alone, compared with SRS combined with WBRT, resulted in less cognitive deterioration at 3 months; in the absence of a difference in overall survival, these findings suggest that SRS alone may be a preferred strategy for patients amenable to radiosurgery.
Abstract: Importance Whole brain radiotherapy (WBRT) significantly improves tumor control in the brain after stereotactic radiosurgery (SRS), yet because of its association with cognitive decline, its role in the treatment of patients with brain metastases remains controversial. Objective To determine whether there is less cognitive deterioration at 3 months after SRS alone vs SRS plus WBRT. Design, Setting, and Participants At 34 institutions in North America, patients with 1 to 3 brain metastases were randomized to receive SRS or SRS plus WBRT between February 2002 and December 2013. Interventions The WBRT dose schedule was 30 Gy in 12 fractions; the SRS dose was 18 to 22 Gy in the SRS plus WBRT group and 20 to 24 Gy for SRS alone. Main Outcomes and Measures The primary end point was cognitive deterioration (decline >1 SD from baseline on at least 1 cognitive test at 3 months) in participants who completed the baseline and 3-month assessments. Secondary end points included time to intracranial failure, quality of life, functional independence, long-term cognitive status, and overall survival. Results There were 213 randomized participants (SRS alone, n = 111; SRS plus WBRT, n = 102) with a mean age of 60.6 years (SD, 10.5 years); 103 (48%) were women. There was less cognitive deterioration at 3 months after SRS alone (40/63 patients [63.5%]) than when combined with WBRT (44/48 patients [91.7%]; difference, −28.2%; 90% CI, −41.9% to −14.4%; P < .001). Quality of life at 3 months was higher with SRS alone, including overall quality of life (P = .002). Time to intracranial failure was significantly shorter for SRS alone compared with SRS plus WBRT (hazard ratio, 3.6; 95% CI, 2.2-5.9; P < .001); there was no significant difference in functional independence at 3 months between the treatment groups (P = .26). Median overall survival was 10.4 months for SRS alone and 7.4 months for SRS plus WBRT (hazard ratio, 1.02; 95% CI, 0.75-1.38; P = .92).
For long-term survivors, the incidence of cognitive deterioration was less after SRS alone at 3 months (5/11 [45.5%] vs 16/17 [94.1%]; difference, −48.7%; 95% CI, −87.6% to −9.7%; P = .007) and at 12 months (6/10 [60%] vs 17/18 [94.4%]; difference, −34.4%; 95% CI, −74.4% to 5.5%; P = .04). Conclusions and Relevance Among patients with 1 to 3 brain metastases, the use of SRS alone, compared with SRS combined with WBRT, resulted in less cognitive deterioration at 3 months. In the absence of a difference in overall survival, these findings suggest that for patients with 1 to 3 brain metastases amenable to radiosurgery, SRS alone may be a preferred strategy. Trial Registration clinicaltrials.gov Identifier:NCT00377156
••
26 Apr 2015
TL;DR: In this paper, the authors extract and analyze the core of the Bitcoin protocol and prove two fundamental properties, which they call common prefix and chain quality, in the static setting where the number of players remains fixed.
Abstract: Bitcoin is the first and most popular decentralized cryptocurrency to date. In this work, we extract and analyze the core of the Bitcoin protocol, which we term the Bitcoin backbone, and prove two of its fundamental properties which we call common prefix and chain quality in the static setting where the number of players remains fixed. Our proofs hinge on appropriate and novel assumptions on the “hashing power” of the adversary relative to network synchronicity; we show our results to be tight under high synchronization.
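The common-prefix property has a concrete statement: after pruning the last k blocks from each party's chain, one chain must be a prefix of the other. A minimal Python sketch of that check follows; the list-of-blocks representation is illustrative only, while the paper works with an abstract backbone protocol:

```python
def common_prefix_holds(chain_a, chain_b, k):
    # Common prefix with parameter k: prune the last k blocks from each
    # chain, then require that one pruned chain is a prefix of the other.
    a = chain_a[:max(0, len(chain_a) - k)]
    b = chain_b[:max(0, len(chain_b) - k)]
    shorter, longer = (a, b) if len(a) <= len(b) else (b, a)
    return longer[:len(shorter)] == shorter
```

Intuitively, honest parties may disagree about the most recent k blocks (which can still be reorganized), but the property guarantees agreement on everything deeper, which is what makes "wait for k confirmations" a sound rule.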
••
TL;DR: Estimates show that genomics is a “four-headed beast”—it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis.
Abstract: Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a “four-headed beast”—it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis. We discuss aspects of new technologies that will need to be developed to rise up and meet the computational challenges that genomics poses for the near future. Now is the time for concerted, community-wide planning for the “genomical” challenges of the next decade.
••
TL;DR: Using an improved model of human mutation, human protein-coding genes are classified along a spectrum representing intolerance to inactivation; this classification is validated using data from model organisms and engineered human cells and shown to improve gene discovery power for both common and rare diseases.
Abstract: Summary Genetic variants that inactivate protein-coding genes are a powerful source of information about the phenotypic consequences of gene disruption: genes critical for an organism’s function will be depleted for such variants in natural populations, while non-essential genes will tolerate their accumulation. However, predicted loss-of-function (pLoF) variants are enriched for annotation errors, and tend to be found at extremely low frequencies, so their analysis requires careful variant annotation and very large sample sizes. Here, we describe the aggregation of 125,748 exomes and 15,708 genomes from human sequencing studies into the Genome Aggregation Database (gnomAD). We identify 443,769 high-confidence pLoF variants in this cohort after filtering for sequencing and annotation artifacts. Using an improved model of human mutation, we classify human protein-coding genes along a spectrum representing intolerance to inactivation, validate this classification using data from model organisms and engineered human cells, and show that it can be used to improve gene discovery power for both common and rare diseases.
••
Medical University of Vienna1, Leiden University Medical Center2, Humboldt State University3, Cornell University4, Leeds Teaching Hospitals NHS Trust5, Keio University6, Dresden University of Technology7, Drexel University8, VU University Medical Center9, Goethe University Frankfurt10, University of Montpellier11, Cliniques Universitaires Saint-Luc12, University of Lisbon13, Hospital for Special Surgery14, University of Santiago, Chile15, Medical University of Graz16, University of Amsterdam17, University of Queensland18, University of Copenhagen19, University of Cambridge20, Southlake Regional Health Center21, National and Kapodistrian University of Athens22, Karolinska University Hospital23
TL;DR: The 4 overarching principles and 10 recommendations are based on stronger evidence than before and are intended to inform patients, rheumatologists and other stakeholders about strategies to reach optimal outcomes of RA.
Abstract: Background Reaching the therapeutic target of remission or low-disease activity has significantly improved outcomes in patients with rheumatoid arthritis (RA). The treat-to-target recommendations, formulated in 2010, have provided a basis for implementation of a strategic approach towards this therapeutic goal in routine clinical practice, but these recommendations need to be re-evaluated for appropriateness and practicability in the light of new insights. Objective To update the 2010 treat-to-target recommendations based on systematic literature reviews (SLR) and expert opinion. Methods A task force of rheumatologists, patients and a nurse specialist assessed the SLR results and evaluated the individual items of the 2010 recommendations accordingly, reformulating many of the items. These were subsequently discussed, amended and voted upon by >40 experts, including 5 patients, from various regions of the world. Levels of evidence, strengths of recommendations and levels of agreement were derived. Results The update resulted in 4 overarching principles and 10 recommendations. The previous recommendations were partly adapted and their order changed as deemed appropriate in terms of importance in the view of the experts. The SLR now also provided data on the effectiveness of targeting low-disease activity or remission in established rather than only early disease. The role of comorbidities, including their potential to preclude treatment intensification, was highlighted more strongly than before. The treatment aim was again defined as remission, with low-disease activity being an alternative goal, especially in patients with long-standing disease. Regular follow-up (every 1-3 months during active disease) with corresponding therapeutic adaptations to reach the desired state was recommended. Follow-up examinations ought to employ composite measures of disease activity that include joint counts.
Additional items provide further details for particular aspects of the disease, especially comorbidity and shared decision-making with the patient. Levels of evidence had increased for many items compared with the 2010 recommendations, and levels of agreement were very high for most of the individual recommendations (≥9/10). Conclusions The 4 overarching principles and 10 recommendations are based on stronger evidence than before and are intended to inform patients, rheumatologists and other stakeholders about strategies to reach optimal outcomes of RA.
••
TL;DR: The Modules for Experiments in Stellar Astrophysics (MESA) Isochrones and Stellar Tracks (MIST) project as mentioned in this paper provides a set of stellar evolutionary tracks and isochrones computed using MESA, a state-of-the-art 1D stellar evolution package.
Abstract: This is the first of a series of papers presenting the Modules for Experiments in Stellar Astrophysics (MESA) Isochrones and Stellar Tracks (MIST) project, a new comprehensive set of stellar evolutionary tracks and isochrones computed using MESA, a state-of-the-art open-source 1D stellar evolution package. In this work, we present models with solar-scaled abundance ratios covering a wide range of ages ($5 \leq \rm \log(Age)\;[yr] \leq 10.3$), masses ($0.1 \leq M/M_{\odot} \leq 300$), and metallicities ($-2.0 \leq \rm [Z/H] \leq 0.5$). The models are self-consistently and continuously evolved from the pre-main sequence to the end of hydrogen burning, the white dwarf cooling sequence, or the end of carbon burning, depending on the initial mass. We also provide a grid of models evolved from the pre-main sequence to the end of core helium burning for $-4.0 \leq \rm [Z/H] < -2.0$. We showcase extensive comparisons with observational constraints as well as with some of the most widely used existing models in the literature. The evolutionary tracks and isochrones can be downloaded from the project website at this http URL
••
TL;DR: This paper presents a tutorial using a simple example of count data with mixed effects to guide the user along a gentle learning curve, adding only a few commands or options at a time.
Abstract: The R package simr allows users to calculate power for generalized linear mixed models fitted with the lme4 package. The power calculations are based on Monte Carlo simulations.
It includes tools for (i) running a power analysis for a given model and design; and (ii) calculating power curves to assess trade-offs between power and sample size.
This paper presents a tutorial using a simple example of count data with mixed effects (with structure representative of environmental monitoring data) to guide the user along a gentle learning curve, adding only a few commands or options at a time.
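simr itself is an R package, but the Monte Carlo recipe it implements (simulate data from an assumed effect size, re-run the analysis, count rejections) can be sketched in plain Python. Everything below is illustrative rather than simr's API: a hypothetical two-group Poisson comparison of count data, tested at the 5% level with a Wald test on the log rate ratio.

```python
import math
import random

def rpois(lam, rng):
    # Knuth's Poisson sampler; adequate for the small rates used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulated_power(lam_control, rate_ratio, n_per_group, nsim=1000, seed=1):
    """Monte Carlo power (5% two-sided level) for detecting a rate ratio
    between two groups of Poisson counts."""
    rng = random.Random(seed)
    lam_treat = lam_control * rate_ratio
    z_crit = 1.96  # two-sided critical value for alpha = 0.05
    rejections = 0
    for _ in range(nsim):
        # Simulate counts under the assumed effect size...
        sum_c = sum(rpois(lam_control, rng) for _ in range(n_per_group))
        sum_t = sum(rpois(lam_treat, rng) for _ in range(n_per_group))
        if sum_c == 0 or sum_t == 0:
            continue  # Wald statistic undefined; count as a non-rejection
        # ...then apply the analysis: a Wald test on the log rate ratio
        # (equal exposure per group, so group totals suffice).
        est = math.log(sum_t / sum_c)
        se = math.sqrt(1.0 / sum_t + 1.0 / sum_c)
        if abs(est / se) > z_crit:
            rejections += 1
    return rejections / nsim
```

simr additionally handles the full mixed-model structure (random effects fitted via lme4) and power-curve plotting, which this stripped-down sketch deliberately omits.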
••
TL;DR: A multimaterial 3D bioprinting method is reported that enables the creation of thick human tissues (>1 cm) replete with an engineered extracellular matrix, embedded vasculature, and multiple cell types that can be actively perfused for long durations.
Abstract: The advancement of tissue and, ultimately, organ engineering requires the ability to pattern human tissues composed of cells, extracellular matrix, and vasculature with controlled microenvironments that can be sustained over prolonged time periods. To date, bioprinting methods have yielded thin tissues that only survive for short durations. To improve their physiological relevance, we report a method for bioprinting 3D cell-laden, vascularized tissues that exceed 1 cm in thickness and can be perfused on chip for long time periods (>6 wk). Specifically, we integrate parenchyma, stroma, and endothelium into a single thick tissue by coprinting multiple inks composed of human mesenchymal stem cells (hMSCs) and human neonatal dermal fibroblasts (hNDFs) within a customized extracellular matrix alongside embedded vasculature, which is subsequently lined with human umbilical vein endothelial cells (HUVECs). These thick vascularized tissues are actively perfused with growth factors to differentiate hMSCs toward an osteogenic lineage in situ. This longitudinal study of emergent biological phenomena in complex microenvironments represents a foundational step in human tissue generation.
••
University of South Carolina1, Los Alamos National Laboratory2, Moscow State University3, Delhi Technological University4, University of Paris5, University of California, Davis6, Indian Institute of Technology (BHU) Varanasi7, University of Moratuwa8, University of Illinois at Urbana–Champaign9, California Polytechnic State University10, Sandia National Laboratories11, Max Planck Society12, Indian Institute of Technology Kharagpur13, French Institute for Research in Computer Science and Automation14, University of New Mexico15, Charles University in Prague16, Birla Institute of Technology and Science17, Indian Institute of Technology Bombay18, University of West Bohemia19
TL;DR: This paper presents the architecture of SymPy, a description of its features, and a discussion of selected domain-specific submodules; the aim is for SymPy to become the standard symbolic library for the scientific Python ecosystem.
Abstract: SymPy is an open source computer algebra system written in pure Python. It is built with a focus on extensibility and ease of use, through both interactive and programmatic applications. These characteristics have led SymPy to become a popular symbolic library for the scientific Python ecosystem. This paper presents the architecture of SymPy, a description of its features, and a discussion of select submodules. The supplementary material provides additional examples and further outlines details of the architecture and features of SymPy.
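As a quick illustration of the interactive use the paper describes, a few core SymPy operations (all part of the library's public API) look like this:

```python
import sympy as sp

x = sp.symbols('x')

# Symbolic differentiation and integration
expr = sp.sin(x) * sp.exp(x)
d = sp.diff(expr, x)        # exp(x)*sin(x) + exp(x)*cos(x)
anti = sp.integrate(d, x)   # recovers exp(x)*sin(x)

# Exact (not floating-point) equation solving
roots = sp.solve(sp.Eq(x**2 - 2, 0), x)   # -sqrt(2) and sqrt(2), exactly

# A limit evaluated symbolically
lim = sp.limit(sp.sin(x) / x, x, 0)       # 1
```

Because SymPy is pure Python, these calls compose naturally with the rest of the scientific Python stack, which is a large part of the extensibility argument the paper makes.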