
Journal ArticleDOI
30 Jun 2017-Science
TL;DR: This article developed a flexible architecture for computing damages that integrates climate science, econometric analyses, and process models, and used this approach to construct spatially explicit, probabilistic, and empirically derived estimates of economic damage in the United States from climate change.
Abstract: Estimates of climate change damage are central to the design of climate policies. Here, we develop a flexible architecture for computing damages that integrates climate science, econometric analyses, and process models. We use this approach to construct spatially explicit, probabilistic, and empirically derived estimates of economic damage in the United States from climate change. The combined value of market and nonmarket damage across analyzed sectors (agriculture, crime, coastal storms, energy, human mortality, and labor) increases quadratically in global mean temperature, costing roughly 1.2% of gross domestic product per +1°C on average. Importantly, risk is distributed unequally across locations, generating a large transfer of value northward and westward that increases economic inequality. By the late 21st century, the poorest third of counties are projected to experience damages between 2 and 20% of county income (90% chance) under business-as-usual emissions (Representative Concentration Pathway 8.5).
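
A minimal numerical illustration of the quadratic damage relationship described above (our sketch, not the paper's model): the functional form, the coefficient, and the function name are assumptions chosen so that damages average roughly the quoted 1.2% of GDP per +1°C.

```python
# Illustrative only: a pure quadratic damage function D(T) = gamma * T**2,
# in percent of GDP, with gamma an assumed coefficient (not from the paper).
def damages_pct_gdp(delta_t, gamma=0.6):
    """Damage as % of GDP for warming of delta_t degrees C above baseline."""
    return gamma * delta_t ** 2

# The average cost per degree over 0..T is gamma * T, so with gamma = 0.6
# the average reaches ~1.2% of GDP per +1C once warming hits +2C.
for t in (1.0, 2.0, 4.0):
    print(f"+{t:.0f}C -> {damages_pct_gdp(t):.1f}% of GDP")
```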

621 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the Omicron sublineages BA.2.12.1 and BA.4/BA.5 exhibit higher transmissibility than BA.2 and increased evasion of neutralizing antibodies elicited by vaccination or BA.1 infection.
Abstract: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) Omicron sublineages BA.2.12.1, BA.4 and BA.5 exhibit higher transmissibility than the BA.2 lineage1. The receptor binding and immune-evasion capability of these recently emerged variants require immediate investigation. Here, coupled with structural comparisons of the spike proteins, we show that BA.2.12.1, BA.4 and BA.5 (BA.4 and BA.5 are hereafter referred collectively to as BA.4/BA.5) exhibit similar binding affinities to BA.2 for the angiotensin-converting enzyme 2 (ACE2) receptor. Of note, BA.2.12.1 and BA.4/BA.5 display increased evasion of neutralizing antibodies compared with BA.2 against plasma from triple-vaccinated individuals or from individuals who developed a BA.1 infection after vaccination. To delineate the underlying antibody-evasion mechanism, we determined the escape mutation profiles2, epitope distribution3 and Omicron-neutralization efficiency of 1,640 neutralizing antibodies directed against the receptor-binding domain of the viral spike protein, including 614 antibodies isolated from people who had recovered from BA.1 infection. BA.1 infection after vaccination predominantly recalls humoral immune memory directed against ancestral (hereafter referred to as wild-type (WT)) SARS-CoV-2 spike protein. The resulting elicited antibodies could neutralize both WT SARS-CoV-2 and BA.1 and are enriched on epitopes on spike that do not bind ACE2. However, most of these cross-reactive neutralizing antibodies are evaded by spike mutants L452Q, L452R and F486V. BA.1 infection can also induce new clones of BA.1-specific antibodies that potently neutralize BA.1. Nevertheless, these neutralizing antibodies are largely evaded by BA.2 and BA.4/BA.5 owing to D405N and F486V mutations, and react weakly to pre-Omicron variants, exhibiting narrow neutralization breadths. The therapeutic neutralizing antibodies bebtelovimab4 and cilgavimab5 can effectively neutralize BA.2.12.1 and BA.4/BA.5, whereas the S371F, D405N and R408S mutations undermine most broadly sarbecovirus-neutralizing antibodies. Together, our results indicate that Omicron may evolve mutations to evade the humoral immunity elicited by BA.1 infection, suggesting that BA.1-derived vaccine boosters may not achieve broad-spectrum protection against new Omicron variants.

621 citations


Posted Content
TL;DR: It is observed that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner, and small architectural changes are derived that guarantee that unwanted information cannot leak into the hierarchical synthesis process.
Abstract: We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation.
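
The remedy the abstract describes (treating signals as continuous and low-pass filtering around pointwise operations) can be sketched in a few lines. This is a hedged illustration of the principle, not StyleGAN3's carefully designed filters; the blur kernel and the function name are our assumptions.

```python
# Sketch of the alias-suppression idea: upsample, apply the pointwise
# nonlinearity at the higher sampling rate, low-pass filter, downsample back.
import torch
import torch.nn.functional as F

def filtered_lrelu(x, slope=0.2):
    n, c, h, w = x.shape
    x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
    x = F.leaky_relu(x, slope)                    # nonlinearity at 2x rate
    blur = torch.tensor([1.0, 2.0, 1.0])
    k = (blur[:, None] * blur[None, :]) / 16.0    # 3x3 binomial low-pass
    k = k.expand(c, 1, 3, 3)
    x = F.conv2d(x, k, padding=1, groups=c)       # depthwise blur
    return x[:, :, ::2, ::2]                      # back to original resolution

y = filtered_lrelu(torch.randn(1, 8, 16, 16))
print(y.shape)  # torch.Size([1, 8, 16, 16])
```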

621 citations


Proceedings ArticleDOI
01 Oct 2019
TL;DR: PANet, presented in this paper, learns class-specific prototype representations from a few support images within an embedding space and then performs segmentation over the query images by matching each pixel to the learned prototypes.
Abstract: Despite the great progress made by deep CNNs in image semantic segmentation, they typically require a large number of densely-annotated images for training and are difficult to generalize to unseen object categories. Few-shot segmentation has thus been developed to learn to perform segmentation from only a few annotated examples. In this paper, we tackle the challenging few-shot segmentation problem from a metric learning perspective and present PANet, a novel prototype alignment network to better utilize the information of the support set. Our PANet learns class-specific prototype representations from a few support images within an embedding space and then performs segmentation over the query images through matching each pixel to the learned prototypes. With non-parametric metric learning, PANet offers high-quality prototypes that are representative for each semantic class and meanwhile discriminative for different classes. Moreover, PANet introduces a prototype alignment regularization between support and query. With this, PANet fully exploits knowledge from the support and provides better generalization on few-shot segmentation. Significantly, our model achieves the mIoU score of 48.1% and 55.7% on PASCAL-5i for 1-shot and 5-shot settings respectively, surpassing the state-of-the-art method by 1.8% and 8.6%.
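
A minimal sketch of prototype-based few-shot segmentation in the spirit of the description above (not the authors' code): masked average pooling builds one prototype per class from support features, and each query pixel is labeled by cosine similarity to the prototypes. Tensor shapes and the scaling factor are our assumptions.

```python
import torch
import torch.nn.functional as F

def masked_avg_pool(feat, mask):
    # feat: (C, H, W) support features; mask: (H, W) binary mask for one class
    w = mask.unsqueeze(0)                                        # (1, H, W)
    return (feat * w).sum(dim=(1, 2)) / w.sum().clamp(min=1e-6)  # (C,)

def segment_query(query_feat, prototypes, alpha=20.0):
    # query_feat: (C, H, W); prototypes: (K, C), one row per class
    q = F.normalize(query_feat, dim=0)       # cosine via normalized dot product
    p = F.normalize(prototypes, dim=1)
    scores = torch.einsum("kc,chw->khw", p, q) * alpha
    return scores.argmax(dim=0)              # (H, W) predicted labels

C, H, W = 64, 32, 32
support = torch.randn(C, H, W)
mask_fg = (torch.rand(H, W) > 0.5).float()
protos = torch.stack([masked_avg_pool(support, 1 - mask_fg),   # background
                      masked_avg_pool(support, mask_fg)])      # foreground
pred = segment_query(torch.randn(C, H, W), protos)
print(pred.shape)  # torch.Size([32, 32])
```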

621 citations


Book
26 Aug 2016
TL;DR: The authors provide a comprehensive overview of the current state of theoretical and empirical research on this topic, together with suggestions on where scholarship could go next, in a timely collection of contributions by eminent scholars from a wide range of academic disciplines.
Abstract: Global and transnational challenges figure ever more prominently on national and international policy agendas and are increasingly analysed as global public goods (GPGs). This timely collection, which includes contributions by eminent scholars from a wide range of academic disciplines, provides a comprehensive overview of the current state of the theoretical and empirical research on this topic, and suggestions on where scholarship could go next.

621 citations


Journal ArticleDOI
01 Jun 2018-Science
TL;DR: Single-cell RNA sequencing reveals cell type trajectories and cell lineage in the developing zebrafish embryo, and high-throughput mapping of cellular differentiation hierarchies from single-cell data promises to empower systematic interrogations of vertebrate development and disease.
Abstract: High-throughput mapping of cellular differentiation hierarchies from single-cell data promises to empower systematic interrogations of vertebrate development and disease. Here we applied single-cell RNA sequencing to >92,000 cells from zebrafish embryos during the first day of development. Using a graph-based approach, we mapped a cell-state landscape that describes axis patterning, germ layer formation, and organogenesis. We tested how clonally related cells traverse this landscape by developing a transposon-based barcoding approach (TracerSeq) for reconstructing single-cell lineage histories. Clonally related cells were often restricted by the state landscape, including a case in which two independent lineages converge on similar fates. Cell fates remained restricted to this landscape in embryos lacking the chordin gene. We provide web-based resources for further analysis of the single-cell data.

621 citations


Journal ArticleDOI
18 Nov 2016-Science
TL;DR: A major thermosensory role for the phytochromes (red light receptors) during the night is described, and it is found that phytochrome B directly associates with the promoters of key target genes in a temperature-dependent manner.
Abstract: Plants are responsive to temperature, and some species can distinguish differences of 1°C. In Arabidopsis, warmer temperature accelerates flowering and increases elongation growth (thermomorphogenesis). However, the mechanisms of temperature perception are largely unknown. We describe a major thermosensory role for the phytochromes (red light receptors) during the night. Phytochrome null plants display a constitutive warm-temperature response, and consistent with this, we show in this background that the warm-temperature transcriptome becomes derepressed at low temperatures. We found that phytochrome B (phyB) directly associates with the promoters of key target genes in a temperature-dependent manner. The rate of phyB inactivation is proportional to temperature in the dark, enabling phytochromes to function as thermal timers that integrate temperature information over the course of the night.

621 citations


Posted Content
Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li
TL;DR: In this article, a broad class of momentum-based iterative algorithms to boost adversarial attacks is proposed to stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples.
Abstract: Deep neural networks are vulnerable to adversarial examples, which poses security concerns for these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won first place in the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
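
A minimal sketch of the momentum iterative method the abstract describes (often called MI-FGSM); the toy model, step counts, and epsilon here are our placeholder assumptions.

```python
import torch

def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    alpha = eps / steps                       # per-step size
    g = torch.zeros_like(x)                   # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # normalize by the L1 norm, then accumulate with momentum (key step)
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = mi_fgsm(model, x, y)
print(float((x_adv - x).abs().max()))  # perturbation bounded by eps
```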

621 citations


Proceedings Article
03 Jul 2018
TL;DR: In this paper, a neural network-based permutation-invariant aggregation operator is proposed to learn the Bernoulli distribution of the bag label, where the bag-label probability is fully parameterized by neural networks.
Abstract: Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability is fully parameterized by neural networks. Furthermore, we propose a neural network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves comparable performance to the best MIL methods on benchmark MIL datasets and it outperforms other methods on a MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.
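
A minimal sketch of the attention-based, permutation-invariant pooling operator described above (our reading of the abstract, not the authors' code): a small gated network scores each instance, softmax weights aggregate the bag, and the weights expose per-instance contributions. Layer sizes are our assumptions.

```python
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, dim=64, hidden=32):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.classifier = nn.Linear(dim, 1)

    def forward(self, bag):                           # bag: (n_instances, dim)
        a = torch.softmax(self.attn(bag), dim=0)      # (n, 1) instance weights
        z = (a * bag).sum(dim=0)                      # (dim,) bag embedding
        p = torch.sigmoid(self.classifier(z))         # Bernoulli bag-label prob
        return p, a.squeeze(-1)

model = AttentionMILPool()
p, weights = model(torch.randn(12, 64))   # a bag of 12 instances
print(float(p), weights.shape)            # bag probability, per-instance weights
```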

621 citations


Journal ArticleDOI
TL;DR: The regression analysis revealed that direct diabetes costs are closely and positively associated with a country’s gross domestic product (GDP) per capita, and that the USA stood out as having particularly high costs, even after controlling for GDP per capita.
Abstract: Background: There has been a widely documented and recognized increase in diabetes prevalence, not only in high-income countries (HICs) but also in low- and middle-income countries (LMICs), over recent decades. The economic burden associated with diabetes, especially in LMICs, is less clear.

621 citations


Journal ArticleDOI
TL;DR: Pharmacological target deconvolution of ketamine and its metabolites will provide insight critical to the development of new pharmacotherapies that possess the desirable clinical effects of ketamine but limit undesirable side effects.
Abstract: Ketamine, a racemic mixture consisting of (S)- and (R)-ketamine, has been in clinical use since 1970. Although best characterized for its dissociative anesthetic properties, ketamine also exerts analgesic, anti-inflammatory, and antidepressant actions. We provide a comprehensive review of these therapeutic uses, emphasizing drug dose, route of administration, and the time course of these effects. Dissociative, psychotomimetic, cognitive, and peripheral side effects associated with short-term or prolonged exposure, as well as recreational ketamine use, are also discussed. We further describe ketamine’s pharmacokinetics, including its rapid and extensive metabolism to norketamine, dehydronorketamine, hydroxyketamine, and hydroxynorketamine (HNK) metabolites. Whereas the anesthetic and analgesic properties of ketamine are generally attributed to direct ketamine-induced inhibition of N-methyl-D-aspartate receptors, other putative lower-affinity pharmacological targets of ketamine include, but are not limited to, γ-aminobutyric acid (GABA), dopamine, serotonin, sigma, opioid, and cholinergic receptors, as well as voltage-gated sodium and hyperpolarization-activated cyclic nucleotide-gated channels. We examine the evidence supporting the relevance of these targets of ketamine and its metabolites to the clinical effects of the drug. Ketamine metabolites may have broader clinical relevance than was previously considered, given that HNK metabolites have antidepressant efficacy in preclinical studies. Overall, pharmacological target deconvolution of ketamine and its metabolites will provide insight critical to the development of new pharmacotherapies that possess the desirable clinical effects of ketamine, but limit undesirable side effects.

Journal ArticleDOI
TL;DR: In this paper, a roadmap towards fabricating hybrid structures based on MoS2 and graphene is highlighted, proposing ways to enhance the properties of the individual components and broaden the range of functional applications in various fields, including flexible electronics, energy storage and harvesting as well as electrochemical catalysis.

Journal ArticleDOI
TL;DR: This critical review discusses the drivers, incentives, technologies, and environmental impacts of zero liquid discharge, and highlights the evolution of ZLD from thermal- to membrane-based processes, and analyzes the advantages and limitations of existing and emerging ZLD technologies.
Abstract: Zero liquid discharge (ZLD)—a wastewater management strategy that eliminates liquid waste and maximizes water usage efficiency — has attracted renewed interest worldwide in recent years. Although implementation of ZLD reduces water pollution and augments water supply, the technology is constrained by high cost and intensive energy consumption. In this critical review, we discuss the drivers, incentives, technologies, and environmental impacts of ZLD. Within this framework, the global applications of ZLD in the United States and emerging economies such as China and India are examined. We highlight the evolution of ZLD from thermal- to membrane-based processes, and analyze the advantages and limitations of existing and emerging ZLD technologies. The potential environmental impacts of ZLD, notably greenhouse gas emission and generation of solid waste, are discussed and the prospects of ZLD technologies and research needs are highlighted.

Journal ArticleDOI
TL;DR: A general framework for the estimation of the transmitter-LIM and LIM-receiver cascaded channel is introduced, and a two-stage algorithm that includes a sparse matrix factorization stage and a matrix completion stage is proposed that can achieve accurate channel estimation for LIM-assisted massive MIMO systems.
Abstract: In this letter, we consider the problem of channel estimation for large intelligent metasurface (LIM) assisted massive multiple-input multiple-output (MIMO) systems. The main challenge of this problem is that the LIM integrated with a large number of low-cost metamaterial antennas can only passively reflect the incident signals by certain phase shifts, and does not have any signal processing capability. To deal with this, we introduce a general framework for the estimation of the transmitter-LIM and LIM-receiver cascaded channel, and propose a two-stage algorithm that includes a sparse matrix factorization stage and a matrix completion stage. Simulation results illustrate that the proposed method can achieve accurate channel estimation for LIM-assisted massive MIMO systems.
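
A minimal sketch of the signal model this letter works with (not the two-stage algorithm itself): the LIM applies only phase shifts, so what the estimator must recover is the cascaded product of the two channels. Dimensions, symbol names (G, h, theta), and the pilot setup are illustrative assumptions.

```python
import numpy as np

M, N = 16, 64                      # receive antennas, LIM elements
rng = np.random.default_rng(0)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)      # Tx -> LIM
G = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)  # LIM -> Rx

theta = np.exp(1j * 2 * np.pi * rng.random(N))   # passive phase shifts only
s = 1.0 + 0.0j                                   # pilot symbol
noise = 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

y = G @ np.diag(theta) @ h * s + noise
# Equivalently, with the cascaded channel C = G @ diag(h), y = C @ theta * s,
# which is the unknown the paper's factorization/completion stages estimate.
C = G @ np.diag(h)
assert np.allclose(y, C @ theta * s + noise)
```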

ReportDOI
TL;DR: In this paper, the authors report the results of a WFH experiment at CTrip, a 16,000-employee, NASDAQ-listed Chinese travel agency, where call center employees who volunteered to WFH were randomly assigned to work from home or in the office for 9 months.
Abstract: About 10% of US employees now regularly work from home (WFH), but there are concerns this can lead to “shirking from home.” We report the results of a WFH experiment at CTrip, a 16,000-employee, NASDAQ-listed Chinese travel agency. Call center employees who volunteered to WFH were randomly assigned to work from home or in the office for 9 months. Home working led to a 13% performance increase, of which about 9% was from working more minutes per shift (fewer breaks and sick-days) and 4% from more calls per minute (attributed to a quieter working environment). Home workers also reported improved work satisfaction and experienced less turnover, but their promotion rate conditional on performance fell. Due to the success of the experiment, CTrip rolled out the option to WFH to the whole firm and allowed the experimental employees to re-select between the home or office. Interestingly, over half of them switched, which led to the gains from WFH almost doubling to 22%. This highlights the benefits of learning and selection effects when adopting modern management practices like WFH.
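
A quick arithmetic check on the reported decomposition (our calculation, not the paper's): output per employee is minutes per shift times calls per minute, so a 9% gain in the former and a 4% gain in the latter combine to

$(1 + 0.09)(1 + 0.04) - 1 \approx 0.134$,

i.e. roughly the reported 13%, since small growth rates combine approximately additively.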

Journal ArticleDOI
TL;DR: This consensus statement brings together the key findings and recommendations from a conference on monitoring athlete training loads into a shared conceptual framework for use by coaches, sport-science and -medicine staff, and other related professionals who have an interest in monitoring athlete training loads.
Abstract: Monitoring the load placed on athletes in both training and competition has become a very hot topic in sport science. Both scientists and coaches routinely monitor training loads using multidisciplinary approaches, and the pursuit of the best methodologies to capture and interpret data has produced an exponential increase in empirical and applied research. Indeed, the field has developed with such speed in recent years that it has given rise to industries aimed at developing new and novel paradigms to allow us to precisely quantify the internal and external loads placed on athletes and to help protect them from injury and ill health. In February 2016, a conference on "Monitoring Athlete Training Loads-The Hows and the Whys" was convened in Doha, Qatar, which brought together experts from around the world to share their applied research and contemporary practices in this rapidly growing field and also to investigate where it may branch to in the future. This consensus statement brings together the key findings and recommendations from this conference in a shared conceptual framework for use by coaches, sport-science and -medicine staff, and other related professionals who have an interest in monitoring athlete training loads and serves to provide an outline on what athlete-load monitoring is and how it is being applied in research and practice, why load monitoring is important and what the underlying rationale and prospective goals of monitoring are, and where athlete-load monitoring is heading in the future.

Journal ArticleDOI
TL;DR: The consequences of sleep deprivation on attention and working memory, positive and negative emotion, and hippocampal learning are reviewed, and how this evidence informs mechanistic understanding of the known changes in cognition and emotion associated with SD is explored.
Abstract: How does a lack of sleep affect our brains? In contrast to the benefits of sleep, frameworks exploring the impact of sleep loss are relatively lacking. Importantly, the effects of sleep deprivation (SD) do not simply reflect the absence of sleep and the benefits attributed to it; rather, they reflect the consequences of several additional factors, including extended wakefulness. With a focus on neuroimaging studies, we review the consequences of SD on attention and working memory, positive and negative emotion, and hippocampal learning. We explore how this evidence informs our mechanistic understanding of the known changes in cognition and emotion associated with SD, and the insights it provides regarding clinical conditions associated with sleep disruption.

Journal ArticleDOI
TL;DR: A consensus between researchers in the field is reported on procedures for testing perovskite solar cell stability, which are based on the International Summit on Organic Photovoltaic Stability (ISOS) protocols, and additional procedures to account for properties specific to PSCs are proposed.
Abstract: Improving the long-term stability of perovskite solar cells is critical to the deployment of this technology. Despite the great emphasis laid on stability-related investigations, publications lack consistency in experimental procedures and parameters reported. It is therefore challenging to reproduce and compare results and thereby develop a deep understanding of degradation mechanisms. Here, we report a consensus between researchers in the field on procedures for testing perovskite solar cell stability, which are based on the International Summit on Organic Photovoltaic Stability (ISOS) protocols. We propose additional procedures to account for properties specific to PSCs such as ion redistribution under electric fields, reversible degradation and to distinguish ambient-induced degradation from other stress factors. These protocols are not intended as a replacement of the existing qualification standards, but rather they aim to unify the stability assessment and to understand failure modes. Finally, we identify key procedural information which we suggest reporting in publications to improve reproducibility and enable large data set analysis. Reliability of stability data for perovskite solar cells is undermined by a lack of consistency in the test conditions and reporting. This Consensus Statement outlines practices for testing and reporting stability tailoring ISOS protocols for perovskite devices.

Proceedings ArticleDOI
20 Apr 2020
TL;DR: The proposed HGT model consistently outperforms all the state-of-the-art GNN baselines by 9%–21% on various downstream tasks, and the heterogeneous mini-batch graph sampling algorithm HGSampling enables efficient and scalable training.
Abstract: Recent years have witnessed the emerging success of graph neural networks (GNNs) for modeling structured data. However, most GNNs are designed for homogeneous graphs, in which all nodes and edges belong to the same types, making it infeasible to represent heterogeneous structures. In this paper, we present the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous graphs. To model heterogeneity, we design node- and edge-type dependent parameters to characterize the heterogeneous attention over each edge, empowering HGT to maintain dedicated representations for different types of nodes and edges. To handle Web-scale graph data, we design the heterogeneous mini-batch graph sampling algorithm, HGSampling, for efficient and scalable training. Extensive experiments on the Open Academic Graph of 179 million nodes and 2 billion edges show that the proposed HGT model consistently outperforms all the state-of-the-art GNN baselines by 9%–21% on various downstream tasks. The dataset and source code of HGT are publicly available at https://github.com/acbull/pyHGT.
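
A minimal sketch of the core idea of type-dependent parameters (our illustration, not the released pyHGT code): node-type-specific key/query/value projections plus an edge-type-specific matrix inside the attention score. Dimensions, type counts, and the class name are assumptions.

```python
import torch
import torch.nn as nn

class HeteroAttention(nn.Module):
    def __init__(self, dim, n_node_types, n_edge_types):
        super().__init__()
        self.k = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_node_types))
        self.q = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_node_types))
        self.v = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_node_types))
        self.w_att = nn.Parameter(torch.eye(dim).repeat(n_edge_types, 1, 1))
        self.scale = dim ** -0.5

    def forward(self, h_src, h_dst, src_type, dst_type, edge_type):
        # h_src: (n_neighbors, dim) sources of one (node type, edge type) pair
        k = self.k[src_type](h_src)                  # type-dependent keys
        q = self.q[dst_type](h_dst)                  # (dim,) target query
        v = self.v[src_type](h_src)
        scores = (k @ self.w_att[edge_type]) @ q * self.scale
        a = torch.softmax(scores, dim=0)             # attention over neighbors
        return a @ v                                 # aggregated message

layer = HeteroAttention(dim=32, n_node_types=3, n_edge_types=4)
out = layer(torch.randn(5, 32), torch.randn(32), src_type=0, dst_type=1, edge_type=2)
print(out.shape)  # torch.Size([32])
```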

Proceedings Article
04 Dec 2017
TL;DR: This work mathematically proves the convergence of TernGrad under the assumption of a bound on gradients, and proposes layer-wise ternarizing and gradient clipping to improve its convergence.
Abstract: High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1,0,1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn't incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available.
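
A minimal sketch of ternary gradient quantization as the abstract describes it (our reading, not the authors' code): each gradient maps to {-1, 0, 1} times a layer-wise scale, with stochastic rounding so the quantized gradient is unbiased in expectation. The clipping multiplier is an assumption.

```python
import torch

def ternarize(grad, clip_sigma=2.5):
    # layer-wise gradient clipping (a convergence aid the paper proposes)
    limit = clip_sigma * grad.std()
    g = grad.clamp(-limit, limit)
    s = g.abs().max()                      # layer-wise scale s = max|g|
    if s == 0:
        return torch.zeros_like(g)
    prob = g.abs() / s                     # keep probability proportional to |g|
    keep = torch.bernoulli(prob)           # stochastic rounding: E[out] = g
    return s * g.sign() * keep             # values in {-s, 0, +s}

g = torch.randn(1000)
t = ternarize(g)
print(t.unique().tolist())                 # three levels: [-s, 0.0, s]
```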

Journal ArticleDOI
TL;DR: An initial bolus and subsequent 2-hour infusion of andexanet substantially reduced anti-factor Xa activity in patients with acute major bleeding associated with factor Xa inhibitors, with effective hemostasis occurring in 79%.
Abstract: Background: Andexanet alfa (andexanet) is a recombinant modified human factor Xa decoy protein that has been shown to reverse the inhibition of factor Xa in healthy volunteers. Methods: In this multicenter, prospective, open-label, single-group study, we evaluated 67 patients who had acute major bleeding within 18 hours after the administration of a factor Xa inhibitor. The patients all received a bolus of andexanet followed by a 2-hour infusion of the drug. Patients were evaluated for changes in measures of anti–factor Xa activity and were assessed for clinical hemostatic efficacy during a 12-hour period. All the patients were subsequently followed for 30 days. The efficacy population of 47 patients had a baseline value for anti–factor Xa activity of at least 75 ng per milliliter (or ≥0.5 IU per milliliter for those receiving enoxaparin) and had confirmed bleeding severity at adjudication. Results: The mean age of the patients was 77 years; most of the patients had substantial cardiovascular disease. Bleeding ...

Journal ArticleDOI
TL;DR: This article performed a series of Monte Carlo simulations to evaluate the total error due to bias and variance in the inferences of each model, for typical sizes and types of datasets encountered in applied research.
Abstract: Empirical analyses in social science frequently confront quantitative data that are clustered or grouped. To account for group-level variation and improve model fit, researchers will commonly specify either a fixed- or random-effects model. But current advice on which approach should be preferred, and under what conditions, remains vague and sometimes contradictory. This study performs a series of Monte Carlo simulations to evaluate the total error due to bias and variance in the inferences of each model, for typical sizes and types of datasets encountered in applied research. The results offer a typology of dataset characteristics to help researchers choose a preferred model.
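
A minimal Monte Carlo sketch in the spirit of the study (our illustration, not the authors' simulation design): when group effects correlate with the regressor, pooled OLS (used here as a simplified stand-in for the random-effects case) is biased, while the fixed-effects (within) estimator is unbiased at the cost of higher variance. All sizes and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, n_groups, n_per = 1.0, 20, 10

def one_draw():
    u = rng.normal(size=n_groups)                        # group effects
    x = u[:, None] + rng.normal(size=(n_groups, n_per))  # x correlated with u
    y = beta * x + u[:, None] + rng.normal(size=x.shape)
    b_pool = np.sum(x * y) / np.sum(x * x)               # pooled OLS, ignores groups
    xd = x - x.mean(axis=1, keepdims=True)               # within estimator:
    yd = y - y.mean(axis=1, keepdims=True)               # demean inside each group
    b_fe = np.sum(xd * yd) / np.sum(xd * xd)
    return b_pool, b_fe

draws = np.array([one_draw() for _ in range(2000)])
for name, est in zip(("pooled", "fixed effects"), draws.T):
    print(f"{name}: bias={est.mean() - beta:+.3f}, sd={est.std():.3f}")
```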

Proceedings Article
01 Jan 2018
TL;DR: A novel attention mechanism in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise) and a light-weight neural net is proposed to learn sentence embedding, based solely on the proposed attention without any RNN/CNN structure.
Abstract: Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively. Attention mechanisms have recently attracted enormous interest due to their highly parallelizable computation, significantly less training time, and flexibility in modeling dependencies. We propose a novel attention mechanism in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise). A light-weight neural net, "Directional Self-Attention Network (DiSAN)", is then proposed to learn sentence embedding, based solely on the proposed attention without any RNN/CNN structure. DiSAN is only composed of a directional self-attention with temporal order encoded, followed by a multi-dimensional attention that compresses the sequence into a vector representation. Despite its simple form, DiSAN outperforms complicated RNN models on both prediction quality and time efficiency. It achieves the best test accuracy among all sentence encoding methods and improves the most recent best result by 1.02% on the Stanford Natural Language Inference (SNLI) dataset, and shows state-of-the-art test accuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language inference (MultiNLI), Sentences Involving Compositional Knowledge (SICK), Customer Review, MPQA, TREC question-type classification and Subjectivity (SUBJ) datasets.
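
A minimal sketch of the two ingredients the abstract names (illustrative, not the authors' full DiSAN): a directional forward mask restricts which tokens may attend to which, and the compatibility scores are multi-dimensional, i.e. computed per feature rather than as a single scalar per token pair. Layer shapes and the class name are assumptions.

```python
import torch
import torch.nn as nn

class DirectionalMultiDimAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim)
        self.w2 = nn.Linear(dim, dim)

    def forward(self, x):                    # x: (seq_len, dim)
        n = x.size(0)
        # feature-wise (multi-dimensional) compatibility: shape (n, n, dim)
        scores = torch.tanh(self.w1(x)[:, None, :] + self.w2(x)[None, :, :])
        # forward directional mask: token i attends only to earlier tokens j < i
        mask = torch.tril(torch.ones(n, n), diagonal=-1).bool()
        scores = scores.masked_fill(~mask[:, :, None], float("-inf"))
        attn = torch.softmax(scores, dim=1)  # softmax over source positions
        out = (attn * x[None, :, :]).sum(dim=1)        # (n, dim)
        return torch.nan_to_num(out)         # first token has no predecessors

enc = DirectionalMultiDimAttention(dim=16)
print(enc(torch.randn(6, 16)).shape)         # torch.Size([6, 16])
```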

Journal ArticleDOI
TL;DR: Obesity, metabolic syndrome, and diabetes are increasing in both developed and developing countries and will lead to more cases of HCC, whereas new infections with hepatitis C virus are low in developed countries.

Journal ArticleDOI
TL;DR: These final results from the DASISION trial continue to support dasatinib 100 mg once daily as a safe and effective first-line therapy for the long-term treatment of CML-CP.
Abstract: Purpose: We report the 5-year analysis from the phase III Dasatinib Versus Imatinib Study in Treatment-Naive Chronic Myeloid Leukemia Patients (DASISION) trial, evaluating long-term efficacy and safety outcomes of patients with chronic myeloid leukemia (CML) in chronic phase (CP) treated with dasatinib or imatinib. Patients and Methods: Patients with newly diagnosed CML-CP were randomly assigned to receive dasatinib 100 mg once daily (n = 259) or imatinib 400 mg once daily (n = 260). Results: At the time of study closure, 61% and 63% of dasatinib- and imatinib-treated patients remained on initial therapy, respectively. Cumulative rates of major molecular response and molecular responses with a 4.0- or 4.5-log reduction in BCR-ABL1 transcripts from baseline by 5 years remained statistically significantly higher for dasatinib compared with imatinib. Rates for progression-free and overall survival at 5 years remained high and similar across treatment arms. In patients who achieved BCR-ABL1 ≤ 10% at 3 months (dasatin...

Journal ArticleDOI
TL;DR: PCV13 reduced IPD across all age groups when used routinely in children in the USA, providing reassurance that, similar to PCV7, PCVs with additional serotypes can also prevent transmission to unvaccinated populations.
Abstract: Background: In 2000, seven-valent pneumococcal conjugate vaccine (PCV7) was introduced in the USA and resulted in dramatic reductions in invasive pneumococcal disease (IPD) and moderate increases in non-PCV7 type IPD. In 2010, PCV13 replaced PCV7 in the US immunisation schedule. We aimed to assess the effect of use of PCV13 in children on IPD in children and adults in the USA. Methods: We used laboratory-based and population-based data on incidence of IPD from the Active Bacterial Core surveillance (part of the Centers for Disease Control and Prevention's Emerging Infections Program) in a time-series model to compare rates of IPD before and after the introduction of PCV13. Cases of IPD between July 1, 2004, and June 30, 2013, were classified as being caused by the PCV13 serotypes against which PCV7 has no effect (PCV13 minus PCV7). In a time-series model, we used an expected outcomes approach to compare the reported incidence of IPD to that which would have been expected if PCV13 had not replaced PCV7. Findings: Compared with incidence expected among children younger than 5 years if PCV7 alone had been continued, incidence of IPD overall declined by 64% (95% interval estimate [95% IE] 59–68) and IPD caused by PCV13 minus PCV7 serotypes declined by 93% (91–94), by July, 2012, to June, 2013. Among adults, incidence of IPD overall also declined by 12–32% and IPD caused by PCV13 minus PCV7 type IPD declined by 58–72%, depending on age. We estimated that over 30 000 cases of IPD and 3000 deaths were averted in the first 3 years after the introduction of PCV13. Interpretation: PCV13 reduced IPD across all age groups when used routinely in children in the USA. These findings provide reassurance that, similar to PCV7, PCVs with additional serotypes can also prevent transmission to unvaccinated populations. Funding: Centers for Disease Control and Prevention.

Journal ArticleDOI
11 Nov 2016-Science
TL;DR: This work found that individual atomic features inside the gap of a plasmonic nanoassembly can localize light to volumes well below 1 cubic nanometer, enabling optical experiments on the atomic scale, and sets the basis for developing nanoscale nonlinear quantum optics on the single-molecule level.
Abstract: Trapping light with noble metal nanostructures overcomes the diffraction limit and can confine light to volumes typically on the order of 30 cubic nanometers. We found that individual atomic features inside the gap of a plasmonic nanoassembly can localize light to volumes well below 1 cubic nanometer (“picocavities”), enabling optical experiments on the atomic scale. These atomic features are dynamically formed and disassembled by laser irradiation. Although unstable at room temperature, picocavities can be stabilized at cryogenic temperatures, allowing single atomic cavities to be probed for many minutes. Unlike traditional optomechanical resonators, such extreme optical confinement yields a factor of 10⁶ enhancement of optomechanical coupling between the picocavity field and vibrations of individual molecular bonds. This work sets the basis for developing nanoscale nonlinear quantum optics on the single-molecule level.

Journal ArticleDOI
TL;DR: This approach combines an application programming interface (API) inspired by pandas with the Common Data Model for self-described scientific data to provide a toolkit and data structures for N-dimensional labeled arrays.
Abstract: xarray is an open source project and Python package that provides a toolkit and data structures for N-dimensional labeled arrays. Our approach combines an application programming interface (API) inspired by pandas with the Common Data Model for self-described scientific data. Key features of the xarray package include label-based indexing and arithmetic, interoperability with the core scientific Python packages (e.g., pandas, NumPy, Matplotlib), out-of-core computation on datasets that don’t fit into memory, a wide range of serialization and input/output (I/O) options, and advanced multi-dimensional data manipulation tools such as group-by and resampling. xarray, as a data model and analytics toolkit, has been widely adopted in the geoscience community but is also used more broadly for multi-dimensional data analysis in physics, machine learning and finance.
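
A short usage example of two features the abstract lists, label-based indexing and pandas-style group-by; the dataset contents here are synthetic.

```python
import numpy as np
import pandas as pd
import xarray as xr

temps = xr.DataArray(
    np.random.rand(365, 3, 4),
    dims=("time", "lat", "lon"),
    coords={"time": pd.date_range("2023-01-01", periods=365),
            "lat": [10.0, 20.0, 30.0],
            "lon": [100.0, 110.0, 120.0, 130.0]},
    name="temperature",
)

point = temps.sel(lat=20.0, lon=110.0)          # label-based indexing
monthly = temps.groupby("time.month").mean()    # pandas-style group-by
print(monthly.sizes)                            # {'month': 12, 'lat': 3, 'lon': 4}
```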

Journal ArticleDOI
TL;DR: In this article, new pansharpening techniques designed for hyperspectral data are compared with state-of-the-art methods for multispectral pansharpening that have been adapted to hyperspectral images.
Abstract: Pansharpening aims at fusing a panchromatic image with a multispectral one, to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter. In the last decade, many algorithms have been presented in the literature for pansharpening using multispectral data. With the increasing availability of hyperspectral systems, these methods are now being adapted to hyperspectral images. In this work, we compare new pansharpening techniques designed for hyperspectral data with some of the state-of-the-art methods for multispectral pansharpening, which have been adapted for hyperspectral data. Eleven methods from different classes (component substitution, multiresolution analysis, hybrid, Bayesian and matrix factorization) are analyzed. These methods are applied to three datasets and their effectiveness and robustness are evaluated with widely used performance indicators. In addition, all the pansharpening techniques considered in this paper have been implemented in a MATLAB toolbox that is made available to the community.
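
A minimal sketch of the component-substitution family the review mentions, in the style of a Brovey ratio fusion (our illustration, not one of the toolbox methods verbatim): each upsampled spectral band is rescaled by the ratio of the panchromatic image to the bands' mean intensity.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-8):
    # ms: (bands, h, w) low-resolution spectral image
    # pan: (H, W) high-resolution panchromatic image, with H = r*h, W = r*w
    r = pan.shape[0] // ms.shape[1]
    ms_up = ms.repeat(r, axis=1).repeat(r, axis=2)   # nearest-neighbor upsample
    intensity = ms_up.mean(axis=0)                   # substituted component
    return ms_up * (pan / (intensity + eps))[None, :, :]

ms = np.random.rand(4, 64, 64)     # e.g. 4 spectral bands
pan = np.random.rand(256, 256)     # 4x finer panchromatic band
sharp = brovey_pansharpen(ms, pan)
print(sharp.shape)                 # (4, 256, 256)
```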

Proceedings Article
12 Jul 2020
TL;DR: The GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, initial residual and identity mapping, which effectively relieve the problem of over-smoothing.
Abstract: Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data. Recently, GCNs and subsequent variants have shown superior performance in various application areas on real-world datasets. Despite their success, most of the current GCN models are shallow, due to the over-smoothing problem. In this paper, we study the problem of designing and analyzing deep graph convolutional networks. We propose the GCNII, an extension of the vanilla GCN model with two simple yet effective techniques: initial residual and identity mapping. We provide theoretical and empirical evidence that the two techniques effectively relieve the problem of over-smoothing. Our experiments show that the deep GCNII model outperforms the state-of-the-art methods on various semi- and full-supervised tasks. Code is available at this https URL.
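
A minimal sketch of one GCNII layer as the abstract describes it (written from the paper's update rule; hyperparameter values and the stand-in adjacency here are illustrative): an initial-residual connection back to the first layer's input and an identity mapping that shrinks the weight matrix toward the identity.

```python
import torch

def gcnii_layer(H, H0, A_hat, W, alpha=0.1, beta=0.5):
    # H: (n, d) current features; H0: (n, d) initial features
    # A_hat: (n, n) normalized adjacency with self-loops; W: (d, d) weights
    support = (1 - alpha) * (A_hat @ H) + alpha * H0        # initial residual
    mapped = (1 - beta) * support + beta * (support @ W)    # identity mapping
    return torch.relu(mapped)

n, d = 10, 8
A_hat = torch.softmax(torch.randn(n, n), dim=1)   # stand-in normalized adjacency
H0 = torch.randn(n, d)
H = gcnii_layer(H0, H0, A_hat, torch.randn(d, d))
print(H.shape)  # torch.Size([10, 8])
```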