Showing papers on "Context (language use) published in 2019"


Journal ArticleDOI
TL;DR: In this article, the authors introduce physics-informed neural networks, which are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear partial differential equations.

5,448 citations
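
As a concrete illustration of the approach this summary describes, below is a minimal PINN sketch for the 1D viscous Burgers' equation, where automatic differentiation supplies a PDE residual that is penalized alongside the ordinary data-fitting loss. The equation choice, network size, and placeholder data are assumptions for illustration, not the paper's setup:

import torch

# Minimal physics-informed neural network (PINN) sketch for the 1D viscous
# Burgers' equation u_t + u*u_x = nu*u_xx (an illustrative choice of PDE).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
nu = 0.01

def pde_residual(x, t):
    # Automatic differentiation supplies the PDE terms from the network itself.
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    grad = lambda out, inp: torch.autograd.grad(out, inp, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    return u_t + u * u_x - nu * u_xx        # ~0 wherever the physics is respected

# Total loss = supervised data fit + physics penalty at collocation points.
x_d, t_d, u_d = torch.rand(64, 1), torch.rand(64, 1), torch.zeros(64, 1)  # placeholder data
x_c, t_c = torch.rand(256, 1), torch.rand(256, 1)                         # collocation points
loss = ((net(torch.cat([x_d, t_d], dim=1)) - u_d) ** 2).mean() \
       + (pde_residual(x_c, t_c) ** 2).mean()
loss.backward()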


Proceedings ArticleDOI
15 Jun 2019
TL;DR: New state-of-the-art segmentation performance is achieved on three challenging scene segmentation datasets (Cityscapes, PASCAL Context, and COCO Stuff) without using coarse data.
Abstract: In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the self-attention mechanism. Unlike previous works that capture contexts by multi-scale feature fusion, we propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of a traditional dilated FCN, which model the semantic interdependencies in the spatial and channel dimensions, respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation, which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets: Cityscapes, PASCAL Context, and COCO Stuff. In particular, a Mean IoU score of 81.5% on the Cityscapes test set is achieved without using coarse data.

4,327 citations
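
The position attention module described in the abstract is, in essence, self-attention over spatial locations. A minimal sketch of that computation follows; the projections are realized as linear layers for brevity, and the paper's exact layer configuration and learnable fusion scale are omitted:

import torch
import torch.nn.functional as F

def position_attention(feat, wq, wk, wv):
    # feat: (B, C, H, W). Each spatial position attends to all positions,
    # so similar features reinforce each other regardless of distance.
    B, C, H, W = feat.shape
    x = feat.flatten(2).transpose(1, 2)              # (B, HW, C)
    q, k, v = wq(x), wk(x), wv(x)                    # query/key/value projections
    attn = F.softmax(q @ k.transpose(1, 2), dim=-1)  # (B, HW, HW) position affinities
    out = attn @ v                                   # weighted sum over all positions
    return (x + out).transpose(1, 2).reshape(B, C, H, W)  # residual fusion

feat = torch.randn(2, 64, 16, 16)
wq, wk, wv = (torch.nn.Linear(64, 64) for _ in range(3))
y = position_attention(feat, wq, wk, wv)             # same shape as the input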



Journal ArticleDOI
TL;DR: Describes how these computational techniques can impact a few key areas of medicine and explores how to build end-to-end systems.
Abstract: Here we present deep-learning techniques for healthcare, centering our discussion on deep learning in computer vision, natural language processing, reinforcement learning, and generalized methods. We describe how these computational techniques can impact a few key areas of medicine and explore how to build end-to-end systems. Our discussion of computer vision focuses largely on medical imaging, and we describe the application of natural language processing to domains such as electronic health record data. Similarly, reinforcement learning is discussed in the context of robotic-assisted surgery, and generalized deep-learning methods for genomics are reviewed.

1,843 citations


Journal ArticleDOI
TL;DR: Presents a tool for assessing teaching presence in online courses that use computer conferencing, together with preliminary results from its use.
Abstract: This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.

1,424 citations


Proceedings ArticleDOI
25 Apr 2019
TL;DR: A simplified network based on a query-independent formulation maintains the accuracy of NLNet with significantly less computation; this simplified design shares a similar structure with the Squeeze-Excitation Network (SENet), and the resulting global context network generally outperforms both the simplified NLNet and SENet on major benchmarks for various recognition tasks.
Abstract: The Non-Local Network (NLNet) presents a pioneering approach for capturing long-range dependencies via aggregating query-specific global context to each query position. However, through a rigorous empirical analysis, we have found that the global contexts modeled by the non-local network are almost the same for different query positions within an image. In this paper, we take advantage of this finding to create a simplified network based on a query-independent formulation, which maintains the accuracy of NLNet but with significantly less computation. We further observe that this simplified design shares a similar structure with the Squeeze-Excitation Network (SENet). Hence, we unify them into a three-step general framework for global context modeling. Within the general framework, we design a better instantiation, called the global context (GC) block, which is lightweight and can effectively model the global context. The lightweight property allows us to apply it to multiple layers in a backbone network to construct a global context network (GCNet), which generally outperforms both the simplified NLNet and SENet on major benchmarks for various recognition tasks.

1,202 citations
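
A minimal sketch of the query-independent idea: one attention map, shared by every query position, pools a single global context vector, which a bottleneck transform then adds back to all positions. Layer sizes and the reduction ratio below are illustrative, not the paper's exact configuration:

import torch
import torch.nn.functional as F

class GCBlockSketch(torch.nn.Module):
    # Simplified global context (GC) block: context modeling + transform + fusion.
    def __init__(self, c, r=8):
        super().__init__()
        self.attn = torch.nn.Conv2d(c, 1, 1)              # query-independent attention logits
        self.transform = torch.nn.Sequential(             # SE-like bottleneck transform
            torch.nn.Conv2d(c, c // r, 1), torch.nn.LayerNorm([c // r, 1, 1]),
            torch.nn.ReLU(), torch.nn.Conv2d(c // r, c, 1),
        )

    def forward(self, x):
        B, C, H, W = x.shape
        w = F.softmax(self.attn(x).flatten(2), dim=-1)     # (B, 1, HW): one map for all queries
        ctx = (x.flatten(2) * w).sum(-1).view(B, C, 1, 1)  # pooled global context vector
        return x + self.transform(ctx)                     # broadcast-add to every position

y = GCBlockSketch(64)(torch.randn(2, 64, 16, 16))          # same shape as the input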


Journal ArticleDOI
TL;DR: These data provide the most comprehensive survey of genetic risk within Parkinson's disease to date, providing a biological context for these risk factors, and showing that a considerable genetic component of this disease remains unidentified.
Abstract: Background: Genome-wide association studies (GWAS) in Parkinson's disease have increased the scope of biological knowledge about the disease over the past decade. We aimed to use the largest aggregate of GWAS data to identify novel risk loci and gain further insight into the causes of Parkinson's disease. Methods: We did a meta-analysis of 17 datasets from Parkinson's disease GWAS available from European ancestry samples to nominate novel loci for disease risk. These datasets incorporated all available data. We then used these data to estimate heritable risk and develop predictive models of this heritability. We also used large gene expression and methylation resources to examine possible functional consequences as well as tissue, cell type, and biological pathway enrichments for the identified risk factors. Additionally, we examined shared genetic risk between Parkinson's disease and other phenotypes of interest via genetic correlations followed by Mendelian randomisation. Findings: Between Oct 1, 2017, and Aug 9, 2018, we analysed 7·8 million single nucleotide polymorphisms in 37 688 cases, 18 618 UK Biobank proxy-cases (ie, individuals who do not have Parkinson's disease but have a first-degree relative that does), and 1·4 million controls. We identified 90 independent genome-wide significant risk signals across 78 genomic regions, including 38 novel independent risk signals in 37 loci. These 90 variants explained 16–36% of the heritable risk of Parkinson's disease depending on prevalence. Integrating methylation and expression data within a Mendelian randomisation framework identified putatively associated genes at 70 risk signals underlying GWAS loci for follow-up functional studies. Tissue-specific expression enrichment analyses suggested Parkinson's disease loci were heavily brain-enriched, with specific neuronal cell types being implicated from single cell data. We found significant genetic correlations with brain volumes (false discovery rate-adjusted p=0·0035 for intracranial volume, p=0·024 for putamen volume), smoking status (p=0·024), and educational attainment (p=0·038). Mendelian randomisation between cognitive performance and Parkinson's disease risk showed a robust association (p=8·00 × 10^-7). Interpretation: These data provide the most comprehensive survey of genetic risk within Parkinson's disease to date, to the best of our knowledge, by revealing many additional Parkinson's disease risk loci, providing a biological context for these risk factors, and showing that a considerable genetic component of this disease remains unidentified. These associations derived from European ancestry datasets will need to be followed up with more diverse data. Funding: The National Institute on Aging at the National Institutes of Health (USA), The Michael J Fox Foundation, and The Parkinson's Foundation (see appendix for full list of funding sources).

1,152 citations


Journal ArticleDOI
TL;DR: How macrophages shape local immune responses in the tumour microenvironment to both suppress and promote immunity to tumours is described, and the potential of targeting tumour-associated macrophages to enhance antitumour immune responses is discussed.
Abstract: Macrophages are critical mediators of tissue homeostasis, with tumours distorting this proclivity to stimulate proliferation, angiogenesis and metastasis. This has led to an interest in targeting macrophages in cancer, and preclinical studies have demonstrated efficacy across therapeutic modalities and tumour types. Much of the observed efficacy can be traced to the suppressive capacity of macrophages, driven by microenvironmental cues such as hypoxia and fibrosis. As a result, tumour macrophages display an ability to suppress T cell recruitment and function as well as to regulate other aspects of tumour immunity. With the increasing impact of cancer immunotherapy, macrophage targeting is now being evaluated in this context. Here, we discuss the results of clinical trials and the future of combinatorial immunotherapy. In this Review, DeNardo and Ruffell describe how macrophages shape local immune responses in the tumour microenvironment to both suppress and promote immunity to tumours. The authors also discuss the potential of targeting tumour-associated macrophages to enhance antitumour immune responses.

1,100 citations


Journal ArticleDOI
Bo Liu1, Dandan Zheng1, Qi Jin1, Lihong Chen1, Jian Yang1 
TL;DR: An integrated and automatic pipeline, VFanalyzer, is introduced to VFDB to systematically identify known and potential VFs in complete or draft bacterial genomes; through a context-based data refinement process for VFs encoded by gene clusters, it achieves relatively high specificity and sensitivity without manual curation.
Abstract: The virulence factor database (VFDB, http://www.mgc.ac.cn/VFs/) is devoted to providing the scientific community with a comprehensive warehouse and online platform for deciphering bacterial pathogenesis. The various combinations, organizations and expressions of virulence factors (VFs) are responsible for the diverse clinical symptoms of pathogen infections. Currently, whole-genome sequencing is widely used to decode potential novel or variant pathogens both in emergent outbreaks and in routine clinical practice. However, the efficient characterization of pathogenomic compositions remains a challenge for microbiologists or physicians with limited bioinformatics skills. Therefore, we introduced to VFDB an integrated and automatic pipeline, VFanalyzer, to systematically identify known/potential VFs in complete/draft bacterial genomes. VFanalyzer first constructs orthologous groups within the query genome and preanalyzed reference genomes from VFDB to avoid potential false positives due to paralogs. Then, it conducts iterative and exhaustive sequence similarity searches among the hierarchical prebuilt datasets of VFDB to accurately identify potential untypical/strain-specific VFs. Finally, via a context-based data refinement process for VFs encoded by gene clusters, VFanalyzer can achieve relatively high specificity and sensitivity without manual curation. In addition, a thoroughly optimized interactive web interface is introduced to present VFanalyzer reports in comparative pathogenomic style for easy online analysis.

1,008 citations



Journal ArticleDOI
TL;DR: The studies point to context-dependent outcomes when ROS modulators are combined with chemotherapy and radiotherapy, indicating a need for additional pre-clinical research in the field.
Abstract: Reactive oxygen species (ROS) are a group of short-lived, highly reactive, oxygen-containing molecules that can induce DNA damage and affect the DNA damage response (DDR). There is unequivocal pre-clinical and clinical evidence that ROS influence the genotoxic stress caused by chemotherapeutic agents and ionizing radiation. Recent studies have provided mechanistic insight into how ROS can also influence the cellular response to DNA damage caused by genotoxic therapy, especially in the context of double-strand breaks (DSBs). This has led to the clinical evaluation of agents modulating ROS in combination with genotoxic therapy for cancer, with mixed success so far. These studies point to context-dependent outcomes when ROS modulators are combined with chemotherapy and radiotherapy, indicating a need for additional pre-clinical research in the field. In this review, we discuss the current knowledge on the effect of ROS in the DNA damage response and its clinical relevance.

Journal ArticleDOI
TL;DR: This review highlights recent progress in understanding the function of N6-methyladenosine (m6A), the most abundant internal mark on eukaryotic mRNA, in light of the specific biological contexts of m6A effectors, and emphasizes the importance of context for RNA modification regulation and function.

Proceedings ArticleDOI
13 May 2019
TL;DR: Pixel-aligned Implicit Function (PIFu) locally aligns pixels of 2D images with the global context of their corresponding 3D object to produce high-resolution surfaces, including largely unseen regions such as the back of a person.
Abstract: We introduce Pixel-aligned Implicit Function (PIFu), an implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object. Using PIFu, we propose an end-to-end deep learning method for digitizing highly detailed clothed humans that can infer both 3D surface and texture from a single image, and optionally, multiple input images. Highly intricate shapes, such as hairstyles and clothing, as well as their variations and deformations, can be digitized in a unified way. Compared to existing representations used for 3D deep learning, PIFu produces high-resolution surfaces including largely unseen regions such as the back of a person. In particular, it is memory efficient unlike the voxel representation, can handle arbitrary topology, and the resulting surface is spatially aligned with the input image. Furthermore, while previous techniques are designed to process either a single image or multiple views, PIFu extends naturally to an arbitrary number of views. We demonstrate high-resolution and robust reconstructions on real world images from the DeepFashion dataset, which contains a variety of challenging clothing types. Our method achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image.
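
A minimal sketch of the pixel-aligned query at the heart of this representation, assuming an orthographic projection and a precomputed image feature map; the encoder, camera model, and layer sizes are placeholders rather than the paper's implementation:

import torch
import torch.nn.functional as F

def pifu_query(feat, pts, mlp):
    # feat: (B, C, H, W) image features; pts: (B, N, 3) points in camera space
    # with x, y already normalized to [-1, 1] (orthographic projection placeholder).
    xy = pts[..., :2].unsqueeze(2)                    # (B, N, 1, 2) sampling grid
    f = F.grid_sample(feat, xy, align_corners=True)   # (B, C, N, 1) pixel-aligned features
    f = f.squeeze(-1).transpose(1, 2)                 # (B, N, C)
    z = pts[..., 2:]                                  # (B, N, 1) depth of each point
    return mlp(torch.cat([f, z], dim=-1))             # (B, N, 1) inside/outside occupancy

feat = torch.randn(1, 256, 128, 128)
pts = torch.rand(1, 1024, 3) * 2 - 1
mlp = torch.nn.Sequential(torch.nn.Linear(257, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 1), torch.nn.Sigmoid())
occ = pifu_query(feat, pts, mlp)                      # evaluate the implicit surface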

Journal ArticleDOI
TL;DR: Comprehensive results show that the proposed CE-Net method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation, and retinal optical coherence tomography layer segmentation.
Abstract: Medical image segmentation is an important step in medical image analysis. With the rapid development of convolutional neural networks in image processing, deep learning has been used for medical image segmentation, such as optic disc segmentation, blood vessel detection, lung segmentation, cell segmentation, and so on. Previously, U-Net based approaches have been proposed. However, the consecutive pooling and strided convolutional operations lead to the loss of some spatial information. In this paper, we propose a context encoder network (CE-Net) to capture more high-level information and preserve spatial information for 2D medical image segmentation. CE-Net mainly contains three major components: a feature encoder module, a context extractor, and a feature decoder module. We use the pretrained ResNet block as the fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution block and a residual multi-kernel pooling block. We applied the proposed CE-Net to different 2D medical image segmentation tasks. Comprehensive results show that the proposed method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation, and retinal optical coherence tomography layer segmentation.
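
A minimal sketch of the multi-scale context idea behind the dense atrous convolution block: parallel dilated convolutions with different rates see different receptive-field sizes, and their outputs are fused with the input. The branch layout below is a simplification for illustration, not the paper's exact block:

import torch

class AtrousBranchesSketch(torch.nn.Module):
    # Parallel dilated convolutions capture context at several receptive-field
    # sizes; outputs are summed with the input, in the spirit of the DAC block.
    def __init__(self, c, rates=(1, 3, 5)):
        super().__init__()
        self.branches = torch.nn.ModuleList(
            torch.nn.Conv2d(c, c, 3, padding=r, dilation=r) for r in rates
        )

    def forward(self, x):
        return x + sum(torch.relu(b(x)) for b in self.branches)

y = AtrousBranchesSketch(32)(torch.randn(1, 32, 64, 64))  # spatial size preserved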

Proceedings Article
03 Jun 2019
TL;DR: This work develops a model which learns image representations that significantly outperform prior methods on the tasks the authors consider, and extends this model to use mixture-based representations, where segmentation behaviour emerges as a natural side-effect.
Abstract: We propose an approach to self-supervised representation learning based on maximizing mutual information between features extracted from multiple views of a shared context. For example, one could produce multiple views of a local spatio-temporal context by observing it from different locations (e.g., camera positions within a scene), and via different modalities (e.g., tactile, auditory, or visual). Or, an ImageNet image could provide a context from which one produces multiple views by repeatedly applying data augmentation. Maximizing mutual information between features extracted from these views requires capturing information about high-level factors whose influence spans multiple views – e.g., presence of certain objects or occurrence of certain events. Following our proposed approach, we develop a model which learns image representations that significantly outperform prior methods on the tasks we consider. Most notably, using self-supervised learning, our model learns representations which achieve 68.1% accuracy on ImageNet using standard linear evaluation. This beats prior results by over 12% and concurrent results by 7%. When we extend our model to use mixture-based representations, segmentation behaviour emerges as a natural side-effect. Our code is available online: https://github.com/Philip-Bachman/amdim-public.
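
Mutual information between views is typically maximized through a contrastive lower bound such as InfoNCE. A minimal sketch, assuming two encoded views per image; the encoder, feature size, and temperature are placeholders, and the paper's multi-scale variant is omitted:

import torch
import torch.nn.functional as F

def infonce(za, zb, tau=0.1):
    # za, zb: (N, D) features from two views of the same N images.
    za, zb = F.normalize(za, dim=1), F.normalize(zb, dim=1)
    logits = za @ zb.t() / tau              # (N, N) view-to-view similarities
    labels = torch.arange(len(za))          # positives sit on the diagonal
    return F.cross_entropy(logits, labels)  # contrastive bound on mutual information

loss = infonce(torch.randn(8, 128), torch.randn(8, 128))  # placeholder features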

Proceedings ArticleDOI
01 Jun 2019
TL;DR: In this paper, an interpretable framework based on a Generative Adversarial Network (GAN) is proposed for path prediction for multiple interacting agents in a scene; it leverages two sources of information, the path history of all the agents in the scene and the scene context information, using images of the scene.
Abstract: This paper addresses the problem of path prediction for multiple interacting agents in a scene, which is a crucial step for many autonomous platforms such as self-driving cars and social robots. We present SoPhie, an interpretable framework based on a Generative Adversarial Network (GAN), which leverages two sources of information: the path history of all the agents in a scene, and the scene context information, using images of the scene. To predict a future path for an agent, both physical and social information must be leveraged. Previous work has not been successful in jointly modeling physical and social interactions. Our approach blends a social attention mechanism with a physical attention mechanism that helps the model learn where to look in a large scene and extract the most salient parts of the image relevant to the path. Meanwhile, the social attention component aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors. SoPhie also takes advantage of GAN to generate more realistic samples and to capture the uncertain nature of future paths by modeling their distribution. All these mechanisms enable our approach to predict socially and physically plausible paths for the agents and to achieve state-of-the-art performance on several different trajectory forecasting benchmarks.
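
A minimal sketch of the social attention step, assuming the agents' path histories have already been encoded to fixed-size vectors; the projections and dimensions are illustrative, not the paper's architecture:

import torch
import torch.nn.functional as F

def social_attention(ego, neighbors, wq, wk):
    # ego: (B, D) encoded ego history; neighbors: (B, K, D) encoded neighbor histories.
    scores = (wq(ego).unsqueeze(1) * wk(neighbors)).sum(-1)  # (B, K) neighbor relevance
    attn = F.softmax(scores, dim=-1)
    return (attn.unsqueeze(-1) * neighbors).sum(1)           # (B, D) social context vector

ego, nbrs = torch.randn(4, 32), torch.randn(4, 6, 32)
wq, wk = torch.nn.Linear(32, 32), torch.nn.Linear(32, 32)
ctx = social_attention(ego, nbrs, wq, wk)  # fed to the generator alongside physical attention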

Journal ArticleDOI
TL;DR: The authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types to illustrate how common clinical problems are being addressed.
Abstract: Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet-to-be-envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.

Journal ArticleDOI
TL;DR: This Review considers DSB repair-pathway choice in somatic mammalian cells as a series of ‘decision trees’, and explores how defective pathway choice can lead to genomic instability.
Abstract: The major pathways of DNA double-strand break (DSB) repair are crucial for maintaining genomic stability. However, if deployed in an inappropriate cellular context, these same repair functions can mediate chromosome rearrangements that underlie various human diseases, ranging from developmental disorders to cancer. The two major mechanisms of DSB repair in mammalian cells are non-homologous end joining (NHEJ) and homologous recombination. In this Review, we consider DSB repair-pathway choice in somatic mammalian cells as a series of 'decision trees', and explore how defective pathway choice can lead to genomic instability. Stalled, collapsed or broken DNA replication forks present a distinctive challenge to the DSB repair system. Emerging evidence suggests that the 'rules' governing repair-pathway choice at stalled replication forks differ from those at replication-independent DSBs.

Proceedings ArticleDOI
29 Jan 2019
TL;DR: Model cards as discussed by the authors are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) that are relevant to the intended application domains.
Abstract: Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
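
As a sketch of how such documentation might travel with a released model in code, the structure below paraphrases the card sections proposed in the paper; the paper defines model cards as documents rather than an API, so the field names and all values here are hypothetical:

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Fields paraphrase sections proposed in the paper; contents are illustrative.
    model_details: str
    intended_use: str
    factors: list = field(default_factory=list)   # e.g. demographic/phenotypic groups
    metrics: dict = field(default_factory=dict)   # disaggregated evaluation results
    caveats: str = ""

card = ModelCard(
    model_details="Smiling-face detector, CNN, v1.0",
    intended_use="Research on face attribute classification; not for surveillance.",
    factors=["age group", "Fitzpatrick skin type"],
    metrics={"accuracy (overall)": 0.91, "accuracy (skin type V-VI)": 0.84},  # hypothetical numbers
    caveats="Performance varies across groups; see disaggregated metrics.",
)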

Journal ArticleDOI
TL;DR: The authors examine three large datasets and find only a small negative association between digital technology use and adolescent well-being, explaining at most 0.4% of the variation in well-being.
Abstract: The widespread use of digital technologies by young people has spurred speculation that their regular use negatively impacts psychological well-being. Current empirical evidence supporting this idea is largely based on secondary analyses of large-scale social datasets. Though these datasets provide a valuable resource for highly powered investigations, their many variables and observations are often explored with an analytical flexibility that marks small effects as statistically significant, thereby leading to potential false positives and conflicting results. Here we address these methodological challenges by applying specification curve analysis (SCA) across three large-scale social datasets (total n = 355,358) to rigorously examine correlational evidence for the effects of digital technology on adolescents. The association we find between digital technology use and adolescent well-being is negative but small, explaining at most 0.4% of the variation in well-being. Taking the broader context of the data into account suggests that these effects are too small to warrant policy change.
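
Specification curve analysis fits the same focal association under every defensible combination of analytic choices and then examines the whole distribution of estimates, rather than one cherry-picked specification. A minimal sketch over subsets of control variables, on synthetic data (all variable names and numbers are illustrative):

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 1000
tech_use = rng.normal(size=n)
controls = {c: rng.normal(size=n) for c in ["age", "sex", "income"]}
well_being = -0.05 * tech_use + rng.normal(size=n)     # synthetic: tiny true effect

effects = []
for k in range(len(controls) + 1):
    for combo in itertools.combinations(controls, k):  # each subset = one specification
        X = np.column_stack([np.ones(n), tech_use] + [controls[c] for c in combo])
        beta = np.linalg.lstsq(X, well_being, rcond=None)[0]
        effects.append(beta[1])                        # coefficient on technology use

print(f"{len(effects)} specifications; median effect {np.median(effects):+.3f}")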

Journal ArticleDOI
TL;DR: This paper constitutes the first holistic tutorial on the development of ANN-based ML techniques tailored to the needs of future wireless networks, overviewing how such algorithms can be employed for solving various wireless networking problems.
Abstract: In order to effectively provide ultra-reliable low-latency communications and pervasive connectivity for Internet of Things (IoT) devices, next-generation wireless networks can leverage intelligent, data-driven functions enabled by the integration of machine learning (ML) notions across the wireless core and edge infrastructure. In this context, this paper provides a comprehensive tutorial that overviews how artificial neural network (ANN)-based ML algorithms can be employed for solving various wireless networking problems. For this purpose, we first present a detailed overview of a number of key types of ANNs that are pertinent to wireless networking applications, including recurrent, spiking, and deep neural networks. For each type of ANN, we present the basic architecture as well as specific examples that are particularly important and relevant to wireless network design. Such ANN examples include echo state networks, liquid state machines, and long short-term memory networks. Then, we provide an in-depth overview of the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality applications over wireless networks, as well as edge computing and caching. For each individual application, we present the main motivation for using ANNs along with the associated challenges, and we also provide a detailed example of a use-case scenario and outline future works that can be addressed using ANNs. In a nutshell, this paper constitutes the first holistic tutorial on the development of ANN-based ML techniques tailored to the needs of future wireless networks.
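
Of the ANN types the tutorial surveys, the echo state network is compact enough to sketch: a fixed random recurrent reservoir is driven by the input, and only a linear readout is trained, here by ridge regression. Sizes, scaling, and the toy task are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius < 1: echo-state property

u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None]  # toy input signal
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):                                   # reservoir runs untrained
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

y_target = np.roll(u[:, 0], -1)                      # task: predict the next input value
ridge = 1e-6                                         # only this readout is learned
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y_target)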

Journal ArticleDOI
01 Aug 2019 - Nature
TL;DR: A deep learning approach that predicts the risk of acute kidney injury and provides confidence assessments and a list of the clinical features that are most salient to each prediction, alongside predicted future trajectories for clinically relevant blood tests are developed.
Abstract: The early prediction of deterioration could have an important role in supporting healthcare professionals, as an estimated 11% of deaths in hospital follow a failure to promptly recognize and treat deteriorating patients [1]. To achieve this goal requires predictions of patient risk that are continuously updated and accurate, and delivered at an individual level with sufficient context and enough time to act. Here we develop a deep learning approach for the continuous risk prediction of future deterioration in patients, building on recent work that models adverse events from electronic health records [2-17] and using acute kidney injury (a common and potentially life-threatening condition [18]) as an exemplar. Our model was developed on a large, longitudinal dataset of electronic health records that cover diverse clinical environments, comprising 703,782 adult patients across 172 inpatient and 1,062 outpatient sites. Our model predicts 55.8% of all inpatient episodes of acute kidney injury, and 90.2% of all acute kidney injuries that required subsequent administration of dialysis, with a lead time of up to 48 h and a ratio of 2 false alerts for every true alert. In addition to predicting future acute kidney injury, our model provides confidence assessments and a list of the clinical features that are most salient to each prediction, alongside predicted future trajectories for clinically relevant blood tests [9]. Although the recognition and prompt treatment of acute kidney injury is known to be challenging, our approach may offer opportunities for identifying patients at risk within a time window that enables early treatment.

Posted Content
TL;DR: SuperGlue matches two sets of local features by jointly finding correspondences and rejecting non-matchable points; assignments are estimated by solving a differentiable optimal transport problem whose costs are predicted by a graph neural network.
Abstract: This paper introduces SuperGlue, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points. Assignments are estimated by solving a differentiable optimal transport problem, whose costs are predicted by a graph neural network. We introduce a flexible context aggregation mechanism based on attention, enabling SuperGlue to reason about the underlying 3D scene and feature assignments jointly. Compared to traditional, hand-designed heuristics, our technique learns priors over geometric transformations and regularities of the 3D world through end-to-end training from image pairs. SuperGlue outperforms other learned approaches and achieves state-of-the-art results on the task of pose estimation in challenging real-world indoor and outdoor environments. The proposed method performs matching in real-time on a modern GPU and can be readily integrated into modern SfM or SLAM systems. The code and trained weights are publicly available at this https URL.
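
The differentiable optimal transport step can be sketched with log-domain Sinkhorn iterations over a score matrix. For brevity this omits the dustbin row and column the paper uses for unmatched points, and the scores here are random placeholders rather than GNN predictions:

import torch

def sinkhorn(scores, iters=20):
    # scores: (M, N) matching affinities; returns a soft assignment matrix whose
    # row and column marginals are driven toward uniform by alternating
    # normalization in log space (numerically stable Sinkhorn iterations).
    log_p = scores.clone()
    for _ in range(iters):
        log_p = log_p - log_p.logsumexp(dim=1, keepdim=True)  # normalize rows
        log_p = log_p - log_p.logsumexp(dim=0, keepdim=True)  # normalize columns
    return log_p.exp()

P = sinkhorn(torch.randn(5, 5))
print(P.sum(0), P.sum(1))  # both close to 1: an approximately doubly stochastic matching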

Journal ArticleDOI
TL;DR: A novel and unified architecture containing a bidirectional LSTM (BiLSTM), an attention mechanism and a convolutional layer is proposed in this paper; it outperforms other state-of-the-art text classification methods in terms of classification accuracy.
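
A minimal sketch of one plausible wiring of the named components; the paper's exact ordering, dimensions, and pooling are not given in this summary, so everything below is illustrative:

import torch

class AttBiLSTMConvSketch(torch.nn.Module):
    # BiLSTM over tokens, soft attention weighting the timesteps, a convolution
    # over the weighted sequence, then max-pooling and a linear classifier.
    def __init__(self, vocab=10000, emb=100, hid=64, n_cls=2):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab, emb)
        self.bilstm = torch.nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.attn = torch.nn.Linear(2 * hid, 1)
        self.conv = torch.nn.Conv1d(2 * hid, 64, kernel_size=3, padding=1)
        self.fc = torch.nn.Linear(64, n_cls)

    def forward(self, tokens):                        # tokens: (B, T)
        h, _ = self.bilstm(self.embed(tokens))        # (B, T, 2*hid)
        a = torch.softmax(self.attn(h), dim=1)        # (B, T, 1) attention over timesteps
        z = self.conv((a * h).transpose(1, 2))        # (B, 64, T)
        return self.fc(torch.relu(z).max(dim=2).values)  # pool over time, classify

logits = AttBiLSTMConvSketch()(torch.randint(0, 10000, (4, 20)))  # (4, 2)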

Journal ArticleDOI
TL;DR: This review considers the commonly used term 'aerosol transmission' in the context of some infectious agents that are well recognized to be transmissible via the airborne route, and discusses other agents, like influenza virus, where the potential for airborne transmission is much more dependent on various host, viral and environmental factors, and whose potential for aerosol transmission may therefore be underestimated.
Abstract: Although short-range large-droplet transmission is possible for most respiratory infectious agents, deciding whether the same agent is also airborne has a potentially huge impact on the types (and costs) of infection control interventions that are required. The concept and definition of aerosols are also discussed, as are the concepts of large-droplet transmission and airborne transmission, which most authors use synonymously with aerosol transmission, although some use the term to mean either large-droplet or aerosol transmission. However, these terms are often used confusingly when discussing specific infection control interventions for individual pathogens that are accepted to be mostly transmitted by the airborne (aerosol) route (e.g. tuberculosis, measles and chickenpox). It is therefore important to clarify such terminology when deciding whether a particular intervention, like the type of personal protective equipment (PPE) to be used, is adequate for this potential mode of transmission, i.e. requiring an N95 respirator rather than a surgical mask. With this in mind, this review considers the commonly used term 'aerosol transmission' in the context of some infectious agents that are well recognized to be transmissible via the airborne route. It also discusses other agents, like influenza virus, where the potential for airborne transmission is much more dependent on various host, viral and environmental factors, and whose potential for aerosol transmission may therefore be underestimated.

Journal ArticleDOI
TL;DR: The second edition of the International Principles and Standards for the Practice of Ecological Restoration (the Standards) presents a robust framework for restoration projects to achieve intended goals, while addressing challenges including effective design and implementation, accounting for complex ecosystem dynamics (especially in the context of climate change), and navigating trade-offs associated with land management priorities and decisions.
Abstract: Ecological restoration, when implemented effectively and sustainably, contributes to protecting biodiversity; improving human health and wellbeing; increasing food and water security; delivering goods, services, and economic prosperity; and supporting climate change mitigation, resilience, and adaptation. It is a solutions-based approach that engages communities, scientists, policymakers, and land managers to repair ecological damage and rebuild a healthier relationship between people and the rest of nature. When combined with conservation and sustainable use, ecological restoration is the link needed to move local, regional, and global environmental conditions from a state of continued degradation, to one of net positive improvement. The second edition of the International Principles and Standards for the Practice of Ecological Restoration (the Standards) presents a robust framework for restoration projects to achieve intended goals, while addressing challenges including effective design and implementation, accounting for complex ecosystem dynamics (especially in the context of climate change), and navigating trade-offs associated with land management priorities and decisions.

Journal ArticleDOI
01 Aug 2019 - BMJ Open
TL;DR: Key principles and actions for consideration when developing interventions to improve health are presented; researchers should consider each action by addressing its relevance to a specific intervention in a specific context, both at the start and throughout the development process.
Abstract: Objective: To provide researchers with guidance on actions to take during intervention development. Summary of key points: Based on a consensus exercise informed by reviews and qualitative interviews, we present key principles and actions for consideration when developing interventions to improve health. These include seeing intervention development as a dynamic iterative process, involving stakeholders, reviewing published research evidence, drawing on existing theories, articulating programme theory, undertaking primary data collection, understanding context, paying attention to future implementation in the real world, and designing and refining an intervention using iterative cycles of development with stakeholder input throughout. Conclusion: Researchers should consider each action by addressing its relevance to a specific intervention in a specific context, both at the start and throughout the development process.

Proceedings ArticleDOI
25 Jun 2019
TL;DR: In this paper, the authors provide an introduction and overview of control barrier functions and their use to verify and enforce safety properties in the context of (optimization-based) safety-critical controllers.
Abstract: This paper provides an introduction and overview of recent work on control barrier functions and their use to verify and enforce safety properties in the context of (optimization-based) safety-critical controllers. We survey the main technical results and discuss applications to several domains including robotic systems.
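
The standard CBF safety filter solves a small quadratic program that minimally modifies a desired control so the barrier condition holds. For a single input the QP has a closed form; the sketch below uses a 1D integrator keeping x below x_max, with illustrative dynamics and gains rather than any system from the paper:

# Safety filter sketch: enforce h(x) = x_max - x >= 0 for xdot = u by requiring
# hdot + alpha*h >= 0, i.e. -u + alpha*(x_max - x) >= 0. The QP
# min (u - u_des)^2 subject to that constraint reduces to a simple clamp.
def cbf_filter(x, u_des, x_max=1.0, alpha=2.0):
    u_bound = alpha * (x_max - x)   # constraint: u <= u_bound
    return min(u_des, u_bound)      # project the desired control onto the safe set

x = 0.0
for _ in range(100):                # drive toward x = 2 but never cross x_max = 1
    u = cbf_filter(x, u_des=1.0)
    x += 0.05 * u                   # Euler step of xdot = u
print(f"final x = {x:.6f}")         # approaches but never crosses x_max = 1.0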

Journal ArticleDOI
TL;DR: In this paper, the authors employed the panel vector autoregressive (PVAR) model to examine the impact of renewable energy and financial development on carbon dioxide (CO2) emissions and economic growth.