
Showing papers from "University of Texas at Arlington" published in 2020


Book
Georges Aad1, E. Abat2, Jalal Abdallah3, Jalal Abdallah4  +3029 moreInstitutions (164)
23 Feb 2020
TL;DR: The ATLAS detector as installed in its experimental cavern at Point 1 at CERN is described, along with a brief overview of the expected performance of the detector when the Large Hadron Collider begins operation.
Abstract: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.

3,111 citations


Journal ArticleDOI
TL;DR: SARS-CoV-2 itself is not a recombinant of any sarbecoviruses detected to date, and its receptor-binding motif appears to be an ancestral trait shared with bat viruses and not one acquired recently via recombination.
Abstract: There are outstanding evolutionary questions on the recent emergence of human coronavirus SARS-CoV-2 including the role of reservoir species, the role of recombination and its time of divergence from animal viruses. We find that the sarbecoviruses—the viral subgenus containing SARS-CoV and SARS-CoV-2—undergo frequent recombination and exhibit spatially structured genetic diversity on a regional scale in China. SARS-CoV-2 itself is not a recombinant of any sarbecoviruses detected to date, and its receptor-binding motif, important for specificity to human ACE2 receptors, appears to be an ancestral trait shared with bat viruses and not one acquired recently via recombination. To employ phylogenetic dating methods, recombinant regions of a 68-genome sarbecovirus alignment were removed with three independent methods. Bayesian evolutionary rate and divergence date estimates were shown to be consistent for these three approaches and for two different prior specifications of evolutionary rates based on HCoV-OC43 and MERS-CoV. Divergence dates between SARS-CoV-2 and the bat sarbecovirus reservoir were estimated as 1948 (95% highest posterior density (HPD): 1879–1999), 1969 (95% HPD: 1930–2000) and 1982 (95% HPD: 1948–2009), indicating that the lineage giving rise to SARS-CoV-2 has been circulating unnoticed in bats for decades. In this manuscript, the authors address evolutionary questions on the emergence of SARS-CoV-2. They find that SARS-CoV-2 is not a recombinant of any sarbecoviruses detected to date, and that the bat and pangolin sequences most closely related to SARS-CoV-2 probably diverged several decades ago or possibly earlier from human SARS-CoV-2 samples.

716 citations


Journal ArticleDOI
TL;DR: In this article, three regional-scale models for forecasting and assessing the course of the coronavirus disease 2019 (COVID-19) pandemic are presented. The authors focus on early-time data and provide an accessible framework for generating policy-relevant insights into the pandemic's course.
Abstract: The coronavirus disease 2019 (COVID-19) pandemic has placed epidemic modeling at the forefront of worldwide public policy making. Nonetheless, modeling and forecasting the spread of COVID-19 remains a challenge. Here, we detail three regional-scale models for forecasting and assessing the course of the pandemic. This work demonstrates the utility of parsimonious models for early-time data and provides an accessible framework for generating policy-relevant insights into its course. We show how these models can be connected to each other and to time series data for a particular region. Capable of measuring and forecasting the impacts of social distancing, these models highlight the dangers of relaxing nonpharmaceutical public health interventions in the absence of a vaccine or antiviral therapies.

447 citations


Posted Content
TL;DR: This paper provides an extensive review of self-supervised methods that follow the contrastive approach, explaining commonly used pretext tasks in a contrastive learning setup, followed by different architectures that have been proposed so far.
Abstract: Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudo labels as supervision and using the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains. It aims at embedding augmented versions of the same sample close to each other while trying to push away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by different architectures that have been proposed so far. Next, we present a performance comparison of different methods for multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and the need for further techniques and future directions to make substantial progress.
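To make the objective concrete, below is a minimal, hedged sketch of a contrastive loss of the kind the abstract describes (pulling embeddings of augmented views of the same sample together and pushing embeddings of different samples apart), in the style of the NT-Xent loss popularized by SimCLR; it is an illustration, not code from the surveyed paper, and all names are ours.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent-style) loss over two batches of embeddings.

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    Positive pairs are (z1[i], z2[i]); every other sample in the combined
    batch serves as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity

    # The positive for index i in [0, N) is i + N, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: embeddings produced by an encoder on two augmentations of a batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```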

426 citations


Posted ContentDOI
31 Mar 2020-bioRxiv
TL;DR: Estimates obtained from three approaches indicate that the most likely divergence date of SARS-CoV-2 from its most closely related available bat sequences ranges from 1948 to 1982; the analysis also indicates high levels of co-infection in horseshoe bats, where the viral pool can generate novel allele combinations and substantial genetic diversity.
Abstract: There are outstanding evolutionary questions on the recent emergence of coronavirus SARS-CoV-2/hCoV-19 in Hubei province that caused the COVID-19 pandemic, including (1) the relationship of the new virus to the SARS-related coronaviruses, (2) the role of bats as a reservoir species, (3) the potential role of other mammals in the emergence event, and (4) the role of recombination in viral emergence. Here, we address these questions and find that the sarbecoviruses -- the viral subgenus responsible for the emergence of SARS-CoV and SARS-CoV-2 -- exhibit frequent recombination, but the SARS-CoV-2 lineage itself is not a recombinant of any viruses detected to date. In order to employ phylogenetic methods to date the divergence events between SARS-CoV-2 and the bat sarbecovirus reservoir, recombinant regions of a 68-genome sarbecovirus alignment were removed with three independent methods. Bayesian evolutionary rate and divergence date estimates were consistent for all three recombination-free alignments and robust to two different prior specifications based on HCoV-OC43 and MERS-CoV evolutionary rates. Divergence dates between SARS-CoV-2 and the bat sarbecovirus reservoir were estimated as 1948 (95% HPD: 1879-1999), 1969 (95% HPD: 1930-2000), and 1982 (95% HPD: 1948-2009). Despite intensified characterization of sarbecoviruses since SARS, the lineage giving rise to SARS-CoV-2 has been circulating unnoticed for decades in bats and been transmitted to other hosts such as pangolins. The occurrence of a third significant coronavirus emergence in 17 years together with the high prevalence and virus diversity in bats implies that these viruses are likely to cross species boundaries again.

344 citations


Journal ArticleDOI
Edoardo Aprà1, Eric J. Bylaska1, W. A. de Jong2, Niranjan Govind1, Karol Kowalski1, T. P. Straatsma3, Marat Valiev1, H. J. J. van Dam4, Yuri Alexeev5, J. Anchell6, V. Anisimov5, Fredy W. Aquino, Raymond Atta-Fynn7, Jochen Autschbach8, Nicholas P. Bauman1, Jeffrey C. Becca9, David E. Bernholdt10, K. Bhaskaran-Nair11, Stuart Bogatko12, Piotr Borowski13, Jeffery S. Boschen14, Jiří Brabec15, Adam Bruner16, Emilie Cauet17, Y. Chen18, Gennady N. Chuev19, Christopher J. Cramer20, Jeff Daily1, M. J. O. Deegan, Thom H. Dunning21, Michel Dupuis8, Kenneth G. Dyall, George I. Fann10, Sean A. Fischer22, Alexandr Fonari23, Herbert A. Früchtl24, Laura Gagliardi20, Jorge Garza25, Nitin A. Gawande1, Soumen Ghosh20, Kurt R. Glaesemann1, Andreas W. Götz26, Jeff R. Hammond6, Volkhard Helms27, Eric D. Hermes28, Kimihiko Hirao, So Hirata29, Mathias Jacquelin2, Lasse Jensen9, Benny G. Johnson, Hannes Jónsson30, Ricky A. Kendall10, Michael Klemm6, Rika Kobayashi31, V. Konkov32, Sriram Krishnamoorthy1, M. Krishnan18, Zijing Lin33, Roberto D. Lins34, Rik J. Littlefield, Andrew J. Logsdail35, Kenneth Lopata36, Wan Yong Ma37, Aleksandr V. Marenich20, J. Martin del Campo38, Daniel Mejía-Rodríguez39, Justin E. Moore6, Jonathan M. Mullin, Takahito Nakajima, Daniel R. Nascimento1, Jeffrey A. Nichols10, P. J. Nichols40, J. Nieplocha1, Alberto Otero-de-la-Roza41, Bruce J. Palmer1, Ajay Panyala1, T. Pirojsirikul42, Bo Peng1, Roberto Peverati32, Jiri Pittner15, L. Pollack, Ryan M. Richard43, P. Sadayappan44, George C. Schatz45, William A. Shelton36, Daniel W. Silverstein46, D. M. A. Smith6, Thereza A. Soares47, Duo Song1, Marcel Swart, H. L. Taylor48, G. S. Thomas1, Vinod Tipparaju49, Donald G. Truhlar20, Kiril Tsemekhman, T. Van Voorhis50, Álvaro Vázquez-Mayagoitia5, Prakash Verma, Oreste Villa51, Abhinav Vishnu1, Konstantinos D. Vogiatzis52, Dunyou Wang53, John H. Weare26, Mark J. Williamson54, Theresa L. Windus14, Krzysztof Wolinski13, A. T. Wong, Qin Wu4, Chan-Shan Yang2, Q. Yu55, Martin Zacharias56, Zhiyong Zhang57, Yan Zhao58, Robert W. Harrison59 
Pacific Northwest National Laboratory1, Lawrence Berkeley National Laboratory2, National Center for Computational Sciences3, Brookhaven National Laboratory4, Argonne National Laboratory5, Intel6, University of Texas at Arlington7, State University of New York System8, Pennsylvania State University9, Oak Ridge National Laboratory10, Washington University in St. Louis11, Wellesley College12, Maria Curie-Skłodowska University13, Iowa State University14, Academy of Sciences of the Czech Republic15, University of Tennessee at Martin16, Université libre de Bruxelles17, Facebook18, Russian Academy of Sciences19, University of Minnesota20, University of Washington21, United States Naval Research Laboratory22, Georgia Institute of Technology23, University of St Andrews24, Universidad Autónoma Metropolitana25, University of California, San Diego26, Saarland University27, Sandia National Laboratories28, University of Illinois at Urbana–Champaign29, University of Iceland30, Australian National University31, Florida Institute of Technology32, University of Science and Technology of China33, Oswaldo Cruz Foundation34, Cardiff University35, Louisiana State University36, Chinese Academy of Sciences37, National Autonomous University of Mexico38, University of Florida39, Los Alamos National Laboratory40, University of Oviedo41, Prince of Songkla University42, Ames Laboratory43, University of Utah44, Northwestern University45, Universal Display Corporation46, Federal University of Pernambuco47, CD-adapco48, Cray49, Massachusetts Institute of Technology50, Nvidia51, University of Tennessee52, Shandong Normal University53, University of Cambridge54, Advanced Micro Devices55, Technische Universität München56, Stanford University57, Wuhan University of Technology58, Stony Brook University59
TL;DR: The NWChem computational chemistry suite is reviewed, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.
Abstract: Specialized computational chemistry packages have permanently reshaped the landscape of chemical and materials science by providing tools to support and guide experimental efforts and for the prediction of atomistic and electronic properties. In this regard, electronic structure packages have played a special role by using first-principle-driven methodologies to model complex chemical and materials processes. Over the past few decades, the rapid development of computing technologies and the tremendous increase in computational power have offered a unique chance to study complex transformations using sophisticated and predictive many-body techniques that describe correlated behavior of electrons in molecular and condensed phase systems at different levels of theory. In enabling these simulations, novel parallel algorithms have been able to take advantage of computational resources to address the polynomial scaling of electronic structure methods. In this paper, we briefly review the NWChem computational chemistry suite, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.

342 citations


Journal ArticleDOI
TL;DR: This 2020 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations for advanced life support includes updates on multiple advanced life support topics addressed with 3 different types of reviews.

311 citations



Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, Ovsat Abdinov4  +2934 moreInstitutions (199)
TL;DR: In this article, a search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented, based on 139 fb$^{-1}$ of proton-proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ TeV.
Abstract: A search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented. The analysis is based on 139 fb$^{-1}$ of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ $\text {TeV}$. Three R-parity-conserving scenarios where the lightest neutralino is the lightest supersymmetric particle are considered: the production of chargino pairs with decays via either W bosons or sleptons, and the direct production of slepton pairs. The analysis is optimised for the first of these scenarios, but the results are also interpreted in the others. No significant deviations from the Standard Model expectations are observed and limits at 95% confidence level are set on the masses of relevant supersymmetric particles in each of the scenarios. For a massless lightest neutralino, masses up to 420 $\text {Ge}\text {V}$ are excluded for the production of the lightest-chargino pairs assuming W-boson-mediated decays and up to 1 $\text {TeV}$ for slepton-mediated decays, whereas for slepton-pair production masses up to 700 $\text {Ge}\text {V}$ are excluded assuming three generations of mass-degenerate sleptons.

272 citations


Journal ArticleDOI
TL;DR: Examination of the most recently available data from both Los Angeles, CA, and Indianapolis, IN, shows that social distancing has had a statistically significant impact on a few specific crime types; however, the overall effect is notably less than might be expected given the scale of the disruption to social and economic life.

244 citations


Proceedings Article
30 Apr 2020
TL;DR: DropEdge as discussed by the authors randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter and also a message passing reducer; the authors theoretically demonstrate that DropEdge either retards the convergence speed of over-smoothing or relieves the information loss caused by it.
Abstract: Over-fitting and over-smoothing are two main obstacles of developing deep Graph Convolutional Networks (GCNs) for node classification. In particular, over-fitting weakens the generalization ability on small dataset, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. This paper proposes DropEdge, a novel and flexible technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter and also a message passing reducer. Furthermore, we theoretically demonstrate that DropEdge either retards the convergence speed of over-smoothing or relieves the information loss caused by it. More importantly, our DropEdge is a general skill that can be equipped with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance. Extensive experiments on several benchmarks verify that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically visualized and validated as well. Codes will be made public upon the publication.
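For illustration, here is a minimal sketch of the edge-dropping step described above, assuming the graph is stored as a (2, E) edge-index tensor; the real DropEdge also re-normalizes the adjacency matrix after dropping, which is omitted here, and the function name is ours.

```python
import torch

def drop_edge(edge_index, drop_rate=0.2):
    """Randomly drop a fraction of edges (DropEdge-style perturbation).

    edge_index: (2, E) tensor of edges in COO format.
    A fresh mask should be sampled at every training epoch so that the
    model sees a differently perturbed graph each time.
    """
    num_edges = edge_index.size(1)
    keep_mask = torch.rand(num_edges) >= drop_rate
    return edge_index[:, keep_mask]

# Toy usage: a 5-edge cycle graph; about 20% of edges are removed per call.
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 0]])
print(drop_edge(edge_index, drop_rate=0.2))
```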

Journal ArticleDOI
M. G. Aartsen1, Markus Ackermann, Jenni Adams1, Juanan Aguilar2  +361 moreInstitutions (48)
TL;DR: The results, all based on searches for a cumulative neutrino signal integrated over the 10 years of available data, motivate further study of these and similar sources, including time-dependent analyses, multimessenger correlations, and the possibility of stronger evidence with coming upgrades to the detector.
Abstract: This Letter presents the results from pointlike neutrino source searches using ten years of IceCube data collected between April 6, 2008 and July 10, 2018. We evaluate the significance of an astrophysical signal from a pointlike source looking for an excess of clustered neutrino events with energies typically above ∼1 TeV among the background of atmospheric muons and neutrinos. We perform a full-sky scan, a search within a selected source catalog, a catalog population study, and three stacked Galactic catalog searches. The most significant point in the northern hemisphere from scanning the sky is coincident with the Seyfert II galaxy NGC 1068, which was included in the source catalog search. The excess at the coordinates of NGC 1068 is inconsistent with background expectations at the level of 2.9σ after accounting for statistical trials from the entire catalog. The combination of this result along with excesses observed at the coordinates of three other sources, including TXS 0506+056, suggests that, collectively, correlations with sources in the northern catalog are inconsistent with background at 3.3σ significance. The southern catalog is consistent with background. These results, all based on searches for a cumulative neutrino signal integrated over the 10 years of available data, motivate further study of these and similar sources, including time-dependent analyses, multimessenger correlations, and the possibility of stronger evidence with coming upgrades to the detector.

Journal ArticleDOI
TL;DR: This work proposes Deep Attention Multiple Instance Survival Learning (DeepAttnMISL), introducing both siamese MI-FCN and attention-based MIL pooling to efficiently learn imaging features from whole slide images (WSIs) and then aggregate WSI-level information to the patient level.

Journal ArticleDOI
B. Abi1, R. Acciarri2, M. A. Acero3, George Adamov4  +966 moreInstitutions (155)
TL;DR: The Deep Underground Neutrino Experiment (DUNE) as discussed by the authors is an international world-class experiment dedicated to addressing open questions at the forefront of particle physics and astrophysics as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model.
Abstract: The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay—these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. This TDR is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the Project. Volume I contains an executive summary that introduces the DUNE science program, the far detector and the strategy for its modular designs, and the organization and management of the Project. The remainder of Volume I provides more detail on the science program that drives the choice of detector technologies and on the technologies themselves. It also introduces the designs for the DUNE near detector and the DUNE computing model, for which DUNE is planning design reports. Volume II of this TDR describes DUNE's physics program in detail. Volume III describes the technical coordination required for the far detector design, construction, installation, and integration, and its organizational structure. Volume IV describes the single-phase far detector technology. A planned Volume V will describe the dual-phase technology.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4  +2954 moreInstitutions (198)
TL;DR: To cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2) and a similar increase in the number of interactions per beam-crossing to about 60, the trigger algorithms and selections at the ATLAS experiment were optimised to control the rates while retaining a high efficiency for physics analyses.
Abstract: Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton–proton and heavy-ion collisions. To cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2), to 2.1×10^{34} cm^{-2} s^{-1}, and a similar increase in the number of interactions per beam-crossing to about 60, trigger algorithms and selections were optimised to control the rates while retaining a high efficiency for physics analyses. For proton–proton collisions, the single-electron trigger efficiency relative to a single-electron offline selection is at least 75% for an offline electron of 31 GeV, and rises to 96% at 60 GeV; the trigger efficiency of a 25 GeV leg of the primary diphoton trigger relative to a tight offline photon selection is more than 96% for an offline photon of 30 GeV. For heavy-ion collisions, the primary electron and photon trigger efficiencies relative to the corresponding standard offline selections are at least 84% and 95%, respectively, at 5 GeV above the corresponding trigger threshold.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4  +2962 moreInstitutions (199)
TL;DR: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb^{-1} of proton-proton collisions at sqrt[s]=13 TeV recorded with the ATLAS detector.
Abstract: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb^{-1} of proton-proton collisions at sqrt[s]=13 TeV recorded with the ATLAS detector. The search for heavy resonances is performed over the mass range 0.2-2.5 TeV for the τ^{+}τ^{-} decay with at least one τ-lepton decaying into final states with hadrons. The data are in good agreement with the background prediction of the standard model. In the M_{h}^{125} scenario of the minimal supersymmetric standard model, values of tanβ>8 and tanβ>21 are excluded at the 95% confidence level for neutral Higgs boson masses of 1.0 and 1.5 TeV, respectively, where tanβ is the ratio of the vacuum expectation values of the two Higgs doublets.

Journal ArticleDOI
TL;DR: The regulatory pathways that control metabolism of the cell wall and surface lipids in M. tuberculosis during growth and stasis are described, and the authors speculate about how this regulation might affect antibiotic susceptibility and interactions with the immune system.
Abstract: Mycobacterium tuberculosis, the leading cause of death due to infection, has a dynamic and immunomodulatory cell envelope. The cell envelope structurally and functionally varies across the length of the cell and during the infection process. This variability allows the bacterium to manipulate the human immune system, tolerate antibiotic treatment and adapt to the variable host environment. Much of what we know about the mycobacterial cell envelope has been gleaned from model actinobacterial species, or model conditions such as growth in vitro, in macrophages and in the mouse. In this Review, we combine data from different experimental systems to build a model of the dynamics of the mycobacterial cell envelope across space and time. We describe the regulatory pathways that control metabolism of the cell wall and surface lipids in M. tuberculosis during growth and stasis, and speculate about how this regulation might affect antibiotic susceptibility and interactions with the immune system. Mycobacterium tuberculosis has a distinctive cell envelope that contributes to its resistance against the human immune system and antibiotic therapy. In this Review, Dulberger, Rubin and Boutte discuss mycobacterial cell envelope dynamics and their relevance for infection and drug treatment.

Journal ArticleDOI
M. G. Aartsen1, Markus Ackermann, Jenni Adams1, Juanan Aguilar2  +355 moreInstitutions (48)
TL;DR: This analysis provides the most detailed characterization of the neutrino flux at energies below ∼100 TeV compared to previous IceCube results, and suggests the existence of astrophysical neutrino sources characterized by dense environments which are opaque to gamma rays.
Abstract: We report on the first measurement of the astrophysical neutrino flux using particle showers (cascades) in IceCube data from 2010-2015. Assuming standard oscillations, the astrophysical neutrinos in this dedicated cascade sample are dominated (∼90%) by electron and tau flavors. The flux, observed in the sensitive energy range from 16 TeV to 2.6 PeV, is consistent with a single power-law model as expected from Fermi-type acceleration of high energy particles at astrophysical sources. We find the flux spectral index to be γ=2.53±0.07 and a flux normalization for each neutrino flavor of ϕ_{astro}=1.66_{-0.27}^{+0.25} at E_{0}=100 TeV, in agreement with IceCube's complementary muon neutrino results and with all-neutrino flavor fit results. In the measured energy range we reject spectral indices γ≤2.28 at ≥3σ significance level. Because of high neutrino energy resolution and low atmospheric neutrino backgrounds, this analysis provides the most detailed characterization of the neutrino flux at energies below ∼100 TeV compared to previous IceCube results. Results from fits assuming more complex neutrino flux models suggest a flux softening at high energies and a flux hardening at low energies (p value ≥0.06). The sizable and smooth flux measured below ∼100 TeV remains a puzzle. In order to not violate the isotropic diffuse gamma-ray background as measured by the Fermi Large Area Telescope, it suggests the existence of astrophysical neutrino sources characterized by dense environments which are opaque to gamma rays.
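For reference, the single power-law form quoted in the abstract can be written as follows, with the best-fit values given above (the abstract does not state the normalization units, so they are omitted here):

```latex
\Phi_\nu(E) \;=\; \phi_{\mathrm{astro}}\left(\frac{E}{E_0}\right)^{-\gamma},
\qquad E_0 = 100~\mathrm{TeV},\quad
\gamma = 2.53 \pm 0.07,\quad
\phi_{\mathrm{astro}} = 1.66^{+0.25}_{-0.27}\ \text{per neutrino flavor}.
```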

Journal ArticleDOI
TL;DR: A distributed robust controller is developed, which consists of a position controller to govern the translational motion for the desired formation and an attitude controller to control the rotational motion of each quadrotor.
Abstract: In this paper, the robust formation control problem is investigated for a group of quadrotors. Each quadrotor dynamics exhibits the features of underactuation, high nonlinearities and couplings, and disturbances in both the translational and rotational motions. A distributed robust controller is developed, which consists of a position controller to govern the translational motion for the desired formation and an attitude controller to control the rotational motion of each quadrotor. Theoretical analysis and simulation studies of a formation of multiple uncertain quadrotors are presented to validate the effectiveness of the proposed formation control scheme.


Journal ArticleDOI
TL;DR: This paper surveys 23 recent embedding-based entity alignment approaches and categorizes them based on their techniques and characteristics, and proposes a new KG sampling algorithm, with which to generate a set of dedicated benchmark datasets with various heterogeneity and distributions for a realistic evaluation.
Abstract: Entity alignment seeks to find entities in different knowledge graphs (KGs) that refer to the same real-world object. Recent advancement in KG embedding impels the advent of embedding-based entity alignment, which encodes entities in a continuous embedding space and measures entity similarities based on the learned embeddings. In this paper, we conduct a comprehensive experimental study of this emerging field. We survey 23 recent embedding-based entity alignment approaches and categorize them based on their techniques and characteristics. We also propose a new KG sampling algorithm, with which we generate a set of dedicated benchmark datasets with various heterogeneity and distributions for a realistic evaluation. We develop an open-source library including 12 representative embedding-based entity alignment approaches, and extensively evaluate these approaches, to understand their strengths and limitations. Additionally, for several directions that have not been explored in current approaches, we perform exploratory experiments and report our preliminary findings for future studies. The benchmark datasets, open-source library and experimental results are all accessible online and will be duly maintained.
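As a minimal illustration of the embedding-based similarity step common to these approaches (not the method of any particular paper surveyed), the sketch below matches each entity in one KG to its nearest neighbor in the other KG by cosine similarity of learned embeddings; the function and array names are ours.

```python
import numpy as np

def align_entities(emb_kg1, emb_kg2):
    """Greedy embedding-based entity alignment sketch.

    emb_kg1: (n1, d) learned embeddings of entities from the first KG.
    emb_kg2: (n2, d) learned embeddings of entities from the second KG.
    Returns, for each entity in KG1, the index of the most similar KG2
    entity under cosine similarity.
    """
    a = emb_kg1 / np.linalg.norm(emb_kg1, axis=1, keepdims=True)
    b = emb_kg2 / np.linalg.norm(emb_kg2, axis=1, keepdims=True)
    sim = a @ b.T               # (n1, n2) cosine similarity matrix
    return sim.argmax(axis=1)

# Toy usage: 4 entities per KG embedded in a shared 16-dimensional space.
rng = np.random.default_rng(0)
print(align_entities(rng.normal(size=(4, 16)), rng.normal(size=(4, 16))))
```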

Posted Content
TL;DR: Three regional-scale models for forecasting and assessing the course of the COVID-19 pandemic demonstrate the utility of parsimonious models for early-time data and provide an accessible framework for generating policy-relevant insights into its course.
Abstract: We present three data driven model-types for COVID-19 with a minimal number of parameters to provide insights into the spread of the disease that may be used for developing policy responses. The first is exponential growth, widely studied in analysis of early-time data. The second is a self-exciting branching process model which includes a delay in transmission and recovery. It allows for meaningful fit to early time stochastic data. The third is the well-known Susceptible-Infected-Resistant (SIR) model and its cousin, SEIR, with an "Exposed" component. All three models are related quantitatively, and the SIR model is used to illustrate the potential effects of short-term distancing measures in the United States.
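As a concrete illustration of the SIR component named above, here is a minimal sketch of the standard SIR equations integrated numerically; the parameter values are purely illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic SIR compartmental model, expressed in population fractions."""
    s, i, r = y
    ds = -beta * s * i                 # susceptibles becoming infected
    di = beta * s * i - gamma * i      # net change in infected
    dr = gamma * i                     # recoveries/removals
    return [ds, di, dr]

# Illustrative parameters only (basic reproduction number R0 = beta/gamma = 3).
beta, gamma = 0.3, 0.1
t = np.linspace(0, 180, 181)           # days
y0 = [0.999, 0.001, 0.0]               # initial S, I, R fractions
s, i, r = odeint(sir, y0, t, args=(beta, gamma)).T
print(f"peak infected fraction: {i.max():.3f} on day {t[i.argmax()]:.0f}")
```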

Journal ArticleDOI
B. Abi1, R. Acciarri2, M. A. Acero3, George Adamov4  +972 moreInstitutions (153)
TL;DR: The DUNE experiment as discussed by the authors is an international world-class experiment dedicated to addressing open questions at the forefront of particle physics and astrophysics as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model.
Abstract: The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay—these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. DUNE is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. Central to achieving DUNE's physics program is a far detector that combines the many tens-of-kiloton fiducial mass necessary for rare event searches with sub-centimeter spatial resolution in its ability to image those events, allowing identification of the physics signatures among the numerous backgrounds. In the single-phase liquid argon time-projection chamber (LArTPC) technology, ionization charges drift horizontally in the liquid argon under the influence of an electric field towards a vertical anode, where they are read out with fine granularity. A photon detection system supplements the TPC, directly enhancing physics capabilities for all three DUNE physics drivers and opening up prospects for further physics explorations. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. Volume IV presents an overview of the basic operating principles of a single-phase LArTPC, followed by a description of the DUNE implementation. Each of the subsystems is described in detail, connecting the high-level design requirements and decisions to the overriding physics goals of DUNE.

Journal ArticleDOI
TL;DR: This work defines two models: an end-to-end reconstruction network which performs simultaneous particle identification and energy regression of particles when given calorimeter shower data, and a generative network which can provide reasonable modeling of calorimeter showers for different particle types at specified angles and energies.
Abstract: Using detailed simulations of calorimeter showers as training data, we investigate the use of deep learning algorithms for the simulation and reconstruction of single isolated particles produced in high-energy physics collisions. We train neural networks on single-particle shower data at the calorimeter-cell level, and show significant improvements for simulation and reconstruction when using these networks compared to methods which rely on currently-used state-of-the-art algorithms. We define two models: an end-to-end reconstruction network which performs simultaneous particle identification and energy regression of particles when given calorimeter shower data, and a generative network which can provide reasonable modeling of calorimeter showers for different particle types at specified angles and energies. We investigate the optimization of our models with hyperparameter scans. Furthermore, we demonstrate the applicability of the reconstruction model to shower inputs from other detector geometries, specifically ATLAS-like and CMS-like geometries. These networks can serve as fast and computationally light methods for particle shower simulation and reconstruction for current and future experiments at particle colliders.
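To illustrate the "two heads on one trunk" idea behind the end-to-end reconstruction network (simultaneous particle identification and energy regression), here is a toy sketch; the layer sizes, input dimensions, and class count are assumptions and do not reproduce the paper's architecture, which operates on calorimeter-cell data.

```python
import torch
import torch.nn as nn

class ShowerRecoNet(nn.Module):
    """Toy reconstruction network: a shared trunk feeding a particle-ID
    classification head and an energy-regression head."""

    def __init__(self, n_cells=625, n_classes=3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_cells, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.class_head = nn.Linear(128, n_classes)    # particle-ID logits
        self.energy_head = nn.Linear(128, 1)           # regressed energy

    def forward(self, cells):
        h = self.trunk(cells)
        return self.class_head(h), self.energy_head(h).squeeze(-1)

# Toy usage: a batch of 4 flattened 25x25-cell showers.
model = ShowerRecoNet()
logits, energy = model(torch.randn(4, 625))
print(logits.shape, energy.shape)   # torch.Size([4, 3]) torch.Size([4])
```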

Journal ArticleDOI
TL;DR: A smart consumer electronics solution is presented to facilitate safe and gradual reopening after stay-at-home restrictions are lifted: an Internet of Medical Things enabled wearable called EasyBand is introduced to limit the growth of new positive cases.
Abstract: COVID-19 (Corona Virus Disease 2019) is a pandemic, which has been spreading exponentially around the globe. Many countries adopted stay-at-home or lockdown policies to control its spread. However, a prolonged stay-at-home period may cause adverse effects such as economic crises, unemployment, food scarcity, and mental health problems for individuals. This article presents a smart consumer electronics solution to facilitate safe and gradual reopening after stay-at-home restrictions are lifted. An Internet of Medical Things enabled wearable called EasyBand is introduced to limit the growth of new positive cases by automatic contact tracing and by encouraging essential social distancing.

Proceedings Article
30 Apr 2020
TL;DR: Geom-GCN as discussed by the authors proposes a geometric aggregation scheme for graph neural networks to overcome the two fundamental weaknesses of MPNNs' aggregators: losing structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs.
Abstract: Message-passing neural networks (MPNNs) have been successfully applied in a wide variety of applications in the real world. However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs. Few studies have noticed these weaknesses from different perspectives. From observations on classical neural networks and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome the two weaknesses. The basic idea is that aggregation on a graph can benefit from a continuous space underlying the graph. The proposed aggregation scheme is permutation-invariant and consists of three modules: node embedding, structural neighborhood, and bi-level aggregation. We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs. Experimental results show that the proposed Geom-GCN achieves state-of-the-art performance on a wide range of open datasets of graphs.

Journal ArticleDOI
TL;DR: Scribe is presented, a toolkit for detecting and visualizing causal regulatory interactions between genes, and the potential for single-cell experiments to power network reconstruction is explored; the authors demonstrate that performing causal inference requires temporal coupling between measurements.
Abstract: Here, we present Scribe (https://github.com/aristoteleo/Scribe-py), a toolkit for detecting and visualizing causal regulatory interactions between genes and explore the potential for single-cell experiments to power network reconstruction. Scribe employs restricted directed information to determine causality by estimating the strength of information transferred from a potential regulator to its downstream target. We apply Scribe and other leading approaches for causal network reconstruction to several types of single-cell measurements and show that there is a dramatic drop in performance for "pseudotime"-ordered single-cell data compared with true time-series data. We demonstrate that performing causal inference requires temporal coupling between measurements. We show that methods such as "RNA velocity" restore some degree of coupling through an analysis of chromaffin cell fate commitment. These analyses highlight a shortcoming in experimental and computational methods for analyzing gene regulation at single-cell resolution and suggest ways of overcoming it.

Journal ArticleDOI
TL;DR: This work reviews the various methodologies commonly used to assess reactive hyperemia, assesses current mechanistic pathways believed to contribute to reactive hyperemia, and reflects on several methodological considerations.
Abstract: Reactive hyperemia is a well-established technique for noninvasive assessment of peripheral microvascular function and a predictor of all-cause and cardiovascular morbidity and mortality. In its simplest form, reactive hyperemia represents the magnitude of limb reperfusion following a brief period of ischemia induced by arterial occlusion. Over the past two decades, investigators have employed a variety of methods, including brachial artery velocity by Doppler ultrasound, tissue reperfusion by near-infrared spectroscopy, limb distension by venous occlusion plethysmography, and peripheral artery tonometry, to measure reactive hyperemia. Regardless of the technique used to measure reactive hyperemia, blunted reactive hyperemia is believed to reflect impaired microvascular function. With the advent of several technological advancements, together with an increased interest in the microcirculation, reactive hyperemia is becoming more common as a research tool and is widely used across multiple disciplines. With this in mind, we sought to review the various methodologies commonly used to assess reactive hyperemia and current mechanistic pathways believed to contribute to reactive hyperemia and reflect on several methodological considerations.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4  +2957 moreInstitutions (201)
TL;DR: A search for narrowly resonant new physics using a machine-learning anomaly detection procedure that does not rely on signal simulations for developing the analysis selection and results are complementary to the dedicated searches for the case that B and C are standard model bosons.
Abstract: This Letter describes a search for narrowly resonant new physics using a machine-learning anomaly detection procedure that does not rely on signal simulations for developing the analysis selection. Weakly supervised learning is used to train classifiers directly on data to enhance potential signals. The targeted topology is dijet events and the features used for machine learning are the masses of the two jets. The resulting analysis is essentially a three-dimensional search A→BC, for m_{A}∼O(TeV), m_{B},m_{C}∼O(100 GeV) and B, C are reconstructed as large-radius jets, without paying a penalty associated with a large trials factor in the scan of the masses of the two jets. The full run 2 sqrt[s]=13 TeV pp collision dataset of 139 fb^{-1} recorded by the ATLAS detector at the Large Hadron Collider is used for the search. There is no significant evidence of a localized excess in the dijet invariant mass spectrum between 1.8 and 8.2 TeV. Cross-section limits for narrow-width A, B, and C particles vary with m_{A}, m_{B}, and m_{C}. For example, when m_{A}=3 TeV and m_{B}≳200 GeV, a production cross section between 1 and 5 fb is excluded at 95% confidence level, depending on m_{C}. For certain masses, these limits are up to 10 times more sensitive than those obtained by the inclusive dijet search. These results are complementary to the dedicated searches for the case that B and C are standard model bosons.

Journal ArticleDOI
B. Abi1, R. Acciarri2, M. A. Acero3, George Adamov4  +975 moreInstitutions (155)
TL;DR: The sensitivity of the Deep Underground Neutrino Experiment (DUNE) to neutrino oscillation is determined, based on a full simulation, reconstruction, and event selection of the far detector and full simulation and parameterized analysis of the near detector as mentioned in this paper.
Abstract: The sensitivity of the Deep Underground Neutrino Experiment (DUNE) to neutrino oscillation is determined, based on a full simulation, reconstruction, and event selection of the far detector and a full simulation and parameterized analysis of the near detector. Detailed uncertainties due to the flux prediction, neutrino interaction model, and detector effects are included. DUNE will resolve the neutrino mass ordering to a precision of 5σ, for all δCP values, after 2 years of running with the nominal detector design and beam configuration. It has the potential to observe charge-parity violation in the neutrino sector to a precision of 3σ (5σ) after an exposure of 5 (10) years, for 50% of all δCP values. It will also make precise measurements of other parameters governing long-baseline neutrino oscillation, and after an exposure of 15 years will achieve a sensitivity to sin²2θ₁₃ similar to that of current reactor experiments.