
Showing papers by "Moscow Institute of Physics and Technology" published in 2019


Journal ArticleDOI
TL;DR: It is shown that CeH9 can be synthesized at 80–100 GPa with laser heating and is characterized by a clathrate structure with a dense three-dimensional atomic hydrogen sublattice, shedding significant light on the search for superhydrides that closely approach atomic hydrogen within a feasible pressure range.
Abstract: Hydrogen-rich superhydrides are believed to be very promising high-Tc superconductors. Recent experiments discovered superhydrides at very high pressures, e.g. FeH5 at 130 GPa and LaH10 at 170 GPa. With the motivation of discovering new hydrogen-rich high-Tc superconductors at the lowest possible pressure, here we report the prediction and experimental synthesis of cerium superhydride CeH9 at 80–100 GPa in a laser-heated diamond anvil cell coupled with synchrotron X-ray diffraction. Ab initio calculations were carried out to evaluate the detailed chemistry of the Ce-H system and to understand the structure, stability and superconductivity of CeH9. CeH9 crystallizes in a P63/mmc clathrate structure with a very dense three-dimensional atomic hydrogen sublattice at 100 GPa. These findings shed significant light on the search for superhydrides that closely approach atomic hydrogen within a feasible pressure range. The discovery of the superhydride CeH9 provides a practical platform to further investigate and understand conventional superconductivity in hydrogen-rich superhydrides. Hydrogen-rich superhydrides are promising high-temperature superconductors which had previously been observed only at pressures above 170 GPa. Here the authors show that CeH9 can be synthesized at 80–100 GPa with laser heating, and is characterized by a clathrate structure with a dense three-dimensional atomic hydrogen sublattice.

926 citations


Proceedings ArticleDOI
28 Jul 2019
TL;DR: It is found that the most important and confident heads play consistent and often linguistically-interpretable roles, and that when heads are pruned using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, specialized heads are the last to be pruned.
Abstract: Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads to the overall performance of the model and analyze the roles played by them in the encoder. We find that the most important and confident heads play consistent and often linguistically-interpretable roles. When pruning heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, we observe that specialized heads are last to be pruned. Our novel pruning method removes the vast majority of heads without seriously affecting performance. For example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.
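The stochastic-gate pruning described above admits a compact implementation: each head's output is multiplied by a "hard-concrete" gate (the construction of Louizos et al. that this kind of differentiable L0 relaxation is based on), and the expected number of open gates is penalized. A minimal PyTorch sketch, with illustrative hyperparameters and a toy loss rather than the authors' exact setup:

```python
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """One stochastic gate per attention head with a differentiable
    relaxation of the L0 penalty (hard-concrete distribution)."""
    def __init__(self, n_heads, beta=0.33, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n_heads))  # gate logits
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        if self.training:
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha)
        s_bar = s * (self.zeta - self.gamma) + self.gamma  # stretch beyond [0, 1]
        return s_bar.clamp(0.0, 1.0)                       # hard gate in [0, 1]

    def l0_penalty(self):
        # Expected number of non-zero gates (differentiable L0 surrogate).
        return torch.sigmoid(
            self.log_alpha - self.beta * torch.log(torch.tensor(-self.gamma / self.zeta))
        ).sum()

# Usage: scale each head's output by its gate; penalize open gates.
gate = HardConcreteGate(n_heads=8)
head_outputs = torch.randn(2, 8, 10, 64)            # (batch, heads, seq, dim)
gated = head_outputs * gate()[None, :, None, None]
loss = gated.pow(2).mean() + 1e-2 * gate.l0_penalty()  # toy task loss + L0 term
loss.backward()
```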

718 citations


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2265 more (153 institutions)
TL;DR: Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented and constraints are placed on various two Higgs doublet models.
Abstract: Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at $\sqrt{s}=13\,\text{TeV}$, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The combination is based on analyses targeting the five main Higgs boson production mechanisms (gluon fusion, vector boson fusion, and associated production with a $\mathrm{W}$ or $\mathrm{Z}$ boson, or a top quark-antiquark pair) and the following decay modes: $\mathrm{H} \rightarrow \gamma\gamma$, $\mathrm{ZZ}$, $\mathrm{WW}$, $\tau\tau$, $\mathrm{b}\mathrm{b}$, and $\mu\mu$. Searches for invisible Higgs boson decays are also considered. The best-fit ratio of the signal yield to the standard model expectation is measured to be $\mu=1.17\pm 0.10$, assuming a Higgs boson mass of $125.09\,\text{GeV}$. Additional results are given for various assumptions on the scaling behavior of the production and decay modes, including generic parametrizations based on ratios of cross sections and branching fractions or couplings. The results are compatible with the standard model predictions in all parametrizations considered. In addition, constraints are placed on various two Higgs doublet models.
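For reference, the signal strength quoted above is the conventional ratio of the observed to the standard-model-expected signal yield:

$$\mu = \frac{(\sigma \cdot \mathcal{B})_{\text{obs}}}{(\sigma \cdot \mathcal{B})_{\text{SM}}}, \qquad \mu = 1.17 \pm 0.10 \ \text{in this combination}.$$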

451 citations


Journal ArticleDOI
TL;DR: This Review discusses structure prediction methods, examining their potential for the study of different materials systems, and presents examples of computationally driven discoveries of new materials — including superhard materials, superconductors and organic materials — that will enable new technologies.
Abstract: Progress in the discovery of new materials has been accelerated by the development of reliable quantum-mechanical approaches to crystal structure prediction. The properties of a material depend very sensitively on its structure; therefore, structure prediction is the key to computational materials discovery. Structure prediction was considered to be a formidable problem, but the development of new computational tools has allowed the structures of many new and increasingly complex materials to be anticipated. These widely applicable methods, based on global optimization and relying on little or no empirical knowledge, have been used to study crystalline structures, point defects, surfaces and interfaces. In this Review, we discuss structure prediction methods, examining their potential for the study of different materials systems, and present examples of computationally driven discoveries of new materials — including superhard materials, superconductors and organic materials — that will enable new technologies. Advances in first-principles structure prediction also lead to a better understanding of physical and chemical phenomena in materials. Recent breakthroughs in crystal structure prediction have enabled the discovery of new materials and of new physical and chemical phenomena. This Review surveys structure prediction methods and presents examples of results in different classes of materials.

415 citations


Book ChapterDOI
TL;DR: This book chapter describes how to apply BRAKER in environments characterized by various combinations of external evidence, both RNA-Seq and protein alignments.
Abstract: BRAKER is a pipeline for highly accurate and fully automated gene prediction in novel eukaryotic genomes. It combines two major tools: GeneMark-ES/ET and AUGUSTUS. GeneMark-ES/ET learns its parameters from a novel genomic sequence in a fully automated fashion; if available, it uses extrinsic evidence for model refinement. From the protein-coding genes predicted by GeneMark-ES/ET, we select a set for training AUGUSTUS, one of the most accurate gene finding tools that, in contrast to GeneMark-ES/ET, integrates extrinsic evidence already into the gene prediction step. The first published version, BRAKER1, integrated genomic footprints of unassembled RNA-Seq reads into the training as well as into the prediction steps. The pipeline has since been extended to the integration of data on mapped cross-species proteins, and to the usage of heterogeneous extrinsic evidence, both RNA-Seq and protein alignments. In this book chapter, we briefly summarize the pipeline methodology and describe how to apply BRAKER in environments characterized by various combinations of external evidence.
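As an illustration of the kind of run the chapter walks through, here is a hedged sketch of invoking BRAKER with combined RNA-Seq and protein evidence from Python. The input file names are placeholders, and the flag names, while matching the BRAKER documentation to the best of our knowledge, should be verified against the installed version:

```python
import subprocess

# Hypothetical inputs; verify flags with `braker.pl --help` for your version.
cmd = [
    "braker.pl",
    "--genome=genome.fa",      # novel (soft-masked) eukaryotic genome
    "--bam=rnaseq.bam",        # aligned RNA-Seq reads as extrinsic evidence
    "--prot_seq=proteins.fa",  # cross-species protein sequences
    "--softmasking",           # repeats in the genome are soft-masked
    "--species=my_species",    # name for the trained AUGUSTUS parameter set
    "--cores=8",
]
subprocess.run(cmd, check=True)
```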

382 citations


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2298 more (160 institutions)
TL;DR: In this article, a search for invisible decays of a Higgs boson produced via vector boson fusion is performed using proton-proton collision data collected with the CMS detector at the LHC in 2016 at a center-of-mass energy of $\sqrt{s} = 13$ TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$.

347 citations


Book ChapterDOI
TL;DR: To improve performance on multi-turn conversations with humans, future systems must go beyond single word metrics like perplexity to measure the performance across sequences of utterances (conversations)—in terms of repetition, consistency and balance of dialogue acts.
Abstract: We describe the setting and results of the ConvAI2 NeurIPS competition that aims to further the state-of-the-art in open-domain chatbots. Some key takeaways from the competition are: (1) pretrained Transformer variants are currently the best performing models on this task, (2) but to improve performance on multi-turn conversations with humans, future systems must go beyond single word metrics like perplexity to measure the performance across sequences of utterances (conversations)—in terms of repetition, consistency and balance of dialogue acts (e.g. how many questions asked vs. answered).
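As a toy example of the sequence-level metrics advocated above, the sketch below scores cross-utterance repetition over one speaker's turns; the metric and the n-gram order are illustrative, not the competition's scoring:

```python
def repetition_rate(utterances, n=3):
    """Fraction of n-grams in each utterance already used in earlier
    utterances of the same conversation; high values flag repetitive models."""
    seen, repeated, total = set(), 0, 0
    for utterance in utterances:
        tokens = utterance.lower().split()
        ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
        repeated += len(ngrams & seen)
        total += len(ngrams)
        seen |= ngrams
    return repeated / total if total else 0.0

print(repetition_rate(["i like cats a lot", "do you like cats a lot too"]))  # 0.25
```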

340 citations


Journal ArticleDOI
TL;DR: This article reviews the rapid progress in the field of exotic $XYZ$ hadrons over the past few years, in both experiment and theory, and concludes with a summary of future prospects and challenges.
Abstract: The quark model was formulated in 1964 to classify mesons as bound states made of a quark-antiquark pair, and baryons as bound states made of three quarks. For a long time all known mesons and baryons could be classified within this scheme. Quantum Chromodynamics (QCD), however, in principle also allows the existence of more complex structures, generically called exotic hadrons or simply exotics. These include four-quark hadrons (tetraquarks and hadronic molecules), five-quark hadrons (pentaquarks) and states with active gluonic degrees of freedom (hybrids), and even states of pure glue (glueballs). Exotic hadrons have been systematically searched for in numerous experiments for many years. Remarkably, in the past fifteen years, many new hadrons that do not exhibit the expected properties of ordinary (not exotic) hadrons have been discovered in the quarkonium spectrum. These hadrons are collectively known as $XYZ$ states. Some of them, like the charged states, are undoubtedly exotic. Parallel to the experimental progress, the last decades have also witnessed an enormous theoretical effort to reach a theoretical understanding of the $XYZ$ states. Theoretical approaches include not only phenomenological extensions of the quark model to exotics, but also modern non-relativistic effective field theories and lattice QCD calculations. The present work aims at reviewing the rapid progress in the field of exotic $XYZ$ hadrons over the past few years both in experiments and theory. It concludes with a summary on future prospects and challenges.

298 citations


Journal ArticleDOI
Georges Aad, Alexander Kupco, Samuel Webb, Timo Dreyer, +3380 more (206 institutions)
TL;DR: In this article, a search for high-mass dielectron and dimuon resonances in the mass range of 250 GeV to 6 TeV was performed at the Large Hadron Collider.

248 citations


Journal ArticleDOI
E. Kou, Phillip Urquijo, Wolfgang Altmannshofer, F. Beaujean, +558 more (140 institutions)
TL;DR: The Belle II detector as mentioned in this paper is a state-of-the-art detector for heavy flavor physics, quarkonium and exotic states, searches for dark sectors, and many other areas.
Abstract: The Belle II detector will provide a major step forward in precision heavy flavor physics, quarkonium and exotic states, searches for dark sectors, and many other areas. The sensitivity to a large number of key observables can be improved by about an order of magnitude compared to the current measurements, and up to two orders in very clean search measurements. This increase in statistical precision arises not only due to the increased luminosity, but also from improved detector efficiency and precision for many channels. Many of the most interesting observables tend to have very small theoretical uncertainties that will therefore not limit the physics reach. This book has presented many new ideas for measurements, both to elucidate the nature of current anomalies seen in flavor, and to search for new phenomena in a plethora of observables that will become accessible with the Belle II dataset. The simulation used for the studies in this book was state of the art at the time, though we are learning a lot more about the experiment during the commissioning period. The detector is in operation, and working spectacularly well.

247 citations


Journal ArticleDOI
TL;DR: A methodology based on the evolutionary algorithm USPEX and machine-learning interatomic potentials that are actively trained on the fly allows for an automated construction of an interatomic interaction model from scratch, replacing expensive density functional theory (DFT) calculations and giving a speedup of several orders of magnitude.
Abstract: We propose a methodology for crystal structure prediction that is based on the evolutionary algorithm USPEX and machine-learning interatomic potentials that are actively trained on the fly. Our methodology allows for an automated construction of an interatomic interaction model from scratch, replacing the expensive density functional theory (DFT) calculations and giving a speedup of several orders of magnitude. Predicted low-energy structures are then tested with DFT, ensuring that our machine-learning model does not introduce any prediction error. We tested our methodology on the prediction of crystal structures of carbon, high-pressure phases of sodium, and boron allotropes, including those that have more than 100 atoms in the primitive cell. All the main allotropes have been reproduced, and a hitherto unknown 54-atom structure of boron has been predicted with very modest computational effort.
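The active-learning loop can be caricatured in a few lines: propose candidate structures, minimize on the current surrogate, and fall back to the expensive reference calculation only when the surrogate is extrapolating. The sketch below is a deliberately minimal stand-in (a one-parameter "structure", an analytic "DFT" energy, and a polynomial surrogate), not the USPEX/MLIP implementation:

```python
import numpy as np

# Toy stand-ins: the "structure" is one lattice parameter, the expensive
# reference ("DFT") is an analytic energy, the surrogate a polynomial fit.
def dft_energy(a):                       # expensive reference calculation
    return (a - 3.1) ** 2 + 0.3 * np.sin(5 * a)

rng = np.random.default_rng(0)
train_a = list(rng.uniform(2.0, 4.0, 3))
train_e = [dft_energy(a) for a in train_a]

def fit_surrogate(xs, es):
    return np.polyfit(xs, es, deg=min(4, len(xs) - 1))

def uncertain(a, xs, tol=0.25):
    # Crude extrapolation test: candidate far from all training points.
    return min(abs(a - x) for x in xs) > tol

model = fit_surrogate(train_a, train_e)
for generation in range(20):
    candidates = rng.uniform(2.0, 4.0, 10)            # "evolutionary" proposals
    best = min(candidates, key=lambda a: np.polyval(model, a))
    if uncertain(best, train_a):                      # active learning on the fly
        train_a.append(best)
        train_e.append(dft_energy(best))              # pay the DFT cost only here
        model = fit_surrogate(train_a, train_e)

print(f"predicted minimum near a = {min(train_a, key=dft_energy):.2f}")
```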

Posted Content
TL;DR: This work shows that transfer learning from a multilingual model to a monolingual model yields significant performance gains on tasks such as reading comprehension, paraphrase detection, and sentiment analysis.
Abstract: The paper introduces methods of adapting multilingual masked language models to a specific language. Pre-trained bidirectional language models show state-of-the-art performance on a wide range of tasks including reading comprehension, natural language inference, and sentiment analysis. At the moment there are two alternative approaches to training such models: monolingual and multilingual. While language-specific models show superior performance, multilingual models allow one to perform transfer from one language to another and to solve tasks for different languages simultaneously. This work shows that transfer learning from a multilingual model to a monolingual model yields significant performance gains on tasks such as reading comprehension, paraphrase detection, and sentiment analysis. Furthermore, multilingual initialization of a monolingual model substantially reduces training time. Pre-trained models for the Russian language are open sourced.
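A minimal sketch of the warm-start idea using the Hugging Face transformers API: initialize from the multilingual checkpoint and continue masked-language-model training on target-language text. The paper's full recipe (including building a language-specific vocabulary) is more involved; this shows only the initialization step:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Warm-start from the multilingual checkpoint instead of random initialization.
name = "bert-base-multilingual-cased"
model = AutoModelForMaskedLM.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# One toy step of continued pretraining on Russian text.
batch = tokenizer("Москва - столица [MASK].", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
```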

Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Dale Charles Abbott, +3001 more (220 institutions)
TL;DR: In this paper, the decays $B^0_s \to \mu^+\mu^-$ and $B^0 \to \mu^+\mu^-$ have been studied using 26.3 fb$^{-1}$ of 13 TeV LHC proton-proton collision data collected with the ATLAS detector in 2015 and 2016.
Abstract: A study of the decays $B^0_s \to \mu^+\mu^-$ and $B^0 \to \mu^+\mu^-$ has been performed using 26.3 fb$^{-1}$ of 13 TeV LHC proton-proton collision data collected with the ATLAS detector in 2015 and 2016. Since the detector resolut ...

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2319 more (159 institutions)
TL;DR: In this article, the performance of missing transverse momentum ($p_{\mathrm{T}}^{\text{miss}}$) reconstruction algorithms for the CMS experiment is presented, using proton-proton collisions at a center-of-mass energy of 13 TeV, collected at the CERN LHC in 2016.
Abstract: The performance of missing transverse momentum ($p_{\mathrm{T}}^{\text{miss}}$) reconstruction algorithms for the CMS experiment is presented, using proton-proton collisions at a center-of-mass energy of 13 TeV, collected at the CERN LHC in 2016. The data sample corresponds to an integrated luminosity of 35.9 fb$^{-1}$. The results include measurements of the scale and resolution of $p_{\mathrm{T}}^{\text{miss}}$, and detailed studies of events identified with anomalous $p_{\mathrm{T}}^{\text{miss}}$. The performance is presented of a $p_{\mathrm{T}}^{\text{miss}}$ reconstruction algorithm that mitigates the effects of multiple proton-proton interactions, using the "pileup per particle identification" method. The performance is shown of an algorithm used to estimate the compatibility of the reconstructed $p_{\mathrm{T}}^{\text{miss}}$ with the hypothesis that it originates from resolution effects.
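For reference, the quantity being reconstructed is conventionally defined as the magnitude of the negative vector sum of the transverse momenta of all reconstructed particles in the event:

$$\vec{p}_{\mathrm{T}}^{\,\text{miss}} = -\sum_{i} \vec{p}_{\mathrm{T},i}, \qquad p_{\mathrm{T}}^{\text{miss}} = \bigl|\vec{p}_{\mathrm{T}}^{\,\text{miss}}\bigr|.$$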

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2272 more (160 institutions)
TL;DR: A search for Higgs boson pair production using the combined results from four final states: bbγγ, bbττ, bbbb, and bbVV, where V represents a W or Z boson, is performed using data collected in 2016 by the CMS experiment from LHC proton-proton collisions.
Abstract: This Letter describes a search for Higgs boson pair production using the combined results from four final states: bbγγ, bbττ, bbbb, and bbVV, where V represents a W or Z boson. The search is performed using data collected in 2016 by the CMS experiment from LHC proton-proton collisions at $\sqrt{s}=13$ TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. Limits are set on the Higgs boson pair production cross section. A 95% confidence level observed (expected) upper limit on the nonresonant production cross section is set at 22.2 (12.8) times the standard model value. A search for narrow resonances decaying to Higgs boson pairs is also performed in the mass range 250–3000 GeV. No evidence for a signal is observed, and upper limits are set on the resonance production cross section.

Journal ArticleDOI
Eric Armengaud1, David Attié1, Stefano Basso2, Pierre Brun1, N. Bykovskiy3, J. M. Carmona4, J. F. Castel4, S. Cebrián4, Michele Cicoli5, Marta Civitani2, C. Cogollos6, Joseph P. Conlon, D. Costa6, T. Dafni4, Ryuji Daido7, A. V. Derbin8, Marie-Anne Descalle9, Klaus Kurt Desch10, I. Dratchnev8, Babette Döbrich3, Alexey Dudarev3, E. Ferrer-Ribas1, Ivor Fleck11, Javier Galan4, Giorgio Galanti2, Lluis Garrido6, David Gascon6, Loredana Gastaldo12, Cristiano Germani6, G. Ghisellini2, Maurizio Giannotti13, Ioannis Giomataris1, S. N. Gninenko14, N. Golubev14, Ricardo Graciani6, I. G. Irastorza4, Krešimir Jakovčić, Jochen Kaminski10, Milica Krčmar, C. Krieger10, Biljana Lakić, Thierry Lasserre1, P. Laurent1, Olivier Limousin1, A. Lindner, I. Lomskaya8, BayarJon Paul Lubsandorzhiev14, G. Luzón4, M. C. D. Marsh15, C. Margalejo4, Federico Mescia6, Manuel Meyer16, Jordi Miralda-Escudé17, Jordi Miralda-Escudé6, H. Mirallas4, V. N. Muratova8, X.F. Navick1, C. Nones1, Alessio Notari6, A. A. Nozik14, A. A. Nozik18, A. Ortiz de Solórzano4, V. S. Pantuev14, T. Papaevangelou1, G. Pareschi2, K. Perez19, E. Picatoste6, Michael J. Pivovaroff9, Javier Redondo20, Javier Redondo4, Andreas Ringwald, M. Roncadelli, E. Ruiz-Choliz4, J. Ruz9, K. Saikawa20, Jordi Salvado6, M. P. Samperiz4, T. Schiffer10, S. Schmidt10, U. Schneekloth, Matthias Schott21, H. Silva3, G. Tagliaferri2, Fuminobu Takahashi7, Fuminobu Takahashi22, Fabrizio Tavecchio2, H. Ten Kate3, Igor Tkachev14, Sergey Troitsky14, E. V. Unzhakov8, P. Vedrine1, Julia Vogel9, C. Weinsheimer21, Amanda Weltman23, Wen Yin24, Wen Yin25 
TL;DR: The International Axion Observatory (IAXO), as discussed by the authors, has the potential to find the QCD axion in the 1 meV–1 eV mass range where it solves the strong CP problem, can account for the cold dark matter of the Universe and be responsible for the anomalous cooling observed in a number of stellar systems.
Abstract: We review the physics potential of a next generation search for solar axions: the International Axion Observatory (IAXO). Endowed with a sensitivity to discover axion-like particles (ALPs) with a coupling to photons as small as $g_{a\gamma} \sim 10^{-12}$ GeV$^{-1}$, or to electrons $g_{ae} \sim 10^{-13}$, IAXO has the potential to find the QCD axion in the 1 meV–1 eV mass range where it solves the strong CP problem, can account for the cold dark matter of the Universe and be responsible for the anomalous cooling observed in a number of stellar systems. At the same time, IAXO will have enough sensitivity to detect lower mass axions invoked to explain: 1) the origin of the anomalous "transparency" of the Universe to gamma-rays, 2) the observed soft X-ray excess from galaxy clusters or 3) some inflationary models. In addition, we review string theory axions with parameters accessible by IAXO and discuss their potential role in cosmology as Dark Matter and Dark Radiation as well as their connections to the above mentioned conundrums.
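The couplings quoted above enter through the standard axion-photon interaction term,

$$\mathcal{L}_{a\gamma} = -\frac{1}{4}\, g_{a\gamma}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu} = g_{a\gamma}\, a\, \vec{E}\cdot\vec{B},$$

which is what allows axion-photon conversion in the strong magnetic field of the helioscope.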

Journal ArticleDOI
David Curtin1, Marco Drewes2, Matthew McCullough3, Patrick Meade4, Rabindra N. Mohapatra5, Jessie Shelton6, Brian Shuve7, Brian Shuve8, Elena Accomando9, Cristiano Alpigiani10, Stefan Antusch11, J. C. Arteaga-Velázquez12, Brian Batell13, Martin Bauer14, Nikita Blinov7, Karen S. Caballero-Mora, Jae Hyeok Chang4, Eung Jin Chun15, Raymond T. Co16, Timothy Cohen17, Peter Cox18, Nathaniel Craig19, Csaba Csáki20, Yanou Cui21, Francesco D'Eramo22, Luigi Delle Rose23, P. S. Bhupal Dev24, Keith R. Dienes5, Keith R. Dienes25, Jeff A. Dror26, Jeff A. Dror27, Rouven Essig4, Jared A. Evans6, Jared A. Evans28, Jason L. Evans15, Arturo Fernandez Tellez29, Oliver Fischer30, Thomas Flacke, Anthony Fradette31, Claudia Frugiuele32, Elina Fuchs32, Tony Gherghetta33, Gian F. Giudice3, Dmitry Gorbunov34, Rajat Gupta35, Claudia Hagedorn36, Lawrence J. Hall26, Lawrence J. Hall27, Philip Harris37, Juan Carlos Helo38, Juan Carlos Helo39, Martin Hirsch40, Yonit Hochberg41, Anson Hook5, Alejandro Ibarra15, Alejandro Ibarra42, Seyda Ipek43, Sunghoon Jung44, Simon Knapen26, Simon Knapen27, Eric Kuflik41, Zhen Liu, Salvator Lombardo20, Henry Lubatti10, David McKeen45, Emiliano Molinaro46, Stefano Moretti47, Stefano Moretti9, Natsumi Nagata18, Matthias Neubert48, Matthias Neubert20, Jose Miguel No49, Jose Miguel No50, Emmanuel Olaiya47, Gilad Perez32, Michael E. Peskin7, David Pinner51, David Pinner52, Maxim Pospelov31, Maxim Pospelov53, Matthew Reece51, Dean J. Robinson28, Mario Rodriguez Cahuantzi29, R. Santonico54, Matthias Schlaffer32, Claire H. Shepherd-Themistocleous47, Andrew Spray, Daniel Stolarski55, Martin A. Subieta Vasquez56, Raman Sundrum5, Andrea Thamm3, Brooks Thomas57, Yuhsin Tsai5, Brock Tweedie13, Stephen M. West58, Charles Young7, Felix Yu48, Bryan Zaldivar50, Bryan Zaldivar59, Yongchao Zhang24, Yongchao Zhang60, Kathryn M. Zurek3, Kathryn M. Zurek26, Kathryn M. Zurek27, José Zurita30 
University of Toronto1, Université catholique de Louvain2, CERN3, C. N. Yang Institute for Theoretical Physics4, University of Maryland, College Park5, University of Illinois at Urbana–Champaign6, Stanford University7, Harvey Mudd College8, University of Southampton9, University of Washington10, University of Basel11, Universidad Michoacana de San Nicolás de Hidalgo12, University of Pittsburgh13, Heidelberg University14, Korea Institute for Advanced Study15, University of Michigan16, University of Oregon17, University of Tokyo18, University of California, Santa Barbara19, Cornell University20, University of California, Riverside21, University of Padua22, University of Florence23, Washington University in St. Louis24, University of Arizona25, University of California, Berkeley26, Lawrence Berkeley National Laboratory27, University of Cincinnati28, Benemérita Universidad Autónoma de Puebla29, Karlsruhe Institute of Technology30, University of Victoria31, Weizmann Institute of Science32, University of Minnesota33, Moscow Institute of Physics and Technology34, Durham University35, University of Southern Denmark36, Massachusetts Institute of Technology37, Valparaiso University38, University of La Serena39, Spanish National Research Council40, Hebrew University of Jerusalem41, Technische Universität München42, University of California, Irvine43, Seoul National University44, TRIUMF45, Aarhus University46, Rutherford Appleton Laboratory47, University of Mainz48, King's College London49, Autonomous University of Madrid50, Harvard University51, Brown University52, Perimeter Institute for Theoretical Physics53, University of Rome Tor Vergata54, Carleton University55, Higher University of San Andrés56, Lafayette College57, Royal Holloway, University of London58, University of Grenoble59, Université libre de Bruxelles60
TL;DR: A model-independent approach is developed to describe the sensitivity of MATHUSLA to BSM LLP signals, and a general discussion of the top-down and bottom-up motivations for LLP searches is synthesized to demonstrate the exceptional strength and breadth of the physics case for the construction of the MATHUSLA detector.
Abstract: We examine the theoretical motivations for long-lived particle (LLP) signals at the LHC in a comprehensive survey of standard model (SM) extensions. LLPs are a common prediction of a wide range of theories that address unsolved fundamental mysteries such as naturalness, dark matter, baryogenesis and neutrino masses, and represent a natural and generic possibility for physics beyond the SM (BSM). In most cases the LLP lifetime can be treated as a free parameter from the µm scale up to the Big Bang Nucleosynthesis limit of $\sim 10^{7.5}$ m. Neutral LLPs with lifetimes above $\sim$100 m are particularly difficult to probe, as the sensitivity of the LHC main detectors is limited by challenging backgrounds, triggers, and small acceptances. MATHUSLA is a proposal for a minimally instrumented, large-volume surface detector near ATLAS or CMS. It would search for neutral LLPs produced in HL-LHC collisions by reconstructing displaced vertices (DVs) in a low-background environment, extending the sensitivity of the main detectors by orders of magnitude in the long-lifetime regime. We study the LLP physics opportunities afforded by a MATHUSLA-like detector at the HL-LHC, assuming backgrounds can be rejected as expected. We develop a model-independent approach to describe the sensitivity of MATHUSLA to BSM LLP signals, and compare it to DV and missing energy searches at ATLAS or CMS. We then explore the BSM motivations for LLPs in considerable detail, presenting a large number of new sensitivity studies. While our discussion is especially oriented towards the long-lifetime regime at MATHUSLA, this survey underlines the importance of a varied LLP search program at the LHC in general. By synthesizing these results into a general discussion of the top-down and bottom-up motivations for LLP searches, it is our aim to demonstrate the exceptional strength and breadth of the physics case for the construction of the MATHUSLA detector.
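The long-lifetime regime discussed here is governed by simple exponential decay kinematics: for an LLP with boost $\beta\gamma$ and proper lifetime $\tau$ crossing a detector between distances $L_1$ and $L_2$ from the interaction point, the decay probability is (a textbook estimate, not the paper's full treatment)

$$P_{\text{decay}} = e^{-L_1/(\beta\gamma c\tau)} - e^{-L_2/(\beta\gamma c\tau)} \approx \frac{L_2 - L_1}{\beta\gamma c\tau} \quad (\beta\gamma c\tau \gg L_2),$$

which is why sensitivity falls only linearly with $c\tau$ at long lifetimes.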

Journal ArticleDOI
Roy Burstein, Nathaniel J Henry, Michael Collison, Laurie B. Marczak, +663 more (290 institutions)
16 Oct 2019-Nature
TL;DR: A high-resolution, global atlas of mortality of children under five years of age between 2000 and 2017 highlights subnational geographical inequalities in the distribution, rates and absolute counts of child deaths by age.
Abstract: Since 2000, many countries have achieved considerable success in improving child survival, but localized progress remains unclear. To inform efforts towards United Nations Sustainable Development Goal 3.2—to end preventable child deaths by 2030—we need consistently estimated data at the subnational level regarding child mortality rates and trends. Here we quantified, for the period 2000–2017, the subnational variation in mortality rates and number of deaths of neonates, infants and children under 5 years of age within 99 low- and middle-income countries using a geostatistical survival model. We estimated that 32% of children under 5 in these countries lived in districts that had attained rates of 25 or fewer child deaths per 1,000 live births by 2017, and that 58% of child deaths between 2000 and 2017 in these countries could have been averted in the absence of geographical inequality. This study enables the identification of high-mortality clusters, patterns of progress and geographical inequalities to inform appropriate investments and implementations that will help to improve the health of all populations.

Posted Content
TL;DR: This work proposes a new distributed learning method, DIANA, which resolves these issues via compression of gradient differences, and performs a theoretical analysis in the strongly convex and nonconvex settings showing that its rates are superior to existing rates.
Abstract: Training large machine learning models requires a distributed computing approach, with communication of the model updates being the bottleneck. For this reason, several methods based on the compression (e.g., sparsification and/or quantization) of updates were recently proposed, including QSGD (Alistarh et al., 2017), TernGrad (Wen et al., 2017), SignSGD (Bernstein et al., 2018), and DQGD (Khirirat et al., 2018). However, none of these methods are able to learn the gradients, which renders them incapable of converging to the true optimum in the batch mode, incompatible with non-smooth regularizers, and slows down their convergence. In this work we propose a new distributed learning method --- DIANA --- which resolves these issues via compression of gradient differences. We perform a theoretical analysis in the strongly convex and nonconvex settings and show that our rates are superior to existing rates. Our analysis of block-quantization and differences between $\ell_2$ and $\ell_\infty$ quantization closes the gaps in theory and practice. Finally, by applying our analysis technique to TernGrad, we establish the first convergence rate for this method.
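A minimal NumPy sketch of the DIANA update on a toy quadratic problem: each worker compresses the difference between its gradient and its local memory vector, the memory drifts toward the true local gradient, and the server steps on the aggregate. The random-sparsification compressor, problem sizes, and stepsizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_sparsify(v, k):
    """Toy unbiased compressor: keep k random coordinates, rescaled by d/k."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

# Quadratic objective split across n workers: f_i(x) = 0.5 * ||A_i x - b_i||^2.
n, d, k = 4, 10, 3
A = [rng.normal(size=(20, d)) for _ in range(n)]
b = [rng.normal(size=20) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

x = np.zeros(d)
h = [np.zeros(d) for _ in range(n)]   # per-worker gradient memory
gamma = 1e-3                          # model stepsize
alpha = k / d                         # memory stepsize, <= 1/(1 + omega) here

for _ in range(3000):
    # Each worker compresses the *difference* between gradient and memory.
    deltas = [rand_sparsify(grad(i, x) - h[i], k) for i in range(n)]
    g = np.mean([h[i] + delta for i, delta in enumerate(deltas)], axis=0)
    for i in range(n):
        h[i] += alpha * deltas[i]     # memory "learns the gradients"
    x -= gamma * g

full_grad = np.mean([grad(i, x) for i in range(n)], axis=0)
print("gradient norm after training:", np.linalg.norm(full_grad))
```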

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2382 more (209 institutions)
TL;DR: In this paper, a search for supersymmetric particles in the final state with multiple jets and large missing transverse momentum was performed using a sample of proton-proton collisions collected with the CMS detector.
Abstract: Results are reported from a search for supersymmetric particles in the final state with multiple jets and large missing transverse momentum. The search uses a sample of proton-proton collisions at $ \sqrt{s} $ = 13 TeV collected with the CMS detector in 2016–2018, corresponding to an integrated luminosity of 137 fb$^{−1}$, representing essentially the full LHC Run 2 data sample. The analysis is performed in a four-dimensional search region defined in terms of the number of jets, the number of tagged bottom quark jets, the scalar sum of jet transverse momenta, and the magnitude of the vector sum of jet transverse momenta. No significant excess in the event yield is observed relative to the expected background contributions from standard model processes. Limits on the pair production of gluinos and squarks are obtained in the framework of simplified models for supersymmetric particle production and decay processes. Assuming the lightest supersymmetric particle to be a neutralino, lower limits on the gluino mass as large as 2000 to 2310 GeV are obtained at 95% confidence level, while lower limits on the squark mass as large as 1190 to 1630 GeV are obtained, depending on the production scenario.

Journal ArticleDOI
TL;DR: In this paper, an electrically-driven soliton microcomb was demonstrated by coupling a III-V-material-based (indium phosphide) multiple-longitudinal-mode laser diode chip to a high-Q silicon nitride microresonator fabricated using the photonic Damascene process.
Abstract: Microcombs provide a path to broad-bandwidth integrated frequency combs with low power consumption, which are compatible with wafer-scale fabrication. Yet, electrically-driven, photonic chip-based microcombs are inhibited by the required high threshold power and the frequency agility of the laser for soliton initiation. Here we demonstrate an electrically-driven soliton microcomb by coupling a III–V-material-based (indium phosphide) multiple-longitudinal-mode laser diode chip to a high-Q silicon nitride microresonator fabricated using the photonic Damascene process. The laser diode is self-injection locked to the microresonator, which is accompanied by the narrowing of the laser linewidth, and the simultaneous formation of dissipative Kerr solitons. By tuning the laser diode current, we observe transitions from modulation instability, breather solitons, to single-soliton states. The system operating at an electronically-detectable sub-100-GHz mode spacing requires less than 1 Watt of electrical power, can fit in a volume of ca. 1 cm$^3$, and does not require on-chip filters and heaters, thus simplifying the integrated microcomb. Chip-based frequency combs promise many applications, but full integration requires the electrical pump source and the microresonator to be on the same chip. Here, the authors show such integration of a microcomb with < 100 GHz mode spacing without additional filtering cavities or on-chip heaters.

Journal ArticleDOI
TL;DR: In this article, a measurement of the light-by-light scattering process in ultra-peripheral PbPb collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV is reported.

Journal ArticleDOI
C. Ahdida, Raffaele Albanese, A. Alexandrov, A. M. Anokhina, +345 more (50 institutions)
TL;DR: In this article, heavy neutral leptons (HNLs) are discussed as hypothetical particles that can explain the origin of neutrino masses, generate the observed matter-antimatter asymmetry in the Universe, and provide a dark matter candidate.
Abstract: Heavy Neutral Leptons (HNLs) are hypothetical particles predicted by many extensions of the Standard Model. These particles can, among other things, explain the origin of neutrino masses, generate the observed matter-antimatter asymmetry in the Universe and provide a dark matter candidate.
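Schematically, in type-I seesaw scenarios the light neutrino masses and the HNL couplings are tied together; in a one-generation caricature,

$$m_\nu \simeq \frac{m_D^2}{M_N} = \theta^2 M_N, \qquad \theta = \frac{m_D}{M_N} \ll 1,$$

where the active-sterile mixing $\theta$ controls how feebly the HNL interacts, hence the need for dedicated intensity-frontier searches.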

Journal ArticleDOI
TL;DR: In this paper, a direct Cartesian multipole decomposition of higher-order toroidal moments of both types (up to the electric octupole toroidal moment) is presented, allowing one to obtain new near- and far-field configurations.
Abstract: All-dielectric nanophotonics attracts ever-increasing attention nowadays due to the possibility of controlling and configuring light scattering on high-index semiconductor nanoparticles. It opens up opportunities for designing novel types of nanoscale elements and devices, and paves the way to advanced technologies of light energy manipulation. One of the exciting and promising prospects is associated with utilizing the so-called toroidal moment, the result of poloidal current excitation, and anapole states corresponding to the interference of the dipole and toroidal electric moments. Here, we present and investigate in detail, via a direct Cartesian multipole decomposition, higher-order toroidal moments of both types (up to the electric octupole toroidal moment), allowing us to obtain new near- and far-field configurations. Poloidal currents can be associated with vortex-like distributions of the displacement currents inside nanoparticles, revealing the physical meaning of the high-order toroidal moments and the convenience of the Cartesian multipoles as an auxiliary tool for analysis. We demonstrate high-order nonradiating anapole states (with vanishing contribution to the far-field zone) accompanied by the excitation of intense near-fields. We believe our results to be of high importance both for the fundamental understanding of light scattering by high-index particles and for a variety of nanophotonics applications governing light at the nanoscale.
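For orientation, the lowest member of this family, the toroidal dipole moment generated by a poloidal current density $\vec{J}$, reads (in Gaussian units)

$$\vec{T} = \frac{1}{10c} \int \left[ (\vec{r}\cdot\vec{J})\,\vec{r} - 2r^{2}\vec{J} \right] d^{3}r,$$

and the dipole anapole state corresponds, in a common convention, to the far-field cancellation $\vec{P} + ik\vec{T} = 0$.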

Proceedings Article
24 May 2019
TL;DR: A general theorem is proved that describes the convergence of an infinite array of variants of SGD, each of which is associated with a specific probability law governing the data selection rule used to form mini-batches, and that can be used to determine the mini-batch size that optimizes the total complexity.
Abstract: We propose a general yet simple theorem describing the convergence of SGD under the arbitrary sampling paradigm. Our theorem describes the convergence of an infinite array of variants of SGD, each of which is associated with a specific probability law governing the data selection rule used to form mini-batches. This is the first time such an analysis is performed, and most of our variants of SGD were never explicitly considered in the literature before. Our analysis relies on the recently introduced notion of expected smoothness and does not rely on a uniform bound on the variance of the stochastic gradients. By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size. With this we can also determine the mini-batch size that optimizes the total complexity, and show explicitly that as the variance of the stochastic gradient evaluated at the minimum grows, so does the optimal mini-batch size. For zero variance, the optimal mini-batch size is one. Moreover, we prove insightful stepsize-switching rules which describe when one should switch from a constant to a decreasing stepsize regime.
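A minimal sketch of the constant-then-decreasing stepsize regime on mini-batch SGD with sampling with replacement; the least-squares problem, switch point, and stepsizes are illustrative, not the paper's exact constants:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
grad_i = lambda i, x: A[i] * (A[i] @ x - b[i])   # gradient of 0.5*(a_i.x - b_i)^2

x = np.zeros(d)
batch, gamma0, switch = 8, 1e-2, 500
for step in range(1, 3001):
    idx = rng.choice(n, size=batch, replace=True)          # sampling with replacement
    g = np.mean([grad_i(i, x) for i in idx], axis=0)
    gamma = gamma0 if step <= switch else gamma0 * switch / step  # switch to O(1/k) decay
    x -= gamma * g

print("objective:", 0.5 * np.mean((A @ x - b) ** 2))
```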

Journal ArticleDOI
TL;DR: It is proposed that therapeutic inhibition of EMT-activated mechanisms of cell survival in NSCLC would eliminate pools of persister cells and prevent or delay cancer recurrence when applied in combination with agents targeting EGFR.

Journal ArticleDOI
TL;DR: In this article, a simple model that uses only the elastic properties to calculate hardness and fracture toughness is proposed and compared with other available models and with experimental data for metals, covalent and ionic crystals, and bulk metallic glasses.
Abstract: Hardness and fracture toughness are some of the most important mechanical properties. Here, we propose a simple model that uses only the elastic properties to calculate the hardness and fracture toughness. Its accuracy is checked by comparison with other available models and experimental data for metals, covalent and ionic crystals, and bulk metallic glasses. We found the model to perform well on all datasets for both hardness and fracture toughness, while for auxetic materials (i.e., those having a negative Poisson’s ratio), it turned out to be the only model that gives reasonable hardness. Predictions are made for several materials for which no experimental data exist.
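As a hedged illustration of this class of elasticity-based models (using the well-known Chen et al. correlation rather than the authors' own model), hardness can be estimated from the bulk and shear moduli alone; the standard isotropic conversions to Young's modulus and Poisson's ratio are included because elastic properties are the model's only input:

```python
def hardness_from_elastic(K, G):
    """Vickers hardness (GPa) from bulk modulus K and shear modulus G (GPa)
    via the empirical Chen et al. (2011) correlation Hv = 2*(k^2 G)^0.585 - 3,
    with Pugh's ratio k = G/K. Illustrative; not this paper's own model."""
    k = G / K
    return 2.0 * (k ** 2 * G) ** 0.585 - 3.0

def young_poisson(K, G):
    """Standard isotropic conversions to Young's modulus E and Poisson's ratio nu."""
    E = 9 * K * G / (3 * K + G)
    nu = (3 * K - 2 * G) / (2 * (3 * K + G))
    return E, nu

# Diamond (K ~ 443 GPa, G ~ 535 GPa): prints a hardness near ~96 GPa.
print(hardness_from_elastic(443, 535))
print(young_poisson(443, 535))
```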


Journal ArticleDOI
TL;DR: In this article, the strong coupling energy scales in scalar, gauge and gravity sectors all are lifted up to the Planck scale, and it is shown that introducing R 2 -term makes the Higgsinflation and Higgs-dilaton inflation consistent.

Journal ArticleDOI
TL;DR: In this paper, a parsec-scale jet kinematics study of 409 radio-loud AGNs based on 15 GHz VLBA data obtained between 1994 August 31 and 2016 December 26 is presented.
Abstract: We present results from a parsec-scale jet kinematics study of 409 bright radio-loud AGNs based on 15 GHz VLBA data obtained between 1994 August 31 and 2016 December 26 as part of the 2 cm VLBA Survey and MOJAVE programs. We tracked 1744 individual bright features in 382 jets over at least five epochs. A majority (59%) of the best-sampled jet features showed evidence of accelerated motion at the $>3\sigma$ level. Although most features within a jet typically have speeds within ~40% of a characteristic median value, we identified 55 features in 42 jets that had unusually slow pattern speeds, nearly all of which lie within 4 pc (100 pc de-projected) of the core feature. Our results combined with other speeds from the literature indicate a strong correlation between apparent jet speed and synchrotron peak frequency, with the highest jet speeds being found only in low-peaked AGNs. Using Monte Carlo simulations, we find best-fit parent population parameters for a complete sample of 174 quasars above 1.5 Jy at 15 GHz. Acceptable fits are found with a jet population that has a simple unbeamed power-law luminosity function incorporating pure luminosity evolution, and a power-law Lorentz factor distribution ranging from 1.25 to 50 with slope $-1.4 \pm 0.2$. The parent jets of the brightest radio quasars have a space density of $261 \pm 19$ Gpc$^{-3}$ and unbeamed 15 GHz luminosities above $\sim 10^{24.5}$ W/Hz, consistent with FR II class radio galaxies.
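The apparent speeds discussed here are the standard superluminal-motion quantity: for intrinsic speed $\beta$ at viewing angle $\theta$ to the line of sight,

$$\beta_{\text{app}} = \frac{\beta \sin\theta}{1 - \beta\cos\theta},$$

which is maximized at $\cos\theta = \beta$, where $\beta_{\text{app}} = \beta\gamma$; this is why the Lorentz factor distribution of the parent population can be constrained from observed speed statistics.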