Showing papers by "Royal Holloway, University of London" published in 2019
••
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1491 more•Institutions (239)
TL;DR: In this article, the authors present the second volume of the Future Circular Collider Conceptual Design Report, devoted to the electron-positron collider FCC-ee, and present the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) study was launched, as an international collaboration hosted by CERN. This study covers a highest-luminosity high-energy lepton collider (FCC-ee) and an energy-frontier hadron collider (FCC-hh), which could, successively, be installed in the same 100 km tunnel. The scientific capabilities of the integrated FCC programme would serve the worldwide community throughout the 21st century. The FCC study also investigates an LHC energy upgrade, using FCC-hh technology. This document constitutes the second volume of the FCC Conceptual Design Report, devoted to the electron-positron collider FCC-ee. After summarizing the physics discovery opportunities, it presents the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan. FCC-ee can be built with today’s technology. Most of the FCC-ee infrastructure could be reused for FCC-hh. Combining concepts from past and present lepton colliders and adding a few novel elements, the FCC-ee design promises outstandingly high luminosity. This will make the FCC-ee a unique precision instrument to study the heaviest known particles (Z, W and H bosons and the top quark), offering great direct and indirect sensitivity to new physics.
526 citations
••
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1496 more•Institutions (238)
TL;DR: In this paper, the authors describe the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider in collaboration with national institutes, laboratories and universities worldwide, and enhanced by a strong participation of industrial partners.
Abstract: Particle physics has arrived at an important moment of its history. The discovery of the Higgs boson, with a mass of 125 GeV, completes the matrix of particles and interactions that has constituted the “Standard Model” for several decades. This model is a consistent and predictive theory, which has so far proven successful at describing all phenomena accessible to collider experiments. However, several experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more in general, the dynamical origin of the Higgs mechanism, do likewise point to the existence of physics beyond the Standard Model. This report contains the description of a novel research infrastructure based on a highest-energy hadron collider with a centre-of-mass collision energy of 100 TeV and an integrated luminosity of at least a factor of 5 larger than the HL-LHC. It will extend the current energy frontier by almost an order of magnitude. The mass reach for direct discovery will reach several tens of TeV, and allow, for example, to produce new particles whose existence could be indirectly exposed by precision measurements during the earlier preceding e+e– collider phase. This collider will also precisely measure the Higgs self-coupling and thoroughly explore the dynamics of electroweak symmetry breaking at the TeV scale, to elucidate the nature of the electroweak phase transition. WIMPs as thermal dark matter candidates will be discovered, or ruled out. As a single project, this particle collider infrastructure will serve the world-wide physics community for about 25 years and, in combination with a lepton collider (see FCC conceptual design report volume 2), will provide a research tool until the end of the 21st century. 
Collision energies beyond 100 TeV can be considered when using high-temperature superconductors. The European Strategy for Particle Physics (ESPP) update 2013 stated “To stay at the forefront of particle physics, Europe needs to be in a position to propose an ambitious post-LHC accelerator project at CERN by the time of the next Strategy update”. The FCC study has implemented the ESPP recommendation by developing a long-term vision for an “accelerator project in a global context”. This document describes the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider “in collaboration with national institutes, laboratories and universities worldwide”, and enhanced by a strong participation of industrial partners. Now, a coordinated preparation effort can be based on the core of an ever-growing consortium of already more than 135 institutes worldwide. The technology for constructing a high-energy circular hadron collider can be brought to the technology readiness level required for construction within the coming ten years through a focused R&D programme. The FCC-hh concept comprises, in the baseline scenario, a power-saving, low-temperature superconducting magnet system based on an evolution of the Nb3Sn technology pioneered at the HL-LHC, an energy-efficient cryogenic refrigeration infrastructure based on a neon-helium (Nelium) light gas mixture, a high-reliability and low-loss cryogen distribution infrastructure based on Invar, high-power distributed beam transfer using superconducting elements, and local magnet energy recovery and re-use technologies that are already being gradually introduced at other CERN accelerators.
On a longer timescale, high-temperature superconductors can be developed together with industrial partners to achieve an even more energy-efficient particle collider or to reach even higher collision energies. The re-use of the LHC and its injector chain, which also serve a concurrently running physics programme, is an essential lever to arrive at an overall sustainable research infrastructure at the energy frontier. Strategic R&D for FCC-hh aims at minimising construction cost and energy consumption, while maximising the socio-economic impact. It will mitigate technology-related risks and ensure that industry can benefit from an acceptable utility. Concerning the implementation, a preparatory phase of about eight years is both necessary and adequate to establish the project governance and organisation structures, to build the international machine and experiment consortia, to develop a territorial implantation plan in agreement with the host-states’ requirements, to optimise the disposal of land and underground volumes, and to prepare the civil engineering project. Such a large-scale, international fundamental research infrastructure, tightly involving industrial partners and providing training at all education levels, will be a strong motor of economic and societal development in all participating nations. The FCC study has implemented a set of actions towards a coherent vision for the world-wide high-energy and particle physics community, providing a collaborative framework for topically complementary and geographically well-balanced contributions. This conceptual design report lays the foundation for a subsequent infrastructure preparatory and technical design phase.
425 citations
••
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1501 more•Institutions (239)
TL;DR: In this article, the physics opportunities of the Future Circular Collider (FCC) were reviewed, covering its e+e-, pp, ep and heavy ion programmes, and the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions.
Abstract: We review the physics opportunities of the Future Circular Collider, covering its e+e-, pp, ep and heavy ion programmes. We describe the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions, the top quark and flavour, as well as phenomena beyond the Standard Model. We highlight the synergy and complementarity of the different colliders, which will contribute to a uniquely coherent and ambitious research programme, providing an unmatchable combination of precision and sensitivity to new physics.
407 citations
••
Centre national de la recherche scientifique1, IFREMER2, ETH Zurich3, University of Bern4, Cardiff University5, Université Paris-Saclay6, University of Bordeaux7, Federal Fluminense University8, Leibniz Institute for Baltic Sea Research9, University of St Andrews10, University of New Hampshire11, Oregon State University12, École pratique des hautes études13, Royal Holloway, University of London14, University of Nantes15, Hofstra University16, Lamont–Doherty Earth Observatory17, Uppsala University18, Woods Hole Oceanographic Institution19, University of Edinburgh20, Geological Survey of Denmark and Greenland21, Instituto Geológico y Minero de España22, University of Connecticut23, Georgia Institute of Technology24, University of Colorado Boulder25, University of the Algarve26, British Antarctic Survey27, VU University Amsterdam28, University of Bremen29, Max Planck Society30, Thermo Fisher Scientific31, University of Cambridge32, University of Paris33, University College London34, Ghent University35, Aix-Marseille University36, Autonomous University of Barcelona37, University of California, Santa Barbara38, Utrecht University39
TL;DR: This is the first set of consistently dated marine sediment cores enabling paleoclimate scientists to evaluate leads/lags between circulation and climate changes over vast regions of the Atlantic Ocean.
Abstract: Rapid changes in ocean circulation and climate have been observed in marine-sediment and ice cores over the last glacial period and deglaciation, highlighting the non-linear character of the climate system and underlining the possibility of rapid climate shifts in response to anthropogenic greenhouse gas forcing. To date, these rapid changes in climate and ocean circulation are still not fully explained. One obstacle hindering progress in our understanding of the interactions between past ocean circulation and climate changes is the difficulty of accurately dating marine cores. Here, we present a set of 92 marine sediment cores from the Atlantic Ocean for which we have established age-depth models that are consistent with the Greenland GICC05 ice core chronology, and computed the associated dating uncertainties, using a new deposition modeling technique. This is the first set of consistently dated marine sediment cores enabling paleoclimate scientists to evaluate leads/lags between circulation and climate changes over vast regions of the Atlantic Ocean. Moreover, this data set is of direct use in paleoclimate modeling studies.
399 citations
••
Royal Holloway, University of London1, Victoria University of Wellington2, National Oceanic and Atmospheric Administration3, Institute of Arctic and Alpine Research4, Norwegian Institute for Air Research5, University of Manchester6, University of Groningen7, University of Oxford8, Environmental Change Institute9, Lancaster University10, British Antarctic Survey11, Heidelberg University12, University of East Anglia13, University of Cambridge14
TL;DR: The increase in the methane burden began in 2007, with the mean global mole fraction in remote surface background air rising from about 1775 ppb in 2006 to 1850 ppb by 2017, at rates not observed since the 1980s as discussed by the authors.
Abstract: Atmospheric methane grew very rapidly in 2014 (12.7±0.5 ppb/yr), 2015 (10.1±0.7 ppb/yr), 2016 (7.0± 0.7 ppb/yr) and 2017 (7.7±0.7 ppb/yr), at rates not observed since the 1980s. The increase in the methane burden began in 2007, with the mean global mole fraction in remote surface background air rising from about 1775 ppb in 2006 to 1850 ppb in 2017. Simultaneously the 13C/12C isotopic ratio (expressed as δ13CCH4) has shifted, in a new trend to more negative values that have been observed worldwide for over a decade. The causes of methane's recent mole fraction increase are therefore either a change in the relative proportions (and totals) of emissions from biogenic and thermogenic and pyrogenic sources, especially in the tropics and sub-tropics, or a decline in the atmospheric sink of methane, or both. Unfortunately, with limited measurement data sets, it is not currently possible to be more definitive. The climate warming impact of the observed methane increase over the past decade, if continued at >5 ppb/yr in the coming decades, is sufficient to challenge the Paris Agreement, which requires sharp cuts in the atmospheric methane burden. However, anthropogenic methane emissions are relatively very large and thus offer attractive targets for rapid reduction, which are essential if the Paris Agreement aims are to be attained.
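The rates quoted above imply a simple sanity check: the 1775 to 1850 ppb rise over 2006 to 2017 corresponds to a mean growth of roughly 7 ppb/yr, and every annual rate listed since 2014 exceeds the >5 ppb/yr threshold the authors flag as challenging the Paris Agreement. A minimal sketch of that arithmetic, using only values taken from the abstract:

```python
# Sanity-check the trend quoted in the abstract (all numbers from the text).
rise_ppb = 1850 - 1775            # global mean mole-fraction increase, 2006 -> 2017
years = 2017 - 2006
mean_growth = rise_ppb / years    # ppb per year, ~6.8

# Annual growth rates reported for recent years (ppb/yr, central values)
recent_rates = {2014: 12.7, 2015: 10.1, 2016: 7.0, 2017: 7.7}

# Each recent year exceeds the >5 ppb/yr pace flagged as a challenge
# to the Paris Agreement targets.
all_above_threshold = all(rate > 5 for rate in recent_rates.values())
```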
329 citations
••
TL;DR: In this article, a search for high-mass dielectron and dimuon resonances in the mass range of 250 GeV to 6 TeV was performed at the Large Hadron Collider.
248 citations
••
TL;DR: An exclusion limit on the H→invisible branching ratio of 0.26 (0.17 +0.07/−0.05) at 95% confidence level is observed (expected) in combination with the results at √s = 7 and 8 TeV.
Abstract: Dark matter particles, if sufficiently light, may be produced in decays of the Higgs boson. This Letter presents a statistical combination of searches for H→invisible decays where H is produced according to the standard model via vector boson fusion, Z(ll)H, and W/Z(had)H, all performed with the ATLAS detector using 36.1 fb−1 of pp collisions at a center-of-mass energy of √s = 13 TeV at the LHC. In combination with the results at √s = 7 and 8 TeV, an exclusion limit on the H→invisible branching ratio of 0.26 (0.17 +0.07/−0.05) at 95% confidence level is observed (expected).
234 citations
••
TL;DR: The third CAFA challenge, CAFA3, that featured an expanded analysis over the previous CAFA rounds, both in terms of volume of data analyzed and the types of analysis performed, concluded that while predictions of the molecular function and biological process annotations have slightly improved over time, those of the cellular component have not.
Abstract: The Critical Assessment of Functional Annotation (CAFA) is an ongoing, global, community-driven effort to evaluate and improve the computational annotation of protein function. Here, we report on the results of the third CAFA challenge, CAFA3, which featured an expanded analysis over the previous CAFA rounds, both in terms of volume of data analyzed and the types of analysis performed. In a major new development, computational predictions and assessment goals drove some of the experimental assays, resulting in new functional annotations for more than 1000 genes. Specifically, we performed experimental whole-genome mutation screening in Candida albicans and Pseudomonas aeruginosa genomes, which provided us with genome-wide experimental data for genes associated with biofilm formation and motility. We further performed targeted assays on selected genes in Drosophila melanogaster, which we suspected of being involved in long-term memory. We conclude that while predictions of the molecular function and biological process annotations have slightly improved over time, those of the cellular component have not. Term-centric prediction of experimental annotations remains equally challenging; although the performance of the top methods is significantly better than the expectations set by baseline methods in C. albicans and D. melanogaster, it leaves considerable room and need for improvement. Finally, we report that the CAFA community now involves a broad range of participants with expertise in bioinformatics, biological experimentation, biocuration, and bio-ontologies, working together to improve functional annotation, computational function prediction, and our ability to manage big data in the era of large experimental screens.
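The CAFA rankings discussed above rest on a protein-centric score, Fmax: for each decision threshold, precision is averaged over proteins with at least one prediction and recall over all benchmark proteins, and the maximum harmonic mean over thresholds is reported. A minimal sketch of that metric (toy data structures assumed; the real evaluation also propagates predictions up the Gene Ontology and supports partial evaluation modes):

```python
# Minimal sketch of the protein-centric Fmax metric used to rank CAFA methods.
# predictions: {protein: {term: score in [0, 1]}}; truth: {protein: set of terms}.

def fmax(predictions, truth, thresholds=None):
    if thresholds is None:
        thresholds = [i / 100 for i in range(101)]
    best = 0.0
    for t in thresholds:
        precisions, recalls = [], []
        for protein, true_terms in truth.items():
            pred = {term for term, s in predictions.get(protein, {}).items() if s >= t}
            if pred:  # precision averaged only over proteins with predictions
                precisions.append(len(pred & true_terms) / len(pred))
            # recall averaged over all benchmark proteins
            recalls.append(len(pred & true_terms) / len(true_terms) if true_terms else 0.0)
        if not precisions:
            continue
        pr = sum(precisions) / len(precisions)
        rc = sum(recalls) / len(recalls)
        if pr + rc > 0:
            best = max(best, 2 * pr * rc / (pr + rc))
    return best
```

For example, a method that scores one of two true terms at 0.9 and one spurious term at 0.2 reaches its best F at thresholds that drop the spurious term (precision 1.0, recall 0.5).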
227 citations
••
TL;DR: In this article, an improved energy clustering algorithm is introduced, and its implications for the measurement and identification of prompt electrons and photons are discussed in detail, including corrections and calibrations that affect performance, including energy calibration, identification and isolation efficiencies.
Abstract: This paper describes the reconstruction of electrons and photons with the ATLAS detector, employed for measurements and searches exploiting the complete LHC Run 2 dataset. An improved energy clustering algorithm is introduced, and its implications for the measurement and identification of prompt electrons and photons are discussed in detail. Corrections and calibrations that affect performance, including energy calibration, identification and isolation efficiencies, and the measurement of the charge of reconstructed electron candidates are determined using up to 81 fb−1 of proton-proton collision data collected at √s=13 TeV between 2015 and 2017.
227 citations
••
TL;DR: It is the hope that the renewed sociotechnical frame for the IS discipline discussed in the paper holds potential to contribute to the enduring strength of the diverse, distinctive, yet unified discipline.
Abstract: The sociotechnical perspective is often seen as one of the foundational viewpoints—or an “axis of cohesion”— for the Information Systems (IS) discipline, contributing to both its distinctiveness and its ability to coherently expand its boundaries. However, our review of papers in the two leading IS journals from 2000 to 2016 suggests that IS research has lost sight of the discipline’s sociotechnical character—a character that was widely acknowledged at the discipline’s inception. This is a problem because an axis of cohesion can be fundamental to a discipline’s long-term vitality. In order to address this issue, we offer ways to renew the sociotechnical perspective so that it can continue to serve as a distinctive and coherent foundation for the discipline. Our hope is that the renewed sociotechnical frame for the IS discipline discussed in the paper holds potential to contribute to the enduring strength of our diverse, distinctive, yet unified discipline. It also prompts members of the discipline to think more deeply about what it means to be an IS scholar.
225 citations
••
TL;DR: In this article, the ATLAS Collaboration during Run 2 of the Large Hadron Collider (LHC) was used to identify jets containing b-hadrons, and the performance of the algorithms was evaluated in the s...
Abstract: The algorithms used by the ATLAS Collaboration during Run 2 of the Large Hadron Collider to identify jets containing b-hadrons are presented. The performance of the algorithms is evaluated in the s ...
••
TL;DR: Findings indicated a gender difference in reasons for camouflaging, with autistic women more likely to endorse “conventional” reasons (e.g. getting by in formal settings such as work), which have implications for understanding camouflaging in autistic adults.
Abstract: Camouflaging entails 'masking' in or 'passing' social situations. Research suggests camouflaging behaviours are common in autistic people, and may negatively impact mental health. To enhance understanding of camouflaging, this study examined reasons, contexts and costs of camouflaging. 262 autistic people completed measures of camouflaging behaviours, camouflaging contexts (e.g. work vs. family), camouflaging reasons (e.g. to make friends) and mental health symptoms. Findings indicated a gender difference in reasons for camouflaging, with autistic women more likely to endorse "conventional" reasons (e.g. getting by in formal settings such as work). Both camouflaging highly across contexts and 'switching' between camouflaging in some contexts but not in others, related to poorer mental health. These findings have implications for understanding camouflaging in autistic adults.
•
14 Aug 2019
TL;DR: In this article, the authors argue that results are commonly inflated due to two pervasive sources of experimental bias: spatial bias, caused by distributions of training and testing data that are not representative of a real-world deployment, and temporal bias, caused by incorrect time splits of training and testing sets.
Abstract: Is Android malware classification a solved problem? Published F1 scores of up to 0.99 appear to leave very little room for improvement. In this paper, we argue that results are commonly inflated due to two pervasive sources of experimental bias: "spatial bias" caused by distributions of training and testing data that are not representative of a real-world deployment; and "temporal bias" caused by incorrect time splits of training and testing sets, leading to impossible configurations. We propose a set of space and time constraints for experiment design that eliminates both sources of bias. We introduce a new metric that summarizes the expected robustness of a classifier in a real-world setting, and we present an algorithm to tune its performance. Finally, we demonstrate how this allows us to evaluate mitigation strategies for time decay such as active learning. We have implemented our solutions in TESSERACT, an open source evaluation framework for comparing malware classifiers in a realistic setting. We used TESSERACT to evaluate three Android malware classifiers from the literature on a dataset of 129K applications spanning over three years. Our evaluation confirms that earlier published results are biased, while also revealing counter-intuitive performance and showing that appropriate tuning can lead to significant improvements.
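The temporal constraint TESSERACT enforces can be stated simply: every training sample must predate every test sample, which a random shuffle routinely violates. A minimal sketch of that idea (simplified, hypothetical interface; the real framework also imposes spatial constraints and window-based evaluation):

```python
# Sketch of a temporally consistent train/test split, the constraint that
# prevents the "impossible configurations" described above.
from datetime import date

def temporal_split(samples, split_date):
    """samples: list of (timestamp, features, label) tuples.
    Train strictly before split_date, test on or after it, so no knowledge
    of the future leaks into training."""
    train = [s for s in samples if s[0] < split_date]
    test = [s for s in samples if s[0] >= split_date]
    return train, test

def violates_temporal_consistency(train, test):
    """True if some training sample is newer than some test sample,
    i.e. the split could not occur in a real deployment."""
    if not train or not test:
        return False
    return max(s[0] for s in train) > min(s[0] for s in test)
```

A random split over several years of malware data almost always fails this check, which is one mechanism behind the inflated F1 scores the paper documents.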
••
TL;DR: Analysis of records from 414 societies that span the past 10,000 years from 30 regions around the world reveals that moralizing gods follow—rather than precede—large increases in social complexity.
Abstract: The origins of religion and of complex societies represent evolutionary puzzles1–8. The ‘moralizing gods’ hypothesis offers a solution to both puzzles by proposing that belief in morally concerned supernatural agents culturally evolved to facilitate cooperation among strangers in large-scale societies9–13. Although previous research has suggested an association between the presence of moralizing gods and social complexity3,6,7,9–18, the relationship between the two is disputed9–13,19–24, and attempts to establish causality have been hampered by limitations in the availability of detailed global longitudinal data. To overcome these limitations, here we systematically coded records from 414 societies that span the past 10,000 years from 30 regions around the world, using 51 measures of social complexity and 4 measures of supernatural enforcement of morality. Our analyses not only confirm the association between moralizing gods and social complexity, but also reveal that moralizing gods follow—rather than precede—large increases in social complexity. Contrary to previous predictions9,12,16,18, powerful moralizing ‘big gods’ and prosocial supernatural punishment tend to appear only after the emergence of ‘megasocieties’ with populations of more than around one million people. Moralizing gods are not a prerequisite for the evolution of social complexity, but they may help to sustain and expand complex multi-ethnic empires after they have become established. By contrast, rituals that facilitate the standardization of religious traditions across large populations25,26 generally precede the appearance of moralizing gods. This suggests that ritual practices were more important than the particular content of religious belief to the initial rise of social complexity. Belief in moralizing gods followed the expansion of human societies and may have been preceded by doctrinal rituals that contributed to the initial rise of social complexity.
••
TL;DR: In this paper, the decays B0s → μ+μ− and B0 → μ+μ− have been studied using 26.3 fb−1 of 13 TeV LHC proton-proton collision data collected with the ATLAS detector in 2015 and 2016.
Abstract: A study of the decays B0s → μ+μ− and B0 → μ+μ− has been performed using 26.3 fb−1 of 13 TeV LHC proton-proton collision data collected with the ATLAS detector in 2015 and 2016. Since the detector resolut ...
••
TL;DR: Magnetic force microscopy (MFM) has become a truly widespread and commonly used characterization technique that has been applied to a variety of research and industrial applications as discussed by the authors, where the main advantages of the method include its high spatial resolution (typically ∼50 nm), ability to work in variable temperature and applied magnetic fields, versatility, and simplicity in operation, all with almost no need for sample preparation.
Abstract: Since it was first demonstrated in 1987, magnetic force microscopy (MFM) has become a truly widespread and commonly used characterization technique that has been applied to a variety of research and industrial applications. Some of the main advantages of the method include its high spatial resolution (typically ∼50 nm), ability to work in variable temperature and applied magnetic fields, versatility, and simplicity in operation, all with almost no need for sample preparation. However, for most commercial systems, the technique has historically provided only qualitative information, and the number of available modes was typically limited, thus not reflecting the experimental demands. Additionally, the range of samples under study was largely restricted to “classic” ferromagnetic samples (typically, thin films or patterned nanostructures). Throughout this Perspective article, the recent progress and development of MFM is described, followed by a summary of the current state-of-the-art techniques and objects for study. Finally, the future of this fascinating field is discussed in the context of emerging instrumental and material developments. Aspects including quantitative MFM, the accurate interpretation of the MFM images, new instrumentation, probe-engineering alternatives, and applications of MFM to new (often interdisciplinary) areas of materials science, physics, and biology will be discussed. We first describe the physical principles of MFM, specifically paying attention to common artifacts frequently occurring in MFM measurements; then, we present a comprehensive review of the recent developments in the MFM modes, instrumentation, and the main application areas; finally, the importance of the technique is speculated upon for emerging fields, or fields anticipated to emerge, including skyrmions, 2D materials, and topological insulators.
••
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1496 more•Institutions (238)
TL;DR: The third volume of the FCC Conceptual Design Report as discussed by the authors is devoted to the hadron collider FCC-hh, and summarizes the physics discovery opportunities, presents the FCC-hh accelerator design, performance reach, and staged operation plan, discusses the underlying technologies, the civil engineering and technical infrastructure, and also sketches a possible implementation.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics (EPPSU), the Future Circular Collider (FCC) study was launched as a world-wide international collaboration hosted by CERN. The FCC study covered an energy-frontier hadron collider (FCC-hh), a highest-luminosity high-energy lepton collider (FCC-ee), the corresponding 100 km tunnel infrastructure, as well as the physics opportunities of these two colliders, and a high-energy LHC, based on FCC-hh technology. This document constitutes the third volume of the FCC Conceptual Design Report, devoted to the hadron collider FCC-hh. It summarizes the FCC-hh physics discovery opportunities, presents the FCC-hh accelerator design, performance reach, and staged operation plan, discusses the underlying technologies, the civil engineering and technical infrastructure, and also sketches a possible implementation. Combining ingredients from the Large Hadron Collider (LHC), the high-luminosity LHC upgrade and adding novel technologies and approaches, the FCC-hh design aims at significantly extending the energy frontier to 100 TeV. Its unprecedented centre-of-mass collision energy will make the FCC-hh a unique instrument to explore physics beyond the Standard Model, offering great direct sensitivity to new physics and discoveries.
••
University of Toronto1, Université catholique de Louvain2, CERN3, C. N. Yang Institute for Theoretical Physics4, University of Maryland, College Park5, University of Illinois at Urbana–Champaign6, Stanford University7, Harvey Mudd College8, University of Southampton9, University of Washington10, University of Basel11, Universidad Michoacana de San Nicolás de Hidalgo12, University of Pittsburgh13, Heidelberg University14, Korea Institute for Advanced Study15, University of Michigan16, University of Oregon17, University of Tokyo18, University of California, Santa Barbara19, Cornell University20, University of California, Riverside21, University of Padua22, University of Florence23, Washington University in St. Louis24, University of Arizona25, Lawrence Berkeley National Laboratory26, University of California, Berkeley27, University of Cincinnati28, Benemérita Universidad Autónoma de Puebla29, Karlsruhe Institute of Technology30, University of Victoria31, Weizmann Institute of Science32, University of Minnesota33, Moscow Institute of Physics and Technology34, Durham University35, University of Southern Denmark36, Massachusetts Institute of Technology37, Valparaiso University38, University of La Serena39, Spanish National Research Council40, Hebrew University of Jerusalem41, Technische Universität München42, University of California, Irvine43, Seoul National University44, TRIUMF45, Aarhus University46, Rutherford Appleton Laboratory47, University of Mainz48, King's College London49, Autonomous University of Madrid50, Brown University51, Harvard University52, Perimeter Institute for Theoretical Physics53, University of Rome Tor Vergata54, Carleton University55, Higher University of San Andrés56, Lafayette College57, Royal Holloway, University of London58, University of Grenoble59, Université libre de Bruxelles60
TL;DR: A model-independent approach is developed to describe the sensitivity of MATHUSLA to BSM LLP signals, and a general discussion of the top-down and bottom-up motivations for LLP searches is synthesized to demonstrate the exceptional strength and breadth of the physics case for the construction of the MATHUSLA detector.
Abstract: We examine the theoretical motivations for long-lived particle (LLP) signals at the LHC in a comprehensive survey of standard model (SM) extensions. LLPs are a common prediction of a wide range of theories that address unsolved fundamental mysteries such as naturalness, dark matter, baryogenesis and neutrino masses, and represent a natural and generic possibility for physics beyond the SM (BSM). In most cases the LLP lifetime can be treated as a free parameter from the μm scale up to the Big Bang Nucleosynthesis limit of ∼10^7 m. Neutral LLPs with lifetimes above ∼100 m are particularly difficult to probe, as the sensitivity of the LHC main detectors is limited by challenging backgrounds, triggers, and small acceptances. MATHUSLA is a proposal for a minimally instrumented, large-volume surface detector near ATLAS or CMS. It would search for neutral LLPs produced in HL-LHC collisions by reconstructing displaced vertices (DVs) in a low-background environment, extending the sensitivity of the main detectors by orders of magnitude in the long-lifetime regime. We study the LLP physics opportunities afforded by a MATHUSLA-like detector at the HL-LHC, assuming backgrounds can be rejected as expected. We develop a model-independent approach to describe the sensitivity of MATHUSLA to BSM LLP signals, and compare it to DV and missing energy searches at ATLAS or CMS. We then explore the BSM motivations for LLPs in considerable detail, presenting a large number of new sensitivity studies. While our discussion is especially oriented towards the long-lifetime regime at MATHUSLA, this survey underlines the importance of a varied LLP search program at the LHC in general. By synthesizing these results into a general discussion of the top-down and bottom-up motivations for LLP searches, it is our aim to demonstrate the exceptional strength and breadth of the physics case for the construction of the MATHUSLA detector.
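The long-lifetime sensitivity argument above rests on standard decay kinematics: a particle with lab-frame mean decay length d = βγcτ decays between distances L1 and L2 from its production point with probability exp(−L1/d) − exp(−L2/d). A minimal sketch (illustrative numbers only, not the actual MATHUSLA geometry or the paper's simulation):

```python
# Hedged sketch of the standard decay-in-volume probability used in
# LLP sensitivity estimates.
import math

def decay_probability(l1_m, l2_m, betagamma, ctau_m):
    """Probability that an LLP with boost factor betagamma and proper
    decay length ctau_m decays between l1_m and l2_m from its origin."""
    d = betagamma * ctau_m                       # lab-frame mean decay length
    return math.exp(-l1_m / d) - math.exp(-l2_m / d)
```

For d much larger than the detector distances this reduces to (L2 − L1)/d, which is why a dedicated large-volume detector at ∼100 m can extend sensitivity by orders of magnitude in the long-lifetime regime.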
••
TL;DR: The Sustainable Development Goals and the New Urban Agenda recognise the role of cities in achieving sustainable development as discussed by the authors. However, these agendas were agreed and signed by national government authorities.
Abstract: The Sustainable Development Goals and the New Urban Agenda recognise the role of cities in achieving sustainable development. However, these agendas were agreed and signed by national gover...
••
TL;DR: Algorithms used for the reconstruction and identification of electrons in the central region of the ATLAS detector at the Large Hadron Collider (LHC) are presented in this article; these algorithms a...
Abstract: Algorithms used for the reconstruction and identification of electrons in the central region of the ATLAS detector at the Large Hadron Collider (LHC) are presented in this paper; these algorithms a ...
••
TL;DR: The complex interplay between abiotic stressors, host trees, insect herbivores and their natural enemies makes it very difficult to predict overall consequences of climate change on forest health, so process-based models are needed to simulate pest population dynamics under climate change scenarios.
Abstract: Climate change is a multi-faceted phenomenon, including elevated CO2, warmer temperatures, more severe droughts and more frequent storms. All these components can affect forest pests directly, or indirectly through interactions with host trees and natural enemies. Most of the responses of forest insect herbivores to climate change are expected to be positive, with shorter generation time, higher fecundity and survival, leading to increased range expansion and outbreaks. Forest insect pests can also benefit from synergistic effects of several climate change pressures, such as hotter droughts or warmer storms. However, lesser known negative effects are also likely, such as lethal effects of heat waves or thermal shocks, less palatable host tissues or more abundant parasitoids and predators. The complex interplay between abiotic stressors, host trees, insect herbivores and their natural enemies makes it very difficult to predict overall consequences of climate change on forest health. This calls for the development of process-based models to simulate pest population dynamics under climate change scenarios.
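The process-based modelling the review calls for can be sketched in its simplest form as a degree-day model: insect development is treated as thermal time accumulated above a lower developmental threshold, so a warmer year can support an extra generation. The threshold, the degree-days required per generation, and the synthetic temperature series below are all hypothetical, chosen only to illustrate the approach.

```python
# Minimal, illustrative degree-day model of insect voltinism.
# All parameter values (10 degC threshold, 500 degree-days per generation)
# and the synthetic temperature year are hypothetical.

def degree_days(daily_mean_temps, lower_threshold=10.0):
    """Accumulate thermal time (degree-days) above a developmental threshold."""
    return sum(max(t - lower_threshold, 0.0) for t in daily_mean_temps)

def generations_per_year(daily_mean_temps, dd_per_generation=500.0):
    """Number of complete generations the accumulated thermal time supports."""
    return int(degree_days(daily_mean_temps) // dd_per_generation)

# Synthetic year: triangular seasonal cycle from 5 degC to 20 degC.
baseline = [5 + 15 * (1 - abs(d - 182) / 182) for d in range(365)]
warmer = [t + 2.0 for t in baseline]  # uniform +2 degC warming

print(generations_per_year(baseline), generations_per_year(warmer))  # -> 2 3
```

Under these toy parameters a uniform 2 °C warming adds a full extra generation per year, the kind of nonlinear response that makes process-based simulation preferable to simple extrapolation.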
••
Queen Mary University of London1, University of Queensland2, Indian Institute of Technology Kharagpur3, Royal Holloway, University of London4, Max Planck Society5, King Abdullah University of Science and Technology6, University of Adelaide7, University of Edinburgh8, Wellcome Trust Sanger Institute9, Massachusetts Institute of Technology10, University of Exeter11, Nationwide Children's Hospital12
TL;DR: Sequenceserver is a tool for running BLAST and visually inspecting BLAST results for biological interpretation and uses simple algorithms to prevent potential analysis errors and provides flexible text-based and visual outputs to support researcher productivity.
Abstract: Comparing newly obtained and previously known nucleotide and amino-acid sequences underpins modern biological research. BLAST is a well-established tool for such comparisons but is challenging to use on new data sets. We combined a user-centric design philosophy with sustainable software development approaches to create Sequenceserver, a tool for running BLAST and visually inspecting BLAST results for biological interpretation. Sequenceserver uses simple algorithms to prevent potential analysis errors and provides flexible text-based and visual outputs to support researcher productivity. Our software can be rapidly installed for use by individuals or on shared servers.
••
TL;DR: It is reported that the CAFA community now involves a broad range of participants with expertise in bioinformatics, biological experimentation, biocuration, and bioontologies, working together to improve functional annotation, computational function prediction, and the ability to manage big data in the era of large experimental screens.
Abstract: The Critical Assessment of Functional Annotation (CAFA) is an ongoing, global, community-driven effort to evaluate and improve the computational annotation of protein function. Here we report on the results of the third CAFA challenge, CAFA3, that featured an expanded analysis over the previous CAFA rounds, both in terms of volume of data analyzed and the types of analysis performed. In a novel and major new development, computational predictions and assessment goals drove some of the experimental assays, resulting in new functional annotations for more than 1000 genes. Specifically, we performed experimental whole-genome mutation screening in Candida albicans and Pseudomonas aeruginosa genomes, which provided us with genome-wide experimental data for genes associated with biofilm formation and motility (P. aeruginosa only). We further performed targeted assays on selected genes in Drosophila melanogaster, which we suspected of being involved in long-term memory. We conclude that, while predictions of the molecular function and biological process annotations have slightly improved over time, those of the cellular component have not. Term-centric prediction of experimental annotations remains equally challenging; although the performance of the top methods is significantly better than expectations set by baseline methods in C. albicans and D. melanogaster, it leaves considerable room and need for improvement. We finally report that the CAFA community now involves a broad range of participants with expertise in bioinformatics, biological experimentation, biocuration, and bioontologies, working together to improve functional annotation, computational function prediction, and our ability to manage big data in the era of large experimental screens.
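CAFA's headline protein-centric metric is Fmax: predicted GO terms are thresholded on their confidence scores, precision and recall against the experimental annotations are computed at each threshold, and the best harmonic mean is reported. A minimal single-protein sketch follows (the real evaluation averages over proteins and propagates annotations up the GO graph; the term IDs and scores here are invented for illustration):

```python
# Toy sketch of a CAFA-style Fmax: scan confidence thresholds, compute
# precision/recall of the predicted GO-term set against the true set,
# and keep the best F1. Single protein only; term IDs/scores are made up.

def fmax(predicted_scores, true_terms):
    best = 0.0
    for tau in sorted(set(predicted_scores.values())):
        predicted = {t for t, s in predicted_scores.items() if s >= tau}
        if not predicted:
            continue
        tp = len(predicted & true_terms)
        precision = tp / len(predicted)
        recall = tp / len(true_terms)
        if precision + recall > 0:
            best = max(best, 2 * precision * recall / (precision + recall))
    return best

scores = {"GO:0001": 0.9, "GO:0002": 0.7, "GO:0003": 0.2}
truth = {"GO:0001", "GO:0003"}
print(fmax(scores, truth))  # -> 0.8 (best threshold keeps all three terms)
```

Here the low threshold wins: accepting the low-confidence correct term costs one false positive but recovers full recall.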
••
TL;DR: Although the lion's share of scholarship in management and organization studies conceives of organizations as entities within which communication occurs, "Communication Constitutes Organization" (C... as mentioned in this paper ).
Abstract: Although the lion’s share of scholarship in management and organization studies conceives of organizations as entities within which communication occurs, “Communication Constitutes Organization” (C...
••
University of Wales, Lampeter1, Aberystwyth University2, Brock University3, Royal Holloway, University of London4, University of Illinois at Chicago5, Lund University6, University of Minnesota7, Xi'an Jiaotong University8, University of New Brunswick9, University of Ottawa10, University of Copenhagen11, Durham University12, Victoria University of Wellington13, Yale University14
TL;DR: In 2018, the International Union of Geological Sciences formally ratified a proposal to subdivide the Holocene into three stages/ages, along with their equivalent subseries/subepochs, each anchored by a Global boundary Stratotype Section and Point (GSSP) as mentioned in this paper.
Abstract: The Holocene, which currently spans ~11 700 years, is the shortest series/epoch within the geological time scale (GTS), yet it contains a rich archive of evidence in stratigraphical contexts that are frequently continuous and often preserved at high levels of resolution. On 14 June 2018, the Executive Committee of the International Union of Geological Sciences formally ratified a proposal to subdivide the Holocene into three stages/ages, along with their equivalent subseries/subepochs, each anchored by a Global boundary Stratotype Section and Point (GSSP). The new stages are the Greenlandian (Lower/Early Holocene Subseries/Subepoch) with its GSSP in the Greenland NGRIP2 ice core and dated at 11 700 a b2k (before 2000 CE); the Northgrippian (Middle Holocene Subseries/Subepoch) with its GSSP in the Greenland NGRIP1 ice core and dated at 8236 a b2k; and the Meghalayan (Upper/Late Holocene Subseries/Subepoch) with its GSSP in a speleothem from Mawmluh Cave, north-eastern India, with a date of 4250 a b2k. We explain the nomenclature of the new divisions, describe the procedures involved in the ratification process, designate auxiliary stratotypes to support the GSSPs and consider the implications of the subdivision for defining the Anthropocene as a new unit within the GTS.
••
Imperial College London1, Plymouth Marine Laboratory2, Cooperative Institute for Research in Environmental Sciences3, Open University4, Natural Environment Research Council5, Royal Holloway, University of London6, King's College London7, Aberystwyth University8, Foreign and Commonwealth Office9, University College London10, University of Manitoba11
TL;DR: Under a global 1.5°C warming scenario, the Antarctic Peninsula is projected to see more days above 0°C, with up to 130 such days each year in the northern Peninsula.
Abstract: Warming of the Antarctic Peninsula in the latter half of the 20th century was greater than any other terrestrial environment in the Southern Hemisphere, and obvious cryospheric and biological consequences have been observed. Under a global 1.5°C scenario, warming in the Antarctic Peninsula is likely to increase the number of days above 0°C, with up to 130 such days each year in the northern Peninsula. Ocean turbulence will increase, making the circumpolar deep water (CDW) both warmer and shallower, delivering heat to the sea surface and to coastal margins. Thinning and recession of marine margins of glaciers and ice caps is expected to accelerate to terrestrial limits, increasing iceberg production, after which glacier retreat may slow on land. Ice shelves will experience continued increase in meltwater production and consequent structural change, but not imminent regional collapses. Marine biota can respond in multiple ways to climatic changes, with effects complicated by past resource extraction activities. Southward distribution shifts have been observed in multiple taxa during the last century and these are likely to continue. Exposed (ice free) terrestrial areas will expand, providing new habitats for native and non-native organisms, but with a potential loss of genetic diversity. While native terrestrial biota are likely to benefit from modest warming, the greatest threat to native biodiversity is from non-native terrestrial species.
••
TL;DR: In this article, the authors measured the yield and nuclear modification factor (R_AA) in Pb+Pb data at √s_NN = 5.02 TeV and 25 pb−1 of pp data at r...
••
Aix-Marseille University1, University of Oklahoma2, University of Massachusetts Amherst3, Azerbaijan National Academy of Sciences4, University of Pavia5, University of Göttingen6, Royal Holloway, University of London7, University of Toronto8, University of Copenhagen9, University of Sussex10, University of Oslo11, University of Bergen12, Joint Institute for Nuclear Research13, Tel Aviv University14, Technion – Israel Institute of Technology15, Argonne National Laboratory16, International Centre for Theoretical Physics17, King's College London18, University of Tokyo19, University of Mainz20, National University of La Plata21, AGH University of Science and Technology22, Northern Illinois University23, Ludwig Maximilian University of Munich24, Boğaziçi University25, Istanbul University26, University of Geneva27
TL;DR: In this article, leptonic decays of W bosons extracted from 13 TeV proton-proton collisions at the LHC are used to search for heavy neutral leptons (HNLs) that are produced through mixing with muon or electron neutrinos.
Abstract: The problems of neutrino masses, matter-antimatter asymmetry, and dark matter could be successfully addressed by postulating right-handed neutrinos with Majorana masses below the electroweak scale. In this work, leptonic decays of W bosons extracted from 32.9 fb−1 to 36.1 fb−1 of 13 TeV proton–proton collisions at the LHC are used to search for heavy neutral leptons (HNLs) that are produced through mixing with muon or electron neutrinos. The search is conducted using the ATLAS detector in both prompt and displaced leptonic decay signatures. The prompt signature requires three leptons produced at the interaction point (either μμe or eeμ) with a veto on same-flavour opposite-charge topologies. The displaced signature comprises a prompt muon from the W boson decay and the requirement of a dilepton vertex (either μμ or μe) displaced in the transverse plane by 4–300 mm from the interaction point. The search sets constraints on the HNL mixing to muon and electron neutrinos for HNL masses in the range 4.5–50 GeV.
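The 4–300 mm transverse window of the displaced signature implies a simple geometric acceptance for an exponentially decaying HNL: given a proper decay length cτ and a transverse boost βγ, the probability of decaying inside the window is a difference of two exponentials. The sketch below uses an illustrative boost value, not one taken from the paper:

```python
import math

# Probability that a particle with proper decay length c*tau (mm) and
# lab-frame boost beta*gamma decays at a transverse distance inside
# [r_min, r_max] -- the 4-300 mm window of the displaced HNL search.
# Exponential decay law; the betagamma value below is illustrative only.

def decay_in_window(ctau_mm, betagamma, r_min=4.0, r_max=300.0):
    lab_length = betagamma * ctau_mm  # mean lab-frame decay length
    return math.exp(-r_min / lab_length) - math.exp(-r_max / lab_length)

for ctau in (1.0, 10.0, 100.0, 1000.0):  # mm
    p = decay_in_window(ctau, betagamma=3.0)
    print(f"ctau = {ctau:7.1f} mm -> P(4-300 mm) = {p:.3f}")
```

The acceptance peaks at intermediate lifetimes: too short and the vertex fails the 4 mm cut, too long and most decays escape the 300 mm boundary, which is why displaced searches constrain a bounded band in the mass-mixing plane.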
••
TL;DR: This Letter describes the observation of the light-by-light scattering process, γγ→γγ, in Pb+Pb collisions at sqrt[s_{NN}]=5.02 TeV, and the observed excess of events over the expected background has a significance of 8.2 standard deviations.
Abstract: This Letter describes the observation of the light-by-light scattering process, γγ→γγ, in Pb+Pb collisions at sqrt[s_{NN}]=5.02 TeV. The analysis is conducted using a data sample corresponding to an integrated luminosity of 1.73 nb^{-1}, collected in November 2018 by the ATLAS experiment at the LHC. Light-by-light scattering candidates are selected in events with two photons produced exclusively, each with transverse energy E_{T}^{γ}>3 GeV and pseudorapidity |η_{γ}|<2.4, diphoton invariant mass above 6 GeV, and small diphoton transverse momentum and acoplanarity. After applying all selection criteria, 59 candidate events are observed for a background expectation of 12±3 events. The observed excess of events over the expected background has a significance of 8.2 standard deviations. The measured fiducial cross section is 78±13(stat)±7(syst)±3(lumi) nb.
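The quoted 8.2 standard deviations come from ATLAS's full profile-likelihood fit, but the counting-experiment arithmetic behind it can be sketched: with n = 59 observed events and a background of b = 12 ± 3, the Asimov significance formula for a background with uncertainty gives a rough estimate. This simplified one-bin version deliberately ignores shape information and therefore lands below the published value; it is a cross-check of the scale, not a reproduction of the result.

```python
import math

# One-bin discovery significance for n observed events over background b
# with absolute uncertainty sigma_b (profile-likelihood Asimov formula).
# Simplified cross-check only -- NOT the full ATLAS fit behind 8.2 sigma.

def significance(n, b, sigma_b):
    if sigma_b == 0:
        # Pure Poisson counting, background known exactly.
        return math.sqrt(2 * (n * math.log(n / b) - (n - b)))
    s2 = sigma_b ** 2
    term1 = n * math.log(n * (b + s2) / (b ** 2 + n * s2))
    term2 = (b ** 2 / s2) * math.log(1 + s2 * (n - b) / (b * (b + s2)))
    return math.sqrt(2 * (term1 - term2))

print(significance(59, 12, 0))  # background known exactly: ~9.7 sigma
print(significance(59, 12, 3))  # with the quoted 12 +/- 3: ~6.3 sigma
```

The gap between these one-bin numbers and 8.2σ reflects the extra discriminating power of the full fit to the event kinematics.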
••
TL;DR: In this paper, a search for a heavy charged-boson resonance decaying into a charged lepton (electron or muon) and a neutrino is reported, where the observed transverse mass distribution computed from the lepton and missing transverse momenta is consistent with the distribution expected from the Standard Model.
Abstract: A search for a heavy charged-boson resonance decaying into a charged lepton (electron or muon) and a neutrino is reported. A data sample of 139 fb−1 of proton-proton collisions at √s=13 TeV collected with the ATLAS detector at the LHC during 2015–2018 is used in the search. The observed transverse mass distribution computed from the lepton and missing transverse momenta is consistent with the distribution expected from the Standard Model, and upper limits on the cross section for pp→W′→lν are extracted (l=e or μ). These vary between 1.3 pb and 0.05 fb depending on the resonance mass in the range between 0.15 and 7.0 TeV at 95% confidence level for the electron and muon channels combined. Gauge bosons with a mass below 6.0 and 5.1 TeV are excluded in the electron and muon channels, respectively, in a model with a resonance that has couplings to fermions identical to those of the Standard Model W boson. Cross-section limits are also provided for resonances with several fixed Γ/m values in the range between 1% and 15%. Model-independent limits are derived in single-bin signal regions defined by a varying minimum transverse mass threshold. The resulting visible cross-section upper limits range between 4.6 (15) pb and 22 (22) ab as the threshold increases from 130 (110) GeV to 5.1 (5.1) TeV in the electron (muon) channel.
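The search variable here is the transverse mass built from the charged lepton and the missing transverse momentum, m_T = sqrt(2 pT^l ET^miss (1 − cos Δφ)); for a two-body W′ → lν decay it exhibits a Jacobian edge near the resonance mass. A minimal sketch with illustrative kinematics:

```python
import math

# Transverse mass of a lepton + missing-transverse-momentum system:
#   m_T = sqrt(2 * pT_lep * met * (1 - cos(dphi)))
# the discriminating variable of the W' -> l nu search.
# The kinematic values below are illustrative only.

def transverse_mass(pt_lep, met, dphi):
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# Back-to-back lepton and neutrino, each carrying half of a 5 TeV
# resonance mass, reconstruct m_T right at the Jacobian edge:
print(transverse_mass(pt_lep=2500.0, met=2500.0, dphi=math.pi))  # -> 5000.0 GeV
```

Because the neutrino's longitudinal momentum is unmeasured, m_T rather than the invariant mass is the natural variable, and the single-bin limits quoted above correspond to sliding a minimum-m_T threshold.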