Cosmic-Ray Positron Energy Spectrum Measured by PAMELA
TL;DR: The satellite-borne experiment PAMELA has been used to make a new measurement of the cosmic-ray positron flux and fraction that extends previously published measurements up to 300 GeV in kinetic energy.
Abstract: Precision measurements of the positron component in the cosmic radiation provide important information about the propagation of cosmic rays and the nature of particle sources in our Galaxy. The satellite-borne experiment PAMELA has been used to make a new measurement of the cosmic-ray positron flux and fraction that extends previously published measurements up to 300 GeV in kinetic energy. The combined measurements of the cosmic-ray positron energy spectrum and fraction provide a unique tool to constrain interpretation models. During the recent solar minimum activity period from July 2006 to December 2009, approximately 24 500 positrons were observed. The results cannot be easily reconciled with purely secondary production, and additional sources of either astrophysical or exotic origin may be required.
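The positron fraction referred to throughout is the ratio of the positron flux to the total electron-plus-positron flux, φ(e⁺)/(φ(e⁺)+φ(e⁻)). A minimal sketch of the definition (the flux values below are hypothetical placeholders, not PAMELA measurements):

```python
# Positron fraction: phi(e+) / (phi(e+) + phi(e-)).
# The numbers used here are arbitrary illustrative values, not data.

def positron_fraction(flux_pos, flux_ele):
    """Return the positron fraction given positron and electron fluxes."""
    return flux_pos / (flux_pos + flux_ele)

# Example: a positron flux of 5 and an electron flux of 45 (arbitrary
# units) give a positron fraction of 0.1.
print(positron_fraction(5.0, 45.0))
```

A rise of this ratio with energy, as reported above, is what is difficult to reconcile with purely secondary positron production.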
Citations
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
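The mail-filtering example above can be sketched as a toy learner that accumulates per-word evidence from messages the user rejected or kept. This is an illustrative reconstruction, not code from the cited paper, and all messages and words are made up:

```python
from collections import Counter

# Toy mail filter: learn per-word rejection evidence from labelled examples.

def train(rejected, kept):
    """Count word occurrences in rejected and in kept messages."""
    bad = Counter(w for msg in rejected for w in msg.lower().split())
    good = Counter(w for msg in kept for w in msg.lower().split())
    return bad, good

def is_rejected(msg, bad, good):
    """Reject a message if its words occur more often in rejected mail."""
    score = sum(bad[w] - good[w] for w in msg.lower().split())
    return score > 0

bad, good = train(
    rejected=["buy cheap pills now", "cheap offer buy now"],
    kept=["meeting agenda attached", "lunch tomorrow"],
)
print(is_rejected("cheap pills offer", bad, good))  # True
print(is_rejected("agenda for lunch", bad, good))   # False
```

As new rejections arrive, re-running `train` updates the counts, which is the "maintain the filtering rules automatically" behavior described above.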
13,246 citations
TL;DR: In this paper, the Alpha Magnetic Spectrometer on the International Space Station was used to measure the primary cosmic-ray electron flux in the range 0.5 to 700 GeV and the positron flux in the range 0.5 to 500 GeV.
Abstract: Precision measurements by the Alpha Magnetic Spectrometer on the International Space Station of the primary cosmic-ray electron flux in the range 0.5 to 700 GeV and the positron flux in the range 0.5 to 500 GeV are presented. The electron flux and the positron flux each require a description beyond a single power-law spectrum. Both the electron flux and the positron flux change their behavior at ∼30 GeV but the fluxes are significantly different in their magnitude and energy dependence. Between 20 and 200 GeV the positron spectral index is significantly harder than the electron spectral index. The determination of the differing behavior of the spectral indices versus energy is a new observation and provides important information on the origins of cosmic-ray electrons and positrons.
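The spectral index discussed above is the exponent γ in a power-law flux Φ(E) ∝ E^γ; a "harder" spectrum has a less negative γ. A minimal sketch of recovering γ from two flux measurements (the values are hypothetical, not AMS data):

```python
import math

# For a power law Phi(E) = C * E**gamma, two measurements give
#   gamma = log(Phi2 / Phi1) / log(E2 / E1).
def spectral_index(e1, phi1, e2, phi2):
    """Infer the power-law index from two (energy, flux) points."""
    return math.log(phi2 / phi1) / math.log(e2 / e1)

# Hypothetical fluxes following Phi(E) = E**-3 exactly:
print(spectral_index(10.0, 10.0**-3, 100.0, 100.0**-3))  # close to -3.0
```

Comparing such indices fitted separately to the e⁺ and e⁻ fluxes between 20 and 200 GeV is how the "harder positron spectrum" statement above is made quantitative.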
461 citations
01 Jan 2013
TL;DR: In this paper, the authors classify all two-body nonrelativistic Dark Matter annihilation channels to the allowed polarization states of Standard Model particles, computing the energy spectra of the stable final-state particles relevant for indirect DM detection.
Abstract: Taking into account spins, we classify all two-body non-relativistic Dark Matter annihilation channels into the allowed polarization states of Standard Model particles, computing the energy spectra of the stable final-state particles relevant for indirect DM detection. We study the DM masses, annihilation channels and cross sections that can reproduce the PAMELA indications of an e⁺ excess consistently with the PAMELA p̄ data and the ATIC/PPB-BETS e⁺ + e⁻ data. From the PAMELA data alone, two solutions emerge: (i) either the DM particles that annihilate into W, Z, h must be heavier than about 10 TeV, or (ii) the DM must annihilate only into leptons. In both cases a DM particle compatible with the PAMELA excess seems to have quite unexpected properties. Solution (ii) implies a peak in the e⁺ + e⁻ ...
402 citations
TL;DR: A comprehensive review of keV-scale sterile neutrino Dark Matter can be found in this paper, where the role of active neutrinos in particle physics, astrophysics, and cosmology is also reviewed.
Abstract: We present a comprehensive review of keV-scale sterile neutrino Dark Matter, collecting views and insights from all disciplines involved—cosmology, astrophysics, nuclear, and particle physics—in each case viewed from both theoretical and experimental/observational perspectives. After reviewing the role of active neutrinos in particle physics, astrophysics, and cosmology, we focus on sterile neutrinos in the context of the Dark Matter puzzle. Here, we first review the physics motivation for sterile neutrino Dark Matter, based on challenges and tensions in purely cold Dark Matter scenarios. We then round out the discussion by critically summarizing all known constraints on sterile neutrino Dark Matter arising from astrophysical observations, laboratory experiments, and theoretical considerations. In this context, we provide a balanced discourse on the possibly positive signal from X-ray observations. Another focus of the paper concerns the construction of particle physics models, aiming to explain how sterile neutrinos of keV-scale masses could arise in concrete settings beyond the Standard Model of elementary particle physics. The paper ends with an extensive review of current and future astrophysical and laboratory searches, highlighting new ideas and their experimental challenges, as well as future perspectives for the discovery of sterile neutrinos.
398 citations
References
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the splitting, and the ideas are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
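The two randomizations the abstract describes — a bootstrap sample of the data for each tree, and a random subset of features considered at each split — can be sketched with depth-1 trees ("stumps"). This is an illustration under simplifying assumptions with synthetic data, not Breiman's implementation:

```python
import random

# Minimal random-forest sketch: bootstrap resampling per tree plus a
# random feature subset per split.  Trees are depth-1 stumps for brevity.

def fit_stump(rows, labels, features):
    """Find the best single threshold split among the sampled features."""
    best = None
    for f in features:
        for row in rows:
            t = row[f]
            left = [y for x, y in zip(rows, labels) if x[f] <= t]
            right = [y for x, y in zip(rows, labels) if x[f] > t]
            # Score = number of labels the per-side majority vote gets right.
            score = sum(max(s.count(0), s.count(1)) for s in (left, right))
            if best is None or score > best[0]:
                maj = lambda s: max(set(s), key=s.count) if s else 0
                best = (score, f, t, maj(left), maj(right))
    return best[1:]  # (feature, threshold, label_if_below, label_if_above)

def fit_forest(rows, labels, n_trees=51, seed=0):
    rng = random.Random(seed)
    n, d = len(rows), len(rows[0])
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]      # bootstrap sample
        feats = rng.sample(range(d), max(1, d // 2))    # random feature subset
        forest.append(fit_stump([rows[i] for i in idx],
                                [labels[i] for i in idx], feats))
    return forest

def predict(forest, row):
    votes = [(below if row[f] <= t else above) for f, t, below, above in forest]
    return max(set(votes), key=votes.count)

# Synthetic data: the label is 1 when both features are large.
rows = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8], [0.3, 0.3], [0.7, 0.7]]
labels = [0, 0, 1, 1, 0, 1]
forest = fit_forest(rows, labels)
print(predict(forest, [0.95, 0.9]))  # should follow the learned rule: 1
print(predict(forest, [0.05, 0.1]))  # and here: 0
```

Averaging many such weakly correlated trees is what drives the strength-versus-correlation trade-off the abstract analyzes; real random forests grow full trees rather than stumps.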
79,257 citations
01 Jul 2003 - Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
TL;DR: Geant4, as discussed by the authors, is a toolkit for simulating the passage of particles through matter, with a complete range of functionality including tracking, geometry, physics models and hits.
Abstract: Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics.
18,904 citations
17 Jul 1986
15,313 citations
TL;DR: It is found that the positron fraction increases sharply over much of that range, in a way that appears to be completely inconsistent with secondary sources, and it is concluded that a primary source, be it an astrophysical object or dark matter annihilation, is necessary.
Abstract: Antiparticles account for a small fraction of cosmic rays and are known to be produced in interactions between cosmic-ray nuclei and atoms in the interstellar medium [1], which is referred to as a ...
2,287 citations