Author

Gernot Maier

Bio: Gernot Maier is an academic researcher from McGill University. The author has contributed to research on topics including the Cherenkov Telescope Array and blazars. The author has an h-index of 62 and has co-authored 386 publications receiving 15,092 citations. Previous affiliations of Gernot Maier include the University of Delaware and the University of Zagreb.


Papers
Journal ArticleDOI
Marcos Daniel Actis1, G. Agnetta2, Felix Aharonian3, A. G. Akhperjanian  +682 moreInstitutions (109)
TL;DR: Ground-based gamma-ray astronomy has had a major breakthrough with the impressive results obtained using systems of imaging atmospheric Cherenkov telescopes, as mentioned in this paper. CTA is an international initiative to build the next-generation instrument, with a factor of 5-10 improvement in sensitivity in the 100 GeV-10 TeV range and an extension to energies well below 100 GeV and above 100 TeV.
Abstract: Ground-based gamma-ray astronomy has had a major breakthrough with the impressive results obtained using systems of imaging atmospheric Cherenkov telescopes. Ground-based gamma-ray astronomy has a huge potential in astrophysics, particle physics and cosmology. CTA is an international initiative to build the next generation instrument, with a factor of 5-10 improvement in sensitivity in the 100 GeV-10 TeV range and the extension to energies well below 100 GeV and above 100 TeV. CTA will consist of two arrays (one in the north, one in the south) for full sky coverage and will be operated as an open observatory. The design of CTA is based on currently available technology. This document reports on the status and presents the major design concepts of CTA.

1,006 citations

Journal ArticleDOI
B. S. Acharya1, Marcos Daniel Actis2, T. Aghajani3, G. Agnetta4  +979 moreInstitutions (122)
TL;DR: The Cherenkov Telescope Array (CTA), as discussed by the authors, is a very-high-energy (VHE) gamma-ray observatory backed by an international collaboration of more than 1000 members from 27 countries in Europe, Asia, Africa and North and South America.

701 citations

Journal ArticleDOI
TL;DR: In this paper, a composition analysis of KASCADE air-shower data is performed by unfolding the two-dimensional frequency spectrum of electron and muon numbers; the aim of the analysis is the determination of energy spectra for elemental groups representing the chemical composition of primary cosmic rays.

526 citations

Journal ArticleDOI
19 May 2005-Nature
TL;DR: The results show that it should be possible to determine the nature and composition of UHECRs with combined radio and particle detectors, and to detect the ultrahigh-energy neutrinos expected from flavour mixing.
Abstract: The nature of ultrahigh-energy cosmic rays (UHECRs) at energies >10²⁰ eV remains a mystery. They are likely to be of extragalactic origin, but should be absorbed within approximately 50 Mpc through interactions with the cosmic microwave background. As there are no sufficiently powerful accelerators within this distance from the Galaxy, explanations for UHECRs range from unusual astrophysical sources to exotic string physics. Also unclear is whether UHECRs consist of protons, heavy nuclei, neutrinos or gamma-rays. To resolve these questions, larger detectors with higher duty cycles and which combine multiple detection techniques are needed. Radio emission from UHECRs, on the other hand, is unaffected by attenuation, has a high duty cycle, gives calorimetric measurements and provides high directional accuracy. Here we report the detection of radio flashes from cosmic-ray air showers using low-cost digital radio receivers. We show that the radiation can be understood in terms of the geosynchrotron effect. Our results show that it should be possible to determine the nature and composition of UHECRs with combined radio and particle detectors, and to detect the ultrahigh-energy neutrinos expected from flavour mixing.

345 citations

MonographDOI
TL;DR: The Cherenkov Telescope Array (CTA), as mentioned in this paper, is the major global observatory for very high energy gamma-ray astronomy over the next decade and beyond, covering a huge range in photon energy from 20 GeV to 300 TeV.
Abstract: The Cherenkov Telescope Array, CTA, will be the major global observatory for very high energy gamma-ray astronomy over the next decade and beyond. The scientific potential of CTA is extremely broad: from understanding the role of relativistic cosmic particles to the search for dark matter. CTA is an explorer of the extreme universe, probing environments from the immediate neighbourhood of black holes to cosmic voids on the largest scales. Covering a huge range in photon energy from 20 GeV to 300 TeV, CTA will improve on all aspects of performance with respect to current instruments. The observatory will operate arrays on sites in both hemispheres to provide full sky coverage and will hence maximize the potential for the rarest phenomena such as very nearby supernovae, gamma-ray bursts or gravitational wave transients. With 99 telescopes on the southern site and 19 telescopes on the northern site, flexible operation will be possible, with sub-arrays available for specific tasks. CTA will have important synergies with many of the new generation of major astronomical and astroparticle observatories. Multi-wavelength and multi-messenger approaches combining CTA data with those from other instruments will lead to a deeper understanding of the broad-band non-thermal properties of target sources. The CTA Observatory will be operated as an open, proposal-driven observatory, with all data available on a public archive after a pre-defined proprietary period. Scientists from institutions worldwide have combined together to form the CTA Consortium. This Consortium has prepared a proposal for a Core Programme of highly motivated observations. The programme, encompassing approximately 40% of the available observing time over the first ten years of CTA operation, is made up of individual Key Science Projects (KSPs), which are presented in this document.

334 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
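The mail-filtering scenario above can be made concrete with a minimal sketch. The abstract does not prescribe an algorithm; a Laplace-smoothed naive Bayes classifier (a common textbook choice, assumed here purely for illustration) learns per-user filtering rules from labelled examples of accepted and rejected messages:

```python
from collections import Counter
import math

def train(messages):
    """messages: list of (text, label) pairs, label 'spam' or 'ham'.
    Returns per-class word counts and per-class message totals."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score each class by log prior + Laplace-smoothed log likelihood,
    and return the higher-scoring class."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        n_words = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (n_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy per-user training data (invented for this sketch).
train_data = [
    ("win a free prize now", "spam"),
    ("free money offer inside", "spam"),
    ("meeting moved to noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]
counts, totals = train(train_data)
print(classify("claim your free prize", counts, totals))  # → spam
```

As the abstract notes, the value of this approach is that each user's filter is maintained automatically: retraining on newly rejected messages updates the rules without any hand-written logic.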

13,246 citations

01 Dec 1982
TL;DR: In this article, it was shown that any black hole will create and emit particles such as neutrinos or photons at just the rate one would expect if the black hole were a body with a temperature of ħκ/2πk ≈ 10⁻⁶ (M⊙/M) K, where κ is the surface gravity of the black hole.
Abstract: QUANTUM gravitational effects are usually ignored in calculations of the formation and evolution of black holes. The justification for this is that the radius of curvature of space-time outside the event horizon is very large compared to the Planck length (Għ/c³)^(1/2) ≈ 10⁻³³ cm, the length scale on which quantum fluctuations of the metric are expected to be of order unity. This means that the energy density of particles created by the gravitational field is small compared to the space-time curvature. Even though quantum effects may be small locally, they may still, however, add up to produce a significant effect over the lifetime of the Universe ≈ 10¹⁷ s, which is very long compared to the Planck time ≈ 10⁻⁴³ s. The purpose of this letter is to show that this indeed may be the case: it seems that any black hole will create and emit particles such as neutrinos or photons at just the rate that one would expect if the black hole was a body with a temperature of ħκ/2πk ≈ 10⁻⁶ (M⊙/M) K, where κ is the surface gravity of the black hole. As a black hole emits this thermal radiation one would expect it to lose mass. This in turn would increase the surface gravity and so increase the rate of emission. The black hole would therefore have a finite life of the order of 10⁷¹ (M⊙/M)⁻³ s. For a black hole of solar mass this is much longer than the age of the Universe. There might, however, be much smaller black holes which were formed by fluctuations in the early Universe. Any such black hole of mass less than 10¹⁵ g would have evaporated by now. Near the end of its life the rate of emission would be very high and about 10³⁰ erg would be released in the last 0.1 s. This is a fairly small explosion by astronomical standards but it is equivalent to about 1 million 1 Mton hydrogen bombs. It is often said that nothing can escape from a black hole.
But in 1974, Stephen Hawking realized that, owing to quantum effects, black holes should emit particles with a thermal distribution of energies — as if the black hole had a temperature inversely proportional to its mass. In addition to putting black-hole thermodynamics on a firmer footing, this discovery led Hawking to postulate 'black hole explosions', as primordial black holes end their lives in an accelerating release of energy.
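The 1/M temperature and M³ lifetime scalings quoted above can be checked numerically. The sketch below uses the standard modern formulas T = ħc³/8πGMk and τ ≈ 5120πG²M³/ħc⁴ (a photons-only estimate whose prefactors differ somewhat from the rough order-of-magnitude coefficients in the 1974 letter), so the absolute numbers are illustrative rather than the letter's own:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.380649e-23      # Boltzmann constant, J/K
M_sun = 1.98892e30       # solar mass, kg
t_universe = 4.35e17     # age of the Universe, s (~13.8 Gyr)

def hawking_temperature(M):
    """Hawking temperature T = hbar c^3 / (8 pi G M k_B); scales as 1/M."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    """Photons-only evaporation time tau = 5120 pi G^2 M^3 / (hbar c^4);
    scales as M^3, as in the letter's lifetime estimate."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

# A solar-mass hole is extremely cold and outlives the Universe by many
# orders of magnitude, consistent with the letter's conclusion.
T_sun = hawking_temperature(M_sun)
tau_sun = evaporation_time(M_sun)

# Mass of a primordial hole that evaporates within one age of the Universe:
# invert tau(M) = t_universe -> M = (t_universe hbar c^4 / 5120 pi G^2)^(1/3).
M_evap_g = (t_universe * hbar * c**4 / (5120 * math.pi * G**2)) ** (1 / 3) * 1e3

print(f"T(M_sun)   = {T_sun:.2e} K")
print(f"tau(M_sun) = {tau_sun:.2e} s")
print(f"M_evap     = {M_evap_g:.2e} g")
```

The evaporation-threshold mass comes out in the 10¹⁴-10¹⁵ g range, matching the letter's statement that primordial black holes below about 10¹⁵ g would have evaporated by now.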

2,947 citations

Journal ArticleDOI
TL;DR: A binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors.
Abstract: On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 ± 8 Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M⊙. An extensive observing campaign was launched across the electromagnetic spectrum leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches.
These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993 followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.

2,746 citations

Journal ArticleDOI
Kazunori Akiyama, Antxon Alberdi1, Walter Alef2, Keiichi Asada3  +403 moreInstitutions (82)
TL;DR: In this article, the Event Horizon Telescope was used to reconstruct event-horizon-scale images of the supermassive black hole candidate in the center of the giant elliptical galaxy M87.
Abstract: When surrounded by a transparent emission region, black holes are expected to reveal a dark shadow caused by gravitational light bending and photon capture at the event horizon. To image and study this phenomenon, we have assembled the Event Horizon Telescope, a global very long baseline interferometry array observing at a wavelength of 1.3 mm. This allows us to reconstruct event-horizon-scale images of the supermassive black hole candidate in the center of the giant elliptical galaxy M87. We have resolved the central compact radio source as an asymmetric bright emission ring with a diameter of 42 ± 3 μas, which is circular and encompasses a central depression in brightness with a flux ratio ≳10:1. The emission ring is recovered using different calibration and imaging schemes, with its diameter and width remaining stable over four different observations carried out on different days. Overall, the observed image is consistent with expectations for the shadow of a Kerr black hole as predicted by general relativity. The asymmetry in brightness in the ring can be explained in terms of relativistic beaming of the emission from a plasma rotating close to the speed of light around a black hole. We compare our images to an extensive library of ray-traced general-relativistic magnetohydrodynamic simulations of black holes and derive a central mass of M = (6.5 ± 0.7) × 10⁹ M⊙. Our radio-wave observations thus provide powerful evidence for the presence of supermassive black holes in the centers of galaxies and as the central engines of active galactic nuclei. They also present a new tool to explore gravity in its most extreme limit and on a mass scale that was so far not accessible.

2,589 citations