Author

Peter Nugent

Bio: Peter Nugent is an academic researcher from Lawrence Berkeley National Laboratory. The author has contributed to research in topics: Supernova & Light curve. The author has an h-index of 127 and has co-authored 754 publications receiving 92,988 citations. Previous affiliations of Peter Nugent include Liverpool John Moores University & National Autonomous University of Mexico.
Topics: Supernova, Light curve, Galaxy, Redshift, White dwarf


Papers
Journal Article
TL;DR: In this paper, the authors demonstrate how the approaching era of Extremely Large Telescopes (ELTs) will transform the astrophysical measure of H0 from the limited and few into a fundamentally new regime where (i) multiple, independent techniques are employed with modest use of large aperture facilities and (ii) 1% or better precision is readily attainable.
Abstract: Author(s): Beaton, Rachael L; Birrer, Simon; Dell'Antonio, Ian; Fassnacht, Chris; Goldstein, Danny; Lee, Chien-Hsiu; Nugent, Peter; Pierce, Michael; Shajib, Anowar J; Treu, Tommaso | Abstract: Many of the fundamental physical constants in Physics, as a discipline, are measured to exquisite levels of precision. The fundamental constants that define Cosmology, however, are largely determined via a handful of independent techniques that are applied to even fewer datasets. The history of the measurement of the Hubble Constant (H0), which serves to anchor the expansion history of the Universe to its current value, is an exemplar of the difficulties of cosmological measurement; indeed, as we approach the centennial of its first measurement, the quest for H0 still consumes a great number of resources. In this white paper, we demonstrate how the approaching era of Extremely Large Telescopes (ELTs) will transform the astrophysical measure of H0 from the limited and few into a fundamentally new regime where (i) multiple, independent techniques are employed with modest use of large aperture facilities and (ii) 1% or better precision is readily attainable. This quantum leap in how we approach H0 is due to the unparalleled sensitivity and spatial resolution of ELTs and the ability to use integral field observations for simultaneous spectroscopy and photometry, which together permit both familiar and new techniques to effectively bypass the conventional 'ladder' framework to minimize total uncertainty. Three independent techniques are discussed -- (i) standard candles via a two-step distance ladder applied to metal-poor stellar populations, (ii) standard clocks via gravitational lens cosmography, and (iii) standard sirens via gravitational wave sources -- each of which can reach 1% with relatively modest investment from 30-m class facilities.

1 citation

Posted Content
01 Oct 2009
TL;DR: In this paper, the authors identify a new class of luminous supernovae whose observed properties cannot be explained by any previously known process. These SNe are all ~10 times brighter than SNe Ia, do not show any trace of hydrogen, emit significant ultra-violet (UV) flux for extended periods of time, and have late-time decay rates which are inconsistent with radioactivity.
Abstract: Supernovae (SNe) are stellar explosions driven by gravitational or thermonuclear energy, observed as electromagnetic radiation emitted over weeks or more. In all known SNe, this radiation comes from internal energy deposited in the outflowing ejecta by either radioactive decay of freshly-synthesized elements (typically 56Ni), stored heat deposited by the explosion shock in the envelope of a supergiant star, or interaction between the SN debris and slowly-moving, hydrogen-rich circumstellar material. Here we report on a new class of luminous SNe whose observed properties cannot be explained by any of these known processes. These include four new SNe we have discovered, and two previously unexplained events (SN 2005ap; SCP 06F6) that we can now identify as members. These SNe are all ~10 times brighter than SNe Ia, do not show any trace of hydrogen, emit significant ultra-violet (UV) flux for extended periods of time, and have late-time decay rates which are inconsistent with radioactivity. Our data require that the observed radiation is emitted by hydrogen-free material distributed over a large radius (~10^15 cm) and expanding at high velocities (>10^4 km s^-1). These long-lived, UV-luminous events can be observed out to redshifts z>4 and offer an excellent opportunity to study star formation in, and the interstellar medium of, primitive distant galaxies.
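As a quick back-of-the-envelope check (our addition, not from the abstract), ejecta moving at the quoted >10^4 km/s take roughly ten days to reach the inferred ~10^15 cm emitting radius, consistent with these events evolving on week-long timescales:

```python
# Time for ejecta at 10^4 km/s to reach a radius of ~10^15 cm.
radius_cm = 1e15
velocity_cm_per_s = 1e4 * 1e5   # 10^4 km/s converted to cm/s
t_seconds = radius_cm / velocity_cm_per_s
t_days = t_seconds / 86400      # seconds per day
print(f"{t_days:.1f} days")     # → 11.6 days
```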

1 citation

Journal ArticleDOI
TL;DR: In this paper, the authors presented a new optical imaging survey of four deep drilling fields (DDFs), two Galactic and two extragalactic, with DECam on the 4-meter Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO).
Abstract: This paper presents a new optical imaging survey of four deep drilling fields (DDFs), two Galactic and two extragalactic, with the Dark Energy Camera (DECam) on the 4-meter Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO). During the first year of observations in 2021, >4000 images covering 21 square degrees (7 DECam pointings), with ∼40 epochs (nights) per field and 5 to 6 images per night per filter in g, r, i, and/or z, have become publicly available (the proprietary period for this program is waived). We describe the real-time difference-image pipeline and how alerts are distributed to brokers via the same distribution system as the Zwicky Transient Facility (ZTF). In this paper, we focus on the two extragalactic deep fields (COSMOS and ELAIS-S1), characterizing the detected sources and demonstrating that the survey design is effective for probing the discovery space of faint and fast variable and transient sources. We describe and make publicly available 4413 calibrated light curves based on difference-image detection photometry of transients and variables in the extragalactic fields. We also present preliminary scientific analysis regarding Solar System small bodies, stellar flares and variables, Galactic anomaly detection, fast-rising transients and variables, supernovae, and active galactic nuclei.
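The difference-imaging idea the pipeline relies on can be sketched in a few lines (a toy illustration under our own assumptions; a real pipeline such as the one described above also aligns, PSF-matches, and flux-calibrates the image pair before subtracting):

```python
# Toy difference-image detection: subtract a template from a new science image
# and flag pixels that brightened significantly (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
template = rng.normal(100.0, 3.0, size=(64, 64))          # reference image
science = template + rng.normal(0.0, 3.0, size=(64, 64))  # new epoch, same sky
science[20, 30] += 60.0                                   # inject a fake transient

diff = science - template                  # difference image: statics cancel
sigma = np.std(diff)                       # crude noise estimate
candidates = np.argwhere(diff > 5 * sigma) # 5-sigma detection threshold
print(candidates)                          # expected to recover pixel (20, 30)
```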

1 citation

Journal ArticleDOI
M. Ackermann, Iair Arcavi1, Iair Arcavi2, Luca Baldini3, Jean Ballet4, Guido Barbiellini5, Guido Barbiellini3, Denis Bastieri3, Denis Bastieri6, Ronaldo Bellazzini3, Elisabetta Bissaldi3, Roger Blandford7, R. Bonino3, R. Bonino8, Eugenio Bottacini7, T. J. Brandt9, Johan Bregeon10, Pascal Bruel11, R. Buehler, S. Buson3, S. Buson6, G. A. Caliandro7, R. A. Cameron7, M. Caragiulo3, P. A. Caraveo12, E. Cavazzuti13, C. Cecchi14, C. Cecchi3, Eric Charles7, A. Chekhtman15, J. Chiang7, G. Chiaro6, Stefano Ciprini13, Stefano Ciprini3, Stefano Ciprini12, R. Claus7, Johann Cohen-Tanugi10, S. Cutini13, S. Cutini12, S. Cutini3, Filippo D'Ammando12, Filippo D'Ammando16, A. De Angelis3, F. de Palma3, R. Desiante17, R. Desiante3, L. Di Venere18, Persis S. Drell7, C. Favuzzi3, C. Favuzzi18, S. J. Fegan11, A. Franckowiak7, Stefan Funk7, P. Fusco18, P. Fusco3, Avishay Gal-Yam19, F. Gargano3, Dario Gasparrini3, Dario Gasparrini12, Dario Gasparrini13, Nicola Giglietto18, Nicola Giglietto3, Francesco Giordano3, Francesco Giordano18, Marcello Giroletti12, T. Glanzman7, G. Godfrey7, I. A. Grenier4, J. E. Grove15, Sylvain Guiriec9, Alice K. Harding9, Katsuhiro Hayashi20, John W. Hewitt9, John W. Hewitt21, A. B. Hill7, A. B. Hill22, D. Horan11, T. Jogler7, Gudlaugur Johannesson23, Daniel Kocevski9, M. Kuss3, Stefan Larsson24, J. Lashner25, L. Latronico3, J. Li26, Liang Li24, Liang Li27, Francesco Longo3, Francesco Longo5, F. Loparco3, F. Loparco18, M. N. Lovellette15, P. Lubrano14, P. Lubrano3, D. Malyshev7, Michael Mayer, M. N. Mazziotta3, Julie McEnery28, Julie McEnery9, Peter F. Michelson7, Tsunefumi Mizuno29, M. E. Monzani7, A. Morselli3, Kohta Murase30, Kohta Murase31, Peter Nugent32, Peter Nugent33, Eric Nuss10, Eran O. Ofek19, T. Ohsugi29, M. Orienti12, Elena Orlando7, J. F. Ormes34, D. Paneque7, D. Paneque35, Melissa Pesce-Rollins3, Frederic Piron10, G. Pivato3, S. Rainò18, S. Rainò3, Riccardo Rando3, Riccardo Rando6, M. Razzano3, A. Reimer7, Olaf Reimer7, A. Schulz, Carmelo Sgrò3, E. J. Siskind, F. Spada3, Gloria Spandre3, P. Spinelli18, P. Spinelli3, D. J. Suson36, Hiromitsu Takahashi29, J. B. Thayer7, L. Tibaldo7, Diego F. Torres26, Diego F. Torres37, Eleonora Troja28, Eleonora Troja9, Giacomo Vianello7, Michael Werner, K. S. Wood15, Matthew Wood7
TL;DR: In this paper, the authors performed a systematic search for gamma-ray emission in Fermi LAT data in the energy range from 100 MeV to 300 GeV from the ensemble of 147 SNe Type IIn exploding in dense CSM.
Abstract: Supernovae (SNe) exploding in a dense circumstellar medium (CSM) are hypothesized to accelerate cosmic rays in collisionless shocks and emit GeV gamma rays and TeV neutrinos on a time scale of several months. We perform the first systematic search for gamma-ray emission in Fermi LAT data in the energy range from 100 MeV to 300 GeV from the ensemble of 147 SNe Type IIn exploding in dense CSM. We search for a gamma-ray excess at each SN location in a one-year time window. In order to enhance a possible weak signal, we simultaneously study the closest and optically brightest sources of our sample in a joint-likelihood analysis in three different time windows (1 year, 6 months and 3 months). For the most promising source of the sample, SN 2010jl (PTF10aaxf), we repeat the analysis with an extended time window lasting 4.5 years. We do not find a significant excess in gamma rays for any individual source nor for the combined sources and provide model-independent flux upper limits for both cases. In addition, we derive limits on the gamma-ray luminosity and the gamma-ray-to-optical luminosity ratio as a function of the index of the proton injection spectrum assuming a generic gamma-ray production model. Furthermore, we present detailed flux predictions based on multi-wavelength observations and the corresponding flux upper limit at 95% confidence level (CL) for the source SN 2010jl (PTF10aaxf).

1 citation


Cited by
Journal ArticleDOI
TL;DR: In this paper, the mass density, Omega_M, and cosmological-constant energy density, Omega_Lambda, of the universe were measured based on the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project.
Abstract: We report measurements of the mass density, Omega_M, and cosmological-constant energy density, Omega_Lambda, of the universe based on the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project. The magnitude-redshift data for these SNe, at redshifts between 0.18 and 0.83, are fit jointly with a set of SNe from the Calan/Tololo Supernova Survey, at redshifts below 0.1, to yield values for the cosmological parameters. All SN peak magnitudes are standardized using a SN Ia lightcurve width-luminosity relation. The measurement yields a joint probability distribution of the cosmological parameters that is approximated by the relation 0.8 Omega_M - 0.6 Omega_Lambda ~= -0.2 +/- 0.1 in the region of interest (Omega_M <~ 1.5). For a flat (Omega_M + Omega_Lambda = 1) cosmology we find Omega_M = 0.28{+0.09,-0.08} (1 sigma statistical) {+0.05,-0.04} (identified systematics). The data are strongly inconsistent with a Lambda = 0 flat cosmology, the simplest inflationary universe model. An open, Lambda = 0 cosmology also does not fit the data well: the data indicate that the cosmological constant is non-zero and positive, with a confidence of P(Lambda > 0) = 99%, including the identified systematic uncertainties. The best-fit age of the universe relative to the Hubble time is t_0 = 14.9{+1.4,-1.1} (0.63/h) Gyr for a flat cosmology. The size of our sample allows us to perform a variety of statistical tests to check for possible systematic errors and biases. We find no significant differences in either the host reddening distribution or Malmquist bias between the low-redshift Calan/Tololo sample and our high-redshift sample. The conclusions are robust whether or not a width-luminosity relation is used to standardize the SN peak magnitudes.
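As a quick arithmetic check (our addition, not part of the paper's analysis), substituting the flatness condition Omega_M + Omega_Lambda = 1 into the quoted joint constraint 0.8 Omega_M - 0.6 Omega_Lambda ≈ -0.2 reproduces the reported best-fit value:

```python
# Solve 0.8*x - 0.6*(1 - x) = -0.2 for x = Omega_M (flat cosmology):
# 0.8*x - 0.6 + 0.6*x = -0.2  =>  1.4*x = 0.4
omega_m = 0.4 / 1.4
omega_lambda = 1.0 - omega_m
print(f"Omega_M = {omega_m:.3f}, Omega_Lambda = {omega_lambda:.3f}")
# → Omega_M = 0.286, Omega_Lambda = 0.714, consistent with the quoted
#   Omega_M = 0.28 (+0.09/-0.08)
```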

16,838 citations

Journal ArticleDOI
TL;DR: In this article, the authors used spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-z Supernova Search Team and recent results by Riess et al., this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (Omega_M), the cosmological constant (i.e., the vacuum energy density, Omega_Lambda), the deceleration parameter (q0), and the dynamical age of the universe (t0). We estimate the dynamical age of the universe to be 14.2 ± 1.7 Gyr, including systematic uncertainties in the current Cepheid distance scale. We estimate the likely effect of several sources of systematic error, including progenitor and metallicity evolution, extinction, sample selection bias, local perturbations in the expansion rate, gravitational lensing, and sample contamination. Presently, none of these effects appear to reconcile the data with Omega_Lambda = 0 and q0 ≥ 0.

16,674 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
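The mail-filtering example above can be made concrete with a toy learner (entirely our own illustrative sketch; the function names and the averaging score rule are invented, not from the article): it estimates, from labeled examples, how often each word appears in messages the user rejected, and flags new mail whose known words score high on average.

```python
# Toy per-user mail filter learned from labeled examples (illustrative only).
from collections import Counter

def train(messages):
    """messages: list of (text, was_rejected) pairs -> per-word reject rates."""
    seen, rejected = Counter(), Counter()
    for text, was_rejected in messages:
        for word in set(text.lower().split()):
            seen[word] += 1
            if was_rejected:
                rejected[word] += 1
    return {w: rejected[w] / seen[w] for w in seen}

def predict(rates, text, threshold=0.5):
    """Flag a message if its known words are, on average, 'rejected' words."""
    words = [w for w in set(text.lower().split()) if w in rates]
    if not words:
        return False  # no evidence: keep the message
    return sum(rates[w] for w in words) / len(words) > threshold

examples = [
    ("win a free prize now", True),
    ("free offer claim prize", True),
    ("meeting notes attached", False),
    ("lunch tomorrow at noon", False),
]
rates = train(examples)
print(predict(rates, "claim your free prize"))  # → True
print(predict(rates, "meeting at noon"))        # → False
```

As the article notes, the appeal of this setup is that the rules are maintained automatically: retraining on newly labeled mail updates the filter without any hand-written rules.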

13,246 citations

Journal ArticleDOI
TL;DR: In this article, a combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions.
Abstract: The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is ns = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison–Zel’dovich–Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, Σmν < 0.58 eV (95% CL), and the effective number of neutrino species, Neff = 4.34 (+0.86/−0.88) (68% CL), which benefit from better determinations of the third peak and H0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H0, without high-redshift Type Ia supernovae, is w = −1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Yp = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations.
The seven-year polarization data have significantly improved: we now detect the temperature–E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature–B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = −1.1 ± 1.4 (statistical) ± 1.5 (systematic) (68% CL). We report significant detections of the Sunyaev–Zel’dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5–0.7 times the predictions from the “universal profile” of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.

11,309 citations

01 Jan 1998
TL;DR: The spectral and photometric observations of 10 type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62 were presented in this paper.
Abstract: We present spectral and photometric observations of 10 type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-Z Supernova Search Team (Garnavich et al. 1998; Schmidt et al. 1998) and Riess et al. (1998a), this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (Omega_M), the cosmological constant (i.e., the vacuum energy density, Omega_Lambda), the deceleration parameter (q0), and the dynamical age of the Universe (t0). The distances of the high-redshift SNe Ia are, on average, 10% to 15% farther than expected in a low mass density (Omega_M = 0.2) Universe without a cosmological constant. Different light curve fitting methods, SN Ia subsamples, and prior constraints unanimously favor eternally expanding models with positive cosmological constant (i.e., Omega_Lambda > 0) and a current acceleration of the expansion (i.e., q0 < 0). With no prior constraint on mass density other than Omega_M ≥ 0, the spectroscopically confirmed SNe Ia are statistically consistent with q0 < 0 at the 2.8σ confidence level.

11,197 citations