Author

Peter Nugent

Bio: Peter Nugent is an academic researcher at Lawrence Berkeley National Laboratory. His research focuses on supernovae and light curves. He has an h-index of 127 and has co-authored 754 publications receiving 92,988 citations. Previous affiliations of Peter Nugent include Liverpool John Moores University and the National Autonomous University of Mexico.
Topics: Supernova, Light curve, Galaxy, Redshift, White dwarf


Papers
Posted Content
TL;DR: In this article, the authors demonstrate how the approaching era of Extremely Large Telescopes (ELTs) will transform the astrophysical measure of H0 from the limited and few into a fundamentally new regime where (i) multiple, independent techniques are employed with modest use of large aperture facilities and (ii) 1% or better precision is readily attainable.
Abstract: Many of the fundamental physical constants in Physics, as a discipline, are measured to exquisite levels of precision. The fundamental constants that define Cosmology, however, are largely determined via a handful of independent techniques that are applied to even fewer datasets. The history of the measurement of the Hubble Constant (H0), which serves to anchor the expansion history of the Universe to its current value, is an exemplar of the difficulties of cosmological measurement; indeed, as we approach the centennial of its first measurement, the quest for H0 still consumes a great number of resources. In this white paper, we demonstrate how the approaching era of Extremely Large Telescopes (ELTs) will transform the astrophysical measure of H0 from the limited and few into a fundamentally new regime where (i) multiple, independent techniques are employed with modest use of large aperture facilities and (ii) 1% or better precision is readily attainable. This quantum leap in how we approach H0 is due to the unparalleled sensitivity and spatial resolution of ELTs and the ability to use integral field observations for simultaneous spectroscopy and photometry, which together permit both familiar and new techniques to effectively bypass the conventional 'ladder' framework to minimize total uncertainty. Three independent techniques are discussed -- (i) standard candles via a two-step distance ladder applied to metal-poor stellar populations, (ii) standard clocks via gravitational lens cosmography, and (iii) standard sirens via gravitational wave sources -- each of which can reach 1% with relatively modest investment from 30-m class facilities.
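A rough illustration of why technique independence matters, assuming (our assumption, not a claim from the white paper) that the three routes each reach ~1% with uncorrelated errors; inverse-variance combination then gives

\[ \sigma_{H_0} \approx \frac{1\%}{\sqrt{3}} \approx 0.6\%, \]

so the combined constraint is tighter than any single route, and disagreement between routes exposes systematics that a single-method measurement would hide.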

8 citations

Posted Content
TL;DR: Perlmutter et al. as discussed by the authors presented evidence for a low-mass-density/positive-cosmological-constant universe that will expand forever, based on observations of a set of 40 high-redshift supernovae.
Abstract: This presentation reports on first evidence for a low-mass-density/positive-cosmological-constant universe that will expand forever, based on observations of a set of 40 high-redshift supernovae. The experimental strategy, data sets, and analysis techniques are described. More extensive analyses of these results with some additional methods and data are presented in the more recent LBNL report #41801 (Perlmutter et al., 1998; accepted for publication in Ap.J.), astro-ph/9812133. This Lawrence Berkeley National Laboratory reprint is a reduction of a poster presentation from the Cosmology Display Session #85 on 9 January 1998 at the American Astronomical Society meeting in Washington, D.C. It is also available on the World Wide Web at this http URL. This work has also been referenced in the literature by the pre-meeting abstract citation: Perlmutter et al., B.A.A.S., volume 29, page 1351 (1997).

8 citations

Proceedings ArticleDOI
TL;DR: The Supernova / Acceleration Probe (SNAP) is a proposed space-borne observatory that will survey the sky with a wide-field optical/near-infrared (NIR) imager.
Abstract: The Supernova / Acceleration Probe (SNAP) is a proposed space-borne observatory that will survey the sky with a wide-field optical/near-infrared (NIR) imager. The images produced by SNAP will have an unprecedented combination of depth, solid-angle, angular resolution, and temporal sampling. For 16 months each, two 7.5 square-degree fields will be observed every four days to a magnitude depth of AB = 27.7 in each of the SNAP filters, spanning 3500-17000 Å. Co-adding images over all epochs will give AB = 30.3 per filter. In addition, a 300 square-degree field will be surveyed to AB = 28 per filter, with no repeated temporal sampling. Although the survey strategy is tailored for supernova and weak gravitational lensing observations, the resulting data will support a broad range of auxiliary science programs.
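The two quoted depths are mutually consistent under simple square-root-of-N co-addition. Assuming roughly 120 visits per field (16 months at a 4-day cadence, our back-of-the-envelope figure), the co-added depth gain is

\[ \Delta m = 2.5 \log_{10}\sqrt{N} = 1.25 \log_{10} 120 \approx 2.6\ \text{mag}, \]

which takes the per-epoch AB = 27.7 to the co-added AB ≈ 30.3.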

8 citations

Book ChapterDOI
24 Mar 2014
TL;DR: This paper evaluates a novel implementation of the classifier in GLADE, a parallel data processing system that combines the efficiency of a database with the extensibility of Map-Reduce, and shows how each stage in the classifier maps optimally into GLADE tasks by taking advantage of the unique features of the system.
Abstract: The Palomar Transient Factory is a comprehensive detection system for the identification and classification of transient astrophysical objects. The central piece in the identification pipeline is an automated classifier that distinguishes between real and bogus objects with high accuracy. Given that the classifier has to identify the most significant transients out of a large number of candidates in near real-time, its response time is of critical importance. In this paper, we present an experimental study that evaluates a novel implementation of the classifier in GLADE, a parallel data processing system that combines the efficiency of a database with the extensibility of Map-Reduce. We show how each stage in the classifier -- candidate identification, pruning, and contextual real-bogus -- maps optimally into GLADE tasks by taking advantage of the unique features of the system: range-based data partitioning, columnar storage, multi-query execution, and in-database support for complex aggregate computation. The result is an efficient classifier implementation capable of processing a new set of acquired images in a matter of minutes, even on a low-end server. For comparison, an optimized PostgreSQL implementation of the classifier takes hours on the same machine.
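The stage structure is easy to picture as a partitioned pipeline. The sketch below is a toy Python rendering of the three stages under stated assumptions: GLADE's actual task API is not shown, and the feature names, cuts, and scoring rule are invented for illustration; only the shape of the computation (range-partition, prune, score in parallel) follows the paper's description.

```python
from dataclasses import dataclass
from multiprocessing import Pool

@dataclass
class Candidate:
    cand_id: int
    snr: float     # hypothetical detection signal-to-noise feature
    fwhm: float    # hypothetical image-shape feature
    score: float = 0.0

def prune(c):
    # Pruning stage: cheap cuts discard obviously bogus detections early.
    return c.snr > 5.0 and c.fwhm < 10.0

def real_bogus(c):
    # Real-bogus stage: toy stand-in for the learned classifier's score.
    c.score = min(1.0, c.snr / 50.0)
    return c

def process_partition(part):
    # Each partition is pruned and scored independently, in parallel.
    return [real_bogus(c) for c in part if prune(c)]

if __name__ == "__main__":
    # Candidate-identification stage, mocked up as a list of detections.
    cands = [Candidate(i, snr=4.0 + (i % 60), fwhm=2.5) for i in range(1000)]
    # Partition by candidate-id range, mirroring range-based data partitioning.
    chunk = len(cands) // 4
    parts = [cands[i * chunk:(i + 1) * chunk] for i in range(4)]
    with Pool(4) as pool:
        scored = [c for part in pool.map(process_partition, parts) for c in part]
    print(f"{len(scored)} candidates passed pruning and were scored")
```

Because the partitions share nothing, the same structure scales from a multicore server to a cluster, which is the kind of property the GLADE implementation exploits.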

8 citations

Journal ArticleDOI
TL;DR: In this paper, the authors presented high-quality light curves of 127 SNe Ia discovered by the Zwicky Transient Facility (ZTF) in 2018, which can be used to study the shape and color evolution of the rising light curves in unprecedented detail.
Abstract: Early-time observations of Type Ia supernovae (SNe Ia) are essential to constrain their progenitor properties. In this paper, we present high-quality light curves of 127 SNe Ia discovered by the Zwicky Transient Facility (ZTF) in 2018. We describe our method to perform forced point spread function (PSF) photometry, which can be applied to other types of extragalactic transients. With a planned cadence of six observations per night (3g + 3r), all of the 127 SNe Ia are detected in both the g and r bands more than 10 d (in the rest frame) prior to the epoch of g-band maximum light. The redshifts of these objects range from z = 0.0181 to 0.165; the median redshift is 0.074. Among the 127 SNe, 50 are detected at least 14 d prior to maximum light (in the rest frame), with a subset of 9 objects being detected more than 17 d before g-band peak. This is the largest sample of young SNe Ia collected to date; it can be used to study the shape and color evolution of the rising light curves in unprecedented detail. We discuss six peculiar events in this sample, including one 02cx-like event ZTF18abclfee (SN 2018crl), one Ia-CSM SN ZTF18aaykjei (SN 2018cxk), and four objects with possible super-Chandrasekhar mass progenitors: ZTF18abhpgje (SN 2018eul), ZTF18abdpvnd (SN 2018dvf), ZTF18aawpcel (SN 2018cir) and ZTF18abddmrf (SN 2018dsx).
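Forced PSF photometry reduces to a linear problem once the source position is held fixed: only the flux scaling of the PSF model is free. Below is a minimal NumPy sketch of that idea, assuming uniform pixel noise; it illustrates the technique, not the ZTF pipeline code, and every name in it is invented.

```python
import numpy as np

def forced_psf_flux(cutout, psf):
    """Least-squares flux of a PSF model at a fixed position.

    Fits flux * psf to a background-subtracted cutout, assuming
    uniform per-pixel noise, and returns (flux, flux_err).
    """
    p = psf / psf.sum()                          # unit-flux PSF model
    flux = np.sum(p * cutout) / np.sum(p * p)    # linear least-squares amplitude
    resid = cutout - flux * p
    flux_err = resid.std() / np.sqrt(np.sum(p * p))
    return flux, flux_err

# Toy usage: a Gaussian PSF and a noisy cutout containing a faint source.
rng = np.random.default_rng(0)
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
cutout = 120.0 * psf / psf.sum() + rng.normal(0.0, 1.0, psf.shape)

flux, err = forced_psf_flux(cutout, psf)
print(f"flux = {flux:.1f} +/- {err:.1f}")  # recovers ~120 at high significance
```

Because the position is never refit, the same estimator returns a meaningful flux (or upper limit) even on pre-explosion epochs where the source is absent, which is what makes forced photometry well suited to early-time light curves.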

8 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, the mass density, Omega_M, and the cosmological-constant energy density, Omega_Lambda, of the universe were measured using the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project.
Abstract: We report measurements of the mass density, Omega_M, and cosmological-constant energy density, Omega_Lambda, of the universe based on the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project. The magnitude-redshift data for these SNe, at redshifts between 0.18 and 0.83, are fit jointly with a set of SNe from the Calan/Tololo Supernova Survey, at redshifts below 0.1, to yield values for the cosmological parameters. All SN peak magnitudes are standardized using a SN Ia lightcurve width-luminosity relation. The measurement yields a joint probability distribution of the cosmological parameters that is approximated by the relation 0.8 Omega_M - 0.6 Omega_Lambda ~= -0.2 +/- 0.1 in the region of interest (Omega_M <~ 1.5). For a flat (Omega_M + Omega_Lambda = 1) cosmology we find Omega_M = 0.28{+0.09,-0.08} (1 sigma statistical) {+0.05,-0.04} (identified systematics). The data are strongly inconsistent with a Lambda = 0 flat cosmology, the simplest inflationary universe model. An open, Lambda = 0 cosmology also does not fit the data well: the data indicate that the cosmological constant is non-zero and positive, with a confidence of P(Lambda > 0) = 99%, including the identified systematic uncertainties. The best-fit age of the universe relative to the Hubble time is t_0 = 14.9{+1.4,-1.1} (0.63/h) Gyr for a flat cosmology. The size of our sample allows us to perform a variety of statistical tests to check for possible systematic errors and biases. We find no significant differences in either the host reddening distribution or Malmquist bias between the low-redshift Calan/Tololo sample and our high-redshift sample. The conclusions are robust whether or not a width-luminosity relation is used to standardize the SN peak magnitudes.
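The quoted flat-universe value follows directly from the joint constraint; as a back-of-the-envelope consistency check (ours, not a refit of the data), substitute Omega_Lambda = 1 - Omega_M:

\[ 0.8\,\Omega_M - 0.6\,(1 - \Omega_M) = -0.2 \pm 0.1 \;\Rightarrow\; 1.4\,\Omega_M = 0.4 \pm 0.1 \;\Rightarrow\; \Omega_M \approx 0.29 \pm 0.07, \]

in good agreement with the reported Omega_M = 0.28 {+0.09, -0.08}.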

16,838 citations

Journal ArticleDOI
TL;DR: In this article, the authors used spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-z Supernova Search Team and recent results by Riess et al., this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (Omega_M), the cosmological constant (i.e., the vacuum energy density, Omega_Lambda), the deceleration parameter (q0), and the dynamical age of the universe (t0). We estimate the dynamical age of the universe to be 14.2 ± 1.7 Gyr, including systematic uncertainties in the current Cepheid distance scale. We estimate the likely effect of several sources of systematic error, including progenitor and metallicity evolution, extinction, sample selection bias, local perturbations in the expansion rate, gravitational lensing, and sample contamination. Presently, none of these effects appear to reconcile the data with Omega_Lambda = 0 and q0 ≥ 0.

16,674 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
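The mail-filtering example can be made concrete in a few lines. The sketch below is a toy illustration of the idea, with an invented four-message dataset; a real filter would need far more data and careful evaluation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The user's past accept/reject decisions serve as training labels.
messages = [
    "meeting moved to 3pm, see agenda attached",
    "win a free cruise, click now",
    "quarterly report draft for your review",
    "limited offer, claim your prize today",
]
labels = ["keep", "reject", "keep", "reject"]

# Bag-of-words features + naive Bayes stand in for the learned filtering rules.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, labels)

print(clf.predict(["claim your free prize now"]))     # expected: ['reject']
print(clf.predict(["draft agenda for the meeting"]))  # expected: ['keep']
```

As the user keeps accepting or rejecting mail, refitting on the growing labeled set is exactly the "constantly modifying and tuning" behavior the abstract describes.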

13,246 citations

Journal ArticleDOI
TL;DR: In this article, a combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions.
Abstract: The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is ns = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison–Zel’dovich–Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, Σmν < 0.58 eV (95% CL), and the effective number of neutrino species, Neff = 4.34 +0.86/−0.88 (68% CL), which benefit from better determinations of the third peak and H0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H0, without high-redshift Type Ia supernovae, is w = −1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Yp = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations. The seven-year polarization data have significantly improved: we now detect the temperature–E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature–B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = −1°.1 ± 1°.4 (statistical) ± 1°.5 (systematic) (68% CL). We report significant detections of the Sunyaev–Zel’dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5–0.7 times the predictions from the “universal profile” of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically-expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.

11,309 citations

01 Jan 1998
TL;DR: The spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62 were presented in this paper.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-Z Supernova Search Team (Garnavich et al. 1998; Schmidt et al. 1998) and Riess et al. (1998a), this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (Omega_M), the cosmological constant (i.e., the vacuum energy density, Omega_Lambda), the deceleration parameter (q0), and the dynamical age of the Universe (t0). The distances of the high-redshift SNe Ia are, on average, 10% to 15% farther than expected in a low mass density (Omega_M = 0.2) Universe without a cosmological constant. Different light curve fitting methods, SN Ia subsamples, and prior constraints unanimously favor eternally expanding models with positive cosmological constant (i.e., Omega_Lambda > 0) and a current acceleration of the expansion (i.e., q0 < 0). With no prior constraint on mass density other than Omega_M ≥ 0, the spectroscopically confirmed SNe Ia are statistically consistent with q0 < 0 at the 2.8σ and 3.9σ confidence levels for two different fitting methods.
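For scale, the 10% to 15% distance excess maps onto the distance modulus through a standard conversion (our arithmetic, no new data):

\[ \Delta\mu = 5\log_{10}(d/d_{\text{expected}}) = 5\log_{10}(1.10) \approx 0.21\ \text{mag} \quad\text{to}\quad 5\log_{10}(1.15) \approx 0.30\ \text{mag}, \]

i.e. the high-redshift SNe Ia appear roughly 0.2-0.3 mag fainter than the Omega_M = 0.2, Lambda = 0 expectation.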

11,197 citations