Author
Peter Nugent
Other affiliations: Liverpool John Moores University, National Autonomous University of Mexico, California Institute of Technology
Bio: Peter Nugent is an academic researcher from Lawrence Berkeley National Laboratory. The author has contributed to research in topics: Supernova & Light curve. The author has an h-index of 127 and has co-authored 754 publications receiving 92,988 citations. Previous affiliations of Peter Nugent include Liverpool John Moores University & National Autonomous University of Mexico.
Topics: Supernova, Light curve, Galaxy, Redshift, White dwarf
Papers
Queen's University Belfast1, University of California, Santa Barbara2, Las Cumbres Observatory Global Telescope Network3, University of Cambridge4, University of Southampton5, Weizmann Institute of Science6, Max Planck Society7, European Southern Observatory8, Liverpool John Moores University9, Pierre-and-Marie-Curie University10, Yale University11, Sapienza University of Rome12, Millennium Institute13, Pontifical Catholic University of Chile14, Space Science Institute15, Andrés Bello National University16, Australian National University17, Carnegie Institution for Science18, Aarhus University19, University of Chile20, Institut d'Astrophysique de Paris21, Spanish National Research Council22, Humboldt University of Berlin23, University of Bonn24, University of Würzburg25, University of Turku26, Stockholm University27, University of Copenhagen28, University of Warwick29, University of Oxford30, University of California, Berkeley31, Lawrence Berkeley National Laboratory32, University of Padua33, University of Warsaw34
TL;DR: The first data release (SSDR1) contains flux-calibrated spectra from the survey's first year (April 2012-2013); a total of 221 confirmed supernovae were classified, and calibrated optical spectra and classifications were released publicly within 24 h of the data being taken, as mentioned in this paper.
Abstract: Context. The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. PESSTO classifies transients from publicly available sources and wide-field surveys, and selects science targets for detailed spectroscopic and photometric follow-up. PESSTO runs for nine months of the year, January-April and August-December inclusive, and typically has allocations of 10 nights per month. Aims. We describe the data reduction strategy and data products that are publicly available through the ESO archive as the Spectroscopic Survey data release 1 (SSDR1). Methods. PESSTO uses the New Technology Telescope with the instruments EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5 mag for classification. Science targets are selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. We use standard EFOSC2 set-ups providing spectra with resolutions of 13-18 Å between 3345-9995 Å. A subset of the brighter science targets are selected for SOFI spectroscopy with the blue and red grisms (0.935-2.53 μm and resolutions 23-33 Å) and imaging with broadband JHK_s filters. Results. This first data release (SSDR1) contains flux-calibrated spectra from the first year (April 2012-2013). A total of 221 confirmed supernovae were classified, and we released calibrated optical spectra and classifications publicly within 24 h of the data being taken (via WISeREP). The data in SSDR1 replace those released spectra. They have more reliable and quantifiable flux calibrations, correction for telluric absorption, and are made available in standard ESO Phase 3 formats. We estimate the absolute accuracy of the flux calibrations for EFOSC2 across the whole survey in SSDR1 to be typically ~15%, although a number of spectra will have less reliable absolute flux calibration because of weather and slit losses. Acquisition images for each spectrum are available which, in principle, can allow the user to refine the absolute flux calibration. The standard NIR reduction process does not produce high-accuracy absolute spectrophotometry, but synthetic photometry with accompanying JHK_s imaging can improve this. Whenever possible, reduced SOFI images are provided to allow this. Conclusions. Future data releases will focus on improving the automated flux calibration of the data products. The rapid turnaround between discovery and classification, and access to reliable pipeline-processed data products, has allowed early science papers in the first few months of the survey.
286 citations
TL;DR: In this article, R-band intensity measurements along the light curves of Type Ia supernovae discovered by the Supernova Cosmology Project (SCP) are fitted to templates with a free parameter, the time-axis width factor w = s(1+z).
Abstract: R-band intensity measurements along the light curve of Type Ia supernovae discovered by the Supernova Cosmology Project (SCP) are fitted in brightness to templates with a free parameter, the time-axis width factor w = s(1+z). The data points are then individually aligned in the time-axis, normalized and K-corrected back to the rest frame, after which the nearly 1300 normalized intensity measurements are found to lie on a well-determined common rest-frame B-band curve which we call the ``composite curve''. The same procedure is applied to 18 low-redshift Calan/Tololo SNe with z < 0.11; these nearly 300 B-band photometry points are found to lie on the composite curve equally well. The SCP search technique produces several measurements before maximum light for each supernova. We demonstrate that the linear stretch factor, s, which parameterizes the light-curve timescale, appears independent of z, and applies equally well to the declining and rising parts of the light curve. In fact, the B-band template that best fits this composite curve fits the individual supernova photometry data when stretched by a factor s with chi^2/DoF ~= 1, thus as well as any parameterization can, given the current data sets. The measurement of the date of explosion, however, is model dependent and not tightly constrained by the current data.
We also demonstrate the 1+z light-curve time-axis broadening expected from cosmological expansion. This argues strongly against alternative explanations, such as tired light, for the redshift of distant objects.
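The fitting described above rests on the mapping between observer-frame and rest-frame epochs. A minimal sketch of that mapping follows; the function name and interface are illustrative assumptions, not the SCP pipeline:

```python
def rest_frame_epochs(t_obs, t_max, s, z):
    """Map observer-frame epochs (days) onto the rest-frame template axis.

    The time-axis width factor is w = s * (1 + z): cosmological expansion
    broadens the observed light curve by (1 + z), while the stretch factor
    s absorbs the intrinsic timescale variation among SNe Ia.
    """
    w = s * (1.0 + z)
    return [(t - t_max) / w for t in t_obs]
```

For example, at z = 0.5 with s = 1.0 the width factor is w = 1.5, so an observation 15 days after maximum light lands at rest-frame day 10.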
285 citations
California Institute of Technology1, University of Washington2, Stockholm University3, University of Maryland, College Park4, Auburn University5, University of Wisconsin–Milwaukee6, Goddard Space Flight Center7, National Central University8, University of California, Santa Barbara9, University of Michigan10, Northwestern University11, Adler Planetarium12, Lawrence Berkeley National Laboratory13, University of California, Berkeley14, Weizmann Institute of Science15, Radboud University Nijmegen16, Humboldt University of Berlin17, Macau University of Science and Technology18, Tel Aviv University19, Soka University of America20, Centre national de la recherche scientifique21, Los Alamos National Laboratory22
TL;DR: The Zwicky Transient Facility (ZTF) as mentioned in this paper is a new time-domain survey employing a dedicated camera on the Palomar 48-inch Schmidt telescope with a 47 deg^2 field of view and an 8 second readout time.
Abstract: The Zwicky Transient Facility (ZTF), a public–private enterprise, is a new time-domain survey employing a dedicated camera on the Palomar 48-inch Schmidt telescope with a 47 deg^2 field of view and an 8 second readout time. It is well positioned in the development of time-domain astronomy, offering operations at 10% of the scale and style of the Large Synoptic Survey Telescope (LSST) with a single 1-m class survey telescope. The public surveys will cover the observable northern sky every three nights in g and r filters and the visible Galactic plane every night in g and r. Alerts generated by these surveys are sent in real time to brokers. A consortium of universities that provided funding ("partnership") are undertaking several boutique surveys. The combination of these surveys producing one million alerts per night allows for exploration of transient and variable astrophysical phenomena brighter than r ~ 20.5 on timescales of minutes to years. We describe the primary science objectives driving ZTF, including the physics of supernovae and relativistic explosions, multi-messenger astrophysics, supernova cosmology, active galactic nuclei, tidal disruption events, stellar variability, and solar system objects.
280 citations
TL;DR: This paper reports observations of SN 2005gj, the second confirmed case of a hybrid Type Ia/IIn supernova, whose early spectrum showed a hot continuum with broad and narrow H-alpha emission.
Abstract: We report Nearby Supernova Factory observations of SN 2005gj, the second confirmed case of a "hybrid" Type Ia/IIn supernova. Our early-phase photometry of SN 2005gj shows that the interaction is much stronger than for the prototype, SN 2002ic. Our first spectrum shows a hot continuum with broad and narrow H-alpha emission. Later spectra, spanning over 4 months from outburst, show clear Type Ia features combined with broad and narrow H-gamma, H-beta, H-alpha and He I 5876, 7065 in emission. At higher resolution, P Cygni profiles are apparent. Surprisingly, we also observe an inverted P Cygni profile for [OIII] 5007. We find that the lightcurve and measured velocity of the unshocked circumstellar material imply mass loss as recently as 8 years ago. The early lightcurve is well-described by a flat radial density profile for the circumstellar material. However, our decomposition of the spectra into Type Ia and shock emission components allows for little obscuration of the supernova, suggesting an aspherical or clumpy distribution for the circumstellar material. We suggest that the emission line velocity profiles arise from electron scattering rather than the kinematics of the shock. This is supported by the inferred high densities, and the lack of evidence for evolution in the line widths. Ground- and space-based photometry, and Keck spectroscopy, of the host galaxy are used to ascertain that the host galaxy has low metallicity (Z/Zsun < 0.3, 95% confidence) and that this galaxy is undergoing a significant star formation event that began roughly 200+/-70 Myr ago. We discuss the implications of these observations for progenitor models and cosmology using Type Ia supernovae.
277 citations
University College London1, Rhodes University2, Fermilab3, École Polytechnique4, Ohio State University5, University of Chicago6, Carnegie Institution for Science7, University of Pennsylvania8, Institut d'Astrophysique de Paris9, SLAC National Accelerator Laboratory10, Stanford University11, National Center for Supercomputing Applications12, University of Illinois at Urbana–Champaign13, IFAE14, Spanish National Research Council15, Argonne National Laboratory16, Indian Institute of Technology, Hyderabad17, Ludwig Maximilian University of Munich18, University of Michigan19, Autonomous University of Madrid20, University of Cambridge21, ETH Zurich22, Max Planck Society23, University of Washington24, Santa Cruz Institute for Particle Physics25, California Institute of Technology26, Australian Astronomical Observatory27, University of Edinburgh28, University of São Paulo29, Texas A&M University30, Catalan Institution for Research and Advanced Studies31, University of Toronto32, Lawrence Berkeley National Laboratory33, University of Arizona34, University of Melbourne35, Brookhaven National Laboratory36, University of Southampton37, State University of Campinas38, Oak Ridge National Laboratory39, Institute of Cosmology and Gravitation, University of Portsmouth40
TL;DR: In this article, the authors combine Dark Energy Survey Year 1 clustering and weak lensing data with baryon acoustic oscillations and Big Bang nucleosynthesis experiments to constrain the Hubble constant.
Abstract: We combine Dark Energy Survey Year 1 clustering and weak lensing data with baryon acoustic oscillations and Big Bang nucleosynthesis experiments to constrain the Hubble constant. Assuming a flat ΛCDM model with minimal neutrino mass (∑m_ν = 0.06 eV), we find |$H_0=67.4^{+1.1}_{-1.2}\ \rm {km\,\rm s^{-1}\,\rm Mpc^{-1}}$| (68 per cent CL). This result is completely independent of Hubble constant measurements based on the distance ladder, cosmic microwave background anisotropies (both temperature and polarization), and strong lensing constraints. There are now five data sets that: (a) have no shared observational systematics; and (b) each constrains the Hubble constant with fractional uncertainty at the few-per cent level. We compare these five independent estimates, and find that, as a set, the differences between them are significant at the 2.5σ level (χ^2/dof = 24/11, probability to exceed = 1.1 per cent). Having set the threshold for consistency at 3σ, we combine all five data sets to arrive at |$H_0=69.3^{+0.4}_{-0.6}\ \rm {km\,\mathrm{ s}^{-1}\,\mathrm{ Mpc}^{-1}}$|.
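The paper's combination uses the full likelihoods of the five data sets. As a simplified illustration of why several few-per-cent measurements yield a tighter combined constraint, here is an inverse-variance weighted mean; this is a hypothetical stand-in valid only for independent Gaussian errors, not the paper's actual method:

```python
import math

def combine_measurements(values, sigmas):
    """Inverse-variance weighted mean of independent Gaussian measurements.

    Each measurement is weighted by 1/sigma^2; the combined uncertainty
    shrinks as more independent data sets are added.
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma
```

Two measurements with equal errors simply average, with the combined error reduced by a factor sqrt(2).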
263 citations
Cited by
University of California, Berkeley1, Lawrence Berkeley National Laboratory2, Instituto Superior Técnico3, Pierre-and-Marie-Curie University4, Stockholm University5, European Southern Observatory6, Collège de France7, University of Cambridge8, University of Barcelona9, Yale University10, Space Telescope Science Institute11, European Space Agency12, University of New South Wales13
TL;DR: In this paper, the mass density, Omega_M, and cosmological-constant energy density, Omega_Lambda, of the universe were measured based on the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project.
Abstract: We report measurements of the mass density, Omega_M, and cosmological-constant energy density, Omega_Lambda, of the universe based on the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project. The magnitude-redshift data for these SNe, at redshifts between 0.18 and 0.83, are fit jointly with a set of SNe from the Calan/Tololo Supernova Survey, at redshifts below 0.1, to yield values for the cosmological parameters. All SN peak magnitudes are standardized using a SN Ia lightcurve width-luminosity relation. The measurement yields a joint probability distribution of the cosmological parameters that is approximated by the relation 0.8 Omega_M - 0.6 Omega_Lambda ~= -0.2 +/- 0.1 in the region of interest (Omega_M <~ 1.5). For a flat (Omega_M + Omega_Lambda = 1) cosmology we find Omega_M = 0.28{+0.09,-0.08} (1 sigma statistical) {+0.05,-0.04} (identified systematics). The data are strongly inconsistent with a Lambda = 0 flat cosmology, the simplest inflationary universe model. An open, Lambda = 0 cosmology also does not fit the data well: the data indicate that the cosmological constant is non-zero and positive, with a confidence of P(Lambda > 0) = 99%, including the identified systematic uncertainties. The best-fit age of the universe relative to the Hubble time is t_0 = 14.9{+1.4,-1.1} (0.63/h) Gyr for a flat cosmology. The size of our sample allows us to perform a variety of statistical tests to check for possible systematic errors and biases. We find no significant differences in either the host reddening distribution or Malmquist bias between the low-redshift Calan/Tololo sample and our high-redshift sample. The conclusions are robust whether or not a width-luminosity relation is used to standardize the SN peak magnitudes.
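As a quick consistency check on the quoted numbers: substituting the flat-universe condition into the fitted relation (which is only an approximation to the full likelihood, so exact agreement is not expected) recovers the reported best-fit mass density:

```latex
\begin{aligned}
0.8\,\Omega_M - 0.6\,\Omega_\Lambda &\simeq -0.2, \qquad \Omega_M + \Omega_\Lambda = 1 \\
\Rightarrow\quad 0.8\,\Omega_M - 0.6\,(1 - \Omega_M) &\simeq -0.2 \\
\Rightarrow\quad 1.4\,\Omega_M \simeq 0.4
\quad\Rightarrow\quad \Omega_M &\simeq 0.29,
\end{aligned}
```

consistent with the quoted Omega_M = 0.28 from the full fit.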
16,838 citations
TL;DR: In this article, the authors present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-z Supernova Search Team and recent results by Riess et al., this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H_0), the mass density (Omega_M), the cosmological constant (i.e., the vacuum energy density, Omega_Lambda), the deceleration parameter (q_0), and the dynamical age of the universe (t_0). We estimate the dynamical age of the universe to be 14.2 ± 1.7 Gyr including systematic uncertainties in the current Cepheid distance scale. We estimate the likely effect of several sources of systematic error, including progenitor and metallicity evolution, extinction, sample selection bias, local perturbations in the expansion rate, gravitational lensing, and sample contamination. Presently, none of these effects appear to reconcile the data with Omega_Lambda = 0 and q_0 ≥ 0.
16,674 citations
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
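The mail-filtering example above can be made concrete with a toy word-frequency filter; this is an illustrative sketch with hypothetical names, not a production spam classifier:

```python
from collections import Counter

def train_filter(messages, rejected_flags):
    """Learn word frequencies from mail the user rejected vs. kept."""
    spam_words, ham_words = Counter(), Counter()
    for text, rejected in zip(messages, rejected_flags):
        (spam_words if rejected else ham_words).update(text.lower().split())
    return spam_words, ham_words

def looks_rejected(text, spam_words, ham_words):
    """Flag a message whose words are more common in rejected mail."""
    words = text.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score
```

As the user keeps rejecting messages, retraining updates the counts, which is the "maintain the filtering rules automatically" behavior described above.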
13,246 citations
TL;DR: In this article, a combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions.
Abstract: The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is n_s = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison–Zel'dovich–Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, ∑m_ν < 0.58 eV (95% CL), and the effective number of neutrino species, N_eff = 4.34 (+0.86/−0.88) (68% CL), which benefit from better determinations of the third peak and H0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H0, without high-redshift Type Ia supernovae, is w = −1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Yp = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations. The seven-year polarization data have significantly improved: we now detect the temperature–E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature–B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = −1.1 ± 1.4 (statistical) ± 1.5 (systematic) (68% CL). We report significant detections of the Sunyaev–Zel'dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5–0.7 times the predictions from the "universal profile" of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically-expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.
11,309 citations
01 Jan 1998
TL;DR: The spectral and photometric observations of 10 type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62 were presented in this paper.
Abstract: We present spectral and photometric observations of 10 type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-Z Supernova Search Team (Garnavich et al. 1998; Schmidt et al. 1998) and Riess et al. (1998a), this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (Omega_M), the cosmological constant (i.e., the vacuum energy density, Omega_Lambda), the deceleration parameter (q0), and the dynamical age of the Universe (t0). The distances of the high-redshift SNe Ia are, on average, 10% to 15% farther than expected in a low mass density (Omega_M = 0.2) Universe without a cosmological constant. Different light curve fitting methods, SN Ia subsamples, and prior constraints unanimously favor eternally expanding models with positive cosmological constant (i.e., Omega_Lambda > 0) and a current acceleration of the expansion (i.e., q0 < 0). With no prior constraint on mass density other than Omega_M ≥ 0, the spectroscopically confirmed SNe Ia are statistically consistent with q0 < 0 at the 2.8σ confidence level.
11,197 citations