scispace - formally typeset
Author

Peter Nugent

Bio: Peter Nugent is an academic researcher from Lawrence Berkeley National Laboratory. The author has contributed to research in topics: Supernova & Light curve. The author has an h-index of 127 and has co-authored 754 publications receiving 92,988 citations. Previous affiliations of Peter Nugent include Liverpool John Moores University & National Autonomous University of Mexico.
Topics: Supernova, Light curve, Galaxy, Redshift, White dwarf


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors use non-local thermal equilibrium radiative transport modeling to examine observational signatures of sub-Chandrasekhar mass double detonation explosions in the nebular phase.
Abstract: Author(s): Polin, A; Nugent, P; Kasen, D. We use non-local thermal equilibrium radiative transport modeling to examine observational signatures of sub-Chandrasekhar mass double detonation explosions in the nebular phase. Results range from spectra that look like typical and subluminous Type Ia supernovae (SNe) for higher mass progenitors to spectra that look like Ca-rich transients for lower mass progenitors. This ignition mechanism produces an inherent relationship between emission features and the progenitor mass, as the ratio of the nebular [Ca II]/[Fe III] emission lines increases with decreasing white dwarf mass. Examining the [Ca II]/[Fe III] nebular line ratio in a sample of observed SNe, we find further evidence for the two distinct classes of SNe Ia identified in Polin et al. by their relationship between Si II velocity and B-band magnitude, both at time of peak brightness. This suggests that SNe Ia arise from more than one progenitor channel, and provides an empirical method for classifying events based on their physical origin. Furthermore, we provide insight into the mysterious origin of Ca-rich transients: low-mass double detonation models with only a small mass fraction of Ca (1%) produce nebular spectra that cool primarily through forbidden [Ca II] emission.
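The [Ca II]/[Fe III] diagnostic described above reduces to a flux ratio between two nebular emission features. A minimal sketch of that measurement on a toy spectrum (the wavelength windows and the Gaussian test lines are illustrative assumptions, not values from the paper; a real measurement would fit the line profiles):

```python
import numpy as np

def line_flux(wave, flux, lo, hi):
    """Integrate flux over a wavelength window (angstroms)."""
    mask = (wave >= lo) & (wave <= hi)
    return np.trapz(flux[mask], wave[mask])

def ca_fe_ratio(wave, flux):
    # Illustrative windows around the [Ca II] 7291,7324 doublet and the
    # [Fe III] blend near 4700 A; window bounds are assumptions.
    ca = line_flux(wave, flux, 7200.0, 7400.0)
    fe = line_flux(wave, flux, 4600.0, 4800.0)
    return ca / fe

# Toy nebular spectrum: two Gaussian emission features.
wave = np.linspace(4000.0, 8000.0, 4000)
flux = (np.exp(-0.5 * ((wave - 4701.0) / 30.0) ** 2)
        + 2.0 * np.exp(-0.5 * ((wave - 7300.0) / 30.0) ** 2))

ratio = ca_fe_ratio(wave, flux)
print(f"[Ca II]/[Fe III] ~ {ratio:.2f}")
```

In the picture the abstract describes, a larger ratio would point to a lower-mass white dwarf progenitor; the threshold separating the two classes would have to come from the paper itself.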

9 citations

Journal ArticleDOI
Claudia P. Gutiérrez1, Mark Sullivan1, L. Martinez2, L. Martinez3, Melina C. Bersten3, Melina C. Bersten2, Melina C. Bersten4, Cosimo Inserra5, Mathew Smith1, Joseph P. Anderson6, Yen-Chen Pan7, A. Pastorello, Lluís Galbany8, Peter Nugent9, C. R. Angus10, Cristina Barbarino11, Daniela Carollo, T. W. Chen12, Tamara M. Davis13, M. Della Valle6, Ryan J. Foley14, Morgan Fraser15, C. Frohmaier16, Santiago González-Gaitán17, Mariusz Gromadzki18, Erkki Kankare19, R. Kokotanekova6, Juna A. Kollmeier20, Geraint F. Lewis21, M. R. Magee22, Kate Maguire22, Anais Möller, Nidia Morrell20, Matt Nicholl23, M. Pursiainen1, Jesper Sollerman11, N. E. Sommer24, E. Swann16, B. E. Tucker24, P. Wiseman1, Michel Aguena25, S. Allam26, Santiago Avila27, E. Bertin28, E. Bertin29, David Brooks30, E. Buckley-Geer26, D. L. Burke31, A. Carnero Rosell, M. Carrasco Kind32, J. Carretero, M. Costanzi33, L. N. da Costa, J. De Vicente, S. Desai34, H. T. Diehl26, P. Doel30, T. F. Eifler35, T. F. Eifler36, B. Flaugher26, Pablo Fosalba37, Josh Frieman26, Juan Garcia-Bellido27, D. W. Gerdes38, Daniel Gruen31, Robert A. Gruendl32, J. Gschwend, G. Gutierrez26, Samuel Hinton13, D. L. Hollowood14, K. Honscheid39, David J. James40, Kyler Kuehn41, Kyler Kuehn42, N. Kuropatkin26, Ofer Lahav30, Marcos Lima25, M. A. G. Maia, M. March43, Felipe Menanteau32, Ramon Miquel, Eric Morganson32, Antonella Palmese26, F. Paz-Chinchón32, A. A. Plazas44, M. Sako43, E. J. Sanchez, V. Scarpine26, Michael Schubnell38, S. Serrano37, I. Sevilla-Noarbe, Marcelle Soares-Santos45, E. Suchyta46, M. E. C. Swanson32, Gregory Tarle38, Daniel Thomas16, T. N. Varga12, T. N. Varga47, A. R. Walker, R. D. Wilkinson48 
TL;DR: DES16C3cje, as presented in this paper, is a unique Type II supernova whose spectra show very narrow photospheric lines corresponding to very low expansion velocities of ≲1500 km s−1, and whose light curve shows an initial peak that fades after 50 d before slowly rebrightening over a further 100 d to reach an absolute brightness of Mr ∼ −15.5 mag.
Abstract: We present DES16C3cje, a low-luminosity, long-lived type II supernova (SN II) at redshift 0.0618, detected by the Dark Energy Survey (DES). DES16C3cje is a unique SN. The spectra are characterized by extremely narrow photospheric lines corresponding to very low expansion velocities of ≲1500 km s−1, and the light curve shows an initial peak that fades after 50 d before slowly rebrightening over a further 100 d to reach an absolute brightness of Mr ∼ −15.5 mag. The decline rate of the late-time light curve is then slower than that expected from the powering by radioactive decay of 56Co, but is comparable to that expected from accretion power. Comparing the bolometric light curve with hydrodynamical models, we find that DES16C3cje can be explained by either (i) a low explosion energy (0.11 foe) and relatively large 56Ni production of 0.075 M⊙ from an ∼15 M⊙ red supergiant progenitor typical of other SNe II, or (ii) a relatively compact ∼40 M⊙ star, explosion energy of 1 foe, and 0.08 M⊙ of 56Ni. Both scenarios require additional energy input to explain the late-time light curve, which is consistent with fallback accretion at a rate of ∼0.5 × 10−8 M⊙ s−1.
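The late-time comparison above, radioactive 56Co decay versus fallback accretion, contrasts an exponential light curve with a t^(−5/3) power law. A minimal sketch of the implied decline rates in magnitudes per day (the evaluation epoch is illustrative; only the 56Co mean lifetime and the standard t^(−5/3) fallback scaling are taken as given):

```python
import math

TAU_CO56 = 111.3  # 56Co mean lifetime (e-folding decay time) in days

def co56_decay_slope_mag_per_day():
    """Decline rate of a purely 56Co-powered tail: L ~ exp(-t/tau),
    so m = -2.5 log10(L) declines at a constant 2.5 / (tau ln 10)."""
    return 2.5 / (TAU_CO56 * math.log(10.0))

def fallback_slope_mag_per_day(t_days):
    """Decline rate of a fallback-accretion-powered curve, L ~ t^(-5/3):
    dm/dt = (2.5 * 5/3) / (t ln 10), which flattens as t grows."""
    return (2.5 * 5.0 / 3.0) / (t_days * math.log(10.0))

t = 200.0  # days after explosion (illustrative epoch)
print(co56_decay_slope_mag_per_day())   # ~0.0098 mag/day, independent of t
print(fallback_slope_mag_per_day(t))    # shallower than 56Co at this epoch
```

The crossover, where the accretion slope becomes shallower than the constant 56Co slope, falls near t ≈ (5/3)·tau ≈ 185 d in this toy picture, which is why a slow late-time decline can favor accretion power.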

9 citations

Journal ArticleDOI
TL;DR: In this article, a survey of the early evolution of 12 Type IIn supernovae (SNe IIn) in the Ultra-Violet (UV) and visible light is presented.
Abstract: We present a survey of the early evolution of 12 Type IIn supernovae (SNe IIn) in the ultraviolet (UV) and visible light. We use this survey to constrain the geometry of the circumstellar material (CSM) surrounding SN IIn explosions, which may shed light on their progenitor diversity. In order to distinguish between aspherical and spherical CSM, we estimate the blackbody radius temporal evolution of the SNe IIn of our sample, following the method introduced by Soumagnac et al. We find that higher luminosity objects tend to show evidence for aspherical CSM. Depending on whether this correlation is due to physical reasons or to some selection bias, we derive a lower limit between 35% and 66% on the fraction of SNe IIn showing evidence for aspherical CSM. This result suggests that asphericity of the CSM surrounding SNe IIn is common, consistent with data from resolved images of stars undergoing considerable mass loss, and should be taken into account for more realistic modeling of these events.
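The blackbody radius referred to above follows from the Stefan–Boltzmann law, L = 4πR²σT⁴, so each epoch's luminosity and temperature fit implies a radius. A minimal sketch with illustrative SN IIn-like numbers, not values from the survey:

```python
import math

SIGMA_SB = 5.670374419e-5  # Stefan-Boltzmann constant, erg s^-1 cm^-2 K^-4 (cgs)

def blackbody_radius_cm(luminosity_erg_s, temperature_k):
    """Photospheric radius implied by a blackbody fit: L = 4 pi R^2 sigma T^4."""
    return math.sqrt(luminosity_erg_s
                     / (4.0 * math.pi * SIGMA_SB * temperature_k ** 4))

# Illustrative values: L = 1e43 erg/s and T = 10,000 K give R of order 1e15 cm.
r_cm = blackbody_radius_cm(1e43, 1.0e4)
print(f"R = {r_cm:.2e} cm")
```

Tracking how this inferred radius grows (or stalls) with time is what lets the method distinguish a spherical photosphere from an aspherical CSM configuration.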

9 citations

Journal ArticleDOI
TL;DR: In this article, infant-phase detections of SN 2018aoz from a brightness of −10.5 absolute AB magnitudes reveal a hitherto unseen plateau in the $B$-band that results in a rapid redward color evolution between 1.0 and 12.4 hours after the estimated epoch of first light.
Abstract: Type Ia Supernovae are thermonuclear explosions of white dwarf stars. They play a central role in the chemical evolution of the Universe and are an important measure of cosmological distances. However, outstanding questions remain about their origins. Despite extensive efforts to obtain natal information from their earliest signals, observations have thus far failed to identify how the majority of them explode. Here, we present infant-phase detections of SN 2018aoz from a brightness of -10.5 absolute AB magnitudes -- the lowest luminosity early Type Ia signals ever detected -- revealing a hitherto unseen plateau in the $B$-band that results in a rapid redward color evolution between 1.0 and 12.4 hours after the estimated epoch of first light. The missing $B$-band flux is best-explained by line-blanket absorption from Fe-peak elements in the outer 1% of the ejected mass. The observed $B-V$ color evolution of the SN also matches the prediction from an over-density of Fe-peak elements in the same outer 1% of the ejected mass, whereas bluer colors are expected from a purely monotonic distribution of Fe-peak elements. The presence of excess nucleosynthetic material in the extreme outer layers of the ejecta points to enhanced surface nuclear burning or extended sub-sonic mixing processes in some normal Type Ia Supernova explosions.

9 citations

Posted Content
TL;DR: A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments, and long-range planning between HEP and ASCR will be required to meet HEP's research needs.
Abstract: Author(s): Habib, Salman; Roser, Robert; Gerber, Richard; Antypas, Katie; Riley, Katherine; Williams, Tim; Wells, Jack; Straatsma, Tjerk; Almgren, A; Amundson, J; Bailey, S; Bard, D; Bloom, K; Bockelman, B; Borgland, A; Borrill, J; Boughezal, R; Brower, R; Cowan, B; Finkel, H; Frontiere, N; Fuess, S; Ge, L; Gnedin, N; Gottlieb, S; Gutsche, O; Han, T; Heitmann, K; Hoeche, S; Ko, K; Kononenko, O; LeCompte, T; Li, Z; Lukic, Z; Mori, W; Nugent, P; Ng, C-K; Oleynik, G; O'Shea, B; Padmanabhan, N; Petravick, D; Petriello, FJ; Power, J; Qiang, J; Reina, L; Rizzo, TJ; Ryne, R; Schram, M; Spentzouris, P; Toussaint, D; Vay, J-L; Viren, B; Wurthwein, F; Xiao, L. This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs.
To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

9 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, the mass density, Omega_M, and the cosmological-constant energy density, Omega_Lambda, of the universe were measured using an analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project.
Abstract: We report measurements of the mass density, Omega_M, and cosmological-constant energy density, Omega_Lambda, of the universe based on the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project. The magnitude-redshift data for these SNe, at redshifts between 0.18 and 0.83, are fit jointly with a set of SNe from the Calan/Tololo Supernova Survey, at redshifts below 0.1, to yield values for the cosmological parameters. All SN peak magnitudes are standardized using a SN Ia lightcurve width-luminosity relation. The measurement yields a joint probability distribution of the cosmological parameters that is approximated by the relation 0.8 Omega_M - 0.6 Omega_Lambda ~= -0.2 +/- 0.1 in the region of interest (Omega_M <~ 1.5). For a flat (Omega_M + Omega_Lambda = 1) cosmology we find Omega_M = 0.28{+0.09,-0.08} (1 sigma statistical) {+0.05,-0.04} (identified systematics). The data are strongly inconsistent with a Lambda = 0 flat cosmology, the simplest inflationary universe model. An open, Lambda = 0 cosmology also does not fit the data well: the data indicate that the cosmological constant is non-zero and positive, with a confidence of P(Lambda > 0) = 99%, including the identified systematic uncertainties. The best-fit age of the universe relative to the Hubble time is t_0 = 14.9{+1.4,-1.1} (0.63/h) Gyr for a flat cosmology. The size of our sample allows us to perform a variety of statistical tests to check for possible systematic errors and biases. We find no significant differences in either the host reddening distribution or Malmquist bias between the low-redshift Calan/Tololo sample and our high-redshift sample. The conclusions are robust whether or not a width-luminosity relation is used to standardize the SN peak magnitudes.
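As a quick consistency check, combining the joint constraint 0.8 Omega_M − 0.6 Omega_Lambda ≈ −0.2 quoted above with the flatness condition Omega_M + Omega_Lambda = 1 recovers a value close to the quoted best fit:

```python
# Substitute Omega_Lambda = 1 - Omega_M into 0.8*Om - 0.6*OL = -0.2:
#   0.8*Om - 0.6*(1 - Om) = -0.2  ->  1.4*Om = 0.4
omega_m = 0.4 / 1.4
omega_lambda = 1.0 - omega_m
print(round(omega_m, 2), round(omega_lambda, 2))  # ~0.29 and ~0.71
```

This is consistent with the paper's flat-cosmology fit of Omega_M = 0.28 to within the quoted uncertainties.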

16,838 citations

Journal ArticleDOI
TL;DR: In this article, the authors used spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-z Supernova Search Team and recent results by Riess et al., this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (ΩM), the cosmological constant (i.e., the vacuum energy density, ΩΛ), the deceleration parameter (q0), and the dynamical age of the universe (t0). We estimate the dynamical age of the universe to be 14.2 ± 1.7 Gyr including systematic uncertainties in the current Cepheid distance scale. We estimate the likely effect of several sources of systematic error, including progenitor and metallicity evolution, extinction, sample selection bias, local perturbations in the expansion rate, gravitational lensing, and sample contamination. Presently, none of these effects appear to reconcile the data with ΩΛ = 0 and q0 ≥ 0.

16,674 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
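The mail-filtering example above can be made concrete with a tiny learned classifier. A minimal naive-Bayes-style sketch on toy data (the training messages are invented for illustration; a real filter would need far more data and richer features):

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies per label ('spam'/'ham') from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in messages:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals):
    """Pick the label under which the words are most probable (add-one smoothing)."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        score = 0.0
        for w in text.lower().split():
            p = (counts[label][w] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy training set: the system learns the rules from examples,
# rather than a programmer writing them by hand.
training = [
    ("buy cheap pills now", "spam"),
    ("win money fast", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(training)
print(classify("cheap money now", counts, totals))  # spam
```

The point of the example is the workflow, not the model: as the user keeps labeling messages, retraining updates the filtering rules automatically, which is exactly the maintenance burden the passage says hand-written rules cannot carry.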

13,246 citations

Journal ArticleDOI
TL;DR: In this article, a combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions.
Abstract: The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is ns = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison–Zel’dovich–Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, Σmν < 0.58 eV (95% CL), and the effective number of neutrino species, Neff = 4.34 +0.86/−0.88 (68% CL), which benefit from better determinations of the third peak and H0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H0, without high-redshift Type Ia supernovae, is w = −1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Yp = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations.
The seven-year polarization data have significantly improved: we now detect the temperature–E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature–B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = −1.1 ± 1.4 (statistical) ± 1.5 (systematic) (68% CL). We report significant detections of the Sunyaev–Zel’dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5–0.7 times the predictions from the “universal profile” of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.

11,309 citations

01 Jan 1998
TL;DR: The spectral and photometric observations of 10 type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62 were presented in this paper.
Abstract: We present spectral and photometric observations of 10 type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-Z Supernova Search Team (Garnavich et al. 1998; Schmidt et al. 1998) and Riess et al. (1998a), this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (ΩM), the cosmological constant (i.e., the vacuum energy density, ΩΛ), the deceleration parameter (q0), and the dynamical age of the Universe (t0). The distances of the high-redshift SNe Ia are, on average, 10% to 15% farther than expected in a low mass density (ΩM = 0.2) Universe without a cosmological constant. Different light curve fitting methods, SN Ia subsamples, and prior constraints unanimously favor eternally expanding models with positive cosmological constant (i.e., ΩΛ > 0) and a current acceleration of the expansion (i.e., q0 < 0). With no prior constraint on mass density other than ΩM ≥ 0, the spectroscopically confirmed SNe Ia are statistically consistent with q0 < 0 at the 2.8σ level.

11,197 citations