Author

Peter Nugent

Bio: Peter Nugent is an academic researcher from Lawrence Berkeley National Laboratory. The author has contributed to research in topics: Supernova & Light curve. The author has an h-index of 127 and has co-authored 754 publications receiving 92,988 citations. Previous affiliations of Peter Nugent include Liverpool John Moores University & National Autonomous University of Mexico.
Topics: Supernova, Light curve, Galaxy, Redshift, White dwarf


Papers
Journal ArticleDOI
01 Jan 2010-Science
TL;DR: SN 2002bj appears to be a member of a new class of supernovae, possibly formed by a helium detonation on a white dwarf ejecting a small envelope of material, and its properties suggest that it is of a kind predicted by theory but not previously observed.
Abstract: Analyses of supernovae (SNe) have revealed two main types of progenitors: exploding white dwarfs and collapsing massive stars. Here we describe SN 2002bj, which stands out as different from any SN reported to date. Its light curve rose and declined very rapidly, yet reached a peak intrinsic brightness greater than −18 magnitude. A spectrum obtained 7 days after discovery shows the presence of helium and intermediate-mass elements, yet no clear hydrogen or iron-peak elements. The spectrum only barely resembles that of a type Ia SN, with added carbon and helium. Its properties suggest that SN 2002bj may be representative of a class of progenitors that previously has been only hypothesized: a helium detonation on a white dwarf, ejecting a small envelope of material. New surveys should find many such objects, despite their scarcity.

141 citations

Journal ArticleDOI
TL;DR: In this paper, an extensive time series of spectroscopic data of the peculiar SN 1999aa in NGC 2595 is presented, including 25 optical spectra between -11 and +58 days with respect to B-band maximum light, providing an unusually complete time history.
Abstract: We present an extensive new time series of spectroscopic data of the peculiar SN 1999aa in NGC 2595. Our data set includes 25 optical spectra between -11 and +58 days with respect to B-band maximum light, providing an unusually complete time history. The early spectra resemble those of an SN 1991T-like object but with a relatively strong Ca II H and K absorption feature. The first clear sign of Si II lambda 6355, characteristic of Type Ia supernovae, is found at day -7, and its velocity remains constant up to at least the first month after B-band maximum light. The transition to normal-looking spectra is found to occur earlier than in SN 1991T, suggesting SN 1999aa as a possible link between SN 1991T-like and Branch-normal supernovae. Comparing the observations with synthetic spectra, doubly ionized Fe, Si, and Ni are identified at early epochs. These are characteristic of SN 1991T-like objects. Furthermore, in the day -11 spectrum, evidence is found for an absorption feature that could be identified as high-velocity C II lambda 6580 or H alpha. At the same epoch, C III lambda 4648.8 at photospheric velocity is probably responsible for the absorption feature at 4500 Å. High-velocity Ca is found around maximum light, together with Si II and Fe II confined in a narrow velocity window. Implied constraints on supernova progenitor systems and explosion hydrodynamic models are briefly discussed.

137 citations

Journal ArticleDOI
TL;DR: The inner workings of a framework, based on machine-learning algorithms, that captures expert training and ground-truth knowledge about the variable and transient sky to automate the process of discovery on image differences, and the generation of preliminary science-type classifications of discovered sources are presented.
Abstract: The rate of image acquisition in modern synoptic imaging surveys has already begun to outpace the feasibility of keeping astronomers in the real-time discovery and classification loop. Here we present the inner workings of a framework, based on machine-learning algorithms, that captures expert training and ground-truth knowledge about the variable and transient sky to automate (1) the process of discovery on image differences and (2) the generation of preliminary science-type classifications of discovered sources. Since follow-up resources for extracting novel science from fast-changing transients are precious, self-calibrating classification probabilities must be couched in terms of efficiencies for discovery and purity of the samples generated. We estimate the purity and efficiency in identifying real sources with a two-epoch image-difference discovery algorithm for the Palomar Transient Factory (PTF) survey. Once given a source discovery, using machine-learned classification trained on PTF data, we distinguish between transients and variable stars with a 3.8% overall error rate (with 1.7% errors for imaging within the Sloan Digital Sky Survey footprint). At >96% classification efficiency, the samples achieve 90% purity. Initial classifications are shown to rely primarily on context-based features, determined from the data itself and external archival databases. In the roughly one year since autonomous operations began, this discovery and classification framework has led to several significant science results, from outbursting young stars to subluminous Type IIP supernovae to candidate tidal disruption events. We discuss future directions of this approach, including the possible roles of crowdsourcing and the scalability of machine learning to future surveys such as the Large Synoptic Survey Telescope (LSST).
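The purity and efficiency figures quoted above are standard confusion-matrix quantities. A minimal sketch of how they are computed from a labeled validation set (this is an illustration, not the PTF pipeline; the labels and numbers below are hypothetical):

```python
# Purity = TP / (TP + FP): fraction of objects flagged as transients
# that really are transients. Efficiency (completeness) = TP / (TP + FN):
# fraction of true transients that get flagged.

def purity_and_efficiency(true_labels, predicted_labels, positive="transient"):
    tp = sum(1 for t, p in zip(true_labels, predicted_labels)
             if t == positive and p == positive)
    fp = sum(1 for t, p in zip(true_labels, predicted_labels)
             if t != positive and p == positive)
    fn = sum(1 for t, p in zip(true_labels, predicted_labels)
             if t == positive and p != positive)
    purity = tp / (tp + fp) if tp + fp else 0.0
    efficiency = tp / (tp + fn) if tp + fn else 0.0
    return purity, efficiency

# Hypothetical validation set: 4 true transients, 2 variable stars.
truth = ["transient", "transient", "variable", "transient", "variable", "transient"]
preds = ["transient", "transient", "transient", "transient", "variable", "transient"]
p, e = purity_and_efficiency(truth, preds)
# p = 4/5 = 0.8 (one variable star contaminates the sample), e = 4/4 = 1.0
```

The trade-off the abstract quotes (90% purity at >96% efficiency) comes from sweeping a classifier's probability threshold and evaluating these two quantities at each cut.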

137 citations

Journal ArticleDOI
TL;DR: In this paper, the authors presented the first high-redshift Hubble diagram for Type II-P supernovae based upon five events at redshifts up to z~0.3, using photometry from the Canada-France-Hawaii Telescope Supernova Legacy Survey and absorption line spectroscopy from the Keck Observatory.
Abstract: We present the first high-redshift Hubble diagram for Type II-P supernovae (SNe II-P) based upon five events at redshift up to z~0.3. This diagram was constructed using photometry from the Canada-France-Hawaii Telescope Supernova Legacy Survey and absorption line spectroscopy from the Keck Observatory. The method used to measure distances to these supernovae is based on recent work by Hamuy & Pinto (2002) and exploits a correlation between the absolute brightness of SNe II-P and the expansion velocities derived from the minimum of the Fe II 516.9 nm P-Cygni feature observed during the plateau phases. We present three refinements to this method which significantly improve the practicality of measuring the distances of SNe II-P at cosmologically interesting redshifts. These are an extinction correction measurement based on the V-I colors at day 50, a cross-correlation measurement for the expansion velocity and the ability to extrapolate such velocities accurately over almost the entire plateau phase. We apply this revised method to our dataset of high-redshift SNe II-P and find that the resulting Hubble diagram has a scatter of only 0.26 magnitudes, thus demonstrating the feasibility of measuring the expansion history, with present facilities, using a method independent of that based upon supernovae of Type Ia.
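The standardized-candle idea behind this method can be sketched in a few lines: the plateau absolute magnitude correlates with the ejecta expansion velocity, so an apparent magnitude plus a measured Fe II velocity yields a distance modulus. The coefficients `a`, `b`, the intrinsic color, and the reddening factor `R` below are placeholders for illustration only, not the published Hamuy & Pinto calibration:

```python
from math import log10

# Illustrative sketch of a SN II-P standardized-candle distance:
#   M = -a * log10(v / 5000 km/s) + b   (brighter plateau <-> faster ejecta)
#   extinction estimated from the day-50 V-I color excess
# All coefficients here are hypothetical stand-ins.

def sn2p_distance_modulus(m_app, v_feII_kms, v_minus_i,
                          a=6.0, b=-17.5, intrinsic_color=0.5, R=1.0):
    M_abs = -a * log10(v_feII_kms / 5000.0)  + b   # velocity-luminosity relation
    A_ext = R * (v_minus_i - intrinsic_color)      # color-based extinction term
    return m_app - A_ext - M_abs                   # distance modulus mu = m - A - M

mu = sn2p_distance_modulus(m_app=20.0, v_feII_kms=5000.0, v_minus_i=0.5)
# With v = 5000 km/s and zero color excess: mu = 20.0 - (-17.5) = 37.5
```

The three refinements the paper describes (color-based extinction at day 50, cross-correlation velocities, and velocity extrapolation across the plateau) all feed into the two measured inputs of such a relation.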

134 citations

Journal ArticleDOI
TL;DR: In this article, the authors constructed a relatively detailed picture of the layered geometrical structure of the supernova ejecta, showing that the ejecta layers near the photosphere obey a near axial symmetry, while a detached, high-velocity structure with high CaII line opacity deviates from the photospheric axisymmetry.
Abstract: SN 2001el is the first normal Type Ia supernova to show a strong, intrinsic polarization signal. In addition, during the epochs prior to maximum light, the CaII IR triplet absorption is seen distinctly and separately at both normal photospheric velocities and at very high velocities. The high-velocity triplet absorption is highly polarized, with a different polarization angle than the rest of the spectrum. The unique observation allows us to construct a relatively detailed picture of the layered geometrical structure of the supernova ejecta: in our interpretation, the ejecta layers near the photosphere (v \approx 10,000 km/s) obey a near axial symmetry, while a detached, high-velocity structure (v \approx 18,000-25,000 km/s) with high CaII line opacity deviates from the photospheric axisymmetry. By partially obscuring the underlying photosphere, the high-velocity structure causes a more incomplete cancellation of the polarization of the photospheric light, and so gives rise to the polarization peak and rotated polarization angle of the high-velocity IR triplet feature. In an effort to constrain the ejecta geometry, we develop a technique for calculating 3-D synthetic polarization spectra and use it to generate polarization profiles for several parameterized configurations. In particular, we examine the case where the inner ejecta layers are ellipsoidal and the outer, high-velocity structure is one of four possibilities: a spherical shell, an ellipsoidal shell, a clumped shell, or a toroid. The synthetic spectra rule out the spherical shell model, disfavor a toroid, and find a best fit with the clumped shell. We show further that different geometries can be more clearly discriminated if observations are obtained from several different lines of sight.

134 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, the mass density, Omega_M, and cosmological-constant energy density of the universe were measured using the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology project.
Abstract: We report measurements of the mass density, Omega_M, and cosmological-constant energy density, Omega_Lambda, of the universe based on the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project. The magnitude-redshift data for these SNe, at redshifts between 0.18 and 0.83, are fit jointly with a set of SNe from the Calan/Tololo Supernova Survey, at redshifts below 0.1, to yield values for the cosmological parameters. All SN peak magnitudes are standardized using a SN Ia lightcurve width-luminosity relation. The measurement yields a joint probability distribution of the cosmological parameters that is approximated by the relation 0.8 Omega_M - 0.6 Omega_Lambda ~= -0.2 +/- 0.1 in the region of interest (Omega_M <~ 1.5). For a flat (Omega_M + Omega_Lambda = 1) cosmology we find Omega_M = 0.28{+0.09,-0.08} (1 sigma statistical) {+0.05,-0.04} (identified systematics). The data are strongly inconsistent with a Lambda = 0 flat cosmology, the simplest inflationary universe model. An open, Lambda = 0 cosmology also does not fit the data well: the data indicate that the cosmological constant is non-zero and positive, with a confidence of P(Lambda > 0) = 99%, including the identified systematic uncertainties. The best-fit age of the universe relative to the Hubble time is t_0 = 14.9{+1.4,-1.1} (0.63/h) Gyr for a flat cosmology. The size of our sample allows us to perform a variety of statistical tests to check for possible systematic errors and biases. We find no significant differences in either the host reddening distribution or Malmquist bias between the low-redshift Calan/Tololo sample and our high-redshift sample. The conclusions are robust whether or not a width-luminosity relation is used to standardize the SN peak magnitudes.
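The two headline numbers in this abstract are mutually consistent, and checking that is simple arithmetic: combining the joint constraint 0.8 Omega_M - 0.6 Omega_Lambda ~= -0.2 with the flatness condition Omega_M + Omega_Lambda = 1 recovers the fitted matter density.

```python
# Substitute Omega_Lambda = 1 - Omega_M into c_m*Om - c_l*Ol = rhs:
#   (c_m + c_l)*Om - c_l = rhs  =>  Om = (rhs + c_l) / (c_m + c_l)

def flat_omega_m(c_m=0.8, c_l=0.6, rhs=-0.2):
    return (rhs + c_l) / (c_m + c_l)

om = flat_omega_m()
# 0.4 / 1.4 ~= 0.286, consistent with the reported Omega_M = 0.28 (+0.09/-0.08)
```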

16,838 citations

Journal ArticleDOI
TL;DR: In this article, the authors used spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-z Supernova Search Team and recent results by Riess et al., this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (ΩM), the cosmological constant (i.e., the vacuum energy density, ΩΛ), the deceleration parameter (q0), and the dynamical age of the universe (t0). We estimate the dynamical age of the universe to be 14.2 ± 1.7 Gyr, including systematic uncertainties in the current Cepheid distance scale. We estimate the likely effect of several sources of systematic error, including progenitor and metallicity evolution, extinction, sample selection bias, local perturbations in the expansion rate, gravitational lensing, and sample contamination. Presently, none of these effects appear to reconcile the data with ΩΛ = 0 and q0 ≥ 0.

16,674 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
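The mail-filtering example above is concrete enough to sketch: instead of hand-programming per-user rules, the system learns them from messages the user has labeled. A minimal naive Bayes classifier (a toy illustration, not any production filter) captures the idea:

```python
from collections import Counter
from math import log

# Train per-label word counts from (set_of_words, label) pairs,
# then score a new message under each label with Laplace smoothing.

def train(messages):
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for words, label in messages:
        label_counts[label] += 1
        word_counts[label].update(words)
    return word_counts, label_counts

def classify(words, word_counts, label_counts):
    total = sum(label_counts.values())
    vocab = len(set(word_counts["spam"]) | set(word_counts["ham"]))
    scores = {}
    for label in ("spam", "ham"):
        n = sum(word_counts[label].values())
        score = log(label_counts[label] / total)       # log prior
        for w in words:
            # +1 smoothing so unseen words don't zero out the score
            score += log((word_counts[label][w] + 1) / (n + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical labeled mail from one user:
train_set = [({"win", "prize", "now"}, "spam"),
             ({"cheap", "prize"}, "spam"),
             ({"meeting", "tomorrow"}, "ham"),
             ({"project", "meeting", "notes"}, "ham")]
wc, lc = train(train_set)
label = classify({"win", "prize"}, wc, lc)   # -> "spam"
```

As the user keeps labeling messages, retraining on the growing set updates the filter automatically, which is exactly the maintenance burden the passage argues a learning system removes.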

13,246 citations

Journal ArticleDOI
TL;DR: In this article, a combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions.
Abstract: The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is ns = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison–Zel’dovich–Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, Σmν < 0.58 eV (95% CL), and the effective number of neutrino species, Neff = 4.34 (+0.86/−0.88) (68% CL), which benefit from better determinations of the third peak and H0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H0, without high-redshift Type Ia supernovae, is w = −1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Yp = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations.
The seven-year polarization data have significantly improved: we now detect the temperature–E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature–B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = −1.1° ± 1.4° (statistical) ± 1.5° (systematic) (68% CL). We report significant detections of the Sunyaev–Zel’dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5–0.7 times the predictions from the “universal profile” of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.

11,309 citations

01 Jan 1998
TL;DR: The spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62 were presented in this paper.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-Z Supernova Search Team (Garnavich et al. 1998; Schmidt et al. 1998) and Riess et al. (1998a), this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H0), the mass density (ΩM), the cosmological constant (i.e., the vacuum energy density, ΩΛ), the deceleration parameter (q0), and the dynamical age of the Universe (t0). The distances of the high-redshift SNe Ia are, on average, 10% to 15% farther than expected in a low mass density (ΩM = 0.2) Universe without a cosmological constant. Different light curve fitting methods, SN Ia subsamples, and prior constraints unanimously favor eternally expanding models with positive cosmological constant (i.e., ΩΛ > 0) and a current acceleration of the expansion (i.e., q0 < 0). With no prior constraint on mass density other than ΩM ≥ 0, the spectroscopically confirmed SNe Ia are statistically consistent with q0 < 0 at the 2.8σ confidence level.
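The sign of the deceleration parameter quoted here follows directly from the fitted densities: for a universe containing matter and a cosmological constant, q0 = ΩM/2 − ΩΛ. A quick check with densities of the order the supernova fits report:

```python
# q0 < 0 means the expansion is accelerating. For a matter + Lambda
# universe, q0 = Omega_M / 2 - Omega_Lambda.

def q0(omega_m, omega_lambda):
    return omega_m / 2.0 - omega_lambda

q = q0(0.28, 0.72)   # 0.14 - 0.72 = -0.58 < 0: accelerating expansion
```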

11,197 citations