Author
Peter Nugent
Other affiliations: Liverpool John Moores University, National Autonomous University of Mexico, California Institute of Technology
Bio: Peter Nugent is an academic researcher from Lawrence Berkeley National Laboratory. The author has contributed to research in topics: Supernova & Light curve. The author has an h-index of 127 and has co-authored 754 publications receiving 92,988 citations. Previous affiliations of Peter Nugent include Liverpool John Moores University & National Autonomous University of Mexico.
Topics: Supernova, Light curve, Galaxy, Redshift, White dwarf
Papers
08 Aug 2005
TL;DR: The Supernova Acceleration Probe (SNAP) as discussed by the authors uses Type Ia supernovae (SNe Ia) as distance indicators to measure the effect of dark energy on the expansion history of the universe.
Abstract: The Supernova Acceleration Probe (SNAP) will use Type Ia supernovae (SNe Ia) as distance indicators to measure the effect of dark energy on the expansion history of the Universe. (SNAP's weak-lensing program is described in a companion White Paper.) The experiment exploits supernova distance measurements up to their fundamental systematic limit; strict requirements on the monitoring of each supernova's properties lead to the need for a space-based mission. Results from pre-SNAP experiments, which characterize fundamental SN Ia properties, will be used to optimize the SNAP observing strategy to yield data that minimize both systematic and statistical uncertainties. SNAP has achieved technological readiness and the collaboration is poised to begin construction.
6 citations
Princeton University; Carnegie Institution for Science; Lawrence Berkeley National Laboratory; University of California, Berkeley; Weizmann Institute of Science; University of California, Santa Barbara; Las Cumbres Observatory Global Telescope Network; University of Southampton; Stockholm University; California Institute of Technology; University of Maryland, College Park; Goddard Space Flight Center; INAF; University of Oxford; Max Planck Society; University of Tokyo; San Diego State University
TL;DR: In this paper, the authors present the results of a systematic study of 1077 hydrogen-poor supernovae discovered by the Palomar Transient Factory, leading to nine new members of this peculiar class.
Abstract: Since the discovery of the unusual prototype SN 2002cx, the eponymous class of low-velocity, hydrogen-poor supernovae has grown to include at most another two dozen members identified from several heterogeneous surveys, in some cases ambiguously. Here we present the results of a systematic study of 1077 hydrogen-poor supernovae discovered by the Palomar Transient Factory, leading to nine new members of this peculiar class. Moreover, we find there are two distinct subclasses based on their spectroscopic, photometric, and host galaxy properties: the "SN 2002cx-like" supernovae tend to be in later-type or more irregular hosts, have more varied and generally dimmer luminosities, have longer rise times, and lack a Ti II trough when compared to the "SN 2002es-like" supernovae. None of our objects show helium, and we counter a previous claim of two such events. We also find that these transients comprise 5.6 (+17, −3.7)% (90% confidence) of all SNe Ia, lower than earlier estimates. Combining our objects with the literature sample, we propose that these subclasses have two distinct physical origins.
6 citations
09 Sep 2022
TL;DR: The MegaMapper as mentioned in this paper is a proposed ground-based experiment to measure Inflation parameters and Dark Energy from galaxy redshifts at 2 < z < 5. To achieve path-breaking results with a mid-scale investment, it combines existing technologies for critical path elements and pushes innovative development in other design areas.
Abstract: In this white paper, we present the MegaMapper concept. The MegaMapper is a proposed ground-based experiment to measure Inflation parameters and Dark Energy from galaxy redshifts at 2 < z < 5. In order to achieve path-breaking results with a mid-scale investment, the MegaMapper combines existing technologies for critical path elements and pushes innovative development in other design areas. To this aim, we envision a 6.5-m Magellan-like telescope, with a newly designed wide field, coupled with DESI spectrographs, and small-pitch robots to achieve a multiplexing of 26,100. This will match the expected achievable target density in the redshift range of interest and provide a 15x capability over the existing state of the art, without a 15x increase in project budget.
6 citations
09 Jul 2018
TL;DR: A distributed framework for cost-based caching of multi-dimensional arrays in native format is introduced and cache eviction and placement heuristic algorithms that consider the historical query workload are designed.
Abstract: As applications continue to generate multi-dimensional data at exponentially increasing rates, fast analytics to extract meaningful results is becoming extremely important. The database community has developed array databases that alleviate this problem through a series of techniques. In-situ mechanisms provide direct access to raw data in the original format---without loading and partitioning. Parallel processing scales to the largest datasets. In-memory caching reduces latency when the same data are accessed across a workload of queries. However, we are not aware of any work on distributed caching of multi-dimensional raw arrays. In this paper, we introduce a distributed framework for cost-based caching of multi-dimensional arrays in native format. Given a set of files that contain portions of an array and an online query workload, the framework computes an effective caching plan in two stages. First, the plan identifies the cells to be cached locally from each of the input files by continuously refining an evolving R-tree index. In the second stage, an optimal assignment of cells to nodes that collocates dependent cells in order to minimize the overall data transfer is determined. We design cache eviction and placement heuristic algorithms that consider the historical query workload. A thorough experimental evaluation over two real datasets in three file formats confirms the superiority - by as much as two orders of magnitude - of the proposed framework over existing techniques in terms of cache overhead and workload execution time.
6 citations
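The cost-based caching idea in the abstract above (score cached array cells by how much re-fetch work they save under the historical query workload, and evict the lowest-value cells first) can be sketched as follows. This is a deliberately simplified toy, not the paper's framework: all names are hypothetical, and the R-tree refinement and distributed placement stages are omitted.

```python
# Toy sketch of workload-aware, cost-based cache eviction
# (hypothetical simplification; not the authors' implementation).

class CostBasedCache:
    def __init__(self, capacity):
        self.capacity = capacity     # total bytes available for caching
        self.used = 0
        self.entries = {}            # cell_id -> (size, fetch_cost, hits)

    def _score(self, cell_id):
        size, fetch_cost, hits = self.entries[cell_id]
        # Benefit per byte: re-fetch work saved, weighted by how often
        # the historical workload touched this cell.
        return hits * fetch_cost / size

    def access(self, cell_id, size, fetch_cost):
        if cell_id in self.entries:
            s, c, h = self.entries[cell_id]
            self.entries[cell_id] = (s, c, h + 1)   # record another hit
            return "hit"
        # Evict the lowest-scoring cells until the new cell fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=self._score)
            self.used -= self.entries.pop(victim)[0]
        self.entries[cell_id] = (size, fetch_cost, 1)
        self.used += size
        return "miss"
```

A frequently re-read, expensive-to-fetch cell therefore survives eviction even when a larger, rarely used cell arrived more recently, which is the behavior a plain LRU cache cannot express.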
Cited by
University of California, Berkeley; Lawrence Berkeley National Laboratory; Instituto Superior Técnico; Pierre-and-Marie-Curie University; Stockholm University; European Southern Observatory; Collège de France; University of Cambridge; University of Barcelona; Yale University; Space Telescope Science Institute; European Space Agency; University of New South Wales
TL;DR: In this paper, the mass density, Omega_M, and cosmological-constant energy density of the universe were measured using the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology project.
Abstract: We report measurements of the mass density, Omega_M, and cosmological-constant energy density, Omega_Lambda, of the universe based on the analysis of 42 Type Ia supernovae discovered by the Supernova Cosmology Project. The magnitude-redshift data for these SNe, at redshifts between 0.18 and 0.83, are fit jointly with a set of SNe from the Calan/Tololo Supernova Survey, at redshifts below 0.1, to yield values for the cosmological parameters. All SN peak magnitudes are standardized using a SN Ia lightcurve width-luminosity relation. The measurement yields a joint probability distribution of the cosmological parameters that is approximated by the relation 0.8 Omega_M - 0.6 Omega_Lambda ~= -0.2 +/- 0.1 in the region of interest (Omega_M <~ 1.5). For a flat (Omega_M + Omega_Lambda = 1) cosmology we find Omega_M = 0.28 {+0.09, -0.08} (1 sigma statistical) {+0.05, -0.04} (identified systematics). The data are strongly inconsistent with a Lambda = 0 flat cosmology, the simplest inflationary universe model. An open, Lambda = 0 cosmology also does not fit the data well: the data indicate that the cosmological constant is non-zero and positive, with a confidence of P(Lambda > 0) = 99%, including the identified systematic uncertainties. The best-fit age of the universe relative to the Hubble time is t_0 = 14.9 {+1.4, -1.1} (0.63/h) Gyr for a flat cosmology. The size of our sample allows us to perform a variety of statistical tests to check for possible systematic errors and biases. We find no significant differences in either the host reddening distribution or Malmquist bias between the low-redshift Calan/Tololo sample and our high-redshift sample. The conclusions are robust whether or not a width-luminosity relation is used to standardize the SN peak magnitudes.
16,838 citations
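The flat-cosmology value quoted in the abstract above can be recovered from its own joint constraint: substituting the flatness condition Omega_Lambda = 1 - Omega_M into 0.8 Omega_M - 0.6 Omega_Lambda = -0.2 gives 1.4 Omega_M = 0.4, i.e. Omega_M ~ 0.29, matching the quoted 0.28 within rounding. A one-line arithmetic check (a consistency check on the quoted numbers, not part of the paper's analysis):

```python
# Solve 0.8*Om - 0.6*Ol = -0.2 together with flatness Om + Ol = 1.
# Substituting Ol = 1 - Om:  0.8*Om - 0.6*(1 - Om) = -0.2  =>  1.4*Om = 0.4
omega_m = 0.4 / 1.4
omega_lambda = 1.0 - omega_m
print(round(omega_m, 2), round(omega_lambda, 2))   # 0.29 0.71
```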
TL;DR: In this article, the authors used spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-z Supernova Search Team and recent results by Riess et al., this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H_0), the mass density (Ω_M), the cosmological constant (i.e., the vacuum energy density, Ω_Λ), the deceleration parameter (q_0), and the dynamical age of the universe (t_0). We estimate the dynamical age of the universe to be 14.2 ± 1.7 Gyr, including systematic uncertainties in the current Cepheid distance scale. We estimate the likely effect of several sources of systematic error, including progenitor and metallicity evolution, extinction, sample selection bias, local perturbations in the expansion rate, gravitational lensing, and sample contamination. Presently, none of these effects appear to reconcile the data with Ω_Λ = 0 and q_0 ≥ 0.
16,674 citations
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
13,246 citations
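The mail-filtering example in the abstract above, learning which messages a user rejects from labeled examples rather than hand-coded rules, can be sketched with a tiny word-frequency classifier. This is an illustrative toy (a naive-Bayes-style scorer), not code from the paper; all names and the training data are invented:

```python
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs -> per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def is_spam(counts, text, smoothing=1):
    # Score each class by its (smoothed) word frequencies; pick the larger.
    scores = {}
    for label, ctr in counts.items():
        total = sum(ctr.values()) + smoothing * len(ctr)
        score = 1.0
        for word in text.lower().split():
            score *= (ctr[word] + smoothing) / total
        scores[label] = score
    return scores[True] > scores[False]

# Learn a filter from a handful of labeled examples.
model = train([
    ("win money now", True),
    ("free prize win", True),
    ("meeting notes attached", False),
    ("project schedule update", False),
])
print(is_spam(model, "win free money"))       # True
print(is_spam(model, "schedule a meeting"))   # False
```

Updating `model` with each newly rejected message is exactly the "constantly modifying and tuning a set of learned prediction rules" the abstract describes.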
TL;DR: In this article, a combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions.
Abstract: The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is ns = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison–Zel’dovich–Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, Σmν < 0.58 eV (95% CL), and the effective number of neutrino species, Neff = 4.34 (+0.86, −0.88) (68% CL), which benefit from better determinations of the third peak and H0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H0, without high-redshift Type Ia supernovae, is w = −1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Yp = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations.
The seven-year polarization data have significantly improved: we now detect the temperature–E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature–B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = −1.1 ± 1.4 (statistical) ± 1.5 (systematic) (68% CL). We report significant detections of the Sunyaev–Zel’dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5–0.7 times the predictions from the “universal profile” of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically-expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.
11,309 citations
01 Jan 1998
TL;DR: The spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62 were presented in this paper.
Abstract: We present spectral and photometric observations of 10 Type Ia supernovae (SNe Ia) in the redshift range 0.16 ≤ z ≤ 0.62. The luminosity distances of these objects are determined by methods that employ relations between SN Ia luminosity and light curve shape. Combined with previous data from our High-Z Supernova Search Team (Garnavich et al. 1998; Schmidt et al. 1998) and Riess et al. (1998a), this expanded set of 16 high-redshift supernovae and a set of 34 nearby supernovae are used to place constraints on the following cosmological parameters: the Hubble constant (H_0), the mass density (Ω_M), the cosmological constant (i.e., the vacuum energy density, Ω_Λ), the deceleration parameter (q_0), and the dynamical age of the Universe (t_0). The distances of the high-redshift SNe Ia are, on average, 10% to 15% farther than expected in a low mass density (Ω_M = 0.2) Universe without a cosmological constant. Different light curve fitting methods, SN Ia subsamples, and prior constraints unanimously favor eternally expanding models with positive cosmological constant (i.e., Ω_Λ > 0) and a current acceleration of the expansion (i.e., q_0 < 0). With no prior constraint on mass density other than Ω_M ≥ 0, the spectroscopically confirmed SNe Ia are statistically consistent with q_0 < 0 at the 2.8σ confidence level.
11,197 citations
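The claim in the abstract above, that high-redshift SNe Ia appear 10% to 15% farther than expected in a low-density universe without a cosmological constant, can be illustrated with the standard FRW luminosity distance d_L = (1+z) * (c/H0) * S_k(∫ dz'/E(z')). The sketch below uses textbook formulas and illustrative parameter choices; it is not the paper's fitting machinery:

```python
import math

def lum_dist(z, om, ol, steps=1000):
    """Luminosity distance in units of the Hubble distance c/H0 (FRW)."""
    ok = 1.0 - om - ol                      # curvature density
    E = lambda zp: math.sqrt(om * (1 + zp)**3 + ok * (1 + zp)**2 + ol)
    dz = z / steps
    # Trapezoidal comoving-distance integral of 1/E(z').
    chi = sum((1 / E(i * dz) + 1 / E((i + 1) * dz)) * dz / 2
              for i in range(steps))
    if ok > 0:                              # open universe: sinh correction
        chi = math.sinh(math.sqrt(ok) * chi) / math.sqrt(ok)
    elif ok < 0:                            # closed universe: sin correction
        chi = math.sin(math.sqrt(-ok) * chi) / math.sqrt(-ok)
    return (1 + z) * chi

z = 0.5
d_flat = lum_dist(z, om=0.28, ol=0.72)      # flat Lambda-CDM
d_open = lum_dist(z, om=0.20, ol=0.0)       # low-density, Lambda = 0
print(round(d_flat / d_open, 2))            # 1.1: ~10% farther at z = 0.5
```

The gap widens with redshift, which is why supernovae in the 0.16 ≤ z ≤ 0.62 range are faint enough to discriminate between the two models.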