Author

Mark Sullivan

Bio: Mark Sullivan is an academic researcher at the University of Southampton. He has contributed to research on the topics of supernovae and galaxies, has an h-index of 126, and has co-authored 802 publications receiving 63,916 citations. Previous affiliations of Mark Sullivan include John Radcliffe Hospital and the University of Cambridge.


Papers
Journal ArticleDOI
TL;DR: In this article, distance measurements to 71 high redshift type Ia supernovae discovered during the first year of the 5-year Supernova Legacy Survey (SNLS) were presented.
Abstract: We present distance measurements to 71 high-redshift type Ia supernovae discovered during the first year of the 5-year Supernova Legacy Survey (SNLS). These events were detected and their multi-color light curves measured using the MegaPrime/MegaCam instrument at the Canada-France-Hawaii Telescope (CFHT), by repeatedly imaging four one-square-degree fields in four bands. Follow-up spectroscopy was performed at the VLT, Gemini, and Keck telescopes to confirm the nature of the supernovae and to measure their redshifts. With this data set, we have built a Hubble diagram extending to z = 1, with all distance measurements involving at least two bands. Systematic uncertainties are evaluated making use of the multiband photometry obtained at CFHT. Cosmological fits to this first-year SNLS Hubble diagram give the following results: Ω_M = 0.263 ± 0.042 (stat) ± 0.032 (sys) for a flat ΛCDM model; and w = -1.023 ± 0.090 (stat) ± 0.054 (sys) for a flat cosmology with constant equation of state w, when combined with the constraint from the recent Sloan Digital Sky Survey measurement of baryon acoustic oscillations.

2,273 citations
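The flat-cosmology fit quoted above ties the supernova Hubble diagram to Ω_M and a constant w through the luminosity distance, which is a one-dimensional integral over the expansion rate. The following is a minimal sketch of that relation using the best-fit values from the abstract; the Hubble constant H0 = 70 km/s/Mpc is an assumed value for illustration (the abstract does not quote one, since distance-modulus fits absorb it into an overall offset):

```python
# Sketch: distance modulus mu(z) in a flat universe with constant dark-energy
# equation of state w, using the SNLS first-year best-fit values quoted above.
# H0 = 70 km/s/Mpc is assumed for illustration only.
import math

C_KM_S = 299792.458  # speed of light in km/s

def E(z, omega_m=0.263, w=-1.023):
    """Dimensionless expansion rate H(z)/H0 for a flat universe."""
    return math.sqrt(omega_m * (1 + z) ** 3
                     + (1 - omega_m) * (1 + z) ** (3 * (1 + w)))

def luminosity_distance_mpc(z, h0=70.0, omega_m=0.263, w=-1.023):
    """Luminosity distance in Mpc: (1+z) * (c/H0) * integral of dz'/E(z')."""
    n = 1000  # trapezoid-rule steps
    dz = z / n
    integral = 0.5 * (1.0 / E(0.0, omega_m, w) + 1.0 / E(z, omega_m, w))
    for i in range(1, n):
        integral += 1.0 / E(i * dz, omega_m, w)
    integral *= dz
    return (1 + z) * (C_KM_S / h0) * integral

def distance_modulus(z, **kw):
    """mu = 5 log10(d_L / 10 pc): the quantity plotted on a Hubble diagram."""
    return 5 * math.log10(luminosity_distance_mpc(z, **kw)) + 25
```

For these parameters, `distance_modulus(0.5)` gives μ ≈ 42.3, in the range typical of the intermediate-redshift SNLS events.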

Journal ArticleDOI
TL;DR: In this article, the authors presented cosmological constraints from a joint analysis of type Ia supernova (SN Ia) observations obtained by the SDSS-II and SNLS collaborations.
Abstract: Aims. We present cosmological constraints from a joint analysis of type Ia supernova (SN Ia) observations obtained by the SDSS-II and SNLS collaborations. The dataset includes several low-redshift samples (z < 0.1), all three seasons from the SDSS-II (0.05

1,939 citations

Journal ArticleDOI
TL;DR: In this paper, eleven high-redshift supernovae with high-quality HST light curves were used to confirm previous supernova evidence for an accelerating universe; combining the supernova results with independent flat-universe measurements of the mass density from CMB and galaxy redshift distortion data yields $w=-1.05^{+0.15}_{-0.20}$ (statistical) $\pm0.09$ (identified systematic), if w is assumed to be constant in time.
Abstract: We report measurements of $\Omega_M$, $\Omega_\Lambda$, and w from eleven supernovae at z=0.36-0.86 with high-quality lightcurves measured using WFPC-2 on the HST. This is an independent set of high-redshift supernovae that confirms previous supernova evidence for an accelerating Universe. Combined with earlier Supernova Cosmology Project data, the new supernovae yield a flat-universe measurement of the mass density $\Omega_M=0.25^{+0.07}_{-0.06}$ (statistical) $\pm0.04$ (identified systematics), or equivalently, a cosmological constant of $\Omega_\Lambda=0.75^{+0.06}_{-0.07}$ (statistical) $\pm0.04$ (identified systematics). When the supernova results are combined with independent flat-universe measurements of $\Omega_M$ from CMB and galaxy redshift distortion data, they provide a measurement of $w=-1.05^{+0.15}_{-0.20}$ (statistical) $\pm0.09$ (identified systematic), if w is assumed to be constant in time. The new data offer greatly improved color measurements of the high-redshift supernovae, and hence improved host-galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host-galaxy extinction correction directly for individual supernovae without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with $P(\Omega_\Lambda>0)>0.99$, a result consistent with previous and current supernova analyses which rely upon the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution.

1,687 citations

Journal ArticleDOI
TL;DR: The Palomar Transient Factory (PTF), described in this paper, is a fully automated, wide-field survey aimed at a systematic exploration of the optical transient sky.
Abstract: The Palomar Transient Factory (PTF) is a fully-automated, wide-field survey aimed at a systematic exploration of the optical transient sky. The transient survey is performed using a new 8.1 square degree camera installed on the 48 inch Samuel Oschin telescope at Palomar Observatory; colors and light curves for detected transients are obtained with the automated Palomar 60 inch telescope. PTF uses 80% of the 1.2 m and 50% of the 1.5 m telescope time. With an exposure of 60 s the survey reaches a depth of m_(g′) ≈ 21.3 and m_R ≈ 20.6 (5σ, median seeing). Four major experiments are planned for the five-year project: (1) a 5 day cadence supernova search; (2) a rapid transient search with cadences between 90 s and 1 day; (3) a search for eclipsing binaries and transiting planets in Orion; and (4) a 3π sr deep H-alpha survey. PTF provides automatic, real-time transient classification and follow-up, as well as a database including every source detected in each frame. This paper summarizes the PTF project, including several months of on-sky performance tests of the new survey camera, the observing plans, and the data reduction strategy. We conclude by detailing the first 51 PTF optical transient detections, found in commissioning data.

1,312 citations


Cited by
Journal ArticleDOI
TL;DR: Authors/Task Force Members: Piotr Ponikowski* (Chairperson) (Poland), Adriaan A. Voors* (Co-Chairperson) (The Netherlands), Stefan D. Anker (Germany), Héctor Bueno (Spain), John G. F. Cleland (UK), Andrew J. S. Coats (UK)

13,400 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
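The mail-filtering example in the abstract above can be made concrete. The following sketch uses a naive Bayes classifier, a standard technique for this task (the abstract does not name a specific algorithm), to learn per-user filtering rules from messages the user has labeled; all messages and labels below are made up for illustration:

```python
# Sketch of the mail-filtering example: a naive Bayes classifier learns
# which messages a user rejects from a small set of labeled examples.
# The training data here is invented purely for illustration.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label 'spam' or 'ham'."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in messages:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-posterior for the message."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior plus log likelihood with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Trained on a handful of kept and rejected messages, `classify` then routes new mail, and the model can be retrained as the user labels more messages, which is the "constantly modifying and tuning a set of learned prediction rules" behavior the abstract describes.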

Journal ArticleDOI
TL;DR: In this article, a combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions.
Abstract: The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is ns = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison–Zel’dovich–Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, Σmν < 0.58 eV (95% CL), and the effective number of neutrino species, N_eff = 4.34^{+0.86}_{−0.88} (68% CL), which benefit from better determinations of the third peak and H0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H0, without high-redshift Type Ia supernovae, is w = −1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Yp = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations.
The seven-year polarization data have significantly improved: we now detect the temperature–E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature–B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = −1.1 ± 1.4 (statistical) ± 1.5 (systematic) (68% CL). We report significant detections of the Sunyaev–Zel’dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5–0.7 times the predictions from the “universal profile” of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.

11,309 citations