Journal Article

Cosmological inference from Bayesian forward modelling of deep galaxy redshift surveys

TL;DR: In this article, the authors present a large-scale Bayesian inference framework to constrain cosmological parameters using galaxy redshift surveys, via an application of the Alcock-Paczynski (AP) test.
Abstract: We present a large-scale Bayesian inference framework to constrain cosmological parameters using galaxy redshift surveys, via an application of the Alcock-Paczynski (AP) test. Our physical model of the non-linearly evolved density field, as probed by galaxy surveys, employs Lagrangian perturbation theory (LPT) to connect Gaussian initial conditions to the final density field, followed by a coordinate transformation to obtain the redshift space representation for comparison with data. We have implemented a Hamiltonian Monte Carlo sampler to generate realisations of three-dimensional (3D) primordial and present-day matter fluctuations from a non-Gaussian LPT-Poissonian density posterior given a set of observations. This hierarchical approach encodes a novel AP test, extracting several orders of magnitude more information from the cosmic expansion compared to classical approaches, to infer cosmological parameters and jointly reconstruct the underlying 3D dark matter density field. The novelty of this AP test lies in constraining the comoving-redshift transformation to infer the appropriate cosmology which yields isotropic correlations of the galaxy density field, with the underlying assumption relying purely on the geometrical symmetries of the cosmological principle. Such an AP test does not rely explicitly on modelling the full statistics of the field. We verified in depth via simulations that this renders our test robust to model misspecification. This leads to another crucial advantage, namely that the cosmological parameters exhibit extremely weak dependence on the currently unresolved phenomenon of galaxy bias, thereby circumventing a potentially key limitation. This is consequently among the first methods to extract a large fraction of information from statistics other than that of direct density contrast correlations, without being sensitive to the amplitude of density fluctuations. We perform several statistical efficiency and consistency tests on a mock galaxy catalogue, using the SDSS-III survey as a template, taking into account the survey geometry and selection effects, to validate the Bayesian inference machinery implemented. Key words: methods: data analysis / methods: statistical / cosmology: observations / large-scale structure of Universe / galaxies: statistics
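The AP test above hinges on the comoving-redshift transformation: observed redshifts are mapped to comoving positions under a trial cosmology, and only the correct cosmology yields isotropic galaxy correlations. As a rough illustration of that mapping only (not the paper's full data model), here is a minimal sketch of the comoving-distance integral for a flat ΛCDM trial cosmology; the function name and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light [km/s]

def comoving_distance(z, omega_m=0.31, h=0.68):
    """Comoving distance [Mpc] to redshift z for an assumed flat LCDM cosmology."""
    H0 = 100.0 * h  # Hubble constant [km/s/Mpc]
    E = lambda zp: np.sqrt(omega_m * (1.0 + zp)**3 + (1.0 - omega_m))
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (C_KMS / H0) * integral

# Placing the same z = 0.35 galaxy under two trial cosmologies gives different
# comoving positions; in the AP spirit, the wrong choice distorts the inferred
# clustering anisotropically.
print(comoving_distance(0.35, omega_m=0.31), comoving_distance(0.35, omega_m=0.20))
```

In the paper's hierarchical framework this transformation is part of the forward model, so the cosmological parameters that control it are sampled jointly with the three-dimensional density field.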


Citations
Journal Article
TL;DR: In this paper, a numerical particle mesh model of gravitational structure formation is incorporated into a Bayesian inference framework to handle non-linear structure formation in survey data, and a hierarchical Bayes approach jointly accounts for observational effects such as unknown galaxy biases, selection effects, and observational noise.
Abstract: Accurate analyses of present and next-generation cosmological galaxy surveys require new ways to handle effects of non-linear gravitational structure formation processes in data. To address these needs we present an extension of our previously developed algorithm for Bayesian Origin Reconstruction from Galaxies (BORG) to analyse matter clustering at non-linear scales in observations. This is achieved by incorporating a numerical particle mesh model of gravitational structure formation into our Bayesian inference framework. The algorithm simultaneously infers the three-dimensional primordial matter fluctuations from which present non-linear observations formed and provides reconstructions of velocity fields and structure formation histories. The physical forward modelling approach automatically accounts for the non-Gaussian features in gravitationally evolved matter density fields and addresses the redshift space distortion problem associated with peculiar motions of observed galaxies. Our algorithm employs a hierarchical Bayes approach to jointly account for various observational effects, such as unknown galaxy biases, selection effects, and observational noise. Corresponding parameters of the data model are marginalized out via a sophisticated Markov chain Monte Carlo approach relying on a combination of a multiple block sampling framework and an efficient implementation of a Hamiltonian Monte Carlo sampler. We demonstrate the performance of the method by applying it to the 2M++ galaxy compilation, tracing the matter distribution of the nearby universe. We show accurate and detailed inferences of the three-dimensional non-linear dark matter distribution of the nearby universe. As exemplified in the case of the Coma cluster, our method provides complementary mass estimates that are compatible with those obtained from weak lensing and X-ray observations. For the first time, we also present a reconstruction of the vorticity of the non-linear velocity field from observations. In summary, our method provides plausible and very detailed inferences of the dark matter and velocity fields of our cosmic neighbourhood.
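The abstract refers to an efficient Hamiltonian Monte Carlo sampler embedded in a multiple block sampling framework. The snippet below is a generic, textbook-style HMC update in Python, shown only to make the sampling step concrete; it is not the BORG implementation, and the function name, step size, and trajectory length are arbitrary choices.

```python
import numpy as np

def hmc_step(q, log_post, grad_log_post, eps=0.05, n_leapfrog=20, rng=None):
    """One generic HMC update: leapfrog integration plus a Metropolis accept/reject."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)              # draw auxiliary Gaussian momenta
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(q_new)     # initial half kick
    for _ in range(n_leapfrog):
        q_new += eps * p_new                      # drift
        p_new += eps * grad_log_post(q_new)       # kick
    p_new -= 0.5 * eps * grad_log_post(q_new)     # reduce the last kick to a half kick
    h_old = -log_post(q) + 0.5 * np.dot(p, p)
    h_new = -log_post(q_new) + 0.5 * np.dot(p_new, p_new)
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

# Example: sample a 100-dimensional Gaussian stand-in for a density field
log_post = lambda x: -0.5 * np.dot(x, x)
grad_log_post = lambda x: -x
state = np.zeros(100)
for _ in range(500):
    state = hmc_step(state, log_post, grad_log_post)
```

In the block sampling scheme described above, an update of this kind acts on the high-dimensional density field, while bias, selection, and noise parameters are handled in separate conditional sampling blocks.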

83 citations

Journal Article
TL;DR: In this article, a hierarchical Bayes approach is employed to jointly account for observational effects such as galaxy biases, selection effects, and noise, and the resulting framework is used, among other applications, to reconstruct the vorticity of the non-linear velocity field from observations.
Abstract: Accurate analyses of present and next-generation galaxy surveys require new ways to handle effects of non-linear gravitational structure formation in data. To address these needs we present an extension of our previously developed algorithm for Bayesian Origin Reconstruction from Galaxies to analyse matter clustering at non-linear scales in observations. This is achieved by incorporating a numerical particle mesh model of structure formation into our Bayesian inference framework. The algorithm simultaneously infers the 3D primordial matter fluctuations from which present non-linear observations formed and provides reconstructions of velocity fields and structure formation histories. The physical forward modelling approach automatically accounts for non-Gaussian features in evolved matter density fields and addresses the redshift space distortion problem associated with peculiar motions of galaxies. Our algorithm employs a hierarchical Bayes approach to jointly account for observational effects, such as galaxy biases, selection effects, and noise. Corresponding parameters are marginalized out via a sophisticated Markov Chain Monte Carlo approach relying on a combination of a multiple block sampling framework and a Hamiltonian Monte Carlo sampler. We demonstrate the performance of the method by applying it to the 2M++ galaxy compilation, tracing the matter distribution of the Nearby Universe. We show accurate and detailed inferences of the 3D non-linear dark matter distribution of the Nearby Universe. As exemplified in the case of the Coma cluster, we provide mass estimates that are compatible with those obtained from weak lensing and X-ray observations. For the first time, we reconstruct the vorticity of the non-linear velocity field from observations. In summary, our method provides plausible and detailed inferences of dark matter and velocity fields of our cosmic neighbourhood.
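For the vorticity reconstruction highlighted in this abstract, the quantity of interest is ω = ∇ × v evaluated on the inferred velocity field. A minimal finite-difference sketch for a gridded velocity field is shown below; it only illustrates the definition, not the code used in the paper, and the grid spacing is an assumed placeholder.

```python
import numpy as np

def vorticity(vx, vy, vz, cell_size=1.0):
    """Curl of a gridded velocity field, omega = nabla x v, via central differences."""
    dvx_dy, dvx_dz = np.gradient(vx, cell_size, axis=(1, 2))
    dvy_dx, dvy_dz = np.gradient(vy, cell_size, axis=(0, 2))
    dvz_dx, dvz_dy = np.gradient(vz, cell_size, axis=(0, 1))
    omega_x = dvz_dy - dvy_dz
    omega_y = dvx_dz - dvz_dx
    omega_z = dvy_dx - dvx_dy
    return omega_x, omega_y, omega_z

# Example: random stand-in velocity components on a 64^3 grid
rng = np.random.default_rng(1)
vx, vy, vz = (rng.standard_normal((64, 64, 64)) for _ in range(3))
wx, wy, wz = vorticity(vx, vy, vz, cell_size=1.0)
```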

56 citations

Journal Article
TL;DR: In this article, an extension of a Wasserstein-optimized model is presented to emulate accurate high-resolution (HR) features from computationally cheaper low-resolution cosmological simulations.
Abstract: We present an extension of our recently developed Wasserstein optimized model to emulate accurate high-resolution (HR) features from computationally cheaper low-resolution (LR) cosmological simulations. Our deep physical modelling technique relies on restricted neural networks to perform a mapping of the distribution of the LR cosmic density field to the space of the HR small-scale structures. We constrain our network using a single triplet of HR initial conditions and the corresponding LR and HR evolved dark matter simulations from the quijote suite of simulations. We exploit the information content of the HR initial conditions as a well-constructed prior distribution from which the network emulates the small-scale structures. Once fitted, our physical model yields emulated HR simulations at low computational cost, while also providing some insights about how the large-scale modes affect the small-scale structure in real space.
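As a toy illustration of the kind of low-resolution to high-resolution mapping described above, the sketch below defines a small 3D convolutional network in PyTorch that upsamples a density field. The architecture, layer widths, and class name are assumptions for illustration; they do not reproduce the paper's restricted neural network or its Wasserstein objective.

```python
import torch
import torch.nn as nn

class HRFieldEmulator(nn.Module):
    """Toy LR->HR density-field emulator: convolutions around a trilinear upsampling."""
    def __init__(self, upscale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="trilinear", align_corners=False),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, lr_delta):
        # lr_delta: (batch, 1, N, N, N) low-resolution density contrast
        return self.net(lr_delta)

# Example with illustrative shapes: one 32^3 LR field -> emulated 64^3 field
lr = torch.randn(1, 1, 32, 32, 32)
hr_pred = HRFieldEmulator(upscale=2)(lr)   # shape (1, 1, 64, 64, 64)
```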

47 citations

Journal Article
TL;DR: In this paper, a halo painting network is proposed that maps approximate 3D dark matter fields to realistic halo distributions; the physically motivated network learns the nontrivial local relation between the dark matter density field and the halo distribution without relying on a physical model.
Abstract: We present a novel halo painting network that learns to map approximate 3D dark matter fields to realistic halo distributions. This map is provided via a physically motivated network with which we can learn the nontrivial local relation between dark matter density field and halo distributions without relying on a physical model. Unlike other generative or regressive models, a well motivated prior and simple physical principles allow us to train the mapping network quickly and with relatively little data. In learning to paint halo distributions from computationally cheap, analytical and nonlinear density fields, we bypass the need for full particle mesh simulations and halo finding algorithms. Furthermore, by design, our halo painting network needs only local patches of dark matter density to predict the halos, and as such, it can predict the 3D halo distribution for any arbitrary simulation box size. Our neural network can be trained using small simulations and used to predict large halo distributions, as long as the resolutions are equivalent. We evaluate our model's ability to generate 3D halo count distributions which reproduce, to a high degree, summary statistics, such as the power spectrum and bispectrum, of the input or reference realizations.
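The evaluation above relies on summary statistics such as the power spectrum. As a hedged sketch of how such a statistic can be measured from a gridded density contrast (the binning scheme and normalisation convention are assumptions, not the paper's pipeline):

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=20):
    """Isotropically binned power spectrum of a cubic density-contrast grid."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    # per-mode power; an overall box-volume normalisation convention is assumed
    pk_modes = (np.abs(delta_k) ** 2) * box_size**3 / n**6
    kf = 2.0 * np.pi / box_size                       # fundamental frequency
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
    edges = np.linspace(kf, kmag.max(), n_bins + 1)
    which = np.digitize(kmag.ravel(), edges)
    counts = np.bincount(which, minlength=n_bins + 2)[1:n_bins + 1]
    sums = np.bincount(which, weights=pk_modes.ravel(), minlength=n_bins + 2)[1:n_bins + 1]
    pk = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    k_centres = 0.5 * (edges[1:] + edges[:-1])
    return k_centres, pk

# Example: white-noise field in an (illustrative) 128 Mpc/h box
rng = np.random.default_rng(42)
k, pk = power_spectrum(rng.standard_normal((64, 64, 64)), box_size=128.0)
```

A bispectrum estimator follows the same pattern but correlates triplets of Fourier modes whose wavevectors close into a triangle.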

32 citations

Journal Article
TL;DR: The effective field theory (EFT) approach to the clustering of galaxies and other biased tracers allows for an isolation of the cosmological information that is protected by symmetries as mentioned in this paper.
Abstract: The effective-field-theory (EFT) approach to the clustering of galaxies and other biased tracers allows for an isolation of the cosmological information that is protected by symmetries, in particular ...
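The "information protected by symmetries" refers to the structure of the bias expansion: on large scales, the galaxy overdensity is written as a sum of symmetry-allowed operators of the matter field with free coefficients, plus a stochastic term. A schematic second-order example in standard notation (not transcribed from this particular paper) is:

```latex
\delta_g(\mathbf{x}) \simeq b_1\,\delta(\mathbf{x})
  + \tfrac{b_2}{2}\,\delta^2(\mathbf{x})
  + b_{\mathcal{G}_2}\,\mathcal{G}_2(\mathbf{x})
  + \epsilon(\mathbf{x}),
\qquad
\mathcal{G}_2 \equiv (\partial_i \partial_j \Phi)^2 - (\nabla^2 \Phi)^2 ,
```

where δ is the matter overdensity, Φ is the (suitably normalised) gravitational potential with ∇²Φ = δ, the b coefficients are free bias parameters, and ε is a noise field uncorrelated with the large-scale density.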

30 citations

References
Journal Article
TL;DR: A generalization of the sampling method introduced by Metropolis et al. as mentioned in this paper is presented along with an exposition of the relevant theory, techniques of application and methods and difficulties of assessing the error in Monte Carlo estimates.
Abstract: A generalization of the sampling method introduced by Metropolis et al. (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and difficulties of assessing the error in Monte Carlo estimates. Examples of the methods, including the generation of random orthogonal matrices and potential applications of the methods to numerical problems arising in statistics, are discussed. For numerical problems in a large number of dimensions, Monte Carlo methods are often more efficient than conventional numerical methods. However, implementation of the Monte Carlo methods requires sampling from high dimensional probability distributions and this may be very difficult and expensive in analysis and computer time. General methods for sampling from, or estimating expectations with respect to, such distributions are as follows. (i) If possible, factorize the distribution into the product of one-dimensional conditional distributions from which samples may be obtained. (ii) Use importance sampling, which may also be used for variance reduction. That is, in order to evaluate the integral J = ∫ f(x) p(x) dx = E_p(f), where p(x) is a probability density function, instead of obtaining independent samples x_1, ..., x_N from p(x) and using the estimate J_1 = Σ f(x_i)/N, we instead obtain the sample from a distribution with density q(x) and use the estimate J_2 = Σ {f(x_i) p(x_i)}/{q(x_i) N}. This may be advantageous if it is easier to sample from q(x) than p(x), but it is a difficult method to use in a large number of dimensions, since the values of the weights w(x_i) = p(x_i)/q(x_i) for reasonable values of N may all be extremely small, or a few may be extremely large. In estimating the probability of an event A, however, these difficulties may not be as serious since the only values of w(x) which are important are those for which x ∈ A. Since the methods proposed by Trotter & Tukey (1956) for the estimation of conditional expectations require the use of importance sampling, the same difficulties may be encountered in their use. (iii) Use a simulation technique; that is, if it is difficult to sample directly from p(x) or if p(x) is unknown, sample from some distribution q(y) and obtain the sample x values as some function of the corresponding y values. If we want samples from the conditional dis ...
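The importance-sampling estimator J_2 in point (ii) can be written in a few lines of Python. The snippet below is a generic illustration with an arbitrary target, proposal, and integrand; it is not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target density p(x): standard normal. Proposal q(x): wider normal (sd = 2).
def p(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def q(x):
    return np.exp(-0.5 * (x / 2.0)**2) / (2.0 * np.sqrt(2.0 * np.pi))

f = lambda x: x**2          # integrand; E_p[f] = 1 for a standard normal

N = 100_000
x = rng.normal(0.0, 2.0, size=N)    # independent samples from q, not p
w = p(x) / q(x)                     # importance weights w(x_i) = p(x_i)/q(x_i)
J2 = np.mean(f(x) * w)              # the J_2 estimator from the abstract
print(J2)                           # ~ 1.0
```

As the abstract notes, in high dimensions the weights w(x_i) can become extremely uneven, which is what motivates Markov chain methods such as the Metropolis-Hastings generalization presented in the paper.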

14,965 citations

Journal Article
TL;DR: In this article, a combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions.
Abstract: The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is ns = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison–Zel’dovich–Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, ∑ mν < 0.58 eV (95% CL), and the effective number of neutrino species, Neff = 4.34 +0.86 −0.88 (68% CL), which benefit from better determinations of the third peak and H0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H0, without high-redshift Type Ia supernovae, is w = −1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Yp = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations. The seven-year polarization data have significantly improved: we now detect the temperature–E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature–B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = −1.1 ± 1.4 (statistical) ± 1.5 (systematic) (68% CL). We report significant detections of the Sunyaev–Zel’dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5–0.7 times the predictions from the “universal profile” of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.

11,309 citations

Journal Article
Peter A. R. Ade, Nabila Aghanim, Monique Arnaud, M. Ashdown, +334 more (82 institutions)
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s^-1 Mpc^-1, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016, together with the corresponding reionization redshift. These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |ΩK| < 0.005. Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ^2 potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also place constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing.
We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.

10,728 citations

Journal Article
TL;DR: The Sloan Digital Sky Survey (SDSS) as mentioned in this paper provides the data to support detailed investigations of the distribution of luminous and non-luminous matter in the Universe: a photometrically and astrometrically calibrated digital imaging survey of pi steradians above about Galactic latitude 30 degrees in five broad optical bands.
Abstract: The Sloan Digital Sky Survey (SDSS) will provide the data to support detailed investigations of the distribution of luminous and non- luminous matter in the Universe: a photometrically and astrometrically calibrated digital imaging survey of pi steradians above about Galactic latitude 30 degrees in five broad optical bands to a depth of g' about 23 magnitudes, and a spectroscopic survey of the approximately one million brightest galaxies and 10^5 brightest quasars found in the photometric object catalog produced by the imaging survey. This paper summarizes the observational parameters and data products of the SDSS, and serves as an introduction to extensive technical on-line documentation.

10,039 citations

Journal Article
Donald G. York, Jennifer Adelman, John E. Anderson, Scott F. Anderson, +148 more (29 institutions)
TL;DR: The Sloan Digital Sky Survey (SDSS) as discussed by the authors provides the data to support detailed investigations of the distribution of luminous and non-luminous matter in the universe: a photometrically and astrometrically calibrated digital imaging survey of π sr above about Galactic latitude 30° in five broad optical bands to a depth of g' ~ 23 mag.
Abstract: The Sloan Digital Sky Survey (SDSS) will provide the data to support detailed investigations of the distribution of luminous and nonluminous matter in the universe: a photometrically and astrometrically calibrated digital imaging survey of π sr above about Galactic latitude 30° in five broad optical bands to a depth of g' ~ 23 mag, and a spectroscopic survey of the approximately 10^6 brightest galaxies and 10^5 brightest quasars found in the photometric object catalog produced by the imaging survey. This paper summarizes the observational parameters and data products of the SDSS and serves as an introduction to extensive technical on-line documentation.

9,835 citations
