
Showing papers by "Aseem Paranjape published in 2022"


Journal ArticleDOI
TL;DR: In this paper, the authors present simple fitting functions in the spherically averaged quasi-adiabatic relaxation framework that accurately capture the dark matter response over the full range of halo mass and halo-centric distance.
Abstract: The dark matter content of a gravitationally bound halo is known to be affected by the galaxy and gas it hosts. We characterise this response for haloes spanning over four orders of magnitude in mass in the hydrodynamical simulation suites IllustrisTNG and EAGLE. We present simple fitting functions in the spherically averaged quasi-adiabatic relaxation framework that accurately capture the dark matter response over the full range of halo mass and halo-centric distance we explore. We show that commonly employed schemes, which consider the relative change in radius $r_f/r_i-1$ of a spherical dark matter shell to be a function of only the relative change in its mass $M_i/M_f-1$, do not accurately describe the measured response of most haloes in IllustrisTNG and EAGLE. Rather, $r_f/r_i$ additionally explicitly depends upon halo-centric distance $r_f/R_{\rm vir}$ for haloes with virial radius $R_{\rm vir}$, being very similar between IllustrisTNG and EAGLE and across halo mass. We also account for a previously unmodelled effect, likely driven by feedback-related outflows, in which shells having $r_f/r_i\simeq1$ (i.e., no relaxation) have $M_i/M_f$ significantly different from unity. Our results are immediately applicable to a number of semi-analytical tools for modelling galactic and large-scale structure. We also study the dependence of this response on several halo and galaxy properties beyond total mass, finding that it is primarily related to halo concentration and star formation rate. We discuss possible extensions of these results to build a deeper physical understanding of the small-scale connection between dark matter and baryons.
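The contrast between the commonly employed scheme and the radius-dependent response described above can be sketched in a few lines. This is purely illustrative: the value q ≈ 0.33, the power-law coupling and the exponent alpha below are placeholders, not the paper's fitted functions.

```python
import numpy as np

def relaxation_standard(Mi_over_Mf, q=0.33):
    """Standard quasi-adiabatic relaxation: the relative change in radius of a
    dark matter shell, r_f/r_i, depends only on its relative mass change M_i/M_f.
    q ~ 0.33 is a commonly quoted value, used here purely for illustration."""
    return 1.0 + q * (np.asarray(Mi_over_Mf) - 1.0)

def relaxation_with_radius(Mi_over_Mf, rf_over_Rvir, q0=0.33, alpha=0.5):
    """Toy extension in which the response additionally depends explicitly on
    halo-centric distance r_f/R_vir, as the paper finds in IllustrisTNG and
    EAGLE.  The power-law form and exponent alpha are illustrative, not fitted."""
    q_eff = q0 * np.asarray(rf_over_Rvir) ** alpha
    return 1.0 + q_eff * (np.asarray(Mi_over_Mf) - 1.0)

# A shell that gains mass (M_i/M_f < 1) contracts; one that loses mass expands.
print(relaxation_standard(0.9))          # contraction: r_f/r_i < 1
print(relaxation_with_radius(0.9, 0.1))  # weaker response deep inside the halo
```

In the standard scheme a shell responds identically at every radius; the toy extension lets the strength of the response vary with r_f/R_vir, which is the qualitative behaviour the abstract reports.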

1 citation


16 May 2022
TL;DR: A new approach to parameter inference targeted at generic situations where the evaluation of the likelihood L is numerically expensive; the publicly released code combines a minimizer (of a user-defined χ²) with Gaussian Process Regression for training an interpolator and a subsequent MCMC implementation using the cobaya framework.
Abstract: We present a new approach to parameter inference targeted on generic situations where the evaluation of the likelihood L (i.e., the probability to observe the data given a fixed model configuration) is numerically expensive. Inspired by ideas underlying simulated annealing, the method first evaluates χ² = −2 ln L on a sparse sequence of Latin hypercubes of increasing density in parameter (eigen)space. The semi-stochastic choice of sampling points accounts for anisotropic gradients of χ² and rapidly zooms in on the minimum of χ². The sampled χ² values are then used to train an interpolator, which is further used in a standard Markov Chain Monte Carlo (MCMC) algorithm to inexpensively explore the parameter space with high density, similarly to emulator-based approaches now popular in cosmological studies. Comparisons with example linear and non-linear problems show gains in the number of likelihood evaluations of factors of 10 to 100 or more, as compared to standard MCMC algorithms. As a specific implementation, we publicly release the code picasa (Parameter Inference using Cobaya with Anisotropic Simulated Annealing), which combines the minimizer (of a user-defined χ²) with Gaussian Process Regression for training the interpolator and a subsequent MCMC implementation using the cobaya framework. Being agnostic to the nature of the observable data and the theoretical model, our implementation is potentially useful for a number of emerging problems in cosmology, astrophysics and beyond.
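The annealing-like core of this approach, iterated Latin hypercube sampling that zooms in on the χ² minimum, can be sketched as follows. This is a toy reconstruction, not picasa itself: the χ² surface, sample sizes and box-shrinking factor are invented for illustration, and the Gaussian Process training and cobaya MCMC stages are only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, lo, hi):
    """Stratified Latin hypercube sample of n points in d dimensions in [lo, hi]."""
    strata = np.tile(np.arange(n), (d, 1))      # (d, n) stratum indices
    perm = rng.permuted(strata, axis=1).T       # shuffle strata per dimension
    u = (perm + rng.random((n, d))) / n         # jitter within each stratum
    return lo + u * (hi - lo)

def chi2(theta):
    """Toy anisotropic chi^2 = -2 ln L, standing in for an expensive likelihood."""
    target = np.array([0.3, 0.7])
    scales = np.array([1.0, 10.0])              # anisotropic curvature
    return np.sum(scales * (theta - target) ** 2, axis=-1)

# Annealing-like loop: sample a Latin hypercube, shrink the box around the best
# point so far, repeat.  (The full method would then train a GP interpolator on
# all sampled chi^2 values and run MCMC on it via cobaya.)
lo, hi = np.zeros(2), np.ones(2)
best_pt, best_val = None, np.inf
for it in range(6):
    pts = latin_hypercube(25, 2, lo, hi)
    vals = chi2(pts)
    i = np.argmin(vals)
    if vals[i] < best_val:
        best_pt, best_val = pts[i], vals[i]
    width = (hi - lo) * 0.5                     # halve the box each iteration
    lo = np.clip(best_pt - width / 2, 0.0, 1.0)
    hi = np.clip(best_pt + width / 2, 0.0, 1.0)

print(best_pt, best_val)
```

The stratified sampling keeps coverage uniform per dimension while the shrinking box concentrates the expensive evaluations near the minimum, which is where the interpolator most needs training points.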

Journal ArticleDOI
TL;DR: In this paper, the authors test the robustness of the lognormal model of the Lyα forest in recovering a set of IGM parameters by comparing with high-resolution Sherwood SPH simulations.
Abstract: Observations of the Lyman-α (Lyα) forest in spectra of distant quasars enable us to probe the matter power spectrum at relatively small scales. With several upcoming surveys, it is expected that there will be a many-fold increase in the quantity and quality of data, and hence it is important to develop efficient simulations to forward model these data sets. One such semi-numerical method is based on the assumption that the baryonic densities in the intergalactic medium (IGM) follow a lognormal distribution. In this work, we test the robustness of the lognormal model of the Lyα forest in recovering a set of IGM parameters by comparing with high-resolution Sherwood SPH simulations. We study the recovery of the parameters T0 (temperature of the mean-density IGM), γ (slope of the temperature-density relation) and Γ12 (hydrogen photoionization rate) at z ∼ 2.5 using a Markov Chain Monte Carlo (MCMC) technique for parameter estimation. Using three flux statistics, namely the probability distribution, the mean flux and the power spectrum, the values of all three parameters, T0, γ and Γ12, implied in the SPH simulations are recovered within 1σ (∼9, 4 and 1%, respectively) of the median (best-fit) values. We verify the validity of our results for different baryon smoothing filters, SNR, box sizes and resolutions, and data seeds, and confirm that the lognormal model can be used as an efficient tool for modelling the Lyα transmitted flux at z ∼ 2.5.
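The kind of forward model being tested can be illustrated with the fluctuating Gunn-Peterson approximation, in which the Lyα optical depth scales as τ ∝ Γ12⁻¹ (T0/10⁴ K)^(−0.7) Δ^(2−0.7(γ−1)) for baryon overdensity Δ. The sketch below is schematic rather than the paper's pipeline: the normalisation A, the uncorrelated Gaussian field and the field size are placeholders, and a real lognormal simulation would use a correlated density field along the line of sight plus redshift-space effects.

```python
import numpy as np

def fgpa_tau(delta, T0=1.0e4, gamma=1.4, Gamma12=1.0, A=0.3):
    """Fluctuating Gunn-Peterson approximation for the Ly-alpha optical depth:
    tau = A * Gamma12^-1 * (T0/1e4 K)^-0.7 * Delta^(2 - 0.7*(gamma-1)).
    A is an illustrative normalisation, not a calibrated value."""
    beta = 2.0 - 0.7 * (gamma - 1.0)
    return A * (Gamma12 ** -1.0) * (T0 / 1.0e4) ** -0.7 * delta ** beta

# Transmitted flux from a lognormal baryon overdensity field (toy: the Gaussian
# field is uncorrelated, whereas a real model imposes the matter correlations).
rng = np.random.default_rng(1)
g = rng.standard_normal(1000)
delta_b = np.exp(g - 0.5 * np.var(g))   # lognormal overdensity with mean ~ 1
flux = np.exp(-fgpa_tau(delta_b))
print(flux.mean())
```

Parameter recovery then amounts to varying (T0, γ, Γ12) in an MCMC and comparing flux statistics (probability distribution, mean flux, power spectrum) between the model and the reference simulation.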

Journal ArticleDOI
TL;DR: This work describes a Bayesian framework for determining how many basis functions to use and for comparing one basis set with another, and provides intuition into how one's degree of belief in different basis sets, the basis-dependent choice of priors, and the data set itself together determine the derived constraints.
Abstract: Constraints on cosmological parameters are often distilled from sky surveys by fitting templates to summary statistics of the data that are motivated by a fiducial cosmological model. However, recent work has shown how to estimate the distance scale using templates that are more generic: the basis functions used are not explicitly tied to any one cosmological model. We describe a Bayesian framework for (i) determining how many basis functions to use and (ii) comparing one basis set with another. Our formulation provides intuition into how (a) one’s degree of belief in different basis sets, (b) the fact that the choice of priors depends on basis set, and (c) the data set itself, together determine the derived constraints. We illustrate our framework using measurements in simulated datasets before applying it to real data.
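For a model that is linear in its coefficients with Gaussian noise and a Gaussian prior on the coefficients, the Bayesian evidence used to compare basis sets has a closed form, and its built-in Occam penalty is what makes the evidence peak at a finite number of basis functions. The sketch below uses a polynomial basis and toy data; the noise level, prior variance and basis choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def log_evidence(X, y, sigma=0.1, prior_var=10.0):
    """Closed-form log evidence for the linear-Gaussian model y = X w + noise,
    with prior w ~ N(0, prior_var * I): marginally, y ~ N(0, C) with
    C = sigma^2 I + prior_var X X^T.  Standard result, shown for illustration."""
    n = X.shape[0]
    C = sigma ** 2 * np.eye(n) + prior_var * X @ X.T
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + y @ np.linalg.solve(C, y))

# Toy data generated from a quadratic; compare polynomial basis sets of
# increasing size.  Larger bases fit better but pay an Occam penalty.
rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 0.5 * x - 0.8 * x ** 2 + 0.1 * rng.standard_normal(40)
for nbasis in (1, 2, 3, 5, 8):
    X = np.vander(x, nbasis, increasing=True)   # columns 1, x, x^2, ...
    print(nbasis, log_evidence(X, y))
```

On data generated from a quadratic, the evidence should peak near three basis functions and then decline as additional terms only fit noise, which is the trade-off such a framework formalises.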