Open access Journal Article - DOI: 10.1093/MNRAS/STAB152

MAXSMOOTH: rapid maximally smooth function fitting with applications in Global 21-cm cosmology

02 Mar 2021 - Monthly Notices of the Royal Astronomical Society (Oxford Academic) - Vol. 502, Iss. 3, pp. 4405-4425
Abstract: Maximally Smooth Functions (MSFs) are a class of constrained functions that have no inflection points or zero crossings in their high-order derivatives. Consequently, they have applications to signal recovery in experiments where signals of interest are expected to be non-smooth features masked by larger smooth signals or foregrounds. They can also act as a powerful tool for diagnosing the presence of systematics. The constrained nature of MSFs makes fitting these functions a non-trivial task. We introduce maxsmooth, an open-source package that uses quadratic programming to rapidly fit MSFs. We demonstrate the efficiency and reliability of maxsmooth by comparison to commonly used fitting routines. We show that by using quadratic programming we can reduce the fitting time by approximately two orders of magnitude. maxsmooth features a built-in library of MSF models and allows the user to define their own. We also introduce, and implement with maxsmooth, Partially Smooth Functions, which are useful for describing elements of non-smooth structure in foregrounds. This work has been motivated by the problem of foreground modelling in 21-cm cosmology, for which MSFs have been shown to be a viable alternative to polynomials. We discuss applications of maxsmooth to 21-cm cosmology and highlight this with examples using data from the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) and the Large-aperture Experiment to Detect the Dark Ages (LEDA) experiments. MSFs are applied to data from LEDA for the first time in this paper. maxsmooth is pip installable and available for download at: this https URL
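A minimal sketch of the quadratic-programming idea behind this approach, written against CVXOPT's generic QP solver. This is an illustration only, not the maxsmooth implementation or API: the raw-polynomial basis, the solver choice, and the hand-fixed derivative signs are assumptions made for the example (maxsmooth itself also searches over the allowed sign combinations and offers better-conditioned built-in basis functions).

# Fit y(x) ~ sum_k a_k x^k while forcing every derivative of order m >= 2 to keep a
# fixed sign across the band, by solving min ||y - Phi a||^2 as a quadratic programme.
import numpy as np
from math import factorial
from cvxopt import matrix, solvers

def fit_msf(x, y, N, signs):
    """x, y: numpy arrays; signs[m-2] in {+1, -1} fixes the sign of the m-th derivative, m = 2, ..., N-1."""
    Phi = np.vander(x, N, increasing=True)          # design matrix, columns x^k
    P = matrix(2.0 * Phi.T @ Phi)                   # quadratic term of ||y - Phi a||^2
    q = matrix(-2.0 * Phi.T @ y)                    # linear term

    # Inequality constraints G a <= 0, i.e. sign_m * f^(m)(x_i) >= 0 at every sample x_i.
    rows = []
    for m, s in zip(range(2, N), signs):
        D = np.zeros((x.size, N))                   # D @ a evaluates the m-th derivative at each x_i
        for k in range(m, N):
            D[:, k] = factorial(k) / factorial(k - m) * x ** (k - m)
        rows.append(-s * D)
    G = matrix(np.vstack(rows))
    h = matrix(np.zeros(G.size[0]))

    sol = solvers.qp(P, q, G, h)
    return np.asarray(sol["x"]).ravel()             # fitted coefficients a_k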


Topics: Quadratic programming (51%)
Citations

5 results found


Open access Journal Article - DOI: 10.1093/MNRAS/STAB1765
Abstract: The HI 21-cm absorption line is masked by bright foregrounds and by systematic distortions that arise when the chromaticity of the antenna used to make the observation couples to the spectral inhomogeneity of these foregrounds. We demonstrate that these distortions are sufficient to conceal the 21-cm signal when the antenna is not perfectly achromatic, and that simple corrections assuming a constant spatial distribution of foreground power are insufficient to overcome them. We then propose a new physics-motivated method of modelling the foregrounds of 21-cm experiments in order to fit the chromatic distortions as part of the foregrounds. This is done by generating a simulated sky model across the observing band, dividing the sky into $N$ regions and scaling a base map assuming a distinct uniform spectral index in each region. The resulting sky map can then be convolved with a model of the antenna beam to give a model of foregrounds and chromaticity parameterised by the spectral indices of the $N$ regions. We demonstrate that fitting this model for varying $N$ with a Bayesian nested sampling algorithm, and comparing the results using the evidence, allows the 21-cm signal to be reliably detected in data from a relatively smooth conical log spiral antenna. We also test a much more chromatic conical sinuous antenna and find that this model does not produce a reliable signal detection there, but that it fails in a manner easily distinguishable from a true detection.
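An illustrative sketch of the $N$-region sky model described above, as one reading of the abstract rather than the authors' pipeline; the array names, the 408 MHz reference frequency, and the $T \propto \nu^{-\beta}$ sign convention are assumptions made for the example.

# Scale a base sky map with one spectral index per region, then average it with a
# normalised beam model to get the foreground (antenna temperature) spectrum.
import numpy as np

def foreground_model(freqs_mhz, base_map, region_of_pixel, betas, beam, nu_ref=408.0):
    """beam[i, p] is the normalised beam weight of pixel p at freqs_mhz[i] (each row sums to 1);
    region_of_pixel[p] gives the region label (0..N-1) of pixel p; betas has length N."""
    spectrum = np.zeros(len(freqs_mhz))
    for i, nu in enumerate(freqs_mhz):
        scaled_map = base_map * (nu / nu_ref) ** (-betas[region_of_pixel])
        spectrum[i] = beam[i] @ scaled_map          # beam-weighted sky average
    return spectrum

# In the fit described above, the N spectral indices `betas` are the free parameters,
# sampled with a nested sampler for each choice of N, and the evidences are compared across N.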


Topics: Antenna (radio) (52%)

8 Citations


Open access Journal Article - DOI: 10.1093/MNRAS/STAB429
Emma Shen, Dominic Anstey, Eloy de Lera Acedo, Anastasia Fialkov, +1 more (1 institution)
Abstract: We modelled the two major layers of Earth's ionosphere, the F-layer and the D-layer, with a simplified spatial model including temporal variation, in order to study chromatic ionospheric effects on global 21-cm observations. From the analyses, we found that the magnitude of the disruptions due to ionospheric refraction and absorption can be greater than the expected global 21-cm signal, and that how much this magnitude varies depends on the ionospheric conditions. Within the parameter space adopted in the model, the shape of the global 21-cm signal is distorted after propagating through the ionosphere, while its amplitude is weakened. The ionospheric effects do not cancel out over time, and should therefore be corrected in the foreground calibration at each time step to account for the chromaticity introduced by the ionosphere.


Topics: Ionosphere (52%)

7 Citations


Open access Journal Article - DOI: 10.1093/MNRAS/STAB2312
M. Spinelli, Gianni Bernardi, H. Garsden, +5 more (6 institutions)
Abstract: Total-power radiometry with individual meter-wave antennas is a potentially effective way to study the Cosmic Dawn ($z\sim20$) through measurement of sky brightness arising from the $21$~cm transition of neutral hydrogen, provided this can be disentangled from much stronger Galactic and extragalactic foregrounds. In the process, measured spectra of integrated sky brightness temperature can be used to quantify the foreground emission properties. In this work, we analyze a subset of data from the Large-aperture Experiment to Detect the Dark Ages (LEDA) in the range $50-87$~MHz and constrain the foreground spectral index $\beta$ in the northern sky visible from mid-latitudes. We focus on two zenith-directed LEDA radiometers and study how estimates of $\beta$ vary with local sidereal time (LST). We correct for the effect of gain pattern chromaticity and compare estimated absolute temperatures with simulations. We develop a reference dataset consisting of 14 days of observations under optimal conditions. Using this dataset we estimate, for one radiometer, that $\beta$ varies from $-2.55$ at LST~$<6$~h to a steeper $-2.58$ at LST~$\sim13$~h, consistent with sky models and previous southern-sky measurements. In the LST~$=13-24$~h range, however, we find that $\beta$ fluctuates between $-2.55$ and $-2.61$ (data scatter $\sim0.01$). We observe a similar $\beta$ vs. LST trend for the second radiometer, although with slightly smaller $|\beta|$, in the $-2.46<\beta<-2.43$ range, over $24$~h of LST (data scatter $\sim0.02$). Combining all data gathered during the extended campaign between mid-2018 and mid-2019, and focusing on the LST~$=9-12.5$~h range, we infer good instrument stability and find $-2.56<\beta<-2.50$ with $0.09<\Delta\beta<0.12$.
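As a toy illustration of what constraining the spectral index means here (not the LEDA analysis itself; the reference frequency, amplitude, and noise level are invented for the example), one can fit a power law $T(\nu) = T_{70}\,(\nu/70\,\mathrm{MHz})^{\beta}$ to a sky-averaged spectrum:

import numpy as np
from scipy.optimize import curve_fit

nu = np.linspace(50.0, 87.0, 200)                                  # MHz, the band quoted above
T_obs = 2000.0 * (nu / 70.0) ** (-2.55) \
        + np.random.default_rng(1).normal(0.0, 5.0, nu.size)       # synthetic spectrum in K

def power_law(nu, T_70, beta):
    return T_70 * (nu / 70.0) ** beta

popt, pcov = curve_fit(power_law, nu, T_obs, p0=[1500.0, -2.5])
print(popt)   # recovers T_70 ~ 2000 K and beta ~ -2.55

The actual analysis additionally corrects for beam (gain pattern) chromaticity and tracks how the estimate varies with LST, as described in the abstract.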


Topics: Spectral index (61%), Sky brightness (55%)

2 Citations


Open access Journal Article - DOI: 10.21105/JOSS.02596
Abstract: maxsmooth is an optimisation routine written in Python (supporting versions ≥ 3.6) for fitting Derivative Constrained Functions (DCFs) to data. DCFs are a family of functions whose derivatives do not cross zero in the band of interest. Two special cases of DCF are Maximally Smooth Functions (MSFs), which have all derivatives of order m ≥ 2 constrained, and Completely Smooth Functions (CSFs), with m ≥ 1 constrained. Alternatively, we can constrain an arbitrary set of derivatives, and we refer to these models as Partially Smooth Functions. Due to their constrained nature, DCFs can produce perfectly smooth fits to data and reveal non-smooth signals of interest in the residuals.
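Spelled out, the defining condition on a DCF $f(x)$ over the band $[x_{\rm min}, x_{\rm max}]$, for the set $S$ of constrained derivative orders, is

$\frac{\mathrm{d}^m f}{\mathrm{d}x^m} \geq 0$ for all $x \in [x_{\rm min}, x_{\rm max}]$, or $\frac{\mathrm{d}^m f}{\mathrm{d}x^m} \leq 0$ for all $x \in [x_{\rm min}, x_{\rm max}]$, for every $m \in S$,

with $S = \{m : m \geq 2\}$ giving an MSF, $S = \{m : m \geq 1\}$ a CSF, and an arbitrary choice of $S$ a Partially Smooth Function.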


Topics: Derivative (57%)

1 Citation


References

70 results found



Journal Article - DOI: 10.1093/COMJNL/7.4.308
John A. Nelder, R. Mead (1 institution)
Abstract: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.
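The simplex method described here is available as scipy.optimize.minimize with method="Nelder-Mead"; a minimal usage sketch follows (the Rosenbrock test function and starting point are arbitrary choices for the example).

import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    # Classic non-linear test function with a curved valley; minimum at (1, 1).
    x, y = p
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

result = minimize(rosenbrock, x0=np.array([-1.0, 2.0]), method="Nelder-Mead")
print(result.x)   # converges towards the minimum at (1, 1)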


Topics: Nelder–Mead method (59%), Simplex algorithm (57%), Hessian matrix (57%)

25,414 Citations


Open access Journal Article - DOI: 10.1090/QAM/10666
Abstract: The standard method for solving least squares problems which lead to non-linear normal equations depends upon a reduction of the residuals to linear form by first-order Taylor approximations taken about an initial or trial solution for the parameters. If the usual least squares procedure, performed with these linear approximations, yields new values for the parameters which are not sufficiently close to the initial values, the neglect of second and higher order terms may invalidate the process, and may actually give rise to a larger value of the sum of the squares of the residuals than that corresponding to the initial solution. This failure of the standard method to improve the initial solution has received some notice in statistical applications of least squares and has been encountered rather frequently in connection with certain engineering applications involving the approximate representation of one function by another. The purpose of this article is to show how the problem may be solved by an extension of the standard method which insures improvement of the initial solution. The process can also be used for solving non-linear simultaneous equations, in which case it may be considered an extension of Newton's method. Let the function to be approximated be $h(x, y, z, \ldots)$, and let the approximating function be $H(x, y, z, \ldots; \alpha, \beta, \gamma, \ldots)$, where $\alpha, \beta, \gamma, \ldots$ are the unknown parameters. Then the residuals at the points $(x_i, y_i, z_i, \ldots)$, $i = 1, 2, \ldots, n$, are
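The damped extension of the Taylor-linearised least-squares step introduced here survives today as part of the Levenberg-Marquardt algorithm; a minimal usage sketch via SciPy follows (the exponential model and synthetic data are invented for the example).

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * x) + 0.05 * rng.normal(size=x.size)    # synthetic measurements

def residuals(params):
    a, b = params
    return a * np.exp(-b * x) - y                              # model minus data

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")     # Levenberg-Marquardt step damping
print(fit.x)   # recovers a ~ 2.5, b ~ 1.3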


Topics: Non-linear least squares (80%), Iteratively reweighted least squares (79%), Least squares (76%)

10,148 Citations


Open access Journal Article - DOI: 10.1086/513700
David N. Spergel, Rachel Bean, Olivier Doré, +24 more (10 institutions)
Abstract: A simple cosmological model with only six parameters (matter density, Omega_m h^2, baryon density, Omega_b h^2, Hubble Constant, H_0, amplitude of fluctuations, sigma_8, optical depth, tau, and a slope for the scalar perturbation spectrum, n_s) fits not only the three year WMAP temperature and polarization data, but also small scale CMB data, light element abundances, large-scale structure observations, and the supernova luminosity/distance relationship. Using WMAP data only, the best fit values for cosmological parameters for the power-law flat LCDM model are (Omega_m h^2, Omega_b h^2, h, n_s, tau, sigma_8) = (0.1277 +0.0080/-0.0079, 0.02229 +/- 0.00073, 0.732 +0.031/-0.032, 0.958 +/- 0.016, 0.089 +/- 0.030, 0.761 +0.049/-0.048). The three year data dramatically shrink the allowed volume in this six-dimensional parameter space. Assuming that the primordial fluctuations are adiabatic with a power law spectrum, the WMAP data alone require dark matter, and favor a spectral index that is significantly less than the Harrison-Zel'dovich-Peebles scale-invariant spectrum (n_s=1, r=0). Models that suppress large-scale power through a running spectral index or a large-scale cut-off in the power spectrum are a better fit to the WMAP and small scale CMB data than the power-law LCDM model; however, the improvement in the fit to the WMAP data is only Delta chi^2 = 3 for 1 extra degree of freedom. The combination of WMAP and other astronomical data yields significant constraints on the geometry of the universe, the equation of state of the dark energy, the gravitational wave energy density, and neutrino properties. Consistent with the predictions of simple inflationary theories, we detect no significant deviations from Gaussianity in the CMB maps.


Topics: Cosmic microwave background (57%), CMB cold spot (57%), Sachs–Wolfe effect (55%)

5,799 Citations


Performance Metrics

No. of citations received by the Paper in previous years

Year    Citations
2021    4
2020    1