Journal ArticleDOI

DeepCMB: Lensing Reconstruction of the Cosmic Microwave Background with Deep Neural Networks

TL;DR: In this paper, deep convolutional neural networks (CNNs) are used to reconstruct the CMB lensing potential with a high signal-to-noise ratio, reaching levels comparable to analytic approximations of MLE methods.
Abstract: Next-generation cosmic microwave background (CMB) experiments will have lower noise and therefore increased sensitivity, enabling improved constraints on fundamental physics parameters such as the sum of neutrino masses and the tensor-to-scalar ratio r. Achieving competitive constraints on these parameters requires high signal-to-noise extraction of the projected gravitational potential from the CMB maps. Standard methods for reconstructing the lensing potential employ the quadratic estimator (QE). However, the QE performs suboptimally at the low noise levels expected in upcoming experiments. Other methods, like maximum likelihood estimators (MLE), are under active development. In this work, we demonstrate reconstruction of the CMB lensing potential with deep convolutional neural networks (CNNs), i.e., a ResUNet. The network is trained and tested on simulated data, and otherwise has no parametrization of the physical processes of the CMB and gravitational lensing. We show that, over a wide range of angular scales, ResUNets recover the input gravitational potential with a higher signal-to-noise ratio than the QE method, reaching levels comparable to analytic approximations of MLE methods. We demonstrate that the network outputs quantifiably different lensing maps when given input CMB maps generated with different cosmologies. We also show that we can use the reconstructed lensing map for cosmological parameter estimation. This application of CNNs provides a few innovations at the intersection of cosmology and machine learning. First, while training and regressing on images, we predict a continuous-variable field rather than discrete classes. Second, we are able to establish uncertainty measures for the network output that are analogous to those of standard methods. We expect this approach to excel at capturing hard-to-model non-Gaussian astrophysical foreground and noise contributions.
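The forward operation the network inverts is gravitational lensing: the observed temperature map is the unlensed map remapped by the deflection field, the gradient of the lensing potential. Below is a minimal flat-sky sketch of that remapping in numpy; the function name, the nearest-pixel interpolation, and the unit pixel scale are illustrative simplifications, not the paper's simulation pipeline.

```python
import numpy as np

def lens_map(t_unlensed, phi, pix_size=1.0):
    """Toy flat-sky lensing: remap T(x) -> T(x + grad(phi)).

    t_unlensed : 2-D array, unlensed CMB temperature map
    phi        : 2-D array, lensing potential on the same grid
    """
    # deflection field alpha = grad(phi)
    gy, gx = np.gradient(phi, pix_size)
    ny, nx = t_unlensed.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    # nearest-pixel remapping keeps the sketch dependency-free;
    # a real pipeline would use higher-order interpolation
    ys = np.clip(np.rint(yy + gy / pix_size), 0, ny - 1).astype(int)
    xs = np.clip(np.rint(xx + gx / pix_size), 0, nx - 1).astype(int)
    return t_unlensed[ys, xs]
```

The reconstruction problem is the inverse: given the lensed map (plus noise), recover phi — which the QE, MLE, and ResUNet approaches solve with different trade-offs.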
Citations
Journal ArticleDOI
TL;DR: The review begins by covering fundamental concepts in ML and modern statistics such as the bias-variance tradeoff, overfitting, regularization, generalization, and gradient descent before moving on to more advanced topics in both supervised and unsupervised learning.

664 citations

Posted Content
TL;DR: In this article, the authors present the science case, reference design, and project plan for the Stage-4 ground-based cosmic microwave background experiment, CMB-S4.
Abstract: We present the science case, reference design, and project plan for the Stage-4 ground-based cosmic microwave background experiment CMB-S4.

362 citations

Journal ArticleDOI
TL;DR: A deep neural network is built, the Deep Density Displacement Model (D3M), which learns from a set of prerun numerical simulations, to predict the nonlinear large-scale structure of the Universe with the Zel’dovich Approximation (ZA), an analytical approximation based on perturbation theory, as the input.
Abstract: Matter evolved under the influence of gravity from minuscule density fluctuations. Nonperturbative structure formed hierarchically over all scales and developed non-Gaussian features in the Universe, known as the cosmic web. To fully understand the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and use a large ensemble of computer simulations to compare with the observed data to extract the full information of our own Universe. However, to evolve billions of particles over billions of years, even with the simplest physics, is a daunting task. We build a deep neural network, the Deep Density Displacement Model (D3M), which learns from a set of prerun numerical simulations, to predict the nonlinear large-scale structure of the Universe with the Zel'dovich Approximation (ZA), an analytical approximation based on perturbation theory, as the input. Our extensive analysis demonstrates that D3M outperforms the second-order perturbation theory (2LPT), the commonly used fast-approximate simulation method, in predicting cosmic structure in the nonlinear regime. We also show that D3M is able to accurately extrapolate far beyond its training data and predict structure formation for significantly different cosmological parameters. Our study proves that deep learning is a practical and accurate alternative to approximate 3D simulations of the gravitational structure formation of the Universe.
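The Zel'dovich Approximation that D3M takes as input moves each particle along its initial displacement field, scaled by the linear growth factor: x(q, t) = q + D(t) Psi(q). A minimal 1-D numpy sketch of that step, with the displacement derived from a toy potential, is below; the function name and the simple finite-difference gradient are illustrative assumptions.

```python
import numpy as np

def za_displace(q, phi, growth_factor, spacing=1.0):
    """Zel'dovich Approximation in 1-D: x = q + D(t) * Psi(q).

    q             : (N,) initial (Lagrangian) particle positions
    phi           : (N,) toy displacement potential sampled at q
    growth_factor : scalar linear growth factor D(t)
    """
    psi = -np.gradient(phi, spacing)   # displacement field Psi = -d(phi)/dq
    return q + growth_factor * psi
```

D3M then learns the correction from these ZA positions to the fully nonlinear N-body result.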

143 citations

Journal ArticleDOI
TL;DR: In this paper, the authors apply a particular ML method, genetic algorithms (GAs), to cosmological data describing the background expansion of the Universe, namely the Pantheon Type Ia supernovae and the Hubble expansion history datasets.
Abstract: Machine learning (ML) algorithms have revolutionized the way we interpret data in astronomy, particle physics, biology, and even economics, since they can remove biases due to a priori chosen models. Here we apply a particular ML method, genetic algorithms (GAs), to cosmological data that describe the background expansion of the Universe, namely the Pantheon Type Ia supernovae and the Hubble expansion history H(z) datasets. We obtain model-independent and nonparametric reconstructions of the luminosity distance d_L(z) and Hubble parameter H(z) without assuming any dark energy model or a flat Universe. We then estimate the deceleration parameter q(z), a measure of the acceleration of the Universe, and we make a ~4.5σ model-independent detection of the accelerated expansion, and we also place constraints on the transition redshift of the acceleration phase (z_tr = 0.662 ± 0.027). We also find a deviation from ΛCDM at high redshifts, albeit within the errors, hinting toward the recently alleged tension between the SnIa/quasar data and the cosmological constant ΛCDM model at high redshifts (z ≳ 1.5). Finally, we show the GA can be used in complementary null tests of ΛCDM via reconstructions of the Hubble parameter and the luminosity distance.
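The core GA loop is simple: keep a population of candidate fitting functions, select the best by chi-squared against the data, and mutate them to form the next generation. The sketch below evolves quadratic coefficients against mock H(z) points; the function name, the quadratic basis, the population size, and the mutation scale are all toy choices for illustration, not the paper's grammar-based GA.

```python
import numpy as np

def ga_fit(z, h_obs, sigma, n_pop=60, n_gen=80, seed=1):
    """Toy genetic algorithm: evolve coefficients (a, b, c) so that
    a + b*z + c*z**2 fits mock H(z) data points."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 50.0, size=(n_pop, 3))   # random initial population
    design = np.vander(z, 3, increasing=True)      # columns: 1, z, z**2

    def chi2(p):
        return np.sum(((design @ p.T).T - h_obs) ** 2 / sigma ** 2, axis=1)

    for _ in range(n_gen):
        order = np.argsort(chi2(pop))
        parents = pop[order[: n_pop // 2]]                       # selection: keep best half
        children = parents + rng.normal(0.0, 1.0, parents.shape)  # mutation
        pop = np.vstack([parents, children])
    return pop[np.argsort(chi2(pop))[0]]           # best individual found
```

Keeping the parents each generation (elitism) guarantees the best chi-squared never worsens.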

51 citations

Journal ArticleDOI
TL;DR: A pixel-based approach to implement convolutional and pooling layers on the spherical surface, similarly to what is commonly done for CNNs applied to Euclidean space, and demonstrates that CNNs are able to extract information from polarization fields, both in full-sky and masked maps, and to distinguish between E and B-modes in pixel space.
Abstract: We describe a novel method for the application of convolutional neural networks (CNNs) to fields defined on the sphere, using the Hierarchical Equal Area Latitude Pixelization scheme (HEALPix). Specifically, we have developed a pixel-based approach to implement convolutional and pooling layers on the spherical surface, similarly to what is commonly done for CNNs applied to Euclidean space. The main advantage of our algorithm is to be fully integrable with existing, highly optimized libraries for NNs (e.g., PyTorch, TensorFlow, etc.). We present two applications of our method: (i) recognition of handwritten digits projected on the sphere; (ii) estimation of a cosmological parameter from simulated maps of the cosmic microwave background (CMB). The latter represents the main target of this exploratory work, whose goal is to show the applicability of our CNN to CMB parameter estimation. We have built a simple NN architecture, consisting of four convolutional and pooling layers, and we have used it for all the applications explored herein. Concerning the recognition of handwritten digits, our CNN reaches an accuracy of ∼95%, comparable with other existing spherical CNNs, and this is true regardless of the position and orientation of the image on the sphere. For CMB-related applications, we tested the CNN on the estimation of a mock cosmological parameter, defining the angular scale at which the power spectrum of a Gaussian field projected on the sphere peaks. We estimated the value of this parameter directly from simulated maps, in several cases: temperature and polarization maps, presence of white noise, and partially covered maps. For temperature maps, the NN performances are comparable with those from standard spectrum-based Bayesian methods. For polarization, CNNs perform about a factor of four worse than standard algorithms.
Nonetheless, our results demonstrate, for the first time, that CNNs are able to extract information from polarization fields, both in full-sky and masked maps, and to distinguish between E- and B-modes in pixel space. Lastly, we have applied our CNN to the estimation of the Thomson scattering optical depth at reionization (τ) from simulated CMB maps. Even without any specific optimization of the NN architecture, we reach an accuracy comparable with standard Bayesian methods. This work represents a first step towards the exploitation of NNs in CMB parameter estimation and demonstrates the feasibility of our approach.
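The pixel-based convolution idea generalizes beyond HEALPix: each output pixel is a weighted sum over the pixel itself and a fixed-order list of its neighbors, so any pixelization with a neighbor table supports it. The numpy sketch below demonstrates this on a toy ring of pixels rather than an actual HEALPix grid, to stay dependency-free; the function name and the -1-for-missing-neighbor convention are illustrative assumptions.

```python
import numpy as np

def neighbor_conv(values, neighbors, weights):
    """Pixel-based convolution on an arbitrary pixelization.

    values    : (npix,) field sampled on the pixels
    neighbors : (npix, k) neighbor indices, -1 where a neighbor is missing
    weights   : (k + 1,) kernel -- weights[0] acts on the pixel itself
    """
    safe = np.clip(neighbors, 0, None)                     # valid indices for gather
    gathered = np.where(neighbors >= 0, values[safe], 0.0)  # zero-pad missing neighbors
    return weights[0] * values + gathered @ weights[1:]
```

On HEALPix, the neighbor table would come from the pixelization library; the weights are the learnable kernel shared across all pixels, just as in a planar CNN.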

37 citations

References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
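The reformulation is that each block learns a residual F(x) and outputs F(x) + x via an identity shortcut, so representing the identity mapping only requires driving F toward zero. A minimal dense-layer version in numpy (the real blocks are convolutional, and the function name is illustrative):

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the block learns the residual F, not the full mapping."""
    relu = lambda z: np.maximum(z, 0.0)
    f = relu(x @ w1) @ w2    # the learned residual F(x)
    return relu(f + x)       # identity shortcut added before the final nonlinearity
```

With both weight matrices at zero, the block reduces to the identity on nonnegative activations, which is why stacking many such blocks does not make optimization harder the way plain deep stacks do.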

123,388 citations

Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
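The architecture's key ingredients are the contracting path (features at lower resolution), the expanding path (upsampling back), and skip connections that concatenate high-resolution features into the expanding path for precise localization. A minimal 1-D, single-level numpy sketch of this data flow (the function name, dense layers in place of convolutions, and the 2× pooling factor are illustrative simplifications):

```python
import numpy as np

def unet_1d(x, down_w, up_w):
    """Minimal contracting/expanding path with one skip connection.

    x      : (length, c_in) input signal, length assumed even
    down_w : (c_in, c) contracting-path weights
    up_w   : (2*c, c_out) expanding-path weights after concatenation
    """
    relu = lambda z: np.maximum(z, 0.0)
    feat = relu(x @ down_w)                                   # contracting path
    pooled = feat.reshape(-1, 2, feat.shape[1]).mean(axis=1)  # 2x downsampling
    upsampled = np.repeat(pooled, 2, axis=0)                  # 2x upsampling
    skip = np.concatenate([upsampled, feat], axis=1)          # skip connection
    return relu(skip @ up_w)                                  # full-resolution output
```

The ResUNet used in the DeepCMB paper above combines this U-shaped layout with residual blocks on each level.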

49,590 citations


Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
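The autoencoder structure being described is a network squeezed through a small central layer and trained to reproduce its input; gradient descent then fine-tunes the weights. The numpy sketch below trains a tiny linear autoencoder from scratch on toy data (the function name, the linear layers, and the random initialization are illustrative; the paper's point is precisely that naive random initialization fails for *deep* nonlinear stacks, motivating its layer-wise pretraining):

```python
import numpy as np

def train_linear_autoencoder(x, code_dim, steps=200, lr=1e-2, seed=0):
    """Tiny linear autoencoder x -> code -> x_hat, trained by gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    w_enc = rng.normal(0.0, 0.1, (d, code_dim))
    w_dec = rng.normal(0.0, 0.1, (code_dim, d))
    losses = []
    for _ in range(steps):
        code = x @ w_enc                      # low-dimensional code
        x_hat = code @ w_dec                  # reconstruction
        err = x_hat - x
        losses.append(np.mean(err ** 2))
        # gradients of the mean squared reconstruction error
        g_dec = code.T @ err * (2.0 / (n * d))
        g_enc = x.T @ (err @ w_dec.T) * (2.0 / (n * d))
        w_dec -= lr * g_dec
        w_enc -= lr * g_enc
    return w_enc, w_dec, losses
```

In the linear case the optimal code spans the same subspace as PCA; the paper's deep nonlinear autoencoders are what beat PCA.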

16,717 citations

Posted Content
TL;DR: In this paper, the authors propose to let the model automatically (soft-)search for the parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
Abstract: Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
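The (soft-)search is an additive attention mechanism: score each source annotation against the current decoder state, softmax the scores into alignment weights, and form the context vector as the weighted sum of annotations. A minimal numpy sketch (the function name and weight shapes are illustrative; the paper additionally conditions the score on the previous decoder state inside an RNN):

```python
import numpy as np

def additive_attention(s, h, w_s, w_h, v):
    """Bahdanau-style soft search.

    s   : (d_s,) decoder state;  h : (src_len, d_h) source annotations
    w_s : (d_s, d_a), w_h : (d_h, d_a), v : (d_a,) learned score parameters
    """
    scores = np.tanh(s @ w_s + h @ w_h) @ v   # one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # soft alignment over the source
    return weights @ h, weights               # context vector, alignment weights
```

Because the context is recomputed for every target word, the model is not forced to compress the whole sentence into one fixed-length vector.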

14,077 citations
