TL;DR: In this article, an alternative calibration approach based on convolutional neural networks (CNNs) is proposed for extreme-ultraviolet (EUV) wavelength observations from space, and it is shown to reproduce the sounding-rocket calibration outcomes within a reasonable degree of accuracy.
Abstract:
Solar activity plays a quintessential role in influencing the interplanetary medium and space weather around the Earth. Remote-sensing instruments onboard heliophysics space missions provide a wealth of information about the Sun's activity via measurements of its magnetic field and of the light emitted by the multi-layered, multi-thermal, and dynamic solar atmosphere. Extreme UV (EUV) wavelength observations from space help in understanding the subtleties of the outer layers of the Sun, namely the chromosphere and the corona. Unfortunately, such instruments, like the Atmospheric Imaging Assembly (AIA) onboard NASA's Solar Dynamics Observatory (SDO), suffer from time-dependent degradation that reduces their sensitivity. Current state-of-the-art calibration techniques rely on periodic sounding-rocket flights, which are infrequent and infeasible for deep-space missions. We present an alternative calibration approach based on convolutional neural networks (CNNs), using SDO-AIA data for our analysis. Our results show that CNN-based models can comprehensively reproduce the sounding rocket experiments' outcomes within a reasonable degree of accuracy, indicating that they perform on par with current techniques. Furthermore, a comparison with a standard "astronomer's technique" baseline model reveals that the CNN approach significantly outperforms this baseline. Our approach establishes the framework for a novel technique to calibrate EUV instruments and to advance our understanding of the cross-channel relation between different EUV channels.
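The degradation problem described in the abstract can be illustrated with a toy model: if the instrument's sensitivity loss acts as a multiplicative dimming factor on each image, a simple baseline (in the spirit of the "astronomer's technique" mentioned above, though the exact baseline used in the paper may differ) estimates that factor by assuming a robust brightness statistic stays constant over time. The sketch below uses a synthetic image and NumPy; all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" EUV image (hypothetical stand-in; real AIA frames
# are 4096x4096 count images) and a time-dependent dimming factor.
true_image = rng.lognormal(mean=3.0, sigma=1.0, size=(64, 64))
degradation = 0.6                      # instrument has lost 40% sensitivity
observed = true_image * degradation

# Baseline assumption: the median brightness of the full-disk image is
# roughly constant over time, so the ratio of observed to reference
# median estimates the degradation factor.
reference_median = np.median(true_image)   # from an early, calibrated epoch
estimated_d = np.median(observed) / reference_median

# Correct the observed image back to reference sensitivity.
corrected = observed / estimated_d
print(round(estimated_d, 3))  # recovers 0.6 for this toy model
```

The CNN approach replaces this constant-brightness assumption with a model that predicts the degradation factor from spatial patterns in the (multichannel) images themselves.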
TL;DR: In this article, a new model for predicting whether the Disturbance storm time (Dst) index exceeds −100 nT, with a lead time between 1 and 3 days, was developed using an ensemble of CNNs trained on SoHO images.
TL;DR: In this article, the authors introduce the application of intelligent IoT technologies in meteorological science, provide a comprehensive review of current studies on meteorological observation, forecasting, and services with intelligent IoT, and elaborate on the open problems and future challenges.
TL;DR: In this article, the Deep Solar ALMA Neural Network Estimator (Deep-SANNE) is trained to improve the resolution and contrast of solar observations by recognizing dynamic patterns in both the spatial and temporal domains of small-scale features at an angular resolution corresponding to observational data, correlating them with highly resolved, nondegraded data from magnetohydrodynamic simulations.
TL;DR: The Atmospheric Imaging Assembly (AIA), as discussed by the authors, provides multiple simultaneous high-resolution full-disk images of the corona and transition region up to 0.5 R⊙ above the solar limb with 1.5-arcsec spatial resolution and 12-second temporal resolution.
TL;DR: This paper proposed WaveNet, a deep neural network for generating audio waveforms, which is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones.
TL;DR: The advantages of open source in achieving the goals of the scikit-image library are highlighted, and several real-world image processing applications that use scikit-image are showcased.
TL;DR: The Solar Dynamics Observatory (SDO) was launched on 11 February 2010 at 15:23 UT from Kennedy Space Center aboard an Atlas V 401 (AV-021) launch vehicle as mentioned in this paper.
TL;DR: In this paper, a Monte Carlo sampler (The Joker) is used to perform a search for companions to 96,231 red-giant stars observed in the APOGEE survey (DR14) with ≥ 3 spectroscopic epochs.
Q1. What are the contributions mentioned in the paper "Multichannel autocalibration for the atmospheric imaging assembly using machine learning" ?
The authors aim to develop a novel method based on machine learning (ML) that exploits spatial patterns on the solar surface across multiwavelength observations to autocalibrate the instrument degradation. Their approach establishes the framework for a novel technique based on CNNs to calibrate EUV instruments. The dataset was augmented by randomly degrading images at each epoch, with the training dataset spanning months that do not overlap with the test dataset. Moreover, the multichannel CNN outperforms the single-channel CNN, which suggests that cross-channel relations between different EUV channels are important for recovering the degradation profiles. The authors envision that this technique can be adapted to other imaging or spectral instruments operating at other wavelengths.
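The augmentation step described above, randomly degrading images so the network sees many dimming factors and learns to regress them, can be sketched as follows. The function name, the uniform sampling range, and the batch shapes are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(42)

def randomly_degrade(image, rng, low=0.1, high=1.0):
    """Multiply an image by a random dimming factor and return both the
    degraded image and the factor, which serves as the CNN's regression
    target during training."""
    factor = rng.uniform(low, high)
    return image * factor, factor

# Hypothetical batch of normalized single-channel EUV images.
batch = rng.random((4, 32, 32))
pairs = [randomly_degrade(img, rng) for img in batch]
degraded_images = np.stack([p[0] for p in pairs])
targets = np.array([p[1] for p in pairs])
print(degraded_images.shape, targets.round(3))
```

Because a fresh factor is drawn at every epoch, the same solar image yields many distinct (input, target) pairs, which is what lets the model learn degradation independently of time.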
Q2. What are the future works mentioned in the paper "Multichannel autocalibration for the atmospheric imaging assembly using machine learning" ?
The authors showed that the CNNs learned representations that make use of the different features within solar images, but further work is needed to establish a more detailed interpretation. The paper also presents a unique possibility of autocalibrating deep-space instruments such as those onboard the STEREO spacecraft and the recently launched remote-sensing instrument called the Extreme Ultraviolet Imager (Rochus et al. 2020) on board the Solar Orbiter satellite (Müller et al. 2020), which are too far away from Earth to be calibrated using a traditional method such as sounding rockets. This is particularly promising, given that no time information has been used in training the models. The authors wish to thank IBM for providing computing power through access to the Accelerated Computing Cloud, as well as NASA, Google Cloud, and Lockheed Martin for supporting this project.