scispace - formally typeset

Showing papers on "White noise published in 2019"


Journal ArticleDOI
TL;DR: Comparisons illustrate the superiority of SP over kurtosis for selecting the sensitive mode from the resulting signal of CCEEMDAN, and the superiority of the proposed method over two other popular signal-processing methods, variational mode decomposition and the fast kurtogram.
Abstract: A novel time–frequency analysis method called complementary complete ensemble empirical mode decomposition with adaptive noise (CCEEMDAN) is proposed to analyze nonstationary vibration signals. CCEEMDAN combines the advantages of improved ensemble empirical mode decomposition (EEMD) with adaptive noise and complementary EEMD, and it improves decomposition performance by reducing reconstruction error and mitigating the effect of mode mixing. However, because the white noise mixed into the raw vibration signal covers the whole frequency bandwidth, each mode inevitably contains some mode noise, which can easily inundate the fault-related information. This paper proposes a time–frequency analysis method based on CCEEMDAN and minimum entropy deconvolution (MED) for fault detection of rolling element bearings. First, a raw signal is decomposed into a series of intrinsic mode functions (IMFs) by the CCEEMDAN method. Then a sensitive parameter (SP) based on adjusted kurtosis and Pearson's correlation coefficient is applied to select the sensitive mode that contains the most fault-related information. Finally, MED is applied to enhance the fault-related impulses in the selected IMF. Fault signals from a high-speed train axle-box bearing are used to verify the effectiveness of the proposed method. Results show that the proposed method can effectively reveal the fault information of axle-bearing defects. The comparisons illustrate the superiority of SP over kurtosis for selecting the sensitive mode from the resulting signal of CCEEMDAN. Further comparisons highlight the superiority of the proposed method over the individual CCEEMDAN and MED methods and over two other popular signal-processing methods, variational mode decomposition and the fast kurtogram.
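The mode-selection step can be sketched numerically. The exact form of the SP is not given above, so the combination below (kurtosis of each IMF weighted by its absolute Pearson correlation with the raw signal) is an illustrative assumption, not the paper's formula:

```python
import numpy as np

def kurtosis(x):
    # Sample kurtosis (non-excess): E[(x - mu)^4] / sigma^4
    x = np.asarray(x, dtype=float)
    mu, sig = x.mean(), x.std()
    return np.mean((x - mu) ** 4) / sig ** 4

def sensitive_parameter(imf, raw):
    # Illustrative SP: kurtosis of the IMF weighted by its absolute Pearson
    # correlation with the raw signal (the exact weighting is an assumption).
    r = np.corrcoef(imf, raw)[0, 1]
    return kurtosis(imf) * abs(r)

def select_sensitive_imf(imfs, raw):
    scores = [sensitive_parameter(imf, raw) for imf in imfs]
    return int(np.argmax(scores)), scores

# Toy demo: an impulsive "fault" component scores far higher than a smooth one.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
impulses = np.zeros_like(t)
impulses[::200] = 5.0                      # periodic fault impulses
imfs = [np.sin(2 * np.pi * 50 * t),        # smooth harmonic component
        impulses + 0.1 * rng.standard_normal(t.size)]
raw = imfs[0] + imfs[1]
idx, scores = select_sensitive_imf(imfs, raw)
print(idx)  # index of the impulsive component
```

In practice the IMFs would come from CCEEMDAN rather than being constructed by hand; the point is only that an impulse-carrying component dominates the score.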

102 citations


Journal ArticleDOI
TL;DR: The results of the empirical study suggest that one of the proposed models, namely SARIMA_SVR3, achieves better accuracy than the other methods, and show that incorporating Gaussian white noise increases forecasting accuracy.
Abstract: In this study, a novel SARIMA-SVR model is proposed to forecast statistical indicators in the aviation industry that can be used later for capacity management and planning purposes. First, the time series is analysed by SARIMA. Then, the Gaussian white noise is back-calculated from the fitted model. Next, four hybrid models are proposed and applied to forecast future statistical indicators in the aviation industry. The results of the empirical study suggest that one of the proposed models, namely SARIMA_SVR3, achieves better accuracy than the other methods, and show that incorporating the Gaussian white noise increases forecasting accuracy.
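The "reverse calculation" of the white-noise residuals can be illustrated with a much simpler linear model standing in for SARIMA (the SARIMA and SVR stages themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic AR(1) series standing in for a (de-seasonalized) indicator.
n, phi = 500, 0.8
eps = rng.standard_normal(n)              # the underlying Gaussian white noise
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

# Stage 1: fit the linear model (stand-in for SARIMA) by least squares.
phi_hat = np.linalg.lstsq(y[:-1, None], y[1:], rcond=None)[0][0]

# Stage 2: back-calculate the white-noise residuals; in the hybrid scheme
# these would be modeled further (e.g. by an SVR) and the two forecasts
# recombined.
resid = y[1:] - phi_hat * y[:-1]
print(round(float(phi_hat), 2), round(float(resid.std()), 2))
```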

94 citations


Journal ArticleDOI
TL;DR: The distribution of a typical additive white Gaussian noise channel is successfully approximated by using the proposed GAN-based channel modeling framework, thus verifying its good performance and effectiveness.
Abstract: In modern wireless communication systems, wireless channel modeling has always been a fundamental task in system design and performance optimization. Traditional channel modeling methods, such as ray-tracing and geometry-based stochastic channel models, require in-depth domain-specific knowledge and technical expertise in radio signal propagation across electromagnetic fields. To avoid these difficulties and complexities, a novel generative adversarial network (GAN) framework is proposed for the first time to address the problem of autonomous wireless channel modeling without complex theoretical analysis or data processing. Specifically, the GAN is trained on raw measurement data to reach the Nash equilibrium of a minimax game between a channel data generator and a channel data discriminator. Once this process converges, the resulting channel data generator is extracted as the target channel model for a specific application scenario. To demonstrate, the distribution of a typical additive white Gaussian noise channel is successfully approximated by the proposed GAN-based channel modeling framework, verifying its good performance and effectiveness.

85 citations


Journal ArticleDOI
TL;DR: Generally, the positioning performance of PPP in terms of convergence time and positioning accuracy with the final products from CODE, CNES, and WHU is comparable among the three ISB handling schemes; however, estimating ISBs as a random walk or white noise process outperforms estimating them as a random constant when using the GFZ products.
Abstract: The focus of this study is proper modeling of the dynamics of inter-system biases (ISBs) in multi-constellation Global Navigation Satellite System (GNSS) precise point positioning (PPP) processing. First, a theoretical derivation demonstrates that the ISBs originate not only from the receiver-dependent hardware delay differences among different GNSSs but also from the receiver-independent time differences caused by the different clock datum constraints among different GNSS satellite clock products. Afterward, a comprehensive evaluation of the influence of ISB stochastic modeling on undifferenced and uncombined PPP performance is conducted; random constant, random walk process, and white noise process models are considered. We use 1 month of data (September 2017) with Multi-GNSS Experiment (MGEX) precise orbit and clock products from four analysis centers (CODE, GFZ, CNES, and WHU) and 160 MGEX tracking stations. The results demonstrate that, generally, the positioning performance of PPP in terms of convergence time and positioning accuracy with the final products from CODE, CNES, and WHU is comparable among the three ISB handling schemes. However, estimating ISBs as a random walk or white noise process outperforms estimating them as a random constant when using the GFZ products. These results indicate that the traditional estimation of ISBs as a random constant may not always be reasonable in multi-GNSS PPP processing. To achieve more reliable positioning results, it is highly recommended to model the ISBs as a random walk or white noise process in multi-GNSS PPP processing.
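The three ISB handling schemes correspond to three process-noise settings for one state in a Kalman filter. A minimal scalar sketch, with illustrative (not the paper's) noise values:

```python
import numpy as np

def kalman_isb(z, r, scheme, q_rw=1e-4):
    # Scalar Kalman filter for one inter-system bias (ISB) under the three
    # stochastic models: 'constant' (q = 0), 'random_walk' (small q), or
    # 'white_noise' (state variance reset each epoch).
    x, p = 0.0, 1e4
    est = []
    for zi in z:
        if scheme == 'white_noise':
            p = 1e4                      # re-estimate from scratch each epoch
        elif scheme == 'random_walk':
            p += q_rw                    # allow slow variation between epochs
        # 'constant': p unchanged (process noise q = 0)
        k = p / (p + r)
        x += k * (zi - x)
        p *= 1 - k
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(2)
truth = 3.0 + np.cumsum(0.01 * rng.standard_normal(200))   # slowly drifting ISB
z = truth + 0.05 * rng.standard_normal(200)                # epoch-wise estimates
rmse = {}
for scheme in ('constant', 'random_walk', 'white_noise'):
    err = kalman_isb(z, r=0.05 ** 2, scheme=scheme) - truth
    rmse[scheme] = float(np.sqrt(np.mean(err[50:] ** 2)))
print({k: round(v, 3) for k, v in rmse.items()})
```

For a slowly drifting bias, the random-walk setting tracks the truth more closely than the white-noise setting, which simply re-estimates the ISB from each epoch's measurement.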

77 citations


Journal ArticleDOI
TL;DR: An improved feed-forward denoising convolutional neural network (DnCNN) is proposed to suppress random noise in desert seismic data and can open a new direction in the area of seismic data processing.
Abstract: High-quality seismic data are the basis for stratigraphic imaging and interpretation, but random noise can greatly degrade the quality of seismic data. At present, most understanding and processing of random noise still stays at the level of Gaussian white noise. As resources are depleted, newly acquired seismic data have lower signal-to-noise ratios and more complex noise characteristics. In particular, the random noise in desert areas has the characteristics of low frequency, non-Gaussianity, nonstationarity, high energy, and serious aliasing between the effective signal and random noise in the frequency domain, which brings great difficulties to the recovery of seismic events by conventional denoising methods. To solve this problem, an improved feed-forward denoising convolutional neural network (DnCNN) is proposed to suppress random noise in desert seismic data. DnCNN features automatic feature extraction and blind denoising. According to the characteristics of desert noise, we modify the original DnCNN in terms of patch size, convolution kernel size, network depth, and training set to make it suitable for low-frequency and non-Gaussian desert noise suppression. Both simulated and practical experiments prove that the improved DnCNN has obvious advantages in desert noise and surface wave suppression as well as effective signal amplitude preservation. In addition, the improved DnCNN, in contrast to existing methods, has considerable potential to benefit from large data sets. Therefore, we believe that it can open a new direction in the area of seismic data processing.

73 citations


Journal ArticleDOI
TL;DR: DeepDenoiser as discussed by the authors uses a deep neural network to simultaneously learn a sparse representation of data in the time-frequency domain and a non-linear function that maps this representation into masks that decompose input data into a signal of interest and noise.
Abstract: Frequency filtering is widely used in routine processing of seismic data to improve the signal-to-noise ratio (SNR) of recorded signals and by doing so to improve subsequent analyses. In this paper, we develop a new denoising/decomposition method, DeepDenoiser, based on a deep neural network. This network is able to simultaneously learn a sparse representation of data in the time–frequency domain and a non-linear function that maps this representation into masks that decompose input data into a signal of interest and noise (defined as any non-seismic signal). We show that DeepDenoiser achieves impressive denoising of seismic signals even when the signal and noise share a common frequency band. Because the noise statistics are automatically learned from data and require no assumptions, our method properly handles white noise, a variety of colored noise, and non-earthquake signals. DeepDenoiser can significantly improve the SNR with minimal changes in the waveform shape of interest, even in the presence of high noise levels. We demonstrate the effect of our method on improving earthquake detection. There are clear applications of DeepDenoiser to seismic imaging, micro-seismic monitoring, and preprocessing of ambient noise data. We also note that the potential applications of our approach are not limited to these applications or even to earthquake data and that our approach can be adapted to diverse signals and applications in other settings.
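The core idea of decomposing time-frequency coefficients with a mask can be sketched with an oracle ratio mask; DeepDenoiser's network learns to predict such masks from the noisy input alone, whereas here the mask is computed with oracle access to signal and noise for illustration:

```python
import numpy as np

def stft(x, nfft=256, hop=128):
    # Simple short-time Fourier transform (Hann window; no inverse needed here).
    w = np.hanning(nfft)
    starts = range(0, len(x) - nfft + 1, hop)
    return np.array([np.fft.rfft(w * x[i:i + nfft]) for i in starts])

rng = np.random.default_rng(3)
n = 8192
t = np.arange(n) / 100.0
clean = np.exp(-((t - 40) ** 2) / 20) * np.sin(2 * np.pi * 5 * t)  # "event"
noise = 0.5 * rng.standard_normal(n)                               # white noise
S, N = stft(clean), stft(noise)
X = S + N

# Oracle ratio mask: the quantity the denoising network is trained to
# approximate from the noisy spectrogram alone.
M = np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12)
err_raw = np.mean(np.abs(X - S) ** 2)
err_masked = np.mean(np.abs(M * X - S) ** 2)
print(err_masked < err_raw)  # masking shrinks the time-frequency-domain error
```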

72 citations


Proceedings ArticleDOI
11 Oct 2019
TL;DR: This work introduces the first contactless system that uses white noise to achieve motion and respiratory monitoring in infants and demonstrates that the respiratory rate computed by the system is highly correlated with the ground truth with a correlation coefficient of 0.938.
Abstract: White noise machines are among the most popular devices to facilitate infant sleep. We introduce the first contactless system that uses white noise to achieve motion and respiratory monitoring in infants. Our system is designed for smart speakers that can monitor an infant's sleep using white noise. The key enabler underlying our system is a set of novel algorithms that can extract the minute infant breathing motion as well as position information from white noise, which is random in both the time and frequency domains. We describe the design and implementation of our system, and present experiments with a life-like infant simulator as well as a clinical study in the neonatal intensive care unit with five newborn infants. Our study demonstrates that the respiratory rate computed by our system is highly correlated with the ground truth, with a correlation coefficient of 0.938.
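A minimal sketch of the last stage of such a pipeline: once the white-noise echoes have been demodulated into a motion signal (the hard part, not reproduced here), the respiratory rate can be read off as the dominant low-frequency spectral peak. All parameter values are assumptions for illustration:

```python
import numpy as np

fs = 50.0                         # motion-signal sample rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)      # one minute of data
rate_true = 0.7                   # 0.7 Hz = 42 breaths/min (infant range)
rng = np.random.default_rng(4)
motion = np.sin(2 * np.pi * rate_true * t) + 0.3 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(motion))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.3) & (freqs < 1.5)          # plausible infant breathing band
rate_est = freqs[band][np.argmax(spec[band])]
print(round(rate_est * 60))  # breaths per minute
```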

69 citations


Journal ArticleDOI
TL;DR: The improved ensemble local mean decomposition method is an effective method for extracting composite fault features; it avoids pseudocomponents and reduces the amount of calculation.
Abstract: In industrial production, it is essential to extract gearbox faults accurately; in a strong noise environment in particular, it is difficult to extract the fault features accurately. LMD (local mean decomposition) is widely used as an adaptive decomposition method in fault diagnosis. ELMD (ensemble local mean decomposition) was proposed to mitigate the mode mixing of LMD in noisy environments, but the white noise added in ELMD cannot be completely neutralized, so residual noise contaminates the PF (product function) components and increases the reconstruction error. Therefore, this paper proposes a composite fault diagnosis method for gearboxes based on an improved ensemble local mean decomposition. The idea is to add white noise in pairs to optimize ELMD, defined as CELMD (complementary ensemble local mean decomposition), then remove the high-noise components identified by PE (permutation entropy) while applying the SG (Savitzky-Golay) filter to smooth the low-level noise in the remaining PFs. The method is applied to both simulated and experimental signals, where it overcomes the mode mixing phenomenon and reduces reconstruction error, while avoiding pseudocomponents and reducing the amount of calculation. Comparisons with LMD, ELMD, CELMD, and CELMDAN show that the improved ensemble local mean decomposition method is effective for extracting composite fault features.
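Permutation entropy, used above to flag high-noise PF components, is straightforward to compute (the threshold for "high noise" is application-specific and not shown):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, m=3, tau=1):
    # Normalized permutation entropy (Bandt-Pompe): entropy of the
    # distribution of ordinal patterns of length m, scaled to [0, 1].
    counts = {}
    n = len(x) - (m - 1) * tau
    for i in range(n):
        pat = tuple(np.argsort(x[i:i + m * tau:tau]))
        counts[pat] = counts.get(pat, 0) + 1
    probs = np.array(list(counts.values())) / n
    h = -np.sum(probs * np.log(probs))
    return h / log(factorial(m))

rng = np.random.default_rng(5)
noise = rng.standard_normal(1000)                       # PE close to 1
tone = np.sin(2 * np.pi * 0.01 * np.arange(1000))       # PE much lower
print(round(permutation_entropy(noise), 2), round(permutation_entropy(tone), 2))
```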

63 citations


Journal ArticleDOI
TL;DR: This paper proves the convergence of solutions of Wong–Zakai approximations and the upper semicontinuity of random attractors of the approximate random system as the size of approximation approaches zero.
Abstract: In this paper we study the Wong–Zakai approximations given by a stationary process via the Wiener shift and their associated long term pathwise behavior for the stochastic partial differential equations driven by a white noise. We prove that the approximate equation has a pullback random attractor under much weaker conditions than the original stochastic equation. When the stochastic partial differential equation is driven by a linear multiplicative noise or additive white noise, we prove the convergence of solutions of Wong–Zakai approximations and the upper semicontinuity of random attractors of the approximate random system as the size of approximation approaches zero.

54 citations


Journal ArticleDOI
TL;DR: A receiver clock offset model is presented that considers the correlation of the receiver clock offsets between adjacent epochs using an a priori value and concludes that all RT-PPP solutions with different real-time products are capable of time transfer.
Abstract: Thanks to the International GNSS Service (IGS), which has provided an open-access real-time service (RTS) since 2013, real-time precise point positioning (RT-PPP) has become a major topic in the time community. To date, few scholars have studied RT-PPP time transfer, and the correlation of the receiver clock offsets between adjacent epochs has not been considered. We present a receiver clock offset model that considers this correlation using an a priori value. The clock offset is estimated using a between-epoch constraint model rather than a white noise model. This approach is based on two steps. First, the a priori noise variance is derived from the Allan variance of the receiver clock offset obtained from GPS PPP solutions with IGS final products. Second, by applying the between-epoch constraint model, RT-PPP time transfer is achieved. Our numerical analyses clarify how the approach performs for RT-PPP time and frequency transfer. Based on five commonly used RTS products and six IGS stations, two conclusions emerge. First, all RT-PPP solutions with different real-time products are capable of time transfer. The standard deviation (STD) values of the clock difference between the PPP solutions and the IGS final clock products are less than 0.3 ns. Second, the STD values are reduced significantly by applying our approach: the reduction ranges from 4.0% to 35.5%. Moreover, the largest improvement in frequency stability is a factor of 12 compared with the white noise model solution. Note that the receiver clock offset from the IGS final clock products is regarded as the reference.
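The first step, characterizing the receiver clock with the Allan variance, can be sketched as follows. The overlapping Allan deviation of a simulated white-frequency-noise clock falls as 1/√τ, and it is this measured noise level that sizes the between-epoch constraint (values below are illustrative):

```python
import numpy as np

def overlapping_adev(x, tau0, m):
    # Overlapping Allan deviation of phase data x (seconds) at tau = m * tau0.
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d ** 2) / (2 * (m * tau0) ** 2))

rng = np.random.default_rng(6)
tau0 = 30.0                                   # epoch interval (s), assumed
freq = 1e-12 * rng.standard_normal(100000)    # white fractional frequency noise
phase = np.cumsum(freq) * tau0                # clock offset (s)
a1 = overlapping_adev(phase, tau0, 1)
a10 = overlapping_adev(phase, tau0, 10)
print(a1 / a10)   # ~ sqrt(10) for white frequency noise
```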

53 citations


Journal ArticleDOI
TL;DR: To guarantee synchronisation of SCSTLNC, several useful criteria are obtained by applying some techniques of inequalities, such as a graph-theoretic approach, a hierarchical approach, and the theory of asymptotically autonomous systems.
Abstract: This study considers the synchronisation of stochastic-coupled systems with time-varying delays and Lévy noise on networks without strong connectedness (SCSTLNC) through periodically intermittent control. Internal delays, coupling delay, white noise, and Lévy noise are all considered in SCSTLNC. To guarantee synchronisation of SCSTLNC, several useful criteria are obtained by applying techniques of inequalities, such as a graph-theoretic approach, a hierarchical approach, and the theory of asymptotically autonomous systems. The intensity of control is closely related to the coupling strength and the perturbed intensity of noise. In particular, the synchronisation of stochastic-coupled oscillators with time-varying delays and Lévy noise on networks without strong connectedness is investigated as a practical application of the theoretical results. Finally, a numerical example of oscillator networks is provided to demonstrate the validity and feasibility of the analytical results.

Journal ArticleDOI
27 Nov 2019-Sensors
TL;DR: RDE takes PE as its theoretical basis and combines the advantages of DE and RPE by introducing amplitude information and distance information, and shows higher distinguishing ability than the other four kinds of PE for sensor signals.
Abstract: Permutation entropy (PE), one of the most powerful complexity measures for analyzing time series, has the advantages of easy implementation and high efficiency. To improve the performance of PE, several refined PE methods have been proposed in recent years by introducing amplitude information and distance information. Weighted-permutation entropy (W-PE) weights each arrangement pattern using variance information; it has good robustness and stability at high noise levels and can extract complexity information from data with spike features or abrupt amplitude changes. Dispersion entropy (DE) introduces amplitude information through the normal cumulative distribution function (NCDF); it can not only detect simultaneous changes of frequency and amplitude but is also superior to PE in distinguishing different data sets. Reverse permutation entropy (RPE) is defined as the distance to white noise, trending oppositely to PE and W-PE, and has high stability for time series with varying lengths. To further improve the performance of PE, we propose a new complexity measure for analyzing time series, termed reverse dispersion entropy (RDE). RDE takes PE as its theoretical basis and combines the advantages of DE and RPE by introducing amplitude information and distance information. Simulation experiments were carried out on simulated and sensor signals, including mutation-signal detection under different parameters, noise robustness testing, stability testing under different signal-to-noise ratios (SNRs), and distinguishing real data for different kinds of ships and faults. The experimental results show that, compared with PE, W-PE, RPE, and DE, RDE performs better in abrupt-signal detection and noise robustness testing and has better stability for simulated and sensor signals. Moreover, it shows higher distinguishing ability than the other four kinds of PE for sensor signals.
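A sketch of RDE as we read its construction (details of the published definition may differ): map the series through the NCDF into c classes, count dispersion patterns of length m, and measure the squared distance of the pattern distribution from the uniform distribution that white noise would produce:

```python
import numpy as np
from math import erf, sqrt

def rde(x, m=2, c=4):
    # Illustrative reverse dispersion entropy: near 0 for white noise,
    # larger for structured signals. Uses the identity
    # sum_i (p_i - u)^2 = sum_i p_i^2 - u with u = 1 / c**m.
    x = np.asarray(x, float)
    y = np.array([0.5 * (1 + erf((v - x.mean()) / (x.std() * sqrt(2))))
                  for v in x])                           # NCDF map to (0, 1)
    z = np.clip(np.ceil(y * c).astype(int), 1, c)        # classes 1..c
    n = len(z) - m + 1
    counts = {}
    for i in range(n):
        pat = tuple(z[i:i + m])                          # dispersion pattern
        counts[pat] = counts.get(pat, 0) + 1
    probs = np.array(list(counts.values())) / n
    return float(np.sum(probs ** 2) - 1.0 / c ** m)

rng = np.random.default_rng(5)
noise = rng.standard_normal(2000)
tone = np.sin(2 * np.pi * 0.01 * np.arange(2000))
print(round(rde(noise), 3), round(rde(tone), 3))
```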

Journal ArticleDOI
TL;DR: In this article, an explicit temporal splitting numerical scheme for the stochastic Allen-Cahn equation driven by additive noise was proposed, in a bounded spatial domain with smooth boundary in dimension d ≤ 3.
Abstract: This article analyzes an explicit temporal splitting numerical scheme for the stochastic Allen-Cahn equation driven by additive noise, in a bounded spatial domain with smooth boundary in dimension d ≤ 3. The splitting strategy is combined with an exponential Euler scheme for an auxiliary problem. When d = 1 and the driving noise is a space-time white noise, we first show some a priori estimates of this splitting scheme. Using the monotonicity of the drift nonlinearity, we then prove that under very mild assumptions on the initial data, this scheme achieves the optimal strong convergence rate O(δt^{1/4}). When d ≤ 3 and the driving noise possesses some regularity in space, we study exponential integrability properties of the exact and numerical solutions. Finally, in dimension d = 1, these properties are used to prove that the splitting scheme has a strong convergence rate O(δt).
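In the 1-D case the splitting step can be written out explicitly. The Allen-Cahn nonlinearity u − u³ has an exact flow, which is composed with an exponential Euler step for the linear stochastic part; the notation below is ours, and details may differ from the scheme analyzed in the paper:

```latex
% Exact flow of the nonlinear part \dot u = u - u^3, u(0) = \xi:
\Phi_t(\xi) = \frac{\xi}{\sqrt{\xi^2 + (1 - \xi^2)\, e^{-2t}}}
% One step of the splitting scheme (exponential Euler for the linear part):
u_{n+1} = e^{\delta t\, \Delta}\, \Phi_{\delta t}(u_n)
        + \int_{t_n}^{t_{n+1}} e^{(t_{n+1} - s)\Delta}\, \mathrm{d}W(s)
```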

Journal ArticleDOI
TL;DR: This work uses sparse Bayesian learning (SBL) to learn a common sparsity profile corresponding to the location of present sources, and demonstrates improved source localization performance when compared to the white noise gain constraint (–3 dB) and Bartlett processors.
Abstract: Matched field processing (MFP) compares the measured and modeled pressure fields received at an array of sensors to localize a source in an ocean waveguide. Typically, there are only a few sources compared with the number of candidate source locations, or range-depth cells. We use sparse Bayesian learning (SBL) to learn a common sparsity profile corresponding to the locations of the present sources. SBL performance is compared with traditional processing in simulations and on experimental ocean acoustic data. Specifically, we localize a quiet source in the presence of a surface interferer in a shallow-water environment. This multi-frequency scenario requires adaptive processing and includes modest environmental and sensor-position mismatch in the MFP model. The noise process likely changes with time and is modeled as a nonstationary Gaussian process, meaning that the noise variance changes across snapshots. The adaptive SBL algorithm models the complex source amplitudes as random quantities, providing robustness to amplitude and phase errors in the model. This is demonstrated with experimental data, where SBL exhibits improved source localization performance compared with the white noise gain constraint (–3 dB) and Bartlett processors.
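The Bartlett processor used as a baseline above is simply the normalized match between the measured field and each modeled replica. A toy plane-wave sketch (a real MFP replica would come from an acoustic propagation model, not the closed form used here):

```python
import numpy as np

def bartlett(d, replicas):
    # Bartlett (linear) matched-field processor: |w^H d|^2 with unit-norm
    # data and replica vectors.
    d = d / np.linalg.norm(d)
    out = []
    for w in replicas:
        w = w / np.linalg.norm(w)
        out.append(abs(np.vdot(w, d)) ** 2)
    return np.array(out)

# Toy "waveguide": plane-wave replicas on a 16-element vertical array.
nsensors = 16
angles = np.linspace(-30, 30, 61)         # candidate cells (here: directions)
k = 2 * np.pi / 4.0                       # wavenumber for 4 m wavelength
z = np.arange(nsensors) * 2.0             # 2 m element spacing
replicas = [np.exp(1j * k * z * np.sin(np.deg2rad(a))) for a in angles]
rng = np.random.default_rng(7)
true_idx = 40                             # true source cell
d = replicas[true_idx] + 0.2 * (rng.standard_normal(nsensors)
                                + 1j * rng.standard_normal(nsensors))
power = bartlett(d, replicas)
print(int(np.argmax(power)))  # near the true cell index (40)
```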

Journal ArticleDOI
TL;DR: This study gives explicit closed-form expressions of the stochastic Cramer–Rao bounds (STO-CRBs) on direction-of-departure and direction- of-arrival estimation accuracies for collocated multiple-input–multiple-output (MIMO) radar with unknown spatially coloured noise.
Abstract: This study gives explicit closed-form expressions of the stochastic Cramer–Rao bounds (STO-CRBs) on direction-of-departure and direction-of-arrival estimation accuracies for collocated multiple-input–multiple-output (MIMO) radar with unknown spatially coloured noise. In some special cases, i.e. the CRB of direction estimation accuracy for monostatic MIMO radar, the white noise scenario is discussed. Theoretical comparisons between the STO-CRBs and the deterministic ones are presented. Finally, these bounds are numerically compared.

Journal ArticleDOI
TL;DR: In this article, the authors investigate whether alternative noise models should be considered using the log-likelihood, Akaike and Bayesian information criteria, and find that for 80-90% of the stations, the preferred noise models are still the power law or flicker noise with white noise.
Abstract: The accuracy by which velocities can be estimated from GNSS time series is mainly determined by the low-frequency noise, below 0.2–0.1 cpy, which are normally described by a power-law model. As GNSS observations have now been recorded for over two decades, new information about the noise at these low frequencies has become available and we investigate whether alternative noise models should be considered using the log-likelihood, Akaike and Bayesian information criteria. Using 110 globally distributed IGS stations with at least 12 years of observations, we find that for 80–90% of them the preferred noise models are still the power law or flicker noise with white noise. For around 6% of the stations, we found the presence of random-walk noise, which increases the linear trend uncertainty when taken into account in the stochastic noise model of the time series by about a factor of 1.5 to 8.4, in agreement with previous studies. Next, the Generalised Gauss–Markov with white noise model describes the stochastic properties better for 4% and 5% of the stations for the East and North component, respectively, and 13% for the vertical component. For these stations, the uncertainty associated with the tectonic rate is about 2 times smaller compared to the case when the standard power-law plus white noise model is used.
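The model-selection machinery can be sketched on synthetic data: fit a white-noise model and a temporally correlated one, then compare information criteria. An AR(1) process stands in here for power-law/flicker noise, which is an illustrative simplification:

```python
import numpy as np

def gauss_loglik(resid, sigma2):
    # Gaussian log-likelihood of residuals with variance sigma2.
    n = len(resid)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(resid ** 2) / sigma2

# Synthetic "position residual" series with temporally correlated noise.
rng = np.random.default_rng(8)
n, phi = 2000, 0.9
e = np.zeros(n)
for t in range(1, n):
    e[t] = phi * e[t - 1] + rng.standard_normal()

# Model 1: pure white noise (k = 1 parameter: the variance).
ll_wn = gauss_loglik(e, e.var())
bic_wn = 1 * np.log(n) - 2 * ll_wn

# Model 2: AR(1) colored noise (k = 2: phi and the innovation variance),
# scored by its conditional likelihood.
phi_hat = np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1])
innov = e[1:] - phi_hat * e[:-1]
ll_ar = gauss_loglik(innov, innov.var())
bic_ar = 2 * np.log(n) - 2 * ll_ar
print(bic_ar < bic_wn)  # correlated-noise model preferred (lower BIC)
```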

Journal ArticleDOI
TL;DR: The results in both numerical simulations and experimental validations demonstrate that the enhanced EWT approach can effectively and reliably identify the instantaneous frequencies of time-varying systems.

Journal ArticleDOI
TL;DR: An oscillator noise model for the GPS, GLONASS, BDS, and Galileo satellites is developed according to oscillator type and block type, and its efficiency in multi-GNSS satellite clock estimation is demonstrated with 2 months of data for both regional and global networks in simultaneous real-time mode.
Abstract: During the past years, real-time precise point positioning has proven to be an efficient tool in applications such as navigation, precise orbit determination of LEO satellites, and earthquake and tsunami early warning. One of the most crucial requirements of these applications is a high-precision real-time GNSS satellite clock. Though the performance and character of the GNSS onboard atomic frequency standards have been widely studied, the white noise model is still the most common hypothesis employed in real-time GNSS satellite clock estimation. However, in real-time applications, significant data discontinuities may arise, either because only regional stations are involved or because of failures in stations, satellites, and network connections. These data discontinuities result in arbitrary clock jumps between adjacent arcs when the clock offsets are modeled as white noise. In addition, the detection and identification of outliers is expected to benefit from the constraints of a satellite oscillator noise model. Thus, in this contribution, based on a statistical analysis of almost 2 years of multi-GNSS precise clock products, we develop an oscillator noise model for the GPS, GLONASS, BDS, and Galileo satellites according to oscillator type as well as block type. The efficiency of this oscillator noise model in multi-GNSS satellite clock estimation is then demonstrated with 2 months of data for both regional and global networks in simultaneous real-time mode. For the regional network, the results suggest that, compared with the traditional solution based on the white noise model, the improvement is 44.4% and 12.1% on average for STD and RMS, respectively; the improvement is mainly attributed to the oscillator noise model's efficiency during the convergence period and its gross-error resistance.
Concerning the global experiment, since the stations guarantee continuous tracking of the satellites with redundant observables, the improvement for GPS, GLONASS, and BDS is not as evident as in the regional experiment. The STD of the Galileo clocks improves from 0.28 to 0.19 ns because the satellites E14 and E18 still suffered significant data discontinuities during our experimental period.

Journal ArticleDOI
TL;DR: This work aims to attenuate white noise of seismic data using the convolutional neural network (CNN) and demonstrates the robustness and superiority of the method over the traditional methods.
Abstract: Seismic noise attenuation is an important step in seismic data processing. Most noise attenuation algorithms are based on the analysis of time-frequency characteristics of the seismic data ...

Journal ArticleDOI
TL;DR: A new second-order variable structure predictive filter is designed to reduce the measurement errors and their differences, and a robust version of this filter is developed using the Huber technique to ensure strong robustness and high estimation accuracy/precision for the satellite attitude.
Abstract: This work presents a novel filtering approach to the high-accuracy attitude estimation problem of satellites. A new second-order variable structure predictive filter is first designed to reduce the measurement errors and their differences. The key feature of this filter is that the noise handled is not constrained to be Gaussian white noise. Hence, it is a new solution to the filtering problem in the presence of modeling errors or heavy-tailed noise. Then, a robust version of the preceding filter is developed by using the Huber technique. This robust filter ensures strong robustness and high estimation accuracy/precision for the satellite attitude. A Lyapunov stability analysis proves that the measurement error and its difference can be stabilized into a small set with a faster rate of convergence. The effectiveness of the presented attitude estimation filters is validated via simulation by comparison with the traditional cubature Kalman filter.

Journal ArticleDOI
TL;DR: In this article, the authors consider the generation of samples of a mean zero Gaussian random field with Matern covariance function, where every sample requires the solution of a differential equation with Gaussian white noise.
Abstract: We consider the generation of samples of a mean-zero Gaussian random field with Matern covariance function. Every sample requires the solution of a differential equation with Gaussian white noise f...

Journal ArticleDOI
TL;DR: The path integral (PI) method is extended to solve one-dimensional space-fractional Fokker-Planck-Kolmogorov (FPK) equations, which are the governing equations corresponding to scalar SDEs excited by α-stable Lévy white noise, and it is demonstrated that the PI method has higher accuracy than the first-order finite difference method for one-step iteration in time.

Journal ArticleDOI
TL;DR: A periodically driven bistable eutrophication model with Gaussian white noise is introduced as a prototype class of real systems, and the residence probability is presented to measure the possibility that the given system stays in the oligotrophic state under Gaussian white noise and periodic forcing.
Abstract: Stochastic perturbations and periodic excitations are generally regarded as sources that induce critical transitions in complex systems. However, we find that they are also able to slow down an imminent critical transition. To illustrate this phenomenon, a periodically driven bistable eutrophication model with Gaussian white noise is introduced as a prototype class of real systems. The residence probability (RP) is presented to measure the possibility that the given system stays in the oligotrophic state as a function of the Gaussian white noise and periodic force. Variations in the mean first passage time (MFPT) and the mean velocity (MV) of the first right-crossing process are also calculated. We show that the frequency of the periodic force can increase the MFPT while reducing the MV under different control parameters. Nevertheless, the noise intensity or the amplitude may increase the RP only when the control parameters approach the critical values. Furthermore, for an impending critical transition, the RP increases with the interaction between the amplitude and noise intensity or the combination of the noise intensity and frequency, while the interaction of the frequency and amplitude extends the MFPT or decreases the MV. The resulting increase of the RP and MFPT and decrease of the MV indicate that it is possible to slow down an imminent critical transition via Gaussian white noise and periodic force.
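The MFPT computation can be sketched by Monte Carlo simulation of a generic bistable system; the potential U(x) = x⁴/4 − x²/2 and all parameter values below are illustrative, not the eutrophication model itself:

```python
import numpy as np

def mfpt(sigma, amp=0.2, omega=1.0, dt=2e-3, ntraj=400, tmax=60.0, seed=9):
    # Mean first passage time from the left well (x = -1) to the right
    # attractor under white noise of intensity sigma and a periodic force;
    # trajectories that never escape within tmax are ignored.
    rng = np.random.default_rng(seed)
    x = np.full(ntraj, -1.0)
    hit = np.full(ntraj, np.nan)
    t = 0.0
    while t < tmax:
        drift = x - x ** 3 + amp * np.cos(omega * t)   # -U'(x) + periodic force
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(ntraj)
        t += dt
        hit[np.isnan(hit) & (x > 1.0)] = t             # record first crossing
    return float(np.nanmean(hit))

slow, fast = mfpt(sigma=0.4), mfpt(sigma=0.8)
print(slow > fast)  # weaker noise -> longer mean escape time
```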

Journal ArticleDOI
TL;DR: In this article, the authors report on ESPRESSO observations of HD41248 and analyze them together with previous observations from HARPS with the goal of evaluating the presence of orbiting planets.
Abstract: Twenty-four years after the discoveries of the first exoplanets, the radial-velocity (RV) method is still one of the most productive techniques to detect and confirm exoplanets. But stellar magnetic activity can induce RV variations large enough to make it difficult to disentangle planet signals from the stellar noise. In this context, HD41248 is an interesting planet-host candidate, with RV observations plagued by activity-induced signals. We report on ESPRESSO observations of HD41248 and analyse them together with previous observations from HARPS with the goal of evaluating the presence of orbiting planets. Using different noise models within a general Bayesian framework designed for planet detection in RV data, we test the significance of the various signals present in the HD41248 dataset. We use Gaussian processes as well as a first-order moving average component to try to correct for activity-induced signals. At the same time, we analyse photometry from the TESS mission, searching for transits and rotational modulation in the light curve. The number of significantly detected Keplerian signals depends on the noise model employed, which can range from 0 with the Gaussian process model to 3 with a white noise model. We find that the Gaussian process alone can explain the RV data while allowing for the stellar rotation period and active region evolution timescale to be constrained. The rotation period estimated from the RVs agrees with the value determined from the TESS light curve. Based on the data that is currently available, we conclude that the RV variations of HD41248 can be explained by stellar activity (using the Gaussian process model) in line with the evidence from activity indicators and the TESS photometry.
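Gaussian-process activity models of the kind used here are typically built on a quasi-periodic kernel whose hyperparameters map onto the stellar rotation period and active-region evolution timescale. The sketch below shows that kernel and one draw from the resulting process; the hyperparameter values are illustrative, not fits to HD41248.

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp=3.0, evol=25.0, period=13.0, smooth=0.5):
    """Quasi-periodic GP kernel commonly used for activity-induced RV signals.

    amp    : signal amplitude [m/s]
    evol   : active-region evolution timescale [days]
    period : stellar rotation period [days]
    smooth : smoothness of the periodic component (dimensionless)
    All values here are placeholders, not the paper's posterior estimates.
    """
    tau = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-tau**2 / (2 * evol**2)
                           - np.sin(np.pi * tau / period)**2 / (2 * smooth**2))

t = np.linspace(0, 100, 200)            # observation epochs [days]
K = quasi_periodic_kernel(t, t)
K += 1e-8 * np.eye(len(t))              # jitter for numerical stability
rng = np.random.default_rng(42)
rv_activity = rng.multivariate_normal(np.zeros(len(t)), K)  # one GP draw
```

In a full analysis the kernel hyperparameters are sampled jointly with any Keplerian parameters, which is how the rotation period becomes constrained by the RVs alone.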

Journal ArticleDOI
TL;DR: A hybrid ICA (H-ICA) algorithm for process monitoring by concurrent analysis of both high-order and second-order statistics is proposed which is verified by both a numerical example and a real thermal power plant process which illustrates its feasibility and efficacy.

Posted Content
TL;DR: A frequency-band correspondence measure is introduced to characterize the spectral bias of the deep image prior, where low-frequency image signals are learned faster and better than high-frequency counterparts.
Abstract: The "deep image prior" proposed by Ulyanov et al. is an intriguing property of neural nets: a convolutional encoder-decoder network can be used as a prior for natural images. The network architecture implicitly introduces a bias; if we train the model to map white noise to a corrupted image, this bias guides the model to fit the true image before fitting the corrupted regions. This paper explores why the deep image prior helps in denoising natural images. We present a novel method to analyze trajectories generated by the deep image prior optimization and demonstrate: (i) convolution layers of an encoder-decoder decouple the frequency components of the image, learning each at different rates, and (ii) the model fits lower frequencies first, making early stopping behave as a low-pass filter. The experiments study an extension of Cheng et al., which showed that at initialization, the deep image prior is equivalent to a stationary Gaussian process.
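The spectral-bias signature described here can be measured by splitting the reconstruction error into radial frequency bands. The sketch below is a simplified stand-in for the paper's frequency-band correspondence measure: a low-pass "prediction" (mimicking an early-stopped reconstruction) leaves little error in low bands and large error in high bands.

```python
import numpy as np

def band_errors(pred, target, n_bands=4):
    """Mean squared spectral error of (pred - target), split into radial bands.

    Low bands carrying less error than high bands indicates the prediction
    captured low frequencies first, i.e. the spectral-bias signature.
    """
    F = np.fft.fftshift(np.fft.fft2(pred - target))
    h, w = F.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max(), n_bands + 1)
    return np.array([np.mean(np.abs(F[(r >= lo) & (r < hi)]) ** 2)
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(0)
target = rng.standard_normal((64, 64))        # stand-in "image"
# Build a low-pass "prediction" that keeps only low spatial frequencies,
# mimicking an early-stopped deep image prior reconstruction.
S = np.fft.fftshift(np.fft.fft2(target))
yy, xx = np.mgrid[0:64, 0:64]
S[np.hypot(yy - 32, xx - 32) > 10] = 0        # zero out high frequencies
pred = np.real(np.fft.ifft2(np.fft.ifftshift(S)))
errs = band_errors(pred, target)              # error grows with band index
```

Tracking `errs` along the optimization trajectory, rather than for a fixed filter, is the trajectory analysis the paper performs.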

Journal ArticleDOI
TL;DR: In this paper, the authors further model the deterministic part with Fourier series and update the variogram of the stochastic part accordingly based on two-year data collected by about 150 stations, and the experimental results suggest that compared with ionosphere-free model and their previous method, the averaged 3D improvement of their new method is 17.8 and 7.6% for dual-frequency PPP, respectively.
Abstract: To access the full capabilities of multi-frequency signals from the modernized GPS and GLONASS and the newly deployed BDS and Galileo, the undifferenced and uncombined observable model, in which the individual signal on each frequency is treated as an independent observable, has drawn increasing interest in the GNSS community. The ionosphere delay is the major issue in the undifferenced and uncombined observable model. Though several ionosphere delay parameterization approaches have been proposed, we argue that a functional model with only a deterministic characteristic may not follow the irregular spatial and temporal variations. On the contrary, when the ionosphere delay is estimated as a random walk or even as white noise with only a stochastic characteristic, the ionosphere terms turn out to be non-estimable or insensitive to their absolute value. In the authors' previous study, we developed the deterministic plus stochastic ionosphere model, denoted DESIGN, in which the deterministic part, expressed as a second-order polynomial, is estimated as a piece-wise constant over 5 min, and the stochastic part is estimated as a random walk with constraints derived from statistics of 4 weeks of data in 2010. In this contribution, we further model the deterministic part with a Fourier series and update the variogram of the stochastic part accordingly, based on two years of data collected by about 150 stations. From the statistical studies, it is concluded that the main frequency components are identical across different coefficients, different stations, and different ionosphere activity conditions, but with varying amplitudes. Thus, in the Fourier series expression of the deterministic part, we fix the frequencies and estimate the amplitudes as daily constant unknowns. Concerning the stochastic component, the variation of the variogram depends on both geomagnetic latitude and ionosphere activity status. Thus, we use a Gaussian function and an Epstein function to model the variation with geomagnetic latitude and with ionosphere activity status, respectively. Based on the undifferenced and uncombined observable model with the ionosphere constrained by DESIGN, both dual-frequency and single-frequency PPP are carried out to demonstrate its efficiency, using three months of data collected in 2010, 2014, and 2017 under different ionosphere activity conditions. The experimental results suggest that, compared with the ionosphere-free model and our previous method, the averaged 3D improvement of our new method is 17.8% and 7.6% for dual-frequency PPP, respectively, while for single-frequency PPP the averaged 3D improvement is 37.0% and 14%, respectively.
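The core idea of fixing the Fourier frequencies and estimating only the amplitudes reduces to a linear least-squares problem. The sketch below illustrates it on a synthetic series with diurnal and semi-diurnal terms; the periods, noise level, and data are illustrative, not the DESIGN model's actual frequency set.

```python
import numpy as np

def fit_fourier_amplitudes(t, y, periods):
    """Least-squares fit of Fourier amplitudes at fixed, known periods.

    Mirrors the DESIGN idea: frequencies are fixed in advance, and only
    the (daily-constant) amplitudes are estimated. Returns the coefficient
    vector [const, cos(P1), sin(P1), cos(P2), sin(P2), ...] and the fit.
    """
    cols = [np.ones_like(t)]                      # constant term
    for P in periods:
        w = 2 * np.pi / P
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

t = np.linspace(0, 48, 500)                       # two days, in hours
truth = 5 + 3 * np.cos(2 * np.pi * t / 24) + 1.5 * np.sin(2 * np.pi * t / 12)
rng = np.random.default_rng(7)
y = truth + 0.2 * rng.standard_normal(t.size)     # noisy synthetic series
coef, fit = fit_fourier_amplitudes(t, y, periods=[24.0, 12.0])
```

Because the design matrix is linear in the amplitudes, these unknowns slot directly into the PPP normal equations alongside the other parameters.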

Journal ArticleDOI
TL;DR: The usual way in which mathematicians work with randomness is by a rigorous formulation of the idea of Brownian motion, which is the limit of a random walk as the step length goes to zero.
Abstract: The usual way in which mathematicians work with randomness is by a rigorous formulation of the idea of Brownian motion, which is the limit of a random walk as the step length goes to zero. A Browni...

Journal ArticleDOI
TL;DR: A mathematical theory foundation of the output signal-to-noise ratio (SNR) improvement is formulated, including an output SNR inequality relation between the traditional WD and the CICFWD for a general noisy signal and a method for solving this inequality to obtain the LCT free parameters.
Abstract: In our previous work, we addressed a unified representation problem for the Wigner distribution (WD) in linear canonical transform (LCT) domains by introducing a kind of closed-form instantaneous cross-correlation function type of Wigner distribution (CICFWD) that can be regarded as the WD's closed-form representation in the linear canonical domain. We then discussed the application of the CICFWD to the detection of linear frequency-modulated (LFM) signals through a numerical simulation analysis approach, and it turns out that there is a causal relation between the LCT free parameters and the detection performance. The main contribution of this paper is to provide a mathematical model for this causality in order to disclose the intrinsic mechanism by which the LCT free parameters improve non-stationary signal detection performance. We first revisit the definition of the CICFWD and establish some related theory, including its essential properties and discretization. We then formulate a mathematical theory foundation of the output signal-to-noise ratio (SNR) improvement, derive an output SNR inequality relation between the traditional WD and the CICFWD for a general noisy signal, and discuss how to solve this inequality to obtain the LCT free parameters. Within the well-established output SNR improvement analysis framework, we also explore LCT free parameter selection for LFM signals embedded in white noise. Finally, numerical experiments are carried out to validate the correctness of the strategies for determining the LCT free parameters and the feasibility of the output SNR improvement analysis approach.
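The output-SNR gain exploited here comes from concentrating the LFM energy in a transform domain while white noise stays spread out. The sketch below illustrates that mechanism with plain dechirping followed by an FFT, which is a deliberately simpler stand-in for the CICFWD, not the paper's estimator; the chirp rate, length, and input SNR are arbitrary.

```python
import numpy as np

def output_snr_gain(n=1024, chirp_rate=2e-4, snr_in_db=-5.0, seed=3):
    """Peak-to-floor ratio (dB) after dechirping an LFM buried in white noise.

    Multiplying by the conjugate chirp turns the LFM into a pure tone, so
    its FFT piles the signal energy into one bin while the noise floor
    stays flat; the same concentration effect underlies WD-type detectors.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    lfm = np.exp(1j * np.pi * chirp_rate * t**2)      # unit-amplitude LFM
    noise_pow = 10 ** (-snr_in_db / 10)               # sets input SNR
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(n)
                                      + 1j * rng.standard_normal(n))
    x = lfm + noise
    dechirped = x * np.exp(-1j * np.pi * chirp_rate * t**2)
    spec = np.abs(np.fft.fft(dechirped)) ** 2
    return 10 * np.log10(spec.max() / np.median(spec))  # robust noise floor

gain_db = output_snr_gain()    # large positive gain despite -5 dB input SNR
```

In the CICFWD setting the analogous knob is the choice of LCT free parameters, which the paper's inequality is designed to pin down.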

Journal ArticleDOI
TL;DR: In this article, the authors proposed a portmanteau-type test statistic which is the sum of squared singular values of the first q lagged sample autocovariance matrices.
Abstract: Testing for white noise is a classical yet important problem in statistics, especially for diagnostic checks in time series modeling and linear regression. For high-dimensional time series, in the sense that the dimension p is large in relation to the sample size T, popular omnibus tests including the multivariate Hosking and Li-McLeod tests are extremely conservative, leading to substantial power loss. To develop more relevant tests for high-dimensional cases, we propose a portmanteau-type test statistic which is the sum of squared singular values of the first q lagged sample autocovariance matrices. It therefore encapsulates all the serial correlations (up to the time lag q) within and across all component series. Using tools from random matrix theory and assuming both p and T diverge to infinity, we derive the asymptotic normality of the test statistic under both the null and a specific VMA(1) alternative hypothesis. As the actual implementation of the test requires knowledge of three characteristic constants of the population cross-sectional covariance matrix and the value of the fourth moment of the standardized innovations, nontrivial estimators are proposed for these parameters, and their integration leads to a practically usable test. Extensive simulation confirms the excellent finite-sample performance of the new test, with accurate size and satisfactory power for a large range of finite (p, T) combinations, ensuring wide applicability in practice. In particular, the new tests are consistently superior to the traditional Hosking and Li-McLeod tests.
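The raw statistic itself is simple to compute: the sum of squared singular values of a matrix equals its squared Frobenius norm, so the statistic aggregates every within- and cross-series correlation up to lag q. The sketch below computes it for a white-noise panel and a serially correlated VAR(1)-type panel (the paper's centering and scaling constants for the asymptotic normality are omitted here).

```python
import numpy as np

def portmanteau_statistic(X, q=3):
    """Sum of squared singular values of the first q lagged sample
    autocovariance matrices of a (T x p) series X (columns = components).

    Uses the identity sum(sigma_i**2) = ||M||_F**2, so no SVD is needed.
    """
    T, p = X.shape
    Xc = X - X.mean(axis=0)
    stat = 0.0
    for k in range(1, q + 1):
        C_k = Xc[k:].T @ Xc[:-k] / T                 # lag-k autocovariance
        stat += np.linalg.norm(C_k, 'fro') ** 2      # squared singular values
    return stat

rng = np.random.default_rng(0)
white = rng.standard_normal((500, 10))               # T = 500, p = 10, white
ar = np.zeros_like(white)                            # serially correlated panel
for t in range(1, 500):
    ar[t] = 0.6 * ar[t - 1] + white[t]
s_white = portmanteau_statistic(white)
s_ar = portmanteau_statistic(ar)                     # much larger than s_white
```

Under the null the statistic concentrates near a value determined by the cross-sectional covariance structure, which is why the paper's test must estimate those characteristic constants before standardizing.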