Showing papers on "Autocorrelation published in 2022"


Journal ArticleDOI
TL;DR: An adaptive maximum cyclostationarity blind deconvolution (ACYCBD) method is proposed, in which the cyclic frequency set is estimated from the autocorrelation function of the morphological envelope, and the validity of the method is verified.

107 citations
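The cyclic-frequency estimation step named in the TL;DR above lends itself to a short sketch: compute an envelope of the vibration signal, autocorrelate it, and read a candidate cyclic frequency off the strongest non-zero-lag peak. This is a minimal illustration, not the authors' ACYCBD implementation; the Hilbert envelope stands in for their morphological envelope, and the peak picking is deliberately naive.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_cyclic_frequency(x, fs):
    """Estimate a dominant cyclic (fault) frequency from the
    autocorrelation of the signal envelope."""
    env = np.abs(hilbert(x))                 # analytic-signal envelope
    env = env - env.mean()
    acf = np.correlate(env, env, mode="full")[len(env) - 1:]
    acf /= acf[0]                            # normalize so acf[0] == 1
    lag = np.argmax(acf[1:]) + 1             # strongest non-zero lag
    return fs / lag                          # cyclic frequency in Hz
```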


Journal ArticleDOI
TL;DR: In this article, an adaptive maximum cyclostationarity blind deconvolution (ACYCBD) method was proposed for fault detection; it recovers periodic impulses from fault signals in which the impulses are convolutionally mixed with noise.

80 citations


Journal ArticleDOI
TL;DR: A Lucy-Richardson-Rosen algorithm has been proposed in this article for 3D image reconstruction without two-beam interference (TBI), using deterministic fields.
Abstract: In recent years, there has been a significant transformation in the field of incoherent imaging with new possibilities of compressing three-dimensional (3D) information into a two-dimensional intensity distribution without two-beam interference (TBI). Most of the incoherent 3D imagers without TBI are based on scattering by a random phase mask exhibiting sharp autocorrelation and low cross-correlation along the depth. Consequently, during reconstruction, high lateral and axial resolutions are obtained. Imaging based on scattering requires an astronomical photon budget and is therefore precluded in many power-sensitive applications. In this study, a proof-of-concept 3D imaging method without TBI using deterministic fields has been demonstrated. A new reconstruction method called the Lucy-Richardson-Rosen algorithm has been developed for this imaging concept. We believe that the proposed approach will cause a paradigm shift in the current state of the art of incoherent imaging, fluorescence microscopy, mid-infrared fingerprinting, astronomical imaging, and fast object recognition applications.

29 citations
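The Lucy-Richardson-Rosen algorithm itself is not part of standard libraries, but its Richardson-Lucy ingredient is, and a toy deconvolution shows the kind of iterative reconstruction involved. The sketch below uses scikit-image's richardson_lucy on a synthetic two-point scene; the Gaussian PSF and iteration count are illustrative assumptions, not values from the paper, which instead uses engineered deterministic fields.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20, 20] = scene[40, 45] = 1.0          # two point sources

# Illustrative Gaussian PSF (a stand-in; the paper engineers its fields).
xx, yy = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8))
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()

blurred = convolve2d(scene, psf, mode="same")
blurred += 0.001 * rng.standard_normal(blurred.shape)
restored = richardson_lucy(np.clip(blurred, 0, None), psf, 30)
```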


Journal ArticleDOI
TL;DR: In this paper, a time series modelling technique was adopted for reservoir water level prediction at the Red Hills Reservoir in Thiruvallur district, Tamil Nadu, India.
Abstract: Reservoir water level (RWL) prediction has become a challenging task due to spatio-temporal changes in climatic conditions and complicated physical processes. The Red Hills Reservoir (RHR) is an important source of drinking and irrigation water supply in Thiruvallur district, Tamil Nadu, India, and is also expected to be put to other productive uses in the future. However, climate change in the region is expected to have consequences for the RHR's future prospects. As a result, accurate and reliable prediction of the RWL is crucial to develop an appropriate water release mechanism for the RHR to satisfy the population's water demand. In the current study, a time series modelling technique was adopted for RWL prediction in the RHR using Box–Jenkins seasonal autoregressive integrated moving average (SARIMA) and artificial neural network (ANN) hybrid models. In this research, the SARIMA model was obtained as SARIMA(0,0,1)(0,3,2)₁₂, but the residuals of the SARIMA model could not meet the autocorrelation requirement of the modelling approach. To overcome this weakness of the SARIMA model, a new SARIMA–ANN hybrid time series model was developed and demonstrated in this study. The average monthly RWL data from January 2004 to November 2020 were used for developing and testing the models. Several model assessment criteria were used to evaluate the performance of each model. The findings showed that the SARIMA–ANN hybrid model outperformed the remaining models on all performance criteria for RWL prediction. Thus, this study conclusively shows that the SARIMA–ANN hybrid model could be a viable option for the accurate prediction of reservoir water level.

26 citations
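A hedged sketch of the SARIMA–ANN hybrid idea: fit a SARIMA model (here with the paper's reported SARIMA(0,0,1)(0,3,2)₁₂ order), then train a small neural network on the SARIMA residuals and add its rolled-forward forecast to the linear one. The residual-lag window and MLP architecture are guesses for illustration, not the authors' configuration.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

def sarima_ann_hybrid_forecast(y, steps=12, lags=12):
    """y: monthly water-level series (1-D numpy array)."""
    # Stage 1: linear seasonal structure via SARIMA.
    sarima = SARIMAX(y, order=(0, 0, 1),
                     seasonal_order=(0, 3, 2, 12)).fit(disp=False)
    linear_fc = sarima.forecast(steps)
    resid = sarima.resid

    # Stage 2: an ANN learns what SARIMA left in the residuals.
    X = np.array([resid[i:i + lags] for i in range(len(resid) - lags)])
    t = resid[lags:]
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(X, t)

    # Roll the residual model forward and add it to the SARIMA forecast.
    window = list(resid[-lags:])
    resid_fc = []
    for _ in range(steps):
        r = ann.predict(np.array(window[-lags:])[None, :])[0]
        resid_fc.append(r)
        window.append(r)
    return linear_fc + np.array(resid_fc)
```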


Journal ArticleDOI
TL;DR: In this paper, the age of the young open cluster Melotte 20, known as α Per, was constrained using seismic indices, by extracting the frequency content of a sample of stars in the field of the cluster.
Abstract: In this work, we aim to constrain the age of the young open cluster Melotte 20, known as α Per, using seismic indices. The method consists of the following steps: (1) Extract the frequency content of a sample of stars in the field of the open cluster. (2) Search for possible regularities in the frequency spectra of δ Sct star candidates, using different techniques such as the Fourier transform, the autocorrelation function, the histogram of frequency differences and the échelle diagram. (3) Constrain the age of the selected stars by both the physical parameters and seismic indices, by comparing them with a grid of asteroseismic models representative of δ Sct stars. (4) Find possible common ages between these stars to determine the age of the cluster. We performed the pulsation analysis with MultiModes, a rapid, accurate and powerful open-source code, which is presented in this paper. The result is that the age of α Per could be between 96 and 100 Myr. This is an improvement over previous estimates obtained with different techniques. We therefore show that space asteroseismology is capable of taking important steps in the dating of young open clusters.

23 citations
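Of the regularity-search techniques listed in step (2), the histogram of frequency differences is the easiest to sketch: pile up all pairwise differences of the detected mode frequencies and look for a dominant spacing. The bin width and the naive skip of the near-zero bin below are illustrative choices.

```python
import numpy as np

def large_separation_hint(freqs, bin_width=0.25):
    """Histogram of pairwise frequency differences of detected
    pulsation modes; a pile-up hints at a regular spacing."""
    f = np.sort(np.asarray(freqs, dtype=float))
    diffs = np.abs(f[None, :] - f[:, None])
    diffs = diffs[np.triu_indices(len(f), k=1)]
    hist, edges = np.histogram(diffs,
                               bins=np.arange(0, diffs.max(), bin_width))
    k = np.argmax(hist[1:]) + 1      # skip the near-zero bin
    return 0.5 * (edges[k] + edges[k + 1])   # candidate spacing
```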


Journal ArticleDOI
TL;DR: In this article , a novel hidden Markov model (HMM) is developed to explore both the temporal autocorrelation of WSF error and the nonlinear correlation between the WSF result and the error.
Abstract: Short-term wind power forecast (WPF) depends highly on the wind speed forecast (WSF), which is the prime contributor to the forecasting error. To achieve more accurate WPF results, this article proposes a wind speed correction method to improve the WSF result obtained by using the weather research and forecasting (WRF) model. First, the WRF model is constructed to forecast the wind speed, and its performance is analyzed. Second, a novel hidden Markov model (HMM) is developed to explore both the temporal autocorrelation of the WSF error and the nonlinear correlation between the WSF result and the error. In the model, fuzzy C-means clustering is introduced to properly divide the hidden state space of the HMM, and the emission probability of the HMM is made continuous by kernel density estimation (KDE) to make full use of the observation information. These modifications make the proposed HMM better suited to wind speed correction. Third, the HMM is solved by the Viterbi algorithm and minimum mean-square error regulation to correct the predicted wind speed. Finally, the deterministic and probabilistic WPF results are obtained by using another KDE model, and the proposed method is demonstrated to be superior to the benchmarks in case studies.

21 citations
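The decoding step of this correction pipeline, Viterbi with continuous KDE emissions, can be sketched compactly. The fuzzy C-means state division and the minimum mean-square error regulation are omitted; the transition matrix, priors, and per-state gaussian_kde models are assumed to have been fitted beforehand (and the transition matrix is assumed strictly positive).

```python
import numpy as np
from scipy.stats import gaussian_kde

def viterbi_kde(obs, trans, kdes, prior):
    """Viterbi decoding with continuous (KDE) emission densities.
    obs: observed WSF errors; trans: KxK transition matrix;
    kdes: list of K gaussian_kde emission models; prior: K initial probs."""
    K, T = len(kdes), len(obs)
    logp = np.array([[kdes[k].logpdf(o)[0] for k in range(K)] for o in obs])
    delta = np.log(prior) + logp[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(trans)   # K x K: prev -> current
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logp[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(back[t][path[-1]])
    return path[::-1]                             # most likely state sequence
```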


Journal ArticleDOI
TL;DR: In this paper, the authors used regularized machine learning models to forecast Brazilian power electricity consumption for short and medium terms, and compared their models to benchmark specifications such as Random Walk and Autoregressive Integrated Moving Average.
Abstract: We use regularized machine learning models to forecast Brazilian power electricity consumption (PEC) for short and medium terms. We compare our models to benchmark specifications such as Random Walk and Autoregressive Integrated Moving Average. Our results show that machine learning methods, especially Random Forest and Lasso Lars, give more accurate forecasts for all horizons. Random Forest and Lasso Lars managed to keep up with the trend and the seasonality for various time horizons. The gain in predicting PEC using machine learning models relative to the benchmarks is considerably higher for the very short term. Machine learning variable selection further shows that lagged consumption values are extremely important for very short-term forecasting due to the series' high autocorrelation. Other variables such as weather and calendar variables are important for longer time horizons.

18 citations
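A minimal sketch of the comparison on synthetic data: build lagged-consumption features, keep the train/test split in time order, and fit sklearn's LassoLars and RandomForestRegressor. The lag depth, alpha, and forest size are arbitrary illustrative choices, and the sine-plus-noise series merely stands in for the consumption data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(0)
t = np.arange(2000)
y = 10 + np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(t.size)

def make_lagged(series, n_lags=48):
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

X, target = make_lagged(y)
split = int(0.8 * len(X))                    # keep time order, no shuffling
models = {
    "lasso_lars": LassoLars(alpha=0.01),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, m in models.items():
    m.fit(X[:split], target[:split])
    mae = np.abs(m.predict(X[split:]) - target[split:]).mean()
    print(f"{name}: test MAE = {mae:.4f}")
```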


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate how neglecting spatial autocorrelation during cross-validation leads to an optimistic model performance assessment, using the example of a tree species segmentation problem in multiple, spatially distributed drone image acquisitions.
Abstract: Deep learning and particularly Convolutional Neural Networks (CNN) in concert with remote sensing are becoming standard analytical tools in the geosciences. A series of studies has presented the seemingly outstanding performance of CNN for predictive modelling. However, the predictive performance of such models is commonly estimated using random cross-validation, which does not account for spatial autocorrelation between training and validation data. Independent of the analytical method, such spatial dependence will inevitably inflate the estimated model performance. This problem is ignored in most CNN-related studies and suggests a flaw in their validation procedure. Here, we demonstrate how neglecting spatial autocorrelation during cross-validation leads to an optimistic model performance assessment, using the example of a tree species segmentation problem in multiple, spatially distributed drone image acquisitions. We evaluated CNN-based predictions with test data sampled from 1) randomly sampled hold-outs and 2) spatially blocked hold-outs. Assuming that a block cross-validation provides a realistic model performance, a validation with randomly sampled hold-outs overestimated the model performance by up to 28%. Smaller training sample size increased this optimism. Spatial autocorrelation among observations was significantly higher within than between different remote sensing acquisitions. Thus, model performance should be tested with spatial cross-validation strategies and multiple independent remote sensing acquisitions. Otherwise, the estimated performance of any geospatial deep learning method is likely to be overestimated.

16 citations
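The blocked hold-out idea translates directly into scikit-learn: assign each sample to a spatial block from its coordinates and use the block id as the group in GroupKFold, so no block contributes to both training and validation. The synthetic data and 250 m block size below are placeholders; the paper's CNN segmentation setup is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(500, 2))       # sample coordinates (m)
X = rng.standard_normal((500, 8))              # predictor features
y = rng.integers(0, 2, 500)                    # class labels

# Assign each sample to a 250 m x 250 m spatial block.
blocks = (xy[:, 0] // 250).astype(int) * 10 + (xy[:, 1] // 250).astype(int)

clf = RandomForestClassifier(random_state=0)
scores = cross_val_score(clf, X, y, groups=blocks,
                         cv=GroupKFold(n_splits=5))
print("block-CV accuracy:", scores.mean())
```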


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a new scheme for probabilistic precipitation forecasting in which signal decomposition techniques (complete ensemble empirical mode decomposition with adaptive noise) decompose the original precipitation series into subsequences, and an ensemble model assembles the outputs of empirical approaches, with weights determined by the Adaptive Metropolis-Markov Chain Monte Carlo (AM-MCMC) algorithm.
Abstract:
Highlights:
• The new scheme determines prediction uncertainty to reflect the value of forecasting.
• The signal decomposition technique reveals the stochastic characteristics of the sequences.
• The weight of each single model in the ensemble is determined according to its ability.
Precipitation affects the generation of runoff and the concentration of water resources in basins. The randomness of precipitation contributes to the difficulty and uncertainty of forecasting it. To improve precipitation forecasting accuracy and account for this uncertainty, a new scheme for probabilistic precipitation forecasting is proposed. In the scheme, first, signal decomposition techniques (complete ensemble empirical mode decomposition with adaptive noise) are used to decompose the original precipitation series into subsequences. Second, empirical approaches (a time series analysis model, a grey self-memory model and long short-term memory) are used to produce a quantitative precipitation forecast. Third, an ensemble model is used to assemble the outputs of the empirical approaches, whose weights are determined by the Adaptive Metropolis-Markov Chain Monte Carlo (AM-MCMC) algorithm. The AM-MCMC is adopted to produce a large number of weights for the single models in the ensemble. The quantitative forecast (prediction) and its confidence interval at a given probability (90%) are obtained by multiplying the single-model predictions by the mean and the confidence interval of the weights assigned to those predictions, respectively. In this study, annual precipitation (a single annual value for each year) is adopted to test the performance of the new scheme. The precipitation of the forecast year is obtained from the precipitation forecast of the previous p years (p is the autocorrelation order of the annual precipitation series). The results show that the new scheme for probabilistic precipitation forecasting has better forecasting accuracy than the single-model predictions; the RMSE is less than 139, and the MARE is less than 8.99%. Moreover, the new scheme achieves strong probabilistic metrics: the CRPS ranges from 0.009 to 0.036, the reliability ranges from 0.001 to 0.008, and the sharpness ranges from 24 to 77.

15 citations
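The weight-sampling step can be sketched with a plain random-walk Metropolis sampler over the ensemble weights; the paper's Adaptive Metropolis variant tunes the proposal covariance on the fly, which is omitted here, and a Gaussian error likelihood is assumed.

```python
import numpy as np

def metropolis_ensemble_weights(preds, obs, n_iter=20000, step=0.05, seed=0):
    """Sample ensemble weights for single-model predictions with a
    random-walk Metropolis sampler (a simplified stand-in for AM-MCMC).
    preds: (n_models, n_times) predictions; obs: (n_times,) observations."""
    rng = np.random.default_rng(seed)
    m = preds.shape[0]

    def log_post(w):
        if np.any(w <= 0):
            return -np.inf
        w = w / w.sum()                       # stay on the simplex
        resid = obs - w @ preds
        return -0.5 * np.sum(resid ** 2)      # assumed Gaussian errors

    w = np.full(m, 1.0 / m)
    lp = log_post(w)
    samples = []
    for _ in range(n_iter):
        prop = w + step * rng.standard_normal(m)
        lp_new = log_post(prop)
        if np.log(rng.uniform()) < lp_new - lp:   # Metropolis acceptance
            w, lp = prop, lp_new
        samples.append(w / w.sum())
    return np.array(samples[n_iter // 2:])        # discard burn-in

# Mean weights -> point forecast; weight quantiles -> interval forecast.
```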


Journal ArticleDOI
TL;DR: In this paper, the authors studied the dynamical properties of an active particle whose swimming speed depends explicitly on the particle position, and derived the spatial density profile induced by the swim-velocity profile.
Abstract: We study the dynamical properties of an active particle subject to a swimming speed explicitly depending on the particle position. The oscillating spatial profile of the swim velocity considered in this paper takes inspiration from experimental studies based on Janus particles whose speed can be modulated by an external source of light. We suggest and apply an appropriate model of an active Ornstein–Uhlenbeck particle (AOUP) to the present case. This allows us to predict the stationary properties by finding the exact solution of the steady-state probability distribution of particle position and velocity. From this, we obtain the spatial density profile and show that its form is consistent with the one found in the framework of other popular models. The reduced velocity distribution highlights the emergence of non-Gaussianity in our generalized AOUP model, which becomes more evident as the spatial dependence of the velocity profile becomes more pronounced. Then, we focus on the time-dependent properties of the system. Velocity autocorrelation functions are studied in the steady state, combining numerical and analytical methods derived under suitable approximations. We observe a non-monotonic decay in the temporal shape of the velocity autocorrelation function, which depends on the ratio between the persistence length and the spatial period of the swim velocity. In addition, we numerically and analytically study the mean square displacement and the long-time diffusion coefficient. The ballistic regime, observed in the small-time region, is deeply affected by the properties of the swim velocity landscape, which also induces a crossover to a sub-ballistic but superdiffusive regime for intermediate times. Finally, the long-time diffusion coefficient decreases as the amplitude of the swim velocity oscillations increases, because the diffusion is mainly determined by those regions where the particles are slow.

15 citations
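A one-dimensional caricature of the model helps make the velocity autocorrelation statement concrete: propagate a particle whose swim speed is modulated as v(x) = v0(1 + ε cos kx) by an Ornstein–Uhlenbeck degree of freedom, then estimate the velocity ACF. This is a hedged sketch, not the authors' exact equations of motion, which contain further terms and are treated analytically in the paper.

```python
import numpy as np

def simulate_aoup(T=2000.0, dt=1e-2, tau=1.0, v0=1.0, eps=0.5, k=2*np.pi):
    """1-D caricature of an AOUP with space-modulated swim speed."""
    rng = np.random.default_rng(1)
    n = int(T / dt)
    x, u = 0.0, 0.0                    # position; OU noise (unit variance)
    vel = np.empty(n)
    for i in range(n):
        v = v0 * (1.0 + eps * np.cos(k * x)) * u   # self-propulsion velocity
        x += v * dt
        u += -u / tau * dt + np.sqrt(2 * dt / tau) * rng.standard_normal()
        vel[i] = v
    return vel

def velocity_acf(vel, max_lag=500):
    """Normalized velocity autocorrelation up to max_lag steps."""
    v = vel - vel.mean()
    c0 = np.dot(v, v) / len(v)
    return np.array([np.dot(v[:-k], v[k:]) / ((len(v) - k) * c0)
                     for k in range(1, max_lag + 1)])
```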


Journal ArticleDOI
TL;DR: In this paper, a machine learning integrated UAV-to-Vehicle (U2V) mmWave channel model is proposed. The deterministic parameters are calculated from simplified geometry information, while the random ones are generated by a back-propagation-based neural network (BPNN) and a generative adversarial network (GAN), with the training data set obtained from massive ray-tracing (RT) simulations.
Abstract: Unmanned aerial vehicle (UAV) millimeter wave (mmWave) technologies can provide flexible links and high data rates for future communication networks. By considering the new features of three-dimensional (3D) scattering space, 3D velocity, 3D antenna array, and especially 3D rotations, a machine learning (ML) integrated UAV-to-Vehicle (U2V) mmWave channel model is proposed. Meanwhile, a ML-based network for channel parameter calculation and generation is developed. The deterministic parameters are calculated based on the simplified geometry information, while the random ones are generated by the back propagation based neural network (BPNN) and generative adversarial network (GAN), where the training data set is obtained from massive ray-tracing (RT) simulations. Moreover, theoretical expressions of channel statistical properties, i.e., power delay profile (PDP), autocorrelation function (ACF), Doppler power spectrum density (DPSD), and cross-correlation function (CCF), are derived and analyzed. Finally, the U2V mmWave channel is generated under a typical urban scenario at 28 GHz. The generated PDP and DPSD show good agreement with RT-based results, which validates the effectiveness of the proposed method. Moreover, the impact of 3D rotations, which has rarely been reported in previous works, can be observed in the generated CCF and ACF, which are also consistent with the theoretical and measurement results.

Journal ArticleDOI
01 May 2022-Cities
TL;DR: Zhang et al. as mentioned in this paper used an eigenvector spatial filtering (ESF) model to capture spatial effects when modelling the scaling relations of cities, and proposed a new set of urban indicators, spatial and scale adjusted metropolitan indicators (SAMIs), that account for spatial autocorrelation.

Journal ArticleDOI
TL;DR: In this paper, an improved sequence-to-sequence gated recurrent unit network (S2S-IGRU) is proposed for short-term electric load forecasting, with a three-step adaptive framework for tracking dynamic temporal dependency patterns.

Journal ArticleDOI
TL;DR: In this paper, an iterative model of the generalized Cauchy process with long-range dependence (LRD) characteristics is proposed for remaining useful life (RUL) prediction of lithium-ion batteries.

Journal ArticleDOI
TL;DR: In this article, a novel echo state network with multiple delayed outputs (MDO-ESN) is proposed for time series prediction, in which the influence of multiple delayed output items on the prediction accuracy is taken into account.
Abstract: In this paper, considering the influence of multiple delayed output items on the prediction accuracy of the echo state network, a novel echo state network with multiple delayed outputs (MDO-ESN) is proposed for time series prediction. Firstly, for a given learning task, the delayed characteristics of the output signal can be determined by studying its autocorrelation, and the corresponding delayed items of the output equation of the MDO-ESN can then be adjusted adaptively. Secondly, in order to improve the adaptability of the MDO-ESN to different learning tasks, a sufficient condition is given that guarantees the stability of the MDO-ESN. Thirdly, a parameter optimization method is given to reduce the dependence of the prediction accuracy of the MDO-ESN on its reservoir parameters. Finally, two numerical simulation examples and one actual simulation example are used to verify the effectiveness of the MDO-ESN.
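The delay-determination step described above (study the output autocorrelation, then set the delayed items accordingly) might look like the following; this is a guess at the mechanism for illustration, not the authors' MDO-ESN code.

```python
import numpy as np

def select_output_delays(y, max_lag=50, n_delays=3):
    """Pick delayed-output taps at the strongest autocorrelation lags."""
    y = y - y.mean()
    acf = np.correlate(y, y, mode="full")[len(y) - 1:]
    acf = acf[:max_lag + 1] / acf[0]              # normalized ACF
    lags = np.argsort(acf[1:])[::-1][:n_delays] + 1   # skip lag 0
    return sorted(int(l) for l in lags)
```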

Journal ArticleDOI
TL;DR: In this paper, the authors present the principles, limitations, data collection, and processing methods of microtremor array measurements (MAM), focusing on spatial autocorrelation (SPAC) processing.
Abstract: Microtremor array measurements, and passive surface wave methods in general, have been increasingly used to non-invasively estimate shear-wave velocity structures for various purposes. The methods estimate dispersion curves and invert them to retrieve S-wave velocity profiles. This paper summarizes principles, limitations, data collection, and processing methods. It intends to enable students and practitioners to understand the principles needed to plan a microtremor array investigation, record and process the data, and evaluate the quality of the investigation results. The paper focuses on the spatial autocorrelation (SPAC) processing method among microtremor array processing methods because of its relatively simple calculation and stable applicability.
Highlights:
1. A summary of fundamental principles of calculating phase velocity from ambient noise
2. General recommendations for MAM data collection and processing using SPAC methods
3. A discussion of limitations and uncertainties in the methods
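The core SPAC relation is that the azimuthally averaged coherency between a center station and stations on a ring of radius r follows ρ(f) = J0(2πfr/c(f)), which can be inverted for the phase velocity c(f). The toy workflow below compresses this to a single FFT per record and a brute-force 1-D search; real SPAC processing averages spectra over many time windows and treats the multi-valued J0 inversion with care.

```python
import numpy as np
from scipy.special import j0

def spac_dispersion(u_center, u_ring, r, fs, target_freqs):
    """Toy SPAC: phase velocities from center/ring coherency.
    u_center: center-station record; u_ring: list of ring-station
    records at radius r (m); fs: sampling rate (Hz)."""
    n = len(u_center)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    U0 = np.fft.rfft(u_center)
    cohs = []
    for u in u_ring:
        Ur = np.fft.rfft(u)
        cohs.append(np.real(U0 * np.conj(Ur)) /
                    (np.abs(U0) * np.abs(Ur) + 1e-12))
    rho = np.mean(cohs, axis=0)            # azimuthally averaged coherency
    cs = np.linspace(50.0, 3000.0, 3000)   # candidate velocities (m/s)
    out = []
    for fk in target_freqs:
        i = np.argmin(np.abs(f - fk))
        # pick the velocity whose J0 prediction best matches rho(fk)
        out.append(cs[np.argmin(np.abs(j0(2 * np.pi * fk * r / cs) - rho[i]))])
    return np.array(out)
```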

Journal ArticleDOI
TL;DR: In this paper , the authors introduce an indicator for systems driven by nonstationary short-term memory noise, and show that this indicator performs well in situations where the classical methods fail.
Abstract: Precursor signals for bifurcation-induced critical transitions have recently gained interest across many research fields. Common indicators, including variance and autocorrelation increases, rely on the dynamical system being driven by white noise. Here, we show that these metrics raise false alarms for systems driven by time-correlated noise, if the autocorrelation of the noise process increases with time. We introduce an indicator for systems driven by nonstationary short-term memory noise, and show that this indicator performs well in situations where the classical methods fail.
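For reference, the classical white-noise-based indicators that the paper shows can raise false alarms are simple to compute: rolling variance and rolling lag-1 autocorrelation over a sliding window, with rising trends read as a precursor.

```python
import numpy as np

def early_warning_indicators(x, window=200):
    """Classical precursor metrics on a sliding window: variance and
    lag-1 autocorrelation (the white-noise-based alarms discussed above)."""
    var, ac1 = [], []
    for i in range(window, len(x)):
        w = x[i - window:i]
        w = w - w.mean()
        var.append(w.var())
        ac1.append(np.dot(w[:-1], w[1:]) / (np.dot(w, w) + 1e-12))
    return np.array(var), np.array(ac1)
```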

Journal ArticleDOI
TL;DR: In this article, an enhanced periodic mode decomposition (EPMD) method is proposed to extract the periodic impulse components from composite fault signals, and the experimental results indicate that the EPMD is an effective method for composite fault diagnosis of rolling bearings.
Abstract: The impulse components of different periods in the composite fault signal of a rolling bearing are difficult to extract due to the background noise and the coupling of composite faults, which greatly affects the accuracy of composite fault diagnosis. To accurately extract the periodic impulse components from composite fault signals, we introduce the theory of the Ramanujan sum to generate precise periodic components (PPCs). In order to comprehensively extract the major periods in composite fault signals, the SOSO-maximum autocorrelation impulse harmonic to noise deconvolution (SOSO-MAIHND) method is proposed to reduce noise and enhance the relatively weak periodic impulses. Based on this, an enhanced periodic mode decomposition (EPMD) method is proposed. The experimental results indicate that the EPMD is an effective method for composite fault diagnosis of rolling bearings.

Journal ArticleDOI
TL;DR: In this paper, a fair train-test split method was proposed that accounts for spatial autocorrelation and for the planned real-world use of the spatial prediction model when designing the test set.

Journal ArticleDOI
TL;DR: Multi-regional modelling of cumulative COVID-19 cases (CCC) was performed in the first scenario; the outcomes of the two scenarios indicate the suitability of ARIMA time series and DL models for further decision making on FK, with ARIMAGLS and ensemble ARIMA demonstrating superiority to the other models.
Abstract: Reliable modeling of novel cumulative cases of COVID-19 (CCC) is essential for determining hospitalization needs and providing the benchmark for health-related policies. The current study proposes multi-regional modeling of CCC for the first scenario using autoregressive integrated moving average (ARIMA) based on automatic routines (AUTOARIMA), ARIMA with maximum likelihood (ARIMAML), and ARIMA with generalized least squares (ARIMAGLS), as well as their ensemble (ARIMAML-ARIMAGLS). Subsequently, different deep learning (DL) models, viz. long short-term memory (LSTM), random forest (RF), and ensemble learning (EML), were applied to the second scenario to predict the effect of forest knowledge (FK) during the COVID-19 pandemic. For this purpose, augmented Dickey–Fuller (ADF) and Phillips–Perron (PP) unit root tests, the autocorrelation function (ACF), the partial autocorrelation function (PACF), the Schwarz information criterion (SIC), and residual diagnostics were considered in determining the best ARIMA model for cumulative COVID-19 cases across multi-region countries. Seven different performance criteria were used to evaluate the accuracy of the models. The obtained results justified both types of ARIMA model, with ARIMAGLS and ensemble ARIMA demonstrating superiority to the other models. Among the DL models analyzed, LSTM-M1 emerged as the best and most reliable estimation model, with both RF and LSTM attaining more than 80% prediction accuracy, while the EML of the DL models proved its merit with 96% accuracy. The outcomes of the two scenarios indicate the suitability of ARIMA time series and DL models for further decision making on FK.

Journal ArticleDOI
TL;DR: In this article, the authors used autocorrelation functions from long-term precision broadband differential light curves to estimate the average lifetimes of starspot groups for two large samples of Kepler stars: stars with and without previously known rotation periods.
Abstract: We present a method that utilizes autocorrelation functions from long-term precision broadband differential light curves to estimate the average lifetimes of starspot groups for two large samples of Kepler stars: stars with and without previously known rotation periods. Our method is calibrated by comparing the strengths of the first few normalized autocorrelation peaks using ensembles of models that have various starspot lifetimes. We find that we must mix models of short and long lifetimes together (in heuristically determined ratios) to align the models with the Kepler data. Our fundamental result is that short starspot-group lifetimes (one to four rotations) are implied when the first normalized peak is weaker than about 0.4, long lifetimes (15 rotations or greater) are implied when it is greater than about 0.7, and in between are the intermediate cases. Rotational lifetimes can be converted to physical lifetimes if the rotation period is known. Stars with shorter rotation periods tend to have longer rotational (but not physical) spot lifetimes, and cooler stars tend to have longer physical spot lifetimes than warmer stars with the same rotation period. The distributions of the physical lifetimes are log-normal for both samples and generally longer in the first sample. The shorter lifetimes in the stars without known periods probably explain why their periods are difficult to measure. Some stars exhibit longer than average physical starspot lifetimes; their percentage drops with increasing temperature, from nearly half at 3000 K to nearly zero for stars hotter than 6000 K.
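The statistic at the heart of the method, the height of the first peak of the normalized light-curve autocorrelation, is easy to sketch; the calibration against mixed-lifetime model ensembles is the substantive (and omitted) part. Peak detection below is the simplest possible local-maximum scan, not the authors' smoothing and peak-selection procedure.

```python
import numpy as np

def first_acf_peak_height(flux):
    """Normalized ACF of a light curve and the height of its first
    peak (the thresholds quoted above are ~0.4 and ~0.7)."""
    f = np.nan_to_num(flux - np.nanmean(flux))
    acf = np.correlate(f, f, mode="full")[len(f) - 1:]
    acf /= acf[0]
    # first local maximum after the initial decline from lag 0
    for i in range(1, len(acf) - 1):
        if acf[i - 1] < acf[i] > acf[i + 1]:
            return acf[i], i           # (peak height, lag in cadences)
    return np.nan, -1
```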

Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic search for millilensing of gamma-ray bursts (GRBs) among 3000 GRBs observed by the Fermi GBM up to 2021 April.
Abstract: Millilensing of gamma-ray bursts (GRBs) is expected to manifest as multiple emission episodes in a single triggered GRB with similar light-curve patterns and similar spectral properties. Identifying such lensed GRBs could help improve constraints on the abundance of compact dark matter. Here we present a systematic search for millilensing among 3000 GRBs observed by the Fermi GBM up to 2021 April. Eventually we find four interesting candidates by performing an autocorrelation test, hardness test, and time-integrated/resolved spectrum test. GRB 081126A and GRB 090717A are ranked as first-class candidates based on their excellent performance in both the temporal and spectral analysis. GRB 081122A and GRB 110517B are ranked as second-class (suspected) candidates, mainly because their two emission episodes show clear deviations in part of the time-resolved spectrum or in the time-integrated spectrum. Considering a point-mass model for the gravitational lens, our results suggest that the density parameter of lens objects with mass M_L ∼ 10⁶ M_⊙ is larger than 1.5 × 10⁻³.

Journal ArticleDOI
TL;DR: It is shown that even in the absence of intercellular interactions, cells undergo diffusive behavior, and that the observed non-Markovian effects, which emerge from the interplay of cell division and mechanical feedback, are inherently a non-equilibrium phenomenon.
Abstract: The growth of a tissue, which depends on cell-cell interactions and biologically relevant processes such as cell division and apoptosis, is regulated by a mechanical feedback mechanism. We account for these effects in a minimal two-dimensional model in order to investigate the consequences of mechanical feedback, which is controlled by a critical pressure, p_c. A cell can only grow and divide if the pressure it experiences, due to interaction with its neighbors, is less than p_c. Because temperature is an irrelevant variable in the model, the cell dynamics is driven by self-generated active forces (SGAFs) that are created by cell division. It is shown that even in the absence of intercellular interactions, cells undergo diffusive behavior. The SGAF-driven diffusion is indistinguishable from the well-known dynamics of a free Brownian particle at a fixed finite temperature. When intercellular interactions are taken into account, we find persistent temporal correlations in the force-force autocorrelation function (FAF) that extend over timescales of several cell division times. The time dependence of the FAF reveals memory effects, which increase as p_c increases. The observed non-Markovian effects emerge due to the interplay of cell division and mechanical feedback, and are inherently a non-equilibrium phenomenon.

Journal ArticleDOI
TL;DR: The results show that the classical strategy presented in textbooks appears to be the least accurate one, except for cases with a high negative demand autocorrelation.

Journal ArticleDOI
TL;DR: Nearest Neighbour Distance Matching (NNDM) as discussed by the authors is a variation of leave-one-out (LOO) CV for map validation, in which the nearest neighbour distance distribution function between the test and training data during the CV process is matched to the nearest neighbour distance distribution function between the target prediction points and the training points.
Abstract: Several spatial and non-spatial Cross-Validation (CV) methods have been used to perform map validation when additional sampling for validation purposes is not possible, yet it is unclear in which situations one CV method might be preferred over the other. Three factors have been identified as determinants of the performance of CV methods for map validation: the prediction area (geographical interpolation vs. extrapolation), the sampling pattern and the landscape spatial autocorrelation. In this study, we propose a new CV strategy that takes the geographical prediction space into account, and test how the new method compares with other established CV methods under different configurations of these three factors. We propose a variation of Leave-One-Out (LOO) CV for map validation, called Nearest Neighbour Distance Matching (NNDM) LOO CV, in which the nearest neighbour distance distribution function between the test and training data during the CV process is matched to the nearest neighbour distance distribution function between the target prediction and training points. Using random forest as a machine learning algorithm, we then examine the suitability of NNDM LOO CV as well as the established LOO (non-spatial) and buffered-LOO (bLOO, spatial) CV methods in two simulations with varying prediction areas, landscape autocorrelation and sampling distributions. LOO CV provided good map accuracy estimates in landscapes with short autocorrelation ranges, or when estimating geographical interpolation map accuracy with randomly distributed samples. bLOO CV yielded realistic error estimates when estimating map accuracy in new prediction areas, but generally overestimated geographical interpolation errors. NNDM LOO CV returned reliable estimates in all scenarios we considered. While LOO and bLOO CV provided reliable map accuracy estimates only in certain situations, our newly proposed NNDM LOO CV method returned robust estimates and generalised to LOO and bLOO CV whenever these methods were the most appropriate approach. Our work recognises the necessity of considering the geographical prediction space when designing CV-based methods for map validation.
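The mismatch that NNDM LOO CV corrects can be made visible with two nearest-neighbour distance distributions, sketched below with scipy's cKDTree; the actual NNDM exclusion algorithm that matches the two distributions is not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_distance_distributions(train_xy, pred_xy):
    """Compare the two distributions NNDM LOO CV tries to match:
    (a) prediction point -> nearest training point, and
    (b) training point -> nearest *other* training point
    (the distances implicitly sampled by plain LOO CV)."""
    tree = cKDTree(train_xy)
    d_pred, _ = tree.query(pred_xy, k=1)       # prediction-to-train
    d_loo, _ = tree.query(train_xy, k=2)       # k=2 skips the self-match
    return np.asarray(d_pred), d_loo[:, 1]

# If the LOO distances are systematically shorter than the prediction
# distances, plain LOO CV will be optimistic; NNDM excludes training
# neighbours during CV until the two distributions match.
```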

Journal ArticleDOI
TL;DR: This article proposes solutions for capturing cross-correlations between different stocks and for transitioning from fixed- to variable-length time series without resorting to sequence modeling networks, and adapts various network architectures, e.g., fully connected and convolutional GANs, variational autoencoders, and generative moment matching networks.
Abstract: Financial markets have always been a point of interest for automated systems. Due to their complex nature, financial algorithms and fintech frameworks require vast amounts of data to accurately respond to market fluctuations. This data availability is tied to the daily market evolution, so it is impossible to accelerate its acquisition. In this article, we discuss several solutions for augmenting financial datasets via synthesizing realistic time series with the help of generative models. This problem is complex, since financial time series present very specific properties, e.g., fat-tailed distributions, cross-correlation between different stocks, specific autocorrelation, volatility clustering, and so on. In particular, we propose solutions for capturing cross-correlations between different stocks and for transitioning from fixed- to variable-length time series without resorting to sequence modeling networks, and adapt various network architectures, e.g., fully connected and convolutional GANs, variational autoencoders, and generative moment matching networks. Finally, we tackle the problem of evaluating the quality of synthetic financial time series. We introduce qualitative and quantitative metrics, along with a portfolio trend prediction framework that validates our generative models' performance. We carry out experiments on real-world financial data extracted from the US stock market, proving the benefits of these techniques.
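Two of the stylized facts named above, near-zero autocorrelation of raw returns versus slowly decaying autocorrelation of squared returns (volatility clustering), double as cheap quality checks for synthetic series; the sketch below computes both.

```python
import numpy as np

def acf(x, max_lag=20):
    """Sample autocorrelation at lags 1..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

def stylized_fact_check(returns, max_lag=20):
    """Real log-returns show near-zero ACF, while squared returns show
    slowly decaying positive ACF; a realistic synthetic series should
    reproduce both signatures."""
    return {"acf_returns": acf(returns, max_lag),
            "acf_squared": acf(returns ** 2, max_lag)}
```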


Journal ArticleDOI
TL;DR: In this paper, line scan confocal microscopy is combined with laser speckle autocorrelation imaging (LS-LSAI) for in-vivo visualization of blood perfusion in biological tissues.
Abstract: Laser speckle imaging has been widely used for in-vivo visualization of blood perfusion in biological tissues. However, existing laser speckle imaging techniques suffer from limited quantification accuracy and spatial resolution. Here we report a novel design and implementation of a powerful laser speckle imaging platform that addresses these two critical limitations. The core technique of our platform is a combination of line scan confocal microscopy with laser speckle autocorrelation imaging, which is termed Line Scan Laser Speckle Autocorrelation Imaging (LS-LSAI). The technical advantages of LS-LSAI include high spatial resolution (~4.4 μm) for visualizing and quantifying blood flow in microvessels, as well as video-rate imaging speed for tracing dynamic flow.

Journal ArticleDOI
TL;DR: In this paper, a ray-level process was proposed to model the spatial-temporal evolution of individual multipath components, including near-field effects and (dis)appearance, as well as cluster-level large-scale fading.
Abstract: In this paper, a novel space-time non-stationary three-dimensional (3D) wideband massive multiple-input multiple-output (MIMO) channel model is proposed. We then propose a ray-level process to model the spatial-temporal evolution of individual multipath components (MPCs), including near-field effects and (dis)appearance, and cluster-level large-scale fading. The proposed evolution process can flexibly control rays’ lifespans and smoothness of (dis)appearance in both space and time domains. In addition, we propose an improved Rayleigh-distance criterion to determine the most adequate wavefront for each cluster and ray. Existing models can easily implement the proposed criterion and make a more efficient use of computation resources. Also, a Gamma-Poisson mixture distribution is introduced to model the distribution of the number of clusters when multiple locations of the mobile station are considered. Key statistical properties of the channel, including the autocorrelation function (ACF), Doppler power spectral density (PSD), spatial cross-correlation function (S-CCF), and frequency correlation function (FCF), are derived and the impact of the ray-level evolution process on them is analyzed. We demonstrate the correctness of the derived statistical properties through numerical and simulation results.