Journal ArticleDOI

Efficient Volterra systems identification using hierarchical genetic algorithms

TL;DR: This work presents a hierarchical evolutionary technique for Volterra series identification that employs a heuristic initialization, provides robustness against noise, and improves on the computational complexity of existing methods without harming identification accuracy.
About: This article was published in Applied Soft Computing on 2019-12-01 and has received 17 citations to date. The article focuses on the topics: Volterra series & Genetic algorithm.
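The paper's actual hierarchical encoding and heuristic initialization are not reproduced on this page. Purely as a hypothetical illustration of the general idea, the Python sketch below encodes each candidate as a layer of structure bits that switch individual second-order Volterra kernel terms on or off, plus a layer of real-valued coefficients, and evolves the population against a mean-squared-error fitness on synthetic data.

```python
# Hypothetical sketch of a hierarchical GA for second-order Volterra identification:
# chromosome = (structure bits, coefficients); a structure bit of 0 removes the
# corresponding kernel term from the model. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

M = 3  # memory length (assumed)
# candidate kernel terms: linear taps h1(k) and quadratic taps h2(k1, k2), k1 <= k2
terms = [("h1", k) for k in range(M)] + \
        [("h2", k1, k2) for k1 in range(M) for k2 in range(k1, M)]
P = len(terms)

def regressor(u, n):
    """Value of every candidate kernel term at time n."""
    row = []
    for t in terms:
        row.append(u[n - t[1]] if t[0] == "h1" else u[n - t[1]] * u[n - t[2]])
    return np.array(row)

# synthetic "unknown" system: a sparse second-order Volterra filter plus noise
true_coef = np.zeros(P)
true_coef[[0, 2, 4]] = [1.0, -0.5, 0.3]
u = rng.standard_normal(500)
X = np.array([regressor(u, n) for n in range(M, len(u))])
d = X @ true_coef + 0.01 * rng.standard_normal(len(X))

def fitness(structure, coef):
    y = X @ (structure * coef)        # inactive terms contribute nothing
    return -np.mean((d - y) ** 2)     # higher is better

# hierarchical chromosomes: boolean structure layer + real-valued coefficient layer
pop_struct = rng.integers(0, 2, size=(40, P)).astype(float)
pop_coef = rng.standard_normal((40, P))

for gen in range(200):
    scores = np.array([fitness(s, c) for s, c in zip(pop_struct, pop_coef)])
    order = np.argsort(scores)[::-1]
    pop_struct, pop_coef = pop_struct[order], pop_coef[order]
    for i in range(20, 40):           # replace the worst half with offspring of the best half
        a, b = rng.integers(0, 20, size=2)
        cut = rng.integers(1, P)
        pop_struct[i] = np.concatenate([pop_struct[a][:cut], pop_struct[b][cut:]])
        pop_coef[i] = 0.5 * (pop_coef[a] + pop_coef[b]) + 0.05 * rng.standard_normal(P)
        if rng.random() < 0.2:        # mutate one structure bit
            j = rng.integers(P)
            pop_struct[i, j] = 1.0 - pop_struct[i, j]

print("best MSE:", -fitness(pop_struct[0], pop_coef[0]))
```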
Citations
Journal ArticleDOI
01 Jan 2022
TL;DR: In this article, the authors consider the identification of discrete-time nonlinear Volterra systems and use a tensorial decomposition called PARAFAC to represent the Volterra kernels.
Abstract: The Volterra model can represent a wide range of nonlinear dynamical systems. However, its practical use in nonlinear system identification is limited due to the exponentially growing number of Volterra kernel coefficients as the degree increases. This paper considers the identification of discrete-time nonlinear Volterra systems and uses a tensorial decomposition called PARAFAC to represent the Volterra kernels, which provides a significant parametric reduction compared with the conventional Volterra model. Applying multi-innovation identification theory, a recursive algorithm combining the l2-norm is proposed for PARAFAC-Volterra models with Gaussian noise. In addition, a multi-innovation algorithm combining logarithmic p-norms is investigated for nonlinear Volterra systems with non-Gaussian noise. Finally, simulation results illustrate the effectiveness of the proposed identification methods.

41 citations
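The parametric reduction mentioned in the abstract comes from writing each Volterra kernel as a sum of rank-one (PARAFAC/CP) terms. The equations below sketch the standard form under assumed notation (memory M, degree P, rank R); the p-th kernel then needs roughly pRM factor entries instead of the M^p coefficients of the conventional model.

```latex
% Degree-P, memory-M discrete-time Volterra model (assumed notation)
y(n) = \sum_{p=1}^{P} \sum_{k_1=0}^{M-1} \cdots \sum_{k_p=0}^{M-1}
       h_p(k_1,\ldots,k_p)\, u(n-k_1) \cdots u(n-k_p)

% PARAFAC (rank-R) decomposition of the p-th order kernel tensor
h_p(k_1,\ldots,k_p) \;\approx\; \sum_{r=1}^{R} \prod_{j=1}^{p} a^{(p,j)}_{r}(k_j)
```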

Journal ArticleDOI
TL;DR: The simulation results confirm that the GGS-KF-based identification approach yields the most accurate estimations compared to the conventional KF and other reported techniques in terms of parameter estimation error, mean-squared error (MSE), fitness percentage (FIT%), mean-squared deviation (MSD), and cumulative density function (CDF).
Abstract: This paper proposes an efficient global gravitational search (GGS) algorithm-assisted Kalman filter (KF) design, called a GGS-KF technique, for accurate estimation of Volterra-type nonlinear systems. KF is a well-known estimation technique for the dynamic states of the system. The best estimate is achieved if the system dynamics and noise statistical model parameters are available at the beginning. However, in real-time estimation problems these parameters are unspecified or only partly known. Due to this limitation, the performance of the KF degrades or sometimes diverges. In this work, two steps have been proposed for unknown system identification while overcoming the difficulty encountered in KF. The first step is to optimise the parameters of the KF using the GGS algorithm by considering a properly balanced fitness function. The second step is to estimate the unknown coefficients of the system by using the basic KF method with the optimally tuned KF parameters obtained from the first step. The proposed GGS-KF technique is tested on five different Volterra systems with various levels of noisy (10 dB, 15 dB and 20 dB) and noise-free input conditions. The simulation results confirm that the GGS-KF-based identification approach results in the most accurate estimations compared to the conventional KF and other reported techniques in terms of parameter estimation error, mean-squared error (MSE), fitness percentage (FIT%), mean-squared deviation (MSD), and cumulative density function (CDF). To validate the practical applicability of the proposed technique, two benchmark systems have also been identified based on the original data sets.

23 citations
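The GGS optimiser itself is not shown here. The hypothetical Python sketch below illustrates only the second step of such a scheme: estimating Volterra coefficients with a Kalman filter under a random-walk state model, where the noise covariances q and r are exactly the kind of parameters a search algorithm would tune.

```python
# Hypothetical sketch (not the authors' code): Kalman-filter estimation of
# second-order Volterra coefficients. State = coefficient vector with a
# random-walk model; q and r are the kind of parameters a search algorithm
# such as GGS could tune.
import numpy as np

rng = np.random.default_rng(1)

def phi(u, n):
    # regressor for memory 2: [u(n), u(n-1), u(n)^2, u(n)u(n-1), u(n-1)^2]
    return np.array([u[n], u[n - 1], u[n] ** 2, u[n] * u[n - 1], u[n - 1] ** 2])

theta_true = np.array([0.8, -0.4, 0.25, 0.1, -0.05])   # assumed "unknown" system
u = rng.standard_normal(400)
y = np.array([phi(u, n) @ theta_true for n in range(1, len(u))])
y += 0.05 * rng.standard_normal(len(y))

q, r = 1e-6, 0.05 ** 2     # process/measurement noise covariances (tunable)
theta = np.zeros(5)        # state estimate (Volterra coefficients)
P = np.eye(5)              # state covariance

for n in range(1, len(u)):
    H = phi(u, n)                      # measurement row for this sample
    P = P + q * np.eye(5)              # predict step (random-walk transition)
    S = H @ P @ H + r                  # innovation variance (scalar)
    K = P @ H / S                      # Kalman gain
    theta = theta + K * (y[n - 1] - H @ theta)
    P = P - np.outer(K, H) @ P         # covariance update

print("estimated coefficients:", np.round(theta, 3))
```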

Journal ArticleDOI
TL;DR: A new and efficient approach is proposed in which a nature-inspired optimisation technique assists the Kalman filter (KF) in accurately solving the parametric estimation problem of highly complex non-linear systems; the experimental results illustrate that the ALO-KF approach leads to better coefficient estimation than the FA-KF, PSO-KF, GA-KF, and basic KF methods.
Abstract: This paper proposes a new and efficient approach where a nature-inspired optimisation technique has abetted the Kalman filter (KF) for accurately solving the parametric estimation problem of highly complex non-linear systems. The KF is the best optimal state estimator in terms of normalised mean squared error (NMSE) for linear Gaussian state-space models. However, the use of mismatched noise statistics in KF might result in performance degradation. To address this issue, three steps are proposed in this work for the accurate estimation of the unknown non-linear system parameters by using the Volterra model. The first step is to reformulate the Volterra model into a measurement form. Secondly, the KF parameters are optimised by using an evolutionary algorithm with an efficient objective function. The third step is to estimate the coefficients of the unknown system by using the KF technique with the help of optimally tuned KF parameters achieved in the second step. In simulations, three distinct higher memory size second-order Volterra models, two non-linear benchmark systems and the primary path of active-noise control (ANC) system based on real data sets are identified by using the basic KF, genetic algorithm (GA) assisted KF (GA-KF), particle swarm optimisation (PSO) assisted KF (PSO-KF), firefly algorithm (FA) assisted KF (FA-KF) and ant lion optimisation (ALO) assisted KF (ALO-KF) techniques. The experimental results illustrate that the ALO-KF approach leads to better coefficient estimation compared to FA-KF, PSO-KF, GA-KF, and basic KF methods.

14 citations
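The first of the three steps described above, rewriting the Volterra model into a measurement form, amounts to stacking all kernel coefficients into one parameter vector multiplied by a regressor of input products. The equation below sketches this linear-in-parameters form for a second-order model under assumed notation (memory M).

```latex
% Second-order Volterra model in linear-in-parameters "measurement form"
y(n) = \boldsymbol{\varphi}^{\mathsf{T}}(n)\,\boldsymbol{\theta} + v(n),
\quad
\boldsymbol{\varphi}(n) = \big[\, u(n),\ldots,u(n-M+1),\;
        u^{2}(n),\, u(n)u(n-1),\ldots,u^{2}(n-M+1) \,\big]^{\mathsf{T}},
\quad
\boldsymbol{\theta} = \big[\, h_1(0),\ldots,h_1(M-1),\;
        h_2(0,0),\, h_2(0,1),\ldots,h_2(M-1,M-1) \,\big]^{\mathsf{T}}
```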

Journal ArticleDOI
01 Jan 2020
TL;DR: This work derives and evaluates a method based on genetic algorithms for finding relative maxima of differentiable functions that are difficult to obtain by analytical methods, and builds a Python library that includes different genetic-algorithm components.
Abstract: The difficulty of finding closed-form solutions to many engineering problems has led a large part of the scientific community to develop indirect and alternative optimization techniques. Genetic algorithms are models based on natural selection and genetics that evolve a population, or set of possible solutions, to deliver one that is optimal, or at least very close to optimal, in the sense of a fitness function. In this work, we derive and evaluate a method based on genetic algorithms to find relative maxima of differentiable functions that are difficult to obtain by analytical methods. We build a library in Python that includes different components of genetic algorithms. The test problems include finding the maximum or minimum of functions in one and two dimensions.

12 citations
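The cited Python library is not reproduced here. The short sketch below is a generic, assumed real-coded genetic algorithm (tournament selection, arithmetic crossover, Gaussian mutation) maximising a smooth two-dimensional test function, in the spirit of the test problems the abstract describes.

```python
# Generic real-coded GA maximising a smooth 2-D test function; an assumed
# illustration, not the library described in the cited paper.
import numpy as np

rng = np.random.default_rng(42)

def f(x, y):
    # smooth test function with its global maximum near the origin
    return np.exp(-(x ** 2 + y ** 2)) + 0.1 * np.cos(3 * x) * np.cos(3 * y)

lo, hi = -2.0, 2.0
pop = rng.uniform(lo, hi, size=(60, 2))            # initial population

for gen in range(150):
    fit = f(pop[:, 0], pop[:, 1])
    # tournament selection: keep the better of two randomly drawn individuals
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    winners = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])
    parents = pop[winners]
    # arithmetic crossover between consecutive parents
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, clipped to the search box
    children += 0.05 * rng.standard_normal(children.shape)
    pop = np.clip(children, lo, hi)

best = pop[np.argmax(f(pop[:, 0], pop[:, 1]))]
print("approximate maximiser:", best, "f(best) =", f(*best))
```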

Journal ArticleDOI
TL;DR: Building on earlier Monte Carlo algorithms for Fredholm integral equations (Dimov et al.), the authors propose two approaches to extend the Fredholm algorithm to Volterra equations, one based on a change of variable at each step of the Markov chain and the other on an indicator-function transformation of the equation.
Abstract: In previous works (Dimov and Maire in Adv Comput Math 45(3):1499–1519, 2019; Dimov et al. in Appl Math Model 39(15):4494–4510, https://doi.org/10.1016/j.apm.2014.12.018 , 2015), we have developed two Monte Carlo algorithms to solve linear systems and Fredholm integral equations of the second kind. These algorithms rely on the computation of a score along a discrete or continuous homogeneous Markov chain until absorption. Here, we propose two approaches to extend the Fredholm algorithm to Volterra equations. The first one is based on a change in variable at each step of the Markov chain. The second one uses the indicator function to transform the Volterra equation into an appropriate form. The resulting Markov chains are inhomogeneous with an increasing absorption rate. The convergence is ensured as soon as the Volterra kernel is bounded. Numerical examples are given on basic reference problems and on high dimensional test cases up to 100 dimensions.

10 citations
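As a simplified, assumed illustration of the general idea of scoring a Markov chain until absorption, the Python sketch below estimates the solution of a Volterra integral equation of the second kind with a Neumann-series random walk; with K = 1 and f = 1 the exact solution u(t) = e^t provides a check.

```python
# Simplified, assumed sketch of a Monte Carlo estimator for a Volterra integral
# equation of the second kind, u(t) = f(t) + int_0^t K(t, s) u(s) ds, based on
# a backward random walk with absorption (a collision-type Neumann-series estimator).
import numpy as np

rng = np.random.default_rng(7)

K = lambda t, s: 1.0     # kernel; with f(t) = 1 the exact solution is exp(t)
f = lambda t: 1.0
p_absorb = 0.3           # probability of stopping the walk at each step

def estimate_u(t, n_walks=100_000):
    total = 0.0
    for _ in range(n_walks):
        s, w, score = t, 1.0, 0.0
        while True:
            score += w * f(s)                          # collision contribution
            if rng.random() < p_absorb:
                break                                  # chain absorbed
            s_next = rng.uniform(0.0, s)               # jump backwards in "time"
            w *= K(s, s_next) * s / (1.0 - p_absorb)   # importance weight
            s = s_next
        total += score
    return total / n_walks

print("MC estimate:", estimate_u(1.0), " exact:", np.exp(1.0))
```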

References
Journal ArticleDOI
TL;DR: A detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far are presented.
Abstract: Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.

4,321 citations


"Efficient Volterra systems identifi..." refers methods in this paper

  • ...swarm optimization [23], multiobjective decomposition-based algorithm [24], genetic programming [25], reaction optimisation [26], indicator-based algorithms [27], firefly algorithms [28], artificial bee or ant colony algorithms [29], differential evolution [30], learning automata-based selection [31]....

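As a concrete reminder of the mutation scheme the survey above describes, the sketch below is a generic DE/rand/1/bin loop on a toy objective; the population size, F, and CR are arbitrary assumptions, not values from any of the cited papers.

```python
# Generic DE/rand/1/bin loop on a toy objective; F, CR, and the population size
# are arbitrary assumptions, not values from the cited survey.
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):                        # simple test objective to minimise
    return float(np.sum(x ** 2))

dim, NP, F, CR = 5, 30, 0.7, 0.9      # dimension, population size, scale factor, crossover rate
pop = rng.uniform(-5, 5, size=(NP, dim))
cost = np.array([sphere(x) for x in pop])

for gen in range(300):
    for i in range(NP):
        # three distinct members, all different from the target vector i
        a, b, c = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])        # scaled difference of two members
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                # at least one gene from the mutant
        trial = np.where(cross, mutant, pop[i])        # binomial crossover
        if sphere(trial) <= cost[i]:                   # greedy one-to-one selection
            pop[i], cost[i] = trial, sphere(trial)

print("best cost:", cost.min())
```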

Book ChapterDOI
18 Sep 2004
TL;DR: In this article, the authors propose a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators and can be adapted to the preferences of the user and moreover does not require any additional diversity preservation mechanism such as fitness sharing to be used.
Abstract: This paper discusses how preference information of the decision maker can in general be integrated into multiobjective search. The main idea is to first define the optimization goal in terms of a binary performance measure (indicator) and then to directly use this measure in the selection process. To this end, we propose a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators. In contrast to existing algorithms, IBEA can be adapted to the preferences of the user and moreover does not require any additional diversity preservation mechanism such as fitness sharing to be used. It is shown on several continuous and discrete benchmark problems that IBEA can substantially improve on the results generated by two popular algorithms, namely NSGA-II and SPEA2, with respect to different performance measures.

1,849 citations
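The selection core of an IBEA-style scheme can be summarised in a few lines: each individual's fitness aggregates how strongly every other individual outperforms it under a binary indicator. The Python sketch below uses the additive epsilon indicator for a minimisation problem; kappa and the toy population are assumptions, and only a single environmental-selection removal is shown (the full algorithm would re-update fitness values after each removal).

```python
# Sketch of IBEA-style fitness assignment with the additive epsilon indicator
# (minimisation). Only the selection core is shown; the objective values, kappa,
# and the single removal step are assumed placeholders.
import numpy as np

rng = np.random.default_rng(5)
kappa = 0.05

def eps_indicator(a, b):
    """Smallest epsilon by which point a must be shifted to weakly dominate b."""
    return np.max(a - b)

def ibea_fitness(objs):
    n = len(objs)
    fit = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                # every other member j lowers i's fitness according to how
                # strongly j outperforms i under the indicator
                fit[i] += -np.exp(-eps_indicator(objs[j], objs[i]) / kappa)
    return fit

objs = rng.random((10, 2))            # toy bi-objective values, normalised to [0, 1]
fit = ibea_fitness(objs)
worst = int(np.argmin(fit))           # environmental selection: drop the worst individual
survivors = np.delete(objs, worst, axis=0)
print("removed point:", objs[worst], "-> kept", len(survivors), "individuals")
```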

Journal Article
TL;DR: This paper proposes a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators and can be adapted to the preferences of the user and moreover does not require any additional diversity preservation mechanism such as fitness sharing to be used.
Abstract: This paper discusses how preference information of the decision maker can in general be integrated into multiobjective search. The main idea is to first define the optimization goal in terms of a binary performance measure (indicator) and then to directly use this measure in the selection process. To this end, we propose a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators. In contrast to existing algorithms, IBEA can be adapted to the preferences of the user and moreover does not require any additional diversity preservation mechanism such as fitness sharing to be used. It is shown on several continuous and discrete benchmark problems that IBEA can substantially improve on the results generated by two popular algorithms, namely NSGA-II and SPEA2, with respect to different performance measures.

1,625 citations


"Efficient Volterra systems identifi..." refers methods in this paper

  • ...swarm optimization [23], multiobjective decomposition-based algorithm [24], genetic programming [25], reaction optimisation [26], indicator-based algorithms [27], firefly algorithms [28], artificial bee or ant colony algorithms [29], differential evolution [30], learning automata-based selection [31]....


Book
31 May 1997
TL;DR: Adaptive Filtering: Algorithms and Practical Implementation may be used as the principle text for courses on the subject, and serves as an excellent reference for professional engineers and researchers in the field.
Abstract: From the Publisher: Adaptive Filtering: Algorithms and Practical Implementation is a concise presentation of adaptive filtering, covering as many algorithms as possible while avoiding adapting notations and derivations related to the different algorithms. Furthermore, the book points out the algorithms which really work in a finite-precision implementation, and provides easy access to the working algorithms for the practicing engineer. Adaptive Filtering: Algorithms and Practical Implementation may be used as the principal text for courses on the subject, and serves as an excellent reference for professional engineers and researchers in the field.

1,294 citations


"Efficient Volterra systems identifi..." refers methods in this paper

  • ...Comparison with Alternative Approaches: This section assesses the performance of the proposed algorithm when compared against: (i) Recursive Least Squares (RLS) with forgetting factor parameter λ [68]...


Journal ArticleDOI
TL;DR: In this article, it was shown that any time-invariant continuous nonlinear operator with fading memory can be approximated by a Volterra series operator, and that the approximating operator can be realized as a finite-dimensional linear dynamical system with a nonlinear readout map.
Abstract: Using the notion of fading memory we prove very strong versions of two folk theorems. The first is that any time-invariant (TI) continuous nonlinear operator with fading memory can be approximated by a Volterra series operator, and the second is that the approximating operator can be realized as a finite-dimensional linear dynamical system with a nonlinear readout map. While previous approximation results are valid over finite time intervals and for signals in compact sets, the approximations presented here hold for all time and for signals in useful (noncompact) sets. The discrete-time analog of the second theorem asserts that any TI operator with fading memory can be approximated (in our strong sense) by a nonlinear moving-average operator. Some further discussion of the notion of fading memory is given.

923 citations
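In symbols (assumed notation, memory M and degree P), the discrete-time statement says that such an operator is well approximated by a finite nonlinear moving-average map, i.e. a truncated Volterra series over the last M input samples:

```latex
% Truncated discrete-time Volterra series = nonlinear moving-average operator:
% the output depends only on the last M input samples through polynomial terms
y(n) \;\approx\; h_0
  + \sum_{k_1=0}^{M-1} h_1(k_1)\,u(n-k_1)
  + \sum_{k_1=0}^{M-1}\sum_{k_2=0}^{M-1} h_2(k_1,k_2)\,u(n-k_1)\,u(n-k_2)
  + \cdots
  + \sum_{k_1=0}^{M-1}\cdots\sum_{k_P=0}^{M-1} h_P(k_1,\ldots,k_P)\,u(n-k_1)\cdots u(n-k_P)
```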