Journal ArticleDOI

Lorentzian Based Adaptive Filters for Impulsive Noise Environments

TL;DR: Simulation results show that the Lorentzian variable hard thresholding adaptive filtering (LVHTAF) algorithm outperforms existing robust sparse adaptive algorithms by producing a lower steady-state mean square error.
Abstract: In this paper, three Lorentzian based robust adaptive algorithms are proposed for identifying systems in the presence of impulsive noise. The first algorithm, called Lorentzian adaptive filtering (LAF), is derived from a sliding window type cost function with a Lorentzian norm of past errors to combat the adverse effect of impulsive noise on systems. The first and second order convergence analyses of the LAF algorithm are carried out in this paper. Then, to identify sparse systems in an impulsive noise environment, an $l_{0}$ norm penalty is introduced to the cost function of the LAF algorithm, leading to a new algorithm called Lorentzian hard thresholding adaptive filtering (LHTAF), which employs a hard thresholding operator with a fixed hard thresholding parameter to obtain sparse solutions. The effect of the hard thresholding operator is further analyzed, and the analysis shows that a variable hard thresholding parameter offers significant improvement in the performance of the algorithm; this results in the final algorithm, called Lorentzian variable hard thresholding adaptive filtering (LVHTAF), where the hard thresholding parameter is adjusted adaptively. Simulation results show that the LVHTAF outperforms the existing robust sparse adaptive algorithms by producing a lower steady-state mean square error.
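
To make the ingredients of these algorithms concrete, the sketch below combines a Lorentzian (logarithmic) error cost, whose gradient automatically down-weights large impulsive errors, with a hard-thresholding operator that zeroes small coefficients to promote sparsity. It is a minimal single-sample illustration under assumed parameter names (`mu`, `gamma`, `tau`), not the sliding-window recursion or the adaptive threshold rule derived in the paper.

```python
import numpy as np

def lorentzian_loss(e, gamma):
    """Lorentzian norm of an error sample: log(1 + e^2 / gamma^2)."""
    return np.log1p((e / gamma) ** 2)

def hard_threshold(w, tau):
    """Hard-thresholding operator: zero every coefficient with |w_i| < tau."""
    w = w.copy()
    w[np.abs(w) < tau] = 0.0
    return w

def lorentzian_sparse_update(w, x, d, mu=0.05, gamma=1.0, tau=1e-3):
    """One illustrative adaptive step: gradient descent on the Lorentzian
    cost followed by hard thresholding (LAF/LHTAF-style, schematic only)."""
    e = d - w @ x                                  # a priori error
    # d/dw of log(1 + e^2/gamma^2) gives the weight 2e/(gamma^2 + e^2),
    # so impulsive (large) errors contribute only a bounded correction.
    w = w + mu * (2.0 * e / (gamma ** 2 + e ** 2)) * x
    return hard_threshold(w, tau)

# Toy sparse system identification with occasional impulses in the noise.
rng = np.random.default_rng(0)
h = np.zeros(16); h[[2, 9]] = [1.0, -0.5]          # sparse unknown system
w = np.zeros(16)
for _ in range(5000):
    x = rng.standard_normal(16)
    noise = rng.standard_normal() * 0.01
    if rng.random() < 0.01:                        # rare large impulse
        noise += rng.standard_normal() * 50.0
    w = lorentzian_sparse_update(w, x, h @ x + noise)
print("mean-square deviation:", np.mean((w - h) ** 2))
```

Letting `tau` track the estimated error level, rather than keeping it fixed, is loosely the step that separates LVHTAF from LHTAF in the description above.
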
Citations
Journal ArticleDOI
TL;DR: A novel normalization based on the logarithmic hyperbolic cosine function is proposed to achieve stabilization in the case of large initial weight errors, which generates a logarithmic HCAF (LHCAF); a variable scaling factor and step-size LHCAF (VSS-LHCAF) is also proposed to improve the filtering accuracy and stability.
Abstract: The hyperbolic cosine function with high-order errors can be utilized to improve the accuracy of adaptive filters. However, when initial weight errors are large, the hyperbolic cosine-based adaptive filter (HCAF) may be unstable. In this paper, a novel normalization based on the logarithmic hyperbolic cosine function is proposed to achieve stabilization for the case of large initial weight errors, which generates a logarithmic HCAF (LHCAF). The cost function of LHCAF is the logarithmic hyperbolic cosine function, which is robust to large errors and smooth for small errors. The transient and steady-state analyses of LHCAF in terms of the mean-square deviation (MSD) are performed for a stationary white input with an even probability density function in a stationary zero-mean white noise. The convergence and stability of LHCAF can therefore be guaranteed as long as the filtering parameters satisfy certain conditions. The theoretical results based on the MSD are supported by the simulations. In addition, a variable scaling factor and step-size LHCAF (VSS-LHCAF) is proposed to improve the filtering accuracy of LHCAF further. The proposed LHCAF and VSS-LHCAF are superior to HCAF and other robust adaptive filters in terms of filtering accuracy and stability.
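
For reference, the lncosh cost $\frac{1}{\lambda}\ln\cosh(\lambda e)$ has gradient $\tanh(\lambda e)\,x$ with respect to the weights, so the correction term saturates for large errors. A minimal stochastic-gradient sketch of that mechanism follows; the normalization and the variable scaling factor/step size that define LHCAF and VSS-LHCAF are not reproduced here, and the parameter names are assumptions.

```python
import numpy as np

def lncosh_update(w, x, d, mu=0.05, lam=1.0):
    """One stochastic-gradient step on (1/lam) * ln(cosh(lam * e)).
    The resulting tanh(lam * e) error term behaves like LMS for small
    errors but is bounded by +/-1 for impulsive ones (HCAF-style sketch)."""
    e = d - w @ x
    return w + mu * np.tanh(lam * e) * x
```

For small errors tanh(λe) ≈ λe, recovering an LMS-like update; for large errors the correction is clipped, which is the source of the robustness discussed above.
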

67 citations


Additional excerpts

  • ...in the non-Gaussian noises [3], [6]–[9]....


Journal ArticleDOI
Zongsheng Zheng, Zhigang Liu, Haiquan Zhao, Yi Yu, Lu Lu
TL;DR: This paper presents a family of robust set-membership normalized subband adaptive filtering algorithms for acoustic echo cancellation (AEC) that obtain improved robustness against impulsive noises and decreased steady-state misalignment relative to the conventional set-membership NSAF (SM-NSAF) algorithm.
Abstract: This paper presents a family of robust set-membership normalized subband adaptive filtering (RSM-NSAF) algorithms for acoustic echo cancellation (AEC). By using a new robust set-membership error bound, the RSM-NSAF algorithm obtains improved robustness against impulsive noises and decreased steady-state misalignment relative to the conventional set-membership NSAF (SM-NSAF) algorithm. To exploit the sparsity of the impulse response, the $L_{0}$ norm constraint robust set-membership NSAF ( $L_{0}$ -RSM-NSAF), robust set-membership improved proportionate NSAF (RSM-IPNSAF), and $L_{0}$ norm constraint robust set-membership improved proportionate NSAF ( $L_{0}$ -RSM-IPNSAF) algorithms are derived by minimizing a differentiable cost function that utilizes the Riemannian distance between the updated and previous weight vectors as well as the $L_{0}$ norm of the weighted updated weight vector. Simulations in AEC application confirm the improvements of the proposed algorithms in performance.
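
For readers new to set-membership adaptation, the fullband set-membership NLMS recursion below shows the basic mechanism these subband algorithms build on: adapt only when the error magnitude exceeds an error bound, with a step size chosen to pull the a posteriori error back to that bound. This is a simplified sketch with a fixed bound `gamma`; the robust error bound, subband structure, and proportionate/$L_{0}$ extensions of the paper are not reproduced.

```python
import numpy as np

def sm_nlms_update(w, x, d, gamma=0.05, eps=1e-8):
    """Set-membership NLMS step: no update if |e| <= gamma; otherwise a
    normalized update whose step size shrinks the error toward gamma."""
    e = d - w @ x
    if abs(e) <= gamma:
        return w                              # estimate already acceptable
    mu = 1.0 - gamma / abs(e)                 # data-dependent step size
    return w + mu * e * x / (x @ x + eps)     # normalized correction
```
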

47 citations

Journal ArticleDOI
TL;DR: Simulations in system identification and acoustic echo-cancellation scenarios have demonstrated that the proposed algorithms outperform the corresponding order's generalized maximum correntropy criterion, normalized least mean square using step-size scaler, sign algorithm and least logarithmic absolute difference algorithms.

37 citations

Journal ArticleDOI
TL;DR: In this article, an exponential hyperbolic cosine function (EHCF) based new robust norm is introduced and a corresponding EHCF based adaptive filter called the exponential hyperbolic cosine adaptive filter (EHCAF) is developed.
Abstract: In recent years, correntropy-based algorithms, which include the maximum correntropy criterion (MCC), generalized MCC (GMCC), and kernel MCC (KMCC), and hyperbolic cosine function-based algorithms such as the hyperbolic cosine adaptive filter (HCAF), logarithmic HCAF (LHCAF), and least lncosh (Llncosh), have been widely utilized in adaptive filtering due to their robustness towards non-Gaussian/impulsive background noises. However, the performance of such algorithms suffers from high steady-state misalignment. To minimize the steady-state misalignment while keeping comparable computational complexity, an exponential hyperbolic cosine function (EHCF) based new robust norm is introduced and a corresponding EHCF based adaptive filter called the exponential hyperbolic cosine adaptive filter (EHCAF) is developed in this letter. Further, the computational complexity and the bound on the learning rate for stability of the proposed algorithm are also studied. A set of simulation studies has been carried out for a system identification scenario to assess the performance of the proposed algorithm. Further, the EHCAF algorithm has been extended and the filtered-x EHCAF (Fx-EHCAF) algorithm is proposed for robust room equalization.

35 citations

Journal ArticleDOI
Tao Liang, Yingsong Li, Wei Xue, Yibing Li, Tao Jiang
TL;DR: Compared with other typical recursive methods, the proposed RCLL algorithm can obtain superior steady state behavior and better robustness for combating impulsive noises.
Abstract: We propose a recursive constrained least lncosh (RCLL) adaptive algorithm to combat impulsive noises. In general, the lncosh function is used to develop a new algorithm within the context of constrained adaptive filtering by solving a linearly constrained optimization problem, where the lncosh function is the natural logarithm of the hyperbolic cosine function, which can be regarded as a combination of the mean-square-error (MSE) and mean-absolute-error (MAE) criteria. Compared with other typical recursive methods, the proposed RCLL algorithm obtains superior steady-state behavior and better robustness for combating impulsive noises. Besides, the mean-square convergence condition and the theoretical transient mean-square deviation of the RCLL algorithm are presented. Simulation results verified the theoretical analysis in non-Gaussian noises and showed the superior performance of the proposed RCLL algorithm.
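
The statement that the lncosh cost combines the MSE and MAE criteria follows from its limiting behaviour: $\ln\cosh(e) \approx e^2/2$ for small $|e|$ and $\approx |e| - \ln 2$ for large $|e|$. The short numerical check below illustrates this (the sample points are arbitrary):

```python
import numpy as np

# ln cosh(e) is approximately e^2/2 (MSE-like) near zero and |e| - ln 2
# (MAE-like) for large errors, which explains its robustness to impulses.
for e in [1e-3, 1e-2, 0.1, 1.0, 5.0, 20.0]:
    lncosh = np.log(np.cosh(e))
    print(f"e={e:7.3f}  lncosh={lncosh:9.4f}  "
          f"e^2/2={e**2 / 2:9.4f}  |e|-ln2={abs(e) - np.log(2):9.4f}")
```
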

35 citations


Cites methods from "Lorentzian Based Adaptive Filters f..."

  • ...The experiment was performed under three different disturbance noises, namely, uniform noise, binary noise and impulsive noise, and the impulsive noise is modeled as $v_i(l) = z(l) + w(l)\psi(l)$, where $z(l)$ is white Gaussian noise with zero mean and variance $\sigma_z^2$, and $w(l)\psi(l)$ is a Bernoulli-Gaussian process with probability of success $P[w(l)=1]=P_r$, $P[w(l)=0]=1-P_r$, and $\psi(l)$ denotes a zero-mean Gaussian process with variance $\sigma_\psi^2$ [30]....


  • ...The variance of $v_i(l)$ is $\sigma_{v_i}^2$, which is given by $\sigma_{v_i}^2 = \sigma_z^2 + P_r \times \sigma_\psi^2$ [30]....

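The Bernoulli-Gaussian impulsive noise model quoted in the excerpts above is simple to simulate. A minimal sketch follows; the parameter values (`sigma_z`, `sigma_psi`, `pr`) are illustrative, not those used in either paper.

```python
import numpy as np

def impulsive_noise(n, sigma_z=0.1, sigma_psi=10.0, pr=0.05, rng=None):
    """Generate v_i(l) = z(l) + w(l)*psi(l): Gaussian background noise plus
    a Bernoulli-Gaussian impulsive component with P[w(l) = 1] = pr."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.normal(0.0, sigma_z, n)           # zero-mean Gaussian background
    w = rng.random(n) < pr                    # Bernoulli gating sequence
    psi = rng.normal(0.0, sigma_psi, n)       # large-variance Gaussian impulses
    return z + w * psi

v = impulsive_noise(100_000)
# The sample variance should approach sigma_z^2 + pr * sigma_psi^2 = 5.01.
print(np.var(v))
```
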

References
Book
01 Jan 1986
TL;DR: This book develops the theory of adaptive filters, from Wiener filtering and least-mean-square algorithms to recursive least-squares (RLS) adaptive filters, with the Kalman filter serving as the unifying basis for the RLS family.
Abstract: Background and Overview. 1. Stochastic Processes and Models. 2. Wiener Filters. 3. Linear Prediction. 4. Method of Steepest Descent. 5. Least-Mean-Square Adaptive Filters. 6. Normalized Least-Mean-Square Adaptive Filters. 7. Transform-Domain and Sub-Band Adaptive Filters. 8. Method of Least Squares. 9. Recursive Least-Square Adaptive Filters. 10. Kalman Filters as the Unifying Bases for RLS Filters. 11. Square-Root Adaptive Filters. 12. Order-Recursive Adaptive Filters. 13. Finite-Precision Effects. 14. Tracking of Time-Varying Systems. 15. Adaptive Filters Using Infinite-Duration Impulse Response Structures. 16. Blind Deconvolution. 17. Back-Propagation Learning. Epilogue. Appendix A. Complex Variables. Appendix B. Differentiation with Respect to a Vector. Appendix C. Method of Lagrange Multipliers. Appendix D. Estimation Theory. Appendix E. Eigenanalysis. Appendix F. Rotations and Reflections. Appendix G. Complex Wishart Distribution. Glossary. Abbreviations. Principal Symbols. Bibliography. Index.

16,062 citations


"Lorentzian Based Adaptive Filters f..." refers background in this paper

  • ...For online applications like echo cancellation, system identification, noise cancellation and channel estimation, several adaptive algorithms have been developed over past decades [1], [2]....


Journal ArticleDOI
TL;DR: In this paper, the authors present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem, and show that the algorithm gives near-optimal error guarantees and is robust to observation noise.
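
For context, iterative hard thresholding alternates a gradient step on the data-fidelity term with keeping only the K largest-magnitude coefficients. A minimal sketch with a unit step size follows (the basic iteration is usually analysed under a scaling condition on A, e.g. operator norm below one; variable names are ours):

```python
import numpy as np

def iht(y, A, K, n_iter=200):
    """Iterative hard thresholding: x <- H_K(x + A^T (y - A x)), where H_K
    keeps the K largest-magnitude entries of its argument and zeroes the rest."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x)            # gradient step on ||y - Ax||^2 / 2
        small = np.argsort(np.abs(x))[:-K]   # indices of all but the K largest
        x[small] = 0.0                       # hard threshold
    return x
```
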

2,017 citations

Book
01 Jan 2003
TL;DR: This book presents a unified treatment of adaptive filters, covering optimal and linear estimation, steepest-descent and stochastic-gradient algorithms, least-squares and recursive least-squares methods, and the steady-state, tracking, and transient performance of adaptive filters.
Abstract: Preface. Acknowledgments. Notation. Symbols. Optimal Estimation. Linear Estimation. Constrained Linear Estimation. Steepest-Descent Algorithms. Stochastic-Gradient Algorithms. Steady-State Performance of Adaptive Filters. Tracking Performance of Adaptive Filters. Finite Precision Effects. Transient Performance of Adaptive Filters. Block Adaptive Filters. The Least-Squares Criterion. Recursive Least-Squares. RLS Array Algorithms. Fast Fixed-Order Filters. Lattice Filters. Laguerre Adaptive Filters. Robust Adaptive Filters. Bibliography. Author Index. Subject Index.

1,987 citations


"Lorentzian Based Adaptive Filters f..." refers background in this paper

  • ...For online applications like echo cancellation, system identification, noise cancellation and channel estimation, several adaptive algorithms have been developed over past decades [1], [2]....


Book ChapterDOI
04 Dec 2017
TL;DR: Probability theory provides a framework and tools to quantify and predict the chance of occurrence of an event in the presence of uncertainties, and also provides a logical way to make decisions in situations where the outcomes are uncertain.
Abstract: This chapter focuses on the basic results and illustrates the theory with several numerical examples. Probability theory essentially provides a framework and tools to quantify and predict the chance of occurrence of an event in the presence of uncertainties. Probability theory also provides a logical way to make decisions in situations where the outcomes are uncertain. Probability theory has widespread applications in a plethora of different fields such as financial modeling, weather prediction, and engineering. The literature on probability theory is rich and extensive. The proofs of the major results are not provided and are relegated to the references. While there are many different philosophical approaches to define and derive probability theory, Kolmogorov's axiomatic approach is the most widely used. This axiomatic approach begins by defining a small number of precise axioms or postulates and then deriving the rest of the theory from these postulates.

1,563 citations


Additional excerpts

  • ...Using Bernoulli trial [43], we can have...


Journal ArticleDOI
TL;DR: This paper studies two iterative algorithms that minimise the $l_0$ penalised cost functions of interest, and shows on one example that an adaptation of these algorithms can achieve results lying between those obtained with Matching Pursuit and those found with Orthogonal Matching Pursuit, while retaining the computational complexity of the Matching Pursuit algorithm.
Abstract: Sparse signal expansions represent or approximate a signal using a small number of elements from a large collection of elementary waveforms. Finding the optimal sparse expansion is known to be NP hard in general and non-optimal strategies such as Matching Pursuit, Orthogonal Matching Pursuit, Basis Pursuit and Basis Pursuit De-noising are often called upon. These methods show good performance in practical situations, however, they do not operate on the $l_0$ penalised cost functions that are often at the heart of the problem. In this paper we study two iterative algorithms that are minimising the cost functions of interest. Furthermore, each iteration of these strategies has computational complexity similar to a Matching Pursuit iteration, making the methods applicable to many real world problems. However, the optimisation problem is non-convex and the strategies are only guaranteed to find local solutions, so good initialisation becomes paramount. We here study two approaches. The first approach uses the proposed algorithms to refine the solutions found with other methods, replacing the typically used conjugate gradient solver. The second strategy adapts the algorithms and we show on one example that this adaptation can be used to achieve results that lie between those obtained with Matching Pursuit and those found with Orthogonal Matching Pursuit, while retaining the computational complexity of the Matching Pursuit algorithm.
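
As a point of comparison for the iterative algorithms studied here, a bare-bones Matching Pursuit loop looks like the sketch below (columns of the dictionary are assumed to have unit $l_2$ norm, and a fixed iteration count stands in for a proper stopping rule):

```python
import numpy as np

def matching_pursuit(y, D, n_iter=50):
    """Greedy Matching Pursuit: repeatedly select the dictionary atom most
    correlated with the residual and subtract its contribution."""
    coeffs = np.zeros(D.shape[1])
    r = y.copy()                        # current residual
    for _ in range(n_iter):
        c = D.T @ r                     # correlation with every atom
        k = np.argmax(np.abs(c))        # best-matching atom
        coeffs[k] += c[k]               # accumulate its coefficient
        r = r - c[k] * D[:, k]          # peel off that contribution
    return coeffs
```
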

1,246 citations


"Lorentzian Based Adaptive Filters f..." refers background in this paper

  • ...convergence analysis has been fairly well investigated in the domain of compressive sensing [35], [38]–[40]....


  • ...We, therefore, follow the suboptimal strategy adopted in [32], [38] to solve the above optimization problem, and it is expected that the resulting algorithm will have faster convergence rate....
