Topic
Kernel adaptive filter
About: Kernel adaptive filter is a research topic. Over the lifetime, 8771 publications have been published within this topic receiving 142711 citations.
Papers
TL;DR: In this article, it was shown that for the problem of detecting a nonfluctuating target in Gaussian noise, three common optimality criteria lead to identical multichannel filter designs.
Abstract: It is shown that for the problem of detecting a nonfluctuating target in Gaussian noise, three common optimality criteria lead to identical multichannel filter designs.
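The equivalent design the abstract refers to is the classical whitened matched filter. A minimal sketch, assuming a known target signature s and noise covariance R (both synthetic here, not from the paper): the SNR-optimal multichannel weight vector is w = R⁻¹s up to scale, and no other filter achieves a higher output SNR.

```python
import numpy as np

# Sketch of the classical result: for a known (nonfluctuating) target
# signature s in Gaussian noise with covariance R, the SNR-optimal
# multichannel filter is w = R^{-1} s (up to scale).
# All quantities below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n = 4                                  # number of channels
s = rng.standard_normal(n)             # known target steering vector
A = rng.standard_normal((n, n))
R = A @ A.T + n * np.eye(n)            # noise covariance (symmetric PD)

w = np.linalg.solve(R, s)              # w = R^{-1} s

def snr(v):
    """Output SNR of filter v: |v^T s|^2 / (v^T R v)."""
    return (v @ s) ** 2 / (v @ R @ v)

# w maximizes this generalized Rayleigh quotient over all filters.
assert all(snr(w) + 1e-9 >= snr(rng.standard_normal(n)) for _ in range(100))
```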
57 citations
14 Dec 2005
TL;DR: In this paper, an improved particle-PHD filter is proposed that combines the particle approximation of the posterior PHD function with peak extraction from the posterior PHD particles to create target identities for the individual estimates.
Abstract: The probability hypothesis density (PHD) filter is a practical alternative to the optimal Bayesian multi-target filter based on random finite sets. It propagates the PHD function, the first-order moment of the posterior multi-target density, from which the number of targets as well as their individual states can be extracted. Furthermore, the sequential Monte Carlo (SMC) approximation of the PHD filter (also known as the particle-PHD filter) is available in the literature in order to overcome its intractability. However, the PHD filter keeps no track of target identities and hence cannot produce track-valued estimates of individual targets. This work considers an improved implementation of the particle-PHD filter that gives track-valued estimates of individual targets and proposes a novel way of doing so. The improved PHD filter combines the particle approximation of the posterior PHD function with peak extraction from the posterior PHD particles to create target identities for the individual estimates. The improved PHD filter does not affect the convergence results of the particle-PHD filter.
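The baseline the paper improves on can be sketched in a few lines. Below is a hedged, minimal single predict/update step of a bootstrap particle-PHD filter, without the track-labelling improvement the paper proposes and with target births omitted; all model parameters are illustrative assumptions. The key property shown is that the weight sum approximates the expected number of targets.

```python
import numpy as np

# Minimal sketch of one predict/update step of a bootstrap particle-PHD
# filter. The PHD is carried by weighted particles; the expected target
# count is the sum of the weights. Births are omitted for brevity, and
# all parameters below are illustrative assumptions.
rng = np.random.default_rng(1)
p_s, p_d = 0.99, 0.9            # survival / detection probabilities
kappa = 0.01                    # clutter intensity at each measurement
sigma = 0.5                     # measurement noise std

def g(z, x):
    """Gaussian measurement likelihood g(z | x)."""
    return np.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Particles approximating a prior PHD with two targets near 0 and 5.
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
w = np.full(1000, 2.0 / 1000)   # weights sum to the expected count, 2

# Predict: survival thinning plus random-walk motion.
x = x + rng.normal(0, 0.1, x.size)
w = p_s * w

# Update with one detection per target (standard PHD weight update).
Z = [0.1, 5.2]
w_new = (1 - p_d) * w
for z in Z:
    gz = g(z, x)
    w_new = w_new + p_d * gz * w / (kappa + np.sum(p_d * gz * w))
w = w_new

n_hat = w.sum()                 # estimated number of targets, close to 2
```

Note that nothing here labels which particles belong to which target; that gap is exactly what the paper's peak-extraction identity scheme addresses.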
57 citations
TL;DR: An exact upper bound for the mean squared error is provided, and sufficient conditions on the bandwidth and kernel under which the ABC filter converges to the target distribution as the number of particles goes to infinity are derived.
Abstract: The Approximate Bayesian Computation (ABC) filter extends the particle filtering methodology to general state-space models in which the density of the observation conditional on the state is intractable. We provide an exact upper bound for the mean squared error of the ABC filter, and derive sufficient conditions on the bandwidth and kernel under which the ABC filter converges to the target distribution as the number of particles goes to infinity. The optimal convergence rate decreases with the dimension of the observation space but is invariant to the complexity of the state space. We show that the adaptive bandwidth commonly used in the ABC literature can lead to an inconsistent filter. We develop a plug-in bandwidth guaranteeing convergence at the optimal rate, and demonstrate the powerful estimation, model selection, and forecasting performance of the resulting filter in a variety of examples.
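The mechanism described above can be sketched concretely. This is a minimal ABC particle filter under stated assumptions: the observation density is treated as intractable, so each particle is weighted by a Gaussian kernel of fixed bandwidth h applied to the distance between the recorded observation and a simulated pseudo-observation. The linear Gaussian model below is illustrative only (chosen so the code is self-contained, not taken from the paper).

```python
import numpy as np

# Sketch of an ABC particle filter: weights come from a kernel K_h on
# the distance between real and simulated observations, never from
# evaluating the (assumed intractable) observation density itself.
rng = np.random.default_rng(2)
T, N, h = 50, 2000, 0.3                          # steps, particles, bandwidth

# Simulate data from x_t = 0.9 x_{t-1} + u_t,  y_t = x_t + v_t.
x_true, ys = 0.0, []
for _ in range(T):
    x_true = 0.9 * x_true + rng.normal(0, 0.5)
    ys.append(x_true + rng.normal(0, 0.2))

x = rng.normal(0, 1, N)                          # initial particles
for y in ys:
    x = 0.9 * x + rng.normal(0, 0.5, N)          # propagate particles
    y_sim = x + rng.normal(0, 0.2, N)            # simulate pseudo-observations
    w = np.exp(-0.5 * ((y - y_sim) / h) ** 2)    # Gaussian kernel weights
    x = rng.choice(x, size=N, p=w / w.sum())     # multinomial resampling

est = x.mean()                                   # ABC-filtered mean at time T
```

The bandwidth is held fixed here deliberately: per the abstract, the adaptive bandwidth common in the ABC literature can make the filter inconsistent, and the paper's plug-in bandwidth is what restores the optimal rate.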
57 citations
TL;DR: A fully adaptive normalized nonlinear gradient descent (FANNGD) algorithm for online adaptation of nonlinear neural filters is proposed and is shown to converge faster than previously introduced algorithms of this kind.
Abstract: A fully adaptive normalized nonlinear gradient descent (FANNGD) algorithm for online adaptation of nonlinear neural filters is proposed. An adaptive stepsize that minimizes the instantaneous output error of the filter is derived using a linearization performed by a Taylor series expansion of the output error. For rigor, the remainder of the truncated Taylor series expansion within the expression for the adaptive learning rate is made adaptive and is updated using gradient descent. The FANNGD algorithm is shown to converge faster than previously introduced algorithms of this kind.
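The core update the abstract describes can be illustrated with a single tanh neuron used as an adaptive filter. This is a hedged sketch of the normalized nonlinear gradient descent step only: the step size is normalized by the linearized input power, while the fully adaptive remainder-term update that distinguishes FANNGD is omitted for brevity; the unknown system and constants are assumptions for the demo.

```python
import numpy as np

# Sketch of a normalized nonlinear gradient descent step for a single
# tanh neuron filter: eta is scaled down when the linearized input
# power dphi^2 * ||x||^2 is large. The FANNGD remainder-term adaptation
# is NOT implemented here; this is the simpler normalized baseline.
rng = np.random.default_rng(3)
n, mu, C = 4, 0.8, 1.0                     # taps, step gain, regularizer
w = np.zeros(n)                            # adaptive weights
w_true = np.array([0.5, -0.3, 0.2, 0.1])   # unknown system (assumed)

sq_err = []
for _ in range(3000):
    x = rng.standard_normal(n)
    d = np.tanh(w_true @ x)                # desired response
    y = np.tanh(w @ x)
    e = d - y                              # instantaneous output error
    dphi = 1.0 - y ** 2                    # tanh'(net) at the output
    eta = mu / (C + dphi ** 2 * (x @ x))   # normalized adaptive step size
    w = w + eta * e * dphi * x             # gradient step on 0.5 * e^2
    sq_err.append(e * e)

# The squared error shrinks as w approaches w_true.
assert np.mean(sq_err[-300:]) < np.mean(sq_err[:300])
```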
57 citations
01 Jan 1984
TL;DR: The proposed adaptive inverse modeling process is a promising new approach to the design of adaptive control systems and can be used to obtain a stable controller, whether the plant is minimum or non-minimum phase.
Abstract: A few of the well-established methods of adaptive signal processing are modified and extended for application to adaptive control. An unknown plant will track an input command signal if the plant is preceded by a controller whose transfer function approximates the inverse of the plant transfer function. An adaptive inverse modeling process can be used to obtain a stable controller, whether the plant is minimum or non-minimum phase. No direct feedback is involved; however, the system output is monitored and used to adjust the parameters of the controller. The proposed method is a promising new approach to the design of adaptive control systems.
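The inverse modeling idea above can be sketched with LMS. Drive the plant with a broadband signal, then adapt an FIR filter that maps the plant output back to a delayed copy of the plant input; cascading that filter in front of the plant then approximates a pure delay. The plant coefficients, filter length, and delay below are illustrative assumptions (the modeling delay is what lets the approach handle non-minimum-phase plants, though a minimum-phase plant is used here).

```python
import numpy as np

# Adaptive inverse modeling sketch: fit c so that c applied to the
# plant output reproduces the plant input delayed by `delay` samples.
rng = np.random.default_rng(4)
plant = np.array([1.0, 0.5])      # FIR plant (illustrative assumption)
L, delay, mu = 8, 2, 0.02         # inverse length, model delay, LMS step

u = rng.standard_normal(20000)    # broadband excitation
y = np.convolve(u, plant)[:u.size]

c = np.zeros(L)                   # adaptive inverse (controller) taps
for k in range(L, u.size):
    x = y[k - L + 1:k + 1][::-1]  # recent plant outputs, newest first
    d = u[k - delay]              # delayed plant input is the target
    e = d - c @ x                 # inverse-modeling error
    c = c + mu * e * x            # LMS update

# Controller followed by plant should approximate a pure delay:
# the cascade impulse response is close to a unit spike at `delay`.
cascade = np.convolve(c, plant)
assert abs(cascade[delay] - 1.0) < 0.1
```

Placing the converged c ahead of the plant makes the overall system track a delayed command signal, which is the controller structure the abstract describes.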
57 citations