
Showing papers on "Adaptive algorithm published in 2011"


Journal ArticleDOI
TL;DR: A non-intrusive method that builds a sparse polynomial chaos (PC) expansion, which may be obtained at a reduced computational cost compared with the classical "full" PC approximation.

1,112 citations


Journal ArticleDOI
TL;DR: An online adaptive control algorithm based on policy iteration reinforcement learning techniques to solve the continuous-time (CT) multi-player non-zero-sum (NZS) game with infinite horizon for linear and nonlinear systems.

368 citations


Journal Article
TL;DR: An adaptive algorithm for low Mach number reacting flows with complex chemistry that uses an operator-split treatment of stiff reaction terms and includes effects of differential diffusion is presented.
Abstract: We present an adaptive algorithm for low Mach number reacting flows with complex chemistry. Our approach uses a form of the low Mach number equations that discretely conserves both mass and energy. The discretization methodology is based on a robust projection formulation that accommodates large density contrasts. The algorithm uses an operator-split treatment of stiff reaction terms and includes effects of differential diffusion. The basic computational approach is embedded in an adaptive projection framework that uses structured hierarchical grids with subcycling in time that preserves the discrete conservation properties of the underlying single-grid algorithm. We present numerical examples illustrating the performance of the method on both premixed and non-premixed flames.

247 citations


Journal ArticleDOI
TL;DR: Simulations show that the proposed equalization algorithms outperform existing reduced-rank and full-rank algorithms while requiring a comparable computational cost.
Abstract: This paper presents a novel adaptive reduced-rank multiple-input-multiple-output (MIMO) equalization scheme and algorithms based on alternating optimization design techniques for MIMO spatial multiplexing systems. The proposed reduced-rank equalization structure consists of a joint iterative optimization of the following two equalization stages: 1) a transformation matrix that performs dimensionality reduction and 2) a reduced-rank estimator that retrieves the desired transmitted symbol. The proposed reduced-rank architecture is incorporated into an equalization structure that allows both decision feedback and linear schemes to mitigate the inter-antenna interference (IAI) and intersymbol interference (ISI). We develop alternating least squares (LS) expressions for the design of the transformation matrix and the reduced-rank estimator along with computationally efficient alternating recursive least squares (RLS) adaptive estimation algorithms. We then present an algorithm that automatically adjusts the model order of the proposed scheme. An analysis of the LS algorithms is carried out along with sufficient conditions for convergence and a proof of convergence of the proposed algorithms to the reduced-rank Wiener filter. Simulations show that the proposed equalization algorithms outperform existing reduced-rank and full-rank algorithms while requiring a comparable computational cost.

181 citations


Journal ArticleDOI
TL;DR: An iterative framelet-based approximation/sparsity deblurring algorithm (IFASDA) for a new objective functional whose content-dependent fidelity term combines the strengths of fidelity terms measured by the l1 and l2 norms.
Abstract: This paper studies a problem of image restoration that observed images are contaminated by Gaussian and impulse noise. Existing methods for this problem in the literature are based on minimizing an objective functional having the l1 fidelity term and the Mumford-Shah regularizer. We present an algorithm on this problem by minimizing a new objective functional. The proposed functional has a content-dependent fidelity term which assimilates the strength of fidelity terms measured by the l1 and l2 norms. The regularizer in the functional is formed by the l1 norm of tight framelet coefficients of the underlying image. The selected tight framelet filters are able to extract geometric features of images. We then propose an iterative framelet-based approximation/sparsity deblurring algorithm (IFASDA) for the proposed functional. Parameters in IFASDA are adaptively varying at each iteration and are determined automatically. In this sense, IFASDA is a parameter-free algorithm. This advantage makes the algorithm more attractive and practical. The effectiveness of IFASDA is experimentally illustrated on problems of image deblurring with Gaussian and impulse noise. Improvements in both PSNR and visual quality of IFASDA over a typical existing method are demonstrated. In addition, Fast_IFASDA, an accelerated algorithm of IFASDA, is also developed.

178 citations


Proceedings ArticleDOI
03 Oct 2011
TL;DR: A new kernel adaptive algorithm, called the kernel maximum correntropy (KMC) algorithm, is developed that combines the advantages of the KLMS and the maximum correntropy criterion (MCC); its convergence and self-regularization properties are also studied using the energy conservation relation.
Abstract: Kernel adaptive filters have drawn increasing attention due to their advantages such as universal nonlinear approximation with universal kernels, linearity and convexity in Reproducing Kernel Hilbert Space (RKHS). Among them, the kernel least mean square (KLMS) algorithm deserves particular attention because of its simplicity and sequential learning approach. Similar to most conventional adaptive filtering algorithms, the KLMS adopts the mean square error (MSE) as the adaptation cost. However, the mere second-order statistics is often not suitable for nonlinear and non-Gaussian situations. Therefore, various non-MSE criteria, which involve higher-order statistics, have received an increasing interest. Recently, the correntropy, as an alternative of MSE, has been successfully used in nonlinear and non-Gaussian signal processing and machine learning domains. This fact motivates us in this paper to develop a new kernel adaptive algorithm, called the kernel maximum correntropy (KMC), which combines the advantages of the KLMS and maximum correntropy criterion (MCC). We also study its convergence and self-regularization properties by using the energy conservation relation. The superior performance of the new algorithm has been demonstrated by simulation experiments in the noisy frequency doubling problem.
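
The core of the KMC update can be conveyed in a few lines. The sketch below is a minimal illustration, assuming a Gaussian kernel for the RKHS mapping and a Gaussian correntropy kernel; the step size eta, the kernel widths, and the growing-dictionary bookkeeping are illustrative choices rather than the paper's exact configuration.

```python
import numpy as np

def gauss_kernel(a, b, sigma):
    """Gaussian kernel between two input vectors (or scalars)."""
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2.0 * sigma ** 2))

def kmc_train(X, d, eta=0.5, sigma_k=1.0, sigma_c=1.0):
    """Sketch of a kernel maximum-correntropy (KMC) filter.

    Like KLMS, the filter grows one kernel center per sample, but each new
    coefficient is scaled by a Gaussian function of the instantaneous error
    (the stochastic gradient of the correntropy cost), which de-emphasizes
    large, outlier-like errors."""
    centers, alphas, errors = [], [], []
    for x, dn in zip(X, d):
        y = sum(a * gauss_kernel(c, x, sigma_k) for c, a in zip(centers, alphas))
        e = dn - y
        errors.append(e)
        # KLMS would use eta * e; KMC additionally weights by exp(-e^2 / (2 sigma_c^2))
        alphas.append(eta * np.exp(-e ** 2 / (2.0 * sigma_c ** 2)) * e)
        centers.append(x)
    return centers, alphas, errors
```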

175 citations


Journal ArticleDOI
TL;DR: This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification that develops around projections onto the sequence of the generated hyperslabs as well as the weighted ℓ1 balls.
Abstract: This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification. The sequentially observed data are used to generate an equivalent sequence of closed convex sets, namely hyperslabs. Each hyperslab is the geometric equivalent of a cost criterion that quantifies “data mismatch.” Sparsity is imposed by the introduction of appropriately designed weighted l1 balls, and the related projection operator is also derived. The algorithm develops around projections onto the sequence of the generated hyperslabs as well as the weighted l1 balls. The resulting scheme exhibits linear dependence, with respect to the unknown system's order, on the number of multiplications/additions and an O(L log2 L) dependence on sorting operations, where L is the length of the system/signal to be estimated. Numerical results are also given to validate the performance of the proposed method against the Least-Absolute Shrinkage and Selection Operator (LASSO) algorithm and two very recently developed adaptive sparse schemes that fuse arguments from the LMS/RLS adaptation mechanisms with those imposed by the LASSO rationale.
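
To make the geometry concrete, the following sketch shows the projection onto a single hyperslab and a toy adaptive loop that averages projections over the most recent hyperslabs. The weighted-l1-ball projection that enforces sparsity, and the paper's exact combination weights, are omitted; the parameters eps and q are illustrative.

```python
import numpy as np

def project_hyperslab(w, x, d, eps):
    """Project w onto the hyperslab {v : |d - x^T v| <= eps}.

    Each observed pair (x, d) defines such a slab; points already inside are
    left untouched, otherwise w is moved onto the nearest bounding hyperplane."""
    r = d - x @ w
    if abs(r) <= eps:
        return w
    return w + ((r - np.sign(r) * eps) / (x @ x)) * x

def adaptive_projection_sketch(X, d, eps=0.01, q=4):
    """Toy adaptive loop: at each step, average the projections of the current
    estimate onto the q most recent hyperslabs (no sparsity-promoting step)."""
    w = np.zeros(X.shape[1])
    for n in range(len(d)):
        recent = range(max(0, n - q + 1), n + 1)
        projs = [project_hyperslab(w, X[i], d[i], eps) for i in recent]
        w = np.mean(projs, axis=0)
    return w
```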

172 citations


Journal ArticleDOI
TL;DR: The main contributions of the proposed algorithm include the use of multiple features, namely, minutiae, density, orientation, and principal lines, for palmprint recognition, significantly improving the matching performance over the conventional algorithm.
Abstract: Palmprint is a promising biometric feature for use in access control and forensic applications. Previous research on palmprint recognition mainly concentrates on low-resolution (about 100 ppi) palmprints. But for high-security applications (e.g., forensic usage), high-resolution palmprints (500 ppi or higher) are required from which more useful information can be extracted. In this paper, we propose a novel recognition algorithm for high-resolution palmprint. The main contributions of the proposed algorithm include the following: 1) use of multiple features, namely, minutiae, density, orientation, and principal lines, for palmprint recognition to significantly improve the matching performance of the conventional algorithm; 2) design of a quality-based and adaptive orientation field estimation algorithm which performs better than the existing algorithm in regions with a large number of creases; and 3) use of a novel fusion scheme for an identification application which performs better than conventional fusion methods, e.g., the weighted sum rule, SVMs, or the Neyman-Pearson rule. Besides, we analyze the discriminative power of different feature combinations and find that density is very useful for palmprint recognition. Experimental results on the database containing 14,576 full palmprints show that the proposed algorithm has achieved a good performance. In the case of verification, the recognition system's False Rejection Rate (FRR) is 16 percent, which is 17 percent lower than that of the best existing algorithm at a False Acceptance Rate (FAR) of 10^-5, while in the identification experiment, the rank-1 live-scan partial palmprint recognition rate is improved from 82.0 to 91.7 percent.

139 citations


Journal ArticleDOI
TL;DR: An adaptive very short-term wind power prediction scheme that uses an artificial neural network as predictor along with adaptive Bayesian learning and Gaussian process approximation is presented.

125 citations


Journal ArticleDOI
TL;DR: It is shown that the convex regularized RLS algorithm performs as well as, and possibly better than, the regular RLS when there is a constraint on the value of the convex function evaluated at the true weight vector.
Abstract: In this letter, the RLS adaptive algorithm is considered in the system identification setting. The RLS algorithm is regularized using a general convex function of the system impulse response estimate. The normal equations corresponding to the convex regularized cost function are derived, and a recursive algorithm for the update of the tap estimates is established. We also introduce a closed-form expression for selecting the regularization parameter. With this selection of the regularization parameter, we show that the convex regularized RLS algorithm performs as well as, and possibly better than, the regular RLS when there is a constraint on the value of the convex function evaluated at the true weight vector. Simulations demonstrate the superiority of the convex regularized RLS with automatic parameter selection over regular RLS for the sparse system identification setting.
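
As a rough illustration of the idea (not the paper's exact recursion or its closed-form parameter selection), the sketch below runs a standard exponentially weighted RLS update and then nudges the estimate with a small correction built from a subgradient of an l1 regularizer, one simple convex choice that biases the estimate toward sparsity. The forgetting factor, initialization, and regularization weight are arbitrary.

```python
import numpy as np

def regularized_rls_sketch(X, d, lam=0.99, delta=1e2, gamma=1e-3):
    """Toy sketch of RLS with a convex (here l1) regularization term.

    Each step performs the usual RLS gain/weight/inverse-correlation updates,
    then subtracts gamma * P * sign(w), a subgradient-based correction toward
    sparser estimates.  This is an illustrative approximation only."""
    L = X.shape[1]
    w = np.zeros(L)
    P = delta * np.eye(L)                 # inverse correlation matrix estimate
    for x, dn in zip(X, d):
        k = P @ x / (lam + x @ P @ x)     # gain vector
        e = dn - x @ w                    # a-priori error
        w = w + k * e                     # standard RLS update
        P = (P - np.outer(k, x @ P)) / lam
        w = w - gamma * (P @ np.sign(w))  # convex-regularization correction
    return w
```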

125 citations


Journal ArticleDOI
TL;DR: The results indicate that incremental LMS can outperform spatial LMS, and that network-based implementations can outperform the aforementioned fusion-based solutions in some revealing ways.
Abstract: Consider a set of nodes distributed spatially over some region forming a network, where every node takes measurements of an underlying process. The objective is for every node in the network to estimate some parameter of interest from these measurements by cooperating with other nodes. In this work we compare the performance of four adaptive implementations. Two of the implementations are distributed and network-based; they are spatial LMS and incremental LMS. In both algorithms, the nodes share information in a cyclic manner and both algorithms differ by the amount of information shared (less information is shared in the incremental case). The two other adaptive algorithms that we study deal with centralized implementations of spatial and incremental LMS. In these latter cases, all nodes exchange data with a fusion center where the computations are performed. In the centralized approach, all nodes receive the same estimates back from the fusion center, while these estimates differ among the nodes in the distributed implementation. We analyze and compare the performance of fusion-based and network-based versions of spatial LMS and incremental LMS processing and reveal some interesting conclusions. The results indicate that incremental LMS can outperform spatial LMS, and that network-based implementations can outperform the aforementioned fusion-based solutions in some revealing ways.
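
A minimal sketch of the incremental mode of cooperation is given below: at each time step the running estimate is passed around the cycle of nodes, and each node applies one LMS update using only its local data. The step size, iteration count, and data layout are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def incremental_lms(data, w0, mu=0.01, n_iters=200):
    """Sketch of incremental LMS over a cycle of nodes.

    `data` is a list with one (U_k, d_k) pair per node, where row i of U_k is
    node k's regressor at time i and d_k[i] the corresponding measurement.
    At every time step the running estimate traverses the cycle, and each node
    applies one LMS update with its local data before handing it on."""
    w = np.array(w0, dtype=float)
    for i in range(n_iters):
        psi = w.copy()
        for U_k, d_k in data:                    # visit nodes in a fixed cyclic order
            u = U_k[i % len(d_k)]
            psi = psi + mu * u * (d_k[i % len(d_k)] - u @ psi)
        w = psi                                  # network estimate after a full cycle
    return w
```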

Journal ArticleDOI
TL;DR: The results show that the proposed algorithms outperform the best known reduced-rank schemes, while requiring lower complexity.
Abstract: This work proposes a blind adaptive reduced-rank scheme and constrained constant-modulus (CCM) adaptive algorithms for interference suppression in wireless communications systems. The proposed scheme and algorithms are based on a two-stage processing framework that consists of a transformation matrix that performs dimensionality reduction followed by a reduced-rank estimator. The complex structure of the transformation matrix of existing methods motivates the development of a blind adaptive reduced-rank constrained (BARC) scheme along with a low-complexity reduced-rank decomposition. The proposed BARC scheme and a reduced-rank decomposition based on the concept of joint interpolation, switched decimation and reduced-rank estimation subject to a set of constraints are then detailed. The proposed set of constraints ensures that the multipath components of the channel are combined prior to dimensionality reduction. We develop low-complexity joint interpolation and decimation techniques, stochastic gradient, and recursive least squares reduced-rank estimation algorithms. A model-order selection algorithm for adjusting the length of the estimators is devised along with techniques for determining the required number of switching branches to attain a predefined performance. An analysis of the convergence properties and issues of the proposed optimization and algorithms is carried out, and the key features of the optimization problem are discussed. We consider the application of the proposed algorithms to interference suppression in DS-CDMA systems. The results show that the proposed algorithms outperform the best known reduced-rank schemes, while requiring lower complexity.

Journal ArticleDOI
TL;DR: A new computational modeling framework, Fluidity, for application to a range of two‐ and three‐dimensional geodynamic problems, with the focus here on mantle convection, based upon a finite element discretization on unstructured simplex meshes.
Abstract: We present a new computational modeling framework, Fluidity, for application to a range of two- and three-dimensional geodynamic problems, with the focus here on mantle convection. The approach centers upon a finite element discretization on unstructured simplex meshes, which represent complex geometries in a straightforward manner. Throughout a simulation, the mesh is dynamically adapted to optimize the representation of evolving solution structures. The adaptive algorithm makes use of anisotropic measures of solution complexity, to vary resolution and allow long, thin elements to align with features such as boundary layers. The modeling framework presented differs from the majority of current mantle convection codes, which are typically based upon fixed structured grids. This necessitates a thorough and detailed validation, which is a focus of this paper. Benchmark comparisons are undertaken with a range of two- and three-dimensional, isoviscous and variable viscosity cases. In addition, model predictions are compared to experimental results. Such comparisons highlight not only the robustness and accuracy of Fluidity but also the advantages of anisotropic adaptive unstructured meshes, significantly reducing computational requirements when compared to a fixed mesh simulation.

Journal ArticleDOI
TL;DR: In this article, a unified adaptive consensus protocol for non-point, non-linear networked Euler-Lagrange systems with unknown parameters is proposed. It is shown that state consensus is reachable despite the unknown parameters, and the estimation errors of these parameters converge to zero.
Abstract: Most consensus protocols developed in the past are for linear-integrator systems or deterministic non-linear systems. Here, the authors study the state consensus for non-point, non-linear networked Euler-Lagrange systems with unknown parameters. Specifically, state consensus problems with both coupling time delay and switching topology are investigated. By establishing a unified architecture based on the passivity property, adaptive consensus protocols are developed. It is shown that state consensus is reachable despite the unknown parameters, and the estimation errors of these parameters converge to zero. Furthermore, by introducing the leader-follower architecture, the authors show that each agent will converge to its origin. Finally, a numerical example is given to illustrate the effectiveness of the proposed algorithms.

Journal ArticleDOI
TL;DR: Wang et al. propose a two-stage hybrid forecasting method for short-term load forecasting in China: in the first stage, daily load is forecast by time-series methods; in the second stage, the deviation caused by the time-series methods is forecast considering the impact of relative factors and added to the result of the first stage.
Abstract: Short-term load forecasting (STLF) is the basis of power system planning and operation. With regard to the fast-growing load in China, a novel two-stage hybrid forecasting method is proposed in this paper. In the first stage, daily load is forecasted by time-series methods; in the second stage, the deviation caused by time-series methods is forecasted considering the impact of relative factors, and then is added to the result of the first stage. Different from other conventional methods, this paper does an in-depth analysis on the impact of relative factors on the deviation between actual load and the forecasting result of traditional time-series methods. On the basis of this analysis, an adaptive algorithm is proposed to perform the second stage which can be used to choose the most appropriate algorithm among linear regression, quadratic programming, and support vector machine (SVM) according to the characteristic of historical data. These ideas make the forecasting procedure more accurate, adaptive, and effective, comparing with SVM and other prevalent methods. The effectiveness has been demonstrated by the experiments and practical application in China.
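
The two-stage structure can be illustrated with a toy example: a naive time-series forecast in stage one, followed by a least-squares regression of the historical deviation on exogenous factors in stage two. The weekly-persistence forecast and the plain least-squares fit below are placeholder assumptions; the paper adaptively selects among linear regression, quadratic programming, and SVM for the second stage.

```python
import numpy as np

def two_stage_forecast_sketch(load_hist, factors_hist, factors_next):
    """Toy illustration of the two-stage idea on daily load data.

    load_hist: 1-D array of past daily loads.
    factors_hist: (T, F) array of exogenous factors (e.g. temperature) per day.
    factors_next: length-F array of factors for the day to be forecast.

    Stage 1: naive forecast (same load as one week ago).
    Stage 2: regress the historical deviation between actual load and the
    stage-1 forecast on the factors, and add the predicted deviation back."""
    week = 7
    stage1_hist = load_hist[:-week]                 # stage-1 forecasts for days week..end
    deviation = load_hist[week:] - stage1_hist      # historical stage-1 errors
    X = np.column_stack([factors_hist[week:], np.ones(len(deviation))])
    coef, *_ = np.linalg.lstsq(X, deviation, rcond=None)
    stage1_next = load_hist[-week]                  # naive forecast for the next day
    stage2_next = np.append(np.asarray(factors_next, dtype=float), 1.0) @ coef
    return stage1_next + stage2_next
```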

Journal ArticleDOI
TL;DR: Efficient algorithms based on Kalman filtering and Expectation-Maximization are developed to identify sparse linear and nonlinear (sparse Volterra) systems, incorporating the effect of power amplifiers.

Journal ArticleDOI
TL;DR: The goal of this article is to provide the theoretical basis for tractable solutions to the "arriving on time" problem, enabling its use in real-time mobile phone applications, and to present an efficient algorithm for finding an optimal routing policy with well-bounded computational complexity.
Abstract: The goal of this article is to provide the theoretical basis for enabling tractable solutions to the "arriving on time" problem and enabling its use in real-time mobile phone applications. Optimal routing in transportation networks with highly varying traffic conditions is a challenging problem due to the stochastic nature of travel-times on links of the network. The definition of optimality criteria and the design of solution methods must account for the random nature of the travel-time on each link. Most common routing algorithms consider the expected value of link travel-time as a sufficient statistic for the problem and produce least expected travel-time paths without consideration of travel-time variability. However, in numerous practical settings the reliability of the route is also an important decision factor. In this article, the authors consider the following optimality criterion: maximizing the probability of arriving on time at a destination given a departure time and a time budget. The authors present an efficient algorithm for finding an optimal routing policy with a well bounded computational complexity, improving on an existing solution that takes an unbounded number of iterations to converge to the optimal solution. A routing policy is an adaptive algorithm that determines the optimal solution based on en route travel-times and therefore provides better reliability guarantees than an a-priori solution. Novel speed-up techniques to efficiently compute the adaptive optimal strategy and methods to prune the search space of the problem are also investigated. Finally, an extension of this algorithm which allows for both time varying traffic conditions and spatio-temporal correlations of link travel-time distributions is presented. The dramatic runtime improvements provided by the algorithm are demonstrated for practical scenarios in California.
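
A toy discretized dynamic program for the "arriving on time" criterion is sketched below. The link travel-time distributions, time step, and fixed number of sweeps are illustrative assumptions, and none of the paper's speed-up or pruning techniques (or the time-varying and correlated extensions) are reproduced.

```python
import numpy as np

def on_time_probabilities(links, nodes, dest, T, dt=1.0, n_sweeps=50):
    """Toy dynamic program for the stochastic 'arriving on time' criterion.

    links[(i, j)] is a discretized travel-time pmf over bins 0, dt, 2*dt, ...
    u[i][t] approximates the maximal probability of reaching `dest` from node i
    within a remaining budget of t*dt, assuming the traveler re-optimizes at
    every node (i.e., follows a routing policy rather than a fixed path)."""
    n_bins = int(T / dt) + 1
    u = {i: np.zeros(n_bins) for i in nodes}
    u[dest][:] = 1.0                           # already at the destination
    for _ in range(n_sweeps):                  # successive-approximation sweeps
        for i in nodes:
            if i == dest:
                continue
            for t in range(n_bins):
                best = 0.0
                for (a, b), pmf in links.items():
                    if a != i:
                        continue
                    # probability of arriving on time if link (i, b) is taken next
                    p = sum(pmf[s] * u[b][t - s] for s in range(min(t + 1, len(pmf))))
                    best = max(best, p)
                u[i][t] = best
    return u
```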

Journal ArticleDOI
TL;DR: An adaptive spatial clustering algorithm based on Delaunay triangulation (ASCDT for short) is proposed that can automatically discover clusters of complicated shapes and non-homogeneous densities in a spatial database, without the need to set parameters or use prior knowledge.

Patent
05 Jan 2011
TL;DR: In this paper, an active noise cancellation controller for performing noise attenuation in a system over a predetermined frequency range is presented, in which a fixed feedback controller, a fixed feedforward controller, and an adaptive feedforward controller are arranged to, in use, provide a noise cancellation signal at the output in dependence on a reference signal received at the first input and an error signal received at the second input.
Abstract: An active noise cancellation controller for performing noise attenuation in a system over a predetermined frequency range, the active noise cancellation controller comprising: a first input for receiving a reference signal indicative of a noise level; a second input for receiving an error signal indicative of a remnant noise level; an output for providing a noise cancellation signal to a system in which noise attenuation is to be performed; a fixed feedback controller having a fixed infinite impulse response filter arranged for operation on an error signal received at the second input; a fixed feedforward controller having a fixed infinite impulse response filter arranged for operation on a reference signal received at the first input; and an adaptive feedforward controller having a digital adaptive finite impulse response filter arranged for operation on a reference signal received at the first input and on an error signal received at the second input, the coefficients of the digital adaptive filter being determined by: in the frequency domain, independently generating a set of initial coefficients for each of a plurality of subbands into which the predetermined frequency range is divided, said sets of initial coefficients being generated in accordance with a predetermined adaptive algorithm; and transforming said sets of initial coefficients into the time domain for use as the said coefficients of the digital adaptive filter; wherein the fixed feedback controller, fixed feedforward controller and adaptive feedforward controller are arranged to, in use, provide a noise cancellation signal at the output in dependence on a reference signal received at the first input and an error signal received at the second input.

Journal ArticleDOI
TL;DR: An iterative method is presented for the accurate estimation of amplitude and frequency modulations (AM-FM) in time-varying multi-component quasi-periodic signals such as voiced speech, and an adaptive algorithm is suggested for nonparametric estimation of the AM-FM components in voiced speech.
Abstract: In this paper, we present an iterative method for the accurate estimation of amplitude and frequency modulations (AM-FM) in time-varying multi-component quasi-periodic signals such as voiced speech. Based on a deterministic plus noise representation of speech initially suggested by Laroche (“HNM: A simple, efficient harmonic plus noise model for speech,” Proc. WASPAA, Oct., 1993, pp. 169-172), and focusing on the deterministic representation, we reveal the properties of the model showing that such a representation is equivalent to a time-varying quasi-harmonic representation of voiced speech. Next, we show how this representation can be used for the estimation of amplitude and frequency modulations and provide the conditions under which such an estimation is valid. Finally, we suggest an adaptive algorithm for nonparametric estimation of AM-FM components in voiced speech. Based on the estimated amplitude and frequency components, a high-resolution time-frequency representation is obtained. The suggested approach was evaluated on synthetic AM-FM signals, while using the estimated AM-FM information, speech signal reconstruction was performed, resulting in a high signal-to-reconstruction error ratio (around 30 dB).

Journal ArticleDOI
TL;DR: The extension of the a posteriori error estimation and goal-oriented mesh refinement approach from laminar to turbulent flows, which are governed by the Reynolds-averaged Navier–Stokes and k–ω turbulence model (RANS-kω) equations, is presented.

Journal ArticleDOI
TL;DR: This work considers adaptive meshless discretisation of the Dirichlet problem for the Poisson equation based on numerical differentiation stencils obtained with the help of radial basis functions, using meshless stencil selection and adaptive refinement algorithms.

Journal ArticleDOI
TL;DR: Results show that the proposed adaptive model with the ALS-SVM method is able to track the time-varying characteristics of a boiler combustion system.

Journal ArticleDOI
TL;DR: Novel fast implementations of the weighted least-squares based iterative adaptive approach (IAA) for one-dimensional and two-dimensional spectral estimation of uniformly sampled data are considered using the Gohberg-Semencul (G-S)-type factorization of the IAA covariance matrices.
Abstract: We consider fast implementations of the weighted least-squares based iterative adaptive approach (IAA) for one-dimensional (1-D) and two-dimensional (2-D) spectral estimation of uniformly sampled data. IAA is a robust, user parameter-free and nonparametric adaptive algorithm that can work with a single data sequence or snapshot. Compared to the conventional periodogram, IAA can be used to significantly increase the resolution and suppress the sidelobe levels. However, due to its high computational complexity, IAA can only be used in applications involving small-sized data. We present herein novel fast implementations of IAA using the Gohberg-Semencul (G-S)-type factorization of the IAA covariance matrices. By exploiting the Toeplitz structure of the said matrices, we are able to reduce the computational cost by at least two orders of magnitudes even for moderate data sizes.
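
For reference, a plain (non-fast) 1-D IAA iteration looks roughly as follows; the grid size and iteration count are illustrative. The point of the paper is to replace the explicit covariance solves below with fast Gohberg-Semencul-based computations that exploit the Toeplitz structure of R.

```python
import numpy as np

def iaa_1d(y, K=256, n_iter=10):
    """Direct sketch of the 1-D iterative adaptive approach (IAA).

    y is a single uniformly sampled snapshot; the spectrum is estimated on a
    grid of K frequencies.  Each iteration rebuilds R = A diag(p) A^H and
    re-estimates every grid amplitude by a weighted least-squares fit."""
    N = len(y)
    n = np.arange(N)
    freqs = np.arange(K) / K
    A = np.exp(2j * np.pi * np.outer(n, freqs))          # N x K steering matrix
    p = np.abs(A.conj().T @ y) ** 2 / N ** 2             # periodogram initialization
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T                         # R = A diag(p) A^H
        Rinv_y = np.linalg.solve(R, y)
        Rinv_A = np.linalg.solve(R, A)
        s = (A.conj() * Rinv_y[:, None]).sum(axis=0) / \
            (A.conj() * Rinv_A).sum(axis=0)              # a_k^H R^-1 y / a_k^H R^-1 a_k
        p = np.abs(s) ** 2
    return freqs, p
```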

Journal ArticleDOI
TL;DR: In this article, an enhanced cross-entropy (ECE) method is proposed to solve the dynamic economic dispatch (DED) problem with valve-point effects; the cross-entropy method is a generic approach to combinatorial and multi-extremal optimization.

Journal ArticleDOI
TL;DR: In this article, the adaptive finite-element approximation to solutions of partial differential equations in variational formulation is analyzed and convergence of the sequence of discrete solutions to the true one is proved.
Abstract: We analyse the adaptive finite-element approximation to solutions of partial differential equations in variational formulation. Assuming well-posedness of the continuous problem and requiring only basic properties of the adaptive algorithm, we prove convergence of the sequence of discrete solutions to the true one. The proof is based on the ideas by Morin, Siebert and Veeser but replaces local efficiency of the estimator by a local density property of the adaptively generated finite-element spaces. As a result, estimators without a discrete lower bound are also included in our theory. The assumptions of the presented framework are fulfilled by a large class of important applications, estimators and adaptive strategies.
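
One concrete instance of the "basic properties of the adaptive algorithm" assumed here is the usual SOLVE-ESTIMATE-MARK-REFINE loop with bulk (Dörfler) marking. A sketch of that marking step is given below, assuming per-element indicator values have already been computed; the convergence framework of the paper admits a wider class of estimators and marking strategies.

```python
def dorfler_mark(eta, theta=0.5):
    """Bulk (Doerfler) marking for adaptive FEM.

    Given a local error-indicator value eta[T] for each element T (a dict),
    select a near-minimal set of elements whose squared indicators sum to at
    least theta times the total squared estimator.  theta = 0.5 is illustrative."""
    order = sorted(eta, key=eta.get, reverse=True)       # largest indicators first
    total = sum(v ** 2 for v in eta.values())
    marked, acc = [], 0.0
    for T in order:
        marked.append(T)
        acc += eta[T] ** 2
        if acc >= theta * total:
            break
    return marked
```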

Journal ArticleDOI
TL;DR: This paper presents a novel adaptive fuzzy logic controller equipped with an adaptive algorithm to achieve H∞ synchronization performance for uncertain fractional-order chaotic systems; simulation results signify the effectiveness of the proposed control scheme.
Abstract: This paper presents a novel adaptive fuzzy logic controller (FLC) equipped with an adaptive algorithm to achieve H∞ synchronization performance for uncertain fractional order chaotic systems. In order to handle the high level of uncertainties and noisy training data, a desired synchronization error can be attenuated to a prescribed level by incorporating fuzzy control design and an H∞ tracking approach. Based on a Lyapunov stability criterion, not only is the performance of the proposed method satisfactory, with an acceptable synchronization error level, but a rather simple stability analysis is also performed. The simulation results signify the effectiveness of the proposed control scheme.

Journal ArticleDOI
TL;DR: Analytical and simulation results show that the proposed Lp-norm detector yields significant performance gains compared to conventional energy detection in non-Gaussian noise and approaches the performance of the locally optimal detector which requires knowledge of the noise distribution.
Abstract: In cognitive radio (CR) systems, reliable spectrum sensing techniques are required in order to avoid interference to the primary users of the spectrum. Whereas most of the existing literature on spectrum sensing considers impairment by additive white Gaussian noise (AWGN) only, in practice, CRs also have to cope with various types of non-Gaussian noise such as man-made impulsive noise, co-channel interference, and ultra-wideband interference. In this paper, we propose an adaptive Lp-norm detector which does not require any a priori knowledge about the primary user signal and performs well for a wide range of circularly symmetric non-Gaussian noises with finite moments. We analyze the probabilities of false alarm and missed detection of the proposed detector in Rayleigh fading in the low signal-to-noise ratio regime and investigate its asymptotic performance if the number of samples available for spectrum sensing is large. Furthermore, we consider the deflection coefficient for optimization of the Lp-norm parameters and discuss its connection to the probabilities of false alarm and missed detection. Based on the deflection coefficient an adaptive algorithm for online optimization of the Lp-norm parameters is developed. Analytical and simulation results show that the proposed Lp-norm detector yields significant performance gains compared to conventional energy detection in non-Gaussian noise and approaches the performance of the locally optimal detector which requires knowledge of the noise distribution.
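
The detector itself reduces to a simple test statistic, sketched below; for p = 2 it recovers the conventional energy detector. The adaptive online optimization of the Lp-norm parameters via the deflection coefficient, and the threshold setting from a target false-alarm rate, are not reproduced here.

```python
import numpy as np

def lp_norm_statistic(y, p):
    """Lp-norm test statistic: the sample mean of |y_n|^p.

    p = 2 gives the conventional energy detector; smaller p de-emphasizes
    impulsive, heavy-tailed noise samples."""
    return np.mean(np.abs(y) ** p)

def detect(y, p, threshold):
    """Declare the primary user present if the statistic exceeds the threshold.
    The threshold and exponent p would in practice be chosen from the target
    false-alarm probability and the (estimated) deflection coefficient."""
    return lp_norm_statistic(y, p) > threshold
```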

Journal ArticleDOI
TL;DR: Systematic adaptive algorithms and strategies are developed to optimize both surface quality and fabrication efficiency in rapid prototyping (RP) and to identify the best slope degree of zigzag tool-paths to further minimize the build time.

Journal ArticleDOI
TL;DR: This work proposes a nonlinear adaptive algorithm to recover continuous-time chaotic signals in heavy-noise environments and shows that it is more effective than both chaos-based approaches and wavelet shrinkage.
Abstract: Detecting chaos and estimating the limit of prediction time in heavy-noise environments is an important and challenging task in many areas of science and engineering. An important first step toward this goal is to reduce noise in the signals. Two major types of methods for reducing noise in chaotic signals are chaos-based approaches and wavelet shrinkage. When noise is strong, chaos-based approaches are not very effective, due to failure to accurately approximate the local chaotic dynamics. Here, we propose a nonlinear adaptive algorithm to recover continuous-time chaotic signals in heavy-noise environments. We show that it is more effective than both chaos-based approaches and wavelet shrinkage. Furthermore, we apply our algorithm to study two important issues in geophysics. One is whether chaos exists in river flow dynamics. The other is the limit of prediction time for the Madden-Julian oscillation (MJO), which is one of the most dominant modes of low-frequency variability in the tropical troposphere and affects a wide range of weather and climate systems. Using the adaptive filter, we show that river flow dynamics can indeed be chaotic. We also show that the MJO is weakly chaotic with the prediction time around 50 days, which is considerably longer than the prediction times determined by other approaches.
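
The general flavor of such a nonlinear adaptive filter can be sketched as local low-order polynomial fits over overlapping windows, blended with tapering weights so the reconstruction stays smooth across window joins. The window length, polynomial order, and weighting below are illustrative assumptions and not the published algorithm's adaptive choices.

```python
import numpy as np

def adaptive_poly_smooth(x, half_window=50, order=3):
    """Sketch of overlapping-window polynomial smoothing for noisy chaotic signals.

    Assumes len(x) > 2 * half_window.  Each window of length 2*half_window + 1
    gets a polynomial fit; neighbouring fits are blended with triangular weights
    that are largest at the window centre and strictly positive at the edges."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = 2 * half_window + 1
    starts = list(range(0, n - w + 1, half_window))      # windows overlap by ~half
    if starts[-1] != n - w:
        starts.append(n - w)                             # last window reaches the end
    t_local = np.arange(w)
    tri = 1.0 - np.abs(t_local - half_window) / (half_window + 1.0)
    smooth = np.zeros(n)
    weight = np.zeros(n)
    for s in starts:
        coeffs = np.polyfit(t_local, x[s:s + w], order)
        smooth[s:s + w] += tri * np.polyval(coeffs, t_local)
        weight[s:s + w] += tri
    return smooth / weight
```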