
Showing papers on "Adaptive filter" published in 2013


Journal ArticleDOI
TL;DR: This paper presents a new approach to building adaptive wavelets: the main idea is to extract the different modes of a signal by designing an appropriate wavelet filter bank, which leads to a new transform called the empirical wavelet transform.
Abstract: Some recent methods, like the empirical mode decomposition (EMD), propose to decompose a signal according to its contained information. Even though its adaptability seems useful for many applications, the main issue with this approach is its lack of theory. This paper presents a new approach to building adaptive wavelets. The main idea is to extract the different modes of a signal by designing an appropriate wavelet filter bank. This construction leads us to a new wavelet transform, called the empirical wavelet transform. Many experiments are presented showing the usefulness of this method compared to the classic EMD.
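For intuition, here is a minimal sketch of the spectrum-segmentation idea, assuming ideal FFT band-pass masks in place of the paper's Meyer-type wavelet construction; the peak detection and the fixed number of modes are illustrative assumptions, not the method itself.

```python
import numpy as np
from scipy.signal import find_peaks

def ewt_like_modes(x, n_modes=3):
    """Crude empirical-wavelet-style decomposition: split the spectrum at the
    midpoints between its dominant peaks and band-pass filter each segment.
    (Illustrative only: the paper builds smooth Meyer-type wavelets, not the
    ideal FFT masks used here.)"""
    X = np.fft.rfft(x)
    mag = np.abs(X)
    peaks, _ = find_peaks(mag)
    # keep the n_modes largest spectral peaks (the number of modes is assumed known)
    peaks = np.sort(peaks[np.argsort(mag[peaks])[::-1][:n_modes]])
    # segment boundaries at midpoints between consecutive peaks
    bounds = [0] + [(peaks[i] + peaks[i + 1]) // 2 for i in range(len(peaks) - 1)] + [len(mag)]
    modes = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = np.zeros_like(X)
        mask[lo:hi] = X[lo:hi]               # ideal band-pass mask for this segment
        modes.append(np.fft.irfft(mask, n=len(x)))
    return modes

# usage: separate a slow oscillation from a fast one
t = np.linspace(0, 1, 1000, endpoint=False)
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
slow, fast = ewt_like_modes(sig, n_modes=2)
```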

1,398 citations


Book
31 Jan 2013
TL;DR: In this book, the authors present a survey of algorithms and architectures for image and signal processing based on order statistics and homomorphic filtering, including adaptive nonlinear filters and median filters.
Abstract: 1. Introduction.- 2. Statistical preliminaries.- 3. Image formation.- 4. Median filters.- 5. Digital filters based on order statistics.- 6. Morphological image and signal processing.- 7. Homomorphic filters.- 8. Polynomial filters.- 9. Adaptive nonlinear filters.- 10. Generalizations and new trends.- 11. Algorithms and architectures.

974 citations


Journal ArticleDOI
TL;DR: The results indicate that the proposed online SoC estimation with the AEKF algorithm performs optimally, and for different initial error values, the maximum SoC estimation error is less than 2% with closed-loop state estimation characteristics.
Abstract: An accurate State-of-Charge (SoC) estimation plays a significant role in battery systems used in electric vehicles due to the arduous operation environments and the requirement of ensuring safe and reliable operation of batteries. Among the conventional methods to estimate SoC, the Coulomb counting method is widely used, but its accuracy is limited due to the accumulated error. Another commonly used method is model-based online iterative estimation with Kalman filters, which improves the estimation accuracy to some extent. To improve the performance of Kalman filters in SoC estimation, the adaptive extended Kalman filter (AEKF), which employs the covariance matching approach, is applied in this paper. First, we built an implementation flowchart of the AEKF for a general system. Second, we built an online open-circuit voltage (OCV) estimation approach with the AEKF algorithm so that we can then get the SoC estimate by looking up the OCV-SoC table. Third, we proposed a robust online model-based SoC estimation approach with the AEKF algorithm. Finally, the SoC estimation approaches are evaluated experimentally in terms of estimation accuracy and robustness. The results indicate that the proposed online SoC estimation with the AEKF algorithm performs optimally, and for different initial error values, the maximum SoC estimation error is less than 2% with closed-loop state estimation characteristics.
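The AEKF's covariance matching re-estimates the filter's noise statistics from a sliding window of innovations. The sketch below shows that mechanism on a generic scalar system rather than a battery model; the window length, noise initializations, and the toy random-walk example are assumptions for illustration only.

```python
import numpy as np

def aekf_scalar(z, f, h, F, H, q0, r0, window=20):
    """Adaptive EKF sketch for a scalar state: the innovation covariance over a
    sliding window is matched to re-estimate the process noise (window length
    and initial noise values are assumed, not taken from the paper)."""
    x, P, Q, R = 0.0, 1.0, q0, r0
    innovations, estimates = [], []
    for zk in z:
        # predict
        x = f(x)
        P = F(x) * P * F(x) + Q
        # update
        d = zk - h(x)                         # innovation
        S = H(x) * P * H(x) + R
        K = P * H(x) / S
        x = x + K * d
        P = (1.0 - K * H(x)) * P
        # covariance matching over a sliding window of innovations
        innovations.append(d)
        if len(innovations) > window:
            innovations.pop(0)
        C = float(np.mean(np.square(innovations)))
        Q = K * C * K                         # adapt the process noise
        estimates.append(x)
    return np.array(estimates)

# toy usage: random-walk state observed in noise (stand-in for the OCV/SoC model)
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.05, 500))
meas = truth + rng.normal(0, 0.2, 500)
est = aekf_scalar(meas, f=lambda x: x, h=lambda x: x,
                  F=lambda x: 1.0, H=lambda x: 1.0, q0=1e-3, r0=0.04)
```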

345 citations


Book
22 Feb 2013
TL;DR: Digital Signal Processing, Second Edition enables electrical engineers and technicians in the fields of biomedical, computer, and electronics engineering to master the essential fundamentals of DSP principles and practice.
Abstract: Digital Signal Processing, Second Edition enables electrical engineers and technicians in the fields of biomedical, computer, and electronics engineering to master the essential fundamentals of DSP principles and practice. Many instructive worked examples are used to illustrate the material, and the use of mathematics is minimized for easier grasp of concepts. As such, this title is also useful to undergraduates in electrical engineering, and as a reference for science students and practicing engineers. The book goes beyond DSP theory, to show implementation of algorithms in hardware and software. Additional topics covered include adaptive filtering with noise reduction and echo cancellation, speech compression, signal sampling, digital filter realizations, filter design, multimedia applications, over-sampling, etc. More advanced topics are also covered, such as adaptive filters, speech compression (PCM, u-law, ADPCM), and multi-rate DSP and over-sampling ADC. New to this edition:
- MATLAB projects dealing with practical applications added throughout the book
- New chapter (Chapter 13) covering sub-band coding and wavelet transforms, methods that have become popular in the DSP field
- New applications included in many chapters, including applications of the DFT to seismic signals, electrocardiography data, and vibration signals
- All real-time C programs revised for the TMS320C6713 DSK
- Covers DSP principles with emphasis on communications and control applications
- Chapter objectives, worked examples, and end-of-chapter exercises aid the reader in grasping key concepts and solving related problems
- Website with MATLAB programs for simulation and C programs for real-time DSP

241 citations


Journal ArticleDOI
TL;DR: A prototype of a longitudinal driving-assistance system, which is adaptive to driver behavior, is developed, and results show that the self-learning algorithm is effective and that the system can, to some extent, adapt to individual characteristics.
Abstract: A prototype of a longitudinal driving-assistance system, which is adaptive to driver behavior, is developed. Its functions include adaptive cruise control and forward collision warning/avoidance. The research data came from driver car-following tests in real traffic environments. Based on the data analysis, a driver model imitating the driver's operation is established to generate the desired throttle depression and braking pressure. Algorithms for collision warning and automatic braking activation are designed based on the driver's pedal deflection timing during approach (gap closing). A self-learning algorithm for driver characteristics is proposed based on the recursive least-square method with a forgetting factor. Using this algorithm, the parameters of the driver model can be identified from the data in the manual operation phase, and the identification result is applied during the automatic control phase in real time. A test bed with an electronic throttle and an electrohydraulic brake actuator is developed for system validation. The experimental results show that the self-learning algorithm is effective and that the system can, to some extent, adapt to individual characteristics.
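The self-learning step rests on recursive least squares with a forgetting factor; a minimal sketch for a generic linear-in-parameters driver model follows, with the forgetting factor, initialization constant, and example regressors being assumptions rather than the paper's settings.

```python
import numpy as np

def rls_forgetting(Phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam: tracks theta in
    y_k ~ phi_k^T theta from streaming data.  lam and delta are typical
    textbook values, not the ones used in the paper."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for phi, yk in zip(Phi, y):
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)            # gain vector
        theta = theta + k * (yk - phi @ theta)   # prediction-error correction
        P = (P - np.outer(k, Pphi)) / lam        # covariance update with forgetting
    return theta

# e.g. rows of Phi could hold [gap, relative speed, own speed] and y the observed
# pedal command, so theta parameterizes a simple linear driver model (hypothetical regressors)
```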

237 citations


Proceedings ArticleDOI
01 Sep 2013
TL;DR: An adaptive rain streak removal algorithm for a single image is proposed, and experimental results demonstrate that the proposed algorithm removes rain streaks more efficiently and provides higher restored image quality than conventional algorithms.
Abstract: An adaptive rain streak removal algorithm for a single image is proposed in this work. We observe that a typical rain streak has an elongated elliptical shape with a vertical orientation. Thus, we first detect rain streak regions by analyzing the rotation angle and the aspect ratio of the elliptical kernel at each pixel location. We then perform nonlocal means filtering on the detected rain streak regions by selecting nonlocal neighbor pixels and their weights adaptively. Experimental results demonstrate that the proposed algorithm removes rain streaks more efficiently and provides higher restored image quality than conventional algorithms.

236 citations


Journal ArticleDOI
TL;DR: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filtering scheme based on this noise map, on phantom and patient data.
Abstract: Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). A graphics processing unit (GPU) implementation of the noise map calculation and the adaptive NLM filtering was developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimate matches the noise distribution determined from multiple repeated scans of a phantom, as demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the shape and peak frequency of the noise power spectrum better than commercial smoothing kernels, and indicate that the spatial resolution at low contrast levels is not significantly degraded. Both the subjective evaluation using the ACR phantom and the objective evaluation on a low-contrast detection task using a CHO model observer demonstrate an improvement in low-contrast performance. The GPU implementation can process and transfer 300 slice images within 5 min. On patient data, the adaptive NLM algorithm provides more effective denoising of CT data throughout a volume than standard NLM, and may allow significant lowering of radiation dose. After a two-week pilot study of lower dose CT urography and CT enterography exams, both GI and GU radiology groups elected to proceed with permanent implementation of adaptive NLM in their GI and GU CT practices. Conclusions: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filtering scheme based on this noise map, on phantom and patient data. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with clinical workflow. The adaptive NLM algorithm provides effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose.
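A minimal pixel-wise sketch of non-local means whose smoothing strength follows a per-pixel noise map is given below; the patch and search sizes and the strength factor are illustrative assumptions, and the paper's CT-specific noise model and GPU implementation are not reproduced.

```python
import numpy as np

def adaptive_nlm(noisy, noise_map, patch=3, search=7):
    """Pixel-wise NLM for small 2D images whose filtering strength follows a
    per-pixel noise map (the factor 1.5 and the window sizes are assumptions)."""
    noisy = np.asarray(noisy, dtype=float)
    pr, sr = patch // 2, search // 2
    pad = sr + pr
    padded = np.pad(noisy, pad, mode="reflect")
    out = np.zeros_like(noisy)
    H, W = noisy.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            sigma = noise_map[i, j]
            h2 = (1.5 * sigma) ** 2 + 1e-12      # local smoothing strength
            wsum, acc = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-max(d2 - 2 * sigma ** 2, 0.0) / h2)
                    wsum += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / wsum
    return out
```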

235 citations


Journal ArticleDOI
TL;DR: An adaptive filtering approach based on the discrete wavelet transform and an artificial neural network is proposed for ECG signal noise reduction; it can successfully remove a wide range of noise with a significant improvement in SNR (signal-to-noise ratio).

219 citations


Journal ArticleDOI
TL;DR: It is proved that the advantages offered by clever adaptive strategies and sophisticated estimation procedures, no matter how intractable, over classical compressed acquisition/recovery schemes are, in general, minimal.
Abstract: Suppose we can sequentially acquire arbitrary linear measurements of an n-dimensional vector x, resulting in the linear model y = Ax + z, where z represents measurement noise. If the signal is known to be sparse, one would expect the following folk theorem to be true: choosing an adaptive strategy which cleverly selects the next row of A based on what has been previously observed should do far better than a nonadaptive strategy which sets the rows of A ahead of time, thus not trying to learn anything about the signal in between observations. This paper shows that the folk theorem is false. We prove that the advantages offered by clever adaptive strategies and sophisticated estimation procedures, no matter how intractable, over classical compressed acquisition/recovery schemes are, in general, minimal.

157 citations


Journal ArticleDOI
TL;DR: A new class of nonlinear adaptive filters, consisting of a linear combiner followed by a flexible memory-less function, is presented; the nonlinear function is based on a spline that can be modified during learning.

155 citations


Journal ArticleDOI
TL;DR: Simulation results show that the MB-MMSE-DF detector achieves a performance superior to existing suboptimal detectors and close to the MLD, while requiring significantly lower complexity.
Abstract: In this work, decision feedback (DF) detection algorithms based on multiple processing branches for multi-input multi-output (MIMO) spatial multiplexing systems are proposed. The proposed detector employs multiple cancellation branches with receive filters that are obtained from a common matrix inverse and achieves a performance close to the maximum likelihood detector (MLD). Constrained minimum mean-squared error (MMSE) receive filters designed with constraints on the shape and magnitude of the feedback filters for the multi-branch MMSE DF (MB-MMSE-DF) receivers are presented. An adaptive implementation of the proposed MB-MMSE-DF detector is developed along with a recursive least squares-type algorithm for estimating the parameters of the receive filters when the channel is time-varying. A soft-output version of the MB-MMSE-DF detector is also proposed as a component of an iterative detection and decoding receiver structure. A computational complexity analysis shows that the MB-MMSE-DF detector does not require a significant additional complexity over the conventional MMSE-DF detector, whereas a diversity analysis discusses the diversity order achieved by the MB-MMSE-DF detector. Simulation results show that the MB-MMSE-DF detector achieves a performance superior to existing suboptimal detectors and close to the MLD, while requiring significantly lower complexity.

Journal ArticleDOI
TL;DR: Noise performance analysis shows that the proposed algorithms are comparable to, and in some cases better than, some standard denoising techniques available in the literature.
Abstract: Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained while smoothing data using a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their hallmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem that we address here is, “how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing, while preserving the temporal structure of a time-varying signal.” We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies fast locally, and vice versa, essentially enabling us to suitably trade off the bias and variance, thereby resulting in near-MMSE performance. At low signal-to-noise ratios (SNRs), it is seen that the adaptive filter length algorithm performance improves by incorporating a regularization term in the SURE objective function. We consider the algorithm performance on real-world electrocardiogram (ECG) signals. The results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable, and in some cases, better than some standard denoising techniques available in the literature.
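A small sketch of the SURE-based selection idea follows: for each candidate Savitzky-Golay window and order, an unbiased risk estimate is formed from the residual and the filter's centre tap, and the minimizer is kept. It selects one global window for simplicity (the paper adapts the length pointwise), and the candidate grid and known noise level are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

def sure_savgol(y, sigma, orders=(2, 3), windows=range(5, 41, 2)):
    """Pick the Savitzky-Golay window/order minimizing Stein's unbiased risk
    estimate, assuming the noise std sigma is known.  For a linear smoother H
    the SURE term 2*sigma^2*tr(H)/N reduces to 2*sigma^2*c0, where c0 is the
    centre tap of the S-G impulse response."""
    best = None
    for p in orders:
        for w in windows:
            if w <= p:
                continue
            yhat = savgol_filter(y, w, p)
            c0 = savgol_coeffs(w, p)[w // 2]
            sure = np.mean((y - yhat) ** 2) - sigma ** 2 + 2 * sigma ** 2 * c0
            if best is None or sure < best[0]:
                best = (sure, w, p, yhat)
    _, w_opt, p_opt, y_opt = best
    return y_opt, w_opt, p_opt
```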

Journal ArticleDOI
TL;DR: Alternative algorithms for P and M class PMUs that use adaptive filtering techniques in real time at up to 10-kHz sample rates allow consistent accuracy to be maintained across a ±33% frequency range, whereas the reference algorithm is not able to achieve a useful rate of change of frequency (ROCOF) accuracy.
Abstract: The new standard C37.118.1 lays down strict performance limits for phasor measurement units (PMUs) under steady-state and dynamic conditions. Reference algorithms are also presented for the performance (P) and measurement (M) class PMUs. In this paper, the performance of these algorithms is analyzed during some key signal scenarios, particularly those of off-nominal frequency, frequency ramps, and harmonic contamination. While it is found that total vector error (TVE) accuracy is relatively easy to achieve, the reference algorithm is not able to achieve a useful rate of change of frequency (ROCOF) accuracy. Instead, this paper presents alternative algorithms for P and M class PMUs, which use adaptive filtering techniques in real time at up to 10-kHz sample rates, allowing consistent accuracy to be maintained across a ±33% frequency range. ROCOF errors can be reduced by factors of > 40 for P class and > 100 for M class devices.

Journal ArticleDOI
TL;DR: Experimental results show the effectiveness of the proposed FLAF-based architectures in nonlinear AEC scenarios, thus providing an important solution for the modeling of nonlinear acoustic channels.
Abstract: This paper introduces a new class of nonlinear adaptive filters whose structure is based on the Hammerstein model. Such filters derive from the functional link adaptive filter (FLAF) model, defined by a nonlinear input expansion, which enhances the representation of the input signal through a projection in a higher dimensional space, and a subsequent adaptive filtering. In particular, two robust FLAF-based architectures are proposed and designed ad hoc to tackle nonlinearities in acoustic echo cancellation (AEC). The simplest architecture is the split FLAF, which separates the adaptation of linear and nonlinear elements using two different adaptive filters in parallel. In this way, the architecture can carry out the linear and the nonlinear modeling separately, each at its best. Moreover, in order to provide robustness against different degrees of nonlinearity, a collaborative FLAF is proposed based on the adaptive combination of filters. Such an architecture achieves the best performance regardless of the degree of nonlinearity in the echo path. Experimental results show the effectiveness of the proposed FLAF-based architectures in nonlinear AEC scenarios, thus providing an important solution for the modeling of nonlinear acoustic channels.
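A minimal sketch of the split-FLAF idea: a linear NLMS filter and a nonlinear filter operating on a trigonometric functional-link expansion of the same input are adapted in parallel and their outputs summed. The expansion order, filter length, and step sizes are assumptions, and the paper's collaborative combination scheme is not included.

```python
import numpy as np

def split_flaf(x, d, L=16, mu_lin=0.5, mu_fl=0.5, eps=1e-6):
    """Split functional-link sketch: a linear NLMS filter and a nonlinear NLMS
    filter on a trigonometric expansion of the same input run in parallel and
    their outputs are summed (expansion order, length and step sizes assumed)."""
    def expand(buf):
        # first-order trigonometric functional-link expansion of the input buffer
        return np.concatenate([buf, np.sin(np.pi * buf), np.cos(np.pi * buf)])

    w_lin = np.zeros(L)
    w_fl = np.zeros(3 * L)
    buf = np.zeros(L)
    y_out, e_out = np.zeros(len(x)), np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        g = expand(buf)
        y = w_lin @ buf + w_fl @ g               # sum of the two branches
        e = d[n] - y                             # residual echo / error
        w_lin += mu_lin * e * buf / (buf @ buf + eps)   # NLMS update, linear branch
        w_fl += mu_fl * e * g / (g @ g + eps)           # NLMS update, nonlinear branch
        y_out[n], e_out[n] = y, e
    return y_out, e_out
```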

Journal ArticleDOI
TL;DR: The proposed adaptive loop filter (ALF) is located at the last processing stage for each picture and can be regarded as a tool to catch and fix artifacts from previous stages, while achieving a low encoding latency.
Abstract: Adaptive loop filtering (ALF) for video coding minimizes the mean square error between original samples and decoded samples by using a Wiener-based adaptive filter. The proposed ALF is located at the last processing stage for each picture and can be regarded as a tool to catch and fix artifacts from previous stages. The suitable filter coefficients are determined by the encoder and explicitly signaled to the decoder. In order to achieve better coding efficiency, especially for high resolution videos, local adaptation is used for luma signals by applying different filters to different regions or blocks in a picture. In addition to filter adaptation, filter on/off control at the coding tree unit (CTU) level is also helpful for improving coding efficiency. Syntax-wise, filter coefficients are sent in a picture-level header called the adaptation parameter set, and filter on/off flags of CTUs are interleaved at the CTU level in the slice data. This syntax design not only supports picture-level optimization but also achieves a low encoding latency. Simulation results show that the ALF can achieve on average 7% bit rate reduction for 25 HD sequences. The run time increases are 1% and 10% for encoders and decoders, respectively, without special attention to optimization in C++ code.
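The Wiener-based training amounts to solving least-squares normal equations between shifted decoded samples and the original picture. The sketch below shows that step for a single square filter over a whole frame; the filter radius is an assumption, and the region/CTU-level adaptation and bitstream signaling of the actual ALF are not modeled.

```python
import numpy as np
from scipy.ndimage import convolve

def train_alf(decoded, original, radius=2):
    """Least-squares (Wiener) training of one square loop filter for a whole
    frame: solve the normal equations so that the filtered decoded picture best
    matches the original.  Region/CTU adaptation and syntax are not modeled."""
    r = radius
    H, W = decoded.shape
    cols = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(decoded, dy, axis=0), dx, axis=1)
            cols.append(shifted[r:H - r, r:W - r].ravel())   # interior only (avoids wrap-around)
    A = np.stack(cols, axis=1)
    b = original[r:H - r, r:W - r].ravel()
    coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeff.reshape(2 * r + 1, 2 * r + 1)

def apply_alf(decoded, coeff):
    # filter the decoded picture with the trained coefficients
    return convolve(decoded.astype(float), coeff, mode="nearest")
```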

Journal ArticleDOI
TL;DR: A novel model is developed to describe possible random delays and losses of measurements transmitted from a sensor to a filter by a group of Bernoulli distributed random variables, and the optimal filter is given by the Kalman filter when packets are time-stamped.
Abstract: A novel model is developed to describe possible random delays and losses of measurements transmitted from a sensor to a filter by a group of Bernoulli distributed random variables. Based on the newly developed model, an optimal linear filter dependent on the probabilities is presented in the linear minimum variance sense by the innovation analysis approach when packets are not time-stamped. The solution to the optimal linear filter is given in terms of a Riccati difference equation and a Lyapunov difference equation. A sufficient condition for the existence of the steady-state filter is given. Finally, the optimal filter is given by the Kalman filter when packets are time-stamped.

Journal ArticleDOI
TL;DR: The performance of the proposed method is superior to wavelet thresholding, the bilateral filter and the non-local means filter, and superior or comparable to the multi-resolution bilateral filter in terms of method noise, visual quality, PSNR and Image Quality Index.
Abstract: The non-local means filter uses all the possible self-predictions and self-similarities the image can provide to determine the pixel weights for filtering the noisy image, with the assumption that the image contains an extensive amount of self-similarity. As the pixels are highly correlated and the noise is typically independently and identically distributed, averaging of these pixels results in noise suppression, thereby yielding a pixel that is similar to its original value. The non-local means filter removes the noise and cleans the edges without losing too much fine structure and detail. But as the noise increases, the performance of the non-local means filter deteriorates and the denoised image suffers from blurring and loss of image details. This is because the similar local patches used to find the pixel weights contain noisy pixels. In this paper, a blend of the non-local means filter and wavelet thresholding of its method noise is proposed for better image denoising. The performance of the proposed method is compared with wavelet thresholding, the bilateral filter, the non-local means filter and the multi-resolution bilateral filter. It is found that the performance of the proposed method is superior to wavelet thresholding, the bilateral filter and the non-local means filter, and superior or comparable to the multi-resolution bilateral filter in terms of method noise, visual quality, PSNR and Image Quality Index.
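A compact sketch of the proposed blend follows, assuming the scikit-image NLM implementation and PyWavelets: the method noise (noisy minus NLM output) is soft-thresholded in the wavelet domain and added back. The NLM parameters, wavelet choice, and the universal threshold rule are assumptions rather than the paper's exact settings.

```python
import numpy as np
import pywt
from skimage.restoration import denoise_nl_means

def nlm_method_noise_blend(noisy, sigma, wavelet="db2", level=2):
    """Blend sketch: NLM output plus the wavelet-thresholded method noise
    (noisy - NLM), so details removed by NLM can be recovered.  The NLM
    parameters and the universal threshold rule are assumptions."""
    base = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                            h=0.8 * sigma, sigma=sigma)
    method_noise = noisy - base
    coeffs = pywt.wavedec2(method_noise, wavelet, level=level)
    thr = sigma * np.sqrt(2 * np.log(method_noise.size))     # universal threshold
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    recovered = pywt.waverec2(shrunk, wavelet)[:noisy.shape[0], :noisy.shape[1]]
    return base + recovered
```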

Patent
22 Feb 2013
TL;DR: An Angle and Distance Processing (ADP) module is employed on a mobile device and configured to provide runtime angle and distance information to an adaptive beamformer for canceling noise signals; it also provides a means for building a table of filter coefficients for adaptive filters used in echo cancellation, provides faster and more accurate Automatic Gain Control (AGC), provides delay information for a classifier in a Voice Activity Detector (VAD), and assists in separating echo path changes from double talk.
Abstract: The disclosed system and method for a mobile device combines information derived from onboard sensors with conventional signal processing information derived from a speech or audio signal to assist in noise and echo cancellation. In some implementations, an Angle and Distance Processing (ADP) module is employed on a mobile device and configured to provide runtime angle and distance information to an adaptive beamformer for canceling noise signals; it provides a means for building a table of filter coefficients for adaptive filters used in echo cancellation, provides faster and more accurate Automatic Gain Control (AGC), provides delay information for a classifier in a Voice Activity Detector (VAD), provides a means for automatic switching between the speakerphone and handset modes of the mobile device (or the primary and reference microphones), and assists in separating echo path changes from double talk.

Journal ArticleDOI
TL;DR: A new state estimation algorithm for nonlinear systems, the square root cubature information filter (SRCIF), is presented; it is derived from an extended information filter and a recently developed cubature Kalman filter.
Abstract: Nonlinear state estimation plays a major role in many real-life applications. Recently, some sigma-point filters, such as the unscented Kalman filter, the particle filter, or the cubature Kalman filter have been proposed as promising substitutes for the conventional extended Kalman filter. For multisensor fusion, the information form of the Kalman filter is preferred over standard covariance filters due to its simpler measurement update stage. This paper presents a new state estimation algorithm called the square root cubature information filter (SRCIF) for nonlinear systems. The cubature information filter is first derived from an extended information filter and a recently developed cubature Kalman filter. For numerical accuracy, its square root version is then developed. Unlike the extended Kalman or extended information filters, the proposed filter does not require the evaluation of Jacobians during state estimation. The proposed approach is further extended for use in multisensor state estimation. The efficacy of the SRCIF is demonstrated by a simulation example of a permanent magnet synchronous motor.
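For intuition, the sketch below shows the spherical-radial cubature rule that the cubature family relies on: 2n equally weighted points propagated through the nonlinear model, with no Jacobians. It covers only the prediction step of a basic cubature Kalman filter, not the square-root information form developed in the paper.

```python
import numpy as np

def cubature_points(x, P):
    """2n equally weighted spherical-radial cubature points: x +/- sqrt(n) * columns of chol(P)."""
    n = x.size
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * np.hstack([S, -S])
    return x[:, None] + offsets                  # shape (n, 2n), weights 1/(2n)

def cubature_predict(x, P, f, Q):
    """Time update of a basic cubature Kalman filter: propagate the cubature
    points through the process model f and recover mean and covariance."""
    pts = cubature_points(x, P)
    prop = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    x_pred = prop.mean(axis=1)
    diff = prop - x_pred[:, None]
    P_pred = diff @ diff.T / pts.shape[1] + Q
    return x_pred, P_pred
```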

Journal ArticleDOI
TL;DR: This study focused on the reduction of broadband myopotentials (EMG) in ECG signals using wavelet Wiener filtering with noise-free signal estimation, employing the dyadic stationary wavelet transform (SWT) both in the Wiener filter and in estimating the noise-free signal.
Abstract: In this study, we focused on the reduction of broadband myopotentials (EMG) in ECG signals using wavelet Wiener filtering with noise-free signal estimation. We used the dyadic stationary wavelet transform (SWT) in the Wiener filter as well as in estimating the noise-free signal. Our goal was to find a suitable filter bank and to choose the other parameters of the Wiener filter with respect to the signal-to-noise ratio (SNR) obtained. Testing was performed on artificially noised signals from the standard CSE database sampled at 500 Hz. When creating artificial interference, we started from generated white Gaussian noise, whose power spectrum was modified according to a model of the power spectrum of an EMG signal. To improve the filtering performance, we adaptively set the filtering parameters according to the level of interference in the input signal. We were able to increase the average SNR of the whole test database by about 10.6 dB. The proposed algorithm provides better results than the classic wavelet Wiener filter.
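A short sketch of wavelet-domain Wiener filtering with a thresholded pilot as the noise-free estimate is given below. It uses the decimated DWT from PyWavelets as a simple stand-in for the stationary wavelet transform of the paper, and the wavelet, level, and threshold rule are assumptions.

```python
import numpy as np
import pywt

def wavelet_wiener(noisy, sigma, wavelet="db4", level=4):
    """Wavelet-domain Wiener filtering sketch: a hard-thresholded pilot serves
    as the noise-free coefficient estimate s, and each noisy detail coefficient
    is scaled by s^2 / (s^2 + sigma^2)."""
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    thr = sigma * np.sqrt(2 * np.log(len(noisy)))        # universal threshold for the pilot
    out = [coeffs[0]]                                    # approximation band kept as is
    for d in coeffs[1:]:
        pilot = pywt.threshold(d, thr, mode="hard")      # noise-free signal estimate
        gain = pilot ** 2 / (pilot ** 2 + sigma ** 2)    # empirical Wiener gain
        out.append(gain * d)
    return pywt.waverec(out, wavelet)[:len(noisy)]
```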

Journal ArticleDOI
TL;DR: Qualitative and quantitative analyses of the nonlocal means (NLM) denoising algorithm have demonstrated the efficiency of the algorithm in recovering the noise-free change image while preserving the complex structures in urban areas.
Abstract: Multitemporal synthetic aperture radar (SAR) images have been increasingly used in change detection studies. However, the presence of speckle is the main disadvantage of this type of data. To reduce speckle, many local adaptive filters have been developed. Although these filters are effective in reducing speckle in homogeneous areas, their use is often accompanied by the degradation of spatial details and fine structures. In this paper, we investigate a nonlocal means (NLM) denoising algorithm that combines local structures with a global averaging scheme in the context of change detection using multitemporal SAR images. First, the ratio image is logarithmically scaled to convert the multiplicative noise model to an additive model. A multidimensional change image is then constructed using image neighborhood feature vectors. Principal component analysis is then used to reduce the dimensionality of the neighborhood feature vectors. Recursive linear regression combined with a fitting-accuracy assessment strategy is developed to determine the number of significant PC components to be retained for similarity weight computation. An intuitive method to estimate the unknown noise variance (necessary to run the NLM algorithm) based on the discarded PC components is also proposed. The efficiency of the method has been assessed using two different bitemporal SAR datasets acquired in Beijing and Shanghai, respectively. For comparison purposes, the algorithm is also tested against some of the most commonly used local adaptive filters. Qualitative and quantitative analyses of the algorithm have demonstrated its efficiency in recovering the noise-free change image while preserving the complex structures in urban areas.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A fast single-image defogging method is presented that uses a novel approach to refining the estimate of the amount of fog in an image with the locally adaptive Wiener filter, together with a solution for estimating the noise parameters of the filter when the observation and noise are correlated, obtained by decorrelating with a naively estimated defogged image.
Abstract: We present in this paper a fast single-image defogging method that uses a novel approach to refining the estimate of the amount of fog in an image with the locally adaptive Wiener filter. We provide a solution for estimating the noise parameters of the filter when the observation and noise are correlated, by decorrelating with a naively estimated defogged image. We demonstrate that our method is 50 to 100 times faster than existing fast single-image defogging methods and that it subjectively performs as well as the Spectral Matting smoothed Dark Channel Prior method.

Journal ArticleDOI
TL;DR: An efficient distributed-arithmetic (DA) formulation for the implementation of the block least mean square (BLMS) algorithm is presented; it uses a novel look-up table (LUT)-sharing technique for the computation of the filter outputs and weight-increment terms of the BLMS algorithm, and offers a significant saving of adders, which constitute a major component of DA-based structures.
Abstract: In this paper, we present an efficient distributed-arithmetic (DA) formulation for the implementation of the block least mean square (BLMS) algorithm. The proposed DA-based design uses a novel look-up table (LUT)-sharing technique for the computation of the filter outputs and the weight-increment terms of the BLMS algorithm. Besides, it offers a significant saving of adders, which constitute a major component of DA-based structures. Also, we have suggested a novel LUT-based weight updating scheme for the BLMS algorithm, where only one set of LUTs out of M sets needs to be modified in every iteration, with N = ML, where N and L are, respectively, the filter length and the input block size. Based on the proposed DA formulation, we have derived a parallel architecture for the implementation of the BLMS adaptive digital filter (ADF). Compared with the best of the existing DA-based LMS structures, the proposed one involves nearly L/6 times the adders and L times the LUT words, and offers nearly L times the throughput of the other. It requires nearly 25% more flip-flops and does not involve variable shifters like those of existing structures. It involves fewer LUT accesses per output (LAPO) than the existing structure for block sizes higher than 4. For block size 8 and filter length 64, the proposed structure involves 2.47 times more adders, 15% more flip-flops, and 43% less LAPO than the best of the existing structures, and offers 5.22 times higher throughput. The number of adders of the proposed structure does not increase proportionately with block size, and the number of flip-flops is independent of block size. This is a major advantage of the proposed structure for reducing its area-delay product (ADP), particularly when a large-order ADF is implemented for higher block sizes. ASIC synthesis results show that the proposed structure for filter length 64 has almost 14% and 30% less ADP and 25% and 37% less EPO than the best of the existing structures for block sizes 4 and 8, respectively.
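For reference, the block LMS recursion that the DA/LUT architecture realizes in hardware is sketched below in plain floating point; the filter length, block size, and step size are placeholders, and nothing of the distributed-arithmetic implementation itself is modeled.

```python
import numpy as np

def block_lms(x, d, N=64, L=8, mu=0.01):
    """Block LMS: one weight update per block of L samples, using the gradient
    accumulated over the block (values of N, L and mu are placeholders)."""
    w = np.zeros(N)
    y, e = np.zeros(len(x)), np.zeros(len(x))
    xbuf = np.concatenate([np.zeros(N - 1), x])          # zero prehistory
    for k in range(0, len(x) - L + 1, L):
        # L x N block of input regressors [x[n], x[n-1], ..., x[n-N+1]]
        X = np.stack([xbuf[k + i:k + i + N][::-1] for i in range(L)])
        yb = X @ w
        eb = d[k:k + L] - yb
        w = w + (mu / L) * (X.T @ eb)                    # block weight-increment term
        y[k:k + L], e[k:k + L] = yb, eb
    return y, e, w
```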

Journal ArticleDOI
TL;DR: A new digital background calibration technique for gain mismatches and sample-time mismatches in a Time-Interleaved Analog-to-Digital Converter (TI-ADC) is presented to reduce the circuit area.
Abstract: A new digital background calibration technique for gain mismatches and sample-time mismatches in a Time-Interleaved Analog-to-Digital Converter (TI-ADC) is presented to reduce the circuit area. In the proposed technique, the gain mismatches and the sample-time mismatches are calibrated by using pseudo aliasing signals instead of a bank of adaptive FIR filters, which is conventionally utilized. The pseudo aliasing signals are generated and subtracted from the ADC output. A pseudo aliasing generator consists of the Hadamard transform and a fixed FIR filter. In the case of a two-channel 10-bit TI-ADC, the proposed technique reduces the required word length of the FIR filter by about 50% without a look-up table (LUT) compared with the conventional technique. In addition, the proposed technique requires only one FIR filter, compared with the bank of adaptive filters, which requires (M-1) FIR filters in an M-channel TI-ADC.

Journal ArticleDOI
TL;DR: This paper shows that the two problems of Raman spectral deconvolution and feature extraction are tightly coupled and can be successfully solved together within a joint variational framework.
Abstract: Raman spectral interpretation often suffers from the common problems of band overlapping and random noise. Spectral deconvolution and feature-parameter extraction are both classical problems, which are known to be difficult and have attracted major research efforts. This paper shows that the two problems are tightly coupled and can be successfully solved together. Mutual support of the Raman spectral deconvolution and feature-extraction processes within a joint variational framework is theoretically motivated and validated by successful experimental results. The main idea is to recover the latent spectrum and extract spectral feature parameters from the slit-distorted Raman spectrum simultaneously. Moreover, a robust adaptive Tikhonov regularization function is suggested to distinguish flat regions, noise, and detail points, which can suppress noise effectively as well as preserve details. To evaluate the performance of the proposed method, quantitative and qualitative analyses were carried out by visual inspection and quality indexes of simulated and real Raman spectra.

Journal ArticleDOI
TL;DR: This paper proposes a method to dramatically reduce the number of unknowns of the optimization problem through approximation of the constraints, so that the optimal solution of the approximated optimization problem can be obtained with acceptable computational complexity.
Abstract: Recently, filter bank multicarrier (FBMC) modulations have attracted increasing attention. The filter banks of FBMC are derived from a prototype filter that determines the system performance, such as stopband attenuation, intersymbol interference (ISI) and interchannel interference (ICI). In this paper, we formulate a problem of direct optimization of the filter impulse-response coefficients for FBMC systems to minimize the stopband energy and constrain the ISI/ICI. Unfortunately, this filter optimization problem is nonconvex and highly nonlinear. Nevertheless, observing that all the functions in the optimization problem are twice-differentiable, we propose using the $\alpha$-based Branch and Bound ($\alpha$BB) algorithm to obtain the optimal solution. However, the convergence time of the algorithm is unacceptable because the number of unknowns (i.e., the filter coefficients) in the optimization problem is too large. The main contribution of this paper is that we propose a method to dramatically reduce the number of unknowns of the optimization problem through approximation of the constraints, so that the optimal solution of the approximated optimization problem can be obtained with acceptable computational complexity. Numerical results show that the proposed approximation is reasonable, and the optimized filters obtained with the proposed method achieve significantly lower stopband energy than those obtained with the frequency sampling and windowing based techniques.

Journal ArticleDOI
TL;DR: The MAPF is significantly more computationally efficient than a comparable particle filter that runs on the full augmented state and can handle sensor and actuator offsets as unknown means in the noise distributions, avoiding the standard approach of augmenting the state with such offsets.

Journal ArticleDOI
TL;DR: Simulation results show that the diffusion ITL-based distributed estimation method can achieve superior performance compared to the standard diffusion least mean square (LMS) algorithm when the noise is modeled as non-Gaussian.
Abstract: Distributed estimation over networks has received a lot of attention due to its broad applicability. In the diffusion type of distributed estimation, the parameters of interest can be well estimated from noisy measurements through diffusion cooperation between nodes. Meanwhile, the consumption of communication resources is low, since each node exchanges information only with its neighbors. In previous studies, most of the cost functions used in diffusion distributed estimation are based on the mean square error (MSE) criterion, which is optimal only when the measurement noise is Gaussian. However, this condition does not always hold in real-world environments. In non-Gaussian cases, information theoretic learning (ITL) provides a more general framework and has better performance than MSE-based methods. In this work, we incorporate an information theoretic measure into the cost function of diffusion distributed estimation. Moreover, an adaptive diffusion strategy based on an information theoretic measure is proposed to further improve estimation performance. Simulation results show that the diffusion ITL-based distributed estimation method can achieve superior performance compared to the standard diffusion least mean square (LMS) algorithm when the noise is modeled as non-Gaussian.
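One way to make the idea concrete is the adapt-then-combine diffusion sketch below, where each node's update is weighted by a Gaussian correntropy kernel of its error, a simple information-theoretic alternative to the plain MSE cost; the kernel width, step size, and network matrices are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def diffusion_mcc_lms(X, D, A, mu=0.05, kernel_sigma=1.0):
    """Adapt-then-combine diffusion estimation where each node's LMS update is
    weighted by a Gaussian correntropy kernel of its error (kernel width, step
    size and network are assumptions).
    X[k, t]: regressor of node k at time t; D[k, t]: its measurement;
    A: combination matrix whose columns sum to one."""
    n_nodes, T, M = X.shape
    W = np.zeros((n_nodes, M))
    for t in range(T):
        psi = np.empty_like(W)
        for k in range(n_nodes):
            e = D[k, t] - X[k, t] @ W[k]
            g = np.exp(-e ** 2 / (2 * kernel_sigma ** 2))   # down-weights impulsive errors
            psi[k] = W[k] + mu * g * e * X[k, t]            # adapt step
        W = A.T @ psi                                       # combine with neighbors
    return W
```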

Journal ArticleDOI
TL;DR: This work derives a different form of the Kalman filter by considering, at each iteration, a block of time samples instead of one time sample, as is the case in the conventional approach.
Abstract: The Kalman filter is a very interesting signal processing tool, which is widely used in many practical applications. In this paper, we study the Kalman filter in the context of echo cancellation. The contribution of this work is threefold. First, we derive a different form of the Kalman filter by considering, at each iteration, a block of time samples instead of one time sample, as is the case in the conventional approach. Second, we show how this general Kalman filter (GKF) is connected with some of the most popular adaptive filters for echo cancellation, i.e., the normalized least-mean-square (NLMS) algorithm, the affine projection algorithm (APA) and its proportionate version (PAPA). Third, a simplified Kalman filter is developed in order to reduce the computational load of the GKF; this algorithm behaves like a variable step-size adaptive filter. Simulation results indicate the good performance of the proposed algorithms, which can be attractive choices for echo cancellation.
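A sketch of a simplified Kalman adaptive filter in the spirit of the third contribution is shown below: with a random-walk model for the echo path and a scalar approximation of the weight covariance, the Kalman gain acts as a variable step size. The noise variances and filter length are placeholder assumptions, and the block (GKF) form is not reproduced.

```python
import numpy as np

def kalman_echo_canceller(x, d, L=128, q=1e-6, r=1e-2):
    """Simplified Kalman adaptive filter for echo cancellation: the echo path w
    follows a random walk (per-tap variance q) and d[n] = x_n^T w + noise of
    variance r.  A single scalar p approximates the per-tap uncertainty, so the
    Kalman gain behaves like a variable step size (q, r and L are placeholders)."""
    w = np.zeros(L)
    p = 1.0
    xbuf = np.zeros(L)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        p = p + q                                # predict: uncertainty grows
        en = d[n] - xbuf @ w                     # a priori error (echo-cancelled output)
        energy = xbuf @ xbuf
        denom = p * energy + r
        k = (p / denom) * xbuf                   # Kalman gain ~ variable step-size NLMS
        w = w + k * en
        p = p - (p * p * energy / denom) / L     # average the covariance decrease over L taps
        e[n] = en
    return e, w
```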

Journal ArticleDOI
TL;DR: Experiments illustrate that the proposed spatially adaptive iterative filtering (SAIF) strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.
Abstract: Spatial domain image filters (e.g., the bilateral filter, non-local means, the locally adaptive regression kernel) have achieved great success in denoising. Their overall performance, however, has not generally surpassed the leading transform domain-based filters (such as BM3D). One important reason is that spatial domain filters lack the efficiency to adaptively fine-tune their denoising strength; something that is relatively easy to do in transform domain methods with shrinkage operators. In the pixel domain, the smoothing strength is usually controlled globally by, for example, tuning a regularization parameter. In this paper, we propose spatially adaptive iterative filtering (SAIF), a new strategy to control the denoising strength locally for any spatial domain method. This approach is capable of filtering local image content iteratively using the given base filter, and the type of iteration and the iteration number are automatically optimized with respect to estimated risk (i.e., mean-squared error). In exploiting the estimated local signal-to-noise ratio, we also present a new risk estimator that is different from the often-employed SURE method, and exceeds its performance in many cases. Experiments illustrate that our strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.
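To illustrate the two iteration types the method chooses between, the sketch below runs diffusion (repeated smoothing) and twicing (adding back the filtered residual) with a simple Gaussian base filter and keeps the best iterate. The true MSE against a clean reference is used as an oracle stand-in for the paper's plug-in risk estimator, and the base filter and iteration cap are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saif_like(noisy, clean, base_sigma=1.5, max_iter=10):
    """Choose between diffusion iterations (repeated smoothing) and twicing
    iterations (adding back the filtered residual) of a Gaussian base filter,
    keeping the iterate with the lowest error.  The true MSE against `clean`
    is an oracle stand-in for the paper's plug-in risk estimator."""
    def mse(a):
        return np.mean((a - clean) ** 2)

    best = (mse(noisy), noisy.copy())
    # diffusion: re-apply the base filter (progressively stronger smoothing)
    z = noisy.copy()
    for _ in range(max_iter):
        z = gaussian_filter(z, base_sigma)
        if mse(z) < best[0]:
            best = (mse(z), z.copy())
    # twicing: add back the filtered residual (progressively weaker smoothing)
    z = gaussian_filter(noisy, base_sigma)
    for _ in range(max_iter):
        z = z + gaussian_filter(noisy - z, base_sigma)
        if mse(z) < best[0]:
            best = (mse(z), z.copy())
    return best[1]
```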