
Showing papers on "Adaptive filter published in 2005"


Journal ArticleDOI
TL;DR: The transmit filters are based on optimizations similar to those of the respective receive filters, with an additional constraint on the transmit power; the transmit Wiener filter has convergence properties similar to the receive Wiener filter, i.e., it converges to the matched filter and the zero-forcing filter for low and high signal-to-noise ratio, respectively.
Abstract: We examine and compare the different types of linear transmit processing for multiple input, multiple output systems, where we assume that the receive filter is independent of the transmit filter contrary to the joint optimization of transmit and receive filters. We can identify three filter types similar to receive processing: the transmit matched filter, the transmit zero-forcing filter, and the transmit Wiener filter. We show that the transmit filters are based on similar optimizations as the respective receive filters with an additional constraint for the transmit power. Moreover, the transmit Wiener filter has similar convergence properties as the receive Wiener filter, i.e., it converges to the matched filter and the zero-forcing filter for low and high signal-to-noise ratio, respectively. We give closed-form solutions for all transmit filters and present the fundamental result that their mean-square errors are equal to the errors of the respective receive filters, if the information symbols and the additive noise are uncorrelated. However, our simulations reveal that the bit-error ratio results of the transmit filters differ from the results for the respective receive filters.
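Under the stated assumptions, the three transmit filters admit simple closed forms. A hedged sketch follows (illustrative Python, not the authors' code; the power normalization and the regularization xi = Nr*sigma2/Etx follow the usual transmit Wiener filter derivation, and all names are ours):

```python
import numpy as np

def transmit_filters(H, sigma2, Etx):
    """Closed-form linear transmit filters for y = H P s + n.
    H: channel matrix (Nr x Nt), sigma2: noise variance, Etx: transmit
    power budget.  Returns (matched, zero-forcing, Wiener) precoders,
    each scaled to satisfy the transmit power constraint."""
    def scale(P):
        # enforce trace(P P^H) = Etx
        return P * np.sqrt(Etx / np.trace(P @ P.conj().T).real)
    Nr = H.shape[0]
    P_mf = scale(H.conj().T)                                    # transmit matched filter
    P_zf = scale(H.conj().T @ np.linalg.inv(H @ H.conj().T))    # transmit zero-forcing
    xi = Nr * sigma2 / Etx                                      # regularization from power constraint
    P_wf = scale(np.linalg.inv(H.conj().T @ H + xi * np.eye(H.shape[1])) @ H.conj().T)
    return P_mf, P_zf, P_wf
```

At high SNR the regularization vanishes and the Wiener precoder approaches the zero-forcing one; at low SNR the inverse is dominated by xi*I and it approaches the matched filter, matching the convergence behavior described above.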

792 citations


Proceedings ArticleDOI
18 Apr 2005
TL;DR: Adaptive techniques are presented to reduce the number of particles in a Rao-Blackwellized particle filter for learning grid maps, together with an approach to selectively carry out re-sampling operations that substantially reduces the problem of particle depletion.
Abstract: Recently, Rao-Blackwellized particle filters have been introduced as an effective means to solve the simultaneous localization and mapping (SLAM) problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper we present adaptive techniques to reduce the number of particles in a Rao-Blackwellized particle filter for learning grid maps. We propose an approach to compute an accurate proposal distribution taking into account not only the movement of the robot but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out re-sampling operations which substantially reduces the problem of particle depletion. Experimental results carried out with mobile robots in large-scale indoor as well as in outdoor environments illustrate the advantages of our methods over previous approaches.
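Selective re-sampling of this kind is commonly gated on the effective sample size of the importance weights. A minimal sketch (illustrative Python; the N/2 threshold and the systematic-resampling details are conventional choices, not necessarily the authors' exact ones):

```python
import numpy as np

def neff(weights):
    """Effective sample size N_eff = 1 / sum(w_i^2) of normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def selective_resample(particles, weights, threshold=0.5, rng=None):
    """Systematic resampling carried out only when N_eff < threshold * N,
    which limits particle depletion while the weights are still well spread.
    Returns (particles, weights); weights are uniform after a resampling step."""
    if rng is None:
        rng = np.random.default_rng()
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    if neff(w) >= threshold * n:
        return list(particles), w              # weights still informative: keep the set
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return [particles[i] for i in idx], np.full(n, 1.0 / n)
```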

763 citations


Journal ArticleDOI
TL;DR: The derivation of the details for the marginalized particle filter for a general nonlinear state-space model is derived and it is demonstrated that the complete high-dimensional system can be based on a particle filter using marginalization for all but three states.
Abstract: The particle filter offers a general numerical tool to approximate the posterior density function for the state in nonlinear and non-Gaussian filtering problems. While the particle filter is fairly easy to implement and tune, its main drawback is that it is quite computer intensive, with the computational complexity increasing quickly with the state dimension. One remedy to this problem is to marginalize out the states appearing linearly in the dynamics. The result is that one Kalman filter is associated with each particle. The main contribution in this paper is the derivation of the details for the marginalized particle filter for a general nonlinear state-space model. Several important special cases occurring in typical signal processing applications will also be discussed. The marginalized particle filter is applied to an integrated navigation system for aircraft. It is demonstrated that the complete high-dimensional system can be based on a particle filter using marginalization for all but three states. Excellent performance on real flight data is reported.

649 citations


Journal ArticleDOI
P.E. Howland1, D. Maksimiuk1, G. Reitsma1
03 Jun 2005
TL;DR: An experimental bistatic radar system is described that detects and tracks targets to ranges in excess of 150 km from the receiver, using echoes from a non-cooperative FM radio transmitter.
Abstract: An experimental bistatic radar system is described that detects and tracks targets to ranges in excess of 150 km from the receiver, using echoes from a non-cooperative FM radio transmitter. The system concept and limitations on performance are described, followed by details of the processing used to implement the system. An adaptive filter algorithm is described that is used to efficiently remove interference and strong clutter signals from the receiver channels. A computationally efficient algorithm for target detection using Doppler-sensitive cross-correlation techniques is described. A simple constant false alarm rate algorithm for target detection is described, together with a description of a Kalman filter based target association algorithm. Representative results from the system are provided and compared to truth data derived from air traffic control data.

642 citations


01 Jan 2005
TL;DR: In this article, the authors provide a unified, comprehensive and practical treatment of spectral estimation, signal modeling, adaptive filtering, and array processing, with a broad range of critical topics from industry and academia.
Abstract: This authoritative volume on statistical and adaptive signal processing offers you a unified, comprehensive and practical treatment of spectral estimation, signal modeling, adaptive filtering, and array processing. Packed with over 3,000 equations and more than 300 illustrations, this unique resource provides you with balanced coverage of implementation issues, applications, and theory, making it a smart choice for professional engineers and students alike. From the fundamentals of discrete-time signal processing and linear signal models, to optimum linear filters and least-squares filtering and prediction, you get in-depth information on a broad range of critical topics from leading experts in industry and academia. This invaluable reference provides clear examples, problem sets, and computer experiments that help you master the material and learn how to implement various methods presented in the book. You also find a set of MATLAB functions that illustrate the use of various techniques and can be used to solve real-world problems in the field.

515 citations


Journal ArticleDOI
TL;DR: A foreground validation algorithm that first builds a foreground mask using a slow-adapting Kalman filter, and then validates individual foreground pixels by a simple moving object model built using both the foreground and background statistics as well as the frame difference is proposed.
Abstract: Identifying moving objects in a video sequence is a fundamental and critical task in many computer-vision applications. Background subtraction techniques are commonly used to separate foreground moving objects from the background. Most background subtraction techniques assume a single rate of adaptation, which is inadequate for complex scenes such as a traffic intersection where objects are moving at different and varying speeds. In this paper, we propose a foreground validation algorithm that first builds a foreground mask using a slow-adapting Kalman filter, and then validates individual foreground pixels by a simple moving object model built using both the foreground and background statistics as well as the frame difference. Ground-truth experiments with urban traffic sequences show that our proposed algorithm significantly improves upon results using only Kalman filter or frame-differencing, and outperforms other techniques based on mixture of Gaussians, median filter, and approximated median filter.
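The slow-adaptation idea can be reduced to a per-pixel update with a small gain for pixels flagged as foreground, in the spirit of a steady-state Kalman gain. A minimal sketch (illustrative Python; the gains and threshold are our assumptions, and the paper's validation step additionally uses object statistics and frame differences):

```python
import numpy as np

def update_background(bg, frame, gain_fg=0.001, gain_bg=0.05, thresh=0.1):
    """One update of a slow-adapting background model: pixels flagged as
    foreground receive a much smaller gain, so moving objects are not
    quickly absorbed into the background estimate."""
    fg_mask = np.abs(frame - bg) > thresh      # provisional foreground mask
    gain = np.where(fg_mask, gain_fg, gain_bg)
    return bg + gain * (frame - bg), fg_mask
```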

294 citations


Journal ArticleDOI
TL;DR: The Savitzky-Golay smoothing and differentiation filter is extended to data windows with an even number of points; the feasibility of the approach is validated and some corresponding properties are discussed.
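The least-squares view of Savitzky-Golay filtering extends naturally to even-length windows, where the sample offsets become half-integers symmetric about the evaluation point. A sketch (illustrative Python, not the authors' derivation):

```python
import numpy as np

def savgol_coeffs(window, order, t=0.0):
    """Savitzky-Golay smoothing coefficients via polynomial least squares.
    Works for even window lengths too: sample offsets are symmetric about 0
    (e.g. window=4 -> [-1.5, -0.5, 0.5, 1.5]); t is the evaluation point."""
    x = np.arange(window) - (window - 1) / 2.0   # half-integer offsets when window is even
    A = np.vander(x, order + 1, increasing=True) # Vandermonde design matrix
    tp = t ** np.arange(order + 1)               # monomials evaluated at t
    # each output coefficient = fitted polynomial at t, as a linear map of the data
    return tp @ np.linalg.pinv(A)
```

Convolving these coefficients with the data yields the smoothed value at the (possibly half-sample) midpoint of each window; for odd windows they reduce to the classic tables.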

287 citations


Journal ArticleDOI
TL;DR: In this article, a least mean square (LMS) algorithm in complex form is presented to estimate power system frequency where the formulated structure is very simple and the three-phase voltages are converted to a complex form for processing by the proposed algorithm.
Abstract: Frequency is an important parameter in power system monitoring, control, and protection. A least mean square (LMS) algorithm in complex form, with a very simple structure, is presented in this paper to estimate power system frequency. The three-phase voltages are converted to a complex form for processing by the proposed algorithm. To enhance the convergence characteristic of the complex form of the LMS algorithm, a variable adaptation step-size is incorporated. The performance of the new algorithm is studied through simulations under different power system conditions.
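The core idea can be illustrated with a one-tap complex LMS filter acting on the complex phasor formed from the three phase voltages (a hedged sketch with a fixed step size; the paper's variable step-size rule is not reproduced here, and the names are ours):

```python
import numpy as np

def estimate_frequency(va, vb, vc, fs, mu=0.05):
    """Estimate power-system frequency with a one-tap complex LMS filter.
    The three phase voltages are combined into a complex phasor
    v = (2/3)*(va + a*vb + a^2*vc), a = exp(j*2*pi/3); for a balanced
    system v[n] ~ A*exp(j*(2*pi*f*n/fs + phi)), so v[n] ~ w*v[n-1]
    with the optimal tap w = exp(j*2*pi*f/fs)."""
    a = np.exp(2j * np.pi / 3)
    v = (2.0 / 3.0) * (va + a * vb + a * a * vc)
    w = 1.0 + 0.0j
    for n in range(1, len(v)):
        e = v[n] - w * v[n - 1]            # prediction error
        w += mu * e * np.conj(v[n - 1])    # complex LMS update
    return np.angle(w) * fs / (2 * np.pi)  # recover frequency from the tap's phase
```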

277 citations


Proceedings ArticleDOI
06 Jul 2005
TL;DR: The separable implementation of the bilateral filter offers equivalent adaptive filtering capability at a fraction of the execution time of the traditional filter.
Abstract: Bilateral filtering is an edge-preserving filtering technique that employs both geometric closeness and photometric similarity of neighboring pixels to construct its filter kernel. Multi-dimensional bilateral filtering is computationally expensive because the adaptive kernel has to be recomputed at every pixel. In this paper, we present a separable implementation of the bilateral filter. The separable implementation offers equivalent adaptive filtering capability at a fraction of the execution time of the traditional filter. Because of this efficiency, the separable bilateral filter can be used for fast preprocessing of images and videos. Experiments show that better image quality and higher compression efficiency are achievable if the original video is preprocessed with the separable bilateral filter.
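A separable approximation applies a 1-D bilateral pass along rows and then along columns, reducing the per-pixel cost from O(r^2) to O(r). A sketch under these assumptions (illustrative Python; the kernel parameters are ours):

```python
import numpy as np

def bilateral_1d(row, sigma_s, sigma_r, radius):
    """One 1-D bilateral pass: weights combine spatial closeness and
    photometric similarity of neighbouring samples."""
    out = np.empty_like(row, dtype=float)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-(offsets ** 2) / (2 * sigma_s ** 2))
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        nb = row[lo:hi]
        w = spatial[lo - i + radius:hi - i + radius] * \
            np.exp(-((nb - row[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = np.sum(w * nb) / np.sum(w)
    return out

def separable_bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Approximate 2-D bilateral filtering with a row pass followed by a
    column pass."""
    tmp = np.apply_along_axis(bilateral_1d, 1, img.astype(float),
                              sigma_s, sigma_r, radius)
    return np.apply_along_axis(bilateral_1d, 0, tmp,
                               sigma_s, sigma_r, radius)
```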

268 citations


Journal ArticleDOI
TL;DR: The objective is to build a set of filters that respond more strongly to features present in vehicles than to nonvehicles, thereby improving class discrimination; the approach unifies filter design with filter selection by integrating genetic algorithms (GAs) with an incremental clustering approach.
Abstract: Robust and reliable vehicle detection from images acquired by a moving vehicle is an important problem with numerous applications including driver assistance systems and self-guided vehicles. Our focus in this paper is on improving the performance of on-road vehicle detection by employing a set of Gabor filters specifically optimized for the task of vehicle detection. This is essentially a kind of feature selection, a critical issue when designing any pattern classification system. Specifically, we propose a systematic and general evolutionary Gabor filter optimization (EGFO) approach for optimizing the parameters of a set of Gabor filters in the context of vehicle detection. The objective is to build a set of filters that are capable of responding more strongly to features present in vehicles than to nonvehicles, thereby improving class discrimination. The EGFO approach unifies filter design with filter selection by integrating genetic algorithms (GAs) with an incremental clustering approach. Filter design is performed using GAs, a global optimization approach that encodes the Gabor filter parameters in a chromosome and uses genetic operators to optimize them. Filter selection is performed by grouping filters having similar characteristics in the parameter space using an incremental clustering approach. This step eliminates redundant filters, yielding a more compact optimized set of filters. The resulting filters have been evaluated using an application-oriented fitness criterion based on support vector machines. We have tested the proposed framework on real data collected in Dearborn, MI, in summer and fall 2001, using Ford's proprietary low-light camera.

235 citations


Journal ArticleDOI
TL;DR: The subtraction procedure has largely proved advantageous over other methods for power-line interference cancellation in ECG signals and has been used in thousands of ECG instruments and computer-aided systems.
Abstract: Modern biomedical amplifiers have a very high common mode rejection ratio. Nevertheless, recordings are often contaminated by residual power-line interference. Traditional analogue and digital filters are known to suppress ECG components near to the power-line frequency. Different types of digital notch filters are widely used despite their inherent contradiction: tolerable signal distortion needs a narrow frequency band, which leads to ineffective filtering in cases of larger frequency deviation of the interference. Adaptive filtering introduces unacceptable transient response time, especially after steep and large QRS complexes. Other available techniques such as Fourier transform do not work in real time. The subtraction procedure is found to cope better with this problem. The subtraction procedure was developed some two decades ago, and almost totally eliminates power-line interference from the ECG signal. This procedure does not affect the signal frequency components around the interfering frequency. Digital filtering is applied on linear segments of the signal to remove the interference components. These interference components are stored and further subtracted from the signal wherever non-linear segments are encountered. Modifications of the subtraction procedure have been used in thousands of ECG instruments and computer-aided systems. Other work has extended this procedure to almost all possible cases of sampling rate and interference frequency variation. Improved structure of the on-line procedure has worked successfully regardless of the multiplicity between the sampling rate and the interference frequency. Such flexibility is due to the use of specific filter modules. The subtraction procedure has largely proved advantageous over other methods for power-line interference cancellation in ECG signals.


Journal ArticleDOI
TL;DR: In this article, a dual-drive Mach-Zehnder modulator, driven by adaptive nonlinear digital filters, was used for signal pre-compensation in a single-mode fiber at 10 Gb/s.
Abstract: We propose and investigate a novel electronic dispersion compensation technique, in which signal precompensation is achieved using a dual-drive Mach-Zehnder modulator, driven by adaptive nonlinear digital filters. The results demonstrate effective compensation of over 13600 ps/nm, equivalent to 800 km of standard single-mode fiber, at 10 Gb/s.

Journal ArticleDOI
TL;DR: The differential evolution (DE) algorithm is a new heuristic approach with three main advantages: finding the true global minimum of a multimodal search space regardless of the initial parameter values, fast convergence, and the use of only a few control parameters.
Abstract: Any digital signal processing algorithm or processor can be reasonably described as a digital filter. The main advantage of an infinite impulse response (IIR) filter is that it can provide a much better performance than the finite impulse response (FIR) filter having the same number of coefficients. However, IIR filters might have a multimodal error surface. The differential evolution (DE) algorithm is a new heuristic approach with three main advantages: finding the true global minimum of a multimodal search space regardless of the initial parameter values, fast convergence, and the use of only a few control parameters. In this work, the DE algorithm has been applied to the design of digital IIR filters and its performance has been compared to that of a genetic algorithm.
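The three ingredients the abstract mentions appear directly in the classic DE/rand/1/bin variant sketched below; minimizing an IIR filter's output-error cost over its coefficient vector works the same way (illustrative Python, not the paper's implementation, and the control parameters F and CR are conventional defaults):

```python
import numpy as np

def differential_evolution(f, bounds, np_pop=20, F=0.7, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin sketch: mutation v = a + F*(b - c),
    binomial crossover with rate CR, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    pop = rng.uniform(lo, hi, size=(np_pop, d))
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(np_pop):
            a, b, c = pop[rng.choice([j for j in range(np_pop) if j != i],
                                     3, replace=False)]
            v = np.clip(a + F * (b - c), lo, hi)   # mutant vector
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True          # ensure at least one gene crosses
            trial = np.where(cross, v, pop[i])
            fc = f(trial)
            if fc <= cost[i]:                      # greedy selection
                pop[i], cost[i] = trial, fc
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```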

Journal ArticleDOI
TL;DR: The main theorem, showing the strong convergence of the algorithm as well as the asymptotic optimality of the sequence generated by the algorithm, can serve as a unified guiding principle of a wide range of set theoretic adaptive filtering schemes for nonstationary random processes.
Abstract: This paper presents an algorithm, named adaptive projected subgradient method, that can minimize asymptotically a certain sequence of nonnegative convex functions over a closed convex set in a real Hilbert space. The proposed algorithm is a natural extension of Polyak's subgradient algorithm, for the nonsmooth convex optimization problem with a fixed target value, to the case where the convex objective itself keeps changing in the whole process. The main theorem, showing the strong convergence of the algorithm as well as the asymptotic optimality of the sequence generated by the algorithm, can serve as a unified guiding principle of a wide range of set theoretic adaptive filtering schemes for nonstationary random processes. These include not only the existing adaptive filtering techniques; e.g., NLMS, Projected NLMS, Constrained NLMS, APA, and Adaptive parallel outer projection algorithm etc., but also new techniques; e.g., Adaptive parallel min-max projection algorithm, and their embedded constra...

Proceedings ArticleDOI
18 Mar 2005
TL;DR: A tracking algorithm based on a combination of particle filter and mean shift, and enhanced with a new adaptive state transition model that predicts the state based on adaptive variances is proposed.
Abstract: We propose a tracking algorithm based on a combination of a particle filter and mean shift, enhanced with a new adaptive state transition model. The particle filter is robust to partial and total occlusions, can deal with multi-modal pdfs and can recover lost tracks. However, its complexity dramatically increases with the dimensionality of the sampled pdf. Mean shift has a low complexity, but is unable to deal with multi-modal pdfs. To overcome these problems, the proposed tracker first produces a smaller number of samples than the particle filter and then shifts the samples toward a close local maximum using mean shift. The transition model predicts the state based on adaptive variances. Experimental results show that the combined tracker outperforms the particle filter and mean shift in terms of accuracy in estimating the target size and position, while generating 80% fewer samples than the particle filter.

Journal ArticleDOI
TL;DR: Adaptive feedback cancellation techniques that are based on a closed-loop identification of the feedback path as well as the (auto-regressive) modeling of the desired signal are proposed.
Abstract: The standard continuous adaptation feedback cancellation algorithm for feedback suppression in hearing aids suffers from a large model error or bias if the received sound signal is spectrally colored. To reduce the bias in the feedback path estimate, we propose adaptive feedback cancellation techniques that are based on a closed-loop identification of the feedback path as well as the (auto-regressive) modeling of the desired signal. In general, both models are not simultaneously identifiable in the closed-loop system at hand. We show that, under certain conditions (e.g., if a delay is inserted in the forward path), identification of both models is indeed possible. Two classes of adaptive procedures for identifying the desired signal model and the feedback path are derived: a two-channel identification method as well as a prediction error method. In contrast to the two-channel identification method, the prediction error method allows use of different adaptation schemes for the feedback path and for the desired signal model and, hence, is found to be preferable for highly nonstationary sound signals. Simulation results demonstrate that the proposed techniques outperform the standard continuous adaptation algorithm if the conditions for identifiability are satisfied.

Journal ArticleDOI
TL;DR: The interpolated minimum mean squared error (MMSE) solution is described and the normalized least mean squares (NLMS) and affine-projection (AP) algorithms for both the filter and the interpolator are proposed.
Abstract: In this letter, we propose a broadly applicable reduced-rank filtering approach with adaptive interpolated finite impulse response (FIR) filters in which the interpolator is rendered adaptive. We describe the interpolated minimum mean squared error (MMSE) solution and propose normalized least mean squares (NLMS) and affine-projection (AP) algorithms for both the filter and the interpolator. The resulting filtering structures are considered for equalization and echo cancellation applications. Simulation results showing significant improvements are presented for different scenarios.

Journal ArticleDOI
TL;DR: In this article, a new control design using artificial neural networks is proposed to make the conventional shunt active filter adaptive, which can compensate for harmonic currents, power factor and nonlinear load unbalance.
Abstract: Problems caused by power quality have great adverse economical impact on the utilities and customers. Current harmonics are one of the most common power quality problems and are usually resolved by the use of shunt passive or active filters. In this paper, a new control design using artificial neural networks is proposed to make the conventional shunt active filter adaptive. The proposed adaptive shunt active filter can compensate for harmonic currents, power factor and nonlinear load unbalance. A self-charging technique is also proposed to regulate the dc capacitor voltage at the desired level with the use of a PI controller. The design concept of the adaptive shunt active filter is verified through simulation studies and the results obtained are discussed.

Book
15 Apr 2005
TL;DR: This chapter discusses the Fundamentals of Array Signal Processing, which focuses on the development of Adaptive Antenna Arrays, and its application in Radiowave Propagation.
Abstract: Preface. Acknowledgments. List of Figures. List of Tables. Introduction. I.1 Adaptive Filtering. I.2 Historical Aspects. I.3 Concept of Spatial Signal Processing. 1 Fundamentals of Array Signal Processing. 1.1 Introduction. 1.2 The Key to Transmission. 1.3 Hertzian Dipole. 1.4 Antenna Parameters & Terminology. 1.5 Basic Antenna Elements. 1.6 Antenna Arrays. 1.7 Spatial Filtering. 1.8 Adaptive Antenna Arrays. 1.9 Mutual Coupling & Correlation. 1.10 Chapter Summary. 1.11 Problems. 2 Narrowband Array Systems. 2.1 Introduction. 2.2 Adaptive Antenna Terminology. 2.3 Beam Steering. 2.4 Grating Lobes. 2.5 Amplitude Weights. 2.6 Chapter Summary. 2.7 Problems. 3 Wideband Array Processing. 3.1 Introduction. 3.2 Basic concepts. 3.3 A Simple Delay-line Wideband Array. 3.4 Rectangular Arrays as Wideband Beamformers. 3.5 Wideband Beamforming using FIR Filters. 3.6 Chapter Summary. 3.7 Problems. 4 Adaptive Arrays. 4.1 Introduction. 4.2 Spatial Covariance Matrix. 4.3 Multi-beam Arrays. 4.4 Scanning Arrays. 4.5 Switched Beam Beamformers. 4.6 Fully Adaptive Beamformers. 4.7 Adaptive Algorithms. 4.8 Source Location Techniques. 4.9 Fourier Method. 4.10 Capon's Minimum Variance. 4.11 The MUSIC Algorithm. 4.12 ESPRIT. 4.13 Maximum Likelihood Techniques. 4.14 Spatial Smoothing. 4.15 Determination of Number of Signal Sources. 4.16 Blind Beamforming. 4.17 Chapter Summary. 4.18 Problems. 5 Practical Considerations. 5.1 Introduction. 5.2 Signal Processing Constraints. 5.3 Implementation Issues. 5.4 Radiowave Propagation. 5.5 Transmit Beamforming. 5.6 Chapter Summary. 5.7 Problems. 6 Applications. 6.1 Introduction. 6.2 Antenna Arrays for Radar Applications. 6.3 Antenna Arrays for Sonar Applications. 6.4 Antenna Arrays for Biomedical Applications. 6.5 Antenna Arrays for Wireless Communications. 6.6 Chapter Summary. 6.7 Problems. References. Index.

Journal ArticleDOI
TL;DR: A Wiener filtering based algorithm for the elimination of motion artifacts present in Near Infrared (NIR) spectroscopy measurements that gives better estimates than the classical adaptive filtering approach without the need for additional sensor measurements.
Abstract: We present a Wiener filtering based algorithm for the elimination of motion artifacts present in Near Infrared (NIR) spectroscopy measurements. Until now, adaptive filtering was the only technique used in the noise cancellation in NIR studies. The results in this preliminary study revealed that the proposed method gives better estimates than the classical adaptive filtering approach without the need for additional sensor measurements. Moreover, this novel technique has the potential to filter out motion artifacts in functional near infrared (fNIR) signals, too.
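A batch FIR Wiener filter reduces to solving the Wiener-Hopf normal equations from sample statistics. The following generic sketch shows the idea (illustrative Python, not the authors' artifact-removal algorithm; in the NIR setting d would be the measured signal and x a reference correlated with the artifact):

```python
import numpy as np

def wiener_fir(x, d, order):
    """FIR Wiener filter from data: solve R w = p, where R and p are sample
    estimates of the input autocorrelation matrix and the input/desired
    cross-correlation vector (Wiener-Hopf equations)."""
    n = len(x)
    # delay-line data matrix: column k holds x delayed by k samples
    X = np.column_stack([x[order - 1 - k: n - k] for k in range(order)])
    dd = d[order - 1:]
    R = X.T @ X / len(dd)
    p = X.T @ dd / len(dd)
    return np.linalg.solve(R, p)
```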

Journal ArticleDOI
TL;DR: An analysis of convergence of the class of Sequential Partial Update LMS algorithms (S-LMS) under various assumptions is presented, and it is shown that divergence can be prevented by scheduling coefficient updates at random, which is called the Stochastic Partial Update LMS algorithm (SPU-LMS).
Abstract: Partial updating of LMS filter coefficients is an effective method for reducing computational load and power consumption in adaptive filter implementations. This paper presents an analysis of convergence of the class of Sequential Partial Update LMS algorithms (S-LMS) under various assumptions and shows that divergence can be prevented by scheduling coefficient updates at random, which we call the Stochastic Partial Update LMS algorithm (SPU-LMS). Specifically, under the standard independence assumptions, for wide sense stationary signals, the S-LMS algorithm converges in the mean if the step-size parameter μ is in the convergent range of ordinary LMS. Relaxing the independence assumption, it is shown that S-LMS and LMS algorithms have the same sufficient conditions for exponential stability. However, there exist nonstationary signals for which the existing algorithms, S-LMS included, are unstable and do not converge for any value of μ. On the other hand, under broad conditions, the SPU-LMS algorithm remains stable for nonstationary signals. Expressions for convergence rate and steady-state mean-square error of SPU-LMS are derived. The theoretical results of this paper are validated and compared by simulation through numerical examples.
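The sequential and stochastic schedules differ only in how the updated subset of coefficients is chosen each sample. A hedged sketch of both (illustrative Python; the block schedule and parameter names are ours):

```python
import numpy as np

def partial_update_lms(x, d, n_taps, mu, n_update, stochastic=True, seed=0):
    """LMS updating only n_update of n_taps coefficients per sample.
    stochastic=True draws the subset at random each step (SPU-LMS);
    stochastic=False cycles deterministically through fixed blocks (S-LMS).
    Assumes n_taps is a multiple of n_update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n_taps)
    blocks = np.arange(n_taps).reshape(-1, n_update)
    err = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]       # tap-delay line, newest sample first
        e = d[n] - w @ u
        err[n] = e
        idx = (rng.choice(n_taps, n_update, replace=False)
               if stochastic else blocks[n % len(blocks)])
        w[idx] += mu * e * u[idx]               # update only the chosen subset
    return w, err
```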

Journal ArticleDOI
TL;DR: A strictly decentralized approach to Bayesian filtering that is well fit for in-network signal processing and significantly outperforms the existing schemes with similar computational and communication complexity is presented.
Abstract: Tracking a target in a cluttered environment is a representative application of sensor networks and a benchmark for collaborative signal processing algorithms. This paper presents a strictly decentralized approach to Bayesian filtering that is well fit for in-network signal processing. By combining the sigma-point filter methodology and the information filter framework, a class of algorithms denoted as sigma-point information filters is developed. These techniques exhibit the robustness and accuracy of the sigma-point filters for nonlinear dynamic inference while being as easily decentralized as the information filters. Furthermore, the computational cost of this approach is equivalent to a local Kalman filter running in each active node while the communication burden can be made linearly growing in the number of sensors involved. The proposed algorithms are then adapted to the specific problem of target tracking with data association ambiguity. Making use of a local probabilistic data association, we formulate a decentralized tracking scheme that significantly outperforms the existing schemes with similar computational and communication complexity.

Journal ArticleDOI
TL;DR: It is shown that practical implementations of DA adaptive filters have very high throughput relative to multiply and accumulate architectures and have a potential area and power consumption advantage over digital signal processing microprocessor architectures.
Abstract: We present a new hardware adaptive filter architecture for very high throughput LMS adaptive filters using distributed arithmetic (DA). DA uses bit-serial operations and look-up tables (LUTs) to implement high throughput filters that use only about one cycle per bit of resolution regardless of filter length. However, building adaptive DA filters requires recalculating the LUTs for each adaptation which can negate any performance advantages of DA filtering. By using an auxiliary LUT with special addressing, the efficiency and throughput of DA adaptive filters can be of the same order as fixed DA filters. In this paper, we discuss a new hardware adaptive filter structure for very high throughput LMS adaptive filters. We describe the development of DA adaptive filters and show that practical implementations of DA adaptive filters have very high throughput relative to multiply and accumulate architectures. We also show that DA adaptive filters have a potential area and power consumption advantage over digital signal processing microprocessor architectures.
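Distributed arithmetic replaces per-sample multiply-accumulates with one LUT access per bit of input resolution. A software model of the mechanism for unsigned inputs (illustrative Python; real DA hardware additionally handles two's-complement signs and uses bit-serial shift registers):

```python
import numpy as np

def da_lut(coeffs):
    """DA look-up table: entry k holds the sum of the coefficients whose
    bit is set in k, i.e. every possible partial sum for one bit plane."""
    n = len(coeffs)
    return np.array([sum(c for j, c in enumerate(coeffs) if (k >> j) & 1)
                     for k in range(1 << n)])

def da_inner_product(lut, samples, n_bits):
    """Bit-serial distributed-arithmetic inner product for non-negative
    n_bits-wide integer samples: each cycle gathers one bit from every
    sample to address the LUT, then shift-accumulates the table value."""
    acc = 0.0
    for b in range(n_bits):                    # one LUT access per bit plane
        addr = 0
        for j, s in enumerate(samples):
            addr |= ((int(s) >> b) & 1) << j
        acc += float(lut[addr]) * (1 << b)     # shift-accumulate
    return acc
```

This needs about one LUT cycle per bit of resolution regardless of filter length, which is the throughput property the abstract describes; adaptive operation then hinges on updating the LUT efficiently after each coefficient change.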

Journal ArticleDOI
TL;DR: The use of neural network filters to correlate tumor position with external surrogate markers while simultaneously predicting the motion ahead in time is demonstrated, for situations in which neither the breathing pattern nor the correlation between moving anatomical elements is constant in time.
Abstract: In this study we address the problem of predicting the position of a moving lung tumor during respiration on the basis of external breathing signals, a technique used for beam gating, tracking, and other dynamic motion management techniques in radiation therapy. We demonstrate the use of neural network filters to correlate tumor position with external surrogate markers while simultaneously predicting the motion ahead in time, for situations in which neither the breathing pattern nor the correlation between moving anatomical elements is constant in time. One pancreatic cancer patient and two lung cancer patients with mid/upper lobe tumors were fluoroscopically imaged to observe tumor motion synchronously with the movement of external chest markers during free breathing. The external marker position was provided as input to a feed-forward neural network that correlated the marker and tumor movement to predict the tumor position up to 800 ms in advance. The predicted tumor position was compared to its observed position to establish the accuracy with which the filter could dynamically track tumor motion under nonstationary conditions. These results were compared to simplified linear versions of the filter. The two lung cancer patients exhibited complex respiratory behavior in which the correlation between surrogate marker and tumor position changed with each cycle of breathing. By automatically and continuously adjusting its parameters to the observations, the neural network achieved better tracking accuracy than the fixed and adaptive linear filters. Variability and instability in human respiration complicate the task of predicting tumor position from surrogate breathing signals. Our results show that adaptive signal-processing filters can provide more accurate tumor position estimates than simpler stationary filters when presented with nonstationary breathing motion.

Proceedings ArticleDOI
20 Jun 2005
TL;DR: A novel unsupervised, information-theoretic, adaptive filter (UINTA) that improves the predictability of pixel intensities from their neighborhoods by decreasing their joint entropy, allowing it to restore a wide spectrum of images across diverse applications.
Abstract: The restoration of images is an important and widely studied problem in computer vision and image processing. Various image filtering strategies have been effective, but they invariably make strong assumptions about the properties of the signal and/or degradation. These methods therefore typically lack the generality to be easily applied to new applications or diverse image collections. This paper describes a novel unsupervised, information-theoretic, adaptive filter (UINTA) that improves the predictability of pixel intensities from their neighborhoods by decreasing the joint entropy between them. UINTA thus automatically discovers the statistical properties of the signal and can restore a wide spectrum of images across diverse applications. This paper describes the formulation required to minimize the joint entropy measure, discusses several important practical considerations in estimating image-region statistics, and presents results on both real and synthetic data.
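The entropy objective rests on nonparametric density estimation of image-region statistics. A minimal 1-D stand-in, assuming a leave-one-out Gaussian Parzen window (the function name and bandwidth are illustrative, and UINTA itself operates on multi-dimensional neighborhood vectors rather than scalar samples):

```python
import numpy as np

def parzen_entropy(samples, sigma=0.1):
    """Leave-one-out Parzen-window estimate of differential entropy:
    estimate the density at each sample from all the other samples with
    a Gaussian kernel, then average -log p as a Monte Carlo entropy
    estimate. A UINTA-style filter decreases such an estimate by
    gradient descent on the pixel values."""
    n = len(samples)
    d = samples[:, None] - samples[None, :]
    k = np.exp(-d**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                 # leave-one-out: exclude self
    p = k.sum(axis=1) / (n - 1)              # density estimate at each sample
    return -np.mean(np.log(p + 1e-12))       # entropy estimate
```

Tightly clustered samples (predictable intensities) yield a lower estimate than widely spread ones, which is the quantity the filter drives down.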

Journal ArticleDOI
TL;DR: It is shown that the new filter outperforms classical order-statistics filtering techniques and that its performance is similar to that of FSVF, outperforming it in some cases.
Abstract: In this paper, the problem of impulsive noise reduction in multichannel images is addressed. A new filter is proposed on the basis of a recently introduced family of computationally attractive filters with good detail-preserving ability (FSVF). FSVF privileges the central pixel in each filtering window so that it is replaced only when it is truly noisy, preserving the original undistorted image structures. The new filter combines this scheme with a novel fuzzy metric. The use of the fuzzy metric makes the filter computationally simpler and allows the privilege of the central pixel to be adjusted, giving the filter an adaptive nature. Moreover, it is shown that the new filter outperforms classical order-statistics filtering techniques and that its performance is similar to that of FSVF, outperforming it in some cases.
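The central-pixel privilege can be illustrated with a vector-median-style window filter. This sketch substitutes the Euclidean distance for the paper's fuzzy metric, and the `privilege` constant is an illustrative stand-in for the adjustable privilege that the fuzzy metric provides:

```python
import numpy as np

def privileged_vector_median(window, center_idx, privilege=10.0):
    """Vector-median-style filter that privileges the central pixel:
    score each pixel by its aggregate distance to every other pixel in
    the window, bias the centre's score downward by `privilege`, and
    output the pixel with the lowest score. The centre is therefore
    replaced only when it is clearly an outlier (impulse noise).
    Sketch only; the paper uses a fuzzy metric, not Euclidean distance.
    `window` is an (n_pixels, n_channels) array, e.g. (9, 3) for a
    3x3 RGB window."""
    dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
    scores = dists.sum(axis=1)
    scores[center_idx] -= privilege          # bias in favour of the centre
    return window[np.argmin(scores)]
```

With `privilege = 0` this reduces to the plain vector median; larger values preserve more fine detail at the cost of letting weaker impulses through, which is the adaptive trade-off described above.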

Journal ArticleDOI
TL;DR: Equations derived for analyzing the performance of channel estimate based equalizers yield insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
Abstract: Equations are derived for analyzing the performance of channel estimate based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel estimate based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.

Journal ArticleDOI
12 Jun 2005
TL;DR: The theory and practical implementation of a continuous-time LMS adaptive filter for the TX leakage in CDMA receivers are described; the filter achieved a maximum TXRR of 28 dB, limited by the reference signal coupling.
Abstract: The theory and practical implementation of a continuous-time LMS adaptive filter for the TX leakage in CDMA receivers are described. The filter works by injecting a matched out-of-phase copy of the TX leakage into the LNA output. It requires a reference signal coupled from the TX chain, whose I and Q components are appropriately scaled to generate the matched copy. The scale factors result from correlating the filter output signal with the I/Q components of the reference signal. The filter was designed as part of a 0.25-µm CMOS cellular-band receiver. The effect of DC offsets in the correlators on the TX leakage rejection ratio (TXRR) was minimized by using the sign-data variant of the LMS algorithm and by increasing the gain of the correlating multipliers. The loop stability margin was improved by swapping the I and Q reference inputs of the scaling multipliers. Without a significant group delay of the TX leakage relative to the reference signal, the filter achieved a maximum TXRR of 28 dB, which was limited by the reference signal coupling. The group delay introduced by the SAW duplexer reduced the minimum TXRR to 10.8 dB. The filter degraded the LNA noise factor and gain by 1.3 dB and 1.7 dB, respectively.
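A discrete-time sketch of the cancellation loop, assuming the sign-data LMS rule described in the abstract (the paper's filter is an analog, continuous-time circuit; the function name and step size here are illustrative):

```python
import numpy as np

def sign_data_lms_cancel(ref_i, ref_q, observed, mu=1e-3):
    """Sign-data LMS leakage canceller: scale the I/Q reference
    components, subtract the matched copy from the observed signal,
    and adapt the two scale factors with w += mu * e * sign(x).
    Using sign(x) instead of x is what desensitizes the loop to DC
    offsets in the correlators."""
    wi = wq = 0.0
    out = np.empty_like(observed)
    for n in range(len(observed)):
        est = wi * ref_i[n] + wq * ref_q[n]   # matched copy of the leakage
        e = observed[n] - est                 # residual after cancellation
        out[n] = e
        wi += mu * e * np.sign(ref_i[n])      # sign-data LMS updates
        wq += mu * e * np.sign(ref_q[n])
    return out, (wi, wq)
```

After convergence the two weights settle at the leakage's I/Q gains and the residual contains only noise, i.e. the leakage is rejected.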

Proceedings ArticleDOI
Magali Sasso1, C. Cohen-Bacrie1
18 Mar 2005
TL;DR: It is shown that the fully adaptive beamformer cannot be applied to medical ultrasound as it was initially derived since the medical ultrasonic medium produces coherent or highly correlated signals and the algorithm fails to work within this context.
Abstract: Medical ultrasound beamforming is conventionally done using a classical delay-and-sum operation. This simple beamforming approach suffers from drawbacks: in phased array imaging, the beamformed radiofrequency signal is often polluted with off-axis energy. We investigate the use of an adaptive beamforming approach widely used in array processing, the fully adaptive beamformer, to reduce the contribution of bright off-axis energy. We show that the fully adaptive beamformer cannot be applied to medical ultrasound as it was initially derived, since the medical ultrasonic medium produces coherent or highly correlated signals and the algorithm fails to work in this context. Spatial smoothing preprocessing is introduced, which allows the fully adaptive beamformer to operate. A complementary preprocessing step that uses the received data obtained from consecutive transmission lines further improves performance. Very promising results are obtained for the application of adaptive array processing techniques in medical ultrasound.
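Forward spatial smoothing, the preprocessing introduced in the abstract, can be sketched as averaging sample covariances over overlapping subarrays (the function name, array size, and test geometry below are illustrative assumptions):

```python
import numpy as np

def spatially_smoothed_covariance(snapshots, subarray_len):
    """Forward spatial smoothing: average the sample covariance
    matrices of overlapping subarrays so that coherent (fully
    correlated) arrivals decorrelate and the covariance matrix regains
    full rank, which is what lets an adaptive (MVDR-style) beamformer
    operate on coherent echoes. `snapshots` is complex with shape
    (num_elements, num_snapshots)."""
    m, num_snaps = snapshots.shape
    k = m - subarray_len + 1                 # number of overlapping subarrays
    r = np.zeros((subarray_len, subarray_len), dtype=complex)
    for i in range(k):
        sub = snapshots[i:i + subarray_len, :]
        r += sub @ sub.conj().T / num_snaps  # subarray sample covariance
    return r / k
```

Without smoothing, two perfectly coherent arrivals produce a rank-one signal covariance and the adaptive weights cancel the desired signal; after smoothing, each coherent source contributes its own dominant eigenvalue.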