Author

# Jose C. M. Bermudez

Other affiliations: Federal University of Rio de Janeiro, Universidade Católica de Pelotas, California State Polytechnic University, Pomona

Bio: Jose C. M. Bermudez is an academic researcher from Universidade Federal de Santa Catarina. The author has contributed to research topics including adaptive filters and Monte Carlo methods. The author has an h-index of 28 and has co-authored 226 publications receiving 3672 citations. Previous affiliations of Jose C. M. Bermudez include the Federal University of Rio de Janeiro and the Universidade Católica de Pelotas.


##### Papers



TL;DR: This paper investigates a new model reduction criterion, based on the coherence parameter, that makes computationally demanding sparsification procedures unnecessary, and incorporates it into a new kernel-based affine projection algorithm for time-series prediction.

Abstract: Kernel-based algorithms have been a topic of considerable interest in the machine learning community over the last ten years. Their attractiveness resides in their elegant treatment of nonlinear problems. They have been successfully applied to pattern recognition, regression and density estimation. A common characteristic of kernel-based methods is that they deal with kernel expansions whose number of terms equals the number of input data, making them unsuitable for online applications. Recently, several solutions have been proposed to circumvent this computational burden in time series prediction problems. Nevertheless, most of them require excessively elaborate and costly operations. In this paper, we investigate a new model reduction criterion that makes computationally demanding sparsification procedures unnecessary. The increase in the number of variables is controlled by the coherence parameter, a fundamental quantity that characterizes the behavior of dictionaries in sparse approximation problems. We incorporate the coherence criterion into a new kernel-based affine projection algorithm for time series prediction. We also derive the kernel-based normalized LMS algorithm as a particular case. Finally, experiments are conducted to compare our approach to existing methods.
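The dictionary-control idea in the abstract, admitting a new input only when its kernel coherence with the current dictionary stays below a threshold, can be sketched as follows. This is a generic illustration under assumed choices (a Gaussian kernel, a normalized-LMS update, i.e. the kernel NLMS special case mentioned above); the function and parameter names (`knlms_coherence`, `mu0`, `eta`, `bw`) are illustrative, not the authors' notation.

```python
import numpy as np

def gaussian_kernel(x, y, bw=0.5):
    # Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 * bw^2))
    return np.exp(-np.sum((x - y) ** 2) / (2 * bw ** 2))

def knlms_coherence(X, d, mu0=0.5, eta=0.2, eps=1e-3, bw=0.5):
    """Kernel NLMS with a coherence-based dictionary criterion.
    X: (n_samples, dim) inputs; d: (n_samples,) desired outputs.
    A new input joins the dictionary only if its maximal kernel
    coherence with the current dictionary is at most mu0."""
    dictionary = [X[0]]
    alpha = np.zeros(1)                  # kernel expansion coefficients
    errors = np.empty(len(X))
    for n, (x, dn) in enumerate(zip(X, d)):
        k = np.array([gaussian_kernel(x, c, bw) for c in dictionary])
        y = alpha @ k                    # current prediction
        e = dn - y
        errors[n] = e
        if np.max(k) <= mu0:             # low coherence: grow the dictionary
            dictionary.append(x)
            k = np.append(k, gaussian_kernel(x, x, bw))
            alpha = np.append(alpha, 0.0)
        alpha += (eta / (eps + k @ k)) * e * k   # normalized LMS update
    return alpha, dictionary, errors
```

The key point is that the dictionary size is controlled by a single cheap test (one pass over the existing kernel evaluations) rather than by a separate sparsification procedure.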

405 citations


TL;DR: In this article, the authors present an overview of recent advances in nonlinear unmixing modeling, needed for instance when multiscattering effects or intimate interactions invalidate the linear mixing model (LMM), and of the significant contributions proposed to overcome the limitations inherent in the LMM.

Abstract: When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may be not valid, and other nonlinear models need to be considered, for instance, when there are multiscattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this article, we present an overview of recent advances in nonlinear unmixing modeling.
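For context, the LMM that the nonlinear models generalize writes each pixel spectrum as a convex combination of endmember spectra, y = Ma + n with a ≥ 0 and the abundances summing to one. A minimal sketch, with synthetic data and illustrative dimensions (50 bands, 3 endmembers), shows that in the noiseless case ordinary least squares already recovers the abundances:

```python
import numpy as np

# Linear mixing model (LMM): a pixel spectrum y is a convex combination
# of endmember spectra plus noise: y = M a + n, a >= 0, sum(a) = 1.
rng = np.random.default_rng(42)
n_bands, n_endmembers = 50, 3
M = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))  # endmember matrix

a_true = np.array([0.6, 0.3, 0.1])   # abundances (nonnegative, sum to one)
y = M @ a_true                        # noiseless mixed pixel

# In the noiseless full-rank case, unconstrained least squares
# recovers the true abundances.
a_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
print(np.round(a_hat, 3))             # ≈ [0.6, 0.3, 0.1]
```

The nonlinear models surveyed in the article replace the product Ma with a nonlinear function of the endmembers, e.g. to account for multiscattering.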

381 citations


TL;DR: This article presents an overview of recent advances in nonlinear unmixing modeling and of the several significant contributions proposed to overcome the limitations inherent in the LMM.

Abstract: When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may be not valid and other nonlinear models need to be considered, for instance, when there are multi-scattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this paper, we present an overview of recent advances in nonlinear unmixing modeling.

325 citations


TL;DR: This paper studies the statistical behavior of an affine combination of the outputs of two least-mean-square (LMS) adaptive filters that adapt simultaneously using the same white Gaussian inputs, with the goal of obtaining an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD).

Abstract: This paper studies the statistical behavior of an affine combination of the outputs of two least mean-square (LMS) adaptive filters that simultaneously adapt using the same white Gaussian inputs. The purpose of the combination is to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD). The linear combination studied is a generalization of the convex combination, in which the combination factor λ(n) is restricted to the interval (0,1). The viewpoint is taken that each of the two filters produces dependent estimates of the unknown channel. Thus, there exists a sequence of optimal affine combining coefficients which minimizes the mean-square error (MSE). First, the optimal unrealizable affine combiner is studied and provides the best possible performance for this class. Then two new schemes are proposed for practical applications. The mean-square performances are analyzed and validated by Monte Carlo simulations. With proper design, the two practical schemes yield an overall MSD that is usually less than the MSDs of either filter.
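The basic structure, one fast and one slow LMS filter whose outputs are blended by an adaptive affine coefficient, can be sketched as below. This is a generic illustration, not the paper's exact practical schemes; the stochastic-gradient update for λ and all names and step sizes (`mu1`, `mu2`, `mu_a`) are assumptions.

```python
import numpy as np

def affine_lms_combination(x, d, n_taps=8, mu1=0.05, mu2=0.005, mu_a=0.1):
    """Affine combination y = lam*y1 + (1-lam)*y2 of a fast LMS filter
    (w1, step mu1) and a slow one (w2, step mu2). Unlike the convex
    combination, lam is NOT confined to (0, 1); here it is adapted by
    a stochastic gradient step on the combined squared error."""
    w1 = np.zeros(n_taps)
    w2 = np.zeros(n_taps)
    lam = 0.5
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]          # input regressor
        y1, y2 = w1 @ u, w2 @ u            # component filter outputs
        e1, e2 = d[n] - y1, d[n] - y2
        y = lam * y1 + (1 - lam) * y2      # affine combination
        e[n] = d[n] - y
        lam += mu_a * e[n] * (y1 - y2)     # gradient step on lam
        w1 += mu1 * e1 * u                 # fast LMS update
        w2 += mu2 * e2 * u                 # slow LMS update
    return e, lam
```

With a large step the fast filter dominates early (fast convergence) and the slow filter dominates at steady state (small MSD), which is the behavior the combination is designed to capture.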

164 citations


TL;DR: A new stochastic analysis is presented for the filtered-X LMS (FXLMS) algorithm and an analytical model is derived for the mean behavior of the adaptive weights.

Abstract: A new stochastic analysis is presented for the filtered-X LMS (FXLMS) algorithm. The analysis does not use independence theory. An analytical model is derived for the mean behavior of the adaptive weights. The model is valid for white or colored reference inputs and accurately predicts the mean weight behavior even for large step sizes. The constrained Wiener solution is determined as a function of the input statistics and the impulse responses of the adaptation loop filters. Effects of secondary path estimation error are studied. Monte Carlo simulations demonstrate the accuracy of the theoretical model.

103 citations

##### Cited by



TL;DR: In this paper, the authors study how local differences of intensity, computed at each pixel as a discrete derivative, can be used to detect intensity edges in images.

Abstract: Most of the signal processing that we will study in this course involves local operations on a signal, namely transforming the signal by applying linear combinations of values in the neighborhood of each sample point. You are familiar with such operations from Calculus, namely taking derivatives, and you are also familiar with this from optics, namely blurring a signal. We will be looking at sampled signals only. Let's start with a few basic examples. Local difference: suppose we have a 1D image and we take the local difference of intensities, DI(x) = (1/2)(I(x + 1) − I(x − 1)), which gives a discrete approximation to a partial derivative. (We compute this for each x in the image.) What is the effect of such a transformation? One key idea is that such a derivative would be useful for marking positions where the intensity changes. Such a change is called an edge. It is important to detect edges in images because they often mark locations at which object properties change. These can include changes in illumination along a surface due to a shadow boundary, or a material (pigment) change, or a change in depth as when one object ends and another begins. The computational problem of finding intensity edges in images is called edge detection. We could look for positions at which DI(x) has a large negative or positive value. Large positive values indicate an edge that goes from low to high intensity, and large negative values indicate an edge that goes from high to low intensity. Example: suppose the image consists of a single (slightly sloped) edge:
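The local difference DI(x) = (1/2)(I(x + 1) − I(x − 1)) defined above translates directly into code; the function name and the example signal are illustrative.

```python
import numpy as np

def local_difference(I):
    """Central difference DI(x) = (I(x+1) - I(x-1)) / 2 at each
    interior sample; boundary samples are left at zero."""
    I = np.asarray(I, dtype=float)
    D = np.zeros_like(I)
    D[1:-1] = (I[2:] - I[:-2]) / 2.0
    return D

# A low-to-high step edge produces a large positive peak in DI.
I = np.array([0, 0, 0, 1, 5, 9, 10, 10, 10], dtype=float)
D = local_difference(I)
print(D.argmax())   # → 4, the steepest point of the ramp
```

As the abstract notes, a large positive value of DI marks a low-to-high edge and a large negative value a high-to-low edge, so edge detection reduces to thresholding |DI|.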

1,829 citations

01 Jan 2015

TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents current state-of-the-art filtering and smoothing methods in a unified Bayesian framework; readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.

Abstract: Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications, and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book’s practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. MATLAB/GNU Octave source code is available for download at www.cambridge.org/sarkka, promoting hands-on work with the methods.
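The Bayesian filtering recursion the book develops can be illustrated with a minimal scalar Kalman filter (predict, then update with the Kalman gain). This sketch assumes a linear Gaussian state-space model; the function name `kalman_1d` and the parameter values are illustrative, not taken from the book.

```python
import numpy as np

def kalman_1d(y, a=1.0, q=1e-3, r=0.1, m0=0.0, p0=1.0):
    """Scalar Kalman filter for the linear Gaussian model
       x_k = a * x_{k-1} + w_k,  w_k ~ N(0, q)   (state dynamics)
       y_k = x_k + v_k,          v_k ~ N(0, r)   (measurement)."""
    m, p = m0, p0
    means = np.empty(len(y))
    for k, yk in enumerate(y):
        m, p = a * m, a * a * p + q        # predict step
        s = p + r                          # innovation variance
        kgain = p / s                      # Kalman gain
        m = m + kgain * (yk - m)           # update posterior mean
        p = (1 - kgain) * p                # update posterior variance
        means[k] = m
    return means
```

Non-linear Kalman filters and particle filters, the book's main subject, replace the exact Gaussian predict/update steps above with approximations valid for non-linear, non-Gaussian models.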

1,102 citations


01 Aug 2014

TL;DR: This comprehensive survey provides an overview of most published super-resolution works by grouping them in a broad taxonomy, and discusses common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases.

Abstract: Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real-world problems in different fields, from satellite and aerial imaging to medical image processing, to facial image analysis, text image analysis, sign and number plates reading, and biometrics recognition, to name a few. This has resulted in many research papers, each developing a new super-resolution algorithm for a specific purpose. The current comprehensive survey provides an overview of most of these published works by grouping them in a broad taxonomy. For each of the groups in the taxonomy, the basic concepts of the algorithms are first explained and then the paths through which each of these groups have evolved are given in detail, by mentioning the contributions of different authors to the basic concepts of each group. Furthermore, common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases are discussed.

602 citations


Technische Universität München, University of Tokyo, Sun Yat-sen University, Ghent University, Tongji University, University of Extremadura, University of Iceland

TL;DR: Rigorous and innovative methodologies are required for hyperspectral image (HSI) and signal processing and have become a center of attention for researchers worldwide.

Abstract: Recent advances in airborne and spaceborne hyperspectral imaging technology have provided end users with rich spectral, spatial, and temporal information. They have made a plethora of applications feasible for the analysis of large areas of the Earth's surface. However, a significant number of factors, such as the high dimensions and size of the hyperspectral data, the lack of training samples, mixed pixels, light-scattering mechanisms in the acquisition process, and different atmospheric and geometric distortions, make such data inherently nonlinear and complex, which poses major challenges for existing methodologies to effectively process and analyze the data sets. Hence, rigorous and innovative methodologies are required for hyperspectral image (HSI) and signal processing and have become a center of attention for researchers worldwide.

536 citations