
Showing papers in "EURASIP Journal on Advances in Signal Processing in 2005"


Journal ArticleDOI
TL;DR: The performance of the BCI was found to be robust to distracting visual stimulation in the game and relatively consistent across six subjects, with 41 of 48 games successfully completed.
Abstract: This paper presents the application of an effective EEG-based brain-computer interface design for binary control in a visually elaborate immersive 3D game. The BCI uses the steady-state visual evoked potential (SSVEP) generated in response to phase-reversing checkerboard patterns. Two power-spectrum estimation methods were employed for feature extraction in a series of offline classification tests. Both methods were also implemented during real-time game play. The performance of the BCI was found to be robust to distracting visual stimulation in the game and relatively consistent across six subjects, with 41 of 48 games successfully completed. For the best performing feature extraction method, the average real-time control accuracy across subjects was 89%. The feasibility of obtaining reliable control in such a visually rich environment using SSVEPs is thus demonstrated and the impact of this result is discussed.

442 citations
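No code accompanies the abstract; a minimal sketch of the band-power decision rule might look like the following, with the sampling rate, stimulus frequencies, and Welch settings all assumed rather than taken from the paper.

```python
# Hypothetical sketch of binary SSVEP classification via Welch band power.
# FS, F_LEFT, F_RIGHT and the window settings are assumptions, not the paper's.
import numpy as np
from scipy.signal import welch

FS = 256                        # sampling rate in Hz (assumed)
F_LEFT, F_RIGHT = 17.0, 20.0    # checkerboard reversal frequencies (assumed)

def ssvep_decision(eeg_segment, fs=FS, f1=F_LEFT, f2=F_RIGHT, bw=0.5):
    """Return 0 if band power at f1 dominates, 1 if power at f2 dominates."""
    freqs, psd = welch(eeg_segment, fs=fs, nperseg=2 * fs)
    def band_power(f0):
        mask = (freqs >= f0 - bw) & (freqs <= f0 + bw)
        return psd[mask].mean()
    return int(band_power(f2) > band_power(f1))

# Quick check on a synthetic 4-second segment dominated by F_RIGHT:
rng = np.random.default_rng(0)
t = np.arange(4 * FS) / FS
segment = np.sin(2 * np.pi * F_RIGHT * t) + 0.5 * rng.standard_normal(t.size)
print(ssvep_decision(segment))  # -> 1
```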


Journal ArticleDOI
TL;DR: Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.
Abstract: This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.

343 citations
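For a prior of the stated parametric family, p(a) ∝ a^ν exp(-μa/σ_s), combined with a Gaussian noise likelihood, the MAP amplitude estimate has a closed form: G = u + sqrt(u² + ν/(2γ)) with u = 1/2 - μ/(4√(γξ)), where γ and ξ denote the a posteriori and a priori SNRs. The sketch below pairs this gain with a standard decision-directed ξ estimate; the (μ, ν) values and the STFT framing are illustrative placeholders, not the paper's settings.

```python
# Hedged sketch: MAP spectral-amplitude gain for a super-Gaussian prior
# p(a) ~ a**nu * exp(-mu*a/sigma_s), with decision-directed a-priori SNR.
# mu, nu, alpha and the framing below are illustrative placeholders.
import numpy as np
from scipy.signal import stft, istft

def map_gain(gamma, xi, mu=1.74, nu=0.126):
    """G = u + sqrt(u**2 + nu/(2*gamma)), u = 1/2 - mu/(4*sqrt(gamma*xi))."""
    u = 0.5 - mu / (4.0 * np.sqrt(np.maximum(gamma * xi, 1e-12)))
    return np.maximum(u + np.sqrt(u ** 2 + nu / (2.0 * gamma)), 0.0)

def enhance(noisy, fs, noise_psd, alpha=0.98):
    """noise_psd: per-bin noise power estimate, shape (nperseg//2 + 1,)."""
    _, _, Y = stft(noisy, fs=fs, nperseg=512)
    gamma = np.maximum(np.abs(Y) ** 2 / noise_psd[:, None], 1e-6)
    xi = np.maximum(gamma[:, 0] - 1.0, 1e-3)     # initial a-priori SNR
    S = np.zeros_like(Y)
    for m in range(Y.shape[1]):                  # frame-by-frame gain
        S[:, m] = map_gain(gamma[:, m], xi) * Y[:, m]
        xi = np.maximum(alpha * np.abs(S[:, m]) ** 2 / noise_psd
                        + (1 - alpha) * (gamma[:, m] - 1.0), 1e-3)
    return istft(S, fs=fs, nperseg=512)[1]
```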


Journal ArticleDOI
TL;DR: A foreground validation algorithm that first builds a foreground mask using a slow-adapting Kalman filter, and then validates individual foreground pixels by a simple moving object model built using both the foreground and background statistics as well as the frame difference is proposed.
Abstract: Identifying moving objects in a video sequence is a fundamental and critical task in many computer-vision applications. Background subtraction techniques are commonly used to separate foreground moving objects from the background. Most background subtraction techniques assume a single rate of adaptation, which is inadequate for complex scenes such as a traffic intersection where objects are moving at different and varying speeds. In this paper, we propose a foreground validation algorithm that first builds a foreground mask using a slow-adapting Kalman filter, and then validates individual foreground pixels by a simple moving-object model built using both the foreground and background statistics as well as the frame difference. Ground-truth experiments with urban traffic sequences show that our proposed algorithm significantly improves upon results obtained with the Kalman filter or frame differencing alone, and outperforms other techniques based on the mixture of Gaussians, the median filter, and the approximated median filter.

294 citations
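A toy rendition of the two-stage idea: a slowly adapting, fixed-gain (Kalman-style) background estimate produces candidate foreground pixels, which are then validated against the frame difference. The gains and thresholds below are arbitrary, and the paper's moving-object model is considerably richer than this AND of two masks.

```python
# Illustrative two-stage foreground detection: slow background adaptation
# plus frame-difference validation. Not the authors' exact model.
import numpy as np

class SlowBackground:
    def __init__(self, first_frame, gain_bg=0.02, gain_fg=0.002, thresh=25.0):
        self.bg = first_frame.astype(np.float64)   # background estimate
        self.prev = self.bg.copy()                 # previous frame
        self.gain_bg, self.gain_fg, self.thresh = gain_bg, gain_fg, thresh

    def step(self, frame):
        frame = frame.astype(np.float64)
        fg = np.abs(frame - self.bg) > self.thresh        # raw foreground mask
        moving = np.abs(frame - self.prev) > self.thresh  # frame difference
        validated = fg & moving                           # crude validation
        # Kalman-style fixed-gain update; adapt slower where foreground seen.
        gain = np.where(fg, self.gain_fg, self.gain_bg)
        self.bg += gain * (frame - self.bg)
        self.prev = frame
        return validated
```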


Journal ArticleDOI
TL;DR: The measurement of the antenna's frequency-dependent directional transfer function is described and quality measures for the antennas like the peak value of the transient response, its width and ringing, as well as the transient gain are discussed.
Abstract: Spectrum is presently one of the most valuable goods worldwide, as the demand is permanently increasing and it can be traded only locally. Since the United States Federal Communications Commission (FCC) has opened the spectrum from 3.1 GHz to 10.6 GHz, that is, a bandwidth of 7.5 GHz, for unlicensed use with up to -41.25 dBm/MHz EIRP, numerous applications in the communications and sensing areas are emerging. Like all wireless devices, these have an antenna as an integral part of the air interface. The antennas are modeled as linear time-invariant (LTI) systems with a transfer function. The measurement of the antenna's frequency-dependent directional transfer function is described. Quality measures for the antennas, such as the peak value of the transient response, its width and ringing, as well as the transient gain, are discussed. The application of these quality measures is shown for measurements of different UWB antennas.

241 citations
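The listed quality measures can be computed directly from a measured transfer function. A sketch under the assumptions of a uniform frequency grid, a Hilbert envelope, and a 0.22-of-peak ringing threshold; the published definitions may differ in these details.

```python
# Sketch: peak, envelope width, and ringing of an antenna's transient
# response, from a transfer function H sampled every df hertz (assumed grid).
import numpy as np
from scipy.signal import hilbert

def transient_measures(H, df):
    h = np.fft.irfft(H)                      # impulse response from H(f)
    dt = 1.0 / (len(h) * df)                 # time step of h
    env = np.abs(hilbert(h))                 # analytic envelope
    peak = env.max()
    width = np.count_nonzero(env > 0.5 * peak) * dt   # FWHM of the envelope
    above = np.nonzero(env > 0.22 * peak)[0]          # threshold assumed
    ringing = (above[-1] - int(np.argmax(env))) * dt  # decay time after peak
    return peak, width, ringing
```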


Journal ArticleDOI
TL;DR: This work demonstrates an integrated strategy for identifying buildings in 1-meter resolution satellite imagery of urban areas, using a differential morphological profile (DMP) that provides image structural information, complemented by shadow-derived contextual information.
Abstract: High-resolution satellite imagery provides an important new data source for building extraction. We demonstrate an integrated strategy for identifying buildings in 1-meter resolution satellite imagery of urban areas. Buildings are extracted using structural, contextual, and spectral information. First, a series of geodesic opening and closing operations are used to build a differential morphological profile (DMP) that provides image structural information. Building hypotheses are generated and verified through shape analysis applied to the DMP. Second, shadows are extracted using the DMP to provide reliable contextual information to hypothesize position and size of adjacent buildings. Seed building rectangles are verified and grown on a finely segmented image. Next, bright buildings are extracted using spectral information. The extraction results from the different information sources are combined after independent extraction. Performance evaluation of the building extraction on an urban test site using IKONOS satellite imagery of the City of Columbia, Missouri, is reported. With the combination of structural, contextual, and spectral information, 72.7% of the building areas are extracted with a quality percentage of 58.8%.

240 citations
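A simplified DMP built from plain grey-scale openings and closings is sketched below; the paper's geodesic (reconstruction-based) operators preserve object boundaries better, so this is only a structural stand-in, and the structuring-element sizes are assumed.

```python
# Simplified differential morphological profile (DMP): stacked differences
# of openings/closings at increasing scales. Geodesic operators replaced
# by plain grey-scale morphology for brevity.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def dmp(image, sizes=(3, 7, 11, 15)):
    image = image.astype(np.float64)
    prev_open, prev_close = image, image
    d_open, d_close = [], []
    for s in sizes:
        op = grey_opening(image, size=(s, s))
        cl = grey_closing(image, size=(s, s))
        d_open.append(prev_open - op)     # bright structure removed at scale s
        d_close.append(cl - prev_close)   # dark structure filled at scale s
        prev_open, prev_close = op, cl
    return np.stack(d_open), np.stack(d_close)
```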


Journal ArticleDOI
TL;DR: Experiments show that the parameterized description of spatial properties enables a highly efficient, high-quality stereo audio representation.
Abstract: Parametric-stereo coding is a technique to efficiently code a stereo audio signal as a monaural signal plus a small amount of parametric overhead to describe the stereo image. The stereo properties are analyzed, encoded, and reinstated in a decoder according to spatial psychoacoustical principles. The monaural signal can be encoded using any (conventional) audio coder. Experiments show that the parameterized description of spatial properties enables a highly efficient, high-quality stereo audio representation.

228 citations
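The analysis side reduces to a mono downmix plus per-band spatial parameters. The sketch below extracts an interchannel level difference and a coherence per band on a uniform frequency grid; the paper's bands follow psychoacoustic scales, and the decoder that reinstates the stereo image is omitted.

```python
# Sketch of parametric-stereo analysis: mono downmix plus per-band
# interchannel level difference (IID) and coherence (ICC). Uniform band
# edges are an assumption; synthesis is not shown.
import numpy as np
from scipy.signal import stft

def ps_analyze(left, right, fs, nbands=20, nperseg=1024):
    _, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    mono = 0.5 * (L + R)
    edges = np.linspace(0, L.shape[0], nbands + 1, dtype=int)
    iid = np.zeros((nbands, L.shape[1]))
    icc = np.zeros((nbands, L.shape[1]))
    for b in range(nbands):
        sl = slice(edges[b], edges[b + 1])
        pl = np.sum(np.abs(L[sl]) ** 2, axis=0) + 1e-12
        pr = np.sum(np.abs(R[sl]) ** 2, axis=0) + 1e-12
        iid[b] = 10 * np.log10(pl / pr)   # level difference in dB
        icc[b] = np.abs(np.sum(L[sl] * np.conj(R[sl]), axis=0)) / np.sqrt(pl * pr)
    return mono, iid, icc
```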


Journal ArticleDOI
TL;DR: The differential evolution (DE) algorithm is a heuristic approach with three main advantages: it finds the true global minimum of a multimodal search space regardless of the initial parameter values, converges fast, and uses only a few control parameters.
Abstract: Any digital signal processing algorithm or processor can be reasonably described as a digital filter. The main advantage of an infinite impulse response (IIR) filter is that it can provide a much better performance than a finite impulse response (FIR) filter with the same number of coefficients. However, IIR filters may have a multimodal error surface. The differential evolution (DE) algorithm is a heuristic approach with three main advantages: it finds the true global minimum of a multimodal search space regardless of the initial parameter values, converges fast, and uses only a few control parameters. In this work, the DE algorithm has been applied to the design of digital IIR filters and its performance has been compared to that of a genetic algorithm.

208 citations
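A small end-to-end example of the approach, using SciPy's differential-evolution implementation to fit a second-order IIR filter to an assumed ideal lowpass specification; unstable candidates are rejected with a penalty. The paper's filter orders, specifications, and DE settings are not reproduced here.

```python
# Sketch: IIR design by differential evolution. Decision variables are the
# numerator (b0, b1, b2) and denominator (a1, a2) coefficients.
import numpy as np
from scipy.signal import freqz
from scipy.optimize import differential_evolution

w = np.linspace(0, np.pi, 128)                    # evaluation grid (rad/sample)
target = (w < 0.3 * np.pi).astype(float)          # ideal lowpass (assumed spec)

def cost(x):
    b, a = x[:3], np.concatenate(([1.0], x[3:]))
    if np.any(np.abs(np.roots(a)) >= 1.0):        # stability penalty
        return 1e3
    _, h = freqz(b, a, worN=w)
    return float(np.mean((np.abs(h) - target) ** 2))

result = differential_evolution(cost, bounds=[(-2, 2)] * 5, seed=1)
print(result.x, result.fun)
```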


Journal ArticleDOI
Volkmar Hamacher, J. Chalupper, J. Eggers, E. Fischer, Ulrich Kornagel, H. Puder, Uwe Rass
TL;DR: An overview of state-of-the-art algorithms intended to improve the hearing ability of hearing-impaired persons is presented in this paper.
Abstract: The development of hearing aids incorporates two aspects, namely, the audiological and the technical point of view. The former focuses on items like the recruitment phenomenon, the speech intelligibility of hearing-impaired persons, or just on the question of hearing comfort. Concerning these subjects, different algorithms intending to improve the hearing ability are presented in this paper. These are automatic gain controls, directional microphones, and noise reduction algorithms. Besides the audiological point of view, there are several purely technical problems which have to be solved. An important one is the acoustic feedback. Another instance is the proper automatic control of all hearing aid components by means of a classification unit. In addition to an overview of state-of-the-art algorithms, this paper focuses on future trends.

208 citations


Journal ArticleDOI
TL;DR: A detailed rationale for the EVP architecture, based on the analysis of a number of key algorithms, as well as implementation and benchmarking results are described.
Abstract: A major challenge of software-defined radio (SDR) is to realize many giga operations per second of flexible baseband processing within a power budget of only a few hundred mW. A heterogeneous hardware architecture with the programmable vector processor EVP as key component can support WLAN, UMTS, and other standards. A detailed rationale for the EVP architecture, based on the analysis of a number of key algorithms, as well as implementation and benchmarking results are described.

165 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a direct position determination method for radio signal emitters that uses exactly the same data as the common AOA methods but the position determination is direct and can handle more than M - 1 cochannel simultaneous signals.
Abstract: The most common methods for position determination of radio signal emitters such as communications or radar transmitters are based on measuring a specified parameter such as angle of arrival (AOA) or time of arrival (TOA) of the signal. The measured parameters are then used to estimate the transmitter's location. Since the measurements are done at each base station independently, without using the constraint that the AOA/TOA estimates at different base stations should correspond to the same transmitter's location, this is a suboptimal location determination technique. Further, if the number of array elements at each base station is M, and the signal waveforms are unknown, the number of cochannel simultaneous transmitters that can be localized by AOA is limited to M - 1. Also, most AOA algorithms fail when the sources are not well angularly separated. We propose a technique that uses exactly the same data as the common AOA methods but the position determination is direct. The proposed method can handle more than M - 1 cochannel simultaneous signals. Although there are many nuisance parameters, only a two-dimensional search is required for a planar geometry. The technique provides a natural solution to the measurement-source association problem that is encountered in AOA-based location systems. In addition to new algorithms, we provide analytical performance analysis, Cramér-Rao bounds, and Monte Carlo simulations. We demonstrate that the proposed approach frequently outperforms the traditional AOA methods for unknown as well as known signal waveforms.

164 citations
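A stripped-down illustration of the "direct" idea: instead of estimating an AOA at each station and intersecting bearings, score every candidate position against all stations jointly and take the maximum. The conventional-beamformer score below is a simplification of the paper's estimator; the geometry and narrowband signal model are assumed.

```python
# Simplified direct position determination: 2D grid search over candidate
# emitter positions, scored by summed beamformer power across stations.
import numpy as np

C = 3e8  # propagation speed (m/s)

def steering(array_xy, src_xy, freq):
    """Narrowband steering vector of one station toward a candidate point."""
    d = np.linalg.norm(array_xy - src_xy, axis=1)   # element-to-point ranges
    return np.exp(-2j * np.pi * freq * d / C)

def dpd_grid(covs, arrays, grid, freq):
    """covs[k]: sample covariance at station k; arrays[k]: (M, 2) positions;
    grid: (n, 2) candidate emitter positions."""
    score = np.zeros(len(grid))
    for i, p in enumerate(grid):
        for Rk, Ak in zip(covs, arrays):
            a = steering(Ak, p, freq)
            a /= np.linalg.norm(a)
            score[i] += np.real(a.conj() @ Rk @ a)  # beamformer output power
    return grid[int(np.argmax(score))]
```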


Journal ArticleDOI
TL;DR: In this paper, the premise is that the correlation between bit planes, as well as the binary texture characteristics within the bit planes, differs between a stego image and a cover image; these telltale marks are used to construct a classifier that can distinguish between stego and cover images.
Abstract: We present a novel technique for steganalysis of images that have been subjected to embedding by steganographic algorithms. The seventh and eighth bit planes in an image are used for the computation of several binary similarity measures. The basic idea is that the correlation between the bit planes as well as the binary texture characteristics within the bit planes will differ between a stego image and a cover image. These telltale marks are used to construct a classifier that can distinguish between stego and cover images. We also provide experimental results using some of the latest steganographic algorithms. The proposed scheme is found to have complementary performance vis-à-vis Farid's scheme, in that each outperforms the other on different embedding techniques.
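A toy version of the feature side: a few binary similarity scores between the seventh and eighth bit planes of an 8-bit grayscale image. The paper's measure set is much richer; features like these would be computed for labeled cover and stego images and fed to any standard classifier.

```python
# Sketch of bit-plane similarity features for steganalysis. The measures
# here (agreement rate, transition rates) are simple stand-ins for the
# paper's full set of binary similarity measures.
import numpy as np

def bitplane_features(img):
    """img: 2D uint8 grayscale image -> small feature vector."""
    p7 = (img >> 1) & 1            # seventh bit plane
    p8 = img & 1                   # eighth (least significant) bit plane
    agree = np.mean(p7 == p8)      # plane-to-plane agreement
    def transitions(p):            # horizontal 0/1 transition rate (texture)
        return np.mean(p[:, 1:] != p[:, :-1])
    return np.array([agree, transitions(p7), transitions(p8)])
```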

Journal ArticleDOI
TL;DR: An EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectrum, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator is proposed.
Abstract: The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by the driver's drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's perception, recognition, and vehicle-control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectrum, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
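The described chain maps naturally onto a standard estimation pipeline. A sketch with assumed epoch shapes and band edges follows; the paper's exact subbands, smoothing, and model selection are not reproduced here.

```python
# Sketch: EEG log band power -> PCA -> linear regression onto driving
# performance (lane deviation). Band edges and dimensions are assumed.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def log_band_powers(epochs, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_features)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = [np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1))
             for lo, hi in bands]
    return np.concatenate(feats, axis=-1)

# Hypothetical usage, with `epochs` and lane-deviation targets `y` given:
# X = log_band_powers(epochs, fs=250)
model = make_pipeline(PCA(n_components=10), LinearRegression())
# model.fit(X, y); drowsiness_estimate = model.predict(X_new)
```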

Journal ArticleDOI
TL;DR: The performance with AWGN and multipath, the resistance to narrowband interference, as well as the simultaneous detection of multiple FM signals at the same carrier frequency are addressed.
Abstract: This paper presents a novel UWB communications system using double FM: a low-modulation-index digital FSK followed by a high-modulation-index analog FM to create a constant-envelope UWB signal. FDMA techniques at the subcarrier level are exploited to accommodate multiple users. The system is intended for low (1-10 kbps) and medium (100-1000 kbps) bit rates in short-range WPAN systems. A wideband delay-line FM demodulator that is not preceded by any limiting amplifier constitutes the key component of the UWBFM receiver. This unusual approach permits multiple users to share the same RF bandwidth. Multipath, however, may limit the useful subcarrier bandwidth to one octave. This paper addresses the performance with AWGN and multipath, the resistance to narrowband interference, as well as the simultaneous detection of multiple FM signals at the same carrier frequency. SPICE and Matlab simulation results illustrate the principles and limitations of this new technology. A hardware demonstrator has been realized and has allowed the confirmation of theory with practical results.
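A numeric sketch of the double-FM chain follows. All rates, frequencies, and deviations below are invented for illustration: a low-index FSK subcarrier is generated first and then frequency-modulates the carrier with a high index, which yields the constant-envelope signal.

```python
# Sketch: constant-envelope UWB by double FM (digital FSK subcarrier, then
# high-index analog FM). Every parameter value here is a placeholder.
import numpy as np

def uwbfm(bits, fs=2e9, rb=100e3, f_sub=1e6, dev_sub=2e3,
          f_c=500e6, dev_rf=250e6):
    sym = np.repeat(np.where(np.asarray(bits) > 0, 1.0, -1.0), int(fs / rb))
    sub = np.cos(2 * np.pi * np.cumsum(f_sub + dev_sub * sym) / fs)  # FSK
    phase = 2 * np.pi * np.cumsum(f_c + dev_rf * sub) / fs  # wideband FM
    return np.cos(phase)                 # constant-envelope UWB signal

s = uwbfm([1, 0, 1, 1])
print(s.shape)                           # 4 bits x 20000 samples per bit
```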

Journal ArticleDOI
TL;DR: A sound classification system for the automatic recognition of the acoustic environment in a hearing aid that distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music" is discussed.
Abstract: A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
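Two of the listed feature families (amplitude modulation and spectral profile) are easy to sketch, together with the simplest of the compared classifiers, a minimum-distance rule. The frame lengths and feature choices are assumptions of this sketch.

```python
# Sketch: two auditory-inspired features plus a minimum-distance classifier.
import numpy as np
from scipy.signal import welch

def features(x, fs):
    hop = fs // 50                                  # 20 ms envelope frames
    env = np.abs(x[: len(x) // hop * hop]).reshape(-1, hop).mean(axis=1)
    am_depth = env.std() / (env.mean() + 1e-12)     # amplitude-modulation depth
    f, psd = welch(x, fs=fs, nperseg=1024)
    centroid = np.sum(f * psd) / (np.sum(psd) + 1e-12)  # spectral profile
    return np.array([am_depth, centroid / (fs / 2)])

def min_distance_classify(feat, class_means):
    """class_means: (n_classes, n_features) of per-class feature averages."""
    return int(np.argmin(np.linalg.norm(class_means - feat, axis=1)))
```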

Journal ArticleDOI
TL;DR: The impact of the US FCC's regulations and the characteristics of the low-power UWB propagation channels are explored, and their effects on UWB hardware design are illustrated.
Abstract: The application of ultra-wideband (UWB) technology to low-cost short-range communications presents unique challenges to the communications engineer. The impact of the US FCC's regulations and the characteristics of the low-power UWB propagation channels are explored, and their effects on UWB hardware design are illustrated. This tutorial introduction includes references to more detailed explorations of the subject.

Journal ArticleDOI
TL;DR: Although the RCE method was not provided with prior knowledge about the mental task, channels that are well known to be important were consistently selected whereas task-irrelevant channels were reliably disregarded.
Abstract: Most EEG-based brain-computer interface (BCI) paradigms come along with specific electrode positions, for example, for a visual-based BCI, electrode positions close to the primary visual cortex are used. For new BCI paradigms it is usually not known where task relevant activity can be measured from the scalp. For individual subjects, Lal et al. in 2004 showed that recording positions can be found without the use of prior knowledge about the paradigm used. However it remains unclear to what extent their method of recursive channel elimination (RCE) can be generalized across subjects. In this paper we transfer channel rankings from a group of subjects to a new subject. For motor imagery tasks the results are promising, although cross-subject channel selection does not quite achieve the performance of channel selection on data of single subjects. Although the RCE method was not provided with prior knowledge about the mental task, channels that are well known to be important (from a physiological point of view) were consistently selected whereas task-irrelevant channels were reliably disregarded.
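A sketch of recursive channel elimination in the spirit described: repeatedly train a linear SVM and drop the channel whose weights contribute least. The layout of features grouped per channel, and the weight-norm ranking criterion, are assumptions of this sketch.

```python
# Sketch of recursive channel elimination (RCE) with a linear SVM.
# X columns are assumed grouped channel-by-channel; binary task assumed
# (coef_ has one row).
import numpy as np
from sklearn.svm import SVC

def rce_ranking(X, y, n_channels, feats_per_ch):
    """Return channel indices ordered worst-first (last = most informative)."""
    remaining = list(range(n_channels))
    eliminated = []
    while len(remaining) > 1:
        cols = np.concatenate([np.arange(c * feats_per_ch,
                                         (c + 1) * feats_per_ch)
                               for c in remaining])
        w = SVC(kernel='linear').fit(X[:, cols], y).coef_[0]
        scores = np.linalg.norm(w.reshape(len(remaining), feats_per_ch), axis=1)
        eliminated.append(remaining.pop(int(np.argmin(scores))))
    return eliminated + remaining
```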

Journal ArticleDOI
TL;DR: By the application of FES, noninvasive restoration of hand grasp function in a tetraplegic patient was achieved and the patient was able to grasp a glass with the paralyzed hand completely on his own without additional help or other technical aids.
Abstract: The present study reports on the use of an EEG-based asynchronous (uncued, user-driven) brain-computer interface (BCI) for the control of functional electrical stimulation (FES). By the application of FES, noninvasive restoration of hand grasp function in a tetraplegic patient was achieved. The patient was able to induce bursts of beta oscillations by imagining foot movement. These beta oscillations were recorded in a single-EEG-channel configuration, bandpass filtered, and squared. When this beta activity exceeded a predefined threshold, a trigger for the FES was generated. Each detected trigger switched the system to the next phase of a grasp sequence composed of four phases. The patient was able to grasp a glass with the paralyzed hand completely on his own, without additional help or other technical aids.
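The described trigger chain (bandpass filter, square, threshold) is simple enough to sketch directly. The beta-band edges, smoothing window, and threshold below are placeholders that would be calibrated per user.

```python
# Sketch of the single-channel beta-burst trigger: bandpass, square,
# smooth, threshold. All numeric values are illustrative placeholders.
import numpy as np
from scipy.signal import butter, lfilter

def beta_trigger(eeg, fs, band=(15.0, 19.0), thresh=5.0, smooth_s=0.5):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    beta = lfilter(b, a, eeg)                  # beta-band activity
    k = int(smooth_s * fs)
    power = lfilter(np.ones(k) / k, [1.0], beta ** 2)  # smoothed power
    return power > thresh                      # True where FES is triggered
```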

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed algorithm can consistently locate a unique reference point and compute the corresponding reference orientation with high accuracy for all types of fingerprints.
Abstract: A robust fingerprint recognition algorithm should tolerate the rotation and translation of the fingerprint image. One popular solution is to consistently detect a unique reference point and compute a unique reference orientation for translational and rotational alignment. This paper develops an effective algorithm to locate a reference point and compute the corresponding reference orientation consistently and accurately for all types of fingerprints. To compute a reliable orientation field, an improved orientation smoothing method is proposed based on an adaptive neighborhood. It filters noise better than the conventional averaging method while preserving orientation localization. The reference-point localization is based on a multiscale analysis of the orientation consistency to search for the local minimum. The unique reference orientation is computed from the analysis of the orientation differences between the radial directions from the reference point, which are the directions of radii emitted from the reference point at equal angular intervals, and the local ridge orientations along these radii. Experimental results demonstrate that our proposed algorithm can consistently locate a unique reference point and compute the reference orientation with high accuracy for all types of fingerprints.
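A sketch of the standard gradient-based orientation field together with a blockwise coherence (consistency) map; its minimum marks the singular region, a natural reference-point candidate. The paper's adaptive-neighborhood smoothing and multiscale search are omitted.

```python
# Sketch: squared-gradient orientation field and coherence map. Low
# coherence localizes the singular (reference-point) region.
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def orientation_consistency(img, block=16):
    img = img.astype(np.float64)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    gxx = uniform_filter(gx * gx, block)
    gyy = uniform_filter(gy * gy, block)
    gxy = uniform_filter(gx * gy, block)
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)   # local ridge orientation
    coher = np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2) / (gxx + gyy + 1e-12)
    return theta, coher
```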

Journal ArticleDOI
TL;DR: In this article, a blind separation of nonstationary sources in the underdetermined case, where there are more sources than sensors, is studied, where the original sources are disjoint in the time-frequency domain.
Abstract: We examine the problem of blind separation of nonstationary sources in the underdetermined case, where there are more sources than sensors. Since time-frequency (TF) signal processing provides effective tools for dealing with nonstationary signals, we propose a new separation method that is based on time-frequency distributions (TFDs). The underlying assumption is that the original sources are disjoint in the TF domain. The method recovers the sources through four main procedures. First, the spatial time-frequency distribution (STFD) matrices are computed from the observed mixtures. Next, the auto-source TF points are separated from cross-source TF points thanks to the special structure of the mixture STFD matrices. Then, the vectors that correspond to the selected auto-source points are clustered into different classes according to the spatial directions, which differ among different sources; each class, now containing the auto-source points of only one source, gives an estimate of the TFD of that source. Finally, the source waveforms are recovered from their TFD estimates using TF synthesis. Simulated experiments indicate the success of the proposed algorithm in different scenarios. We also contribute two modified versions of the algorithm to better deal with auto-source point selection.
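A compressed sketch of the masking idea using a plain STFT in place of the paper's spatial time-frequency distributions: take the high-energy TF points, cluster their spatial signatures, then mask and resynthesize one source per cluster. Anechoic (instantaneous) mixing and the clustering features are assumptions of this sketch.

```python
# Sketch of underdetermined separation by TF masking: cluster the spatial
# signature (level ratio, phase difference) of strong STFT points.
import numpy as np
from scipy.signal import stft, istft
from sklearn.cluster import KMeans

def tf_separate(mixtures, fs, n_sources=3, nperseg=512, floor=1e-3):
    X = np.stack([stft(m, fs=fs, nperseg=nperseg)[2] for m in mixtures])
    power = np.sum(np.abs(X) ** 2, axis=0)
    active = power > floor * power.max()          # keep strong TF points
    vecs = X[:, active]                           # (n_mics, n_points)
    ratio = np.abs(vecs[1]) / (np.abs(vecs[0]) + 1e-12)
    ang = np.angle(vecs[1] / (vecs[0] + 1e-12))
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(
        np.column_stack([ratio, ang]))
    sources = []
    for k in range(n_sources):
        mask = np.zeros(X.shape[1:], dtype=bool)
        mask[active] = labels == k                # binary TF mask per source
        sources.append(istft(X[0] * mask, fs=fs, nperseg=nperseg)[1])
    return sources
```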

Journal ArticleDOI
TL;DR: A new signal processing technique for cochlear implants using a psychoacoustic-masking model in order to determine the essential components of any given audio signal.
Abstract: We describe a new signal processing technique for cochlear implants using a psychoacoustic-masking model. The technique is based on the principle of a so-called "NofM" strategy. These strategies stimulate fewer channels (N) per cycle than active electrodes (NofM; N < M). In "NofM" strategies such as ACE or SPEAK, only the N channels with higher amplitudes are stimulated. The new strategy is based on the ACE strategy but uses a psychoacoustic-masking model in order to determine the essential components of any given audio signal. This new strategy was tested on device users in an acute study, with either 4 or 8 channels stimulated per cycle. For the first condition (4 channels), the mean improvement over the ACE strategy was 17%. For the second condition (8 channels), no significant difference was found between the two strategies.
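The baseline "NofM" selection that the strategy builds on is a one-liner per frame: keep the N largest channel envelopes and zero the rest. The paper's contribution replaces this amplitude criterion with one driven by the psychoacoustic-masking model, which is not reproduced here.

```python
# Baseline NofM channel selection: stimulate only the N largest of M
# channel envelopes in each frame.
import numpy as np

def n_of_m(envelopes, n):
    """envelopes: (n_channels, n_frames) -> same-shape stimulation pattern."""
    out = np.zeros_like(envelopes)
    idx = np.argsort(envelopes, axis=0)[-n:]   # N largest per frame
    cols = np.arange(envelopes.shape[1])
    out[idx, cols] = envelopes[idx, cols]
    return out
```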

Journal ArticleDOI
TL;DR: A coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources and a concatenated scheme aimed at reducing the error floor is proposed.
Abstract: We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
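A minimal sketch of the systematic encoding step: parity bits are mod-2 products of the message with a sparse random generator part. The column weight and construction are illustrative, and the concatenated scheme and decoders discussed in the paper are not shown.

```python
# Sketch: systematic LDGM encoding, codeword = [message | parity], with a
# sparse random generator part of assumed column weight.
import numpy as np

def ldgm_encode(u, n_parity, col_weight=3, rng=np.random.default_rng(0)):
    u = np.asarray(u, dtype=np.uint8)
    k = len(u)
    G = np.zeros((k, n_parity), dtype=np.uint8)
    for j in range(n_parity):                  # few ones per parity column
        G[rng.choice(k, size=col_weight, replace=False), j] = 1
    parity = (u @ G) % 2
    return np.concatenate([u, parity]), G
```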

Journal ArticleDOI
TL;DR: A novel dynamic hand gesture recognition technique is proposed, based on the 2D skeleton representation of the hand; recognition is performed by comparing the gesture's dynamic signature with those from a gesture alphabet, using Baddeley's distance as a measure of dissimilarity between model parameters.
Abstract: This paper discusses the use of the computer vision in the interpretation of human gestures. Hand gestures would be an intuitive and ideal way of exchanging information with other people in a virtual space, guiding some robots to perform certain tasks in a hostile environment, or interacting with computers. Hand gestures can be divided into two main categories: static gestures and dynamic gestures. In this paper, a novel dynamic hand gesture recognition technique is proposed. It is based on the 2D skeleton representation of the hand. For each gesture, the hand skeletons of each posture are superposed providing a single image which is the dynamic signature of the gesture. The recognition is performed by comparing this signature with the ones from a gesture alphabet, using Baddeley's distance as a measure of dissimilarities between model parameters.
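The "dynamic signature" step can be sketched as an OR-superposition of per-posture skeletons; hand segmentation and the Baddeley-distance matching against the alphabet are omitted here.

```python
# Sketch: build a gesture's dynamic signature by superposing the 2D
# skeletons of its successive binary hand masks.
import numpy as np
from skimage.morphology import skeletonize

def gesture_signature(binary_masks):
    """binary_masks: iterable of 2D boolean hand silhouettes."""
    sig = None
    for mask in binary_masks:
        sk = skeletonize(mask)
        sig = sk if sig is None else (sig | sk)
    return sig
```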

Journal ArticleDOI
TL;DR: This paper has built an ArSL system and measured its performance using real ArSL data collected from deaf people and achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
Abstract: Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
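The claimed advantages (no iterative training, scalability in the number of classes) follow from fitting the expanded features in closed form. A generic sketch, not the authors' exact formulation: one least-squares discriminant per class against one-hot targets.

```python
# Sketch of a polynomial classifier: polynomial feature expansion followed
# by a closed-form least-squares fit against one-hot class targets.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

class PolyClassifier:
    def __init__(self, degree=2):
        self.expand = PolynomialFeatures(degree)

    def fit(self, X, y):
        Z = self.expand.fit_transform(X)
        T = np.eye(int(y.max()) + 1)[y]             # one-hot targets
        self.W, *_ = np.linalg.lstsq(Z, T, rcond=None)
        return self

    def predict(self, X):
        return np.argmax(self.expand.transform(X) @ self.W, axis=1)
```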

Journal ArticleDOI
TL;DR: A robust vision-based traffic monitoring system for vehicle and traffic information extraction and a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing is proposed.
Abstract: A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging to maintain detection robustness at all times for a highway surveillance system. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion so as to extract the exact location of each vehicle. For vehicle nighttime detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.

Journal ArticleDOI
TL;DR: A new perceptual model is presented that predicts masked thresholds for sinusoidal distortions and leads to a reduction of more than 20% in terms of number of sinusoids needed to represent signals at a given quality level.
Abstract: Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in terms of number of sinusoids needed to represent signals at a given quality level.

Journal ArticleDOI
TL;DR: This paper reexamines the GCC- and AMDF-based TDE techniques in real room reverberant and noisy environments and proposes a weighted cross-correlation (WCC) estimator in which the GCC function is weighted by the reciprocal of AMDF, which leads to a better estimation performance as compared to the conventional GCC estimator.
Abstract: Recently, there has been an increased interest in the use of the time-delay estimation (TDE) technique to locate and track acoustic sources in a reverberant environment. Typically, the delay estimate is obtained by identifying the extremum of the generalized cross-correlation (GCC) function or the average magnitude difference function (AMDF). These estimators are well studied and their statistical performance is well understood for single-path propagation situations. However, fewer efforts have been reported on their performance behavior in real reverberation conditions. This paper reexamines the GCC- and AMDF-based TDE techniques in real room reverberant and noisy environments. Our contribution is threefold. First, we propose a weighted cross-correlation (WCC) estimator in which the GCC function is weighted by the reciprocal of the AMDF. This new method can sharpen the peak of the GCC function that corresponds to the true time delay and thus leads to a better estimation performance than the conventional GCC estimator. Second, we propose a modified version of the AMDF (MAMDF) estimator in which the delay is determined by jointly considering the AMDF and the average magnitude sum function (AMSF). Third, we compare the performance of the GCC, AMDF, WCC, and MAMDF estimators in real reverberant and noisy environments. It is shown that the AMDF estimator can yield better performance in favorable noise conditions and is slightly more resilient to reverberation than the GCC method. The GCC approach, however, is found to outperform the AMDF method in strongly noisy environments. Weighting the correlation function by the reciprocal of the AMDF can improve the performance of the GCC estimator in reverberation conditions, yet its improvement in noisy environments is limited. The MAMDF algorithm can enhance the AMDF estimator in both reverberant and noisy environments.
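The proposed weighting is easy to sketch, with a plain cross-correlation standing in for the GCC (which would normally add a frequency-domain weighting such as PHAT): divide the correlation by the AMDF at the same lag and pick the peak.

```python
# Sketch of the weighted cross-correlation (WCC): cross-correlation divided
# elementwise by the AMDF over a bounded lag range. Plain correlation is
# used here in place of a weighted GCC.
import numpy as np

def wcc_delay(x, y, max_lag):
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    cc, amdf = np.empty(len(lags)), np.empty(len(lags))
    for i, l in enumerate(lags):
        xs = x[max(0, -l): n - max(0, l)]
        ys = y[max(0, l): n - max(0, -l)]
        cc[i] = np.sum(xs * ys)                 # correlation at lag l
        amdf[i] = np.mean(np.abs(xs - ys))      # AMDF at lag l
    return lags[int(np.argmax(cc / (amdf + 1e-12)))]
```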

Journal ArticleDOI
TL;DR: The novel concept of a phase tube is introduced, which enables a quantitative assessment of the Pol-InSAR performance, a comparison between different sensor configurations, and an optimization of the instrument settings for different Pol-InSAR applications; the phase tube may hence serve as an interface between system engineers and application-oriented scientists.
Abstract: We investigate multichannel imaging radar systems employing coherent combinations of polarimetry and interferometry (Pol-InSAR). Such systems are well suited for the extraction of bio- and geophysical parameters by evaluating the combined scattering from surfaces and volumes. This combination leads to several important differences between the design of Pol-InSAR sensors and conventional single polarisation SAR interferometers. We first highlight these differences and then investigate the Pol-InSAR performance of two proposed spaceborne SAR systems (ALOS/PalSAR and TerraSAR-L) operating in repeat-pass mode. For this, we introduce the novel concept of a phase tube which enables (1) a quantitative assessment of the Pol-InSAR performance, (2) a comparison between different sensor configurations, and (3) an optimization of the instrument settings for different Pol-InSAR applications. The phase tube may hence serve as an interface between system engineers and application-oriented scientists. The performance analysis reveals major limitations for even moderate levels of temporal decorrelation. Such deteriorations may be avoided in single-pass sensor configurations and we demonstrate the potential benefits from the use of future bi- and multistatic SAR interferometers.

Journal ArticleDOI
TL;DR: The mathematical framework related to higher-order SAR interferometry is presented as well as preliminary results obtained on simulated and real data showing how the PS density can be increased at the price of a higher computational load.
Abstract: The permanent scatterers (PS) technique is a multi-interferogram algorithm for DInSAR analyses developed in the late nineties to overcome the difficulties related to the conventional approach, namely, phase decorrelation and atmospheric effects. The successful application of this technology to many geophysical studies is now pushing toward further improvements and optimizations. A possible strategy to increase the number of radar targets that can be exploited for surface deformation monitoring is the adoption of parametric super-resolution algorithms that can cope with multiple scattering centres within the same resolution cell. In fact, since a PS is usually modelled as a single pointwise scatterer dominating the background clutter, radar targets having cross-range dimension exceeding a few meters can be lost (at least in C-band datasets), due to geometrical decorrelation phenomena induced in the high normal baseline interferograms of the dataset. In this paper, the mathematical framework related to higher-order SAR interferometry is presented as well as preliminary results obtained on simulated and real data. It is shown how the PS density can be increased at the price of a higher computational load.

Journal ArticleDOI
TL;DR: It is hypothesized that signal processing and machine learning methods can be used to discriminate EEG in a direct "yes"/"no" BCI from a single session and the results suggest that BSS and feature selection can be use to improve the performance of even a "direct," single-session BCI.
Abstract: Most EEG-based BCI systems make use of well-studied patterns of brain activity. However, those systems involve tasks that indirectly map to simple binary commands such as "yes" or "no" or require many weeks of biofeedback training. We hypothesized that signal processing and machine learning methods can be used to discriminate EEG in a direct "yes"/"no" BCI from a single session. Blind source separation (BSS) and spectral transformations of the EEG produced a 180-dimensional feature space. We used a modified genetic algorithm (GA) wrapped around a support vector machine (SVM) classifier to search the space of feature subsets. The GA-based search found feature subsets that outperform full feature sets and random feature subsets. Also, BSS transformations of the EEG outperformed the original time series, particularly in conjunction with a subset search of both spaces. The results suggest that BSS and feature selection can be used to improve the performance of even a "direct," single-session BCI.
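A compact sketch of the wrapper idea: individuals are boolean feature masks and fitness is cross-validated SVM accuracy. The population size, operators, and rates below are generic placeholders, not the paper's modified GA.

```python
# Sketch: GA-wrapped SVM feature-subset search. Fitness = 3-fold CV accuracy
# of an SVM on the masked feature set. All GA settings are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ga_select(X, y, pop=20, gens=15, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    masks = rng.random((pop, n)) < 0.5

    def fitness(m):
        return cross_val_score(SVC(), X[:, m], y, cv=3).mean() if m.any() else 0.0

    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(scores)[::-1][: pop // 2]]  # truncation
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n))                      # 1-pt crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut                     # mutation
            children.append(child)
        masks = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in masks])
    return masks[int(np.argmax(scores))]
```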

Journal ArticleDOI
TL;DR: This paper addresses the problem of high-resolution polarized source detection and introduces a new eigenstructure-based algorithm, built on fourth-order tensor decomposition, that yields direction of arrival (DOA) and polarization estimates using a vector-sensor (or multicomponent-sensor) array.
Abstract: This paper addresses the problem of high-resolution polarized source detection and introduces a new eigenstructure-based algorithm that yields direction of arrival (DOA) and polarization estimates using a vector-sensor (or multicomponent-sensor) array. This method is based on the separation of the observation space into signal and noise subspaces using fourth-order tensor decomposition. In geophysics, in particular for reservoir acquisition and monitoring, a set of N_x multicomponent sensors is laid on the ground with constant spacing Δx between them. Such a data acquisition scheme has intrinsically three modes: time, distance, and components. The proposed method uses multilinear algebra in order to preserve the data structure and avoid reorganization; the data are thus stored in three-dimensional arrays rather than matrices. Higher-order eigenvalue decomposition (HOEVD) of fourth-order tensors is considered to achieve subspace estimation and to compute the eigenelements. We propose a tensorial version of the MUSIC algorithm for a vector-sensor array, allowing joint estimation of DOA and signal polarization. The performance of the proposed algorithm is evaluated.
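For orientation, the matrix MUSIC baseline that the tensor method generalizes is sketched below (the paper replaces this eigendecomposition with an HOEVD of a fourth-order tensor and adds joint polarization estimation); a uniform linear array steering model is assumed.

```python
# Sketch: classical matrix MUSIC pseudo-spectrum for DOA estimation on a
# uniform linear array. The paper's tensorial (HOEVD) version generalizes
# the subspace step; only the matrix baseline is shown here.
import numpy as np

def music_spectrum(R, n_sources, n_angles=361, d_over_lambda=0.5):
    _, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, : R.shape[0] - n_sources]       # noise subspace
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    m = np.arange(R.shape[0])
    P = np.empty(n_angles)
    for i, th in enumerate(angles):
        a = np.exp(-2j * np.pi * d_over_lambda * m * np.sin(th))
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, P                          # peaks at the source DOAs
```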