
Showing papers in "EURASIP Journal on Advances in Signal Processing in 2006"


Journal ArticleDOI
TL;DR: A frequency domain technique that precisely registers a set of aliased images based on their low-frequency, aliasing-free part; a high-resolution image is then reconstructed using cubic interpolation.
Abstract: Super-resolution algorithms reconstruct a high-resolution image from a set of low-resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low-resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high-resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and practical experiments using real aliased images. Both show very good visual results and demonstrate the attractiveness of our approach in the case of aliased input images. A possible application is in digital cameras, where a set of rapidly acquired images can be used to recover a higher-resolution final image.

520 citations
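
To make the registration idea concrete, here is a minimal sketch of frequency-domain shift estimation restricted to the low, aliasing-free band: phase correlation on a masked spectrum. It illustrates the general principle only (integer shifts, no rotation or subpixel refinement), not the authors' exact algorithm; the function name and the `keep` fraction are assumptions.

```python
import numpy as np

def register_translation_lowfreq(im0, im1, keep=0.25):
    """Estimate an integer shift between two images by phase correlation,
    using only the central (low-frequency) part of the spectrum, which is
    assumed to be aliasing-free. `keep` (fraction of the band retained)
    is an illustrative parameter, not a value from the paper."""
    F0, F1 = np.fft.fft2(im0), np.fft.fft2(im1)
    R = F0 * np.conj(F1)
    R /= np.abs(R) + 1e-12                    # normalized cross-power spectrum
    h, w = R.shape
    kh, kw = max(1, int(h * keep / 2)), max(1, int(w * keep / 2))
    mask = np.zeros_like(R)
    mask[:kh, :kw] = mask[:kh, -kw:] = mask[-kh:, :kw] = mask[-kh:, -kw:] = 1
    corr = np.real(np.fft.ifft2(R * mask))    # masked phase correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return (dy - h if dy > h // 2 else dy,    # wrap to signed shifts
            dx - w if dx > w // 2 else dx)
```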


Journal ArticleDOI
TL;DR: An aided dead-reckoning navigation structure and signal processing algorithms are presented for self-localization of an autonomous mobile device by fusing pedestrian dead reckoning and WiFi signal strength measurements.
Abstract: This paper presents an aided dead-reckoning navigation structure and signal processing algorithms for self-localization of an autonomous mobile device by fusing pedestrian dead reckoning and WiFi signal strength measurements. WiFi and inertial navigation systems (INS) are used for positioning and attitude determination in a wide range of applications. Over the last few years, a number of low-cost inertial sensors have become available. Although they exhibit large errors, WiFi measurements can be used to correct the drift that would otherwise weaken navigation based on this technology. On the other hand, INS sensors can interact with the WiFi positioning system, as they provide high-accuracy real-time navigation. A structure based on a Kalman filter and a particle filter is proposed. It fuses the heterogeneous information coming from those two independent technologies. Finally, the benefits of the proposed architecture are evaluated and compared with the pure WiFi and INS positioning systems.

428 citations
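
The fusion principle can be illustrated with a deliberately simplified one-dimensional Kalman filter in which dead-reckoning steps drive the prediction and occasional WiFi fixes correct the drift; the paper's actual architecture (Kalman plus particle filter, full position/attitude states) is considerably richer. All names and noise values below are assumptions for illustration.

```python
def kf_pdr_wifi(steps, wifi_fixes, q=0.05, r=9.0):
    """Toy 1D fusion: pedestrian-dead-reckoning displacements `steps`
    drive the prediction; WiFi position fixes (None when unavailable)
    correct the accumulated drift. q and r are illustrative process and
    measurement noise variances."""
    x, p = 0.0, 1.0                  # position estimate and its variance
    track = []
    for step, z in zip(steps, wifi_fixes):
        x, p = x + step, p + q       # predict: integrate the PDR step
        if z is not None:            # update: fuse a WiFi fix
            k = p / (p + r)          # Kalman gain
            x, p = x + k * (z - x), (1.0 - k) * p
        track.append(x)
    return track
```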


Journal ArticleDOI
TL;DR: A systematic overview of the state-of-the-art of time-delay-estimation algorithms ranging from the simple cross-correlation method to the advanced blind channel identification based techniques is presented.
Abstract: Time delay estimation has been a research topic of significant practical importance in many fields (radar, sonar, seismology, geophysics, ultrasonics, hands-free communications, etc.). It is a first stage that feeds into subsequent processing blocks for identifying, localizing, and tracking radiating sources. This area has made remarkable advances in the past few decades, and is continuing to progress, with an aim to create processors that are tolerant to both noise and reverberation. This paper presents a systematic overview of the state-of-the-art of time-delay-estimation algorithms ranging from the simple cross-correlation method to the advanced blind channel identification based techniques. We discuss the pros and cons of each individual algorithm, and outline their inherent relationships. We also provide experimental results to illustrate their performance differences in room acoustic environments where reverberation and noise are commonly encountered.

390 citations
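
As an example from the simple end of that spectrum, below is a sketch of the generalized cross-correlation with phase transform (GCC-PHAT), a standard member of the cross-correlation family that is comparatively robust to reverberation; it is offered as an illustration, not as the paper's benchmark implementation.

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Estimate the delay of y relative to x via generalized
    cross-correlation with PHAT weighting; returns seconds."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = X * np.conj(Y)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n)  # PHAT: keep phase only
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```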


Journal ArticleDOI
TL;DR: This paper presents a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases and shows that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small.
Abstract: The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.

276 citations
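
For the TOA case, the flavor of a weighted least-squares position fix can be conveyed with a plain Gauss-Newton iteration. Note that this is an unconstrained WLS sketch, not the paper's constrained (CWLS) estimator, and all names are illustrative.

```python
import numpy as np

def wls_toa(anchors, ranges, weights, x0, iters=20):
    """Gauss-Newton weighted least squares for TOA range measurements.
    anchors: (N, 2) known base-station positions; ranges: N measured
    distances; weights: inverse error variances; x0: initial guess."""
    x = np.asarray(x0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]   # Jacobian of the range model
        r = ranges - d                   # measurement residuals
        x = x + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x
```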


Journal ArticleDOI
TL;DR: This paper investigates time of arrival (ToA) estimation methods for ultra-wide bandwidth (UWB) propagation signals and tests different suboptimal, low-complexity techniques based on peak detection to deal with partial overlap of signal paths.
Abstract: This paper investigates time of arrival (ToA) estimation methods for ultra-wide bandwidth (UWB) propagation signals. Different algorithms are implemented in order to detect the direct path in a dense multipath environment. Different suboptimal, low-complexity techniques based on peak detection are used to deal with partial overlap of signal paths. A comparison in terms of ranging accuracy, complexity, and parameter sensitivity to propagation conditions is carried out, also considering a conventional technique based on threshold detection. In particular, the algorithms are tested on experimental data collected from a measurement campaign performed in a typical office building.

248 citations
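
The conventional threshold-based detector that serves as the baseline comparison can be stated in a few lines; the threshold ratio below is an assumed value, and a real UWB receiver operates on noisy energy samples rather than a clean envelope.

```python
import numpy as np

def toa_threshold(r, fs, thresh_ratio=0.2):
    """Baseline threshold ToA: time of the first sample whose envelope
    exceeds a fixed fraction of the strongest peak, so an early direct
    path weaker than later multipath can still be detected."""
    env = np.abs(r)
    idx = int(np.argmax(env >= thresh_ratio * env.max()))
    return idx / fs
```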


Journal ArticleDOI
TL;DR: Based on this LCP reformulation, the linear convergence of the popular distributed iterative waterfilling algorithm (IWFA) is established for an arbitrary symmetric interference environment and for certain asymmetric channel conditions with any number of users.
Abstract: We present an equivalent linear complementarity problem (LCP) formulation of the noncooperative Nash game resulting from the DSL power control problem. Based on this LCP reformulation, we establish the linear convergence of the popular distributed iterative waterfilling algorithm (IWFA) for an arbitrary symmetric interference environment and for certain asymmetric channel conditions with any number of users. In the case of symmetric interference crosstalk coefficients, we show that the users of IWFA, in fact, unknowingly but willingly cooperate to minimize a common quadratic cost function whose gradient measures the received signal power from all users. This is surprising since the DSL users in the IWFA have no intention to cooperate, as each maximizes its own rate to reach a Nash equilibrium. In the case of asymmetric coefficients, the convergence of the IWFA is due to a contraction property of the iterates. In addition, the LCP reformulation enables us to solve the DSL power control problem under no restrictions on the interference coefficients using existing LCP algorithms, for example, Lemke's method. Indeed, we use the latter method to benchmark the empirical performance of IWFA in the presence of strong crosstalk interference.

226 citations
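
A minimal sketch of distributed iterative waterfilling: each user in turn treats all other users' signals as noise and waterfills its own power budget across the tones. The bisection water-level search and all parameter names are illustrative assumptions; the paper's LCP machinery is not reproduced here.

```python
import numpy as np

def iwfa(H, noise, budget, sweeps=50):
    """Iterative waterfilling. H[k, i, j]: crosstalk gain from user j to
    user i on tone k; noise: background noise power per tone; budget:
    per-user total power. Returns p[k, i], the power of user i on tone k."""
    K, N, _ = H.shape
    p = np.full((K, N), budget / K)              # flat initial allocation
    for _ in range(sweeps):
        for i in range(N):
            interf = noise + np.einsum('kj,kj->k', H[:, i, :], p) \
                     - H[:, i, i] * p[:, i]      # interference seen by user i
            base = interf / H[:, i, i]           # inverse channel-to-noise ratio
            lo, hi = 0.0, base.max() + budget    # bisect the water level mu
            for _ in range(60):
                mu = 0.5 * (lo + hi)
                if np.maximum(mu - base, 0.0).sum() < budget:
                    lo = mu
                else:
                    hi = mu
            p[:, i] = np.maximum(mu - base, 0.0)
    return p
```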


Journal ArticleDOI
TL;DR: A novel algorithm for image fusion from irregularly sampled data based on the framework of normalized convolution, in which the local signal is approximated through a projection onto a subspace spanned by polynomial basis functions.
Abstract: We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to a local Taylor series expansion. Unlike the traditional framework, however, the window function of adaptive NC is adapted to local linear structures. This leads to more samples of the same modality being gathered for the analysis, which in turn improves signal-to-noise ratio and reduces diffusion across discontinuities. A robust signal certainty is also adapted to the sample intensities to minimize the influence of outliers. Excellent fusion capability of adaptive NC is demonstrated through an application of super-resolution image reconstruction.

211 citations
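
The zeroth-order case of normalized convolution, a certainty-weighted local average, fits in a few lines and conveys why the framework handles irregular samples naturally; the paper's adaptive, polynomial-basis version builds on this. A sketch with assumed names:

```python
import numpy as np
from scipy.ndimage import convolve

def normalized_convolution(samples, certainty, window):
    """Zeroth-order normalized convolution. Missing samples get
    certainty 0 and are interpolated from their neighborhoods; the
    adaptive NC of the paper additionally shapes `window` along local
    linear structures."""
    num = convolve(samples * certainty, window, mode='nearest')
    den = convolve(certainty, window, mode='nearest')
    return num / np.maximum(den, 1e-12)
```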


Journal ArticleDOI
TL;DR: It is shown how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and its suitability is validated by comparing it to other methods described in the literature.
Abstract: Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from nonline-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measurements in location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the literature. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measurements suffered from NLOS or other coarse errors.

159 citations
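
The least-median-of-squares idea can be sketched as a RANSAC-style search: solve for a position on many random minimal subsets of measurements and keep the candidate whose median squared range residual is smallest, so up to roughly half of the ranges may be NLOS-corrupted without hijacking the estimate. The helper names below are hypothetical.

```python
import numpy as np

def _trilaterate(a, r):
    """Closed-form 2D position from 3 anchors by differencing squared
    range equations (a standard linearization)."""
    A = 2.0 * (a[1:] - a[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2))
    try:
        return np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        return None

def lmeds_position(anchors, ranges, trials=200, seed=0):
    """Keep the subset solution with the least median of squared residuals."""
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(trials):
        sub = rng.choice(len(ranges), size=3, replace=False)
        x = _trilaterate(anchors[sub], ranges[sub])
        if x is None:
            continue
        med = np.median((np.linalg.norm(anchors - x, axis=1) - ranges) ** 2)
        if med < best_med:
            best, best_med = x, med
    return best
```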


Journal ArticleDOI
TL;DR: A study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result, using Vinet's measure (correct classification rate) to compare the behavior of the different criteria.
Abstract: We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute some statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of unsupervised evaluation, and then we compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present the experimental results on the segmentation evaluation of a few gray-level natural images.

136 citations
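
As a taste of what such criteria compute, here is one simple unsupervised criterion, mean within-region gray-level variance (lower means more homogeneous regions); it is an illustrative example of the genre, not necessarily one of the six criteria compared in the paper.

```python
import numpy as np

def intra_region_variance(img, labels):
    """Mean within-region gray-level variance of a segmentation: each
    region's pixel variance, averaged over all regions in the label map."""
    return float(np.mean([img[labels == r].var() for r in np.unique(labels)]))
```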


Journal ArticleDOI
TL;DR: It is demonstrated that the finite Heisenberg-Weyl groups provide a unified basis for the construction of useful waveforms/sequences for radar, communications, and the theory of error-correcting codes.
Abstract: We investigate the theory of the finite Heisenberg-Weyl group in relation to the development of adaptive radar and to the construction of spreading sequences and error-correcting codes in communications. We contend that this group can form the basis for the representation of the radar environment in terms of operators on the space of waveforms. We also demonstrate, following recent developments in the theory of error-correcting codes, that the finite Heisenberg-Weyl groups provide a unified basis for the construction of useful waveforms/sequences for radar, communications, and the theory of error-correcting codes.

136 citations


Journal ArticleDOI
TL;DR: This paper boosts the BER performance of the BLE by designing a receiver window specially tailored to the band LDL factorization, and designs an MMSE block decision-feedback equalizer (BDFE) that can be modified to support receiver windowing.
Abstract: Recently, several approaches have been proposed for the equalization of orthogonal frequency-division multiplexing (OFDM) signals in challenging high-mobility scenarios. Among them, a minimum mean-squared error (MMSE) block linear equalizer (BLE), based on a band LDL factorization, is particularly attractive for its good tradeoff between performance and complexity. This paper extends this approach in two directions. First, we boost the BER performance of the BLE by designing a receiver window specially tailored to the band LDL factorization. Second, we design an MMSE block decision-feedback equalizer (BDFE) that can be modified to support receiver windowing. All the proposed banded equalizers share a similar computational complexity, which is linear in the number of subcarriers. Simulation results show that the proposed receiver architectures are effective in reducing the BER performance degradation caused by the intercarrier interference (ICI) generated by time-varying channels. We also consider a basis expansion model (BEM) channel estimation approach, to establish its impact on the BER performance of the proposed banded equalizers.

Journal ArticleDOI
TL;DR: Instrumental performance evaluations in a real environment with multiple speech sources indicate that the proposed computationally efficient spectral weighting system can achieve significant attenuation of speech interferers while maintaining high speech quality for the target signal.
Abstract: In this contribution, a dual-channel input-output speech enhancement system is introduced. The proposed algorithm is an adaptation of the well-known superdirective beamformer, including postfiltering, to the binaural application. In contrast to conventional beamformer processing, the proposed system outputs enhanced stereo signals while preserving the important interaural amplitude and phase differences of the original signal. Instrumental performance evaluations in a real environment with multiple speech sources indicate that the proposed computationally efficient spectral weighting system can achieve significant attenuation of speech interferers while maintaining high speech quality for the target signal.

Journal ArticleDOI
TL;DR: The image segmentation problem is proposed to be considered as one of data clustering and, as a consequence, to use measures for comparing clusterings developed in statistics and machine learning.
Abstract: The task considered in this paper is performance evaluation of region segmentation algorithms in the ground-truth-based paradigm. Given a machine segmentation and a ground-truth segmentation, performance measures are needed. We propose to consider the image segmentation problem as one of data clustering and, as a consequence, to use measures for comparing clusterings developed in statistics and machine learning. By doing so, we obtain a variety of performance measures which have not been used before in image processing. In particular, some of these measures have the highly desired property of being a metric. Experimental results are reported on both synthetic and real data to validate the measures and compare them with others.
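
One of the clustering-comparison measures this viewpoint imports is the Rand index: the fraction of pixel pairs on which two segmentations agree about belonging to the same region or not. A sketch (assuming nonnegative integer labels):

```python
import numpy as np

def rand_index(seg_a, seg_b):
    """Rand index between two label maps viewed as clusterings of pixels,
    computed from the label contingency table rather than over all pairs."""
    a, b = seg_a.ravel().astype(int), seg_b.ravel().astype(int)
    ka, kb = a.max() + 1, b.max() + 1
    ct = np.bincount(a * kb + b, minlength=ka * kb).reshape(ka, kb)
    pairs = lambda x: (x * (x - 1) / 2).sum()      # number of item pairs
    s_both = pairs(ct)                             # together in both maps
    s_a, s_b = pairs(ct.sum(axis=1)), pairs(ct.sum(axis=0))
    total = a.size * (a.size - 1) / 2
    return (total + 2 * s_both - s_a - s_b) / total
```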

Journal ArticleDOI
TL;DR: A learning-based, single-image super-resolution reconstruction technique using the contourlet transform, which is capable of capturing the smoothness along contours by making use of directional decompositions, and which outperforms standard interpolation techniques as well as standard (Cartesian) wavelet-based learning.
Abstract: We propose a learning-based, single-image super-resolution reconstruction technique using the contourlet transform, which is capable of capturing the smoothness along contours by making use of directional decompositions. The contourlet coefficients at finer scales of the unknown high-resolution image are learned locally from a set of high-resolution training images, the inverse contourlet transform of which recovers the super-resolved image. In effect, we learn the high-resolution representation of an oriented edge primitive from the training data. Our experiments show that the proposed approach outperforms standard interpolation techniques as well as a standard (Cartesian) wavelet-based learning, both visually and in terms of PSNR values, especially for images with arbitrarily oriented edges.

Journal ArticleDOI
TL;DR: A new methodology for the floating-to-fixed point conversion is proposed for software implementations to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint.
Abstract: Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
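
The core object such a methodology optimizes, a fixed-point format, behaves as sketched below: a signed Qm.n value with m integer and n fractional bits, where rounding and saturation introduce the accuracy loss the analytical model evaluates. This is a minimal illustration of the representation only (the methodology itself automates the per-variable choice of m and n); the function name is an assumption.

```python
def quantize_qmn(x, int_bits, frac_bits):
    """Round a float to a signed fixed-point Qm.n value (int_bits integer
    bits including sign, frac_bits fractional bits) with saturation,
    returning the nearest representable value as a float."""
    scale = 1 << frac_bits
    lo = -(1 << (int_bits + frac_bits - 1))
    hi = (1 << (int_bits + frac_bits - 1)) - 1
    return max(lo, min(hi, round(x * scale))) / scale
```

For example, quantize_qmn(0.7, 1, 7) maps 0.7 to 90/128 = 0.703125 in a Q1.7 format, and the 0.003125 difference is exactly the kind of error an analytical accuracy model accounts for.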

Journal ArticleDOI
TL;DR: The proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation and provides source localization accuracy superior to the standard spherical and linear intersection techniques.
Abstract: In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
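
A sketch of the central update, with the TDOA vector as the observation of an extended Kalman filter whose state is the speaker position. The speed of sound, the random-walk motion model, and all names are assumptions, and the paper's full filter includes refinements not shown here.

```python
import numpy as np

C = 343.0  # assumed speed of sound in m/s

def ekf_tdoa_step(x, P, tdoas, pairs, mics, Q, R):
    """One EKF predict/update with measured TDOAs. x: position estimate,
    P: its covariance, pairs: microphone index pairs matching `tdoas`,
    mics: microphone positions, Q/R: process and measurement covariances."""
    P = P + Q                                       # random-walk prediction
    d = np.linalg.norm(x - mics, axis=1)            # distances to each mic
    h = np.array([(d[i] - d[j]) / C for i, j in pairs])
    H = np.array([((x - mics[i]) / d[i] - (x - mics[j]) / d[j]) / C
                  for i, j in pairs])               # observation Jacobian
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (np.asarray(tdoas) - h)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```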

Journal ArticleDOI
TL;DR: A flexible testbed developed to examine MIMO algorithms and channel models described in the literature by transmitting data at 2.45 GHz through real, physical channels, supporting simultaneously four transmit and four receive antennas.
Abstract: While the field of MIMO transmission has been explored over the past decade mainly theoretically, relatively few results exist on how these transmissions perform over realistic, imperfect channels. The reason for this is that measurement equipment is expensive, difficult to obtain, and often inflexible when a multitude of transmission parameters are of interest. This paper presents a flexible testbed developed to examine MIMO algorithms and channel models described in the literature by transmitting data at 2.45 GHz through real, physical channels, supporting simultaneously four transmit and four receive antennas. Operation is performed directly from Matlab, allowing for a cornucopia of real-world experiments with minimum effort. Examples measuring bit error rates on space-time block codes are provided in the paper.

Journal ArticleDOI
TL;DR: This paper studies the Cramér-Rao lower bound (CRB) for two kinds of localization based on noisy range measurements and derives lower and upper bounds on the CRB which can be computed using only local information.
Abstract: The localization problem is fundamentally important for sensor networks. This paper, based on "Estimation bounds for localization" by the authors (2004 © IEEE), studies the Cramér-Rao lower bound (CRB) for two kinds of localization based on noisy range measurements. The first is anchored localization, in which the estimated positions of at least 3 nodes are known in global coordinates. We show some basic invariances of the CRB in this case and derive lower and upper bounds on the CRB which can be computed using only local information. The second is anchor-free localization, where no absolute positions are known. Although the Fisher information matrix is singular, a CRB-like bound exists on the total estimation variance. Finally, for both cases we discuss how the bounds scale to large networks under different models of wireless signal propagation.
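
For the anchored case with independent Gaussian range noise, the single-node Fisher information has a compact closed form, which makes the bound easy to compute. The sketch below covers that textbook special case only, not the paper's network-wide or anchor-free bounds.

```python
import numpy as np

def crb_anchored(x, anchors, sigma2):
    """CRB on the position-error variance (trace of the inverse FIM) for
    one node measuring ranges to known anchors with noise variance sigma2.
    The FIM is the sum of outer products of unit bearing vectors / sigma2."""
    diff = x - anchors
    u = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    J = (u.T @ u) / sigma2
    return float(np.trace(np.linalg.inv(J)))
```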

Journal ArticleDOI
TL;DR: An MSE bound is given that depends on a new bound on approximating a Gaussian signal as a linear combination of elements of an overcomplete dictionary, and asymptotic expressions reveal a critical input signal-to-noise ratio for signal recovery.
Abstract: If a signal x is known to have a sparse representation with respect to a frame, it can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. Removing noise in this manner depends on the frame efficiently representing the signal while it inefficiently represents the noise. The mean-squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal are analyzed. First an MSE bound that depends on a new bound on approximating a Gaussian signal as a linear combination of elements of an overcomplete dictionary is given. Further analyses are for dictionaries generated randomly according to a spherically-symmetric distribution and signals expressible with single dictionary elements. Easily-computed approximations for the probability of selecting the correct dictionary element and the MSE are given. Asymptotic expressions reveal a critical input signal-to-noise ratio for signal recovery.
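
The estimation step, finding a good sparse approximation of the noisy observation over the dictionary, is commonly implemented greedily. Below is an orthogonal-matching-pursuit sketch (a standard stand-in, not the specific estimator analyzed in the paper), assuming unit-norm dictionary columns.

```python
import numpy as np

def omp_denoise(y, D, k):
    """Denoise y by its best-found k-sparse approximation over the columns
    of dictionary D via orthogonal matching pursuit."""
    resid, support = y.astype(float).copy(), []
    coef, Ds = np.zeros(0), D[:, :0]
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))  # best new atom
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)        # refit on support
        resid = y - Ds @ coef
    return Ds @ coef
```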

Journal ArticleDOI
TL;DR: This paper deals with reconstruction of nonuniformly sampled bandlimited continuous-time signals using time-varying discrete-time finite-length impulse response (FIR) filters and shows how a slight oversampling should be utilized for designing the reconstruction filters in a proper manner.
Abstract: This paper deals with reconstruction of nonuniformly sampled bandlimited continuous-time signals using time-varying discrete-time finite-length impulse response (FIR) filters. The main theme of the paper is to show how a slight oversampling should be utilized for designing the reconstruction filters in a proper manner. Based on a time-frequency function, it is shown that the reconstruction problem can be posed as one that resembles an ordinary filter design problem, both for deterministic signals and random processes. From these facts, an analytic least-squares design technique is then derived. Furthermore, for an important special case, corresponding to periodic nonuniform sampling, it is shown that the reconstruction problem can alternatively be posed as a filter bank design problem, with requirements on a distortion transfer function and a number of aliasing transfer functions. This eases the design and offers alternative practical design methods as discussed in the paper. Several design examples are included that illustrate the benefits of the proposed design techniques over previously existing techniques.

Journal ArticleDOI
TL;DR: This work addresses the dynamic super-resolution problem of reconstructing a high-quality set of monochromatic or color super-resolved images from low-quality monochromatic, color, or mosaiced frames by proposing a joint method for simultaneous SR, deblurring, and demosaicing.
Abstract: We address the dynamic super-resolution (SR) problem of reconstructing a high-quality set of monochromatic or color super-resolved images from low-quality monochromatic, color, or mosaiced frames. Our approach includes a joint method for simultaneous SR, deblurring, and demosaicing, this way taking into account practical color measurements encountered in video sequences. For the case of translational motion and common space-invariant blur, the proposed method is based on a very fast and memory-efficient approximation of the Kalman filter (KF). Experimental results on both simulated and real data are supplied, demonstrating the presented algorithms and their strength.

Journal ArticleDOI
TL;DR: Using a set of algorithms that assist in the localization and tracking of vibrational dipole sources underwater, accurate tracking of the trajectory of a moving dipole source has been demonstrated.
Abstract: An engineered artificial lateral-line system has been recently developed, consisting of a 16-element array of finely spaced MEMS hot-wire flow sensors. This represents a new class of underwater flow sensing instruments and necessitates the development of rapid, efficient, and robust signal processing algorithms. In this paper, we report on the development and implementation of a set of algorithms that assist in the localization and tracking of vibrational dipole sources underwater. Using these algorithms, accurate tracking of the trajectory of a moving dipole source has been demonstrated successfully.

Journal ArticleDOI
TL;DR: The FTCM is applied to nine test images of natural textures commonly used in other texture classification work, yielding excellent overall performance.
Abstract: A new method for supervised texture classification, denoted by frame texture classification method (FTCM), is proposed. The method is based on a deterministic texture model in which a small image block, taken from a texture region, is modeled as a sparse linear combination of frame elements. FTCM has two phases. In the design phase a frame is trained for each texture class based on given texture example images. The design method is an iterative procedure in which the representation error, given a sparseness constraint, is minimized. In the classification phase each pixel in a test image is labeled by analyzing its spatial neighborhood. This block is represented by each of the frames designed for the texture classes under consideration, and the frame giving the best representation gives the class. The FTCM is applied to nine test images of natural textures commonly used in other texture classification work, yielding excellent overall performance.

Journal ArticleDOI
TL;DR: A novel algorithm to estimate the direction and length of motion blur, using the Radon transform and fuzzy set concepts, is presented; it works highly satisfactorily for SNR dB and supports lower SNR compared with other algorithms.
Abstract: Motion blur is one of the most common causes of image degradation. Restoration of such images is highly dependent on accurate estimation of motion blur parameters. To estimate these parameters, many algorithms have been proposed. These algorithms differ in their performance, time complexity, precision, and robustness in noisy environments. In this paper, we present a novel algorithm to estimate the direction and length of motion blur, using the Radon transform and fuzzy set concepts. The most important advantage of this algorithm is its robustness and precision in noisy images. This method was tested on a wide range of different types of standard images that were degraded with different directions (between and ) and motion lengths (between and pixels). The results showed that the method works highly satisfactorily for SNR dB and supports lower SNR compared with other algorithms.
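
The direction-estimation half of the idea rests on the fact that the log power spectrum of a linearly motion-blurred image contains parallel ripples whose orientation encodes the blur angle. The sketch below scores candidate orientations by projecting the rotated spectrum, a crude Radon-style scan; the paper's method adds fuzzy-set reasoning and also estimates the blur length, neither of which is reproduced here.

```python
import numpy as np
from scipy.ndimage import rotate

def blur_direction(img, angles=None):
    """Score each candidate angle by the variance of the column-sum
    projection of the rotated log spectrum; aligned ripples give a peaky,
    high-variance projection. Returns the best-scoring angle in degrees."""
    angles = np.arange(0.0, 180.0, 1.0) if angles is None else angles
    spec = np.fft.fftshift(np.log1p(np.abs(np.fft.fft2(img))))
    scores = [rotate(spec, a, reshape=False, order=1).sum(axis=0).var()
              for a in angles]
    return float(angles[int(np.argmax(scores))])
```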

Journal ArticleDOI
TL;DR: A cost-effective microphone dish concept (microphone array with many concentric rings) is presented that can provide directional and accurate acquisition of bird sounds and can simultaneously pick up bird sounds from different directions.
Abstract: This paper presents a novel bird monitoring and recognition system in noisy environments. The project objective is to avoid bird strikes to aircraft. First, a cost-effective microphone dish concept (microphone array with many concentric rings) is presented that can provide directional and accurate acquisition of bird sounds and can simultaneously pick up bird sounds from different directions. Second, direction-of-arrival (DOA) and beamforming algorithms have been developed for the circular array. Third, an efficient recognition algorithm is proposed which uses Gaussian mixture models (GMMs). The overall system is suitable for monitoring and recognition for a large number of birds. Fourth, a hardware prototype has been built and initial experiments demonstrated that the array can acquire and classify birds accurately.
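
The recognition stage maps naturally onto off-the-shelf Gaussian mixture tooling. The sketch below trains one GMM per species on frame-level features (e.g., MFCCs) and labels a recording by the highest average log-likelihood; the component count, covariance type, and all names are assumptions, not the paper's settings.

```python
from sklearn.mixture import GaussianMixture

def train_species_gmms(features_by_species, n_components=8, seed=0):
    """Fit one diagonal-covariance GMM per species; each dict value is an
    (n_frames, n_features) array of training feature vectors."""
    return {name: GaussianMixture(n_components, covariance_type='diag',
                                  random_state=seed).fit(frames)
            for name, frames in features_by_species.items()}

def classify_recording(models, frames):
    """Return the species whose GMM gives the highest mean log-likelihood."""
    return max(models, key=lambda name: models[name].score(frames))
```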

Journal ArticleDOI
TL;DR: A new evaluation methodology and framework in which edge detection is evaluated through boundary detection, that is, the likelihood of retrieving the full object boundaries from the edge-detection output; this likelihood reflects the performance of edge detection in many applications.
Abstract: Edge detection has been widely used in computer vision and image processing. However, the performance evaluation of edge-detection results is still a challenging problem. A major dilemma in edge-detection evaluation is the difficulty of balancing objectivity and generality: a general-purpose edge-detection evaluation independent of specific applications is usually not well defined, while an evaluation on a specific application has weak generality. Aiming at addressing this dilemma, this paper presents a new evaluation methodology and framework in which edge detection is evaluated through boundary detection, that is, the likelihood of retrieving the full object boundaries from the edge-detection output. Such a likelihood, we believe, reflects the performance of edge detection in many applications, since boundary detection is the direct and natural goal of edge detection. In this framework, we use the newly developed ratio-contour algorithm to group the detected edges into closed boundaries. We also collect a large data set (1030) of real images with unambiguous ground-truth boundaries for evaluation. Five edge detectors (Sobel, LoG, Canny, Rothwell, and Edison) are evaluated in this paper and we find that the current edge-detection performance still has scope for improvement by choosing appropriate detectors and detector parameters.

Journal ArticleDOI
TL;DR: This paper proposes an OSFB-based channel coder for a correlated additive Gaussian noise channel, of which the noise covariance matrix is assumed to be known, and develops a design for the decoder's synthesis filter bank to minimise the noise power in the decoded signal.
Abstract: Oversampled filter banks (OSFBs) have been considered for channel coding, since their redundancy can be utilised to permit the detection and correction of channel errors. In this paper, we propose an OSFB-based channel coder for a correlated additive Gaussian noise channel, of which the noise covariance matrix is assumed to be known. Based on a suitable factorisation of this matrix, we develop a design for the decoder's synthesis filter bank in order to minimise the noise power in the decoded signal, subject to admitting perfect reconstruction through paraunitarity of the filter bank. We demonstrate that this approach can lead to a significant reduction of the noise interference by exploiting both the correlation of the channel and the redundancy of the filter banks. Simulation results providing some insight into these mechanisms are provided.

Journal ArticleDOI
TL;DR: This work proposes and analyzes the use of feedforward delay estimation techniques in order to improve the accuracy of the delay estimation in severe multipath scenarios and extends the techniques previously proposed in the context of wideband CDMA delay estimation to the BOC-modulated signals.
Abstract: The estimation with high accuracy of the line-of-sight delay is a prerequisite for all global navigation satellite systems. The delay locked loops and their enhanced variants are the structures of choice for commercial GNSS receivers, but their performance in severe multipath scenarios is still rather limited. The new satellite positioning system proposals specify higher code-epoch lengths compared to the traditional GPS signal and the use of a new modulation, the binary offset carrier (BOC) modulation, which triggers new challenges in the delay tracking stage. We propose and analyze here the use of feedforward delay estimation techniques in order to improve the accuracy of the delay estimation in severe multipath scenarios. First, we give an extensive review of feedforward delay estimation techniques for CDMA signals in fading channels, taking into account the impact of BOC modulation. Second, we extend the techniques previously proposed by the authors in the context of wideband CDMA delay estimation (e.g., Teager-Kaiser and the projection onto convex sets) to BOC-modulated signals. These techniques are presented as possible alternatives to the feedback tracking loops. Particular attention is paid to scenarios with closely spaced paths. We also discuss how these feedforward techniques can be implemented via DSPs.
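
Of the feedforward tools mentioned, the discrete Teager-Kaiser energy operator is the simplest to state: applied to a correlation function, it sharpens closely spaced path components. A minimal sketch:

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator,
    psi(x[n]) = x[n]^2 - x[n-1] * x[n+1], with zeroed endpoints."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    y[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return y
```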

Journal ArticleDOI
TL;DR: The proposed Wyner-Ziv scalable (WZS) coder can achieve higher coding efficiency, by selectively exploiting the high quality reconstruction of the previous frame in the enhancement layer coding of the current frame, thus providing improved temporal prediction as compared to MPEG-4 FGS.
Abstract: This paper proposes a practical video coding framework based on distributed source coding principles, with the goal of achieving efficient and low-complexity scalable coding. Starting from a standard predictive coder as base layer (such as the MPEG-4 baseline video coder in our implementation), the proposed Wyner-Ziv scalable (WZS) coder can achieve higher coding efficiency, by selectively exploiting the high quality reconstruction of the previous frame in the enhancement layer coding of the current frame. This creates a multi-layer Wyner-Ziv prediction "link," connecting the same bitplane level between successive frames, thus providing improved temporal prediction as compared to MPEG-4 FGS, while keeping encoder complexity reasonable. Since the temporal correlation varies in time and space, a block-based adaptive mode selection algorithm is designed for each bitplane, so that it is possible to switch between different coding modes. Experimental results show improvements in coding efficiency of 3-4.5 dB over MPEG-4 FGS for video sequences with high temporal correlation.

Journal ArticleDOI
TL;DR: This study shows that in common TDOA-based localization scenarios—where the microphone array has small interelement spread relative to the source position—the elevation and azimuth angles can be accurately estimated, whereas the Cartesian coordinates as well as the range are poorly estimated.
Abstract: A dual-step approach for speaker localization based on a microphone array is addressed in this paper. In the first stage, which is not the main concern of this paper, the time difference between arrivals of the speech signal at each pair of microphones is estimated. These readings are combined in the second stage to obtain the source location. In this paper, we focus on the second stage of the localization task. In this contribution, we propose to exploit the speaker's smooth trajectory for improving the current position estimate. Three localization schemes, which use the temporal information, are presented. The first is a recursive form of the Gauss method. The other two are extensions of the Kalman filter to the nonlinear problem at hand, namely, the extended Kalman filter and the unscented Kalman filter. These methods are compared with other algorithms, which do not make use of the temporal information. An extensive experimental study demonstrates the advantage of using the spatial-temporal methods. To gain some insight on the obtainable performance of the localization algorithm, an approximate analytical evaluation, verified by an experimental study, is conducted. This study shows that in common TDOA-based localization scenarios--where the microphone array has small interelement spread relative to the source position--the elevation and azimuth angles can be accurately estimated, whereas the Cartesian coordinates as well as the range are poorly estimated.