
Showing papers in "Signal, Image and Video Processing in 2007"


Journal ArticleDOI
TL;DR: The sampling theorem for OLCT signals presented here serves as a unification and generalization of previously developed sampling theorems.
Abstract: The offset linear canonical transform (OLCT) is the name of a parameterized continuum of transforms which includes, as particular cases, the most widely used linear transforms in engineering, such as the Fourier transform (FT), the fractional Fourier transform (FRFT), the Fresnel transform (FRST), frequency modulation, time shifting, time scaling, chirping, and others. The OLCT therefore provides a unified framework for studying the behavior of many practical transforms and system responses. In this paper, the sampling theorem for the OLCT is considered. The sampling theorem for OLCT signals presented here serves as a unification and generalization of previously developed sampling theorems.
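
For orientation, the offset-free linear canonical transform kernel in its common form (normalization and offset conventions vary across the literature; this is a standard statement, not necessarily the paper's notation):

$$ L_{(a,b,c,d)}\{f\}(u)=\frac{1}{\sqrt{i2\pi b}}\int_{-\infty}^{\infty} f(t)\,e^{\frac{i}{2b}\left(a t^{2}-2ut+d u^{2}\right)}\,dt,\qquad ad-bc=1,\quad b\neq 0. $$

Choosing $(a,b,c,d)=(0,1,-1,0)$ recovers the FT up to a constant phase factor, $(\cos\theta,\sin\theta,-\sin\theta,\cos\theta)$ gives the FRFT of angle $\theta$, and $(1,b,0,1)$ gives the Fresnel transform; the OLCT augments these four parameters with a time offset $\tau$ and a frequency offset $\eta$, whence the unification of sampling theorems claimed above.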

100 citations


Journal ArticleDOI
TL;DR: A robust model for tracking in video sequences with non-static backgrounds permits tracking that is robust to background distractions and occlusions, and is shown to be effective for object tracking in color, infrared (IR), and fused color-infrared sequences.
Abstract: In this paper, we propose a robust model for tracking in video sequences with non-static backgrounds. The object boundaries are tracked on each frame of the sequence by minimizing an energy functional that combines region, boundary and shape information. The region information is formulated by minimizing the symmetric Kullback–Leibler (KL) distance between the local and global statistics of the objects versus the background. The boundary information is formulated using a color and texture edge map of the video frames. The shape information is calculated adaptively to the dynamics of the moving objects and permits tracking that is robust to background distractions and occlusions. Minimization of the energy functional is implemented using the level set method. We show the effectiveness of the approach for object tracking in color, infrared (IR), and fused color-infrared sequences.
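
As an illustration of the region term, here is a minimal numpy sketch of the symmetric KL distance between object and background intensity histograms (the binning and the toy frame are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler distance between two histograms."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))) +
                 np.sum(q * np.log((q + eps) / (p + eps))))

# Toy region statistics: object mask versus background on one frame.
rng = np.random.default_rng(0)
frame = rng.random((240, 320))                # stand-in for a video frame
mask = np.zeros(frame.shape, dtype=bool)      # stand-in for the tracked region
mask[80:160, 100:220] = True

hist_obj, _ = np.histogram(frame[mask], bins=64, range=(0.0, 1.0))
hist_bg, _ = np.histogram(frame[~mask], bins=64, range=(0.0, 1.0))
print(symmetric_kl(hist_obj.astype(float), hist_bg.astype(float)))
```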

29 citations


Journal ArticleDOI
TL;DR: A method is proposed to combine, using an auto-associative neural network (AANN) model, the partial evidences obtained for each representation into a face verification decision; results show that the system performs better with the potential field representation than with the edge gradient or edge orientation representations.
Abstract: In this paper we discuss the significance of the representation of images for face verification. We consider three different representations, namely, edge gradient, edge orientation and a potential field derived from the edge gradient. These representations are examined in the context of face verification using a specific type of correlation filter, called the minimum average correlation energy (MACE) filter. The different representations are derived using one-dimensional (1-D) processing of the image. The 1-D processing provides multiple partial evidences for a given face image, one for each direction of the 1-D processing. Separate MACE filters are used for deriving each partial evidence. We propose a method to combine the partial evidences obtained for each representation using an auto-associative neural network (AANN) model, to arrive at a decision for face verification. Results show that the performance of the system using the potential field representation is better than that using the edge gradient or edge orientation representations. Also, the potential field representation derived from the edge gradient is observed to be less sensitive to variations in illumination compared to the gray level representation of images.
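
The MACE filter at the core of the verification stage has a closed-form frequency-domain solution; a numpy sketch under the standard formulation (the 1-D processing and the AANN combination stage are not reproduced here):

```python
import numpy as np

def mace_filter(images):
    """Closed-form MACE filter in the frequency domain:
    h = D^{-1} X (X^H D^{-1} X)^{-1} c, where the columns of X are the
    vectorized 2-D DFTs of the training images, D holds their average power
    spectrum on its diagonal, and c prescribes unit correlation peaks."""
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)  # d x n
    d = np.mean(np.abs(X) ** 2, axis=1)                               # diag(D)
    Dinv_X = X / d[:, None]
    A = X.conj().T @ Dinv_X                                           # X^H D^-1 X
    c = np.ones(len(images), dtype=complex)
    h = Dinv_X @ np.linalg.solve(A, c)
    return h.reshape(images[0].shape)

rng = np.random.default_rng(5)
train = [rng.random((64, 64)) for _ in range(3)]
h = mace_filter(train)
corr = np.real(np.fft.ifft2(np.fft.fft2(train[0]) * np.conj(h)))
peak = corr[0, 0] * corr.size       # undo numpy's 1/N ifft normalization
print(round(float(peak), 6))        # ~1.0: the prescribed correlation peak
```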

24 citations


Journal ArticleDOI
TL;DR: Simulation results with various video sequences show that the proposed fast mode decision algorithm can accelerate the encoding speed significantly, with only negligible PSNR loss or bit rate increment.
Abstract: Compared with other existing video coding standards, H.264/AVC achieves a significant improvement in compression performance. A robust criterion named rate distortion optimization (RDO) is employed to select the optimal coding modes and motion vectors for each macroblock (MB); it achieves a high compression ratio but unfortunately leads to a great increase in complexity and computational load. In this paper, a fast mode decision algorithm for H.264/AVC intra prediction, based on an integer transform and an adaptive threshold, is proposed. Before the intra prediction, integer transform operations on the original image are executed to find the directions of local textures. According to the detected direction, only a small subset of the possible intra prediction modes is tested in the RDO calculation at the first step. If the minimum mean absolute error (MMAE) of the reconstructed block corresponding to the best mode is smaller than an adaptive threshold, which depends on the quantization parameter (QP), the RDO calculation is terminated. Otherwise, more possible modes are tested. The adaptive threshold aims to balance the compression performance and the computational load. Simulation results with various video sequences show that the proposed fast mode decision algorithm can accelerate the encoding speed significantly, with only negligible PSNR loss or bit rate increment.
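
A schematic of the early-termination logic described above (the candidate-mode selection and the linear threshold function are hypothetical stand-ins; the paper derives candidates from an integer transform of the block and makes the threshold adaptive in QP):

```python
import numpy as np

def fast_intra_mode_decision(block, qp, mode_cost, texture_modes,
                             threshold=lambda qp: 0.5 * qp):
    """Threshold-based early termination for intra mode decision (sketch).

    mode_cost(block, mode) -> (rd_cost, mmae); texture_modes(block) maps the
    dominant texture direction to a short candidate list.  The linear
    threshold(qp) is a hypothetical stand-in for the paper's adaptive,
    QP-dependent threshold."""
    best_mode, best_cost, best_mmae = None, float("inf"), float("inf")
    candidates = list(texture_modes(block))
    for mode in candidates:                       # reduced search first
        cost, mmae = mode_cost(block, mode)
        if cost < best_cost:
            best_mode, best_cost, best_mmae = mode, cost, mmae
    if best_mmae < threshold(qp):                 # early termination
        return best_mode
    for mode in set(range(9)) - set(candidates):  # fall back to full search
        cost, _ = mode_cost(block, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode

# Toy usage with stub cost and candidate functions.
rng = np.random.default_rng(1)
blk = rng.random((4, 4))
stub_cost = lambda b, m: (abs(m - 3) + b.mean(), float(abs(m - 3)))
print(fast_intra_mode_decision(blk, qp=28, mode_cost=stub_cost,
                               texture_modes=lambda b: [2, 3, 4]))
```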

24 citations


Journal ArticleDOI
TL;DR: It is proved that the decision directed modulus (DDM) cost function has no local minima in the combined channel-equalizer system impulse response.
Abstract: In this paper, new decision directed algorithms for blind equalization of communication channels are presented. These algorithms use information about the last decided symbol to improve the performance of the constant modulus algorithm (CMA). The main proposed technique, the so-called decision directed modulus algorithm (DDMA), extends the CMA to non-CM modulations. Assuming correct decisions, it is proved that the decision directed modulus (DDM) cost function has no local minima in the combined channel-equalizer system impulse response. Additionally, a relationship between the Wiener and DDM minima is established. The other proposed algorithms can be viewed as modifications of the DDMA. They are divided into two families: stochastic gradient algorithms and recursive least squares (RLS) algorithms. Simulation results allow us to compare the performance of the proposed algorithms and to conclude that they outperform well-known methods.
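
A sketch of the core update, under the natural reading that the DDM error keeps the CMA form with the fixed dispersion constant replaced by the squared modulus of the last decided symbol (toy data; not the authors' exact recursion):

```python
import numpy as np

def ddma_step(w, x, mu, constellation):
    """One stochastic-gradient update of a decision-directed modulus
    equalizer (sketch).  Where CMA drives |y|^2 toward a fixed dispersion
    constant R2, the decision-directed variant replaces R2 with the squared
    modulus of the nearest constellation point, extending the idea to
    non-constant-modulus alphabets such as 16-QAM."""
    y = np.vdot(w, x)                                  # equalizer output w^H x
    s_hat = constellation[np.argmin(np.abs(constellation - y))]
    e = y * (np.abs(y) ** 2 - np.abs(s_hat) ** 2)      # DDM error
    return w - mu * e.conj() * x

# Toy usage on a unit-power 16-QAM alphabet.
re = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = np.array([a + 1j * b for a in re for b in re]) / np.sqrt(10)
rng = np.random.default_rng(2)
w = np.zeros(7, dtype=complex); w[3] = 1.0             # center-spike init
x = rng.normal(size=7) + 1j * rng.normal(size=7)
print(np.round(ddma_step(w, x, mu=1e-3, constellation=qam16), 4))
```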

23 citations


Journal ArticleDOI
TL;DR: A modified scheme based on multiresolution fusion is proposed to process monochrome visual and infrared images, generating a composite image that retains the most important information from the source images for human perception.
Abstract: In night vision applications, visual and infrared images are often fused for improved situational or environmental awareness. Fusion algorithms can generate a composite image that retains the most important information from the source images for human perception. The state of the art includes manipulations in color spaces and pixel-level fusion with multiresolution algorithms. In this paper, a modified scheme based on multiresolution fusion is proposed to process monochrome visual and infrared images. The visual image is first enhanced based on the corresponding infrared image. The final result is obtained by fusing the enhanced image with the visual image. The process highlights the features of the visual image, which are the most suitable for human perception.
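
A toy sketch of pixel-level multiresolution fusion of the kind referred to above (difference-of-Gaussians levels with a max-absolute selection rule; the paper's infrared-guided enhancement step is not reproduced):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_multires(vis, ir, levels=4, sigma=2.0):
    """Toy multiresolution fusion: difference-of-Gaussians detail levels are
    merged with a max-absolute rule and the coarsest residuals are averaged."""
    base_v, base_i = vis.astype(float), ir.astype(float)
    fused = np.zeros_like(base_v)
    for _ in range(levels):
        low_v, low_i = gaussian_filter(base_v, sigma), gaussian_filter(base_i, sigma)
        det_v, det_i = base_v - low_v, base_i - low_i
        fused += np.where(np.abs(det_v) >= np.abs(det_i), det_v, det_i)
        base_v, base_i = low_v, low_i
    return fused + 0.5 * (base_v + base_i)

rng = np.random.default_rng(2)
print(fuse_multires(rng.random((128, 128)), rng.random((128, 128))).shape)
```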

21 citations


Journal ArticleDOI
TL;DR: A discrete version of the level set formulation of a modified Mumford and Shah energy functional is investigated, and the optimal image segmentation is directly obtained through a nonlinear finite difference equation.
Abstract: Models and algorithms in image processing are usually defined in the continuum and then applied to discrete data, that is, the signal samples over a lattice. In particular, setting up the segmentation problem in the continuum allows a fine formulation, basically through either a variational approach or a moving-interfaces approach. In either case, the image segmentation is obtained as the steady-state solution of a nonlinear PDE. Nevertheless, the application to real data requires discretization schemes in which some of the basic geometric features of the image have a loose meaning. In this paper, a discrete version of the level set formulation of a modified Mumford and Shah energy functional is investigated, and the optimal image segmentation is obtained directly through a nonlinear finite difference equation. The typical characteristics of a segmentation, such as the area of its component domains and the length of its boundary, are all defined in the discrete context, thus obtaining a more realistic description of the available data. The existence and uniqueness of the optimal solution is proved in the class of piecewise constant functions, with no restrictions on the nature of the multiple points of the segmentation boundary. Compared to a standard segmentation procedure in the continuum, the proposed algorithm generally provides a more accurate segmentation, at a much lower computational cost.
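
For reference, the piecewise constant Mumford-Shah energy that such formulations discretize (a standard statement, not the paper's exact modified functional):

$$ E\big(\{\Omega_i\},\{c_i\}\big)=\sum_i\int_{\Omega_i}\big(u_0(x)-c_i\big)^{2}\,dx+\nu\,\operatorname{Length}(\partial\Omega). $$

In the discrete setting, the area of each $\Omega_i$ becomes a pixel count and the boundary length a count of inter-pixel edges, which is the kind of reinterpretation the paper develops.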

19 citations


Journal ArticleDOI
TL;DR: A novel technique is proposed for ameliorating the misconvergence of the NMCFLMS algorithm in blind channel identification (BCI) with noise, by attaching a spectral constraint to the adaptation rule.
Abstract: This paper deals with the blind adaptive identification of single-input multi-output (SIMO) finite impulse response acoustic channels from noise-corrupted observations. The normalized multichannel frequency-domain least-mean-squares (NMCFLMS) algorithm [1] is known to be a very effective and efficient technique for identifying such channels when noise effects can be ignored. It, however, misconverges in the presence of noise [2]. In this paper, we present an analysis of noise effects on the NMCFLMS algorithm and propose a novel technique that ameliorates this misconvergence in blind channel identification (BCI) with noise by attaching a spectral constraint to the adaptation rule. Experimental results demonstrate that the robustness of the NMCFLMS algorithm for BCI can be significantly improved using such a constraint.
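
The identifiability behind this family of algorithms rests on the cross-relation between channel outputs; a minimal sketch showing that the residual vanishes for the true channels and grows with additive noise (toy signals; not the NMCFLMS update itself):

```python
import numpy as np

def cross_relation_error(x1, x2, h1, h2):
    """Cross-relation residual x1*h2 - x2*h1 ('*' is convolution): it vanishes
    for the true channel pair in the noise-free case, and its minimization is
    what blind SIMO identification schemes of the NMCFLMS family perform."""
    return np.convolve(x1, h2) - np.convolve(x2, h1)

rng = np.random.default_rng(3)
s = rng.normal(size=500)                           # unknown common source
h1, h2 = rng.normal(size=8), rng.normal(size=8)    # two acoustic channels
x1, x2 = np.convolve(s, h1), np.convolve(s, h2)
print(np.linalg.norm(cross_relation_error(x1, x2, h1, h2)))   # ~0
n1 = x1 + 0.1 * rng.normal(size=x1.size)
n2 = x2 + 0.1 * rng.normal(size=x2.size)
print(np.linalg.norm(cross_relation_error(n1, n2, h1, h2)))   # grows with noise
```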

18 citations


Journal ArticleDOI
TL;DR: Four popular biorthogonal wavelet filter banks are systematically verified to exhibit performance competitive with several state-of-the-art BWFBs for image compression, while requiring lower computational costs.
Abstract: We construct biorthogonal wavelet filter banks (BWFBs) having linear phase and an arbitrary multiplicity of vanishing moments (VMs). A novel parametrized construction technique, based on the theory of Diophantine equations, is presented, and explicit one-parameter expressions for the BWFBs are derived. Using these expressions, any one-parameter family of BWFBs with different VMs can be constructed; ten families, namely the 5/7, 6/6, 9/7, 6/10, 5/11, 10/6, 13/7, 6/14, 17/11, and 10/18 families, are constructed here. The free parameter can be used to optimize the resulting BWFBs with respect to other criteria. In particular, in each family, three specific BWFBs with attractive features are obtained by adjusting the free parameter: the first has optimum coding gain and rational coefficients; the second, which also has rational coefficients, is very close to a quadrature mirror filter (QMF) bank; and the third, which has binary coefficients, can realize a multiplication-free discrete wavelet transform. In addition, four of the BWFBs are systematically verified to exhibit performance competitive with several state-of-the-art BWFBs for image compression, while requiring lower computational costs.
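
For context, the standard two-channel design equations that such constructions solve, in generic notation and up to normalization conventions (the paper's parametrization details differ):

$$ P(z)=H(z)\,\tilde H(z),\qquad P(z)+P(-z)=2, $$
$$ H(z)=\Big(\tfrac{1+z^{-1}}{2}\Big)^{N}Q(z),\qquad \tilde H(z)=\Big(\tfrac{1+z^{-1}}{2}\Big)^{\tilde N}\tilde Q(z), $$

where $H$ and $\tilde H$ are the analysis and synthesis lowpass filters and the zeros at $z=-1$ set the numbers of vanishing moments. Solving the perfect-reconstruction condition for the remaining coefficients of $Q$ and $\tilde Q$ is a Bezout (Diophantine-type) polynomial problem, which is where one-parameter solution families can arise.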

16 citations


Journal ArticleDOI
TL;DR: A satellite-imagery-based approach for selecting appropriate background and shadow models is developed, together with a Hybrid Cone-Cylinder Codebook (HC3) model that combines an adaptive, efficient background model with HSV-color-space shadow suppression in a single coherent framework.
Abstract: Accurate segmentation of foreground objects in video scenes is critical for assuring reliable performance of vision systems for object tracking and situational awareness in outdoor scenes. Most existing techniques for background modeling and shadow suppression require that a number of parameters be “hand-tuned” based on environmental conditions. This paper presents two contributions to overcome such limitations. First, we develop and demonstrate a satellite-imagery-based approach for selecting appropriate background and shadow models. It is shown that the illumination conditions (i.e., cloud cover) of a scene can be reliably inferred from visible satellite images of the local region of the camera. The second contribution of the paper is the introduction and evaluation of a Hybrid Cone-Cylinder Codebook (HC3) model, which combines an adaptive, efficient background model with HSV-color-space shadow suppression in a single coherent framework. The structure of the HC3 model allows for seamless fusion of the satellite data. We are thereby able to exploit the fact that, for example, shadows are more pronounced on sunny days than on cloudy days, allowing for more sensitive detection. The paper presents a set of experiments using day-long video sequences from an operational surveillance system testbed. The results of these experimental analyses quantitatively illustrate the benefits of using satellite imagery to inform and adaptively adjust background and shadow modeling.
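
The HSV shadow test folded into such frameworks is classically of the following form (a Cucchiara-style rule; the threshold values here are illustrative, not the paper's):

```python
import numpy as np

def hsv_shadow_mask(hsv, hsv_bg, alpha=0.4, beta=0.95, tau_s=0.15, tau_h=30.0):
    """Classic HSV shadow test: a pixel is labeled shadow when it is darker
    than the background by a bounded ratio while hue and saturation stay
    close to the background values."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    hb, sb, vb = hsv_bg[..., 0], hsv_bg[..., 1], hsv_bg[..., 2]
    ratio = v / np.maximum(vb, 1e-6)
    dh = np.abs(h - hb)
    dh = np.minimum(dh, 360.0 - dh)          # hue is circular
    return ((alpha <= ratio) & (ratio <= beta) &
            (np.abs(s - sb) <= tau_s) & (dh <= tau_h))

rng = np.random.default_rng(4)
scale = np.array([360.0, 1.0, 1.0])          # H in degrees, S and V in [0, 1]
frame = rng.random((60, 80, 3)) * scale
bg = rng.random((60, 80, 3)) * scale
print(hsv_shadow_mask(frame, bg).mean())
```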

12 citations


Journal ArticleDOI
TL;DR: A novel detection and tracking system that provides both frame-view and world-coordinate human location information, based on video from multiple synchronized and calibrated cameras with overlapping fields of view, is developed and evaluated for a seminar lecturer presenting in front of an audience inside a “smart room”.
Abstract: The paper introduces a novel detection and tracking system that provides both frame-view and world-coordinate human location information, based on video from multiple synchronized and calibrated cameras with overlapping fields of view. The system is developed and evaluated for the specific scenario of a seminar lecturer presenting in front of an audience inside a “smart room”, its aim being to track the lecturer’s head centroid in the three-dimensional (3D) space and also yield two-dimensional (2D) face information in the available camera views. The proposed approach is primarily based on a statistical appearance model of human faces by means of well-known AdaBoost-like face detectors, extended to address the head pose variation observed in the smart room scenario of interest. The appearance module is complemented by two novel components and assisted by a simple tracking drift detection mechanism. The first component of interest is the initialization module, which employs a spatio-temporal dynamic programming approach with appropriate penalty functions to obtain optimal 3D location hypotheses. The second is an adaptive subspace learning based 2D tracking scheme with a novel forgetting mechanism, introduced to reduce tracking drift and increase robustness. System performance is benchmarked on an extensive database of realistic human interaction in the lecture smart room scenario, collected as part of the European integrated project “CHIL”. The system consistently achieves excellent tracking precision, with a 3D mean tracking error of less than 16 cm, and is demonstrated to outperform four alternative tracking schemes. Furthermore, the proposed system performs relatively well in detecting frontal and near-frontal faces in the available frame views.
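
The world-coordinate step in such multi-camera systems reduces to triangulating per-view 2-D locations with the calibrated projection matrices; a minimal DLT sketch of that geometry (illustrative only, not the paper's tracker):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from its pixel projections
    in two calibrated views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # 0.5 m baseline
X_true = np.array([0.2, -0.1, 3.0, 1.0])                           # head centroid (toy)
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))                                 # ~ [0.2, -0.1, 3.0]
```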

Journal ArticleDOI
TL;DR: An energy-based evaluation model derived from the Total Variation principle is proposed; it shows a number of advantages over previous ones, such as preserving all relevant information and removing side effects like reduced contrast and sensitivity to registration error.
Abstract: A new adaptive region-based image fusion approach is proposed. To implement image segmentation, the piecewise smooth Mumford-Shah segmentation algorithm is studied and a fast, simple method is proposed to solve the energy function. Two complementary functions u+ and u− of the algorithm, regarded respectively as the objects and the background of the image, are extended to the whole image domain and computed by linear or nonlinear diffusion. The key to the algorithm is to make optimal fusion decisions for every segmented region. For this purpose, an evaluation approach is needed to measure the performance of the available fusion rules; therefore an energy-based evaluation model, derived from the Total Variation principle, is proposed. Numerical experiments demonstrate that, despite an increase in complexity, the new approach has a number of advantages over previous ones, for example the ability to preserve all relevant information and to remove side effects such as reduced contrast and sensitivity to registration error.
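
The evaluation model builds on the total variation of the image; a one-function numpy sketch of discrete isotropic TV (forward differences with replicated borders are an implementation choice here):

```python
import numpy as np

def total_variation(u):
    """Discrete isotropic total variation: sum of gradient magnitudes."""
    gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
    gy = np.diff(u, axis=0, append=u[-1:, :])   # replicated at the border
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))

img = np.zeros((64, 64)); img[:, 32:] = 1.0     # a single vertical edge
print(total_variation(img))                     # ~64: edge length x jump height
```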

Journal ArticleDOI
TL;DR: Two applications, namely text-occluded region recovery and error concealment, are presented using the global/local motion information; the method shows effectiveness and robustness against noise and motion vector loss.
Abstract: Fast global motion estimation has received much attention in video compression and analysis. In this paper, a global motion estimation method is proposed that uses randomly selected motion vector groups directly in the compressed domain. It is carried out by refining the centroid of the global motion parameters corresponding to the motion vector groups. Simulation results on different global motions show its effectiveness and robustness against noise and motion vector loss. Finally, two applications, namely text-occluded region recovery and error concealment, are presented using the global/local motion information. Experimental results show the effectiveness of the proposed method.
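
The per-group parameter fit underlying such methods can be sketched as an ordinary least-squares problem (a 6-parameter affine model is assumed here for illustration; the paper's robustness comes from refining a centroid over randomly selected groups):

```python
import numpy as np

def affine_global_motion(points, mvs):
    """Least-squares fit of a 6-parameter affine global-motion model
    (dx, dy) = A [x, y, 1]^T to block motion vectors."""
    M = np.column_stack([points, np.ones(len(points))])
    ax, *_ = np.linalg.lstsq(M, mvs[:, 0], rcond=None)
    ay, *_ = np.linalg.lstsq(M, mvs[:, 1], rcond=None)
    return ax, ay

rng = np.random.default_rng(3)
pts = rng.uniform(0, 352, size=(50, 2))               # block centers (toy)
M = np.column_stack([pts, np.ones(50)])
true_ax, true_ay = np.array([0.01, 0.0, 2.0]), np.array([0.0, 0.01, -1.0])
mvs = np.stack([M @ true_ax, M @ true_ay], axis=1) + rng.normal(0, 0.1, (50, 2))
print(affine_global_motion(pts, mvs))                 # ~ true_ax, true_ay
```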

Journal ArticleDOI
TL;DR: An improved cyclic beamforming algorithm exploiting cyclostationarity is proposed that substantially improves signal selectivity and allows an increase in resolution power.
Abstract: Modulated signals used in telecommunication are cyclostationary. This property can be exploited to improve direction of arrival (DOA) estimation performance. In this work, we propose an improved cyclic beamforming algorithm exploiting cyclostationarity. The proposed method exploits the information of both the cyclic correlation matrix and the cyclic conjugate correlation matrix with different cyclic frequencies. Compared with existing methods, simulation results show that the proposed method substantially improves signal selectivity and also allows an increase in resolution power.
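
A sketch of the statistics involved: sample estimates of the cyclic and cyclic conjugate correlations, whose matrices across sensors are what such DOA estimators operate on (the toy BPSK-like signal and parameter choices are illustrative):

```python
import numpy as np

def cyclic_correlation(x, alpha, tau=0, conjugate=False):
    """Sample estimate of the cyclic correlation (or, with conjugate=True,
    the cyclic conjugate correlation) at cycle frequency alpha and lag tau."""
    n = np.arange(len(x) - tau)
    second = x[n] if conjugate else np.conj(x[n])
    return np.mean(x[n + tau] * second * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(4)
fs, f0, N = 1000.0, 100.0, 4096
n = np.arange(N)
x = np.sign(rng.standard_normal(N)) * np.exp(2j * np.pi * f0 / fs * n)  # BPSK-like
print(abs(cyclic_correlation(x, alpha=2 * f0 / fs, conjugate=True)))    # ~1
print(abs(cyclic_correlation(x, alpha=0.123, conjugate=True)))          # ~0
```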

Journal ArticleDOI
TL;DR: This paper proposes fusion methods for tracking a single target in a sensor network using sequential Monte Carlo techniques, with both standard and cost-reference particle filtering.
Abstract: In this paper we propose fusion methods for tracking a single target in a sensor network. The sensors use sequential Monte Carlo (SMC) techniques to process the received measurements and obtain random measures of the unknown states. We apply standard particle filtering (SPF) and cost-reference particle filtering (CRPF) methods. For both types of filtering, the random measures contain particles drawn from the state space. Associated with the particles, the SPF has weights representing probability masses, while the CRPF has user-defined costs measuring the quality of the particles. Summaries of the random measures are sent to the fusion center, which combines them into a global summary. Similarly, the fusion center may send a global summary to the individual sensors, which use it for improved tracking. Through extensive simulations and comparisons with other methods, we study the performance of the proposed algorithms.
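
A toy sketch of the SPF side of the scheme: one sensor's measurement update followed by the kind of compact summary exchanged with the fusion center (the scalar state and the Gaussian mean/variance summary are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
x_true = 2.0                                   # scalar target state (toy)

particles = rng.normal(0.0, 5.0, size=1000)    # prior particles at one sensor
weights = np.full(1000, 1.0 / 1000)

z = x_true + rng.normal(0.0, 0.5)              # local noisy measurement
weights *= np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
weights /= weights.sum()

# Compact summary sent to the fusion center instead of raw measurements.
mean = np.sum(weights * particles)
var = np.sum(weights * (particles - mean) ** 2)
print(mean, var)
```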

Journal ArticleDOI
TL;DR: A method is presented for estimating the statistical properties of two well-known edge detectors, non-maxima suppression and the zero crossing of the Laplacian; the computed pdf depends explicitly on the parameters of the edge detector.
Abstract: In this paper we present a method for estimating the statistical properties of two well-known edge detectors: the non-maxima suppression and the zero crossing of the Laplacian algorithms. Assuming the data are corrupted by additive Gaussian noise, we derive the probability density function (pdf) of the detected edge. Thanks to this approach, the computed pdf depends explicitly on the parameters of the edge detector. Experimental results on real images and comparisons with Monte Carlo simulations are presented in order to characterize the performance of this method.
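
The Monte Carlo characterization used for comparison can be sketched in a few lines: repeatedly perturb an ideal step edge with Gaussian noise, detect the gradient maximum (a 1-D stand-in for non-maxima suppression), and histogram the detected location:

```python
import numpy as np

rng = np.random.default_rng(5)
edge = np.where(np.arange(64) < 32, 0.0, 1.0)      # ideal 1-D step edge at 32

hits = []
for _ in range(2000):                              # Monte Carlo over noise draws
    noisy = edge + rng.normal(0.0, 0.2, size=64)
    smooth = np.convolve(noisy, np.ones(5) / 5, mode="same")
    grad = np.gradient(smooth)
    hits.append(int(np.argmax(np.abs(grad))))      # 1-D non-maxima suppression

counts = np.bincount(hits, minlength=64)
print(counts[28:37])                               # empirical pdf of the detected edge
```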

Journal ArticleDOI
TL;DR: This paper investigates the performance of various “turbo” receivers for serially concatenated turbo codes transmitted through intersymbol interference (ISI) channels and proposes a simpler but suboptimal receiver that employs the predictive decision feedback equalizer (PDFE).
Abstract: This paper investigates the performance of various “turbo” receivers for serially concatenated turbo codes transmitted through intersymbol interference (ISI) channels. Both the inner and outer codes are assumed to be recursive systematic convolutional (RSC) codes. The optimum turbo receiver consists of an (inner) channel maximum a posteriori (MAP) decoder and a MAP decoder for the outer code. The channel MAP decoder operates on a “supertrellis” which incorporates the channel trellis and the trellis for the inner error-correcting code. This is referred to as the MAP receiver employing a SuperTrellis (STMAP). Since the complexity of the supertrellis in the STMAP receiver increases exponentially with the channel length, we propose a simpler but suboptimal receiver that employs the predictive decision feedback equalizer (PDFE). The key idea in this paper is to have the feedforward part of the PDFE outside the iterative loop and incorporate only the feedback part inside the loop. We refer to this receiver as the PDFE-STMAP. The complexity of the supertrellis in the PDFE-STMAP receiver depends on the inner code and the length of the feedback part. Investigations with Proakis B, Proakis C (both channels have spectral nulls with all zeros on the unit circle and hence cannot be converted to a minimum phase channel) and a minimum phase channel reveal that at most two feedback taps are sufficient to get the best performance. A reduced-state STMAP (RS-STMAP) receiver is also derived which employs a smaller supertrellis at the cost of performance.
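
The parenthetical claim about the test channels is easy to probe numerically: with the commonly tabulated coefficients (worth checking against the paper's), the symmetric Proakis B and C responses have zeros clustered on the unit circle, exactly on it before coefficient rounding, and correspondingly deep spectral nulls:

```python
import numpy as np

# Severe-ISI test channels as commonly tabulated.
proakis_b = np.array([0.407, 0.815, 0.407])
proakis_c = np.array([0.227, 0.460, 0.688, 0.460, 0.227])

for name, h in (("Proakis B", proakis_b), ("Proakis C", proakis_c)):
    w = np.linspace(0.0, np.pi, 2048)
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)
    print(name, "| zero moduli:", np.round(np.abs(np.roots(h)), 3),
          "| min |H(w)|:", round(float(H.min()), 4))
```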

Journal ArticleDOI
TL;DR: It is shown how the paradigm of classifier combination can be used to build a face detector that outperforms current state-of-the-art systems while remaining fast enough to be used in real-time systems.
Abstract: This paper describes a new approach to automatic frontal face detection which employs Gaussian filters as local image descriptors. We then show how the paradigm of classifier combination can be used to build a face detector that outperforms current state-of-the-art systems, while remaining fast enough to be used in real-time systems. It is based on the combination of several parallel classifiers trained on subsets of the complete training set. We report a number of results on reference datasets, using an unbiased method for comparing the detectors.
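
The combination pattern itself is simple; a sketch that trains members on subsets and averages their scores (logistic regression on toy features stands in for the paper's Gaussian-filter-based classifiers):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_parallel_ensemble(X, y, n_subsets=4, seed=0):
    """Train several classifiers on disjoint subsets of the training data;
    they are later combined by averaging their scores."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return [LogisticRegression(max_iter=1000).fit(X[part], y[part])
            for part in np.array_split(idx, n_subsets)]

def ensemble_score(members, X):
    """Combined detection score: the mean of the members' probabilities."""
    return np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
ens = train_parallel_ensemble(X, y)
print(ensemble_score(ens, X[:5]))
```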

Journal ArticleDOI
TL;DR: It is shown that multi-way Wiener filtering is significantly improved thanks to rotations of the estimated main orientations of tensors and a block processing approach to reduce the signal subspace dimension.
Abstract: Previous studies have shown that multi-way Wiener filtering improves the restoration of tensors impaired by additive white Gaussian noise. Multi-way Wiener filtering is based on the distinction between noise and signal subspaces. In this paper, we show that the lower the signal subspace dimension, the better the restored tensor. To reduce the signal subspace dimension, we propose a method based on array processing techniques to estimate the main orientations in a flattened tensor. Rotating a tensor by its main orientation values concentrates the information along either the rows or the columns of the flattened tensor. We show that multi-way Wiener filtering performed on the rotated noisy tensor enables improved recovery of the signal tensor. Moreover, we propose a quadtree decomposition to avoid a blurry effect in the tensor recovered by multi-way Wiener filtering. We show that this block processing reduces the overall blur and restores local characteristics of the signal tensor. Thus, multi-way Wiener filtering is significantly improved thanks to rotations along the estimated main orientations of tensors and a block processing approach.
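
The signal/noise subspace split that multi-way Wiener filtering relies on can be sketched with mode-wise SVD projections (the Wiener weighting, orientation estimation and rotation steps are omitted; the ranks are assumed known here):

```python
import numpy as np

def mode_unfold(t, mode):
    """Mode-n flattening of a 3-way array."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def subspace_project(t, ranks):
    """Project each mode onto its leading singular subspace, separating a
    low-dimensional signal subspace from the noise subspace."""
    out = t
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_unfold(out, mode), full_matrices=False)
        P = U[:, :r] @ U[:, :r].T
        out = np.moveaxis(np.tensordot(P, np.moveaxis(out, mode, 0), axes=1),
                          0, mode)
    return out

rng = np.random.default_rng(6)
sig = np.einsum('i,j,k->ijk', rng.normal(size=20), rng.normal(size=20),
                rng.normal(size=20))                  # rank-(1,1,1) signal
noisy = sig + 0.1 * rng.normal(size=sig.shape)
den = subspace_project(noisy, ranks=(1, 1, 1))
print(np.linalg.norm(den - sig) / np.linalg.norm(noisy - sig))   # < 1
```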

Journal ArticleDOI
TL;DR: Extensive simulation results have clearly shown that the proposed wavelet-based DD–MDC and DDCP–MDC methods are highly error resilient and consistently yield high coding gain; particularly, when only one description is available, the reconstructed image quality is superior to that obtained with other existing methods.
Abstract: In this paper, a novel wavelet-based multiple description coding (MDC) scheme is proposed with dual decomposition (DD) and cross packetization (CP). Through dual decomposition, each description has two parts, the primary and the complementary. The former contains the structural information, including the positions and signs of the significant coefficients, whereas the latter contains the residual data, i.e., the magnitudes of the significant coefficients. The primary part is most crucial for source reconstruction; hence, its codes, generated by the x-tree wavelet encoder, are duplicated in both descriptions for heavy protection. On the other hand, the residual data are processed by the multiple description scalar quantizer to generate two indices for their respective descriptions as the complementary part. The proposed DD–MDC method effectively enhances error-resilience capability for robust transmission. For packet-switching networks, the proposed CP, which produces row packets and column packets for the two descriptions, respectively, is incorporated into the DD–MDC framework, leading to the DDCP–MDC scheme. Extensive simulation results have clearly shown that the proposed wavelet-based DD–MDC and DDCP–MDC methods are highly error resilient and consistently yield high coding gain; in particular, when only one description is available, the reconstructed image quality is superior to that obtained with other existing methods.
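
The packetization idea is easy to picture on a toy coefficient array (a sketch of the row/column pairing only; the paper's two descriptions also differ in their quantized content):

```python
import numpy as np

def cross_packetize(coeffs):
    """Cross-packetization sketch: description 1 carries the coefficient
    array as row packets, description 2 as column packets, so a packet lost
    from one description can be rebuilt from the other."""
    return [r.copy() for r in coeffs], [c.copy() for c in coeffs.T]

c = np.arange(16.0).reshape(4, 4)
rows, cols = cross_packetize(c)
rows[2] = None                                  # one row packet lost in transit
rebuilt_row = np.array([cols[j][2] for j in range(4)])
print(rebuilt_row)                              # [8, 9, 10, 11]
```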

Journal ArticleDOI
TL;DR: A maximum a posteriori (MAP) estimation algorithm improving CABAC decoding performance in the presence of transmission errors is described, together with methods improving the re-synchronization and error detection capabilities of the decoder.
Abstract: This paper addresses the problem of error-resilient decoding of bitstreams produced by the CABAC (context-based adaptive binary arithmetic coding) algorithm used in the H.264 video coding standard. The paper describes a maximum a posteriori (MAP) estimation algorithm improving the CABAC decoding performance in the presence of transmission errors. Methods improving the re-synchronization and error detection capabilities of the decoder are then described. A variant of the CABAC algorithm supporting error detection based on a forbidden interval is presented. The performance of the decoding algorithm is first assessed with theoretical sources and by considering different binarization codes. It is compared against that obtained with Exp-Golomb codes and with a transmission chain making use of an error-correcting code. The approach has been integrated in an H.264/MPEG-4 AVC video coder and decoder. The PSNR gains obtained are discussed.
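
The forbidden-interval idea can be shown with a toy float-precision arithmetic coder: the encoder rescales the symbol probabilities so that a subinterval of mass EPS is never used, and a decoder that lands inside it has detected an error. This illustrates the principle only; it is not the paper's CABAC variant:

```python
# Toy binary arithmetic coder with a forbidden interval; renormalization and
# finite-precision handling are omitted.
EPS = 0.10              # forbidden mass
P0 = 0.6 * (1 - EPS)    # interval share of bit 0
P1 = 0.4 * (1 - EPS)    # interval share of bit 1 (the gap of mass EPS follows)

def encode(bits):
    low, width = 0.0, 1.0
    for b in bits:
        if b == 0:
            width *= P0
        else:
            low, width = low + width * P0, width * P1
    return low + width / 2          # any value inside the final interval

def decode(value, n):
    low, width, out = 0.0, 1.0, []
    for _ in range(n):
        u = (value - low) / width
        if u < P0:
            width *= P0; out.append(0)
        elif u < P0 + P1:
            low, width = low + width * P0, width * P1; out.append(1)
        else:
            raise ValueError("forbidden interval hit: transmission error detected")
    return out

code = encode([0, 1, 1, 0, 0, 1])
print(decode(code, 6))              # [0, 1, 1, 0, 0, 1]
try:
    decode(code + 0.05, 6)          # a corrupted value; detection is probabilistic
except ValueError as e:
    print(e)
```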

Journal ArticleDOI
TL;DR: A novel approach to Multiple Description Coding of digital video using a two-descriptor architecture is presented; it is characterized by an extremely low computational burden, while ensuring comparable overhead and almost seamless reconstruction from a single descriptor.
Abstract: A novel approach to Multiple Description Coding (MDC) of digital video is presented. In the proposed scheme, each descriptor carries a limited number of alternately selected coefficients, complemented with sorting information, which is used at the decoder to achieve an accurate interpolation of the missing coefficients. In order to achieve efficient compression while reducing the overhead, a very effective JPEG-like syntax has been introduced to encode the side information. Compared to other approaches to MDC, the proposed strategy is characterized by an extremely low computational burden, while ensuring comparable overhead and almost seamless reconstruction from a single descriptor. The current implementation refers to a two-descriptor architecture and is tested on intra-frames only, using an M-JPEG scheme for the simulation. Further studies are being conducted to extend the method to more descriptors and to motion-compensated video coding.
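
A toy sketch of the alternating split and single-description recovery (plain neighbor averaging stands in for the paper's sorting-guided interpolation):

```python
import numpy as np

def split_alternate(coeffs):
    """Two-description split by alternate coefficient selection (sketch)."""
    return coeffs[0::2], coeffs[1::2]

def reconstruct_from_even(desc0, n):
    """Rebuild the signal from the even-index description alone, averaging
    the two known neighbors of each missing odd-index coefficient."""
    out = np.zeros(n)
    out[0::2] = desc0
    for i in range(1, n, 2):
        right = out[i + 1] if i + 1 < n else out[i - 1]
        out[i] = 0.5 * (out[i - 1] + right)
    return out

c = np.linspace(1.0, 0.1, 10)        # toy, smoothly decaying coefficients
d0, d1 = split_alternate(c)
print(reconstruct_from_even(d0, 10))
```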

Journal ArticleDOI
TL;DR: This paper shows how an underlying system’s state vector distribution can be determined in a distributed heterogeneous sensor network with reduced subspace observability at the individual nodes as long as the collective set of measurements from all the sensors provides full state observability.
Abstract: In this paper, we show how an underlying system’s state vector distribution can be determined in a distributed heterogeneous sensor network with reduced subspace observability at the individual nodes. The presented algorithm can generate the initial state vector distribution for networks with a variety of sensor types as long as the collective set of measurements from all the sensors provides full state observability. Hence the network, as a whole, can be capable of observing the target state vector even if the individual nodes are not capable of observing it locally. Initialization is accomplished through a novel distributed implementation of the particle filter that involves serial particle proposal and weighting strategies that can be accomplished without sharing raw data between individual nodes. If multiple events of interest occur, their individual states can be initialized simultaneously without requiring explicit data association across nodes. The resulting distributions can be used to initialize a variety of distributed joint tracking algorithms. We present two variants of our initialization algorithm: a low complexity implementation and a low latency implementation. To demonstrate the effectiveness of our algorithms we provide simulation results for initializing the states of multiple maneuvering targets in smart sensor networks consisting of acoustic and radar sensors.
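
A toy two-node illustration of serial proposal and weighting under reduced subspace observability (bearing-only and range-only sensors are illustrative choices, not the paper's sensor models):

```python
import numpy as np

rng = np.random.default_rng(7)
x_true = np.array([3.0, -2.0])                  # unknown 2-D target position

# Node 1 (bearing-only): alone it observes a 1-D subspace, so it proposes
# particles spread along its measured bearing with unknown range.
bearing = np.arctan2(x_true[1], x_true[0]) + rng.normal(0.0, 0.02)
r = rng.uniform(0.1, 10.0, size=2000)
th = bearing + rng.normal(0.0, 0.02, size=2000)
particles = np.stack([r * np.cos(th), r * np.sin(th)], axis=1)

# Node 2 (range-only): serially weights the forwarded particles with its own
# likelihood; raw measurements never need to be shared between nodes.
r_meas = np.linalg.norm(x_true) + rng.normal(0.0, 0.05)
w = np.exp(-0.5 * ((np.linalg.norm(particles, axis=1) - r_meas) / 0.05) ** 2)
w /= w.sum()
print(w @ particles)                            # ~ [3.0, -2.0]
```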

Journal ArticleDOI
TL;DR: This paper proposes a novel fuzzy-logic approach to detect hotspots in NOAA advanced very high resolution radiometer (AVHRR) imagery of the Jharia region of Jharkhand, India; good agreement is obtained between observed and predicted hotspots.
Abstract: This paper proposes a novel approach to detect hotspots using NOAA advanced very high resolution radiometer (AVHRR) data for the Jharia region of Jharkhand, India. The Jharia coalfield in Jharkhand is the richest coal-bearing area in India and contains a large number of mine fires that have been burning for several decades. In this paper, a fuzzy-based methodology is applied to determine hotspots in Jharia AVHRR images, based on a theoretical model that establishes a relationship among AVHRR channel 4, channel 5 and different vegetation indices. The algorithm consists of four stages: data preprocessing, multi-channel information fusion, hotspot detection using a fuzzy logic approach, and validation of the results. The most commonly used existing algorithms, such as contextual algorithms, multi-thresholding, entropy-based thresholding, and genetic algorithms, have the limitation that they need a mathematical model for training in order to produce the required result. The employed fuzzy logic approach overcomes this requirement; in addition, it is flexible, tolerant of imprecise data, and based on natural language. The results were compared with those obtained by ground survey, and good agreement was obtained between observed and predicted hotspots.
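
A sketch of a fuzzy hotspot rule in this spirit (the triangular memberships, breakpoint temperatures in kelvin, and min-combination are illustrative assumptions, not the paper's tuned system):

```python
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular fuzzy membership function rising on [a, b], falling on [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def hotspot_degree(bt4, bt5, ndvi):
    """Toy fuzzy hotspot rule: hot in AVHRR channel 4, a large channel-4/5
    brightness-temperature difference, and sparse vegetation all raise the
    hotspot degree; the memberships are combined with a fuzzy AND (min)."""
    hot = tri_membership(bt4, 310.0, 330.0, 400.0)
    diff = tri_membership(bt4 - bt5, 4.0, 12.0, 60.0)
    bare = 1.0 - np.clip(ndvi, 0.0, 1.0)
    return np.minimum(np.minimum(hot, diff), bare)

print(hotspot_degree(bt4=np.array([300.0, 335.0]),
                     bt5=np.array([298.0, 318.0]),
                     ndvi=np.array([0.6, 0.1])))   # cool pixel ~0, hot pixel high
```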

Journal ArticleDOI
TL;DR: Experimental results show that the new method not only decomposes a given image better but also reduces the runtime in comparison to the MCA approach.
Abstract: In this paper, a new method combining the basis pursuit denoising (BPDN) algorithm and the total variation (TV) regularization scheme is presented for separating images into texture and cartoon parts. It is a modification of the model of [1]. In this process, two appropriate dictionaries are used: the dual-tree complex wavelet transform (DT CWT) for the texture parts, and the second-generation curvelet transform for the cartoon parts. To direct the separation process and reduce the pseudo-Gibbs phenomenon, the curvelet transform is followed by a projected regularization method for the cartoon parts. Experimental results show that the new method not only decomposes a given image better but also reduces the runtime, in comparison to the MCA approach.
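
For context, a generic form of the two-dictionary separation energy involved (the paper modifies the model of [1] and adds a projected regularization step, so this is indicative only):

$$ \min_{\alpha_t,\;\alpha_c}\;\|\alpha_t\|_{1}+\|\alpha_c\|_{1} +\lambda\,\big\|f-\Phi_t\alpha_t-\Phi_c\alpha_c\big\|_{2}^{2} +\gamma\,\operatorname{TV}\!\big(\Phi_c\alpha_c\big), $$

with $\Phi_t$ the texture dictionary (here the DT CWT) and $\Phi_c$ the cartoon dictionary (here curvelets); the TV term pushes piecewise smooth content into the cartoon part.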

Journal ArticleDOI
TL;DR: Filtering of pulse-like FM signals with varying amplitude corrupted by impulse noise is considered; the robust DFT is calculated over overlapped intervals to decrease the amplitude distortion that a robust DFT calculated over a single wide interval, possibly including zero output, can introduce.
Abstract: Filtering of pulse-like FM signals with varying amplitude corrupted by impulse noise is considered. The robust DFT, calculated over overlapped intervals, is used for this aim. This technique is proposed in order to decrease the amplitude distortion of output signals that can be introduced by a robust DFT calculated over a single wide interval, possibly including zero output. The proposed algorithm proceeds in the following steps. In the first stage, the robust DFT is calculated for each interval, and filtered signals are obtained by applying the standard inverse DFT to the robust DFTs of the input data. In the second stage, the results from the different overlapped intervals are combined using appropriate order statistics. In addition, an algorithm inspired by the intersection of confidence intervals rule is used for adaptive selection of the interval width in the robust DFT. The accuracy of the algorithm is tested on numerical examples, and a computational complexity analysis is also provided.
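
One common construction of a robust DFT replaces the mean implicit in each coefficient with marginal medians; a sketch of that building block on an impulse-corrupted tone (the overlapped-interval combination and adaptive width selection are not reproduced):

```python
import numpy as np

def robust_dft(x):
    """Median-based robust DFT: the mean implicit in each DFT coefficient is
    replaced by marginal medians of the per-sample contributions, which
    suppresses impulse noise."""
    n = len(x)
    idx = np.arange(n)
    out = np.empty(n, dtype=complex)
    for k in range(n):
        c = x * np.exp(-2j * np.pi * k * idx / n)
        out[k] = n * (np.median(c.real) + 1j * np.median(c.imag))
    return out

rng = np.random.default_rng(8)
t = np.arange(256)
sig = np.cos(2 * np.pi * 16 * t / 256)             # tone on an exact DFT bin
noisy = sig.astype(complex).copy()
hits = rng.choice(256, size=10, replace=False)
noisy[hits] += rng.normal(0.0, 50.0, size=10)      # impulse noise
recovered = np.fft.ifft(robust_dft(noisy)).real
print(float(np.max(np.abs(recovered - sig))))      # far below the impulse scale
```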


Journal ArticleDOI
TL;DR: Compared with other blind channel estimation methods for space-time systems, this method needs neither redundant precoding nor oversampling, and thus achieves a higher data rate; it is also robust to channel order overestimation.
Abstract: This paper deals with a blind channel estimation method for space-time coded block transmission systems. By concatenating the real and imaginary parts of the received signal to form an elongated vector, we derive an equivalent input–output system model. The channel state information (CSI) is then blindly estimated using a subspace method, utilizing only the redundancy inherent in space-time block coding (STBC) and the cyclic prefix (CP). The estimation ambiguity, which is common to all blind methods, is analyzed in detail, and we prove that only four scalar indeterminacies exist. Three effective methods to eliminate the ambiguities are also proposed. Compared with other blind channel estimation methods for space-time systems, this method needs neither redundant precoding nor oversampling, and thus achieves a higher data rate. Besides, the method is robust to channel order overestimation, which is effectively demonstrated by numerical simulations.
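
The elongated-vector step the derivation starts from is the standard real-composite rewriting of a complex linear system, easily verified numerically:

```python
import numpy as np

rng = np.random.default_rng(9)
H = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
s = rng.normal(size=3) + 1j * rng.normal(size=3)

# Stacking real and imaginary parts turns the complex system y = H s into an
# equivalent real linear system on an elongated vector.
H_r = np.block([[H.real, -H.imag], [H.imag, H.real]])
s_r = np.concatenate([s.real, s.imag])
y = H @ s
print(np.allclose(np.concatenate([y.real, y.imag]), H_r @ s_r))  # True
```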

Journal ArticleDOI
TL;DR: This paper explores the application of a common operator used in systems theory, viz., the delta operator, to formulate a unified theory of multichannel blind deconvolution (MBD) which is valid in both discrete and continuous time domains.
Abstract: In this paper, we explore the application of a common operator used in systems theory, viz., the delta operator, to formulate a unified theory of multichannel blind deconvolution (MBD) that is valid in both the discrete and continuous time domains. Apart from providing a unified treatment of MBD problems, this formulation permits a smooth transition of the demixer from the discrete time domain to the continuous time domain when the sampling rate is high. Furthermore, we give a unified treatment of a balanced parameterized state-space formulation for solving the MBD problem in both the discrete and continuous time domains when the number of states is unknown.
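
The unifying device is the delta operator, a standard divided-difference replacement for the forward shift $q$:

$$ \delta x(k)\;\triangleq\;\frac{x(k+1)-x(k)}{\Delta}\;=\;\frac{q-1}{\Delta}\,x(k), \qquad \delta x(k)\;\longrightarrow\;\dot x(t)\ \text{as}\ \Delta\to 0,\ t=k\Delta, $$

so delta-operator models tend continuously to their continuous-time counterparts as the sampling period $\Delta$ shrinks, which is what permits the smooth demixer transition described above.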

Journal ArticleDOI
TL;DR: It is shown that the estimated AR coefficients are closer to the nominal (noise-free) ones in the presence of noise when the frequency content is low with respect to the sampling frequency of the corresponding continuous-time processes from which the samples are taken for AR estimation.
Abstract: In this paper, we propose a noise model that does not destroy the AR structure of signals buried in noise, independently of the noise's nature (white or colored, Gaussian or not) and its variance. An expression for the perturbed AR coefficients is derived, and the proposed restoration does not use any a priori information on the nature of the noise or its variance. It is shown that the estimated AR coefficients are closer to the nominal (noise-free) ones in the presence of noise when the frequency content is low with respect to the sampling frequency of the corresponding continuous-time processes from which the samples are taken for AR estimation. For unknown frequency contents, denoising of the AR coefficients is obtained by decreasing the time interval separating the samples used for AR estimation. A model order selection adapted to degraded signal-to-noise ratios is proposed. The performance of the proposed recovery of original AR spectra is demonstrated on signals buried in white and colored noise. The observed results are in accordance with the developed theory.
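
The degradation the proposed restoration targets is easy to reproduce: additive white observation noise inflates the zero-lag autocorrelation and biases Yule-Walker AR estimates toward zero (a toy AR(2) demonstration; the paper's noise model and restoration are not reproduced):

```python
import numpy as np

def yule_walker(x, order):
    """AR coefficient estimation from the sample autocorrelation."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

rng = np.random.default_rng(10)
a = np.array([1.7, -0.8])            # stable AR(2): x_n = 1.7 x_{n-1} - 0.8 x_{n-2} + e_n
x = np.zeros(20000)
e = rng.normal(size=20000)
for n in range(2, 20000):
    x[n] = a[0] * x[n - 1] + a[1] * x[n - 2] + e[n]

print(yule_walker(x, 2))                                  # ~ [1.7, -0.8]
# White observation noise biases the estimated coefficients toward zero.
print(yule_walker(x + rng.normal(0.0, 2.0, size=20000), 2))
```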