Showing papers by "M. Omair Ahmad published in 2014"


Journal ArticleDOI
TL;DR: A novel multiplicative watermarking scheme in the contourlet domain using the univariate and bivariate alpha-stable distributions is proposed and the robustness of the proposed bivariate Cauchy detector against various kinds of attacks is studied and shown to be superior to that of the generalized Gaussian detector.
Abstract: In the past decade, several schemes for digital image watermarking have been proposed to protect the copyright of an image document or to provide proof of ownership in some identifiable fashion. This paper proposes a novel multiplicative watermarking scheme in the contourlet domain. The effectiveness of a watermark detector depends highly on the modeling of the transform-domain coefficients. In view of this, we first investigate the modeling of the contourlet coefficients by the alpha-stable distributions. It is shown that the univariate alpha-stable distribution fits the empirical data more accurately than the formerly used distributions, such as the generalized Gaussian (GG) and Laplacian, do. We also show that the bivariate alpha-stable distribution can capture the across-scale dependencies of the contourlet coefficients. Motivated by the modeling results, a blind watermark detector in the contourlet domain is designed using the univariate and bivariate alpha-stable distributions. It is shown that the detectors based on both of these distributions provide higher detection rates than one based on the generalized Gaussian distribution. However, a watermark detector designed for an alpha-stable distribution whose parameter α is other than 1 or 2 is computationally expensive, because the distribution lacks a closed-form expression in this case. Therefore, a watermark detector is designed based on the bivariate Cauchy member of the alpha-stable family, for which α = 1. The resulting design yields a detector of significantly reduced complexity whose performance is much superior to that of the GG detector and very close to that of the detector corresponding to the best-fit alpha-stable distribution. The robustness of the proposed bivariate Cauchy detector against various kinds of attacks, such as noise, filtering, and compression, is studied and shown to be superior to that of the GG detector.

80 citations
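
As a rough illustration of such a detector, the sketch below implements a likelihood-ratio test for a multiplicative watermark under a univariate Cauchy model (the α = 1 member of the family); the bivariate detector in the paper additionally models across-scale dependencies. The dispersion, watermark strength, and bipolar watermark below are illustrative assumptions, not the paper's settings.

import numpy as np

def cauchy_logpdf(x, gamma):
    return np.log(gamma / np.pi) - np.log(gamma**2 + x**2)

def llr_multiplicative(y, w, xi, gamma):
    """Log-likelihood ratio of H1 (watermark w present) vs H0 for
    y_i = x_i * (1 + xi * w_i), with x_i ~ Cauchy(gamma)."""
    s = 1.0 + xi * w
    return np.sum(cauchy_logpdf(y / s, gamma) - np.log(np.abs(s))
                  - cauchy_logpdf(y, gamma))

rng = np.random.default_rng(0)
gamma, xi = 1.0, 0.1
w = rng.choice([-1.0, 1.0], size=4096)            # bipolar watermark
x = gamma * rng.standard_cauchy(4096)             # stand-in for contourlet coefficients
print(llr_multiplicative(x * (1 + xi * w), w, xi, gamma))  # marked: typically positive
print(llr_multiplicative(x, w, xi, gamma))                 # unmarked: typically negative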


Journal ArticleDOI
TL;DR: Two robust affine projection sign (RAPS) algorithms are proposed, both of which minimize a mixed l1/l2 norm of the error signal, and offer robust performance with respect to impulsive noise and improved tracking of the unknown system in comparison to that provided by the PAPS and affine projection sign algorithms.
Abstract: Two robust affine projection sign (RAPS) algorithms, both of which minimize a mixed $l_1$/$l_2$ norm of the error signal, are proposed. The direction vector of the RAPS algorithms is obtained from the gradient of an $l_1$ norm-based objective function, while two related $l_2$ norm-based minimization problems are solved to obtain the line searches of the two RAPS algorithms. The $l_1$ norm-based direction vector reduces the impact of impulsive noise, whereas the $l_2$ norm-based line search produces an unbiased solution in the proposed algorithms. In addition, one of the two RAPS algorithms shares the data-selective adaptation used in the set-membership affine projection (SMAP) algorithm. The proposed algorithms are shown to offer a significant improvement in the convergence speed as well as a significant reduction in the steady-state misalignment relative to the pseudo affine projection sign (PAPS) algorithm. In addition, the proposed algorithms offer robust performance with respect to impulsive noise and improved tracking of the unknown system in comparison to that provided by the PAPS and affine projection sign (APS) algorithms. These features of the proposed algorithms are demonstrated using simulation results in system-identification and echo-cancellation applications.

38 citations
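
For context, the sketch below implements the baseline affine projection sign update that the RAPS algorithms build on: the direction vector X·sign(e) is the same $l_1$-based ingredient, while the paper's $l_2$-based line search and set-membership test are omitted. The filter length, projection order, and step size are illustrative.

import numpy as np

def aps_update(w, X, d, mu=0.05, delta=1e-6):
    """One affine projection sign iteration. X: (L, P) matrix of the last
    P input vectors, d: (P,) desired samples, w: (L,) filter estimate."""
    e = d - X.T @ w                       # a-priori errors
    g = X @ np.sign(e)                    # l1-norm-based direction vector
    return w + mu * g / (np.sqrt(g @ g) + delta)

rng = np.random.default_rng(1)
L, P = 32, 4
h = rng.standard_normal(L); h /= np.linalg.norm(h)   # unknown system
w = np.zeros(L)
x = rng.standard_normal(20000)
for n in range(L + P, x.size):
    X = np.column_stack([x[n - p - L:n - p][::-1] for p in range(P)])
    d = X.T @ h + 0.01 * rng.standard_normal(P)
    if rng.random() < 0.01:               # occasional impulsive bursts,
        d = d + 50.0 * rng.standard_normal(P)  # which the sign update tolerates
    w = aps_update(w, X, d)
print("misalignment (dB):",
      20 * np.log10(np.linalg.norm(w - h) / np.linalg.norm(h)))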


Proceedings ArticleDOI
01 Jun 2014
TL;DR: It is shown that the alpha-stable family of distributions provides a more accurate model for the contourlet subband coefficients than the formerly used distributions, namely, the generalized Gaussian and Laplacian distributions, in terms of both the objective measure of the Kolmogorov-Smirnov distance and the subjective measure of comparing the log-scale histograms.
Abstract: It is known that the contourlet coefficients of images are non-Gaussian and heavy-tailed. In view of this, an appropriate distribution for modeling the statistics of the contourlet coefficients would be one having a large peak and tails heavier than those of a Gaussian PDF, i.e., a heavy-tailed PDF. This paper proposes a new image model in the contourlet domain, where the coefficients are modeled by a symmetric alpha-stable distribution, which is well suited to transform coefficients that are strongly non-Gaussian and heavy-tailed. It is shown that the alpha-stable family of distributions provides a more accurate model for the contourlet subband coefficients than the formerly used distributions, namely, the generalized Gaussian and Laplacian distributions, in terms of both the objective measure of the Kolmogorov-Smirnov distance and the subjective measure of comparing the log-scale histograms.

25 citations
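
A minimal sketch of this kind of comparison is given below: a heavy-tailed sample stands in for a contourlet subband, generalized Gaussian and Laplacian models are fitted to it, and Kolmogorov-Smirnov distances are compared. For speed, the alpha-stable parameters are taken as known here rather than estimated per subband as in the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, scale = 1.5, 1.0
data = stats.levy_stable.rvs(alpha, 0.0, scale=scale, size=2000, random_state=rng)

ks_stable = stats.kstest(data, stats.levy_stable.cdf,
                         args=(alpha, 0.0, 0.0, scale)).statistic
ks_gg = stats.kstest(data, stats.gennorm.cdf, args=stats.gennorm.fit(data)).statistic
ks_lap = stats.kstest(data, stats.laplace.cdf, args=stats.laplace.fit(data)).statistic
print(f"KS distances - stable: {ks_stable:.4f}, "
      f"generalized Gaussian: {ks_gg:.4f}, Laplacian: {ks_lap:.4f}")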


Proceedings ArticleDOI
04 May 2014
TL;DR: It is shown that a symmetric normal inverse Gaussian distribution is more suitable for modeling the contourlet coefficients than the formerly used generalized Gaussian distribution for reducing noise in images corrupted by additive white Gaussian noise.
Abstract: A new contourlet-based method is introduced for reducing noise in images corrupted by additive white Gaussian noise. It is shown that a symmetric normal inverse Gaussian distribution is more suitable for modeling the contourlet coefficients than the formerly used generalized Gaussian distribution. To estimate the noise-free coefficients, a Bayesian maximum a posteriori estimator is developed utilizing the proposed distribution. In order to estimate the parameters of the distribution, a moment-based technique is used. The performance of the proposed method is studied using typical noise-free images corrupted with simulated noise and compared with that of other state-of-the-art methods. It is shown that, compared with other denoising techniques, the proposed method gives higher values of the peak signal-to-noise ratio and provides images of good visual quality.

21 citations
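
The sketch below shows the shape of such a MAP shrinkage rule under a symmetric NIG prior, solved by a per-coefficient grid search; the paper instead estimates the NIG parameters by a moment-based technique, and the prior parameters, noise level, and grid used here are illustrative.

import numpy as np
from scipy.stats import norminvgauss, norm

def nig_map_shrink(y, sigma_n, a=1.0, scale=1.0, grid=None):
    """MAP estimate per coefficient: argmax_x log N(y-x; 0, sn^2) + log NIG(x)."""
    if grid is None:
        grid = np.linspace(-6 * scale, 6 * scale, 1201)
    prior = norminvgauss.logpdf(grid, a, 0.0, loc=0.0, scale=scale)  # b=0: symmetric
    # obj[i, j] = log posterior (up to a constant) of x = grid[j] given y[i]
    obj = norm.logpdf(y[:, None] - grid[None, :], scale=sigma_n) + prior[None, :]
    return grid[np.argmax(obj, axis=1)]

rng = np.random.default_rng(3)
x = norminvgauss.rvs(1.0, 0.0, scale=1.0, size=1000, random_state=rng)
y = x + 0.5 * rng.standard_normal(x.size)
x_hat = nig_map_shrink(y, sigma_n=0.5)
print("noisy MSE:", np.mean((y - x) ** 2))
print("MAP MSE  :", np.mean((x_hat - x) ** 2))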


Journal ArticleDOI
TL;DR: A feature extraction scheme based on the discrete cosine transform of electromyography (EMG) signals is proposed for the classification of normal events and a neuromuscular disease, namely amyotrophic lateral sclerosis, and it is found that the proposed method provides a very satisfactory performance in terms of specificity, sensitivity and overall classification accuracy.
Abstract: A feature extraction scheme based on the discrete cosine transform (DCT) of electromyography (EMG) signals is proposed for the classification of normal events and a neuromuscular disease, namely amyotrophic lateral sclerosis. Instead of employing the DCT directly on the EMG data, it is employed on the motor unit action potentials (MUAPs) extracted from the EMG signal via a template matching-based decomposition technique. Unlike conventional MUAP-based methods, only the one MUAP with the maximum dynamic range is selected for DCT-based feature extraction. The magnitude and frequency values of a few high-energy DCT coefficients corresponding to the selected MUAP are used as the desired feature, which not only reduces the computational burden but also offers better feature quality, with high within-class compactness and between-class separation. For the purpose of classification, the K-nearest neighbour classifier is employed. Extensive analysis is performed on a clinical EMG database, and it is found that the proposed method provides a very satisfactory performance in terms of specificity, sensitivity and overall classification accuracy.

16 citations
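
The sketch below illustrates the feature pipeline on synthetic waveforms standing in for MUAPs: take the DCT, keep a few highest-energy coefficients, and feed their magnitudes and positions to a K-nearest neighbour classifier. The waveform model, number of retained coefficients, and class labels are illustrative.

import numpy as np
from scipy.fft import dct
from sklearn.neighbors import KNeighborsClassifier

def dct_features(muap, k=5):
    c = dct(muap, norm="ortho")
    idx = np.argsort(np.abs(c))[::-1][:k]        # k highest-energy coefficients
    return np.concatenate([np.abs(c[idx]), idx.astype(float)])

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 128)
def make_muap(f):  # damped oscillation; frequency differs per class
    return np.sin(2 * np.pi * f * t) * np.exp(-5 * t) + 0.05 * rng.standard_normal(t.size)

X = np.array([dct_features(make_muap(f)) for f in [4] * 50 + [9] * 50])
y = np.array([0] * 50 + [1] * 50)                # 0: normal, 1: ALS (illustrative)
clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))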


Proceedings ArticleDOI
22 Jun 2014
TL;DR: Simulation results are provided to show that the proposed denoising method can effectively reduce the noise, yielding higher values of the peak signal-to-noise ratio along with better visual quality than that provided by some of the other existing methods.
Abstract: Denoising can be regarded as a problem of prior probability modeling in an estimation task, and the performance of the estimator is intimately tied to the correctness of the model. This paper proposes a new wavelet-domain image denoising method using the minimum mean square error (MMSE) estimator. The vector-based hidden Markov model (HMM) is used as the prior for modeling the wavelet coefficients of an image. This model is an effective statistical model for the wavelet coefficients, since it is capable of capturing both the subband marginal distribution and the inter-scale, intra-scale and cross-orientation dependencies of the wavelet coefficients. Using this prior, a Wiener filter, derived from the MMSE estimator, is developed for estimating the denoised coefficients. Experiments are conducted on standard images to evaluate the performance of the proposed method. Simulation results show that the proposed denoising method can effectively reduce the noise, yielding higher values of the peak signal-to-noise ratio along with better visual quality than that provided by some of the other existing methods.

15 citations
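
The sketch below keeps the posterior-weighted Wiener form of such an MMSE estimator but replaces the vector HMM with an independent two-state Gaussian mixture fitted per subband by EM, i.e., the inter-scale, intra-scale and cross-orientation state dependencies are omitted; the wavelet, decomposition level, and noise level are illustrative.

import numpy as np
import pywt

def gmm_mmse(y, sigma_n, iters=25):
    """x_hat = sum_s p(s|y) * vx_s/(vx_s + sn^2) * y under a 2-state
    zero-mean Gaussian mixture prior fitted to the noisy subband by EM."""
    pi = np.array([0.5, 0.5])
    v = np.array([0.5, 4.0]) * (np.var(y) + 1e-12)       # variances of y per state
    for it in range(iters + 1):
        lik = pi / np.sqrt(2 * np.pi * v) * np.exp(-y[:, None] ** 2 / (2 * v))
        r = lik / (lik.sum(axis=1, keepdims=True) + 1e-300)   # p(s|y)
        if it == iters:
            break
        pi = r.mean(axis=0)
        v = (r * y[:, None] ** 2).sum(axis=0) / (r.sum(axis=0) + 1e-300)
    vx = np.maximum(v - sigma_n ** 2, 0.0)               # signal variances
    return y * (r * (vx / (vx + sigma_n ** 2))).sum(axis=1)

rng = np.random.default_rng(5)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 100.0    # synthetic test image
noisy = clean + 15.0 * rng.standard_normal(clean.shape)
coeffs = pywt.wavedec2(noisy, "db4", level=3)
sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745         # robust noise estimate
den = [coeffs[0]] + [tuple(gmm_mmse(c.ravel(), sigma_n).reshape(c.shape)
                           for c in lvl) for lvl in coeffs[1:]]
rec = pywt.waverec2(den, "db4")[:128, :128]
print("noisy MSE:", np.mean((noisy - clean) ** 2),
      " denoised MSE:", np.mean((rec - clean) ** 2))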


Journal ArticleDOI
TL;DR: The novelty of the proposed method is that it introduces a systematic framework that utilizes intensity, convexity, and texture information to achieve a high accuracy for automatic segmentation of nuclei in the phase-contrast images.
Abstract: This paper presents a method for automatic segmentation of nuclei in phase-contrast images using the intensity, convexity and texture of the nuclei. The proposed method consists of three main stages: preprocessing, h-maxima transformation-based marker-controlled watershed segmentation (h-TMC), and texture analysis. In the preprocessing stage, a top-hat filter is used to increase the contrast and suppress the non-uniform illumination, shading, and other imaging artifacts in the input image. The nuclei segmentation stage consists of a distance transformation, an h-maxima transformation and watershed segmentation. These transformations utilize the intensity information and the convexity property of the nucleus to detect a single marker in every nucleus; these markers are then used in the h-TMC watershed algorithm to obtain segments of the nuclei. However, dust particles, imaging artifacts, or prolonged cell cytoplasm may falsely be segmented as nuclei at this stage, and may thus lead to an inaccurate analysis of the cell image. In order to identify and remove these non-nuclei segments, a texture analysis is performed in the third stage, using six of the Haralick measures along with the AdaBoost algorithm. The novelty of the proposed method is that it introduces a systematic framework that utilizes intensity, convexity, and texture information to achieve high accuracy for automatic segmentation of nuclei in phase-contrast images. Extensive experiments demonstrate the superior performance of the proposed method (precision = 0.948; recall = 0.924; F1-measure = 0.936; validated on ∼4850 manually labeled nuclei).

13 citations
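
The segmentation backbone can be sketched with scikit-image as below: top-hat preprocessing, distance transform, h-maxima markers, and marker-controlled watershed. The texture stage (Haralick measures plus AdaBoost) is omitted, the structuring-element radius and h are illustrative, and the coins image stands in for a phase-contrast image.

import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, measure, morphology, segmentation

img = data.coins().astype(float)                         # stand-in image
img = morphology.white_tophat(img, morphology.disk(31))  # background suppression
binary = img > filters.threshold_otsu(img)
binary = morphology.remove_small_objects(binary, 64)
dist = ndi.distance_transform_edt(binary)                # peaks reflect convex blobs
markers = measure.label(morphology.h_maxima(dist, 4))    # ideally one marker per object
labels = segmentation.watershed(-dist, markers, mask=binary)
print("objects found:", labels.max())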


Proceedings ArticleDOI
01 Jun 2014
TL;DR: The performance of the proposed detector is studied through simulation and is shown to be superior to that of other detectors in terms of the imperceptibility of the embedded watermark and detection rate.
Abstract: The wavelet coefficients of images show heavy-tailed marginal statistics as well as strong inter-subband, intra-subband, and across-orientation dependencies. The vector-based hidden Markov model (HMM) has been shown to be an effective statistical model for wavelet coefficients, capable of capturing both the subband marginal distribution and the inter-scale and intra-scale dependencies of the wavelet coefficients. In this paper, we propose a locally-optimum watermark detector using the vector HMM for the image wavelet coefficients. The performance of the proposed detector is studied through simulation and is shown to be superior to that of other detectors in terms of the imperceptibility of the embedded watermark and the detection rate.

13 citations
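
For flavor, the sketch below computes a locally-optimum statistic for a multiplicative watermark under an i.i.d. Cauchy marginal, sum_i w_i * psi(y_i) with psi(y) = -1 - y p'(y)/p(y); the paper's detector replaces this i.i.d. marginal with the vector HMM. The dispersion and embedding strength are illustrative.

import numpy as np

def lo_statistic_cauchy(y, w, gamma):
    """Locally-optimum statistic for y_i = x_i(1 + xi*w_i), x_i ~ Cauchy(gamma)."""
    psi = -1.0 + 2.0 * y**2 / (gamma**2 + y**2)   # -1 - y p'(y)/p(y) for Cauchy
    return np.sum(w * psi)

rng = np.random.default_rng(7)
gamma, xi = 1.0, 0.05
w = rng.choice([-1.0, 1.0], size=8192)
x = gamma * rng.standard_cauchy(8192)
print("marked  :", lo_statistic_cauchy(x * (1 + xi * w), w, gamma))  # large on average
print("unmarked:", lo_statistic_cauchy(x, w, gamma))                 # near zero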


Proceedings ArticleDOI
28 Jan 2014
TL;DR: This paper presents a model for collaboration between a pre-trained object detector and multiple single object trackers in the particle filter tracking framework, and presents a dual motion model that incorporates the associated detections with the object dynamics.
Abstract: The past decade has witnessed significant progress in object detection and tracking in videos. In this paper, we present a model for collaboration between a pre-trained object detector and multiple single-object trackers in the particle filter tracking framework. For each frame, we construct an association between the trackers and the detections, and when a tracker is successfully associated with a detection, we treat this detection as the key-sample for that tracker. We present a dual motion model that incorporates the associated detections into the object dynamics. A likelihood function then assigns different weights to the propagated and the newly created particles, reducing the effect of false positives and missed detections in the tracking process. In addition, we use generative and discriminative appearance models to maximize the appearance variation among the targets. The performance of the proposed algorithm compares favorably with that of state-of-the-art approaches on three public sequences.

10 citations
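
The association step can be sketched as below: an intersection-over-union cost between tracker and detection boxes, solved by the Hungarian algorithm, with an accepted detection acting as the key-sample. The boxes and the IoU gate of 0.3 are illustrative; the paper's dual motion model and particle weighting are not shown.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of boxes given as (x, y, w, h)."""
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

trackers = np.array([[10, 10, 40, 80], [200, 50, 45, 90]], float)
detections = np.array([[204, 55, 44, 88], [12, 14, 38, 78], [400, 0, 40, 80]], float)
cost = np.array([[1.0 - iou(t, d) for d in detections] for t in trackers])
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    if 1.0 - cost[r, c] >= 0.3:                  # accept only overlapping pairs
        print(f"tracker {r} -> detection {c} (key-sample), IoU={1 - cost[r, c]:.2f}")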


Proceedings ArticleDOI
01 Aug 2014
TL;DR: The results show that the proposed denoising method provides values of the peak signal-to-noise ratio higher than those provided by some of the existing techniques, along with images of superior visual quality.
Abstract: In this paper, a new contourlet-based method for denoising images corrupted by additive white Gaussian noise is proposed. The alpha-stable distribution is used to model the contourlet coefficients of noise-free images. This model is then exploited to develop a Bayesian minimum mean absolute error estimator. A modified empirical characteristic function-based method is employed for estimating the parameters of the assumed alpha-stable prior. The performance of the proposed denoising method is evaluated using standard noise-free images corrupted with simulated noise and compared with that of other state-of-the-art methods. The results show that the proposed method provides values of the peak signal-to-noise ratio higher than those provided by some of the existing techniques, along with images of superior visual quality.

8 citations
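
The sketch below shows the empirical-characteristic-function idea behind such parameter estimation for a symmetric alpha-stable prior: since |phi(t)| = exp(-|gamma*t|^alpha), regressing log(-log|phi_hat(t)|) on log t yields alpha as the slope and gamma from the intercept. The paper uses a modified variant of this idea; the probe frequencies here are illustrative.

import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(8)
x = levy_stable.rvs(alpha=1.4, beta=0.0, scale=2.0, size=50000, random_state=rng)

t = np.linspace(0.05, 0.5, 10)                           # probe frequencies
phi = np.abs(np.exp(1j * np.outer(t, x)).mean(axis=1))   # ECF magnitude
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(phi)), 1)
alpha_hat, gamma_hat = slope, np.exp(intercept / slope)
print(f"alpha ~= {alpha_hat:.3f}, gamma ~= {gamma_hat:.3f}")   # true: 1.4, 2.0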


Journal ArticleDOI
TL;DR: In this paper, a two-stage adaptive filter-based AEC scheme is proposed to deal with the difficult problem of acoustic echo cancellation (AEC) in a single-channel scenario in the presence of noise.
Abstract: In this paper, a two-stage scheme is proposed to deal with the difficult problem of acoustic echo cancellation (AEC) in a single-channel scenario in the presence of noise. In order to overcome the major challenge of obtaining a separate reference signal in the adaptive filter-based AEC problem, a delayed version of the echo- and noise-suppressed signal is proposed for use as the reference. A modified objective function is thereby derived for a gradient-based adaptive filter algorithm, and proof of its convergence to the optimum Wiener-Hopf solution is established. The output of the AEC block is fed to an acoustic noise cancellation (ANC) block, where a spectral subtraction-based algorithm with adaptive spectral floor estimation is employed. In order to obtain fast but smooth convergence with the maximum possible echo and noise suppression, a set of updating constraints is proposed based on various speech characteristics (e.g., energy and correlation) of the reference and current frames, considering whether they are voiced, unvoiced, or pauses. Extensive experimentation is carried out on several echo- and noise-corrupted natural utterances taken from the TIMIT database, and it is found that the proposed scheme can significantly reduce the effect of both echo and noise in terms of objective and subjective quality measures.
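
The ANC stage can be sketched as below: frame-wise magnitude spectral subtraction with a spectral floor via the STFT. The paper adapts the floor and the updates using voiced/unvoiced/pause decisions, which are omitted here; the over-subtraction factor, floor, and leading noise-only segment are illustrative.

import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, fs, noise_len, over=2.0, floor=0.05):
    """Magnitude spectral subtraction with a fixed spectral floor."""
    _, _, X = stft(x, fs, nperseg=512)
    _, _, N = stft(x[:noise_len], fs, nperseg=512)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)    # noise spectrum estimate
    mag = np.maximum(np.abs(X) - over * noise_mag, floor * noise_mag)
    return istft(mag * np.exp(1j * np.angle(X)), fs, nperseg=512)[1]

fs = 16000
rng = np.random.default_rng(9)
s = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)         # stand-in for speech
x = np.concatenate([np.zeros(4000), s]) + 0.3 * rng.standard_normal(4000 + fs)
y = spectral_subtract(x, fs, noise_len=4000)
m = min(len(y), len(x))
print("residual power before:", np.var(x[4000:m] - s[:m - 4000]))
print("residual power after :", np.var(y[4000:m] - s[:m - 4000]))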

Proceedings ArticleDOI
22 Jun 2014
TL;DR: Experimental results show that the proposed algorithm, when applied to two public vehicle detection datasets, reduces the storage requirement of the classifier pyramid while providing about the same performance as that provided by state-of-the-art techniques.
Abstract: Histogram of oriented gradients (HOG) is often used for object detection in images. These HOG features can be referred to as 2DHOG when represented in a 2D matrix format instead of a 1D vector. In this paper, we propose a new vehicle detection algorithm using 2DHOG in the discrete cosine transform (DCT) domain. The proposed technique consists of extracting the 2DHOG from the input image and applying the 2DDCT to it. This is followed by low-pass filtering in order to obtain novel features called transform-domain 2DHOG (TD2DHOG). TD2DHOG is used with a classifier pyramid in order to reduce the multi-scale scanning cost. Experimental results show that the proposed algorithm, when applied to two public vehicle detection datasets, reduces the storage requirement of the classifier pyramid while providing about the same performance as that provided by state-of-the-art techniques.
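
The sketch below builds the TD2DHOG feature in its simplest form: per-cell orientation histograms kept on the 2D cell grid (a 2DHOG), a 2D DCT per orientation channel, and retention of the low-frequency block. Cell size, bin count, and the retained block size are illustrative.

import numpy as np
from scipy.fft import dctn

def hog_2d(img, cell=8, bins=9):
    """Per-cell unsigned-orientation histograms kept as an (H, W, bins) grid."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    H, W = img.shape[0] // cell, img.shape[1] // cell
    b = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hog = np.zeros((H, W, bins))
    for i in range(H):
        for j in range(W):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hog[i, j] = np.bincount(b[sl].ravel(), weights=mag[sl].ravel(),
                                    minlength=bins)
    return hog

def td2dhog(img, keep=4):
    c = dctn(hog_2d(img), axes=(0, 1), norm="ortho")   # 2D DCT per channel
    return c[:keep, :keep, :]                          # low-pass: top-left block

img = np.random.default_rng(10).random((64, 64))
print("TD2DHOG shape:", td2dhog(img).shape)            # (4, 4, 9)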

Proceedings ArticleDOI
01 Dec 2014
TL;DR: The proposed algorithm significantly improves the quality of the side information frames compared with those provided by the advanced block matching frame interpolation in the typical Wyner-Ziv video codecs.
Abstract: In this work, a new algorithm to generate high-quality side information in Wyner-Ziv video coding is proposed. A block-matching algorithm is incorporated into the forward and backward optical flow and warping algorithms to find the forward and backward motion fields used for frame interpolation. In addition, a symmetric optical flow algorithm for frame interpolation is obtained by modifying the parameters in the energy functional of an optical flow algorithm. The average of the interpolated frames estimated using the forward/backward motion fields and the symmetric flow provides a high-quality side information frame for decoding the corresponding Wyner-Ziv frame. The proposed algorithm significantly improves the quality of the side information frames compared with that provided by the advanced block-matching frame interpolation used in typical Wyner-Ziv video codecs, and simulation results show corresponding improvements in side information quality and rate-distortion performance.
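
As a reduced illustration, the sketch below generates side information by symmetric block matching alone: for each block, a motion v is sought so that the previous frame at -v matches the next frame at +v, and the two predictions are averaged. The paper combines block matching with forward/backward and symmetric optical flow; block and search sizes here are illustrative.

import numpy as np

def interpolate(prev, nxt, B=8, R=4):
    """Symmetric block-matching interpolation of the middle frame."""
    H, W = prev.shape
    si = np.zeros_like(prev, dtype=float)
    for y in range(0, H - B + 1, B):
        for x in range(0, W - B + 1, B):
            best, best_v = np.inf, (0, 0)
            for dy in range(-R, R + 1):
                for dx in range(-R, R + 1):
                    if not (0 <= y - dy <= H - B and 0 <= y + dy <= H - B and
                            0 <= x - dx <= W - B and 0 <= x + dx <= W - B):
                        continue
                    a = prev[y - dy:y - dy + B, x - dx:x - dx + B]
                    b = nxt[y + dy:y + dy + B, x + dx:x + dx + B]
                    sad = np.abs(a - b).sum()
                    if sad < best:
                        best, best_v = sad, (dy, dx)
            dy, dx = best_v
            si[y:y + B, x:x + B] = 0.5 * (
                prev[y - dy:y - dy + B, x - dx:x - dx + B] +
                nxt[y + dy:y + dy + B, x + dx:x + dx + B])
    return si

rng = np.random.default_rng(11)
f0 = rng.random((64, 64))
f2 = np.roll(f0, (2, 2), axis=(0, 1))       # global motion of (2, 2)
f1 = np.roll(f0, (1, 1), axis=(0, 1))       # ground-truth middle frame
print("MSE vs true middle frame:", np.mean((interpolate(f0, f2) - f1) ** 2))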

Journal ArticleDOI
TL;DR: Simulation results show that the proposed method can maintain the same classifiability as that of uncompressed data with only a small fraction of the wavelet coefficients.
Abstract: A measure is introduced that predicts the number of coefficients that need to be retained in the discrete wavelet transform of images in order to maintain their classifiability. The criterion is based on the energy content of the wavelet coefficients and the order in which they are scanned. The coefficients are weighted based on their location, acquired by Morton scanning of the two-dimensional transform plane. The proposed criterion has been tested on the MIT-CBCL and AT&T-Olivetti face databases, the Columbia Object Image Library (COIL-20) object database, the MNIST handwritten character recognition database, and the Caltech-101 object image database. To demonstrate the efficiency of the proposed method, several classification experiments are conducted on each database. Simulation results show that the proposed method can maintain the same classifiability as that of the uncompressed data with only a small fraction of the wavelet coefficients.
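
The Morton (Z-order) scan used to order the coefficients is simple bit interleaving, as sketched below; the paper's energy-based weighting of the scanned coefficients is not reproduced here.

import numpy as np

def morton_index(i, j, nbits=16):
    """Interleave the bits of (i, j) to get the Z-order scan position."""
    z = 0
    for b in range(nbits):
        z |= ((i >> b) & 1) << (2 * b + 1) | ((j >> b) & 1) << (2 * b)
    return z

def morton_scan(plane):
    H, W = plane.shape
    order = sorted((morton_index(i, j), i, j)
                   for i in range(H) for j in range(W))
    return np.array([plane[i, j] for _, i, j in order])

coeffs = np.arange(16).reshape(4, 4)
print(morton_scan(coeffs))   # visits 2x2 sub-blocks recursively: 0 1 4 5 2 3 6 7 ...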

Proceedings ArticleDOI
04 May 2014
TL;DR: It is shown through simulation results that the proposed TVWL1 algorithm offers the same robustness with respect to impulsive noise as that achieved by using the recently proposed total variation l1 (TVL1) algorithm, while yielding an improved signal-to-noise ratio (SNR), and hence, improved restoration in image deblurring.
Abstract: A total variation weighted $l_1$ (TVWL1) norm-based image deblurring algorithm is proposed. The proposed algorithm uses a series of data matrices to weight the error signal, and the $l_1$ norms of the resultant error signals are then used to form the fidelity term, while the regularization term remains the conventional total variation regularization. An alternating minimization approach is used to solve the minimization problem comprising the fidelity and regularization terms. It is shown through simulation results that the proposed TVWL1 algorithm offers the same robustness with respect to impulsive noise as that achieved by the recently proposed total variation $l_1$ (TVL1) algorithm, while yielding an improved signal-to-noise ratio (SNR), and hence improved restoration, in image deblurring.
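
For reference, the sketch below implements the TVL1 baseline that the proposed method starts from: gradient descent on ||h*x - y||_1 + lam*TV(x), with the absolute value and the TV term smoothed for differentiability. The weighting of the error signal by a series of data matrices and the alternating minimization of the paper are omitted; the step size, lam, smoothing, and boundary handling are loose, illustrative choices.

import numpy as np
from scipy.signal import fftconvolve

def smooth_abs_grad(u, eps=1e-3):
    return u / np.sqrt(u * u + eps)

def tv_grad(x, eps=1e-3):
    """Gradient of the smoothed total variation sum(sqrt(|grad x|^2 + eps))."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    n = np.sqrt(gx * gx + gy * gy + eps)
    px, py = gx / n, gy / n
    div = (np.diff(px, axis=1, prepend=px[:, :1]) +
           np.diff(py, axis=0, prepend=py[:1, :]))
    return -div

def tvl1_deblur(y, h, lam=0.02, step=0.02, iters=400):
    hf = h[::-1, ::-1]                               # adjoint of the blur
    x = y.copy()
    for _ in range(iters):
        r = fftconvolve(x, h, mode="same") - y       # residual
        g = fftconvolve(smooth_abs_grad(r), hf, mode="same") + lam * tv_grad(x)
        x -= step * g
    return x

rng = np.random.default_rng(12)
x0 = np.zeros((64, 64)); x0[16:48, 16:48] = 1.0      # piecewise-constant scene
h = np.ones((5, 5)) / 25.0                           # box blur
y = fftconvolve(x0, h, mode="same")
y[rng.random(y.shape) < 0.05] = 1.0                  # impulsive (salt) noise
print("MSE observed :", np.mean((y - x0) ** 2))
print("MSE deblurred:", np.mean((tvl1_deblur(y, h) - x0) ** 2))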

Posted Content
TL;DR: A new convex method of hyperspectral image classification is developed based on the sparse unmixing algorithm SUnSAL, for which a pixel-adaptive L1-norm regularization term is introduced.
Abstract: Sparse regression methods have proven effective in a wide range of signal processing problems such as image compression, speech coding, channel equalization, linear regression and classification. In this paper, a new convex method for hyperspectral image classification is developed based on the sparse unmixing algorithm SUnSAL, for which a pixel-adaptive L1-norm regularization term is introduced. To further enhance class separability, the algorithm is kernelized using an RBF kernel, and the final results are improved by a combination of spatial pre- and post-processing operations. It is shown that the proposed method is competitive with state-of-the-art algorithms such as SVM-CK, KSOMP-CK and KSSP-CK.
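
The core sparse-regression step can be sketched per pixel as below: ISTA for min_a 0.5*||Da - y||^2 + lam_pixel*||a||_1 with a per-pixel regularization weight, in the spirit of the pixel-adaptive L1 term added to SUnSAL. The dictionary, the adaptation rule for lam_pixel, and the data are illustrative; the kernelization and spatial pre-/post-processing are not shown.

import numpy as np

def ista(y, D, lam, iters=200):
    """Iterative shrinkage-thresholding for L1-regularized least squares."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        z = a - (D.T @ (D @ a - y)) / L           # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(13)
D = rng.random((50, 8))                           # 50 bands, 8 endmember signatures
a_true = np.zeros(8); a_true[[1, 5]] = [0.7, 0.3] # sparse abundances
y = D @ a_true + 0.01 * rng.standard_normal(50)
lam_pixel = 0.01 * np.linalg.norm(D.T @ y, np.inf)  # simple per-pixel adaptation
print(np.round(ista(y, D, lam_pixel), 3))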

Proceedings ArticleDOI
25 Sep 2014
TL;DR: A new message passing scheme is proposed by incorporating variational Bayes (VB) into the belief propagation algorithm for estimating the time-varying noise distribution parameter in a low-density parity-check decoder.
Abstract: In this work, we investigate the problem of estimating a time-varying noise distribution parameter on a factor graph. A new message passing scheme is proposed by incorporating variational Bayes (VB) into the belief propagation algorithm to estimate the time-varying noise distribution parameter in a low-density parity-check (LDPC) decoder. The scheme can also be used to estimate the correlation noise model parameter in distributed video coding. A Bayesian estimator is used to estimate this parameter by obtaining its posterior distribution given the channel output. The VB algorithm is employed to approximate the complex form of the posterior distribution with a simple distribution. Finally, this distribution is used to derive a closed-form expression for the messages on the augmented factor graph, so that parameter estimation and decoding are carried out online at the same time.
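
The variational-Bayes ingredient can be sketched in isolation as below: a Gamma distribution q(lambda) over an unknown Gaussian noise precision is updated from soft symbol estimates, alternating the two updates as a decoder's message-passing loop would. The toy BPSK setup and Gamma hyperparameters are illustrative; the paper embeds such updates in the augmented factor graph of an LDPC decoder.

import numpy as np

rng = np.random.default_rng(14)
true_prec = 4.0                                   # noise variance 0.25
x = rng.choice([-1.0, 1.0], size=500)             # BPSK symbols
y = x + rng.standard_normal(500) / np.sqrt(true_prec)

a, b = 1e-3, 1e-3                                 # vague Gamma prior on the precision
for _ in range(10):                               # alternate: symbols <-> noise
    prec = a / b                                  # E_q[lambda]
    p1 = 1.0 / (1.0 + np.exp(-2.0 * prec * y))    # posterior P(x = +1 | y, prec)
    ex = 2.0 * p1 - 1.0                           # E[x]; E[x^2] = 1 for BPSK
    a = 1e-3 + 0.5 * y.size
    b = 1e-3 + 0.5 * np.sum(y**2 - 2.0 * y * ex + 1.0)   # sum of E[(y - x)^2]
    # q(lambda) = Gamma(a, b) is the VB message for the noise parameter node
print("estimated precision:", a / b, " true:", true_prec)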

Proceedings ArticleDOI
01 Jun 2014
TL;DR: A space-time code based partial rank affine projection (PRAP) algorithm is proposed and shown to offer a faster convergence speed and a smaller computational burden per iteration than the NLMS algorithm.
Abstract: A space-time code based partial rank affine projection (PRAP) algorithm is proposed. The proposed algorithm uses an input signal for which the input signal matrix $X_k$ is orthogonal. For such an input, the matrix $X_k^T X_k$ is diagonal, and its inverse can be computed trivially, so the proposed algorithm saves a significant amount of computation. Owing to this feature, the proposed PRAP algorithm is shown to offer a faster convergence speed and a smaller computational burden per iteration than the NLMS algorithm.
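
The computational point can be sketched as below: when the columns of $X_k$ are orthogonal, $X_k^T X_k$ is diagonal, so the affine-projection update needs only elementwise divisions instead of a matrix inversion. The orthogonal inputs here are generated by QR factorization for illustration; in the paper they arise from the space-time code.

import numpy as np

rng = np.random.default_rng(15)
L, P = 16, 4
h = rng.standard_normal(L)                      # unknown system
w = np.zeros(L)
mu, delta = 0.5, 1e-8
for _ in range(500):
    Q, _ = np.linalg.qr(rng.standard_normal((L, P)))
    X = Q * rng.uniform(1.0, 3.0, size=P)       # orthogonal (not orthonormal) columns
    d = X.T @ h + 1e-3 * rng.standard_normal(P)
    e = d - X.T @ w                             # a-priori error vector
    diag = np.einsum("ij,ij->j", X, X) + delta  # diagonal of X^T X, only O(LP) work
    w += mu * X @ (e / diag)                    # the inverse is elementwise division
print("relative misalignment:", np.linalg.norm(w - h) / np.linalg.norm(h))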