
Showing papers by "M. Omair Ahmad published in 2015"


Journal ArticleDOI
TL;DR: Partly different from the EE, the SE corresponding to the optimal EE can be improved by increasing the number of antennas, multiplexing a reasonable number of users, narrowing the system bandwidth, or shrinking the cell radius.
Abstract: This brief mainly investigates energy efficiency (EE) and spectrum efficiency (SE) for the uplink massive multiple-input–multiple-output orthogonal frequency-division multiplexing system in a single-cell environment. An approximate SE expression is first derived by employing maximum ratio combining or zero-forcing detection at the base station. Then, the theoretical tradeoff between EE and SE is established after introducing a realistic power consumption model in consideration of both the radiated power and the circuit power. Based on the tradeoff, the optimal EE with respect to SE is derived using the convex optimization theory. Results show that the optimal EE increases by deploying a suitable number of antennas, multiplexing a reasonable number of users, expanding the system bandwidth, or shrinking the cell radius. Partly different from the EE, the SE corresponding to the optimal EE can be improved by increasing the number of antennas, multiplexing a reasonable number of users, narrowing the system bandwidth, or shrinking the cell radius.

46 citations
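
The EE–SE tradeoff described above can be sketched numerically: with a radiated-plus-circuit power model, EE first rises with SE (the fixed circuit power is amortized) and then falls (radiated power grows exponentially with SE). The model and all parameter values below are illustrative assumptions, not the paper's system model.

```python
import numpy as np

def energy_efficiency(se, bandwidth_hz, circuit_power_w, noise_psd_w=1e-13):
    """Energy efficiency (bits/Joule) as a function of spectral efficiency
    (bits/s/Hz). Toy model: radiated power is obtained by inverting the
    per-Hz Shannon capacity, circuit power is a fixed overhead."""
    radiated_psd = noise_psd_w * (2.0 ** se - 1.0)        # invert C/B = log2(1 + SNR)
    throughput = bandwidth_hz * se                        # bits per second
    return throughput / (bandwidth_hz * radiated_psd + circuit_power_w)

se_grid = np.linspace(0.1, 30.0, 3000)
ee = energy_efficiency(se_grid, bandwidth_hz=20e6, circuit_power_w=1.0)
se_star = se_grid[np.argmax(ee)]    # SE at the optimal EE: the tradeoff point
```

The interior maximum `se_star` is the "optimal EE with respect to SE" the abstract refers to; widening the bandwidth or lowering circuit power shifts it, matching the qualitative trends reported.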


Journal ArticleDOI
TL;DR: For the first time, a blind multichannel multiplicative color image watermarking scheme in the sparse domain is proposed, and a statistical model based on the multivariate Cauchy distribution is used to derive an efficient closed-form decision rule for the watermark detector.
Abstract: In recent years, digital watermarking has facilitated the protection of copyright information through embedding hidden information into the digital content. In this brief, for the first time, a blind multichannel multiplicative color image watermarking scheme in the sparse domain is proposed. In order to take into account the cross correlation between the coefficients of the color bands in the sparse domain, a statistical model based on the multivariate Cauchy distribution is used. The statistical model is then used to derive an efficient closed-form decision rule for the watermark detector. Experimental results and theoretical analysis are presented to validate the proposed watermark detector. The performance of the proposed detector is compared with that of the other detectors. The results demonstrate the improved detection rate and high robustness against the commonly used attacks such as JPEG compression, salt and pepper noise, median filtering, and Gaussian noise.

41 citations
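
To illustrate the detection principle, here is a univariate (not the paper's multivariate) Cauchy likelihood-ratio test for a multiplicative watermark. Since scaling a Cauchy variable only rescales its scale parameter, the likelihood ratio has a simple closed form; the embedding strength and scale values are illustrative assumptions.

```python
import numpy as np

def cauchy_logpdf(y, scale):
    # log of the Cauchy pdf f(y; s) = s / (pi * (s^2 + y^2))
    return np.log(scale / (np.pi * (scale ** 2 + y ** 2)))

def watermark_llr(coeffs, watermark, gamma, scale):
    """Log-likelihood ratio for a multiplicative watermark y = x*(1 + gamma*w),
    w in {-1, +1}, with Cauchy-distributed host coefficients x: under H1 the
    i-th coefficient is Cauchy with scale s*(1 + gamma*w_i)."""
    h1 = cauchy_logpdf(coeffs, scale * (1.0 + gamma * watermark))
    h0 = cauchy_logpdf(coeffs, scale)
    return float(np.sum(h1 - h0))

rng = np.random.default_rng(0)
w = rng.choice([-1.0, 1.0], size=5000)
x = rng.standard_cauchy(5000)                 # host coefficients, scale 1
marked = x * (1.0 + 0.2 * w)
llr_present = watermark_llr(marked, w, 0.2, 1.0)
llr_absent = watermark_llr(x, w, 0.2, 1.0)
```

A blind detector compares this statistic against a threshold chosen for a target false-alarm probability; the paper's detector generalizes the density to a multivariate Cauchy over the three color bands.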


Journal ArticleDOI
TL;DR: An improved fast iterative shrinkage thresholding algorithm (IFISTA) for image deblurring is proposed that has an improved convergence rate and improved restoration capability of the unknown image over that of the FISTA algorithm.
Abstract: An improved fast iterative shrinkage thresholding algorithm (IFISTA) for image deblurring is proposed. The IFISTA algorithm uses a positive definite weighting matrix in the gradient function of the minimization problem of the known fast iterative shrinkage thresholding (FISTA) image restoration algorithm. A convergence analysis of the IFISTA algorithm shows that, due to the weighting matrix, the IFISTA algorithm has an improved convergence rate and improved restoration capability of the unknown image over that of the FISTA algorithm. The weighting matrix is predetermined and fixed, and hence, like the FISTA algorithm, the IFISTA algorithm requires only one matrix–vector product operation in each iteration. As a result, the computational burden per iteration of the IFISTA algorithm remains the same as in the FISTA algorithm. Numerical examples are presented that demonstrate the improved performance of the IFISTA algorithm over that of the FISTA and iterative shrinkage thresholding (ISTA) algorithms in terms...

34 citations
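
For reference, the baseline FISTA iteration that IFISTA modifies can be sketched for a generic ℓ1-regularized least-squares problem. This is the standard algorithm, not the paper's weighted variant; the comment marks where IFISTA's fixed positive definite weighting matrix would enter, and the test problem is an arbitrary illustrative one.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Plain FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    IFISTA (per the abstract) additionally inserts a fixed positive definite
    weighting matrix into the gradient step; this sketch is the baseline."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = x_prev = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)            # IFISTA would apply W to this gradient
        x = soft_threshold(y - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 4.0]
b = A @ x_true
x_hat = fista(A, b, lam=0.05)
```

Because the weighting matrix is fixed, the per-iteration cost of the modified step stays one matrix–vector product, exactly as the abstract states.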


Journal ArticleDOI
TL;DR: Several standard objective measures and subjective evaluations show that the proposed method outperforms some of the state-of-the-art speech enhancement methods at high as well as low levels of SNRs.
Abstract: This paper presents a speech enhancement approach, where an adaptive threshold is statistically determined based on Student $t$ modeling of the Teager energy (TE) operated perceptual wavelet packet (PWP) coefficients of noisy speech. In order to obtain the enhanced speech, the threshold thus derived is applied to the PWP coefficients by employing a Student $t$ pdf dependent custom thresholding function, which is designed based on a combination of modified hard and semisoft thresholding functions. Extensive simulations are carried out using the NOIZEUS database to evaluate the effectiveness of the proposed method for car and multi-talker babble noise corrupted speech signals. Several standard objective measures and subjective evaluations including formal listening tests show that the proposed method outperforms some of the state-of-the-art speech enhancement methods at high as well as low levels of SNRs.

28 citations
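
Two of the building blocks named in the abstract have standard definitions that are easy to show: the discrete Teager energy operator and a semisoft (firm) thresholding rule. The paper's custom function blends modified hard and semisoft rules and its threshold is statistically adapted; the versions below are the generic textbook forms.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: Psi[x](n) = x[n]^2 - x[n-1]*x[n+1]."""
    te = np.empty_like(x)
    te[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    te[0], te[-1] = te[1], te[-2]           # replicate at the borders
    return te

def semisoft_threshold(c, lam, mu):
    """Semisoft (firm) thresholding: zero below lam, identity above mu,
    linear interpolation in between (requires mu > lam)."""
    assert mu > lam
    return np.where(np.abs(c) <= lam, 0.0,
                    np.where(np.abs(c) >= mu, c,
                             np.sign(c) * mu * (np.abs(c) - lam) / (mu - lam)))
```

The Teager energy of a constant signal is zero, while it grows with both amplitude and local frequency, which is what makes it useful for separating speech-dominated coefficients from noise before thresholding.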


Proceedings ArticleDOI
01 Oct 2015
TL;DR: The experimental results show that the performance of simple non-weighted fusion preceded by the EER independent anchor based normalization technique is better than that of the weak classifier of the system and is superior to that of two other normalization methods.
Abstract: A multimodal biometric system consolidates multiple biometric sources and mitigates the limitations of the unimodal biometric system. The consolidation of information can be done at various levels of fusion. In this paper, a normalization technique for score-level fusion based on a new anchor, which is computed from the raw score set, has been proposed. This new anchor is independent of the statistical properties of the biometric system (e.g., equal error rate). This work focuses on the improvement of the multimodal biometric system that consists of at least one weak classifier and does not assume prior knowledge of the genuine/impostor score distribution. The experimental results show that the performance of simple non-weighted fusion preceded by our EER independent anchor based normalization technique is better than that of the weak classifier of the system and is superior to that of two other normalization methods.

19 citations
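
Score-level fusion of the kind described above has a simple generic shape: normalize each matcher's raw scores around an anchor computed from the score set itself, then add them without weights. The paper derives its own EER-independent anchor; the median and median absolute deviation below are purely illustrative stand-ins.

```python
import numpy as np

def anchor_normalize(scores, anchor=None):
    """Shift-and-scale scores around an anchor computed from the raw score
    set. The paper proposes a specific EER-independent anchor; the median
    and median absolute deviation here are illustrative substitutes."""
    s = np.asarray(scores, dtype=float)
    if anchor is None:
        anchor = np.median(s)
    spread = np.median(np.abs(s - anchor)) or 1.0   # guard against zero spread
    return (s - anchor) / spread

def sum_fusion(*matcher_scores):
    """Simple non-weighted score-level fusion: normalize each matcher's
    scores independently, then sum them element-wise."""
    return np.sum([anchor_normalize(s) for s in matcher_scores], axis=0)
```

Because each matcher is normalized from its own raw scores, a strong matcher and a weak one end up on a comparable scale before the unweighted sum, which is the mechanism that lets the fusion outperform the weak classifier alone.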


Proceedings ArticleDOI
24 May 2015
TL;DR: A blind watermark detector is proposed in the contourlet domain using the normal inverse Gaussian (NIG) distribution, which models the heavy-tailed, non-Gaussian behaviour of contourlet coefficients well.
Abstract: Digital watermarking has been widely used for the copyright protection of images in multimedia. This paper addresses the blind watermark detection problem in the contourlet domain. It is known that the contourlet coefficients of images have a non-Gaussian property and can be well modelled by non-Gaussian distributions such as the normal inverse Gaussian (NIG). In view of this, we exploit this model to derive closed-form expressions for the test statistics and design an optimum blind watermark detector in the contourlet domain. Through several experiments, the performance of the proposed detector is evaluated in terms of the probabilities of detection and false alarm and compared to that of other existing detectors. It is shown that the proposed detector using the NIG distribution is superior to other detectors in terms of providing a higher rate of detection. It is also shown that the proposed NIG-based detector is more robust than other detectors against attacks such as JPEG compression and Gaussian noise.

19 citations


Proceedings ArticleDOI
24 May 2015
TL;DR: The results show that the proposed denoising method outperforms some of the state-of-the-art methods in terms of both the subjective and objective criteria.
Abstract: A new contourlet-based method is introduced for reducing noise in images corrupted by additive white Gaussian noise. This method takes into account the statistical dependencies among the contourlet coefficients of different scales. In view of this, a non-Gaussian multivariate distribution is proposed to capture the across-scale dependencies of the contourlet coefficients. This model is then exploited in a Bayesian maximum a posteriori estimator to restore the clean coefficients by deriving an efficient closed-form shrinkage function. Experimental results are performed to evaluate the performance of the proposed denoising method using typical noise-free images contaminated by simulated noise. The results show that the proposed method outperforms some of the state-of-the-art methods in terms of both the subjective and objective criteria.

8 citations
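
The Bayesian MAP shrinkage idea in the abstract has a well-known closed-form instance for across-scale dependencies: the Şendur–Selesnick bivariate shrinkage rule for a wavelet coefficient and its coarser-scale parent. It is shown here as a standard example of the same family, not as the paper's contourlet-specific shrinkage function.

```python
import numpy as np

def bivariate_shrink(child, parent, noise_sigma, signal_sigma):
    """Sendur-Selesnick bivariate MAP shrinkage: shrink a coefficient using
    the joint magnitude of itself and its coarser-scale parent. A large
    parent protects a small child coefficient from being zeroed."""
    r = np.sqrt(child ** 2 + parent ** 2)
    factor = np.maximum(r - np.sqrt(3.0) * noise_sigma ** 2 / signal_sigma, 0.0)
    return np.where(r > 0, factor / np.maximum(r, 1e-12), 0.0) * child
```

The key property, as in the paper's estimator, is that the shrinkage depends on coefficients at two scales jointly rather than on each coefficient in isolation.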


Journal ArticleDOI
TL;DR: It is shown that the performance of the proposed algorithm, the MSAIndelFR algorithm, for multiple protein sequence alignment incorporating a new variable gap penalty function is superior to that of the most widely used alignment algorithms, Clustal W2, Clustal Omega, Kalign2, MSAProbs, MAFFT, MUSCLE, ProbCons and Probalign, in terms of both the sum-of-pairs and total column metrics.
Abstract: The alignment of multiple protein sequences is one of the most commonly performed tasks in bioinformatics. In spite of considerable research and efforts that have been recently deployed for improving the performance of multiple sequence alignment (MSA) algorithms, finding a highly accurate alignment between multiple protein sequences is still a challenging problem. We propose a novel and efficient algorithm, called MSAIndelFR, for multiple sequence alignment using the information on the predicted locations of IndelFRs and the computed average log-loss values obtained from IndelFR predictors, each of which is designed for a different protein fold. We demonstrate that the introduction of a new variable gap penalty function based on the predicted locations of the IndelFRs and the computed average log-loss values into the proposed algorithm substantially improves the protein alignment accuracy. This is illustrated by evaluating the performance of the algorithm in aligning sequences belonging to the protein folds for which the IndelFR predictors already exist and by using the reference alignments of the four popular benchmarks, BAliBASE 3.0, OXBENCH, PREFAB 4.0, and SABRE (SABmark 1.65). We have proposed a novel and efficient algorithm, the MSAIndelFR algorithm, for multiple protein sequence alignment incorporating a new variable gap penalty function. It is shown that the performance of the proposed algorithm is superior to that of the most widely used alignment algorithms, Clustal W2, Clustal Omega, Kalign2, MSAProbs, MAFFT, MUSCLE, ProbCons and Probalign, in terms of both the sum-of-pairs and total column metrics.

7 citations
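
The effect of a variable gap penalty can be shown on a toy pairwise alignment: gaps opened inside a predicted indel-flanking region are charged less, so indels are steered toward those regions. The Needleman–Wunsch scoring below, the region list, and the halved gap cost are all illustrative assumptions; the paper's penalty is driven by IndelFR predictor log-loss values and operates inside a full MSA algorithm.

```python
def nw_score(a, b, match=2, mismatch=-1, gap_open=-4, indel_regions=()):
    """Global alignment score (Needleman-Wunsch) with a position-dependent
    gap penalty: gaps aligned against positions of `a` that fall inside a
    predicted indel-flanking region cost half as much (illustrative choice)."""
    def gap_cost(i):            # cost of a gap aligned against a[i]
        in_region = any(lo <= i < hi for lo, hi in indel_regions)
        return gap_open / 2 if in_region else gap_open

    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + gap_cost(i - 1)
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + gap_open
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + s,
                          D[i - 1][j] + gap_cost(i - 1),
                          D[i][j - 1] + gap_open)
    return D[n][m]
```

Marking positions 2–3 of `ACDEF` as an indel-flanking region makes the alignment with `ACF` (which must delete `DE`) score strictly higher than with the uniform penalty, which is the basic mechanism behind the accuracy gains reported.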


Proceedings ArticleDOI
24 May 2015
TL;DR: The alpha-stable distribution provides a good fit for the contourlet coefficients of an image, since it can capture the large peak and heavy tails of the distribution of the empirical data.
Abstract: Speckle reduction has been a prerequisite for many SAR image processing tasks. This work presents a new approach for despeckling of SAR images in the contourlet domain using the alpha-stable distribution. It is shown that the alpha-stable distribution provides a good fit for the contourlet coefficients of an image, since it can capture the large peak and heavy tails of the distribution of the empirical data. This model is then exploited in a Bayesian maximum a posteriori estimator to restore the noise-free contourlet coefficients. The performance of the proposed despeckling method is evaluated using synthetically-speckled and real SAR images. Simulations are carried out using synthetically speckled images to investigate the performance of the proposed method, and compare it with that of some of the existing methods. The experimental results show that the proposed method can provide better preservation of the edges and can yield better visual quality as compared to some of the existing methods.

7 citations


Proceedings ArticleDOI
24 May 2015
TL;DR: A new maximum a posteriori estimator using the vector-based hidden Markov model (VB-HMM) as a prior for the wavelet coefficients of images is proposed by deriving an efficient closed-form expression for the shrinkage function.
Abstract: There are a number of image denoising methods in the wavelet domain using statistical models. It is known that the performance of such methods can be significantly improved by taking into account the statistical dependencies between the wavelet coefficients. It is shown that the vector-based hidden Markov model (VB-HMM) is capable of capturing both the subband marginal distribution and the inter-scale, intra-scale and cross orientation dependencies of the wavelet coefficients. In view of this, we propose a new maximum a posteriori estimator using the VB-HMM as a prior for the wavelet coefficients of images. This is realized by deriving an efficient closed-form expression for the shrinkage function. Experimental results are performed to evaluate the performance of the proposed denoising method. The results demonstrate that the proposed method outperforms some of the state-of-the-art techniques in terms of both the peak signal to noise ratio and perceptual quality.

6 citations


Journal ArticleDOI
TL;DR: A novel scheme to predict indel flanking regions in a protein sequence for a given protein fold, based on a variable-order Markov model is proposed, which is able to predict IndelFRs in the protein sequences with a high accuracy and F1 measure.
Abstract: MOTIVATION Insertion/deletion (indel) and amino acid substitution are two common events that lead to the evolution of and variations in protein sequences. Further, many of the human diseases and functional divergence between homologous proteins are more related to indel mutations, even though they occur less often than the substitution mutations do. A reliable identification of indels and their flanking regions is a major challenge in research related to protein evolution, structures and functions. RESULTS In this article, we propose a novel scheme to predict indel flanking regions in a protein sequence for a given protein fold, based on a variable-order Markov model. The proposed indel flanking region (IndelFR) predictors are designed based on prediction by partial match (PPM) and probabilistic suffix tree (PST), which are referred to as the PPM IndelFR and PST IndelFR predictors, respectively. The overall performance evaluation results show that the proposed predictors are able to predict IndelFRs in the protein sequences with a high accuracy and F1 measure. In addition, the results show that if one is interested only in predicting IndelFRs in protein sequences, it would be preferable to use the proposed predictors instead of HMMER 3.0 in view of the substantially superior performance of the former.
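
Both PPM and PST are refinements of the same core idea: predict the next symbol from the longest context seen in training, backing off to shorter contexts when the long one is unseen. The tiny predictor below (class name and backoff rule are illustrative inventions, far simpler than PPM's escape mechanism or a pruned suffix tree) shows that idea on a symbol sequence.

```python
from collections import defaultdict

class BackoffMarkov:
    """Minimal variable-order Markov predictor with naive backoff: try the
    longest trained context, then progressively shorter ones down to the
    empty context. PPM and PST (used in the paper) refine this idea."""
    def __init__(self, max_order=2):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        for order in range(self.max_order + 1):
            for i in range(order, len(seq)):
                self.counts[seq[i - order:i]][seq[i]] += 1

    def predict(self, context):
        """Most likely next symbol given the (possibly long) context."""
        for order in range(min(self.max_order, len(context)), -1, -1):
            ctx = context[len(context) - order:]
            if ctx in self.counts:
                dist = self.counts[ctx]
                return max(dist, key=dist.get)
        return None
```

Applied per protein fold, such a model assigns low log-loss to residues that fit the fold's local statistics; the IndelFR predictors exploit exactly this to flag indel-flanking regions.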

Proceedings ArticleDOI
24 May 2015
TL;DR: A structural local DCT sparse appearance model is proposed in a particle filter framework that provides superior/similar performance for most of the sequences with reduced computational complexity in l1-norm minimization.
Abstract: The success of sparse representation in face recognition has motivated the development of sparse representation-based appearance models for visual tracking. These sparse representation-based trackers show state-of-the-art performance, but at the cost of computationally expensive ℓ1-norm minimization. This computational cost prevents such trackers from being used in real-time systems, such as surveillance and military operations. With the aim of reducing the computational complexity of ℓ1-norm minimization, a structural local DCT sparse appearance model is proposed in a particle filter framework. Application of DCT on local patches helps to reduce the dimensions of the dictionary as well as candidate samples by using low-pass filtered DCT coefficients. This in turn helps to remove the information relating to occlusion and background clutter, thereby reducing the ambiguity created while computing the confidences of the target samples. The proposed method is evaluated on the challenging image sequences available in the literature and its performance compared with three recent state-of-the-art methods. It is shown that the proposed method provides superior/similar performance for most of the sequences with reduced computational complexity in ℓ1-norm minimization.
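
The dimensionality-reduction step described above, representing a local patch by its low-frequency 2-D DCT coefficients, can be sketched directly. The patch size and number of retained coefficients below are illustrative choices, not values from the paper.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def lowpass_dct_feature(patch, keep=4):
    """Represent a square image patch by its keep x keep low-frequency
    2-D DCT coefficients; high-frequency content (occlusion edges,
    background clutter) is discarded along with the dropped coefficients."""
    n = patch.shape[0]
    C = dct2_matrix(n)
    coeffs = C @ patch @ C.T        # separable 2-D DCT
    return coeffs[:keep, :keep].ravel()
```

An 8x8 patch reduced to 4x4 coefficients shrinks both the dictionary atoms and the candidate samples by a factor of four, which is where the ℓ1-minimization savings come from.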


Book ChapterDOI
22 Jul 2015
TL;DR: Experimental results show that the proposed scheme provides higher detection accuracy than that provided by the state-of-the-art techniques on two sequences from LISA 2010 dataset, while maintaining the real-time detection speed.
Abstract: Multi-resolution vehicle detection usually requires extracting a certain kind of features from each scale of an image pyramid to construct a feature pyramid, which is considered as a computational bottleneck for many object detectors. In this paper, a novel technique for the approximation of feature pyramids by using feature resampling in the 2D discrete Fourier transform domain is presented. Experimental results show that the proposed scheme provides higher detection accuracy than that provided by the state-of-the-art techniques on two sequences from LISA 2010 dataset, while maintaining the real-time detection speed.
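
The core operation behind DFT-domain resampling is cropping a centered 2-D spectrum: keeping only the low frequencies and inverse-transforming yields a smaller, band-limited version of the image. The scale handling below is a minimal sketch of that operation, not the paper's full feature-pyramid approximation scheme.

```python
import numpy as np

def fourier_resample(img, out_h, out_w):
    """Downscale an image by cropping its centered 2-D DFT spectrum and
    inverse-transforming; the amplitude rescaling keeps the mean intensity."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    top, left = (h - out_h) // 2, (w - out_w) // 2
    F_crop = F[top:top + out_h, left:left + out_w]
    out = np.fft.ifft2(np.fft.ifftshift(F_crop)).real
    return out * (out_h * out_w) / (h * w)
```

Because the whole pyramid level is produced from one FFT of the input plus cheap spectrum crops, this avoids recomputing features at every scale, which is the bottleneck the paper targets.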

Journal ArticleDOI
TL;DR: A criterion and an algorithm for determining the number of required features in kernel PCA using only the elements of the kernel matrix are proposed and applied.
Abstract: Feature generation techniques that sort the generated features in terms of their importance, such as principal component analysis (PCA), reduce the problem of feature subset selection to only determining the number of features to be retained. For databases with linearly inseparable classes, kernel PCA can be used as the feature generation method instead of the linear PCA. However, determining the number of features in the kernel space that needs to be retained for preserving the classifiability is a difficult problem, since the data vectors are not available in an explicit form in that space. In this paper, we propose a criterion and an algorithm for determining the number of required features in kernel PCA using only the elements of the kernel matrix. In order to show the effectiveness of the proposed criterion, the new algorithm is applied to the USPS handwritten digit, Yale Face and Caltech 101 databases. The proposed algorithm is also investigated for its robustness to noise that corrupts the data samples.
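
Determining a feature count from the kernel matrix alone can be illustrated with a retained-energy rule: double-center the kernel matrix, take its eigenvalues, and keep enough components to cover a target fraction of the variance. This simple criterion is a common stand-in, not the criterion the paper proposes; the RBF kernel and the 95% threshold are illustrative choices.

```python
import numpy as np

def kpca_num_features(K, energy=0.95):
    """Number of kernel-PCA components retaining a given variance fraction,
    computed only from the kernel matrix (no explicit feature vectors)."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # double-centering in feature space
    eig = np.sort(np.linalg.eigvalsh(Kc))[::-1]
    eig = np.clip(eig, 0.0, None)           # discard negative numerical noise
    cum = np.cumsum(eig) / np.sum(eig)
    return int(np.searchsorted(cum, energy) + 1)

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)
```

Data that is essentially one-dimensional needs a single component under a linear kernel, while an identity-like kernel (all points mutually dissimilar) forces nearly all components to be kept, so the rule adapts to the classifiability of the data in the kernel space.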

Proceedings ArticleDOI
03 May 2015
TL;DR: A new image denoising method in the contourlet domain is introduced in which the contourlet coefficients of images are modeled by using the Bessel k-form prior, and a characteristic function-based technique is used to estimate the distribution parameters.
Abstract: Statistical image modeling has attracted great attention in the field of image denoising. In this work, a new image denoising method in the contourlet domain is introduced in which the contourlet coefficients of images are modeled by using the Bessel k-form prior. A noisy image is decomposed into a low frequency approximation sub-image and a series of high frequency detail sub-images at different scales and directions via the contourlet transform. To estimate the noise-free coefficients in detail subbands, a Bayesian estimator is developed utilizing the Bessel k-form distribution. In order to estimate the parameters of the distribution, a characteristic function-based technique is used. Simulation results on standard test images show improved performance both in visual quality and in terms of the peak signal-to-noise ratio and structural similarity index as compared to some of the existing denoising methods. The proposed method also achieves an excellent balance between noise suppression and details preservation.
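
The parameter-fitting step can be illustrated with moment matching, using the relations commonly stated for the Bessel K-form density (variance = p·c, excess kurtosis = 3/p). Note this substitutes a simpler moment-based estimator for the paper's characteristic-function technique, and the moment relations are quoted from the BKF literature rather than derived here.

```python
import numpy as np

def bkf_moment_estimate(coeffs):
    """Moment-matching estimates of Bessel K-form parameters (shape p,
    scale c), assuming Var = p*c and excess kurtosis = 3/p. A stand-in
    for the characteristic-function technique used in the paper."""
    x = np.asarray(coeffs, dtype=float)
    var = np.var(x)
    kurt_excess = np.mean((x - x.mean()) ** 4) / var ** 2 - 3.0
    p = 3.0 / max(kurt_excess, 1e-6)        # guard against near-Gaussian data
    c = var / p
    return p, c

# BKF variates arise as a Gaussian scale mixture with Gamma mixing:
rng = np.random.default_rng(42)
s = rng.gamma(shape=2.0, scale=1.5, size=200000)
x = np.sqrt(s) * rng.standard_normal(200000)
p_hat, c_hat = bkf_moment_estimate(x)       # true values: p = 2.0, c = 1.5
```

The sharper-than-Gaussian peak and heavy tails of contourlet detail coefficients correspond to small p, which is what the Bayesian estimator exploits when shrinking noisy coefficients.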