Author

Omar Farooq

Bio: Omar Farooq is an academic researcher at Aligarh Muslim University. He has contributed to research on the topics of wavelets and the wavelet transform. He has an h-index of 20 and has co-authored 159 publications receiving 1,665 citations. Previous affiliations of Omar Farooq include Dublin Institute of Technology and the American Public University System.


Papers
Journal ArticleDOI
TL;DR: The wavelet packet transform's multiresolution capabilities are used to derive new sets of features, which are found to be superior to Mel-frequency cepstral coefficients (MFCC) on unvoiced phoneme classification problems.
Abstract: A new filter structure using admissible wavelet packet analysis is presented. These filters have the advantage of a frequency-band spacing similar to the Mel scale. Further, the wavelet packet transform's multiresolution capabilities are used to derive new sets of features, which are found to be superior to Mel-frequency cepstral coefficients (MFCC) on unvoiced phoneme classification problems.
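The idea of wavelet-packet subband energies as speech features can be sketched in a few lines. This is an illustrative toy, not the paper's method: it uses a plain Haar filter pair and a full (unpruned) packet tree, whereas the paper uses an admissible packet tree pruned to approximate Mel band spacing.

```python
import math

def haar_step(x):
    """One Haar analysis step: split a signal into approximation
    (low-pass) and detail (high-pass) halves."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d

def wavelet_packet_energies(x, depth):
    """Full wavelet packet decomposition to the given depth, returning
    the log-energy of every leaf subband as a feature vector."""
    nodes = [x]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt.extend([a, d])
        nodes = nxt
    return [math.log(sum(c * c for c in node) + 1e-12) for node in nodes]

# 8-sample toy frame, depth-2 packet tree -> 4 subband features
feats = wavelet_packet_energies([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0], 2)
```

Because the Haar pair is orthonormal, the leaf subband energies sum to the signal energy (Parseval), which is what makes per-band energy a meaningful feature.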

170 citations

Proceedings ArticleDOI
15 Mar 2012
TL;DR: A detector algorithm for automatic detection of epileptic seizures is designed using a wavelet-based feature extraction technique; the feature NCOV yielded better performance than the commonly used COV, σ²/μ².
Abstract: The proposed work designs a detector algorithm for automatic detection of epileptic seizures. A wavelet-based feature extraction technique is adopted: epochs of EEG are decomposed using the discrete wavelet transform (DWT) up to 5 levels of decomposition. Relative energy values and a normalized coefficient of variation (NCOV), σ²/μₐ, are computed on the wavelet coefficients acquired in the frequency range of 0–32 Hz from both seizure and non-seizure segments. The performance of NCOV was compared with the traditionally used coefficient of variation, COV (σ²/μ²); NCOV yielded better performance. The algorithm was evaluated on 5 subjects from the CHB-MIT scalp EEG database.
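The two dispersion measures compared in the abstract are easy to sketch. This is a minimal interpretation, assuming μₐ denotes the mean absolute value of the coefficients and using population (not sample) variance:

```python
def cov_features(coeffs):
    """Coefficient-of-variation measures over wavelet coefficients:
    COV  = sigma^2 / mu^2      (classical, unstable when mu -> 0)
    NCOV = sigma^2 / mean(|c|) (normalised by mean absolute value)"""
    n = len(coeffs)
    mu = sum(coeffs) / n
    var = sum((c - mu) ** 2 for c in coeffs) / n  # population variance
    mu_abs = sum(abs(c) for c in coeffs) / n
    cov = var / (mu ** 2) if mu != 0 else float("inf")
    ncov = var / mu_abs if mu_abs != 0 else 0.0
    return cov, ncov
```

Wavelet detail coefficients of EEG are close to zero-mean, so the μ² denominator of COV can blow up; normalising by the mean absolute amplitude keeps the measure bounded, which plausibly explains NCOV's better behaviour.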

108 citations

Proceedings ArticleDOI
12 Mar 2010
TL;DR: The characteristic of the proposed watermarking scheme is that blind recovery of the watermark is possible at the receiver and the embedded watermark can be fully removed; hence, the ECG can be viewed by a clinician with zero distortion, which is an essential requirement for biomedical data.
Abstract: Use of wireless technology has made biomedical data vulnerable to attacks such as tampering and hacking. This paper proposes the use of digital watermarking to increase the security of an ECG signal transmitted through a wireless network. A low-frequency chirp signal is used to embed the watermark, which is the patient's identification taken as a 15-digit code. The characteristic of the proposed watermarking scheme is that blind recovery of the watermark is possible at the receiver and the embedded watermark can be fully removed. Hence, the ECG can be viewed by a clinician with zero distortion, which is an essential requirement for biomedical data. Further, tampering such as noise addition and filtering attacks can also be detected at the receiver.
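A removable additive chirp watermark can be sketched as follows. The digit-to-segment mapping, embedding strength `alpha`, and chirp parameters below are illustrative assumptions; the abstract does not specify the exact embedding rule:

```python
import math

def chirp(n, f0=0.01, k=0.0005):
    """Low-frequency linear chirp carrier (normalised frequency)."""
    return [math.sin(2 * math.pi * (f0 + k * t) * t) for t in range(n)]

def watermark(n, patient_id):
    """Assumed mapping: each of the 15 ID digits scales one segment
    of the chirp carrier."""
    c = chirp(n)
    seg = max(1, n // len(patient_id))
    return [int(patient_id[min(i // seg, len(patient_id) - 1)]) * c[i]
            for i in range(n)]

def embed(ecg, patient_id, alpha=0.05):
    """Additive watermark embedding."""
    w = watermark(len(ecg), patient_id)
    return [x + alpha * v for x, v in zip(ecg, w)]

def remove(marked, patient_id, alpha=0.05):
    """Once the receiver has recovered the ID, it regenerates the
    identical watermark and subtracts it, restoring the ECG."""
    w = watermark(len(marked), patient_id)
    return [x - alpha * v for x, v in zip(marked, w)]
```

The key property the abstract claims is visible here: since the watermark is fully determined by the recovered 15-digit ID, subtraction removes it completely and the clinician sees an (essentially) undistorted ECG.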

85 citations

Journal ArticleDOI
TL;DR: A new wavelet-based denoising approach using cubical thresholding is proposed to reduce noise from the EEG signal prior to analysis, enabling reliable detection of nonconvulsive seizures.
Abstract: The detection of nonconvulsive seizures (NCSz) is a challenge because of the lack of physical symptoms, which may delay the diagnosis of the disease. Many researchers have reported automatic detection of seizures. However, few investigators have concentrated on the detection of NCSz. This article proposes a method for reliable detection of NCSz. The electroencephalography (EEG) signal is usually contaminated by various nonstationary noises, so signal denoising is an important preprocessing step in the analysis of such signals. In this study, a new wavelet-based denoising approach using cubical thresholding has been proposed to reduce noise from the EEG signal prior to analysis. Three statistical features were extracted from wavelet frequency bands encompassing the frequency ranges of 0–8, 8–16, 16–32, and 0–32 Hz. The extracted features were used to train a linear classifier to discriminate between normal and seizure EEGs. The performance of the method was tested on a database of nine patients with 24 seizures in 80 hours of EEG recording. All the seizures were successfully detected, and the false-positive rate was found to be 0.7 per hour.
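The general shape of wavelet-domain denoising can be illustrated with the classical soft-thresholding rule and the universal threshold λ = σ·sqrt(2 ln N). This is a stand-in for illustration only; the paper's cubical thresholding function is a different rule whose formula is not given in the abstract:

```python
import math

def soft_threshold(coeffs, sigma):
    """Classical soft thresholding: shrink each wavelet coefficient
    toward zero by lam = sigma * sqrt(2 ln N), zeroing small ones."""
    lam = sigma * math.sqrt(2 * math.log(len(coeffs)))
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]
```

Coefficients below the threshold (mostly noise) are set to zero, while large coefficients (signal structure) are kept, slightly shrunk; the denoised EEG is then rebuilt by the inverse wavelet transform.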

74 citations

Journal ArticleDOI
TL;DR: A novel morphological feature extraction technique based on the local binary pattern (LBP) operator is proposed; LBP assigns a unique decimal value to a sample point by weighing the binary outcomes of thresholding the neighboring samples against the present sample point.
Abstract: Epilepsy, a neurological disorder of the brain, is widely diagnosed using the electroencephalography (EEG) technique. EEG signals are nonstationary in nature and show abnormal neural activity during the ictal period. Seizures can be identified by analyzing the EEG signal and extracting features that capture these abnormal activities. The present work proposes a novel morphological feature extraction technique based on the local binary pattern (LBP) operator. LBP assigns a unique decimal value to a sample point by weighing the binary outcomes of thresholding the neighboring samples against the present sample point. These LBP values help capture the rising and falling edges of the EEG signal, thus providing a morphologically featured discriminating pattern for epilepsy detection. In the present work, the variability in the LBP values is measured by calculating the sum of absolute differences of consecutive LBP values. The interquartile range is calculated over the preprocessed EEG signal to provide a dispersion measure of the signal. For classification, a K-nearest neighbor classifier is used, and the performance is evaluated on 896.9 hours of data from the CHB-MIT continuous EEG database. A mean accuracy of 99.7% and a mean specificity of 99.8% are obtained, with an average false detection rate of 0.47/h and a sensitivity of 99.2% for 136 seizures.
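The 1-D LBP operator described above can be sketched directly. The neighborhood radius and bit ordering below are assumptions, since the abstract does not specify them:

```python
def lbp_1d(x, radius=3):
    """1-D local binary pattern: threshold the 2*radius neighbours of
    each sample against the centre sample and weigh the binary
    outcomes by powers of two, giving one decimal code per sample."""
    codes = []
    for i in range(radius, len(x) - radius):
        neigh = x[i - radius:i] + x[i + 1:i + 1 + radius]
        code = 0
        for bit, v in enumerate(neigh):
            if v >= x[i]:          # binary outcome of thresholding
                code |= 1 << bit   # weigh by power of two
        codes.append(code)
    return codes

def lbp_variability(x, radius=3):
    """Sum of absolute differences of consecutive LBP codes, the
    variability measure used as a seizure feature."""
    codes = lbp_1d(x, radius)
    return sum(abs(a - b) for a, b in zip(codes, codes[1:]))
```

On a monotone ramp every sample sees the same left-below/right-above pattern, so all codes are equal and the variability is zero; a seizure-like signal with many rising and falling edges produces rapidly changing codes and a large variability.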

72 citations


Cited by
01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts of cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.

2,188 citations

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the incapability of existing descriptors to capture spatial relationships between the concepts represented, or by their incapability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing the use of co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.

2,134 citations

Journal ArticleDOI
TL;DR: The paper focuses on the use of principal component analysis in typical chemometric areas but the results are generally applicable.
Abstract: Principal component analysis is one of the most important and powerful methods in chemometrics as well as in a wealth of other areas. This paper provides a description of how to understand, use, and interpret principal component analysis. The paper focuses on the use of principal component analysis in typical chemometric areas but the results are generally applicable.

1,622 citations

Journal ArticleDOI
01 Oct 1980

1,565 citations