
Showing papers in "International Journal of Image, Graphics and Signal Processing in 2014"


Journal ArticleDOI
TL;DR: This paper compares these operators by checking the Peak Signal to Noise Ratio (PSNR) and Mean Squared Error (MSE) of the resultant images, and finds the Canny operator to be the most accurate at edge detection.
Abstract: Edge detection is a vital task in digital image processing. It makes image segmentation and pattern recognition easier, and it also helps in object detection. Many edge detectors are available for pre-processing in computer vision, but Canny, Sobel, Laplacian of Gaussian (LoG), Roberts and Prewitt are the most widely applied algorithms. This paper compares these operators by checking the Peak Signal to Noise Ratio (PSNR) and Mean Squared Error (MSE) of the resultant images. It evaluates the performance of each algorithm in both Matlab and Java. A set of four universally standardized test images is used for the experimentation. The PSNR and MSE results are numeric values, on the basis of which the performance of the algorithms is identified. The time required for each algorithm to detect edges is also documented. After the experimentation, the Canny operator was found to be the best in edge detection accuracy. Index Terms—Canny operator, Edge Detectors, Laplacian of Gaussian, MSE, PSNR, Sobel operator.
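For reference, the two fidelity measures can be computed as follows. This is a minimal NumPy sketch of the standard definitions, not the authors' Matlab/Java code.

import numpy as np

def mse(original, processed):
    # Mean squared error between two equal-sized 8-bit images
    diff = original.astype(np.float64) - processed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, processed, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the original
    m = mse(original, processed)
    if m == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / m)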

86 citations


Journal ArticleDOI
TL;DR: The need for a reference threshold and how its value is calculated are explained mathematically, the various factors on which the reference threshold value depends are discussed, and experimental results describe the selection of the reference threshold for a palmprint biometric system.
Abstract: In biometric systems, the reference threshold is defined as a value that can decide the authenticity of a person, i.e. whether the person is genuine or an intruder. The statistical calculation of values such as the reference threshold, FAR (False Acceptance Rate) and FRR (False Rejection Rate) is required for a real-time automated biometric authentication system, because measurements of biometric features are statistical values. In this paper, the need for a reference threshold and how its value is calculated are explained mathematically, and the various factors on which the reference threshold value depends are discussed. It is also explained how the selection of the correct reference threshold value plays an important role in an authentication system. Experimental results describe the selection of the reference threshold value for a palmprint biometric system.
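A hedged sketch of how a reference threshold can be selected from genuine and impostor score samples (an equal-error-rate style choice; the paper's own derivation and the factors it analyzes are not reproduced here):

import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    # Scores at or above the threshold are accepted (higher score = better match)
    far = np.mean(np.asarray(impostor_scores) >= threshold)  # intruders accepted
    frr = np.mean(np.asarray(genuine_scores) < threshold)    # genuine users rejected
    return far, frr

def reference_threshold(genuine_scores, impostor_scores):
    # Pick the threshold where FAR and FRR are closest (near the equal error rate)
    candidates = np.unique(np.concatenate([genuine_scores, impostor_scores]))
    errs = [abs(np.subtract(*far_frr(genuine_scores, impostor_scores, t)))
            for t in candidates]
    return candidates[int(np.argmin(errs))]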

39 citations


Journal ArticleDOI
TL;DR: The relative merits of different wavelet transform techniques are evaluated using the objective fidelity measures PSNR and MSE; the results provide a basis for application developers to choose the wavelet family best matching their image compression application.
Abstract: A vital problem in evaluating the picture quality of an image compression system is the difficulty of describing the amount of degradation in the reconstructed image. Wavelet transforms are sets of mathematical functions that have established their viability in image compression applications owing to the computational simplicity that comes with their filter bank implementation. The choice of wavelet family depends on the application and the content of the image. The proposed work applies different hand-designed wavelet families such as Haar, Daubechies, Biorthogonal, Coiflets and Symlets to a variety of benchmark images. The selected benchmark images are decomposed twice using the appropriate family of wavelets to produce the approximation and detail coefficients. The highly accurate approximation coefficients so produced are then quantized and Huffman encoded to eliminate psychovisual and coding redundancies, while the less accurate detail coefficients are discarded. In this paper the relative merits of the different wavelet transform techniques are evaluated using the objective fidelity measures PSNR and MSE; the results provide a basis for application developers to choose the wavelet family best matching their image compression application.
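A minimal PyWavelets sketch of the pipeline described above (two-level decomposition, quantization of the approximation coefficients, discarded details); the wavelet choice, quantization step and the omitted Huffman stage are illustrative assumptions, not the paper's exact settings:

import numpy as np
import pywt

def compress(image, wavelet='haar', step=8):
    # Two-level 2D wavelet decomposition
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=2)
    approx, details = coeffs[0], coeffs[1:]
    # Uniformly quantize the approximation coefficients (a real codec would
    # then Huffman-encode the integer symbols); discard the detail subbands
    q_approx = np.round(approx / step).astype(np.int32)
    zero_details = [tuple(np.zeros_like(d) for d in lvl) for lvl in details]
    return q_approx, zero_details

def decompress(q_approx, zero_details, wavelet='haar', step=8):
    coeffs = [q_approx.astype(np.float64) * step] + zero_details
    return pywt.waverec2(coeffs, wavelet)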

38 citations


Journal ArticleDOI
TL;DR: This paper presents a secure and robust watermarking technique for color images using Discrete Wavelet Transformation and shows that the technique is robust against various common image processing attacks.
Abstract: Information hiding in digital media such as audio, video and/or images in order to establish owner rights and protect copyrights, commonly known as digital watermarking, has received considerable attention from researchers over the last few decades, and a lot of work has been done accordingly. A number of schemes and algorithms have been proposed and implemented using different techniques. The effectiveness of a technique depends on the host data values chosen for information hiding and the way the watermark is embedded in them. However, in view of the threats posed by online pirates, the robustness and security of the underlying watermarking techniques have always been a major concern of researchers. This paper presents a secure and robust watermarking technique for color images using the Discrete Wavelet Transformation. The results obtained show that the technique is robust against various common image processing attacks.
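As a rough illustration of the general DWT watermarking idea (additive embedding in a detail subband), not the paper's specific color-image scheme; the strength alpha and the choice of the cH subband are assumptions:

import numpy as np
import pywt

def embed_watermark(cover, bits, alpha=0.1, wavelet='haar'):
    # One-level DWT; embed watermark bits into the horizontal-detail subband
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(np.float64), wavelet)
    h = cH.flatten()
    # Map bits {0, 1} to {-1, +1} and embed multiplicatively scaled terms
    w = 2.0 * np.asarray(bits[:h.size], dtype=np.float64) - 1.0
    h[:w.size] += alpha * np.abs(h[:w.size]) * w
    return pywt.idwt2((cA, (h.reshape(cH.shape), cV, cD)), wavelet)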

32 citations


Journal ArticleDOI
TL;DR: A comparison of speaker identification rates with and without the silence removal technique shows around a 20% increase in identification rate when the proposed algorithm is applied.
Abstract: In this paper we propose a composite silence removal technique comprising a short-time energy method and a statistical method. The performance of the proposed algorithm is compared with the Short Time Energy (STE) algorithm and the statistical method under varying Signal to Noise Ratio (SNR). At low SNR the performance of the proposed algorithm is considerably better than that of the STE and statistical methods. We have applied the proposed algorithm in the pre-processing stage of a speaker identification system. A comparison of speaker identification rates with and without the silence removal technique shows around a 20% increase in identification rate when the proposed algorithm is applied. Index Terms—End point detection, short time energy, Gaussian distribution, signal to noise ratio, speaker identification, mel frequency cepstral coefficient, Gaussian mixture model.
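A minimal sketch of the short-time-energy stage alone (the statistical, Gaussian-based stage of the composite method is omitted); the frame length and energy ratio are illustrative assumptions:

import numpy as np

def remove_silence(signal, frame_len=256, energy_ratio=0.1):
    # Split the signal into non-overlapping frames and compute short-time energy
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    ste = np.sum(frames.astype(np.float64) ** 2, axis=1)
    # Keep frames whose energy exceeds a fraction of the peak frame energy
    return frames[ste > energy_ratio * ste.max()].ravel()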

20 citations


Journal ArticleDOI
TL;DR: A review of speaker recognition and emotion recognition based on the past ten years of research work, with a detailed study of these issues, is presented in this paper.
Abstract: Speech processing has developed into one of the vital application areas of digital signal processing. Speaker recognition is the methodology of automatically identifying who is speaking based on speaker-specific characteristics contained in speech waves. This technique makes it possible to use a speaker's voice to verify their identity and control access to services, for example voice dialing, information services, voice mail, and security control for confidential information. A review of speaker recognition and emotion recognition is performed based on the past ten years of research work, covering both text-independent and text-dependent speaker recognition. There are many prosodic features of the speech signal that depict the emotion of a speaker. A detailed study of these issues is presented in this paper. Index Terms—Emotion recognition, feature extraction, speaker recognition.

20 citations


Journal ArticleDOI
TL;DR: The most suitable feature set for accurate classification was identified, with the Shape-nColor feature set outperforming the others in almost all instances of classification.
Abstract: This research is aimed at evaluating shape and color features using the most commonly used neural network architectures for cereal grain classification. The classification accuracy of shape and color features with neural networks was evaluated for four paddy (rice) grain types, viz. Karjat-6, Ratnagiri-2, Ratnagiri-4 and Ratnagiri-24. Algorithms were written to extract the features from high-resolution images of kernels of the four grain types and use them as input features for classification. Different feature models were tested for their ability to classify these cereal grains, and the effect of different parameters on classification accuracy was studied. The most suitable feature set for accurate classification was identified; the Shape-nColor feature set outperformed the others in almost all instances of classification.

16 citations


Journal ArticleDOI
TL;DR: Facial expressions are recognized by integrating features derived from the Grey Level Co-occurrence Matrix with a new structural approach derived from distinct LBPs on a 3 × 3 first-order compressed image (FCI).
Abstract: Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. This paper presents recognition of facial expressions by integrating features derived from the Grey Level Co-occurrence Matrix (GLCM) with a new structural approach derived from distinct LBPs (DLBP) on a 3 × 3 first-order compressed image (FCI). The proposed method precisely recognizes the 7 categories of expressions, i.e. neutral, happiness, sadness, surprise, anger, disgust and fear. The method contains three phases. In the first phase each 5 × 5 sub-image is compressed into a 3 × 3 sub-image. The second phase derives two distinct LBPs (DLBP) using the triangular patterns between the upper and lower parts of the 3 × 3 sub-image. In the third phase a GLCM is constructed based on the DLBPs, and feature parameters are evaluated for precise facial expression recognition. The derived DLBP is effective because it integrates with the GLCM and provides better classification performance. The proposed method overcomes the disadvantages of statistical and formal LBP methods in estimating facial expressions. The experimental results demonstrate the effectiveness of the proposed method on facial expression recognition.
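For orientation, the GLCM feature stage might look as below with scikit-image; note that this sketch computes a GLCM on an ordinary grey-level image, whereas the paper builds it from the DLBP codes of the compressed sub-images:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, levels=256):
    # Co-occurrence matrix at distance 1 in two directions, then the usual
    # scalar feature parameters averaged over the directions
    glcm = graycomatrix(gray_image, distances=[1],
                        angles=[0, np.pi / 2], levels=levels, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}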

16 citations


Journal ArticleDOI
TL;DR: In this paper, the urban growth of the Bangalore region is analyzed and discussed using multi-temporal and multi-spectral Landsat satellite images; the change detection is studied over a period of 39 years, and the region of interest covers an area of 2182 km².
Abstract: In this paper, the urban growth of the Bangalore region is analyzed and discussed using multi-temporal and multi-spectral Landsat satellite images. Urban growth analysis helps in understanding the change detection of the Bangalore region. The change detection is studied over a period of 39 years, and the region of interest covers an area of 2182 km². The main cause of urban growth is the increase in population. In India, rapid urbanization is witnessed due to an increase in population, and continuous development has affected the existence of natural resources; observing and monitoring natural resources (land use) therefore plays an important role, and researchers use remote sensing data continuously to analyze change detection. The main objective of this study is to monitor land cover changes of Bangalore district, covering rural and urban regions, using multi-temporal and multi-sensor Landsat multi-spectral scanner (MSS), Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) images captured in the years 1973, 1992, 1999, 2002, 2005, 2008 and 2011. Temporal changes were determined using the maximum likelihood classification method. The classification results contain four land cover classes, namely built-up, vegetation, water and barren land. The results indicate that the region has become densely developed, which has resulted in a decrease of the water and vegetation regions. The continuous transformation of barren land to built-up regions has affected water and vegetation regions. Overall, from 1973 to 2011 the percentage of urban region increased from 4.6% to 25.43%, mainly due to urbanization.
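A hedged sketch of per-pixel maximum likelihood classification under Gaussian class models; the class means and covariances would come from training samples for each land cover class (this is the textbook form, not the authors' exact workflow):

import numpy as np
from scipy.stats import multivariate_normal

def max_likelihood_classify(pixels, class_stats):
    # pixels: (n, bands) array; class_stats: {name: (mean_vector, covariance)}
    # estimated from training polygons for each land cover class
    names = list(class_stats)
    log_likelihoods = np.column_stack(
        [multivariate_normal(mean, cov).logpdf(pixels)
         for mean, cov in class_stats.values()])
    # Assign each pixel to the class with the highest likelihood
    return np.array(names)[np.argmax(log_likelihoods, axis=1)]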

15 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid wavelet transform matrix is formed from two component orthogonal transforms: a base transform that contributes the global features of an image, and a second transform that contributes the local features.
Abstract: In this paper, image compression using a hybrid wavelet transform is proposed. The hybrid wavelet transform matrix is formed from two component orthogonal transforms: a base transform, which contributes the global features of an image, and a second transform, which contributes the local features. Here the base transform is varied to observe its effect on image quality at different compression ratios. Different transforms, namely the Discrete Kekre Transform (DKT), Walsh, Real-DFT, Sine, Hartley and Slant transforms, are chosen as base transforms. They are combined with the Discrete Cosine Transform (DCT), which contributes the local features of an image. The sizes of the component orthogonal transforms are varied as 16-16, 32-8 and 64-4 to generate a hybrid wavelet transform of size 256×256. Results of the different combinations are compared, and it has been observed that DKT as a base transform combined with DCT gives the best results when both component transforms are of size 16×16.

15 citations


Journal ArticleDOI
TL;DR: In this paper, the state-of-the-art methods of image denoising using wavelet thresholding are reviewed and compared on the basis of peak signal to noise ratio and visual quality of images.
Abstract: Image denoising using the wavelet transform has been successful because the wavelet transform generates a large number of small coefficients and a small number of large coefficients. The basic denoising algorithm using the wavelet transform consists of three steps: first the wavelet transform of the noisy image is computed, then thresholding is performed on the detail coefficients in order to remove noise, and finally the inverse wavelet transform of the modified coefficients is taken. This paper reviews state-of-the-art methods of image denoising using wavelet thresholding. An experimental analysis of the wavelet-based methods VisuShrink, SureShrink, BayesShrink, ProbShrink, BlockShrink and NeighShrink SURE is performed. These wavelet-based methods are also compared with spatial domain methods such as the median filter and the Wiener filter. Results are evaluated on the basis of Peak Signal to Noise Ratio and the visual quality of images. In the experiments, wavelet-based methods perform better than spatial domain methods, and in the wavelet domain the recent methods ProbShrink, BlockShrink and NeighShrink SURE perform better than the other wavelet-based methods.
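As a concrete reference for the three-step algorithm, a minimal VisuShrink-style sketch with PyWavelets; the wavelet, decomposition level and soft-thresholding choice are illustrative assumptions:

import numpy as np
import pywt

def visushrink_denoise(noisy, wavelet='db8', level=2):
    # Step 1: wavelet transform of the noisy image
    coeffs = pywt.wavedec2(noisy.astype(np.float64), wavelet, level=level)
    # Estimate the noise standard deviation from the finest diagonal subband
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    # Step 2: universal (VisuShrink) threshold applied to detail coefficients
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode='soft') for d in lvl)
        for lvl in coeffs[1:]]
    # Step 3: inverse wavelet transform of the modified coefficients
    return pywt.waverec2(new_coeffs, wavelet)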

Journal ArticleDOI
TL;DR: A new method that can be used in the electromagnetic protection of processed data is a software solution involving the use of appropriate computer fonts; appropriate shapes for the letter characters of such fonts are recommended.
Abstract: In its operation, every electrical device generates electromagnetic disturbance signals. They can be due to the operation of components of the device (step motors, heaters, control circuits, or electronic circuits). Quite often such signals have the characteristics of the data processed on such devices; they can have the form of a text. In each case, such signals are undesirable. However, they can be used to reproduce such data or, in other words, to conduct the process of electromagnetic infiltration. In the case of video signals (the graphic mode of a computer, a laser printer), the reproduced data can be presented in the form of graphic images that can be easily assimilated by people. Such images are transformed in order to find the data that is of interest. Reproduction of such data may lead to a disclosure of classified information. There are many solutions intended to counter the process of reproduction of such data. Such solutions are implemented in the design of equipment and influence, to a lesser or greater extent, the appearance of the equipment, as well as the related organizational methods. A new method that can be used in the electromagnetic protection of processed data is a software solution. It involves the use of appropriate computer fonts. The article presents the possibilities related to shaping the form of video signals. For this purpose, appropriate shapes of the letter characters of computer fonts are recommended. Unlike characters in standard fonts (Arial and Times New Roman), they do not have unique decorative elements (serifs), such as hooks, connectors, heels, arches, and ribbons, and consist of only vertical and horizontal lines [5]. There are no slanted and crooked lines. Due to this, the characters are often very similar, which greatly contributes to the impossibility of differentiating between the letter characters in a reproduced image that is filled with noise and numerous disturbances. The graphic elements being searched for, having the form of strings of letters, cannot be read. Digital image processing methods intended to improve the quality of the image are quite ineffective. In searching for graphic characters, such as computer font letter and digit characters, one can use methods based on the similarity of a template to a portion of the analyzed image. However, when special fonts are used, the correlation method generates many false decisions, which also prevents reading the text data.

Journal ArticleDOI
TL;DR: A symmetric image encryption scheme based on bit-wise operations (XORing and shifting) using dynamic substitution and transposition boxes, which provides additional protection for the secret data.
Abstract: This paper presents a symmetric image encryption scheme based on bit-wise operations (XORing and shifting). The basic idea is a block ciphering technique (the size of each block is 4 bytes) to cipher the secret bytes; the ciphered bytes are then shuffled among N positions (N is the size of the secret file). The scheme is a combination of substitution and transposition techniques, which provides additional protection for the secret data. The substitution and transposition are done using a dynamic substitution box (SBOX) and transposition box (TBOX), which are generated using the secret key and made to vary for each block during ciphering. The size of the encrypted data is the same as the size of the secret data, and the proposed scheme has been tested on different images. We also present security analyses, such as key sensitivity analysis, statistical analysis, and differential analysis, to prove the strength of our algorithm against cryptanalysis.

Journal ArticleDOI
TL;DR: This paper proposes a simple optic disc segmentation method, based on K-means clustering refined with adaptive morphology, that quickly detects the optic disc area and segments the blood vessels.
Abstract: Detection of the optic disc area is complex because it is located in an area that can be mistaken for pathological blood vessels during segmentation, and thus a dedicated method is required to detect it. This paper proposes optic disc segmentation using a method that has not been used for this purpose before and that is very simple: K-means clustering, refined with adaptive morphology. The proposed approach detects the optic disc area and segments the blood vessels quickly.
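A minimal sketch of the K-means stage under the stated idea that the optic disc falls in the brightest intensity cluster; the number of clusters is an assumption, and the adaptive morphology refinement is only noted in a comment:

import numpy as np
from sklearn.cluster import KMeans

def optic_disc_candidates(gray_fundus, k=3):
    # Cluster pixel intensities; the optic disc usually lies in the brightest
    # cluster. The paper then refines this mask with adaptive morphology.
    pixels = gray_fundus.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    means = [pixels[labels == c].mean() for c in range(k)]
    return (labels == int(np.argmax(means))).reshape(gray_fundus.shape)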

Journal ArticleDOI
TL;DR: Experimental results show that the proposed face detection and recognition system is robust enough to detect faces in different lighting conditions, scales, poses, and skin colors from various races, and recognizes faces with less misclassification than previous methods.
Abstract: Face detection and recognition has always been one of the research interests of researchers in the field of biometric identification of individuals. Problems such as environmental lighting, different skin colors, complex backgrounds, etc. affect the detection and recognition of individuals. This paper proposes a method to enhance the performance of face detection and recognition systems. Our method consists of two main parts: first we detect faces, and then we recognize the detected faces. In the detection step, we use skin color segmentation combined with the AdaBoost algorithm, which is fast and also more accurate compared to the other known methods; we also use a series of morphological operators to improve face detection performance. The recognition part consists of three steps: dimension reduction using Principal Component Analysis (PCA), feature selection using Linear Discriminant Analysis (LDA), and k-Nearest Neighbor (k-NN) or Support Vector Machine (SVM) based classification. The combination of PCA and LDA is used to improve the capability of LDA when only a few samples of images are available. We test the system on face databases. Experimental results show that the system is robust enough to detect faces in different lighting conditions, scales, poses, and skin colors from various races. Also, the system is able to recognize faces with less misclassification compared to previous methods.
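A hedged scikit-learn sketch of the recognition chain (PCA for dimension reduction, LDA on the reduced features, then a nearest-neighbour classifier); the component count is an illustrative assumption, and an SVM can replace the k-NN:

from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

# X: face images flattened to row vectors, y: subject labels.
# PCA first reduces dimensionality so that LDA stays well-conditioned
# when only a few samples per subject are available, as the paper notes.
recognizer = make_pipeline(
    PCA(n_components=50),                # illustrative component count
    LinearDiscriminantAnalysis(),
    KNeighborsClassifier(n_neighbors=1)  # sklearn.svm.SVC is the alternative
)
# recognizer.fit(X_train, y_train); predictions = recognizer.predict(X_test)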

Journal ArticleDOI
TL;DR: A 3D face recognition algorithm based on the Radon transform, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) is proposed, and shown to be efficient in terms of accuracy and detection time in comparison with methods based on PCA alone and on RT+PCA.
Abstract: Biometrics (or biometric authentication) refers to the identification of humans by their characteristics or traits. Biometrics is used in computer science as a form of identification and access control; it is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Three dimensional (3D) human face recognition is emerging as a significant biometric technology. Research interest in 3D face recognition has increased during recent years due to the availability of improved 3D acquisition devices and processing algorithms. Three dimensional face recognition also helps to resolve some of the issues associated with two dimensional (2D) face recognition. In previous research works, several methods for face recognition using range images have been limited to the data acquisition and pre-processing stage only. In the present paper, we propose a 3D face recognition algorithm based on the Radon transform, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The Radon transform (RT) is a fundamental tool used to normalize the 3D range data. PCA is used to reduce the dimensionality of the feature space, and LDA is used to optimize the features, which are finally used to recognize the faces. The experimentation has been done using three publicly available databases, namely the Bosphorus, Texas and CASIA 3D face databases. The experimental results show that the proposed algorithm is efficient in terms of accuracy and detection time in comparison with other methods based on PCA alone and on RT+PCA. It is observed that 40 eigenfaces of PCA and 5 LDA components lead to an average recognition rate of 99.20% using an SVM classifier.

Journal ArticleDOI
TL;DR: A geometry- and topology-based algorithm for Japanese Hiragana character recognition is proposed, based on center of gravity identification, that is size, translation and rotation invariant.
Abstract: In this paper we propose a geometry- and topology-based algorithm for Japanese Hiragana character recognition. The algorithm is based on center of gravity identification and is size, translation and rotation invariant. In addition to the center of gravity, topology-based landmarks, such as conjunction points marking the intersection of closed loops and multiple strokes, as well as end points, are used to compute centers of gravity of these points located in the individual quadrants of the circles enclosing the characters. After initial pre-processing steps such as binarization, resizing, cropping, noise removal and synchronization, the total numbers of conjunction points and end points are computed and stored. The character is then encircled and divided into four quadrants. The center of gravity (cog) of the entire character as well as the cogs of each of the four quadrants are computed, and the Euclidean distances of the conjunction and end points in each quadrant from the cogs are computed and stored. These quantities are computed for both target and template images, and a match is made with the character having the minimum Euclidean distance. The average accuracy obtained is 94.1%.
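A minimal sketch of the center-of-gravity computations on a binarized character; splitting the quadrants about the global cog is an assumption (the paper encircles the character first):

import numpy as np

def center_of_gravity(binary_char):
    # Mean row/column of the foreground pixels of a binarized character image
    ys, xs = np.nonzero(binary_char)
    return ys.mean(), xs.mean()

def quadrant_cogs(binary_char):
    # Split the foreground about the global cog and return a cog per quadrant
    cy, cx = center_of_gravity(binary_char)
    ys, xs = np.nonzero(binary_char)
    quadrants = [(ys < cy) & (xs < cx), (ys < cy) & (xs >= cx),
                 (ys >= cy) & (xs < cx), (ys >= cy) & (xs >= cx)]
    return [(ys[m].mean(), xs[m].mean()) for m in quadrants if m.any()]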

Journal ArticleDOI
TL;DR: An algorithm for real-time face detection and tracking using skin color segmentation and region properties, based on knowledge of the geometrical properties of the human face, is presented.
Abstract: Real-time face detection and tracking is one of the challenging problems in human-computer interaction, video surveillance, biometrics, etc. In this paper we present an algorithm for real-time face detection and tracking using skin color segmentation and region properties. First, skin regions are segmented from the image using different color models and separated from the rest of the image by thresholding. Then, face features are used to decide whether these regions contain a human face. Our procedure is based on skin color segmentation and human face features (a knowledge-based approach). We use the RGB, YCbCr and HSV color models for skin color segmentation; these color models, with thresholds, help remove non-skin pixels from the image. Each segmented skin region is then tested to determine whether it is a human face, using face features based on knowledge of the geometrical properties of the human face.
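For illustration, a skin mask in the YCbCr space using commonly cited bounds (Cb in [77, 127], Cr in [133, 173]); these thresholds and the single color space are assumptions, since the paper combines RGB, YCbCr and HSV:

import numpy as np
import cv2

def skin_mask(bgr_image):
    # OpenCV orders the channels as Y, Cr, Cb
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    # 255 where the pixel falls inside the skin bounds, 0 elsewhere
    return cv2.inRange(ycrcb, lower, upper)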

Journal ArticleDOI
TL;DR: An automated mammogram classification method is presented in which Symlet wavelets, singular value decomposition and weighted histograms are used for feature extraction from mammograms; such computer aided diagnosis helps radiologists detect abnormalities earlier than traditional procedures.
Abstract: Mammography is a specialized imaging technique that uses X-rays and high-resolution film to detect breast tumors efficiently. Mammography is used only for breast tumor detection, and its images help physicians detect diseases due to abnormal cell growth. Mammography is an effective imaging modality for early detection of breast cancer abnormalities. Computer aided diagnosis helps radiologists detect abnormalities earlier than traditional procedures. In this paper, an automated mammogram classification method is presented. Symlet wavelets, singular value decomposition and weighted histograms are used for feature extraction from mammograms. The extracted features are classified using naïve Bayes, random forest and neural network algorithms.

Journal ArticleDOI
TL;DR: A hardware system for the Sobel edge detection algorithm is designed and simulated for a 128-pixel, 8-bit monochrome line-scan camera that detects objects as they move along a conveyor belt in a manufacturing environment.
Abstract: In this paper, a hardware system for the Sobel edge detection algorithm is designed and simulated for a 128-pixel, 8-bit monochrome line-scan camera. The system is designed to detect objects as they move along a conveyor belt in a manufacturing environment, where the camera observes dark objects on a light conveyor belt. The edge detector is required to detect horizontal and vertical edges using the Sobel edge detection method. The Sobel operator requires 3 lines and takes 3 pixels per line, thus using a 3×3 input block to process each pixel. The centre pixel of the 3×3 block is classified as an edge point or otherwise by thresholding the value from the operator. The FPGA-based Sobel edge detector is designed and simulated using Altera Quartus II 8.1 Web Edition, targeting Cyclone II development boards.
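A software sketch of the same 3×3 Sobel computation and thresholding (NumPy/SciPy rather than the paper's FPGA design; the threshold value is an assumption):

import numpy as np
from scipy.ndimage import convolve

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # responds to vertical edges
GY = GX.T                                            # responds to horizontal edges

def sobel_edges(image, threshold=128):
    gx = convolve(image.astype(np.float64), GX)
    gy = convolve(image.astype(np.float64), GY)
    # |gx| + |gy| is the cheap magnitude approximation common in hardware
    magnitude = np.abs(gx) + np.abs(gy)
    # Each centre pixel is classified as an edge point by thresholding
    return magnitude > threshold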

Journal ArticleDOI
TL;DR: Simulation results show that increasing the modulation scheme size improves throughput at the cost of BER; there is a trade-off among modulation size, BER value and throughput.
Abstract: WiMAX is a broadband wireless communication system which provides fixed as well as mobile services. Mobile WiMAX offers a special feature: it has adopted adaptive modulation and coding (AMC) in OFDM to provide higher data rates and error-free transmission. The AMC technique employs channel state information (CSI) to efficiently utilize the channel and maximize throughput with better spectral efficiency. In this paper, LSE, MMSE, LMMSE and low-rank (Lr)-LMMSE channel estimators are integrated with the physical layer. The performance of the estimation algorithms is analyzed in terms of BER, SNR, MSE and throughput. Simulation results show that increasing the modulation scheme size improves throughput at the cost of BER; there is a trade-off among modulation size, BER value and throughput. Index Terms—AMC, CSI, LMS, MMSE, LMMSE, OFDM/OFDMA, WiMAX.

Journal ArticleDOI
TL;DR: The experimental results suggest that the proposed image encryption scheme is robust and secure and can be used for secure image and video communication applications.
Abstract: A great many chaos-based image encryption schemes have been proposed in the past decades. Most of them use the permutation-diffusion architecture at the pixel level, which has been proven insecure as it does not depend on the plain-image and therefore usually cannot resist chosen/known plain-image attacks. In this paper, we propose a novel image encryption scheme comprising one permutation process and one diffusion process. In the permutation process, the image of size M×N is expanded to one of size 2M×N by dividing the plain-image into two parts: one consisting of the higher 4 bits and one consisting of the lower 4 bits. The permutation operations are done row-by-row and column-by-column to increase the speed of the permutation process. The chaotic cat map is utilized to generate chaotic sequences, which are quantized to shuffle the expanded image. The chaotic sequence for the permutation process depends on the plain-image and the cipher keys, resulting in good key sensitivity and plain-image sensitivity. To achieve more avalanche effect and a larger key space, a chaotic Bernoulli shift map based bilateral (i.e., horizontal and vertical) diffusion function is applied as well. The security and performance of the proposed image encryption scheme have been analyzed, including histograms, correlation coefficients, information entropy, key sensitivity analysis, key space analysis, differential analysis, encryption rate analysis, etc. All the experimental results suggest that the proposed image encryption scheme is robust and secure and can be used for secure image and video communication applications.
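The classic keyless Arnold cat map below only illustrates the permutation stage, on a square image; the paper's shuffle is keyed to the plain-image and cipher keys and operates on the expanded 2M×N image:

import numpy as np

def cat_map_permute(image, iterations=1):
    # Arnold's cat map on an N x N image: (x, y) -> (x + y, x + 2y) mod N
    n = image.shape[0]
    assert image.shape[0] == image.shape[1], "cat map needs a square image"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    out = image.copy()
    for _ in range(iterations):
        shuffled = np.empty_like(out)
        # The map matrix [[1, 1], [1, 2]] has determinant 1 mod n, so this
        # scatter is a bijection: every target index is written exactly once
        shuffled[(x + y) % n, (x + 2 * y) % n] = out
        out = shuffled
    return out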

Journal ArticleDOI
TL;DR: The existence of hard exudates is used to classify the moderate and severe grades of non-proliferative diabetic retinopathy in retinal fundus images.
Abstract: Diabetic retinopathy is a severe retinal complication caused by advanced diabetes mellitus. Long suffering from this disease without treatment may cause blindness; therefore, early detection of diabetic retinopathy is very important to prevent it from becoming proliferative. One indication that a patient has diabetic retinopathy is the existence of hard exudates, besides other indications such as microaneurysms and hemorrhages. In this study, the existence of hard exudates is used to classify the moderate and severe grades of non-proliferative diabetic retinopathy in retinal fundus images. The hard exudates are segmented using K-means clustering, and features are extracted from the segmented regions to obtain a feature vector consisting of the areas, the perimeters, the number of centroids and its standard deviation. Using three different classifiers, i.e. soft-margin Support Vector Machine, Multilayer Perceptron, and Radial Basis Function Network, we achieve accuracies of 89.29%, 91.07%, and 85.71% respectively, for 56 training and 56 testing retinal images.

Journal ArticleDOI
TL;DR: The proposed method can recognize an individual whose gait varies due to different clothes, carrying a bag, and other normal conditions, and shows better recognition performance than some existing methods.
Abstract: Gait can be identified by observing the static and dynamic parts of the human body. In this paper a variant of the gait energy image called the change energy image (CEI) is generated to capture detailed static and dynamic information of human gait. The Radon transform is applied to the CEI in four different directions (vertical, horizontal and two opposite cross sections), considering four different angles, to compute discriminative feature values. The extracted features are represented in the form of interval-valued symbolic data. The proposed method is capable of recognizing an individual whose gait varies due to wearing different clothes, carrying a bag, and other normal conditions. A similarity measure suitable for the proposed gait representation is explored for the purpose of establishing a similarity match for gait recognition. Experiments are conducted on CASIA database B, and the results show better recognition performance compared to some existing methods.

Journal ArticleDOI
TL;DR: This paper proposes an automatic gradient threshold estimator of anisotropic diffusion for optimal segmentation based on Meyer's watershed algorithm, and demonstrates better performance without loss of any clinical information while preserving edges.
Abstract: Medical image segmentation is a fundamental task in the medical imaging field, and optimal segmentation is required for accurate judgment or appropriate clinical diagnosis. In this paper, we propose an automatic gradient threshold estimator of anisotropic diffusion for optimal segmentation based on Meyer's watershed algorithm. Meyer's watershed algorithm is most significant for separating a large number of regions, but over-segmentation is its major drawback. We are able to remove over-segmentation by using anisotropic diffusion as a preprocessing step of the segmentation, with a fixed window size for dynamic gradient threshold estimation. The gradient threshold is the most important parameter of anisotropic diffusion for image smoothing. The proposed method is able to segment medical images accurately because of the enhanced image it obtains. The method demonstrates better performance without loss of any clinical information while preserving edges, and is efficient and effective for segmenting the regions of interest in medical images.
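A hedged sketch of Perona-Malik anisotropic diffusion in which the gradient threshold K is re-estimated each iteration from a percentile of the gradient magnitudes; this percentile rule is one plausible automatic estimator, not the paper's fixed-window method:

import numpy as np

def anisotropic_diffusion(img, n_iter=10, dt=0.15, percentile=90):
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # 4-neighbour finite differences (north, south, east, west)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Gradient threshold K estimated from the gradient magnitude histogram
        k = np.percentile(np.abs(np.stack([dn, ds, de, dw])), percentile) + 1e-8
        g = lambda d: np.exp(-(d / k) ** 2)  # edge-stopping conduction function
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u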

Journal ArticleDOI
TL;DR: An overview of the state of the art in computerized object recognition techniques for digital images is presented, and the importance of the Fourier Descriptor (FD) for shape-based object representation is described.
Abstract: An overview of the state of the art in computerized object recognition techniques for digital images is presented, and the advantages of shape-based techniques are discussed. The importance of the Fourier Descriptor (FD) for shape-based object representation is described, and a survey of the available shape signature assignment methods for Fourier descriptors is presented. The design of a shape signature containing the crucial information of the corners of the object is detailed. A novel shape signature is designed based on the Farthest Point Angle (FPA) corresponding to each contour point: the FPA signature computes the angle between the line drawn from each contour point and the line drawn from the farthest corner point. A histogram, with one bin for each 15° of angle, conceiving the information of the object is constructed. The FPA signature is evaluated on three standard databases, viz. two Kimia sets {K-99, K-216} and MPEG CE-1 Set B. The performance of the FPA method, estimated through recognition rate, time and degree of matching, is found to be higher.

Journal ArticleDOI
TL;DR: A two-level watermarking technique based on the CS theory framework in the wavelet domain is proposed for security and authentication of biometric templates at two vulnerable points of a biometric system, and is shown to be robust against various attacks.
Abstract: Biometric authentication systems have several security issues, two of which are template protection at the system database and at the communication channel between the system database and the matcher subsystem. In this paper, a two-level watermarking technique based on the CS theory framework in the wavelet domain is proposed for security and authentication of biometric templates at these two vulnerable points. In the proposed technique, sparse measurement information of fingerprint and iris biometric templates is generated using the CS theory framework. This sparse measurement information is used as secure watermark information, which is embedded into a face image of the same individual to generate a multimodal biometric template. The sparse watermark information is computed using the Discrete Wavelet Transform (DWT) and a random seed. The proposed watermarking technique not only protects biometric templates, it also gives computational security against spoofing attacks, because it is difficult for an impostor to obtain all three secure biometric templates when two encoded templates are embedded as sparse measurement information into the third. The similarity value between the original watermark image and the reconstructed watermark image is the measuring factor for identification and authentication. The experimental results show that the technique is robust against various attacks.
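For orientation, generating seed-keyed random measurements of a template vector, the kind of "sparse measurement information" a CS framework produces; the Gaussian measurement matrix and its scaling are illustrative assumptions, not the paper's construction:

import numpy as np

def cs_measurements(template, m, seed):
    # Random measurement y = Phi x of a (sparse) biometric template vector,
    # keyed by a seed so only the key holder can regenerate Phi
    rng = np.random.default_rng(seed)
    x = template.ravel().astype(np.float64)
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi @ x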

Journal ArticleDOI
TL;DR: A two-level unsupervised image classification algorithm based on statistical characteristics of the image helps the sender make a reasonable selection of cover image, enhancing the performance of a steganographic method for his specific purpose.
Abstract: Steganography is a method of information hiding. Free selection of the cover image is a particular advantage of steganography over other information hiding techniques, and the performance of a steganographic system can be improved by selecting a suitable cover image. This article presents a two-level unsupervised image classification algorithm based on statistical characteristics of the image, which helps the sender make a reasonable selection of cover image to enhance the performance of a steganographic method for his specific purpose. Experiments demonstrate the effect of the classification in satisfying steganography requirements.

Journal ArticleDOI
TL;DR: The proposed steganography system modifies the pixel-pair difference value before using it to hide information, providing good hiding capacity and improved stego image quality, with two levels of security for the secret information.
Abstract: Steganography involves hiding information in another medium. PVD-based steganography techniques use the difference between the pixel values of a pair directly to hide the information. The proposed steganography system modifies the difference value before it is used for hiding the information, which makes extraction of the hidden data harder if the steganography system is compromised. The algorithm divides the cover image into blocks of 2 × 3 pixels and calculates the average (N) of the numbers of bits that can be hidden in the five pairs of each block. Thus, if the difference value allows M bits to be hidden in a pair, only N bits are hidden in that pair when M > N; otherwise (if M ≤ N) M bits are hidden. A second level of security is added by converting the secret information into Gray code before embedding it in the cover image. The algorithm provides good hiding capacity and improved stego image quality, with two levels of security for the secret information.
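For context, a sketch of the per-pair capacity M in classic PVD using the widely used Wu-Tsai range table; the table comes from the PVD literature, not necessarily this paper, and the block averaging of five such capacities would then yield N:

def pvd_capacity(p1, p2):
    # Bits (M) a pixel pair can hold in classic PVD: wider ranges of the
    # difference |p1 - p2| admit more hidden bits
    d = abs(int(p1) - int(p2))
    for lower, upper, bits in [(0, 7, 3), (8, 15, 3), (16, 31, 4),
                               (32, 63, 5), (64, 127, 6), (128, 255, 7)]:
        if lower <= d <= upper:
            return bits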

Journal ArticleDOI
TL;DR: Experimental result shows that the proposed technique outperforms the conventional and the state-of-the-art techniques in terms of peak signal to noise ratio, root mean square error, entropy, as well as, visual perspective.
Abstract: In this paper, we present a new image resolution enhancement algorithm based on cycle spinning and stationary wavelet subband padding. The proposed technique or algorithm uses stationary wavelet transformation (SWT) to decompose the low resolution (LR) image into frequency subbands. All these frequency subbands are interpolated using either bicubic or lanczos interpolation, and these interpolated subbands are put into inverse SWT process for generating intermediate high resolution (HR) image. Finally, cycle spinning (CS) is applied on this intermediate high resolution image for reducing blocking artifacts, followed by, traditional Laplacian sharpening filter is used to make the generated high resolution image sharper. This new technique has been tested on several satellite images. Experimental result shows that the proposed technique outperforms the conventional and the state-of-the-art techniques in terms of peak signal to noise ratio, root mean square error, entropy, as well as, visual perspective.