
Showing papers in "The International Arab Journal of Information Technology in 2014"


Journal Article
TL;DR: This paper classifies approaches to face recognition in the presence of partial occlusion and surveys the experiments and databases that various authors have used to handle the occlusion problem.
Abstract: Systems that rely on Face Recognition (FR) biometrics have gained great importance ever since terrorist threats exposed weaknesses in the implemented security systems. Other biometrics, e.g., fingerprint or iris recognition, are not trustworthy in such situations, whereas FR is considered a fine compromise. This survey illustrates different FR practices that laid foundations on the issue of the partial occlusion dilemma, where faces are disguised to cheat the security system. Occlusion refers to the covering of part of the face image, which can be due to sunglasses, hair, or the wrapping of the face by a scarf or other accessories. Efforts on FR in controlled settings have been in the picture for the past several years; however, identification under uncontrolled conditions such as illumination, expression and partial occlusion remains a matter of concern. Based on the literature, a classification is made in this paper of methods that address face recognition in the presence of partial occlusion. These methods are named part-based methods and make use of Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Non-negative Matrix Factorization (NMF), Local Non-negative Matrix Factorization (LNMF), Independent Component Analysis (ICA) and other variations. Feature-based and fractal-based methods consider features around the eye, nose or mouth regions to be used in the recognition phase of the algorithms. Furthermore, the paper details the experiments and databases used by an assortment of authors to handle the problem of occlusion and the results obtained after performing a diverse set of analyses. Lastly, a comparison of various techniques is shown in tabular format to give a precise overview of what different authors have already proposed in this particular field.

88 citations


Journal Article
TL;DR: Simulation and experimental results on benchmark test images demonstrate that the proposed algorithm provides better results than other state-of-the-art contrast enhancement techniques.
Abstract: This paper proposes an efficient algorithm for contrast enhancement of natural images. Contrast is a very important characteristic by which the quality of an image can be judged as good or poor. The proposed algorithm consists of two stages: in the first stage, the poor-quality image is processed by a modified sigmoid function; in the second stage, the output of the first stage is further processed by contrast limited adaptive histogram equalization to enhance the contrast. To achieve better enhancement, a novel mask based on the input value, together with the modified sigmoid formula, is used as a contrast enhancer in addition to contrast limited adaptive histogram equalization. The algorithm passes over the input image and operates on its pixels one by one in the spatial domain. Simulation and experimental results on benchmark test images demonstrate that the proposed algorithm provides better results than other state-of-the-art contrast enhancement techniques. It performs efficiently on both dark and bright images by adjusting their contrast accordingly. The approach is simple and efficient and can be used in various applications where images suffer from contrast problems.
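As a rough illustration of the two-stage pipeline described above, the sketch below remaps intensities with a sigmoid curve and then applies OpenCV's CLAHE. The gain/cutoff parameterization of the sigmoid and the CLAHE settings are assumptions for illustration; the paper's specific modified sigmoid mask is not reproduced here.

```python
import cv2
import numpy as np

def sigmoid_then_clahe(gray, gain=5.0, cutoff=0.5, clip_limit=2.0, tile=(8, 8)):
    """Stage 1: remap intensities with a sigmoid curve (assumed parameterization).
    Stage 2: apply Contrast Limited Adaptive Histogram Equalization."""
    x = gray.astype(np.float32) / 255.0
    s = 1.0 / (1.0 + np.exp(-gain * (x - cutoff)))      # sigmoid mapping
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)      # rescale to [0, 1]
    stage1 = (s * 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(stage1)

# usage: enhanced = sigmoid_then_clahe(cv2.imread("input.png", cv2.IMREAD_GRAYSCALE))
```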

62 citations


Journal Article
TL;DR: A robust facial feature descriptor constructed with the Compound Local Binary Pattern (CLBP) for person-independent facial expression recognition, which overcomes the limitations of LBP.
Abstract: Automatic recognition of facial expression is an active research topic in computer vision due to its importance in both human-computer and social interaction. One of the critical issues for a successful facial expression recognition system is to design a robust facial feature descriptor. Among the different existing methods, the Local Binary Pattern (LBP) has been proved to be a simple and effective one for facial expression representation. However, the LBP method thresholds P neighbors exactly at the value of the center pixel in a local neighborhood and encodes only the signs of the differences between the gray values. Thus, it loses some important texture information. In this paper, we present a robust facial feature descriptor constructed with the Compound Local Binary Pattern (CLBP) for person-independent facial expression recognition, which overcomes the limitations of LBP. The proposed CLBP operator combines extra P bits with the original LBP code in order to construct a robust feature descriptor that exploits both the sign and the magnitude information of the differences between the center and the neighbor gray values. The recognition performance of the proposed method is evaluated using the Cohn-Kanade (CK) and the Japanese Female Facial Expression (JAFFE) databases with a Support Vector Machine (SVM) classifier. Experimental results with prototypic expressions show the superiority of the CLBP feature descriptor against some well-known appearance-based feature representation methods.
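The following sketch illustrates the compound encoding idea: each neighbor contributes a sign bit and a magnitude bit, the latter set when the absolute difference exceeds the mean absolute difference in the patch. That thresholding rule is an assumption for illustration; the paper's exact CLBP definition may differ.

```python
import numpy as np

def clbp_code(patch):
    """Compound LBP code for one 3x3 patch (sketch): each of the 8 neighbors
    contributes a sign bit and a magnitude bit (|diff| above the patch mean)."""
    center = float(patch[1, 1])
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    diffs = np.array([float(patch[i, j]) - center for i, j in idx])
    mag_thr = np.abs(diffs).mean()
    code = 0
    for d in diffs:
        sign_bit = 1 if d >= 0 else 0
        mag_bit = 1 if abs(d) > mag_thr else 0
        code = (code << 2) | (sign_bit << 1) | mag_bit
    return code                                  # 16-bit compound code

def clbp_histogram(gray, bins=2 ** 16):
    """Histogram of CLBP codes over a grayscale image (feature vector)."""
    h, w = gray.shape
    codes = [clbp_code(gray[r - 1:r + 2, c - 1:c + 2])
             for r in range(1, h - 1) for c in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(len(codes), 1)
```

In practice the histogram would be computed per face region and the concatenated vector fed to an SVM, as the abstract describes.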

50 citations


Journal Article
TL;DR: This research presents the approach to develop an automatic readability index for the Arabic language: Automatic Arabic Readability Index (AARI), using factor analysis and develops two applications to compute the Arabic text readability.
Abstract: Text readability refers to the ability of the reader to understand and comprehend a given text. In this research, we present our approach to develop an automatic readability index for the Arabic language: the Automatic Arabic Readability Index (AARI), using factor analysis. Our results are based on more than 1196 Arabic texts extracted from the Jordanian curriculum in the subjects of Arabic language, Islamic religion, natural sciences, and national and social education for the elementary classes (first grade through tenth grade). We conduct a comparison study to support our model using cluster analysis and Support Vector Machines (SVM). In order to facilitate the usage of our Arabic readability index, we developed two applications to compute Arabic text readability: a standalone application and an add-on for the Microsoft Word text processor. Through our presented research results and developed tools, we aim to improve the overall readability of Arabic texts, especially those targeted towards the younger generations.

43 citations


Journal Article
TL;DR: Results show a significant improvement in the performance of the proposed approaches compared to the standard versions of EM and FCM, especially regarding robustness to noise and the accuracy of the edges between regions.
Abstract: The Expectation Maximization (EM) algorithm and the Fuzzy C-Means (FCM) clustering method are widely used in image segmentation. However, the major drawback of these methods is their sensitivity to noise. In this paper, we propose variants of these methods which aim at resolving this problem. Our approaches proceed by characterizing each pixel by two features: the first describes the intrinsic properties of the pixel and the second characterizes its neighborhood. The classification is then based on an adaptive distance which privileges one feature or the other according to the spatial position of the pixel in the image. The obtained results show a significant improvement in the performance of our approaches compared to the standard versions of EM and FCM, respectively, especially regarding robustness to noise and the accuracy of the edges between regions.
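A minimal sketch of the two-feature idea follows, assuming a fixed weight `alpha` between the intensity feature and the 3x3 neighborhood-mean feature; the paper's contribution is to adapt this weight per pixel according to spatial position, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_two_feature(image, n_clusters=3, m=2.0, alpha=0.5, n_iter=50, seed=0):
    """Minimal fuzzy C-means where each pixel is described by two features:
    its own intensity and the mean of its 3x3 neighborhood."""
    f1 = image.astype(float).ravel()
    f2 = uniform_filter(image.astype(float), size=3).ravel()
    X = np.stack([f1, f2], axis=1)                    # (N, 2) feature matrix
    w = np.array([1.0 - alpha, alpha])                # fixed feature weights
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]                   # (K, 2)
        d2 = (w * (X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)      # membership update
    return U.argmax(axis=1).reshape(image.shape)      # hard segmentation map
```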

36 citations


Journal Article
TL;DR: This paper has proposed hybrid dependency patterns to extract product features from unstructured reviews and found that the proposed hybrid patterns provide comparatively more accurate results.
Abstract: In this paper we address the problem of automatic identification of product features from customer reviews. Customers, retailers, and manufacturers commonly use customer reviews on websites for product reputation and sales forecasting. Opinion mining applications have been employed to summarize huge collections of customer reviews for decision making. In this paper we propose hybrid dependency patterns to extract product features from unstructured reviews. The proposed dependency patterns exploit lexical relations and opinion context to identify features. Based on empirical analysis, we found that the proposed hybrid patterns provide comparatively more accurate results; the average precision and recall are significantly improved with hybrid patterns.

31 citations


Journal Article
TL;DR: The current research mainly aims at solving the problem of automatic Arabic TC by investigating the Frequency Ratio Accumulation Method (FRAM), which has a simple mathematical model and has outperformed the state of the art.
Abstract: Compared to other languages, there is still a limited body of research on automated Arabic Text Categorization (TC) due to the complex and rich nature of the Arabic language. Most of this research uses supervised Machine Learning (ML) approaches such as Naive Bayes (NB), K-Nearest Neighbour (KNN), Support Vector Machine and Decision Tree. Most of these techniques have complex mathematical models and do not usually lead to accurate results for Arabic TC. Moreover, previous research has tended to treat Feature Selection (FS) and classification as independent problems in automatic TC, which leads to costly and complex computation. Based on this, the need arises to apply new techniques suited to the Arabic language and its complex morphology. A new approach to Arabic TC called the Frequency Ratio Accumulation Method (FRAM), which has a simple mathematical model, is applied in this study, and the categorization task is combined with a feature processing task. The current research mainly aims at solving the problem of automatic Arabic TC by investigating FRAM in order to enhance the performance of the Arabic TC model. The performance of the FRAM classifier is compared with three classifiers based on Bayes' theorem, namely the Simple NB, Multi-variant Bernoulli Naive Bayes (MNB) and Multinomial Naive Bayes (MBNB) models. Based on the findings of the study, FRAM outperformed the state of the art, achieving a macro-F1 value of 95.1% using a unigram word-level representation.
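A minimal sketch of the frequency-ratio idea follows, assuming the ratio of a term is its frequency within a category divided by its total frequency over all categories, and that a document is assigned to the category with the highest accumulated ratio; the paper's exact formulation and feature processing may differ.

```python
from collections import Counter, defaultdict

def train_fram(docs, labels):
    """docs: list of token lists; labels: parallel list of category names.
    Returns per-category frequency ratios for each term (assumed definition)."""
    per_cat, total = defaultdict(Counter), Counter()
    for tokens, cat in zip(docs, labels):
        per_cat[cat].update(tokens)
        total.update(tokens)
    return {cat: {t: cnt[t] / total[t] for t in cnt} for cat, cnt in per_cat.items()}

def classify_fram(tokens, ratios):
    """Accumulate the frequency ratios of the document's terms per category
    and return the category with the highest accumulated score."""
    scores = {cat: sum(r.get(t, 0.0) for t in tokens) for cat, r in ratios.items()}
    return max(scores, key=scores.get)

# toy usage:
# ratios = train_fram([["economy", "bank"], ["match", "goal"]], ["business", "sport"])
# print(classify_fram(["goal", "bank", "goal"], ratios))
```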

28 citations


Journal Article
TL;DR: Experimental results verify that the algorithm proposed in this paper supplies a general solution to the problem of web service execution time prediction in a cloud environment.
Abstract: A cloud environment is a complex system which includes the matching between computation resources and data resources. Efficiently predicting service execution times is a key component of successful task scheduling and resource allocation in a cloud computing environment. In this paper, we propose a framework for supporting knowledge discovery applications running in a cloud environment, as well as a holistic approach to predict application execution times. We use rough set theory to determine a reduct and then compute the execution time prediction. The spirit of the proposed algorithm comes from the number of attributes within a given discernibility matrix. We also propose to join dynamic data related to the performance of various knowledge discovery services in the cloud computing environment to support the prediction. This information can be joined as additional metadata stored in the cloud environment. Experimental results verify that the proposed algorithm supplies a general solution to the problem of web service execution time prediction in a cloud environment.

26 citations


Journal Article
TL;DR: A real-time system that, once CSI and QoS are fed in as input, gives optimal Modulation Code Pairs (MCPs) and power vectors for different subcarriers; simulations show the superiority of the proposed scheme.
Abstract: Adaptive Resource Allocation is a prominent and necessary feature of almost all future communication systems. The transmission parameters like power, code rate and modulation scheme are adapted according to the varying channel conditions so that the throughput of the OFDM system may be maximized while simultaneously satisfying constraints like Bit Error Rate (BER) and total power. For real-time systems, the adaptive process must be fast enough to keep up with Channel State Information (CSI) and Quality of Service (QoS) demands that change rapidly. So, in this paper, we present a real-time system in which, once CSI and QoS are fed in as input, it gives optimal Modulation Code Pairs (MCPs) and power vectors for different subcarriers. Using a Fuzzy Rule Base System (FRBS) we obtain the MCP from CSI and QoS, and using Differential Evolution (DE) the power vector is obtained; each such solution becomes a training example. A Gaussian Radial Basis Function Neural Network (GRBF-NN) is trained in offline mode using a sufficient number of such examples. After training, given QoS and CSI as input, the GRBF-NN gives the Optimum Power Vector (OPV) and the FRBS gives the optimum MCP immediately. The proposed scheme is compared with various other schemes of the same domain and the superiority of the proposed scheme is shown by simulations.

25 citations


Journal Article
TL;DR: Experiments show that the proposed method is capable of effectively classifying weed images and provides superior performance compared to several existing methods.
Abstract: In conventional cropping systems, removal of the weed population relies extensively on the application of chemical herbicides. However, this practice should be minimized because of the adverse effects of herbicide applications on the environment, human health, and other living organisms. In this context, if the distribution of broadleaf and grass weeds could be sensed locally with a machine vision system, then the selection and dosage of herbicide applications could be optimized automatically. This paper presents a simple, yet effective texture-based weed classification method using local pattern operators. The objective is to evaluate the feasibility of using micro-level texture patterns to classify weed images into broadleaf and grass categories for real-time selective herbicide applications. Three widely-used texture operators, namely the Local Binary Pattern (LBP), Local Ternary Pattern (LTP), and Local Directional Pattern (LDP), are considered in our study. Experiments on 400 sample field images, with 200 samples from each category, show that the proposed method is capable of effectively classifying weed images and provides superior performance compared to several existing methods.
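A sketch of such a texture-classification pipeline follows, using scikit-image's uniform LBP histogram as the feature vector and an RBF-kernel SVM as the classifier. Only the LBP operator is shown (not LTP or LDP), and the SVM hyper-parameters are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_feature(gray, P=8, R=1):
    """Uniform LBP histogram as a texture feature vector for one image."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# train_imgs / test_imgs: lists of grayscale arrays; labels: 0 = broadleaf, 1 = grass
def train_and_predict(train_imgs, train_labels, test_imgs):
    X_train = np.array([lbp_feature(im) for im in train_imgs])
    X_test = np.array([lbp_feature(im) for im in test_imgs])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, train_labels)
    return clf.predict(X_test)
```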

24 citations


Journal Article
TL;DR: A Zernike moments-based descriptor is used as a measure of shape information for the detection of buildings from Very High Spatial Resolution (VHSR) satellite images.
Abstract: In this paper, a Zernike moments-based descriptor is used as a measure of shape information for the detection of buildings from Very High Spatial Resolution (VHSR) satellite images. The proposed approach comprises three steps. First, the image is segmented into homogeneous objects based on spectral and spatial information; the MeanShift segmentation method is used to this end. Second, a Zernike feature vector is computed for each segment. Finally, a Support Vector Machines (SVM)-based classification using the feature vectors as inputs is performed. Experimental results and a comparison with the Environment for Visualizing Images (ENVI) commercial package confirm the effectiveness of the proposed approach.
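A sketch of steps two and three follows, assuming the mahotas library for Zernike moment computation on each segmented object's binary mask; the MeanShift segmentation step is not shown, and the radius/degree settings are illustrative assumptions not taken from the abstract.

```python
import numpy as np
import mahotas
from sklearn.svm import SVC

def zernike_vector(mask, radius=32, degree=8):
    """Zernike moment magnitudes of a binary segment mask (shape descriptor)."""
    return mahotas.features.zernike_moments(mask.astype(np.uint8), radius, degree=degree)

# masks: list of binary arrays, one per segmented object; labels: 1 = building, 0 = other
def classify_segments(train_masks, train_labels, test_masks):
    X_train = np.array([zernike_vector(m) for m in train_masks])
    X_test = np.array([zernike_vector(m) for m in test_masks])
    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, train_labels)
    return clf.predict(X_test)
```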

Journal Article
TL;DR: An Adaptive Margin Fisher's Criterion Linear Discriminant Analysis (AMFC/LDA) is proposed that addresses these issues, overcomes the limitations of intra-class problems, and reveals encouraging performance.
Abstract: Selecting a low dimensional feature subspace from thousands of features is a key phenomenon for optimal classification. Linear Discriminant Analysis (LDA) is a basic, well recognized supervised classifier that is effectively employed for classification. However, two problems arise within the intra class during discriminant analysis. Firstly, in the training phase the number of samples in the intra class is smaller than the dimensionality of the samples, which makes LDA unstable. The other is high computational cost due to redundant and irrelevant data points in the intra class. An Adaptive Margin Fisher's Criterion Linear Discriminant Analysis (AMFC/LDA) is proposed that addresses these issues and overcomes the limitations of intra class problems. The Small Sample Size (SSS) problem is resolved through a modified Maximum Margin Criterion (MMC), which is a form of customized LDA, and a convex hull. The inter class is defined using LDA while the intra class is formulated using quickhull. Similarly, computational cost is reduced by reformulating the within-class scatter matrix through the minimum Redundancy Maximum Relevance (mRMR) algorithm while preserving discriminant information. The proposed algorithm reveals encouraging performance. Finally, a comparison is made with existing approaches.

Journal Article
TL;DR: The edge image obtained by the NIFD operator is better than those of the Sobel and Canny operators, and especially for a noisy image, the NIFD operator has the best anti-noise ability.
Abstract: In this paper, following the development of fractional differentiation and its applications in modern signal processing, we improve the numerical calculation of fractional differentiation using the Newton interpolation equation and propose a new mask, the Newton Interpolation's Fractional Differentiation (NIFD). We then apply this new mask to image edge detection and obtain a better edge-information image. In order to get continuous and thin edges, we synthesize a new gradient and adopt the non-maxima suppression method. For comparison, we consider the edge maps yielded by the Sobel operator and the Canny operator. By contrast, we find that the edge image obtained by the NIFD operator is better than those of the Sobel and Canny operators, and especially for a noisy image, the NIFD operator has the best anti-noise ability.

Journal Article
TL;DR: A new pre-filtering method is proposed in this paper to improve the performance of extraction algorithms in Discrete Cosine Transform (DCT) based watermarking; results show that the extracted watermark has better quality than with the previous method.
Abstract: In image processing, pre-processing is used for preparing images or improving the performance of subsequent operations. In order to improve the performance of extraction algorithms in Discrete Cosine Transform (DCT) based watermarking, a new pre-filtering method is proposed in this paper. Enhancement filters are applied to the watermarked image as pre-filtering before running the watermark extraction algorithms in the DCT-based method. These filters are based on a mixture of two filters: unsharp masking and Laplacian of Gaussian (LoG). The distinction between the watermarked and unwatermarked parts is increased by these filters; thus, the watermark information can be extracted with more accuracy. To show the effectiveness of the proposed method, different types of attacks are applied to typical DCT-based algorithms. Experimental results show that the extracted watermark has better quality than with the previous method.
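A sketch of the pre-filtering stage follows, combining unsharp masking with a Laplacian-of-Gaussian response before the DCT-domain extractor is run. The mixing weight and sigma are assumptions, as the abstract does not specify how the two filters are combined.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def prefilter(watermarked, unsharp_amount=1.0, sigma=1.0, log_weight=0.5):
    """Sharpen the watermarked image before extraction: unsharp masking plus a
    Laplacian-of-Gaussian term to amplify the high-frequency watermark residue."""
    img = watermarked.astype(np.float64)
    blurred = gaussian_filter(img, sigma=sigma)
    unsharp = img + unsharp_amount * (img - blurred)   # unsharp masking
    log = gaussian_laplace(img, sigma=sigma)            # LoG response
    out = unsharp + log_weight * log                    # assumed mixing rule
    return np.clip(out, 0, 255).astype(np.uint8)

# the usual DCT-domain detector would then be run on prefilter(watermarked_image)
```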

Journal Article
TL;DR: In this paper, monetary and compactness constraints, in addition to frequency and length, are included in the sequential mining process for discovering pertinent sequential patterns from sequential databases, and a CFML-PrefixSpan algorithm is proposed by integrating these constraints with the original PrefixSpan algorithm, allowing the discovery of all CFML sequential patterns from a sequential database.
Abstract: Sequential pattern mining is advantageous for several applications. For example, it finds the sequential purchasing behavior of the majority of customers from a large number of customer transactions. However, existing research in the field of discovering sequential patterns is based on the concept of frequency and presumes that customer purchasing behavior sequences do not fluctuate with changes in time, purchasing cost and other parameters. To adapt the sequential patterns to these changes, constraints are integrated with the traditional sequential pattern mining approach. It is possible to discover more user-centered patterns by integrating certain constraints with the sequential mining process. Thus, in this paper, monetary and compactness constraints, in addition to frequency and length, are included in the sequential mining process for discovering pertinent sequential patterns from sequential databases. Also, a CFML-PrefixSpan algorithm is proposed by integrating these constraints with the original PrefixSpan algorithm, which allows discovering all CFML sequential patterns from the sequential database. The proposed CFML-PrefixSpan algorithm has been validated on synthetic sequential databases. The experimental results confirm that the efficacy of the sequential pattern mining process is further enhanced when purchasing cost, time duration and length are integrated with the sequential pattern mining process.

Journal Article
TL;DR: A new method for writer identification based on multi-fractal features for both the on-line and off-line approaches, which consists of extracting the multi-fractal dimensions from images of words and their on-line signals to identify the writer in realistic conditions.
Abstract: Writer identification remains a challenging area in the field of off-line handwriting recognition because only an image of the handwriting is available. Consequently, some information on the dynamics of writing, which is valuable for identifying the writer, is unavailable in off-line approaches, contrary to on-line ones where temporal and spatial information about the writing is available. In this paper, we present a new method for writer identification based on multi-fractal features for both types of approaches. This method consists of extracting the multi-fractal dimensions from the images of words and from their on-line signals. In order to enhance the performance of our writer identification system, we have combined both the on-line and off-line approaches. In this way, our work takes advantage of both static and dynamic representations of handwriting in order to identify the writer in realistic conditions. The tests are performed on the writing of 110 writers from the ADAB database. The obtained results show the effectiveness of the proposed writer identification method.

Journal Article
TL;DR: Two alternative analogy-based estimation approaches, based on the integration of Grey Relational Analysis and regression, are proposed; the empirical results attained are remarkable, indicating that the methodologies have great potential and can be used as candidate approaches for software effort estimation.
Abstract: Software project planning and estimation is the most important challenge for software developers and researchers. It incorporates estimating the size of the software project to be produced, estimating the effort required, developing initial project schedules, and ultimately estimating the overall cost of the project. Numerous empirical explorations have been performed on the existing methods, but they lack convergence in choosing the best prediction methodology. Analogy-based estimation is still one of the most extensively used methods in industry; it is based on finding effort from similar projects in the project repository. Two alternative approaches using analogy for estimation are proposed in this study. First, a precise and comprehensible predictive model based on the integration of Grey Relational Analysis (GRA) and regression is discussed. The second approach deals with the uncertainty in software projects, and with how fuzzy set theory in fusion with grey relational analysis can minimize this uncertainty. The empirical results attained are remarkable, indicating that the methodologies have great potential and can be used as candidate approaches for software effort estimation. The results obtained using both methods are subjected to rigorous statistical testing using the Wilcoxon signed rank test.

Journal Article
TL;DR: An efficient system for the prediction of peak traffic flow using machine learning techniques and the experimental results portray the effectiveness of the proposed system in predicting traffic flow.
Abstract: The rapid proliferation of Global Positioning System (GPS) devices and the mounting number of traffic monitoring systems employed by municipalities have opened the door for advanced traffic control and personalized route planning. Most state-of-the-art traffic management and information systems focus on data analysis, and very little has been done in the sense of prediction. In this article, we devise an efficient system for the prediction of peak traffic flow using machine learning techniques. In the proposed system, the traffic flow of a locality is predicted with the aid of geospatial data obtained from aerial images. The proposed system comprises two significant phases: geospatial data extraction from aerial images, and traffic flow prediction using a See5.0 decision tree. Firstly, geographic information essential for traffic flow prediction is extracted from aerial images, such as traffic maps, using suitable image processing techniques. Subsequently, for a user query, the trained See5.0 decision tree predicts the traffic state of the intended location with relevance to the date and time specified. The experimental results portray the effectiveness of the proposed system in predicting traffic flow.

Journal Article
TL;DR: A comparative assessment of the performance of four popular ensemble methods shows that ensemble methods can be a good candidate for churn prediction tasks and that Boosting RIPPER and Boosting C4.5 are the two best methods.
Abstract: Customer churn is a main concern of most firms in all industries. The aim of customer churn prediction is detecting customers with a high tendency to leave a company. Although many modeling techniques have been used in the field of churn prediction, the performance of ensemble methods has not been thoroughly investigated yet. Therefore, in this paper, we perform a comparative assessment of the performance of four popular ensemble methods, i.e., Bagging, Boosting, Stacking, and Voting, based on four known base learners, i.e., C4.5 Decision Tree (DT), Artificial Neural Network (ANN), Support Vector Machine (SVM) and Reduced Incremental Pruning to Produce Error Reduction (RIPPER). Furthermore, we have investigated the effectiveness of two different sampling techniques, i.e., oversampling as a representative of basic sampling techniques and the Synthetic Minority Over-sampling Technique (SMOTE) as a representative of advanced sampling techniques. Experimental results show that SMOTE does not increase predictive performance. In addition, the results show that the application of ensemble learning brings a significant improvement over the individual base learners in terms of three performance indicators, i.e., AUC, sensitivity, and specificity. In particular, in our experiments, Boosting produced the best results among all methods; among the four ensemble methods, Boosting RIPPER and Boosting C4.5 are the two best. These results indicate that ensemble methods can be a good candidate for churn prediction tasks.
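A minimal sketch of such a comparison follows, using scikit-learn and imbalanced-learn. Decision trees stand in for the C4.5 and RIPPER base learners (which scikit-learn does not provide), and AUC is the only metric shown here.

```python
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE

def churn_experiment(X, y):
    """Compare boosted and bagged trees on a churn dataset, with and without
    SMOTE oversampling of the minority (churner) class."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
    models = {
        "boosting": AdaBoostClassifier(n_estimators=100, random_state=0),
        "bagging": BaggingClassifier(n_estimators=100, random_state=0),
    }
    for name, model in models.items():
        for tag, (Xf, yf) in {"raw": (X_tr, y_tr), "smote": (X_sm, y_sm)}.items():
            auc = roc_auc_score(y_te, model.fit(Xf, yf).predict_proba(X_te)[:, 1])
            print(f"{name:8s} {tag:5s} AUC = {auc:.3f}")
```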

Journal Article
TL;DR: The experimental results indicate that the application of GA for feature subset selection, using SVM as a classifier, proves computationally effective and improves the accuracy of leaf pattern classification compared to KNN.
Abstract: This paper describes an optimal approach for feature extraction and selection for the classification of leaves based on a Genetic Algorithm (GA). The selection of the optimal feature subset and the classification have become an important methodology in the field of leaf classification. The deterministic feature sequence is extracted from the leaf images using the GA technique, and these extracted features are then used to train a Support Vector Machine (SVM). GA is applied to optimize the features of the color and boundary sequences, and to improve the overall generalization performance based on the matching accuracy. SVM is applied to produce the false positive and false negative features. Our experimental results indicate that the application of GA for feature subset selection, using SVM as a classifier, proves computationally effective and improves the accuracy of leaf pattern classification compared to KNN.
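A compact sketch of GA-driven feature subset selection follows, where binary chromosomes mask features and the fitness of a chromosome is the cross-validated accuracy of an SVM trained on the selected subset. The population size, mutation rate, and selection scheme are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ga_feature_selection(X, y, n_gen=30, pop_size=20, p_mut=0.05, seed=0):
    """Tiny genetic algorithm over binary feature masks; fitness is the
    cross-validated accuracy of an SVM trained on the selected features."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(SVC(kernel="rbf", gamma="scale"),
                               X[:, mask.astype(bool)], y, cv=3).mean()

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < p_mut                     # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return best.astype(bool)                                      # selected feature mask
```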

Journal Article
TL;DR: A large-scale comparison with other attempts that have tried to improve the accuracy of the Naive Bayes algorithm as well as other state-of-the-art algorithms on 28 standard benchmark datasets shows that the proposed method gave better accuracy in most cases.
Abstract: The Naive Bayes algorithm captures the assumption that every attribute is independent of the rest of the attributes, given the state of the class attribute. In this study, we attempted to increase the prediction accuracy of the simple Bayes model by integrating global and local application of the Naive Bayes classifier. We performed a large-scale comparison with other attempts that have tried to improve the accuracy of the Naive Bayes algorithm, as well as other state-of-the-art algorithms, on 28 standard benchmark datasets, and the proposed method gave better accuracy in most cases.
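The abstract does not state how the global and local models are combined, so the sketch below simply averages their class probabilities, fitting the local Naive Bayes on the k nearest training instances of each query; treat it as one plausible reading rather than the paper's method.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors

def global_local_nb_predict(X_train, y_train, x, k=50):
    """Average the class probabilities of a global NB (all training data) and a
    local NB fitted only on the k nearest neighbours of the query instance."""
    X_train, y_train = np.asarray(X_train), np.asarray(y_train)
    global_nb = GaussianNB().fit(X_train, y_train)
    idx = NearestNeighbors(n_neighbors=k).fit(X_train).kneighbors(
        x.reshape(1, -1), return_distance=False)[0]
    local_nb = GaussianNB().fit(X_train[idx], y_train[idx])
    p_global = global_nb.predict_proba(x.reshape(1, -1))[0]
    p_local = np.zeros_like(p_global)
    # align the local model's class ordering with the global one before averaging
    for cls, p in zip(local_nb.classes_, local_nb.predict_proba(x.reshape(1, -1))[0]):
        p_local[np.where(global_nb.classes_ == cls)[0][0]] = p
    return global_nb.classes_[np.argmax(p_global + p_local)]
```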

Journal Article
TL;DR: The proposed model is employed in predicting financial time series data; simulation results confirm the accuracy of parameter selection, thus proving its validity in improving prediction accuracy with acceptable computational time.
Abstract: To date, exploring an efficient method for optimizing Least Squares Support Vector Machines (LSSVM) hyper-parameters has been an enthusiastic research area among academic researchers. LSSVM is a practical machine learning approach that has been broadly utilized in numerous fields. To guarantee its convincing performance, it is crucial to select an appropriate technique in order to obtain the optimized hyper-parameters of the LSSVM algorithm. In this paper, an Enhanced Artificial Bee Colony (eABC) is used to obtain the ideal values of LSSVM's hyper-parameters, which are the regularization parameter γ and the kernel parameter σ2. Later, LSSVM is used as the prediction model. The proposed model was employed in predicting financial time series data and a comparison is made against the standard Artificial Bee Colony (ABC) and Cross Validation (CV) techniques. The simulation results confirmed the accuracy of parameter selection, thus proving its validity in improving the prediction accuracy with acceptable computational time.

Journal Article
TL;DR: The proposed method for image contrast enhancement, called spatially weighted histogram equalization, has better performance than existing methods and preserves the original brightness quite well, so that it can be utilized in consumer electronic products.
Abstract: This paper presents a simple and effective method for image contrast enhancement called spatially weighted histogram equalization. The spatially weighted histogram considers not only the number of times each grey value appears in an image, but also the local characteristics of each pixel. In the homogeneous regions of an image the spatial weights of pixels tend to zero, whereas at the edges of the image these weights are very large. In order to maintain the mean brightness of the original image, the grey-level transformation function calculated by spatially weighted histogram equalization is modified, and the final result is given by mapping the original image through this modified grey-level transformation function. The experimental results show that the proposed method has better performance than existing methods and preserves the original brightness quite well, making it suitable for use in consumer electronic products.
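A sketch of the idea follows, using the Sobel gradient magnitude as the spatial weight (near zero in homogeneous regions, large at edges) and a simple mean shift of the mapping as a stand-in for the paper's brightness-preserving modification, which the abstract does not detail.

```python
import numpy as np
from scipy.ndimage import sobel

def spatially_weighted_he(gray):
    """Histogram equalization where each pixel's vote is weighted by its local
    gradient magnitude, so edge pixels shape the mapping more than flat regions.
    `gray` is expected to be a uint8 image."""
    img = gray.astype(np.float64)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    weight = np.hypot(gx, gy)                          # ~0 in homogeneous regions
    hist = np.bincount(gray.ravel(), weights=weight.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf = cdf / max(cdf[-1], 1e-12)
    mapping = np.round(255 * cdf).astype(np.uint8)     # grey-level transformation
    # crude brightness preservation: shift the mapping to keep the mean (assumption)
    shift = int(round(img.mean() - mapping[gray].mean()))
    mapping = np.clip(mapping.astype(int) + shift, 0, 255).astype(np.uint8)
    return mapping[gray]
```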

Journal Article
TL;DR: An optimum threshold parameter using Fisher Discriminant Analysis (FDA) is proposed for determining the optimum threshold value of the wavelet coefficients for the best speckle noise reduction; it also preserves edges without destroying image information.
Abstract: Optimizing the threshold value of wavelet coefficients is an important task in speckle noise reduction in the wavelet domain. Without proper selection of the threshold value, image information may be lost, which is undesirable. In this paper we propose an optimum threshold parameter using Fisher Discriminant Analysis (FDA) for determining the optimum threshold value of the wavelet coefficients for the best speckle noise reduction. It also preserves edges without destroying image information. The method is compared with several other classical thresholding methods on a variety of images, and the experimental results confirm a significant improvement over existing methods.
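A sketch of the thresholding stage follows using PyWavelets; the FDA-based selection of the threshold is the paper's contribution and is not reproduced here, so the threshold is simply passed in. The wavelet family and decomposition level are illustrative assumptions.

```python
import pywt

def wavelet_soft_threshold(image, threshold, wavelet="db4", level=2):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition
    and reconstruct the despeckled image."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    new_coeffs = [coeffs[0]]                              # keep approximation band
    for detail_level in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, threshold, mode="soft")
                                for d in detail_level))
    return pywt.waverec2(new_coeffs, wavelet)
```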

Journal Article
TL;DR: Experimental results show the high robustness of the proposed method against both intentional and unintentional attacks during the transfer of video data.
Abstract: Embedding a digital watermark into an electronic document is proving to be a feasible solution for multimedia copyright protection and authentication purposes. In the present paper we propose a new digital video watermarking scheme based on scene change analysis. By detecting the motion scenes of the video and using Code Division Multiple Access (CDMA) techniques, the watermark is embedded into the mid-frequency sub-bands of the wavelet coefficients. In this experiment, in order to enhance the security of our algorithm, four keys are considered: three of them are formed in the watermark encryption process and one key is related to the CDMA embedding process. Also, with the aim of ensuring good compatibility between the proposed scheme and the Human Visual System (HVS), the blue channel of the RGB video is utilized to embed the watermark. Experimental results show the high robustness of the proposed method against both intentional and unintentional attacks during the transfer of video data. The implemented attacks are Gaussian noise, median filtering, frame averaging, frame dropping, geometric attacks and different kinds of lossy compression including MPEG-2, MPEG-4, MJPEG and H.264/AVC.

Journal Article
TL;DR: This research proposes a new structure for the mutual integration between OOD and CPN modeling languages to support model changes, and suggests a new structure, Object Oriented Coloured Petri Nets (OOCPN), that includes a set of rules to check and maintain the consistency and integrity of the OOCPN model based on OOD relations.
Abstract: The Unified Modeling Language (UML) is easy to understand and communicate using graphical notations, but lacks techniques for model validation and verification, especially when its diagrams are updated. Formal approaches like Coloured Petri Nets (CPNs) are based on strong mathematical notations and proofs as the basis for executable modeling languages. Transforming UML diagrams into executable models that are ready for analysis is significant, and providing an automated technique that can transform these diagrams into a mathematical model such as CPNs avoids the redundancy of writing specifications. The use of UML diagrams in modeling Object Oriented Diagrams (OODs) leads to a large number of interdependent diagrams, so it is necessary to preserve the diagrams' consistency since they are updated continuously. This research proposes a new structure for the mutual integration between OOD and CPN modeling languages to support model changes; the proposed integration suggests a new structure, Object Oriented Coloured Petri Nets (OOCPN), that includes a set of rules to check and maintain the consistency and integrity of the OOCPN model based on OOD relations.

Journal Article
TL;DR: A new multi-view face recognition approach that maintains a very acceptable running time and a high performance even in uncontrolled conditions and develops a new inter-communication technique using a model for the automatic pose estimation of the head in a face image.
Abstract: In this paper we present a new multi-view face recognition approach. Besides the recognition performance gain and the computation time reduction, our main objective is to deal with the variability of the face pose (multi-view) within the same class (identity). Several new methods were applied to face images to calculate our biometric templates. The Laplacian Smoothing Transform (LST) and Discriminant Analysis via Support Vectors (SVDA) have been used for feature extraction and selection. For the classification, we have developed a new inter-communication technique using a model for the automatic pose estimation of the head in a face image. Experiments conducted on the UMIST database show that an average improvement in face recognition performance is obtained in comparison with several multi-view face recognition techniques in the literature. Moreover, the system maintains a very acceptable running time and a high performance even in uncontrolled conditions.

Journal Article
TL;DR: A worm detection system that leverages the reliability of IP-Flow and the effectiveness of learning machines and uses the classification accuracy, false alarm rates, and training time as metrics of performance to conclude which algorithm is superior to another.
Abstract: We present a worm detection system that leverages the reliability of IP-Flow and the effectiveness of learning machines. Typically, a host infected by a scanning or an email worm initiates a significant amount of traffic that does not rely on DNS to translate names into numeric IP addresses. Based on this fact, we capture and classify NetFlow records to extract a feature pattern for each PC on the network within a certain period of time. A feature pattern includes: the number of DNS requests, the number of DNS responses, the number of DNS normals, and the number of DNS anomalies. Two learning machines are used, K-Nearest Neighbors (KNN) and Naive Bayes (NB), for the purpose of classification. Solid statistical tests, cross-validation and the paired t-test, are conducted to compare the individual performance of the KNN and NB algorithms. We used the classification accuracy, false alarm rates, and training time as metrics of performance to conclude which algorithm is superior to the other. The data set used in training and testing the algorithms was created using 18 real-life worm variants along with a large amount of benign flows.
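A minimal sketch of the classification step follows, with scikit-learn's KNN and Gaussian Naive Bayes evaluated by cross-validation on the four flow features named above; k=5 and the Gaussian likelihood are illustrative assumptions rather than the paper's settings.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Each row of X is one host's feature pattern for a time window:
# [dns_requests, dns_responses, dns_normals, dns_anomalies]; y: 1 = infected, 0 = benign.
def compare_classifiers(X, y, cv=10):
    """Cross-validated accuracy for KNN and Naive Bayes on flow feature patterns."""
    for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("NB", GaussianNB())]:
        acc = cross_val_score(clf, X, y, cv=cv).mean()
        print(f"{name}: mean accuracy = {acc:.3f}")
```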

Journal Article
TL;DR: A dynamic load balancing strategy for association rule mining in a grid environment is proposed, built upon a hierarchical grid model with three levels: super coordinator, coordinator, and processing nodes.
Abstract: Parallel and distributed systems represent one of the important solutions proposed to improve the performance of sequential association rule mining algorithms. However, the parallelization and distribution process is not trivial and still faces many problems of synchronization, communication, and workload balancing. Our study is limited to the workload balancing problem. In this paper, we propose a dynamic load balancing strategy for an association rule mining algorithm in a grid environment. This strategy is built upon a hierarchical grid model with three levels: super coordinator, coordinator, and processing nodes. The main objective of our strategy is to improve the performance of the distributed association rule mining algorithm APRIORI.

Journal Article
TL;DR: A Discrete Wavelet Transform based, VLSI-oriented lossy image compression approach, widely used as the core of digital image compression, is introduced with better power-efficiency, corresponding to 0.328 mW/chip, than prior methods.
Abstract: In this paper, we introduce a Discrete Wavelet Transform (DWT) based VLSI-oriented lossy image compression approach, widely used as the core of digital image compression. Here, the Distributed Arithmetic (DA) technique is applied to determine the wavelet coefficients, so that the number of arithmetic operations can be reduced substantially. In addition, the compression rate is enhanced with the aid of an RW block that forces some of the coefficients obtained from the high-pass filter to zero. Subsequently, Differential Pulse-Code Modulation (DPCM) and Huffman encoding are applied to acquire the binary sequence of the image. The functional simulation of each module is presented, and the performance of each module is analyzed in terms of gates required, clock cycles required, power, processing rate, and processing time. From the analysis, it is found that the DCM module requires more gates to perform the transformation process than the other modules. Eventually, the proposed compression approach is compared with existing methods in terms of processor area and power. Comparative results show that the proposed method offers good power-efficiency, corresponding to 0.328 mW/chip, compared to prior methods.