
Showing papers in "Journal of King Saud University - Computer and Information Sciences archive in 2015"


Journal ArticleDOI
TL;DR: A literature review of virtual reality and the Cave Automated Virtual Environment (CAVE) and a proposed taxonomy that categorizes such systems from the perspective of technologies used and the mental immersion level found in these systems are presented.
Abstract: One of the main goals of virtual reality is to provide immersive environments that take participants away from real life into a virtual one. Many investigators have been interested in bringing new technologies, devices, and applications to facilitate this goal. Few, however, have focused on the specific human-computer interaction aspects of such environments. In this article we present our literature review of virtual reality and the Cave Automated Virtual Environment (CAVE). In particular, the article begins by providing a brief overview of the evolution of virtual reality. In addition, key elements of a virtual reality system are presented along with a proposed taxonomy that categorizes such systems from the perspective of technologies used and the mental immersion level found in these systems. Moreover, a detailed overview of the CAVE is presented in terms of its characteristics, uses, and mainly, the interaction styles inside it. Insights into the interaction challenges and research directions for investigating interaction with virtual reality systems in general and the CAVE in particular are thoroughly discussed as well.

168 citations


Journal ArticleDOI
TL;DR: A large set of personal emails is used for the purpose of folder and subject classification, and classification based on NGram is shown to be the best for such a large text collection, especially as the text is bi-language.
Abstract: Information users depend heavily on email as one of the major sources of communication. Its importance and usage continue to grow despite the evolution of mobile applications, social networks, etc. Emails are used on both the personal and professional levels and can be considered official documents in communication among users. Email data mining and analysis can be conducted for several purposes, such as spam detection and classification, subject classification, etc. In this paper, a large set of personal emails is used for the purpose of folder and subject classification. Algorithms are developed to perform clustering and classification for this large text collection. Classification based on NGram is shown to be the best for such a large text collection, especially as the text is bi-language (i.e., with English and Arabic content).

95 citations
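
The NGram-based, bilingual classification described above lends itself to a compact illustration. The following is a minimal sketch, assuming scikit-learn and a tiny invented Arabic/English corpus in place of the paper's private email collection; character n-grams are used so no language-specific tokenizer is needed.

```python
# Minimal sketch: character n-gram classification of bilingual (Arabic/English)
# emails into folders. The tiny in-line corpus and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Meeting agenda for the project review on Sunday",
    "اجتماع مراجعة المشروع يوم الأحد",            # Arabic: project review meeting
    "Invoice attached, please process the payment",
    "يرجى تسديد الفاتورة المرفقة",                 # Arabic: please pay the attached invoice
]
folders = ["work", "work", "finance", "finance"]

# Character n-grams (2-4) work for both scripts without language-specific tokenizers.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MultinomialNB(),
)
model.fit(emails, folders)

print(model.predict(["Please settle the invoice", "جدول أعمال الاجتماع"]))
```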


Journal ArticleDOI
TL;DR: This work reviews and studies the major challenges faced by the Sultanate of Oman in the process of developing and implementing e-government and suggests a model which the government may apply in order to develop and implement its own successful e-government.
Abstract: Technological development has turned government policies and strategies toward e-government. E-government is considered the primary tool to facilitate citizens' access to various services. Thus, government plans and subsidies shall reach the public through e-government portals. Some developed countries have sufficiently integrated e-government technology, whereas others are still developing it. In this respect, we have reviewed and studied the major challenges faced by the Sultanate of Oman in the process of developing and implementing e-government. We have also analyzed secondary data to identify the level of e-government acceptance in the Sultanate of Oman. We have suggested a model which the government may apply in order to develop and implement its own successful e-government.

64 citations


Journal ArticleDOI
TL;DR: The results proved that images denoised using the DTCWT (Dual Tree Complex Wavelet Transform) with a Wiener filter have a better balance between smoothness and accuracy than the DWT and are less redundant than the SWT (Stationary Wavelet Transform).
Abstract: Image denoising is the process of removing noise from an image that has been corrupted by it. The wavelet method is one among various methods for recovering infinite-dimensional objects like curves, densities, images, etc. Wavelet techniques are very effective for removing noise because of their ability to capture the energy of a signal in a few transform values. Wavelet methods are based on shrinking the wavelet coefficients in the wavelet domain. In this paper, we propose a denoising approach based on the dual-tree complex wavelet transform and shrinkage with the Wiener filter technique (where either hard or soft thresholding operators of the dual-tree complex wavelet transform are used for denoising medical images). The results proved that images denoised using DTCWT (Dual Tree Complex Wavelet Transform) with the Wiener filter have a better balance between smoothness and accuracy than the DWT and are less redundant than the SWT (Stationary Wavelet Transform). We used the SSIM (Structural Similarity Index Measure) along with PSNR (Peak Signal to Noise Ratio) and the SSIM map to assess the quality of the denoised images.

61 citations
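
As a rough illustration of wavelet-shrinkage denoising and PSNR evaluation, the sketch below uses the ordinary DWT from PyWavelets with a universal soft threshold; it is not the paper's DTCWT-plus-Wiener pipeline, and the test image and noise level are invented.

```python
# Minimal sketch of wavelet-shrinkage denoising with soft thresholding and PSNR,
# using the ordinary DWT (PyWavelets) as a stand-in for the paper's dual-tree
# complex wavelet transform + Wiener filtering. Synthetic image, illustrative only.
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 200.0        # simple test image
noisy = clean + rng.normal(0.0, 20.0, clean.shape)             # additive Gaussian noise

# Multilevel 2-D DWT, universal (VisuShrink) threshold on the detail coefficients.
coeffs = pywt.wavedec2(noisy, "db2", level=2)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745             # noise estimate from finest diagonal band
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in level) for level in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db2")[: clean.shape[0], : clean.shape[1]]

def psnr(ref, img, peak=255.0):
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

print(f"PSNR noisy:    {psnr(clean, noisy):.2f} dB")
print(f"PSNR denoised: {psnr(clean, denoised):.2f} dB")
```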


Journal ArticleDOI
TL;DR: Experimental results reveal that the proposed watermarking algorithm yields watermarked images with superior imperceptibility and robustness to common attacks, such as JPEG compression, rotation, Gaussian noise, cropping, and median filter.
Abstract: Digital watermarking, which has been proven effective for protecting digital data, has recently gained considerable research interest. This study aims to develop an enhanced technique for producing watermarked images with high invisibility. During extraction, watermarks can be successfully extracted without the need for the original image. We use the discrete wavelet transform with a Haar filter to embed a binary watermark image in selected coefficient blocks. A probabilistic neural network is used to extract the watermark image. To evaluate the efficiency of the algorithm and the quality of the extracted watermark images, we used widely known image quality measures, such as peak signal-to-noise ratio (PSNR) and normalized cross correlation (NCC). Results indicate the excellent invisibility of the watermarked image (PSNR=68.27dB), as well as exceptional watermark extraction (NCC=0.9779). Experimental results reveal that the proposed watermarking algorithm yields watermarked images with superior imperceptibility and robustness to common attacks, such as JPEG compression, rotation, Gaussian noise, cropping, and median filtering.

61 citations
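
The embedding side of such a scheme can be illustrated with a Haar DWT and a simple quantization rule; the sketch below is only a stand-in for the paper's block selection and PNN-based extraction, with an invented host image and watermark, and it also shows how PSNR and NCC are computed.

```python
# Minimal sketch of Haar-DWT watermark embedding via quantization of LL
# coefficients, plus PSNR/NCC metrics. The paper's block selection and PNN-based
# extraction are replaced here by a simple quantization rule; illustrative only.
import numpy as np
import pywt

rng = np.random.default_rng(1)
host = rng.uniform(0, 255, (64, 64))                 # stand-in host image
watermark = rng.integers(0, 2, (32, 32))             # binary watermark

LL, (LH, HL, HH) = pywt.dwt2(host, "haar")
q = 16.0                                             # quantization step
# Quantization-index modulation: even multiples of q encode 0, odd encode 1.
LL_marked = (2 * np.round(LL / (2 * q)) + watermark) * q
marked = pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

# Blind extraction by re-quantizing the LL band of the marked image.
LL2, _ = pywt.dwt2(marked, "haar")
extracted = np.round(LL2 / q).astype(int) % 2

def psnr(ref, img, peak=255.0):
    return 10 * np.log10(peak ** 2 / np.mean((ref - img) ** 2))

def ncc(a, b):
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"PSNR = {psnr(host, marked):.2f} dB, NCC = {ncc(watermark, extracted):.4f}")
```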


Journal ArticleDOI
TL;DR: This work investigates the performance of three different machine learning algorithms, namely C5.0, AdaBoost and Genetic Programming (GP), to generate robust classifiers for identifying VoIP encrypted traffic, and shows that finding and employing the most suitable sampling and machine learning technique can improve the performance of classifying VoIP significantly.
Abstract: We investigate the performance of three different machine learning algorithms, namely C5.0, AdaBoost and Genetic Programming (GP), to generate robust classifiers for identifying VoIP encrypted traffic. To this end, a novel approach (Alshammari and Zincir-Heywood, 2011) based on machine learning is employed to generate robust signatures for classifying VoIP encrypted traffic. We apply statistical calculations on network flows to extract a feature set that includes neither payload information nor information based on source and destination port numbers and IP addresses. Our results show that finding and employing the most suitable sampling and machine learning technique can improve the performance of classifying VoIP significantly.

54 citations
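
A minimal sketch of the classification setup is shown below, with scikit-learn's decision tree standing in for C5.0 and synthetic flow statistics standing in for captured traces; the features deliberately exclude payload, ports and IP addresses, as in the abstract.

```python
# Minimal sketch: training a decision-tree classifier (a stand-in for C5.0) on
# payload-free flow statistics to separate "VoIP" from "non-VoIP" traffic.
# The synthetic flows below are illustrative; real experiments use captured traces.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 2000

def make_flows(mean_pkt, mean_iat, label, n):
    # Features: mean packet size (bytes), mean inter-arrival time (s),
    # flow duration (s), packets per flow (no ports, IPs, or payload).
    pkt = rng.normal(mean_pkt, 15, n)
    iat = rng.exponential(mean_iat, n)
    dur = rng.uniform(1, 300, n)
    cnt = dur / np.maximum(iat, 1e-3)
    return np.column_stack([pkt, iat, dur, cnt]), np.full(n, label)

X_voip, y_voip = make_flows(mean_pkt=160, mean_iat=0.02, label=1, n=n)   # small, regular packets
X_other, y_other = make_flows(mean_pkt=900, mean_iat=0.20, label=0, n=n)
X = np.vstack([X_voip, X_other]); y = np.concatenate([y_voip, y_other])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=6).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```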


Journal ArticleDOI
TL;DR: This paper proposes a new robust and secure anonymous biometric-based remote user authentication scheme using smart cards that is secure against all possible known attacks including the attacks found in An's scheme.
Abstract: Several biometric-based remote user authentication schemes using smart cards have been proposed in the literature in order to improve the security weaknesses in user authentication system. In 2012, An proposed an enhanced biometric-based remote user authentication scheme using smart cards. It was claimed that the proposed scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack and provides mutual authentication between the user and the server. In this paper, we first analyze the security of An's scheme and we show that this scheme has three serious security flaws in the design of the scheme: (i) flaw in user's biometric verification during the login phase, (ii) flaw in user's password verification during the login and authentication phases, and (iii) flaw in user's password change locally at any time by the user. Due to these security flaws, An's scheme cannot support mutual authentication between the user and the server. Further, we show that An's scheme cannot prevent insider attack. In order to remedy the security weaknesses found in An's scheme, we propose a new robust and secure anonymous biometric-based remote user authentication scheme using smart cards. Through the informal and formal security analysis, we show that our scheme is secure against all possible known attacks including the attacks found in An's scheme. The simulation results of our scheme using the widely-accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool ensure that our scheme is secure against passive and active attacks. In addition, our scheme is also comparable in terms of the communication and computational overheads with An's scheme and other related existing schemes. As a result, our scheme is more appropriate for practical applications compared to other approaches.

39 citations


Journal ArticleDOI
TL;DR: An algorithm to iteratively extract a list of attributes and associations for the given seed concept from which the rough schema is conceptualized using a syntactic and semantic probability-based Naive Bayes classifier is proposed.
Abstract: Domain ontology is used as a reliable source of knowledge in information retrieval systems such as question answering systems. Automatic ontology construction is possible by extracting concept relations from unstructured large-scale text. In this paper, we propose a methodology to extract concept relations from unstructured text using a syntactic and semantic probability-based Naive Bayes classifier. We propose an algorithm to iteratively extract a list of attributes and associations for the given seed concept from which the rough schema is conceptualized. A set of hand-coded dependency parsing pattern rules and a binary decision tree-based rule engine were developed for this purpose. This ontology construction process is initiated through a question answering process. For each new query submitted, the required concept is dynamically constructed, and ontology is updated. The proposed relation extraction method was evaluated using benchmark data sets. The performance of the constructed ontology was evaluated using gold standard evaluation and compared with similar well-performing methods. The experimental results reveal that the proposed approach can be used to effectively construct a generic domain ontology with higher accuracy. Furthermore, the ontology construction method was integrated into the question answering framework, which was evaluated using the entailment method.

36 citations
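
The classification step can be illustrated in isolation: a Naive Bayes model over syntactic/semantic feature strings deciding whether a candidate pair expresses a relation. The feature encodings and labels below are invented for illustration; the paper's dependency-pattern rules and decision-tree rule engine are not reproduced.

```python
# Minimal sketch: a Naive Bayes classifier over syntactic/semantic feature strings
# that decides whether a candidate pair expresses a concept relation. The feature
# strings (dependency path, POS tags, head lemma) and labels are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each candidate is encoded as a bag of features, e.g. "dep=nsubj pos=NN_VBZ_NN lemma=have".
candidates = [
    "dep=nsubj_dobj pos=NN_VBZ_NN lemma=have",      # "a camera has a lens"
    "dep=prep_of    pos=NN_IN_NN  lemma=of",        # "resolution of the camera"
    "dep=nsubj_dobj pos=NN_VBZ_NN lemma=eat",       # "the man eats lunch" (not a schema relation)
    "dep=advmod     pos=RB_VBD    lemma=quickly",   # no concept pair at all
]
labels = ["relation", "relation", "no_relation", "no_relation"]

model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), MultinomialNB())
model.fit(candidates, labels)

new = ["dep=prep_of pos=NN_IN_NN lemma=of"]          # "weight of the engine"
print(model.predict(new), model.predict_proba(new).round(3))
```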


Journal ArticleDOI
TL;DR: Results of experiments show that Polynomial Networks classifier is a competitive algorithm to the state-of-the-art ones in the field of Arabic text classification.
Abstract: In this paper, an Arabic statistical learning-based text classification system has been developed using Polynomial Neural Networks. Polynomial Networks have been recently applied to English text classification, but they were never used for Arabic text classification. In this research, we investigate the performance of Polynomial Networks in classifying Arabic texts. Experiments are conducted on a widely used Arabic dataset in text classification: Al-Jazeera News dataset. We chose this dataset to enable direct comparisons of the performance of Polynomial Networks classifier versus other well-known classifiers on this dataset in the literature of Arabic text classification. Results of experiments show that Polynomial Networks classifier is a competitive algorithm to the state-of-the-art ones in the field of Arabic text classification.

34 citations
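
As a rough approximation of a polynomial classifier for text, the sketch below expands TF-IDF features to second-order polynomial terms and fits a linear model with scikit-learn; this is not the paper's exact polynomial-network training, and the toy Arabic documents are invented.

```python
# Minimal sketch: approximating a polynomial-network text classifier by expanding
# TF-IDF features to second-order polynomial terms and fitting a linear model.
# The tiny Arabic toy documents and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "المنتخب يفوز في مباراة كرة القدم",      # sports: the team wins the football match
    "الفريق يتأهل الى النهائي",               # sports: the team qualifies for the final
    "البورصة تسجل ارتفاعا في الاسهم",        # economy: the stock market records a rise
    "البنك المركزي يخفض سعر الفائدة",        # economy: the central bank cuts the interest rate
]
labels = ["sports", "sports", "economy", "economy"]

model = make_pipeline(
    TfidfVectorizer(max_features=50),                  # keep the expansion small
    PolynomialFeatures(degree=2, include_bias=False),  # pairwise term interactions
    LogisticRegression(max_iter=1000),
)
model.fit(docs, labels)
print(model.predict(["الفريق يفوز بالمباراة"]))         # expected: sports
```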


Journal ArticleDOI
TL;DR: An adaptive e-Learning system, which generates user-specific e-Learning content by comparing the concepts with more than one system using similarity measures, is proposed, and it is shown that the proposed approach is effective compared with other methods.
Abstract: e-Learning is one of the most preferred media of learning by learners. Learners search the web to gather knowledge about a particular topic from the information in the repositories. Retrieval of relevant materials from a domain can be easily implemented if the information is organized and related in some way. Ontologies are a key concept that helps us relate information for providing more relevant lessons to the learner. This paper proposes an adaptive e-Learning system, which generates user-specific e-Learning content by comparing the concepts with more than one system using similarity measures. A cross ontology measure is defined, which consists of a fuzzy domain ontology as the primary ontology and the domain expert's ontology as the secondary ontology, for the comparison process. A personalized document is provided to the user along with a user profile, which includes the data obtained from processing the proposed method under a user score obtained through user evaluation. The results of the proposed e-Learning system under the designed cross ontology similarity measure show a significant increase in performance and accuracy under different conditions. The comparative analysis showed the difference in performance between our proposed method and other methods. Based on the assessment results, the proposed approach proved effective compared with other methods.

31 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to describe a rule-based procedure, or algorithm, by which a stem for the Arabian Gulf dialect can be defined; the algorithm was tested on a number of words and gave a good correct-stem ratio.
Abstract: Arabic dialects have been widely used instead of the Modern Standard Arabic language in many fields for many years. The presence of dialects in any language is a big challenge. Dialects add a new set of variational dimensions in fields like natural language processing, information retrieval and even Arabic chatting between different Arab nationals. Spoken dialects have no standard morphology, phonology or lexicon like Modern Standard Arabic. Hence, the objective of this paper is to describe a procedure, or algorithm, by which a stem for the Arabian Gulf dialect can be defined. The algorithm is rule based. Special rules are created to remove the suffixes and prefixes of the dialect words. The algorithm also applies rules related to word size and the relation between adjacent letters. The algorithm was tested on a number of words and gave a good correct-stem ratio. The algorithm was also compared with two Modern Standard Arabic algorithms. The results showed that Modern Standard Arabic stemmers performed poorly with the Arabic Gulf dialect, and our algorithm performed poorly when applied to Modern Standard Arabic words.
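
A minimal sketch of such a rule-based stemmer is given below; the affix lists, minimum stem length and example words are illustrative rather than the paper's actual rules.

```python
# Minimal sketch of a rule-based stemmer that strips common prefixes/suffixes and
# respects a minimum stem length, in the spirit of the dialect stemmer described
# above. The affix lists below are illustrative, not the paper's actual rules.
PREFIXES = ["ال", "وال", "بال", "فال", "و", "ب", "ف", "لل"]
SUFFIXES = ["ات", "ين", "ون", "ها", "هم", "كم", "نا", "ه", "ي", "ك"]
MIN_STEM = 3  # do not strip if the remaining stem would be too short

def stem(word: str) -> str:
    # Strip the longest matching prefix, then the longest matching suffix.
    for pre in sorted(PREFIXES, key=len, reverse=True):
        if word.startswith(pre) and len(word) - len(pre) >= MIN_STEM:
            word = word[len(pre):]
            break
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= MIN_STEM:
            word = word[: -len(suf)]
            break
    return word

for w in ["والمدرسة", "كتابهم", "لاعبين"]:
    print(w, "->", stem(w))
```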

Journal ArticleDOI
TL;DR: A new blind digital speech watermarking technique based on Eigen-value quantization in Discrete Wavelet Transform that is robust against different attacks such as filtering, additive noise, resampling, and cropping is presented.
Abstract: This paper presents a new blind digital speech watermarking technique based on Eigen-value quantization in Discrete Wavelet Transform. Initially, each frame of the digital speech was transformed into the wavelet domain by applying Discrete Wavelet Transform. Then, the Eigen-value of Approximation Coefficients was computed by using Singular Value Decomposition. Finally, the watermark bits were embedded by quantization of the Eigen-value. The experimental results show that this watermarking technique is robust against different attacks such as filtering, additive noise, resampling, and cropping. Applying new robust transforms, adaptive quantization steps and synchronization techniques can be the future trends in this field.
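
The embedding idea can be sketched as follows: each frame is transformed with a 1-D DWT, the approximation coefficients are reshaped into a matrix, and the largest singular value is quantized to carry one bit. The frame length, wavelet (Haar here) and quantization step are illustrative choices, not the paper's settings.

```python
# Minimal sketch of blind speech watermarking by quantizing the largest singular
# value of each frame's DWT approximation coefficients. Frame length, wavelet and
# quantization step are illustrative choices, not the paper's exact settings.
import numpy as np
import pywt

rng = np.random.default_rng(2)
speech = rng.normal(0, 0.3, 16000)          # 1 s of stand-in "speech" at 16 kHz
bits = [1, 0, 1, 1, 0, 0, 1, 0]             # watermark bits, one per frame
FRAME, Q = 1024, 0.01

marked = speech.copy()
for i, bit in enumerate(bits):
    frame = speech[i * FRAME:(i + 1) * FRAME]
    cA, cD = pywt.dwt(frame, "haar")                       # 1-D DWT of the frame
    M = cA.reshape(-1, 16)                                  # matrix of approximation coeffs
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[0] = (2 * np.floor(s[0] / (2 * Q)) + bit + 0.5) * Q   # quantize the largest singular value
    cA_marked = (U @ np.diag(s) @ Vt).ravel()
    marked[i * FRAME:(i + 1) * FRAME] = pywt.idwt(cA_marked, cD, "haar")[:FRAME]

# Blind extraction: recompute the largest singular value and read the quantizer bin.
extracted = []
for i in range(len(bits)):
    cA, _ = pywt.dwt(marked[i * FRAME:(i + 1) * FRAME], "haar")
    s0 = np.linalg.svd(cA.reshape(-1, 16), compute_uv=False)[0]
    extracted.append(int(np.floor(s0 / Q)) % 2)

print("embedded: ", bits)
print("extracted:", extracted)
```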

Journal ArticleDOI
TL;DR: The proposed KGANN is an efficient forecasting model for exchange rate prediction, as demonstrated through an exhaustive computer simulation study using real-life data.
Abstract: This paper presents a new adaptive forecasting model using a knowledge guided artificial neural network (KGANN) structure for efficient prediction of exchange rates. The new structure has two parallel systems. The first system is a least mean square (LMS) trained adaptive linear combiner, whereas the second system employs an adaptive FLANN model to supplement the knowledge base with the objective of improving its performance. The output of a trained LMS model is added to an adaptive FLANN model to provide a more accurate exchange rate than that predicted by either a simple LMS or a FLANN model alone. This finding has been demonstrated through an exhaustive computer simulation study using real-life data. Thus the proposed KGANN is an efficient forecasting model for exchange rate prediction.
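
A minimal sketch of the two-branch structure is shown below: an LMS-trained linear combiner whose output is supplemented by a trigonometrically expanded FLANN branch, trained online on a synthetic exchange-rate-like series; the step size, lag count and expansion are illustrative.

```python
# Minimal sketch of the two-branch idea: an LMS-trained linear combiner whose
# output is supplemented by a trigonometrically expanded FLANN branch, both
# trained online on a synthetic exchange-rate-like series (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
T = 2000
rate = 1.3 + 0.05 * np.sin(np.arange(T) / 50.0) + 0.002 * rng.standard_normal(T)

LAGS, MU = 5, 0.01

def flann_expand(x):
    # Functional expansion: x, sin(pi x), cos(pi x) for each lagged input.
    return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)])

w_lin = np.zeros(LAGS)                   # LMS linear combiner weights
w_fln = np.zeros(3 * LAGS)               # FLANN branch weights
errors = []
for t in range(LAGS, T):
    x = rate[t - LAGS:t]                 # lagged inputs
    y = w_lin @ x + w_fln @ flann_expand(x)   # linear branch + FLANN supplement
    e = rate[t] - y
    w_lin += MU * e * x                  # LMS updates for both branches
    w_fln += MU * e * flann_expand(x)
    errors.append(e * e)

print(f"MSE over last 500 steps: {np.mean(errors[-500:]):.6e}")
```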

Journal ArticleDOI
TL;DR: Statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model to detect plagiarism cases beyond literal plagiarism.
Abstract: Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties when using existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and suspicious texts of short lengths, which implement the semantic relatedness between words as a membership function of a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7%) handmade paraphrases, 54,815 (55.4%) artificial plagiarism cases, and 32,578 (32.9%) plagiarism-free cases. We conduct extensive experimental verifications, including a study of the effects of different segmentation schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. Statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model to detect plagiarism cases beyond literal plagiarism. Additionally, the analysis of variance (ANOVA) statistical test shows the effectiveness of different segmentation schemes used with the proposed approach.
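
The fuzzy aggregation step can be illustrated in isolation, as below; hard-coded word-pair similarities stand in for the WordNet/POS-based measures, and the membership function, aggregation and threshold are illustrative rather than the paper's learned settings.

```python
# Minimal sketch of the fuzzy aggregation step: word-to-word semantic similarities
# (hard-coded here in place of the WordNet/POS-based measures used in the paper)
# feed a fuzzy membership per word, which is aggregated into a sentence score and
# compared against a decision threshold. Values and threshold are illustrative.

# Pretend these came from a WordNet-based similarity measure over POS-matched pairs.
word_sim = {
    ("car", "automobile"): 1.0, ("buy", "purchase"): 0.9,
    ("cheap", "inexpensive"): 0.8, ("car", "purchase"): 0.1,
}
def sim(a, b):
    return max(word_sim.get((a, b), 0.0), word_sim.get((b, a), 0.0), 1.0 if a == b else 0.0)

def fuzzy_membership(word, sentence):
    # Membership of `word` in the fuzzy set induced by `sentence`:
    # complement of the product of dissimilarities to every sentence word.
    prod = 1.0
    for w in sentence:
        prod *= (1.0 - sim(word, w))
    return 1.0 - prod

def sentence_score(suspicious, source):
    memberships = [fuzzy_membership(w, source) for w in suspicious]
    return sum(memberships) / len(memberships)

source = ["buy", "a", "cheap", "car"]
suspicious = ["purchase", "an", "inexpensive", "automobile"]
score = sentence_score(suspicious, source)
THRESHOLD = 0.65   # illustrative permission threshold
print(f"score = {score:.2f} -> {'plagiarised' if score >= THRESHOLD else 'clean'}")
```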

Journal ArticleDOI
TL;DR: A database developed for the ArSL manual and non-manual signs, called the SignsWorld Atlas, is presented; it combines postures, gestures, and motions collected under laboratory lighting and background conditions.
Abstract: Research in vision-based automatic sign language recognition (ASLR) has increased notably. However, little attention has been given to building a uniform platform for these purposes. Sign language (SL) includes not only static hand gestures, finger spelling, and hand motions (which are called manual signs, "MS") but also facial expressions, lip reading, and body language (which are called non-manual signs, "NMS"). Building a database (DB) that includes both MS and NMS is the main first step for any SL recognition task. In addition, Arabic Sign Language (ArSL) has no standard database. For this purpose, this paper presents a DB developed for ArSL MS and NMS, which we call the SignsWorld Atlas. The postures, gestures, and motions included in this DB were collected under laboratory lighting and background conditions. Individual facial expression recognition and static hand gesture recognition tasks were tested by the authors using the SignsWorld Atlas, achieving recognition rates of 97% and 95.28%, respectively.

Journal ArticleDOI
TL;DR: The results obtained proved the effectiveness of the proposed noise suppression technique and its ability to suppress noise and enhance the speech signal.
Abstract: In this paper, an effective noise suppression technique for enhancement of speech signals using optimized mask is proposed. Initially, the noisy speech signal is broken down into various time-frequency (TF) units and the features are extracted by finding out the Amplitude Magnitude Spectrogram (AMS). The signals are then classified based on quality ratio into different classes to generate the initial set of solutions. Subsequently, the optimal mask for each class is generated based on Cuckoo search algorithm. Subsequently, in the waveform synthesis stage, filtered waveforms are windowed and then multiplied by the optimal mask value and summed up to get the enhanced target signal. The experimentation of the proposed technique was carried out using various datasets and the performance is compared with the previous techniques using SNR. The results obtained proved the effectiveness of the proposed technique and its ability to suppress noise and enhance the speech signal.
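
The optimization step can be illustrated in isolation: the sketch below runs a simplified cuckoo search (Levy flights plus abandonment of the worst nests) to find a per-band gain mask that minimizes the error against a clean reference. The subband model, population and iteration budget are invented, and the paper's AMS features and class structure are not reproduced.

```python
# Minimal sketch of a simplified cuckoo search (Levy flights + nest abandonment)
# used to find a per-band gain mask that minimizes the error between masked noisy
# subbands and a clean reference. Band count and parameters are illustrative.
from math import gamma, pi, sin
import numpy as np

rng = np.random.default_rng(6)
BANDS, N = 8, 400
clean = rng.normal(0, 1, (BANDS, N)) * np.linspace(1.0, 0.1, BANDS)[:, None]
noisy = clean + rng.normal(0, 0.5, (BANDS, N))        # noise hits every band equally

def cost(mask):
    # Mean squared error between the masked noisy signal and the clean reference.
    return np.mean((mask[:, None] * noisy - clean) ** 2)

def levy_step(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

POP, ITERS, PA = 15, 200, 0.25
nests = rng.uniform(0, 1, (POP, BANDS))               # candidate masks in [0, 1]
fitness = np.array([cost(n) for n in nests])

for _ in range(ITERS):
    best = nests[np.argmin(fitness)]
    # Levy-flight move toward the best nest, then greedy replacement.
    new = np.clip(nests + 0.05 * levy_step((POP, BANDS)) * (nests - best), 0, 1)
    new_fit = np.array([cost(n) for n in new])
    improved = new_fit < fitness
    nests[improved], fitness[improved] = new[improved], new_fit[improved]
    # Abandon a fraction PA of the worst nests and rebuild them randomly.
    worst = np.argsort(fitness)[-int(PA * POP):]
    nests[worst] = rng.uniform(0, 1, (len(worst), BANDS))
    fitness[worst] = np.array([cost(n) for n in nests[worst]])

best = nests[np.argmin(fitness)]
print("estimated mask:", best.round(2))
print("Wiener-optimal:", (np.var(clean, axis=1) / np.var(noisy, axis=1)).round(2))
```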

Journal ArticleDOI
TL;DR: An innovative age group classification method based on the Correlation Fractal Dimension of a facial edge image is proposed.
Abstract: In the computer vision community, categorizing a person's facial image into various age groups precisely is challenging and has not been pursued effectively. To address this problem, which is an important area of research, the present paper proposes an innovative age group classification method based on the Correlation Fractal Dimension of a complex facial image. Wrinkles appear on the face with aging, thereby changing the facial edges of the image. The proposed method is rotation and pose invariant. The present paper concentrates on developing an innovative technique that classifies facial images into four categories, i.e., child (0-15), young adult (15-30), middle-aged adult (31-50), and senior adult (>50), based on the correlation FD value of a facial edge image.
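
A sketch of estimating a correlation fractal dimension from a binary edge image, and mapping it to an age group, is shown below; the grid-based correlation sum is a standard estimator, while the random "edge map" and the cut points are purely illustrative.

```python
# Minimal sketch: estimating a correlation fractal dimension from a binary edge
# image via grid-based correlation sums, then mapping it to an age group with
# illustrative thresholds (the paper's actual ranges are not reproduced here).
import numpy as np

def correlation_fd(edges, box_sizes=(2, 4, 8, 16)):
    h, w = edges.shape
    total = edges.sum()
    logs_r, logs_s = [], []
    for r in box_sizes:
        # Sum of squared occupancy probabilities over all r x r grid cells.
        s = 0.0
        for i in range(0, h, r):
            for j in range(0, w, r):
                p = edges[i:i + r, j:j + r].sum() / total
                s += p * p
        logs_r.append(np.log(r))
        logs_s.append(np.log(s))
    slope, _ = np.polyfit(logs_r, logs_s, 1)   # D2 is the slope of log S(r) vs log r
    return slope

def age_group(fd):
    # Illustrative cut points only; a real system would learn these from data.
    if fd < 1.2:   return "child (0-15)"
    if fd < 1.5:   return "young adult (15-30)"
    if fd < 1.75:  return "middle-aged adult (31-50)"
    return "senior adult (>50)"

rng = np.random.default_rng(4)
edges = (rng.random((64, 64)) < 0.15).astype(float)   # stand-in for a facial edge map
fd = correlation_fd(edges)
print(f"correlation FD = {fd:.2f} -> {age_group(fd)}")
```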

Journal ArticleDOI
TL;DR: A QoS bootstrapping solution for Web Services is proposed, and a QoS bootstrapping framework and a prototype are built to support it; bootstrapping QoS is the process of evaluating the QoS of newly registered services at the time the services are published.
Abstract: A distributed application may be composed of global services provided by different organizations and having different properties. To select a service from many similar services, it is important to distinguish between them. Quality of services (QoS) has been used as a distinguishing factor between similar services and plays an important role in service discovery, selection, and composition. Moreover, QoS is an important contributing factor to the evolution of distributed paradigms, such as service-oriented computing and cloud computing. There are many research works that assess services and justify the QoS at the finding, composition, or binding stages of services. However, there is a need to justify the QoS once new services are registered and before any requestors use them; this is called bootstrapping QoS. Bootstrapping QoS is the process of evaluating the QoS of the newly registered services at the time of publishing the services. Thus, this paper proposes a QoS bootstrapping solution for Web Services and builds a QoS bootstrapping framework. In addition, Service Oriented Architecture (SOA) is extended and a prototype is built to support QoS bootstrapping. Experiments are conducted and a case study is presented to test the proposed QoS bootstrapping solution.

Journal ArticleDOI
TL;DR: The hybridization of GSA and Kepler algorithm is an efficient approach to provide much stronger specialization in intensification and/or diversification and is robust enough to optimize the benchmark functions and practical optimization problems.
Abstract: It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called "Kepler", inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to provide much stronger specialization in intensification and/or diversification. The performance of GSA-Kepler is evaluated by applying it to 14 benchmark functions with 20-1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.
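
For orientation, the sketch below implements the basic GSA update (masses from fitness, a decaying gravitational constant, force-driven velocity updates) on the sphere benchmark; it does not include the Kepler extension, and the population size, G0 and alpha are illustrative.

```python
# Minimal sketch of the basic Gravitational Search Algorithm (without the Kepler
# extension proposed in the paper) minimizing the sphere benchmark. Parameters
# (population, iterations, G0, alpha) are illustrative choices.
import numpy as np

rng = np.random.default_rng(5)
DIM, POP, ITERS, G0, ALPHA = 10, 30, 500, 1.0, 20.0

def sphere(x):                       # benchmark: f(x) = sum(x_i^2), minimum 0 at the origin
    return np.sum(x * x, axis=-1)

X = rng.uniform(-5, 5, (POP, DIM))   # agent positions
V = np.zeros((POP, DIM))             # agent velocities
print(f"initial best: {sphere(X).min():.3e}")

for t in range(ITERS):
    fit = sphere(X)
    best, worst = fit.min(), fit.max()
    m = (worst - fit) / (worst - best + 1e-12)        # better fitness -> larger mass
    M = m / (m.sum() + 1e-12)
    G = G0 * np.exp(-ALPHA * t / ITERS)               # gravitational constant decays over time

    diff = X[None, :, :] - X[:, None, :]              # diff[i, j] = X[j] - X[i]
    dist = np.linalg.norm(diff, axis=-1) + 1e-12
    # a_i = sum_j rand * G * M_j * (x_j - x_i) / R_ij   (M_i cancels out of F_i / M_i)
    acc = (rng.random((POP, POP, 1)) * G * M[None, :, None] * diff / dist[:, :, None]).sum(axis=1)

    V = rng.random((POP, DIM)) * V + acc              # randomly damped velocity update
    X = X + V

print(f"final best:   {sphere(X).min():.3e}")
```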

Journal ArticleDOI
TL;DR: It is observed that the scheme cannot withstand lost smartcard/off-line password guessing, privileged-insider and known session-specific temporary information attacks, and lacks the requirements of lost smart card revocation and users' anonymity.
Abstract: Recently, Wu et al. proposed a password-based remote user authentication scheme for the integrated Electronic Patient Record (EPR) information system to achieve mutual authentication and session key agreement over the Internet. They claimed that the scheme resists various attacks and offers lower computation cost, data integrity, confidentiality and authenticity. However, we observed that the scheme cannot withstand lost smartcard/off-line password guessing, privileged-insider and known session-specific temporary information attacks, and lacks the requirements of lost smartcard revocation and users' anonymity. Besides, the password change phase is inconvenient to use because a user cannot change his password independently. Thus, we proposed a new password-based user authentication scheme for the integrated EPR information system that would be able to resist detected security flaws of Wu et al.'s scheme.

Journal ArticleDOI
TL;DR: This study investigates therapists' intention to use serious games for cognitive rehabilitation and identifies underlying factors that may affect their acceptance; the "Ship Game" prototype is found to be easy to use, useful, and enjoyable.
Abstract: Acquired brain injury is one cause of long-term disability. Serious games can assist in cognitive rehabilitation. However, therapists' perception and feedback will determine game adoption. The objective of this study is to investigate therapists' intention to use serious games for cognitive rehabilitation and identify underlying factors that may affect their acceptance. The respondents are 41 therapists who evaluated a "Ship Game" prototype. Data were collected using survey questionnaire and interview. A seven-point Likert scale was used for items in the questionnaire ranging from (1) "strongly disagree" to (7) "strongly agree". Results indicate that the game is easy to use (Mean=5.83), useful (Mean=5.62), and enjoyable (Mean=5.90). However intention to use is slightly low (Mean=4.60). Significant factors that can affect therapists' intention to use the game were gathered from interviews. Game-based intervention should reflect therapists' needs in order to achieve various rehabilitation goals, providing suitable and meaningful training. Hence, facilities to tailor the game to the patient's ability, needs and constraints are important factors that can increase therapists' intention to use and help to deliver game experience that can motivate patients to undergo the practices needed. Moreover, therapists' supervision, database functionality and quantitative measures regarding a patient's progress also represent crucial factors.

Journal ArticleDOI
TL;DR: It is demonstrated that a combination of high-resolution and low-resolution algorithms can be a useful tool for physiologists to find the neural sources of primary circuits in the brain.
Abstract: The brain is a complex organ, and many attempts have been made to understand its functions. Studying attention and memory circuits can help to obtain much information about the brain. P300 is related to attention and memory operations, so its investigation will lead to a better understanding of these mechanisms. In this study, EEG signals of thirty healthy subjects are analyzed. Each subject participates in a three-segment experiment including start, penalty and last segments. Each segment contains the same number of visual and auditory tests including warning, attention, response and feedback phases. Data analysis is done using conventional averaging techniques, and P300 source localization is carried out with two localization algorithms: a low-resolution and a high-resolution algorithm. Using a realistic head model to improve the accuracy of localization, our results demonstrate that the P300 component arises from a wide cerebral cortex network and that localizing a definite generating cortical zone is impossible. This study shows that a combination of high-resolution and low-resolution algorithms can be a useful tool for physiologists to find the neural sources of primary circuits in the brain.

Journal ArticleDOI
TL;DR: The sign test results suggest a statistical significance of the superiority of Seed-Detective (and ModEx) over the existing techniques in terms of F-measure, Entropy and Purity.
Abstract: In this paper we present two clustering techniques called ModEx and Seed-Detective. ModEx is a modified version of an existing clustering technique called Ex-Detective. It addresses some limitations of Ex-Detective. Seed-Detective is a combination of ModEx and Simple K-Means. Seed-Detective uses ModEx to produce a set of high quality initial seeds that are then given as input to K-Means for producing the final clusters. The high quality initial seeds are expected to produce high quality clusters through K-Means. The performances of Seed-Detective and ModEx are compared with the performances of Ex-Detective, PAM, Simple K-Means (SK), Basic Farthest Point Heuristic (BFPH) and New Farthest Point Heuristic (NFPH). We use three cluster evaluation criteria namely F-measure, Entropy and Purity and four natural datasets that we obtain from the UCI Machine learning repository. In the datasets our proposed techniques perform better than the existing techniques in terms of F-measure, Entropy and Purity. The sign test results suggest a statistical significance of the superiority of Seed-Detective (and ModEx) over the existing techniques.
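
The seeding idea can be illustrated as below: a simple density-based heuristic (standing in for ModEx, whose details are not reproduced) produces initial centers that are handed to scikit-learn's K-Means, and the result is compared with random initialization on synthetic blobs.

```python
# Minimal sketch of the seeding idea: pick initial seeds with a simple
# density-based heuristic (a stand-in for ModEx) and hand them to K-Means as its
# initial centers; compare against random initialization on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.neighbors import NearestNeighbors

X, y_true = make_blobs(n_samples=600, centers=3, cluster_std=1.2, random_state=7)
K = 3

# Heuristic seeds: the densest point, then points that are both dense and far
# from the seeds already chosen.
nn = NearestNeighbors(n_neighbors=10).fit(X)
density = 1.0 / nn.kneighbors(X)[0].mean(axis=1)          # inverse mean 10-NN distance
seeds = [X[np.argmax(density)]]
for _ in range(K - 1):
    d = np.min([np.linalg.norm(X - s, axis=1) for s in seeds], axis=0)
    seeds.append(X[np.argmax(d * density)])
seeds = np.array(seeds)

km_seeded = KMeans(n_clusters=K, init=seeds, n_init=1).fit(X)
km_random = KMeans(n_clusters=K, init="random", n_init=1, random_state=0).fit(X)

print("ARI seeded :", round(adjusted_rand_score(y_true, km_seeded.labels_), 3))
print("ARI random :", round(adjusted_rand_score(y_true, km_random.labels_), 3))
```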

Journal ArticleDOI
TL;DR: The quality of the proposed system for age estimation using facial features is shown by broad experiments on the available FG-NET database, and new methodologies like Gene Expression Programming (GEP) are explored here with significant results.
Abstract: This work is about estimating human age automatically through analysis of facial images, which has many real-world applications. Due to prompt advances in the fields of machine vision, facial image processing, and computer graphics, automatic age estimation from facial images is one of the dominant topics these days. This is due to widespread real-world applications in the areas of biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, along with cosmetology. As it is difficult to estimate the exact age, this system estimates a certain range of ages. Four sets of classifications have been used to differentiate a person's data into one of the different age groups. The uniqueness of this study is the usage of two technologies, i.e., Artificial Neural Networks (ANN) and Gene Expression Programming (GEP), to estimate the age and then compare the results. New methodologies like Gene Expression Programming (GEP) have been explored here and significant results were found. The dataset has been developed to provide more efficient results by superior preprocessing methods. The proposed approach has been developed, trained and tested using both methods. A public dataset, FG-NET, was used to test the system. The quality of the proposed system for age estimation using facial features is shown by broad experiments on the available FG-NET database.

Journal ArticleDOI
TL;DR: This work presents a novel noise addition technique called Forest Framework, two novel data quality evaluation techniques called EDUDS and EDUSC, and a security evaluation technique called SERS, and compares it to its predecessor, Framework, and another established technique, GADP.
Abstract: Data mining plays an important role in analyzing the massive amount of data collected in today's world. However, due to the public's rising awareness of privacy and lack of trust in organizations, suitable Privacy Preserving Data Mining (PPDM) techniques have become vital. A PPDM technique provides individual privacy while allowing useful data mining. We present a novel noise addition technique called Forest Framework, two novel data quality evaluation techniques called EDUDS and EDUSC, and a security evaluation technique called SERS. Forest Framework builds a decision forest from a dataset and preserves all the patterns (logic rules) of the forest while adding noise to the dataset. We compare Forest Framework to its predecessor, Framework, and another established technique, GADP. Our comparison is done using our three evaluation criteria, as well as Prediction Accuracy. Our experimental results demonstrate the success of our proposed extensions to Framework and the usefulness of our evaluation criteria.

Journal ArticleDOI
TL;DR: Results of this study showed that some of these popular Websites are using techniques that are considered spam techniques according to Search Engine Optimization guidelines.
Abstract: The expansion of the Web and its information into all aspects of life raises the concern of how to trust information published on the Web, especially in cases where the publisher may not be known. Websites strive to be more popular and to make themselves visible to search engines and eventually to users. Website popularity can be measured using several metrics such as Web traffic (e.g., the number of visitors and the number of visited pages). Link or page popularity refers to the total number of hyperlinks referring to a certain Web page. In this study, several top-ranked Arabic Websites are selected for evaluating possible Web spam behavior. Websites use spam techniques to boost their ranks within the Search Engine Results Page (SERP). The results of this study showed that some of these popular Websites are using techniques that are considered spam techniques according to Search Engine Optimization guidelines.

Journal ArticleDOI
TL;DR: The main objective of this paper is to introduce a peer-to-peer team formation technique based on zone routing protocol (ZRP), which achieves fast successful recommendations within the limited mobile resources and reduces exchanged messages.
Abstract: Mobile social networking is a new trend for social networking that enables users with similar interests to connect together through mobile devices. Therefore, it possesses the same features of a social network with added support to the features of a Mobile Ad-hoc Network (MANET) in terms of limited computing power, limited coverage, and intermittent connectivity. One of the most important features in social networks is Team Formation. Team Formation aims to assemble a set of users with a set of skills required for a certain task. The team formation is a special type of recommendation which is important to enable cooperative work among users. Team formation is challenging since users' interaction time is limited in MANET. The main objective of this paper is to introduce a peer-to-peer team formation technique based on zone routing protocol (ZRP). A comparison was made with Flooding and Adaptive Location Aided Mobile Ad Hoc Network Routing (ALARM) techniques. The suggested technique achieves fast successful recommendations within the limited mobile resources and reduces exchanged messages. The suggested technique has fast response time, small required buffering and low power consumption. The testing results show better performance of the suggested technique compared to flooding and ALARM technique.

Journal ArticleDOI
TL;DR: This paper presents a new approach to server consolidation in heterogeneous computer clusters using Colored Petri Nets (CPNs), and explores the use of CPN Tools in analyzing the state spaces of the CPNs.
Abstract: In this paper, we present a new approach to server consolidation in heterogeneous computer clusters using Colored Petri Nets (CPNs). Server consolidation aims to reduce energy costs and improve resource utilization by reducing the number of servers necessary to run the existing virtual machines in the cluster. It exploits the emerging technology of live migration which allows migrating virtual machines between servers without stopping their provided services. Server consolidation approaches attempt to find migration plans that aim to minimize the necessary size of the cluster. Our approach finds plans which not only minimize the overall number of used servers, but also minimize the total data migration overhead. The latter objective is not taken into consideration by other approaches and heuristics. We explore the use of CPN Tools in analyzing the state spaces of the CPNs. Since the state space of the CPN model can grow exponentially with the size of the cluster, we examine different techniques to generate and analyze the state space in order to find good plans to server consolidation within acceptable time and computing power.
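
To make the two objectives concrete (fewest active servers, then least migrated data), the sketch below uses a simple best-fit-decreasing heuristic on a hypothetical cluster; this is only a stand-in for the CPN state-space search described in the abstract.

```python
# Minimal sketch making the two objectives concrete: pack VMs onto as few servers
# as possible while preferring placements that migrate the least data. This is a
# simple greedy heuristic, not the CPN state-space search used in the paper; the
# cluster below is hypothetical.
servers = {"s1": 16, "s2": 16, "s3": 16}                    # capacity (GB RAM)
vms = {                                                      # vm -> (RAM GB, current host)
    "vm1": (8, "s1"), "vm2": (6, "s2"), "vm3": (4, "s2"),
    "vm4": (5, "s3"), "vm5": (3, "s3"),
}

# Best-fit decreasing: place the largest VMs first, preferring the most-loaded
# server that still fits; break ties in favor of the VM's current host so that
# fewer migrations are needed.
order = sorted(vms, key=lambda v: vms[v][0], reverse=True)
load = {s: 0 for s in servers}
placement = {}
for vm in order:
    ram, host = vms[vm]
    candidates = sorted(servers, key=lambda s: (load[s] == 0, -load[s], s != host))
    for s in candidates:
        if load[s] + ram <= servers[s]:
            placement[vm] = s
            load[s] += ram
            break

used = {s for s in placement.values()}
migrated = sum(vms[v][0] for v, s in placement.items() if s != vms[v][1])
print("placement:", placement)
print(f"servers used: {len(used)} / {len(servers)}, data migrated: {migrated} GB")
```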

Journal ArticleDOI
TL;DR: An adaptive fuzzy Petri net (AFPN) reasoning algorithm as a prognostic system to predict the outcome for esophageal cancer based on the serum concentrations of C-reactive protein and albumin as a set of input variables is developed.
Abstract: Esophageal cancer is one of the most common cancers worldwide and also one of the most common causes of cancer death. In this paper, we present an adaptive fuzzy reasoning algorithm for rule-based systems using fuzzy Petri nets (FPNs), where the fuzzy production rules are represented by FPNs. We developed an adaptive fuzzy Petri net (AFPN) reasoning algorithm as a prognostic system to predict the outcome for esophageal cancer based on the serum concentrations of C-reactive protein and albumin as a set of input variables. The system can perform fuzzy reasoning automatically to evaluate the degree of truth of the proposition representing the risk degree value, with a weight value to be optimally tuned based on the observed data. In addition, the implementation process for esophageal cancer prediction is fuzzily deducted by the AFPN algorithm. The performance of the composite model is evaluated through a set of experiments. Simulations and experimental results demonstrate the effectiveness and performance of the proposed algorithms. A comparison of the predictive performance of AFPN models with other methods and the analysis of the curve showed the same results, with an intuitive behavior of the AFPN models.
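
The rule-firing step can be illustrated with weighted fuzzy production rules over the two inputs named in the abstract (serum C-reactive protein and albumin); the membership functions, weights and certainty factors below are illustrative, not the trained values of the AFPN model.

```python
# Minimal sketch of weighted fuzzy production-rule reasoning over the two inputs
# used in the paper (serum C-reactive protein and albumin). Membership functions,
# rule weights and certainty factors are illustrative, not the trained AFPN values.
def mu_high_crp(crp_mg_l):           # membership of "CRP is high" (ramp 5..50 mg/L)
    return min(1.0, max(0.0, (crp_mg_l - 5.0) / 45.0))

def mu_low_albumin(alb_g_l):         # membership of "albumin is low" (ramp 45..25 g/L)
    return min(1.0, max(0.0, (45.0 - alb_g_l) / 20.0))

# Each rule: (antecedent membership over (crp, alb), weight, certainty factor)
RULES = [
    (lambda crp, alb: min(mu_high_crp(crp), mu_low_albumin(alb)), 0.9, 0.95),  # both abnormal
    (lambda crp, alb: mu_high_crp(crp),                           0.6, 0.80),  # CRP alone
    (lambda crp, alb: mu_low_albumin(alb),                        0.5, 0.75),  # albumin alone
]

def risk_degree(crp, alb):
    # Fire every rule: truth of the consequent = antecedent degree * weight * certainty.
    # Aggregate by taking the maximum over the fired rules.
    return max(rule(crp, alb) * w * cf for rule, w, cf in RULES)

for crp, alb in [(3.0, 44.0), (30.0, 30.0), (80.0, 22.0)]:
    print(f"CRP={crp:5.1f} mg/L, albumin={alb:4.1f} g/L -> risk degree {risk_degree(crp, alb):.2f}")
```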

Journal ArticleDOI
TL;DR: The XACML architecture, which is used and applied in many tools and system architectures, is employed here to enforce Shariah rules in the banking sector, rather than for its original goal of enforcing security rules in policy management systems.
Abstract: For many banks and customers in the Middle East and Islamic world, the availability and the ability to apply Islamic Shariah rules to financial activities is very important. In some cases, business and technical barriers can limit the ability to apply and offer financial services that are implemented according to Shariah rules. In this paper, we discuss enforcing Shariah rules from an information technology viewpoint and show how such rules can be implemented and enforced in a financial establishment. The security authorization standard XACML is extended to consider Shariah rules. In this research, the XACML architecture, which is used and applied in many tools and system architectures, is used to enforce Shariah rules in the banking sector rather than its original goal of enforcing security rules, where policy management systems such as XACML are usually used. We developed a model based on XACML policy management to show how an Islamic financial information system can be used to make decisions for day-to-day bank activities. Such a system is required by all Islamic banks around the world. Currently, most Islamic banks use advisory boards to provide opinions on general activities. The gap between those high-level general rules and decisions for each customer business process is to be filled by Islamic financial information systems. The flexible design of the architecture can also be effective where rules can be screened and revisited often without the need to restructure the implemented authorization system. The authorization rules described here are not necessarily a perfect reflection of Shariah opinions. They are only shown as a proof of concept and a demonstration of how such rules can be written and implemented.
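
A minimal sketch of XACML-style, attribute-based policy evaluation applied to financial requests is shown below; the two rules are hypothetical illustrations of Shariah-style constraints, not actual jurisprudence, and a real deployment would load vetted policies rather than hard-code them.

```python
# Minimal sketch of XACML-style attribute-based authorization applied to financial
# transactions. The two rules below are hypothetical illustrations of Shariah-style
# constraints (no interest-bearing loans, no prohibited business sectors), not a
# statement of actual jurisprudence; a real system would load vetted policies.
from dataclasses import dataclass

@dataclass
class Rule:
    description: str
    condition: callable        # predicate over the request attributes
    effect: str                # "Deny" or "Permit"

POLICY = [
    Rule("deny interest-bearing loans",
         lambda req: req["type"] == "loan" and req.get("interest_rate", 0) > 0,
         "Deny"),
    Rule("deny financing of prohibited sectors",
         lambda req: req.get("sector") in {"alcohol", "gambling"},
         "Deny"),
]

def evaluate(request, policy=POLICY, default="Permit"):
    # Deny-overrides combining: the first matching Deny rule wins.
    for rule in policy:
        if rule.condition(request) and rule.effect == "Deny":
            return "Deny", rule.description
    return default, "no rule matched"

requests = [
    {"type": "loan", "amount": 10000, "interest_rate": 0.05},
    {"type": "murabaha", "amount": 10000, "sector": "retail"},
]
for req in requests:
    decision, why = evaluate(req)
    print(req["type"], "->", decision, f"({why})")
```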