
Showing papers in "International Journal of Advanced Computer Science and Applications in 2013"


Journal ArticleDOI
TL;DR: Two important clustering algorithms, centroid-based K-Means and representative-object-based FCM (Fuzzy C-Means), are compared, and their performance is evaluated on the basis of the efficiency of the clustering output.
Abstract: In the software arena, data mining technology has been considered a useful means for identifying patterns and trends in large volumes of data. This approach is used to extract unknown patterns from large sets of data for business as well as real-time applications. It is a computational intelligence discipline that has emerged as a valuable tool for data analysis, new knowledge discovery, and autonomous decision making. The raw, unlabeled data from a large dataset can be classified initially in an unsupervised fashion by using cluster analysis, i.e., clustering: the assignment of a set of observations into clusters so that observations in the same cluster may in some sense be treated as similar. The outcome of the clustering process and the efficiency of its domain application are generally determined by the algorithms used. Various algorithms are used to solve this problem. In this research work, two important clustering algorithms, centroid-based K-Means and representative-object-based FCM (Fuzzy C-Means), are compared. These algorithms are applied and their performance is evaluated on the basis of the efficiency of the clustering output. The number of data points as well as the number of clusters are the factors upon which the behaviour patterns of both algorithms are analyzed. FCM produces results close to K-Means clustering but still requires more computation time.
Keywords—clustering; k-means; fuzzy c-means; time complexity

408 citations
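The timing comparison the abstract reports can be illustrated in outline. Below is a minimal sketch (synthetic data and a plain NumPy Fuzzy C-Means, not the authors' implementation) that times K-Means against FCM on the same data:

```python
# Minimal sketch: K-Means vs. a basic Fuzzy C-Means, timed on synthetic data.
import time
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain FCM: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / d ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            break
        U = U_new
    return centers, U

X, _ = make_blobs(n_samples=2000, centers=4, random_state=42)

t0 = time.perf_counter()
KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(f"K-Means: {time.perf_counter() - t0:.3f}s")

t0 = time.perf_counter()
fuzzy_c_means(X, c=4)
print(f"FCM:     {time.perf_counter() - t0:.3f}s")  # typically slower, as the paper reports
```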


Journal ArticleDOI
TL;DR: A novel method for searching alternative designs using classification is proposed; among the classifiers compared, Naive Bayes performs best, outperforming Decision Tree and k-Nearest Neighbor on all measures but precision.
Abstract: An energy simulation tool simulates the energy use of a building prior to its erection. The output of such a simulation is a value in kWh/m² called the energy performance. Developers must calculate the building's energy performance as part of the requirements to obtain a building permit, and the building can only be built if the energy performance is below the allowable standard. To get below the standard, architects must revise the design several times, so to ease their work an energy simulation tool commonly has a feature that suggests a better alternative design. Since the alternative design search is actually a classification problem, in this paper we propose a novel method for searching alternative designs using classification. The classifiers we use are Naive Bayes, Decision Tree, and k-Nearest Neighbor. Our experiments show that Decision Tree has the fastest classification time, followed by Naive Bayes and k-Nearest Neighbor; the differences in classification time between Decision Tree and Naive Bayes, and between Naive Bayes and k-NN, are about an order of magnitude. Based on Precision, Recall, F-measure, Accuracy, and AUC, the performance of Naive Bayes is the best: it outperforms Decision Tree and k-Nearest Neighbor on all measures but precision.

134 citations
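As a rough illustration of this comparison (assumed synthetic data, not the paper's building-design dataset), the three classifiers can be timed and scored with scikit-learn:

```python
# Sketch: compare classification time and quality of the three classifiers.
import time
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    t0 = time.perf_counter()
    y_pred = clf.predict(X_te)      # classification time, as compared in the paper
    print(f"{name}: {time.perf_counter() - t0:.4f}s")
    print(classification_report(y_te, y_pred, digits=3))
```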


Journal ArticleDOI
TL;DR: In this article, the authors propose a system using eigenvalue-weighted Euclidean distance as a classification technique for recognition of various sign languages of India, built on skin filtering, hand cropping, feature extraction, and classification stages.
Abstract: Sign language recognition is one of the fastest growing fields of research today, and many new techniques have been developed recently. In this paper, we propose a system using eigenvalue-weighted Euclidean distance as a classification technique for recognition of various sign languages of India. The system comprises four parts: skin filtering, hand cropping, feature extraction, and classification. 24 signs were considered, each having 10 samples; thus a total of 240 images were considered, for which a recognition rate of 97% was obtained.

96 citations
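A hedged sketch of the classification idea (our reading of eigenvalue-weighted Euclidean distance, not the authors' exact formulation): project images onto PCA eigenvectors and weight each component of the distance by its eigenvalue before nearest-neighbour matching.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.random((240, 32 * 32))      # stand-in for 240 preprocessed sign images
labels = np.repeat(np.arange(24), 10)   # 24 signs x 10 samples, as in the paper

mean = train.mean(axis=0)
C = train - mean
vals, vecs = np.linalg.eigh(C @ C.T / len(C))   # small n x n eigenproblem
eigvecs = C.T @ vecs[:, ::-1]                   # back to pixel space, descending order
eigvecs /= np.linalg.norm(eigvecs, axis=0)
vals = vals[::-1]

k = 20                                          # retained components (assumption)
train_proj = C @ eigvecs[:, :k]

def classify(img):
    w = (img - mean) @ eigvecs[:, :k]
    # eigenvalue-weighted Euclidean distance to every training projection
    d = np.sqrt((vals[:k] * (train_proj - w) ** 2).sum(axis=1))
    return labels[np.argmin(d)]

print(classify(train[17]))   # recovers label 1
```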


Journal ArticleDOI
TL;DR: An automated computer platform is proposed for classifying EEG signals associated with left and right hand movements, using a hybrid system that combines advanced feature extraction techniques and machine learning algorithms.
Abstract: In this paper, we propose an automated computer platform for the purpose of classifying Electroencephalography (EEG) signals associated with left and right hand movements using a hybrid system that uses advanced feature extraction techniques and machine learning algorithms. EEG represents brain activity as electrical voltage fluctuations along the scalp, and a Brain-Computer Interface (BCI) is a device that enables the use of the brain's neural activity to communicate with others or to control machines, artificial limbs, or robots without direct physical movements. In our research work, we aspired to find the best feature extraction method that enables the differentiation between left and right executed fist movements through various classification algorithms. The EEG dataset used in this research was created and contributed to PhysioNet by the developers of the BCI2000 instrumentation system. Data was preprocessed using the EEGLAB MATLAB toolbox, and artifact removal was done using AAR. Data was epoched on the basis of Event-Related (De)Synchronization (ERD/ERS) and movement-related cortical potential (MRCP) features. Mu/beta rhythms were isolated for the ERD/ERS analysis and delta rhythms were isolated for the MRCP analysis. An Independent Component Analysis (ICA) spatial filter was applied on related channels for noise reduction and isolation of both artifactually and neurally generated EEG sources. The final feature vector included the ERD, ERS, and MRCP features in addition to the mean, power, and energy of the activations of the resulting Independent Components (ICs) of the epoched feature datasets. The datasets were input into two machine-learning algorithms: Neural Networks (NNs) and Support Vector Machines (SVMs). Intensive experiments were carried out, and optimum classification performances of 89.8% and 97.1% were obtained using NN and SVM, respectively. This research shows that this method of feature extraction holds some promise for the classification of various pairs of motor movements, which can be used in a BCI context to mentally control a computer or machine.
Keywords—EEG; BCI; ICA; MRCP; ERD/ERS; machine learning; NN; SVM

85 citations
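One step of such a pipeline, the ERD/ERS band-power feature, can be sketched as follows (a simplified reading with synthetic signals, not the authors' EEGLAB/AAR workflow):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 160.0                          # PhysioNet BCI2000 sampling rate

def bandpower(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return (filtfilt(b, a, x) ** 2).mean()

def erd_ers(trial, baseline, lo, hi):
    """ERD/ERS (%) = (P_trial - P_baseline) / P_baseline * 100."""
    p_ref = bandpower(baseline, lo, hi)
    return 100.0 * (bandpower(trial, lo, hi) - p_ref) / p_ref

rng = np.random.default_rng(0)
baseline = rng.standard_normal(160)       # 1 s of rest (synthetic)
trial = rng.standard_normal(160)          # 1 s of movement epoch (synthetic)
print(erd_ers(trial, baseline, 8, 13))    # mu-band ERD/ERS feature
```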


Journal ArticleDOI
TL;DR: A new NFC payment application is introduced, based on the authors' previous "NFC Cloud Wallet" model, to demonstrate a reliable structure for the NFC ecosystem; the analysis focuses on the Mobile Network Operator (MNO) as the main player within the ecosystem.
Abstract: Near Field Communication (NFC) technology is based on a short-range radio communication channel that enables users to exchange data between devices. With NFC technology, mobile services establish a contactless transaction system to make payment methods easier for people. Although NFC mobile services have great potential for growth, they have raised several issues which have concerned researchers and prevented the adoption of this technology within societies. Reorganizing and describing what is required for the success of this technology has motivated us to extend current NFC ecosystem models to accelerate the development of this business area. In this paper, we introduce a new NFC payment application, based on our previous "NFC Cloud Wallet" model [1], to demonstrate a reliable structure for the NFC ecosystem. We also describe the step-by-step execution of the proposed protocol in order to carefully analyse the payment application; our main focus is on the Mobile Network Operator (MNO) as the main player within the ecosystem.

74 citations


Journal ArticleDOI
TL;DR: Two hybrid techniques for classifying skin images to predict whether skin cancer is present are presented, each consisting of three stages: feature extraction, dimensionality reduction, and classification.
Abstract: Early detection of skin cancer has the potential to reduce mortality and morbidity. This paper presents two hybrid techniques for classifying skin images to predict whether skin cancer is present. The proposed hybrid techniques consist of three stages, namely feature extraction, dimensionality reduction, and classification. In the first stage, we obtain features from the images using the discrete wavelet transform. In the second stage, the features of the skin images are reduced to the most essential features using principal component analysis. In the classification stage, two classifiers based on supervised machine learning are developed: the first based on a feed-forward back-propagation artificial neural network and the second based on k-nearest neighbor. The classifiers are used to classify subjects as normal or abnormal skin cancer images. Classification accuracies of 95% and 97.5% were obtained by the two proposed classifiers, respectively. This result shows that the proposed hybrid techniques are robust and effective.

67 citations
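The three-stage pipeline can be sketched end to end (assumed wavelet, component count, and k, not the authors' exact configuration):

```python
# Sketch: DWT features -> PCA reduction -> k-NN classification.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def dwt_features(images):
    """Approximation coefficients of a 2-level 2-D Haar DWT per image."""
    return np.array([pywt.wavedec2(img, "haar", level=2)[0].ravel()
                     for img in images])

rng = np.random.default_rng(0)
X = rng.random((40, 64, 64))                 # stand-in skin images
y = rng.integers(0, 2, 40)                   # 0 = normal, 1 = abnormal

model = make_pipeline(
    FunctionTransformer(dwt_features),       # stage 1: feature extraction
    PCA(n_components=10),                    # stage 2: dimensionality reduction
    KNeighborsClassifier(n_neighbors=3),     # stage 3: classification
)
model.fit(X, y)
print(model.predict(X[:5]))
```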


Journal ArticleDOI
TL;DR: The authors present students' learning journeys and data trails, the chat-log architecture, and the resulting applications to the design of language-learning systems, which can be a valuable component for language-learning designers seeking to improve second language acquisition.
Abstract: The goal of this article is to explore how learning analytics can be used to predict and advise the design of an intelligent language tutor, the chatbot Lucy. With its focus on using student-produced data to understand the design of Lucy to assist English language learning, this research can be a valuable component for language-learning designers seeking to improve second language acquisition. In this article, we present students' learning journeys and data trails, the chat-log architecture, and the resulting applications to the design of language learning systems.

57 citations


Journal ArticleDOI
TL;DR: The proposed LASyM system is a Hadoop-based system whose main objective is to provide Learning Analytics for MOOC communities as a means to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal useful information for designing learning-optimized MOOCs.
Abstract: Nowadays, the Web has revolutionized our vision of how to deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and new aspirations, such as MOOCs (Massive Open Online Courses), a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies, and practices that lead to student success, MOOCs, sometimes called the "Linux of education", are increasingly developed by elite US institutions such as MIT, Harvard, and Stanford, supplying open/distance learning to large online communities without fees. MOOCs have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often raised about MOOCs is that although thousands enrol for courses, a very small proportion of learners complete them. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is Hadoop-based, and its main objective is to provide Learning Analytics for MOOC communities as a means to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal useful information to be used in designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system, we developed a method to identify, with low latency, online learners more likely to drop out.
Keywords—Cloud Computing; MOOCs; Hadoop; Learning

50 citations
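LASyM's internals are not detailed here; as a generic illustration of the map/reduce-style aggregation such a system performs, the sketch below counts clickstream events per learner and flags low-activity learners as dropout risks (record fields and the threshold are assumptions):

```python
from collections import defaultdict

log_lines = [                      # toy clickstream records: user,date,event
    "u1,2013-03-01,video_play",
    "u1,2013-03-02,quiz_submit",
    "u2,2013-03-01,video_play",
]

# map: emit the learner id for each event
mapped = (line.split(",")[0] for line in log_lines)

# reduce: count events per learner
counts = defaultdict(int)
for user in mapped:
    counts[user] += 1

THRESHOLD = 2                      # assumed minimum events per week
at_risk = [u for u, n in counts.items() if n < THRESHOLD]
print(at_risk)                     # ['u2']
```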


Journal ArticleDOI
TL;DR: This study aims at revealing the different security threats under the cloud service and deployment models as well as network concerns, so as to mitigate threats within the cloud, facilitating noteworthy analysis of threats by researchers, cloud providers, and end users.
Abstract: Vendors offer a pool of shared resources to their users through the cloud network. Nowadays, shifting to the cloud is often a sound decision, as it provides pay-as-you-go services to users. Cloud computing has boomed in business and other industries for advantages such as multi-tenancy, resource pooling, and storage capacity. In spite of its vitality, it exhibits various security flaws, including loss of sensitive data, data leakage, and others related to cloning, resource pooling, and so on. As far as security issues are concerned, a wide body of work has been reviewed which identifies threats associated with the service and deployment models of the cloud. In order to comprehend these threats, this study is presented so as to effectively refine the crude security issues across the various areas of the cloud. The study also aims at revealing different security threats under the cloud models as well as network concerns, so as to mitigate threats within the cloud, facilitating noteworthy analysis of threats by researchers, cloud providers, and end users.

50 citations


Journal ArticleDOI
TL;DR: The research findings showed that the main barriers in Kuwait were lack of management awareness and support, technological barriers, and language barriers; of these, lack of management awareness and language barriers were specific to Kuwait when compared with developed countries.
Abstract: E-learning as an organizational activity started in the developed countries, and as such, the adoption models and experiences in the developed countries are taken as a benchmark in the literature. This paper investigated the barriers that affect or prevent the adoption of e-learning in higher educational institutions in Kuwait as an example of a developing country, and compared them with those found in developed countries. Semi-structured interviews were used to collect the empirical data from academics and managers in higher educational institutions in Kuwait. The research findings showed that the main barriers in Kuwait were lack of management awareness and support, technological barriers, and language barriers. From those, two barriers were specific to Kuwait (lack of management awareness and language barriers) when compared with developed countries. Recommendations for decision makers and suggestions for further research are also considered in this study.

48 citations


Journal ArticleDOI
TL;DR: This work aims to establish a reliable survey of available NoC design, simulation, and implementation tools; a substantial amount of information about and characteristics of NoC-dedicated tools was collected and is presented throughout the survey.
Abstract: Nowadays, Systems-on-Chip (SoCs) have evolved considerably in terms of performance, reliability, and integration capacity. The last advantage has driven growth in the number of cores, or Intellectual Properties (IPs), on a single chip. Unfortunately, this large number of IPs has caused a new issue: the intra-chip communication between elements of the same chip. To resolve this problem, a new paradigm has been introduced: the Network-on-Chip (NoC). Since the introduction of the NoC paradigm in the last decade, new methodologies and approaches have been presented by the research community, and many of them have been adopted by industry. The literature contains many relevant studies and surveys discussing NoC proposals and contributions; however, few of them have discussed or proposed a comparative study of NoC tools. The objective of this work is to establish a reliable survey of available NoC design, simulation, and implementation tools. We collected a substantial amount of information and characteristics about NoC-dedicated tools, which we present throughout this survey. This study is built around a respectable number of references, and we hope it will help scientists.
Keywords—Embedded Systems; Network-On-Chip; CAD Tools; Performance Analysis; Verification and Measurement

Journal ArticleDOI
TL;DR: This work developed a corpus for sentiment analysis and opinion mining purposes, then used different machine learning algorithms (decision tree, support vector machines, and naive Bayes) to develop a sentiment analyzer.
Abstract: Today, the number of users of social networks is increasing. Millions of users share opinions on different aspects of life every day, so social networks are rich sources of data for opinion mining and sentiment analysis. Users have also become more interested in following news pages on Facebook. Several posts, political ones for example, have thousands of user comments that agree or disagree with the post content. Such comments can be a good indicator of community opinion about the post content. For politicians, marketers, decision makers, …, it is necessary to perform sentiment analysis to know the percentage of users who agree, disagree, or are neutral with respect to a post. This raised the need to analyze users' comments on Facebook. We focused on Arabic Facebook news pages for the task of sentiment analysis. We developed a corpus for sentiment analysis and opinion mining purposes, then used different machine learning algorithms (decision tree, support vector machines, and naive Bayes) to develop a sentiment analyzer. The performance of the system using each technique was evaluated and compared with the others.
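A minimal sketch of such an analyzer (toy comments, not the authors' corpus), training the three classifier families the paper compares:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

comments = ["رائع جدا", "سيئ للغاية", "لا بأس"]      # toy Arabic comments
labels = ["agree", "disagree", "neutral"]

for clf in (MultinomialNB(), LinearSVC(), DecisionTreeClassifier()):
    model = make_pipeline(TfidfVectorizer(), clf)    # bag-of-words features (assumption)
    model.fit(comments, labels)
    print(type(clf).__name__, model.predict(["رائع"]))
```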

Journal ArticleDOI
TL;DR: This paper provides a comparative study on support vector regression (SVR), Intermediate COCOMO and Multiple Objective Particle Swarm Optimization (MOPSO) model for estimation of software project effort.
Abstract: Software cost estimation is the process of predicting the effort required to develop a software system. The basic inputs for software cost estimation are the coding size and a set of cost drivers; the output is effort in terms of Person-Months (PMs). Here, the use of support vector regression (SVR) is proposed for the estimation of software project effort, providing a comparative study of SVR, Intermediate COCOMO, and the Multiple Objective Particle Swarm Optimization (MOPSO) model. We used the COCOMO dataset, and our results are compared to the Intermediate COCOMO and MOPSO results for this dataset, analyzed in terms of accuracy and error rate. It has been observed from the simulation that SVR outperforms the other estimation techniques. The data mining tool Weka is used for the simulation.
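A hedged sketch of the SVR part (invented size/effort pairs, not the COCOMO dataset): fit support vector regression to predict effort in person-months from size.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR

kloc = np.array([[2.2], [8.0], [15.0], [46.0], [90.0]])   # project sizes (toy)
effort_pm = np.array([8.4, 31.0, 55.0, 170.0, 350.0])     # efforts (toy)

model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(kloc, effort_pm)
print(mean_absolute_error(effort_pm, model.predict(kloc)))
```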

Journal ArticleDOI
TL;DR: The proposed CNN technique is used to realize the edge detection task; it takes advantage of momentum feature extraction, can process an input image of any size with no further training required, and its results are very promising when compared to both classical methods and other ANN-based methods.
Abstract: Edge detection in images is important for image processing. It is used in various fields of application, ranging from real-time video surveillance and traffic management to medical imaging. Currently, no single edge detector offers both efficiency and reliability. Traditional differential filter-based algorithms have the advantage of theoretical strictness but require excessive post-processing. The proposed CNN technique is used to realize the edge detection task; it takes advantage of momentum feature extraction and can process an input image of any size with no further training required. The results are very promising when compared to both classical methods and other ANN-based methods.
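The proposed CNN detector itself is not specified here; for orientation, a sketch of the classical differential-filter baseline the paper compares against (a Sobel operator):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(img):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve(img, kx)         # horizontal gradient
    gy = convolve(img, kx.T)       # vertical gradient
    return np.hypot(gx, gy)        # gradient magnitude

img = np.zeros((64, 64))
img[:, 32:] = 1.0                  # synthetic vertical step edge
edges = sobel_edges(img)
print(edges[:, 30:34].max())       # strong response at the step
```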

Journal ArticleDOI
TL;DR: Analysis showed the effectiveness of the algorithm in minimizing degradation, though results were sensitive to the smoothness of the cover images; for higher security, a combination of steganography with cryptography may be used.
Abstract: A new algorithm is presented for hiding a secret image in the least significant bits of a cover image. The images used may be color or grayscale images. The number of bits used for hiding changes according to pixel neighborhood information of the cover image. The exclusive-or (XOR) of a pixel's neighbors is used to determine the smoothness of the neighborhood: a higher XOR value indicates less smoothness and leads to using more bits for hiding without causing noticeable degradation to the cover image. Experimental results are presented to show that the algorithm generally hides images without significant changes to the cover image, where the results are sensitive to the smoothness of the cover image.
Keywords—image steganography; information hiding; LSB method
I. INTRODUCTION
Steganography is a method of hiding a secret message inside other information so that the existence of the hidden message is concealed. Cryptography, in contrast, is a method of scrambling hidden information so that unauthorized persons will not be able to recover it. The main advantage steganography has over cryptography is that it hides the actual existence of secret information, making it an unlikely target of spying attacks. To achieve higher security, a combination of steganography with cryptography may be used. In this paper, a new algorithm is presented to hide information in the least significant bits (LSBs) of image pixels. The algorithm uses a variable number of hiding bits for each pixel, where the number of bits is chosen based on the amount of visible degradation it may cause to the pixel compared to its neighbors. The amount of visible degradation is expected to be higher for smooth areas, so the number of hiding bits is chosen to be proportional to the exclusive-or (XOR) of the pixel's neighbors. Analysis showed the effectiveness of the algorithm in minimizing degradation, though results were sensitive to the smoothness of the cover images.
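A hedged sketch of the core idea (our simplified reading, not the paper's exact scheme): derive the number of hiding bits per pixel from the XOR of its neighbours, then overwrite that many LSBs with secret bits.

```python
import numpy as np

def bits_for_pixel(cover, y, x):
    """Higher neighbour XOR (less smooth) -> more hiding bits (mapping is an assumption)."""
    xor = (int(cover[y - 1, x]) ^ int(cover[y + 1, x])
           ^ int(cover[y, x - 1]) ^ int(cover[y, x + 1]))
    return 1 + min(3, xor.bit_length() // 3)

def embed(cover, secret_bits):
    stego, i = cover.copy(), 0
    for y in range(1, cover.shape[0] - 1):
        for x in range(1, cover.shape[1] - 1):
            chunk = secret_bits[i:i + bits_for_pixel(cover, y, x)]
            if not chunk:
                return stego
            val, n = int("".join(map(str, chunk)), 2), len(chunk)
            stego[y, x] = (int(stego[y, x]) >> n << n) | val  # overwrite n LSBs
            i += n
    return stego

cover = np.random.default_rng(0).integers(0, 256, (16, 16), dtype=np.uint8)
stego = embed(cover, [1, 0, 1, 1, 0, 1])
print(np.abs(stego.astype(int) - cover.astype(int)).max())    # small distortion
```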

Journal ArticleDOI
TL;DR: Experimental results show that the proposed approach is competitive in terms of success rate (likelihood of optimality) and solution quality, and although it is computationally more expensive due to its heavy mathematical evaluations, it is more fruitful in the long run.
Abstract: Gravitational Search Algorithms (GSA) are heuristic evolutionary optimization algorithms based on Newton's law of universal gravitation and mass interactions. GSAs are among the most recently introduced techniques and are not yet heavily explored. An early work by the authors successfully adapted this technique to the cell placement problem and showed its efficiency in producing high-quality solutions in reasonable time. We extend this work by fine-tuning the algorithm parameters and transition functions towards a better balance between exploration and exploitation. To assess its performance and robustness, we compare it with Genetic Algorithms (GA), using the standard cell placement problem as a benchmark to evaluate solution quality, and a set of artificial instances to evaluate the capability of finding an optimal solution. Experimental results show that the proposed approach is competitive in terms of success rate (likelihood of optimality) and solution quality, and although it is computationally more expensive due to its heavy mathematical evaluations, it is more fruitful in the long run.
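A generic GSA loop on a toy continuous objective (not the authors' cell-placement adaptation) looks like this: masses come from fitness, and agents accelerate toward heavier masses under a decaying gravitational constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, iters, G0 = 20, 2, 100, 100.0
X = rng.uniform(-5, 5, (n, dim))                  # agent positions
V = np.zeros((n, dim))

def fitness(X):                                   # toy objective: sphere function
    return (X ** 2).sum(axis=1)

for t in range(iters):
    f = fitness(X)
    best, worst = f.min(), f.max()
    m = (f - worst) / (best - worst + 1e-12)      # better fitness -> larger mass
    M = m / (m.sum() + 1e-12)
    G = G0 * np.exp(-20.0 * t / iters)            # decaying gravitational constant
    A = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = X[j] - X[i]
                r = np.linalg.norm(diff) + 1e-12
                A[i] += rng.random() * G * M[j] * diff / r  # a_i = F_i / M_i (M_i cancels)
    V = rng.random((n, dim)) * V + A
    X = X + V

print(fitness(X).min())                           # should approach 0
```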

Journal ArticleDOI
TL;DR: An eigenvector-based system is presented to recognize facial expressions from digital facial images; an input image is recognized by finding the minimum Euclidean distance between the test image and the stored expressions.
Abstract: In this paper, an eigenvector-based system is presented to recognize facial expressions from digital facial images. In the approach, the images were first acquired, and five significant portions were cropped from each image to extract and store the eigenvectors specific to the expressions. The eigenvectors for the test images were also computed, and finally the input facial image was recognized by calculating the minimum Euclidean distance between the test image and the different expressions. A human face carries a lot of important information when people interact with one another. In social interaction, the most common communicative hint is given by one's facial expression, which has been studied extensively, particularly in psychology. As per the study of Mehrabian (1), facial expressions comprise 55% of the message transmitted in human communication, in comparison to the 7% conveyed by linguistic language and 38% by paralanguage.
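A hedged sketch of the matching idea (invented crop coordinates and toy data; the paper's exact eigenvector computation may differ): compare eigen-descriptors of five cropped facial regions and pick the expression with the minimum summed Euclidean distance.

```python
import numpy as np

REGIONS = [np.s_[20:60, 30:98],     # forehead  (coordinates are assumptions)
           np.s_[60:90, 30:62],     # left eye
           np.s_[60:90, 66:98],     # right eye
           np.s_[90:120, 48:80],    # nose
           np.s_[120:150, 40:88]]   # mouth

def eigenfeature(patch, k=5):
    """Top-k eigenvalues of the patch covariance as a compact descriptor."""
    return np.sort(np.linalg.eigvalsh(np.cov(patch.astype(float))))[-k:]

def distance(a, b):
    return sum(np.linalg.norm(eigenfeature(a[r]) - eigenfeature(b[r]))
               for r in REGIONS)

rng = np.random.default_rng(0)
templates = {e: rng.random((160, 128)) for e in ["happy", "sad", "neutral"]}
test = templates["sad"] + 0.01 * rng.random((160, 128))
print(min(templates, key=lambda e: distance(test, templates[e])))  # 'sad'
```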

Journal ArticleDOI
TL;DR: A framework for guiding mobile learning innovation is utilised to review the qualities and shortcomings of the case study on developing science activities for first year primary school children on the OTPC devices.
Abstract: The Ministry of Education in Thailand is currently distributing tablets to all first-year primary (Prathom 1) school children across the country as part of the government's "One Tablet Per Child" (OTPC) project to improve education. Early indications suggest that there are many unexplored issues in designing and implementing tablet activities for such a large and varied group of students, and so far there is a lack of evaluation of the effectiveness of the tablet activities. In this article, the authors propose four challenges for improving Thailand's OTPC project: developing contextualised content, ensuring usability, providing teacher support, and assessing learning outcomes. A case study on developing science activities for first-year primary school children on the OTPC devices is the basis for presenting possible solutions to the four challenges. In presenting a solution to the challenge of providing teacher support, an architecture is described for collecting data from student interactions with the tablet in order to analyse the current progress of students in a live classroom setting. From tests in three local Thai schools, the authors evaluate the case study from both student and teacher perspectives. In concluding the paper, a framework for guiding mobile learning innovation is utilised to review the qualities and shortcomings of the case study.

Journal ArticleDOI
TL;DR: The study found that e-commerce in Saudi Arabia was lacking in Governmental support as well as relevant involvement by both customers and retailers.
Abstract: This paper looks at the present standing of e-commerce in Saudi Arabia, as well as the challenges and strengths of Business-to-Customer (B2C) electronic commerce. Many studies have been conducted around the world in order to gain a better understanding of the demands, needs, and effectiveness of online commerce. A study was undertaken to review the literature identifying the factors influencing the adoption and diffusion of B2C e-commerce. It found four distinct categories: business, customer, environmental, and governmental support, all of which must be considered when creating an e-commerce infrastructure. A concept matrix was used to provide a comparison of important factors in different parts of the world. The study found that e-commerce in Saudi Arabia was lacking in governmental support as well as relevant involvement by both customers and retailers.

Journal ArticleDOI
TL;DR: The proposed watermarking technique is based on dividing the medical image into blocks and inserting the watermark into the ROI by shifting blocks, and it uses the Chinese remainder theorem as the backbone of the encryption to achieve a high level of security.
Abstract: Applying security to transmitted medical images is important to protect the privacy of patients. Secure transmission requires cryptography and watermarking to achieve confidentiality and data integrity. Improving the cryptography part requires an encryption algorithm that stands for a long time against different attacks. The proposed method is based on number theory and uses the Chinese remainder theorem as its backbone; this approach achieves a high level of security and stands against different attacks for a long time. On the watermarking part, the medical image is divided into two regions: a region of interest (ROI) and a region of background (ROB). The pixel values of the ROI contain the important information, so this region must not experience any change. The proposed watermarking technique is based on dividing the medical image into blocks and inserting the watermark into the ROI by shifting blocks; an equivalent number of blocks in the ROB is then removed. This approach can be considered lossless since it does not affect the ROI and does not increase the image size. In addition, it can withstand some watermarking attacks, such as cropping and noise.
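The Chinese remainder theorem the encryption part builds on can be illustrated generically (plain CRT reconstruction, not the paper's full cipher):

```python
from math import prod

def crt(residues, moduli):
    """Recover x mod prod(moduli) from x mod m_i (moduli pairwise coprime)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # pow(..., -1, m) is the modular inverse
    return x % M

moduli = (7, 11, 13)
residues = tuple(200 % m for m in moduli)   # a pixel value split into residues
print(crt(residues, moduli))                # 200 recovered
```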

Journal ArticleDOI
TL;DR: An automated system to locate the OD and its centre in all types of retinal images is presented; the proposed algorithm gives excellent results and avoids false OD detections.
Abstract: Efficient detection of the optic disc (OD) in colour retinal images is a significant task in an automated retinal image analysis system. Most algorithms developed for OD detection are applicable mainly to normal, healthy retinal images; it is a challenging task to detect the OD in all types of retinal images, that is, in normal, healthy images as well as in abnormal images affected by disease. This paper presents an automated system to locate the OD and its centre in all types of retinal images. An ensemble of steps based on different criteria produces more accurate results. The proposed algorithm gives excellent results and avoids false OD detection. The technique was developed and tested on standard databases available to researchers on the Internet, Diaretdb0 (130 images), Diaretdb1 (89 images), Drive (40 images), and a local database (194 images) collected from ophthalmic clinics. It is able to locate the OD and its centre in 98.45% of all tested cases. The results achieved by different algorithms can be compared when the algorithms are applied to the same standard databases; this comparison is also discussed in this paper and shows that the proposed algorithm is more efficient.

Journal ArticleDOI
TL;DR: An emotion recognition system is developed that analyses the motion trajectory of the eye and reports the appraised emotion, based on data gathered using a head-mounted eye-tracking device.
Abstract: The objective of this paper is to develop an emotion recognition system that analyses the motion trajectory of the eye and reports the appraised emotion. The emotion recognition solution is based on data gathered using a head-mounted eye-tracking device. The participants in the experimental investigation were provided with a visual stimulus (PowerPoint slides), and the emotional feedback was determined by the combination of the eye-tracking device and emotion recognition software. The stimuli were divided into four groups by the emotion they should trigger in the human, i.e., neutral, disgust, exhilaration, and excited. Some initial experiments and data on the accuracy of recognizing emotion from the eye motion trajectory are provided, along with a description of the implemented algorithms.

Journal ArticleDOI
TL;DR: This paper uses a single ultrasonic sensor to detect staircases with an electronic cane; using a multiclass SVM approach, a recognition rate of 82.4% has been achieved.
Abstract: Blind people need aids to interact with their environment more safely. A new device is therefore proposed to enable them to see the world with their ears. Considering not only system requirements but also technology cost, we used ultrasonic sensors and one monocular camera in the design of our tool, to make the user aware of the presence and nature of potential encountered obstacles. In this paper, we use only one ultrasonic sensor to detect staircases with an electronic cane; no previous work has considered such a challenge. Aware that the performance of an object recognition system depends on both the object representation and the classification algorithm, our system uses several frequency-domain representations of the ultrasonic signal: the spectrogram, explaining how the spectral density of the signal varies with time; the spectrum, showing the amplitudes as a function of frequency; and the periodogram, estimating the spectral density of the signal. Several features extracted from each representation contribute to the classification process. Our system was evaluated on a set of ultrasonic signals in which staircases occur with different shapes. Using a multiclass SVM approach, a recognition rate of 82.4% has been achieved.
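A hedged sketch of the feature extraction and classification stages (synthetic echoes and invented labels, not the authors' recordings): build the three frequency-domain representations the paper lists and feed the features to a multiclass SVM.

```python
import numpy as np
from scipy.signal import periodogram, spectrogram
from sklearn.svm import SVC

fs = 40_000                                   # assumed ultrasonic sampling rate
rng = np.random.default_rng(0)

def features(echo):
    _, _, Sxx = spectrogram(echo, fs, nperseg=64)   # spectral density over time
    _, Pxx = periodogram(echo, fs)                  # spectral density estimate
    spectrum = np.abs(np.fft.rfft(echo))            # amplitude vs. frequency
    return np.concatenate([Sxx.mean(axis=1), Pxx[:32], spectrum[:32]])

X = np.array([features(rng.standard_normal(1024)) for _ in range(30)])
y = rng.integers(0, 3, 30)                    # three stair-case shapes (toy labels)

clf = SVC(decision_function_shape="ovo").fit(X, y)   # multiclass SVM
print(clf.predict(X[:3]))
```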

Journal ArticleDOI
TL;DR: From the evaluation results for the prototype wellness recommendation system, it is concluded that wellness consultants use consistent wellness knowledge to recommend solutions for sample wellness cases generated through an online consultation form; the proposed model can therefore be integrated into wellness websites to enable users to search for suitable personalized wellness therapy treatments based on their health condition.
Abstract: Rising costs and risks in health care have shifted the preference of individuals from health treatment to disease prevention; this preventive treatment is known as wellness. In recent years, the Internet has become a popular place for wellness-conscious users to search for wellness-related information and solutions. As the user community becomes more wellness-conscious, service improvement is needed to help users find relevant personalised wellness solutions. Due to rapid development in the wellness market, users value convenient access to wellness services. Most wellness websites reflect common health informatics approaches; these amount to more than 70,000 sites worldwide. Thus, the wellness industry should improve its Internet services in order to provide better and more convenient customer service. This paper discusses the development of a wellness recommender system that helps users find and adapt suitable personalised wellness therapy treatments based on their individual needs, and introduces new approaches that enhance the convenience and quality of wellness information delivery on the Internet. The wellness recommendation task is performed using an artificial intelligence technique, hybrid case-based reasoning (HCBR). HCBR solves users' current wellness problems by applying solutions from similar cases in the past. From the evaluation results for our prototype wellness recommendation system, we conclude that wellness consultants use consistent wellness knowledge to recommend solutions for sample wellness cases generated through an online consultation form. Thus, the proposed model can be integrated into wellness websites to enable users to search for suitable personalized wellness therapy treatments based on their health condition.
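The retrieve step at the heart of case-based reasoning can be sketched as follows (attribute names, weights, and cases are invented for illustration; HCBR adds further hybrid components not shown here):

```python
import numpy as np

cases = [                                   # past wellness cases (toy)
    {"age": 30, "stress": 0.8, "sleep": 5, "solution": "meditation plan"},
    {"age": 55, "stress": 0.3, "sleep": 7, "solution": "low-impact exercise"},
]
weights = np.array([0.2, 0.5, 0.3])         # attribute importance (assumed)

def similarity(query, case):
    q = np.array([query["age"] / 100, query["stress"], query["sleep"] / 10])
    c = np.array([case["age"] / 100, case["stress"], case["sleep"] / 10])
    return 1.0 - (weights * np.abs(q - c)).sum()    # weighted normalized similarity

query = {"age": 28, "stress": 0.9, "sleep": 4}
best = max(cases, key=lambda c: similarity(query, c))
print(best["solution"])                     # reuse the nearest case's solution
```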

Journal ArticleDOI
TL;DR: Two new models for software effort estimation using fuzzy logic are presented; one is based on the famous COnstructive COst MOdel (COCOMO) and utilizes Source Lines Of Code (SLOC) as the input variable to estimate effort.
Abstract: Budgeting, bidding, and planning of software project effort, time, and cost are essential elements of any software development process. The massive size and complexity of present-day software systems pose a substantial risk for the development process; inadequate and inefficient information about size and complexity results in ambiguous estimates that cause many losses. Project managers cannot adequately provide good estimates of both the effort and time needed, so no clear release date can be defined. This paper presents two new models for software effort estimation using fuzzy logic. One model is based on the famous COnstructive COst MOdel (COCOMO) and utilizes Source Lines Of Code (SLOC) as the input variable to estimate the effort (E), while the second model utilizes Inputs, Outputs, Files, and User Inquiries to estimate the Function Point (FP). The proposed fuzzy models show better estimation capabilities compared to other models reported in the literature and better assist the project manager in computing the required software development effort. The validation results are obtained using the Albrecht data set.
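A minimal sketch of the COCOMO-based idea (the membership functions are assumptions; the a, b coefficients are the standard basic-COCOMO values): fuzzify project size, apply E = a * KLOC^b per size class, and defuzzify by weighted average.

```python
def tri(x, a, b, c):
    """Triangular membership function."""
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

MODES = {"small": (2.4, 1.05),   # basic-COCOMO organic coefficients
         "medium": (3.0, 1.12),  # semi-detached
         "large": (3.6, 1.20)}   # embedded
MFS = {"small": (0, 10, 50), "medium": (20, 100, 250), "large": (150, 300, 450)}

def fuzzy_effort(kloc):
    mu = {m: tri(kloc, *MFS[m]) for m in MODES}     # fuzzify project size
    if sum(mu.values()) == 0:
        return None
    crisp = {m: a * kloc ** b for m, (a, b) in MODES.items()}
    return sum(mu[m] * crisp[m] for m in MODES) / sum(mu.values())  # defuzzify

print(round(fuzzy_effort(60.0), 1), "person-months")
```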

Journal ArticleDOI
TL;DR: Quantitative comparisons of the proposed image registration technique with related techniques show a significant improvement in the presence of large scale and rotation changes and of intensity changes.
Abstract: Image registration is a crucial step in most image processing tasks for which the final result is achieved from a combination of various resources. Automatic registration of remote-sensing images is a difficult task, as it must deal with intensity changes and variations of scale, rotation, and illumination between the images. This paper proposes an image registration technique for multi-view, multi-temporal, and multi-spectral remote sensing images. Firstly, a preprocessing step is performed by applying median filtering to enhance the images. Secondly, the Steerable Pyramid Transform is adopted to produce multi-resolution levels of the reference and sensed images; then, the Scale Invariant Feature Transform (SIFT) is utilized to extract feature points that can deal with large variations of scale, rotation, and illumination between images. Thirdly, the feature points are matched using the Euclidean distance ratio, and false matching pairs are removed using the RANdom SAmple Consensus (RANSAC) algorithm. Finally, the mapping function is obtained by an affine transformation. Quantitative comparisons of our technique with related techniques show a significant improvement in the presence of large scale and rotation changes and of intensity changes. The effectiveness of the proposed technique is demonstrated by the experimental results.
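The matching stages can be sketched with OpenCV (typical default parameters and assumed file names, not the paper's exact settings; the steerable-pyramid stage is omitted):

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # assumed files
sen = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(sen, None)

matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # distance-ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# affine mapping estimated robustly with RANSAC, as in the paper
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
print(A)
```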

Journal ArticleDOI
TL;DR: The results of this study show that the Google machine translation system is better than the Babylon machine translation system in terms of precision of translation from English to Arabic.
Abstract: This study aims to compare the effectiveness of two popular machine translation systems (Google Translate and the Babylon machine translation system) used to translate English sentences into Arabic, relative to the effectiveness of English-to-Arabic human translation. Among the many automatic methods used to evaluate machine translators, the Bilingual Evaluation Understudy (BLEU) method was adopted and implemented to achieve the main goal of this study. The BLEU method is based on automated measures that match machine translator output to human reference translations; the higher the score, the closer the translation is to the human translation. Well-known English sayings, in addition to sentences collected manually from different Internet web sites, were used for evaluation purposes. The results of this study show that the Google machine translation system is better than the Babylon machine translation system in terms of precision of translation from English to Arabic.
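Scoring a candidate translation against a human reference with BLEU can be done with NLTK (toy sentences, not the study's data):

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = [["actions", "speak", "louder", "than", "words"]]
candidate = ["actions", "speak", "stronger", "than", "words"]

smooth = SmoothingFunction().method1          # avoids zero n-gram counts
score = sentence_bleu(reference, candidate, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")                   # higher = closer to the human reference
```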

Journal ArticleDOI
TL;DR: This paper aims to survey existing knowledge regarding risk assessment for cloud computing and to analyze existing use cases from cloud computing to identify the level of risk assessment realization in state-of-the-art systems and emerging challenges for future research.
Abstract: With the growth of cloud computing and the changes in technology that have resulted in new ways for cloud providers to deliver their services to cloud consumers, cloud consumers should be aware of the risks and vulnerabilities present in the current cloud computing environment. An information security risk assessment is designed specifically for that task. However, there is a lack of a structured risk assessment approach. This paper aims to survey existing knowledge regarding risk assessment for cloud computing and to analyze existing use cases from cloud computing to identify the level of risk assessment realization in state-of-the-art systems and emerging challenges for future research.

Journal ArticleDOI
TL;DR: This paper proposes efficient designs of reversible sequential circuits, optimized in terms of quantum cost, delay, and garbage outputs, together with a new 3x3 reversible gate called the SAM gate.
Abstract: Reversible sequential circuits are going to be significant memory blocks for forthcoming computing devices because of their ultra-low power consumption. The design of various types of latches has therefore been a major objective for researchers for quite a long time. In this paper, we propose efficient designs of reversible sequential circuits that are optimized in terms of quantum cost, delay, and garbage outputs. To this end, we propose a new 3x3 reversible gate called the SAM gate and then design efficient sequential circuits using the SAM gate along with some of the basic reversible logic gates.
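The SAM gate's mapping is not given here; as a generic illustration of 3x3 reversible logic, the standard Toffoli (CCNOT) gate shows the defining property: the input-to-output mapping is a bijection, so no information is lost.

```python
from itertools import product

def toffoli(a, b, c):
    return a, b, c ^ (a & b)       # target flips only when both controls are 1

inputs = list(product((0, 1), repeat=3))
outputs = [toffoli(*bits) for bits in inputs]
assert len(set(outputs)) == 8      # bijective: every output identifies one input
for inp, out in zip(inputs, outputs):
    print(inp, "->", out)
```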

Journal ArticleDOI
TL;DR: A study of recent techniques and automated approaches to attributing authorship of online messages is presented and evaluation criteria and parameters for authorship attribution studies are discussed.
Abstract: Authorship identification techniques are used to identify the most likely author of online messages from a group of potential suspects and to find evidence to support the conclusion. Cybercriminals misuse online communication, for example to send blackmail or spam email, and then attempt to hide their true identities to avoid detection. Authorship identification of online messages is a contemporary research issue for identity tracing in cyber forensics. It is a highly interdisciplinary area, taking advantage of machine learning, information retrieval, and natural language processing. In this paper, a study of recent techniques and automated approaches to attributing authorship of online messages is presented. The focus of this review is to summarize the existing authorship identification techniques used in the literature to identify authors of online messages. It also discusses evaluation criteria and parameters for authorship attribution studies and lists open questions that will attract future work in this area.
Keywords—cyber crime; Author Identification; SVM