
Showing papers in "International Journal of Advanced Computer Science and Applications in 2014"


Journal ArticleDOI
TL;DR: This paper presents and modifies the technology acceptance model (TAM) in an attempt to assist public universities, particularly in Saudi Arabia, in predicting the behavioural intention to use learning management systems (LMS).
Abstract: Although e-learning is in its infancy in Saudi Arabia, most of the public universities in the country show a great interest in the adoption of learning and teaching tools. Determining the significance of a particular tool and predicting the success of its implementation are essential prior to adoption. This paper presents and modifies the technology acceptance model (TAM) in an attempt to assist public universities, particularly in Saudi Arabia, in predicting the behavioural intention to use learning management systems (LMS). This study proposes a theoretical framework that includes the core constructs in TAM, namely perceived ease of use, perceived usefulness, and attitude toward usage. Additional external variables were also adopted, namely the lack of LMS availability, prior experience (LMS usage experience), and job relevance. The overall research model suggests that all mentioned variables either directly or indirectly affect the overall behavioural intention to use an LMS. Initial findings suggest the applicability of TAM for measuring the behavioural intention to use an LMS. Further, the results confirm the original TAM's findings.

446 citations


Journal ArticleDOI
TL;DR: This study presents the classical ID3 algorithm, discusses in more detail C4.5, a natural extension of ID3, and compares these two algorithms with other algorithms such as C5.0 and CART.

Abstract: Data mining is a useful tool for discovering knowledge in large datasets, and different methods and algorithms are available for it. Classification is the most common method used for finding mining rules in large databases. The decision tree method is generally used for classification because its simple hierarchical structure aids user understanding and decision making. Various data mining algorithms are available for classification, based on artificial neural networks, the nearest-neighbour rule, and Bayesian classifiers, but decision tree mining is the simplest one. The ID3 and C4.5 algorithms, introduced by J. R. Quinlan, produce reasonable decision trees. The objective of this paper is to present these algorithms. We first present the classical ID3 algorithm, then discuss in more detail C4.5, a natural extension of ID3, and finally compare these two algorithms with other algorithms such as C5.0 and CART.

287 citations
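The entropy-based splitting criterion shared by ID3 and C4.5 can be made concrete with a short sketch (not the authors' implementation; the toy data and function names below are hypothetical):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """ID3's splitting criterion: entropy reduction from splitting
    on the attribute at attr_index."""
    base = entropy(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in partitions.values())
    return base - remainder

# Hypothetical toy data: [outlook, windy] -> play?
rows = [["sunny", "no"], ["sunny", "yes"], ["rain", "no"], ["rain", "yes"]]
labels = ["yes", "yes", "no", "no"]
print(information_gain(rows, labels, 0))  # outlook separates classes: gain 1.0
print(information_gain(rows, labels, 1))  # windy is uninformative: gain 0.0
```

C4.5 refines this criterion into the gain ratio, normalizing information gain by the entropy of the split itself to avoid favouring many-valued attributes.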


Journal ArticleDOI
TL;DR: This survey identifies and classifies simplification research within the period 1998-2013 and gives an overview of contemporary research whilst taking into account the history that has brought text simplification to its current state.
Abstract: Text simplification modifies syntax and lexicon to improve the understandability of language for an end user. This survey identifies and classifies simplification research within the period 1998-2013. Simplification can be used for many applications, including second-language learners, preprocessing in pipelines, and assistive technology. There are many approaches to the simplification task, including lexical, syntactic, statistical machine translation, and hybrid techniques. This survey also explores the current challenges which this field faces. Text simplification is a non-trivial task which is rapidly growing into its own field. This survey gives an overview of contemporary research whilst taking into account the history that has brought text simplification to its current state.

183 citations


Journal ArticleDOI
TL;DR: A system based on a QR code displayed to students at the beginning of, or during, each lecture; students scan the code to confirm their attendance.

Abstract: Smartphones are becoming more preferred companions to users than desktops or notebooks. Given that smartphones are most popular with users aged around 26, using smartphones to speed up the process of taking attendance by university instructors would save lecturing time and hence enhance the educational process. This paper proposes a system based on a QR code displayed to students at the beginning of, or during, each lecture. The students scan the code to confirm their attendance. The paper explains the high-level implementation details of the proposed system. It also discusses how the system verifies student identity to eliminate false registrations.

96 citations
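The paper gives only high-level details of the system; the sketch below shows one plausible way to generate a rotating per-lecture QR code, assuming the third-party qrcode package. The secret, URL, and token scheme are hypothetical illustrations, not the authors' design:

```python
import hashlib
import time
import qrcode  # third-party package: pip install "qrcode[pil]"

SECRET = "server-side-secret"  # hypothetical shared secret, kept on the server

def lecture_token(course_id: str, slot: int) -> str:
    """Derive an unguessable per-lecture token; students cannot forge it."""
    return hashlib.sha256(f"{SECRET}:{course_id}:{slot}".encode()).hexdigest()[:16]

def make_attendance_qr(course_id: str) -> None:
    slot = int(time.time() // 300)           # token rotates every 5 minutes
    token = lecture_token(course_id, slot)
    url = f"https://attendance.example.edu/confirm?c={course_id}&t={token}"
    qrcode.make(url).save("attendance.png")  # image shown on the projector

make_attendance_qr("CS101")
```

On the server side, verification would recompute the token for the current (and previous) slot and compare, tying the confirmation to the logged-in student identity.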


Journal ArticleDOI
TL;DR: A comparative study between meta-heuristic algorithms (Genetic Algorithm, Tabu Search, and Simulated Annealing) for solving a real-life QAP, analyzing their performance in terms of both runtime efficiency and solution quality, shows that the Genetic Algorithm achieves better solution quality.

Abstract: The Quadratic Assignment Problem (QAP) is an NP-hard combinatorial optimization problem; solving the QAP therefore requires applying one or more meta-heuristic algorithms. This paper presents a comparative study between the meta-heuristic algorithms Genetic Algorithm, Tabu Search, and Simulated Annealing for solving a real-life QAP, and analyzes their performance in terms of both runtime efficiency and solution quality. The results show that the Genetic Algorithm achieves better solution quality, while Tabu Search has a faster execution time, in comparison with the other meta-heuristic algorithms for solving the QAP.

77 citations
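As an illustration of the kind of meta-heuristic compared in the paper, here is a minimal simulated-annealing solver for a toy QAP instance (the flow/distance matrices and cooling parameters are hypothetical, not the paper's real-life instance):

```python
import math
import random

# Hypothetical 4-facility QAP instance: flow and distance matrices.
F = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
D = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]

def cost(perm):
    """QAP objective: sum of flow(i,j) * distance(perm[i], perm[j])."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

def simulated_annealing(n, temp=10.0, cooling=0.995, iters=20000):
    current = list(range(n))
    random.shuffle(current)
    cur_cost = cost(current)
    best, best_cost = current[:], cur_cost
    for _ in range(iters):
        i, j = random.sample(range(n), 2)              # neighbour: swap two facilities
        current[i], current[j] = current[j], current[i]
        new_cost = cost(current)
        if new_cost < cur_cost or random.random() < math.exp((cur_cost - new_cost) / temp):
            cur_cost = new_cost                        # accept the move
            if cur_cost < best_cost:
                best, best_cost = current[:], cur_cost
        else:
            current[i], current[j] = current[j], current[i]  # undo the swap
        temp *= cooling                                # geometric cooling schedule
    return best, best_cost

print(simulated_annealing(4))
```

A genetic algorithm would instead evolve a population of permutations with crossover and mutation, and tabu search would forbid recently used swaps; the swap neighbourhood and objective above are shared by all three.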


Journal ArticleDOI
TL;DR: The main idea of the current work is to use a wireless Electroencephalography headset as a remote control for the mouse cursor of a personal computer using EEG signals as a communication link between brains and computers.
Abstract: The main idea of the current work is to use a wireless Electroencephalography (EEG) headset as a remote control for the mouse cursor of a personal computer. The proposed system uses EEG signals as a communication link between brains and computers. Signal records obtained from the PhysioNet EEG dataset were analyzed using Coiflet wavelets, and many features were extracted using different amplitude estimators for the wavelet coefficients. The extracted features were fed into machine learning algorithms to generate the decision rules required for our application. The suggested real-time implementation of the system was tested and very good performance was achieved. This system could be helpful for disabled people, as they can control computer applications via the imagination of fist and foot movements, in addition to closing their eyes for a short period of time.
Keywords: EEG; BCI; Data Mining; Machine Learning; SVMs; NNs; DWT; Feature Extraction

67 citations
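A minimal sketch of the wavelet-feature pipeline the abstract describes, assuming PyWavelets and scikit-learn; a synthetic two-class signal stands in for the PhysioNet recordings, and the feature set is illustrative rather than the authors' exact amplitude estimators:

```python
import numpy as np
import pywt                                  # PyWavelets: pip install PyWavelets
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def wavelet_features(signal, wavelet="coif1", level=4):
    """Amplitude-style features (mean |c|, RMS, std) per coefficient band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.sqrt(np.mean(c ** 2)), np.std(c)]
    return feats

# Synthetic stand-in for two imagined-movement classes (not the PhysioNet data).
X = np.array([wavelet_features(rng.standard_normal(640) + (0.5 if label else 0.0) *
                               np.sin(np.linspace(0, 60, 640)))
              for label in ([0] * 50 + [1] * 50)])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf")
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```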


Journal ArticleDOI
TL;DR: An opinion mining and analysis tool that collects different forms of Arabic (i.e., Standard/MSA and colloquial) and yields more accurate results when applied to domain-based Arabic reviews than to general-based Arabic reviews.

Abstract: Social media constitutes a major component of Web 2.0 and includes social networks, blogs, forum discussions, micro-blogs, etc. Users of social media generate a huge volume of reviews and comments on a daily basis. These reviews and comments reflect the opinions of users about different issues, such as products, news, entertainment, or sports. Therefore, different establishments may need to analyze these reviews and comments. For example, it is essential for companies to know the pros and cons of their products or services in the eyes of customers, and governments may want to know the attitude of people towards certain decisions, services, etc. Although the manual analysis of textual reviews and comments can be more accurate than automatic methods, it is time consuming, expensive, and can be subjective. Furthermore, the huge amount of data contained in social networks can make it impractical to perform analysis manually. This paper focuses on evaluating Arabic social content. The Middle East is currently an area rich in major political and social reforms, and social media can be a rich source of information for evaluating such contexts. In this research we developed an opinion mining and analysis tool that collects different forms of Arabic (i.e., Standard/MSA and colloquial). The tool accepts comments and opinions as input and generates polarity-based outputs related to the comments. Additionally, the tool can determine whether a comment or review is subjective or objective, positive or negative, and strong or weak. The evaluation of the performance of the developed tool showed that it yields more accurate results when applied to domain-based Arabic reviews than to general-based Arabic reviews.

66 citations
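The authors' tool is not reproduced here, but a toy lexicon-based scorer conveys the subjective/objective, positive/negative, strong/weak outputs the abstract describes (the lexicon entries and thresholds are hypothetical):

```python
# Minimal lexicon-based polarity scoring, in the spirit of the paper's tool.
# The tiny lexicon below is hypothetical, not the authors' resource.
POLARITY = {
    "رائع": 1.0,      # "wonderful"
    "ممتاز": 1.0,     # "excellent"
    "سيء": -1.0,      # "bad"
    "رديء": -1.0,     # "poor"
}

def score_comment(comment: str):
    hits = [POLARITY[w] for w in comment.split() if w in POLARITY]
    if not hits:
        return ("objective", None, None)       # no opinion words found
    total = sum(hits)
    polarity = "positive" if total > 0 else "negative"
    strength = "strong" if abs(total) >= 2 else "weak"
    return ("subjective", polarity, strength)

print(score_comment("المنتج رائع ممتاز"))   # ('subjective', 'positive', 'strong')
```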


Journal ArticleDOI
TL;DR: An algorithm to provide or suggest recommendations based on users' queries, which employs both the TF-IDF weighting scheme and the cosine similarity measure, and will help library users find the research papers most relevant to their needs.

Abstract: Recommender systems are software applications that provide or suggest items to intended users. These systems use filtering techniques to provide recommendations, the major ones being collaborative filtering, content-based filtering, and hybrid algorithms. The motivation came as a result of the need to integrate a recommendation feature in digital libraries in order to reduce information overload. The content-based technique is adopted because of its suitability in domains or situations where items outnumber users. TF-IDF (Term Frequency Inverse Document Frequency) and cosine similarity were used to determine how relevant or similar a research paper is to a user's query or profile of interest. Research papers and the user's query were represented as vectors of weights using a keyword-based Vector Space Model. The weights indicate the degree of association between a research paper and a user's query. This paper also presents an algorithm to provide or suggest recommendations based on users' queries. The algorithm employs both the TF-IDF weighting scheme and the cosine similarity measure. Based on the results of the system, integrating a recommendation feature in digital libraries will help library users find the research papers most relevant to their needs.

61 citations
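The TF-IDF plus cosine-similarity core of such a recommender is straightforward to sketch with scikit-learn (the toy corpus below is hypothetical; the paper's own indexing pipeline is not shown):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical paper titles standing in for a digital-library corpus.
papers = [
    "content based filtering for digital libraries",
    "collaborative filtering with matrix factorization",
    "tf-idf weighting for information retrieval",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(papers)      # papers as TF-IDF vectors

def recommend(query: str, top_n: int = 2):
    """Rank papers by cosine similarity between the query and document vectors."""
    q = vectorizer.transform([query])
    sims = cosine_similarity(q, doc_vectors).ravel()
    return sorted(zip(sims, papers), reverse=True)[:top_n]

for sim, title in recommend("tf-idf retrieval in digital libraries"):
    print(f"{sim:.2f}  {title}")
```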


Journal ArticleDOI
TL;DR: In this paper, the authors discussed the selection of touch gestures for children's applications and investigated the gestures that children aged between 2 to 4 years old can manage on the iPad device.
Abstract: This paper discusses the selection of touch gestures for children's applications. This research investigates the gestures that children aged between 2 and 4 years old can manage on the iPad. Two similar experiments were conducted, the first in the United Kingdom and the second in Malaysia, to increase the reliability of the results and refine them. This study shows that children aged 4 years have no problem using the 7 common gestures found in iPad applications. Some children aged 3 years had problems with two of the gestures. A high percentage of children aged 2 years struggled with the free rotate, drag & drop, pinch, and spread gestures. This paper also discusses additional criteria for the use of gestures, interface design components, and research on children using the iPad and its applications.

59 citations


Journal ArticleDOI
TL;DR: A 3D-CNN is designed, augmented with dimensionality reduction methods such as PCA and TMPCA, to simultaneously recognize successive frames of facial expression images obtained through a video camera, achieving some degree of shift and deformation invariance.

Abstract: This paper is concerned with video-based facial expression recognition, frequently used in conjunction with HRI (Human-Robot Interaction), which enables natural interaction between human and robot. For this purpose, we design a 3D-CNN (3D Convolutional Neural Network) augmented with dimensionality reduction methods such as PCA (Principal Component Analysis) and TMPCA (Tensor-based Multilinear Principal Component Analysis) to simultaneously recognize successive frames of facial expression images obtained through a video camera. The 3D-CNN can achieve some degree of shift and deformation invariance using local receptive fields and spatial subsampling through dimensionality reduction of the redundant CNN output. Experimental results on a video-based facial expression database reveal that the presented method performs well in comparison to conventional methods such as PCA and TMPCA.

59 citations
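A minimal 3D-CNN of the kind described can be sketched in Keras; the layer sizes, input shape, and class count below are hypothetical, and the paper's PCA/TMPCA dimensionality-reduction stages are omitted:

```python
from tensorflow.keras import layers, models

# Minimal 3D-CNN over short frame sequences (hypothetical sizes: 16 frames
# of 64x64 grayscale images, 7 expression classes).
model = models.Sequential([
    layers.Input(shape=(16, 64, 64, 1)),            # (frames, H, W, channels)
    layers.Conv3D(8, kernel_size=(3, 3, 3), activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),       # spatial subsampling
    layers.Conv3D(16, kernel_size=(3, 3, 3), activation="relu"),
    layers.MaxPooling3D(pool_size=(2, 2, 2)),       # spatio-temporal subsampling
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(7, activation="softmax"),          # one unit per expression
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```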


Journal ArticleDOI
TL;DR: To characterize the coverage variation for low-orbiting satellites at elevations up to 10°, simulations for altitudes from 600 km to 1200 km are presented in this paper.

Abstract: Low Earth Orbit (LEO) satellites are used for public networking and for scientific purposes. Communication via satellite begins when the satellite is positioned in its orbital position. Ground stations can communicate with LEO satellites only when the satellite is in their visibility region. The duration of visibility and communication varies for each LEO satellite pass over the station, since LEO satellites move quickly over the Earth. The satellite coverage area is defined as the region of the Earth where the satellite is seen at a minimum predefined elevation angle. The satellite's coverage area on the Earth depends on orbital parameters. Communication under low elevation angles can be hindered by natural barriers, so for safe communication and for savings within a link budget, coverage under too low an elevation is not always provided. LEO satellites organized in constellations act as a convenient network solution for real-time global coverage. The global coverage model is in fact the complementary networking process of individual satellites' coverage. Satellite coverage strongly depends on the elevation angle. To characterize the coverage variation for low-orbiting satellites at elevations up to 10°, simulations for altitudes from 600 km to 1200 km are presented in this paper.
Keywords: LEO; satellite; coverage
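The dependence of coverage on altitude and minimum elevation follows from spherical geometry: the Earth central angle of the coverage circle is beta = arccos(Re/(Re+h) * cos(eps)) - eps. A small sketch under a spherical-Earth assumption (not the paper's simulation code):

```python
import math

R_E = 6371.0  # mean Earth radius, km

def coverage_radius_km(altitude_km: float, min_elev_deg: float) -> float:
    """Ground radius of the coverage circle for a satellite at the given
    altitude, seen above the minimum elevation angle (spherical Earth)."""
    eps = math.radians(min_elev_deg)
    # Earth central angle: beta = arccos(Re/(Re+h) * cos(eps)) - eps
    beta = math.acos(R_E / (R_E + altitude_km) * math.cos(eps)) - eps
    return R_E * beta

for h in (600, 900, 1200):
    print(h, "km altitude ->", round(coverage_radius_km(h, 10.0), 1), "km radius")
```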

Journal ArticleDOI
TL;DR: The adoption of cloud computing has a significant impact on cost effectiveness, enhanced availability, low environmental impact, reduced IT complexity, mobility, scalability, increased operability, and reduced investment in physical assets.

Abstract: This study investigates the impact and challenges of the adoption of cloud computing by public universities in the southwestern part of Nigeria. A sample of 100 IT staff, 50 para-IT staff and 50 students was selected in each university using stratified sampling techniques, with the aid of well-structured questionnaires. Microsoft Excel was used to capture the data, while frequency and percentage distributions were used to analyze it. In all, 2,000 copies of the questionnaire were administered to the ten (10) public universities in the southwestern part of Nigeria, and 1,742 copies were returned, a response rate of 87.1%. The findings revealed that the adoption of cloud computing has a significant impact on cost effectiveness, enhanced availability, low environmental impact, reduced IT complexity, mobility, scalability, increased operability and reduced investment in physical assets. However, the major challenges confronting the adoption of the cloud are data insecurity, regulatory compliance concerns, lock-in and privacy concerns. This paper concludes by recommending strategies to manage the identified challenges in the study area.

Journal ArticleDOI
TL;DR: A solution to the problem of selecting the wireless communication technology best suited to the constraints imposed by the intended application, together with an evaluation of its key features.

Abstract: Systems based on intelligent sensors are currently expanding, owing to their functions and their intelligent capabilities: transmitting and receiving data in real time, computation and processing algorithms, remote metrology, diagnostics, automation and storage of measurements. Radio-frequency wireless communication, with its multitude of technologies, offers a better solution for data traffic in this kind of system. The main objective of this paper is to present a solution to the problem of selecting the wireless communication technology best suited to the constraints imposed by the intended application, and to evaluate its key features. The comparison between the different wireless technologies (Wi-Fi, Wi-Max, UWB, Bluetooth, ZigBee, ZigBee IP, GSM/GPRS) focuses on their performance, which depends on the area of utilization, and also shows the limits of their characteristics. The study's findings can be used by developers/engineers to deduce the optimal way to integrate and operate a system that guarantees communication quality while minimizing energy consumption, reducing implementation cost and avoiding time constraints.

Journal ArticleDOI
TL;DR: The techniques used here are K-Means and Fuzzy K-Means, which are very time-saving and efficient.

Abstract: Clustering is a major technique used for grouping numerical and image data in data mining and image processing applications. Clustering makes the job of image retrieval easy by finding images similar to the one given in the query. The images are grouped together into a given number of clusters. Image data are grouped on the basis of features such as color, texture and shape, contained in the images in the form of pixels. For efficiency and better results, image data are segmented before clustering is applied. The techniques used here are K-Means and Fuzzy K-Means, which are very time-saving and efficient.
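A minimal K-Means color-segmentation sketch with scikit-learn and Pillow (the file name and cluster count are hypothetical; a fuzzy variant would replace the hard labels with soft memberships):

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

# Segment an image by clustering pixel colors (plain K-Means).
img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float64)
pixels = img.reshape(-1, 3)                           # one RGB sample per pixel

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
segmented = kmeans.cluster_centers_[kmeans.labels_]   # recolor by centroid
Image.fromarray(segmented.reshape(img.shape).astype(np.uint8)).save("segments.png")
```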

Journal ArticleDOI
TL;DR: The usability attributes evaluated were user-friendliness, learnability, technological infrastructure and policy; the study made recommendations which could help universities accelerate the adoption of e-learning systems.

Abstract: The use of e-learning systems has increased significantly in recent times. E-learning systems are supplementing teaching and learning in universities globally, and Kenyan universities have adopted e-learning technologies as a means of delivering course content. However, despite the adoption of these systems, there are considerable challenges facing their usability, and lecturers and students have different perceptions of the usability of e-learning systems. The aim of this study was to evaluate usability attributes that affect e-learning systems in Kenyan universities. The study had twofold objectives: determining the status of e-learning platforms and evaluating usability issues affecting e-learning adoption in Kenyan universities. The research took as a case study one of the public universities which has implemented the Moodle e-learning system. The usability attributes evaluated were user-friendliness, learnability, technological infrastructure and policy. The research made recommendations which could help universities accelerate the adoption of e-learning systems.

Journal ArticleDOI
TL;DR: Results show the effectiveness of linguistic tools such as grammar, syntax and textual patterns, which are fairly productive for learning and assessment in educational contexts and can be utilized with scientific computer programs to enhance the process of education.

Abstract: Natural Language Processing (NLP) is an effective approach for bringing improvement to educational settings. Implementing NLP involves initiating the process of learning through natural acquisition in educational systems, and it is based on effective approaches for solving various problems and issues in education. NLP provides solutions in a variety of fields associated with the social and cultural context of language learning, and it is an effective approach for teachers, students, authors and educators, providing assistance for writing, analysis and assessment procedures. NLP is widely integrated with a large number of educational contexts such as research, science, linguistics, e-learning and evaluation systems, and contributes positive outcomes in other educational settings such as schools, higher education systems and universities. The paper aims to address the process of natural language learning and its implications for educational settings. The study also highlights how NLP can be utilized with scientific computer programs to enhance the process of education. The study follows a qualitative approach: data were collected from secondary resources in order to identify the problems faced by teachers and students in understanding context due to language obstacles. The results show the effectiveness of linguistic tools such as grammar, syntax and textual patterns, which are fairly productive for learning and assessment in educational contexts.

Journal ArticleDOI
TL;DR: A new model is suggested for the detection functionality currently performed by host-based antivirus software, together with the benefits of multiple detection engines in the cloud and a new approach to coordinating detection across the cloud.

Abstract: Detecting malicious software is a complex problem. The vast, ever-increasing ecosystem of malicious software and tools presents a daunting challenge for network operators and IT administrators. Antivirus software is one of the most widely used tools for detecting and stopping malicious and unwanted software. However, the elevating sophistication of modern malicious software means that it is increasingly challenging for any single vendor to develop signatures for every new threat. Indeed, a recent Microsoft survey found more than 45,000 new variants of backdoors, Trojans, and bots during the second half of 2006 [1]. In this paper, we suggest a new model for the detection functionality currently performed by host-based antivirus software, characterized by two key changes. First, malware detection as a network service: the detection capabilities currently provided by host-based antivirus software can be more efficiently and effectively provided as an in-cloud network service. Instead of running complex analysis software on every end host, we suggest that each end host run a lightweight process to detect new files, send them to a network service for analysis, and then permit access or quarantine them based on a report returned by the network service. Second, multi-detection techniques: the identification of malicious and unwanted software should be determined by multiple, different detection engines; malware detection systems should leverage the detection capabilities of multiple engines to more effectively determine malicious and unwanted files. In the future, we will see increased dependence on cloud computing as consumers move to mobile platforms for their computing needs. Cloud technologies have become possible through virtualization, which shares physical server resources among multiple virtual machines (VMs). The advantages of this approach include an increase in the number of clients that can be served per physical server and the ability to provide software as a service (SaaS). In this paper, previous work on malware detection is presented, both conventional and with the cloud as storage, in order to determine the best approach for detection in the cloud [2]. We also argue the benefits of multiple detection engines throughout the cloud and present a new approach to coordinate detection across the cloud. Section II provides background and related work in the research area, specifically cloud technologies, security systems in the cloud, malware detection and detection in the cloud. Section III explains our proposed system, and Section IV presents remarks on our system. Finally, Section V concludes the points raised in this paper and provides some ideas for future work.
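The in-cloud, multi-engine idea can be sketched as a toy service that caches verdicts by file hash and combines several engines' votes (the engines, quorum rule, and cache below are hypothetical stand-ins, not the paper's system):

```python
import hashlib

# Hypothetical engine interface: each engine returns True if it flags the file.
def engine_a(data: bytes) -> bool:
    return b"EICAR" in data                       # toy signature match

def engine_b(data: bytes) -> bool:
    return data[:2] == b"MZ" and b"suspicious" in data   # toy heuristic

ENGINES = [engine_a, engine_b]
VERDICT_CACHE = {}          # hash -> verdict, so known files skip re-analysis

def analyze(data: bytes, quorum: int = 1) -> str:
    """In-cloud service: run every engine and combine verdicts."""
    digest = hashlib.sha256(data).hexdigest()
    if digest not in VERDICT_CACHE:
        flags = sum(engine(data) for engine in ENGINES)
        VERDICT_CACHE[digest] = "quarantine" if flags >= quorum else "allow"
    return VERDICT_CACHE[digest]

# Lightweight host process: on seeing a new file, defer to the service.
print(analyze(b"hello world"))          # allow
print(analyze(b"EICAR test content"))   # quarantine
```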

Journal ArticleDOI
TL;DR: The SentiTFIDF model was evaluated by comparing it with state-of-the-art techniques for sentiment classification on the movie dataset; the proportional distribution at which a term is classified as a Senti-stop-word was determined experimentally.

Abstract: Sentiment classification refers to computational techniques for classifying whether the sentiment of a text is positive or negative. Statistical techniques based on term presence and term frequency, using Support Vector Machines, are popularly used for sentiment classification. This paper presents an approach for classifying a term as positive or negative based on its proportional frequency count distribution and proportional presence count distribution across positively tagged documents in comparison with negatively tagged documents. Our approach is based on the term weighting techniques used for information retrieval and sentiment classification, but differs significantly from these traditional methods due to our model of logarithmic differential term frequency and term presence distribution for sentiment classification. Terms with nearly equal distribution in positively tagged and negatively tagged documents were classified as Senti-stop-words and discarded; the proportional distribution at which a term is classified as a Senti-stop-word was determined experimentally. We evaluated the SentiTFIDF model by comparing it with state-of-the-art techniques for sentiment classification on the movie dataset.
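The logarithmic differential distribution idea can be sketched as follows, under hypothetical toy corpora and an illustrative threshold (this is not the authors' exact SentiTFIDF formulation):

```python
import math
from collections import Counter

# Toy tagged corpora (hypothetical); each document is a list of terms.
pos_docs = [["good", "great", "plot"], ["great", "acting", "good"]]
neg_docs = [["bad", "boring", "plot"], ["bad", "weak", "acting"]]

def proportional_freq(term, docs):
    """Term's frequency as a proportion of all term occurrences in the class."""
    counts = Counter(t for d in docs for t in d)
    total = sum(counts.values())
    return counts[term] / total

def classify_term(term, threshold=0.25):
    """Log of the ratio of class-conditional proportions; terms with a
    near-equal distribution are treated as Senti-stop-words and discarded."""
    p = proportional_freq(term, pos_docs) + 1e-9   # smoothing
    n = proportional_freq(term, neg_docs) + 1e-9
    diff = math.log(p / n)
    if abs(diff) < threshold:
        return "senti-stop-word"
    return "positive" if diff > 0 else "negative"

for t in ("good", "bad", "plot"):
    print(t, classify_term(t))
```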

Journal ArticleDOI
TL;DR: An efficient eye tracking method is proposed which uses the position of the detected face and provides 98% overall accuracy, with 100% detection accuracy at a distance of 35 cm under artificial light.

Abstract: In this paper, we present a real-time method based on video and image processing algorithms for eye blink detection. The motivation for this research is the needs of disabled people who cannot control calls on a mobile device directly and need to operate it without their hands. A Haar cascade classifier is applied for face and eye detection to obtain eye and facial axis information; in addition, the same classifier is used, based on Haar-like features, to find the relationship between the eyes and the facial axis for positioning the eyes. An efficient eye tracking method is proposed which uses the position of the detected face. Finally, eye blink detection based on eyelid state (closed or open) is used for controlling Android mobile phones. The method is applied with and without a smoothing filter to show the improvement in detection accuracy. The application is used in real time to study the effect of light and of the distance between the eyes and the mobile device, in order to evaluate the detection accuracy and overall accuracy of the system. Test results show that our proposed method provides 98% overall accuracy and 100% detection accuracy at a distance of 35 cm under artificial light.
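A minimal OpenCV sketch of the Haar-cascade face/eye detection loop the abstract describes (the "no eye found means closed" heuristic is a simplification, and the cascades and parameters are the stock OpenCV ones, not necessarily the authors'):

```python
import cv2

# Bundled OpenCV Haar cascades for the face and for open eyes.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                       # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h // 2, x:x + w]       # eyes lie in the upper face half
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
        state = "open" if len(eyes) >= 1 else "blink/closed"
        cv2.putText(frame, state, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("blink", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```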

Journal ArticleDOI
TL;DR: A probabilistic Monte-Carlo framework is developed and applied to predict the remaining useful life of a component; the prognostics are carried out by means of simulation.

Abstract: Power electronics are widely used in electric vehicles, railway locomotives and new-generation aircraft. The reliability of these components directly affects the reliability and performance of these vehicular platforms. In recent years, considerable research on reliability, failure modes and aging analysis has been carried out, and there is a need for an efficient algorithm able to predict the life of power electronic components. In this paper, a probabilistic Monte-Carlo framework is developed and applied to predict the remaining useful life of a component. Probability distributions are used to model the component's degradation process, with the model parameters learned using Maximum Likelihood Estimation. The prognostics are carried out by means of simulation: Monte-Carlo simulation is used to propagate multiple possible degradation paths based on the current health state of the component. The remaining useful life and confidence bounds are calculated by estimating the mean, median and percentile descriptive statistics of the simulated degradation paths. Results from different probabilistic models are compared and their prognostic performances are evaluated.
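A minimal sketch of Monte-Carlo RUL estimation under an assumed Gaussian-drift degradation model (parameters that the paper would fit by Maximum Likelihood are hard-coded here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical degradation model: a health index drifts upward with Gaussian
# increments until it crosses a failure threshold.
DRIFT, SIGMA, FAILURE_LEVEL = 0.10, 0.05, 10.0

def simulate_rul(current_health, n_paths=10000, max_steps=500):
    """Propagate many possible degradation paths and record the step at
    which each one crosses the failure threshold."""
    health = np.full(n_paths, current_health)
    rul = np.full(n_paths, max_steps, dtype=float)
    alive = np.ones(n_paths, dtype=bool)
    for step in range(1, max_steps + 1):
        health[alive] += rng.normal(DRIFT, SIGMA, alive.sum())
        crossed = alive & (health >= FAILURE_LEVEL)
        rul[crossed] = step
        alive &= ~crossed
    return rul

paths = simulate_rul(current_health=5.0)
print("median RUL:", np.median(paths))
print("90% bounds:", np.percentile(paths, [5, 95]))
```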

Journal ArticleDOI
TL;DR: A new method based on the combination of cryptography and steganography, known as crypto-steganography, in which each technique overcomes the other's weaknesses, making it difficult for intruders to attack or steal sensitive information.

Abstract: The two important aspects of security that deal with transmitting information or data over some medium, such as the Internet, are steganography and cryptography. Steganography deals with hiding the presence of a message, while cryptography deals with hiding the contents of a message. Both are used to ensure security, but neither can by itself fulfill the basic requirements of security, i.e., features such as robustness, undetectability and capacity. So a new method based on the combination of cryptography and steganography, known as crypto-steganography, in which each technique overcomes the other's weaknesses and which makes it difficult for intruders to attack or steal sensitive information, is proposed. This paper also describes the basic concepts of steganography and cryptography on the basis of the previous literature available on the topic.
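A minimal crypto-steganography sketch: encrypt with Fernet (from the cryptography package), then hide the ciphertext in the least-significant bits of a cover image. The file names are hypothetical and the LSB scheme is one simple choice among many, not necessarily the paper's:

```python
import numpy as np
from PIL import Image
from cryptography.fernet import Fernet   # pip install cryptography

def embed(cover_path, out_path, message: bytes, key: bytes):
    ciphertext = Fernet(key).encrypt(message)            # cryptography layer
    payload = len(ciphertext).to_bytes(4, "big") + ciphertext
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    pixels = np.asarray(Image.open(cover_path).convert("RGB")).copy()
    flat = pixels.reshape(-1)
    if bits.size > flat.size:
        raise ValueError("cover image too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # steganography layer: LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, "PNG")

def extract(stego_path, key: bytes) -> bytes:
    flat = np.asarray(Image.open(stego_path).convert("RGB")).reshape(-1)
    length = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    body = np.packbits(flat[32:32 + 8 * length] & 1).tobytes()
    return Fernet(key).decrypt(body)

key = Fernet.generate_key()
embed("cover.png", "stego.png", b"secret report", key)
print(extract("stego.png", key))
```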

Journal ArticleDOI
TL;DR: Simulation results in MATLAB with the most common faults indicate that the ESPRIT and R-MUSIC algorithms have a high capability of correctly identifying the frequencies of fault characteristic components, and a performance ranking is carried out to demonstrate the efficiency of the studied methods in fault detection.

Abstract: Electrical energy production based on wind power has become the most popular renewable resource in recent years because it provides reliable, clean energy at minimum cost. The major challenge for wind turbines is the electrical and mechanical failures which can occur at any time, causing breakdowns and damage, machine downtime and energy production loss. To circumvent this problem, several tools and techniques have been developed to enhance fault detection and diagnosis based on the stator current signature of wind turbine generators. Among these methods, parametric or super-resolution frequency estimation methods, which provide accurate spectrum estimates, can be useful for this purpose. Given the plurality of these algorithms, a comparative performance analysis is made to evaluate their robustness based on different metrics: accuracy, dispersion, computation cost, perturbations and fault severity. Finally, simulation results in MATLAB with the most common faults indicate that the ESPRIT and R-MUSIC algorithms have a high capability of correctly identifying the frequencies of fault characteristic components, and a performance ranking is carried out to demonstrate the efficiency of the studied methods in fault detection.
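A compact MUSIC-style pseudospectrum sketch with NumPy, using a synthetic signal with a supply tone plus a weak fault tone (the window length, frequencies, and model order are illustrative, not the paper's MATLAB setup):

```python
import numpy as np

def music_spectrum(x, n_sources, freqs, fs, m=32):
    """MUSIC pseudospectrum: eigen-decompose the sample autocorrelation
    matrix and project steering vectors onto the noise subspace."""
    N = len(x) - m + 1
    X = np.stack([x[i:i + m] for i in range(N)], axis=1)   # m x N snapshots
    R = X @ X.conj().T / N                      # sample autocorrelation matrix
    w, v = np.linalg.eigh(R)                    # eigenvalues in ascending order
    En = v[:, : m - 2 * n_sources]              # noise subspace (2 eigvecs/real tone)
    k = np.arange(m)
    p = []
    for f in freqs:
        a = np.exp(2j * np.pi * f / fs * k)     # steering vector at frequency f
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

fs = 1000.0
t = np.arange(2000) / fs
# Synthetic stator-current-like signal: 50 Hz supply plus a weak 35 Hz fault tone.
x = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 35 * t) \
    + 0.01 * np.random.default_rng(1).standard_normal(t.size)
freqs = np.linspace(20, 80, 601)
p = music_spectrum(x, n_sources=2, freqs=freqs, fs=fs)
print("peaks near:", freqs[np.argsort(p)[-5:]])  # should cluster around 35 and 50 Hz
```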

Journal ArticleDOI
TL;DR: This paper conducts different performance tests on three hypervisors (XenServer, ESXi and KVM), with results gathered using the SIGAR API (System Information Gatherer and Reporter) along with the Passmark benchmark suite.

Abstract: To make the cloud computing model practical and to provide essential characteristics such as rapid elasticity, resource pooling, on-demand access and measured service, two prominent technologies are required: one is the Internet, and the second is virtualization technology. Virtualization technology plays a major role in the success of cloud computing. A virtualization layer which supports multiple virtual machines above it by virtualizing hardware resources such as CPU, memory, disk and NIC is called a hypervisor. It is interesting to study how different hypervisors perform in a private cloud. Hypervisors come in paravirtualized, fully virtualized and hybrid flavors, and it is a novel idea to compare them in a private cloud environment. This paper conducts different performance tests on three hypervisors, XenServer, ESXi and KVM, with results gathered using the SIGAR API (System Information Gatherer and Reporter) along with the Passmark benchmark suite. In the experiment, CloudStack 4.0.2 (open-source cloud computing software) is used to create a private cloud, in which the management server is installed on the Ubuntu 12.04 64-bit operating system. The hypervisors XenServer 6.0, ESXi 4.1 and KVM (Ubuntu 12.04) are installed as hosts in the respective clusters, and their performance is evaluated in detail using the SIGAR framework, Passmark and NetPerf.

Journal ArticleDOI
TL;DR: A client-side solution to protect against phishing attacks: a Firefox extension, integrated as a toolbar, that checks whether the recipient website is trusted by inspecting the URL of each requested webpage and blocks the site if it is suspicious.

Abstract: Phishing steals personal or credential information by luring victims into a forged website similar to the original site and urging them to enter their information in the belief that the site is legitimate. The number of internet users who fall victim to phishing attacks is increasing, and phishing attacks are becoming more sophisticated. In this paper we propose a client-side solution to protect against phishing attacks: a Firefox extension, integrated as a toolbar, that is responsible for checking whether the recipient website is trusted by inspecting the URL of each requested webpage, and blocking the site if it is suspicious. Every URL is evaluated according to features extracted from it. Three heuristics (primary domain, subdomain, and path) and Naive Bayes classification using four lexical features, combined with page rankings received from two different services (Alexa and Google PageRank), are used to classify each URL. The proposed method requires no server changes and will protect internet users from fraudulent sites, especially from phishing attacks based on deceptive URLs. Experimental results show that our approach achieves a 48% accuracy ratio on a test set of 246 URLs, and an 87.5% accuracy ratio, excluding the NB addition, tested over 162 URLs.
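The lexical-feature Naive Bayes component can be sketched as follows; the four features, the toy training URLs, and the Gaussian NB variant are hypothetical stand-ins (the paper's heuristics and page-rank signals are omitted):

```python
import re
from urllib.parse import urlparse
import numpy as np
from sklearn.naive_bayes import GaussianNB

def lexical_features(url: str):
    """Four simple lexical features (illustrative; not the paper's exact set)."""
    p = urlparse(url)
    return [
        len(url),                                   # overall URL length
        url.count("."),                             # many dots -> subdomain abuse
        int(bool(re.search(r"\d", p.netloc))),      # digits in the host name
        int("@" in url or "-" in p.netloc),         # deceptive characters
    ]

# Tiny hypothetical training set: 1 = phishing, 0 = legitimate.
urls = [
    ("https://www.example.com/login", 0),
    ("https://docs.example.org/help", 0),
    ("http://secure-paypa1.com.evil.example/verify@account", 1),
    ("http://192-168-bank.example-login.net/update.php", 1),
]
X = np.array([lexical_features(u) for u, _ in urls])
y = np.array([label for _, label in urls])

clf = GaussianNB().fit(X, y)
print(clf.predict([lexical_features("http://paypa1-login.example.net/verify")]))
```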

Journal ArticleDOI
TL;DR: Web questionnaires completed by users showed the usefulness of integrating Web-GIS, SNS and recommendation systems, because the reference and recommendation functions can be expected to support tourists' excursion behavior.

Abstract: This study aims to develop a social recommendation media GIS (Geographic Information Systems) specially tailored to recommend tourist spots. The conclusions of this study are summarized in the following three points. (1) A social media GIS, an information system which integrates Web-GIS, SNS and a recommendation system into a single system, was deployed in the central part of Yokohama City in Kanagawa Prefecture, Japan. The social media GIS uses a design which demonstrates its usefulness in reducing the constraints of information inspection, time and space, and continuity, making it possible to redesign systems in accordance with target cases. (2) The social media GIS was operated for two months for members of the general public over 18 years old. The total number of users was 98, and the number of pieces of information submitted was 232. (3) Web questionnaires completed by users showed the usefulness of integrating Web-GIS, SNS and recommendation systems, because the reference and recommendation functions can be expected to support tourists' excursion behavior. Since an access survey of the log data showed that about 35% of accesses came from mobile information terminals, the preparation of an optimal interface for such terminals proved effective.

Journal ArticleDOI
TL;DR: A survey of the existing literature and research carried out in the area of project management using different models, methodologies, and frameworks is provided.
Abstract: This paper provides a survey of the existing literature and research carried out in the area of project management using different models, methodologies, and frameworks. Project Management (PM) broadly encompasses programme management, portfolio management, practice management, the project management office, etc. A project management system has a set of processes, procedures, frameworks, methods, tools, methodologies, techniques, resources, etc. which are used to manage the full life cycle of projects. This also includes creating risk, quality, performance, and other management plans to monitor and manage projects efficiently and effectively.

Journal ArticleDOI
TL;DR: Session identification in log files is performed using Hadoop in a distributed cluster, and the identified sessions are analyzed in R to produce a statistical report based on the total count of visits per day.

Abstract: Big Data refers to emerging, growing datasets beyond the ability of traditional database tools. Hadoop handles big data by processing massive quantities of information on a cluster of commodity hardware. Web server logs are semi-structured files generated by computers in large volumes, usually as flat text files, which MapReduce can process efficiently one line at a time. This paper performs session identification in log files using Hadoop in a distributed cluster. Apache Hadoop MapReduce, a data processing platform, is used in pseudo-distributed mode and in fully distributed mode. The framework effectively identifies the sessions used by web surfers, recognizing unique users and the pages they accessed. The identified sessions are analyzed in R to produce a statistical report based on the total count of visits per day. The results are compared with a non-Hadoop approach in a Java environment, and the proposed work achieves better time efficiency, storage, and processing speed.
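A Hadoop Streaming sketch of the sessionization step, assuming a hypothetical log layout of IP, epoch timestamp, and URL as the first three space-separated fields (Streaming sorts by key only, so per-key timestamp order is assumed here for simplicity):

```python
#!/usr/bin/env python3
"""Hadoop Streaming sessionization sketch: run with 'map' as the mapper
argument and no argument as the reducer."""
import sys

SESSION_GAP = 30 * 60   # a 30-minute gap starts a new session

def mapper():
    for line in sys.stdin:
        parts = line.split()
        if len(parts) >= 3:
            ip, ts, url = parts[0], parts[1], parts[2]
            print(f"{ip}\t{ts}\t{url}")          # key = IP address

def reducer():
    last_ip, last_ts, session = None, None, 0
    for line in sys.stdin:                       # streaming groups lines by key
        ip, ts, url = line.rstrip("\n").split("\t")
        ts = int(ts)
        if ip != last_ip or (last_ts is not None and ts - last_ts > SESSION_GAP):
            session += 1                         # new user or expired session
        print(f"{ip}\tsession-{session}\t{url}")
        last_ip, last_ts = ip, ts

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

The per-session output can then be aggregated by date and loaded into R for the daily visit counts the paper reports.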

Journal ArticleDOI
TL;DR: A functional taxonomy of generic skills that draws upon three fields of knowledge (education, software engineering and artificial intelligence) provides the backbone of an ontology for learning designs, enabling the creation of a library of learning designs based on their cognitive and meta-cognitive properties.
Abstract: Learning designs are central resources for educational environments because they provide the organizational structure of learning activities; they are concrete instructional methods. We characterize each learning design by the competencies they target. We define competencies at the meta-knowledge level, as generic processes acting on domain-specific knowledge. We summarize a functional taxonomy of generic skills that draws upon three fields of knowledge: education, software engineering and artificial intelligence. This taxonomy provides the backbone of an ontology for learning designs, enabling the creation of a library of learning designs based on their cognitive and meta-cognitive properties.

Journal ArticleDOI
TL;DR: The various security issues and vulnerabilities related to the IEEE 802.11 wireless LAN encryption standard and common threats/attacks pertaining to home and enterprise wireless LAN systems are discussed, and overall guidelines and recommendations are provided to home users and organizations.

Abstract: Wireless LANs are everywhere these days, from homes to large enterprise corporate networks, due to their ease of installation, employee convenience, avoidance of wiring cost and constant mobility support. However, the greater availability of wireless LANs means increased danger from attacks and increased challenges for organisations, IT staff and IT security professionals. This paper discusses the various security issues and vulnerabilities related to the IEEE 802.11 wireless LAN encryption standard and common threats/attacks pertaining to home and enterprise wireless LAN systems, and provides overall guidelines and recommendations to home users and organizations.

Journal ArticleDOI
TL;DR: The experimental results show that the Google machine translation system is better than the Babylon machine translation system in terms of the precision of translation from Arabic to English.

Abstract: Online text machine translation systems are widely and freely used throughout the world. Most of these systems use statistical machine translation (SMT), which is based on a corpus full of translation examples from which the system learns how to translate correctly. Online text machine translation systems differ widely in their effectiveness, so we have to evaluate their effectiveness fairly. Generally, manual (human) evaluation of machine translation (MT) systems is better than automatic evaluation, but it is not always feasible. Many MT evaluation approaches therefore use the distance or similarity of MT candidate output to a set of reference translations. This study presents a comparison of the effectiveness of two free online machine translation systems (Google Translate and the Babylon machine translation system) in translating Arabic to English. Among the many automatic methods used to evaluate machine translators is the Bilingual Evaluation Understudy (BLEU) method, which is used here to evaluate the translation quality of the two systems under consideration. A corpus consisting of more than 1000 Arabic sentences with two reference English translations for each sentence is used in this study. This corpus of Arabic sentences and their English translations contains 4169 Arabic words, of which 2539 are unique, and it is released online to be used by researchers. The Arabic sentences are distributed among four basic sentence functions (declarative, interrogative, exclamatory, and imperative). The experimental results show that the Google machine translation system is better than the Babylon machine translation system in terms of the precision of translation from Arabic to English.
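Computing sentence-level BLEU against two references, as the study's corpus provides, takes a few lines with NLTK (the example sentences are hypothetical):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Two reference translations per source sentence, as in the study's corpus.
references = [
    "the weather is beautiful today".split(),
    "today the weather is lovely".split(),
]
candidate = "the weather is lovely today".split()

smooth = SmoothingFunction().method1       # avoids zero scores on short sentences
score = sentence_bleu(references, candidate, smoothing_function=smooth)
print(f"BLEU = {score:.3f}")
```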