
Showing papers in "Journal of Computer Science in 2015"


Journal ArticleDOI
TL;DR: The development phases of introducing gamification into e-learning systems, various gamification design elements and their suitability for usage in e-learning systems are discussed, and an experimental study to investigate the effectiveness of gamification of an informatics online course showed that students enrolled in the gamified version of the online module achieved greater learning success.
Abstract: Gamification is the usage of game mechanics, dynamics, aesthetics and game thinking in non-game systems. Its main objective is to increase the user's motivation, experience and engagement. For the same reason, it has started to penetrate e-learning systems. However, when using gamified design elements in e-learning, we must consider various types of learners. In the phases of analysis and design of such elements, the cooperation of education, technology, pedagogy, design and finance experts is required. This paper discusses the development phases of introducing gamification into e-learning systems, various gamification design elements and their suitability for usage in e-learning systems. Several gamified design elements are found suited for e-learning (including points, badges, trophies, customization, leader boards, levels, progress tracking, challenges, feedback, social engagement loops and the freedom to fail). Advice for the usage of each of those elements in e-learning systems is also provided in this study. Based on this advice and the identified phases of introducing gamification into e-learning systems, we conducted an experimental study to investigate the effectiveness of gamification of an informatics online course. Results showed that students enrolled in the gamified version of the online module achieved greater learning success. These positive results encourage us to investigate the gamification of online learning content for other topics and courses. We also encourage more research on the influence of specific gamified design elements on learner motivation and engagement.

61 citations


Journal ArticleDOI
TL;DR: The experimental results show that Bayesian networks with Markov blanket estimation have superior performance on the diagnosis of cardiovascular diseases: the MBE model achieves a classification accuracy of 97.92% on test samples, while the TAN and SVM models achieve 88.54 and 70.83% respectively.
Abstract: Cardiovascular disease, or atherosclerosis, is any disease affecting the cardiovascular system. Such diseases include coronary heart disease, raised blood pressure, cerebrovascular disease, peripheral artery disease, rheumatic heart disease, congenital heart disease and heart failure. They are treated by cardiologists, thoracic surgeons, vascular surgeons, neurologists and interventional radiologists. Diagnosis is an important yet complicated task that needs to be done accurately and efficiently, and its automation is very much needed to help physicians provide better diagnosis and treatment. Computer-aided diagnosis systems are widely discussed as classification problems; the objective is to reduce the number of false decisions and increase the true ones. In this study, we evaluate the performance of Bayesian network (BN) classifiers in predicting the risk of cardiovascular disease. Bayesian networks are selected as they are able to produce probability estimates rather than predictions. These estimates allow predictions to be ranked and their expected costs to be minimized. The major advantage of BN is the ability to represent and hence understand knowledge. The cardiovascular dataset is provided by the University of California, Irvine (UCI) machine learning repository. It consists of 303 instances of heart disease data, each having 76 variables including the predicted class. This study evaluates two Bayesian network classifiers, Tree Augmented Naive Bayes (TAN) and Markov Blanket Estimation (MBE), and benchmarks their prediction accuracies against the Support Vector Machine (SVM). The experimental results show that Bayesian networks with Markov blanket estimation have superior performance on the diagnosis of cardiovascular diseases: the MBE model achieves a classification accuracy of 97.92% on test samples, while the TAN and SVM models achieve 88.54 and 70.83% respectively.
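As a rough illustration of this kind of benchmark (not the authors' TAN/MBE implementation), the sketch below trains a simpler Bayesian classifier and an SVM on UCI-style heart disease data with scikit-learn and compares their test accuracies; the file name and the "target" column are assumptions.

```python
# Hedged sketch: Bayesian classifier vs. SVM on UCI-style heart data.
# Not the paper's TAN/MBE pipeline; file name and label column are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart_disease.csv")   # hypothetical local copy of the UCI data
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()), ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```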

31 citations


Journal ArticleDOI
TL;DR: A query expansion algorithm for Semantic Information Retrieval in Sports Domain (SIRSD) is proposed to perform Semantic Search, improve search over large document repositories and reduce the issue of semantic interoperability during user query search.
Abstract: Semantic Search has been a long-sought goal of the envisioned Semantic Web. Information on the Web is growing at a very rapid pace and has become quite voluminous over the past few years. The semantics of the query are not considered in a Traditional Search system, since it is mere keyword-based search. To increase the number of relevant documents retrieved, queries need to be disambiguated by looking at their context. A query expansion algorithm for Semantic Information Retrieval in Sports Domain (SIRSD) is proposed to perform Semantic Search and improve search over large document repositories. This algorithm reformulates user queries by using WordNet and a domain ontology to improve the returned results. Our proposal is illustrated with sample experiments showing improvements with respect to Traditional Search and providing ground for further research and discussion. SIRSD reduces the issue of semantic interoperability during user query search. It has been inferred that the Semantic Search system yields higher average precision and recall when compared to Traditional Search. The results show its effectiveness in generating suitable query results with an accuracy of 87.1% compared to other generic search engines.
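A minimal sketch of the WordNet half of such query expansion is shown below; the SIRSD algorithm additionally consults a sports-domain ontology, which is omitted here.

```python
# Illustrative WordNet-based query expansion (not the SIRSD algorithm itself).
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def expand_query(query):
    terms = set(query.lower().split())
    for word in list(terms):
        for syn in wordnet.synsets(word):          # all senses of the word
            for lemma in syn.lemmas():             # synonyms in each sense
                terms.add(lemma.name().replace("_", " "))
    return terms

print(expand_query("football match score"))
```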

22 citations


Journal ArticleDOI
TL;DR: This study gives the details of different classification algorithms and feature selection methodologies, discusses the new features of the C5.0 classification algorithm over C4.5 and examines the performance of classification algorithms on high-dimensional datasets.
Abstract: The aim of this research paper is to study and discuss the various classification algorithms applied to different kinds of medical datasets and compare their performance. The classification algorithms with maximum accuracies on various kinds of medical datasets are taken for performance analysis. The result of the performance analysis shows the most frequently used algorithms for particular medical datasets and the best classification algorithm to analyse each specific disease. This study gives the details of different classification algorithms and feature selection methodologies. The study also discusses data constraints such as volume and dimensionality problems. This research paper also discusses the new features of the C5.0 classification algorithm over C4.5 and the performance of classification algorithms on high-dimensional datasets. This research paper summarizes various reviews and technical articles which focus on current research on medical diagnosis.

19 citations


Journal ArticleDOI
TL;DR: This paper describes the sensors used, the datasets used and the characteristics of the extraction techniques as well as the classifiers in the systems developed by previous researchers, and describes possible future developments and potential applications of lung sound analysis.
Abstract: The development of digital signal processing technology encourages researchers to develop better methods for automatic lung sound recognition than the existing ones. Lung sounds were originally assessed manually according to the doctor's expertise. Signal processing techniques are intended to reduce this subjectivity factor. Signal processing techniques for lung sound recognition are developed by researchers based on their point of view of the lung sounds: several researchers developed signal processing methods in the time domain, while others developed techniques in the frequency domain or combined several signal domains. This paper describes the sensors used, the datasets used and the characteristics of the extraction techniques as well as the classifiers in the systems developed by previous researchers. In the final section, we describe possible future developments and potential applications of lung sound analysis.

19 citations


Journal ArticleDOI
TL;DR: A new approach for automatic sentiment analysis of Malay movie reviews is proposed, implemented and evaluated, and it is illustrated that the hybrid method outperforms the state-of-the-art unigram baseline.
Abstract: Sentiment analysis, or opinion mining, refers to the automatic extraction of sentiments from a natural language text. Although many studies focusing on sentiment analysis have been conducted, there remains a limited amount of work that focuses on sentiment analysis in the Malay language. In this article, a new approach for automatic sentiment analysis of Malay movie reviews is proposed, implemented and evaluated. In contrast to most studies that focus on supervised or unsupervised machine learning approaches, this research aims to propose a new model for Malay sentiment analysis based on a combination of both approaches. We used sentiment lexicons in the new model to generate a new set of features to train a k-Nearest Neighbour (k-NN) classifier. We further illustrated that our hybrid method outperforms the state-of-the-art unigram baseline.
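The hybrid idea can be sketched as lexicon-derived counts feeding a k-NN classifier; the tiny word lists below are placeholders, not the paper's Malay sentiment lexicons.

```python
# Hedged sketch of the hybrid approach: lexicon counts become features
# for k-NN. Lexicons and training reviews here are invented placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

POS = {"bagus", "hebat", "menarik"}   # illustrative positive words
NEG = {"buruk", "bosan", "lemah"}     # illustrative negative words

def featurize(review):
    toks = review.lower().split()
    return [sum(t in POS for t in toks), sum(t in NEG for t in toks), len(toks)]

X = np.array([featurize(r) for r in ["filem ini bagus dan menarik",
                                     "cerita buruk dan bosan"]])
y = np.array([1, 0])                  # 1 = positive, 0 = negative
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([featurize("filem hebat")]))   # expected: [1]
```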

18 citations


Journal ArticleDOI
TL;DR: The results of the study show that certain instructional methods are particularly well suited for computer science education: Problem-based learning, learning tasks, discovery learning, computer simulation, project work and direct instruction.
Abstract: Answers to the questions of which instructional methods are suitable for school, which instructional methods should be applied in teaching individual subjects and how instructional methods support the act of learning represent challenges to general education and education in individual subjects. This article focuses on computer science teachers' examination of instructional methods supporting knowledge processes in the act of learning. A survey was conducted in which computer science teachers evaluated 20 instructional methods in regard to the following knowledge processes: Build, process, apply, transfer, assess and integrate. The results of the study show that certain instructional methods are particularly well suited for computer science education: Problem-based learning, learning tasks, discovery learning, computer simulation, project work and direct instruction.

18 citations


Journal ArticleDOI
TL;DR: This work proposes a method to predict users' visiting behaviours and obtain their interests by analyzing the patterns, using an Adaptive Neuro-Fuzzy Inference System with Subtractive Algorithm (ANFIS-SA).
Abstract: Websites on the internet are a useful source of information in our day-to-day activity. Web Usage Mining (WUM) is one of the major applications of data mining, artificial intelligence and related fields to web data, used to predict users' visiting behaviours and obtain their interests by analyzing the patterns. WUM has turned out to be one of the considerable areas of research in the field of computer and information science. The weblog is one of the major sources containing all the information regarding users' visited links, browsing patterns and time spent on a page or link; this information can be used in several applications like adaptive web sites, personalized services, customer profiling, pre-fetching and creating attractive web sites. WUM consists of preprocessing, pattern discovery and pattern analysis. Log data is typically noisy and unclear, so preprocessing is an essential step for an effective mining process. In the preprocessing phase, the data cleaning process includes removal of records of graphics, videos and format information, removal of records with failed HTTP status codes and robots cleaning. In the second phase, the user behaviour is organized into a set of clusters using Weighted Fuzzy-Possibilistic C-Means (WFPCM), each consisting of "similar" data items based on the user behaviour and navigation patterns, for use in pattern discovery. In the third phase, classification of the user behaviour is carried out for the purpose of analyzing it using an Adaptive Neuro-Fuzzy Inference System with Subtractive Algorithm (ANFIS-SA). The performance of the proposed work is evaluated based on accuracy, execution time and convergence behaviour using the anonymous Microsoft web dataset.
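The data-cleaning step can be sketched as a filter over common-format web server log lines; the regular expression, media extensions and bot markers below are assumptions, not the paper's exact rules.

```python
# Sketch of weblog cleaning: drop media requests, failed HTTP statuses and
# known robots. Assumes Apache common log format; patterns are illustrative.
import re

LINE = re.compile(r'(\S+) \S+ \S+ \[.*?\] "(?:GET|POST) (\S+) [^"]*" (\d{3})')
MEDIA = (".gif", ".jpg", ".png", ".css", ".js")
BOTS = ("googlebot", "crawler", "spider")

def clean(lines):
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        host, url, status = m.groups()
        if url.lower().endswith(MEDIA) or status != "200":
            continue                      # graphics / failed requests
        if any(b in line.lower() for b in BOTS):
            continue                      # robots cleaning
        yield host, url

sample = ['127.0.0.1 - - [10/Oct/2015:13:55:36 -0700] '
          '"GET /index.html HTTP/1.0" 200 2326']
print(list(clean(sample)))                # [('127.0.0.1', '/index.html')]
```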

17 citations


Journal ArticleDOI
TL;DR: Experimental results show that using the LoPS feature set improves the accuracy of Arabic text classification compared with the well-known Bag-of-Words feature and the recent Bag-of-Concepts (synset) features.
Abstract: Arabic text classification methods have emerged as a natural result of the existence of a massive amount of varied textual information (written in the Arabic language) on the web. In most text classification processes, feature selection is a crucial task since it highly affects the classification accuracy. Generally, two types of features can be used: statistical features and semantic and concept features. The main interest of this paper is to identify the most effective semantic and concept features for the Arabic text classification process. In this study, two novel feature sets that use lexical, semantic and lexico-semantic relations of the Arabic WordNet (AWN) ontology are suggested. The first feature set is the List of Pertinent Synsets (LoPS), the list of synsets that have a specific relation with the original terms. The second feature set is the List of Pertinent Words (LoPW), the list of words that have a specific relation with the original terms. Fifteen different relations (defined in the AWN ontology) are used with both proposed features. A Naive Bayes classifier is used to perform the classification process. The experimental results, which are conducted on the BBC Arabic dataset, show that using the LoPS feature set improves the accuracy of Arabic text classification compared with the well-known Bag-of-Words feature and the recent Bag-of-Concepts (synset) features. Also, it was found that LoPW (especially with the related-to relation) improves the classification accuracy compared with LoPS, Bag-of-Words and Bag-of-Concepts.
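The feature idea can be sketched by augmenting bag-of-words tokens with related terms from a lexical resource before training Naive Bayes; the tiny relation map below stands in for Arabic WordNet and is purely illustrative.

```python
# Hedged sketch: expand tokens with related terms, then train Naive Bayes.
# The RELATED map is a stand-in for AWN relations, not the paper's resource.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

RELATED = {"match": ["game", "fixture"], "bank": ["finance", "credit"]}

def expand(doc):
    toks = doc.lower().split()
    return " ".join(toks + [w for t in toks for w in RELATED.get(t, [])])

docs = ["match result today", "bank loan rates"]
labels = ["sport", "economy"]

vec = CountVectorizer()
X = vec.fit_transform(expand(d) for d in docs)
clf = MultinomialNB().fit(X, labels)
print(clf.predict(vec.transform([expand("game fixture tonight")])))
```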

16 citations


Journal ArticleDOI
TL;DR: This study explores the adaptation of another swarm algorithm, the Firefly Algorithm (FA), to text clustering and demonstrates that better clustering can be obtained once the exploitation of a search solution is improved.
Abstract: Document clustering is widely used in Information Retrieval; however, existing clustering techniques suffer from the local optima problem in determining the k number of clusters. Various efforts have been made to address this drawback, including the utilization of swarm-based algorithms such as Particle Swarm Optimization and Ant Colony Optimization. This study explores the adaptation of another swarm algorithm, the Firefly Algorithm (FA), to text clustering. We present two variants of FA: the Weight-based Firefly Algorithm (WFA) and the Weight-based Firefly Algorithm II (WFAII). The difference between the two algorithms is that WFAII includes a more restricted condition in determining members of a cluster. The proposed FA methods are evaluated using the 20Newsgroups dataset. Experimental results on the quality of clustering between the two FA variants are presented and compared against those produced by particle swarm optimization, K-means and the hybrid of FA and K-means. The obtained results demonstrate that WFAII outperformed WFA, PSO, K-means and FA-K-means. This result indicates that better clustering can be obtained once the exploitation of a search solution is improved.
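A toy sketch of the underlying firefly movement rule applied to cluster centroids is given below; it omits the weighting scheme of WFA and the stricter membership condition of WFAII, and the data and parameters are invented.

```python
# Toy firefly search over cluster centroids: brighter fireflies (lower SSE)
# attract dimmer ones. A simplification, not the paper's WFA/WFAII.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2))                 # stand-in document vectors
K, N_FF, ITERS = 3, 10, 50
beta0, gamma, alpha = 1.0, 1.0, 0.1

def sse(centroids):                              # within-cluster sum of squares
    d = np.linalg.norm(data[:, None, :] - centroids[None], axis=2)
    return d.min(axis=1).sum()

swarm = rng.normal(size=(N_FF, K, 2))            # each firefly = one centroid set
for _ in range(ITERS):
    light = np.array([-sse(f) for f in swarm])   # brightness = -SSE
    for i in range(N_FF):
        for j in range(N_FF):
            if light[j] > light[i]:              # move firefly i toward brighter j
                r2 = np.sum((swarm[j] - swarm[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                swarm[i] += beta * (swarm[j] - swarm[i]) + alpha * rng.normal(size=(K, 2))
                light[i] = -sse(swarm[i])

print("best SSE:", sse(swarm[int(np.argmax(light))]))
```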

15 citations


Journal ArticleDOI
TL;DR: The obtained shortest path is believed to assist evacuees in choosing a suitable exit route to evacuate safely, and the approach can be extended to improve the robustness of the algorithm.
Abstract: Finding the shortest path in a high-rise building during a critical incident or evacuation faces two main issues: evacuees find it difficult to identify the best routes, and their behavior makes the process more difficult. These problems are important since they relate to human life. Providing the shortest path and controlling evacuee behavior can lead to a successful evacuation. To overcome these issues, two main objectives were pursued: first, identifying a shortest path algorithm for evacuation; then, designing and developing an evacuation preparedness model via a shortest path algorithm to choose a suitable exit route. Three steps are involved to achieve these objectives: drafting the building layout plan, creating the visibility graph and finally implementing Dijkstra's algorithm to find the shortest path. Based on the experimental study, the results show that Dijkstra's algorithm produced a significant route to exit the building safely. Even though there are other factors that need to be considered, this preliminary result shows a promising outcome which can be extended to improve the robustness of the algorithm. In conclusion, the obtained shortest path is believed to assist evacuees in choosing a suitable exit route to evacuate safely.
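The core routing step can be sketched with Dijkstra's algorithm over a small hypothetical visibility graph; node names and distances below are made up for illustration.

```python
# Dijkstra's algorithm over a toy visibility graph of a building.
# Nodes are rooms/corners; edge weights are walkable distances (assumed).
import heapq

def dijkstra(graph, source, target):
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1]

graph = {"room": [("hall", 4.0)], "hall": [("stairs", 6.0), ("exit", 12.0)],
         "stairs": [("exit", 3.0)], "exit": []}
print(dijkstra(graph, "room", "exit"))   # ['room', 'hall', 'stairs', 'exit']
```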

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed approach recovers the original color image without noise and adaptively enhances image quality.
Abstract: This study presents an image enhancement approach based on the Cuckoo Search (CS) algorithm with morphological operations. Digital images are now produced in many image processing applications; machine vision, computer interfaces, manufacturing and compression for storage are some of the fields of application. Before an image is used in any application it has to be prepared, and such processing is called image enhancement. We propose a method for enhancing digital images that combines the cuckoo search algorithm with morphological operations. The appearance of noise produces distortion in an image, making the image unattractive and decreasing the discernibility of many features inside it. In this study, we work to overcome this drawback by obtaining an improved contrast value after converting the color image into a grayscale image. The fundamental characteristic of the CS algorithm is that the amplitudes of its components can objectively reflect the contribution of the gray levels to the representation of image information, yielding the best contrast value for an image. After the best contrast value of an image is selected by the CS algorithm, morphological operations are applied: the intensity parameters of the image are adjusted to improve its quality. Experimental results demonstrate that the proposed approach recovers the original color image without noise and adaptively enhances image quality.
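A toy sketch of the optimization idea is given below: a cuckoo-search-style random search (with a heavy-tailed step standing in for Lévy flights) picks a contrast (gamma) value that maximizes grayscale entropy. It is a simplification of the paper's method, and the morphological post-processing step is omitted.

```python
# Toy cuckoo-search-style contrast optimization: search gamma values that
# maximize grayscale entropy. All parameters and the image are illustrative.
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64)) ** 2           # toy low-contrast grayscale image

def entropy(x):
    hist, _ = np.histogram(x, bins=64, range=(0, 1))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

nests = rng.uniform(0.2, 3.0, size=15)          # candidate gamma values ("nests")
for _ in range(100):
    i = rng.integers(len(nests))
    step = rng.standard_cauchy() * 0.1          # heavy-tailed, Levy-like step
    cand = np.clip(nests[i] + step, 0.2, 3.0)
    if entropy(img ** cand) > entropy(img ** nests[i]):
        nests[i] = cand                         # replace worse nest

best = nests[np.argmax([entropy(img ** g) for g in nests])]
print("best gamma:", round(float(best), 3))
```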

Journal ArticleDOI
TL;DR: This study presents an experimental evaluation of Discrete Wavelet Transforms for use in speaker identification, in a system consisting of two stages: a feature extraction stage and an identification stage.
Abstract: This study presents an experimental evaluation of Discrete Wavelet Transforms for use in speaker identification. The features are tested using speech data provided by the CHAINS corpus. The system consists of two stages: a feature extraction stage and an identification stage. Parameters are extracted and used in a closed-set, text-independent speaker identification task. In this study the signals are pre-processed and features are extracted using discrete wavelet transforms. The energies of the wavelet coefficients are used for training the Gaussian Mixture Model. Daubechies wavelets are used and the speech samples are analyzed using 8 levels of decomposition.
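A hedged sketch of the feature-extraction stage is shown below: an 8-level Daubechies decomposition (via PyWavelets) whose subband energies train a Gaussian Mixture Model. The synthetic frames stand in for CHAINS corpus speech, and the GMM settings are assumptions.

```python
# Sketch: wavelet subband energies as speaker features for a GMM.
# Synthetic frames replace real speech; parameters are illustrative.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def wavelet_energies(signal, wavelet="db4", level=8):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # one energy per subband

rng = np.random.default_rng(0)
frames = [rng.normal(size=4096) for _ in range(20)]      # stand-in speech frames
X = np.vstack([wavelet_energies(f) for f in frames])

gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(X)
print(gmm.score(X))   # average log-likelihood; compare across speaker models
```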

Journal ArticleDOI
TL;DR: A classification of methodologies for echocardiography image segmentation is presented, providing a large number of segmentation techniques in a comprehensive and systematic manner and critically reviewing recent approaches in terms of their performance and degree of clinical evaluation with respect to the final goal of cardiac functional analysis.
Abstract: Due to the acoustic interference and artifacts which are inherent in echocardiography images, automatic segmentation of anatomical structures in cardiac ultrasound images is a real challenge. This paper surveys state-of-the-art research on echocardiography data segmentation methods, concentrating on techniques developed for clinical data. We present a classification of methodologies for echocardiography image segmentation, choosing ten recent papers that propose innovative ideas with proven clinical advantages or particular potential for the echocardiography segmentation task. The contribution of the paper is to serve as a tutorial of the field for both clinicians and technologists, providing a large number of segmentation techniques in a comprehensive and systematic manner and critically reviewing recent approaches in terms of their performance and degree of clinical evaluation with respect to the final goal of cardiac functional analysis.

Journal ArticleDOI
TL;DR: It is observed that NLP techniques improve the performance of the Intelligent Email Reply algorithm, enhancing its ability to classify and generate email responses with minimal errors using probabilistic methods.
Abstract: Email-based communication, in the course of globalization in recent years, has transformed into an all-encompassing form of interaction and requires automatic processes to control email correspondence in an environment of growing email databases. Relevance characteristics defining the class of an email generally include the topic and the sender of the email along with its body. Intelligent reply algorithms can be employed in which machine learning methods accommodate email content, using probabilistic methods to classify the context and nature of an email. This helps in the correct selection of a template for the email reply. Still, redundant information can cause errors in classifying an email. Natural Language Processing (NLP) possesses potential for optimizing text classification due to its direct relation to language structure. An enhancement is presented in this research to address email management issues by incorporating optimized information extraction for email classification, along with generating relevant dictionaries as emails vary in category and increase in volume. The open hypothesis of this research is that the underlying purpose of an email is communicating a message in the form of text. It is observed that NLP techniques improve the performance of the Intelligent Email Reply algorithm, enhancing its ability to classify and generate email responses with minimal errors using probabilistic methods. The improved algorithm is functionally automated with machine learning techniques to assist email users who find it difficult to manage a bulk variety of emails.
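The overall shape of such a pipeline can be sketched as classify-then-template; the categories, templates and training snippets below are invented for illustration, not the paper's actual data or dictionaries.

```python
# Sketch of a classify-then-reply pipeline: a probabilistic text classifier
# assigns a category, then the matching template is selected. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train = ["please send the invoice", "meeting moved to friday",
         "reset my password", "schedule a call next week"]
labels = ["billing", "scheduling", "support", "scheduling"]
TEMPLATES = {"billing": "Attached is your invoice.",
             "scheduling": "Confirmed, see you then.",
             "support": "A reset link has been sent."}

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(train, labels)
incoming = "could you reset the password for my account"
category = clf.predict([incoming])[0]
print(category, "->", TEMPLATES[category])
```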

Journal ArticleDOI
TL;DR: It is concluded that each individual finite set of Legendre moments will represent the unique image features independently, while the even orders ofLegendre moments describe most of the image characteristics.
Abstract: In this research, a numerical integration method is proposed to improve the computational accuracy of Legendre moments. To clarify the improved computation scheme, image reconstructions from higher orders of Legendre moments, up to 240, are conducted. With the more accurately generated moments, the distributions of image information in a finite set of Legendre moments are investigated. We have concluded that each individual finite set of Legendre moments represents unique image features independently, while the even orders of Legendre moments describe most of the image characteristics.
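A numerical sketch of Legendre moment computation over [-1, 1]^2 by simple midpoint quadrature is shown below; the paper proposes a more accurate integration scheme than this, so treat it only as a baseline illustration.

```python
# Midpoint-quadrature Legendre moments of a toy binary image.
# A baseline sketch, not the paper's improved integration method.
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moment(img, p, q):
    N = img.shape[0]
    x = -1 + (2 * np.arange(N) + 1) / N           # pixel-centre coordinates
    Pp = legval(x, [0] * p + [1])                 # P_p on the grid
    Pq = legval(x, [0] * q + [1])                 # P_q on the grid
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    return norm * (Pp[None, :] * Pq[:, None] * img).sum() * (2.0 / N) ** 2

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                           # toy square image
print(legendre_moment(img, 0, 0), legendre_moment(img, 2, 0))
```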

Journal ArticleDOI
TL;DR: A model is proposed that attempts to identify the organizational, technological and OIE factors that affect the adoption of EC and the impact of EC on organization performance, and provides a foundation for testing the relationships in the proposed model using empirical data or other techniques.
Abstract: Electronic Commerce (EC) has recently become the subject of interest of many researchers involved in behavioral and technology acceptance. EC has been heavily studied in developed countries, but there are only a few narrowly focused studies on EC adoption in developing countries, especially in the context of Small and Medium Enterprises (SMEs). Previous researchers have investigated many factors that influence the adoption of EC applications, such as organizational and technological factors. However, a review of the literature showed that Organization Information Ecology (OIE), an important factor in the context of EC, has not been receiving the attention it deserves in the context of EC adoption. Based on the literature review of previous studies, a model is proposed that attempts to identify the organizational, technological and OIE factors that affect the adoption of EC and the impact of EC on organization performance. Altogether, twelve hypotheses are proposed. The proposed conceptual model provides a foundation for testing the relationships in our suggested model using empirical data or other techniques.

Journal ArticleDOI
TL;DR: Although the EDoS attack is small at the moment, it is expected to grow in the near future in tandem with the growth in cloud usage, and many defence and mitigation mechanisms have been proposed to combat these attacks.
Abstract: Many organizations and service providers have started shifting from traditional server-cluster infrastructure to cloud-based infrastructure. The threat of Distributed Denial of Service (DDoS) attacks continues to wreak havoc in these cloud infrastructures. In addition to DDoS attacks, a new form of attack known as the Economic Denial of Sustainability (EDoS) attack has emerged in recent years. EDoS, which is unique to cloud infrastructure, may not be as easily detected as DDoS. Although the EDoS attack is small at the moment, it is expected to grow in the near future in tandem with the growth in cloud usage. As EDoS has a major economic impact, it can be considered more serious than DDoS, and many defence and mitigation mechanisms have been proposed to combat these attacks. This paper introduces EDoS and how it differs from DDoS. The existing mitigation techniques are described and their drawbacks are explained.

Journal ArticleDOI
TL;DR: A new method to generate a star catalog using density-based clustering is proposed in this article, which identifies regions of high star density by using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm.
Abstract: A new method to generate a star catalog using density-based clustering is proposed. It identifies regions of high star density by using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The number of stars is reduced by storing only the brightest star in each cluster. The brightest stars and all non-clustered members are then stored as navigation star candidates. A Monte Carlo simulation was performed to generate random FOVs to check the uniformity of the new catalog; the success criterion is that there are at least three stars in the FOV. The simulation compares the DBSCAN method with the Magnitude Filtering Method (MFM), the common method to generate star catalogs. The results show that the DBSCAN method is better than MFM: for a catalog of 846 stars, DBSCAN achieves a 100% success rate while MFM achieves 95%. It is concluded that density-based clustering is a promising method to select navigation stars for star catalog generation.
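The catalog-reduction idea can be sketched with scikit-learn's DBSCAN: cluster star positions, then keep the brightest star per cluster plus all noise points. The random star field and the eps/min_samples values below are illustrative assumptions.

```python
# Sketch of DBSCAN-based star catalog reduction: one brightest star per
# cluster plus all non-clustered stars. Star field and parameters invented.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
pos = rng.uniform(0, 360, size=(846, 2))        # (RA, Dec)-like coordinates
mag = rng.uniform(1, 6, size=846)               # magnitude (lower = brighter)

labels = DBSCAN(eps=2.0, min_samples=3).fit_predict(pos)
keep = list(np.where(labels == -1)[0])          # noise points: keep all
for c in set(labels) - {-1}:
    members = np.where(labels == c)[0]
    keep.append(members[np.argmin(mag[members])])  # brightest star in cluster

print(f"{len(keep)} navigation star candidates from 846 stars")
```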

Journal ArticleDOI
TL;DR: In this study, a hybrid approach that combines different architectures for resolving pronominal anaphora in the Arabic language is presented, and the experimental results indicate that the proposed hybrid approach is entirely reasonable and feasible for Arabic pronominal anaphora resolution.
Abstract: One of the challenges in natural language processing is to determine which pronouns refer to which intended referents in the discourse. Anaphora resolution is considered an important task for a number of natural language processing applications such as information extraction, question answering and text summarization. Most of the earlier work on anaphora resolution has been applied to English and other languages; however, work on Arabic has not been sufficiently studied. In this study, a hybrid approach that combines different architectures for resolving pronominal anaphora in the Arabic language is presented. The hybrid model adopts a strategy based on the combination of a rule-based and a machine learning approach. The collection of anaphors and their respective possible antecedents is identified in a rule-based manner, with morphological information taken into account. The selection of the most probable candidate as the antecedent of the anaphor is then done by machine learning based on a k-Nearest Neighbour (k-NN) approach. In this study, the appropriate features to be used in this task were determined and their effect on the performance of anaphora resolution was investigated. Experiments with the proposed method were performed using the corpus of the Quran annotated with pronominal anaphora. The experimental results indicate that the proposed hybrid approach is entirely reasonable and feasible for Arabic pronominal anaphora resolution.
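The machine learning stage can be sketched as k-NN over (anaphor, candidate) feature vectors; the features and toy data below are assumptions for illustration, not the paper's exact feature set.

```python
# Sketch of the k-NN antecedent-selection stage: each (anaphor, candidate)
# pair becomes a feature vector and k-NN scores it. Toy features/data only.
from sklearn.neighbors import KNeighborsClassifier

# assumed features: [sentence distance, gender match, number match]
X_train = [[0, 1, 1], [2, 0, 1], [1, 1, 0], [0, 1, 0], [3, 0, 0]]
y_train = [1, 0, 0, 1, 0]                # 1 = correct antecedent
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

candidates = {"al-walad": [0, 1, 1], "al-bint": [1, 0, 1]}
scores = {c: knn.predict_proba([f])[0][1] for c, f in candidates.items()}
print(max(scores, key=scores.get))       # highest-scoring candidate wins
```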

Journal ArticleDOI
TL;DR: A lightweight and robust cloud-based security model for OpenStack object storage within a cloud computing environment is proposed; it incorporates cryptographic algorithms at the first level (authentication/authorisation) and a hash function to introduce a more secure access method for authentication and authorisation.
Abstract: Despite the numerous potential benefits of Open Source Cloud Computing (OSCC) in several industrial and academic-oriented environments, OSCC can also be associated with some risks. However, with proper awareness, cloud consumers and organisations can clearly identify and avoid these risks. Studying OpenStack Swift security can provide a greater understanding of how OpenStack Swift functions and what types of security issues arise therein. In this study, a lightweight and robust cloud-based security model for OpenStack object storage within a cloud computing environment is proposed. Swift is a multi-user model in which every owner encrypts her/his files, and each owner uses different levels of cryptographic security. Reducing the key distribution complexity in this diverse model, with its variety of security settings, is critical. Note that the proposed model incorporates cryptographic algorithms at the first level (authentication/authorisation) and a hash function to introduce a more secure access method for authentication and authorisation.
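As a minimal sketch of the hash-based access idea (an illustration only, not the paper's full model or Swift's actual API), the snippet below stores a salted hash of a credential and verifies presented secrets against it.

```python
# Minimal salted-hash credential check, sketching the hash-function role
# in authentication/authorisation. Not the paper's model or Swift's API.
import hashlib, hmac, os

def make_record(secret: bytes):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

def verify(secret: bytes, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

salt, digest = make_record(b"owner-passphrase")
print(verify(b"owner-passphrase", salt, digest))    # True
print(verify(b"wrong-guess", salt, digest))         # False
```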

Journal ArticleDOI
TL;DR: This study proposes a new heuristic algorithm for mapping independent tasks in a grid environment so that they are assigned optimally among the available machines in a grid computing system, and compares it with other popular heuristics on several performance measures.
Abstract: Grid computing plays an important role in solving large-scale computational problems in a high performance computing environment. Scheduling of tasks to the most efficient and suitable resource is one of the most challenging phases in grid computing systems. The grid environment presents several challenges for efficient scheduling of complex applications because of its heterogeneity, dynamic behavior and shared resources. Scheduling of independent tasks in grid computing is dealt with by a number of heuristic algorithms. This study proposes a new heuristic algorithm for mapping independent tasks in a grid environment so that they are assigned optimally among the available machines in a grid computing system. Due to the multi-objective nature of the grid scheduling problem, several performance measures and optimization criteria can be used to determine the quality of a given schedule; the metrics used here are makespan and resource utilization. The algorithm provides effective resource utilization by reducing machine idle time and minimizing makespan. It also balances load among the grid resources and produces high resource utilization with low computational complexity. The proposed algorithm is compared with other popular heuristics on these performance measures.
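A min-min-style sketch of mapping independent tasks to machines, showing the makespan and utilization bookkeeping such heuristics share, is given below; the task lengths and machine speeds are invented, and this is not the paper's proposed heuristic.

```python
# Min-min-style mapping of independent tasks to heterogeneous machines.
# Illustrative only: task lengths and machine speeds are made up.
task_lengths = [40, 10, 25, 60, 15]        # instructions (arbitrary units)
speeds = [1.0, 2.0, 0.5]                   # machine speeds
ready = [0.0] * len(speeds)                # machine ready times

for t in sorted(task_lengths):             # shortest tasks first
    finish = [ready[m] + t / speeds[m] for m in range(len(speeds))]
    m = min(range(len(speeds)), key=lambda i: finish[i])
    ready[m] = finish[m]                   # assign task to earliest finisher

makespan = max(ready)
utilization = sum(ready) / (len(speeds) * makespan)
print(f"makespan={makespan:.1f}, utilization={utilization:.2f}")
```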

Journal ArticleDOI
TL;DR: An experimental comparative study is carried out among three task scheduling algorithms in cloud computing, namely, random resource selection, round robin and green scheduler to conclude which algorithm is the best for scheduling in terms of energy and performance of VMs.
Abstract: Cloud computing is an interesting and beneficial area in modern distributed computing. It enables millions of users to use the offered services through their own devices or terminals. Cloud computing offers an environment with low cost, ease of use and low power consumption by utilizing server virtualization in its offered services (e.g., Infrastructure as a Service). The pool of Virtual Machines (VMs) in a cloud computing Data Center (DC) needs to be managed through an efficient task scheduling algorithm to maintain quality of service and resource utilization and thus ensure a positive impact on energy consumption in the cloud computing environment. In this study, an experimental comparative study is carried out among three task scheduling algorithms in cloud computing, namely random resource selection, round robin and a green scheduler. Based on the analysis of the simulation results, we can conclude which algorithm is best for scheduling in terms of energy and VM performance. The evaluation of these algorithms is based on three metrics: total power consumption, DC load and VM load. A number of experiments with various aims were completed in this empirical comparative study. The results showed that no algorithm is superior to the others; each has its own pros and cons. Based on the simulations performed, the green scheduler gives the best performance with respect to energy consumption. On the other hand, the random scheduler showed the best performance with respect to both VM and DC load. The round robin scheduler gives better VM and DC load than the green scheduler but consumes more energy than both the random and green schedulers. However, since the RR scheduler distributes the tasks fairly, the network traffic is balanced and neither the server nor the network nodes will get overloaded or congested.

Journal ArticleDOI
TL;DR: Research is conducted on current issues of training and education in the field of national information security in the Russian Federation and abroad in the context of globalization, and the opinions of scientists and specialists are analyzed.
Abstract: In the present article, the author conducts research on current issues of training and education in the field of national information security in the Russian Federation and abroad in the context of globalization. The author analyzes the current legal framework and the opinions of scientists and specialists, and marks out the place and role of the new scientific and educational specialty, Information Law, giving an opinion on the relevance of its emergence. The author notes that each year, with the introduction of new technologies and increased levels of informatization and computerization, there are more issues of state (national) and collective information security. Solving these issues is impossible without ensuring an adequate level of training of new professionals, as well as timely retraining of already working employees. This requires further comprehensive work on the organization and implementation of specialized educational programs of the new generation. During the research, the author analyzes the opinions of scientists from different countries, representing different scientific schools.

Journal ArticleDOI
TL;DR: The central goal in this investigation is to fully demonstrate how to construct a feasible radio system through software as close to the state of the art as possible.
Abstract: Software Defined Radio (SDR) technology enables the deployment of wireless devices with support for multiple modulation formats, and it has gained importance due to the current proliferation of wireless standards. In order to activate these functionalities it is necessary to implement SDR inside reconfigurable hardware such as a Field Programmable Gate Array (FPGA). In this study, design procedure developments are presented, resulting in a Quadrature Phase Shift Keying (QPSK) modulator/demodulator based on a hardware architecture of the Xilinx Zynq FPGA family. In the modem's conception, the Xilinx Vivado tool was employed, combined with Matlab and Simulink and development with the ISE/EDK tools. As the reconfigurable hardware platform, the ZedBoard was utilized along with the Analog Devices FMCCOMMS1 radio module. In addition, it is noteworthy that the entire software-hardware solution was initially carried out through a simulation that achieved the system scheme, including the hardware platform, before any hard implementation of the QPSK modem itself. The central goal of this investigation is to fully demonstrate how to construct a feasible radio system through software, as close to the state of the art as possible.
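A floating-point sketch of QPSK modulation and hard-decision demodulation in NumPy is given below; it illustrates the signal processing the modem performs but is not the Zynq/Vivado hardware design itself, and the Gray mapping and noise level are assumptions.

```python
# Float model of QPSK mod/demod: bit pairs -> unit-energy symbols, AWGN,
# then sign-based hard decisions. Not the paper's FPGA implementation.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=128)
pairs = bits.reshape(-1, 2)
# bit 0 -> +1, bit 1 -> -1 on each of the I and Q axes
symbols = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)

noisy = symbols + 0.1 * (rng.normal(size=symbols.shape)
                         + 1j * rng.normal(size=symbols.shape))

# Hard-decision demodulation: the sign of I and Q recovers each bit pair
rx = np.column_stack(((noisy.real < 0).astype(int),
                      (noisy.imag < 0).astype(int))).ravel()
print("bit errors:", int(np.sum(rx != bits)))
```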

Journal ArticleDOI
TL;DR: This study aims to integrate correlation clustering and agglomerative hierarchical clustering toward improving the effectiveness of holistic schema matching; the proposed integrated method avoids random initial solutions and a predefined number of centroids.
Abstract: Holistic schema matching is the process of taking a number of schemas as input and outputting the correspondences among them. Processing a large number of schemas may consume more time and yield poor-quality matches. Therefore, several clustering approaches have been proposed in order to reduce the search space by partitioning the data into smaller portions, which can facilitate the matching process. However, there is still a need to improve the partitioning mechanism by avoiding the random initial solutions (centroids) resulting from the clustering process; such random solutions have a significant impact on the matching results. This study aims to integrate correlation clustering and agglomerative hierarchical clustering toward improving the effectiveness of holistic schema matching. The proposed integrated method avoids random initial solutions and a predefined number of centroids. Several preprocessing steps were performed using auxiliary information (a domain dictionary). The experiments were carried out on the Airfare, Auto and Book datasets from the UIUC Web Integration Repository. The proposed method was compared with the K-means and K-medoids clustering methods. As a result, the proposed method outperformed K-means and K-medoids, achieving accuracies of 0.9, 0.93 and 0.9 for Airfare, Auto and Book respectively.
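The clustering step can be sketched by grouping attribute names from several schemas via string-similarity-based agglomerative hierarchical clustering; the attribute names and distance threshold below are made up, and the correlation-clustering stage is omitted.

```python
# Agglomerative hierarchical clustering of schema attribute names by
# string similarity. A sketch of the idea, not the paper's full method.
from difflib import SequenceMatcher
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
import numpy as np

attrs = ["departure", "depart_date", "return", "return_date",
         "from_city", "to_city", "destination"]
n = len(attrs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        sim = SequenceMatcher(None, attrs[i], attrs[j]).ratio()
        D[i, j] = D[j, i] = 1.0 - sim          # distance = 1 - similarity

Z = linkage(squareform(D), method="average")    # average-linkage dendrogram
labels = fcluster(Z, t=0.5, criterion="distance")
for c in sorted(set(labels)):
    print(c, [a for a, l in zip(attrs, labels) if l == c])
```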

Journal ArticleDOI
TL;DR: This study shows that secret information can be shared or passed from a sender to a receiver even if not encoded in a secret message, as each piece of secret information has a distinct public encoding.
Abstract: This study shows that secret information can be shared or passed from a sender to a receiver even if it is not encoded in a secret message. In the protocol designed in this study, no part of the original secret information ever travels via communication channels between the source and the destination, and no encoding/decoding key is ever used. The two communicating partners, Alice and Bob, are endowed with coherent qubits that can be read and set and that keep their quantum values over time. Additionally, there exists a central authority that is capable of identifying Alice and Bob and sharing with each of them one half of entangled qubit pairs. The central authority also performs entanglement swapping. Our protocol relies on the assumption that public information can be protected, an assumption present in all cryptographic protocols. Moreover, the classical communication channel need not be authenticated. As each piece of secret information has a distinct public encoding, the protocol is equivalent to a one-time pad protocol.

Journal ArticleDOI
TL;DR: Scorpius is a new simulation tool able to help testing network management mechanisms based on IP Flows that is capable of simulating different kinds of anomalies, such as Denial of Service (DoS), Distributed Denial Of Service (DDoS), Flash Crowd and Port Scan, directly into the flow export files.
Abstract: Due to the increasing amount of data traversing computer networks every day, efficient management of this information is required to ensure the quality of the services they provide. The development of new network management tools and mechanisms is a widely studied area due to its importance, not only for current technology but also for next-generation network standards and equipment. Much research has been directed to the use of IP Flows in order to increase the efficiency of these management tools. Although there are several proposed approaches in this area, most of them lack suitable test scenarios to validate their performance results. In this study, we present Scorpius, a new simulation tool able to help test network management mechanisms based on IP Flows. Scorpius is capable of simulating different kinds of anomalies, such as Denial of Service (DoS), Distributed Denial of Service (DDoS), Flash Crowd and Port Scan, directly in the flow export files. This characteristic unites the advantages of tests in real network environments without the drawbacks of real anomalies and attacks, even controlled ones. This approach makes the performance analysis of anomaly detection approaches easier, without interfering with or hampering the operation of the analyzed network. In order to validate the efficiency of the presented tool, we use real data collected from a large-scale network environment.
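The core idea, injecting synthetic anomaly records into exported flow data, can be sketched as below; the flow record layout and values are invented for illustration and do not reflect Scorpius's actual file format.

```python
# Sketch of DoS-flow injection into an exported flow list, so detectors
# can be tested offline. Record fields and values are illustrative only.
import random

def inject_dos(flows, victim="10.0.0.5", n=500, start=1000.0):
    for k in range(n):
        flows.append({
            "src": f"172.16.{random.randint(0, 255)}.{random.randint(1, 254)}",
            "dst": victim, "dport": 80, "packets": 1, "bytes": 40,
            "time": start + k * 0.01,     # high-rate burst toward one host
        })
    return flows

flows = [{"src": "192.168.1.2", "dst": "10.0.0.9", "dport": 443,
          "packets": 12, "bytes": 9000, "time": 999.0}]
print(len(inject_dos(flows)))             # 501 records after injection
```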

Journal ArticleDOI
TL;DR: The aim of this study is to present a review of the most important automatic methods for generation of membership functions, both type-1 and interval type-2, highlighting the principal characteristics of each approach.
Abstract: Generation of membership functions is an important step in construction of fuzzy systems. Since membership functions reflect what is known about the variables involved in a problem, when they are correctly modeled the system will behave in the manner that is expected in the context of the problem being addressed. Since their creation, type-1 membership functions have been used in domains characterized by uncertainty. Nevertheless, use of type-2 membership functions has been expanding over recent years because they are considered more appropriate for this application. Both types of membership function can be generated with the aid of automatic methods that implement generation of membership functions from data. These methods are convenient for situations in which it is not possible to obtain all the information needed from an expert or when the problem in question is complex. The aim of this study is to present a review of the most important automatic methods for generation of membership functions, both type-1 and interval type-2, highlighting the principal characteristics of each approach.
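One of the simplest data-driven schemes that the surveyed methods generalize can be sketched as fitting a triangular type-1 membership function to a sample's minimum, mean and maximum:

```python
# Minimal data-driven membership function generation: a triangular type-1
# MF anchored at the sample min, mean and max. A baseline sketch only.
import numpy as np

def triangular_mf(a, b, c):
    def mu(x):
        x = np.asarray(x, dtype=float)
        left = np.clip((x - a) / (b - a), 0, 1)    # rising edge
        right = np.clip((c - x) / (c - b), 0, 1)   # falling edge
        return np.minimum(left, right)
    return mu

sample = np.random.default_rng(0).normal(loc=5.0, scale=1.0, size=200)
mu = triangular_mf(sample.min(), sample.mean(), sample.max())
print(mu([3.0, 5.0, 7.0]))   # membership grades in [0, 1]
```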

Journal ArticleDOI
TL;DR: Comparisons made between AC, AAC and other approaches drawn from the scientific literature indicate that the AAC algorithm is able to produce good-quality solutions comparable to other approaches in the literature.
Abstract: Optimization methods are commonly designed for solving optimization problems. Local search algorithms are optimization methods that are good candidates for exploiting the search space; however, most of them need parameter tuning and are incapable of escaping from local optima. This work proposes a non-parametric Acceptance Criterion (AC) that does not rely on user-defined parameters, which motivates the proposal of an Adaptive Acceptance Criterion (AAC). AC accepts a slightly worse solution by comparing the values of the candidate and best-found solutions to a stored value. The value is stored as the lowest difference between the candidate and best-found solutions observed when a new best solution is found. AAC adaptively escapes from local optima by employing a diversification idea similar to that of the previously proposed ARDA algorithm: an estimated value is added to the threshold (when the search is idle) to increase search exploration. The estimated value is generated based on the frequency of the solution-quality differences, which are stored in an array, and the progress of the search diversity is governed by the stored value. Six medical benchmark datasets for the clustering problem (available in the UCI Machine Learning Repository) and eleven benchmark datasets for university course timetabling problems (the Socha benchmark datasets) are used as test domains. In order to evaluate the effectiveness of the proposed AAC, comparisons were made between AC, AAC and other approaches drawn from the scientific literature. The results indicate that the AAC algorithm is able to produce good-quality solutions comparable to other approaches in the literature.
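A toy sketch of a threshold-style acceptance criterion inside a local search loop is given below; it only hints at AC/AAC (the stored margin here is the smallest improvement seen when a new best is found), and the adaptive estimated-value mechanism of the full algorithm is omitted.

```python
# Toy threshold acceptance in local search: slightly worse candidates are
# accepted within a stored margin. A simplification of AC; AAC's adaptive
# estimated-value diversification is omitted.
import random

def local_search(cost, neighbor, x0, iters=2000):
    best = cur = x0
    margin = None                          # stored quality-difference value
    for _ in range(iters):
        cand = neighbor(cur)
        if cost(cand) < cost(best):
            delta = cost(best) - cost(cand)
            margin = delta if margin is None else min(margin, delta)
            best = cur = cand
        elif margin is not None and cost(cand) - cost(best) <= margin:
            cur = cand                     # accept a slightly worse solution
    return best

cost = lambda x: (x - 3.0) ** 2            # toy objective, optimum at x = 3
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(local_search(cost, neighbor, x0=10.0))
```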