
Showing papers in "Journal of Computer Science in 2017"


Journal ArticleDOI
TL;DR: This paper reviews user expectations of Augmented Reality (AR) and the acceptance of AR for technology-enhanced teaching and learning.
Abstract: This paper presents a review of user expectations of Augmented Reality (AR) and the acceptance of AR for technology-enhanced teaching and learning. Augmented Reality is a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view. The technology has been applied in marketing, the military, entertainment and many other sectors. Studies have found that AR can enhance teaching and learning; however, more research is still needed on the acceptance of AR as a learning tool and on what users in education expect from the technology. An understanding of user expectations is one of the key foundations for establishing better-designed AR systems and applications and, in turn, wider acceptance of the technology. To this end, the paper reviews previous research on user expectations of AR in education and its acceptance.

41 citations


Journal ArticleDOI
TL;DR: The proposed model works as an interface between hearing-impaired persons and hearing persons who are not familiar with Arabic sign language, bridging the communication gap between them.
Abstract: Across the world, several million people use sign language as their main way of communicating with society, and they face daily obstacles with their families, teachers, neighbours and employers. According to the most recent statistics of the World Health Organization, 360 million people in the world have disabling hearing loss (5.3% of the world's population), around 13 million of them in the Middle East. Hence, the development of automated systems capable of translating sign languages into words and sentences has become a necessity. We propose a model that recognizes both static gestures, such as numbers and letters, and dynamic gestures, which involve movement and motion in performing the signs. Additionally, we propose a segmentation method that segments a sequence of continuous signs in real time by tracking palm velocity; this is useful for translating not only pre-segmented signs but also continuous sentences. We use the Leap Motion controller, an affordable and compact device that accurately detects and tracks the motion and position of the hands and fingers. The proposed model applies several machine learning algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Artificial Neural Network (ANN) and Dynamic Time Warping (DTW), on two different feature sets. This research will increase the chance for Arabic hearing-impaired and deaf people to communicate easily using Arabic Sign Language (ArSL). The proposed model works as an interface between hearing-impaired persons and hearing persons unfamiliar with Arabic sign language, bridging the gap between them, and it is also socially valuable. The model is applied to Arabic signs with 38 static gestures (28 letters, the numbers 1 to 10, and 16 static words) and 20 dynamic gestures. A feature selection process yields two different feature sets. For static gestures, the KNN model dominates the other models on both the palm feature set and the bone feature set, with accuracies of 99% and 98%, respectively. For dynamic gestures, the DTW model dominates the other models on both the palm feature set and the bone feature set, with accuracies of 97.4% and 96.4%, respectively.

29 citations
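
For readers unfamiliar with DTW, the sketch below shows the classic dynamic-programming distance that a dynamic-gesture classifier of this kind relies on. It is a minimal illustration, not the authors' implementation; the Leap Motion feature vectors and gesture templates are hypothetical stand-ins.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) dynamic time warping distance between two gesture
    sequences, where each frame is a feature vector (e.g., palm position
    and velocity readings from a Leap Motion controller)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def classify(sample, templates):
    """Nearest-template classification of a dynamic gesture; `templates`
    maps a sign label to a recorded feature sequence (hypothetical data)."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))
```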


Journal ArticleDOI
TL;DR: This study investigated the effect of information quality dimensions, namely accuracy, completeness, timeliness and relevancy, on customer satisfaction with E-banking services in Palestine's banking sector, and revealed a positive effect on customer satisfaction.
Abstract: Poor information quality can have a significant negative effect on the success of an organization. This study investigated the effect of information quality dimensions, namely accuracy, completeness, timeliness and relevancy, on customer satisfaction with E-banking services in Palestine's banking sector. It also studied the relationship between customer satisfaction and the intent to use E-banking services. The study used a quantitative method for data collection via a questionnaire survey. The results revealed that the accuracy, completeness, timeliness and relevancy of information had a positive effect on customer satisfaction with E-banking services and that, in turn, customer satisfaction affects the intention to use such services. The study also presents its limitations, conclusions and recommendations for future research.

19 citations


Journal ArticleDOI
TL;DR: The experimental results show that the students who used the proposed advising approach achieved the greatest score improvement from pre-test to post-test.
Abstract: Intended Learning Outcomes (ILOs) are a major new approach to enhancing students' performance at any higher education institution. Moreover, they support decision making based on the extracted information and work as a framework for continuous improvement and evaluation in higher education environments. This study aims to achieve several benefits, including minimizing educational waste by helping students select appropriate decisions, thereby increasing the value of their education. The study presents a Proposed Advising Approach to support students during the course registration process. It was applied in the Business Information Systems (BIS) program of the Faculty of Commerce and Business Administration at Helwan University, Egypt. The experimental results show that the students who used the proposed advising approach achieved the greatest score improvement from pre-test to post-test.

16 citations


Journal ArticleDOI
TL;DR: This paper surveys and analyzes previous work on processing skyline queries over incomplete databases, examining each approach and highlighting its strengths and weaknesses.
Abstract: In many contemporary database applications, such as multi-criteria decision-making and real-time decision-support applications, data mining, e-commerce and recommendation systems, users need query operators that process the data to find the results that best fit their preferences. Skyline queries are one of the most prominent such operators: they return only those data items whose dimension vectors are not dominated by any other data item in the database. Because of their usefulness and ubiquity, skyline queries have been studied for different types of databases: complete, incomplete and uncertain. This paper surveys and analyzes previous work on processing skyline queries over incomplete databases. The discussion examines these approaches, highlighting the strengths and weaknesses of each. We also discuss in detail the current challenges in processing skyline queries over incomplete databases and investigate the impact of incomplete data on the skyline operation. A summary of the most well-known works is provided to identify their limitations, and recommendations and future work directions are drawn to help researchers investigate the unsolved problems related to skyline queries in database systems.

15 citations
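
As background for the dominance relation the survey builds on, here is a minimal baseline skyline computation over complete data; the approaches surveyed in the paper extend this dominance test to tuples with missing values. The hotel example is illustrative only.

```python
def dominates(p, q):
    """p dominates q if p is at least as good in every dimension and
    strictly better in at least one (here, smaller is better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) skyline: keep points not dominated by any other."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example: hotels as (price, distance), both to be minimized.
hotels = [(50, 8), (45, 10), (60, 2), (45, 9), (70, 1)]
print(skyline(hotels))  # [(50, 8), (60, 2), (45, 9), (70, 1)]
```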


Journal ArticleDOI
TL;DR: This paper presents an Arabic text steganographic algorithm based on Unicode that imposes a minimal change on connected letters without any change in size and shape.
Abstract: Steganography is a technique for hiding data in media in a way that makes its existence hard to detect. Text files are a preferred cover format in steganography because of their small storage size. This paper presents an Arabic text steganographic algorithm based on Unicode. The algorithm imposes a minimal change on connected letters without any change in size or shape. Experiments showed a high capacity ratio of about 180 bits/KB and a low modification rate of fewer than 90 letters/KB.

14 citations
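
The abstract does not spell out the paper's Unicode encoding, so the following is only a generic illustration of Unicode-based text hiding with zero-width characters. Note that a real Arabic scheme, like the paper's, must choose insertion points that leave letter joining, and hence visible shape, unchanged; this toy sketch ignores that constraint.

```python
ZWNJ = "\u200C"  # zero-width non-joiner (renders as nothing)
ZWJ  = "\u200D"  # zero-width joiner (renders as nothing)

def embed(cover: str, bits: str) -> str:
    """Hide one bit after each cover character: ZWJ encodes 1, ZWNJ
    encodes 0. Purely illustrative; a practical Arabic algorithm must
    avoid altering the joining behaviour of connected letters."""
    out, it = [], iter(bits)
    for ch in cover:
        out.append(ch)
        b = next(it, None)
        if b is not None:
            out.append(ZWJ if b == "1" else ZWNJ)
    return "".join(out)

def extract(stego: str) -> str:
    return "".join("1" if ch == ZWJ else "0"
                   for ch in stego if ch in (ZWJ, ZWNJ))

msg = embed("سلام", "101")
assert extract(msg) == "101"
```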


Journal ArticleDOI
TL;DR: An online learning management system prototype, suitable for educating both the public and medical personnel about early detection of cancer, was developed based on a model from a previous study.
Abstract: Cervical cancer is the second leading type of cancer causing death among women in Indonesia. The mortality rate caused by cervical cancer could be reduced through an early detection program, which would require training healthcare workers to educate women in Indonesia. The aim of this study was to develop an online learning management system prototype suitable for educating both the public and medical personnel about early detection of cancer, based on a model from a previous study. The research method used the Analysis, Design, Development, Implementation, Evaluation (ADDIE) framework as a System Development Life Cycle (SDLC) approach. The study produced an online Learning Management System (LMS) to assist efforts toward early detection of cervical cancer.

14 citations


Journal ArticleDOI
TL;DR: This study covers the common WSN aspects and performance evaluation criteria, and surveys previous studies that have applied ACO approaches in WSN.
Abstract: Wireless Sensor Networks (WSN) have been widely implemented in sectors such as the military, habitat monitoring, business, industry, health and the environment. A WSN is a distributed system in which elements such as routing, load balancing, energy efficiency, node localization, time synchronization, data aggregation and security need to be addressed to improve its efficiency, robustness, extendibility, applicability and reliability. Despite the multiple approaches proposed to improve all these aspects, there is still room for improvement in WSN routing and energy efficiency. Ant Colony Optimization (ACO) is one approach used to extend WSN capabilities, because its heuristic nature is well suited to distributed and dynamic environments. This study covers the common WSN aspects and performance evaluation criteria, and surveys previous studies that have applied ACO approaches in WSN.

13 citations
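
For context, the ACO-based WSN protocols covered by such a survey generally build on the standard ant transition rule and pheromone update from the ACO literature, shown here in their textbook form rather than as any one surveyed protocol defines them:

```latex
p_{ij}^{k} \;=\; \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}
                      {\sum_{l \in \mathcal{N}_i^{k}} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}},
\qquad
\tau_{ij} \;\leftarrow\; (1-\rho)\,\tau_{ij} + \Delta\tau_{ij}
```

Here τ_ij is the pheromone on link (i, j), η_ij a heuristic desirability (in WSN routing, often a function of distance or residual node energy), α and β weighting exponents, and ρ the evaporation rate.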


Journal ArticleDOI
TL;DR: In this paper, the parameters of the Gaussian/Radial Basis Function (RBF) kernel are optimized using a multi-class non-linear SVM and applied to training and test datasets of traditional Indonesian batik images.
Abstract: Image retrieval using Support Vector Machine (SVM) classification depends heavily on the kernel function and its parameters. The kernel function, used as a dot-product substitution from the old feature dimension to a new one, depends on the condition of the image dataset. In this research, the parameters of the Gaussian/Radial Basis Function (RBF) kernel are optimized using a multi-class non-linear SVM and applied to training and test datasets of traditional Indonesian batik images. The batik image dataset is limited to four geometric motif textures: ceplok/ceplokan, kawung, nitik and parang/lerang. A level-3 Discrete Wavelet Transform with Daubechies-2 wavelets is used to produce the feature dataset for the four classes of geometric motif textures. The batik images are used as training and test datasets in the SVM-RBF kernel parameter optimization, which maximizes the accuracy of non-linear multi-class classification. Cross-validation and grid-search methods are used to analyze and evaluate the SVM-RBF kernel parameter optimization, and a confusion matrix is used to measure accuracy in every evaluation, conducted for every combination of the cost parameter C and gamma (γ) as SVM-RBF kernel parameters. Maximum accuracy is achieved at C = 2^7 and γ = 2^-15, determined over 10 evaluations with a different test dataset for each evaluation. The maximum accuracy ranges from 0.77 to 0.86.

12 citations
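
A grid search over powers of two like the one described above can be reproduced with a sketch such as the following. The wavelet features and labels are synthetic placeholders, and scikit-learn's SVC stands in for whatever SVM implementation the authors used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# X: wavelet feature vectors (e.g., level-3 Daubechies-2 statistics);
# y: motif labels {ceplok, kawung, nitik, parang}. Synthetic stand-ins.
rng = np.random.default_rng(0)
X_train = rng.random((200, 12))
y_train = rng.integers(0, 4, 200)

# Powers-of-two grid, consistent with an optimum like C = 2^7, gamma = 2^-15.
param_grid = {"C": [2.0**k for k in range(-5, 16, 2)],
              "gamma": [2.0**k for k in range(-15, 4, 2)]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```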


Journal ArticleDOI
TL;DR: This research proposes an alternative solution for determining the cropping pattern of paddy and CGPRT crops from an area's rainfall pattern using a decision tree approach, and shows that the J48 algorithm has a higher classification accuracy than RandomTree and REPTree.
Abstract: Nowadays, the agricultural sector is experiencing problems related to climate change that result in changing cropping-season patterns, especially for paddy and coarse grains, pulses, roots and tubers (CGPRT/Palawija) crops. The cropping patterns of paddy and CGPRT crops depend highly on the availability of rainfall throughout the year, so the changing and shifting rainy season changes the cropping seasons. It is therefore important to determine the cropping patterns of paddy and CGPRT crops based on the monthly rainfall pattern in each area. Oldeman's method, which is usually used to classify cropping patterns of paddy and CGPRT crops, is considered less suitable for this because it requires rainfall data for the whole year. This research proposes an alternative solution for determining the cropping pattern of paddy and CGPRT crops from an area's rainfall pattern using a decision tree approach. Three algorithms, namely J48, RandomTree and REPTree, were tested to determine the best algorithm for classifying the cropping pattern of an area. The results showed that the J48 algorithm has a higher classification accuracy than RandomTree and REPTree, at 48%, 42.67% and 38.67%, respectively. Meanwhile, testing the data against the decision tree rules indicates that most areas in DKI Jakarta should apply the cropping pattern of 1 paddy cropping and 1 CGPRT cropping (1 PS + 1 PL), while in Banten three cropping patterns can be applied: 1 paddy cropping and 1 CGPRT cropping (1 PS + 1 PL); 3 short-period paddy croppings, or 2 paddy croppings and 1 CGPRT cropping (3 short-period PS, or 2 PS + 1 PL); and 2 paddy croppings and 1 CGPRT cropping (2 PS + 1 PL).

12 citations
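
As a rough illustration of this classification setup, the sketch below trains a decision tree on monthly rainfall features. Note that scikit-learn's CART is only an analogue of Weka's J48 (a C4.5 implementation), and the rainfall values and labels are invented.

```python
from sklearn.tree import DecisionTreeClassifier

# Monthly rainfall (mm), Jan..Dec, labeled with a cropping pattern such as
# "1PS+1PL" (1 paddy cropping + 1 CGPRT cropping). Toy data only.
X = [[300, 280, 250, 180, 120, 60, 40, 30, 50, 150, 220, 290],
     [350, 330, 300, 250, 200, 150, 120, 100, 140, 220, 280, 340]]
y = ["1PS+1PL", "2PS+1PL"]

# entropy criterion mirrors C4.5's information-gain splitting
clf = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(clf.predict([[310, 290, 260, 190, 130, 70, 45, 35, 55, 160, 230, 300]]))
```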


Journal ArticleDOI
TL;DR: This project develops a framework based on the concepts of well-established EA frameworks such as TOGAF and Zachman and their compositional layers, combined with a data flow analysis that traces potential information flow between high- and low-security enterprise components.
Abstract: Many existing studies have shown that most system attacks are caused not by coding vulnerabilities in individual systems, issues in the run-time environment, or the technology in place, but by issues in how systems within organizations are structured. Therefore, it is necessary to examine security with regard to all components that influence the organization's systems, including data, processes and even employees. The most promising approach to achieving this goal is Enterprise Architecture (EA). The main goal of this project is to develop a framework based on the concepts of well-established EA frameworks, such as TOGAF and Zachman, and their compositional layers (e.g., application, information and process). This framework will be combined with a data flow analysis based on principles that trace the potential information flow between high- and low-security enterprise components. This paper therefore studies various enterprise architecture frameworks and shows how to develop an enterprise architecture framework that considers the organization's information security from the perspective of information flow. The framework will have various layers, each with a set of security metrics that quantify the organization's relative security based on the specifications of that layer. The defined framework will be capable of defining EA security-related principles and metrics, which will eventually be used to define how to develop secure enterprise systems, based on the enterprise architecture, with regard to security-critical information flow within any given organization. It will also be capable of providing guidance to information security architects by recognizing the parts of the organization that are less secure than others.

Journal ArticleDOI
TL;DR: Results show that MPI outperforms Apache Spark in parallel and distributed cluster computing environments, and hence the higher performance of MPI can be exploited in big data applications to improve speedups.
Abstract: The advent of the various processing frameworks found under big data technologies is due to tremendous dataset sizes and complexity. Execution speed is much higher with high-performance computing frameworks than with big data processing frameworks, but because the majority of big data jobs are data intensive rather than computation intensive, high-performance computing paradigms have not typically been used in big data processing. This paper reviews two distributed and parallel computing frameworks: Apache Spark and MPI. Sentiment analysis on Twitter data is chosen as a test-case application for benchmarking and is implemented in Scala for Spark and in C++ for MPI. Experiments were conducted on Google Cloud virtual machines with three dataset sizes, 100 GB, 500 GB and 1 TB, to compare execution times. The results show that MPI outperforms Apache Spark in parallel and distributed cluster computing environments, and hence the higher performance of MPI can be exploited in big data applications to improve speedups.
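
A minimal Python/mpi4py analogue of the benchmark's MPI side might look like the sketch below; the paper's actual code is C++ for MPI and Scala for Spark, and the lexicon and tweets here are toy placeholders.

```python
# Run with: mpiexec -n 4 python sentiment_mpi.py
# Scatter tweet lines across ranks, score them locally against a tiny
# sentiment lexicon, then reduce the partial scores on rank 0.
from mpi4py import MPI

POSITIVE = {"good", "great", "love"}   # toy lexicon (hypothetical)
NEGATIVE = {"bad", "awful", "hate"}

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    tweets = ["i love this", "awful service", "great phone", "so bad"]
    chunks = [tweets[i::size] for i in range(size)]  # one slice per rank
else:
    chunks = None

local = comm.scatter(chunks, root=0)
score = sum((w in POSITIVE) - (w in NEGATIVE)
            for t in local for w in t.split())
total = comm.reduce(score, op=MPI.SUM, root=0)
if rank == 0:
    print("net sentiment:", total)
```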

Journal ArticleDOI
TL;DR: The empirical results indicate that the proposed algorithm allows a steady decrease of the fit error in all cases; the most important distinguishing feature is that the error reaches values close to zero, which is not true for the other algorithms.
Abstract: Artificial Neural Networks (ANN) consist of several components, such as the architecture and the learning algorithm. These components have a significant effect on the performance of the ANN, but finding good parameters is a difficult task. An important requirement for this task is to ensure that the error decreases when inputs and/or hidden neurons are added. In practice, this requirement is assumed to always hold, but it is usually false. In this paper, we propose a new algorithm that ensures the error decreases when input variables and/or hidden neurons are added to the neural network. We compared the behavior of two traditional algorithms and the proposed algorithm in forecasting the Airline time series. The empirical results indicate that the proposed algorithm allows a steady decrease of the fit error in all cases; the most important distinguishing feature is that the error reaches values close to zero, which is not true for the other algorithms. It can therefore be used as a suitable alternative, especially when a good fit is needed.

Journal ArticleDOI
TL;DR: Two techniques were applied to the dataset: classification, used to build a prediction model, and association rules, used to find interesting hidden information in the students' records that can help students determine their direction and improve their studies where necessary.
Abstract: Educational Data Mining is a new discipline focused on studying methods and creating models that utilize educational data to better understand students and their performance. We applied two different techniques to our dataset: classification, used to build a prediction model, and association rules, used to find interesting hidden information in the students' records. This study will help students determine their direction and improve where necessary to cope with their studies. It also provides a tool for predicting and identifying students who need attention and corrective action, detecting any deviation before it happens and leads to decreased performance, thereby reducing the failure rate.
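
The association-rule side of such a study boils down to support and confidence over student records. A hand-rolled sketch on invented data:

```python
# Toy student records: each record is the set of attributes observed.
records = [{"math:A", "attends", "pass"},
           {"math:C", "absent",  "fail"},
           {"math:A", "attends", "pass"},
           {"math:B", "attends", "pass"}]

def support(itemset):
    """Fraction of records containing every item in `itemset`."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(lhs, rhs):
    """Confidence of the rule lhs -> rhs: support(lhs | rhs) / support(lhs)."""
    return support(lhs | rhs) / support(lhs)

print(support({"attends", "pass"}))       # 0.75
print(confidence({"attends"}, {"pass"}))  # 1.0 on this toy data
```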

Journal ArticleDOI
TL;DR: A hybrid prison surveillance system using two main technologies, Wireless Sensor Networks (WSN) and Unmanned Aerial Vehicles (UAVs), that provides flexibility, scalability and hierarchical surveillance for a high-security prison or any other military site.
Abstract: The aim of this paper is to introduce a hybrid prison surveillance system using two main technologies: Wireless Sensor Networks (WSN) and Unmanned Aerial Vehicles (UAVs). The system consists of three tiers: a Wireless Underground Sensor Network (WUSN) in tier 0, a Wireless Ground Sensor Network (WGSN) in tier 1, and a Wireless Vision Sensor Network (WVSN) in tier 2, which consists of surveillance towers and UAVs equipped with multimedia sensors. The three tiers operate independently and complement one another in functionality. This design, which utilizes the most advanced technologies, provides flexibility, scalability and hierarchical surveillance for a high-security prison or any other military site.

Journal ArticleDOI
TL;DR: The opinions of the stakeholders make it clear that HIS has brought a sea change to the service flows of the Magrabi group; the limitations of the study, along with future research prospects, are also outlined for possible improvements.
Abstract: Today's healthcare market has created its niche by adopting emerging technologies and ensuring a minimal rate of failures. Embracing an integrated Health Information System (HIS) is one such venture, which not only helps an organization excel in the market but also caters to the needs of the public. This paper is a case study of the HIS implementation at Magrabi hospitals and centres in Saudi Arabia. Though implementing an HIS is a herculean task with respect to both effort and investment, Magrabi achieved it with the utmost care, and the system has promoted service efficacy across all of Magrabi's centres and hospitals. The paper analyses the procedures and strategies followed to prepare the organization for the transformation; that is, the method of actualizing the health system implementation is also detailed. Overall, the case study sought to determine the extent of improvement in all streams of operations in the Magrabi group after automation. To gather this information, data were collected from the heads of the respective departments as well as from users at the managerial level, and tabulated against various cycles (revenue and payroll). The stakeholders' opinions make it clear that the HIS has brought a sea change to the service flows of the Magrabi group. Apart from the benefits of the HIS implementation, the limitations of the study and future research prospects along these lines are suggested for possible improvements.

Journal ArticleDOI
TL;DR: The experimental study provides incipient evidence that SMarty is more effective for resolving variabilities and configuring consistent products at the UML class level; overall, the results indicate the capability of SMarty for configuring specific products.
Abstract: Variability management is one of the most important activities during software product line development and evolution. Current literature presents several approaches to variability management, especially UML-based ones such as PLUS and SMarty; SMarty is supported by a systematic process with guidelines. The existing literature on these kinds of approaches provides little experimental evidence of their effectiveness at product configuration, which is considered fundamental for transferring the technology to industry. This paper provides experimental evidence on the product configuration capability of SMarty by comparing it to PLUS, one of the most cited product-line methods in the literature. The experimental study provides incipient evidence that SMarty is more effective for resolving variabilities and configuring consistent products at the UML class level. Overall, the results indicate the capability of SMarty for configuring specific products.

Journal ArticleDOI
TL;DR: This research was conducted at two higher education institutions and included full-time and part-time students enrolled in the courses 3D Modeling and Programming.
Abstract: The research presented in this paper represents a further step towards proving the efficiency of gamification in higher education. Our research was conducted at two higher education institutions and included full-time and part-time students enrolled in the courses 3D Modeling and Programming. Based on the research results, three hypotheses were tested, giving better insight into some psychological phenomena. The first hypothesis compared the level of knowledge in the experimental and control groups for all students who achieved a minimum score of 50% in the pre-test; our results confirmed a statistically significant difference in favor of the experimental group. The other two hypotheses extend the results further: we analyzed the scores of the 50% highest-ranked and the 50% lowest-ranked students using a t-test. For the lowest-ranked participants, the analysis of the average number of points on the post-test found no statistically significant difference. For the highest-ranked participants, on the other hand, the same analysis shows a statistically significant difference: the experimental group achieved a notably better score.
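
The group comparisons described above are standard independent-samples t-tests; a minimal sketch with hypothetical scores:

```python
from scipy import stats

# Post-test scores (hypothetical): experimental vs. control group.
experimental = [78, 85, 92, 74, 88, 81, 95, 79]
control      = [70, 75, 68, 72, 80, 66, 74, 71]

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 indicates a significant difference
```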

Journal ArticleDOI
TL;DR: This work provides an up-to-date overview of the defect types found in the context of software inspection techniques, identifying several different studies with distinct proposals concerning defect types.
Abstract: Software inspection has been used to guarantee and control the quality of products by detecting defects, which can appear throughout the entire software life cycle. The main premise, therefore, is to identify and reduce the number of defect types in software artifacts during inspections. This work provides an up-to-date overview of the defect types found in the context of software inspection techniques. A systematic mapping was carried out, from which 2096 primary studies were retrieved and 32 were finally selected. From the analysis, classification and aggregation of the retrieved studies, important distinct defect types were identified. Most studies address defect types by means of experiments and proposed techniques and approaches. Thus, as a main result, several different studies with distinct proposals concerning defect types were identified. Although researchers have conducted studies over time, a general pattern for the detection of defects could not be established. The scenario in which this study was carried out therefore gives researchers the opportunity to conduct further research on a motivating and challenging topic, and gives practitioners empirically evaluated inspection techniques and their respective defect types to adopt.

Journal ArticleDOI
TL;DR: The time and number of iterations required to evaluate an iceberg-driven query using the proposed approach are reduced by 40 to 50%, even as data size increases, proving the effectiveness and efficiency of the proposed approach.
Abstract: Iceberg-driven queries are important and common in many data mining and data warehousing applications. The main property of an iceberg-driven query is that it extracts a small set of data from a huge database, using an aggregation function followed by GROUP BY and HAVING clauses. Because of the aggregation function, executing an iceberg-driven query is tedious and complex. The main objective of this research is to improve the performance of iceberg-driven queries by reducing the time, number of iterations and I/O accesses required to execute them. Currently available iceberg-driven query processing techniques suffer from empty bitwise AND, OR and XOR operations, and because of this they require more time and I/O accesses to execute a query. To overcome these problems, this research proposes a tracking pointer and a look-ahead matching strategy for evaluating iceberg-driven queries. The tracking pointer initiates the evaluation process according to vector priority. The look-ahead matching strategy identifies probable vectors instead of generating them one by one until the end of the vector list; it decides the probability that a bitmap vector will be executed and thus identifies and avoids unnecessary operations on bitmap vectors in advance. Our experimental results show that the time and number of iterations required to evaluate an iceberg-driven query using the proposed approach are reduced by 40 to 50%, even as data size increases, proving the effectiveness and efficiency of the proposed approach.
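
The baseline the paper improves on evaluates iceberg queries by bitwise-ANDing bitmap vectors, many of which turn out empty. A toy sketch of that baseline follows; the tracking-pointer and look-ahead strategies themselves are not reproduced here.

```python
# Baseline bitmap-index evaluation of an iceberg query such as:
#   SELECT a, b, COUNT(*) FROM R GROUP BY a, b HAVING COUNT(*) >= T
# Each distinct value keeps a bitmap of the rows containing it; ANDing
# two bitmaps yields the rows of an (a, b) pair. Many ANDs come out
# empty, which is the wasted work the paper's strategies aim to avoid.
rows = [("x", 1), ("x", 1), ("y", 2), ("x", 2), ("y", 2), ("x", 1)]
T = 2  # iceberg threshold

def bitmaps(col):
    m = {}
    for i, r in enumerate(rows):
        m[r[col]] = m.get(r[col], 0) | (1 << i)  # int used as a bit vector
    return m

A, B = bitmaps(0), bitmaps(1)
for a, va in A.items():
    for b, vb in B.items():
        count = bin(va & vb).count("1")
        if count >= T:
            print(a, b, count)  # prints: x 1 3, then y 2 2
```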

Journal ArticleDOI
TL;DR: The study revealed that secondary school students were hardly using e-learning technologies compared with the higher education sector, but implied fairly good mastery of ICT and student interest in following developments in computer education, which opens a new outlook for the education sector in this area.
Abstract: The current study investigated the use of ICT within the school and higher education sectors in the eastern province of the Kingdom of Saudi Arabia. The objective was to evaluate how computer literacy and the implementation of e-learning could contribute to the learning and teaching process within the education sector in this area. A cross-sectional survey was carried out among students; a website was designed to gather the data through a questionnaire and interviews, and the data were analyzed with routine statistical tools. The results show that the students' favorite place to access a computer is their home. A high percentage of students were capable of working with Windows functions and affirmed that computer programs and models help a great deal in understanding their courses. The study revealed that secondary school students were hardly using e-learning technologies compared with the higher education sector. The research implied fairly good mastery of ICT and student interest in following developments in computer education, which opens a new outlook for the education sector in this area. However, effective measures to promote the usage of e-learning facilities should be provided and more effort should be made to improve e-learning facilities in schools.

Journal ArticleDOI
TL;DR: A new way of using neural networks is employed, aimed at obtaining the best fit for the variables used in the mathematical model, resulting in better test interpretation and more accurate prediction and classification of image features to improve future composite structure designs.
Abstract: A new approach to the use and implementation of the Optical Flow technique is presented. The technique extracts features from presented images as a function of a reference image and produces a percentage of matching between the reference and tested images. The new approach to using Optical Flow lies in replacing the motion part of the algorithm with differential, time-related changes in an infrared thermal image sequence, with frames taken by applying the Pulse Video Thermography (PVT) technique. The sequence of images, or frames, is obtained for the tested composite structures before and after impact damage. The resulting data from the tested images is used to establish a mathematical model that can predict impact energy from collected features, or predict expected features from a known impact damage level. To optimize the mathematical model, a new way of using neural networks is employed, aimed at obtaining the best fit for the variables used in the model, resulting in better test interpretation and more accurate prediction and classification of image features to improve future composite structure designs. The neural network Weight Elimination Algorithm (WEA) is used and proved effective in predicting areas of damage.
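
A dense optical-flow computation over consecutive thermal frames, the kind of step the paper adapts to PVT sequences, can be sketched with OpenCV as follows. Farneback's method and the file names are assumptions here, since the abstract does not name the flow implementation used.

```python
import cv2

# Consecutive frames of a PVT infrared sequence (hypothetical file names).
prev = cv2.imread("thermal_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("thermal_001.png", cv2.IMREAD_GRAYSCALE)

# Dense flow field: one (dx, dy) displacement vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean change magnitude:", mag.mean())  # candidate feature for a damage model
```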

Journal ArticleDOI
TL;DR: This study surveys the addition chain problem from its onset to the state-of-the-art heuristics for optimizing it, with a view to identifying the fundamental issues that, when addressed, render the heuristics the most optimal means of minimizing the two most expensive operations in various public key cryptosystems.
Abstract: Field exponentiation and scalar multiplication are the pillars of, and the most computationally expensive operations in, public key cryptosystems. Optimizing these operations is the key to the efficiency of the systems, and the optimization is analogous to solving the addition chain problem. In this study, we survey the addition chain problem from its onset to the state-of-the-art heuristics for optimizing it, with a view to identifying the fundamental issues that, when addressed, render the heuristics the most optimal means of minimizing the two operations in various public key cryptosystems. Our emphasis is specifically on the heuristics: their various constraints and implementation efficiencies. We present possible ways forward toward an optimal solution to the addition chain problem that can be efficiently applied to the optimal implementation of public key cryptosystems.
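
For concreteness, the binary (square-and-multiply) method is the addition chain obtained directly from the exponent's binary expansion; the heuristics surveyed here search for chains shorter than this baseline.

```python
def pow_binary(base, exp, mod):
    """Left-to-right square-and-multiply: the addition chain implied by
    the binary expansion of `exp`. Returns the result and the number of
    field multiplications (squarings plus multiplies) performed."""
    bits = bin(exp)[2:]
    result, ops = base % mod, 0        # the leading 1-bit costs nothing
    for bit in bits[1:]:
        result = result * result % mod; ops += 1     # doubling step (k -> 2k)
        if bit == "1":
            result = result * base % mod; ops += 1   # addition step (k -> k+1)
    return result, ops

val, ops = pow_binary(7, 23, 1000003)
print(val, ops)  # binary method: 7 operations for exponent 23
# The chain 1,2,3,5,10,20,23 reaches the same exponent in 6 additions,
# which is the kind of saving the surveyed heuristics search for.
```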

Journal ArticleDOI
TL;DR: The proposed model integrates the Multiple Linear Regression algorithm and the One Rule (OneR) classification algorithm; the results showed that the regression-classification model was significantly more successful than the standard classification algorithms.
Abstract: One of the main problems of predicting stock prices with a regression approach is overfitting the model. An overfit model becomes tailored to fit the random noise in the dataset rather than reflecting the overall population. It is therefore necessary to construct an integrated regression-classification model to approximate the true model for the entire population in the dataset. The proposed model integrates the Multiple Linear Regression algorithm and the One Rule (OneR) classification algorithm. Initially the prediction is treated with a regression approach, where the outputs are numerical values; a classification model then interprets the regression outputs and classifies the outcomes into Profit and Loss class labels. The test results were compared to those obtained with standard classification algorithms, namely OneR, Zero Rule (ZeroR), Decision Tree and REP Tree. The results showed that the regression-classification model was significantly more successful than the standard classification algorithms.
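
A minimal sketch of the regression-then-classify idea, with synthetic data and a single-threshold rule standing in for OneR (the paper's actual feature set and rule induction are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: features per day and the next day's close.
rng = np.random.default_rng(0)
X = rng.random((100, 4))                      # e.g., open/high/low/volume
y = X @ np.array([0.5, 0.3, 0.2, 0.1]) + rng.normal(0, 0.01, 100)

reg = LinearRegression().fit(X, y)
pred = reg.predict(X)

# Stand-in for OneR: a single-threshold rule over the regression output
# that maps numeric predictions to Profit / Loss class labels.
today_close = X[:, 0]                         # hypothetical proxy column
labels = np.where(pred > today_close, "Profit", "Loss")
print(labels[:5])
```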

Journal ArticleDOI
TL;DR: This paper experimentally compares two approaches, picking SMarty and PLUS as representative examples, and finds incipient evidence of the effectiveness of PLUS and SMarty based on graduate students and lecturers.
Abstract: Although variability management is one of the main activities of software product lines, the current literature provides almost no empirical evaluations of UML-based variability management approaches. This paper experimentally compares two such approaches, picking SMarty and PLUS as representative examples. The comparison takes into account their effectiveness in expressing variabilities correctly and incorrectly in UML class diagrams. We used a 2×2 factorial design for this study, and calculated and analyzed the data from participants using the t-test. The Spearman technique supported correlating the effectiveness of the approaches with the participants' prior variability knowledge. In general, PLUS was more effective than SMarty. Generalization of the results is not possible, as this is incipient evidence of the effectiveness of PLUS and SMarty based on graduate students and lecturers; however, counting on students and lecturers provides several contributions, as we discuss in this paper.

Journal ArticleDOI
TL;DR: Various techniques that can detect blackhole attacks in MANET are investigated, and the detection techniques are compared on different metrics such as average packet delivery ratio and average end-to-end delay.
Abstract: A Mobile Ad Hoc Network (MANET) is an auto-configuring network. Due to its natural characteristics, a MANET is vulnerable to many security threats, and a blackhole attack compromises the performance and reliability of the network. Since nodes are allowed to move freely within the network, it becomes very important to protect the communication among mobile nodes for the sake of security. In this paper we investigate various techniques that can detect blackhole attacks in MANET and compare the detection techniques on different metrics, such as average packet delivery ratio and average end-to-end delay.

Journal ArticleDOI
TL;DR: An Optical Character Recognition (OCR) system for “Aksara Lontara” has been constructed using a novel combination of feature extraction methods, and the recognized text is translated into Bahasa Indonesia to help non-native speakers learn the language.
Abstract: An Optical Character Recognition (OCR) system for “Aksara Lontara” is constructed in this study using a novel combination of feature extraction methods. The ancient “Lontara” script is then translated into Bahasa Indonesia to help non-native speakers learn the language. Two powerful feature extraction methods, Modified Direction Feature (MDF) and Fourier Descriptor (FD), are combined in stages to deal with the two dominant characteristics of the Lontara glyphs. Classification is conducted using a Support Vector Machine (SVM), a fast and straightforward learning method, on 23 characters in images of 150×120 pixels. In this research, 50 verbs were used for training and 30 verbs for validating the system. The results show that the system can reach 96% accuracy using this hybrid feature extraction with kernel parameters C = 3 and σ = 8.
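
The Fourier Descriptor half of such a feature pipeline can be sketched as follows; the contour source and the normalization details are assumptions, not the paper's exact recipe.

```python
import numpy as np

def fourier_descriptors(contour, k=16):
    """Contour as (x, y) points -> complex signal -> FFT. Dropping the
    DC term and normalizing by the first harmonic gives translation-
    and scale-invariant shape features suitable for an SVM."""
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F = F[1:k + 1]                              # discard the DC component
    return np.abs(F) / (np.abs(F[0]) + 1e-12)   # scale-invariant magnitudes

# The contour would normally come from, e.g., cv2.findContours on a
# binarized glyph image; a unit circle serves as a toy example here.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(fourier_descriptors(circle)[:4])
```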

Journal ArticleDOI
TL;DR: An analytical model and an algorithm are proposed to analyze and determine the characteristics of the solution spaces of systems of ODEs; the proposed model successfully determines the dynamics of solution intervals as well as structural interactions.
Abstract: Traditionally, the concepts of graph theory are applied to design stationary computer networks and to analyze the dynamics of social networks. The majority of non-stationary network models, however, are formulated using Ordinary Differential Equations (ODE) of varying orders in homogeneous or non-homogeneous forms, and the analysis of the continuous solution spaces of ODEs, and of the interplay of spaces in complex systems, is difficult to formulate. This paper proposes an analytical model and an algorithm to analyze and determine the characteristics of the solution spaces of systems of ODEs. The analytical model employs the structural elements of metric spaces. The algorithmic output and analysis illustrate that the proposed model successfully determines the dynamics of solution intervals as well as structural interactions.

Journal ArticleDOI
TL;DR: Simulation results show that the log-rule algorithm outperforms other packet scheduling algorithms in terms of delay, throughput and packet loss ratio when serving Voice-over-Internet Protocol packet data service on the LTE network.
Abstract: At present, Long-Term Evolution (LTE) is the most popular Internet access technology because of its robust wireless connectivity that can support data, voice, video and messaging traffic. This advantage has significantly increased the number of users availing themselves of these services through various devices. Such an increase in the number of users has also increased network traffic. Thus, scheduling algorithms are needed to prioritize and distribute available resources among users to improve system performance. Researchers have proposed several algorithms that focus on meeting the Quality of Service requirement for different applications. The current study evaluates the performance of four scheduling algorithms for LTE downlink transmission, namely, proportional fair, exponential PF, exponential rule and log-rule algorithms, in a certain scenario. The performance of these algorithms was evaluated using an LTE-Sim simulator. Simulation results show that the log-rule algorithm outperforms other packet scheduling algorithms in terms of delay, throughput and packet loss ratio when serving Voice-over-Internet Protocol packet data service on the LTE network.
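
For reference, the log-rule scheduler metric is commonly given in the scheduling literature in a form like the following; this is the standard formulation, not necessarily the exact parameterization used by LTE-Sim:

```latex
i^{*}(t) \;=\; \arg\max_{i}\; b_{i}\,\log\!\bigl(c + a_{i} W_{i}(t)\bigr)\,\mu_{i}(t)
```

Here W_i(t) is user i's head-of-line packet delay, μ_i(t) the rate achievable on the current resource, and a_i, b_i, c tunable constants; the logarithmic growth in delay keeps delays bounded without starving users with good channel conditions, which matches the delay and packet-loss advantage reported above.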

Journal ArticleDOI
TL;DR: This study compares the performance of two routing protocols (AODV and OLSR) under three propagation models (Two-Ray Ground, Rice and Nakagami) and provides a qualitative assessment of the protocols' applicability in different vehicular scenarios.
Abstract: Vehicular Ad-hoc Networks (VANET) are one of the emerging and active research fields for automotive companies and Intelligent Transportation Systems (ITS) designers. In the Smart City, the presence of such networks opens the way for a wide range of applications, such as safety applications, and offers mobility and connectivity to both driver and passengers, letting them use transport systems in a smooth, efficient and safe way. 802.11p is a draft amendment to the IEEE 802.11 standard for vehicular communications. VANET are characterized by a dynamic topology driven by vehicle mobility, and in the Smart City the main problems of inter-vehicle communication are vehicle speed, vehicle density and building size. For this purpose, we first examine and then present simulation findings on the impact of different radio propagation models on the performance of vehicular ad hoc networks in terms of physical-layer characteristics. In our study, we compare the performance of two routing protocols (AODV and OLSR) under three propagation models (Two-Ray Ground, Rice and Nakagami). We study these protocols under varying conditions such as traffic density, Smart City architecture (the size of the scenario areas) and vehicle mobility. Our objective is to provide a qualitative assessment of the protocols' applicability in different vehicular scenarios. The two routing protocols are simulated and compared with Network Simulator 2 under the Manhattan Grid mobility model. To conclude, the simulation findings should be taken as a strong reference on the routing protocols' behaviour; however, they should not be considered an exact representation of behaviour in a real environment, because of several simulation constraints, such as the dimensions of the vehicles' movement field, the traffic type and the simulation timing.
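
For reference, the Nakagami propagation model compared above draws received-signal amplitudes from the standard Nakagami-m distribution:

```latex
f(x;\, m, \Omega) \;=\; \frac{2 m^{m}}{\Gamma(m)\,\Omega^{m}}\; x^{2m-1}
\exp\!\left(-\frac{m x^{2}}{\Omega}\right), \qquad x \ge 0
```

where m ≥ 1/2 is the shape (fading) parameter and Ω the mean received power; m = 1 reduces to Rayleigh fading, so the model spans conditions from severe multipath, typical of dense urban canyons, to near line-of-sight.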