
Showing papers in "Computer Science and Information Technology in 2018"


Journal ArticleDOI
TL;DR: Key among the issues raised in this paper are the various applications of AR that enhance the user's ability to understand the movement of a mobile robot, the movements of a robot arm, and the forces applied by a robot.
Abstract: Since the origins of Augmented Reality (AR), industry has always been one of its prominent application domains. The recent advances in both portable and wearable AR devices, together with the new challenges introduced by the fourth industrial revolution (known as Industry 4.0), further enlarge the applicability of AR to improve productivity and enhance the user experience. This paper provides an overview of the most important applications of AR in the industrial domain. Key among the issues raised in this paper are the various applications of AR that enhance the user's ability to understand the movement of a mobile robot, the movements of a robot arm, and the forces applied by a robot. It is recommended that, in view of the rising need for both user and data privacy, the technologies that form the basis of Industry 4.0 will need to change the way they work to embrace data privacy.

70 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors study the impact of different pretrained CNN feature extractors on the problem of image-set clustering for object classification as well as fine-grained classification, and propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet with a classic clustering algorithm to classify sets of images.
Abstract: This paper aims at providing insight on the transferability of deep CNN features to unsupervised problems. We study the impact of different pretrained CNN feature extractors on the problem of image set clustering for object classification as well as fine-grained classification. We propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet and a classic clustering algorithm to classify sets of images. This approach is compared to state-of-the-art algorithms in image-clustering and provides better results. These results strengthen the belief that supervised training of deep CNN on large datasets, with a large variability of classes, extracts better features than most carefully designed engineering approaches, even for unsupervised tasks. We also validate our approach on a robotic application, consisting in sorting and storing objects smartly based on clustering.

55 citations
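The pipeline described above (deep-feature extraction followed by classic clustering) can be sketched in a few lines. The sketch below uses scikit-learn's KMeans as the clustering stage; the synthetic feature vectors stand in for the activations a CNN pretrained on ImageNet would actually produce, since the paper's feature extractor is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_image_features(features, n_clusters):
    """Cluster a set of image feature vectors.

    In the paper's pipeline the features come from a CNN pretrained
    on ImageNet (e.g. penultimate-layer activations); here any 2-D
    array of per-image feature vectors can be passed in.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(features)

# Toy demo: two well-separated synthetic "feature" blobs standing in
# for real CNN activations of two object classes.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=0.1, size=(20, 64))
class_b = rng.normal(loc=5.0, scale=0.1, size=(20, 64))
labels = cluster_image_features(np.vstack([class_a, class_b]), n_clusters=2)
```

Because the extractor is fixed and only the clustering runs per image set, the same approach transfers directly to the robotic sorting application mentioned at the end of the abstract.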


Proceedings ArticleDOI
TL;DR: An Apache Spark based model to classify Amharic Facebook posts and comments into hate and not hate is developed and achieves a promising result, exploiting Spark's suitability for big data.
Abstract: The anonymity of social networks makes them attractive to hate speakers seeking to mask their criminal activities online, posing a challenge to the world and in particular to Ethiopia. With this ever-increasing volume of social media data, hate speech identification becomes a challenge that aggravates conflict between citizens of nations. The high rate of data production makes it difficult to collect, store, and analyze such big data using traditional detection methods. This paper proposes the application of Apache Spark to hate speech detection to reduce these challenges. The authors developed an Apache Spark based model to classify Amharic Facebook posts and comments into hate and not hate, employing Random Forest and Naïve Bayes for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold cross-validation, the model based on Word2Vec embeddings performed best with 79.83% accuracy. The proposed method achieves a promising result by exploiting Spark's suitability for big data.

53 citations
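The classification core of this approach (TF-IDF features feeding a Naïve Bayes learner) can be sketched outside Spark. The paper uses Spark MLlib on Amharic text; the sketch below substitutes scikit-learn and a tiny English toy corpus purely for illustration, so the posts, labels, and test sentence are all invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy corpus; the paper works on Amharic Facebook posts
# with Spark MLlib, which offers equivalent TF-IDF and Naive Bayes
# building blocks.
posts = [
    "we should attack them all",
    "they do not deserve to live here",
    "lovely weather in addis today",
    "congratulations on your new job",
]
labels = ["hate", "hate", "not_hate", "not_hate"]

# TF-IDF feature extraction followed by Naive Bayes learning,
# mirroring one of the paper's feature/learner combinations.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(posts, labels)
pred = model.predict(["they should all be attacked"])[0]
```

In the paper's setting the same two stages run distributed over Spark RDDs/DataFrames, which is what makes the approach scale to the social-media volumes discussed above.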


Journal ArticleDOI
TL;DR: A model to detect the malaria parasite accurately from Giemsa-stained blood samples is developed using a color-based pixel discrimination technique and segmentation operations to identify malaria parasites in thin smear blood images and decrease false results in malaria detection.
Abstract: Malaria is one of the deadliest diseases on the planet. An automated evaluation process can notably decrease the time needed for diagnosis of the disease, resulting in earlier onset of treatment and saving many lives. As malaria poses a serious global health problem, we set out to develop a model that detects the malaria parasite accurately from Giemsa-stained blood samples, with the hope of reducing the death rate due to malaria. In this work, we developed a model using a color-based pixel discrimination technique and segmentation operations to identify malaria parasites in thin smear blood images. Various segmentation techniques, such as watershed segmentation and HSV segmentation, are used in this method to decrease false results in malaria detection. We believe that our malaria parasite detection method will be helpful wherever it is difficult to find an expert in microscopic analysis of blood reports, and that it also limits human error in detecting the presence of parasites in a blood sample.

18 citations
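The color-based pixel discrimination step can be sketched as an HSV threshold: pixels whose hue, saturation, and value fall inside a target range are flagged as candidate parasite regions. The thresholds below are illustrative assumptions, not the paper's tuned values; a real detector would calibrate them to the Giemsa stain's purple hue.

```python
import colorsys
import numpy as np

def hsv_mask(rgb_image, h_range, s_min, v_min):
    """Boolean mask of pixels whose HSV values fall inside the given
    ranges. The thresholds are hypothetical stand-ins for the
    stain-colour ranges a real parasite detector would be tuned to."""
    h_lo, h_hi = h_range
    mask = np.zeros(rgb_image.shape[:2], dtype=bool)
    for i in range(rgb_image.shape[0]):
        for j in range(rgb_image.shape[1]):
            r, g, b = rgb_image[i, j] / 255.0
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            mask[i, j] = h_lo <= h <= h_hi and s >= s_min and v >= v_min
    return mask

# 2x2 toy image: one purple pixel (Giemsa-stained parasites appear
# purplish) among pale background pixels.
img = np.array([[[128, 0, 128], [230, 230, 230]],
                [[240, 240, 240], [235, 235, 235]]], dtype=float)
mask = hsv_mask(img, h_range=(0.7, 0.9), s_min=0.5, v_min=0.2)
```

The resulting mask would then feed the segmentation stage (e.g. watershed) to separate touching cells before counting parasites.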




Journal ArticleDOI
TL;DR: A classification model is presented that supports both generality, by following the logical sequence of the process of classifying unstructured text documents step by step, and efficiency, by proposing a compatible combination of embedded techniques for achieving better performance.
Abstract: Document classification has become an important field of research due to the increase of unstructured text documents available in digital form. It is considered one of the key techniques for organizing digital data by automatically assigning documents to predefined categories based on their content. Document classification is a process that consists of a set of phases, each of which can be accomplished using various techniques; selecting the proper technique for each phase affects the efficiency of the classification performance. The aim of this paper is to present a classification model that supports both generality and efficiency: generality through following the logical sequence of the process of classifying unstructured text documents step by step, and efficiency through proposing a compatible combination of the embedded techniques for achieving better performance. The experimental results over the 20-Newsgroups dataset have been validated using the statistical measures of precision, recall, and F-score, and have proven the capability of the proposed model to significantly improve performance.

15 citations


Journal ArticleDOI
TL;DR: This paper critically reviews the application of deep learning to the analysis of different biomedical signals and provides a holistic overview of the current literature, offering state-of-the-art knowledge about how deep learning has evolved and revolutionized machine learning in the past few years.
Abstract: Recent improvements in big data and machine learning have enhanced the importance of biomedical signal- and image-processing research. One part of machine learning's evolution is deep learning networks, which are designed for the task of exploiting compositional structure in data. The golden age of deep learning networks, in particular convolutional neural networks (CNNs), began in 2012. CNNs have rapidly become a methodology of optimal choice for analysing biomedical signals, and they have been successful in detecting and diagnosing abnormalities in biomedical signals. This paper has three distinct aims. The primary aim is to provide state-of-the-art knowledge about how deep learning evolved and revolutionized machine learning in the past few years. The second is to critically review the application of deep learning to the analysis of different biomedical signals and provide a holistic overview of the current literature. The final aim is to discuss the research opportunities for deep learning algorithms in this field of study, which can serve as a concise starting point for new researchers to identify future research directions.

12 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors introduce popular data science methodologies, compare them against cyber-security challenges, and deliver a comparative discussion of each methodology's strengths and weaknesses for cyber-security projects.
Abstract: Cyber-security solutions are traditionally static and signature-based. The traditional solutions, combined with analytic models, machine learning, and big data, could be improved to automatically trigger mitigation or provide relevant awareness to control or limit the consequences of threats. Such intelligent solutions are covered in the context of Data Science for Cyber-security. Data Science plays a significant role in cyber-security by utilising the power of data (and big data), high-performance computing, and data mining (and machine learning) to protect users against cyber-crimes. For this purpose, a successful data science project requires an effective methodology to cover all issues and provide adequate resources. In this paper, we introduce popular data science methodologies and compare them in accordance with cyber-security challenges. A comparative discussion is also delivered to explain the methodologies' strengths and weaknesses for cyber-security projects.

12 citations


Journal ArticleDOI
TL;DR: It is believed that the incorporation of Information and Communication Technology (ICT) in the Nigerian electoral process has reduced excessive electoral fraud to the barest minimum and fostered credible elections.
Abstract: There is no gainsaying that technology has drastically reduced incidences of electoral malpractice such as ballot stuffing, result sheet mutilation, manipulation, over-voting, alteration of result sheets, and hijacking of ballot boxes in the history of Nigerian elections. The Independent National Electoral Commission (INEC) has employed a number of innovative approaches to improve the management and conduct of elections in the country. As the years pass, INEC adopts more sophisticated technologies in order to keep up with international standards. Therefore, this paper examines the impact of these technologies and the effect they have had on election activities in Nigeria from the 1999 general election to 2017. Results show that the introduction of the Electronic Voters Register (EVR), Automatic Fingerprint Identification System (AFIS), and Smart Card Reader (SCR) has reduced the incidence of multiple registration and multiple voting to the barest minimum, while the development of the e-collation support platform has drastically reduced the incidence of result manipulation at collation centres. Hence, it is believed that the incorporation of Information and Communication Technology (ICT) in the Nigerian electoral process has reduced excessive electoral fraud to the barest minimum and fostered credible elections.

12 citations


Journal ArticleDOI
TL;DR: This study focuses on the extraction of features based on tunable-Q factor wavelet transform (TQWT) for classifying ALS and healthy EMG signals and obtained better classification results as compared to other existing methods.
Abstract: Amyotrophic lateral sclerosis (ALS) is a disease that affects the nerve cells in the brain and spinal cord which control the voluntary action of muscles; it can be identified by processing electromyogram (EMG) signals. This study focuses on the extraction of features based on the tunable-Q factor wavelet transform (TQWT) for classifying ALS and healthy EMG signals. TQWT decomposes an EMG signal into sub-bands, and these sub-bands are used for the extraction of statistical features, namely mean absolute deviation (MAD), interquartile range (IQR), kurtosis, mode, and entropy. The obtained features are tested on k-nearest neighbour and least-squares support vector machine classifiers for the classification of ALS and healthy EMG signals. The proposed method obtained better classification results as compared to other existing methods.
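The per-sub-band statistics named above are straightforward to compute. The sketch below omits the TQWT decomposition itself (any 1-D sub-band signal can be passed in) and uses simple NumPy formulas for the five features; the histogram-based entropy and bin count are illustrative choices, not the paper's exact definitions.

```python
import numpy as np

def subband_features(x, bins=16):
    """The five statistical features the paper extracts per TQWT
    sub-band; `x` is any 1-D sub-band signal."""
    mad = np.mean(np.abs(x - np.mean(x)))          # mean absolute deviation
    q75, q25 = np.percentile(x, [75, 25])
    iqr = q75 - q25                                # interquartile range
    z = (x - np.mean(x)) / np.std(x)
    kurt = np.mean(z ** 4) - 3.0                   # excess kurtosis
    hist, edges = np.histogram(x, bins=bins)
    probs = hist / hist.sum()
    nz = probs[probs > 0]
    entropy = -np.sum(nz * np.log2(nz))            # histogram entropy
    mode = edges[np.argmax(hist)]                  # modal bin (left edge)
    return mad, iqr, kurt, entropy, mode

# Illustrative "sub-band": white noise in place of a real EMG band.
rng = np.random.default_rng(1)
feats = subband_features(rng.normal(size=2048))
```

Concatenating these five numbers across all sub-bands yields the feature vector fed to the k-NN and LS-SVM classifiers.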

Journal ArticleDOI
TL;DR: Significant security issues for IoT and security prerequisites for IoT are examined alongside current attacks, and IoT security issues are mapped against existing solutions found in the literature.
Abstract: Since the beginning of cryptocurrency in 2008, blockchain technology has risen as a progressive technology. Despite the fact that blockchain began as the core technology of Bitcoin, its use cases are growing into numerous fields such as security of the Internet of Things (IoT), the banking sector, industry, and medicine. In recent years IoT has gained popularity due to its usage in smart home and smart city projects around the world. Unfortunately, IoT devices possess limited computing power, low storage capability, and limited network capacity; therefore they are more prone to attacks than other endpoint devices such as cell phones, tablets, or PCs. This paper focuses on significant security issues for IoT and security prerequisites for IoT, alongside current attacks, and maps IoT security issues against existing solutions found in the literature. Blockchain technology can be a key enabler for solving many IoT security issues. Finally, future work directions are described.

Proceedings ArticleDOI
TL;DR: The results show that the proposed methodology could clearly discriminate between near grayscale organs especially in case of tumor existence.
Abstract: Medical imaging is one of the most attractive topics in image processing and understanding research due to the similarity between the colors of the captured body organs. Most medical images come in grayscale with low-contrast gray values, which makes it a challenge to discriminate between the region of interest (ROI) and the background (BG). Pseudo-coloring is one of the solutions for enhancing the visual appeal of medical images; most literature works suggest RGB-based color palettes. In this paper, pseudo-coloring methods from different medical imaging works are investigated and a highly discriminative colorization method is proposed. The proposed colorization method employs the HSV/HSI color models to generate the desired color scale. Experiments have been performed on different medical images and different assessment methods have been utilized. The results show that the proposed methodology can clearly discriminate between near-grayscale organs, especially in the case of tumor existence. Comparisons with other works from the literature were performed and the results are promising.
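An HSV-based pseudo-coloring scheme of the kind described can be sketched by mapping each grayscale intensity onto a hue while holding saturation and value fixed. The hue range below (blue for dark, red for bright) is an illustrative choice, not the paper's calibrated palette.

```python
import colorsys
import numpy as np

def pseudocolor(gray):
    """Map grayscale intensities in [0, 1] onto an HSV colour scale
    (hue sweeps from blue for dark pixels to red for bright ones,
    full saturation and value), then convert to RGB for display."""
    rgb = np.zeros(gray.shape + (3,))
    for idx in np.ndindex(gray.shape):
        hue = (1.0 - gray[idx]) * 2.0 / 3.0   # 0 -> blue (h=2/3), 1 -> red (h=0)
        rgb[idx] = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return rgb

# Toy 1x2 "image": a dark pixel and a bright pixel.
img = pseudocolor(np.array([[0.0, 1.0]]))
```

Because hue varies continuously while saturation stays at maximum, two gray values that are nearly indistinguishable become visibly different colors, which is the discriminative effect the paper measures.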

Proceedings ArticleDOI
TL;DR: This paper reviews some of the automated penetration testing techniques and presents their enhancements over traditional manual approaches; to the best of the authors' knowledge, it is the first research that takes into consideration the concept of penetration testing together with the standards in the area.
Abstract: The use of information technology resources is rapidly increasing in organizations, businesses, and even governments, which has given rise to various attacks and vulnerabilities in the field. All these resources make it a must to frequently perform a penetration test (PT) of the environment to see what an attacker can gain and what the environment's current vulnerabilities are. This paper reviews some of the automated penetration testing techniques and presents their enhancements over traditional manual approaches. To the best of our knowledge, it is the first research that takes into consideration the concept of penetration testing and the standards in the area. This research tackles the comparison between manual and automated penetration testing and the main tools used in penetration testing. Additionally, it compares some methodologies used to build an automated penetration testing platform.

Proceedings ArticleDOI
TL;DR: This paper attempts to apply classification algorithms to evaluate students' performance in the higher education sector and identifies the key features affecting the prediction process based on a combination of three major attribute categories.
Abstract: In the globalised education sector, predicting student performance has become a central issue for data mining and machine learning researchers, as numerous aspects influence the predictive models. This paper attempts to apply classification algorithms to evaluate students' performance in the higher education sector and identify the key features affecting the prediction process based on a combination of three major attribute categories: admission information, module-related data, and first-year final grades. For this purpose, the J48 (C4.5) decision tree and Naïve Bayes classification algorithms are applied to computer science level 2 student datasets at Brunel University London for the academic year 2015/16. The outcome of the predictive model identifies low, medium, and high risk of failure among students. This prediction will help instructors to assist high-risk students by making appropriate interventions.

Journal ArticleDOI
TL;DR: A solution named Black Hole Detection System is used for the detection of the Black Hole attack on the AODV protocol in MANETs; the solution minimizes data loss, decreases the average jitter by 5%, and increases the throughput.
Abstract: A Mobile Ad hoc Network (MANET) is an aggregation of mobile terminals that form a volatile network with wireless interfaces. A MANET has no central administration, and with no central management and no clear defence mechanism it is more vulnerable to attacks than a wired network. The Black Hole attack is one of the attacks against network integrity in a MANET; in this type of attack all data packets are absorbed by the black hole node. There are many techniques to eliminate the Black Hole attack on the AODV protocol in MANETs. In this paper, a solution named Black Hole Detection System (BDS) is used for the detection of the Black Hole attack on the AODV protocol. The BDS treats the first route reply as the response from the malicious node and deletes it; the second reply is then chosen, using the route-reply saving mechanism, as it comes from the destination node. We use NS-2.35 for the simulation and compare the results of AODV and the BDS solution under a Black Hole attack. The BDS solution has a high packet delivery ratio compared to the AODV protocol under a Black Hole attack, at about 46.7%. The solution minimizes data loss, decreases the average jitter by 5%, and increases the throughput.
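The route-reply saving rule described above (discard the first RREP, accept the second) is simple enough to sketch directly. The dictionary fields below are invented placeholders for the RREP contents an NS-2 implementation would carry.

```python
def choose_route(route_replies):
    """Black Hole Detection System rule as described in the paper:
    the first route reply is assumed to come from the malicious node
    (which answers fastest with a fake route) and is discarded; the
    second reply, saved by the route-reply saving mechanism, is
    accepted as coming from the real destination."""
    if len(route_replies) < 2:
        return None          # not enough replies to apply the rule
    return route_replies[1]

# RREPs ordered by arrival time; a black-hole node typically answers
# first, advertising an implausibly fresh route (huge sequence number).
rreps = [
    {"origin": "malicious", "seq": 99999, "hops": 1},
    {"origin": "destination", "seq": 42, "hops": 3},
]
chosen = choose_route(rreps)
```

The trade-off is the extra latency of waiting for a second reply, which the paper offsets against the recovered packet delivery ratio.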

Proceedings ArticleDOI
TL;DR: In this paper, the authors proposed to take advantage of the new progress made in neural networks to emulate nonlinear audio systems in real time, and they showed that an accurate emulation can be reached with less than 1% of root mean square error between the signal coming from a tube amplifier and the output of the neural network.
Abstract: Numerous audio systems for musicians are expensive and bulky; it could therefore be advantageous to model them and replace them with computer emulation. In the guitar players' world, audio systems can have a desirable nonlinear behavior (distortion effects), which makes it difficult to find a simple model to emulate them in real time. The Volterra series model and its subclasses are the usual ways to model nonlinear systems; unfortunately, these systems are difficult to identify analytically. In this paper we propose to take advantage of recent progress in neural networks to emulate them in real time. We show that an accurate emulation can be reached with less than 1% root mean square error between the signal coming from a tube amplifier and the output of the neural network. Moreover, the research has been extended to model the Gain parameter of the amplifier.
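The "less than 1% root mean square error" figure can be made concrete as the RMS of the residual relative to the RMS of the reference signal. The sketch below computes that metric on invented signals: a tanh-saturated sine standing in for the tube amplifier's output, plus a small residual standing in for the network's emulation error.

```python
import numpy as np

def rms_error_percent(target, emulated):
    """Root-mean-square error between the reference (tube-amplifier)
    signal and the emulated output, as a percentage of the reference
    signal's RMS level."""
    err = np.sqrt(np.mean((target - emulated) ** 2))
    ref = np.sqrt(np.mean(target ** 2))
    return 100.0 * err / ref

t = np.linspace(0, 1, 1000)
target = np.tanh(3 * np.sin(2 * np.pi * 5 * t))        # toy "distorted" signal
emulated = target + 0.005 * np.sin(2 * np.pi * 50 * t)  # small invented residual
err = rms_error_percent(target, emulated)
```

An emulation with `err` below 1.0 on real amplifier recordings is what the paper reports as perceptually accurate.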

Proceedings ArticleDOI
TL;DR: An approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names is presented, showing that domain names made up of English characters “a-z” achieving a weighted score of < 45 are often associated with DGA.
Abstract: In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure and ensure a persistent presence on a compromised host. Amongst the various DDNS techniques, the Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach's feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmically generated domain names. Findings from this study show that domain names made up of English characters "a-z" achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
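A character-frequency score of this kind can be sketched as follows. The weight table (standard English letter frequencies) and the scaling that places the threshold near 45 are assumptions modelled on the paper's description, not its published coefficients; the idea is simply that natural-language domains use common letters and score high, while random-looking DGA names score low.

```python
# Approximate letter frequencies in English text, in percent.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}

def weighted_score(domain):
    """Hypothetical weighted score: average per-letter English
    frequency, scaled so that typical dictionary-word domains land
    well above random character strings."""
    letters = [c for c in domain.lower() if c.isalpha()]
    if not letters:
        return 0.0
    return 10.0 * sum(ENGLISH_FREQ.get(c, 0.0) for c in letters) / len(letters)

score_legit = weighted_score("google")    # common letters -> high score
score_dga = weighted_score("xjqzkvwq")    # rare letters -> low score
```

Under this toy calibration, a cut-off around 45 separates the two examples, mirroring the paper's reported threshold behaviour.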

Journal ArticleDOI
TL;DR: Drunk driving detection uses data provided by sensors and a camera to detect whether a person is drunk, using detection methods such as iris recognition with Gabor filters, neural networks on face images, speech-based detection, non-invasive biological sensors, and an engine-locking system.
Abstract: Drunk driving accidents have increased day by day and have become a big issue; on average, nearly 29% of road accidents are caused by drunk driving. To avoid such accidents, precautionary measures using different technologies are taken. Drunk driving detection is the process of detecting whether a person is drunk, using data provided by sensors and a camera. The data is processed using specific algorithms and methods to make the determination, with detection methods including iris recognition using Gabor filters, neural networks using face images, detection using speech, non-invasive biological sensors, detection using driving patterns, and an engine-locking system.

Journal ArticleDOI
TL;DR: The level of accessibility and usability of the websites of major airlines in the civil aviation industry in Nigeria is evaluated to ascertain their compliance with the Web Content Accessibility Guidelines (WCAG) 2.0, revealing that most of the websites do not have the functional and required usability tools expected of commercial airlines.
Abstract: In the present globalized world, online access to information and services irrespective of location and time has become the order of the day. A major gateway to online information and services in any corporate or government organization is the website. Thus, website accessibility and usability across different organizations is gaining interest among researchers and practitioners. This study focuses on evaluating the level of accessibility and usability of the websites of major airlines in the civil aviation industry in Nigeria for the purpose of ascertaining their level of compliance with the Web Content Accessibility Guidelines (WCAG) 2.0. In achieving the aim of the study, different automated tools were used: the A-Checker tool, the European Internet Inclusion Initiative (EIII) e-accessibility tool, the WAVE web accessibility evaluation tool, the Functional Accessibility Evaluation Tool, and the Mobile Friendly Test. In addition, a heuristic method was used for the usability testing. Based on the various tests conducted, the majority of the websites show known problems (KP) and high failure rates in terms of XHTML conformance with the stipulated guidelines. Findings from the coding and design tests reveal average performance, while the websites' implementation tests and mobile-friendly tests were not satisfactory. The usability evaluation reveals that most of the websites do not have the functional and required usability tools expected of commercial airlines. In practical terms, this study provides pointers to stakeholders in the civil aviation industry in Nigeria on the importance of ensuring that deployed websites facilitate seamless interaction with customers and enable service delivery without constraints.

Proceedings ArticleDOI
TL;DR: This study evaluates the performance of a simple Faster R-CNN detector for mammography lesion detection using the MIAS database.
Abstract: The recent availability of large-scale mammography databases enables researchers to evaluate advanced tumor detection by applying deep convolutional networks (DCNs) to mammography images, one of the commonly used imaging modalities for early breast cancer detection. With the recent advance of deep learning, the performance of tumor detection has improved to a great extent, especially using R-CNNs, or region-based convolutional neural networks. This study evaluates the performance of a simple Faster R-CNN detector for mammography lesion detection using the MIAS database.



Proceedings ArticleDOI
TL;DR: This work empirically compares the performance of various visual descriptors for ulcer detection using real Wireless Capsule Endoscopy (WCE) video frames to determine which visual descriptor represents WCE frames better and yields more accurate gastrointestinal ulcer detection.
Abstract: In this work, we empirically compare the performance of various visual descriptors for ulcer detection using real Wireless Capsule Endoscopy (WCE) video frames. This comparison is intended to determine which visual descriptor represents WCE frames better and yields more accurate gastrointestinal ulcer detection. The extracted visual descriptors are fed to the ulcer recognition system, which relies on Support Vector Machine (SVM) classification to categorize WCE frames as "ulcer" or "non-ulcer".

Proceedings ArticleDOI
TL;DR: This work investigates how social media analytics help to analyze smart city data collected from various social media sources, such as Twitter and Facebook, to detect various events taking place in a smart city and identify the importance of events and concerns of citizens regarding some events.
Abstract: Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of city services including energy, transportation, health, and much more, generating massive volumes of structured and unstructured data on a daily basis. Social networks, such as Twitter, Facebook, and Google+, are also becoming a new source of real-time information in smart cities, with social network users acting as social sensors. These datasets are so large and complex that they are difficult to manage with conventional data management tools and methods. To become valuable, this massive amount of data, known as 'big data,' needs to be processed and comprehended to hold the promise of supporting a broad range of urban and smart city functions, including among others transportation, water and energy consumption, pollution surveillance, and smart city governance. In this work, we investigate how social media analytics help to analyze smart city data collected from various social media sources, such as Twitter and Facebook, to detect various events taking place in a smart city and to identify the importance of events and the concerns of citizens regarding some events. A case scenario analyses the opinions of users concerning the traffic in the three largest cities in the UAE.

Journal ArticleDOI
TL;DR: This paper discusses that the ABM algorithm is only the first algorithm in a family of new algorithms based on the θ-transformation, and introduces the simplest algorithm in this family based on Linear neurons.
Abstract: In this paper, we introduce the concepts of Linear neurons and new learning algorithms based on Linear neurons, with an explanation of the reasons behind these algorithms. First, we briefly review the Boltzmann Machine and the fact that the invariant distributions of the Boltzmann Machine generate Markov chains. We then review the θ-transformation and its completeness, i.e. that any function can be expanded by the θ-transformation. We further review the ABM (Attrasoft Boltzmann Machine). The invariant distribution of the ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We then discuss that the ABM algorithm is only the first algorithm in a family of new algorithms based on the θ-transformation, and introduce the simplest algorithm in this family based on Linear neurons. We also discuss the advantages of this algorithm: accuracy, stability, and low time complexity.

Journal ArticleDOI
TL;DR: The results show that the implementation of the ID3 algorithm using the quadratic entropy on selected datasets yields a significant improvement in accuracy compared with the traditional ID3 implementation using the Shannon entropy.
Abstract: Decision trees have been a useful tool in data mining for building intelligence in diverse areas of research to solve real-world data classification problems. One decision tree algorithm that has been predominant for its robust use and wide acceptance is the Iterative Dichotomiser 3 (ID3). The splitting criterion for the algorithm has traditionally been the Shannon entropy of the dataset. In this research work, the ID3 algorithm was implemented using the quadratic entropy in a bid to improve its classification accuracy. The results show that the implementation of the ID3 algorithm using the quadratic entropy on selected datasets yields a significant improvement in accuracy compared with the traditional ID3 implementation using the Shannon entropy. The formulated model follows the same process as the ID3 algorithm but replaces the Shannon entropy formula with the quadratic entropy.
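The swap the paper describes is localized to the impurity measure used at each split. A minimal sketch of the two measures, assuming quadratic entropy in its usual form sum p·(1 − p), i.e. 1 − sum p² over the class distribution:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Traditional ID3 splitting measure: -sum p * log2(p)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def quadratic_entropy(labels):
    """Quadratic entropy: sum p * (1 - p), equivalently 1 - sum p^2.
    Cheaper to compute (no logarithm); the drop-in replacement the
    paper evaluates inside ID3's information-gain calculation."""
    n = len(labels)
    return sum((c / n) * (1 - c / n) for c in Counter(labels).values())

labels = ["yes", "yes", "no", "no"]
h = shannon_entropy(labels)    # maximal for a 50/50 split
q = quadratic_entropy(labels)  # also maximal for a 50/50 split
```

Both measures are zero on pure nodes and maximal on an even split, so the rest of the ID3 gain computation carries over unchanged.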


Proceedings ArticleDOI
TL;DR: The proposed E-assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
Abstract: In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is achieved based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
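The matching and grading modules can be sketched as a keyword-overlap ratio scaled to a mark. The sketch below matches surface tokens only; a faithful implementation would add the paper's preprocessing and keyword-expansion phases (stemming, synonyms, semantic similarity) before matching, and the example answers are invented.

```python
def matching_ratio(instructor_answer, student_answer):
    """Fraction of the instructor's keywords found in the student's
    answer. Surface-token matching only; the paper's semantic and
    document-similarity measures would refine this ratio."""
    keywords = set(instructor_answer.lower().split())
    student = set(student_answer.lower().split())
    if not keywords:
        return 0.0
    return len(keywords & student) / len(keywords)

def grade(ratio, max_marks=10):
    """Grading module: scale the matching ratio to a mark."""
    return round(ratio * max_marks, 1)

r = matching_ratio(
    "photosynthesis converts light energy",
    "plants use photosynthesis to turn light into chemical energy")
```

Here three of the instructor's four keywords appear in the student answer, so the ratio and the resulting mark reflect a mostly correct response.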

Proceedings ArticleDOI
TL;DR: Simulation results suggest that on-demand routing within the MANET better serves Mobile IP on MANETs.
Abstract: Mobile computing devices equipped with transceivers form Mobile Ad Hoc Networks (MANET) when two or more of these devices find themselves within transmission range. MANETs are stand-alone (no existing infrastructure needed), autonomous networks that utilise multi-hop communication to reach nodes out of transmitter range. Unlike infrastructure networks e.g. the Internet with fixed topology, MANETs are dynamic. Despite the heterogeneous nature of these two networks, integrating MANETs with the Internet extends the network coverage and adds to the application domain of MANETs. One of the many ways of combining MANETs with the Internet involves using Mobile Internet Protocol (Mobile IP) and a MANET protocol to route packets between the Internet and the MANET via Gateway agents. In this paper, we evaluate the performance of Mobile IP on MANET in Network Simulator 2 (NS2). We have implemented Mobile IP on Ad hoc On-demand Distance Vector (AODV), Ad hoc On-demand Multiple Distance Vector (AOMDV) and Destination-Sequenced Distance Vector (DSDV) routing protocols and compared performances based on Throughput, End-to-End Delay (E2ED), Packet Delivery Ratio (PDR) and Normalized Packet Ratio (NPR). The simulation results suggest that on-demand routing within the MANET better serves Mobile IP on MANETs.