
Showing papers in "International Journal of Advanced Computer Science and Applications in 2019"


Journal ArticleDOI
TL;DR: Twitter is an enormously popular microblog on which users may voice their opinions. Sentiment analysis of Twitter data is a field that has received much attention over the last decade and involves dissecting “tweets” (comments) and the content of these expressions.
Abstract: The entire world is transforming quickly under the present innovations. The Internet has become a basic requirement for everybody, with the Web being utilized in every field. With the rapid increase in social network applications, people are using these platforms to voice their opinions on daily issues. Gathering and analyzing people’s reactions toward buying a product, public services, and so on are vital. Sentiment analysis (or opinion mining) is a common natural language processing task that aims to discover the sentiments behind opinions in texts on varying subjects. In recent years, researchers in the field of sentiment analysis have been concerned with analyzing opinions on different topics such as movies, commercial products, and daily societal issues. Twitter is an enormously popular microblog on which users may voice their opinions. Sentiment analysis of Twitter data is a field that has received much attention over the last decade and involves dissecting “tweets” (comments) and the content of these expressions. As such, this paper surveys the various sentiment analysis methods applied to Twitter data and their outcomes.

136 citations


Journal ArticleDOI
TL;DR: This paper proposes a new nature-inspired metaheuristic optimization algorithm called the Sea Lion Optimization (SLnO) algorithm, inspired by the whiskers that sea lions use to detect prey.
Abstract: This paper proposes a new nature-inspired metaheuristic optimization algorithm called the Sea Lion Optimization (SLnO) algorithm. The SLnO algorithm imitates the hunting behavior of sea lions in nature. Moreover, it is inspired by sea lions' whiskers, which are used to detect prey. The SLnO algorithm is tested on 23 well-known test functions (benchmarks). Optimization results show that the SLnO algorithm is very competitive compared to Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), Grey Wolf Optimization (GWO), the Sine Cosine Algorithm (SCA) and the Dragonfly Algorithm (DA).
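As a rough illustration of how such population-based metaheuristics are structured and benchmarked, a minimal Python sketch of a generic leader-following search on the sphere benchmark (F1) is shown below. The update rule, parameter names and settings here are illustrative assumptions; the actual SLnO position-update equations follow the sea lion hunting model described in the paper.

import numpy as np

def sphere(x):
    # Benchmark F1: sphere function, global minimum 0 at the origin.
    return np.sum(x ** 2)

def leader_following_search(fitness, dim=30, pop=30, iters=500, lb=-100, ub=100, seed=0):
    # Generic population-based skeleton: candidates move toward the best
    # solution found so far, with an exploration factor that decays over
    # iterations (the common structure SLnO-style algorithms build on).
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    best = min(X, key=fitness).copy()
    for t in range(iters):
        c = 2.0 * (1 - t / iters)  # exploration factor decays toward 0
        for i in range(pop):
            dist = np.abs(2 * rng.random(dim) * best - X[i])  # distance to leader
            X[i] = np.clip(best - c * dist * (2 * rng.random(dim) - 1), lb, ub)
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
    return best, fitness(best)

best, score = leader_following_search(sphere)
print(f"best sphere fitness: {score:.3e}")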

134 citations


Journal ArticleDOI
TL;DR: The results and comparative study showed that the current work improved on previous accuracy scores in predicting heart disease; integrating the machine learning model presented in this study with medical information systems would be useful for predicting HF or other diseases using live data collected from patients.
Abstract: In the current era, Heart Failure (HF) is one of the common diseases that can lead to dangerous situations. Every year, almost 26 million patients are affected by this kind of disease. From the heart consultant's and surgeon's point of view, it is difficult to predict heart failure at the right time. Fortunately, classification and prediction models exist that can aid the medical field and illustrate how to use medical data efficiently. This paper aims to improve HF prediction accuracy using the UCI heart disease dataset. For this, multiple machine learning approaches were used to understand the data and predict the chances of HF in a medical database. Furthermore, the results and comparative study showed that the current work improved on previous accuracy scores in predicting heart disease. Integrating the machine learning model presented in this study with medical information systems would be useful for predicting HF or other diseases using live data collected from patients.
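As a hedged sketch of the kind of multi-model comparison the abstract describes (not the authors' actual code), the snippet below assumes a local copy of the UCI heart disease data saved as heart.csv with a binary target column; the file name and column name are assumptions.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# "heart.csv" / "target" are assumed names for a local copy of the UCI data.
df = pd.read_csv("heart.csv")
X, y = df.drop(columns="target"), df["target"]

models = {
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")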

118 citations


Journal ArticleDOI
TL;DR: This study analyzes recently developed IoT applications in the agriculture and farming industries to provide an overview of sensor data collection, technologies, and sub-verticals such as water management and crop management, and provides recommendations for future research covering IoT system scalability, heterogeneity aspects, IoT system architecture, data analysis methods, and the size or scale of the observed land or agricultural domain.
Abstract: It is essential to increase the productivity of agricultural and farming processes to improve yields and cost-effectiveness with new technology such as the Internet of Things (IoT). In particular, IoT can make agricultural and farming industry processes more efficient by reducing human intervention through automation. The aim of this study is to analyze recently developed IoT applications in the agriculture and farming industries to provide an overview of sensor data collection, technologies, and sub-verticals such as water management and crop management. In this review, data is extracted from 60 peer-reviewed scientific publications (2016-2018) with a focus on IoT sub-verticals and sensor data collection for measurements that support accurate decisions. Our results from the reported studies show that water management is the most common sub-vertical (28.08%), followed by crop management (14.60%) and then smart farming (10.11%). From the data collection, livestock management and irrigation management resulted in the same percentage (5.61%). With regard to sensor data collection, the highest results were for the measurement of environmental temperature (24.87%) and environmental humidity (19.79%). There is also other sensor data regarding soil moisture (15.73%) and soil pH (7.61%). Research indicates that of the technologies used in IoT application development, Wi-Fi is the most frequently used (30.27%), followed by mobile technology (21.10%). As per our review of the research, we can conclude that the agricultural sector (76.1%) is researched considerably more than the farming sector (23.8%). This study should be used as a reference for members of the agricultural industry to improve and develop the use of IoT to enhance agricultural production efficiencies. This study also provides recommendations for future research, including IoT system scalability, heterogeneity aspects, IoT system architecture, data analysis methods, the size or scale of the observed land or agricultural domain, IoT security and threat solutions/protocols, operational technology, data storage, cloud platforms, and power supplies.

105 citations


Journal ArticleDOI
TL;DR: The experimental results show that the accuracy of the proposed model can reach 95%, which is higher than current technologies for phishing website detection.
Abstract: This research focuses on evaluating whether a website is legitimate or phishing. Our research contributes to improving the accuracy of phishing website detection. Hence, a feature selection algorithm is employed and integrated with an ensemble learning methodology based on majority voting, and compared with different classification models including Random Forest and Logistic Regression, among others. Our research demonstrates that current phishing detection technologies have an accuracy rate between 70% and 92.52%. The experimental results show that the accuracy of our proposed model can reach 95%, which is higher than current technologies for phishing website detection. Moreover, the learning models used during the experiment indicate that our proposed model has a promising accuracy rate.
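To make the methodology concrete, here is a minimal sketch of feature selection feeding a majority-voting ensemble in scikit-learn. It uses synthetic stand-in data, since the paper's URL/page features are not listed, and the chosen selector and base classifiers are assumptions rather than the authors' exact configuration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic placeholder: rows = websites, columns = extracted features,
# label 1 = phishing, 0 = legitimate.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",  # hard voting = simple majority vote
)
model = make_pipeline(SelectKBest(mutual_info_classif, k=15), voter)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))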

84 citations


Journal ArticleDOI
TL;DR: There is no significant difference between the proposed approach and the state of the art on the CIFAR-10 dataset; however, the real potential lies in hybridizing genetic algorithms with a local search method to optimize both network structure and network training, which has yet to be reported.
Abstract: Optimizing hyperparameters in a Convolutional Neural Network (CNN) is a tedious problem for many researchers and practitioners. To obtain hyperparameters with better performance, experts are required to configure a set of hyperparameter choices manually. The best results of this manual configuration are thereafter modeled and implemented in the CNN. However, different datasets require different models or combinations of hyperparameters, which can be cumbersome and tedious. To address this, several approaches have been proposed, such as grid search, which is limited to low-dimensional spaces, and random search, which uses random selection. Also, optimization methods such as evolutionary algorithms and Bayesian optimization have been tested on the MNIST dataset, which is less costly and requires fewer hyperparameters than the CIFAR-10 dataset. In this paper, the authors investigate hyperparameter search methods on the CIFAR-10 dataset. During the investigation, various optimization methods are applied and their performance in terms of accuracy is tested and recorded. Although there is no significant difference between the proposed approach and the state of the art on the CIFAR-10 dataset, the real potential lies in hybridizing genetic algorithms with a local search method to optimize both network structure and network training, which, to the best of the authors' knowledge, has yet to be reported.
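A minimal sketch of a genetic search over CNN hyperparameters follows. The search space, operators and population settings are illustrative assumptions, and the fitness function is a cheap placeholder; in the paper's setting it would be the validation accuracy of a CNN trained on CIFAR-10 with the candidate configuration.

import random

# Illustrative search space; the paper's actual hyperparameter ranges differ.
SPACE = {
    "lr": [1e-1, 1e-2, 1e-3, 1e-4],
    "batch": [32, 64, 128],
    "filters": [16, 32, 64, 128],
    "dropout": [0.0, 0.25, 0.5],
}

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(cfg):
    # Placeholder: replace with the validation accuracy of a CNN trained
    # on CIFAR-10 using this configuration.
    return -abs(cfg["lr"] - 1e-3) + cfg["filters"] / 1000

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else cfg[k])
            for k, v in SPACE.items()}

random.seed(0)
pop = [random_config() for _ in range(10)]
for gen in range(20):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]  # truncation selection with elitism
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(pop) - len(parents))]
    pop = parents + children
print("best config found:", max(pop, key=fitness))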

72 citations


Journal ArticleDOI
TL;DR: This paper reviews research and development within SSA and provides an IoT/AI architecture for establishing a Smart Sustainable Agriculture platform as a solution.
Abstract: The Internet of Things (IoT) and Artificial Intelligence (AI) have been employed in agriculture over a long period of time, alongside other advanced computing technologies. However, increased attention is currently being paid to the use of such smart technologies. Agriculture has provided an important source of food for human beings over many thousands of years, including the development of appropriate farming methods for different types of crops. The emergence of new advanced IoT technologies has the potential to monitor the agricultural environment to ensure high-quality products. However, there remains a lack of research and development in relation to Smart Sustainable Agriculture (SSA), accompanied by complex obstacles arising from the fragmentation of agricultural processes, i.e. the control and operation of IoT/AI machines; data sharing and management; interoperability; and large-scale data analysis and storage. This study firstly explores existing IoT/AI technologies adopted for SSA and secondly identifies an IoT/AI technical architecture capable of underpinning the development of SSA platforms. As well as contributing to the current body of knowledge, this research reviews research and development within SSA and provides an IoT/AI architecture for establishing a Smart Sustainable Agriculture platform as a solution.

69 citations


Journal ArticleDOI
TL;DR: The results confirm an overall enhancement when augmentation methods are combined with deep learning classification methods (especially transfer learning), as evaluated on the two datasets.
Abstract: Breast tumor classification and detection using ultrasound imaging is considered a significant step in computer-aided diagnosis systems. Over the previous decades, researchers have demonstrated the potential to automate initial tumor classification and detection. The shortage of public datasets of breast cancer ultrasound images prevents researchers from achieving good classification performance. Traditional augmentation approaches are firmly limited, especially in tasks where the images follow strict standards, as in the case of medical datasets. Therefore, besides traditional augmentation, we use a new methodology for data augmentation using a Generative Adversarial Network (GAN). We achieved higher accuracies by integrating traditional with GAN-based augmentation. This paper uses two breast ultrasound image datasets obtained from two different ultrasound systems. The first dataset was collected from Baheya Hospital for Early Detection and Treatment of Women’s Cancer, Cairo (Egypt); we name it BUSI, for the Breast Ultrasound Images dataset. It contains 780 images (133 normal, 437 benign and 210 malignant). Dataset (B) was obtained from related work and has 163 images (110 benign and 53 malignant). To address the shortage of public datasets in this field, the BUSI dataset will be made publicly available to researchers. Moreover, in this paper, deep learning approaches are proposed for breast ultrasound classification. We examine two different methods, a Convolutional Neural Network (CNN) approach and a Transfer Learning (TL) approach, and compare their performance with and without augmentation. The results confirm an overall enhancement when augmentation methods are combined with deep learning classification methods (especially transfer learning), as evaluated on the two datasets.

68 citations


Journal ArticleDOI
TL;DR: The technology behind blockchain is discussed, and an IoMT-based security architecture employing blockchain is then proposed to ensure the security of data transmission between connected nodes.
Abstract: The Internet of Medical Things (IoMT) is playing a substantial role in improving health and providing medical facilities to people around the globe. With its exponential growth, IoMT has a huge influence on our everyday lifestyle. Instead of the patient going to the hospital, clinical data is remotely observed and processed in a real-time data system and then transferred to a third party for future use, such as the cloud. IoMT is a data-intensive domain with a continuously growing rate, which means a large amount of sensitive data must be secured against tampering. Blockchain is a tamper-proof digital ledger which provides peer-to-peer communication. Blockchain enables communication between non-trusting members without any intermediary. In this paper, we first discuss the technology behind blockchain and then propose an IoMT-based security architecture employing blockchain to ensure the security of data transmission between connected nodes.

66 citations


Journal ArticleDOI
TL;DR: This paper presents recent developments in IoT technologies, discusses future applications and research challenges, and argues that IoT is paving the way for new dimensions of research.
Abstract: With the Internet of Things (IoT) gradually evolving as the subsequent phase of the evolution of the Internet, it becomes crucial to recognize the various potential domains for application of IoT and the research challenges associated with these applications. Ranging from smart cities to health care, smart agriculture, logistics and retail, and even smart living and smart environments, IoT is expected to infiltrate virtually all aspects of daily life. Even though the current IoT enabling technologies have greatly improved in recent years, there are still numerous problems that require attention. Since the IoT concept ensues from heterogeneous technologies, many research challenges are bound to arise. The fact that IoT is so expansive and affects practically all areas of our lives makes it a significant research topic for studies in various related fields such as information technology and computer science. Thus, IoT is paving the way for new dimensions of research to be carried out. This paper presents the recent development of IoT technologies and discusses future applications and research challenges.

65 citations


Journal ArticleDOI
TL;DR: A system that uses machine learning techniques to classify websites based on their URL; the results show that the classifiers were successful in distinguishing real websites from fake ones over 90% of the time.
Abstract: Tremendous resources are spent by organizations guarding against and recovering from cybersecurity attacks by online hackers who gain access to sensitive and valuable user data. Many cyber infiltrations are accomplished through phishing attacks where users are tricked into interacting with web pages that appear to be legitimate. In order to successfully fool a human user, these pages are designed to look like legitimate ones. Since humans are so susceptible to being tricked, automated methods of differentiating between phishing websites and their authentic counterparts are needed as an extra line of defense. The aim of this research is to develop these methods of defense utilizing various approaches to categorize websites. Specifically, we have developed a system that uses machine learning techniques to classify websites based on their URL. We used four classifiers: the decision tree, Naive Bayesian classifier, support vector machine (SVM), and neural network. The classifiers were tested with a data set containing 1,353 real world URLs where each could be categorized as a legitimate site, suspicious site, or phishing site. The results of the experiments show that the classifiers were successful in distinguishing real websites from fake ones over 90% of the time.

Journal ArticleDOI
TL;DR: Several machine learning classification techniques are used to predict software defects in twelve widely used NASA datasets; the detailed results can serve as a baseline for future research, so that any claim regarding improvement in prediction through a new technique, model or framework can be compared and verified.
Abstract: Defect prediction at early stages of the software development life cycle is a crucial activity of the quality assurance process and has been broadly studied in the last two decades. The early prediction of defective modules in developing software can help the development team utilize the available resources efficiently and effectively to deliver a high-quality software product in limited time. Until now, many researchers have developed defect prediction models using machine learning and statistical techniques. Machine learning is an effective way to identify defective modules, working by extracting hidden patterns among software attributes. In this study, several machine learning classification techniques are used to predict software defects in twelve widely used NASA datasets. The classification techniques include: Naive Bayes (NB), Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), Support Vector Machine (SVM), K Nearest Neighbor (KNN), kStar (K*), One Rule (OneR), PART, Decision Tree (DT), and Random Forest (RF). The performance of the classification techniques is evaluated using various measures such as Precision, Recall, F-Measure, Accuracy, MCC, and ROC Area. The detailed results in this research can be used as a baseline for future research, so that any claim regarding improvement in prediction through a new technique, model or framework can be compared and verified.
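For reference, the listed evaluation measures can all be computed with scikit-learn as sketched below. The NASA MDP data itself is not bundled with scikit-learn, so synthetic stand-in data is used; the classifier choice is also just an example.

from sklearn.datasets import make_classification
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             precision_score, recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a NASA defect dataset (rows = modules,
# columns = static code metrics, label 1 = defective, ~20% defective).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = GaussianNB().fit(X_tr, y_tr)
pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]

print("Precision:", precision_score(y_te, pred))
print("Recall:   ", recall_score(y_te, pred))
print("F-Measure:", f1_score(y_te, pred))
print("Accuracy: ", accuracy_score(y_te, pred))
print("MCC:      ", matthews_corrcoef(y_te, pred))
print("ROC Area: ", roc_auc_score(y_te, prob))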

Journal ArticleDOI
TL;DR: A framework is introduced for identifying news articles related to top trending topics/hashtags and for multi-document summarization of unifiable news articles based on those trending topics, to capture opinion diversity on those topics.
Abstract: Vectorization is imperative for processing textual data in natural language processing applications. Vectorization enables machines to understand textual content by converting it into meaningful numerical representations. The proposed work aims to identify unifiable news articles for performing multi-document summarization. A framework is introduced for identifying news articles related to top trending topics/hashtags and for multi-document summarization of unifiable news articles based on those trending topics, to capture opinion diversity on those topics. Text clustering is applied to the corpus of news articles related to each trending topic to obtain smaller unifiable groups. The effectiveness of various text vectorization methods, namely bag-of-words representations with tf-idf scores, word embeddings, and document embeddings, is investigated for clustering news articles using k-means. The paper presents a comparative analysis of the different vectorization methods on documents from the DUC 2004 benchmark dataset in terms of purity.
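A minimal sketch of one of the compared configurations, tf-idf vectorization followed by k-means clustering, is shown below with toy stand-in articles (the actual corpora are DUC 2004 documents and trending-topic news articles):

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in for news articles gathered under trending topics.
articles = [
    "Government announces new climate policy targets",
    "Climate policy targets draw criticism from industry",
    "Football league final ends in dramatic penalty shootout",
    "Fans celebrate dramatic cup final shootout win",
]

vectorizer = TfidfVectorizer(stop_words="english")  # bag of words with tf-idf scores
X = vectorizer.fit_transform(articles)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label, text in zip(km.labels_, articles):
    print(label, text)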

Journal ArticleDOI
TL;DR: Two EMG features, namely enhanced wavelength (EWL) and enhanced mean absolute value (EMAV), are proposed, which aim to enhance prediction accuracy for the classification of hand movements and can be considered valuable tools for rehabilitation and clinical applications.
Abstract: Extraction of potential electromyography (EMG) features has become one of the important roles in EMG pattern recognition. In this paper, two EMG features, namely enhanced wavelength (EWL) and enhanced mean absolute value (EMAV), are proposed. The EWL and EMAV are modified versions of wavelength (WL) and mean absolute value (MAV), which aim to enhance the prediction accuracy for the classification of hand movements. Initially, the proposed features are extracted from the EMG signals via discrete wavelet transform (DWT). The extracted features are then fed into a machine learning algorithm for the classification process. Four popular machine learning algorithms, namely k-nearest neighbor (KNN), linear discriminant analysis (LDA), Naive Bayes (NB) and support vector machine (SVM), are used in the evaluation. To examine the effectiveness of EWL and EMAV, several conventional EMG features are used in a performance comparison. In addition, the efficacy of EWL and EMAV when combined with other features is also investigated. Based on the results obtained, the combination of EWL and EMAV with other features can improve the classification performance. Thus, EWL and EMAV can be considered valuable tools for rehabilitation and clinical applications.
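The conventional WL and MAV definitions below are standard; the enhanced variants are sketched under the assumption that they apply a larger exponent to the central portion of the analysis window than to its edges, which is our reading of the paper's EMAV/EWL definitions, so the exponent schedule should be treated as an assumption.

import numpy as np

def mav(x):
    # Mean absolute value: (1/L) * sum |x_i|.
    return np.mean(np.abs(x))

def wl(x):
    # Wavelength: sum of absolute differences of consecutive samples.
    return np.sum(np.abs(np.diff(x)))

def _p(i, L):
    # Assumed weighting: the central 60% of the window gets exponent 0.75,
    # the edges get 0.5 (an assumption, per the lead-in).
    return 0.75 if 0.2 * L <= i <= 0.8 * L else 0.5

def emav(x):
    L = len(x)
    return np.mean([np.abs(x[i]) ** _p(i, L) for i in range(L)])

def ewl(x):
    L = len(x)
    return np.sum([np.abs(x[i] - x[i - 1]) ** _p(i, L) for i in range(1, L)])

sig = np.sin(np.linspace(0, 8 * np.pi, 256))
sig += 0.1 * np.random.default_rng(0).standard_normal(256)
print(f"MAV={mav(sig):.3f} EMAV={emav(sig):.3f} WL={wl(sig):.3f} EWL={ewl(sig):.3f}")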

Journal ArticleDOI
TL;DR: A facial recognition system based on the Local Binary Pattern Histogram (LBPH) method to handle real-time recognition of human faces in low- and high-level images, aiming to maximize the variation relevant to facial expressions and to encode edges in a computationally cheap way.
Abstract: Facial recognition has consistently remained an active research area due to its non-modelling nature and its diverse applications. As a result, day-to-day activities are increasingly being carried out electronically rather than with pencil and paper. Today, computer vision is a comprehensive field that deals with high-level programming by feeding in input images/videos to automatically perform tasks such as detection, recognition and classification. With deep learning techniques, such systems can even surpass the normal human visual system. In this article, we developed a facial recognition system based on the Local Binary Pattern Histogram (LBPH) method to handle real-time recognition of human faces in low- and high-level images. We aim to maximize the variation relevant to facial expressions and to encode edges in a computationally cheap way. These highly successful features are called Local Binary Pattern Histograms (LBPH).
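OpenCV ships an LBPH face recognizer (in the opencv-contrib-python package) that implements this method; a minimal usage sketch follows, with random arrays standing in for cropped grayscale face images:

import cv2
import numpy as np

# Requires opencv-contrib-python, which provides the cv2.face module.
recognizer = cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8, grid_x=8, grid_y=8)

# Stand-in data: in practice these are cropped, grayscale face images.
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1], dtype=np.int32)

recognizer.train(faces, labels)

label, distance = recognizer.predict(faces[0])  # lower distance = closer match
print(f"predicted person {label} (distance {distance:.1f})")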

Journal ArticleDOI
TL;DR: A supervised machine learning approach for detecting and preventing cyberbullying; evaluation shows that the neural network performs best, achieving an accuracy of 92.8% and outperforming classifiers from similar work on the same dataset.
Abstract: With the exponential increase of social media users, cyberbullying has emerged as a form of bullying through electronic messages. Social networks provide a rich environment that bullies can exploit to attack vulnerable victims. Given the consequences of cyberbullying for victims, it is necessary to find suitable actions to detect and prevent it. Machine learning can be helpful for detecting the language patterns of bullies and hence can generate a model to automatically detect cyberbullying actions. This paper proposes a supervised machine learning approach for detecting and preventing cyberbullying. Several classifiers are used to train on and recognize bullying actions. The evaluation of the proposed approach on a cyberbullying dataset shows that the Neural Network performs best, achieving an accuracy of 92.8%, while the SVM achieves 90.3%. Also, the NN outperforms classifiers from similar work on the same dataset.

Journal ArticleDOI
TL;DR: This paper presents the development of a firefighting robot, dubbed QRob, that can extinguish fires without exposing firefighters to unnecessary danger.
Abstract: A fire incident is a disaster that can potentially cause loss of life, property damage and permanent disability to the affected victims, who may also suffer prolonged psychological trauma. Firefighters are primarily tasked to handle fire incidents, but they are often exposed to higher risks when extinguishing fires, especially in hazardous environments such as nuclear power plants, petroleum refineries and gas tanks. They also face other difficulties, particularly if a fire occurs in narrow and restricted places, as it is necessary to explore the ruins of buildings and obstacles to extinguish the fire and save the victims. Given the high barriers and risks of fire extinguishment operations, technological innovations can be utilized to assist firefighting. Therefore, this paper presents the development of a firefighting robot, dubbed QRob, that can extinguish fires without exposing firefighters to unnecessary danger. QRob is designed to be more compact than other conventional firefighting robots in order to ease entry into small locations and reach fires deeper within narrow spaces. QRob is also equipped with an ultrasonic sensor to avoid hitting obstacles and surrounding objects, while a flame sensor is attached for fire detection. As a result, QRob demonstrates the capability to identify fire locations automatically and extinguish fires remotely from a particular distance. QRob is programmed to find the fire location and stop at a maximum distance of 40 cm from the fire. A human operator can monitor the robot using a camera connected to a smartphone or remote device.
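The approach-and-stop behaviour described above can be captured in a simple control loop; the sketch below uses hypothetical sensor and actuator stubs (the real robot would wrap its ultrasonic sensor, flame sensor, motor driver and extinguisher), and only the 40 cm stopping distance comes from the paper.

import random
import time

STOP_DISTANCE_CM = 40  # QRob halts 40 cm from the fire, per the paper

# Hypothetical stubs standing in for real sensor/actuator drivers.
def flame_detected():        return random.random() < 0.3
def distance_cm():           return random.uniform(10, 200)
def drive_forward():         print("driving forward")
def stop_motors():           print("motors stopped")
def activate_extinguisher(): print("extinguishing fire")

while True:
    if not flame_detected():
        drive_forward()                    # keep searching for a flame
    elif distance_cm() > STOP_DISTANCE_CM:
        drive_forward()                    # approach until 40 cm away
    else:
        stop_motors()
        activate_extinguisher()
        break
    time.sleep(0.1)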

Journal ArticleDOI
TL;DR: This study aims to contribute to the literature by evaluating various machine learning algorithms that can quickly and effectively detect IoT network attacks, using a new dataset, Bot-IoT, to evaluate various detection algorithms.
Abstract: The Internet of Things (IoT) combines hundreds of millions of devices that are capable of interacting with each other with minimal user involvement. IoT is one of the fastest-growing areas in computing; however, the reality is that in the extremely hostile environment of the internet, IoT is vulnerable to numerous types of cyberattacks. To resolve this, practical countermeasures need to be established to secure IoT networks, such as network anomaly detection. Given that attacks cannot be wholly avoided, early detection of an attack is crucial for practical defense. Since IoT devices have low storage capacity and low processing power, traditional high-end security solutions for protecting an IoT system are not appropriate. Also, IoT devices are now connected without human intervention for longer periods. This implies that intelligent network-based security solutions, such as machine learning solutions, must be developed. Although many studies in recent years have discussed the use of Machine Learning (ML) solutions in attack detection problems, little attention has been given to the detection of attacks specifically in IoT networks. In this study, we aim to contribute to the literature by evaluating various machine learning algorithms that can quickly and effectively detect IoT network attacks. A new dataset, Bot-IoT, is used to evaluate various detection algorithms. In the implementation phase, seven different machine learning algorithms were used, and most of them achieved high performance. New features were extracted from the Bot-IoT dataset during the implementation and compared with studies from the literature, and the new features gave better results.

Journal ArticleDOI
TL;DR: The general architecture of a HAR system is presented, along with a description of its main components; the paper concludes with the challenges and issues of online versus offline processing, and of deep learning versus traditional machine learning, for human activity recognition based on accelerometer sensors.
Abstract: Human activity recognition is an important area of machine learning research, as it has many uses in different areas such as sports training, security, entertainment, ambient-assisted living, and health monitoring and management. The study of human activity recognition shows that researchers are interested mostly in the daily activities of humans. Therefore, the general architecture of a HAR system is presented in this paper, along with a description of its main components. The state of the art in human activity recognition based on accelerometers is surveyed. According to this survey, most recent research used deep learning for HAR, but it focused on CNNs even though other deep learning types have achieved satisfactory accuracy. The paper presents a two-level taxonomy according to the machine learning approach (either traditional or deep learning) and the processing mode (either online or offline). Forty-eight studies are compared in terms of recognition accuracy, classifier, activity types, and devices used. Finally, the paper discusses the challenges and issues of online versus offline processing, and of deep learning versus traditional machine learning, for human activity recognition based on accelerometer sensors.

Journal ArticleDOI
TL;DR: This empirical study showed that the proposed GRU-FCN model also outperforms the state-of-the-art classification performance on many univariate time series datasets without requiring additional supporting algorithms.
Abstract: Hybrid LSTM-fully convolutional networks (LSTM-FCN) for time series classification have produced state-of-the-art classification results on univariate time series. We empirically show that replacing the LSTM with a gated recurrent unit (GRU) to create a GRU-fully convolutional network hybrid model (GRU-FCN) can offer even better performance on many time series datasets without further changes to the model. Our empirical study showed that the proposed GRU-FCN model also outperforms the state-of-the-art classification performance on many univariate time series datasets without requiring additional supporting algorithms. Furthermore, since the GRU uses a simpler architecture than the LSTM, it has fewer training parameters, less training time, smaller memory storage requirements, and a simpler hardware implementation, compared to LSTM-based models.
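A Keras sketch of the GRU-FCN architecture is given below. The parallel-branch layout, dimension shuffle, and the 128/256/128 filter sizes follow the published LSTM-FCN design with the LSTM swapped for a GRU; the input length, class count and dropout rate are assumptions.

from tensorflow import keras
from tensorflow.keras import layers

def build_gru_fcn(timesteps, n_classes, gru_units=8):
    inp = layers.Input(shape=(timesteps, 1))

    # Recurrent branch: the paper's change is replacing the LSTM with a GRU.
    g = layers.Permute((2, 1))(inp)  # dimension shuffle, as in LSTM-FCN
    g = layers.GRU(gru_units)(g)
    g = layers.Dropout(0.8)(g)

    # Fully convolutional branch.
    c = inp
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        c = layers.Conv1D(filters, kernel, padding="same")(c)
        c = layers.BatchNormalization()(c)
        c = layers.Activation("relu")(c)
    c = layers.GlobalAveragePooling1D()(c)

    out = layers.Dense(n_classes, activation="softmax")(layers.concatenate([g, c]))
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_gru_fcn(timesteps=140, n_classes=5)  # sizes are placeholders
model.summary()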

Journal ArticleDOI
TL;DR: This paper focuses on cervical cancer prediction through different screening methods using data mining techniques, namely boosted decision tree, decision forest and decision jungle algorithms; performance evaluation was done on the basis of the AUROC (Area Under the Receiver Operating Characteristic) curve, accuracy, specificity and sensitivity.
Abstract: Cervical cancer remains an important cause of death worldwide because effective access to cervical screening methods is a big challenge. Data mining techniques, including decision tree algorithms, are used in biomedical research for predictive analysis. The imbalanced dataset was obtained from the dataset archive of the University of California, Irvine. The Synthetic Minority Oversampling Technique (SMOTE) was used to balance the dataset by increasing the number of minority-class instances. The dataset consists of patient age, number of pregnancies, contraceptive usage, smoking patterns and chronological records of sexually transmitted diseases (STDs). The Microsoft Azure machine learning tool was used for the simulation of results. This paper mainly focuses on cervical cancer prediction through different screening methods using data mining techniques, namely boosted decision tree, decision forest and decision jungle algorithms, with performance evaluation done on the basis of the AUROC (Area Under the Receiver Operating Characteristic) curve, accuracy, specificity and sensitivity. 10-fold cross-validation was used to validate the results, and the boosted decision tree gave the best results, providing very high prediction performance with 0.978 on the AUROC curve when the Hinselmann screening method was used. The results obtained by the other classifiers were significantly worse than those of the boosted decision tree.
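As a hedged sketch of the SMOTE-then-boost pipeline (using imbalanced-learn and scikit-learn's gradient-boosted trees as an open-source stand-in for Azure ML's boosted decision tree, with synthetic stand-in data):

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced stand-in for the UCI cervical cancer data
# (minority positive class of about 6%).
X, y = make_classification(n_samples=800, n_features=30, weights=[0.94], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))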

Journal ArticleDOI
TL;DR: This study reviews existing literature in order to identify the major issues of various healthcare stakeholders and to explore the features of blockchain technology that could resolve identified issues.
Abstract: Blockchain is an emerging field which works on the concept of a digitally distributed ledger and a consensus algorithm, removing all the threats of intermediaries. Its early applications were related to the finance sector, but the concept has since been extended to almost all major areas of research, including education, IoT, banking, supply chain, defense, governance, healthcare, etc. In the field of healthcare, stakeholders (providers, patients, payers, research organizations, and supply chain bearers) demand interoperability, security, authenticity, transparency, and streamlined transactions. Blockchain technology, built over the internet, has the potential to manage current healthcare data in a peer-to-peer and interoperable manner using a patient-centric approach that eliminates third parties. Using this technology, applications can be built to manage and share secure, transparent and immutable audit trails with reduced systematic fraud. This study reviews the existing literature in order to identify the major issues of various healthcare stakeholders and to explore the features of blockchain technology that could resolve the identified issues. However, there are some challenges and limitations of this technology which need to be the focus of future research.

Journal ArticleDOI
TL;DR: The paper found that artificial intelligence chatbots are very productive tools in the recruitment process and will be helpful in preparing recruitment strategies for industry.
Abstract: The purpose of this paper is to assess the influence of artificial intelligence chatbots on the recruitment process. The authors explore how chatbots deliver services to attract candidates and support their engagement in the recruitment process. The aim of the study is to identify the impact of chatbots across the recruitment process. The study is based entirely on secondary sources: conceptual papers, peer-reviewed articles, and websites are used to present the current paper. The paper found that artificial intelligence chatbots are very productive tools in the recruitment process and will be helpful in preparing recruitment strategies for industry. Additionally, they focus on resolving complex issues in the recruitment process. Although the amalgamation of artificial intelligence into the recruitment process is attracting increasing attention among researchers, there remains opportunity to explore the field further. The paper provides future research avenues in the field of chatbots and recruiters.

Journal ArticleDOI
TL;DR: This application uses deep learning technology with the Convolutional Neural Network method and the LeNet-5 architecture for classifying image data, achieving the highest success rates of 93% in training and 100% in testing.
Abstract: Melanoma is a type of skin cancer and is the most dangerous one because it causes most skin cancer deaths. Melanoma arises from melanocytes, the melanin-producing cells, so melanomas are generally brown or black in colour. Melanomas are mostly caused by exposure to ultraviolet radiation that damages the DNA of skin cells. Melanoma is often diagnosed manually by skilled doctors through visual inspection, analyzing the results of a dermoscopy examination and matching them against medical knowledge. The weakness of manual detection is that it is highly influenced by human subjectivity, which makes it inconsistent in certain conditions. Therefore, computer-assisted technology is needed to help classify the results of dermoscopy examinations and to deduce the results more accurately in a relatively faster time. The making of this application starts with problem analysis, followed by design, implementation, and testing. The application uses deep learning technology with the Convolutional Neural Network method and the LeNet-5 architecture for classifying image data. The experiment, which used 44 test images and varied the number of training images and epochs, achieved the highest success rates of 93% in training and 100% in testing when 176 training images and 100 epochs were used. This application was created using the Python programming language and the Keras library with a TensorFlow back-end.
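For reference, a LeNet-5-style classifier in Keras is sketched below for the two classes (melanoma vs. non-melanoma). The 32x32 grayscale input and activation choices follow the classic LeNet-5 description and are assumptions about this application's exact configuration.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(32, 32, 1)),          # assumed input size, as in classic LeNet-5
    layers.Conv2D(6, kernel_size=5, activation="tanh"),
    layers.AveragePooling2D(pool_size=2),
    layers.Conv2D(16, kernel_size=5, activation="tanh"),
    layers.AveragePooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),
    layers.Dense(84, activation="tanh"),
    layers.Dense(2, activation="softmax"),    # melanoma vs. non-melanoma
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=100, ...) on the dermoscopy data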

Journal ArticleDOI
TL;DR: The purpose of this research is to build a prototype providing real-time blood pressure data from patients' systolic and diastolic readings, to determine whether patients are suffering from symptoms of certain diseases, e.g. anemia, hypertension, or even more chronic diseases.
Abstract: Wireless Sensor Networks have grown rapidly, e.g. using the Zigbee RF module combined with the Raspberry Pi 3, and the motivation of this research is to build such a Wireless Sensor Network (WSN). This research discusses how well the sensor nodes work, how the Quality of Service (QoS) of the sensor nodes is analyzed, and the role of the Raspberry Pi 3 as an internet gateway that sends blood pressure data to the database for real-time display on the internet. With this system, it is expected that patients can check their blood pressure from home without needing to visit the hospital, while the data is quickly and accurately received by hospital officers, doctors, and medical personnel. The purpose of this research is to build a prototype providing real-time blood pressure (mmHg) data from patients' systolic and diastolic readings, to determine whether patients are suffering from symptoms of certain diseases, e.g. anemia, hypertension, or even more chronic diseases. Furthermore, Zigbee has the task of sending the blood pressure (mmHg) data in real time to the database and then on to the internet, from the Zigbee end-device to the ZigBee coordinator. For Zigbee communication at a distance of 5 meters, RSSI simulations show a value of -29 dBm while the experiment shows -40 dBm; at a distance of 100 m, RSSI shows -55 dBm (simulation) and -86 dBm (experiment).
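The reported RSSI pairs are consistent with a standard log-distance path-loss model, RSSI(d) = RSSI(d0) - 10*n*log10(d/d0); that model is a textbook assumption, not something stated in the abstract. Fitting the exponent n from the two reported distances:

import math

def path_loss_exponent(d0, rssi0, d1, rssi1):
    # Solve RSSI(d1) = RSSI(d0) - 10*n*log10(d1/d0) for n.
    return (rssi0 - rssi1) / (10 * math.log10(d1 / d0))

# Values reported in the abstract at 5 m and 100 m.
n_sim = path_loss_exponent(5, -29, 100, -55)  # simulation
n_exp = path_loss_exponent(5, -40, 100, -86)  # experiment

print(f"simulation exponent n ~ {n_sim:.2f}")  # ~2.0, close to free space
print(f"experiment exponent n ~ {n_exp:.2f}")  # ~3.5, typical of obstructed links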

Journal ArticleDOI
TL;DR: A Thai agricultural product traceability system using blockchain and the Internet of Things could have a huge impact on food traceability, make supply chain management more reliable, and rebuild public awareness in Thailand of food safety and quality control.
Abstract: In this paper, we successfully designed and developed a Thai agricultural product traceability system using blockchain and the Internet of Things. Blockchain, a distributed database, is used in our proposed traceability system to enhance transparency and data integrity. OurSQL is added as another layer to ease querying of the blockchain database, making the proposed system user-friendly in a way that ordinary blockchain databases are not. A website and an Android application have been developed to show the tracking information of the products. The blockchain database coupled with the Internet of Things gives a number of benefits for our traceability system, because all of the collected information arrives in real time and is kept in a very secure database. Our system could have a huge impact on food traceability, make supply chain management more reliable, and rebuild public awareness in Thailand of food safety and quality control.

Journal ArticleDOI
TL;DR: This research developed a system in which six different machine learning algorithms, including Naive Bayes (NB), Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN) and Random Forest (RF), are implemented, and found that the system provided significant performance gains in every case compared to the base system.
Abstract: Over time, textual information on the World Wide Web (WWW) has increased exponentially, opening up potential research in the fields of machine learning (ML) and natural language processing (NLP). Sentiment analysis of scientific articles is a very trendy and interesting topic nowadays. The main purpose of this research is to help researchers identify quality research papers based on sentiment analysis. In this research, sentiment analysis of scientific articles using citation sentences is carried out on an existing annotated corpus. This corpus consists of 8736 citation sentences. Noise was removed from the data using different normalization rules in order to clean the corpus. To perform classification on this dataset, we developed a system in which six different machine learning algorithms, including Naive Bayes (NB), Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN) and Random Forest (RF), are implemented. The accuracy of the system is then evaluated using different evaluation metrics, e.g. F-score and accuracy score. To improve the system's accuracy, additional feature selection techniques, such as lemmatization, n-grams, tokenization, and stop-word removal, were applied, and we found that our system provided significant performance gains in every case compared to the base system. Our method achieved a maximum improvement of about 9% over the base system.

Journal ArticleDOI
TL;DR: An empirical study was conducted on the effectiveness of deep learning and ensemble methods in NIDS, contributing to knowledge by developing a NIDS through the implementation of machine- and deep-learning algorithms in various forms on a recent network dataset that contains more recent attack types and attacker behaviours (the UNSW-NB15 dataset).
Abstract: Cyber-security, as an emerging field of research, involves the development and management of techniques and technologies for the protection of data, information and devices. The protection of network devices from attacks, threats and vulnerabilities, both internal and external, has led to ceaseless research into Network Intrusion Detection Systems (NIDS). Therefore, an empirical study was conducted on the effectiveness of deep learning and ensemble methods in NIDS, contributing to knowledge by developing a NIDS through the implementation of machine- and deep-learning algorithms in various forms on a recent network dataset that contains more recent attack types and attacker behaviours (the UNSW-NB15 dataset). This research involves the implementation of a deep-learning algorithm, Long Short-Term Memory (LSTM), and two ensemble methods: a homogeneous method (using an optimised bagged Random Forest algorithm) and a heterogeneous method (an averaged-probability voting ensemble). The heterogeneous ensemble was based on four standard classifiers with different computational characteristics (Naive Bayes, kNN, RIPPER and Decision Tree). The respective model implementations were applied to the UNSW-NB15 dataset in two forms: as a two-class attack dataset and as a multi-attack dataset. LSTM achieved a detection accuracy rate of 80% on the two-class attack dataset and 72% on the multi-attack dataset. The homogeneous method had accuracy rates of 98% and 87.4% on the two-class attack dataset and the multi-attack dataset, respectively. Moreover, the heterogeneous model had detection accuracy rates of 97% and 85.23% on the two-class attack dataset and the multi-attack dataset, respectively.

Journal ArticleDOI
TL;DR: A survey is provided of available research utilizing the heuristic technique based on machine learning, which has proven successful in several areas involving the processing of huge amounts of data, to counter cyber-attacks.
Abstract: Diverse malware programs are created daily to attack computer systems without the knowledge of their users. While some authors of these programs intend to steal secret information, others try quietly to prove their competence and aptitude. The traditional signature-based static technique is primarily used by anti-malware programs to counter these malicious codes. Although this technique excels at blocking known malware, it can never intercept new malware. The dynamic technique, which is often based on running the executable in a virtual environment, may be introduced by a number of anti-malware programs. The major drawbacks of this technique are the long scanning time and the high consumption of resources. Nowadays, recent programs may utilize a third technique: the heuristic technique based on machine learning, which has proven successful in several areas involving the processing of huge amounts of data. In this paper, we provide a survey of available research utilizing this latter technique to counter cyber-attacks. We explore the different training phases of machine learning classifiers for malware detection. The first phase is the extraction of features from the input files according to previously chosen feature types. The second phase is the rejection of less important features and the selection of the most important ones, which better represent the data contained in the input files. The last phase is the injection of the selected features into a chosen machine learning classifier, so that it can learn to distinguish between benign and malicious files and give accurate predictions when confronted with previously unseen files. The paper ends with a critical comparison of the studied approaches according to their performance in malware detection.

Journal ArticleDOI
TL;DR: The findings of the review show that the initial IoT architectures did not provide a comprehensive meaning for IoT that describes its nature, whereas recent IoT architectures convey a comprehensive meaning of IoT, starting from data collection, followed by data transmission and processing, and ending with data dissemination.
Abstract: The Internet of Things (IoT) has become one of the most prominent technologies the world is witnessing nowadays. It provides great solutions to humanity in many significant fields of life. IoT refers to a collection of sensors or objects with the capability of communicating with each other through the internet without human intervention. Currently, there is no standard IoT architecture. As it is in its infancy, IoT is surrounded by numerous security and privacy concerns. Thus, to avoid such concerns that may hinder its deployment, an IoT architecture has to be carefully designed to incorporate security and privacy solutions. In this paper, a systematic literature review was conducted to trace the evolution of IoT architectures from their initial development in 2008 until 2018. The comparison among these architectures is based on the architectural stack, covered issues, the technology used, and considerations of security and privacy aspects. The findings of the review show that the initial IoT architectures did not provide a comprehensive meaning for IoT that describes its nature, whereas recent IoT architectures convey a comprehensive meaning of IoT, starting from data collection, followed by data transmission and processing, and ending with data dissemination. Moreover, the findings reveal that IoT architecture has evolved gradually across the years, improving the architectural stack with new solutions to mitigate IoT challenges such as scalability, interoperability, extensibility, and management, but with a lack of consideration of security solutions. The findings disclose that none of the discussed IoT architectures considers privacy concerns, which are indeed a critical factor of IoT sustainability and success. Therefore, there is an inevitable need to consider security and privacy solutions when designing IoT architectures.