
Showing papers in "International Journal of Advanced Computer Science and Applications in 2020"


Journal ArticleDOI
TL;DR: The results of the study show that, regardless of whether the dataset is balanced or imbalanced, the classification models built by applying the two splitting indices, GINI index and information gain, give the same accuracy.
Abstract: Decision tree is a supervised machine learning algorithm suitable for solving classification and regression problems. Decision trees are built recursively by applying split conditions at each node that divide the training records into subsets whose output variable belongs to the same class. The process starts from the root node of the decision tree and progresses by applying split conditions at each non-leaf node, resulting in increasingly homogeneous subsets. However, achieving perfectly homogeneous subsets is rarely possible. Therefore, the goal at each node is to identify an attribute, and a split condition on that attribute, that minimizes the mixing of class labels, thus resulting in nearly pure subsets. Several splitting indices have been proposed to evaluate the goodness of a split, the most common being GINI index and information gain. The aim of this study is to conduct an empirical comparison of GINI index and information gain. Classification models are built using the decision tree classifier algorithm by applying GINI index and information gain individually. The classification accuracy of the models is estimated using different metrics such as the confusion matrix, overall accuracy, per-class accuracy, recall and precision. The results of the study show that, regardless of whether the dataset is balanced or imbalanced, the classification models built by applying the two splitting indices give the same accuracy. In other words, the choice of splitting index has no impact on the performance of the decision tree classifier algorithm.
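The two split criteria compared above can be stated in a few lines. A minimal sketch (not the paper's code): Gini impurity, Shannon entropy, and the impurity decrease of a binary split, which equals information gain when entropy is the impurity measure.

```python
from collections import Counter
from math import log2

def gini(labels):
    """Gini impurity: 1 - sum(p_i^2) over class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy: -sum(p_i * log2(p_i)) over class proportions."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def split_score(parent, left, right, impurity):
    """Impurity decrease of a binary split; with `entropy` this is information gain."""
    n = len(parent)
    child = (len(left) / n) * impurity(left) + (len(right) / n) * impurity(right)
    return impurity(parent) - child

parent = ['a'] * 5 + ['b'] * 5
left, right = ['a'] * 4 + ['b'], ['b'] * 4 + ['a']
print(round(split_score(parent, left, right, gini), 3))     # Gini decrease
print(round(split_score(parent, left, right, entropy), 3))  # information gain
```

Both criteria rank this split as a clear improvement over the 50/50 parent, which is consistent with the study's finding that the choice between them rarely changes the tree that gets built.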

75 citations


Journal ArticleDOI
TL;DR: An analysis of students’ feedback from the Centre of Pre-University Studies, Universiti Malaysia Sarawak (UNIMAS), Malaysia, during the transition to fully online learning finds that online learning would not be a hindrance, but a blessing toward academic excellence in the face of a calamity like the COVID-19 pandemic.
Abstract: In the last decade, online learning has grown rapidly. However, the outbreak of coronavirus (COVID-19) has caused learning institutions to embrace online learning due to the lockdown and campus closure. This paper presents an analysis of students’ feedback (n=354) from the Centre of Pre-University Studies (PPPU), Universiti Malaysia Sarawak (UNIMAS), Malaysia, during the transition to fully online learning. Three phases of online surveys were conducted to measure the learners’ acceptance of the migration and to identify related problems. The results show increased positivity among the students in their view of teaching and learning in STEM during the pandemic. It is found that online learning would not be a hindrance, but a blessing toward academic excellence in the face of a calamity like the COVID-19 pandemic. The suggested future research directions will be of interest to educators, academics, and the research community.

54 citations


Journal ArticleDOI
TL;DR: This paper compares the performance of three feature engineering techniques and eight machine learning algorithms on a publicly available dataset with three distinct classes, and shows that bigram features used with the support vector machine algorithm performed best, with 79% overall accuracy.
Abstract: The increasing use of social media and information sharing has brought major benefits to humanity. However, it has also given rise to a variety of challenges, including the spreading and sharing of hate speech messages. To address this emerging issue on social media sites, recent studies have employed a variety of feature engineering techniques and machine learning algorithms to automatically detect hate speech messages on different datasets. However, to the best of our knowledge, there is no study that compares a variety of feature engineering techniques and machine learning algorithms to evaluate which combination performs best on a standard, publicly available dataset. Hence, the aim of this paper is to compare the performance of three feature engineering techniques and eight machine learning algorithms on a publicly available dataset with three distinct classes. The experimental results showed that bigram features used with the support vector machine algorithm performed best, with 79% overall accuracy. Our study has practical implications and can be used as a baseline in the area of automatic hate speech detection. Moreover, the results of the different comparisons can serve as state-of-the-art baselines against which future research on automated text classification techniques can be compared.
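A bigram feature representation like the one that performed best above can be sketched in a few lines of plain Python (an illustration, not the paper's pipeline); the resulting count vectors over a shared vocabulary are the kind of input a linear SVM would be trained on.

```python
from collections import Counter

def bigram_features(text):
    """Count adjacent word pairs (bigrams) in one message."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

def vectorize(messages):
    """Build a shared bigram vocabulary and per-message count vectors."""
    feats = [bigram_features(m) for m in messages]
    vocab = sorted({b for f in feats for b in f})
    # Counter returns 0 for absent bigrams, giving dense count vectors
    return vocab, [[f[b] for b in vocab] for f in feats]

vocab, X = vectorize(["you are great", "you are awful people"])
print(vocab)
print(X)
```

Because bigrams keep adjacent-word context ("you are" vs. the isolated words "you" and "are"), they often separate hateful from neutral phrasing better than unigrams, which is consistent with the comparison reported above.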

45 citations


Journal ArticleDOI
TL;DR: The proposed system is based on collecting environmental wireless sensor network data from the forest and predicting the occurrence of a forest fire using artificial intelligence, more particularly Deep Learning (DL) models.
Abstract: Global warming mechanically increases the risk of fires starting, and the number of forest fires is increasing and will continue to increase. To better support firefighters on the ground, we present in this work a system for early detection of forest fires. This system is more precise than traditional surveillance approaches such as lookout towers and satellite surveillance. The proposed system is based on collecting environmental wireless sensor network data from the forest and predicting the occurrence of a forest fire using artificial intelligence, more particularly Deep Learning (DL) models. Such a system, based on the concept of the Internet of Things (IoT), combines a Low Power Wide Area Network (LPWAN), fixed or mobile sensors, and a well-suited deep learning model. Evaluating and comparing several deep learning models enabled us to show the feasibility of an autonomous, real-time environmental monitoring platform for dynamic forest fire risk factors.

43 citations


Journal ArticleDOI
TL;DR: This study argues that resilience is a process of bouncing back to the previous condition after facing an adverse effect, and focuses on the integrated function of the adaptive, absorptive and transformative capacity of a social unit, such as an individual, community or state, in facing a natural disaster.
Abstract: The adverse effects of climate change are gradually increasing all over the world, and developing countries suffer the most. Big data has the potential to be an effective tool for devising an appropriate adaptation strategy and enhancing people’s resilience. This study aims to explore the potential of big data for formulating a proper strategy against climate change effects and for enhancing people’s resilience in the face of those effects. A systematic review of the literature of the last ten years has been conducted. This study argues that resilience is a process of bouncing back to the previous condition after facing an adverse effect. It also focuses on the integrated function of the adaptive, absorptive and transformative capacity of a social unit, such as an individual, community or state, in facing a natural disaster. Big data technologies have the capacity to provide information on upcoming issues, current issues and recovery stages of the adverse effects of climate change. The findings of this study will enable policymakers and related stakeholders to adopt appropriate adaptation strategies for enhancing the resilience of people in the affected areas.

37 citations


Journal ArticleDOI
TL;DR: A critical analysis of the vision of 6G wireless communication and its network structure is presented, outlining a number of important technical challenges and some possible solutions related to 6G, including physical layer transmission procedures, network designs and security methods.
Abstract: With the accelerated evolution of smart terminals and the rise of new applications, wireless data traffic has increased sharply, and current cellular networks (even 5G) cannot fully meet the rapidly emerging technical requirements. A new framework for wireless communication, the sixth generation (6G) framework, supported by artificial intelligence, is anticipated to be deployed between 2027 and 2030. This paper presents a critical analysis of the vision of 6G wireless communication and its network structure; it also outlines a number of important technical challenges and some possible solutions related to 6G, including physical layer transmission procedures, network designs and security methods.

34 citations


Journal ArticleDOI
TL;DR: The performance of several machine learning methods, with a focus on the CatBoost classifier algorithm, is discussed for both loan approval and staff promotion, and the algorithm’s performance is compared with other classifiers.
Abstract: Machine learning and data-driven techniques have become very prominent and significant in several areas in recent times. In this paper, we discuss the performance of some machine learning methods, taking the CatBoost classifier algorithm as a case study on both loan approval and staff promotion. We compared the algorithm’s performance with other classifiers. After some feature engineering on both datasets, the CatBoost algorithm outperformed the other classifiers implemented in this paper. In the first analysis, features such as loan amount, loan type, applicant income, and loan purpose are major factors for predicting mortgage loan approvals. In the second analysis, features such as division, foreign schooling, geopolitical zone, qualification, and working years had a high impact on staff promotion. Hence, based on the performance of CatBoost in both analyses, we recommend this algorithm for better prediction of loan approvals and staff promotion.

34 citations


Journal ArticleDOI
TL;DR: The proposed ensemble approach, a voting-based model, is compelling in improving the prediction accuracy of weak classifiers and achieved adequate performance in analyzing the risk of heart disease.
Abstract: Machine Learning (ML) techniques, a branch of artificial intelligence, are commonly used to solve many problems in data science. The major use of ML is to predict outcomes based on existing data: using an established dataset, the machine learns patterns and applies them to unfamiliar data sets to predict the outcome. Some classification algorithms achieve satisfactory prediction accuracy, while others perform with limited accuracy. Different ML and Deep Learning (DL) networks based on ANNs have been widely recommended for the detection of heart disease in previous research. In this paper, we used the UCI Heart Disease dataset to test conventional ML techniques (i.e. random forest, support vector machine, K-nearest neighbor) as well as deep learning models (i.e. long short-term memory and gated recurrent unit neural networks). To improve the accuracy of weak algorithms, we explore a voting-based model that combines multiple classifiers. A systematic approach was used to determine how the ensemble technique can be applied to improve accuracy in heart disease prediction. The proposed ensemble approach, a voting-based model, is compelling in improving the prediction accuracy of weak classifiers and achieved adequate performance in analyzing the risk of heart disease. A maximum accuracy improvement of 2.1% for weak classifiers was attained with the help of the ensemble voting-based model.
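The voting scheme described above can be illustrated with a minimal plurality-vote sketch; the per-classifier predictions below are hypothetical, not the paper's results.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions by plurality vote, one vote per model."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# three (hypothetical) classifiers' predictions on five patients: 1 = at risk
rf  = [1, 0, 1, 1, 0]
svm = [1, 1, 0, 1, 0]
knn = [0, 0, 1, 1, 1]
print(majority_vote([rf, svm, knn]))  # [1, 0, 1, 1, 0]
```

The point of such an ensemble is that a patient misclassified by one weak model is often recovered by the other two, which is how a voting model can lift accuracy over its weakest members.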

34 citations


Journal ArticleDOI
TL;DR: Simulation results show that the integrated IoT-Blockchain scheme is more efficient, uses less energy, improves throughput and enhances network lifetime.
Abstract: Blockchain is an emerging field of study in a number of applications and domains. Especially when combined with the Internet of Things (IoT), it becomes truly transformative, opening up new business models, improving engagement and revolutionizing many sectors, including agriculture. IoT devices are intelligent and have critical capabilities, but they are low-powered, have little storage, and face many challenges when used in isolation. Maintaining the network and consuming IoT energy through redundant or fabricated data transfers leads to high energy consumption and reduces the lifetime of the IoT network. Therefore, an appropriate routing scheme should be in place to ensure consistency and energy efficiency in an IoT network. This research proposes an efficient routing scheme integrating IoT with Blockchain, in which distributed nodes work together to use the communication links efficiently. The proposed protocol uses smart contracts within heterogeneous IoT networks to find a route to the Base Station (BS). Each node can establish a route from an IoT node to the sink and then to the base station, and the protocol permits IoT devices to collaborate during transmission. The proposed routing protocol removes redundant data, blocks attacks on the IoT architecture, lowers energy consumption and improves the lifetime of the network. The performance of this scheme is compared with our existing IoT-based Agriculture scheme and LEACH in Agriculture. Simulation results show that the integrated IoT-Blockchain scheme is more efficient, uses less energy, improves throughput and enhances network lifetime.

33 citations


Journal ArticleDOI
TL;DR: An XGBoost classifier is used to predict four personality traits of the Myers-Briggs Type Indicator model, namely Introversion-Extroversion, iNtuition-Sensing, Feeling-Thinking and Judging-Perceiving, from input text, providing the basis for developing a personality identification system.
Abstract: Personality refers to the distinctive set of characteristics of a person that affect their habits, behaviors, attitude and patterns of thought. Text available on social networking sites provides an opportunity to recognize an individual’s personality traits automatically. In this work, a machine learning technique, the XGBoost classifier, is used to predict four personality traits of the Myers-Briggs Type Indicator (MBTI) model, namely Introversion-Extroversion (I-E), iNtuition-Sensing (N-S), Feeling-Thinking (F-T) and Judging-Perceiving (J-P), from input text. A publicly available benchmark dataset from Kaggle is used in the experiments. The skewness of the dataset is the main issue associated with prior work; it is minimized here by applying a re-sampling technique, namely random over-sampling, resulting in better performance. To further explore personality from text, pre-processing techniques including tokenization, word stemming, stop word elimination and feature selection using TF-IDF are also exploited. This work provides the basis for developing a personality identification system which could assist organizations in recruiting and selecting appropriate personnel, and in improving their business by knowing the personality and preferences of their customers. The results obtained by all classifiers across all personality traits are good, but the performance of the XGBoost classifier is outstanding, achieving more than 99% precision and accuracy for different traits.
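The TF-IDF weighting mentioned above can be sketched in plain Python using the standard tf * log(N/df) formulation; this is an illustration of the technique, not the paper's exact implementation.

```python
from collections import Counter
from math import log

def tfidf(docs):
    """Term frequency-inverse document frequency vectors for tokenized posts."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    return [{t: (c / len(doc)) * log(n / df[t])
             for t, c in Counter(doc).items()} for doc in tokenized]

vecs = tfidf(["i think deeply", "i feel deeply happy"])
print(vecs[1])
```

Words appearing in every post (here "i" and "deeply") receive weight zero, while distinctive words keep positive weight; this is exactly why TF-IDF works as a feature selector for trait-bearing vocabulary.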

31 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed URL attributes and behaviors help improve the ability to detect malicious URLs significantly, and the proposed system may be considered an optimized and user-friendly solution for malicious URL detection.
Abstract: Currently, the risk of network information insecurity is increasing rapidly in both number and level of danger. The methods mostly used by hackers today attack end-to-end technology and exploit human vulnerabilities. These techniques include social engineering, phishing, pharming, etc. One of the steps in conducting these attacks is to deceive users with malicious Uniform Resource Locators (URLs). As a result, malicious URL detection is of great interest nowadays. Several scientific studies have presented methods to detect malicious URLs based on machine learning and deep learning techniques. In this paper, we propose a malicious URL detection method using machine learning techniques based on our proposed URL behaviors and attributes. Moreover, big data technology is also exploited to improve the capability of detecting malicious URLs based on abnormal behaviors. In short, the proposed detection system consists of a new set of URL features and behaviors, a machine learning algorithm, and a big data technology. The experimental results show that the proposed URL attributes and behaviors help improve the ability to detect malicious URLs significantly. This suggests that the proposed system may be considered an optimized and user-friendly solution for malicious URL detection.
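Lexical URL attributes of the kind described above can be extracted with the standard library alone; the specific features below are common illustrative choices, not necessarily the paper's proposed set.

```python
from urllib.parse import urlparse

def url_features(url):
    """Lexical attributes often used as inputs to a malicious-URL classifier."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "length": len(url),                                  # long URLs are suspicious
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_at_symbol": "@" in url,                         # classic phishing trick
        "uses_https": parsed.scheme == "https",
        "host_is_ip": host.replace(".", "").isdigit(),       # raw-IP hosts are a red flag
    }

print(url_features("http://192.168.0.1/login@secure-update"))
```

Each dictionary becomes one row of the training matrix, so feature extraction like this is the bridge between raw URL strings and the machine learning algorithm the system trains.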

Journal ArticleDOI
TL;DR: The design of a BLDC motor control system in MATLAB/SIMULINK software using a Proportional Integral Derivative (PID) algorithm that can more effectively improve the speed control of these types of motors.
Abstract: At present, green technology is a major concern in every country around the world, and electricity is a clean energy that encourages the adoption of this technology. The main applications of electricity are realized through the use of electric motors: electric power is converted to mechanical energy using a motor, which is to say that the major applications of electrical energy are accomplished through electric motors. Brushless direct current (BLDC) motors have become very attractive in many applications due to their low maintenance costs and compact structure, and they can be substituted to make industries more dynamic. To achieve better performance, a BLDC motor requires a control drive to regulate its speed and torque. This paper describes the design of a BLDC motor control system in MATLAB/SIMULINK software using a Proportional Integral Derivative (PID) algorithm that can more effectively improve the speed control of these types of motors. The purpose of the paper is to provide an overview of the functionality and design of the PID controller. Finally, the study carries out a series of tests supporting that the PID regulator is far more applicable, better operational, and effective in achieving satisfactory control performance compared to other controllers.
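The discrete PID law under discussion can also be sketched outside Simulink. The toy simulation below uses hypothetical gains and a generic first-order motor model, not the paper's plant; it only shows the controller's structure driving a speed toward a setpoint.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a discrete PID controller.
    `state` carries (error integral, previous error)."""
    integral, prev = state
    integral += error * dt
    derivative = (error - prev) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy first-order motor: speed follows the control signal with lag tau.
setpoint, speed, tau, dt = 100.0, 0.0, 0.5, 0.01
state = (0.0, setpoint - speed)   # seed prev_error to avoid a derivative kick
for _ in range(2000):             # 20 simulated seconds
    u, state = pid_step(setpoint - speed, state, kp=2.0, ki=5.0, kd=0.05, dt=dt)
    speed += (u - speed) / tau * dt
print(round(speed, 1))
```

The integral term is what removes the steady-state speed error that a proportional-only controller would leave; the derivative term damps the approach to the setpoint.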

Journal ArticleDOI
TL;DR: The focus of this research paper is to pinpoint the emerging trends in IoT, CC, and BGD to provide directions for researchers and practitioners about how to leverage the benefits of combining these approaches.
Abstract: Although the Internet of Things (IoT), cloud computing (CC), and Big Data (BGD) are three different approaches that have evolved independently of each other, over time they are becoming increasingly interconnected. The convergence of IoT, CC, and BGD provides new opportunities in various real-time applications, including telecommunication, healthcare, business, education, science, and engineering. Together, these approaches face various challenges during data gathering, processing, and management. The focus of this research paper is to pinpoint the emerging trends in IoT, CC, and BGD; the convergence of these approaches and their impact on various real-time applications; the benefits and challenges associated with these approaches; current industry trends; and future research directions, with a special focus on the healthcare domain. The paper also provides a conceptual framework that integrates IoT, CC, and BGD and provides an IoT-centric cloud infrastructure using BGD. Finally, the paper concludes by providing directions for researchers and practitioners on how to leverage the benefits of combining these approaches.

Journal ArticleDOI
TL;DR: The study results show that DistilBERT transformer outperformed the baseline algorithms while allowing parallelization, and was compared against attention-based recurrent neural networks and other transformer baselines for hate speech detection in Twitter documents.
Abstract: Social media networks such as Twitter are increasingly utilized to propagate hate speech while facilitating mass communication. Recent studies have highlighted a strong correlation between hate speech propagation and hate crimes such as xenophobic attacks. Due to the size of social media and the consequences of hate speech in society, it is essential to develop automated methods for hate speech detection on different social media platforms. Several studies have investigated the application of different machine learning algorithms for hate speech detection. However, the performance of these algorithms is generally hampered by inefficient sequence transduction. Vanilla recurrent neural networks and recurrent neural networks with attention have been established as state-of-the-art methods for sequence modeling and sequence transduction tasks. Unfortunately, these methods suffer from intrinsic problems such as long-term dependency and lack of parallelization. In this study, we investigated a transformer-based method and tested it on a publicly available multiclass hate speech corpus containing 24783 labeled tweets. The DistilBERT transformer method was compared against attention-based recurrent neural networks and other transformer baselines for hate speech detection in Twitter documents. The study results show that the DistilBERT transformer outperformed the baseline algorithms while allowing parallelization.

Journal ArticleDOI
TL;DR: This work introduces an architecture with a forecasting model for IoT systems that monitor water quality in aquaculture and fisheries, and proposes deep learning with the Long Short-Term Memory (LSTM) algorithm for forecasting the water-quality indicators.
Abstract: Global climate change and water pollution have caused many problems for farmers raising fish and shrimp; for example, shrimp and fish may die early, before harvest. Monitoring and managing water quality to help farmers tackle this problem is essential. Water quality monitoring is important when developing IoT systems, especially for aquaculture and fisheries. By monitoring real-time sensor indicators (such as salinity, temperature, pH, and dissolved oxygen - DO) and forecasting them to get early warnings, we can manage the quality of the water, thus ensuring both quality and quantity in shrimp and fish raising. In this work, we introduce an architecture with a forecasting model for IoT systems that monitor water quality in aquaculture and fisheries. Since these indicators are collected every day, they become sequential/time series data, so we propose deep learning with the Long Short-Term Memory (LSTM) algorithm for forecasting them. Experimental results on several data sets show that the proposed approach works well and can be applied in real systems.
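Before an LSTM can forecast such indicators, the daily readings must be framed as supervised windows of past values predicting the next value. A minimal sketch of that framing step, with illustrative pH values rather than the paper's data:

```python
def make_windows(series, lookback):
    """Frame a daily sensor series as supervised pairs:
    `lookback` past readings -> the next reading,
    the (samples, timesteps) layout an LSTM forecaster consumes."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

ph = [7.9, 8.0, 8.1, 8.0, 7.8, 7.6]
X, y = make_windows(ph, lookback=3)
print(X[0], "->", y[0])  # [7.9, 8.0, 8.1] -> 8.0
```

Each indicator (salinity, temperature, pH, DO) gets the same treatment, and a forecast that trends toward a dangerous threshold is what triggers the early warning.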

Journal ArticleDOI
TL;DR: This study has developed “Nabiha,” a chatbot that can hold conversations with Information Technology (IT) students at King Saud University using the Saudi Arabic dialect, making it the first Saudi chatbot that uses the Saudi dialect.
Abstract: Nowadays, we are living in an era of technology and innovation that impacts various fields, including the sciences. In computing and technology, many outstanding and attractive programs and applications have emerged, including programs that try to mimic human behavior. A chatbot is an example of an artificial intelligence-based computer program that tries to simulate human behavior by conducting a conversation and interacting with users in natural language. Over the years, various chatbots have been developed for many languages (such as English, Spanish, and French) to serve many fields (such as entertainment, medicine, education, and commerce). Unfortunately, Arabic chatbots are rare. To our knowledge, there is no previous work on developing a chatbot for the Saudi Arabic dialect. In this study, we have developed “Nabiha,” a chatbot that can hold conversations with Information Technology (IT) students at King Saud University using the Saudi Arabic dialect. Nabiha is therefore the first Saudi chatbot that uses the Saudi dialect. To facilitate access to Nabiha, we have made it available on different platforms: Android, Twitter, and the Web. When a student wants to talk with Nabiha, she can download the application, talk with her on Twitter, or visit her website. Nabiha was tested by the students of the IT department, and the results were somewhat satisfactory, considering the difficulty of the Arabic language in general and the Saudi dialect in particular.

Journal ArticleDOI
TL;DR: The authors present distributed Blockchain-based security for Industry 4.0 applications in an SDN-IoT enabled environment, and offer a combination of the IoT, SDN and Blockchain technologies to properly improve the security and privacy of Industry 4.0 services.
Abstract: The concept of Industry 4.0 is a newly emerging focus of research throughout the world. However, it faces many challenges in controlling data, which can be addressed with various technologies, such as the Internet of Things (IoT), Big Data, Artificial Intelligence (AI), Software Defined Networking (SDN), and Blockchain (BC), for managing data securely. Further, the complexity of sensors, appliances and sensor networks connecting to the internet within the Industry 4.0 model has created the challenge of designing systems, infrastructure and smart applications capable of continuously analyzing the data produced. To address this, the authors present distributed Blockchain-based security for Industry 4.0 applications in an SDN-IoT enabled environment, where the Blockchain provides robustness, privacy and confidentiality for the desired system. In addition, the SDN-IoT incorporates the different services of Industry 4.0 with more security as well as flexibility. Furthermore, the authors offer a combination of the IoT, SDN and Blockchain technologies to properly improve the security and privacy of Industry 4.0 services. Finally, the authors evaluate the performance and security of the presented architecture in a variety of ways.

Journal ArticleDOI
TL;DR: Different machine learning approaches are proposed for predicting the grade of a student in a course, in the context of the private universities of Bangladesh, where a weighted voting classifier outperforms the base classifiers.
Abstract: Every year thousands of students are admitted to different universities in Bangladesh. Among them, a large number of students complete their graduation with low scores, which affects their careers. By predicting their grades before the final examination, they can take essential measures to improve their grades. This article proposes different machine learning approaches for predicting the grade of a student in a course, in the context of the private universities of Bangladesh. Using different features that affect a student's result, seven different classifiers have been trained, namely: Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Logistic Regression, Decision Tree, AdaBoost, Multilayer Perceptron (MLP), and Extra Tree Classifier, for classifying the students’ final grades into four quality classes: Excellent, Good, Poor, and Fail. Afterwards, the outputs of the base classifiers have been aggregated using a weighted voting approach to attain better results. This study achieved an accuracy of 81.73%, with the weighted voting classifier outperforming the base classifiers.
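The weighted voting aggregation can be sketched as follows; the base-classifier outputs and weights (e.g. validation accuracies) are hypothetical, not the study's actual models.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Weighted voting: each base classifier's vote counts in proportion
    to its weight (for instance, its validation accuracy)."""
    results = []
    for votes in zip(*predictions):       # one tuple of votes per student
        score = defaultdict(float)
        for grade, w in zip(votes, weights):
            score[grade] += w
        results.append(max(score, key=score.get))
    return results

# hypothetical grade predictions from three of the base classifiers
svm_p = ["Good", "Poor", "Excellent"]
knn_p = ["Good", "Fail", "Good"]
mlp_p = ["Poor", "Fail", "Excellent"]
print(weighted_vote([svm_p, knn_p, mlp_p], weights=[0.82, 0.74, 0.79]))
```

Unlike plain majority voting, weighting lets a stronger classifier break ties in its favor, which is why the aggregated model can beat every individual base classifier.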

Journal ArticleDOI
TL;DR: This paper formulates a research framework for predicting the antecedents of smart mobility by using a SEM-Neural hybrid approach for preliminary data analysis, suggesting a broader approach to investigating individual-level technology acceptance.
Abstract: A smart city is synchronized with its digital environment, and its transportation system is vitalized with RFID sensors, the Internet of Things (IoT) and Artificial Intelligence. However, without a behavioral assessment of the technology by users, the ultimate usefulness of smart mobility cannot be achieved. This paper formulates a research framework for predicting the antecedents of smart mobility by using a SEM-Neural hybrid approach for preliminary data analysis. This research took smart mobility service adoption in Malaysia as its study perspective and applied the Technology Acceptance Model (TAM) as its theoretical basis. An extended TAM model was hypothesized with five external factors (digital dexterity, IoT service quality, intrusiveness concerns, social electronic word of mouth and subjective norm). The data was collected through a pilot survey in Klang Valley, Malaysia. The responses were then analyzed for reliability, validity and model accuracy. Finally, the causal relationships were explained by Structural Equation Modeling (SEM) and Artificial Neural Networking (ANN). The paper shares a better understanding of road technology acceptance with all stakeholders, helping them refine, revise and update their policies. The proposed framework suggests a broader approach to investigating individual-level technology acceptance.

Journal ArticleDOI
TL;DR: A CVRP optimization model, which contains two main processes of clustering and optimization, based on a discrete hybrid evolutionary firefly algorithm (DHEFA), is proposed, which shows that clustering nodes into several clusters effectively reduces the problem space, and the DHEFA quickly searches the optimum solution in those partial spaces.
Abstract: The Capacitated Vehicle Routing Problem (CVRP) is an important problem in transportation and industry. It is challenging to solve with optimization algorithms, and a global optimum solution is not easy to achieve. Hence, many researchers use a combination of two or more optimization algorithms based on swarm intelligence methods to overcome the drawbacks of a single algorithm. In this research, a CVRP optimization model containing two main processes, clustering and optimization, based on a discrete hybrid evolutionary firefly algorithm (DHEFA), is proposed. Evaluations on three CVRP cases show that DHEFA produces an average effectiveness of 91.74%, much more effective than the original FA, which gives a mean effectiveness of 87.95%. This result shows that clustering nodes into several clusters effectively reduces the problem space, and that DHEFA quickly searches for the optimum solution in those partial spaces.

Journal ArticleDOI
TL;DR: The idea is to use AI techniques to visualize and predict possible terrorist attacks using classification models, decision trees, and Random Forest, helping the scientific community use artificial intelligence to provide various types of solutions related to global events.
Abstract: Terrorist attacks affect the confidence and security of citizens; they are a violent form of political struggle that ends in the destruction of order. In the current decade, along with the growth of social networks, terrorist attacks around the world are still ongoing and have grown in recent years. Consequently, it is necessary to identify where attacks were committed and where a possible attack area is, with the objective of providing assertive solutions to these events. As a solution, this research focuses on one of the branches of artificial intelligence (AI): machine learning. The idea is to use AI techniques to visualize and predict possible terrorist attacks using classification models, decision trees, and Random Forest. The input is a database with a systematic record of worldwide terrorist attacks from 1970 to the last recorded year, 2018. As a final result, it is necessary to know the number of terrorist attacks in the world, the most frequent types of attacks and the number of seizures caused by region, and furthermore to be able to predict what kind of terrorist attack will occur and in which areas of the world. Finally, this research aims to help the scientific community use artificial intelligence to provide various types of solutions related to global events.

Journal ArticleDOI
TL;DR: An improved YOLOv3 is proposed by increasing the number of detection scales from 3 to 4, applying k-means clustering to increase the anchor boxes, introducing a novel transfer learning technique, and improving the loss function; experiments show that the improved version outperforms the original YOLOv3 model.
Abstract: Object detection is one of the most challenging Computer Vision (CV) problems, with countless applications. We propose a real-time object detection algorithm for detecting fish based on an improved You Only Look Once version 3 (YOLOv3). The demand for a robust automated system to monitor the marine ecosystem grows day by day, and such a system benefits all researchers collecting information about marine life. This work mainly applies CV techniques to detect and classify marine life. In this paper, we improve YOLOv3 by increasing the number of detection scales from 3 to 4, applying k-means clustering to increase the anchor boxes, introducing a novel transfer learning technique, and improving the loss function. Applying the YOLOv3 architecture to a custom dataset of four fish species, we obtained a mean Average Precision (mAP) of 87.56%. Moreover, comparing the original YOLOv3 model with the improved one, we observed the mAP increase from 87.17% to 91.30%, showing that the improved version outperforms the original YOLOv3 model.
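The k-means step for anchor boxes mentioned above is commonly done with an IoU-based distance rather than Euclidean distance. The sketch below illustrates this on a handful of invented (width, height) pairs; the box sizes, seed, and k are assumptions, and real anchor estimation would run over the full set of training annotations.

```python
import random

random.seed(1)

# Illustrative (width, height) boxes, e.g. normalised fish bounding boxes.
boxes = [(0.10, 0.05), (0.12, 0.06), (0.30, 0.15), (0.28, 0.16),
         (0.60, 0.40), (0.55, 0.45), (0.11, 0.06), (0.32, 0.14)]

def iou(box, anchor):
    """IoU of two boxes aligned at a common corner (standard for anchor k-means)."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50):
    anchors = random.sample(boxes, k)
    for _ in range(iters):
        # Assign each box to the anchor it overlaps most (distance = 1 - IoU).
        groups = [[] for _ in range(k)]
        for b in boxes:
            groups[max(range(k), key=lambda i: iou(b, anchors[i]))].append(b)
        # Move each anchor to the mean width/height of its group.
        anchors = [
            (sum(b[0] for b in g) / len(g), sum(b[1] for b in g) / len(g))
            if g else anchors[i]
            for i, g in enumerate(groups)
        ]
    return sorted(anchors)

anchors = kmeans_anchors(boxes, k=3)
print(anchors)
```

Increasing k (more anchors) and adding a fourth detection scale both give the network priors that better match the size distribution of the target objects, which is the motivation behind the paper's modifications.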

Journal ArticleDOI
TL;DR: An online questionnaire was developed as a data-collection tool, and various prediction models, based on a statistical model and machine learning models, were used to predict potential COVID-19 patients from their signs and symptoms.
Abstract: As of December 2019, the world’s view of life has been changed by the ongoing COVID-19 pandemic, which requires the use of all kinds of technology to help identify coronavirus patients and control the spread of the disease. In this paper, an online questionnaire was developed as a tool to collect data. These data were used as input to several prediction models: a statistical model (Logistic Regression, LR) and machine learning models (Support Vector Machine, SVM, and Multi-Layer Perceptron, MLP). The models were used to predict potential COVID-19 patients based on their signs and symptoms. The MLP showed the best accuracy (91.62%) compared to the other models, while the SVM showed the best precision (91.67%).
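As a minimal sketch of the LR baseline, here is logistic regression trained by plain gradient descent on invented binary symptom vectors. The symptoms, labels, and learning rate are assumptions for illustration only; the paper's models are fit to the actual questionnaire responses, and the SVM and MLP baselines are omitted here.

```python
import math

# Toy symptom vectors [fever, cough, loss_of_smell] with labels (1 = positive);
# illustrative only, not the questionnaire data collected in the paper.
X = [[1, 1, 1], [1, 0, 1], [0, 1, 1], [1, 1, 0],
     [0, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
y = [1, 1, 1, 1, 0, 0, 0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on the log-loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for row, label in zip(X, y):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, row)) + b)
        g = p - label                        # gradient of log-loss wrt the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, row)]
        b -= lr * g

preds = [int(sigmoid(sum(wi * xi for wi, xi in zip(w, row)) + b) > 0.5) for row in X]
print(preds)
```

The learned weights are directly interpretable as per-symptom log-odds contributions, one reason LR is a natural statistical baseline next to the SVM and MLP.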

Journal ArticleDOI
TL;DR: This paper implemented ontology on the sign language domain to solve some sign language challenges, and trained and tested a deep convolutional neural network architecture on a pre-made Arabic sign language dataset and on a dataset collected in this paper to obtain better recognition accuracy.
Abstract: Translating and understanding sign language can be difficult for many people. Therefore, this paper proposes an Arabic sign language translation system, based on ontology and deep learning techniques, that interprets a user’s signs into their meanings. Ontology is applied to the sign language domain to address several of its challenges. This first version translates simple static signs covering the Arabic alphabet and some Arabic words. A deep Convolutional Neural Network (CNN) architecture was trained and tested both on a pre-made Arabic sign language dataset and on a dataset collected for this paper to obtain better recognition accuracy. Experimental results show that on the pre-made Arabic sign language dataset, the classification accuracy on the training set (80% of the dataset) was 98.06% and the recognition accuracy on the testing set (20% of the dataset) was 88.87%. On the collected dataset, the classification accuracy on the training set was 98.6% and the semantic recognition accuracy on the testing set was 94.31%.

Journal ArticleDOI
TL;DR: Simulation results show that the Improved PSO algorithm proposed in this study is faster than other algorithms at locating the point in a given mobile network at which the coverage area is minimal and hence central.
Abstract: A jamming attack is one of the most common threats to wireless networks: a high-power signal is sent into the network to corrupt legitimate packets. To address the jamming attack problem, the Particle Swarm Optimization (PSO) algorithm is used, which describes and simulates the behavior of a large group of entities with similar characteristics or attributes as they progress toward an optimal group, or swarm. This study proposes an enhanced version of PSO, called the Improved PSO algorithm, that aims to improve the detection of jamming attack sources over randomized mobile networks. Simulation results show that the Improved PSO algorithm is faster than other algorithms at locating the point in a given mobile network at which the coverage area is minimal and hence central. The Improved PSO algorithm was evaluated in two experiments. In the first, it was compared with PSO, GWO, and MFO; the results show that the Improved PSO is the best among these algorithms at locating the source of a jamming attack. In the second, it was compared with PSO in a mobile network environment; the results show that the Improved PSO is better than PSO at finding the location in a mobile network where the coverage area is minimal and hence central.
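A standard (not the paper's improved) PSO for jammer localization can be sketched as follows. The jammer position and the inverse-squared-distance power model are invented for illustration; in practice the fitness would come from measured received signal strength.

```python
import random

random.seed(42)

# Hypothetical jammer location; the swarm searches for the point of maximum
# received jamming power (modelled here as inverse squared distance).
jammer = (3.0, -2.0)

def power(p):
    d2 = (p[0] - jammer[0]) ** 2 + (p[1] - jammer[1]) ** 2
    return 1.0 / (1.0 + d2)          # higher the closer we are to the jammer

def pso(n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    best = [p[:] for p in pos]                       # personal bests
    gbest = max(best, key=power)[:]                  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if power(pos[i]) > power(best[i]):
                best[i] = pos[i][:]
                if power(best[i]) > power(gbest):
                    gbest = best[i][:]
    return gbest

est = pso()
print(est)
```

The inertia weight `w` and the cognitive/social coefficients `c1`, `c2` are the usual tuning knobs; improvements like the paper's typically adjust these dynamics or the update rule to converge faster in mobile settings.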

Journal ArticleDOI
TL;DR: Information Gain and Gini Index are applied to attributes of Kashmir province data to convert continuous values into discrete ones, after which the data set is ready for the application of machine learning (decision tree) algorithms.
Abstract: The historical geographical data of Kashmir province are spread across two disparate files with attributes of maximum temperature, minimum temperature, humidity measured at 12 A.M., humidity measured at 3 P.M., and rainfall, besides auxiliary parameters like date and year. The maximum temperature, minimum temperature, and the two humidity attributes are continuous in nature; in this study, we applied Information Gain and Gini Index to these attributes to convert the continuous data into discrete values, and thereafter we compared and evaluated the generated results. Of the four attributes, two have the same results for Information Gain and Gini Index, one has overlapping results, whereas only one has conflicting results. Subsequently, the continuous-valued attributes were converted into discrete values using the Gini Index, irrelevant attributes were discarded, and auxiliary attributes were labeled accordingly. Consequently, the data set is ready for the application of machine learning (decision tree) algorithms.
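The threshold selection behind this discretization can be shown on a toy continuous attribute. Minimizing the weighted child entropy is equivalent to maximizing information gain, so both criteria reduce to a search over candidate thresholds; the temperatures and labels below are invented, not the Kashmir records.

```python
import math
from collections import Counter

# Toy continuous attribute (e.g., maximum temperature) with class labels.
temps  = [12.0, 14.5, 16.0, 21.0, 24.5, 27.0, 29.5, 31.0]
labels = ["dry", "dry", "rain", "rain", "rain", "dry", "dry", "dry"]

def gini(ls):
    n = len(ls)
    return 1.0 - sum((c / n) ** 2 for c in Counter(ls).values())

def entropy(ls):
    n = len(ls)
    return -sum((c / n) * math.log2(c / n) for c in Counter(ls).values())

def best_threshold(xs, ys, impurity):
    """Return the split threshold minimising weighted child impurity."""
    n, best = len(ys), None
    for t in sorted(set(xs))[:-1]:               # exclude the maximum value
        left  = [lab for x, lab in zip(xs, ys) if x <= t]
        right = [lab for x, lab in zip(xs, ys) if x > t]
        score = (len(left) * impurity(left) + len(right) * impurity(right)) / n
        if best is None or score < best[0]:
            best = (score, t)
    return best[1]

print("Gini threshold:", best_threshold(temps, labels, gini))
print("Entropy threshold:", best_threshold(temps, labels, entropy))
```

On this toy data both criteria select the same cut point, mirroring the study's observation that Gini Index and Information Gain often, but not always, agree on an attribute.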

Journal ArticleDOI
TL;DR: This study demonstrated that the DNN (128-64) model achieves the highest accuracy, recall, and precision on the ABIDE dataset to date.
Abstract: The objective of this study is to implement deep neural network (DNN) models to classify autism spectrum disorder (ASD) patients and typically developing (TD) participants. The experimental design uses functional connectivity features extracted from resting-state functional magnetic resonance imaging (rs-fMRI) in the multisite repository Autism Brain Imaging Data Exchange (ABIDE), over a significant set of training samples. Our methodology and results have two main parts. First, we build DNN models using the TensorFlow framework in Python to classify ASD from TD, acquiring an accuracy of 75.27%. This is significantly higher than any previously reported accuracy (71.98%) on the same data. We also obtained a recall of 74% and a precision of 78.37%. In summary, and based on our literature review, this study demonstrates that our DNN (128-64) model achieves the highest accuracy, recall, and precision on the ABIDE dataset to date. Second, using the same ABIDE data, we implemented an identical experimental design with four DNN models of distinct hidden-layer configurations, each trained on data preprocessed with one of four industry-accepted strategies. These results identified the preprocessing technique with the highest accuracy, recall, and precision: the Configurable Pipeline for the Analysis of Connectomes (CPAC).
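The "DNN (128-64)" label refers to the hidden-layer widths. The following shape-only NumPy sketch shows a forward pass through such an architecture with random weights; the feature count, initialization scale, and batch are assumptions, and the real model is trained in TensorFlow on ABIDE connectivity features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the rs-fMRI functional-connectivity feature count.
n_features = 200

def relu(x):
    return np.maximum(0.0, x)

def forward(x, params):
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(x @ W1 + b1)           # first hidden layer: 128 units
    h2 = relu(h1 @ W2 + b2)          # second hidden layer: 64 units
    logits = h2 @ W3 + b3            # single output unit
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid -> P(ASD)

params = [
    (rng.normal(0, 0.05, (n_features, 128)), np.zeros(128)),
    (rng.normal(0, 0.05, (128, 64)), np.zeros(64)),
    (rng.normal(0, 0.05, (64, 1)), np.zeros(1)),
]

batch = rng.normal(size=(4, n_features))   # four synthetic subjects
probs = forward(batch, params)
print(probs.shape)                         # one probability per subject
```

Varying only the tuple of hidden widths (e.g., 128-64 vs. other configurations) while holding the rest of the design fixed is what allows the paper's second experiment to isolate the effect of architecture and preprocessing.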

Journal ArticleDOI
TL;DR: The experiments in this work show that the accuracy of the proposed model in predicting sentiment on customer feedback data is greater than the accuracy obtained by the model without parameter tuning.
Abstract: Text classification is a common task in machine learning, and the supervised classification algorithm Random Forest is often used for it. The Random Forest classifier has a group of hyperparameters that need to be tuned; with proper tuning, the classifier gives better results. This paper proposes a hybrid approach of a Random Forest classifier and the Grid Search method for customer feedback data analysis, in which Grid Search is applied to tune the hyperparameters of the Random Forest classifier. The Random Forest classifier is first used for customer feedback data analysis, and its result is then compared with the result obtained after applying the Grid Search method. The proposed approach gave promising results: the experiments in this work show that the accuracy of the tuned model in predicting sentiment on customer feedback data is greater than that of the model without parameter tuning.
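Grid Search itself is just an exhaustive loop over a hyperparameter grid with cross-validated scoring. The sketch below shows the essence of the procedure using a tiny 1-D k-NN classifier and a made-up "feedback score" feature; the paper applies the same idea to Random Forest hyperparameters (e.g., number of trees, depth) on real text features.

```python
import random

random.seed(3)

# Toy 1-D "feedback score" features with sentiment labels; illustrative only.
data = [(x / 10.0, 0) for x in range(0, 20)] + [(x / 10.0, 1) for x in range(25, 45)]
random.shuffle(data)

def knn_predict(train, x, k):
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

def cv_accuracy(data, k, folds=5):
    """Mean accuracy over simple interleaved folds (the core of grid-search CV)."""
    n, correct = len(data), 0
    for f in range(folds):
        test = data[f::folds]
        train = [p for i, p in enumerate(data) if i % folds != f]
        correct += sum(knn_predict(train, x, k) == lab for x, lab in test)
    return correct / n

# Exhaustive search over the hyperparameter grid.
grid = [1, 3, 5, 7, 9]
best_k = max(grid, key=lambda k: cv_accuracy(data, k))
print("best k:", best_k, "accuracy:", cv_accuracy(data, best_k))
```

The cost grows multiplicatively with the number of grid dimensions, which is why Grid Search over several Random Forest hyperparameters is expensive but, as the paper reports, worthwhile for the accuracy gain.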

Journal ArticleDOI
TL;DR: In this paper, a generalized approach to time series multifractal analysis is proposed based on numerical modeling and estimation, and the main disadvantages and advantages of the sample fractal characteristics obtained by three methods (multifractal detrended fluctuation analysis, wavelet transform modulus maxima, and multifractal analysis using the discrete wavelet transform) are studied.
Abstract: The paper considers a generalized approach to time series multifractal analysis, focusing on the correct estimation of multifractal characteristics from short time series. Based on numerical modeling and estimation, the main disadvantages and advantages of the sample fractal characteristics obtained by three methods (multifractal detrended fluctuation analysis, wavelet transform modulus maxima, and multifractal analysis using the discrete wavelet transform) are studied. The generalized Hurst exponent was chosen as the basic characteristic for comparing the accuracy of the methods. A test statistic for determining the monofractal properties of a time series using multifractal detrended fluctuation analysis is proposed. A generalized approach to estimating the multifractal characteristics of short time series is developed, and practical recommendations for its implementation are given. A significant part of the study is devoted to practical applications of fractal analysis, and the proposed approach is illustrated with examples of multifractal analysis of various real fractal time series.
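As a minimal sketch of the monofractal (q = 2) case of detrended fluctuation analysis, the code below estimates the Hurst exponent of a short synthetic series; the scales and series length are assumptions, and the paper's multifractal version generalizes the fluctuation function over a range of moments q.

```python
import numpy as np

rng = np.random.default_rng(7)

def dfa_hurst(series, scales=(8, 16, 32, 64)):
    """Monofractal DFA: the slope of log F(s) vs log s estimates the Hurst exponent."""
    profile = np.cumsum(series - np.mean(series))
    log_s, log_f = [], []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per segment
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        log_s.append(np.log(s))
        log_f.append(np.log(np.mean(rms)))
    return np.polyfit(log_s, log_f, 1)[0]

# White noise has H close to 0.5; a short series (1024 points) already gives a
# usable estimate, which is the short-series regime the paper is concerned with.
noise = rng.standard_normal(1024)
print(round(dfa_hurst(noise), 2))
```

The scatter of such estimates across realizations is exactly the sampling variability the paper quantifies when comparing MFDFA against the wavelet-based methods on short series.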

Journal ArticleDOI
TL;DR: A survey has been conducted to compare ontology development methodologies between 2015 and 2020 and gives some guidelines that help to define a suitable methodology for designing any domain ontology under the domain of interlocking institutional worlds.
Abstract: Interlocking Institutional Worlds (IWs) is a concept expressing the need for institutions (or players) to interoperate in order to solve problems of common interest in a given domain. Managing knowledge in the IWs domain is complex, but promoting knowledge sharing based on standards and common terms agreeable to all players is essential and must be established. In this sense, ontologies, as a conceptual tool and a key component of knowledge-based systems, have been used by organizations for effective knowledge management of the domain of discourse. Many methodologies have been proposed by researchers during the last decade; however, designing a domain ontology for IWs needs a well-defined ontology development methodology. Therefore, in this article, a survey was conducted to compare ontology development methodologies published between 2015 and 2020. The purpose of this survey is to identify the limitations and benefits of previously developed ontology development methodologies. The criteria for the comparison of methodologies have been derived from evolving trends in the literature. Our findings give some guidelines that help to define a suitable methodology for designing any domain ontology under the domain of interlocking institutional worlds.