
Showing papers by "Nebojsa Bacanin published in 2023"


Journal ArticleDOI
01 Feb 2023-Energies
TL;DR: In this paper, a canonical and straightforward long short-term memory (LSTM) DL model for electricity load is developed and tuned for multivariate time-series forecasting, and the performance of LSTM models for a one-step-ahead prediction is evaluated.
Abstract: Effective energy oversight is a major concern throughout the world, and the problem has recently become even more pressing. The prediction of energy load and consumption depends on various factors such as temperature, plugged load, etc. The machine learning and deep learning (DL) approaches developed in the last decade provide a very high level of accuracy for various types of applications, including time-series forecasting. Accordingly, the number of prediction models for this task is continuously growing. The current study not only overviews the most recent and relevant DL methods for energy supply and demand, but also emphasizes that few recent methods use parameter tuning to enhance the results. To fill this gap, a canonical and straightforward long short-term memory (LSTM) DL model for electricity load is developed and tuned for multivariate time-series forecasting. One open dataset from Europe is used as a benchmark, and the performance of LSTM models for one-step-ahead prediction is evaluated. The reported results can serve as a benchmark for hybrid LSTM-optimization approaches for multivariate energy time-series forecasting in power systems. The current work highlights that parameter tuning with metaheuristics leads to better results in all cases: while grid search achieves a coefficient of determination (R2) of 0.9136, even the worst-performing metaheuristic is notably better, with a score of 0.9515.
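The abstract gives no implementation details, so the following is only a rough sketch of the kind of model it describes: a single-layer Keras LSTM performing one-step-ahead multivariate forecasting. The window length, layer width, and optimizer settings are illustrative assumptions rather than values from the paper, which selects them via grid search and metaheuristics.

```python
# Hypothetical sketch of a one-step-ahead multivariate LSTM forecaster (Keras).
# Shapes and hyperparameters are illustrative; the paper tunes them with metaheuristics.
import numpy as np
from tensorflow import keras

def make_windows(series: np.ndarray, lags: int):
    """Slice a (timesteps, features) array into (samples, lags, features) inputs
    and one-step-ahead targets for the first feature (e.g., electricity load)."""
    X, y = [], []
    for t in range(lags, len(series)):
        X.append(series[t - lags:t])
        y.append(series[t, 0])
    return np.array(X), np.array(y)

def build_lstm(lags: int, n_features: int, units: int = 64, lr: float = 1e-3):
    model = keras.Sequential([
        keras.layers.Input(shape=(lags, n_features)),
        keras.layers.LSTM(units),
        keras.layers.Dense(1),                      # one-step-ahead prediction
    ])
    model.compile(optimizer=keras.optimizers.Adam(lr), loss="mse")
    return model

# Example usage with random stand-in data (replace with the real load dataset):
data = np.random.rand(1000, 5)                      # 5 variables incl. the load itself
X, y = make_windows(data, lags=24)
model = build_lstm(lags=24, n_features=5)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```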

11 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore the computational capabilities of advanced modeling tools to reveal the factors that shape the observed benzene levels and behavior under different environmental conditions, including temperature, volumetric soil moisture content, and momentum flux direction.
Abstract: In this paper, we explore the computational capabilities of advanced modeling tools to reveal the factors that shape the observed benzene levels and behavior under different environmental conditions. The research was based on two-year hourly data concentrations of inorganic gaseous pollutants, particulate matter, benzene, toluene, m, p-xylenes, total nonmethane hydrocarbons, and meteorological parameters obtained from the Global Data Assimilation System. In order to determine the model that will be capable of achieving a superior level of performance, eight metaheuristics algorithms were tested for eXtreme Gradient Boosting optimization, while the relative SHapley Additive exPlanations values were used to estimate the relative importance of each pollutant level and meteorological parameter for the prediction of benzene concentrations. According to the results, benzene levels are mostly shaped by toluene and the finest aerosol fraction concentrations, in the environment governed by temperature, volumetric soil moisture content, and momentum flux direction, as well as by levels of total nonmethane hydrocarbons and total nitrogen oxide. The types of conditions which provided the environment for the impact of toluene, the finest aerosol, and temperature on benzene dynamics are distinguished and described.
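As a hedged illustration of the workflow described above (an XGBoost regressor whose predictions are explained with SHAP values), the sketch below uses the standard xgboost and shap Python packages on stand-in data; the fixed hyperparameters take the place of the metaheuristic tuning performed in the paper.

```python
# Hedged sketch: XGBoost regression explained with SHAP, as in the described workflow.
# Hyperparameters are placeholders; the paper selects them with metaheuristics.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))            # stand-in for pollutant/meteorological features
y = X[:, 0] * 2 + X[:, 1] + rng.normal(scale=0.1, size=500)   # stand-in benzene levels

model = xgb.XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| value per feature
print(importance)
```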

7 citations


Journal ArticleDOI
04 Mar 2023-Axioms
TL;DR: In this article, the use of an LSTM and a BiLSTM was proposed for dealing with a data collection that, besides the time-series values denoting the solar energy generation, also comprises corresponding information about the weather.
Abstract: As solar energy generation has become more and more important for the economies of numerous countries in the last couple of decades, it is highly important to build accurate models for forecasting the amount of green energy that will be produced. Numerous recurrent deep learning approaches, mainly based on long short-term memory (LSTM), have been proposed for dealing with such problems, but the most accurate models may differ from one test case to another with respect to architecture and hyperparameters. In the current study, the use of an LSTM and a bidirectional LSTM (BiLSTM) is proposed for dealing with a data collection that, besides the time-series values denoting the solar energy generation, also comprises corresponding information about the weather. The proposed research additionally endows the models with hyperparameter tuning by means of an enhanced version of a recently proposed metaheuristic, the reptile search algorithm (RSA). The output of the proposed tuned recurrent neural network models is compared to that of several other state-of-the-art metaheuristic optimization approaches applied to the same task, using the same experimental setup, and the obtained results indicate that the proposed approach is the better alternative. Moreover, the best recurrent model achieved an R2 of 0.604 and a normalized MSE of 0.014, an improvement of around 13% over traditional machine learning models.
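A metaheuristic such as the enhanced RSA mentioned above typically tunes the network by minimizing a fitness function that trains a candidate configuration and returns its validation error. The sketch below shows one plausible form of such a fitness function for a Keras BiLSTM; the hyperparameter encoding and the search ranges are assumptions for illustration only.

```python
# Sketch of the kind of fitness function a metaheuristic (e.g., the enhanced RSA
# in the paper) could minimize when tuning a BiLSTM forecaster. The hyperparameter
# encoding and the search ranges below are assumptions, not the paper's settings.
import numpy as np
from tensorflow import keras

def fitness(candidate, X_train, y_train, X_val, y_val):
    """candidate = [units, learning_rate, dropout]; returns validation MSE."""
    units, lr, dropout = int(candidate[0]), float(candidate[1]), float(candidate[2])
    model = keras.Sequential([
        keras.layers.Input(shape=X_train.shape[1:]),
        keras.layers.Bidirectional(keras.layers.LSTM(units)),
        keras.layers.Dropout(dropout),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(lr), loss="mse")
    model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)
    return float(model.evaluate(X_val, y_val, verbose=0))

# A metaheuristic would repeatedly propose candidates within bounds such as
# units in [32, 256], lr in [1e-4, 1e-2], dropout in [0.0, 0.5] and keep the best.
```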

5 citations


Journal ArticleDOI
TL;DR: In this article, an element-based K-harmonic means clustering algorithm (CA) is proposed for effective sharing of data among the entities, along with an algorithm named underweight data block (UDB) for overcoming the obstacle of storage space.
Abstract: Industrial Internet of Things (IIoT)-based systems have become an important part of industry consortium systems because of their rapid growth and wide-ranging application. Various physical objects that are interconnected in the IIoT network communicate with each other and simplify the process of decision-making by observing and analyzing the surrounding environment. While making such intelligent decisions, devices need to transfer and communicate data with each other. However, as the devices involved in IIoT networks grow and the methods of connection diversify, traditional security frameworks face many shortcomings, including vulnerabilities to attack, lags in data sharing, and lack of proper authentication. Blockchain technology has the potential to empower safe distribution of the big data generated by the IIoT. Prevailing data-sharing methods in blockchain concentrate only on the data interchanged among parties, not on the efficiency of sharing and storing. Hence, an element-based K-harmonic means clustering algorithm (CA) is proposed for the effective sharing of data among the entities, along with an algorithm named underweight data block (UDB) for overcoming the obstacle of storage space. The performance metrics considered for the evaluation of the proposed framework are the sum of squared error (SSE), time complexity with respect to different m values, and storage complexity with CPU utilization. The experiments were conducted in the MATLAB 2018a simulation environment. The proposed model provides better sharing and storing based on blockchain technology, which makes it appropriate for the IIoT.

3 citations


Journal ArticleDOI
01 Apr 2023-Toxics
TL;DR: In this article, the authors applied the XGBoost model to a two-year database of pollutant concentrations and meteorological parameters, with the aim of identifying the factors most associated with the observed benzo(a)pyrene concentrations and describing the types of environments that supported the interactions between benzo(a)pyrene and other polluting species.
Abstract: Polycyclic aromatic hydrocarbons (PAHs) refer to a group of several hundred compounds, among which 16 are identified as priority pollutants due to their adverse health effects, frequency of occurrence, and potential for human exposure. This study is focused on benzo(a)pyrene, which is considered an indicator of exposure to a carcinogenic PAH mixture. For this purpose, we have applied the XGBoost model to a two-year database of pollutant concentrations and meteorological parameters, with the aim of identifying the factors most associated with the observed benzo(a)pyrene concentrations and describing the types of environments that supported the interactions between benzo(a)pyrene and other polluting species. The pollutant data were collected at the energy industry center in Serbia, in the vicinity of coal mining areas and power stations, where the observed maximum benzo(a)pyrene concentration for the study period reached 43.7 ng m−3. A metaheuristic algorithm has been used to optimize the XGBoost hyperparameters, and the results have been compared to those of XGBoost models tuned by eight other cutting-edge metaheuristic algorithms. The best-performing model was then interpreted by applying SHapley Additive exPlanations (SHAP). As indicated by mean absolute SHAP values, the surface temperature, arsenic, PM10, and total nitrogen oxide (NOx) concentrations appear to be the major factors affecting benzo(a)pyrene concentrations and its environmental fate.

3 citations


Journal ArticleDOI
TL;DR: In this article, a novel version of the firefly algorithm (FA) is proposed and adapted for the feature selection challenge; it significantly improves the performance of the basic FA and also outperforms other state-of-the-art metaheuristics on both benchmark bound-constrained and practical feature selection tasks.

2 citations


Journal ArticleDOI
01 Apr 2023-Heliyon
TL;DR: In this article, a novel quasi-reflection learning arithmetic optimization algorithm with firefly search, an enhanced version of the original arithmetic optimization algorithm, is presented and applied to the Corona disease dataset.

2 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid one-dimensional Convolutional Neural Network with Long Short-Term Memory (LSTM) classifier is employed to improve the performance of human activity recognition (HAR).

2 citations


Journal ArticleDOI
TL;DR: In this paper, an alternative ranking order method accounting for two-step normalization (AROMAN) is proposed to solve the EV selection problem for last-mile delivery.
Abstract: Decision-making is a ubiquitous and paramount issue in the modern business world. Inappropriate decisions may lead to severe consequences for companies. Considering that the evaluation of alternatives is generally affected by several criteria, decision-making should be considered a very challenging task. From 1945 to the present day, various multi-criteria decision-making (MCDM) methods have evolved, supporting people in the decision-making process. The main aim of this paper is to propose an original MCDM method and to demonstrate its applicability in an empirical case study related to the Electric Vehicle (EV) selection problem. To solve the electric vehicle selection problem for last-mile delivery, we developed and applied a new MCDM method, the AROMAN (Alternative Ranking Order Method Accounting for Two-Step Normalization) method. The main contribution of the AROMAN method is coupling the linear and vector normalization techniques to obtain precise data structures used in further calculation. In addition, an original final ranking equation is developed. To demonstrate the robustness of the proposed method, a comparative analysis with other state-of-the-art MCDM methods is conducted. The results indicate a high level of confidence in the AROMAN method in the decision-making field. In addition, a sensitivity analysis is performed, and the results indicate a high level of stability. Finally, based on these results, managerial implications are also outlined.
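The exact AROMAN equations are given in the paper; purely as an illustration of the core idea, coupling linear (min-max) and vector normalization of a decision matrix, the sketch below combines the two with a simple weighted average. The weight beta and the averaging itself are stand-ins, not the paper's aggregation formula.

```python
# Illustrative sketch of coupling linear (min-max) and vector normalization for a
# decision matrix, the core idea behind AROMAN's two-step normalization. The
# aggregation weight beta and the averaging are stand-ins; consult the paper for
# the exact AROMAN equations and the final ranking formula.
import numpy as np

def two_step_normalize(X: np.ndarray, beta: float = 0.5) -> np.ndarray:
    """X: alternatives x criteria decision matrix (benefit criteria assumed)."""
    linear = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))   # min-max norm
    vector = X / np.sqrt((X ** 2).sum(axis=0))                        # vector norm
    return beta * linear + (1.0 - beta) * vector                      # coupled form

# Example: 4 electric-vehicle alternatives rated on 3 criteria.
X = np.array([[250., 35., 7.], [300., 42., 6.], [220., 30., 8.], [280., 40., 6.5]])
print(two_step_normalize(X))
```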

2 citations


Journal ArticleDOI
TL;DR: In this paper, a swarm intelligence-based approach to tuning machine learning models is proposed and tested on four real-world Industry 4.0 data sets, namely distributed transformer monitoring, elderly fall prediction, BoT-IoT, and UNSW-NB 15.
Abstract: The progress of Industrial Revolution 4.0 has been supported by recent advances in several domains, and one of the main contributors is the Internet of Things. Smart factories and healthcare have both benefited in terms of improved quality of service and productivity rates. However, there is always a trade-off, and some of the largest concerns include security, intrusion, and failure detection, due to the high dependence on Internet of Things devices. To overcome these and other challenges, artificial intelligence, especially machine learning algorithms, is employed for fault prediction, intrusion detection, computer-aided diagnostics, and so forth. However, the efficiency of machine learning models depends heavily on feature selection, predetermined hyper-parameter values, and training to deliver the desired result. This paper proposes a swarm intelligence-based approach to tune the machine learning models. A novel version of the firefly algorithm, which overcomes known deficiencies of the original method by employing a diversification-based mechanism, has been proposed and applied to both feature selection and hyper-parameter optimization of two machine learning models: XGBoost and the extreme learning machine. The proposed approach has been tested on four real-world Industry 4.0 data sets, namely distributed transformer monitoring, elderly fall prediction, BoT-IoT, and UNSW-NB 15. The achieved results have been compared to the results of eight other cutting-edge metaheuristics that were implemented and tested under the same conditions. The experimental outcomes strongly indicate that the proposed approach significantly outperformed all other competitor metaheuristics in terms of convergence speed and quality of results measured with standard metrics: accuracy, precision, recall, and F1-score.
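For orientation, the sketch below implements the canonical firefly algorithm for continuous minimization as it is commonly described in the literature; the diversification mechanism introduced in the paper and the mixed encoding of feature masks plus hyperparameters are not reproduced.

```python
# Minimal sketch of the canonical firefly algorithm for continuous minimization.
# The paper's diversification mechanism and the ML-specific solution encoding
# (feature masks + hyperparameters) are not reproduced here.
import numpy as np

def firefly(objective, dim, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0,
            lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                       # move firefly i towards brighter j
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)    # attractiveness decays with distance
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)
                    pop[i] = np.clip(pop[i], lb, ub)
                    fit[i] = objective(pop[i])
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example: minimize the sphere function.
print(firefly(lambda x: float(np.sum(x ** 2)), dim=5))
```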

1 citation


Book ChapterDOI
01 Jan 2023
TL;DR: In this article, an improved version of the arithmetic optimization algorithm is tasked with selecting optimal hyperparameter values of a long short-term memory network that casts price predictions; the approach achieved excellent results and outperformed the compared algorithms in one- and four-step-ahead predictions.
Abstract: Machine learning as a subset of artificial intelligence presents a promising set of algorithms with an ability to gather experience and learn from provided data. This, coupled with the expanding availability of computational resources and information transparency, has made it possible to utilize algorithms to forecast prices. In recent years, cryptocurrency has increased in popularity and has seen wider adoption as a payment method. However, due to the volatile nature of the cryptocurrency market, casting accurate predictions can be quite challenging. One promising approach is the application of long short-term memory artificial neural networks to time-series price data. The forecasting accuracy of machine learning models is highly dependent on adequate hyperparameter settings. Thus, in this work, an improved version of the arithmetic optimization algorithm is tasked with selecting optimal hyperparameter values of a long short-term memory network that casts price predictions. The proposed approach has been tested on publicly available real-world Ethereum trading price data, and according to the results of a comparative analysis with other contemporary metaheuristics, it has been concluded that the proposed method achieved excellent results and outperformed the aforementioned algorithms in one- and four-step-ahead predictions.
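For context, the sketch below follows the canonical arithmetic optimization algorithm update (the math optimizer accelerated and probability functions, with division/multiplication for exploration and subtraction/addition for exploitation) as it is commonly presented in the literature; the chapter's improvements and the LSTM hyperparameter encoding are not reproduced, and the default parameter values are assumptions.

```python
# Compact sketch of the canonical arithmetic optimization algorithm (AOA) for
# continuous minimization, as commonly described. The chapter's improved variant
# and the LSTM hyperparameter encoding are not shown; defaults are assumptions.
import numpy as np

def aoa(objective, dim, n=30, iters=200, lb=-10.0, ub=5.0,
        mu=0.5, alpha=5.0, moa_min=0.2, moa_max=1.0, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n, dim))
    fit = np.array([objective(x) for x in pop])
    best = pop[np.argmin(fit)].copy()
    for t in range(1, iters + 1):
        moa = moa_min + t * (moa_max - moa_min) / iters          # math optimizer accelerated
        mop = 1.0 - (t ** (1.0 / alpha)) / (iters ** (1.0 / alpha))  # math optimizer probability
        for i in range(n):
            new = pop[i].copy()
            for j in range(dim):
                r1, r2, r3 = rng.random(3)
                step = (ub - lb) * mu + lb
                if r1 > moa:                                      # exploration: divide / multiply
                    new[j] = best[j] / (mop + eps) * step if r2 < 0.5 else best[j] * mop * step
                else:                                             # exploitation: subtract / add
                    new[j] = best[j] - mop * step if r3 < 0.5 else best[j] + mop * step
            new = np.clip(new, lb, ub)
            f = objective(new)
            if f < fit[i]:                                        # greedy replacement (sketch only)
                pop[i], fit[i] = new, f
                if f < objective(best):
                    best = new.copy()
    return best, float(objective(best))

# Example: minimize the sphere function within asymmetric bounds.
print(aoa(lambda x: float(np.sum(x ** 2)), dim=5))
```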

Book ChapterDOI
01 Jan 2023
TL;DR: In this paper, a novel artificial intelligence-based (AI) approach to diabetes classification is proposed, where the Planet Optimization Algorithm (POA) is tasked with selecting the optimal XGBoost hyperparameters so as to achieve the best possible classification outcomes.
Abstract: Recent years have seen an increase in instances of diabetes mellitus, a metabolic condition that if left untreated can severely decrease the quality of life, and even cause the death of those affected. Early diagnostics and treatment are vital for improving the outcome of treatment. This work proposes a novel artificial intelligence-based (AI) approach to diabetes classification. Due to the ability to process large amounts of data at a relatively quick rate with admirable performance, the XGBoost approach is used. However, despite many advantages, the large number of control parameters presented by this algorithm makes the process of tuning delicate and complex. To this end, the planet optimization algorithm (POA) is tasked with selecting the optimal XGBoost hyperparameters so as to achieve the best possible classification outcomes. In order to demonstrate the improvements achieved, a comparative analysis is given that presents the proposed approach alongside other contemporary algorithms addressing the same classification task. The attained results clearly demonstrate the superiority of the proposed approach.

Journal ArticleDOI
TL;DR: In this article, a deep learning and metaheuristic techniques-based system was proposed to predict pancreatic cancer early by analyzing medical imaging data, mainly CT scans, and identifying vital features and cancerous growths in the pancreas using Convolutional Neural Network (CNN) and YOLO model-based CNN (YCNN) models.
Abstract: Pancreatic cancer is associated with higher mortality rates due to insufficient diagnosis techniques, as it is often diagnosed at an advanced stage when effective treatment is no longer possible. Therefore, automated systems that can detect cancer early are crucial to improve diagnosis and treatment outcomes. In the medical field, several algorithms have been put into use. Valid and interpretable data are essential for effective diagnosis and therapy, and there is much room for cutting-edge computer systems to develop. The main objective of this research is to predict pancreatic cancer early using deep learning and metaheuristic techniques. This research aims to create a deep learning and metaheuristic techniques-based system to predict pancreatic cancer early by analyzing medical imaging data, mainly CT scans, and identifying vital features and cancerous growths in the pancreas using Convolutional Neural Network (CNN) and YOLO model-based CNN (YCNN) models. Once the disease is diagnosed at an advanced stage, it cannot be effectively treated and its progression is unpredictable, which is why there has been a push in recent years to implement fully automated systems that can sense cancer at an earlier stage and improve diagnosis and treatment. The paper aims to evaluate the effectiveness of the novel YCNN approach compared to other modern methods in predicting pancreatic cancer. The vital features from the CT scans and the proportion of cancerous growth in the pancreas are predicted using threshold parameters as markers. This paper employs a deep learning approach, a Convolutional Neural Network (CNN) model, to predict pancreatic cancer from images. In addition, we use the YOLO model-based CNN (YCNN) to aid in the categorization process. Both biomarker and CT image datasets are used for testing. In a thorough review of comparative findings, the YCNN method was shown to perform well, reaching 100% accuracy compared to other modern techniques.
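Purely as an illustration of the CNN component mentioned above, the following is a minimal Keras sketch of a binary classifier for CT image patches; the YOLO-based YCNN detection pipeline is not reproduced, and the input size and layer widths are assumptions.

```python
# Highly simplified sketch of a CNN classifier for CT image patches, illustrating
# only the CNN component; the YOLO-based YCNN detection pipeline from the paper
# is not reproduced, and the input shape and layer sizes are assumptions.
from tensorflow import keras

def build_cnn(input_shape=(128, 128, 1)):
    return keras.Sequential([
        keras.layers.Input(shape=input_shape),
        keras.layers.Conv2D(16, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),    # cancerous vs. non-cancerous patch
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```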

Journal ArticleDOI
TL;DR: In this article, an improved metaheuristic algorithm is proposed to fine-tune the K-means approach for the text clustering task; it is evaluated using the first 30 unconstrained test functions from the CEC2017 test suite and six standard benchmark text datasets.
Abstract: Due to the vast amounts of textual data available in various forms such as online content, social media comments, corporate data, public e-services, and media data, text clustering has been experiencing rapid development. Text clustering involves categorizing and grouping similar content. It is a process of identifying significant patterns from unstructured textual data. Algorithms are being developed globally to extract useful and relevant information from large amounts of text data. Measuring the significance of content in documents in order to partition the collection of text data is one of the most important obstacles in text clustering. This study suggests utilizing an improved metaheuristic algorithm to fine-tune the K-means approach for the text clustering task. The suggested technique is evaluated using the first 30 unconstrained test functions from the CEC2017 test suite and six standard benchmark text datasets. The simulation results and a comparison with existing techniques demonstrate the robustness and supremacy of the suggested method.
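The baseline being tuned can be summarized, under common assumptions, as TF-IDF features clustered with K-means; the sketch below simply exposes the tunable choices (number of clusters, number of initializations) as arguments and reports a silhouette score, whereas the paper optimizes the clustering with an improved metaheuristic.

```python
# Brief sketch of the baseline being tuned: TF-IDF features clustered with K-means.
# The paper tunes K-means with an improved metaheuristic; here the tunable choices
# are simply exposed as function arguments.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

def cluster_texts(docs, n_clusters=3, n_init=10, seed=0):
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)
    km = KMeans(n_clusters=n_clusters, n_init=n_init, random_state=seed).fit(X)
    return km.labels_, silhouette_score(X, km.labels_)

docs = ["stock markets fell today", "shares and markets rally",
        "new vaccine trial results", "vaccine rollout continues",
        "football final tonight", "the football league resumes"]
labels, score = cluster_texts(docs, n_clusters=3)
print(labels, score)
```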


Journal ArticleDOI
TL;DR: In this article, the authors explore the use of IoES in emergency response and disaster management, with an emphasis on the role of sensors and IoT devices in providing real-time information to emergency responders.
Abstract: The advancement in technology has led to the integration of internet-connected devices and systems into emergency management and response, known as the Internet of Emergency Services (IoES). This integration has the potential to revolutionize the way in which emergency services are provided, by allowing for real-time data collection and analysis, and improving coordination among various agencies involved in emergency response. This paper aims to explore the use of IoES in emergency response and disaster management, with an emphasis on the role of sensors and IoT devices in providing real-time information to emergency responders. We will also examine the challenges and opportunities associated with the implementation of IoES, and discuss the potential impact of this technology on public safety and crisis management. The integration of IoES into emergency management holds great promise for improving the speed and efficiency of emergency response, as well as enhancing the overall safety and well-being of citizens in emergency situations. However, it is important to understand the possible limitations and potential risks associated with this technology, in order to ensure its effective and responsible use. This paper aims to provide a comprehensive understanding of the Internet of Emergency Services and its implications for emergency response and disaster management.

Journal ArticleDOI
TL;DR: In this article, a novel artificial intelligence (AI)-driven, tuned deep learning framework for energy forecasting is presented, in which two variants of recurrent neural networks (RNNs) have been implemented: long short-term memory (LSTM) and gated recurrent unit (GRU) neural networks.

Proceedings ArticleDOI
05 Jan 2023
TL;DR: In this article, the sine cosine algorithm (SCA), which generates numerous initial random candidate solutions and makes them fluctuate outwards or towards the optimal solution, is applied to optimize an artificial neural network for Healthcare 4.0.
Abstract: From 2015 to 2022, Healthcare 4.0 has made revolutionary impacts on health services. It includes machine learning (ML), the Internet of Things (IoT), fog computing, and cloud computing. The utilization of machine learning approaches supplied by IoT advances, employing fog and cloud computing principles, improves the performance and accuracy of healthcare models. These concepts, bound together, have become prominent among researchers because they deliver the best results. Inspired by the mathematical traits of the sine and cosine functions, the sine cosine algorithm (SCA) generates numerous initial random candidate solutions and makes them fluctuate outwards or towards the optimal solution. This metaheuristic algorithm can be applied to optimize an artificial neural network (ANN), on which Healthcare 4.0 relies. The solution has been tested on four diverse datasets in this field, and the results have been compared to those of other hybrid solutions that use the same datasets. The results are in favor of the novel method, as it obtains a general advantage across all tests.
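For reference, the sketch below implements the canonical sine cosine algorithm update described above (candidates fluctuating around the best solution through sine and cosine terms with a shrinking amplitude); the ANN training application and any problem-specific encoding from the paper are not included.

```python
# Minimal sketch of the canonical sine cosine algorithm (SCA) for continuous
# minimization; the ANN optimization application from the paper is not reproduced.
import numpy as np

def sca(objective, dim, n=30, iters=200, a=2.0, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n, dim))
    fit = np.array([objective(x) for x in pop])
    best = pop[np.argmin(fit)].copy()
    best_fit = fit.min()
    for t in range(iters):
        r1 = a - t * (a / iters)                 # amplitude shrinks: explore, then exploit
        for i in range(n):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.random(dim)
            move = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * np.abs(r3 * best - pop[i])
            pop[i] = np.clip(pop[i] + move, lb, ub)
            fit[i] = objective(pop[i])
            if fit[i] < best_fit:
                best, best_fit = pop[i].copy(), fit[i]
    return best, best_fit

# Example: minimize the sphere function.
print(sca(lambda x: float(np.sum(x ** 2)), dim=10))
```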

Journal ArticleDOI
TL;DR: In this article, a novel diversity-oriented social network search algorithm has been developed and incorporated into a two-level cooperative framework to improve phishing website detection by tuning an extreme learning machine that utilizes the most relevant subset of features from phishing website datasets.
Abstract: Feature selection and hyper-parameter optimization (tuning) are two of the most important and challenging tasks in machine learning. To achieve satisfying performance, every machine learning model has to be adjusted for a specific problem, as no efficient universal approach exists. In addition, most data sets contain irrelevant and redundant features that can even have a negative influence on the model's performance. Machine learning can be applied almost everywhere; however, due to the high risks posed by the growing number of malicious, phishing websites on the world wide web, feature selection and tuning are addressed in this research for this particular problem. Notwithstanding that many metaheuristics have been devised for both the feature selection and machine learning tuning challenges, there is still much room for improvement. Therefore, the research exhibited in this manuscript tries to improve phishing website detection by tuning an extreme learning machine that utilizes the most relevant subset of phishing website data set features. To accomplish this goal, a novel diversity-oriented social network search algorithm has been developed and incorporated into a two-level cooperative framework. The proposed algorithm has been compared to six other cutting-edge metaheuristic algorithms that were also implemented in the framework and tested under the same experimental conditions. All metaheuristics have been employed in level 1 of the devised framework to perform the feature selection task. The best-obtained subset of features has then been used as the input to level 2 of the framework, where all algorithms perform tuning of the extreme learning machine. Tuning refers to the number of neurons in the hidden layers and the initialization of weights and biases. For evaluation purposes, three phishing website data sets of different sizes and numbers of classes, retrieved from the UCI and Kaggle repositories, were employed, and all methods are compared in terms of classification error, separately for levels 1 and 2 over several independent runs, and in terms of detailed metrics of the final outcomes (the output of level 2), including precision, recall, F1 score, and the receiver operating characteristic and precision-recall areas under the curves. Furthermore, an additional experiment is conducted in which only level 2 of the proposed framework is used, to establish metaheuristic performance for extreme learning machine tuning with all features, which represents a large-scale NP-hard global optimization challenge. According to the results of statistical tests, the research findings suggest that the proposed diversity-oriented social network search metaheuristic on average obtains better results than the competitors for both challenges and all data sets. Finally, a SHapley Additive exPlanations analysis of the best-performing model was applied to determine the most influential features.
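As a rough sketch of the model tuned in level 2 of the framework, the following is a basic extreme learning machine classifier: random hidden weights and biases, and a closed-form solve for the output weights. The hidden-layer size and the weight/bias distributions, which are exactly the quantities being tuned, are placeholder values here.

```python
# Sketch of a basic extreme learning machine (ELM) classifier: random hidden
# weights and biases, output weights solved in closed form via the pseudoinverse.
# The number of neurons and the weight/bias ranges are placeholder choices.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                          # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                  # random hidden projection
        self.beta = np.linalg.pinv(H) @ T                 # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Example on random stand-in data (replace with the phishing feature matrices).
X = np.random.rand(200, 30)
y = (X[:, 0] > 0.5).astype(int)
print((ELM(n_hidden=50).fit(X, y).predict(X) == y).mean())
```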

Proceedings ArticleDOI
01 Mar 2023
TL;DR: In this paper, a new model for forecasting customer churn and determining the contribution of variables that could lead to losing a customer is proposed, and a novel metaheuristic algorithm is tasked with selecting optimal hyperparameters for the XGBoost algorithm.
Abstract: Retaining customers is of great importance for all subscription-based financial institutions. Bearing in mind that even a small change in customer churn can have a significant impact on a company's profits and overall value, proper customer churn management is a prerequisite. When it comes to banks, the key issue is identifying the reasons (factors) that lead to contract termination between a customer and a bank. This paper offers a new model for forecasting customer churn and determines the contribution of variables that could lead to losing a customer. This work presents a novel artificial intelligence approach for predicting churn using XGBoost. A novel metaheuristic algorithm is proposed and tasked with selecting optimal hyperparameters for the XGBoost algorithm. The performance of the algorithm has been evaluated on real-world data and compared to several cutting-edge algorithms, attaining the best performance, with the highest accuracy of approximately 97%, which proves the presumption that customer credit card churn can be forecast with high precision. Additionally, the best models have been subjected to SHAP analysis to determine feature impact. The attained results show that features belonging to customer account information have the strongest impact on customer turnover, while personal customer information has little or no contribution. The features with the highest SHAP values are the total transaction count and amount over the last 12 months and the total revolving card balance.

Journal ArticleDOI
TL;DR: In this article, a hybrid approach is proposed, based on the eXtreme Gradient Boosting (XGBoost) machine learning model optimized by an improved version of the well-known firefly metaheuristic algorithm.
Abstract: In the last few decades, the World Wide Web has become a necessity that offers numerous services to end users. The number of online transactions increases daily, as does the number of malicious actors. Machine learning plays a vital role in the majority of modern solutions. To further improve Web security, this paper proposes a hybrid approach based on the eXtreme Gradient Boosting (XGBoost) machine learning model optimized by an improved version of the well-known firefly metaheuristic algorithm. In this research, the improved firefly algorithm is employed in a two-tier framework, which was also developed as part of the research, to perform both the feature selection and the adjustment of the XGBoost hyper-parameters. The performance of the introduced hybrid model is evaluated against three instances of well-known publicly available phishing website datasets. The performance of the newly introduced algorithm is additionally compared against cutting-edge metaheuristics utilized in the same framework. The first two datasets were provided by Mendeley Data, while the third was acquired from the University of California, Irvine machine learning repository. Additionally, the best-performing models have been subjected to SHapley Additive exPlanations (SHAP) analysis to determine the impact of each feature on model decisions. The obtained results suggest that the proposed hybrid solution achieves a superior performance level in comparison to other approaches and that it represents a promising solution in the domain of Web security.

TL;DR: In this article, the authors proposed a secure, scalable, and responsive patient monitoring system, which uses lightweight attribute-based encryption (LABE) to protect cloud-based IoT patient data.
Abstract: Smart cities are composed of intelligent industrial things that enhance people's lives and can save lives. Intelligent remote patient monitoring helps predict a patient's condition. The Internet of Things (IoT), artificial intelligence (AI), and cloud computing have improved the healthcare industry. Edge computing speeds up patient data transmission and ensures low latency, reliability, and fast response times. Nonetheless, the transmission of massive amounts of patient data may lead to IoT data security vulnerabilities, which is both a concern and a challenge. This research proposes a secure, scalable, and responsive patient monitoring system. The model uses lightweight attribute-based encryption (LABE), which encrypts and decrypts IoT patient data to protect cloud-based IoT patient data. Edge servers are positioned between the IoT and the cloud to increase QoS and diagnose patient impairment. A deep belief network (DBN) predicts and monitors patient health, and the bat optimization algorithm (BOA) optimizes its hyperparameters. Swarm intelligence improves the prediction results and the edge-cloud reaction time. The secure patient health monitoring system was assessed in a simulation environment to ensure its efficiency, security, and efficacy. The proposed model offers effective remote patient health monitoring through a secure edge-cloud-IoT environment with improved accuracy (97.9%), precision (95.6%), recall (94.6%), F1-score (94.9%), and FDR (0.06).

Journal ArticleDOI
30 Jun 2023-PeerJ
TL;DR: In this article, the authors proposed an AI framework based on a simple convolutional neural network (CNN) and an extreme learning machine (ELM) tuned by a modified sine cosine algorithm (SCA).
Abstract: An ever increasing number of electronic devices integrated into the Internet of Things (IoT) generates vast amounts of data, which get transported via the network and stored for further analysis. However, besides the undisputed advantages of this technology, it also brings risks of unauthorized access and data compromise, situations where machine learning (ML) and artificial intelligence (AI) can help with the detection of potential threats and intrusions and with the automation of the diagnostic process. The effectiveness of the applied algorithms largely depends on the previously performed optimization, i.e., the predetermined values of hyperparameters and the training conducted to achieve the desired result. Therefore, to address the very important issue of IoT security, this article proposes an AI framework based on a simple convolutional neural network (CNN) and an extreme learning machine (ELM) tuned by a modified sine cosine algorithm (SCA). Notwithstanding that many methods for addressing security issues have been developed, there is always room for further improvement, and the proposed research tries to fill this gap. The introduced framework was evaluated on two ToN IoT intrusion detection datasets, which consist of network traffic data generated in Windows 7 and Windows 10 environments. The analysis of the results suggests that the proposed model achieved a superior level of classification performance for the observed datasets. Additionally, besides conducting rigorous statistical tests, the best derived model is interpreted by SHapley Additive exPlanations (SHAP) analysis, and the findings can be used by security experts to further enhance the security of IoT systems.

Proceedings ArticleDOI
26 Apr 2023
TL;DR: In this article, an improved teaching-learning-based (ITLB) method is used to optimize the hyperparameter selection procedure, since the performance of neural networks is heavily dependent upon an adequate combination of control parameters.
Abstract: Critical operations of the global financial sector are significantly impacted by the volatility of the price of gold. Developing a robust forecasting model capable of recognizing patterns in the gold price dynamics may significantly reduce investment risks and enable considerable profits. This manuscript introduces an inventive deep learning forecasting system based on the Bi-directional Long Short-Term Memory (BiLSTM) neural network that exploits the potential of these networks to apprehend short-term as well as long-term dependencies. This study developed a unique improved teaching-learning-based (ITLB) method to optimize the hyperparameter selection procedure, since the performance of neural networks is heavily dependent upon the combination of adequate control parameters. Variational Mode Decomposition (VMD) was also used to discover trends in the gold price data prior to its submission as input to the BiLSTM. A set of experiments has been conducted, and the suggested model was compared to several cutting-edge metaheuristic algorithms. The overall results illustrate that the introduced BiLSTM-VMD-ITLB approach achieved superior forecasting results in predicting gold price fluctuations.

Journal ArticleDOI
TL;DR: In this article, an improved version of the arithmetic optimization algorithm is used to select the best hyperparameter values of a long short-term memory neural network for casting price predictions on Ethereum trading price data.
Abstract: Machine learning as a subset of artificial intelligence presents a promising set of algorithms for tackling increasingly complex challenges. The notable ability of this subgroup of algorithms to tackle tasks without explicit programming, coupled with the expanding availability of computational resources and information transparency, has made it possible to utilize algorithms to forecast prices. In recent years, cryptocurrency has increased in popularity and has seen wider adoption as a payment method. Cryptocurrency trading and mining have become potentially very lucrative ventures. However, due to the instability of cryptocurrency prices, casting accurate predictions can be quite challenging. A novel way of approaching this challenge is to treat it as a time-series forecasting problem. A particularly promising method for tackling this type of problem is the utilization of long short-term memory artificial neural networks to attain accurate prediction results. However, the forecasting accuracy of machine learning models is highly dependent on adequate hyperparameter settings. Thus, this work presents an improved variation of the arithmetic optimization algorithm, tasked with selecting the best values of a long short-term memory neural network casting price predictions. The presented approach has been evaluated on publicly available real-world Ethereum trading price data. The attained results of a comparative analysis against several popular metaheuristics indicate that the presented method achieved excellent results and outperformed the aforementioned algorithms in one- and four-step-ahead predictions.