
Showing papers by "Nebojsa Bacanin published in 2021"


Journal ArticleDOI
TL;DR: This paper proposed a hybrid approach between the machine learning adaptive neuro-fuzzy inference system and the enhanced beetle antennae search swarm intelligence metaheuristic to predict the number of COVID-19 cases.

167 citations


Book ChapterDOI
01 Jan 2021
TL;DR: The proposed hybrid approach to predict the number of confirmed cases of COVID-19 disease has outperformed other sophisticated approaches and can be used as a tool for other time-series prediction problems.
Abstract: A novel type of coronavirus, now known under the acronym COVID-19, was initially discovered in the city of Wuhan, China. Since then, it has spread across the globe and now affects over 210 countries worldwide. The number of confirmed cases is rapidly increasing and reached over 14 million on July 18, 2020, with over 600,000 confirmed deaths. In the research presented within this paper, a new forecasting model to predict the number of confirmed cases of COVID-19 disease is proposed. The model is a hybrid between the machine learning adaptive neuro-fuzzy inference system and an enhanced genetic algorithm metaheuristic. The enhanced genetic algorithm is applied to determine the parameters of the adaptive neuro-fuzzy inference system and to enhance the overall quality and performance of the prediction model. The proposed hybrid method was tested on a realistic official dataset of the COVID-19 outbreak in China. The proposed approach was compared against multiple existing state-of-the-art techniques that were tested in the same environment, on the same datasets. Based on the simulation results and the conducted comparative analysis, it is observed that the proposed hybrid approach has outperformed other sophisticated approaches and that it can be used as a tool for other time-series prediction problems.

68 citations


Journal ArticleDOI
13 Aug 2021
TL;DR: A hybrid swarm intelligence algorithm with a K-means algorithm is proposed for text clustering and it is indicated that the proposed approach is robust and superior to other state-of-the-art methods.
Abstract: The fast-growing Internet results in massive amounts of text data. Due to the large volume and unstructured format of text data, extracting relevant information and analyzing it become very challenging. Text document clustering is a text-mining process that partitions the set of text-based documents into mutually exclusive clusters in such a way that documents within the same group are similar to each other, while documents from different clusters differ based on the content. One of the biggest challenges in text clustering is partitioning the collection of text data by measuring the relevance of the content in the documents. Addressing this issue, in this work a hybrid swarm intelligence algorithm with a K-means algorithm is proposed for text clustering. First, the hybrid fruit-fly optimization algorithm is tested on ten unconstrained CEC2019 benchmark functions. Next, the proposed method is evaluated on six standard benchmark text datasets. The experimental evaluation on the unconstrained functions, as well as on text-based documents, indicated that the proposed approach is robust and superior to other state-of-the-art methods.
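To make the wrapper idea above concrete, here is a minimal, hedged sketch (not the authors' exact pipeline): a candidate solution encodes k centroids in TF-IDF space, its fitness is the total document-to-nearest-centroid distance, and the best candidate seeds K-means for final refinement. The tiny corpus, the random "swarm", and all parameter values are illustrative assumptions.

```python
# Hedged sketch of swarm-seeded K-means for text clustering (toy data assumed).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["cloud task scheduling", "swarm intelligence metaheuristics",
        "text document clustering", "k-means text mining"]
X = TfidfVectorizer().fit_transform(docs).toarray()
k, dim = 2, X.shape[1]

def fitness(candidate):
    # candidate: flat vector holding k centroids; lower total distance is better
    centroids = candidate.reshape(k, dim)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()

rng = np.random.default_rng(0)
swarm = rng.random((20, k * dim))            # stand-in for fruit-fly positions
best = min(swarm, key=fitness)               # the swarm search step is simplified here
km = KMeans(n_clusters=k, init=best.reshape(k, dim), n_init=1).fit(X)
print(km.labels_, round(fitness(best), 3))
```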

62 citations


Journal ArticleDOI
25 Oct 2021
TL;DR: In this paper, the authors proposed an enhanced version of the firefly algorithm that corrects the acknowledged drawbacks of the original method by an explicit exploration mechanism and a chaotic local search strategy, theoretically tested on two sets of bound-constrained benchmark functions from the CEC suites and practically validated for automatically selecting the optimal dropout rate for the regularization of deep neural networks.
Abstract: Swarm intelligence techniques have been created to respond to theoretical and practical global optimization problems. This paper puts forward an enhanced version of the firefly algorithm that corrects the acknowledged drawbacks of the original method, by an explicit exploration mechanism and a chaotic local search strategy. The resulting augmented approach was theoretically tested on two sets of bound-constrained benchmark functions from the CEC suites and practically validated for automatically selecting the optimal dropout rate for the regularization of deep neural networks. Despite their successful applications in a wide spectrum of different fields, one important problem that deep learning algorithms face is overfitting. The traditional way of preventing overfitting is to apply regularization; the first option in this sense is the choice of an adequate value for the dropout parameter. In order to demonstrate its ability in finding an optimal dropout rate, the boosted version of the firefly algorithm has been validated for the deep learning subfield of convolutional neural networks, with respect to five standard benchmark datasets for image processing: MNIST, Fashion-MNIST, Semeion, USPS and CIFAR-10. The performance of the proposed approach in both types of experiments was compared with other recent state-of-the-art methods. To prove that there are significant improvements in results, statistical tests were conducted. Based on the experimental data, it can be concluded that the proposed algorithm clearly outperforms other approaches.
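As background for the dropout experiment described above, the sketch below shows the classic firefly movement rule (attractiveness decaying with squared distance) searching a one-dimensional dropout rate. The validation-loss function is a stand-in surrogate, not the paper's CNN training loop, and the enhanced exploration and chaotic local search from the paper are not reproduced.

```python
# Hedged sketch: canonical firefly movement applied to a 1-D dropout-rate search.
import numpy as np

def val_loss(p):                      # assumed surrogate: minimum near p = 0.45
    return (p - 0.45) ** 2 + 0.01 * np.sin(25 * p)

rng = np.random.default_rng(1)
n, iters = 15, 40
beta0, gamma, alpha = 1.0, 1.0, 0.1
pop = rng.uniform(0.0, 0.9, n)        # candidate dropout rates

for _ in range(iters):
    for i in range(n):
        for j in range(n):
            if val_loss(pop[j]) < val_loss(pop[i]):       # move i toward brighter j
                r2 = (pop[i] - pop[j]) ** 2
                beta = beta0 * np.exp(-gamma * r2)        # attractiveness decays with distance
                pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random() - 0.5)
                pop[i] = np.clip(pop[i], 0.0, 0.9)
    alpha *= 0.97                       # gradual shift from exploration to exploitation

best = min(pop, key=val_loss)
print(f"selected dropout rate: {best:.3f}")
```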

61 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to propose an approach that is able to find an approximate (near-optimal) solution for the multi-objective task scheduling problem in a cloud environment and, at the same time, to reduce the search time.
Abstract: Cloud computing represents a relatively new paradigm of utilizing remote computing resources and is becoming an increasingly important and popular technology that supports on-demand (as-needed) resource provisioning and releasing in almost real time. Task scheduling has a crucial role in cloud computing, and it represents one of the most challenging issues in this domain. Therefore, to establish more efficient resource employment, an effective and robust task allocation (scheduling) method is required. By using an efficient task scheduling algorithm, the overall performance and service quality, as well as the end-users' experience, can be improved. As the number of tasks increases, the problem complexity rises as well, which results in a huge search space. This kind of problem belongs to the class of NP-hard optimization challenges. The objective of this paper is to propose an approach that is able to find an approximate (near-optimal) solution for the multi-objective task scheduling problem in a cloud environment and, at the same time, to reduce the search time. In this manuscript, we present a swarm-intelligence-based approach, the hybridized bat algorithm, for multi-objective task scheduling. We conducted experiments on the CloudSim toolkit using standard parallel workloads and synthetic workloads. The obtained results are compared to other similar, metaheuristic-based techniques that were evaluated under the same conditions. Simulation results prove the great potential of our proposed approach in this domain.
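For readers unfamiliar with the objective behind such schedulers, the toy sketch below shows how a candidate task-to-VM assignment can be scored by makespan, the kind of fitness a metaheuristic like the hybridized bat algorithm would minimize. Task lengths and VM speeds are assumed toy values, not CloudSim workloads.

```python
# Illustrative only: scoring a candidate task-to-VM assignment by makespan.
import numpy as np

task_len = np.array([400, 250, 900, 120, 600])   # task sizes (MI), assumed
vm_mips  = np.array([500, 1000])                 # VM speeds (MIPS), assumed

def makespan(assignment):
    # assignment[i] = index of the VM that runs task i
    finish = np.zeros(len(vm_mips))
    for t, vm in enumerate(assignment):
        finish[vm] += task_len[t] / vm_mips[vm]
    return finish.max()                          # schedule length = slowest VM

print(makespan([1, 1, 1, 1, 1]))   # 2.27 -> everything piled on the fast VM
print(makespan([0, 0, 1, 0, 1]))   # 1.54 -> load spread, shorter schedule
```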

58 citations


Journal ArticleDOI
TL;DR: A metaheuristic method to automatically find near-optimal values of convolutional neural network hyperparameters based on a modified firefly algorithm is proposed, together with a system for automatic image classification of glioma brain tumor grades from magnetic resonance imaging.
Abstract: The most frequent brain tumor types are gliomas. The magnetic resonance imaging technique helps to make the diagnosis of brain tumors. It is hard to make the diagnosis in the early stages of a glioma brain tumor, even when the specialist has a lot of experience. Therefore, for magnetic resonance imaging interpretation, a reliable and efficient system is required that helps the doctor make the diagnosis in the early stages. To classify the images according to the grade of the glioma, convolutional neural networks, which have proven to obtain excellent performance in image classification tasks, can be used. Tuning the convolutional network hyperparameters is a very important issue in this domain for achieving high classification accuracy; however, this task takes a lot of computational time. Approaching this issue, in this manuscript, we propose a metaheuristic method to automatically find the near-optimal values of convolutional neural network hyperparameters based on a modified firefly algorithm and develop a system for automatic image classification of glioma brain tumor grades from magnetic resonance imaging. First, we tested the proposed modified algorithm on a set of standard unconstrained benchmark functions, and the performance is compared to the original algorithm and other modified variants. Upon verifying the efficiency of the proposed approach in general, it is applied for hyperparameter optimization of the convolutional neural network. The IXI dataset and the cancer imaging archive with further collections of data are used for evaluation purposes, and additionally, the method is evaluated on axial brain tumor images. The obtained experimental results and comparative analysis with other state-of-the-art algorithms tested under the same conditions show the robustness and efficiency of the proposed method.

47 citations


Proceedings ArticleDOI
26 May 2021
TL;DR: In this paper, a metaheuristic method is proposed to automatically search for near-optimal values of convolutional neural network hyperparameters, based on a hybridized version of the elephant herding optimization swarm intelligence metaheuristic.
Abstract: Gliomas belong to the group of the most frequent types of brain tumors. For this specific type of brain tumor, it is extremely difficult to obtain an exact diagnosis in its early stages. Even for the most experienced doctors, it is not possible without magnetic resonance imaging, which aids in making the diagnosis of brain tumors. To classify the images according to the grade of the glioma with superior performance, convolutional neural networks can be used. Achieving high accuracy in image classification requires careful calibration of the convolutional network hyperparameters, a task that takes up a lot of computational time and energy. Addressing this issue, in this paper a metaheuristic method is proposed to automatically search for the near-optimal values of convolutional neural network hyperparameters, based on a hybridized version of the elephant herding optimization swarm intelligence metaheuristic. The hybridized elephant herding optimization has been incorporated for convolutional neural network hyperparameter tuning to develop a system for automatic and instantaneous image classification of glioma brain tumor grades from magnetic resonance imaging. Comparative analysis was performed with other methods tested on the same problem instance, and the results proved the superiority of the approach proposed in this paper.

47 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, the authors proposed a refined version of the dragonfly algorithm, which is later applied to enhance the lifetime of a wireless sensor network. The performance of the proposed enhanced dragonfly metaheuristic was assessed by comparison with its original implementation, the traditional version of the LEACH algorithm, and particle swarm optimization.
Abstract: Wireless sensor networks represent one of the most crucial components of novel technologies, such as the internet of things and cloud computing. One of the greatest problems in any wireless sensor network is lifetime maximization, and it can be tackled by decreasing the energy consumption of the nodes. Numerous clustering algorithms have emerged in recent years with the sole goal of establishing an energy consumption equilibrium among nodes and increasing network efficiency, a concept known as load balancing. LEACH, as one of the most important load balancing algorithms, is not able to achieve satisfactory performance, and it can be enhanced by using metaheuristic approaches. The research described in this paper proposes a refined version of the dragonfly algorithm, which is later applied to enhance the lifetime of a wireless sensor network. The performance of the proposed enhanced dragonfly metaheuristic has been assessed by comparison with its original implementation, the traditional version of the LEACH algorithm, and particle swarm optimization. Simulation results indicate that our proposed algorithm has superior performance and that it can produce valuable results in this domain.

46 citations


Journal ArticleDOI
07 Oct 2021-Sensors
TL;DR: In this paper, the authors proposed a novel Harris Hawks optimization algorithm with practical application for evolving convolutional neural network architecture to classify various grades of brain tumor using magnetic resonance imaging.
Abstract: The research presented in this manuscript proposes a novel Harris Hawks optimization algorithm with practical application for evolving convolutional neural network architecture to classify various grades of brain tumor using magnetic resonance imaging. The proposed improved Harris Hawks optimization method, which belongs to the group of swarm intelligence metaheuristics, further improves the exploration and exploitation abilities of the basic algorithm by incorporating a chaotic population initialization and local search, along with a replacement strategy based on the quasi-reflection-based learning procedure. The proposed method was first evaluated on 10 recent CEC2019 benchmarks, and the achieved results are compared with the ones generated by the basic algorithm, as well as with results of other state-of-the-art approaches that were tested under the same experimental conditions. In subsequent empirical research, the proposed method was adapted and applied for a practical challenge of convolutional neural network design. The evolved network structures were validated against two datasets that contain images of a healthy brain and brain with tumors. The first dataset comprises well-known IXI and cancer imaging archive images, while the second dataset consists of axial T1-weighted brain tumor images, as proposed in a recently published study in a Q1 journal. After performing data augmentation, the first dataset encompasses 8,000 healthy and 8,000 brain tumor images with grades I, II, III, and IV, and the second dataset includes 4,908 images with glioma, meningioma, and pituitary tumors, with 1,636 images belonging to each tumor class. The swarm intelligence-driven convolutional neural network approach was evaluated and compared to other, similar methods and achieved a superior performance. The obtained accuracy was over 95% in all conducted experiments. Based on the established results, it is reasonable to conclude that the proposed approach could be used to develop networks that can assist doctors in diagnostics and help in the early detection of brain tumors.
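The quasi-reflection-based learning step mentioned above has a simple closed form: for a solution x in [lb, ub], a quasi-reflected point is drawn uniformly between the domain centre (lb+ub)/2 and the opposite point lb+ub-x, and the fitter of the pair survives. A hedged sketch follows, with a sphere function standing in for the real fitness.

```python
# Hedged sketch of quasi-reflection-based learning (QRL); sphere fitness assumed.
import numpy as np

rng = np.random.default_rng(42)
lb, ub, dim, n = -5.0, 5.0, 4, 6

def fitness(x):                 # stand-in objective (lower is better)
    return np.sum(x ** 2)

pop = rng.uniform(lb, ub, (n, dim))
centre = (lb + ub) / 2.0
opposite = lb + ub - pop                                      # opposition-based point
quasi = centre + rng.random((n, dim)) * (opposite - centre)   # quasi-reflected point

# greedy replacement: a solution survives only if its quasi-reflected twin is not better
for i in range(n):
    if fitness(quasi[i]) < fitness(pop[i]):
        pop[i] = quasi[i]
print(np.round([fitness(x) for x in pop], 3))
```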

42 citations


Book ChapterDOI
01 Jan 2021
TL;DR: A modified Harris hawks optimization algorithm is proposed and adjusted to target the cloud–edge workflow scheduling problem, and it outperformed other state-of-the-art approaches by reducing the cost and makespan performance metrics.
Abstract: Edge computing is a relatively novel technology, which is closely related to the concepts of the Internet of things and cloud computing. The main purpose of edge computing is to bring the resources as close as possible to the clients, to the very edge of the cloud. By doing so, it is possible to achieve smaller response times and lower network bandwidth utilization. Workflow scheduling in such an edge–cloud environment is considered to be an NP-hard problem, which has to be solved by a stochastic approach, especially in the scenario of multiple optimization goals. In the research presented in this paper, a modified Harris hawks optimization algorithm is proposed and adjusted to target the cloud–edge workflow scheduling problem. Simulations are carried out with two main objectives: cost and makespan. The experiments used real workflow models and evaluated the proposed algorithm by comparing it to other approaches available in the recent literature that were tested in the same simulation environment and experimental conditions. Based on the results of the conducted experiments, the proposed improved Harris hawks optimization algorithm outperformed other state-of-the-art approaches by reducing the cost and makespan performance metrics.

38 citations


Proceedings ArticleDOI
26 May 2021
TL;DR: In this article, an improved implementation of the widely used firefly algorithm, adapted for tackling this important and current problem, is presented. The enhanced approach is compared with other state-of-the-art algorithms validated on the same datasets, and it is shown that the proposed implementation outperforms the others in terms of the number of selected features and classification accuracy.
Abstract: In this paper, the application of a swarm intelligence algorithm to the feature selection challenge in the field of machine learning is presented. Feature selection, which is part of the dimension reduction process, helps in selecting those features from the dataset that have the most significant impact on the performance and accuracy of the machine learning model. Since feature selection searches for an optimal (sub-optimal) set of features in a large search space, and bearing in mind that swarm intelligence algorithms have proven to be good optimizers for this kind of problem, they can be efficiently employed as a wrapper method for feature selection. This manuscript proposes an improved implementation of the widely used firefly algorithm, adapted for tackling this important and current problem. Observed drawbacks of the original firefly algorithm are overcome by introducing a quasi-reflection-based learning procedure in the initialization phase. The proposed method was validated against 21 datasets, while the k-nearest neighbors classifier was used as the classification model. The enhanced approach is compared with other state-of-the-art algorithms validated on the same datasets, and it is shown that the proposed implementation outperforms the others in terms of the number of selected features, as well as classification accuracy.
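A typical wrapper fitness for this kind of feature selection, sketched under assumptions (the exact weighting in the paper may differ), thresholds a continuous firefly position into a binary feature mask and blends kNN error with the fraction of selected features; alpha is an assumed trade-off weight.

```python
# Hedged sketch of a kNN wrapper fitness for feature selection (weights assumed).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
alpha = 0.99                                    # weight on error vs. subset size

def fitness(position):
    mask = position > 0.5                       # transfer step: continuous -> binary
    if not mask.any():
        return 1.0                              # empty subsets are penalised
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return alpha * (1 - acc) + (1 - alpha) * mask.mean()   # lower is better

rng = np.random.default_rng(7)
swarm = rng.random((10, X.shape[1]))            # stand-in firefly positions
best = min(swarm, key=fitness)
print("selected features:", int((best > 0.5).sum()), "fitness:", round(fitness(best), 4))
```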

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, the authors proposed a solution for the stated problem based on a hybridized bat algorithm, which can help to reduce the large search space of possible solutions by finding a solution that is not optimal but close to optimal.
Abstract: Neural networks (NNs) are a subset of the field of machine learning (ML) that makes it possible for a machine to learn and make new predictions based on previous experience and provided data. It is important to emphasize that there is no need to program this kind of behavior, since the whole process is supported by the “self-adjustment” of the algorithm, which can evaluate itself and therefore adjust its parameters to get better performance and accuracy. Neural networks are different from other types of machine learning algorithms in that they do not use statistical and mathematical models to make future predictions. Instead, they replicate the structure and the processes that happen inside the human brain. However, this type of learning is very computationally expensive, since there is an enormous number of states and conditions in which the network itself can be found. Therefore, it can be said that the learning process of neural networks belongs to the class of NP-complete problems because of the large search space of possible solutions. Swarm intelligence (SI) algorithms can help to reduce this large space of solutions by finding a solution that is not optimal but close to optimal, providing satisfactory results given how much longer it would take to train a network without them. In this paper, the authors have proposed a solution for the stated problem based on a hybridized bat algorithm.
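To illustrate the encoding such an approach relies on, the hedged sketch below flattens all weights and biases of a tiny one-hidden-layer network into a single vector, which is what each bat (candidate solution) would hold; the fitness is simply the classification error on toy data. The bat search loop itself is omitted.

```python
# Hedged sketch: metaheuristic-style NN training encoding (toy data assumed).
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((60, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)            # toy labels
n_in, n_hid, n_out = 4, 5, 2
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out   # total number of parameters

def decode(vec):
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def fitness(vec):                       # classification error, lower is better
    W1, b1, W2, b2 = decode(vec)
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).argmax(axis=1)
    return np.mean(pred != y)

population = rng.uniform(-1, 1, (30, dim))   # candidate weight vectors (bat positions)
print("best initial error:", min(fitness(p) for p in population))
```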

Proceedings ArticleDOI
26 May 2021
TL;DR: The salp swarm algorithm, one of the novel swarm intelligence metaheuristics, is improved in this paper by introducing the concept of opposite solutions in the initialization phase as well as in the iterative search process, where a fine-tuned exploitation of the current best solution is performed by generating its opposite individual.
Abstract: The salp swarm algorithm is one of the novel swarm intelligence metaheuristics. The work proposed in this paper provides further improvements of the salp swarm algorithm, achieved by modifications of the original approach. By analyzing the solutions' quality and convergence speed of the basic salp swarm algorithm during practical simulations, it was concluded that the exploitation process can be improved. Improvements were achieved by introducing the concept of opposite solutions in the initialization phase, as well as in the iterative search process, where a fine-tuned exploitation of the current best solution is performed by generating its opposite individual. The proposed improved salp swarm algorithm was tested on thirteen well-known global benchmarks. Comparative analysis was performed with seven other modern metaheuristic methods, as well as against the original salp swarm algorithm. The accomplished results prove that the proposed approach outscores the original algorithm and the other approaches included in the comparative analysis to a large degree.
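The opposition mechanism described above has a one-line core: for a solution x in [lb, ub], its opposite is lb + ub - x. A minimal sketch, assuming a sphere objective, applies it to the initial population and to the current best, keeping the fitter of each pair.

```python
# Hedged sketch of opposition-based initialization and best-solution probing.
import numpy as np

rng = np.random.default_rng(5)
lb, ub, dim, n = -10.0, 10.0, 5, 8
f = lambda x: np.sum(x ** 2)                    # stand-in objective

pop = rng.uniform(lb, ub, (n, dim))
opp = lb + ub - pop                             # opposite population
pop = np.array([p if f(p) < f(o) else o for p, o in zip(pop, opp)])

best = min(pop, key=f)
best_opp = lb + ub - best                       # fine-tuned probe around the leader
if f(best_opp) < f(best):
    best = best_opp
print("best fitness:", round(f(best), 3))
```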

Book ChapterDOI
23 Apr 2021
TL;DR: In this paper, a hybridized version of the sine cosine algorithm adjusted for solving the feature selection problem is proposed; hybridization is a relatively novel approach for combining and improving metaheuristic optimizers.
Abstract: The feature selection problem from the domain of machine learning refers to selecting only those features from high-dimensional datasets that have a prominent influence on the dependent variable(s). In this way, dataset dimensionality is reduced and only the richest data is kept; the training process of the machine learning model becomes more efficient and accuracy is increased. This manuscript proposes a new hybridized version of the sine cosine algorithm adjusted for solving the feature selection problem. Hybridization is a relatively novel approach for combining and improving metaheuristic optimizers. Notwithstanding that the basic sine cosine algorithm establishes good performance for solving NP-hard challenges, based on simulation results it was concluded that there is still space for improvement in its exploitation process. The original sine cosine algorithm and the proposed hybridized implementation were tested on 21 well-known machine learning datasets retrieved from the UCL repository. A comparative analysis between the hybrid sine cosine algorithm and the original one, as well as with 10 other state-of-the-art metaheuristics, was conducted. The established results in terms of classification accuracy and fitness prove the robustness and efficiency of the proposed method for solving this type of NP-hard challenge.
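For context, the sketch below shows the canonical sine cosine update that the hybrid builds on: each solution moves toward (or around) the best solution by a sine or cosine step whose amplitude r1 shrinks over the iterations. The hybridization itself is not reproduced, and the sphere objective is an assumed stand-in.

```python
# Hedged sketch of the canonical sine cosine algorithm update rule.
import numpy as np

rng = np.random.default_rng(11)
lb, ub, dim, n, iters, a = -5.0, 5.0, 4, 12, 60, 2.0
f = lambda x: np.sum(x ** 2)                    # stand-in objective

pop = rng.uniform(lb, ub, (n, dim))
best = min(pop, key=f).copy()

for t in range(iters):
    r1 = a - t * (a / iters)                    # shrinks: exploration -> exploitation
    for i in range(n):
        r2 = 2 * np.pi * rng.random(dim)
        r3, r4 = 2 * rng.random(dim), rng.random(dim)
        step = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
        pop[i] = np.clip(pop[i] + step * np.abs(r3 * best - pop[i]), lb, ub)
        if f(pop[i]) < f(best):
            best = pop[i].copy()
print("best fitness:", round(f(best), 6))
```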

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, the authors proposed an enhanced flower pollination algorithm for task scheduling in the cloud computing environment, and compared the results of the proposed method to other similar approaches, such as the PBACO, ACO, Min-Min, and FCFS allocation strategies.
Abstract: Cloud computing technology refers to on-demand access to services, applications, and infrastructure that runs on a distributed network utilizing virtualized resources. In the cloud model, an efficient task scheduling algorithm plays an important role in achieving better overall functioning and resource utilization of the cloud. The end-users submit tasks, and the scheduling algorithm needs to allocate them to the available resources on time. The task scheduling issue is considered an NP-hard problem, and metaheuristic algorithms demonstrate high efficiency in solving such problems; thus, in this work, we propose an enhanced flower pollination algorithm for task scheduling. The major focus of this study is to reduce the makespan. We compared the results of the proposed method to other similar approaches, such as the PBACO, ACO, Min-Min, and FCFS allocation strategies. The obtained experimental results show that the proposed EEFPA scheduler has the potential to allocate the tasks submitted by users to the available resources on the cloud.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a method for introducing a basic interface to an IoT device's security gateway architecture along with blockchain to provide decentralization and authentication, adding much-needed anonymity and versatility that current IoT infrastructure lacks.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a green cloud based queuing management system for 5G networks that helps in addressing the issues related to latency and energy consumption in mobile edge computing.
Abstract: Mobile users have acquired the benefits of cloud computing with the help of Mobile Edge Computing (MEC) technology in order to satisfy increasing data demands. Despite the rapid development of MEC and cloud computing technology, the efficiency of the system is highly limited by bandwidth constraints and the limitations of mobile devices. Our aim is to provide an optimal method for reducing energy consumption in mobile edge computing. In this regard, this paper proposes a Green Cloud based Queue Management system for 5G networks that helps address the issues related to latency and energy consumption. While serving users, the proposed methodology results in less energy being wasted and hence reduced latency. By alleviating congestion and implementing a virtual list, this issue can be largely resolved. Simulation is done with the help of the NS2 green cloud simulator, and the results are obtained by comparing the proposed model to a conventional cloud model and a cloudlet model based on throughput, latency, energy consumption, and normalized overhead as evaluation measures. The results show a considerable improvement in energy consumption. As the throughput increases, the quality of service also increases.

DOI
24 Aug 2021
TL;DR: An enhanced Harris hawks optimization algorithm is proposed to address the task of neural network training and is able to achieve better overall results than other state-of-the-art metaheuristics considered in the comparative analysis, in terms of classification accuracy and convergence speed.
Abstract: The learning process is one of the most difficult problems in artificial neural networks. The goal of this process is to find appropriate values for connection weights and biases, and it has a direct influence on the neural network's classification and prediction accuracy. Since the search space is huge, traditional optimization techniques are not suitable, as they are prone to slow convergence and getting trapped in local optima. In this paper, an enhanced Harris hawks optimization algorithm is proposed to address the task of neural network training. The conducted experiments include two well-known classification benchmark datasets to evaluate the performance of the proposed method. The obtained results indicate that the devised algorithm has promising performance, as it is able to achieve better overall results than the other state-of-the-art metaheuristics taken into account in the comparative analysis, in terms of classification accuracy and convergence speed.

Journal ArticleDOI
15 Nov 2021-Sensors
TL;DR: In this paper, an N-gram stacked autoencoder supervised learning algorithm was used to extract features from tweets and the extracted features were then involved in a classification and prediction involving an ensemble fusion scheme of selected machine learning techniques such as decision tree (DT), support vector machine (SVM), random forest (RF), and K-nearest neighbor (KNN).
Abstract: The current population worldwide extensively uses social media to share thoughts, societal issues, and personal concerns. Social media can be viewed as an intelligent platform that can be augmented with a capability to analyze and predict various issues such as business needs, environmental needs, election trends (polls), governmental needs, etc. This has motivated us to initiate a comprehensive search of the COVID-19 pandemic-related views and opinions amongst the population on Twitter. The basic training data have been collected from Twitter posts. On this basis, we have developed research involving ensemble deep learning techniques to reach a better prediction of the future evolution of views on Twitter when compared to previous works that do the same. First, feature extraction is performed through an N-gram stacked autoencoder supervised learning algorithm. The extracted features are then involved in classification and prediction using an ensemble fusion scheme of selected machine learning techniques such as decision tree (DT), support vector machine (SVM), random forest (RF), and K-nearest neighbor (KNN). All individual results are combined/fused for a better prediction by using both mean and mode techniques. Our proposed scheme of an N-gram stacked encoder integrated into an ensemble machine learning scheme outperforms all other existing competing techniques, such as the unigram autoencoder, bigram autoencoder, etc. Our experimental results have been obtained from a comprehensive evaluation involving a dataset extracted from open-source data available from Twitter that were filtered by using the keywords "covid", "covid19", "coronavirus", "covid-19", "sarscov2", and "covid_19".
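The fusion step can be illustrated independently of the autoencoder features. In the hedged sketch below, synthetic features stand in for the N-gram stacked autoencoder output, four base classifiers are trained, and the mode (majority vote) of their predictions is taken as the ensemble output; the mean-based fusion variant would average class probabilities instead.

```python
# Hedged sketch of mode-based ensemble fusion over four base classifiers.
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# synthetic feature matrix standing in for autoencoder-extracted tweet features
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = [DecisionTreeClassifier(), SVC(), RandomForestClassifier(), KNeighborsClassifier()]
preds = np.array([m.fit(Xtr, ytr).predict(Xte) for m in models])   # shape (4, n_test)
fused = np.ravel(stats.mode(preds, axis=0).mode)                   # majority (mode) vote
print("fused accuracy:", round(float(np.mean(fused == yte)), 3))
```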

Journal ArticleDOI
TL;DR: A new Dynamic Power Containment Technique (DPCT) algorithm is developed to reduce the harmonic loss and increase the system efficiency and this method is easier to develop and can be used for optimized control to choose the best switch state in each cycle.
Abstract: In the modern industrial field, the Brushless DC Motor (BLDC) is an important component of electromechanical energy conversion. The BLDC motor is used in many applications, such as high-power traction and high-end pumps. The BLDC motor drive is operated at constant speed, but some noise and losses are created, so various control techniques are used to reduce the noise, although some harmonic loss still occurs in the existing system. For that reason, a new Dynamic Power Containment Technique (DPCT) algorithm is developed to reduce the harmonic loss and increase the system efficiency. An inverter is used to drive the Brushless DC Motor (BLDC) in constant-speed operation. The proposed control system replaces the classical cascaded method in the speed and current control of the BLDC motor. The inverter input current is directly controlled with the help of the proposed FPGA-based DPCT control system. The performance of the proposed system is analyzed through MATLAB simulation software. The proposed DPCT model provides a solution to the delay issue. In addition, an immediate compensation technique is proposed to predict the change in current within the delay time. Compared with the existing two-step predictive strategy, the new method is also easier to develop and can be used for optimized control to choose the best switch state in each cycle.

Journal ArticleDOI
TL;DR: In this paper, a feature extraction scheme combining the Gabor filtering technique and the Walsh-Hadamard transform (WHT) is proposed for brain tumor image retrieval, addressing the semantic gap between the low-level visual information captured by the MRI machine and the high-level information identified by the assessor.
Abstract: Brain tumors are a serious and deadly disease for human life. Discovering an appropriate brain tumor image from a magnetic resonance imaging (MRI) archive is a challenging job for the radiologist. Most search engines retrieve images on the basis of traditional text-based approaches. The main challenge in MRI image analysis is the gap between the low-level visual information captured by the MRI machine and the high-level information identified by the assessor. This semantic gap is addressed in this study by designing a new feature extraction technique. In this paper, we introduce a Content-Based Medical Image Retrieval (CBMIR) system for the retrieval of brain tumor images from large data collections. Firstly, we remove noise from MRI images employing several filtering techniques. Afterward, we design a feature extraction scheme combining the Gabor filtering technique (which mainly focuses on specific frequency content in an image region) and the Walsh-Hadamard transform (WHT) for discovering representative features from MRI images. After that, for retrieving accurate and reliable images, we employ Fuzzy C-Means clustering with the Minkowski distance metric, which can evaluate the similarity between the query image and database images. The proposed methodology was tested on a publicly available brain tumor MRI image database. The experimental results demonstrate that our proposed approach outperforms most of the existing techniques, such as the Gabor, wavelet, and Hough transforms, in detecting brain tumors and also takes less time. The proposed approach will be beneficial for radiologists and also for technologists to build an automatic decision support system that will produce reproducible and objective results with high accuracy.

Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, a brief review of CNN hyperparameter tuning is presented and discussed; one promising approach is the application of swarm intelligence algorithms.
Abstract: Digital images have revolutionized work in numerous scientific fields such as healthcare, astronomy, biology, and agriculture, as well as in everyday life. One of the frequent and very challenging tasks in applications with digital images is image classification. Major progress was made when convolutional neural networks were introduced. The use of CNNs has produced significant improvements in applications that require image classification. With today's technology, it is relatively simple to implement and use CNNs, but in order to obtain the best possible results it is necessary to find the optimal architecture and hyperparameters for every single task. Due to the large number of hyperparameters, it is difficult to find the optimal configuration and there is no deterministic way to do it. In this early stage of CNN development, the common method of tuning a CNN is by guessing and estimating, known as the guesstimating method. Since this is a hard optimization problem, there is a chance to apply an optimization metaheuristic. Several studies have applied different optimization methods for tuning CNN hyperparameters. One promising approach is the application of swarm intelligence algorithms. In this paper, a brief review of CNN hyperparameter tuning is presented and discussed.

Journal ArticleDOI
TL;DR: In this article, the K-means clustering Grey Wolf Optimization (KMGWO) algorithm is proposed to overcome the tendency of the original GWO to get trapped in local optima, and it achieved the first rank in terms of performance.
Abstract: Purpose: The development of metaheuristic algorithms has increased, and researchers use them extensively in the fields of business, science, and engineering. One of the common metaheuristic optimization algorithms is called Grey Wolf Optimization (GWO). The algorithm works by imitating the searching and attacking process of grey wolves. The main purpose of this paper is to overcome the GWO problem of being trapped in local optima. Design/methodology/approach: In this paper, the K-means clustering algorithm is used to enhance the performance of the original Grey Wolf Optimization by dividing the population into different parts. The proposed algorithm is called K-means clustering Grey Wolf Optimization (KMGWO). Findings: Results illustrate that the efficiency of KMGWO is superior to that of GWO. To evaluate the performance of KMGWO, it was applied to solve 10 CEC2019 benchmark test functions. Results prove that KMGWO is better compared to GWO. KMGWO is also compared to Cat Swarm Optimization (CSO), Whale Optimization Algorithm-Bat Algorithm (WOA-BAT), and WOA, and KMGWO achieves the first rank in terms of performance. Statistical results proved that KMGWO achieved significantly better values than the compared algorithms. Also, KMGWO is used to solve a pressure vessel design problem, where it obtained superior results. Originality/value: Results prove that KMGWO is superior to GWO. KMGWO is also compared to Cat Swarm Optimization (CSO), Whale Optimization Algorithm-Bat Algorithm (WOA-BAT), WOA, and GWO, and KMGWO achieved the first rank in terms of performance. Also, KMGWO is used to solve a classical engineering problem, where it is superior.
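A hedged sketch of the idea as described (implementation details are assumptions): K-means splits the wolf pack into sub-populations, and the standard GWO encircling update is applied inside each cluster using that cluster's three best wolves as alpha, beta, and delta, with a sphere function as the stand-in objective.

```python
# Hedged sketch of K-means-partitioned Grey Wolf Optimization (KMGWO-style idea).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
lb, ub, dim, n, iters, k = -10.0, 10.0, 5, 20, 50, 3
f = lambda x: np.sum(x ** 2)                              # stand-in objective

wolves = rng.uniform(lb, ub, (n, dim))
for t in range(iters):
    a = 2 - 2 * t / iters                                 # exploration factor 2 -> 0
    labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(wolves)
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx) == 0:
            continue
        ranked = sorted(idx, key=lambda i: f(wolves[i]))
        # alpha, beta, delta of this cluster (padded if the cluster is small)
        leaders = [wolves[ranked[min(j, len(ranked) - 1)]].copy() for j in range(3)]
        for i in idx:
            steps = []
            for lead in leaders:                          # classic GWO encircling step
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                steps.append(lead - A * np.abs(C * lead - wolves[i]))
            wolves[i] = np.clip(np.mean(steps, axis=0), lb, ub)

print("best fitness:", round(min(f(w) for w in wolves), 6))
```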

Journal ArticleDOI
TL;DR: In this paper, the K-means clustering algorithm is used to enhance the performance of the original GWO; the new algorithm is called K-means clustering grey wolf optimization (KMGWO) and it is applied to solve a classical engineering problem.
Abstract: This paper aims at studying meta-heuristic algorithms. One of the common meta-heuristic optimization algorithms is called grey wolf optimization (GWO), which imitates the searching and attacking process of grey wolves. The key aim is to overcome the limitations of this process, namely its tendency to become trapped in local optima. The development of meta-heuristic algorithms has increased, and researchers use them extensively in the fields of business, science and engineering. In this paper, the K-means clustering algorithm is used to enhance the performance of the original GWO; the new algorithm is called K-means clustering gray wolf optimization (KMGWO). Results illustrate the efficiency of KMGWO compared to GWO. To evaluate the performance of KMGWO, it was applied to solve the CEC2019 benchmark test functions. Results prove that KMGWO is superior to GWO. KMGWO is also compared to cat swarm optimization (CSO), whale optimization algorithm-bat algorithm (WOA-BAT), WOA and GWO, and KMGWO achieved the first rank in terms of performance. In addition, KMGWO is used to solve a classical engineering problem, where it is superior.

Journal ArticleDOI
TL;DR: In this article, the authors propose a predictive data regression technique (PDRT) based carbon nanotube (CNT) biosensor system for fields like biomedicine.
Abstract: In many settings, a patient's health is monitored with appropriate sensors connected to the hospital, as if the patient were lying in a hospital bed. The previous methods of monitoring can be uncomfortable for a variety of reasons, and tracking a growing number of patients requires many trained health professionals and doctors. The proposed predictive data regression technique (PDRT) based carbon nanotube (CNT) biosensor system is introduced for fields like biomedicine. In this analysis technique, the sensor value is compared to a threshold value; if any change occurs, the controller sends the user an alert via IoT. In this way, physicians can observe the patients and many of their physiological measurements and applications. Carbon nanotubes (CNT) have been used as electronic mediators and adsorption substrates in biosensors due to their extraordinary electrochemical properties; their potential biosensing applications are explored together with platinum and carbon nanomaterials, as well as electrochemical electrode preparation and features. The purpose of this proposal is to maintain a system for monitoring inpatient physiological parameters and activity. The physician can see a graphical view of the patient's parameters through a computer system worn by the patient, the Wi-Fi network, and the display. Abnormalities in patient data are sent as messages to the doctor via an IoT system to alert them to the situation. The proposed predictive data regression technique (PDRT) system gives better performance to the user and provides high output accuracy.

Book ChapterDOI
Ira Tuba, Ivana Strumberger, Eva Tuba, Nebojsa Bacanin, Milan Tuba
17 Jul 2021
TL;DR: In this article, the deep statistical comparison method is used to compare different versions of the widely used fireworks algorithm, which has been developed and improved over the last ten years; the paper provides a theoretical analysis of five versions: a cooperative framework for FWA, bare bones FWA, guided FWA, loser-out tournament based FWA, and dynamic search FWA.
Abstract: In the last decades, swarm intelligence algorithms have become a powerful tool for solving hard optimization problems. Nowadays, numerous algorithms have proved to be good for different problems. With the overwhelming number of algorithms, it has become hard for a common user to choose an appropriate method for solving a certain problem. To provide guidelines, it is necessary to classify optimization metaheuristics according to their capabilities. Deep statistical comparison represents a novel method for comparing and analyzing optimization algorithms. In this paper, the deep statistical comparison method was used for comparing different versions of the widely used fireworks algorithm. The fireworks algorithm has been developed and improved over the last ten years, and this paper provides a theoretical analysis of five different versions: a cooperative framework for FWA, bare bones FWA, guided FWA, loser-out tournament based FWA, and dynamic search FWA. Based on the obtained results, the loser-out tournament based FWA has the best performance in terms of solution quality, while the dynamic search FWA is the best in terms of the distribution of solutions in the search space.
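To give a flavour of distribution-level comparison (this is a simplified stand-in, not the full Deep Statistical Comparison protocol), the sketch below compares per-run result samples of two hypothetical FWA variants on one benchmark with a two-sample Kolmogorov-Smirnov test rather than comparing mean values only.

```python
# Simplified, hedged sketch of comparing two algorithm variants by result distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
# assumed: 30 independent runs of two FWA variants on one benchmark function
variant_a = 0.10 + 0.02 * rng.standard_normal(30)
variant_b = 0.12 + 0.02 * rng.standard_normal(30)

stat, p = ks_2samp(variant_a, variant_b)       # compare whole distributions
if p < 0.05:
    better = "A" if np.median(variant_a) < np.median(variant_b) else "B"
    print(f"distributions differ (p={p:.3f}); variant {better} ranks higher here")
else:
    print(f"no significant difference on this benchmark (p={p:.3f})")
```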

Proceedings ArticleDOI
25 Oct 2021
TL;DR: In this article, the authors propose novel AI techniques for segmenting the optic disc using EffUnet and performing classification using ASPP-EffUnet, achieving high accuracy in two phases.
Abstract: The main problem in this research area is generating an automatic AI technique to analyze biomedical images by slimming the dataset. Reducing the original dataset by removing unwanted noise yields data that help detect diseases with high accuracy. The highest level of accuracy can be achieved only by ensuring accuracy at each processing step. Dataset slimming or reduction is an NP-hard problem due to its many resembling variants. In this research work, we ensure high accuracy in two phases. In phase one, feature selection using the Normalized Tensor Tubal PCA (NTT-PCA) method is performed. This method is based on tensors with singular value decomposition (SVD) for accurate dimensionality reduction. The dimensionality-reduced output from phase one is further processed for accurate classification in phase two. The affected images are classified using ASPP-EffUnet. Atrous spatial pyramid pooling (ASPP) is combined with the efficient convolutional block of Unet to provide the ASPP-EffUnet CNN architecture for accurate classification. This two-phase model is designed and implemented on benchmark datasets for glaucoma detection and processes the fundus images in the dataset efficiently. We propose novel AI techniques for segmenting the optic discs using EffUnet and perform classification using ASPP-EffUnet. The highest accuracy is achieved by the NTT-PCA dimensionality reduction process and the ASPP-EffUnet based classification, which detects the boundaries of the eye cup and optic disc very precisely. Our resulting algorithm, "NTT-PCA with ASPP-EffUnet", for dimensionality reduction and classification is optimized to reduce computational complexity compared with existing detection algorithms such as PCA-LA-SVM, PCA-ResNet, and ASPP-Unet. We chose the ORIGA benchmark dataset for our experimental analysis. The crucial areas of the clinical setup are examined and implemented successfully. The prediction and classification accuracy of the proposed technique reaches nearly 100%.


Journal ArticleDOI
TL;DR: The dragonfly algorithm, developed in 2016, is one of the most popular metaheuristic techniques for complex optimization problems in the engineering area and has been used extensively in the past few years.
Abstract: The dragonfly algorithm was developed in 2016. It is one of the algorithms used by researchers to optimize an extensive range of uses and applications in various areas. At times, it offers superior performance compared to the most well-known optimization techniques. However, this algorithm faces several difficulties when it is utilized to enhance complex optimization problems. This work addresses the robustness of the method in solving real-world optimization issues, as well as its deficiencies in handling complex optimization problems. This review paper presents a comprehensive investigation of the dragonfly algorithm in the engineering area. First, an overview of the algorithm is discussed. In addition, we examine the modifications of the algorithm. The merged forms of this algorithm with different techniques and the modifications that have been done to make the algorithm perform better are addressed. Additionally, a survey on applications in the engineering area that used the dragonfly algorithm is offered. The covered engineering applications include mechanical engineering problems, electrical engineering problems, optimal parameters, economic load dispatch, and loss reduction. The algorithm is tested and evaluated against the particle swarm optimization algorithm and the firefly algorithm. To evaluate the ability of the dragonfly algorithm and the other participating algorithms, a set of traditional benchmarks (TF1-TF23) was utilized. Moreover, to examine the ability of the algorithm to optimize large-scale optimization problems, the CEC-C2019 benchmarks were utilized. A comparison is made between the algorithm and other metaheuristic techniques to show its ability to enhance various problems. The outcomes of the algorithm from the works that utilized the dragonfly algorithm previously and the outcomes on the benchmark test functions proved that, in comparison with the participating algorithms (GWO, PSO, and GA), the dragonfly algorithm offers excellent performance, especially for small to intermediate applications. Moreover, the shortcomings of the technique and some future works are presented. The authors conducted this research to help other researchers who want to study the algorithm and utilize it to optimize engineering problems.

Book ChapterDOI
TL;DR: An improvement on the original Bat algorithm has been made to speed up convergence and make the method more practical for large applications, and the proposed MBA establishes better global search ability and convergence than the original BA and other approaches.
Abstract: One popular example of metaheuristic algorithms from the swarm intelligence family is the Bat algorithm (BA). The algorithm was first presented in 2010 by Yang and quickly demonstrated its efficiency in comparison with other common algorithms. The BA is based on echolocation in bats. The BA uses automatic zooming to strike a balance between exploration and exploitation by imitating the deviations of the bat's pulse emission rate and loudness as it searches for prey. The BA maintains solution diversity using the frequency-tuning technique. In this way, the BA can quickly and efficiently switch from exploration to exploitation. Therefore, it becomes an efficient optimizer for any application when a quick solution is needed. In this paper, an improvement on the original BA has been made to speed up convergence and make the method more practical for large applications. To conduct a comprehensive comparative analysis between the original BA, the modified BA proposed in this paper, and other state-of-the-art bio-inspired metaheuristics, the performance of both approaches is evaluated on a standard set of 23 (unimodal, multimodal, and fixed-dimension multimodal) benchmark functions. Afterwards, the modified BA was applied to solve a real-world job scheduling problem in hotels and restaurants. Based on the achieved performance metrics, the proposed MBA establishes better global search ability and convergence than the original BA and other approaches.
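As a reference for the frequency-tuning mechanism mentioned above, here is a minimal sketch of the canonical bat algorithm loop; the paper's specific modification is not reproduced, and the sphere function, bounds, and parameter values are assumptions.

```python
# Hedged sketch of the canonical bat algorithm with frequency tuning.
import numpy as np

rng = np.random.default_rng(4)
lb, ub, dim, n, iters = -5.0, 5.0, 5, 15, 80
fmin, fmax, loud, pulse = 0.0, 2.0, 0.9, 0.5
f = lambda x: np.sum(x ** 2)                              # stand-in objective

x = rng.uniform(lb, ub, (n, dim))
v = np.zeros((n, dim))
best = min(x, key=f).copy()

for t in range(iters):
    for i in range(n):
        freq = fmin + (fmax - fmin) * rng.random()        # frequency tuning
        v[i] += (x[i] - best) * freq
        cand = np.clip(x[i] + v[i], lb, ub)
        if rng.random() > pulse:                          # local walk around the best bat
            cand = np.clip(best + 0.01 * rng.standard_normal(dim), lb, ub)
        if f(cand) < f(x[i]) and rng.random() < loud:     # accept with loudness probability
            x[i] = cand
        if f(x[i]) < f(best):
            best = x[i].copy()
print("best fitness:", round(f(best), 6))
```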