
Showing papers in "Neural Computing and Applications in 2022"


Journal ArticleDOI
TL;DR: The results indicate that the proposed PDO is effective in estimating optimal solutions for real-world optimization problems with unknown global optima, and shows stronger performance and higher capabilities than the other algorithms.

103 citations



Journal ArticleDOI
TL;DR: In this paper, the authors proposed an enhanced firefly algorithm adapted for tackling workflow scheduling challenges in a cloud-edge environment, which overcomes observed deficiencies of the original firefly metaheuristic by incorporating genetic operators and a quasi-reflection-based learning procedure.
Abstract: Edge computing is a novel technology closely related to the concept of the Internet of Things. This technology brings computing resources closer to the location where they are consumed by end-users, that is, to the edge of the cloud. In this way, response time is shortened and less network bandwidth is used. Workflow scheduling must be addressed to accomplish these goals. In this paper, we propose an enhanced firefly algorithm adapted for tackling workflow scheduling challenges in a cloud-edge environment. Our approach overcomes observed deficiencies of the original firefly metaheuristic by incorporating genetic operators and a quasi-reflection-based learning procedure. First, we validated the proposed improved algorithm on 10 modern standard benchmark instances and compared its performance with the original and other improved state-of-the-art metaheuristics. Second, we performed simulations for a workflow scheduling problem with two objectives: cost and makespan. We carried out a comparative analysis with other state-of-the-art approaches tested under the same experimental conditions. The algorithm proposed in this paper exhibits significant enhancements over the original firefly algorithm and other outstanding metaheuristics in terms of convergence speed and result quality. Based on the output of the conducted simulations, the proposed improved firefly algorithm obtains prominent results and improves workflow scheduling in the cloud-edge environment by reducing makespan and cost compared to other approaches.
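The quasi-reflection-based learning step mentioned in this abstract has a simple core: per dimension, a quasi-reflected candidate is sampled uniformly between the search-space center and the current solution. A minimal sketch (the bounds and candidate vector below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def quasi_reflect(x, lb, ub, rng):
    """Quasi-reflected point: per dimension, sample uniformly between
    the search-space center (lb + ub) / 2 and the candidate x."""
    center = (lb + ub) / 2.0
    lo = np.minimum(center, x)
    hi = np.maximum(center, x)
    return rng.uniform(lo, hi)

rng = np.random.default_rng(0)
lb, ub = np.zeros(3), np.full(3, 10.0)   # illustrative search bounds
x = np.array([1.0, 9.0, 5.0])            # illustrative firefly position
q = quasi_reflect(x, lb, ub, rng)
# q lies between the center (5, 5, 5) and x in every dimension
```

In a metaheuristic loop, the quasi-reflected point is typically evaluated alongside the original and the fitter of the two is kept, which diversifies the population at little extra cost.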

45 citations


Journal ArticleDOI
TL;DR: In this article, a review of EEG-based emotion recognition methods is presented, including feature extraction, feature selection/reduction, machine learning methods (e.g., k-nearest neighbor, support vector machine, decision tree, artificial neural network, random forest, and naive Bayes), and deep learning methods.
Abstract: Affective computing, a subcategory of artificial intelligence, detects, processes, interprets, and mimics human emotions. Thanks to the continued advancement of portable non-invasive human sensor technologies, like brain–computer interfaces (BCI), emotion recognition has piqued the interest of academics from a variety of domains. Facial expressions, speech, behavior (gesture/posture), and physiological signals can all be used to identify human emotions. However, the first three may be ineffectual because people may hide their true emotions consciously or unconsciously (so-called social masking). Physiological signals can provide more accurate and objective emotion recognition. Electroencephalogram (EEG) signals respond in real time and are more sensitive to changes in affective states than peripheral neurophysiological signals. Thus, EEG signals can reveal important features of emotional states. Recently, several EEG-based BCI emotion recognition techniques have been developed. In addition, rapid advances in machine and deep learning have enabled machines or computers to understand, recognize, and analyze emotions. This study reviews emotion recognition methods that rely on multi-channel EEG signal-based BCIs and provides an overview of what has been accomplished in this area. It also provides an overview of the datasets and methods used to elicit emotional states. Following the usual emotion recognition pathway, we review various EEG feature extraction and feature selection/reduction techniques, machine learning methods (e.g., k-nearest neighbor, support vector machine, decision tree, artificial neural network, random forest, and naive Bayes), and deep learning methods (e.g., convolutional and recurrent neural networks with long short-term memory). In addition, EEG rhythms that are strongly linked to emotions as well as the relationship between distinct brain areas and emotions are discussed.
We also discuss several human emotion recognition studies, published between 2015 and 2021, that use EEG data and compare different machine and deep learning algorithms. Finally, this review suggests several challenges and future research directions in the recognition and classification of human emotional states using EEG.

45 citations


Journal ArticleDOI
TL;DR: A Multi-scale Strip Pooling Feature Aggregation Network is proposed, which uses the residual network as the backbone to extract different levels of semantic information and an Improved Pyramid Pooling module is introduced to mine deep multi-scale semantic information.

42 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an e-diagnosis system based on machine learning (ML) algorithms to be implemented on the Internet of Medical Things (IoMT) environment, particularly for diagnosing diabetes mellitus (type 2 diabetes).
Abstract: This paper proposes an e-diagnosis system based on machine learning (ML) algorithms to be implemented on the Internet of Medical Things (IoMT) environment, particularly for diagnosing diabetes mellitus (type 2 diabetes). However, ML applications tend to be mistrusted because of their inability to show the internal decision-making process, resulting in slow uptake by end-users within certain healthcare sectors. This research delineates the use of three interpretable supervised ML models (a Naïve Bayes classifier, a random forest classifier, and a J48 decision tree), trained and tested on the Pima Indians diabetes dataset in the R programming language. The performance of each algorithm is analyzed to determine the one with the best accuracy, precision, sensitivity, and specificity. An assessment of the decision process is also made to improve the model. It can be concluded that a Naïve Bayes model works well with a more fine-tuned selection of features for binary classification, while random forest works better with more features.
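The paper's models were trained in R; as a language-agnostic illustration of the simplest of the three, a Gaussian Naive Bayes classifier for binary classification can be written in a few lines (the toy feature matrix below is an assumption for demonstration, not the Pima data):

```python
import numpy as np

def fit_gnb(X, y):
    """Fit per-class Gaussian mean/variance and class priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(params, X):
    """Pick the class maximizing log-likelihood + log-prior."""
    scores = []
    for c, (mu, var, prior) in params.items():
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(ll + np.log(prior))
    classes = np.array(list(params))
    return classes[np.argmax(scores, axis=0)]

# Toy, well-separated tabular data standing in for clinical features
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])
model = fit_gnb(X, y)
pred = predict_gnb(model, X)  # recovers the training labels here
```

The per-class means, variances, and priors are directly inspectable, which is the interpretability property the abstract emphasizes.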

40 citations




Journal ArticleDOI
TL;DR: In this article, a comprehensive review of various pooling techniques proposed in the literature of computer vision and medical image analysis is provided, and an extensive set of experiments is conducted to compare a selected set of pooling techniques on two different medical image classification problems, namely HEp-2 cell and diabetic retinopathy image classification.
Abstract: Convolutional neural networks (CNN) are widely used in computer vision and medical image analysis as the state-of-the-art technique. In CNN, pooling layers are included mainly for downsampling the feature maps by aggregating features from local regions. Pooling can help CNN to learn invariant features and reduce computational complexity. Although max and average pooling are the most widely used, various other pooling techniques have been proposed for different purposes, including techniques to reduce overfitting, to capture higher-order information such as correlation between features, to capture spatial or structural information, etc. As not all of these pooling techniques are well-explored for medical image analysis, this paper provides a comprehensive review of various pooling techniques proposed in the literature of computer vision and medical image analysis. In addition, an extensive set of experiments is conducted to compare a selected set of pooling techniques on two different medical image classification problems, namely HEp-2 cell and diabetic retinopathy image classification. Experiments suggest that the most appropriate pooling mechanism for a particular classification task is related to the scale of the class-specific features with respect to the image size. As this is the first work focusing on pooling techniques for the application of medical image analysis, we believe that this review and the comparative study will provide a guideline to the choice of pooling mechanisms for various medical image analysis tasks. In addition, by carefully choosing the pooling operations with the standard ResNet architecture, we show new state-of-the-art results on both HEp-2 cells and diabetic retinopathy image datasets.
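The two baseline operations the abstract names, max and average pooling, can be sketched directly in NumPy; the window size and toy feature map below are illustrative, and real CNN frameworks provide these as built-in layers:

```python
import numpy as np

def pool2d(fmap, k=2, mode="max"):
    """Non-overlapping k x k pooling over a 2D feature map
    (spatial dimensions must be divisible by k)."""
    h, w = fmap.shape
    blocks = fmap.reshape(h // k, k, w // k, k)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

fmap = np.array([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [0, 0, 1, 1],
                 [0, 4, 1, 1]], dtype=float)
mx = pool2d(fmap, mode="max")   # [[4, 8], [4, 1]]
avg = pool2d(fmap, mode="avg")  # [[2.5, 6.5], [1.0, 1.0]]
```

Max pooling keeps only the strongest activation per window (useful for sparse, localized features), while average pooling summarizes the whole window, which is one reason the best choice depends on feature scale, as the experiments in the paper suggest.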

37 citations




Journal ArticleDOI
TL;DR: In this article, a complex Pythagorean fuzzy ELECTRE II method was proposed for group decision making in the complex Pythagorean fuzzy framework, designed to perform pairwise comparisons of the alternatives using the core notions of concordance, discordance and indifferent sets.
Abstract: This article contributes to the advancement and evolution of outranking decision-making methodologies, with a novel essay on the ELimination and Choice Translating REality (ELECTRE) family of methods. Its primary target is to unfold the constituents and expound the implementation of the ELECTRE II method for group decision making in the complex Pythagorean fuzzy framework. This results in the complex Pythagorean fuzzy ELECTRE II method. By inception, it is intrinsically superior to models using one-dimensional data. It is designed to perform the pairwise comparisons of the alternatives using the core notions of concordance, discordance and indifferent sets, which is then followed by the construction of complex Pythagorean fuzzy concordance and discordance matrices. Further, the strong and weak outranking relations are developed by comparing the concordance and discordance indices with the concordance and discordance levels. Later, the forward, reverse and average rankings of the alternatives are computed by dint of strong and weak outranking graphs. This methodology is supported by a case study for the selection of a wastewater treatment process, and by a numerical example for the selection of the best cloud solution for a big data project. Its consistency is confirmed by an effectiveness test and a comparison analysis with the Pythagorean fuzzy ELECTRE II and complex Pythagorean fuzzy ELECTRE I methods.
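The concordance idea at the heart of ELECTRE II can be illustrated with a crisp (non-fuzzy) sketch; the paper works with complex Pythagorean fuzzy values, so this only shows the pairwise weight-accumulation step, and the scores and weights below are assumptions:

```python
import numpy as np

def concordance_matrix(scores, weights):
    """Crisp ELECTRE-style concordance: C[a, b] is the total (normalized)
    weight of the criteria on which alternative a scores at least as
    well as alternative b."""
    w = np.asarray(weights, float) / np.sum(weights)
    n = scores.shape[0]
    C = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                C[a, b] = w[scores[a] >= scores[b]].sum()
    return C

scores = np.array([[7.0, 5.0],    # alternative 0 on two criteria
                   [6.0, 8.0]])   # alternative 1
C = concordance_matrix(scores, weights=[0.6, 0.4])
# C[0, 1] = 0.6 (alt 0 wins only criterion 0), C[1, 0] = 0.4
```

Outranking relations are then obtained by comparing such concordance entries (together with discordance entries) against threshold levels, as the abstract describes.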



Journal ArticleDOI
TL;DR: In this article, the authors proposed a new method called Dilated Transformer, which conducts self-attention alternately in local and global scopes to capture pairwise patch relations.
Abstract: Computer-aided medical image segmentation has been applied widely in diagnosis and treatment to obtain clinically useful information on the shapes and volumes of target organs and tissues. In the past several years, convolutional neural network (CNN)-based methods (e.g., U-Net) have dominated this area, but still suffered from inadequate long-range information capturing. Hence, recent work presented computer vision Transformer variants for medical image segmentation tasks and obtained promising performances. Such Transformers model long-range dependency by computing pair-wise patch relations. However, they incur prohibitive computational costs, especially on 3D medical images (e.g., CT and MRI). In this paper, we propose a new method called Dilated Transformer, which conducts self-attention alternately in local and global scopes to capture pair-wise patch relations. Inspired by dilated convolution kernels, we conduct the global self-attention in a dilated manner, enlarging receptive fields without increasing the number of patches involved and thus reducing computational costs. Based on this design of Dilated Transformer, we construct a U-shaped encoder–decoder hierarchical architecture called D-Former for 3D medical image segmentation. Experiments on the Synapse and ACDC datasets show that our D-Former model, trained from scratch, outperforms various competitive CNN-based or Transformer-based segmentation models at a low computational cost without a time-consuming pre-training process.
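The dilated global attention can be pictured as splitting the patch sequence into interleaved groups and attending within each group, so each attention operation sees every d-th patch. A sketch of the index bookkeeping on a 1D sequence (the sequence length and dilation rate are illustrative assumptions; the paper operates on 3D patch grids):

```python
import numpy as np

def dilated_groups(num_patches, dilation):
    """Partition a 1D patch sequence into `dilation` interleaved groups.
    Self-attention within each group covers every `dilation`-th patch,
    enlarging the receptive field without adding patches per attention op."""
    idx = np.arange(num_patches)
    return [idx[offset::dilation] for offset in range(dilation)]

groups = dilated_groups(8, 2)
# groups[0] -> [0, 2, 4, 6], groups[1] -> [1, 3, 5, 7]
```

Alternating this dilated grouping with dense local windows is what lets the architecture trade off receptive field against quadratic attention cost.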

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new intuitionistic fuzzy extension of the MAIRCA framework, which can assess coronavirus vaccines according to evaluation criteria such as duration of protection, effectiveness of the vaccine, success against mutations, and logistics.
Abstract: All over the world, the COVID-19 outbreak seriously affects life, whereas numerous people have infected and passed away. To control the spread of it and to protect people, appreciable vaccine development efforts continue with increasing momentum. Given that this pandemic will be in our lives for a long time, it is obvious that a reliable and useful framework is needed to choose among coronavirus vaccines. To this end, this paper proposes a new intuitionistic fuzzy extension of MAIRCA framework, named intuitionistic fuzzy MAIRCA (IF-MAIRCA) to assess coronavirus vaccines according to some evaluation criteria. Based on the group decision-making, the IF-MAIRCA framework both extracts the criteria weights and discovers the prioritization of the alternatives under uncertainty. In this work, as a case study, five coronavirus vaccines approved by the world's leading authorities are evaluated according to various criteria. The findings demonstrate that the most significant criteria considered in coronavirus vaccine selection are "duration of protection," "effectiveness of the vaccine," "success against the mutations," and "logistics," respectively, whereas the best coronavirus vaccine is AZD1222. Apart from this, the proposed model's robustness is verified with a three-phase sensitivity analysis.


Journal ArticleDOI
TL;DR: In this article, an inverted bell-curve-based ensemble of deep learning models was proposed for the detection of COVID-19 from CXR images.
Abstract: Novel Coronavirus 2019 disease or COVID-19 is a viral disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The use of chest X-rays (CXRs) has become an important practice to assist in the diagnosis of COVID-19 as they can be used to detect the abnormalities developed in the infected patients' lungs. With the fast spread of the disease, many researchers across the world are striving to use several deep learning-based systems to identify the COVID-19 from such CXR images. To this end, we propose an inverted bell-curve-based ensemble of deep learning models for the detection of COVID-19 from CXR images. We first use a selection of models pretrained on ImageNet dataset and use the concept of transfer learning to retrain them with CXR datasets. Then the trained models are combined with the proposed inverted bell curve weighted ensemble method, where the output of each classifier is assigned a weight, and the final prediction is done by performing a weighted average of those outputs. We evaluate the proposed method on two publicly available datasets: the COVID-19 Radiography Database and the IEEE COVID Chest X-ray Dataset. The accuracy, F1 score and the AUC ROC achieved by the proposed method are 99.66%, 99.75% and 99.99%, respectively, in the first dataset, and, 99.84%, 99.81% and 99.99%, respectively, in the other dataset. Experimental results ensure that the use of transfer learning-based models and their combination using the proposed ensemble method result in improved predictions of COVID-19 in CXRs.
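The ensemble step described above, a weighted average of each classifier's output followed by an argmax, can be sketched generically; the inverted bell-curve weight function itself is defined in the paper, so the weights below are placeholder assumptions:

```python
import numpy as np

def weighted_ensemble(probas, weights):
    """Weighted average of per-model class-probability outputs.
    probas: (n_models, n_samples, n_classes); weights: (n_models,)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize the model weights
    avg = np.tensordot(w, probas, axes=1)  # -> (n_samples, n_classes)
    return avg.argmax(axis=1)

# Two toy "models" that disagree on sample 1; the higher-weighted one wins
probas = np.array([[[0.9, 0.1], [0.4, 0.6]],
                   [[0.8, 0.2], [0.7, 0.3]]])
pred = weighted_ensemble(probas, weights=[0.3, 0.7])  # -> [0, 0]
```

In the paper, each model's weight is derived from its validation performance via the inverted bell-curve function rather than fixed by hand as here.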


Journal ArticleDOI
TL;DR: In this article, an improved marine predators algorithm (IMPA) is used to find the best values for the hyperparameters of the CNN architecture; the proposed method uses a pretrained CNN model called ResNet50 (residual network).
Abstract: Breast cancer is the second leading cause of death in women; therefore, effective early detection of this cancer can reduce its mortality rate. Breast cancer detection and classification in the early phases of development may allow for optimal therapy. Convolutional neural networks (CNNs) have enhanced tumor detection and classification efficiency in medical imaging compared to traditional approaches. This paper proposes a novel classification model for breast cancer diagnosis based on a hybridized CNN and an improved optimization algorithm, along with transfer learning, to help radiologists detect abnormalities efficiently. The marine predators algorithm (MPA) is the optimization algorithm we used, and we improve it using the opposition-based learning strategy to cope with the implied weaknesses of the original MPA. The improved marine predators algorithm (IMPA) is used to find the best values for the hyperparameters of the CNN architecture. The proposed method uses a pretrained CNN model called ResNet50 (residual network). This model is hybridized with the IMPA algorithm, resulting in an architecture called IMPA-ResNet50. Our evaluation is performed on two mammographic datasets, the mammographic image analysis society (MIAS) and curated breast imaging subset of DDSM (CBIS-DDSM) datasets. The proposed model was compared with other state-of-the-art approaches. The obtained results showed that the proposed model outperforms the compared state-of-the-art approaches in classification performance, achieving 98.32% accuracy, 98.56% sensitivity, and 98.68% specificity on the CBIS-DDSM dataset and 98.88% accuracy, 97.61% sensitivity, and 98.40% specificity on the MIAS dataset.
To evaluate the performance of IMPA in finding the optimal values for the hyperparameters of the ResNet50 architecture, it was compared with four other optimization algorithms: the gravitational search algorithm (GSA), Harris hawks optimization (HHO), the whale optimization algorithm (WOA), and the original MPA. The counterpart algorithms were also hybridized with the ResNet50 architecture, producing models named GSA-ResNet50, HHO-ResNet50, WOA-ResNet50, and MPA-ResNet50, respectively. The results indicated that the proposed IMPA-ResNet50 achieved better performance than its counterparts.
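The opposition-based learning strategy used here to strengthen MPA has a one-line core: each candidate is mirrored across the center of the search bounds, and (in the usual scheme) the fitter of the pair is kept. A minimal sketch, with illustrative bounds and candidate:

```python
import numpy as np

def opposite(x, lb, ub):
    """Opposition-based learning: mirror a candidate across the
    center of the search space, x_opp = lb + ub - x, per dimension."""
    return lb + ub - x

lb, ub = np.zeros(4), np.full(4, 10.0)   # illustrative hyperparameter bounds
x = np.array([1.0, 8.0, 5.0, 2.5])       # illustrative candidate
x_opp = opposite(x, lb, ub)              # [9.0, 2.0, 5.0, 7.5]
```

Evaluating both a candidate and its opposite roughly doubles the chance of starting near a good region, which is the weakness of the original MPA this improvement targets.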

Journal ArticleDOI
TL;DR: In this paper, the authors employed a systematic literature review (SLR) to cover all aspects of outcomes from related papers, including survival analysis, forecasting, economic and geographical issues, monitoring methods, medication development, and hybrid apps.
Abstract: Recently, the COVID-19 epidemic has resulted in millions of deaths and has impacted practically every area of human life. Several machine learning (ML) approaches are employed in the medical field in many applications, including detecting and monitoring patients, notably in COVID-19 management. Different medical imaging systems, such as computed tomography (CT) and X-ray, offer ML an excellent platform for combating the pandemic. Because of this need, a significant quantity of research has been carried out; thus, in this work, we employed a systematic literature review (SLR) to cover all aspects of outcomes from related papers. Imaging methods, survival analysis, forecasting, economic and geographical issues, monitoring methods, medication development, and hybrid apps are the seven key uses of applications employed in the COVID-19 pandemic. Convolutional neural networks (CNNs), long short-term memory networks (LSTM), recurrent neural networks (RNNs), generative adversarial networks (GANs), autoencoders, random forest, and other ML techniques are frequently used in such scenarios. Next, cutting-edge applications related to ML techniques for pandemic medical issues are discussed. Various problems and challenges linked with ML applications for this pandemic were reviewed. It is expected that additional research will be conducted in the upcoming years to limit the spread and support catastrophe management. According to the data, most papers are evaluated mainly on characteristics such as flexibility and accuracy, while other factors such as safety are overlooked. Also, Keras was the most often used library, accounting for 24.4 percent of the studies reviewed. Furthermore, medical imaging systems are employed for diagnostic reasons in 20.4 percent of applications.

Journal ArticleDOI
TL;DR: In this article , a deep neural network is proposed to address the electricity consumption forecasting in the short-term, namely, a long shortterm memory (LSTM) network due to its ability to deal with sequential data such as time-series data.
Abstract: Nowadays, electricity is a basic commodity necessary for the well-being of any modern society. Due to the growth in electricity consumption in recent years, mainly in large cities, electricity forecasting is key to the management of an efficient, sustainable and safe smart grid for the consumer. In this work, a deep neural network is proposed to address short-term electricity consumption forecasting, namely, a long short-term memory (LSTM) network, due to its ability to deal with sequential data such as time series. First, the optimal values for certain hyper-parameters were obtained by a random search and a metaheuristic, called the coronavirus optimization algorithm (CVOA), based on the propagation of the SARS-CoV-2 virus. Then, the optimal LSTM was applied to predict electricity demand with a 4-h forecast horizon. Results using nine and a half years of Spanish electricity data measured at 10-min frequency are presented and discussed. Finally, the performance of the LSTM using random search and the LSTM using CVOA is compared, on the one hand, with that of recently published deep neural networks (such as a deep feed-forward neural network optimized with a grid search and temporal fusion transformers optimized with a sampling algorithm) and, on the other hand, with traditional machine learning techniques, such as linear regression, decision trees and tree-based ensemble techniques (gradient-boosted trees and random forest), achieving the smallest prediction error, below 1.5%.
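Before an LSTM can be trained on such a demand series, the time series must be framed as supervised (window, target) pairs. A minimal sketch of that framing; the lag count and horizon below are illustrative, not the paper's 10-min/4-h configuration:

```python
import numpy as np

def make_windows(series, n_lags, horizon):
    """Frame a univariate series as supervised pairs:
    X[i] = the last n_lags values, y[i] = the value `horizon` steps
    after the end of that window."""
    X, y = [], []
    for i in range(len(series) - n_lags - horizon + 1):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags + horizon - 1])
    return np.array(X), np.array(y)

demand = np.arange(10.0)   # stand-in for a sequence of demand readings
X, y = make_windows(demand, n_lags=3, horizon=2)
# X[0] = [0, 1, 2], y[0] = 4 (two steps after the window)
```

With 10-min data and a 4-h horizon, `horizon` would be 24 steps; the resulting `X` is then reshaped to (samples, timesteps, features) for the LSTM input layer.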


Journal ArticleDOI
TL;DR: In this paper, the authors reviewed the recent CNN-based methods applied to UAV-based remote sensing image analysis for crop/plant classification, to help researchers and farmers decide which algorithms to use according to their studied crops and the hardware used.
Abstract: During the last few years, Unmanned Aerial Vehicle (UAV) technologies have been widely used to improve agriculture productivity while reducing drudgery, inspection time, and crop management cost. Moreover, they are able to cover large areas in a matter of minutes. Due to impressive technological advancement, UAV-based remote sensing technologies are increasingly used to collect valuable data that can be used in many precision agriculture applications, including crop/plant classification. In order to process these data accurately, we need powerful tools and algorithms such as deep learning approaches. Recently, the Convolutional Neural Network (CNN) has emerged as a powerful tool for image processing tasks, achieving remarkable results and making it the state-of-the-art technique for vision applications. In the present study, we reviewed the recent CNN-based methods applied to UAV-based remote sensing image analysis for crop/plant classification, to help researchers and farmers decide which algorithms to use according to their studied crops and the hardware used. Fusing different UAV-based data with deep learning approaches has emerged as a powerful way to classify different crop types accurately. Readers of the present review will learn the most challenging issues facing researchers classifying crop types from UAV imagery and the potential solutions for improving the performance of deep learning-based algorithms.

Journal ArticleDOI
TL;DR: This paper employs the latest DCNN-based object detection model (YOLOv4) for choosing regions (i.e., multiple objects) and chaos-based encryption for fast encryption, and proposes a multi-object-oriented encryption algorithm to protect all the detected ROIs in one go.

Journal ArticleDOI
TL;DR: In the proposed approach, aspects are extracted from reviews, and only these aspects and their respective sentiments are employed for fake review detection; the approach is compared with traditional machine learning techniques to show that deep neural networks handle complex computation better than traditional techniques.

Journal ArticleDOI
TL;DR: In this article, a new hybrid intelligence approach based on random forest (RF) and particle swarm optimization (PSO) is proposed for predicting backbreak with high accuracy, to reduce the undesired phenomena induced by backbreak in open-pit blasting.
Abstract: Backbreak is a rock fracture problem that exceeds the limits of the last row of holes in a blasting operation. Excessive backbreak increases operational costs and also poses a threat to mine safety. In this regard, a new hybrid intelligence approach based on random forest (RF) and particle swarm optimization (PSO) is proposed for predicting backbreak with high accuracy, to reduce the undesired phenomena induced by backbreak in open-pit blasting. A data set of 234 samples with six input parameters, including special drilling (SD), spacing (S), burden (B), hole length (L), stemming (T) and powder factor (PF), and one output parameter, backbreak (BB), is set up in this study. Seven input combinations (one with six parameters, six with five parameters) are built to generate the optimal prediction model. The PSO algorithm is integrated with the RF algorithm to find the optimal hyper-parameters of each model, using as fitness function the mean absolute error (MAE) of tenfold cross-validation. The performance capacities of the optimal models are assessed using MAE, root-mean-square error (RMSE), Pearson correlation coefficient (R2) and mean absolute percentage error (MAPE). Findings demonstrated that the PSO–RF model combining L–S–B–T–PF, with MAE of 0.0132 and 0.0568, RMSE of 0.0811 and 0.1686, R2 of 0.9990 and 0.9961 and MAPE of 0.0027 and 0.0116 in the training and testing phases, respectively, has the optimal prediction performance. The optimal PSO–RF models were compared with classical artificial neural network, RF, genetic programming, support vector machine and convolutional neural network models, showing that the PSO–RF model is superior in predicting backbreak. The Gini index of each input variable has also been calculated in the RF model: 31.2 (L), 23.1 (S), 27.4 (B), 36.6 (T), 23.4 (PF) and 16.9 (SD), respectively.
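The PSO-driven hyper-parameter search can be sketched generically. In the sketch below, a toy quadratic stands in for the cross-validated MAE of an RF model, and the inertia and acceleration coefficients are conventional textbook defaults, not the paper's settings:

```python
import numpy as np

def pso_minimize(fitness, lb, ub, n_particles=12, iters=60, seed=0):
    """Bare-bones PSO. `fitness` maps a candidate hyper-parameter vector
    to a cost (here standing in for tenfold cross-validation MAE)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    pos = rng.uniform(lb, ub, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lb, ub)          # keep inside the bounds
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy 2D fitness with optimum at (3, 7), standing in for CV error
best, err = pso_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] - 7) ** 2,
                         lb=np.zeros(2), ub=np.full(2, 10.0))
```

In the hybrid PSO–RF setup, each fitness evaluation would train an RF with the candidate hyper-parameters and return its cross-validation MAE, which is why such searches are expensive relative to this toy.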

Journal ArticleDOI
TL;DR: In this paper, a pre-trained VGG19 deep CNN architecture and the YOLOv3 detection algorithm were used to classify COVID-19 via X-ray images with high precision.
Abstract: X-ray images are an easily accessible, fast, and inexpensive method of diagnosing COVID-19, widely used in health centers around the world. In places where there is a shortage of specialist doctors and radiologists, there is a need for a system that can direct patients to advanced health centers by pre-diagnosing COVID-19 from X-ray images. Also, smart computer-aided systems that automatically detect COVID-19 positive cases will support daily clinical applications. The study aimed to classify COVID-19 via X-ray images with high precision using the pre-trained VGG19 deep CNN architecture and the YOLOv3 detection algorithm. For this purpose, VGG19 and VGGCOV19-NET models were created, along with their cascade variants, in which the models are fed with the lung-zone X-ray regions detected by the YOLOv3 algorithm. Model performances were evaluated using fivefold cross-validation according to recall, specificity, precision, F1-score, confusion matrix, and ROC analysis performance metrics. While the accuracy of the cascade VGGCOV19-NET model was 99.84% for the binary-class (COVID vs. no-findings) data set, it was 97.16% for the three-class (COVID vs. no-findings vs. pneumonia) data set. The cascade VGGCOV19-NET model has a higher classification performance than VGG19, cascade VGG19, VGGCOV19-NET and previous studies. Feeding the CNN models with the YOLOv3 detection algorithm decreases training and testing time while increasing classification performance. The results indicate that the proposed cascade VGGCOV19-NET architecture was highly successful in detecting COVID-19. Therefore, this study contributes to the literature in terms of both YOLO-aided deep architecture and classification success.