
Showing papers in "Applied Soft Computing in 2021"


Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed combined model can capture non-linear characteristics of WSTS, achieving better forecasting performance than single forecasting models in terms of accuracy.
Abstract: Reliable and accurate wind speed forecasting (WSF) is fundamental for the efficient exploitation of wind power. In particular, high-accuracy short-term WSF (ST-WSF) has a significant impact on the efficiency of wind power generation systems. Due to the non-stationarity and stochasticity of the wind speed (WS), a single model is often not sufficient in practice for the accurate estimation of the WS. Hybrid models have been proposed to overcome the limitations of single models and increase WS forecasting performance. In this paper, a new hybrid WSF model is developed based on a long short-term memory (LSTM) network and decomposition methods with the grey wolf optimizer (GWO). In the pre-processing stage, missing data are filled by the weighted moving average (WMA) method, the WS time series (WSTS) data are smoothed by WMA filtering, and the smoothed data are used as model input after Z-score normalization. The forecasting model is formed by the combination of a single model, a decomposition method, and an advanced optimization algorithm. Subsequently, the hybrid WSF model is developed by combining the LSTM and decomposition methods and optimizing the intrinsic mode function (IMF) estimated outputs with the GWO. The developed non-linear hybrid model is applied to data collected from five wind farms in the Marmara region, Turkey. The obtained experimental results indicate that the proposed combined model can capture the non-linear characteristics of WSTS, achieving better forecasting performance than single forecasting models in terms of accuracy.

343 citations
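As an illustration of the pre-processing stage described in the abstract, the WMA smoothing and Z-score normalization steps can be sketched as follows; the two-tap window and its weights are illustrative assumptions, not values from the paper.

```python
def weighted_moving_average(series, weights):
    """Smooth a series with a causal weighted moving average (WMA)."""
    w = [x / sum(weights) for x in weights]  # normalize the weights
    n = len(w)
    smoothed = list(series[:n - 1])          # keep warm-up samples as-is
    for i in range(n - 1, len(series)):
        window = series[i - n + 1:i + 1]
        smoothed.append(sum(a * b for a, b in zip(window, w)))
    return smoothed

def z_score(series):
    """Standardize a series to zero mean and unit variance."""
    mean = sum(series) / len(series)
    var = sum((x - mean) ** 2 for x in series) / len(series)
    return [(x - mean) / var ** 0.5 for x in series]
```

In the paper's pipeline the smoothed, normalized series would then be fed to the LSTM-based forecaster.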


Journal ArticleDOI
TL;DR: An AI system that automatically analyzes CT images and provides the probability of infection to rapidly detect COVID-19 pneumonia and is able to overcome a series of challenges in this particular situation and deploy the system in four weeks.
Abstract: The sudden outbreak of novel coronavirus disease 2019 (COVID-19) increased the diagnostic burden of radiologists. In the time of an epidemic crisis, we hope artificial intelligence (AI) can reduce physician workload in regions with the outbreak and improve diagnostic accuracy for physicians before they can acquire enough experience with the new disease. In this paper, we present our experience in building and deploying an AI system that automatically analyzes CT images and provides the probability of infection to rapidly detect COVID-19 pneumonia. The proposed system, which consists of classification and segmentation, will save about 30%-40% of the detection time for physicians and improve the performance of COVID-19 detection. Specifically, working in an interdisciplinary team of over 30 people with medical and/or AI backgrounds, geographically distributed in Beijing and Wuhan, we were able to overcome a series of challenges (e.g., data discrepancy, time-effectiveness of model testing, data security) in this particular situation and deploy the system in four weeks. In addition, since the proposed AI system prioritizes each CT image by its probability of infection, physicians can confirm and segregate infected patients in time. Using 1,136 training cases (723 positives for COVID-19) from five hospitals, we were able to achieve a sensitivity of 0.974 and specificity of 0.922 on the test dataset, which included a variety of pulmonary diseases.

266 citations
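The reported sensitivity (0.974) and specificity (0.922) follow the standard confusion-matrix definitions, which can be computed as below; the counts in the usage example are hypothetical, not from the paper.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (recall on positives) = TP / (TP + FN);
    specificity (recall on negatives) = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```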


Journal ArticleDOI
TL;DR: Two deep learning architectures have been proposed that automatically detect positive COVID-19 cases using chest CT images, and the results prove that the proposed architectures show outstanding success in infection detection.
Abstract: Coronavirus disease 2019 (COVID-19), which emerged in Wuhan, China in 2019 and has spread rapidly all over the world since the beginning of 2020, has infected millions of people and caused many deaths. For this pandemic, which is still in effect, mobilization has started all over the world, and various restrictions and precautions have been taken to prevent the spread of this disease. In addition, infected people must be identified in order to control the infection. However, due to the inadequate number of Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests, chest computed tomography (CT) has become a popular tool to assist the diagnosis of COVID-19. In this study, two deep learning architectures have been proposed that automatically detect positive COVID-19 cases using chest CT images. Lung segmentation (preprocessing) in the CT images, which are given as input to these proposed architectures, is performed automatically with Artificial Neural Networks (ANN). Since both architectures contain the AlexNet architecture, the recommended method is a transfer learning application. However, the second proposed architecture is a hybrid structure, as it contains a Bidirectional Long Short-Term Memory (BiLSTM) layer, which also takes into account the temporal properties. While the COVID-19 classification accuracy of the first architecture is 98.14%, this value is 98.70% for the second, hybrid architecture. The results prove that the proposed architectures show outstanding success in infection detection and, therefore, this study contributes to previous studies in terms of both deep architectural design and high classification success.

228 citations


Journal ArticleDOI
TL;DR: The proposed WMSDE can avoid premature convergence, balance local and global search ability, accelerate convergence, and improve population diversity and search quality; it is compared with five state-of-the-art DE variants on 11 benchmark functions.
Abstract: The optimization performance of the differential evolution (DE) algorithm significantly depends on its control parameters and mutation strategy. However, it is difficult to set suitable control parameters and select a reasonable mutation strategy for DE when solving an actual engineering optimization problem. To solve these problems, a new optimal mutation strategy based on the complementary advantages of five mutation strategies is designed to develop a novel improved DE algorithm with the wavelet basis function, named WMSDE, which can improve search quality, accelerate convergence, and avoid falling into local optima and stagnation. In the proposed WMSDE, the initial population is divided into several subpopulations to exchange search information between the different subpopulations and improve the population diversity to a certain extent. The wavelet basis function and the normal distribution function are used to control the scaling factor and the crossover rate, respectively, in order to ensure the diversity of solutions and accelerate convergence. The new optimal mutation strategy is used to improve the local search ability and ensure the global search ability. Finally, the proposed WMSDE is compared with five state-of-the-art DE variants on 11 benchmark functions. The experimental results indicate that the proposed WMSDE can avoid premature convergence, balance local and global search ability, accelerate convergence, and improve population diversity and search quality. Additionally, a real-world airport gate assignment problem is employed to further prove the effectiveness of the proposed WMSDE. The results show that it can effectively solve the complex airport gate assignment problem and obtain an airport gate assignment rate of 97.6%.

198 citations
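WMSDE builds on the classical DE operators it modifies. A minimal sketch of the baseline DE/rand/1 mutation and binomial crossover follows; the fixed F and CR values are illustrative defaults, not the wavelet- and normal-distribution-controlled parameters of the paper.

```python
import random

def de_rand_1(population, i, F=0.5):
    """Baseline DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct indices different from i."""
    idx = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = random.sample(idx, 3)
    return [a + F * (b - c) for a, b, c in
            zip(population[r1], population[r2], population[r3])]

def binomial_crossover(target, mutant, CR=0.9):
    """Per-dimension mix of target and mutant; index j_rand forces
    at least one gene to come from the mutant."""
    j_rand = random.randrange(len(target))
    return [m if (random.random() < CR or j == j_rand) else t
            for j, (t, m) in enumerate(zip(target, mutant))]
```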


Journal ArticleDOI
TL;DR: A novel emotion recognition method based on a deep learning model (ERDL) that fuses a graph convolutional neural network (GCNN) and long short-term memory (LSTM) networks, achieving better classification results than state-of-the-art methods.
Abstract: In recent years, graph convolutional neural networks have become a research focus and inspired new ideas for EEG-based emotion recognition. Deep learning has been widely used in emotion recognition, but it is still challenging to construct models and algorithms in practical applications. In this paper, we propose a novel emotion recognition method based on a novel deep learning model (ERDL). Firstly, EEG data are calibrated by 3 s baseline data and divided into segments with a 6 s time window, and differential entropy is then extracted from each segment to construct a feature cube. Secondly, the feature cube of each segment serves as input to the novel deep learning model, which fuses a graph convolutional neural network (GCNN) and long short-term memory (LSTM) networks. In the fusion model, multiple GCNNs are applied to extract graph-domain features, while LSTM cells are used to memorize the change of the relationship between two channels within a specific time and extract temporal features, and a dense layer is used to obtain the emotion classification results. Finally, we conducted extensive experiments on the DEAP dataset, and the experimental results demonstrate that the proposed method achieves better classification results than state-of-the-art methods. We attained average classification accuracies of 90.45% and 90.60% for valence and arousal in subject-dependent experiments, and 84.81% and 85.27% in subject-independent experiments.

194 citations
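The differential entropy feature used above has a closed form for a Gaussian-distributed signal, 0.5·ln(2πeσ²). A minimal sketch that estimates σ² from a segment follows; the Gaussian assumption is standard in EEG feature extraction but is an assumption here, not a detail taken from the paper.

```python
import math

def differential_entropy(segment):
    """Differential entropy of an EEG segment assuming a Gaussian
    distribution: 0.5 * ln(2 * pi * e * variance)."""
    n = len(segment)
    mean = sum(segment) / n
    var = sum((x - mean) ** 2 for x in segment) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)
```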


Journal ArticleDOI
TL;DR: This paper introduces an automatic fake news detection approach in the Chrome environment, through which it can detect fake news on Facebook; it uses multiple features of a Facebook account together with news content features to analyze the behavior of the account through deep learning.
Abstract: In recent years, the rise of online social networks has led to a proliferation of social news such as product advertisements, political news, celebrity information, etc. Some social networks, such as Facebook, Instagram, and Twitter, are affected by fake news spread by their users. Unfortunately, some users use unethical means to grow their links and reputation by spreading fake news in the form of texts, images, and videos. However, recent information appearing on an online social network is doubtful and, in many cases, misleads other users in the network. Fake news is spread intentionally to mislead readers into believing false news, which makes it difficult for a detection mechanism to detect fake news on the basis of shared content alone. Therefore, we need to add some new information related to the user's profile, such as the user's involvement with others, for reaching a particular decision. The disseminated information and its diffusion process create a big problem for detecting such content promptly, thus highlighting the need for automatic fake news detection. In this paper, we introduce an automatic fake news detection approach in the Chrome environment, through which it can detect fake news on Facebook. Specifically, we use multiple features associated with a Facebook account, together with some news content features, to analyze the behavior of the account through deep learning. The experimental analysis of real-world information demonstrates that our fake news detection approach achieves higher accuracy than existing state-of-the-art techniques.

192 citations


Journal ArticleDOI
TL;DR: An ensemble deep learning model can better meet the rapid detection requirements of the novel coronavirus disease COVID-19 and was compared with three component classifiers to evaluate accuracy, sensitivity, specificity, F value, and Matthews correlation coefficient.
Abstract: The rapid detection of the novel coronavirus disease, COVID-19, has a positive effect on preventing propagation and enhancing therapeutic outcomes. This article focuses on the rapid detection of COVID-19. We propose an ensemble deep learning model for novel COVID-19 detection from CT images. 2933 lung CT images from COVID-19 patients were obtained from previous publications, authoritative media reports, and public databases. The images were preprocessed to obtain 2500 high-quality images. 2500 CT images of lung tumors and 2500 of normal lungs were obtained from a hospital. Transfer learning was used to initialize model parameters and pretrain three deep convolutional neural network models: AlexNet, GoogleNet, and ResNet. These models were used for feature extraction on all images. Softmax was used as the classification algorithm of the fully connected layer. The ensemble classifier EDL-COVID was obtained via relative majority voting. Finally, the ensemble classifier was compared with the three component classifiers in terms of accuracy, sensitivity, specificity, F value, and Matthews correlation coefficient. The results showed that the overall classification performance of the ensemble model was better than that of the component classifiers, with higher values on all evaluation indexes. This algorithm can better meet the rapid detection requirements of the novel coronavirus disease COVID-19.

180 citations
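The relative majority (plurality) voting used to build EDL-COVID can be sketched as follows; the classifier stubs in the usage example are placeholders, not the actual AlexNet/GoogleNet/ResNet models.

```python
from collections import Counter

def relative_majority_vote(predictions):
    """Ensemble label = class predicted by the most component
    classifiers (plurality; no absolute majority required)."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(classifiers, image):
    """Apply each component classifier and fuse by plurality vote."""
    return relative_majority_vote([clf(image) for clf in classifiers])
```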


Journal ArticleDOI
TL;DR: In the improved PSO algorithm, an adaptive fractional-order velocity is introduced to enforce some disturbances on the particle swarm according to its evolutionary state, thereby enhancing its capability of jumping out of the local minima and exploring the searching space more thoroughly.
Abstract: In this paper, a new strategy is developed to plan smooth paths for mobile robots through an improved PSO algorithm in combination with continuous high-degree Bezier curves. Rather than connecting several low-degree Bezier curve segments, the use of continuous high-degree Bezier curves facilitates the fulfillment of high-order continuity requirements, such as a continuous curvature derivative, which is critical for the motion control of mobile robots. On the other hand, the smooth path planning of mobile robots is mathematically an optimization problem that can be handled by evolutionary computation algorithms. In this regard, an improved particle swarm optimization (PSO) algorithm is proposed to tackle the local trapping and premature convergence issues. In the improved PSO algorithm, an adaptive fractional-order velocity is introduced to enforce some disturbances on the particle swarm according to its evolutionary state, thereby enhancing its capability of jumping out of local minima and exploring the search space more thoroughly. The superiority of the improved PSO algorithm is verified by comparison with several standard and modified PSO algorithms on benchmark functions, and the advantages of the new strategy are also confirmed by several comprehensive simulation experiments on the smooth path planning of mobile robots.

169 citations
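A high-degree Bezier curve of the kind used for the path representation above can be evaluated stably with de Casteljau's algorithm; the control points in the usage example are arbitrary illustrations, not waypoints from the paper.

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve of arbitrary degree at parameter t
    (0 <= t <= 1) using de Casteljau's repeated linear interpolation."""
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        # Interpolate each consecutive pair; degree drops by one per pass.
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```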


Journal ArticleDOI
TL;DR: This paper used topic identification and sentiment analysis to explore a large number of tweets from two countries with high numbers of COVID-19 cases and deaths: Brazil and the USA.
Abstract: Twitter is a social media platform with more than 500 million users worldwide. It has become a tool for spreading news and discussing ideas and comments on world events. Twitter is also an important source of health-related information, given the amount of news, opinions and information shared by both citizens and official sources. Identifying interesting and useful content in large text streams in different languages is a challenge, and few works have explored languages other than English. In this paper, we use topic identification and sentiment analysis to explore a large number of tweets from two countries with high numbers of COVID-19 cases and deaths: Brazil and the USA. We employ 3,332,565 tweets in English and 3,155,277 tweets in Portuguese to compare and discuss the effectiveness of topic identification and sentiment analysis in both languages. We ranked ten topics and analyzed the content discussed on Twitter over four months, providing an assessment of the discourse evolution over time. The topics we identified were representative of the news outlets between April and August in both countries. We contribute to the study of the Portuguese language, to the analysis of sentiment trends over a long period and their relation to announced news, and to the comparison of human behavior in two different geographical locations affected by this pandemic. It is important to understand public reactions, information dissemination and consensus building in all major forms, including social media, in different countries.

139 citations


Journal ArticleDOI
TL;DR: This work presents a collaborative federated learning framework that allows multiple medical institutions to screen for COVID-19 from chest X-ray images using deep learning without sharing patient data, and investigates several key properties and specificities of the federated learning setting, including the not independent and identically distributed (non-IID) and unbalanced data distributions that naturally arise.
Abstract: Today, the whole world is facing a great medical disaster that affects the health and lives of people: the COVID-19 disease, colloquially known as the coronavirus. Deep learning is an effective means to assist radiologists in analyzing the vast amount of chest X-ray images, and can potentially have a substantial role in streamlining and accelerating the diagnosis of COVID-19. Such techniques involve large datasets for training, and all such data must be centralized in order to be processed. Due to medical data privacy regulations, it is often not possible to collect and share patient data in a centralized data server. In this work, we present a collaborative federated learning framework that allows multiple medical institutions to screen for COVID-19 from chest X-ray images using deep learning without sharing patient data. We investigate several key properties and specificities of the federated learning setting, including the not independent and identically distributed (non-IID) and unbalanced data distributions that naturally arise. We experimentally demonstrate that the proposed federated learning framework provides results competitive with those of models trained by sharing data, considering two different model architectures. These findings should encourage medical institutions to adopt a collaborative process and reap the benefits of rich private data in order to rapidly build a powerful model for COVID-19 screening.

116 citations
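A common aggregation rule for such a framework is federated averaging (FedAvg), in which each client's model weights are averaged in proportion to its local dataset size, which also accounts for the unbalanced distributions mentioned above. The paper's exact aggregation may differ, so this is only a sketch with flattened weight vectors.

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate per-client weight vectors,
    weighting each client by its local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]
```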


Journal ArticleDOI
TL;DR: This study addresses the prioritization of risks involved with self-driving vehicles by proposing new hybrid MCDM methods based on the Analytic Hierarchy Process (AHP), the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) and Vlse Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) under a Pythagorean fuzzy environment.
Abstract: Self-driving vehicles are of critical importance to a future sustainable transport system and are expected to become widespread around the world. However, a substantial amount of risk is associated with self-driving vehicles, which must be considered by decision-makers effectively. Given that automated driving technology, and how it will interact with the mobility system, carries substantial risk, the risks involved in self-driving vehicles need to be addressed appropriately. The literature review identified a knowledge gap: no overview exists that comprehensively considers all types of risks related to self-driving vehicles. In response to this knowledge gap, this study aims to prioritize the risks in self-driving vehicles. Risk prioritization is a complicated multi-criteria decision making (MCDM) problem that requires consideration of multiple feasible alternatives and conflicting tangible and intangible criteria. This study addresses the prioritization of risks involved with self-driving vehicles by proposing new hybrid MCDM methods based on the Analytic Hierarchy Process (AHP), the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) and Vlse Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) under a Pythagorean fuzzy environment. The result of the proposed model is validated by performing sensitivity analysis. The performance of the proposed methodology with Pythagorean fuzzy sets is also compared with that using ordinary fuzzy sets, and it is revealed that the proposed method produces reliable and informative outcomes, better representing the impreciseness of decision making problems. The findings of this study will provide useful insight to planners and policymakers for decision making on self-driving vehicles.
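For orientation, the crisp (non-fuzzy) core of TOPSIS can be sketched as follows; the Pythagorean fuzzy extension used in the study replaces crisp scores with membership/non-membership pairs, which this sketch omits, and the matrix in the usage example is hypothetical.

```python
import math

def topsis_closeness(matrix, weights, benefit):
    """Crisp TOPSIS: closeness of each alternative to the ideal
    solution. benefit[j] is True for benefit criteria, False for cost."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Ideal and anti-ideal solutions per criterion direction.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    close = []
    for row in v:
        d_pos = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, anti)))
        close.append(d_neg / (d_pos + d_neg))
    return close
```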

Journal ArticleDOI
TL;DR: A novel portfolio construction approach is developed using a hybrid model based on machine learning for stock prediction and mean–variance (MV) model for portfolio selection that is superior to traditional ways and benchmarks in terms of returns and risks.
Abstract: The success of portfolio construction depends primarily on the future performance of stock markets. Recent developments in machine learning have brought significant opportunities to incorporate prediction theory into portfolio selection. However, many studies show that a single prediction model is insufficient to achieve very accurate predictions and affluent returns. In this paper, a novel portfolio construction approach is developed using a hybrid model based on machine learning for stock prediction and the mean–variance (MV) model for portfolio selection. Specifically, two stages are involved in this model: stock prediction and portfolio selection. In the first stage, a hybrid model combining eXtreme Gradient Boosting (XGBoost) with an improved firefly algorithm (IFA) is proposed to predict stock prices for the next period. The IFA is developed to optimize the hyperparameters of XGBoost. In the second stage, stocks with higher potential returns are selected, and the MV model is employed for portfolio selection. Using the Shanghai Stock Exchange as the study sample, the obtained results demonstrate that the proposed method is superior to traditional approaches (without stock prediction) and benchmarks in terms of returns and risks.
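The mean–variance stage scores a candidate portfolio by its expected return and variance; a minimal sketch follows, where the example weights, mean returns, and covariance matrix are illustrative, not data from the study.

```python
def portfolio_stats(weights, mean_returns, cov):
    """Expected return and variance of a portfolio under the
    Markowitz mean-variance (MV) model."""
    n = len(weights)
    ret = sum(w * m for w, m in zip(weights, mean_returns))
    var = sum(weights[i] * cov[i][j] * weights[j]
              for i in range(n) for j in range(n))
    return ret, var
```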

Journal ArticleDOI
TL;DR: In this article, the authors used time series models (ARIMA and SARIMA) to forecast the epidemiological trends of the COVID-19 pandemic for top-16 countries where 70%-80% of global cumulative cases are located.
Abstract: Most countries are reopening or considering lifting the stringent prevention policies such as lockdowns, consequently, daily coronavirus disease (COVID-19) cases (confirmed, recovered and deaths) are increasing significantly. As of July 25th, there are 16.5 million global cumulative confirmed cases, 9.4 million cumulative recovered cases and 0.65 million deaths. There is a tremendous necessity of supervising and estimating future COVID-19 cases to control the spread and help countries prepare their healthcare systems. In this study, time-series models - Auto-Regressive Integrated Moving Average (ARIMA) and Seasonal Auto-Regressive Integrated Moving Average (SARIMA) are used to forecast the epidemiological trends of the COVID-19 pandemic for top-16 countries where 70%-80% of global cumulative cases are located. Initial combinations of the model parameters were selected using the auto-ARIMA model followed by finding the optimized model parameters based on the best fit between the predictions and test data. Analytical tools Auto-Correlation function (ACF), Partial Auto-Correlation Function (PACF), Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) were used to assess the reliability of the models. Evaluation metrics Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE) and Mean Absolute Percent Error (MAPE) were used as criteria for selecting the best model. A case study was presented where the statistical methodology was discussed in detail for model selection and the procedure for forecasting the COVID-19 cases of the USA. Best model parameters of ARIMA and SARIMA for each country are selected manually and the optimized parameters are then used to forecast the COVID-19 cases. Forecasted trends for confirmed and recovered cases showed an exponential rise for countries such as the United States, Brazil, South Africa, Colombia, Bangladesh, India, Mexico and Pakistan. 
Similarly, trends for cumulative deaths showed an exponential rise for Brazil, South Africa, Chile, Colombia, Bangladesh, India, Mexico, Iran, Peru, and Russia. The SARIMA model predictions are more realistic than those of the ARIMA model, confirming the existence of seasonality in the COVID-19 data. The results of this study not only shed light on the future trends of the COVID-19 outbreak in the top-16 countries but also guide these countries in preparing their health care policies for the ongoing pandemic. The data used in this work are obtained from the publicly available Johns Hopkins University COVID-19 database.
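As a toy illustration of the model family, an AR(1) process, the simplest special case of ARIMA(p, d, q) with p = 1 and d = q = 0, can be fitted by least squares and iterated forward. Real ARIMA/SARIMA fitting adds differencing, moving-average terms, and seasonal components, so this sketch only shows the autoregressive idea.

```python
def fit_ar1(series):
    """Least-squares fit of y_t = c + phi * y_{t-1} (an AR(1) model)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast(c, phi, last, steps):
    """Iterate the fitted recurrence to forecast future values."""
    out = []
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```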

Journal ArticleDOI
TL;DR: An improved equilibrium optimization algorithm (IEOA) combined with a proposed recycling strategy for configuring the power distribution networks with optimal allocation of multiple distributed generators for enhanced distribution system performance, quality and reliability is proposed.
Abstract: It is imperative for distribution system operators to provide quantitatively and qualitatively adequate power and to ensure consumer satisfaction. It is therefore important to address one of the most promising combinatorial optimization problems: the optimal integration of power distribution network reconfiguration (PDNR) with distributed generations (DGs). In this regard, this paper proposes an improved equilibrium optimization algorithm (IEOA) combined with a proposed recycling strategy for configuring power distribution networks with optimal allocation of multiple distributed generators. The recycling strategy is augmented to explore the solution space more effectively during iterations. The effectiveness of the proposed algorithm is checked on 23 standard benchmark functions. Simultaneous integration of PDNR and DG is carried out on the 33- and 69-bus distribution test systems at three different load levels, and its superiority is established. The proposed technique is also verified on a large-scale 137-bus distribution system with a variety of control variables. These simulations lead to enhanced distribution system performance, quality and reliability. However, the integration is challenging because of its complexity and the difficulty of achieving optimal solutions, especially in a multi-objective framework. To address this challenge, a multi-objective function is developed considering total active power loss and overall voltage enhancement while respecting the system limitations. The proposed algorithm is contrasted with the harmony search, genetic, refined genetic, fireworks, and firefly optimization algorithms. The obtained results confirm the effectiveness and robustness of the proposed technique compared with the competitive algorithms.

Journal ArticleDOI
TL;DR: The results in different scenarios demonstrate that as compared with several existing evolutionary algorithms, the CSA method can effectively explore the decision space and produce competitive results in terms of various performance evaluation indicators.
Abstract: This paper develops a novel population-based evolutionary method called the cooperation search algorithm (CSA) to address complex global optimization problems. Inspired by team cooperation behaviors in modern enterprises, the CSA method randomly generates a set of candidate solutions in the problem space, and then three operators are repeatedly executed until the stopping criterion is met: the team communication operator is used to improve global exploration and determine the promising search area; the reflective learning operator is used to achieve a compromise between exploration and exploitation; and the internal competition operator is used to choose solutions with better performance for the next cycle. Firstly, three kinds of mathematical optimization problems (including 24 famous test functions, 25 CEC2005 test problems and 30 CEC2014 test problems) are used to test the convergence speed and search accuracy of the CSA method. Then, several famous engineering optimization problems (such as gear train design, welded beam design and speed reducer design) are chosen to verify the engineering practicality of the CSA method. The results in different scenarios demonstrate that, compared with several existing evolutionary algorithms, the CSA method can effectively explore the decision space and produce competitive results in terms of various performance evaluation indicators. Thus, an effective tool is provided for solving complex global optimization problems.

Journal ArticleDOI
TL;DR: A Measurement of Alternatives and Ranking according to the Compromise Solution (MARCOS) technique under an intuitionistic fuzzy environment is proposed to rank insurance companies, yielding a ranking of ten insurance companies in terms of healthcare services in the era of COVID-19.
Abstract: Assessing and ranking private health insurance companies provides insurance agencies, insurance customers, and authorities with a reliable instrument for the insurance decision-making process. Moreover, because the world's insurance sector suffers from a lack of evaluation of private health insurance companies during the COVID-19 outbreak, the need for a reliable, useful, and comprehensive decision tool is obvious. Accordingly, this article aims to identify insurance companies' priority ranking in terms of healthcare services in Turkey during the COVID-19 outbreak through a multi-criteria performance evaluation methodology. Herein, alternatives are evaluated and then ranked according to 7 criteria and the assessments of 5 experts. Experts' judgments and assessments are full of uncertainties. We propose a Measurement of Alternatives and Ranking according to the Compromise Solution (MARCOS) technique under an intuitionistic fuzzy environment to rank insurance companies. The outcome is a ranking of ten insurance companies in terms of healthcare services in the era of COVID-19. The payback period, premium price, and network are determined to be the most crucial factors. Finally, a comprehensive sensitivity analysis is performed to verify the proposed methodology's stability and effectiveness. Based on the sensitivity analysis findings, the introduced approach addresses the insurance assessment problem during the COVID-19 pandemic in a very satisfactory manner.
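The final ranking step of MARCOS computes utility degrees relative to ideal and anti-ideal solutions. A crisp sketch follows, omitting the intuitionistic fuzzy aggregation used in the study; `scores` stand for the weighted-sum values S_i, and the numbers in the test are illustrative.

```python
def marcos_utility(scores, s_ideal, s_anti_ideal):
    """Crisp MARCOS utility function: higher values rank better."""
    results = []
    for s in scores:
        k_plus = s / s_ideal              # utility degree vs ideal
        k_minus = s / s_anti_ideal        # utility degree vs anti-ideal
        f_plus = k_minus / (k_plus + k_minus)   # f(K+)
        f_minus = k_plus / (k_plus + k_minus)   # f(K-)
        f = (k_plus + k_minus) / (
            1 + (1 - f_plus) / f_plus + (1 - f_minus) / f_minus)
        results.append(f)
    return results
```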

Journal ArticleDOI
TL;DR: This is the first attempt in deep learning to learn custom filters within a single convolutional layer for identifying specific pneumonia classes in COVID-19, a deadly viral infection that has brought a significant threat to human lives.
Abstract: COVID-19 is a deadly viral infection that has brought a significant threat to human lives. Automatic diagnosis of COVID-19 from medical imaging enables precise medication, helps to control community outbreaks, and reinforces coronavirus testing methods in place. While there exist several challenges in manually inferring traces of this viral infection from X-rays, Convolutional Neural Networks (CNNs) can mine data patterns that capture subtle distinctions between infected and normal X-rays. To enable automated learning of such latent features, a custom CNN architecture has been proposed in this research. It learns unique convolutional filter patterns for each kind of pneumonia. This is achieved by restricting certain filters in a convolutional layer to maximally respond only to a particular class of pneumonia/COVID-19. The CNN architecture integrates different convolution types to provide better context for learning robust features and to strengthen gradient flow between layers. The proposed work also visualizes regions of saliency on the X-ray that have had the most influence on the CNN's prediction outcome. To the best of our knowledge, this is the first attempt in deep learning to learn custom filters within a single convolutional layer for identifying specific pneumonia classes. Experimental results demonstrate that the proposed work has significant potential in augmenting current testing methods for COVID-19. It achieves an F1-score of 97.20% and an accuracy of 99.80% on the COVID-19 X-ray set.

Journal ArticleDOI
TL;DR: In this paper, an improved tunicate swarm algorithm (ITSA) was proposed for solving and optimizing the dynamic economic emission dispatch (DEED) problem, which aims to reduce the fuel cost and pollutant emission of the power system.
Abstract: This study proposes an improved tunicate swarm algorithm (ITSA) for solving and optimizing the dynamic economic emission dispatch (DEED) problem. The DEED optimization target is to reduce the fuel cost and pollutant emissions of the power system. DEED is a complex optimization problem containing multiple optimization goals. To strengthen the ability of ITSA to solve DEED, tent mapping is employed to generate the initial population, improving directionality in the optimization process. Meanwhile, the grey wolf optimizer is used to generate the global search vector to improve global exploration ability, and Levy flight is introduced to expand the search range. Three test systems containing 5, 10 and 15 generator units are employed to verify the solving performance of ITSA. The test results show that ITSA can provide a competitive scheduling plan for test systems containing different numbers of units, yielding an optimal economic and environmental dynamic dispatch scheme and a more precise dispatch strategy.
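To illustrate the tent-mapping initialization mentioned above, here is a minimal sketch using a common tent-map formulation (the exact variant and parameter used in the ITSA paper may differ; the parameter `a = 0.7` and bounds are illustrative assumptions):

```python
import random

def tent_map_population(pop_size, dim, lower, upper, a=0.7):
    """Initialize a population with tent chaotic mapping: the chaotic
    sequence spreads candidate points over (0, 1) more evenly than
    plain random sampling, then each value is scaled to [lower, upper]."""
    pop = []
    for _ in range(pop_size):
        x = random.random()
        individual = []
        for _ in range(dim):
            # Tent map iteration: x -> x/a on [0, a), (1-x)/(1-a) on [a, 1]
            x = x / a if x < a else (1 - x) / (1 - a)
            individual.append(lower + x * (upper - lower))
        pop.append(individual)
    return pop
```

Each individual stays inside the search bounds, which is all an initializer must guarantee before the main tunicate swarm iterations begin.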

Journal ArticleDOI
TL;DR: In this paper, a hotel recommendation system using sentiment analysis of the hotel reviews, and aspect-based review categorization is proposed, which is based on the queries given by a user and follows a systematic approach which first uses an ensemble of a binary classification called Bidirectional Encoder Representations from Transformers (BERT) model with three phases for positive-negative, neutral-negative and neutral-positive sentiments merged using a weight assigning protocol.
Abstract: Finding a suitable hotel based on a user’s needs and affordability is a complex decision-making process. Nowadays, the availability of an ample amount of online reviews written by customers helps us in this regard. This fact opens a promising research direction in the field of tourism, the hotel recommendation system, which also helps improve consumers’ information processing. Real-world reviews may showcase different sentiments of the customers towards a hotel, and each review can be categorized based on different aspects such as cleanliness, value, service, etc. Keeping these facts in mind, in the present work, we have proposed a hotel recommendation system that uses sentiment analysis of hotel reviews and aspect-based review categorization, and operates on the queries given by a user. Furthermore, we have provided a new rich and diverse dataset of online hotel reviews crawled from Tripadvisor.com. We have followed a systematic approach which first uses an ensemble of binary classifiers based on the Bidirectional Encoder Representations from Transformers (BERT) model, with three phases for positive–negative, neutral–negative and neutral–positive sentiments, merged using a weight-assigning protocol. We have then fed the pre-trained word embeddings generated by the BERT models, along with other textual features such as word vectors generated by Word2vec, TF–IDF of frequent words, subjectivity score, etc., to a Random Forest classifier. After that, we have also grouped the reviews into different categories using an approach that involves fuzzy logic and cosine similarity. Finally, we have created a recommender system from the aforementioned frameworks. Our model has achieved a Macro F1-score of 84% and a test accuracy of 92.36% in the classification of sentiment polarities. Also, the categorized reviews have formed compact clusters. The results are quite promising and much better compared to state-of-the-art models.
The relevant codes and notebooks can be found here.
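A minimal sketch of cosine-similarity-based aspect categorization of the kind the abstract describes (the aspect keyword profiles below are invented for illustration; the paper derives its categories with fuzzy logic as well):

```python
import math
from collections import Counter

# Hypothetical aspect keyword profiles, purely illustrative
ASPECTS = {
    "cleanliness": ["clean", "dirty", "tidy", "spotless"],
    "service":     ["staff", "friendly", "helpful", "reception"],
    "value":       ["price", "cheap", "expensive", "worth"],
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def categorize(review):
    """Assign a review to the aspect whose profile is most similar."""
    bag = Counter(review.lower().split())
    scores = {asp: cosine(bag, Counter(kws)) for asp, kws in ASPECTS.items()}
    return max(scores, key=scores.get)
```

In the real system, dense embeddings (e.g. from BERT) would replace the bag-of-words vectors, but the categorization rule is the same nearest-profile comparison.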

Journal ArticleDOI
TL;DR: In this paper, a spherical vector-based particle swarm optimization (SPSO) algorithm is proposed to find the optimal path that minimizes the cost function by efficiently searching the configuration space of the UAV via the correspondence between the particle position and the speed, turn angle and climb/dive angle of the drone.
Abstract: This paper presents a new algorithm named spherical vector-based particle swarm optimization (SPSO) to deal with the problem of path planning for unmanned aerial vehicles (UAVs) in complicated environments subjected to multiple threats. A cost function is first formulated to convert the path planning into an optimization problem that incorporates requirements and constraints for the feasible and safe operation of the UAV. SPSO is then used to find the optimal path that minimizes the cost function by efficiently searching the configuration space of the UAV via the correspondence between the particle position and the speed, turn angle and climb/dive angle of the UAV. To evaluate the performance of SPSO, eight benchmarking scenarios have been generated from real digital elevation model maps. The results show that the proposed SPSO outperforms not only other particle swarm optimization (PSO) variants including the classic PSO, phase angle-encoded PSO and quantum-behaved PSO but also other state-of-the-art metaheuristic optimization algorithms including the genetic algorithm (GA), artificial bee colony (ABC), and differential evolution (DE) in most scenarios. In addition, experiments have been conducted to demonstrate the validity of the generated paths for real UAV operations. Source code of the algorithm can be found at https://github.com/duongpm/SPSO .
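The spherical-vector encoding can be sketched as decoding a particle into a flight path: each component is a (distance, climb/dive angle, turn angle) triple converted to a Cartesian displacement. This is a minimal geometric sketch of the encoding idea, with angle conventions assumed rather than taken from the paper:

```python
import math

def apply_spherical_moves(start, moves):
    """Decode a spherical-vector particle into a Cartesian flight path.
    Each move is (rho, psi, phi): travel distance, elevation
    (climb/dive) angle, and azimuth (turn) angle."""
    path = [tuple(start)]
    x, y, z = start
    for rho, psi, phi in moves:
        # Standard spherical-to-Cartesian displacement
        x += rho * math.cos(psi) * math.cos(phi)
        y += rho * math.cos(psi) * math.sin(phi)
        z += rho * math.sin(psi)
        path.append((x, y, z))
    return path
```

A cost function evaluated on the decoded waypoints (threat proximity, path length, turn limits) then closes the loop back to the PSO search.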

Journal ArticleDOI
TL;DR: The results indicate that the LSTM deep-learning method outperforms the feedforward and feedback neural networks in both accuracy and convergence rate when reproducing the soil’s stress–strain behaviour.
Abstract: This paper presents a new trial to reproduce soil stress–strain behaviour by adapting a long short-term memory (LSTM) deep learning method. LSTM is an approach that employs time sequence data to predict future occurrences, and it can be used to consider the stress history of soil behaviour. The proposed LSTM method includes the following three steps: data preparation, architecture determination, and optimisation. The capacity of the adapted LSTM method is compared with that of feedforward and feedback neural networks using a new numerical benchmark dataset. The performance of the proposed LSTM method is verified through a dataset collected from laboratory tests. The results indicate that the LSTM deep-learning method outperforms the feedforward and feedback neural networks in both accuracy and convergence rate when reproducing the soil’s stress–strain behaviour. A new phenomenon, referred to as “bias at low stress levels” and not previously reported, is discovered and discussed for all neural-network-based methods.

Journal ArticleDOI
TL;DR: The experimental test results indicated that the proposed deep learning ensemble model was generally more competitive when addressing imbalanced credit risk evaluation problems than other models.
Abstract: In recent years, research has found that in many credit risk evaluation domains, deep learning is superior to traditional machine learning methods and classifier ensembles perform significantly better than single classifiers. However, credit evaluation models based on deep learning ensemble algorithms have rarely been studied. Moreover, credit data imbalance still challenges the performance of credit scoring models. Therefore, to help fill this research gap, this study developed a new deep learning ensemble credit risk evaluation model to deal with imbalanced credit data. First, an improved synthetic minority oversampling technique (SMOTE) method was developed to overcome known SMOTE shortcomings, after which a new deep learning ensemble classification method combining the long short-term memory (LSTM) network and the adaptive boosting (AdaBoost) algorithm was developed to train on and learn the processed credit data. Then, the area under the curve (AUC), the Kolmogorov–Smirnov (KS) statistic and the non-parametric Wilcoxon test were employed to compare the performance of the proposed model and other widely used credit scoring models on two imbalanced credit datasets. The experimental test results indicated that the proposed deep learning ensemble model was generally more competitive in addressing imbalanced credit risk evaluation problems than the other models.
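For reference, the plain SMOTE interpolation rule that the paper's improved variant builds on can be sketched as follows (this is textbook SMOTE, not the paper's improvement, and the neighbor count `k` is illustrative):

```python
import random

def smote_sketch(minority, k=3, n_new=10):
    """Plain SMOTE: each synthetic sample is x + u * (neighbor - x),
    where neighbor is one of x's k nearest minority-class neighbors
    and u ~ U(0, 1)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = random.choice(minority)
        # k nearest minority neighbors of x (excluding x itself)
        neighbors = sorted((s for s in minority if s is not x),
                           key=lambda s: sq_dist(x, s))[:k]
        nb = random.choice(neighbors)
        u = random.random()
        synthetic.append([xi + u * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic
```

Because each synthetic point lies on a segment between two minority samples, it never leaves the convex hull of the minority class, which is also the source of the shortcomings (e.g. noise amplification near class borders) that improved SMOTE variants target.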

Journal ArticleDOI
TL;DR: In this paper, a novel framework is presented for addressing the problem of transfer diagnosis with sparse target data; the main idea is to pair the source and target data with the same machine condition and conduct individual domain adaptation so as to alleviate the lack of target data.
Abstract: Investigation of deep transfer learning for machinery fault diagnosis helps to overcome the limitation of requiring a large volume of training data and accelerates the practical application of diagnostic algorithms. However, previously reported methods, mainly including parameter transfer and domain adaptation, still require a few labeled or massive unlabeled fault samples, which are not always available. In general, only extremely limited fault data, namely sparse data (a single sample or several samples), can be obtained, though labeling them is straightforward. This paper presents a novel framework for addressing the problem of transfer diagnosis with sparse target data. Since the data distribution described by the sparse data is unclear, the main idea is to pair the source and target data with the same machine condition and conduct individual domain adaptation, so as to alleviate the lack of target data, diminish the distribution discrepancy, and avoid negative transfer. More impressively, the issue of label-space mismatch can also be appropriately addressed in our network. Extensive experiments on two case studies are used to verify the proposed method. Comprehensive transfer scenarios, i.e., diverse working conditions and diverse machines, are considered. The thorough evaluation shows that the proposed method presents superior performance with respect to traditional transfer learning methods.
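The pairing idea can be sketched with plain label matching: every sparse target sample is paired with all source samples of the same machine condition, and each pair then feeds an individual domain-adaptation step. The sample contents here are placeholders, not real vibration data:

```python
def pair_by_condition(source, target):
    """Pair each sparse target sample with the source samples sharing its
    machine condition (label). Samples are (data, label) tuples; the
    resulting pairs are the units on which individual domain adaptation
    would be performed."""
    pairs = []
    for t_data, t_label in target:
        for s_data, s_label in source:
            if s_label == t_label:
                pairs.append((s_data, t_data, t_label))
    return pairs
```

Because pairing is done per condition, source conditions absent from the target label space are simply never paired, which is one simple way the label-space mismatch mentioned above can be sidestepped.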

Journal ArticleDOI
TL;DR: This paper aims to use capsule neural networks in the fake news detection task, using different embedding models for news items of different lengths and outperforming the state-of-the-art methods on ISOT and LIAR.
Abstract: Fake news has increased dramatically on social media in recent years, prompting the need for effective fake news detection algorithms. Capsule neural networks have been successful in computer vision and are receiving attention for use in Natural Language Processing (NLP). This paper applies capsule neural networks to the fake news detection task. We use different embedding models for news items of different lengths: static word embeddings for short news items, and non-static word embeddings that allow incremental training and updating in the training phase for medium-length or long news statements. Moreover, we apply different levels of n-grams for feature extraction. Our proposed models are evaluated on two recent well-known datasets in the field, namely ISOT and LIAR. The results show encouraging performance, outperforming the state-of-the-art methods by 7.8% on ISOT, and by 3.1% on the validation set and 1% on the test set of the LIAR dataset.
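The multi-level n-gram extraction mentioned above is straightforward to sketch; the choice of levels (1- to 3-grams) is an illustrative assumption:

```python
def word_ngrams(text, n_values=(1, 2, 3)):
    """Extract word-level n-grams at several levels from a news item,
    concatenating all levels into one feature list."""
    tokens = text.lower().split()
    grams = []
    for n in n_values:
        grams += [" ".join(tokens[i:i + n])
                  for i in range(len(tokens) - n + 1)]
    return grams
```

In a full pipeline, these n-grams would be vectorized (e.g. via TF-IDF or embedding lookup) before being fed to the capsule network.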

Journal ArticleDOI
TL;DR: A discrete variation of the Distributed Grey Wolf Optimizer (DGWO) is proposed for scheduling dependent tasks to VMs, with the goal of maximizing the utilization of Virtual Machines (VMs) in cloud computing environments.
Abstract: Optimal scheduling of workflows in cloud computing environments is an essential element to maximize the utilization of Virtual Machines (VMs). In practice, scheduling the dependent tasks in a workflow requires distributing the tasks to the available VMs on the cloud. This paper introduces a discrete variation of the Distributed Grey Wolf Optimizer (DGWO) for scheduling dependent tasks to VMs. The scheduling process in DGWO is modeled as a minimization problem for two objectives: computation and data transmission costs. DGWO uses the largest order value (LOV) method to convert the continuous candidate solutions produced by DGWO to discrete candidate solutions. DGWO was experimentally tested and compared to well-known optimization-based scheduling algorithms (Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer). The experimental results suggest that DGWO distributes tasks to VMs faster than the other tested algorithms. In addition, DGWO was compared to PSO and Binary PSO (BPSO) using WorkflowSim and scientific workflows of different sizes. The obtained simulation results suggest that DGWO provides the best makespan compared to the other algorithms.
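The LOV conversion can be sketched as ranking a continuous particle's components: a common formulation orders tasks by descending position value, with the largest component scheduled first (the exact tie-breaking and direction in the paper may differ):

```python
def largest_order_value(position):
    """Map a continuous candidate solution to a discrete task order
    (LOV rule): the task whose component has the largest value comes
    first in the schedule."""
    return sorted(range(len(position)),
                  key=lambda i: position[i], reverse=True)
```

For example, a position vector `[0.3, 1.7, 0.9]` decodes to the task order `[1, 2, 0]`, so the optimizer can keep evolving real-valued vectors while the scheduler consumes permutations.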

Journal ArticleDOI
TL;DR: A clustering-based approach to detect anomalies concerning the amplitude and the shape of multivariate time series and is suitable for identifying anomalous amplitude and shape patterns in various application domains such as health care, weather data analysis, finance, and disease outbreak detection.
Abstract: Multivariate time series data come as a collection of time series describing different aspects of a certain temporal phenomenon. Anomaly detection in this type of data constitutes a challenging problem yet with numerous applications in science and engineering, because anomaly scores come from the simultaneous consideration of the temporal and variable relationships. In this paper, we propose a clustering-based approach to detect anomalies concerning the amplitude and the shape of multivariate time series. First, we use a sliding window to generate a set of multivariate subsequences and thereafter apply an extended fuzzy clustering to reveal a structure present within the generated multivariate subsequences. Finally, a reconstruction criterion is employed to reconstruct the multivariate subsequences with the optimal cluster centers and the partition matrix. We construct a confidence index to quantify the level of anomaly detected in the series and apply Particle Swarm Optimization as an optimization vehicle for the problem of anomaly detection. Experimental studies completed on several synthetic and six real-world datasets suggest that the proposed methods can detect the anomalies in multivariate time series. With the help of the clusters revealed by the extended fuzzy clustering, the proposed framework is suitable for identifying anomalous amplitude and shape patterns in various application domains such as health care, weather data analysis, finance, and disease outbreak detection.
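The first and last steps of the pipeline above can be sketched directly: sliding-window subsequence generation, and an anomaly score based on how poorly a window is reconstructed by the cluster centers. The score below uses a plain nearest-center distance as a simplification of the paper's fuzzy reconstruction criterion:

```python
def sliding_windows(series, width, step=1):
    """Slice a multivariate series (a list of per-time-step vectors)
    into overlapping subsequences for clustering."""
    return [series[i:i + width]
            for i in range(0, len(series) - width + 1, step)]

def reconstruction_score(window, centers):
    """Anomaly score of a window as its squared distance to the nearest
    cluster center (simplified: no partition matrix / fuzzy weights)."""
    def sq_dist(w, c):
        return sum((a - b) ** 2
                   for row_w, row_c in zip(w, c)
                   for a, b in zip(row_w, row_c))
    return min(sq_dist(window, c) for c in centers)
```

Windows that match a cluster prototype score near zero, while amplitude or shape outliers accumulate large reconstruction error, which is what the confidence index thresholds.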

Journal ArticleDOI
TL;DR: ISBPSO adopts three new mechanisms based on a recently proposed binary PSO variant, sticky binary particle swarm optimization (SBPSO), to improve the evolutionary performance and substantially reduces the computation time compared with benchmark PSO-based FS methods.
Abstract: Feature selection (FS) is an important preprocessing technique for dimensionality reduction in classification problems. Particle swarm optimization (PSO) algorithms have been widely used as the optimizers for FS problems. However, with the increase of data dimensionality, the search space expands dramatically, which poses significant challenges for optimization methods, including PSO. In this paper, we propose an improved sticky binary PSO (ISBPSO) algorithm for FS. ISBPSO adopts three new mechanisms based on a recently proposed binary PSO variant, sticky binary particle swarm optimization (SBPSO), to improve the evolutionary performance. First, a new initialization strategy using feature weighting information based on mutual information is proposed. Second, a dynamic bits masking strategy for gradually reducing the search space during the evolutionary process is proposed. Third, based on the framework of memetic algorithms, a refinement procedure conducting genetic operations on the personal best positions of ISBPSO is used to alleviate the premature convergence problem. The results on 12 UCI datasets show that ISBPSO outperforms six benchmark PSO-based FS methods and two conventional FS methods (sequential forward selection and sequential backward selection) — ISBPSO obtains either higher or similar accuracies with fewer features in most cases. Moreover, ISBPSO substantially reduces the computation time compared with benchmark PSO-based FS methods. Further analysis shows that all three proposed mechanisms are effective for improving the search performance of ISBPSO.
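The mutual-information-based feature weighting behind the first mechanism can be sketched from empirical counts; how ISBPSO turns these weights into initial particle probabilities is not reproduced here:

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """Empirical mutual information I(X; Y) between a discrete feature
    and the class labels: sum over (x, y) of p(x,y) * log(p(x,y)/(p(x)p(y))).
    Higher values mark more informative features."""
    n = len(feature)
    p_xy = Counter(zip(feature, labels))
    p_x, p_y = Counter(feature), Counter(labels)
    mi = 0.0
    for (x, y), c in p_xy.items():
        # c/n = p(x,y); c*n/(cx*cy) = p(x,y) / (p(x) * p(y))
        mi += (c / n) * math.log(c * n / (p_x[x] * p_y[y]))
    return mi
```

A feature perfectly correlated with a binary class attains I = ln 2, while an independent feature scores zero, so normalized MI values can directly bias the probability of a feature bit being switched on at initialization.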

Journal ArticleDOI
TL;DR: The results show the viability of the proposed approach, which identifies Bozcaada as the most suitable site; the outcome is validated against other multi-criteria decision-making techniques from the literature, including IRN-based MABAC, WASPAS, and MAIRCA.
Abstract: Over the past 20 years, the development of offshore wind farms has become increasingly important across the world. One of the most crucial reasons is that offshore wind turbines experience higher average wind speeds than those onshore, producing more electricity. In this study, a new hybrid approach integrating Interval Rough Numbers (IRNs) into the Best-Worst Method (BWM) and the Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) is introduced for multi-criteria intelligent decision support to choose the best offshore wind farm site in a coastal area of Turkey. Four alternatives in the Aegean Sea are considered based on a range of criteria. The results show the viability of the proposed approach, which identifies Bozcaada as the most suitable site, when compared to and validated using other multi-criteria decision-making techniques from the literature, including IRN-based MABAC, WASPAS, and MAIRCA.

Journal ArticleDOI
TL;DR: A novel dual attention method called DanHAR is proposed, which blends channel attention and temporal attention on a CNN, demonstrating superior performance and improved interpretability for multimodal HAR.
Abstract: In this paper, we present a new dual attention method called DanHAR, which blends channel and temporal attention on residual networks to improve feature representation ability for the sensor-based HAR task. Specifically, channel attention plays a key role in deciding what to focus on, i.e., which sensor modalities, while temporal attention can focus on the target activity within a long sensor sequence to tell where to focus. Extensive experiments are conducted on four public HAR datasets, as well as a weakly labeled HAR dataset. The results show that the dual attention mechanism is of central importance for many activity recognition tasks. We obtain 2.02%, 4.20%, 1.95%, 5.22% and 5.00% relative improvements over regular ConvNets on the WISDM, UNIMIB SHAR, PAMAP2, and OPPORTUNITY datasets, as well as the weakly labeled HAR dataset, respectively. DanHAR is able to surpass other state-of-the-art algorithms at negligible computational overhead. Visualization analysis is conducted to show that the proposed attention can capture the spatial–temporal dependencies of multimodal sensing data, amplifying the more important sensor modalities and timesteps during classification. The results are in good agreement with normal human intuition.

Journal ArticleDOI
TL;DR: This work introduces CTF, a large-scale COVID-19 Twitter dataset with labelled genuine and fake tweets, and proposes Cross-SEAN, a cross-stitch based semi-supervised end-to-end neural attention model which partially generalises to emerging fake news as it learns from relevant external knowledge.
Abstract: As the COVID-19 pandemic sweeps across the world, it has been accompanied by a tsunami of fake news and misinformation on social media. At a time when reliable information is vital for public health and safety, COVID-19 related fake news has been spreading even faster than the facts. During times such as the COVID-19 pandemic, fake news can not only cause intellectual confusion but can also place people’s lives at risk. This calls for an immediate need to contain the spread of such misinformation on social media. We introduce CTF, a large-scale COVID-19 Twitter dataset with labelled genuine and fake tweets. Additionally, we propose Cross-SEAN, a cross-stitch based semi-supervised end-to-end neural attention model which leverages the large amount of unlabelled data. Cross-SEAN partially generalises to emerging fake news as it learns from relevant external knowledge. We compare Cross-SEAN with seven state-of-the-art fake news detection methods and observe that it achieves an F1 score of 0.95 on CTF, outperforming the best baseline by 9%. We also develop Chrome-SEAN, a Cross-SEAN based Chrome extension for real-time detection of fake tweets.