Showing papers in "Engineering Applications of Artificial Intelligence in 2021"
TL;DR: A comparative analysis of different feature selection methods is presented, along with a general categorization of these methods, showing the strengths and weaknesses of the different studied swarm intelligence-based feature selection methods.
Abstract: In the past decades, the rapid growth of computer and database technologies has led to a proliferation of large-scale datasets. At the same time, data mining applications with high-dimensional datasets that require high speed and accuracy are rapidly increasing. An important issue with these applications is the curse of dimensionality, where the number of features is much higher than the number of patterns. One dimensionality reduction approach is feature selection, which can increase the accuracy of the data mining task and reduce its computational complexity. Feature selection methods aim at selecting a subset of features with the lowest inner similarity and the highest relevancy to the target class. They reduce the dimensionality of the data by eliminating irrelevant, redundant, or noisy features. In this paper, a comparative analysis of different feature selection methods is presented, and a general categorization of these methods is performed. Moreover, state-of-the-art swarm intelligence algorithms are studied, and the recent feature selection methods based on these algorithms are reviewed. Furthermore, the strengths and weaknesses of the different studied swarm intelligence-based feature selection methods are evaluated.
TL;DR: Modeling results revealed that the MFO algorithm can capture better hyper-parameters of the SVM model in predicting TBM AR among all three hybrid models, confirming that this hybrid SVM model is a powerful and applicable technique for addressing problems related to TBM performance with a high level of accuracy.
Abstract: The advance rate (AR) of a tunnel boring machine (TBM) in hard rock conditions is a key parameter for the successful accomplishment of a tunneling project, and the proper and reliable prediction of this parameter can help minimize the risks associated with the high capital costs and scheduling of such projects. This research aims at optimizing the hyper-parameters of the support vector machine (SVM) technique through the use of three optimization algorithms, namely gray wolf optimization (GWO), the whale optimization algorithm (WOA), and moth flame optimization (MFO), in forecasting TBM AR. In fact, the role of these optimization techniques is to optimize the hyper-parameters 'C' and 'gamma' of the SVM model to achieve higher prediction performance. To develop the hybrid SVM-based models, 1,286 sample sets of data collected from a water transfer tunnel in Malaysia comprising seven input variables, i.e., rock mass rating, uniaxial compressive strength, Brazilian tensile strength, rock quality designation, weathering zone, thrust force and revolution per minute, and one output variable, i.e., TBM AR, were considered and used. Several GWO-SVM, WOA-SVM and MFO-SVM models were constructed to predict TBM AR considering their effective parameters. The accuracy levels of the proposed models were assessed using four statistical indices, i.e., the coefficient of determination (R2), root mean squared error (RMSE), mean absolute error (MAE), and variance accounted for (VAF). Modeling results revealed that the MFO algorithm can capture better hyper-parameters of the SVM model in predicting TBM AR among all three hybrid models. R2 values of (0.9623 and 0.9724), RMSE values of (0.1269 and 0.1155), and VAF values of (96.24 and 97.34%) for the training and test stages, respectively, of the MFO-SVM model confirmed that this hybrid SVM model is a powerful and applicable technique for addressing problems related to TBM performance with a high level of accuracy.
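The four evaluation indices used above (R2, RMSE, MAE, VAF) have standard definitions. As a quick, paper-independent reference, a minimal pure-Python sketch (variable names are illustrative, not taken from the paper):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute R2, RMSE, MAE and VAF (%) for paired observations."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    # VAF: percentage of the target variance explained by the predictions
    err = [t - p for t, p in zip(y_true, y_pred)]
    mean_e = sum(err) / n
    var_e = sum((e - mean_e) ** 2 for e in err) / n
    var_y = ss_tot / n
    vaf = (1.0 - var_e / var_y) * 100.0
    return r2, rmse, mae, vaf
```

For a perfect prediction R2 is 1, RMSE and MAE are 0, and VAF is 100%; R2 and VAF/100 coincide only when the prediction errors have zero mean.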
TL;DR: A physics-informed neural network is developed to solve conductive heat transfer partial differential equation (PDE), along with convective heat transfer PDEs as boundary conditions (BCs), in manufacturing and engineering applications where parts are heated in ovens.
Abstract: A physics-informed neural network is developed to solve conductive heat transfer partial differential equation (PDE), along with convective heat transfer PDEs as boundary conditions (BCs), in manufacturing and engineering applications where parts are heated in ovens. Since convective coefficients are typically unknown, current analysis approaches based on trial-and-error finite element (FE) simulations are slow. The loss function is defined based on errors to satisfy PDE, BCs and initial condition. An adaptive normalizing scheme is developed to reduce loss terms simultaneously. In addition, theory of heat transfer is used for feature engineering. The predictions for 1D and 2D cases are validated by comparing with FE results. While comparing with theory-agnostic ML methods, it is shown that only by using physics-informed activation functions, the heat transfer beyond the training zone can be accurately predicted. Trained models were successfully used for real-time evaluation of thermal responses of parts subjected to a wide range of convective BCs.
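The loss composition described above (PDE residual plus boundary-condition and initial-condition errors) can be illustrated on the 1D heat equation u_t = α·u_xx. The sketch below is a generic toy, not the paper's network or normalization scheme: it evaluates a PINN-style composite loss for any candidate solution via finite differences, with an assumed diffusivity ALPHA and an assumed sine initial condition.

```python
import math

ALPHA = 0.1  # assumed thermal diffusivity for this toy example

def u_exact(x, t):
    # Exact solution of u_t = ALPHA * u_xx with u(0,t) = u(1,t) = 0
    # and u(x,0) = sin(pi * x)
    return math.exp(-ALPHA * math.pi ** 2 * t) * math.sin(math.pi * x)

def pde_residual(u, x, t, h=1e-3):
    """Finite-difference approximation of u_t - ALPHA * u_xx."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_t - ALPHA * u_xx

def composite_loss(u, xs, ts, w_pde=1.0, w_bc=1.0, w_ic=1.0):
    """PINN-style loss: weighted sum of mean squared PDE residual,
    boundary-condition error, and initial-condition error."""
    pde = sum(pde_residual(u, x, t) ** 2 for x in xs for t in ts) / (len(xs) * len(ts))
    bc = sum(u(0.0, t) ** 2 + u(1.0, t) ** 2 for t in ts) / len(ts)
    ic = sum((u(x, 0.0) - math.sin(math.pi * x)) ** 2 for x in xs) / len(xs)
    return w_pde * pde + w_bc * bc + w_ic * ic
```

The exact solution drives all three terms to (near) zero, while a wrong candidate incurs a large residual term; a PINN minimizes exactly this kind of loss over network parameters.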
TL;DR: In this article, the authors proposed three hybrid meta-heuristic algorithms, namely, ant colony optimization, fish swarm algorithm, and firefly algorithm, hybridized with variable neighborhood search to solve the sustainable medical supply chain network model.
Abstract: Nowadays, in the pharmaceutical industry, sustainability has become a pressing concern, particularly during the COVID-19 pandemic, yet good mathematical models in this field are lacking. In this research, a production-distribution-inventory-allocation-location problem in the sustainable medical supply chain network is designed to fill this gap. The distribution of medicines for COVID-19 patients, and the periods of production and delivery of medicine according to the perishability of some medicines, are also considered. The model formulates a multi-objective, multi-level, multi-product, and multi-period problem for a sustainable medical supply chain network. Three hybrid meta-heuristic algorithms, namely ant colony optimization, the fish swarm algorithm, and the firefly algorithm, each hybridized with variable neighborhood search, are suggested to solve the model. The response surface method is used to tune the parameters, since meta-heuristic algorithms are sensitive to input parameters. Six assessment metrics were used to assess the quality of the Pareto frontiers obtained by the meta-heuristic algorithms on the considered problems. A real case study is used, and empirical results indicate the superiority of the hybrid fish swarm algorithm with variable neighborhood search.
TL;DR: In this paper, a novel DE algorithm named quantum-based avian navigation optimizer algorithm (QANA) was proposed, which is inspired by the extraordinary precision navigation of migratory birds during long-distance aerial paths.
Abstract: Differential evolution (DE) is an effective and practical approach that is widely applied for solving global optimization problems. Nevertheless, its effectiveness and scalability decrease as the problem dimension increases. Hence, this paper is devoted to proposing a novel DE algorithm named the quantum-based avian navigation optimizer algorithm (QANA), inspired by the extraordinarily precise navigation of migratory birds along long-distance aerial paths. In the QANA, the population is distributed by partitioning into multiple flocks to explore the search space effectively, using a proposed self-adaptive quantum orientation and quantum-based navigation consisting of two mutation strategies, DE/quantum/I and DE/quantum/II. Except in the first iteration, each flock is assigned to one of the quantum mutation strategies by an introduced success-based population distribution (SPD) policy. Meanwhile, information flow is shared through the population using a new communication topology named V-echelon. Furthermore, we introduce long-term and short-term memories to provide meaningful knowledge for partial landscape analysis, and a qubit-crossover operator to generate the next search agents. The effectiveness and scalability of the proposed QANA were extensively evaluated using the CEC 2018 and CEC 2013 benchmark functions as large-scale global optimization (LSGO) problems. The results were statistically analyzed by the Wilcoxon signed-rank test, ANOVA, and mean absolute error tests. Finally, the applicability of the QANA to real-world problems was evaluated on four engineering problems. The experimental results and statistical analysis prove that the QANA is superior to the competitor DE and swarm intelligence algorithms on the CEC 2018 and CEC 2013 test functions, with overall effectiveness of 80.46% and 73.33%, respectively.
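QANA's DE/quantum/I and DE/quantum/II strategies are variants of differential evolution mutation; the paper's exact operators are not reproduced here, but the canonical DE/rand/1/bin scheme that such strategies build on can be sketched as follows (population size, F, and CR values are illustrative):

```python
import random

def de_rand_1(pop, i, f=0.8, cr=0.9, rng=random):
    """Classical DE/rand/1/bin: build a trial vector for individual i."""
    idxs = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.sample(idxs, 3)
    donor = [pop[r1][d] + f * (pop[r2][d] - pop[r3][d]) for d in range(len(pop[i]))]
    j_rand = rng.randrange(len(pop[i]))  # guarantee at least one donor gene
    return [donor[d] if (rng.random() < cr or d == j_rand) else pop[i][d]
            for d in range(len(pop[i]))]

def sphere(x):
    return sum(v * v for v in x)

def de_minimize(dim=5, n=20, iters=200, seed=1):
    """Minimize the sphere function with DE/rand/1/bin + greedy selection."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            trial = de_rand_1(pop, i, rng=rng)
            if sphere(trial) <= sphere(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(sphere(x) for x in pop)
```

QANA replaces the plain `donor` step with quantum-orientation-based mutations and partitions the population into flocks; the selection and crossover skeleton stays recognizably DE.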
TL;DR: In this paper, a Bayesian retinex algorithm for underwater image enhancement with multi-order gradient priors of reflectance and illumination is proposed, which achieves color correction, naturalness preservation, structure and detail enhancement, and artifact and noise suppression.
Abstract: This paper develops a Bayesian retinex algorithm for enhancing a single underwater image with multi-order gradient priors on reflectance and illumination. First, a simple yet effective color correction approach is adopted to remove color casts and recover naturalness. Then, a maximum a posteriori formulation for underwater image enhancement is established on the color-corrected image by imposing multi-order gradient priors on the reflectance and illumination. The l1 norm is used to model piecewise-constant and piecewise-linear approximations of the reflectance, and the l2 norm is used to enforce spatial smoothness and spatially linear smoothness on the illumination. Meanwhile, the complex underwater image enhancement problem is turned into two simple denoising subproblems, whose convergence analyses are mathematically provided and whose solutions can be derived by an efficient optimization algorithm. Besides, the proposed model is implemented efficiently with pixel-wise operations and does not require additional prior knowledge about underwater imaging conditions. Final experiments demonstrate the effectiveness of the proposed method in color correction, naturalness preservation, structure and detail enhancement, and artifact and noise suppression. Compared with several traditional and leading enhancement approaches, the proposed method yields better results in qualitative and quantitative assessments, and its superiority extends to several challenging applications.
TL;DR: A comprehensive computational campaign against the closely related and state-of-the-art algorithms in the literature shows that both the proposed heuristics and DABC are very effective for solving the problem under consideration.
Abstract: The distributed permutation flowshop scheduling problem (DPFSP) has been a hot topic in recent years. Due to the practical relevance of sequence-dependent setup times (SDST), we consider the DPFSP with SDST to minimize makespan. For this purpose, we propose three constructive heuristics and an effective discrete artificial bee colony (DABC) algorithm. All the heuristics are based on a greedy assignment rule and a local search over job blocks in each factory. In the local search, three different setup times are considered, respectively, for inserting a job block. In the DABC, to balance local exploitation and global exploration, we propose six composite neighborhood operators according to the problem characteristics: the first three are based on insertion and swap operators, and the second three are closely related to the critical factory. A problem-oriented local search method is developed to improve the best individual in the population. A comprehensive computational campaign against the closely related state-of-the-art algorithms in the literature shows that both the proposed heuristics and the DABC are very effective for solving the problem under consideration.
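The makespan objective for a plain (single-factory, setup-free) permutation flowshop follows the standard completion-time recurrence; the sketch below shows that baseline only. The paper's SDST and distributed aspects would add a sequence-dependent setup term and a factory-assignment layer on top.

```python
def makespan(perm, p):
    """Makespan of a job permutation on m machines in a permutation flowshop.
    p[j][k] = processing time of job j on machine k."""
    m = len(p[0])
    c = [0.0] * m  # completion time of the most recent job on each machine
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            # a job starts on machine k when both the machine is free (c[k])
            # and the job has finished on machine k-1 (c[k-1])
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]
```

With SDST, `p[j][k]` would be preceded by a setup time depending on the previously scheduled job; in the DPFSP, this makespan is evaluated per factory and the overall objective is the maximum over factories.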
TL;DR: IG-bBOA maximizes both the classification accuracy and the mean mutual information between features and class labels, while trying to minimize the number of selected features; it is used within a three-phase proposed method called the Ensemble Information Theory based binary Butterfly Optimization Algorithm (EIT-bBOA).
Abstract: Feature selection is the problem of finding the optimal subset of features for predicting class labels by removing irrelevant or redundant features. The S-shaped Binary Butterfly Optimization Algorithm (S-bBOA) is a nature-inspired algorithm for solving feature selection problems. Evidence shows that S-bBOA has better exploration, exploitation, convergence, and avoidance of getting stuck in local optima compared to other optimization algorithms. However, S-bBOA does not consider the redundancy and relevancy of features. This paper first proposes the Information Gain binary Butterfly Optimization Algorithm (IG-bBOA) to overcome this limitation of S-bBOA. IG-bBOA maximizes both the classification accuracy and the mean mutual information between features and class labels, while also trying to minimize the number of selected features. It is used within a three-phase proposed method called the Ensemble Information Theory based binary Butterfly Optimization Algorithm (EIT-bBOA). In the first phase, 80% of irrelevant and redundant features are removed using Minimal Redundancy-Maximal New Classification Information (MR-MNCI) feature selection. In the second phase, the best feature subset is selected using IG-bBOA. Finally, a similarity-based ranking method is used to select the final feature subset. The experiments are conducted using six standard datasets from the UCI repository. The findings confirm the efficiency of the proposed method in improving classification accuracy and selecting the best feature subset with a minimum number of features in most cases.
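IG-bBOA's fitness reportedly combines classification accuracy, feature-label mutual information, and subset size. The exact weighting is the paper's own; the sketch below shows a generic version of such a multi-term fitness with assumed weights, including a discrete mutual-information estimator:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two discrete sequences of equal length."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def fitness(accuracy, selected, features, labels,
            w_acc=0.8, w_mi=0.15, w_size=0.05):
    """Toy multi-term fitness (weights are illustrative, not the paper's):
    reward accuracy and mean feature-label MI, penalize subset size."""
    mi = sum(mutual_information(features[i], labels) for i in selected) / len(selected)
    return w_acc * accuracy + w_mi * mi - w_size * len(selected) / len(features)
```

A binary optimizer such as S-bBOA/IG-bBOA would evaluate this fitness for each candidate 0/1 feature mask, with `accuracy` coming from a wrapped classifier.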
TL;DR: Li et al. proposed a weighted network method based on the ordered visibility graph, named OVGWP, which considers not only the belief value itself but also the cardinality of the basic probability assignment.
Abstract: The transformation of a basic probability assignment into a probability distribution is an important aspect of the decision making process. To address this issue, a weighted network method based on the ordered visibility graph, named OVGWP, is proposed in this paper. In the proposed method, the information volume of focal elements is calculated by belief entropy. The entropy value is used to determine the rank of each proposition. After generating the ranks, a weighted network corresponding to the given basic probability assignment can be constructed. The global ratio for proportional belief transformation is determined by the degree of the nodes and their weighted edges in the network. Compared with the existing ordered visibility graph probability, we consider not only the belief value itself but also the cardinality of the basic probability assignment; hence the proposed OVGWP considers much more comprehensive information for the transformation. Experimental results reveal that OVGWP produces an effective and reasonable transformation performance compared with existing methods. If the basic probability assignment is given as m(Θ) = 1, the proposed OVGWP yields the same result as the pignistic probability transformation. The proposed OVGWP also satisfies the consistency of the upper and lower boundaries.
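Two ingredients mentioned above have standard closed forms: Deng's belief entropy, and the pignistic transformation that OVGWP matches when m(Θ) = 1. A minimal sketch (the OVGWP network construction itself is not reproduced):

```python
import math

def deng_entropy(m):
    """Deng's belief entropy: -sum m(A) * log2( m(A) / (2^|A| - 1) ).
    m maps frozenset focal elements to masses."""
    return -sum(v * math.log2(v / (2 ** len(a) - 1)) for a, v in m.items() if v > 0)

def pignistic(m):
    """Pignistic transformation: BetP(x) = sum over A containing x of m(A)/|A|."""
    p = {}
    for a, v in m.items():
        for x in a:
            p[x] = p.get(x, 0.0) + v / len(a)
    return p
```

For m(Θ) = 1 on a two-element frame, the pignistic transform spreads the mass uniformly (0.5 each), which is the case where OVGWP is stated to coincide with it; for singleton-only assignments, Deng entropy reduces to Shannon entropy.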
TL;DR: The problem of SLAM, its general model, framework, the difficulties, and leading approaches are described, and some of the most important approaches of all time are selected to understand the research development, current trends, and intellectual structure ofSLAM.
Abstract: Simultaneous Localization and Mapping (SLAM) is a key problem in the fields of Artificial Intelligence and mobile robotics that addresses localization and mapping when a prior map of the workspace is not accessible. The SLAM problem has gained significant research momentum up to recent times. In this paper, firstly, the SLAM problem, its general model, framework, difficulties, and leading approaches are described. Secondly, the progress of SLAM-solving algorithms is surveyed throughout history: pre-development work, early SLAM-solving algorithms, and recent and present methods are presented, and the progression of the state of the art is reviewed based on the impact of leading approaches. We have selected some of the most important approaches of all time (1986–2019) to understand the research development, current trends, and intellectual structure of SLAM. Furthermore, motivated by the trend of recent studies and the remaining difficult problems, a brief but sufficient review of visual SLAM, with its most outstanding approaches, is presented. This paper provides a single sufficient review that allows researchers to understand the trend of SLAM: where it has come from, where it is going, and what needs further investigation in SLAM-related fields, including the potentially most important approaches inspiring future research on the SLAM problem. This paper is intended as an efficient overview and a valuable survey introducing SLAM-solving approaches in mobile robotics as well as the general application of SLAM.
TL;DR: A new self-adaptive optimized grey model is proposed; its results reveal that the optimization techniques applied to the initial condition and background value can strikingly enhance the adaptability and prediction accuracy of the grey model.
Abstract: To alleviate the mounting pressure of energy shortages and environmental issues, the adoption of electric vehicles (EVs) is regarded as an effective measure. Therefore, accurate predictions of EV sales and stock are crucial for deploying charging infrastructure, improving industrial policies, and providing credible references for renewable source demand in the transportation system. To this end, a new self-adaptive optimized grey model is proposed with the following improvements: first, a dynamic weighted sequence is generated to extract more value from the available observations by sufficiently highlighting the new data without losing information. Second, the weighted coefficient and modified initial condition can adjust to various samples and thus broaden the applicability of the proposed model. Third, Simpson's formula is utilized to reconstruct the background value and then integrated with the modified initial condition to smooth data saltations and further enhance forecasting precision. To validate the rationality and efficacy of the novel model, four cases regarding the sales and stock of EVs are simulated by the proposed model and compared with six benchmarks. As demonstrated by the empirical results, the novel model achieves the highest forecasting precision in most cases, which reveals that the optimization techniques applied to the initial condition and background value can strikingly enhance the adaptability and prediction accuracy of the grey model. Thus, the novel model can be regarded as a promising tool for EV prediction.
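The classical GM(1,1) baseline that such optimized grey models extend can be sketched compactly. The paper's specific contributions (dynamic weighted sequence, modified initial condition, Simpson-based background value) are not reproduced here; this is the textbook model with the usual trapezoidal background value:

```python
import math

def gm11(x0, horizon=0):
    """Classical GM(1,1) grey model: fit the series x0 and forecast
    `horizon` extra steps. Assumes positive, roughly exponential data."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # accumulated (AGO) series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = len(z)
    # Least squares fit of y = -a*z + b (slope = -a, intercept = b)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(u * v for u, v in zip(z, y))
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)
    a, b = -slope, (sy - slope * sz) / m
    def x1_hat(k):  # time response function, k = 0, 1, 2, ...
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + horizon)]
```

On near-geometric data (e.g. steady EV-sales growth) the baseline already fits within a few percent; the optimized initial condition and background value target exactly the residual error of this approximation.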
TL;DR: Two novel algorithms based on the States of Matter Search (SMS) algorithm are proposed to find suitable embedding factors and reduce distortion in an improved meta-heuristic watermarking scheme.
Abstract: With the rapid development of information technology, copyright infringement has become increasingly serious. Digital watermarking is an effective method to protect information, but current watermarking technology still has room for improvement in imperceptibility and robustness. This paper proposes an improved watermarking technique using meta-heuristic algorithms. Further, the Quick Response code (QR code) is used as a carrier to transmit information. An improved Discrete Wavelet Transform-Singular Value Decomposition (DWT-SVD) scheme is used to hide the watermark in the QR code, thereby realizing digital watermarking on the QR code. In common watermark embedding methods, the digital watermark is governed by the embedding strength, so finding a suitable embedding factor that reduces distortion is of great significance for these watermarking algorithms. This paper proposes two novel algorithms based on the States of Matter Search (SMS) algorithm to find suitable embedding factors. The first uses an adaptive parameter to control the movement of particles and is called the adaptive-step States of Matter Search (sSMS). The second incorporates a co-evolutionary matrix to enhance the search capability and is named Co-evolution States of Matter Search (CSMS). DWT-SVD is updated through the two algorithms to acquire optimal embedding strength factors for QR code watermarking. By adjusting the embedding strength factors, the intensity of the watermark embedded in different frequency domains can be modified. The experimental results show higher PSNR, and the QR code can still be decoded by a general decoder, indicating that the proposed approaches are practicable and effective.
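PSNR, the imperceptibility measure reported above, has a standard definition: 10·log10(MAX² / MSE) in decibels, where MAX is the peak pixel value. A minimal sketch:

```python
import math

def psnr(original, distorted, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    given as flat sequences of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR between the cover and watermarked image means lower embedding distortion, which is what the SMS-based search over embedding strength factors is optimizing against robustness.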
TL;DR: In this paper, an elegant approach based on MRFO integrated with Gradient-Based Optimizer (GBO), named MRFO-GBO, is proposed to efficiently solve the economic emission dispatch (EED) problems.
Abstract: Recently, manta ray foraging optimization (MRFO) has been developed and applied to solve a few engineering optimization problems. In this paper, an elegant approach based on MRFO integrated with the Gradient-Based Optimizer (GBO), named MRFO–GBO, is proposed to efficiently solve economic emission dispatch (EED) problems. The proposed MRFO–GBO aims to reduce the probability of the original MRFO getting trapped in local optima, as well as to accelerate the solution process. The goal of solving the EED problem is to supply all required electrical loads economically while minimizing emissions and satisfying the operating equality and inequality constraints. Single- and multi-objective EED problems are solved using the proposed MRFO–GBO and the classical MRFO. In multi-objective EED, fuzzy set theory is adopted to determine the best compromise solution among the Pareto optimal solutions. The proposed algorithm is first validated on the well-known CEC'17 test functions, and then applied to several scenarios of EED problems for three electrical systems with 3, 5, and 6 generators. The validation covers different load levels of the tested systems to prove the robustness of the proposed algorithm. The results obtained by the proposed MRFO–GBO are compared with those obtained by recently published optimization techniques as well as the original MRFO and GBO. The results illustrate the ability of the proposed MRFO–GBO to effectively solve single- and multi-objective EED problems in terms of precision, robustness, and convergence characteristics.
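A common textbook form of the EED objective that optimizers like MRFO–GBO are applied to is a weighted sum of quadratic fuel-cost and emission curves with a power-balance penalty. The quadratic coefficients, weights, and penalty handling below are a generic sketch, not the paper's exact formulation:

```python
def eed_objective(p, cost_coef, emis_coef, demand, w=0.5, penalty=1e6):
    """Weighted-sum economic emission dispatch objective with a power-balance
    penalty (transmission losses neglected). p[i] is the output of unit i;
    each coefficient row is (a, b, c) in the quadratic a + b*P + c*P**2."""
    cost = sum(a + b * pi + c * pi ** 2 for (a, b, c), pi in zip(cost_coef, p))
    emis = sum(a + b * pi + c * pi ** 2 for (a, b, c), pi in zip(emis_coef, p))
    balance = abs(sum(p) - demand)  # equality constraint violation
    return w * cost + (1 - w) * emis + penalty * balance
```

A metaheuristic evaluates this objective for each candidate dispatch vector; sweeping the weight `w` (or using fuzzy set theory over Pareto solutions, as in the paper) trades cost against emissions.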
TL;DR: In this paper, a method based on a weighted induced logarithmic distance is presented to help address multiple attribute decision making (MADM) with q-ROFS information.
Abstract: As a more flexible and practical approach than the Pythagorean fuzzy set and intuitionistic fuzzy set, the q-rung orthopair fuzzy set (q-ROFS) has been widely used to express vagueness and uncertainty. In this paper, a method based on a weighted induced logarithmic distance is presented to help address multiple attribute decision making (MADM) with q-ROFS information. A new induced weighted logarithmic distance measure is first proposed to remedy the shortcomings of existing methods. Some outstanding properties have also been examined in detail. Considering the superiority of q-ROFS in modeling uncertainties, the improved induced weighted logarithmic distance measure is then extended to q-ROFS, thereby obtaining two new q-ROFS distance measures. Moreover, based on the developed q-ROFS distance measures, a new method for handling MADM problems under q-ROFS environment is presented, wherein information concerning the attribute weights is completely unknown. Finally, a numerical example concerning smart phone selection is presented to demonstrate the validity and superiority of the proposed method.
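The defining constraint of a q-rung orthopair fuzzy number, and Yager's standard score function (not the paper's new logarithmic distance measure, which is not reproduced here), can be stated in a few lines:

```python
def is_qrofn(mu, nu, q):
    """A membership/non-membership pair (mu, nu) is a valid q-rung orthopair
    fuzzy number iff mu**q + nu**q <= 1.
    q = 1 gives intuitionistic fuzzy sets, q = 2 Pythagorean fuzzy sets."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu ** q + nu ** q <= 1

def score(mu, nu, q):
    """Yager's score function; larger is better when ranking alternatives."""
    return mu ** q - nu ** q
```

This shows why q-ROFS is "more flexible": a pair like (0.9, 0.8) is invalid for Pythagorean fuzzy sets (q = 2) but admissible once q is large enough, so more expert evaluations can be represented.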
TL;DR: In this article, a physics-informed deep learning approach was proposed for bearing condition monitoring and fault detection, which consists of a simple threshold model and a deep convolutional neural network (CNN) model.
Abstract: In recent years, advances in computer technology and the emergence of big data have enabled deep learning to achieve impressive successes in bearing condition monitoring and fault detection. While existing deep learning approaches are able to efficiently detect and classify bearing faults, most of these approaches depend exclusively on data and do not incorporate physical knowledge into the learning and prediction processes; more importantly, they do not embed the physical knowledge of bearing faults into the model training process, which would make the model physically meaningful. To address this challenge, we propose a physics-informed deep learning approach that consists of a simple threshold model and a deep convolutional neural network (CNN) model for bearing fault detection. In the proposed approach, the threshold model first assesses the health classes of bearings based on the known physics of bearing faults. Then, the CNN model automatically extracts high-level characteristic features from the input data and makes full use of these features to predict the health class of a bearing. We designed a loss function for training and validating the CNN model that selectively amplifies the effect of the physical knowledge assimilated by the threshold model when embedding this knowledge into the CNN model. The proposed physics-informed deep learning approach was validated using (1) data from 18 bearings on an agricultural machine operating in the field, and (2) data from bearings on a laboratory test stand at the Case Western Reserve University (CWRU) Bearing Data Center.
TL;DR: Wang et al. proposed a method to solve the blockchain platform evaluation problem by integrating linguistic D numbers (LDNs), the double normalization-based multiple aggregation (DNMA) method, and the Criteria Importance Through Inter-criteria Correlation (CRITIC) method.
Abstract: Since more and more blockchain platforms are being utilized in diverse business applications, blockchain platform evaluation has become significant for clients. The evaluation faces challenges in terms of information uncertainty, multiple types of criteria, and correlations between criteria. This study proposes a method to solve these problems by integrating linguistic D numbers (LDNs), the double normalization-based multiple aggregation (DNMA) method, and the Criteria Importance Through Inter-criteria Correlation (CRITIC) method. Firstly, a conversion rule for LDNs is introduced to enhance the comparison rule of LDNs. Then, an integrated multiple criteria decision making framework is proposed by incorporating DNMA with LDNs. This method not only effectively captures incomplete or uncertain decision-making information with respect to cost, benefit, and target criteria, but also reduces the loss of decision information caused by using a single normalization technique. The CRITIC method is integrated into the LDN-based DNMA method to reflect the correlations between criteria in the blockchain platform evaluation process. To investigate the efficiency of the proposed method, a numerical example of blockchain platform evaluation is given. A sensitivity analysis demonstrates the robustness and stability of the developed method, and a comparative analysis shows that our method can effectively identify the potentially important criteria in the decision-making process.
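The CRITIC weighting step has a standard closed form: criterion j's objective weight is proportional to σ_j · Σ_k (1 − r_jk), combining contrast intensity (standard deviation) with conflict (one minus pairwise correlation). A sketch of that classical step, independent of the LDN/DNMA machinery:

```python
import math

def critic_weights(matrix):
    """CRITIC objective weights from a decision matrix
    (rows = alternatives, columns = criteria, assumed pre-normalized)."""
    n, m = len(matrix), len(matrix[0])
    cols = [[row[j] for row in matrix] for j in range(m)]
    means = [sum(c) / n for c in cols]
    stds = [math.sqrt(sum((v - mu) ** 2 for v in c) / n)
            for c, mu in zip(cols, means)]
    def corr(j, k):  # Pearson correlation between criteria j and k
        cov = sum((a - means[j]) * (b - means[k])
                  for a, b in zip(cols[j], cols[k])) / n
        return cov / (stds[j] * stds[k])
    # information content: high variance + low correlation with the rest
    info = [stds[j] * sum(1 - corr(j, k) for k in range(m)) for j in range(m)]
    total = sum(info)
    return [c / total for c in info]
```

Two perfectly correlated criteria share their weight, while a criterion that conflicts with the others (negative correlation) is weighted up, which is exactly the inter-criteria behaviour the abstract appeals to.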
TL;DR: In this article, an improved version of the Archimedes optimization algorithm (AOA) is introduced to determine the optimal parameters of polymer electrolyte membrane (PEM) fuel cell (FC).
Abstract: Meta-heuristic optimization algorithms aim to tackle real-world problems by maximizing specific criteria such as performance, profit, and quality, or minimizing others such as cost, time, and error. Accordingly, this paper introduces an improved version of a well-known optimization algorithm, namely the Archimedes optimization algorithm (AOA). The enhanced version, I-AOA, combines two efficient strategies, namely the local escaping operator (LEO) and orthogonal learning (OL). The performance of the proposed I-AOA has been evaluated on the CEC'2020 test suite and three engineering design problems. Furthermore, I-AOA is applied to determine the optimal parameters of polymer electrolyte membrane (PEM) fuel cells (FCs). Two commercial types of PEM fuel cells, the 250W PEMFC and the BCS 500W, are considered to prove the superiority of the proposed optimizer. During the optimization procedure, the seven unknown parameters (ξ1, ξ2, ξ3, ξ4, λ, RC, and b) of the PEM fuel cell are assigned as the decision variables, whereas the cost function to be minimized is the RMSE between the estimated cell voltage and the measured data. The results obtained by the I-AOA are compared to those of other well-known optimizers such as the Whale Optimization Algorithm (WOA), Moth-Flame Optimization Algorithm (MFO), Sine Cosine Algorithm (SCA), Particle Swarm Optimization (PSO), Harris hawks optimization (HHO), Tunicate Swarm Algorithm (TSA), and the original AOA. The comparison confirmed the superiority of the suggested algorithm in identifying the optimal PEM fuel cell parameters under various operating conditions compared to the other optimization algorithms.
TL;DR: A new belief divergence measure is proposed for DST, which can reflect the correlation of different kinds of subsets by taking into account the belief measure and plausibility measure of mass function.
Abstract: Dempster–Shafer theory (DST) has extensive and important applications in information fusion. However, when the pieces of evidence are highly conflicting with each other, Dempster's combination rule often leads to counter-intuitive results. In this paper, we propose a new belief divergence measure for DST, which can reflect the correlation of different kinds of subsets by taking into account the belief measure and plausibility measure of the mass function. Furthermore, the proposed divergence measure has the properties of boundedness, non-degeneracy, and symmetry. In addition, a new multi-source data fusion method is proposed based on the proposed divergence measure. This method utilizes not only credibility weights but also information volume weights to determine the comprehensive weights of the evidence, which can fully reflect the relationship between pieces of evidence. Application cases and simulation results show that the proposed method is reasonable and effective.
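The belief and plausibility measures the proposed divergence is built on are standard in DST: Bel(A) sums the masses of focal sets contained in A, and Pl(A) sums those intersecting A. A sketch (the divergence measure itself is the paper's contribution and is not reproduced):

```python
def belief(m, a):
    """Bel(A): sum of masses of all focal sets contained in A.
    m maps frozenset focal elements to masses."""
    return sum(v for b, v in m.items() if b <= a)

def plausibility(m, a):
    """Pl(A): sum of masses of all focal sets intersecting A."""
    return sum(v for b, v in m.items() if b & a)
```

Bel(A) ≤ Pl(A) always holds; the interval [Bel(A), Pl(A)] is the imprecision that a divergence over both measures can exploit, unlike divergences defined on the mass function alone.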
TL;DR: In this paper, an improved marine predators algorithm (MPA) is used to optimize the shape of shape-adjustable generalized cubic developable Ball (SGCD-Ball, for short) surfaces.
Abstract: The shape optimization of developable surfaces is a pivotal and challenging technique in CAD/CAM, used in many product manufacturing planning operations, e.g., for ships, aircraft wings, automobiles, garments, etc. In this paper, an improved marine predators algorithm (MPA) is used to optimize the shape of shape-adjustable generalized cubic developable Ball (SGCD-Ball, for short) surfaces. Firstly, to solve the problems of shape adjustment and optimization for developable surfaces, we present a class of novel shape-adjustable generalized cubic Ball basis functions, and then construct the SGCD-Ball surfaces with shape parameters by using the presented basis functions. The shapes of the surfaces can be adjusted and optimized expediently by using the shape parameters. Secondly, the shape optimization of developable surfaces is mathematically an optimization problem that can be effectively dealt with by a swarm intelligence algorithm. In this regard, by incorporating a quasi-opposition strategy and a differential evolution algorithm into the MPA, an improved MPA called ODMPA is developed to increase the population diversity and enhance its capability of jumping out of local minima. Furthermore, the superiority of the proposed ODMPA is verified by comparing it with the standard MPA, a modified MPA, and several well-known intelligent algorithms on 23 classical benchmark functions, the CEC'17 test suite, and three engineering optimization problems, respectively. Finally, by taking minimization of the energy of the SGCD-Ball surfaces as the evaluation standard, the shape optimization models of the corresponding enveloping and spine curve developable surfaces are established. The ODMPA is utilized to solve the shape optimization models, and the SGCD-Ball surfaces with minimum energy are obtained. Some representative numerical examples demonstrate the superiority of the proposed ODMPA in effectively solving the shape optimization models in terms of precision and robustness.
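Quasi-opposition-based learning, one of the two ingredients added to the MPA above, has a standard definition: for a coordinate x in [lo, hi], sample a point uniformly between the interval centre and the opposite point lo + hi − x. A sketch of that generic operator (not ODMPA itself):

```python
import random

def quasi_opposite(x, lo, hi, rng=random):
    """Quasi-opposition-based learning for one coordinate: sample uniformly
    between the interval centre and the opposite point lo + hi - x."""
    centre = (lo + hi) / 2.0
    opposite = lo + hi - x
    a, b = min(centre, opposite), max(centre, opposite)
    return rng.uniform(a, b)
```

Evaluating both a candidate and its quasi-opposite, and keeping the better one, raises the chance of starting near the optimum and helps the population escape local minima, which is the diversity benefit the abstract claims.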
TL;DR: This work elaborates on a notion of “aggregate process” as a concurrent collective computation whose execution and interactions are sustained by a dynamic team of devices, whose spatial region can opportunistically vary over time.
Abstract: Edge computing promotes the execution of complex computational processes without the cloud, i.e., on top of the heterogeneous, articulated, and possibly mobile systems composed of IoT and edge devices. Such a pervasive smart fabric augments our environment with computing and networking capabilities. This leads to a complex and dynamic ecosystem of devices that should exhibit not only individual intelligence but also collective intelligence: the ability to take group decisions or process knowledge among autonomous units of a distributed environment. Self-adaptation and self-organisation mechanisms are also typically required to ensure continuous and inherent tolerance of changes of various kinds (in the distribution of devices, the energy available, and the computational load) as well as of faults. To achieve this behaviour in a massively distributed setting such as edge computing, we seek to identify proper abstractions, and engineering tools for them, that smoothly capture collective behaviour, adaptivity, and the dynamic injection and execution of concurrent distributed activities. Accordingly, we elaborate on a notion of “aggregate process” as a concurrent collective computation whose execution and interactions are sustained by a dynamic team of devices and whose spatial region can opportunistically vary over time. We ground this notion by extending the aggregate computing model and toolchain with new constructs to instantiate aggregate processes and regulate key aspects of their lifecycle. By virtue of an open-source implementation in the ScaFi framework, we show basic programming examples as well as case studies of edge computing, evaluated by simulation in realistic settings.
TL;DR: Feature Selection (FS) as discussed by the authors is a crucial pre-processing step in network management and specifically for the purposes of network intrusion detection, where trade-offs between performance and resource consumption are crucial.
Abstract: Machine Learning (ML) techniques are becoming an invaluable support for network intrusion detection, especially in revealing anomalous flows, which often hide cyber-threats. Typically, ML algorithms are exploited to classify/recognize data traffic on the basis of statistical features such as inter-arrival times, packet length distribution, mean number of flows, etc. Dealing with the vast diversity and number of features that typically characterize data traffic is a hard problem. This results in the following issues: (i) the presence of so many features leads to lengthy training processes (particularly when features are highly correlated), while prediction accuracy does not proportionally improve; (ii) some of the features may introduce bias during the classification process, particularly those that have scarce relation with the data traffic to be classified. To this end, by reducing the feature space and retaining only the most significant features, Feature Selection (FS) becomes a crucial pre-processing step in network management and, specifically, for the purposes of network intrusion detection. In this review paper, we complement other surveys in multiple ways: (i) evaluating more recent datasets (updated w.r.t. the obsolete KDD'99 dataset) by means of a designed-from-scratch Python-based procedure; (ii) providing a synopsis of the most credited FS approaches in the field of intrusion detection, including Multi-Objective Evolutionary techniques; (iii) assessing various experimental analyses such as feature correlation, time complexity, and performance. Our comparisons offer useful guidelines to network/security managers who are considering the incorporation of ML concepts into network intrusion detection, where trade-offs between performance and resource consumption are crucial.
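The relevance/redundancy trade-off at the heart of filter-style FS can be illustrated with a minimal sketch: rank features by absolute Pearson correlation with the label, then skip features that are nearly collinear with an already selected one. This is our own generic example, not any specific method from the surveyed literature, and the threshold values are arbitrary.

```python
import numpy as np

def correlation_filter(X, y, k=5, redundancy_cap=0.95):
    """Keep up to k features, most label-relevant first, skipping any
    feature whose |correlation| with a kept feature exceeds the cap."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    order = np.argsort(-relevance)          # most relevant first
    selected = []
    for j in order:
        if any(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) > redundancy_cap
               for s in selected):
            continue                        # redundant with a kept feature
        selected.append(int(j))
        if len(selected) == k:
            break
    return selected
```

Wrapper and embedded FS methods, also covered by the survey, instead score feature subsets through a classifier rather than through such per-feature statistics.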
TL;DR: A modified version of Manta ray foraging optimizer (MRFO) algorithm to deal with global optimization and multilevel image segmentation problems is presented and the FO-MRFO shows its superiority in comparison with the basic MRFO.
Abstract: This paper presents a modified version of the Manta ray foraging optimizer (MRFO) algorithm to deal with global optimization and multilevel image segmentation problems. MRFO is a meta-heuristic technique that simulates the behaviors of manta rays searching for food. MRFO has established its ability to find suitable solutions for a variety of optimization problems. However, by analyzing its behaviors during the optimization process, it is observed that its exploitation ability is weaker than its exploration ability, which makes MRFO prone to being attracted to a local optimum. Therefore, we enhanced MRFO by using fractional-order (FO) calculus during the exploitation phase. We used the heredity and non-locality properties of the Grünwald–Letnikov fractional differintegral operator to simulate the after-effect of the previous locations of manta rays on their future movement directions. The quality of the proposed fractional-order MRFO (FO-MRFO) is confirmed through two series of experiments. Firstly, it is applied to find solutions for the CEC2017 benchmark functions with dimensions of 10, 30, and 50. Through non-parametric statistical analysis, the FO-MRFO shows its superiority in comparison with the basic MRFO. In the second series of experiments, the developed algorithm is implemented as a multilevel-threshold image segmentation technique. In this experiment, a variety of natural images is used to assess FO-MRFO. According to different performance measures, the FO-MRFO outperforms the compared algorithms in both global optimization and image segmentation.
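The memory effect the abstract attributes to the Grünwald–Letnikov operator can be sketched numerically: the GL fractional difference weights past samples with binomial coefficients that decay with lag, so earlier positions still influence the next move. The helper below is a generic illustration of those weights and of a weighted sum over a position history; it is not the paper's actual FO-MRFO update rule.

```python
def gl_weights(alpha, n_terms):
    """Grünwald–Letnikov weights w_k = (-1)^k * C(alpha, k),
    via the recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n_terms):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def fractional_memory(history, alpha):
    """Weighted sum over past positions (most recent first), modelling
    the after-effect of earlier locations on the next movement."""
    w = gl_weights(alpha, len(history))
    return sum(wk * xk for wk, xk in zip(w, history))
```

For alpha = 1 the weights reduce to [1, -1, 0, ...], i.e., a plain first difference; fractional alpha in (0, 1) keeps a slowly decaying tail over the whole history, which is the "heredity" property being exploited.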
TL;DR: A trust-based recommender system is presented that predicts the scores of items the target user has not rated; if the sought item is not found, it recommends items dependent on that item that also match the user's interests.
Abstract: In recent years, the use of trust-based recommendation systems to predict the scores of items not rated by users has attracted many researchers' interest. Accordingly, researchers create a trusted network of users and search for the desired rating by moving a Trust Walker through the trust graph with a random-walk algorithm. This approach faces several challenges, such as calculating the level of trust between users, moving the Trust Walker by a plain random walk (random route selection), and failing to discover the desired rating, which causes the algorithm to fail. In the present study, in order to solve the mentioned challenges, a trust-based recommender system is presented that predicts the ratings of items that the target user has not rated. In the first stage, a trusted network is developed based on three criteria. In the next step, we define a Trust Walker to calculate the level of trust between users and apply the Biased Random Walk (BRW) algorithm to move it; the proposed method recommends the item to the target user when the desired rating is found, and if that item does not exist in the defined trust network, it uses association rules to recognize items that are dependent on the item being searched and recommends them to the target user. The evaluation of this research has been performed on three datasets, and the obtained results indicate the higher efficiency and accuracy of the proposed method.
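The difference between a plain random walk and the biased walk described above is that each step is drawn in proportion to the trust weight on the edge, so the walker preferentially visits highly trusted users. A minimal sketch on a toy trust graph (our own illustration; the paper's trust criteria and stopping rules are not modelled here):

```python
import random

def biased_step(trust_graph, current, rng):
    """Move to a neighbour with probability proportional to trust weight."""
    neighbours = trust_graph[current]
    r = rng.random() * sum(neighbours.values())
    for user, weight in neighbours.items():
        r -= weight
        if r <= 0:
            return user
    return user  # fallback for floating-point rounding

def biased_trust_walk(trust_graph, start, steps, rng=None):
    """Walk `steps` biased steps from `start`, returning the visit path."""
    rng = rng or random.Random()
    path = [start]
    for _ in range(steps):
        path.append(biased_step(trust_graph, path[-1], rng))
    return path
```

With edge weights 0.9 toward a highly trusted user and 0.1 toward a weakly trusted one, the walker visits the former about nine times as often, which is exactly the bias the BRW exploits when searching for a rating.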
TL;DR: In this paper, an ensemble deep neural network was proposed to detect colorectal cancer using multi-class tissue features, achieving accuracies of 96.16% and 92.83% on two publicly available datasets, NCT-CRC-HE-100K (107,180 images) and Colorectal Histology (5,000 images), and 99.13% on the combined dataset.
Abstract: With a mortality rate of approximately 33.33%, colorectal cancer is the second most prevalent malignant tumor type in the world. AI-guided clinical care/tools can help in reducing health disparities, specifically in resource-constrained regions. In this paper, using multi-class tissue features, we propose an Ensemble Deep Neural Network to detect tumors in colorectal histology images. On two different publicly available datasets, NCT-CRC-HE-100K (107,180 images) and Colorectal Histology (5,000 images), we achieved accuracies of 96.16% and 92.83%, respectively. When the datasets are combined, the model provides a benchmark accuracy of 99.13%. We used the available data efficiently, thereby achieving results that outperform state-of-the-art works.
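The abstract does not specify how the ensemble members are combined; a common choice for such classifiers is soft voting, i.e., averaging the per-class probabilities of the base networks before taking the argmax. A minimal sketch of that combination rule (our assumption, not the paper's confirmed scheme):

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average per-class probabilities from several models and return
    the predicted class index for each sample.

    prob_list: sequence of (n_samples, n_classes) probability arrays,
    one per base model; optional per-model weights.
    """
    probs = np.asarray(prob_list)            # (n_models, n_samples, n_classes)
    avg = np.average(probs, axis=0, weights=weights)
    return avg.argmax(axis=1)
```

Soft voting lets a confident model outvote an uncertain one on a given sample, which is one reason ensembles of CNNs often beat their best single member.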
TL;DR: A convolutional neural network with attention modules was designed to accurately segment foreign objects from a complex background in real-time and proved that the attention modules could focus on the features of the salient region and inhibit the irrelevant background, which significantly improved the accuracy of the detection.
Abstract: Foreign objects in coal seriously affect the efficiency and safety of clean coal production. Currently, the removal of foreign objects in coal preparation plants mainly depends on manual picking, which has the disadvantages of high labor intensity and low efficiency. Therefore, there is an urgent need for the rapid detection and removal of foreign objects. However, due to interference from the background and surrounding objects, accurate detection of foreign objects is a challenge. In this study, a convolutional neural network (CNN) with attention modules was designed to accurately segment foreign objects from a complex background in real time. The proposed network consists of an encoder and a decoder, and the attention mechanism was introduced into the decoder to capture rich semantic information. The visualization results proved that the attention modules could focus on the features of the salient region and inhibit the irrelevant background, which significantly improved the accuracy of the detection. The results showed that the proposed model correctly recognized 97% of the foreign objects in the 1,871 sets of test images. The mean intersection over union (MIOU) of the optimal model was 91.24%, and the inference speed was greater than 15 fps, which satisfied the real-time requirement.
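The abstract does not detail the attention design; one widely used form that "inhibits the irrelevant background" is squeeze-and-excitation style channel attention, where each channel of a feature map is rescaled by a learned gate in (0, 1). A numpy sketch of that mechanism under our own assumptions (the paper's modules may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention.

    feature_map: (C, H, W); w1: (C, C // r); w2: (C // r, C).
    Global-average-pool each channel, pass the pooled vector through a
    two-layer bottleneck, and rescale channels by the sigmoid gates.
    """
    squeeze = feature_map.mean(axis=(1, 2))          # (C,) channel summary
    hidden = np.maximum(squeeze @ w1, 0.0)           # ReLU bottleneck
    gates = sigmoid(hidden @ w2)                     # (C,) gates in (0, 1)
    return feature_map * gates[:, None, None]        # reweight channels
```

Channels whose gate is near zero are effectively suppressed, which is how such a module can damp background-dominated features while passing salient-object features through.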
TL;DR: In this paper, the authors proposed an interval-valued spherical fuzzy set (IVSF) cosine similarity measure to rank the alternatives and specify the preeminent preference in a multi-criteria decision-making problem.
Abstract: Due to the uncertainty, vagueness, ambiguity, and subjectivity of the information in an intricate decision-making environment, the assessment data specified by experts are mostly fuzzy and uncertain. As an extension of Pythagorean fuzzy sets (PyFSs) and picture fuzzy sets (PFSs), spherical fuzzy sets (SFSs) are used frequently for presenting fuzzy and indeterminate information. In multi-criteria decision-making (MCDM) problems, the weights of the criteria are generally not known. The maximizing deviation technique is a useful tool for handling problems in which only partial or incomplete information about the criteria weights is available. This research expands the classical maximizing deviation technique to a spherical fuzzy maximizing deviation technique using single-valued (SV) and interval-valued (IV) spherical fuzzy sets to determine criteria weights. To rank the alternatives and specify the preeminent preference, we propose the Interval-Valued Spherical Fuzzy TOPSIS method based on a similarity measure instead of a distance measure. For this purpose, we propose an IVSF cosine similarity measure. To present its effectiveness and practicability, we apply the proposed methodology to an advertisement strategy selection problem, where IVSF sets are used to represent the evaluations of alternatives and criteria. A sensitivity analysis with different similarity measures is performed to show the reliability of the proposed methodology.
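The abstract does not give the paper's exact cosine formula, but the general shape of such a measure can be sketched: represent each IVSF element as three intervals (membership, hesitancy, non-membership), flatten each element to a 6-vector, and average the per-element cosine similarities. The encoding below is our own assumption for illustration, not the authors' definition.

```python
import math

def ivsf_cosine_similarity(a, b):
    """Cosine similarity between two interval-valued spherical fuzzy
    evaluations.

    Each evaluation is a list of elements of the form
    ([mu_L, mu_U], [pi_L, pi_U], [nu_L, nu_U]); per-element cosines
    are averaged over the criteria.
    """
    def flatten(e):
        return [v for interval in e for v in interval]
    sims = []
    for ea, eb in zip(a, b):
        x, y = flatten(ea), flatten(eb)
        dot = sum(xi * yi for xi, yi in zip(x, y))
        nx = math.sqrt(sum(xi * xi for xi in x))
        ny = math.sqrt(sum(yi * yi for yi in y))
        sims.append(dot / (nx * ny))
    return sum(sims) / len(sims)
```

In a similarity-based TOPSIS, each alternative would then be scored by its similarity to the positive ideal solution relative to its similarity to the negative ideal solution, in place of the usual distance-based closeness coefficient.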
TL;DR: Clustering ensemble, as mentioned in this paper, is a knowledge-reuse approach to the challenges inherent in clustering: it seeks results of high stability and robustness by combining solutions computed by base clustering algorithms without access to the features.
Abstract: Clustering, as an unsupervised learning task, is aimed at discovering the natural groupings of a set of patterns, points, or objects. A significant problem with clustering algorithms is the absence of a deterministic approach by which users can decide which clustering method best matches a given set of input data; this is because each method optimizes certain criteria. Clustering ensemble, as a form of knowledge reuse, offers a solution to the challenges inherent in clustering. It seeks results of high stability and robustness by combining solutions computed by base clustering algorithms without access to the features. Combining base clusterings degrades the quality of the final solution when low-quality ensemble members are used. Several researchers in this field have suggested the concept of clustering ensemble selection, which aims to select a subset of base clusterings based on quality and diversity. While a clustering ensemble combines all ensemble members, clustering ensemble selection chooses a subset of ensemble members and forms a smaller cluster ensemble that performs better than the full clustering ensemble. This survey covers the historical development of data clustering, gives an overview of basic clustering techniques, discusses clustering ensemble algorithms including ensemble generation mechanisms and consensus functions, and points out clustering ensemble selection techniques considering quality and diversity.
TL;DR: A novel Multi-Agent Hierarchical Policy Gradient algorithm (MAHPG) is proposed, which is capable of learning various strategies and transcending expert cognition by adversarial self-play learning and outperforms the state-of-the-art air combat methods in terms of both defense and offense ability.
Abstract: Air-to-air confrontation has attracted wide attention from artificial intelligence scholars. However, in the complex air combat process, operational strategy selection depends heavily on aviation expert knowledge, which is usually expensive and difficult to obtain. Moreover, it is challenging to select optimal action sequences efficiently and accurately with existing methods, due to the high complexity of action selection when hybrid actions, e.g., discrete/continuous actions, are involved. In view of this, we propose a novel Multi-Agent Hierarchical Policy Gradient algorithm (MAHPG), which is capable of learning various strategies and transcending expert cognition through adversarial self-play learning. In addition, a hierarchical decision network is adopted to deal with the complicated hybrid actions. It has a hierarchical decision-making ability similar to that of humans and thus reduces action ambiguity efficiently. Extensive experimental results demonstrate that the MAHPG outperforms state-of-the-art air combat methods in terms of both defensive and offensive ability. Notably, it is discovered that the MAHPG has the ability of Air Combat Tactics Interplay Adaptation, and new operational strategies emerged that surpass the level of experts.
TL;DR: Zhang et al. as discussed by the authors proposed a Faster R-CNN-AON network to improve the robustness of underwater target detection by adding the adversarial occlusion network.
Abstract: Underwater target detection is an important part of ocean exploration, with important applications in military and civil fields. Since the underwater environment is complex and changeable and the sample images that can be obtained are limited, this paper proposes adding an adversarial occlusion network (AON) to the standard Faster R-CNN detection algorithm, a combination called the Faster R-CNN-AON network. The AON has a competitive relationship with the Faster R-CNN detection network: it learns how to occlude a given target and make it difficult for the detection network to classify the occluded target correctly. The Faster R-CNN detection network and the AON compete and learn together, ultimately giving the detection network better robustness for underwater seafood. The joint training of Faster R-CNN and the adversarial network effectively prevents the detection network from overfitting the generated fixed features. The experimental results in this paper show that, compared with the standard Faster R-CNN network, mAP increases by 2.6% on the VOC07 dataset and by 4.2% on the underwater dataset.
TL;DR: A new nature-inspired algorithm called Lichtenberg Optimization Algorithm (LA) is applied to solve a complex inverse damage identification problem in mechanical structures built by composite material and was shown to be a powerful damage identification tool.
Abstract: Optimization is an essential tool to minimize or maximize functions, obtaining optimal results for costs, mass, energy, gains, among others. Real-world problems may be multimodal, nonlinear, and discontinuous, and may not be minimizable by classical analytical methods that depend on the gradient. In this context, there are metaheuristic algorithms inspired by natural phenomena to optimize real engineering problems. No algorithm is universally the best or the worst; each is simply more efficient for certain problems. Thus, a new nature-inspired algorithm called the Lichtenberg Optimization Algorithm (LA) is applied in this study to solve a complex inverse damage identification problem in mechanical structures built from composite material. To verify the performance of the new algorithm, both LA and the Finite Element Method (FEM) were used to identify delamination damage, and the results were compared to those of other algorithms such as the Genetic Algorithm (GA) and SunFlower Optimization (SFO). LA was shown to be a powerful damage identification tool, since it was able to detect damage even in particular situations such as noisy responses and low damage severity.