
Showing papers on "Soft computing published in 2021"


Journal ArticleDOI
TL;DR: Current limitations and challenges are discussed, including advances in network implementations, applications to unconventional resources, dataset acquisition and synthetic training, extrapolative potential, accuracy loss from soft computing, and the computational cost of 3D Deep Learning.

78 citations


Journal ArticleDOI
TL;DR: The review showed that SC methods are powerful tools that provide flexible computational techniques with a high level of accuracy for civil engineering problems; however, most of the published works neglected to present the required details and mathematical framework.
Abstract: Soft computing (SC), owing to its ability to solve complex problems involving uncertainty and multiple parameters, has been widely investigated and applied, especially in structural engineering. SC methods have successfully estimated the capacity of reinforced concrete (RC) structural members and determined the properties of concrete. Although many articles in the literature have applied SC methods to these ends, no work has assessed the capability of such approaches by providing an overview of the existing studies. This lack of a state-of-the-art review is the main motivation for presenting a comprehensive review of the latest trends between 2010 and 2020 in predicting the behavior of concrete elements using soft computing methods. The RC structural elements considered are beams, columns, joints, slabs, frames, concrete-filled tube sections, and elements strengthened with fibre-reinforced polymer. The investigated works aimed to predict concrete characteristics such as cracking, bond, and shrinkage, or the strength of the elements. The review showed that SC methods are powerful tools that provide flexible computational techniques with a high level of accuracy for civil engineering problems. However, most of the published works neglected to present the required details and mathematical framework.

59 citations


Journal ArticleDOI
TL;DR: It was found that the MPMR model outperformed ANFIS-PSO and ANN-PSO and can be used as a reliable soft computing technique for non-linear settlement problems of shallow foundations on soils.
Abstract: This research focuses on the application of three soft computing techniques, Minimax Probability Machine Regression (MPMR), Particle Swarm Optimization-based Artificial Neural Network (ANN-PSO), and Particle Swarm Optimization-based Adaptive Network Fuzzy Inference System (ANFIS-PSO), to study shallow foundation reliability based on settlement criteria. Soil is a heterogeneous medium, and the involvement of its attributes in the geotechnical behaviour of the soil-foundation system makes predicting the settlement of shallow foundations a complex engineering problem. This study explores the feasibility of soft computing techniques against the deterministic approach. The settlement of a shallow foundation depends on the parameters γ (unit weight), e0 (void ratio), and Cc (compression index). These soil parameters are taken as input variables and the settlement of the shallow foundation as the output. To assess the performance of the models, different performance indices, i.e., RMSE, VAF, R², bias factor, MAPE, LMI, U95, RSR, NS, RPD, etc., were used. From the analysis of the results, it was found that the MPMR model outperformed ANFIS-PSO and ANN-PSO. Therefore, MPMR can be used as a reliable soft computing technique for non-linear settlement problems of shallow foundations on soils.
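As a concrete illustration of a few of the indices named above, the following minimal sketch (with hypothetical settlement values; the remaining indices follow the same pattern) computes RMSE, VAF, R², and MAPE from observed and predicted arrays:

```python
import numpy as np

def regression_indices(y_true, y_pred):
    """A few of the performance indices named above (illustrative only)."""
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))                 # root-mean-square error
    vaf = (1 - np.var(resid) / np.var(y_true)) * 100.0  # variance accounted for, %
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1 - ss_res / ss_tot                            # coefficient of determination
    mape = np.mean(np.abs(resid / y_true)) * 100.0      # mean absolute percentage error
    return {"RMSE": rmse, "VAF": vaf, "R2": r2, "MAPE": mape}

# Example: observed vs. predicted settlements (hypothetical values, in mm)
obs = np.array([12.1, 8.4, 15.0, 9.7])
pred = np.array([11.8, 8.9, 14.2, 10.1])
print(regression_indices(obs, pred))
```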

59 citations


Journal ArticleDOI
TL;DR: Structures with high seismic vulnerability are still in service, which creates an urgent need for a screening-based damageability grading system and for a rapid, reliable, and computationally simple method of seismic vulnerability assessment, more commonly known as RVS.
Abstract: Seismic vulnerability assessment of existing buildings is of great concern around the world. Different countries have developed various approaches and methodologies to mitigate the disastrous effects of earthquakes on buildings' structural parameters and the resulting human losses. Structures with high seismic vulnerability are still in service, which creates an urgent need for a screening-based damageability grading system. Rapid urbanization and the proliferation of slums give rise to improper construction practices that make the reliability of the building stock ambiguous, including old structures constructed either before seismic codes were advanced or before they were enforced by law. Even with a good knowledge of structural analysis, it is impractical to conduct detailed nonlinear analysis on each building in a target area to establish its seismic vulnerability. This indicates the necessity of developing a rapid, reliable, and computationally simple method of seismic vulnerability assessment, more commonly known as Rapid Visual Screening (RVS). This method begins with a walk-down survey by a trained evaluator, who assigns an initial score to the structure. Next, the vulnerability parameters (predictor variables) and the damage grades are defined, and various methods are adopted to develop an optimum correlation between the parameters and the damage grades. Soft Computing (SC) techniques, including probabilistic approaches, meta-heuristics, and Artificial Intelligence (AI) theories such as artificial neural networks, machine learning, and fuzzy logic, are among the most important and widely used approaches in this regard, owing to their capability to handle the inherent imprecision of real-world phenomena. In this paper, a comprehensive literature review of the most commonly used and newly developed methodologies in RVS using powerful SC techniques is presented to shed light on the key factors, strengths, and applications of each SC technique in advancing the RVS field of study.
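To make the RVS scoring step concrete, here is a minimal sketch of the basic-score-plus-modifiers idea: an initial score per structural type is adjusted by vulnerability modifiers observed during the walk-down survey. The structural types, modifier values, and floor are hypothetical placeholders, not taken from any particular code (e.g., FEMA P-154) or from the reviewed papers:

```python
# Hypothetical base scores per structural type and score modifiers per observed
# deficiency; real RVS forms tabulate these per seismicity region.
BASIC_SCORE = {"RC_frame": 2.5, "URM": 1.5}
MODIFIERS = {"soft_storey": -1.0, "vertical_irregularity": -0.7, "post_code_design": +1.2}

def rvs_score(structure_type, observed_deficiencies, s_min=0.3):
    score = BASIC_SCORE[structure_type]
    score += sum(MODIFIERS[d] for d in observed_deficiencies)
    return max(score, s_min)  # floor so the score stays usable for ranking

print(rvs_score("RC_frame", ["soft_storey"]))  # 1.5; low scores trigger detailed evaluation
```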

55 citations


Journal ArticleDOI
TL;DR: A novel meta-heuristic computing solver is presented for solving singular three-point second-order boundary value problems using artificial neural networks optimized by the combined global and local search abilities of genetic algorithms and the interior point algorithm, i.e., ANN–GA–IPA.
Abstract: In this paper, a novel meta-heuristic computing solver is presented for solving singular three-point second-order boundary value problems using artificial neural networks (ANNs) optimized by the combined global and local search abilities of genetic algorithms (GAs) and the interior point algorithm (IPA), i.e., ANN–GA–IPA. The inspiration for this numerical work comes from the intention of introducing a consistent framework that combines the effective features of neural networks with soft computing optimization to handle such challenging systems. Three numerical variants of the singular second-order system have been taken to examine the proficiency, robustness, and stability of the designed approach. The comparison of the proposed results of ANN–GA–IPA with available exact solutions shows good agreement, with 5 to 7 decimal places of accuracy, which established the worth of the methodology through performance analyses on single and multiple executions.
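The general scheme, an ANN trial solution whose weights are tuned by a global search followed by a local refinement, can be sketched as below. This is not the authors' exact solver: SciPy's differential evolution stands in for the GA, Nelder-Mead stands in for the interior point algorithm, derivatives are taken numerically, and the ODE with its two-point conditions is an illustrative placeholder for the three-point problem:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

X = np.linspace(0.01, 1.0, 30)          # avoid x = 0, where the ODE is singular
H = 5                                    # hidden neurons

def net(w, x):
    a, b, c = w[:H], w[H:2*H], w[2*H:]   # input weights, biases, output weights
    return np.tanh(np.outer(x, a) + b) @ c

def residual_mse(w):
    y = net(w, X)
    dy = np.gradient(y, X)               # numerical derivatives of the trial solution
    d2y = np.gradient(dy, X)
    ode = d2y + (2.0 / X) * dy + y       # placeholder singular ODE: y'' + (2/x)y' + y = 0
    bc = (net(w, np.array([0.01]))[0] - 1.0) ** 2 + net(w, np.array([1.0]))[0] ** 2
    return np.mean(ode ** 2) + bc        # fitness = ODE residual + boundary penalty

bounds = [(-3, 3)] * (3 * H)
coarse = differential_evolution(residual_mse, bounds, maxiter=200, seed=0)  # global stage
fine = minimize(residual_mse, coarse.x, method="Nelder-Mead")               # local stage
print("residual after refinement:", fine.fun)
```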

53 citations


Journal ArticleDOI
TL;DR: This study presents the state of the art on the use of soft computing techniques in TBM tunneling through practical applications and proposes recommendations for the optimal use of these techniques, in particular the importance of preliminary analyses for the selection and reduction of input parameters.

44 citations


Journal ArticleDOI
TL;DR: In this article, a review has been presented where supervised learning (SL) and soft computing (SC) techniques used in stress diagnosis have been meticulously investigated to highlight the contributions, strengths, and challenges faced in the implementation of these methods in stress diagnostic models.

42 citations


Journal ArticleDOI
TL;DR: Soft computing-based watermarking approaches providing robustness, imperceptibility, and good embedding capacity are compared systematically, and major issues and potential solutions for soft computing-based watermarking are discussed to encourage further research in this area.
Abstract: Image watermarking techniques are used to provide copyright protection and verify ownership of media/entities. The technique embeds secret data/information of an owner into a given media/entity to resolve any ownership conflicts that may arise. Many watermarking approaches have been proposed by various authors in recent years. However, there are not enough studies and comparisons of watermarking techniques in soft computing environments. Nowadays, soft computing techniques are used to improve the performance of watermarking algorithms. This paper surveys soft computing-based image watermarking for several applications. We first elaborate on novel applications, watermark characteristics, and different kinds of watermarking systems. Then, soft computing-based watermarking approaches providing robustness, imperceptibility, and good embedding capacity are compared systematically. Furthermore, major issues and potential solutions for soft computing-based watermarking are discussed to encourage further research in this area. Thus, this survey will help researchers implement an optimized watermarking scheme for several applications.
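As a toy example of the kind of scheme such techniques tune, the sketch below adds a key-dependent watermark to mid-band DCT coefficients; the embedding strength alpha is exactly the sort of parameter that the surveyed soft computing methods optimize for the robustness/imperceptibility trade-off. The scheme itself is illustrative, not taken from the survey:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(image, key, alpha=2.0, band=slice(8, 16)):
    """Additively embed a pseudo-random watermark in a mid-frequency DCT band."""
    rng = np.random.default_rng(key)              # key-dependent watermark sequence
    coeffs = dctn(image.astype(float), norm="ortho")
    wm = rng.standard_normal(coeffs[band, band].shape)
    coeffs[band, band] += alpha * wm              # larger alpha: more robust, more visible
    return idctn(coeffs, norm="ortho"), wm

img = np.random.default_rng(0).integers(0, 256, (32, 32))  # stand-in for a real image
watermarked, wm = embed(img, key=42)
print(float(np.abs(watermarked - img).max()))     # distortion grows with alpha
```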

40 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed some novel methods of computing correlation between PFSs via the three characteristic parameters of PFSs by incorporating the ideas of Pythagorean fuzzy deviation, variance, and covariance.
Abstract: The Pythagorean fuzzy set (PFS) is an important soft computing tool for curbing embedded vagueness in decision-making. To enhance the applicability of PFSs in modelling practical problems, many computing methods have been studied, among which the correlation coefficient is vital. This paper proposes some novel methods of computing correlation between PFSs via the three characteristic parameters of PFSs by incorporating the ideas of Pythagorean fuzzy deviation, variance, and covariance. These novel methods evaluate the magnitude of the relationship, show the potency of correlation between the PFSs, and also indicate whether the PFSs are related in a positive or negative sense. The proposed techniques are substantiated with some theoretical results and numerically validated to be superior in terms of accuracy and reliability to some existing similar techniques. Decision-making processes involving pattern recognition and career placement problems are resolved with the aid of the proposed techniques.
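A hedged sketch of the deviation/covariance idea (not the paper's exact formulas) follows: each PFS is described by its membership, non-membership, and hesitation degrees, and a Pearson-type coefficient is computed per parameter and averaged, yielding a value in [-1, 1] that signals positive or negative association:

```python
import numpy as np

def pfs_correlation(mu_a, nu_a, mu_b, nu_b):
    """Average per-parameter correlation of two PFSs over a common universe."""
    # Hesitation degree of a PFS: pi = sqrt(1 - mu^2 - nu^2)
    pi = lambda mu, nu: np.sqrt(np.clip(1 - mu**2 - nu**2, 0, None))
    A = np.vstack([mu_a, nu_a, pi(mu_a, nu_a)])   # three characteristic parameters
    B = np.vstack([mu_b, nu_b, pi(mu_b, nu_b)])
    per_param = [np.corrcoef(a, b)[0, 1] for a, b in zip(A, B)]
    return float(np.mean(per_param))              # lies in [-1, 1]

# Two PFSs over a three-element universe (hypothetical grades, mu^2 + nu^2 <= 1)
mu_a, nu_a = np.array([0.9, 0.6, 0.3]), np.array([0.2, 0.5, 0.8])
mu_b, nu_b = np.array([0.8, 0.5, 0.4]), np.array([0.3, 0.6, 0.7])
print(pfs_correlation(mu_a, nu_a, mu_b, nu_b))
```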

37 citations


Journal ArticleDOI
TL;DR: In this paper, a wrapper-based feature selection method called "Tabu Search - Random Forest (TS-RF)" was proposed for Network Intrusion Detection Systems (NIDS) to reduce the dimensionality of the data.

36 citations


Journal ArticleDOI
TL;DR: A novel BPA (basic probability assignment) generation method for binary problems, called the base algorithm, is designed based on kernel density estimation to construct the probability density function models, using the pairwise learning method to establish binary classification pairs.

Journal ArticleDOI
TL;DR: Experiments show that the proposed algorithm achieves the best wirelength optimization and strong stability, especially for the large-scale SMT problem, and thus better satisfies the low-delay demands of IC design under the IEC architecture.

Journal ArticleDOI
TL;DR: The proposed deep swarm-optimized classifier is a feature-boosted DT that learns features using a deep convolutional net and an optimal feature set built with the metaheuristic WSA; it outperforms the other considered algorithms in terms of classification accuracy.
Abstract: To compete in the current data-driven economy, it is essential that industrial manufacturers leverage real-time tangible information assets and embrace big data technologies. Data classification is one of the most prevalent analytical techniques within cognitively capable manufacturing industries for finding patterns in structured and unstructured data at the plant, enterprise, and industry levels. This article presents a cognition-driven analytics model, CNN-WSADT, for real-time data classification using three soft computing techniques, namely, deep learning [convolutional neural network (CNN)], machine learning [decision tree (DT)], and swarm intelligence [wolf search algorithm (WSA)]. The proposed deep swarm-optimized classifier is a feature-boosted DT, which learns features using a deep convolutional net and an optimal feature set built using the metaheuristic WSA. The performance of CNN-WSADT is studied on two benchmark datasets, and the experimental results show that the proposed cognition model outperforms the other considered algorithms in terms of classification accuracy.

Journal ArticleDOI
TL;DR: A novel perspective is offered: the general type-2 fuzzy classifier can be implemented for embedded applications with excellent performance in terms of hardware resource consumption.

Journal ArticleDOI
TL;DR: The interpretability of each feature's predictive power in the credit dataset is strengthened by engaging experts in the credit scoring process, and a wrapper-based feature selection approach is proposed that explores which features contribute most towards the classification of borrowers.

Journal ArticleDOI
TL;DR: In this brief review, the recent progress in two niche applications is presented: neural network accelerators and numerical computing units, mainly focusing on the advances in hardware demonstrations.
Abstract: Memristors are now becoming a prominent candidate to serve as the building blocks of non-von Neumann in-memory computing architectures. By mapping analog numerical matrices into memristor crossbar arrays, efficient multiply-accumulate operations can be performed in a massively parallel fashion through the physical mechanisms of Ohm’s law and Kirchhoff’s law. In this brief review, we present the recent progress in two niche applications: neural network accelerators and numerical computing units, mainly focusing on the advances in hardware demonstrations. The former is regarded as soft computing since it can tolerate some degree of device and array imperfections. The acceleration of multilayer perceptrons, convolutional neural networks, generative adversarial networks, and long short-term memory neural networks is described. The latter is hard computing because solving numerical problems requires high-precision devices. Several breakthroughs in memristive equation solvers with improved computation accuracies are highlighted. Besides, other nonvolatile devices with the capability of analog computing are also briefly introduced. Finally, we conclude the review with discussions on the challenges and opportunities for future research toward realizing memristive analog computing machines.
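The multiply-accumulate principle described above fits in a few lines: row voltages V drive conductances G, and Ohm's law plus Kirchhoff's current law yield column currents proportional to G^T V in one step. The sketch below is a simplified numerical model; the differential conductance mapping and Gaussian device variation are illustrative assumptions, not a specific hardware demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, (4, 3))                 # signed weights to map onto the array

# Differential mapping: each weight is stored as a conductance pair (G+, G-), in siemens
g_max = 1e-4
G_pos, G_neg = np.clip(W, 0, None) * g_max, np.clip(-W, 0, None) * g_max

def crossbar_mac(v_in, noise=0.02):
    """Column currents of a noisy crossbar; the ideal result is (W.T @ v_in) * g_max."""
    noisy = lambda G: G * (1 + noise * rng.standard_normal(G.shape))  # device variation
    return v_in @ noisy(G_pos) - v_in @ noisy(G_neg)                  # Kirchhoff summation

v = np.array([0.1, 0.2, -0.1, 0.3])            # row voltages (volts)
print(crossbar_mac(v) / g_max, W.T @ v)        # analog result vs. ideal digital result
```

The small mismatch between the two printed vectors is exactly the imperfection that neural network workloads (soft computing) tolerate but equation solvers (hard computing) cannot.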

Journal ArticleDOI
TL;DR: Experimental results show that the BPNN attained the most accurate prediction of concrete CS based on both ultrasonic pulse velocity and rebound number values, and these two models have great potential to assist engineers in the design phase of civil engineering projects by estimating the concrete CS with a greater level of accuracy.

Journal ArticleDOI
TL;DR: The results show that, for the presented inventory control models, the proposed algorithm yields better solutions, lower cost, and less CPU consumption than the other algorithms.
Abstract: In present-day markets, it is essential for organizations to manage their supply chains efficiently to sustain market share and improve profitability. Optimized inventory control is an integral part of supply chain management. In inventory control problems, determining the ordering times and the order quantities of products are the two strategic decisions, made either to minimize total costs or to maximize total profits. This paper presents three models of inventory control problems: deterministic single-product, deterministic multi-product, and stochastic single-product. Due to the high computational complexity, the presented models are solved using the Emperor Penguins Colony (EPC) algorithm as a metaheuristic algorithm and a soft computing method. EPC is a recently published metaheuristic algorithm which has not yet been employed to solve the inventory control problem. The results of applying the proposed algorithm to the models are compared with the results obtained by nine state-of-the-art and popular metaheuristic algorithms. To justify the proposed EPC, both cost and runtime criteria are considered, and statistical analysis is used to find significant differences between the results obtained by the algorithms. The results show that the proposed algorithm yields better solutions, lower cost, and less CPU consumption for the presented inventory control models than the other algorithms.
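For orientation, under the standard EOQ assumptions of constant demand and no shortages (an illustrative simplification, not necessarily the paper's exact model), the deterministic single-product case reduces to a one-variable cost function whose closed-form optimum is a handy sanity check for any metaheuristic such as EPC:

```python
import math

def total_cost(Q, demand, order_cost, holding_cost):
    """Annual cost of ordering Q units at a time: setup cost + carrying cost."""
    return demand / Q * order_cost + Q / 2 * holding_cost

# Hypothetical parameters: demand/yr, cost per order, holding cost per unit per yr
D, K, h = 1200.0, 50.0, 2.0
q_star = math.sqrt(2 * D * K / h)            # classical EOQ optimum
print(q_star, total_cost(q_star, D, K, h))   # ~244.9 units, ~$489.9/yr
```

A metaheuristic searching over Q should converge to q_star here; the multi-product and stochastic variants add constraints and randomness that remove the closed form and motivate the metaheuristic approach.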

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed Dempster–Shafer theory-based rough granular description model is reasonable, effective, and robust, and is promising for describing complex data in real-world applications.

Journal ArticleDOI
TL;DR: An efficient and robust soft computing strategy was proposed in which artificial neural networks are hybridized with a genetic algorithm or particle swarm optimization to predict the bond strength in CES structures, showing that the developed GA-ANN and PSO-ANN models outperform both the conventional ANN model and existing empirical equations.

Journal ArticleDOI
TL;DR: The artificial neural network (ANN) technique plays an important role in predicting and optimizing the performance of SAHs; it is very popular due to its fast computing speed and its ability to accurately solve complicated problems that conventional approaches cannot.
Abstract: The solar air heater (SAH) is one of the most commonly used solar energy utilization systems; it collects solar radiation on an absorber plate and transmits the absorbed thermal energy to the flowing air. Researchers have used many techniques to increase the performance of SAHs through experimental examination, but analytical and experimental studies take considerable time and are very costly. To avoid these problems, soft computing techniques are used, among which the artificial neural network (ANN) technique plays an important role in predicting and optimizing the performance of SAHs. This technique is very popular due to its fast computing speed and its ability to accurately solve complicated problems that are not solved by other conventional approaches; its main advantage is that no problem-specific programming code is required. The main purpose of the present work is to review the work related to applications of neural models for performance prediction of SAHs and to identify research gaps for future investigations. The research works reviewed in this paper lead to the conclusion that the ANN is a very efficient technique for performance prediction of SAHs.

Journal ArticleDOI
TL;DR: In this paper, an enhanced version of the Whale Optimization Algorithm (WOA) is proposed by combining it with a single point crossover method, which helps the WOA escape from local optima by enhancing the exploration process.
Abstract: Software fault prediction (SFP) is a challenging process that any successful software should go through to make sure that all software components are free of faults. In general, soft computing and machine learning methods are useful in tackling this problem. The size of fault data is usually huge, since it is obtained by mining software historical repositories, and the data consist of a large number of features (metrics). Determining the most valuable features (i.e., feature selection (FS)) is an excellent way to reduce data dimensionality. In this paper, we propose an enhanced version of the Whale Optimization Algorithm (WOA) by combining it with a single point crossover method. The proposed enhancement helps the WOA escape from local optima by enhancing the exploration process. Five different selection methods are employed: tournament, roulette wheel, linear rank, stochastic universal sampling, and random-based. To evaluate the performance of the proposed enhancement, 17 available SFP datasets are adopted from the PROMISE repository. The analysis shows that the proposed approach outperformed the original WOA and six other state-of-the-art methods, and enhanced the overall performance of the machine learning classifier.
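Two of the ingredients above, the wrapper fitness for a binary feature mask and the single point crossover, can be sketched as follows; the dataset, classifier, and weighting are illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a PROMISE-style dataset: 20 software metrics
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

def fitness(mask, alpha=0.99):
    """Wrapper fitness to minimize: classifier error + penalty on kept features."""
    if not mask.any():
        return 1.0                                   # empty selection is worst
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return alpha * (1 - acc) + (1 - alpha) * mask.mean()

def single_point_crossover(a, b, rng):
    """Combine two binary masks at a random cut point (the exploration boost)."""
    cut = rng.integers(1, a.size)
    return np.concatenate([a[:cut], b[cut:]])

rng = np.random.default_rng(1)
m1, m2 = rng.random(20) < 0.5, rng.random(20) < 0.5  # two random whale positions
child = single_point_crossover(m1, m2, rng)
print(fitness(m1), fitness(child))
```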

Journal ArticleDOI
TL;DR: A comparative study for predicting operational reliability in the automotive manufacturing industry using soft computing and statistical techniques shows that the Adaptive Neuro-Fuzzy Inference System (ANFIS) model yields better results in most cases and can thus be used for predicting operational reliability.

Journal ArticleDOI
TL;DR: This paper proposes a new soft computing (artificial intelligence) model for modeling rock fragmentation with high accuracy, based on a boosted generalized additive model (BGAM) and a firefly algorithm (FFA); the resulting model, called FFA-BGAM, provided the highest accuracy in predicting the size distribution of rock (SDR).
Abstract: This paper proposes a new soft computing model (artificial intelligence model) for modeling rock fragmentation (i.e., the size distribution of rock (SDR)) with high accuracy, based on a boosted generalized additive model (BGAM) and a firefly algorithm (FFA), called FFA-BGAM. Accordingly, the FFA was used as a robust optimization/meta-heuristic algorithm to optimize the BGAM model. A Split-Desktop environment was used to analyze and calculate the size of rock from 136 images, which were captured from 136 blasts. To this end, blast designs were collected and extracted as the input parameters. Subsequently, the proposed FFA-BGAM model was evaluated and compared with previous well-developed soft computing models, such as FFA-ANN (artificial neural network), FFA-ANFIS (adaptive neuro-fuzzy inference system), support vector machine (SVM), Gaussian process regression (GPR), and k-nearest neighbors (KNN), based on three performance indicators (MAE, RMSE, and R²). The results indicated that the new intelligent technique (i.e., FFA-BGAM) provided the highest accuracy in predicting the SDR, with an MAE of 0.920, an RMSE of 1.213, and an R² of 0.980. In contrast, the remaining models (i.e., FFA-ANN, FFA-ANFIS, SVM, GPR, and KNN) yielded lower accuracies in predicting the SDR, i.e., MAEs of 1.248, 1.661, 1.096, 1.573, 1.237; RMSEs of 1.598, 2.068, 1.402, 2.137, 1.717; and R² of 0.967, 0.968, 0.972, 0.940, 0.963, respectively.

Journal ArticleDOI
TL;DR: In this article, a hybrid ensemble machine learning method is implemented for forecasting the rate of penetration (ROP) of a tunnel boring machine (TBM), which is becoming a prerequisite for reliable cost assessment and project scheduling in tunnelling and underground projects in a rock environment.
Abstract: This study implements a hybrid ensemble machine learning method for forecasting the rate of penetration (ROP) of a tunnel boring machine (TBM), which is becoming a prerequisite for reliable cost assessment and project scheduling in tunnelling and underground projects in a rock environment. For this purpose, a total of 185 datasets was collected from the literature and used to predict the ROP of the TBM. Initially, the main dataset was utilised to construct and validate four conventional soft computing (CSC) models, i.e. minimax probability machine regression, relevance vector machine, extreme learning machine, and functional network. Consequently, the estimated outputs of the CSC models were combined and trained using an artificial neural network (ANN) to construct a hybrid ensemble model (HENSM). The outcomes of the proposed HENSM are superior to the other CSC models employed in this study. Based on the experimental results (training RMSE = 0.0283 and testing RMSE = 0.0418), the newly proposed HENSM has the potential to assist engineers in predicting the ROP of TBMs in the design phase of tunnelling and underground projects.
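The stacking idea can be sketched as below, with generic scikit-learn regressors standing in for the four CSC models (MPMR, RVM, ELM, and functional networks are not available in scikit-learn) and a small neural network as the combiner; the data are synthetic placeholders, not the 185 TBM records, and out-of-fold stacking is omitted for brevity:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=185, n_features=6, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: fit the base (stand-in CSC) models independently
base = [SVR(), KernelRidge(alpha=1.0), BayesianRidge()]
for m in base:
    m.fit(X_tr, y_tr)

# Stage 2: an ANN learns to fuse the base outputs into the ensemble prediction
stack = lambda X_: np.column_stack([m.predict(X_) for m in base])
combiner = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
combiner.fit(stack(X_tr), y_tr)

rmse = np.sqrt(np.mean((combiner.predict(stack(X_te)) - y_te) ** 2))
print("ensemble RMSE:", rmse)
```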

Journal ArticleDOI
TL;DR: A hybrid model for classifying faults in power transformers showed high performance and improved fault identification accuracy compared with other soft computing and traditional models.

Journal ArticleDOI
TL;DR: A comprehensive review of research on DSM strategies is presented to identify challenging perspectives for future study, and the use of soft computing techniques such as Fuzzy Logic, Artificial Neural Networks, and Evolutionary Computation to deal with energy consumption minimization and scheduling problems is discussed.
Abstract: The ever-increasing demand for electricity and the rapid increase in the number of automatic electrical appliances have posed a critical energy management challenge for both utilities and consumers. Substantial work has been reported on the Home Energy Management System (HEMS), but to the best of our knowledge there is no single review highlighting all recent and past developments on Demand Side Management (DSM) and HEMS together. Each study addresses user comfort, load scheduling, energy minimization, or the economic dispatch problem. Researchers have proposed different soft computing and optimization techniques to address the challenge, but it still seems to be a pressing issue. This paper presents a comprehensive review of research on DSM strategies to identify the challenging perspectives for future study. We describe DSM strategies and their deployment and communication technologies. The application of soft computing techniques such as Fuzzy Logic (FL), Artificial Neural Networks (ANN), and Evolutionary Computation (EC) to energy consumption minimization and scheduling problems is discussed. Different optimization-based DSM approaches are also reviewed, along with the practical aspects of DSM implementation for smart energy management.

Journal ArticleDOI
TL;DR: This paper further improves CJADE by incorporating a success-intensity-based roulette wheel selection method into it; the resulting algorithm, called SCJADE, shows superiority over its peers in terms of solution quality and convergence speed on the IEEE CEC2017 optimization test suite.

Journal ArticleDOI
TL;DR: The proposed system analysis shows that the Adaptive Neuro-Fuzzy Inference System performs better than the case-based reasoning algorithm for identifying banana diseases at an early stage.
Abstract: Banana production is plagued by numerous disease conditions that inflict large losses on poor farmers. Using modern image processing and soft computing techniques, these diseases can be detected at an earlier stage so that appropriate precautions can be taken to avoid further injury and thereby increase healthy production. This research work identifies banana diseases at an early stage. Through pre-processing, the input image is standardized and a soft coring filter is applied to remove noise. Colour, shape, and texture features are then extracted, followed by classification. Two classification algorithms are used, the Adaptive Neuro-Fuzzy Inference System and case-based reasoning, and fuzzy logic is then used for making the decision. The proposed system was analysed using the Receiver Operating Characteristic (ROC) curve. The analysis shows that the Adaptive Neuro-Fuzzy Inference System performs better than the case-based reasoning algorithm.

Journal ArticleDOI
TL;DR: The proposed method achieved around 75% resource utilization, the highest compared with DHCI and CESCC, and its novel hybridization of machine learning, multi-objective optimization, and soft computing methods offers optimal scheduling and migration processes to balance PMs and VMs.
Abstract: A hybrid of supervised (artificial neural network) and unsupervised (clustering) machine learning and soft computing (interval type 2 fuzzy logic system)-based load balancing algorithm, i.e., the clustering-based multiple objective dynamic load balancing technique (CMODLB), is introduced to balance the cloud load in the present work. Initially, our previously introduced artificial neural network-based dynamic load balancing (ANN-LB) technique is implemented to cluster the virtual machines (VMs) into underloaded and overloaded VMs using the Bayesian optimization-based enhanced K-means (BOEK-means) algorithm. In the second stage, user tasks are scheduled on underloaded VMs to improve load balance and resource utilization. Task scheduling is supported by the multi-objective technique for order preference by similarity to ideal solution combined with particle swarm optimization (TOPSIS-PSO), using different cloud criteria. To realize load balancing among PMs, the VM manager makes VM migration decisions based on two conditions: whether a PM is overloaded, and whether another PM is minimally loaded. The former condition balances load, while the latter minimizes energy consumption in PMs. VM migration is achieved through an interval type 2 fuzzy logic system (IT2FS) whose decisions are based on multiple significant parameters. Experimental results show that the CMODLB method takes 31.067% and 71.6% less completion time than TaPRA and BSO, respectively. It maintains 65.54% and 68.26% less makespan than the MaxMin and round-robin algorithms, respectively. The proposed method achieved around 75% resource utilization, the highest compared with DHCI and CESCC. The novel hybridization of machine learning, multi-objective optimization, and soft computing methods in the proposed algorithm offers optimal scheduling and migration processes to balance PMs and VMs.
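The TOPSIS ranking at the heart of the scheduling stage can be sketched as follows; the criteria, weights, and VM values are illustrative assumptions, and the PSO coupling used in the paper is omitted:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns); benefit marks 'higher is better'."""
    norm = matrix / np.linalg.norm(matrix, axis=0)      # vector-normalize each criterion
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)          # distance to the ideal solution
    d_minus = np.linalg.norm(v - worst, axis=1)         # distance to the worst solution
    return d_minus / (d_plus + d_minus)                 # closeness score in [0, 1]

# Three candidate VMs x three hypothetical criteria: CPU headroom, memory headroom, load
vms = np.array([[0.6, 0.5, 0.3],
                [0.2, 0.7, 0.6],
                [0.8, 0.4, 0.2]])
scores = topsis(vms, weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, False]))  # load is a cost criterion
print(scores.argmax())                                  # best VM for the next task
```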