
Showing papers in "Expert Systems With Applications in 2022"


Journal ArticleDOI
TL;DR: INFO, as mentioned in this paper, is a modified weighted mean method, whereby the weighted mean idea is employed for a solid structure and the vectors' positions are updated using three core procedures: updating rule, vector combining, and a local search.
Abstract: This study presents the analysis and principle of an innovative optimizer named weIghted meaN oF vectOrs (INFO) to optimize different problems. INFO is a modified weighted mean method, whereby the weighted mean idea is employed for a solid structure and the vectors' positions are updated using three core procedures: updating rule, vector combining, and a local search. The updating rule stage is based on a mean-based law and convergence acceleration to generate new vectors. The vector combining stage creates a combination of the obtained vectors with the updating rule to achieve a promising solution. The updating rule and vector combining steps were improved in INFO to increase the exploration and exploitation capacities. Moreover, the local search stage helps this algorithm escape low-accuracy solutions and improve exploitation and convergence. The performance of INFO was evaluated on 48 mathematical test functions and five constrained engineering test cases, including the optimal design of 10-reservoir and 4-reservoir systems. According to the literature, the results demonstrate that INFO outperforms other basic and advanced methods in terms of exploration and exploitation. In the case of the engineering problems, the results indicate that INFO can converge to 0.99% of the global optimum solution. Hence, the INFO algorithm is a promising tool for optimal designs in optimization problems, which stems from its considerable efficiency in optimizing constrained cases. The source code of the INFO algorithm is publicly available at https://imanahmadianfar.com and https://aliasgharheidari.com/INFO.html.
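
For intuition, the core weighted-mean idea can be sketched in a few lines of Python. This is a toy illustration only, not the paper's exact three-procedure update rule; the step size and noise scale below are arbitrary assumptions:

```python
import numpy as np

def weighted_mean_step(population, fitness, rng, step=0.5, noise=0.1):
    # Better vectors (lower fitness) receive larger weights.
    w = np.exp(-(fitness - fitness.min()))
    w /= w.sum()
    mean_vec = (w[:, None] * population).sum(axis=0)  # weighted mean of vectors
    # Each vector drifts toward the weighted mean, plus a random perturbation.
    return population + step * (mean_vec - population) \
        + noise * rng.standard_normal(population.shape)

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 2))
sphere = lambda x: (x ** 2).sum(axis=1)  # toy objective to minimize
for _ in range(100):
    pop = weighted_mean_step(pop, sphere(pop), rng)
print(sphere(pop).min())  # should approach 0
```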

223 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide readers with a review of publications which lie within the intersection of Industry 4.0, Big Data (BD), and healthcare operations and give future perspectives.
Abstract: The innovative technologies that emerged with the industrial revolution "Industry 4.0", as well as the new ones on the way to advanced digitalization, enable delivering enhanced, value-added and cost-effective manufacturing and service operations. One of the first areas of focus for Industry 4.0 applications is operations related to healthcare services. Effective management of healthcare resources, clinical care processes, service planning, delivery and evaluation of healthcare operations are essential for a well-functioning healthcare system. With the adoption of technologies such as the Internet of Health Things, Medical Cyber-Physical Systems, Machine Learning, and Big Data (BD), the healthcare sector has recognized the relevance of Industry 4.0. The concept of BD has offered numerous advantages and opportunities in this field; it has changed the way information is gathered, shared and utilized. Hence, our main ambition in this study is to provide readers with a review of publications which lie within the intersection of Industry 4.0, BD, and healthcare operations and to give future perspectives. Our review shows that BD occupies an important place among the technologies Industry 4.0 provides in the healthcare domain: it helps design, improve, analyze, assess and optimize operations in the domain.

52 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a novel fuzzy MULTIMOORA-based method for sustainable supplier selection, which obtains weights by both group Best Worst Method (subjective method) and fuzzy Shannon Entropy Method (objective method), respectively.
Abstract: With the increasing awareness of environmental protection and social responsibility, sustainable supplier selection (SSS) has been receiving more and more attention. However, traditional methods for selecting sustainable suppliers suffer from three main drawbacks: (i) Few existing studies have determined criteria weights both subjectively and objectively in a reasonable and simultaneous manner. Moreover, among these limited studies, the existing methods for combining subjectively and objectively calculated weights provide only a poor degree of differentiation. (ii) Although the MULTIMOORA method has high potential to solve the complex SSS problem, few researchers have given it much attention. (iii) The current Reference Point Approach within MULTIMOORA is not comprehensive, as it only considers the distance between the alternative and the positive ideal point. Together, these drawbacks result in both inefficiency and ineffectiveness in SSS. This paper proposes a novel fuzzy MULTIMOORA-based method for SSS. The proposed method first obtains weights by the group Best Worst Method (subjective) and the fuzzy Shannon Entropy Method (objective). The two types of weights are then combined by the Deviation Maximization method, reasonably and effectively. Finally, the paper introduces an improved fuzzy MULTIMOORA method to rank alternative suppliers, which considers both the minimum distance from the positive ideal point and the maximum distance from the negative ideal point. The practicability and effectiveness of the proposed method are verified in an illustrative application in Company L, a well-known international forklift truck manufacturer in China.
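
For readers new to objective weighting, the crisp Shannon entropy method is easy to sketch. The paper's fuzzy variant and its Deviation Maximization combination step are more elaborate, so the convex combination below is only a simplified stand-in, and all numbers are hypothetical:

```python
import numpy as np

def entropy_weights(X):
    # X: suppliers x criteria decision matrix (crisp scores).
    P = np.clip(X / X.sum(axis=0), 1e-12, None)         # column proportions
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy per criterion
    d = 1 - e                                           # degree of diversification
    return d / d.sum()

X = np.array([[7., 5., 9.],
              [6., 8., 7.],
              [8., 6., 6.]])
w_obj = entropy_weights(X)
w_sub = np.array([0.5, 0.3, 0.2])    # e.g. from a (group) Best Worst Method
lam = 0.5                            # simple convex combination, not the
w = lam * w_sub + (1 - lam) * w_obj  # paper's Deviation Maximization method
print(w.round(3))
```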

52 citations


Journal ArticleDOI
TL;DR: In this paper, a tri-level attribute reduction framework is proposed to enrich three-way granular computing, and two approaches are proposed for constructing a specific reduct; the tri-level reducts are unified by tri-level consistency.
Abstract: Attribute reduction serves as a pivotal topic of rough set theory for data analysis. The ideas of tri-level thinking from three-way decision can shed new light on three-level attribute reduction. Existing classification-specific and class-specific attribute reducts consider only the macro-top and meso-middle levels. This paper introduces a micro-bottom level of object-specific reducts. The existing two types of reducts apply to the global classification with all objects and a local class with partial objects, respectively; the new type applies to an individual object. These three types of reducts constitute tri-level attribute reducts, and their development and hierarchy are worthy of systematic exploration. Firstly, object-specific reducts are defined by object consistency from dependency, and they improve both classification-specific and class-specific reducts. Secondly, tri-level reducts are unified by tri-level consistency. Hierarchical relationships between object-specific reducts and class-specific, classification-specific reducts are analyzed, and relevant connections to three-way classifications of attributes are given. Finally, tri-level reducts are systematically analyzed, and two approaches, i.e., direct calculation and hierarchical transition, are suggested for constructing a specific reduct. We build a framework of tri-level thinking and analysis of attribute reduction to enrich three-way granular computing. Tri-level reducts lead to the sequential development and hierarchical deepening of attribute reduction, and their results benefit intelligent processing and system reasoning.
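
The object-level (micro-bottom) notion of consistency on which object-specific reducts rest can be made concrete. Below is a toy check, over a hypothetical three-object decision table, that marks an object consistent when every object sharing its condition-attribute values also shares its decision:

```python
from collections import defaultdict

def consistent_objects(table, attrs, decision):
    # Group decisions by the tuple of condition-attribute values.
    groups = defaultdict(set)
    for row in table:
        groups[tuple(row[a] for a in attrs)].add(row[decision])
    # An object is consistent iff its group carries a single decision.
    return [i for i, row in enumerate(table)
            if len(groups[tuple(row[a] for a in attrs)]) == 1]

data = [{"a": 0, "b": 1, "d": "yes"},
        {"a": 0, "b": 1, "d": "no"},   # conflicts with object 0
        {"a": 1, "b": 0, "d": "yes"}]
print(consistent_objects(data, ["a", "b"], "d"))  # -> [2]
```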

46 citations


Journal ArticleDOI
TL;DR: In this article, the authors presented a custom framework for detecting fire using transfer learning with state-of-the-art CNNs trained over real-world fire breakout images and used the Grad-CAM method for the visualization and localization of fire in the images.
Abstract: Fire is a severe natural calamity that causes significant harm to human lives and the environment. Recent works have proposed the use of computer vision for developing a cost-effective automated fire detection system. This paper presents a custom framework for detecting fire using transfer learning with state-of-the-art CNNs trained on real-world fire breakout images. The framework also uses the Grad-CAM method for the visualization and localization of fire in the images. The model further uses an attention mechanism that significantly assisted the network in achieving better performance. Grad-CAM results showed that the proposed use of attention led the model towards better localization of fire in the images. Among the plethora of models explored, EfficientNetB0 emerged as the best-suited network choice for the problem. For the selected real-world fire image dataset, a test accuracy of 95.40% strongly supports the model's efficiency in detecting fire from the presented image samples. Also, a very high recall of 97.61% indicates that the model has negligible false negatives, suggesting the network is reliable for fire detection.
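
A minimal Keras sketch of the transfer-learning setup might look as follows; the attention mechanism and Grad-CAM step from the paper are omitted, and the input size, head layers and training settings are assumptions rather than the authors' exact configuration:

```python
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet backbone for transfer learning

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # fire probability

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets
```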

43 citations


Journal ArticleDOI
TL;DR: In this paper , a fuzzy bi-level decision support system (DSS) is proposed to optimize a sustainable multi-level multi-product supply chain (SC) and co-modal transportation network for perishable products distribution.
Abstract: This study introduces a fuzzy bi-level Decision Support System (DSS) to optimize a sustainable multi-level multi-product Supply Chain (SC) and co-modal transportation network for perishable products distribution. To this end, two integrated multi-objective Mixed Integer Linear Programming (MILP) models are proposed to formulate the problem. On-time delivery is taken into account as the main factor that determines model performance due to the perishability of the products. After the SC network design is optimized in the first level of the proposed DSS, the transportation network configuration is optimized in the second level, considering different modes and options. To contribute to the literature, mainly by addressing uncertainty and perishability, a hybrid solution technique based on possibilistic linear programming and a Fuzzy Weighted Goal Programming (FWGP) approach is developed to accommodate the suggested bi-level model. This technique can deal with problem uncertainty while also ensuring the sustainability of the overall system. The Lp-metric method is implemented, along with three well-known quality indicators, to assess the performance of the proposed solution method and the quality of the obtained solutions. Finally, three illustrative numerical examples are solved using the CPLEX solver to showcase the applicability of the proposed methodology and discuss the complexity of the model. Results demonstrate the efficiency of the proposed methodology in finding optimal solutions compared to the Lp-metric method; it is able to treat a problem with more than 2.2 million variables and 1.3 million constraints in 1093.08 s.
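
For context, the Lp-metric method used as the benchmark scalarizes the objectives by their normalized distance to the individual optima; a standard compromise-programming form (the paper's exact normalization may differ) is:

```latex
L_p(x) = \left[ \sum_{i=1}^{k} w_i \left( \frac{f_i(x) - f_i^{*}}{f_i^{\max} - f_i^{*}} \right)^{p} \right]^{1/p}
```

where f_i^* and f_i^max are the best and worst values of objective i, w_i is its weight, and p = 1, 2 or infinity are common choices.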

41 citations


Journal ArticleDOI
TL;DR: In this paper, a multi-objective version of the slime mould algorithm (SMA), which was originally proposed to solve single-objective optimization problems, is presented; the performance of the proposed MOSMA was validated on the CEC'20 multi-objective benchmark test functions.
Abstract: Recently, the Slime mould algorithm (SMA) was proposed to solve single-objective optimization problems and is considered a strong algorithm for its efficient global search capability. This paper presents a multi-objective optimization algorithm based on the SMA, called multi-objective SMA (MOSMA). An external archive is utilized with the SMA to store the Pareto optimal solutions obtained; the archive is applied to emulate the social behaviour of the slime mould in the multi-objective search space. The performance of MOSMA is validated on the CEC'20 multi-objective benchmark test functions. Furthermore, eight well-known constrained and unconstrained test cases and four constrained engineering design problems are tested to demonstrate MOSMA's superiority. Moreover, the real-world multi-objective optimization of a helical coil spring for an automotive application is solved to depict the reliability of the presented MOSMA in solving real-world problems. On the statistical side, the Wilcoxon test and performance indicators are used to assess the effectiveness of MOSMA against six well-known and robust optimization algorithms: multi-objective grey wolf optimizer (MOGWO), multi-objective particle swarm optimization (MOPSO), multi-objective salp swarm algorithm (MSSA), non-dominated sorting genetic algorithm version 2 (NSGA-II), multi-objective whale optimization algorithm (MOWOA) and strength Pareto evolutionary algorithm 2 (SPEA2). The overall simulation results reveal that the proposed MOSMA provides better solutions than the other algorithms in terms of the Pareto sets proximity (PSP) and inverted generational distance in decision space (IGDX) indicators.
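
The external-archive bookkeeping at the heart of MOSMA is generic to archive-based multi-objective metaheuristics and easy to sketch (minimization assumed; MOSMA's slime mould moves themselves are not shown):

```python
import numpy as np

def dominates(f1, f2):
    # f1 Pareto-dominates f2: no worse everywhere, strictly better somewhere.
    return np.all(f1 <= f2) and np.any(f1 < f2)

def update_archive(archive, candidate):
    if any(dominates(a, candidate) for a in archive):
        return archive                      # candidate is dominated, discard
    # Keep only members the candidate does not dominate, then add it.
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return archive

archive = []
for f in [np.array([1.0, 5.0]), np.array([2.0, 2.0]), np.array([1.5, 1.5])]:
    archive = update_archive(archive, f)
print(archive)  # (2, 2) is dropped once (1.5, 1.5) arrives
```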

38 citations


Journal ArticleDOI
TL;DR: In this article, the authors present their IoT system for driving support, which uses a type-2 fuzzy logic control module; it was tested in different cars by driving on various roads, and the results show excellent efficiency.
Abstract: Advanced models of Artificial Intelligence enable IoT systems to work with great flexibility to meet the needs of users. In this article we present our developed IoT system for driving support based on a type-2 fuzzy logic control module. We developed the IoT system to collect data about driving conditions and evaluate them, adjusting to the needs of the user. The applied type-2 fuzzy logic module was used in the analysis of accelerometer signals to flexibly adjust to the uncertainty in evaluating the driving expectations of each driver. Our developed system was tested in different cars by driving on various roads, and the results show excellent efficiency.
• Developed smart modules working in a novel AIoT system for a car diagnostic model.
• Type-2 fuzzy control to support driving and adjust to the driving style of different drivers.
• Real-world experiments with results showing the efficiency of the developed system.

38 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a novel recommendation method which incorporates temporal reliability and confidence measures into the recommendation process and evaluated the quality of the predictions using a temporal reliability measure taking into account the changes of users' preferences over time.
Abstract: Recommender systems use intelligent algorithms to learn a user's preferences and provide them with relevant suggestions. Lack of sufficient ratings, also known as the data sparsity problem, often results in poor recommendation performance. Existing recommendation methods have mainly focused on designing recommenders with high accuracy without paying much attention to the reliability of the recommendations. On the other hand, users' preferences may vary over time, and considering the time factor in the design process is crucial, which has been largely ignored in most existing recommenders. To deal with these issues, a novel recommendation method is proposed in this paper which incorporates temporal reliability and confidence measures into the recommendation process. First, the effectiveness of the users' ratings is measured using a probabilistic approach, and ineffective rating profiles are enriched by adding some implicit ratings to them. The quality of the predictions is evaluated using a temporal reliability measure taking into account the changes of users' preferences over time. Then, the ratings with low reliability values are recalculated using a novel procedure which updates the target user's neighborhood by removing ineffective users. This leads to a temporal confidence measure that is used to update the neighborhood to provide more reliable and accurate recommendations. The superiority of the proposed method over state-of-the-art recommendation methods is shown by conducting extensive experiments on three benchmark datasets.
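
As a minimal sketch of the temporal idea, an exponential recency decay can stand in for the paper's more elaborate reliability and confidence measures (the half-life and all ratings below are hypothetical):

```python
import numpy as np

def time_decayed_prediction(ratings, times, sims, now, half_life=180.0):
    # Discount each neighbor's rating by its age (in days).
    age = now - np.asarray(times, dtype=float)
    decay = 0.5 ** (age / half_life)        # exponential half-life decay
    w = np.asarray(sims) * decay            # similarity weighted by recency
    return float(np.dot(w, ratings) / w.sum())

# Three neighbors rated the item 5, 3 and 4 at days 10, 300 and 600.
print(time_decayed_prediction([5, 3, 4], [10, 300, 600],
                              sims=[0.9, 0.8, 0.7], now=610))
```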

37 citations


Journal ArticleDOI
TL;DR: In this article, three continuous review economic order quantity models for time-dependent deterioration using preservation technology were developed for a finite time horizon, incorporating promotional effort and full backorder, and the optimal solutions were derived for the number of orders, the preservation technology cost and the fraction of a cycle with positive stock.
Abstract: This paper studies three continuous review economic order quantity models for time-dependent deterioration using preservation technology. First, a crisp model is developed; the model is then extended into a fuzzy model to include the imprecise nature of demand, and further extended to analyze the impact of the learning effect under the fuzzy environment. All models are developed for a finite time horizon, incorporating promotional effort and full backorder. The optimal solutions are derived for the number of orders, the preservation technology cost, and the fraction of a cycle with positive stock. Three algorithms are developed to find the optimal solution for the three models. Numerical analysis is performed to demonstrate the application, followed by a sensitivity analysis of the important parameters. The crisp model leads to the lowest total cost, followed by the fuzzy-learning and fuzzy models. Even though the optimal number of orders is the same for the three models, the order quantity is larger for the fuzzy model and smaller for the crisp model. The order quantity increases step-wise with an increase in the preservation factor.
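
As grounding, the classical economic order quantity that these models generalize, assuming constant demand D, fixed ordering cost K, holding cost h, and no deterioration, fuzziness or backorders, is:

```latex
Q^{*} = \sqrt{\frac{2KD}{h}}
```

For instance, D = 1000 units/year, K = 50 and h = 2 give Q* = sqrt(50000), i.e. about 224 units per order; the paper's models add deterioration, preservation cost and fuzzy demand on top of this baseline.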

35 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a novel decomposition-ensemble model to predict the gold price more accurately, where the original gold prices are decomposed into sublayers with different frequencies by the improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN).
Abstract:
• Adopt a novel hybrid model with frequency decomposition for gold price prediction.
• Use the improved CEEMDAN (ICEEMDAN) to improve the prediction performance.
• Pass the inspection of standard measurement and the MCS test.
• Show remarkable superiority in forecasting accuracy over compared models.
Gold price has always played an important role in the world economy and finance. In order to predict the gold price more accurately, this paper proposes a novel decomposition-ensemble model. Firstly, the original gold prices are decomposed into sublayers with different frequencies by the improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN). Secondly, a long short-term memory, convolutional neural network, and convolutional block attention module (LSTM-CNN-CBAM) network jointly forecasts all sublayers. Finally, the predictions of the sublayers are reconstructed into the final predicted results by summation. The proposed model captures the essence of the sequence effectively through the ICEEMDAN algorithm, extracts the long-term effects of the gold price with LSTM, mines the deep features of the gold price data with CNN, and improves the feature extraction ability of the network through CBAM. Experiments show that the cooperation among LSTM, CNN and CBAM strengthens the modeling ability and improves the prediction accuracy. Moreover, the decomposition algorithm ICEEMDAN further increases the forecast precision, and its prediction effect is better than that of other decomposition methods. Overall, the novel ICEEMDAN-LSTM-CNN-CBAM (ILCC) model enhances the prediction accuracy and outperforms other related comparative models.
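
A skeleton of the decomposition-ensemble pattern is sketched below, assuming the EMD-signal package's plain CEEMDAN as a stand-in for the improved variant (ICEEMDAN) and a trivial persistence "forecaster" in place of the LSTM-CNN-CBAM sublayer model:

```python
import numpy as np
from PyEMD import CEEMDAN  # pip install EMD-signal

def decompose_forecast_sum(series, forecast_fn):
    # Decompose into IMF sublayers, forecast each, and sum the forecasts.
    imfs = CEEMDAN()(series)          # shape: (n_imfs, n_samples)
    return sum(forecast_fn(imf) for imf in imfs)

# Toy series and a persistence 'forecast' (last observed value per sublayer).
rng = np.random.default_rng(0)
prices = np.sin(np.linspace(0, 20, 400)) + 0.01 * rng.normal(size=400)
print(decompose_forecast_sum(prices, lambda imf: imf[-1]))
```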

Journal ArticleDOI
TL;DR: In this article, a fractional-order bank data model incorporating two unequal time delays is proposed, and the role of time delay in stabilizing the system and controlling the generation of Hopf bifurcation is sufficiently displayed.
Abstract: In order to reveal the change law of bank data and manage banks effectively, building mathematical models is a very effective approach. In the present study, we set up a novel fractional-order bank data model incorporating two unequal time delays. Firstly, we discuss the existence and uniqueness, non-negativeness, and boundedness of the solution to the established bank data model by virtue of the contraction mapping theorem, mathematical analysis techniques, and the construction of an appropriate function, respectively. Secondly, the stability and the creation of Hopf bifurcation are investigated via the stability criterion and bifurcation principle of fractional-order differential equations; five new delay-independent stability conditions and bifurcation criteria ensuring the stability behavior and the onset of Hopf bifurcation of the involved bank data model are established. Furthermore, the role of time delay in stabilizing the system and controlling the generation of Hopf bifurcation is sufficiently displayed. Thirdly, the global stability of the considered fractional-order bank data model is systematically explored. Fourthly, the Hopf bifurcation control issue of the fractional-order bank data model is studied via a PDξ controller. Finally, computer simulations are executed to verify the established primary results. The derived conclusions of this study are innovative and possess important theoretical guiding significance in maintaining the good operation of banks.

Journal ArticleDOI
Wang Zhichao, Yan Ran, Yifan Chen, Xin Yang, Genbao Zhang
TL;DR: In this article, probabilistic hesitant fuzzy linguistic term sets (PHFLTSs) are used to implement risk assessment of failure modes by a panel of specialists. The subjective and objective weights of risk factors are garnered by the best-worst method (BWM) and the maximizing deviation method (MDM) separately, and their integrated weights are incorporated into the technique for order preference by similarity to ideal solution (TOPSIS) to obtain the risk ranking of failure modes.
Abstract: Failure mode and effects analysis (FMEA) usually requires multi-domain specialists to implement group risk assessment for identifying and eliminating system failures. Therefore, this paper combines several multi-criteria decision making (MCDM) techniques with probabilistic hesitant fuzzy linguistic term sets (PHFLTSs) to implement risk assessment of failure modes by a panel of specialists. It aims at overcoming some defects of conventional FMEA, such as the lack of epistemic uncertainty modeling and group risk assessment, as well as some problems arising from the risk priority number (RPN). Group members utilize PHFLTSs to express their subjective uncertain risk assessments of failure modes, in which social network analysis (SNA) and the maximizing consensus method (MCM) are exploited to derive the subjective and objective weights of group members respectively; their integrated weights are then employed to aggregate individual risk assessments into the collective risk assessment. Additionally, the subjective and objective weights of risk factors are garnered by the best-worst method (BWM) and the maximizing deviation method (MDM) separately, from which their integrated weights are incorporated into the technique for order preference by similarity to ideal solution (TOPSIS) so as to obtain the risk ranking of failure modes. Finally, an example with sensitivity and comparative analyses is presented to demonstrate the effectiveness of the proposed approach.
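
The final ranking step is classical TOPSIS, which is compact enough to sketch in crisp form; the paper wraps it in probabilistic hesitant fuzzy linguistic assessments and BWM/MDM weighting, omitted here, and all numbers below are toy values:

```python
import numpy as np

def topsis(X, w, benefit):
    # X: alternatives x criteria; w: weights; benefit: True if larger is better.
    V = (X / np.linalg.norm(X, axis=0)) * w        # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness coefficient

X = np.array([[7., 9., 9.],    # failure modes x risk factors
              [8., 7., 8.],
              [9., 6., 8.]])
w = np.array([0.4, 0.35, 0.25])
print(topsis(X, w, benefit=np.array([True, True, True])).round(3))
```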

Journal ArticleDOI
TL;DR: In this paper, the authors present a CNN model for classification of ECG arrhythmias using a hybrid model based on a modified version of the Marine Predators Algorithm (MPA) and a CNN, known as IMPA-CNN.
Abstract: Preparation of Convolutional Neural Networks (CNNs) for classification purposes depends heavily on knowledge of hyper-parameter tuning. This study aims, in particular for the task of automated electrocardiogram (ECG) classification, to minimize user variability in CNN training by searching for and optimizing the CNN parameters automatically. In clinical practice, ECG classification analysis is restricted by existing models' configurations; the hyper-parameters of the CNN model should be adjusted for the ECG classification problem, and the best hyper-parameter configuration is selected for its impact on the resulting model. Deep knowledge of deep learning algorithms and suitable optimization techniques is also needed. Although there are many strategies for automated optimization, they exhibit different benefits and disadvantages when applied to the ECG classification problem. Here we present a CNN model for classification of non-ectopic (N), ventricular ectopic (V), supraventricular ectopic (S), and fusion (F) ECG arrhythmias using a hybrid model based on a modified version of the Marine Predators Algorithm (MPA) and a CNN, known as IMPA-CNN. The proposed model incorporates feature extraction techniques for the major features and thus outperforms other current classification models by automatically selecting the best hyper-parameter configuration of the CNN model. To reduce the time and computational complexity, optimal characteristics are extracted directly from the raw signal using the 1D local binary pattern, higher-order statistics, central moments, the Hermite basis function, the discrete wavelet transform, and R-R intervals. Then, a modified version of the MPA algorithm is used to select appropriate hyper-parameters for the CNN model: the initial learning rate, which is one of the major hyper-parameters affecting output performance; the optimizer type, which can be set to stochastic gradient descent (SGD), adaptive moment estimation (Adam), or root mean square propagation (RMSprop); the activation function used for modeling non-linear functions, set to the Rectified Linear Unit (ReLU) or sigmoid; and other hyper-parameters related to the optimization and training process of the CNN model. Several available optimization algorithms for the hyper-parameter optimization problem are considered. In addition, experiments with well-known datasets such as the MIT-BIH arrhythmia, European ST-T, and St. Petersburg INCART databases are carried out to compare the performance of various optimization approaches and to provide a practical illustration of hyper-parameter optimization for the proposed CNN model.
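
The shape of such a tuning loop is easy to illustrate. Below, plain random search stands in for the modified Marine Predators Algorithm, over a hypothetical search space mirroring the hyper-parameters named above:

```python
import random

SPACE = {  # hypothetical search space
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "optimizer": ["sgd", "adam", "rmsprop"],
    "activation": ["relu", "sigmoid"],
    "batch_size": [32, 64, 128],
}

def tune(evaluate, budget=20, seed=0):
    # `evaluate` trains/validates a CNN with the config, returns accuracy.
    rng = random.Random(seed)
    best_cfg, best_acc = None, -1.0
    for _ in range(budget):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

# Toy objective that merely prefers Adam with a mid-range learning rate.
toy = lambda c: (c["optimizer"] == "adam") + 1.0 / (1 + abs(c["learning_rate"] - 1e-3))
print(tune(toy))
```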

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a heuristic method to build a recommender engine in IoT environment exploiting swarm intelligence techniques, where smart objects are represented using real-valued vectors obtained through the Doc2Vec model, a word embedding technique able to capture the semantic context representing documents and sentences with dense vectors.
Abstract: In smart environments, traditional information management approaches are often unsuitable for the required elaborations due to the number and the high dynamicity of the entities involved. Smart objects (enhanced devices or IoT services belonging to a smart system) interact and maintain relations which need effective and efficient selection/filtering mechanisms to better meet users' requirements. Recommender systems provide useful and customized information, properly selected and filtered, for users and services. This paper proposes a heuristic method to build a recommender engine in an IoT environment exploiting swarm intelligence techniques. Smart objects are represented using real-valued vectors obtained through the Doc2Vec model, a word embedding technique able to capture the semantic context, representing documents and sentences with dense vectors. The vectors are associated with mobile agents that move in a virtual 2D space following a bio-inspired model, the flocking model, in which agents perform simple and local operations autonomously, obtaining a global intelligent organization. A similarity rule, based on the assigned vectors, was designed to enable agents to discriminate among themselves, so that a closer positioning (clustering) of only similar agents is achieved. The intelligent positioning allows easy identification of similar smart objects, thus enabling fast and effective selection operations. Experimental evaluations demonstrate the validity of the approach and show that the proposed methodology yields an increase in performance of about 50%, in terms of clustering quality and relevance, compared to other existing approaches.
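
A minimal gensim sketch of the Doc2Vec embedding step follows; the smart-object descriptions are hypothetical, and the flocking-based agent positioning built on top of these vectors is not reproduced:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

descriptions = {  # hypothetical smart-object descriptions
    "thermostat-1": "indoor temperature sensing and heating control",
    "thermostat-2": "temperature sensor reporting room heating data",
    "camera-7": "video surveillance camera with motion detection",
}
corpus = [TaggedDocument(words=text.split(), tags=[name])
          for name, text in descriptions.items()]

model = Doc2Vec(vector_size=32, min_count=1, epochs=100, seed=1)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Dense vectors let similar smart objects be found (and, in the paper, let
# flocking agents cluster); with a corpus this tiny the similarities are
# noisy, but the API shape is the point.
print(model.dv.most_similar("thermostat-1"))
```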

Journal ArticleDOI
TL;DR: In this paper , a novel approach is proposed to perform fault diagnosis in rotating equipment based on permutation entropy, signal processing, and artificial intelligence, which allows for the automatic selection of a frequency band that includes the characteristic resonance frequency of the fault.
Abstract: Rotating equipment is considered a key component in several industrial sectors. In fact, the continuous operation of many industrial machines, such as sub-sea pumps and gas turbines, relies on the correct performance of their rotating equipment. In order to reduce the probability of malfunctions in this equipment, condition monitoring and fault diagnosis systems are essential. In this work, a novel approach is proposed to perform fault diagnosis in rotating equipment based on permutation entropy, signal processing, and artificial intelligence. To that aim, vibration signals are employed as an indication of bearing performance. To facilitate fault diagnosis, fault detection and isolation are performed in two separate steps. First, once a vibration signal is received, the faulty state of the bearing is determined by permutation entropy. If a faulty state is detected, the fault type is determined using an approach based on signal processing and artificial intelligence. Wavelet packet transform and envelope analysis of the vibration signals are utilized to extract the frequency components of the fault. The proposed approach allows for the automatic selection of a frequency band that includes the characteristic resonance frequency of the fault, which is subject to change under different operational conditions. The method works by extracting the proper features of the signals, which are used to decide about the faulty bearing's condition by a multi-output adaptive neuro-fuzzy inference system classifier. The effectiveness of the approach is assessed on the Case Western Reserve University dataset: the analysis demonstrates the proposed method's capability to accurately diagnose faults in rotating equipment as compared to existing approaches.
• Novel approach combining permutation entropy and MANFIS to diagnose bearing faults.
• The approach is automated and its performance is not sensitive to imbalanced data.
• The approach allows automatic selection of defect frequency bands.
• The approach combines higher accuracy with more efficient implementation compared to other methods.
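
Permutation entropy, the detection statistic used in the first step, is compact enough to sketch directly from the Bandt-Pompe definition (the order and delay values below are typical choices, not necessarily the paper's):

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    # Count ordinal patterns (rankings) of `order` samples spaced by `delay`.
    x = np.asarray(x)
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i : i + order * delay : delay]
        counts[tuple(np.argsort(window))] += 1
    probs = np.array([c for c in counts.values() if c > 0]) / n
    # Normalized to [0, 1]: ~1 for noise-like, lower for regular signals.
    return float(-(probs * np.log(probs)).sum() / math.log(math.factorial(order)))

t = np.linspace(0, 10, 2000)
print(permutation_entropy(np.sin(2 * np.pi * 5 * t)))                    # low
print(permutation_entropy(np.random.default_rng(0).normal(size=2000)))  # near 1
```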

Journal ArticleDOI
TL;DR: In this paper, a novel framework for feature selection that relies on boosting, or sample re-weighting, to select sets of informative features in classification problems is proposed, which uses as its basis the feature rankings derived from fast and scalable tree-boosting models, such as XGBoost.
Abstract: As dimensions of datasets in predictive modelling continue to grow, feature selection becomes increasingly practical. Datasets with complex feature interactions and high levels of redundancy still present a challenge to existing feature selection methods. We propose a novel framework for feature selection that relies on boosting, or sample re-weighting, to select sets of informative features in classification problems. The method uses as its basis the feature rankings derived from fast and scalable tree-boosting models, such as XGBoost. We compare the proposed method to standard feature selection algorithms on 9 benchmark datasets. We show that the proposed approach reaches higher accuracies with fewer features on most of the tested datasets, and that the selected features have lower redundancy.

Journal ArticleDOI
TL;DR: Deep learning methods have achieved significant results in various fields, as discussed by the authors, and many researchers have used deep learning algorithms in medical analyses, using multimodal data to achieve more accurate results.
Abstract: Deep learning methods have achieved significant results in various fields. Due to the success of these methods, many researchers have used deep learning algorithms in medical analyses. Using multimodal data to achieve more accurate results is a successful strategy because multimodal data provide complementary information. This paper first introduces the most popular modalities, fusion strategies, and deep learning architectures. We also explain learning strategies, including transfer learning, end-to-end learning, and multitask learning. Then, we give an overview of deep learning methods for multimodal medical data analysis. We have focused on articles published over the last four years. We end with a summary of the current state-of-the-art, common problems, and directions for future research.

Journal ArticleDOI
TL;DR: Bakurov et al. as mentioned in this paper revisited the Structural Similarity Index (SSIM) from a data-driven perspective.
Abstract: Bakurov, I., Buzzelli, M., Schettini, R., Castelli, M., & Vanneschi, L. (2021). Structural similarity index (SSIM) revisited: A data-driven approach. Expert Systems with Applications, 116087. https://doi.org/10.1016/j.eswa.2021.116087

Journal ArticleDOI
TL;DR: In this paper, a novel bio-inspired algorithm called the Orca Predation Algorithm (OPA) is proposed, which simulates the hunting behavior of orcas and abstracts it into several mathematical models, including the driving, encircling and attacking of prey.
Abstract: A novel bio-inspired algorithm called Orca Predation Algorithm (OPA) is proposed in this paper. OPA simulates the hunting behavior of orcas and abstracts it into several mathematical models: including driving, encircling and attacking of prey. The algorithm assigns different weights to the phases of prey driving and encircling through parameter adjustment to balance the exploitation and exploration stages of the algorithm. In the attacking phase, after considering the positions of several superior orcas and some randomly selected orcas, the optimal solution can be approached without losing the diversity of the particles. In order to estimate the performance of OPA, 67 unconstrained benchmark functions were first employed, and then the efficiency of the algorithm was further evaluated on five constrained engineering optimization problems. Besides, the computational complexity, parameter sensitivity and four qualitative metrics of OPA were analyzed to evaluate the applicability of the algorithm. The experimental results demonstrate that OPA can generate more promising results with superior performance relative to other test algorithms on different search landscapes.


Journal ArticleDOI
TL;DR: A blockchain-based solution is developed for local cargo networks using UHF-RFID, Internet of Things sensors and Ethereum-based smart contracts to provide a fast shipping management architecture that ensures security between the parties.
Abstract: Today, the continuous growth of electronic commerce volume increases the importance of point-to-point shipment transportation. For this reason, it is extremely important that shipment management systems provide effective, efficient and fast service. A study of the literature shows that the improvements made in the field of transportation have generally been aimed at global logistics networks. In this study, unlike the existing literature, a blockchain-based solution is developed for local cargo networks. With this approach, using Ultra High Frequency RFID (UHF-RFID), Internet of Things (IoT) sensors and blockchain-based smart contracts, a fast shipping management architecture that ensures security between the parties is provided. In this context, an automatic payment and approval mechanism is established between the parties using Ethereum-based smart contracts. In addition, shipments equipped with RFID tags are automatically tracked and traceable using UHF-RFID antennas. The IoT data collected in the study are stored on a cloud-based server, which is a cost-effective solution. The smart contracts that enable communication between the parties are written in the Solidity language using the Ganache, Truffle and Metamask platforms. It was observed during testing that the integration between the technologies is ensured and the communication between the parties works successfully.

Journal ArticleDOI
TL;DR: In this paper, a simple and effective adaptive surrogate model for structural reliability analysis using a deep neural network (DNN) is introduced, in which initial design-of-experiments (DoE) points are randomly selected from a given Monte Carlo Simulation (MCS) population to build a global approximate model of the performance function (PF).
Abstract: This article introduces a simple and effective adaptive surrogate model for structural reliability analysis using a deep neural network (DNN). In this paradigm, initial design-of-experiments (DoE) points are randomly selected from a given Monte Carlo Simulation (MCS) population to build a global approximate model of the performance function (PF). More important points on the boundary of the limit state function (LSF) and in its vicinity are subsequently added, relying on the surrogate model, to enhance its accuracy without any complex techniques. A threshold is proposed to switch from a globally predictive model to a local one for the approximation of the LSF by discarding previously used unimportant and noisy points. Accordingly, the surrogate model becomes more precise for the MCS-based failure probability assessment with only a small number of experiments. Six numerical examples with highly nonlinear properties, various distributions of random variables and multiple failure modes, namely three benchmark ones concerning explicit mathematical PFs and the others relating to finite element method (FEM)-programmed truss structures under free vibration, are examined to validate the present approach.
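
A toy version of the adaptive loop, with a shallow scikit-learn MLP standing in for the DNN and a linear toy performance function in place of the paper's examples, could look like this:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

g = lambda X: 3.0 - X[:, 0] - X[:, 1]  # toy performance function; failure: g < 0

rng = np.random.default_rng(0)
mcs = rng.normal(size=(20000, 2))                                # MCS population
labeled = set(rng.choice(len(mcs), 50, replace=False).tolist())  # initial DoE

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
for _ in range(5):                                   # adaptive enrichment loop
    idx = np.fromiter(labeled, dtype=int)
    net.fit(mcs[idx], g(mcs[idx]))                   # true g only at DoE points
    g_hat = net.predict(mcs)
    # Enrich with points predicted closest to the limit state g = 0,
    # where surrogate accuracy matters most for the failure probability.
    target = len(labeled) + 20
    for i in np.argsort(np.abs(g_hat)):
        labeled.add(int(i))
        if len(labeled) >= target:
            break

print((net.predict(mcs) < 0).mean(), (g(mcs) < 0).mean())  # surrogate vs. direct MCS
```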

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper applied a three-phase systematic research approach to develop a decision support system to recognize and eradicate these challenges through persuasive strategic pathways, including low-interest loans and subsidies, cognitive development, and strict codes for newer enterprises.
Abstract: Green and climate smart mining (GCSM) is a recent advancement in mining that not only alleviates the exacerbating environmental and climate-related impacts but also protects the well-being of the communities in the mining regions. However, there is still a long way to go to transform current mining practices into GCSM due to the many challenges hampering its implementation. This study applied a three-phase systematic research approach to develop a decision support system to recognize and eradicate these challenges through persuasive strategic pathways. In the first phase, a combination of an extensive literature review and the fuzzy Delphi method identifies 9 pathways to overcome 24 challenges divided into five broad categories. The second phase involves an integrated fuzzy decision analysis approach to rank all the challenges, explore their interrelationships, and prioritize the pathways. Finally, a sensitivity analysis is conducted for pathway prioritization against each of the five categories of challenges. The proposed system is illustrated by conducting a case study of the Chinese mining industry to verify its utility and applicability. The analysis unveiled challenges in the government and regulatory and the technical and operational categories as the most critical in hampering GCSM implementation; these can be overcome through three best pathways, including low-interest loans and subsidies, cognitive development, and strict codes for newer enterprises. This research is a novel contribution to building connections between mining and sustainability science, providing comprehensive insight for mining-affiliated professionals to implement and promote GCSM, which has not yet been presented in the existing literature.

Journal ArticleDOI
TL;DR: A systematic, comprehensive, and reproducible literature review is conducted to dissect all the existing research that applied RL in the network-level TSC (NTSC) domain and to provide the research community with statistical and conceptual knowledge.
Abstract: Improvement of traffic signal control (TSC) efficiency has been found to lead to improved urban transportation and enhanced quality of life. Recently, the use of reinforcement learning (RL) in various areas of TSC has gained significant traction; thus, we conducted a systematic, comprehensive, and reproducible literature review to dissect all the existing research that applied RL in the network-level TSC (NTSC) domain. The review only targeted network-level articles that tested the proposed methods in networks with two or more intersections. We used natural language processing to define the search strings and searched the Google Scholar, Web of Science, IEEE Xplore, ACM Digital Library, Springer Link, and Science Direct databases. This review covers 160 peer-reviewed articles from 30 countries published from 1994 to March 2020. The goal of this study is to provide the research community with statistical and conceptual knowledge, summarize existing evidence, characterize RL applications in NTSC domains, explore all applied methods and major first events in the defined scope, and identify areas for further research based on the explored research problems in current research.

Journal ArticleDOI
TL;DR: In this article , the authors proposed an end-to-end framework consisting of deep feature extraction followed by feature selection (FS) for the detection of COVID-19 from CT scan images.
Abstract: Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It may cause serious ailments in infected individuals and complications may lead to death. X-rays and Computed Tomography (CT) scans can be used for the diagnosis of the disease. In this context, various methods have been proposed for the detection of COVID-19 from radiological images. In this work, we propose an end-to-end framework consisting of deep feature extraction followed by feature selection (FS) for the detection of COVID-19 from CT scan images. For feature extraction, we utilize three deep learning based Convolutional Neural Networks (CNNs). For FS, we use a meta-heuristic optimization algorithm, Harmony Search (HS), combined with a local search method, Adaptive β-Hill Climbing (AβHC), for better performance. We evaluate the proposed approach on the SARS-COV-2 CT-Scan Dataset consisting of 2482 CT scan images and an updated version of the previous dataset containing 2926 CT scan images. For comparison, we use a few state-of-the-art optimization algorithms. The best accuracy scores obtained by the present approach are 97.30% and 98.87% respectively on the said datasets, which are better than many of the algorithms used for comparison. The performances are also at par with some recent works which use the same datasets. The codes for the FS algorithms are available at: https://github.com/khalid0007/Metaheuristic-Algorithms.

Journal ArticleDOI
TL;DR: In this article, a multi-agent reinforcement learning-based adaptive learning framework is proposed to obtain cost-efficient preventive maintenance policies for a serial production line that has multiple levels of preventive maintenance actions.
Abstract: Designing preventive maintenance (PM) policies that ensure smooth and efficient production for large-scale manufacturing systems is non-trivial. Recent model-free reinforcement learning (RL) methods shed light on how to cope with the non-linearity and stochasticity in such complex systems. However, the explosion of the action space impedes RL-based PM policies from being generalized to real applications. In order to obtain cost-efficient PM policies for a serial production line that has multiple levels of PM actions, a novel multi-agent modeling is adopted to support adaptive learning by modeling each machine as a cooperative agent. An evaluation of the system-level production loss is leveraged to construct the reward function. An adaptive learning framework based on a value-decomposition multi-agent actor-critic algorithm is utilized to obtain PM policies. In a simulation study, the proposed framework demonstrates its effectiveness by outperforming other baselines on a comprehensive set of metrics, whereas the centralized RL-based methods struggle to converge to stable policies. Our analysis further demonstrates that our multi-agent reinforcement learning based method learns effective PM policies without any knowledge about the environment and maintenance strategies.

Journal ArticleDOI
TL;DR: In this paper, a joint deep learning framework is designed to predict clinical scores of Alzheimer's disease (AD), a progressive neurodegenerative disease that often develops in middle-aged and elderly people with the gradual loss of cognitive ability.
Abstract: Alzheimer's disease (AD) is a progressive neurodegenerative disease that often develops in middle-aged and elderly people with the gradual loss of cognitive ability. Presently, there is no cure for AD, and the current clinical diagnosis of AD is too time-consuming. In this paper, we design a joint deep learning framework to predict clinical scores of AD. Specifically, a feature selection method combining group LASSO and correntropy is used to reduce dimensionality and screen the features of brain regions related to AD. We explore multi-layer independently recurrent neural network regression to study the internal connections between different brain regions and the time correlation between longitudinal data. The proposed joint deep learning network studies the relationship between magnetic resonance imaging and clinical scores, and predicts the clinical scores. The predicted clinical score values allow doctors to perform early diagnosis and timely treatment of patients' disease conditions.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a conceptual framework for project-oriented organizations to select the most appropriate portfolio based on organizational resilience strategy, which can make them flexible in dealing with risks and decreasing the recovery time after disruptions.
Abstract: The COVID-19 pandemic has affected the world's economic condition significantly, and construction projects have faced many challenges and disruptions as well. This should be an alarm bell for project-oriented organizations to be prepared for such events and take necessary actions at the earliest time. In this regard, project-oriented organizations should establish their business based on the resilience concept, making them flexible in dealing with risks and decreasing the recovery time after disruptions. The current study proposes a practical conceptual framework for project-oriented organizations to select the most appropriate portfolio based on organizational resilience strategy. First, portfolios are identified, and the projects are clustered based on organizational resilience strategy using the Elbow and Fuzzy C-Means methods. The projects' scores are then determined employing the stakeholders' opinions and Robust Ordinal Priority Approach (OPA-R), which can handle the uncertainty of the input data. After that, each portfolio's score is determined using the obtained scores of the projects, and the best portfolio linked to the organizational resilience strategy is selected. The application of the proposed method to a project-oriented organization is examined, and its usage for the managers of project-oriented organizations is discussed in detail.

Journal ArticleDOI
TL;DR: In this paper, a review of recent advances in wrapper feature selection techniques for attack detection and classification, applied in the intrusion detection area, is presented, considering design, rationale, technical characteristics and common evaluation metrics.
Abstract: In this paper, we present a review of recent advances in wrapper feature selection techniques for attack detection and classification, applied in the intrusion detection area. Due to the quantity of papers published in this area, it is difficult to ascertain the level of current research in wrapper feature selection techniques. Moreover, due to the wide variety of techniques and datasets, it is difficult to identify relevant characteristics among them regarding architecture, performance, advantages and issues. The reported results are frequently shown in a heterogeneous way, as there are several metrics to measure classification quality. From our review, we propose a classification taxonomy of the wrapper feature selection techniques in the intrusion detection area, considering design, rationale, technical characteristics and common evaluation metrics. We also describe the common metrics and briefly discuss the attack scenarios reported in this review. At the end of this work, we show the coverage of existing research, open challenges and new directions.
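
For readers new to wrappers, the canonical scheme the reviewed techniques elaborate on is sequential forward selection around a wrapped classifier; a minimal scikit-learn sketch follows, with a toy dataset standing in for an intrusion detection one:

```python
from sklearn.datasets import load_breast_cancer   # toy stand-in for IDS data
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def forward_wrapper(X, y, estimator, k):
    # Greedily add the feature whose inclusion maximizes CV accuracy.
    selected = []
    while len(selected) < k:
        best_f, best_score = None, -1.0
        for f in range(X.shape[1]):
            if f in selected:
                continue
            score = cross_val_score(estimator, X[:, selected + [f]], y, cv=3).mean()
            if score > best_score:
                best_f, best_score = f, score
        selected.append(best_f)
    return selected

X, y = load_breast_cancer(return_X_y=True)
print(forward_wrapper(X, y, DecisionTreeClassifier(random_state=0), k=3))
```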