
Showing papers in "Expert systems with applications in 2023"


Journal ArticleDOI
TL;DR: In this paper, a fusion model integrating the U-Net and a Convolutional Neural Network is proposed for skin disease classification; the model performed best with the Adadelta optimizer, reaching an accuracy of 97.96%.
Abstract: Skin is one of the most significant organs, serving as a barrier between the human body and its outside surroundings. To reduce mortality, skin disease must be detected at an early stage, before it can progress to skin cancer. Early diagnosis, which can increase life expectancy, is challenging because many skin diseases look alike. To deal with biomedical images, an innovative automated system is required that can quickly and precisely identify skin lesions. Deep Learning is attracting a lot of attention in the diagnosis of numerous disorders. A fusion model is proposed here that integrates the U-Net and a Convolutional Neural Network (CNN) model. The U-Net model is used to segment the diseased regions in skin images, and the CNN model performs multi-class classification of the segmented images. The model is simulated and analyzed using the HAM10000 dataset, which contains 10,015 dermoscopy images covering seven classes of skin disease. The proposed model is analyzed using two optimizers, Adam and Adadelta, over 20 epochs with a batch size of 32. The model performed best with the Adadelta optimizer, achieving an accuracy of 97.96%.
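The paper's two-stage design — segment first, then classify only the lesion region — can be sketched in a few lines. The "models" below are trivial stand-ins (a threshold "segmenter" and a random "classifier"), not the trained U-Net/CNN from the paper; only the data flow between the two stages is illustrative.

```python
import numpy as np

def fake_unet_segment(image):
    # Stand-in for a trained U-Net: returns a binary lesion mask.
    # Here we simply threshold intensity; a real U-Net predicts this mask.
    return (image > 0.5).astype(np.float32)

def fake_cnn_classify(masked_image, n_classes=7):
    # Stand-in for the classifier head: returns 7-class probabilities
    # (HAM10000 has seven disease classes).
    rng = np.random.default_rng(seed=int(masked_image.sum() * 1000) % 2**32)
    logits = rng.normal(size=n_classes)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def fused_pipeline(image):
    mask = fake_unet_segment(image)   # stage 1: segmentation
    masked = image * mask             # keep only lesion pixels
    probs = fake_cnn_classify(masked) # stage 2: multi-class prediction
    return mask, probs

rng = np.random.default_rng(0)
img = rng.uniform(size=(28, 28)).astype(np.float32)
mask, probs = fused_pipeline(img)
```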

32 citations


Journal ArticleDOI
TL;DR: A multi-adaptive receptive field-based graph neural framework (MARP) is proposed for hyperspectral image classification, in which a graph attention (GAT) neural network is introduced to learn the importance of different-sized neighbourhoods and a long short-term memory (LSTM) method is adopted to update the nodes and preserve their local convolutional features.
Abstract: In recent years, the applications of graph convolutional networks (GCNs) in hyperspectral image (HSI) classification have attracted much attention. However, hyperspectral classification faces problems such as complex noise effects, spectral variability, labelled training sample deficiency, and high spectral mixing between materials. Furthermore, the available GCN-based methods are computationally complex and cannot automatically adjust aggregate paths. To mitigate these issues, we propose a novel multiadaptive receptive field-based graph neural framework (MARP) for HSI classification. In our method, an adaptive receptive path aggregation (ARP) mechanism is proposed to suppress the impact of noise nodes on classification and automatically explore an adaptive receptive field, where a graph attention (GAT) neural network is introduced to learn the importance of different-sized neighbourhoods and a long short-term memory (LSTM) method is adopted to update the nodes and preserve the local convolutional features of the nodes. To address the problem that ARP may fall into a local optimum, we design a multiscale receptive mechanism. Extensive experimental results obtained on four public HSI datasets demonstrate that the proposed MARP method can mitigate oversmoothing and reduce computational complexity while achieving competitive performance when compared to several state-of-the-art methods.
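The graph-attention step at the core of ARP — scoring each neighbour, softmax-normalising the scores over the neighbourhood, then aggregating — can be sketched in numpy. This is the standard single-head GAT update, not the paper's full MARP framework; the weights below are random stand-ins, not learned parameters.

```python
import numpy as np

def gat_attention(h, W, a, neighbors):
    # h: (N, F) node features; W: (F, F') projection; a: (2*F',) attention
    # vector; neighbors: node -> list of neighbor indices (including self).
    z = h @ W                              # project node features
    out = np.zeros_like(z)
    for i, nbrs in neighbors.items():
        # unnormalised attention logits: e_ij = LeakyReLU(a^T [z_i || z_j])
        e = np.array([np.concatenate([z[i], z[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, 0.2 * e)    # LeakyReLU
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()               # softmax over the neighbourhood
        out[i] = sum(w * z[j] for w, j in zip(alpha, nbrs))
    return out

rng = np.random.default_rng(1)
h = rng.normal(size=(4, 3))                # 4 nodes, 3 raw features
W = rng.normal(size=(3, 2))
a = rng.normal(size=(4,))
nbrs = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}
out = gat_attention(h, W, a, nbrs)
```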

24 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a Monkeypox diagnosis model using Generalization and Regularization-based Transfer Learning approaches (GRA-TLA) for binary and multiclass classification.
Abstract: Monkeypox has become a significant global challenge as the number of cases increases daily. Those infected with the disease often display various skin symptoms and can spread the infection through contamination. Recently, Machine Learning (ML) has shown potential in image-based diagnoses, such as detecting cancer, identifying tumor cells, and identifying coronavirus disease (COVID)-19 patients. Thus, ML could potentially be used to diagnose Monkeypox as well. In this study, we developed a Monkeypox diagnosis model using Generalization and Regularization-based Transfer Learning approaches (GRA-TLA) for binary and multiclass classification. We tested our proposed approach on ten different convolutional Neural Network (CNN) models in three separate studies. The preliminary computational results showed that our proposed approach, combined with Extreme Inception (Xception), was able to distinguish between individuals with and without Monkeypox with an accuracy ranging from 77% to 88% in Studies One and Two, while Residual Network (ResNet)-101 had the best performance for multiclass classification in Study Three, with an accuracy ranging from 84% to 99%. In addition, we found that our proposed approach was computationally efficient compared to existing TL approaches in terms of the number of parameters (NP) and Floating-Point Operations per Second (FLOPs) required. We also used Local Interpretable Model-Agnostic Explanations (LIME) to explain our model’s predictions and feature extractions, providing a deeper understanding of the specific features that may indicate the onset of Monkeypox.

17 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a rough numbers-based extended Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) method for prioritizing real-time traffic management systems.
Abstract: Digital transformation can help to make better use of existing, congested transportation networks. One solution to the road congestion problem is real-time traffic management, which focuses on enhancing traffic flow conditions. The advantages of real-time traffic management systems have grown significantly as a result of connected autonomous vehicle (CAV) innovations; CAVs can act as enforcers for managing traffic. This study proposes a novel rough-numbers-based extended Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) method for prioritizing real-time traffic management systems. Furthermore, a new approach for defining rough numbers is proposed, based on an improved methodology for defining rough numbers' lower and upper limits. This allows consideration of mutual relations between a set of objects and a flexible representation of the rough boundary interval depending on dynamic environmental conditions. In this study, three main alternatives are defined for real-time traffic management systems: real-time traffic management, real-time traffic management integrated with CAVs, and real-time traffic management by using CAVs. For these alternatives, 5 main criteria and 18 sub-criteria are defined and then prioritized using the fuzzy multi-criteria decision-making (MCDM) approach. The proposed method's performance is validated through scenario analysis. The findings demonstrate that the proposed method is effective and applicable to real-world conditions. According to the study's findings, real-time traffic management with CAVs is the most advantageous alternative, while real-time traffic management integrated with CAVs is the least advantageous.
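The baseline rough-number construction that the paper refines can be shown concretely: for each expert rating, the lower limit is the mean of all ratings not exceeding it, and the upper limit is the mean of all ratings not below it. The paper proposes an improved definition of these limits; this sketch shows only the classic version, on invented ratings.

```python
def rough_number(ratings, x):
    # Classic rough-number limits of rating x within a set of ratings:
    # lower = mean of ratings <= x, upper = mean of ratings >= x.
    lower = [r for r in ratings if r <= x]
    upper = [r for r in ratings if r >= x]
    return sum(lower) / len(lower), sum(upper) / len(upper)

# Four (invented) expert ratings of one criterion on a 1-5 scale:
ratings = [3, 4, 4, 5]
intervals = [rough_number(ratings, r) for r in ratings]
# e.g. rating 3 becomes the rough interval [3.0, 4.0]
```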

16 citations


Journal ArticleDOI
TL;DR: A WiFi indoor positioning algorithm based on support vector regression (SVR) optimized by particle swarm optimization (PSO), termed PSOSVRPos, is proposed; it achieves a mean absolute error of 1.040 m, a root mean square error (RMSE) of 0.863 m, and 59.8% of errors within 1 m.
Abstract: Wireless fidelity (WiFi) indoor positioning has attracted the attention of thousands of researchers. It faces many challenges, and the primary problem is low positioning accuracy, which hinders its widespread application. To improve the accuracy, we propose a WiFi indoor positioning algorithm based on support vector regression (SVR) optimized by particle swarm optimization (PSO), termed PSOSVRPos. The SVR algorithm treats localization as a regression problem by building the mapping between signal features and spatial coordinates in a high-dimensional space. The PSO algorithm performs global-optimal parameter estimation for the SVR model. The positioning experiment is conducted on an open dataset (1511 samples, 154 features). The PSOSVRPos algorithm achieves positioning accuracy with a mean absolute error of 1.040 m, a root mean square error (RMSE) of 0.863 m, and 59.8% of errors within 1 m. Experimental results indicate that the PSOSVRPos algorithm is a precise approach for WiFi indoor positioning, as it reduces the RMSE (35%) and errors within 1 m (14%) compared with state-of-the-art algorithms such as convolutional neural network (CNN) based methods.
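The PSO-over-SVR-hyperparameters idea can be sketched with a minimal particle swarm. The objective below is a stand-in quadratic "cross-validation error" surface over two hyper-parameters (think log C and log gamma); a real PSOSVRPos run would train an SVR at each candidate point instead.

```python
import numpy as np

def cv_error(params):
    # Stand-in for SVR cross-validation error over (log C, log gamma);
    # minimum placed at (1.0, -2.0) for illustration only.
    c, g = params
    return (c - 1.0) ** 2 + (g + 2.0) ** 2

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, *x.shape))
        # inertia + cognitive + social velocity update
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

best, best_f = pso(cv_error, (np.array([-3.0, -5.0]), np.array([3.0, 1.0])))
```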

11 citations


Journal ArticleDOI
TL;DR: In this article, the state-of-the-art literature on AR technologies in product assembly and disassembly from the Maintenance/Repair perspective is presented, and the working of various modules in AR technology in facilitating a user-friendly guiding platform and its applications are discussed with suitable illustrations.
Abstract:
• Role of Augmented Reality in assembly and maintenance/repair is analysed.
• Software and hardware elements of AR to aid manufacturing systems are discussed.
• Challenges in AR tracking and registration techniques are discussed.
• Future trends of AR to aid manufacturing systems are discussed.
Manufacturing industries are currently experiencing the fourth revolution with the rapid advancements in immersive technologies for human-machine interaction (HMI) and flexible manufacturing systems (FMS). Product variance is limited due to barriers in the knowledge transfer between the stakeholders in the pre- and post-manufacturing phases. Augmented reality (AR) is a promising technology that can offer a high degree of adaptability and independence to support knowledge transfer at the most crucial manufacturing stages, such as assembly and repair & maintenance. This article presents the state-of-the-art literature on AR technologies in product assembly and disassembly from the maintenance/repair perspective. The working of various modules in AR technology in facilitating a user-friendly guiding platform and its applications are discussed with suitable illustrations. Critical difficulties, such as tracking and rendering techniques for estimating human movement and environment experiences, are examined to extend the adaptability of the technology. Future research potential, such as enhancing the virtual interface for reality, identifying worker behaviours, and enabling sharing and collaboration between multiple streams in an industrial context, is analyzed.

11 citations


Journal ArticleDOI
TL;DR: In this paper, an ensembled convolutional neural network (ConvNet) architecture and a ConvNet-LSTM architecture were used to detect atrial fibrillation heartbeats automatically.
Abstract: Automatic screening approaches can help diagnose Cardiovascular Disease (CVD) early; CVD is the leading source of mortality worldwide. Electrocardiogram (ECG/EKG)-based methods are frequently utilized to detect CVDs since they are a reliable and non-invasive tool. To this end, a Smart Cardiovascular Disease Detection System (SCDDS) is offered in this manuscript to detect heart disease in advance. A wearable device embedded with electrodes and Internet of Things (IoT) sensors is utilized to record the EKG signals. Bluetooth is used to send EKG signals to the smartphone. The smartphone transfers the signals through an Android app to a pre-trained deep-learning-based architecture deployed on the cloud. The architecture analyzes the EKG signal, and a heart report is communicated to the patient, advising further preventive action. We offer an ensemble of a Convolutional Neural Network (ConvNet) architecture and a ConvNet-Long Short-Term Memory (ConvNet-LSTM) architecture to detect atrial fibrillation heartbeats automatically. The architecture utilizes a convolutional neural network and a long short-term memory network to extract local correlation features and capture the front-to-back dependencies of EKG sequence data. The MIT-BIH atrial fibrillation database was utilized to design the architecture, achieving an overall categorization accuracy of 98% on the test set's heartbeat data. The findings of this work show that the suggested system achieves significant accuracy through model ensembling. Such models can be deployed in wearable devices and smartphones for continuous monitoring and reporting of the heart.
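The ensembling step — blending the ConvNet and ConvNet-LSTM class probabilities before the final decision — reduces to a soft vote. The per-beat probabilities below are made-up numbers, and the equal weighting is one common choice, not necessarily the paper's exact scheme.

```python
import numpy as np

def ensemble_predict(prob_cnn, prob_cnn_lstm, weights=(0.5, 0.5)):
    # Soft voting: average the members' class probabilities,
    # then take the argmax of the blend as the final label.
    blended = weights[0] * prob_cnn + weights[1] * prob_cnn_lstm
    return blended, int(blended.argmax())

# Toy per-beat probabilities for (normal, atrial fibrillation):
p1 = np.array([0.40, 0.60])   # ConvNet output
p2 = np.array([0.20, 0.80])   # ConvNet-LSTM output
blended, label = ensemble_predict(p1, p2)   # label 1 = atrial fibrillation
```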

11 citations


Journal ArticleDOI
TL;DR: In this article , a new mixed linear mathematical model is developed for an agriculture supply chain network to minimize the total fixed and variable costs of the closed-loop supply chain, efficient and well-known old and recent meta-heuristic algorithms are utilized.
Abstract: Nowadays, recent advances and developments in the agricultural sector have raised concerns regarding the environmental impact of agricultural wastes and the safety of agricultural products. Recently, reverse logistics have been used to address these issues. It is also possible to reduce the environmental impact of these by-products and return them to the network by opening compost centers and designing an optimal supply chain distribution network. In this paper, a new mixed linear mathematical model is developed for an agriculture supply chain network to minimize the total fixed and variable costs of the closed-loop supply chain. To address the proposed model, efficient and well-known old and recent meta-heuristic algorithms are utilized. Besides, to boost the intensification and diversification phases, three hybrid meta-heuristic algorithms are developed. Finally, the quality of the meta-heuristic and hybrid algorithms is investigated and compared.

10 citations


Journal ArticleDOI
TL;DR: A multi-scale receptive fields graph attention neural network (MRGAT) is proposed for hyperspectral image classification, in which a superpixel segmentation method is adopted to abstract the original HSI local spatial features and graph edges are introduced into the graph attention network to acquire the local semantic features of the graph.
Abstract: Hyperspectral image (HSI) classification has attracted wide attention in many fields. Applying Graph Neural Networks (GNNs) to HSI classification is one of the research frontiers and has greatly improved HSI classification accuracy. However, GNN-based methods have not been widely applied due to their time consumption, inefficient information description, and poor anti-noise robustness. To overcome these deficiencies, a novel multi-scale receptive fields graph attention neural network (MRGAT) is proposed for HSI classification in this paper. In this network, a superpixel segmentation method is adopted to abstract the original HSI local spatial features. A two-layer one-dimensional convolutional neural network (1D CNN) spectral transformer mechanism is designed to extract the spectral features of superpixels automatically. Furthermore, graph edges are introduced into the Graph Attention Network (GAT) to acquire the local semantic features of the graph. Moreover, inspired by the transformer network, we design a novel multi-scale receptive field GAT to extract local-global adjacent node features and edge features. Finally, a graph attention network and a softmax function are utilized for multi-receptive feature fusion and pixel-label prediction. On the Pavia University, Salinas, and Houston 2013 datasets, the overall accuracies (OAs) of our MRGAT are 71.76%, 82.61%, and 63.82%, respectively. Moreover, its performance with limited labeled samples indicates that MRGAT has superior adaptability. Compared with competitive classifiers, MRGAT achieves high classification efficiency, as verified by a training-time comparison experiment.

10 citations


Journal ArticleDOI
TL;DR: In this paper, the authors applied four robust machine learning (ML) and deep learning (DL) algorithms to simulate the solubility trapping index (STI) and residual trapping index (RTI) of CO2 in a saline aquifer.
Abstract: Ongoing anthropogenic carbon dioxide (CO2) emissions to the atmosphere cause severe air pollution that leads to complex changes in the climate, which pose threats to human life and ecosystems more generally. Geological CO2 storage (GCS) offers a promising solution to overcome this critical environmental issue by removing some of the CO2 emissions. The performance of GCS projects depends directly on the solubility and residual trapping efficiency of CO2 in a saline aquifer. This study models the solubility trapping index (STI) and residual trapping index (RTI) of CO2 in saline aquifers by applying four robust machine learning (ML) and deep learning (DL) algorithms. Extreme learning machine (ELM), least square support vector machine (LSSVM), general regression neural network (GRNN), and convolutional neural network (CNN) are applied to 6811 compiled simulation records from published studies to provide accurate STI and RTI predictions. Employing different statistical error metrics coupled with supplementary evaluations, involving score and robustness analyses, the prediction accuracy of the proposed models is comparatively assessed. The findings of the study reveal that the LSSVM model delivers the lowest RMSE values, 0.0043 (STI) and 0.0105 (RTI), with few outlying predictions. Presenting the highest STI and RTI prediction scores, the LSSVM is distinguished as the most credible of the four models studied. The models consider eight input variables, of which the time elapsed and injection rate display the strongest correlations with STI and RTI, respectively. The results suggest that the proposed LSSVM model is best suited for monitoring CO2 sequestration efficiency from the data variables considered. Applying such models avoids time-consuming complex simulations and offers the potential to generate fast and reliable assessments of GCS project feasibility.
Accurate modeling of CO2 storage trapping indexes guarantees successful geological CO2 storage operation, which is, in fact, the cornerstone of properly controlling and managing environmentally polluting gases.
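LSSVM, the best-performing model in this study, has a property worth illustrating: training reduces to solving a single linear system rather than a quadratic program. Below is a minimal numpy regression sketch with an RBF kernel on toy sine data — not the paper's CO2 dataset or tuned hyper-parameters.

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sample sets.
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    # LSSVM regression training is one linear solve:
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]            # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf(X_new, X_train, sigma) @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 1))
y = np.sin(X[:, 0])                   # toy target, not STI/RTI data
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
rmse = float(np.sqrt(((pred - y) ** 2).mean()))
```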

10 citations


Journal ArticleDOI
TL;DR: In this paper, genetic-algorithm-tuned Deep Learning (DL) and boosted tree-based techniques were evaluated for predicting the closing prices of six cryptocurrencies.
Abstract:
• Deep learning and boosted tree approaches for cryptocurrency price modeling.
• Hyperparameter tuning of models with genetic algorithms.
• Benchmark deep learning and boosted tree-based techniques on six cryptocurrencies.
• Convolutional neural networks are robust with enhanced prediction ability.
The emergence of cryptocurrencies has drawn significant investment capital in recent years, with an exponential increase in market capitalization and trade volume. However, the cryptocurrency market is highly volatile and burdened with substantial heterogeneous datasets characterized by complex interactions between predictors, which may make it difficult for conventional techniques to achieve optimal results. In addition, volatility significantly impacts investment decisions; thus, investors are confronted with how to reasonably determine prices and assess their financial investment risks. This study evaluates the performance of genetic-algorithm-tuned Deep Learning (DL) and boosted tree-based techniques for predicting several cryptocurrencies' closing prices. The DL models include Convolutional Neural Networks (CNN), Deep Forward Neural Networks, and Gated Recurrent Units. The study compares the DL models with boosted tree-based models on six cryptocurrency datasets from multiple data sources using relevant performance metrics. The results reveal that the CNN model has the lowest mean average percentage error of 0.08 and produces a consistent and highest explained variance score of 0.96 (on average) compared to other models. Hence, CNN is more reliable with limited training data and easily generalizable for predicting several cryptocurrencies' daily closing prices. The results will also help practitioners better understand crypto market challenges and offer practical strategies to lower risks.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a deep learning-based approach for real-time polyp detection based on the 5th version of the You Only Look Once (YOLOv5) object detection algorithm and the artificial bee colony (ABC) optimization algorithm.
Abstract: Colorectal cancer (CRC) is one of the most common cancer types, with a high mortality rate. Colonoscopy is considered the gold standard in CRC screening; it also provides immediate removal of polyps, which are the precursors of CRC, significantly reducing CRC mortality. Polyps can be overlooked due to many factors and can progress to a fatal stage. Increasing the detection rate of missed polyps could be a turning point for CRC. Many traditional computer-aided detection (CAD) systems have therefore been proposed, but the desired efficiency could not be obtained due to limitations in real-time detection or in the sensitivity and specificity of the systems. In this article, we present a deep learning-based approach unlike traditional systems. The approach is based on the 5th version of the You Only Look Once (YOLOv5) object detection algorithm and the artificial bee colony (ABC) optimization algorithm. While models belonging to the YOLOv5 algorithm are used for polyp detection, the ABC algorithm is used to improve their performance: it is positioned to find the optimal activation functions and hyper-parameters for the YOLOv5 algorithm. The proposed method was evaluated on the novel Showa University and Nagoya University polyp database (SUN) and the PICCOLO white-light and narrow-band imaging colonoscopic dataset (PICCOLO). Experimental studies showed that the ABC algorithm successfully optimizes the YOLOv5 algorithm and offers much higher accuracy than the original YOLOv5 algorithm. The proposed method is far ahead of the existing methods in the literature in terms of speed and accuracy, with high performance in real-time polyp detection. This study is the first proposed method for optimizing activation functions and hyper-parameters for object detection algorithms.
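The ABC optimizer that tunes YOLOv5 here follows the employed/onlooker/scout bee cycle. A minimal numpy version on a stand-in objective (a quadratic playing the role of "1 − validation score"; a real run would train and evaluate the detector at each candidate hyper-parameter point):

```python
import numpy as np

def objective(x):
    # Stand-in for 1 - validation score at hyper-parameters x;
    # minimum placed at x = (0.3, 0.3) for illustration only.
    return float(((x - 0.3) ** 2).sum())

def abc_optimize(obj, dim=2, n_food=10, iters=80, limit=15, seed=0):
    rng = np.random.default_rng(seed)
    foods = rng.uniform(-1, 1, size=(n_food, dim))
    fits = np.array([obj(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        # Perturb food source i toward/away from a random partner k.
        k = rng.integers(n_food)
        cand = foods[i] + rng.uniform(-1, 1, size=dim) * (foods[i] - foods[k])
        f = obj(cand)
        if f < fits[i]:
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                 # employed bees
            try_neighbor(i)
        p = 1.0 / (1.0 + fits)
        p /= p.sum()
        for i in rng.choice(n_food, size=n_food, p=p):   # onlooker bees
            try_neighbor(i)
        for i in np.where(trials > limit)[0]:   # scouts abandon stale sources
            foods[i] = rng.uniform(-1, 1, size=dim)
            fits[i] = obj(foods[i])
            trials[i] = 0
    best = int(fits.argmin())
    return foods[best], float(fits[best])

best_x, best_f = abc_optimize(objective)
```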

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of the research trends and patterns currently prevailing in the domain of mathematical expression recognition (MER), identifying and associating (semantic mapping) the leading research zones, core research areas, and research trends steering the MER domain.
Abstract: Although recognition of mathematical expressions has been explored for four decades, the current literature and trends are varied and frequently influenced by distinct emerging methods and technologies. This situation motivates an organized review providing heedful insight into the research trends and patterns currently prevailing in the domain of mathematical expression recognition (MER). The objectives are: to identify and associate (semantic mapping) the leading research zones, core research areas, and research trends steering the MER domain; to identify prominent recognition models based on the extracted research areas; and to develop a development chart from the extracted research trends to direct future work in this direction. A manual and automatic search has been performed across reputed digital libraries for corpus formation. The formulated corpus is used for topic modeling, with Latent Dirichlet Allocation (LDA) deployed for information modeling to achieve the defined objectives. A corpus of 325 research papers published from 1967 to 2021 has been processed using LDA. Five major research areas and ten research trends are identified. The leading research area is "Segmentation and Classification Procedures", and the trend with the most related publications is "Contextual and Graph-based recognition". "Attention and Deep Networks" has emerged as the newborn trend, and the identified newborn, young, and matured trends call for more exploration from the MER research community.

Journal ArticleDOI
TL;DR: In this paper, a new model with a two-factor stochastic equilibrium volatility level is proposed to price variance and volatility swaps with nonlinear payoff; regime switching is incorporated to better describe the underlying price.
Abstract: This paper proposes a new model with a two-factor stochastic equilibrium volatility level that can be used to price variance and volatility swaps with nonlinear payoff. The adopted model uses the CIR process as the volatility process with the constant equilibrium level replaced with a stochastic one, and at the same time incorporates the regime switching mechanics in order to better describe the underlying price. To better understand how the introduced regime switching impacts both swap prices, we also conduct numerical experiments to compare our results with those obtained without regime switching.
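The modelling idea — a CIR-type variance process whose equilibrium level itself switches between regimes — can be illustrated with a small Euler simulation. All parameters and the per-step switching probabilities below are invented for illustration; the paper additionally prices the swaps under this dynamic, which the sketch does not attempt.

```python
import numpy as np

def simulate_cir_regime(v0, kappa, sigma, thetas, switch_p, T=1.0, n=1000, seed=0):
    # Euler scheme for dv = kappa*(theta_regime - v)dt + sigma*sqrt(v)dW,
    # where theta jumps between two levels via a simple two-state chain
    # (switch_p holds per-step switching probabilities; a generator-free sketch).
    rng = np.random.default_rng(seed)
    dt = T / n
    v = np.empty(n + 1)
    v[0] = v0
    regime = 0
    for t in range(n):
        if rng.uniform() < switch_p[regime]:      # regime switch
            regime = 1 - regime
        dv = kappa * (thetas[regime] - v[t]) * dt \
             + sigma * np.sqrt(max(v[t], 0.0)) * rng.normal() * np.sqrt(dt)
        v[t + 1] = max(v[t] + dv, 0.0)            # full truncation at zero
    return v

v = simulate_cir_regime(v0=0.04, kappa=2.0, sigma=0.3,
                        thetas=(0.04, 0.09), switch_p=(0.002, 0.002))
realized_var = float(v.mean())   # crude proxy for a variance-swap leg
```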


Journal ArticleDOI
TL;DR: In this paper, a hybrid optimization concept, known as optimal feature selection, is used to explore the search space efficiently and achieve better performance in feature selection for detecting lung cancer.
Abstract: Among all diseases affecting human beings, lung cancer is the most hazardous, leading to death more often than other cancer ailments. Lung cancer is asymptomatic and therefore hard to detect at an early stage, yet rapid identification helps sustain patients' survival rates. Hence, many researchers have developed various techniques for detecting lung cancer. Recently, computer technology has been used to solve these diagnosis problems. The developed systems involve diverse deep and machine learning approaches, along with certain image-processing techniques, for forecasting the severity level of lung cancer. This study develops a novel intelligent method for diagnosing lung cancer. Initially, data is gathered from two benchmark datasets, which include attribute information from various patients' health records. Two standard techniques, "Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE)", are used to extract features, and deep features are retrieved from "the pooling layer of a Convolutional Neural Network (CNN)". To choose the significant features, feature selection is performed by the Best Fitness-based Squirrel Search Algorithm (BF-SSA), termed optimal feature selection. This hybrid optimization concept explores the search space efficiently and performs better in exploiting the feature selection. In the final prediction phase, High Ranking Deep Ensemble Learning (HR-DEL) takes place over five forms of detection models, and the high ranking of all the classifiers yields the final predicted output.
The developed HR-DEL makes predictions up to 8.79% more accurate than conventional methods and provides high robustness by reducing the dispersion of the classification and improving model efficiency. The classification is performed, and the results are evaluated through a performance comparison of various algorithms.


Journal ArticleDOI
TL;DR: In this article, an improved hybrid genetic algorithm with variable neighborhood search (HGA-VNS) is proposed to address the flexible job shop scheduling problem, considering machine workload balance in a machining system, with minimization of the makespan.
Abstract: This paper proposes an improved hybrid genetic algorithm with variable neighborhood search (HGA-VNS) for addressing the flexible job shop scheduling problem (FJSP), considering machine workload balance in a machining system, with minimization of the makespan. In the HGA-VNS algorithm, each solution is represented by a chromosome consisting of two parts: the first part encodes the machine number for each operation, and the second part encodes the machining process sequence. Second, considering the slow convergence speed of previous algorithms, a combined method for the crossover and mutation operators that accounts for machine workload balance is proposed. Third, a local search approach is proposed that operates on key processes along the critical path, reducing the number of invalid transformations. For the HGA-VNS, the best combination of parameters is determined using the orthogonal experiment approach. The proposed HGA-VNS is then tested on sets of extended instances based on well-known benchmarks. Experimental results show that the HGA-VNS is effective, and its performance is significantly better than other algorithms in solving flexible job shop problems in a machining system. Finally, the proposed HGA-VNS is applied to optimize a practical FJSP in enterprise F. Compared with the original scheduling scheme, the makespan of the optimal scheduling scheme is reduced by 14.92%, and HGA-VNS obtains more efficient and economical solutions.
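The two-part chromosome encoding can be made concrete with a tiny decoder: part one maps each (job, operation) to a machine, part two is the operation sequence, and a greedy timing rule yields the makespan. The instance data are invented for illustration; the paper's decoder and neighbourhood moves are richer than this sketch.

```python
def decode(machine_genes, op_order, proc_time):
    # machine_genes: (job, op) -> machine index (chromosome part 1)
    # op_order: job numbers, each repeated once per operation (part 2)
    # proc_time: (job, op, machine) -> duration
    machine_free = {}   # earliest free time per machine
    job_free = {}       # earliest free time per job
    next_op = {}        # next operation index per job
    for job in op_order:
        op = next_op.get(job, 0)
        m = machine_genes[(job, op)]
        start = max(machine_free.get(m, 0), job_free.get(job, 0))
        end = start + proc_time[(job, op, m)]
        machine_free[m] = end
        job_free[job] = end
        next_op[job] = op + 1
    return max(job_free.values())   # makespan

# Toy instance: 2 jobs x 2 operations on machines M0/M1.
proc = {(0, 0, 0): 3, (0, 1, 1): 2, (1, 0, 1): 2, (1, 1, 0): 4}
machines = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
makespan = decode(machines, [0, 1, 0, 1], proc)
```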

Journal ArticleDOI
TL;DR: In this paper, the Edge U-Net model is proposed for segmenting brain tumor tissue by merging boundary-related MRI data with the main data from brain MRIs; it achieved Dice scores of 88.8% for meningioma, 91.76% for glioma, and 87.28% for pituitary tumors.
Abstract: Blood clots in the brain are frequently caused by brain tumors. Early detection of these clots has the potential to significantly lower morbidity and mortality in cases of brain cancer. It is thus indispensable for proper brain tumor diagnosis and treatment that tumor tissue magnetic resonance images (MRI) be accurately segmented. Several deep learning approaches to the segmentation of brain tumor MRIs have been proposed, each designed to properly map out boundaries and thus achieve highly accurate segmentation. This study introduces a deep convolutional neural network (DCNN), named the Edge U-Net model, built as an encoder-decoder structure inspired by the U-Net architecture. The Edge U-Net model can more precisely localise tumors by merging boundary-related MRI data with the main data from brain MRIs. In the decoder phase, boundary-related information from original MRIs at different scales is integrated with the appropriate adjacent contextual information. A novel loss function, enhanced with boundary information, was added to this segmentation model to improve performance and allow the learning process to produce more precise results. In the conducted experiments, a public dataset with 3064 T1-Weighted Contrast Enhancement (T1-CE) images of three well-known brain tumor types was used. The experiments demonstrated that the proposed framework achieves satisfactory Dice scores compared with state-of-the-art models, with highly accurate differentiation of brain tissues: 88.8% for meningioma, 91.76% for glioma, and 87.28% for pituitary tumors. Other performance metrics, such as the Jaccard index, sensitivity, and specificity, were also computed. According to the results, the Edge U-Net model is a potential diagnostic tool that can help radiologists more precisely segment brain tumor tissue images.
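The headline Dice scores are computed from predicted and ground-truth masks; the metric itself is easy to state and verify on a toy pair of masks (invented here):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    # Dice coefficient between binary masks: 2|P ∩ T| / (|P| + |T|),
    # the per-class metric reported for meningioma/glioma/pituitary.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1                    # 16-pixel "tumor"
pred = np.zeros((8, 8), dtype=int)
pred[3:6, 2:6] = 1                     # 12 predicted pixels, all inside truth
score = float(dice_score(pred, truth)) # 2*12 / (12+16) ≈ 0.857
```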

Journal ArticleDOI
TL;DR: In this paper, a stacking ensemble learning model for daily runoff prediction based on different types of 1D and 2D CNNs is proposed and applied to the Quinebaug River Basin, Connecticut, USA.
Abstract: In recent years, applications of convolutional neural networks (CNNs) to runoff prediction have received some attention due to their excellent feature extraction capabilities. However, existing studies are still limited since only 1D or only 2D CNNs are developed to predict runoff. In this study, a stacking ensemble learning model for daily runoff prediction based on different types of 1D and 2D CNNs is proposed and applied to the Quinebaug River Basin, Connecticut, USA. The structure of the CNN models is developed with reference to the classic LeNet5 network. In particular, the predictors are reconstructed into 1D vectors and 2D matrices with 10-, 20- and 30-day time steps. In total, 18 member models are constructed by combining 3 representative 1D and 2D CNN models with the 3 time steps. The simple average method (SAM) is used to integrate the different CNN member models. The results show that the performance of the same-type SAM based on either 1D or 2D CNNs improves because it can, to some extent, offset the member models' positive and negative prediction errors. Furthermore, the mixed-type SAM models based on both 1D and 2D CNN member models can further improve the prediction accuracy. The optimal model SAM15 consists of two 1D CNNs and four 2D CNNs. Compared with the optimal CNN member models, SAM15 reduces the validation RMSE by about 13% and improves the validation R and NSE by about 3% and 7%, respectively. This study highlights that the proposed stacking ensemble learning model can improve the daily runoff prediction accuracy through integration of the nonlinear fitting ability of 1D and 2D CNN member models.
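The simple average method (SAM) is straightforward to sketch. The toy data below is hypothetical and only illustrates how averaging can cancel members' opposite-signed errors:

```python
import numpy as np

def simple_average_ensemble(member_preds):
    """SAM: average the predictions of the CNN member models (illustrative)."""
    return np.mean(np.asarray(member_preds), axis=0)

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# Hypothetical runoff observations and two members that err in opposite directions;
# averaging cancels the bias, driving the ensemble RMSE below each member's RMSE.
obs = np.array([10.0, 12.0, 9.0])
m1 = obs + 1.0   # member that overestimates
m2 = obs - 1.0   # member that underestimates
sam = simple_average_ensemble([m1, m2])
print(rmse(m1, obs), rmse(sam, obs))  # 1.0 0.0
```

Real member errors are of course not perfectly anti-correlated, which is why the reported RMSE reduction is about 13% rather than total.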

Journal ArticleDOI
TL;DR: In this paper , a multi-task semi-supervised learning (MTSSL) framework was proposed to detect COVID-19 using auxiliary tasks for which adequate data is publicly available.
Abstract: Efficient diagnosis of COVID-19 plays an important role in preventing the spread of the disease. There are three major modalities to diagnose COVID-19: polymerase chain reaction tests, computed tomography scans, and chest X-rays (CXRs). Among these, diagnosis using CXRs is the most economical approach; however, it requires extensive human expertise to diagnose COVID-19 in CXRs, which can undermine its cost-effectiveness. Computer-aided diagnosis with deep learning has the potential to perform accurate detection of COVID-19 in CXRs without human intervention while preserving cost-effectiveness. Many efforts have been made to develop a highly accurate and robust solution. However, due to the limited amount of labeled data, existing solutions are evaluated on small test sets. In this work, we proposed a solution to this problem using a multi-task semi-supervised learning (MTSSL) framework that utilizes auxiliary tasks for which adequate data is publicly available. Specifically, we utilized Pneumonia, Lung Opacity, and Pleural Effusion as additional tasks using the CheXpert dataset. We illustrated that the primary task of COVID-19 detection, for which only limited labeled data is available, can be improved by using this additional data. We further employed an adversarial autoencoder (AAE), which has a strong capability to learn powerful and discriminative features, within our MTSSL framework to maximize the benefit of multi-task learning. In addition, the supervised classification networks in combination with the unsupervised AAE enable semi-supervised learning, which includes a discriminative part in the unsupervised AAE training pipeline. The generalization of our framework is improved by this semi-supervised learning, which in turn enhances COVID-19 detection performance. The proposed model is rigorously evaluated on the largest publicly available COVID-19 dataset, and experimental results show that it attains state-of-the-art performance.

Journal ArticleDOI
TL;DR: In this paper , an adaptive decomposition-based evolutionary algorithm (ADEA) is proposed to guide the evolution process of MOEAs for multi-objective optimization problems with complex Pareto fronts.
Abstract:
• An adaptive decomposition approach is proposed to guide the evolution process.
• A structured metric is designed to assess the quality of the candidate solutions.
• The structured metric performs differently on different rank fronts.
• Once a weight vector is generated, the sub-objective space is divided into two sub-spaces.
In decomposition-based multi-objective evolutionary algorithms (MOEAs), a set of uniformly distributed reference vectors (RVs) is usually adopted to decompose a multi-objective optimization problem (MOP) into several single-objective sub-problems, and the RVs are fixed during evolution. When it comes to multi-objective optimization problems (MOPs) with complex Pareto fronts (PFs), the effectiveness of the MOEA may degrade. To solve this problem, this article proposes an adaptive decomposition-based evolutionary algorithm (ADEA) for both multi- and many-objective optimization. In ADEA, the candidate solutions themselves are used as RVs, so that the RVs can be automatically adjusted to the shape of the Pareto front (PF). Also, the RVs are generated successively, one by one, and once a reference vector (RV) is generated, the corresponding sub-objective space is dynamically decomposed into two sub-spaces. Moreover, a variable metric is proposed and merged with the proposed adaptive decomposition approach to assist the selection operation in evolutionary many-objective optimization (EMO). The effectiveness of ADEA is compared with several state-of-the-art MOEAs on a variety of benchmark MOPs with up to 15 objectives. The empirical results demonstrate that ADEA has competitive performance on most of the MOPs used in this study.
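The abstract does not spell out ADEA's variable metric, but the decomposition mechanism it builds on can be illustrated with the classic weighted Tchebycheff scalarization, where each reference vector defines one single-objective sub-problem:

```python
import numpy as np

def tchebycheff(f, weight, ideal):
    """Weighted Tchebycheff scalarization of one sub-problem (classic
    MOEA/D-style decomposition; ADEA's own metric differs in detail)."""
    return float(np.max(np.asarray(weight) * np.abs(np.asarray(f) - np.asarray(ideal))))

# Two candidate solutions in a 2-objective space, one reference (weight) vector.
# All objective values here are hypothetical.
ideal = np.array([0.0, 0.0])
w = np.array([0.5, 0.5])
f_a = np.array([0.2, 0.8])
f_b = np.array([0.4, 0.4])
# f_b is better for this RV: max(0.2, 0.2) = 0.2 < max(0.1, 0.4) = 0.4
print(tchebycheff(f_a, w, ideal), tchebycheff(f_b, w, ideal))  # 0.4 0.2
```

Under fixed, uniformly spread weight vectors this scalarization struggles on irregular Pareto fronts, which is exactly the motivation for letting the candidate solutions themselves serve as RVs.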

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper developed a complete multivariate selection-combination short-term wind speed forecasting system, which is composed of two advanced feature-selection methods; six single forecasting models based on convolutional and recurrent neural networks and a multi-objective chameleon swarm optimization algorithm.
Abstract: Wind energy, as a typical environmentally friendly source of energy for power generation, has the advantages of being renewable and emitting no greenhouse gases. Moreover, accurate wind speed prediction is vital in wind power generation. However, most existing forecasting models use only univariate time series forecasting, ignoring both the effect of other variables on wind speed and the improvement of the model by optimization algorithms, resulting in lower accuracy and stability. Aiming to fill this gap, we develop a complete multivariate selection-combination short-term wind speed forecasting system, which is composed of two advanced feature-selection methods, six single forecasting models based on convolutional and recurrent neural networks, and a multi-objective chameleon swarm optimization algorithm. We prove theoretically that the proposed multi-objective chameleon swarm optimization algorithm yields Pareto optimal solutions, and show that it performs best on several test functions in comparison with other multi-objective swarm optimization algorithms. For wind speed prediction in summer, our proposed prediction system achieves a mean absolute percentage error of 1.937%, 2.110% and 2.584% in one-, two- and three-step forecasting, respectively, outperforming both single and other combined models.
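The Pareto-optimality claim above rests on identifying non-dominated solutions. A minimal sketch of dominance checking and non-dominated filtering, assuming minimization and hypothetical objective values:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical trade-off between forecast error and instability:
# (1.9, 0.5) and (2.1, 0.3) are mutually non-dominated; (2.6, 0.6) is
# dominated by both and is filtered out.
pts = [(1.9, 0.5), (2.1, 0.3), (2.6, 0.6)]
print(pareto_front(pts))  # [(1.9, 0.5), (2.1, 0.3)]
```

A multi-objective optimizer reports the whole non-dominated set rather than a single answer, leaving the accuracy-versus-stability choice to the forecaster.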

Journal ArticleDOI
TL;DR: In this paper, a grey prediction model with a variable structure was established considering the small sample size of China's NEV sales data; it predicts sales of 3.03 million in 2030, indicating that the market will continue to grow, but at a significantly slower rate.
Abstract: At present, the new energy vehicle (NEV) industry in China is at a huge risk of overheated investment and overcapacity. An accurate prediction of China’s future NEV market is of great significance for the Chinese government to control the growth of the industry at a reasonable speed and the production on a reasonable scale. To this end, a new grey prediction model with a variable structure was established considering the small sample size of China’s NEV sales data. In the new model, the value range and optimization space of the order r were expanded, and the definitions and structures of two operators, the grey accumulating operator and the grey inverse operator, were unified. Meanwhile, the new model had good structural variability and was fully compatible with other grey models of the same type. The performance of the model was tested with different data sequences, and results showed that the comprehensive performance of this model was better than that of other similar models. Lastly, the model was employed to predict China’s NEV sales. Results showed that the sales were expected to be 3.03 million in 2030, which indicates that China’s NEV market will continue to grow, but at a significantly slower rate. The government and enterprises need to take corresponding measures to promote the healthy development of China’s NEV industry.
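For order r = 1, the grey accumulating and inverse operators mentioned above reduce to the classic GM(1,1) model, which this paper generalizes. A sketch of that base case on a hypothetical short sales series (not the paper's variable-structure model):

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Classic GM(1,1) grey model: fit a short series and extend it.
    (The paper's model generalizes the accumulation order r; here r = 1.)"""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # grey accumulating operator (1-AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    # Least-squares estimate of development coefficient a and grey input b.
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time response of the whitened equation, then difference back (1-IAGO).
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

# Hypothetical, roughly exponential "sales" series extended by two steps.
sales = [1.2, 1.5, 1.9, 2.4, 3.0]
pred = gm11_forecast(sales, horizon=2)
print(np.round(pred, 3))
```

GM(1,1) is designed precisely for such small samples: five points are enough to estimate its two parameters, which is why grey models suit the short NEV sales record.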

Journal ArticleDOI
TL;DR: In this article, a two-echelon waste management system (WMS) is proposed to minimize operational costs and environmental impact by utilizing the Industry 4.0 concept through traceability Internet of Things-based devices.
Abstract: Nowadays, population growth and urban development call for an efficient waste management system (WMS) that builds on recent advances and trends. Among all functions and procedures in these systems, waste collection plays a significant role. This study proposes a two-echelon WMS to minimize operational costs and environmental impact by utilizing the Industry 4.0 concept. Both models utilize modern traceability Internet of Things-based devices to compare real-time information on the waste level in bins and separation centers with the threshold waste level (TWL) parameter. The first model optimizes the operational cost and CO2 emission of collecting waste from bins to the separation center while considering time windows. The second model is formulated as a capacitated vehicle routing problem to minimize the cost of transferring waste to recycling centers. In addition, to find the optimal solution, recent meta-heuristic algorithms are employed, and several novel heuristics based on the problem's specifications are developed. Furthermore, the developed heuristic methods are utilized to generate the initial feasible solutions for the meta-heuristics and compared with random ones. The performance of the proposed algorithms is evaluated, and the Best Worst Method (BWM) is applied to rank the algorithms based on relative percentage deviation, relative deviation index and hitting time.
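The abstract does not detail the developed heuristics, but a generic capacity-aware nearest-neighbour construction heuristic, of the kind commonly used to seed meta-heuristics with feasible initial solutions, can be sketched as follows (coordinates and demands are hypothetical):

```python
import math

def nearest_neighbor_cvrp(depot, bins, demands, capacity):
    """Capacity-aware nearest-neighbour construction heuristic for a CVRP
    (an illustrative initial-solution generator, not the paper's heuristics)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unvisited = set(bins)
    routes = []
    while unvisited:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [b for b in unvisited if load + demands[b] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda b: dist(pos, b))  # greedy: closest bin
            route.append(nxt)
            load += demands[nxt]
            pos = nxt
            unvisited.discard(nxt)
        if not route:  # a single bin exceeds vehicle capacity: no feasible route
            raise ValueError("bin demand exceeds vehicle capacity")
        routes.append(route)
    return routes

# Four bins, vehicle capacity 10: a new route is opened when the vehicle fills up.
depot = (0.0, 0.0)
bins = [(1.0, 0.0), (2.0, 0.0), (0.0, 3.0), (0.0, 4.0)]
demands = {bins[0]: 6, bins[1]: 6, bins[2]: 4, bins[3]: 4}
print(nearest_neighbor_cvrp(depot, bins, demands, 10))
```

Seeding a meta-heuristic with such a feasible greedy solution, rather than a random one, typically shortens the time to reach good solutions, which is what the paper's comparison against random initialization probes.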

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed an asymmetric U-shaped network based on the U-net core architecture to segment KUS images accurately and reliably, which mainly consists of a dense residual connection encoder, a multi-step up-sampling decoder with the hybrid attention module, and a side-out deep supervision module.
Abstract:
• An asymmetric U-shaped network is developed to segment the kidney automatically.
• The multi-step down-sampling strategy can improve the adaptability of the network.
• A hybrid attention module is designed to capture more robust kidney characteristics.
• The side-out module guides the network to predict precise segmentations.
Kidney ultrasound (KUS) image segmentation is one of the key steps in computer-aided diagnosis. The perturbation of heterogeneous structures, similar intensity distributions, and kidney morphology poses challenges for the segmentation of KUS images. In this paper, we propose an asymmetric U-shaped network based on the U-net core architecture to segment KUS images accurately and reliably. Specifically, the architecture mainly consists of a dense residual connection encoder, a multi-step up-sampling decoder with a hybrid attention module, and a side-out deep supervision module. The dense residual connection encoder can capture sufficient kidney feature information to improve the representation ability of the network. The hybrid attention module further guides the network to pay more attention to the representation of the kidney. In addition, the side-out deep supervision module helps the network obtain segmentation results that are closer to the ground-truth masks. Moreover, to reduce network parameters, we propose a multi-step up-sampling optimization strategy that simplifies the design of the network. We compare our method with several state-of-the-art medical image segmentation methods on the same KUS dataset using seven quantitative metrics. The results of our method on Jaccard, Dice, Accuracy, Recall, Precision, ASSD and AUC are 89.95%, 94.59%, 98.65%, 94.47%, 95.07%, 0.3006 and 0.9703, respectively. Experimental results demonstrate that the proposed method achieves the most competitive segmentation performance on KUS images.

Journal ArticleDOI
TL;DR: In this paper, a new blend of rough sets and Pythagorean fuzzy sets is proposed, namely rough Pythagorean fuzzy sets (RPFSs), which can encapsulate two distinct types of uncertainty that appear in imprecise available data through the approximation of Pythagorean fuzzy sets in a crisp approximation space.
Abstract: A rough set approximates a subset of a universal set on the basis of some binary relation and is significant for the reduction of attributes of an information system. On the other hand, a Pythagorean fuzzy set provides information about the extent of truthness and falsity of a statement. These two theories deal with different forms of uncertainty and can be united to obtain their combined benefits. This paper contributes a new blend of rough sets and Pythagorean fuzzy sets, namely rough Pythagorean fuzzy sets. This model can encapsulate two distinct types of uncertainty that appear in imprecise available data through the approximation of Pythagorean fuzzy sets in a crisp approximation space. We define rough Pythagorean fuzzy sets on the basis of an equivalence relation and generalize them to arbitrary binary relations. The manuscript also provides a general framework to study rough Pythagorean fuzzy approximations of different k-step neighborhood systems. The identities and properties of upper and lower rough Pythagorean fuzzy approximation operators are discussed for the neighborhood systems induced from different types of binary relations. Further, we develop algorithms that compute the reduct family, core and rough Pythagorean fuzzy approximations of single-valued and set-valued information systems using an indiscernibility relation and a similarity relation, respectively. These algorithms are then demonstrated on simple yet illustrative applications.
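One plausible reading of the approximation operators is the usual min/max construction over equivalence classes, applied to the membership and non-membership degrees. This sketch is an assumption on my part and should be checked against the paper's exact definitions:

```python
def rough_pfs_approximations(universe, classes, pfs):
    """Lower/upper approximations of a Pythagorean fuzzy set in a crisp
    approximation space (a plausible min/max construction over equivalence
    classes; the paper's exact operators may differ)."""
    lower, upper = {}, {}
    for x in universe:
        cls = next(c for c in classes if x in c)  # equivalence class [x]
        mus = [pfs[y][0] for y in cls]            # membership degrees in [x]
        nus = [pfs[y][1] for y in cls]            # non-membership degrees in [x]
        lower[x] = (min(mus), max(nus))   # pessimistic: least membership in [x]
        upper[x] = (max(mus), min(nus))   # optimistic: greatest membership in [x]
    return lower, upper

# Universe {a, b, c}; a and b are indiscernible. Each (mu, nu) pair is a
# hypothetical Pythagorean fuzzy value satisfying mu^2 + nu^2 <= 1.
U = ["a", "b", "c"]
classes = [{"a", "b"}, {"c"}]
pfs = {"a": (0.9, 0.3), "b": (0.6, 0.5), "c": (0.4, 0.7)}
lower, upper = rough_pfs_approximations(U, classes, pfs)
print(lower["a"], upper["a"])  # (0.6, 0.5) (0.9, 0.3)
```

For the singleton class {c} the lower and upper approximations coincide, mirroring the classical rough-set fact that definable sets approximate themselves exactly.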

Journal ArticleDOI
TL;DR: In this paper , a two-stage framework was proposed to detect fraudulent transactions that incorporates a deep Autoencoder as a representation learning method, and supervised deep learning techniques, and the experimental evaluations revealed that the proposed approach improved the performance of the employed deep learning-based classifiers.
Abstract: Due to the growth of e-commerce and online payment methods, the number of fraudulent transactions has increased. Financial institutions with online payment systems must utilize automatic fraud detection systems to reduce losses incurred due to fraudulent activities. The problem of fraud detection is often formulated as a binary classification model that can distinguish fraudulent transactions. Embedding the input data of the fraud dataset into a lower-dimensional representation is crucial to building robust and accurate fraud detection systems. This study proposes a two-stage framework to detect fraudulent transactions that incorporates a deep Autoencoder as a representation learning method, together with supervised deep learning techniques. The experimental evaluations revealed that the proposed approach improves the performance of the employed deep learning-based classifiers. Specifically, the deep learning classifiers trained on the transformed dataset obtained by the deep Autoencoder significantly outperform their baseline counterparts trained on the original data in terms of all performance measures. In addition, models created using the deep Autoencoder outperform those created using a principal component analysis (PCA)-transformed dataset, as well as existing models.

Journal ArticleDOI
TL;DR: In this article , a transfer learning strategy applied to state-of-the-art Convolutional Neural Networks (CNNs) fed with image-based representations of touch gestures performed by users on mobile devices is presented.
Abstract: Mobile devices are nowadays ubiquitous. They are equipped with a variety of sensors, each designed for capturing specific signals which can be exploited to acquire discriminating user traits, thus allowing, for instance, authorized users to be recognized. In this regard, we focus on capturing soft biometric traits from smartphones. Soft biometric information extracted from a human body (e.g., gender and age) is ancillary information proven to improve the performance of biometric authentication systems, and it has drawn a great deal of attention for its applications in healthcare, smart spaces and the digital world as a whole. This paper presents an approach to gender and age-group recognition, namely TGSB, leveraging a transfer learning strategy applied to state-of-the-art Convolutional Neural Networks (CNNs) fed with image-based representations of touch gestures performed by users on mobile devices. We perform experiments considering, one at a time, touch gestures of the same kind, and combinations thereof, with intermediate and late fusion learning strategies. Experiments show that TGSB is a promising approach, with up to 94% accuracy for gender recognition and up to 99% for age-group recognition. The most useful touch gesture for gender and age-group recognition is Scroll, with 81% and 96% accuracy, respectively. We show that combining multiple touch gestures (intermediate fusion) with a joint latent subspace learning mechanism in CNNs improves the TGSB performance, reaching up to 99% accuracy when considering a combination of two Scroll gestures. Compared to previous works, TGSB exhibits much better performance in both gender and age-group recognition.
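Late fusion of per-gesture classifiers can be sketched as probability averaging. The classifiers and scores below are hypothetical; the paper's intermediate fusion instead merges CNN features in a joint latent subspace before classification:

```python
import numpy as np

def late_fusion(prob_maps):
    """Late fusion: average the class-probability outputs of per-gesture
    classifiers, then take the argmax (illustrative sketch)."""
    avg = np.mean(np.asarray(prob_maps), axis=0)
    return int(np.argmax(avg)), avg

# Two hypothetical per-gesture classifiers scoring two classes; the Scroll-based
# classifier is more confident, so the fused decision follows it.
scroll_probs = np.array([0.7, 0.3])
tap_probs = np.array([0.4, 0.6])
label, avg = late_fusion([scroll_probs, tap_probs])
print(label, np.round(avg, 2))  # 0 [0.55 0.45]
```

Fusing at the decision level like this keeps the per-gesture models independent, whereas intermediate fusion lets the gestures' features interact during training, which is what gives TGSB its reported accuracy gains.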