
Showing papers in "Complex & Intelligent Systems in 2021"


Journal ArticleDOI
TL;DR: To meet the performance requirements of IoT enabled services, context-based offloading can play a crucial role, according to the study of drawn results and limitations of the existing frameworks.
Abstract: Internet of Things (IoT) applications and services are increasingly becoming a part of daily life; from smart homes to smart cities, industry, and agriculture, IoT is penetrating practically every domain. Data are collected by IoT applications, mostly through sensors connected to devices, and with increasing demand it is not possible to process all the data on the devices themselves. The data collected by device sensors are vast and require high-speed computation and processing, which demand advanced resources. Many crucial applications and services must meet multiple performance parameters, such as time-sensitivity and energy efficiency; computation offloading frameworks come into play to meet these performance parameters and extreme computation requirements. Offloading computation or data to nearby devices or to a fog or cloud structure can help meet the resource requirements of IoT applications. In this paper, the role of context or situation in performing offloading is studied, and the conclusion is drawn that context-based offloading can play a crucial role in meeting the performance requirements of IoT-enabled services. Several existing frameworks (EMCO, MobiCOP-IoT, Autonomic Management Framework, CSOS, and Fog Computing Framework), chosen for their novelty and optimal performance, are taken up for implementation analysis and compared with the MAUI, AnyRun Computing (ARC), AutoScaler, Edge computing, and Context-Sensitive Model for Offloading System (CoSMOS) frameworks. Based on the drawn results and the limitations of the existing frameworks, future directions for offloading scenarios are discussed.

120 citations


Journal ArticleDOI
TL;DR: A crow search-based convolution neural networks model has been implemented in gesture recognition pertaining to the HCI domain and generates 100 percent training and testing accuracy that justifies the superiority of the model against traditional state-of-the-art models.
Abstract: Human–computer interaction (HCI) and related technologies focus on the implementation of interactive computational systems. Studies in HCI emphasize system use, the creation of new techniques that support user activities, access to information, and seamless communication. The use of artificial intelligence and deep learning-based models has been extensive across various domains, yielding state-of-the-art results. In the present study, a crow search-based convolutional neural network model is implemented for gesture recognition in the HCI domain. The hand gesture dataset used in the study is publicly available and was downloaded from Kaggle. A one-hot encoding technique is used to convert the categorical data values to binary form, followed by the crow search algorithm (CSA) to select optimal hyper-parameters for training the convolutional neural network on the dataset. Irrelevant parameters are eliminated from consideration, which contributes to enhanced accuracy in classifying the hand gestures. The model generates 100 percent training and testing accuracy, which justifies its superiority over traditional state-of-the-art models.

114 citations
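The hyper-parameter search the abstract describes follows the crow search metaheuristic. Below is a minimal sketch of canonical CSA over a continuous search box, assuming a generic objective; the paper's real objective (CNN validation accuracy on the Kaggle gesture data) and its hyper-parameter choices are not specified here, so the two parameters in the demo are purely illustrative.

```python
import numpy as np

def crow_search(objective, bounds, n_crows=20, n_iter=100,
                fl=2.0, ap=0.1, seed=0):
    """Minimise `objective` over a box; bounds has shape (dim, 2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_crows, len(bounds)))  # positions
    mem = x.copy()                        # best position each crow remembers
    mem_fit = np.array([objective(p) for p in mem])
    for _ in range(n_iter):
        for i in range(n_crows):
            j = rng.integers(n_crows)     # crow i follows a random crow j
            if rng.random() >= ap:        # j unaware: chase j's memory
                x[i] += rng.random() * fl * (mem[j] - x[i])
            else:                         # j aware: relocate randomly
                x[i] = rng.uniform(lo, hi)
            x[i] = np.clip(x[i], lo, hi)
            f = objective(x[i])
            if f < mem_fit[i]:            # update crow i's memory
                mem[i], mem_fit[i] = x[i].copy(), f
    best = mem_fit.argmin()
    return mem[best], mem_fit[best]

# Stand-in objective over two hypothetical hyper-parameters
# (log10 learning rate, dropout); real use would train the CNN and
# return 1 - validation_accuracy.
bounds = np.array([[-5.0, -1.0], [0.0, 0.6]])
best, fit = crow_search(lambda p: (p[0] + 3) ** 2 + (p[1] - 0.3) ** 2, bounds)
print(best, fit)
```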


Journal ArticleDOI
TL;DR: The paper describes the usage of self-learning Hierarchical LSTM technique for classifying hatred and trolling contents in social media code-mixed data and the method developed based on HLSTM model helps in recognizing the hatred word context by mining the intention of the user for using that word in the sentence.
Abstract: The paper describes the use of a self-learning hierarchical LSTM (HLSTM) technique for classifying hatred and trolling content in social media code-mixed data. Hierarchical LSTM-based learning is a novel learning architecture inspired by neural learning models. The proposed HLSTM model is trained to identify the hatred and trolling words present in social media content, and is equipped with a self-learning and predicting mechanism for annotating hatred words in the transliteration domain. The Hindi–English data are labeled into Hindi, English, and hatred classes for classification. Word-embedding and character-embedding features are used for word representation in the sentence to detect hatred words. The method based on the HLSTM model helps recognize the context of a hatred word by mining the user's intention in using that word in the sentence. Extensive experiments suggest that the HLSTM-based classification model achieves an accuracy of 97.49% when evaluated against standard baselines such as BLSTM, CRF, LR, SVM, Random Forest, and Decision Tree models, especially when hatred and trolling words are present in the social media data.

111 citations
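A minimal sketch of the word-plus-character representation the abstract mentions: per-token character LSTM encodings concatenated with word embeddings, feeding a bidirectional LSTM tagger over the Hindi/English/hatred labels. Vocabulary sizes, sequence lengths, and layer widths are illustrative assumptions, not the paper's exact HLSTM architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_WORDS, MAX_CHARS = 40, 12            # sentence/word lengths (assumed)
WORD_VOCAB, CHAR_VOCAB = 20000, 100      # vocabulary sizes (assumed)

word_in = layers.Input(shape=(MAX_WORDS,), dtype="int32")
char_in = layers.Input(shape=(MAX_WORDS, MAX_CHARS), dtype="int32")

w = layers.Embedding(WORD_VOCAB, 100)(word_in)   # word-level features
c = layers.Embedding(CHAR_VOCAB, 25)(char_in)    # char embeddings per word
c = layers.TimeDistributed(layers.LSTM(25))(c)   # char-level encoder per token
x = layers.Concatenate()([w, c])                 # fuse both views
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
out = layers.TimeDistributed(layers.Dense(3, activation="softmax"))(x)

model = tf.keras.Model([word_in, char_in], out)  # Hindi/English/hatred tags
model.compile("adam", "sparse_categorical_crossentropy")
```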


Journal ArticleDOI
TL;DR: The paper carefully surveys various issues related to recommender systems that use AI, and also reviews the improvements made to these systems through the use of such AI approaches as fuzzy techniques, transfer learning, genetic algorithms, evolutionary algorithms, neural networks and deep learning, and active learning.
Abstract: Recommender systems provide personalized service support to users by learning their previous behaviors and predicting their current preferences for particular products. Artificial intelligence (AI), particularly computational intelligence and machine learning methods and algorithms, has been naturally applied in the development of recommender systems to improve prediction accuracy and solve data sparsity and cold start problems. This position paper systematically discusses the basic methodologies and prevailing techniques in recommender systems and how AI can effectively improve the technological development and application of recommender systems. The paper not only reviews cutting-edge theoretical and practical contributions, but also identifies current research issues and indicates new research directions. It carefully surveys various issues related to recommender systems that use AI, and also reviews the improvements made to these systems through the use of such AI approaches as fuzzy techniques, transfer learning, genetic algorithms, evolutionary algorithms, neural networks and deep learning, and active learning. The observations in this paper will directly support researchers and professionals to better understand current developments and new directions in the field of recommender systems using AI.

105 citations


Journal ArticleDOI
TL;DR: A new framework of cascaded deep learning classifiers to enhance the performance of these CAD systems for highly suspected COVID-19 and pneumonia diseases in X-ray images; results show that the VGG16, ResNet50V2, and Dense Neural Network (DenseNet169) models achieved the best detection accuracy for COVID-19, viral (non-COVID-19) pneumonia, and bacterial pneumonia images, respectively.
Abstract: Computer-aided diagnosis (CAD) systems are considered a powerful tool for physicians to support identification of the novel Coronavirus Disease 2019 (COVID-19) using medical imaging modalities. This article therefore proposes a new framework of cascaded deep learning classifiers to enhance the performance of these CAD systems for highly suspected COVID-19 and pneumonia diseases in X-ray images. Our proposed deep learning framework constitutes two major advancements. First, the complicated multi-label classification of X-ray images is simplified into a series of binary classifiers, one for each tested health status; this mimics the clinical situation of diagnosing potential diseases for a patient. Second, the cascaded architecture of COVID-19 and pneumonia classifiers is flexible enough to use different fine-tuned deep learning models simultaneously, achieving the best performance in confirming infected cases. This study includes eleven pre-trained convolutional neural network models, such as the Visual Geometry Group Network (VGG) and Residual Neural Network (ResNet). They have been successfully tested and evaluated on a public X-ray image dataset covering normal and three diseased cases. The results of the proposed cascaded classifiers show that the VGG16, ResNet50V2, and Dense Neural Network (DenseNet169) models achieve the best detection accuracy for COVID-19, viral (non-COVID-19) pneumonia, and bacterial pneumonia images, respectively. Furthermore, the performance of our cascaded deep learning classifiers is superior to the multi-label classification methods for COVID-19 and pneumonia diseases in previous studies. The proposed deep learning framework is therefore a good option for the clinical routine, assisting the diagnostic procedures for COVID-19 infection.

97 citations
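The cascade logic the abstract describes reduces to a chain of binary decisions. A hedged sketch, where each stage is any fine-tuned binary model (the paper uses VGG16, ResNet50V2, and DenseNet169 among eleven candidates) and the 0.5 threshold is an assumption:

```python
# Each stage answers one yes/no health-status question; an image moves down
# the cascade only if the current stage rejects its label.
def cascade_predict(image, stages):
    """stages: ordered (label, model) pairs; model(image) returns the
    probability that the image belongs to that label."""
    for label, model in stages:
        if model(image) >= 0.5:        # stage confirms its disease/status
            return label
    return "normal"                    # no stage fired

# Toy usage with stand-in models (constant probabilities).
stages = [("covid19", lambda x: 0.2), ("viral_pneumonia", lambda x: 0.7)]
print(cascade_predict("xray.png", stages))   # -> viral_pneumonia
```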


Journal ArticleDOI
TL;DR: In this article, the authors summarize 14 countries' up-to-date national strategies and plans for Industry 4.0, and explain new terminologies and challenges for clarity and completeness.
Abstract: Since 2011, when the concepts of Industry 4.0 were first announced, this industrial revolution has grown and expanded from theoretical concepts to real-world applications. Its practicalities can be found in many fields and affect nearly all of us in many ways. While we are adapting to new changes, adjustments are starting to appear at national and international levels. It is becoming clear that it is not just new innovations at play: technical advancements, governmental policies, and markets have never been so intertwined. Here, we describe the general concepts of Industry 4.0 and explain some new terminologies and challenges for clarity and completeness. The core contribution of this paper is a summary of 14 countries' up-to-date national strategies and plans for Industry 4.0. Some are bottom-up, such as Portugal's; some are top-down, such as Italy's; and a few countries, like the United States, had already been moving in this direction long before 2011. We see governments tailoring their efforts accordingly, and industries adapting to, as well as driving, those changes.

92 citations


Journal ArticleDOI
TL;DR: In this paper, some novel directional correlation coefficients are put forward to compute the relationship between two Pythagorean fuzzy sets by taking four parameters of the PFSs into consideration, which are the membership degree, non-membership degree, strength of commitment, and direction of commitment.
Abstract: Compared to intuitionistic fuzzy sets, Pythagorean fuzzy sets (PFSs) can provide decision makers with more freedom to express their evaluation information. There exist some research results on the correlation coefficient between PFSs, but sometimes they fail to deal with the problems of disease diagnosis and cluster analysis. To tackle the drawbacks of the existing correlation coefficients between PFSs, some novel directional correlation coefficients are put forward to compute the relationship between two PFSs by taking four parameters of the PFSs into consideration: the membership degree, non-membership degree, strength of commitment, and direction of commitment. Afterwards, two practical examples are given to show the application of the proposed directional correlation coefficient in disease diagnosis, and the application of the proposed weighted directional correlation coefficient in cluster analysis. Finally, they are compared with the previous correlation coefficients that have been developed for PFSs.

79 citations
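For orientation, the four PFS parameters the abstract lists can be computed from a membership pair via Yager's polar form; a minimal sketch, assuming the common conventions mu = r·cos(theta), nu = r·sin(theta), and direction d = 1 − 2·theta/pi (the paper's directional correlation coefficients themselves are not reproduced):

```python
import math

def pfs_parameters(mu, nu):
    """Four parameters of a Pythagorean fuzzy grade (mu^2 + nu^2 <= 1)."""
    assert mu ** 2 + nu ** 2 <= 1.0, "not a valid Pythagorean fuzzy grade"
    r = math.hypot(mu, nu)                # strength of commitment
    theta = math.atan2(nu, mu)            # angle in [0, pi/2]
    d = 1.0 - 2.0 * theta / math.pi       # direction of commitment
    return mu, nu, r, d

print(pfs_parameters(0.8, 0.5))           # strong, mostly positive commitment
```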


Journal ArticleDOI
TL;DR: A new automated deep learning method is proposed for the classification of multiclass brain tumors using a modified genetic algorithm based on metaheuristics and a non-redundant serial-based approach.
Abstract: Multiclass classification of brain tumors is an important area of research in the field of medical imaging. Since accuracy is crucial in this classification, a number of techniques have been introduced by computer vision researchers; however, they still face the issue of low accuracy. In this article, a new automated deep learning method is proposed for the classification of multiclass brain tumors. To realize the proposed method, the pre-trained DenseNet201 deep learning model is fine-tuned and then trained using deep transfer learning on imbalanced data. The features of the trained model are extracted from the average pool layer, which represents the very deep information of each type of tumor. However, the features of this layer are not sufficient for a precise classification; therefore, two feature selection techniques are proposed. The first is Entropy–Kurtosis-based High Feature Values (EKbHFV) and the second is a modified genetic algorithm (MGA) based on metaheuristics. The features selected by the MGA are further refined by the proposed new threshold function. Finally, both EKbHFV- and MGA-based features are fused using a non-redundant serial-based approach and classified using a multiclass cubic SVM classifier. For the experimental process, two datasets, BRATS2018 and BRATS2019, are used without augmentation, achieving an accuracy of more than 95%. A precise comparison of the proposed method with other neural networks shows the significance of this work.

79 citations
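The EKbHFV selection step lends itself to a small sketch: score each deep-feature column by its entropy and kurtosis and keep the top fraction. The combination rule below (entropy times absolute kurtosis) is an assumption for illustration; the paper's exact scoring is not reproduced.

```python
import numpy as np
from scipy.stats import kurtosis, entropy

def ekbhfv_select(features, keep=0.5):
    """features: (n_samples, n_features) array of average-pool activations."""
    n_feat = features.shape[1]
    ent = np.empty(n_feat)
    for j in range(n_feat):
        hist, _ = np.histogram(features[:, j], bins=32)
        ent[j] = entropy(hist + 1e-12)            # Shannon entropy per column
    kur = kurtosis(features, axis=0)
    score = ent * np.abs(kur)                     # assumed combination rule
    idx = np.argsort(score)[::-1][: int(keep * n_feat)]
    return features[:, idx], idx

X = np.random.rand(200, 1920)                     # dummy DenseNet201 features
X_sel, kept = ekbhfv_select(X, keep=0.25)
```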


Journal ArticleDOI
TL;DR: Experiments prove that the ensemble learning approach gives promising results against other state-of-the-art techniques.
Abstract: Receiving an accurate emotional response from robots has been a challenging task for researchers for the past few years. With the advancements in technology, robots like service robots interact with users of different cultural and lingual backgrounds. The traditional approach towards speech emotion recognition cannot be utilized to enable the robot to give an efficient, emotional response. The conventional approach uses the same corpus for both training and testing of classifiers to detect accurate emotions, but this approach cannot be generalized for multi-lingual environments, which is a requirement for robots used by people all across the globe. In this paper, a series of experiments is conducted to highlight the effect of ensemble learning with a majority voting technique for a cross-corpus, multi-lingual speech emotion recognition system, and the performance of the ensemble learning approach is compared against traditional machine learning algorithms. This study tests a classifier's performance trained on one corpus with data from another corpus to evaluate its efficiency for multi-lingual emotion detection. According to the experimental analysis, different classifiers give the highest accuracy for different corpora; using an ensemble learning approach gives the benefit of combining all classifiers' effects instead of choosing one classifier and compromising accuracy on a certain language corpus. Experiments show an increased accuracy of 13% for the Urdu corpus, 8% for the German corpus, 11% for the Italian corpus, and 5% for the English corpus in within-corpus testing. For cross-corpus experiments, an improvement of 2% is achieved when training on Urdu data and testing on German data, and 15% when training on Urdu data and testing on Italian data. An increase of 7% in accuracy is obtained when testing on Urdu data and training on German data, 3% when testing on Urdu data and training on Italian data, and 5% when testing on Urdu data and training on English data. The experiments prove that the ensemble learning approach gives promising results compared with other state-of-the-art techniques.

69 citations
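The majority-voting setup the abstract evaluates maps directly onto a hard-voting ensemble; a minimal sketch with three generic classifiers (the paper's acoustic feature extraction and exact model pool are assumed done elsewhere):

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def cross_corpus_ensemble(X_train, y_train, X_test, y_test):
    """Train on one corpus (e.g., Urdu features), test on another."""
    vote = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("svm", SVC()),
            ("rf", RandomForestClassifier(n_estimators=200)),
        ],
        voting="hard",                  # majority vote across classifiers
    )
    vote.fit(X_train, y_train)
    return vote.score(X_test, y_test)   # cross-corpus accuracy
```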


Journal ArticleDOI
TL;DR: An elaborate study of the different CNN techniques used in image denoising, in which some state-of-the-art CNN image denoising methods are depicted in graphical form, while other methods are explained in detail.
Abstract: Image denoising faces significant challenges arising from the sources of noise; specifically, Gaussian, impulse, salt, pepper, and speckle noise are complicated sources of noise in imaging. The convolutional neural network (CNN) has increasingly received attention in the image denoising task, and several CNN methods for denoising images have been studied, evaluated on different datasets. In this paper, we offer an elaborate study of the different CNN techniques used in image denoising. Different CNN methods for image denoising are categorized and analyzed, and popular datasets used for evaluating CNN image denoising methods are investigated. Previous and recent CNN image denoising papers were selected for review and analysis, and the motivations and principles of the CNN methods are outlined. Some state-of-the-art CNN image denoising methods are depicted in graphical form, while other methods are explained in detail. Potential challenges and directions for future research are also fully explicated.

66 citations
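Many of the reviewed denoisers share the residual-learning pattern: the network predicts the noise, which is subtracted from the input. A minimal DnCNN-style sketch in PyTorch (widths and depth are illustrative, not tied to any one reviewed method):

```python
import torch
import torch.nn as nn

class TinyDnCNN(nn.Module):
    def __init__(self, channels=1, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)      # subtract the predicted noise

x = torch.randn(4, 1, 64, 64)                # a batch of noisy patches
print(TinyDnCNN()(x).shape)                  # -> torch.Size([4, 1, 64, 64])
```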


Journal ArticleDOI
TL;DR: A novel fusion model of hand-crafted and deep learning features, called the FM-HCF-DLF model, is presented for the diagnosis and classification of COVID-19; it was experimentally validated on a chest X-ray dataset and yielded superior performance.
Abstract: The COVID-19 pandemic is growing at an exponential rate, while access to rapid test kits remains restricted, so the design and implementation of COVID-19 testing kits remain an open research problem. Several findings attained using radio-imaging approaches suggest that the images contain important information related to coronaviruses. The application of recently developed artificial intelligence (AI) techniques, integrated with radiological imaging, is helpful for the precise diagnosis and classification of the disease. In this view, the current research paper presents a novel fusion model of hand-crafted and deep learning features, called the FM-HCF-DLF model, for the diagnosis and classification of COVID-19. The proposed FM-HCF-DLF model comprises three major processes: Gaussian filtering-based preprocessing, FM-based feature extraction, and classification. The FM model fuses hand-crafted features, obtained using local binary patterns (LBP), with deep learning (DL) features from the convolutional neural network (CNN)-based Inception v3 technique. To further improve the performance of the Inception v3 model, a learning rate scheduler with the Adam optimizer is applied. At last, a multilayer perceptron (MLP) is employed to carry out the classification process. The proposed FM-HCF-DLF model was experimentally validated using a chest X-ray dataset. The experimental outcomes show that the proposed model yielded superior performance, with a maximum sensitivity of 93.61%, specificity of 94.56%, precision of 94.85%, accuracy of 94.08%, F score of 93.2% and kappa value of 93.5%.
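The fusion step pairs hand-crafted LBP descriptors with deep CNN features. A hedged sketch of that pairing using scikit-image (the paper's Inception v3 extractor and MLP head are assumed to exist; the 2048-dimensional deep vector below is a stand-in):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    """Uniform-LBP histogram of a grayscale chest X-ray (2-D array)."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def fuse(gray, deep_features):
    """Concatenate hand-crafted LBP features with deep CNN features."""
    return np.concatenate([lbp_histogram(gray), deep_features])

fused = fuse(np.random.rand(224, 224), np.random.rand(2048))  # dummy inputs
print(fused.shape)                                            # -> (2058,)
```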

Journal ArticleDOI
TL;DR: A robust plant disease classification system is introduced, based on a custom CenterNet framework with DenseNet-77 as the base network; it is more proficient and reliable at identifying and classifying plant diseases than other recent approaches.
Abstract: The agricultural production rate plays a pivotal role in the economic development of a country. However, plant diseases are the most significant impediment to the production and quality of food, and their identification at an early stage is crucial for global health and wellbeing. The traditional diagnosis process involves visual assessment of an individual plant by a pathologist through on-site visits. However, manual examination for crop diseases is limited by its low accuracy and the limited availability of human resources. To tackle such issues, there is a demand for automated approaches capable of efficiently detecting and categorizing numerous plant diseases. Precise identification and classification of plant diseases is a tedious job because of low-intensity information in the image background and foreground, the strong color resemblance between healthy and diseased plant areas, noise in the samples, and variations in the position, chrominance, structure, and size of plant leaves. To tackle the above-mentioned problems, we introduce a robust plant disease classification system built on a custom CenterNet framework with DenseNet-77 as the base network. The presented method follows three steps. First, annotations are developed to get the region of interest. Second, an improved CenterNet is introduced, in which DenseNet-77 is proposed for deep keypoint extraction. Finally, the one-stage detector CenterNet is used to detect and categorize several plant diseases. For the performance analysis, we use the PlantVillage Kaggle database, the standard dataset for plant diseases, which is challenging in terms of intensity variations, color changes, and differences in the shapes and sizes of leaves. Both the qualitative and quantitative analyses confirm that the presented method is more proficient and reliable at identifying and classifying plant diseases than other recent approaches.

Journal ArticleDOI
TL;DR: In this paper, a survey of federated learning and neural architecture search approaches based on reinforcement learning, evolutionary algorithms and gradient-based approaches is presented, which is categorized into online and offline implementations, and single and multiobjective search approaches.
Abstract: Federated learning is a recently proposed distributed machine learning paradigm for privacy preservation, which has found a wide range of applications where data privacy is of primary concern. Meanwhile, neural architecture search has become very popular in deep learning for automatically tuning the architecture and hyperparameters of deep neural networks. While both federated learning and neural architecture search face many open challenges, searching for optimized neural architectures in the federated learning framework is particularly demanding. This survey paper starts with a brief introduction to federated learning, covering horizontal, vertical, and hybrid federated learning. Then, neural architecture search approaches based on reinforcement learning, evolutionary algorithms, and gradient-based methods are presented. This is followed by a description of the recently proposed federated neural architecture search, which is categorized into online and offline implementations, and single- and multi-objective search approaches. Finally, remaining open research questions are outlined and promising research topics are suggested.
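As background for the federated setting the survey builds on, the baseline horizontal-FL aggregation rule (FedAvg-style weighted parameter averaging) fits in a few lines; model weights are plain NumPy arrays in this sketch:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: one list of layer arrays per client;
    client_sizes: number of local samples per client."""
    coeff = np.array(client_sizes, float) / float(sum(client_sizes))
    averaged = []
    for layer in zip(*client_weights):               # iterate layer-wise
        averaged.append(np.tensordot(coeff, np.stack(layer), axes=1))
    return averaged

# Two clients with a one-layer "model": weight matrix + bias vector.
w = [[np.ones((3, 3)), np.zeros(3)], [3 * np.ones((3, 3)), np.ones(3)]]
print(fedavg(w, client_sizes=[100, 300])[0][0, 0])   # -> 2.5
```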

Journal ArticleDOI
TL;DR: A thorough review of different security and privacy threats and the existing solutions that can provide security to social network users, with a discussion of open issues, challenges, and relevant security guidelines for achieving trustworthiness in online social networks.
Abstract: With fast-growing technology, online social networks (OSNs) have exploded in popularity over the past few years. The pivotal reason behind this phenomenon is the ability of OSNs to provide a platform for users to connect with their family, friends, and colleagues. Information shared on social networks and media spreads very fast, almost instantaneously, which makes it attractive for attackers seeking to gain information. The secrecy and safety of OSNs therefore need to be examined from various perspectives. There are numerous security and privacy issues related to users' shared information, especially when a user uploads personal content such as photos, videos, and audio. An attacker can maliciously use this shared information for illegitimate purposes, and the risks are even higher if children are targeted. To address these issues, this paper presents a thorough review of different security and privacy threats and the existing solutions that can provide security to social network users. We also discuss OSN attacks on various OSN web applications by citing some statistics reports. In addition, we discuss numerous defensive approaches to OSN security. Finally, this survey discusses open issues, challenges, and relevant security guidelines for achieving trustworthiness in online social networks.

Journal ArticleDOI
TL;DR: A multiple-strategy learning particle swarm optimization algorithm, called MSL-PSO, is proposed to solve problems with large-scale variables; different learning strategies are utilized in different stages to balance the convergence and diversity of the population.
Abstract: The balance between exploration and exploitation plays a significant role in meta-heuristic algorithms, especially when they are used to solve large-scale optimization problems. In this paper, we propose a multiple-strategy learning particle swarm optimization algorithm, called MSL-PSO, to solve problems with large-scale variables, in which different learning strategies are utilized in different stages. In the first stage, each individual probes some positions by learning from demonstrators that have better fitness, and from the mean position of the population. The best probed positions, one per individual (each having the best fitness among all positions probed by its corresponding individual), compose a new temporary population, which is sorted by fitness in descending order. In the second stage, each individual finds its demonstrators based on the rank of its best probed solution in the temporary population and its own rank in the current population, and learns from them using a new strategy. The first stage is used to improve the exploration capability, and the second is expected to balance the convergence and diversity of the population. To verify the effectiveness of MSL-PSO for solving large-scale optimization problems, empirical experiments are conducted on the CEC2008 problems with 100, 500, and 1000 dimensions, and on the CEC2010 problems with 1000 dimensions. Experimental results show that the proposed MSL-PSO is competitive with, or better than, ten state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: QSPR analysis of newly introduced indices based on the neighbourhood degree sum of nodes is performed, revealing their predictive power.
Abstract: A topological index is a numerical value associated with a chemical constitution, used to correlate chemical structure with various physical properties, chemical reactivity, or biological activity. In this work, some new indices based on the neighborhood degree sum of nodes are proposed. To make the computation of the novel indices convenient, an algorithm is designed. Quantitative structure–property relationship (QSPR) study is a good statistical method for investigating drug activity or binding modes for different receptors. A QSPR analysis of the newly introduced indices is performed, which reveals their predictive power. A comparative study of the novel indices against some well-known and widely used indices in structure–property modelling and isomer discrimination is carried out. Some mathematical properties of these indices are also discussed.
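The indices in question share one computational skeleton: compute S(v), the sum of the degrees of v's neighbours, then sum a function of S(u) and S(v) over the edges. A minimal networkx sketch (the edge function shown is an arbitrary illustrative choice, not one of the paper's specific indices):

```python
import networkx as nx

def neighborhood_degree_sum(G, v):
    """S(v): sum of the degrees of v's neighbours."""
    return sum(G.degree(u) for u in G.neighbors(v))

def nd_index(G, f):
    """Sum f(S(u), S(v)) over all edges of G."""
    S = {v: neighborhood_degree_sum(G, v) for v in G}
    return sum(f(S[u], S[v]) for u, v in G.edges())

G = nx.path_graph(5)                       # a toy molecular skeleton
print(nd_index(G, lambda a, b: a + b))     # illustrative edge function
```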

Journal ArticleDOI
TL;DR: An algorithm named fuzzy attribute-based joint integrated scheduling and tree formation (FAJIT) technique for tree formation and parent node selection using fuzzy logic in a heterogeneous network is proposed and is compared with the distributed algorithm for Integrated tree Construction and data Aggregation (DICA).
Abstract: A wireless sensor network (WSN) is used to sense the environment, collect data, and transmit it to the base station (BS) for analysis. A synchronized tree-based approach is an efficient way to aggregate data from various sensor nodes in a WSN environment; however, achieving energy efficiency in such tree formation is challenging. In this research work, an algorithm named the fuzzy attribute-based joint integrated scheduling and tree formation (FAJIT) technique is proposed for tree formation and parent node selection using fuzzy logic in a heterogeneous network. FAJIT mainly focuses on addressing the parent node selection problem in the heterogeneous network for aggregating different types of data packets to improve energy efficiency. Parent nodes are selected from the candidate nodes with the minimum number of dynamic neighbors, and fuzzy logic is applied in the case of an equal number of dynamic neighbors. In the proposed technique, fuzzy logic is first applied to the WSN, and then min–max normalization is used to retrieve normalized weights (membership values) for the given edges of the graph; the membership value denotes the degree to which an element belongs to a set. The node with the minimum sum of all weights is selected as the parent node. FAJIT is compared with the distributed algorithm for integrated tree construction and data aggregation (DICA) on various parameters: average schedule length, energy consumption, data interval, the total number of transmission slots, control overhead, and energy consumption in the control phase. The results demonstrate that the proposed algorithm is better in terms of energy efficiency.
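The parent-selection rule reduces to two steps: min–max normalize the edge weights, then pick the candidate with the minimum weight sum. A hedged sketch that keeps only that skeleton (the fuzzy membership computation producing the raw weights is assumed done upstream):

```python
def select_parent(candidates):
    """candidates: {node_id: [edge weights to its dynamic neighbours]}."""
    all_w = [w for ws in candidates.values() for w in ws]
    lo, hi = min(all_w), max(all_w)
    norm = lambda v: (v - lo) / (hi - lo) if hi > lo else 0.0
    totals = {n: sum(norm(w) for w in ws) for n, ws in candidates.items()}
    return min(totals, key=totals.get)      # parent = minimum weight sum

print(select_parent({"A": [3.0, 5.0, 2.0], "B": [1.0, 4.0, 4.5]}))  # -> 'B'
```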

Journal ArticleDOI
TL;DR: This work considers a fixed-charge solid transportation problem in a multi-objective environment where all the data are intuitionistic fuzzy numbers with membership and non-membership functions; the problem is reduced to a deterministic one using an accuracy function.
Abstract: During the past few decades, fuzzy decision-making has received considerable attention in the areas of science, engineering, economic systems, business, etc. To solve day-to-day problems, researchers use fuzzy data in transportation problems to represent uncontrollable factors, and most multi-objective transportation problems are solved using goal programming. However, when the problem contains interval-valued data, the solution provided by goal programming may not satisfy all decision-makers. For such conditions, we consider a fixed-charge solid transportation problem in a multi-objective environment where all the data are intuitionistic fuzzy numbers with membership and non-membership functions. The intuitionistic fuzzy transportation problem is transformed into an interval-valued problem using the $$(\alpha ,\beta )$$ -cut and is thereafter reduced to a deterministic problem using an accuracy function; the optimum value of each alternative corresponds to the optimum value of the accuracy function. A numerical example is included to illustrate the usefulness of the proposed model. Finally, conclusions and future work are described.
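In symbols, the reduction chain the abstract describes runs from an intuitionistic fuzzy quantity to an interval to a crisp value. A hedged sketch of the standard definitions involved (the paper's exact accuracy function may differ; the interval midpoint is shown as one common crisp reduction):

```latex
% (alpha,beta)-cut of an intuitionistic fuzzy set A, with alpha + beta <= 1:
A_{\alpha,\beta} = \{\, x : \mu_A(x) \ge \alpha,\ \nu_A(x) \le \beta \,\}
% reduction chain for a fuzzy cost \tilde{c}:
\tilde{c}
  \;\xrightarrow{(\alpha,\beta)\text{-cut}}\;
  \bigl[c^{L}_{\alpha,\beta},\, c^{U}_{\alpha,\beta}\bigr]
  \;\xrightarrow{\text{accuracy}}\;
  \tfrac{1}{2}\bigl(c^{L}_{\alpha,\beta} + c^{U}_{\alpha,\beta}\bigr)
```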

Journal ArticleDOI
TL;DR: The SCOR 4.0 model can be used by both the public and private sectors to understand and evaluate the performance of supply chains and to improve their supply chain strategies in a globalizing world.
Abstract: The supply chain operations reference (SCOR) model combines benchmarking, business process reengineering, and best practices into a reference model intended to be an industry standard. The SCOR model is one of the best models for describing supply chain activities in operations management, for research and practice alike. Today's information age brings radical changes in the structure of supply chains as well as developing technology. The purpose of this paper is to extend the SCOR model with new metrics related to Industry 4.0 and digitalization in order to understand and evaluate the performance of supply chains. New metrics are added to the SCOR model and a novel SCOR 4.0 model is proposed. The novel performance evaluation model is structured as a three-level hierarchy for evaluating the supply chain, and the problem is handled as a multi-criteria decision-making problem. This study uses a hybrid of the Best-Worst Method (BWM) and the Pythagorean fuzzy AHP method to determine the weights of the metrics. The SCOR model is thus adapted to performance evaluation of the supply chain in a globalizing world. The most important metrics for supply chain performance are determined and classified: level 1 metrics are evaluated by the Best-Worst Method, their inner levels are evaluated by the Pythagorean fuzzy AHP method, and the importance weights of the level 2 and level 3 metrics are obtained. A real application to an oil supply chain is presented to show the applicability of the proposed model, demonstrating that the SCOR 4.0 model can be used by both the public and private sectors to improve their supply chain strategies in a globalizing world.

Journal ArticleDOI
TL;DR: A reformed capsule network is developed for the detection and classification of diabetic retinopathy using the convolution and primary capsule layer, the features are extracted from the fundus images and then using the class capsule layer and softmax layer the probability that the image belongs to a specific class is estimated.
Abstract: Nowadays, diabetic retinopathy is a prominent cause of blindness among people who suffer from diabetes, and early and timely detection of this problem is critical for a good prognosis. An automated system for this purpose contains several phases, such as the identification and classification of lesions in fundus images. Machine learning techniques based on manual feature extraction, and automatic feature extraction with convolutional neural networks, have been presented for diabetic retinopathy detection. Recent developments, such as capsule networks in deep learning, and their significant success over traditional machine learning methods for a variety of applications, have inspired researchers to apply them to diabetic retinopathy diagnosis. In this paper, a reformed capsule network is developed for the detection and classification of diabetic retinopathy. Using the convolution and primary capsule layers, features are extracted from the fundus images, and then, using the class capsule layer and softmax layer, the probability that the image belongs to a specific class is estimated. The efficiency of the proposed reformed network is validated on the Messidor dataset with respect to four performance measures. The constructed capsule network attains accuracies of 97.98%, 97.65%, 97.65%, and 98.64% on healthy-retina, stage 1, stage 2, and stage 3 fundus images.
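Both capsule layers mentioned above rely on the capsule "squash" non-linearity, which maps a vector's length into [0, 1) so it can act as a class probability while preserving direction. A minimal sketch (the paper's full reformed network is not reproduced):

```python
import numpy as np

def squash(s, eps=1e-9):
    """Capsule squash: keep direction, map length into [0, 1)."""
    norm2 = np.sum(s * s, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

v = squash(np.array([3.0, 4.0]))      # input length 5
print(np.linalg.norm(v))              # -> 25/26 ~ 0.9615, same direction
```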

Journal ArticleDOI
TL;DR: Compared with ChOA and five state-of-the-art algorithms, the statistical results show that EChOA has strong competitive capabilities and promising prospects.
Abstract: The chimp optimization algorithm (ChOA) is a recently proposed metaheuristic that simulates the social status relationships and hunting behavior of chimps. As application fields grow more flexible and complex, researchers place higher demands on such native algorithms. In this paper, an enhanced chimp optimization algorithm (EChOA) is proposed to improve solution accuracy. First, highly disruptive polynomial mutation is used to initialize the population, which provides the foundation for global search. Next, Spearman's rank correlation coefficient is calculated between the chimps with the lowest social status and the leader chimp. To reduce the probability of falling into a local optimum, the beetle antennae operator is used to improve the less fit chimps by giving them visual capability. These three strategies enhance the exploration and exploitation of the native algorithm. To verify the function optimization performance, EChOA is comprehensively analyzed on 12 classical benchmark functions and 15 CEC2017 benchmark functions. The practicability of EChOA is also highlighted by three engineering design problems and the training of a multilayer perceptron. Compared with ChOA and five state-of-the-art algorithms, the statistical results show that EChOA has strong competitive capabilities and promising prospects.

Journal ArticleDOI
TL;DR: A comprehensive literature on brain tumor detection through magnetic resonance imaging to help the researchers is presented in this paper, which covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning and quantum machine learning for brain tumor analysis.
Abstract: Brain tumors arise owing to uncontrolled and rapid growth of cells and, if not treated at an initial stage, may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain challenging tasks. A major challenge for brain tumor detection arises from the variations in tumor location, shape, and size. The objective of this survey is to deliver comprehensive literature on brain tumor detection through magnetic resonance imaging to help researchers. This survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning, and quantum machine learning for brain tumor analysis. Finally, this survey provides all the important literature on the detection of brain tumors, with their advantages, limitations, developments, and future trends.

Journal ArticleDOI
TL;DR: This paper proposes several new distance and similarity measures for the SVNS model, and it is proven that the proposed similarity measures produced the most consistent ranking results compared to other existing similarity measures.
Abstract: The single-valued neutrosophic set (SVNS) is a well-known model for handling uncertain and indeterminate information. Information measures such as distance measures, similarity measures and entropy measures are very useful tools to be used in many applications such as multi-criteria decision making (MCDM), medical diagnosis, pattern recognition and clustering problems. A lot of such information measures have been proposed for the SVNS model. However, many of these measures have inherent problems that prevent them from producing reasonable or consistent results to the decision makers. In this paper, we propose several new distance and similarity measures for the SVNS model. The proposed measures have been verified and proven to comply with the axiomatic definition of the distance and similarity measure for the SVNS model. A detailed and comprehensive comparative analysis between the proposed similarity measures and other well-known existing similarity measures has been done. Based on the comparison results, it is clearly proven that the proposed similarity measures are able to overcome the shortcomings that are inherent in existing similarity measures. Finally, an extensive set of numerical examples, related to pattern recognition and medical diagnosis, is given to demonstrate the practical applicability of the proposed similarity measures. In all numerical examples, it is proven that the proposed similarity measures are able to produce accurate and reasonable results. To further verify the superiority of the suggested similarity measures, the Spearman’s rank correlation coefficient test is performed on the ranking results that were obtained from the numerical examples, and it was again proven that the proposed similarity measures produced the most consistent ranking results compared to other existing similarity measures.
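As a baseline for what the proposed measures improve on, the classical normalised Hamming distance (and its induced similarity) for SVNS triples is compact; a minimal sketch over (truth, indeterminacy, falsity) grades, not the paper's new measures:

```python
import numpy as np

def svns_hamming(A, B):
    """A, B: (n, 3) arrays of (T, I, F) grades over the same universe."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return np.abs(A - B).sum() / (3 * len(A))

def svns_similarity(A, B):
    return 1.0 - svns_hamming(A, B)       # similarity induced by distance

A = [(0.7, 0.2, 0.1), (0.4, 0.4, 0.3)]
B = [(0.6, 0.3, 0.1), (0.5, 0.3, 0.4)]
print(svns_similarity(A, B))              # higher = more similar
```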

Journal ArticleDOI
TL;DR: According to the computational experiments using standard TSPLIB instances, greedy–Levy ACO outperforms max–min ACO and other latest TSP solvers, which demonstrates the effectiveness of the proposed methodology.
Abstract: Ant colony optimization (ACO) algorithm is a meta-heuristic and reinforcement learning algorithm, which has been widely applied to solve various optimization problems. The key to improving the performance of ACO is to effectively resolve the exploration/exploitation dilemma. Epsilon greedy is an important and widely applied policy-based exploration method in reinforcement learning and has also been employed to improve ACO algorithms as the pseudo-stochastic mechanism. Levy flight is based on Levy distribution and helps to balance searching space and speed for global optimization. Taking advantage of both epsilon greedy and Levy flight, a greedy–Levy ACO incorporating these two approaches is proposed to solve complicated combinatorial optimization problems. Specifically, it is implemented on the top of max–min ACO to solve the traveling salesman problem (TSP) problems. According to the computational experiments using standard TSPLIB instances, greedy–Levy ACO outperforms max–min ACO and other latest TSP solvers, which demonstrates the effectiveness of the proposed methodology.
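The epsilon-greedy selection rule carried over from reinforcement learning is easy to isolate; a hedged sketch of next-city choice on top of the usual pheromone/heuristic attractiveness (the Lévy-flight adaptation of epsilon and the max–min pheromone bounds are omitted):

```python
import numpy as np

def next_city(tau, eta, current, unvisited, epsilon,
              alpha=1.0, beta=2.0, rng=None):
    """tau: pheromone matrix, eta: heuristic (1/distance) matrix."""
    rng = rng or np.random.default_rng()
    cand = np.array(sorted(unvisited))
    attract = tau[current, cand] ** alpha * eta[current, cand] ** beta
    if rng.random() < 1.0 - epsilon:
        return cand[attract.argmax()]                     # greedy exploitation
    return rng.choice(cand, p=attract / attract.sum())    # biased exploration
```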

Journal ArticleDOI
Lijuan Huang1, Guojie Xie1, Wende Zhao, Yan Gu1, Yi Huang1 
TL;DR: It is the recommendation of the authors that e-commerce platforms and logistics enterprises should pay attention to the prediction of regional logistics demand, choose scientific forecasting methods, and encourage the implementation of new distribution modes.
Abstract: With the rapid development of e-commerce, the backlog of distribution orders, insufficient logistics capacity, and other issues are becoming more and more serious. It is therefore very important for e-commerce platforms and logistics enterprises to quantify logistics demand. To meet this need, a forecasting indicator system of Guangdong logistics demand was constructed from the perspective of e-commerce. The GM (1, 1) model and Back Propagation (BP) neural network model were used to simulate and forecast the logistics demand of Guangdong province from 2000 to 2019. The results show that the Guangdong logistics demand forecasting indicator system has good applicability. Compared with the GM (1, 1) model, the BP neural network model has smaller prediction error and more stable prediction results. Based on the results of the study, it is the recommendation of the authors that e-commerce platforms and logistics enterprises should pay attention to the prediction of regional logistics demand, choose scientific forecasting methods, and encourage the implementation of new distribution modes.
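The first of the two compared models, GM(1,1), is compact enough to show whole; a minimal sketch of the standard grey-forecasting recipe (accumulate, fit the two development coefficients by least squares, forecast, difference back), with made-up numbers in the demo call:

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Standard GM(1,1): returns fitted series plus `horizon` forecasts."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                             # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)               # k = 0 reproduces x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([x1_hat[:1], np.diff(x1_hat)])  # inverse AGO

print(gm11_forecast([2.87, 3.28, 3.34, 3.72, 3.90], horizon=2).round(2))
```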

Journal ArticleDOI
TL;DR: In this article, the authors compared LSTM neural network and wavelet neural network (WNN) for spatio-temporal prediction of rainfall and runoff time-series trends in scarcely gauged hydrologic basins.
Abstract: This study compares the LSTM neural network and the wavelet neural network (WNN) for spatio-temporal prediction of rainfall and runoff time-series trends in scarcely gauged hydrologic basins. Using long-term in situ observed data for 30 years (1980–2009) from ten rain gauge stations and three discharge measurement stations, the rainfall and runoff trends in the Nzoia River basin are predicted from satellite-based meteorological data comprising precipitation, mean temperature, relative humidity, wind speed, and solar radiation. The prediction modelling was carried out in three sub-basins corresponding to the three discharge stations. LSTM and WNN were implemented with the same deep learning topological structure consisting of 4 hidden layers, each with 30 neurons. In predicting basin runoff from the five meteorological parameters, both models performed well, with respective R2 values of 0.8967 and 0.8820. The MAE and RMSE measures for the LSTM and WNN predictions ranged between 11 and 13 m3/s for the mean monthly runoff prediction. With the satellite-based meteorological data, LSTM predicted the mean monthly rainfall within the basin with R2 = 0.8610, compared to R2 = 0.7825 using WNN. The MAE for the mean monthly rainfall trend prediction was between 9 and 11 mm, while the RMSE varied between 15 and 21 mm. The performance of the models improved with an increase in the number of input parameters, which corresponded to the size of the sub-basin. In terms of computational time, both models converged to the lowest RMSE at nearly the same number of epochs, with WNN taking slightly longer to attain the minimum RMSE. The study shows that, in hydrologic basins with scarce meteorological and hydrological monitoring networks, the use of satellite-based meteorological data in deep learning neural network models is suitable for spatial and temporal analysis of rainfall and runoff trends.
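A hedged sketch of the LSTM topology as described (four hidden layers of 30 units over the five meteorological inputs); data loading, scaling, the window length, and the WNN counterpart are all outside this snippet:

```python
import tensorflow as tf

def build_lstm(timesteps, n_features=5):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, n_features)),
        tf.keras.layers.LSTM(30, return_sequences=True),
        tf.keras.layers.LSTM(30, return_sequences=True),
        tf.keras.layers.LSTM(30, return_sequences=True),
        tf.keras.layers.LSTM(30),            # final layer returns a vector
        tf.keras.layers.Dense(1),            # monthly runoff (or rainfall)
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_lstm(timesteps=12)             # assumed 12-month input window
model.summary()
```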

Journal ArticleDOI
TL;DR: In this paper, the authors introduced the Weighted Aggregate sum product assessment (WASPAS) method with Fermatean fuzzy sets (FFSs) for the HCW disposal location selection problem.
Abstract: Medical services inevitably generate healthcare waste (HCW) that may become hazardous to healthcare staff, patients, the population, and the environment. In most developing countries, HCW disposal management has become one of the fastest-growing challenges for urban municipalities and healthcare providers. Determining the location of HCW disposal centers is a relatively complex process due to the involvement of various alternatives and criteria and strict government guidelines about the disposal of HCW. The objective of the paper is to introduce the WASPAS (weighted aggregated sum product assessment) method with Fermatean fuzzy sets (FFSs) for the HCW disposal location selection problem. This method combines the score function, entropy measure, and classical WASPAS approach within the FFS context. Next, a combined procedure using entropy and the score function is proposed to estimate the criteria weights; to this end, a novel score function with desirable properties and some entropy measures are introduced in the FFS context. Further, an illustrative case study of the HCW disposal location selection problem on FFSs is presented, which evidences the practicality and efficacy of the developed approach. A comparative discussion and sensitivity analysis are provided to examine the stability of the introduced framework. The final results confirm that the proposed methodology can effectively handle the ambiguity and inaccuracy in the decision-making procedure for HCW disposal location selection.
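Two of the ingredients are standard enough to sketch: the Fermatean score function from the literature (mu^3 − nu^3; the paper proposes a novel variant not shown here) and the classical WASPAS aggregation that blends the weighted-sum and weighted-product models:

```python
import numpy as np

def fermatean_score(mu, nu):
    """Standard Fermatean fuzzy score; valid when mu^3 + nu^3 <= 1."""
    assert mu ** 3 + nu ** 3 <= 1.0 + 1e-12
    return mu ** 3 - nu ** 3

def waspas(X, w, lam=0.5):
    """X: (alternatives x criteria) matrix of normalised benefit scores."""
    X, w = np.asarray(X, float), np.asarray(w, float)
    wsm = X @ w                                  # weighted sum model
    wpm = np.prod(X ** w, axis=1)                # weighted product model
    return lam * wsm + (1 - lam) * wpm           # joint WASPAS utility

X = np.array([[0.8, 0.6, 0.9], [0.7, 0.9, 0.5]])  # hypothetical sites
print(waspas(X, w=[0.5, 0.3, 0.2]), fermatean_score(0.9, 0.4))
```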

Journal ArticleDOI
TL;DR: The interval type 2 fuzzy set is used in a fuzzy transportation problem to represent the transportation cost, demand, and supply and the efficiency of the proposed algorithm is described.
Abstract: The fuzzy transportation problem is a very popular, well-known optimization problem in the area of fuzzy sets and systems. In most cases, researchers use type-1 fuzzy sets for the costs of the transportation problem. However, a type-1 fuzzy number cannot fully capture the uncertainty arising from descriptions of human perception; the interval type-2 fuzzy set is an extended version of the type-1 fuzzy set that can handle this ambiguity. In this paper, interval type-2 fuzzy sets are used in a fuzzy transportation problem to represent the transportation cost, demand, and supply, and we define this problem as the interval type-2 fuzzy transportation problem. The utility of this type of fuzzy set as costs in transportation problems and its application in different real-world scenarios are described in this paper. We modify the classical Vogel's approximation method to solve this fuzzy transportation problem. To the best of our knowledge, no algorithm based on Vogel's approximation method exists in the literature for the fuzzy transportation problem with interval type-2 fuzzy transportation costs, demands, and supplies. Two numerical examples are used to demonstrate the efficiency of the proposed algorithm.
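The classical crisp Vogel approximation that the paper modifies is worth having in view; a minimal sketch (the interval type-2 fuzzy version replaces these crisp comparisons with a ranking over fuzzy costs, demands, and supplies):

```python
import numpy as np

def vam(cost, supply, demand):
    """Vogel's approximation for a balanced transportation problem."""
    cost = np.asarray(cost, float)
    supply, demand = list(map(float, supply)), list(map(float, demand))
    alloc = np.zeros_like(cost)
    rows, cols = set(range(len(supply))), set(range(len(demand)))

    def penalty(vals):                 # gap between the two cheapest cells
        v = sorted(vals)
        return v[1] - v[0] if len(v) > 1 else v[0]

    while rows and cols:
        rp = {i: penalty([cost[i, j] for j in cols]) for i in rows}
        cp = {j: penalty([cost[i, j] for i in rows]) for j in cols}
        i_best, j_best = max(rp, key=rp.get), max(cp, key=cp.get)
        if rp[i_best] >= cp[j_best]:
            i = i_best
            j = min(cols, key=lambda c: cost[i, c])
        else:
            j = j_best
            i = min(rows, key=lambda r: cost[r, j])
        q = min(supply[i], demand[j])  # allocate as much as possible
        alloc[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

cost = [[4, 8, 8], [16, 24, 16], [8, 16, 24]]
print(vam(cost, supply=[76, 82, 77], demand=[72, 102, 61]))
```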

Journal ArticleDOI
TL;DR: Experimental results show that the presented explainable recommendation framework provides high-quality recommendations with high accuracy, diversity, and explainability.
Abstract: A recommendation system is a technology that can mine users' preferences for items. Explainable recommendation produces recommendations for target users and, at the same time, gives the reasons behind them. Explainability improves the transparency of recommendations and the probability that users choose the recommended items. The merits of explainability are obvious, but it is not enough to focus solely on explainability in this field; it is essential to construct an explainable recommendation framework that improves the explainability of recommended items while maintaining accuracy and diversity. An explainable recommendation framework based on a knowledge graph and multi-objective optimization is proposed that can optimize the precision, diversity, and explainability of recommendations at the same time. The knowledge graph connects users and items through different relationships to obtain an explainable candidate list for the target user, and the path between the target user and a recommended item is used as the basis of an explanation. The explainable candidate list is then optimized through a multi-objective optimization algorithm to obtain the final recommendation list. Experimental results show that the presented framework provides high-quality recommendations with high accuracy, diversity, and explainability.
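The explanation-path idea is concrete enough for a toy sketch: in a user-item knowledge graph, each simple path from the target user to a candidate item doubles as an explanation. The graph below is invented for illustration; the multi-objective optimization over accuracy, diversity, and explainability is not reproduced:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("user:alice", "item:m1", {"rel": "rated"}),
    ("item:m1", "genre:scifi", {"rel": "has_genre"}),
    ("genre:scifi", "item:m2", {"rel": "has_genre"}),
])

# Every short user->item path is an explanation candidate for recommending m2.
for path in nx.all_simple_paths(G, "user:alice", "item:m2", cutoff=3):
    print(" -> ".join(path))
```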

Journal ArticleDOI
TL;DR: The proposed technique is validated on three benchmark databases BRATS 2018, BRATS 2019, and BRATS 2020 for tumor detection and achieved greater than 0.90 prediction scores in localization, segmentation and classification of brain lesions.
Abstract: Brain tumor is a group of anomalous cells. The brain is enclosed in a more rigid skull. The abnormal cell grows and initiates a tumor. Detection of tumor is a complicated task due to irregular tumor shape. The proposed technique contains four phases, which are lesion enhancement, feature extraction and selection for classification, localization, and segmentation. The magnetic resonance imaging (MRI) images are noisy due to certain factors, such as image acquisition, and fluctuation in magnetic field coil. Therefore, a homomorphic wavelet filer is used for noise reduction. Later, extracted features from inceptionv3 pre-trained model and informative features are selected using a non-dominated sorted genetic algorithm (NSGA). The optimized features are forwarded for classification after which tumor slices are passed to YOLOv2-inceptionv3 model designed for the localization of tumor region such that features are extracted from depth-concatenation (mixed-4) layer of inceptionv3 model and supplied to YOLOv2. The localized images are passed to McCulloch's Kapur entropy method to segment actual tumor region. Finally, the proposed technique is validated on three benchmark databases BRATS 2018, BRATS 2019, and BRATS 2020 for tumor detection. The proposed method achieved greater than 0.90 prediction scores in localization, segmentation and classification of brain lesions. Moreover, classification and segmentation outcomes are superior as compared to existing methods.