
Showing papers in "Artificial Intelligence Review in 2021"


Journal ArticleDOI
TL;DR: This review categorizes the leading deep learning-based medical and non-medical image segmentation solutions into six main groups of deep architectural, data synthesis-based, loss function-based, sequenced models, weakly supervised, and multi-task methods.
Abstract: The semantic image segmentation task consists of classifying each pixel of an image into an instance, where each instance corresponds to a class. This task is part of the concept of scene understanding: better explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. In this review, we categorize the leading deep learning-based medical and non-medical image segmentation solutions into six main groups of deep architectural, data synthesis-based, loss function-based, sequenced models, weakly supervised, and multi-task methods, and provide a comprehensive review of the contributions in each of these groups. Further, for each group, we analyze each variant, discuss the limitations of the current approaches, and present potential future research directions for semantic image segmentation.

398 citations


Journal ArticleDOI
TL;DR: A comprehensive comparison between XGBoost, LightGBM, CatBoost, random forests and gradient boosting has been performed and indicates that CatBoost obtains the best results in generalization accuracy and AUC in the studied datasets although the differences are small.
Abstract: The family of gradient boosting algorithms has recently been extended with several interesting proposals (i.e. XGBoost, LightGBM and CatBoost) that focus on both speed and accuracy. XGBoost is a scalable ensemble technique that has been shown to be a reliable and efficient machine learning challenge solver. LightGBM is an accurate model focused on providing extremely fast training performance using selective sampling of high-gradient instances. CatBoost modifies the computation of gradients to avoid the prediction shift and thereby improve the accuracy of the model. This work proposes a practical analysis of how these novel variants of gradient boosting behave in terms of training speed, generalization performance and hyper-parameter setup. In addition, a comprehensive comparison between XGBoost, LightGBM, CatBoost, random forests and gradient boosting has been performed using carefully tuned models as well as their default settings. The results of this comparison indicate that CatBoost obtains the best results in generalization accuracy and AUC on the studied datasets, although the differences are small. LightGBM is the fastest of all methods but not the most accurate. XGBoost places second both in accuracy and in training speed. Finally, an extensive analysis of the effect of hyper-parameter tuning in XGBoost, LightGBM and CatBoost is carried out using two novel proposed tools.

375 citations
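To make the mechanism these libraries share concrete, here is a from-scratch sketch of gradient boosting for squared loss with depth-1 regression trees (stumps) as weak learners. It only illustrates the boosting principle; XGBoost, LightGBM and CatBoost add regularization, clever sampling and ordered boosting on top, and every name, dataset and hyper-parameter below is our own illustrative choice, not the paper's benchmark setup.

```python
# Minimal gradient boosting for squared loss on a 1-D feature, using
# regression stumps as weak learners. Illustrative sketch only.

def fit_stump(xs, residuals):
    """Find the threshold split on a 1-D feature minimizing squared error."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for k in range(1, len(xs)):
        thr = (xs[order[k - 1]] + xs[order[k]]) / 2
        left = [residuals[i] for i in range(len(xs)) if xs[i] <= thr]
        right = [residuals[i] for i in range(len(xs)) if xs[i] > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x <= thr else rm

def gradient_boost(xs, ys, n_rounds=50, lr=0.1):
    """Each round fits a stump to the residuals (the negative gradient
    of squared loss) and adds it with shrinkage lr."""
    base = sum(ys) / len(ys)
    pred = [base] * len(ys)
    stumps = []
    for _ in range(n_rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, resid)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

xs = [i / 10 for i in range(20)]
ys = [x * x for x in xs]          # noiseless quadratic toy target
model = gradient_boost(xs, ys)
```

The shrinkage parameter `lr` is the same knob that the libraries' hyper-parameter studies tune as the learning rate.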


Journal ArticleDOI
TL;DR: In the authors’ perspective, in situ monitoring of AM processes will significantly benefit from the object detection ability of ML, and data sharing of AM would enable faster adoption of ML in AM.
Abstract: Additive manufacturing (AM), or 3D printing, is growing rapidly in the manufacturing industry and has gained a lot of attention from various fields owing to its ability to fabricate parts with complex features. The reliability of 3D printed parts has been a focus of researchers seeking to realize AM as an end-part production tool. Machine learning (ML) has been applied in various aspects of AM to improve the whole design and manufacturing workflow, especially in the era of Industry 4.0. In this review article, various types of ML techniques are first introduced. This is followed by a discussion of their use in various aspects of AM such as design for 3D printing, material tuning, process optimization, in situ monitoring, cloud service, and cybersecurity. Potential applications in the biomedical, tissue engineering, and building and construction fields are highlighted. The challenges faced by ML in AM, such as computational cost, standards for qualification and data acquisition techniques, are also discussed. In the authors' perspective, in situ monitoring of AM processes will significantly benefit from the object detection ability of ML. As a large data set is crucial for ML, data sharing in AM would enable faster adoption of ML in AM, and standards for the shared data are needed to facilitate easy sharing. The use of ML in AM will become more mature and widely adopted as better data acquisition techniques and more powerful computer chips for ML are developed.

229 citations


Journal ArticleDOI
TL;DR: This study presents the state of practice of DL in geotechnical engineering, depicts the statistical trend of the published papers, and describes four major algorithms: the feedforward neural network, recurrent neural network, convolutional neural network and generative adversarial network.
Abstract: With the advent of the big data era, deep learning (DL) has become an essential research subject in the field of artificial intelligence (AI). DL algorithms are characterized by powerful feature learning and expression capabilities compared with traditional machine learning (ML) methods, which attracts researchers worldwide from different fields to their increasingly wide applications. Furthermore, since DL has been widely adopted in various research topics in the field of geotechnical engineering, a comprehensive review summarizing its applications is desirable. Consequently, this study presents the state of practice of DL in geotechnical engineering and depicts the statistical trend of the published papers. Four major algorithms, namely the feedforward neural network (FNN), recurrent neural network (RNN), convolutional neural network (CNN) and generative adversarial network (GAN), along with their geotechnical applications, are elaborated. In addition, a thorough summary containing the published literature, the corresponding reference cases, the adopted DL algorithms and the related geotechnical topics is compiled. Furthermore, the challenges and perspectives of the future development of DL in geotechnical engineering are presented and discussed.

194 citations


Journal ArticleDOI
TL;DR: This article provides an overview of the current developments in the field of multi-agent deep reinforcement learning, focusing primarily on literature from recent years that combines deep reinforcement learning methods with a multi-agent scenario.
Abstract: The advances in reinforcement learning have recorded sublime success in various domains. Although the multi-agent domain has been overshadowed by its single-agent counterpart during this progress, multi-agent reinforcement learning gains rapid traction, and the latest accomplishments address problems with real-world complexity. This article provides an overview of the current developments in the field of multi-agent deep reinforcement learning. We focus primarily on literature from recent years that combines deep reinforcement learning methods with a multi-agent scenario. To survey the works that constitute the contemporary landscape, the main contents are divided into three parts. First, we analyze the structure of training schemes that are applied to train multiple agents. Second, we consider the emergent patterns of agent behavior in cooperative, competitive and mixed scenarios. Third, we systematically enumerate challenges that exclusively arise in the multi-agent domain and review methods that are leveraged to cope with these challenges. To conclude this survey, we discuss advances, identify trends, and outline possible directions for future work in this research area.

180 citations


Journal ArticleDOI
TL;DR: The Sine Cosine Algorithm (SCA) as mentioned in this paper is a population-based optimization algorithm introduced by Mirjalili in 2016, motivated by the trigonometric sine and cosine functions.
Abstract: The Sine Cosine Algorithm (SCA) is a population-based optimization algorithm introduced by Mirjalili in 2016, motivated by the trigonometric sine and cosine functions. After providing an overview of the SCA algorithm, we survey a number of SCA variants and applications that have appeared in the literature. We then present the results of a series of computational experiments to validate the performance of the SCA against similar algorithms.

179 citations
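The sine- and cosine-driven position update at the heart of SCA can be sketched as follows. This is a minimal illustrative implementation minimizing the sphere function as a toy objective; the population size, iteration count, bounds and the control-parameter value a = 2 are common choices in the SCA literature, not prescriptions from this survey.

```python
import math
import random

def sca(obj, dim=5, pop=20, iters=200, lo=-10.0, hi=10.0, a=2.0, seed=1):
    """Minimal Sine Cosine Algorithm sketch: each coordinate moves
    around the best-so-far solution by a sine- or cosine-scaled step
    whose radius r1 shrinks linearly over the iterations."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=obj)[:]
    for t in range(iters):
        r1 = a - t * a / iters            # exploration radius decays to 0
        for x in X:
            for j in range(dim):
                r2 = 2 * math.pi * rng.random()
                r3 = 2 * rng.random()
                trig = math.sin(r2) if rng.random() < 0.5 else math.cos(r2)
                step = r1 * trig * abs(r3 * best[j] - x[j])
                x[j] = min(hi, max(lo, x[j] + step))
        cand = min(X, key=obj)
        if obj(cand) < obj(best):         # keep an elitist best
            best = cand[:]
    return best

sphere = lambda x: sum(v * v for v in x)
best = sca(sphere)
```

The `r4`-style random switch between the sine and cosine branches, and the linearly decaying `r1`, are the two ingredients the abstract attributes to the trigonometric motivation.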


Journal ArticleDOI
TL;DR: This work investigates and summarizes key methods of Deep Meta-Learning, which are categorized into (i) metric-, (ii) model-, and (iii) optimization-based techniques, and identifies the main open challenges.
Abstract: Deep neural networks can achieve great successes when presented with large data sets and sufficient computational resources. However, their ability to learn new concepts quickly is limited. Meta-learning is one approach to address this issue, by enabling the network to learn how to learn. The field of Deep Meta-Learning advances at great speed, but lacks a unified, in-depth overview of current techniques. With this work, we aim to bridge this gap. After providing the reader with a theoretical foundation, we investigate and summarize key methods, which are categorized into (i) metric-, (ii) model-, and (iii) optimization-based techniques. In addition, we identify the main open challenges, such as performance evaluations on heterogeneous benchmarks, and reduction of the computational costs of meta-learning.

167 citations


Journal ArticleDOI
TL;DR: In this paper, a novel metaheuristic algorithm called Chaos Game Optimization (CGO) is developed for solving optimization problems and the obtained results proved that the CGO is superior compared to the other metaheuristics in most of the cases.
Abstract: In this paper, a novel metaheuristic algorithm called Chaos Game Optimization (CGO) is developed for solving optimization problems. The main concept of the CGO algorithm is based on some principles of chaos theory, in which the configuration of fractals by the chaos game concept and the self-similarity of fractals are in perspective. A total of 239 mathematical functions, categorized into four different groups, are collected to evaluate the overall performance of the presented novel algorithm. In order to evaluate the results of the CGO algorithm, three comparative analyses with different characteristics are conducted. In the first, six different metaheuristic algorithms are selected from the literature, while the minimum, mean and standard deviation values alongside the number of function evaluations for the CGO and these algorithms are calculated and compared. A complete statistical analysis is also conducted in order to provide a valid judgment about the performance of the CGO algorithm. In the second, the results of the CGO algorithm are compared to some of the recently developed fractal- and chaos-based algorithms. Finally, the performance of the CGO algorithm is compared to some state-of-the-art algorithms on state-of-the-art mathematical functions, with one of the recent competitions on single-objective real-parameter numerical optimization, "CEC 2017", considered as a numerical example for this purpose. In addition, a computational cost analysis is also conducted for the presented algorithm. The obtained results prove that the CGO is superior to the other metaheuristics in most cases.

143 citations
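As a quick illustration of the chaos game that inspires CGO (the inspiration only, not the CGO search operators themselves): repeatedly jumping halfway toward a randomly chosen vertex of a triangle makes the visited points trace the self-similar Sierpinski fractal. The vertex coordinates, point count and starting point below are arbitrary illustrative choices.

```python
import random

def chaos_game(n_points=5000, seed=0):
    """Iterate the chaos game on a triangle: from the current point,
    jump to the midpoint between it and a random vertex."""
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.25, 0.25                       # arbitrary start inside
    pts = []
    for _ in range(n_points):
        vx, vy = rng.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2   # midpoint jump
        pts.append((x, y))
    return pts

pts = chaos_game()
```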


Journal ArticleDOI
TL;DR: This paper surveys various action recognition techniques along with the HAR applications, namely content-based video summarization, human–computer interaction, education, healthcare, video surveillance, abnormal activity detection, sports, and entertainment.
Abstract: Human Action Recognition (HAR) involves the human activity monitoring task in different areas, including medicine, education, entertainment, visual surveillance, video retrieval, and abnormal activity identification, to name a few. Due to an increase in the usage of cameras, automated systems are in demand for the classification of such activities using computationally intelligent techniques such as Machine Learning (ML) and Deep Learning (DL). In this survey, we discuss various ML and DL techniques for HAR for the years 2011–2019. The paper discusses the characteristics of public datasets used for HAR. It also presents a survey of various action recognition techniques along with the HAR applications, namely content-based video summarization, human–computer interaction, education, healthcare, video surveillance, abnormal activity detection, sports, and entertainment. The advantages and disadvantages of action representation, dimensionality reduction, and action analysis methods are also provided. The paper discusses challenges and future directions for HAR.

142 citations


Journal ArticleDOI
TL;DR: Three key tasks during vision-based robotic grasping are identified: object localization, object pose estimation and grasp estimation, where grasp estimation comprises 2D planar grasp methods and 6DoF grasp methods.
Abstract: This paper presents a comprehensive survey on vision-based robotic grasping. We identify three key tasks during vision-based robotic grasping: object localization, object pose estimation and grasp estimation. In detail, the object localization task covers object localization without classification, object detection and object instance segmentation; this task provides the regions of the target object in the input data. The object pose estimation task mainly refers to estimating the 6D object pose and includes correspondence-based, template-based and voting-based methods, which afford the generation of grasp poses for known objects. The grasp estimation task includes 2D planar grasp methods and 6DoF grasp methods, where the former is constrained to grasping from one direction. Different combinations of these three tasks can accomplish robotic grasping. Many object pose estimation methods do not require separate object localization, conducting localization and pose estimation jointly. Many grasp estimation methods need neither object localization nor object pose estimation, conducting grasp estimation in an end-to-end manner. Both traditional methods and the latest deep learning-based methods using RGB-D image inputs are reviewed elaborately in this survey. Related datasets and comparisons between state-of-the-art methods are summarized as well. In addition, challenges of vision-based robotic grasping and future directions in addressing them are also pointed out.

137 citations


Journal ArticleDOI
TL;DR: A survey of methods and concepts developed for the evaluation of dialogue systems can be found in this paper, where the authors differentiate between task-oriented, conversational, and question-answering dialogue systems.
Abstract: In this paper, we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation, in and of itself, is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires. However, this tends to be very cost- and time-intensive. Thus, much work has been put into finding methods which allow a reduction in the involvement of human labour. In this survey, we present the main concepts and methods. For this, we differentiate between the various classes of dialogue systems (task-oriented, conversational, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for the dialogue systems and then presenting the evaluation methods for that class.

Journal ArticleDOI
TL;DR: A comprehensive survey of evolutionary computation algorithms for dealing with 5-M complex challenges is presented by proposing a novel taxonomy according to the function of the approaches, including reducing problem difficulty, increasing algorithm diversity, accelerating convergence speed, reducing running time, and extending application field.
Abstract: Complex continuous optimization problems widely exist nowadays due to the fast development of the economy and society. Moreover, technologies like the Internet of things, cloud computing, and big data also give rise to optimization problems with more challenges, including Many-dimensions, Many-changes, Many-optima, Many-constraints, and Many-costs. We term these the 5-M challenges, which exist in large-scale optimization problems, dynamic optimization problems, multi-modal optimization problems, multi-objective optimization problems, many-objective optimization problems, constrained optimization problems, and expensive optimization problems in practical applications. Evolutionary computation (EC) algorithms are a kind of promising global optimization tool that has not only been widely applied to solving traditional optimization problems, but has also seen booming research on solving the above-mentioned complex continuous optimization problems in recent years. In order to show how EC algorithms are promising and efficient in dealing with the 5-M complex challenges, this paper presents a comprehensive survey by proposing a novel taxonomy according to the function of the approaches, including reducing problem difficulty, increasing algorithm diversity, accelerating convergence speed, reducing running time, and extending application field. Moreover, some future research directions on using EC algorithms to solve complex continuous optimization problems are proposed and discussed. We believe that such a survey can draw attention, raise discussions, and inspire new ideas for EC research into complex continuous optimization problems and real-world applications.

Journal ArticleDOI
TL;DR: The authors discuss transformer-based models for NLP tasks and highlight the pros and cons of the identified models, including the Generative Pre-training (GPT) model, Transformer-XL, Cross-lingual Language Models (XLM), and Bidirectional Encoder Representations from Transformers (BERT).
Abstract: We cannot overemphasize the essence of contextual information in most natural language processing (NLP) applications. The extraction of context yields significant improvements in many NLP tasks, including emotion recognition from texts. The paper discusses transformer-based models for NLP tasks. It highlights the pros and cons of the identified models. The models discussed include the Generative Pre-training (GPT) and its variants, Transformer-XL, Cross-lingual Language Models (XLM), and the Bidirectional Encoder Representations from Transformers (BERT). Considering BERT’s strength and popularity in text-based emotion detection, the paper discusses recent works in which researchers proposed various BERT-based models. The survey presents its contributions, results, limitations, and datasets used. We have also provided future research directions to encourage research in text-based emotion detection using these models.

Journal ArticleDOI
TL;DR: According to this analysis, LSTM and CNN algorithms are the most used deep learning algorithms for sentiment analysis.
Abstract: With advanced digitalisation, we can observe a massive increase of user-generated content on the web that provides opinions of people on different subjects. Sentiment analysis is the computational study of analysing people's feelings and opinions about an entity. The field of sentiment analysis has been the topic of extensive research in the past decades. In this paper, we present the results of a tertiary study, which aims to investigate the current state of research in this field by synthesizing the results of published secondary studies (i.e., systematic literature reviews and systematic mapping studies) on sentiment analysis. This tertiary study follows the guidelines of systematic literature reviews (SLR) and covers only secondary studies. The outcome of this tertiary study provides a comprehensive overview of the key topics and the different approaches for a variety of tasks in sentiment analysis. Different features, algorithms, and datasets used in sentiment analysis models are mapped. Challenges and open problems are identified that can help to identify points requiring research effort in sentiment analysis. In addition to the tertiary study, we also identified 112 recent deep learning-based sentiment analysis papers and categorized them based on the applied deep learning algorithms. According to this analysis, LSTM and CNN algorithms are the most used deep learning algorithms for sentiment analysis.

Journal ArticleDOI
TL;DR: This paper presents a hybrid version of the Harris Hawks Optimization algorithm based on bitwise operations and Simulated Annealing to solve the FS problem for classification purposes using wrapper methods; the proposed algorithm presents superior results compared to other algorithms.
Abstract: The significant growth of modern technology and smart systems has led to a massive production of big data. Big data poses not only dimensionality problems but also other emerging problems such as redundancy, irrelevance, or noise among the features. Therefore, feature selection (FS) has become an urgent need in the search for the optimal subset of features. This paper presents a hybrid version of the Harris Hawks Optimization algorithm based on bitwise operations and Simulated Annealing (HHOBSA) to solve the FS problem for classification purposes using wrapper methods. Two bitwise operations (AND and OR) can randomly transfer the most informative features from the best solution to the others in the population to raise their quality. Simulated Annealing (SA) boosts the performance of the HHOBSA algorithm and helps it escape local optima. A standard wrapper method, K-nearest neighbors with the Euclidean distance metric, works as an evaluator for new solutions. A comparison between HHOBSA and other state-of-the-art algorithms is presented based on 24 standard datasets and 19 artificial datasets, whose dimensionality can reach into the thousands. The artificial datasets help to study the effects of different data dimensions, noise ratios, and sample sizes on the FS process. We employ several performance measures, including classification accuracy, fitness values, size of selected features, and computational time. We conduct two statistical significance tests on HHOBSA: the paired-samples t-test and the Wilcoxon signed-rank test. The proposed algorithm presents superior results compared to the other algorithms.
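The wrapper evaluation step described above can be sketched as follows: a candidate feature subset (a 0/1 mask) is scored by the leave-one-out accuracy of a 1-nearest-neighbour classifier with Euclidean distance, in the spirit of the paper's K-NN wrapper. The tiny synthetic dataset, the masks and all helper names are illustrative assumptions, not the paper's setup.

```python
import random

def loo_1nn_accuracy(X, y, mask):
    """Score a feature mask by leave-one-out 1-NN accuracy,
    measuring distance only over the selected features."""
    feats = [j for j, keep in enumerate(mask) if keep]
    if not feats:
        return 0.0
    def dist(a, b):
        # squared Euclidean distance over selected features
        return sum((a[j] - b[j]) ** 2 for j in feats)
    correct = 0
    for i in range(len(X)):
        nearest = min((k for k in range(len(X)) if k != i),
                      key=lambda k: dist(X[i], X[k]))
        correct += y[nearest] == y[i]
    return correct / len(X)

rng = random.Random(0)
# Feature 0 separates the two classes; feature 1 is pure noise.
y = [i % 2 for i in range(40)]
X = [[yi + 0.1 * rng.gauss(0, 1), rng.uniform(-5, 5)] for yi in y]
good = loo_1nn_accuracy(X, y, [1, 0])   # informative feature only
noisy = loo_1nn_accuracy(X, y, [0, 1])  # noise feature only
```

A metaheuristic like HHOBSA searches over such masks, using this accuracy (typically combined with subset size) as its fitness.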

Journal ArticleDOI
TL;DR: This paper presents a systematic review of overfitting control methods, categorizing them into passive, active, and semi-active subsets and covering the theoretical and experimental backgrounds of these methods, their strengths and weaknesses, and the emerging techniques for overfitting detection.
Abstract: Shallow neural networks process features directly, while deep networks extract features automatically along with the training. Both models suffer from overfitting or poor generalization in many cases. Deep networks include more hyper-parameters than shallow ones, which increases the overfitting probability. This paper presents a systematic review of overfitting control methods and categorizes them into passive, active, and semi-active subsets. A passive method designs a neural network before training, while an active method adapts a neural network along with the training process. A semi-active method redesigns a neural network when the training performance is poor. This review includes the theoretical and experimental backgrounds of these methods, their strengths and weaknesses, and the emerging techniques for overfitting detection. The adaptation of model complexity to the data complexity is another point in this review. The relation between overfitting control, regularization, network compression, and network simplification is also stated. The paper ends with some concluding lessons from the literature.
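As one concrete instance of an overfitting control that adapts training while it runs (an "active" method under our reading of the paper's taxonomy), here is a minimal early-stopping sketch: training halts once a held-out validation loss stops improving for `patience` epochs, and the best epoch's weights would be restored. The toy loss curve and the patience value are invented for illustration.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Scan a per-epoch validation-loss sequence and return the epoch
    whose weights would be restored (lowest loss seen), stopping once
    `patience` consecutive epochs bring no improvement."""
    best_epoch, best_loss, waited = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, waited = epoch, loss, 0
        else:
            waited += 1
            if waited >= patience:   # no improvement for `patience` epochs
                break
    return best_epoch, best_loss

# Validation loss falls, then rises as the model starts to overfit.
curve = [1.0, 0.7, 0.5, 0.42, 0.40, 0.43, 0.47, 0.55, 0.66]
epoch, loss = train_with_early_stopping(curve)   # stops at epoch 7, restores epoch 4
```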

Journal ArticleDOI
TL;DR: It is of utmost importance to use a correct tool for measuring the performance of the diverse set of metaheuristic algorithms to derive an appropriate judgment on the superiority of the algorithms and also to validate the claims raised by researchers for their specific objectives.
Abstract: The simulation-driven metaheuristic algorithms have been successful in solving numerous problems compared to their deterministic counterparts. Despite this advantage, the stochastic nature of such algorithms resulted in a spectrum of solutions by a certain number of trials that may lead to the uncertainty of quality solutions. Therefore, it is of utmost importance to use a correct tool for measuring the performance of the diverse set of metaheuristic algorithms to derive an appropriate judgment on the superiority of the algorithms and also to validate the claims raised by researchers for their specific objectives. The performance of a randomized metaheuristic algorithm can be divided into efficiency and effectiveness measures. The efficiency relates to the algorithm’s speed of finding accurate solutions, convergence, and computation. On the other hand, effectiveness relates to the algorithm’s capability of finding quality solutions. Both scopes are crucial for continuous and discrete problems either in single- or multi-objectives. Each problem type has different formulation and methods of measurement within the scope of efficiency and effectiveness performance. One of the most decisive verdicts for the effectiveness measure is the statistical analysis that depends on the data distribution and appropriate tool for correct judgments.
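One of the statistical tools this abstract alludes to can be sketched directly: an exact Wilcoxon signed-rank test for paired per-dataset results of two stochastic algorithms. The implementation below enumerates all sign assignments (feasible for small samples) and assumes no zero or tied absolute differences; the two error vectors are invented for illustration, not data from the paper.

```python
from itertools import product

def wilcoxon_exact(a, b):
    """Exact two-sided Wilcoxon signed-rank test for paired samples.
    Assumes no zero differences and no tied absolute differences."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    # Rank the absolute differences 1..n.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0] * len(diffs)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    total = sum(ranks)
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w = min(w_plus, total - w_plus)
    # Exact p-value: enumerate every assignment of signs to the ranks.
    count = 0
    for signs in product((0, 1), repeat=len(diffs)):
        wp = sum(r for s, r in zip(signs, ranks) if s)
        if min(wp, total - wp) <= w:
            count += 1
    return w, count / 2 ** len(diffs)

# Illustrative paired error rates of two algorithms on 8 datasets.
err_a = [0.11, 0.24, 0.05, 0.31, 0.12, 0.27, 0.08, 0.19]
err_b = [0.19, 0.30, 0.12, 0.40, 0.17, 0.26, 0.21, 0.33]
w, p = wilcoxon_exact(err_a, err_b)
```

A small p here supports the claim that algorithm A's effectiveness differs from B's; real studies with ties or larger samples would use a library routine with proper tie handling.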

Journal ArticleDOI
TL;DR: The paper aims to provide an up-to-date survey of techniques that have been used for detecting skin cancer from skin lesion images, to assist investigators in developing efficient models that automatically and accurately detect melanoma from skin lesions.
Abstract: Analysis of skin lesion images via visual inspection and manual examination to diagnose skin cancer has always been cumbersome. This manual examination of skin lesions in order to detect melanoma can be time-consuming and tedious. With the advancement in technology and the rapid increase in computational resources, various machine learning techniques and deep learning models have emerged for the analysis of medical images, most especially skin lesion images. The results of these models have been impressive; however, analysis of skin lesion images with these techniques still experiences some challenges due to the unique and complex features of the images. This work presents a comprehensive survey of techniques that have been used for detecting skin cancer from skin lesion images. The paper aims to provide an up-to-date survey that will assist investigators in developing efficient models that automatically and accurately detect melanoma from skin lesion images. The paper is presented in five folds: first, we identify the challenges in detecting melanoma from skin lesions; second, we discuss the pre-processing and segmentation techniques for skin lesion images; third, we make a comparative analysis of the state-of-the-art methods; fourth, we discuss classification techniques for classifying skin lesions into different classes of skin cancer; finally, we explore and analyse the performance of the state-of-the-art methods employed in the popular skin lesion image analysis competitions and challenges of ISIC 2018 and 2019. Application of ensemble deep learning models on well pre-processed and segmented images results in better classification performance on skin lesion images.

Journal ArticleDOI
TL;DR: A detailed and systematic overview of multi-agent deep reinforcement learning methods from the perspectives of challenges and applications, in which a taxonomy of challenges is proposed and the corresponding structures and representative methods are introduced.
Abstract: Deep reinforcement learning has proved to be a fruitful method in various tasks in the field of artificial intelligence during the last several years. Recent works have focused on deep reinforcement learning beyond single-agent scenarios, with more consideration of multi-agent settings. The main goal of this paper is to provide a detailed and systematic overview of multi-agent deep reinforcement learning methods from the perspectives of challenges and applications. Specifically, the preliminary knowledge is introduced first for a better understanding of this field. Then, a taxonomy of challenges is proposed and the corresponding structures and representative methods are introduced. Finally, some applications and interesting future opportunities for multi-agent deep reinforcement learning are given.

Journal ArticleDOI
TL;DR: Fault Detection and Diagnosis (FDD) is a well-studied area of research as discussed by the authors, where malfunction monitoring capabilities are instilled in the system for detection of the incipient faults and anticipation of their impact on the future behavior of the system using fault diagnosis techniques.
Abstract: Safety and reliability are absolutely important for modern sophisticated systems and technologies. Therefore, malfunction monitoring capabilities are instilled in the system for detection of the incipient faults and anticipation of their impact on the future behavior of the system using fault diagnosis techniques. In particular, state-of-the-art applications rely on the quick and efficient treatment of malfunctions within the equipment/system, resulting in increased production and reduced downtimes. This paper presents developments within Fault Detection and Diagnosis (FDD) methods and reviews of research work in this area. The review presents both traditional model-based and relatively new signal processing-based FDD approaches, with a special consideration paid to artificial intelligence-based FDD methods. Typical steps involved in the design and development of automatic FDD system, including system knowledge representation, data-acquisition and signal processing, fault classification, and maintenance related decision actions, are systematically presented to outline the present status of FDD. Future research trends, challenges and prospective solutions are also highlighted.

Journal ArticleDOI
TL;DR: Three state-of-the-art techniques are used for classification: an artificial neural network, an extreme learning machine, and a deep-learning-based CNN model; the classification accuracies obtained are 87.4%, 88% and 92%, respectively.
Abstract: A bipedal walking robot is a kind of humanoid robot. It is supposed to mimic human behavior and is designed to perform human-specific tasks. Currently, humanoid robots are not capable of walking like a human being. To study the walking task, in the current work, human gait data for six different walking styles, namely brisk walk, normal walk, very slow walk, medium walk, jogging and fast walk, are collected through our configured IMU sensor and a mobile-based accelerometer device. To capture the pattern of the six walking styles, data are extracted for the hip, knee, ankle, shank, thigh and foot. A total of six classes of walking activities are explored for clinical examination. The accelerometer is placed at the body center of 15 male and 10 female subjects. In the experimental setup, we perform exploratory analysis of the different gait-capturing techniques, gait features and gait classification techniques. For classification, three state-of-the-art techniques are used: an artificial neural network, an extreme learning machine, and a deep-learning-based CNN model. The classification accuracies obtained are 87.4%, 88% and 92%, respectively. The WISDM activity dataset is also used for verification purposes.

Journal ArticleDOI
TL;DR: Applications of hyperspectral imaging in agriculture are summarized, including ripeness and component prediction, different classification themes, and plant disease detection, and prospects for future work are put forward.
Abstract: Hyperspectral imaging is a non-destructive, non-polluting, and fast technology, which can capture up to several hundred images at different wavelengths and offer relevant spectral signatures. Hyperspectral imaging technology has achieved breakthroughs in the acquisition of agricultural information and the detection of external or internal quality attributes of agricultural products. Deep learning techniques have boosted the performance of hyperspectral image analysis. Compared with traditional machine learning, deep learning architectures exploit both the spatial and the spectral information of hyperspectral images. To scrutinize thoroughly the current efforts, provide insights, and identify potential research directions on deep learning for hyperspectral image analysis in agriculture, this paper presents a systematic and comprehensive review. Firstly, its applications in agriculture are summarized, including ripeness and component prediction, different classification themes, and plant disease detection. Then, recent achievements in hyperspectral image analysis are reviewed from the aspects of deep learning models and feature networks. Finally, the existing challenges of deep-learning-based hyperspectral image analysis are summarized and prospects for future work are put forward.

Journal ArticleDOI
TL;DR: A relatively new taxonomic classification list of both classical and new generation sets of metaheuristic algorithms available in the literature is presented, with the aim of providing an easily accessible collection of popular optimization tools for the global optimization research community who are at the forefront in utilizing these tools for solving complex and difficult real-world problems.
Abstract: Research in metaheuristics for global optimization problems is currently experiencing an overload of a wide range of available metaheuristic-based solution approaches. Since the commencement of the first set of classical metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, ant colony optimization, simulated annealing and tabu search, from the early 70s to the late 90s, several new advancements have been recorded, with an exponential growth in proposals of new-generation metaheuristic algorithms. Because these algorithms are judged neither entirely on their performance values nor on the useful insight they may provide, but rather on the novelty of the processes they purportedly model, this area of study will continue to see the periodic arrival of several similar new techniques in the future. However, there is an obvious reason to keep track of the progression of these algorithms by collating their general algorithmic profiles in terms of design inspirational source, classification based on swarm or evolutionary search concepts, existing variations from the original design, and application areas. In this paper, we present a relatively new taxonomic classification list of both classical and new-generation sets of metaheuristic algorithms available in the literature, with the aim of providing an easily accessible collection of popular optimization tools for the global optimization research community, who are at the forefront in utilizing these tools for solving complex and difficult real-world problems. Furthermore, we also present a bibliometric analysis of the field of metaheuristics over the last 30 years.

Journal ArticleDOI
TL;DR: This survey paper discusses opportunities and threats of using artificial intelligence (AI) technology in the manufacturing sector with consideration for offensive and defensive uses of such technology, and presents the major strengths and weaknesses of the main techniques in use.
Abstract: This survey paper discusses opportunities and threats of using artificial intelligence (AI) technology in the manufacturing sector, with consideration for offensive and defensive uses of such technology. It starts with an introduction to the Industry 4.0 concept and an understanding of AI use in this context. It then provides elements of security principles and detection techniques applied to operational technology (OT), which forms the main attack surface of manufacturing systems. As some intrusion detection systems (IDS) already involve AI-based techniques, we focus on existing machine-learning and data-mining techniques in use for intrusion detection. This article presents the major strengths and weaknesses of the main techniques in use. We also provide an assessment of their relevance for application to OT, from the manufacturer's point of view. Another part of the paper introduces the essential drivers and principles of Industry 4.0, providing insights on the advent of AI in manufacturing systems as well as an understanding of the new set of challenges it implies. AI-based techniques for production monitoring, optimisation and control are proposed, with insights on several application cases. The related technical, operational and security challenges are discussed, and an understanding of the impact of such a transition on current security practices is then provided in more detail. The final part of the paper further develops a vision of security challenges for Industry 4.0. It addresses aspects of orchestration of distributed detection techniques, introduces an approach to adversarial/robust AI development, and concludes with human–machine behaviour monitoring requirements.

Journal ArticleDOI
TL;DR: This paper is the first of its kind to attempt to review and define the role of AI in RFD and provides an all-encompassing review of rotor faults for the researchers and academics.
Abstract: Artificial intelligence (AI)-based rotor fault diagnosis (RFD) poses a variety of challenges to the prognostics and health management (PHM) of the Industry 4.0 revolution. Rotor faults have drawn more attention from the AI research community in terms of utilizing fault-specific characteristics in its feature engineering, compared to any other rotating machinery faults. While the rotor faults, specifically structural rotor faults (SRF), have proven to be the root cause of most of the rotating machinery issues, the research in this field largely revolves around bearing and gear faults. Within this scenario, this paper is the first of its kind to attempt to review and define the role of AI in RFD and provides an all-encompassing review of rotor faults for the researchers and academics. In addition, this study is unique in three ways: (i) it emphasizes the use of fault-specific characteristic features with AI, (ii) it is grounded in fault-wise analysis rather than component-wise analysis with appropriate fault categorization, and (iii) it portrays the current research and analysis in accordance with different phases of an AI-based RFD framework. Finally, the section on future research directions is aimed at bridging the gap between a laboratory-based solution and a real-world industrial solution for RFD.

Journal ArticleDOI
TL;DR: This paper attempts to synthesise the advantages and disadvantages of the procedural decisions in these approaches by conducting a systematic literature review of process prediction approaches.
Abstract: Process mining enables the reconstruction and evaluation of business processes based on digital traces in IT systems. An increasingly important technique in this context is process prediction. Given a sequence of events of an ongoing trace, process prediction allows forecasting upcoming events or performance measurements. In recent years, multiple process prediction approaches have been proposed, applying different data processing schemes and prediction algorithms. This study focuses on deep learning algorithms since they seem to outperform their machine learning alternatives consistently. Whilst having a common learning algorithm, they use different data preprocessing techniques, implement a variety of network topologies and focus on various goals such as outcome prediction, time prediction or control-flow prediction. Additionally, the set of log-data, evaluation metrics and baselines used by the authors diverge, making the results hard to compare. This paper attempts to synthesise the advantages and disadvantages of the procedural decisions in these approaches by conducting a systematic literature review.
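Next-event (control-flow) prediction, one of the goals surveyed above, can be illustrated with a minimal frequency-based baseline rather than a deep model; the event log and activity names here are hypothetical:

```python
from collections import Counter, defaultdict

def train_next_activity(log):
    """For each activity, record its most frequent successor in the log."""
    successors = defaultdict(Counter)
    for trace in log:
        for current, nxt in zip(trace, trace[1:]):
            successors[current][nxt] += 1
    return {a: counts.most_common(1)[0][0] for a, counts in successors.items()}

# Hypothetical event log: each trace is a sequence of activity labels.
log = [
    ["register", "check", "approve", "pay"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "pay"],
]
model = train_next_activity(log)
```

Deep approaches replace this single-step lookup with a learned model over the whole prefix, which is what allows them to also predict outcomes and remaining time.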

Journal ArticleDOI
TL;DR: CovidSens as discussed by the authors is a vision of social sensing-based risk alert systems to spontaneously obtain and analyze social data to infer the state of the Coronavirus Disease 2019 propagation.
Abstract: With the spiraling pandemic of the Coronavirus Disease 2019 (COVID-19), it has become critically important to disseminate accurate and timely information about the disease. Due to the ubiquity of Internet connectivity and smart devices, social sensing is emerging as a dynamic AI-driven sensing paradigm to extract real-time observations from online users. In this paper, we propose CovidSens, a vision of social sensing-based risk alert systems that spontaneously obtain and analyze social data to infer the state of COVID-19 propagation. CovidSens can actively help to keep the general public informed about the COVID-19 spread and identify risk-prone areas by inferring future propagation patterns. The CovidSens concept is motivated by three observations: (1) people have been actively sharing their state of health and experience of COVID-19 via online social media, (2) official warning channels and news agencies are relatively slower than people reporting their observations and experiences about COVID-19 on social media, and (3) online users are frequently equipped with substantially capable mobile devices that are able to perform non-trivial on-device computation for data processing and analytics. We envision an unprecedented opportunity to leverage the posts generated by ordinary people to build a real-time sensing and analytic system for gathering and circulating vital information on COVID-19 propagation. Specifically, the vision of CovidSens attempts to answer the questions: How can reliable information about COVID-19 be distilled amid the prevailing rumors and misinformation in social media? How can the general public be informed about the latest state of the spread in a timely and effective manner, and alerted to remain prepared? How can the computational power of edge devices (e.g., smartphones, IoT devices, UAVs) be leveraged to construct fully integrated edge-based social sensing platforms for rapid detection of the COVID-19 spread?
In this vision paper, we discuss the roles of CovidSens and identify the potential challenges in developing reliable social sensing-based risk alert systems. We envision that approaches originating from multiple disciplines (e.g., AI, estimation theory, machine learning, constrained optimization) can be effective in addressing the challenges. Finally, we outline a few research directions for future work in CovidSens.

Journal ArticleDOI
TL;DR: A novel multiple attribute decision-making (MADM) method with PFNs is elaborated and a study example that involves the service quality ranking of nursing facilities is provided to show the decision procedure of the proposed MADM method.
Abstract: Picture fuzzy sets (PFSs) state or model voting information accurately without information loss. However, their existing operational laws usually generate unreasonable computing results, especially when the agreement degree (AD), neutrality degree (ND) or opposition degree (OD) is zero. To tackle this issue, we propose interactional operational laws (IOLs) to compute picture fuzzy numbers (PFNs), which can capture the interaction between the ADs and NDs in two PFNs, as well as the interaction between the ADs and ODs in two PFNs. Based on the proposed IOLs, the partitioned Heronian mean (PHM) operator, and the partitioned geometric Heronian mean (PGHM) operator, some picture fuzzy interactional PHM (PFIPHM), weighted PFIPHM (PFIWPHM), geometric PFIPHM (PFIPGHM), and weighted PFIPGHM (PFIWPGHM) operators are proposed in this paper. Afterwards, we investigate the properties of these operators. Using the PFIWPHM and PFIWPGHM operators, a novel multiple attribute decision-making (MADM) method with PFNs is elaborated. Finally, a study example that involves the service quality ranking of nursing facilities is provided to show the decision procedure of the proposed MADM method, and we also give a comparative analysis between the proposed operators and existing aggregation operators developed for PFNs.
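The degeneracy that motivates the paper's interactional laws can be seen in the standard (non-interactional) addition of two PFNs, commonly defined as A ⊕ B = (μ₁ + μ₂ − μ₁μ₂, η₁η₂, ν₁ν₂); this sketch uses that commonly cited law, not the authors' IOLs:

```python
# A picture fuzzy number is a triple (mu, eta, nu) of agreement,
# neutrality and opposition degrees with mu + eta + nu <= 1.
def pfn_add(a, b):
    """Standard (non-interactional) algebraic sum of two PFNs."""
    (m1, e1, n1), (m2, e2, n2) = a, b
    return (m1 + m2 - m1 * m2, e1 * e2, n1 * n2)

a = (0.5, 0.0, 0.3)   # neutrality degree is zero
b = (0.4, 0.4, 0.1)
result = pfn_add(a, b)
# Because eta is multiplied, one zero neutrality degree forces the
# result's neutrality degree to zero, regardless of the other operand,
# which is the kind of unreasonable outcome the IOLs aim to repair.
```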

Journal ArticleDOI
TL;DR: This article reviews the literature on drug discovery through ML tools and techniques that are applied in every phase of drug development to accelerate the research process and reduce the risk and expenditure of clinical trials.
Abstract: This review surveys the literature on drug discovery through ML tools and techniques that are applied in every phase of drug development to accelerate the research process and reduce the risk and expenditure of clinical trials. Machine learning techniques improve decision-making on pharmaceutical data across various applications, such as QSAR analysis, hit discovery, and de novo drug design, to retrieve accurate outcomes. Target validation, prognostic biomarkers, and digital pathology are considered as problem statements in this review. A key challenge for ML is the inadequate interpretability of its outcomes, which may restrict its applications in drug discovery. In clinical trials, complete and methodologically sound data must be generated to tackle the many puzzles in validating ML techniques, improving decision-making, promoting awareness of ML approaches, and reducing the risk of failures in drug discovery.

Journal ArticleDOI
TL;DR: This paper questions the need for the irrational introduction of new nature-inspired intelligent (NII) algorithms into the literature, discusses possible drawbacks of NII algorithms found in the literature, and proposes guidelines for the development of new nature-inspired algorithms.
Abstract: In the last decade, we have observed an increasing number of nature-inspired optimization algorithms, with authors often claiming their novelty and their capabilities as powerful optimization techniques. However, a considerable number of these algorithms do not seem to draw inspiration from nature or to incorporate successful tactics, laws, or practices existing in natural systems, while some of them have never been applied in any optimization field since their first appearance in the literature. This paper presents some interesting findings that have emerged from an extensive study of most of the existing nature-inspired algorithms. The need for the irrational introduction of new nature-inspired intelligent (NII) algorithms into the literature is also questioned, and possible drawbacks of NII algorithms found in the literature are discussed. In addition, guidelines for the development of new nature-inspired algorithms are proposed, in an attempt to limit the misleading presentation of variations of metaheuristics as new nature-inspired optimization algorithms.