
Showing papers in "Expert Systems With Applications in 2021"


Journal ArticleDOI
TL;DR: A detailed description of the architecture of the autonomy system of the self-driving car developed at the Universidade Federal do Espirito Santo (UFES), named Intelligent Autonomous Robotics Automobile (IARA), is presented.
Abstract: We survey research on self-driving cars published in the literature, focusing on autonomous cars developed since the DARPA challenges that are equipped with an autonomy system categorized as SAE level 3 or higher. The architecture of the autonomy system of self-driving cars is typically organized into a perception system and a decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static-obstacle mapping, moving-obstacle detection and tracking, road mapping, and traffic signalization detection and recognition, among others. The decision-making system is likewise commonly partitioned into subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of the self-driving car developed at the Universidade Federal do Espirito Santo (UFES), named the Intelligent Autonomous Robotics Automobile (IARA). Finally, we list prominent self-driving car research platforms developed by academia and technology companies and reported in the media.

543 citations


Journal ArticleDOI
TL;DR: This open-source population-based optimization technique called Hunger Games Search is designed to be a standard tool for optimization in different areas of artificial intelligence and machine learning with several new exploratory and exploitative features, high performance, and high optimization capacity.
Abstract: A flood of population-based metaheuristics has been published in recent years. Despite their popularity, most of them have uncertain, immature performance, partial verification, similar overused metaphors, similarly immature exploration and exploitation components and operations, and an unstable tradeoff between exploration and exploitation on most new real-world cases. Consequently, users must extensively modify and tune their operations, building on the main evolutionary methods, to reach faster convergence, a more stable balance, and high-quality results. To move the optimization community a step toward a focus on performance rather than a change of metaphor, this research proposes a general-purpose population-based optimization technique called Hunger Games Search (HGS), with a simple structure, special stability features, and very competitive performance, to solve both constrained and unconstrained problems more effectively. The proposed HGS is designed according to the hunger-driven activities and behavioural choices of animals. This dynamic, fitness-wise search method follows the simple concept of hunger as the most crucial homeostatic motivation behind the behaviours, decisions, and actions in the life of all animals, making the optimization process more understandable and consistent for new users and decision-makers. HGS incorporates the concept of hunger into the search process: an adaptive weight based on hunger is designed and employed to simulate the effect of hunger on each search step. It follows the computationally logical rules (games) utilized by almost all animals; such rival activities and games are adaptively evolutionary, securing higher chances of survival and food acquisition.
The method's main features are its dynamic nature, simple structure, and high performance in terms of convergence and solution quality, proving more efficient than current optimization methods. The effectiveness of HGS was verified by comparing it with a comprehensive set of popular and advanced algorithms on 23 well-known optimization functions and the IEEE CEC 2014 benchmark test suite. HGS was also applied to several engineering problems to demonstrate its applicability. The results validate the effectiveness of the proposed optimizer against popular essential optimizers, several advanced variants of existing methods, and several CEC winners and powerful differential evolution (DE)-based methods (LSHADE, SPS_L_SHADE_EIG, LSHADE_cnEpSi, SHADE, SADE, MPEDE, and JDE) on many single-objective problems. We designed this open-source population-based method to be a standard tool for optimization in different areas of artificial intelligence and machine learning, with several new exploratory and exploitative features, high performance, and high optimization capacity. The method is flexible and scalable enough to be extended to further optimization cases in both its structure and its applications. The paper's source code, supplementary files, LaTeX and Office source files, plot sources, a brief version with pseudocode, an open-source software toolkit for solving optimization problems with Hunger Games Search, and an online web service for any question, feedback, suggestion, or idea on the HGS algorithm will be publicly available at https://aliasgharheidari.com/HGS.html .
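The abstract describes an adaptive weight based on hunger that scales each search step, but gives no equations. As a purely illustrative sketch (the update rules and constants below are invented for illustration and are not the authors' actual HGS operators), a hunger value can grow while an agent stagnates, and the derived weight can scale its pull toward the best solution:

```python
import random

def hunger_search_sketch(f, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Toy population optimizer with a hunger-based adaptive weight.

    Illustrative only: here 'hunger' grows while an agent fails to improve
    and resets when it does, and the hunger weight scales the pull toward
    the best agent (hungrier agents chase food, i.e. the best, harder)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    fit = [f(x) for x in xs]
    hunger = [0.0] * pop
    best = min(range(pop), key=lambda i: fit[i])
    best_x, best_f = xs[best][:], fit[best]
    for _ in range(iters):
        for i in range(pop):
            w = hunger[i] / (1.0 + hunger[i])          # hunger weight in [0, 1)
            cand = [xs[i][d] + w * (best_x[d] - xs[i][d])
                    + rng.gauss(0.0, 0.1 * (1.0 - w))  # less noise when very hungry
                    for d in range(dim)]
            cand = [min(hi, max(lo, v)) for v in cand]
            fc = f(cand)
            if fc < fit[i]:                            # improved: hunger satisfied
                xs[i], fit[i], hunger[i] = cand, fc, 0.0
            else:                                      # stagnating: hunger grows
                hunger[i] += 1.0
            if fit[i] < best_f:
                best_x, best_f = xs[i][:], fit[i]
    return best_x, best_f
```

On a simple sphere function this sketch converges toward the origin, which is enough to see the exploration-to-exploitation shift that the adaptive weight produces.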

529 citations


Journal ArticleDOI
TL;DR: Results showed the deep approaches to be quite efficient when compared to the local texture descriptors in the detection of COVID-19 based on chest X-ray images.
Abstract: COVID-19 is a novel virus that infects both the upper respiratory tract and the lungs. Case and death counts have increased daily at the scale of a global pandemic. Chest X-ray images have proven useful for monitoring various lung diseases and have recently been used to monitor COVID-19. In this paper, deep-learning-based approaches, namely deep feature extraction, fine-tuning of pretrained convolutional neural networks (CNN), and end-to-end training of a newly developed CNN model, are used to classify COVID-19 and normal (healthy) chest X-ray images. For deep feature extraction, pretrained deep CNN models (ResNet18, ResNet50, ResNet101, VGG16, and VGG19) were used. For classification of the deep features, a Support Vector Machine (SVM) classifier was used with various kernel functions, namely Linear, Quadratic, Cubic, and Gaussian. The same pretrained deep CNN models were also used for the fine-tuning procedure, and a new CNN model with end-to-end training is proposed. A dataset containing 180 COVID-19 and 200 normal (healthy) chest X-ray images was used in the experiments, with classification accuracy as the performance measure. The experiments reveal that deep learning shows potential for the detection of COVID-19 from chest X-ray images. Deep features extracted from the ResNet50 model and classified with a Linear-kernel SVM produced a 94.7% accuracy score, the highest among all results. The fine-tuned ResNet50 model achieved 92.6%, while end-to-end training of the developed CNN model produced 91.6%.
Various local texture descriptors with SVM classification were also evaluated for comparison; the results showed the deep approaches to be considerably more effective than the local texture descriptors for detecting COVID-19 in chest X-ray images.

460 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a novel nature-inspired meta-heuristic optimizer, called Reptile Search Algorithm (RSA), motivated by the hunting behaviour of Crocodiles.
Abstract: This paper proposes a novel nature-inspired meta-heuristic optimizer, called the Reptile Search Algorithm (RSA), motivated by the hunting behaviour of crocodiles. Two main phases of crocodile behaviour are implemented: encircling, performed by high walking or belly walking, and hunting, performed by hunting coordination or hunting cooperation. These search mechanisms of the proposed RSA are unique compared to existing algorithms. The performance of the proposed RSA is evaluated on twenty-three classical test functions, thirty CEC2017 test functions, ten CEC2019 test functions, and seven real-world engineering problems, and the results are compared to various existing optimization algorithms in the literature. The results on the three benchmark test sets revealed that the proposed RSA achieved better results than the other competitive optimization algorithms. The Friedman ranking test showed that RSA is significantly superior to the comparative methods. Finally, the results on the engineering problems showed that RSA obtained better results than the various other methods.

457 citations


Journal ArticleDOI
TL;DR: This study attempts to go beyond the traps of metaphors and introduces a novel metaphor-free population-based optimization method built on the mathematical foundations and ideas of the well-known Runge-Kutta (RK) method.
Abstract: The optimization field suffers from metaphor-based "pseudo-novel" or "fancy" optimizers. Most of these clichéd methods mimic animals' search behaviour and contribute little to the optimization process itself. They suffer from locally efficient performance, biased verification on easy problems, and high similarity between their components' interactions. This study attempts to go beyond the traps of metaphors and introduces a novel metaphor-free population-based optimization method based on the mathematical foundations and ideas of the Runge-Kutta (RK) method, well known in mathematics. The proposed RUNge Kutta optimizer (RUN) was developed to deal with various types of optimization problems. RUN utilizes the logic of slope variations, as computed by the RK method, as a promising and logical search mechanism for global optimization. This search mechanism benefits from two active phases, exploration and exploitation, for exploring promising regions of the feature space and moving constructively toward the global best solution. Furthermore, an enhanced solution quality (ESQ) mechanism is employed to avoid local optima and increase convergence speed. RUN's efficiency was evaluated by comparison with other metaheuristic algorithms on 50 mathematical test functions and four real-world engineering problems. RUN provided very promising and competitive results, showing superior exploration and exploitation tendencies, a fast convergence rate, and local-optima avoidance. On the constrained engineering problems, the metaphor-free RUN likewise demonstrated suitable performance. The authors invite the community to evaluate this deep-rooted optimizer extensively as a promising tool for real-world optimization.
The source code, supplementary materials, and guidance for the developed method will be publicly available at http://imanahmadianfar.com and http://aliasgharheidari.com/RUN.html .
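RUN borrows the slope-averaging logic of the Runge-Kutta method as its search mechanism. For reference, the classical fourth-order RK step that supplies this logic takes a weighted average of four slope estimates; the sketch below shows RK4 in its standard numerical-integration form (this is the textbook method, not the RUN search operator itself):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y).

    Four slope estimates (k1..k4) are blended with weights 1:2:2:1;
    RUN reuses this slope-blending idea as a search direction."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, steps):
    """Integrate dy/dt = f(t, y) from t0 to t1 with fixed-step RK4."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

For example, integrating dy/dt = y from 0 to 1 with y(0) = 1 recovers e to roughly 10 significant digits at 100 steps, which is the fourth-order accuracy the optimizer's authors appeal to.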

429 citations


Journal ArticleDOI
TL;DR: The experimental results and statistical tests demonstrate that the I-GWO algorithm is highly competitive and often superior to the algorithms used in the experiments, and its results on the engineering design problems demonstrate its efficiency and applicability.
Abstract: In this article, an Improved Grey Wolf Optimizer (I-GWO) is proposed for solving global optimization and engineering design problems. The improvement is intended to alleviate the GWO algorithm's lack of population diversity, its imbalance between exploitation and exploration, and its premature convergence. The I-GWO algorithm benefits from a new movement strategy named the dimension learning-based hunting (DLH) search strategy, inherited from the individual hunting behavior of wolves in nature. DLH uses a different approach to construct a neighborhood for each wolf, within which neighboring information can be shared between wolves. The dimension learning used in the DLH search strategy enhances the balance between local and global search and maintains diversity. The performance of the proposed I-GWO algorithm is evaluated on the CEC 2018 benchmark suite and four engineering problems. In all experiments, I-GWO is compared with six other state-of-the-art metaheuristics, and the results are analyzed with the Friedman and MAE statistical tests. The experimental results and statistical tests demonstrate that I-GWO is highly competitive and often superior to the compared algorithms. Its results on the engineering design problems demonstrate its efficiency and applicability.

398 citations


Journal ArticleDOI
TL;DR: This research provides a comprehensive survey for the researchers by presenting the different aspects of ATS: approaches, methods, building blocks, techniques, datasets, evaluation methods, and future research directions.
Abstract: Automatic Text Summarization (ATS) is becoming much more important because of the huge amount of textual content that grows exponentially on the Internet and in the various archives of news articles, scientific papers, legal documents, etc. Manual text summarization consumes much time, effort, and cost, and even becomes impractical at this gigantic volume of textual content. Researchers have been trying to improve ATS techniques since the 1950s. ATS approaches are either extractive, abstractive, or hybrid. The extractive approach selects the most important sentences in the input document(s), then concatenates them to form the summary. The abstractive approach encodes the input document(s) in an intermediate representation, then generates a summary with sentences that differ from the original ones. The hybrid approach combines the extractive and abstractive approaches. Despite all the proposed methods, the generated summaries are still far from human-generated summaries. Most research focuses on the extractive approach; more focus on the abstractive and hybrid approaches is needed. This research provides a comprehensive survey for researchers by presenting the different aspects of ATS: approaches, methods, building blocks, techniques, datasets, evaluation methods, and future research directions.
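The extractive pipeline described above (score sentences, select the most important, concatenate) can be sketched with a minimal word-frequency scorer. This is an illustrative toy, not any specific published ATS method:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Minimal extractive summarizer: score each sentence by the average
    corpus frequency of its words, keep the top-n in original order."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip())
                 if s.strip()]
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sent):
        toks = re.findall(r'[a-z]+', sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return ' '.join(s for s in sentences if s in top)  # keep original order
```

Sentences about the document's dominant topic score highest because their words recur across the text, so off-topic sentences drop out of the summary.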

324 citations


Journal ArticleDOI
TL;DR: A review of the fundamental deep neural network architectures and the most recent developments in deep learning methods for semantic segmentation of remote sensing imagery, including non-conventional data such as hyperspectral images and point clouds.
Abstract: Semantic segmentation of remote sensing imagery has been employed in many applications and has been a key research topic for decades. With the success of deep learning methods in the field of computer vision, researchers have made a great effort to transfer their superior performance to the field of remote sensing image analysis. This paper starts with a summary of the fundamental deep neural network architectures and reviews the most recent developments in deep learning methods for semantic segmentation of remote sensing imagery, including non-conventional data such as hyperspectral images and point clouds. In our review of the literature, we identified three major challenges faced by researchers and summarize the innovative developments that address them. As tremendous effort has been devoted to advancing pixel-level accuracy, the emerging deep learning methods have demonstrated much-improved performance on several public data sets. In handling the non-conventional, unstructured point clouds and rich spectral imagery, however, the performance of state-of-the-art methods is, on average, inferior to that achieved on conventional satellite imagery. A similar performance gap exists in learning from small data sets. In particular, the limited availability of labeled non-conventional remote sensing data sets is an obstacle to developing and evaluating new deep learning methods.

239 citations


Journal ArticleDOI
TL;DR: A comprehensive literature review is presented to provide an overview of how machine learning techniques can be applied to realize manufacturing mechanisms with intelligent actions and points to several significant research questions that are unanswered in the recent literature having the same target.
Abstract: Manufacturing organizations need to use different kinds of techniques and tools in order to fulfill their foundational goals. In this respect, machine learning (ML) and data mining (DM) techniques and tools can be very helpful in dealing with manufacturing challenges. Therefore, in this paper, a comprehensive literature review is presented to provide an overview of how machine learning techniques can be applied to realize manufacturing mechanisms with intelligent actions. Furthermore, it points to several significant research questions that remain unanswered in the recent literature with the same target. Our survey aims to provide researchers with a solid understanding of the main approaches and algorithms used to improve manufacturing processes over the past two decades. It presents previous ML studies and recent advances in manufacturing by grouping them under four main subjects: scheduling, monitoring, quality, and failure. It comprehensively discusses existing solutions in manufacturing according to various aspects, including tasks (e.g., clustering, classification, regression), algorithms (e.g., support vector machine, neural network), learning types (e.g., ensemble learning, deep learning), and performance metrics (e.g., accuracy, mean absolute error). Furthermore, the main steps of the knowledge discovery in databases (KDD) process to be followed in manufacturing applications are explained in detail, and some statistics about the current state are given from different perspectives. Finally, it explains the advantages of using machine learning techniques in manufacturing, describes ways to overcome certain challenges, and offers possible directions for further research.

237 citations


Journal ArticleDOI
Weiwei Jiang
TL;DR: A review of recent work on deep learning models for stock market prediction that categorizes the data sources, neural network structures, and commonly used evaluation metrics, to help interested researchers keep up with the latest progress and easily reproduce previous studies as baselines.
Abstract: Stock market prediction is a classical yet challenging problem that has attracted attention from both economists and computer scientists. In pursuit of effective prediction models, both linear and machine learning tools have been explored over the past couple of decades. Lately, deep learning models have opened new frontiers for this topic, and their development has been rapid. Hence, our motivation for this survey is to give an up-to-date review of recent work on deep learning models for stock market prediction. We categorize not only the different data sources, neural network structures, and commonly used evaluation metrics, but also the implementation and reproducibility of the reviewed studies. Our goal is to help interested researchers stay synchronized with the latest progress and to help them easily reproduce previous studies as baselines. Based on this summary, we also highlight some future research directions for the topic.

221 citations


Journal ArticleDOI
TL;DR: This study aimed to review and analyse articles about the occurrence of different types of infectious diseases, such as epidemics, pandemics, viruses or outbreaks, during the last 10 years, understand the application of sentiment analysis and obtain the most important literature findings.
Abstract: The COVID-19 pandemic caused by the novel coronavirus SARS-CoV-2 emerged unexpectedly in China in December 2019. Tens of millions of confirmed cases and hundreds of thousands of confirmed deaths have been reported worldwide, according to the World Health Organisation. News about the virus is spreading all over social media websites. Consequently, these social media outlets are experiencing and presenting different views, opinions and emotions during various outbreak-related incidents. For computer scientists and researchers, big data are valuable assets for understanding people's sentiments regarding current events, especially those related to the pandemic, so analysing these sentiments can yield remarkable findings. To the best of our knowledge, previous related studies have focused on one kind of infectious disease; no previous study has examined multiple diseases via sentiment analysis. Accordingly, this research aimed to review and analyse articles about the occurrence of different types of infectious disease events, such as epidemics, pandemics, viruses or outbreaks, during the last 10 years, to understand the application of sentiment analysis and to extract the most important literature findings. Articles on related topics were systematically searched in five major databases, namely ScienceDirect, PubMed, Web of Science, IEEE Xplore and Scopus, from 1 January 2010 to 30 June 2020; these indices were considered sufficiently extensive and reliable to cover the scope of the literature. Articles were selected based on our inclusion and exclusion criteria for the systematic review, with a total of n = 28 articles selected. All these articles were organized into a coherent taxonomy describing the corresponding current standpoints in the literature in accordance with four main categories: lexicon-based models, machine learning-based models, hybrid-based models and individuals.
The obtained articles were categorised into motivations related to disease mitigation, data analysis and challenges faced by researchers with respect to data, social media platforms and community. Other aspects, such as the protocol being followed by the systematic review and demographic statistics of the literature distribution, were included in the review. Interesting patterns were observed in the literature, and the identified articles were grouped accordingly. This study emphasised the current standpoint and opportunities for research in this area and promoted additional efforts towards the understanding of this research field.
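Lexicon-based models, the first category in the taxonomy above, score a text by summing per-word sentiment values from a lexicon. A minimal sketch follows; the lexicon entries and the negation rule are invented for illustration, whereas real systems use resources such as VADER or SentiWordNet:

```python
# Toy lexicon with signed word scores; entries are illustrative only.
LEXICON = {"good": 1, "great": 2, "safe": 1, "recover": 1,
           "bad": -1, "fear": -2, "outbreak": -1, "death": -2}
NEGATIONS = {"not", "no", "never"}

def lexicon_sentiment(text):
    """Sum lexicon scores over tokens, flipping the sign of a word that
    directly follows a negation (a common lexicon-model heuristic).
    Positive result = positive sentiment, negative = negative."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    score = 0
    for i, tok in enumerate(tokens):
        s = LEXICON.get(tok, 0)
        if i > 0 and tokens[i - 1] in NEGATIONS:
            s = -s
        score += s
    return score
```

The appeal of this family of models for outbreak monitoring is that it needs no labeled training data, which is why it appears so often alongside machine-learning and hybrid models in the reviewed literature.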

Journal ArticleDOI
TL;DR: An automatic COVID screening (ACoS) system that uses radiomic texture descriptors extracted from CXR images to identify the normal, suspected, and nCOVID-19 infected patients is presented.
Abstract: Novel coronavirus disease (nCOVID-19) is among the most challenging problems facing the world. The disease is caused by severe acute respiratory syndrome coronavirus-2 (SARS-COV-2) and leads to high morbidity and mortality worldwide. Studies reveal that infected patients exhibit distinct radiographic visual characteristics along with fever, dry cough, fatigue, dyspnea, etc. Chest X-ray (CXR) is an important, non-invasive clinical adjunct that plays an essential role in detecting the visual responses associated with SARS-COV-2 infection. However, the limited availability of expert radiologists to interpret CXR images and the subtle appearance of the disease's radiographic responses remain the biggest bottlenecks in manual diagnosis. In this study, we present an automatic COVID screening (ACoS) system that uses radiomic texture descriptors extracted from CXR images to identify normal, suspected, and nCOVID-19-infected patients. The proposed system uses a two-phase classification approach (normal vs. abnormal, then nCOVID-19 vs. pneumonia) with a majority-vote-based ensemble of five benchmark supervised classification algorithms. Training-testing and validation of the ACoS system are performed using 2088 (696 normal, 696 pneumonia and 696 nCOVID-19) and 258 (86 images of each category) CXR images, respectively. The validation results for phase I (accuracy (ACC) = 98.062%, area under curve (AUC) = 0.956) and phase II (ACC = 91.329%, AUC = 0.831) show the promising performance of the proposed system. Further, Friedman post-hoc multiple comparisons and z-test statistics reveal that the results of the ACoS system are statistically significant. Finally, the obtained performance is compared with existing state-of-the-art methods.
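ACoS combines five benchmark supervised classifiers by majority vote. The voting step itself is generic and can be sketched as follows; the classifiers here are stand-in callables, not the paper's actual models:

```python
from collections import Counter

def majority_vote(classifiers, sample):
    """Return the label predicted by the most classifiers for one sample.

    `classifiers` is any list of callables mapping a sample to a label.
    Ties are broken in favor of the label whose vote was seen first,
    since Counter preserves insertion order."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```

With five voters, at least three must agree for a label to win outright, which is why odd-sized ensembles like this one avoid exact two-way ties on binary decisions such as normal vs. abnormal.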

Journal ArticleDOI
TL;DR: A mathematical model of red fox habits (searching for food, hunting, and developing the population while escaping from hunters) is proposed, based on a local and global optimization method with a reproduction mechanism.
Abstract: The red fox is found across various regions of the globe, with representatives in Europe, Asia, North America, and even some arctic regions. The way this predator lives and hunts is peculiar: it is active all year round, traversing the land to hunt both domestic and wild animals, and it uses various tricks to distract prey while creeping up on it, which makes it a very efficient predator. Its territorial habits and the family relations between young and adult animals have made the fox easily adaptable to various conditions and have helped it survive in a changing environment. In this article we propose a mathematical model of red fox habits: searching for food, hunting, and developing the population while escaping from hunters. The model is based on a local and global optimization method with a reproduction mechanism. We name this novel optimization model the Red Fox Optimization algorithm (RFO). The proposed method was subjected to benchmark tests on 22 test functions and 7 classic engineering optimization problems, and the experimental results are compared to other meta-heuristic algorithms to show its potential advantages.

Journal ArticleDOI
TL;DR: This study aims at presenting a review that shows the new applications of machine learning and deep learning technology for detecting and classifying breast cancer and provides an overview of progress and the future trends and challenges in the classification and detection of breast cancer.
Abstract: Breast cancer is the second leading cause of death for women, so accurate early detection can help decrease breast cancer mortality rates. Computer-aided detection allows radiologists to detect abnormalities efficiently. Medical images are sources of information relevant to the detection and diagnosis of various diseases and abnormalities, and the several imaging modalities that allow radiologists to study internal structure have attracted great research interest; in some medical fields, each of these modalities is of considerable significance. This study presents a review of the new applications of machine learning and deep learning for detecting and classifying breast cancer and surveys progress in the area. The review covers the classification of breast cancer using multi-modality medical imaging and details techniques developed to classify tumors, non-tumors, and dense masses in the various imaging modalities. It first outlines the different approaches to machine learning, then the different deep learning techniques and specific architectures for breast cancer detection and classification, and briefly covers the different image modalities for completeness. The review was performed using a broad variety of research databases as sources for access to publications across the field. Finally, it summarizes the future trends and challenges in the classification and detection of breast cancer.

Journal ArticleDOI
TL;DR: This study attempts to establish a multi-time, multi-site forecasting model of Beijing’s air quality by using deep learning network models based on spatiotemporal clustering and to compare them with a back-propagation neural network (BPNN).
Abstract: Effective air quality forecasting models are helpful for the timely prevention and control of air pollution. However, the spatiotemporal distribution characteristics of air quality have not been fully considered in previous model development. This study attempts to establish a multi-time, multi-site forecasting model of Beijing's air quality using deep learning network models based on spatiotemporal clustering, and to compare them with a back-propagation neural network (BPNN). For overall forecasting, next-hour performance ranked, in ascending order: the BPNN, the convolutional neural network (CNN), the long short-term memory (LSTM) model, and the CNN-LSTM, with the LSTM the optimal model for multiple-hour forecasting. Seasonal forecasting performance was not significantly improved compared to overall forecasting. For spatial clustering-based forecasting, cluster 2 forecasting generally outperformed cluster 1 and the overall forecasting. Overall, either seasonal or spatial clustering-based forecasting is more suitable for improving forecasts in a particular season or cluster. In terms of model type, both the CNN-LSTM and the LSTM generally performed better than the CNN and the BPNN.

Journal ArticleDOI
TL;DR: A new meta-heuristic method inspired by the behavior of a swarm of birds called coots is proposed and shown to outperform most other optimization methods.
Abstract: Recently, many intelligent algorithms have been proposed to find the best solutions to complex engineering problems. These algorithms can search volatile, multi-dimensional solution spaces and find optimal answers in reasonable time. In this paper, a new meta-heuristic method is proposed that is inspired by the behavior of a swarm of birds called coots. The Coot algorithm imitates two different modes of bird movement on the water surface: in the first phase, the movement is irregular, and in the second phase, the movements are regular. The swarm moves toward a group of leaders to reach a food supply; the tail of the swarm moves as a chain of coots, each coot moving behind the one in front of it. The algorithm is run on a number of test functions, and the results are compared with well-known optimization algorithms. In addition, the algorithm is applied to several real problems, such as the tension/compression spring, pressure vessel design, welded beam design, multi-plate disc clutch brake, step-cone pulley, cantilever beam design, reducer design, and rolling element bearing problems, to confirm its applicability. The results show that this algorithm is capable of outperforming most of the other optimization methods. The source code is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/89102-coot-optimization-algorithm .

Journal ArticleDOI
TL;DR: A taxonomy to categorize the proposed models for isolated and continuous sign language recognition is presented, discussing applications, datasets, hybrid models, complexity, and future lines of research in the field.
Abstract: Sign language, as a different form of communication language, is important to large groups of people in society. Each sign language contains different signs, with variability in the hand shape, motion profile, and position of the hand, face, and body parts contributing to each sign, which makes visual sign language recognition a complex research area in computer vision. Many models have been proposed by different researchers, with significant improvement from deep learning approaches in recent years. In this survey, we review the vision-based models of sign language recognition using deep learning approaches proposed in the last five years. While the overall trend of the proposed models indicates significant improvement in recognition accuracy, some challenges remain to be solved. We present a taxonomy to categorize the proposed models for isolated and continuous sign language recognition, discussing applications, datasets, hybrid models, complexity, and future lines of research in the field.

Journal ArticleDOI
TL;DR: A comprehensive review of recently developed deep learning methods for small object detection can be found in this article, where the authors summarize challenges and solutions of small-object detection, and present major deep learning techniques, including fusing feature maps, adding context information, balancing foreground-background examples, and creating sufficient positive examples.
Abstract: In computer vision, significant advances have been made in object detection with the rapid development of deep convolutional neural networks (CNN). This paper provides a comprehensive review of recently developed deep learning methods for small object detection. We summarize the challenges and solutions of small object detection, and present the major deep learning techniques, including fusing feature maps, adding context information, balancing foreground-background examples, and creating sufficient positive examples. We discuss related techniques developed in four research areas, including generic object detection, face detection, object detection in aerial imagery, and segmentation. In addition, this paper compares the performance of several leading deep learning methods for small object detection, including YOLOv3, Faster R-CNN, and SSD, based on three large benchmark datasets of small objects. Our experimental results show that while the detection accuracy of these deep learning methods on small objects was low, less than 0.4, Faster R-CNN performed the best, with YOLOv3 a close second.

Journal ArticleDOI
TL;DR: This paper deals with industrial applications of ML techniques, intending to clarify the real potentialities, as well as potential flaws, of ML algorithms applied to operation management, and a comprehensive review is presented and organized in a way that should facilitate the orientation of practitioners in this field.
Abstract: Machine Learning (ML) is a branch of artificial intelligence that studies algorithms able to learn autonomously, directly from the input data. Over the last decade, ML techniques have made a huge leap forward, as demonstrated by Deep Learning (DL) algorithms implemented by autonomous driving cars, or by electronic strategy games. Hence, researchers have started to consider ML also for applications within the industrial field, and many works indicate ML as one of the main enablers to evolve a traditional manufacturing system up to the Industry 4.0 level. Nonetheless, industrial applications are still few and limited to a small cluster of international companies. This paper deals with these topics, intending to clarify the real potentialities, as well as potential flaws, of ML algorithms applied to operation management. A comprehensive review is presented and organized in a way that should facilitate the orientation of practitioners in this field. To this aim, papers from 2000 to date are categorized in terms of the applied algorithm and application domain, and a keyword analysis is also performed to detail the most promising topics in the field. What emerges is a consistent upward trend in the number of publications, with a spike of interest in unsupervised and especially deep learning techniques, which recorded a very high number of publications in the last five years. Concerning trends, along with consolidated research areas, recent topics that are growing in popularity were also discovered. Among these, the main ones are production planning and control and defect analysis, suggesting that in the years to come ML will become pervasive in many fields of operation management.

Journal ArticleDOI
TL;DR: This review study serves as a solid reference for future studies in the arena of SI and in particular the MBO algorithm including its modifications, hybridizations, variants, and applications.
Abstract: Swarm intelligence (SI) is the collective behavior of decentralized, self-organized natural or artificial systems. Monarch butterfly optimization (MBO) algorithm is a class of swarm intelligence metaheuristic algorithm inspired by the migration behavior of monarch butterflies. Through the migration operation and butterfly adjusting operation, individuals in MBO are updated. MBO can outperform many state-of-the-art optimization techniques when solving global numerical optimization and engineering problems. This paper presents a comprehensive review of the MBO algorithm including its modifications, hybridizations, variants, and applications. Additionally, further research directions for MBO are discussed. This review study serves as a solid reference for future studies in the arena of SI and in particular the MBO algorithm.

Journal ArticleDOI
TL;DR: The overall results of CSA show that it offered a favorable global or near global solution and better performance compared to other meta-heuristics.
Abstract: This paper presents a novel meta-heuristic algorithm named Chameleon Swarm Algorithm (CSA) for solving global numerical optimization problems. The base inspiration for CSA is the dynamic behavior of chameleons when navigating and hunting for food sources on trees, in deserts and near swamps. This algorithm mathematically models and implements the behavioral steps of chameleons in their search for food, including their behavior in rotating their eyes to a nearly 360° scope of vision to locate prey, and their use of sticky tongues that launch at high speed to grab it. These foraging mechanisms practiced by chameleons eventually lead to feasible solutions when applied to optimization problems. The stability of the proposed algorithm was assessed on sixty-seven benchmark test functions and the performance was examined using several evaluation measures. These test functions involve unimodal, multimodal, hybrid and composition functions with different levels of complexity. An extensive comparative study was conducted to demonstrate the efficacy of CSA over other meta-heuristic algorithms in terms of optimization accuracy. The applicability of the proposed algorithm in reliably addressing real-world problems was demonstrated by solving five constrained and computationally expensive engineering design problems. The overall results of CSA show that it offered a favorable global or near-global solution and better performance compared to other meta-heuristics.

Journal ArticleDOI
TL;DR: The entropy-based TOPSIS with adjustable weight coefficient is proposed in this paper, and it is found that the EW can enhance the function of the attribute with the highest diversity of attribute data (DAD) as well as weaken thefunction of the attributes with a low DAD in decision-making or evaluation.
Abstract: The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is a classical multi-attribute decision-making method, which is widely used in various fields for decision-making or evaluation. The entropy method (EM) is frequently used to determine attribute weights for TOPSIS, and the weight determined by the EM is usually called the entropy weight (EW). In this paper, based on a large amount of data and theoretical analysis, the effects of the EW on TOPSIS are analyzed. It is found that the EW enhances the influence of the attribute with the highest diversity of attribute data (DAD) and weakens the influence of attributes with a low DAD in decision-making or evaluation. Sometimes the EW even causes the decision-making or evaluation result to be seriously affected by the attribute with the highest DAD (called the primacy attribute, abbreviated as PA). Since the EW enhances the influence of the PA, it helps increase the discrimination degree of the relative closeness (RC) values, but reduces the comprehensiveness of the RC, and may even lead to an unreasonable decision-making or evaluation result. In order to adjust the effects of the EW on TOPSIS, an entropy-based TOPSIS with an adjustable weight coefficient is proposed in this paper. Some discussions on the application of the proposed method are also given.
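The entropy-weight effect the abstract describes is easy to reproduce. The sketch below uses the standard formulation of the entropy method (not code from the paper): a constant attribute receives zero weight, while the attribute with the highest data diversity dominates.

```python
import math

def entropy_weights(matrix):
    """Entropy weights for a decision matrix (rows = alternatives,
    columns = attributes, all values positive)."""
    m, n = len(matrix), len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]                       # column-normalized shares
        e = -sum(q * math.log(q) for q in p if q > 0) / math.log(m)  # entropy
        divergences.append(1.0 - e)                    # divergence degree d_j
    total = sum(divergences)
    return [d / total for d in divergences]

# 2nd column is constant -> zero diversity -> zero weight;
# 3rd column is the most diverse -> largest weight (the "PA" effect)
X = [[5, 9, 2], [7, 9, 4], [6, 9, 8]]
w = entropy_weights(X)
```

These weights would then multiply the normalized decision matrix before computing the TOPSIS relative closeness.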

Journal ArticleDOI
TL;DR: This in-depth research introduced horizontal crossover search and vertical crossover search into the ACOR and improved the selection mechanism of the original ACOR to form an improved algorithm (CCACO) for the first time.
Abstract: The ant colony optimization (ACO) is among the most fundamental swarm-based solvers for discrete problems. In order to make it also suitable for solving continuous problems, a variant of ACO (ACOR) has already been proposed. The well-established ACO is widely regarded as one of the best-designed metaheuristic approaches for solving real-world problems. However, ACOR has some stochastic components that need to be further improved in terms of solution quality and convergence speed. Therefore, to effectively improve these aspects, this in-depth research introduced horizontal crossover search (HCS) and vertical crossover search (VCS) into ACOR and improved the selection mechanism of the original ACOR, forming an improved algorithm (CCACO) for the first time. In CCACO, the HCS is mainly intended to increase the convergence rate, while the VCS and the developed selection mechanism mainly aim to improve the convergence accuracy and the ability to avoid falling into local optima (LO). To better illustrate its effectiveness, we conducted a series of comparative experiments with 30 benchmark functions from IEEE CEC 2014, comparing the developed CCACO with well-known conventional algorithms as well as advanced ones. All experimental results show that its convergence speed and solution quality are superior to those of the other algorithms, and that its ability to avoid falling into local optima is more reliable than that of its peers. Furthermore, to further illustrate its enhanced performance, we applied it to image segmentation based on the multi-threshold image segmentation (MTIS) method with a non-local means 2D histogram and Kapur's entropy, comparing it with existing competitive algorithms at low and high threshold levels.
The experimental results show that the proposed CCACO achieves excellent segmentation results at both low and high threshold levels. For any help and guidance regarding this research, readers and industry activists can refer to the background info at http://aliasgharheidari.com/ .
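Horizontal crossover search, as used in crisscross-style hybrids like the one described, blends paired solutions dimension by dimension. The sketch below follows the formulation commonly quoted in that literature; the coefficient ranges are assumptions, not the paper's exact operator.

```python
import random

def horizontal_crossover(xi, xj):
    """Horizontal crossover between two candidate solutions: each offspring
    dimension is a random convex blend of the parents plus an expansion term
    that lets offspring land slightly outside the parents' span."""
    child_i, child_j = [], []
    for a, b in zip(xi, xj):
        r1, r2 = random.random(), random.random()
        c1, c2 = random.uniform(-1, 1), random.uniform(-1, 1)
        child_i.append(r1 * a + (1 - r1) * b + c1 * (a - b))
        child_j.append(r2 * b + (1 - r2) * a + c2 * (b - a))
    return child_i, child_j

random.seed(0)
ci, cj = horizontal_crossover([0.0, 1.0], [1.0, 0.0])
```

In a full algorithm the offspring would replace their parents only when their fitness improves, which is how the operator accelerates convergence without losing good solutions.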

Journal ArticleDOI
TL;DR: The developed algorithm is analyzed on six constrained problems of engineering design to assess its appropriateness for finding the solutions of real-world problems, and the outcomes from the empirical analyses depict that the proposed algorithm is better than other existing algorithms.
Abstract: This study introduces an extension of the recently developed Seagull Optimization Algorithm (SOA) to multi-objective problems, entitled the Multi-objective Seagull Optimization Algorithm (MOSOA). In this algorithm, the concept of a dynamic archive is introduced, which can cache the non-dominated Pareto optimal solutions. The roulette wheel selection approach is utilized to choose effective archived solutions by simulating the migration and attacking behaviors of seagulls. The proposed algorithm is validated by testing it on twenty-four benchmark test functions, and its performance is compared with existing metaheuristic algorithms. The developed algorithm is also analyzed on six constrained engineering design problems to assess its appropriateness for finding solutions to real-world problems. The outcomes of the empirical analyses show that the proposed algorithm is better than the other existing algorithms. The proposed algorithm also retains those Pareto optimal solutions that demonstrate high convergence.

Journal ArticleDOI
TL;DR: The novel meta-heuristic algorithm called Black Widow Optimization (BWO) is introduced to find the best threshold configuration using Otsu or Kapur as objective function and is found to be most promising for multi-level image segmentation problem over other segmentation approaches that are currently used in the literature.
Abstract: Segmentation is a crucial step in image processing applications. This process separates the pixels of the image into multiple classes, permitting the analysis of the objects contained in the scene. Multilevel thresholding is a method that easily performs this task; the problem is to find the best set of thresholds that properly segments each image. Techniques such as Otsu’s between-class variance or Kapur’s entropy help to find the best thresholds, but they are computationally expensive for more than two thresholds. To overcome this problem, this paper introduces the use of the novel meta-heuristic algorithm called Black Widow Optimization (BWO) to find the best threshold configuration using Otsu or Kapur as the objective function. To evaluate the performance and effectiveness of the BWO-based method, a variety of benchmark images were considered, and the method was compared against six well-known meta-heuristic algorithms: the Gray Wolf Optimization (GWO), Moth Flame Optimization (MFO), Whale Optimization Algorithm (WOA), Sine–Cosine Algorithm (SCA), Salp Swarm Algorithm (SSA), and Equilibrium Optimization (EO). The experimental results reveal that the proposed BWO-based method outperforms the competitor algorithms in terms of fitness values as well as other performance measures such as PSNR, SSIM and FSIM. The statistical analysis shows that the BWO-based method achieves efficient and reliable results in comparison with the other methods. Therefore, the BWO-based method was found to be the most promising for the multi-level image segmentation problem among the segmentation approaches currently used in the literature.
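To make the objective concrete, here is a minimal sketch of Otsu's between-class variance for an arbitrary set of thresholds (the standard textbook formulation, not the paper's implementation); a metaheuristic such as BWO searches for the thresholds that maximize it.

```python
def otsu_between_class_variance(hist, thresholds):
    """Between-class variance for a set of thresholds over a grayscale
    histogram. `hist` is a list of pixel counts per intensity level;
    thresholds split the levels into consecutive classes."""
    total = sum(hist)
    prob = [h / total for h in hist]
    mu_total = sum(i * p for i, p in enumerate(prob))       # global mean level
    bounds = [0] + sorted(thresholds) + [len(hist)]
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        omega = sum(prob[lo:hi])                            # class probability
        if omega == 0:
            continue
        mu = sum(i * prob[i] for i in range(lo, hi)) / omega  # class mean
        variance += omega * (mu - mu_total) ** 2
    return variance

# toy 8-level histogram with two clear modes separated by empty levels 3-4
hist = [10, 20, 10, 0, 0, 10, 20, 10]
```

Exhaustively scoring every threshold combination is what becomes expensive beyond two thresholds, which is exactly the gap the metaheuristic fills.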

Journal ArticleDOI
TL;DR: The proposed Dynamic Salp swarm algorithm (DSSA) outperformed the original SSA and the other well-known optimization algorithms over the 23 datasets in terms of classification accuracy, fitness function values, the number of selected features, and convergence speed.
Abstract: Recently, many optimization algorithms have been applied to feature selection (FS) problems and show a clear outperformance in comparison with traditional FS methods. This has motivated our study to apply the new Salp swarm algorithm (SSA) to the FS problem. However, SSA, like other optimization algorithms, suffers from low population diversity and falling into local optima. To solve these problems, this study presents an enhanced version of SSA known as the Dynamic Salp swarm algorithm (DSSA). Two main improvements were included in SSA to solve its problems. The first improvement is a new equation for the salps’ position update, whose use is controlled by Singer’s chaotic map; its purpose is to enhance the diversity of SSA solutions. The second improvement is a new local search algorithm (LSA) to improve SSA’s exploitation. The proposed DSSA was combined with the K-nearest neighbor (KNN) classifier in a wrapper mode. Twenty benchmark datasets from the UCI repository and 3 Hadith datasets were selected to test and evaluate the effectiveness of the proposed DSSA algorithm. The DSSA results were compared with the original SSA and four well-known optimization algorithms, including Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Lion Optimizer (ALO), and Grasshopper Optimization Algorithm (GOA). From the obtained results, DSSA outperformed the original SSA and the other well-known optimization algorithms over the 23 datasets in terms of classification accuracy, fitness function values, the number of selected features, and convergence speed. DSSA accuracy results were also compared with the most recent variants of the SSA algorithm, and DSSA showed a significant improvement over the competing algorithms in the statistical analysis.
These results confirm the capability of the proposed DSSA to improve classification accuracy while simultaneously selecting the minimal number of the most informative features.
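Singer's chaotic map, which the abstract says controls when the new position-update equation fires, is a one-dimensional map producing values in (0, 1). The sketch below uses the coefficients commonly quoted in the chaotic-metaheuristic literature; how DSSA thresholds the resulting sequence is not specified here and is left out.

```python
def singer_map(x, mu=1.07):
    """One iteration of Singer's chaotic map. For mu = 1.07 and a seed in
    (0, 1), the orbit stays in (0, 1) while jumping around unpredictably,
    which makes it a cheap source of deterministic 'randomness'."""
    return mu * (7.86 * x
                 - 23.31 * x ** 2
                 + 28.75 * x ** 3
                 - 13.302875 * x ** 4)

# generate a chaotic control sequence from an arbitrary seed
seq = []
x = 0.7
for _ in range(30):
    x = singer_map(x)
    seq.append(x)
```

An algorithm can then compare each value against a fixed threshold to decide, per iteration, which of two update rules to apply.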

Journal ArticleDOI
TL;DR: An effort to map the current research topics in Twitter focusing on three major areas: the structure and properties of the social graph, sentiment analysis and threats such as spam, bots, fake news and hate speech is presented.
Abstract: Twitter is the third most popular worldwide Online Social Network (OSN) after Facebook and Instagram. Compared to other OSNs, it has a simple data model and a straightforward data access API. This makes it ideal for social network studies attempting to analyze the patterns of online behavior, the structure of the social graph, the sentiment towards various entities and the nature of malicious attacks in a vivid network with hundreds of millions of users. Indeed, Twitter has been established as a major research platform, utilized in more than ten thousand research articles over the last ten years. Although there are excellent review and comparison studies for most of the research that utilizes Twitter, there are limited efforts to map this research terrain as a whole. Here we present an effort to map the current research topics in Twitter, focusing on three major areas: the structure and properties of the social graph, sentiment analysis, and threats such as spam, bots, fake news and hate speech. We also present Twitter’s basic data model and best practices for sampling and data access. This survey also lays out the computational techniques used in these areas, such as graph sampling, natural language processing and machine learning. Along with existing reviews and comparison studies, we discuss the key findings and the state of the art in these methods. Overall, we hope that this survey will help researchers create a clear conceptual model of Twitter and act as a guide for further expanding the topics presented.

Journal ArticleDOI
TL;DR: This work proposes an end-to-end real-time SER model capable of processing original speech signals for emotion recognition, utilizing a lightweight dilated CNN architecture that implements the multi-learning trick (MLT) approach.
Abstract: Speech is the most dominant source of communication among humans, and it is an efficient way for human–computer interaction (HCI) to exchange information. Nowadays, speech emotion recognition (SER) is an active research area that plays a crucial role in real-time applications, yet SER systems have so far lacked real-time speech processing. To address this problem, we propose an end-to-end real-time SER model based on a one-dimensional dilated convolutional neural network (DCNN). Our model uses a multi-learning strategy to extract spatially salient emotional features in parallel and to learn long-term contextual dependencies from the speech signals. We use a residual blocks with skip connections (RBSC) module to find correlations and emotional cues, and a sequence learning (Seq_L) module to learn the long-term contextual dependencies in the input features. Furthermore, we use a fusion layer to concatenate these learned features for the final emotion recognition task. Our model structure is quite simple, and it is capable of automatically learning salient discriminative features from the speech signals. We evaluated our model using the benchmark IEMOCAP and EMO-DB datasets and obtained high recognition accuracies of 73% and 90%, respectively. The experimental results indicate the significance and efficiency of our proposed model and its strong potential for the implementation of a real-time SER system. Hence, our model is capable of processing original speech signals for emotion recognition, utilizing a lightweight dilated CNN architecture that implements the multi-learning trick (MLT) approach.

Journal ArticleDOI
TL;DR: A k-Nearest-Neighbors technique, for which a genetic algorithm is applied for efficient feature selection to reduce the dataset dimensions and speed up the classifier, is employed for diagnosing the stage of patients’ disease.
Abstract: Lung cancer is one of the most common diseases for human beings throughout the world. Early identification of this disease is the main conceivable approach to enhance patients’ chances of survival. In this paper, a k-Nearest-Neighbors technique, for which a genetic algorithm is applied for efficient feature selection to reduce the dataset dimensions and speed up the classifier, is employed for diagnosing the stage of patients’ disease. To improve the accuracy of the proposed algorithm, the best value for k is determined using an experimental procedure. The implementation of the proposed approach on a lung cancer database reveals 100% accuracy. This implies that one could use the algorithm to find a correlation between the clinical information and data mining techniques to support lung cancer staging diagnosis efficiently.
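The core classifier in this pipeline is easy to sketch. The example below shows plain majority-vote kNN with a GA-style binary feature mask applied to the distance computation; the data, labels, and mask are toy assumptions, not the paper's dataset.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k, mask):
    """Classify x by majority vote among its k nearest training points,
    using only the features switched on in the binary mask (the kind of
    mask a genetic algorithm would evolve during feature selection)."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2
                             for u, v, m in zip(a, b, mask) if m))
    neighbors = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# toy staging data: the 3rd feature is uninformative, so the mask drops it
X = [[0, 0, 9], [0, 1, 9], [5, 5, 9], [5, 6, 9]]
y = ["early", "early", "late", "late"]
mask = [1, 1, 0]
```

A GA wrapper would score each candidate mask by the cross-validated accuracy of this classifier, keeping the masks that classify well with few features.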

Journal ArticleDOI
TL;DR: This work revises the FCM algorithm to make it applicable to data with unequal cluster sizes, noise and outliers, and non-uniform mass distribution and shows that the RFCM algorithm works for both cases and outperforms the both categories of the algorithms.
Abstract: Clustering algorithms aim at finding dense regions of data based on similarities and dissimilarities of data points. Noise and outliers contribute to the computational procedure of the algorithms just as the actual data points do, which leads to inaccurate and misplaced cluster centers. This problem also arises when the sizes of the clusters are different, which moves the centers of small clusters towards large clusters. The mass of the data points matters as well as their location in engineering and physics, where a non-uniform mass distribution results in displacement of the cluster centers towards heavier clusters even if the sizes of the clusters are identical and the data are noise-free. The Fuzzy C-Means (FCM) algorithm, which suffers from these problems, is the most popular fuzzy clustering algorithm and has been the subject of numerous research efforts and developments, though improvements are still marginal. This work revises the FCM algorithm to make it applicable to data with unequal cluster sizes, noise and outliers, and non-uniform mass distribution. The Revised FCM (RFCM) algorithm employs adaptive exponential functions to eliminate the impact of noise and outliers on the cluster centers, and modifies the constraint of the FCM algorithm to prevent large or heavier clusters from attracting the centers of small clusters.
Several algorithms are reviewed and their mathematical structures are discussed in the paper, including Possibilistic Fuzzy C-Means (PFCM), Possibilistic C-Means (PCM), Robust Fuzzy C-Means (FCM-σ), Noise Clustering (NC), Kernel Fuzzy C-Means (KFCM), Intuitionistic Fuzzy C-Means (IFCM), Robust Kernel Fuzzy C-Mean (KFCM-σ), Robust Intuitionistic Fuzzy C-Means (IFCM-σ), Kernel Intuitionistic Fuzzy C-Means (KIFCM), Robust Kernel Intuitionistic Fuzzy C-Means (KIFCM-σ), Credibilistic Fuzzy C-Means (CFCM), Size-insensitive integrity-based Fuzzy C-Means (siibFCM), Size-insensitive Fuzzy C-Means (csiFCM), Subtractive Clustering (SC), Density Based Spatial Clustering of Applications with Noise (DBSCAN), Gaussian Mixture Models (GMM), Spectral clustering, and Outlier Removal Clustering (ORC). Some of these algorithms are suitable for noisy data and some others are designed for data with unequal clusters. The study shows that the RFCM algorithm works in both cases and outperforms both categories of algorithms.
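For reference, one iteration of the standard FCM that RFCM revises can be sketched as follows. This is the classic membership and center update for the Euclidean 1-D case; RFCM's adaptive exponential functions and modified constraint are not reproduced here.

```python
def fcm_step(points, centers, m=2.0, eps=1e-9):
    """One iteration of standard Fuzzy C-Means on 1-D data: update the
    fuzzy memberships from the current centers, then move each center to
    the membership-weighted mean of the data (fuzzifier m > 1)."""
    # membership u[i][j] of point j in cluster i
    u = []
    for c in centers:
        row = []
        for x in points:
            d_i = abs(x - c) + eps
            denom = sum((d_i / (abs(x - ck) + eps)) ** (2.0 / (m - 1.0))
                        for ck in centers)
            row.append(1.0 / denom)
        u.append(row)
    # center update: weighted mean with weights u^m
    new_centers = []
    for row in u:
        w = [uij ** m for uij in row]
        new_centers.append(sum(wi * x for wi, x in zip(w, points)) / sum(w))
    return u, new_centers

pts = [0.0, 0.1, 0.2, 9.8, 9.9, 10.0]   # two well-separated 1-D clusters
u, centers = fcm_step(pts, [1.0, 9.0])
```

Because every point contributes to every center with weight u^m, a heavy or large cluster pulls on all centers, which is exactly the sensitivity the RFCM constraint modification targets.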