
Showing papers in "Artificial Intelligence Review in 2020"


Journal ArticleDOI
TL;DR: Deep convolutional neural networks (CNNs) are a special type of neural network that has shown exemplary performance in several competitions related to computer vision and image processing.
Abstract: A deep convolutional neural network (CNN) is a special type of neural network that has shown exemplary performance in several competitions related to computer vision and image processing. Some of the exciting application areas of CNNs include image classification and segmentation, object detection, video processing, natural language processing, and speech recognition. The powerful learning ability of deep CNNs is primarily due to the use of multiple feature extraction stages that can automatically learn representations from the data. The availability of large amounts of data and improvements in hardware technology have accelerated research in CNNs, and recently interesting deep CNN architectures have been reported. Several inspiring ideas to bring advancements in CNNs have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and architectural innovations. However, the significant improvement in the representational capacity of deep CNNs has been achieved through architectural innovations. Notably, the ideas of exploiting spatial and channel information, depth and width of architecture, and multi-path information processing have gained substantial attention. Similarly, the idea of using a block of layers as a structural unit is also gaining popularity. This survey thus focuses on the intrinsic taxonomy present in the recently reported deep CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven different categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, an elementary understanding of CNN components, current challenges, and applications of CNNs is also provided.
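The feature-extraction stage at the heart of every CNN can be sketched in a few lines of plain Python. The function name `conv2d`, the ReLU placement, and the 3x3 edge-detection kernel below are illustrative choices of ours, not taken from any of the surveyed architectures:

```python
def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding, stride 1) and
    return the feature map, with a ReLU applied to each output."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            # ReLU activation, the most common choice in deep CNNs
            row.append(max(0.0, acc))
        feature_map.append(row)
    return feature_map

# A vertical-edge kernel applied to an image with a sharp left/right split:
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
print(conv2d(image, kernel))
```

In a real CNN such stages are stacked, with learned (not hand-fixed) kernels, which is exactly the "multiple feature extraction stages" the abstract refers to.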

1,328 citations


Journal ArticleDOI
TL;DR: A comprehensive review of LSTM’s formulation and training, relevant applications reported in the literature and code resources implementing this model for a toy example are presented.
Abstract: Long short-term memory (LSTM) has transformed both machine learning and neurocomputing fields. According to several online sources, this model has improved Google’s speech recognition, greatly improved machine translations on Google Translate, and the answers of Amazon’s Alexa. This neural system is also employed by Facebook, reaching over 4 billion LSTM-based translations per day as of 2017. Interestingly, recurrent neural networks had shown a rather discrete performance until LSTM showed up. One reason for the success of this recurrent network lies in its ability to handle the exploding/vanishing gradient problem, which stands as a difficult issue to be circumvented when training recurrent or very deep neural networks. In this paper, we present a comprehensive review that covers LSTM’s formulation and training, relevant applications reported in the literature and code resources implementing this model for a toy example.
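The gate structure the review formulates can be made concrete with a single LSTM cell step. For readability this sketch uses scalar states and arbitrarily chosen scalar parameters; real implementations use weight matrices and vector states:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w, u, b):
    """One step of an LSTM cell with scalar state.

    w, u, b are dicts of scalar parameters for the four gates:
    f (forget), i (input), o (output), g (candidate).
    """
    f = sigmoid(w['f'] * x + u['f'] * h_prev + b['f'])   # forget gate
    i = sigmoid(w['i'] * x + u['i'] * h_prev + b['i'])   # input gate
    o = sigmoid(w['o'] * x + u['o'] * h_prev + b['o'])   # output gate
    g = math.tanh(w['g'] * x + u['g'] * h_prev + b['g'])  # candidate value
    c = f * c_prev + i * g   # additive cell-state update: this gated path
                             # is what mitigates vanishing/exploding gradients
    h = o * math.tanh(c)     # new hidden state
    return h, c

w = {k: 0.5 for k in 'fiog'}
u = {k: 0.1 for k in 'fiog'}
b = {k: 0.0 for k in 'fiog'}
h, c = lstm_step(1.0, 0.0, 0.0, w, u, b)
print(round(h, 4), round(c, 4))
```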

412 citations


Journal ArticleDOI
TL;DR: This paper provides a detailed survey of popular deep learning models that are increasingly applied in sentiment analysis and presents a taxonomy of sentiment analysis, which highlights the power of deep learning architectures for solving sentiment analysis problems.
Abstract: Social media is a powerful source of communication among people to share their sentiments in the form of opinions and views about any topic or article, which results in an enormous amount of unstructured information. Business organizations need to process and study these sentiments to investigate data and to gain business insights. Hence, to analyze these sentiments, various machine learning, and natural language processing-based approaches have been used in the past. However, deep learning-based methods are becoming very popular due to their high performance in recent times. This paper provides a detailed survey of popular deep learning models that are increasingly applied in sentiment analysis. We present a taxonomy of sentiment analysis and discuss the implications of popular deep learning architectures. The key contributions of various researchers are highlighted with the prime focus on deep learning approaches. The crucial sentiment analysis tasks are presented, and multiple languages are identified on which sentiment analysis is done. The survey also summarizes the popular datasets, key features of the datasets, deep learning model applied on them, accuracy obtained from them, and the comparison of various deep learning models. The primary purpose of this survey is to highlight the power of deep learning architectures for solving sentiment analysis problems.

385 citations


Journal ArticleDOI
TL;DR: A comprehensive and structured review of the most relevant and recent unsupervised feature selection methods reported in the literature is provided and a taxonomy of these methods is presented.
Abstract: In recent years, unsupervised feature selection methods have raised considerable interest in many research areas; this is mainly due to their ability to identify and select relevant features without needing class label information. In this paper, we provide a comprehensive and structured review of the most relevant and recent unsupervised feature selection methods reported in the literature. We present a taxonomy of these methods and describe the main characteristics and the fundamental ideas they are based on. Additionally, we summarize the advantages and disadvantages of the general lines in which we have categorized the methods analyzed in this review. Moreover, an experimental comparison among the most representative methods of each approach is also presented. Finally, we discuss some important open challenges in this research area.
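One of the simplest unsupervised criteria in the family this review surveys is ranking features by variance, which needs no class labels at all. The function names, the cutoff `k`, and the data below are our own illustrative choices:

```python
def variance(column):
    mean = sum(column) / len(column)
    return sum((v - mean) ** 2 for v in column) / len(column)

def select_by_variance(rows, k):
    """Return the indices of the k highest-variance features,
    using no class-label information."""
    n_features = len(rows[0])
    columns = [[row[j] for row in rows] for j in range(n_features)]
    ranked = sorted(range(n_features),
                    key=lambda j: variance(columns[j]),
                    reverse=True)
    return sorted(ranked[:k])

data = [
    [1.0, 5.0, 0.0],
    [1.0, 3.0, 0.1],
    [1.0, 9.0, 0.0],
    [1.0, 1.0, 0.1],
]
# Feature 0 is constant, so it is dropped in favor of features 1 and 2.
print(select_by_variance(data, 2))
```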

325 citations


Journal ArticleDOI
TL;DR: Optimisation results and discussion confirm that the BES algorithm competes well with advanced meta-heuristic algorithms and conventional methods.
Abstract: This study proposes a bald eagle search (BES) algorithm, which is a novel, nature-inspired meta-heuristic optimisation algorithm that mimics the hunting strategy or intelligent social behaviour of bald eagles as they search for fish. Hunting by BES is divided into three stages. In the first stage (selecting space), an eagle selects the space containing the most prey. In the second stage (searching in space), the eagle moves inside the selected space to search for prey. In the third stage (swooping), the eagle swoops from the best position identified in the second stage and determines the best point to hunt; all other movements are directed towards this point. BES is tested by adopting a three-part evaluation methodology that (1) describes the benchmarking of the optimisation problem to evaluate the algorithm performance, (2) compares the algorithm performance with that of other intelligent computation techniques and parameter settings and (3) evaluates the algorithm based on the mean, standard deviation, best point and Wilcoxon signed-rank test statistic of the function values. Optimisation results and discussion confirm that the BES algorithm competes well with advanced meta-heuristic algorithms and conventional methods.
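The three-stage control flow can be caricatured on a 1-D test function. This sketch only mirrors the select/search/swoop structure described above; the paper's actual update equations (which involve spiral and polar terms) are not reproduced here, and all step sizes are arbitrary:

```python
import random

def sphere(x):
    """Simple benchmark objective: minimum at x = 0."""
    return x * x

def bes_sketch(iters=50, seed=0):
    rng = random.Random(seed)
    best = rng.uniform(-10, 10)
    for _ in range(iters):
        # Stage 1 (select space): pick a promising region around the best.
        centre = best + 2.0 * rng.uniform(-1, 1)
        # Stage 2 (search in space): sample inside the selected region.
        candidate = centre + rng.uniform(-1, 1)
        # Stage 3 (swoop): move from the best point toward the candidate.
        swoop = best + rng.uniform(0, 1) * (candidate - best)
        for point in (candidate, swoop):
            if sphere(point) < sphere(best):   # greedy acceptance
                best = point
    return best

print(abs(bes_sketch()))
```

Because the loop only accepts improvements, the final solution is never worse than the random starting point.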

281 citations


Journal ArticleDOI
TL;DR: This survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.
Abstract: In this paper we present a broad overview of the last 40 years of research on cognitive architectures. To date, the number of existing architectures has reached several hundred, but most of the existing surveys do not reflect this growth and instead focus on a handful of well-established architectures. In this survey we aim to provide a more inclusive and high-level overview of the research on cognitive architectures. Our final set of 84 architectures includes 49 that are still actively developed, and borrow from a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention mechanisms, action selection, memory, learning, reasoning and metareasoning. In order to assess the breadth of practical applications of cognitive architectures we present information on over 900 practical projects implemented using the cognitive architectures in our list. We use various visualization techniques to highlight the overall trends in the development of the field. In addition to summarizing the current state-of-the-art in cognitive architecture research, this survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.

259 citations


Journal ArticleDOI
TL;DR: This paper aims at reviewing and analyzing related studies carried out in recent decades, from the experimental design perspective, and identifying limitations in the existing body of literature based upon which some directions for future research can be gleaned.
Abstract: Missing value imputation (MVI) has been studied for several decades as the basic solution method for incomplete dataset problems, specifically those where some data samples contain one or more missing attribute values. This paper aims at reviewing and analyzing related studies carried out in recent decades from the experimental design perspective. Altogether, 111 journal papers published from 2006 to 2017 are reviewed and analyzed. In addition, several technical issues encountered during the MVI process are discussed, such as the choice of datasets, missing rates and missingness mechanisms, and the MVI techniques and evaluation metrics employed. The analysis of these issues allows limitations in the existing body of literature to be identified, based upon which some directions for future research can be gleaned.
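Mean imputation, one of the baseline MVI techniques within this review's scope, can be sketched in a few lines. Here `None` marks a missing attribute value, and the data are made up for illustration:

```python
def impute_mean(rows):
    """Replace each missing value (None) with the mean of the
    observed values in its column."""
    n_features = len(rows[0])
    means = []
    for j in range(n_features):
        observed = [row[j] for row in rows if row[j] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if row[j] is None else row[j]
             for j in range(n_features)]
            for row in rows]

data = [[1.0, None],
        [3.0, 4.0],
        [None, 8.0]]
print(impute_mean(data))
```

Evaluation in the reviewed studies typically compares such imputed values against held-out ground truth, which is where the choice of missing rate and missingness mechanism matters.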

240 citations


Journal ArticleDOI
TL;DR: A survey of various techniques suggested for compressing and accelerating the ML and DL models is presented and the challenges of the existing techniques are discussed and future research directions in the field are provided.
Abstract: In recent years, machine learning (ML) and deep learning (DL) have shown remarkable improvement in computer vision, natural language processing, stock prediction, forecasting, and audio processing, to name a few. The size of the trained DL model is large for these complex tasks, which makes it difficult to deploy on resource-constrained devices. For instance, the size of the pre-trained VGG16 model trained on the ImageNet dataset is more than 500 MB. Resource-constrained devices such as mobile phones and internet of things devices have limited memory and less computation power. For real-time applications, the trained models should be deployed on resource-constrained devices. Popular convolutional neural network models have millions of parameters, which leads to an increase in the size of the trained model. Hence, it becomes essential to compress and accelerate these models before deploying them on resource-constrained devices while making the least compromise with the model accuracy. It is a challenging task to retain the same accuracy after compressing the model. To address this challenge, in the last couple of years many researchers have suggested different techniques for model compression and acceleration. In this paper, we present a survey of various techniques suggested for compressing and accelerating ML and DL models. We also discuss the challenges of the existing techniques and provide future research directions in the field.
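One family of compression techniques within this survey's scope is magnitude-based weight pruning, which zeroes the smallest-magnitude weights so the model can be stored sparsely. The 50% sparsity target and the weight values below are illustrative, not from any particular model:

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest |w|."""
    n_prune = int(len(weights) * sparsity)
    smallest = sorted(range(len(weights)),
                      key=lambda i: abs(weights[i]))[:n_prune]
    pruned = list(weights)
    for i in smallest:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.1]
# Half the weights are pruned; the large-magnitude ones survive.
print(prune_by_magnitude(w, 0.5))
```

After pruning, models are usually fine-tuned to recover accuracy, which is exactly the accuracy/compression trade-off the abstract highlights.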

221 citations


Journal ArticleDOI
TL;DR: Support vector machine and artificial neural network were found to be the most used machine learning algorithms for stock market prediction.
Abstract: The stock market is a key pivot in every growing and thriving economy, and every investment in the market is aimed at maximising profit and minimising associated risk. As a result, numerous studies have been conducted on stock-market prediction using technical or fundamental analysis through various soft-computing techniques and algorithms. This study attempted to undertake a systematic and critical review of about one hundred and twenty-two (122) pertinent research works reported in academic journals over 11 years (2007–2018) in the area of stock market prediction using machine learning. The various techniques identified from these reports were clustered into three categories, namely technical, fundamental, and combined analyses. The grouping was done based on the following criteria: the nature of the dataset and the number of data sources used, the data timeframe, the machine learning algorithms used, the machine learning task, the accuracy and error metrics used, and the software packages used for modelling. The results revealed that 66% of the documents reviewed were based on technical analysis, while 23% and 11% were based on fundamental and combined analyses, respectively. Concerning the number of data sources, 89.34% of the documents reviewed used a single source, while 8.2% and 2.46% used two and three sources, respectively. Support vector machine and artificial neural network were found to be the most used machine learning algorithms for stock market prediction.

171 citations


Journal ArticleDOI
TL;DR: An overview on neutrosophic set is presented with the aim of offering a clear perspective on the different concepts, tools and trends related to their extensions and indicates that some developing economies (such as China, India, Turkey) are quite active in neutrosophic set research.
Abstract: The neutrosophic set, initiated by Smarandache, is a novel tool to deal with vagueness considering the truth-membership T, indeterminacy-membership I and falsity-membership F satisfying the condition $$0\le T+I+F\le 3$$. It can be used to characterize uncertain information more sufficiently and accurately than the intuitionistic fuzzy set. Neutrosophic sets have attracted great attention from many scholars, have been extended to new types, and these extensions have been used in many areas such as aggregation operators, decision making, image processing, information measures, graph and algebraic structures. Because of such growth, we present an overview of the neutrosophic set with the aim of offering a clear perspective on the different concepts, tools and trends related to its extensions. A total of 137 neutrosophic set publication records from Web of Science are analyzed. Many interesting results with regard to the annual trends, the top players at the country and institutional levels, the publishing journals, the highly cited papers, and the research landscape are yielded and explained in depth. The results indicate that some developing economies (such as China, India and Turkey) are quite active in neutrosophic set research. Moreover, the co-authorship analysis of countries and institutions, the co-citation analysis of journals, references and authors, and the co-occurrence analysis of keywords are presented using the VOSviewer software.
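The membership condition quoted in the abstract is easy to check mechanically, and doing so shows how it relaxes the intuitionistic fuzzy constraint T + F ≤ 1. Function names and sample values below are our own:

```python
def is_neutrosophic(t, i, f):
    """T, I, F each lie in [0, 1] independently, so their sum may reach 3."""
    in_unit = all(0.0 <= v <= 1.0 for v in (t, i, f))
    return in_unit and 0.0 <= t + i + f <= 3.0

def is_intuitionistic(t, f):
    """Intuitionistic fuzzy pairs additionally require T + F <= 1."""
    return 0.0 <= t <= 1.0 and 0.0 <= f <= 1.0 and t + f <= 1.0

# (T, I, F) = (0.8, 0.6, 0.5) is a valid neutrosophic triple even though
# T + F > 1 rules the pair out as an intuitionistic fuzzy element.
print(is_neutrosophic(0.8, 0.6, 0.5), is_intuitionistic(0.8, 0.5))
```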

150 citations


Journal ArticleDOI
TL;DR: This review paper provides an overview of the most popular approaches to automated personality detection, various computational datasets, its industrial applications, and state-of-the-art machine learning models for personality detection with specific focus on multimodal approaches.
Abstract: Recently, the automatic prediction of personality traits has received a lot of attention. Specifically, personality trait prediction from multimodal data has emerged as a hot topic within the field of affective computing. In this paper, we review significant machine learning models which have been employed for personality detection, with an emphasis on deep learning-based methods. This review paper provides an overview of the most popular approaches to automated personality detection, various computational datasets, its industrial applications, and state-of-the-art machine learning models for personality detection with specific focus on multimodal approaches. Personality detection is a very broad and diverse topic: this survey only focuses on computational approaches and leaves out psychological studies on personality detection.

Journal ArticleDOI
TL;DR: This review showed that mammograms and histopathologic images were mostly used to classify breast cancer, and most of the selected studies used accuracy and area-under-the-curve metrics followed by sensitivity, precision, and F-measure metrics to evaluate the performance of the developed breast cancer classification models.
Abstract: Breast cancer is a common and fatal disease among women worldwide. Therefore, the early and precise diagnosis of breast cancer plays a pivotal role in improving the prognosis of patients with this disease. Several studies have developed automated techniques using different medical imaging modalities to predict breast cancer development. However, few review studies are available to recapitulate the existing literature on breast cancer classification. These studies provide an overview of the classification, segmentation, or grading of many cancer types, including breast cancer, by using traditional machine learning approaches through hand-engineered features. This review focuses on breast cancer classification using multiple medical imaging modalities through state-of-the-art deep neural network approaches. It is anticipated to maximize the procedural decision analysis in five aspects: types of imaging modalities, datasets and their categories, pre-processing techniques, types of deep neural network, and performance metrics used for breast cancer classification. Forty-nine journal and conference publications from eight academic repositories were methodically selected and carefully reviewed from the perspective of these five aspects. In addition, this study provides quantitative, qualitative, and critical analyses of the five aspects. This review showed that mammograms and histopathologic images were mostly used to classify breast cancer. Moreover, about 55% of the selected studies used public datasets, and the remaining used exclusive datasets. Several studies employed augmentation, scaling, and image normalization pre-processing techniques to minimize inconsistencies in breast cancer images. Several types of shallow and deep neural network architecture were employed to classify breast cancer using images. The convolutional neural network was utilized frequently to construct an effective breast cancer classification model. Some of the selected studies employed a pre-trained network or developed new deep neural networks to classify breast cancer. Most of the selected studies used accuracy and area-under-the-curve metrics, followed by sensitivity, precision, and F-measure metrics, to evaluate the performance of the developed breast cancer classification models. Finally, this review presents 10 open research challenges for future scholars who are interested in developing breast cancer classification models through various imaging modalities. This review could serve as a valuable resource for beginners in medical image classification and for advanced scientists focusing on deep learning-based breast cancer classification through different medical imaging modalities.
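The evaluation metrics the review found most common (accuracy, sensitivity, precision, F-measure) all derive from a confusion matrix. The counts below are made up for illustration, not taken from any reviewed study:

```python
def metrics(tp, fp, tn, fn):
    """Compute common classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)    # recall / true-positive rate
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f_measure

acc, sens, prec, f1 = metrics(tp=40, fp=10, tn=45, fn=5)
print(acc, sens, prec, f1)
```

For a diagnostic task, sensitivity matters most when missing a cancer (a false negative) is costlier than a false alarm, which is why it is reported alongside accuracy.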

Journal ArticleDOI
TL;DR: This paper is the first SLR specifically on the deep learning based RS to summarize and analyze the existing studies based on the best quality research publications and indicated that autoencoder models are the most widely exploited deep learning architectures for RS followed by the Convolutional Neural Networks and the Recurrent Neural Networks.
Abstract: These days, many recommender systems (RS) are utilized for solving the information overload problem in areas such as e-commerce, entertainment, and social media. Although classical methods of RS have achieved remarkable success in providing item recommendations, they still suffer from many issues such as cold start and data sparsity. With the recent achievements of deep learning in various applications such as Natural Language Processing (NLP) and image processing, more efforts have been made by researchers to exploit deep learning methods to improve the performance of RS. However, despite the several research works on deep learning based RS, very few secondary studies have been conducted in the field. Therefore, this study aims to provide a systematic literature review (SLR) of deep learning based RSs that can guide researchers and practitioners to better understand the new trends and challenges in the field. This paper is the first SLR specifically on deep learning based RS to summarize and analyze the existing studies based on the best quality research publications. The paper particularly adopts an SLR approach based on the standard SLR guidelines designed by Kitchenham, which use a selection method and provide detailed analysis of the research publications. Several publications were gathered and, after applying inclusion/exclusion criteria and the quality assessment, the selected papers were finally used for the review. The results of the review indicated that autoencoder (AE) models are the most widely exploited deep learning architectures for RS, followed by Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models. Also, the results showed that MovieLens is the most popular dataset for deep learning based RS evaluation, followed by the Amazon review datasets.
Based on the results, movies and e-commerce have been indicated as the most common domains for RS, and precision and Root Mean Squared Error are the most commonly used metrics for evaluating the performance of deep learning based RSs.
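Root Mean Squared Error, one of the two metrics the review found most common for deep learning based RSs, is straightforward to compute over predicted versus actual ratings. The rating values below are illustrative:

```python
import math

def rmse(predicted, actual):
    """Root Mean Squared Error between predicted and actual ratings."""
    se = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    return math.sqrt(se / len(actual))

predicted = [3.5, 4.0, 2.0, 5.0]
actual    = [4.0, 4.0, 1.0, 4.0]
print(rmse(predicted, actual))
```

RMSE penalizes large rating errors quadratically, whereas precision (the other common metric) evaluates the top-N recommendation list rather than the predicted rating values.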

Journal ArticleDOI
TL;DR: The main advantages of the proposed algorithm are that it (1) exhibits no counterintuitive phenomena; (2) avoids division-by-zero and antilogarithm-of-zero problems; and (3) has a stronger ability to distinguish alternatives.
Abstract: The 5G industry is of great concern to countries as they formulate major national strategies for 5G planning, promote industrial upgrading, and accelerate economic and technological modernization. Evaluating the 5G industry involves strong uncertainty. Pythagorean fuzzy sets, characterized by a membership degree and a non-membership degree, are an effective means of capturing such uncertainty. In this paper, the comparison issue in the Pythagorean fuzzy environment is addressed by proposing a novel score function. Next, the $$\ominus $$ and $$\oslash $$ operations are defined and their properties are proved. Later, the objective weight is calculated by the Criteria Importance Through Inter-criteria Correlation method, and the combined weight is determined by reflecting both the subjective and the objective weights. Then, a Pythagorean fuzzy decision-making algorithm based on the Combined Compromise Solution is developed. Lastly, the validity of the algorithm is demonstrated on the 5G evaluation issue, along with a sensitivity analysis. The main advantages of the proposed algorithm are that it (1) exhibits no counterintuitive phenomena; (2) avoids division-by-zero and antilogarithm-of-zero problems; and (3) has a stronger ability to distinguish alternatives.
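A Pythagorean fuzzy number is a pair (mu, nu) with mu² + nu² ≤ 1. The sketch below checks that condition and ranks two alternatives with the *classical* score function s = mu² − nu²; it does not reproduce the novel score function proposed in the paper:

```python
def is_pythagorean(mu, nu):
    """Membership mu and non-membership nu must satisfy mu^2 + nu^2 <= 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu**2 + nu**2 <= 1.0

def score(mu, nu):
    """Classical Pythagorean fuzzy score function (not the paper's)."""
    return mu**2 - nu**2

# (0.9, 0.3) is a valid Pythagorean fuzzy number even though
# 0.9 + 0.3 > 1 rules it out as an intuitionistic fuzzy pair.
a, b = (0.9, 0.3), (0.6, 0.5)
assert is_pythagorean(*a) and is_pythagorean(*b)
print(score(*a) > score(*b))   # a ranks above b
```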

Journal ArticleDOI
TL;DR: A review of the literature that analyzes the use of big data tools and big data analytics techniques in areas like health and medical care, social networking and internet, government and public sector, natural resource management, economic and business sector is presented.
Abstract: Big data has become a significant research area due to the birth of enormous data generated from various sources like social media, internet of things and multimedia applications. Big data has played a critical role in many decision-making and forecasting domains such as recommendation systems, business analysis, healthcare, web display advertising, clinicians, transportation, fraud detection and tourism marketing. The rapid development of various big data tools such as Hadoop, Storm, Spark, Flink, Kafka and Pig in research and industrial communities has allowed huge volumes of data to be distributed, communicated and processed. Big data applications use big data analytics techniques to efficiently analyze large amounts of data. However, choosing suitable big data tools based on batch and stream data processing and analytics techniques for developing a big data system is difficult due to the challenges in processing and applying big data. Practitioners and researchers who are developing big data systems have inadequate information about the current technology and requirements concerning the big data platform. Hence, the strengths and weaknesses of big data technologies and effective solutions for big data challenges need to be discussed. Therefore, this paper presents a review of the literature that analyzes the use of big data tools and big data analytics techniques in areas like health and medical care, social networking and the internet, government and the public sector, natural resource management, and the economic and business sector. The goals of this paper are to (1) understand the trend of big data-related research and current frames of big data technologies; (2) identify trends in the use or research of big data tools based on batch and stream processing and big data analytics techniques; (3) assist new researchers and practitioners in placing new research activity in this domain appropriately.
The findings of this study will provide insights and knowledge on the existing big data platforms and their application domains, the advantages and disadvantages of big data tools, big data analytics techniques and their use, and new research opportunities in future development of big data systems.

Journal ArticleDOI
TL;DR: The existing image segmentation quality evaluation methods are summarized, mainly including unsupervised methods and supervised methods, and the application of metrics in natural, medical and remote sensing image evaluation is further outlined.
Abstract: Image segmentation is a prerequisite for image processing. There are many methods for image segmentation, and as a result, a great number of methods for evaluating segmentation results have also been proposed. How to effectively evaluate the quality of image segmentation is very important. In this paper, the existing image segmentation quality evaluation methods are summarized, mainly including unsupervised methods and supervised methods. Based on hot issues, the application of metrics in natural, medical and remote sensing image evaluation is further outlined. In addition, an experimental comparison of some methods was carried out and the effectiveness of these methods was ranked. At the same time, the effectiveness of classical metrics for remote sensing and medical image evaluation is also verified.
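Two widely used supervised segmentation-evaluation metrics of the kind this survey compares are Intersection over Union and the Dice coefficient. The sketch below computes both on flattened binary masks chosen for illustration:

```python
def iou(pred, truth):
    """Intersection over Union (Jaccard index) for binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

pred  = [1, 1, 1, 0, 0, 0]   # predicted segmentation mask (flattened)
truth = [0, 1, 1, 1, 0, 0]   # ground-truth mask
print(iou(pred, truth), dice(pred, truth))
```

Both metrics are supervised in the survey's sense: they require a ground-truth mask, unlike the unsupervised criteria that judge a segmentation by its internal coherence alone.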

Journal ArticleDOI
TL;DR: A comprehensive survey of the most recent approaches involving the hybridization of SI and EC algorithms for DL, the architecture of DNNs, and DNN training to improve the classification accuracy is presented.
Abstract: Deep learning (DL) has become an important machine learning approach that has been widely successful in many applications. Currently, DL is one of the best methods of extracting knowledge from large sets of raw data in a (nearly) self-organized manner. The technical design of DL depends on the feed-forward information flow principle of artificial neural networks with multiple layers of hidden neurons, which form deep neural networks (DNNs). DNNs have various architectures and parameters and are often developed for specific applications. However, the training process of DNNs can be prolonged based on the application and training set size (Gong et al. 2015). Moreover, finding the most accurate and efficient architecture of a deep learning system in a reasonable time is a potential difficulty associated with this approach. Swarm intelligence (SI) and evolutionary computing (EC) techniques represent simulation-driven non-convex optimization frameworks with few assumptions based on objective functions. These methods are flexible and have been proven effective in many applications; therefore, they can be used to improve DL by optimizing the applied learning models. This paper presents a comprehensive survey of the most recent approaches involving the hybridization of SI and EC algorithms for DL, the architecture of DNNs, and DNN training to improve the classification accuracy. The paper reviews the significant roles of SI and EC in optimizing the hyper-parameters and architectures of a DL system in the context of large-scale data analytics. Finally, we identify some open problems for further research, as well as potential issues related to DL that require improvements, and an extensive bibliography of the pertinent research is presented.
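The hyper-parameter-optimization idea the survey reviews can be illustrated with a toy (1+1) evolution strategy mutating a single hyper-parameter. The validation-loss surface, the learning-rate name, and all constants below are made up purely for illustration; a real hybrid would evaluate an actual DNN per candidate:

```python
import random

def val_loss(lr):
    """Made-up smooth surrogate for validation loss;
    pretend the best learning rate is 0.01."""
    return (lr - 0.01) ** 2

def evolve_lr(generations=100, seed=1):
    """(1+1) evolution strategy: mutate, evaluate, keep the better."""
    rng = random.Random(seed)
    lr = 0.5                                   # initial guess
    for _ in range(generations):
        child = abs(lr + rng.gauss(0, 0.05))   # Gaussian mutation
        if val_loss(child) < val_loss(lr):     # greedy selection
            lr = child
    return lr

print(evolve_lr())
```

The appeal noted in the abstract is that such searches need only objective-function evaluations: no gradients of the hyper-parameter landscape are required.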

Journal ArticleDOI
TL;DR: Some of the most popular nature-inspired optimization methods currently reported in the literature are analyzed, while also discussing their applications for solving real-world problems and their impact on the current literature.
Abstract: Nature-inspired metaheuristics comprise a compelling family of optimization techniques. These algorithms are designed with the idea of emulating some kind of natural phenomenon (such as the theory of evolution, the collective behavior of groups of animals, the laws of physics or the behavior and lifestyle of human beings) and applying it to solve complex problems. Nature-inspired methods have taken the area of mathematical optimization by storm. Only in the last few years, literature related to the development of this kind of technique and its applications has experienced an unprecedented increase, with hundreds of new papers being published every single year. In this paper, we analyze some of the most popular nature-inspired optimization methods currently reported in the literature, while also discussing their applications for solving real-world problems and their impact on the current literature. Furthermore, we open a discussion on several research gaps and areas of opportunity that are yet to be explored within this promising area of science.

Journal ArticleDOI
TL;DR: This survey provides a review of past and recent research on quaternion neural networks and their applications in different domains and details methods, algorithms and applications for each quaternion-valued neural network proposed.
Abstract: Quaternion neural networks have recently received increasing interest due to noticeable improvements over real-valued neural networks on real-world tasks such as image, speech and signal processing. The extension of quaternion numbers to neural architectures has reached state-of-the-art performance with a reduction in the number of neural parameters. This survey provides a review of past and recent research on quaternion neural networks and their applications in different domains. The paper details methods, algorithms and applications for each quaternion-valued neural network proposed.
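The core operation quaternion neural networks build their layers around is the Hamilton product. Below is that product for quaternions represented as (w, x, y, z) tuples; the helper name is our own choice:

```python
def hamilton(q, p):
    """Hamilton product of two quaternions q = (w, x, y, z), p likewise."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
# The defining identities i*j = k and j*i = -k show non-commutativity.
print(hamilton(i, j), hamilton(j, i))
```

Replacing a real-valued matrix-vector product with Hamilton products shares parameters across the four quaternion components, which is the source of the parameter reduction the abstract mentions.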

Journal ArticleDOI
TL;DR: This study encourages researchers and developers of meta-heuristic algorithms to use symbiotic organisms search (SOS), which has been able to solve a wide range of engineering problems so far, because it is a simple and powerful algorithm for solving complex and NP-hard problems.
Abstract: Recently, meta-heuristic algorithms have made remarkable progress in solving complex and NP-hard problems. Most of these algorithms are inspired by swarm intelligence and biological systems, as well as other physical and chemical systems in nature. Different classifications of meta-heuristic algorithms have been presented so far, and the number of these algorithms is increasing day by day. Among them, some algorithms are highly efficient and thus well suited to solving real-world problems, while others have not been sufficiently studied. One of the nature-inspired meta-heuristic algorithms is symbiotic organisms search (SOS), which has been able to solve a wide range of engineering problems so far. In this paper, firstly, the primary principles, basic concepts, and mathematical relations of the SOS algorithm are presented; then the engineering applications of the SOS algorithm and published research in different fields are examined, and modified and multi-objective versions as well as hybridized and discrete models of this algorithm are studied. This study encourages researchers and developers of meta-heuristic algorithms to use this algorithm for solving various problems, because it is a simple and powerful algorithm for solving complex and NP-hard problems. In addition, a detailed statistical analysis was performed on the studies that have used this algorithm. According to these studies and investigations, the features of this algorithm compare favorably with those of other meta-heuristic algorithms, which has increased its usability in various fields.
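For readers unfamiliar with SOS, the sketch below illustrates its mutualism phase in plain Python, following the commonly published update rule (a mutual vector averaged from two organisms, and benefit factors randomly set to 1 or 2). The helper is an illustrative paraphrase, not code from any paper covered by the survey.

```python
import random

def mutualism_phase(x_i, x_j, x_best):
    """One mutualism interaction of SOS (sketch): organisms x_i and x_j
    (lists of decision-variable values) both benefit from their mutual
    relationship and move toward the current best organism x_best."""
    bf1 = random.choice([1, 2])  # benefit factor for organism i
    bf2 = random.choice([1, 2])  # benefit factor for organism j
    mutual = [(a + b) / 2 for a, b in zip(x_i, x_j)]
    new_i = [a + random.random() * (b - m * bf1)
             for a, m, b in zip(x_i, mutual, x_best)]
    new_j = [a + random.random() * (b - m * bf2)
             for a, m, b in zip(x_j, mutual, x_best)]
    return new_i, new_j
```

In the full algorithm, each candidate replaces its parent only if its fitness improves; that greedy acceptance step is omitted here for brevity.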

Journal ArticleDOI
TL;DR: A survey of the use of DL architectures in computer-assisted imaging contexts, covering two different image modalities: the actively studied computed tomography and the under-studied positron emission tomography, as well as the combination of both modalities, which has been an important landmark in decisions related to numerous diseases.
Abstract: Medical imaging is a rich source of invaluable information necessary for clinical judgements. However, the analysis of those exams is not a trivial task. In recent times, the use of deep learning (DL) techniques, supervised or unsupervised, has grown rapidly, and it is one of the current key research areas in medical image analysis. This paper presents a survey of the use of DL architectures in computer-assisted imaging contexts, covering two different image modalities: the actively studied computed tomography and the under-studied positron emission tomography, as well as the combination of both modalities, which has been an important landmark in decisions related to numerous diseases. In the making of this review, we analysed over 180 relevant studies, published between 2014 and 2019, organized by the purpose of the research and the imaging modality type. We conclude by addressing open research issues and suggesting future directions for further improvement. To the best of our knowledge, there is no previous review of this topic.

Journal ArticleDOI
TL;DR: A comprehensive survey on feature selection approaches for clustering is introduced, reflecting the advantages/disadvantages of current approaches from different perspectives and identifying promising trends for future research.
Abstract: The massive growth of data in recent years has led to challenges in data mining and machine learning tasks. One of the major challenges is the selection of relevant features from the original set of available features that maximally improves learning performance over that of the original feature set. This issue has attracted researchers’ attention, resulting in a variety of successful feature selection approaches in the literature. Although there exist several surveys on unsupervised learning (e.g., clustering), many works concerning unsupervised feature selection are missing from these surveys (e.g., evolutionary computation based feature selection for clustering), making it hard to identify the strengths and weaknesses of those approaches. In this paper, we introduce a comprehensive survey on feature selection approaches for clustering, reflecting the advantages/disadvantages of current approaches from different perspectives and identifying promising trends for future research.
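A minimal example of the unsupervised filter family discussed above: ranking features by variance, on the reasoning that a near-constant feature cannot help separate clusters. The function below is a generic illustration, not a method from the survey.

```python
def variance_filter(data, k):
    """Rank features of `data` (a list of samples, each a list of feature
    values) by variance and return the indices of the top-k.  Variance is
    one of the simplest unsupervised filter criteria for clustering."""
    n = len(data)
    n_features = len(data[0])
    scores = []
    for j in range(n_features):
        col = [row[j] for row in data]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        scores.append((var, j))
    scores.sort(reverse=True)          # highest variance first
    return [j for _, j in scores[:k]]
```

For instance, on `[[1, 0, 10], [2, 0, 20], [3, 0, 30]]` the constant middle feature is dropped first, so `variance_filter(data, 2)` returns `[2, 0]`.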

Journal ArticleDOI
TL;DR: Chaos has been integrated into the standard BSA, for the first time, in order to enhance global convergence by preventing premature convergence and stagnation in local solutions.
Abstract: Swarm intelligence based optimization methods have been proposed by observing the movements of living swarms such as bees, birds, cats, and fish, in order to obtain a global solution in a reasonable time when mathematical models cannot be formed. However, many swarm intelligence algorithms suffer from premature convergence and may become trapped in local optima. The bird swarm algorithm (BSA) is one of the most recent swarm-based methods and suffers from the same problems in some situations. In order to obtain faster convergence with high accuracy from swarm-based optimization algorithms, different methods have been utilized to balance exploitation and exploration. In this paper, chaos has been integrated into the standard BSA, for the first time, in order to enhance global convergence by preventing premature convergence and stagnation in local solutions. Furthermore, a new application area has been introduced for chaotic dynamics. The standard BSA and the chaotic BSAs proposed in this paper have been tested on unimodal and multimodal unconstrained benchmark functions, and on constrained real-life engineering design problems. In general, the proposed chaotic BSAs with an appropriate chaotic map outperform the standard BSA on both benchmark functions and engineering design problems. The proposed chaotic BSAs are expected to be used effectively in many complex problems in the future by integrating enhanced multi-dimensional chaotic maps, time-continuous chaotic systems, and hybrid multi-dimensional maps.
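The core of any chaotic metaheuristic variant is to replace the algorithm's uniform random draws with a deterministic chaotic sequence. A minimal sketch using the logistic map, one common choice of chaotic map (the specific maps and parameters used in the paper may differ):

```python
def logistic_map(x0=0.7, r=4.0):
    """Generator producing a chaotic sequence in [0, 1] from the logistic
    map x_{k+1} = r * x_k * (1 - x_k).  With r = 4 the map is fully
    chaotic; in a chaotic BSA variant, draws from such a map replace the
    uniform random numbers of the standard algorithm."""
    x = x0
    while True:
        x = r * x * (1 - x)
        yield x
```

Starting from x0 = 0.7, the first value is 4 * 0.7 * 0.3 = 0.84, and subsequent values wander over the unit interval without ever repeating periodically.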

Journal ArticleDOI
TL;DR: A quantum behaved particle swarm algorithm has been used for inverse kinematic solution of a 7-degree-of-freedom serial manipulator and the results have been compared with other swarm techniques such as firefly algorithm, particle swarm optimization (PSO) and artificial bee colony (ABC).
Abstract: In this study, a quantum behaved particle swarm optimization (QPSO) algorithm has been used for the inverse kinematic solution of a 7-degree-of-freedom serial manipulator, and the results have been compared with those of other swarm techniques such as the firefly algorithm (FA), particle swarm optimization (PSO) and artificial bee colony (ABC). Firstly, the DH parameters of the robot manipulator are established and the transformation matrices are derived. Afterward, the position equations are obtained from these matrices. The position of the end effector of the robotic manipulator in the workspace is estimated using QPSO and the other swarm algorithms. For this purpose, a Euclidean-distance fitness function has been defined, which calculates the difference between the actual position and the estimated position of the manipulator end effector. The algorithms have been tested in two different scenarios: in the first, values for a single position were obtained, while in the second, values for a hundred different positions were obtained. The second scenario confirms the quality of QPSO for the inverse kinematic solution by verifying the first. According to the results obtained, quantum-behaved PSO yielded results that are much more efficient than those of standard PSO, ABC and FA. The advantages of the improved algorithm are its short computation time, fewer iterations and a smaller number of particles.
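The Euclidean fitness described above can be sketched as follows; the signature is illustrative, assuming 3-D position tuples, with the estimated position coming from forward kinematics of a candidate joint configuration.

```python
import math

def euclidean_fitness(target, estimate):
    """Fitness used to score a candidate joint configuration: the
    Euclidean distance between the desired end-effector position
    `target` and the position `estimate` produced by forward kinematics,
    both given as (x, y, z) tuples.  Lower is better; zero means the
    candidate reaches the target exactly."""
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(target, estimate)))
```

The swarm then minimizes this value, e.g. `euclidean_fitness((0, 0, 0), (3, 4, 0))` evaluates to 5.0.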

Journal ArticleDOI
TL;DR: Several issues in wind farms are presented and two future research directions are pointed out: developing artificial intelligence algorithms for wind farm control systems and for wind speed and power prediction.
Abstract: Wind farms are enormous and complex control systems, and controlling and optimizing them is both challenging and valuable. Their applications are widespread across various industries. Artificial intelligence algorithms are effective methods for optimization problems due to their distinctive characteristics, and they have been successfully applied to wind farms. In this paper, several issues in wind farms are presented. Applications of artificial intelligence algorithms to wind farm controllers, Mach number, wind speed prediction, wind power prediction and other wind farm problems are reviewed. Two future research directions are pointed out: developing artificial intelligence algorithms for wind farm control systems and for wind speed and power prediction.

Journal ArticleDOI
TL;DR: A comprehensive survey of more than 120 techniques suggested by various researchers from time to time for Cancelable Biometrics is presented and a novel taxonomy for the same is developed.
Abstract: Biometric recognition is a challenging research field that also raises privacy and security concerns. To address these concerns, Cancelable Biometrics has been suggested in the literature, in which a Biometric image of a sample is distorted or transformed in such a manner that it becomes difficult to obtain the original Biometric image from the distorted one. Another important characteristic of Cancelable Biometrics is that it can be reissued if compromised. In this research paper, we present a comprehensive survey of more than 120 techniques suggested by various researchers for Cancelable Biometrics, and a novel taxonomy for them is developed. Further, various performance measures used in Cancelable Biometrics are reviewed and their mathematical formulations are given. Cancelable Biometrics is also subject to various security attacks reported in the literature, and a review of these attacks is carried out. We have also reviewed the databases used in the literature for nine different Cancelable Biometrics modalities, viz. Face, Iris, Speech, Fingerprint, Signature, Palmprint, ECG, Palmvein and Fingervein. Lastly, we give future research directions in this field. This study should be useful for researchers and practitioners working in this fascinating research area.

Journal ArticleDOI
TL;DR: The traditional TODIM (an acronym in Portuguese for interactive multi-criteria decision making) method is extended to handle HFLTSs based on a novel comparison function and distance measure.
Abstract: As a popular tool for modeling qualitative assessment information, hesitant fuzzy linguistic term sets (HFLTSs) allow decision makers or experts to give several possible linguistic terms when rating objects with respect to a criterion. Although many multi-criteria decision-making methods have been put forward for handling HFLTSs, they were developed under the assumption that decision makers can always provide completely rational assessments, and they do not take the decision makers’ psychological behaviors into consideration. In this paper, the traditional TODIM (an acronym in Portuguese for interactive multi-criteria decision making) method is extended to handle HFLTSs based on a novel comparison function and distance measure. Firstly, we put forward a novel function for comparing two HFLTSs more effectively. After that, a novel hesitance degree function as well as some novel distance measures are given for HFLTSs. Then we apply them to extend the traditional TODIM method to HFLTS settings. Finally, a practical example concerning the evaluation and ranking of several satellite launching centers is provided to illustrate the validity and applicability of the proposed method.
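For illustration only, a textbook-style distance between two HFLTSs of equal cardinality, represented by their linguistic term subscripts on a scale 0..tau, averages the normalized subscript differences. This generic measure stands in for, and is not, the novel measure proposed in the paper.

```python
def hflts_distance(h1, h2, tau):
    """Illustrative distance between two HFLTSs given as lists of term
    subscripts drawn from {0, ..., tau}.  Assumes equal cardinality (in
    practice the shorter set is padded first); averages the normalized
    differences of the sorted subscripts, yielding a value in [0, 1]."""
    assert len(h1) == len(h2), "pad the shorter set first"
    n = len(h1)
    return sum(abs(a - b) for a, b in zip(sorted(h1), sorted(h2))) / (n * tau)
```

On a seven-term scale (tau = 6), the distance between {s2, s3} and {s4, s5} is (2 + 2) / (2 * 6) = 1/3, while identical sets are at distance 0.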

Journal ArticleDOI
TL;DR: A survey of recent developments in filtering channel selection techniques, along with their feature extraction and classification methods, for MI-based EEG applications is presented.
Abstract: Brain computer interface (BCI) systems are used in a wide range of applications such as communication, neuro-prosthetics and environmental control for disabled persons using robots and manipulators. A typical BCI system uses different types of inputs; however, Electroencephalography (EEG) signals are most widely used due to their non-invasive electrodes, portability, and cost efficiency. Signals generated by the brain while performing or imagining a motor-related task [motor imagery (MI)] are among the most important inputs for BCI applications. EEG data is usually recorded from more than 100 locations across the brain, so efficient channel selection algorithms are of great importance for identifying optimal channels for a particular application. The main purposes of applying channel selection are to reduce the computational complexity of analysing EEG signals, to improve classification accuracy by reducing over-fitting, and to decrease setup time. Different channel selection evaluation algorithms, such as filtering, wrapper, and hybrid methods, have been used to extract optimal channel subsets according to predefined criteria. After extensively reviewing the literature in the field of EEG channel selection, we can conclude that channel selection algorithms make it possible to work with fewer channels without affecting classification accuracy. In some cases, channel selection even increases system performance by removing noisy channels. The literature shows that the same performance can be achieved using a smaller channel set, with 10–30 channels in most cases. In this paper, we present a survey of recent developments in filtering channel selection techniques, along with their feature extraction and classification methods, for MI-based EEG applications.
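A minimal sketch of a filtering criterion of the kind surveyed: the Fisher score of a per-channel feature (for example, band power) between two motor-imagery classes, with the highest-scoring channels retained. Both functions are generic illustrations, not a specific method from the survey.

```python
def fisher_score(class_a, class_b):
    """Filter criterion for one EEG channel: ratio of between-class to
    within-class variance of a scalar channel feature (e.g. band power).
    `class_a` and `class_b` are lists of that feature over the trials of
    the two classes; higher scores mark more discriminative channels."""
    def mean(v):
        return sum(v) / len(v)
    def var(v, m):
        return sum((x - m) ** 2 for x in v) / len(v)
    ma, mb = mean(class_a), mean(class_b)
    # small epsilon avoids division by zero for constant channels
    return (ma - mb) ** 2 / (var(class_a, ma) + var(class_b, mb) + 1e-12)

def select_channels(features_a, features_b, k):
    """Return the indices of the k channels with the highest Fisher
    score; features_*[c] lists the per-trial features for channel c."""
    scores = [(fisher_score(a, b), c)
              for c, (a, b) in enumerate(zip(features_a, features_b))]
    scores.sort(reverse=True)
    return sorted(c for _, c in scores[:k])
```

Classification then proceeds on the reduced channel set, which is where the speed and over-fitting benefits described above come from.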

Journal ArticleDOI
TL;DR: This paper elucidates how email content- and behavior-based features are extracted, which features are appropriate for the detection of UBEs, and how the most discriminating feature set is selected, and it presents an exhaustive comparative study using several state-of-the-art machine learning algorithms.
Abstract: With the influx of technological advancements and the increased simplicity of communication, especially through emails, the upsurge in the volume of unsolicited bulk emails (UBEs) has become a severe threat to global security and economy. Spam emails not only waste users’ time, but also consume a lot of network bandwidth, and may also include malware as executable files. Phishing emails, in turn, fraudulently solicit users’ personal information to facilitate identity theft and are comparatively more dangerous. Thus, there is an intrinsic need for the development of more robust and dependable UBE filters that facilitate automatic detection of such emails. There are several countermeasures to spam and phishing, including blacklisting and content-based filtering. However, in addition to content-based features, behavior-based features are well suited to the detection of UBEs. Machine learning models are being extensively used by leading internet service providers like Yahoo, Gmail, and Outlook to filter and classify UBEs successfully. There are far too many options to consider, owing to the need to facilitate UBE detection and the recent advances in this domain. In this paper, we elucidate how email content- and behavior-based features are extracted, which features are appropriate for the detection of UBEs, and how the most discriminating feature set is selected. Furthermore, to accurately handle the menace of UBEs, we present an exhaustive comparative study using several state-of-the-art machine learning algorithms. Our proposed models resulted in an overall accuracy of 99% in the classification of UBEs. The text is accompanied by snippets of Python code, to enable the reader to implement the approaches elucidated in this paper.
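The paper's own Python snippets are not reproduced here; as a stand-in, the toy sketch below extracts a few content- and behavior-based features of the general kind discussed and applies a hand-set rule in place of a trained classifier. All feature names, word lists, and thresholds are illustrative assumptions, not the paper's.

```python
import re

# Illustrative word list; a real filter would learn such signals from data.
SUSPICIOUS_WORDS = {"winner", "urgent", "verify", "password", "lottery"}

def extract_features(email_text):
    """Toy content/behavior features: link count, suspicious-word hits,
    and exclamation-mark density of the raw message text."""
    words = re.findall(r"[a-z']+", email_text.lower())
    return {
        "n_links": len(re.findall(r"https?://", email_text)),
        "n_suspicious": sum(w in SUSPICIOUS_WORDS for w in words),
        "bang_ratio": email_text.count("!") / max(len(email_text), 1),
    }

def is_ube(features):
    """Hand-set rule standing in for a trained classifier: flag the
    message when at least two risky signals co-occur."""
    score = ((features["n_links"] >= 2)
             + (features["n_suspicious"] >= 2)
             + (features["bang_ratio"] > 0.01))
    return score >= 2
```

In practice these feature vectors would feed a learned model (e.g. a decision tree or ensemble) rather than fixed thresholds, which is what the comparative study above evaluates.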

Journal ArticleDOI
TL;DR: The empirical findings show that the criteria of SEO exhibit self-effect relationships under the DEMATEL technique, and that the website with the lowest performance gap serves as the optimal example for administrators aiming to build high-ranking websites at the time this study was conducted.
Abstract: Search engine optimization (SEO) has been considered one of the most important techniques in internet marketing. This study establishes a decision model of search engine ranking for administrators to improve the performance of websites so that they satisfy users’ needs. To probe into the interrelationships and influential weights among the criteria of SEO and to evaluate the gaps between actual performance and the aspiration level in the real world, this research utilizes hybrid modified multiple criteria decision-making models, including the decision-making trial and evaluation laboratory (DEMATEL), the DEMATEL-based analytic network process (DANP), and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR). The empirical findings show that the criteria of SEO exhibit self-effect relationships under the DEMATEL technique. According to the influential network relation map (INRM), external website optimization is the top-priority dimension to improve when implementing SEO. Among the six evaluation criteria, meta tags is the most significant criterion influencing search engine ranking, followed by keywords and website design. The evaluation of search engine ranking reveals that the website with the lowest gap serves as the optimal example for website administrators to build a high-ranking website at the time this study was conducted.
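For readers unfamiliar with DEMATEL, the sketch below shows its central computation: normalizing the direct-influence matrix and accumulating the total-relation matrix T = D (I - D)^{-1} = D + D^2 + D^3 + ... via a truncated power series. It is a generic illustration on a toy matrix, not the study's survey data, and it assumes the normalized matrix's powers shrink so the series converges.

```python
def dematel_total_relation(Z, n_terms=50):
    """DEMATEL sketch: normalize the expert direct-influence matrix Z
    (list of lists) by its largest row sum, D = Z / s, then approximate
    the total-relation matrix T = D (I - D)^{-1} = D + D^2 + D^3 + ...
    with a truncated power series instead of an explicit inverse."""
    n = len(Z)
    s = max(sum(row) for row in Z)
    D = [[z / s for z in row] for row in Z]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    T = [row[:] for row in D]          # running sum, starts at D
    P = [row[:] for row in D]          # current power D^k
    for _ in range(n_terms - 1):
        P = matmul(P, D)
        T = [[T[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return T
```

Row and column sums of T then give each criterion's prominence and net cause/effect position, which is what the INRM visualizes.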