
Showing papers in "Computational Intelligence and Neuroscience in 2019"


Journal ArticleDOI
TL;DR: A systematic and meta-analysis survey of WOA is conducted to help researchers use it in different areas or hybridize it with other common algorithms, and the investigation paves the way for a new technique that hybridizes the WOA and BAT algorithms.
Abstract: The whale optimization algorithm (WOA) is a nature-inspired metaheuristic optimization algorithm proposed by Mirjalili and Lewis in 2016. This algorithm has shown its ability to solve many problems. Comprehensive surveys have been conducted on some other nature-inspired algorithms, such as ABC and PSO. Nonetheless, no survey has been conducted on WOA. Therefore, in this paper, a systematic and meta-analysis survey of WOA is conducted to help researchers use it in different areas or hybridize it with other common algorithms. WOA is presented in depth in terms of its algorithmic background, characteristics, limitations, modifications, hybridizations, and applications. Next, WOA performance on different problems is presented. Then, the statistical results of WOA modifications and hybridizations are established and compared with the most common optimization algorithms and with WOA itself. The survey's results indicate that WOA performs better than other common algorithms in terms of convergence speed and balancing between exploration and exploitation. WOA modifications and hybridizations also perform well compared to WOA. In addition, our investigation paves the way for a new technique that hybridizes the WOA and BAT algorithms: the BAT algorithm is used for the exploration phase, whereas WOA is used for the exploitation phase. Finally, the statistical results obtained from WOA-BAT are very competitive and better than WOA on 16 benchmark functions; WOA-BAT also performs well on 13 functions from CEC2005 and 7 functions from CEC2019.
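
The abstract specifies only the division of labour (BAT for exploration, WOA for exploitation), so the following is a minimal sketch of how such a hybrid could be wired, using the standard WOA encircling/spiral updates and a BAT-style frequency-velocity move; the switching rule, the sphere objective, and all parameter values are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sphere(x):                               # toy objective to minimize
    return float(np.sum(x ** 2))

def woa_bat(obj, dim=10, n=30, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))        # search agents
    V = np.zeros((n, dim))                   # BAT-style velocities
    fit = np.array([obj(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)            # WOA control parameter, 2 -> 0
        for i in range(n):
            A = a * (2 * rng.random(dim) - 1)
            if np.abs(A).mean() >= 1:        # exploration: BAT frequency move
                V[i] += (X[i] - best) * rng.random()
                cand = X[i] + V[i]
            elif rng.random() < 0.5:         # exploitation: encircling prey
                C = 2 * rng.random(dim)
                cand = best - A * np.abs(C * best - X[i])
            else:                            # exploitation: spiral update
                l = rng.uniform(-1, 1)
                cand = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            cand = np.clip(cand, lb, ub)
            f = obj(cand)
            if f < fit[i]:                   # greedy replacement
                X[i], fit[i] = cand, f
                if f < best_fit:
                    best, best_fit = cand.copy(), f
    return best, best_fit

print(woa_bat(sphere))
```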

141 citations


Journal ArticleDOI
TL;DR: An improved grey wolf optimization algorithm with variable weights (VW-GWO) is proposed, which works better than the standard GWO, the ant lion optimization (ALO), the particle swarm optimization (PSO) algorithm, and the bat algorithm.
Abstract: Under the hypothesis that the social hierarchy of grey wolves is also followed in their search positions, an improved grey wolf optimization (GWO) algorithm with variable weights (VW-GWO) is proposed. To reduce the probability of being trapped in local optima, a new governing equation for the controlling parameter is also proposed. Simulation experiments are carried out, and comparisons are made. Results show that the proposed VW-GWO algorithm works better than the standard GWO, ant lion optimization (ALO), the particle swarm optimization (PSO) algorithm, and the bat algorithm (BA). The novel VW-GWO algorithm is also verified on high-dimensional problems.
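
Standard GWO moves each wolf to the plain average of three leader-guided candidate positions; the variable-weights idea replaces that average with hierarchy-ordered weights. A minimal sketch, assuming an illustrative fixed weight triple (the paper derives its own weight schedule and a new controlling-parameter equation, neither reproduced here):

```python
import numpy as np

def vw_gwo_step(X, alpha, beta, delta, a, w=(0.5, 0.3, 0.2), rng=None):
    """One VW-GWO position update. Standard GWO weights the three
    leader-guided candidates equally (1/3 each); VW-GWO orders the
    weights by social hierarchy, w_alpha >= w_beta >= w_delta."""
    rng = rng or np.random.default_rng()
    new_X = np.zeros_like(X)
    for wi, leader in zip(w, (alpha, beta, delta)):
        A = a * (2 * rng.random(X.shape) - 1)   # coefficient vectors
        C = 2 * rng.random(X.shape)
        new_X += wi * (leader - A * np.abs(C * leader - X))
    return new_X

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, 8)                       # one wolf, 8 dimensions
alpha = beta = delta = np.zeros(8)              # leaders at the optimum
print(vw_gwo_step(X, alpha, beta, delta, a=1.0, rng=rng))
```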

104 citations


Journal ArticleDOI
TL;DR: This review will present different brain measurement techniques, along with their pros and cons, and the main cerebral indexes linked to the specific mental states of interest (used in most of the neuromarketing research).
Abstract: The new technological advances achieved during the last decade have allowed the scientific community to investigate and employ neurophysiological measures not only for research purposes but also for the study of human behaviour in real and daily life situations. The aim of this review is to understand how and whether neuroscientific technologies can be effectively employed to better understand human behaviour in real decision-making contexts. To do so, we first describe the historical development of neuromarketing and its main applications in assessing the sensory perception of marketing and advertising stimuli. We then describe the main neuroscientific tools available for such investigations (e.g., measuring cerebral electrical or hemodynamic activity, eye movements, and psychometric responses). This review also presents different brain measurement techniques, along with their pros and cons, and the main cerebral indexes linked to the specific mental states of interest (used in most neuromarketing research). Such indexes have been supported by adequate validations from the scientific community and are largely employed in neuromarketing research. The review also discusses a series of papers that present different neuromarketing applications, such as in-store choices and retail, services, pricing, brand perception, web usability, neuropolitics, evaluation of food and wine taste, and aesthetic perception of artworks. Furthermore, this work addresses the ethical issues that have arisen around the use of these tools for the evaluation of human behaviour during decision-making tasks. In conclusion, the main challenges that neuromarketing is going to face, as well as future directions and possible scenarios that could be derived from the use of neuroscience in the marketing field, are identified and discussed.

94 citations


Journal ArticleDOI
TL;DR: An overview of the dragonfly heuristic optimization algorithm and its variants is presented; benchmark tests show that its convergence rate is better than that of other algorithms in the literature, such as PSO and GA.
Abstract: One of the most recently developed heuristic optimization algorithms is the dragonfly algorithm, proposed by Mirjalili. The dragonfly algorithm has shown its ability to optimize different real-world problems, and it has three variants. In this work, an overview of the algorithm and its variants is presented. Moreover, the hybridized versions of the algorithm are discussed. Furthermore, the results of applications that utilized the dragonfly algorithm in applied science are offered in the following areas: machine learning, image processing, wireless, and networking. The algorithm is then compared with some other metaheuristic algorithms. In addition, the algorithm is tested on the CEC-C06 2019 benchmark functions. The results show that the algorithm has great exploration ability and that its convergence rate is better than that of other algorithms in the literature, such as PSO and GA. In general, in this survey, the strong and weak points of the algorithm are discussed. Furthermore, some future works that would help to improve the algorithm's weak points are recommended. This study is conducted with the hope of offering useful information about the dragonfly algorithm to researchers who want to study it.

92 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed models can accurately and quickly identify the eleven tomato disease types and segment the locations and shapes of the infected areas.
Abstract: This study develops tomato disease detection methods based on deep convolutional neural networks and object detection models. Two different models, Faster R-CNN and Mask R-CNN, are used in these methods, where Faster R-CNN is used to identify the types of tomato diseases and Mask R-CNN is used to detect and segment the locations and shapes of the infected areas. To select the model that best fits the tomato disease detection task, four different deep convolutional neural networks are combined with the two object detection models. Data are collected from the Internet and the dataset is divided into a training set, a validation set, and a test set used in the experiments. The experimental results show that the proposed models can accurately and quickly identify the eleven tomato disease types and segment the locations and shapes of the infected areas.
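
Both detectors are available off the shelf in torchvision (version >= 0.13 assumed for the weights argument); a plausible setup replaces each pretrained head for the eleven disease classes plus background, following the standard fine-tuning recipe. Only the ResNet50-FPN variants are shown here, whereas the paper pairs the detectors with four different backbones.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 11 + 1        # eleven tomato disease types + background

# Faster R-CNN identifies the disease type of each detected region.
frcnn = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_feats = frcnn.roi_heads.box_predictor.cls_score.in_features
frcnn.roi_heads.box_predictor = FastRCNNPredictor(in_feats, NUM_CLASSES)

# Mask R-CNN additionally segments the shape of the infected area.
mrcnn = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
in_feats = mrcnn.roi_heads.box_predictor.cls_score.in_features
mrcnn.roi_heads.box_predictor = FastRCNNPredictor(in_feats, NUM_CLASSES)
in_mask = mrcnn.roi_heads.mask_predictor.conv5_mask.in_channels
mrcnn.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, NUM_CLASSES)
```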

88 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed convolutional neural network method is efficient for the diagnosis of thyroid diseases with SPECT images and that it outperforms other CNN methods.
Abstract: Thyroid disease has become the second most common disease in the endocrine field; SPECT imaging is particularly important for the clinical diagnosis of thyroid diseases. However, there is little research on the application of SPECT images to the computer-aided diagnosis of thyroid diseases based on machine learning methods. A computer-aided diagnosis method for thyroid diseases using SPECT images, based on a convolutional neural network with optimization, is developed. Three categories of diseases are considered: Graves' disease, Hashimoto disease, and subacute thyroiditis. A modified DenseNet architecture is employed, and the training method is improved. The architecture is modified by adding trainable weight parameters to each skip connection in DenseNet, and the training method is improved by optimizing the learning rate with the flower pollination algorithm. Experimental results demonstrate that the proposed convolutional neural network method is efficient for the diagnosis of thyroid diseases with SPECT images, and its performance is superior to that of other CNN methods.
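
The architectural change described, a trainable weight on every skip connection, can be sketched on a simplified dense block as follows; the layer sizes are illustrative, and in the paper the modification is applied throughout DenseNet. The second improvement, tuning the learning rate with the flower pollination algorithm, would wrap the training loop and is omitted here.

```python
import torch
import torch.nn as nn

class WeightedDenseLayer(nn.Module):
    """Dense-style layer where each incoming skip connection carries a
    trainable scalar weight -- the modification the abstract describes,
    sketched on a simplified dense block rather than the full DenseNet."""
    def __init__(self, n_inputs, in_ch, growth):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))   # one weight per skip
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, growth, 3, padding=1, bias=False))

    def forward(self, feats):          # feats: outputs of all earlier layers
        weighted = [w * f for w, f in zip(self.w, feats)]
        return self.conv(torch.cat(weighted, dim=1))

layer = WeightedDenseLayer(n_inputs=2, in_ch=16, growth=8)
x1 = torch.randn(1, 8, 32, 32)         # two earlier feature maps, 8 ch each
x2 = torch.randn(1, 8, 32, 32)
print(layer([x1, x2]).shape)           # torch.Size([1, 8, 32, 32])
```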

85 citations


Journal ArticleDOI
TL;DR: The KeratoDetect algorithm analyzes the corneal topography of the eye using a convolutional neural network that is able to extract and learn the features of a keratoconus eye and can assist the ophthalmologist in rapid screening of patients, thus reducing diagnostic errors and facilitating treatment.
Abstract: Keratoconus (KTC) is a noninflammatory disorder characterized by progressive thinning, deformation, and scarring of the cornea. The pathological mechanisms of this condition have been investigated for a long time. In recent years, the disease has come to the attention of many research centers because the number of people diagnosed with keratoconus is on the rise. In this context, solutions that facilitate both diagnosis and treatment are urgently needed. The main contribution of this paper is the implementation of an algorithm that is able to determine whether an eye is affected by keratoconus. The KeratoDetect algorithm analyzes the corneal topography of the eye using a convolutional neural network (CNN) that is able to extract and learn the features of a keratoconus eye. The results show that the KeratoDetect algorithm ensures a high level of performance, obtaining an accuracy of 99.33% on the test data set. KeratoDetect can assist the ophthalmologist in the rapid screening of patients, thus reducing diagnostic errors and facilitating treatment.

78 citations


Journal ArticleDOI
TL;DR: The results proved that the proposed modified pretrained model "AlexNet-SVM" can outperform a convolutional neural network created from scratch and the original AlexNet in identifying brain haemorrhage, and that the transfer of knowledge from natural images to medical image classification is possible.
Abstract: In this paper, we address the problem of identifying brain haemorrhage, which is considered a tedious task for radiologists, especially in the early stages of the haemorrhage. The problem is solved using a deep learning approach in which a convolutional neural network (CNN) created from scratch, the well-known AlexNet neural network, and a modified novel version of AlexNet with a support vector machine classifier (AlexNet-SVM) are trained to classify brain computer tomography (CT) images into haemorrhage or nonhaemorrhage images. The aim of employing the deep learning models is to address the primary question in medical image analysis and classification: can sufficient fine-tuning of a pretrained model (transfer learning) eliminate the need to build a CNN from scratch? Moreover, this study also aims to investigate the advantages of using an SVM as a classifier instead of a three-layer neural network. We apply the same classification task to three deep networks: one created from scratch, a pretrained model fine-tuned for the brain CT haemorrhage classification task, and our modified AlexNet model that uses the SVM classifier. The three networks were trained using the same brain CT images. The experiments show that the transfer of knowledge from natural images to medical image classification is possible. In addition, our results proved that the proposed modified pretrained model "AlexNet-SVM" can outperform a convolutional neural network created from scratch and the original AlexNet in identifying brain haemorrhage.
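
A common reconstruction of this kind of AlexNet-SVM pipeline is to freeze the pretrained network, take the 4096-dimensional activations before the final classification layer, and feed them to an SVM; a sketch under those assumptions (the exact cut point and SVM kernel are not stated in the abstract, and the data below are placeholders):

```python
import torch
import torchvision
from sklearn.svm import SVC

# Pretrained AlexNet as a frozen feature extractor (torchvision >= 0.13
# assumed for the `weights` argument).
alexnet = torchvision.models.alexnet(weights="DEFAULT").eval()
# Drop the last Linear layer: what remains ends at the 4096-d fc7 features.
feature_head = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])

@torch.no_grad()
def extract(batch):                    # batch: (N, 3, 224, 224) float tensor
    x = alexnet.features(batch)
    x = alexnet.avgpool(x).flatten(1)
    return feature_head(x).numpy()

# Hypothetical data: preprocessed CT tensors and haemorrhage labels.
X_train = torch.randn(8, 3, 224, 224)
y_train = [0, 1, 0, 1, 1, 0, 0, 1]
svm = SVC(kernel="rbf").fit(extract(X_train), y_train)  # SVM replaces FC head
print(svm.predict(extract(X_train[:2])))
```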

70 citations


Journal ArticleDOI
TL;DR: This is, to the best of the authors' knowledge, the first study to practically apply the modified PCANet technique to EEG-based driving fatigue detection; it also identified that the parietal and occipital lobes of the brain were strongly associated with driving fatigue.
Abstract: The rapid development of the automotive industry has brought great convenience to our lives, but it has also led to a dramatic increase in the number of traffic accidents, a large proportion of which are caused by driving fatigue. EEG is considered a direct, effective, and promising modality for detecting driving fatigue. In this study, we present a novel feature extraction strategy based on a deep learning model to achieve high classification accuracy and efficiency in using EEG for driving fatigue detection. EEG signals were recorded from six healthy volunteers in a simulated driving experiment. The feature extraction strategy was developed by integrating principal component analysis (PCA) and a deep learning model called the PCA network (PCANet). In particular, PCA was used to preprocess the EEG data and reduce its dimension, in order to overcome the dimension explosion caused by PCANet and make this approach feasible for EEG-based driving fatigue detection. Results demonstrated the high and robust performance of the proposed modified PCANet method, with classification accuracy up to 95%, outperforming the conventional feature extraction strategies in the field. We also identified that the parietal and occipital lobes of the brain were strongly associated with driving fatigue. This is, to the best of our knowledge, the first study to practically apply the modified PCANet technique to EEG-based driving fatigue detection.
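
The key step the abstract describes is compressing the EEG with PCA before the PCANet stage, whose filters are themselves principal components of local patches. A rough one-dimensional sketch of both steps on synthetic data (patch length, component counts, and array shapes are all illustrative assumptions):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_raw = rng.standard_normal((120, 4000))      # hypothetical trials x features

# Step 1 (as in the paper): PCA shrinks the raw dimension so the
# PCANet stage below stays tractable.
X = PCA(n_components=64).fit_transform(X_raw)

# Step 2: one PCANet-style stage -- its convolution filters are the
# leading principal components of mean-removed local patches.
k = 8                                          # patch length (illustrative)
patches = sliding_window_view(X, k, axis=1).reshape(-1, k)
patches = patches - patches.mean(axis=1, keepdims=True)
filters = PCA(n_components=4).fit(patches).components_   # 4 filters, length 8

# Filter responses are the learned features handed to a classifier.
feats = np.stack([
    np.apply_along_axis(lambda r: np.convolve(r, f, mode="valid"), 1, X)
    for f in filters])
print(feats.shape)                             # (4, 120, 57)
```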

68 citations


Journal ArticleDOI
TL;DR: The proposed framework DE-CNN has higher accuracy and is less time consuming than the state-of-the-art algorithms, and the performance of the proposed framework is evaluated on five Arabic sentiment datasets.
Abstract: In recent years, the convolutional neural network (CNN) has attracted considerable attention owing to its impressive performance in various applications, such as Arabic sentence classification. However, building a powerful CNN for Arabic sentiment classification can be highly complicated and time consuming. In this paper, we address this problem by combining the differential evolution (DE) algorithm and CNN, where the DE algorithm is used to automatically search for the optimal configuration, including the CNN architecture and network parameters. To this end, five CNN parameters are searched by the DE algorithm: the convolution filter sizes that control the CNN architecture, the number of filters per convolution filter size (NFCS), the number of neurons in the fully connected (FC) layer, the initialization mode, and the dropout rate. In addition, the effect of the mutation and crossover operators in the DE algorithm was investigated. The performance of the proposed DE-CNN framework is evaluated on five Arabic sentiment datasets. Experimental results show that DE-CNN has higher accuracy and is less time consuming than state-of-the-art algorithms.
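
A minimal sketch of the DE search loop (DE/rand/1/bin) over the five parameters the abstract lists; the candidate values in CHOICES and the stub fitness are placeholders for the paper's actual search space and for training the CNN and returning its validation error:

```python
import numpy as np

CHOICES = {                               # illustrative candidate values
    "filter_size": [3, 4, 5, 7],
    "n_filters":   [50, 100, 150, 200],
    "fc_neurons":  [50, 100, 200],
    "init_mode":   ["uniform", "glorot", "he"],
    "dropout":     [0.2, 0.3, 0.5],
}
KEYS = list(CHOICES)

def decode(genome):                       # genes in [0,1) -> discrete choices
    return {k: CHOICES[k][int(g * len(CHOICES[k]))] for k, g in zip(KEYS, genome)}

def fitness(genome):                      # stub: replace with CNN training
    cfg = decode(genome)
    return abs(cfg["dropout"] - 0.3) + abs(cfg["filter_size"] - 5)

rng = np.random.default_rng(1)
NP, D, F, CR = 20, len(KEYS), 0.8, 0.9
pop = rng.random((NP, D))
fit = np.array([fitness(p) for p in pop])
for gen in range(30):                     # DE/rand/1/bin main loop
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3,
                                 replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 0.999)   # mutation
        cross = rng.random(D) < CR                      # binomial crossover
        cross[rng.integers(D)] = True
        trial = np.where(cross, mutant, pop[i])
        ft = fitness(trial)
        if ft <= fit[i]:                                # greedy selection
            pop[i], fit[i] = trial, ft
print(decode(pop[fit.argmin()]))
```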

66 citations


Journal ArticleDOI
TL;DR: The proposed LightFD classifier performs better in real-time EEG mental state prediction, and it is expected to have broad application prospects in practical brain-computer interface (BCI) systems.
Abstract: Fatigue driving can easily lead to road traffic accidents and bring great harm to individuals and families. Recently, electroencephalography- (EEG-) based physiological and brain activities for fatigue detection have been increasingly investigated. However, how to find an effective method or model to timely and efficiently detect the mental states of drivers still remains a challenge. In this paper, we combine the common spatial pattern (CSP) with a proposed light-weighted classifier, LightFD, based on the gradient boosting framework, for EEG mental state identification. Comparisons with traditional classifiers, such as support vector machine (SVM), convolutional neural network (CNN), gated recurrent unit (GRU), and large margin nearest neighbor (LMNN), show that the proposed model achieves better classification performance as well as higher decision efficiency. Furthermore, we also test and validate that LightFD has better transfer learning performance in EEG classification of driver mental states. In summary, our proposed LightFD classifier performs better in real-time EEG mental state prediction, and it is expected to have broad application prospects in practical brain-computer interface (BCI) systems.
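
The abstract names CSP features plus a gradient-boosting classifier; a plausible reconstruction pairs MNE's CSP with LightGBM (this exact pairing is our assumption, not the paper's stated implementation, and the data below are synthetic):

```python
import numpy as np
from mne.decoding import CSP                  # pip install mne
from lightgbm import LGBMClassifier          # pip install lightgbm

# Synthetic epoched EEG: (trials, channels, samples), alert/fatigued labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 512))
y = rng.integers(0, 2, 200)

# CSP learns spatial filters that maximize the variance difference between
# the two mental states; log-variance features come out per trial.
csp = CSP(n_components=6, log=True)
feats = csp.fit_transform(X, y)

# Gradient boosting on the CSP features, standing in for LightFD.
clf = LGBMClassifier(n_estimators=200).fit(feats, y)
print(clf.score(feats, y))
```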

Journal ArticleDOI
TL;DR: The results suggest that alpha power is crucial for isolating a subject from the environment and moving attention from external to internal cues, and emphasize that the emerging use of VR associated with EEG may have important implications for studying brain rhythms and supporting the design of artificial systems.
Abstract: Variations in alpha rhythm have a significant role in perception and attention. Recently, alpha decrease has been associated with externally directed attention, especially in the visual domain, whereas alpha increase has been related to internal processing such as mental arithmetic. However, the role of alpha oscillations and how the different components of a task (processing of external stimuli, internal manipulation/representation, and task demand) interact to affect alpha power are still unclear. Here, we investigate how alpha power is differently modulated by attentional tasks depending both on task difficulty (less/more demanding task) and on the direction of attention (internal/external). To this aim, we designed two experiments that manipulated these aspects differently. Experiment 1, outside Virtual Reality (VR), involved two tasks both requiring internal and external attentional components (intake of visual items for their internal manipulation) but with different internal task demands (arithmetic vs. reading). Experiment 2 took advantage of VR (mimicking an aircraft cabin interior) to manipulate attention direction: it included a condition of VR immersion only, characterized by visual external attention, and a condition of a purely mental arithmetic task during VR immersion, requiring neglect of sensory stimuli. Results show that: (1) in line with previous studies, visual external attention caused a significant alpha decrease, especially in parieto-occipital regions; (2) the alpha decrease was significantly larger during the more demanding arithmetic task when the task was driven by external visual stimuli; (3) alpha dramatically increased during the purely mental task in VR immersion, in which the external stimuli bore no relation to the task. Our results suggest that alpha power is crucial for isolating a subject from the environment and moving attention from external to internal cues. Moreover, they emphasize that the emerging use of VR associated with EEG may have important implications for studying brain rhythms and supporting the design of artificial systems.

Journal ArticleDOI
TL;DR: The results show the effectiveness and efficiency of SBBO compared with BBO variants and other representative algorithms for LSOPs and confirm that the proposed computing resource allocation is vital to the large-scale optimization within the limited computation budget.
Abstract: Biogeography-based optimization (BBO), a recently proposed metaheuristic algorithm, has been successfully applied to many optimization problems due to its simplicity and efficiency. However, BBO is sensitive to the curse of dimensionality; its performance degrades rapidly as the dimensionality of the search space increases. In this paper, a selective migration operator is proposed to scale up the performance of BBO, and we name the result selective BBO (SBBO). The differential migration operator is selected heuristically to explore the global area as far as possible, whilst the normally distributed migration operator is chosen to exploit the local area. By means of heuristic selection, an appropriate migration operator can be used to search for the global optimum efficiently. Moreover, the strategy of cooperative coevolution (CC) is adopted to solve large-scale global optimization problems (LSOPs). To deal with the imbalanced contribution of subgroups to the whole solution in the context of CC, a more efficient computing resource allocation is proposed. Extensive experiments are conducted on the CEC 2010 benchmark suite for large-scale global optimization, and the results show the effectiveness and efficiency of SBBO compared with BBO variants and other representative algorithms for LSOPs. The results also confirm that the proposed computing resource allocation is vital to large-scale optimization within a limited computation budget.
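
The selective migration operator can be sketched as a heuristic switch between a differential move (global exploration) and a normally distributed move (local exploitation); the switching rule on progress and the step scale below are illustrative, not the paper's exact criterion:

```python
import numpy as np

def selective_migration(x_j, pop, progress, rng, sigma=0.1):
    """Migrate features from habitat x_j into the immigrating habitat.
    Early in the run (small `progress`) prefer the differential operator;
    later prefer the normally distributed operator. Both the switch and
    sigma are illustrative stand-ins for the paper's heuristic."""
    a, b = pop[rng.choice(len(pop), 2, replace=False)]
    if rng.random() > progress:                      # explore: differential
        return x_j + rng.random() * (a - b)
    return x_j + sigma * rng.standard_normal(x_j.size)   # exploit: Gaussian

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (30, 10))                   # 30 habitats, 10 SIVs
pop[0] = selective_migration(pop[3], pop, progress=0.2, rng=rng)
print(pop[0])
```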

Journal ArticleDOI
TL;DR: A CNN classifier is implemented to explore the feasibility of a deep learning approach to identifying lymphocytes and ALL subtypes, and this approach is benchmarked against the dominant approach of support vector machines (SVMs) applying handcrafted feature engineering.
Abstract: This paper presents the recognition of WHO-classified acute lymphoblastic leukaemia (ALL) subtypes. The two ALL subtypes considered are T-lymphoblastic leukaemia (pre-T) and B-lymphoblastic leukaemia (pre-B). They exhibit various characteristics which make it difficult to distinguish between subtypes from their mature cells, the lymphocytes. In a common approach, handcrafted features must be well designed for this complex domain-specific problem. With a deep learning approach, handcrafted feature engineering can be eliminated because a deep learning method can automate this task through the multilayer architecture of a convolutional neural network (CNN). In this work, we implement a CNN classifier to explore the feasibility of the deep learning approach for identifying lymphocytes and ALL subtypes, and this approach is benchmarked against the dominant approach of support vector machines (SVMs) applying handcrafted feature engineering. Additionally, two traditional machine learning classifiers, the multilayer perceptron (MLP) and random forest, are also applied for comparison. The experiments show that our CNN classifier delivers better performance in identifying normal lymphocytes and pre-B cells. This shows great potential for image classification without the multiple preprocessing steps required by feature engineering.

Journal ArticleDOI
TL;DR: The final results indicate that the Gaussian process regression method, albeit more time consuming, proved to be more efficient in terms of the mean absolute error (MAE), the root mean square error (RMSE), and the coefficient of determination (R2).
Abstract: Accurate prediction of the seawater intrusion extent is necessary for many applications, such as groundwater management or protection of coastal aquifers from water quality deterioration. However, most applications require a large number of simulations, usually at the expense of prediction accuracy. In this study, the Gaussian process regression method is investigated as a potential surrogate model for the computationally expensive variable density model. Gaussian process regression is a nonparametric, kernel-based probabilistic model able to handle complex relations between input and output. In this study, the extent of seawater intrusion is represented by the location of the 0.5 kg/m3 iso-chlore at the bottom of the aquifer (seawater intrusion toe). The initial position of the toe, expressed as the distance of the specific line from a number of observation points across the coastline, along with the pumping rates, constitutes the surrogate model input, whereas the final position of the toe constitutes the output variable set. The training sample of the surrogate model consists of 4000 variable density simulations, which differ not only in the pumping rate pattern but also in the initial concentration distribution. The Latin hypercube sampling method is used to obtain the pumping rate patterns. For comparison purposes, a number of widely used regression methods are employed, specifically regression trees and support vector machine regression (linear and nonlinear). A Bayesian optimization method is applied to all the regressors to maximize their efficiency in the prediction of seawater intrusion. The final results indicate that the Gaussian process regression method, albeit more time consuming, proved to be more efficient in terms of the mean absolute error (MAE), the root mean square error (RMSE), and the coefficient of determination (R2).
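
A compact sketch of the surrogate pipeline with scikit-learn and SciPy's Latin hypercube sampler, with a cheap stub standing in for the expensive variable-density model; only pumping rates are used as inputs here, whereas the paper also feeds the initial toe position, and all sizes and bounds are illustrative:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

n_wells, n_samples = 8, 500                    # sizes are illustrative
lhs = qmc.LatinHypercube(d=n_wells, seed=0)    # LHS pumping-rate patterns
pumping = qmc.scale(lhs.random(n_samples), [0.0] * n_wells, [50.0] * n_wells)

def variable_density_model(q):
    """Cheap stub standing in for the expensive variable-density code;
    returns a fake toe position per pumping pattern."""
    return 1000.0 - 5.0 * q.sum(axis=1) - 0.1 * (q ** 2).sum(axis=1)

toe = variable_density_model(pumping)

# GP regression surrogate: pumping rates -> seawater intrusion toe.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(pumping, toe)
mean, std = gp.predict(pumping[:5], return_std=True)  # prediction + uncertainty
print(mean.round(1), std.round(3))
```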

Journal ArticleDOI
TL;DR: Deep learning provides an effective approach to predict the SOM content by visible and near-infrared spectroscopy and DenseNet is a promising method for reducing the amount of data preprocessing.
Abstract: Deep learning is characterized by its strong ability to extract data features. This can provide unique advantages when applied to visible and near-infrared spectroscopy for predicting soil organic matter (SOM) content, where the SOM content is negatively correlated with the spectral reflectance of soil. This study relied on the SOM content data of 248 red soil samples and their spectral reflectance data over 400–2450 nm from Fengxin County, Jiangxi Province (China) to meet three objectives. First, a multilayer perceptron and two convolutional neural networks (LeNet5 and DenseNet10) were used to predict the SOM content based on spectral variation and variable selection, and the outcomes were compared with those from the traditional back-propagation neural network (BPN). Second, the four methods were applied to full-spectrum modeling to test the difference from modeling with selected feature variables. Finally, the potential of direct modeling was evaluated using spectral reflectance data without any spectral variation. The prediction results showed that deep learning performed better at predicting the SOM content than the traditional BPN. Based on full-spectrum data, deep learning was able to obtain more feature information, thus achieving better and more stable results (i.e., similar average accuracy and far lower standard deviation) than those obtained through variable selection. DenseNet achieved the best prediction result, with a coefficient of determination (R2) = 0.892 ± 0.004 and a ratio of performance to deviation (RPD) = 3.053 ± 0.056 in validation. Based on DenseNet, the application of spectral reflectance data (without spectral variation) produced robust results for application-level purposes (validation R2 = 0.853 ± 0.007 and validation RPD = 2.639 ± 0.056). In conclusion, deep learning provides an effective approach to predicting the SOM content by visible and near-infrared spectroscopy, and DenseNet is a promising method for reducing the amount of data preprocessing.

Journal ArticleDOI
TL;DR: The proposed detection method based on multifeature fusion of flame could improve the accuracy and reduce the false alarm rate compared with a state-of-the-art technique and can be applied to real-time camera monitoring systems.
Abstract: The threat posed by fires to people's lives and property has become increasingly serious. To address the problem of the high false alarm rate in traditional fire detection, an innovative detection method based on multifeature fusion of flame is proposed. First, we combine motion detection and color detection of the flame as the fire preprocessing stage; this saves considerable computation time in screening the candidate fire pixels. Second, although the flame is irregular, it shows a certain similarity across the image sequence. Based on this feature, a novel algorithm for flame centroid stabilization based on spatiotemporal relations is proposed: we calculate the centroid of the flame region in each frame and add the temporal information to obtain the spatiotemporal trajectory of the flame centroid. Then, we extract features including the spatial, shape, and area variability of the flame to improve the accuracy of recognition. Finally, we use a support vector machine for training, complete the analysis of candidate fire images, and achieve automatic fire monitoring. Experimental results showed that the proposed method could improve the accuracy and reduce the false alarm rate compared with a state-of-the-art technique. The method can be applied to real-time camera monitoring systems, such as home security, forest fire alarms, and commercial monitoring.
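
The centroid-stabilization feature can be sketched directly: compute the flame-region centroid per frame, append the frame index to form the spatiotemporal trajectory, and measure its spread. The synthetic masks and the dispersion statistic below are illustrative stand-ins for the paper's candidate-pixel masks and stability features.

```python
import numpy as np

def flame_centroid(mask):
    """Centroid (row, col) of the candidate flame region in one frame."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return np.array([ys.mean(), xs.mean()])

def centroid_track(masks):
    """Stack per-frame centroids with their frame index: the spatiotemporal
    centroid trajectory whose stability feeds the SVM features."""
    pts = [np.append(c, t) for t, m in enumerate(masks)
           if (c := flame_centroid(m)) is not None]
    return np.array(pts)                    # columns: row, col, frame

frames = [np.zeros((64, 64), bool) for _ in range(5)]
for t, f in enumerate(frames):
    f[30:34, 30 + t:35 + t] = True          # a drifting synthetic flame blob
traj = centroid_track(frames)
stability = traj[:, :2].std(axis=0)         # low spread => stable centroid
print(stability)
```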

Journal ArticleDOI
TL;DR: This paper proposes a novel method that combines the multiband signal decomposition filtering and the CSP-rank channel selection methods to select significant channels, and then linear discriminant analysis (LDA) was used to calculate the classification accuracy.
Abstract: Background. Due to the redundant information contained in multichannel electroencephalogram (EEG) signals, the classification accuracy of brain-computer interface (BCI) systems may deteriorate to a large extent. Channel selection methods can help to remove task-independent EEG signals and hence improve the performance of BCI systems. However, in different frequency bands, the brain areas associated with motor imagery are not exactly the same, which means traditional channel selection methods may fail to extract effective EEG features. New Method. To address this problem, this paper proposes a novel method based on common spatial pattern- (CSP-) rank channel selection for multifrequency band EEG (CSP-R-MF). It combines multiband signal decomposition filtering with the CSP-rank channel selection method to select significant channels, and linear discriminant analysis (LDA) is then used to calculate the classification accuracy. Results. The results showed that our proposed CSP-R-MF method significantly improved the average classification accuracy compared with the CSP-rank channel selection method alone.

Journal ArticleDOI
TL;DR: In this paper, regression analysis was used to design and compare three different fuzzy logic models for predicting software estimation effort: Mamdani, Sugeno, and Sugeno with linear output.
Abstract: Software effort estimation plays a critical role in project management. Erroneous results may lead to overestimating or underestimating effort, which can have catastrophic consequences on project resources. Machine-learning techniques are increasingly popular in the field. Fuzzy logic models, in particular, are widely used to deal with imprecise and inaccurate data. The main goal of this research was to design and compare three different fuzzy logic models for predicting software estimation effort: Mamdani, Sugeno with constant output, and Sugeno with linear output. To assist in the design of the fuzzy logic models, we conducted regression analysis, an approach we call “regression fuzzy logic.” State-of-the-art and unbiased performance evaluation criteria such as standardized accuracy, effect size, and mean balanced relative error were used to evaluate the models, as well as statistical tests. Models were trained and tested using industrial projects from the International Software Benchmarking Standards Group (ISBSG) dataset. Results showed that data heteroscedasticity affected model performance. Fuzzy logic models were found to be very sensitive to outliers. We concluded that when regression analysis was used to design the model, the Sugeno fuzzy inference system with linear output outperformed the other models.
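
To make the model distinction concrete, here is a tiny first-order (linear-output) Sugeno evaluation, the variant the study found best; the Gaussian membership parameters and the linear consequents are illustrative stand-ins for the coefficients the paper's regression analysis would supply:

```python
import numpy as np

def gauss_mf(x, c, s):
    """Gaussian membership function with center c and spread s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_linear(size, complexity):
    """First-order Sugeno inference with two illustrative rules; each rule's
    consequent is a linear function of the inputs (a*size + b*cplx + c)."""
    w1 = gauss_mf(size, 100, 50) * gauss_mf(complexity, 3, 1)   # "small" rule
    w2 = gauss_mf(size, 500, 150) * gauss_mf(complexity, 7, 2)  # "large" rule
    z1 = 0.05 * size + 2.0 * complexity + 1.0    # effort, person-months
    z2 = 0.09 * size + 4.0 * complexity + 5.0
    return (w1 * z1 + w2 * z2) / (w1 + w2)       # weighted-average defuzzifier

print(sugeno_linear(size=250, complexity=5))
```

A zero-order (constant-output) Sugeno model would simply replace z1 and z2 with constants, which is the other variant the paper compares.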

Journal ArticleDOI
TL;DR: A hybrid strategy combining global updating with local updating is developed to design the ACO pheromone updating method, which can overcome the inertia of the ant colony and force it to explore new and better paths.
Abstract: For the problem of mobile robot path planning in a known environment, a path planning method mixing the artificial potential field (APF) and ant colony optimization (ACO) on a grid map is proposed. First, based on the grid model, the APF is improved in three ways: the attraction field, the direction of the resultant force, and jumping out of infinite loops. Then, a hybrid strategy combining global updating with local updating is developed to design the ACO pheromone updating method. The ACO optimization process is divided into two phases. In the early phase, the direction of the resultant force obtained by the improved APF is used as the heuristic factor, which leads the ant colony to move in a directional manner. In the later phase, the heuristic factors are cancelled, and ant colony transitions are based entirely on pheromone updating, which can overcome the inertia of the ant colony and force it to explore new and better paths. Finally, simulation experiments and mobile robot environment experiments are carried out. The experimental results verify that the method has strong stability and environmental adaptability.
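
The resultant APF force that seeds the early ACO phase can be sketched with the textbook attractive/repulsive fields; the paper's three improvements to the APF are not reproduced here, and the gains and influence radius are illustrative:

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=3.0):
    """Resultant APF force at a cell: linear attraction to the goal plus
    repulsion from each obstacle inside influence radius d0."""
    force = k_att * (goal - pos)                       # attractive component
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                              # repulsive component
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * diff
    return force

pos, goal = np.array([2.0, 2.0]), np.array([9.0, 9.0])
obstacles = [np.array([4.0, 4.0]), np.array([6.0, 7.0])]
f = apf_force(pos, goal, obstacles)
heading = f / np.linalg.norm(f)      # this direction seeds the ACO heuristic
print(heading)
```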

Journal ArticleDOI
TL;DR: Brownian motion is used to improve the randomization stage of the dragonfly algorithm, providing up to 90% improvement over the original algorithm in reaching minimum points.
Abstract: The dragonfly algorithm (DA) is one of the optimization techniques developed in recent years. The random flying behavior of dragonflies in nature is modeled in the DA using the Levy flight mechanism (LFM). However, the LFM has disadvantages: its large search steps can overshoot the search area and interrupt the random flights. In this study, Brownian motion is used instead to improve the randomization stage of the DA. The modified DA was applied to 15 single-objective and 6 multiobjective problems and then compared with the original algorithm. The modified DA provided up to 90% improvement over the original algorithm in reaching minimum points. The modified algorithm was also applied to welded beam design, a well-known benchmark problem, and calculated an optimum cost 20% lower.
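
The contrast the paper exploits is between heavy-tailed Levy steps and bounded Gaussian (Brownian) steps; a small sketch using Mantegna's algorithm for the Levy step, with illustrative scales:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Levy flight step via Mantegna's algorithm: heavy-tailed, so
    occasional huge jumps can overshoot the search area."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def brownian_step(dim, scale=0.1):
    """Brownian (Gaussian) step: increments stay bounded in probability,
    which is the property the modified DA relies on."""
    return rng.normal(0.0, scale, dim)

# Heavy tail vs. bounded steps: compare the largest step over 10^4 draws.
print(np.abs(levy_step(10_000)).max(), np.abs(brownian_step(10_000)).max())
```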

Journal ArticleDOI
TL;DR: A novel classification framework and a novel data reduction method to distinguish multiclass motor imagery (MI) electroencephalography (EEG) for brain computer interface (BCI) based on the manifold of covariance matrices in a Riemannian perspective is proposed.
Abstract: This paper proposes a novel classification framework and a novel data reduction method to distinguish multiclass motor imagery (MI) electroencephalography (EEG) for brain-computer interface (BCI) applications, based on the manifold of covariance matrices from a Riemannian perspective. For method 1, a subject-specific decision tree (SSDT) framework with filter geodesic minimum distance to Riemannian mean (FGMDRM) is designed to identify MI tasks and reduce the classification error in the nonseparable region of FGMDRM. Method 2 includes a feature extraction algorithm and a classification algorithm. The feature extraction algorithm combines semisupervised joint mutual information (semi-JMI) with general discriminant analysis (GDA), namely SJGDA, to reduce the dimension of vectors in the Riemannian tangent plane. The classification algorithm replaces the FGMDRM in method 1 with k-nearest neighbor (KNN), and is named SSDT-KNN. When method 2 is applied to BCI competition IV dataset 2a, the kappa value improves from 0.57 (the winner of dataset 2a) to 0.607. Method 2 also obtains a high recognition rate on the other two datasets.
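
Two Riemannian primitives underpin both methods: the affine-invariant geodesic distance used by minimum-distance-to-mean classifiers such as FGMDRM, and the tangent-space projection whose vectors are then reduced by SJGDA. A plain NumPy/SciPy sketch (trial sizes are illustrative):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def geodesic_dist(A, B):
    """Affine-invariant Riemannian distance between SPD covariance
    matrices: || log(A^{-1/2} B A^{-1/2}) ||_F."""
    A_isqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_isqrt @ B @ A_isqrt), "fro")

def tangent_vector(C, Cref):
    """Project covariance C onto the tangent plane at reference Cref;
    such vectors are what SJGDA would reduce before KNN."""
    R_isqrt = fractional_matrix_power(Cref, -0.5)
    return logm(R_isqrt @ C @ R_isqrt)

rng = np.random.default_rng(0)
X = rng.standard_normal((22, 500))        # one MI trial: channels x samples
C = X @ X.T / X.shape[1]                  # sample covariance (SPD)
print(geodesic_dist(C, np.eye(22)))
T = tangent_vector(C, np.eye(22))         # symmetric matrix -> feature vector
print(T.shape)
```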

Journal ArticleDOI
TL;DR: An insight is given into several fields, covering speech production and auditory perception, cognitive aspects of speech communication and language understanding, both speech recognition and text-to-speech synthesis in more detail, and consequently the main directions in the development of spoken dialogue systems.
Abstract: Speech technologies have been developed for decades as a typical signal processing area, while the last decade has brought huge progress based on new machine learning paradigms. Owing not only to their intrinsic complexity but also to their relation with cognitive sciences, speech technologies are now viewed as a prime example of an interdisciplinary knowledge area. This review article on speech signal analysis and processing, the corresponding machine learning algorithms, and applied computational intelligence aims to give an insight into several fields, covering speech production and auditory perception, cognitive aspects of speech communication and language understanding, both speech recognition and text-to-speech synthesis in more detail, and consequently the main directions in the development of spoken dialogue systems. Additionally, the article discusses the concepts and recent advances in speech signal compression, coding, and transmission, including cognitive speech coding. To conclude, the main intention of this article is to highlight recent achievements and challenges based on new machine learning paradigms that, over the last decade, have had an immense impact on the field of speech signal processing.

Journal ArticleDOI
TL;DR: An image processing-based method is proposed for automating the task of pipe corrosion detection; it can be a promising tool to assist building maintenance agents during pipe system surveys.
Abstract: To maintain the serviceability of buildings, owners need to be informed about the current condition of the water supply and waste disposal systems. Therefore, timely and accurate detection of corrosion on pipe surfaces is a crucial task. The conventional manual surveying process performed by human inspectors is notoriously time consuming and labor intensive. Hence, this study proposes an image processing-based method for automating the task of pipe corrosion detection. Image texture features, including statistical measurements of image colors, the gray-level co-occurrence matrix, and gray-level run lengths, are employed to characterize the pipe surface. A support vector machine optimized by differential flower pollination is then used to construct a decision boundary that can recognize corroded and intact pipe surfaces. A dataset consisting of 2000 image samples was collected and utilized to train and test the proposed hybrid model. Experimental results, supported by the Wilcoxon signed-rank test, confirm that the proposed method is highly suitable for the task of interest, with an accuracy rate of 92.81%. Thus, the model proposed in this study can be a promising tool to assist building maintenance agents during pipe system surveys.

Journal ArticleDOI
TL;DR: A hybrid deep neural network scheduler (HDNNS) is proposed to solve job-shop scheduling problems (JSSPs); the results show that the MAKESPAN index of HDNNS is 9% better than that of HNN and 4% better than that of ANN on the ZLP dataset.
Abstract: In this paper, a hybrid deep neural network scheduler (HDNNS) is proposed to solve job-shop scheduling problems (JSSPs). In order to mine the state information of schedule processing, a job-shop scheduling problem is divided into several classification-based subproblems, and a deep learning framework is used for solving these subproblems. HDNNS applies the convolution two-dimensional transformation method (CTDT) to transform irregular scheduling information into regular features, so that the convolution operation of deep learning can be introduced into dealing with JSSPs. The simulation experiments designed for testing HDNNS cover JSSPs with different scales of machines and jobs as well as different time distributions for processing procedures. The results show that the MAKESPAN index of HDNNS is 9% better than that of HNN and 4% better than that of ANN on the ZLP dataset. With the same neural network structure, the training time of the HDNNS method is considerably shorter than that of the DEEPRM method. In addition, the scheduler has excellent generalization performance and can address large-scale scheduling problems with only small-scale training data.

Journal ArticleDOI
TL;DR: A stacked Bidirectional Long Short-Term Memory (BiLSTM) neural network based on the coattention mechanism to extract the interaction between questions and answers, combining cosine similarity and Euclidean distance to score the question and answer sentences is proposed.
Abstract: Deep learning is the crucial technology in intelligent question answering research tasks. Nowadays, extensive studies on question answering have been conducted using deep learning methods. The challenge is that question answering not only requires an effective semantic understanding model to generate a textual representation but also needs to consider the semantic interaction between questions and answers. In this paper, we propose a stacked Bidirectional Long Short-Term Memory (BiLSTM) neural network based on the coattention mechanism to extract the interaction between questions and answers, combining cosine similarity and Euclidean distance to score the question and answer sentences. Experiments are conducted and evaluated on the publicly available Text REtrieval Conference (TREC) 8-13 dataset and the Wiki-QA dataset. Experimental results confirm that the proposed model is efficient; in particular, it achieves a mean average precision (MAP) of 0.7613 and a mean reciprocal rank (MRR) of 0.8401 on the TREC dataset.

Journal ArticleDOI
TL;DR: The db-scan unsupervised learning technique is explored with the goal of using it in the binarization process of continuous swarm intelligence metaheuristic algorithms and shows consistently better results in terms of computation time and quality of the solutions when compared with TFs and random operators.
Abstract: The integration of machine learning techniques and metaheuristic algorithms is an area of interest due to the great potential for applications. In particular, using these hybrid techniques to solve combinatorial optimization problems (COPs) to improve the quality of the solutions and convergence times is of great interest in operations research. In this article, the db-scan unsupervised learning technique is explored with the goal of using it in the binarization process of continuous swarm intelligence metaheuristic algorithms. The contribution of the db-scan operator to the binarization process is analyzed systematically through the design of random operators. Additionally, the behavior of this algorithm is studied and compared with other binarization methods based on clusters and transfer functions (TFs). To verify the results, the well-known set covering problem is addressed, and a real-world problem is solved. The results show that the integration of the db-scan technique produces consistently better results in terms of computation time and quality of the solutions when compared with TFs and random operators. Furthermore, when it is compared with other clustering techniques, we see that it achieves significantly improved convergence times.
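
A rough sketch of the binarization idea as we read it: cluster each continuous solution's component values with DBSCAN and map cluster rank to a transition probability for the corresponding binary variable; the eps, min_samples, and probability ladder below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_binarize(solution, rng, probs=(0.1, 0.5, 0.9)):
    """Cluster the continuous values of one metaheuristic solution, order
    the clusters by mean value, and assign each an increasing transition
    probability used to set the binary variable."""
    vals = solution.reshape(-1, 1)
    labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(vals)
    order = {lab: rank for rank, lab in enumerate(
        sorted(set(labels) - {-1}, key=lambda l: vals[labels == l].mean()))}
    binary = np.zeros_like(solution, dtype=int)
    for idx, lab in enumerate(labels):
        # noise points (label -1) get the top probability here (a choice)
        p = probs[-1] if lab == -1 else probs[min(order[lab], len(probs) - 1)]
        binary[idx] = int(rng.random() < p)
    return binary

rng = np.random.default_rng(0)
x = rng.random(20)                 # continuous swarm-intelligence position
print(dbscan_binarize(x, rng))
```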

Journal ArticleDOI
TL;DR: The spiking cerebellar model was able to reproduce in the robotic platform how biological systems deal with external sources of error, in both ideal and real (noisy) environments.
Abstract: A bioinspired adaptive model, developed by means of a spiking neural network made of thousands of artificial neurons, has been leveraged to control a humanoid NAO robot in real time. The learning properties of the system were challenged in a classic cerebellum-driven paradigm, a perturbed upper-limb reaching protocol. The neurophysiological principles used to develop the model succeeded in driving an adaptive motor control protocol with baseline, acquisition, and extinction phases. In the acquisition phase, the spiking neural network model showed learning behaviours similar to those experimentally measured with human subjects in the same task, while it resorted to other strategies in the extinction phase. The model processed external inputs, encoded as spikes, in real time, and the spiking activity generated by its output neurons was decoded in order to provide the proper correction to the motor actuators. Three bidirectional long-term plasticity rules were embedded for different connections and with different time scales. These plasticities shaped the firing activity of the output-layer neurons of the network. In the perturbed upper-limb reaching protocol, the neurorobot successfully learned how to compensate for the external perturbation by generating an appropriate correction. Therefore, the spiking cerebellar model was able to reproduce in the robotic platform how biological systems deal with external sources of error, in both ideal and real (noisy) environments.

Journal ArticleDOI
TL;DR: It is demonstrated that the speller size is an important parameter to consider in improving the usability of P300 BCI for communication purposes, measured in terms of effectiveness, efficiency, and satisfaction under overt and covert attention conditions.
Abstract: The vast majority of P300-based brain-computer interface (BCI) systems are based on the well-known P300 speller presented by Farwell and Donchin for communication purposes, as an alternative for people with neuromuscular disabilities, such as impaired eye movement. The purpose of the present work is to study the effect of speller size on P300-based BCI usability, measured in terms of effectiveness, efficiency, and satisfaction under overt and covert attention conditions. To this end, twelve participants used three speller sizes under both attentional conditions to spell 12 symbols. The results indicated that speller size had a significant influence on performance in both attentional conditions. In both conditions (covert and overt), the best performances were obtained with the small and medium speller sizes, which were the most effective. Speller size did not significantly affect workload. In contrast, the covert attention condition produced a very high workload due to the increased resources expended to complete the task. Regarding users' preferences, significant differences were obtained between speller sizes: the small speller size was considered the most complex, the most stressful, the least comfortable, and the most tiring, whereas the medium speller size was always ranked in the middle and was the size least frequently evaluated as the worst in each dimension. In this sense, the medium and large speller sizes were considered the most satisfactory. Finally, the medium speller size was the one for which all three standard dimensions were met: high effectiveness, high efficiency, and high satisfaction. This work demonstrates that speller size is an important parameter to consider in improving the usability of P300 BCIs for communication purposes. The obtained results showed that, using the proposed medium speller size, performance and satisfaction could be improved.

Journal ArticleDOI
TL;DR: M2D CNN, a novel multichannel 2D CNN model, is proposed to classify 3D fMRI data; it achieves the highest accuracy and alleviates data overfitting owing to its smaller number of parameters compared with 3D CNN.
Abstract: Deep learning models have been successfully applied to the analysis of various functional MRI data. Convolutional neural networks (CNN), a class of deep neural networks, have been found to excel at extracting local meaningful features based on their shared-weights architecture and space invariance characteristics. In this study, we propose M2D CNN, a novel multichannel 2D CNN model, to classify 3D fMRI data. The model uses sliced 2D fMRI data as input and integrates multichannel information learned from 2D CNN networks. We experimentally compared the proposed M2D CNN against several widely used models, including SVM, 1D CNN, 2D CNN, 3D CNN, and 3D separable CNN, with respect to their performance in classifying task-based fMRI data. We tested M2D CNN against six benchmark models on a large number of time-series whole-brain imaging data based on a motor task in the Human Connectome Project (HCP). The results of our experiments demonstrate the following: (i) convolution operations in the CNN models are advantageous for high-dimensional whole-brain imaging data classification, as all CNN models outperform SVM; (ii) 3D CNN models achieve higher accuracy than the 2D CNN and 1D CNN models, but 3D CNN models are computationally costly as an extra dimension is added to the input; (iii) the M2D CNN model proposed in this study achieves the highest accuracy and alleviates data overfitting given its smaller number of parameters as compared with 3D CNN.
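
A minimal sketch of the slices-as-channels idea behind M2D CNN as we read the abstract: each 2D branch takes a stack of fMRI slices in its channel dimension, and the branch outputs are fused before the classifier. The branch count, slab size, and all layer widths are our illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class M2DBranch(nn.Module):
    """One 2D branch: a slab of fMRI slices enters as the channel
    dimension, so all convolutions stay 2D (hence fewer parameters
    than a full 3D CNN)."""
    def __init__(self, n_slices, n_feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_slices, n_feat, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(n_feat, n_feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):                    # x: (batch, n_slices, H, W)
        return self.net(x)

branches = nn.ModuleList([M2DBranch(30) for _ in range(3)])  # e.g. 3 slabs
head = nn.Linear(3 * 32, 5)                  # fuse branches -> 5 motor classes
x = torch.randn(2, 30, 64, 64)               # two volumes, 30 slices each
logits = head(torch.cat([b(x) for b in branches], dim=1))
print(logits.shape)                          # torch.Size([2, 5])
```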