
Showing papers in "Computational Intelligence and Neuroscience in 2015"


Journal ArticleDOI
TL;DR: The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand.
Abstract: Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand.

252 citations
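For orientation, the Dice overlap that anchors the ranking above can be computed per tissue class as in the short sketch below; the label maps here are random placeholders (not MRBrainS data), and the label coding is an assumption.

```python
import numpy as np

def dice(pred, ref):
    """Dice overlap between two binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Per-tissue Dice for GM, WM, and CSF label maps (label values assumed here).
labels = {"GM": 1, "WM": 2, "CSF": 3}
pred_map = np.random.randint(0, 4, size=(10, 10, 10))  # stand-in for an algorithm's output
ref_map = np.random.randint(0, 4, size=(10, 10, 10))   # stand-in for the manual reference
print({name: round(dice(pred_map == lab, ref_map == lab), 3) for name, lab in labels.items()})
```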


Journal ArticleDOI
TL;DR: A novel sentiment analysis model is proposed, based on common-sense knowledge extracted from a ConceptNet-based ontology and on context information; experimental results show the effectiveness of the proposed methods.
Abstract: Sentiment analysis research has been increasing tremendously in recent times due to the wide range of business and social applications. Sentiment analysis from unstructured natural language text has recently received considerable attention from the research community. In this paper, we propose a novel sentiment analysis model based on common-sense knowledge extracted from a ConceptNet-based ontology and on context information. The ConceptNet-based ontology is used to determine the domain-specific concepts, which in turn yield the important domain-specific features. Further, the polarities of the extracted concepts are determined using the contextual polarity lexicon that we developed by considering the context information of a word. Finally, the semantic orientations of the domain-specific features of the review document are aggregated based on the importance of each feature with respect to the domain. The importance of a feature is determined by its depth in the ontology. Experimental results show the effectiveness of the proposed methods.

158 citations
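The entry above does not spell out its aggregation formula, so the sketch below only illustrates the underlying idea: weight each concept's contextual polarity by its depth in the ontology, then average. The concepts, depths, and polarities are invented for the example.

```python
# Hypothetical records extracted for one review: (concept, ontology_depth, polarity in [-1, 1]).
features = [("battery life", 3, 0.8), ("screen", 2, 0.6), ("price", 1, -0.4)]

def document_orientation(features):
    """Depth-weighted average of concept polarities: deeper concepts are more
    domain-specific and therefore count more (an assumed weighting scheme)."""
    total_weight = sum(depth for _, depth, _ in features)
    if not total_weight:
        return 0.0
    return sum(depth * polarity for _, depth, polarity in features) / total_weight

print("positive" if document_orientation(features) > 0 else "negative")
```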


Journal ArticleDOI
TL;DR: A methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization, SGPSO, and a New Model of PSO called NMPSO is presented.
Abstract: Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.

134 citations
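The paper proposes eight specific fitness functions; as a rough, assumed illustration of how such a function can blend the mean square error, the classification error, and a penalty on the number of active connections:

```python
def fitness(mse, cer, n_connections, max_connections, alpha=0.5, beta=0.1):
    """Illustrative fitness for one candidate ANN design: lower is better.
    alpha balances MSE against CER; beta penalizes dense architectures.
    The weights and the exact form are assumptions, not the paper's formulas."""
    return alpha * mse + (1.0 - alpha) * cer + beta * (n_connections / max_connections)

print(fitness(mse=0.12, cer=0.08, n_connections=35, max_connections=100))
```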


Journal ArticleDOI
Yoonseok Shin
TL;DR: A boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain.
Abstract: Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of the NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project.

111 citations
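As a stand-in for the boosting regression tree described above, scikit-learn's gradient boosting regressor reproduces the workflow (fit on early-stage attributes, report held-out accuracy and feature importances); the project attributes and costs below are synthetic, not the paper's 234 datasets.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (234, 5))                                 # synthetic early-stage project attributes
y = 100 + 80 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, 234)   # synthetic construction costs

brt = GradientBoostingRegressor(n_estimators=200, max_depth=3)
brt.fit(X[:200], y[:200])
print(brt.score(X[200:], y[200:]))       # R^2 on held-out projects
print(brt.feature_importances_)          # the kind of information behind an importance plot
```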


Journal ArticleDOI
TL;DR: The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction accuracy among all five models proposed in the study on roads with multiple bus routes.
Abstract: Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses, addressing the case of roads served by multiple bus routes, is proposed in this paper, based on support vector machines (SVMs) and a Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm then adjusts the bus travel times using the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with real-world data from roads with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction accuracy among all five models proposed in the study.

81 citations
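A minimal sketch of the dynamic adjustment step, assuming a scalar random-walk Kalman filter: the SVM baseline seeds the estimate, and each newly observed travel time on the link nudges it. The noise variances q and r are illustrative, not the paper's values.

```python
def kalman_update(x_est, p_est, z, q=1.0, r=4.0):
    """One scalar Kalman step: x_est/p_est are the current travel-time estimate and
    its variance, z is the latest observed travel time on the segment."""
    p_pred = p_est + q                    # predict (random-walk process model)
    k = p_pred / (p_pred + r)             # Kalman gain
    x_new = x_est + k * (z - x_est)       # correct with the newest observation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 310.0, 10.0                        # SVM baseline travel time (s) and its variance
for z in [325.0, 318.0, 330.0]:           # latest buses traversing the same segment
    x, p = kalman_update(x, p, z)
print(round(x, 1))
```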


Journal ArticleDOI
TL;DR: The essential theory and applications of the HS algorithm are described and reviewed, and a modified HS method inspired by the idea of Pareto-dominance-based ranking is presented to handle a practical wind generator optimal design problem.
Abstract: The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.

80 citations
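For readers new to the method, a minimal Harmony Search loop looks roughly like the sketch below (harmony memory, memory consideration rate hmcr, pitch adjustment rate par, bandwidth bw); parameter values are illustrative, and the Pareto-dominance-based modification discussed in the paper is not shown.

```python
import random

def harmony_search(obj, dim=2, bounds=(-5.0, 5.0), hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000):
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:              # pitch adjustment
                    value += random.uniform(-bw, bw)
            else:                                      # random selection
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(memory, key=obj)                   # replace the worst harmony if improved
        if obj(new) < obj(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=obj)

print(harmony_search(lambda x: sum(v * v for v in x)))  # sphere test function
```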


Journal ArticleDOI
TL;DR: This paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN), and shows that the method is accurate and effective in predicting resource demands.
Abstract: In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurate prediction of resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. The structure and learning algorithm of the fuzzy neural network are then studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with the subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands.

63 citations


Journal ArticleDOI
TL;DR: This work performs several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information, increasing the neurological plausibility of random permutations and highlighting their utility in vector space models of semantics.
Abstract: Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.

62 citations
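The two binding operators compared above can be demonstrated in a few lines; the sketch below binds one pair with circular convolution (via the FFT) and with a random permutation, then decodes both traces. The vector dimensionality and the clean (noise-free) decoding steps are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
a = rng.normal(0, 1 / np.sqrt(n), n)
b = rng.normal(0, 1 / np.sqrt(n), n)

# Binding by circular convolution (holographic reduced representation style).
conv_trace = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Binding by random permutation: permute one item and superpose.
perm = rng.permutation(n)
perm_trace = a + b[perm]

# Decode the convolution trace with circular correlation (the approximate inverse).
decoded = np.real(np.fft.ifft(np.fft.fft(conv_trace) * np.conj(np.fft.fft(a))))
print(round(float(np.dot(decoded, b)), 3))     # high similarity: a noisy copy of b

# Decode the permutation trace: remove a, undo the permutation, compare with b.
inverse = np.argsort(perm)
retrieved = (perm_trace - a)[inverse]
print(round(float(np.dot(retrieved, b)), 3))   # ~1.0: b is recovered from the trace
```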


Journal ArticleDOI
TL;DR: A strategy to convert a network configuration between different activation functions without altering the network mapping capabilities will be presented.
Abstract: The problem of choosing a suitable activation function for the hidden layer of a feed-forward neural network has been widely investigated, and this paper provides a comprehensive review of it. Since the nonlinear component of a neural network is the main contributor to the network's mapping capabilities, the different choices that may lead to enhanced performance, in terms of training, generalization, or computational cost, are analyzed, both in general-purpose and in embedded computing environments. Finally, a strategy to convert a network configuration between different activation functions without altering the network mapping capabilities is presented.

62 citations
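The conversion strategy mentioned at the end can be made concrete for the logistic-to-tanh case using the identity logistic(z) = 0.5 * tanh(z / 2) + 0.5: halve the hidden pre-activations and let the output layer absorb the affine factor, so the mapping is unchanged. The sketch below (random weights, assumed two-layer architecture) verifies this numerically; it is one instance of the general strategy, not the paper's full procedure.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)   # hidden layer (logistic units)
V, c = rng.normal(size=(2, 4)), rng.normal(size=2)   # linear output layer

# Convert: logistic(z) = 0.5 * tanh(z / 2) + 0.5, so rescale both layers accordingly.
W2, b2 = 0.5 * W, 0.5 * b
V2, c2 = 0.5 * V, c + 0.5 * V.sum(axis=1)

x = rng.normal(size=3)
y_logistic = V @ logistic(W @ x + b) + c
y_tanh = V2 @ np.tanh(W2 @ x + b2) + c2
print(np.allclose(y_logistic, y_tanh))   # True: same mapping, different activation
```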


Journal ArticleDOI
TL;DR: The velocity vector and position vector of GSA are adjusted by PSO algorithm in order to improve its convergence speed and prediction accuracy and the proposed hybrid algorithm is adopted to optimize the parameters of FNN soft-sensor model.
Abstract: For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. So in this paper, the velocity vector and position vector of GSA are adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process.

61 citations
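The adjustment of GSA's velocities by PSO is commonly written as a velocity update that mixes the gravitational acceleration with a social pull toward the global best (as in the widely used PSOGSA formulation); whether this matches the exact update in the paper above is an assumption, and the numbers below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, dim = 5, 3
X = rng.uniform(-1, 1, (n_agents, dim))    # positions, i.e. candidate FNN parameters
V = np.zeros((n_agents, dim))              # velocities
acc = rng.normal(size=(n_agents, dim))     # gravitational accelerations computed by GSA
gbest = X[0]                               # best position found so far
w, c1, c2 = 0.6, 0.5, 1.5                  # inertia and acceleration coefficients (assumed)

# Hybrid update: the GSA term keeps exploration, while the PSO social term (gbest)
# pulls agents toward the best known solution to speed up convergence.
V = w * V + c1 * rng.random((n_agents, dim)) * acc \
          + c2 * rng.random((n_agents, dim)) * (gbest - X)
X = X + V
print(X[0])
```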


Journal ArticleDOI
TL;DR: The firefly algorithm is employed to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier, called the firefly-based SVM (firefly-SVM).
Abstract: The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). Feature selection is not considered in this tool, because an SVM combined with feature selection is not well suited to multiclass classification, especially the one-against-all multiclass SVM. In the experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten benchmark data sets from the University of California, Irvine (UCI) machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared with the original LIBSVM method combined with grid search and with the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification tasks requiring maximum accuracy.

Journal ArticleDOI
TL;DR: This paper develops a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment and shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand.
Abstract: Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. Particularly, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand.

Journal ArticleDOI
TL;DR: The ERP results may suggest that decision making under ambiguity occupies more working memory and recalls more past experience, while decision making under risk mainly mobilizes attentional resources to calculate current information.
Abstract: Our study aims to contrast the neural temporal features of the early stage of decision making in the context of risk and ambiguity. In monetary gambles under ambiguous or risky conditions, 12 participants were asked to make a decision to bet or not, with event-related potentials (ERPs) recorded in the meantime. The proportion of choosing to bet in the ambiguous condition was significantly lower than that in the risky condition. An ERP component identified as P300 was found. The P300 amplitude elicited in the risky condition was significantly larger than that in the ambiguous condition. The lower bet rate in the ambiguous condition and the smaller P300 amplitude elicited by ambiguous stimuli revealed that people showed much more aversion in the ambiguous condition than in the risky condition. The ERP results may suggest that decision making under ambiguity occupies more working memory and recalls more past experience, while decision making under risk mainly mobilizes attentional resources to calculate current information. These findings extend the current understanding of the underlying mechanism of the early assessment stage of decision making and explore the difference between decision making under risk and ambiguity.

Journal ArticleDOI
TL;DR: In this paper, a log-determinant (LogDet) function is proposed as a smooth and closer approximation to rank for obtaining a low-rank representation in subspace clustering, and augmented Lagrange multipliers are applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data.
Abstract: Low-rank matrix is desired in many machine learning and computer vision problems. Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. Augmented Lagrange multipliers strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of principal directions of the resultant low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
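The motivation for replacing the nuclear norm can be seen directly on the singular values: the nuclear norm adds them linearly, while a LogDet-style surrogate grows only logarithmically and therefore sits much closer to the rank. The sketch below uses one common form, the sum of log(1 + sigma_i^2); the paper's exact parameterization may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.normal(size=(50, 5)), rng.normal(size=(5, 50))
X = A @ B                                      # an exactly rank-5 matrix
sigma = np.linalg.svd(X, compute_uv=False)

rank = int(np.sum(sigma > 1e-8))
nuclear = np.sum(sigma)                        # nuclear norm: adds singular values linearly
logdet = np.sum(np.log(1.0 + sigma ** 2))      # LogDet surrogate: saturates for large values

print(rank, round(float(nuclear), 2), round(float(logdet), 2))
```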

Journal ArticleDOI
TL;DR: This paper proposes a novel tensor-based imputation approach that is able to recover missing entries from given entries, which may be noisy, considering the severe fluctuation of traffic speed data compared with traffic volume.
Abstract: Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, a tensor pattern is adopted for modeling traffic speed data, and then High accuracy Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. The proposed method is able to recover missing entries from given entries, which may be noisy, considering the severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on the Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.

Journal ArticleDOI
TL;DR: A feature selection technique using backward stepwise linear regression together with linear discriminant analysis is designed to classify cognitive normal (CN) subjects, early MCI (EMCI), late MCI (LMCI), and AD subjects in an exhaustive two-group classification process, showing a dominance of neuropsychological parameters like MMSE and RAVLT.
Abstract: Brain atrophy in mild cognitive impairment (MCI) and Alzheimer's disease (AD) are difficult to demarcate to assess the progression of AD. This study presents a statistical framework on the basis of MRI volumes and neuropsychological scores. A feature selection technique using backward stepwise linear regression together with linear discriminant analysis is designed to classify cognitive normal (CN) subjects, early MCI (EMCI), late MCI (LMCI), and AD subjects in an exhaustive two-group classification process. Results show a dominance of the neuropsychological parameters like MMSE and RAVLT. Cortical volumetric measures of the temporal, parietal, and cingulate regions are found to be significant classification factors. Moreover, an asymmetrical distribution of the volumetric measures across hemispheres is seen for CN versus EMCI and EMCI versus AD, showing dominance of the right hemisphere; whereas CN versus LMCI and EMCI versus LMCI show dominance of the left hemisphere. A 2-fold cross-validation showed an average accuracy of 93.9%, 90.8%, and 94.5%, for the CN versus AD, CN versus LMCI, and EMCI versus AD, respectively. The accuracy for groups that are difficult to differentiate like EMCI versus LMCI was 73.6%. With the inclusion of the neuropsychological scores, a significant improvement (24.59%) was obtained over using MRI measures alone.
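The classification backbone (linear discriminant analysis with 2-fold cross-validation) can be reproduced in a few lines with scikit-learn; the feature matrix below is synthetic stand-in data for two of the groups, not ADNI measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
# Synthetic subjects: 6 features each (e.g. regional volumes plus MMSE/RAVLT-style scores).
X = np.vstack([rng.normal(0.0, 1.0, (40, 6)),      # label 0, e.g. CN
               rng.normal(1.0, 1.0, (40, 6))])     # label 1, e.g. AD
y = np.r_[np.zeros(40), np.ones(40)]

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=2)          # 2-fold cross-validation, as in the study
print(round(scores.mean(), 3))
```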

Journal ArticleDOI
TL;DR: This paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications and evaluates the performance of these networks from the aspects of accuracy in classification and efficiency in computation.
Abstract: Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.
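MapReduce itself is not reproduced here; the sketch below only mimics the map/reduce data flow in plain Python for a single logistic unit, to show how per-partition gradient sums from the mappers are combined by a reducer before one weight update. The partitioning scheme and learning rate are illustrative.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(5)
X, y = rng.normal(size=(1200, 3)), rng.integers(0, 2, 1200)
w = np.zeros(3)

def map_gradient(partition, w):
    """Mapper: gradient contribution of one data split for a single logistic unit."""
    Xp, yp = partition
    pred = 1.0 / (1.0 + np.exp(-Xp @ w))
    return Xp.T @ (pred - yp), len(yp)

partitions = [(X[i::4], y[i::4]) for i in range(4)]                     # four "splits"
mapped = [map_gradient(p, w) for p in partitions]                       # map (conceptually parallel)
grad_sum, n = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), mapped)   # reduce
w -= 0.1 * grad_sum / n                                                 # one global update
print(w)
```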

Journal ArticleDOI
TL;DR: This paper presents real-time control of a video game with eye movements, for an asynchronous and noninvasive communication system using two temporal EEG sensors; wavelets are used to detect the instance of eye movement and time-series characteristics to distinguish between six classes of eye movements.
Abstract: EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practicing, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, for an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instance of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with opened and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.

Journal ArticleDOI
TL;DR: A novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries and it is validated that the proposed approach can predict the trend of battery capacity trajectory closely and estimate the battery RUL accurately.
Abstract: Lithium-ion batteries are widely used in many electronic systems. Therefore, it is important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. Experiments on the capacity prognostics of battery 5 and battery 18 are conducted and validate that the proposed approach can predict the trend of the battery capacity trajectory closely and estimate the battery RUL accurately.

Journal ArticleDOI
TL;DR: This study presents an approach to decision making under Z-information based on direct computation over Z-numbers and utilizes expected utility paradigm and is applied to a benchmark decision problem in the field of economics.
Abstract: Real-world decision relevant information is often partially reliable. The reasons are partial reliability of the source of information, misperceptions, psychological biases, incompetence, and so forth. Z-numbers based formalization of information (Z-information) represents a natural language (NL) based value of a variable of interest in line with the related NL based reliability. What is important is that Z-information not only is the most general representation of real-world imperfect information but also has the highest descriptive power from human perception point of view as compared to fuzzy number. In this study, we present an approach to decision making under Z-information based on direct computation over Z-numbers. This approach utilizes expected utility paradigm and is applied to a benchmark decision problem in the field of economics.

Journal ArticleDOI
TL;DR: In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed.
Abstract: This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multi-output (MIMO) system (TRMS) considering most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed.

Journal ArticleDOI
TL;DR: The possibility of improving the overall performance using individual query expansion terms selection methods has been explored and the semantic similarity approach is used to select semantically similar terms with the query after applying Borda count ranks combining approach.
Abstract: Pseudo-Relevance Feedback (PRF) is a well-known method of query expansion for improving the performance of information retrieval systems. All the terms of PRF documents are not important for expanding the user query. Therefore selection of proper expansion term is very important for improving system performance. Individual query expansion terms selection methods have been widely investigated for improving its performance. Every individual expansion term selection method has its own weaknesses and strengths. To overcome the weaknesses and to utilize the strengths of the individual method, we used multiple terms selection methods together. In this paper, first the possibility of improving the overall performance using individual query expansion terms selection methods has been explored. Second, Borda count rank aggregation approach is used for combining multiple query expansion terms selection methods. Third, the semantic similarity approach is used to select semantically similar terms with the query after applying Borda count ranks combining approach. Our experimental results demonstrated that our proposed approaches achieved a significant improvement over individual terms selection method and related state-of-the-art methods.
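The Borda count combination step works on the ranked term lists returned by the individual selection methods; a minimal sketch (with invented candidate terms) is shown below. The semantic-similarity filtering applied afterwards is not included.

```python
from collections import defaultdict

# Ranked candidate expansion terms from three hypothetical selection methods.
rankings = [
    ["neural", "network", "deep", "training"],
    ["network", "neural", "training", "model"],
    ["deep", "neural", "network", "model"],
]

def borda(rankings):
    """Borda count: a term ranked r-th in a list of length n earns n - r points;
    points are summed across methods and terms are re-ranked by total score."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for r, term in enumerate(ranking):
            scores[term] += n - r
    return sorted(scores, key=scores.get, reverse=True)

print(borda(rankings))
```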

Journal ArticleDOI
TL;DR: In this proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems, and the optimal solution is obtained by the dual simplex method.
Abstract: An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, and the optimal solution of the fractional fuzzy transportation problem is then obtained. The proposed method is explained in detail with an example.

Journal ArticleDOI
TL;DR: A new technique using a modified measure and blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance image adaptively is presented.
Abstract: Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low contrast images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
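The role of the incomplete Beta function as the global intensity transformation can be illustrated as below, with SciPy's regularized incomplete Beta function standing in for the paper's transform; the (a, b) values are fixed by hand here, whereas in the method they would be tuned by the CS-PSO search, and the image is synthetic.

```python
import numpy as np
from scipy.special import betainc

def beta_transform(image, a, b):
    """Normalize intensities to [0, 1] and map them through the regularized
    incomplete Beta function; (a, b) shape the contrast-stretching curve."""
    x = (image - image.min()) / (image.max() - image.min() + 1e-12)
    return betainc(a, b, x)

rng = np.random.default_rng(6)
low_contrast = 0.4 + 0.1 * rng.random((64, 64))    # stand-in for a low-contrast image
enhanced = beta_transform(low_contrast, a=2.0, b=2.0)
print(round(float(low_contrast.std()), 3), round(float(enhanced.std()), 3))  # spread grows
```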

Journal ArticleDOI
TL;DR: The results showed that the GA-based models are able to significantly outperform the benchmark and the proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied.
Abstract: Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.

Journal ArticleDOI
TL;DR: The proposed hybrid GA-PSO based K-means clustering technique has been used to distinguish two-class motor imagery (MI) tasks and is found to outperform genetic algorithm (GA) and particle swarm optimization (PSO) based K-means clustering techniques in terms of both accuracy and execution time.
Abstract: Transferring the brain-computer interface (BCI) from laboratory conditions to real-world applications requires BCI to be applied asynchronously, without any time constraint. The high level of dynamism in the electroencephalogram (EEG) signal leads us to look toward evolutionary algorithms (EAs). Motivated by these two facts, in this work a hybrid GA-PSO based K-means clustering technique has been used to distinguish two-class motor imagery (MI) tasks. The proposed hybrid GA-PSO based K-means clustering is found to outperform genetic algorithm (GA) and particle swarm optimization (PSO) based K-means clustering techniques in terms of both accuracy and execution time. The lower execution time of the hybrid GA-PSO technique makes it suitable for real-time BCI applications. Time-frequency representation (TFR) techniques have been used to extract the features of the signal under investigation. TFR-based features are extracted and, relying on the concept of event-related synchronization (ERS) and desynchronization (ERD), the feature vector is formed.

Journal ArticleDOI
TL;DR: An improved QPSO algorithm is presented, called IQPSO, by combining QPSO with an opposition-based learning concept, which achieves better results than QPSO and other variants of PSO on the majority of test problems.
Abstract: An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from the likely local optima and guide the swarm to perform more efficient search. During the iterative optimization process of EB-QPSO, when criteria met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The new generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm for further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively in all of the benchmark functions in terms of better global search capability and faster convergence rate.

Journal ArticleDOI
TL;DR: An improved teaching-learning-based optimization algorithm is presented, which introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and learner phase.
Abstract: The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, which is called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and learner phase. The proposed algorithm is tested on a number of benchmark functions, and performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some other algorithms as well.

Journal ArticleDOI
TL;DR: The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for optimization of both low and high dimensional functions.
Abstract: A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intensive stochastic search method based on evaluation and optimization of a hypercube, and it is called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes the initial solution and evaluates the solutions in the given hypercube. The displacement-shrink process determines displacements and evaluates objective functions using new points, and the searching space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes have been designed and are presented in the paper. The designed HO algorithm is tested on specific benchmark functions. The simulations of the HO algorithm have been performed for optimization of functions of 1000, 5000, or even 10000 dimensions. The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for the optimization of both low and high dimensional functions.

Journal ArticleDOI
TL;DR: An architecture is proposed for an intelligent surveillance robot that is able to avoid obstacles using 3 ultrasonic distance sensors and a backpropagation neural network, with a camera for face recognition and a transmitter for transmitting video.
Abstract: For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using 3 ultrasonic distance sensors and a backpropagation neural network, with a camera for face recognition. A 2.4 GHz transmitter for transmitting video is used by the operator/user to direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system.
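To make the obstacle-avoidance component concrete, the sketch below runs a forward pass of a small backpropagation-style network that maps three normalized ultrasonic distances to a steering command; the architecture, weights, and command set are placeholders rather than the robot's trained network.

```python
import numpy as np

def steer(distances, W1, b1, W2, b2):
    """Forward pass: 3 normalized ultrasonic distances in, one of three commands out.
    In the real robot the weights would come from offline backpropagation training."""
    h = np.tanh(W1 @ distances + b1)
    out = W2 @ h + b2
    return ["turn left", "go forward", "turn right"][int(np.argmax(out))]

rng = np.random.default_rng(7)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)   # placeholder weights
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
print(steer(np.array([0.2, 0.9, 0.7]), W1, b1, W2, b2))
```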