
Showing papers in "Memetic Computing in 2009"


Journal ArticleDOI
TL;DR: The scale factor local search differential evolution (SFLSDE) is compared with a standard DE and three other modern DE-based metaheuristics on a large and varied set of test problems; its efficiency seems to be very high, especially for large-scale problems and complex fitness landscapes.
Abstract: This paper proposes the scale factor local search differential evolution (SFLSDE). The SFLSDE is a differential evolution (DE) based memetic algorithm which employs, within a self-adaptive scheme, two local search algorithms. These local search algorithms aim at detecting a value of the scale factor corresponding to a high-performance offspring while the generation is executed. The local search algorithms thus assist the global search and generate high-performance offspring which are subsequently expected to promote the generation of enhanced solutions within the evolutionary framework. Despite its simplicity, the proposed algorithm seems to perform very well on various test problems. Numerical results are shown in order to justify the use of a double local search instead of a single search. In addition, the SFLSDE has been compared with a standard DE and three other modern DE-based metaheuristics for a large and varied set of test problems. Numerical results are given for relatively low and high dimensional cases. A statistical analysis of the optimization results has been included in order to compare the results in terms of final solution detected and convergence speed. The efficiency of the proposed algorithm seems to be very high, especially for large-scale problems and complex fitness landscapes.
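The core idea, treating the scale factor F as a search variable rather than a fixed constant, can be sketched as follows. This is an illustrative simplification, not the paper's exact SFLSDE: it evaluates a few candidate values of F on an identical DE/rand/1 donor triple and keeps the best, where the paper instead runs two self-adaptive local searches on F.

```python
import random

def de_rand_1(pop, i, F, rng):
    """DE/rand/1 mutant for individual i with scale factor F."""
    a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
    return [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]

def best_scale_factor(fitness, pop, i, candidates=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Try each candidate F on the same donor triple and keep the F whose
    mutant minimises the fitness (a stand-in for the paper's local searches)."""
    seed = 42                      # fix the donors so only F varies
    best_F, best_val, best_mutant = None, float("inf"), None
    for F in candidates:
        mutant = de_rand_1(pop, i, F, random.Random(seed))
        val = fitness(mutant)
        if val < best_val:
            best_F, best_val, best_mutant = F, val, mutant
    return best_F, best_mutant
```

The candidate grid and the fixed donor seed are assumptions for illustration; the paper's scheme searches F continuously within the generation.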

170 citations


Journal ArticleDOI
TL;DR: A memetic algorithm for solving JSSPs with an objective of minimizing makespan while satisfying a number of hard constraints is developed and results show that MA, as compared to GA, not only improves the quality of solutions but also reduces the overall computational time.
Abstract: The job-shop scheduling problem is well known for its complexity as an NP-hard problem. We have considered JSSPs with an objective of minimizing makespan while satisfying a number of hard constraints. In this paper, we developed a memetic algorithm (MA) for solving JSSPs. Three priority rules were designed, namely partial re-ordering, gap reduction and restricted swapping, and used as local search techniques in our MA. We have solved 40 benchmark problems and compared the results obtained with a number of established algorithms in the literature. The experimental results show that MA, as compared to GA, not only improves the quality of solutions but also reduces the overall computational time.
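For context, the makespan objective that these priority rules act on can be sketched by decoding an operation sequence into a schedule. The permutation-with-repetition encoding below is a common JSSP decoding, assumed here for illustration rather than taken from the paper:

```python
def makespan(jobs, sequence):
    """Decode an operation sequence into a schedule and return its makespan.
    jobs[j] is a list of (machine, duration) operations in fixed order;
    sequence lists job ids, each appearing once per operation of that job."""
    next_op = [0] * len(jobs)      # next operation index per job
    job_ready = [0] * len(jobs)    # time at which each job is free
    mach_ready = {}                # time at which each machine is free
    for j in sequence:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)
```

A local search such as the paper's restricted swapping would exchange entries of `sequence` and re-evaluate this function.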

138 citations


Journal ArticleDOI
TL;DR: This paper extends the notion of memes from a computational viewpoint and explores the purpose, definitions, design guidelines and architecture for effective memetic computing, illustrating the power of high-order meme-based learning.
Abstract: In computational intelligence, the term ‘memetic algorithm’ has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a ‘meme’ has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate: ‘memetic algorithm’ is too specific, and ultimately a misnomer, just as ‘meme’ is defined too generally to be of scientific use. In this paper, we extend the notion of memes from a computational viewpoint and explore the purpose, definitions, design guidelines and architecture for effective memetic computing. Utilizing two conceptual case studies, we illustrate the power of high-order meme-based learning. With applications ranging from cognitive science to machine learning, memetic computing has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher order learning.

115 citations


Journal ArticleDOI
TL;DR: A new representation motivated by observations that Bioinformatics and Systems Biology often give rise to very large-scale datasets that are noisy, ambiguous and usually described by a large number of attributes is presented, which is up to 2–3 times faster than state-of-the-art evolutionary learning representations designed specifically for efficiency purposes.
Abstract: Evolutionary learning techniques are comparable in accuracy with other learning methods such as Bayesian learning, SVM, etc. These techniques often produce more interpretable knowledge than, e.g. SVM; however, efficiency is a significant drawback. This paper presents a new representation motivated by our observations that Bioinformatics and Systems Biology often give rise to very large-scale datasets that are noisy, ambiguous and usually described by a large number of attributes. The crucial observation is that, in the most successful rules obtained for such datasets, only a few key attributes (from the large number of available ones) are expressed in a rule; hence automatically discovering these few key attributes and only keeping track of them contributes to a substantial speed-up by avoiding useless match operations with irrelevant attributes. In effect, this procedure performs a fine-grained feature selection at a rule-wise level, as the key attributes may differ for each learned rule. The representation we propose has been tested within the BioHEL machine learning system, and the experiments performed show that not only does the representation have competent learning performance, but it also considerably reduces the system run-time. That is, the proposed representation is up to 2–3 times faster than state-of-the-art evolutionary learning representations designed specifically for efficiency purposes.
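The key-attribute idea can be sketched as a rule that stores intervals only for its few expressed attributes; every other attribute is an implicit "don't care" and is never matched. The encoding and names below are a hypothetical simplification of the BioHEL representation:

```python
def matches(rule, instance):
    """Match only the few attributes the rule actually expresses.
    `rule` maps attribute index -> (low, high) interval; all other
    attributes are 'don't care', so they cost nothing to check."""
    return all(lo <= instance[a] <= hi for a, (lo, hi) in rule.items())

def classify(rules, default, instance):
    """First-match decision list over (rule, label) pairs."""
    for rule, label in rules:
        if matches(rule, instance):
            return label
    return default
```

The speed-up the paper reports comes precisely from `matches` iterating over the handful of expressed attributes instead of the full attribute vector.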

99 citations


Journal ArticleDOI
TL;DR: This work introduces for the first time the concepts of local optimum structure and generalizes the notion of neighborhood to connectivity structure for the analysis of MAs, and analyzes the solution quality and computational efficiency of the core search operators in Lamarckian memetic algorithms.
Abstract: Memetic algorithms (MAs) represent an emerging field that has attracted increasing research interest in recent times. Despite the popularity of the field, we still know rather little about the search mechanisms of MAs. Given the limited progress made on revealing the intrinsic properties of some commonly used complex benchmark problems and working mechanisms of Lamarckian memetic algorithms in general non-linear programming, we introduce in this work for the first time the concepts of local optimum structure and generalize the notion of neighborhood to connectivity structure for analysis of MAs. Based on the two proposed concepts, we analyze the solution quality and computational efficiency of the core search operators in Lamarckian memetic algorithms. Subsequently, the structure of local optima of a few representative and complex benchmark problems is studied to reveal the effects of individual learning on the fitness landscape and to gain clues about the success or failure of MAs. The connectivity structure of local optima for different memes or individual learning procedures in Lamarckian MAs on the benchmark problems is also investigated to understand the effects of the choice of memes in MA design.

82 citations


Journal ArticleDOI
TL;DR: A Grammar-based Genetic Programming Hyper-Heuristic framework (GPHH) for evolving constructive heuristics for timetabling and it is shown that the framework is very competitive with other constructive techniques, and did outperform other hyper-heuristic frameworks on many occasions.
Abstract: This paper introduces a Grammar-based Genetic Programming Hyper-Heuristic framework (GPHH) for evolving constructive heuristics for timetabling. In this application GP is used as an online learning method which evolves heuristics while solving the problem. In other words, the system keeps on evolving heuristics for a problem instance until a good solution is found. The framework is tested on some of the most widely used benchmarks in the field of exam timetabling and compared with the best state-of-the-art approaches. Results show that the framework is very competitive with other constructive techniques, and outperformed other hyper-heuristic frameworks on many occasions.

70 citations


Journal ArticleDOI
TL;DR: The mechanism of generating immigrants, which is the most important issue among immigrants schemes for EAs in dynamic environments, is examined, and the interactions between the two types of schemes reveal a positive effect in improving the performance of EAs on DOPs.
Abstract: In recent years, there has been a growing interest in studying evolutionary algorithms (EAs) for dynamic optimization problems (DOPs). Among approaches developed for EAs to deal with DOPs, immigrants schemes have been proven to be beneficial. Immigrants schemes for EAs on DOPs aim at maintaining the diversity of the population throughout the run via introducing new individuals into the current population. In this paper, we carefully examine the mechanism of generating immigrants, which is the most important issue among immigrants schemes for EAs in dynamic environments. We divide existing immigrants schemes into two types, namely the direct immigrants scheme and the indirect immigrants scheme, according to the way in which immigrants are generated. Then experiments are conducted to understand the difference in the behaviors of different types of immigrants schemes and to compare their performance in dynamic environments. Furthermore, a new immigrants scheme is proposed to combine the merits of the two types of immigrants schemes. The experimental results show that the interactions between the two types of schemes reveal a positive effect in improving the performance of EAs in dynamic environments.
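The simplest direct scheme in this taxonomy, random immigrants, can be sketched as follows. The replace-worst policy, ratio, and search bounds are illustrative assumptions, not the paper's exact settings:

```python
import random

def random_immigrants(pop, fitness, ratio=0.2, rng=None):
    """Direct immigrants scheme: each generation, replace the worst
    `ratio` fraction of the population with fresh random individuals
    to maintain diversity under a changing fitness landscape."""
    rng = rng or random.Random(0)
    n_imm = max(1, int(ratio * len(pop)))
    dim = len(pop[0])
    ranked = sorted(pop, key=fitness)            # minimisation: best first
    survivors = ranked[:len(pop) - n_imm]
    immigrants = [[rng.uniform(-5, 5) for _ in range(dim)]
                  for _ in range(n_imm)]
    return survivors + immigrants
```

An indirect scheme would instead derive the immigrants from existing population members (e.g. by mutating the current best) rather than sampling them uniformly.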

65 citations


Journal ArticleDOI
TL;DR: In order to demonstrate its practicality, it is shown how the issues of current soft sensor development and maintenance can be effectively dealt with by using the architecture as a construction plan for the development of adaptive soft sensing algorithms.
Abstract: This work presents an architecture for the development of on-line prediction models. The architecture defines a unified modular environment based on three concepts from machine learning: (i) ensemble methods, (ii) local learning, and (iii) meta learning. The three concepts are organised in a three-layer hierarchy within the architecture. For the actual prediction making, any data-driven predictive method such as artificial neural networks, support vector machines, etc. can be implemented and plugged in. In addition to the predictive methods, data pre-processing methods can also be implemented as plug-ins. Models developed according to the architecture can be trained and operated in different modes. With regard to the training, the architecture supports the building of initial models based on a batch of training data, but if this data is not available the models can also be trained in incremental mode. In a scenario where correct target values are (occasionally) available during run-time, the architecture supports life-long learning by providing several adaptation mechanisms across the three hierarchical levels. In order to demonstrate its practicality, we show how the issues of current soft sensor development and maintenance can be effectively dealt with by using the architecture as a construction plan for the development of adaptive soft sensing algorithms.

54 citations


Journal ArticleDOI
TL;DR: A Memetic system to solve the application problem of Financial Portfolio Optimization by introducing the Tree-based Genetic Algorithm (GA), a recursive representation for individuals which allows the genome to learn information regarding relationships between the assets, and the evaluation of intermediate nodes.
Abstract: We introduce a Memetic system to solve the application problem of Financial Portfolio Optimization. This problem consists of selecting a number of assets from a market and their relative weights to form an investment strategy. These weights must be optimized against a utility function that considers the expected return of each asset and their covariance, which means that as the number of available assets increases, the search space increases exponentially. Our method introduces two new concepts that set it apart from previous evolutionary based approaches. The first is the Tree-based Genetic Algorithm (GA), a recursive representation for individuals which allows the genome to learn information regarding relationships between the assets, and the evaluation of intermediate nodes. The second is the hybridization with local search, which allows the system to fine-tune the weights of assets after the tree structure has been decided. These two innovations make our system superior to other representations used for multi-weight assignment of portfolios.

51 citations


Journal ArticleDOI
TL;DR: The last two decades have seen the emergence of a large number of computational intelligence techniques derived from the natural sciences: new nature-inspired problem-solving paradigms based on Darwinian evolution, entomology, condensed matter physics, neurobiology, immunology, etc.
Abstract: The last two decades have seen the emergence of a large number of computational intelligence techniques derived from the natural sciences. New nature-inspired problem-solving paradigms emerged that are based on Darwinian evolution, entomology, condensed matter physics, neurobiology, immunology, etc. These paradigms, in turn, popularized search methodologies such as Genetic Algorithms, Genetic Programming, Evolution Strategies, Particle Swarm Optimization, Ant Colony Optimization, Simulated Annealing, Neural Networks, Artificial Immune Systems, etc. At the same time, other search methodologies such as Tabu Search, Scatter Search, GRASP, etc., remained “metaphor-less”. A war of the methods ensued and continues; it is often the case that each of these search paradigms has a niche flagship publication where the latest advances within the paradigm are presented. The IEEE Transactions on Evolutionary Computation or its twin publication on Neural Networks, the Evolutionary Computation journal, the journal of Genetic Programming and Evolvable Machines, the Swarm Intelligence journal, etc., are examples of scientific outlets for work derived from nature-inspired principles, while the Journal of Heuristics, the journal Soft Computing – A Fusion of Foundations, Methodologies and Applications, or the more recent International Journal of Metaheuristics are examples of publications where the research emphasis is not necessarily on natural computation.

38 citations


Journal ArticleDOI
TL;DR: Focusing on the role of probability distributions and factorizations in estimation of distribution algorithms, a survey of current challenges where further research must provide answers that extend the potential and applicability of these algorithms is presented.
Abstract: In this paper, we identify a number of topics relevant for the improvement and development of discrete estimation of distribution algorithms. Focusing on the role of probability distributions and factorizations in estimation of distribution algorithms, we present a survey of current challenges where further research must provide answers that extend the potential and applicability of these algorithms. In each case we state the research topic and elaborate on the reasons that make it relevant for estimation of distribution algorithms. In some cases current work or possible alternatives for the solution of the problem are discussed.
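For concreteness, the simplest factorization the survey's topics build on, a product of independent univariate marginals, corresponds to UMDA. A minimal sketch (population sizes and clamping bounds are illustrative choices):

```python
import random

def umda(fitness, dim, pop_size=50, top=20, gens=30, rng=None):
    """Univariate marginal distribution algorithm: the simplest discrete
    EDA, whose model is a product of independent Bernoulli marginals
    estimated from the selected (top) individuals each generation."""
    rng = rng or random.Random(1)
    p = [0.5] * dim
    for _ in range(gens):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(dim)]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)      # maximisation
        elite = pop[:top]
        p = [min(0.95, max(0.05,                 # keep marginals off 0/1
             sum(x[i] for x in elite) / top)) for i in range(dim)]
    return p
```

Richer factorizations (bivariate models, Bayesian networks) replace the independent-marginal estimate with a structured probability model, which is exactly where the challenges discussed in the paper arise.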

Journal ArticleDOI
TL;DR: This work presents an integrated vision architecture capable of incrementally learning several visual categories based on natural hand-held objects and imposes no restrictions on the viewing angle of presented objects, relaxing the common constraint on canonical views.
Abstract: We present an integrated vision architecture capable of incrementally learning several visual categories based on natural hand-held objects. Additionally we focus on interactive learning, which requires real-time image processing methods and a fast learning algorithm. The overall system is composed of a figure-ground segregation part, several feature extraction methods and a life-long learning approach combining incremental learning with category-specific feature selection. In contrast to most visual categorization approaches, where typically each view is assigned to a single category, we allow labeling with an arbitrary number of shape and color categories. We also impose no restrictions on the viewing angle of presented objects, relaxing the common constraint on canonical views.

Journal ArticleDOI
TL;DR: A brief description of a selection of theoretical tools that can be used for designing and analyzing various heuristics, including several examples of preprocessing procedures and probabilistic instance analysis methods are given.
Abstract: Intensive practical experimentation is certainly required for the design and evaluation of heuristics; however, a theoretical approach is also important in this area of research. This paper gives a brief description of a selection of theoretical tools that can be used for designing and analyzing various heuristics. For design and evaluation, we consider several examples of preprocessing procedures and probabilistic instance analysis methods. We also discuss some attempts at the theoretical explanation of successes and failures of certain heuristics.

Journal ArticleDOI
TL;DR: A novel hybridization of GA and tabu search is proposed to address the issue of balancing selection pressure and population diversity, and can significantly improve GAs in terms of solution quality as well as convergence speed.
Abstract: Genetic algorithm (GA) is well-known for its effectiveness in global search and optimization. To balance selection pressure and population diversity is an important issue of designing GA. This paper proposes a novel hybridization of GA and tabu search (TS) to address this issue. The proposed method embeds the key elements of TS—tabu restriction and aspiration criterion—into the survival selection operator of GA. More specifically, the tabu restriction is used to prevent inbreeding for diversity maintenance, and the aspiration criterion is activated to provide moderate selection pressure under the tabu restriction. The interaction of tabu restriction and aspiration criterion enables survivor selection to balance selection pressure and population diversity. The experimental results on numerical and combinatorial optimization problems show that this hybridization can significantly improve GAs in terms of solution quality as well as convergence speed. An empirical analysis further identifies the influences of the TS strategies on the performance of this hybrid GA.
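One way to picture tabu restriction and aspiration inside survivor selection is sketched below. This is an illustrative interpretation, not the paper's exact operator: a genotypic distance threshold stands in for the tabu restriction (preventing inbreeding), and beating the current best fitness stands in for the aspiration criterion:

```python
def survive(pop, offspring, fitness, min_dist=1.0):
    """Tabu-style survivor selection (illustrative): an offspring whose
    genotype lies within `min_dist` of an existing member is tabu,
    unless the aspiration criterion fires because it beats the current
    best fitness (minimisation). Accepted offspring replace the worst."""
    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
    tabu = any(dist(offspring, x) < min_dist for x in pop)
    best = min(fitness(x) for x in pop)
    aspired = fitness(offspring) < best
    if tabu and not aspired:
        return pop                                # rejected: too similar
    worst = max(pop, key=fitness)
    return [offspring if x is worst else x for x in pop]
```

The interplay is visible in the two branches: the tabu test maintains diversity, while the aspiration test restores selection pressure for genuinely better solutions.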

Journal ArticleDOI
TL;DR: This paper focuses on the covariate shift in incremental-learning environments and derives the model-selection criterion, an essential objective function for memetic algorithms solving these kinds of learning problems.
Abstract: Learning strategies under covariate shift have recently been widely discussed. The density of learning inputs under covariate shift is different from that of test inputs. Learning machines in such environments need to employ special learning strategies to acquire greater capabilities of generalizing through learning. Incremental learning methods are also used for learning in non-stationary learning environments, which represent a kind of covariate shift. However, the relation between covariate-shift environments and incremental-learning environments has not been adequately discussed. This paper focuses on the covariate shift in incremental-learning environments and our reconstruction of a suitable incremental-learning method. Then, the model-selection criterion is also derived, which serves as an essential objective function for memetic algorithms to solve these kinds of learning problems.
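A standard ingredient in covariate-shift learning is importance weighting: each training point is reweighted by the density ratio p_test(x)/p_train(x) so the fit reflects the test distribution. A sketch for weighted least squares on a 1-D linear model; the weights are assumed given here (estimating the density ratio is a separate problem, and this is not the paper's specific method):

```python
def iw_linear_fit(xs, ys, w):
    """Importance-weighted least squares for y ~ a*x + b, where
    w[i] approximates p_test(x_i) / p_train(x_i)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw    # weighted means
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
    var = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    a = cov / var
    return a, my - a * mx
```

With uniform weights this reduces to ordinary least squares; non-uniform weights shift the fit toward the regions the test inputs actually occupy.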

Journal ArticleDOI
TL;DR: It is determined that very few individuals in an EA population have a significant influence on future population dynamics with the impact size fitting a power law distribution, and concluded that such EA designs cannot be dominated by a small number of individuals and hence should theoretically be capable of exhibiting higher degrees of parallel search behavior.
Abstract: Deepening our understanding of the characteristics and behaviors of population-based search algorithms remains an important ongoing challenge in Evolutionary Computation. To date however, most studies of Evolutionary Algorithms have only been able to take place within tightly restricted experimental conditions. For instance, many analytical methods can only be applied to canonical algorithmic forms or can only evaluate evolution over simple test functions. Analysis of EA behavior under more complex conditions is needed to broaden our understanding of this population-based search process. This paper presents an approach to analyzing EA behavior that can be applied to a diverse range of algorithm designs and environmental conditions. The approach is based on evaluating an individual’s impact on population dynamics using metrics derived from genealogical graphs. From experiments conducted over a broad range of conditions, some important conclusions are drawn in this study. First, it is determined that very few individuals in an EA population have a significant influence on future population dynamics with the impact size fitting a power law distribution. The power law distribution indicates there is a non-negligible probability that single individuals will dominate the entire population, irrespective of population size. Two EA design features are however found to cause strong changes to this aspect of EA behavior: (1) the population topology and (2) the introduction of completely new individuals. If the EA population topology has a long path length or if new (i.e. historically uncoupled) individuals are continually inserted into the population, then power law deviations are observed for large impact sizes. It is concluded that such EA designs cannot be dominated by a small number of individuals and hence should theoretically be capable of exhibiting higher degrees of parallel search behavior.

Journal ArticleDOI
TL;DR: This paper presents a multilevel distributed memetic algorithm (ML-DMA) for the static RWA which finds provable optimal solutions for most benchmark instances with known lower bounds and is capable of handling large instances.
Abstract: The Routing and Wavelength Assignment problem is a graph optimization problem which deals with optical networks, where communication requests in a network have to be fulfilled. In this paper, we present a multilevel distributed memetic algorithm (ML-DMA) for the static RWA which finds provable optimal solutions for most benchmark instances with known lower bounds and is capable of handling large instances. Components of our ML-DMA include iterated local search, recombination, multilevel scaling, and a gossip-based distribution algorithm. Results demonstrated that our ML-DMA is among the most sophisticated heuristic RWA algorithms published so far.

Journal ArticleDOI
TL;DR: A simple two-layered neural network that implements a novel and fast Reinforcement Learning that is applicable to small physical robots running in the real world environments and evaluates the efficacy of the proposed learning mechanism.
Abstract: In the past few years, the field of autonomous robots has been rigorously studied and non-industrial applications of robotics are rapidly emerging. One of the most interesting aspects of this field is the development of the learning ability which enables robots to autonomously adapt to given environments without human guidance. As opposed to the conventional methods of robot control, where humans logically design the behavior of a robot, the ability to acquire action strategies through some learning process will not only significantly reduce the production costs of robots but also improve the applicability of robots to wider tasks and environments. However, learning algorithms usually incur a large computational cost, which makes them unsuitable for robots with limited resources. In this study, we propose a simple two-layered neural network that implements a novel and fast reinforcement learning method. The proposed learning method requires significantly fewer computational resources and is hence applicable to small physical robots running in real-world environments. For this study, we built several simple robots and implemented the proposed learning mechanism in them. In the experiments, to evaluate the efficacy of the proposed learning mechanism, several robots were simultaneously trained to acquire obstacle avoidance strategies in the same environment, thus forming a dynamic environment where the learning task is substantially harder than learning in a static environment; promising results were obtained.
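The flavor of a lightweight reinforcement-learning loop of this kind can be sketched with tabular Q-learning; this is a generic stand-in for the paper's two-layer-network learner, and `env_step` is a hypothetical callback, not the paper's robot interface:

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=200,
               alpha=0.5, gamma=0.9, eps=0.1, rng=None):
    """Minimal tabular Q-learning loop. `env_step` is a hypothetical
    callback mapping (state, action) -> (next_state, reward, done)."""
    rng = rng or random.Random(0)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(50):                      # cap episode length
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            s2, r, done = env_step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
            if done:
                break
    return Q
```

The per-step cost here is a handful of table lookups and one arithmetic update, which is the property that makes such schemes attractive on resource-limited robots.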

Journal ArticleDOI
TL;DR: The present approach of innovation and optimum design is based on basic mechanics with fuzzy goal formulations and heuristics, like axiomatic design, which is useful for optimizing new concepts and also existing machine designs showing possibilities for notable cost savings.
Abstract: Common design principles apply to design of mechanical and biological machine structures. Most of the main properties of machines and creatures are determined by programmed genetics. These determine the geometry, material selection, functioning of machines and biological creatures and the fitness for service. The present approach of innovation and optimum design is based on basic mechanics with fuzzy goal formulations and heuristics, like axiomatic design. These models are combined synergistically to formulate the desired properties of the machines. First, engineering mechanics and heuristics are shown to have a finalistic guidance on the conceptual design of optimal fluid conveying channels consisting of a branch and a closing device. Then a multi-objective algorithm is tested in an industrial case study design of a preloaded screw fastened flange plate and it is shown to be a reliable tool for testing and innovating new solutions. The goals and constraints are modelled consistently by the same goal function form. The joints have to be reliable against risks of separation, relaxation, fatigue and creep fracture due to pressure differences. Compared to conventional results it gives nearly the same technical and safety functions even at half the cost. This approach is useful for optimizing new concepts and also existing machine designs showing possibilities for notable cost savings.

Journal ArticleDOI
TL;DR: Two methods for tuning membership functions of a kernel fuzzy classifier based on the idea of SVM (support vector machine) training are proposed, and usually both methods improve classification performance by tuning membership functions.
Abstract: We propose two methods for tuning membership functions of a kernel fuzzy classifier based on the idea of SVM (support vector machine) training. We assume that in a kernel fuzzy classifier a fuzzy rule is defined for each class in the feature space. In the first method, we tune the slopes of the membership functions at the same time so that the margin between classes is maximized under the constraints that the degree of membership to which a data sample belongs is the maximum among all the classes. This method is similar to a linear all-at-once SVM. We call this AAO tuning. In the second method, we tune the membership function of a class one at a time. Namely, for a class the slope of the associated membership function is tuned so that the margin between the class and the remaining classes is maximized under the constraints that the degrees of membership for the data belonging to the class are large and those for the remaining data are small. This method is similar to a linear one-against-all SVM. This is called OAA tuning. According to computer experiments on fuzzy classifiers based on kernel discriminant analysis and those with ellipsoidal regions, both methods usually improve classification performance by tuning the membership functions, and the classification performance of AAO tuning is slightly better than that of OAA tuning.

Journal ArticleDOI
TL;DR: The problem of finding the maximum fuzzy clique has been formalized on fuzzy graphs and the problem reduces to an unconstrained quadratic 0–1 programming problem.
Abstract: The maximum clique problem is an important problem in graph theory. Many real-life problems are still being mapped into this problem for their effective solutions. A natural extension of this problem that has emerged very recently in many real-life networks, is its fuzzification. The problem of finding the maximum fuzzy clique has been formalized on fuzzy graphs and subsequently addressed in this paper. It has been shown here that the problem reduces to an unconstrained quadratic 0–1 programming problem. Using a maximum neural network, along with mutation capability of genetic adaptive systems, the reduced problem has been solved. Empirical studies have been done by applying the method on stock flow graphs to identify the collusion set, which contains a group of traders performing unfair trading among themselves. Additionally, it has been applied on a gene co-expression network to find out significant gene modules and on some benchmark graphs.
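The reduction to unconstrained quadratic 0–1 programming can be sketched for the crisp (non-fuzzy) case: penalize every selected pair of vertices that is not an edge, so that any maximizer of the quadratic function is a maximum clique. The penalty value and brute-force solver below are illustrative assumptions, not the paper's neural/genetic solver:

```python
import itertools

def clique_qubo(n, edges, penalty=2.0):
    """Quadratic 0-1 objective for maximum clique:
    f(x) = sum_i x_i - penalty * sum over non-edge pairs of x_i * x_j.
    Any penalty > 1 makes every maximiser of f a maximum clique."""
    edge_set = {frozenset(e) for e in edges}
    def f(x):
        val = sum(x)
        for i, j in itertools.combinations(range(n), 2):
            if frozenset((i, j)) not in edge_set:
                val -= penalty * x[i] * x[j]
        return val
    return f

def brute_force_max(f, n):
    """Exhaustive 0-1 maximisation (tiny n only; the paper uses a
    maximum neural network with genetic mutation instead)."""
    return max(itertools.product((0, 1), repeat=n), key=f)
```

In the fuzzy version described in the paper, the crisp edge indicator is replaced by fuzzy edge memberships, but the resulting problem is still an unconstrained quadratic 0–1 program of this shape.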

Journal ArticleDOI
TL;DR: Adaptive soft computing techniques, such as evolving connectionist systems (ECOS), incremental learning and other adaptive learning models, aim to relax the “sufficiency” requirement by continuously updating a model which keeps learning from data streams.
Abstract: In the last 20 years, we have witnessed remarkable progress in computational-intelligence modelling for various applications. However, a majority of these researches make one fundamental assumption: that sufficient and representative data are provided in advance for training. Because this assumption often does not hold in many real applications, recent efforts on adaptive soft computing techniques, such as evolving connectionist systems (ECOS), incremental learning and other adaptive learning models, aim to relax the “sufficiency” requirement by continuously updating a model which keeps learning from data streams. ECOS addresses the learning from a data stream or chunks of data whose underlying distribution changes over time, by training a neural network continuously and adapting its structure and functionality through repeated interactions with the environment or other learning systems. Similarly, incremental learning develops the ability of a computational model to continuously accumulate knowledge learned over time from noisy and/or incomplete data. In practice, incremental learning is also featured by a one-pass property, which enables the algorithm to work with real-time data streams presented only once to the learning machine. Other adaptive soft computing