
Showing papers on "Continuous optimization" published in 2010


Journal ArticleDOI
TL;DR: This work presents novel quantum-behaved PSO (QPSO) approaches that use a mutation operator with a Gaussian probability distribution, applies them to well-studied continuous optimization problems in engineering design, and indicates that the Gaussian QPSO approaches handle such problems efficiently in terms of precision and convergence.
Abstract: Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm that shares many similarities with evolutionary computation techniques. However, PSO is driven by the simulation of a social psychological metaphor motivated by the collective behaviors of birds and other social organisms instead of the survival of the fittest individual. Inspired by the classical PSO method and quantum mechanics theories, this work presents novel quantum-behaved PSO (QPSO) approaches using a mutation operator with a Gaussian probability distribution. The application of the Gaussian mutation operator instead of random sequences in QPSO is a powerful strategy to improve QPSO's performance in preventing premature convergence to local optima. In this paper, new combinations of QPSO and the Gaussian probability distribution are employed in well-studied continuous optimization problems of engineering design. Two case studies are described and evaluated in this work. Our results indicate that the Gaussian QPSO approaches handle such problems efficiently in terms of precision and convergence and, in most cases, they outperform the results presented in the literature.
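
For illustration, here is a minimal Python sketch of one QPSO iteration in which a Gaussian draw perturbs the local attractor. The placement of the Gaussian mutation, the contraction coefficient alpha, and all other settings are assumptions made for the sketch, not the paper's exact configuration.

```python
import numpy as np

def gqpso(f, lo, hi, n=30, iters=500, alpha=0.75, seed=0):
    """Gaussian QPSO sketch: f maps a 1-D array to a scalar cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n, dim))
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                    # mean of personal bests
        phi = rng.random((n, dim))
        p = phi * pbest + (1 - phi) * g               # per-particle attractor
        # Gaussian mutation of the attractor (this placement is an assumption;
        # the paper studies several spots for the Gaussian draw)
        p += rng.normal(0.0, 1.0, p.shape) * np.abs(mbest - x)
        u = rng.uniform(1e-12, 1.0, (n, dim))
        sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
        x = np.clip(p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u), lo, hi)
        cost = np.array([f(xi) for xi in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()
```

For example, `gqpso(lambda v: float(np.sum(v**2)), [-5, -5], [5, 5])` drives the swarm toward the origin.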

405 citations


Journal ArticleDOI
TL;DR: A novel set-based PSO (S-PSO) method for solving combinatorial optimization problems (COPs) in discrete space is presented and tested on two famous COPs: the traveling salesman problem and the multidimensional knapsack problem.
Abstract: Particle swarm optimization (PSO) is predominantly used to find solutions for continuous optimization problems. As the operators of PSO were originally designed for an n-dimensional continuous space, progress in applying PSO to discrete spaces has been slow. In this paper, a novel set-based PSO (S-PSO) method for solving combinatorial optimization problems (COPs) in discrete space is presented. The proposed S-PSO features the following characteristics. First, it uses a set-based representation scheme that enables S-PSO to characterize the discrete search space of COPs. Second, the candidate solution and the velocity are defined as a crisp set and a set with possibilities, respectively. All arithmetic operators in the velocity and position updating rules of the original PSO are replaced by operators and procedures defined on crisp sets and sets with possibilities in S-PSO. The S-PSO method can thus follow a structure similar to the original PSO when searching a discrete space. Based on the proposed S-PSO method, most existing PSO variants, such as the global version PSO, the local version PSO with different topologies, and the comprehensive learning PSO (CLPSO), can be extended to corresponding discrete versions. These discrete PSO versions based on S-PSO are tested on two famous COPs: the traveling salesman problem and the multidimensional knapsack problem. Experimental results show that the discrete version of the CLPSO algorithm based on S-PSO is promising.
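
As a loose illustration of the set-based arithmetic, the sketch below defines scalar-times-velocity, guide-minus-position, and a merge of sets-with-possibilities, and exercises them on a toy pick-k-items task. The position-rebuild rule and the toy task are simplifications invented for the sketch; the actual S-PSO also retains a weighted previous velocity, a cognitive (pbest) term, and problem-specific feasibility handling.

```python
import random

def scale(coef, velocity):
    # coef * velocity: scale each element's possibility, capped at 1
    return {e: min(1.0, coef * p) for e, p in velocity.items()}

def merge(v1, v2):
    # "+" on sets with possibilities: keep the larger possibility per element
    out = dict(v1)
    for e, p in v2.items():
        out[e] = max(out.get(e, 0.0), p)
    return out

def minus(guide, x):
    # guide - position: elements the guide has but the particle lacks, possibility 1
    return {e: 1.0 for e in guide - x}

def update_position(x, velocity, universe, k, rng):
    # Rebuild a k-element position: keep elements whose possibility beats a
    # random threshold, then fill from the old position and the universe
    # (this greedy fill rule is a simplification, not the paper's procedure).
    new = {e for e, p in velocity.items() if p >= rng.random()}
    for e in list(x) + [e for e in universe if e not in new]:
        if len(new) >= k:
            break
        new.add(e)
    return set(list(new)[:k])

rng = random.Random(1)
universe, k = list(range(8)), 3
value = {e: rng.random() for e in universe}        # toy element values
x = set(rng.sample(universe, k))                   # current position (crisp set)
gbest = set(sorted(universe, key=value.get)[-k:])  # toy social guide
v_prev = {}                                        # no previous velocity yet
v = merge(scale(0.9, v_prev), scale(2.0 * rng.random(), minus(gbest, x)))
x = update_position(x, v, universe, k, rng)        # one S-PSO-style step
```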

382 citations


Journal ArticleDOI
TL;DR: In the proposed self-adaptive global best harmony search (SGHS) algorithm, a new improvisation scheme is developed so that the good information captured in the current global best solution can be effectively used to generate new harmonies.

352 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: It is proved that the proposed continuous max-flow and min-cut models, with or without supervised constraints, give rise to a series of global binary solutions λ∗(x) ∊ {0,1} that globally solve the original nonconvex image partitioning problems.
Abstract: We propose and study novel max-flow models in the continuous setting, which directly map the discrete graph-based max-flow problem to its continuous optimization formulation. We show that such a continuous max-flow model leads to an equivalent min-cut problem in a natural way, as the corresponding dual model. In this regard, we revisit basic concepts used in discrete max-flow / min-cut models and give new explanations of them from a variational perspective. We also propose corresponding continuous max-flow and min-cut models constrained by a priori supervised information and apply them to interactive image segmentation/labeling problems. We prove that the proposed continuous max-flow and min-cut models, with or without supervised constraints, give rise to a series of global binary solutions λ∗(x) ∊ {0,1} that globally solve the original nonconvex image partitioning problems. In addition, we propose novel and reliable multiplier-based max-flow algorithms whose convergence is guaranteed by classical optimization theory. Experiments on image segmentation, unsupervised and supervised, validate the effectiveness of the discussed continuous max-flow and min-cut models and the suggested max-flow based algorithms.

264 citations


Book ChapterDOI
01 Jan 2010
TL;DR: In this article, a two-stage hybrid search method called Eagle Strategy is proposed for stochastic optimization problems; it combines random search using a Lévy walk with the firefly algorithm in an iterative manner.
Abstract: Most global optimization problems are nonlinear and thus difficult to solve, and they become even more challenging when uncertainties are present in objective functions and constraints. This paper provides a new two-stage hybrid search method, called Eagle Strategy, for stochastic optimization. The strategy combines random search using a Lévy walk with the firefly algorithm in an iterative manner. Numerical studies and results suggest that the proposed Eagle Strategy is very efficient for stochastic optimization. Finally, practical implications and potential topics for further research are discussed.
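
A compact sketch of the two stages follows, assuming Mantegna's recipe for the Lévy steps. Reproducing the firefly algorithm here would be lengthy, so the second stage is replaced by a plain Gaussian local refinement; this shows the strategy's skeleton rather than the paper's exact method.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta, dim, rng):
    # Mantegna's algorithm for heavy-tailed Levy-flight step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0.0, sigma, dim) / np.abs(rng.normal(0.0, 1.0, dim)) ** (1 / beta)

def eagle_strategy(f, lo, hi, iters=50, beta=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best = rng.uniform(lo, hi)
    best_f = f(best)
    for _ in range(iters):
        # stage 1: global exploration by a Levy walk from the incumbent
        cand = np.clip(best + 0.1 * (hi - lo) * levy_step(beta, lo.size, rng), lo, hi)
        fc = f(cand)
        # stage 2: intensive local refinement (the firefly algorithm in the paper)
        for _ in range(20):
            trial = np.clip(cand + rng.normal(0, 0.01, lo.size) * (hi - lo), lo, hi)
            ft = f(trial)
            if ft < fc:
                cand, fc = trial, ft
        if fc < best_f:
            best, best_f = cand, fc
    return best, best_f
```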

217 citations


Book
01 Jan 2010
TL;DR: An overview of Empirical Process Optimization and some Probability Results Used in Bayesian Inference are presented.
Abstract: Preliminaries.- An Overview of Empirical Process Optimization.- Elements of Response Surface Methods.- Optimization of First Order Models.- Experimental Designs for First Order Models.- Analysis and Optimization of Second Order Models.- Experimental Designs for Second Order Models.- Statistical Inference in Process Optimization.- Statistical Inference in First Order RSM Optimization.- Statistical Inference in Second Order RSM Optimization.- Bias vs. Variance.- Robust Parameter Design and Robust Optimization.- Robust Parameter Design.- Robust Optimization.- Bayesian Approaches in Process Optimization.- Introduction to Bayesian Inference.- Bayesian Methods for Process Optimization.- Introduction to Optimization of Simulation and Computer Models.- Simulation Optimization.- Kriging and Computer Experiments.- Appendices.- Basics of Linear Regression.- Analysis of Variance.- Matrix Algebra and Optimization Results.- Some Probability Results Used in Bayesian Inference.

196 citations


Proceedings ArticleDOI
18 Jul 2010
TL;DR: This work proposes MA-SW-Chains, a memetic algorithm for large scale global optimization that assigns each individual a local search intensity depending on its features by chaining different local search applications.
Abstract: Memetic algorithms are effective at obtaining reliable and accurate solutions for complex continuous optimization problems. Nowadays, high-dimensional optimization problems are an interesting field of research. High dimensionality introduces new difficulties for the optimization process, requiring more scalable algorithms that, at the same time, can better explore the larger domain space around each solution. In this work, we propose a memetic algorithm, MA-SW-Chains, for large scale global optimization. This algorithm assigns to each individual a local search intensity that depends on its features, by chaining different local search applications. MA-SW-Chains adapts a previous algorithm, MA-CMA-Chains, to large scale optimization to improve its performance on high-dimensional problems. Finally, we present the results obtained by our proposal on the benchmark problems defined for the Special Session on Large Scale Global Optimization of the IEEE Congress on Evolutionary Computation in 2010.
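
The chaining idea can be sketched as follows: each individual carries the state of its last local search (here just an adaptive step size), and whenever it is selected again the search resumes from that state. The evolutionary part below is a bare mutation-only placeholder and the improver is a crude Solis-Wets-style walk; both are assumptions standing in for the paper's steady-state GA and its more elaborate local searches.

```python
import numpy as np

def local_search(f, x, fx, step, budget, rng):
    # Solis-Wets-style improver that returns its adapted step size,
    # so a later call can resume where this one left off (the "chain")
    for _ in range(budget):
        trial = x + rng.normal(0.0, step, x.size)
        ft = f(trial)
        if ft < fx:
            x, fx, step = trial, ft, step * 1.2    # success: widen the step
        else:
            step *= 0.85                           # failure: narrow the step
    return x, fx, step

def ma_chains(f, lo, hi, pop=20, gens=50, ls_budget=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, (pop, lo.size))
    F = np.array([f(x) for x in X])
    steps = np.full(pop, 0.2 * float(np.mean(hi - lo)))  # per-individual LS state
    for _ in range(gens):
        # placeholder variation: a mutated parent replaces the current worst
        i, w = int(rng.integers(pop)), int(F.argmax())
        child = np.clip(X[i] + rng.normal(0.0, 0.1, lo.size) * (hi - lo), lo, hi)
        fc = f(child)
        if fc < F[w]:
            X[w], F[w], steps[w] = child, fc, steps[i]   # child inherits LS state
        b = int(F.argmin())          # grant the most promising individual a chain
        X[b], F[b], steps[b] = local_search(f, X[b], F[b], steps[b], ls_budget, rng)
    return X[F.argmin()], float(F.min())
```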

175 citations


Journal ArticleDOI
TL;DR: A bees algorithm (BA) with an ejection chain neighborhood mechanism is applied to generalized assignment problems (GAP); an extensive computational study is carried out and the results are compared with several algorithms from the literature.

172 citations


Journal ArticleDOI
01 Mar 2010
TL;DR: This paper presents optimization aspects of a multi-pass milling operation carried out using three non-traditional optimization algorithms, namely artificial bee colony (ABC), particle swarm optimization (PSO), and simulated annealing (SA).
Abstract: The effective optimization of machining process parameters dramatically affects the cost and production time of machined components as well as the quality of the final products. This paper presents optimization aspects of a multi-pass milling operation. The objective considered is minimization of production time (i.e. maximization of production rate) subject to various constraints on arbor strength, arbor deflection, and cutting power. Various cutting strategies are considered to determine the optimal process parameters, such as the number of passes, depth of cut for each pass, cutting speed, and feed. The upper and lower bounds of the process parameters are also considered in the study. The optimization is carried out using three non-traditional optimization algorithms, namely artificial bee colony (ABC), particle swarm optimization (PSO), and simulated annealing (SA). An application example is presented and solved to illustrate the effectiveness of the presented algorithms. The results of the presented algorithms are compared with previously published results obtained using other optimization techniques.
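
Of the three algorithms, simulated annealing is the simplest to outline. Below is a generic SA sketch for minimization; the one-line machining model in the usage example is entirely hypothetical (invented variable roles and coefficients), with the paper's arbor-strength, deflection, and power constraints standing behind the penalty idea.

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, iters=2000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        # accept improvements always, uphill moves with Boltzmann probability
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                             # geometric cooling schedule
    return best, fbest

# hypothetical penalized production-time model over (speed, feed, depth):
cost = lambda p: 1.0 / (p[0] * p[1]) + 100.0 * max(0.0, p[2] - 4.0) ** 2
neighbor = lambda p, r: [min(5.0, max(0.1, v + r.gauss(0.0, 0.05))) for v in p]
print(simulated_annealing(cost, neighbor, [1.0, 0.5, 2.0]))
```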

148 citations


Journal ArticleDOI
26 Jul 2010
TL;DR: This work casts the major practical requirements for architectural surface paneling, including mold reuse, into a global optimization framework that interleaves discrete and continuous optimization steps to minimize production cost while meeting user-specified quality constraints.
Abstract: The emergence of large-scale freeform shapes in architecture poses big challenges to the fabrication of such structures. A key problem is the approximation of the design surface by a union of patches, so-called panels, that can be manufactured with a selected technology at reasonable cost, while meeting the design intent and achieving the desired aesthetic quality of panel layout and surface smoothness. The production of curved panels is mostly based on molds. Since the cost of mold fabrication often dominates the panel cost, there is strong incentive to use the same mold for multiple panels. We cast the major practical requirements for architectural surface paneling, including mold reuse, into a global optimization framework that interleaves discrete and continuous optimization steps to minimize production cost while meeting user-specified quality constraints. The search space for optimization is mainly generated through controlled deviation from the design surface and tolerances on positional and normal continuity between neighboring panels. A novel 6-dimensional metric space allows us to quickly compute approximate inter-panel distances, which dramatically improves the performance of the optimization and enables the handling of complex arrangements with thousands of panels. The practical relevance of our system is demonstrated by paneling solutions for real, cutting-edge architectural freeform design projects.

147 citations


Journal ArticleDOI
TL;DR: The authors modified the particle position representation, particle movement, and particle velocity in this study to construct a particle swarm optimization (PSO) for an elaborate multi-objective job-shop scheduling problem.
Abstract: Most previous research into the job-shop scheduling problem has concentrated on finding a single optimal solution (e.g., makespan), even though the actual requirement of most production systems requires multi-objective optimization. The aim of this paper is to construct a particle swarm optimization (PSO) for an elaborate multi-objective job-shop scheduling problem. The original PSO was used to solve continuous optimization problems. Due to the discrete solution spaces of scheduling optimization problems, the authors modified the particle position representation, particle movement, and particle velocity in this study. The modified PSO was used to solve various benchmark problems. Test results demonstrated that the modified PSO performed better in search quality and efficiency than traditional evolutionary heuristics.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: It is shown how extremum seeking can be achieved by combining an arbitrary continuous optimization method with an estimator for the derivatives of the unknown steady-state reference-to-output map to ensure non-local convergence of all trajectories to the vicinity of the extremum.
Abstract: A unifying, prescriptive framework is presented for the design of a family of adaptive extremum seeking controllers. It is shown how extremum seeking can be achieved by combining an arbitrary continuous optimization method (such as gradient descent or continuous Newton) with an estimator for the derivatives of the unknown steady-state reference-to-output map. A tuning strategy is presented for the controller parameters that ensures non-local convergence of all trajectories to the vicinity of the extremum. It is shown that this tuning strategy leads to multiple time scales in the closed-loop dynamics, and that the slowest time scale dynamics approximate the chosen continuous optimization method. Results are given for both static and dynamic plants. For simplicity, only single-input-single-output (SISO) plants are considered.
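
A discrete-time caricature of the scheme, with gradient descent as the chosen optimization method and a symmetric finite difference as the derivative estimator; the probing amplitude and gain are arbitrary, and `plant(u)` is assumed to return the settled steady-state output.

```python
def extremum_seeking(plant, u0, gain=0.5, delta=0.05, steps=200):
    # Estimate the slope of the unknown steady-state map by a finite
    # difference, then take a gradient-descent step on the input.
    u = u0
    for _ in range(steps):
        dy = (plant(u + delta) - plant(u - delta)) / (2 * delta)  # slope estimate
        u -= gain * dy                                            # descent law
    return u

# unknown map with a minimum at u = 2 (for illustration only)
print(extremum_seeking(lambda u: (u - 2.0) ** 2 + 1.0, u0=0.0))
```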

Journal ArticleDOI
TL;DR: A comparison of simulation results reveals the optimization efficacy of the proposed scheme over evolutionary programming (EP), genetic algorithm (GA), particle swarm optimization (PSO), mixed-integer particle swarm optimization (MIPSO) and sequential quadratic programming (SQP) as used in MATPOWER for the global optimization of multi-constraint OPF problems.
Abstract: This paper presents a biogeography-based optimization (BBO) technique for solving constrained optimal power flow (OPF) problems in power systems, considering valve point nonlinearities of generators. The proposed algorithm has been tested on 9-bus and IEEE 30-bus systems under various simulated conditions. A comparison of simulation results reveals the optimization efficacy of the proposed scheme over evolutionary programming (EP), genetic algorithm (GA), particle swarm optimization (PSO), mixed-integer particle swarm optimization (MIPSO) and the sequential quadratic programming (SQP) implementation in MATPOWER for the global optimization of multi-constraint OPF problems.
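
The BBO ingredient that distinguishes it from GA/PSO is migration. Below is a sketch of one migration sweep with linear rank-based rates (mutation and elitism omitted); the OPF objective and its valve-point and network constraints would enter only through the cost vector.

```python
import numpy as np

def bbo_migrate(pop, cost, rng):
    # One BBO migration sweep: habitats ranked by cost get linear
    # immigration (lam) and emigration (mu) rates; poor habitats import
    # decision variables from good ones chosen roulette-style by mu.
    n, dim = pop.shape
    rank = np.empty(n)
    rank[np.argsort(cost)] = np.arange(n)     # 0 = best habitat
    mu = 1.0 - rank / (n - 1)                 # emigration: high when good
    lam = rank / (n - 1)                      # immigration: high when poor
    probs = mu / mu.sum()
    new = pop.copy()
    for i in range(n):
        for d in range(dim):
            if rng.random() < lam[i]:
                new[i, d] = pop[rng.choice(n, p=probs), d]
    return new
```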

Journal ArticleDOI
TL;DR: The method the authors advocate first convexifies the problem and then solves a sequence of subproblems whose solutions form a trajectory that leads to the solution; computational results illustrate how well the algorithm performs.
Abstract: One of the challenging optimization problems is determining the minimizer of a nonlinear programming problem that has binary variables. A vexing difficulty is the rate at which the work required to solve such problems increases as the number of discrete variables increases. Any such problem with bounded discrete variables, especially binary variables, may be transformed to that of finding a global optimum of a problem in continuous variables. However, the transformed problems usually have astronomically large numbers of local minimizers, making them harder to solve than typical global optimization problems. Despite this apparent disadvantage, we show that the approach is not futile if we use smoothing techniques. The method we advocate first convexifies the problem and then solves a sequence of subproblems, whose solutions form a trajectory that leads to the solution. To illustrate how well the algorithm performs, we show computational results of applying it to problems taken from the literature and to new test problems with known optimal solutions.
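
One common way to realize such a continuation scheme is sketched below, under the assumption of a concave x(1-x) penalty; the paper's actual convexification and trajectory-following details differ.

```python
import numpy as np
from scipy.optimize import minimize

def binary_continuation(f, n, penalties=np.linspace(0.0, 50.0, 11), seed=0):
    # Relax x in {0,1}^n to [0,1]^n; the concave penalty mu*sum(x(1-x))
    # vanishes exactly at binary points, so ramping mu up forces 0/1 values.
    # Warm-starting each subproblem at the last solution traces the trajectory.
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    for mu in penalties:
        obj = lambda x, mu=mu: f(x) + mu * np.sum(x * (1.0 - x))
        x = minimize(obj, x, bounds=[(0.0, 1.0)] * n, method="L-BFGS-B").x
    return np.round(x)

# toy check: quadratic whose unconstrained minimizer is already binary
target = np.array([1.0, 0.0, 1.0, 1.0])
print(binary_continuation(lambda x: np.sum((x - target) ** 2), n=4))
```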

Journal ArticleDOI
TL;DR: This paper presents a new combination method based on path relinking, which considers a broader area around the population members than previous combination methods, and a population-update method that improves the balance between intensification and diversification.

Journal IssueDOI
TL;DR: The expected improvement approach is demonstrated on two electromagnetic problems, namely, a microwave filter and a textile antenna, and it is shown that this approach can improve the quality of designs on these problems.
Abstract: The increasing use of expensive computer simulations in engineering places a serious computational burden on associated optimization problems. Surrogate-based optimization has become standard practice in analyzing such expensive black-box problems. This article discusses several approaches that use surrogate models for optimization and highlights one sequential design approach in particular, namely expected improvement. The expected improvement approach is demonstrated on two electromagnetic problems, namely a microwave filter and a textile antenna.
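
The expected improvement criterion itself is compact enough to state directly. Given a surrogate's predictive mean and standard deviation at candidate points and the best objective value observed so far, EI for minimization is:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma
    # Points with low predicted mean OR high uncertainty score well, which is
    # what drives the sequential design toward promising, unexplored regions.
    mu = np.asarray(mu, float)
    sigma = np.maximum(np.asarray(sigma, float), 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

The next simulation is then run at the candidate maximizing EI, the surrogate (e.g., a kriging model, not shown here) is refitted, and the cycle repeats.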

Journal ArticleDOI
TL;DR: In this work, structural shape optimization using isogeometric analysis is studied on 2D and shell problems and the proposed framework is extended to topology optimization using trimming techniques.

Posted Content
TL;DR: A selected list of test problems for unconstrained optimization is provided; new algorithms should be tested using at least a subset of functions with diverse properties to make sure whether or not the tested algorithm can solve certain types of optimization problems efficiently.
Abstract: Test functions are important to validate new optimization algorithms and to compare the performance of various algorithms. There are many test functions in the literature, but there is no standard list or set of test functions one has to follow. New optimization algorithms should be tested using at least a subset of functions with diverse properties, so as to make sure whether or not the tested algorithm can solve certain types of optimization problems efficiently. Here we provide a selected list of test problems for unconstrained optimization.
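
For concreteness, here are four functions that appear on virtually every such list, each probing a different difficulty; the implementations are standard, with the known global minima noted in comments.

```python
import numpy as np

def sphere(x):      # unimodal, separable; minimum 0 at the origin
    return np.sum(x ** 2)

def rosenbrock(x):  # narrow curved valley; minimum 0 at (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rastrigin(x):   # highly multimodal, separable; minimum 0 at the origin
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x))

def ackley(x):      # multimodal with a nearly flat outer region; minimum 0 at the origin
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)
```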

Journal ArticleDOI
TL;DR: In this paper, the authors leverage properties of the Heaviside projection method (HPM) to separate the design variable field from the analysis mesh in continuum topology optimization, which can be used to reduce the number of independent design variables without significantly restricting the design space.
Abstract: Topology optimization methodologies typically use the same discretization for the design variable and analysis meshes. Analysis accuracy and expense are thus directly tied to design dimensionality and optimization expense. This paper proposes leveraging properties of the Heaviside projection method (HPM) to separate the design variable field from the analysis mesh in continuum topology optimization. HPM projects independent design variables onto element space over a prescribed length scale. A single design variable therefore influences several elements, creating a redundancy within the design that can be exploited to reduce the number of independent design variables without significantly restricting the design space. The algorithm begins with sparse design variable fields and adapts these fields as the optimization progresses. The technique is demonstrated on minimum compliance (maximum stiffness) problems solved using continuous optimization and genetic algorithms. For the former, the proposed algorithm typically identifies solutions having objective functions within 1% of those found using full design variable fields. Computational savings are minor to moderate for the minimum compliance formulation with a single constraint, and are substantial for formulations having many local constraints. When using genetic algorithms, solutions are consistently obtained on mesh resolutions that were previously considered intractable.
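
A sketch of the two ingredients, using the usual form of the regularized Heaviside function and a linear distance weighting (the choice of weighting kernel here is an assumption):

```python
import numpy as np

def filtered_field(design_vars, var_pts, elem_pts, rmin):
    # Project sparse design variables onto element space: each element value
    # is a distance-weighted average of design variables within radius rmin,
    # so a single design variable influences several elements.
    d = np.linalg.norm(elem_pts[:, None, :] - var_pts[None, :, :], axis=2)
    w = np.maximum(0.0, rmin - d)                     # linear weighting kernel
    return (w @ design_vars) / np.maximum(w.sum(axis=1), 1e-12)

def heaviside_projection(mu, beta):
    # Regularized Heaviside: identity at beta = 0, approaches a 0/1 step as
    # beta grows; the mu*exp(-beta) term preserves rho(1) = 1 for any beta.
    return 1.0 - np.exp(-beta * mu) + mu * np.exp(-beta)
```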

Journal ArticleDOI
TL;DR: The experimental comparison results demonstrate that the proposed hybrid ABC and QEA approach is feasible and effective in solving complex continuous optimization problems.
Abstract: In this paper, a novel hybrid of the Artificial Bee Colony (ABC) algorithm and the Quantum Evolutionary Algorithm (QEA) is proposed for solving continuous optimization problems. ABC is adopted to increase the local search capacity as well as the randomness of the populations. In this way, the improved QEA can escape premature convergence and find the optimal value. To show the performance of the proposed hybrid, a number of experiments are carried out on a set of well-known benchmark continuous optimization problems and the results are compared with two other QEAs: the QEA with classical crossover operation and the QEA with a 2-crossover strategy. The experimental comparison demonstrates that the proposed hybrid ABC and QEA approach is feasible and effective in solving complex continuous optimization problems.
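
The ABC ingredient grafted onto the QEA is the neighbor search around a food source; a minimal version of that move is sketched below (the hybridization schedule itself is not shown).

```python
import numpy as np

def abc_neighbor(X, i, rng):
    # Employed/onlooker-bee move: perturb one randomly chosen dimension of
    # food source i toward or away from another random source k.
    n, dim = X.shape
    k = rng.choice([j for j in range(n) if j != i])
    d = int(rng.integers(dim))
    v = X[i].copy()
    v[d] += rng.uniform(-1.0, 1.0) * (X[i, d] - X[k, d])
    return v  # greedy selection against f(X[i]) decides whether v replaces it
```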

Dissertation
01 Jan 2010
TL;DR: It is found that state-of-the-art optimizer variants with their supposedly adaptive behavioural parameters do not have a general and consistent performance advantage but are outperformed in several cases by simplified optimizers, if only the behavioural parameters are tuned properly.
Abstract: This thesis is about the tuning and simplification of black-box (direct-search, derivative-free) optimization methods, which by definition do not use gradient information to guide their search for an optimum but merely need a fitness (cost, error, objective) measure for each candidate solution to the optimization problem. Such optimization methods often have parameters that influence their behaviour and efficacy. A Meta-Optimization technique is presented here for tuning the behavioural parameters of an optimization method by employing an additional layer of optimization. This is used in a number of experiments on two popular optimization methods, Differential Evolution and Particle Swarm Optimization, and unveils the true performance capabilities of an optimizer in different usage scenarios. It is found that state-of-the-art optimizer variants with their supposedly adaptive behavioural parameters do not have a general and consistent performance advantage but are outperformed in several cases by simplified optimizers, if only the behavioural parameters are tuned properly.
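
The layering can be sketched as follows: an inner DE with its behavioural parameters exposed, and an outer loop that searches over those parameters by the average fitness they achieve. The thesis uses a more careful meta-optimizer and multiple benchmark problems; plain random search and a 3-seed average are simplifications made here.

```python
import numpy as np

def de(f, lo, hi, F, CR, pop=20, gens=100, seed=0):
    # bare-bones DE/rand/1/bin with its behavioural parameters F, CR exposed
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, len(lo)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.clip(np.where(rng.random(len(lo)) < CR,
                                     a + F * (b - c), X[i]), lo, hi)
            ft = f(trial)
            if ft <= fit[i]:
                X[i], fit[i] = trial, ft
    return float(fit.min())

def meta_optimize(problem, lo, hi, trials=30, seed=1):
    # meta-layer: random search over (F, CR), scoring each setting by the
    # average result the tuned DE attains over a few independent runs
    rng = np.random.default_rng(seed)
    best = (None, np.inf)
    for _ in range(trials):
        F, CR = rng.uniform(0.1, 1.0), rng.uniform(0.0, 1.0)
        score = np.mean([de(problem, lo, hi, F, CR, seed=s) for s in range(3)])
        if score < best[1]:
            best = ((F, CR), score)
    return best
```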

Journal ArticleDOI
TL;DR: A new robust optimization method for problems with objective functions that may be computed via numerical simulations and incorporate constraints that need to be feasible under perturbations that significantly improves the robustness for both designs.
Abstract: We propose a new robust optimization method for problems with objective functions that may be computed via numerical simulations and incorporate constraints that need to be feasible under perturbations. The proposed method iteratively moves along descent directions for the robust problem with nonconvex constraints and terminates at a robust local minimum. We generalize the algorithm further to model parameter uncertainties. We demonstrate the practicability of the method in a test application on a nonconvex problem with a polynomial cost function as well as in a real-world application to the optimization problem of intensity-modulated radiation therapy for cancer treatment. The method significantly improves the robustness for both designs.

Journal ArticleDOI
TL;DR: The computational results show that the proposed DLHS algorithm is more effective or at least competitive in finding near-optimal solutions compared with state-of-the-art harmony search variants.
Abstract: This article presents a local-best harmony search algorithm with dynamic subpopulations (DLHS) for solving bound-constrained continuous optimization problems. Unlike existing harmony search algorithms, the DLHS algorithm divides the whole harmony memory (HM) into many small-sized sub-HMs, and the evolution is performed in each sub-HM independently. To maintain the diversity of the population and to improve the accuracy of the final solution, information exchange among the sub-HMs is achieved by using a periodic regrouping schedule. Furthermore, a novel harmony improvisation scheme is employed to benefit from good information captured in the local best harmony vector. In addition, an adaptive strategy is developed to adjust the parameters to suit the particular problems or the particular phases of the search process. Extensive computational simulations and comparisons are carried out on a set of 16 benchmark problems from the literature. The computational results show that, overall, the proposed DLHS algorithm is more effective than, or at least competitive with, state-of-the-art harmony search variants in finding near-optimal solutions.
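
A sketch of the local-best improvisation step follows; the HMCR/PAR adaptation and the regrouping schedule are not shown, and the exact pitch-adjustment rule toward the local best is an assumption about the paper's form.

```python
import numpy as np

def improvise(sub_hm, lbest, lo, hi, hmcr, par, bw, rng):
    # DLHS-style improvisation sketch: draw each variable from the sub-HM
    # with probability HMCR, pitch-adjust it around the local best with
    # probability PAR, otherwise sample it uniformly at random.
    dim = lo.size
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:
            new[d] = sub_hm[rng.integers(len(sub_hm)), d]        # memory consideration
            if rng.random() < par:
                new[d] = lbest[d] + bw * (2 * rng.random() - 1)  # local-best pitch adjust
        else:
            new[d] = rng.uniform(lo[d], hi[d])                   # random selection
    return np.clip(new, lo, hi)
```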

Journal ArticleDOI
TL;DR: A numerical procedure for finding the sparsest and densest realizations of a given reaction network is proposed in this paper; the problem is formulated and solved in the framework of mixed integer linear programming (MILP).
Abstract: A numerical procedure for finding the sparsest and densest realizations of a given reaction network is proposed in this paper. The problem is formulated and solved in the framework of mixed integer linear programming (MILP), where the continuous optimization variables are the nonnegative reaction rate coefficients, and the corresponding integer variables ensure finding the realization with the minimal or maximal number of reactions. The mass-action kinetics is expressed in the form of linear constraints adjoined to the optimization problem. More complex realization problems can also be solved in the proposed framework by modifying the objective function and/or the constraints appropriately.
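
The big-M linking of the integer and continuous variables can be sketched with PuLP; the generic equality constraints A k = b below deliberately stand in for the paper's mass-action kinetic constraints, which are more structured.

```python
import pulp

def realization(A, b, n_rxn, kmax=1e3, densest=False):
    # Sparsest (or densest) realization sketch: nonnegative rate constants
    # k_j, binaries z_j counting active reactions, big-M link k_j <= kmax*z_j,
    # and generic linear constraints A k = b in place of mass-action kinetics.
    sense = pulp.LpMaximize if densest else pulp.LpMinimize
    prob = pulp.LpProblem("realization", sense)
    k = [pulp.LpVariable(f"k{j}", lowBound=0) for j in range(n_rxn)]
    z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(n_rxn)]
    prob += pulp.lpSum(z)                        # objective: number of reactions
    for j in range(n_rxn):
        prob += k[j] <= kmax * z[j]              # k_j > 0 only if z_j = 1
        if densest:
            prob += k[j] >= 1e-6 * z[j]          # counted reactions must be active
    for row, rhs in zip(A, b):
        prob += pulp.lpSum(c * kj for c, kj in zip(row, k)) == rhs
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [pulp.value(kj) for kj in k]
```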

Posted Content
TL;DR: In this article, the authors studied the convergence of Markov Decision Processes made of a large number of objects to optimization problems on ordinary differential equations (ODE) and showed that the optimal reward of such a Markov decision process, satisfying a Bellman equation, converges to the solution of a continuous Hamilton-Jacobi-Bellman (HJB) equation based on the mean field approximation of the MDP.
Abstract: We study the convergence of Markov Decision Processes made of a large number of objects to optimization problems on ordinary differential equations (ODEs). We show that the optimal reward of such a Markov Decision Process, satisfying a Bellman equation, converges to the solution of a continuous Hamilton-Jacobi-Bellman (HJB) equation based on the mean field approximation of the Markov Decision Process. We give bounds on the difference of the rewards and a constructive algorithm for deriving an approximating solution to the Markov Decision Process from a solution of the HJB equations. We illustrate the method on three examples pertaining, respectively, to investment strategies, population dynamics control, and scheduling in queues. They are used to illustrate and justify the construction of the controlled ODE and to show the gain obtained by solving a continuous HJB equation rather than a large discrete Bellman equation.

BookDOI
01 Jan 2010
TL;DR: This chapter discusses optimization software tools for teaching and learning, as well as examples of optimization problems and some of the techniques used to solve them.
Abstract: 1. Introduction: Examples of Optimization Problems, Historical Overview.- 2. Optimality Conditions: Convex Sets, Inequalities, Local First- and Second-Order Optimality Conditions, Duality.- 3. Unconstrained Optimization Problems: Elementary Search and Localization Methods, Descent Methods with Line Search, Trust Region Methods, Conjugate Gradient Methods, Quasi-Newton Methods.- 4. Linearly Constrained Optimization Problems: Linear and Quadratic Optimization, Projection Methods.- 5. Nonlinearly Constrained Optimization Methods: Penalty Methods, SQP Methods.- 6. Interior-Point Methods for Linear Optimization: The Central Path, Newton's Method for the Primal-Dual System, Path-Following Algorithms, Predictor-Corrector Methods.- 7. Semidefinite Optimization: Selected Special Cases, The S-Procedure, The Function log det, Path-Following Methods, How to Solve SDO Problems?, Icing on the Cake: Pattern Separation via Ellipsoids.- 8. Global Optimization: Branch and Bound Methods, Cutting Plane Methods.- Appendices: A Second Look at the Constraint Qualifications, The Fritz John Condition, Optimization Software Tools for Teaching and Learning.- Bibliography.- Index of Symbols.- Subject Index.

Journal ArticleDOI
TL;DR: The core idea of this approach is joint relaxation and restriction, which employs consistency relaxation and coupled bi-directional solution search; on benchmark circuits it leads to about 22% less power dissipation subject to the same timing constraints.
Abstract: Gate sizing and threshold voltage (Vt) assignment are popular techniques for circuit timing and power optimization. Existing methods, by and large, are either sensitivity-driven heuristics or based on discretizing continuous optimization solutions. Sensitivity-driven heuristics are easily trapped in local optima, and the discretization may be subject to substantial errors. In this paper, we propose a systematic combinatorial approach for simultaneous gate sizing and Vt assignment. The core idea of this approach is joint relaxation and restriction, which employs consistency relaxation and coupled bi-directional solution search. The process of joint relaxation and restriction is conducted iteratively to systematically improve solutions. Our algorithm is compared with a state-of-the-art previous work on benchmark circuits. The results from our algorithm lead to about 22% less power dissipation subject to the same timing constraints.

Book ChapterDOI
16 Dec 2010
TL;DR: Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
Abstract: Nonlinear programming is an important branch of operations research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve such problems. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

Book ChapterDOI
18 Jan 2010
TL;DR: This work shows how to extend the Sequential Parameter Optimization framework [SPO; see 5] to operate effectively under time bounds and represents a new state of the art in model-based optimization of algorithms with continuous parameters on single problem instances.
Abstract: The optimization of algorithm performance by automatically identifying good parameter settings is an important problem that has recently attracted much attention in the discrete optimization community. One promising approach constructs predictive performance models and uses them to focus attention on promising regions of a design space. Such methods have become quite sophisticated and have achieved significant successes on other problems, particularly in experimental design applications. However, they have typically been designed to achieve good performance only under a budget expressed as a number of function evaluations (e.g., target algorithm runs). In this work, we show how to extend the Sequential Parameter Optimization framework [SPO; see 5] to operate effectively under time bounds. Our methods take into account both the varying amount of time required for different algorithm runs and the complexity of model building and evaluation; they are particularly useful for minimizing target algorithm runtime. Specifically, we avoid the up-front cost of an initial design, introduce a time-bounded intensification mechanism, and show how to reduce the overhead incurred by constructing and using models. Overall, we show that our method represents a new state of the art in model-based optimization of algorithms with continuous parameters on single problem instances.

Journal ArticleDOI
TL;DR: A heuristic inspired by the T-Cell model of the immune system is presented for solving constrained (numerical) optimization problems and is validated using several test functions taken from the specialized literature on evolutionary optimization.
Abstract: In this paper, we present a heuristic inspired by the T-Cell model of the immune system (i.e. an artificial immune system). The proposed approach (called T-Cell) is used for solving constrained (numerical) optimization problems and is validated using several test functions taken from the specialized literature on evolutionary optimization. Additionally, several engineering optimization problems are also used to assess the performance of the proposed approach. The results are compared with approaches representative of the state of the art in constrained evolutionary optimization.