
Showing papers on "Continuous optimization published in 2012"


Journal ArticleDOI
TL;DR: An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for finding global solutions to large-scale non-linear optimization problems.

1,359 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: This paper formulates multi-target tracking as a discrete-continuous optimization problem that handles each aspect in its natural domain and allows leveraging powerful methods for multi-model fitting, and demonstrates the accuracy and robustness of this approach with state-of-the-art performance on several standard datasets.
Abstract: The problem of multi-target tracking comprises two distinct, but tightly coupled challenges: (i) the naturally discrete problem of data association, i.e. assigning image observations to the appropriate target; (ii) the naturally continuous problem of trajectory estimation, i.e. recovering the trajectories of all targets. To go beyond simple greedy solutions for data association, recent approaches often perform multi-target tracking using discrete optimization. This has the disadvantage that trajectories need to be pre-computed or represented discretely, thus limiting accuracy. In this paper we instead formulate multi-target tracking as a discrete-continuous optimization problem that handles each aspect in its natural domain and allows leveraging powerful methods for multi-model fitting. Data association is performed using discrete optimization with label costs, yielding near optimality. Trajectory estimation is posed as a continuous fitting problem with a simple closed-form solution, which is used in turn to update the label costs. We demonstrate the accuracy and robustness of our approach with state-of-the-art performance on several standard datasets.

362 citations


Book
26 Jan 2012
TL;DR: Computational Optimization of Systems Governed by Partial Differential Equations offers readers a combined treatment of PDE-constrained optimization and uncertainties and an extensive discussion of multigrid optimization.
Abstract: This book fills a gap between theory-oriented investigations in PDE-constrained optimization and the practical demands made by numerical solutions of PDE optimization problems. The authors discuss computational techniques representing recent developments that result from a combination of modern techniques for the numerical solution of PDEs and for sophisticated optimization schemes. Computational Optimization of Systems Governed by Partial Differential Equations offers readers a combined treatment of PDE-constrained optimization and uncertainties and an extensive discussion of multigrid optimization. It provides a bridge between continuous optimization and PDE modeling and focuses on the numerical solution of the corresponding problems. Audience: This book is intended for graduate students working in PDE-constrained optimization and students taking a seminar on numerical PDE-constrained optimization. It is also suitable as an introduction for researchers in scientific computing with PDEs who want to work in the field of optimization and for those in optimization who want to consider methodologies from the field of numerical PDEs. It will help researchers in the natural sciences and engineering to formulate and solve optimization problems.

293 citations


Journal ArticleDOI
TL;DR: An efficient optimization algorithm called teaching–learning-based optimization (TLBO) is proposed in this article to solve continuous unconstrained and constrained optimization problems, and the results show the superior performance of the proposed algorithm.
Abstract: An efficient optimization algorithm called teaching–learning-based optimization (TLBO) is proposed in this article to solve continuous unconstrained and constrained optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The algorithm is tested on 25 different unconstrained benchmark functions and 35 constrained benchmark functions with different characteristics. For the constrained benchmark functions, TLBO is tested with different constraint handling techniques such as superiority of feasible solutions, self-adaptive penalty, ϵ-constraint, stochastic ranking and ensemble of constraints. The performance of the TLBO algorithm is compared with that of other optimization algorithms and the results show the superior performance of the proposed algorithm.
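The teacher and learner phases described in the abstract can be sketched as follows. This is an illustrative minimal implementation of the general TLBO scheme, not the authors' code; the objective, bounds, and variable names are ours.

```python
import random

def tlbo_step(pop, f, lb, ub):
    """One TLBO iteration (minimal sketch): teacher phase, then learner phase.

    pop: list of candidate vectors; f: objective to minimize;
    lb, ub: scalar box bounds applied to every coordinate.
    """
    n, d = len(pop), len(pop[0])
    mean = [sum(x[j] for x in pop) / n for j in range(d)]
    teacher = min(pop, key=f)                      # best learner acts as teacher

    new_pop = []
    for x in pop:
        # Teacher phase: move toward the teacher, away from the class mean.
        tf = random.choice([1, 2])                 # teaching factor
        cand = [x[j] + random.random() * (teacher[j] - tf * mean[j])
                for j in range(d)]
        cand = [min(max(v, lb), ub) for v in cand]
        x = cand if f(cand) < f(x) else x          # greedy acceptance

        # Learner phase: learn from a randomly chosen peer.
        peer = random.choice(pop)
        sign = 1 if f(x) < f(peer) else -1
        cand = [x[j] + sign * random.random() * (x[j] - peer[j])
                for j in range(d)]
        cand = [min(max(v, lb), ub) for v in cand]
        new_pop.append(cand if f(cand) < f(x) else x)
    return new_pop
```

Because both phases use greedy acceptance, the best objective value in the population is monotone non-increasing across iterations.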

267 citations


Journal ArticleDOI
TL;DR: The advantages and disadvantages of recently developed methods that use evolutionary algorithms or metaheuristics to solve trajectory optimization problems are described, and an attempt is made to answer the question of what is now the best extant numerical solution method.
Abstract: There has been significant progress in the development of numerical methods for the determination of optimal trajectories for continuous dynamic systems, especially in the last 20 years. In the 1980s, the principal contribution was new methods for discretizing the continuous system and converting the optimization problem into a nonlinear programming problem. This has been a successful approach that has yielded optimal trajectories for very sophisticated problems. In the last 15–20 years, researchers have applied a qualitatively different approach, using evolutionary algorithms or metaheuristics, to solve similar parameter optimization problems. Evolutionary algorithms use the principle of "survival of the fittest" applied to a population of individuals representing candidate solutions for the optimal trajectories. Metaheuristics optimize by iteratively acting to improve candidate solutions, often using stochastic methods. In this paper, the advantages and disadvantages of these recently developed methods are described and an attempt is made to answer the question of what is now the best extant numerical solution method.
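The discretize-then-optimize idea from the 1980s can be illustrated on a toy problem of our own choosing: minimize the integral of u(t)^2 subject to x'(t) = u(t), x(0) = 0, x(1) = 1. Forward-Euler discretization converts it into the finite-dimensional program min Σ u_i^2 Δt subject to Σ u_i Δt = 1, whose minimum-norm solution is the constant control u_i = 1.

```python
def transcribe_and_solve(n=10):
    """Direct transcription of a toy optimal-control problem (our example).

    The single equality constraint is A u = b with A = [dt, ..., dt], b = 1;
    the minimum-norm solution is u = A^T (A A^T)^{-1} b, i.e. constant u = 1.
    """
    dt = 1.0 / n
    aat = n * dt * dt                    # the 1x1 matrix A A^T
    u = [dt * (1.0 / aat) for _ in range(n)]
    # Recover the state trajectory by Euler integration of x' = u.
    x = [0.0]
    for ui in u:
        x.append(x[-1] + dt * ui)
    return u, x
```

Real trajectory problems yield large sparse nonlinear programs rather than this closed-form special case, but the transcription step is the same.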

255 citations


Journal ArticleDOI
TL;DR: This paper focuses on three very similar evolutionary algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE); while GA is more suitable for discrete optimization, PSO and DE are more natural for continuous optimization.
Abstract: This paper focuses on three very similar evolutionary algorithms: genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE). While GA is more suitable for discrete optimization, PSO and DE are more natural for continuous optimization. The paper first gives a brief introduction to the three EA techniques to highlight their common computational procedures. General observations on the similarities and differences among the three algorithms, based on their computational steps, are then discussed, contrasting the basic performance of the algorithms. Finally, a summary of the relevant literature on job shop, flexible job shop, vehicle routing, location-allocation, and multimode resource-constrained project scheduling problems is given.
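Of the three algorithms compared, DE is the most compact to state. The following is a minimal sketch of the classic DE/rand/1/bin generation step (our illustration of the standard scheme, not code from the paper):

```python
import random

def de_step(pop, f, F=0.5, CR=0.9):
    """One generation of DE/rand/1/bin, minimizing f over a real-valued
    population. F is the differential weight, CR the crossover rate."""
    n, d = len(pop), len(pop[0])
    out = []
    for i, x in enumerate(pop):
        # Three distinct donors, none equal to the target individual.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(d)            # guarantees one mutated gene
        trial = [a[j] + F * (b[j] - c[j])      # differential mutation
                 if (random.random() < CR or j == jrand) else x[j]
                 for j in range(d)]
        out.append(trial if f(trial) <= f(x) else x)   # greedy selection
    return out
```

The vector-difference mutation is what ties DE to continuous search spaces, which is the point the abstract makes when contrasting it with GA.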

224 citations


Proceedings Article
01 Dec 2012
TL;DR: This paper provides an overview of the PSL language and its techniques for inference and weight learning.
Abstract: Probabilistic soft logic (PSL) is a framework for collective, probabilistic reasoning in relational domains. PSL uses first order logic rules as a template language for graphical models over random variables with soft truth values from the interval [0, 1]. Inference in this setting is a continuous optimization task, which can be solved efficiently. This paper provides an overview of the PSL language and its techniques for inference and weight learning. An implementation of PSL is available at http://psl.umiacs.umd.edu/.
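The continuous inference task arises because PSL relaxes logical operators over [0, 1] truth values (Łukasiewicz logic) and scores each ground rule by its distance to satisfaction. A minimal sketch of these relaxations (our illustration; the example atoms are hypothetical):

```python
def luk_and(a, b):
    """Łukasiewicz t-norm: soft conjunction of truth values in [0, 1]."""
    return max(0.0, a + b - 1.0)

def luk_or(a, b):
    """Łukasiewicz t-conorm: soft disjunction."""
    return min(1.0, a + b)

def rule_distance(body, head):
    """Distance to satisfaction of the rule body -> head:
    max(0, truth(body) - truth(head)). PSL inference minimizes the
    weighted sum of such hinge terms, a continuous optimization task."""
    return max(0.0, body - head)

# Hypothetical grounding: friends(a,b)=0.9 AND smokes(a)=0.8 -> smokes(b)=0.3
body = luk_and(0.9, 0.8)          # ~0.7
penalty = rule_distance(body, 0.3)  # ~0.4, the hinge this rule contributes
```

Because every hinge term is convex in the soft truth values, the overall MAP inference problem can be solved efficiently, as the abstract notes.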

222 citations


Journal ArticleDOI
TL;DR: According to the obtained results, the relative estimation errors of the HAPE model are the lowest, and its quadratic form (HAPEQ) provides better-fit solutions under fluctuations of the socio-economic indicators.

188 citations


Journal ArticleDOI
01 Jan 2012
TL;DR: A new version of ABC, called DisABC, is introduced, which is particularly designed for binary optimization, and uses a new differential expression, which employs a measure of dissimilarity between binary vectors in place of the vector subtraction operator typically used in the original ABC algorithm.
Abstract: The artificial bee colony (ABC) algorithm is one of the recently proposed swarm intelligence based algorithms for continuous optimization; it is therefore not possible to use the original ABC algorithm directly to optimize binary-structured problems. In this paper we introduce a new version of ABC, called DisABC, which is particularly designed for binary optimization. DisABC uses a new differential expression, which employs a measure of dissimilarity between binary vectors in place of the vector subtraction operator typically used in the original ABC algorithm. Such an expression helps to maintain the major characteristics of the original operator while respecting the structure of binary optimization problems. Similar to the original ABC algorithm, DisABC's differential expression works in continuous space, while its outcome is used in a two-phase heuristic to construct a complete solution in binary space. The effectiveness of the DisABC algorithm is tested on the uncapacitated facility location problem (UFLP). A set of 15 benchmark test problem instances of UFLP are adopted from OR-Library and solved by the proposed algorithm. Results are compared with two other state-of-the-art binary optimization algorithms, i.e., the binDE and PSO algorithms, in terms of three quality indices. The comparisons indicate that DisABC performs very well and can be regarded as a promising method for solving a wide class of binary optimization problems.
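The dissimilarity measure that replaces vector subtraction can be sketched as a Jaccard-style coefficient over binary vectors (our reading of the idea; the exact formula in the paper may differ in detail):

```python
def jaccard_dissimilarity(x, y):
    """Dissimilarity between two equal-length binary vectors.

    m11 counts positions where both are 1; m10 and m01 count the
    mismatches. Dissimilarity = 1 - m11 / (m11 + m10 + m01).
    """
    m11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    m10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    m01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    if m11 + m10 + m01 == 0:
        return 0.0          # both all-zero vectors: identical
    return 1.0 - m11 / (m11 + m10 + m01)
```

Unlike arithmetic subtraction, this quantity stays meaningful in the binary domain: 0 for identical vectors, 1 for vectors whose 1-positions are disjoint.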

186 citations


Journal ArticleDOI
TL;DR: This paper compares the performance of RCCRO with a large number of optimization techniques on a large set of standard continuous benchmark functions and finds that RCCRO outperforms all the others on average, showing that CRO is suitable for solving problems in the continuous domain.
Abstract: Optimization problems can generally be classified as continuous and discrete, based on the nature of the solution space. A recently developed chemical-reaction-inspired metaheuristic, called chemical reaction optimization (CRO), has been shown to perform well in many optimization problems in the discrete domain. This paper is dedicated to proposing a real-coded version of CRO, namely, RCCRO, to solve continuous optimization problems. We compare the performance of RCCRO with a large number of optimization techniques on a large set of standard continuous benchmark functions. We find that RCCRO outperforms all the others on average. We also propose an adaptive scheme for RCCRO which can improve the performance effectively. This shows that CRO is suitable for solving problems in the continuous domain.

163 citations


Journal ArticleDOI
TL;DR: Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems.

Journal ArticleDOI
TL;DR: This work proposes a joint approach that embeds well control optimization within the search for optimum well placement configurations using derivative-free methods based on pattern search.
Abstract: Well placement and control optimization in oil field development are commonly performed in a sequential manner. In this work, we propose a joint approach that embeds well control optimization within the search for optimum well placement configurations. We solve for well placement using derivative-free methods based on pattern search. Control optimization is solved by sequential quadratic programming using gradients efficiently computed through adjoints. Joint optimization yields a significant increase, of up to 20% in net present value, when compared to reasonable sequential approaches. The joint approach does, however, require about an order of magnitude increase in the number of objective function evaluations compared to sequential procedures. This increase is somewhat mitigated by the parallel implementation of some of the pattern-search algorithms used in this work. Two pattern-search algorithms using eight and 20 computing cores yield speedup factors of 4.1 and 6.4, respectively. A third pattern-search procedure based on a serial evaluation of the objective function is less efficient in terms of clock time, but the optimized cost function value obtained with this scheme is marginally better.
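The derivative-free pattern search used for the well-placement loop can be sketched as a generic compass search: poll plus/minus a step along each coordinate, move to any improving point, otherwise halve the step. This is the textbook scheme, not the authors' exact parallel variant.

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f by compass (pattern) search from x0.

    Polls +/- step along each coordinate; accepts the first improving
    point, and halves the step when no poll point improves."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for j in range(len(x)):
            for s in (step, -step):
                cand = list(x)
                cand[j] += s
                fc = f(cand)
                if fc < fx:                 # accept first improvement
                    x, fx, improved = cand, fc, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # refine the mesh
    return x, fx
```

The poll points at a given step size are independent, which is what makes the parallel evaluation and the reported speedup factors possible.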

Journal ArticleDOI
01 Mar 2012
TL;DR: This paper proposes a set-based PSO to solve the discrete combinatorial optimization problem VRPTW (S-PSO-VRPTW), which treats the discrete search space as an arc set of the complete graph that is defined by the nodes in the VR PTW and regards the candidate solution as a subset of arcs.
Abstract: Vehicle routing problem with time windows (VRPTW) is a well-known NP-hard combinatorial optimization problem that is crucial for transportation and logistics systems. Even though the particle swarm optimization (PSO) algorithm is originally designed to solve continuous optimization problems, in this paper, we propose a set-based PSO to solve the discrete combinatorial optimization problem VRPTW (S-PSO-VRPTW). The general method of the S-PSO-VRPTW is to select an optimal subset out of the universal set by the use of the PSO framework. As the VRPTW can be defined as selecting an optimal subgraph out of the complete graph, the problem can be naturally solved by the proposed algorithm. The proposed S-PSO-VRPTW treats the discrete search space as an arc set of the complete graph that is defined by the nodes in the VRPTW and regards the candidate solution as a subset of arcs. Accordingly, the operators in the algorithm are defined on the set instead of the arithmetic operators in the original PSO algorithm. Besides, the process of position updating in the algorithm is constructive, during which the constraints of the VRPTW are considered and a time-oriented, nearest neighbor heuristic is used. A normalization method is introduced to handle the primary and secondary objectives of the VRPTW. The proposed S-PSO-VRPTW is tested on Solomon's benchmarks. Simulation results and comparisons illustrate the effectiveness and efficiency of the algorithm.

Journal ArticleDOI
TL;DR: In this paper, a method is proposed that combines a spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: It is shown that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible and the optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering.
Abstract: Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.

Journal ArticleDOI
TL;DR: In this article, the authors used a penalized residual sum of squares for MARS as a Tikhonov regularization problem, and treated this with continuous optimization technique, in particular, the framework of conic quadratic programming.
Abstract: Regression analysis is a widely used statistical method for modelling relationships between variables. Multivariate adaptive regression splines (MARS) is especially useful for high-dimensional problems and fitting nonlinear multivariate functions. A special advantage of MARS lies in its ability to estimate the contributions of basis functions so that both additive and interactive effects of the predictors are allowed to determine the response variable. The MARS method consists of two parts: forward and backward algorithms. Through these algorithms, it seeks to achieve two objectives: a good fit to the data, but a simple model. In this article, we treat a penalized residual sum of squares for MARS as a Tikhonov regularization problem and handle it with a continuous optimization technique, in particular the framework of conic quadratic programming. We call this new approach CMARS, and consider it an important complementary and model-based alternative to the backward stepwise algorithm.
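The Tikhonov-regularized objective and its conic quadratic reformulation can be written as follows (our paraphrase of the standard pattern; here $L$ denotes a regularization operator built from the basis functions and $M$ a bound chosen by the modeller):

```latex
\mathrm{PRSS}(\theta) \;=\; \|y - X\theta\|_2^2 \;+\; \lambda\,\|L\theta\|_2^2
\qquad\Longrightarrow\qquad
\min_{t,\,\theta}\; t
\quad \text{s.t.} \quad \|y - X\theta\|_2 \le t, \qquad \|L\theta\|_2 \le M
```

Trading the penalty weight $\lambda$ for the norm bound $M$ is what turns the penalized least-squares problem into a conic quadratic program solvable by interior-point methods.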

Journal ArticleDOI
TL;DR: The goal is to solve the constrained multi-objective reliability optimization problem of a system with interval valued reliability of each component by maximizing the system reliability and minimizing the system cost under several constraints.

Posted Content
TL;DR: The results show that the performance of CSO is promising on unimodal and multimodal benchmark functions with different search space dimension sizes, and its results are compared with state-of-the-art optimization methods.
Abstract: Designing a fast and efficient optimization method with local optima avoidance capability on a variety of optimization problems is still an open problem for many researchers. In this work, the concept of a new global optimization method with an open implementation area is introduced as a Curved Space Optimization (CSO) method, which is a simple probabilistic optimization method enhanced by concepts of general relativity theory. To address global optimization challenges such as performance and convergence, this new method is designed based on transformation of a random search space into a new search space based on concepts of space-time curvature in general relativity theory. In order to evaluate the performance of our proposed method, an implementation of CSO is deployed and its results are compared on benchmark functions with state-of-the-art optimization methods. The results show that the performance of CSO is promising on unimodal and multimodal benchmark functions with different search space dimension sizes.

Journal ArticleDOI
TL;DR: An optimization methodology for determining optimal well locations and trajectories based on the covariance matrix adaptation evolution strategy (CMA-ES) which is recognized as one of the most powerful derivative-free optimizers for continuous optimization is proposed.
Abstract: The amount of hydrocarbon recovered can be considerably increased by finding optimal placement of non-conventional wells. For that purpose, the use of optimization algorithms, where the objective function is evaluated using a reservoir simulator, is needed. Furthermore, for complex reservoir geologies with high heterogeneities, the optimization problem requires algorithms able to cope with the non-regularity of the objective function. In this paper, we propose an optimization methodology for determining optimal well locations and trajectories based on the covariance matrix adaptation evolution strategy (CMA-ES) which is recognized as one of the most powerful derivative-free optimizers for continuous optimization. In addition, to improve the optimization procedure, two new techniques are proposed: (a) adaptive penalization with rejection in order to handle well placement constraints and (b) incorporation of a meta-model, based on locally weighted regression, into CMA-ES, using an approximate stochastic ranking procedure, in order to reduce the number of reservoir simulations required to evaluate the objective function. The approach is applied to the PUNQ-S3 case and compared with a genetic algorithm (GA) incorporating the Genocop III technique for handling constraints. To allow a fair comparison, both algorithms are used without parameter tuning on the problem, and standard settings are used for the GA and default settings for CMA-ES. It is shown that our new approach outperforms the genetic algorithm: It leads in general to both a higher net present value and a significant reduction in the number of reservoir simulations needed to reach a good well configuration. Moreover, coupling CMA-ES with a meta-model leads to further improvement, which was around 20% for the synthetic case in this study.
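The two constraint-handling ingredients, penalization of soft violations and rejection of infeasible candidates, can be sketched generically as follows. This is our illustration of the general pattern, not the paper's adaptive scheme; the toy constraint and penalty weight are ours.

```python
def penalized_objective(f, constraints, lam):
    """Wrap objective f with a penalty on soft constraints.

    Each c in constraints is feasible when c(x) <= 0; violations are
    penalized with weight lam, which the caller may adapt over time."""
    def g(x):
        viol = sum(max(0.0, c(x)) for c in constraints)
        return f(x) + lam * viol
    return g

def sample_feasible(sampler, hard_ok, max_tries=100):
    """Rejection step for hard constraints: resample until hard_ok holds."""
    for _ in range(max_tries):
        x = sampler()
        if hard_ok(x):
            return x
    raise RuntimeError("no feasible sample found")
```

In the paper these ideas sit inside the CMA-ES loop, where the candidate sampler is the adapted multivariate Gaussian rather than the uniform sampler used in the test below.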

Journal ArticleDOI
01 Oct 2012
TL;DR: An adaptive strategy without user-defined parameters and a reversible-conversion strategy between continuous space and discrete space are utilized to improve the classical DE algorithm, providing an effective new approach to subpixel mapping for remote sensing imagery.
Abstract: In this paper, a novel subpixel mapping algorithm based on an adaptive differential evolution (DE) algorithm, namely, adaptive-DE subpixel mapping (ADESM), is developed to perform the subpixel mapping task for remote sensing images. Subpixel mapping may provide a fine-resolution map of class labels from coarser spectral unmixing fraction images, with the assumption of spatial dependence. In ADESM, to utilize DE, the subpixel mapping problem is transformed into an optimization problem by maximizing the spatial dependence index. The traditional DE algorithm is an efficient and powerful population-based stochastic global optimizer in continuous optimization problems, but it cannot be applied to the subpixel mapping problem in a discrete search space. In addition, it is not an easy task to properly set control parameters in DE. To avoid these problems, this paper utilizes an adaptive strategy without user-defined parameters, and a reversible-conversion strategy between continuous space and discrete space, to improve the classical DE algorithm. During the process of evolution, they are further improved by enhanced evolution operators, e.g., mutation, crossover, repair, exchange, insertion, and an effective local search to generate new candidate solutions. Experimental results using different types of remote images show that the ADESM algorithm consistently outperforms the previous subpixel mapping algorithms in all the experiments. Based on sensitivity analysis, ADESM, with its self-adaptive control parameter setting, is better than, or at least comparable to, the standard DE algorithm, when considering the accuracy of subpixel mapping, and hence provides an effective new approach to subpixel mapping for remote sensing imagery.

Journal Article
TL;DR: The proposed memetic firefly algorithm (MFFA) showed the potential to be successfully applied to other combinatorial optimization problems in the near future.
Abstract: Firefly algorithms belong to the modern nature-inspired meta-heuristic algorithms that can be successfully applied to continuous optimization problems. In this paper, we have applied the firefly algorithm, hybridized with a local search heuristic, to combinatorial optimization problems, using graph 3-coloring problems as test benchmarks. The results of the proposed memetic firefly algorithm (MFFA) were compared with the results of the Hybrid Evolutionary Algorithm (HEA), Tabucol, and the evolutionary algorithm with the SAW method (EA-SAW) by coloring a suite of medium-scale random graphs (graphs with 500 vertices) generated using the Culberson random graph generator. The results of the firefly algorithm were very promising and showed the potential for this algorithm to be successfully applied in the near future to other combinatorial optimization problems as well.
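The continuous firefly move that MFFA builds on can be sketched as the standard attraction step: a dimmer firefly moves toward a brighter one with attractiveness decaying in distance, plus a small random walk. This is the generic update (our notation); the paper's memetic variant additionally applies a local search step for graph 3-coloring.

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly xi toward brighter firefly xj.

    beta0: attractiveness at zero distance; gamma: light absorption
    coefficient; alpha: scale of the uniform random perturbation."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))     # squared distance
    beta = beta0 * math.exp(-gamma * r2)               # decayed attractiveness
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

With gamma = 0 and alpha = 0 the move lands exactly on the brighter firefly, which makes the update easy to sanity-check.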

Journal ArticleDOI
TL;DR: This article surveys recent developments in theory and numerical methods for standard and generalized semi-infinite optimization problems, paying particular attention to connections with mathematical programs with complementarity constraints, lower level Wolfe duality, semi-smooth approaches, as well as branch and bound techniques in adaptive convexification procedures.

Journal ArticleDOI
TL;DR: An example-based learning PSO (ELPSO) is proposed to overcome shortcomings of the canonical PSO; by keeping a balance between swarm diversity and convergence speed, it outperforms all the tested PSO algorithms in terms of both solution quality and convergence time.

Journal ArticleDOI
TL;DR: This paper presents an ACO-based algorithm for numerical optimization capable of solving high-dimensional real-parameter optimization problems, called the Differential Ant-Stigmergy Algorithm (DASA), which transforms a real- Parameter optimization problem into a graph-search problem.

Journal ArticleDOI
TL;DR: Numerical results reveal that the proposed algorithms can find better solutions than classical GSO and other heuristic algorithms, and are powerful search algorithms for various global optimization problems.
Abstract: The glowworm swarm optimization (GSO) algorithm is one of the newest nature-inspired heuristics for optimization problems. In order to enhance the accuracy and convergence rate of GSO, two strategies concerning the movement phase of GSO are proposed. One is a greedy acceptance criterion under which the glowworms update their position dimension by dimension. The other is a set of new movement formulas inspired by the artificial bee colony (ABC) algorithm and particle swarm optimization (PSO). To compare and analyze the performance of the proposed improved GSO, a number of experiments are carried out on a set of well-known benchmark global optimization problems. The effects of the parameters of the improved algorithms are examined by uniform design experiments. Numerical results reveal that the proposed algorithms can find better solutions than classical GSO and other heuristic algorithms, and are powerful search algorithms for various global optimization problems.
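The dimension-by-dimension greedy acceptance can be sketched as follows: each coordinate is stepped toward a brighter neighbour in turn, and the step is kept only if it does not worsen the fitness. This is our illustration of the idea in the abstract, not the paper's exact movement formula.

```python
def greedy_dimension_move(x, leader, f, step=0.1):
    """Greedily move x toward leader, one dimension at a time.

    Each coordinate takes a fixed-size step in the direction of the
    leader; the step is accepted only if the fitness does not worsen."""
    x = list(x)
    fx = f(x)
    for j in range(len(x)):
        trial = list(x)
        diff = leader[j] - x[j]
        trial[j] += step * diff / (abs(diff) + 1e-12)   # unit-direction step
        ft = f(trial)
        if ft <= fx:                 # greedy acceptance per dimension
            x, fx = trial, ft
    return x, fx
```

Because rejected coordinate steps are discarded, the returned fitness is never worse than that of the starting position.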

Journal ArticleDOI
TL;DR: The proposed fuzzy PSO (FPSO) algorithm is empirically evaluated through a preliminary sensitivity analysis of the PSO parameters and compared with fuzzy simulated annealing and fuzzy ant colony optimization algorithms, suggesting that the fuzzyPSO is a suitable algorithm for solving the DLAN topology design problem.
Abstract: Particle swarm optimization (PSO) is a powerful optimization technique that has been applied to solve a number of complex optimization problems. One such optimization problem is topology design of distributed local area networks (DLANs). The problem is defined as a multi-objective optimization problem requiring simultaneous optimization of monetary cost, average network delay, hop count between communicating nodes, and reliability under a set of constraints. This paper presents a multi-objective particle swarm optimization algorithm to efficiently solve the DLAN topology design problem. Fuzzy logic is incorporated in the PSO algorithm to handle the multi-objective nature of the problem. Specifically, a recently proposed fuzzy aggregation operator, namely the unified And-Or operator (Khan and Engelbrecht in Inf. Sci. 177: 2692–2711, 2007), is used to aggregate the objectives. The proposed fuzzy PSO (FPSO) algorithm is empirically evaluated through a preliminary sensitivity analysis of the PSO parameters. FPSO is also compared with fuzzy simulated annealing and fuzzy ant colony optimization algorithms. Results suggest that the fuzzy PSO is a suitable algorithm for solving the DLAN topology design problem.

Journal ArticleDOI
01 Feb 2012
TL;DR: The proposed algorithm, GAAPI, is a hybridization of two optimization techniques: a special class of ant colony optimization for continuous domains entitled API, and a genetic algorithm (GA); the hybrid adopts the downhill behavior of API and the good spreading in the solution space of the GA.
Abstract: Many real-life optimization problems often face an increased rank of nonsmoothness (many local minima) which could prevent a search algorithm from moving toward the global solution. Evolution-based algorithms try to deal with this issue. The algorithm proposed in this paper is called GAAPI and is a hybridization between two optimization techniques: a special class of ant colony optimization for continuous domains entitled API and a genetic algorithm (GA). The algorithm adopts the downhill behavior of API (a key characteristic of optimization algorithms) and the good spreading in the solution space of the GA. A probabilistic approach and an empirical comparison study are presented to prove the convergence of the proposed method in solving different classes of complex global continuous optimization problems. Numerical results are reported and compared to the existing results in the literature to validate the feasibility and the effectiveness of the proposed method. The proposed algorithm is shown to be effective and efficient for most of the test functions.

Journal ArticleDOI
TL;DR: The present paper shows how BBO can be applied for constrained optimization problems, where the objective is to find a solution for a given objective function, subject to both inequality and equality constraints.

Journal ArticleDOI
TL;DR: A new advanced algorithm is proposed for the process parameter optimization of machining processes, inspired by the teaching-learning process, and it works on the effect of influence of a teacher on the output of learners in a class.
Abstract: A new advanced algorithm is proposed for the process parameter optimization of machining processes. This algorithm is inspired by the teaching-learning process, and it works on the effect of influence of a teacher on the output of learners in a class. The results obtained by the proposed new algorithm have outperformed the previous results for the considered machining processes.

Journal ArticleDOI
TL;DR: Four methods for global numerical black-box optimization with origins in the mathematical programming community are described and experimentally compared with the state-of-the-art evolutionary method BIPOP-CMA-ES, and suggestions are drawn about which algorithm should be used depending on the available budget of function evaluations.
Abstract: Four methods for global numerical black-box optimization with origins in the mathematical programming community are described and experimentally compared with the state-of-the-art evolutionary method BIPOP-CMA-ES. The methods chosen for the comparison exhibit various features that are potentially interesting for the evolutionary computation community: systematic sampling of the search space (DIRECT, MCS), possibly combined with a local search method (MCS), or a multi-start approach (NEWUOA, GLOBAL), possibly equipped with a careful selection of points from which to run a local optimizer (GLOBAL). The recently proposed "comparing continuous optimizers" (COCO) methodology was adopted as the basis for the comparison. Based on the results, we draw suggestions about which algorithm should be used depending on the available budget of function evaluations, and we propose several possibilities for hybridizing evolutionary algorithms (EAs) with features of the other compared algorithms.