
Showing papers on "Continuous optimization published in 2015"


Journal ArticleDOI
TL;DR: The proposed algorithm, integrated with improved search strategies, outperforms the basic ABC algorithm, its other variants, and competing methods in terms of solution quality and robustness in most of the experiments.

213 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed method, named TSA, outperforms state-of-the-art methods in most cases on numeric function optimization and is a viable alternative for solving the multilevel thresholding problem.
Abstract: This paper presents a new intelligent optimizer for continuous optimization based on the relation between trees and their seeds. The new method belongs to the field of heuristic, population-based search. The locations of trees and seeds in the n-dimensional search space correspond to possible solutions of an optimization problem. One or more seeds are produced from each tree, and better seed locations replace the locations of trees. While new seed locations are produced, either the best solution or another tree location is considered together with the current tree location. This choice is controlled by a parameter named search tendency (ST), and the process is executed for a pre-defined number of iterations. These mechanisms balance the exploitation and exploration capabilities of the proposed approach. In the experimental studies, the effects of the control parameters on the performance of the method are first examined on 5 well-known basic numeric functions. The performance of the proposed method is also investigated on 24 benchmark functions with 2, 3, 4 and 5 dimensions and on multilevel thresholding problems. The obtained results are compared with those of state-of-the-art methods such as the artificial bee colony (ABC) algorithm, particle swarm optimization (PSO), the harmony search (HS) algorithm, the firefly algorithm (FA) and the bat algorithm (BA). Experimental results show that the proposed method, named TSA, outperforms the state-of-the-art methods in most cases on numeric function optimization and is a viable alternative for solving the multilevel thresholding problem.
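The tree/seed mechanism the abstract describes can be sketched in a few lines of Python. The exact update equations, seed counts and boundary handling below are illustrative assumptions, not the paper's definitions; only the overall structure (trees spawn seeds, an ST parameter decides whether the best tree guides the move, better seeds replace trees) follows the abstract.

```python
import random

def sphere(x):
    """Simple test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def tsa_minimize(f, dim=5, n_trees=10, st=0.1, iters=200, lo=-5.0, hi=5.0, seed=42):
    """Minimal Tree-Seed Algorithm sketch: trees spawn seeds; better seeds replace trees."""
    rng = random.Random(seed)
    trees = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_trees)]
    fits = [f(t) for t in trees]
    for _ in range(iters):
        best = trees[min(range(n_trees), key=lambda i: fits[i])]
        for i in range(n_trees):
            n_seeds = rng.randint(1, max(1, n_trees // 4))  # seed count is an assumption
            for _ in range(n_seeds):
                other = trees[rng.randrange(n_trees)]
                seed_pos = []
                for j in range(dim):
                    alpha = rng.uniform(-1.0, 1.0)
                    if rng.random() < st:
                        # search-tendency branch: pull toward the best tree
                        v = trees[i][j] + alpha * (best[j] - other[j])
                    else:
                        v = trees[i][j] + alpha * (trees[i][j] - other[j])
                    seed_pos.append(min(hi, max(lo, v)))
                sf = f(seed_pos)
                if sf < fits[i]:  # better seed location replaces the tree
                    trees[i], fits[i] = seed_pos, sf
    return min(fits)

best = tsa_minimize(sphere)
```

A small ST keeps most moves local (exploration), while the occasional pull toward the best tree provides exploitation; this is the balance the abstract attributes to the ST parameter.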

206 citations


Journal ArticleDOI
01 Nov 2015
TL;DR: An adaptive FA is proposed in this paper to solve mechanical design optimization problems, and the adaptivity is focused on the search mechanism and adaptive parameter settings.
Abstract: Proposing an extension of the firefly algorithm. Employment of piecewise chaos for further enhanced diversity. Making use of a simple but effective constraint handling method. Making use of an improved local search procedure. The firefly algorithm (FA) is a newer member of the bio-inspired meta-heuristics, originally proposed to find solutions to continuous optimization problems. The popularity of FA has increased recently due to its effectiveness in handling various optimization problems. To enhance the performance of FA even further, an adaptive FA (AFA) is proposed in this paper to solve mechanical design optimization problems; the adaptivity is focused on the search mechanism and adaptive parameter settings. Moreover, chaotic maps are embedded into AFA for performance improvement. It is shown through experimental tests that some of the best known results are improved by the proposed algorithm.

189 citations


Journal ArticleDOI
TL;DR: A survey of methods for algorithm selection in the black-box continuous optimization domain is presented and a classification of the landscape analysis methods based on their order, neighborhood structure and computational complexity is proposed.

184 citations


Journal ArticleDOI
01 Jun 2015
TL;DR: Ant colony optimization for continuous domains (ACOR) based integer programming is employed for size optimization in a hybrid photovoltaic (PV)-wind energy system; the results show that the proposed approach outperforms competing methods in both solution quality and speed.
Abstract: ACOR based integer programming is employed for size optimization. The objective function of the hybrid PV-wind system is the total design cost. Decision variables are the numbers of solar panels, wind turbines and batteries. A complete data set, an optimization formulation and ACOR are benefits of this paper. In this paper, ant colony optimization for continuous domains (ACOR) based integer programming is employed for size optimization in a hybrid photovoltaic (PV)-wind energy system. ACOR is a direct extension of ant colony optimization (ACO) and a significant ant-based algorithm for continuous optimization. In this setting, the variables are treated as real-valued and then rounded in each iteration. The numbers of solar panels, wind turbines and batteries are the decision variables of the integer programming problem. The objective function of the PV-wind system design is the total design cost, the sum of total capital cost and total maintenance cost, which should be minimized. The optimization is performed separately for three renewable energy systems: the hybrid system, solar stand-alone and wind stand-alone. A complete data set, a regular optimization formulation and ACOR based integer programming are the main features of this paper. The optimization results show that this method gives the best results in just a few seconds. The results are also compared with other artificial intelligence (AI) approaches and a conventional optimization method; they are very promising and show that the proposed approach outperforms the alternatives in terms of both reaching an optimal solution and speed.
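The key idea of the abstract, keeping the decision variables real-valued inside a continuous-domain ant colony optimizer and rounding them to integers whenever the cost is evaluated, can be sketched as below. The ACOR loop follows the standard archive-sampling scheme in simplified form, and the cost coefficients and demand penalty in `design_cost` are invented for illustration; the paper's data set and formulation are not reproduced here.

```python
import math
import random

def design_cost(n_pv, n_wt, n_bat):
    """Hypothetical total design cost (capital + maintenance) for a PV-wind system.
    Coefficients and the demand-coverage penalty are made up for illustration."""
    capital = 600 * n_pv + 3000 * n_wt + 150 * n_bat
    maintenance = 5 * n_pv + 60 * n_wt + 2 * n_bat
    supplied = 0.4 * n_pv + 2.5 * n_wt + 0.1 * n_bat  # arbitrary kW contribution
    penalty = 1e5 * max(0.0, 50.0 - supplied)          # must cover ~50 kW demand
    return capital + maintenance + penalty

def acor_integer(f, dim=3, lo=0, hi=200, archive=10, ants=5, q=0.5, xi=0.85,
                 iters=150, seed=1):
    """Simplified ACOR: Gaussian sampling around an archive of solutions.
    Variables stay real-valued; they are rounded only when the cost is evaluated."""
    rng = random.Random(seed)

    def key(s):
        return f(*[round(v) for v in s])

    sols = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(archive)]
    sols.sort(key=key)
    for _ in range(iters):
        # rank-based Gaussian weights over the archive
        w = [math.exp(-(l ** 2) / (2 * (q * archive) ** 2)) for l in range(archive)]
        for _ in range(ants):
            l = rng.choices(range(archive), weights=w)[0]
            new = []
            for j in range(dim):
                sigma = xi * sum(abs(s[j] - sols[l][j])
                                 for s in sols[:archive]) / (archive - 1)
                new.append(min(hi, max(lo, rng.gauss(sols[l][j], sigma + 1e-9))))
            sols.append(new)
        sols.sort(key=key)   # keep only the best `archive` solutions
        del sols[archive:]
    best = [round(v) for v in sols[0]]
    return best, f(*best)

best_design, best_cost = acor_integer(design_cost)
```

The rounding-at-evaluation pattern is what turns a continuous optimizer into an integer-programming heuristic here; everything else is ordinary ACOR.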

149 citations


Journal ArticleDOI
TL;DR: It is shown in this paper that an additional approximation of the objective function is required; this is achieved by the construction of a surrogate objective using radial basis functions. The proposed method is illustrated with two applications: the shape optimization of a simplified nozzle inlet model and the design optimization of a chemical reaction.
Abstract: Solving large-scale PDE-constrained optimization problems presents computational challenges due to the large dimensional set of underlying equations that have to be handled by the optimizer. Recently, projection-based nonlinear reduced-order models have been proposed to be used in place of high-dimensional models in a design optimization procedure. The dimensionality of the solution space is reduced using a reduced-order basis constructed by Proper Orthogonal Decomposition. In the case of nonlinear equations, however, this is not sufficient to ensure that the cost associated with the optimization procedure does not scale with the high dimension. To achieve that goal, an additional reduction step, hyper-reduction, is applied. Then, solving the resulting reduced set of equations requires only a reduced dimensional domain, and large speedups can be achieved. In the case of design optimization, it is shown in this paper that an additional approximation of the objective function is required. This is achieved by the construction of a surrogate objective using radial basis functions. The proposed method is illustrated with two applications: the shape optimization of a simplified nozzle inlet model and the design optimization of a chemical reaction.

147 citations


Posted Content
TL;DR: The authors show that SGM is algorithmically stable in the sense of Bousquet and Elisseeff, derive stability bounds for both convex and non-convex optimization, and show that popular techniques for training large deep models are stability-promoting.
Abstract: We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.

128 citations


Book
23 Mar 2015
TL;DR: This book discusses MATLAB® as a computational tool, linear and nonlinear programming, and more advanced topics in optimization, including discrete optimization.
Abstract: Optimization in Practice with MATLAB® provides a unique approach to optimization education. It is accessible to both junior and senior undergraduate and graduate students, as well as industry practitioners. It provides a strongly practical perspective that allows the student to be ready to use optimization in the workplace. It covers traditional materials, as well as important topics previously unavailable in optimization books (e.g. numerical essentials - for successful optimization). Written with both the reader and the instructor in mind, Optimization in Practice with MATLAB® provides practical applications of real-world problems using MATLAB®, with a suite of practical examples and exercises that help the students link the theoretical, the analytical, and the computational in each chapter. Additionally, supporting MATLAB® m-files are available for download via www.cambridge.org.messac. Lastly, adopting instructors will receive a comprehensive solution manual with solution codes along with lectures in PowerPoint with animations for each chapter, and the text's unique flexibility enables instructors to structure one- or two-semester courses.

127 citations


Journal ArticleDOI
01 Aug 2015
TL;DR: This paper introduces a multi-population DE to solve large-scale global optimization problems and shows that mDE-bES has a competitive performance and scalability behavior compared to the contestant algorithms.
Abstract: Differential evolution (DE) is a simple, yet very effective, population-based search technique. However, it is challenging to maintain a balance between exploration and exploitation behaviors of the DE algorithm. In this paper, we boost the population diversity while preserving simplicity by introducing a multi-population DE to solve large-scale global optimization problems. In the proposed algorithm, called mDE-bES, the population is divided into independent subgroups, each with different mutation and update strategies. A novel mutation strategy that uses information from either the best individual or a randomly selected one is used to produce quality solutions to balance exploration and exploitation. Selection of individuals for some of the tested mutation strategies utilizes fitness-based ranks of these individuals. Function evaluations are divided into epochs. At the end of each epoch, individuals between the subgroups are exchanged to facilitate information exchange at a slow pace. The performance of the algorithm is evaluated on a set of 19 large-scale continuous optimization problems. A comparative study is carried out with other state-of-the-art optimization techniques. The results show that mDE-bES has a competitive performance and scalability behavior compared to the contestant algorithms.
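The multi-population structure described in the abstract, independent subgroups with different strategies and a slow exchange of individuals at epoch boundaries, can be sketched as follows. This is not the paper's mDE-bES: the per-group scale factors, the ring migration and the single DE/rand/1 mutation are simplifying assumptions standing in for the paper's multiple mutation and update strategies.

```python
import random

def sphere(x):
    """Simple test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def mde_sketch(f, dim=10, groups=4, group_size=10, epochs=10, gens_per_epoch=20,
               cr=0.9, lo=-5.0, hi=5.0, seed=3):
    rng = random.Random(seed)
    pops = [[[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(group_size)]
            for _ in range(groups)]
    # each subgroup gets its own scale factor ("different strategies", simplified)
    fs = [0.4 + 0.1 * g for g in range(groups)]
    for _ in range(epochs):
        for g, pop in enumerate(pops):
            for _ in range(gens_per_epoch):
                for i in range(group_size):
                    a, b, c = rng.sample([k for k in range(group_size) if k != i], 3)
                    trial = []
                    for j in range(dim):
                        if rng.random() < cr:  # DE/rand/1 crossover
                            v = pop[a][j] + fs[g] * (pop[b][j] - pop[c][j])
                        else:
                            v = pop[i][j]
                        trial.append(min(hi, max(lo, v)))
                    if f(trial) <= f(pop[i]):
                        pop[i] = trial
        # end of epoch: migrate each subgroup's best into the next subgroup (ring)
        bests = [min(pop, key=f) for pop in pops]
        for g in range(groups):
            worst = max(range(group_size), key=lambda i: f(pops[g][i]))
            pops[g][worst] = list(bests[(g - 1) % groups])
    return min(f(ind) for pop in pops for ind in pop)

result = mde_sketch(sphere)
```

Exchanging only one individual per epoch keeps the subgroups nearly independent between exchanges, which is the "information exchange at a slow pace" the abstract refers to.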

124 citations


Journal ArticleDOI
TL;DR: This paper proposes several major design features that need to be incorporated into large-scale optimization benchmark suites in order to better resemble the features of real-world problems.

114 citations


Book
13 Feb 2015
TL;DR: This book covers the basic theory of partial differential equations and their discretization, the theory of PDE-constrained optimization, numerical optimization methods, and box-constrained and nonsmooth PDE-constrained problems.
Abstract: Introduction.- Basic Theory of Partial Differential Equations and Their Discretization.- Theory of PDE-constrained Optimization.- Numerical Optimization Methods.- Box-constrained Problems.- Nonsmooth PDE-constrained Optimization.

Posted Content
TL;DR: It is established via numerical experiments that the MIO approach performs better than Lasso and other popularly used sparse learning procedures, in terms of achieving sparse solutions with good predictive power.
Abstract: In the last twenty-five years (1990-2014), algorithmic advances in integer optimization combined with hardware improvements have resulted in an astonishing 200 billion factor speedup in solving Mixed Integer Optimization (MIO) problems. We present a MIO approach for solving the classical best subset selection problem of choosing $k$ out of $p$ features in linear regression given $n$ observations. We develop a discrete extension of modern first order continuous optimization methods to find high quality feasible solutions that we use as warm starts to a MIO solver that finds provably optimal solutions. The resulting algorithm (a) provides a solution with a guarantee on its suboptimality even if we terminate the algorithm early, (b) can accommodate side constraints on the coefficients of the linear regression and (c) extends to finding best subset solutions for the least absolute deviation loss function. Using a wide variety of synthetic and real datasets, we demonstrate that our approach solves problems with $n$ in the 1000s and $p$ in the 100s in minutes to provable optimality, and finds near optimal solutions for $n$ in the 100s and $p$ in the 1000s in minutes. We also establish via numerical experiments that the MIO approach performs better than Lasso and other popularly used sparse learning procedures, in terms of achieving sparse solutions with good predictive power.
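The "discrete extension of modern first order continuous optimization methods" used for warm starts is, in simplified form, projected gradient descent with a hard-thresholding projection onto $k$-sparse vectors. A minimal sketch follows; the step size rule, iteration count and toy data are assumptions for illustration, not the paper's algorithm or experiments.

```python
import random

def hard_threshold(beta, k):
    """Project onto k-sparse vectors: keep the k largest-magnitude coordinates."""
    idx = sorted(range(len(beta)), key=lambda j: -abs(beta[j]))[:k]
    out = [0.0] * len(beta)
    for j in idx:
        out[j] = beta[j]
    return out

def discrete_first_order(X, y, k, steps=200, lr=None):
    """Projected-gradient sketch for min ||y - X b||^2 s.t. ||b||_0 <= k."""
    n, p = len(X), len(X[0])
    if lr is None:
        # conservative step size: 1 / trace(X'X) upper-bounds 1 / L,
        # where L is the largest eigenvalue of X'X
        lr = 1.0 / (sum(X[i][j] ** 2 for i in range(n) for j in range(p)) + 1e-9)
    beta = [0.0] * p
    for _ in range(steps):
        resid = [sum(X[i][j] * beta[j] for j in range(p)) - y[i] for i in range(n)]
        grad = [2 * sum(X[i][j] * resid[i] for i in range(n)) for j in range(p)]
        beta = hard_threshold([beta[j] - lr * grad[j] for j in range(p)], k)
    return beta

# toy noiseless demo: recover a 2-sparse coefficient vector
rng = random.Random(0)
n, p = 50, 10
X = [[rng.uniform(-1, 1) for _ in range(p)] for _ in range(n)]
beta_true = [3.0, 0, 0, -2.0, 0, 0, 0, 0, 0, 0]
y = [sum(X[i][j] * beta_true[j] for j in range(p)) for i in range(n)]
beta_hat = discrete_first_order(X, y, k=2)
support = sorted(j for j, b in enumerate(beta_hat) if b != 0.0)
```

In the paper's pipeline, a solution like `beta_hat` would seed the MIO solver, which then certifies optimality or improves on it.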

Journal ArticleDOI
TL;DR: In this article, a continuous formulation of the direction-of-arrival (DOA) estimation problem is employed and an optimization procedure is introduced, which promotes sparsity on a continuous optimization variable.
Abstract: The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. On a discrete angular grid, the CS reconstruction degrades due to basis mismatch when the DOAs do not coincide with the angular directions on the grid. To overcome this limitation, a continuous formulation of the DOA problem is employed and an optimization procedure is introduced, which promotes sparsity on a continuous optimization variable. The DOA estimation problem with infinitely many unknowns, i.e., source locations and amplitudes, is solved over a few optimization variables with semidefinite programming. The grid-free CS reconstruction provides high-resolution imaging even with non-uniform arrays, single-snapshot data and under noisy conditions as demonstrated on experimental towed array data.

Proceedings Article
07 Dec 2015
TL;DR: This paper presents a Bayesian optimization method that achieves an exponential convergence rate without the need for auxiliary optimization and without δ-cover sampling.
Abstract: This paper presents a Bayesian optimization method with exponential convergence without the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [1] requires access to the δ-cover sampling, which was considered to be impractical [1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.

Journal ArticleDOI
TL;DR: An improved fruit fly optimization algorithm based on differential evolution (DFOA) is proposed by modifying the expression of the smell concentration judgment value and by introducing a differential vector to replace the stochastic search.
Abstract: The expression of the smell concentration judgment value is significantly important in the application of the fruit fly optimization algorithm (FOA). The original FOA can only solve problems whose optimal solutions lie in the vicinity of zero. To make FOA more universal for continuous optimization problems, especially those whose optimal solutions are not zero, this paper proposes an improved fruit fly optimization algorithm based on differential evolution (DFOA), which modifies the expression of the smell concentration judgment value and introduces a differential vector to replace the stochastic search. In numerical experiments on 12 benchmark instances, the results show that DFOA has stronger global search ability, faster convergence and better convergence stability on high-dimensional functions than the original FOA and evolutionary algorithms from the literature. DFOA is also applied to optimize the operation of the Texaco gasification process by maximizing the syngas yield using two decision variables, the oxygen-coal ratio and the coal concentration. The results show that DFOA quickly finds the optimal output, demonstrating its effectiveness.

Proceedings ArticleDOI
11 Jan 2015
TL;DR: A continuous-time framework based on multiplicative weight updates to approximately solve continuous optimization problems and demonstrates significantly faster algorithms to maximize the multilinear relaxation of a monotone or non-monotone submodular set function subject to linear packing constraints.
Abstract: We develop a continuous-time framework based on multiplicative weight updates to approximately solve continuous optimization problems. The framework allows for a simple and modular analysis for a variety of problems involving convex constraints and concave or submodular objective functions. The continuous-time framework avoids the cumbersome technical details that are typically necessary in actual algorithms. We also show that the continuous-time algorithms can be converted into implementable algorithms via a straightforward discretization process. Using our framework and additional ideas, we obtain significantly faster algorithms compared to previously known algorithms to maximize the multilinear relaxation of a monotone or non-monotone submodular set function subject to linear packing constraints.

Journal ArticleDOI
01 Aug 2015
TL;DR: The experimental results show that the proposed ABCbin algorithm is a simple and effective alternative tool for binary optimization in terms of solution quality and robustness.
Abstract: This paper introduces an ABC variant to solve binary optimization problems. The performance of the proposed method is investigated on well-known UFLPs. The proposed method is compared with ABC variants and PSO variants. The experimental results show that the proposed algorithm is an alternative tool for binary optimization. The artificial bee colony (ABC) algorithm, one of the swarm intelligence algorithms, was proposed for continuous optimization, inspired by the intelligent behaviors of a real honey bee colony. For optimization problems with a binary-structured solution space, the basic ABC algorithm must be modified, because its basic version was proposed for solving continuous optimization problems. In this study, an adapted version of ABC, ABCbin for short, is proposed for binary optimization. In the proposed model, although the artificial agents in the algorithm work in the continuous solution space, the food source position obtained by the agents is converted to binary values before the problem-specific objective function is evaluated. The accuracy and performance of the proposed approach are examined on 15 well-known benchmark instances of the uncapacitated facility location problem, and the results obtained by ABCbin are compared with those of continuous particle swarm optimization (CPSO), binary particle swarm optimization (BPSO), improved binary particle swarm optimization (IBPSO), the binary artificial bee colony algorithm (binABC) and the discrete artificial bee colony algorithm (DisABC). The performance of ABCbin is also analyzed under changes of the control parameter values. The experimental results and comparisons show that the proposed ABCbin is a simple and effective alternative tool for binary optimization in terms of solution quality and robustness.
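The core trick described in the abstract, letting the bees search a continuous space but converting each food-source position to a binary vector before evaluation, can be sketched as below. The sigmoid-plus-stochastic-threshold mapping, the OneMax stand-in objective and the single employed-bee phase are assumptions for illustration; the paper may use a different mapping and evaluates on facility location instances.

```python
import math
import random

def onemax(bits):
    """Stand-in binary objective to maximize (the paper uses facility location)."""
    return sum(bits)

def to_binary(pos, rng):
    """Map a continuous position to 0/1 via a sigmoid transfer + stochastic threshold."""
    return [1 if rng.random() < 1.0 / (1.0 + math.exp(-x)) else 0 for x in pos]

def abcbin_sketch(f, dim=20, colony=10, iters=100, limit=10, seed=7):
    rng = random.Random(seed)
    foods = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(colony)]
    fits = [f(to_binary(x, rng)) for x in foods]
    trials = [0] * colony
    best = max(fits)
    for _ in range(iters):
        for i in range(colony):  # employed-bee phase (onlooker phase omitted)
            j = rng.randrange(dim)
            k = rng.choice([c for c in range(colony) if c != i])
            cand = list(foods[i])
            cand[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
            cf = f(to_binary(cand, rng))  # binarize only at evaluation time
            if cf > fits[i]:
                foods[i], fits[i], trials[i] = cand, cf, 0
            else:
                trials[i] += 1
            if trials[i] > limit:  # scout: abandon an exhausted food source
                foods[i] = [rng.uniform(-2.0, 2.0) for _ in range(dim)]
                fits[i] = f(to_binary(foods[i], rng))
                trials[i] = 0
        best = max(best, max(fits))
    return best

score = abcbin_sketch(onemax)
```

Because the search operators never change, any continuous metaheuristic can be adapted to binary problems this way; only the evaluation step differs.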

Journal ArticleDOI
TL;DR: In this paper, the authors present stationary NLP type models of gas networks that are primarily designed to include detailed nonlinear physics in the final optimization steps for mid-term planning problems after fixing discrete decisions with coarsely approximated physics.
Abstract: Economic reasons and the regulation of gas markets create a growing need for mathematical optimization of natural gas networks. Real life planning tasks often lead to highly complex and extremely challenging optimization problems whose numerical treatment requires a breakdown into several simplified problems to be solved by carefully chosen hierarchies of models and algorithms. This paper presents stationary NLP type models of gas networks that are primarily designed to include detailed nonlinear physics in the final optimization steps for mid term planning problems after fixing discrete decisions with coarsely approximated physics.

Journal ArticleDOI
TL;DR: An improved multi-objective biogeography-based optimization (MO-BBO) algorithm integrated with LINGO software is designed, and it achieves better performance than MO-GA in terms of solution quality.

Journal ArticleDOI
TL;DR: A novel discrete-variable handling technique is integrated into the original ICDE to give a so-called Discrete-ICDE (D-ICDE) for solving layout truss optimization problems.
Abstract: A discrete variable technique is integrated into ICDE to give Discrete-ICDE. Discrete-ICDE is then applied to truss layout optimization problems. Numerical results show that Discrete-ICDE is robust, effective and reliable. Recently, an improved (µ+λ) constrained differential evolution (ICDE) has been proposed and proven to be robust and effective for solving constrained optimization problems. However, so far, the ICDE has been developed mainly for continuous design variables, and hence it is inappropriate for layout truss optimization problems, which contain both discrete and continuous variables. This paper fills this gap by proposing a novel discrete variable handling technique and integrating it into the original ICDE to give a so-called Discrete-ICDE (D-ICDE) for solving layout truss optimization problems. The objective functions of the optimization problems are the minimum weights of the whole truss structures, and the constraints are stress, displacement and buckling limitations. Numerical examples of five classical truss problems are carried out and compared to other state-of-the-art optimization methods to illustrate the reliability and effectiveness of the proposed method. The D-ICDE's performance shows that it not only successfully handles discrete variables but also significantly improves the convergence of the layout truss optimization problem. The D-ICDE is promising to extend to other structural optimization problems which contain both discrete and continuous variables.

Book ChapterDOI
01 Jan 2015
TL;DR: The Flower Pollination Algorithm, based on the pollination mechanisms of flowering plants, constitutes an example of such a technique; results of an experimental study of its properties on selected benchmark continuous optimization problems are given.
Abstract: Modern optimization has at its disposal an immense variety of heuristic algorithms which can effectively deal with both continuous and combinatorial optimization problems. Recent years have brought fast development in this area of unconventional methods inspired by phenomena found in nature. The Flower Pollination Algorithm, based on the pollination mechanisms of flowering plants, constitutes an example of such a technique. The paper first presents a detailed description of this algorithm. Then results of an experimental study of its properties on selected benchmark continuous optimization problems are given. Finally, the performance of the algorithm is discussed, predominantly in comparison with the well-known Particle Swarm Optimization algorithm.

Book ChapterDOI
01 Jan 2015
TL;DR: This chapter describes tools and techniques that are useful for optimization via simulation—maximizing or minimizing the expected value of a performance measure of a stochastic simulation—when the decision variables are discrete.
Abstract: This chapter describes tools and techniques that are useful for optimization via simulation—maximizing or minimizing the expected value of a performance measure of a stochastic simulation—when the decision variables are discrete. Ranking and selection, globally and locally convergent random search and ordinal optimization are covered, along with a collection of “enhancements” that may be applied to many different discrete optimization via simulation algorithms. We also provide strategies for using commercial solvers.

Journal ArticleDOI
TL;DR: A multiobjective optimization based EGO (EGO-MO) is proposed to balance global exploration and local exploitation; it generates multiple test solutions simultaneously to take advantage of parallel computing and reduce computational time.
Abstract: In many engineering optimization problems, objective function evaluations can be extremely computationally expensive. The effective global optimization (EGO) is a widely used approach for expensive optimization. Balance between global exploration and local exploitation is a very important issue in designing EGO-like algorithms. This paper proposes a multiobjective optimization based EGO (EGO-MO) for addressing this issue. In EGO-MO, a global surrogate model for the objective function is firstly constructed using some initial database of designs. Then, a multiobjective optimization problem (MOP) is formulated, in which two objectives measure the global exploration and local exploitation. At each generation, the multiobjective evolutionary algorithm based on decomposition is used for solving the MOP. Several solutions selected from the obtained Pareto front are evaluated. In such a way, it can generate multiple test solutions simultaneously to take the advantage of parallel computing and reduce the computational time. Numerical experiments on a suite of test problems have shown that EGO-MO outperforms EGO in terms of iteration numbers.

Journal ArticleDOI
TL;DR: In analogy to the scalarization principle in vector optimization, this paper presents a new vectorization approach for set optimization problems that is developed for the set less order relation used by Kuroiwa and the minmax less orders introduced by Ha and Jahn.
Abstract: In analogy to the scalarization principle in vector optimization, this paper presents a new vectorization approach for set optimization problems. Vectorization means the replacement of a set optimization problem by a suitable vector optimization problem. This approach is developed for the set less order relation used by Kuroiwa and the minmax less order relation introduced by Ha and Jahn.

Journal ArticleDOI
01 Aug 2015
TL;DR: A new approach for solving the graph coloring problem based on COA is presented, and its performance is compared with some well-known heuristic search methods; the obtained results confirm the high performance of the proposed method.
Abstract: A novel discrete approach for combinatorial optimization based on the cuckoo optimization algorithm (COA). Redefining the difference concept between two habitats as a differential list of movements. The proposed method is able to solve non-permutation problems. Modifying the egg laying and immigration phases of COA in the proposed discrete cuckoo optimization algorithm (DCOA). High quality results obtained for graph coloring problems. In recent years, various heuristic optimization methods have been developed. Many of these methods are inspired by swarm behaviors in nature, such as particle swarm optimization (PSO), the firefly algorithm (FA) and the cuckoo optimization algorithm (COA). The recently introduced COA has proven its excellent capabilities, such as faster convergence and better global minimum achievement. In this paper a new approach for solving the graph coloring problem based on COA is presented. Since COA was originally presented for solving continuous optimization problems, a discrete COA is needed to apply it to the graph coloring problem. Hence, to apply COA to a discrete search space, the standard arithmetic operators such as addition, subtraction and multiplication present in the COA migration operator need to be redefined in the discrete space based on the theory of distances. By redefining the concept of the difference between two habitats as a list of differential movements, COA is equipped with a means of solving discrete non-permutation problems. A set of graph coloring benchmark problems is solved, and the performance is compared with some well-known heuristic search methods. The obtained results confirm the high performance of the proposed method.

Journal ArticleDOI
TL;DR: A new binary coded version of HS, named NBHS, is developed for solving large-scale multidimensional knapsack problem (MKP), where focus is given to the probability distribution rather than the exact value of each decision variable and the concept of mean harmony is introduced in the memory consideration.

Journal ArticleDOI
TL;DR: A reduced basis surrogate model is used for the numerical solution of parameter optimization problems constrained by parametrized partial differential equations, allowing different cost functionals to be optimized rapidly with the same reduced basis model.
Abstract: We consider parameter optimization problems which are subject to constraints given by parametrized partial differential equations. Discretizing this problem may lead to a large-scale optimization problem which can hardly be solved rapidly. In order to accelerate the process of parameter optimization we will use a reduced basis surrogate model for numerical optimization. For many optimization methods sensitivity information about the functional is needed. In the following we will show that this derivative information can be calculated efficiently in the reduced basis framework in the case of a general linear output functional and parametrized evolution problems with linear parameter separable operators. By calculating the sensitivity information directly instead of applying the more widely used adjoint approach we can rapidly optimize different cost functionals using the same reduced basis model. Furthermore, we will derive rigorous a-posteriori error estimators for the solution, the gradient and the optimal parameters, which can all be computed online. The method will be applied to two parameter optimization problems with an underlying advection-diffusion equation.

Journal ArticleDOI
TL;DR: Experimental results on thirty-two standard benchmark functions demonstrate that SHPSOS outperforms original HS and the other related algorithms in terms of the solution quality and the stability.
Abstract: A self-adaptive harmony particle swarm optimization search algorithm is proposed.PSO algorithm is utilized to initial the harmony memory (HM).Pitch adjusting rate (PAR) and distance bandwidth (bw), are adjusted dynamically.A Gaussian mutation operator is added to reinforce the robustness.The convergence of the SHPSOS algorithm has been proved theoretically. Harmony Search (HS) algorithm is a new population-based meta-heuristic which imitates the music improvisation process and has been successfully applied to a variety of combination optimization problems. In this paper, a self-adaptive harmony particle swarm optimization search algorithm, named SHPSOS, is proposed to solve global continuous optimization problems. Firstly, an efficient initialization scheme based on the PSO algorithm is presented for improving the solution quality of the initial harmony memory (HM). Secondly, a new self-adaptive adjusting scheme for pitch adjusting rate (PAR) and distance bandwidth (BW), which can balance fast convergence and large diversity during the improvisation step, are designed. PAR is dynamically adapted by symmetrical sigmoid curve, and BW is dynamically adjusted by the median of the harmony vector at each generation. Meanwhile, a new effective improvisation scheme based on differential evolution and the best harmony (best individual) is developed to accelerate convergence performance and to improve solution accuracy. Besides, Gaussian mutation strategy is presented and embedded in the SHPSOS algorithm to reinforce the robustness and avoid premature convergence in the evolution process of candidates. Finally, the global convergence performance of the SHPSOS is analyzed with the Markov model to testify the stability of algorithm. Experimental results on thirty-two standard benchmark functions demonstrate that SHPSOS outperforms original HS and the other related algorithms in terms of the solution quality and the stability.
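The abstract says PAR is adapted by a symmetrical sigmoid curve over the generations. One plausible schedule, with parameter names, bounds and the steepness constant all assumptions rather than the paper's values, looks like this:

```python
import math

def par_sigmoid(gen, max_gen, par_min=0.01, par_max=0.99, steepness=10.0):
    """Hypothetical symmetric-sigmoid schedule for the pitch adjusting rate:
    starts near par_min, rises smoothly through the midpoint of the run,
    and ends near par_max."""
    t = gen / max_gen                                   # normalized generation in [0, 1]
    s = 1.0 / (1.0 + math.exp(-steepness * (t - 0.5)))  # symmetric sigmoid about t = 0.5
    return par_min + (par_max - par_min) * s
```

A schedule of this shape keeps PAR low early (favoring diversity via random picks from HM) and raises it late (favoring fine pitch adjustments around good harmonies), which matches the convergence/diversity trade-off the abstract describes.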

Journal ArticleDOI
TL;DR: It is shown that a slightly adapted PSO almost surely finds a local optimum, and for a very general class of objective functions, results are provided about the quality of the solution found.

Journal ArticleDOI
TL;DR: In this article, a new level set method for topological shape optimization of 3D structures considering manufacturing constraints is proposed, where the boundary of structure is implicitly represented as the zero level set of a higher-dimensional level set function, and the implicit surface is parameterized through the interpolation of a given set of compactly supported radial basis functions.