
Showing papers on "Continuous optimization published in 2016"


Journal ArticleDOI
TL;DR: Describes the state of the art in continuous optimization methods for such problems, with particular emphasis on optimal first-order schemes that can deal with the typical non-smooth and large-scale objective functions used in imaging problems.
Abstract: A large number of imaging problems reduce to the optimization of a cost function , with typical structural properties. The aim of this paper is to describe the state of the art in continuous optimization methods for such problems, and present the most successful approaches and their interconnections. We place particular emphasis on optimal first-order schemes that can deal with typical non-smooth and large-scale objective functions used in imaging problems. We illustrate and compare the different algorithms using classical non-smooth problems in imaging, such as denoising and deblurring. Moreover, we present applications of the algorithms to more advanced problems, such as magnetic resonance imaging, multilabel image segmentation, optical flow estimation, stereo matching, and classification.
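A minimal sketch of the kind of first-order scheme the survey covers: proximal-gradient (ISTA-style) iterations for the l1 "denoising" problem min_x 0.5*||x - b||^2 + lam*||x||_1. The function names are illustrative, not taken from the paper.

```python
def soft_threshold(v, t):
    # proximal operator of t*|x|: shrink v toward zero by t
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista_denoise(b, lam, steps=50, step=1.0):
    # proximal gradient on 0.5*(x - b)^2 + lam*|x|, coordinatewise
    x = [0.0] * len(b)
    for _ in range(steps):
        grad = [xi - bi for xi, bi in zip(x, b)]        # gradient of the smooth part
        x = [soft_threshold(xi - step * g, step * lam)  # prox step on the non-smooth part
             for xi, g in zip(x, grad)]
    return x
```

For this simple quadratic data term the iteration reaches the closed-form solution `soft_threshold(b, lam)` after a single step; the value of the sketch is that the same gradient-then-prox pattern extends to the deblurring and segmentation problems the survey treats.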

477 citations


Proceedings Article
19 Jun 2016
TL;DR: Shows that SGM is algorithmically stable in the sense of Bousquet and Elisseeff, deriving stability bounds for both convex and non-convex optimization, and that popular techniques for training large deep models are stability-promoting.
Abstract: We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.
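The stability argument can be illustrated with a toy experiment (not from the paper): run the same seeded SGD on two datasets that differ in a single example and observe that the learned parameters stay close when the number of passes is small.

```python
import random

def sgd_1d(data, lr=0.05, epochs=2, seed=0):
    # minimize the empirical loss mean((w*x - y)^2) by stochastic gradient steps
    rng = random.Random(seed)
    w = 0.0
    order = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(order)
        for i in order:
            x, y = data[i]
            w -= lr * 2.0 * (w * x - y) * x  # gradient of (w*x - y)^2 at one example
    return w

# two "neighbouring" datasets differing in a single example
data_a = [(i / 10.0, 2.0 * i / 10.0) for i in range(10)]
data_b = data_a[:-1] + [(0.9, 2.8)]  # last label perturbed by +1
```

Because both runs use the same seed, they visit examples in the same order, so the gap between the two learned parameters reflects only the single changed example, mirroring the uniform-stability notion used in the analysis.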

452 citations


Journal ArticleDOI
TL;DR: In this article, a discrete extension of modern first-order continuous optimization methods is proposed to find high quality feasible solutions that are used as warm starts to a MIO solver that finds provably optimal solutions.
Abstract: In the period 1991–2015, algorithmic advances in Mixed Integer Optimization (MIO) coupled with hardware improvements have resulted in an astonishing 450 billion factor speedup in solving MIO problems. We present a MIO approach for solving the classical best subset selection problem of choosing $k$ out of $p$ features in linear regression given $n$ observations. We develop a discrete extension of modern first-order continuous optimization methods to find high quality feasible solutions that we use as warm starts to a MIO solver that finds provably optimal solutions. The resulting algorithm (a) provides a solution with a guarantee on its suboptimality even if we terminate the algorithm early, (b) can accommodate side constraints on the coefficients of the linear regression and (c) extends to finding best subset solutions for the least absolute deviation loss function. Using a wide variety of synthetic and real datasets, we demonstrate that our approach solves problems with $n$ in the 1000s and $p$ in the 100s in minutes to provable optimality, and finds near optimal solutions for $n$ in the 100s and $p$ in the 1000s in minutes. We also establish via numerical experiments that the MIO approach performs better than Lasso and other popularly used sparse learning procedures, in terms of achieving sparse solutions with good predictive power.
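The discrete extension of first-order methods can be sketched as iterative hard thresholding: a gradient step on the least-squares loss followed by projection onto k-sparse vectors. This is a simplified stand-in for the paper's warm-start procedure, with illustrative names.

```python
def hard_threshold(v, k):
    # projection onto the set of k-sparse vectors: keep the k largest magnitudes
    keep = set(sorted(range(len(v)), key=lambda i: -abs(v[i]))[:k])
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

def discrete_first_order(X, y, k, steps=100, lr=0.5):
    # projected gradient on 0.5*||X b - y||^2 subject to ||b||_0 <= k
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(steps):
        r = [sum(X[i][j] * b[j] for j in range(p)) - y[i] for i in range(n)]  # residual
        g = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]         # gradient X^T r
        b = hard_threshold([b[j] - lr * g[j] for j in range(p)], k)
    return b
```

The k-sparse iterate produced this way would then serve as a warm start for the exact MIO formulation.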

441 citations


Journal ArticleDOI
11 Jul 2016
TL;DR: Quantitative assessments on standard datasets show that the new approach exceeds the state of the art in accuracy, and runs in real-time on CPU only, which frees up the commonly over-burdened GPU for experience designers.
Abstract: Fully articulated hand tracking promises to enable fundamentally new interactions with virtual and augmented worlds, but the limited accuracy and efficiency of current systems has prevented widespread adoption. Today's dominant paradigm uses machine learning for initialization and recovery followed by iterative model-fitting optimization to achieve a detailed pose fit. We follow this paradigm, but make several changes to the model-fitting, namely using: (1) a more discriminative objective function; (2) a smooth-surface model that provides gradients for non-linear optimization; and (3) joint optimization over both the model pose and the correspondences between observed data points and the model surface. While each of these changes may actually increase the cost per fitting iteration, we find a compensating decrease in the number of iterations. Further, the wide basin of convergence means that fewer starting points are needed for successful model fitting. Our system runs in real-time on CPU only, which frees up the commonly over-burdened GPU for experience designers. The hand tracker is efficient enough to run on low-power devices such as tablets. We can track up to several meters from the camera to provide a large working volume for interaction, even using the noisy data from current-generation depth cameras. Quantitative assessments on standard datasets show that the new approach exceeds the state of the art in accuracy. Qualitative results take the form of live recordings of a range of interactive experiences enabled by this new approach.

302 citations




Journal ArticleDOI
TL;DR: An evolutionary algorithm is presented to optimize the design of a trauma system, which is a typical offline data-driven multiobjective optimization problem, where the objectives and constraints can be evaluated using incidents only.
Abstract: Most existing work on evolutionary optimization assumes that there are analytic functions for evaluating the objectives and constraints. In the real world, however, the objective or constraint values of many optimization problems can be evaluated solely based on data and solving such optimization problems is often known as data-driven optimization. In this paper, we divide data-driven optimization problems into two categories, i.e., offline and online data-driven optimization, and discuss the main challenges involved therein. An evolutionary algorithm is then presented to optimize the design of a trauma system, which is a typical offline data-driven multiobjective optimization problem, where the objectives and constraints can be evaluated using incidents only. As each single function evaluation involves a large amount of patient data, we develop a multifidelity surrogate-management strategy to reduce the computation time of the evolutionary optimization. The main idea is to adaptively tune the approximation fidelity by clustering the original data into different numbers of clusters and a regression model is constructed to estimate the required minimum fidelity. Experimental results show that the proposed algorithm is able to save up to 90% of computation time without much sacrifice of the solution quality.
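A toy illustration (not the paper's algorithm) of trading fidelity for computation by clustering: evaluate the objective on size-weighted cluster centroids instead of every data record, so fewer clusters means a cheaper, lower-fidelity estimate.

```python
def split_clusters(data, k):
    # crude one-dimensional clustering: sort and cut into k contiguous chunks
    s = sorted(data)
    q, r = divmod(len(s), k)
    clusters, start = [], 0
    for i in range(k):
        size = q + (1 if i < r else 0)
        clusters.append(s[start:start + size])
        start += size
    return clusters

def lowfi_objective(data, k, f):
    # low-fidelity evaluation: apply f to each cluster centroid, weighted by cluster size
    clusters = split_clusters(data, k)
    total = sum(f(sum(c) / len(c)) * len(c) for c in clusters)
    return total / len(data)
```

Adaptively choosing `k` (the fidelity) is the role the paper assigns to its regression model; here it is left to the caller.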

175 citations


Journal ArticleDOI
TL;DR: This review presents models, theory, and numerical methods developed for structural optimization of trusses with discrete design variables in the period 1968–2014, and collects, for the first time, the articles in the field presenting deterministic optimization methods and metaheuristics.
Abstract: This review presents developed models, theory, and numerical methods for structural optimization of trusses with discrete design variables in the period 1968–2014. The comprehensive reference list collects, for the first time, the articles in the field presenting deterministic optimization methods and metaheuristics. The field has experienced a shift in focus from deterministic methods to metaheuristics, i.e. stochastic search methods. Based on the reported numerical results it is however not possible to conclude that this shift has improved the ability to solve application-relevant problems. This, and other, observations lead to a set of recommended research tasks and objectives to bring the field forward. The development of a publicly available benchmark library is urgently needed to support development and assessment of existing and new heuristics and methods. Combined with this effort, it is recommended that the field begins to use modern methods such as performance profiles for fair and accurate comparison of optimization methods. Finally, theoretical results are rare in this field. This means that most recent methods and heuristics are not supported by mathematical theory. The field should therefore re-focus on theoretical issues such as problem analysis and convergence properties of new methods.

124 citations


Journal ArticleDOI
TL;DR: This work proposes a new metaheuristic, Yin-Yang-Pair Optimization (YYPO), a low-complexity stochastic algorithm that works with two points and generates additional points depending on the number of decision variables in the optimization problem.

96 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: This paper proposes a new formulation of the trajectory planning problem as a Mixed-Integer Quadratic Program, which can be solved efficiently using widely available solvers, and the resulting trajectory is guaranteed to be globally optimal.
Abstract: This paper considers the problem of optimal trajectory generation for autonomous driving under both continuous and logical constraints. Classical approaches based on continuous optimization formulate the trajectory generation problem as a nonlinear program, in which vehicle dynamics and obstacle avoidance requirements are enforced as nonlinear equality and inequality constraints. In general, gradient-based optimization methods are then used to find the optimal trajectory. However, these methods are ill-suited for logical constraints such as those raised by traffic rules, presence of obstacles and, more generally, to the existence of multiple maneuver variants. We propose a new formulation of the trajectory planning problem as a Mixed-Integer Quadratic Program. This formulation can be solved efficiently using widely available solvers, and the resulting trajectory is guaranteed to be globally optimal. We apply our framework to several scenarios that are still widely considered as challenging for autonomous driving, such as obstacle avoidance with multiple maneuver choices, overtaking with oncoming traffic or optimal lane-change decision making. Simulation results demonstrate the effectiveness of our approach and its real-time applicability.
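The logical constraints are what force the integer variables: each disjunct of an "avoid this box" requirement gets a binary variable and a big-M relaxation. The sketch below (illustrative, not the paper's formulation) brute-forces the binary assignments to check whether a point satisfies the disjunction, which is what a MIQP solver does implicitly via branch and bound.

```python
import itertools

def outside_box_bigM(x, y, box, M=100.0):
    # box = (xmin, xmax, ymin, ymax); the point must satisfy at least one of
    # x <= xmin, x >= xmax, y <= ymin, y >= ymax.  Each disjunct gets a binary
    # z_i; setting z_i = 0 relaxes its inequality by the large constant M.
    xmin, xmax, ymin, ymax = box
    for z in itertools.product([0, 1], repeat=4):
        if sum(z) < 1:           # the MIQP would add the constraint sum(z) >= 1
            continue
        feasible = (x <= xmin + M * (1 - z[0]) and
                    x >= xmax - M * (1 - z[1]) and
                    y <= ymin + M * (1 - z[2]) and
                    y >= ymax - M * (1 - z[3]))
        if feasible:
            return True          # some binary assignment satisfies all constraints
    return False
```

In the full trajectory problem, each maneuver variant corresponds to one such binary assignment, and the quadratic objective is optimized over the continuous variables for the chosen assignment.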

96 citations


Journal ArticleDOI
TL;DR: A new real-time optimization scheme that exploits the inherent smoothness of the plant mapping to enable reliable optimization, combining the quadratic approximation approach used in derivative-free optimization with the iterative gradient-modification optimization scheme.

88 citations


Journal ArticleDOI
TL;DR: The multi-class approach proposed here increases the initial exploration capability of the optimization process, resulting in a more efficient search; the results are compared against both a modified TLBO algorithm and other optimization methods.

Journal ArticleDOI
01 Jun 2016
TL;DR: A novel binary artificial algae algorithm (BAAA) is proposed to efficiently solve multidimensional knapsack problem (MKP) and shows the superiority of BAAA to many compared existing algorithms.
Abstract: Highlights: A novel binary artificial algae algorithm is proposed for solving MKPs. The method is composed of a discrete process, repair operators, and elite local search. The results show the proposed method outperforms many existing algorithms. The multidimensional knapsack problem (MKP) is a well-known NP-hard optimization problem. Various meta-heuristic methods are dedicated to solve this problem in literature. Recently a new meta-heuristic algorithm, called artificial algae algorithm (AAA), was presented, which has been successfully applied to solve various continuous optimization problems. However, due to its continuous nature, AAA cannot directly handle discrete problems such as MKP. In view of this, this paper proposes a binary artificial algae algorithm (BAAA) to efficiently solve MKP. This algorithm is composed of a discrete process, repair operators and elite local search. In the discrete process, two logistic functions with different coefficients of curve are studied to achieve good discretization results. Repair operators are performed to make the solution feasible and increase the efficiency. Finally, elite local search is introduced to improve the quality of solutions. To demonstrate the efficiency of the proposed algorithm, simulations and evaluations are carried out on a total of 94 benchmark problems and compared with other bio-inspired state-of-the-art algorithms of recent years, including MBPSO, BPSOTVAC, CBPSOTVAC, GADS, bAFSA, and IbAFSA. The results show the superiority of BAAA over many existing algorithms.
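The discrete process and repair operator might look roughly like the following sketch, assuming a logistic transfer function for binarization and a greedy profit/weight repair; the details are illustrative rather than taken from the paper.

```python
import math, random

def logistic(x, a=1.0):
    # transfer function mapping a continuous position to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-a * x))

def binarize(position, rng, a=1.0):
    # discrete process: sample each bit with probability given by the transfer function
    return [1 if rng.random() < logistic(x, a) else 0 for x in position]

def repair(sol, weights, capacity, profits):
    # repair operator: drop items with the worst profit/weight ratio until feasible
    order = sorted(range(len(sol)), key=lambda i: profits[i] / weights[i])
    for i in order:
        if sum(w for w, s in zip(weights, sol) if s) <= capacity:
            break
        sol[i] = 0
    return sol
```

The two logistic coefficients studied in the paper correspond to different values of the steepness parameter `a` above.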

Journal ArticleDOI
TL;DR: A general surrogate model framework is developed and it is shown how sampling strategies of well-known surrogate model algorithms for continuous optimization can be modified for mixed-integer variables and how to combine different sampling strategies and local search to obtain high-accuracy solutions.
Abstract: We introduce MISO, the mixed-integer surrogate optimization framework. MISO aims at solving computationally expensive black-box optimization problems with mixed-integer variables. This type of optimization problem is encountered in many applications for which time consuming simulation codes must be run in order to obtain an objective function value. Examples include optimal reliability design and structural optimization. A single objective function evaluation may take from several minutes to hours or even days. Thus, only very few objective function evaluations are allowable during the optimization. The development of algorithms for this type of optimization problems has, however, rarely been addressed in the literature. Because the objective function is black-box, derivatives are not available and numerically approximating the derivatives requires a prohibitively large number of function evaluations. Therefore, we use computationally cheap surrogate models to approximate the expensive objective function and to decide at which points in the variable domain the expensive objective function should be evaluated. We develop a general surrogate model framework and show how sampling strategies of well-known surrogate model algorithms for continuous optimization can be modified for mixed-integer variables. We introduce two new algorithms that combine different sampling strategies and local search to obtain high-accuracy solutions. We compare MISO in numerical experiments to a genetic algorithm, NOMAD version 3.6.2, and SO-MI. The results show that MISO is in general more efficient than NOMAD and the genetic algorithm with respect to finding improved solutions within a limited budget of allowable evaluations. The performance of MISO depends on the chosen sampling strategy. The MISO algorithm that combines a coordinate perturbation search with a target value strategy and a local search performs best among all algorithms.
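A minimal cubic-RBF interpolant in one dimension, as a sketch of the kind of surrogate model MISO builds over expensive evaluations (the real framework handles mixed-integer inputs, polynomial tails, and the sampling strategies described above):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (Gauss-Jordan form)
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                factor = M[r][c] / M[c][c]
                M[r] = [mr - factor * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rbf_fit(xs, ys):
    # cubic RBF interpolant s(x) = sum_i w_i * |x - x_i|^3 through the samples
    A = [[abs(xi - xj) ** 3 for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * abs(x - xi) ** 3 for wi, xi in zip(w, xs))
```

The surrogate reproduces the expensive function exactly at the evaluated points and is cheap to query everywhere else, which is what makes it usable for deciding where to evaluate next.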

Journal ArticleDOI
TL;DR: Experimental results on a large number of benchmark functions of different dimensions, assessed using non-parametric statistical tests, show that the proposed RPEO-PLM algorithm outperforms other popular population-based evolutionary algorithms, e.g., the real-coded genetic algorithm (RCGA) with adaptive directed mutation, RCGA with polynomial mutation, and an improved RPEO algorithm with random mutation, in terms of accuracy.

Proceedings Article
12 Feb 2016
TL;DR: This paper proposes the randomized coordinate shrinking classification algorithm to learn the model, forming the RACOS algorithm, for optimization in continuous and discrete domains, and proves that optimization problems with Local Lipschitz continuity can be solved in polynomial time by proper configurations of this framework.
Abstract: Many randomized heuristic derivative-free optimization methods share a framework that iteratively learns a model for promising search areas and samples solutions from the model. This paper studies a particular setting of such framework, where the model is implemented by a classification model discriminating good solutions from bad ones. This setting allows a general theoretical characterization, where critical factors to the optimization are discovered. We also prove that optimization problems with Local Lipschitz continuity can be solved in polynomial time by proper configurations of this framework. Following the critical factors, we propose the randomized coordinate shrinking classification algorithm to learn the model, forming the RACOS algorithm, for optimization in continuous and discrete domains. Experiments on the testing functions as well as on the machine learning tasks including spectral clustering and classification with Ramp loss demonstrate the effectiveness of RACOS.
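A much-simplified loop in the spirit of the framework described above (illustrative only: RACOS learns the promising region with a classification model, whereas this sketch uses a fixed axis-aligned box shrinking around the incumbent):

```python
import random

def shrink_box_min(f, dim, lo, hi, budget=100, seed=0, shrink=0.9):
    # sample candidates inside a "positive" box around the best-so-far solution,
    # shrinking the box as the search proceeds
    rng = random.Random(seed)
    box = [(lo, hi)] * dim
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_val = f(best)
    for _ in range(budget):
        cand = [rng.uniform(a, b) for a, b in box]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
        # crude stand-in for learning the positive region with a classifier
        box = [(best[d] + shrink * (a - best[d]), best[d] + shrink * (b - best[d]))
               for d, (a, b) in enumerate(box)]
    return best, best_val
```

In RACOS proper, the box is replaced by a learned discriminator between good and bad solutions, which is what the theoretical characterization analyzes.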

Journal ArticleDOI
TL;DR: The proposed surrogate-assisted ES can be incorporated into an intelligent system that finds approximate Pareto optimal solutions to simulation-based constrained multi-objective optimization problems in various applications including engineering design optimization, production management and manufacturing.
Abstract: Highlights: A new surrogate-assisted ES for constrained multi-objective optimization is developed. Surrogates are used to identify the most promising among many trial offspring. A radial basis function (RBF) model is used to implement the method. The method is tested on benchmark problems and manufacturing and robotics applications. The proposed method generally outperforms an ES and NSGA-II on the problems used. In many real-world optimization problems, several conflicting objectives must be achieved and optimized simultaneously and the solutions are often required to satisfy certain restrictions or constraints. Moreover, in some applications, the numerical values of the objectives and constraints are obtained from computationally expensive simulations. Many multi-objective optimization algorithms for continuous optimization have been proposed in the literature and some have been incorporated or used in conjunction with expert and intelligent systems. However, relatively few of these multi-objective algorithms handle constraints, and even fewer, use surrogates to approximate the objective or constraint functions when these functions are computationally expensive. This paper proposes a surrogate-assisted evolution strategy (ES) that can be used for constrained multi-objective optimization of expensive black-box objective functions subject to expensive black-box inequality constraints. Such an algorithm can be incorporated into an intelligent system that finds approximate Pareto optimal solutions to simulation-based constrained multi-objective optimization problems in various applications including engineering design optimization, production management and manufacturing. The main idea in the proposed algorithm is to generate a large number of trial offspring in each generation and use the surrogates to predict the objective and constraint function values of these trial offspring.
Then the algorithm performs an approximate non-dominated sort of the trial offspring based on the predicted objective and constraint function values, and then it selects the most promising offspring (those with the smallest predicted ranks from the non-dominated sort) to become the actual offspring for the current generation that will be evaluated using the expensive objective and constraint functions. The proposed method is implemented using cubic radial basis function (RBF) surrogate models to assist the ES. The resulting RBF-assisted ES is compared with the original ES and to NSGA-II on 20 test problems involving 2-15 decision variables, 2-5 objectives and up to 13 inequality constraints. These problems include well-known benchmark problems and application problems in manufacturing and robotics. The numerical results showed that the RBF-assisted ES generally outperformed the original ES and NSGA-II on the problems used when the computational budget is relatively limited. These results suggest that the proposed surrogate-assisted ES is promising for computationally expensive constrained multi-objective optimization.
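The approximate non-dominated sort rests on the Pareto dominance relation, which can be sketched as follows (a minimal version for minimization; the paper's sort additionally ranks by front and handles constraint violations):

```python
def dominates(u, v):
    # u Pareto-dominates v (minimization): no worse everywhere, strictly better somewhere
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(points):
    # indices of points not dominated by any other point (the first Pareto front)
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```

In the surrogate-assisted ES, this relation is applied to *predicted* objective vectors, so only the offspring on the best predicted fronts are evaluated with the expensive simulations.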

Journal ArticleDOI
15 Mar 2016
TL;DR: The proposed algorithms are proved to be effective for heterogeneous nonlinear agents to achieve the optimization solution in the semi-global sense, even with an exponential convergence rate.
Abstract: In this paper, distributed optimization control for a group of autonomous Lagrangian systems is studied to achieve an optimization task with local cost functions. To solve the problem, two continuous-time distributed optimization algorithms are designed for multiple heterogeneous Lagrangian agents with uncertain parameters. The proposed algorithms are proved to be effective for these heterogeneous nonlinear agents to achieve the optimization solution in the semi-global sense, even with an exponential convergence rate. Moreover, simulations illustrate the effectiveness of our optimization algorithms.

Journal ArticleDOI
01 Jul 2016
TL;DR: A new algorithm called Quantum-inspired Firefly Algorithm with Particle Swarm Optimization (QIFAPSO) that, among other things, adapts the firefly approach to solve discrete optimization problems and ensures better control of solution diversity.
Abstract: The firefly algorithm is a recent meta-heuristic inspired from nature. It is based on swarm intelligence of fireflies and generally used for solving continuous optimization problems. This paper proposes a new algorithm called "Quantum-inspired Firefly Algorithm with Particle Swarm Optimization (QIFAPSO)" that among other things, adapts the firefly approach to solve discrete optimization problems. The proposed algorithm uses the basic concepts of quantum computing such as superposition states of Q-bit and quantum measure to ensure a better control of the solutions diversity. Moreover, we use a discrete representation for fireflies and we propose a variant of the well-known Hamming distance to compute the attractiveness between them. Finally, we combine two strategies that cooperate in exploring the search space: the first one is the move of less bright fireflies towards the brighter ones and the second strategy is the PSO movement in which a firefly moves by taking into account its best position as well as the best position of its neighborhood. Of course, these two strategies of fireflies' movement are adapted to the quantum representation used in the algorithm for potential solutions. In order to validate our idea and show the efficiency of the proposed algorithm, we have used the multidimensional knapsack problem which is known as an NP-Complete problem and we have conducted various tests of our algorithm on different instances of this problem. The experimental results of our algorithm are competitive and in most cases are better than that of existing methods.
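The Hamming-distance attractiveness can be sketched as follows (`beta0` and `gamma` are the usual firefly parameters; the exact variant used in the paper may differ):

```python
import math

def hamming(a, b):
    # number of differing bits between two binary fireflies
    return sum(x != y for x, y in zip(a, b))

def attractiveness(beta0, gamma, a, b):
    # attractiveness decays with the squared (Hamming) distance between fireflies,
    # mirroring the beta0 * exp(-gamma * d^2) law of the continuous algorithm
    d = hamming(a, b)
    return beta0 * math.exp(-gamma * d * d)
```

Replacing Euclidean distance with Hamming distance is what lets the brightness-driven movement rule operate directly on the discrete (bit-string) representation.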

Journal ArticleDOI
TL;DR: The proposed Laplacian BBO is an efficient and reliable algorithm for solving not only continuous functions but also real-life problems like camera calibration; a t-test has been employed to confirm that Laplacian BBO performs better than Blended BBO.
Abstract: This paper provides three innovations. Firstly, a new Laplacian BBO is presented, which introduces a Laplacian migration operator based on the Laplace Crossover of Real Coded Genetic Algorithms. Secondly, the performance of the Laplacian BBO and Blended BBO is exhibited on the latest benchmark collection of CEC 2014 (to the best of the authors' knowledge, the complete CEC 2014 benchmarks have not been solved by Blended BBO). On the basis of the criteria laid down in CEC 2014, as well as the popular evaluation criterion called the Performance Index, it is shown that Laplacian BBO outperforms Blended BBO in terms of the error value defined in the CEC 2014 benchmark collection. A t-test has also been employed to strengthen the fact that Laplacian BBO performs better than Blended BBO. The third innovation of the paper is the use of the proposed Laplacian BBO and Blended BBO to solve a real-life problem from the field of Computer Vision. It is concluded that the proposed Laplacian BBO is an efficient and reliable algorithm for solving not only continuous functions but also real-life problems like camera calibration.
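The Laplace Crossover underlying the migration operator places offspring at a Laplace-distributed multiple of the parents' coordinate-wise distance. A sketch under the standard formulation of that real-coded GA operator, with illustrative names (the paper's migration operator may differ in detail):

```python
import math, random

def laplace_beta(a=0.0, b=0.5, rng=random):
    # sample from a Laplace(a, b) distribution via the inverse transform
    u, r = rng.random(), rng.random()
    return a + b * math.log(u) if r <= 0.5 else a - b * math.log(u)

def laplace_crossover(x1, x2, beta):
    # both offspring are displaced from the parents by beta times the
    # coordinate-wise distance |x1 - x2|
    d = [abs(p, ) if False else abs(p - q) for p, q in zip(x1, x2)]
    o1 = [p + beta * di for p, di in zip(x1, d)]
    o2 = [q + beta * di for q, di in zip(x2, d)]
    return o1, o2
```

In the Laplacian BBO, this crossover replaces the plain copy/blend step of the migration operator, so emigrating features land near, rather than exactly on, the source habitat.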

Journal ArticleDOI
TL;DR: This study model the grid by (nonlinear) AC power flow equations, and assumes that attacks take the form of increased impedance along transmission lines, and describes optimization formulations of the problem of finding the most disruptive attack.
Abstract: Potential vulnerabilities in a power grid can be exposed by identifying those transmission lines on which attacks (in the form of interference with their transmission capabilities) causes maximum disruption to the grid. In this study, we model the grid by (nonlinear) AC power flow equations, and assume that attacks take the form of increased impedance along transmission lines. We quantify disruption in several different ways, including (a) overall deviation of the voltages at the buses from $1.0$ per unit (p.u.), and (b) the minimal amount of load that must be shed in order to restore the grid to stable operation. We describe optimization formulations of the problem of finding the most disruptive attack, which are either nonlinear programing problems or nonlinear bilevel optimization problems, and describe customized algorithms for solving these problems. Experimental results on the IEEE 118-Bus system and a Polish 2383-Bus system are presented.

Journal ArticleDOI
TL;DR: A novel differential evolution algorithm based on the adaptive differential evolution algorithm is proposed, implementing pbest roulette wheel selection and a retention mechanism to prevent individuals from gathering around the pbest vector, thus diversifying the population.
Abstract: A novel differential evolution algorithm based on adaptive differential evolution algorithm is proposed by implementing pbest roulette wheel selection and retention mechanism. Motivated by the observation that individuals with better function values can generate better offspring, we propose a fitness function value based pbest selection mechanism. The generated offspring with better fitness function value indicates that the pbest vector of the current individual is suitable for exploitation, so the pbest vector should be retained into the next generation. This modification prevents individuals from gathering around the pbest vector, thus diversifying the population. The performance of the proposed algorithm is extensively evaluated both on the 25 famous benchmark functions and four real-world application problems. Experimental results and statistical analyses show that the proposed algorithm is highly competitive when compared with other state-of-the-art differential evolution algorithms.
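A sketch of the two ingredients, fitness-based roulette selection of the pbest index and a DE/current-to-pbest-style mutation (names and details are illustrative, not the paper's exact scheme):

```python
import random

def roulette_pbest(fitness, rng):
    # roulette wheel choice among the population; for minimization we convert
    # fitness to weights so that better (lower) values are more likely chosen
    worst = max(fitness)
    weights = [worst - f + 1e-12 for f in fitness]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1

def mutate(pop, i, pbest_idx, F, rng):
    # DE/current-to-pbest/1 mutation around individual i
    a, b = rng.sample([j for j in range(len(pop)) if j != i], 2)
    xi, xp = pop[i], pop[pbest_idx]
    return [xi[d] + F * (xp[d] - xi[d]) + F * (pop[a][d] - pop[b][d])
            for d in range(len(xi))]
```

The retention mechanism of the paper would additionally carry a pbest index forward to the next generation whenever it produced an improving offspring.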

Journal ArticleDOI
TL;DR: In this paper, an efficient approach for topology optimization under uncertainty is reported, in which the uncertain optimization problem is transformed into a deterministic topology optimization problem with multiple load cases.
Abstract: This paper reports an efficient approach for uncertain topology optimization in which the uncertain optimization problem is equivalent to that of solving a deterministic topology optimization problem with multiple load cases. Probabilistic and fuzzy property of the directional uncertainty of the applied loads is considered in the topology optimization; the cloud model is employed to describe that property which can also take the correlations of the probability and fuzziness into account. Convergent and mesh-independent bi-directional evolutionary structural optimization (BESO) algorithms are utilized to obtain the final optimal solution. The proposed method is suitable for linear elastic problems with uncertain applied loads, subject to volume constraint. Several numerical examples are presented to demonstrate the capability and effectiveness of the proposed approach. In-depth discussions are also given on the effects of considering the probability and fuzziness of the directions of the applied loads on the final layout.

Journal ArticleDOI
TL;DR: Comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, and the results showed that the VOA is a viable solution for continuous optimization.
Abstract: A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iteratively population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called ‘antivirus’) to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, parti...

Journal ArticleDOI
TL;DR: Simulation-based design optimization methods integrate computer simulations, design modification tools, and optimization algorithms; the methods studied show faster convergence towards the global minimum than the original global methods and are a viable option for ship hydrodynamic optimization.

Proceedings ArticleDOI
24 Jul 2016
TL;DR: This paper introduces the R-Package FLACCO, which makes all relevant features available in a unified framework together with efficient helper functions, and presents a case study giving perspectives on ELA for multi-objective optimization problems.
Abstract: Exploratory Landscape Analysis (ELA) aims at understanding characteristics of single-objective continuous (black-box) optimization problems in an automated way. Moreover, the approach provides the basis for constructing algorithm selection models for unseen problem instances. Recently, it has gained increasing attention and numerical features have been designed by various research groups. This paper introduces the R-Package FLACCO which makes all relevant features available in a unified framework together with efficient helper functions. Moreover, a case study which gives perspectives to ELA for multi-objective optimization problems is presented.

Journal ArticleDOI
TL;DR: This paper starts by analyzing the worst-case complexity of general trust-region derivative-free methods for smooth functions and proposes a smoothing approach for the nonsmooth case, for which it is shown how to improve the existing resu...
Abstract: Trust-region methods are a broad class of methods for continuous optimization that have found application in a variety of problems and contexts. In particular, they have been studied and applied for problems without using derivatives. The analysis of trust-region derivative-free methods has focused on global convergence, and they have been proven to generate a sequence of iterates converging to stationarity independently of the starting point. Most of this analysis is carried out in the smooth case, and, moreover, little is known about the complexity or global rate of these methods. In this paper, we start by analyzing the worst-case complexity of general trust-region derivative-free methods for smooth functions. For the nonsmooth case, we propose a smoothing approach, for which we prove global convergence and bound the worst-case complexity effort. For the special case of nonsmooth functions that result from the composition of smooth and nonsmooth/convex components, we show how to improve the existing resu...
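A minimal derivative-free trust-region iteration, under the strong simplifying assumption of a linear model built from forward differences, might look like the following sketch. Real methods maintain fully linear interpolation models; all names below are hypothetical:

```python
def tr_dfo_minimize(f, x0, delta0=1.0, max_iter=60, eta=0.1):
    """Minimal derivative-free trust-region sketch: a linear model from
    forward differences at the trust-region scale is minimized on the TR
    box, and the radius grows or shrinks with the usual ratio test."""
    x, delta, fx = list(x0), delta0, f(x0)
    for _ in range(max_iter):
        g = []
        for i in range(len(x)):
            y = list(x)
            y[i] += delta
            g.append((f(y) - fx) / delta)    # sampled directional slope
        step = [-delta if gi > 0 else (delta if gi < 0 else 0.0) for gi in g]
        cand = [xi + si for xi, si in zip(x, step)]
        fc = f(cand)
        pred = -sum(gi * si for gi, si in zip(g, step))  # model decrease
        rho = (fx - fc) / pred if pred > 0 else -1.0
        if rho >= eta:                       # model agrees: accept, expand
            x, fx = cand, fc
            delta = min(2 * delta, 10.0)
        else:                                # poor model: shrink the region
            delta *= 0.5
    return x, fx

x, fx = tr_dfo_minimize(lambda z: (z[0] - 1) ** 2 + (z[1] + 2) ** 2, [4.0, 4.0])
print(round(fx, 6))
```

The ratio test (predicted vs. actual decrease) is the mechanism the global convergence analysis hinges on: only sufficient agreement between model and function lets the trust region grow.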

Journal ArticleDOI
TL;DR: An isothermal and stationary description of compressor stations, state MINLP and GDP models for operating a single station, and several continuous reformulations of the problem are discussed.
Abstract: When considering cost-optimal operation of gas transport networks, compressor stations play the most important role. Proper modeling of these stations leads to nonconvex mixed-integer nonlinear optimization problems. In this article, we give an isothermal and stationary description of compressor stations, state MINLP and GDP models for operating a single station, and discuss several continuous reformulations of the problem. The applicability and relevance of different model formulations, especially of those without discrete variables, is demonstrated by a computational study on both academic examples and real-world instances. In addition, we provide preliminary computational results for an entire network.
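One standard way to remove a discrete on/off decision is a complementarity-type penalty that relaxes the binary variable to [0, 1] and penalizes fractional values via a term rho*z*(1-z). The toy single-compressor model below (brute-force grid solve, with hypothetical names and cost data; the paper uses proper NLP formulations and solvers) illustrates the idea:

```python
def compressor_onoff_penalty(cost_on, cost_per_flow, demand, flow_max,
                             rho=100.0, grid=201):
    """Continuous reformulation sketch of one on/off compressor: the
    binary 'on' variable z is relaxed to [0, 1], and the concave penalty
    rho*z*(1-z) drives the optimum back to z in {0, 1}.  Feasibility:
    the delivered flow must meet demand and respect z * flow_max."""
    best = None
    for i in range(grid):
        z = i / (grid - 1)
        flow = demand                        # meet demand exactly
        if flow > z * flow_max + 1e-9:       # station must be 'on' enough
            continue
        obj = cost_on * z + cost_per_flow * flow + rho * z * (1 - z)
        if best is None or obj < best[0]:
            best = (obj, z, flow)
    return best

obj, z, flow = compressor_onoff_penalty(cost_on=5.0, cost_per_flow=1.0,
                                        demand=2.0, flow_max=10.0)
print(z, flow)
```

Because the penalty term is concave in z, the relaxed objective is minimized at an endpoint, so the continuous model recovers a genuine on/off decision (here z = 1) without any integer variable.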

Journal ArticleDOI
TL;DR: Numerical results show that the cascade procedure is an effective tool for finding the optimum design of structures with frequency constraints, and that it improves results even when the utilized algorithm is itself a powerful method.
Abstract: Controlling the values of the natural frequencies of a structure plays an important role in keeping its dynamic behavior at a desirable level. This paper is concerned with the optimal design of large-scale dome structures with multiple natural frequency constraints. This optimization problem is highly nonlinear, with several local optima in its search space. The idea of cascading, which allows a single optimization problem to be tackled in a number of autonomous optimization stages, is used. The procedure utilized in this paper reduces the objective function value over a number of optimization stages by initially operating on a small number of design variables, which is gradually increased stage after stage. In order to show the effect of coarsening the variables, independent of the effect of the algorithm, the recently developed enhanced colliding bodies optimization (ECBO) approach is employed in all stages of the present method. We also demonstrate the positive effect of the multi-DVC cascade optimization procedure even when the utilized algorithm is itself a powerful method. In order to test the performance of the algorithm, four dome truss design examples with 120, 600, 1180 and 1410 elements are optimized. The numerical results show that the utilized method is an effective tool for finding the optimum design of structures with frequency constraints.
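The multi-DVC cascade idea, i.e. tying design variables into a few groups early on and freeing more of them stage by stage, can be sketched as follows. A plain accept-if-better random search stands in for ECBO, and all function names and parameters are illustrative assumptions:

```python
import random

def cascade_minimize(f, dim, stages=(3, 6, 12), iters=300, seed=0):
    """Multi-DVC cascade sketch: early stages tie the design variables
    into a few groups (coarsening), later stages free more of them,
    and each stage warm-starts from the previous result."""
    rng = random.Random(seed)
    x = [0.5] * dim
    fx = f(x)
    for n_groups in stages:
        group = [i * n_groups // dim for i in range(dim)]  # variable -> group
        for _ in range(iters):
            delta = [rng.gauss(0, 0.1) for _ in range(n_groups)]
            cand = [xi + delta[group[i]] for i, xi in enumerate(x)]
            fc = f(cand)
            if fc < fx:                      # accept-if-better local search
                x, fx = cand, fc
    return x, fx

target = [0.1 * i for i in range(12)]
x, fx = cascade_minimize(lambda v: sum((vi - ti) ** 2
                                       for vi, ti in zip(v, target)), dim=12)
print(round(fx, 4))
```

The coarse early stages cheaply move the whole design into a good region, and only the final stage pays the cost of searching the full-dimensional space, which is the essence of the cascading procedure described above.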

Journal ArticleDOI
01 Dec 2016
TL;DR: A novel variant of SFLA for continuous optimization problems based on an expanded framework, called the Lévy flight-based shuffled frog-leaping algorithm (LSFLA), which has better performance than many state-of-the-art algorithms.
Abstract: Shuffled frog-leaping algorithm (SFLA), a meta-heuristic optimization algorithm inspired by the foraging behavior of frogs, has been widely applied to combinatorial problems, but it easily falls into local optima on continuous optimization problems. This paper proposes a novel variant of SFLA for continuous optimization problems based on an expanded framework, called the Lévy flight-based shuffled frog-leaping algorithm (LSFLA). In this new framework, the shuffling process, a local search step and a global search step are combined according to the exploration and exploitation mechanism. A Lévy flight-based attractor is adopted in the local search step; its mixture of short walking distances and occasionally longer walks enhances the local search ability of the algorithm. An interaction learning rule is used in the global search step, which enhances the exploration ability. In order to test the effectiveness of LSFLA, it was compared with many well-known heuristic methods on thirty benchmark functions, six real-world constrained continuous optimization problems and a real-world support vector machine (SVM) parameter optimization problem. The experimental results demonstrate that the proposed algorithm performs better than the other algorithms on these continuous optimization problems.
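The Lévy-flight attractor relies on steps drawn from a heavy-tailed Lévy-stable distribution, commonly generated with Mantegna's algorithm. The sketch below (illustrative, not the paper's exact attractor) shows the characteristic mix of short walks and rare long jumps:

```python
import math, random

def levy_step(beta=1.5, rng=random):
    """Mantegna's algorithm for a Lévy-stable step of index beta:
    mostly short moves with occasional much longer jumps."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

rng = random.Random(1)
steps = [levy_step(rng=rng) for _ in range(2000)]
short = sum(1 for s in steps if abs(s) < 1.0)       # local refinement moves
long_jumps = sum(1 for s in steps if abs(s) > 5.0)  # escapes from local optima
print(short, long_jumps)
```

The heavy tail is what distinguishes this attractor from a Gaussian mutation: most steps refine the current solution, while the occasional long jump lets the frog leave a local optimum, matching the exploitation/exploration balance described above.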

Journal ArticleDOI
TL;DR: This paper covers basic, easily achievable measures for accelerating electrical machine design optimization, techniques for a time- and computation-efficient exploration of the design space, and emerging techniques that pare the number of required FE simulations down to the minimum by nonlinearly modeling the optimization targets as functions of the design parameters.
Abstract: This paper deals with accelerating typical optimization scenarios for electrical machine designs. Besides the advantage of reduced computation time, this leads to a reduction in computational power and thus to lower power consumption when running the optimization. If machines of high power density are required, the result is usually a highly utilized assembly featuring nonlinear characteristics. Optimization scenarios are considered in which the evaluation of a potential design requires computationally expensive nonlinear finite element (FE) simulations. Improving the speed of optimization runs therefore takes top priority, and various measures can be considered. This paper is about basic and easily achievable measures, as well as techniques for a time- and computation-efficient exploration of the design space. Suggested improvements comprise sophisticated emerging techniques for modeling machine characteristics that pare the number of required FE simulations down to the minimum, and nonlinear models of the optimization targets as functions of the design parameters to further reduce the number of FE evaluations. In a case study, a typical optimization task is analyzed, and achievable speed improvements as well as remaining issues are discussed.
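The surrogate-modeling idea, i.e. fitting a cheap model of the optimization target from a handful of expensive FE runs and optimizing the model instead, can be illustrated with a one-dimensional quadratic fit. The function names and the stand-in "FE simulation" below are assumptions for illustration only:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit y ~ a + b*x + c*x**2 via the 3x3 normal
    equations, solved with tiny Gaussian elimination (no libraries)."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    rhs = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                     # forward elimination
        for row in range(col + 1, 3):
            m = A[row][col] / A[col][col]
            A[row] = [a_r - m * a_c for a_r, a_c in zip(A[row], A[col])]
            rhs[row] -= m * rhs[col]
    coef = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):                    # back substitution
        s = sum(A[row][c] * coef[c] for c in range(row + 1, 3))
        coef[row] = (rhs[row] - s) / A[row][row]
    return coef                              # [a, b, c]

def expensive_fe(x):
    """Stand-in for a costly nonlinear FE simulation of one target."""
    return (x - 2.0) ** 2 + 0.5

xs = [0.0, 1.0, 3.0, 4.0]                    # only four "FE runs"
ys = [expensive_fe(x) for x in xs]
a, b, c = fit_quadratic(xs, ys)
x_opt = -b / (2 * c)                         # minimize the cheap surrogate
print(x_opt, expensive_fe(x_opt))
```

Four expensive evaluations are enough to fit the surrogate, whose minimizer is then found analytically; in practice the surrogate candidate is verified with one more FE run and the model is refined, which is how the number of FE evaluations is kept to a minimum.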