
Showing papers on "Continuous optimization published in 1998"


Journal ArticleDOI
TL;DR: This paper introduces the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering and shows how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule.
Abstract: In many engineering optimization problems, the number of function evaluations is severely limited by time or cost. These problems pose a special challenge to the field of global optimization, since existing methods often require more function evaluations than can be comfortably afforded. One way to address this challenge is to fit response surfaces to data collected by evaluating the objective and constraint functions at a few points. These surfaces can then be used for visualization, tradeoff analysis, and optimization. In this paper, we introduce the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering. We then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule. The key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface (by sampling where it is minimized) with the need to improve the approximation (by sampling where prediction error may be high). Striking this balance requires solving certain auxiliary problems which have previously been considered intractable, but we show how these computational obstacles can be overcome.
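The exploit/explore balance the abstract describes is commonly realized through an expected-improvement criterion over a Gaussian-process surrogate. Below is a minimal, self-contained sketch of that idea for a 1D problem; the test function, sample points, and fixed kernel hyperparameters are illustrative assumptions, not the paper's own code.

```python
# Sketch of response-surface global optimization: fit a GP surrogate to a few
# samples of an expensive function, then score candidates by expected
# improvement, which trades a low predicted value against high uncertainty.
# All hyperparameters are fixed by hand for illustration.
import math
import numpy as np

def kernel(a, b, length=0.3):
    """Squared-exponential covariance between two sets of 1D points."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(X, y, Xstar, jitter=1e-8):
    """Posterior mean and standard deviation of a zero-mean, unit-variance GP."""
    K = kernel(X, X) + jitter * np.eye(len(X))
    Ks = kernel(Xstar, X)
    mean = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 0.0, None)
    return mean, np.sqrt(var)

def expected_improvement(mean, sd, best):
    """E[max(best - Y, 0)] for Y ~ N(mean, sd^2); zero where sd is negligible."""
    ei = np.zeros_like(mean)
    ok = sd > 1e-12
    z = (best - mean[ok]) / sd[ok]
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    ei[ok] = np.maximum((best - mean[ok]) * cdf + sd[ok] * pdf, 0.0)
    return ei

f = lambda x: np.sin(10 * x) + x       # stand-in for an expensive objective
X = np.array([0.1, 0.5, 0.9])          # the few points already evaluated
y = f(X)
grid = np.linspace(0.0, 1.0, 201)
mean, sd = gp_predict(X, y - y.mean(), grid)
ei = expected_improvement(mean + y.mean(), sd, y.min())
x_next = grid[np.argmax(ei)]           # next point to evaluate expensively
```

The EI maximization here is the kind of auxiliary problem the paper discusses: it is cheap relative to the true objective, so it can be solved densely.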

6,914 citations


Book
31 Oct 1998
TL;DR: Topics studied include the Steiner ratio of Banach-Minkowski spaces, probabilistic verification and non-approximability, and network-based models and algorithms in data mining and knowledge discovery.
Abstract: Chapters include: A Unified Approach for Domination Problems on Different Network Topologies; Advanced Techniques for Dynamic Programming; Advances in Group Testing; Advances in Scheduling Problems; Algebrization and Randomization Methods; Algorithmic Aspects of Domination in Graphs; Algorithms and Metaheuristics for Combinatorial Matrices; Algorithms for the Satisfiability Problem; Bin Packing Approximation Algorithms: Survey and Classification; Binary Unconstrained Quadratic Optimization Problem; Combinatorial Optimization Algorithms for Probe Design and Selection Problems; Combinatorial Optimization in Data Mining; Combinatorial Optimization Techniques for Network-based Data Mining; Combinatorial Optimization Techniques in Transportation and Logistic Networks; Complexity Issues on PTAS; Computing Distances between Evolutionary Trees; Connected Dominating Set in Wireless Networks; Connections between Continuous and Discrete Extremum Problems, Generalized Systems and Variational Inequalities; Coverage Problems in Sensor Networks; Data Correcting Approach for Routing and Location in Networks; Dual Integrality in Combinatorial Optimization; Dynamical System Approaches to Combinatorial Optimization; Efficient Algorithms for Geometric Shortest Path Query Problems; Energy Efficiency in Wireless Networks; Equitable Coloring of Graphs; Faster and Space Efficient Exact Exponential Algorithms: Combinatorial and Algebraic Approaches; Fault-Tolerant Facility Allocation; Fractional Combinatorial Optimization; Fuzzy Combinatorial Optimization Problems; Geometric Optimization in Wireless Networks; Gradient-Constrained Minimum Interconnection Networks; Graph Searching and Related Problems; Graph Theoretic Clique Relaxations and Applications; Greedy Approximation Algorithms; Hardness and Approximation of Network Vulnerability; Job Shop Scheduling with Petri Nets; Key Tree Optimization; Linear Programming Analysis of Switching Networks; Map of Geometric Minimal Cuts with Applications; Max-Coloring; Maximum Flow Problems and an NP-complete variant on Edge Labeled Graphs; Modern Network Interdiction Problems and Algorithms; Network Optimization; Neural Network Models in Combinatorial Optimization; On Coloring Problems; Online and Semi-online Scheduling; Online Frequency Allocation and Mechanism Design for Cognitive Radio Wireless Networks; Optimal Partitions; Optimization in Multi-Channel Wireless Networks; Optimization Problems in Data Broadcasting; Optimization Problems in Online Social Networks; Optimizing Data Collection Capacity in Wireless Networks; Packing Circles in Circles and Applications; Partition in High Dimensional Spaces; Probabilistic Verification and Non-approximability; Protein Docking Problem as Combinatorial Optimization Using Beta-complex; Quadratic Assignment Problems; Reactive Business Intelligence: Combining the Power of Optimization with Machine Learning; Reformulation-Linearization Techniques for Discrete Optimization Problems; Resource Allocation Problems; Rollout Algorithms for Discrete Optimization: A Survey; Simplicial Methods for Approximating Fixed Point with Applications in Combinatorial Optimizations; Small World Networks in Computational Neuroscience; Social Structure Detection; Steiner Minimal Trees: An Introduction, Parallel Computation and Future Work; Steiner Minimum Trees in E^3; Tabu Search; Variations of Dominating Set Problem.

921 citations




Book
30 Sep 1998
TL;DR: Average cost optimization theory is presented for finite and countable state spaces, along with an inventory model and average cost optimization of continuous-time processes.
Abstract: Optimization Criteria. Finite Horizon Optimization. Infinite Horizon Discounted Cost Optimization. An Inventory Model. Average Cost Optimization for Finite State Spaces. Average Cost Optimization Theory for Countable State Spaces. Computation of Average Cost Optimal Policies for Infinite State Spaces. Optimization Under Actions at Selected Epochs. Average Cost Optimization of Continuous Time Processes. Appendices. Bibliography. Index.

475 citations


Journal Article
TL;DR: Simultaneous perturbation stochastic approximation (SPSA) as mentioned in this paper is a widely used method for multivariate optimization problems that requires only two measurements of the objective function regardless of the dimension of the optimization problem.
Abstract: Multivariate stochastic optimization plays a major role in the analysis and control of many engineering systems. In almost all real-world optimization problems, it is necessary to use a mathematical algorithm that iteratively seeks out the solution because an analytical (closed-form) solution is rarely available. In this spirit, the “simultaneous perturbation stochastic approximation (SPSA)” method for difficult multivariate optimization problems has been developed. SPSA has recently attracted considerable international attention in areas such as statistical parameter estimation, feedback control, simulation-based optimization, signal and image processing, and experimental design. The essential feature of SPSA—which accounts for its power and relative ease of implementation—is the underlying gradient approximation that requires only two measurements of the objective function regardless of the dimension of the optimization problem. This feature allows for a significant decrease in the cost of optimization, especially in problems with a large number of variables to be optimized.
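The two-measurement gradient approximation is simple enough to sketch directly. The toy implementation below minimizes a noisy quadratic; the gain constants and decay exponents are common illustrative choices, not prescriptions from this paper.

```python
# Sketch of SPSA: each iteration perturbs every coordinate simultaneously with
# a random +/-1 vector, so only two (possibly noisy) objective evaluations are
# needed per step, independent of the problem dimension.
import numpy as np

def spsa(loss, theta0, iters=2000, a=0.1, c=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        ak = a / (k + 1) ** 0.602          # decaying step-size gain
        ck = c / (k + 1) ** 0.101          # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher vector
        # Two evaluations yield the full gradient estimate at once.
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Noisy quadratic objective with known minimizer `target` (made up for the demo).
target = np.array([1.0, -2.0, 3.0])
rng_noise = np.random.default_rng(1)
noisy_loss = lambda th: np.sum((th - target) ** 2) + 0.01 * rng_noise.normal()
theta_hat = spsa(noisy_loss, np.zeros(3))
```

Note that a finite-difference scheme would need 2n evaluations per step in n dimensions; SPSA always needs exactly two.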

337 citations


Proceedings ArticleDOI
01 Dec 1998
TL;DR: This work presents a review of methods for optimizing stochastic systems using simulation and focuses on gradient-based techniques for optimization with respect to continuous decision parameters and on random search methods for optimization with respect to discrete decision parameters.
Abstract: We present a review of methods for optimizing stochastic systems using simulation. The focus is on gradient based techniques for optimization with respect to continuous decision parameters and on random search methods for optimization with respect to discrete decision parameters.
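As a toy illustration of the discrete side of this survey, the sketch below applies a naive random neighbourhood search to a simulated newsvendor problem. The demand model, prices, and acceptance rule are invented for illustration and are not methods from the paper.

```python
# Random search over a discrete decision parameter when the objective can only
# be estimated by stochastic simulation: the "simulation" is a newsvendor
# profit under random demand, and the search moves to a neighbouring order
# quantity whenever its averaged simulated profit looks better.
import numpy as np

rng = np.random.default_rng(0)
PRICE, COST = 5.0, 3.0

def simulate_profit(q, n=200):
    """Average profit of ordering q units over n random Poisson demand draws."""
    demand = rng.poisson(20.0, size=n)
    return np.mean(PRICE * np.minimum(q, demand) - COST * q)

q = 1                                    # start far from the optimum
best_est = simulate_profit(q)
for _ in range(300):
    cand = max(0, q + rng.choice([-2, -1, 1, 2]))   # random discrete neighbour
    cand_est = simulate_profit(cand)
    if cand_est > best_est:              # noisy comparison: a simple heuristic
        q, best_est = cand, cand_est
# The profit-maximizing q sits near demand's 40th percentile, since the
# critical fractile is (PRICE - COST) / PRICE = 0.4.
```

Because comparisons are made between noisy estimates, more careful schemes (re-evaluation, declining noise, or the methods surveyed in the paper) are needed for guarantees; this sketch only conveys the mechanics.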

254 citations


Journal ArticleDOI
TL;DR: In this paper, the mathematical model for topological structural optimization is constructed, the derivation of the related optimality criteria is explained, a modified resizing scheme is suggested, and illustrative examples are provided.

230 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to investigate the efficiency of combinatorial optimization methods, in particular algorithms based on evolution strategies (ES) when incorporated into the solution of large-scale, continuous or discrete, structural optimization problems.

205 citations


Journal ArticleDOI
TL;DR: A technique for design under uncertainty, based on the worst-case-scenario approach of anti-optimization, is described that alleviates the computational burden.

131 citations


Journal ArticleDOI
TL;DR: A new computer program that combines evolutionary algorithm methods with a derivative-based, quasi-Newton method to solve difficult unconstrained optimization problems, called GENOUD (GENetic Optimization Using Derivatives), is described.
Abstract: We describe a new computer program that combines evolutionary algorithm methods with a derivative-based, quasi-Newton method to solve difficult unconstrained optimization problems. The program, called GENOUD (GENetic Optimization Using Derivatives), effectively solves problems that are nonlinear or perhaps even discontinuous in the parameters of the function to be optimized. When a statistical model's estimating function (for example, a log-likelihood) is nonlinear in the model's parameters, the function to be optimized will usually not be globally concave and may contain irregularities such as saddlepoints or discontinuous jumps. Optimization methods that rely on derivatives of the objective function may be unable to find any optimum at all. Or multiple local optima may exist, so that there is no guarantee that a derivative-based method will converge to the global optimum. We discuss the theoretical basis for expecting GENOUD to have a high probability of finding global optima. We conduct Monte Carlo experiments using scalar Normal mixture densities to illustrate this capability. We also use a system of four simultaneous nonlinear equations that has many parameters and multiple local optima to compare the performance of GENOUD to that of the Gauss-Newton algorithm in SAS's PROC MODEL.
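The hybrid strategy can be sketched compactly: an evolutionary outer loop supplies global exploration, while a quasi-Newton polish refines the incumbent. The following is an independent toy rendering of that idea using SciPy's BFGS on a standard test function; it is not the GENOUD program itself, and the population sizes and mutation scale are arbitrary choices.

```python
# Evolutionary search with a derivative-based polish step, in the spirit of
# GENOUD's genetic-algorithm-plus-quasi-Newton hybrid.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Standard multimodal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
pop = rng.uniform(-4.0, 4.0, size=(30, 2))           # random initial population
for gen in range(20):
    fitness = np.array([rastrigin(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]          # truncation selection
    children = parents[rng.integers(0, 10, size=20)] + rng.normal(0.0, 0.3, (20, 2))
    pop = np.vstack([parents, children])
    # Quasi-Newton polish of the current best individual: the hybrid twist.
    incumbent = pop[np.argmin([rastrigin(p) for p in pop])]
    pop[0] = minimize(rastrigin, incumbent, method="BFGS").x

best = min(pop, key=rastrigin)
```

The polish step snaps each promising individual to the bottom of its basin, so the evolutionary loop only has to find good basins, not good points.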

Journal ArticleDOI
TL;DR: This paper discusses three classes of dynamic optimization problems with discontinuities: path-constrained problems, hybrid discrete/continuous problems, and mixed-integer dynamic optimization problems.
Abstract: Many engineering tasks can be formulated as dynamic optimization or open-loop optimal control problems, where we search a priori for the input profiles to a dynamic system that optimize a given performance measure over a certain time period. Further, many systems of interest in the chemical processing industries experience significant discontinuities during transients of interest in process design and operation. This paper discusses three classes of dynamic optimization problems with discontinuities: path-constrained problems, hybrid discrete/continuous problems, and mixed-integer dynamic optimization problems. In particular, progress toward a general numerical technology for the solution of large-scale discontinuous dynamic optimization problems is discussed.

Journal ArticleDOI
TL;DR: The results of on-line optimization of the acetoacetylation of pyrrole with diketene in a laboratory-scale reactor and the minimization of batch time subject to endpoint constraints with respect to yield and two of the concentrations are presented.

Journal ArticleDOI
TL;DR: It is shown that the dynamic model, which is in general described by a system of differential-algebraic equations (DAEs), can become high-index during the state-constrained portions of the trajectory.

Journal ArticleDOI
TL;DR: The goal of characterizing the global solutions of an optimization problem, i.e. getting at necessary and sufficient conditions for a feasible point to be a global minimizer (or maximizer) of the objective function, is pursued.
Abstract: In this paper bearing the same title as our earlier survey-paper [11] we pursue the goal of characterizing the global solutions of an optimization problem, i.e. getting at necessary and sufficient conditions for a feasible point to be a global minimizer (or maximizer) of the objective function. We emphasize nonconvex optimization problems presenting some specific structures like 'convex-anticonvex' ones or quadratic ones.

Proceedings Article
01 Jul 1998
TL;DR: STAGE learns an evaluation function which predicts the outcome of a local search algorithm, such as hillclimbing or WALKSAT, as a function of state features along its search trajectories, and is used to bias future search trajectories toward better optima.
Abstract: This paper describes STAGE, a learning approach to automatically improving search performance on optimization problems. STAGE learns an evaluation function which predicts the outcome of a local search algorithm, such as hillclimbing or WALKSAT, as a function of state features along its search trajectories. The learned evaluation function is used to bias future search trajectories toward better optima. We present positive results on six large-scale optimization domains.
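The STAGE loop can be rendered on a toy 1D multimodal function: run hillclimbing from sampled starts, record (start features, outcome) pairs, fit a model of the outcome, and restart where the learned evaluation function predicts the best result. The objective, features, and quadratic regression below are illustrative choices, not STAGE's own.

```python
# Toy STAGE-style loop: learn to predict the outcome of hillclimbing from a
# start state, then pick restarts where the prediction is best.
import numpy as np

f = lambda x: np.sin(5 * x) + 0.5 * (x - 0.6) ** 2   # objective to minimize

def hillclimb(x, step=0.02, iters=500):
    """Greedy local descent by fixed steps; returns the local minimum found."""
    for _ in range(iters):
        best = min((x - step, x + step), key=f)
        if f(best) >= f(x):
            break
        x = best
    return x

rng = np.random.default_rng(0)
starts = rng.uniform(-3.0, 3.0, size=40)
outcomes = np.array([f(hillclimb(s)) for s in starts])

# Learned evaluation function V(x) ~ hillclimbing outcome from start x,
# here just a least-squares quadratic in simple state features.
features = np.column_stack([np.ones_like(starts), starts, starts**2])
coef, *_ = np.linalg.lstsq(features, outcomes, rcond=None)

grid = np.linspace(-3.0, 3.0, 601)
v = np.column_stack([np.ones_like(grid), grid, grid**2]) @ coef
restart = grid[np.argmin(v)]          # restart where V predicts the best outcome
final = hillclimb(restart)
```

The learned V smooths over the local minima that trap hillclimbing, which is exactly the bias toward better optima that the abstract describes.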

Journal ArticleDOI
TL;DR: In this paper, a flexible algorithm for solving nonlinear engineering design optimization problems involving zero-one, discrete, and continuous variables is presented, which restricts its search only to the permissible values of the variables, thereby reducing the search effort in converging near the optimum solution.
Abstract: A flexible algorithm for solving nonlinear engineering design optimization problems involving zero-one, discrete, and continuous variables is presented. The algorithm restricts its search only to the permissible values of the variables, thereby reducing the search effort in converging near the optimum solution. The efficiency and ease of application of the proposed method is demonstrated by solving four different mechanical design problems chosen from the optimization literature. These results are encouraging and suggest the use of the technique to other more complex engineering design problems.

Journal ArticleDOI
TL;DR: IHR is a sequential random search method that has been successfully used in several engineering design applications, such as the optimal design of composite structures, and several variations have been applied to the composites design problem.
Abstract: Engineering design problems often involve global optimization of functions that are supplied as 'black box' functions. These functions may be nonconvex, nondifferentiable and even discontinuous. In addition, the decision variables may be a combination of discrete and continuous variables. The functions are usually computationally expensive, and may involve finite element methods. An engineering example of this type of problem is to minimize the weight of a structure, while limiting strain to be below a certain threshold. This type of global optimization problem is very difficult to solve, yet design engineers must find some solution to their problem – even if it is a suboptimal one. Sometimes the most difficult part of the problem is finding any feasible solution. Stochastic methods, including sequential random search and simulated annealing, are finding many applications to this type of practical global optimization problem. Improving Hit-and-Run (IHR) is a sequential random search method that has been successfully used in several engineering design applications, such as the optimal design of composite structures. The motivation for IHR is discussed, as well as several enhancements. The enhancements include allowing both continuous and discrete variables in the problem formulation. This has many practical advantages, because design variables often involve a mixture of continuous and discrete values. IHR and several variations have been applied to the composites design problem. Some of this practical experience is discussed.
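The core IHR step is easy to sketch for a box-constrained continuous problem: draw a uniformly random direction, sample a point uniformly on the feasible segment of that line, and move only if the objective improves. The sketch below is a minimal rendering of that step, not the authors' implementation, and the nondifferentiable test objective is invented.

```python
# Improving Hit-and-Run on a box: random direction, uniform sample on the
# feasible chord, accept only improving candidates.
import numpy as np

def ihr(f, lo, hi, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)
    for _ in range(iters):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                     # uniform random direction
        # Feasible step range [tmin, tmax] so that x + t*d stays in the box.
        with np.errstate(divide="ignore"):
            t1, t2 = (lo - x) / d, (hi - x) / d
        tmin = np.max(np.minimum(t1, t2))
        tmax = np.min(np.maximum(t1, t2))
        cand = x + rng.uniform(tmin, tmax) * d     # uniform point on the chord
        if f(cand) < f(x):                         # "improving": only accept better
            x = cand
    return x

f = lambda x: np.sum(np.abs(x)) + np.sin(8 * x[0])  # nondifferentiable, multimodal
x_best = ihr(f, [-2, -2], [2, 2])
```

Because candidates are drawn along whole chords of the feasible region, the method can escape local basins without any gradient information, which is what makes it attractive for black-box design objectives.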

Journal ArticleDOI
TL;DR: In this paper, a 2D parametric finite element (FE) environment is presented, which is designed to be best suited for numerical optimization while maintaining its general applicability, focusing on the symbolic description of the model, minimized computation time and the user friendly definition of the optimization task.
Abstract: Nowadays, numerical optimization in combination with finite element (FE) analysis plays an important role in the design of electromagnetic devices. To apply any kind of optimization algorithm, a parametric description of the FE problem is required and the optimization task must be formulated. Most optimization tasks described in the literature, feature either specially developed algorithms for a specific optimization task, or extensions to standard finite element packages. Here, a 2D parametric FE environment is presented, which is designed to be best suited for numerical optimization while maintaining its general applicability. Particular attention is paid to the symbolic description of the model, minimized computation time and the user friendly definition of the optimization task.

Proceedings Article
24 Aug 1998
TL;DR: The approach is based on the property that, for linear cost functions, each parametric optimal plan is optimal in a convex polyhedral region of the parameter space; this property is used to optimize linear and non-linear cost functions.
Abstract: Query optimizers normally compile queries into one optimal plan by assuming complete knowledge of all cost parameters such as selectivity and resource availability. The execution of such plans could be sub-optimal when cost parameters are either unknown at compile time or change significantly between compile time and runtime [Loh89, GrW89]. Parametric query optimization [INS+92, CG94, GK94] optimizes a query into a number of candidate plans, each optimal for some region of the parameter space. In this paper, we present parametric query optimization algorithms. Our approach is based on the property that for linear cost functions, each parametric optimal plan is optimal in a convex polyhedral region of the parameter space. This property is used to optimize linear and non-linear cost functions. We also analyze the expected sizes of the parametric optimal set of plans and the number of plans produced by the Cole and Graefe algorithm [CG94].
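For a single parameter, the cited property reduces to the lower envelope of cost lines: each plan is optimal over a contiguous interval of the parameter. The three plans and their cost coefficients below are invented for illustration, and a dense grid stands in for an exact envelope computation.

```python
# With linear cost functions of one parameter (say a selectivity s in [0, 1]),
# each plan's cost is a line, so the optimal plan at each s is the lower
# envelope of lines and each plan's optimality region is an interval.
import numpy as np

plans = {                        # cost(s) = fixed + slope * s, per candidate plan
    "index-scan": (1.0, 40.0),   # cheap at low selectivity, bad at high
    "merge-join": (12.0, 6.0),
    "table-scan": (16.0, 0.5),   # nearly flat cost regardless of selectivity
}

s = np.linspace(0.0, 1.0, 1001)
costs = np.array([a + b * s for a, b in plans.values()])
winner = np.array(list(plans))[np.argmin(costs, axis=0)]

# Each plan's optimality region: where it attains the lower envelope.
regions = {name: (s[winner == name].min(), s[winner == name].max())
           for name in plans if np.any(winner == name)}
```

With these made-up coefficients all three plans win somewhere, and the regions tile [0, 1] as three disjoint intervals, which is the structure parametric query optimization caches instead of a single plan.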

Book ChapterDOI
01 Jan 1998
TL;DR: This paper focuses on the development of branch-and-cut algorithms for discrete optimization problems and in polyhedral outer-approximation methods for continuous nonconvex programming problems.
Abstract: Discrete and continuous nonconvex programming problems arise in a host of practical applications in the context of production, location-allocation, distribution, economics and game theory, process design, and engineering design situations. Several recent advances have been made in the development of branch-and-cut algorithms for discrete optimization problems and in polyhedral outer-approximation methods for continuous nonconvex programming problems. At the heart of these approaches is a sequence of linear programming problems that drive the solution process. The success of such algorithms is strongly tied in with the strength or tightness of the linear programming representations employed.
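The "sequence of linear programming problems" driving such methods can be illustrated in one dimension by Kelley-style cutting planes: each iteration adds a tangent cut, tightening a polyhedral outer approximation of the objective. This is a generic illustrative sketch of outer approximation, not the chapter's algorithms, and a dense grid stands in for the LP solver.

```python
# Kelley-style cutting planes: minimize a polyhedral lower model built from
# tangent cuts, evaluate the true function there, add a new cut, repeat.
import numpy as np

f  = lambda x: x * x                      # convex stand-in objective
df = lambda x: 2.0 * x
lo, hi = -1.0, 2.0
grid = np.linspace(lo, hi, 2001)          # stand-in for solving the LP exactly

cuts = []                                 # (f(x0), f'(x0), x0) tangent cuts
x = hi
for _ in range(30):
    cuts.append((f(x), df(x), x))
    # The polyhedral outer approximation is the max over all tangent cuts.
    model = np.max([fx + g * (grid - x0) for fx, g, x0 in cuts], axis=0)
    x = grid[np.argmin(model)]            # minimize the current relaxation

lower = model.min()                       # relaxation value: a valid lower bound
upper = min(fx for fx, _, _ in cuts)      # best true objective value seen
```

The gap between `upper` and `lower` shrinks as cuts accumulate; the strength of the cuts, as the abstract notes, is what determines how fast.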

Journal ArticleDOI
TL;DR: The aim of the present paper is to compare two different approaches which make use of anti-optimization: a nested optimization, where the search for the worst case is integrated with the main optimization, and a two-step optimization, where anti-optimization is solved once for all constraints before starting the optimization, allowing a great computational saving with respect to the first.

Journal ArticleDOI
TL;DR: In this paper, the genetic algorithm based on real variable coding was applied to the strain energy minimization of rectangular laminated composite plates, and the results for both a point load and uniformly distributed load compare well with those achieved using trajectory methods for continuous global optimization.
Abstract: The design of laminated structures is highly tailorable owing to the large number of available design variables, thereby requiring an optimization method for effective design. Furthermore, in practice, the design problem translates to a discrete global optimization problem which requires a robust optimization method such as the genetic algorithm. In this paper, the genetic algorithm, based on the real variable coding, is applied to the strain energy minimization of rectangular laminated composite plates. The results for both a point load and uniformly distributed load compare well with those achieved using trajectory methods for continuous global optimization.

Journal Article
TL;DR: A problem formulation and a preliminary classification, some mathematical results on hybrid dynamic optimization and examples illustrating characteristics of the problem are presented.

Journal ArticleDOI
TL;DR: The optimization model developed can be solved using many general-purpose software packages such as LINDO and Excel's Solver; an efficient branch-and-bound procedure that can be used to solve the optimization problem is also presented.

Journal ArticleDOI
TL;DR: Computational complexities of the min-max versions of the above-mentioned problems are analyzed, and pseudopolynomial algorithms are provided under certain conditions.
Abstract: In this paper, we study discrete optimization problems with min-max objective functions. This type of problems has direct applications in the recent development of robust optimization. The following well-known classes of problems are discussed: minimum spanning tree problem, resource allocation problem with separable cost functions, and production control problem. Computational complexities of the corresponding min-max version of the above-mentioned problems are analyzed. Pseudopolynomial algorithms for these problems are provided under certain conditions.
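At toy scale the min-max objective can be shown by brute force: enumerate every spanning tree of a small graph and keep the one whose worst scenario cost is smallest. The graph and the two cost scenarios below are made up for illustration; real instances need the specialized algorithms the paper develops.

```python
# Min-max robust spanning tree by exhaustive enumeration on 4 nodes.
import itertools

nodes = range(4)
# (u, v, cost in scenario 1, cost in scenario 2)
edges = [(0, 1, 1, 6), (0, 2, 2, 2), (0, 3, 5, 1),
         (1, 2, 2, 5), (1, 3, 4, 2), (2, 3, 3, 3)]

def is_spanning_tree(tree):
    """Union-find cycle check: 3 acyclic edges on 4 nodes form a spanning tree."""
    parent = list(nodes)
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v, *_ in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # edge closes a cycle
        parent[ru] = rv
    return True

trees = [t for t in itertools.combinations(edges, 3) if is_spanning_tree(t)]

def worst_case(tree):
    """Robust objective: the larger of the tree's two scenario costs."""
    return max(sum(e[2] for e in tree), sum(e[3] for e in tree))

robust = min(trees, key=worst_case)
```

In this instance the scenario-1 MST and the scenario-2 MST each cost 11 in their bad scenario, while the min-max tree achieves a worst case of 9: robustness can require a tree that is optimal in neither scenario alone.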

Patent
15 May 1998
TL;DR: In this article, prediction methods are used in conjunction with actual optimization to improve response time and reduce required computational resources for optimization problems having a hierarchical structure, which can reduce the requirements for computational resources and allow more decompositions to be examined within the available time in order to arrive at a more nearly optimal decomposition.
Abstract: Prediction methods that anticipate the outcome of a detailed optimization step are used in lieu of or in conjunction with actual optimization to improve response time and reduce required computational resources for optimization problems having a hierarchical structure. Decomposition of the optimization problem into sub-problems and sub-sub-problems is, itself, an optimization process which is iteratively performed while preferably guided by prediction of the quality of solutions to the problems into which the “master” optimization problem may be decomposed. Prediction also reduces the requirements for computational resources and allows more decompositions to be examined within the available time in order to arrive at a more nearly optimal decomposition as well as a more nearly optimal solution. Prediction is selectively used when it is determined that such a benefit is probable.

Journal ArticleDOI
TL;DR: In this paper, the independent-continuous topological variable is proposed to establish its corresponding smooth model of structural topological optimization, which can overcome difficulties that are encountered in conventional models and algorithms for the optimization of the structural topology.
Abstract: A concept of the independent-continuous topological variable is proposed to establish its corresponding smooth model of structural topological optimization. The method can overcome difficulties that are encountered in conventional models and algorithms for the optimization of the structural topology. Its application to truss topological optimization with stress and displacement constraints is satisfactory, with convergence faster than that of sectional optimizations.

Journal ArticleDOI
TL;DR: In this paper, a simple evolutionary method for minimizing the weight of structures subject to displacement constraints is presented, where sizing design variables are discrete and sensitivity numbers for element size reduction are derived using optimality criteria methods.