Showing papers in "Optimization and Engineering in 2008"


Journal ArticleDOI
TL;DR: Methods of assessing the quality of coarse/surrogate models are provided, derived from convergence results for space mapping algorithms, to predict whether a given model might be successfully used in space mapping optimization, or to choose the proper type of space mapping which would be suitable to a given engineering design problem.
Abstract: One of the central issues in space mapping optimization is the quality of the underlying coarse models and surrogates. Whether a coarse model is sufficiently similar to the fine model may be critical to the performance of the space mapping optimization algorithm and a poor coarse model may result in lack of convergence. Although similarity requirements can be expressed with proper analytical conditions, it is difficult to verify such conditions beforehand for real-world engineering optimization problems. In this paper, we provide methods of assessing the quality of coarse/surrogate models. These methods can be used to predict whether a given model might be successfully used in space mapping optimization, to compare the quality of different coarse models, or to choose the proper type of space mapping which would be suitable to a given engineering design problem. Our quality estimation methods are derived from convergence results for space mapping algorithms. We provide illustrations and several practical application examples.
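
As context for the convergence results these quality measures derive from, here is a minimal sketch of the input space-mapping loop they target: optimize the coarse model once, then repeatedly extract the coarse input that reproduces the fine response and shift the design by the misalignment. The 1-D models, bounds, and tolerances below are hypothetical stand-ins, not the paper's.

```python
# Minimal 1-D input space-mapping loop (an illustrative sketch, not the
# paper's quality-assessment method; both models are hypothetical).
from scipy.optimize import minimize_scalar

target = 0.5
fine   = lambda x: 1.0 / (x + 0.1)   # "expensive" model (stand-in)
coarse = lambda z: 1.0 / z           # cheap, similar model

# Step 1: optimize the cheap coarse model once.
z_star = minimize_scalar(lambda z: (coarse(z) - target) ** 2,
                         bounds=(0.1, 10), method="bounded").x

# Step 2: alternate parameter extraction and the aggressive update.
x = z_star
for _ in range(20):
    # Extraction: coarse input that reproduces the current fine response.
    p = minimize_scalar(lambda z: (coarse(z) - fine(x)) ** 2,
                        bounds=(0.1, 10), method="bounded").x
    if abs(p - z_star) < 1e-4:
        break
    x += z_star - p                  # shift the design by the misalignment
print("space-mapped design:", x)     # ~1.9, where fine(x) hits the target
```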

105 citations


Journal ArticleDOI
TL;DR: A fast, flexible, and robust simulation-based optimization scheme using an ANN-surrogate model was developed, implemented, and validated, resulting in a significant and consistent improvement in blade performance.
Abstract: A fast, flexible, and robust simulation-based optimization scheme using an ANN-surrogate model was developed, implemented, and validated. The optimization method uses a Genetic Algorithm (GA) coupled with an Artificial Neural Network (ANN) trained by back propagation. The developed optimization scheme was successfully applied to single-point aerodynamic optimization of a transonic turbine stator and multi-point optimization of a NACA65 subsonic compressor rotor in two-dimensional flow, both represented by 2D linear cascades. High-fidelity CFD flow simulations, which solve the Reynolds-Averaged Navier-Stokes equations, were used to generate the database for building the ANN low-fidelity model. The optimization objective is a weighted sum of the performance objectives penalized with the constraints; it was constructed to achieve better aerodynamic performance at the design point or over the full operating range by reshaping the blade profile. The latter is represented using NURBS functions, whose coefficients serve as the design variables. Parallelizing the CFD flow simulations reduced the turn-around computation time with close to 100% parallel efficiency. The ANN model approximated the objective function accurately and reduced the optimization computing time tenfold. The chosen objective function and optimization methodology result in a significant and consistent improvement in blade performance.
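
The coupling the abstract describes can be sketched as follows; a polynomial fit stands in for the back-propagation ANN and a cheap 1-D function for the RANS-based CFD, so this only shows the database-train-optimize pattern, not the paper's implementation.

```python
# Sketch of the surrogate-assisted loop: an offline database of expensive
# evaluations trains a cheap surrogate, which a simple GA then optimizes.
# A polynomial fit stands in for the ANN; the "CFD" objective is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
cfd = lambda x: np.sin(3 * x) + 0.5 * x ** 2        # stand-in for CFD cost

# Database of "expensive" samples -> low-fidelity surrogate.
X = np.linspace(-2, 2, 25)
coeffs = np.polyfit(X, cfd(X), deg=8)
surrogate = lambda x: np.polyval(coeffs, x)

# Minimal real-coded GA running on the cheap surrogate only.
pop = rng.uniform(-2, 2, size=40)
for gen in range(60):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[:20]]                # truncation selection
    kids = 0.5 * (parents + rng.permutation(parents))  # arithmetic crossover
    kids += rng.normal(0, 0.1, size=kids.shape)        # Gaussian mutation
    pop = np.concatenate([parents, np.clip(kids, -2, 2)])

best = pop[np.argmin(surrogate(pop))]
print("surrogate optimum:", best, "| CFD stand-in check:", cfd(best))
```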

86 citations


Journal ArticleDOI
TL;DR: This paper considers three different modeling approaches for a mixed-integer nonlinear optimization problem taken from a set of water resources benchmarking problems and shows that the surrogate approach can greatly improve computational efficiency while locating a comparable, sometimes better, design point than the other approaches.
Abstract: Efficient and powerful methods are needed to overcome the inherent difficulties in the numerical solution of many simulation-based engineering design problems. Typically, expensive simulation codes are included as black-box function generators; therefore, gradient information that is required by mathematical optimization methods is entirely unavailable. Furthermore, the simulation code may contain iterative or heuristic methods, low-order approximations of tabular data, or other numerical methods which contribute noise to the objective function. This further rules out the application of Newton-type or other gradient-based methods that use traditional finite difference approximations. In addition, if the optimization formulation includes integer variables the complexity grows even further. In this paper we consider three different modeling approaches for a mixed-integer nonlinear optimization problem taken from a set of water resources benchmarking problems. Within this context, we compare the performance of a genetic algorithm, the implicit filtering algorithm, and a branch-and-bound approach that uses sequential surrogate functions. We show that the surrogate approach can greatly improve computational efficiency while locating a comparable, sometimes better, design point than the other approaches.
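
For readers unfamiliar with implicit filtering, a stripped-down version is sketched below: steepest descent driven by finite differences whose stencil shrinks whenever it stops making progress, which is what makes the method tolerant of the simulation noise described above. The toy noisy objective is hypothetical.

```python
# Simplified implicit-filtering iteration on a noisy black box: central
# differences on a stencil of width h, with h halved on stencil failure.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum((x - 1.0) ** 2) + 1e-3 * rng.standard_normal()

def implicit_filtering(f, x, h=0.5, h_min=1e-4, max_iter=200):
    n = len(x)
    for _ in range(max_iter):
        # Central-difference gradient on the current stencil.
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(n)])
        step = x - h * g / (np.linalg.norm(g) + 1e-12)
        if f(step) < f(x):
            x = step                 # progress at this scale: keep stepping
        else:
            h /= 2                   # stencil failure: refine the scale
            if h < h_min:
                break
    return x

print(implicit_filtering(f, np.zeros(2)))   # near [1, 1]
```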

73 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe extensions of kriging and radial basis function (RBF) interpolation to handle linear, nonlinear, and integer constraints, and compare the performance of the three deterministic derivative-free solvers rbfSolve, ARBFMIP and EGO with three derivative-based mixed-integer nonlinear solvers, OQNLP, MINLPBB and MISQP.
Abstract: Response surface methods based on kriging and radial basis function (RBF) interpolation have been successfully applied to solve expensive, i.e. computationally costly, global black-box nonconvex optimization problems. In this paper we describe extensions of these methods to handle linear, nonlinear, and integer constraints. In particular, algorithms for standard RBF and the new adaptive RBF (ARBF) are described. Note, however, that while the objective function may be expensive, we assume that any nonlinear constraints are either inexpensive or are incorporated into the objective function via penalty terms. Test results are presented on standard test problems, both nonconvex problems with linear and nonlinear constraints, and mixed-integer nonlinear problems (MINLP). Solvers in the TOMLAB Optimization Environment ( http://tomopt.com/tomlab/ ) have been compared, specifically the three deterministic derivative-free solvers rbfSolve, ARBFMIP and EGO with three derivative-based mixed-integer nonlinear solvers, OQNLP, MINLPBB and MISQP, as well as the GENO solver implementing a stochastic genetic algorithm. Results show that the deterministic derivative-free methods compare well with the derivative-based ones, but the stochastic genetic algorithm solver is several orders of magnitude too slow for practical use. When the objective function for the test problems is costly to evaluate, the performance of the ARBF algorithm proves to be superior.
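
A bare-bones version of the RBF interpolant such methods construct and then search is sketched below, assuming a cubic kernel and made-up sample data; a production implementation would add the polynomial tail that guarantees solvability of the interpolation system.

```python
# Cubic RBF response surface: fit weights by solving the interpolation
# system, then evaluate cheaply anywhere. Data are hypothetical stand-ins
# for an expensive black box.
import numpy as np

def rbf_fit(X, y):
    # Pairwise distances and cubic kernel phi(r) = r**3. Note: the linear
    # polynomial tail that guarantees a nonsingular system is omitted here.
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.lstsq(r ** 3, y, rcond=None)[0]    # interpolation weights
    return lambda x: (np.linalg.norm(x - X, axis=-1) ** 3) @ w

rng = np.random.default_rng(2)
expensive = lambda p: np.sin(p[0]) * np.cos(p[1])    # black-box stand-in
X = rng.uniform(-2, 2, size=(40, 2))
y = np.array([expensive(p) for p in X])

model = rbf_fit(X, y)
x_test = np.array([0.5, 0.5])
print("surrogate:", model(x_test), "| truth:", expensive(x_test))
```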

71 citations


Journal ArticleDOI
TL;DR: To overcome the “curse of dimensionality” that arises in directly approximating the nonlinear constraint functions in the original robust GP, it is shown how to find globally optimal PWL approximations of these bivariate constraint functions.
Abstract: The optimal solution of a geometric program (GP) can be sensitive to variations in the problem data. Robust geometric programming can systematically alleviate the sensitivity problem by explicitly incorporating a model of data uncertainty in a GP and optimizing for the worst-case scenario under this model. However, it is not known whether a general robust GP can be reformulated as a tractable optimization problem that interior-point or other algorithms can efficiently solve. In this paper we propose an approximation method that seeks a compromise between solution accuracy and computational efficiency.
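
The flavor of piecewise-linear (PWL) approximation involved can be illustrated on a 1-D convex function: the maximum of supporting tangent lines gives a PWL under-estimator whose gap shrinks as tangency points are added. The softplus example below is a hypothetical stand-in for the bivariate constraint functions the paper treats.

```python
# PWL approximation of a convex function by supporting tangent lines,
# the building block behind tractable PWL reformulations. The 1-D
# softplus test function is hypothetical.
import numpy as np

f  = lambda x: np.log1p(np.exp(x))          # convex test function
df = lambda x: 1.0 / (1.0 + np.exp(-x))     # its derivative

knots = np.linspace(-4, 4, 9)               # tangency points
pwl = lambda x: np.max([f(t) + df(t) * (x - t) for t in knots], axis=0)

x = np.linspace(-4, 4, 101)
print("max approximation gap:", np.max(f(x) - pwl(x)))  # small, nonnegative
```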

60 citations


Journal ArticleDOI
TL;DR: A new metamodeling framework is explored that may collapse the computational explosion that characterizes the modeling of complex systems under a multiobjective and/or multidisciplinary setting and holds the potential for identifying highly competitive products and systems that are well beyond today’s state of the art.
Abstract: This paper explores a new metamodeling framework that may collapse the computational explosion that characterizes the modeling of complex systems under a multiobjective and/or multidisciplinary setting. Under the new framework, a pseudo response surface is constructed for each design objective for each discipline. This pseudo response surface has the unique property of being highly accurate in Pareto optimal regions, while it is intentionally allowed to be inaccurate in other regions. In short, the response surface for each design objective is accurate only where it matters. Because the pseudo response surface is allowed to be inaccurate in other regions of the design space, the computational cost of constructing it is dramatically reduced. An important distinguishing feature of the new framework is that the response surfaces for all the design objectives are constructed simultaneously in a mutually dependent fashion, in a way that identifies Pareto regions for the multiobjective problem. The new framework supports the puzzling notion that it is possible to obtain more accuracy and radically more design space exploration capability, while actually reducing the computation effort. This counterintuitive metamodeling paradigm shift holds the potential for identifying highly competitive products and systems that are well beyond today’s state of the art.
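
Two ingredients of any such Pareto-focused scheme can be sketched directly: a nondominated filter and resampling concentrated near the current front. The toy objectives below are hypothetical; the paper's framework additionally builds the mutually dependent response surfaces on top of this idea.

```python
# Nondominated filter plus resampling near the current front, the core
# of Pareto-focused sampling. Objectives are hypothetical.
import numpy as np

def nondominated(F):
    # F: (n_points, n_objectives), minimization. Boolean mask of the front.
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominated.any()
    return keep

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(100, 2))
F = np.stack([X[:, 0], (1 - X[:, 0]) + X[:, 1]], axis=1)  # two objectives

front = X[nondominated(F)]
new_X = front + rng.normal(0, 0.02, size=front.shape)  # refine near front
print(len(front), "nondominated designs; resampled", len(new_X), "nearby")
```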

54 citations


Journal ArticleDOI
TL;DR: The recent mesh adaptive direct search (MADS) algorithm is detailed; as a direct search algorithm it uses only function values and does not compute or approximate derivatives, which is useful when the functions are noisy, costly or undefined at some points, or when derivatives are unavailable or unusable.
Abstract: In this paper, the general problem of chemical process optimization defined by a computer simulation is formulated. It is generally a nonlinear, non-convex, non-differentiable optimization problem over a disconnected set. A brief overview of popular optimization methods from the chemical engineering literature is presented. The recent mesh adaptive direct search (MADS) algorithm is detailed. It is a direct search algorithm, so it uses only function values and does not compute or approximate derivatives. This is useful when the functions are noisy, costly or undefined at some points, or when derivatives are unavailable or unusable. In this work, the MADS algorithm is used to optimize a spent potliners (toxic wastes from aluminum production) treatment process. In comparison with the best previously known objective function value, a 37% reduction is obtained even though the model failed to return a value 43% of the time.
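
A stripped-down relative of MADS conveys the mechanics: poll a fixed set of directions on a mesh, move on success, shrink the mesh on failure, and treat simulation failures as +infinity. Real MADS uses a dense, adaptive set of poll directions; everything below is a hypothetical toy, including the randomly failing simulator.

```python
# Simplified pattern search in the GPS/MADS family, with simulated
# model failures handled by returning +inf.
import numpy as np

rng = np.random.default_rng(4)
def simulator(x):
    if rng.random() < 0.2:
        return np.inf                       # model failure: treat as +inf
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2

x, fx, mesh = np.zeros(2), simulator(np.zeros(2)), 1.0
D = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])   # poll directions
while mesh > 1e-6:
    for d in D:
        trial = x + mesh * d
        f_trial = simulator(trial)
        if f_trial < fx:
            x, fx = trial, f_trial          # successful poll: move
            break
    else:
        mesh /= 2                           # unsuccessful poll: refine mesh
print(x, fx)                                # near (1, -2)
```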

33 citations


Journal ArticleDOI
TL;DR: The problem is solved numerically using the generalized mixed variable pattern search (MVPS) algorithm; new theoretical convergence results are proved, and numerical results are presented that show the potential of the approach.
Abstract: This paper focuses on optimal sensor placement for structural health monitoring (SHM), in which the goal is to find an optimal configuration of sensors that will best predict structural damage. The problem is formulated as a bound constrained mixed variable programming (MVP) problem, in which the discrete variables are categorical; i.e., they may only take on values from a pre-defined list. The problem is particularly challenging because the objective function is computationally expensive to evaluate and first-order derivatives may not be available. The problem is solved numerically using the generalized mixed variable pattern search (MVPS) algorithm. Some new theoretical convergence results are proved, and numerical results are presented, which show the potential of our approach.
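
The mixed-variable polling idea can be sketched as follows, with a hypothetical objective and categorical variable: continuous coordinates are polled on a mesh while the categorical variable is polled over its discrete neighbors. The MVPS algorithm adds the convergence machinery this toy omits.

```python
# Mixed-variable polling: mesh poll for the continuous variable plus a
# discrete poll over categorical neighbors. All values are hypothetical.
import numpy as np

materials = {"steel": 1.0, "aluminum": 0.4, "composite": 0.25}
def objective(x, mat):
    return (x - 2.0) ** 2 + abs(materials[mat] - 0.3)

x, mat, mesh = 0.0, "steel", 1.0
fx = objective(x, mat)
while mesh > 1e-5:
    candidates = [(x + mesh, mat), (x - mesh, mat)]          # continuous poll
    candidates += [(x, m) for m in materials if m != mat]    # discrete poll
    trials = [(objective(cx, cm), cx, cm) for cx, cm in candidates]
    f_best, x_best, m_best = min(trials, key=lambda t: t[0])
    if f_best < fx:
        x, mat, fx = x_best, m_best, f_best
    else:
        mesh /= 2
print(x, mat, fx)    # x near 2.0 with the best-matching material
```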

28 citations


Journal ArticleDOI
TL;DR: A novel inverse approach that results from including muscle physiology (both activation and contraction dynamics) in the inverse dynamic formalism is proposed, and the efficiency with which the corresponding optimization problem is solved is increased by using convex optimization techniques.
Abstract: Determining the muscle forces that underlie some experimentally observed human motion is a challenging biomechanical problem, both from an experimental and a computational point of view. No non-invasive method is currently available for experimentally measuring muscle forces. The alternative of computing them from the observed motion is complicated by the inherent overactuation of the human body: it has many more muscles than strictly needed for driving all the degrees of freedom of the skeleton. As a result, the skeleton's equations of motion do not suffice to determine the muscle forces unambiguously. Therefore, muscle force determination is often reformulated as a (large-scale) optimization problem. Generally, the optimization approaches are classified according to the formalism, inverse or forward, adopted for solving the skeleton's equations of motion. Classical inverse approaches are fast but do not take into account the constraints imposed by muscle physiology. Classical forward approaches, on the other hand, do take the muscle physiology into account but are extremely costly from a computational point of view. The present paper makes a double contribution. First, it proposes a novel inverse approach that results from including muscle physiology (both activation and contraction dynamics) in the inverse dynamic formalism. Second, the efficiency with which the corresponding optimization problem is solved is increased by using convex optimization techniques. That is, an approximate convex program is formulated and solved in order to provide a hot-start for the exact nonconvex program. The key element in this approximation is a (global) linearization of muscle physiology based on techniques from experimental system identification. This approach is applied to the study of muscle forces during gait. Although the results for gait are promising, experimental study of faster motions is needed to demonstrate the full power and advantages of the proposed methodology, and therefore is the subject of subsequent research.
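
As a baseline for what the paper improves on, the classical static-optimization step looks roughly like this: distribute a known joint torque over redundant muscles by minimizing summed squared activations. The moment arms and muscle strengths below are made-up numbers; the paper's contribution adds activation/contraction dynamics and a convex hot-start on top of such a formulation.

```python
# Classical static optimization of redundant muscle forces: minimize the
# sum of squared activations subject to a joint-torque equality. All
# physiological numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

r    = np.array([0.05, 0.03, 0.02])    # moment arms (m), hypothetical
fmax = np.array([1000., 600., 400.])   # max isometric forces (N), hypothetical
tau  = 30.0                            # required joint torque (N m)

# Activations a in [0, 1]; muscle forces are a * fmax.
res = minimize(lambda a: np.sum(a ** 2), x0=np.full(3, 0.5),
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq",
                            "fun": lambda a: r @ (a * fmax) - tau})
print("activations:", res.x, "| forces:", res.x * fmax)
```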

22 citations


Journal ArticleDOI
TL;DR: In this article, the geometry of a glass cell with computational fluid dynamics (CFD) was optimized for non-invasive nuclear magnetic resonance (NMR) measurements on a single droplet levitated in a counter current of liquid in a conical tube.
Abstract: The rigorous optimization of the geometry of a glass cell with computational fluid dynamics (CFD) is performed. The cell will be used for non-invasive nuclear magnetic resonance (NMR) measurements on a single droplet levitated in a counter current of liquid in a conical tube. The objective function of the optimization describes the stability of the droplet position required for long-period NMR measurements.

17 citations


Journal ArticleDOI
TL;DR: A global-local optimization (GLO) approach was adopted to adjust the uncertain properties of the FE model by minimizing iteratively the differences between the measured dynamic modal parameters and the corresponding analytical predictions.
Abstract: A new method for finite element model updating using simulated data is presented. A global-local optimization (GLO) approach was adopted to adjust the uncertain properties of the FE model by iteratively minimizing the differences between the measured dynamic modal parameters and the corresponding analytical predictions. In contrast with most of the existing updating techniques, which minimize modal force errors, objective functions based on the Coordinate Modal Assurance Criterion (COMAC) and the Frequency Response Assurance Criterion (FRAC) were employed. The GLO procedure was employed to minimize the norm of the COMAC- and FRAC-based error vectors by updating the physical model variables (thickness, Young's modulus, etc.). The proposed model updating procedure was applied to damage localization and quantification of structures whose damage characteristics can be represented by a reduction of the element bending and axial stiffness. Results showed that a significant reduction in computer run-time and improved damage assessment can be achieved. The procedure is illustrated on a plate-like structure by measuring dynamic properties before and after structural changes for four different damage cases.
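
The FRAC ingredient of the objective can be stated compactly: a per-coordinate correlation between measured and analytical frequency response functions that equals 1 for a perfect match. The synthetic single-mode FRFs below are hypothetical.

```python
# FRAC: per-coordinate similarity of measured vs. analytical frequency
# response functions (1 = perfect match). Synthetic FRFs are hypothetical.
import numpy as np

def frac(H_exp, H_ana):
    # H_*: complex FRFs, shape (n_coords, n_freqs); returns one value per
    # coordinate: |sum H_exp conj(H_ana)|^2 / (sum|H_exp|^2 sum|H_ana|^2).
    num = np.abs(np.sum(H_exp * np.conj(H_ana), axis=1)) ** 2
    den = (np.sum(np.abs(H_exp) ** 2, axis=1)
           * np.sum(np.abs(H_ana) ** 2, axis=1))
    return num / den

w = np.linspace(1, 100, 400)
H_measured = 1.0 / (-w ** 2 + 2j * 0.02 * 10 * w + 10 ** 2)[None, :]
H_model    = 1.0 / (-w ** 2 + 2j * 0.02 * 11 * w + 11 ** 2)[None, :]
print("FRAC:", frac(H_measured, H_model))   # < 1: model needs updating
```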

Journal ArticleDOI
TL;DR: In this article, an approximate robust counterpart is formulated for the case where both sides of the constraint depend on the same perturbations, and the perturbations belong to an uncertainty set which is an intersection of ellipsoids.
Abstract: This paper deals with uncertain conic quadratic constraints. An approximate robust counterpart is formulated for the case where both sides of the constraint depend on the same perturbations, and the perturbations belong to an uncertainty set which is an intersection of ellipsoids. Examples of problems in which such constraints occur are presented and solved.
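
In symbols (notation assumed here, not taken from the paper), the setting is a conic quadratic constraint whose data depend on a perturbation vector confined to an intersection of ellipsoids:

```latex
% Uncertain conic quadratic constraint: enforce for every admissible
% perturbation u (the constraint data A, b, c, d all depend on the
% same u, as the abstract describes).
\|A(u)\,x + b(u)\|_2 \;\le\; c(u)^{\mathsf T} x + d(u)
\qquad \text{for all } u \in \mathcal{U},
\qquad
\mathcal{U} = \bigcap_{k=1}^{K} \bigl\{\, u : u^{\mathsf T} Q_k\, u \le 1 \,\bigr\},
\quad Q_k \succeq 0 .
```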

Journal ArticleDOI
TL;DR: A comparison between optimal oblivious routing and the well-known OSPF routing technique on a set of real-world networks shows that, for different levels of uncertainty, optimal oblivious routing has substantially better performance than OSPF routing.
Abstract: In telecommunication networks, a common measure is the maximum congestion (i.e., utilization) on edge capacity. As traffic demands are often known with a degree of uncertainty, network management techniques must take into account traffic variability. The oblivious performance of a routing is a measure of how congested the network may get, in the worst case, for one of a set of possible traffic demands.
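
The congestion measure in question is easy to state concretely: for a fixed routing, compute each edge's utilization under every candidate traffic matrix and take the worst case. All capacities, routes, and demands below are hypothetical.

```python
# Worst-case maximum edge utilization of a fixed routing over a finite
# set of candidate traffic demands. All numbers are hypothetical.
import numpy as np

capacity = np.array([10., 10., 5.])        # edge capacities
# route[d, e]: fraction of demand d carried across edge e
route = np.array([[1.0, 0.0, 0.0],
                  [0.3, 0.7, 0.0],
                  [0.0, 0.5, 0.5]])
demand_set = [np.array([4., 6., 2.]),      # candidate traffic demands
              np.array([8., 2., 4.])]

worst = max(np.max((d @ route) / capacity) for d in demand_set)
print("worst-case max utilization:", worst)
```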

Journal ArticleDOI
TL;DR: This work states that space mapping, where a cheap (low-fidelity or coarse) physics-based model provides an effective optimization surrogate for a more detailed or high-f fidelity model, has made significant inroads into the surrogate modeling field.
Abstract: Advances in optimization technology, a cornerstone in engineering modeling, simulation-based design and manufacturing, continue to push back the boundaries of feasibility. Multi-disciplinary optimization continues to show success. Notwithstanding advances in computing power and user-friendly management of multidisciplinary software, challenging problems will undoubtedly continue to plague the designer as long as engineering projects grow in ambition. Ever more efficient and systematic procedures are proposed that exploit surrogate or approximate models with occasional reference to appropriate computationally intensive high-fidelity simulator(s). Such low-fidelity models facilitate rapid optimization. Data interpolation techniques continue their development, including artificial neural network approaches, kriging, and low-order response surfaces. Space mapping, where a cheap (low-fidelity or coarse) physics-based model provides an effective optimization surrogate for a more detailed or high-fidelity model, has made significant inroads into the surrogate modeling field. A crucial property for traditional optimization is that each iteration towards the solution focus on a single

Journal ArticleDOI
TL;DR: A surrogate optimization method called Efficient Global Optimization (EGO) was used with a spline-based parameterization to find the shape of the horn that gives a frequency-independent beamwidth, thus giving a high quality listening experience.
Abstract: Horn-loaded loudspeakers increase the efficiency and control the spatial distribution of sound radiated from the horn mouth. They are often used as components in cinema sound systems, where it is desired that the sound be broadcast onto the audience uniformly at all frequencies, improving the listening experience. The sound distribution, or beamwidth, is related to the shape of the horn and can be predicted by numerical methods such as the boundary-element or source-superposition method; however, the cost of evaluating the objective function is high. To overcome this, a surrogate optimization method called Efficient Global Optimization (EGO) was used with a spline-based parameterization to find the shape of the horn that gives a frequency-independent beamwidth, and thus a high-quality listening experience.
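
At EGO's core is the expected-improvement rule, which trades off the surrogate's predicted value against its uncertainty when choosing the next expensive evaluation. In the sketch below the kriging mean and standard deviation are hypothetical numbers rather than outputs of a fitted model of the beamwidth objective.

```python
# Expected improvement under a Gaussian predictor, the acquisition rule
# at the heart of EGO. The mu/sigma values are hypothetical.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # EI(x) = (f_best - mu) Phi(z) + sigma phi(z), z = (f_best - mu) / sigma
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

mu    = np.array([0.8, 1.1, 0.95])   # predicted mean at 3 candidate shapes
sigma = np.array([0.05, 0.40, 0.20]) # predictive standard deviation
print(expected_improvement(mu, sigma, f_best=1.0))
# EI rewards both low predicted mean and high predictive uncertainty.
```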

Journal ArticleDOI
TL;DR: The proposed neuro-space mapping technique, called Neuro-SM, uses a neural network to map the voltage and current signals between an existing device model and the actual device behavior, such that the mapped model becomes an accurate representation of the new device.
Abstract: This paper presents an application of the space mapping concept in the modeling of semiconductor devices. A recently proposed device modeling technique, called neuro-space mapping (Neuro-SM), is described to meet the constant need for new device models due to rapid progress in semiconductor technology. Neuro-SM is a systematic method allowing us to exceed the present capabilities of the existing device models. It uses a neural network to map the voltage and current signals between an existing device model (coarse model) and the actual device behavior (fine model), such that the mapped model becomes an accurate representation of the new device. An efficient training method based on analytical sensitivity analysis for the mapping neural network is also addressed. The trained Neuro-SM model can retain the speed of the existing device model while improving the model accuracy. The benefit of the Neuro-SM method is demonstrated by examples of SiGe HBT and GaAs MESFET modeling and use of the models in harmonic balance simulation.
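
The mapped-model structure is the essential idea: reproduce the fine device as the coarse model composed with a learned input mapping, i_fine(v) ≈ i_coarse(g(v)). In the sketch below a linear g fitted by least squares stands in for the paper's mapping neural network, and the device curves are hypothetical.

```python
# Mapped-model structure: compose an existing coarse device model with a
# fitted input mapping. A linear map stands in for the neural network;
# the I-V curves are hypothetical.
import numpy as np
from scipy.optimize import least_squares

coarse = lambda v: 1e-3 * np.maximum(v - 0.6, 0) ** 2   # existing model
fine_data_v = np.linspace(0, 2, 50)                     # "measured" device
fine_data_i = 1e-3 * np.maximum(0.9 * fine_data_v - 0.5, 0) ** 2

# Fit g(v) = a*v + b so that coarse(g(v)) matches the measurements.
res = least_squares(lambda p: coarse(p[0] * fine_data_v + p[1]) - fine_data_i,
                    x0=[1.0, 0.0])
a, b = res.x
mapped_model = lambda v: coarse(a * v + b)
print("fitted mapping g(v) = %.3f v + %.3f" % (a, b))   # ~0.9 v + 0.1
```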

Journal ArticleDOI
TL;DR: In this article, the authors apply semidefinite programming to an estimation problem in optical lithography, where an approximate model of the imaging system is modified so that it satisfies calibration measurements, with the purpose of incorporating into the model distortion effects due to diffusion and etching.
Abstract: We apply semidefinite programming to an estimation problem in optical lithography. In this problem, an approximate model of the imaging system is modified so that it satisfies a set of calibration measurements, with the purpose of incorporating into the model distortion effects due to diffusion and etching. The estimation problem is formulated as a semidefinite program, and several techniques are presented for exploiting problem structure in an interior-point method for solving it. These include efficient methods for handling upper bounds on the matrix variables, symmetry constraints on the variables, and low-rank structure in the coefficient matrices.
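
The generic shape of such a calibration problem, written as a small semidefinite program (with made-up data, not the paper's lithography formulation, and without its structure-exploiting interior-point techniques), might look like this:

```python
# Calibration-style SDP sketch: find the PSD matrix closest to a nominal
# model subject to linear "measurement" constraints. Requires cvxpy;
# all data are hypothetical.
import cvxpy as cp
import numpy as np

n = 4
rng = np.random.default_rng(5)
P0 = np.eye(n)                                  # nominal (approximate) model
A = [rng.standard_normal((n, n)) for _ in range(3)]
A = [(M + M.T) / 2 for M in A]                  # symmetric measurement maps
b = [np.trace(M @ (P0 + 0.1 * np.eye(n))) for M in A]   # "measurements"

P = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.norm(P - P0, "fro")),
                  [P >> 0] + [cp.trace(A[k] @ P) == b[k] for k in range(3)])
prob.solve()
print("calibrated model:\n", P.value)
```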

Journal ArticleDOI
TL;DR: It is shown that the second-order adjoint estimate result for the uncompressed trapezoidal method does not hold for the compressed trapezoidal method, and the authors also show how to recover the lost order and analyze convergence.
Abstract: Direct transcription methods for the numerical solution of optimal control problems have the advantage that they do not require estimates of the adjoint variables. However, it is natural to want to use the discrete NLP multipliers to estimate the adjoint variables. It has been shown earlier in the literature, for a large collection of numerical discretizations, that the order of accuracy of the state and control variables is generally independent of the implementation of the chosen discretization if no post-processing is used to find the control. This is not always true for the adjoint estimation problem. The compressed trapezoidal discretization is used in some commercial codes. In this paper we show that the second order adjoint estimate result for the uncompressed trapezoidal method does not hold for the compressed trapezoidal method. We also show how to recover the lost order and carefully analyze convergence. Some related results are also discussed.
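
For reference, the discretization under discussion is trapezoidal collocation of the state equation (standard notation, assumed here; compressed implementations additionally eliminate variables and constraints before handing the NLP to the solver, which is what affects the multiplier-based adjoint estimates):

```latex
% Trapezoidal transcription of \dot{x} = f(x, u) on a grid
% t_0 < t_1 < \dots < t_N with steps h_k = t_{k+1} - t_k:
x_{k+1} = x_k + \frac{h_k}{2}\,\bigl( f(x_k, u_k) + f(x_{k+1}, u_{k+1}) \bigr),
\qquad k = 0, \dots, N-1 .
```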

Journal ArticleDOI
TL;DR: This paper implements a design optimization methodology for sizing, shape and topology optimization using two-level parallelism and provides a benchmark in the area of FEA-based design optimization for studying speedups with increasing number of processors to speed development of effective parallel algorithms.
Abstract: Computing clusters created with commodity chips are gaining popularity owing to their relative ease of assembly and maintenance compared to a supercomputer. Such clusters are able to solve much larger problems owing to increased memory and reduced compute time. The challenge, however, is to develop new algorithms and software that can exploit multiple processors. In this paper we discuss the parallel processing options and their implementations in a gradient-based design optimization software system. The main objectives are as follows: (a) implement a design optimization methodology for sizing, shape and topology optimization using two-level parallelism, and (b) provide a benchmark in the area of FEA-based design optimization for studying speedups with increasing numbers of processors, to speed development of effective parallel algorithms. The two-level parallelism is implemented using nested parallel gradient calculations in conjunction with parallel FEA, and parallel line search with parallel FEA. Two case studies involving topology and shape optimization are studied in detail; they include three-dimensional finite element meshes with about 160 000 hexahedral elements and about 175 000 nodes. Furthermore, the case studies have been implemented using a workbench where the topology and shape optimization have an interface with a commercial CAD package, permitting a solid model representation of both the initial and the final optimized part.
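
The first level of parallelism follows a simple pattern: dispatch all n+1 function evaluations of a forward-difference gradient at once, with each evaluation itself standing in for a (here faked) FEA solve. The process pool below is purely illustrative of the pattern, not the paper's cluster implementation.

```python
# Parallel forward-difference gradient: all n+1 evaluations dispatched
# concurrently. A cheap function stands in for the parallel FEA solve.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fea_standin(x):                      # placeholder for a full FEA solve
    return float(np.sum((x - 1.0) ** 2))

def parallel_gradient(f, x, h=1e-6):
    points = [x] + [x + h * e for e in np.eye(len(x))]
    with ProcessPoolExecutor() as pool:
        vals = list(pool.map(f, points))            # concurrent evaluations
    f0, rest = vals[0], np.array(vals[1:])
    return (rest - f0) / h

if __name__ == "__main__":               # guard required for process pools
    print(parallel_gradient(fea_standin, np.zeros(3)))   # ~[-2, -2, -2]
```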

Journal ArticleDOI
TL;DR: A Projected Gradient algorithm is proposed for finding the global minimum of the optimization problem, taking advantage of the particular structure of the second formulation, and the performance of the algorithm for given real-life problems is presented.
Abstract: An optimization problem is described that arises in telecommunications and is associated with multiple cross-sections of a single power cable used to supply remote telecom equipment. The problem consists of minimizing the volume of copper material used in the cables and consequently the total cable cost. Two main formulations for the problem are introduced and some properties of the functions and constraints involved are presented. In particular it is shown that the optimization problems are convex and have a unique optimal solution. A Projected Gradient algorithm is proposed for finding the global minimum of the optimization problem, taking advantage of the particular structure of the second formulation. An analysis of the performance of the algorithm for given real-life problems is also presented.
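
The generic projected-gradient iteration, x ← P(x − t∇f(x)), is sketched below on a box-constrained quadratic; the paper's feasible set and projection are problem-specific and not reproduced here.

```python
# Projected gradient on a box-constrained convex quadratic: take a
# gradient step, then project back onto the feasible set. The problem
# data are hypothetical, not the paper's cable model.
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])       # SPD: convex objective
c = np.array([-4.0, -2.0])
grad = lambda x: Q @ x + c                   # f(x) = 0.5 x'Qx + c'x
project = lambda x: np.clip(x, 0.0, 1.5)     # projection onto a box

x, t = np.zeros(2), 0.4                      # step size t < 2 / lambda_max(Q)
for _ in range(200):
    x = project(x - t * grad(x))
print("projected-gradient solution:", x)     # ~[1.5, 1.25]
```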