
Showing papers on "Surrogate model published in 2010"


Journal Article
TL;DR: This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle issues of computational cost and model accuracy.
Abstract: An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real-world phenomena or to solve challenging design problems. However, due to the computational cost of these high-fidelity simulations, the use of neural networks, kernel methods, and other surrogate modeling techniques has become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, in many fields there is great interest in tools and techniques that facilitate the construction of such regression models, while minimizing the computational cost and maximizing model accuracy. This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle these issues. The toolkit brings together algorithms for data fitting, model selection, sample selection (active learning), hyperparameter optimization, and distributed computing in order to empower a domain expert to efficiently generate an accurate model for the problem or data at hand.
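A minimal sketch of the kind of fit/select/evaluate loop such an active-learning toolkit automates, not the toolkit itself: a Gaussian-process surrogate (via scikit-learn, assumed available) is refit as new samples are placed where its predictive uncertainty is largest, and the one-dimensional expensive_sim function is an illustrative stand-in for a costly simulator.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_sim(x):                      # placeholder for a high-fidelity simulation
    return np.sin(3.0 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(5, 1))     # small initial design
y = expensive_sim(X).ravel()

candidates = np.linspace(0.0, 5.0, 500).reshape(-1, 1)
for _ in range(15):                        # sequential (active-learning) sampling budget
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]     # sample where the surrogate is least certain
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_sim(x_new))

print("final surrogate built from", len(X), "simulator runs")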

490 citations


Journal ArticleDOI
TL;DR: The proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null‐space Monte Carlo (NSMC), that merges the strengths of gradient‐based search and parameter dimensionality reduction.
Abstract: Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU-intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.

184 citations


Journal ArticleDOI
TL;DR: Two different surrogate models based on genetic programming and modular neural network are developed and linked to a multi-objective genetic algorithm (MOGA) to derive the optimal pumping strategies for coastal aquifer management, considering two objectives.

176 citations


Journal ArticleDOI
Jing Li, Dongbin Xiu
TL;DR: A hybrid approach is proposed by sampling both the surrogate model in a large portion of the probability space and the original system in a "small" portion, which is significantly more efficient than the traditional sampling method and is more accurate and robust than the straightforward surrogate model approach.

140 citations


Journal ArticleDOI
TL;DR: In this paper, a multiobjective robust optimization methodology is presented to address the effects of parametric uncertainties on drawbead design, in which the six sigma principle is adopted to measure the variations, a dual response surface method is used to construct the surrogate model, and a multiobjective particle swarm optimization is developed to generate robust Pareto solutions.

127 citations


Journal IssueDOI
TL;DR: The expected improvement approach is demonstrated on two electromagnetic problems, namely, a microwave filter and a textile antenna, and it is shown that this approach can improve the quality of designs on these problems.
Abstract: The increasing use of expensive computer simulations in engineering places a serious computational burden on associated optimization problems. Surrogate-based optimization has become standard practice in analyzing such expensive black-box problems. This article discusses several approaches that use surrogate models for optimization and highlights one sequential design approach in particular, namely, expected improvement. The expected improvement approach is demonstrated on two electromagnetic problems, namely, a microwave filter and a textile antenna.
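As a reference point, the expected-improvement criterion highlighted above can be computed from a Kriging/Gaussian-process prediction as in the following sketch; the names gp, candidates, and f_best in the usage comment are illustrative assumptions, not artifacts of the paper.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Typical use: score candidate designs with the surrogate and simulate the one with largest EI.
# mu, sigma = gp.predict(candidates, return_std=True)
# x_next = candidates[np.argmax(expected_improvement(mu, sigma, f_best))]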

125 citations


Journal ArticleDOI
TL;DR: A general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems.
Abstract: We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ.
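For context, the classical McDiarmid (bounded-differences) inequality that the OCI results sharpen reads as follows; this is the standard textbook form, not the paper's optimal bound:

% McDiarmid's inequality: if changing the i-th input of g alone changes its value by at most D_i,
% then for independent inputs X_1, ..., X_n and any t >= 0,
\[
  \mathbb{P}\bigl[\, g(X_1,\dots,X_n) - \mathbb{E}[g] \ge t \,\bigr]
  \;\le\; \exp\!\left( -\frac{2t^2}{\sum_{i=1}^{n} D_i^2} \right).
\]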

120 citations


Journal ArticleDOI
TL;DR: In this article, the effect of geometric variations on the cooling performance and optimized to enhance the cooling effectiveness using three-dimensional Reynolds-averaged Navier-Stokes analysis and surrogate approximation methods was studied.

115 citations


Journal ArticleDOI
TL;DR: The algorithm is extended to multiple objective functions by instead weighting against the distance to the surrogate Pareto front; it therefore constitutes the first algorithm for expensive, noisy and multiobjective problems in the literature.
Abstract: We propose an algorithm for the global optimization of expensive and noisy black box functions using a surrogate model based on radial basis functions (RBFs). A method for RBF-based approximation is introduced in order to handle noise. New points are selected to minimize the total model uncertainty weighted against the surrogate function value. The algorithm is extended to multiple objective functions by instead weighting against the distance to the surrogate Pareto front; it therefore constitutes the first algorithm for expensive, noisy and multiobjective problems in the literature. Numerical results on analytical test functions show promise in comparison to other (commercial) algorithms, as well as results from a simulation-based optimization problem.
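A minimal sketch of fitting a regularized (non-interpolating) RBF surrogate to noisy black-box samples; it uses scipy's RBFInterpolator with a nonzero smoothing parameter as a stand-in and does not reproduce the paper's own noise handling or uncertainty-weighted point selection.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(40, 2))                   # sampled design points
y = np.sum(X**2, axis=1) + 0.1 * rng.standard_normal(40)   # noisy black-box responses

# smoothing > 0 trades exact interpolation for robustness to the noise in y
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=1e-2)

grid = rng.uniform(-2.0, 2.0, size=(1000, 2))               # cheap exploration of the surrogate
best = grid[np.argmin(surrogate(grid))]
print("surrogate minimizer estimate:", best)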

103 citations


Journal ArticleDOI
TL;DR: It is demonstrated that although formal conditions for applying trust regions are not strictly satisfied for space-mapping surrogate models, the approach improves the overall performance of the space- mapping optimization process.
Abstract: Convergence is a well-known issue for standard space-mapping optimization algorithms. It is heavily dependent on the choice of coarse model, as well as the space-mapping transformations employed in the optimization process. One possible convergence safeguard is the trust region approach, where a surrogate model is optimized in a restricted neighborhood of the current iteration point. In this paper, we demonstrate that although formal conditions for applying trust regions are not strictly satisfied for space-mapping surrogate models, the approach improves the overall performance of the space-mapping optimization process. Further improvement can be realized when approximate fine model Jacobian information is exploited in the construction of the space-mapping surrogate. A comprehensive numerical comparison between standard and trust-region-enhanced space mapping is provided using several examples of microwave design problems.
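A minimal sketch of the trust-region safeguard described above, under stated assumptions: fine and coarse are placeholder models, and the surrogate is a crude output-corrected coarse model rather than a full space-mapping construction; only the accept/shrink logic of the trust region is the point here.

import numpy as np
from scipy.optimize import minimize

def fine(x):                                    # expensive "fine" model (placeholder)
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] + 0.4) ** 2

def coarse(x):                                  # cheap, misaligned "coarse" model (placeholder)
    return (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 0.2) ** 2

def build_surrogate(x0, f0):
    # Output-corrected coarse model: matches the fine response at the current iterate.
    return lambda x: coarse(x) + (f0 - coarse(x0))

x, radius = np.array([0.0, 0.0]), 1.0
for _ in range(20):
    f_x = fine(x)                               # fine-model value at the current iterate
    surrogate = build_surrogate(x, f_x)
    step = minimize(surrogate, x, bounds=[(xi - radius, xi + radius) for xi in x])
    predicted = f_x - surrogate(step.x)         # reduction promised by the surrogate
    actual = f_x - fine(step.x)                 # reduction actually delivered by the fine model
    rho = actual / predicted if predicted > 0 else -1.0
    if rho > 0.1:                               # accept; enlarge the region if agreement was good
        x, radius = step.x, radius * (2.0 if rho > 0.75 else 1.0)
    else:                                       # reject the step and shrink the trust region
        radius *= 0.5

print("final iterate:", x, "trust radius:", radius)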

88 citations


Proceedings ArticleDOI
28 Jun 2010
TL;DR: The gradient/Hessian-enhanced surrogate models are shown to be useful in the development of efficient design optimization, aerodynamic database construction, and uncertainty analysis.
Abstract: In this paper, gradient/Hessian-enhanced surrogate models have been developed based on Kriging approaches. The gradient/Hessian-enhanced Kriging methods have been developed based on direct and indirect formulations. The efficiencies of these methods are compared by analytical function fitting, aerodynamic data modeling and 2D airfoil drag minimization problems. For the aerodynamic problems, efficient CFD gradient/Hessian calculation methods are utilized that make use of adjoint and automatic differentiation techniques. The gradient/Hessian-enhanced surrogate models are shown to be useful in the development of efficient design optimization, aerodynamic database construction, and uncertainty analysis.

01 Jan 2010
TL;DR: This dissertation discusses uncertainty quantification as posed in the Data Collaboration framework using techniques of nonconvex quadratically constrained quadratic programming to provide both lower and upper bounds on the various objectives.
Abstract: Author(s): Russi, Trent Michael | Advisor(s): Packard, Andrew K; Frenklach, Michael | Abstract: This dissertation discusses uncertainty quantification as posed in the Data Collaboration framework. Data Collaboration is a methodology for combining experimental data and system models to induce constraints on a set of uncertain system parameters. The framework is summarized, including outlines of notation and techniques. The main techniques include polynomial optimization and surrogate modeling to ascertain the consistency of all data and models as well as propagate uncertainty in the form of a model prediction. One of the main methods of Data Collaboration is using techniques of nonconvex quadratically constrained quadratic programming to provide both lower and upper bounds on the various objectives. The Lagrangian dual of the NQCQP provides both an outer bound to the optimal objective as well as Lagrange multipliers. These multipliers act as sensitivity measures relaying the effects of changes to the parameter constraint bounds on the optimal objective. These multipliers are rewritten to provide the sensitivity to uncertainty in the response prediction with respect to uncertainty in the parameters and experimental data. It is often of interest to find a vector of parameters that is both feasible and representative of the current community work and knowledge. This is posed as the problem of finding the minimal number of parameters that must deviate from their literature value to achieve concurrence with all experimental data constraints. This problem is heuristically solved using the l1-norm in place of the cardinality function. A lower bound on the objective is provided through an NQCQP formulation. In order to use the NQCQP techniques, the system models need to have quadratic forms. When they do not have quadratic forms, surrogate models are fitted. Surrogate modeling can be difficult for complex models with large numbers of parameters and long simulation times because of the amount of evaluation-time required to make a good fit. New techniques are developed for searching for an active subspace of the parameters, and subsequently creating an experiment design on the active subspace that adheres to the original parameter constraints. The active subspace can have a dimension significantly lower than the original parameter dimension, thereby reducing the computational complexity of generating the surrogate model. The technique is demonstrated on several examples from combustion chemistry and biology. Several other applications of the Data Collaboration framework are presented. They are used to demonstrate the complexity of describing a high dimensional feasible set of parameter values as constrained by experimental data. Approximating the feasible set can lead to a simple description, but the predictive capability of such a set is significantly reduced compared to the actual feasible set. This is demonstrated on an example from combustion chemistry.

Journal ArticleDOI
01 Jan 2010
TL;DR: In this article, a methodology is presented for computing stochastic sensitivities with respect to the design variables, which are the mean values of the input correlated random variables; assuming that an accurate surrogate model is available, the method calculates the component reliability, system reliability, or statistical moments and their sensitivities by applying Monte Carlo simulation to the surrogate model.
Abstract: This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the mean values of the input correlated random variables. Assuming that an accurate surrogate model is available, the proposed method calculates the component reliability, system reliability, or statistical moments and their sensitivities by applying Monte Carlo simulation to the accurate surrogate model. Since the surrogate model is used, the computational cost for the stochastic sensitivity analysis is affordable compared with the use of actual models. The copula is used to model the joint distribution of the correlated input random variables, and the score function is used to derive the stochastic sensitivities of reliability or statistical moments for the correlated random variables. An important merit of the proposed method is that it does not require the gradients of performance functions, which are known to be erroneous when obtained from the surrogate model, or the transformation from X-space to U-space for reliability analysis. Since no transformation is required and the reliability or statistical moment is calculated in X-space, there is no approximation or restriction in calculating the sensitivities of the reliability or statistical moment. Numerical results indicate that the proposed method can estimate the sensitivities of the reliability or statistical moments very accurately, even when the input random variables are correlated.
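A minimal sketch of the score-function idea on a surrogate: a failure probability and its sensitivity to the input means are estimated from the same Monte Carlo sample. For simplicity the inputs here are independent normals, whereas the paper handles correlated inputs through a copula; surrogate_g is an illustrative limit-state function, not the paper's example.

import numpy as np

def surrogate_g(x):                 # surrogate limit-state: failure when g(x) < 0
    return 2.5 - x[:, 0] - 0.5 * x[:, 1] ** 2

mu = np.array([1.5, 0.5])           # design variables: means of the random inputs
sigma = np.array([0.5, 0.3])

rng = np.random.default_rng(2)
x = mu + sigma * rng.standard_normal((200_000, 2))
fail = (surrogate_g(x) < 0.0).astype(float)

p_f = fail.mean()                                   # failure probability
score = (x - mu) / sigma**2                         # d/d(mu_i) of the normal log-density
dpf_dmu = (fail[:, None] * score).mean(axis=0)      # sensitivities d p_f / d mu_i

print("P_f:", p_f, " dP_f/dmu:", dpf_dmu)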

Book ChapterDOI
11 Sep 2010
TL;DR: This paper investigates the use of rank-based Support Vector Machine as surrogate model within CMA-ES, enforcing the invariance of the approach with respect to monotonic transformations of the fitness function, and uses the Covariance Matrix adapted by CMA-ES within a Gaussian kernel.
Abstract: Taking inspiration from approximate ranking, this paper investigates the use of rank-based Support Vector Machine as surrogate model within CMA-ES, enforcing the invariance of the approach with respect to monotonic transformations of the fitness function. Whereas the choice of the SVM kernel is known to be a critical issue, the proposed approach uses the Covariance Matrix adapted by CMA-ES within a Gaussian kernel, ensuring the adaptation of the kernel to the currently explored region of the fitness landscape at almost no computational overhead. The empirical validation of the approach on standard benchmarks, comparatively to CMA-ES and recent surrogate-based CMA-ES, demonstrates the efficiency and scalability of the proposed approach.
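A minimal sketch of the kernel construction described above: a Gaussian kernel whose metric is the covariance matrix adapted by CMA-ES (a Mahalanobis distance), so the surrogate sees the landscape in the coordinates currently being explored. The ranking SVM itself is not reproduced; the covariance matrix below is a placeholder.

import numpy as np

def cma_gaussian_kernel(x1, x2, C, sigma=1.0):
    """k(x1, x2) = exp(-0.5 * (x1 - x2)^T C^{-1} (x1 - x2) / sigma^2)."""
    d = x1 - x2
    return np.exp(-0.5 * d @ np.linalg.solve(C, d) / sigma**2)

# Example: an anisotropic covariance stretches the kernel along the adapted search direction.
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])
print(cma_gaussian_kernel(np.array([0.0, 0.0]), np.array([1.0, 1.0]), C))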

Proceedings ArticleDOI
07 Jul 2010
TL;DR: The proposed approach aims at building a global surrogate model defined on the decision space and tightly characterizing the current Pareto set and the dominated region, in order to speed up the evolution progress toward the true Pareto set.
Abstract: Most surrogate approaches to multi-objective optimization build a surrogate model for each objective. These surrogates can be used inside a classical Evolutionary Multiobjective Optimization Algorithm (EMOA) in lieu of the actual objectives, without modifying the underlying EMOA; or to filter out points that the models predict to be uninteresting. In contrast, the proposed approach aims at building a global surrogate model defined on the decision space and tightly characterizing the current Pareto set and the dominated region, in order to speed up the evolution progress toward the true Pareto set. This surrogate model is specified by combining a One-class Support Vector Machine (SVM) to characterize the dominated points, and a Regression SVM to clamp the Pareto front on a single value. The resulting surrogate model is then used within state-of-the-art EMOAs to pre-screen the individuals generated by application of standard variation operators. Empirical validation on classical MOO benchmark problems shows a significant reduction of the number of evaluations of the actual objective functions.

Journal ArticleDOI
TL;DR: In this article, a variable reduction technique is presented which employs proper orthogonal decomposition to filter out undesirable or badly performing geometries from an optimization process; unlike traditional screening techniques, this technique operates at the geometric level instead of the variable level.
Abstract: When carrying out design searches, traditional variable screening techniques can find it extremely difficult to distinguish between important and unimportant variables. This is particularly true when only a small number of simulations is combined with a parameterization which results in a large number of variables of seemingly equal importance. Here the authors present a variable reduction technique which employs proper orthogonal decomposition to filter out undesirable or badly performing geometries from an optimization process. Unlike traditional screening techniques, the presented method operates at the geometric level instead of the variable level. The filtering process uses the designs which result from a geometry parameterization instead of the variables which control the parameterization. The method is shown to perform well in the optimization of a two-dimensional airfoil for the minimization of the drag-to-lift ratio, producing designs better than those resulting from traditional kriging-based surrogate model optimization and with a significant reduction in surrogate tuning cost.
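A minimal sketch of the underlying proper-orthogonal-decomposition step, assuming geometry "snapshots" are available as vectors of surface coordinates: an SVD extracts the dominant geometric modes, and new designs can be projected onto (and screened in) that reduced basis. The airfoil parameterization and filtering rules of the paper are not reproduced; the snapshots here are random placeholders.

import numpy as np

rng = np.random.default_rng(3)
snapshots = rng.standard_normal((50, 200))        # 50 candidate geometries, 200 surface points each

mean_shape = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean_shape, full_matrices=False)

k = 5                                             # keep the dominant POD modes
modes = Vt[:k]                                    # (k, 200) basis of geometric variation

def project(geometry):
    """Coefficients of a geometry in the reduced POD basis."""
    return modes @ (geometry - mean_shape)

def reconstruct(coeffs):
    return mean_shape + coeffs @ modes

new_geom = snapshots[0]
print("reconstruction error:", np.linalg.norm(new_geom - reconstruct(project(new_geom))))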

Proceedings ArticleDOI
01 Jan 2010
TL;DR: Four techniques with increasing popularity in the design automation community are focused on: screening and variable reduction in both the input and the output spaces, simultaneous use of multiple surrogates, sequential sampling and optimization, and conservative estimators.
Abstract: Design analysis and optimization based on high-fidelity computer experiments is commonly expensive. Surrogate modeling is often the tool of choice for reducing the computational burden. However, even after years of intensive research, surrogate modeling still involves a struggle to achieve maximum accuracy within limited resources. This work summarizes advanced and yet simple statistical tools that help. We focus on four techniques with increasing popularity in the design automation community: (i) screening and variable reduction in both the input and the output spaces, (ii) simultaneous use of multiple surrogates, (iii) sequential sampling and optimization, and (iv) conservative estimators.

Proceedings ArticleDOI
21 Jun 2010
TL;DR: It is argued that surrogate models of the kind presented here can provide low-cost and accurate estimates of power consumption to drive on-line dynamic control mechanisms and for use in off-line tuning.
Abstract: We present and evaluate a surrogate model, based on hardware performance counter measurements, to estimate computer system power consumption. Power and energy are especially important in the design and operation of large data centers and of clusters used for scientific computing. Tradeoffs are made between performance and power consumption, and these need to be dynamic because activity varies over time. While it is possible to instrument systems for fine-grain power monitoring, such instrumentation is costly and not commonly available. Furthermore, the latency and sampling periods of hardware power monitors can be large compared to the time scales at which workloads can change and dynamic power controls can operate. Given these limitations, we argue that surrogate models of the kind we present here can provide low-cost and accurate estimates of power consumption to drive on-line dynamic control mechanisms and for use in off-line tuning. In this brief paper, we discuss a general approach to building system power estimation models based on hardware performance counters. Using this technique, we then present a model for an Intel Core i7 system that has an absolute estimation error of 5.32 percent (median) and acceptable data collection overheads on varying workloads, CPU power states (frequency and voltage), and number of active cores. Since this method is based on event sampling of hardware counters, one can make a tradeoff between estimation accuracy and data-collection overhead.
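A minimal sketch of a counter-based power model of the kind described: an ordinary least-squares fit of measured power against a few hardware event rates. The counters, data, and coefficients below are illustrative placeholders, not the paper's Core i7 model or numbers.

import numpy as np

# rows: samples; columns: e.g. instructions retired, last-level-cache misses, core frequency
counters = np.array([
    [1.2e9, 2.0e6, 2.6e9],
    [2.5e9, 8.0e6, 2.6e9],
    [0.6e9, 1.0e6, 1.2e9],
    [3.1e9, 1.5e7, 3.4e9],
])
measured_watts = np.array([35.0, 48.0, 22.0, 63.0])      # from a power meter during training

X = np.column_stack([np.ones(len(counters)), counters])  # add an idle-power intercept
coef, *_ = np.linalg.lstsq(X, measured_watts, rcond=None)

def estimate_power(sample):
    """Predict power (W) from a new counter sample at run time."""
    return coef[0] + sample @ coef[1:]

print(estimate_power(np.array([2.0e9, 5.0e6, 2.6e9])))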

Journal ArticleDOI
TL;DR: In this article, magnetostatic finite-element analysis (FEA) is employed only for calculating the magnetic vector potential in the coils, and the motor performance is then estimated through analytical formulas.
Abstract: The model allows the ultrafast nonlinear simulation of the steady-state performance of synchronous machines and is particularly suitable for brushless motors with nonoverlapping windings having coils concentrated around the teeth. Finite-element analysis (FEA) is employed only for calculating the magnetic vector potential in the coils, and the motor performance is then estimated through analytical formulas. For the example interior-permanent-magnet motors presented, as little as one magnetostatic finite-element (FE) solution was used for the fundamental flux linkage and average torque computation. Two FE solutions were employed for the core flux density waveforms and power loss estimation. A minimum of three solutions is recommended for the torque ripple, back electromotive force, and induced voltage. A substantial reduction of one to two orders of magnitude was achieved for the solving time as compared with the detailed time-stepping FEA. The surrogate FE model can also be tuned for increased speed, comparable with that of magnetic equivalent circuit solvers. The general applicability of the model is discussed, and recommendations are provided. Successful validation was performed against the detailed FEA and experiments.

Journal ArticleDOI
TL;DR: The results obtained from the solution of four different case studies based on aircraft design problems reinforce the idea that quadratic interpolation is only well-suited to very simple problems.
Abstract: The replacement of the analysis portion of an optimization problem by its equivalent metamodel usually results in a lower computational cost. In this paper, a conventional nonapproximative approach is compared against three different metamodels: quadratic-interpolation-based response surfaces, Kriging, and artificial neural networks. The results obtained from the solution of four different case studies based on aircraft design problems reinforce the idea that quadratic interpolation is only well-suited to very simple problems. At higher dimensionality, the usage of the more complex Kriging and artificial neural network models may result in considerable performance benefits.

Journal ArticleDOI
TL;DR: A multiobjective framework for global surrogate model generation to help tackle both problems and that is applicable in both the static and sequential design (adaptive sampling) case is presented.
Abstract: When dealing with computationally expensive simulation codes or process measurement data, surrogate modeling methods are firmly established as facilitators for design space exploration, sensitivity analysis, visualization, prototyping and optimization. Typically the model parameter (=hyperparameter) optimization problem as part of global surrogate modeling is formulated in a single objective way. Models are generated according to a single objective (accuracy). However, this requires an engineer to determine a single accuracy target and measure upfront, which is hard to do if the behavior of the response is unknown. Likewise, the different outputs of a multi-output system are typically modeled separately by independent models. Again, a multiobjective approach would benefit the domain expert by giving information about output correlation and enabling automatic model type selection for each output dynamically. With this paper the authors attempt to increase awareness of the subtleties involved and discuss a number of solutions and applications. In particular, we present a multiobjective framework for global surrogate model generation to help tackle both problems and that is applicable in both the static and sequential design (adaptive sampling) case.

Journal ArticleDOI
TL;DR: In this paper, isogeometric shape design sensitivity analysis and optimization methods are developed incorporating a T-spline basis, where the NURBS basis function that is used in representing the geometric model in the CAD system is directly used in the response analysis.
Abstract: Numerical methods for shape design sensitivity analysis and optimization have been developed for several decades. However, the finite-element-based shape design sensitivity analysis and optimization have experienced some bottleneck problems such as design parameterization and design remodeling during optimization. In this paper, as a remedy for these problems, isogeometric shape design sensitivity analysis and optimization methods are developed incorporating a T-spline basis. In the shape design sensitivity analysis and optimization procedure using a standard finite element approach, the design boundary should be parameterized for the smooth variation of the boundary using a separate geometric modeler, such as a CAD system. Otherwise, the optimal design usually tends to fall into an undesirable irregular shape. In an isogeometric approach, the NURBS basis function that is used in representing the geometric model in the CAD system is directly used in the response analysis, and the design boundary is expressed by the same NURBS function as used in the analysis. Moreover, the smoothness of the NURBS can allow the large perturbation of the design boundary without a severe mesh distortion. Thus, the isogeometric shape design sensitivity analysis is free from remeshing during the optimization process. In addition, the use of a T-spline basis instead of NURBS can reduce the number of degrees of freedom, so that the optimal solution can be obtained more efficiently while yielding the same optimum design shape.

Journal ArticleDOI
TL;DR: In this article, a fast strip analysis model is adopted as a surrogate model to approximate the time-consuming computer simulation software for predicating the filling characteristics of injection molding, in which the original part is represented by a rectangular strip, and a finite difference method is adopted to solve one dimensional flow in the strip.
Abstract: Injection molding process parameters such as injection temperature, mold temperature, and injection time have direct influence on the quality and cost of products. However, the optimization of these parameters is a complex and difficult task. In this paper, a novel surrogate-based evolutionary algorithm for process parameters optimization is proposed. Considering that most injection molded parts have a sheet like geometry, a fast strip analysis model is adopted as a surrogate model to approximate the time-consuming computer simulation software for predicating the filling characteristics of injection molding, in which the original part is represented by a rectangular strip, and a finite difference method is adopted to solve one dimensional flow in the strip. Having established the surrogate model, a particle swarm optimization algorithm is employed to find out the optimum process parameters over a space of all feasible process parameters. Case studies show that the proposed optimization algorithm can optimize the process parameters effectively.
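A minimal sketch of the optimization stage: a basic particle swarm searching over process parameters, with a cheap analytical function standing in for the strip-analysis surrogate of the filling simulation. Bounds, swarm parameters, and the objective are illustrative assumptions only.

import numpy as np

def surrogate_objective(p):          # stand-in for the strip-analysis shrinkage predictor
    return (p[:, 0] - 220.0) ** 2 / 400.0 + (p[:, 1] - 60.0) ** 2 / 25.0 + (p[:, 2] - 1.5) ** 2

lo = np.array([180.0, 40.0, 0.5])    # e.g. melt temperature, mold temperature, injection time
hi = np.array([260.0, 90.0, 3.0])

rng = np.random.default_rng(4)
pos = rng.uniform(lo, hi, size=(30, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), surrogate_objective(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = surrogate_objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("optimal process parameters (surrogate):", gbest)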

Journal ArticleDOI
TL;DR: This work proposes the use of cross validation for estimating the required safety margin for a desired level of conservativeness (percentage of safe predictions) and finds that cross validation is effective for selecting the safety margin.
Abstract: In this work we use safety margins to conservatively compensate for fitting errors associated with surrogates. We propose the use of cross validation for estimating the required safety margin for a desired level of conservativeness (percentage of safe predictions). The approach was tested on three algebraic examples for two basic surrogates: namely, kriging and polynomial response surface. For these examples we found that cross validation is effective for selecting the safety margin. We also applied the approach to the probabilistic design optimization of a composite laminate. This design under uncertainty example showed that the approach can be successfully used in engineering applications.
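A minimal sketch of the cross-validation idea, under simplifying assumptions (a one-dimensional quadratic response surface and a 90% conservativeness target): leave-one-out errors of the unbiased fit are collected, and the error quantile matching the target becomes the safety margin added to predictions.

import numpy as np

def fit_quadratic(X, y):
    A = np.column_stack([np.ones(len(X)), X, X**2])      # 1-D quadratic response surface
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: coef[0] + coef[1] * x + coef[2] * x**2

rng = np.random.default_rng(5)
X = np.sort(rng.uniform(0.0, 4.0, 25))
y = np.exp(0.5 * X) + 0.2 * rng.standard_normal(25)       # "true" responses with noise

# Leave-one-out cross-validation errors of the unbiased surrogate.
errors = []
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    model = fit_quadratic(X[mask], y[mask])
    errors.append(y[i] - model(X[i]))                      # positive error = unconservative

# Safety margin: the error quantile matching the desired conservativeness level.
target = 0.90
margin = np.quantile(errors, target)
model = fit_quadratic(X, y)
conservative_prediction = lambda x: model(x) + margin      # shifted, conservative surrogate
print("safety margin:", margin)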

Journal ArticleDOI
28 Apr 2010
TL;DR: The optimization results show that the isentropic efficiency and the total PR are enhanced at both design and off-design conditions through multi-objective optimization.
Abstract: This paper presents the design optimization of a centrifugal compressor impeller with a hybrid multi-objective evolutionary algorithm. Reynolds-averaged Navier-Stokes (RANS) equations are solved with the shear stress transport turbulence model as a turbulence closure model. Flow analysis is performed on a hexahedral grid through a finite-volume solver. Two objectives, viz., the isentropic efficiency and the total pressure ratio (PR), are selected with four design variables that define the impeller hub and shroud contours in meridian terms for optimizing the system. The validation of numerical results was performed through experimental data for the total PR and the isentropic efficiency. Objective-function values are numerically evaluated through the RANS analysis at design points that are selected through the Latin hypercube sampling method. A fast and elitist non-dominated sorting genetic algorithm (NSGA-II) with an ε-constraint strategy for local search coupled with a surrogate model is used for...

Journal ArticleDOI
TL;DR: A progressive simulation-based design optimization strategy is developed that can be applied to highly nonlinear impulse-type processes such as shot peening, laser peening, and bullet impacts on aircraft structural components.
Abstract: A progressive simulation-based design optimization strategy is developed that can be applied to highly nonlinear impulse-type processes such as shot peening, laser peening, and bullet impacts on aircraft structural components. The design problems entail the use of multiple fidelities in simulation, time-consuming elastic-plastic analysis, and mixed types of optimization variables. An optimization strategy based on progressively increasing the complexity and fidelity is developed, along with suitable surrogate models. Multilevel fidelity models include axisymmetric, symmetric three-dimensional, and full-scale simulations to enable design optimization. The first two models are used to perform parametric studies and to localize the potential design space. This creates a reduced design space and an effective starting point for the subsequent optimization iterations, using the proposed modified particle swarm optimization for mixed variables. In the third step, the full-scale model is employed to find an optimum solution. The design methodology is demonstrated on laser peening of a structural component. Laser peening is a surface enhancement technique that induces compressive residual stresses at the peened surface by generating elastic-plastic deformation. These stresses improve the surface fatigue life. The parameters, pressure pulse and spot dimensions, impulse locations (all continuous), number of shots (integer), and location of shots (discrete) are the optimization variables with stress constraints.

Journal ArticleDOI
TL;DR: An improved criterion is used to provide directions in which additional training samples can be added to improve the surrogate model; stronger global exploration performance and a more precise optimal solution can be obtained with the improved method at the expense of a modest increase in the infill data.
Abstract: This article introduces a step-by-step optimization method based on the radial basis function (RBF) surrogate model and proposes an improved expected improvement selection criterion to improve the global performance of this optimization method. The method is then applied to the optimization of the packing profile of the injection molding process to obtain the best shrinkage evenness of the molded part. The idea is, first, to establish an approximate functional relationship between shrinkage evenness and the process parameters from a small design of experiments with the RBF surrogate model, to alleviate the computational expense of the optimization iterations. Then, an improved criterion is used to provide directions in which additional training samples can be added to improve the surrogate model. Two test functions are investigated, and the results show that stronger global exploration performance and a more precise optimal solution can be obtained with the improved method at the expense of a modest increase in the infill data. Furthermore, the optimal packing profile is obtained for the first time, which indicates that the optimal packing profile should be first constant and then ramp-down. Subsequently, a discussion of this result is given to explain why the optimal profile takes this form.

Journal ArticleDOI
TL;DR: The use of the surrogate model accelerates the convergence of the DE optimization procedure and additionally provides a better solution at the same number of exact evaluations, compared to the original DE algorithm.

Journal ArticleDOI
TL;DR: In this paper, a crashworthiness design of a regular ship fender structure with varying geometric dimensions is presented, where specific energy absorption (SEA) and maximum crushing force (Pm) are set as two objectives and the thickness of the outer skin, the thickness and stiffness of the frames and the height of the fender are selected as four design variables.

Proceedings ArticleDOI
18 Jul 2010
TL;DR: This paper proposes a surrogate-assisted evolutionary algorithm, Minimax SAEA, for tackling minimax optimization problems, which can successfully solve five of the benchmark problems within 110 function evaluations.
Abstract: Minimax optimization requires to minimize the maximum output in all possible scenarios. It is a very challenging problem to evolutionary computation. In this paper, we propose a surrogate-assisted evolutionary algorithm, Minimax SAEA, for tackling minimax optimization problems. In Minimax SAEA, a surrogate model based on Gaussian process is built to approximate the mapping between the decision variables and the objective value. In each generation, most of the new solutions are evaluated based on the surrogate model and only the best one is evaluated by the actual objective function. Minimax SAEA is tested on six benchmark problems and the experimental results show that Minimax SAEA can successfully solve five of them within 110 function evaluations.