
Showing papers on "Surrogate model published in 2004"


Proceedings ArticleDOI
01 Jan 2004
TL;DR: In this article, a new approach is taken to integrate data from approximate and detailed simulations into a single surrogate model of the input-output relationship: a Gaussian process is fitted to the bulk of approximate-simulation data and then adjusted with a small amount of detailed-simulation data.
Abstract: Preliminary design of a complex system often involves exploring a large design space. This may require repeated use of computationally expensive simulations. To ease the computational burden, surrogate models are built to provide rapid approximations of more expensive models. However, the surrogate models themselves are often expensive to build because they are based on repeated experiments with computationally expensive simulations. An alternative approach is to replace the detailed simulations with simplified approximate simulations, thereby sacrificing accuracy for reduced computational time. Naturally, surrogate models built from these approximate simulations will also be imprecise. A strategy is needed for improving the precision of surrogate models based on approximate simulations without significantly increasing computational time. In this paper, a new approach is taken to integrate data from approximate and detailed simulations to build a surrogate model to describe the relationship between output and input parameters. Experimental results from approximate simulations form the bulk of the data, and they are used to build a model based on a Gaussian process. The fitted model is then ‘adjusted’ by incorporating small amounts of data from detailed simulations to obtain a more accurate prediction model. The effectiveness of this approach is demonstrated with a design application for a cellular material that is used to cool a microprocessor. The emphasis is on the method and not on the results per se. Copyright © 2004 by ASME
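The adjustment strategy this abstract describes can be sketched as an additive two-stage fit: one Gaussian process trained on many approximate-simulation runs, plus a second Gaussian process trained on the residuals at a few detailed-simulation points. A minimal sketch, assuming hypothetical stand-in simulators `f_cheap` and `f_detailed` (not the paper's cellular-material model) and a fixed squared-exponential kernel:

```python
import numpy as np

def gp_fit_predict(X, y, Xs, ls=0.3, noise=1e-8):
    # Zero-mean GP regression with a squared-exponential kernel (fixed
    # length scale `ls`; a real implementation would estimate it from data).
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / ls) ** 2)
    alpha = np.linalg.solve(k(X, X) + noise * np.eye(len(X)), y)
    return k(Xs, X) @ alpha

# Hypothetical simulators, for illustration only.
def f_cheap(x):     # approximate simulation: fast but biased
    return np.sin(2 * np.pi * x)

def f_detailed(x):  # detailed simulation: cheap model plus a smooth discrepancy
    return np.sin(2 * np.pi * x) + 0.3 * x

x_lo = np.linspace(0.0, 1.0, 25)    # many approximate runs (bulk of the data)
x_hi = np.array([0.1, 0.5, 0.9])    # few expensive detailed runs
xs = np.linspace(0.0, 1.0, 101)

base = gp_fit_predict(x_lo, f_cheap(x_lo), xs)                # stage 1: bulk model
resid = f_detailed(x_hi) - gp_fit_predict(x_lo, f_cheap(x_lo), x_hi)
pred = base + gp_fit_predict(x_hi, resid, xs)                 # stage 2: 'adjusted' model

err_base = np.max(np.abs(f_detailed(xs) - base))
err_adj = np.max(np.abs(f_detailed(xs) - pred))
```

On this toy problem the adjusted model tracks the detailed simulator much more closely than the bulk model alone, which is the effect the paper exploits.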

233 citations


Proceedings ArticleDOI
30 Aug 2004
TL;DR: It is demonstrated that first-order consistency can be insufficient to achieve acceptable convergence rates in practice and new second-order additive, multiplicative, and combined corrections which can significantly accelerate convergence are presented.
Abstract: Surrogate-based optimization methods have become established as effective techniques for engineering design problems through their ability to tame nonsmoothness and reduce computational expense. In recent years, supporting mathematical theory has been developed to provide the foundation of provable convergence for these methods. One of the requirements of this provable convergence theory involves consistency between the surrogate model and the underlying truth model that it approximates. This consistency can be enforced through a variety of correction approaches, and is particularly essential in the case of surrogate-based optimization with model hierarchies. First-order additive and multiplicative corrections currently exist which satisfy consistency in values and gradients between the truth and surrogate models at a single point. This paper demonstrates that first-order consistency can be insufficient to achieve acceptable convergence rates in practice and presents new second-order additive, multiplicative, and combined corrections which can significantly accelerate convergence. These second-order corrections may enforce consistency with either the actual truth model Hessian or its finite difference, quasi-Newton, or Gauss-Newton approximation.
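The first- and second-order corrections can be illustrated in one dimension: an additive correction forces the surrogate to match the truth model's value and slope at the current iterate, and the second-order version also matches curvature. A minimal sketch with hypothetical truth and low-fidelity functions and finite-difference derivatives (the paper also covers multiplicative and combined forms, and quasi-Newton or Gauss-Newton Hessian approximations):

```python
import numpy as np

f_hi = lambda x: np.exp(x) * np.sin(x)   # hypothetical "truth" model
f_lo = lambda x: x + x ** 2              # hypothetical low-fidelity surrogate

def d1(f, x, h=1e-5):   # central first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):   # central second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

x0 = 0.5  # current iterate, where consistency is enforced

# First-order additive correction: consistent in value and gradient at x0.
a1 = lambda x: (f_hi(x0) - f_lo(x0)) + (d1(f_hi, x0) - d1(f_lo, x0)) * (x - x0)
# Second-order additive correction: additionally consistent in curvature.
a2 = lambda x: a1(x) + 0.5 * (d2(f_hi, x0) - d2(f_lo, x0)) * (x - x0) ** 2

corr1 = lambda x: f_lo(x) + a1(x)   # first-order corrected surrogate
corr2 = lambda x: f_lo(x) + a2(x)   # second-order corrected surrogate
```

Away from x0 the second-order corrected surrogate stays close to the truth model over a wider neighborhood, which is what lets the trust region, and hence the convergence rate, grow.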

202 citations


Journal Article
TL;DR: An evolutionary algorithm for the general nonlinear programming problem using surrogate models whose quality is determined by their consistency in ranking the population rather than their statistical accuracy.
Abstract: The paper describes an evolutionary algorithm for the general nonlinear programming problem using a surrogate model. Surrogate models are used in optimization when model evaluation is expensive. Two surrogate models are implemented, one for the objective function and another for a penalty function based on the constraint violations. The proposed method uses a sequential technique for updating these models. The quality of the surrogate models is determined by their consistency in ranking the population rather than their statistical accuracy. The technique is evaluated on a number of standard test problems.
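The ranking-based quality measure can be made concrete with a Kendall-style concordance score: the surrogate is good enough if it orders the population the way the true model would, even when its absolute predictions are off. A small sketch with made-up fitness values (not the paper's test problems):

```python
import numpy as np

def rank_consistency(true_vals, pred_vals):
    # Fraction of population pairs ordered the same way by the true model
    # and by the surrogate (1.0 = identical ranking, 0.0 = fully reversed).
    t = np.asarray(true_vals, dtype=float)
    p = np.asarray(pred_vals, dtype=float)
    i, j = np.triu_indices(len(t), k=1)
    return float(np.mean(np.sign(t[i] - t[j]) == np.sign(p[i] - p[j])))

true_f = np.array([3.0, 1.0, 4.0, 1.5, 9.0])   # expensive true evaluations
good = np.array([2.9, 1.2, 4.4, 1.4, 8.1])     # biased surrogate, same ordering
bad = 10.0 - true_f                            # order-reversing surrogate
```

Here `good` would be accepted despite its numerical error, while `bad` would trigger a model update under this criterion.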

61 citations


Proceedings ArticleDOI
30 Aug 2004
TL;DR: In this paper, the authors developed an efficient approach for computationally expensive multiobjective design optimization problems using an improved hypercube sampling to preselect an array of design points on which the computational-fluid-dynamics code will run.
Abstract: In this work we develop an efficient approach for computationally expensive multiobjective design optimization problems. In this approach we bring together design of experiments, a response surface model, a genetic algorithm, and computational-fluid-dynamics analysis tools to provide an integrated optimization system. We use an improved hypercube sampling to preselect an array of design points on which the computational-fluid-dynamics code will run. Then a computationally cheap surrogate model is constructed based on response surface approximation. A real-coded genetic algorithm is then applied to the surrogate model to perform multiobjective optimization. Representative solutions are chosen from the Pareto-optimal front to verify against the computational-fluid-dynamics code. This proposed method is used in the redesign of a single-stage turbopump, a two-stage turbopump, and the NASA rotor67 transonic compressor blade.
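The preselection step rests on Latin hypercube sampling, which guarantees that each design variable's range is covered evenly with exactly one point per stratum. A basic variant (the paper uses an improved version; this sketch only shows the stratification idea):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # One point per equal-probability bin in each of the d dimensions:
    # row i of `u` starts in bin i, then each column is permuted independently.
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

rng = np.random.default_rng(1)
pts = latin_hypercube(8, 3, rng)   # 8 candidate design points in 3 variables
```

Each of the 8 rows would then be evaluated by the CFD code; the response surface is fitted to those results before the genetic algorithm ever runs.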

59 citations


Journal ArticleDOI
TL;DR: The use of lower-fidelity models together with approximation methods throughout the optimization process is finding increasing popularity; some of these strategies are noted here and extended to include any information which may be available through sensitivities.
Abstract: Approximation methods have found increasing use in the optimization of complex engineering systems. The approximation method provides a 'surrogate' model which, once constructed, can be called instead of the original expensive model for the purposes of optimization. Sensitivity information on the response of interest may be cheaply available in many applications, for example, through a perturbation analysis in a finite element model or through the use of adjoint methods in CFD. This information is included here within the approximation, and two strategies for optimization are described. The first involves simply resampling at the best predicted point; the second is based on an expected improvement approach. Further, the use of lower-fidelity models together with approximation methods throughout the optimization process is finding increasing popularity. Some of these strategies are noted here and extended to include any information which may be available through sensitivities. Encouraging initial results are obtained.
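The expected-improvement strategy mentioned here has a closed form when the surrogate (e.g. a Gaussian-process model) returns a predictive mean and standard deviation at a candidate point. A minimal sketch for minimization, using only the standard library; `mu`, `sigma`, and `f_best` are generic inputs, not tied to the paper's models:

```python
import math

def expected_improvement(mu, sigma, f_best):
    # E[max(f_best - Y, 0)] for Y ~ N(mu, sigma^2): rewards candidates that
    # either predict below the best observed value or are very uncertain.
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)   # deterministic prediction
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # normal PDF
    return (f_best - mu) * Phi + sigma * phi
```

Resampling at the best predicted point corresponds to maximizing only the first term; the `sigma * phi` term is what pushes the search into unexplored regions.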

59 citations


Book ChapterDOI
18 Sep 2004
TL;DR: In this paper, an evolutionary algorithm for the general nonlinear programming problem using a surrogate model is described. The quality of the surrogate models is determined by their consistency in ranking the population rather than their statistical accuracy.
Abstract: The paper describes an evolutionary algorithm for the general nonlinear programming problem using a surrogate model. Surrogate models are used in optimization when model evaluation is expensive. Two surrogate models are implemented, one for the objective function and another for a penalty function based on the constraint violations. The proposed method uses a sequential technique for updating these models. The quality of the surrogate models is determined by their consistency in ranking the population rather than their statistical accuracy. The technique is evaluated on a number of standard test problems.

58 citations


Journal ArticleDOI
TL;DR: In this paper, a space mapping technique using surrogate models together with response surfaces was used for structural optimization of crashworthiness problems, which reduced the intrusion in the passenger compartment area by 32% without compromising other crashworthiness parameters.
Abstract: The aim of this work is to illustrate how a space mapping technique using surrogate models together with response surfaces can be used for structural optimization of crashworthiness problems. To determine the response surfaces, several functional evaluations must be performed and each evaluation can be computationally demanding. The space mapping technique uses surrogate models, i.e. less costly models, to determine these surfaces and their associated gradients. The full model is used to correct the gradients from the surrogate model for the next iteration. Thus, the space mapping technique makes it possible to reduce the total computing time needed to find the optimal solution. First, two analytical functions and one analytical structural optimization problem are presented to exemplify the idea of space mapping and to compare the efficiency of space mapping to traditional response surface optimization. Secondly, a sub-model of a complete vehicle finite element (FE) model is used to study different objective functions in vehicle crashworthiness optimization. Finally, the space mapping technique is applied to a structural optimization problem of a large industrial FE vehicle model, consisting of 350,000 shell elements and a computing time of 100 h. In this problem the intrusion in the passenger compartment area was reduced by 32% without compromising other crashworthiness parameters.

51 citations


Journal ArticleDOI
TL;DR: This work proposes a rough set based approach that can identify multiple sub-regions in a design space, within which all of the design points are expected to have a performance value equal to or less than a given level.
Abstract: Modern engineering design problems often involve computation-intensive analysis and simulation processes. Design optimization based on such processes is desired to be efficient, informative and transparent. This work proposes a rough set based approach that can identify multiple sub-regions in a design space, within which all of the design points are expected to have a performance value equal to or less than a given level. The rough set method is applied iteratively on a growing sample set. A novel termination criterion is also developed to ensure a modest number of total expensive function evaluations to identify these sub-regions and search for the global optimum. The significance of the proposed method is twofold. First, it provides an intuitive method to establish the mapping from the performance space to the design space, i.e. given a performance level, its corresponding design region(s) can be identified. Such a mapping could be potentially used to explore and visualize the entire design space. Second, it can be naturally extended to a global optimization method. It also bears potential for more broad application to problems such as metamodeling-based design and robust design optimization. The proposed method was tested with a number of test problems and compared with a few well-known global optimization algorithms.

49 citations


Journal ArticleDOI
TL;DR: Kriging, an alternative method for creating surrogate models, is applied to construct approximations of legacy data for a large-scale system to replace expensive-to-run computer analysis codes or to develop a model for a set of nonuniform legacy data.
Abstract: Despite a steady increase in computing power, the complexity of engineering analyses seems to advance at the same rate. Traditional parametric design analysis is inadequate for the analysis of large-scale engineering systems because of its computational inefficiency; therefore, a departure from the traditional parametric design approach is required. In addition, the existence of legacy data for complex, large-scale systems is commonplace. Approximation techniques may be applied to build computationally inexpensive surrogate models for large-scale systems to replace expensive-to-run computer analysis codes or to develop a model for a set of nonuniform legacy data. Response-surface models are frequently utilized to construct surrogate approximations; however, they may be inefficient for systems with a large number of design variables. Kriging, an alternative method for creating surrogate models, is applied in this work to construct approximations of legacy data for a large-scale system. Comparisons between response surfaces and kriging are made using the legacy data from the High Speed Civil Transport (HSCT) approximation challenge. Since the analysis points already exist, a modified design-of-experiments technique is needed to select the appropriate sample points. In this paper, a method to handle this problem is presented, and the results are compared against previous work.

37 citations


Journal ArticleDOI
TL;DR: The Surrogate Model can be used to accurately and unambiguously identify polymers whose fibrinogen adsorption is at the limits of the range, which is an essential requirement for assessing polymers for regenerative tissue applications.
Abstract: We present a Surrogate (semiempirical) Model for prediction of protein adsorption onto the surfaces of biodegradable polymers that have been designed for tissue engineering applications. The protein used in these studies, fibrinogen, is known to play a key role in blood clotting. Therefore, fibrinogen adsorption dictates the performance of implants exposed to blood. The Surrogate Model combines molecular modeling, machine learning and an Artificial Neural Network. This novel approach includes an accounting for experimental error using a Monte Carlo analysis. Briefly, measurements of human fibrinogen adsorption were obtained for 45 polymers. A total of 106 molecular descriptors were generated for each polymer. Of these, 102 descriptors were computed using the Molecular Operating Environment (MOE) software based upon the polymer chemical structures, two represented different monomer types, and two were measured experimentally. The Surrogate Model was developed in two stages. In the first stage, the three de...

37 citations


Patent
10 Sep 2004
TL;DR: In this paper, a method of optimising a sequential combinatorial process comprising an interchangeable sequence of events comprises using a master model to model a selection of the possible sequences, and using information derived from the master model in a surrogate model with a much shorter computation time.
Abstract: A method of optimising a sequential combinatorial process comprising an interchangeable sequence of events comprises using a master model to model a selection of the possible sequences, and using information derived from the master model in a surrogate model that approximates the master model with a much shorter computation time. The surrogate model calculates all the possible sequences using an algorithm to select from the information calculated by the master model that which most closely matches the events of a present sequence, following a prioritised system so that the best match is used wherever possible. All results from the surrogate model are compared so that the modelled sequence that gives the result closest to a desired optimum result for the process can be identified, and potentially applied to the process. Accuracy can be enhanced by running the optimum sequence through the master model as a check, and further by adding the optimum sequence to the information used by the surrogate model in future calculations. Any sequential combinatorial process, in which the quality of the end result of the process depends on the order in which events in the process are performed, can be optimised in this way, including manufacturing processes such as machining, cutting, shaping, forming and/or heat treating a workpiece, flow of material through a factory or oil or gas through a pipeline network, chemical and material science mixing processes, computational biology modelling, and fleet management.

Proceedings ArticleDOI
19 Jun 2004
TL;DR: A hierarchical surrogate-assisted evolutionary optimization framework is proposed in which a Kriging global surrogate model screens the population for individuals that undergo Lamarckian learning; numerical results show that the approach leads to a further acceleration of the evolutionary search process.
Abstract: This work presents enhancements to a surrogate-assisted evolutionary optimization framework proposed earlier in the literature for solving computationally expensive design problems on a limited computational budget (Ong et al., 2003). The main idea of our former framework was to couple evolutionary algorithms with a feasible sequential quadratic programming solver in the spirit of Lamarckian learning, including a trust-region approach for interleaving the true fitness function with computationally cheap local surrogate models during gradient-based search. We propose a hierarchical surrogate-assisted evolutionary optimization framework for accelerating the convergence rate of the original surrogate-assisted evolutionary optimization framework. Instead of using the exact high-fidelity fitness function during evolutionary search, a Kriging global surrogate model is used to screen the population for individuals that undergo Lamarckian learning. Numerical results are presented for two multimodal benchmark test functions to show that the proposed approach leads to a further acceleration of the evolutionary search process.
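The screening idea reduces to: rank the whole population with the cheap global surrogate, then spend expensive refinement only on the predicted best few. Everything in this sketch is a hypothetical stand-in; the quadratic fitness, the noisy proxy for a Kriging predictor, and the single gradient step are illustrative, not the authors' benchmarks or their SQP local search:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_fitness(x):
    # Stand-in for the high-fidelity model (minimum at x = 0.3 in each dim).
    return np.sum((x - 0.3) ** 2, axis=-1)

def surrogate_fitness(x):
    # Stand-in for a trained Kriging predictor: right trend, small error.
    return expensive_fitness(x) + 0.05 * rng.standard_normal(len(x))

pop = rng.random((20, 2))              # current EA population in [0, 1)^2
scores = surrogate_fitness(pop)        # cheap screening pass over everyone
elite = np.argsort(scores)[:3]         # only these undergo Lamarckian learning

# One exact gradient step (grad of the quadratic is 2*(x - 0.3)) as a
# placeholder for the trust-region local search on the elite individuals.
refined = np.clip(pop[elite] - 0.1 * 2.0 * (pop[elite] - 0.3), 0.0, 1.0)
```

Only 3 of the 20 individuals touch the expensive machinery per generation, which is where screening is meant to deliver its acceleration.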

Proceedings ArticleDOI
19 Jun 2004
TL;DR: The behavior of kriging- and cokriging-based surrogate models within the optimization framework is reported; the framework is built upon a stochastic, zero-order, population-based optimization algorithm embedded with controlled elitism to ensure convergence in the actual function space.
Abstract: We report the behavior of kriging- and cokriging-based surrogate models within the optimization framework. The framework is built upon a stochastic, zero-order, population-based optimization algorithm embedded with controlled elitism to ensure convergence in the actual function space. The model accuracy is maintained via periodic retraining and the number of data points required to create the surrogate model is adaptively identified using the Calinski-Harabasz (CH) index. Results of kriging and cokriging are compared with radial basis function models on a set of numerical and engineering design optimization problems.

Journal ArticleDOI
TL;DR: In this article, the shape optimization of a membrane wing was investigated to maximize the lift-to-drag ratio under aerodynamic and geometry constraints, and the optimized design exhibits reduced camber from root to tip.
Abstract: Micro air vehicles with a maximal dimension of 15 cm require an original design concept due to their low Reynolds number flight regime. It has been empirically observed that a flexible membrane wing can improve the aerodynamic performance of the vehicles. To help advance our knowledge in this area, we investigate shape optimization of a membrane wing. A direct membrane wing optimization employing a coupled Navier-Stokes and flexible structure analysis is computationally expensive. Therefore, we use a rigid wing as a surrogate model. We employ a moving grid technique to facilitate automatic grid generation. Our objective is to maximize the lift-to-drag ratio under aerodynamic and geometry constraints. The optimized design exhibits reduced camber from root to tip. Overall, the aerodynamic improvement is primarily derived from the inner 70% of the wing. The optimized planform is checked for performance of the membrane wings and found to improve the performance by about the same margin. Furthermore, both optimized membrane and rigid wings improve the lift-to-drag ratios mainly by reducing the form drag. However, the membrane wing demonstrates less variation in lift-to-drag ratio as the angles of attack vary.

Journal ArticleDOI
TL;DR: It is shown that an adjoint-based approximation model can provide increased accuracy over traditional nongradient-based approximations at comparable cost, at least for modest numbers of design variables.
Abstract: Approximation methods have found increasing use in the optimization of complex engineering systems. The approximation method provides a surrogate model that, once constructed, can be used in lieu of the original expensive model for the purposes of optimization. These approximations may be defined locally, for example, a low-order polynomial response surface approximation that employs trust region methodology during optimization, or globally, by the use of techniques such as kriging. Adjoint methods for computational fluid dynamics have made it possible to obtain sensitivity information on the model’s response without recourse to finite differencing. This approach then allows for an efficient local optimization strategy where these sensitivities are utilized in gradient-based optimization. The combined use of an adjoint computational fluid dynamics code with approximation methods (incorporating gradients) for global optimization is shown. Several approximation methods are considered. It is shown that an adjoint-based approximation model can provide increased accuracy over traditional nongradient-based approximations at comparable cost, at least for modest numbers of design variables. As a result, these models are found to be more reliable for surrogate-assisted optimization.

01 Jan 2004
TL;DR: The reasons why the prediction capability of surrogate models worsens in Multi-Objective optimization, along with remedies to alleviate the problem, are exposed in this paper; although the surrogate model used is a Radial-Basis Function network, the proposed ideas can readily be extended to other models as well.
Abstract: Surrogate evaluation models have found widespread use in evolutionary optimization. Regardless of the model itself and the implementation scheme, approximation models are used to replace exact but costly evaluations, leading thus to lower design computational cost. However, the gain in computational cost reduces considerably in Multi-Objective optimization problems, where the prediction capability of surrogate models worsens with respect to Single-Objective problems. The reasons for this behaviour along with remedies to alleviate the problem are exposed in this paper. Despite the fact that the surrogate model used herein is a Radial-Basis Function network, the proposed ideas can readily be extended to other models as well.

Journal ArticleDOI
TL;DR: Application of the proposed procedure for vehicle structure impact design optimization is investigated, involving both sizing and shape design variables; mesh morphing is used in conjunction with the shape design changes.
Abstract: The use of high-performance computing for rapid visualization of design alternatives and the subsequent use of such visualization for design steering during the multidisciplinary optimization (MDO) process are investigated. Surrogate models based on polynomial response surfaces and message-passing-interface-based parallel programming models are used for rapid visualization of the physical model behavior responses corresponding to changes in the design variables. Application of the proposed procedure for vehicle structure impact design optimization is investigated, involving both sizing and shape design variables. Mesh morphing is used in conjunction with the shape design changes. Rapid visualization of physical model behavior for changes in design variables during the MDO process facilitates collaboration of discipline experts, which in turn supports steering of the design and enhances the efficiency of the MDO process.

Journal ArticleDOI
TL;DR: The proposed multipoint cubic approximations match actual function and gradient values, including the current expansion point, thus satisfying the zero and first-order necessary conditions for global convergence to a local minimum of the original problem.
Abstract: Multipoint cubic approximations are investigated as surrogate functions for nonlinear objective and constraint functions in the context of sequential approximate optimization. The proposed surrogate functions match actual function and gradient values, including the current expansion point, thus satisfying the zero and first-order necessary conditions for global convergence to a local minimum of the original problem. Function and gradient information accumulated from multiple design points during the iteration history is used in estimating a reduced Hessian matrix and selected cubic terms in a design subspace appropriate for problems with many design variables. The resulting approximate response surface promises to accelerate convergence to an optimal design within the framework of a trust region algorithm. The hope is to realize computational savings in solving large numerical optimization problems. Numerical examples demonstrate the effectiveness of the new multipoint surrogate function in reducing errors over large changes in design variables.

Proceedings ArticleDOI
17 May 2004
TL;DR: To prove the usefulness of the method, models of different waveguide discontinuities were created and used in the design of a waveguide filter, as presented in the results section.
Abstract: A new surrogate model construction method is presented. The method creates the model directly from the rigorous EM response using the radial basis function (RBF) interpolation technique. The advantage of RBFs is the guaranteed non-singularity of the interpolation problem. Additionally, an adaptive algorithm is applied to minimize the number of support points needed to create a model with the requested accuracy. To prove the usefulness of the method, models of different waveguide discontinuities were created and used in the design of a waveguide filter, as presented in the results section.
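The interpolation step can be sketched in one dimension with a Gaussian kernel, whose interpolation matrix is nonsingular for distinct support points, the property the abstract highlights. The sampled "EM response" below is a made-up stand-in, and the shape parameter `eps` is an arbitrary choice:

```python
import numpy as np

def rbf_fit(X, y, eps=5.0):
    # Gaussian RBF interpolant through (X, y); for distinct points the
    # kernel matrix A is symmetric positive definite, hence invertible.
    A = np.exp(-(eps * np.abs(X[:, None] - X[None, :])) ** 2)
    w = np.linalg.solve(A, y)
    return lambda xs: np.exp(-(eps * np.abs(xs[:, None] - X[None, :])) ** 2) @ w

X = np.linspace(0.0, 1.0, 9)   # support points (the paper's adaptive algorithm
y = np.cos(3.0 * X)            # would choose these; the response is a stand-in)
model = rbf_fit(X, y)
```

An adaptive loop in the spirit of the paper would compare the model against the rigorous solver at trial points and add a support point wherever the mismatch exceeds the requested accuracy.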

Proceedings ArticleDOI
30 Aug 2004
TL;DR: This paper describes a method for design exploration and optimization that is, first, mathematically suitable for functions calculated with complex computer simulations and, second, made practical with parallel computing.
Abstract: Computer simulations are an integral part of many of today’s design processes. This paper describes a method for design exploration and optimization that is firstly mathematically suitable for functions calculated with complex computer simulations and secondly made practical with parallel computing. These assertions are confirmed with numerical results. As shown with several examples, using the method also has side benefits in data consistency, use of engineering time, and in explaining results in context.
I. Introduction. Computer simulations are an integral part of many of today’s design processes. They can be used to explore design space to discover what is possible, what are critical features of a design, and what are limiting constraints. Furthermore computer simulations can be used as augmentation for the traditional build and test cycle. In this capacity they can substitute for early tests and as a guide to decide what to build and test, thereby reducing costs and design time. This paper presents an approach for exploring design objectives and ultimately optimizing these objectives based on complex computer simulations. We will be focusing on deterministic physics-based simulations. For example, to calculate the aerodynamic properties of a wing at a given flight condition, a computational fluid dynamics (CFD) code can be used. It can take anywhere from a few minutes to two hours or more to simulate such a flight condition. To determine the shape of a wing with desired properties, many runs of the CFD code are needed. Clearly, the complexity and expense of this problem grows rapidly when we expand from a single discipline to a system-level trade involving many disciplines and computer simulations, both in the run time and in the memory requirements. Other common properties of computer simulations are inherent noise, and failure to find a solution at unexpected design conditions. These failures might be the result of physical properties, i.e. a wing design that does not provide enough lift to fly, or might be caused by the simulation software directly. In the following discussions we will treat both types of failures the same way, since we cannot easily distinguish between them. Our general process for exploring design space or optimizing is

Proceedings ArticleDOI
01 Jan 2004
TL;DR: A framework for building surrogate models in multiple stages for time-dependent systems that can respond to changes in constraints and can be fine-tuned as design decisions are made is proposed.
Abstract: During the early stages of the design process, designers rarely have accurate models of system behavior, yet the success of their designs depends on understanding the effect of changes in the design parameters on the system response. When models are available, they are often expensive to evaluate and difficult to run, in large part due to the imprecise knowledge available at the beginning of the design process. To circumvent these problems, a common approach proposed in the literature is the use of surrogate models. In this paper, we propose a framework for building surrogate models in multiple stages for time-dependent systems. Because they are built in stages, the surrogate models can respond to changes in constraints and can be fine-tuned as design decisions are made. When designers have mathematical models available, they can perform repeated sensitivity analyses, explore trade-offs and perform optimization studies. In this framework, the observed responses are viewed as a set of time-correlated spatial processes. The framework uses optimal sampling techniques to improve the accuracy of the resulting surrogate model while keeping the number of samples to a minimum. A new non-stationary covariance structure is proposed and tested with an example design application. The resulting models are compared with the surrogates obtained with the stationary covariance structure proposed by Romero et al. [1]. The results show increased accuracy using the non-stationary covariance structure due to its superior interpolation capabilities. Copyright © 2004 by ASME

Journal ArticleDOI
TL;DR: In this article, the authors apply model-order reduction in the optimization of the scattering matrix poles and zeros of a microwave device, yielding an advanced reliability and universality at a computational cost that is comparable to a surrogate model optimization.
Abstract: This paper discusses the application of model-order reduction in the optimization of microwave devices. It focuses on the direct optimization of the scattering matrix poles and zeros, yielding an advanced reliability and universality at a computational cost that is comparable to a surrogate model optimization. The scattering matrix poles and zeros are computed from a state-space model that is obtained from a finite-integration discretization and then optimized to match a set of target poles and zeros.

Book ChapterDOI
26 Jun 2004
TL;DR: An adaptive approximate optimisation method has been developed and applied in order to speed up the computationally expensive optimisation process and can be applied with any surrogate model for optimisation.
Abstract: The colour of colon tissue, which depends on the tissue structure, its optical properties, and the quantities of the pigments present in it, can be predicted by a physics-based model of colouration. The model, created by analysing light interaction with the tissue, is aimed at correlating the histology of the colon and its colours. This could be of great diagnostic value, as the development of tissue abnormalities and malignancies is characterised by the rearrangement of underlying histology. Once developed, the model has to be validated for correctness. The validation has been implemented as an optimisation problem, and evolutionary techniques have been applied to solve it. An adaptive approximate optimisation method has been developed and applied in order to speed up the computationally expensive optimisation process. This works by iteratively improving a surrogate model based on an approximate physical theory of light propagation (Kubelka-Munk). Good fittings, obtained under the histologically plausible values of model parameters, are presented. The performances of the new method were compared to that of a simple Evolution Strategy which uses an accurate, but expensive, Monte Carlo method. The new method is general and can be applied with any surrogate model for optimisation.

Proceedings ArticleDOI
01 Jan 2004
TL;DR: In this paper, a 3D extension to the previous work on vehicle crashworthiness design that utilizes equivalent mechanism models of vehicle structures as a tool for the early design exploration is presented, where a number of finite element (FE) models of thin-walled beams with typical cross sections and wall thicknesses are analyzed to build a surrogate model that maps the beam dimensions to nonlinear spring properties.
Abstract: This paper presents a 3D extension to our previous work on vehicle crashworthiness design that utilizes “equivalent” mechanism models of vehicle structures as a tool for early design exploration. An equivalent mechanism (EM) is a network of rigid links with lumped masses connected by prismatic and revolute joints with nonlinear springs, which approximates the aggregated behavior of structural members during crush. A number of finite element (FE) models of thin-walled beams with typical cross sections and wall thicknesses are analyzed to build a surrogate model that maps the beam dimensions to nonlinear spring properties. Using the surrogate model, an EM model is optimized for given design objectives by selecting the nonlinear springs among those realizable by thin-walled beams. The optimum EM model serves to identify a good crash mode (CM), i.e., the time history of the collapse of the structural members, and to suggest the dimensions of the structural members needed to attain it. After the optimization, the FE model of an entire structure is “assembled” from the suggested dimensions and further modified to attain the good CM identified by the optimum EM model. A case study of a 3D vehicle front half body demonstrates that the proposed approach can help obtain good designs with far fewer computational resources than direct optimization of a FE model.

Journal ArticleDOI
01 Aug 2004-Strain
TL;DR: In this paper, a numerical homogenization code was developed based on a commercial finite element (FE) package, which is used to develop the ANN metamodel for an individual composite structure.
Abstract: The use of finite element (FE)-based homogenisation has improved the study of composite material properties. However, it involves enormous computational effort when implemented in engineering design problems. An artificial neural network (ANN) surrogate model is therefore proposed here to alleviate this burden. In this study, a numerical homogenisation code was developed based on a commercial FE package and used to build the ANN metamodel for an individual composite structure. The effectiveness of the metamodel was examined through an analytical optimisation procedure.
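As a rough illustration of the metamodelling step, the sketch below fits a small single-hidden-layer network to samples of a hypothetical homogenisation function. The `homogenise` function and the extreme-learning-machine-style fit (random tanh features plus a least-squares output layer) are assumptions chosen for brevity; the paper's backprop-trained ANN and FE code are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an FE homogenisation run: fibre volume
# fraction v in [0, 1] mapped to an effective modulus.
def homogenise(v):
    return 1.0 + 9.0 * v / (1.0 + 2.0 * (1.0 - v))

# Training data from a modest number of "expensive" runs
v_train = np.linspace(0.0, 1.0, 40)
y_train = homogenise(v_train)

# One-hidden-layer network: random tanh features plus a least-squares
# output layer (a cheap, deterministic shortcut instead of backprop).
H = 32
W = rng.normal(size=(1, H))
b = rng.normal(size=H)

def features(v):
    return np.tanh(np.atleast_1d(v)[:, None] @ W + b)

beta, *_ = np.linalg.lstsq(features(v_train), y_train, rcond=None)

def metamodel(v):
    return features(v) @ beta

# The metamodel should track the homogenisation code at unseen inputs
v_test = np.array([0.15, 0.45, 0.85])
err = np.max(np.abs(metamodel(v_test) - homogenise(v_test)))
print(err < 0.1)
```

Once fitted, `metamodel` can be called thousands of times inside an optimisation loop at negligible cost, which is the point of replacing the FE-based homogenisation.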

Proceedings ArticleDOI
30 Aug 2004
TL;DR: A novel framework for fusing models of varying computational cost and fidelity for the purposes of optimization is presented; the fused surrogate model has useful properties that allow it to be adjusted in an adaptive fashion to mitigate the risk of the evolutionary optimizer chasing after false optima.
Abstract: We present a novel framework for fusing models of varying computational cost and fidelity for the purposes of optimization. Such model fusion results in a surrogate model that is used by an evolutionary optimizer. The fused model is built using Gaussian process regression. It has useful properties that allow it to be adjusted in an adaptive fashion to mitigate the risk of the evolutionary optimizer chasing after false optima. We demonstrate the strength of the framework on an engineering design problem using a predetermined computational budget.

Nomenclature:
- C_N: covariance matrix of the input data
- µ: Gaussian process mean
- r: hyperparameter vector of length scales
- x: design variable vector
- x_n: Gaussian process input vector
- D: training data set consisting of input/output pairs
- λ_{a/e}: ratio in computational effort between the low- and high-fidelity models
- f_a(x): low-fidelity, computationally cheap model
- f_e(x): high-fidelity, computationally expensive model
- ŷ_{N+1}: prediction mean
- p_i: candidate design to be evaluated during optimization
- maxstdtol: maximum allowable tolerance on prediction uncertainty
- stdtol: currently allowable tolerance for model accuracy during the optimization
- σ²_{N+1}: prediction variance
- σ(p_i): standard deviation of the fusion model prediction for input p_i
- θ_1: hyperparameter controlling the overall vertical scale
- θ_2: hyperparameter controlling the bias of the correlation
- θ_3: hyperparameter setting the noise level
- b_n: input to the model-fusion Gaussian process model
- L: dimension of the Gaussian process input vector
- N: number of samples in the training data set
- n_a: number of f_a(x) evaluations carried out during optimization
- N_e: maximum allowable equivalent number of f_e(x) evaluations
- n_e: number of f_e(x) evaluations carried out during optimization
- N_p: number of individuals in a population
- t_n: Gaussian process output scalar
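The regression machinery behind such a framework can be sketched with a single-fidelity Gaussian process in plain NumPy. The sine test function below is a hypothetical stand-in for the expensive model f_e; the predictive variance is the quantity that lets the optimizer judge, adaptively, where the surrogate can be trusted.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel with unit vertical scale."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Hypothetical stand-in for the expensive high-fidelity model f_e
def f_e(x):
    return np.sin(2 * np.pi * x)

x_train = np.linspace(0.0, 1.0, 8)
y_train = f_e(x_train)

# Gram matrix with a small jitter term for numerical stability
K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))
K_inv = np.linalg.inv(K)

def predict(x_new):
    """GP posterior mean and variance at the query points."""
    x_new = np.atleast_1d(x_new)
    k = rbf(x_new, x_train)
    mean = k @ K_inv @ y_train
    var = 1.0 - np.einsum('ij,jk,ik->i', k, K_inv, k)
    return mean, var

mean, var = predict(np.array([0.5, 1.5]))  # inside vs. outside the data
print(var[0] < var[1])
```

The variance is small where training data is dense and grows away from it, which is what allows a tolerance such as stdtol to gate when the optimizer is permitted to rely on the fused prediction.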

Book ChapterDOI
TL;DR: An extension of the simulation based optimization framework which has been previously proposed for analyzing supply chains is presented, consisting of the iterative construction of a surrogate model based on systematically accumulated simulation results to capture the causal relation between the key decision variables and supply chain performance.
Abstract: Simulation is widely used in the decision-making processes associated with supply chain management. In this paper, we present an extension of the simulation-based optimization framework which has been previously proposed for analyzing supply chains. The extension consists of the iterative construction of a surrogate model based on systematically accumulated simulation results to capture the causal relation between the key decision variables and supply chain performance. The decision variables can then be optimized using the surrogate model in place of individual simulation runs to economize on the overall computational effort. The extended framework is illustrated using a small example and then applied to optimize the inventory levels in a three-stage supply chain.
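A minimal sketch of the iterative loop described above, assuming a one-dimensional inventory decision and a noisy quadratic cost as a hypothetical stand-in for the supply-chain simulation; refitting a quadratic response surface after each new run is one simple choice of surrogate, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a supply-chain simulation: total cost of
# holding inventory level s, with simulation noise; optimum near s = 60.
def simulate(s):
    return 0.02 * (s - 60.0) ** 2 + 5.0 + rng.normal(0.0, 0.2)

# Initial designed experiments
levels = list(np.linspace(20.0, 100.0, 5))
costs = [simulate(s) for s in levels]

for _ in range(5):  # iterative surrogate refinement
    a, b, c = np.polyfit(levels, costs, 2)          # quadratic surrogate
    s_star = float(np.clip(-b / (2 * a), 20, 100))  # surrogate minimiser
    levels.append(s_star)                           # one new simulation
    costs.append(simulate(s_star))                  # run per iteration

print(round(s_star))
```

Each iteration spends exactly one simulation run where the current surrogate predicts the optimum, so the accumulated results concentrate near the true minimiser while total simulation effort stays small.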

Proceedings ArticleDOI
30 Aug 2004
TL;DR: This paper presents a preference-based updating of surrogate models (PRESM) methodology to address the basic issues of how to iteratively update and validate surrogate models using Kriging techniques from a design perspective.
Abstract: This paper presents an integrated approach that takes advantage of the advancements in statistical-science based surrogate modeling with preference-based assessment procedures to recursively build, update and validate predictive models in the context of overall design preference. An integral part of this research is a value of information-based framework to validate models by attempting the best trade-off between the quest for higher resolution models and the subsequent higher cost of gathering new information to refine model fidelity, from the standpoint of expected payoffs of anticipated design decisions. Specifically, this paper presents a preference-based updating of surrogate models (PRESM) methodology to address the basic issues of how to iteratively update and validate surrogate models using Kriging techniques from a design perspective. In this work, surrogate models' relevance, usefulness, and completeness are investigated based on new information at optimally sampled points, and that information is used recursively to subsequently update and validate model fidelity. Two case studies, including a classic engineering truss problem, are considered as test-bed application domains and the results are discussed.

Proceedings ArticleDOI
25 Jul 2004
TL;DR: The proposed method can approximate Pareto frontiers in multi-objective optimization problems effectively by employing support vectors in SVM and the effectiveness of the method is illustrated through numerical examples.
Abstract: In many practical engineering design problems, the objective functions are not given explicitly in terms of the design variables. Instead, given values of the design variables, the objective function values are obtained through real or computational experiments such as structural, fluid-mechanical, or thermodynamic analyses. Since these experiments are expensive and time-consuming, it is practically impossible to find the exact solution to such problems with conventional optimization methods. Recently, approximation methods using computational intelligence, for example evolutionary algorithms and neural networks, have advanced remarkably, but even those algorithms need a tremendous number of experiments to obtain an approximate solution. Furthermore, most engineering design problems should be formulated as multi-objective optimization problems so as to meet the diversified demands of the designer. This paper suggests applying support vector machines (SVM) to keep the number of experiments needed to solve problems with multiple objective functions as small as possible. It is shown that the proposed method can effectively approximate Pareto frontiers in multi-objective optimization problems by employing the support vectors of the SVM. Finally, the effectiveness of the method is illustrated through numerical examples.
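The SVM machinery of the abstract is not reproduced here, but the Pareto-dominance test that defines the frontier being approximated is short enough to sketch (minimisation of all objectives is assumed):

```python
def dominates(a, b):
    """True if objective vector a dominates b: a is no worse in every
    objective and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # (3, 4) and (5, 5) are dominated
```

Any surrogate-assisted multi-objective method ultimately has to reproduce the boundary this filter extracts, which is why labelling points as dominated or non-dominated makes the problem amenable to a classifier such as an SVM.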

Journal Article
TL;DR: A neural network trained on available samples serves as a surrogate model that provides approximate objective values in place of time-consuming simulation evaluations, and a genetic algorithm (GA) is applied to search for the optimal solution.
Abstract: A neural network is built from the available samples and taken as a surrogate model that provides approximate objective values in place of time-consuming simulation evaluations. Based on the surrogate model, a genetic algorithm (GA) is applied to search for the optimal solution. Moreover, sample selection guided by problem information and multiple-model methods are discussed as ways to enhance the reliability and consistency of the optimization results. A numerical study of a typical pressure vessel design problem demonstrates the feasibility and effectiveness of the proposed method.
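A minimal sketch of the GA-over-surrogate search, with a hypothetical one-variable `surrogate_cost` standing in for the trained neural network (the pressure vessel model itself is not reproduced); the GA below uses simple truncation selection, averaging crossover, and Gaussian mutation.

```python
import random

random.seed(0)

# Hypothetical stand-in for the trained neural-network surrogate:
# predicted "cost" as a function of a shell thickness t in [1, 10].
def surrogate_cost(t):
    return (t - 3.7) ** 2 + 2.0

POP, GENS = 20, 60
pop = [random.uniform(1.0, 10.0) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=surrogate_cost)          # rank by surrogate prediction
    parents = pop[:POP // 2]              # elitist truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2 + random.gauss(0.0, 0.3)  # crossover + mutation
        children.append(min(10.0, max(1.0, child)))   # keep in bounds
    pop = parents + children

best = min(pop, key=surrogate_cost)
print(round(best, 1))
```

Every fitness evaluation here is a cheap surrogate call rather than a simulation run, which is what makes the many thousands of evaluations a GA performs affordable.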