Adaptive Designs of Experiments for Accurate Approximation of a Target Region
Citations
Metamodel-based importance sampling for structural reliability analysis
Reliability-based design optimization using kriging surrogates and subset simulation
Design of computer experiments: space filling and beyond
Sequential design of computer experiments for the estimation of a probability of failure
References
Gaussian Processes for Machine Learning
Response Surface Methodology: Process and Product Optimization Using Designed Experiments
Statistics for spatial data
A comparison of three methods for selecting values of input variables in the analysis of output from a computer code
Efficient Global Optimization of Expensive Black-Box Functions
Frequently Asked Questions (15)
Q2. What have the authors stated for future works in "Adaptive designs of experiments for accurate approximation of a target region" ?
However, some limitations of the method were found that were not solved here and require future work before the method can be applied to a wide range of problems. Future research may compare the results obtained with this method to alternative methods, in particular in the frameworks of reliability analysis and constrained optimization. Since it relies on numerical integration, the method can become computationally expensive if a large number of integration points is needed to compute the criterion. Although sequential strategies allow some correction of the model during the process (through re-estimation of the parameters, for instance), the success of the method will strongly depend on the capability of the Kriging model to fit the actual response.
Q3. What is the famous adaptive strategy?
One of the most famous adaptive strategies is the EGO algorithm of Jones et al. [7], used to derive sequential designs for the optimization of deterministic simulation models by choosing at each step the point that maximizes the expected improvement, a functional that represents a compromise between exploration of unknown regions and local search.
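As an illustration of the expected-improvement functional used in EGO, the following is a minimal sketch (not the authors' code) of the standard closed-form EI for minimization, given a Kriging predictive mean and standard deviation:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """Expected improvement (for minimization) at points with Kriging
    predictive mean `mu` and standard deviation `sigma`, given the best
    observed value `y_best`.  Illustrative helper, not the paper's code."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (y_best - mu) / sigma
        ei = (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # EI vanishes where the prediction is exact (zero predictive variance)
    return np.where(sigma > 0, ei, 0.0)
```

The first term rewards a predicted improvement over the incumbent (local search); the second rewards predictive uncertainty (exploration), which is the compromise mentioned above.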
Q4. What is the DoE for a classical space-filling?
The classical space-filling DoE leads to uniform error behavior, while the optimal DoE leads to large errors when the response is far from the target value and small errors when it is close to it.
Q5. What is the importance of the Kriging distribution when approximating the limit-state?
When approximating the limit-state, accuracy is clearly critical in the regions where the limit-state is close to zero, since errors in those regions are likely to affect the probability estimate.
Q6. What is the advantage of sequential strategies over other DoEs?
In general, a particular advantage of sequential strategies over other DoEs is that they can integrate the information given by the first k observation values to choose the (k+1)th training point, for instance by reevaluating the Kriging covariance parameters.
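The generic sequential loop can be sketched as follows. This is a simplified numpy illustration under assumed choices (simple Kriging with a Gaussian kernel, grid-search maximum likelihood for the range parameter, and maximum predictive variance as the infill criterion rather than the authors' weighted-IMSE criterion); the point is that the covariance parameter is re-estimated from all k observations before the (k+1)th point is chosen:

```python
import numpy as np

def kriging_variance(X, Xc, theta):
    """Predictive variance of a simple-Kriging model with Gaussian kernel
    k(x, x') = exp(-theta * ||x - x'||^2) and a small nugget."""
    K = np.exp(-theta * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    K += 1e-6 * np.eye(len(X))
    k = np.exp(-theta * ((Xc[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return 1.0 - np.einsum("ij,jk,ik->i", k, np.linalg.inv(K), k)

def neg_log_likelihood(X, y, theta):
    """Negative log-likelihood of the observations for range parameter theta."""
    K = np.exp(-theta * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    K += 1e-6 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (y @ np.linalg.solve(K, y) + logdet)

def sequential_design(f, X0, candidates, n_add=5):
    """Sequential DoE loop: re-estimate theta by maximum likelihood after
    each observation, then add the most uncertain candidate point."""
    X, y = np.array(X0, float), np.array([f(x) for x in X0])
    for _ in range(n_add):
        thetas = np.logspace(-2, 2, 21)
        theta = thetas[np.argmin([neg_log_likelihood(X, y, t) for t in thetas])]
        s2 = kriging_variance(X, candidates, theta)
        x_new = candidates[int(np.argmax(s2))]  # infill criterion (illustrative)
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))
    return X, y
```

Re-fitting the covariance parameters at every step is exactly the "integration of the first k observations" mentioned above; the infill criterion would be replaced by the paper's weighted criterion in practice.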
Q7. What is the objective of the present work?
The objective of the present work is to provide a methodology to construct a design of experiments such that the metamodel accurately approximates the vicinity of a boundary in design space defined by a target value of the function of interest.
Q8. How many integration points are chosen for CMA-ES?
The number of integration points is chosen equal to 5,000, and the number of function evaluations for CMA-ES is limited to 1,000.
Q9. What are some of the methods for calculating the failure probability of a system?
Some of them use the relation between the input random variables and the limit-state (e.g., the first-order reliability method, FORM) and some treat the limit-state as a black box (e.g., Monte-Carlo Simulations, MCS).
Q10. What is the cost of a metamodel to approximate the limit state?
Using a metamodel to approximate the limit-state g is a natural solution to the lack of data; MCS is then performed on the metamodel, which is inexpensive to evaluate.
Q11. How can the probability of failure of a system be calculated?
In particular, the probability of failure of the system can be computed using sampling techniques (i.e. Monte-Carlo Simulations, MCS), by counting the number of responses that are above a certain threshold.
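The counting estimator described here is just the Monte-Carlo mean of the exceedance indicator. A minimal sketch, with a toy limit-state and input sampler standing in for the actual problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(g, sample_inputs, n=100_000, threshold=0.0):
    """Crude Monte-Carlo estimate of P[g(X) > threshold]: draw n inputs,
    evaluate the limit-state g, and count the exceedances.  `g` and
    `sample_inputs` are placeholders for the problem at hand."""
    x = sample_inputs(n)
    return float(np.mean(g(x) > threshold))

# Toy limit-state: g(x) = x1 + x2 with standard normal inputs and
# threshold 3, so the exact value is 1 - Phi(3 / sqrt(2)) ~ 0.017.
p_hat = failure_probability(
    lambda x: x.sum(axis=1),
    lambda n: rng.standard_normal((n, 2)),
    n=200_000,
    threshold=3.0,
)
```

In the setting of the paper, g would be replaced by the cheap Kriging metamodel, which is what makes such large sample sizes affordable.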
Q12. What is the effect of parameter re-evaluation on the efficiency of the method?
In the numerical examples used in this work, the authors found that after a first few iterations, the parameter re-evaluation had a negligible impact on the efficiency of the method.
Q13. How do the authors address the probability distribution of input variables?
To take the probability distribution of the input variables into account, the authors modify the weighted IMSE criterion by integrating the weighted MSE not with respect to a uniform measure but with respect to the law µ of the input variables.
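One simple way to realize this integration numerically is Monte-Carlo: draw the integration points from µ instead of a uniform grid, so regions where the inputs are likely receive more weight. The sketch below assumes this Monte-Carlo variant; `mse`, `weight`, and `sample_mu` are hypothetical callables standing in for the Kriging MSE, the target-region weight, and the input distribution:

```python
import numpy as np

def weighted_imse_mc(mse, weight, sample_mu, n_int=5_000, rng=None):
    """Monte-Carlo approximation of the weighted IMSE under the input
    law mu: average weight(x) * MSE(x) over integration points drawn
    from mu rather than from a uniform measure."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = sample_mu(n_int, rng)                  # integration points drawn from mu
    return float(np.mean(weight(x) * mse(x)))  # estimates E_mu[w(X) * MSE(X)]
```

The default of 5,000 integration points matches the setting reported in the paper; the cost warning in the future-work answer above applies when many more points are needed.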
Q14. What is the main difference between the two criterion-based strategies?
Both criterion-based strategies significantly outperformed space-filling designs; the main difference between them is that the one taking the input distribution into account provides additional improvement in the accuracy of the probability of failure.
Q15. What is the second example of the fitting of random processes in six dimensions?
The second is the fitting of realizations of random processes in six dimensions with known covariance parameters, which allows the authors to decompose the problem and evaluate the relevance of their criterion, since in this case there is no modeling error.