
Showing papers by "John E. Dennis published in 2007"


01 May 2007
TL;DR: LTMads-PB is a useful practical extension of the earlier LTMads-EB algorithm: it does as well when feasible points are known, and better in the common case for real problems where no feasible point is known.
Abstract: We propose a new constraint-handling approach for general constraints that is applicable to a widely used class of constrained derivative-free optimization methods. As in many methods that allow infeasible iterates, constraint violations are aggregated into a single constraint violation function. As in filter methods, a threshold, or barrier, is imposed on the constraint violation function, and any trial point whose constraint violation function value exceeds this threshold is discarded from consideration. In the new algorithm, unlike the filter method, the amount of constraint violation subject to the barrier is progressively decreased adaptively as the iteration evolves. We test this progressive barrier (PB) approach versus the extreme barrier (EB) with the generalized pattern search (Gps) and the lower triangular mesh adaptive direct search (LTMads) methods for nonlinear derivative-free optimization. Tests are also conducted using the Gps-filter, which uses a version of the Fletcher-Leyffer filter approach. We know that Gps cannot be shown to yield KKT points with this strategy or the filter, but we use the Clarke nonsmooth calculus to prove Clarke stationarity of the sequences of feasible and infeasible trial points for LTMads-PB. Numerical experiments are conducted on three academic test problems with up to 50 variables and on a chemical engineering problem. The new LTMads-PB method generally outperforms our LTMads-EB in the case where no feasible initial points are known, and it does as well when feasible points are known, which leads us to recommend LTMads-PB. Thus LTMads-PB is a useful practical extension of our earlier LTMads-EB algorithm, particularly in the common case for real problems where no feasible point is known. The same conclusions hold for Gps-PB versus Gps-EB.

174 citations
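The progressive-barrier idea in the abstract above can be sketched in a few lines. The toy below is not the authors' LTMads-PB (there is no mesh, no poll directions, no frame incumbents); it is a minimal coordinate search, with hypothetical names, that aggregates violations of constraints c_i(x) <= 0 into a function h(x), bars trial points whose h exceeds a threshold h_max, and tightens h_max to the violation of each new least-infeasible incumbent:

```python
def h(c_vals):
    """Aggregate violation of constraints c_i(x) <= 0: sum of squared positive parts."""
    return sum(max(0.0, c) ** 2 for c in c_vals)

def progressive_barrier_search(f, constraints, x0, steps=200, step=0.5):
    """Toy coordinate search with a progressive barrier on the violation h(x).

    Trial points with h above the barrier h_max are discarded; h_max is
    tightened to the violation of each new least-infeasible incumbent,
    which progressively drives the iterates toward feasibility.
    """
    n = len(x0)
    x_feas, f_feas = None, float("inf")                        # best feasible point
    x_inf, f_inf, h_inf = list(x0), f(x0), h(constraints(x0))  # best infeasible point
    h_max = max(h_inf, 1.0)                                    # initial barrier
    if h_inf == 0.0:
        x_feas, f_feas = list(x0), f_inf
    for _ in range(steps):
        improved = False
        base = x_feas if x_feas is not None else x_inf
        for i in range(n):
            for d in (step, -step):
                t = list(base)
                t[i] += d
                ht, ft = h(constraints(t)), f(t)
                if ht > h_max:
                    continue                                   # barred: too infeasible
                if ht == 0.0 and ft < f_feas:
                    x_feas, f_feas, improved = t, ft, True     # feasible improvement
                elif 0.0 < ht < h_inf:
                    x_inf, f_inf, h_inf = t, ft, ht            # less infeasible incumbent
                    h_max = ht                                 # tighten the barrier
                    improved = True
        if not improved:
            step *= 0.5                                        # refine the step size
    return (x_feas, f_feas) if x_feas is not None else (x_inf, f_inf)
```

Started from the infeasible point x = 3 for minimizing (x - 2)^2 subject to x <= 1, the shrinking barrier walks the iterates down to the feasible boundary x = 1, mirroring the case the paper emphasizes where no feasible starting point is known.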


Journal ArticleDOI
TL;DR: In this paper, a cost function proportional to the radiated acoustic power is derived from the Ffowcs Williams and Hall solution to Lighthill's equation and used to reduce the noise generated by turbulent flow over a hydrofoil trailing edge.
Abstract: Derivative-free optimization techniques are applied in conjunction with large-eddy simulation (LES) to reduce the noise generated by turbulent flow over a hydrofoil trailing edge. A cost function proportional to the radiated acoustic power is derived based on the Ffowcs Williams and Hall solution to Lighthill's equation. Optimization is performed using the surrogate-management framework with filter-based constraints for lift and drag. To make the optimization more efficient, a novel method has been developed to incorporate Reynolds-averaged Navier–Stokes (RANS) calculations for constraint evaluation. Separation of the constraint and cost-function computations using this method results in fewer expensive LES computations. This work demonstrates the ability to fully couple optimization to large-eddy simulation for time-accurate turbulent flow. The results demonstrate an 89% reduction in noise power, which comes about primarily by the elimination of low-frequency vortex shedding. The higher-frequency broadband noise is reduced as well, by a subtle change in the lower surface near the trailing edge.

112 citations


01 Jan 2007
TL;DR: This work addresses the question of how to manage the interplay between the optimization and the fidelity of the approximation models to ensure that the process converges to a solution of the original design problem.
Abstract: A standard engineering practice is the use of approximation models in place of expensive simulations to drive an optimal design process based on nonlinear programming algorithms. The use of approximation techniques is intended to reduce the number of detailed, costly analyses required during optimization while maintaining the salient features of the design problem. The question we address is how to manage the interplay between the optimization and the fidelity of the approximation models to ensure that the process converges to a solution of the original design problem. Using well-established notions from the literature on trust-region methods and a powerful global convergence theory for pattern search methods, we can ensure that the optimization process converges to a solution of the original design problem.

111 citations
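The model-management strategy described above rests on the classical trust-region ratio test: a step found by optimizing the cheap approximation is accepted only if the expensive analysis confirms enough of the predicted decrease, and the region in which the model is trusted grows or shrinks accordingly. A one-dimensional sketch under those assumptions follows; the function names are illustrative, and a crude candidate list stands in for the inner optimization the paper would delegate to a nonlinear-programming or pattern-search solver:

```python
def model_managed_step(f_true, f_model, x, radius, eta=0.1):
    """One step of trust-region model management (a minimal sketch):
    a candidate found by optimizing the cheap model is accepted only if
    the expensive simulation confirms enough of the predicted decrease."""
    # Optimize the model within the trust region by crude sampling
    # (stands in for the inner approximate-model optimization).
    candidates = [x + d for d in (-radius, -radius / 2, radius / 2, radius)]
    s = min(candidates, key=f_model)
    pred = f_model(x) - f_model(s)     # decrease predicted by the model
    actual = f_true(x) - f_true(s)     # decrease delivered by the simulation
    rho = actual / pred if pred > 0 else 0.0
    if rho >= eta:                     # model trustworthy: accept the step
        return s, (2 * radius if rho > 0.75 else radius)
    return x, radius / 2               # reject the step and shrink the region
```

With a deliberately biased model, a good step is accepted and the region expanded, while a step the simulation does not confirm is rejected and the region halved; in the full framework the model would also be refit, which is what the paper's convergence theory governs.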


Journal ArticleDOI
TL;DR: This note shows that Proposition 4.2 of the original MADS paper remains valid: its statement is correct, but because the notation evolved since the preliminary versions, it is not compatible with the paper's final notation.
Abstract: In [SIAM J. Optim., 17 (2006), pp. 188-217] Audet and Dennis proposed the class of mesh adaptive direct search (MADS) algorithms for minimization of a nonsmooth function under general nonsmooth constraints. The notation used in the paper evolved since the preliminary versions, and, unfortunately, even though the statement of Proposition 4.2 is correct, it is not compatible with the final notation. The purpose of this note is to show that the proposition is valid.

44 citations


Proceedings ArticleDOI
23 Apr 2007
TL;DR: This paper addresses the issues that arise when the design problem has a large number of variables and presents a new approach that allows us to overcome them.
Abstract: An emerging need in industry is to do simulation-based design with several hundred design variables. Our current approach, as implemented in Design Explorer, is not practical for problems of this size. This paper addresses the issues that arise when the design problem has a large number of variables, explains these limitations, and presents a new approach that allows us to overcome them. Some of the issues associated with using these codes have been attacked, while others remain open. We call our approach MoVars, for "more variables" or for Multidisciplinary Optimization Via Adaptive Response Surfaces. Often these simulations have long runtimes, do not compute derivatives, and are not sufficiently smooth to work well with standard gradient-based methods. Many of the obstacles to using these codes have been overcome with automation and with alternative optimization methods. SEQOPT (sequential modelling and optimization) [9] has been very effective. It is part of Design Explorer, a suite of tools for design space exploration and optimization. However, when the number of variables gets large (more than 100), the solution process in SEQOPT becomes impractical for several reasons. Experiments: the typical number of simulation runs suggested by Design Explorer for the initial experiments grows with the square of the number of variables, and becomes impractical even with today's and tomorrow's large-scale computers. Building models: even if the simulations could be run the number of times needed to build a model, the cost of building a kriging model like the ones used in SEQOPT is prohibitive, since building a model involves solving a global optimization problem whose dimension is related to the number of variables and the number of sites in the experiment.

10 citations


18 Oct 2007
TL;DR: In this paper, the authors propose the use of a direct search method in conjunction with an additive surrogate, constructed from a combination of a simplified physics model and an interpolation based on the differences between the simplified physics model and the full-fidelity model.
Abstract: Many properties of nanostructures depend on the atomic configuration at the surface. One common technique used for determining this surface structure is based on the low energy electron diffraction (LEED) method, which uses a high-fidelity physics model to compare experimental results with spectra computed via a computer simulation. While this approach is highly effective, the computational cost of the simulations can be prohibitive for large systems. In this work, we propose the use of a direct search method in conjunction with an additive surrogate. This surrogate is constructed from a combination of a simplified physics model and an interpolation that is based on the differences between the simplified physics model and the full fidelity model.

2 citations
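The additive surrogate described above has a simple shape: surrogate = simplified model + interpolated correction, where the correction interpolates the discrepancy between the two physics models at the few points where the expensive model has been run. A minimal one-dimensional sketch with piecewise-linear interpolation (the names and toy functions are illustrative; the actual work uses LEED simulations, not closed-form functions):

```python
def additive_surrogate(cheap, expensive, sample_xs):
    """Additive surrogate (a minimal 1-D sketch): a simplified-physics model
    plus a piecewise-linear interpolant of its differences from the
    full-fidelity model at a few expensive sample points."""
    # Evaluate the expensive model only at the sample points.
    diffs = sorted((x, expensive(x) - cheap(x)) for x in sample_xs)

    def correction(x):
        # Constant extrapolation outside the sampled range.
        if x <= diffs[0][0]:
            return diffs[0][1]
        if x >= diffs[-1][0]:
            return diffs[-1][1]
        # Linear interpolation of the discrepancy between neighbors.
        for (x0, d0), (x1, d1) in zip(diffs, diffs[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return (1 - t) * d0 + t * d1

    return lambda x: cheap(x) + correction(x)
```

By construction the surrogate reproduces the full-fidelity model exactly at the sample points; between them it is only as good as the interpolated discrepancy, which is why the direct search framework must still safeguard the surrogate with occasional full-fidelity evaluations.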


Journal ArticleDOI
01 Dec 2007-Pamm
TL;DR: Meza et al. propose the use of a direct search method in conjunction with an additive surrogate, which is constructed from a combination of a simplified physics model and an interpolation based on the differences between the simplified model and the full-fidelity model.
Abstract: (Authors: Meza, Juan C.; Garcia-Lekue, Arantzazu; Abramson, Mark A.; Dennis, John E.) Many properties of nanostructures depend on the atomic configuration at the surface. One common technique used for determining this surface structure is based on the low energy electron diffraction (LEED) method, which uses a high-fidelity physics model to compare experimental results with spectra computed via a computer simulation. While this approach is highly effective, the computational cost of the simulations can be prohibitive for large systems. In this work, we propose the use of a direct search method in conjunction with an additive surrogate. This surrogate is constructed from a combination of a simplified physics model and an interpolation that is based on the differences between the simplified physics model and the full fidelity model.

2 citations


01 Nov 2007
TL;DR: In this paper, a parallel space decomposition (PSD) technique for the mesh adaptive direct search (MADS) algorithm is described, and numerical results on problems with up to 500 variables illustrate the advantages and limitations of PSD-MADS.
Abstract: This paper describes a parallel space decomposition (PSD) technique for the mesh adaptive direct search (MADS) algorithm. MADS extends generalized pattern search to constrained nonsmooth optimization problems. The objective of the present work is to obtain good solutions to larger problems than the ones typically solved by MADS. The new method, PSD-MADS, is an asynchronous parallel algorithm in which the processes solve subproblems over subsets of variables. The convergence analysis based on the Clarke calculus is essentially the same as for the MADS algorithm. A practical implementation is described, and some numerical results on problems with up to 500 variables illustrate the advantages and limitations of PSD-MADS.

1 citation
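The decomposition idea can be sketched serially: each subproblem polls a small block of variables while the remaining variables are frozen at the incumbent. This is only a caricature of PSD-MADS, which runs such subproblems asynchronously on separate processes, each with a full MADS instance; the deterministic block cycling and the names below are illustrative:

```python
def psd_search(f, x0, block=2, rounds=100, step=1.0):
    """Serial sketch of parallel space decomposition (PSD): each subproblem
    polls +/- step along a small block of coordinates while the remaining
    variables stay at the incumbent.  (PSD-MADS instead dispatches such
    subproblems asynchronously to worker processes running MADS.)"""
    x, fx = list(x0), f(x0)
    n = len(x)
    blocks = [list(range(i, min(i + block, n))) for i in range(0, n, block)]
    for r in range(rounds):
        improved = False
        for i in blocks[r % len(blocks)]:   # variables owned by this subproblem
            for d in (step, -step):         # +/- poll along each owned coordinate
                t = list(x)
                t[i] += d
                ft = f(t)
                if ft < fx:
                    x, fx, improved = t, ft, True
        if not improved:
            step *= 0.5                     # refine the step, as MADS refines its mesh
    return x, fx
```

Because each subproblem touches only a few coordinates, the per-process cost stays small as the dimension grows, which is the mechanism that lets the parallel version reach problems with hundreds of variables.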