Showing papers in "Optimization and Engineering in 2005"


Journal ArticleDOI
TL;DR: In this article, a coupled-adjoint method for sensitivity analysis that is used in an aero-structural aircraft design framework is presented. The approach builds on previously developed single-discipline sensitivity analyses and is verified against the complex-step derivative approximation.
Abstract: This paper presents an adjoint method for sensitivity analysis that is used in an aero-structural aircraft design framework. The aero-structural analysis uses high-fidelity models of both the aerodynamics and the structures. Aero-structural sensitivities are computed using a coupled-adjoint approach that is based on previously developed single-discipline sensitivity analyses. Alternative strategies for coupled sensitivity analysis are also discussed. The aircraft geometry and a structure of fixed topology are parameterized using a large number of design variables. The aero-structural sensitivities of aerodynamic and structural functions with respect to these design variables are computed and compared with results given by the complex-step derivative approximation. The coupled-adjoint procedure is shown to yield very accurate sensitivities and to be computationally efficient, making high-fidelity aero-structural design feasible for problems with thousands of design variables.

254 citations
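
The complex-step derivative approximation used above as a reference is easy to demonstrate in isolation. A minimal sketch, with an illustrative test function and step size that are not from the paper:

```python
import numpy as np

def f(x):
    # Stand-in smooth function; in the paper the quantities of interest
    # are aerodynamic and structural outputs of a coupled solver.
    return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

def complex_step(f, x, h=1e-30):
    # df/dx ~ Im(f(x + i*h)) / h. There is no subtractive cancellation,
    # so h can be made tiny and the result is accurate to machine
    # precision, which is why it serves as a reference for adjoints.
    return np.imag(f(x + 1j * h)) / h

x0 = 1.5
fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6   # central finite difference
cs = complex_step(f, x0)
print(f"central difference: {fd:.12f}")
print(f"complex step:       {cs:.12f}")
```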


Journal ArticleDOI
TL;DR: Important developments are provided that support the next phase in the evolution of a new multiobjective decision-making tool for use in conceptual engineering design and provide a concept selection approach that capitalizes on the benefits of computational optimization.
Abstract: In a recent publication, we presented a new multiobjective decision-making tool for use in conceptual engineering design. In the present paper, we provide important developments that support the next phase in the evolution of the tool. These developments, together with those of our previous work, provide a concept selection approach that capitalizes on the benefits of computational optimization. Specifically, the new approach uses the efficiency and effectiveness of optimization to rapidly compare numerous designs, and characterize the tradeoff properties within the multiobjective design space. As such, the new approach differs significantly from traditional (non-optimization based) concept selection approaches where, comparatively speaking, significant time is often spent evaluating only a few points in the design space. Under the new approach, design concepts are evaluated using a so-called s-Pareto frontier; this frontier originates from the Pareto frontiers of various concepts, and is the Pareto frontier for the set of design concepts. An important characteristic of the s-Pareto frontier is that it provides a foundation for analyzing tradeoffs between design objectives and the tradeoffs between design concepts. The new developments presented in this paper include: (i) the notion of minimally representing the s-Pareto frontier, (ii) the quantification of concept goodness using s-Pareto frontiers, (iii) the development of an interactive design space exploration approach that can be used to visualize n-dimensional s-Pareto frontiers, and (iv) s-Pareto frontier-based approaches for considering uncertainty in concept selection. Simple structural examples are presented that illustrate representative applications of the proposed method.

237 citations
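
The s-Pareto construction itself is compact: filter each concept's designs to its own Pareto frontier, then filter the union. A sketch for two minimized objectives, with made-up concept data:

```python
import numpy as np

def pareto_filter(points):
    """Return the non-dominated subset of `points` (rows are designs,
    columns are objectives, all to be minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical objective values for two design concepts.
concept_a = np.array([[1.0, 9.0], [2.0, 7.0], [4.0, 4.0], [6.0, 3.5]])
concept_b = np.array([[3.0, 6.0], [5.0, 2.0], [8.0, 1.5], [7.0, 5.0]])

# Per-concept Pareto frontiers, then the s-Pareto frontier of the set.
frontiers = [pareto_filter(c) for c in (concept_a, concept_b)]
s_pareto = pareto_filter(np.vstack(frontiers))
print(s_pareto)
```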


Journal ArticleDOI
TL;DR: A flexible model where traffic belongs to a polytope is introduced, which can be considered as a mathematical framework for a new flexible virtual private network service offer and also introduces a new concept: the routing of a polytope.
Abstract: Due to the success of the Internet and the diversity of communication applications, it is becoming increasingly difficult to forecast traffic patterns. To capture the traffic variations, we introduce a flexible model where traffic belongs to a polytope. We assume that the traffic demands between nodes can be carried on many paths, with respect to network resources. Moreover, to guarantee the network stability and to make the routing easy to implement, the proportions of traffic flowing through each path have to be independent of the current traffic demands. We show that a minimum-cost routing satisfying the previous properties can be efficiently computed by column and constraint generation. We then present several strategies related to certain algorithmic details. Finally, theoretical and computational studies show that this new flexible model can be much more economical than a classical deterministic model based on a given traffic matrix. This paper can be considered as a mathematical framework for a new flexible virtual private network service offer. It also introduces a new concept: the routing of a polytope.

165 citations
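
With fixed routing proportions, verifying a link against every demand in the polytope reduces to one linear program per link (maximize the link's load over the polytope); this is the kind of subproblem that drives the column and constraint generation. A sketch with scipy, in which the polytope, proportions, and capacities are all invented:

```python
import numpy as np
from scipy.optimize import linprog

# Demand polytope {d >= 0 : A d <= b} over 3 origin-destination pairs
# (all numbers invented for illustration).
A = np.array([[1.0, 1.0, 0.0],    # pairs 1 and 2 share an access link
              [0.0, 1.0, 1.0]])   # pairs 2 and 3 share another
b = np.array([10.0, 8.0])

# phi[e, k]: fraction of pair k's demand routed across link e. The
# proportions are fixed, independent of the realized demand, as the
# abstract requires.
phi = np.array([[0.7, 0.3, 0.0],
                [0.3, 0.7, 1.0]])
capacity = np.array([9.0, 12.0])

for e in range(phi.shape[0]):
    # Worst-case load on link e = max phi[e] @ d over the polytope,
    # solved as an LP by minimizing the negated objective.
    res = linprog(-phi[e], A_ub=A, b_ub=b, bounds=(0, None))
    worst = -res.fun
    ok = "OK" if worst <= capacity[e] else "VIOLATED"
    print(f"link {e}: worst-case load {worst:.1f} / cap {capacity[e]} {ok}")
```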


Journal ArticleDOI
TL;DR: The procedure of the traditional Most Probable Point (MPP) based reliability analysis method is combined with the collaborative disciplinary analyses to automatically satisfy the interdisciplinary consistency when conducting reliability analysis.
Abstract: Traditional Multidisciplinary Design Optimization (MDO) generates deterministic optimal designs, which are frequently pushed to the limits of design constraint boundaries, leaving little or no room to accommodate uncertainties in system input, modeling, and simulation. As a result, the design solution obtained may be highly sensitive to variations in system input, which leads to performance loss, and the solution is often risky (a high likelihood of undesired events). Reliability-based design is one of the alternative techniques for design under uncertainty. The natural method to perform reliability analysis in multidisciplinary systems is the all-in-one approach, where the existing reliability analysis techniques are applied directly to the system-level multidisciplinary analysis. However, the all-in-one reliability analysis method requires a double-loop procedure and therefore is generally very time consuming. To improve the efficiency of reliability analysis under the MDO framework, a collaborative reliability analysis method is proposed in this paper. The procedure of the traditional Most Probable Point (MPP) based reliability analysis method is combined with the collaborative disciplinary analyses to automatically satisfy the interdisciplinary consistency when conducting reliability analysis. As a result, only a single-loop procedure is required and all the computations are conducted concurrently at the individual discipline level. Compared with the existing reliability analysis methods in MDO, the proposed method is efficient and therefore provides a cheaper tool to evaluate design feasibility in MDO under uncertainty. Two examples are used for the purpose of verification.

125 citations
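
The MPP search at the core of such methods can be illustrated on a single explicit limit state: in standard normal space, find the point on g(u) = 0 closest to the origin. A sketch of the classic Hasofer-Lind/Rackwitz-Fiessler iteration, with an invented limit state; the paper's contribution is embedding this kind of search in the collaborative disciplinary analyses:

```python
import numpy as np

def hl_rf(g, grad_g, n, tol=1e-8, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler search for the Most Probable
    Point of failure in standard normal space: min ||u|| s.t. g(u) = 0."""
    u = np.zeros(n)
    for _ in range(max_iter):
        val, grad = g(u), grad_g(u)
        u_new = (grad @ u - val) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)   # reliability index
    return u, beta

# Illustrative limit state in standard normal variables:
# g(u) = 3 - u1 - u2, with failure when g < 0.
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])

u_star, beta = hl_rf(g, grad_g, n=2)
print("MPP:", u_star, "beta:", beta)   # beta = 3/sqrt(2) for this g
```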


Journal ArticleDOI
TL;DR: The purpose of the present paper is to present a decomposition strategy in a general context, provide rigorous theory justifying the decomposition, and give some simple illustrative examples.
Abstract: Numerous hierarchical and nonhierarchical decomposition strategies for the optimization of large scale systems, comprised of interacting subsystems, have been proposed. With a few exceptions, all of these strategies lack a rigorous theoretical justification. This paper focuses on a class of quasiseparable optimization problems narrow enough for a rigorous decomposition theory, yet general enough to encompass many large scale engineering design problems. The subsystems for these problems involve local design variables and global system variables, but no variables from other subsystems. The objective function is a sum of a global system criterion and the subsystems' criteria. The essential idea is to give each subsystem a budget and global system variable values, and then ask the subsystems to independently maximize their constraint margins. Using these constraint margins, a system optimization then adjusts the values of the system variables and subsystem budgets. The subsystem margin problems are totally independent, always feasible, and could even be done asynchronously in a parallel computing context. An important detail is that the subsystem tasks, in practice, would be to construct response surface approximations to the constraint margin functions, and the system level optimization would use these margin surrogate functions. The purpose of the present paper is to present a decomposition strategy in a general context, provide rigorous theory justifying the decomposition, and give some simple illustrative examples.

96 citations
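
The subsystem task is easy to state concretely: given system-variable values and a budget, maximize the worst constraint margin over the local variables. A toy subproblem with scipy, in which both constraints are invented:

```python
import numpy as np
from scipy.optimize import minimize

def margin_subproblem(z, budget):
    """Maximize the minimum constraint margin of one subsystem for a
    given shared variable z and budget (toy constraints, not from the
    paper). Variables: local design x (2-vector) and margin s."""
    def g1(v):   # stress-like limit minus margin
        x, s = v[:2], v[2]
        return (10.0 - (x[0] - z)**2 - x[1]**2) - s
    def g2(v):   # budget (cost) slack minus margin
        x, s = v[:2], v[2]
        return (budget - 2.0 * x[0] - x[1]) - s
    cons = [{'type': 'ineq', 'fun': g1}, {'type': 'ineq', 'fun': g2}]
    res = minimize(lambda v: -v[2], x0=np.zeros(3), constraints=cons,
                   method='SLSQP')
    return -res.fun, res.x[:2]   # best margin and local design

# The system level would adjust z and the budgets using these margins;
# the margin problem is always feasible (the margin may be negative).
m, x_loc = margin_subproblem(z=1.0, budget=5.0)
print(f"margin {m:.3f} at local design {x_loc}")
```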


Journal ArticleDOI
TL;DR: A two-stage stochastic integer programming model for the simultaneous optimization of power production and day-ahead power trading in a hydro-thermal system is developed and solved by a decomposition method combining Lagrangian relaxation of nonanticipativity with branch-and-bound in the spirit of global optimization.
Abstract: We develop a two-stage stochastic integer programming model for the simultaneous optimization of power production and day-ahead power trading in a hydro-thermal system. The model rests on mixed-integer linear formulations for the unit commitment problem and for the price clearing mechanism at the power exchange. Foreign bids enter as random components into the model. We solve the stochastic integer program by a decomposition method combining Lagrangian relaxation of nonanticipativity with branch-and-bound in the spirit of global optimization. Finally, we report some first computational experiences.

81 citations
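
The two-stage structure can be shown on a toy instance small enough to enumerate: binary first-stage commitment decisions, scenario-wise second-stage dispatch, expected cost minimized. All numbers below are invented, and the sketch omits the paper's price-clearing mechanism and Lagrangian decomposition:

```python
import itertools
import numpy as np

# Two units with start-up and linear fuel costs; demand is random.
startup = np.array([50.0, 30.0])
fuel    = np.array([2.0, 5.0])      # cost per MWh
pmax    = np.array([60.0, 40.0])
scenarios = [(0.3, 40.0), (0.5, 70.0), (0.2, 95.0)]   # (prob, demand)
penalty = 100.0                     # cost of unserved energy

best = None
for u in itertools.product([0, 1], repeat=2):   # first-stage commitment
    cost = startup @ np.array(u)
    for prob, demand in scenarios:              # second-stage dispatch
        cap = pmax * np.array(u)
        remaining, stage2 = demand, 0.0
        for i in np.argsort(fuel):              # cheapest units first
            gen = min(cap[i], remaining)
            stage2 += fuel[i] * gen
            remaining -= gen
        stage2 += penalty * remaining           # unserved demand
        cost += prob * stage2
    if best is None or cost < best[0]:
        best = (cost, u)
print(f"commit units {best[1]} with expected cost {best[0]:.1f}")
```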


Journal ArticleDOI
TL;DR: The creation and use of a generalized cost function methodology based on costlets for automated optimization for conformal and intensity modulated radiotherapy treatment plans is presented, and the use of the costlets is described and illustrated in an automated planning system developed and used clinically at the University of Michigan Medical Center.
Abstract: We present the creation and use of a generalized cost function methodology based on costlets for automated optimization for conformal and intensity modulated radiotherapy treatment plans. In our approach, cost functions are created by combining clinically relevant “costlets”. Each costlet is created by the user, using an “evaluator” of the plan or dose distribution which is incorporated into a function or “modifier” to create an individual costlet. Dose statistics, dose-volume points, biological model results, non-dosimetric parameters, and any other information can be converted into a costlet. A wide variety of different types of costlets can be used concurrently. Individual costlet changes affect not only the results for that structure, but also all the other structures in the plan (e.g., a change in a normal tissue costlet can have large effects on target volume results as well as the normal tissue). Effective cost functions can be created from combinations of dose-based costlets, dose-volume costlets, biological model costlets, and other parameters. Generalized cost functions based on costlets have been demonstrated, and show potential for allowing input of numerous clinical issues into the optimization process, thereby helping to achieve clinically useful optimized plans. In this paper, we describe and illustrate the use of the costlets in an automated planning system developed and used clinically at the University of Michigan Medical Center. We place particular emphasis on the flexibility of the system, and its ability to discover a variety of plans making various trade-offs between clinical goals of the treatment that may be difficult to meet simultaneously.

61 citations
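
The costlet pattern, an evaluator of the plan composed with a modifier and summed into a single cost function, translates almost directly into code. A sketch with invented structures, dose levels, and weights:

```python
import numpy as np

def costlet(evaluator, modifier):
    """Compose a plan 'evaluator' with a 'modifier' into one costlet."""
    return lambda plan: modifier(evaluator(plan))

# Illustrative evaluators on a toy 'plan': dose arrays per structure.
mean_target_dose = lambda plan: plan['target'].mean()
max_cord_dose    = lambda plan: plan['cord'].max()

# Modifiers turn an evaluated quantity into a cost: penalize target
# shortfall below 60 Gy and cord overdose above 45 Gy (toy values).
underdose = lambda d: 10.0 * max(0.0, 60.0 - d) ** 2
overdose  = lambda d:  5.0 * max(0.0, d - 45.0) ** 2

costlets = [costlet(mean_target_dose, underdose),
            costlet(max_cord_dose, overdose)]

def total_cost(plan):
    # The overall cost function is the sum of the individual costlets.
    return sum(c(plan) for c in costlets)

plan = {'target': np.array([58.0, 61.0, 59.5]),
        'cord':   np.array([30.0, 47.0, 41.0])}
print(f"total cost: {total_cost(plan):.2f}")
```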


Journal ArticleDOI
TL;DR: In this article, a large-scale convex nonlinear program is decomposed according to the scheme of analytical target cascading and a Lagrangian duality-based coordination is proposed to converge to a solution of the original problem.
Abstract: Lagrangian duality is a powerful tool for dealing with large mathematical programs which require decomposition. We consider a large-scale convex nonlinear program that is decomposed according to the scheme of analytical target cascading and propose a Lagrangian duality-based coordination in which solutions of resulting subproblems converge to a solution of the original problem. We present a subgradient algorithm to achieve said solution, demonstrate with an example, and conclude with an extension to multiple level problems with multiple subsystems on each level.

54 citations
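
The coordination idea fits in a few lines on the smallest possible instance: two quadratic subproblems that must agree on a shared variable, with the consistency constraint dualized and the multiplier updated by a subgradient step. All problem data is invented:

```python
# Dual (Lagrangian) coordination of two quadratic subproblems that
# must agree on a shared variable: min (x-4)^2 + 2(y-1)^2  s.t.  x = y.
# The coupled optimum is x = y = 2.
lam, step = 0.0, 0.5
for k in range(200):
    # Subproblems solved independently (closed form for quadratics):
    x = 4.0 - lam / 2.0        # argmin (x-4)^2 + lam*x
    y = 1.0 + lam / 4.0        # argmin 2(y-1)^2 - lam*y
    lam += step * (x - y)      # subgradient step on the dual
print(f"x = {x:.4f}, y = {y:.4f}, lambda = {lam:.4f}")
```

With this step size the multiplier converges geometrically to lambda = 4, at which point the two subproblem solutions coincide at the coupled optimum.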


Journal ArticleDOI
TL;DR: In this paper, a fast radiotherapy planning algorithm is presented which determines approximately optimal gantry and table angles, kinds of wedges, leaf positions and intensities simultaneously in a global way.
Abstract: We present a new fast radiotherapy planning algorithm which determines approximately optimal gantry and table angles, kinds of wedges, leaf positions and intensities simultaneously in a global way. The remaining parameters are optimized independently of one another. The algorithm uses elaborate field management and field reduction. Beam intensities are determined via a variant of the projected Newton method of Bertsekas. The objective function is a standard piecewise quadratic penalty function, but it is built with efficient upper bounds which are calculated during the optimization process. Instead of pencil beams, basic leaf positions are included. The algorithm is implemented in the new beam modelling and dose optimization module Homo OptiS.

25 citations
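
The inner intensity problem is minimization of a piecewise quadratic penalty over nonnegative beam weights. The sketch below uses plain projected gradient rather than the projected Newton variant of Bertsekas used in the paper, with an invented dose-deposition matrix and dose window:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(40, 8))   # toy dose-deposition matrix
lb, ub = 58.0, 62.0                       # per-voxel dose window (toy)

def penalty_grad(x):
    # Piecewise quadratic penalty on doses d = A x outside [lb, ub].
    d = A @ x
    over, under = np.maximum(d - ub, 0.0), np.maximum(lb - d, 0.0)
    f = np.sum(over**2) + np.sum(under**2)
    g = 2.0 * A.T @ (over - under)
    return f, g

x = np.full(8, 10.0)                      # initial beam intensities
step = 1e-3
for _ in range(2000):
    f, g = penalty_grad(x)
    x = np.maximum(x - step * g, 0.0)     # gradient step + projection
print(f"penalty {f:.4f}, intensities {np.round(x, 2)}")
```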


Journal ArticleDOI
TL;DR: This paper discusses geometry and grid generation issues for an automated shape optimization using computational fluid dynamics and computational structural mechanics to achieve robust grid deformation.
Abstract: This paper discusses geometry and grid generation issues for an automated shape optimization using computational fluid dynamics and computational structural mechanics. Special attention is given to five major steps for shape optimization: shape parameterization, automation of model abstraction, automation of grid generation, calculation of analytical sensitivity, and robust grid deformation.

Journal ArticleDOI
TL;DR: In this article, a first-order second-moment statistical approximation method is used to propagate the assumed input uncertainty through coupled Euler CFD aerodynamic/finite element structural codes for both analysis and sensitivity analysis.
Abstract: The effect of geometric uncertainty due to statistically independent, random, normally distributed shape parameters is demonstrated in the computational design of a 3-D flexible wing. A first-order second-moment statistical approximation method is used to propagate the assumed input uncertainty through coupled Euler CFD aerodynamic/finite element structural codes for both analysis and sensitivity analysis. First-order sensitivity derivatives obtained by automatic differentiation are used in the input uncertainty propagation. These propagated uncertainties are then used to perform a robust design of a simple 3-D flexible wing at supercritical flow conditions. The effect of the random input uncertainties is shown by comparison with conventional deterministic design results. Sample results are shown for wing planform, airfoil section, and structural sizing variables.
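
First-order second-moment propagation reduces to one gradient evaluation: for independent inputs, sigma_f^2 is approximately the sum of (df/dx_i)^2 * sigma_i^2. A sketch with an invented response function, using finite differences where the paper uses automatic differentiation:

```python
import numpy as np

def fosm(f, x0, sigmas, h=1e-6):
    """First-order second-moment propagation of independent, normally
    distributed inputs through f, with central-difference derivatives."""
    x0 = np.asarray(x0, dtype=float)
    grads = np.empty_like(x0)
    for i in range(x0.size):
        e = np.zeros_like(x0)
        e[i] = h
        grads[i] = (f(x0 + e) - f(x0 - e)) / (2 * h)
    mean = f(x0)
    var = np.sum((grads * np.asarray(sigmas))**2)
    return mean, np.sqrt(var)

# Toy response: lift-like quantity from two shape parameters.
f = lambda x: 0.8 * x[0]**2 + 1.5 * x[0] * x[1]
mean, std = fosm(f, x0=[2.0, 0.5], sigmas=[0.05, 0.02])
print(f"mean {mean:.3f}, std {std:.4f}")
```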

Journal ArticleDOI
TL;DR: The convexity of the cost function ensures that any local minimum is a global minimum for the given network topology, and theoretically any descent algorithm for finding local minima can be applied to the design of minimum cost mining networks.
Abstract: In this paper we consider the problem of optimising the construction and haulage costs of underground mining networks. We focus on a model of underground mine networks consisting of ramps in which each ramp has a bounded maximum gradient. The cost depends on the lengths of the ramps, the tonnages hauled through them and their gradients. We model such an underground mine network as an edge-weighted network and show that the problem of optimising the cost of the network can be described as an unconstrained non-linear optimisation problem. We show that, under a mild condition which is satisfied in practice, the cost function is convex. Finally we briefly discuss how the model can be generalised to those underground mine networks that are composed not only of ramps but also vertical shafts, and show that the total cost in the generalised model is still convex under the same condition. The convexity of the cost function ensures that any local minimum is a global minimum for the given network topology, and theoretically any descent algorithm for finding local minima can be applied to the design of minimum cost mining networks.
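
A toy version of the edge cost illustrates how length, tonnage, and the gradient bound interact: if the straight ramp would exceed the maximum gradient, it must be lengthened (for example by spiralling) to descend at exactly the bound. The cost form and all constants below are invented for illustration, not taken from the paper:

```python
import math

def ramp_cost(dh, dv, tonnage, m=1/7, c_dev=1000.0, c_haul=0.15):
    """Toy cost of one ramp: development plus haulage cost, both
    proportional to ramp length. dh/dv are horizontal/vertical
    separations; m is the maximum allowed gradient."""
    if dv <= m * dh:
        length = math.hypot(dh, dv)            # straight ramp is legal
    else:
        # Gradient-constrained: extra horizontal travel so the ramp
        # descends at exactly the maximum gradient (ignores the exact
        # horizontal end point, as a simplification).
        length = (dv / m) * math.sqrt(1.0 + m**2)
    return (c_dev + c_haul * tonnage) * length

# Total network cost would be the sum of such edge costs.
print(f"{ramp_cost(dh=300.0, dv=120.0, tonnage=5e4):.0f}")
```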

Journal ArticleDOI
TL;DR: This paper aims to be an introduction to the theory of proximal algorithms borrowing ideas from descent methods for unconstrained optimization, and presents a simple and natural convergence proof.
Abstract: Proximal point methods have been used by the optimization community to analyze different algorithms like multiplier methods for constrained optimization, and bundle methods for nonsmooth problems. This paper aims to be an introduction to the theory of proximal algorithms borrowing ideas from descent methods for unconstrained optimization. This new viewpoint allows us to present a simple and natural convergence proof. We also slightly improve the results of Solodov and Svaiter (1999).
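
The proximal point iteration is x_{k+1} = argmin_y f(y) + ||y - x_k||^2 / (2*lambda), and each step is well posed even for nonsmooth f. A sketch on an invented one-dimensional function, solving the inner step numerically:

```python
from scipy.optimize import minimize_scalar

f = lambda x: abs(x - 2.0) + 0.1 * x**2   # nonsmooth test function

x, lam = 10.0, 1.0
for k in range(30):
    # Proximal step: minimize f plus a quadratic tether to the iterate.
    prox = minimize_scalar(lambda y: f(y) + (y - x)**2 / (2.0 * lam))
    x = prox.x
print(f"x* ~ {x:.4f}")   # the minimizer of this f is x = 2
```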

Journal ArticleDOI
TL;DR: In this paper, the authors present NP-complete decision problems concerning the deployment and utilization of baggage screening security devices, including three different deployment performance measures: uncovered baggage segments, uncovered flight segments, and uncovered passenger segments.
Abstract: Aviation security is an important problem of national interest and concern. Baggage screening security devices and operations at airports throughout the United States provide an important defense against terrorist actions targeted at commercial aircraft. Determining where to deploy such devices, and how to best use them can be quite challenging. This paper presents NP-complete decision problems concerning the deployment and utilization of baggage screening security devices. These problems incorporate three different deployment performance measures: uncovered baggage segments, uncovered flight segments, and uncovered passenger segments. Integer programming models are formulated to address optimization versions of these problems and to identify optimal baggage screening security device deployments (i.e., determine the number and type of baggage screening security devices that should be placed at different airports, and determining which baggage should be screened with such devices). The models are illustrated with an example that incorporates data extracted from the Official Airline Guide (OAG).
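
A stripped-down version of one deployment question: given a device budget, choose airports to minimize uncovered flight segments, where a segment counts as covered when its origin airport screens. Brute force suffices at toy scale; the data below is invented, not from the OAG:

```python
import itertools

# Illustrative flight segments (origin, destination); a segment is
# covered if a screening device is deployed at its origin airport.
segments = [('ORD', 'DFW'), ('ORD', 'ATL'), ('DFW', 'STL'),
            ('ATL', 'STL'), ('STL', 'ORD'), ('DFW', 'ATL')]
airports = sorted({o for o, _ in segments})
budget = 2          # number of devices available

best = None
for chosen in itertools.combinations(airports, budget):
    uncovered = sum(1 for o, _ in segments if o not in chosen)
    if best is None or uncovered < best[0]:
        best = (uncovered, chosen)
print(f"deploy at {best[1]}: {best[0]} uncovered flight segments")
```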

Journal ArticleDOI
TL;DR: Results show that the method finds the same local optimum as a conventional optimization method with as much as 50% reduction in the computational cost and without significant modifications to the analysis tools.
Abstract: The formulation and implementation of a multidisciplinary optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is presented and applied to a simple, isolated, 3-D wing in inviscid flow. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using existing state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. Results show that the method finds the same local optimum as a conventional optimization method with as much as 50% reduction in the computational cost and without significant modifications to the analysis tools.

Journal ArticleDOI
TL;DR: Fast simulated annealing and a novel objective function are used to investigate the relationship between the number of shots and the quality of the resulting treatment, and suggest that it is clinically valuable to improve the Gamma Knife’s delivery capabilities so that 50-shot treatments are possible.
Abstract: Radiosurgery is a non-invasive alternative to brain surgery that uses a single focused application of high radiation to destroy intracerebral target tissues. A Gamma Knife delivers such treatments by using 201 cylindrically collimated cobalt-60 sources that are arranged in a hemi-spherical pattern and aimed to a common focal point. The accumulation of radiation at the focal point, called a “shot” due to the spherical nature of the dose distribution, is used to ablate (or destroy) target tissue in the brain. If the target is small and spherical, it is easily treated by choosing one of four available collimators (4, 8, 14, or 18 mm). For large, irregular targets multiple shots are typically required to treat the entire lesion, and the process of determining the optimal arrangement and number of shots is complex.
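
A one-dimensional caricature shows how simulated annealing can arrange shots: each shot is a (center, radius) pair with radii corresponding to the four collimators, and the objective penalizes uncovered target and dose spilled outside it. The geometry, weights, and cooling schedule are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
target = (20.0, 50.0)                    # 1-D stand-in for the lesion
radii = np.array([2.0, 4.0, 7.0, 9.0])   # collimator radii (4-18 mm)
grid = np.linspace(0.0, 70.0, 701)
in_target = (grid >= target[0]) & (grid <= target[1])

def objective(shots):
    dose = np.zeros_like(grid)
    for c, r in shots:
        dose[np.abs(grid - c) <= r] += 1.0
    uncovered = np.sum(in_target & (dose == 0))   # target missed
    spill = np.sum(~in_target & (dose > 0))       # healthy tissue hit
    return uncovered + 0.3 * spill                # weights invented

shots = [(rng.uniform(*target), rng.choice(radii)) for _ in range(3)]
f, T = objective(shots), 50.0
for _ in range(5000):
    cand = list(shots)
    i = rng.integers(len(cand))
    cand[i] = (cand[i][0] + rng.normal(0.0, 2.0), rng.choice(radii))
    fc = objective(cand)
    if fc < f or rng.random() < np.exp(-(fc - f) / T):
        shots, f = cand, fc            # Metropolis acceptance rule
    T *= 0.999                         # geometric cooling
print(f"objective {f:.1f}; shots {[(round(c, 1), float(r)) for c, r in shots]}")
```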

Journal ArticleDOI
TL;DR: In this paper, the optimal 3D-nose shapes of rods are proposed for deep penetration into soil, concrete and metal media, assuming that the normal and tangent stresses on the impactor nose are of the Poncelet and Coulomb forms, respectively.
Abstract: Friction effects on the optimum shape design for a normal impacting, rigid body are investigated and the optimum 3D nose shapes of rods are proposed for deep penetration into soil, concrete and metal media. The study is conducted by assuming that the normal and tangent stresses that act on the impactor nose are of the Poncelet and Coulomb forms, respectively. The geometrical characteristics of the shapes maximizing penetration depth are compared with those of the minimal resistance bodies obtained at the initial stage of the penetration event. When mass, shank radius and nose length of the rods are fixed, a comparative study of the penetration depths of the optimal impactors and impactors with conical and ogival nose shapes is carried out. The conditions, when the benefits of the optimal configurations in providing deep penetration into soil, concrete and metal media become significant in comparison with other shapes, have been obtained. The model parameters are taken from the published reports on penetration data obtained for striking velocities up to 1.5 km/s while the impactors remained rigid and visibly undeformed.
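
Under a Poncelet resistance law F(v) = c0 + c2*v^2, rigid-body penetration depth has a closed form, obtained by integrating m*v*dv/dx = -F(v) from the striking velocity down to rest; the nose shape enters through the coefficients. The constants below are invented, and the sketch omits the Coulomb friction term the paper includes:

```python
import math

def poncelet_depth(m, v0, c0, c2):
    """Penetration depth of a rigid impactor decelerated by the
    Poncelet law F(v) = c0 + c2*v**2: integrating m*v*dv/dx = -F(v)
    gives depth = (m / (2*c2)) * ln(1 + c2*v0**2 / c0)."""
    return (m / (2.0 * c2)) * math.log(1.0 + c2 * v0**2 / c0)

# Illustrative numbers only: a 10 kg rod striking at 1.2 km/s.
print(f"depth ~ {poncelet_depth(m=10.0, v0=1200.0, c0=4.0e6, c2=1.5):.2f} m")
```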

Journal ArticleDOI
TL;DR: In this article, the shape of a hanging chain is considered to minimize the potential energy of the chain, and the authors present several models of the problem and demonstrate differences in the number of iterations and solution time.
Abstract: This is the second paper in a series presenting case studies in modern large-scale constrained optimization. In this paper, we consider the shape of a hanging chain, which, in equilibrium, minimizes the potential energy of the chain. In addition to the tutorial aspects of this paper, we also emphasize the importance of certain modeling issues, such as convex vs. nonconvex formulations of a given problem. We will present several models of the problem and demonstrate differences in the number of iterations and solution time.
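
The discretized problem is direct to set up: n rigid links of equal length, endpoints pinned, minimize the sum of link midpoint heights (proportional to potential energy) subject to the link-length constraints. A nonconvex formulation solved with SLSQP, with invented dimensions:

```python
import numpy as np
from scipy.optimize import minimize

n = 20                                  # number of links
L = 1.2                                 # chain length (> endpoint gap)
a, b = (0.0, 0.0), (1.0, 0.0)           # pinned endpoints
link = L / n

def joints(v):
    # Assemble joint coordinates from the interior unknowns.
    x = np.concatenate([[a[0]], v[:n-1], [b[0]]])
    y = np.concatenate([[a[1]], v[n-1:], [b[1]]])
    return x, y

def energy(v):
    _, y = joints(v)
    # Potential energy of a uniform chain ~ sum of midpoint heights.
    return np.sum((y[:-1] + y[1:]) / 2.0)

def link_residual(v):
    x, y = joints(v)
    return np.hypot(np.diff(x), np.diff(y)) - link   # = 0 when feasible

v0 = np.concatenate([np.linspace(a[0], b[0], n + 1)[1:-1],
                     -0.1 * np.ones(n - 1)])
res = minimize(energy, v0, method='SLSQP',
               constraints={'type': 'eq', 'fun': link_residual},
               options={'maxiter': 500})
_, y = joints(res.x)
print(f"lowest point of the chain: y = {y.min():.4f}")
```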

Journal ArticleDOI
TL;DR: In this paper, a special mathematical programming problem with equilibrium constraints (MPEC) arises in material and shape optimization problems involving the contact of a rod or a plate with a rigid obstacle.
Abstract: We discuss a special mathematical programming problem with equilibrium constraints (MPEC) that arises in material and shape optimization problems involving the contact of a rod or a plate with a rigid obstacle. This MPEC can be reduced to a nonlinear programming problem with independent variables and some dependent variables implicitly defined by the solution of a mixed linear complementarity problem (MLCP). A projected-gradient algorithm including a complementarity method is proposed to solve this optimization problem. Several numerical examples are reported to illustrate the efficiency of this methodology in practice.
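
The dependent variables come from a complementarity solve: find z, w >= 0 with w = Mz + q and w.z = 0. For tiny instances one can simply enumerate complementarity patterns; the sketch below handles a pure LCP, a special case of the mixed LCP in the paper, with invented data:

```python
import itertools
import numpy as np

def lcp_enumerate(M, q):
    """Solve the LCP  w = M z + q,  w >= 0, z >= 0, w.z = 0  by
    enumerating complementarity patterns; fine for toy problems but
    exponential in general."""
    n = len(q)
    for active in itertools.product([False, True], repeat=n):
        S = [i for i in range(n) if active[i]]   # indices with z_i > 0
        z = np.zeros(n)
        if S:
            try:
                z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
            except np.linalg.LinAlgError:
                continue
        w = M @ z + q
        if np.all(z >= -1e-10) and np.all(w >= -1e-10):
            return z, w
    return None

# Tiny contact-style example (data invented).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z, w = lcp_enumerate(M, q)
print("z =", z, "w =", w)
```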

Journal ArticleDOI
TL;DR: An efficient algorithm is presented to solve a structured semidefinite program (SDP) with important applications in the analysis of uncertain linear systems; it achieves substantial savings in computing resources for problems with a large number of parameters.
Abstract: This work presents an efficient algorithm to solve a structured semidefinite program (SDP) with important applications in the analysis of uncertain linear systems. The solution to this particular SDP gives an upper bound for the maximum singular value of a multidimensional rational matrix function, or linear fractional transformation, over a box of n real parameters. The proposed algorithm is based on a known method for solving semidefinite programs. The key features of the algorithm are low memory requirements, low cost per iteration, and efficient adaptive rules to update algorithm parameters. Proper utilization of the structure of the semidefinite program under consideration leads to an algorithm that reduces the cost per iteration and memory requirements of existing general-purpose SDP solvers by a factor of O(n). Thus, the algorithm in this paper achieves substantial savings in computing resources for problems with a large number of parameters. Additional savings are obtained when the problem data includes block-circulant matrices, as is the case in the analysis of uncertain mechanical structures with spatial symmetry.
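
The semidefinite characterization underlying such bounds is that the spectral norm satisfies ||M||_2 <= t exactly when the block matrix [[t*I, M], [M^T, t*I]] is positive semidefinite. The sketch below verifies this on a constant matrix by bisection on the minimum eigenvalue rather than with an SDP solver; the paper's algorithm instead exploits the LFT structure of its particular SDP:

```python
import numpy as np

def specnorm_bound(M, tol=1e-8):
    """Smallest t with [[t*I, M], [M.T, t*I]] PSD, found by bisection
    on the minimum eigenvalue; this equals the maximum singular value."""
    m, n = M.shape
    def psd(t):
        block = np.block([[t * np.eye(m), M], [M.T, t * np.eye(n)]])
        return np.linalg.eigvalsh(block).min() >= -tol
    lo, hi = 0.0, np.sqrt(np.sum(M**2)) + 1.0   # Frobenius upper bound
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if psd(mid) else (mid, hi)
    return hi

M = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])   # example matrix
print(f"LMI bound:          {specnorm_bound(M):.6f}")
print(f"max singular value: {np.linalg.norm(M, 2):.6f}")
```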