
Showing papers on "Optimal design published in 2007"


Book
09 Jul 2007
TL;DR: The book presents the MNL model, methods for comparing designs, and practical techniques for constructing choice experiments, including paired comparison designs and larger choice set sizes for binary attributes.
Abstract: List of Tables. Preface. 1. Typical Stated Choice Experiments. 2. Factorial Designs. 3. The MNL Model and Comparing Designs. 4. Paired Comparison Designs for Binary Attributes. 5. Larger Choice Set Sizes for Binary Attributes. 6. Designs for Asymmetric Attributes. 7. Various Topics. 8. Practical Techniques For Constructing Choice Experiments. Bibliography. Index.

358 citations


Journal ArticleDOI
TL;DR: This work decomposes the problem of optimal linear quadratic Gaussian control of a system whose state is being measured by sensors that communicate with the controller over packet-dropping links into a standard LQR state-feedback controller design, along with an optimal encoder–decoder design for propagating and using the information across the unreliable links.

350 citations


Book ChapterDOI
14 Dec 2007
TL;DR: This chapter gives a survey on the use of statistical designs for what-if analysis in simulation, including sensitivity analysis, optimization, and validation/verification, drawing on regression analysis and both classical and optimal statistical designs.
Abstract: This chapter gives a survey on the use of statistical designs for what-if analysis in simulation, including sensitivity analysis, optimization, and validation/verification. Sensitivity analysis is divided into two phases. The first phase is a pilot stage, which consists of screening or searching for the important factors among (say) hundreds of potentially important factors. A novel screening technique is presented, namely sequential bifurcation. The second phase uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel or a response surface. Regression analysis gives better results when the simulation experiment is well designed, using either classical statistical designs (such as fractional factorials) or optimal designs (such as pioneered by Fedorov, Kiefer, and Wolfowitz). To optimize the simulated system, the analysts may apply Response Surface Methodology (RSM); RSM combines regression analysis, statistical designs, and steepest-ascent hill-climbing. To validate a simulation model, again regression analysis and statistical designs may be applied. Several numerical examples and case-studies illustrate how statistical techniques can reduce the ad hoc character of simulation; that is, these statistical techniques can make simulation studies give more general results, in less time. Appendix 1 summarizes confidence intervals for expected values, proportions, and quantiles, in terminating and steady-state simulations. Appendix 2 gives details on four variance reduction techniques, namely common pseudorandom numbers, antithetic numbers, control variates or regression sampling, and importance sampling. Appendix 3 describes jackknifing, which may give robust confidence intervals.
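
The RSM loop described above (fit a first-order regression metamodel on a designed experiment, then move by steepest ascent) can be sketched in a few lines. The quadratic response function `f` and all step-size settings below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def f(x1, x2):
    # Hypothetical simulation response, maximized at (2, 1).
    return -(x1 - 2.0) ** 2 - (x2 - 1.0) ** 2

def steepest_ascent(center, half_width=0.5, steps=50, step_size=0.1):
    for _ in range(steps):
        # 2^2 factorial design around the current center (coded units +/-1).
        pts = np.array([[dx, dy] for dx in (-1, 1) for dy in (-1, 1)])
        X = np.column_stack([np.ones(4), pts])        # intercept + main effects
        y = np.array([f(*(center + half_width * p)) for p in pts])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]   # first-order metamodel
        grad = beta[1:]
        norm = np.linalg.norm(grad)
        if norm < 1e-6:                               # flat response: stop
            break
        center = center + step_size * grad / norm     # steepest-ascent step
    return center

best = steepest_ascent(np.array([0.0, 0.0]))
```

Because the design points are symmetric about the center, the fitted main effects point along the local gradient, and the hill-climbing step follows it.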

261 citations


Journal ArticleDOI
TL;DR: Differential evolution (DE) and its various strategies are applied to the optimal design of shell-and-tube heat exchangers in this paper, where the main objective is the estimation of the minimum heat transfer area required for a given heat duty, as it governs the overall cost of the heat exchanger.

225 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an optimal kinematic design methodology, referred to as Performance Chart based Design Methodology (PCbDM), for parallel mechanisms with fewer than five linear parameters.

149 citations


Journal ArticleDOI
TL;DR: In this paper, a detailed thermodynamic model for a refrigerator based on an irreversible Carnot cycle is developed with the focus on forced-air heat exchangers, and a multi-objective optimization procedure is implemented to find optimal design values for design variables.

130 citations


Journal ArticleDOI
TL;DR: In this paper, a methodology is proposed to determine the design space for synthesis, analysis, and optimization of solar water heating systems, which incorporates different design constraints to identify all possible designs or a design space on a collector area vs. storage volume diagram.

130 citations


Journal ArticleDOI
TL;DR: In this paper, three primary objectives for a rolling bearing, namely, the dynamic capacity (Cd), the static capacity (Cs) and the elastohydrodynamic minimum film thickness (Hmin), have been optimized separately, pairwise and simultaneously using an advanced multi-objective optimization algorithm, NSGA-II (non-dominated sorting genetic algorithm II).

123 citations


Journal ArticleDOI
TL;DR: The simulation results clearly show that the particle swarm optimization algorithm provides better solutions in terms of performance and computational time than the genetic algorithm based approaches.

119 citations


Journal ArticleDOI
TL;DR: In this paper, a new criterion based on the Kullback-Leibler distance is proposed to discriminate between rival models with non-normally distributed observations, which is coherent with the approaches mentioned above.
Abstract: Typically T-optimality is used to obtain optimal designs to discriminate between homoscedastic models with normally distributed observations. Some extensions of this criterion have been made for the heteroscedastic case and binary response models in the literature. In this paper, a new criterion based on the Kullback–Leibler distance is proposed to discriminate between rival models with non-normally distributed observations. The criterion is coherent with the approaches mentioned above. An equivalence theorem is provided for this criterion and an algorithm to compute optimal designs is developed. The criterion is applied to discriminate between the popular Michaelis–Menten model and a typical extension of it under the log-normal and the gamma distributions.

113 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a comprehensive horizon planning model using a perspective that encompasses all necessary parameters and constraints, including substation and distribution transformer capacities, number, size, and lengths of distribution feeders and secondary conductors; and primary voltage class.
Abstract: This paper is the first of a two-part paper on optimal distribution system planning. The horizon distribution planning problem and optimal distribution system model formulation are described. The horizon planning mission is to minimize future costs by determining optimal design parameters given assumptions about the future. Prior work addressed short-range and expansion planning of subsets or combinations of design parameters. The many distribution requirements and associated constraints inhibited an all-inclusive evaluation of total horizon design requirements for the 20+ year period. The proposed model and optimization formulation provide a generalized horizon planning approach and introduce a fully functioning comprehensive horizon planning model using a perspective that encompasses all necessary parameters and constraints. Parameters include: substation and distribution transformer capacities; number, size, and lengths of distribution feeders and secondary conductors; and primary voltage class. Optimal design voltage drops and reliability indices are determined. The horizon planning optimization application is described and solved in the second companion paper using continuous constrained nonlinear programming methods. The application is demonstrated with Snohomish PUD case studies.

01 Jan 2007
TL;DR: This work addresses the question of how to manage the interplay between the optimization and the fidelity of the approximation models to ensure that the process converges to a solution of the original design problem.
Abstract: A standard engineering practice is the use of approximation models in place of expensive simulations to drive an optimal design process based on nonlinear programming algorithms. The use of approximation techniques is intended to reduce the number of detailed, costly analyses required during optimization while maintaining the salient features of the design problem. The question we address is how to manage the interplay between the optimization and the fidelity of the approximation models to ensure that the process converges to a solution of the original design problem. Using well-established notions from the literature on trust-region methods and a powerful global convergence theory for pattern search methods, we can ensure that the optimization process converges to a solution of the original design problem.
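
The model-management idea above can be illustrated with a minimal one-dimensional trust-region loop: a cheap local quadratic approximation proposes a step, and the ratio of actual to predicted reduction decides whether to accept the step and how to resize the trust region. The objective `f` is a hypothetical stand-in for an expensive simulation, and this sketch shows the general mechanism rather than the authors' pattern-search framework:

```python
def f(x):
    # Stand-in for an expensive, high-fidelity analysis (minimum at x = 2).
    return (x - 3.0) ** 4 + x ** 2

def quadratic_model(x, h=1e-4):
    # Cheap local approximation: finite-difference gradient and curvature.
    g = (f(x + h) - f(x - h)) / (2 * h)
    H = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    return g, H

x, delta = 0.0, 1.0
for _ in range(40):
    g, H = quadratic_model(x)
    s = -g / H if H > 0 else (-delta if g > 0 else delta)
    s = max(-delta, min(delta, s))                  # stay inside trust region
    predicted = -(g * s + 0.5 * max(H, 0.0) * s * s)
    actual = f(x) - f(x + s)
    rho = actual / predicted if predicted > 0 else -1.0
    if rho > 0.75:
        delta = min(2.0 * delta, 10.0)              # model is good: grow region
    elif rho < 0.25:
        delta *= 0.5                                # model is poor: shrink region
    if rho > 0.0:
        x = x + s                                   # accept only improving steps
```

The ratio test is what ties the fidelity of the approximation to the progress of the optimization, which is the convergence mechanism the abstract refers to.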

Journal ArticleDOI
TL;DR: In this paper, the optimal passive control of adjacent structures interconnected by nonlinear hysteretic devices is studied and solved in the case of a simple two-degrees-of-freedom model.

Journal ArticleDOI
TL;DR: This paper addresses the design of a network of observation locations in a spatial domain that will be used to estimate unknown parameters of a distributed parameter system by solving a relaxed problem through the application of a simplicial decomposition algorithm.
Abstract: This paper addresses the design of a network of observation locations in a spatial domain that will be used to estimate unknown parameters of a distributed parameter system. We consider a setting where we are given a finite number of possible sites at which to locate a sensor, but cost constraints allow only some proper subset of them to be selected. We formulate this problem as the selection of the gauged sites so as to maximize the log-determinant of the Fisher information matrix associated with the estimated parameters. The search for the optimal solution is performed using the branch-and-bound method in which an extremely simple and efficient technique is employed to produce an upper bound to the maximum objective function. The idea is to solve a relaxed problem through the application of a simplicial decomposition algorithm in which the restricted master problem is solved using a multiplicative algorithm for optimal design. The use of the proposed approach is illustrated by a numerical example involving sensor selection for a two-dimensional convective diffusion process.
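
The selection criterion above (maximize the log-determinant of the Fisher information matrix over a subset of candidate sites) can be sketched with a simple greedy loop. The paper's branch-and-bound method finds the exact optimum; the greedy rule and the random sensitivity vectors below are stand-in assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_params, budget = 12, 3, 4
# Hypothetical parameter-sensitivity vector at each candidate site.
S = rng.normal(size=(n_sites, n_params))

def logdet(M):
    sign, value = np.linalg.slogdet(M)
    return value if sign > 0 else -np.inf

chosen = []
M = 1e-6 * np.eye(n_params)          # small ridge keeps M invertible early on
for _ in range(budget):
    # Gain of adding site i is the log-det of M plus its rank-one information.
    gains = [logdet(M + np.outer(S[i], S[i])) if i not in chosen else -np.inf
             for i in range(n_sites)]
    best_site = int(np.argmax(gains))
    chosen.append(best_site)
    M = M + np.outer(S[best_site], S[best_site])
```
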

Journal ArticleDOI
TL;DR: In this article, a framework for covariate-adjusted response-adaptive (CARA) designs is proposed for the allocation of subjects to K (≥ 2) treatments.
Abstract: Response-adaptive designs have been extensively studied and used in clinical trials. However, there is a lack of a comprehensive study of response-adaptive designs that include covariates, despite their importance in clinical trials. Because the allocation scheme and the estimation of parameters are affected by both the responses and the covariates, covariate-adjusted response-adaptive (CARA) designs are very complex to formulate. In this paper, we overcome the technical hurdles and lay out a framework for general CARA designs for the allocation of subjects to K (≥ 2) treatments. The asymptotic properties are studied under certain widely satisfied conditions. The proposed CARA designs can be applied to generalized linear models. Two important special cases, the linear model and the logistic regression model, are considered in detail.
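
A stripped-down sketch of the CARA idea above: the allocation probability for a new patient depends on that patient's covariate through per-stratum response estimates. Real CARA designs use fitted regression models (e.g. logistic regression); the stratified success-rate estimate, the prior pseudo-counts, and the true response rates below are all simplifying assumptions:

```python
import random

random.seed(3)
K = 2                                    # treatments
succ = [[1, 1], [1, 1]]                  # succ[stratum][arm]; 1s act as a prior
n = [[2, 2], [2, 2]]

def allocate(stratum):
    # Allocation probability depends on the patient's covariate (stratum)
    # through the estimated success rate of each arm within that stratum.
    rates = [succ[stratum][a] / n[stratum][a] for a in range(K)]
    total = sum(rates)
    return random.choices(range(K), weights=[r / total for r in rates])[0]

# Hypothetical truth: arm 0 suits stratum 0, arm 1 suits stratum 1.
true_p = [[0.7, 0.4], [0.3, 0.8]]
for _ in range(2000):
    x = random.randrange(2)              # binary covariate of the next patient
    a = allocate(x)
    n[x][a] += 1
    succ[x][a] += random.random() < true_p[x][a]
```

Because both the responses and the covariates feed back into the allocation, each stratum drifts toward its own better-performing arm, which is the behavior the abstract describes.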

Journal ArticleDOI
Kaushik Sinha
TL;DR: In this article, a methodology for reliability-based multiobjective optimization of large-scale engineering systems is presented, which is applied to the vehicle crashworthiness design optimization for side impact, considering both structural crashworthiness and occupant safety.
Abstract: This paper presents a methodology for reliability-based multiobjective optimization of large-scale engineering systems. This methodology is applied to the vehicle crashworthiness design optimization for side impact, considering both structural crashworthiness and occupant safety, with structural weight and front door velocity under side impact as objectives. Uncertainty quantification is performed using two first order reliability method-based techniques: approximate moment approach and reliability index approach. Genetic algorithm-based multiobjective optimization software GDOT, developed in-house, is used to come up with an optimal Pareto front in all cases. The technique employed in this study treats multiple objective functions separately without combining them in any form. It shows that the vehicle weight can be reduced significantly from the baseline design while at the same time reducing the door velocity. The obtained Pareto front brings out useful inferences about optimal design regions. A decision-making criterion is subsequently invoked to select the "best" subset of solutions from the obtained nondominated Pareto optimal solutions. The reliability, thus computed, is also checked with Monte Carlo simulations. The optimal solution indicated by knee point on the optimal Pareto front is verified with LS-DYNA simulation results.
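
The Monte Carlo reliability check mentioned above amounts to sampling the random variables and counting limit-state violations. The Gaussian variables and the linear limit state `g` below are hypothetical stand-ins, not the paper's crash model:

```python
import random

random.seed(1)

def g(x1, x2):
    # Hypothetical limit state: failure occurs when g <= 0.
    return 6.0 - x1 - x2

n_samples, failures = 100_000, 0
for _ in range(n_samples):
    x1 = random.gauss(2.0, 1.0)      # random design/load variables
    x2 = random.gauss(2.0, 1.0)
    if g(x1, x2) <= 0:
        failures += 1
pf = failures / n_samples            # estimated probability of failure
```

For this toy case the exact answer is P(x1 + x2 >= 6) with x1 + x2 ~ N(4, sqrt(2)), roughly 0.079, so the crude Monte Carlo estimate can be checked directly.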

Journal ArticleDOI
TL;DR: In this article, the problem of wing box design optimization using composite laminates with blending constraints is formulated and two different blending schemes are presented: outer and inner blending, and the result shows that the optimum design obtained using the current methodology has better continuity of laminate lay ups and also the reported weight of composite wing box is on the lower side.
Abstract: In this paper we formulate the problem of wing box design optimization using composite laminates with blending constraints. The use of composite laminates necessitates the inclusion of fiber orientation angle of the layers as well as total thickness of the laminate as design variables in the design optimization problem. The wing box design problem is decomposed into several independent local panel design problems. In general such an approach results in a nonblended solution with no continuity of laminate lay-ups across the panels, which may not only increase the lay-up cost but may also be structurally unsafe due to discontinuities. The need for a blended solution increases the complexity of the problem many fold. In this paper we impose the blending constraints globally by using a guide based design methodology within the genetic algorithm optimization scheme and compare the results with the published ones. Two different blending schemes, outer and inner blending, are presented. The result shows that the optimum design obtained using the current methodology has better continuity of laminate lay-ups and also the reported weight of the composite wing box is on the lower side. Finally, a parametric study of the effect of global deflection constraint on the total weight of the optimum design is presented.

Journal ArticleDOI
TL;DR: In this article, a multi-objective optimization technique that incorporates the performance-based seismic design methodology of concrete building structures is proposed to minimize the life cycle cost of a reinforced concrete building frame subject to multiple levels of seismic performance design criteria.
Abstract: In order to meet the emerging trend of performance-based design of structural systems, attempts have been made to develop a multiobjective optimization technique that incorporates the performance-based seismic design methodology of concrete building structures. Specifically, the life-cycle cost of a reinforced concrete building frame is minimized subject to multiple levels of seismic performance design criteria. In formulating the total life-cycle cost, the initial material cost can be expressed in terms of the design variables, and the expected damage loss can be stated as a function of seismic performance levels and their associated failure probability by means of a statistical model. Explicit formulation of design constraints involving inelastic drift response performance caused by pushover loading is expressed with the consideration of the occurrence of reinforced concrete plasticity and the formation of plastic hinges. Because the initial material cost and the expected damage loss are conflicting by nature, the life-cycle cost of a building structure can be posed as a multiobjective optimization problem and solved by the ε-constraint method to produce a Pareto optimal set, from which the best compromise solution can be selected. The methodology for each Pareto optimal solution is fundamentally based on the Optimality Criteria approach. A ten-story planar framework example is presented to illustrate the effectiveness of the proposed optimal design method.
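
The ε-constraint method used above converts the biobjective problem into a family of single-objective problems: minimize one objective subject to a bound ε on the other, then sweep ε to trace a Pareto set. A toy version with made-up cost and loss functions, and a grid search in place of the paper's Optimality Criteria solver:

```python
def cost_initial(x):       # stand-in for initial material cost
    return x ** 2

def damage_loss(x):        # stand-in for expected damage loss
    return (x - 2.0) ** 2

xs = [i / 100.0 for i in range(-100, 301)]   # design grid on [-1, 3]
pareto = []
for eps in (0.25, 1.0, 2.25, 4.0):
    # Minimize cost subject to damage_loss(x) <= eps.
    feasible = [x for x in xs if damage_loss(x) <= eps]
    best = min(feasible, key=cost_initial)
    pareto.append((eps, best))
```

Each ε yields one compromise design; tightening ε (less tolerated loss) forces a costlier design, which is exactly the trade-off the Pareto set exposes.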

Journal ArticleDOI
TL;DR: In this article, the authors proposed a novel unilevel formulation for reliability-based design optimization (RBDO), where the lower level optimization (evaluation of reliability constraints in the double-loop formulation) is replaced by its corresponding first-order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper level optimization.
Abstract: Reliability-based design optimization (RBDO) is a methodology for finding optimized designs that are characterized with a low probability of failure. Primarily, RBDO consists of optimizing a merit function while satisfying reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of different formulations have been developed for RBDO. Traditionally, these have been formulated as a double-loop (nested) optimization problem. The upper level optimization loop generally involves optimizing a merit function subject to reliability constraints, and the lower level optimization loop(s) compute(s) the probabilities of failure corresponding to the failure mode(s) that govern(s) the system failure. This formulation is, by nature, computationally intensive. Researchers have provided sequential strategies to address this issue, where the deterministic optimization and reliability analysis are decoupled, and the process is performed iteratively until convergence is achieved. These methods, though attractive in terms of obtaining a workable reliable design at considerably reduced computational costs, often lead to premature convergence and therefore yield spurious optimal designs. In this paper, a novel unilevel formulation for RBDO is developed. In the proposed formulation, the lower level optimization (evaluation of reliability constraints in the double-loop formulation) is replaced by its corresponding first-order Karush–Kuhn–Tucker (KKT) necessary optimality conditions at the upper level optimization. 
Such a replacement is computationally equivalent to solving the original nested optimization if the lower level optimization problem is solved by numerically satisfying the KKT conditions (which is typically the case). It is shown through the use of test problems that the proposed formulation is numerically robust (stable) and computationally efficient compared to the existing approaches for RBDO.
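
Schematically, the replacement described above can be written out as follows, with d the design variables, u_i the standard-normal variables of the i-th reliability constraint, and β_i^t the target reliability index. The notation is illustrative and signs depend on the failure convention, so this is a sketch rather than the paper's exact formulation:

```latex
% Double-loop RBDO: the inner problem defines the reliability index.
\min_{d}\; f(d)
\quad\text{s.t.}\quad
\beta_i(d) \ge \beta_i^{t},
\qquad
\beta_i(d) = \min_{u}\,\{\, \|u\| \;:\; g_i(d,u) = 0 \,\}.

% Unilevel reformulation: the inner minimization is replaced by its
% first-order KKT conditions, imposed as constraints at the upper level
% (the last constraint forces u_i to be antiparallel to \nabla_{u_i} g_i).
\min_{d,\;u_i}\; f(d)
\quad\text{s.t.}\quad
\|u_i\| \ge \beta_i^{t},
\qquad
g_i(d,u_i) = 0,
\qquad
\|\nabla_{u_i} g_i\|\, u_i + \|u_i\|\, \nabla_{u_i} g_i = 0 .
```
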

Journal ArticleDOI
Ki-Chan Kim, Joon Seon Ahn, Sung Hong Won, Jung-Pyo Hong, Ju Lee
TL;DR: The optimal design of SynRM including rotor structure is proposed by using the finite element method (FEM) and simulation design of experiment (DOE), and the characteristics of the optimal model are compared with those of the worst one.
Abstract: This paper presents the optimal design method of a synchronous reluctance motor (SynRM) for high torque and power factor. It is difficult to design the optimal barrier shape of the rotor by analytical methods because of leakage flux between barriers and saturation in the core. Therefore, the optimal design of the SynRM, including the rotor structure, is proposed in this paper by using the finite element method (FEM) and simulation design of experiment (DOE). Finally, the characteristics of the optimal model are compared with those of the worst one.

Journal ArticleDOI
TL;DR: The research results presented demonstrate the benefit of implementing MOGA optimization as an integral part of a reliability-based optimization procedure for three-dimensional trusses.
Abstract: A hybrid methodology for performing reliability-based structural optimization of three-dimensional trusses is presented. This hybrid methodology links the search and optimization capabilities of multi-objective genetic algorithms (MOGA) with structural performance information provided by finite element reliability analysis. To highlight the strengths of the proposed methodology, a practical example is presented that concerns optimizing the topology, geometry, and member sizes of electrical transmission towers. The weight and reliability index of a tower are defined as the two objectives used by MOGA to perform Pareto ranking of tower designs. The truss deformation and the member stresses are compared to threshold values to assess the reliability of each tower under wind loading. Importance sampling is used for the reliability analysis. Both the wind pressure and the wind direction are considered as random variables in the analysis. The research results presented demonstrate the benefit of implementing MOGA optimization as an integral part of a reliability-based optimization procedure for three-dimensional trusses.
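
Pareto ranking, the core MOGA step above, is easy to state directly: a design is nondominated if no other design is at least as good in both objectives and strictly better in one. The (weight, reliability index) pairs below are made up; weight is minimized and the reliability index maximized:

```python
# Hypothetical tower designs as (weight, reliability index beta) pairs.
designs = [(10.0, 3.1), (12.0, 3.5), (9.0, 2.4), (11.0, 3.5), (9.5, 2.2)]

def dominates(a, b):
    # a dominates b: lighter-or-equal AND at-least-as-reliable,
    # strictly better in at least one of the two objectives.
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

# The first Pareto front: designs dominated by no other design.
front = [d for d in designs
         if not any(dominates(o, d) for o in designs if o != d)]
```

MOGA repeats this ranking generation after generation, so the population drifts toward the weight-reliability trade-off curve rather than toward a single scalarized optimum.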

Journal ArticleDOI
TL;DR: In this article, a formulation that allows the simultaneous distribution of non-piezoelectric and piezoceramic material in the design domain to achieve certain specified actuation movements is presented.
Abstract: Piezoelectric actuators offer significant promise in a wide range of applications. The piezoelectric actuators considered in this work essentially consist of a flexible structure actuated by piezoceramics that must generate output displacement and force at a certain specified direction and point of the domain. The design of these piezoelectric actuators is complex, and a systematic design method such as topology optimization has been successfully applied in recent years, with appropriate formulation of the optimization problem to obtain optimized designs. However, in these previous design formulations, the position of the piezoceramic is usually kept fixed in the design domain and only the flexible part of the structure is designed by distributing some non-piezoelectric material (aluminum, for example). This imposes a constraint in the position of the piezoelectric material in the optimization problem, limiting the optimality of the solution. Thus, in this work, a formulation that allows the simultaneous distribution of non-piezoelectric and piezoelectric material in the design domain to achieve certain specified actuation movements will be presented. The optimization problem is posed as the simultaneous search for an optimal topology of a flexible structure as well as the optimal position of the piezoceramic in the design domain and optimal rotation angles of piezoceramic material axes that maximize output displacements or output forces in a certain specified direction and point of the domain. The method is implemented based on the SIMP ('solid isotropic material with penalization') material model where fictitious densities are interpolated in each finite element, providing a continuum material distribution in the domain. The examples presented are limited to two-dimensional models, since most of the applications for such piezoelectric actuators are planar devices.

Journal ArticleDOI
TL;DR: An improved response surface based optimization technique is presented for two-dimensional airfoil design at transonic speed by adding the actual function value to the data set used to construct the polynomials.
Abstract: An improved response surface based optimization technique is presented for two-dimensional airfoil design at transonic speed. The method is based on an iterative scheme where least-squares-fitted quadratic polynomials of the objective function and constraints are repeatedly corrected locally, about the current minimum, by adding the actual function value to the data set used to construct the polynomials. When no further cost function reduction is achieved, the design domain upon which the optimization is initially performed is changed, preserving its initial size, by updating the center point with the position of the last minimum found. The optimization is then conducted by using the same approximations built over the initial design space, which are again iteratively corrected until convergence to a given tolerance. To construct the response surfaces, the design space is explored by using a uniform Latin hypercube, aiming at reducing the bias error, in contrast with previous techniques based on the D-optimality criterion. The geometry is modeled by using the PARSEC parameterization.
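
The iterative correction scheme above can be sketched in one dimension: fit a least-squares quadratic to the sampled points, jump to the minimum of the fit, evaluate the true objective there, append that point to the data set, and refit. The objective below is a cheap hypothetical stand-in for the transonic flow analysis:

```python
import numpy as np

def f(x):
    # Cheap stand-in for the "expensive" aerodynamic objective.
    return np.cos(x) + 0.1 * x ** 2

xs = list(np.linspace(0.5, 6.0, 5))       # initial space-filling samples
ys = [f(x) for x in xs]
for _ in range(15):
    # Least-squares quadratic fit y ~ c0 + c1*x + c2*x^2 to all samples.
    A = np.column_stack([np.ones(len(xs)), xs, np.square(xs)])
    c0, c1, c2 = np.linalg.lstsq(A, ys, rcond=None)[0]
    if c2 <= 0:                           # fit has no interior minimum
        break
    x_new = -c1 / (2.0 * c2)              # minimum of the fitted quadratic
    xs.append(x_new)
    ys.append(f(x_new))                   # correct the surface with a new point
x_best = xs[int(np.argmin(ys))]
```

Adding the true function value at each surrogate minimum is the local-correction step the abstract describes; the quadratic is pulled toward the truth precisely where the search concentrates.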

Journal ArticleDOI
TL;DR: In this paper, the sensitivity number of an element is derived using an adjoint method to address two principal design parameters, namely absorbed energy per unit volume, e1, and absorbed energy ratio, e2.
Abstract: This paper presents a method for and examples of topology optimization of energy absorption structures. The topology optimization problem is solved by using the elements as design variables. The sensitivity number of an element is derived using an adjoint method to address two principal design parameters, namely absorbed energy per unit volume, e1, and absorbed energy ratio, e2. Filter techniques are employed to smooth sensitivities in the design space and to eliminate unnecessary structural details below a certain length-scale. The bi-directional evolutionary structural optimization (BESO) technique is used to search for the optimal design in the whole design domain by gradually removing and adding material. Several examples are presented to demonstrate the capability and effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: It is shown that optimal design and control allow a permanent-magnet machine to achieve high flux-weakening and high-efficiency operation even with a simple machine structure.
Abstract: This paper proposes to apply optimal approaches to the design and control of a highly constrained electric machine. The developed approach is applied on a permanent-magnet integrated starter generator (ISG) but may be applied on any high-constrained electric machine. One of the main problems in the use of optimal approaches is the accuracy of the models used by the optimizer. In our approach, we propose to proceed in two steps. 1) Optimal design: the model is purely analytic, and some phenomena are neglected (cross saturation). Under these conditions, the electric machine design is optimal for a limited number of constraints. The design model uses a classic uncoupled d,q reluctant circuit model (with saturation taken into account). 2) Optimal control: once the machine is calculated, the design constraints are validated by a finite-element (FE) method. The FE method allows a more accurate model to be used to compute optimal currents for the control on the whole torque-speed plane. In our case, we use FE results to model the cross-saturation phenomenon. The optimizer is common to both cases and is a classic commercial sequential quadratic programming algorithm. This model is validated by experimental results based on an ISG. This paper shows that optimal design and control allow a permanent-magnet machine to achieve high flux-weakening and high-efficiency operation even with a simple machine structure.

Journal ArticleDOI
TL;DR: In this article, a convex hull approach is adopted to isolate the points corresponding to unwanted bifurcations in the design space; the approach is applied to a tube impacting a rigid wall, representing a transient dynamic problem.

Journal ArticleDOI
TL;DR: In this article, an adaptive stochastic algorithm for water distribution systems optimal design based on the heuristic cross-entropy method for combinatorial optimization is presented, which is demonstrated using two well-known benchmark examples from the water distribution system research literature for single loading gravitational systems, and an example of multiple loadings, pumping, and storage.
Abstract: The optimal design problem of a water distribution system is to find the water distribution system component characteristics (e.g. pipe diameters, pump heads and maximum power, reservoir storage volumes, etc.) which minimize the system's capital and operational costs such that the system hydraulic laws are maintained (i.e. Kirchhoff's first and second laws), and constraints on quantities and pressures at the consumer nodes are fulfilled. In this study, an adaptive stochastic algorithm for water distribution systems optimal design based on the heuristic cross-entropy method for combinatorial optimization is presented. The algorithm is demonstrated using two well-known benchmark examples from the water distribution systems research literature for single loading gravitational systems, and an example of multiple loadings, pumping, and storage. The results show the cross-entropy dominance over previously published methods.
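
The cross-entropy (CE) method used above can be sketched compactly: sample candidate designs from a probability table, keep the elite fraction by cost, and update the table toward the elite samples. The toy cost below (distance from a made-up "best" diameter pattern) replaces the hydraulic simulation, which is far beyond this sketch; the sample sizes and smoothing factor are illustrative assumptions:

```python
import random

random.seed(0)
n_pipes, n_choices = 8, 4
target = [1, 3, 0, 2, 2, 1, 3, 0]        # hypothetical best diameter indices

def cost(design):
    # Toy surrogate for the hydraulic/cost evaluation of a design.
    return sum(abs(d - t) for d, t in zip(design, target))

# p[i][j]: probability of choosing diameter j for pipe i.
p = [[1.0 / n_choices] * n_choices for _ in range(n_pipes)]
for _ in range(40):
    samples = [[random.choices(range(n_choices), weights=w)[0] for w in p]
               for _ in range(200)]
    samples.sort(key=cost)
    elite = samples[:20]                  # best 10% of the sampled designs
    for i in range(n_pipes):
        counts = [0] * n_choices
        for s in elite:
            counts[s[i]] += 1
        # Smoothed update: move p toward the elite empirical distribution.
        p[i] = [0.7 * counts[j] / len(elite) + 0.3 * p[i][j]
                for j in range(n_choices)]
best = samples[0]                         # lowest-cost design in the last round
```

The smoothing term keeps some exploration alive, which is the adaptive element the abstract highlights.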

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a constrained optimal adaptive design for a fully sequential randomized clinical trial with k arms and n patients, where the objective is to maximize the expected overall utility in a Bayesian decision-analytic approach, where utility is the sum over the utilities for individual patients over a 'patient horizon' N.
Abstract: Optimal decision-analytic designs are deterministic. Such designs are appropriately criticized in the context of clinical trials because they are subject to assignment bias. On the other hand, balanced randomized designs may assign an excessive number of patients to a treatment arm that is performing relatively poorly. We propose a compromise between these two extremes, one that achieves some of the good characteristics of both. We introduce a constrained optimal adaptive design for a fully sequential randomized clinical trial with k arms and n patients. An r-design is one for which, at each allocation, each arm has probability at least r of being chosen, 0 ≤ r ≤ 1/k. An optimal design among all r-designs is called r-optimal. An r1-design is also an r2-design if r1 ≥ r2. A design without constraint is the special case r = 0 and a balanced randomized design is the special case r = 1/k. The optimization criterion is to maximize the expected overall utility in a Bayesian decision-analytic approach, where utility is the sum over the utilities for individual patients over a 'patient horizon' N. We prove analytically that there exists an r-optimal design such that each patient is assigned to a particular one of the arms with probability 1 - (k - 1)r, and to the remaining arms with probability r. We also show that the balanced design is asymptotically r-optimal for any given r, 0 ≤ r < 1/k, as N/n → ∞. This implies that every r-optimal design is asymptotically optimal without constraint. Numerical computations using backward induction for k = 2 arms show that, in general, this asymptotic optimality feature for r-optimal designs can be accomplished with moderate trial size n if the patient horizon N is large relative to n. We also show that, in a trial with an r-optimal design, r < 1/2, fewer patients are assigned to an inferior arm than when following a balanced design, even for r-optimal designs having the same statistical power as a balanced design.
We discuss extensions to various clinical trial settings.
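
The allocation rule proved above (the preferred arm with probability 1 - (k - 1)r, every other arm with probability r) is simple to implement. In the paper the preferred arm comes from backward induction on the Bayesian utility; the estimated-success-rate rule and the two-arm trial below are simplifying assumptions for illustration:

```python
import random

random.seed(2)

def allocate(successes, trials, r):
    # r-design rule: preferred arm with probability 1 - (k - 1) * r,
    # every other arm with probability r (requires 0 <= r <= 1/k).
    k = len(trials)
    rates = [s / t if t else 0.5 for s, t in zip(successes, trials)]
    preferred = rates.index(max(rates))
    probs = [r] * k
    probs[preferred] = 1.0 - (k - 1) * r
    return random.choices(range(k), weights=probs)[0]

# Simulated two-arm trial, true success rates 0.3 and 0.6, with r = 0.2.
true_p, succ, n = [0.3, 0.6], [0, 0], [0, 0]
for _ in range(500):
    arm = allocate(succ, n, r=0.2)
    n[arm] += 1
    succ[arm] += random.random() < true_p[arm]
```

Because every arm keeps probability at least r, the inferior arm continues to be sampled and its estimate cannot lock the trial in, which is the protection against assignment bias the abstract motivates.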

Journal ArticleDOI
TL;DR: In this article, the optimal experimental design of dynamic experiments is formulated as a general dynamic optimization problem where the objective is to find those experimental conditions which result in maximum information content, as measured by the Fisher information matrix.

Posted Content
TL;DR: Compared with a traditional balanced design for this trial, it is shown that the optimal design is substantially more efficient, which implies either a gain in information, or essential savings in sample size.
Abstract: We consider a dose-finding trial in phase IIB of drug development. For choosing an appropriate design for this trial the specification of two points is critical: an appropriate model for describing the dose-effect relationship and the specification of the aims of the trial (objectives), which will be the focus in the present paper. For many practical situations it is essential to have a robust trial objective that has little risk of changing during the complete trial due to external information. An important and realistic objective of a dose-finding trial is to obtain precise information about the interesting part of the dose-effect curve. We reflect this goal in a statistical optimality criterion and derive efficient designs using optimal design theory. In particular we determine non-adaptive Bayesian optimal designs, i.e. designs which are not changed by information obtained from an interim analysis. Compared with a traditional balanced design for this trial, it is shown that the optimal design is substantially more efficient. This implies either a gain in information or essential savings in sample size. Further, we investigate an adaptive Bayesian optimal design that uses two different optimal designs before and after an interim analysis, and we compare the adaptive with the non-adaptive Bayesian optimal design. The basic concept is illustrated using a modification of a recent AstraZeneca trial.