
Showing papers on "Optimal design published in 2003"


Journal ArticleDOI
TL;DR: The designs are evaluated according to their ability to predict the true marginal willingness to pay under different specifications of the utility function in Monte Carlo simulations; the results suggest that the designs produce unbiased estimates, but orthogonal designs result in a larger mean square error than D-optimal designs.
Abstract: This paper discusses different design techniques for stated preference surveys in health economic applications, i.e. how to combine the attribute levels into alternatives and choice sets for choice experiments. Design is a vital issue in choice experiments since the combination of alternatives in the choice sets determines the degree of precision obtainable from the estimates and welfare measures. We compare orthogonal, cyclical and D-optimal designs, where the latter allows expectations about the true parameters to be included when creating the design. Moreover, we discuss how to obtain prior information on the parameters and how to conduct a sequential design procedure during the actual experiment in order to improve the precision of the estimates. The designs are evaluated according to their ability to predict the true marginal willingness to pay under different specifications of the utility function in Monte Carlo simulations. Our results suggest that the designs produce unbiased estimates, but orthogonal designs result in a larger mean square error than D-optimal designs. This result is expected when correct priors on the parameters are used in the D-optimal designs. However, the simulations show that the welfare measures are not very sensitive when the choice sets are generated from a D-optimal design with biased priors.
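As a rough illustration of the D-optimality criterion these design comparisons rely on, the sketch below scores a multinomial logit choice design under an assumed prior parameter vector. The attribute matrices and prior values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def mnl_information(choice_sets, beta):
    """Fisher information of a multinomial logit choice design.
    choice_sets: list of (alternatives x attributes) level matrices."""
    k = len(beta)
    info = np.zeros((k, k))
    for X in choice_sets:
        u = X @ beta
        p = np.exp(u - u.max())
        p /= p.sum()
        # per-choice-set contribution: X' (diag(p) - p p') X
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def d_error(choice_sets, beta):
    """D-error of the design; a smaller value means a more
    efficient design under the assumed prior beta."""
    k = len(beta)
    return np.linalg.det(mnl_information(choice_sets, beta)) ** (-1.0 / k)
```

A design with a lower D-error under the prior should, as the simulations summarized above suggest, yield more precise willingness-to-pay estimates.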

362 citations


Book
01 Aug 2003

276 citations


Proceedings ArticleDOI
01 Jan 2003
TL;DR: A new algorithm for constructing optimal experimental designs is developed and is found to be much more efficient in terms of the computation time, the number of exchanges needed for generating new designs, and the achieved optimality criteria.
Abstract: The metamodeling approach has been widely used due to the high computational cost of high-fidelity simulations in engineering design. The accuracy of metamodels is directly related to the experimental designs used. Optimal experimental designs have been shown to have good “space filling” and projective properties. However, the high cost of constructing them limits their use. In this paper, a new algorithm for constructing optimal experimental designs is developed. There are two major developments involved in this work. One is an efficient global optimal search algorithm, named the enhanced stochastic evolutionary (ESE) algorithm. The other is a set of efficient algorithms for evaluating optimality criteria. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time, the number of exchanges needed for generating new designs, and the achieved optimality criteria. The algorithm is also flexible enough to construct various classes of optimal designs that retain certain structural properties. Copyright © 2003 by ASME
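The paper's ESE algorithm is considerably more sophisticated, but the flavor of exchange-based search for space-filling designs can be sketched with a plain maximin-distance criterion and random within-column row swaps. Everything below is a simplified stand-in, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_lhd(n, k):
    """Random n-point Latin hypercube design in k dimensions."""
    return np.column_stack([rng.permutation(n) for _ in range(k)])

def min_dist(D):
    """Smallest pairwise Euclidean distance (the maximin criterion)."""
    d = np.inf
    n = len(D)
    for i in range(n):
        for j in range(i + 1, n):
            d = min(d, np.linalg.norm(D[i] - D[j]))
    return d

def exchange_search(D, iters=500):
    """Greedy element exchanges in a random column; a swap of two
    entries within a column preserves the Latin hypercube property.
    Keeps swaps that do not worsen the maximin criterion."""
    best = min_dist(D)
    n, k = D.shape
    for _ in range(iters):
        c = rng.integers(k)
        i, j = rng.choice(n, size=2, replace=False)
        D[i, c], D[j, c] = D[j, c], D[i, c]
        cand = min_dist(D)
        if cand >= best:
            best = cand
        else:
            D[i, c], D[j, c] = D[j, c], D[i, c]  # revert the swap
    return D, best
```

The ESE algorithm replaces this naive accept/revert rule with a controlled stochastic acceptance scheme and much cheaper incremental criterion updates, which is where its reported speedups come from.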

220 citations


Journal ArticleDOI
Junil Ryu, Do Hyung Choi, Sung Jin Kim
TL;DR: In this paper, a three-dimensional analysis procedure for the thermal performance of a manifold microchannel heat sink has been developed and applied to optimize the heat-sink design, and the optimal dimensions and corresponding thermal resistance have a power-law dependence on the pumping power.

182 citations


Journal ArticleDOI
TL;DR: Some properties of estimators of expected information gains based on Markov chain Monte Carlo (MCMC) and Laplacian approximations are discussed and some issues that arise when applying these methods to the problem of experimental design in the (technically nontrivial) random fatigue-limit model of Pascual and Meeker are investigated.
Abstract: Expected gain in Shannon information is commonly suggested as a Bayesian design evaluation criterion. Because estimating expected information gains is computationally expensive, examples in which they have been successfully used in identifying Bayes optimal designs are both few and typically quite simplistic. This article discusses in general some properties of estimators of expected information gains based on Markov chain Monte Carlo (MCMC) and Laplacian approximations. We then investigate some issues that arise when applying these methods to the problem of experimental design in the (technically nontrivial) random fatigue-limit model of Pascual and Meeker. An example comparing follow-up designs for a laminate panel study is provided.
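The expected-information-gain criterion discussed here can be made concrete with a nested Monte Carlo estimator. The sketch below uses a toy linear-Gaussian model (y = θx + noise, not the random fatigue-limit model of the article), so it illustrates only the computational structure; all model and sample-size choices are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_information_gain(design_x, n_outer=2000, n_inner=2000,
                              prior_sd=1.0, noise_sd=0.5):
    """Nested Monte Carlo estimate of the expected gain in Shannon
    information for the toy model y = theta * x + noise, with
    theta ~ N(0, prior_sd^2).  EIG = E[log p(y|theta) - log p(y)]."""
    theta = rng.normal(0.0, prior_sd, n_outer)
    y = theta * design_x + rng.normal(0.0, noise_sd, n_outer)
    # log-likelihood at the jointly sampled (theta, y) pairs; the
    # Gaussian normalizing constants cancel against the marginal term
    log_lik = -0.5 * ((y - theta * design_x) / noise_sd) ** 2
    # marginal likelihood via an inner Monte Carlo average over the prior
    theta_in = rng.normal(0.0, prior_sd, n_inner)
    resid = y[:, None] - theta_in[None, :] * design_x
    log_marg = np.log(np.exp(-0.5 * (resid / noise_sd) ** 2).mean(axis=1))
    return (log_lik - log_marg).mean()
```

The inner average is exactly the expensive step the article analyzes: its bias and variance, not the outer loop, dominate the cost of comparing candidate designs.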

151 citations


Journal ArticleDOI
01 Dec 2003
TL;DR: It is demonstrated that experimental design significantly improves the parameter estimation accuracy and also reveals difficulties in parameter estimation due to robustness.
Abstract: To obtain a systems-level understanding of a biological system, quantitative dynamic experiments have to be conducted, from which the system structure and the parameters can be deduced. Since biological systems have to cope with different environmental conditions, certain properties are often robust with respect to variations in some of the parameters. Hence, it is important to apply optimal experimental design considerations in advance of the experiments to improve the information content of the measurements. Using the MAP-Kinase pathway as an example, the authors present a simulation study investigating the application of different optimality criteria. It is demonstrated that experimental design significantly improves the parameter estimation accuracy and also reveals difficulties in parameter estimation due to robustness.

148 citations


Journal ArticleDOI
TL;DR: In this article, the optimal design of yielding metallic dampers and friction dampers together was investigated for seismic response control and protection of building structures, and the genetic algorithm was used to obtain the globally optimal solution.
Abstract: This paper deals with the optimal design of yielding metallic dampers and friction dampers together as they both have similar design characteristics and parameters. Ample tests and analytical studies have confirmed the effectiveness of these energy dissipation devices for seismic response control and protection of building structures. Since these devices are strongly non-linear with several parameters controlling their behaviour, their current design procedures are usually cumbersome and not optimal. In this paper, a methodology is presented to determine the optimal design parameters for the devices installed at different locations in a building for a desired performance objective. For a yielding metallic damper, the design parameters of interest are the device yield level, device stiffness, and brace stiffness. For a friction device, the parameters are the slip load level and brace stiffness. Since the devices and the structures installed with these devices behave in a highly non-linear manner, and thus must be evaluated by a step-by-step time history approach, the genetic algorithm is used to obtain the globally optimal solution. This optimal search approach allows an unusual flexibility in the choice of performance objectives. For demonstration purposes, several sets of numerical examples of optimal damper designs with different performance objectives are presented. Copyright © 2003 John Wiley & Sons, Ltd.

136 citations


Journal ArticleDOI
TL;DR: A Bayesian sequential optimal design scheme comprising a pilot study on a small number of patients followed by the allocation of patients to doses one at a time is developed and its properties explored by simulation.
Abstract: A broad approach to the design of Phase I clinical trials for the efficient estimation of the maximum tolerated dose is presented. The method is rooted in formal optimal design theory and involves the construction of constrained Bayesian c- and D-optimal designs. The imposed constraint incorporates the optimal design points and their weights and ensures that the probability that an administered dose exceeds the maximum acceptable dose is low. Results relating to these constrained designs for log doses on the real line are described and the associated equivalence theorem is given. The ideas are extended to more practical situations, specifically to those involving discrete doses. In particular, a Bayesian sequential optimal design scheme comprising a pilot study on a small number of patients followed by the allocation of patients to doses one at a time is developed and its properties explored by simulation.

124 citations


Journal ArticleDOI
D. Aklog, Y. Hosoi
TL;DR: The results obtained show that the new model preserves loops and results in a system with better reliability, and that, if appropriate minimum allowable pipe sizes are specified in the least-cost design, a required reliability can be attained at a reasonably low cost.
Abstract: Maintaining network loops, and hence attaining acceptable system reliability, has been a challenge in the optimal design of water distribution networks. Aimed at a possible solution to the problem, this paper has two objectives: to introduce a new reliability-based optimal design formulation and a model, and to examine the effect of specifying minimum allowable pipe sizes during least-cost designs on system reliability. System reliability is estimated using the minimum cut-set method, but instead of using the mechanical failure probabilities of pipes, weighted failure probabilities are calculated by considering the ratio of the actual supply to demand. One of the salient features of this study, and of the new reliability-based design model in particular, is that a pressure-driven network simulation model is used to determine the actual supply at each demand point when a component fails. A simplified two-loop network is used to illustrate the performance of the new model and to study the effect of specifying minimum allowable pipe sizes. The results obtained show that the new model preserves loops and results in a system with better reliability, and that, if appropriate minimum allowable pipe sizes are specified in the least-cost design, a required reliability can be attained at a reasonably low cost.

122 citations


Journal ArticleDOI
TL;DR: The development of the expression of the Fisher information matrix in nonlinear mixed effects models for design evaluation is extended, and two methods using a Taylor expansion of the model around the expectation of the random effects or around a simulated value are proposed and compared.
Abstract: We extend the development of the expression of the Fisher information matrix in nonlinear mixed effects models for design evaluation. We consider the dependence of the marginal variance of the observations on the mean parameters and assume a heteroscedastic variance error model. Complex models with inter-occasion variability and parameters quantifying the influence of covariates are introduced. Two methods, using a Taylor expansion of the model around the expectation of the random effects or around a simulated value (then using Monte Carlo integration), are proposed and compared. The relevance of the resulting standard errors is investigated in a simulation study with NONMEM.

109 citations


Journal ArticleDOI
TL;DR: In this paper, minimax design of infinite-impulse-response (IIR) filters with prescribed stability margin is formulated as a conic quadratic programming (CQP) problem and extended to quadrantally symmetric two-dimensional digital filters.
Abstract: In this paper, minimax design of infinite-impulse-response (IIR) filters with prescribed stability margin is formulated as a conic quadratic programming (CQP) problem. CQP is known as a class of well-structured convex programming problems for which efficient interior-point solvers are available. By considering factorized denominators, the proposed formulation incorporates a set of linear constraints that are sufficient and near necessary for the IIR filter to have a prescribed stability margin. A second-order cone condition on the magnitude of each update that ensures the validity of a key linear approximation used in the design is also included in the formulation and eliminates a line-search step. Collectively, these features lead to improved designs relative to several established methods. The paper then moves on to extend the proposed design methodology to quadrantally symmetric two-dimensional (2-D) digital filters. Simulation results for both one-dimensional (1-D) and 2-D cases are presented to illustrate the new design algorithms and demonstrate their performance in comparison with several existing methods.

Journal ArticleDOI
TL;DR: It is shown that two-level factorial and fractional factorial designs are D-optimal for estimating first-order response surface models for specific numbers and sizes of whole plots.
Abstract: The design of split-plot experiments has received considerable attention during the last few years. The goal of this article is to provide an efficient algorithm to compute D-optimal split-plot designs with given numbers of whole plots and given whole-plot sizes. The algorithm is evaluated and applied to a protein extraction experiment. In addition, it is shown that two-level factorial and fractional factorial designs are D-optimal for estimating first-order response surface models for specific numbers and sizes of whole plots.
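The article's algorithm handles the split-plot error structure (whole plots, whole-plot sizes); the basic exchange idea it builds on can be sketched for a completely randomized design with a textbook Fedorov-style swap over a candidate set. The candidate grid and model below are invented for illustration and ignore the whole-plot covariance.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_det_info(X):
    """log det(X'X), or -inf for a singular information matrix."""
    sign, ld = np.linalg.slogdet(X.T @ X)
    return ld if sign > 0 else -np.inf

def fedorov_exchange(candidates, n, iters=50):
    """Basic Fedorov exchange for an exact D-optimal design: swap a
    design row for a candidate row whenever the swap increases
    log det(X'X).  (The split-plot case adds a whole-plot error
    structure that this sketch deliberately ignores.)"""
    idx = list(rng.choice(len(candidates), size=n, replace=False))
    for _ in range(iters):
        improved = False
        for pos in range(n):
            base = log_det_info(candidates[idx])
            for c in range(len(candidates)):
                trial = idx.copy()
                trial[pos] = c
                if log_det_info(candidates[trial]) > base + 1e-12:
                    idx = trial
                    improved = True
                    break
        if not improved:
            break  # local optimum: no single swap helps
    return candidates[idx]
```

For a first-order model on a two-factor grid, this kind of exchange drives the design points toward the corner runs, consistent with the article's observation that two-level (fractional) factorials are D-optimal for first-order response surface models.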

Journal ArticleDOI
TL;DR: In this article, the choice sets in the D-optimal design for a choice experiment for testing main effects and two-factor interactions were established, when there are k attributes, each with two levels, for choice set size m.
Abstract: In this article we establish the choice sets in the D-optimal design for a choice experiment for testing main effects and for testing main effects and two-factor interactions, when there are k attributes, each with two levels, for choice set size m. We also give a method to construct optimal and near-optimal designs with small numbers of choice sets.

Journal ArticleDOI
TL;DR: The use of optimal orthogonal array latin hypercube designs is proposed, and the designs found are in general agreement with existing optimal designs reported elsewhere.
Abstract: The use of optimal orthogonal array latin hypercube designs is proposed. Orthogonal arrays were proposed for constructing latin hypercube designs by Tang (1993). Such designs generally have better space filling properties than random latin hypercube designs. Even so, these designs do not necessarily fill the space particularly well. As a result, we consider orthogonal-array-based latin hypercube designs that try to achieve optimality in some sense. Optimization is performed by adapting strategies found in Morris & Mitchell (1995) and Ye et al. (2000). The strategies here search only orthogonal-array-based latin hypercube designs and, as a result, optimal designs are found in a more efficient fashion. The designs found are in general agreement with existing optimal designs reported elsewhere.

Journal Article
TL;DR: A Bayesian method based on the idea of model discrimination that uncovers the active factors is developed for designing a follow-up experiment to resolve ambiguity in fractional experiments.
Abstract: Fractional factorial, Plackett-Burman, and other multifactor designs are often effective in practice due to factor sparsity. That is, just a few of the many factors studied will have major effects. For those active factors, these designs can have high resolution. We have previously developed a Bayesian method based on the idea of model discrimination that uncovers the active factors. Sometimes, the results of a fractional experiment are ambiguous due to confounding among the possible effects, and more than one model may be consistent with the data. Within the Bayesian construct, we have developed a method for designing a follow-up experiment to resolve this ambiguity. The idea is to choose runs that allow maximum discrimination among the plausible models. This method is more general than methods that algebraically decouple aliased interactions and more appropriate than optimal design methods that require specification of a single model. The method is illustrated through examples of fractional experiments.

Journal ArticleDOI
TL;DR: Algorithmic details for the design of basic and multistage FRM filters are presented to show that the proposed method offers a unified design framework for a variety of FRM filters.
Abstract: Since Lim's paper (see IEEE Trans. Circuits Syst., vol. 33, pp. 357-364, Apr. 1986) on the frequency-response-masking (FRM) technique for the design of finite-impulse-response digital filters with very small transition widths, the analysis and design of FRM filters have been a subject of study. In this paper, a new optimization technique for the design of various FRM filters is proposed. Central to the new design method is a sequence of linear updates for the design variables, with each update carried out by semidefinite programming. Algorithmic details for the design of basic and multistage FRM filters are presented to show that the proposed method offers a unified design framework for a variety of FRM filters. Design simulations are included to illustrate the proposed algorithms and to evaluate the design performance in comparison with that of several existing methods.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the two-stage formulation for design under uncertainty and derive new formulations for the multiperiod and feasibility problems, and also introduce a KS constraint aggregation function and derive a single, smooth nonlinear program that approximates the feasibility problem.
Abstract: Optimal design under unknown information is a key task in process systems engineering. This study considers formulations that incorporate two types of unknown input parameters: uncertain model parameters and variable process parameters. In the former case, a process must be designed that is feasible over the entire domain of uncertain parameters, while in the latter case, control variables can be adjusted during process operation to compensate for variable process parameters. To address this problem we extend the two-stage formulation for design under uncertainty and derive new formulations for the multiperiod and feasibility problems. Moreover, to simplify the feasibility problem in the two-stage algorithm, we also introduce a KS constraint aggregation function and derive a single, smooth nonlinear program that approximates the feasibility problem. Three case studies are presented to demonstrate the proposed approach.

Journal ArticleDOI
TL;DR: For the Michaelis-Menten model, the best two point designs can be found explicitly, and a characterization is given when these designs are optimal within the class of all designs as mentioned in this paper.
Abstract: For the Michaelis–Menten model, we determine designs that maximize the minimum of the D-efficiencies over a certain interval for the nonlinear parameter. The best two point designs can be found explicitly, and a characterization is given when these designs are optimal within the class of all designs. In most cases of practical interest, the determined designs are highly efficient and robust with respect to misspecification of the nonlinear parameter. The results are illustrated and applied in an example of a hormone receptor assay.
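The two-point result mentioned here admits a compact numerical check. Assuming the response v = Vmax·x/(Km + x) on a design interval [0, xmax], the locally D-optimal equally weighted two-point design is commonly stated as placing one point at xmax and the other at xmax·Km/(xmax + 2·Km); the sketch below verifies that closed form against a grid search over the lower support point.

```python
import numpy as np

def mm_gradient(x, vmax, km):
    """Gradient of v = vmax*x/(km+x) with respect to (vmax, km)."""
    return np.array([x / (km + x), -vmax * x / (km + x) ** 2])

def d_criterion(x1, x2, vmax, km):
    """|det| of the 2x2 design matrix of gradients for an equally
    weighted two-point design at x1 and x2; maximizing this is
    equivalent to local D-optimality for the two-parameter model."""
    M = np.column_stack([mm_gradient(x1, vmax, km),
                         mm_gradient(x2, vmax, km)])
    return abs(np.linalg.det(M))

def locally_d_optimal(km, xmax):
    """Closed-form locally D-optimal support points: the upper point
    sits at xmax, the lower at xmax*km/(xmax + 2*km)."""
    return xmax * km / (xmax + 2.0 * km), xmax
```

As the abstract notes, this design depends on the nonlinear parameter Km, which is exactly why the article studies efficiency under misspecified Km.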

Journal ArticleDOI
TL;DR: A new CAD optimization approach based on Monte Carlo search methods is introduced which allows the design space to be rapidly explored in an automated fashion and the best designs and design approaches to be identified.
Abstract: This paper investigates computer-aided optimization of DC/DC converters, with a focus on converters for dual-voltage automotive electrical systems. A new CAD optimization approach based on Monte Carlo search methods is introduced which allows the design space to be rapidly explored in an automated fashion and the best designs and design approaches to be identified. The optimization approach also allows the effects of variations in design specifications on the cost, weight, and volume of an optimized converter to be readily determined. A prototype converter designed using this optimization procedure is evaluated and compared to a converter designed by conventional means.

Journal ArticleDOI
TL;DR: In this paper, a more fundamental criterion is introduced which, in principle, can be used to design any nonlinear problem and is entropy-based and depends on the calculation of marginal probability distributions.
Abstract: When designing an experiment, the aim is usually to find the design which minimizes expected post-experimental uncertainties on the model parameters. Classical methods for experimental design are shown to fail in nonlinear problems because they incorporate linearized design criteria. A more fundamental criterion is introduced which, in principle, can be used to design any nonlinear problem. The criterion is entropy-based and depends on the calculation of marginal probability distributions. In turn, this requires the numerical calculation of integrals for which we use Monte Carlo sampling. The choice of discretization in the parameter/data space strongly influences the number of samples required. Thus, the only practical limitation for this technique appears to be computational power. A synthetic experiment with an oscillatory, highly nonlinear parameter-data relationship and a simple seismic amplitude versus offset (AVO) experiment are used to demonstrate the method. Interestingly, in our AVO example, although overly coarse discretizations lead to incorrect evaluation of the entropy, the optimal design remains unchanged.

Journal ArticleDOI
TL;DR: In this paper, the concept of design measures introduced by Pázman & Müller (1998) is used and further developed to construct a simple, quick and elegant design algorithm for a general correlation structure, supported by an interpretation in terms of norms; examples demonstrate that the results are useful for generating exact designs by sampling from the obtained design measures.
Abstract: In this paper we consider optimal design of experiments in the case of correlated observations. We use and further develop the concept of design measures introduced by Pázman & Müller (1998) for the construction of a simple, quick and elegant design algorithm. We support the construction of this algorithm for a general correlation structure by an interpretation in terms of norms. Examples demonstrate that our results are useful for generating exact designs by sampling from the obtained design measures. Most of the literature on experimental design operates under the assumption of uncorrelated errors and by exploiting the concept of a design measure introduced by Kiefer (1959). In this setting an optimal design measure is usually computed by an iterative algorithm, and then an exact design with independent replications approximately proportional to that measure is employed. In the correlated case, when replications are not allowed, the implementation of design measures is not so straightforward. Recently, however, by adding virtual design-dependent noise to the process, Pázman & Müller (1998) introduced a way of using this popular concept, albeit requiring a very different interpretation. Furthermore, Pázman & Müller (2000) showed that solving minimisation problems in terms of these measures corresponds to the computation of exact designs. These measures serve as building blocks for the methods presented in this paper, which provides a general numerical tool for screening designs. By a different type of smoothing of some important nondifferentiable terms in the algorithm, and without imposing restrictions on the number of support points, we obtain a design measure. The relative magnitude of this measure for each support point reflects the importance of its inclusion in any exact design. This algorithm can also be interpreted in terms of certain 'information' norms.

Journal ArticleDOI
TL;DR: In this paper, locally D- and c-optimal designs for exponential decay models with one to three parameters were derived analytically, the invariance of optimal designs to reparametrization was discussed, and examples of Bayesian optimal designs were also provided.

Journal ArticleDOI
TL;DR: In this paper, the authors considered optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment and derived power functions for the exact tests for the main effect of treatment.
Abstract: This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the cost ratio between the treatment and control regardless of whether the randomization of sampling units occurs at levels 1, 2, or 3. Power functions for the exact tests for the main effect of treatment are derived for prototypical multilevel designs with unequal sample sizes in the treatment and control condition.
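The cost-ratio dependence noted in the abstract is the classic square-root allocation rule: under equal outcome variances in the two arms, the cost-minimizing treatment-to-control ratio is n_T/n_C = sqrt(c_C/c_T). A minimal sketch follows; the budget-splitting helper is illustrative, not a function from the article.

```python
import math

def optimal_allocation_ratio(cost_treatment, cost_control):
    """Cost-optimal treatment:control sample-size ratio under equal
    outcome variances: the square-root rule n_T/n_C = sqrt(c_C/c_T)."""
    return math.sqrt(cost_control / cost_treatment)

def allocate(total_budget, cost_treatment, cost_control):
    """Split a budget between arms using the square-root rule;
    returns (n_treatment, n_control) as real numbers -- round and
    re-check power in practice."""
    r = optimal_allocation_ratio(cost_treatment, cost_control)
    # budget constraint: n_c * (cost_control + r * cost_treatment) = total
    n_c = total_budget / (cost_control + r * cost_treatment)
    return r * n_c, n_c
```

For example, when a treatment unit costs four times a control unit, the rule assigns half as many treatment units as control units, which is what makes unequal allocation cheaper than a balanced design at the same power.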

Journal ArticleDOI
TL;DR: A topology optimization based approach is proposed to study the optimal configuration of stiffeners for interior sound reduction; since the design target is the reduction of low-frequency noise, a coupled acoustic-structural conservative system without damping effects is considered.
Abstract: A topology optimization based approach is proposed to study the optimal configuration of stiffeners for interior sound reduction. Since our design target is the reduction of low-frequency noise, a coupled acoustic-structural conservative system without damping effects is considered. The modal analysis method is used to evaluate the interior sound level for this coupled system. To formulate the topology optimization problem, a recently introduced Microstructure-based Design Domain Method (MDDM) is employed. Using the MDDM, the optimal stiffener configuration problem is treated as a material distribution problem and sensitivity analysis of the coupled system is derived analytically. The norm of acoustic excitation is used as the indicator of the interior sound level. The optimal stiffener design is obtained by solving this topology optimization problem using a sequential convex approximation method. Examples of an acoustic box under a single frequency excitation and under a band of low-frequency excitations are presented and discussed. DOI: 10.1115/1.1569512

Journal ArticleDOI
TL;DR: It is shown that employing local search during evolution of the genetic algorithm, a memetic algorithm, yields the best network designs and does so at a reasonable computational cost.
Abstract: In many computer communications network design problems, such as those faced by hospitals, universities, research centers, and water distribution systems, the topology is fixed because of geographical and physical constraints or the existence of an existing system. When the topology is known, a reasonable approach to design is to select components among discrete alternatives for links and nodes to maximize reliability subject to cost. This problem is NP-hard with the added complication of a very computationally intensive objective function. This paper compares the performance of three classic metaheuristic procedures for solving large and realistic versions of the problem: hillclimbing, simulated annealing and genetic algorithms. Three alterations that use local search to seed the search or improve solutions during each iteration are also compared. It is shown that employing local search during evolution of the genetic algorithm, a memetic algorithm, yields the best network designs and does so at a reasonable computational cost. Hillclimbing performs well as a quick search for good designs, but cannot identify the most superior designs even when computational effort is equal to the metaheuristics.
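The memetic idea (a genetic algorithm whose offspring are polished by local search) can be sketched on a toy OneMax objective; the network-reliability objective in the paper is far more expensive to evaluate, which is exactly why the choice of metaheuristic matters there. Everything below is an illustrative stand-in, not the authors' implementation.

```python
import random

random.seed(3)

def memetic_optimize(fitness, n_bits, pop_size=20, gens=40):
    """Toy memetic algorithm over bitstrings: truncation selection,
    one-point crossover, occasional mutation, and one pass of
    bit-flip hill-climbing applied to every offspring."""
    def local_search(ind):
        best, bf = ind, fitness(ind)
        for i in range(n_bits):
            cand = best[:i] + (1 - best[i],) + best[i + 1:]
            cf = fitness(cand)
            if cf > bf:
                best, bf = cand, cf
        return best

    pop = [tuple(random.randint(0, 1) for _ in range(n_bits))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]      # one-point crossover
            if random.random() < 0.2:      # mutation
                i = random.randrange(n_bits)
                child = child[:i] + (1 - child[i],) + child[i + 1:]
            children.append(local_search(child))  # the "memetic" step
        pop = parents + children
    return max(pop, key=fitness)
```

With an expensive fitness function such as network reliability, the local-search pass is the dominant cost, so in practice it is applied sparingly; the paper's finding is that even so, the hybrid finds better designs than the GA, simulated annealing, or hillclimbing alone.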

Journal ArticleDOI
TL;DR: This study suggests optimal design techniques may be useful, even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models and when the implementation of the optimally designed study deviated from the nominal design.
Abstract: Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available. However, there are few publications that have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability, and consisted of a single design with three sampling windows (0–30 min, 1.5–5 hr and 11–12 hr post-dose) for all patients. The empirical design consisted of three sample time windows per patient from a total of nine windows that collectively represented the entire dose interval. Each patient was assigned to have one blood sample taken from three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients who were currently receiving enoxaparin therapy were recruited into the study. Patients were randomly assigned to either the optimal or the empirical sampling design, stratified for body mass index. The exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the “optimal” design derived model was superior to the empirical design model in terms of precision and was similar to the model developed from the full data set. This study suggests optimal design techniques may be useful, even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models, and when the implementation of the optimally designed study deviated from the nominal design.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new method to construct a design that remains unbiased even when the statistical model is misspecified, where the distribution of the independent variable is appropriately chosen from among the continuous designs so as to decrease the integrated mean square error (IMSE) of the fitted values.

Journal ArticleDOI
TL;DR: In this article, the authors introduce an appropriate model and derive optimal designs in the presence of interactions when all attributes have the same number of levels, for paired comparisons in which either full or partial profiles of the alternatives are presented.
Abstract: In many fields of applications paired comparisons are used in which either full or partial profiles of the alternatives are presented. For this situation we introduce an appropriate model and derive optimal designs in the presence of interactions when all attributes have the same number of levels.

Proceedings ArticleDOI
25 May 2003
TL;DR: A new optimization technique for the design of FRM filters is proposed; central to the new design method is a sequence of linear updates for the design variables, with each update carried out by second-order cone programming.
Abstract: Since Lim's 1986 paper on the frequency-response masking (FRM) technique for the design of FIR digital filters with very small transition widths, the analysis and design of FRM filters have been a subject of study. In this paper, a new optimization technique for the design of FRM filters is proposed. Central to the new design method is a sequence of linear updates for the design variables, with each update carried out by second-order cone programming. Design simulations are presented to illustrate the proposed algorithms and to evaluate the design performance.

Journal ArticleDOI
TL;DR: Methods to produce optimal designs for multi-channel fiber Bragg gratings with identical or close to identical channel-to-channel spectral characteristics are discussed and shown to yield generally superior results for small to moderate numbers of channels.
Abstract: Methods to produce optimal designs for multi-channel fiber Bragg gratings (FBGs) with identical or close to identical channel-to-channel spectral characteristics are discussed. The proposed approach consists of three distinct steps. The first two steps (preliminary semi-analytic minimization and subsequent fine-tuning) do not depend on the grating design details, but on the number of channels only, and can be readily applied to similar problems in other fields, e.g., in radio-physics and coding theory. The third step (spectral characteristic quality improvement) is FBG-specific. A comparison with other known optimization methods shows that the proposed approach yields generally superior results for small to moderate numbers of channels (N < 60).