
Showing papers on "Optimal design published in 1996"


Journal ArticleDOI
TL;DR: The perimeter method allows the designer to control the number of holes in the optimal design and to establish their characteristic length scale, eliminating the need for relaxation and thereby circumventing many of the complexities and restrictions of other approaches to topology design.
Abstract: This paper introduces a method for variable-topology shape optimization of elastic structures called the perimeter method. An upper-bound constraint on the perimeter of the solid part of the structure ensures a well-posed design problem. The perimeter constraint allows the designer to control the number of holes in the optimal design and to establish their characteristic length scale. Finite element implementations generate practical designs that are convergent with respect to grid refinement. Thus, an arbitrary level of geometric resolution can be achieved, so single-step procedures for topology design and detailed shape design are possible. The perimeter method eliminates the need for relaxation, thereby circumventing many of the complexities and restrictions of other approaches to topology design.
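On a pixel-discretized design domain, the perimeter of the solid phase can be approximated by counting solid-void interface edges, which makes an upper-bound perimeter constraint cheap to evaluate. A minimal sketch (hypothetical; the paper works with finite element discretizations, not this pixel model):

```python
import numpy as np

def perimeter(design: np.ndarray) -> int:
    """Approximate the perimeter of the solid phase (1s) in a binary
    design grid by counting solid-void interface edges; padding with
    void counts edges on the domain boundary too."""
    padded = np.pad(design, 1, constant_values=0)
    horizontal = np.abs(np.diff(padded, axis=0)).sum()
    vertical = np.abs(np.diff(padded, axis=1)).sum()
    return int(horizontal + vertical)

# A 4x4 solid block has perimeter 16; punching a 2x2 hole adds the
# hole's boundary (8), so a perimeter bound limits the number of holes.
block = np.ones((4, 4), dtype=int)
holed = block.copy()
holed[1:3, 1:3] = 0
print(perimeter(block), perimeter(holed))  # 16 24
```

Because every new hole adds its own boundary length, capping the perimeter caps the number and fineness of holes, which is exactly the control the abstract describes.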

476 citations


Journal ArticleDOI
TL;DR: In this article, a unified process design framework for obtaining integrated process and control systems design, which are economically optimal and can cope with parametric uncertainty and process disturbances, is described.
Abstract: Fundamental developments of a unified process design framework for obtaining integrated process and control systems design, which are economically optimal and can cope with parametric uncertainty and process disturbances, are described. Based on a dynamic mathematical model describing the process, including path constraints, interior and end-point constraints, a model that describes uncertain parameters and time-varying disturbances (for example, probability distributions or lower/upper bounds), and a set of process design and control alternatives (together with a set of control objectives and types of controllers), the problem is posed as a mixed-integer stochastic optimal control formulation. The proposed iterative decomposition algorithm alternates between the solution of a multiperiod “design” subproblem, determining the process structure and design together with a suitable control structure (and its design characteristics) to satisfy a set of “critical” parameters/periods (for uncertainty/disturbance) over time, and a time-varying feasibility analysis step, which identifies a new set of critical parameters for fixed design and control. Two examples are detailed: a mixing-tank problem to show the analytical steps of the procedure, and a ternary distillation design problem (featuring a rigorous tray-by-tray distillation model) to demonstrate the potential of the novel approach to reach solutions with significant cost savings over sequential techniques.

265 citations


Book
22 Feb 1996
TL;DR: I: Mathematical Methodology. II: Fundamentals of Electromagnetism. III: Inverse Problems and Optimal Design in Electromagnetic Applications. IV: Implementation of the FEM, Design Sensitivity and Shape Design Procedures.
Abstract: I: Mathematical Methodology. II: Fundamentals of Electromagnetism. III: Inverse Problems and Optimal Design in Electromagnetic Applications. IV: Implementation of the FEM, Design Sensitivity and Shape Design Procedures.

156 citations


Journal ArticleDOI
TL;DR: A least-squares-type algorithm is suggested for the unconstrained optimization method (based on an external penalty); the derivative computations reduce to calculations equivalent to those for steady-state processes and to evolution equations.
Abstract: We suggest a shape optimization method for a non-linear and non-steady-state metal forming problem. It consists in optimizing the initial shape of the part as well as the shape of the preform tool during a two-step forging operation, for which the shape of the second operation is known. Shapes are described using spline functions and optimal parameter values of the splines are searched in order to produce, at the end of the forging sequence, a part with a prescribed geometric accuracy, optimal metallurgical properties and for a minimal production cost. The finite element method, including numerous remeshing operations, is used for the simulation of the process. We suggest using a least-squares-type algorithm for the unconstrained optimization method (based on external penalty) for which we describe the calculation of the derivatives of the objective function. We show that it can reduce to calculations which are equivalent to the derivative calculations of steady-state processes and to evolution equations. Therefore, the computational cost of such an optimization is quite reasonable, even for complex forging processes. Lastly, in order to reduce the errors due to the numerous remeshings during the simulation, we introduce error estimation and adaptive remeshing methods with respect to the calculation of derivatives.

126 citations


Journal ArticleDOI
TL;DR: A formal definition of “robustness” for the uncapacitated network design problem is presented, and algorithms aimed at finding robust network designs are developed as adaptations of the Benders decomposition methodology, tailored to efficiently identify robust network designs.

126 citations


Book ChapterDOI
01 Jan 1996
TL;DR: A stochastic algorithm is applied to the optimal design of a fermentation process, to determine multiphase equilibria, for the optimal control of a penicillin reactor, for a non-differentiable system, and for the optimization of a catalyst blend in a tubular reactor.
Abstract: Many systems in chemical engineering are difficult to optimize using gradient-based algorithms. These include process models with multimodal objective functions and discontinuities. Herein, a stochastic algorithm is applied for the optimal design of a fermentation process, to determine multiphase equilibria, for the optimal control of a penicillin reactor, for the optimal control of a non-differentiable system, and for the optimization of a catalyst blend in a tubular reactor. The advantages of the algorithm for the efficient and reliable location of global optima are examined. The properties of these algorithms, as applied to chemical processes, are considered, with emphasis on the ease of handling constraints and the ease of implementation and interpretation of results. For the five processes, the efficiency of computation is improved compared with selected stochastic and deterministic algorithms. Results closer to the global optimum are reported for the optimal control of the penicillin reactor and the non-differentiable system.
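As a rough illustration of why stochastic search suits multimodal objectives, here is a minimal simulated annealing routine (a generic sketch, not the specific algorithm evaluated in the paper) applied to a one-dimensional multimodal function:

```python
import math, random

def simulated_annealing(f, x0, lo, hi, iters=20000, t0=1.0, seed=0):
    """Minimal simulated annealing for a 1-D objective: uphill moves
    are accepted with probability exp(-delta/T), letting the search
    escape the local minima that trap gradient-based methods."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, iters + 1):
        t = t0 / k                                   # simple cooling schedule
        y = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Multimodal test objective with many local minima; the global
# minimum is f(0) = 0.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_star, f_star = simulated_annealing(f, x0=4.0, lo=-5.0, hi=5.0)
```

Started in a distant local basin, the search still reaches the neighborhood of the global optimum, which is the reliability property the abstract emphasizes.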

102 citations


Journal ArticleDOI
TL;DR: In this paper, an equivalence theorem is proved for the Bayesian constrained design problem and used to show that the results of Cook and Wong on the equivalence of weighted and constrained problems apply much more generally, with applications to Bayesian nonlinear design problems with several objectives.
Abstract: Several competing objectives may be relevant in the design of an experiment. The competing objectives may not be easy to characterize in a single optimality criterion. One approach to these design problems has been to weight each criterion and find the design that optimizes the weighted average of the criteria. An alternative approach has been to optimize one criterion subject to constraints on the other criteria. An equivalence theorem is presented for the Bayesian constrained design problem. Equivalence theorems are essential in verifying optimality of proposed designs, especially when (as in most nonlinear design problems) numerical optimization is required. This theorem is used to show that the results of Cook and Wong on the equivalence of the weighted and constrained problems apply much more generally. The results are applied to Bayesian nonlinear design problems with several objectives.

99 citations


Journal ArticleDOI
TL;DR: In this paper, the authors survey recent developments in optimum experimental design, emphasizing potential or actual usefulness in applications ranging from response surface models to nonlinear models, generalized linear models and clinical trials.
Abstract: Optimum experimental designs were originally developed by Kiefer, mainly for response surface models. This survey of recent developments emphasizes potential or actual usefulness. For linear models the construction of exact designs, particularly over irregular design regions, is stressed, as is the blocking of response surface designs. Other important areas include systematic designs that are robust against trend and designs for mixtures with irregular design regions: several industrial examples are mentioned. Both D- and c-optimum designs are found for a non-linear model of the economic response of cereal production to fertilizer level, the c-optimum design being for the conditions of maximum economic return. Locally optimum and Bayesian designs are both described. Similar results for generalized linear models lead to designs for the LD95 in a logistic model in which male and female subjects respond differently. Designs with structure in the variance suggest alternatives to the potentially wasteful product designs of Taguchi. Designs for sequential clinical trials to include random balance are presented. The last section outlines some applications, including life testing and models in which time is a factor.
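As a small concrete instance of the optimum-design machinery surveyed here: for the quadratic model with regressors (1, x, x²) on [-1, 1], the D-optimal design is known to put equal weight on {-1, 0, 1}. A numeric check over equally weighted three-point designs {-1, t, 1} (illustrative only):

```python
import numpy as np

def info_matrix(points, weights):
    """Information matrix M = sum_i w_i f(x_i) f(x_i)^T for the
    quadratic model f(x) = (1, x, x^2)."""
    M = np.zeros((3, 3))
    for x, w in zip(points, weights):
        f = np.array([1.0, x, x * x])
        M += w * np.outer(f, f)
    return M

# D-criterion det(M) for designs {-1, t, 1} with equal weights 1/3;
# analytically det(M) = (4/27) * (1 - t**2)**2, maximized at t = 0.
best_t, best_det = None, -1.0
for t in np.linspace(-0.9, 0.9, 181):
    d = float(np.linalg.det(info_matrix([-1.0, t, 1.0], [1/3, 1/3, 1/3])))
    if d > best_det:
        best_t, best_det = t, d
print(best_t, best_det)
```

The grid search recovers the middle support point t = 0 and the known optimal criterion value 4/27.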

91 citations


Journal ArticleDOI
Yun Li, Kim Chwee Ng, David J. Murray-Smith, G.J. Gray, Ken Sharman
TL;DR: A reusable computing paradigm based on genetic algorithms is developed to transform the ‘unsolvable problem’ of optimal designs into a practically solvable ‘non-deterministic polynomial problem’, which results in computer automated designs directly from nonlinear plants.
Abstract: Although various nonlinear control theories, such as sliding mode control, have proved sound and successful, there is a serious lack of effective or tractable design methodologies due to difficulties encountered in the application of traditional analytical and numerical methods. This paper develops a reusable computing paradigm based on genetic algorithms to transform the ‘unsolvable problem’ of optimal designs into a practically solvable ‘non-deterministic polynomial problem’, which results in computer automated designs directly from nonlinear plants. The design methodology takes into account practical system constraints and extends the solution space, allowing new control terms to be included in the controller structure. In addition, the practical implementations using laboratory-scale systems demonstrate that such ‘off-the-computer’ designs offer a superior performance to manual designs in terms of transient and steady-state responses and of robustness. Various contributions to the genetic algorithm te...
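The flavor of such evolutionary design can be sketched with a toy problem (wholly hypothetical: tuning a proportional gain for a simulated first-order plant, far simpler than the nonlinear plants considered in the paper; for this plant the tracking cost decreases monotonically in the gain, so the search is expected to approach the upper gain bound):

```python
import random

def step_cost(kp, steps=200, dt=0.05):
    """Integrated absolute tracking error of a unit-step response for
    the first-order plant dx/dt = -x + u under proportional control."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        u = kp * (1.0 - x)                  # error feedback
        x += dt * (-x + u)                  # explicit Euler step
        cost += abs(1.0 - x) * dt
    return cost

def genetic_search(fitness, lo, hi, pop=30, gens=40, seed=1):
    """Minimal real-coded genetic algorithm: keep the elite half,
    refill with blend crossover plus Gaussian mutation."""
    rng = random.Random(seed)
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(population, key=fitness)[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = w * a + (1.0 - w) * b + rng.gauss(0.0, 0.1)
            children.append(min(hi, max(lo, child)))
        population = elite + children
    return min(population, key=fitness)

best_kp = genetic_search(step_cost, lo=0.0, hi=20.0)
```

The point of the sketch is the workflow the abstract describes: the GA needs only simulated closed-loop performance, not gradients or an analytical plant model.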

87 citations


01 Mar 1996
TL;DR: On the basis of an exact spectral analysis of the outgoing stochastic spike train, it is shown for the first time that there are optimal values of the threshold which yield optimal performance under given environmental conditions.

Abstract: We consider the detection of noisy signals with neuron-like threshold crossing detectors in the context of stochastic resonance. On the basis of an exact spectral analysis of the outgoing stochastic spike train, we show for the first time that there are optimal values of the threshold which yield optimal performance under given environmental conditions.
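A toy numerical version of the effect (not the paper's exact spectral analysis; the sinusoidal signal, Gaussian noise model, and SNR proxy below are illustrative assumptions): estimate the spike train's spectral power at the signal frequency, normalize by the mean firing rate as a shot-noise-like background, and scan the threshold.

```python
import cmath, math, random

def spike_snr(threshold, n=50000, amp=0.3, noise=1.0, period=100, seed=2):
    """Drive a threshold-crossing detector with a weak periodic signal
    plus Gaussian noise; return the spectral power of the 0/1 spike
    train at the signal frequency divided by the mean firing rate."""
    rng = random.Random(seed)
    w = 2.0 * math.pi / period
    comp, rate = 0.0 + 0.0j, 0.0
    for k in range(n):
        s = amp * math.sin(w * k)
        spike = 1.0 if s + rng.gauss(0.0, noise) > threshold else 0.0
        comp += spike * cmath.exp(-1j * w * k)
        rate += spike
    rate /= n
    if rate == 0.0:
        return 0.0
    return abs(comp) ** 2 / (n * rate)

# Raising the threshold suppresses background firing faster than it
# suppresses the signal component, up to a point, so this SNR proxy
# need not be maximized at zero threshold.
grid = [0.0, 0.5, 1.0, 1.5, 2.0]
snrs = {t: spike_snr(t) for t in grid}
best = max(snrs, key=snrs.get)
```

Under these assumptions the scan favors a moderate threshold over a very high one, mirroring the paper's claim that threshold choice matters for detector performance.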

87 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian method based on the idea of model discrimination that uncovers the active factors is proposed to solve the ambiguity of the results of a fractional experiment.
Abstract: Fractional factorial, Plackett-Burman, and other multifactor designs are often effective in practice due to factor sparsity. That is, just a few of the many factors studied will have major effects. For those active factors, these designs can have high resolution. We have previously developed a Bayesian method based on the idea of model discrimination that uncovers the active factors. Sometimes, the results of a fractional experiment are ambiguous due to confounding among the possible effects, and more than one model may be consistent with the data. Within the Bayesian construct, we have developed a method for designing a follow-up experiment to resolve this ambiguity. The idea is to choose runs that allow maximum discrimination among the plausible models. This method is more general than methods that algebraically decouple aliased interactions and more appropriate than optimal design methods that require specification of a single model. The method is illustrated through examples of fractional experiments.

Book ChapterDOI
TL;DR: This chapter presents several designs for nonlinear and generalized linear models that incorporate a prior distribution on the parameters into appropriate design criteria, usually by integrating D- or c-optimality criteria over the prior distribution.
Abstract: This chapter presents several designs for nonlinear and generalized linear models. Optimum experimental designs for nonlinear models depend on the values of the unknown parameters and the problem of their construction is therefore necessarily more complicated than that for linear models. One approach to the problem of design for nonlinear models is to adopt a best guess for the parameters. The simplicity of optimum design for linear models is then recovered and, in particular, locally D-optimum designs for the precise estimation of all the parameters in the nonlinear model, and locally c-optimum designs for the precise estimation of specified linear combinations of these parameters, are readily constructed and their optimality confirmed. An alternative and, in a sense, a more realistic approach to this design problem is to introduce a prior distribution on the parameters and to incorporate this prior into appropriate design criteria, usually by integrating D- or c-optimality criteria over the prior distribution. Designs optimizing these criteria are termed “Bayesian optimum designs” and are usually constructed numerically.

Journal ArticleDOI
TL;DR: The results indicate that biaxial tests can be improved over presently common procedures and show that this conclusion applies for a variety of circumstances.
Abstract: A rational methodology is developed for optimal design of biaxial stretch tests intended for estimating material parameters of flat tissues. It is applied to a structural model with a variety of constitutive equations and test protocols, and for a wide range of parameter levels. The results show nearly identical optimal designs under all circumstances. Optimality is obtained with two uniaxial stretch tests at mutually normal directions inclined by 22.5 deg to the axes of material symmetry. Protocols which include additional equibiaxial tests provide superior estimation with lower variance of estimates. Tests performed at angles 0, 45, and 90 deg to the axes of material symmetry provide unreliable estimates. The optimal sampling is variable and depends on the protocols and model parameters. In conclusion, the results indicate that biaxial tests can be improved over presently common procedures and show that this conclusion applies for a variety of circumstances.


Journal ArticleDOI
TL;DR: The methods presented in this paper provide a means of deriving and implementing optimal designs that will maximize precision for a fixed total budget or minimize the study cost necessary to achieve a desired precision.
Abstract: The optimal allocation of available resources is the concern of every investigator in choosing a study design. The recent development of statistical methods for the analysis of two-stage data makes these study designs attractive for their economy and efficiency. However, little work has been done on deriving two-stage designs that are optimal under the kinds of constraints encountered in practice. The methods presented in this paper provide a means of deriving designs that will maximize precision for a fixed total budget or minimize the study cost necessary to achieve a desired precision. These optimal designs depend on the relative information content and the relative cost of gathering the first- and second-stage data. In place of the usual sample size calculations, the investigator can use pilot data to estimate the study size and second-stage sampling fractions. The gains in efficiency that can result from such carefully designed studies are illustrated here by deriving and implementing optimal designs using data from the Coronary Artery Surgery Study (Circulation 1980;62:254-61).
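The budget trade-off can be illustrated with a simplified variance model (the constants a and b standing for the information content of first- and second-stage observations are hypothetical, and this is not the paper's actual estimator):

```python
import math

def optimal_two_stage(a, b, c1, c2, budget):
    """Minimize Var = a/n1 + b/n2 subject to c1*n1 + c2*n2 = budget.
    The Lagrange conditions give n_i proportional to sqrt(coef/cost)."""
    k = math.sqrt(a * c1) + math.sqrt(b * c2)
    n1 = budget * math.sqrt(a / c1) / k
    n2 = budget * math.sqrt(b / c2) / k
    return n1, n2, a / n1 + b / n2

# Cheap first-stage data (cost 1 per subject) vs expensive
# second-stage data (cost 10 per subject) under a fixed budget.
n1, n2, var = optimal_two_stage(a=4.0, b=1.0, c1=1.0, c2=10.0, budget=1000.0)

# Sanity check: moving along the budget line never beats the optimum.
for d in (-5.0, 5.0):
    m1 = n1 + d
    m2 = (1000.0 - 1.0 * m1) / 10.0
    assert 4.0 / m1 + 1.0 / m2 >= var
```

The closed form shows the qualitative point of the abstract: the optimal split depends only on the relative information content and relative cost of the two stages.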

Journal ArticleDOI
TL;DR: In this article, the optimal design of uniaxially loaded laminated plates subject to elastic in-plane restraints along the unloaded edges is given for a maximum combination of prebuckling stiffness, postbuckling stiffness and buckling load.

Journal ArticleDOI
TL;DR: In this paper, the Gumbel model for bivariate logistic regression was used to model efficacy and toxicity in drug-testing experiments and the robustness of these designs to parameter misspecification was discussed.
Abstract: In drug-testing experiments the primary responses of interest are efficacy and toxicity. These can be modeled as a bivariate quantal response using the Gumbel model for bivariate logistic regression. D-optimal and Q-optimal experimental designs are developed for this model. The Q-optimal design minimizes the average asymptotic prediction variance of p(1, 0; d), the probability of efficacy without toxicity at dose d, over a desired range of doses. The robustness of these designs to parameter misspecification is discussed. In addition, D-efficiencies of Q-optimal designs and Q-efficiencies of D-optimal designs are presented. An extension of the general equivalence theorem to the multivariate case is applied to these designs.
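A sketch of the modeled quantity (assuming the Morgenstern-type Gumbel bivariate logistic form p11 = p1·p2·(1 + α(1-p1)(1-p2)); all parameter values below are hypothetical):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def efficacy_no_toxicity(dose, a1, b1, a2, b2, alpha):
    """P(efficacy = 1, toxicity = 0) at a given dose: logistic
    marginals p1 (efficacy) and p2 (toxicity), joint probability p11
    from the Gumbel (Morgenstern-type) form, and p(1, 0; d) = p1 - p11."""
    p1 = logistic(a1 + b1 * dose)
    p2 = logistic(a2 + b2 * dose)
    p11 = p1 * p2 * (1.0 + alpha * (1.0 - p1) * (1.0 - p2))
    return p1 - p11

# With alpha = 0 the responses are independent and p(1,0) = p1*(1-p2).
p10 = efficacy_no_toxicity(dose=1.0, a1=-1.0, b1=2.0, a2=-3.0, b2=1.5,
                           alpha=0.5)
```

A Q-optimal design would place doses to minimize the average prediction variance of this p(1, 0; d) over the dose range of interest.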

Journal ArticleDOI
TL;DR: In this article, the authors formulate the distributed-parameter optimization and topology design problems (using the perimeter method) for non-linear thermoelasticity, and present a finite element optimization procedure based on this formulation.


Journal ArticleDOI
18 Aug 1996
TL;DR: A formal process for selecting objective functions can be defined, so that the resulting optimal design model has an appropriate decomposed form and also possesses desirable properties for the scalar substitute functions used in multicriteria optimization.
Abstract: Optimal design of large systems is easier if the optimization model can be decomposed and solved as a set of smaller, coordinated subproblems. Casting a given design problem into a particular optimization model by selecting objectives and constraints is generally a subjective task. In system models where hierarchical decomposition is possible, a formal process for selecting objective functions can be defined, so that the resulting optimal design model has an appropriate decomposed form and also possesses desirable properties for the scalar substitute functions used in multicriteria optimization. Such a process is often followed intuitively during the development of a system optimization model by summing selected objectives from each subsystem into a single overall system objective. The more formal process presented in this article is simple to implement and amenable to automation.

Journal ArticleDOI
TL;DR: A new model is presented for the optimal design of X̄ charts used for the statistical monitoring of processes where production runs have a finite duration; the model considers the effect of the setup operation on the chart design.
Abstract: A new model is presented for the optimal design of X̄ charts utilized for the statistical monitoring of processes where production runs have a finite duration. The proposed model considers the effect of the setup operation on the chart design. The model contains both Duncan's model and a model due to Ladany as particular cases, yet it allows the user to consider more realistic production environments. Two types of finite-length production process are considered: a repetitive manufacturing process and a job-shop process. New relationships between the length of the production run, the power of the chart and the nature of the process setup are found by numerically analyzing the behavior of the model.
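A stripped-down economic design in the spirit of Duncan's model (the cost constants and the hourly cost decomposition below are toy assumptions, omitting the finite-run-length and setup effects that are the point of the paper): pick the sample size n and control-limit width k minimizing an hourly cost that trades off sampling effort, false alarms, and slow detection.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def hourly_cost(n, k, shift=1.0, sample_cost=0.5, false_alarm_cost=50.0,
                out_of_control_cost=100.0, interval=1.0, shift_rate=0.02):
    """Toy economic criterion for an X-bar chart with limits at +/- k
    standard errors, sampling n units every `interval` hours, against
    a mean shift of `shift` process standard deviations."""
    alpha = 2.0 * (1.0 - Phi(k))              # false-alarm prob. per sample
    power = Phi(shift * math.sqrt(n) - k)     # detection prob. per sample
    arl_out = 1.0 / max(power, 1e-12)         # samples until detection
    return (sample_cost * n / interval
            + false_alarm_cost * alpha / interval
            + out_of_control_cost * shift_rate * arl_out * interval)

# Grid search over the chart design parameters.
candidates = [(n, k) for n in range(1, 16) for k in (2.0, 2.5, 3.0, 3.5)]
best = min(candidates, key=lambda nk: hourly_cost(*nk))
```

Even this toy version shows the characteristic result of economic chart design: the cost-minimizing limits are often tighter than the conventional 3-sigma limits, and moderate sample sizes beat both extremes.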

Journal ArticleDOI
TL;DR: In this article, a training framework is developed to design optimal nonlinear filters for various signal and image processing tasks, including Boolean filters and stack filters, based on some representative training set, and the training framework shows explicitly the essential part of the initial specification and how it affects the resulting optimal solution.
Abstract: A training framework is developed in this paper to design optimal nonlinear filters for various signal and image processing tasks. The targeted families of nonlinear filters are the Boolean filters and stack filters. The main merit of this framework at the implementation level is perhaps the absence of constraining models, making it nearly universal in terms of application areas. We develop fast procedures to design optimal or close to optimal filters, based on some representative training set. Furthermore, the training framework shows explicitly the essential part of the initial specification and how it affects the resulting optimal solution. Symmetry constraints are imposed on the data and, consequently, on the resulting optimal solutions for improved performance and ease of implementation. The case study is dedicated to natural images. The properties of optimal Boolean and stack filters, when the desired signal in the training set is the image of a natural scene, are analyzed. Specifically, the effect of changing the desired signal (using various natural images) and the characteristics of the noise (the probability distribution function, the mean, and the variance) is analyzed. Elaborate experimental conditions were selected to investigate the robustness of the optimal solutions using a sensitivity measure computed on data sets. A remarkably low sensitivity and, consequently, a good generalization power of Boolean and stack filters are revealed. Boolean-based filters are thus shown to be not only suitable for image restoration but also robust, making it possible to build libraries of "optimal" filters, which are suitable for a set of applications.
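The defining "stacking" structure of these filters can be sketched directly: threshold-decompose an integer signal into binary slices, run a positive Boolean function on each slice, and sum the results. With a window-3 majority function the stack filter coincides with the median filter (a standard textbook example, not this paper's trained filters):

```python
import statistics

def majority(a, b, c):
    """A positive Boolean function: window-3 majority vote."""
    return 1 if a + b + c >= 2 else 0

def stack_filter(signal, levels):
    """Threshold decomposition: slice the signal at each level, apply
    the Boolean filter to every binary slice, and stack (sum) the
    filtered slices back into a multilevel output."""
    out = [0] * len(signal)
    for m in range(1, levels):
        binary = [1 if x >= m else 0 for x in signal]
        padded = [binary[0]] + binary + [binary[-1]]   # replicate edges
        for i in range(len(signal)):
            out[i] += majority(padded[i], padded[i + 1], padded[i + 2])
    return out

x = [3, 7, 2, 2, 9, 1, 5]
y = stack_filter(x, levels=10)

# Equivalent direct computation: the window-3 median filter.
xp = [x[0]] + x + [x[-1]]
med = [statistics.median(xp[i:i + 3]) for i in range(len(x))]
print(y, y == med)  # [3, 3, 2, 2, 2, 5, 5] True
```

Training an optimal stack filter, as in the paper, amounts to choosing the positive Boolean function (here fixed to majority) that minimizes error on the training set.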

Journal ArticleDOI
TL;DR: In this article, all E-optimal designs for the mean parameter vector in polynomial regression of degree d without intercept in one real variable were derived, based on interplays between E-optimal design problems in the present setup and in certain heteroscedastic polynomial setups with intercept.

Book ChapterDOI
TL;DR: This chapter discusses several designs for comparing treatments with a control for various experimental settings, models and inference methods.
Abstract: This chapter discusses several designs for comparing treatments with a control for various experimental settings, models and inference methods. Bechhofer and Tamhane rediscovered designs with supplemented balance when they were considering the problem of constructing simultaneous confidence intervals for the treatment–control contrasts. They called their designs “Balanced Treatment Incomplete Block (BTIB) designs,” a terminology which has been adopted by many authors. It has long been known that one way to obtain an optimal block design is to construct, if possible, an orthogonal block design, such that within each block the replication of treatments is optimal for a zero-way elimination of heterogeneity model. There are two methods for deriving inferences on the treatment-control contrasts. One is estimation, and the other is simultaneous confidence intervals. The approximate, or continuous, block design theory is wide in its scope. As the theorems in this approach specify the proportions of units assigned to each treatment, they give an overall idea of the nature of optimal designs. Also, one rule applies to (almost) all block sizes, and requires only minor computations when the block sizes are altered. These are very desirable properties. On the other hand, application of these designs is possible only after rounding off the treatment-block incidences to nearby integers, and this could result in loss of efficiency.

Journal ArticleDOI
TL;DR: A new technique based on the genetic algorithm for constructing experimental designs is described, and it is shown that this class of algorithms cannot be used in cases of high dimensionality.

Journal ArticleDOI
TL;DR: In this article, the authors considered a centered stochastic process whose known and continuous covariance function satisfies a generalized Sacks-Ylvisaker regularity condition of order zero, and constructed asymptotically optimal sequences of designs.

Journal ArticleDOI
18 Aug 1996
TL;DR: The article presents an integer programming formulation and solution techniques for synthesizing hierarchically decomposed optimal design problems and examples for designing a pressure vessel, an automotive caliper disc brake and a speed reducer are presented.
Abstract: Decomposition synthesis in optimal design is the process of creating an optimal design model by selecting objectives and constraints so that it can be directly partitioned into an appropriate decomposed form. Such synthesis results are not unique since there may be many partitions that satisfy the decomposition requirements. Introducing suitable criteria, an optimal decomposition synthesis process can be defined in a manner analogous to optimal partitioning formulations. The article presents an integer programming formulation and solution techniques for synthesizing hierarchically decomposed optimal design problems. Examples for designing a pressure vessel, an automotive caliper disc brake and a speed reducer are also presented.

Journal ArticleDOI
TL;DR: In this paper, a version of Elfving's theorem for the Bayesian D-optimality criterion in nonlinear regression models is presented, which allows a representation of a (uniquely determined) boundary point of a convex subset of $L^2$-integrable functions.
Abstract: We present a version of Elfving's theorem for the Bayesian D-optimality criterion in nonlinear regression models. The Bayesian optimal design can be characterized as a design which allows a representation of a (uniquely determined) boundary point of a convex subset of $L^2$-integrable functions. A similar characterization is given for the Bayesian c-optimality criterion where a (possibly) nonlinear function of the unknown parameters has to be estimated. The results are illustrated in the example of an exponential growth model using a gamma prior distribution.

Journal ArticleDOI
TL;DR: An approach to constructing tailor-made cross-over designs by computer search is described, using A-optimality as the criterion, which it is hoped will lead to ill-fitting 'off the peg' designs being a thing of the past.
Abstract: There are many diseases and conditions that can be studied using a cross-over clinical trial, where the subjects receive sequences of treatments. The treatments are then compared using the repeated measurements taken 'within' subjects. The actual plan or design of the trial is usually obtained by consulting a published table of designs or by applying relatively simple rules such as using all possible permutations of the treatments. However, there is a danger in this approach because the model assumed for the data when the tables or rules were constructed may not be appropriate for the new trial being planned. Also, there may be restrictions in the new trial on the number of treatment sequences that can be used or on the number of periods of treatment particular subjects can be given. Such restrictions may mean that a published design of the ideal size cannot be found unless compromises are made. A better approach is to make the design satisfy the objectives of the trial rather than vice versa. In this paper we describe an approach to constructing such tailor-made designs which we hope will lead to ill-fitting 'off the peg' designs being a thing of the past. We use a computer algorithm to search for optimal designs and illustrate it using a number of examples. The criterion of optimality used in this paper is A-optimality but our approach is not restricted to one particular criterion. The model used in the search for the optimal design is chosen to suit the nature of the trial at hand and as an example a variety of models for three treatments are considered. We also illustrate the construction of designs for the comparison of two active treatments and a placebo where it can be assumed that the carry-over effects of the active treatments are similar. Finally, we illustrate an augmentation of a design that could arise when the objectives of a trial change.
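The A-criterion driving such a search can be sketched for two-treatment designs under a simplified model with mean, period, subject and treatment effects and no carry-over (illustrative only, not the authors' algorithm): the criterion is the variance multiplier of the treatment-effect estimator, and it is infinite when the design confounds treatment with subjects or periods.

```python
import numpy as np

def a_criterion(sequences):
    """Variance multiplier (A-criterion) of the treatment effect for a
    two-treatment crossover design given as per-subject sequences,
    e.g. ["AB", "BA"]. Model: mean + period + subject + treatment."""
    n_subj, n_per = len(sequences), len(sequences[0])
    rows = []
    for s, seq in enumerate(sequences):
        for p, trt in enumerate(seq):
            row = [1.0]                                    # intercept
            row += [1.0 if p == j else 0.0 for j in range(1, n_per)]
            row += [1.0 if s == j else 0.0 for j in range(1, n_subj)]
            row.append(1.0 if trt == "B" else 0.0)         # treatment
            rows.append(row)
    X = np.array(rows)
    XtX = X.T @ X
    if np.linalg.matrix_rank(XtX) < XtX.shape[0]:
        return float("inf")           # treatment effect not estimable
    return float(np.linalg.inv(XtX)[-1, -1])

print(a_criterion(["AB", "BA"]))   # finite: the classic 2x2 crossover
print(a_criterion(["AA", "BB"]))   # inf: treatment confounded with subject
print(a_criterion(["AB", "AB"]))   # inf: treatment confounded with period
```

A search algorithm of the kind the paper describes would enumerate or exchange candidate sequence sets and keep the one with the smallest criterion value under the model chosen for the trial at hand.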

Journal ArticleDOI
TL;DR: A new method is proposed for the optimal design of large-set reference models, using an improved LVQ3 combined with simulated annealing, which has been proven to be a useful technique in many areas of optimization.