Journal ArticleDOI

Optimum designs for parameter estimation in mixture experiments with group synergism

03 May 2021 - Communications in Statistics - Theory and Methods (Taylor & Francis) - Vol. 50, Iss: 9, pp 2001-2014
TL;DR: Mixture models were first introduced in canonical form of different degrees to represent the response function in a mixture experiment, and designs for the same were suggested.
Abstract: Mixture models were first introduced in canonical form of different degrees to represent the response function in a mixture experiment, and designs for the same were suggested. Later, sever...
Citations
Journal ArticleDOI
TL;DR: The concept of R-optimality was introduced in the literature as an alternative to the commonly used D-optimality criterion when the objective is to construct rectangular confidence regions, and it has been used in many applications.
Abstract: The concept of R-optimality was introduced in the literature as an alternative to the commonly used D-optimality criteria when the objective is to construct rectangular confidence regions. This pre...

1 citation
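For orientation (an illustrative note, not taken from the cited paper): the R-criterion is commonly computed as the product of the diagonal entries of the inverse information matrix, since that product governs the volume of a rectangular (coordinatewise) confidence region. A minimal sketch:

```python
import numpy as np

def r_criterion(M):
    # R-optimality: minimize the product of the diagonal entries of
    # M^{-1}; each entry scales the squared length of one side of the
    # rectangular confidence region.
    return np.prod(np.diag(np.linalg.inv(M)))
```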

References
Book
01 Jan 1972
V. V. Fedorov, Theory of Optimal Experiments

2,557 citations

Journal ArticleDOI
TL;DR: The authors consider classes of probability measures that include all measures with finite support, i.e., measures that assign probability one to a set consisting of a finite number of points, as the basic class of experimental designs.
Abstract: Let f1, …, fk be linearly independent real functions on a space X, such that the range R of (f1, …, fk) is a compact set in k dimensional Euclidean space. (This will happen, for example, if the fi are continuous and X is a compact topological space.) Let S be any Borel field of subsets of X which includes X and all sets which consist of a finite number of points, and let C = {e} be any class of probability measures on S which includes all probability measures with finite support (that is, which assign probability one to a set consisting of a finite number of points), and which are such that $\int f_i f_j \, de$ is defined. In all that follows we consider only probability measures e which are in C.
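In design language, these finitely supported measures are exactly the designs used in practice. A minimal sketch (not from the paper; names are illustrative) of such a measure and the matrix of integrals $\int f_i f_j \, de$ it induces:

```python
import numpy as np

# A probability measure with finite support, and the matrix of the
# integrals  int f_i f_j de  that it induces (the information matrix).
def information_matrix(f, points, masses):
    return sum(w * np.outer(f(x), f(x)) for x, w in zip(points, masses))

# Example: f = (1, x, x^2) on X = [-1, 1], mass 1/3 on each of -1, 0, 1.
f = lambda x: np.array([1.0, x, x * x])
M = information_matrix(f, [-1.0, 0.0, 1.0], [1 / 3, 1 / 3, 1 / 3])
print(np.round(M, 3))
```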

872 citations


Additional excerpts

  • ...The inverse of the information matrix of $\xi_0$ is given by
    $$M^{-1}(\xi_0) = \left[ \binom{p+1}{2} + \binom{q+1}{2} \right] \begin{bmatrix} (X_1X_1')^{-1} & O \\ O' & (X_2X_2')^{-1} \end{bmatrix},$$
    where
    $$X_1 = \begin{bmatrix} I_p & B \\ O & \tfrac{1}{4} I_{\binom{p}{2}} \end{bmatrix}, \qquad X_2 = \begin{bmatrix} I_q & C \\ O & \tfrac{1}{4} I_{\binom{q}{2}} \end{bmatrix} \tag{3.3}$$
    are $\binom{p+1}{2} \times \binom{p+1}{2}$ and $\binom{q+1}{2} \times \binom{q+1}{2}$ matrices respectively, with
    $$(X_1X_1')^{-1} = \begin{bmatrix} I_p & -4B \\ -4B' & 16\,(I_{\binom{p}{2}} + B'B) \end{bmatrix}, \qquad (X_2X_2')^{-1} = \begin{bmatrix} I_q & -4C \\ -4C' & 16\,(I_{\binom{q}{2}} + C'C) \end{bmatrix}.$$
    Here $I_m$ is an identity matrix of order $m \times m$, $O$ is a null matrix of order $\binom{p+1}{2} \times \binom{q+1}{2}$, and $B$ and $C$ are respectively $p \times \binom{p}{2}$ and $q \times \binom{q}{2}$ matrices whose columns, indexed by the pairs $b_{12}, b_{13}, \dots, b_{1p}, b_{23}, \dots, b_{p-1,p}$ (for $B$) and $b_{p+1,p+2}, b_{p+1,p+3}, \dots, b_{p+q-1,p+q}$ (for $C$), contain the value $1/2$ in the two rows of the corresponding pair and $0$ elsewhere. We check the optimality or otherwise of $\xi_0$ using Equivalence Theorem 3.1....

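To make the block structure above concrete, here is a minimal numerical sketch (not from the paper). It builds $B$ for a small $p$ and inverts $X_1X_1'$; the $1/4$ scaling of the lower block is an assumption taken from the value of the cross-product terms $x_i x_j = (1/2)(1/2) = 1/4$ at the midpoint support points:

```python
import numpy as np
from itertools import combinations

def pair_matrix(p):
    """B: p x C(p,2); the column for the pair (i, j) has 1/2 in rows i, j."""
    pairs = list(combinations(range(p), 2))
    B = np.zeros((p, len(pairs)))
    for col, (i, j) in enumerate(pairs):
        B[i, col] = B[j, col] = 0.5
    return B

p = 3
B = pair_matrix(p)
m = B.shape[1]
# Columns of X1 are the model vectors f(x) at the vertices (identity part)
# and edge midpoints (cross-product value 1/4) of the first group.
X1 = np.block([[np.eye(p), B],
               [np.zeros((m, p)), 0.25 * np.eye(m)]])
print(np.round(np.linalg.inv(X1 @ X1.T), 2))  # compare block pattern with (3.3)
```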

  • ...The criteria are:
    For D-optimality: maximize $\det[M(\xi)] = \det[M_{11}(\xi)] \cdot \det[M_{22}(\xi)]$;
    For A-optimality: minimize $\operatorname{tr}[M^{-1}(\xi)] = \operatorname{tr}[M_{11}^{-1}(\xi)] + \operatorname{tr}[M_{22}^{-1}(\xi)]$.
    To get an idea of the support points of the D-optimal and A-optimal designs, we make use of the Equivalence Theorems due to Kiefer and Wolfowitz (1960) and Fedorov (1972), given below: Theorem 3.1....

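As a reading aid (illustrative, not from the paper): with a block-diagonal information matrix $M(\xi) = \operatorname{diag}(M_{11}, M_{22})$, the two criteria factor as stated, and the equivalence-theorem variance function can be sketched as:

```python
import numpy as np

def d_criterion(M11, M22):
    # D-optimality: maximize det M = det(M11) * det(M22).
    return np.linalg.det(M11) * np.linalg.det(M22)

def a_criterion(M11, M22):
    # A-optimality: minimize tr M^{-1} = tr(M11^{-1}) + tr(M22^{-1}).
    return np.trace(np.linalg.inv(M11)) + np.trace(np.linalg.inv(M22))

def d_variance(fx, M):
    # Kiefer-Wolfowitz check: at a D-optimal design,
    # d(xi, x) = f(x)' M^{-1} f(x) <= number of parameters, for all x.
    return fx @ np.linalg.solve(M, fx)
```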

  • ...He proved the optimality of the design with the help of the Equivalence Theorem....


  • ...In view of this and Theorem 3.3, we first search for the A-optimal design within the class $D_1$ if only, say, $p = 3$, and within the class $D_2$ if $p = q = 3$, where designs in $D_1$ and $D_2$ are as follows:
    i. $\xi \in D_1$ has support points:
    a. $(1, 0, \dots, 0)$ and permutations among the first 3 components, each with mass $w_{11}$;
    b. $(1/2, 1/2, 0, \dots, 0)$ and permutations among the first 3 components, each with mass $w_{12}$;
    c. $(1/3, 1/3, 1/3, 0, \dots, 0)$ with mass $w_{13}$;
    d. $(\underbrace{0, \dots, 0}_{p}, \underbrace{1, 0, \dots, 0}_{q})$ and permutations among the last $q$ components, each with mass $w_{21}$;
    e. $(\underbrace{0, \dots, 0}_{p}, \underbrace{1/2, 1/2, 0, \dots, 0}_{q})$ and permutations among the last $q$ components, each with mass $w_{22}$;
    where $w_{ij} \ge 0 \ \forall i, j$, and $3(w_{11} + w_{12}) + w_{13} + q\,w_{21} + \binom{q}{2} w_{22} = 1$;
    ii. $\xi \in D_2$ has support points:
    a. $(1, 0, 0, 0, 0, 0)$ and permutations among the first 3 components, each with mass $w_{11}$;
    b. $(1/2, 1/2, 0, 0, 0, 0)$ and permutations among the first 3 components, each with mass $w_{12}$;
    c. $(1/3, 1/3, 1/3, 0, 0, 0)$ with mass $w_{13}$;
    d. $(0, 0, 0, 1, 0, 0)$ and permutations among the last 3 components, each with mass $w_{21}$;
    e. $(0, 0, 0, 1/2, 1/2, 0)$ and permutations among the last 3 components, each with mass $w_{22}$;
    f. $(0, 0, 0, 1/3, 1/3, 1/3)$ with mass $w_{23}$;
    where $w_{ij} \ge 0 \ \forall i, j$, and $3(w_{11} + w_{12} + w_{21} + w_{22}) + w_{13} + w_{23} = 1$.
    The A-optimal design $\xi_0$ within $D_1$ (or $D_2$) is obtained by finding the masses that minimize $\operatorname{Trace}[M^{-1}(\xi)]$. Since algebraic derivations are lengthy and tedious, we numerically compute $d(\xi_0, x) = f'(x) M^{-2}(\xi_0) f(x)$ for enumerable $x \in \chi$ (with $q \le 7$ in case (i)) to observe that the conditions of the Equivalence Theorem are satisfied....

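A runnable sketch of this search for $p = q = 3$ (class $D_2$), under the assumption that the model contains the linear terms and within-group cross products only (which is what makes $M(\xi)$ block diagonal here); the orbit structure and mass constraint follow the excerpt, while the optimizer, starting point, and random grid check are illustrative choices:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def f(x):
    # Assumed model: linear terms plus within-group cross products for
    # each group of 3 components (12 parameters in all).
    cp = lambda g: [g[i] * g[j] for i, j in combinations(range(3), 2)]
    return np.array(list(x[:3]) + cp(x[:3]) + list(x[3:]) + cp(x[3:]))

def orbits():
    # The six orbits of D2 support points: vertices, edge midpoints,
    # and centroid of each group's face of the simplex.
    out = []
    for s in (0, 3):
        vertices = [np.eye(6)[s + i] for i in range(3)]
        mids = []
        for i, j in combinations(range(3), 2):
            x = np.zeros(6)
            x[s + i] = x[s + j] = 0.5
            mids.append(x)
        centroid = np.zeros(6)
        centroid[s:s + 3] = 1 / 3
        out += [vertices, mids, [centroid]]
    return out

ORBITS = orbits()  # masses ordered (w11, w12, w13, w21, w22, w23)

def M(w):
    out = np.zeros((12, 12))
    for wi, orbit in zip(w, ORBITS):
        for x in orbit:
            fx = f(x)
            out += wi * np.outer(fx, fx)
    return out

# Minimize Trace[M^{-1}(xi)] subject to the total-mass constraint.
cons = {"type": "eq",
        "fun": lambda w: sum(len(o) * wi for o, wi in zip(ORBITS, w)) - 1}
res = minimize(lambda w: np.trace(np.linalg.inv(M(w))),
               x0=np.full(6, 1 / 14), bounds=[(1e-6, 1)] * 6,
               constraints=[cons], method="SLSQP")
w0 = res.x
print("masses (w11, w12, w13, w21, w22, w23):", np.round(w0, 4))

# Equivalence check for A-optimality: d(xi0, x) = f(x)' M^{-2} f(x)
# should equal tr M^{-1} at the support points and not exceed it elsewhere.
Mi = np.linalg.inv(M(w0))
Mi2 = Mi @ Mi
d_supp = max(f(x) @ Mi2 @ f(x) for orbit in ORBITS for x in orbit)
rng = np.random.default_rng(1)
d_rand = max(f(x) @ Mi2 @ f(x)
             for x in rng.dirichlet(np.ones(6), size=20000))
print(f"tr M^-1 = {np.trace(Mi):.2f}, d at support = {d_supp:.2f}, "
      f"max d at random points = {d_rand:.2f}")
```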


Journal ArticleDOI
TL;DR: The authors develop computational procedures for finding optimum designs of experiments in regression problems, beginning with the case where the desired inference concerns just one of the regression coefficients, with illustrative examples given in Section 3.
Abstract: Although regression problems have been considered by workers in all sciences for many years, until recently relatively little attention has been paid to the optimum design of experiments in such problems. At what values of the independent variable should one take observations, and in what proportions? The purpose of this paper is to develop useful computational procedures for finding optimum designs in regression problems of estimation, testing hypotheses, etc. In Section 2 we shall develop the theory for the case where the desired inference concerns just one of the regression coefficients, and illustrative examples will be given in Section 3. In Section 4 the theory for the case of inference on several coefficients is developed; here there is a choice of several possible optimality criteria, as discussed in [1]. In Section 5 we treat the problem of global estimation of the regression function, rather than of the individual coefficients. We shall now indicate briefly some of the computational aspects of the search for optimum designs by considering the problem of Section 2 wherein the inference concerns one of $k$ regression coefficients. For the sake of concreteness, we shall occasionally refer here to the example of polynomial regression on the real interval $\lbrack -1, 1\rbrack$, where all observations are independent and have the same variance. The quadratic case is rather trivial to treat by our methods, so we shall sometimes refer here to the case of cubic regression. In the latter case we suppose all four regression coefficients to be unknown, and we want to estimate or test a hypothesis about the coefficient $a_3$ of $x^3$. If a fixed number $N$ of observations is to be taken, we can think of representing the proportion of observations taken at any point $x$ by $\xi(x)$, where $\xi$ is a probability measure on $\lbrack -1, 1\rbrack$. To a first approximation (which is discussed in Section 2), we can ignore the fact that in what follows $N\xi$ can take only integer values. We consider three methods of attacking the problem of finding an optimum $\xi$:

A. The direct approach is to compute the variance of the best linear estimator of $a_3$ as a function of the values of the independent variable at which observations are taken or, equivalently, as a function of the moments of $\xi$. Denoting by $\mu_i$ the $i$th moment of $\xi$, and assuming $\xi$ to be concentrated entirely on more than three points (so that $a_3$ is estimable), we find easily that the reciprocal of this variance is proportional to $$\frac{\mu^2_5(\mu^2_1 - \mu_2) + 2\mu_5(\mu^2_2 \mu_3 + \mu_3 \mu_4 - \mu_1 \mu^2_3 - \mu_1 \mu_2 \mu_4) - \mu^3_4 + \mu^2_4(\mu^2_2 + 2\mu_1 \mu_3) - 3\mu_4 \mu_2 \mu^2_3 + \mu^4_3}{\mu_4(\mu_2 - \mu^2_1) - \mu^2_3 - \mu^3_2 + 2\mu_1 \mu_2 \mu_3} + \mu_6$$ in the case of cubic regression. The problem is to find a $\xi$ on $\lbrack -1, 1\rbrack$ which maximizes this expression. Thus, this direct approach leads to a calculation which appears quite formidable. This is true even if one uses the remark on symmetry of the next paragraph and restricts attention to symmetrical $\xi$, so that $\mu_i = 0$ for $i$ odd. For polynomials of higher degree or for regression functions which are not polynomials, the difficulties are greater.

B.
The results of Section 2 yield the following approach to the problem: Let $c_0 + c_1x + c_2x^2$ be a best Chebyshev approximation to $x^3$ on $\lbrack -1, 1\rbrack$, i.e., such that the maximum over $\lbrack -1, 1\rbrack$ of $|x^3 - (c_0 + c_1x + c_2x^2)|$ is a minimum over all choices of the $c_i$, and suppose $B$ is the subset of $\lbrack -1, 1\rbrack$ where the maximum of this absolute value is taken on. Then $\xi$ must give measure one to $B$, and the weights assigned by $\xi$ to the various points of $B$ (there are four in this case) can be found either by solving the linear equations (2.10) or by computing these weights so as to make $\xi$ a maximum strategy for the game discussed in Section 2. Two points should be mentioned: (1) In the general polynomial case, where there are $k$ parameters ($k = 4$ here), the results described in [10], p. 42, or in Section 2 below imply that there is an optimum $\xi$ concentrated on at most $k$ points. Thus, even if we use this result with the approach of the previous paragraph, we obtain the following comparison in a $k$-parameter problem in Section 2: Method A: minimize a nonlinear function of $2k - 1$ real variables. Method B: solve the Chebyshev problem and then solve $k - 1$ simultaneous linear equations. The fact that the solution of the Chebyshev problem can often be found in the literature (e.g., [2]) makes the comparison of the second method with the first all the more favorable. (2) Although the computational difficulty cannot in general be reduced further, in the case of polynomial regression on $\lbrack -1, 1\rbrack$ there is present a kind of symmetry (discussed in Section 2) which implies that there is an optimum $\xi$ which is symmetrical about 0 and which is concentrated on four points; thus, in the case of cubic regression, this fact reduces the computation under Method A to a minimization in 3 variables, but Method B involves only the solution of a single linear equation. C. A third method, which rests on the game-theoretic results of Section 2, and which is especially useful when one has a reasonable guess of what an optimum $\xi$ is, involves the following steps: first guess a $\xi$, say $\xi^{\ast}$, and compute the minimum on the left side of (2.8); second, if this minimum is achieved for $c = c^{\ast}$, compute the square of the maximum on the right side of (2.9); then, if these two computations yield the same number, $\xi^{\ast}$ is optimum. If one has a guess of a class of $\xi$'s depending on one or several parameters, among which it is thought that there is an optimum $\xi$, then one can maximize over that class at the end of the first step and, the maximum being at $\xi^{\ast}$, go through the same analysis as above. This method is illustrated in Example 3.5 and Example 4. Of course, the remarks (1) and (2) of the previous paragraph can be used in applying Method C, as in these examples. In the example of cubic regression just cited, the optimum procedure turns out to be $\xi(-1) = \xi(1) = \frac{1}{6}, \xi(\frac{1}{2}) = \xi(-\frac{1}{2}) = \frac{1}{3}$. It is striking that any of the commonly used procedures which take equal numbers of observations at equally spaced points on $\lbrack -1, 1\rbrack$ requires over 38% more observations than this optimum procedure in order to yield the same variance for the best linear estimator of $a_3$ (see Example 3.1); the comparison is even more striking for higher degree regression. The unique optimum procedure in the case of degree $h$ is given by (3.3). 
The comparison of a direct computational attack, analogous to that of A above, with the methods developed in Sections 4 and 5 for the problems considered there, indicates even more the inferiority of the direct attack. In particular cases, e.g., Example 5.1, special methods may prove useful. Among recent work in the design of experiments we may mention the papers of Elfving [3], [4], Chernoff [5], Williams [11], Ehrenfeld [12], Guest [13], and Hoel [15]. Only Guest and Hoel explicitly consider computational problems of the kind discussed below. Our methods of employing Chebyshev and game theoretic results seem to be completely new. The results obtained in the examples below are also new, except for some slight overlap with results of [13] and [15], which is explicitly described below. We shall consider elsewhere some further problems of the type considered in this paper.
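The cubic-regression example is easy to check numerically. The sketch below (an illustration, not from the paper) computes the variance factor of the best linear estimator of $a_3$ for the optimum $\xi(\pm 1) = 1/6$, $\xi(\pm 1/2) = 1/3$ and for equal-weight designs on equally spaced points; consistent with the abstract, the latter require over 38% more observations:

```python
import numpy as np

def var_a3(points, weights):
    # Information matrix for f(x) = (1, x, x^2, x^3); the variance of the
    # best linear estimator of a3 is proportional to [M^{-1}]_{3,3}.
    F = np.vander(points, 4, increasing=True)
    M = F.T @ (weights[:, None] * F)
    return np.linalg.inv(M)[3, 3]

# Kiefer-Wolfowitz optimum for the cubic coefficient a3.
opt = var_a3(np.array([-1.0, -0.5, 0.5, 1.0]),
             np.array([1/6, 1/3, 1/3, 1/6]))
print(f"optimum design: {opt:.4f}")  # 16.0

for n in range(4, 9):
    v = var_a3(np.linspace(-1, 1, n), np.full(n, 1 / n))
    print(f"{n} equally spaced points: {v:.2f} "
          f"({100 * (v / opt - 1):.1f}% more observations needed)")
```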

803 citations


"Optimum designs for parameter estim..." refers methods in this paper

  • ...Kiefer (1961) first studied the optimality of designs in the mixture set-up, and established the D-optimality of the (q, 2) simplex lattice design, which puts equal mass at the support points of the design, for estimating the parameters of the second-degree model....
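This classical result is simple to verify numerically. A minimal check for q = 3 (illustrative code, with a lattice grid standing in for the full simplex): by the Kiefer-Wolfowitz equivalence theorem, the design is D-optimal exactly when $d(x) = f'(x) M^{-1}(\xi) f(x)$ never exceeds the number of parameters (here 6) on the simplex:

```python
import numpy as np
from itertools import combinations

def f(x):
    # Second-degree canonical (Scheffe) model for q = 3: six parameters.
    return np.array(list(x) + [x[i] * x[j]
                               for i, j in combinations(range(3), 2)])

# (3,2) simplex lattice: 3 vertices and 3 edge midpoints, equal mass 1/6.
E = np.eye(3)
support = [E[i] for i in range(3)] + \
          [(E[i] + E[j]) / 2 for i, j in combinations(range(3), 2)]
M = sum(np.outer(f(x), f(x)) for x in support) / 6
Minv = np.linalg.inv(M)

# Scan a grid on the simplex; the maximum of d(x) should be 6,
# attained at the support points.
n = 60
worst = 0.0
for i in range(n + 1):
    for j in range(n + 1 - i):
        x = np.array([i, j, n - i - j]) / n
        worst = max(worst, f(x) @ Minv @ f(x))
print(f"max d(x) on grid = {worst:.4f} (bound: 6)")
```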
