scispace - formally typeset
Author

Peter W.M. John

Bio: Peter W.M. John is an academic researcher from the University of Texas at Austin. The author has contributed to research in topics: Block design & Factorial experiment. The author has an h-index of 9 and has co-authored 23 publications receiving 247 citations.

Papers
Journal Article•DOI•
TL;DR: This paper extends the balanced bootstrap simulation methodology of Davison et al. (1986) to second-order balance, which principally affects bootstrap estimation of variance; the techniques involve Latin square and balanced incomplete block designs.
Abstract: Davison et al. (1986) have shown that finite bootstrap simulations can be improved by forcing balance in the aggregate of simulated data sets. Their methods yield first-order balance, which principally affects bootstrap estimation of bias. Here we extend the methodology to second-order balance, which principally affects bootstrap estimation of variance. The particular techniques involve Latin square and balanced incomplete block designs. Numerical examples are given to illustrate both the positive and the negative features of the balanced simulations.
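The first-order balance idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: every observation appears equally often in the aggregate of resamples, so the bootstrap bias estimate of the sample mean vanishes exactly. The data, seed, and sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20)          # toy data
n, B = len(x), 1000

# First-order balance: concatenate B copies of the indices 0..n-1,
# shuffle, and cut into B resamples of size n.  Every observation
# then appears exactly B times in the aggregate of simulated sets.
idx = np.repeat(np.arange(n), B)
rng.shuffle(idx)
resamples = idx.reshape(B, n)

boot_means = x[resamples].mean(axis=1)
bias_est = boot_means.mean() - x.mean()
# For the sample mean, exact first-order balance forces the bootstrap
# bias estimate to zero (up to floating-point rounding).
print(bias_est)
```

With ordinary (unbalanced) resampling the bias estimate for the mean is only approximately zero; balance removes that simulation noise entirely.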

45 citations

Journal Article•DOI•
TL;DR: This paper considers the problem of partitioning the blends (runs) of four mixture components into two orthogonal blocks when a quadratic model is fitted, motivated by an industrial investigation of bread-making flours carried out at Spillers Milling Limited, a member of the Dalgety group of companies.
Abstract: The problem of partitioning the blends (runs) of four mixture components into two orthogonal blocks when a quadratic model is fitted is considered. This is motivated by an industrial investigation of bread-making flours carried out at Spillers Milling Limited, a member of the Dalgety group of companies in the United Kingdom. The design solution proposed by John and described by Cornell is discussed and extended. Study of the characteristics of Latin squares of side 4 leads to reliable rules for quickly obtaining designs of specified kinds. One such design was selected for the experiment at Spillers Milling. Mixture-component values that cause singularity in the new designs are identified, and values that provide designs with highest D-criterion values are obtained for the class of designs discussed. Conveniently rounded, near-optimal mixture component values were chosen for the Spillers Milling experiment, and the analysis led to the prediction of an optimal flour mixture.
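As a rough illustration of the D-criterion mentioned above, the sketch below evaluates det(X'X/N)^(1/p) for one classical candidate design, the {4,2} simplex-lattice for a quadratic Scheffé mixture model in four components. This is not one of the Spillers Milling designs; the design and all names are illustrative.

```python
import itertools
import numpy as np

# Scheffe quadratic mixture model in four components:
# E[y] = sum b_i x_i + sum b_ij x_i x_j   (10 terms, no intercept).
def model_matrix(points):
    pts = np.asarray(points, dtype=float)
    cross = [pts[:, i] * pts[:, j]
             for i, j in itertools.combinations(range(4), 2)]
    return np.column_stack([pts] + cross)

# {4,2} simplex-lattice: the 4 vertices plus the 6 edge midpoints,
# a saturated design for the quadratic model (10 runs, 10 terms).
vertices = np.eye(4)
midpoints = [(np.eye(4)[i] + np.eye(4)[j]) / 2
             for i, j in itertools.combinations(range(4), 2)]
X = model_matrix(np.vstack([vertices] + midpoints))

M = X.T @ X / len(X)                           # information matrix per run
d_value = np.linalg.det(M) ** (1 / X.shape[1])  # D-criterion value
print(d_value)
```

Comparing d_value across candidate blockings is the kind of calculation behind "highest D-criterion values" in the abstract; singular designs show up as a zero determinant.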

45 citations

Journal Article•DOI•
TL;DR: In this paper, a balanced incomplete block experiment is described in which the nine treatments were quantitative rather than qualitative, being actually two additives each at four levels and a third at one level.
Abstract: The analysis of balanced incomplete block experiments is discussed in most of the standard textbooks on experimental design. These discussions are usually confined to qualitative treatments, it being customary to obtain an adjusted sum of squares for treatments and to give procedures for determining the significance of the observed difference between two treatment totals. This paper describes a balanced incomplete block experiment in which the nine treatments were quantitative rather than qualitative, being actually two additives each at four levels and a third at one level. The unusual feature of the analysis is found in Section 3, where the adjusted sum of squares for treatments is subdivided into individual degrees of freedom, each of which is meaningful and specific to this example, and from which response curves are obtained for the two factors used at four levels each.
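The subdivision into individual degrees of freedom can be illustrated with standard orthogonal polynomial contrasts for a factor at four equally spaced levels. The treatment means and replication count below are hypothetical, not the paper's data.

```python
import numpy as np

# Orthogonal polynomial contrasts for four equally spaced levels:
# each contrast carries a single degree of freedom of the treatment
# sum of squares and maps onto a term of the response curve.
contrasts = {
    'linear':    np.array([-3, -1,  1, 3]),
    'quadratic': np.array([ 1, -1, -1, 1]),
    'cubic':     np.array([-1,  3, -3, 1]),
}

means = np.array([4.1, 5.0, 5.6, 5.9])  # hypothetical adjusted treatment means
r = 3                                   # assumed replications per mean

# Single-df sum of squares: SS = r * (c . ybar)^2 / (c . c)
ss = {name: r * (c @ means) ** 2 / (c @ c) for name, c in contrasts.items()}
for name, value in ss.items():
    print(name, round(value, 3))
```

A large linear component with small quadratic and cubic components, as in this made-up example, would indicate an essentially straight-line response over the four levels.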

32 citations

Journal Article•DOI•
TL;DR: In this article, the authors use the principle of foldover designs to arrange the runs in a factorial experiment in sequences so that the main effects and, sometimes, the two-factor interactions are uncorrelated with linear or quadratic time trends.
Abstract: When factorial experiments are carried out over a period of time, the response may be subject to time trends. This might happen, for example, with the steady buildup of deposits in a test engine. This article shows how to arrange the runs in a factorial experiment in sequences so that the main effects and, sometimes, the two-factor interactions are uncorrelated with linear or quadratic time trends. This is achieved by using the principle of foldover designs. Trend-free sequences are obtained for both 2^n and 3^n factorials.
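The foldover principle for linear-trend-free main effects can be checked numerically: pairing each run of a half fraction with its immediate foldover makes every main-effect contrast orthogonal to a linear time trend. Below is a small sketch for a 2^3 factorial, assuming the I = ABC half fraction; it illustrates the idea, not the paper's particular construction.

```python
import numpy as np

# I = ABC half fraction of a 2^3 factorial; each run is followed
# immediately by its foldover (all signs reversed).
half = np.array([[ 1,  1,  1],
                 [ 1, -1, -1],
                 [-1,  1, -1],
                 [-1, -1,  1]])
order = np.empty((8, 3), dtype=int)
order[0::2] = half           # runs at times 0, 2, 4, 6
order[1::2] = -half          # their foldovers at times 1, 3, 5, 7

t = np.arange(8)
lin = t - t.mean()           # centred linear time trend
# Inner product of each main-effect column with the trend: all zero,
# so the main effects A, B, C are free of any linear time trend.
print(order.T @ lin)
```

Within each foldover pair the main-effect signs cancel, so a trend that changes (approximately) linearly between adjacent runs contributes nothing to the main-effect contrasts.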

31 citations

Journal Article•DOI•
TL;DR: A method called semifolding is presented for choosing the points in the second experiment: starting from an initial resolution IV fraction in which the main effects are clean and the interactions are aliased in chains, further runs are planned after analysis of the initial experiment to isolate certain interactions by breaking the chains.
Abstract: Some experimenters carry out their investigation in stages. They begin with an initial 2^(n-p) fraction of resolution IV, in which the main effects are clean and the interactions are aliased in chains. Then, having analyzed the initial experiment, they plan further runs to isolate certain interactions by breaking the chains. This paper presents a method called semifolding for choosing the points in the second experiment.
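Here is a minimal sketch of the semifolding idea under one common convention (fold on factor A, keep the half of the foldover runs with A at its high level): the initial 2^(4-1) resolution IV fraction aliases AB with CD, and the four added runs break that chain. The generator and the retained half are assumptions for illustration, not necessarily the paper's choices.

```python
import itertools
import numpy as np

# Initial 2^(4-1) resolution IV fraction with generator D = ABC,
# so I = ABCD and the two-factor interactions alias in chains,
# e.g. AB = CD.
base = np.array(list(itertools.product([-1, 1], repeat=3)))
init = np.column_stack([base, base.prod(axis=1)])   # columns A B C D

# Semifold: fold on A (reverse the sign of column A only), then
# keep just the foldover runs with A = +1 -- four extra runs
# instead of the eight of a full foldover.
fold = init.copy()
fold[:, 0] *= -1
semi = fold[fold[:, 0] == 1]

def ab_cd(design):
    ab = design[:, 0] * design[:, 1]
    cd = design[:, 2] * design[:, 3]
    return np.column_stack([ab, cd])

# In the initial fraction AB and CD are fully aliased (rank 1);
# adding the four semifold runs breaks the chain (rank 2).
print(np.linalg.matrix_rank(ab_cd(init)))       # 1
combined = np.vstack([init, semi])
print(np.linalg.matrix_rank(ab_cd(combined)))   # 2
```

The combined 12-run design leaves AB and CD correlated but separately estimable, which is the point of breaking the alias chain with only half the foldover runs.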

14 citations


Cited by
Journal Article•DOI•
TL;DR: The weighted likelihood bootstrap (WLB), a generalization of Rubin's Bayesian bootstrap, is introduced as a way to simulate approximately from a posterior distribution.
Abstract: We introduce the weighted likelihood bootstrap (WLB) as a way to simulate approximately from a posterior distribution. This method is often easy to implement, requiring only an algorithm for calculating the maximum likelihood estimator, such as iteratively reweighted least squares. In the generic weighting scheme, the WLB is first-order correct under quite general conditions. Inaccuracies can be removed by using the WLB as a source of samples in the sampling-importance resampling (SIR) algorithm, which also allows incorporation of particular prior information. The SIR-adjusted WLB can be a competitive alternative to other integration methods in certain models. Asymptotic expansions elucidate the second-order properties of the WLB, which is a generalization of Rubin's Bayesian bootstrap. The calculation of approximate Bayes factors for model comparison is also considered. We note that, given a sample simulated from the posterior distribution, the required marginal likelihood may be simulation-consistently estimated by the harmonic mean of the associated likelihood values; a modification of this estimator that avoids instability is also noted. These methods provide simple ways of calculating approximate Bayes factors and posterior model probabilities for a very wide class of models.
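A toy sketch of the generic weighting scheme, for the simplest case of a normal mean: draw uniform Dirichlet weights and take the weighted maximum likelihood estimate, which here is just a weighted average. The data, seed, and sizes are arbitrary; this illustrates the idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=200)
n, B = len(x), 2000

# Generic weighting: w ~ Dirichlet(1, ..., 1).  Maximising the
# weighted normal log-likelihood in the mean gives the weighted
# average of the data, so each WLB draw is simply w @ x.
w = rng.dirichlet(np.ones(n), size=B)
wlb_draws = w @ x

# Under a flat prior the posterior for the mean is N(xbar, s^2/n);
# the WLB draws approximate it to first order.
print(wlb_draws.mean())   # close to x.mean()
print(wlb_draws.std())    # close to s / sqrt(n), roughly 0.07 here
```

For models without a closed-form weighted MLE, the same scheme applies with each draw obtained by running the usual fitting algorithm (e.g. iteratively reweighted least squares) on the weighted data.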

1,474 citations

Journal Article•DOI•

1,275 citations

Journal Article•DOI•
TL;DR: In this paper, the authors investigate the trade-off between the number of profiles per subject and the number of subjects on the statistical accuracy of the estimators that describe the partworth heterogeneity.
Abstract: The drive to satisfy customers in narrowly defined market segments has led firms to offer wider arrays of products and services. Delivering products and services with the appropriate mix of features for these highly fragmented market segments requires understanding the value that customers place on these features. Conjoint analysis endeavors to unravel the values, or partworths, that customers place on the product or service's attributes from experimental subjects' evaluations of profiles based on hypothetical products or services. When the goal is to estimate the heterogeneity in the customers' partworths, traditional estimation methods, such as least squares, require each subject to respond to more profiles than product attributes, resulting in lengthy questionnaires for complex, multiattribute products or services. Long questionnaires pose practical and theoretical problems. Response rates tend to decrease with increasing questionnaire length, and, more importantly, academic evidence indicates that long questionnaires may induce response biases. The problems associated with long questionnaires call for experimental designs and estimation methods that recover the heterogeneity in the partworths with shorter questionnaires. Unlike more popular estimation methods, Hierarchical Bayes (HB) random effects models do not require that individual-level design matrices be of full rank, which leads to the possibility of using fewer profiles per subject than currently used. Can this theoretical possibility be practically implemented? This paper tests this conjecture with empirical studies and mathematical analysis. The random effects model in the paper describes the heterogeneity in subject-level partworths, or regression coefficients, with a linear model that can include subject-level covariates. In addition, the error variances are specific to the subjects, thus allowing for the differential use of the measurement scale by different subjects.
In the empirical study, subjects' responses to a full profile design are randomly deleted to test the performance of HB methods with declining sample sizes. These simple experiments indicate that HB methods can recover heterogeneity and estimate individual-level partworths, even when individual-level least squares estimators do not exist due to insufficient degrees of freedom. Motivated by these empirical studies, the paper analytically investigates the trade-off between the number of profiles per subject and the number of subjects on the statistical accuracy of the estimators that describe the partworth heterogeneity. The paper considers two experimental designs: each subject receives the same set of profiles, and subjects receive different blocks of a fractional factorial design. In the first case, the optimal design, subject to a budget constraint, uses more subjects and fewer profiles per subject when the ratio of unexplained, partworth heterogeneity to unexplained response variance is large. In the second case, one can maintain a given level of estimation accuracy as the number of profiles per subject decreases by increasing the number of subjects assigned to each block. These results provide marketing researchers the option of using shorter questionnaires for complex products or services. The analysis assumes that response quality is independent of questionnaire length and does not address the impact of design factors on response quality. If response quality and questionnaire length were, in fact, unrelated, then marketing researchers would still find the paper's results useful in improving the efficiency of their conjoint designs. However, if response quality were to decline with questionnaire length, as the preponderance of academic research indicates, then the option to use shorter questionnaires would become even more valuable.
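The point that pooling across subjects does not need full-rank individual-level design matrices can be shown in miniature: with fewer profiles than attributes, per-subject least squares is singular, yet stacking subjects (the fully pooled limiting case of a random-effects model) still recovers the population-mean partworths. All sizes and values below are made up, and plain pooled least squares stands in for the paper's HB estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_prof, n_attr = 200, 4, 6      # fewer profiles than attributes

mu = np.array([1.0, -0.5, 0.3, 0.8, -1.2, 0.4])   # population partworths
betas = mu + 0.3 * rng.normal(size=(n_subj, n_attr))  # subject partworths

# Each subject rates n_prof profiles with +/-1 coded attributes.
X = rng.choice([-1.0, 1.0], size=(n_subj, n_prof, n_attr))
y = np.einsum('spa,sa->sp', X, betas) + 0.1 * rng.normal(size=(n_subj, n_prof))

# Per-subject least squares cannot exist: each subject's 4x6 design
# matrix is rank-deficient (4 rows < 6 attributes).
print(np.linalg.matrix_rank(X[0]))      # at most 4

# Pooling all subjects' rows recovers the population-mean partworths
# despite the singular individual-level designs.
Xs, ys = X.reshape(-1, n_attr), y.ravel()
mu_hat = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print(mu_hat)                           # close to mu
```

An HB model goes further than this pooled fit by also producing shrunken individual-level partworth estimates, but the rank argument above is what makes short questionnaires feasible at all.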

512 citations

Journal Article•DOI•
TL;DR: In this article, the authors review major developments in the design of experiments, offer their thoughts on important directions for the future, and make specific recommendations for experimenters and statisticians who are students and teachers of experimental design.
Abstract: We review major developments in the design of experiments, offer our thoughts on important directions for the future, and make specific recommendations for experimenters and statisticians who are students and teachers of experimental design, practitioners of experimental design, and researchers jointly exploring new frontiers. Specific topics covered are optimal design, computer-aided design, robust design, response surface design, mixture design, factorial design, block design, and designs for nonlinear models.

293 citations