Optimality criteria in survey sampling
01 Jun 1987-Biometrika (Oxford University Press)-Vol. 74, Iss: 2, pp 337-345
TL;DR: In this article, a class of optimality criteria similar to those well accepted in optimal experimental design theory is introduced to fill the gap between some different and conflicting approaches in survey sampling.
Abstract: This paper attempts to fill the gap between several different and conflicting approaches in survey sampling. Based on a fixed-population regression-type model, a class of optimality criteria similar to those well accepted in optimal experimental design theory is introduced. The minimax and superpopulation approaches in survey sampling turn out to correspond to two extreme criteria in the proposed class, which helps clarify the role of randomization. The strategy of simple random sampling with the sample mean and the Rao-Hartley-Cochran strategy are shown to be criterion-robust.
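The two criterion-robust strategies named in the abstract can be sketched numerically. A minimal illustration, not the paper's own notation; the function names and the use of Python's `random` module are assumptions:

```python
import random

def srs_mean_total(y, n, rng=random):
    """Estimate the population total by simple random sampling
    without replacement, expanding the sample mean by N."""
    N = len(y)
    sample = rng.sample(range(N), n)
    return N * sum(y[i] for i in sample) / n

def rhc_total(y, x, n, rng=random):
    """Rao-Hartley-Cochran strategy: randomly partition the population
    into n groups, draw one unit per group with probability proportional
    to the auxiliary size x, and expand by the group's size total over
    the selected unit's size."""
    units = list(range(len(y)))
    rng.shuffle(units)
    groups = [units[g::n] for g in range(n)]  # near-equal random groups
    total = 0.0
    for g in groups:
        X_g = sum(x[j] for j in g)                        # group size total
        i = rng.choices(g, weights=[x[j] for j in g])[0]  # pps draw within group
        total += y[i] * X_g / x[i]
    return total
```

When y is exactly proportional to x, the RHC estimator reproduces the true total regardless of the random grouping, which is one way to see how pps-type strategies exploit auxiliary size information.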
TL;DR: The earliest important papers on planned experiments in Biometrika appeared within a year of each other in 1917 and 1918; roughly, one was concerned with design optimality and industrial experiments, the other with agricultural trials and blocking.
Abstract: The earliest important papers on planned experiments in Biometrika appeared within a year of each other in 1917 and 1918. Roughly speaking, one was concerned with design optimality and industrial experiments, the other with agricultural trials and blocking. We find this approximate division of the subject into two parts helpful in describing its development and growth. As a result of the hostility between Fisher and Karl Pearson, much of the development of designs for agriculture in the 1920s and 1930s occurred off the pages of Biometrika. Despite this, we are able to trace a coherent history of the development of the subject from field trials and response surface methods to clinical trials and Bayesian versions of design optimality.
TL;DR: A survey is given of recent statistical work on the design of experiments, based on the literature of the last six years, with the emphasis on nonstandard applications of optimum design theory.
Abstract: A survey is given of recent statistical work on the design of experiments, based on the literature of the last six years. The emphasis is on nonstandard applications of optimum design theory. Reference is made to surveys on the theory of optimum experimental design, crossover designs and incomplete block designs.
TL;DR: The parallel concepts for control in the two areas lead naturally to a discussion of embedding experiments in surveys or surveys in experiments, and to the parallel controversies between the two modes of inference, design-based and model-based, used in both the experimental design and sample survey literatures.
Abstract: The design and analysis of randomized experiments and randomly selected sample surveys are traced to the work of Fisher, Neyman and Tchuprov in the 1920's and 1930's, although precursors to their work appeared many years earlier. This paper explores some of the developments flowing from their pioneering efforts with an emphasis on the parallels between the methodologies. After reviewing the basic parallels between concepts in the design of experiments and the design of sample surveys, the paper turns to a new class of parallels linking restricted forms of sampling to the design-of-experiments literature on treatment structures, such as that on balanced incomplete block designs. The parallel concepts for control in the two areas lead naturally to a discussion of embedding experiments in surveys or surveys in experiments. After speculating on the possible causes of the separation of the areas, the paper summarizes the parallel controversies between the two modes of inference, design-based and model-based, used in both the experimental design and sample survey literatures. In summary, the paper proposes how new intertwining concepts and constructs may emerge in future research and enrich future practice.
01 Jan 1991
Cites background from "Optimality criteria in survey sampl..."
... (9) Application of formalism to small-area estimation...
TL;DR: Papers in Biometrika on sample surveys have been mainly theoretical, with little work on sampling in the life sciences apart from capture-recapture methods; the major areas have been variable probability designs, the foundations of inference for finite populations, analytic surveys and the role of selection mechanisms in inference.
Abstract: Despite Karl Pearson's stated aim that the 'evolutionist has to become in the widest sense of the words a registrar-general for all forms of life', the papers published in Biometrika on sample surveys have been mainly theoretical and do not start to appear until the 1940s. There has been little work on sampling in the life sciences, with the exception of capture-recapture methods. The major areas have been in variable probability designs, the foundations of inference for finite populations, analytic surveys and the role of selection mechanisms in inference. In recent years the work on survey analysis has been the outcome of an interaction between sample selection mechanisms and mainstream theory, with interesting consequences for both areas.
TL;DR: For general optimality criteria $\Phi$, this article obtains criteria equivalent to $\Phi$-optimality under various conditions on $\Phi$ and shows that such equivalent criteria are useful for analytic or machine computation of $\Phi$-optimum designs.
Abstract: For general optimality criteria $\Phi$, criteria equivalent to $\Phi$-optimality are obtained under various conditions on $\Phi$. Such equivalent criteria are useful for analytic or machine computation of $\Phi$-optimum designs. The theory includes that previously developed in the case of $D$-optimality (Kiefer-Wolfowitz) and $L$-optimality (Karlin-Studden-Fedorov), as well as $E$-optimality and criteria arising in response surface fitting and minimax extrapolation. Multiresponse settings and models with variable covariance and cost structure are included. Methods for verifying the conditions required on $\Phi$, and for computing the equivalent criteria, are illustrated.
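The equivalence theory described in the abstract generalizes the Kiefer-Wolfowitz theorem for $D$-optimality, which can be checked numerically in a toy case. A sketch for simple linear regression on [-1, 1], assuming the classical two-point candidate design (the function names and grid are illustrative):

```python
def info_matrix(points, weights):
    """M(xi) = sum_i w_i f(x_i) f(x_i)^T for the linear model f(x) = (1, x)."""
    m00 = sum(weights)
    m01 = sum(w * x for x, w in zip(points, weights))
    m11 = sum(w * x * x for x, w in zip(points, weights))
    return [[m00, m01], [m01, m11]]

def variance_function(x, M):
    """Standardized variance d(x, xi) = f(x)^T M^{-1} f(x) with f(x) = (1, x),
    using an explicit 2x2 inverse."""
    a, b = M[0]
    _, c = M[1]
    det = a * c - b * b
    i00, i01, i11 = c / det, -b / det, a / det
    return i00 + 2.0 * i01 * x + i11 * x * x

# Candidate D-optimal design: mass 1/2 at each endpoint of [-1, 1]
M = info_matrix([-1.0, 1.0], [0.5, 0.5])
grid = [k / 100.0 for k in range(-100, 101)]
d_max = max(variance_function(x, M) for x in grid)
# Kiefer-Wolfowitz equivalence: xi is D-optimal iff max_x d(x, xi) = p (here p = 2)
```

Here d(x, xi) = 1 + x^2 attains its maximum 2 (the number of parameters) at the design points, certifying $D$-optimality of the two-point design.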
01 Jan 1975
TL;DR: In this article, the authors derive optimal strategies involving linear estimators under certain variance assumptions within a superpopulation model, compare them under various model assumptions, and show that the conventional ratio estimator is, in a certain natural sense, optimal.
Abstract: Problems of estimating totals in finite populations, when auxiliary information regarding variate values is available, are considered under some linear regression, 'super-population', models. Optimal strategies involving linear estimators are derived under certain variance assumptions and compared under various assumptions. For a model which seems to apply in many practical problems, the conventional ratio estimator is shown to be, in a certain natural sense, optimal, but for all models considered, the optimal sampling plans are purposive, i.e. nonrandom. With a squared error loss function, the strategy of using a probability proportional to size sampling plan and the Horvitz-Thompson estimator is shown to be inadmissible in many models for which the strategy seems 'reasonable' and in a particular model for which it is, in one sense, optimal. Some of the results concerning purposive sampling and the ratio estimator are supported by an empirical study.
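The two point estimators contrasted in the abstract can each be written in one line. A hedged sketch (function names are illustrative; `X` denotes the known auxiliary total and `pi_s` the inclusion probabilities of the sampled units):

```python
def ratio_total(y_s, x_s, X):
    """Conventional ratio estimator of the population total:
    scale the known auxiliary total X by the sample ratio y/x."""
    return X * sum(y_s) / sum(x_s)

def horvitz_thompson_total(y_s, pi_s):
    """Horvitz-Thompson estimator: weight each sampled y-value by the
    inverse of its inclusion probability."""
    return sum(yi / pi for yi, pi in zip(y_s, pi_s))
```

Under an exact proportional relationship y_i = beta * x_i with pps inclusion probabilities pi_i = n * x_i / X, both estimators return beta * X exactly, which illustrates why both are natural candidates under the regression-type superpopulation models the paper considers.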