Author

W. Jocelyn

Bio: W. Jocelyn is an academic researcher. The author has contributed to research in topics: Estimator & Confidence interval. The author has an h-index of 2 and has co-authored 2 publications receiving 19 citations.


Cited by
Journal ArticleDOI
TL;DR: Writing as the "father" of multiple imputation (MI), the author comments on a collection of contributions on MI, responding in particular to Soren Nielsen's critical discussion of the method.
Abstract: As the "father" of multiple imputation (MI), it gives me great pleasure to be able to comment on this collection of contributions on MI. The nice review by Paul Zhang serves as an excellent introduction to the more critical attention lavished on MI by Soren Nielsen and the extensive discussion by Xiao-Li Meng and Martin Romero. I have a few comments on this package, which are designed to clarify a few points and supplement other points from my "applied statistician's" perspective. My focus in the following is more on Nielsen's article because the expressed views are less consistent with my own than the contributions of the other authors. Nevertheless, despite differences of emphasis, I want to express my sincere gratitude to Nielsen for bringing his technical adroitness to address the issue of multiple imputation, in particular, and the problem of missing data in general (e.g., Nielsen, 1997, 2000).

61 citations

Journal ArticleDOI
TL;DR: Probability sampling designs and randomization inference are widely accepted as the standard approach in sample surveys, particularly by federal agencies and other survey organizations conducting complex large scale surveys on topics related to public policy.
Abstract: According to Hansen, Madow and Tepping [J. Amer. Statist. Assoc. 78 (1983) 776–793], “Probability sampling designs and randomization inference are widely accepted as the standard approach in sample surveys.” In this article, reasons are advanced for the wide use of this design-based approach, particularly by federal agencies and other survey organizations conducting complex large scale surveys on topics related to public policy. Impact of Bayesian methods in survey sampling is also discussed in two different directions: nonparametric calibrated Bayesian inferences from large samples and hierarchical Bayes methods for small area estimation based on parametric models.

47 citations

J. N. K. Rao
01 Jan 2005
TL;DR: A large part of sample survey theory has been directly motivated by practical problems encountered in the design and analysis of sample surveys; in turn, theory has influenced practice, often leading to significant improvements.
Abstract: A large part of sample survey theory has been directly motivated by practical problems encountered in the design and analysis of sample surveys. On the other hand, sample survey theory has influenced practice, often leading to significant improvements. This paper will examine this interplay over the past 60 years or so. Examples where new theory is needed or where theory exists but is not used will also be presented. (J.N.K. Rao, School of Mathematics and Statistics, Carleton University, Ottawa, Ontario, Canada, K1S 5B6.)

46 citations

Journal ArticleDOI
TL;DR: In this article, the authors propose the "multiple imputation by ordered monotone blocks" method, in which each variable with missing values is imputed from a conditional distribution given the variables with fewer or the same number of missing values, drawing from these distributions sequentially.
Abstract: Multiple imputation (MI) has become a standard statistical technique for dealing with missing values. The CDC Anthrax Vaccine Research Program (AVRP) dataset created new challenges for MI due to the large number of variables of different types and the limited sample size. A common method for imputing missing data in such complex studies is to specify, for each of J variables with missing values, a univariate conditional distribution given all other variables, and then to draw imputations by iterating over the J conditional distributions. Such fully conditional imputation strategies have the theoretical drawback that the conditional distributions may be incompatible. When the missingness pattern is monotone, a theoretically valid approach is to specify, for each variable with missing values, a conditional distribution given the variables with fewer or the same number of missing values and sequentially draw from these distributions. In this article, we propose the “multiple imputation by ordered monotone bl...
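The fully conditional ("chained equations") strategy described in the abstract can be sketched as follows. This is a minimal illustration with simple linear-Gaussian conditionals; `fcs_impute` is a hypothetical helper, not the imputation model used in the AVRP study:

```python
import numpy as np

def fcs_impute(X, n_iter=10, rng=None):
    """Fully conditional specification (chained equations) sketch:
    for each column with missing values, fit a linear regression on all
    other columns and redraw the missing entries from the fitted
    conditional, cycling over columns for n_iter sweeps.
    Assumes linear-Gaussian conditionals (illustrative only)."""
    rng = np.random.default_rng(rng)
    X = X.astype(float).copy()
    miss = np.isnan(X)                      # remember where the holes are
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):             # initialize by mean imputation
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            pred = A @ beta
            obs = ~miss[:, j]
            resid_sd = np.std(X[obs, j] - pred[obs])
            # redraw only the missing entries from the fitted conditional
            X[miss[:, j], j] = pred[miss[:, j]] + rng.normal(
                0.0, resid_sd, miss[:, j].sum())
    return X
```

Running `fcs_impute` several times with different seeds yields multiple completed datasets, which is the core idea of MI; the theoretical caveat noted in the abstract is that these per-column conditionals need not correspond to any joint distribution.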

33 citations

Journal ArticleDOI
TL;DR: The article illustrates the N/T compensation effect: with an increasing number of persons N at constant T, model estimation performance improves, and likewise, with an increasing number of time points T at constant N, performance improves as well.
Abstract: Autoregressive modeling has traditionally been concerned with time-series data from one unit (N = 1). For short time series (T < 50), estimation performance problems are well studied and documented...
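The N/T compensation effect described above can be checked with a small simulation. `ar1_pooled_estimate` is an illustrative pooled least-squares estimator of a common AR(1) coefficient across N persons, not the authors' estimator:

```python
import numpy as np

def ar1_pooled_estimate(N, T, phi=0.5, rng=None):
    """Simulate N independent AR(1) series of length T with common
    coefficient phi and unit-variance Gaussian noise, then pool the
    no-intercept least-squares estimate of phi across persons.
    Illustrative sketch only; small-T bias is not corrected."""
    rng = np.random.default_rng(rng)
    num = den = 0.0
    for _ in range(N):
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = phi * y[t - 1] + rng.normal()
        num += y[:-1] @ y[1:]   # sum_t y_{t-1} * y_t
        den += y[:-1] @ y[:-1]  # sum_t y_{t-1}^2
    return num / den
```

Holding the product N*T roughly constant while trading N against T gives estimates of comparable precision, which is the compensation effect the article documents (short series do retain a small-sample bias of order 1/T).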

26 citations