Author

Pranab Kumar Sen

Bio: Pranab Kumar Sen is an academic researcher from the University of North Carolina at Chapel Hill. The author has contributed to research in topics: Estimator & Nonparametric statistics. The author has an h-index of 51, has co-authored 570 publications, and has received 19,997 citations. Previous affiliations of Pranab Kumar Sen include the Indian Statistical Institute & Academia Sinica.


Papers
Journal Article
TL;DR: In this paper, the problem of selecting the smoothing parameter for the density estimator based on the Poisson distribution is considered, and two cross-validation methods, namely likelihood-based cross-validation and integrated squared error cross-validation, are compared through a numerical study.
Abstract: In this paper the problem of selecting the smoothing parameter for the density estimator based on the Poisson distribution [see Gawronski and Stadtmüller (1980) and Chaubey and Sen (1996)] is considered. Two cross-validation methods, namely likelihood-based cross-validation and integrated squared error cross-validation, are compared through a numerical study. It is found that the choice proposed in Chaubey and Sen (1996) may not be appropriate for large samples; instead, a data-adaptive choice works well for large as well as small samples. Based on this study we also claim that the smoothing parameters selected by either of the two cross-validation methods are asymptotically equivalent and appear to provide the smallest Hellinger distance between the estimator and the true density.
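
A minimal sketch (not the paper's code) of likelihood-based leave-one-out cross-validation for the smoothing parameter of a Poisson-weights density estimator of the Chaubey-Sen type is given below; the assumed estimator form, the candidate grid, and the exponential toy data are illustrative.

import numpy as np
from scipy.stats import poisson

def poisson_weight_density(x, sample, lam):
    # Assumed Chaubey-Sen-type form: lam * sum_k [F_n((k+1)/lam) - F_n(k/lam)] * p_k(lam * x),
    # with F_n the empirical distribution function and p_k the Poisson pmf at k.
    x = np.atleast_1d(x).astype(float)
    sample = np.sort(np.asarray(sample, dtype=float))
    n = sample.size
    kmax = int(lam * max(sample.max(), x.max())) + 50      # truncate the infinite Poisson sum
    k = np.arange(kmax + 1)
    ecdf = np.searchsorted(sample, k / lam, side="right") / n
    ecdf_next = np.searchsorted(sample, (k + 1) / lam, side="right") / n
    pk = poisson.pmf(k[None, :], lam * x[:, None])
    return lam * (pk * (ecdf_next - ecdf)[None, :]).sum(axis=1)

def likelihood_cv_lambda(sample, grid):
    # Choose lambda maximizing the leave-one-out log-likelihood over a candidate grid.
    best_lam, best_score = None, -np.inf
    for lam in grid:
        score = 0.0
        for i in range(sample.size):
            loo = np.delete(sample, i)
            score += np.log(poisson_weight_density(sample[i], loo, lam)[0] + 1e-300)
        if score > best_score:
            best_lam, best_score = lam, score
    return best_lam

rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=100)                # nonnegative toy data
print("selected smoothing parameter:",
      likelihood_cv_lambda(data, grid=np.linspace(5, 60, 12)))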

17 citations

Journal ArticleDOI
TL;DR: In this paper, a modified version of the James-Stein rule (incorporating the idea of preliminary test estimators) is utilized in formulating some estimators based on U-statistics and their jackknifed estimators of the dispersion matrix.
Abstract: For a vector of estimable parameters, a modified version of the James-Stein rule (incorporating the idea of preliminary test estimators) is utilized in formulating some estimators based on U-statistics and their jackknifed estimators of the dispersion matrix. Asymptotic admissibility properties of the classical U-statistics, their preliminary test versions and the proposed estimators are studied.
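
The sketch below is a rough illustration, not the paper's U-statistics construction: it contrasts the two ingredients the abstract combines, a positive-part James-Stein rule that shrinks a p-dimensional estimate toward a pivot, and a preliminary-test estimator that retains the unrestricted estimate only when a chi-square test rejects the pivot. The plug-in covariance, the pivot at the origin, and the toy data are assumptions.

import numpy as np
from scipy.stats import chi2

def james_stein(theta_hat, cov, pivot=None):
    # Positive-part James-Stein shrinkage of theta_hat toward the pivot,
    # using the Mahalanobis distance under the plug-in covariance.
    p = theta_hat.size
    pivot = np.zeros(p) if pivot is None else pivot
    diff = theta_hat - pivot
    dist2 = diff @ np.linalg.solve(cov, diff)
    shrink = max(0.0, 1.0 - (p - 2) / dist2)
    return pivot + shrink * diff

def preliminary_test(theta_hat, cov, pivot=None, alpha=0.05):
    # Keep the unrestricted estimate only if a chi-square test rejects H0: theta = pivot.
    p = theta_hat.size
    pivot = np.zeros(p) if pivot is None else pivot
    diff = theta_hat - pivot
    dist2 = diff @ np.linalg.solve(cov, diff)
    return theta_hat if dist2 > chi2.ppf(1 - alpha, df=p) else pivot

rng = np.random.default_rng(1)
theta = np.array([0.3, 0.1, -0.2, 0.4])
est = theta + rng.normal(scale=0.5, size=4)                # stand-in for a U-statistic vector
cov = 0.25 * np.eye(4)                                     # plug-in (e.g. jackknifed) dispersion
print(james_stein(est, cov))
print(preliminary_test(est, cov))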

17 citations

Book ChapterDOI
TL;DR: The Chen-Stein theorem and Kendall's tau statistic are exploited in this context with emphasis on dimensional rather than sample-size asymptotics, and applications of these findings to some microarray data models are illustrated.
Abstract: High-dimensional data models, often with low sample size, abound in many interdisciplinary studies, genomics and large biological systems being most noteworthy. The conventional assumption of multinormality or linearity of regression may not be plausible for such models, which are likely to be statistically complex due to a large number of parameters as well as various underlying restraints. As such, parametric approaches may not be very effective. Anything beyond parametrics, albeit having increased scope and robustness perspectives, may generally be baffled by the low sample size and hence unable to give reasonable margins of error. Kendall's tau statistic is exploited in this context with emphasis on dimensional rather than sample-size asymptotics. The Chen-Stein theorem has been thoroughly appraised in this study. Applications of these findings to some microarray data models are illustrated.
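
A minimal sketch of the setting, under illustrative assumptions (simulated data and an arbitrary threshold): many variables, few samples, pairwise Kendall tau statistics, and a count of large exceedances of the kind the Chen-Stein Poisson approximation is used to calibrate.

import itertools
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
n_genes, n_samples = 150, 10                               # high dimension, low sample size
X = rng.normal(size=(n_genes, n_samples))

threshold = 0.6                                            # flag strongly (dis)concordant pairs
exceedances = sum(
    1
    for i, j in itertools.combinations(range(n_genes), 2)
    if abs(kendalltau(X[i], X[j])[0]) >= threshold
)
# Under independence, the number of exceedances over the ~n_genes^2/2 pairs is
# approximately Poisson -- the kind of count the Chen-Stein bound calibrates.
print("pairs with |tau| >=", threshold, ":", exceedances)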

17 citations

Journal ArticleDOI
TL;DR: In this article, minimum risk estimation of the mean vector of a multivariate normal distribution, based on the James-Stein and an allied rule, is considered via a two-stage procedure, and the relative risk efficiency results are studied.
Abstract: In the context of minimum risk (point) estimation of the mean vector of a multivariate normal distribution, based on the James-Stein and an allied rule, a two-stage procedure is considered, and the relative risk efficiency results are studied.
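
A hedged sketch of a generic two-stage minimum-risk scheme of this kind: a pilot stage estimates the unknown variance, the total sample size is chosen to balance estimation loss against a per-observation sampling cost, and the final mean estimate is shrunk by a positive-part James-Stein rule. The loss constants, the equal-variance simplification, and the toy data are illustrative rather than the paper's exact formulation.

import numpy as np

def two_stage_js_mean(draw, p, pilot=20, A=1.0, c=0.01):
    # draw(k) must return a (k, p) array of fresh observations.
    first = draw(pilot)
    sigma2 = first.var(axis=0, ddof=1).mean()              # pooled variance estimate from the pilot
    # The sample-mean risk A*p*sigma2/n + c*n is minimized at n = sqrt(A*p*sigma2/c).
    n_total = max(pilot, int(np.ceil(np.sqrt(A * p * sigma2 / c))))
    data = np.vstack([first, draw(n_total - pilot)]) if n_total > pilot else first
    xbar = data.mean(axis=0)
    dist2 = n_total * (xbar @ xbar) / sigma2
    shrink = max(0.0, 1.0 - (p - 2) / dist2)               # positive-part James-Stein toward 0
    return shrink * xbar, n_total

rng = np.random.default_rng(3)
mu = np.array([0.5, 0.0, -0.5, 1.0])
estimate, n_used = two_stage_js_mean(
    lambda k: rng.normal(loc=mu, scale=2.0, size=(k, mu.size)), p=mu.size
)
print(n_used, estimate)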

17 citations

Journal ArticleDOI
TL;DR: For a multinormal distribution with an unknown dispersion matrix, union-intersection (UI) tests for the mean against one-sided alternatives are considered in this article; the null distribution of the UI test statistic is derived and its power monotonicity properties are studied.

16 citations


Cited by
Journal ArticleDOI
TL;DR: A nonparametric approach to the analysis of areas under correlated ROC curves is presented, by using the theory on generalized U-statistics to generate an estimated covariance matrix.
Abstract: Methods of evaluating and comparing the performance of diagnostic tests are of increasing importance as new tests are developed and marketed. When a test is based on an observed variable that lies on a continuous or graded scale, an assessment of the overall value of the test can be made through the use of a receiver operating characteristic (ROC) curve. The curve is constructed by varying the cutpoint used to determine which values of the observed variable will be considered abnormal and then plotting the resulting sensitivities against the corresponding false positive rates. When two or more empirical curves are constructed based on tests performed on the same individuals, statistical analysis on differences between curves must take into account the correlated nature of the data. This paper presents a nonparametric approach to the analysis of areas under correlated ROC curves, by using the theory on generalized U-statistics to generate an estimated covariance matrix.
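
A minimal sketch (not the paper's code) of the generalized U-statistics idea: the nonparametric (Mann-Whitney) area under the curve for each of two tests measured on the same subjects, with a covariance matrix estimated from per-subject components; the toy data and variable names are illustrative.

import numpy as np

def auc_components(pos, neg):
    # Mann-Whitney kernel: 1 if pos > neg, 1/2 if tied, 0 otherwise.
    psi = (pos[:, None] > neg[None, :]).astype(float) + 0.5 * (pos[:, None] == neg[None, :])
    return psi.mean(), psi.mean(axis=1), psi.mean(axis=0)  # AUC, per-positive, per-negative components

def correlated_auc(pos_scores, neg_scores):
    # pos_scores, neg_scores: (n_tests, n_pos) and (n_tests, n_neg) score arrays,
    # rows indexed by diagnostic test, columns by the shared subjects.
    aucs, v10, v01 = zip(*(auc_components(p, n) for p, n in zip(pos_scores, neg_scores)))
    v10, v01 = np.array(v10), np.array(v01)
    n_pos, n_neg = v10.shape[1], v01.shape[1]
    cov = np.cov(v10) / n_pos + np.cov(v01) / n_neg        # covariance matrix of the AUC vector
    return np.array(aucs), cov

rng = np.random.default_rng(4)
pos_signal, neg_signal = rng.normal(size=50), rng.normal(size=80)
pos = np.vstack([pos_signal + rng.normal(scale=1.0, size=50) + 1.0,    # two correlated tests,
                 pos_signal + rng.normal(scale=1.5, size=50) + 0.8])   # diseased subjects
neg = np.vstack([neg_signal + rng.normal(scale=1.0, size=80),          # same two tests,
                 neg_signal + rng.normal(scale=1.5, size=80)])         # non-diseased subjects
auc, cov = correlated_auc(pos, neg)
z = (auc[0] - auc[1]) / np.sqrt(cov[0, 0] + cov[1, 1] - 2 * cov[0, 1]) # paired comparison of the AUCs
print(auc.round(3), round(float(z), 2))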

16,496 citations

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Book
21 Mar 2002
TL;DR: As discussed by the authors, this is an essential textbook for any student or researcher in biology needing to design experiments, sampling programs or analyse the resulting data. It covers both classical and Bayesian philosophies before advancing to the analysis of linear and generalized linear models. Topics covered include linear and logistic regression, simple and complex ANOVA models (for factorial, nested, block, split-plot and repeated measures and covariance designs), and log-linear models. Multivariate techniques, including classification and ordination, are then introduced.
Abstract: An essential textbook for any student or researcher in biology needing to design experiments, sampling programs or analyse the resulting data. The text begins with a revision of estimation and hypothesis testing methods, covering both classical and Bayesian philosophies, before advancing to the analysis of linear and generalized linear models. Topics covered include linear and logistic regression, simple and complex ANOVA models (for factorial, nested, block, split-plot and repeated measures and covariance designs), and log-linear models. Multivariate techniques, including classification and ordination, are then introduced. Special emphasis is placed on checking assumptions, exploratory data analysis and presentation of results. The main analyses are illustrated with many examples from published papers and there is an extensive reference list to both the statistical and biological literature. The book is supported by a website that provides all data sets, questions for each chapter and links to software.

9,509 citations

Journal ArticleDOI
TL;DR: In this paper, it was shown that a simple FDR controlling procedure for independent test statistics can also control the false discovery rate when test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses.
Abstract: Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with positive correlation matrix and multivariate $t$. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased.
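
A minimal sketch of the step-up procedure in question: reject the k smallest p-values, where k is the largest i with p(i) ≤ iq/m, together with the conservative modification for arbitrary dependence that divides the level q by the sum of 1/i. The toy p-values are made up.

import numpy as np

def fdr_stepup(pvals, q=0.05, arbitrary_dependence=False):
    # Benjamini-Hochberg step-up: reject the k smallest p-values, where k is the
    # largest i with p_(i) <= i*q/m.  The arbitrary-dependence variant divides q
    # by sum(1/i) (the conservative modification mentioned in the abstract).
    p = np.asarray(pvals, dtype=float)
    m = p.size
    if arbitrary_dependence:
        q = q / np.sum(1.0 / np.arange(1, m + 1))
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

pvals = [0.0001, 0.0004, 0.019, 0.035, 0.041, 0.22, 0.57, 0.86]
print(fdr_stepup(pvals))
print(fdr_stepup(pvals, arbitrary_dependence=True))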

9,335 citations

Journal ArticleDOI
TL;DR: In this article, a simple and robust estimator of regression coefficient β based on Kendall's rank correlation tau is studied, where the point estimator is the median of the set of slopes (Yj - Yi )/(tj-ti ) joining pairs of points with ti ≠ ti.
Abstract: The least squares estimator of a regression coefficient β is vulnerable to gross errors and the associated confidence interval is, in addition, sensitive to non-normality of the parent distribution. In this paper, a simple and robust (point as well as interval) estimator of β based on Kendall's [6] rank correlation tau is studied. The point estimator is the median of the set of slopes (Yj - Yi )/(tj-ti ) joining pairs of points with ti ≠ ti , and is unbiased. The confidence interval is also determined by two order statistics of this set of slopes. Various properties of these estimators are studied and compared with those of the least squares and some other nonparametric estimators.
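
A minimal sketch of the estimator described above: the median of all pairwise slopes over pairs with ti ≠ tj, with a confidence interval read off from order statistics of the sorted slopes. The normal-approximation cutoff used for the interval endpoints is an illustrative shortcut, not necessarily the paper's exact construction.

import itertools
import numpy as np
from scipy.stats import norm

def kendall_theil_sen(t, y, alpha=0.05):
    # Point estimate: median of all pairwise slopes over pairs with distinct t.
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    slopes = np.sort([(y[j] - y[i]) / (t[j] - t[i])
                      for i, j in itertools.combinations(range(t.size), 2)
                      if t[i] != t[j]])
    n, N = t.size, slopes.size
    # Interval endpoints: order statistics of the slopes, with a cutoff based on the
    # null variance of Kendall's S (no ties in t assumed) -- an illustrative shortcut.
    c = norm.ppf(1 - alpha / 2) * np.sqrt(n * (n - 1) * (2 * n + 5) / 18.0)
    lo = max(int(np.floor((N - c) / 2.0)), 0)
    hi = min(int(np.ceil((N + c) / 2.0)), N - 1)
    return np.median(slopes), slopes[lo], slopes[hi]

rng = np.random.default_rng(5)
t = np.arange(30, dtype=float)
y = 1.5 * t + rng.standard_t(df=2, size=30)                # heavy-tailed noise around a line
print(kendall_theil_sen(t, y))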

8,409 citations