Author

Pranab Kumar Sen

Bio: Pranab Kumar Sen is an academic researcher from the University of North Carolina at Chapel Hill. The author has contributed to research in the topics of Estimator & Nonparametric statistics, has an h-index of 51, and has co-authored 570 publications receiving 19,997 citations. Previous affiliations of Pranab Kumar Sen include the Indian Statistical Institute & Academia Sinica.


Papers
01 Jan 2008
TL;DR: In this article, Kendall's tau-type rank statistics are employed for statistical inference, largely avoiding parametric assumptions, and the proposed procedures are compared with ones based on Kendall's tau statistic for microarray data models.
Abstract: High-dimensional data models abound in genomics studies, where often inadequately small sample sizes create impasses for the incorporation of standard statistical tools. Conventional assumptions of linearity of regression, homoscedasticity and (multi-)normality of errors may not be tenable in many such interdisciplinary setups. In this study, Kendall's tau-type rank statistics are employed for statistical inference, largely avoiding parametric assumptions. The proposed procedures are compared with ones based on Kendall's tau statistic. Applications in microarray data models are stressed.
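The building block of the tau-type statistics described above is Kendall's tau itself, computed from counts of concordant and discordant pairs. A minimal sketch (tau-a, ignoring tie corrections, which the paper's procedures would handle more carefully):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.

    A pair (i, j) is concordant when x and y move in the same
    direction, and discordant when they move in opposite directions.
    """
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Because only the ordering of the observations enters the statistic, it is insensitive to monotone transformations of the data, which is what lets such procedures avoid linearity and normality assumptions.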
OtherDOI
29 Sep 2014
TL;DR: In this article, the authors review the well-known smoothing techniques for density and hazard function estimation under random censoring, developed in the recent past, and outline how the recently proposed smoothing technique by Chaubey and Sen for uncensored data can be adapted to the case of random censorship.
Abstract: We review the well-known smoothing techniques for density and hazard function estimation under random censoring, developed in the recent past. We then outline how the recently proposed smoothing technique by Chaubey and Sen for uncensored data can be adapted to the case of random censoring. Keywords: density; hazard; cumulative hazard; mean residual life; random censoring
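The smoothed density and hazard estimators reviewed above are built on top of nonparametric survival estimation under right censoring. A minimal sketch of the standard Kaplan-Meier product-limit estimator (this is the classical starting point, not the Chaubey-Sen smoothing technique itself):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function S(t) under
    right censoring.  events[i] is 1 for an observed failure and
    0 for a censored observation.

    Returns a list of (time, S(t)) steps at the failure times.
    """
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk = n
    surv = 1.0
    steps = []
    k = 0
    while k < n:
        t = times[order[k]]
        deaths = removed = 0
        # group all observations tied at time t
        while k < n and times[order[k]] == t:
            deaths += events[order[k]]
            removed += 1
            k += 1
        if deaths > 0:
            surv *= (at_risk - deaths) / at_risk
            steps.append((t, surv))
        at_risk -= removed
    return steps
```

A smoothed density or hazard estimate is then obtained by applying a kernel (or, in the Chaubey-Sen approach, a Hille-type smoothing operator) to the jumps of this step function.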
TL;DR: In this article , it was shown that, for suitably large t , applying a unitary chosen uniformly at random from an approximate t -design on a quantum system followed by a quantum operation almost decouples, with high probability, the given system from another reference system to which it may initially have been correlated.
Abstract: We prove a new concentration result for non-catalytic decoupling by showing that, for suitably large t , applying a unitary chosen uniformly at random from an approximate t -design on a quantum system followed by a fixed quantum operation almost decouples, with high probability, the given system from another reference system to which it may initially have been correlated. Earlier works either did not obtain high decoupling probability, or used provably inefficient unitaries, or required catalytic entanglement for decoupling. In contrast, our approximate unitary designs always guarantee decoupling with exponentially high probability and, under certain conditions, lead to computationally efficient unitaries. As a result we conclude that, under suitable conditions, efficiently implementable approximate unitary designs achieve relative thermalisation in quantum thermodynamics with exponentially high probability. We also provide, as an application, of a corollary of our main theorem (the FQSW theorem), that if a black evolves less random than Haar random, it still behaves as an information mirror (similar to Hayden and Preskill toy model of Haar random evolution of black holes).
Journal ArticleDOI
TL;DR: In this paper, a one-compartment model with biologically relevant parameters, such as organ volume, uptake rate and excretion rate, or clearance, is used to derive the TK predictor while a two-parameter Emax model is used as a predictor for TD measures.
Abstract: In environmental cancer risk assessment of a toxic chemical, the main focus is in understanding induced target organ toxicity that may in turn lead to carcinogenicity. Mathematical models based on systems of ordinary differential equations with biologically relevant parameters are tenable methods for describing the disposition of chemicals in target organs. In evaluation of a toxic chemical, dose–response assessment often addresses only the toxicodynamics (TD) of the chemical, while its toxicokinetics (TK) do not enter into consideration. The primary objective of this research is to integrate both TK and TD in the evaluation of toxic chemicals while performing dose–response assessment. Population models, with hierarchical setup and nonlinear predictors, for TK concentration and TD effect measures are considered. A one-compartment model with biologically relevant parameters, such as organ volume, uptake rate and excretion rate, or clearance, is used to derive the TK predictor, while a two-parameter Emax model is used as a predictor for TD measures. Inference on the model parameters, subject to nonnegativity and assay limit-of-detection (LOD) constraints, was carried out by Bayesian approaches using Markov chain Monte Carlo (MCMC) techniques. Copyright © 2006 John Wiley & Sons, Ltd.
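The two nonlinear predictors named in the abstract have simple closed forms. A minimal sketch, with illustrative parameter names (the paper's one-compartment model is parameterized via uptake and excretion rates; the single elimination-rate bolus form below is a common simplification, not the authors' exact predictor):

```python
import math

def one_compartment_conc(dose, volume, k_elim, t):
    """One-compartment TK model with first-order elimination:
    C(t) = (dose / volume) * exp(-k_elim * t)."""
    return (dose / volume) * math.exp(-k_elim * t)

def emax_effect(dose, emax, ec50):
    """Two-parameter Emax TD predictor:
    E(dose) = emax * dose / (ec50 + dose),
    where emax is the maximal effect and ec50 is the dose
    producing half-maximal effect."""
    return emax * dose / (ec50 + dose)
```

In the hierarchical Bayesian setup, these predictors supply the mean of the likelihood while priors on the parameters enforce nonnegativity, and observations below the assay LOD are treated as censored.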
01 Jan 1976
TL;DR: In this article, suitably progressively censored tests based on chi-square statistics are proposed and studied for batch-arrival models relating to categorical data under time-sequential studies.
Abstract: For some batch-arrival models relating to categorical data under time-sequential studies, suitably progressively censored tests based on chi-square statistics are proposed and studied. The necessary (asymptotic) distribution theory is considered for the null as well as local alternative hypotheses situations. To facilitate comparisons of the different proposed tests, a numerical illustration is presented at the end. AMS 1970 Classification Nos: 62E20, 62F05, 62G10, 62L99.

Cited by
Journal ArticleDOI
TL;DR: A nonparametric approach to the analysis of areas under correlated ROC curves is presented, using the theory of generalized U-statistics to generate an estimated covariance matrix.
Abstract: Methods of evaluating and comparing the performance of diagnostic tests are of increasing importance as new tests are developed and marketed. When a test is based on an observed variable that lies on a continuous or graded scale, an assessment of the overall value of the test can be made through the use of a receiver operating characteristic (ROC) curve. The curve is constructed by varying the cutpoint used to determine which values of the observed variable will be considered abnormal and then plotting the resulting sensitivities against the corresponding false positive rates. When two or more empirical curves are constructed based on tests performed on the same individuals, statistical analysis on differences between curves must take into account the correlated nature of the data. This paper presents a nonparametric approach to the analysis of areas under correlated ROC curves, by using the theory on generalized U-statistics to generate an estimated covariance matrix.
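The link the paper exploits is that the area under an empirical ROC curve equals a two-sample U-statistic: the proportion of (diseased, healthy) pairs in which the diseased score is higher, counting ties as one half. A minimal sketch of that estimator (the paper's contribution, the covariance matrix across correlated curves, is not reproduced here):

```python
def auc_mann_whitney(diseased, healthy):
    """Nonparametric AUC estimate: the probability that a randomly
    chosen diseased score exceeds a randomly chosen healthy score,
    with ties counted as 1/2 (a two-sample U-statistic)."""
    wins = 0.0
    for d in diseased:
        for h in healthy:
            if d > h:
                wins += 1.0
            elif d == h:
                wins += 0.5
    return wins / (len(diseased) * len(healthy))
```

An AUC of 1.0 means the test separates the two groups perfectly, while 0.5 is chance-level discrimination.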

16,496 citations

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Book
21 Mar 2002
TL;DR: An essential textbook for any student or researcher in biology needing to design experiments, sampling programs, or analyse the resulting data, covering both classical and Bayesian philosophies before advancing to the analysis of linear and generalized linear models. Topics covered include linear and logistic regression; simple and complex ANOVA models (for factorial, nested, block, split-plot, repeated measures, and covariance designs); and log-linear models. Multivariate techniques, including classification and ordination, are then introduced.
Abstract: An essential textbook for any student or researcher in biology needing to design experiments, sampling programs, or analyse the resulting data. The text begins with a revision of estimation and hypothesis testing methods, covering both classical and Bayesian philosophies, before advancing to the analysis of linear and generalized linear models. Topics covered include linear and logistic regression; simple and complex ANOVA models (for factorial, nested, block, split-plot, repeated measures, and covariance designs); and log-linear models. Multivariate techniques, including classification and ordination, are then introduced. Special emphasis is placed on checking assumptions, exploratory data analysis, and presentation of results. The main analyses are illustrated with many examples from published papers, and there is an extensive reference list to both the statistical and biological literature. The book is supported by a website that provides all data sets, questions for each chapter, and links to software.

9,509 citations

Journal ArticleDOI
TL;DR: In this paper, it was shown that a simple FDR controlling procedure for independent test statistics can also control the false discovery rate when test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses.
Abstract: Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with positive correlation matrix and multivariate $t$. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased.
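The FDR-controlling procedure whose validity under positive regression dependency is proved above is the Benjamini-Hochberg step-up rule: sort the m p-values, find the largest k with p_(k) <= (k/m)q, and reject the k hypotheses with the smallest p-values. A minimal sketch:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure at FDR level q.

    Sorts the p-values, finds the largest rank k with
    p_(k) <= (k / m) * q, and rejects hypotheses 1..k.
    Returns the indices (into the original list) of the
    rejected hypotheses.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k_max = rank
    return sorted(order[:k_max])
```

Note the step-up character: a p-value may exceed its own threshold yet still be rejected because a larger p-value further down the sorted list satisfies its threshold.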

9,335 citations

Journal ArticleDOI
TL;DR: In this article, a simple and robust estimator of regression coefficient β based on Kendall's rank correlation tau is studied, where the point estimator is the median of the set of slopes (Yj - Yi )/(tj-ti ) joining pairs of points with ti ≠ ti.
Abstract: The least squares estimator of a regression coefficient β is vulnerable to gross errors, and the associated confidence interval is, in addition, sensitive to non-normality of the parent distribution. In this paper, a simple and robust (point as well as interval) estimator of β based on Kendall's [6] rank correlation tau is studied. The point estimator is the median of the set of slopes (Yj − Yi)/(tj − ti) joining pairs of points with ti ≠ tj, and is unbiased. The confidence interval is also determined by two order statistics of this set of slopes. Various properties of these estimators are studied and compared with those of the least squares and some other nonparametric estimators.
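The point estimator described above, the median of all pairwise slopes, is now widely known as the Theil-Sen estimator. A minimal sketch:

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(t, y):
    """Theil-Sen slope estimate: the median of the pairwise slopes
    (y_j - y_i) / (t_j - t_i) over all pairs with t_i != t_j."""
    slopes = [
        (y[j] - y[i]) / (t[j] - t[i])
        for i, j in combinations(range(len(t)), 2)
        if t[i] != t[j]
    ]
    return median(slopes)
```

Because the median of the pairwise slopes is taken, a single grossly erroneous observation perturbs only a small fraction of the slopes and leaves the estimate nearly unchanged, which is the robustness property the abstract contrasts with least squares.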

8,409 citations