Author
Rahul Mukerjee
Other affiliations: Siemens, Chiba University, Indian Statistical Institute, …
Bio: Rahul Mukerjee is an academic researcher at the Indian Institute of Management Calcutta. His research spans topics including frequentist inference and prior probability. He has an h-index of 30 and has co-authored 206 publications receiving 3,507 citations. Previous affiliations of Rahul Mukerjee include Siemens and Chiba University.
Papers
TL;DR: For a general class of empirical-type likelihoods for the population mean, higher-order asymptotics are developed with a view to characterizing its members which allow, for any given prior, the existence of a confidence interval that has approximately correct posterior as well as frequentist coverage.
Abstract: For a general class of empirical-type likelihoods for the population mean, higher-order asymptotics are developed with a view to characterizing its members which allow, for any given prior, the existence of a confidence interval that has approximately correct posterior as well as frequentist coverage. In particular, it is seen that the usual empirical likelihood always allows such a confidence interval, while many of its variants proposed in the literature do not enjoy this property. An explicit form of the confidence interval is also given.
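The profile empirical likelihood for a population mean, the object studied above, can be sketched numerically. This is a generic textbook construction, not the paper's specific variant or higher-order expansion; the function name and the bisection scheme for the Lagrange multiplier are my own choices.

```python
import numpy as np

def el_log_ratio(x, mu, tol=1e-10):
    """-2 log empirical likelihood ratio for a candidate mean mu.

    Profiles the Lagrange multiplier lam solving
    sum_i (x_i - mu) / (1 + lam (x_i - mu)) = 0 by bisection.
    Requires min(x) < mu < max(x) for the constraint set to be nonempty.
    """
    x = np.asarray(x, dtype=float)
    d = x - mu
    if not (d.min() < 0 < d.max()):
        return np.inf  # mu outside the convex hull of the data
    # lam must keep every implied weight positive: 1 + lam * d_i > 0
    lo = -1.0 / d.max() + 1e-12
    hi = -1.0 / d.min() - 1e-12
    g = lambda lam: np.sum(d / (1.0 + lam * d))  # decreasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * d))
```

Under the usual Wilks calibration, an approximate 95% interval for the mean is the set of mu with `el_log_ratio(x, mu)` below 3.84 (the chi-squared 1-df quantile); the paper's point is about which likelihood variants also admit correct posterior coverage, which this sketch does not address.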
22 citations
TL;DR: In this paper, the authors characterize priors which ensure frequentist validity, up to o(n⁻¹), of confidence regions based on the highest posterior density, and investigate the role of Jeffreys' prior in this regard.
Abstract: In a multiparameter set-up, this paper characterizes priors which ensure frequentist validity, up to o(n⁻¹), of confidence regions based on the highest posterior density. The role of Jeffreys' prior in this regard has also been investigated.
22 citations
TL;DR: This article showed that for sensitive and confidential characters the usual loss of efficiency due to randomized rather than direct response may be partially recouped if the interviewees are left free to opt for either mode of response.
Abstract: It is shown that for sensitive and confidential characters the usual loss of efficiency due to randomized rather than direct response may be partially recouped if the interviewees are left free to opt for either mode of response. Specific results are derived with multinomial sampling.
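The efficiency trade-off discussed above can be illustrated with the classical Warner randomized-response estimator, the standard building block for such schemes. This is not the paper's optional-response design, only the plain randomized mode; the simulation parameters below are illustrative.

```python
import numpy as np

def warner_estimate(yes_frac, p):
    """Unbiased estimator of a sensitive proportion pi under Warner's model.

    Each respondent answers the sensitive question truthfully with
    probability p and its complement with probability 1 - p (p != 0.5),
    so P(yes) = p*pi + (1-p)*(1-pi); invert that relation.
    """
    return (yes_frac - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(0)
n, pi_true, p = 100_000, 0.3, 0.7
member = rng.random(n) < pi_true        # true (unobserved) sensitive status
ask_direct = rng.random(n) < p          # outcome of the randomizing device
answer = np.where(ask_direct, member, ~member)  # observed yes/no responses
pi_hat = warner_estimate(answer.mean(), p)
```

The variance of this estimator exceeds that of direct questioning by a factor driven by p; letting respondents who do not mind answering directly do so, as in the paper, recoups part of that loss.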
22 citations
TL;DR: In this paper, the authors consider cDNA microarray experiments when the cell populations have a factorial structure, and investigate the problem of their optimal designing under a baseline parametrization where the objects of interest differ from those under the more common orthogonal parametrization.
Abstract: We consider cDNA microarray experiments when the cell populations have a factorial structure, and investigate the problem of their optimal designing under a baseline parametrization where the objects of interest differ from those under the more common orthogonal parametrization. First, analytical results are given for the 2×2 factorial. Since practical applications often involve a more complex factorial structure, we next explore general factorials and obtain a collection of optimal designs in the saturated, that is, most economic, case. This, in turn, is seen to yield an approach for finding optimal or efficient designs in the practically more important nearly saturated cases. Thereafter, the findings are extended to the more intricate situation where the underlying model incorporates dye-coloring effects, and the role of dye-swapping is critically examined.
21 citations
TL;DR: A nested orthogonal array is an OA(N, k, s, g) which contains an OA(M, k, r, g) as a subarray.
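The defining property of an OA(N, k, s, g), the object nested in the construction above, can be checked mechanically: in every set of g columns, each of the s**g level combinations must occur equally often. A small verifier (generic definition check, not the paper's nesting construction; the function name is mine):

```python
import numpy as np
from itertools import combinations, product

def has_strength(arr, s, g):
    """Check whether arr is an orthogonal array of strength g on s symbols:
    every N x g subarray contains each of the s**g tuples exactly N/s**g times."""
    arr = np.asarray(arr)
    N, k = arr.shape
    lam, rem = divmod(N, s ** g)
    if rem:
        return False
    for cols in combinations(range(k), g):
        counts = {}
        for row in arr[:, cols]:
            counts[tuple(row)] = counts.get(tuple(row), 0) + 1
        if len(counts) != s ** g or any(c != lam for c in counts.values()):
            return False
    return True

# The full 2^3 factorial is an OA(8, 3, 2, 3), hence also strength 2.
oa = np.array(list(product([0, 1], repeat=3)))
```

A nested array additionally requires that a subset of M rows itself forms an OA(M, k, r, g) on a subset of r of the s symbols, which the same checker can verify on the subarray.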
21 citations
Cited by
TL;DR: This paper reviews the literature on Bayesian experimental design, both for linear and nonlinear models, and presents a unified view of the topic by putting experimental design in a decision-theoretic framework.
Abstract: This paper reviews the literature on Bayesian experimental design. A unified view of this topic is presented, based on a decision-theoretic approach. This framework casts criteria from the Bayesian literature of design as part of a single coherent approach. The decision-theoretic structure incorporates both linear and nonlinear design problems and it suggests possible new directions to the experimental design problem, motivated by the use of new utility functions. We show that, in some special cases of linear design problems, Bayesian solutions change in a sensible way when the prior distribution and the utility function are modified to allow for the specific structure of the experiment. The decision-theoretic approach also gives a mathematical justification for selecting the appropriate optimality criterion.
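In the linear-Gaussian special case, the decision-theoretic approach reviewed above with a Shannon-information utility reduces to Bayesian D-optimality: choose the design maximizing the log-determinant of the posterior precision of the regression coefficients. A minimal sketch (the prior precision and candidate designs are illustrative assumptions, not from the review):

```python
import numpy as np

def bayes_d_criterion(X, prior_prec, noise_var=1.0):
    """Bayesian D-optimality criterion for y = X beta + eps, eps ~ N(0, noise_var):
    log det of the posterior precision X'X / noise_var + R (larger is better)."""
    M = X.T @ X / noise_var + prior_prec
    return np.linalg.slogdet(M)[1]

# Two candidate 4-run designs for a straight-line model (intercept, slope) on [-1, 1]:
spread = np.column_stack([np.ones(4), [-1.0, -1.0, 1.0, 1.0]])
clumped = np.column_stack([np.ones(4), [-0.1, 0.0, 0.0, 0.1]])
R = np.eye(2)  # assumed prior precision, for illustration only
```

As expected, pushing the design points to the ends of the interval wins under this criterion; with a very strong prior the ranking of designs can change, which is the sense in which Bayesian solutions "change in a sensible way" with the prior.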
1,903 citations
TL;DR: In this paper, a review of techniques for constructing non-informative priors is presented and some of the practical and philosophical issues that arise when they are used are discussed.
Abstract: Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet in practice, most Bayesian analyses are performed with so-called “noninformative” priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his viewpoint about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly: When sample sizes are small (relative to the number of parameters being estimated), it is dangerous to put faith in any “default” solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated bibliography.
1,243 citations
01 Jun 1989
TL;DR: In this article, the authors provide an overview of recent developments in the design and analysis of cross-over trials and present methods for testing for a treatment difference when the data are binary.
Abstract: This chapter provides an overview of recent developments in the design and analysis of cross-over trials. We first consider the analysis of the trial that compares two treatments, A and B, over two periods and where the subjects are randomized to the treatment sequences AB and BA. We make the distinction between fixed and random effects models and show how these models can easily be fitted using modern software. Issues with fitting and testing for a difference in carry-over effects are described and the use of baseline measurements is discussed. Simple methods for testing for a treatment difference when the data are binary are also described. Various designs with two or more treatments but with three or four periods are then described and compared. These include the balanced and partially balanced designs for three or more treatments and designs for factorial treatment combinations. Also described are nearly balanced and nearly strongly balanced designs. Random subject-effects models for the designs with two or more treatments are described and methods for analysing non-normal data are also given. The chapter concludes with a description of the use of cross-over designs in the testing of bioequivalence.
1,201 citations
TL;DR: Experimental design is reviewed here for broad classes of data collection and analysis problems, including: fractioning techniques based on orthogonal arrays, Latin hypercube designs and their variants for computer experimentation, efficient design for data mining and machine learning applications, and sequential design for active learning.
Abstract: Maximizing data information requires careful selection, termed design, of the points at which data are observed. Experimental design is reviewed here for broad classes of data collection and analysis problems, including: fractioning techniques based on orthogonal arrays, Latin hypercube designs and their variants for computer experimentation, efficient design for data mining and machine learning applications, and sequential design for active learning. © 2012 Wiley Periodicals, Inc.
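The Latin hypercube designs mentioned above admit a very short construction: stratify each coordinate into n equal bins, take an independent random permutation of the bins per dimension, and jitter within each bin. A generic sketch (the function name and the jittering choice are mine, not from the review):

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """n points in [0,1)^d with each coordinate hitting every one of the
    n equal-width strata exactly once (a random Latin hypercube sample)."""
    rng = np.random.default_rng(rng)
    strata = np.tile(np.arange(n), (d, 1))      # shape (d, n): bins per dimension
    perm = rng.permuted(strata, axis=1)         # shuffle bins independently per dim
    return (perm.T + rng.random((n, d))) / n    # jitter within each bin, scale to [0,1)
```

Projected onto any single axis, the sample is exactly stratified, which is the property that makes these designs attractive for computer experiments where only a few inputs may matter.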
1,025 citations
TL;DR: It is shown that UD's have many desirable properties for a wide variety of applications and the global optimization algorithm, threshold accepting, is used to generate UD's with low discrepancy.
Abstract: A uniform design (UD) seeks design points that are uniformly scattered on the domain. It has been popular since 1980. A survey of UD is given in the first portion: The fundamental idea and construction method are presented and discussed and examples are given for illustration. It is shown that UD's have many desirable properties for a wide variety of applications. Furthermore, we use the global optimization algorithm, threshold accepting, to generate UD's with low discrepancy. The relationship between uniformity and orthogonality is investigated. It turns out that most UD's obtained here are indeed orthogonal.
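The discrepancy that uniform designs minimize can be computed in closed form for common variants; below is Hickernell's centered L2 discrepancy, one standard choice (illustrative only — the paper considers several discrepancy measures, and the threshold-accepting search itself is not shown here):

```python
import numpy as np

def centered_l2_discrepancy(X):
    """Hickernell's centered L2 discrepancy of points X in [0,1]^d.

    Lower values mean the points are more uniformly scattered; uses the
    well-known closed-form expression rather than a sup over boxes.
    """
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    a = np.abs(X - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.prod(1 + 0.5 * a - 0.5 * a**2, axis=1).sum()
    diff = np.abs(X[:, None, :] - X[None, :, :])
    pair = np.prod(1 + 0.5 * a[:, None, :] + 0.5 * a[None, :, :] - 0.5 * diff,
                   axis=2)
    term3 = pair.sum() / n**2
    return np.sqrt(term1 - term2 + term3)
```

A search procedure such as threshold accepting would repeatedly perturb a candidate design and accept moves whose increase in this criterion stays below a shrinking threshold, which is how the paper generates low-discrepancy UDs.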
825 citations