
Showing papers by "Rahul Mukerjee published in 2013"


Journal ArticleDOI
TL;DR: This paper proposes a new construction method for key predistribution schemes, based on combining duals of standard block designs, that works for any intersection threshold; explicit algebraic expressions are obtained for the local connectivity and resiliency metrics.
Abstract: Key predistribution schemes for distributed sensor networks have received significant attention in the recent literature. In this paper we propose a new construction method for these schemes based on combinations of duals of standard block designs. Our method is a broad spectrum one which works for any intersection threshold. By varying the initial designs, we can generate various schemes and this makes the method quite flexible. We also obtain explicit algebraic expressions for the metrics for local connectivity and resiliency. These schemes are quite efficient with regard to connectivity and resiliency and at the same time they allow a straightforward shared-key discovery.

24 citations
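The metrics named in the abstract can be made concrete with a small toy example. The sketch below is not the paper's dual-design construction (whose details the abstract does not give); it assigns key rings from the blocks of the Fano plane, the classical (7,3,1)-design often used in design-based key predistribution, and computes local connectivity together with a simple resiliency measure for one captured node.

```python
from itertools import combinations

# Key rings taken from the blocks of the Fano plane, a (7,3,1)-design:
# any two blocks share exactly one point, so any two nodes share one key.
FANO = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
        {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def local_connectivity(rings, q=1):
    """Fraction of node pairs sharing at least q keys (intersection threshold q)."""
    pairs = list(combinations(range(len(rings)), 2))
    ok = sum(1 for i, j in pairs if len(rings[i] & rings[j]) >= q)
    return ok / len(pairs)

def fail_s(rings, compromised, q=1):
    """Fraction of links between uncompromised nodes whose shared keys
    are all exposed by the captured nodes' key rings."""
    exposed = set().union(*(rings[c] for c in compromised))
    rest = [i for i in range(len(rings)) if i not in compromised]
    links = [(i, j) for i, j in combinations(rest, 2)
             if len(rings[i] & rings[j]) >= q]
    broken = sum(1 for i, j in links if (rings[i] & rings[j]) <= exposed)
    return broken / len(links)

print(local_connectivity(FANO))   # 1.0: every pair of nodes can form a link
print(fail_s(FANO, {0}))          # 0.2: one capture breaks 3 of the 15 links
```

With this design every pair of nodes shares exactly one key, so shared-key discovery is immediate; the trade-off between connectivity and resiliency is exactly what the metrics in the paper quantify for the general construction.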


Journal ArticleDOI
TL;DR: A step-up/down procedure is proposed and seen to work very well; the resulting designs are quite robust to possible dye-color effects and heteroscedasticity, and the method works equally well for hybrids of the two parametrizations considered and for other parametrizations.
Abstract: A general method for obtaining highly efficient factorial designs of relatively small sizes is developed for cDNA microarray experiments. It allows the main effects and interactions to be of possibly unequal importance. First, the approximate theory is employed to get an optimal design measure which is then discretized. It is, however, observed that a naive discretization may fail to yield an exact design of the stipulated size and, even when it yields such an exact design, there is often scope for improvement in efficiency. To address these issues, we propose a step-up/down procedure which is seen to work very well. The resulting designs turn out to be quite robust to possible dye-color effects and heteroscedasticity. We focus on the baseline and all-to-next parametrizations but our method works equally well also for hybrids of the two and other parametrizations.

9 citations
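The discretization step described above, turning an optimal design measure into an exact design of stipulated size, has a standard baseline in Pukelsheim's efficient rounding. The sketch below implements that baseline, not the paper's step-up/down refinement, which the abstract says improves on naive discretization.

```python
import math

def efficient_rounding(weights, N):
    """Pukelsheim-Rieder efficient rounding: convert support-point
    weights w_i (summing to 1) into integer replications n_i summing to N."""
    l = len(weights)
    # Start from ceilings against the reduced sample size N - l/2.
    n = [math.ceil(w * (N - l / 2)) for w in weights]
    # Repair the total, moving the run that perturbs efficiency least.
    while sum(n) < N:
        j = min(range(l), key=lambda i: n[i] / weights[i])
        n[j] += 1
    while sum(n) > N:
        j = max(range(l), key=lambda i: (n[i] - 1) / weights[i])
        n[j] -= 1
    return n

print(efficient_rounding([0.5, 0.3, 0.2], 10))   # [5, 3, 2]
```

The failure mode the abstract mentions (a naive rounding missing the stipulated size N) is exactly what the two repair loops handle; the paper's procedure then searches further for efficiency gains.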


Journal ArticleDOI
TL;DR: In this article, the authors studied two-level minimum aberration (MA) designs in N = 1 (mod 4) runs, derived theoretical results on such designs, and constructed tables of MA designs in the present context.
Abstract: Two-level minimum aberration (MA) designs in N = 1 (mod 4) runs are studied. For this purpose, we consider designs obtained by adding any single run to a two-symbol orthogonal array (OA) of strength two and then, among these designs, sequentially minimize a measure of bias due to interactions of successively higher orders. The reason for considering such OA plus one run designs is that they are optimal main effect plans in a very broad sense in the absence of interactions. Our approach aims at ensuring model robustness even when interactions are possibly present. It is shown that the MA criterion developed here has an equivalent formulation which is similar but not identical to the minimum moment aberration criterion. This formulation is utilized to derive theoretical results on and construct tables of MA designs in the present context.

4 citations
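A minimal sketch of the ingredients described above: a two-symbol orthogonal array of strength two, an "OA plus one run" design, and the power moments of the row-coincidence distribution that minimum-moment-type criteria work with. The particular OA and added run below are illustrative choices, not taken from the paper.

```python
from itertools import combinations

# OA(4, 3, 2, 2): 4 runs, 3 two-level factors, strength two.
OA = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def is_strength_two(rows):
    """Every pair of columns must contain each of the four level
    combinations equally often."""
    k = len(rows[0])
    for a, b in combinations(range(k), 2):
        counts = {}
        for r in rows:
            counts[(r[a], r[b])] = counts.get((r[a], r[b]), 0) + 1
        if len(counts) != 4 or len(set(counts.values())) != 1:
            return False
    return True

def power_moments(rows, tmax=3):
    """Power moments of the row-coincidence counts: the kind of quantity
    minimum moment aberration criteria sequentially minimize."""
    pairs = list(combinations(rows, 2))
    coinc = [sum(x == y for x, y in zip(r, s)) for r, s in pairs]
    return [sum(c ** t for c in coinc) / len(pairs) for t in range(1, tmax + 1)]

assert is_strength_two(OA)
design = OA + [(0, 0, 0)]      # an "OA plus one run" design in N = 5 runs
print(power_moments(design))
```

The paper's MA criterion is equivalent to something similar to, but not identical with, minimizing such moments; this sketch only shows the objects involved.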


Journal ArticleDOI
TL;DR: In this paper, the role of data-dependent priors in ensuring approximate frequentist validity of posterior credible regions based on the inversion of these statistics is investigated, and it is shown that the resulting probability matching conditions readily admit solutions which entail approximate frequentists validity of the highest posterior density region as well.
Abstract: We consider likelihood ratio statistics based on the usual profile likelihood and the standard adjustments thereof proposed in the literature in the presence of nuisance parameters. The role of data-dependent priors in ensuring approximate frequentist validity of posterior credible regions based on the inversion of these statistics is investigated. Unlike what happens with data-free priors, it is seen that the resulting probability matching conditions readily admit solutions which entail approximate frequentist validity of the highest posterior density region as well.

3 citations
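As a numerical illustration of the frequentist-validity question the abstract raises, the sketch below inverts the profile likelihood ratio statistic for a normal mean with unknown variance (a textbook nuisance-parameter example, not one from the paper) and estimates the coverage of the resulting region by Monte Carlo; the chi-square calibration is only approximately valid, which is the kind of gap adjustments and matching priors address.

```python
import math
import random

random.seed(1)

def covers(n, mu=0.0, level=3.841):          # 3.841 = chi-square_1 95% point
    """Does the inverted profile LR region for the mean contain the truth?"""
    x = [random.gauss(mu, 1.0) for _ in range(n)]
    xbar = sum(x) / n
    s2_hat = sum((xi - xbar) ** 2 for xi in x) / n   # sigma-hat^2 at mu-hat
    s2_mu = sum((xi - mu) ** 2 for xi in x) / n      # sigma-hat^2 at true mu
    w = n * math.log(s2_mu / s2_hat)                 # profile LR statistic
    return w <= level

cov = sum(covers(10) for _ in range(20000)) / 20000
print(cov)   # close to, but somewhat below, the nominal 0.95 at n = 10
```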


Journal ArticleDOI
TL;DR: The authors developed a complementary set theory for characterizing optimal quaternary code designs that are highly fractionated in the sense of accommodating a large number of factors, and established a link with foldovers of regular fractions.
Abstract: Quaternary code (QC) designs form an attractive class of nonregular factorial fractions. We develop a complementary set theory for characterizing optimal QC designs that are highly fractionated in the sense of accommodating a large number of factors. This is in contrast to existing theoretical results which work only for a relatively small number of factors. While the use of imaginary numbers to represent the Gray map associated with QC designs facilitates the derivation, establishing a link with foldovers of regular fractions helps in presenting our results in a neat form.

2 citations
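The Gray map mentioned in the abstract is the standard map from Z4 to pairs of bits, 0↦00, 1↦01, 2↦11, 3↦10; applying it coordinatewise to a quaternary linear code yields a two-level (generally nonregular) design. A minimal sketch with an illustrative generator matrix, not one from the paper:

```python
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # Gray map Z4 -> Z2 x Z2

def qc_design(gen):
    """Two-level design from a quaternary code: take all Z4-linear
    combinations of the generator rows, then apply the Gray map."""
    k, n = len(gen), len(gen[0])
    rows = []
    for coef in product(range(4), repeat=k):
        word = [sum(c * g[j] for c, g in zip(coef, gen)) % 4 for j in range(n)]
        rows.append(tuple(b for z in word for b in GRAY[z]))
    return rows

# A toy generator matrix over Z4 (illustrative only).
D = qc_design([(1, 0, 1), (0, 1, 3)])
print(len(D), len(D[0]))    # 16 runs, 6 two-level factors
```

Each Z4 column becomes two balanced binary columns, which is why an n-coordinate quaternary code accommodates 2n two-level factors; the paper's complementary set theory characterizes which such codes are optimal when the factor count is large.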


Journal ArticleDOI
TL;DR: In this paper, the authors consider two-level factorials in N = 2 (mod 4) runs as obtained by adding two runs, with a certain coincidence pattern, to an orthogonal array of strength two.

Posted Content
TL;DR: A variance inequality and a covariance inequality are completely proved; these ensure the convergence of an algorithm for reconstructing Λ solely from the covariance matrix of X truncated to a Euclidean ball.
Abstract: Let X ~ N_v(0, Λ) be a normal vector in v dimensions, where Λ is diagonal. With reference to the truncated distribution of X on the interior of a v-dimensional Euclidean ball, we completely prove a variance inequality and a covariance inequality that were recently discussed by F. Palombi and S. Toti [J. Multivariate Anal. 122 (2013) 355-376]. These inequalities ensure the convergence of an algorithm for the reconstruction of Λ only on the basis of the covariance matrix of X truncated to the Euclidean ball. The concept of monotone likelihood ratio is useful in our proofs. Moreover, we also prove and utilize the fact that the cumulative distribution function of any positive linear combination of independent chi-square variates is log-concave, even though the same may not be true for the corresponding density function.
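The log-concavity result stated at the end of the abstract can be checked numerically. The sketch below estimates the CDF of a positive linear combination of independent chi-square variates by Monte Carlo and verifies midpoint log-concavity, F(t2)^2 >= F(t1)*F(t3) for equally spaced t1 < t2 < t3, at one illustrative choice of coefficients and evaluation points.

```python
import random

random.seed(0)

def cdf_est(coefs, t, m=200_000):
    """Monte Carlo estimate of P(sum_i c_i * Z_i^2 <= t), Z_i iid N(0,1):
    the CDF of a positive linear combination of chi-square_1 variates."""
    hits = 0
    for _ in range(m):
        y = sum(c * random.gauss(0.0, 1.0) ** 2 for c in coefs)
        hits += y <= t
    return hits / m

coefs = (1.0, 2.0)                     # positive coefficients (illustrative)
f1, f2, f3 = (cdf_est(coefs, t) for t in (0.5, 2.5, 4.5))
print(f2 * f2 >= f1 * f3)              # midpoint log-concavity at these points
```

A Monte Carlo check at a few points is of course no substitute for the proof; note the paper's caveat that the density, as opposed to the CDF, need not be log-concave.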