
Showing papers on "Random effects model published in 1976"


Journal ArticleDOI
TL;DR: In this paper, the problems with random-effects designs and their approximate statistical tests (quasi-F ratios) are reviewed, and it is suggested that researchers use fixed factors, which are better understood statistically, and seek non-statistical generality by means of replication.

182 citations
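For context on the statistic at issue: a quasi-F ratio arises when, in a design with several random factors, no single mean square has the expectation required of an error term, so mean squares are combined in numerator and denominator. The following is a minimal sketch for one fixed factor A fully crossed with two random factors S and W; the specific design and notation are illustrative assumptions, not taken from the paper.

\[
F'' \;=\; \frac{MS_A + MS_{ASW}}{MS_{AS} + MS_{AW}},
\qquad
\nu_{\text{num}} \;\approx\; \frac{(MS_A + MS_{ASW})^2}{MS_A^2/\nu_A + MS_{ASW}^2/\nu_{ASW}}
\]

Under the null hypothesis of no A effect the numerator and denominator have equal expectations, and F'' is referred to an F distribution whose numerator and denominator degrees of freedom are both Satterthwaite approximations of the form shown. The approximate character of such tests is one of the problems the paper reviews.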


Journal ArticleDOI
TL;DR: In this paper, confidence regions are constructed for linear combinations of fixed effects and realized (sample) values of random effects, for the case where the ratios of the variance components can be regarded as known; the regions attain the prescribed long-run frequency of coverage under repeated sampling of the random effects as well as the residual effects.
Abstract: Confidence regions are constructed for linear combinations of fixed effects and realized or sample values of random effects. These regions can be used in instances where the ratios of the variance components can be regarded as known. They have the prescribed long-run frequency of coverage when there is repeated sampling of the random effects as well as the residual effects. They have smaller expected volume than confidence regions obtained by proceeding as though the random effects are fixed effects.

18 citations
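As a sketch of the construction in standard mixed-model notation (the notation is an assumption here, not quoted from the paper): write y = X\beta + Zu + e with u ~ N(0, \sigma^2 D), e ~ N(0, \sigma^2 I), and D known up to the variance ratios. The estimator \hat\beta and the realized-value predictor \hat u solve Henderson's mixed-model equations, and a region for a linear combination \lambda'\beta + \delta'u follows from the inverse coefficient matrix:

\[
\begin{pmatrix} X'X & X'Z \\ Z'X & Z'Z + D^{-1} \end{pmatrix}
\begin{pmatrix} \hat\beta \\ \hat u \end{pmatrix}
=
\begin{pmatrix} X'y \\ Z'y \end{pmatrix},
\qquad
\lambda'\hat\beta + \delta'\hat u \;\pm\; t_{\nu,\,1-\alpha/2}\;
\hat\sigma \sqrt{(\lambda'\;\delta')\, C \begin{pmatrix} \lambda \\ \delta \end{pmatrix}}
\]

where C is the inverse of the coefficient matrix on the left and \nu the degrees of freedom of \hat\sigma^2. Treating the random effects as fixed amounts to dropping the D^{-1} term, which inflates the quadratic form and hence the expected volume, consistent with the comparison made in the abstract.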


Book ChapterDOI
H. Linhart
01 Jan 1976
TL;DR: In this chapter, the 'wrong' test based on MSA/MSE is compared with the usual MSA/MSAB test in random-effects analysis of variance, and it is argued that the hypothesis tested by the wrong test is not 'worse' than the usual hypothesis and that, on power grounds, the wrong test is preferable in most applications.
Abstract: In random-effects analysis of variance one uses the statistic MSA/MSAB to test the hypothesis that 'factor A has no effect'. Here the wrong test, which uses MSA/MSE as its statistic, is compared with the usual test. The two tests do, of course, test two different hypotheses. An analysis of the terminology employed reveals that the wrong hypothesis is not 'worse' than the usual hypothesis; it is analogous to the corresponding hypothesis in fixed-effects analysis of variance. For this reason, and on power considerations, it is argued that the wrong test should be preferred in most applications.
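To make the distinction between the two statistics concrete, here is the textbook expected-mean-squares bookkeeping for a balanced two-way random model (a standard sketch, not drawn from the chapter): y_{ijk} = \mu + a_i + b_j + (ab)_{ij} + e_{ijk} with i = 1, ..., a, j = 1, ..., b, k = 1, ..., n and independent normal random effects.

\[
E[MS_A] = \sigma_e^2 + n\sigma_{AB}^2 + bn\sigma_A^2, \qquad
E[MS_{AB}] = \sigma_e^2 + n\sigma_{AB}^2, \qquad
E[MS_E] = \sigma_e^2
\]

Hence MSA/MSAB has a central F distribution exactly when \sigma_A^2 = 0, whereas MSA/MSE is central F only when n\sigma_{AB}^2 + bn\sigma_A^2 = 0, that is, when both \sigma_A^2 and \sigma_{AB}^2 vanish; the two statistics therefore test the two different hypotheses the abstract refers to.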