
Showing papers on "Random effects model published in 1989"


Journal ArticleDOI
TL;DR: In this article, the authors describe how common applications of latent variable analysis fail to recognize that data may be obtained from several populations with different sets of parameter values, and give an overview of methodology that can address such heterogeneity.
Abstract: Common applications of latent variable analysis fail to recognize that data may be obtained from several populations with different sets of parameter values. This article describes the problem and gives an overview of methodology that can address heterogeneity. Artificial examples of mixtures are given, where if the mixture is not recognized, strongly distorted results occur. MIMIC structural modeling is shown to be a useful method for detecting and describing heterogeneity that cannot be handled in regular multiple-group analysis. Other useful methods instead take a random effects approach, describing heterogeneity in terms of random parameter variation across groups. These random effects models connect with emerging methodology for multilevel structural equation modeling of hierarchical data. Examples are drawn from educational achievement testing, psychopathology, and sociology of education. Estimation is carried out by the LISCOMP program.

979 citations


Journal ArticleDOI
TL;DR: Estimates are obtained by evaluating the likelihood explicitly and using standard, derivative-free optimization procedures to locate its maximum. The model of analysis is the so-called Animal Model, which includes the additive genetic merit of animals as a random effect and incorporates all information on relationships between animals.
Abstract: Summary - A method is described for the simultaneous estimation of variance components due to several genetic and environmental effects from unbalanced data by restricted maximum likelihood (REML). Estimates are obtained by evaluating the likelihood explicitly and using standard, derivative-free optimization procedures to locate its maximum. The model of analysis considered is the so-called Animal Model, which includes the additive genetic merit of animals as a random effect, and incorporates all information on relationships between animals. Furthermore, random effects in addition to animals' additive genetic effects, such as maternal genetic, dominance or permanent environmental effects, are taken into account. Emphasis is placed entirely upon univariate analyses. Simulation is employed to investigate the efficacy of three different maximization techniques and the scope for approximation of sampling errors. Computations are illustrated with a numerical example. Keywords: variance components; restricted maximum likelihood; animal model; additional random effects; derivative-free approach

447 citations
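The "evaluate the likelihood explicitly, then locate its maximum with a derivative-free search" strategy can be illustrated on the simplest variance-components setting: a balanced one-way model with a single random effect. Everything below (data, dimensions, parameter values) is hypothetical, and the paper's Animal Model with relationship matrices is far richer than this sketch. For balanced data the REML estimates coincide with the classical ANOVA estimators, which gives a built-in check:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Hypothetical balanced one-way data: y_ij = mu + a_i + e_ij,
# with g groups of n records each.
g, n = 20, 10
a = rng.normal(0.0, 1.0, size=g)                     # random effects, sigma_a = 1
Y = 10.0 + a[:, None] + rng.normal(0.0, 1.0, size=(g, n))
y = Y.ravel()
N = g * n
Z = np.kron(np.eye(g), np.ones((n, 1)))              # group incidence matrix
X = np.ones((N, 1))                                  # fixed effect: overall mean

def reml_criterion(lam):
    """-2x REML log-likelihood (constant dropped), with the residual
    variance profiled out; lam = sigma_a^2 / sigma_e^2, V0 = I + lam*ZZ'."""
    V0 = np.eye(N) + lam * (Z @ Z.T)
    V0inv = np.linalg.inv(V0)
    XtViX = float(X.T @ V0inv @ X)
    P = V0inv - (V0inv @ X @ X.T @ V0inv) / XtViX    # REML projection matrix
    s2 = float(y @ P @ y) / (N - 1)                  # profiled sigma_e^2
    return (N - 1) * np.log(s2) + np.linalg.slogdet(V0)[1] + np.log(XtViX), s2

# Derivative-free (bounded scalar) search for the maximum.
res = minimize_scalar(lambda l: reml_criterion(l)[0],
                      bounds=(1e-8, 50.0), method="bounded")
sigma_e2_hat = reml_criterion(res.x)[1]
sigma_a2_hat = res.x * sigma_e2_hat

# Balanced-data check: REML coincides with the ANOVA estimators here.
gm = Y.mean(axis=1)
msw = ((Y - gm[:, None]) ** 2).sum() / (g * (n - 1))
msb = n * ((gm - y.mean()) ** 2).sum() / (g - 1)
```

Real animal-model software avoids the dense inverse above; this sketch only shows the shape of the computation.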


Journal ArticleDOI
TL;DR: In this paper, a review of ANOVA from the user's point of view is presented; the use of graphical techniques to visualize the ANOVA model and to analyse residuals is recommended. The main models of ANOVA are developed in some detail, including one-factor ANOVA, crossed designs, nested designs, repeated-measures ANOVA, and variance components estimation.

392 citations


Book
01 May 1989
TL;DR: Topics covered include chain binomial models, chain models with random effects, latent and infectious periods, heterogeneity of disease spread through a community, generalized linear models, martingale methods, and a review of other methods of inference.
Abstract: 1. Introduction 2. Chain binomial models 3. Chain models with random effects 4. Latent and infectious periods 5. Heterogeneity of disease spread through a community 6. Generalized Linear Models 7. Martingale methods 8. Methods of inference for large populations

389 citations


Journal ArticleDOI
TL;DR: In this article, a meta-analysis of studies that have taken place between 1974 and mid-1987 on sex differences in mathematical tasks is presented, showing that the average sex difference is very small; a confidence interval for it covers zero, though the interval lies mainly on the side of male advantage.
Abstract: This paper is a meta-analysis of studies that have taken place between 1974 and mid-1987 on sex differences in mathematical tasks. The methods used are estimations of (a) parameters for a random effects model and (b) coefficients for a linear regression equation, all based on effect sizes calculated from each study. These results are compared with meta-analyses of the studies on quantitative skill collected by Maccoby and Jacklin. These comparisons, together with ad hoc comparisons of Scholastic Aptitude Test effect sizes over the years, yield two conclusions. First, the average sex difference is very small; a confidence interval for it covers zero, though the interval lies mainly on the side of male advantage. Second, sex differences in performance are decreasing over the years.

282 citations
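Step (a) of the methods above, estimating the parameters of a random effects model from study-level effect sizes, is commonly done with the DerSimonian-Laird moment estimator; the sketch below uses that standard estimator on made-up effect sizes and variances (the paper does not necessarily use this exact variant):

```python
import numpy as np

def random_effects_meta(d, v):
    """DerSimonian-Laird random effects pooling of effect sizes d with
    within-study variances v. Returns (pooled, ci_lo, ci_hi, tau2)."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                    # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic
    k = len(d)
    # Moment estimator of the between-study variance, truncated at zero.
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2

# Hypothetical study-level sex differences (positive = male advantage):
d = [0.15, -0.05, 0.10, 0.02, -0.08]
v = [0.01, 0.02, 0.015, 0.01, 0.02]
pooled, lo, hi, tau2 = random_effects_meta(d, v)
# A small positive pooled effect whose confidence interval covers zero,
# mirroring the qualitative pattern the paper reports.
```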


Journal ArticleDOI
TL;DR: A quantitative method for measuring the information capacity of an animal's ‘signature system’, i.e. the set of cues by which individuals are identified, is developed and may prove valuable for comparative analyses where evolutionary hypotheses predict one species to have a better developed signature system than another.

259 citations


Journal ArticleDOI
TL;DR: In this paper, a score test for autocorrelation in the within-individual errors for the conditional independence random effects model was developed and an explicit maximum likelihood estimation procedure using the scoring method for the model with random effects and AR(1) errors was derived.
Abstract: For longitudinal data on several individuals, linear models that contain both random effects across individuals and autocorrelation in the within-individual errors are studied. A score test for autocorrelation in the within-individual errors for the “conditional independence” random effects model is first developed. An explicit maximum likelihood estimation procedure using the scoring method for the model with random effects and (autoregressive) AR(1) errors is then derived. Empirical Bayes estimation of the random effects and prediction of future responses of an individual based on this random effects with AR(1) errors model are also considered. A numerical example is presented to illustrate these methods.

250 citations
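The covariance structure at issue, random effects across individuals plus AR(1) within-individual errors, can be written out directly for the random-intercept special case. Parameter values below are illustrative only:

```python
import numpy as np

def marginal_cov(t, sigma_b2, sigma_e2, rho):
    """Covariance of t repeated measures under a random-intercept model
    with AR(1) errors: Cov(y) = sigma_b2 * J + sigma_e2 * R,
    where J is all-ones and R[i, j] = rho**|i - j|."""
    idx = np.arange(t)
    R = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1) correlation
    J = np.ones((t, t))                              # random-intercept part
    return sigma_b2 * J + sigma_e2 * R

Sigma = marginal_cov(t=4, sigma_b2=1.0, sigma_e2=0.5, rho=0.6)
# Diagonal entries equal sigma_b2 + sigma_e2; off-diagonals decay toward
# sigma_b2 as the lag grows. At rho = 0 the model reduces to the
# "conditional independence" random effects model that the score test
# takes as its null.
```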


Journal ArticleDOI
TL;DR: In this paper, a family of growth-curve models with different numbers of random effects for the individual sampling units and with a fixed structure on the mean was studied, where the maximum likelihood estimator of the fixed effects is identical to the ordinary least squares estimator.
Abstract: Intuition suggests that altering the covariance structure of a parametric model for repeated-measures data alters the variances of the model's estimated mean parameters. The purpose of this article is to sharpen such intuition for a family of growth-curve models with differing numbers of random effects for the individual sampling units and with a fixed structure on the mean. For every member of this family, the maximum likelihood (ML) estimator of the fixed effects is identical to the ordinary least squares (OLS) estimator. In addition, simple closed-form ML and restricted maximum likelihood estimators for the variance and covariance parameters exist for every member. As a consequence, closed-form expressions for the estimated variance-covariance matrix of the OLS estimator of the fixed effects also exist for the entire family. We derive explicit relationships between the variance and covariance parameter estimators from different members of the family and thereby extend some familiar results. Fo...

115 citations


Journal ArticleDOI
TL;DR: In this paper, transformation and weighting techniques are applied to dose-response curve models, in particular, weighting methods derived from a controlled-variable, random effect model and a closely related random-coefficient model are studied.
Abstract: SUMMARY Transformation and weighting techniques are applied to dose-response curve models. In particular, weighting methods derived from a controlled-variable, random-effect model and a closely related random-coefficient model are studied. These two models correspond to additive and multiplicative effects of variations in the dose, and both lead to variance components proportional to the square of the derivative of the response function with respect to dose. When the dose-response curve is nonlinear in dose, the variance components are typically identifiable even without replicate measurements of dose. In a bioassay example the fit of a logistic model is studied. The transform-both-sides technique with a power transformation is shown to give a vast improvement in fit, compared to the analysis with no transformation and no weighting, and it also gives considerably better estimates of the parameters in the logistic function. For the data set studied, a significant further improvement in the fit is possible by use of the random-effect models.

96 citations


Journal ArticleDOI
TL;DR: The modified χ² test described in this paper is motivated by a likelihood argument based on a random effects model for the true proportions, and works well in an example concerning error rates in computer records.
Abstract: The χ² test for proportions (or for independence in a 2 × m contingency table) requires that expected frequencies are not too small. An alternative to amalgamating low frequencies is to retain them but to give them less weight in the analysis. The resulting modified χ² test is also motivated by a likelihood argument based on a random effects model for the true proportions. The test works well in an example concerning error rates in computer records.

50 citations


ReportDOI
TL;DR: In this paper, the authors formalized and tested the notion that state governments' expenditures depend on the spending of similarly situated states, and they found that even after allowing for fixed state effects, year effects, and common random effects between neighbors, a state government's level of per capita expenditure is positively and significantly affected by the expenditure levels of its neighbors.
Abstract: This paper formalizes and tests the notion that state governments' expenditures depend on the spending of similarly situated states. We find that even after allowing for fixed state effects, year effects, and common random effects between neighbors, a state government's level of per capita expenditure is positively and significantly affected by the expenditure levels of its neighbors. Ceteris paribus, a one dollar increase in a state's neighbors' expenditures increases its own expenditure by over 70 cents.

Journal ArticleDOI
TL;DR: Analysis of the Gypsy moth data suggests that the addition of an enzyme to an environmentally safe, but not very potent, microbial control agent produces a mixture that is a more effective toxicant for the gypsy moth than the microbial agent used alone.
Abstract: This paper is concerned with the modeling and analysis of data collected in a large experiment designed to study the mortality in gypsy moths exposed to a mixture of two toxicants and observed over three time periods. The stochastic survival model employed is based on a pertinent biological model that describes the mode of action of synergism between the toxicants. Conditional probability of death in an interval, given survival up to that interval, is fitted by a binary response model with nested random effects added to fixed treatment effects. The random effects factors are used to account for intercorrelation and extravariation. Approximate maximum likelihood estimates of the parameters are evaluated by adapting the iteratively weighted least squares algorithm within GLIM. Results from the nested random effect model are compared with those from the quasi-likelihood procedure for overdispersed data. Analysis of the gypsy moth data suggests that the addition of an enzyme to an environmentally safe, but not very potent, microbial control agent produces a mixture that is a more effective toxicant for the gypsy moth than the microbial agent used alone.

Journal ArticleDOI
TL;DR: In this paper, the authors derived two-sided approximate β-content tolerance limits for multiway balanced random-effects models and used these to evaluate the firing time precision in a highexplosives system.
Abstract: In this article, we derive two-sided approximate β-content tolerance limits for multiway balanced random-effects models. We provide factors, obtained from numerical integration, that can be used to obtain β-content tolerance intervals. We describe methods for extending the results to nested models and discuss the use of the tabled tolerance factors for exact intervals for simple random samples when we have an independent estimate of the variance. We demonstrate the procedure with an experimental design used to evaluate the firing time precision in a high-explosives system.

Journal ArticleDOI
TL;DR: In this paper, the robust collocation solution in the original Mixed Linear Model can identically be derived as traditional LESS (LEast Squares Solution) in a modified Mixed Linear Model without using artifacts like pseudo-observations.
Abstract: The now classical collocation method in geodesy has been derived by H. Moritz (1970; 1973) within an appropriate Mixed Linear Model. According to B. Schaffrin (1985; 1986), even a generalized form of the collocation solution can be proved to represent a combined estimation/prediction procedure of type BLUUE (Best Linear Uniformly Unbiased Estimation) for the fixed parameters, and of type inhomBLIP (Best inhomogeneously LInear Prediction) for the random effects with not necessarily zero expectation. Moreover, “robust collocation” has been introduced by means of homBLUP (Best homogeneously Linear weakly Unbiased Prediction) for the random effects together with a suitable LUUE for the fixed parameters. Here we present an equivalence theorem which states that the robust collocation solution in the original Mixed Linear Model can identically be derived as traditional LESS (LEast Squares Solution) in a modified Mixed Linear Model without using artifacts like “pseudo-observations”. This allows a nice interpretation of “robust collocation” as an adjustment technique in the presence of “weak prior information”.

Journal ArticleDOI
TL;DR: In this article, the authors show that for the case of only one random factor plus error, balanced designs are optimal. They also consider the possibility that the costs of replications and of different levels of the random effect may differ, and derive an expression for the optimal number of replications.

Journal ArticleDOI
TL;DR: A simulation study has shown that, when the random effects are not Normally distributed, Bayes' factor estimates based on a kernel method improve on those based on a Normality assumption.
Abstract: The Bayes' factor, or likelihood ratio, plays an important role in the assessment of forensic evidence. Four methods of determining the Bayes' factor are developed. Background data collected by forensic scientists often have a random effects structure where the random effects do not have a Normal distribution. The methods of assessing these data compare results obtained where a group structure in the background data is and is not assumed, and where the within-group variance is and is not assumed known. The distribution of the random effects is modelled using kernel density estimation. A simulation study showed that an improvement over the Normality assumption of the Bayes' factor estimates is obtained by using a kernel method when the random effects are not Normally distributed.

Journal ArticleDOI
TL;DR: In this paper, optimum and robustness properties of the usual F test are derived for balanced random and mixed effects nested models, with the underlying distribution assumed to be elliptically symmetric.
Abstract: In this note certain optimum and robustness properties of the usual F test are derived for balanced random and mixed effects nested models. The underlying distribution assumed is elliptically symmetric. Wijsman's representation theorem is applied as a tool. Attention is confined to the testing of random effects only.

Journal ArticleDOI
TL;DR: A two-stage random effects model was applied to pulmonary function data from 31 sarcoidosis patients to illustrate its usefulness in analysing unbalanced longitudinal data and indicated that deterioration in FVC% is associated with a higher initial FVC% value and large numbers of both total cells and eosinophils in bronchoalveolar lavage at the initial assessment.
Abstract: We applied a two-stage random effects model to pulmonary function data from 31 sarcoidosis patients to illustrate its usefulness in analysing unbalanced longitudinal data. For the first stage, repeated measurements of percentage of predicted forced vital capacity (FVC%) from an individual were modelled as a function of time since initial clinical assessment. At the second stage, parameters of this function were modelled as a function of certain patient characteristics. We used three methods for estimating the model parameters: maximum likelihood; empirical Bayes; and a two-step least-squares procedure. Similar results were obtained from each, but we recommend the empirical Bayes, since it provides unbiased estimates of variance components. Results indicated that deterioration in FVC% is associated with a higher initial FVC% value and large numbers of both total cells and eosinophils in bronchoalveolar lavage at the initial assessment. Improvement is associated with higher values of pulmonary Gallium uptake at initial assessment and race. Blacks are more likely to improve than whites.
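The two-stage logic (stage one: a per-patient time trend; stage two: regress the fitted coefficients on patient characteristics) can be sketched with the simplest of the three estimators mentioned, two-step least squares. All data below are synthetic; only the structure follows the description:

```python
import numpy as np

# Synthetic data: 6 subjects with FVC%-like trajectories at 4 time points.
t = np.array([0.0, 1.0, 2.0, 3.0])
baseline = np.array([60.0, 70.0, 80.0, 90.0, 100.0, 110.0])  # covariate x_i
true_slopes = 2.0 - 0.05 * baseline        # slope depends linearly on x_i
Y = 50.0 + true_slopes[:, None] * t[None, :]   # noise-free for illustration

# Stage 1: ordinary least squares time trend for each subject.
slopes = np.array([np.polyfit(t, y_i, 1)[0] for y_i in Y])

# Stage 2: regress the stage-1 slopes on the baseline covariate.
gamma1, gamma0 = np.polyfit(baseline, slopes, 1)
# gamma1 recovers the built-in -0.05 dependence of the rate of change
# on the baseline value.
```

The paper's preferred empirical Bayes fit shrinks the stage-1 estimates toward the population trend; this sketch shows only the unshrunk two-step version.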

01 Jan 1989
TL;DR: A number of models describing the correlation of trip making over time are presented and estimated using the generalized method of moments procedure, which is asymptotically efficient and does not require assumptions about the initial conditions.
Abstract: A number of models are presented and estimated describing the correlation of trip making over time. Unobserved heterogeneity is taken into account using random effects. The basic models considered are the serial correlation and the state-dependence model. Trip making in total and by transit was best described by state-dependence models; trip making by car by a model with lagged exogenous variables. The generalized method of moments procedure is used for estimation of the models: it is asymptotically efficient and does not require assumptions about the initial conditions.

Journal ArticleDOI
TL;DR: In this article, the effect of imbalance on two methods of testing hypotheses about the between-group variance component in a one-way random effects model is investigated over a wide range of designs.
Abstract: The effect of imbalance on two methods of testing hypotheses about the between-group variance component in a one-way random effects model is investigated over a wide range of designs. For testing non-zero values of the variance component, it is found that the likelihood ratio statistic is rarely preferable to the F-statistic, even for substantial amounts of extremely imbalanced data. However, the likelihood ratio statistic can be appreciably more powerful than the F-statistic in testing for a null value of the variance component if the data are extremely imbalanced.
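For reference, the F-statistic under study is the usual one-way ANOVA ratio MSB/MSW, referred to an F(g-1, N-g) distribution under the null of a zero between-group variance component. A minimal balanced-data sketch with made-up numbers (the paper's focus, imbalance, is not reproduced here):

```python
import numpy as np
from scipy.stats import f

def oneway_F(groups):
    """F-test of H0: between-group variance component is zero,
    for a list of equal-length group arrays."""
    groups = [np.asarray(arr, float) for arr in groups]
    g, n = len(groups), len(groups[0])
    grand = np.mean(np.concatenate(groups))
    means = np.array([arr.mean() for arr in groups])
    msb = n * np.sum((means - grand) ** 2) / (g - 1)           # between MS
    msw = sum(((arr - arr.mean()) ** 2).sum() for arr in groups) \
        / (g * (n - 1))                                        # within MS
    F = msb / msw
    return F, f.sf(F, g - 1, g * (n - 1))   # p-value from F(g-1, N-g)

# Deterministic toy data with large group separation:
F_stat, p = oneway_F([[0.0, 0.1, -0.1], [10.0, 10.1, 9.9], [20.0, 20.1, 19.9]])
# Group effects dwarf the tiny within-group spread, so F is huge and the
# null of a zero between-group variance component is firmly rejected.
```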

Journal ArticleDOI
A.I. Khuri1
TL;DR: In this article, the covariance structure associated with Scheffe's mixed two-way classification model for balanced as well as unbalanced data is considered and a multivariate test is presented to check the validity of this form.


Journal ArticleDOI
TL;DR: In this paper, a tutorial review of two-factor nested experiments with random factors is presented, and results are given for the power of the F-test in such experiments.
Abstract: A tutorial review of two-factor nested experiments with random factors is presented. Results are given for the power of the F-test in such experiments...

Journal ArticleDOI
TL;DR: In this article, the authors examined the behavior of the estimates of the eigenvalues of the covariance matrix of the model for various sample sizes and compared several approaches for the estimation of eigenvalues.

Book ChapterDOI
01 Jan 1989
TL;DR: In this paper, a threshold model is defined to provide a link between the ordinal scale of measurement and a linear scale on which treatments are supposed to act, where random effects are added to the linear predictor.
Abstract: This paper is concerned with the analysis of ordinal data obtained from stratified experiments. A threshold model is defined to provide a link between the ordinal scale of measurement and a linear scale on which treatments are supposed to act. In order to account for stratification in the data, random effects are added to the linear predictor. It is shown that maximum likelihood estimates can be obtained by iteratively weighted least squares. Relationships with other types of data are discussed.

01 Jan 1989
TL;DR: Quantitative genetic theory provides models to predict the probability of obtaining superior recombinant inbreds in the offspring of a cross between two pure breeding lines; the predictive value of the procedure is evaluated by investigating various violations of the assumptions, such as non-normality of genotypic effects, heteroscedasticity, and fixed versus random effects.
Abstract: Quantitative genetic theory provides models to predict the probability of obtaining superior recombinant inbreds in the offspring of a cross between two pure breeding lines. The prediction procedure is prone to various types of error, which possibly invalidate it: 1) stochastic variation; 2) incorrectness of the genetic assumptions on which the theory is founded; and 3) genotype-environment interaction, in particular intergenotypic competition. The predictive value of the procedure is evaluated by studying the effects of the individual sources of error. Chapter 2 deals with stochastic variation; it establishes the superiority of an alternative estimator of the additive genotypic variance under most practical circumstances. Chapter 2 also presents a method to optimize the population design (number of lines, size of the lines) with respect to the accuracy of the estimator. Chapter 3 investigates various violations of the assumptions on which the theory is founded, such as non-normality of genotypic effects, heteroscedasticity, and fixed versus random effects. Chapters 4 and 5 investigate the bias on the estimates of the F∞ mean and variance, respectively, caused by intergenotypic competition.

Book ChapterDOI
01 Jan 1989
TL;DR: A normal random effects model for multiple measurements recorded on an ordinal scale with c categories is introduced, appropriate for a wide range of practical applications and enabling a comparison between the method and an alternative GLIM approach.
Abstract: This paper introduces a normal random effects model for multiple measurements recorded on an ordinal scale with c categories. The model is general and appropriate for a wide range of practical applications. One such application is a cross-over study, for which a detailed examination is provided. Particular attention is focussed on the special case of binary responses, enabling a comparison between the method of this paper and an alternative GLIM approach.

01 Jan 1989
TL;DR: A random effects model for longitudinal data and a method of identifying non-trackers, with an explanation of the computer algorithm, are presented.
Abstract: Chapter 1, Introduction: 1. random effects model for longitudinal data; 2. estimation of parameters. Chapter 2, Method of identifying non-trackers: 1. method of identification; 2. explanation of the computer algorithm. Chapter 3, Conclusion.

Book ChapterDOI
01 Jan 1989
TL;DR: In this paper, a unified approach for the estimation of unknown fixed parameters and prediction of random effects in a mixed Gauss-Markov linear model is developed for both the estimators and their mean square errors can be expressed in terms of the elements of a g-inverse of a partitioned matrix.
Abstract: A unified approach is developed for the estimation of unknown fixed parameters and prediction of random effects in a mixed Gauss- Markov linear model. It is shown that both the estimators and their mean square errors can be expressed in terms of the elements of a g-inverse of a partitioned matrix which can be set up in terms of the matrices used in expressing the model. No assumptions are made on the ranks of the matrices involved. The method is parallel to the one developed by the author in the case of the fixed effects Gauss-Markov model using a g-inverse of a partitioned matrix (Rao, 1971, 1972, 1973, 1985).

Book ChapterDOI
01 Jan 1989
TL;DR: This chapter discusses random effects models, which describe at least a two-stage hierarchical process, where first subpopulations are sampled and then observations are made within each selected subpopulation.
Abstract: Publisher Summary: This chapter discusses random effects models. These models describe at least a two-stage hierarchical process, where first subpopulations are sampled and then observations are made within each selected subpopulation. Traditionally, analysis of such models has been restricted to inferences about variance components: one component to describe variation within subpopulations, and one or more components for variation among subpopulations. There are several attractions to considering Bayesian inference for random effects. Bayesian analysis treats all unknown parameters as random variables. Thus, the distinction between fixed and random effects is less fundamental in a Bayesian than in a sampling theory framework. Earlier Bayesian writers emphasized that in the random effects model, information from other sampled subpopulations—collateral information—has the potential of greatly reducing the impact of prior specifications on posterior inferences for a given subpopulation, even when the sample available for that subpopulation is small.