Topic

Random effects model

About: Random effects model is a research topic. Over its lifetime, 8,388 publications have been published within this topic, receiving 438,823 citations. The topic is also known as: random effects & random effect.


Papers
Journal Article
TL;DR: In this paper, the probit-normal model for binary data is extended to allow correlated random effects; maximum likelihood estimates are obtained via the EM algorithm, with the M-step greatly simplified under the probit link and the E-step made feasible by Gibbs sampling.
Abstract: The probit-normal model for binary data (McCulloch, 1994, Journal of the American Statistical Association 89, 330-335) is extended to allow correlated random effects. To obtain maximum likelihood estimates, we use the EM algorithm with its M-step greatly simplified under the assumption of a probit link and its E-step made feasible by Gibbs sampling. Standard errors are calculated by inverting a Monte Carlo approximation of the information matrix rather than via the SEM algorithm. A method is also suggested that accounts for the Monte Carlo variation explicitly. As an illustration, we present a new analysis of the famous salamander mating data. Unlike previous analyses, we find it necessary to introduce different variance components for different species of animals. Finally, we consider models with correlated errors as well as correlated random effects.
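As a rough illustration of the modeling idea (not the paper's Gibbs-within-EM algorithm), the sketch below approximates the marginal likelihood of a probit model with a single random intercept per cluster by plain Monte Carlo integration over the random effects; the function names, simulated data, and parameter values are all hypothetical.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_data(n_clusters=30, n_per=10, beta=0.5, sigma_b=1.0):
    # Binary responses from a probit model with one normal random intercept per cluster.
    x = rng.normal(size=(n_clusters, n_per))
    b = rng.normal(scale=sigma_b, size=n_clusters)
    y = rng.binomial(1, norm.cdf(beta * x + b[:, None]))
    return x, y

def mc_loglik(beta, sigma_b, x, y, n_draws=2000):
    # Monte Carlo estimate of the marginal log-likelihood, integrating out the
    # random intercepts with independent draws from their assumed distribution.
    b = rng.normal(scale=sigma_b, size=n_draws)
    total = 0.0
    for xi, yi in zip(x, y):
        p = norm.cdf(beta * xi[None, :] + b[:, None])        # shape (n_draws, n_per)
        lik = np.prod(np.where(yi == 1, p, 1 - p), axis=1)   # per-draw cluster likelihood
        total += np.log(lik.mean())
    return total

x, y = simulate_data()
print(mc_loglik(0.5, 1.0, x, y))   # log-likelihood at the true parameter values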

91 citations

Book
02 Nov 2015
TL;DR: This book discusses what multilevel modeling is and the theoretical and statistical reasons for using it, starting from a review of single-level regression and of nesting structures in data.
Abstract: Chapter 1: What Is Multilevel Modeling and Why Should I Use It? Mixing levels of analysis. Theoretical reasons for multilevel modeling. What are the advantages of using multilevel models? Statistical reasons for multilevel modeling. Assumptions of OLS. Software. How this book is organized.
Chapter 2: Random Intercept Models: When intercepts vary. A review of single-level regression. Nesting structures in our data. Getting started with random intercept models. What do our findings mean so far? Changing the grouping to schools. Adding Level 1 explanatory variables. Adding Level 2 explanatory variables. Group mean centring. Interactions. Model fit. What about R-squared? A further assumption and a short note on random and fixed effects.
Chapter 3: Random Coefficient Models: When intercepts and coefficients vary. Getting started with random coefficient models. Trying a different random coefficient. Shrinkage. Fanning in and fanning out. Examining the variances. A dichotomous variable as a random coefficient. More than one random coefficient. A note on parsimony and fitting a model with multiple random coefficients. A model with one random and one fixed coefficient. Adding Level 2 variables. Residual diagnostics. First steps in model-building. Some tasters of further extensions to our basic models. Where to next?
Chapter 4: Communicating Results to a Wider Audience. Creating journal-formatted tables. The fixed part of the model. The importance of the null model. Centring variables. Stata commands to make table-making easier. What do you talk about? Models with random coefficients. What about graphs? Cross-level interactions. Parting words.
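The random intercept and random coefficient models the book builds up (in Stata) can be sketched in Python with statsmodels; the dataset and variable names (score, ses, school) below are hypothetical stand-ins.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_pupils = 20, 30
school = np.repeat(np.arange(n_schools), n_pupils)
u = rng.normal(scale=2.0, size=n_schools)                  # school-level random intercepts
ses = rng.normal(size=n_schools * n_pupils)                # Level 1 explanatory variable
score = 50 + 3 * ses + u[school] + rng.normal(scale=5.0, size=n_schools * n_pupils)
df = pd.DataFrame({"score": score, "ses": ses, "school": school})

# Random intercept model: the intercept varies by school, the ses slope is fixed.
ri = smf.mixedlm("score ~ ses", df, groups=df["school"]).fit()
print(ri.summary())

# Random coefficient model: both the intercept and the ses slope vary by school.
rc = smf.mixedlm("score ~ ses", df, groups=df["school"], re_formula="~ses").fit()
print(rc.summary())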

91 citations

Journal Article
TL;DR: This work presents the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach.
Abstract: Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.
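To make the latent-trait idea concrete (this is only a sketch of the general approach, not the TreeBUGS R interface), the lines below generate data from a toy one-high-threshold MPT model in which each participant's detection and guessing parameters are probit transforms of draws from a group-level multivariate normal, allowing correlated person effects; the model choice and all numbers are illustrative.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_subj, n_old, n_new = 40, 60, 60

mu = np.array([0.5, -0.5])                     # group means of probit(D) and probit(g)
cov = np.array([[0.4, 0.1], [0.1, 0.3]])       # allows correlated person effects
theta = norm.cdf(rng.multivariate_normal(mu, cov, size=n_subj))  # per-person (D, g) in (0, 1)
D, g = theta[:, 0], theta[:, 1]

# One-high-threshold MPT: hit = detect the old item OR guess "old"; false alarm = guess "old".
hits = rng.binomial(n_old, D + (1 - D) * g)
false_alarms = rng.binomial(n_new, g)
print(hits[:5], false_alarms[:5])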

91 citations

Journal Article
TL;DR: In meta-analysis, there is an increasing trend to explicitly acknowledge the presence of study variability through random-effects models as discussed by the authors, where one assumes that for each study there is a study-specific effect and one is observing an estimate of this latent variable.
Abstract: In meta-analysis, there is an increasing trend to explicitly acknowledge the presence of study variability through random-effects models. That is, one assumes that for each study there is a study-specific effect and one is observing an estimate of this latent variable. In a random-effects model, one assumes that these study-specific effects come from some distribution, and one can estimate the parameters of this distribution, as well as the study-specific effects themselves. This distribution is most often modeled through a parametric family, usually a family of normal distributions. The advantage of using a normal distribution is that the mean parameter plays an important role, and much of the focus is on determining whether or not this mean is 0. For example, it may be easier to justify funding further studies if it is determined that this mean is not 0. Typically, this normality assumption is made for the sake of convenience, rather than from some theoretical justification, and may not actually hold. W...
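For concreteness, the sketch below fits the normal random-effects model to made-up study estimates using the DerSimonian-Laird moment estimator of the between-study variance, one common choice (not necessarily the one discussed in this paper), and then tests whether the mean of the effect distribution is 0.

import numpy as np
from scipy.stats import norm

y = np.array([0.30, 0.10, 0.55, -0.05, 0.25])   # study-specific effect estimates (made up)
v = np.array([0.04, 0.02, 0.09, 0.03, 0.05])    # their within-study variances (made up)

w = 1 / v                                        # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)               # heterogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL estimate

w_star = 1 / (v + tau2)                          # random-effects weights
mu_hat = np.sum(w_star * y) / np.sum(w_star)     # estimated mean of the effect distribution
se = 1 / np.sqrt(np.sum(w_star))
z = mu_hat / se
p = 2 * (1 - norm.cdf(abs(z)))                   # two-sided test of "mean effect is 0"
print(f"tau^2={tau2:.3f}, mean={mu_hat:.3f}, SE={se:.3f}, p={p:.3f}")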

91 citations

Journal Article
TL;DR: It is proved that the underspecification creates bias in both small and large samples, indicating that recruiting more participants will not alleviate inflation of the Type I error rate associated with fixed effect inference.
Abstract: Analysis of a large longitudinal study of children motivated our work. The results illustrate how accurate inference for fixed effects in a general linear mixed model depends on the covariance model selected for the data. Simulation studies have revealed biased inference for the fixed effects with an underspecified covariance structure, at least in small samples. One underspecification common for longitudinal data assumes a simple random intercept and conditional independence of the within-subject errors (i.e., compound symmetry). We prove that the underspecification creates bias in both small and large samples, indicating that recruiting more participants will not alleviate inflation of the Type I error rate associated with fixed effect inference. Enumerations and simulations help quantify the bias and evaluate strategies for avoiding it. When practical, backwards selection of the covariance model, starting with an unstructured pattern, provides the best protection. Tutorial papers can guide the reader in minimizing the chances of falling into the often spurious software trap of nonconvergence. In some cases, the logic of the study design and the scientific context may support a structured pattern, such as an autoregressive structure. The sandwich estimator provides a valid alternative in sufficiently large samples. Authors reporting mixed-model analyses should note possible biases in fixed effects inference because of the following: (i) the covariance model selection process; (ii) the specific covariance model chosen; or (iii) the test approximation.
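For reference, the compound-symmetry structure mentioned above is exactly what a random intercept plus conditionally independent within-subject errors implies (standard mixed-model notation, not taken from the paper): for subject i with n_i observations,

\mathrm{Var}(\mathbf{y}_i) = \sigma_b^2\,\mathbf{1}\mathbf{1}^\top + \sigma_e^2\,\mathbf{I}_{n_i} =
\begin{pmatrix}
\sigma_b^2+\sigma_e^2 & \sigma_b^2 & \cdots & \sigma_b^2\\
\sigma_b^2 & \sigma_b^2+\sigma_e^2 & \cdots & \sigma_b^2\\
\vdots & \vdots & \ddots & \vdots\\
\sigma_b^2 & \sigma_b^2 & \cdots & \sigma_b^2+\sigma_e^2
\end{pmatrix},

so every pair of within-subject observations is forced to share the same covariance \sigma_b^2. Richer patterns, such as AR(1) with \mathrm{Cov}(y_{ij}, y_{ik}) = \sigma^2 \rho^{|j-k|} or a fully unstructured matrix, relax exactly this constraint.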

91 citations


Network Information
Related Topics (5)
Sample size determination: 21.3K papers, 961.4K citations (91% related)
Regression analysis: 31K papers, 1.7M citations (88% related)
Multivariate statistics: 18.4K papers, 1M citations (88% related)
Linear model: 19K papers, 1M citations (88% related)
Linear regression: 21.3K papers, 1.2M citations (85% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2024: 1
2023: 198
2022: 433
2021: 409
2020: 380
2019: 404