Topic

Random effects model

About: Random effects model is a research topic. Over its lifetime, 8,388 publications have been published within this topic, receiving 438,823 citations. The topic is also known as: random effects & random effect.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors present several extensions of the most familiar models for count data, the Poisson and negative binomial models, and develop an encompassing model for two well known variants of the NB1 and NB2 forms.
Abstract: This study presents several extensions of the most familiar models for count data, the Poisson and negative binomial models. We develop an encompassing model for two well known variants of the negative binomial model (the NB1 and NB2 forms). We then propose some alternative approaches to the standard log gamma model for introducing heterogeneity into the loglinear conditional means for these models. The lognormal model provides a versatile alternative specification that is more flexible (and more natural) than the log gamma form, and provides a platform for several “two part” extensions, including zero inflation, hurdle and sample selection models. We also resolve some features in Hausman, Hall and Griliches's (1984) widely used panel data treatments for the Poisson and negative binomial models that appear to conflict with more familiar models of fixed and random effects. Finally, we consider a bivariate Poisson model that is also based on the lognormal heterogeneity model. Two recent applications have used this model. We suggest that the correlation estimated in their model frameworks is an ambiguous measure of the correlation of the variables of interest, and may substantially overstate it. We conclude with a detailed application of the proposed methods using the data employed in one of the two aforementioned bivariate Poisson studies.

157 citations
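As a rough illustration of the count-data models discussed in this abstract, the sketch below simulates data with lognormal unobserved heterogeneity in the loglinear conditional mean and fits Poisson, NB2 and NB1 specifications with statsmodels. The coefficients and heterogeneity scale are assumptions chosen for illustration, not values or code from the paper.

```python
# Minimal sketch: Poisson vs. NB1 vs. NB2 fits on simulated count data whose
# conditional mean carries lognormal (rather than log-gamma) heterogeneity.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Lognormal heterogeneity: exp(sigma * v), v ~ N(0, 1), multiplies the mean.
sigma = 0.5
v = rng.normal(size=n)
mu = np.exp(0.3 + 0.7 * x + sigma * v)
y = rng.poisson(mu)

poisson_fit = Poisson(y, X).fit(disp=False)
nb2_fit = NegativeBinomial(y, X, loglike_method="nb2").fit(disp=False, maxiter=200)
nb1_fit = NegativeBinomial(y, X, loglike_method="nb1").fit(disp=False, maxiter=200)

for name, res in [("Poisson", poisson_fit), ("NB2", nb2_fit), ("NB1", nb1_fit)]:
    print(name, res.params.round(3), "log-likelihood:", round(res.llf, 1))
```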

Journal Article
TL;DR: In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis. They discuss the value of confidence intervals, show how they can be used in addition to or instead of retrospective power analysis, and demonstrate that confidence intervals can convey information more effectively in some situations than power analyses alone.
Abstract: In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis. Like statistical power analysis for primary studies, power analysis for meta-analysis can be done either prospectively or retrospectively and requires assumptions about parameters that are unknown. The authors provide some suggestions for thinking about these parameters, in particular for the random effects variance component. The authors also show how the typically uninformative retrospective power analysis can be made more informative. The authors then discuss the value of confidence intervals, show how they could be used in addition to or instead of retrospective power analysis, and also demonstrate that confidence intervals can convey information more effectively in some situations than power analyses alone. Finally, the authors take up the question “How many studies do you need to do a meta-analysis?” and show that, given the need for a conclusion, the answer is “two studies...

157 citations
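The prospective power analysis described above can be sketched with the standard large-sample approximation for the random-effects summary effect: power depends on the assumed effect size, the within-study variance, the random-effects variance component, and the number of studies. The numeric inputs below are illustrative assumptions, not values from the article.

```python
# Approximate power for detecting a random-effects summary effect from k studies,
# using the usual normal-theory approximation for the pooled estimate.
import numpy as np
from scipy.stats import norm

def random_effects_power(delta, v_within, tau2, k, alpha=0.05):
    """delta: assumed summary effect; v_within: common within-study variance;
    tau2: assumed between-study (random effects) variance; k: number of studies."""
    var_summary = (v_within + tau2) / k        # variance of the RE summary effect
    lam = delta / np.sqrt(var_summary)         # noncentrality of the z test
    c = norm.ppf(1 - alpha / 2)                # two-sided critical value
    return 1 - norm.cdf(c - lam) + norm.cdf(-c - lam)

# Example: effect 0.3, within-study variance 0.04, tau^2 = 0.02
for k in (2, 5, 10, 20):
    print(k, "studies -> power", round(random_effects_power(0.3, 0.04, 0.02, k), 3))
```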

Journal ArticleDOI
TL;DR: In this paper, it is shown that unless all studies are of similar size, the DerSimonian and Laird procedure is inefficient when estimating the between-study variance, but is remarkably efficient when estimating the treatment effect.

156 citations
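For reference, a minimal sketch of the DerSimonian and Laird procedure referred to above: a moment estimator of the between-study variance tau^2, followed by inverse-variance weighting with the combined within- and between-study variances. The input data below are made up for illustration.

```python
# DerSimonian-Laird random-effects meta-analysis in a few lines.
import numpy as np

def dersimonian_laird(y, v):
    """y: study effect estimates; v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
    Q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / denom)       # DL between-study variance estimate
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    theta = np.sum(w_re * y) / np.sum(w_re)      # random-effects pooled effect
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta, se, tau2

theta, se, tau2 = dersimonian_laird([0.2, 0.5, 0.1, 0.4], [0.04, 0.09, 0.02, 0.06])
print(round(theta, 3), round(se, 3), round(tau2, 3))
```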

Journal ArticleDOI
TL;DR: Two simple models for binary response data were studied to assess the effects of assuming normality, or of using a nonparametric fitting procedure, for the random effects when their true distribution is potentially far from normal.

156 citations
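The concern in this TL;DR, that the true random-effects distribution may be far from normal, can be illustrated by simulating clustered binary data under a normal random intercept and under a bimodal two-point mixture with the same variance. The mixture, sample sizes and coefficients below are assumptions for illustration; one would then compare a normality-assuming random-intercept logistic fit against a nonparametric (e.g. mass-point) fit on each dataset.

```python
# Simulate random-intercept logistic data under two "truths" for the random effects:
# a standard normal distribution and a bimodal two-point mixture with the same variance.
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_per = 200, 20

def simulate(random_intercepts):
    b = np.repeat(random_intercepts, n_per)          # cluster effect for each observation
    x = rng.normal(size=n_clusters * n_per)
    eta = -0.5 + 1.0 * x + b                         # linear predictor with random intercept
    p = 1.0 / (1.0 + np.exp(-eta))
    return x, rng.binomial(1, p)

b_normal = rng.normal(0.0, 1.0, size=n_clusters)     # normal random effects
b_mixture = rng.choice([-1.0, 1.0], size=n_clusters) # far from normal, same variance

x1, y1 = simulate(b_normal)
x2, y2 = simulate(b_mixture)
print(y1.mean(), y2.mean())  # marginal response rates under the two truths
```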

Journal ArticleDOI
TL;DR: In this article, the authors explore the joint modeling approach under the accelerated failure time assumption when covariates are assumed to follow a linear mixed effects model with measurement errors, and the procedure is based on maximising the joint likelihood function with random effects treated as missing data.
Abstract: The accelerated failure time model is an attractive alternative to the Cox model when the proportionality assumption fails to capture the relationship between the survival time and longitudinal covariates. Several complications arise when the covariates are measured intermittently at different time points for different subjects, possibly with measurement errors, or measurements are not available after the failure time. Joint modelling of the failure time and longitudinal data offers a solution to such complications. We explore the joint modelling approach under the accelerated failure time assumption when covariates are assumed to follow a linear mixed effects model with measurement errors. The procedure is based on maximising the joint likelihood function with random effects treated as missing data. A Monte Carlo EM algorithm is used to estimate all the unknown parameters, including the unknown baseline hazard function. The performance of the proposed procedure is checked in simulation studies. A case study of reproductive egg-laying data for female Mediterranean fruit flies and their relationship to longevity demonstrates the effectiveness of the new procedure.

156 citations
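A toy data-generating sketch of the setup described in this abstract: a longitudinal covariate following a linear mixed effects model, observed intermittently with measurement error, and an accelerated-failure-time outcome driven by the subject-specific random effects. All parameter values are illustrative assumptions; the paper's estimation proceeds by maximising the joint likelihood with a Monte Carlo EM algorithm, which is not reproduced here.

```python
# Toy joint-model data: random intercept/slope trajectories with measurement error,
# plus a log-normal AFT failure time depending on the random slope, with censoring.
import numpy as np

rng = np.random.default_rng(2)
n = 100

# Subject-specific random intercept and slope of the true covariate trajectory
b = rng.multivariate_normal([1.0, 0.5], [[0.3, 0.05], [0.05, 0.1]], size=n)

# Log-normal AFT failure times driven by the random effects
log_T = 1.5 - 0.8 * b[:, 1] + 0.3 * rng.normal(size=n)
T = np.exp(log_T)
C = rng.exponential(scale=8.0, size=n)            # independent censoring times
time, event = np.minimum(T, C), (T <= C).astype(int)

# Intermittent, error-prone measurements of the covariate up to the observed time
records = []
for i in range(n):
    t_obs = np.sort(rng.uniform(0, time[i], size=rng.integers(2, 6)))
    x_obs = b[i, 0] + b[i, 1] * t_obs + 0.2 * rng.normal(size=t_obs.size)
    records.append((t_obs, x_obs))

print("event rate:", event.mean(), "visits for subject 0:", len(records[0][0]))
```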


Network Information
Related Topics (5)
- Sample size determination: 21.3K papers, 961.4K citations, 91% related
- Regression analysis: 31K papers, 1.7M citations, 88% related
- Multivariate statistics: 18.4K papers, 1M citations, 88% related
- Linear model: 19K papers, 1M citations, 88% related
- Linear regression: 21.3K papers, 1.2M citations, 85% related
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2024: 1
2023: 198
2022: 433
2021: 409
2020: 380
2019: 404