Topic

Random effects model

About: Random effects model is a research topic. Over its lifetime, 8,388 publications have been published within this topic, receiving 438,823 citations. The topic is also known as: random effects & random effect.


Papers
Journal Article
TL;DR: This article showed that the success of a transformation may be judged solely in terms of how closely the total error follows a Gaussian distribution, which avoids the complexity of separately evaluating pure errors and random effects.
Abstract: For a univariate linear model, the Box–Cox method helps to choose a response transformation to ensure the validity of a Gaussian distribution and related assumptions. The desire to extend the method to a linear mixed model raises many vexing questions. Most importantly, how do the distributions of the two sources of randomness (pure error and random effects) interact in determining the validity of assumptions? For an otherwise valid model, we prove that the success of a transformation may be judged solely in terms of how closely the total error follows a Gaussian distribution. Hence the approach avoids the complexity of separately evaluating pure errors and random effects. The extension of the transformation to the mixed model requires an exploration of its potential effect on estimation and inference of the model parameters. Analysis of longitudinal pulmonary function data and Monte Carlo simulations illustrate the methodology discussed.

103 citations
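
As a rough sketch of the idea in this paper, the Python snippet below picks a Box-Cox lambda for a random-intercept mixed model by checking how Gaussian the total residual (pure error plus random effect) looks after transformation. The simulated longitudinal data, the lambda grid, and the Shapiro-Wilk criterion are illustrative assumptions; the paper develops a likelihood-based treatment rather than this ad hoc scoring.

```python
# Sketch: judge a Box-Cox transformation for a mixed model by the normality
# of the TOTAL residual (pure error + random effect), not its components.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated longitudinal data: 40 subjects, 5 visits each, strictly positive response.
n_subj, n_visit = 40, 5
subj = np.repeat(np.arange(n_subj), n_visit)
time = np.tile(np.arange(n_visit), n_subj)
u = rng.normal(0.0, 0.3, n_subj)                        # random intercepts
y = np.exp(1.0 + 0.1 * time + u[subj] + rng.normal(0.0, 0.2, subj.size))
df = pd.DataFrame({"y": y, "time": time, "subj": subj})

def total_error_normality(lam):
    """Fit a random-intercept model to the Box-Cox transformed response and
    score it by the Shapiro-Wilk statistic of the marginal (total) residuals."""
    z = stats.boxcox(df["y"], lmbda=lam)
    d = df.assign(z=z)
    fit = smf.mixedlm("z ~ time", d, groups=d["subj"]).fit()
    total_resid = d["z"] - fit.predict(d)               # pure error + random effect
    return stats.shapiro(total_resid).statistic

grid = np.linspace(-1.0, 1.0, 21)
best = max(grid, key=total_error_normality)
print(f"lambda with the most Gaussian-looking total error: {best:.2f}")
```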

Journal Article
TL;DR: Researchers and policy makers need to carefully consider the balance between false positives and false negatives when choosing statistical models for determining which hospitals have higher than acceptable mortality in performance profiling.
Abstract: Background. There is an increasing movement towards the release of hospital “report-cards.” However, there is a paucity of research into the abilities of the different methods to correctly classify hospitals as performance outliers. Objective. To examine the ability of risk-adjusted mortality rates computed using conventional logistic regression and random-effects logistic regression models to correctly identify hospitals that have higher than acceptable mortality. Research Design. Monte Carlo simulations. Measures. Sensitivity, specificity, and positive predictive value of a classification as a high outlier for identifying hospitals with higher than acceptable mortality rates. Results. When the distribution of hospital-specific log-odds of death was normal, random-effects models had greater specificity and positive predictive value than fixed-effects models. However, fixed-effects models had greater sensitivity than random-effects models. Conclusions. Researchers and policy makers need to carefully consider the balance between false positives and false negatives when choosing statistical models for determining which hospitals have higher than acceptable mortality in performance profiling.

103 citations
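
The snippet below is a heavily reduced version of the kind of simulation this paper reports: hospitals with varying true log-odds of death are flagged as high outliers using either raw per-hospital (fixed-effects) estimates or shrunken estimates that stand in for a random-effects logistic regression. The sample sizes, the outlier cutoff, and the normal-normal empirical-Bayes shrinkage are assumptions made for illustration, not the authors' models.

```python
# Sketch: fixed-effects vs shrunken (random-effects-style) flagging of
# high-mortality hospitals, scored by sensitivity, specificity and PPV.
import numpy as np

rng = np.random.default_rng(1)
n_hosp, n_pat = 200, 300
baseline = -2.0                                   # overall log-odds of death (about 12%)
true_effect = rng.normal(0.0, 0.4, n_hosp)        # hospital-specific shift in log-odds
truly_high = true_effect > 0.4                    # "higher than acceptable" hospitals

p_death = 1.0 / (1.0 + np.exp(-(baseline + true_effect)))
deaths = rng.binomial(n_pat, p_death)

# Fixed-effects estimate: raw hospital-specific log-odds (0.5 continuity correction).
p_hat = (deaths + 0.5) / (n_pat + 1.0)
logit_fe = np.log(p_hat / (1.0 - p_hat))
se2 = 1.0 / (deaths + 0.5) + 1.0 / (n_pat - deaths + 0.5)

# Random-effects-style estimate: shrink each hospital toward the overall mean,
# a crude empirical-Bayes stand-in for a random-effects logistic regression.
mu = logit_fe.mean()
tau2 = max(logit_fe.var() - se2.mean(), 1e-6)     # rough between-hospital variance
shrink = tau2 / (tau2 + se2)
logit_re = mu + shrink * (logit_fe - mu)

def classify(est, cutoff):
    flagged = est > cutoff
    sens = (flagged & truly_high).sum() / truly_high.sum()
    spec = (~flagged & ~truly_high).sum() / (~truly_high).sum()
    ppv = (flagged & truly_high).sum() / max(flagged.sum(), 1)
    return sens, spec, ppv

cutoff = baseline + 0.4                           # flag hospitals above this log-odds
for name, est in [("fixed-effects", logit_fe), ("shrunken (RE-style)", logit_re)]:
    sens, spec, ppv = classify(est, cutoff)
    print(f"{name:20s} sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
```

Run repeatedly, the shrunken estimates tend to flag fewer hospitals, trading sensitivity for specificity and positive predictive value, which is the pattern the paper describes.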

Journal Article
TL;DR: The aim of this paper was to explain the assumptions underlying each model and their implications in the interpretation of summary results, and to use two illustrative examples from a published meta-analysis to highlight differences.
Abstract: Objective Systematic reviewers often need to choose between two statistical methods when synthesising evidence in a meta-analysis: the fixed effect and the random effects models. The two approaches entail different assumptions about the treatment effect in the included studies. The aim of this paper was to explain the assumptions underlying each model and their implications for the interpretation of summary results. Methods We discussed the key assumptions underlying the two methods and the subsequent implications for interpreting results. We used two illustrative examples from a published meta-analysis and highlighted differences in results. Results The two meta-analytic approaches may yield similar or contradictory results. Even if results between the two models are similar, summary estimates should be interpreted in a different way. Conclusions Selection between fixed and random effects should be based on the clinical relevance of the assumptions that characterise each approach. Researchers should consider the implications of the analysis model in the interpretation of the findings and use prediction intervals in the random effects meta-analysis.

103 citations
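
A minimal worked example of the two pooling models contrasted here: an inverse-variance fixed-effect summary and a DerSimonian-Laird random-effects summary, together with the prediction interval the authors recommend reporting. The study effects and variances below are invented for illustration.

```python
# Sketch: fixed-effect vs DerSimonian-Laird random-effects pooling.
import numpy as np
from scipy import stats

y = np.array([-0.35, -0.10, -0.55, 0.05, -0.40])   # study effects (illustrative log odds ratios)
v = np.array([0.04, 0.09, 0.06, 0.12, 0.05])       # within-study variances
k = len(y)

# Fixed effect: inverse-variance weighting.
w_fe = 1.0 / v
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)
se_fe = np.sqrt(1.0 / np.sum(w_fe))

# Random effects: DerSimonian-Laird estimate of the between-study variance tau^2.
Q = np.sum(w_fe * (y - mu_fe) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)))
w_re = 1.0 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

# 95% prediction interval for the effect in a new study (t distribution, k - 2 df).
t = stats.t.ppf(0.975, k - 2)
half = t * np.sqrt(tau2 + se_re ** 2)
print(f"fixed effect  : {mu_fe:.3f} +/- {1.96 * se_fe:.3f}")
print(f"random effects: {mu_re:.3f} +/- {1.96 * se_re:.3f}, "
      f"95% PI ({mu_re - half:.3f}, {mu_re + half:.3f})")
```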

Journal Article
TL;DR: The exact and approximate distributions are applied to obtain the corresponding distributions of the recently proposed heterogeneity measures I^2 and H_M^2, the power of the standard test for the presence of heterogeneity and confidence intervals for the between-study variance parameter when the DerSimonian-Laird or the Hartung-Makambi estimator is used.
Abstract: The presence and impact of heterogeneity in the standard one-way random effects model in meta-analysis are often assessed using the Q statistic due to Cochran. We derive the exact distribution of this statistic under the assumptions of the random effects model, and also suggest two moment-based approximations and a saddlepoint approximation for Q. The exact and approximate distributions are then applied to obtain the corresponding distributions of the recently proposed heterogeneity measures I^2 and H_M^2, the power of the standard test for the presence of heterogeneity and confidence intervals for the between-study variance parameter when the DerSimonian-Laird or the Hartung-Makambi estimator is used. The methodology is illustrated by revisiting a recent simulation study concerning the heterogeneity measures and applying all the proposed methods to four published meta-analyses.

102 citations
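
For context, the snippet below computes Cochran's Q and the usual plug-in heterogeneity summaries, together with the standard chi-square test on k - 1 degrees of freedom whose behaviour the paper's exact and approximate distributions refine. The H^2 shown is the plain Higgins-Thompson Q/(k-1) version rather than the H_M^2 variant studied in the paper, and the inputs are invented.

```python
# Sketch: Cochran's Q, I^2, H^2 and the standard (approximate) heterogeneity test.
import numpy as np
from scipy import stats

y = np.array([0.20, 0.45, 0.10, 0.65, 0.30, 0.05])   # study estimates (illustrative)
v = np.array([0.02, 0.05, 0.03, 0.08, 0.04, 0.03])   # within-study variances
k = len(y)

w = 1.0 / v
mu_fe = np.sum(w * y) / np.sum(w)

Q = np.sum(w * (y - mu_fe) ** 2)            # Cochran's Q
H2 = Q / (k - 1)                            # H^2 (Higgins-Thompson)
I2 = 100.0 * max(0.0, (Q - (k - 1)) / Q)    # I^2 as a percentage
p = stats.chi2.sf(Q, k - 1)                 # chi-square(k-1) reference distribution

print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%, H^2 = {H2:.2f}, chi-square p-value = {p:.3f}")
```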

Journal Article
TL;DR: This article presents a meta-analysis of variations in seaports' mean technical efficiency (MTE) scores based on 40 studies published in refereed academic journals, linking the variation in estimated MTE scores to differences in the frontier methodology used, essentially Data Envelopment Analysis (DEA) versus Stochastic Frontier Analysis (SFA).
Abstract: This paper presents a meta-analysis of variations in seaports’ Mean Technical Efficiency (MTE) scores based on 40 studies published in refereed academic journals. We link the variation in estimated MTE scores to differences in the following factors: the frontier methodology used, which essentially are the Data Envelopment Analysis (DEA) and the Stochastic Frontier Analysis (SFA); regions where seaports are situated; type of data used; number of observations; and the total number of variables used. Furthermore, we compare fixed-effects against a random-effects regression model where the latter assumes that the individual study specific characteristics matter while the former assumes that there is one general tendency across all studies. We present several findings based on the data: (1) the random-effects model outperforms the fixed effects model in explaining the variations in MTEs, (2) recently published studies have lower MTE scores as compared with earlier published studies, (3) studies that used nonparametric DEA models depict higher MTE scores as compared with those that used SFA models, (4) panel data studies have lower TE scores as compared with cross-sectional data, and (5) studies using European seaport data produce lower MTE scores when compared with the rest of the world. Finally, our results contradict some previous meta-analysis studies of TE scores. We encourage the use of random-effects models in meta-analysis studies because they account for individual study specific effects.

102 citations
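
As a simplified sketch of the regression comparison in this paper, the code below fits simulated MTE scores with a pooled OLS model (standing in for the "one general tendency" fixed-effects specification) and with a random-effects model that adds a study-level random intercept. The simulated data and the two covariates (DEA vs SFA, panel vs cross-sectional) are assumptions loosely mirroring the paper's factors, not the actual 40-study sample.

```python
# Sketch: pooled OLS vs random-effects (study-level random intercept) regression
# of mean technical efficiency (MTE) scores on study characteristics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_studies, est_per_study = 40, 4

study = np.repeat(np.arange(n_studies), est_per_study)
is_dea = rng.integers(0, 2, study.size)               # 1 = DEA, 0 = SFA (assumed covariate)
is_panel = rng.integers(0, 2, study.size)             # 1 = panel data, 0 = cross-sectional
study_effect = rng.normal(0.0, 0.08, n_studies)       # study-specific shift in MTE

mte = (0.65 + 0.07 * is_dea - 0.04 * is_panel
       + study_effect[study] + rng.normal(0.0, 0.05, study.size))
df = pd.DataFrame({"mte": mte, "dea": is_dea, "panel": is_panel, "study": study})

# Pooled OLS: one general tendency across all studies.
pooled = smf.ols("mte ~ dea + panel", data=df).fit()

# Random effects: study-specific characteristics matter (random intercept per study).
random_fx = smf.mixedlm("mte ~ dea + panel", data=df, groups=df["study"]).fit()

print(pooled.params)
print(random_fx.params)     # fixed-effect part; between-study variance in random_fx.cov_re
```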


Network Information
Related Topics (5)
- Sample size determination: 21.3K papers, 961.4K citations (91% related)
- Regression analysis: 31K papers, 1.7M citations (88% related)
- Multivariate statistics: 18.4K papers, 1M citations (88% related)
- Linear model: 19K papers, 1M citations (88% related)
- Linear regression: 21.3K papers, 1.2M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:
- 2024: 1
- 2023: 198
- 2022: 433
- 2021: 409
- 2020: 380
- 2019: 404