Topic

Random effects model

About: Random effects model is a research topic. Over its lifetime, 8,388 publications have been published within this topic, receiving 438,823 citations. The topic is also known as: random effects & random effect.


Papers
Journal Article (DOI)
TL;DR: In this article, the authors challenge fixed effects (FE) modelling as the default for time-series cross-sectional and panel data, arguing not simply for technical solutions to endogeneity but for the substantive importance of context/heterogeneity, modelled using random effects (RE).
Abstract: This article challenges Fixed Effects (FE) modelling as the ‘default’ for time-series cross-sectional and panel data. Understanding different within- and between-effects is crucial when choosing modelling strategies. The downside of Random Effects (RE) modelling – correlated lower-level covariates and higher-level residuals – is omitted-variable bias, solvable with Mundlak’s (1978a) formulation. Consequently, RE can provide everything FE promises and more, as confirmed by Monte Carlo simulations, which additionally show problems with Plümper and Troeger’s FE Vector Decomposition method when data are unbalanced. As well as incorporating time-invariant variables, RE models are readily extendable, with random coefficients, cross-level interactions, and complex variance functions. We argue not simply for technical solutions to endogeneity, but for the substantive importance of context/heterogeneity, modelled using RE. The implications extend beyond political science, to all multilevel datasets. However, omitted variables could still bias estimated higher-level variable effects; as with any model, care is required in interpretation.

1,036 citations
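
As a concrete illustration of the Mundlak formulation the abstract refers to, the sketch below fits a random-intercept model in which a lower-level covariate is split into its group mean and within-group deviation, so the within-effect matches the FE estimate while the group mean absorbs the covariate-residual correlation. A minimal sketch on synthetic data, assuming statsmodels' MixedLM; all variable names are invented and this is not the authors' own code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 50, 10
g = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 1.0, n_groups)                      # group-level effect
x = rng.normal(0.0, 1.0, n_groups * n_per) + 0.7 * u[g]  # covariate correlated with u
y = 2.0 * x + u[g] + rng.normal(0.0, 1.0, n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "g": g})
df["x_mean"] = df.groupby("g")["x"].transform("mean")   # Mundlak term (between)
df["x_within"] = df["x"] - df["x_mean"]                 # within-group deviation

# Random-intercept model with separate within- and between-effects:
# x_within reproduces the FE (within) estimate, x_mean picks up the
# contextual (between) effect that FE discards.
result = smf.mixedlm("y ~ x_within + x_mean", df, groups=df["g"]).fit()
print(result.summary())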

Journal Article (DOI)
TL;DR: The investigated examples demonstrate that pcVPCs have an enhanced ability to diagnose model misspecification, especially with respect to random effects models, in a range of situations.
Abstract: Informative diagnostic tools are vital to the development of useful mixed-effects models. The Visual Predictive Check (VPC) is a popular tool for evaluating the performance of population PK and PKPD models. Ideally, a VPC will diagnose both the fixed and random effects in a mixed-effects model. In many cases, this can be done by comparing different percentiles of the observed data to percentiles of simulated data, generally grouped together within bins of an independent variable. However, the diagnostic value of a VPC can be hampered by binning across a large variability in dose and/or influential covariates. VPCs can also be misleading if applied to data following adaptive designs such as dose adjustments. The prediction-corrected VPC (pcVPC) offers a solution to these problems while retaining the visual interpretation of the traditional VPC. In a pcVPC, the variability coming from binning across independent variables is removed by normalizing the observed and simulated dependent variable based on the typical population prediction for the median independent variable in the bin. The principal benefits of the pcVPC have been explored by application to both simulated and real examples of PK and PKPD models. The investigated examples demonstrate that pcVPCs have an enhanced ability to diagnose model misspecification, especially with respect to random effects models, in a range of situations. In contrast to traditional VPCs, the pcVPC was also shown to be readily applicable to data from studies with a priori and/or a posteriori dose adaptations.

1,034 citations
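
The prediction correction described above can be sketched in a few lines: each observed or simulated value is rescaled by the ratio of the bin's typical population prediction to the individual's own population prediction. A minimal sketch, assuming a pandas DataFrame with hypothetical DV, PRED and TIME columns, and using the median PRED within a bin as a practical stand-in for the typical prediction at the bin's median independent variable.

import pandas as pd

def prediction_correct(df, bins):
    """Return a copy of df with a prediction-corrected dependent variable 'pcDV'.

    df   : DataFrame with columns TIME (independent variable),
           DV (observed or simulated value) and PRED (population prediction).
    bins : sequence of bin edges on TIME.
    """
    out = df.copy()
    out["bin"] = pd.cut(out["TIME"], bins=bins)
    # Typical population prediction per bin, approximated by the median PRED.
    pred_bin = out.groupby("bin", observed=True)["PRED"].transform("median")
    # Normalize each value by its individual prediction, rescale to the bin's
    # typical prediction; this removes dose/covariate variability within a bin.
    out["pcDV"] = out["DV"] * pred_bin / out["PRED"]
    return out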

Journal Article (DOI)
TL;DR: A survey of the specification and estimation of spatial panel data models can be found in this paper, in which the author discusses the asymptotic properties of the estimators and provides guidance with respect to the estimation procedures.
Abstract: This article provides a survey of the specification and estimation of spatial panel data models. These models either include spatial error autocorrelation or extend the specification with a spatially lagged dependent variable. In particular, the author focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the fixed coefficients model, and the random coefficients model. The survey discusses the asymptotic properties of the estimators and provides guidance with respect to the estimation procedures, which should be useful for practitioners.

1,008 citations
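
A building block shared by the four specifications the survey covers is the row-standardized spatial weights matrix W and the resulting spatial lag Wy; the within (demeaning) transformation then removes fixed effects, while the random effects model quasi-demeans instead. The sketch below illustrates these pieces on an invented contiguity structure; it is not the survey's own estimation code.

import numpy as np

N, T = 4, 3
# Binary contiguity matrix (who neighbours whom); zeros on the diagonal.
C = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
W = C / C.sum(axis=1, keepdims=True)              # row-standardize

y = np.arange(N * T, dtype=float).reshape(T, N)   # toy panel, T periods x N regions
Wy = y @ W.T                                      # spatial lag, period by period

# Within transformation: demeaning over time sweeps out region fixed effects.
y_within = y - y.mean(axis=0)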

Journal Article (DOI)
TL;DR: It is recommended that block cross-validation be used wherever dependence structures exist in a dataset, even if no correlation structure is visible in the fitted model residuals, or if the fitted models account for such correlations.
Abstract: Ecological data often show temporal, spatial, hierarchical (random effects), or phylogenetic structure. Modern statistical approaches are increasingly accounting for such dependencies. However, when performing cross-validation, these structures are regularly ignored, resulting in serious underestimation of predictive error. One cause of the poor performance of uncorrected (random) cross-validation, often noted by modellers, is dependence structures in the data that persist as dependence structures in model residuals, violating the assumption of independence. Even more concerning, because often overlooked, is that structured data also provide ample opportunity for overfitting with non-causal predictors. This problem can persist even if remedies such as autoregressive models, generalized least squares, or mixed models are used. Block cross-validation, where data are split strategically rather than randomly, can address these issues. However, the blocking strategy must be carefully considered. Blocking in space, time, random effects or phylogenetic distance, while accounting for dependencies in the data, may also unwittingly induce extrapolations by restricting the ranges or combinations of predictor variables available for model training, thus overestimating interpolation errors. On the other hand, deliberate blocking in predictor space may also improve error estimates when extrapolation is the modelling goal. Here, we review the ecological literature on non-random and blocked cross-validation approaches. We also provide a series of simulations and case studies, in which we show that, for all instances tested, block cross-validation is nearly universally more appropriate than random cross-validation if the goal is predicting to new data or predictor space, or for selecting causal predictors. We recommend that block cross-validation be used wherever dependence structures exist in a dataset, even if no correlation structure is visible in the fitted model residuals, or if the fitted models account for such correlations.

998 citations
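
The random-versus-block contrast can be demonstrated with scikit-learn's KFold and GroupKFold. A minimal sketch on synthetic grouped data; the group labels stand in for whatever spatial, temporal, or phylogenetic blocks a real dataset would carry.

import numpy as np
from sklearn.model_selection import KFold, GroupKFold, cross_val_score
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_blocks, n_per = 20, 25
groups = np.repeat(np.arange(n_blocks), n_per)
block_effect = rng.normal(0, 2, n_blocks)[groups]   # shared within each block
X = rng.normal(size=(n_blocks * n_per, 5)) + block_effect[:, None]
y = X[:, 0] + block_effect + rng.normal(size=n_blocks * n_per)

model = Ridge()
# Random K-fold leaks block structure across train/test splits.
random_cv = cross_val_score(model, X, y,
                            cv=KFold(n_splits=5, shuffle=True, random_state=0))
# GroupKFold holds out whole blocks, mimicking prediction to new blocks.
blocked_cv = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5),
                             groups=groups)
# Random CV typically looks optimistic relative to blocked CV here.
print(random_cv.mean(), blocked_cv.mean())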

Journal Article (DOI)
TL;DR: In this article, the authors describe the problem that common applications of latent variable analysis fail to recognize that data may be obtained from several populations with different sets of parameter values, and they give an overview of methodology that can address such heterogeneity.
Abstract: Common applications of latent variable analysis fail to recognize that data may be obtained from several populations with different sets of parameter values. This article describes the problem and gives an overview of methodology that can address heterogeneity. Artificial examples of mixtures are given, where if the mixture is not recognized, strongly distorted results occur. MIMIC structural modeling is shown to be a useful method for detecting and describing heterogeneity that cannot be handled in regular multiple-group analysis. Other useful methods instead take a random effects approach, describing heterogeneity in terms of random parameter variation across groups. These random effects models connect with emerging methodology for multilevel structural equation modeling of hierarchical data. Examples are drawn from educational achievement testing, psychopathology, and sociology of education. Estimation is carried out by the LISCOMP program.

979 citations
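
The paper's warning that an unrecognized mixture strongly distorts results can be reproduced in miniature: pooled estimates from two distinct populations describe neither one. A toy sketch on synthetic data, using scikit-learn's GaussianMixture as a stand-in for a formal latent class analysis rather than the LISCOMP estimation the abstract mentions.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
pop_a = rng.normal(-2.0, 0.5, size=(500, 1))
pop_b = rng.normal(+2.0, 0.5, size=(500, 1))
pooled = np.vstack([pop_a, pop_b])

# Single-population estimates: mean near 0 and inflated variance,
# matching neither subpopulation.
print(pooled.mean(), pooled.var())

# A two-component mixture recovers the heterogeneity.
gm = GaussianMixture(n_components=2, random_state=0).fit(pooled)
print(gm.means_.ravel(), gm.covariances_.ravel())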


Network Information
Related Topics (5)
Sample size determination: 21.3K papers, 961.4K citations (91% related)
Regression analysis: 31K papers, 1.7M citations (88% related)
Multivariate statistics: 18.4K papers, 1M citations (88% related)
Linear model: 19K papers, 1M citations (88% related)
Linear regression: 21.3K papers, 1.2M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    1
2023    198
2022    433
2021    409
2020    380
2019    404