Topic

Random effects model

About: Random effects model is a research topic. Over its lifetime, 8,388 publications have been published within this topic, receiving 438,823 citations. The topic is also known as: random effects & random effect.


Papers
Journal ArticleDOI
TL;DR: In this paper, a fixed effects model is extended to the stochastic frontier model using results that specifically employ the nonlinear specification, and the random effects model is reformulated as a special case of the random parameters model.
Abstract: Received stochastic frontier analyses with panel data have relied on traditional fixed and random effects models. We propose extensions that circumvent two shortcomings of these approaches. First, the conventional panel data estimators assume that technical or cost inefficiency is time invariant. Second, the fixed and random effects estimators force any time-invariant cross-unit heterogeneity into the same term that is being used to capture the inefficiency; inefficiency measures in these models may therefore be picking up heterogeneity in addition to, or even instead of, inefficiency. A fixed effects model is extended to the stochastic frontier model using results that specifically employ the nonlinear specification, and the random effects model is reformulated as a special case of the random parameters model. The techniques are illustrated in applications to the U.S. banking industry and a cross-country comparison of the efficiency of health care delivery.

721 citations
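
The following is a minimal simulation sketch, not the estimator proposed in the paper above: it generates a panel frontier with both time-invariant heterogeneity and inefficiency, then shows that the intercepts recovered by a conventional fixed-effects (within) regression absorb both components, which is the confounding the abstract describes. All variable names and parameter values are hypothetical.

```python
# Illustrative sketch (not the paper's estimator): simulate a panel frontier
# y_it = b*x_it + w_i + v_it - u_it and show that conventional fixed-effects
# intercepts absorb both the time-invariant heterogeneity w_i and the
# persistent part of the inefficiency u_it. Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 200, 10, 0.5

w = rng.normal(0.0, 1.0, N)                        # time-invariant heterogeneity
u_persistent = np.abs(rng.normal(0.0, 0.6, N))     # firm-level inefficiency
x = rng.normal(size=(N, T))
v = rng.normal(0.0, 0.3, size=(N, T))              # idiosyncratic noise
u = u_persistent[:, None] + np.abs(rng.normal(0.0, 0.2, size=(N, T)))
y = beta * x + w[:, None] + v - u

# Within (fixed-effects) estimator of beta, then recover firm intercepts.
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)
beta_hat = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
alpha_hat = (y - beta_hat * x).mean(axis=1)        # estimated fixed effects

# The estimated intercepts track w_i - u_i jointly, so treating distance from
# the best intercept as "inefficiency" also picks up heterogeneity.
print("beta_hat:", round(beta_hat, 3))
print("corr(alpha_hat, w - u_persistent):",
      round(np.corrcoef(alpha_hat, w - u_persistent)[0, 1], 3))
print("corr(alpha_hat, w alone):",
      round(np.corrcoef(alpha_hat, w)[0, 1], 3))
```

Separating these two components is exactly what the paper's extended fixed-effects and random-parameters formulations aim to do.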

Journal ArticleDOI
TL;DR: A simulation approach was used to clarify the application of random effects under three common situations for telemetry studies; it showed that random intercepts accounted for unbalanced sample designs, and that models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection.
Abstract: 1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, has not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

718 citations
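
As a rough illustration of the modelling contrast discussed above, the sketch below fits a pooled regression and a mixed model with a random intercept and a random slope per individual on simulated, unbalanced telemetry-like data. The paper works with logistic resource selection functions; a Gaussian response and the statsmodels MixedLM interface are used here purely to keep the example short, and all variable names and data are hypothetical.

```python
# Minimal sketch on hypothetical simulated telemetry data: compare a pooled
# regression with a mixed model that has a random intercept and random slope
# per animal, mirroring the random-effects structure discussed in the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_animals = 15
rows = []
for i in range(n_animals):
    n_obs = rng.integers(20, 200)             # unbalanced sample sizes
    slope_i = 1.0 + rng.normal(0.0, 0.5)      # selection varies by individual
    intercept_i = rng.normal(0.0, 1.0)
    cover = rng.normal(size=n_obs)            # hypothetical habitat covariate
    use = intercept_i + slope_i * cover + rng.normal(0.0, 0.5, n_obs)
    rows.append(pd.DataFrame({"animal": i, "cover": cover, "use": use}))
df = pd.concat(rows, ignore_index=True)

# Pooled model ignores individual-level variation entirely.
pooled = smf.ols("use ~ cover", data=df).fit()

# Random intercept and random slope for 'cover', grouped by animal.
mixed = smf.mixedlm("use ~ cover", data=df,
                    groups=df["animal"], re_formula="~cover").fit()

print("pooled slope:", round(pooled.params["cover"], 3))
print("mixed (population) slope:", round(mixed.params["cover"], 3))
print("random-effect variances:\n", mixed.cov_re)
```

The fixed-effect coefficients give the marginal (population) response, while the per-animal random effects give the conditional (individual) responses the abstract refers to.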

Journal ArticleDOI
TL;DR: Improved mathematical and statistical tools and computer technology can help researchers gain more accurate information from published studies and improve future research, resulting in better prediction equations for biological systems and a more accurate description of their prediction errors.

713 citations

Journal ArticleDOI
TL;DR: The adaptive quadrature approach is extended to general random coefficient models with limited and discrete dependent variables, which can include several nested random effects representing unobserved heterogeneity at different levels of a hierarchical dataset.

702 citations
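
A brief sketch of the quadrature idea referenced in this summary: the marginal likelihood for one cluster of binary outcomes integrates a logistic likelihood over a normal random intercept, which Gauss-Hermite quadrature approximates with a weighted sum over fixed nodes. The adaptive version described in the paper recentres and rescales those nodes around each cluster's posterior mode; that step is omitted here, and all data and parameters are hypothetical.

```python
# Ordinary (non-adaptive) Gauss-Hermite quadrature for the marginal likelihood
# of one cluster in a random-intercept logistic model. The adaptive variant
# would shift and scale the nodes cluster by cluster.
import numpy as np

def cluster_marginal_loglik(y, x, beta, sigma_u, n_nodes=15):
    """log of  integral over u of  prod_t p(y_t | x_t, u) * N(u; 0, sigma_u^2) du."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    # Change of variables u = sqrt(2) * sigma_u * node for the N(0, sigma_u^2) density.
    u = np.sqrt(2.0) * sigma_u * nodes
    total = 0.0
    for u_q, w_q in zip(u, weights):
        eta = beta[0] + beta[1] * x + u_q            # linear predictor given u
        p = 1.0 / (1.0 + np.exp(-eta))
        lik = np.prod(p ** y * (1.0 - p) ** (1 - y)) # conditional likelihood
        total += w_q * lik / np.sqrt(np.pi)
    return np.log(total)

rng = np.random.default_rng(2)
x = rng.normal(size=8)                               # one cluster's covariates
u_true = 0.7                                         # its random intercept
p_true = 1.0 / (1.0 + np.exp(-(0.2 + 1.0 * x + u_true)))
y = rng.binomial(1, p_true)
print(cluster_marginal_loglik(y, x, beta=(0.2, 1.0), sigma_u=0.8))
```

Summing this quantity over clusters gives the marginal log-likelihood that is maximised over the fixed effects and variance components; nested random effects add one quadrature dimension per level.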

Journal ArticleDOI
TL;DR: D² seems a better alternative than I² for considering model variation in any random-effects meta-analysis, regardless of the choice of the between-trial variance estimator that constitutes the model.
Abstract: There is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low risk of bias. Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. We devise a measure of diversity (D²) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D² is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate considering the required information size. D² is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I²), which may underestimate the required information size. Thus, D² and I² are compared and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D² ≥ I², for all meta-analyses. We conclude that D² seems a better alternative than I² for considering model variation in any random-effects meta-analysis, regardless of the choice of the between-trial variance estimator that constitutes the model. Furthermore, D² can readily adjust the required information size in any random-effects model meta-analysis.

701 citations
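
The quantities in the abstract translate directly into a few lines of code. The sketch below computes I² from Cochran's Q and D² as the relative variance reduction when moving from a random-effects pooled estimate (with a DerSimonian-Laird between-trial variance) to a fixed-effect one; the trial estimates and standard errors are hypothetical.

```python
# Minimal sketch: D² is the relative variance reduction when switching from a
# random-effects to a fixed-effect pooled estimate; I² is the usual
# inconsistency. Trial effect estimates and standard errors are hypothetical.
import numpy as np

theta = np.array([0.10, 0.35, -0.05, 0.42, 0.20])   # trial effect estimates
se    = np.array([0.12, 0.15, 0.10, 0.20, 0.08])    # their standard errors

v = se ** 2
w = 1.0 / v                                          # fixed-effect weights
theta_f = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_f) ** 2)               # Cochran's Q
k = len(theta)

I2 = max(0.0, (Q - (k - 1)) / Q)                     # inconsistency

# DerSimonian-Laird between-trial variance and random-effects weights.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_star = 1.0 / (v + tau2)

V_F = 1.0 / np.sum(w)                                # variance, fixed-effect
V_R = 1.0 / np.sum(w_star)                           # variance, random-effects
D2 = (V_R - V_F) / V_R                               # diversity

print(f"I2 = {I2:.2%}, D2 = {D2:.2%}  (D2 >= I2, as derived in the paper)")
```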


Network Information
Related Topics (5)
Sample size determination: 21.3K papers, 961.4K citations (91% related)
Regression analysis: 31K papers, 1.7M citations (88% related)
Multivariate statistics: 18.4K papers, 1M citations (88% related)
Linear model: 19K papers, 1M citations (88% related)
Linear regression: 21.3K papers, 1.2M citations (85% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    198
2022    433
2021    409
2020    380
2019    404