scispace - formally typeset
Topic

Random effects model

About: Random effects model is a research topic. Over the lifetime, 8388 publications have been published within this topic receiving 438823 citations. The topic is also known as: random effects & random effect.


Papers
Book ChapterDOI
01 Jan 1981
TL;DR: A random effects factor is defined as a factor whose levels represent a random sample from a larger set of levels of interest, rather than only the specific levels actually studied, as discussed by the authors.
Abstract: A factor is called a random effects factor if the levels of the factor represent a random sample from a larger set of interest. Examples:

1. Medicine: How accurate are labs for testing for a certain disease? Do labs differ in their accuracy? Suppose we have (different) people tested at 3 different labs. Factor = Lab (i = 1, 2, 3); Unit = a person having a medical test; Y_ij = accuracy rating of the test for person j and lab i; n_i = number of people tested at Lab i. Lab is a fixed effect if we care only about those labs; Lab is a random effect if the 3 labs are a random sample of all such labs.

2. Education: How well do California students learn to read by the end of first grade? Choose 6 schools in California, then randomly choose n_i students in school i to take a reading test. Factor = School (i = 1 to 6); Unit = student (j = 1 to n_i); Y_ij = reading score for student j in school i. School is a fixed effect if we care only about those 6 schools; School is a random effect if those schools are randomly sampled from a larger set of interest.

3. Psychology: Compare therapists for effectiveness. Factor = therapist; Unit = patient; Y_ij = change in score on a depression test after one year of therapy for patient j, therapist i. Therapist is a fixed effect if we are interested in those specific therapists; Therapist is a random effect if the therapists are randomly selected from all therapists of interest.
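As an illustrative sketch of the model behind Example 1 (not part of the chapter itself), the following simulates the one-way random effects model Y_ij = mu + a_i + e_ij and recovers the between-lab and within-lab variance components with the classical ANOVA (method-of-moments) estimators. All numbers (30 labs, 50 tests per lab, the sigma values) are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-way random effects model: Y_ij = mu + a_i + e_ij, where the lab
# effects a_i ~ N(0, sigma_a^2) and errors e_ij ~ N(0, sigma_e^2).
mu, sigma_a, sigma_e = 70.0, 4.0, 2.0
k, n = 30, 50                        # 30 labs sampled, 50 people tested per lab
a = rng.normal(0.0, sigma_a, size=k)
y = mu + a[:, None] + rng.normal(0.0, sigma_e, size=(k, n))

# Balanced one-way ANOVA mean squares
grand = y.mean()
msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (k - 1)               # between labs
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))  # within labs

# ANOVA (method-of-moments) estimates of the variance components
sigma_e2_hat = msw                   # estimates sigma_e^2 (true value 4)
sigma_a2_hat = (msb - msw) / n       # estimates sigma_a^2 (true value 16)
print(sigma_a2_hat, sigma_e2_hat)
```

Because Lab is random, the quantity of interest is sigma_a^2, the variance of accuracy across the whole population of labs, not the means of the particular labs sampled.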

235 citations

Journal ArticleDOI
TL;DR: The fixed effects OR, random effects OR and random effects RR appear to be reasonably constant across different baseline risks, and clinicians may wish to rely on the random effects model RR and use the PEER to individualize NNT when they apply the results of a meta-analysis in their practice.
Abstract: Background Meta-analyses summarize the magnitude of treatment effect using a number of measures of association, including the odds ratio (OR), risk ratio (RR), risk difference (RD) and/or number needed to treat (NNT). In applying the results of a meta-analysis to individual patients, some textbooks of evidence-based medicine advocate individualizing NNT, based on the RR and the patient's expected event rate (PEER). This approach assumes constant RR but no empirical study to date has examined the validity of this assumption. Methods We randomly selected a subset of meta-analyses from a recent issue of the Cochrane Library (1998, Issue 3). When a meta-analysis pooled more than three randomized controlled trials (RCT) to produce a summary measure for an outcome, we compared the OR, RR and RD of each RCT with the corresponding pooled OR, RR and RD from the meta-analysis of all the other RCT. Using the conventional P-value of 0.05, we calculated the percentage of comparisons in which there were no statistically significant differences in the estimates of OR, RR or RD, and refer to this percentage as the 'concordance rate'. Results For each effect measure, we made 1843 comparisons, extracted from 55 meta-analyses. The random effects model OR had the highest concordance rate, closely followed by the fixed effects model OR and random effects model RR. The minimum concordance rate for these indices was 82%, even when the baseline risk differed substantially. The concordance rates for RD, either fixed effects or random effects model, were substantially lower (54-65%). Conclusions The fixed effects OR, random effects OR and random effects RR appear to be reasonably constant across different baseline risks. Given the interpretational and arithmetic ease of RR, clinicians may wish to rely on the random effects model RR and use the PEER to individualize NNT when they apply the results of a meta-analysis in their practice.
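The individualized NNT referred to in the abstract follows from the standard evidence-based-medicine formula NNT = 1 / (PEER × (1 − RR)), which assumes a constant relative risk. A minimal sketch, with the PEER and RR values invented purely for illustration:

```python
def individualized_nnt(peer: float, rr: float) -> float:
    """Number needed to treat for a patient with expected event rate PEER,
    assuming a constant relative risk RR < 1 (treatment reduces events)."""
    arr = peer * (1.0 - rr)          # absolute risk reduction for this patient
    return 1.0 / arr

# e.g. applying a pooled random-effects RR of 0.6 to a patient whose
# expected event rate (PEER) is 0.2:
print(individualized_nnt(0.2, 0.6))  # ARR = 0.08, so NNT = 12.5
```

The study's point is precisely that this calculation is defensible only for effect measures (like the random effects RR) that stay reasonably constant across baseline risks.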

235 citations

Journal ArticleDOI
01 Apr 1994-Ecology
TL;DR: Several statistical guidelines are suggested; in particular, explicitly stating whether each effect is fixed or random, together with clear descriptions of the F tests of interest, would give the reader confidence that the author has performed the analysis correctly.
Abstract: Analysis of variance is one of the most commonly used statistical techniques among ecologists and evolutionary biologists. Because many ecological experiments involve random as well as fixed effects, the most appropriate analysis of variance model to use is often the mixed model. Consideration of effects in an analysis of variance as fixed or random is critical if correct tests are to be made and if correct inferences are to be drawn from these tests. A literature review was conducted to determine whether authors are generally aware of the differences between fixed and random effects and whether they are performing analyses consistent with their consideration. All articles (excluding Notes and Comments) in Ecology and Evolution for the years 1990 and 1991 were reviewed. In general, authors who stated that their model contained both fixed and random effects correctly analyzed it as a mixed model. There were two cases, however, where authors attempted to define fixed effects as random in order to justify broader generalizations about the effects. Most commonly (63% of articles using two-way or greater ANOVA), authors neglected to mention whether they were dealing with a completely fixed, random, or mixed model. In such instances, it was not clear if the author was aware of the distinction between fixed and random effects, and it was often difficult to ascertain from the article whether their analysis was consistent with their experimental methods. These findings suggest several statistical guidelines that should be followed. In particular, the inclusion of explicit consideration of effects as fixed or random and clear descriptions of F tests of interest would provide the reader with confidence that the author has performed the analysis correctly. In addition, such an explicit statement would clarify the limits of the inferences about significant effects.
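To make the fixed-versus-random distinction concrete, here is a rough sketch (not from the article) of why the choice matters in a balanced two-way ANOVA with replication: when factor B is random, the fixed factor A is tested against the interaction mean square, not the error mean square. The simulated data and dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Balanced two-way layout: fixed factor A (a_lev levels), random factor B
# (b_lev levels), r replicates per cell. Null data, purely illustrative.
a_lev, b_lev, r = 3, 5, 4
y = rng.normal(size=(a_lev, b_lev, r))

grand = y.mean()
ma = y.mean(axis=(1, 2))             # A-level means
mb = y.mean(axis=(0, 2))             # B-level means
mab = y.mean(axis=2)                 # cell means

ss_a = b_lev * r * ((ma - grand) ** 2).sum()
ss_ab = r * ((mab - ma[:, None] - mb[None, :] + grand) ** 2).sum()
ss_e = ((y - mab[..., None]) ** 2).sum()

ms_a = ss_a / (a_lev - 1)
ms_ab = ss_ab / ((a_lev - 1) * (b_lev - 1))
ms_e = ss_e / (a_lev * b_lev * (r - 1))

# Mixed model (B random): test the fixed factor A against the interaction.
f_fixed_a = ms_a / ms_ab             # correct denominator under the mixed model
f_wrong = ms_a / ms_e                # valid only if B were also fixed
print(f_fixed_a, f_wrong)
```

Using MS_error as the denominator when B is random inflates Type I error for the test of A, which is exactly the kind of mistake the review is concerned with.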

234 citations

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the efficiency of conjoint choice designs based on the mixed multinomial logit model and derive an expression for the information matrix for that purpose.
Abstract: A computationally attractive model for the analysis of conjoint choice experiments is the mixed multinomial logit model, a multinomial logit model in which it is assumed that the coefficients follow a (normal) distribution across subjects. This model offers the advantage over the standard multinomial logit model of accommodating heterogeneity in the coefficients of the choice model across subjects, a topic that has received considerable interest recently in the marketing literature. With the advent of such powerful models, the conjoint choice design deserves increased attention as well. Unfortunately, if one wants to apply the mixed logit model to the analysis of conjoint choice experiments, the problem arises that nothing is known about the efficiency of designs based on the standard logit for parameters of the mixed logit. The development of designs that are optimal for mixed logit models or other random effects models has not been previously addressed and is the topic of this paper.

The development of efficient designs requires the evaluation of the information matrix of the mixed multinomial logit model. We derive an expression for the information matrix for that purpose. The information matrix of the mixed logit model does not have closed form, since it involves integration over the distribution of the random coefficients. In evaluating it we approximate the integrals through repeated samples from the multivariate normal distribution of the coefficients. Since the information matrix is not a scalar, we use the determinant scaled by its dimension as a measure of design efficiency. This enables us to apply heuristic search algorithms to explore the design space for highly efficient designs. We build on previously published heuristics based on relabeling, swapping, and cycling of the attribute levels in the design.

Designs with a base alternative are commonly used and considered to be important in conjoint choice analysis, since they provide a way to compare the utilities of profiles in different choice sets. A base alternative is a product profile that is included in all choice sets of a design. There are several types of base alternatives, examples being a so-called outside alternative or an alternative constructed from the attribute levels in the design itself. We extend our design construction procedures for mixed logit models to include designs with a base alternative and investigate and compare four design classes: designs with two alternatives, with two alternatives plus a base alternative, and designs with three and with four alternatives.

Our study provides compelling evidence that each of these mixed logit designs provides more efficient parameter estimates for the mixed logit model than its standard logit counterpart and yields higher predictive validity. As compared to designs with two alternatives, designs that include a base alternative are more robust to deviations from the parameter values assumed in the designs, while that robustness is even higher for designs with three and four alternatives, even if those have 33% and 50% fewer choice sets, respectively. Those designs yield higher efficiency and better predictive validity at lower burden to the respondent. It is noteworthy that our "best" choice designs, the 3- and 4-alternative designs, resulted not only in a substantial improvement in efficiency over the standard logit design but also in an expected predictive validity that is over 50% higher in most cases, a number that dwarfs the increases in predictive validity achieved by refined model specifications.
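A simplified sketch of the design-efficiency idea described above. Assumptions to note: the mixed logit information matrix is approximated here by Monte Carlo-averaging the conditional (standard) logit information over normal draws of the coefficients, which simplifies the paper's exact expression, and the design array, coefficient means, and standard deviations are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def mnl_information(X, beta):
    """Fisher information of a multinomial logit model for one design.
    X has shape (n_choice_sets, n_alternatives, n_parameters)."""
    k = X.shape[2]
    info = np.zeros((k, k))
    for Xs in X:                          # one choice set at a time
        u = Xs @ beta
        p = np.exp(u - u.max())
        p /= p.sum()                      # choice probabilities
        info += Xs.T @ (np.diag(p) - np.outer(p, p)) @ Xs
    return info

def d_error(X, mean, sd, draws=200):
    """Monte Carlo D-error: average the logit information over normal draws
    of the coefficients, then scale the determinant by its dimension
    (lower D-error = more efficient design)."""
    k = X.shape[2]
    info = np.zeros((k, k))
    for _ in range(draws):
        beta = rng.normal(mean, sd)       # draw from the coefficient distribution
        info += mnl_information(X, beta)
    info /= draws
    return np.linalg.det(info) ** (-1.0 / k)

# Hypothetical design: 8 choice sets, 3 alternatives, 2 attributes
X = rng.uniform(-1, 1, size=(8, 3, 2))
print(d_error(X, mean=np.array([0.5, -0.5]), sd=np.array([0.3, 0.3])))
```

A heuristic search along the lines the authors describe would repeatedly relabel, swap, or cycle attribute levels in X and keep changes that lower this D-error.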

233 citations

Journal ArticleDOI
TL;DR: In this article, the authors test empirically the hypothesis of the inverted U-shaped relationship between environmental damage from sulfur emissions and economic growth as expressed by GDP using a large database of panel data consisting of 73 OECD and non-OECD countries.
Abstract: The purpose of this study is to test empirically the hypothesis of an inverted U-shaped relationship between environmental damage from sulfur emissions and economic growth as expressed by GDP. Using a large panel database of 73 OECD and non-OECD countries over 31 years (1960–1990), we apply for the first time random coefficients and Arellano-Bond Generalized Method of Moments (A–B GMM) econometric methods. Our findings indicate that the EKC hypothesis is not rejected in the case of the A–B GMM. On the other hand, there is no support for an EKC when using a random coefficients model. Our estimated turning points start from a per capita GDP of about $6230. These results are completely different from the results derived using the same database with fixed and random effects models.
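For the inverted-U (EKC) specification the abstract tests, with emissions regressed on log per capita GDP and its square, the turning point is exp(−b1 / (2·b2)). A minimal sketch; the coefficient values are hypothetical and are not taken from the study:

```python
import math

def ekc_turning_point(b1: float, b2: float) -> float:
    """Turning point of an inverted-U EKC specified in log GDP per capita:
    E = b0 + b1*ln(g) + b2*ln(g)**2, with b1 > 0 and b2 < 0.
    Emissions peak where dE/d ln(g) = 0, i.e. at ln(g*) = -b1 / (2*b2)."""
    return math.exp(-b1 / (2.0 * b2))

# Hypothetical coefficients for illustration only: the income level (per
# capita GDP) at which emissions would peak under this quadratic fit.
print(ekc_turning_point(3.5, -0.2))
```

The study's finding that turning points differ sharply across estimators (A–B GMM vs. random coefficients vs. fixed/random effects) reflects how sensitive this exponentiated ratio is to the estimated b1 and b2.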

233 citations


Network Information
Related Topics (5)
Sample size determination
21.3K papers, 961.4K citations
91% related
Regression analysis
31K papers, 1.7M citations
88% related
Multivariate statistics
18.4K papers, 1M citations
88% related
Linear model
19K papers, 1M citations
88% related
Linear regression
21.3K papers, 1.2M citations
85% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    198
2022    433
2021    409
2020    380
2019    404