Topic

Random effects model

About: Random effects model is a research topic. Over the lifetime, 8,388 publications have been published within this topic, receiving 438,823 citations. The topic is also known as: random effects & random effect.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the performance of confidence intervals and hypothesis tests when each type of statistical procedure is used for each type of inference, and confirm that each procedure is best for the kind of inference for which it was designed.
Abstract: There are 2 families of statistical procedures in meta-analysis: fixed- and random-effects procedures. They were developed for somewhat different inference goals: making inferences about the effect parameters in the studies that have been observed versus making inferences about the distribution of effect parameters in a population of studies from a random sample of studies. The authors evaluate the performance of confidence intervals and hypothesis tests when each type of statistical procedure is used for each type of inference and confirm that each procedure is best for making the kind of inference for which it was designed. Conditionally random-effects procedures (a hybrid type) are shown to have properties in between those of fixed- and random-effects procedures.

The use of quantitative methods to summarize the results of several empirical research studies, or meta-analysis, is now widespread in psychology, medicine, and the social sciences. Meta-analysis usually involves describing the results of each study by means of a numerical index (an estimate of effect size, such as a correlation coefficient, a standardized mean difference, or an odds ratio) and then combining these estimates across studies to obtain a summary. Two somewhat different statistical models have been developed for inference about average effect size from a collection of studies, called the fixed-effects and random-effects models. (A third alternative, the mixed-effects model, arises in conjunction with analyses involving study-level covariates or moderator variables, which we do not consider in this article; see Hedges, 1992.) Fixed-effects models treat the effect-size parameters as fixed but unknown constants to be estimated and usually (but not necessarily) are used in conjunction with assumptions about the homogeneity of effect parameters (see, e.g., Hedges, 1982; Rosenthal & Rubin, 1982). Random-effects models treat the effect-size parameters as if they were a random sample from a population of effect parameters.

2,513 citations
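
To make the contrast concrete, the sketch below (not from the paper) computes both summaries for a handful of hypothetical effect sizes: the fixed-effect estimate weights by inverse sampling variance alone, while the random-effects estimate adds a between-study variance, obtained here with the DerSimonian-Laird moment estimator.

```python
import numpy as np

def meta_summaries(effects, variances):
    """Fixed-effect and DerSimonian-Laird random-effects summary estimates."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # inverse-variance weights
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect estimate
    se_fe = np.sqrt(1.0 / np.sum(w))
    # DerSimonian-Laird moment estimator of the between-study variance tau^2
    k = y.size
    Q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return (mu_fe, se_fe), (mu_re, se_re), tau2

# hypothetical standardized mean differences and their sampling variances
fixed, random, tau2 = meta_summaries([0.3, 0.5, 0.1, 0.4], [0.04, 0.09, 0.05, 0.02])
print(fixed, random, tau2)  # the random-effects interval widens when tau^2 > 0
```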

Journal ArticleDOI
TL;DR: This paper reviews maximum likelihood (ML) estimation of variance components, including the restricted maximum likelihood (REML) approach of Patterson and Thompson (1971), which accounts for the loss in degrees of freedom from estimating fixed effects, and the asymptotic theory for ML estimators developed by Miller (1973).
Abstract: Recent developments promise to increase greatly the popularity of maximum likelihood (ML) as a technique for estimating variance components. Patterson and Thompson (1971) proposed a restricted maximum likelihood (REML) approach which takes into account the loss in degrees of freedom resulting from estimating fixed effects. Miller (1973) developed a satisfactory asymptotic theory for ML estimators of variance components. There are many iterative algorithms that can be considered for computing the ML or REML estimates. The computations on each iteration of these algorithms are those associated with computing estimates of fixed and random effects for given values of the variance components.

2,440 citations
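
As a small illustration of the REML idea, the sketch below fits a random-intercept model with statsmodels' MixedLM; the simulated data, group sizes, and variance values are assumptions made for the example, not anything from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 30, 8
g = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 1.0, n_groups)[g]                 # random group effects (variance 1)
x = rng.normal(size=g.size)
y = 2.0 + 0.5 * x + u + rng.normal(0.0, 0.5, g.size)  # residual variance 0.25
df = pd.DataFrame({"y": y, "x": x, "g": g})

# REML accounts for the degrees of freedom lost to the fixed effects;
# fit(reml=False) would give the plain ML variance-component estimates.
result = smf.mixedlm("y ~ x", df, groups=df["g"]).fit(reml=True)
print(result.cov_re)   # estimated between-group variance (target: 1.0)
print(result.scale)    # estimated residual variance (target: 0.25)
```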

Posted Content
TL;DR: In this paper, the joint maximum likelihood estimator of the structural parameters is shown to be inconsistent as the number of groups increases with a fixed number of observations per group; instead, a conditional likelihood is maximized, conditioning on sufficient statistics for the incidental parameters.
Abstract: In data with a group structure, incidental parameters are included to control for missing variables. Applications include longitudinal data and sibling data. In general, the joint maximum likelihood estimator of the structural parameters is not consistent as the number of groups increases, with a fixed number of observations per group. Instead, a conditional likelihood function is maximized, conditional on sufficient statistics for the incidental parameters. In the logit case, a standard conditional logit program can be used. Another solution is a random effects model, in which the distribution of the incidental parameters may depend upon the exogenous variables.

2,338 citations
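
A minimal sketch of the conditional-likelihood route described above, using ConditionalLogit from recent versions of statsmodels; the simulated group effects and the structural coefficient (true value 1.0) are hypothetical.

```python
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(1)
n_groups, n_per = 200, 4
g = np.repeat(np.arange(n_groups), n_per)
alpha = rng.normal(0.0, 2.0, n_groups)[g]      # incidental (per-group) parameters
x = rng.normal(size=g.size)
p = 1.0 / (1.0 + np.exp(-(alpha + 1.0 * x)))   # true structural beta = 1.0
y = rng.binomial(1, p)

# Conditioning on each group's sum of y eliminates the incidental alphas,
# so beta is estimated consistently even as the number of groups grows.
result = ConditionalLogit(y, x[:, None], groups=g).fit()
print(result.params)   # should be near 1.0
```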

BookDOI
08 Jul 2002
TL;DR: This book discusses measures of diagnostic accuracy such as sensitivity, specificity, and the ROC curve, the design of diagnostic accuracy studies, and the selection of sampling plans for patients and readers.
Abstract: Preface. Acknowledgments.
1. Introduction. 1.1 Why This Book? 1.2 What Is Diagnostic Accuracy? 1.3 Landmarks in Statistical Methods for Diagnostic Medicine. 1.4 Software. 1.5 Topics Not Covered in This Book. 1.6 Summary.
I. BASIC CONCEPTS AND METHODS.
2. Measures of Diagnostic Accuracy. 2.1 Sensitivity and Specificity. 2.2 The Combined Measures of Sensitivity and Specificity. 2.3 The ROC Curve. 2.4 The Area Under the ROC Curve. 2.5 The Sensitivity at a Fixed FPR. 2.6 The Partial Area Under the ROC Curve. 2.7 Likelihood Ratios. 2.8 Other ROC Curve Indices. 2.9 The Localization and Detection of Multiple Abnormalities. 2.10 Interpretation of Diagnostic Tests. 2.11 Optimal Decision Threshold on the ROC Curve. 2.12 Multiple Tests.
3. The Design of Diagnostic Accuracy Studies. 3.1 Determining the Objective of the Study. 3.2 Identifying the Target Patient Population. 3.3 Selecting a Sampling Plan for Patients. 3.3.1 Phase I: Exploratory Studies. 3.3.2 Phase II: Challenge Studies. 3.3.3 Phase III: Clinical Studies. 3.4 Selecting the Gold Standard. 3.5 Choosing a Measure of Accuracy. 3.6 Identifying the Target Reader Population. 3.7 Selecting a Sampling Plan for Readers. 3.8 Planning the Data Collection. 3.8.1 Format for the Test Results. 3.8.2 Data Collection for the Reader Studies. 3.8.3 Reader Training. 3.9 Planning the Data Analyses. 3.9.1 Statistical Hypotheses. 3.9.2 Reporting the Test Results. 3.10 Determining the Sample Size.
4. Estimation and Hypothesis Testing in a Single Sample. 4.1 Binary Scale Data. 4.1.1 Sensitivity and Specificity. 4.1.2 The Sensitivity and Specificity of Clustered Binary Data. 4.1.3 The Likelihood Ratio (LR). 4.1.4 The Odds Ratio. 4.2 Ordinal Scale Data. 4.2.1 The Empirical ROC Curve. 4.2.2 Fitting a Smooth Curve (Parametric Model). 4.2.3 Estimation of Sensitivity at a Particular FPR. 4.2.4 The Area and Partial Area Under the ROC Curve (Parametric Model). 4.2.5 The Area Under the Curve (Nonparametric Method). 4.2.6 Nonparametric Analysis of Clustered Data. 4.2.7 The Degenerate Data. 4.2.8 Choosing Between Parametric and Nonparametric Methods. 4.3 Continuous Scale Data. 4.3.1 The Empirical ROC Curve. 4.3.2 Fitting a Smooth ROC Curve (Parametric and Nonparametric Methods). 4.3.3 Area Under the ROC Curve (Parametric and Nonparametric). 4.3.4 The Sensitivity and Decision Threshold at a Fixed FPR. 4.3.5 Choosing the Optimal Operating Point. 4.3.6 Choosing Between Parametric and Nonparametric Techniques. 4.4 Hypothesis Testing About the ROC Area.
5. Comparing the Accuracy of Two Diagnostic Tests. 5.1 Binary Scale Data. 5.1.1 Sensitivity and Specificity. 5.1.2 Sensitivity and Specificity of Clustered Binary Data. 5.2 Ordinal and Continuous Scale Data. 5.2.1 Determining the Equality of Two ROC Curves. 5.2.2 Comparing ROC Curves at a Particular Point. 5.2.3 Determining the Range of FPR for Which TPRs Differ. 5.2.4 A Comparison of the Area or Partial Area. 5.3 Tests of Equivalence.
6. Sample Size Calculation. 6.1 The Sample Size for Accuracy Studies of a Single Test. 6.1.1 Sensitivity and Specificity. 6.1.2 The Area Under the ROC Curve. 6.1.3 The Sensitivity at a Fixed FPR. 6.1.4 The Partial Area Under the ROC Curve. 6.2 The Sample Size for the Accuracy of Two Tests. 6.2.1 Sensitivity and Specificity. 6.2.2 The Area Under the ROC Curve. 6.2.3 The Sensitivity at a Fixed FPR. 6.2.4 The Partial Area Under the ROC Curve. 6.3 The Sample Size for Equivalence Studies of Two Tests. 6.4 The Sample Size for Determining a Suitable Cutoff Value.
7. Issues in Meta-Analysis for Diagnostic Tests. 7.1 Objectives. 7.2 Retrieval of the Literature. 7.3 Inclusion/Exclusion Criteria. 7.4 Extracting Information From the Literature. 7.5 Statistical Analysis. 7.6 Public Presentation.
II. ADVANCED METHODS.
8. Regression Analysis for Independent ROC Data. 8.1 Four Clinical Studies. 8.1.1 Surgical Lesion in a Carotid Vessel Example. 8.1.2 Pancreatic Cancer Example. 8.1.3 Adult Obesity Example. 8.1.4 Staging of Prostate Cancer Example. 8.2 Regression Models for Continuous Scale Tests. 8.2.1 Indirect Regression Models for Smooth ROC Curves. 8.2.2 Direct Regression Models for Smooth ROC Curves. 8.2.3 MRA Use for Surgical Lesion Detection in the Carotid Vessel. 8.2.4 Biomarkers for the Detection of Pancreatic Cancer. 8.2.5 Prediction of Adult Obesity by Using Childhood BMI Measurements. 8.3 Regression Models for Ordinal Scale Tests. 8.3.1 Indirect Regression Models for Latent Smooth ROC Curves. 8.3.2 Direct Regression Model for Latent Smooth ROC Curves. 8.3.3 Detection of Periprostatic Invasion With US.
9. Analysis of Correlated ROC Data. 9.1 Studies With Multiple Test Measurements of the Same Patient. 9.1.1 Indirect Regression Models for Ordinal Scale Tests. 9.1.2 Neonatal Examination Example. 9.1.3 Direct Regression Models for Continuous Scale Tests. 9.2 Studies With Multiple Readers and Tests. 9.2.1 A Mixed Effects ANOVA Model for Summary Measures of Diagnostic Accuracy. 9.2.2 Detection of TAD Example. 9.2.3 The Mixed Effects ANOVA Model for Jackknife Pseudovalues. 9.2.4 Neonatal Examination Example. 9.2.5 A Bootstrap Method. 9.3 Sample Size Calculation for Multireader Studies.
10. Methods for Correcting Verification Bias. 10.1 A Single Binary Scale Test. 10.1.1 Correction Methods With the MAR Assumption. 10.1.2 Correction Methods Without the MAR Assumption. 10.1.3 Hepatic Scintigraph Example. 10.2 Correlated Binary Scale Tests. 10.2.1 An ML Approach Without Covariates. 10.2.2 An ML Approach With Covariates. 10.2.3 Screening Tests for Dementia Disorder Example. 10.3 A Single Ordinal Scale Test. 10.3.1 An ML Approach Without Covariates. 10.3.2 Fever of Uncertain Origin Example. 10.3.3 An ML Approach With Covariates. 10.3.4 Screening Test for Dementia Disorder Example. 10.4 Correlated Ordinal Scale Tests. 10.4.1 The Weighted GEE Approach for Latent Smooth ROC Curves. 10.4.2 A Likelihood-Based Approach for ROC Areas. 10.4.3 Use of CT and MRI for Staging Pancreatic Cancer Example.
11. Methods for Correcting Imperfect Standard Bias. 11.1 One Single Test in a Single Population. 11.1.1 Hypothetical and Strongyloides Infection Examples. 11.2 One Single Test in G Populations. 11.2.1 Tuberculosis Example. 11.3 Multiple Tests in One Single Population. 11.3.1 MLEs Under the CIA. 11.3.2 Assessment of Pleural Thickening Example. 11.3.3 ML Approaches Without the CIA. 11.3.4 Bioassays for HIV Example. 11.4 Multiple Binary Tests in G Populations. 11.4.1 ML Approaches Under the CIA. 11.4.2 ML Approaches Without the CIA.
12. Statistical Methods for Meta-Analysis. 12.1 Sensitivity and Specificity Pairs. 12.1.1 One Common SROC Curve. 12.1.2 Study-Specific SROC Curves. 12.1.3 Evaluation of Duplex Ultrasonography, With and Without Color Guidance. 12.2 ROC Curve Areas. 12.2.1 Fixed Effects Models. 12.2.2 Random Effects Models. 12.2.3 Evaluation of the Dexamethasone Suppression Test.
Index.

2,003 citations
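
As a taste of the basic methods in Part I, here is a sketch of the nonparametric (Mann-Whitney) estimate of the area under the ROC curve from Section 4.2.5; the simulated test scores are hypothetical.

```python
import numpy as np

def empirical_auc(diseased, healthy):
    """Nonparametric (Mann-Whitney) AUC: the probability that a randomly
    chosen diseased case scores higher than a randomly chosen healthy one,
    counting ties as one half."""
    d = np.asarray(diseased, dtype=float)[:, None]
    h = np.asarray(healthy, dtype=float)[None, :]
    return ((d > h).sum() + 0.5 * (d == h).sum()) / (d.size * h.size)

# hypothetical continuous-scale test results: diseased scores shifted up 1 SD
rng = np.random.default_rng(2)
print(empirical_auc(rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 50)))
# for unit-variance normals one SD apart the true AUC is Phi(1/sqrt(2)) ~ 0.76
```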

Journal ArticleDOI
01 Jan 2011 - Ecology
TL;DR: It is argued that the arcsine square root transform should be used for neither binomial nor non-binomial proportion data; logistic regression (with random effects where overdispersion is present) and the logit transformation are proposed as alternatives.
Abstract: The arcsine square root transformation has long been standard procedure when analyzing proportional data in ecology, with applications in data sets containing binomial and non-binomial response variables. Here, we argue that the arcsine transform should not be used in either circumstance. For binomial data, logistic regression has greater interpretability and higher power than analyses of transformed data. However, it is important to check the data for additional unexplained variation, i.e., overdispersion, and to account for it via the inclusion of random effects in the model if found. For non-binomial data, the arcsine transform is undesirable on the grounds of interpretability, and because it can produce nonsensical predictions. The logit transformation is proposed as an alternative approach to address these issues. Examples are presented in both cases to illustrate these advantages, comparing various methods of analyzing proportions including untransformed, arcsine- and logit-transformed linear models and logistic regression (with or without random effects). Simulations demonstrate that logistic regression usually provides a gain in power over other methods.

1,951 citations
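
The sketch below contrasts the approaches compared in the paper: arcsine- and logit-transformed linear models versus a binomial GLM (logistic regression) on the raw counts. The simulated data and the clipping constant used to keep the empirical logit finite are assumptions for illustration; the paper's random-effects extension for overdispersed data is omitted here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_trials = 40
x = rng.normal(size=200)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))  # true intercept 0.5, slope 1.0
successes = rng.binomial(n_trials, p)
prop = successes / n_trials

# transformed-response linear models
X = sm.add_constant(x)
arcsine = np.arcsin(np.sqrt(prop))
eps = 1.0 / (2 * n_trials)                  # keeps logit(0) and logit(1) finite
q = np.clip(prop, eps, 1 - eps)
logit = np.log(q / (1 - q))
ols_arcsine = sm.OLS(arcsine, X).fit()
ols_logit = sm.OLS(logit, X).fit()

# binomial GLM (logistic regression) on the raw success/failure counts
glm = sm.GLM(np.column_stack([successes, n_trials - successes]), X,
             family=sm.families.Binomial()).fit()
print(glm.params)  # close to the true (0.5, 1.0); the OLS fits are on other scales
```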


Network Information
Related Topics (5)
Sample size determination: 21.3K papers, 961.4K citations, 91% related
Regression analysis: 31K papers, 1.7M citations, 88% related
Multivariate statistics: 18.4K papers, 1M citations, 88% related
Linear model: 19K papers, 1M citations, 88% related
Linear regression: 21.3K papers, 1.2M citations, 85% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2024    1
2023    198
2022    433
2021    409
2020    380
2019    404