
Showing papers on "Random effects model published in 2007"


Journal ArticleDOI
TL;DR: It is shown that the leading methods for estimating the inter-study variance are special cases of a general method-of-moments estimate of the inter-study variance, and two new two-step methods are suggested.
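
The best known of these moment-based estimators is the DerSimonian-Laird estimate of the inter-study variance τ². A minimal Python sketch of that special case (function name and data are illustrative, not taken from the paper):

```python
import numpy as np

def tau2_dl(y, v):
    """DerSimonian-Laird method-of-moments estimate of the inter-study
    variance tau^2, from effect estimates y and within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled mean
    q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)   # scaling constant
    return max(0.0, (q - (len(y) - 1)) / c)      # truncate at zero

# Illustrative data: five study estimates with their variances.
print(tau2_dl([0.2, 0.5, 0.1, 0.4, 0.6], [0.04, 0.09, 0.05, 0.06, 0.08]))
```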

1,787 citations


Journal ArticleDOI
TL;DR: WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood.
Abstract: WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html.
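
For reference, the linear mixed model underlying such analyses is conventionally written as below; this is the standard formulation in the animal breeding literature, not WOMBAT-specific notation:

```latex
% Standard linear mixed model: y = Xb + Zu + e
%   y: data vector, b: fixed effects, u: random effects, e: residuals
\[
\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{Z}\mathbf{u} + \mathbf{e},
\qquad \mathbf{u} \sim N(\mathbf{0}, \mathbf{G}), \qquad
\mathbf{e} \sim N(\mathbf{0}, \mathbf{R}).
\]
% REML estimates the covariance components in G and R by maximising the
% likelihood of error contrasts K'y chosen so that K'X = 0.
```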

721 citations


Journal ArticleDOI
16 Mar 2007-Test
TL;DR: The proliferation of panel data studies is explained in terms of data availability, the more heightened capacity for modeling the complexity of human behavior than a single cross-section or time series data can possibly allow, and challenging methodology.
Abstract: We explain the proliferation of panel data studies in terms of (i) data availability, (ii) the more heightened capacity for modeling the complexity of human behavior than a single cross-section or time series data can possibly allow, and (iii) challenging methodology. Advantages and issues of panel data modeling are also discussed.

691 citations


Journal ArticleDOI
TL;DR: The study shows that inter-subject variability plays a prominent role in the relatively low sensitivity and reliability of group studies and focuses on the notion of reproducibility by bootstrapping.

541 citations


Book ChapterDOI
14 Sep 2007

424 citations


Journal ArticleDOI
TL;DR: To include all relevant data regardless of effect measure chosen, reviewers should also include zero total event trials when calculating pooled estimates using OR and RR.
Abstract: Meta-analysis handles randomized trials with no outcome events in both treatment and control arms inconsistently, including them when risk difference (RD) is the effect measure but excluding them when relative risk (RR) or odds ratio (OR) is used. This study examined the influence of such trials on pooled treatment effects. We analysed three illustrative published meta-analyses, with and without their zero total event trials, covering a range of proportions of zero total event trials, treatment effects, and degrees of heterogeneity, using inverse variance weighting and a random effects model that incorporates between-study heterogeneity. Including zero total event trials in meta-analyses moves the pooled estimate of treatment effect closer to nil, narrows its confidence interval and decreases between-study heterogeneity. For RR and OR, inclusion of such trials causes small changes, even when they comprise the large majority of included trials. For RD, the changes are more substantial, and in extreme cases can eliminate a statistically significant effect estimate. To include all relevant data regardless of the effect measure chosen, reviewers should also include zero total event trials when calculating pooled estimates using OR and RR.
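
As a hedged illustration of the comparison described above, the sketch below pools log odds ratios by inverse-variance weighting with and without zero total event trials, using a 0.5 continuity correction for zero cells; the data, names, and fixed-effect weighting are illustrative choices, not the paper's exact analysis:

```python
import numpy as np

def log_or_and_var(a, b, c, d, cc=0.5):
    """Log odds ratio and variance for a 2x2 table (a,b = treatment
    events/non-events; c,d = control), with a continuity correction
    added to every cell when any cell is zero."""
    if min(a, b, c, d) == 0:
        a, b, c, d = a + cc, b + cc, c + cc, d + cc
    return np.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

def pooled_or(tables):
    """Fixed-effect (inverse-variance) pooled odds ratio."""
    ests = [log_or_and_var(*t) for t in tables]
    w = np.array([1 / v for _, v in ests])
    y = np.array([e for e, _ in ests])
    return np.exp(np.sum(w * y) / np.sum(w))

# Hypothetical trials; the last two have zero events in both arms.
trials = [(12, 88, 20, 80), (5, 95, 9, 91), (0, 50, 0, 50), (0, 30, 0, 30)]
print(pooled_or(trials))       # including zero total event trials
print(pooled_or(trials[:2]))   # excluding them: estimate further from nil
```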

406 citations


Journal ArticleDOI
TL;DR: A spatial panel data regression model is considered with serial correlation on each spatial unit over time as well as spatial dependence between the spatial units at each point in time.

338 citations


Journal ArticleDOI
TL;DR: A novel method for constructing confidence intervals for the amount of heterogeneity in the effect sizes is proposed that guarantees nominal coverage probabilities even in small samples when model assumptions are satisfied and yields the most accurate coverage probabilities under conditions more analogous to practice.
Abstract: Effect size estimates to be combined in a systematic review are often found to be more variable than one would expect based on sampling differences alone. This is usually interpreted as evidence that the effect sizes are heterogeneous. A random-effects model is then often used to account for the heterogeneity in the effect sizes. A novel method for constructing confidence intervals for the amount of heterogeneity in the effect sizes is proposed that guarantees nominal coverage probabilities even in small samples when model assumptions are satisfied. A variety of existing approaches for constructing such confidence intervals are summarized and the various methods are applied to an example to illustrate their use. A simulation study reveals that the newly proposed method yields the most accurate coverage probabilities under conditions more analogous to practice, where assumptions about normally distributed effect size estimates and known sampling variances only hold asymptotically.
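
One established interval of this kind, the Q-profile approach, inverts the generalized Q statistic against chi-square quantiles. The Python sketch below illustrates the idea under that interpretation (illustrative data; not necessarily the paper's exact proposal):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def q_gen(tau2, y, v):
    """Generalized Q statistic evaluated at a candidate tau^2."""
    w = 1.0 / (np.asarray(v) + tau2)
    mu = np.sum(w * np.asarray(y)) / np.sum(w)
    return np.sum(w * (np.asarray(y) - mu) ** 2)

def qprofile_ci(y, v, level=0.95, upper=100.0):
    """Q-profile interval for tau^2: the tau^2 values at which Q_gen
    hits the chi-square(k-1) quantiles. Q_gen decreases in tau^2."""
    k = len(y)
    hi_target, lo_target = chi2.ppf([(1 + level) / 2, (1 - level) / 2], df=k - 1)
    def bound(target):
        f = lambda t: q_gen(t, y, v) - target
        if f(0.0) < 0:            # even tau^2 = 0 gives Q below the target
            return 0.0
        return brentq(f, 0.0, upper)
    return bound(hi_target), bound(lo_target)

y = [0.2, 0.5, 0.1, 0.4, 0.6]
v = [0.04, 0.09, 0.05, 0.06, 0.08]
print(qprofile_ci(y, v))
```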

283 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe a simple, generic and highly accurate efficient importance sampling (EIS) Monte Carlo (MC) procedure for the evaluation of high-dimensional numerical integrals.

280 citations


Journal ArticleDOI
TL;DR: In this article, the authors conducted a systematic literature search and identified 13 randomized studies examining the effects of problem-solving therapy (PST) for depression, with a total of 1133 subjects.

243 citations


Journal ArticleDOI
TL;DR: Maximum likelihood estimation is assessed for a general normal model and a generalised model for bivariate random-effects meta-analysis (BRMA); the paper highlights the benefits of BRMA in both a normal and a generalised modelling framework and examines the estimation of between-study correlation to aid practitioners.
Abstract: When multiple endpoints are of interest in evidence synthesis, a multivariate meta-analysis can jointly synthesise those endpoints and utilise their correlation. A multivariate random-effects meta-analysis must incorporate and estimate the between-study correlation (ρB). In this paper we assess maximum likelihood estimation of a general normal model and a generalised model for bivariate random-effects meta-analysis (BRMA). We consider two applied examples, one involving a diagnostic marker and the other a surrogate outcome. These motivate a simulation study where estimation properties from BRMA are compared with those from two separate univariate random-effects meta-analyses (URMAs), the traditional approach. The normal BRMA model estimates ρB as -1 in both applied examples. Analytically we show this is due to the maximum likelihood estimator sensibly truncating the between-study covariance matrix on the boundary of its parameter space. Our simulations reveal this commonly occurs when the number of studies is small or the within-study variation is relatively large; it also causes upwardly biased between-study variance estimates, which are inflated to compensate for the restriction on ρB. Importantly, this does not induce any systematic bias in the pooled estimates and produces conservative standard errors and mean-square errors. Furthermore, the normal BRMA is preferable to two normal URMAs; the mean-square error and standard error of pooled estimates are generally smaller in the BRMA, especially given data missing at random. For meta-analysis of proportions we then show that a generalised BRMA model is better still. This correctly uses a binomial rather than normal distribution, and produces better estimates than the normal BRMA and also two generalised URMAs; however the model may sometimes not converge due to difficulties estimating ρB. A BRMA model offers numerous advantages over separate univariate syntheses; this paper highlights some of these benefits in both a normal and generalised modelling framework, and examines the estimation of between-study correlation to aid practitioners.
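
For reference, the general normal BRMA model can be written as the two-stage hierarchy below, where the within-study variances and correlations are treated as known; this is standard notation assumed for illustration, not quoted from the paper:

```latex
% Normal BRMA for study i: within-study level (left) and
% between-study level (right), with between-study correlation \rho_B.
\[
\begin{pmatrix} \hat{\theta}_{i1} \\ \hat{\theta}_{i2} \end{pmatrix}
\sim N\!\left(
\begin{pmatrix} \theta_{i1} \\ \theta_{i2} \end{pmatrix},
\begin{pmatrix} s_{i1}^2 & \rho_{Wi} s_{i1} s_{i2} \\
\rho_{Wi} s_{i1} s_{i2} & s_{i2}^2 \end{pmatrix}
\right),
\quad
\begin{pmatrix} \theta_{i1} \\ \theta_{i2} \end{pmatrix}
\sim N\!\left(
\begin{pmatrix} \beta_{1} \\ \beta_{2} \end{pmatrix},
\begin{pmatrix} \tau_1^2 & \rho_B \tau_1 \tau_2 \\
\rho_B \tau_1 \tau_2 & \tau_2^2 \end{pmatrix}
\right).
\]
```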

Journal ArticleDOI
TL;DR: The paper presents the results from applying the nonparametric model and comparing it to the original model estimated using a conventional parametric random effects model, and discusses the implications for future applications of the SF-6D and further work in this field.

Journal ArticleDOI
TL;DR: In this paper, generalized linear mixed models (GLMMs) are used to model portfolio credit default risk, which allows for a flexible specification of the portfolio risk in terms of observed fixed effects and unobserved random effects, in order to explain the phenomena of default dependence and time-inhomogeneity in historical default data.

Journal ArticleDOI
TL;DR: In this paper, the authors used a mixed logit model to estimate the economic benefits associated with rural landscape improvements in Ireland using the panel nature of the dataset to retrieve willingness-to-pay values for every individual in the sample.
Abstract: This paper reports the findings from a discrete-choice experiment designed to estimate the economic benefits associated with rural landscape improvements in Ireland. Using a mixed logit model, the panel nature of the dataset is exploited to retrieve willingness-to-pay values for every individual in the sample. This departs from customary approaches in which the willingness-to-pay estimates are normally expressed as measures of central tendency of an a priori distribution. Random-effects models for panel data are subsequently used to identify the determinants of the individual-specific willingness-to-pay estimates. In comparison with the standard methods used to incorporate individual-specific variables into the analysis of discrete-choice experiments, the analytical approach outlined in this paper is shown to add considerable explanatory power to the welfare estimates.
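
A sketch of the individual-level step in standard mixed logit notation (following common usage of conditioning on each respondent's observed choice sequence; the symbols are assumptions for illustration, not quoted from the paper):

```latex
% Posterior mean of the random coefficients for respondent n given
% choice sequence y_n, and the implied individual WTP for attribute k.
\[
\hat{\beta}_n = \frac{\int \beta \, P(y_n \mid \beta) f(\beta)\, d\beta}
                     {\int P(y_n \mid \beta) f(\beta)\, d\beta},
\qquad
\widehat{\mathrm{WTP}}_{nk} = -\,\hat{\beta}_{nk} / \hat{\beta}_{n,\mathrm{cost}}.
\]
```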

Journal ArticleDOI
TL;DR: Inference for the fixed effects under the assumption of independent, normally distributed errors with constant variance is shown to be robust when the errors are either non-Gaussian or heteroscedastic, except when the error variance depends on a covariate that enters the model in interaction with time.

Journal ArticleDOI
TL;DR: The benefits and limitations of multivariate meta-analysis are illustrated to provide helpful insight for practitioners, and how and why a BRMA is able to 'borrow strength' across outcomes is shown.
Abstract: Often multiple outcomes are of interest in each study identified by a systematic review, and in this situation a separate univariate meta-analysis is usually applied to synthesize the evidence for each outcome independently; an alternative approach is a single multivariate meta-analysis model that utilizes any correlation between outcomes and obtains all the pooled estimates jointly. Surprisingly, multivariate meta-analysis is rarely considered in practice, so in this paper we illustrate the benefits and limitations of the approach to provide helpful insight for practitioners. We compare a bivariate random-effects meta-analysis (BRMA) to two independent univariate random-effects meta-analyses (URMA), and show how and why a BRMA is able to 'borrow strength' across outcomes. Then, on application to two examples in healthcare, we show: (i) given complete data for both outcomes in each study, BRMA is likely to produce individual pooled estimates with very similar standard errors to those from URMA; (ii) given some studies where one of the outcomes is missing at random, the 'borrowing of strength' is likely to allow BRMA to produce individual pooled estimates with noticeably smaller standard errors than those from URMA; (iii) for either complete data or missing data, BRMA will produce a more appropriate standard error of the pooled difference between outcomes as it incorporates their correlation, which is not possible using URMA; and (iv) despite its advantages, BRMA may often not be possible due to the difficulty in obtaining the within-study correlations required to fit the model. Bivariate meta-regression and further research priorities are also discussed.

Journal ArticleDOI
TL;DR: This paper provides a tutorial, with WinBUGS code, on the practical implementation of a flexible random effects model based on methodology developed in the Bayesian nonparametrics literature and implemented in freely available software.
Abstract: Random effects models are used in many applications in medical statistics, including meta-analysis, cluster randomized trials and comparisons of health care providers. This paper provides a tutorial on the practical implementation of a flexible random effects model based on methodology developed in the Bayesian nonparametrics literature, and implemented in freely available software. The approach is applied to the problem of hospital comparisons using routine performance data, and among other benefits provides a diagnostic to detect clusters of providers with unusual results, thus avoiding problems caused by masking in traditional parametric approaches. By providing code for WinBUGS we hope that the model can be used by applied statisticians working in a wide variety of applications.
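
A sketch of the kind of Bayesian nonparametric random effects specification meant here, in Dirichlet process notation; the paper's exact formulation (base measure, truncation level K) may differ:

```latex
% Dirichlet process random effects, with the truncated stick-breaking
% representation commonly used for MCMC (truncation at K is assumed):
\[
\theta_i \mid G \sim G, \qquad G \sim \mathrm{DP}(\alpha, G_0), \qquad
G \approx \sum_{k=1}^{K} \pi_k\, \delta_{\phi_k}, \quad
\pi_k = V_k \prod_{l<k}(1 - V_l), \quad V_k \sim \mathrm{Beta}(1, \alpha).
\]
```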

Journal ArticleDOI
TL;DR: In this article, the authors present several extensions of the most familiar models for count data, the Poisson and negative binomial models, and develop an encompassing model for two well known variants of the NB1 and NB2 forms.
Abstract: This study presents several extensions of the most familiar models for count data, the Poisson and negative binomial models. We develop an encompassing model for two well known variants of the negative binomial model (the NB1 and NB2 forms). We then propose some alternative approaches to the standard log gamma model for introducing heterogeneity into the loglinear conditional means for these models. The lognormal model provides a versatile alternative specification that is more flexible (and more natural) than the log gamma form, and provides a platform for several 'two part' extensions, including zero inflation, hurdle and sample selection models. We also resolve some features in Hausman, Hall and Griliches's (1984) widely used panel data treatments for the Poisson and negative binomial models that appear to conflict with more familiar models of fixed and random effects. Finally, we consider a bivariate Poisson model that is also based on the lognormal heterogeneity model. Two recent applications have used this model. We suggest that the correlation estimated in their model frameworks is an ambiguous measure of the correlation of the variables of interest, and may substantially overstate it. We conclude with a detailed application of the proposed methods using the data employed in one of the two aforementioned bivariate Poisson studies.
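
One common way to write an encompassing negative binomial family of this kind is through the conditional variance function, which nests the NB1 and NB2 forms; this is a sketch in standard notation, not necessarily the paper's exact parameterization:

```latex
% Encompassing negative binomial (NB-P style) variance function:
% P = 1 recovers NB1, P = 2 recovers NB2.
\[
E[y_i \mid x_i] = \lambda_i = \exp(x_i'\beta), \qquad
\mathrm{Var}[y_i \mid x_i] = \lambda_i + \alpha \lambda_i^{P}.
\]
```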

Journal ArticleDOI
TL;DR: A systematic review of all population pharmacokinetic and/or pharmacodynamic analyses published between 2002 and 2004 to survey the current methods used to evaluate models and to assess whether those models were adequately evaluated.
Abstract: Model evaluation is an important issue in population analyses. We aimed to perform a systematic review of all population pharmacokinetic and/or pharmacodynamic analyses published between 2002 and 2004 to survey the current methods used to evaluate models and to assess whether those models were adequately evaluated. We selected 324 articles in MEDLINE using defined key words and built a data abstraction form composed of a checklist of items to extract the relevant information from these articles with respect to model evaluation. In the data abstraction form, evaluation methods were divided into three subsections: basic internal methods (goodness-of-fit [GOF] plots, uncertainty in parameter estimates and model sensitivity), advanced internal methods (data splitting, resampling techniques and Monte Carlo simulations) and external model evaluation. Basic internal evaluation was the most frequently described method in the reports: 65% of the models involved GOF evaluation. Standard errors or confidence intervals were reported for 50% of fixed effects but only for 22% of random effects. Advanced internal methods were used in approximately 25% of models: data splitting was more often used than bootstrap and cross-validation; simulations were used in 6% of models to evaluate models by a visual predictive check or by a posterior predictive check. External evaluation was performed in only 7% of models. Using the subjective synthesis of model evaluation for each article, we judged the models to be adequately evaluated in 28% of pharmacokinetic models and 26% of pharmacodynamic models. Basic internal evaluation was preferred to more advanced methods, probably because the former is performed easily with most software. We also noticed that when the aim of modelling was predictive, advanced internal methods or more stringent methods were more often used.

Journal ArticleDOI
TL;DR: In this article, the multilevel Rasch model with crossed or partially crossed random effects is used to estimate the teacher × content strand interaction in an educational testing scenario, where students are grouped into classrooms and many test items share a common grouping structure such as a content strand or a reading passage.
Abstract: Traditional Rasch estimation of the item and student parameters via marginal maximum likelihood, joint maximum likelihood or conditional maximum likelihood, assume individuals in clustered settings are uncorrelated and items within a test that share a grouping structure are also uncorrelated. These assumptions are often violated, particularly in educational testing situations, in which students are grouped into classrooms and many test items share a common grouping structure, such as a content strand or a reading passage. Consequently, one possible approach is to explicitly recognize the clustered nature of the data and directly incorporate random effects to account for the various dependencies. This article demonstrates how the multilevel Rasch model can be estimated using the functions in R for mixed-effects models with crossed or partially crossed random effects. We demonstrate how to model the following hierarchical data structures: a) individuals clustered in similar settings (e.g., classrooms, schools), b) items nested within a particular group (such as a content strand or a reading passage), and c) how to estimate a teacher × content strand interaction.
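
A sketch of the model structure in standard item response notation, with hypothetical variance labels (the article's exact parameterization may differ):

```latex
% Multilevel Rasch sketch: person p, item i in content strand s(i),
% teacher t(p); the last term is the teacher x strand interaction.
\[
\mathrm{logit}\, \Pr(y_{pi} = 1) = \theta_p - b_i + u_{t(p),\, s(i)}, \qquad
\theta_p \sim N(0, \sigma_\theta^2), \quad
b_i \sim N(0, \sigma_b^2), \quad
u_{ts} \sim N(0, \sigma_u^2).
\]
```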

Journal ArticleDOI
TL;DR: Results suggest that power can be highly dependent on the statistical model used to meta-analyse the data and even very large studies may have little impact on a meta-analysis when there is considerable between-study heterogeneity.
Abstract: Meta-analyses of randomized controlled trials (RCTs) provide the highest level of evidence regarding the effectiveness of interventions and as such underpin much of evidence-based medicine. Despite this, meta-analyses are usually produced as observational by-products of the existing literature, with no formal consideration of future meta-analyses when individual trials are being designed. Basing the sample size of a new trial on the results of an updated meta-analysis which will include it may sometimes make more sense than powering the trial in isolation. A framework for sample size calculation for a future RCT based on the results of a meta-analysis of the existing evidence is presented. Both fixed and random effect approaches are explored through an example. Bayesian Markov Chain Monte Carlo simulation modelling is used for the random effects model since it has computational advantages over the classical approach. Several criteria on which to base inference and hence power are considered. The prior expectation of the power is averaged over the prior distribution for the unknown true treatment effect. An extension to the framework allowing for consideration of the design for a series of new trials is also presented. Results suggest that power can be highly dependent on the statistical model used to meta-analyse the data and even very large studies may have little impact on a meta-analysis when there is considerable between-study heterogeneity. This raises issues regarding the appropriateness of the use of random effect models when designing and drawing inferences across a series of studies.

Journal ArticleDOI
TL;DR: A theoretical result is found which states that whenever a subset of fixed-effects parameters, not included in the random-effects structure, equals zero, the corresponding maximum likelihood estimator will consistently estimate zero; this implies that under certain conditions a significant effect could be considered a reliable result, even if the random-effects distribution is misspecified.
Abstract: Generalized linear mixed models (GLMMs) have become a frequently used tool for the analysis of non-Gaussian longitudinal data. Estimation is based on maximum likelihood theory, which assumes that the underlying probability model is correctly specified. Recent research is showing that the results obtained from these models are not always robust against departures from the assumptions on which these models are based. In the present work we have used simulations with a logistic random-intercept model to study the impact of misspecifying the random-effects distribution on the type I and II errors of the tests for the mean structure in GLMMs. We found that the misspecification can either increase or decrease the power of the tests, depending on the shape of the underlying random-effects distribution, and it can considerably inflate the type I error rate. Additionally, we have found a theoretical result which states that whenever a subset of fixed-effects parameters, not included in the random-effects structure, equals zero, the corresponding maximum likelihood estimator will consistently estimate zero. This implies that under certain conditions a significant effect could be considered as a reliable result, even if the random-effects distribution is misspecified.

Journal ArticleDOI
TL;DR: The authors consider the difference between the Hausman, Hall and Griliches (HHG) FENB model, which quirkily allows an overall constant, and a 'true' FENB model in the familiar index function form, and propose the lognormal model as an alternative RE negative binomial model in which the common effect appears in a natural index function form.
Abstract: The most familiar fixed effects (FE) and random effects (RE) panel data treatments for count data were proposed by Hausman, Hall and Griliches (HHG) (1984). The Poisson FE model is particularly simple and is one of a small few known models in which the incidental parameters problem is, in fact, not a problem. The same is not true of the negative binomial (NB) model. Researchers are sometimes surprised to find that the HHG formulation of the FENB model allows an overall constant – a quirk that has also been documented elsewhere. We resolve the source of the ambiguity, and consider the difference between the HHG FENB model and a ‘true’ FENB model that appears in the familiar index function form. The familiar RE Poisson model using a log gamma heterogeneity term produces the NB model. The HHG RE NB model is also unlike what might seem the natural application in which the heterogeneity term appears as an additive common effect in the conditional mean. We consider the lognormal model as an alternative RENB model in which the common effect appears in a natural index function form.

Journal ArticleDOI
TL;DR: This work addresses the problem of selecting which variables should be included in the fixed and random components of logistic mixed effects models for correlated data using a stochastic search Gibbs sampler to estimate the exact model-averaged posterior distribution.
Abstract: We address the problem of selecting which variables should be included in the fixed and random components of logistic mixed effects models for correlated data. A fully Bayesian variable selection is implemented using a stochastic search Gibbs sampler to estimate the exact model-averaged posterior distribution. This approach automatically identifies subsets of predictors having nonzero fixed effect coefficients or nonzero random effects variance, while allowing uncertainty in the model selection process. Default priors are proposed for the variance components and an efficient parameter expansion Gibbs sampler is developed for posterior computation. The approach is illustrated using simulated data and an epidemiologic example.

Journal ArticleDOI
TL;DR: A generalized linear model is proposed, accommodating overdispersion and clustering through two separate sets of random effects, of gamma and normal type, respectively, which is implemented in the SAS procedure NLMIXED.
Abstract: Non-Gaussian outcomes are often modeled using members of the so-called exponential family. The Poisson model for count data falls within this tradition. The family in general, and the Poisson model in particular, are at the same time convenient since mathematically elegant, but in need of extension since often somewhat restrictive. Two of the main rationales for existing extensions are (1) the occurrence of overdispersion, in the sense that the variability in the data is not adequately captured by the model's prescribed mean-variance link, and (2) the accommodation of data hierarchies owing to, for example, repeatedly measuring the outcome on the same subject, recording information from various members of the same family, etc. There is a variety of overdispersion models for count data, such as, for example, the negative-binomial model. Hierarchies are often accommodated through the inclusion of subject-specific, random effects. Though not always, one conventionally assumes such random effects to be normally distributed. While both of these issues may occur simultaneously, models accommodating them at once are less than common. This paper proposes a generalized linear model, accommodating overdispersion and clustering through two separate sets of random effects, of gamma and normal type, respectively. This is in line with the proposal by Booth et al. (Stat Model 3:179-181, 2003). The model extends both classical overdispersion models for count data (Breslow, Appl Stat 33:38-44, 1984), in particular the negative binomial model, as well as the generalized linear mixed model (Breslow and Clayton, J Am Stat Assoc 88:9-25, 1993). Apart from model formulation, we briefly discuss several estimation options, and then settle for maximum likelihood estimation with both fully analytic integration as well as hybrid between analytic and numerical integration. The latter is implemented in the SAS procedure NLMIXED. The methodology is applied to data from a study in epileptic seizures.
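
In a common notation for this combined model, the two sets of random effects enter the Poisson mean multiplicatively and on the log scale, respectively; this is a sketch with illustrative symbols, not necessarily the paper's exact parameterization:

```latex
% Combined model for counts: gamma random effects \theta_{ij} capture
% overdispersion, normal random effects b_i capture clustering.
\[
y_{ij} \mid b_i, \theta_{ij} \sim \mathrm{Poisson}(\lambda_{ij}), \qquad
\lambda_{ij} = \theta_{ij} \exp(x_{ij}'\beta + z_{ij}'b_i),
\]
\[
\theta_{ij} \sim \mathrm{Gamma}(a, b), \qquad
b_i \sim N(\mathbf{0}, D).
\]
```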

Journal ArticleDOI
TL;DR: In this paper, the authors provide a characterization of the class of weights that produce estimators that are first-order unbiased and show that such bias-reducing weights must depend on the data unless an orthogonal reparameterization or an essentially equivalent condition is available.
Abstract: Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall in this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first-order unbiased. We show that such bias-reducing weights must depend on the data unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Finally, we show that random effects estimators are not bias reducing in general and discuss important exceptions. Three examples and some Monte Carlo experiments illustrate the results.

Journal ArticleDOI
TL;DR: This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks, and discusses the compensatory relationship between the random effects of the growth parameters and the longitudinal error structure for model specification.
Abstract: This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general misspecification of the Σ matrix usually resulted in overestimation of the variances of the random effects (e.g., τ00, τ11) and standard errors of the corresponding growth parameter estimates (e.g., SEβ0, SEβ1). Overestimates of the standard errors led to lower statistical power in tests of the growth parameters. An unstructured Σ matrix under the mixed model framework generally led to underestimates of standard errors of the growth parameter estimates. Underestimates of the standard errors led to inflation of the type I error rate in tests of the growth parameters. Implications of the compensatory relationship between the random effects of the growth parameters and the longitudinal error structure for model specification were discussed.

Journal ArticleDOI
TL;DR: In this article, the authors developed a mixed model methodology for Cox-type hazard regression models where the usual linear predictor is generalized to a geoadditive predictor incorporating non-parametric terms for the (log-)baseline hazard rate, time-varying coefficients and non-linear effects of continuous covariates, a spatial component, and additional cluster-specific frailties.
Abstract: Mixed model based approaches for semiparametric regression have gained much interest in recent years, both in theory and application. They provide a unified and modular framework for penalized likelihood and closely related empirical Bayes inference. In this article, we develop mixed model methodology for a broad class of Cox-type hazard regression models where the usual linear predictor is generalized to a geoadditive predictor incorporating non-parametric terms for the (log-)baseline hazard rate, time-varying coefficients and non-linear effects of continuous covariates, a spatial component, and additional cluster-specific frailties. Non-linear and time-varying effects are modelled through penalized splines, while spatial components are treated as correlated random effects following either a Markov random field or a stationary Gaussian random field prior. Generalizing existing mixed model methodology, inference is derived using penalized likelihood for regression coefficients and (approximate) marginal likelihood for smoothing parameters. In a simulation we study the performance of the proposed method, in particular comparing it with its fully Bayesian counterpart using Markov chain Monte Carlo methodology, and complement the results by some asymptotic considerations. As an application, we analyse leukaemia survival data from northwest England.
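
A sketch of the geoadditive hazard predictor described here, in standard structured additive regression notation (symbols are illustrative):

```latex
% Geoadditive hazard: log-baseline g_0, penalized-spline effects f_j,
% spatial effect f_spat (Markov/Gaussian random field), frailty b_c.
\[
\lambda_i(t) = \exp\Big\{ g_0(t) + \sum_j f_j(x_{ij}) + v_i'\gamma
+ f_{\mathrm{spat}}(s_i) + b_{c(i)} \Big\}.
\]
```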

01 Nov 2007
TL;DR: In this article, the authors make use of longitudinal data for Denmark, taken from the waves 1995-1999 of the European Community Household Panel, and estimate fixed effects ordered logit models using the estimation methods proposed by Ferrer-i-Carbonell and Frijters (2004) and Das and van Soest (1999).
Abstract: A growing literature seeks to explain differences in individuals' self-reported satisfaction with their jobs. The evidence so far has mainly been based on cross-sectional data and when panel data have been used, individual unobserved heterogeneity has been modelled as an ordered probit model with random effects. This article makes use of longitudinal data for Denmark, taken from the waves 1995-1999 of the European Community Household Panel, and estimates fixed effects ordered logit models using the estimation methods proposed by Ferrer-i-Carbonell and Frijters (2004) and Das and van Soest (1999). For comparison and testing purposes a random effects ordered probit is also estimated. Estimations are carried out separately on the samples of men and women for individuals' overall satisfaction with the jobs they hold. We find that using the fixed effects approach (that clearly rejects the random effects specification), considerably reduces the number of key explanatory variables. The impact of central economic factors is the same as in previous studies, though. Moreover, the determinants of job satisfaction differ considerably between the genders, in particular once individual fixed effects are allowed for.

Journal ArticleDOI
TL;DR: In a meta-analysis, one often finds that the observed outcomes from a set of related studies differ more from each other than would be expected based on sampling variability alone.
Abstract: To conduct a meta-analysis, one needs to express the results from a set of related studies in terms of an outcome measure, such as a standardized mean difference, correlation coefficient, or odds ratio. The observed outcome from a single study will differ from the true value of the outcome measure because of sampling variability. The observed outcomes from a set of related studies measuring the same outcome will, therefore, not coincide. However, one often finds that the observed outcomes differ more from each other than would be expected based on sampling variability alone. A likely explanation for this phenomenon is that the true values of the outcome measure are heterogeneous. One way to account for the heterogeneity is to assume that the heterogeneity is entirely random. Another approach is to examine whether the heterogeneity in the outcomes can be accounted for, at least in part, by a set of study-level variables describing the methods, procedures, and samples used in the different studies...