Topic

Standard error

About: Standard error is a research topic. Over its lifetime, 2,562 publications have been published within this topic, receiving 159,284 citations.


Papers
Journal Article
TL;DR: It is concluded that H and I², which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity, and one or both should be presented in published meta-analyses in preference to the test for heterogeneity.
Abstract: The extent of heterogeneity in a meta-analysis partly determines the difficulty in drawing overall conclusions. This extent may be measured by estimating a between-study variance, but interpretation is then specific to a particular treatment effect metric. A test for the existence of heterogeneity exists, but depends on the number of studies in the meta-analysis. We develop measures of the impact of heterogeneity on a meta-analysis, from mathematical criteria, that are independent of the number of studies and the treatment effect metric. We derive and propose three suitable statistics: H is the square root of the χ² heterogeneity statistic divided by its degrees of freedom; R is the ratio of the standard error of the underlying mean from a random effects meta-analysis to the standard error of a fixed effect meta-analytic estimate; and I² is a transformation of H that describes the proportion of total variation in study estimates that is due to heterogeneity. We discuss interpretation, interval estimates and other properties of these measures and examine them in five example data sets showing different amounts of heterogeneity. We conclude that H and I², which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity. One or both should be presented in published meta-analyses in preference to the test for heterogeneity.

25,460 citations
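The H and I² measures described above follow directly from Cochran's Q heterogeneity statistic and the number of studies. Below is a minimal sketch (not the authors' code); the function and argument names are illustrative.

```python
import numpy as np

def heterogeneity_measures(q, k):
    """H and I^2 from Cochran's Q heterogeneity statistic for k studies.

    H   = sqrt(Q / df), with df = k - 1
    I^2 = (Q - df) / Q, truncated at 0: the proportion of total variation
          in study estimates attributable to heterogeneity.
    """
    df = k - 1
    h = np.sqrt(q / df)
    i2 = max(0.0, (q - df) / q)
    return h, i2

# Example: Q = 25 across k = 10 studies gives H ~ 1.67 and I^2 = 64%.
h, i2 = heterogeneity_measures(25.0, 10)
print(f"H = {h:.2f}, I^2 = {i2:.0%}")
```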

Journal Article
TL;DR: In this article, the authors randomly generate placebo laws in state-level data on female wages from the Current Population Survey and use OLS to compute the DD estimate of its "effect" as well as the standard error of this estimate.
Abstract: Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the autocorrelation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre”- and “post”-period and explicitly takes into account the effective sample size works well even for small numbers of states.

9,397 citations
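As a rough illustration of the problem documented above, the sketch below builds a state-by-year panel with serially correlated outcomes, assigns a placebo law with no true effect, and compares conventional OLS standard errors for the DD coefficient with standard errors clustered at the state level (one correction in the spirit of the paper). The data-generating process and variable names are hypothetical, not the authors' exact simulation with CPS wages.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states, years = list(range(50)), list(range(1980, 2000))

def ar1(n, rho=0.8):
    """Serially correlated (AR(1)) outcome series for one state."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + rng.normal()
    return y

panel = pd.DataFrame([(s, t) for s in states for t in years], columns=["state", "year"])
panel["y"] = np.concatenate([ar1(len(years)) for _ in states])

# Placebo "law": half the states treated from a random year onward; no true effect on y.
law_year = {int(s): int(rng.integers(1985, 1995))
            for s in rng.choice(states, size=25, replace=False)}
panel["treat"] = [int(s in law_year and t >= law_year[s])
                  for s, t in zip(panel["state"], panel["year"])]

formula = "y ~ treat + C(state) + C(year)"
conventional = smf.ols(formula, data=panel).fit()
clustered = smf.ols(formula, data=panel).fit(cov_type="cluster",
                                             cov_kwds={"groups": panel["state"]})
print("conventional DD standard error:", round(conventional.bse["treat"], 3))
print("state-clustered standard error:", round(clustered.bse["treat"], 3))
```

A single draw only hints at the issue; the paper's point rests on repeating the placebo exercise many times and counting how often the conventional standard errors produce spurious 5-percent-level "effects".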

Journal Article
TL;DR: Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: a method based on the distribution of the product of two normal random variables, and resampling methods.
Abstract: The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall.

6,267 citations
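A compact sketch of the bias-corrected bootstrap for an indirect effect a·b in a simple X → M → Y mediation model. The simulated data and the 2,000-replication choice are illustrative assumptions, not the article's simulation design.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated mediation data: X -> M -> Y with true indirect effect 0.4 * 0.3.
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.3 * m + rng.normal(size=n)

def indirect_effect(x, m, y):
    """a*b: slope of M on X times the slope of Y on M (controlling for X)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

est = indirect_effect(x, m, y)

# Bias-corrected bootstrap confidence limits for the indirect effect.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample cases with replacement
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))
boots = np.array(boots)

z0 = norm.ppf((boots < est).mean())              # bias-correction constant
lo = norm.cdf(2 * z0 + norm.ppf(0.025))          # adjusted lower percentile
hi = norm.cdf(2 * z0 + norm.ppf(0.975))          # adjusted upper percentile
ci = np.quantile(boots, [lo, hi])
print(f"indirect effect = {est:.3f}, 95% BC bootstrap CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```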

Journal Article
TL;DR: The generalized least squares approach of Parks produces standard errors that lead to extreme overconfidence, often underestimating variability by 50% or more, and a new method is offered that is both easier to implement and produces accurate standard errors.
Abstract: We examine some issues in the estimation of time-series cross-section models, calling into question the conclusions of many published studies, particularly in the field of comparative political economy. We show that the generalized least squares approach of Parks produces standard errors that lead to extreme overconfidence, often underestimating variability by 50% or more. We also provide an alternative estimator of the standard errors that is correct when the error structures show complications found in this type of model. Monte Carlo analysis shows that these “panel-corrected standard errors” perform well. The utility of our approach is demonstrated via a reanalysis of one “social democratic corporatist” model.

5,670 citations
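A minimal sketch of the panel-corrected standard error computation for pooled OLS on a balanced time-series cross-section: estimate the contemporaneous cross-unit covariance of the OLS residuals and plug it into a sandwich variance. The function and argument names are illustrative, and the unit-major stacking convention is an assumption of this sketch.

```python
import numpy as np

def panel_corrected_se(X, resid):
    """Panel-corrected standard errors (Beck-Katz style) for pooled OLS.

    X     : (N*T, k) regressor matrix stacked unit by unit (unit-major order).
    resid : (T, N) OLS residuals, one column per cross-sectional unit.
    """
    T, N = resid.shape
    sigma = resid.T @ resid / T                   # N x N contemporaneous error covariance
    omega = np.kron(sigma, np.eye(T))             # full (N*T, N*T) covariance under unit-major stacking
    xtx_inv = np.linalg.inv(X.T @ X)
    cov = xtx_inv @ X.T @ omega @ X @ xtx_inv     # sandwich variance of the pooled OLS coefficients
    return np.sqrt(np.diag(cov))
```

With many units the dense Kronecker product becomes expensive and is better replaced by a loop over time periods, but the explicit form keeps the structure of the estimator easy to read.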

Journal Article
28 Jun 2015
TL;DR: This paper studies the cross-sectional properties of return forecasts derived from Fama-MacBeth regressions and finds that the forecasts vary substantially across stocks and have strong predictive power for actual returns.
Abstract: This paper studies the cross-sectional properties of return forecasts derived from Fama-MacBeth regressions. These forecasts mimic how an investor could, in real time, combine many firm characteristics to obtain a composite estimate of a stock’s expected return. Empirically, the forecasts vary substantially across stocks and have strong predictive power for actual returns. For example, using ten-year rolling estimates of Fama-MacBeth slopes and a cross-sectional model with 15 firm characteristics (all based on low-frequency data), the expected-return estimates have a cross-sectional standard deviation of 0.87% monthly and a predictive slope for future monthly returns of 0.74, with a standard error of 0.07.

4,406 citations
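The Fama-MacBeth machinery behind these forecasts reduces to monthly cross-sectional regressions whose slopes are averaged, with the standard error taken from the time series of estimated slopes. A bare-bones sketch under assumed variable names and array shapes (not the paper's code):

```python
import numpy as np

def fama_macbeth(returns, chars):
    """Fama-MacBeth slope estimates and their standard errors.

    returns : (T, N) realized returns, one row per month.
    chars   : (T, N, K) firm characteristics known at the start of each month.
    """
    T, N, K = chars.shape
    slopes = np.empty((T, K + 1))
    for t in range(T):
        design = np.column_stack([np.ones(N), chars[t]])      # intercept + K characteristics
        slopes[t] = np.linalg.lstsq(design, returns[t], rcond=None)[0]
    avg = slopes.mean(axis=0)
    se = slopes.std(axis=0, ddof=1) / np.sqrt(T)              # SE of the average slope
    return avg, se, slopes

# To mimic the paper's real-time forecasts, average the slopes over a rolling window
# (e.g., the prior 120 months) and apply them to the current month's characteristics
# to obtain each stock's expected return.
```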


Network Information
Related Topics (5)
Regression analysis: 31K papers, 1.7M citations, 87% related
Population: 2.1M papers, 62.7M citations, 76% related
Meta-analysis: 20.1K papers, 1.2M citations, 75% related
Cohort: 58.4K papers, 2M citations, 74% related
Estimator: 97.3K papers, 2.6M citations, 73% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    95
2022    208
2021    97
2020    86
2019    90
2018    75