scispace - formally typeset

Estimation statistics

About: Estimation statistics is a research topic. Over its lifetime, 170 publications have been published on this topic, receiving 30,255 citations.


Papers
Book
01 Jan 1979
TL;DR: This introductory text provides students with a conceptual understanding of basic statistical procedures, as well as the computational skills needed to complete them, focusing on concepts critical to understanding current statistical research such as power and sample size, multiple-comparison tests, multiple regression, and analysis of covariance.
Abstract: This introductory text provides students with a conceptual understanding of basic statistical procedures, as well as the computational skills needed to complete them. The clear presentation, accessible language, and step-by-step instruction make it easy for students from a variety of social science disciplines to grasp the material. The scenarios presented in chapter exercises span the curriculum, from political science to marketing, so that students make a connection between their own area of interest and the study of statistics. Unique coverage focuses on concepts critical to understanding current statistical research such as power and sample size, multiple comparison tests, multiple regression, and analysis of covariance. Additional SPSS coverage throughout the text includes computer printouts and expanded discussion of their contents in interpreting the results of sample exercises.

Contents:
1. Introduction.
2. Organizing and Graphing Data.
3. Describing Distributions: Individual Scores, Central Tendency, and Variation.
4. The Normal Distribution.
5. Correlation: A Measure of Relationship.
6. Linear Regression: Prediction.
7. Sampling, Probability, and Sampling Distributions.
8. Hypothesis Testing: One-Sample Case for the Mean.
9. Estimation: One-Sample Case for the Mean.
10. Hypothesis Testing: One-Sample Case for Other Statistics.
11. Hypothesis Testing: Two-Sample Case for the Mean.
12. Hypothesis Testing: Two-Sample Case for Other Statistics.
13. Determining Power and Sample Size.
14. Hypothesis Testing, K-Sample Case: Analysis of Variance, One-Way Classification.
15. Multiple-Comparison Procedures.
16. Analysis of Variance, Two-Way Classification.
17. Linear Regression: Estimation and Hypothesis Testing.
18. Multiple Linear Regression.
19. Analysis of Covariance.
20. Other Correlation Coefficients.
21. Chi-Square (X2) Tests for Frequencies.
22. Other Nonparametric Tests.

4,010 citations

Journal ArticleDOI
Jacob Cohen
TL;DR: The author reviews the problems with null hypothesis significance testing, including the near universal misinterpretation of p as the probability that H₀ is false, the misinterpretation that its complement is the probability of successful replication, and the mistaken assumption that rejecting H₀ thereby affirms the theory that led to the test.
Abstract: After 4 decades of severe criticism, the ritual of null hypothesis significance testing (mechanical dichotomous decisions around a sacred .05 criterion) still persists. This article reviews the problems with this practice, including near universal misinterpretation of p as the probability that H₀ is false, the misinterpretation that its complement is the probability of successful replication, and the mistaken assumption that if one rejects H₀ one thereby affirms the theory that led to the test. Exploratory data analysis and the use of graphic methods, a steady improvement in and a movement toward standardization in measurement, an emphasis on estimating effect sizes using confidence intervals, and the informed use of available statistical methods are suggested. For generalization, psychologists must finally rely, as has been done in all the older sciences, on replication. (PsycINFO Database Record (c) 2012 APA, all rights reserved)

3,838 citations

Journal ArticleDOI
TL;DR: A straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis is provided.
Abstract: The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η2 was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis.

3,117 citations
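The paper above notes that Cohen's d was the most commonly reported effect size for t tests. As a minimal illustration of the underlying arithmetic, the following pure-stdlib Python sketch computes d for two independent samples using the pooled standard deviation (the function name and sample data are illustrative, not from the paper):

```python
import math

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled SD."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Sample variances (denominator n - 1)
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pooled standard deviation across both groups
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / sp

# Example: two small samples whose means differ by one unit
d = cohens_d([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```

Note that this is the classic pooled-SD definition; variants such as Hedges' g apply a small-sample bias correction on top of this value.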

Journal ArticleDOI
TL;DR: This article extensively discusses two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta‐analysis.
Abstract: Null hypothesis significance testing (NHST) is the dominant statistical approach in biology, although it has many, frequently unappreciated, problems. Most importantly, NHST does not provide us with two crucial pieces of information: (1) the magnitude of an effect of interest, and (2) the precision of the estimate of the magnitude of that effect. All biologists should be ultimately interested in biological importance, which may be assessed using the magnitude of an effect, but not its statistical significance. Therefore, we advocate presentation of measures of the magnitude of effects (i.e. effect size statistics) and their confidence intervals (CIs) in all biological journals. Combined use of an effect size and its CIs enables one to assess the relationships within data more effectively than the use of p values, regardless of statistical significance. In addition, routine presentation of effect sizes will encourage researchers to view their results in the context of previous research and facilitate the incorporation of results into future meta-analysis, which has been increasingly used as the standard method of quantitative review in biology. In this article, we extensively discuss two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta-analysis. However, our focus on these standardised effect size statistics does not mean unstandardised effect size statistics (e.g. mean difference and regression coefficient) are less important. 
We provide potential solutions for four main technical problems researchers may encounter when calculating effect size and CIs: (1) when covariates exist, (2) when bias in estimating effect size is possible, (3) when data have non-normal error structure and/or variances, and (4) when data are non-independent. Although interpretations of effect sizes are often difficult, we provide some pointers to help researchers. This paper serves both as a beginner’s instruction manual and a stimulus for changing statistical practice for the better in the biological sciences.

3,041 citations
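The review above stresses that d statistics and r statistics are interconvertible, which is what makes them usable across study designs in meta-analysis. A minimal sketch of the standard equal-group-size conversion formulas, r = d / sqrt(d² + 4) and d = 2r / sqrt(1 − r²) (function names are illustrative):

```python
import math

def d_to_r(d):
    """Convert a standardised mean difference d to a correlation r,
    assuming equal group sizes (the standard meta-analytic approximation)."""
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r):
    """Inverse conversion: correlation r back to Cohen's d."""
    return 2 * r / math.sqrt(1 - r ** 2)

# A "large" effect of d = 1.0 corresponds to r ≈ 0.447
r = d_to_r(1.0)
```

With unequal group sizes the constant 4 is replaced by a term involving both sample sizes, so this simple form is only an approximation in that case.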

Journal ArticleDOI
15 Mar 1986-BMJ
TL;DR: Some methods of calculating confidence intervals for means and differences between means are given, with similar information for proportions, and the paper also gives suggestions for graphical display.
Abstract: Overemphasis on hypothesis testing--and the use of P values to dichotomise significant or non-significant results--has detracted from more useful approaches to interpreting study results, such as estimation and confidence intervals. In medical studies investigators are usually interested in determining the size of difference of a measured outcome between groups, rather than a simple indication of whether or not it is statistically significant. Confidence intervals present a range of values, on the basis of the sample data, in which the population value for such a difference may lie. Some methods of calculating confidence intervals for means and differences between means are given, with similar information for proportions. The paper also gives suggestions for graphical display. Confidence intervals, if appropriate to the type of study, should be used for major findings in both the main text of a paper and its abstract.

1,841 citations
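The BMJ paper above advocates reporting a confidence interval for the difference between group means rather than a bare p value. As a hedged sketch of one such calculation, the following pure-stdlib Python computes a large-sample (normal-approximation) interval; for small samples a t-based interval, as the paper's methods would give, is preferred (the function name and data are illustrative):

```python
import math
from statistics import NormalDist, mean, stdev

def mean_diff_ci(x, y, level=0.95):
    """Normal-approximation confidence interval for the difference
    between two independent sample means (large-sample sketch)."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. 1.96 for 95%
    diff = mean(x) - mean(y)
    # Standard error of the difference, unpooled variances
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    return diff - z * se, diff + z * se

lo, hi = mean_diff_ci([10, 12, 14, 16, 18], [9, 11, 13, 15, 17])
```

An interval that excludes zero corresponds to a conventionally significant difference, but, as the paper argues, the interval's width conveys the precision of the estimate, which the p value alone does not.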


Network Information
Related Topics (5)
Statistical hypothesis testing
19.5K papers, 1M citations
77% related
Sample size determination
21.3K papers, 961.4K citations
75% related
Inference
36.8K papers, 1.3M citations
71% related
Nonparametric statistics
19.9K papers, 844.1K citations
70% related
Regression analysis
31K papers, 1.7M citations
69% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2021    1
2020    1
2019    2
2017    8
2016    4
2015    6