
Showing papers on "Resampling published in 1977"


Book
01 Jan 1977
TL;DR: This book is an introductory statistics text covering the two main types of statistics, descriptive and inferential, with topics ranging from frequency distributions and measures of central tendency through correlation, regression analysis, and the analysis of variance.
Abstract: 1. Introduction. The "Image" of Statistics. Two Types of Statistics. Descriptive Statistics. Inferential Statistics. The Interdisciplinary Nature of Statistics. Statistics and Mathematics. Case Study with Computer Applications. Secrets of Success. The Rewards of Your Labor. 2. Frequency Distributions. Variables. Measurements of Variables. Use of Symbols. Frequency Distributions. Organizing Data for Meaning. An Example. Ungrouped Frequency Distributions. Grouped Frequency Distributions. Tukey's Tallies. Percents and Cumulative Percents. Graphs of Frequency Distributions. The Histogram or Bar Graph. Frequency Polygons. Polygons vs. Histograms. The Ogive Curve. Median, Quartiles, and Percentiles. Box-and-Whisker Plots. Time-Series Graphs. Pie Charts. Describing Distributions. Misleading Graphs: How to Lie with Statistics. Distorted Representation. Misleading Scaling and Calibration. Combination Graphs. 3. Measures of Central Tendency and Scales of Measurement. Measurement Scales. Nominal Scales. Ordinal Scales. Interval Scales. Ratio Scales. Measurement Scales and Statistics. Measures of Central Tendency. The Mean. The Median. The Mode. Mean, Median, and Mode of Combined Subgroups. Central Tendency and Skewness. Mean, Median, or Mode: Which Measure Is Best? 4. Measures of Variability. Introduction. Assessing Variability. Deviation Scores. Sum of Squares. The Population Variance. An Example. The Standard Deviation of a Population. Parameters vs. Statistics. Sampling Error and the Sample Variance. Expected Values. The Sample Standard Deviation as an Estimate of the Parameter. Range. The H-Spread and the Interquartile Range. The Influence of Sample Size on the Range. Reliability and Consistency of Estimators. 5. The Normal Distribution and Standard Scores. Introduction. Discrete and Continuous Measures. God Loves the Normal Curve. Characteristics of the Normal Curve. Standard Scores. The Basic Standard Score, the z-Scale. Other Standard Scores. T-Scores.
Percentile vs. Standard Score Units. Proportions and Areas within the Normal Curve. Determining the Percentile Rank of Observed Scores. Determining the Raw Score Equivalent of Percentiles. Determining the Area between Two z-Scores. Use of Standard Scores with Samples. 6. Correlation: Concept and Computation. Introduction. The Need for a Measure of Relationship. How Correlation is Expressed. The Use of Correlation Coefficients. Scatterplots. Linear and Curvilinear Relationships. The Pearson Product-Moment Correlation Coefficient. Another Alternative Formula for r. Correlation is not Causation. Zero Correlation and Causation. 7. Interpreting Correlation Coefficients. Introduction. Linear Transformations and Correlation. Scatterplots. The Pearson Correlation Coefficient as an Inferential Statistic. Effect of Measurement Error on r. The Pearson r and Marginal Distributions. Effects of Heterogeneity on Correlation. Correcting for Restricted Variability. 8. Prediction and Regression. Purposes of Regression Analysis. Independent and Dependent Variables. The Regression Effect. The Regression Equation Expressed in Standard z-Scores. Correlation as Percentage. Use of Regression Equations. The Regression Line. Residuals and the Criterion of Best Fit. Homoscedasticity. The Raw-Score Regression Equation. The Standard Error of Estimate. Determining Probabilities of Predictions. Regression of Pretest-Posttest Gains. Multiple Correlation. Partial Correlation. 9. Statistical Inference: Sampling and Interval Estimation. The Function of Statistical Inference. Populations and Samples: Parameters and Statistics. Infinite and Finite Populations. The Need for Representative Samples. Types of Samples. Random Samples. Random Sampling Using a Table of Random Numbers. Systematic Samples. Accidental Samples. Point and Interval Estimates. Sampling Distributions: The Sampling Distribution of the Mean. The Standard Error of the Mean. Confidence Intervals.
Confidence Intervals when σ Is Known: An Example. Confidence Intervals when σ Is Unknown. Sampling Distributions and Confidence Intervals with Nonnormal Distributions. The Assumption of Normality and the Central Limit Theorem. A Demonstration of the Central Limit Theorem. Accuracy of Confidence Intervals. The Concept of the Sampling Distribution. 10. Hypothesis Testing: Inferences Regarding the Population Mean. Introduction to Hypothesis Testing. Statistical Hypotheses. Testing Hypotheses about a Population Mean. Testing H0: μ = K: the One-Sample z-Test. Certainty and Statistical Inference. An Example in which H0 is Rejected. Hypothesis Testing and Confidence Intervals. The z-Ratio and the t-Ratio. The One-Sample t-Test: An Example. One-Tail vs. Two-Tail Tests. 11. Testing Hypotheses About the Difference Between Two Means. Introduction. Testing Statistical Hypotheses Involving Two Means. The Null Hypothesis. The z-Test for Differences between Independent Means. The Standard Error of the Difference between Means. The t-Distribution and the t-Test. Assumptions of the t-Test. Computing the Standard Error of the Difference between Means. Testing the Null Hypothesis Using the t-Test. The t-Test: An Illustration. One-Tail and Two-Tail Tests Revisited. t-Test Assumptions and Robustness. Normality. Homogeneity of Variance. Testing for Homogeneity of Variance. Independence of Observations. Testing the Null Hypothesis with Paired Observations. Confidence Intervals for the Mean Difference. Effect Size. Cautions Regarding Matched-Pair Research Designs. 12. Inferences About Proportions. Statistics for Categorical Variables. The Sampling Distribution of a Proportion. The Standard Error of the Proportion. The Influence of the Sampling Fraction on sp. The Effect of Sample Size on sp. Confidence Intervals for p for Normal Sampling Distributions. The Sampling Distribution of p: An Example. The Influence of P on the Sampling Distribution of p. Confidence Intervals for P.
Confidence Intervals for P Using Graphs. The Chi-Square Goodness-of-Fit Test. An Example of a Chi-Square Goodness-of-Fit Test. Chi-Square Goodness-of-Fit Test of Normality. The Chi-Square Test of Association. Independence of Observations. 13. Inferences Regarding Correlations. Introduction. The Bivariate Normal Distribution. Sampling Distributions of Pearson r. Testing the Hypothesis that Rho = 0. Testing the Significance of r Using the t-Test. Directional Alternatives: Two-Tailed vs. One-Tailed Tests. Confidence Intervals for Rho Using Fisher's Z Transformation. The Sampling Distribution of Fisher's Zr. Determining Confidence Intervals Graphically. Testing Independent Correlation Coefficients. 14. The One-Factor Analysis of Variance. Introduction. Why ANOVA Rather Than Multiple t-Tests? The ANOVA F-Ratio. The F-Distribution. Hypothesis Testing Using the ANOVA F-Ratio. One-Factor ANOVA: An Illustration. The ANOVA Table. Another ANOVA Illustration. The F-Ratio vs. the t-Ratio. Total Sum of Squares. The Mean Square between Groups. MSB, Revisited. Mean Square Within, MSW, Revisited. Overview of the ANOVA Rationale. Consequences of Violating ANOVA Assumptions. 15. Multiple Comparisons: The Tukey and Newman-Keuls Methods. Introduction. The Tukey Method. The Studentized Range Statistic: q. An Example Using the Tukey Method. The Family of Null Hypotheses as the Basis for Alpha. The Newman-Keuls Method. An Example Using the Newman-Keuls Method of Multiple Comparisons. Newman-Keuls vs. Tukey Multiple Comparisons. 16. Two-Factor ANOVA: An Introduction to Factorial Design. Introduction. The Meaning of Interaction. Interaction Examples. Interaction and Generalizability. The Rationale of the ANOVA F-Test Revisited. Notation in Two-Factor ANOVA. Computational Steps for Balanced Two-Factor ANOVA Designs. Two-Factor ANOVA Example. A Second Computational Example of Two-Factor ANOVA. Confidence Intervals for Row and Column Means. Appendixes. Appendix A - Math Notes.
Appendix B - Tables of Reference. Table A, Areas of the Unit Normal Curve. Table B, Random Digits. Table C, Percentile Points of the t-Distribution. Table D, Critical Values of Chi-Square. Table E, Critical Values of r. Table F, Critical Values of F. Table G, Fisher's Z-Transformation of r. Table H, Critical Values of q. Table I, High School and Beyond (HSB) Case Study Data. Appendix C - Glossary of Statistical Symbols Used in BSBS-III. Appendix D - Glossary of Statistical Formulas Used in BSBS-III. Appendix E - Glossary of Statistical Terminology.

187 citations


Journal ArticleDOI
TL;DR: For a broad class of jackknife statistics, this article shows that the Tukey estimator of the variance converges almost surely to its population counterpart, and that the usual invariance principles (relating to Wiener process approximations) filter through jackknifing under no extra regularity conditions.
Abstract: For a broad class of jackknife statistics, it is shown that the Tukey estimator of the variance converges almost surely to its population counterpart. Moreover, the usual invariance principles (relating to the Wiener process approximations) usually filter through jackknifing under no extra regularity conditions. These results are then incorporated in providing a bounded-length (sequential) confidence interval and a preassigned-strength sequential test for a suitable parameter based on jackknife estimators.
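The Tukey variance estimator referred to in the abstract is built from leave-one-out pseudo-values. The following is a minimal sketch of that estimator for a generic statistic (it illustrates only the estimator itself, not the paper's sequential confidence-interval procedure); the function name and the simulated data are for illustration. For the sample mean, the pseudo-values reduce exactly to the data points, so the jackknife variance equals s²/n.

```python
import numpy as np

def jackknife_variance(x, stat=np.mean):
    """Tukey's jackknife estimate of the variance of a statistic.

    Leave-one-out replicates theta_(i) are combined into pseudo-values
    p_i = n*theta - (n-1)*theta_(i); the sample variance of the
    pseudo-values divided by n estimates Var(stat(x)).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_full = stat(x)
    # Leave-one-out replicates of the statistic
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    # Tukey pseudo-values
    pseudo = n * theta_full - (n - 1) * loo
    return pseudo.var(ddof=1) / n

rng = np.random.default_rng(0)
x = rng.normal(size=50)
# For the mean, this agrees exactly with the textbook formula s^2 / n
print(jackknife_variance(x), x.var(ddof=1) / len(x))
```

Replacing `stat` with, say, `np.median` or a trimmed mean gives the variance estimate whose almost-sure convergence the paper establishes for a broad class of statistics.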

72 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm was presented for carrying out Fisher's two sample permutation test for integer-valued data with ties, and it was shown that the same method can be applied to carry out the permutation Wilcoxon test in the presence of ties using average ranks.
Abstract: An algorithm, especially suitable for a computer, is presented for carrying out Fisher's two sample permutation test for integer-valued data with ties. It is shown that the same method can be applied to carry out the permutation Wilcoxon test in the presence of ties using average ranks. Some numerical examples are given.
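The paper's algorithm is an efficient way to compute this test for tied integer data; as a baseline illustration of what it computes, here is a brute-force exact version of Fisher's two-sample permutation test, using the sum of the first sample as the statistic and enumerating all assignments of the pooled values. The example data are hypothetical; ties are handled automatically, since tied values simply produce repeated sums in the permutation distribution.

```python
from itertools import combinations
from math import comb

def fisher_permutation_test(x, y):
    """Exact two-sided two-sample permutation test on sum(x).

    Enumerates all C(m+n, m) ways to assign the pooled values to the
    first group and returns the proportion of assignments whose sum is
    at least as far from its null expectation as the observed sum.
    Feasible only for small samples; the paper's algorithm is the
    efficient alternative for tied integer data.
    """
    pooled = list(x) + list(y)
    m, N = len(x), len(x) + len(y)
    center = m * sum(pooled) / N          # null expectation of sum(x)
    observed_dev = abs(sum(x) - center)
    extreme = sum(
        1
        for idx in combinations(range(N), m)
        if abs(sum(pooled[i] for i in idx) - center) >= observed_dev - 1e-12
    )
    return extreme / comb(N, m)

# Hypothetical integer-valued data with ties
x = [3, 5, 5, 7]
y = [1, 2, 2, 3]
print(fisher_permutation_test(x, y))  # -> 4/70, about 0.057
```

The same enumeration applied to average ranks instead of raw values yields the permutation Wilcoxon test with ties that the abstract mentions.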

18 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric test of the hypothesis of no treatment effect is suggested for a situation where measures of the severity of the condition treated can be obtained and ranked both pre- and post-treatment.
Abstract: A nonparametric test of the hypothesis of no treatment effect is suggested for a situation where measures of the severity of the condition treated can be obtained and ranked both pre- and post-treatment. The test allows use to be made of the pre-treatment rank as a "concomitant variable," and is based on the nature and degree of permutation of the post-treatment ranks relative to the pre-treatment ranks. Evidence is given which shows that the distribution of the suggested test statistic rapidly approaches the F distribution as the number of subjects is increased. For small samples, a randomization test may be performed.
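The randomization-test idea for small samples can be sketched as follows. This is a generic Monte Carlo randomization test of association between pre- and post-treatment ranks, not the authors' exact statistic: the sum of squared rank differences used here, the function name, and the example ranks are all illustrative assumptions.

```python
import numpy as np

def randomization_test(pre_ranks, post_ranks, n_perm=10_000, seed=0):
    """Monte Carlo randomization test on pre- vs. post-treatment ranks.

    Statistic (illustrative, not the paper's): D^2 = sum_i (pre_i - post_i)^2,
    which is small when the post-treatment ranks track the pre-treatment
    ranks. The reference distribution is built by randomly permuting the
    post-treatment ranks.
    """
    pre = np.asarray(pre_ranks)
    post = np.asarray(post_ranks)
    observed = np.sum((pre - post) ** 2)
    rng = np.random.default_rng(seed)
    perm_stats = np.array(
        [np.sum((pre - rng.permutation(post)) ** 2) for _ in range(n_perm)]
    )
    # One-sided p-value: how often does a random permutation of the post
    # ranks track the pre ranks at least as closely as the observed ones?
    return float(np.mean(perm_stats <= observed))

pre = np.arange(1, 9)                      # pre-treatment ranks 1..8
post = np.array([1, 2, 4, 3, 5, 6, 8, 7])  # nearly the same ordering
print(randomization_test(pre, post))       # small p: ordering preserved
```

For larger samples one would instead use the F approximation the abstract describes, which the authors show the statistic's distribution approaches rapidly as the number of subjects grows.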

13 citations