Author

Olive Jean Dunn

Bio: Olive Jean Dunn is an academic researcher from the University of California, Los Angeles. The author has contributed to research in topics: Population & Multivariate normal distribution. The author has an h-index of 29 and has co-authored 49 publications receiving 9,540 citations.

Papers
Journal ArticleDOI
TL;DR: In this paper, the author considers picking in advance a number (say m) of linear contrasts among k means and estimating these m contrasts by confidence intervals based on a Student t statistic, so that the overall confidence level for the m intervals is greater than or equal to a preassigned value.
Abstract: Methods for constructing simultaneous confidence intervals for all possible linear contrasts among several means of normally distributed variables have been given by Scheffe and Tukey. In this paper the possibility is considered of picking in advance a number (say m) of linear contrasts among k means, and then estimating these m linear contrasts by confidence intervals based on a Student t statistic, in such a way that the overall confidence level for the m intervals is greater than or equal to a preassigned value. It is found that for some values of k, and for m not too large, intervals obtained in this way are shorter than those using the F distribution or the Studentized range. When this is so, the experimenter may be willing to select the linear combinations in advance which he wishes to estimate in order to have m shorter intervals instead of an infinite number of longer intervals.

3,641 citations
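The procedure described in this paper splits the error rate over the m pre-chosen contrasts, giving each interval level 1 - alpha/m so that joint coverage is at least 1 - alpha. A minimal sketch, not Dunn's original presentation: it substitutes a normal critical value for the Student t quantile (reasonable only when the error degrees of freedom are large), and the means, sample sizes, and pooled variance in the example are invented.

```python
from statistics import NormalDist
import math

def bonferroni_contrast_intervals(means, ns, s2, contrasts, alpha=0.05):
    """Simultaneous intervals for m pre-chosen contrasts among k group means.
    Each interval is built at level 1 - alpha/m, so the joint confidence
    level for all m intervals is >= 1 - alpha (Bonferroni inequality).
    Uses a normal critical value; the paper uses Student t."""
    m = len(contrasts)
    z = NormalDist().inv_cdf(1 - alpha / (2 * m))  # two-sided, alpha split over m
    intervals = []
    for c in contrasts:
        est = sum(ci * mu for ci, mu in zip(c, means))          # contrast estimate
        se = math.sqrt(s2 * sum(ci * ci / n for ci, n in zip(c, ns)))
        intervals.append((est - z * se, est + z * se))
    return intervals

# Hypothetical example: two contrasts among k = 3 means,
# mu1 - mu2 and mu1 - (mu2 + mu3)/2, with pooled variance s2 = 4.
ivs = bonferroni_contrast_intervals(
    means=[10.0, 8.0, 7.0], ns=[20, 20, 20], s2=4.0,
    contrasts=[(1, -1, 0), (1, -0.5, -0.5)])
```

Because only m intervals are widened (rather than guarding all possible contrasts, as Scheffe's method does), each interval can be shorter when m is small — which is exactly the trade-off the abstract describes.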

Journal ArticleDOI
TL;DR: In this paper, the use of rank sums from a combined ranking of k independent samples to decide which populations differ is suggested as a convenient alternative to making separate rankings for each pair of samples, and the two methods are compared.
Abstract: This paper considers the use of rank sums from a combined ranking of k independent samples in order to decide which populations differ. Such a procedure is suggested as a convenient alternative to making separate rankings for each pair of samples, and the two methods are compared. Asymptotic use of the normal tables is given and the treatment of ties is discussed. A numerical example is given.

3,305 citations
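The combined-ranking comparison described above (often called Dunn's test) can be sketched as follows. This is an illustrative version only: ties are given midranks but the tie correction to the variance is omitted, the asymptotic normal approximation from the paper is assumed, and the critical value is Bonferroni-adjusted over all pairs.

```python
from statistics import NormalDist
import math

def dunn_test(samples, alpha=0.05):
    """Rank all observations from k samples together (midranks for ties),
    then compare mean ranks for every pair of samples against a
    Bonferroni-adjusted normal critical value."""
    data = sorted((x, g) for g, s in enumerate(samples) for x in s)
    n_total = len(data)
    ranks = [0.0] * n_total
    i = 0
    while i < n_total:                       # assign midranks to tied runs
        j = i
        while j + 1 < n_total and data[j + 1][0] == data[i][0]:
            j += 1
        mid = (i + j) / 2 + 1                # average of ranks i+1 .. j+1
        for t in range(i, j + 1):
            ranks[t] = mid
        i = j + 1
    k = len(samples)
    sizes = [len(s) for s in samples]
    mean_rank = [0.0] * k
    for r, (_, g) in zip(ranks, data):
        mean_rank[g] += r
    mean_rank = [mr / n for mr, n in zip(mean_rank, sizes)]
    m = k * (k - 1) // 2                     # number of pairwise comparisons
    crit = NormalDist().inv_cdf(1 - alpha / (2 * m))
    results = {}
    for a in range(k):
        for b in range(a + 1, k):
            se = math.sqrt(n_total * (n_total + 1) / 12
                           * (1 / sizes[a] + 1 / sizes[b]))
            z = (mean_rank[a] - mean_rank[b]) / se
            results[(a, b)] = (z, abs(z) > crit)
    return results
```

A single combined ranking means the rank sums for all pairs come from one pass over the data, which is the convenience the abstract contrasts with separate pairwise rankings.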

Book
01 Jan 1974
TL;DR: A textbook on analysis of variance and regression that develops one-way and multi-factor ANOVA designs, multiple comparison procedures, repeated measures, linear and multiple regression, correlation, and analysis of covariance, with recurring attention to checking whether the data fit each model and what to do when they do not.
Abstract: Preface.
1. Data Screening. 1.1 Variables and Their Classification. 1.2 Describing the Data. 1.2.1 Errors in the Data. 1.2.2 Descriptive Statistics. 1.2.3 Graphical Summarization. 1.3 Departures from Assumptions. 1.3.1 The Normal Distribution. 1.3.2 The Normality Assumption. 1.3.3 Transformations. 1.3.4 Independence. 1.4 Summary. Problems. References.
2. One-Way Analysis of Variance Design. 2.1 One-Way Analysis of Variance with Fixed Effects. 2.1.1 Example. 2.1.2 The One-Way Analysis of Variance Model with Fixed Effects. 2.1.3 Null Hypothesis: Test for Equality of Population Means. 2.1.4 Estimation of Model Terms. 2.1.5 Breakdown of the Basic Sum of Squares. 2.1.6 Analysis of Variance Table. 2.1.7 The F Test. 2.1.8 Analysis of Variance with Unequal Sample Sizes. 2.2 One-Way Analysis of Variance with Random Effects. 2.2.1 Data Example. 2.2.2 The One-Way Analysis of Variance Model with Random Effects. 2.2.3 Null Hypothesis: Test for Zero Variance of Population Means. 2.2.4 Estimation of Model Terms. 2.2.5 The F Test. 2.3 Designing an Observational Study or Experiment. 2.3.1 Randomization for Experimental Studies. 2.3.2 Sample Size and Power. 2.4 Checking if the Data Fit the One-Way ANOVA Model. 2.4.1 Normality. 2.4.2 Equality of Population Variances. 2.4.3 Independence. 2.4.4 Robustness. 2.4.5 Missing Data. 2.5 What to Do if the Data Do Not Fit the Model. 2.5.1 Making Transformations. 2.5.2 Using Nonparametric Methods. 2.5.3 Using Alternative ANOVAs. 2.6 Presentation and Interpretation of Results. 2.7 Summary. Problems. References.
3. Estimation and Simultaneous Inference. 3.1 Estimation for Single Population Means. 3.1.1 Parameter Estimation. 3.1.2 Confidence Intervals. 3.2 Estimation for Linear Combinations of Population Means. 3.2.1 Differences of Two Population Means. 3.2.2 General Contrasts for Two or More Means. 3.2.3 General Contrasts for Trends. 3.3 Simultaneous Statistical Inference. 3.3.1 Straightforward Approach to Inference. 3.3.2 Motivation for Multiple Comparison Procedures and Terminology. 3.3.3 The Bonferroni Multiple Comparison Method. 3.3.4 The Tukey Multiple Comparison Method. 3.3.5 The Scheffe Multiple Comparison Method. 3.4 Inference for Variance Components. 3.5 Presentation and Interpretation of Results. 3.6 Summary. Problems. References.
4. Hierarchical or Nested Design. 4.1 Example. 4.2 The Model. 4.3 Analysis of Variance Table and F Tests. 4.3.1 Analysis of Variance Table. 4.3.2 F Tests. 4.3.3 Pooling. 4.4 Estimation of Parameters. 4.4.1 Comparison with the One-Way ANOVA Model of Chapter 2. 4.5 Inferences with Unequal Sample Sizes. 4.5.1 Hypothesis Testing. 4.5.2 Estimation. 4.6 Checking If the Data Fit the Model. 4.7 What to Do If the Data Don't Fit the Model. 4.8 Designing a Study. 4.8.1 Relative Efficiency. 4.9 Summary. Problems. References.
5. Two Crossed Factors: Fixed Effects and Equal Sample Sizes. 5.1 Example. 5.2 The Model. 5.3 Interpretation of Models and Interaction. 5.4 Analysis of Variance and F Tests. 5.5 Estimates of Parameters and Confidence Intervals. 5.6 Designing a Study. 5.7 Presentation and Interpretation of Results. 5.8 Summary. Problems. References.
6. Randomized Complete Block Design. 6.1 Example. 6.2 The Randomized Complete Block Design. 6.3 The Model. 6.4 Analysis of Variance Table and F Tests. 6.5 Estimation of Parameters and Confidence Intervals. 6.6 Checking If the Data Fit the Model. 6.7 What to Do if the Data Don't Fit the Model. 6.7.1 Friedman's Rank Sum Test. 6.7.2 Missing Data. 6.8 Designing a Randomized Complete Block Study. 6.8.1 Experimental Studies. 6.8.2 Observational Studies. 6.9 Model Extensions. 6.10 Summary. Problems. References.
7. Two Crossed Factors: Fixed Effects and Unequal Sample Sizes. 7.1 Example. 7.2 The Model. 7.3 Analysis of Variance and F Tests. 7.4 Estimation of Parameters and Confidence Intervals. 7.4.1 Means and Adjusted Means. 7.4.2 Standard Errors and Confidence Intervals. 7.5 Checking If the Data Fit the Two-Way Model. 7.6 What To Do If the Data Don't Fit the Model. 7.7 Summary. Problems. References.
8. Crossed Factors: Mixed Models. 8.1 Example. 8.2 The Mixed Model. 8.3 Estimation of Fixed Effects. 8.4 Analysis of Variance. 8.5 Estimation of Variance Components. 8.6 Hypothesis Testing. 8.7 Confidence Intervals for Means and Variance Components. 8.7.1 Confidence Intervals for Population Means. 8.7.2 Confidence Intervals for Variance Components. 8.8 Comments on Available Software. 8.9 Extensions of the Mixed Model. 8.9.1 Unequal Sample Sizes. 8.9.2 Fixed, Random, or Mixed Effects. 8.9.3 Crossed versus Nested Factors. 8.9.4 Dependence of Random Effects. 8.10 Summary. Problems. References.
9. Repeated Measures Designs. 9.1 Repeated Measures for a Single Population. 9.1.1 Example. 9.1.2 The Model. 9.1.3 Hypothesis Testing: No Time Effect. 9.1.4 Simultaneous Inference. 9.1.5 Orthogonal Contrasts. 9.1.6 F Tests for Trends over Time. 9.2 Repeated Measures with Several Populations. 9.2.1 Example. 9.2.2 Model. 9.2.3 Analysis of Variance Table and F Tests. 9.3 Checking if the Data Fit the Repeated Measures Model. 9.4 What to Do if the Data Don't Fit the Model. 9.5 General Comments on Repeated Measures Analyses. 9.6 Summary. Problems. References.
10. Linear Regression: Fixed X Model. 10.1 Example. 10.2 Fitting a Straight Line. 10.3 The Fixed X Model. 10.4 Estimation of Model Parameters and Standard Errors. 10.4.1 Point Estimates. 10.4.2 Estimates of Standard Errors. 10.5 Inferences for Model Parameters: Confidence Intervals. 10.6 Inference for Model Parameters: Hypothesis Testing. 10.6.1 t Tests for Intercept and Slope. 10.6.2 Division of the Basic Sum of Squares. 10.6.3 Analysis of Variance Table and F Test. 10.7 Checking if the Data Fit the Regression Model. 10.7.1 Outliers. 10.7.2 Checking for Linearity. 10.7.3 Checking for Equality of Variances. 10.7.4 Checking for Normality. 10.7.5 Summary of Screening Procedures. 10.8 What to Do if the Data Don't Fit the Model. 10.9 Practical Issues in Designing a Regression Study. 10.9.1 Is Fixed X Regression an Appropriate Technique? 10.9.2 What Values of X Should Be Selected? 10.9.3 Sample Size Calculations. 10.10 Comparison with One-Way ANOVA. 10.11 Summary. Problems. References.
11. Linear Regression: Random X Model and Correlation. 11.1 Example. 11.1.1 Sampling and Summary Statistics. 11.2 Summarizing the Relationship Between X and Y. 11.3 Inferences for the Regression of Y on X. 11.3.1 Comparison of Fixed X and Random X Sampling. 11.4 The Bivariate Normal Model. 11.4.1 The Bivariate Normal Distribution. 11.4.2 The Correlation Coefficient. 11.4.3 The Correlation Coefficient: Confidence Intervals and Tests. 11.5 Checking if the Data Fit the Random X Regression Model. 11.5.1 Checking for High-Leverage, Outlying, and Influential Observations. 11.6 What to Do if the Data Don't Fit the Random X Model. 11.6.1 Nonparametric Alternatives to Simple Linear Regression. 11.6.2 Nonparametric Alternatives to the Pearson Correlation. 11.7 Summary. Problems. References.
12. Multiple Regression. 12.1 Example. 12.2 The Sample Regression Plane. 12.3 The Multiple Regression Model. 12.4 Parameters, Standard Errors, and Confidence Intervals. 12.4.1 Prediction of E(Y|X1,...,Xk). 12.4.2 Standardized Regression Coefficients. 12.5 Hypothesis Testing. 12.5.1 Test That All Partial Regression Coefficients Are 0. 12.5.2 Tests That One Partial Regression Coefficient Is 0. 12.6 Checking If the Data Fit the Multiple Regression Model. 12.6.1 Checking for Outlying, High-Leverage, and Influential Points. 12.6.2 Checking for Linearity. 12.6.3 Checking for Equality of Variances. 12.6.4 Checking for Normality of Errors. 12.6.5 Other Potential Problems. 12.7 What to Do If the Data Don't Fit the Model. 12.8 Summary. Problems. References.
13. Multiple and Partial Correlation. 13.1 Example. 13.2 The Sample Multiple Correlation Coefficient. 13.3 The Sample Partial Correlation Coefficient. 13.4 The Joint Distribution Model. 13.4.1 The Population Multiple Correlation Coefficient. 13.4.2 The Population Partial Correlation Coefficient. 13.5 Inferences for the Multiple Correlation Coefficient. 13.6 Inferences for Partial Correlation Coefficients. 13.6.1 Confidence Intervals for Partial Correlation Coefficients. 13.6.2 Hypothesis Tests for Partial Correlation Coefficients. 13.7 Checking If the Data Fit the Joint Normal Model. 13.8 What to Do If the Data Don't Fit the Model. 13.9 Summary. Problems. References.
14. Miscellaneous Topics in Regression. 14.1 Models with Dummy Variables. 14.2 Models with Interaction Terms. 14.3 Models with Polynomial Terms. 14.3.1 Polynomial Model. 14.4 Variable Selection. 14.4.1 Criteria for Evaluating and Comparing Models. 14.4.2 Methods for Variable Selection. 14.4.3 General Comments on Variable Selection. 14.5 Summary. Problems. References.
15. Analysis of Covariance. 15.1 Example. 15.2 The ANCOVA Model. 15.3 Estimation of Model Parameters. 15.4 Hypothesis Tests. 15.5 Adjusted Means. 15.5.1 Estimation of Adjusted Means and Standard Errors. 15.5.2 Confidence Intervals for Adjusted Means. 15.6 Checking If the Data Fit the ANCOVA Model. 15.7 What to Do if the Data Don't Fit the Model. 15.8 ANCOVA in Observational Studies. 15.9 What Makes a Good Covariate. 15.10 Measurement Error. 15.11 ANCOVA versus Other Methods of Adjustment. 15.12 Comments on Statistical Software. 15.13 Summary. Problems. References.
16. Summaries, Extensions, and Communication. 16.1 Summaries and Extensions of Models. 16.2 Communication of Statistics in the Context of a Research Project. References.
Appendix A. A.1 Expected Values and Parameters. A.2 Linear Combinations of Variables and Their Parameters. A.3 Balanced One-Way ANOVA, Expected Mean Squares. A.3.1 To Show E(MS_A) = sigma^2 + n * sum_{i=1..a} alpha_i^2 / (a - 1). A.3.2 To Show E(MS_R) = sigma^2. A.4 Balanced One-Way ANOVA, Random Effects. A.5 Balanced Nested Model. A.6 Mixed Model. A.6.1 Variances and Covariances of Y_ijk. A.6.2 Variance of Ybar_i. A.6.3 Variance of Ybar_i - Ybar_i'. A.7 Simple Linear Regression: Derivation of Least Squares Estimators. A.8 Derivation of Variance Estimates from Simple Linear Regression.
Appendix B. Index.

607 citations
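The book's Chapter 2 material — the breakdown of the basic sum of squares and the F test for equality of k population means — can be illustrated with a short function. This is a sketch of the standard computation, not code from the book:

```python
def one_way_anova_F(groups):
    """One-way fixed-effects ANOVA F ratio.
    Splits total variation into between-group and within-group sums of
    squares, forms mean squares, and returns F = MS_between / MS_within."""
    all_x = [x for g in groups for x in g]
    n, k = len(all_x), len(groups)
    grand = sum(all_x) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)          # df = k - 1
    ms_within = ss_within / (n - k)            # df = n - k
    return ms_between / ms_within
```

Under the null hypothesis of equal means, this ratio follows an F distribution with (k - 1, n - k) degrees of freedom; the model checks the book emphasizes (normality, equal variances, independence) are what justify that reference distribution.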

Journal ArticleDOI

360 citations

Journal ArticleDOI
TL;DR: In this article, the authors consider the situation when the sample is from a multivariate normal distribution, and several possible large sample testing procedures are given, all based on Fisher's z-transformation.
Abstract: When two correlation coefficients are calculated from a single sample, rather than from two samples, they are not statistically independent, and the usual methods for testing equality of the population correlation coefficients no longer apply. This paper considers the situation when the sample is from a multivariate normal distribution. Several possible large sample testing procedures are given, all based on Fisher's z-transformation. Power curves are given for each procedure and for seven values of the asymptotic correlation between the two sample correlation coefficients.

306 citations
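The Fisher z approach the abstract describes can be sketched as follows. The expression for c — the approximate correlation between the two sample correlation coefficients r12 and r13 computed from one sample — is the form usually attributed to this paper (Dunn and Clark, 1969), reproduced here from secondary sources rather than verified against the paper itself, so treat it as an assumption:

```python
from statistics import NormalDist
import math

def fisher_z(r):
    """Fisher's z-transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def dependent_corr_test(r12, r13, r23, n):
    """Large-sample z test of H0: rho12 = rho13 when both correlations
    are computed from the same sample of size n (multivariate normal).
    c approximates the correlation between the two sample coefficients."""
    num = (r23 * (1 - r12**2 - r13**2)
           - 0.5 * r12 * r13 * (1 - r12**2 - r13**2 - r23**2))
    c = num / ((1 - r12**2) * (1 - r13**2))
    z = (fisher_z(r12) - fisher_z(r13)) * math.sqrt((n - 3) / (2 * (1 - c)))
    p = 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value
    return z, p
```

The key point from the abstract is visible in the formula: because the two coefficients share a sample, their dependence (through r23) enters the variance of the difference, and ignoring it — as the usual two-independent-samples test does — gives the wrong standard error.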


Cited by
Journal ArticleDOI
TL;DR: In the new version, procedures to analyze the power of tests based on single-sample tetrachoric correlations, comparisons of dependent correlations, bivariate linear regression, multiple linear regression based on the random predictor model, logistic regression, and Poisson regression are added.
Abstract: G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.

20,778 citations
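The flavor of the power computations G*Power automates can be shown for the simplest case in its correlation domain: the two-sided test of H0: rho = 0. This rough sketch uses the Fisher z approximation (z(r) approximately Normal with mean z(rho) and variance 1/(n - 3)); G*Power itself offers exact procedures, so the numbers here are only approximate:

```python
from statistics import NormalDist
import math

def corr_test_power(rho, n, alpha=0.05):
    """Approximate power of the two-sided test of H0: rho = 0
    at sample size n, via the Fisher z approximation."""
    nd = NormalDist()
    zcrit = nd.inv_cdf(1 - alpha / 2)
    # noncentrality: Fisher z of the true correlation, scaled by sqrt(n - 3)
    delta = 0.5 * math.log((1 + rho) / (1 - rho)) * math.sqrt(n - 3)
    return nd.cdf(delta - zcrit) + nd.cdf(-delta - zcrit)
```

When rho = 0 the "power" reduces to the significance level alpha, a useful sanity check on any power routine.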

Book
31 Jan 1986
TL;DR: Numerical Recipes: The Art of Scientific Computing is a complete text and reference book on scientific computing, with over 100 new routines (now well over 300 in all) plus upgraded versions of many of the original routines, and many new topics presented at the same accessible level.
Abstract: From the Publisher: This is the revised and greatly expanded Second Edition of the hugely popular Numerical Recipes: The Art of Scientific Computing. The product of a unique collaboration among four leading scientists in academic research and industry, Numerical Recipes is a complete text and reference book on scientific computing. In a self-contained manner it proceeds from mathematical and theoretical considerations to actual practical computer routines. With over 100 new routines (now well over 300 in all), plus upgraded versions of many of the original routines, this book is more than ever the most practical, comprehensive handbook of scientific computing available today. The book retains the informal, easy-to-read style that made the first edition so popular, with many new topics presented at the same accessible level. In addition, some sections of more advanced material have been introduced, set off in small type from the main body of the text. Numerical Recipes is an ideal textbook for scientists and engineers and an indispensable reference for anyone who works in scientific computing. Highlights of the new material include a new chapter on integral equations and inverse methods; multigrid methods for solving partial differential equations; improved random number routines; wavelet transforms; the statistical bootstrap method; a new chapter on "less-numerical" algorithms including compression coding and arbitrary precision arithmetic; band diagonal linear systems; linear algebra on sparse matrices; Cholesky and QR decomposition; calculation of numerical derivatives; Pade approximants, and rational Chebyshev approximation; new special functions; Monte Carlo integration in high-dimensional spaces; globally convergent methods for sets of nonlinear equations; an expanded chapter on fast Fourier methods; spectral analysis on unevenly sampled data; Savitzky-Golay smoothing filters; and two-dimensional Kolmogorov-Smirnoff tests. 
All this is in addition to material on such basic topics.

12,662 citations

Journal Article
TL;DR: A set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers is recommended: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparisons of more classifiers over multiple data sets.
Abstract: While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.

10,306 citations
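The Friedman test this paper recommends for comparing multiple classifiers reduces to ranking the algorithms separately on each data set and then comparing average ranks. A self-contained sketch (midranks for ties, no tie correction; the scores are invented):

```python
def friedman_statistic(scores):
    """Friedman chi-square for k algorithms on N data sets.
    scores[i][j] = performance of algorithm j on data set i (higher is better).
    Returns the chi-square statistic (df = k - 1 under H0) and average ranks."""
    n_sets, k = len(scores), len(scores[0])
    avg_rank = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: -row[j])  # rank 1 = best score
        ranks = [0.0] * k
        i = 0
        while i < k:                         # midranks for tied scores
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            mid = (i + j) / 2 + 1
            for t in range(i, j + 1):
                ranks[order[t]] = mid
            i = j + 1
        for jdx in range(k):
            avg_rank[jdx] += ranks[jdx] / n_sets
    chi2 = (12 * n_sets / (k * (k + 1))) * sum(
        (r - (k + 1) / 2) ** 2 for r in avg_rank)
    return chi2, avg_rank
```

If the Friedman test rejects, the paper's workflow continues with post-hoc comparisons of the average ranks, which can be summarized in the CD diagrams it introduces.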

Journal ArticleDOI
TL;DR: This article reviewed the literature on such tests, pointed out some statistics that should be avoided, and presented a variety of techniques that can be used safely with medium to large samples, and several illustrative numerical examples are provided.
Abstract: In a variety of situations in psychological research, it is desirable to be able to make statistical comparisons between correlation coefficients measured on the same individuals. For example, an experimenter may wish to assess whether two predictors correlate equally with a criterion variable. In another situation, the experimenter may wish to test the hypothesis that an entire matrix of correlations has remained stable over time. The present article reviews the literature on such tests, points out some statistics that should be avoided, and presents a variety of techniques that can be used safely with medium to large samples. Several illustrative numerical examples are provided.

4,245 citations
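One of the safe statistics this literature recommends for comparing two correlations that share a variable (r12 vs. r13 from the same sample) is Williams' t with n - 3 degrees of freedom. The formula below is the form commonly reproduced from Steiger's review, written from memory rather than checked against the article, so treat it as an assumption:

```python
import math

def williams_t(r12, r13, r23, n):
    """Williams' t statistic for H0: rho12 = rho13, where both correlations
    involve a common variable and come from one sample of size n.
    Compare against a t distribution with n - 3 degrees of freedom."""
    # determinant of the 3x3 correlation matrix of the three variables
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    rbar = (r12 + r13) / 2
    denom = 2 * det * (n - 1) / (n - 3) + rbar**2 * (1 - r23) ** 3
    return (r12 - r13) * math.sqrt((n - 1) * (1 + r23) / denom)
```

For example, two predictors correlating 0.6 and 0.3 with a criterion (and 0.2 with each other) in a sample of 50 give a t of about 2, near the conventional two-sided 0.05 cutoff at 47 degrees of freedom, illustrating how modest such samples are for detecting correlation differences.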

Journal ArticleDOI
TL;DR: The basics are discussed and a survey is given of a complete set of nonparametric procedures developed to perform both pairwise and multiple comparisons for multi-problem analysis.
Abstract: The interest in nonparametric statistical analysis has grown recently in the field of computational intelligence. In many experimental studies, the lack of the properties required for a proper application of parametric procedures (independence, normality, and homoscedasticity) leaves to nonparametric procedures the task of performing a rigorous comparison among algorithms. In this paper, we will discuss the basics and give a survey of a complete set of nonparametric procedures developed to perform both pairwise and multiple comparisons, for multi-problem analysis. The test problems of the CEC'2005 special session on real parameter optimization will help to illustrate the use of the tests throughout this tutorial, analyzing the results of a set of well-known evolutionary and swarm intelligence algorithms. This tutorial is concluded with a compilation of considerations and recommendations, which will guide practitioners when using these tests to contrast their experimental results.

3,832 citations
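Among the multiple-comparison procedures such tutorials survey is the Nemenyi test, whose critical difference is a one-line computation: two algorithms differ significantly if their average ranks differ by at least CD. The constant q_alpha comes from Studentized-range tables; the example value of roughly 2.343 for k = 3 at alpha = 0.05 is quoted from memory of Demšar's tables and should be treated as an assumption:

```python
import math

def critical_difference(k, n_sets, q_alpha):
    """Nemenyi critical difference for comparing k algorithms on n_sets
    data sets: a significant difference requires the average ranks of two
    algorithms to differ by at least this amount."""
    return q_alpha * math.sqrt(k * (k + 1) / (6 * n_sets))
```

With 3 algorithms on 10 data sets and q_alpha near 2.343, CD is a bit over 1 full rank, which shows why rank-based comparisons need many data sets before they can separate similarly performing algorithms.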