
Showing papers on "Bonferroni correction published in 2000"


Journal ArticleDOI
TL;DR: A joinpoint regression model is applied to describe continuous changes in the recent trend and the grid-search method is used to fit the regression function with unknown joinpoints assuming constant variance and uncorrelated errors.
Abstract: The identification of changes in the recent trend is an important issue in the analysis of cancer mortality and incidence data. We apply a joinpoint regression model to describe such continuous changes and use the grid-search method to fit the regression function with unknown joinpoints assuming constant variance and uncorrelated errors. We find the number of significant joinpoints by performing several permutation tests, each of which has a correct significance level asymptotically. Each p-value is found using Monte Carlo methods, and the overall asymptotic significance level is maintained through a Bonferroni correction. These tests are extended to the situation with non-constant variance to handle rates with Poisson variation and possibly autocorrelated errors. The performance of these tests is studied via simulations, and the tests are applied to U.S. prostate cancer incidence and mortality rates.

3,950 citations
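
As a rough illustration of the testing scheme described above (not the authors' implementation), the sketch below runs a Monte Carlo permutation test of "no joinpoint" against "one joinpoint" fitted by grid search; in the full model-selection sequence, each of the K such tests would be performed at level alpha/K so that the Bonferroni correction maintains the overall significance level at alpha. The function names and the residual-permutation scheme are illustrative assumptions.

import numpy as np

def sse_linear(x, y):
    # Residual sum of squares of an ordinary least-squares straight line.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

def sse_one_joinpoint(x, y):
    # Grid search for the best single joinpoint in a continuous piecewise-linear fit.
    best = np.inf
    for tau in x[2:-2]:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - tau, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        best = min(best, np.sum((y - X @ beta) ** 2))
    return best

def permutation_pvalue(x, y, n_perm=999, seed=0):
    # Monte Carlo p-value for H0 "no joinpoint", permuting residuals of the H0 fit.
    rng = np.random.default_rng(seed)
    stat_obs = sse_linear(x, y) - sse_one_joinpoint(x, y)
    X0 = np.column_stack([np.ones_like(x), x])
    beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    fitted, resid = X0 @ beta0, y - X0 @ beta0
    exceed = 0
    for _ in range(n_perm):
        y_perm = fitted + rng.permutation(resid)
        if sse_linear(x, y_perm) - sse_one_joinpoint(x, y_perm) >= stat_obs:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# With at most K candidate joinpoints, each permutation test is run at level
# alpha / K; this Bonferroni split is what maintains the overall level.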


Journal ArticleDOI
TL;DR: In this article, the authors evaluate the effect that each of these factors has on the predictive accuracy of fitted models, using fauna and flora survey data from north-east New South Wales, and suggest that predictive accuracy is maximised by employing variable selection procedures that stringently guard against the inclusion of extraneous variables in a model.

359 citations


Journal ArticleDOI
TL;DR: This work analyzes the statistical properties of MCPs and shows how failure to adjust for these properties leads to the pathologies of induction algorithms, including attribute selection errors, overfitting, and oversearching.
Abstract: A single mechanism is responsible for three pathologies of induction algorithms: attribute selection errors, overfitting, and oversearching. In each pathology, induction algorithms compare multiple items based on scores from an evaluation function and select the item with the maximum score. We call this a multiple comparison procedure (MCP). We analyze the statistical properties of MCPs and show how failure to adjust for these properties leads to the pathologies. We also discuss approaches that can control pathological behavior, including Bonferroni adjustment, randomization testing, and cross-validation.

242 citations
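
A small simulation (an illustration of the pathology discussed above, not code from the paper) shows how selecting the best-scoring of k irrelevant attributes produces spurious "significant" findings unless the threshold is Bonferroni-adjusted; the sample size and attribute count are arbitrary assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, n, alpha, n_sim = 20, 30, 0.05, 2000
naive_hits = adjusted_hits = 0
for _ in range(n_sim):
    # k candidate attributes, none of which is actually related to the target
    target = rng.normal(size=n)
    pvals = [stats.pearsonr(rng.normal(size=n), target)[1] for _ in range(k)]
    best_p = min(pvals)                     # score of the "winning" attribute
    naive_hits += best_p < alpha            # unadjusted threshold: pathological
    adjusted_hits += best_p < alpha / k     # Bonferroni-adjusted threshold
print("spurious selections, unadjusted:", naive_hits / n_sim)    # roughly 1 - (1 - alpha)^k
print("spurious selections, Bonferroni:", adjusted_hits / n_sim)  # roughly alpha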


Journal ArticleDOI
TL;DR: This proposed Monte Carlo procedure can be applied whenever it is suspected that markers examined have high amounts of association, or as a general approach to ensure appropriate significance levels and optimal power.
Abstract: Advances in marker technology have made a dense marker map a reality. If each marker is considered separately, and separate tests for association with a disease gene are performed, then multiple testing becomes an issue. A common solution uses a Bonferroni correction to account for multiple tests performed. However, with dense marker maps, neighboring markers are tightly linked and may have associated alleles; thus tests at nearby marker loci may not be independent. When alleles at different marker loci are associated, the Bonferroni correction may lead to a conservative test, and hence a power loss. As an alternative, for tests of association that use family data, we propose a Monte Carlo procedure that provides a global assessment of significance. We examine the case of tightly linked markers with varying amounts of association between them. Using computer simulations, we study a family-based test for association (the transmission/disequilibrium test), and compare its power when either the Bonferroni or Monte Carlo procedure is used to determine significance. Our results show that when the alleles at different marker loci are not associated, using either procedure results in tests with similar power. However, when alleles at linked markers are associated, the test using the Monte Carlo procedure is more powerful than the test using the Bonferroni procedure. This proposed Monte Carlo procedure can be applied whenever it is suspected that markers examined have high amounts of association, or as a general approach to ensure appropriate significance levels and optimal power.

103 citations
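
A minimal sketch of the kind of Monte Carlo global assessment described above; the marker-specific simulation of the null is left abstract, since it depends on the family data, and the function and argument names are assumptions rather than the authors' code.

import numpy as np

def monte_carlo_global_p(observed_pvals, simulate_null_pvals, n_rep=1000, seed=2):
    # simulate_null_pvals(rng) must return one vector of per-marker p-values
    # generated under the global null while preserving the correlation between
    # tightly linked markers (e.g. by permuting transmitted/untransmitted labels).
    rng = np.random.default_rng(seed)
    obs_min = np.min(observed_pvals)
    null_mins = np.array([np.min(simulate_null_pvals(rng)) for _ in range(n_rep)])
    # Global p-value: how often the simulated minimum p-value is at least as
    # extreme as the observed one; no Bonferroni correction is needed.
    return (np.sum(null_mins <= obs_min) + 1) / (n_rep + 1)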


Journal ArticleDOI
TL;DR: In this review article, the problem of making false-positive inferences as a result of making multiple comparisons between groups of experimental units or between experimental outcomes was addressed.
Abstract: 1. In a recent review article, the problem of making false-positive inferences as a result of making multiple comparisons between groups of experimental units or between experimental outcomes was addressed. 2. It was concluded that the most universally applicable solution was to use the Ryan-Holm step-down Bonferroni procedure to control the family-wise (experiment-wise) type 1 error rate. This procedure consists of adjusting the P values resulting from hypothesis testing. It allows for correlation among hypotheses and has been validated by Monte Carlo simulation. It is a simple procedure and can be performed by hand. 3. However, some investigators prefer to estimate effect sizes and make inferences by way of confidence intervals rather than, or in addition to, testing hypotheses by way of P values and it is the policy of some editors of biomedical journals to insist on this. It is not generally recognized that confidence intervals, like P values, must be adjusted if multiple inferences are made from confidence intervals in a single experiment. 4. In the present review, it is shown how confidence intervals can be adjusted for multiplicity by an extension of the Ryan-Holm step-down Bonferroni procedure. This can be done for differences between group means in the case of continuous variables and for odds ratios or relative risks in the case of categorical variables set out as 2 x 2 tables.

63 citations
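
A minimal sketch of the step-down (Holm) Bonferroni adjustment of P values that the procedure above builds on; the confidence-interval extension described in the review would additionally widen the i-th interval to the corresponding adjusted confidence level, which is not shown here.

import numpy as np

def holm_adjust(pvals):
    # Step-down Bonferroni (Holm) adjusted p-values: the i-th smallest p-value
    # is multiplied by (m - i + 1), then monotonicity is enforced.
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        candidate = min((m - rank) * p[idx], 1.0)
        running_max = max(running_max, candidate)
        adjusted[idx] = running_max
    return adjusted

print(holm_adjust([0.004, 0.030, 0.020, 0.500]))   # -> 0.016, 0.06, 0.06, 0.5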



Journal ArticleDOI
TL;DR: A new method is introduced for the analysis of multiple studies measured with emission tomography that allows the direct estimation of the error for each wavelet coefficient and therefore obtains estimates of the effects of interest under the specified statistical risk.
Abstract: A new method is introduced for the analysis of multiple studies measured with emission tomography. Traditional models of statistical analysis (ANOVA, ANCOVA and other linear models) are applied not directly to images but to their corresponding wavelet transforms. Maps of model effects estimated from these models are filtered using a thresholding procedure based on a simple Bonferroni correction and then reconstructed. This procedure inherently represents a complete modeling approach and therefore obtains estimates of the effects of interest (condition effect, difference between conditions, covariate of interest, and so on) under the specified statistical risk. By performing the statistical modeling step in wavelet space, the procedure allows the direct estimation of the error for each wavelet coefficient; hence, the local noise characteristics are accounted for in the subsequent filtering. The method was validated by use of a null dataset and then applied to typical examples of neuroimaging studies to highlight conceptual and practical differences from existing statistical parametric mapping approaches.

33 citations
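
The sketch below is a hedged, one-dimensional caricature of the approach (using the PyWavelets package, which is an assumption; the paper's implementation and its multi-dimensional details are not reproduced): model effects are transformed to wavelet space, coefficients whose standardized values fail a Bonferroni threshold over all detail coefficients are set to zero, and the map is reconstructed.

import numpy as np
import pywt                      # PyWavelets (assumed available)
from scipy.stats import norm

def bonferroni_wavelet_filter(effect_map, sigma, wavelet="db4", alpha=0.05):
    # effect_map: 1-D array of estimated effects; sigma: noise standard deviation
    # of each wavelet coefficient (taken as constant here for simplicity).
    coeffs = pywt.wavedec(effect_map, wavelet)
    detail = coeffs[1:]                                   # keep the coarse approximation as-is
    n_tests = sum(len(c) for c in detail)
    z_cut = norm.ppf(1 - alpha / (2 * n_tests))           # two-sided Bonferroni cut-off
    thresholded = [coeffs[0]] + [np.where(np.abs(c) / sigma > z_cut, c, 0.0) for c in detail]
    return pywt.waverec(thresholded, wavelet)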


Journal ArticleDOI
TL;DR: Two methods for testing primary and secondary endpoints are described and compared, accounting for their hierarchical nature (the ordering preference); results indicate that the Bonferroni adjustment method performs as well as the global test method in most cases, and even better in some cases.

29 citations


Journal Article
TL;DR: A power study of psychological research papers published in Spanish journals was carried out to determine whether the use of null hypothesis statistical testing in Spanish psychological research is adequate; the results were very similar to those obtained in other international power studies and point to the need for special attention to statistical power when designing research.
Abstract: Hypothesis testing and Spanish psychological research: Analyses and proposals. The purpose of this paper was to analyse the application of the most common statistical procedure for studying relationships among variables and empirical phenomena in psychology: the null hypothesis statistical test. In order to determine whether its use in Spanish psychological research is adequate, we carried out a power study of the papers published in Spanish journals. The analysis of the 169 experiments selected, with a total of 5,480 statistical tests, showed power values of 0.18, 0.58, 0.83, and 0.59 for low, medium, high, and estimated effect sizes, respectively. These values decreased by about 20% when the calculations were repeated controlling the Type I error inflation through a Bonferroni adjustment. The results were very similar to those obtained in other international power studies and point to the need for special attention to statistical power when designing a study. We also discuss several proposals complementary to significance tests that may improve the information obtained.

16 citations
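
The effect reported above, namely the loss of power once the Type I error is controlled through a Bonferroni adjustment, can be illustrated with a back-of-the-envelope calculation (using statsmodels; the sample size and assumed number of tests are arbitrary, not the study's data):

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    unadjusted = analysis.power(effect_size=d, nobs1=30, alpha=0.05)
    bonferroni = analysis.power(effect_size=d, nobs1=30, alpha=0.05 / 10)  # 10 tests per study
    print(f"{label} effect: power {unadjusted:.2f} -> {bonferroni:.2f} after Bonferroni")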


Journal ArticleDOI
TL;DR: In this article, pairwise comparisons of the means of skewed data with special emphasis on log-normal distributions are considered, and bootstrap procedures are proposed to approximate the sampling distribution of the pivotal statistic generating the simultaneous confidence sets of the pairwise differences.

16 citations


Journal ArticleDOI
TL;DR: The Bonferroni method, the method of Holm, and two 'False Discovery Rate'-controlling methods for adjusting P-values for multiple comparisons are compared.
Abstract: Examination of brain regional neurochemistry in disease states reveals differences among brain regions. Knowing where alterations in brain function are located is crucial to understanding the disease effect. The anatomical distribution of neurotransmitter receptors is now often studied using quantitative autoradiography, but the large number of brain regions involved raises serious problems for statistical analysis of such data. Due to the dependence among the subjects in case control designs, statistical analysis based on a 'mixed model' is useful. Such an analysis is illustrated using a small autoradiographic data set. The Bonferroni method, the method of Holm, and two 'False Discovery Rate'-controlling methods for adjusting P-values for multiple comparisons are compared.
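
The comparison of adjustment methods described above can be reproduced for a given vector of region-wise P values with the statsmodels routine shown below (a present-day convenience, not the software used in the paper); the example P values are made up.

from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]   # one P value per brain region
for method in ["bonferroni", "holm", "fdr_bh", "fdr_by"]:
    reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in p_adjusted], "rejected:", int(reject.sum()))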

Journal ArticleDOI
TL;DR: A graphical display useful for comparing a chosen individual (e.g., a single state in a state-by-state survey) with the others in the population is presented.
Abstract: This article presents a graphical display useful for comparing a chosen individual (e.g., a single state in a state-by-state survey) with the others in the population. The display immediately shows the ranking of the individual in the population, which other individuals are “significantly” higher than the reference, and which are “significantly” lower. The confidence bars are optimized for making comparisons with the reference individual. We illustrate this display with examples from the National Assessment for Educational Progress (NAEP) and the Third International Mathematics and Science Study (TIMSS).

Journal ArticleDOI
TL;DR: In this paper, the authors discuss existing techniques and present new methods, which are powerful and robust to non-normality, for performing multiple comparisons of variances, and investigate several tests for homogeneity of variance and use variations of them to demonstrate their multiple comparisons procedures.

Journal ArticleDOI
TL;DR: A refinement of Bonferroni's correction for multiple testing, provided by Worsley (1982) and based on maximal spanning trees, is applied to calculate accurate upper bounds for the type I error and p-values of the maximal TDT.
Abstract: Spielman et al. (1993) popularized the transmission/disequilibrium test (TDT) to test for linkage between disease and marker loci that show a population association. Several authors have proposed extensions to the TDT for multi-allelic markers. Many of these approaches exhibit a 'swamping' effect in which a marker with a strong effect is not detected by a global test that includes many markers with no effect. To avoid this effect, Schaid (1996) proposed using the maximum of the bi-allelic TDT statistics computed for each allele versus all others combined. The maximal TDT statistic, however, no longer follows a chi-square distribution. Here, a refinement to Bonferroni's correction for multiple testing provided by Worsley (1982) based on maximal spanning trees is applied to calculate accurate upper bounds for the type I error and p-values for the maximal TDT. In addition, an accurate lower Bonferroni bound is applied to calculate power. This approach does not require any simulation-based analysis and is less conservative than the standard Bonferroni correction. The bounds are given for both the exact probability calculations and for those based on the normal approximation. The results are assessed through simulations.
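
A hedged sketch of the improved (Hunter-Worsley) Bonferroni upper bound underlying the approach above: given marginal exceedance probabilities P(A_i) for the allele-wise tests and their pairwise joint probabilities P(A_i and A_j), obtained elsewhere (e.g. from a normal approximation), the bound subtracts the weight of a maximal spanning tree from the plain Bonferroni sum. The numbers in the example are hypothetical.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def worsley_upper_bound(marginal_probs, pairwise_probs):
    # P(union of A_i) <= sum_i P(A_i) - sum over a maximal spanning tree of P(A_i and A_j).
    # Only the off-diagonal entries of pairwise_probs are used.
    P = np.asarray(pairwise_probs, dtype=float)
    C = P.max() + 1.0                          # shift so all edge weights stay positive
    upper_triangle = np.triu(C - P, k=1)       # minimum spanning tree of (C - P)
    tree = minimum_spanning_tree(upper_triangle)   # ... is a maximal spanning tree of P
    rows, cols = tree.nonzero()
    return float(min(1.0, np.sum(marginal_probs) - P[rows, cols].sum()))

marginals = [0.02, 0.03, 0.025]                # hypothetical P(A_i)
joint = np.array([[0.0,   0.008, 0.005],
                  [0.008, 0.0,   0.007],
                  [0.005, 0.007, 0.0]])        # hypothetical P(A_i and A_j)
print("plain Bonferroni bound:", sum(marginals))                             # 0.075
print("improved (Worsley) bound:", worsley_upper_bound(marginals, joint))    # 0.060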

Journal ArticleDOI
TL;DR: In this article, a step-down likelihood ratio method for declaring differences between the test treatments or populations and the standard treatment or population was proposed, which asymptotically controls the probability of type I error.
Abstract: We are interested in comparing logistic regressions for several test treatments or populations with a logistic regression for a standard treatment or population. The research was motivated by some real life problems, which are discussed as data examples. We propose a step-down likelihood ratio method for declaring differences between the test treatments or populations and the standard treatment or population. Competitors based on the sequentially rejective Bonferroni Wald statistic, sequentially rejective exact Wald statistic and Reiersøl's statistic are also discussed. It is shown that the proposed method asymptotically controls the probability of type I error. A Monte Carlo simulation shows that the proposed method performs well for relatively small sample sizes, outperforming its competitors.

Journal ArticleDOI
TL;DR: A new improvement of the classical Bonferroni inequalities for any finite collection of sets {Av}v∈V associated with an additional structure, which is assumed to be given by a union-closed set 𝒳 of non-empty subsets of V such that ⋂x∈X Ax ⊆ ⋃v∉X Av for any X ∈ 𝒳.

Journal ArticleDOI
TL;DR: This work combines resampling step-down procedures with the Minimum Variance Adaptive method, which allows selection of an optimal test statistic from a predefined class of statistics for the data under analysis; the combined procedure exhibits a significant increase in statistical power, even for small sample sizes.

Journal ArticleDOI
TL;DR: The case against the protected F test is presented and alternative methods of controlling for Type I error are discussed, including the Bonferroni adjustment and descriptive discriminant analysis.
Abstract: Researchers who examine multiple outcome variables sometimes invoke a multivariate analysis of variance approach known as the "protected F test" to control for experimentwise Type I error rate. Unfortunately, this procedure affords protection against experimentwise Type I error only in rare instances. The purpose of the present paper is to present the case against the protected F test and to discuss alternative methods of controlling for Type I error, including the Bonferroni adjustment and descriptive discriminant analysis. The latter approach is briefly elaborated as a truly multivariate solution for multivariate phenomena. The author cites multiple examples of proper and improper use of multivariate analysis of variance in research on child development.

Journal ArticleDOI
TL;DR: Technology developed in a predecessor paper (Chen and Seneta (1996)) is applied to provide, in a unified manner, a sharpening of bivariate Bonferroni-type bounds on P(ν₁ ≥ r, ν₂ ≥ u) obtained by Galambos and Lee (1992; upper bound) and Chen and Seneta (1986; lower bound).

Abstract: Technology developed in a predecessor paper (Chen and Seneta (1996)) is applied to provide, in a unified manner, a sharpening of bivariate Bonferroni-type bounds on P(ν₁ ≥ r, ν₂ ≥ u) obtained by Galambos and Lee (1992; upper bound) and Chen and Seneta (1986; lower bound).

Book ChapterDOI
TL;DR: The well-known Bonferroni method is applied in order to determine the optimal degree in polynomial fitting and the optimal number of hidden neurons in feedforward neural networks.
Abstract: We aim to determine which of a set of competing models is statistically best, that is, on average. A way to define "on average" is to consider the performance of these algorithms averaged over all the training sets that might be drawn from the underlying distribution. When comparing more than two means, an ANOVA F-test tells you whether the means are significantly different, but it does not tell you which means differ from each other. A simple approach is to test each possible difference by a paired t-test. However, the probability of making at least one type I error increases with the number of tests made. Multiple comparison procedures provide different solutions. We discuss these techniques and apply the well known Bonferroni method in order to determine the optimal degree in polynomial fitting and the optimal number of hidden neurons in feedforward neural networks.
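
A short sketch of the comparison scheme described above (not the chapter's code): cross-validated errors of competing models are compared with paired t-tests, and the per-comparison significance level is Bonferroni-adjusted for the number of pairwise tests. The model names and error values below are invented.

from itertools import combinations
import numpy as np
from scipy import stats

def compare_models(cv_errors, alpha=0.05):
    # cv_errors: dict mapping model name -> per-fold errors (same folds for
    # every model, so the t-tests are paired).
    pairs = list(combinations(cv_errors, 2))
    level = alpha / len(pairs)                   # Bonferroni per-comparison level
    for a, b in pairs:
        t_stat, p = stats.ttest_rel(cv_errors[a], cv_errors[b])
        verdict = "differ" if p < level else "no significant difference"
        print(f"{a} vs {b}: p = {p:.3f} -> {verdict}")

rng = np.random.default_rng(3)
compare_models({f"degree {d}": rng.normal(1.0 + 0.1 * d, 0.2, size=10) for d in (1, 2, 3)})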

Proceedings ArticleDOI
05 Jun 2000
TL;DR: The Bonferroni multiple hypothesis testing procedure is used to select the individual basis sequences to include in the Volterra model; it allows strong control of the false alarm rate and has more power in selecting the true model in low noise.
Abstract: In this paper we consider the identification of a time-varying quadratic Volterra model. In the model, a set of known basis sequences is used to approximate the time-variation of the true system to enable identification. To reduce the number of parameters in the model, we wish to determine which sequences can be considered significant in this approximation. The Bonferroni multiple hypothesis testing procedure is used for the selection of individual basis sequences to include in the model. This is compared with treating the multiple hypotheses separately. Not only does the Bonferroni procedure allow strong control of the false alarm rate, but it also has more power in selecting the true model in low noise.
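
One way to realize the selection step described above, sketched under the assumption that the time-varying Volterra model is linear in its parameters once the basis expansion is fixed; the regressor matrix Phi and all names below are illustrative, not the paper's notation.

import numpy as np
from scipy import stats

def select_basis_sequences(Phi, y, alpha=0.05):
    # Phi: (N, m) regressor matrix whose columns correspond to the candidate
    # basis sequences (assumed full column rank); y: observed output.
    n, m = Phi.shape
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ beta
    sigma2 = resid @ resid / (n - m)
    cov = sigma2 * np.linalg.inv(Phi.T @ Phi)
    t_vals = beta / np.sqrt(np.diag(cov))
    p_vals = 2 * stats.t.sf(np.abs(t_vals), df=n - m)
    return np.flatnonzero(p_vals < alpha / m)    # Bonferroni: each test at level alpha/m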

Book ChapterDOI
01 Jan 2000
TL;DR: This paper re-examines the algorithm for calculating the lower and upper bounds of an array given the complete set of its marginals, proposing some extensions.
Abstract: In this paper we re-examine our algorithm for calculating the lower and upper bounds of an array given the complete set of its marginals, proposing some extensions. The algorithm has some interesting properties that can be useful in various fields of application, such as statistical disclosure control of count tables. These properties involve both theoretical and computational issues: in particular, the algorithm has relevant links with probabilistic and statistical aspects (e.g. Fréchet and Bonferroni bounds) and is particularly easy to implement, has a low storage requirement and is very fast.
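
For the simplest case, a two-way count table with known row and column totals, the bounds mentioned above reduce to the classical Fréchet/Bonferroni cell bounds, sketched below; this is an elementary special case, not the chapter's algorithm for general arrays.

import numpy as np

def frechet_cell_bounds(row_totals, col_totals):
    # Cell (i, j) of any table with these marginals satisfies
    # max(0, r_i + c_j - n) <= n_ij <= min(r_i, c_j), where n is the grand total.
    r = np.asarray(row_totals)[:, None]
    c = np.asarray(col_totals)[None, :]
    n = r.sum()                     # assumes row totals and column totals sum to the same n
    return np.maximum(0, r + c - n), np.minimum(r, c)

lower, upper = frechet_cell_bounds([60, 40], [30, 70])
print(lower)   # [[ 0 30]
               #  [ 0 10]]
print(upper)   # [[30 60]
               #  [30 40]]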

Proceedings ArticleDOI
29 Oct 2000
TL;DR: In this paper, the system identification problem using a time-varying quadratic Volterra model is considered, and a set of known basis sequences is used in the model to approximate the time-variation of the true system.
Abstract: We consider the system identification problem using a time-varying quadratic Volterra model. To enable identification, a set of known basis sequences is used in the model to approximate the time-variation of the true system. To reduce the number of parameters in the model we wish to determine which individual sequences are significant in this approximation. Multiple hypothesis testing procedures are employed to select significant sequences. The tests include the Bonferroni test, Holm's (1979) sequentially rejective Bonferroni test, and Hommel's (1988) extension to Simes' (1986) procedure.