
Showing papers on "Bonferroni correction published in 2021"


Journal ArticleDOI
09 Jun 2021-PLOS ONE
TL;DR: A much needed practical synthesis of basic statistical concepts regarding multiple hypothesis testing in a comprehensible language with well-illustrated examples and an easy-to-follow guide for selecting the most suitable correction technique is provided.
Abstract: Scientists from nearly all disciplines face the problem of simultaneously evaluating many hypotheses. Conducting multiple comparisons increases the likelihood that a non-negligible proportion of associations will be false positives, clouding real discoveries. Drawing valid conclusions requires taking into account the number of statistical tests performed and adjusting the statistical confidence measures accordingly. Several strategies exist to overcome the problem of multiple hypothesis testing. We aim to summarize critical statistical concepts and widely used correction approaches while also drawing attention to frequently misinterpreted notions of statistical inference. We provide a step-by-step description of each multiple-testing correction method with clear examples and present an easy-to-follow guide for selecting the most suitable correction technique. To facilitate multiple-testing corrections, we developed a fully automated solution that requires neither programming skills nor the use of a command line. Our registration-free online tool, available at www.multipletesting.com, compiles the five most frequently used adjustment methods, including the Bonferroni, Holm (step-down), and Hochberg (step-up) corrections, and also calculates False Discovery Rates (FDR) and q-values. The current summary provides a much-needed practical synthesis of basic statistical concepts regarding multiple hypothesis testing in comprehensible language with well-illustrated examples. The web tool will fill the gap for life science researchers by providing a user-friendly substitute for command-line alternatives.
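The three FWER corrections named above follow simple adjusted-p-value recipes, which can be sketched in a few lines of Python (the function names and example p-values are ours, not from the paper or the web tool):

```python
def bonferroni(pvals):
    # multiply every p-value by the number of tests, cap at 1
    m = len(pvals)
    return [min(p * m, 1.0) for p in pvals]

def holm(pvals):
    # step-down: the rank-k smallest p (k from 0) is multiplied by (m - k),
    # enforcing monotonicity from below
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running = max(running, min((m - rank) * pvals[i], 1.0))
        adj[i] = running
    return adj

def hochberg(pvals):
    # step-up: scan from the largest p downwards, enforcing monotonicity
    # from above
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i], reverse=True)
    adj, running = [0.0] * m, 1.0
    for k, i in enumerate(order):
        running = min(running, (k + 1) * pvals[i])
        adj[i] = running
    return adj

pvals = [0.01, 0.04, 0.03, 0.005]
print([round(p, 4) for p in bonferroni(pvals)])  # [0.04, 0.16, 0.12, 0.02]
print([round(p, 4) for p in holm(pvals)])        # [0.03, 0.06, 0.06, 0.02]
print([round(p, 4) for p in hochberg(pvals)])    # [0.03, 0.04, 0.04, 0.02]
```

As the example shows, Holm is uniformly at least as powerful as Bonferroni, and Hochberg is more powerful still when the tests are independent or positively dependent.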

45 citations


Journal ArticleDOI
11 Jan 2021
TL;DR: In this paper, the authors investigated multiple attribute decision-making problems with generalized trapezoidal hesitant fuzzy numbers (GTHF-numbers) and developed two aggregation techniques: the generalized trapezoidal hesitant fuzzy Bonferroni arithmetic mean operator and the generalized trapezoidal hesitant fuzzy Bonferroni geometric mean operator.
Abstract: Generalized trapezoidal hesitant fuzzy numbers are useful whenever there is indecision among several possible values for the preferences over objects in the process of decision making. In this sense, the aim of this work is to investigate multiple attribute decision-making problems with generalized trapezoidal hesitant fuzzy numbers (GTHF-numbers). Therefore, we develop two aggregation techniques, called the generalized trapezoidal hesitant fuzzy Bonferroni arithmetic mean operator and the generalized trapezoidal hesitant fuzzy Bonferroni geometric mean operator, for aggregating generalized trapezoidal hesitant fuzzy information. We then examine their properties and discuss their special cases. We also develop two approaches for multiple attribute decision making under generalized trapezoidal hesitant fuzzy environments, apply the proposed approaches based on Bonferroni aggregation operators to multicriteria decision making, and give two practical examples to illustrate our results. In the end, we provide a brief comparative analysis of the proposed approaches against existing methods.

23 citations


Journal ArticleDOI
TL;DR: Using the presented operators a decision-making approach is developed and is illustrated with the help of a practical example and the reliability of the developed methodology is investigated with the aid of validity test criteria.
Abstract: Complex intuitionistic fuzzy sets (CIFSs), characterized by complex-valued grades of membership and non-membership, are a generalization of standard intuitionistic fuzzy (IF) sets that better represent time-periodic issues and handle two-dimensional data in a single set. Under this environment, in this article, various mean-type operators, namely the complex IF Bonferroni mean (CIFBM) and the complex IF weighted Bonferroni mean (CIFWBM), are presented along with their properties, and numerous particular cases of CIFBM are discussed. Further, using the presented operators, a decision-making approach is developed and illustrated with the help of a practical example. Also, the reliability of the developed methodology is investigated with the aid of validity test criteria, and the example results are compared with prevailing operator-based methods.

23 citations


Journal ArticleDOI
TL;DR: The threshold to find a particular level of family-wise significance may need to be established using separate permutations of the actual data for several MAF bins, and it is proposed that the permutation threshold is influenced by minor allele frequency of the SNPs, and by the number of individuals tested.
Abstract: An important issue affecting genome-wide association studies with deep phenotyping (multiple correlated phenotypes) is determining a suitable family-wise significance threshold. Straightforward family-wise (Bonferroni) correction of p-values […]. For common SNPs (MAF > 0.1), the permutation family-wise threshold was in close agreement with spectral decomposition methods. However, for less common SNPs (0.05 < MAF ≤ 0.1), the permutation threshold calculated over all SNPs was off by orders of magnitude. This applies to the number of individuals studied here (777) but not to much larger sample sizes. Based on these findings, we propose that the threshold for a particular level of family-wise significance may need to be established using separate permutations of the actual data for several MAF bins.
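The permutation approach the authors refine can be illustrated with a toy min-p procedure: permute the phenotype, record the smallest p-value across all SNPs each time, and take the 5th percentile of those minima as the family-wise 5% threshold. Everything below (sample sizes, the normal approximation to the per-SNP test) is our own simplification, not the paper's pipeline:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n_ind, n_snps, n_perm = 200, 50, 500

# toy genotypes (0/1/2) and a phenotype unrelated to them (global null)
geno = rng.binomial(2, 0.3, size=(n_ind, n_snps)).astype(float)
pheno = rng.normal(size=n_ind)

def min_p(y, X):
    # per-SNP correlation test, two-sided p via a normal approximation
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / np.sqrt(
        (Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    t = r * np.sqrt((len(y) - 2) / (1 - r ** 2))
    return min(1 - erf(abs(ti) / sqrt(2)) for ti in t)

# permutation distribution of the smallest p-value across all SNPs
null_min_p = np.array([min_p(rng.permutation(pheno), geno)
                       for _ in range(n_perm)])
threshold = np.quantile(null_min_p, 0.05)  # family-wise 5% threshold
print(f"permutation threshold: {threshold:.5f} "
      f"(Bonferroni would use {0.05 / n_snps:.5f})")
```

With independent SNPs the two thresholds largely agree; correlation between SNPs is what pushes the permutation threshold above α/m. Binning SNPs by MAF and repeating the procedure within each bin, as the authors propose, amounts to computing `null_min_p` separately per bin.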

21 citations


Journal ArticleDOI
TL;DR: There was a sex difference in the prevalence of high BMI and dyslipidemia among schizophrenia patients, and the contribution of clinical and metabolic components to MetS was almost the same between male and female patients.

16 citations


Book ChapterDOI
01 Jan 2021
TL;DR: An approach to multi-criteria group-decision making problems under the spherical fuzzy environment is presented, and to illustrate the validity of the novel aggregation operator, a practical example is provided.
Abstract: The Bonferroni Mean (BM), which was introduced by Bonferroni, has been extensively applied in Multi-Attribute Group Decision-Making (MAGDM) and support-system problems because of its usefulness in aggregation techniques. One of the most important and distinguishing characteristics of the BM is its capability to capture the interrelationship between arguments. Motivated by the applications of Spherical Fuzzy Sets (SFS) in recent studies, and in order to consider the interrelationship between arguments, it seems necessary to develop novel aggregation operators for this kind of fuzzy set in MAGDM problems. Therefore, in this paper we adopt the BM and spherical fuzzy set operators to propose new aggregation operators: the Spherical Fuzzy Bonferroni mean (SFBM) and the Spherical Fuzzy Normalized Weighted Bonferroni mean (SFNWBM). Finally, based on the proposed aggregation operators (SFNWBM), we present an approach to multi-criteria group-decision-making problems under the spherical fuzzy environment, and to illustrate the validity of the novel aggregation operator, a practical example is provided.

15 citations


Journal ArticleDOI
TL;DR: In this article, a review of the advantages and disadvantages of making adjustments when undertaking multiple comparisons is presented, along with advice on when researchers should consider adjusting p-value thresholds and when adjustments should be avoided.
Abstract: Researchers attempt to minimize Type-I errors (concluding there is a relationship between variables when in fact there isn't one) in their experiments by exerting control over the p-value threshold or alpha level. If a statistical test is conducted only once in a study, it is indeed possible for the researcher to maintain control, so that the likelihood of a Type-I error is equal to or less than the significance (p-value) level. When making multiple comparisons in a study, however, the likelihood of making a Type-I error can dramatically increase. When conducting multiple comparisons, researchers frequently attempt to control for the increased risk of Type-I errors by making adjustments to their alpha level or significance threshold. The Bonferroni adjustment is the most common of these adjustments. However, these often rigid adjustments are not without risk and are frequently applied arbitrarily. The objective of this review is to provide a balanced commentary on the advantages and disadvantages of making adjustments when undertaking multiple comparisons. A summary discussion of family- and experiment-wise error is also presented. Lastly, advice is provided on when researchers should consider adjusting p-value thresholds and when adjustments should be avoided.
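The inflation the review describes is easy to quantify for m independent tests: the family-wise error rate grows as 1 − (1 − α)^m, and the Bonferroni adjustment caps it by testing each hypothesis at α/m. A quick illustration:

```python
alpha = 0.05
for m in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** m           # chance of >= 1 false positive
    fwer_bonf = 1 - (1 - alpha / m) ** m  # after testing each at alpha/m
    print(f"{m:3d} tests: FWER {fwer:.3f} -> {fwer_bonf:.3f} with Bonferroni")
```

At 20 independent tests the unadjusted family-wise error rate is already about 0.64, while the Bonferroni-adjusted rate stays just under the nominal 0.05.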

15 citations


Journal ArticleDOI
TL;DR: In this article, a CAD test model was obtained by using 6 intraoral scanners (TRIOS2, TRIOS3, CS3500, CS3600, i500, and Primescan) to compare the 3D distortion of complete-arch scans as part of the scan strategy and analyze the clinically recommended scan range.
Abstract: Statement of problem: Various scan strategies for intraoral scanners (IOSs) can be used to scan the oral cavity. However, research on the scan range that can be clinically recommended is lacking. Purpose: The purpose of this in vitro study was to compare the 3-dimensional (3D) distortion of complete-arch scans as part of the scan strategy and analyze the clinically recommended scan range. Material and methods: A computer-aided design (CAD) reference model was obtained with an industrial scanner. A CAD test model was obtained by using 6 IOSs (TRIOS2, TRIOS3, CS3500, CS3600, i500, and Primescan) to apply 2 scan strategies, and 2 dental laboratory scanners (DOF and E1) (N=15). All the teeth were segmented in the reference model by using 3D inspection software (Geomagic Control X). The 3D analysis was performed by aligning the test model to the reference model and evaluating the root mean square values of all segmented teeth. The Mann-Whitney U test was performed for a statistical comparison of the 2 scan strategies (α=.05), the Kruskal-Wallis test (α=.05) was used to compare the scanners, and the Mann-Whitney U test with the Bonferroni correction was used as a post hoc test (α=.0017). Results: The 8 scanners showed significant differences in the root mean square values of all teeth (P […]). Conclusions: Scan strategy 2 improved the accuracy of the IOSs. TRIOS2 and CS3500 are suited for single crowns; TRIOS3, CS3600, and i500 for short-span prostheses; and Primescan for long-span prostheses.

13 citations


Journal ArticleDOI
TL;DR: In this paper, a new five-parameter model called the extended Dagum distribution was proposed, which contains as special cases the log-logistic and Burr III distributions, among others, and derived the moments, generating and quantile functions, mean deviations and Bonferroni, Lorenz and Zenga curves.
Abstract: We study a new five-parameter model called the extended Dagum distribution. The proposed model contains as special cases the log-logistic and Burr III distributions, among others. We derive the moments, generating and quantile functions, mean deviations and Bonferroni, Lorenz and Zenga curves. We obtain the density function of the order statistics. The parameters are estimated by the method of maximum likelihood. The observed information matrix is determined. An application to real data illustrates the importance of the new model.

11 citations


Journal ArticleDOI
TL;DR: A new approach is introduced to handle multi-attribute decision-making problems in the environment of 2DULVs using the proposed operators, and the validity and superiority of the new method are verified by comparison with several other methods.
Abstract: The dual generalized Bonferroni mean operator is a further extension of the generalized Bonferroni mean operator which can take the interrelationships of different numbers of attributes into account by changing the embedded parameter. The 2-dimensional uncertain linguistic variable (2DULV) adds a second-dimensional uncertain linguistic variable (ULV) to express the reliability of the assessment information in the first dimension, which is more rational and accurate than the ULV. In this paper, to combine their advantages, we propose the dual generalized weighted Bonferroni mean operator for 2DULVs (2DULDGWBM) and the dual generalized weighted Bonferroni geometric mean operator for 2DULVs (2DULDGWBGM). In addition, we explore several particular cases and some rational characteristics of these operators. Further, a new approach is introduced to handle multi-attribute decision-making problems in the environment of 2DULVs using the proposed operators. Finally, we use several illustrative examples to verify the validity and superiority of this new method by comparing it with several other methods.

11 citations



Journal ArticleDOI
TL;DR: In this paper, it was shown that the FWER is asymptotically a convex function of the correlation (ρ), and hence an upper bound on the FWER of the Bonferroni-α procedure is α(1 − ρ).
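The qualitative effect — Bonferroni becomes increasingly conservative as the correlation grows — can be checked with a small Monte Carlo sketch of equicorrelated z-statistics (our own illustration; it does not reproduce the paper's asymptotic derivation):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
m, alpha, n_sim = 20, 0.05, 4000
z_crit = NormalDist().inv_cdf(1 - alpha / (2 * m))  # two-sided Bonferroni cut

def fwer(rho):
    # equicorrelated null z-statistics: z_i = sqrt(rho)*w + sqrt(1-rho)*e_i
    w = rng.normal(size=(n_sim, 1))
    e = rng.normal(size=(n_sim, m))
    z = np.sqrt(rho) * w + np.sqrt(1 - rho) * e
    # fraction of simulations with at least one false rejection
    return float((np.abs(z) > z_crit).any(axis=1).mean())

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho}: estimated FWER {fwer(rho):.3f}")
```

At ρ = 0 the estimate sits near the nominal 0.05; as ρ increases, the realized FWER drops well below it, which is the slack the paper's bound quantifies.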

Journal ArticleDOI
TL;DR: The results suggest that RELN rs7341475 is associated with a lower risk of SCZ in the overall population and Caucasian population, but rs262355 is associatedwith an increased risk ofSCZ only in the Caucasian population.
Abstract: Schizophrenia (SCZ) is a destructive neuropsychiatric illness affecting millions of people worldwide. The correlation between RELN gene polymorphisms and SCZ has been investigated by previous studies, though the results remained conflicting. Based on the available studies, we conducted this meta-analysis to provide a more comprehensive answer on whether the RELN gene polymorphisms (rs7341475 and rs262355) are associated with SCZ. A total of 15 studies with 25,403 subjects (9047 cases and 16,356 controls) retrieved from PubMed, ScienceDirect, EMBASE, Wiley, BMC, Cochrane, Springer, MDPI, SAGE, and Google Scholar up to June 2020 were included. The meta-analysis was performed using Review Manager 5.3. Heterogeneity was checked using I2 statistics and the Q-test, and publication bias was also assessed. The rs7341475 polymorphism showed a significantly lower risk for SCZ for the allele (A vs. G: OR = 0.93, 95%CI = 0.87–0.99), codominant 1 (AG vs. GG: OR = 0.92, 95%CI = 0.85–0.99), dominant (AA+AG vs. GG: OR = 0.92, 95%CI = 0.86–0.98), and over-dominant models (AG vs. AA+GG: OR = 0.92, 95%CI = 0.86–0.99). The allele, codominant 1, and dominant models remained statistically significant after Bonferroni correction (p < 0.025). Subgroup analysis confirmed the association of the allele and dominant models in Caucasians after Bonferroni correction. For the rs262355 polymorphism, a significantly increased risk of SCZ was found only in Caucasians for the codominant 2, dominant, and allele models, but only the allele model remained significant after Bonferroni correction. Publication bias was found for the codominant 2 and recessive models for rs7341475 in the overall population, but it was no longer present after Bonferroni correction or in the subgroup analysis. No publication bias was found for rs262355.
The results suggest that RELN rs7341475 is associated with a lower risk of SCZ in the overall population and in the Caucasian population, whereas rs262355 is associated with an increased risk of SCZ only in the Caucasian population.

Journal ArticleDOI
TL;DR: In this article, the authors examined whether polygenic risk scores (PRSs) for major depressive disorder, schizophrenia, cross-disorder, and pharmacological antidepressant response are associated with ECT effectiveness.

Journal ArticleDOI
TL;DR: In this paper, deep learning-based background phase error correction improved the consistency of flow measurements in abdominopelvic four-dimensional flow MRI and simplified hemodynamic analysis for clinical use.
Abstract: Deep learning-based background phase error correction improved the consistency of flow measurements in abdominopelvic four-dimensional flow MRI and simplified hemodynamic analysis for clinical use.

Posted ContentDOI
01 Jun 2021-medRxiv
TL;DR: Although PRSs are still not able to predict non-response or non-remission, the results are in line with previous works; methodological improvements in PRS calculation may improve their predictive performance and give them a meaningful role in precision psychiatry.
Abstract: About two-thirds of patients with major depressive disorder (MDD) fail to achieve symptom remission after the initial antidepressant treatment. Although a role of genetic factors has been demonstrated, the specific underpinnings are not yet fully understood. Polygenic risk scores (PRSs), which summarise the additive effect of multiple risk variants across the genome, might provide insights into the underlying genetics. This study aims to investigate the possible association of PRSs for bipolar disorder, MDD, neuroticism, and schizophrenia (SCZ) with antidepressant non-response or non-remission in patients with MDD. PRSs were calculated at eight genome-wide P-thresholds based on publicly available summary statistics of the largest genome-wide association studies. Logistic regressions were performed between PRSs and non-response or non-remission in six European clinical samples, adjusting for age, sex, baseline symptom severity, recruitment sites, and population stratification. Results were meta-analysed across samples, including up to 3,637 individuals, and Bonferroni correction was applied. In the meta-analysis, no result was significant after Bonferroni correction. The top result was found for MDD-PRS and non-remission (p=0.004), with patients in the highest vs. lowest PRS quintile being more likely not to achieve remission (OR=1.5, 95% CI=1.11-1.98, p=0.007). Nominal associations were also found between MDD-PRS and non-response (p=0.013), as well as between SCZ-PRS and non-remission (p=0.035). Although PRSs are not yet able to predict non-response or non-remission, our results are in line with previous works; methodological improvements in PRS calculation may improve their predictive performance and give them a meaningful role in precision psychiatry.

Journal ArticleDOI
TL;DR: In this article, an alternative closed-form expression for the estimation of the number of non-redundant metabolic variates based on the spectral decomposition of their correlation matrix is derived.
Abstract: The search for statistically significant relationships between molecular markers and outcomes is challenging when dealing with high-dimensional, noisy and collinear multivariate omics data, such as metabolomic profiles. Permutation procedures allow for the estimation of adjusted significance levels without assuming independence among metabolomic variables. Nevertheless, the complex non-normal structure of metabolic profiles and outcomes may bias the permutation results, leading to overly conservative threshold estimates, i.e., lower than those from a Bonferroni or Sidak correction. Within a univariate permutation procedure, we employ parametric simulation methods based on the multivariate (log-)normal distribution to obtain adjusted significance levels which are consistent across different outcomes while effectively controlling the type I error rate. Next, we derive an alternative closed-form expression for the estimation of the number of non-redundant metabolic variates based on the spectral decomposition of their correlation matrix. The performance of the method is tested for different model parametrizations and across a wide range of correlation levels of the variates using synthetic and real data sets. Both the permutation-based formulation and the more practical closed-form expression are found to give an effective indication of the number of independent metabolic effects exhibited by the system, while guaranteeing that the derived adjusted threshold is stable across outcome measures with diverse properties.
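The closed-form "effective number of tests" idea can be sketched with a Li & Ji-style eigenvalue formula applied to the correlation matrix; note this formula is a stand-in we have assumed for illustration, not the paper's own expression:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy "metabolite" panel: 30 variates in 3 highly correlated blocks of 10
n, block, n_blocks = 500, 10, 3
latent = rng.normal(size=(n, n_blocks))
X = np.repeat(latent, block, axis=1) + 0.3 * rng.normal(size=(n, block * n_blocks))

lam = np.clip(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)), 0, None)

# Li & Ji-style effective number of tests from the eigenvalue spectrum
m_eff = float((np.where(lam >= 1, 1.0, 0.0) + (lam - np.floor(lam))).sum())
alpha_sidak = 1 - (1 - 0.05) ** (1 / m_eff)  # Sidak with m_eff instead of M
print(f"m_eff = {m_eff:.1f} of {lam.size}; adjusted alpha = {alpha_sidak:.4f}")
```

With the block structure above, m_eff falls well below the nominal 30 variates, so the adjusted threshold is correspondingly less conservative than a full Bonferroni or Sidak correction over all 30.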

Journal ArticleDOI
TL;DR: In this paper, simultaneous confidence intervals (SCIs) for the ratios of means of zero-heavy log-normal populations were proposed by using the Bonferroni adjustment principle.
Abstract: This paper studies simultaneous confidence intervals (SCIs) for the ratios of means of zero-heavy log-normal populations. We propose four SCIs by using the Bonferroni adjustment principle. Specific...

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, the authors introduce linear contrasts between treatment group means as a principled way for constructing t-tests and confidence intervals for treatment comparisons, including contrasts for estimating time trends and for finding minimal effective doses.
Abstract: We introduce linear contrasts between treatment group means as a principled way for constructing t-tests and confidence intervals for treatment comparisons. We consider a variety of contrasts, including contrasts for estimating time trends and for finding minimal effective doses. Multiple comparison procedures control the family-wise error rate, and we introduce four commonly used methods by Bonferroni, Tukey, Dunnett, and Scheffe. Finally, we discuss a larger real-life example to demonstrate the use of linear contrasts and highlight the need for careful definition of contrasts to correctly reflect the desired comparisons.
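The mechanics of a contrast t-test reduce to a weighted combination of group means with a pooled variance estimate. A minimal numpy sketch (toy data; the linear-trend contrast is one of the kinds mentioned above):

```python
import numpy as np

rng = np.random.default_rng(3)
# three dose groups whose true means follow a linear trend
groups = [rng.normal(mu, 1.0, size=20) for mu in (0.0, 0.5, 1.0)]
c = np.array([-1.0, 0.0, 1.0])  # linear-trend contrast; coefficients sum to 0

means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])
df = int(ns.sum()) - len(groups)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df  # pooled variance

est = float(c @ means)                          # estimated contrast
se = float(np.sqrt(mse * np.sum(c ** 2 / ns)))  # its standard error
print(f"contrast estimate {est:.2f}, t = {est / se:.2f} on {df} df")
```

When several such contrasts are tested together, the resulting t-statistics are what the Bonferroni, Tukey, Dunnett, and Scheffe procedures then compare against adjusted critical values.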

Journal ArticleDOI
TL;DR: The problems of computing two-sided tolerance intervals (TIs) and equal-tailed TIs for a location-scale family of distributions are considered. The methods are simple, exact, and can be used to find TIs for all location-scale families of distributions, including log-location-scale families.
Abstract: The problems of computing two-sided tolerance intervals (TIs) and equal-tailed TIs for a location-scale family of distributions are considered. The TIs are constructed using one-sided tolerance limits with the Bonferroni adjustments and then adjusting the confidence levels so that the coverage probabilities of the TIs are equal to the specified nominal confidence level. The methods are simple, exact and can be used to find TIs for all location-scale families of distributions including log-location-scale families. The computational methods are illustrated for the normal, Weibull, two-parameter Rayleigh and two-parameter exponential distributions. The computational method is applicable to find TIs based on a type II censored sample. Factors for computing two-sided TIs and equal-tailed TIs are tabulated and R functions to find tolerance factors are provided in a supplementary file. The methods are illustrated using a few practical examples.
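The core construction — two one-sided normal tolerance limits, each taken at confidence 1 − α/2 per the Bonferroni adjustment — can be sketched with scipy's noncentral t distribution. This is the classical normal-theory version, without the paper's subsequent confidence-level recalibration:

```python
import numpy as np
from scipy import stats

def normal_ti_bonferroni(x, coverage=0.90, conf=0.95):
    """Equal-tailed (coverage, conf) tolerance interval from two one-sided
    normal tolerance limits, each at confidence 1 - (1 - conf)/2."""
    n = len(x)
    gamma = 1 - (1 - conf) / 2   # Bonferroni-adjusted one-sided confidence
    p = (1 + coverage) / 2       # content each one-sided limit must cover
    delta = stats.norm.ppf(p) * np.sqrt(n)
    k = stats.nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(4)
x = rng.normal(10.0, 2.0, size=50)
lo, hi = normal_ti_bonferroni(x)
print(f"90%-content / 95%-confidence TI: ({lo:.2f}, {hi:.2f})")
```

The interval is wider than the naive mean ± z·s band because the tolerance factor k accounts for the sampling uncertainty of both the mean and the standard deviation.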

Journal ArticleDOI
21 Apr 2021
TL;DR: In this paper, the authors identify a paradox for inequality indices which is similar to the well-known Simpson paradox in statistics; the paradox vanishes if inequality is measured by the absolute Gini and Bonferroni indices.
Abstract: This paper identifies a paradox for inequality indices which is similar to the well known Simpson paradox in statistics. For the Gini and Bonferroni indices, concrete examples of the paradox are provided and general methods are described for obtaining examples of the paradox for arbitrary size population. Sufficient conditions for the paradox not to hold are developed. The paradox, however, vanishes if we measure inequality by the absolute Gini and Bonferroni indices of inequality.
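For concreteness, the two (relative) indices can be computed from their standard discrete forms; the absolute versions mentioned in the abstract are simply these multiplied by the mean income (our implementation, assuming the usual textbook formulas):

```python
import numpy as np

def gini(x):
    x = np.asarray(x, dtype=float)
    # mean-absolute-difference form of the (relative) Gini index
    return np.abs(x[:, None] - x[None, :]).mean() / (2 * x.mean())

def bonferroni_index(x):
    x = np.sort(np.asarray(x, dtype=float))
    n, mu = len(x), x.mean()
    partial_means = np.cumsum(x) / np.arange(1, n + 1)  # means of the i poorest
    # average shortfall of the lower partial means relative to the overall mean
    return 1 - partial_means[:-1].mean() / mu

incomes = [2, 4, 6, 8, 30]
print(f"Gini = {gini(incomes):.3f}, Bonferroni = {bonferroni_index(incomes):.3f}")
```

Both indices are 0 under perfect equality; the Bonferroni index weights the lower end of the distribution more heavily, so it typically exceeds the Gini on the same data.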

Journal ArticleDOI
TL;DR: The Wilcoxon rank sum test for two independent samples and the Kruskal-Wallis rank test for the one-way model with k independent samples are very competitive robust alternatives to the two-sample t-test.
Abstract: The Wilcoxon rank sum test for two independent samples and the Kruskal–Wallis rank test for the one-way model with k independent samples are very competitive robust alternatives to the two-sample t...

Journal ArticleDOI
TL;DR: A reinforced program of nine weeks (initial development) and six weeks (maintenance through other physical education contents) allowed teachers to increase and maintain students’ cardiorespiratory fitness levels and to develop other curricular contents at the same time.

Journal ArticleDOI
TL;DR: In this paper, the relative contribution of rare and common coding/non-coding variants of NUS1 to late-onset PD patients (LOPD) was assessed using logistic regression analysis and Sequence Kernel association test.

Posted ContentDOI
11 Jan 2021-bioRxiv
TL;DR: In this article, the authors present a step-by-step description of each multiple-testing correction method with clear examples and present an easy-to-follow guide for selecting the most suitable correction technique.
Abstract: Scientists from nearly all disciplines face the problem of simultaneously evaluating many hypotheses. Conducting multiple comparisons increases the likelihood that a non-negligible proportion of associations will be false positives, clouding real discoveries. Drawing valid conclusions requires taking into account the number of statistical tests performed and adjusting the statistical confidence measures accordingly. Several strategies exist to overcome the problem of multiple hypothesis testing. We aim to summarize critical statistical concepts and widely used correction approaches while also drawing attention to frequently misinterpreted notions of statistical inference. We provide a step-by-step description of each multiple-testing correction method with clear examples and present an easy-to-follow guide for selecting the most suitable correction technique. To facilitate multiple-testing corrections, we developed a fully automated solution that requires neither programming skills nor the use of a command line. Our registration-free online tool, available at www.multipletesting.com, compiles the five most frequently used adjustment methods, including the Bonferroni, Holm (step-down), and Hochberg (step-up) corrections, and also calculates False Discovery Rates (FDR) and q-values. The current summary provides a much-needed practical synthesis of basic statistical concepts regarding multiple hypothesis testing in comprehensible language with well-illustrated examples. The web tool will fill the gap for life science researchers by providing a user-friendly substitute for command-line alternatives.

Journal ArticleDOI
TL;DR: This work introduces and proves an error-flexible Bonferroni modification for medium-size hypothesis applications, called the SiMaFlex procedure, which is able to flexibly outline the Familywise Error Rate, False Discovery Rate, and unadjusted conservatism at the same time.

Journal ArticleDOI
TL;DR: This article addresses some aspects of response variable selection focusing on the above-mentioned examples concerning methodological developments, theoretical properties and computational algorithms.

Journal ArticleDOI
TL;DR: In this paper, the authors examined new methods accounting for the complete correlation structure in group sequential designs with hypotheses in nested subgroups and provided full control of family-wise Type I error rate.

Journal ArticleDOI
TL;DR: In this paper, one-way ANOVA, Tukey HSD test with Scheffe, Bonferroni and Holm multiple comparison for the water quality parameters were carried out.
Abstract: Surface water samples were collected for physico-chemical analysis during 2016-17 from four water bodies (Hadadi lake, Gonivada lake, Lokikere lake and Shagalehalla) of Davanagere district, Karnataka. The main objective of this study is to analyse various parameters such as pH, EC, turbidity, total alkalinity, total dissolved solids, chloride, total hardness, calcium, sodium, sulphate, nitrogen and potassium. One-way ANOVA and the Tukey HSD test with Scheffe, Bonferroni and Holm multiple comparisons were carried out for the water quality parameters. The pH of all four water bodies is alkaline in nature, with the total hardness falling in the hard to very hard category. Most of the water quality parameters are highest in Shagalehalla due to agricultural runoff from the surrounding areas and human anthropogenic activities. One-way ANOVA for the physical parameters gave an F-statistic of 52.1265 with a p-value of 1.112; similarly, for the chemical parameters the F- and p-values were 29.3941 and 5.6732, respectively. The p-value corresponding to the F-statistic of one-way ANOVA is lower than the 0.05 level, and Tukey's HSD test applied to each of the six pairs of water quality parameters exhibits statistically significant differences. We compared the outcomes against drinking water quality standards per WHO and BIS and observed that water samples from these water bodies are potable for human consumption after proper treatment, given the moderate levels of pollution indicated by the physico-chemical data. It is concluded that the physico-chemical characteristics of the water indicate that the water bodies are moderately eutrophic in nature and that there is an urgent need for preventive measures.
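The analysis pipeline — one-way ANOVA followed by pairwise comparisons at a Bonferroni-adjusted alpha — can be sketched with scipy (the pH values below are illustrative only, not the study's data):

```python
from itertools import combinations
from scipy import stats

# illustrative pH readings per water body (made-up values)
lakes = {
    "Hadadi":       [7.8, 7.9, 8.1, 8.0],
    "Gonivada":     [8.2, 8.4, 8.3, 8.5],
    "Lokikere":     [7.6, 7.7, 7.5, 7.8],
    "Shagalehalla": [8.9, 9.1, 9.0, 9.2],
}

f_stat, p_val = stats.f_oneway(*lakes.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.2e}")

# pairwise t-tests with a Bonferroni-adjusted alpha for the six comparisons
pairs = list(combinations(lakes, 2))
alpha_adj = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(lakes[a], lakes[b])
    flag = "significant" if p < alpha_adj else "n.s."
    print(f"{a} vs {b}: p = {p:.4f} ({flag})")
```

With four groups there are six pairwise comparisons, so the Bonferroni-adjusted threshold is 0.05/6 ≈ 0.0083 per comparison.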

Book ChapterDOI
27 Sep 2021
TL;DR: In this article, a direct approach based on Joint Mutual Information (JMI) statistics is proposed to solve the multiple testing problem, where individual hypotheses of interest correspond to conditional independence of the two variables X and Y given each of the several conditioning variables.
Abstract: In the paper we study the multiple testing problem in which the individual hypotheses of interest correspond to conditional independence of two variables X and Y given each of several conditioning variables. Approaches to such problems that avoid inflating the probability of spurious rejections are widely studied and applied. Here we introduce a direct approach based on the Joint Mutual Information (JMI) statistic, which restates the problem as a test of a single hypothesis. The distribution of the test statistic JMI is established and shown to be well approximated numerically for a single data sample. The corresponding test is studied on artificial data sets and performs promisingly when compared to general-purpose multiple testing methods such as the Bonferroni or Simes procedures.
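The two general-purpose baselines mentioned — Bonferroni and Simes — differ only in how the sorted p-values are thresholded when testing the global null. A minimal sketch (our own example p-values):

```python
def bonferroni_global(pvals, alpha=0.05):
    # reject the global null if any p-value clears alpha / m
    return min(pvals) <= alpha / len(pvals)

def simes_global(pvals, alpha=0.05):
    # reject if the i-th smallest p-value clears i * alpha / m for some i
    m = len(pvals)
    return any(p <= (i + 1) * alpha / m for i, p in enumerate(sorted(pvals)))

pvals = [0.02, 0.03, 0.035, 0.9]
print(bonferroni_global(pvals), simes_global(pvals))  # False True
```

Here Simes rejects because the third smallest p-value, 0.035, clears 3 × 0.05/4 = 0.0375, while no single p-value clears the Bonferroni cut of 0.0125 — a case of several moderately small p-values that Bonferroni alone cannot act on.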