Topic

Statistical hypothesis testing

About: Statistical hypothesis testing is a research topic. Over the lifetime, 19,580 publications have been published within this topic, receiving 1,037,815 citations. The topic is also known as confirmatory data analysis.


Papers
Journal ArticleDOI
TL;DR: Two specification tests are proposed for the rank-ordered logit model: a Hausman specification test of the independence from irrelevant alternatives hypothesis, and an application of a weighted M-estimator that yields consistent equivalent price estimators despite misspecification of the distribution.

315 citations
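
The Hausman specification test named in the TL;DR above compares an efficient estimator (consistent only under the null, e.g. IIA) with an estimator that remains consistent when the null fails. A minimal sketch of the generic test statistic follows; the function name and interface are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import chi2

def hausman_test(b_eff, b_cons, V_eff, V_cons):
    """Generic Hausman specification test.

    b_eff, V_eff   : estimates and covariance from the efficient model,
                     consistent only if the null (e.g. IIA) holds.
    b_cons, V_cons : estimates and covariance from a model that stays
                     consistent even when the null fails
                     (e.g. refit on a restricted choice set).
    """
    diff = b_cons - b_eff
    V_diff = V_cons - V_eff                      # covariance of the difference under H0
    stat = float(diff @ np.linalg.pinv(V_diff) @ diff)
    df = len(diff)                               # number of compared coefficients
    p_value = chi2.sf(stat, df)
    return stat, df, p_value
```

The pseudo-inverse guards against a nearly singular covariance difference, which is common in practice; the statistic is referred to a chi-squared distribution with as many degrees of freedom as compared coefficients.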

Journal ArticleDOI
TL;DR: This paper presents a new algorithm for detection of the number of sources via a sequence of hypothesis tests, and theoretically analyze the consistency and detection performance of the proposed algorithm, showing its superiority compared to the standard minimum description length (MDL)-based estimator.
Abstract: Detection of the number of signals embedded in noise is a fundamental problem in signal and array processing. This paper focuses on the non-parametric setting where no knowledge of the array manifold is assumed. First, we present a detailed statistical analysis of this problem, including an analysis of the signal strength required for detection with high probability, and the form of the optimal detection test under certain conditions where such a test exists. Second, combining this analysis with recent results from random matrix theory, we present a new algorithm for detection of the number of sources via a sequence of hypothesis tests. We theoretically analyze the consistency and detection performance of the proposed algorithm, showing its superiority compared to the standard minimum description length (MDL)-based estimator. A series of simulations confirm our theoretical analysis.

315 citations
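
The paper above benchmarks its sequential-testing detector against the classical minimum description length (MDL) estimator of Wax and Kailath. Below is a hedged sketch of that MDL baseline only (not the paper's random-matrix-theory algorithm); the array shape and function name are assumptions for illustration.

```python
import numpy as np

def mdl_num_sources(X):
    """Classical MDL estimate of the number of sources (Wax & Kailath).

    X : (p, N) array of p sensor outputs over N snapshots.
    Returns the k in {0, ..., p-1} minimizing the MDL criterion.
    """
    p, N = X.shape
    R = (X @ X.conj().T) / N                          # sample covariance matrix
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]        # eigenvalues, descending
    mdl = np.empty(p)
    for k in range(p):
        tail = eig[k:]                                # presumed noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))           # geometric mean
        arith = np.mean(tail)                         # arithmetic mean
        loglik = -N * (p - k) * np.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * np.log(N)
        mdl[k] = loglik + penalty
    return int(np.argmin(mdl))
```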

Journal Article
TL;DR: StatXact is a statistical package for exact nonparametric inference that computes exact p-values for a core group of frequently used hypothesis tests for comparing two or more populations.
Abstract: StatXact is a statistical package for exact nonparametric inference. The most important feature of StatXact, distinguishing it from all other statistical software, is that it computes exact p-values for a core group of frequently used hypothesis tests for comparing two or more populations.

314 citations
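
StatXact itself is a commercial package, but the idea it implements, exact rather than asymptotic p-values for small-sample comparisons of two populations, can be illustrated with SciPy. The data below are made up for the example.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical small samples from two populations.
x = [1.83, 0.50, 1.62, 2.48, 1.68, 1.88]
y = [0.878, 0.647, 0.598, 2.05, 1.06, 1.29]

# Exact p-value: enumerate the permutation distribution of the U statistic.
u_exact, p_exact = mannwhitneyu(x, y, alternative="two-sided", method="exact")

# Large-sample normal approximation for comparison.
u_asym, p_asym = mannwhitneyu(x, y, alternative="two-sided", method="asymptotic")

print(f"exact p = {p_exact:.4f}, asymptotic p = {p_asym:.4f}")
```

With samples this small the two p-values can differ noticeably, which is exactly the case where exact inference matters.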

Journal ArticleDOI
TL;DR: Two computer simulations were conducted to examine the findings of previous studies of methods for testing mediation models; they found that the stagnation and decline of statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small.
Abstract: Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such...

314 citations
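
The bias-corrected bootstrap test examined in the simulations above resamples cases, re-estimates the indirect effect ab, and shifts the percentile interval by a bias-correction constant. A minimal sketch under simple OLS assumptions follows; the variable names and two-regression setup are illustrative, not the authors' simulation code.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """ab estimate: a from M ~ X, b from Y ~ M + X (ordinary least squares)."""
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X
    X2 = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]     # coefficient on M
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05):
    """Bias-corrected percentile bootstrap CI for the indirect effect ab."""
    est = indirect_effect(x, m, y)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample cases with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    # Bias-correction constant (degenerate if all bootstrap estimates fall on one side).
    z0 = norm.ppf(np.mean(boots < est))
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    # Reject H0: ab = 0 if the adjusted percentile interval excludes zero.
    return est, np.quantile(boots, [lo, hi])
```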

Journal ArticleDOI
TL;DR: It is shown that permutations of the raw observational (or ‘pre‐network’) data consistently account for underlying structure in the generated social network, and thus can reduce both type I and type II error rates.
Abstract: Null models are an important component of the social network analysis toolbox. However, their use in hypothesis testing is still not widespread. Furthermore, several different approaches for constructing null models exist, each with their relative strengths and weaknesses, and often testing different hypotheses. In this study, I highlight why null models are important for robust hypothesis testing in studies of animal social networks. Using simulated data containing a known observation bias, I test how different statistical tests and null models perform if such a bias was unknown. I show that permutations of the raw observational (or 'pre-network') data consistently account for underlying structure in the generated social network, and thus can reduce both type I and type II error rates. However, permutations of pre-network data remain relatively uncommon in animal social network analysis because they are challenging to implement for certain data types, particularly those from focal follows and GPS tracking. I explain simple routines that can easily be implemented across different types of data, and supply R code that applies each type of null model to the same simulated dataset. The R code can easily be modified to test hypotheses with empirical data. Widespread use of pre-network data permutation methods will benefit researchers by facilitating robust hypothesis testing.

312 citations
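
The pre-network (data-stream) permutations advocated above shuffle the raw observations, here assumed to be a binary group-by-individual matrix, while preserving group sizes and individual sighting counts, and recompute the network statistic after each swap. The sketch below uses a simple-ratio association index and checkerboard swaps purely for illustration; it is Python rather than the R code supplied with the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def network_metric(gbi):
    """Mean simple-ratio association index from a group-by-individual matrix."""
    n = gbi.shape[1]
    together = gbi.T @ gbi                        # times each pair was in the same group
    seen = gbi.sum(axis=0)                        # sightings per individual
    denom = seen[:, None] + seen[None, :] - together
    iu = np.triu_indices(n, k=1)
    sri = np.divide(together[iu], denom[iu],
                    out=np.zeros(len(iu[0])), where=denom[iu] > 0)
    return sri.mean()

def swap_once(gbi):
    """One checkerboard swap of the raw (pre-network) data: keeps group sizes
    and each individual's number of sightings fixed."""
    rows, cols = gbi.shape
    for _ in range(1000):                         # retry until a swappable 2x2 block is found
        r = rng.choice(rows, 2, replace=False)
        c = rng.choice(cols, 2, replace=False)
        block = gbi[np.ix_(r, c)]
        if block[0, 0] == block[1, 1] and block[0, 1] == block[1, 0] \
                and block[0, 0] != block[0, 1]:
            gbi[np.ix_(r, c)] = 1 - block         # flip the checkerboard
            return

def pre_network_permutation_test(gbi, n_perm=1000):
    observed = network_metric(gbi)
    null = np.empty(n_perm)
    perm = gbi.copy()
    for i in range(n_perm):
        swap_once(perm)                           # sequential swaps build the null chain
        null[i] = network_metric(perm)
    p = np.mean(null >= observed)                 # one-sided p-value
    return observed, p
```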


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 88% related
Linear model: 19K papers, 1M citations, 88% related
Inference: 36.8K papers, 1.3M citations, 87% related
Regression analysis: 31K papers, 1.7M citations, 86% related
Sampling (statistics): 65.3K papers, 1.2M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    267
2022    696
2021    959
2020    998
2019    1,033
2018    943