Statistical hypothesis testing
About: Statistical hypothesis testing is a research topic. Over its lifetime, 19,580 publications have appeared within this topic, receiving 1,037,815 citations. The topic is also known as: confirmatory data analysis.
TL;DR: In this article, a new estimate, the minimum information theoretical criterion (AIC) estimate (MAICE), is introduced for the purpose of statistical identification; it is free from the ambiguities inherent in the application of conventional hypothesis testing procedures.
Abstract: The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as a procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed, and a new estimate, the minimum information theoretical criterion (AIC) estimate (MAICE), designed for the purpose of statistical identification, is introduced. When there are several competing models, the MAICE is defined by the model, and the maximum likelihood estimates of its parameters, which give the minimum of AIC, defined by AIC = (-2) log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedures. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.
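The MAICE rule in the abstract above reduces to a one-liner: compute AIC = -2 log L + 2k for each fitted model and pick the minimum. A minimal sketch, using hypothetical log-likelihoods for three autoregressive models (the numbers are illustrative, not from the paper):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = -2*log(L) + 2*k."""
    return -2.0 * log_likelihood + 2.0 * n_params

# MAICE: among competing fitted models, select the one with minimum AIC.
# Log-likelihoods below are hypothetical stand-ins for fitted AR models.
candidates = {
    "AR(1)": aic(log_likelihood=-104.2, n_params=2),
    "AR(2)": aic(log_likelihood=-101.9, n_params=3),
    "AR(3)": aic(log_likelihood=-101.7, n_params=4),
}
best = min(candidates, key=candidates.get)  # "AR(2)" for these values
```

Note how the 2k penalty does the work of a significance threshold: AR(3) fits slightly better than AR(2) but not enough to pay for its extra parameter, so no hypothesis test is ever invoked.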
06 May 2013
TL;DR: In this book, the author presents conditional process analysis, a regression-based framework for answering questions of whether, if, how, and when one variable's effect on another is mediated and/or moderated.
Abstract: Contents:
I. FUNDAMENTAL CONCEPTS
1. Introduction: 1.1. A Scientist in Training. 1.2. Questions of Whether, If, How, and When. 1.3. Conditional Process Analysis. 1.4. Correlation, Causality, and Statistical Modeling. 1.5. Statistical Software. 1.6. Overview of this Book. 1.7. Chapter Summary.
2. Simple Linear Regression: 2.1. Correlation and Prediction. 2.2. The Simple Linear Regression Equation. 2.3. Statistical Inference. 2.4. Assumptions for Interpretation and Statistical Inference. 2.5. Chapter Summary.
3. Multiple Linear Regression: 3.1. The Multiple Linear Regression Equation. 3.2. Partial Association and Statistical Control. 3.3. Statistical Inference in Multiple Regression. 3.4. Statistical and Conceptual Diagrams. 3.5. Chapter Summary.
II. MEDIATION ANALYSIS
4. The Simple Mediation Model: 4.1. The Simple Mediation Model. 4.2. Estimation of the Direct, Indirect, and Total Effects of X. 4.3. Example with Dichotomous X: The Influence of Presumed Media Influence. 4.4. Statistical Inference. 4.5. An Example with Continuous X: Economic Stress among Small Business Owners. 4.6. Chapter Summary.
5. Multiple Mediator Models: 5.1. The Parallel Multiple Mediator Model. 5.2. Example Using the Presumed Media Influence Study. 5.3. Statistical Inference. 5.4. The Serial Multiple Mediator Model. 5.5. Complementarity and Competition among Mediators. 5.6. OLS Regression versus Structural Equation Modeling. 5.7. Chapter Summary.
6. Miscellaneous Topics in Mediation Analysis: 6.1. What About Baron and Kenny? 6.2. Confounding and Causal Order. 6.3. Effect Size. 6.4. Multiple Xs or Ys: Analyze Separately or Simultaneously? 6.5. Reporting a Mediation Analysis. 6.6. Chapter Summary.
III. MODERATION ANALYSIS
7. Fundamentals of Moderation Analysis: 7.1. Conditional and Unconditional Effects. 7.2. An Example: Sex Discrimination in the Workplace. 7.3. Visualizing Moderation. 7.4. Probing an Interaction. 7.5. Chapter Summary.
8. Extending Moderation Analysis Principles: 8.1. Moderation Involving a Dichotomous Moderator. 8.2. Interaction between Two Quantitative Variables. 8.3. Hierarchical versus Simultaneous Variable Entry. 8.4. The Equivalence between Moderated Regression Analysis and a 2 x 2 Factorial Analysis of Variance. 8.5. Chapter Summary.
9. Miscellaneous Topics in Moderation Analysis: 9.1. Truths and Myths about Mean Centering. 9.2. The Estimation and Interpretation of Standardized Regression Coefficients in a Moderation Analysis. 9.3. Artificial Categorization and Subgroups Analysis. 9.4. More Than One Moderator. 9.5. Reporting a Moderation Analysis. 9.6. Chapter Summary.
IV. CONDITIONAL PROCESS ANALYSIS
10. Conditional Process Analysis: 10.1. Examples of Conditional Process Models in the Literature. 10.2. Conditional Direct and Indirect Effects. 10.3. Example: Hiding Your Feelings from Your Work Team. 10.4. Statistical Inference. 10.5. Conditional Process Analysis in PROCESS. 10.6. Chapter Summary.
11. Further Examples of Conditional Process Analysis: 11.1. Revisiting the Sexual Discrimination Study. 11.2. Moderation of the Direct and Indirect Effects in a Conditional Process Model. 11.3. Visualizing the Direct and Indirect Effects. 11.4. Mediated Moderation. 11.5. Chapter Summary.
12. Miscellaneous Topics in Conditional Process Analysis: 12.1. A Strategy for Approaching Your Analysis. 12.2. Can a Variable Simultaneously Mediate and Moderate Another Variable's Effect? 12.3. Comparing Conditional Indirect Effects and a Formal Test of Moderated Mediation. 12.4. The Pitfalls of Subgroups Analysis. 12.5. Writing about Conditional Process Modeling. 12.6. Chapter Summary.
Appendix A. Using PROCESS.
Appendix B. Monte Carlo Confidence Intervals in SPSS and SAS.
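Chapter 4 above centers on the simple mediation model X -> M -> Y, where the indirect effect is the product of the a-path (X on M) and the b-path (M on Y controlling for X). A minimal OLS sketch on simulated data (the variable names and true coefficients are illustrative, not from the book; the book's inference on a*b typically uses bootstrap confidence intervals, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a-path: X -> M (true a = 0.5)
y = 0.4 * m + 0.3 * x + rng.normal(size=n)   # b-path (0.4) and direct effect c' (0.3)

def ols(design, response):
    """Least-squares coefficients for a design matrix with an intercept column."""
    X = np.column_stack([np.ones(len(response)), design])
    return np.linalg.lstsq(X, response, rcond=None)[0]

a = ols(x, m)[1]                      # effect of X on M
coefs = ols(np.column_stack([m, x]), y)
b, c_prime = coefs[1], coefs[2]       # effect of M on Y given X; direct effect of X
indirect = a * b                      # mediated (indirect) effect of X on Y
total = c_prime + indirect            # total effect decomposition
```

The decomposition total = c' + a*b is the key identity of Chapter 4.2; moderation (Part III) enters by letting a, b, or c' depend on a moderator.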
03 Mar 1992
TL;DR: In this book, the authors present the logic of hierarchical linear models together with principles of estimation and hypothesis testing for such models, and illustrate their application in organizational research, the study of individual change, and meta-analysis.
Abstract: Contents:
Introduction
The Logic of Hierarchical Linear Models
Principles of Estimation and Hypothesis Testing for Hierarchical Linear Models
An Illustration
Applications in Organizational Research
Applications in the Study of Individual Change
Applications in Meta-Analysis and Other Cases Where Level-1 Variances Are Known
Three-Level Models
Assessing the Adequacy of Hierarchical Models
Technical Appendix
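The core quantity in a two-level (random-intercept) hierarchical model is how total variance splits into between-group and within-group components, summarized by the intraclass correlation. A minimal sketch on simulated grouped data using the classical one-way ANOVA (method-of-moments) variance-component estimator; the group sizes and true variances are hypothetical, and the books' preferred maximum likelihood machinery is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per = 30, 20
group_effects = rng.normal(scale=1.0, size=n_groups)           # level-2 variance tau^2 = 1
y = group_effects[:, None] + rng.normal(scale=2.0, size=(n_groups, n_per))  # level-1 sigma^2 = 4

# One-way ANOVA variance-component estimates
group_means = y.mean(axis=1)
msb = n_per * group_means.var(ddof=1)                   # between-group mean square
msw = ((y - group_means[:, None]) ** 2).sum() / (n_groups * (n_per - 1))  # within
tau2 = max((msb - msw) / n_per, 0.0)                    # estimated level-2 variance
icc = tau2 / (tau2 + msw)                               # intraclass correlation
```

With these true variances the population ICC is 1/(1+4) = 0.2; a nonzero ICC is what motivates hierarchical modeling over ordinary regression, since it signals that observations within a group are not independent.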
01 Nov 1980, Psychological Bulletin
TL;DR: In this article, a general null model based on modified independence among variables is proposed to provide an additional reference point for the statistical and scientific evaluation of covariance structure models; the authors also emphasize the importance of supplementing statistical evaluation with incremental fit indices based on the comparison of hierarchical models.
Abstract: Factor analysis, path analysis, structural equation modeling, and related multivariate statistical methods are based on maximum likelihood or generalized least squares estimation developed for covariance structure models. Large-sample theory provides a chi-square goodness-of-fit test for comparing a model against a general alternative model based on correlated variables. This model comparison is insufficient for model evaluation: In large samples virtually any model tends to be rejected as inadequate, and in small samples various competing models, if evaluated, might be equally acceptable. A general null model based on modified independence among variables is proposed to provide an additional reference point for the statistical and scientific evaluation of covariance structure models. Use of the null model in the context of a procedure that sequentially evaluates the statistical necessity of various sets of parameters places statistical methods in covariance structure analysis into a more complete framework. The concepts of ideal models and pseudo chi-square tests are introduced, and their roles in hypothesis testing are developed. The importance of supplementing statistical evaluation with incremental fit indices associated with the comparison of hierarchical models is also emphasized. Normed and nonnormed fit indices are developed and illustrated.
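The normed and nonnormed fit indices introduced above compare a fitted model's chi-square against the null (modified independence) model's chi-square. A minimal sketch of both indices; the chi-square and degree-of-freedom values fed in are hypothetical, not from the article:

```python
def normed_fit_index(chi2_null, chi2_model):
    """Bentler-Bonett normed fit index (NFI): proportional reduction
    in chi-square relative to the null model; bounded in [0, 1]."""
    return (chi2_null - chi2_model) / chi2_null

def nonnormed_fit_index(chi2_null, df_null, chi2_model, df_model):
    """Nonnormed fit index (NNFI/TLI): compares chi-square/df ratios,
    penalizing model complexity; can fall outside [0, 1]."""
    ratio_null = chi2_null / df_null
    ratio_model = chi2_model / df_model
    return (ratio_null - ratio_model) / (ratio_null - 1.0)

# Hypothetical example: a well-fitting model in a large sample whose
# chi-square test would still reject it at conventional levels.
nfi = normed_fit_index(900.0, 60.0)                 # 0.9333...
nnfi = nonnormed_fit_index(900.0, 36, 60.0, 24)     # 0.9375
```

This is exactly the abstract's point: in large samples the chi-square test alone rejects nearly everything, while the incremental indices show the model recovers most of the covariance structure relative to the null baseline.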
TL;DR: In this paper, the estimation and testing of long-run relations in economic modeling are addressed. Starting from a vector autoregressive (VAR) model, the hypothesis of cointegration is formulated as the hypothesis of reduced rank of the long-run impact matrix.
Abstract: The estimation and testing of long-run relations in economic modeling are addressed. Starting with a vector autoregressive (VAR) model, the hypothesis of cointegration is formulated as the hypothesis of reduced rank of the long-run impact matrix. This is given in a simple parametric form that allows the application of the method of maximum likelihood and likelihood ratio tests. In this way, one can derive estimates and test statistics for the hypothesis of a given number of cointegration vectors, as well as estimates and tests for linear hypotheses about the cointegration vectors and their weights. The asymptotic inferences concerning the number of cointegrating vectors involve nonstandard distributions. Inference concerning linear restrictions on the cointegration vectors and their weights can be performed using the usual chi squared methods. In the case of linear restrictions on beta, a Wald test procedure is suggested. The proposed methods are illustrated by money demand data from the Danish and Finnish economies.
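The likelihood ratio test for the number of cointegrating vectors described above reduces, after the reduced-rank regression step, to a "trace" statistic built from the estimated eigenvalues of the long-run impact problem: for H0: rank <= r, the statistic is -T * sum(log(1 - lambda_i)) over the smallest eigenvalues i > r. A minimal sketch of that final step (the eigenvalues and sample size are hypothetical; the preceding VAR estimation and the nonstandard critical values are not shown):

```python
import math

def trace_statistics(eigenvalues, n_obs):
    """Johansen trace statistics: one value per null hypothesis rank <= r,
    computed from eigenvalues sorted in decreasing order."""
    stats = []
    for r in range(len(eigenvalues)):
        stat = -n_obs * sum(math.log(1.0 - lam) for lam in eigenvalues[r:])
        stats.append(stat)
    return stats

# Hypothetical eigenvalues from a 3-variable system, T = 100 observations.
stats = trace_statistics([0.30, 0.10, 0.02], 100)
# Compare each statistic against nonstandard critical values (not chi-square),
# accepting the smallest r whose trace statistic is not significant.
```

The sequence is necessarily decreasing in r; inference proceeds from r = 0 upward, stopping at the first non-rejection.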