scispace - formally typeset
Author

Robert C. MacCallum

Other affiliations: Ohio State University
Bio: Robert C. MacCallum is an academic researcher from the University of North Carolina at Chapel Hill. He has contributed to research on topics including covariance and goodness of fit, has an h-index of 47, and has co-authored 78 publications receiving 38,797 citations. Previous affiliations of Robert C. MacCallum include Ohio State University.


Papers
Journal ArticleDOI
TL;DR: In this article, a framework for hypothesis testing and power analysis in the assessment of fit of covariance structure models is presented, where the value of confidence intervals for fit indices is emphasized.
Abstract: A framework for hypothesis testing and power analysis in the assessment of fit of covariance structure models is presented. We emphasize the value of confidence intervals for fit indices, and we stress the relationship of confidence intervals to a framework for hypothesis testing. The approach allows for testing null hypotheses of not-good fit, reversing the role of the null hypothesis in conventional tests of model fit, so that a significant result provides strong support for good fit. The approach also allows for direct estimation of power, where effect size is defined in terms of a null and alternative value of the root-mean-square error of approximation fit index proposed by J. H. Steiger and J. M. Lind (1980). It is also feasible to determine minimum sample size required to achieve a given level of power for any test of fit in this framework. Computer programs and examples are provided for power analyses and calculation of minimum sample sizes.
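The power calculation described in this framework treats the test statistic as noncentral chi-square, with a noncentrality parameter determined by sample size, degrees of freedom, and the RMSEA value under the null and alternative hypotheses. A minimal Monte Carlo sketch of that calculation follows; the function name, defaults, and simulation approach are illustrative, not taken from the paper or its accompanying programs:

```python
import numpy as np

def rmsea_power(n, df, eps0=0.05, epsa=0.08, alpha=0.05,
                reps=200_000, seed=0):
    """Approximate power of an RMSEA-based test of model fit by simulation."""
    # Noncentrality parameters implied by the null and alternative RMSEA values
    lam0 = (n - 1) * df * eps0**2
    lama = (n - 1) * df * epsa**2
    rng = np.random.default_rng(seed)
    # Sampling distributions of the test statistic under H0 and H1
    sim0 = rng.noncentral_chisquare(df, lam0, reps)
    sima = rng.noncentral_chisquare(df, lama, reps)
    crit = np.quantile(sim0, 1 - alpha)  # critical value under H0
    return float(np.mean(sima > crit))   # power: rejection rate under H1
```

Minimum sample size for a target power can then be found by increasing `n` until the returned power reaches the desired level, mirroring the minimum-sample-size calculations the abstract describes.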

8,401 citations

Journal ArticleDOI
TL;DR: This paper reviews the major design and analytical decisions that must be made when conducting exploratory factor analysis, notes that each of these decisions has important consequences for the obtained results, and discusses the implications of these practices for psychological research.
Abstract: Despite the widespread use of exploratory factor analysis in psychological research, researchers often make questionable decisions when conducting these analyses. This article reviews the major design and analytical decisions that must be made when conducting a factor analysis and notes that each of these decisions has important consequences for the obtained results. Recommendations that have been made in the methodological literature are discussed. Analyses of 3 existing empirical data sets are used to illustrate how questionable decisions in conducting factor analyses can yield problematic results. The article presents a survey of 2 prominent journals that suggests that researchers routinely conduct analyses using such questionable methods. The implications of these practices for psychological research are discussed, and the reasons for current practices are reviewed.

7,590 citations

Journal ArticleDOI
TL;DR: A fundamental misconception about this issue is that the minimum sample size needed to obtain factor solutions that are adequately stable and that correspond closely to population factors is invariant across studies.
Abstract: The factor analysis literature includes a range of recommendations regarding the minimum sample size necessary to obtain factor solutions that are adequately stable and that correspond closely to population factors. A fundamental misconception about this issue is that the minimum sample size, or the

4,166 citations

Journal ArticleDOI
TL;DR: The authors present the case that dichotomization is rarely defensible and often will yield misleading results.
Abstract: The authors examine the practice of dichotomization of quantitative measures, wherein relationships among variables are examined after 1 or more variables have been converted to dichotomous variables by splitting the sample at some point on the scale(s) of measurement. A common form of dichotomization is the median split, where the independent variable is split at the median to form high and low groups, which are then compared with respect to their means on the dependent variable. The consequences of dichotomization for measurement and statistical analyses are illustrated and discussed. The use of dichotomization in practice is described, and justifications that are offered for such usage are examined. The authors present the case that dichotomization is rarely defensible and often will yield misleading results.

We consider here some simple statistical procedures for studying relationships of one or more independent variables to one dependent variable, where all variables are quantitative in nature and are measured on meaningful numerical scales. Such measures are often referred to as individual-differences measures, meaning that observed values of such measures are interpretable as reflecting individual differences on the attribute of interest. It is of course straightforward to analyze such data using correlational methods. In the case of a single independent variable, one can use simple linear regression and/or obtain a simple correlation coefficient. In the case of multiple independent variables, one can use multiple regression, possibly including interaction terms. Such methods are routinely used in practice. However, another approach to analysis of such data is also rather widely used. Considering the case of one independent variable, many investigators begin by converting that variable into a dichotomous variable by splitting the scale at some point and designating individuals above and below that point as defining
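The attenuation the authors describe is easy to demonstrate by simulation: splitting a continuous predictor at its median and correlating the resulting dichotomy with the outcome recovers only part of the original correlation (for bivariate normal data, roughly a factor of sqrt(2/pi) ≈ .80). A minimal sketch, assuming bivariate normal data with a true correlation of .50 (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
# Construct y so that corr(x, y) = .50 in the population
y = 0.5 * x + np.sqrt(1 - 0.25) * rng.standard_normal(n)

r_full = np.corrcoef(x, y)[0, 1]          # correlation with x intact
x_split = (x > np.median(x)).astype(float)  # median split: high/low groups
r_split = np.corrcoef(x_split, y)[0, 1]   # correlation after dichotomization
```

With this setup `r_full` lands near .50 while `r_split` lands near .40, illustrating the information loss that makes median splits hard to defend.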

2,949 citations

Journal ArticleDOI
TL;DR: This chapter presents a review of applications of structural equation modeling (SEM) published in psychological research journals in recent years and focuses first on the variety of research designs and substantive issues to which SEM can be applied productively.
Abstract: This chapter presents a review of applications of structural equation modeling (SEM) published in psychological research journals in recent years. We focus first on the variety of research designs and substantive issues to which SEM can be applied productively. We then discuss a number of methodological problems and issues of concern that characterize some of this literature. Although it is clear that SEM is a powerful tool that is being used to great benefit in psychological research, it is also clear that the applied SEM literature is characterized by some chronic problems and that this literature can be considerably improved by greater attention to these issues.

2,489 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the adequacy of the conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice were examined, and the results suggest that, for the ML method, a cutoff value close to.95 for TLI, BL89, CFI, RNI, and G...
Abstract: This article examines the adequacy of the “rules of thumb” conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice. Using a 2‐index presentation strategy, which includes using the maximum likelihood (ML)‐based standardized root mean squared residual (SRMR) and supplementing it with either Tucker‐Lewis Index (TLI), Bollen's (1989) Fit Index (BL89), Relative Noncentrality Index (RNI), Comparative Fit Index (CFI), Gamma Hat, McDonald's Centrality Index (Mc), or root mean squared error of approximation (RMSEA), various combinations of cutoff values from selected ranges of cutoff criteria for the ML‐based SRMR and a given supplemental fit index were used to calculate rejection rates for various types of true‐population and misspecified models; that is, models with misspecified factor covariance(s) and models with misspecified factor loading(s). The results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and G...
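The fit indexes compared in this study can be computed directly from the chi-square statistics of the fitted model and the baseline (independence) model. The sketch below uses the standard textbook definitions of RMSEA, CFI, and TLI; the function name and example values are illustrative, and some software uses N rather than N − 1 in the RMSEA denominator:

```python
import math

def fit_indices(chisq, df, chisq_base, df_base, n):
    """RMSEA, CFI, and TLI from model and baseline chi-square statistics."""
    # RMSEA: root mean squared error of approximation (per-df misfit)
    rmsea = math.sqrt(max(chisq - df, 0) / (df * (n - 1)))
    # CFI: improvement in noncentrality relative to the baseline model
    cfi = 1 - max(chisq - df, 0) / max(chisq_base - df_base, chisq - df, 1e-12)
    # TLI (Tucker-Lewis index): per-df improvement over the baseline model
    tli = ((chisq_base / df_base) - (chisq / df)) / ((chisq_base / df_base) - 1)
    return rmsea, cfi, tli

# Hypothetical example: a model with chi-square 85 on 50 df, baseline
# chi-square 900 on 66 df, N = 200
rmsea, cfi, tli = fit_indices(85, 50, 900, 66, 200)
```

For these hypothetical values the model clears the cutoffs discussed in the abstract (RMSEA ≈ .06, CFI ≈ .96, TLI ≈ .94).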

76,383 citations

Journal ArticleDOI
TL;DR: The extent to which method biases influence behavioral research results is examined, potential sources of method biases are identified, the cognitive processes through which method bias influence responses to measures are discussed, the many different procedural and statistical techniques that can be used to control method biases is evaluated, and recommendations for how to select appropriate procedural and Statistical remedies are provided.
Abstract: Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.

52,531 citations

Journal ArticleDOI
TL;DR: In this paper, the authors provide guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development, and present a comprehensive, two-step modeling approach that employs a series of nested models and sequential chi-square difference tests.
Abstract: In this article, we provide guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development. We present a comprehensive, two-step modeling approach that employs a series of nested models and sequential chi-square difference tests. We discuss the comparative advantages of this approach over a one-step approach. Considerations in specification, assessment of fit, and respecification of measurement models using confirmatory factor analysis are reviewed. As background to the two-step approach, the distinction between exploratory and confirmatory analysis, the distinction between complementary approaches for theory testing versus predictive application, and some developments in estimation methods also are discussed.

34,720 citations

Journal ArticleDOI
TL;DR: The results suggest that it is important to recognize both the unity and diversity of executive functions and that latent variable analysis is a useful approach to studying the organization and roles of executive functions.

12,182 citations

Journal ArticleDOI
TL;DR: The factor structure of the combined BDI and BAI items was virtually identical to that reported by Beck for a sample of diagnosed depressed and anxious patients, supporting the view that these clinical states are more severe expressions of the same states that may be discerned in normals.

9,443 citations