Author

Keith F. Widaman

Bio: Keith F. Widaman is an academic researcher at the University of California, Riverside. He has contributed to research in the topics of Cognition and Psychology, has an h-index of 70, and has co-authored 240 publications receiving 31,852 citations. Previous affiliations of Keith F. Widaman include the University of California, Berkeley and the University of California.


Papers
Journal Article (DOI)
TL;DR: In this article, the authors examine the controversial practice of using parcels of items as manifest variables in structural equation modeling (SEM) procedures and conclude that the unconsidered use of parcels is never warranted, while, at the same time, the considered use of parcels cannot be dismissed out of hand.
Abstract: We examine the controversial practice of using parcels of items as manifest variables in structural equation modeling (SEM) procedures. After detailing arguments pro and con, we conclude that the unconsidered use of parcels is never warranted, while, at the same time, the considered use of parcels cannot be dismissed out of hand. In large part, the decision to parcel or not depends on one's philosophical stance regarding scientific inquiry (e.g., empiricist vs. pragmatist) and the substantive goal of a study (e.g., to understand the structure of a set of items or to examine the nature of a set of constructs). Prior to creating parcels, however, we recommend strongly that investigators acquire a thorough understanding of the nature and dimensionality of the items to be parceled. With this knowledge in hand, various techniques for creating parcels can be utilized to minimize potential pitfalls and to optimize the measurement structure of constructs in SEM procedures. A number of parceling techniques are described…
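
For illustration, here is a minimal sketch of one common parceling strategy, random assignment of items to parcels followed by averaging within each parcel; the data, parcel count, and function name are hypothetical, and the paper surveys several other techniques (e.g., content-based or factorial assignment).

```python
import numpy as np

def make_parcels(items: np.ndarray, n_parcels: int, seed=None) -> np.ndarray:
    """Randomly assign item columns to parcels and average within each parcel.

    items: (n_respondents, n_items) array of item scores.
    Returns an (n_respondents, n_parcels) array of parcel scores.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(items.shape[1])
    # Deal items into parcels round-robin so parcel sizes differ by at most one.
    groups = [order[i::n_parcels] for i in range(n_parcels)]
    return np.column_stack([items[:, g].mean(axis=1) for g in groups])

# Hypothetical example: 300 respondents, 12 five-point items, 3 parcels of 4 items.
items = np.random.default_rng(0).integers(1, 6, size=(300, 12)).astype(float)
parcels = make_parcels(items, n_parcels=3, seed=42)
print(parcels.shape)  # (300, 3) -- parcels then serve as manifest variables in SEM
```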

5,426 citations

Journal Article (DOI)
TL;DR: A fundamental misconception about this issue is that the minimum sample size, or the minimum ratio of sample size to the number of variables, required to obtain factor solutions that are adequately stable and that correspond closely to population factors is invariant across studies.
Abstract: The factor analysis literature includes a range of recommendations regarding the minimum sample size necessary to obtain factor solutions that are adequately stable and that correspond closely to population factors. A fundamental misconception about this issue is that the minimum sample size, or the minimum ratio of sample size to the number of variables, is invariant across studies…
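
A small simulation sketch of the underlying point, with entirely hypothetical population loadings: how closely sample loadings recover population loadings (here measured by Tucker's congruence coefficient) improves with sample size, but the rate depends on the sizes of the loadings themselves, so no single minimum N can be correct across studies.

```python
import numpy as np

rng = np.random.default_rng(1)
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4, 0.3])  # hypothetical population loadings
uniquenesses = 1.0 - loadings**2

def congruence(a, b):
    """Tucker's congruence coefficient between two loading vectors."""
    return abs(a @ b) / np.sqrt((a @ a) * (b @ b))

for n in (60, 200, 1000):
    # Simulate one-factor data: x = factor * loading + unique error.
    f = rng.standard_normal(n)
    x = np.outer(f, loadings) + rng.standard_normal((n, loadings.size)) * np.sqrt(uniquenesses)
    # Estimate loadings from the first eigenvector of the sample correlation matrix.
    r = np.corrcoef(x, rowvar=False)
    vals, vecs = np.linalg.eigh(r)        # eigenvalues in ascending order
    est = vecs[:, -1] * np.sqrt(vals[-1])
    print(n, round(congruence(est, loadings), 3))  # recovery improves with n
```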

4,166 citations

Journal Article (DOI)
TL;DR: The goals of exploratory and confirmatory factor analysis are described and procedural guidelines for each approach are summarized in this article, emphasizing the use of factor analysis in developing and refining clinical measures, in assessing the invariance of measures across samples, and in evaluating multitrait-multimethod data.
Abstract: The goals of both exploratory and confirmatory factor analysis are described and procedural guidelines for each approach are summarized, emphasizing the use of factor analysis in developing and refining clinical measures. For exploratory factor analysis, a rationale is presented for selecting between principal components analysis and common factor analysis depending on whether the research goal involves either identification of latent constructs or data reduction. Confirmatory factor analysis using structural equation modeling is described for use in validating the dimensional structure of a measure. Additionally, the uses of confirmatory factor analysis for assessing the invariance of measures across samples and for evaluating multitrait-multimethod data are also briefly described. Suggestions are offered for handling common problems with item-level data, and examples illustrating potential difficulties with confirming dimensional structures from initial exploratory analyses are reviewed.
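
A compact numerical sketch of the PCA-versus-common-factor distinction the abstract draws, using invented data: principal components analysis factors the correlation matrix with unities on the diagonal (total variance), while principal-axis factoring, one common factor method, substitutes communality estimates so that only shared variance is analyzed.

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented data: six indicators sharing one common factor plus unique noise.
common = rng.standard_normal((500, 1))
x = 0.7 * common + rng.standard_normal((500, 6))
r = np.corrcoef(x, rowvar=False)

# Principal components: factor the correlation matrix as-is (unities on the diagonal).
pc_vals, pc_vecs = np.linalg.eigh(r)
pc_loadings = pc_vecs[:, -1] * np.sqrt(pc_vals[-1])

# Principal-axis factoring: put communality estimates (squared multiple
# correlations) on the diagonal, so only shared variance is analyzed.
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(r))
r_reduced = r.copy()
np.fill_diagonal(r_reduced, smc)
pa_vals, pa_vecs = np.linalg.eigh(r_reduced)
pa_loadings = pa_vecs[:, -1] * np.sqrt(pa_vals[-1])

# Component loadings exceed common-factor loadings in magnitude because
# PCA mixes unique variance into the "factor".
print(np.round(np.abs(pc_loadings), 2))
print(np.round(np.abs(pa_loadings), 2))
```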

3,623 citations

Journal Article (DOI)
TL;DR: This study investigated the utility of confirmatory factor analysis (CFA) and item response theory (IRT) models for testing the comparability of psychological measurements, using mood ratings collected in Minnesota and China as a test case.
Abstract: This study investigated the utility of confirmatory factor analysis (CFA) and item response theory (IRT) models for testing the comparability of psychological measurements. Both procedures were used to investigate whether mood ratings collected in Minnesota and China were comparable. Several issues were addressed. The first issue was that of establishing a common measurement scale across groups, which involves full or partial measurement invariance of trait indicators. It is shown that using CFA or IRT models, test items that function differentially as trait indicators across groups need not interfere with comparing examinees on the same trait dimension. Second, the issue of model fit was addressed. It is proposed that person-fit statistics be used to judge the practical fit of IRT models. Finally, topics for future research are suggested. Much research and debate has been motivated by the question of how to establish that a test measures the same trait dimension, in the same way, when administered to two or more qualitatively distinct groups (e.g., men and women). The question can also be posed as follows: Are test scores for individuals who belong to different examinee populations comparable on the same measurement scale? The objectives of this study were to review linear confirmatory factor analysis (CFA; Long, 1983) and item response theory (IRT; Lord, 1980) approaches to addressing this important question and to suggest, by way of real-data application, advantages and disadvantages of each approach.
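
As a worked illustration of the IRT side of this comparison, here is the two-parameter logistic model standard in this literature, with the invariance condition written out; the notation below is a paraphrase for exposition, not quoted from the paper.

```latex
% Two-parameter logistic (2PL) model: probability that examinee j, with
% latent trait \theta_j, endorses item i with discrimination a_i and
% difficulty b_i.
P(x_{ij} = 1 \mid \theta_j) \;=\; \frac{1}{1 + \exp\{-a_i(\theta_j - b_i)\}}

% Full measurement invariance across groups g and g' requires equal item
% parameters for every item i; differential item functioning (DIF) is a
% violation for some items, and partial invariance holds when the
% remaining items can be constrained equal:
a_i^{(g)} = a_i^{(g')} \quad\text{and}\quad b_i^{(g)} = b_i^{(g')}
```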

1,140 citations

Journal Article (DOI)
TL;DR: In this paper, a taxonomy of covariance structure models for multitrait-multimethod data is presented, which can be used to test the significance of the convergent and discriminant validity shown by a set of measures as well as the extent of method variance.
Abstract: A taxonomy of covariance structure models for representing multitrait-multimethod data is presented. Using this taxonomy, it is possible to formulate alternate series of hierarchically ordered, or nested, models for such data. By specifying hierarchically nested models, significance tests of differences between competing models are available. Within the proposed framework, specific model comparisons may be formulated to test the significance of the convergent and the discriminant validity shown by a set of measures as well as the extent of method variance. Application of the proposed framework to three multitrait-multimethod matrices allowed resolution of contradictory conclusions drawn in previously published work, demonstrating the utility of the present approach.
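
The hierarchically nested comparisons described here reduce to chi-square difference (likelihood-ratio) tests between competing models; a minimal sketch follows, with invented fit statistics standing in for, e.g., a model with versus without method factors.

```python
from scipy.stats import chi2

def chi2_difference_test(chi2_full, df_full, chi2_nested, df_nested):
    """Likelihood-ratio test comparing a nested (more constrained) model
    against the fuller model it is nested within."""
    d_chi2 = chi2_nested - chi2_full   # the constrained model fits no better
    d_df = df_nested - df_full
    p = chi2.sf(d_chi2, d_df)
    return d_chi2, d_df, p

# Hypothetical fit statistics: dropping method factors worsens fit significantly.
d_chi2, d_df, p = chi2_difference_test(chi2_full=85.2, df_full=48,
                                       chi2_nested=142.7, df_nested=57)
print(f"delta chi2 = {d_chi2:.1f} on {d_df} df, p = {p:.4f}")
```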

1,015 citations


Cited by
Journal Article (DOI)
TL;DR: In this paper, the authors identify six categories of self-reports and discuss such problems as common method variance, the consistency motif, and social desirability, as well as statistical and post hoc remedies and some procedural methods for dealing with artifactual bias.

14,482 citations

Journal Article (DOI)
TL;DR: In this paper, the authors examined how goodness-of-fit indexes (GFIs) change when cross-group constraints are imposed on a measurement model, and recommend ΔCFI, ΔGamma hat, and ΔMcDonald's Noncentrality Index because these are independent of both model complexity and sample size.
Abstract: Measurement invariance is usually tested using Multigroup Confirmatory Factor Analysis, which examines the change in the goodness-of-fit index (GFI) when cross-group constraints are imposed on a measurement model. Although many studies have examined the properties of GFI as indicators of overall model fit for single-group data, there have been none to date that examine how GFIs change when between-group constraints are added to a measurement model. The lack of a consensus about what constitutes significant GFI differences places limits on measurement invariance testing. We examine 20 GFIs based on the minimum fit function. A simulation under the two-group situation was used to examine changes in the GFIs (ΔGFIs) when invariance constraints were added. Based on the results, we recommend using Δcomparative fit index, ΔGamma hat, and ΔMcDonald's Noncentrality Index to evaluate measurement invariance. These three ΔGFIs are independent of both model complexity and sample size, and are not correlated with the overall fit measures.
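
A minimal sketch of the recommended ΔCFI criterion with invented fit statistics; the −0.01 cutoff is the convention associated with this literature, but every number below is hypothetical.

```python
def cfi(chi2_model, df_model, chi2_null, df_null):
    """Comparative fit index from the model and null (baseline) chi-square fits."""
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_model - df_model, chi2_null - df_null, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Hypothetical two-group invariance test: configural vs. metric (equal loadings).
cfi_configural = cfi(chi2_model=210.4, df_model=118, chi2_null=2450.0, df_null=132)
cfi_metric = cfi(chi2_model=228.9, df_model=128, chi2_null=2450.0, df_null=132)
delta_cfi = cfi_metric - cfi_configural
print(round(delta_cfi, 4),
      "invariance tenable" if delta_cfi >= -0.01 else "invariance rejected")
```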

10,597 citations

Journal Article (DOI)
TL;DR: Two general approaches that come highly recommended, maximum likelihood (ML) and Bayesian multiple imputation (MI), are presented, along with newer developments that may eventually extend the ML and MI methods that currently represent the state of the art.
Abstract: Statistical procedures for missing data have vastly improved, yet misconception and unsound practice still abound. The authors frame the missing-data problem, review methods, offer advice, and raise issues that remain unresolved. They clear up common misunderstandings regarding the missing at random (MAR) concept. They summarize the evidence against older procedures and, with few exceptions, discourage their use. They present, in both technical and practical language, 2 general approaches that come highly recommended: maximum likelihood (ML) and Bayesian multiple imputation (MI). Newer developments are discussed, including some for dealing with missing data that are not MAR. Although not yet in the mainstream, these procedures may eventually extend the ML and MI methods that currently represent the state of the art.
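
A sketch of the multiple-imputation workflow using scikit-learn's IterativeImputer as a stand-in MI engine (a chained-equations-style imputer; not the software the authors discuss): generate m completed datasets by sampling from an approximate posterior, analyze each, and pool the estimates.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
x = rng.standard_normal((200, 4))
x[rng.random(x.shape) < 0.15] = np.nan     # hypothetical 15% missing at random

m = 5                                       # number of imputations
estimates = []
for seed in range(m):
    # sample_posterior=True draws imputations rather than using point predictions.
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = imputer.fit_transform(x)
    estimates.append(completed.mean(axis=0))  # analysis step: column means

# Pooling step (Rubin's rules for the point estimate: average across imputations).
pooled = np.mean(estimates, axis=0)
print(np.round(pooled, 3))
```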

10,568 citations

Journal Article (DOI)
TL;DR: The meanings of the terms "method" and "method bias" are explored, the question of whether method biases influence all measures equally is examined, and the evidence of the effects that method biases have on individual measures and on the covariation between different constructs is reviewed.
Abstract: Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms “method” and “method bias” and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.
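
One widely used statistical diagnostic reviewed in this literature is Harman's single-factor test; a minimal numpy sketch with invented data follows. The review itself cautions that this diagnostic is insensitive and is not a remedy, so it is shown only as an illustration.

```python
import numpy as np

def harman_single_factor_share(x: np.ndarray) -> float:
    """Share of total variance captured by the first unrotated factor
    (approximated here by the first principal component of the correlations)."""
    r = np.corrcoef(x, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(r)   # ascending order
    return eigenvalues[-1] / eigenvalues.sum()

# Hypothetical survey data: 250 respondents, 10 items.
x = np.random.default_rng(4).standard_normal((250, 10))
share = harman_single_factor_share(x)
print(f"first factor accounts for {share:.1%} of total variance")
# A single dominant factor (e.g., a majority of the variance) is read as a
# warning sign of common method variance -- though, as the review notes,
# passing this test does not rule method bias out.
```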

8,719 citations

Journal Article
TL;DR: In this paper, the authors collect, in one article, information that will allow researchers and practitioners to understand the various choices available through popular software packages, and to make decisions about "best practices" in exploratory factor analysis.
Abstract: Exploratory factor analysis (EFA) is a complex, multi-step process. The goal of this paper is to collect, in one article, information that will allow researchers and practitioners to understand the various choices available through popular software packages, and to make decisions about "best practices" in exploratory factor analysis. In particular, this paper provides practical information on making decisions regarding (a) extraction, (b) rotation, (c) the number of factors to interpret, and (d) sample size.
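
For the number-of-factors decision the paper discusses, Horn's parallel analysis is a commonly recommended criterion; the sketch below uses simulated data, with all inputs hypothetical.

```python
import numpy as np

def parallel_analysis(x: np.ndarray, n_sims: int = 100, seed: int = 0) -> int:
    """Retain factors whose observed eigenvalues exceed the mean eigenvalues
    obtained from random normal data of the same dimensions (Horn's method)."""
    rng = np.random.default_rng(seed)
    n, p = x.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        sim = rng.standard_normal((n, p))
        random_eigs[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    return int(np.sum(observed > random_eigs.mean(axis=0)))

# Invented data generated from two orthogonal factors, four items each.
rng = np.random.default_rng(5)
f = rng.standard_normal((300, 2))
loadings = np.zeros((2, 8))
loadings[0, :4] = 0.7          # items 1-4 load on factor 1
loadings[1, 4:] = 0.7          # items 5-8 load on factor 2
x = f @ loadings + np.sqrt(1 - 0.49) * rng.standard_normal((300, 8))
print(parallel_analysis(x))    # typically suggests retaining 2 factors
```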

7,865 citations