Journal ArticleDOI

Multidimensionality and Structural Coefficient Bias in Structural Equation Modeling: A Bifactor Perspective

TL;DR: In this article, the authors consider several indices to indicate whether multidimensional data are "unidimensional enough" to fit with a unidimensional measurement model, especially when the goal is to avoid excessive bias in structural parameter estimates.
Abstract: In this study, the authors consider several indices to indicate whether multidimensional data are “unidimensional enough” to fit with a unidimensional measurement model, especially when the goal is to avoid excessive bias in structural parameter estimates. They examine two factor strength indices (the explained common variance and omega hierarchical) and several model fit indices (root mean square error of approximation, comparative fit index, and standardized root mean square residual). These statistics are compared in population correlation matrices determined by known bifactor structures that vary on the (a) relative strength of general and group factor loadings, (b) number of group factors, and (c) number of items or indicators. When fit with a unidimensional measurement model, the degree of structural coefficient bias depends strongly and inversely on explained common variance, but its effects are moderated by the percentage of correlations uncontaminated by multidimensionality, a statistic that rise...
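The factor-strength and structure indices discussed in this abstract can be computed directly from a standardized bifactor loading matrix. The following is a minimal sketch, assuming a hypothetical nine-item solution with one general and three group factors, of how explained common variance (ECV), omega hierarchical, and the percentage of uncontaminated correlations (PUC) are obtained from the standard formulas; it is an illustration, not the authors' simulation code.

```python
import numpy as np

# Hypothetical standardized bifactor loadings: 9 items, 1 general + 3 group factors
# (each group factor spans 3 items; zeros elsewhere). Illustrative values only.
general = np.array([.6, .6, .6, .5, .5, .5, .7, .7, .7])
group = np.zeros((9, 3))
group[0:3, 0] = .4
group[3:6, 1] = .5
group[6:9, 2] = .3

# Explained common variance: share of common variance due to the general factor.
ecv = np.sum(general**2) / (np.sum(general**2) + np.sum(group**2))

# Omega hierarchical: proportion of total-score variance due to the general factor
# (orthogonal factors assumed; uniquenesses follow from the standardized loadings).
uniqueness = 1 - general**2 - np.sum(group**2, axis=1)
total_var = np.sum(general)**2 + np.sum(np.sum(group, axis=0)**2) + np.sum(uniqueness)
omega_h = np.sum(general)**2 / total_var

# Percentage of uncontaminated correlations: item pairs whose correlation is
# determined by the general factor alone (pairs that do not share a group factor).
n_items = len(general)
group_sizes = [3, 3, 3]
all_pairs = n_items * (n_items - 1) / 2
within_pairs = sum(k * (k - 1) / 2 for k in group_sizes)
puc = (all_pairs - within_pairs) / all_pairs

print(f"ECV = {ecv:.3f}, omegaH = {omega_h:.3f}, PUC = {puc:.3f}")
```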


Citations
Book ChapterDOI
01 Jan 2013
TL;DR: In this article, the authors present a survey of sales in terms of total units sold in the United States for the years 1999 through 2018.
Abstract: Units sold per period (Period / Notes / No. of Units), covering FY 1999 through FY 2014 plus a transitional period of 7/1/06-12/31/06, with periods flagged as estimated (est.) or actual: 1,768; 3,797; 3,755; 5,592; 3,310; 3,218; 3,803; 3,888; 2,144; 3,077; 3,358; 2,590; 3,043; 2,132; 1,649; 1,732; 855. Total units sold: 49,710.

2,329 citations

Journal ArticleDOI
TL;DR: Bifactor latent structures were introduced over 70 years ago, but only recently has bifactor modeling been rediscovered as an effective approach to modeling construct-relevant multidimensionality in a set of ordered categorical item responses.
Abstract: Bifactor latent structures were introduced over 70 years ago, but only recently has bifactor modeling been rediscovered as an effective approach to modeling construct-relevant multidimensionality in a set of ordered categorical item responses. I begin by describing the Schmid-Leiman bifactor procedure (Schmid & Leiman, 1957), and highlight its relations with correlated-factors and second-order exploratory factor models. After describing limitations of the Schmid-Leiman, two newer methods of exploratory bifactor modeling are considered, namely, analytic bifactor (Jennrich & Bentler, 2011) and target bifactor rotations (Reise, Moore, & Maydeu-Olivares, 2011). In section two, I discuss limited and full-information estimation approaches to confirmatory bifactor models that have emerged from the item response theory and factor analysis traditions, respectively. Comparison of the confirmatory bifactor model to alternative nested confirmatory models and establishing parameter invariance for the general factor also are discussed. In the final section, important applications of bifactor models are reviewed. These applications demonstrate that bifactor modeling potentially provides a solid foundation for conceptualizing psychological constructs, constructing measures, and evaluating a measure's psychometric properties. However, some applications of the bifactor model may be limited due to its restrictive assumptions.
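As a brief illustration of the Schmid-Leiman procedure described above, the sketch below applies the standard transformation to a hypothetical second-order solution: general-factor loadings are the product of first-order and second-order loadings, and residualized group-factor loadings are scaled by the square root of the second-order uniquenesses. This is a generic restatement of the method with made-up numbers, not code from the article.

```python
import numpy as np

def schmid_leiman(lambda1, gamma):
    """Schmid-Leiman transformation of a second-order factor solution.

    lambda1: (items x first-order factors) loading matrix.
    gamma:   loadings of the first-order factors on the single second-order
             (general) factor.
    Returns (general_loadings, group_loadings) for an orthogonalized,
    bifactor-like structure.
    """
    gamma = np.asarray(gamma)
    general = lambda1 @ gamma                  # items' loadings on the general factor
    group = lambda1 * np.sqrt(1.0 - gamma**2)  # residualized group-factor loadings
    return general, group

# Hypothetical example: 6 items, 2 first-order factors.
lambda1 = np.array([[.7, 0], [.6, 0], [.8, 0],
                    [0, .7], [0, .5], [0, .6]])
gamma = np.array([.8, .7])
g, s = schmid_leiman(lambda1, gamma)
print(np.round(g, 3))
print(np.round(s, 3))
```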

1,508 citations

Journal ArticleDOI
TL;DR: A review is provided of the particularly valuable statistical indices one can derive from bifactor models, including omega reliability coefficients, factor determinacy, construct reliability, explained common variance, and percentage of uncontaminated correlations.
Abstract: Bifactor measurement models are increasingly being applied to personality and psychopathology measures (Reise, 2012). In this work, authors generally have emphasized model fit, and their typical conclusion is that a bifactor model provides a superior fit relative to alternative subordinate models. Often unexplored, however, are important statistical indices that can substantially improve the psychometric analysis of a measure. We provide a review of the particularly valuable statistical indices one can derive from bifactor models. They include omega reliability coefficients, factor determinacy, construct reliability, explained common variance, and percentage of uncontaminated correlations. We describe how these indices can be calculated and used to inform: (a) the quality of unit-weighted total and subscale score composites, as well as factor score estimates, and (b) the specification and quality of a measurement model in structural equation modeling.
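ECV and PUC are sketched above after the main abstract; the remaining indices reviewed here can likewise be obtained from a standardized bifactor loading matrix. The snippet below is a minimal sketch, assuming an orthogonal bifactor solution with hypothetical loadings, of omega total, factor determinacy (the correlation between factor score estimates and the factors), and a construct-replicability H coefficient for the general factor; it illustrates the standard formulas rather than the authors' own calculations.

```python
import numpy as np

# Hypothetical orthogonal bifactor solution: column 0 = general factor,
# columns 1-3 = group factors (9 items, 3 per group factor).
lam = np.array([
    [.6, .4, 0, 0], [.6, .4, 0, 0], [.6, .4, 0, 0],
    [.5, 0, .5, 0], [.5, 0, .5, 0], [.5, 0, .5, 0],
    [.7, 0, 0, .3], [.7, 0, 0, .3], [.7, 0, 0, .3],
])
theta = 1 - np.sum(lam**2, axis=1)          # item uniquenesses
sigma = lam @ lam.T + np.diag(theta)        # model-implied correlation matrix

# Omega total: reliability of the unit-weighted total score.
common = np.sum(np.sum(lam, axis=0)**2)
omega_total = common / (common + np.sum(theta))

# Factor determinacy: sqrt of the diagonal of L' Sigma^{-1} L (orthogonal factors).
fd = np.sqrt(np.diag(lam.T @ np.linalg.inv(sigma) @ lam))

# Construct replicability (H) computed from the general-factor loadings.
g = lam[:, 0]
h_general = 1 / (1 + 1 / np.sum(g**2 / (1 - g**2)))

print(f"omega total = {omega_total:.3f}")
print(f"factor determinacies = {np.round(fd, 3)}")
print(f"H (general) = {h_general:.3f}")
```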

848 citations

Journal ArticleDOI
TL;DR: A set of rarely reported psychometric indices that can be derived from a standardized loading matrix in a confirmatory bifactor model is applied: omega reliability coefficients, factor determinacy, construct replicability, explained common variance, and percentage of uncontaminated correlations.
Abstract: The purpose of this study was to apply a set of rarely reported psychometric indices that, nevertheless, are important to consider when evaluating psychological measures. All can be derived from a standardized loading matrix in a confirmatory bifactor model: omega reliability coefficients, factor determinacy, construct replicability, explained common variance, and percentage of uncontaminated correlations. We calculated these indices and extended the findings of 50 recent bifactor model estimation studies published in psychopathology, personality, and assessment journals. These bifactor derived indices (most not presented in the articles) provided a clearer and more complete picture of the psychometric properties of the assessment instruments. We reached 2 firm conclusions. First, although all measures had been tagged “multidimensional,” unit-weighted total scores overwhelmingly reflected variance due to a single latent variable. Second, unit-weighted subscale scores often have ambiguous interpret...

575 citations


Cites background from "Multidimensionality and Structural ..."

  • ...A straightforward measure of degree of essential unidimensionality is the ECV (Reise et al., 2013; Sijtsma, 2009; ten Berge & Sočan, 2004; Stucky & Edelen, 2014; Stucky et al., 2013)....


Journal ArticleDOI
TL;DR: This work produced cross-walk tables linking 3 popular "legacy" depression instruments to the depression metric of the National Institutes of Health Patient-Reported Outcomes Measurement Information System (PROMIS; Cella et al., 2010).
Abstract: Interest in measuring patient-reported outcomes has increased dramatically in recent decades. This has simultaneously produced numerous assessment options and confusion. In the case of depressive symptoms, there are many commonly used options for measuring the same or a very similar concept. Public and professional reporting of scores can be confused by multiple scale ranges, normative levels, and clinical thresholds. A common reporting metric would have great value and can be achieved when similar instruments are administered to a single sample and then linked to each other to produce cross-walk score tables (e.g., Dorans, 2007; Kolen & Brennan, 2004). Using multiple procedures based on item response theory and equipercentile methods, we produced cross-walk tables linking 3 popular "legacy" depression instruments-the Center for Epidemiologic Studies Depression Scale (Radloff, 1977; N = 747), the Beck Depression Inventory-II (Beck, Steer, & Brown, 1996; N = 748), and the 9-item Patient Health Questionnaire (Kroenke, Spitzer, & Williams, 2001; N = 1,120)-to the depression metric of the National Institutes of Health (NIH) Patient-Reported Outcomes Measurement Information System (PROMIS; Cella et al., 2010). The PROMIS Depression metric is centered on the U.S. general population, matching the marginal distributions of gender, age, race, and education in the 2000 U.S. census (Liu et al., 2010). The linking relationships were evaluated by resampling small subsets and estimating confidence intervals for the differences between the observed and linked PROMIS scores; in addition, PROMIS cutoff scores for depression severity were estimated to correspond with those commonly used with the legacy measures. Our results allow clinicians and researchers to retrofit existing data of 3 popular depression measures to the PROMIS Depression metric and vice versa.
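The cross-walk tables described here rest in part on equipercentile linking: a raw score on one instrument is mapped to the score on the target metric that occupies the same percentile rank in a common sample. The sketch below is a deliberately simplified, hypothetical illustration of that idea (published linking work adds presmoothing, continuization, and IRT-based methods); the score vectors are simulated, not PROMIS data.

```python
import numpy as np

def equipercentile_link(x_scores, y_scores, x_new):
    """Map scores on measure X to the measure-Y scale by matching percentile ranks.

    x_scores, y_scores: observed scores on the two measures from a common
    (or equivalent) sample. x_new: X scores to convert.
    """
    x_scores = np.sort(np.asarray(x_scores, dtype=float))
    y_scores = np.sort(np.asarray(y_scores, dtype=float))
    # Percentile rank of each new X score within the X distribution.
    pr = np.searchsorted(x_scores, x_new, side="right") / len(x_scores)
    # Y score located at the same percentile rank.
    return np.quantile(y_scores, np.clip(pr, 0, 1))

# Toy example with simulated legacy and target score distributions.
rng = np.random.default_rng(0)
legacy = rng.normal(20, 6, size=1000)    # e.g., a legacy sum-score metric
target = rng.normal(50, 10, size=1000)   # e.g., a T-score metric
print(np.round(equipercentile_link(legacy, target, [10, 20, 30]), 1))
```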

347 citations

References
Journal ArticleDOI
TL;DR: In this article, the adequacy of the conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice were examined, and the results suggest that, for the ML method, a cutoff value close to.95 for TLI, BL89, CFI, RNI, and G...
Abstract: This article examines the adequacy of the “rules of thumb” conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice. Using a 2‐index presentation strategy, which includes using the maximum likelihood (ML)‐based standardized root mean squared residual (SRMR) and supplementing it with either Tucker‐Lewis Index (TLI), Bollen's (1989) Fit Index (BL89), Relative Noncentrality Index (RNI), Comparative Fit Index (CFI), Gamma Hat, McDonald's Centrality Index (Mc), or root mean squared error of approximation (RMSEA), various combinations of cutoff values from selected ranges of cutoff criteria for the ML‐based SRMR and a given supplemental fit index were used to calculate rejection rates for various types of true‐population and misspecified models; that is, models with misspecified factor covariance(s) and models with misspecified factor loading(s). The results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and G...
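For readers unfamiliar with how the indices behind these cutoffs are formed, the sketch below shows textbook-style computations of SRMR, RMSEA, and CFI from a sample covariance matrix, a model-implied covariance matrix, and model and baseline chi-square statistics. The inputs are hypothetical, and the formulas are the common simplified versions rather than the exact estimators of any particular SEM package.

```python
import numpy as np

def srmr(s, sigma_hat):
    """Standardized root mean square residual from sample (s) and
    model-implied (sigma_hat) covariance matrices."""
    d = np.sqrt(np.outer(np.diag(s), np.diag(s)))
    resid = (s - sigma_hat) / d                  # standardized residuals
    idx = np.tril_indices_from(s)                # unique elements incl. diagonal
    return np.sqrt(np.mean(resid[idx] ** 2))

def rmsea(chi2, df, n):
    """Root mean square error of approximation (discrepancy per degree of freedom)."""
    return np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index relative to the independence (baseline) model."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Hypothetical statistics for a unidimensional model fit to 12 items, N = 500.
s = np.array([[1.0, .45, .40], [.45, 1.0, .35], [.40, .35, 1.0]])
sigma_hat = np.array([[1.0, .42, .42], [.42, 1.0, .42], [.42, .42, 1.0]])
print(round(srmr(s, sigma_hat), 3))
print(round(rmsea(chi2=180.0, df=54, n=500), 3))
print(round(cfi(chi2_m=180.0, df_m=54, chi2_b=2400.0, df_b=66), 3))
```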

76,383 citations


"Multidimensionality and Structural ..." refers methods in this paper

  • ...Here we use those recommended by Hu and Bentler (1999); namely, RMSEA = .06, SRMR = .08, and CFI = .95....


  • ...For each condition, we also computed the following three fit indices: (a) the SRMR (Bentler, 2006; Hu & Bentler, 1999), (b) the RMSEA (Browne & Cudeck, 1993), and (c) the CFI (Hu & Bentler, 1999)....


Journal ArticleDOI
TL;DR: In this article, two types of error involved in fitting a model are considered: error of approximation, which concerns the fit of the model, with optimally chosen parameter values, to the population covariance matrix, and overall error, which concerns the fit of the model, with parameter values estimated from the sample, to the population covariance matrix.
Abstract: This article is concerned with measures of fit of a model. Two types of error involved in fitting a model are considered. The first is error of approximation which involves the fit of the model, wi...

25,611 citations


"Multidimensionality and Structural ..." refers methods in this paper

  • ...A regression equation including RMSEA, PUC, and their interaction resulted in an R² value of only .29—arguing against any attempt to interpret its value as a "unidimensional enough" index....


  • ...For each condition, we also computed the following three fit indices: (a) the SRMR (Bentler, 2006; Hu & Bentler, 1999), (b) the RMSEA (Browne & Cudeck, 1993), and (c) the CFI (Hu & Bentler, 1999)....


  • ...The linear correlations between fit and structural coefficient are −.30, .67, and −.67 for RMSEA, CFI, and SRMR, respectively....


  • ...This is not too surprising in the present conditions given that RMSEA is only weakly associated with factors that determine structural coefficient bias, such as the strength indices ECV (r = −.24) and omegaH (r = −.12), and mean RMSEA changes little with changes in PUC values....


  • ...Although seldom stated, generally, it is assumed that if the item response data fit the measurement model according to commonly employed goodness-of-fit indices—for example, the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean residual (SRMR)—then parameter estimates in the structural model are unbiased, and it is safe to proceed with further model enhancement and evaluation. When the values of these indices are used to judge whether a unidimensional measurement model provides an "adequate" fit to the data, essentially they are being used in the same way as "first-factor strength" indices in IRT; that is, fit indices are used in practice as indicators that the data are "unidimensional enough" to avoid serious bias in model parameters....


Book
28 Apr 1989
TL;DR: This book builds from path analysis, causal models, measurement error, and confirmatory factor analysis to a general structural equation model that combines latent variable and measurement models, and then develops its extensions.
Abstract: Model Notation, Covariances, and Path Analysis. Causality and Causal Models. Structural Equation Models with Observed Variables. The Consequences of Measurement Error. Measurement Models: The Relation Between Latent and Observed Variables. Confirmatory Factor Analysis. The General Model, Part I: Latent Variable and Measurement Models Combined. The General Model, Part II: Extensions. Appendices. Distribution Theory. References. Index.
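The "general model" referred to in these contents combines a latent variable (structural) model with measurement models for the observed indicators. In the book's LISREL-style notation it is conventionally written as follows (a standard restatement, not a quotation):

```latex
\begin{aligned}
\boldsymbol{\eta} &= \mathbf{B}\,\boldsymbol{\eta} + \boldsymbol{\Gamma}\,\boldsymbol{\xi} + \boldsymbol{\zeta}
  && \text{(latent variable model)}\\
\mathbf{y} &= \boldsymbol{\Lambda}_y\,\boldsymbol{\eta} + \boldsymbol{\varepsilon}
  && \text{(measurement model for } \mathbf{y}\text{)}\\
\mathbf{x} &= \boldsymbol{\Lambda}_x\,\boldsymbol{\xi} + \boldsymbol{\delta}
  && \text{(measurement model for } \mathbf{x}\text{)}
\end{aligned}
```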

19,019 citations

Journal ArticleDOI
TL;DR: In this article, the authors discuss theoretical principles, practical issues, and pragmatic decisions to help developers maximize the construct validity of scales and subscales, and note that factor analysis can play a crucial role in ensuring the unidimensionality and discriminant validity of scales.
Abstract: A primary goal of scale development is to create a valid measure of an underlying construct. We discuss theoretical principles, practical issues, and pragmatic decisions to help developers maximize the construct validity of scales and subscales. First, it is essential to begin with a clear conceptualization of the target construct. Moreover, the content of the initial item pool should be overinclusive and item wording needs careful attention. Next, the item pool should be tested, along with variables that assess closely related constructs, on a heterogeneous sample representing the entire range of the target population. Finally, in selecting scale items, the goal is unidimensionality rather than internal consistency; this means that virtually all interitem correlations should be moderate in magnitude. Factor analysis can play a crucial role in ensuring the unidimensionality and discriminant validity of scales.
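One practical check implied by this guidance is to inspect the full inter-item correlation matrix, confirming that correlations are moderate in magnitude, rather than relying on a single internal-consistency coefficient. The snippet below is a small, hypothetical sketch of that inspection with simulated item responses; the data, sample size, and loading value are illustrative only.

```python
import numpy as np

def interitem_summary(data):
    """Summarize the inter-item correlation matrix of a respondents-by-items dataset.

    data: (n_respondents x n_items) array of item responses (hypothetical).
    Returns the mean off-diagonal correlation and its range.
    """
    r = np.corrcoef(data, rowvar=False)
    off = r[np.triu_indices_from(r, k=1)]
    return off.mean(), off.min(), off.max()

# Toy example: 200 respondents, 8 items driven by one common trait plus noise.
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
items = 0.6 * trait + rng.normal(scale=0.8, size=(200, 8))
mean_r, min_r, max_r = interitem_summary(items)
print(f"mean r = {mean_r:.2f}, range = [{min_r:.2f}, {max_r:.2f}]")
```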

5,867 citations


"Multidimensionality and Structural ..." refers background in this paper

  • ...Rather, given the standard way that many measures of complex psychological constructs are created (Clark & Watson, 1995), we expect that all items will be influenced by at least one common trait....


Book
01 Jan 2000
TL;DR: This book presents item response theory as model-based measurement, covering binary and polytomous IRT models, person scoring, item calibration, and model fit, with applications in cognitive, developmental, personality, and attitude assessment.
Abstract: Contents: Preface. Part I: Introduction. Introduction. Part II: Item Response Theory Principles: Some Contrasts and Comparisons. The New Rules of Measurement. Item Response Theory as Model-Based Measurement. Part III: The Fundamentals of Item Response Theory. Binary IRT Models. Polytomous IRT Models. The Trait Level Measurement Scale: Meaning, Interpretations, and Measurement-Scale Properties. Measuring Persons: Scoring Examinees With IRT Models. Calibrating Items: Estimation. Assessing the Fit of IRT Models. Part IV: Applications of IRT Models. IRT Applications: DIF, CAT, and Scale Analysis. IRT Applications in Cognitive and Developmental Assessment. Applications of IRT in Personality and Attitude Assessment. Computer Programs for Conducting IRT Parameter Estimation.
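As a concrete instance of the binary IRT models covered in Part III, the two-parameter logistic model expresses the probability of a keyed response as a function of the latent trait level θ_j, item discrimination a_i, and item difficulty b_i (a standard formulation, not quoted from the book):

```latex
P\!\left(X_{ij} = 1 \mid \theta_j\right) = \frac{1}{1 + \exp\!\left[-a_i\left(\theta_j - b_i\right)\right]}
```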

3,002 citations