Journal • ISSN: 1070-5511

# Structural Equation Modeling

Taylor & Francis

About: Structural Equation Modeling is an academic journal published by Taylor & Francis. The journal publishes mainly in the areas of structural equation modeling and latent variables. It has the ISSN identifier 1070-5511. Over its lifetime the journal has published 1,269 papers, which have received 205,096 citations. The journal is also known as: SEM.

Topics: Structural equation modeling, Latent variable, Latent variable model, Latent class model, Sample size determination


##### Papers


---

TL;DR: In this article, the adequacy of the conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice were examined, and the results suggest that, for the ML method, a cutoff value close to.95 for TLI, BL89, CFI, RNI, and G...

Abstract: This article examines the adequacy of the “rules of thumb” conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice. Using a 2‐index presentation strategy, which includes using the maximum likelihood (ML)‐based standardized root mean squared residual (SRMR) and supplementing it with either Tucker‐Lewis Index (TLI), Bollen's (1989) Fit Index (BL89), Relative Noncentrality Index (RNI), Comparative Fit Index (CFI), Gamma Hat, McDonald's Centrality Index (Mc), or root mean squared error of approximation (RMSEA), various combinations of cutoff values from selected ranges of cutoff criteria for the ML‐based SRMR and a given supplemental fit index were used to calculate rejection rates for various types of true‐population and misspecified models; that is, models with misspecified factor covariance(s) and models with misspecified factor loading(s). The results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and G...

76,383 citations
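The cutoff criteria examined above apply to indices with simple closed forms. As a sketch, three of the most common ones (CFI, TLI, RMSEA) can be computed directly from the model and baseline chi-square statistics; the numeric inputs below are hypothetical, not values from the article:

```python
import math

def fit_indices(chisq_m, df_m, chisq_b, df_b, n):
    """Common ML fit indices from chi-square statistics.

    chisq_m, df_m: fitted model; chisq_b, df_b: independence (baseline)
    model; n: sample size. Standard textbook formulas.
    """
    d_m = max(chisq_m - df_m, 0.0)            # model noncentrality
    d_b = max(chisq_b - df_b, 0.0)            # baseline noncentrality
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = ((chisq_b / df_b) - (chisq_m / df_m)) / ((chisq_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return {"CFI": cfi, "TLI": tli, "RMSEA": rmsea}

# Hypothetical values: chi-square 85 on 40 df for the model,
# 900 on 55 df for the baseline model, N = 300.
ix = fit_indices(85, 40, 900, 55, 300)
print({k: round(v, 3) for k, v in ix.items()})
```

Against the cutoffs discussed in the article, this hypothetical model's CFI (about .947) would fall just short of the .95 criterion.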

---

TL;DR: In this paper, the authors examined the change in the goodness-of-fit index (GFI) when cross-group constraints are imposed on a measurement model and found that the change was independent of both model complexity and sample size.

Abstract: Measurement invariance is usually tested using Multigroup Confirmatory Factor Analysis, which examines the change in the goodness-of-fit index (GFI) when cross-group constraints are imposed on a measurement model. Although many studies have examined the properties of GFI as indicators of overall model fit for single-group data, there have been none to date that examine how GFIs change when between-group constraints are added to a measurement model. The lack of a consensus about what constitutes significant GFI differences places limits on measurement invariance testing. We examine 20 GFIs based on the minimum fit function. A simulation under the two-group situation was used to examine changes in the GFIs (ΔGFIs) when invariance constraints were added. Based on the results, we recommend using Δcomparative fit index, ΔGamma hat, and ΔMcDonald's Noncentrality Index to evaluate measurement invariance. These three ΔGFIs are independent of both model complexity and sample size, and are not correlated with the o...

10,597 citations
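The ΔGFI logic can be illustrated with the Δcomparative fit index the article recommends: compute CFI for the unconstrained and the cross-group-constrained model against the same baseline, then compare the drop to a cutoff (a commonly cited value is .01). A minimal sketch with hypothetical chi-square values:

```python
def cfi(chisq, df, chisq_b, df_b):
    # CFI from model and baseline (independence-model) chi-square values.
    d = max(chisq - df, 0.0)
    d_b = max(chisq_b - df_b, 0.0)
    return 1.0 - d / max(d_b, d, 1e-12)

def invariance_check(configural, constrained, baseline, cutoff=0.01):
    """Evaluate measurement invariance via the change in CFI.

    configural / constrained: (chisq, df) for the unconstrained and the
    cross-group-constrained model; baseline: (chisq, df) for the
    independence model. A drop in CFI larger than `cutoff` flags
    non-invariance. All inputs below are hypothetical.
    """
    cfi_c = cfi(*configural, *baseline)
    cfi_k = cfi(*constrained, *baseline)
    delta = cfi_c - cfi_k
    return delta, delta <= cutoff

delta, invariant = invariance_check((120, 48), (138, 54), (1500, 66))
print(f"dCFI = {delta:.4f}, invariance supported: {invariant}")
```

Here the constraints cost about .008 in CFI, under the .01 cutoff, so invariance would not be rejected for these hypothetical numbers.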

---

TL;DR: Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of classes across all of the models considered.

Abstract: Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in their application is that there is no single commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used Information Criteria (ICs) for determining the number of classes in mixture modeling. We look at the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture model (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at three different sample sizes (n = 200, 500, 1,000). Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test ...

7,716 citations
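Class enumeration with the BIC, which the abstract reports as the best-performing IC, amounts to fitting the mixture at several class counts and keeping the count with the smallest BIC. A sketch with hypothetical log-likelihoods and parameter counts (the model-fitting step itself is omitted):

```python
import math

def bic(log_lik, n_params, n_obs):
    # Bayesian Information Criterion: -2 log L + k ln(n); smaller is better.
    return -2.0 * log_lik + n_params * math.log(n_obs)

# Hypothetical maximized log-likelihoods and free-parameter counts for
# mixture models fit with 1-4 latent classes to n = 500 observations.
fits = {1: (-2610.4, 10), 2: (-2480.7, 16), 3: (-2471.9, 22), 4: (-2468.8, 28)}
bics = {k: bic(ll, p, 500) for k, (ll, p) in fits.items()}
best = min(bics, key=bics.get)
print(best)  # class count with the smallest BIC
```

Note how the ln(n) penalty works: the jump from 2 to 3 classes improves the log-likelihood only slightly, so the 6 extra parameters push the BIC back up and the 2-class solution wins for these hypothetical values.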

---

TL;DR: In this article, the sensitivity of goodness of fit indexes to lack of measurement invariance at three commonly tested levels: factor loadings, intercepts, and residual variances was examined, and the most intriguing finding was that changes in fit statistics are affected by the interaction between the pattern of invariance and the proportion of invariant items.

Abstract: Two Monte Carlo studies were conducted to examine the sensitivity of goodness of fit indexes to lack of measurement invariance at 3 commonly tested levels: factor loadings, intercepts, and residual variances. Standardized root mean square residual (SRMR) appears to be more sensitive to lack of invariance in factor loadings than in intercepts or residual variances. Comparative fit index (CFI) and root mean square error of approximation (RMSEA) appear to be equally sensitive to all 3 types of lack of invariance. The most intriguing finding is that changes in fit statistics are affected by the interaction between the pattern of invariance and the proportion of invariant items: when the pattern of lack of invariance is uniform, the relation is nonmonotonic, whereas when the pattern of lack of invariance is mixed, the relation is monotonic. Unequal sample sizes affect changes across all 3 levels of invariance: changes are larger when sample sizes are equal than when they are unequal. Cutoff points for t...

6,202 citations

---

Yale University

TL;DR: In this article, the authors examine the controversial practice of using parcels of items as manifest variables in structural equation modeling (SEM) procedures and conclude that the unconsidered use of parcels is never warranted, while, at the same time, the considered use of items cannot be dismissed out of hand.

Abstract: We examine the controversial practice of using parcels of items as manifest variables in structural equation modeling (SEM) procedures. After detailing arguments pro and con, we conclude that the unconsidered use of parcels is never warranted, while, at the same time, the considered use of parcels cannot be dismissed out of hand. In large part, the decision to parcel or not depends on one's philosophical stance regarding scientific inquiry (e.g., empiricist vs. pragmatist) and the substantive goal of a study (e.g., to understand the structure of a set of items or to examine the nature of a set of constructs). Prior to creating parcels, however, we recommend strongly that investigators acquire a thorough understanding of the nature and dimensionality of the items to be parceled. With this knowledge in hand, various techniques for creating parcels can be utilized to minimize potential pitfalls and to optimize the measurement structure of constructs in SEM procedures. A number of parceling techniques are des...

5,426 citations
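One of the parceling techniques alluded to, random assignment of items to parcels followed by within-parcel averaging, can be sketched as follows. The item data and the choice of random assignment are hypothetical illustrations, not the article's specific recommendation:

```python
import random

def make_parcels(item_scores, n_parcels, seed=0):
    """Randomly assign items to parcels and average within each parcel.

    item_scores: one inner list per item, all of equal length
    (= number of respondents). Returns one averaged score list per
    parcel. Random assignment is only one of several parceling
    strategies; see the article for alternatives.
    """
    rng = random.Random(seed)
    idx = list(range(len(item_scores)))
    rng.shuffle(idx)                              # random item order
    groups = [idx[i::n_parcels] for i in range(n_parcels)]
    n_resp = len(item_scores[0])
    return [
        [sum(item_scores[i][r] for i in g) / len(g) for r in range(n_resp)]
        for g in groups
    ]

# Six hypothetical 5-point items from four respondents, grouped into 3 parcels.
items = [[4, 3, 5, 4], [3, 3, 4, 5], [5, 4, 4, 3],
         [2, 3, 3, 4], [4, 5, 5, 4], [3, 4, 2, 5]]
parcels = make_parcels(items, 3)
print(len(parcels), len(parcels[0]))  # 3 parcels, 4 respondents each
```

The resulting parcel scores would then replace the individual items as manifest indicators in the SEM, which is precisely the practice whose merits the article weighs.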