Journal ArticleDOI

Evaluating Cluster-Level Factor Models with lavaan and Mplus

01 Apr 2021-Psych (Multidisciplinary Digital Publishing Institute)-Vol. 3, Iss: 2, pp 134-152
TL;DR: A worrying convergence issue with the default settings in Mplus is discovered, resulting in seemingly converged solutions that are actually not converged. The findings also shine a different light on earlier advice on the use of measurement models for shared factors.
About: This article is published in Psych. The article was published on 2021-04-01 and is currently open access. It has received 5 citations to date. The article focuses on the topics: Test statistic & Population.
Citations
Journal ArticleDOI
TL;DR: In this article, a multilevel structural equation model was fitted to a large number of simulated data sets, and different prespecified minimum effective sample size (ESS) values were compared with the actual (empirical) ESS.

6 citations

Journal ArticleDOI
TL;DR: This article used the three-pillar model of sustainability as a conceptual framework to examine how individuals evaluate climate policies and how these evaluations predict policy support. Individuals who anticipate more benefits and fewer harms of climate policies tend to report greater aggregated support for climate policies, and anticipating the environmental, economic, and social impacts of a given policy to be above average (relative to other policies) predicts greater support for that policy.

3 citations

Journal ArticleDOI
TL;DR: In this article, the authors describe a modification of the robust chi-square test of fit that yields more accurate Type I error rates when the estimated model is at the boundary of the admissible space.

1 citation

Journal ArticleDOI
30 Jun 2022-Psych
TL;DR: In this article, the authors show that the maximum likelihood (ML) estimate of the variance of the latent factor is zero when the initial solution to the optimization problem (i.e., the solution provided by the default procedure) is a negative value.
Abstract: The default procedures of the software programs Mplus and lavaan tend to yield an inadmissible solution (also called a Heywood case) when the sample is small or the parameter is close to the boundary of the parameter space. In factor models, a negatively estimated variance does often occur. One strategy to deal with this is fixing the variance to zero and then estimating the model again in order to obtain the estimates of the remaining model parameters. In the present article, we present one possible approach for justifying this strategy. Specifically, using a simple one-factor model as an example, we show that the maximum likelihood (ML) estimate of the variance of the latent factor is zero when the initial solution to the optimization problem (i.e., the solution provided by the default procedure) is a negative value. The basis of our argument is the very definition of ML estimation, which requires that the log-likelihood be maximized over the parameter space. We present the results of a small simulation study, which was conducted to evaluate the proposed ML procedure and compare it with Mplus’ default procedure. We found that the proposed ML procedure increased estimation accuracy compared to Mplus’ procedure, rendering the ML procedure an attractive option to deal with inadmissible solutions.

1 citation
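The refitting strategy described in this abstract can be sketched with lavaan; the snippet below is only an illustration (the data set and indicator names are placeholders, and the article itself evaluates the strategy against Mplus' default procedure rather than prescribing this code):

# Sketch, assuming a one-factor model fit to a small, hypothetical data set "mydata"
library(lavaan)

model <- ' f =~ x1 + x2 + x3 + x4 '        # first loading fixed to 1 by default,
                                           # so the factor variance is estimated
fit <- cfa(model, data = mydata)

est <- lavInspect(fit, "est")
if (est$psi["f", "f"] < 0) {               # inadmissible (negative) factor variance
  # Fix the factor variance to zero and re-estimate the remaining parameters
  model0 <- ' f =~ x1 + x2 + x3 + x4
              f ~~ 0*f '
  fit0 <- cfa(model0, data = mydata)
  summary(fit0)
}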

Journal ArticleDOI
02 Mar 2022-Psych
TL;DR: This research presents a meta-modelling framework that automates the labor-intensive, and therefore expensive, process of manually cataloging individual cells.
Abstract: Statistical software in psychometrics has made tremendous progress in providing open source solutions (e [...]
References
Journal ArticleDOI
TL;DR: The aims behind the development of the lavaan package are explained, an overview of its most important features is given, and some examples are provided to illustrate how lavaan works in practice.
Abstract: Structural equation modeling (SEM) is a vast field and widely used by many applied researchers in the social and behavioral sciences. Over the years, many software packages for structural equation modeling have been developed, both free and commercial. However, perhaps the best state-of-the-art software packages in this field are still closed-source and/or commercial. The R package lavaan has been developed to provide applied researchers, teachers, and statisticians, a free, fully open-source, but commercial-quality package for latent variable modeling. This paper explains the aims behind the development of the package, gives an overview of its most important features, and provides some examples to illustrate how lavaan works in practice.

14,401 citations
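To give a flavor of the package, the canonical example from the lavaan documentation fits a three-factor model to the bundled Holzinger–Swineford data; a minimal version is sketched below (this is the standard tutorial model, not anything specific to the citing article's cluster-level models):

library(lavaan)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

fit <- cfa(HS.model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)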

Journal ArticleDOI
TL;DR: In this article, the LISREL confirmatory factor analytic (CFA) model has been used to test the invariance of measurement parameters and mean structures for multidimensional self-concept data from high school adolescents.
Abstract: Addresses issues related to partial measurement invariance using a tutorial approach based on the LISREL confirmatory factor analytic model. Specifically, we demonstrate procedures for (a) using "sensitivity analyses" to establish stable and substantively well-fitting baseline models, (b) determining partially invariant measurement parameters, and (c) testing for the invariance of factor covariance and mean structures, given partial measurement invariance. We also show, explicitly, the transformation of parameters from an all-X to an all-Y model specification, for purposes of testing mean structures. These procedures are illustrated with multidimensional self-concept data from low (n = 248) and high (n = 582) academically tracked high school adolescents. An important assumption in testing for mean differences is that the measurement (Drasgow & Kanfer, 1985; Labouvie, 1980; Rock, Werts, & Flaugher, 1978) and the structure (Bejar, 1980; Labouvie, 1980; Rock et al., 1978) of the underlying construct are equivalent across groups. One methodological strategy used in testing for this equivalence is the analysis of covariance structures using the LISREL confirmatory factor analytic (CFA) model (Jöreskog, 1971). Although a number of empirical investigations and didactic expositions have used this methodology in testing assumptions of factorial invariance for multiple and single parameters, the analyses have been somewhat incomplete. In particular, researchers have not considered the possibility of partial measurement invariance. The primary purpose of this article is to demonstrate the application of CFA in testing for, and with, partial measurement invariance. Specifically, we illustrate (a) testing, independently, for the invariance of factor loading (i.e., measurement) parameters, (b) testing for the invariance of factor variance-covariance (i.e., structural) parameters, given partially invariant factor loadings, and (c) testing for the invariance of factor mean structures. Invariance testing across groups, however, assumes well-fitting single-group models; the problem here is to know when to stop fitting the model. A secondary aim of this article, then, is to demonstrate "sensitivity analyses" that can be used to establish stable and substantively meaningful baseline models.

3,395 citations
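The tutorial above works in LISREL, but the same sequence of invariance tests can be sketched as multi-group CFA in lavaan; the model, variable names, and grouping variable below are placeholders:

library(lavaan)

model <- ' selfconcept =~ y1 + y2 + y3 + y4 '

# Configural model: same factor structure in both groups, all parameters free
fit.config  <- cfa(model, data = mydata, group = "track")

# Full metric invariance: all factor loadings constrained equal across groups
fit.metric  <- cfa(model, data = mydata, group = "track",
                   group.equal = "loadings")

# Partial metric invariance: loadings equal across groups except y3, which is freed
fit.partial <- cfa(model, data = mydata, group = "track",
                   group.equal = "loadings",
                   group.partial = "selfconcept =~ y3")

lavTestLRT(fit.config, fit.metric, fit.partial)  # chi-square difference tests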

Journal ArticleDOI
TL;DR: In this article, a two-stage approach based on the unstructured mean and covariance estimates obtained by the EM-algorithm is proposed to deal with missing data in social and behavioral sciences, and the asymptotic efficiencies of different estimators are compared under various assump...
Abstract: Survey and longitudinal studies in the social and behavioral sciences generally contain missing data. Mean and covariance structure models play an important role in analyzing such data. Two promising methods for dealing with missing data are a direct maximum-likelihood and a two-stage approach based on the unstructured mean and covariance estimates obtained by the EM-algorithm. Typical assumptions under these two methods are ignorable nonresponse and normality of data. However, data sets in social and behavioral sciences are seldom normal, and experience with these procedures indicates that normal theory based methods for nonnormal data very often lead to incorrect model evaluations. By dropping the normal distribution assumption, we develop more accurate procedures for model inference. Based on the theory of generalized estimating equations, a way to obtain consistent standard errors of the two-stage estimates is given. The asymptotic efficiencies of different estimators are compared under various assump...

1,412 citations
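Both routes discussed in this abstract, direct maximum likelihood and the two-stage approach, have counterparts among lavaan's missing-data options; a sketch under placeholder data and model names:

library(lavaan)

model <- ' f =~ x1 + x2 + x3 + x4 '

# Direct (full-information) maximum likelihood, with robust test statistic and SEs
fit.fiml <- cfa(model, data = mydata, missing = "ml", estimator = "MLR")

# Two-stage approach: saturated means and covariances are estimated first (EM),
# then the structured model is fit to them; the "robust" variant adds standard
# errors that do not rely on normality
fit.ts   <- cfa(model, data = mydata, missing = "robust.two.stage")

summary(fit.fiml, fit.measures = TRUE)
summary(fit.ts,   fit.measures = TRUE)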

Journal ArticleDOI
TL;DR: This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis, which replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors.
Abstract: This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.

1,045 citations
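The article implements this approach in Mplus; outside Mplus, a rough sketch of the same idea (approximate-zero cross-loadings via small-variance priors) is possible with the blavaan package. The prior() modifier syntax and the choice of a normal(0, 0.1) prior are assumptions for illustration, not the article's specification:

library(blavaan)   # Bayesian SEM; attaches lavaan, which provides the data set

# Conventional CFA omits cross-loadings entirely (exact zeros). Here two
# cross-loadings are kept in the model but given small-variance priors centered
# at zero ("approximate zeros"); prior syntax is assumed, see the blavaan docs.
model <- ' visual  =~ x1 + x2 + x3 + prior("normal(0, 0.1)")*x9
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9 + prior("normal(0, 0.1)")*x3 '

fit <- bcfa(model, data = HolzingerSwineford1939)
summary(fit)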

Journal ArticleDOI
TL;DR: Maximum likelihood estimation and empirical Bayes latent score prediction within the GLLAMM framework can be performed using adaptive quadrature in gllamm, a freely available program running in Stata.
Abstract: A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent variables. The response model generalizes GLMMs to incorporate factor structures in addition to random intercepts and coefficients. As in GLMMs, the data can have an arbitrary number of levels and can be highly unbalanced with different numbers of lower-level units in the higher-level units and missing data. A wide range of response processes can be modeled including ordered and unordered categorical responses, counts, and responses of mixed types. The structural model is similar to the structural part of a SEM except that it may include latent and observed variables varying at different levels. For example, unit-level latent variables (factors or random coefficients) can be regressed on cluster-level latent variables. Special cases of this framework are explored and data from the British Social Attitudes Survey are used for illustration. Maximum likelihood estimation and empirical Bayes latent score prediction within the GLLAMM framework can be performed using adaptive quadrature in gllamm, a freely available program running in Stata.

755 citations