Topic
Latent variable model
About: Latent variable model is a research topic. Over its lifetime, 3,589 publications have been published within this topic, receiving 235,061 citations.
Papers published on a yearly basis
Papers
Book
[...]
28 Apr 1989
TL;DR: A book-length treatment of structural equation models with latent variables, moving from path analysis and causal models through measurement models and confirmatory factor analysis to the general model that combines latent variable and measurement models, as discussed by the authors.
Abstract: Model Notation, Covariances, and Path Analysis. Causality and Causal Models. Structural Equation Models with Observed Variables. The Consequences of Measurement Error. Measurement Models: The Relation Between Latent and Observed Variables. Confirmatory Factor Analysis. The General Model, Part I: Latent Variable and Measurement Models Combined. The General Model, Part II: Extensions. Appendices. Distribution Theory. References. Index.
18,996 citations
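To make the book's "general model" concrete, here is a minimal Python sketch (not taken from the text) that combines a structural equation among latent variables with measurement models for the observed indicators, then checks the sample covariance of the indicators against the model-implied covariance. All parameter values (phi, gamma, psi, the loadings, and the error variances) are invented for illustration.

```python
# Sketch of a structural equation (eta = gamma * xi + zeta) plus measurement
# models (x loads on xi, y loads on eta); parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural part
phi, gamma, psi = 1.0, 0.6, 0.5          # Var(xi), structural slope, Var(zeta)
xi = rng.normal(0.0, np.sqrt(phi), n)
eta = gamma * xi + rng.normal(0.0, np.sqrt(psi), n)

# Measurement part: x = lam_x * xi + delta, y = lam_y * eta + eps
lam_x = np.array([1.0, 0.8, 0.7])
lam_y = np.array([1.0, 0.9])
theta_d = np.array([0.3, 0.4, 0.5])      # error variances for x indicators
theta_e = np.array([0.2, 0.3])           # error variances for y indicators
x = xi[:, None] * lam_x + rng.normal(0, np.sqrt(theta_d), (n, 3))
y = eta[:, None] * lam_y + rng.normal(0, np.sqrt(theta_e), (n, 2))

sample_cov = np.cov(np.hstack([x, y]), rowvar=False)

# Model-implied covariance of (x, y)
var_eta = gamma**2 * phi + psi
sigma_xx = phi * np.outer(lam_x, lam_x) + np.diag(theta_d)
sigma_yy = var_eta * np.outer(lam_y, lam_y) + np.diag(theta_e)
sigma_xy = gamma * phi * np.outer(lam_x, lam_y)
implied = np.block([[sigma_xx, sigma_xy], [sigma_xy.T, sigma_yy]])

print(np.round(sample_cov, 2))
print(np.round(implied, 2))
```

With a large simulated sample, the two matrices agree to two decimals, which is exactly the relationship between latent and observed variables that the measurement chapters formalize.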
Book
[...]
TL;DR: In this article, the authors present detailed, worked-through examples drawn from psychology, management, and sociology studies that illustrate the procedures, pitfalls, and extensions of CFA methodology.
Abstract: "With its emphasis on practical and conceptual aspects, rather than mathematics or formulas, this accessible book has established itself as the go-to resource on confirmatory factor analysis (CFA). Detailed, worked-through examples drawn from psychology, management, and sociology studies illustrate the procedures, pitfalls, and extensions of CFA methodology. The text shows how to formulate, program, and interpret CFA models using popular latent variable software packages (LISREL, Mplus, EQS, SAS/CALIS); understand the similarities and differences between CFA and exploratory factor analysis (EFA); and report results from a CFA study. It is filled with useful advice and tables that outline the procedures. The companion website offers data and program syntax files for most of the research examples, as well as links to CFA-related resources. New to This Edition *Updated throughout to incorporate important developments in latent variable modeling. *Chapter on Bayesian CFA and multilevel measurement models. *Addresses new topics (with examples): exploratory structural equation modeling, bifactor analysis, measurement invariance evaluation with categorical indicators, and a new method for scaling latent variables. *Utilizes the latest versions of major latent variable software packages"--
7,576 citations
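As a rough illustration of what CFA software does under the hood, the sketch below fits a one-factor CFA model from scratch by minimizing the maximum-likelihood discrepancy function, rather than using LISREL, Mplus, EQS, or SAS/CALIS. The simulated data, the starting values, and the choice to fix the factor variance at 1 for identification are all assumptions made for this example.

```python
# Fit a one-factor CFA by minimizing the ML discrepancy
# F(theta) = log|Sigma(theta)| + tr(S Sigma^-1) - log|S| - p.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 500, 4
true_lam = np.array([0.9, 0.8, 0.7, 0.6])     # made-up population loadings
true_theta = np.array([0.4, 0.5, 0.6, 0.7])   # made-up error variances
f = rng.normal(size=n)
X = f[:, None] * true_lam + rng.normal(0, np.sqrt(true_theta), (n, p))
S = np.cov(X, rowvar=False)

def implied_cov(params):
    lam, theta = params[:p], params[p:]
    return np.outer(lam, lam) + np.diag(theta)   # factor variance fixed to 1

def ml_discrepancy(params):
    Sigma = implied_cov(params)
    return (np.linalg.slogdet(Sigma)[1]
            + np.trace(S @ np.linalg.inv(Sigma))
            - np.linalg.slogdet(S)[1] - p)

start = np.concatenate([np.full(p, 0.5), np.full(p, 0.5)])
res = minimize(ml_discrepancy, start, method="L-BFGS-B",
               bounds=[(None, None)] * p + [(1e-3, None)] * p)
print("estimated loadings:", np.round(res.x[:p], 2))
print("estimated error variances:", np.round(res.x[p:], 2))
```

The key difference from EFA shows up in `implied_cov`: the loading pattern is specified in advance rather than rotated after the fact.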
[...]
TL;DR: A new latent variable modeling approach is provided that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships.
Abstract: The ability to detect and accurately estimate the strength of interaction effects is a critical issue, fundamental to social science research in general and IS research in particular. Within the IS discipline, a significant percentage of research has been devoted to examining the conditions and contexts under which relationships may vary, often under the general umbrella of contingency theory (cf. McKeen et al. 1994, Weill and Olson 1989). In our survey of such studies, the majority failed to either detect or provide an estimate of the effect size. In cases where effect sizes are estimated, the numbers are generally small. These results have led some researchers to question both the usefulness of contingency theory and the need to detect interaction effects (e.g., Weill and Olson 1989). This paper addresses this issue by providing a new latent variable modeling approach that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships. The capacity of this approach to recover true effects, in comparison with summated regression, is demonstrated in a Monte Carlo study that creates a simulated data set in which the underlying true effects are known. Analysis of a second, empirical data set is included to demonstrate the technique's use within IS theory. In this second analysis, substantial direct and interaction effects of enjoyment on electronic-mail adoption are shown to exist.
5,024 citations
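A small Monte Carlo sketch of the attenuation problem the paper targets: when the interaction term is built from error-laden summated scales, ordinary regression recovers a coefficient noticeably smaller than the known true effect. The effect sizes, indicator counts, and error variances below are invented for illustration and are not the paper's simulation design.

```python
# Summated regression underestimates a known interaction effect when the
# predictors are averages of noisy indicators.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 400, 200
b1, b2, b3 = 0.4, 0.4, 0.3          # true main effects and interaction

est = []
for _ in range(reps):
    x, z = rng.normal(size=n), rng.normal(size=n)
    y = b1 * x + b2 * z + b3 * x * z + rng.normal(0, 1, n)

    # Summated scales: each construct measured by 3 noisy indicators, averaged
    xs = (x[:, None] + rng.normal(0, 0.8, (n, 3))).mean(axis=1)
    zs = (z[:, None] + rng.normal(0, 0.8, (n, 3))).mean(axis=1)

    # OLS on the observed scales and their product term
    D = np.column_stack([np.ones(n), xs, zs, xs * zs])
    beta = np.linalg.lstsq(D, y, rcond=None)[0]
    est.append(beta[3])

print("true interaction effect:", b3)
print("mean estimate from summated regression:", round(float(np.mean(est)), 3))
```

The average estimate comes out well below 0.3, which is the measurement-error attenuation that a latent variable approach is designed to correct.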
[...]
TL;DR: In this paper, a unidimensional latent trait model for responses scored in two or more ordered categories is developed; it can be viewed as an extension of Andrich's Rating Scale model to situations in which ordered response alternatives are free to vary in number and structure from item to item.
Abstract: A unidimensional latent trait model for responses scored in two or more ordered categories is developed. This “Partial Credit” model is a member of the family of latent trait models which share the property of parameter separability and so permit “specifically objective” comparisons of persons and items. The model can be viewed as an extension of Andrich's Rating Scale model to situations in which ordered response alternatives are free to vary in number and structure from item to item. The difference between the parameters in this model and the “category boundaries” in Samejima's Graded Response model is demonstrated. An unconditional maximum likelihood procedure for estimating the model parameters is developed.
3,118 citations
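To see the Partial Credit model's structure, the sketch below computes the category probabilities for a single polytomous item from cumulative sums of (theta - delta_j) over the item's step parameters. The ability value and step difficulties are made-up illustration values, not estimates from any data.

```python
# Category response probabilities under the Partial Credit model.
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities for one item with step parameters `deltas`."""
    steps = theta - np.asarray(deltas)                # theta - delta_j for each step
    cums = np.concatenate([[0.0], np.cumsum(steps)])  # score 0 contributes 0 by convention
    expcums = np.exp(cums - cums.max())               # stabilize before normalizing
    return expcums / expcums.sum()

theta = 0.5                   # person ability (illustrative)
deltas = [-1.0, 0.2, 1.5]     # step difficulties for a 4-category (0..3) item
for score, prob in enumerate(pcm_probs(theta, deltas)):
    print(f"P(X = {score} | theta = {theta}) = {prob:.3f}")
```

Because the step parameters can differ in number and value across items, the same function covers items with different response formats, which is the extension beyond the Rating Scale model noted in the abstract.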
[...]
TL;DR: In this paper, exact Bayesian methods for modeling categorical response data are developed using the idea of data augmentation, which can be summarized as follows: the probit regression model for binary outcomes is seen to have an underlying normal regression structure on latent continuous data, and values of the latent data can be simulated from suitable truncated normal distributions.
Abstract: A vast literature in statistics, biometrics, and econometrics is concerned with the analysis of binary and polychotomous response data. The classical approach fits a categorical response regression model using maximum likelihood, and inferences about the model are based on the associated asymptotic theory. The accuracy of classical confidence statements is questionable for small sample sizes. In this article, exact Bayesian methods for modeling categorical response data are developed using the idea of data augmentation. The general approach can be summarized as follows. The probit regression model for binary outcomes is seen to have an underlying normal regression structure on latent continuous data. Values of the latent data can be simulated from suitable truncated normal distributions. If the latent data are known, then the posterior distribution of the parameters can be computed using standard results for normal linear models. Draws from this posterior are used to sample new latent data, and the process is iterated with Gibbs sampling.
3,034 citations
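The data-augmentation scheme the abstract describes maps directly onto a two-step Gibbs sampler for binary probit regression, sketched below with simulated data. A flat prior on beta is assumed here for simplicity; the paper's fuller treatment (informative priors, polychotomous outcomes) is not reproduced.

```python
# Gibbs sampler for Bayesian probit regression via data augmentation.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.3, 0.8])                      # made-up coefficients
y = (X @ true_beta + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta = np.zeros(2)
draws = []
for it in range(2000):
    # 1) z_i | beta, y_i ~ N(x_i'beta, 1), truncated to (0, inf) if y=1 else (-inf, 0)
    mean = X @ beta
    lower = np.where(y == 1, 0.0 - mean, -np.inf)
    upper = np.where(y == 1, np.inf, 0.0 - mean)
    z = mean + truncnorm.rvs(lower, upper, size=n, random_state=rng)

    # 2) beta | z ~ N((X'X)^-1 X'z, (X'X)^-1) under a flat prior
    beta_hat = XtX_inv @ X.T @ z
    beta = beta_hat + chol @ rng.normal(size=2)
    if it >= 500:                                      # discard burn-in draws
        draws.append(beta.copy())

print("posterior mean of beta:", np.round(np.mean(draws, axis=0), 2))
print("true beta:", true_beta)
```

Each iteration alternates between the truncated-normal draw of the latent data and the normal draw of the regression coefficients, which is exactly the iteration the abstract summarizes.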