scispace - formally typeset
Topic

Latent variable model

About: Latent variable model is a research topic. Over the lifetime, 3589 publications have been published within this topic receiving 235061 citations.


Papers
Journal ArticleDOI
TL;DR: The results are consistent with the idea that variation in the dynamics of free recall, WMC, and gF is primarily due to differences in search set size, but that differences in recovery and monitoring are also important.
Abstract: A latent variable analysis was conducted to examine the nature of individual differences in the dynamics of free recall and cognitive abilities. Participants performed multiple measures of free recall, working memory capacity (WMC), and fluid intelligence (gF). For each free recall task, recall accuracy, recall latency, and number of intrusion errors were determined, and latent factors were derived for each. It was found that recall accuracy was negatively related to both recall latency and number of intrusions, and recall latency and number of intrusions were positively related. Furthermore, latent WMC and gF factors were positively related to recall accuracy, but negatively related to recall latency and number of intrusions. Finally, a cluster analysis revealed that subgroups of participants with deficits in focusing the search had deficits in recovering degraded representations or deficits in monitoring the products of retrieval. The results are consistent with the idea that variation in the dynamics of free recall, WMC, and gF is primarily due to differences in search set size, but that differences in recovery and monitoring are also important.

45 citations
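The latent-factor derivation described in the abstract can be sketched in a few lines. The sketch below is a simplified stand-in, not the authors' analysis: it uses a first principal component as a crude one-factor model, and all data (a hypothetical `ability` variable driving three observed recall measures) are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical latent ability drives three observed recall measures plus noise
ability = rng.normal(size=n)
accuracy = 0.8 * ability + rng.normal(scale=0.6, size=n)
latency = -0.7 * ability + rng.normal(scale=0.7, size=n)    # slower = lower ability
intrusions = -0.5 * ability + rng.normal(scale=0.9, size=n)

# Standardize the observed measures
X = np.column_stack([accuracy, latency, intrusions])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# First principal component as a crude one-factor score
_, _, Vt = np.linalg.svd(Xz, full_matrices=False)
factor = Xz @ Vt[0]

# The recovered factor should correlate strongly (up to sign) with the latent ability
r = np.corrcoef(factor, ability)[0, 1]
print(abs(r) > 0.7)
```

A confirmatory latent variable analysis of the kind used in the study would instead fit a measurement model per construct and test the relations among the latent factors.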

Journal ArticleDOI
TL;DR: In this paper, the authors consider latent variable models for an infinite sequence (or universe) of manifest (observable) variables that may be discrete, continuous or some combination of these.
Abstract: We consider latent variable models for an infinite sequence (or universe) of manifest (observable) variables that may be discrete, continuous or some combination of these. The main theorem is a general characterization by empirical conditions of when it is possible to construct latent variable models that satisfy unidimensionality, monotonicity, conditional independence, and tail-measurability. Tail-measurability means that the latent variable can be estimated consistently from the sequence of manifest variables even though an arbitrary finite subsequence has been removed. The characterizing, necessary and sufficient, conditions that the manifest variables must satisfy for these models are conditional association and vanishing conditional dependence (as one conditions upon successively more other manifest variables). Our main theorem considerably generalizes and sharpens earlier results of Ellis and van den Wollenberg (1993), Holland and Rosenbaum (1986), and Junker (1993). It is also related to the work of Stout (1990). The main theorem is preceded by many results for latent variable models in general—not necessarily unidimensional and monotone. They pertain to the uniqueness of latent variables and are connected with the conditional independence theorem of Suppes and Zanotti (1981). We discuss new definitions of the concepts of “true-score” and “subpopulation,” which generalize these notions from the “stochastic subject,” “random sampling,” and “domain sampling” formulations of latent variable models (e.g., Holland, 1990; Lord & Novick, 1968). These definitions do not require the a priori specification of a latent variable model.

45 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present new graphical tools for improved interpretation of latent variable regression models. These tools not only aid interpretation and understanding of the model but can also support variable selection aimed at parsimonious models with high explanatory information content as well as predictive performance.
Abstract: The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable selection. Thus, these graphs provide visualization of the explanatory variables’ content of response related as well as systematic orthogonal variation at a quantitative level. Furthermore, these graphs are able to reveal and partition the explanatory variables into those that are crucial for both interpretation and predictive performance of the model, and those that are crucial for prediction performance but confounded by large contributions of orthogonal variation. Tools for assessment of explanatory variables may not only aid interpretation and understanding of the model but also be crucial for performing variable selection with the purpose of obtaining parsimonious models with high explanatory information content as well as predictive performance. We show by example that by just using prediction performance as criterion for variable selection, it is possible to end up with a reduced model where the most selective variables are lost in the selection process.

45 citations
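A minimal one-component latent variable regression (a NIPALS-style PLS step) gives a feel for how a latent score mediates between explanatory variables and the response. Everything below is a synthetic sketch: the data, the assumption that only the first two predictors matter, and the single-component fit are illustrative, and the paper's graphical tools are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
# Hypothetical: the response is driven by a latent combination of the first two predictors
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# One PLS component: weight vector in the direction of maximal covariance with y
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                 # latent score for each observation
q = yc @ t / (t @ t)       # regression of the response on the latent score

y_hat = t * q + y.mean()
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum(yc ** 2)
print(r2 > 0.8)
```

Inspecting `w` shows which explanatory variables load on the response-related latent variation—the quantity the paper's graphs visualize and partition against orthogonal variation.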

Proceedings ArticleDOI
20 Jun 2009
TL;DR: An LVM called the shared kernel information embedding (sKIE) is proposed, which defines a coherent density over a latent space and multiple input/output spaces and is easy to condition on a latent state, or on combinations of the input/output states.
Abstract: Latent variable models (LVM), like the shared-GPLVM and the spectral latent variable model, help mitigate over-fitting when learning discriminative methods from small or moderately sized training sets. Nevertheless, existing methods suffer from several problems: (1) complexity; (2) the lack of explicit mappings to and from the latent space; (3) an inability to cope with multi-modality; and (4) the lack of a well-defined density over the latent space. We propose an LVM called the shared kernel information embedding (sKIE). It defines a coherent density over a latent space and multiple input/output spaces (e.g., image features and poses), and it is easy to condition on a latent state, or on combinations of the input/output states. Learning is quadratic, and it works well on small datasets. With datasets too large to learn a coherent global model, one can use sKIE to learn local online models. sKIE permits missing data during inference, and partially labelled data during learning. We use sKIE for human pose inference.

45 citations

Journal ArticleDOI
TL;DR: Latent class membership at four months of age predicted longitudinal outcomes at four years of age, and multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation.
Abstract: Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.

45 citations
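The EM fitting and posterior-based imputation described above can be sketched for a two-component Gaussian mixture on synthetic one-dimensional data. This is a simplified analogue, not the study's latent class analysis of observational temperament codes; the component means, sample sizes, and number of imputations are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical data from two latent subgroups
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])

# EM for a two-component one-dimensional Gaussian mixture
mu = np.array([-1.0, 1.0])
sd = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixing weights, means, and standard deviations
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

# Multiple imputation: draw class labels from the posterior rather than
# assigning each subject to the class with maximum posterior probability
draws = [(rng.random(len(x)) < resp[:, 1]).astype(int) for _ in range(5)]

print(abs(mu[0] + 2) < 0.3 and abs(mu[1] - 2) < 0.3)
```

Analyzing each imputed labeling and pooling the results propagates the classification uncertainty that a single argmax assignment would hide, which is the point the abstract makes.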


Network Information
Related Topics (5)
Statistical hypothesis testing
19.5K papers, 1M citations
82% related
Inference
36.8K papers, 1.3M citations
81% related
Multivariate statistics
18.4K papers, 1M citations
80% related
Linear model
19K papers, 1M citations
80% related
Estimator
97.3K papers, 2.6M citations
78% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    75
2022    143
2021    137
2020    185
2019    142
2018    159