Topic

Latent variable model

About: Latent variable model is a research topic. Over its lifetime, 3,589 publications have been published on this topic, receiving 235,061 citations.


Papers
Journal Article (DOI)
TL;DR: The findings suggest that developmental research should integrate quantitative and qualitative perspectives of construct change over time and pay more attention to issues of measurement invariance and qualitative changes of constructs over time.
Abstract: Research has shown that the average values for academic interest decrease during adolescence. Looking beyond such quantitative decline, we explored qualitative change of interest in the domain of mathematics across adolescence. Study 1 was based on a longitudinal data set (annual assessments from Grade 5 to Grade 9; N = 3,193). Latent variable modeling showed that the measurement coefficients of the latent variable of interest (intercepts, structural weights, and error variances) significantly differed across time points, indicating structural changes of the construct. Study 2 was based on interviews with adolescents (Grades 5 and 9, N = 70). Cognitive validation was used to explore differences in subjective concepts of interest across age groups. As expected, there were significant age-related differences, indicating a shift from an affect-laden concept in 5th grade to a more cognitively oriented concept in 9th grade. The findings suggest that developmental research should integrate quantitative and qualitative perspectives of construct change over time and pay more attention to issues of measurement invariance and qualitative changes of constructs over time.

106 citations
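
For readers scanning this abstract, the longitudinal measurement model behind the invariance test can be sketched in standard notation; the symbols below are generic textbook notation, not taken from the paper.

```latex
% Measurement model for item i of person j at time point t:
x_{ijt} = \tau_{it} + \lambda_{it}\,\xi_{jt} + \varepsilon_{ijt},
\qquad \varepsilon_{ijt} \sim \mathcal{N}(0,\,\theta_{it}).
% Strict measurement invariance requires, for every item i and all time points t, s:
\tau_{it} = \tau_{is}, \qquad \lambda_{it} = \lambda_{is}, \qquad \theta_{it} = \theta_{is}.
% The study reports that these equality constraints fail across Grades 5-9,
% indicating structural (qualitative) change of the interest construct over time.
```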

Proceedings Article
03 Jun 2020
TL;DR: This work proposes a new deep sequential latent variable model for dimensionality reduction and data imputation of multivariate time series, and demonstrates that the approach outperforms several classical and deep learning-based data imputation methods on high-dimensional data from the domains of computer vision and healthcare.
Abstract: Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years. This raises the question of whether deep learning methodologies can outperform classical data imputation methods in this domain. However, naive applications of deep learning fall short in giving reliable confidence estimates and lack interpretability. We propose a new deep sequential latent variable model for dimensionality reduction and data imputation. Our modeling assumption is simple and interpretable: the high-dimensional time series has a lower-dimensional representation which evolves smoothly in time according to a Gaussian process. The non-linear dimensionality reduction in the presence of missing data is achieved using a VAE approach with a novel structured variational approximation. We demonstrate that our approach outperforms several classical and deep learning-based data imputation methods on high-dimensional data from the domains of computer vision and healthcare, while additionally improving the smoothness of the imputations and providing interpretable uncertainty estimates.

105 citations
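
The modeling assumption stated in the abstract (a low-dimensional latent trajectory that evolves smoothly under a Gaussian-process prior and is mapped non-linearly to the high-dimensional observations) can be illustrated with a small self-contained simulation. The dimensions, kernel length-scale, and the fixed random "decoder" below are hypothetical stand-ins, not the paper's implementation.

```python
# Sketch of the generative assumption: smooth GP latent trajectory -> non-linear map
# -> noisy high-dimensional observations, with entries masked out as missing.
import numpy as np

rng = np.random.default_rng(0)
T, latent_dim, obs_dim = 50, 2, 20          # time steps, latent size, observed size

# GP prior over time for each latent dimension (RBF kernel => smooth trajectories).
t = np.linspace(0.0, 1.0, T)
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2) + 1e-6 * np.eye(T)
z = rng.multivariate_normal(np.zeros(T), K, size=latent_dim).T   # shape (T, latent_dim)

# Fixed random non-linear "decoder" standing in for the learned VAE decoder network.
W1 = rng.normal(size=(latent_dim, 32))
W2 = rng.normal(size=(32, obs_dim))
x = np.tanh(z @ W1) @ W2 + 0.1 * rng.normal(size=(T, obs_dim))   # noisy observations

# Randomly hide 40% of the entries to mimic the missing-data setting to be imputed.
mask = rng.random(x.shape) < 0.4
x_missing = np.where(mask, np.nan, x)
print("fraction missing:", np.isnan(x_missing).mean())
```

In the actual model, the random decoder is replaced by a learned VAE decoder and the latent trajectory is inferred from the partially observed series through a structured variational approximation.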

Journal Article (DOI)
TL;DR: The proposed model is applied to birth defects data, where continuous data on the size of infants who were exposed to anticonvulsant medications in utero are compared to controls.
Abstract: We discuss latent variable models that allow for fixed-effect covariates, as well as covariates affecting the latent variable directly. Restricted maximum likelihood and maximum likelihood are used to estimate model parameters. A generalized likelihood ratio test can be used to test the significance of the covariates affecting the latent outcomes. Special cases of the proposed model correspond to factor analysis, mixed models, random effects models, and simultaneous equations. The model is applied to birth defects data, where continuous data on the size of infants who were exposed to anticonvulsant medications in utero are compared to controls.

105 citations
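
One common way of writing such a model, with fixed-effect covariates acting on the observed outcomes and further covariates acting on the latent variable directly, is sketched below; the notation is generic and not taken from the paper.

```latex
% Observed outcome j for subject i, and the latent variable it loads on:
y_{ij} = x_{ij}^{\top}\beta + \lambda_{j}\,\eta_{i} + \varepsilon_{ij},
\qquad
\eta_{i} = w_{i}^{\top}\gamma + \delta_{i},
\qquad
\varepsilon_{ij} \sim \mathcal{N}(0,\,\sigma^{2}_{j}), \quad \delta_{i} \sim \mathcal{N}(0,\,\psi).
% The generalized likelihood ratio statistic for H_0 : \gamma = 0 compares the
% maximized log-likelihoods with and without the latent-variable covariates,
% 2\{\ell(\hat{\theta}) - \ell(\hat{\theta}_{0})\}, referred to a chi-squared
% distribution with degrees of freedom equal to the number of constrained parameters.
```

Setting \lambda_j = 1 and \gamma = 0 reduces this to a random-intercept model, which is one of the special cases mentioned above.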

Journal Article (DOI)
TL;DR: Simulations from many different distributions give a broader picture of the relative value of the beta-binomial and the finite mixture models, and provide some preliminary insights into the situations in which these models are useful.
Abstract: Dorazio and Royle (2003, Biometrics 59, 351–364) investigated the behavior of three mixture models for closed population capture–recapture analysis in the presence of individual heterogeneity of capture probability. Their simulations were from the beta-binomial distribution, with analyses from the beta-binomial, the logit-normal, and the finite mixture (latent class) models. In this response, simulations from many different distributions give a broader picture of the relative value of the beta-binomial and the finite mixture models, and provide some preliminary insights into the situations in which these models are useful.

104 citations
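
The beta-binomial setting used for the simulations can be reproduced in a few lines; the population size, number of capture occasions, and beta parameters below are arbitrary illustrative choices, not values from either paper.

```python
# Sketch of individual heterogeneity in capture probability (beta-binomial model):
# each animal gets its own capture probability drawn from a beta distribution,
# then its number of captures over T occasions is binomial with that probability.
import numpy as np

rng = np.random.default_rng(1)
N_true, T = 400, 6                # true population size, capture occasions (illustrative)
a, b = 1.5, 3.5                   # beta parameters controlling heterogeneity (illustrative)

p = rng.beta(a, b, size=N_true)   # individual capture probabilities
captures = rng.binomial(T, p)     # times each individual is captured

observed = captures > 0           # individuals never captured are unobserved
print("observed individuals:", int(observed.sum()), "of", N_true)
print("mean captures among observed:", float(captures[observed].mean()))
```

The estimation question discussed in the paper is how well the beta-binomial, logit-normal, and finite mixture (latent class) models recover the true population size from the observed capture histories when the true heterogeneity distribution varies.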

Journal Article (DOI)
TL;DR: In this paper, the latent structure model is used to estimate the proportions of the population in each latent class and the probabilities of positive responses to each item for individuals in each class from knowledge of the probabilities for positive responses for individuals from the population as a whole.
Abstract: The latent structure model considered here postulates that a population of individuals can be divided into m classes such that each class is “homogeneous” in the sense that for the individuals in the class the responses to N dichotomous items or questions are statistically independent. A method is given for deducing the proportions of the population in each latent class and the probabilities of positive responses to each item for individuals in each class from knowledge of the probabilities of positive responses for individuals from the population as a whole. For estimation of the latent parameters on the basis of a sample, it is proposed that the same method of analysis be applied to the observed data. The method has the advantages of avoiding implicitly defined and unobservable quantities, and of using relatively simple computational procedures of conventional matrix algebra, but it has the disadvantages of using only a part of the available information and of using that part asymmetrically.

104 citations
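
A minimal numerical illustration of this latent structure model, with m = 2 classes and N = 3 dichotomous items; the class proportions and item probabilities are made up for the example.

```python
# Latent class model: within a class the dichotomous items are independent, so the
# marginal probability of a response pattern is a mixture over classes.
import numpy as np
from itertools import product

pi = np.array([0.6, 0.4])                      # latent class proportions (m = 2)
p = np.array([[0.9, 0.8, 0.7],                 # P(positive response | class 1), per item
              [0.2, 0.3, 0.4]])                # P(positive response | class 2), per item

# Marginal probability of each of the 2^N response patterns in the population:
for pattern in product([0, 1], repeat=p.shape[1]):
    x = np.array(pattern)
    within = np.prod(p ** x * (1 - p) ** (1 - x), axis=1)   # P(pattern | class)
    print(pattern, round(float(pi @ within), 4))
```

The paper's method works in the opposite direction: starting from the whole-population pattern probabilities, it deduces the class proportions and the per-class item probabilities by relatively simple matrix-algebra manipulations.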


Network Information
Related Topics (5)
Statistical hypothesis testing: 19.5K papers, 1M citations, 82% related
Inference: 36.8K papers, 1.3M citations, 81% related
Multivariate statistics: 18.4K papers, 1M citations, 80% related
Linear model: 19K papers, 1M citations, 80% related
Estimator: 97.3K papers, 2.6M citations, 78% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    75
2022    143
2021    137
2020    185
2019    142
2018    159