
Showing papers on "Latent variable model published in 2012"


Journal ArticleDOI
TL;DR: In this article, the authors provide a step-by-step guide to analysing measurement invariance of latent constructs, which is important in research across groups, or across time.
Abstract: The analysis of measurement invariance of latent constructs is important in research across groups, or across time. By establishing whether factor loadings, intercepts and residual variances are equivalent in a factor model that measures a latent concept, we can assure that comparisons that are made on the latent variable are valid across groups or time. Establishing measurement invariance involves running a set of increasingly constrained structural equation models, and testing whether differences between these models are significant. This paper provides a step-by-step guide to analysing measurement invariance.

1,457 citations
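The sequence of increasingly constrained models described above is compared with chi-square difference (likelihood-ratio) tests. A minimal sketch in Python, using illustrative fit statistics rather than numbers from the paper:

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_constrained, df_constrained, chisq_free, df_free):
    """Test whether the more constrained invariance model fits significantly worse."""
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    return delta_chisq, delta_df, chi2.sf(delta_chisq, delta_df)

# e.g. metric model (equal loadings) vs. configural model (free loadings);
# the fit statistics here are made up for illustration
d, ddf, p = chisq_diff_test(112.4, 58, 104.1, 52)
# a non-significant p suggests loading invariance is tenable
```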


Journal ArticleDOI
TL;DR: This manuscript discusses a typology of (second-order) hierarchical latent variable models that include formative relationships, and provides an overview of different approaches that can be used to estimate the parameters in these models.

1,361 citations


Journal ArticleDOI
TL;DR: This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis, which replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors.
Abstract: This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.

1,045 citations
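The "approximate zero" device can be sketched numerically: replacing a fixed zero cross-loading with a normal prior of small variance contributes a strong quadratic penalty to the log-posterior, shrinking the loading toward zero while still letting the data move it. A hedged illustration (the prior variance 0.01 is a common choice in this literature, not a value taken from the article):

```python
import numpy as np

def log_prior_approx_zero(loading, prior_var=0.01):
    """Log-density of the N(0, prior_var) 'approximate zero' prior."""
    return -0.5 * np.log(2 * np.pi * prior_var) - loading**2 / (2 * prior_var)

# a cross-loading near 0 is barely penalized; a substantial one is heavily penalized
penalty_gap = log_prior_approx_zero(0.05) - log_prior_approx_zero(0.5)
```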


Journal ArticleDOI
TL;DR: In this paper, the main types of statistical models based on latent variables, on copulas and on spatial max-stable processes are described and compared by application to a data set on rainfall in Switzerland.
Abstract: The areal modeling of the extremes of a natural process such as rainfall or temperature is important in environmental statistics; for example, understanding extreme areal rainfall is crucial in flood protection. This article reviews recent progress in the statistical modeling of spatial extremes, starting with sketches of the necessary elements of extreme value statistics and geostatistics. The main types of statistical models thus far proposed, based on latent variables, on copulas and on spatial max-stable processes, are described and then are compared by application to a data set on rainfall in Switzerland. Whereas latent variable modeling allows a better fit to marginal distributions, it fits the joint distributions of extremes poorly, so appropriately-chosen copula or max-stable models seem essential for successful spatial modeling of extremes.

572 citations
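The marginal-fitting step that latent variable models handle well can be sketched with SciPy's GEV implementation. The data below are simulated stand-ins for a single site's annual maxima, not the Swiss rainfall data:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# simulated annual rainfall maxima for one site (mm)
annual_maxima = genextreme.rvs(c=-0.1, loc=40.0, scale=8.0, size=200,
                               random_state=rng)

# fit the GEV margin; note scipy's shape c is the negative of the usual xi
shape, loc, scale = genextreme.fit(annual_maxima)

# a 100-year return level from the fitted margin
return_level = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
```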


Journal ArticleDOI
TL;DR: In this paper, a bias-corrected bootstrap confidence interval for testing a specific mediation effect in a complex latent variable model is presented, and the procedure is extended to construct a BC bootstrap interval for the difference between two specific mediation effects.
Abstract: This teaching note starts with a demonstration of a straightforward procedure using Mplus Version 6 to produce a bias-corrected (BC) bootstrap confidence interval for testing a specific mediation effect in a complex latent variable model. The procedure is extended to constructing a BC bootstrap confidence interval for the difference between two specific mediation effects. The extended procedure not only tells whether the strengths of any direct and mediation effects, or of any two specific mediation effects, in a latent variable model are significantly different, but also provides an estimate and a confidence interval for the difference. However, the Mplus procedures do not allow the estimation of a BC bootstrap confidence interval for the difference between two standardized mediation effects. This teaching note thus demonstrates the LISREL procedures for constructing BC confidence intervals for specific standardized mediation effects and for comparing two standardized mediation effects. Finally, procedures com...

299 citations
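The bias-corrected bootstrap itself is straightforward to sketch outside Mplus or LISREL. The toy below uses ordinary regression in place of a latent variable model, with simulated data, to show where the bias-correction factor z0 enters:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)               # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)     # outcome

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                 # slope of m on x
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                        y, rcond=None)[0][0]   # slope of y on m, controlling x
    return a * b

est = indirect_effect(x, m, y)
idx = [rng.integers(0, n, n) for _ in range(2000)]
boot = np.array([indirect_effect(x[i], m[i], y[i]) for i in idx])

# bias correction: shift the percentile bounds by z0 before reading quantiles
z0 = norm.ppf(np.mean(boot < est))
lo, hi = norm.cdf(2 * z0 + norm.ppf([0.025, 0.975]))
ci = np.quantile(boot, [lo, hi])
```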


Book
29 Oct 2012
TL;DR: This book gives a comprehensive treatment of latent Markov models for longitudinal data, covering maximum likelihood and Bayesian inference (including reversible jump), model selection and hypothesis testing, and extensions with covariates, random effects, and multilevel structure.
Abstract: Contents:
- Overview on Latent Markov Modeling: Introduction; Literature review on latent Markov models; Alternative approaches; Example datasets
- Background on Latent Variable and Markov Chain Models: Introduction; Latent variable models; Expectation-Maximization algorithm; Standard errors; Latent class model; Selection of the number of latent classes; Applications; Markov chain model for longitudinal data; Applications
- Basic Latent Markov Model: Introduction; Univariate formulation; Multivariate formulation; Model identifiability; Maximum likelihood estimation; Selection of the number of latent states; Applications
- Constrained Latent Markov Models: Introduction; Constraints on the measurement model; Constraints on the latent model; Maximum likelihood estimation; Model selection and hypothesis testing; Applications
- Including Individual Covariates and Relaxing Basic Model Assumptions: Introduction; Notation; Covariates in the measurement model; Covariates in the latent model; Interpretation of the resulting models; Maximum likelihood estimation; Observed information matrix, identifiability, and standard errors; Relaxing local independence; Higher order extensions; Applications
- Including Random Effects and Extension to Multilevel Data: Introduction; Random-effects formulation; Maximum likelihood estimation; Multilevel formulation; Application to the student math achievement dataset
- Advanced Topics about Latent Markov Modeling: Introduction; Dealing with continuous response variables; Dealing with missing responses; Additional computational issues; Decoding and forecasting; Selection of the number of latent states
- Bayesian Latent Markov Models: Introduction; Prior distributions; Bayesian inference via reversible jump; Alternative sampling; Application to the labor market dataset
- Appendix: Software
- List of Main Symbols; Bibliography; Index

181 citations
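The likelihood machinery underlying the basic latent Markov model is the forward recursion over latent states. A minimal, hedged sketch for one subject's categorical response sequence (parameters illustrative, not from the book):

```python
import numpy as np

def forward_loglik(pi, Pi, phi, obs):
    """pi: initial state probs (k,); Pi: transition matrix (k, k);
    phi: conditional response probs (k, n_categories); obs: response sequence."""
    alpha = pi * phi[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ Pi) * phi[:, o]   # propagate states, weight by emission
        c = alpha.sum()
        loglik += np.log(c)                # rescale to avoid underflow
        alpha = alpha / c
    return loglik

pi = np.array([0.6, 0.4])                  # two latent states
Pi = np.array([[0.9, 0.1], [0.2, 0.8]])    # persistent transition structure
phi = np.array([[0.8, 0.2], [0.3, 0.7]])   # binary response probabilities
ll = forward_loglik(pi, Pi, phi, [0, 0, 1, 1])
```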


Proceedings ArticleDOI
16 Jun 2012
TL;DR: A new latent variable model for scene recognition that represents a scene as a collection of region models arranged in a reconfigurable pattern and uses a latent variable to specify which region model is assigned to each image region.
Abstract: We propose a new latent variable model for scene recognition. Our approach represents a scene as a collection of region models (“parts”) arranged in a reconfigurable pattern. We partition an image into a predefined set of regions and use a latent variable to specify which region model is assigned to each image region. In our current implementation we use a bag of words representation to capture the appearance of an image region. The resulting method generalizes a spatial bag of words approach that relies on a fixed model for the bag of words in each image region. Our models can be trained using both generative and discriminative methods. In the generative setting we use the Expectation-Maximization (EM) algorithm to estimate model parameters from a collection of images with category labels. In the discriminative setting we use a latent structural SVM (LSSVM). We note that LSSVMs can be very sensitive to initialization and demonstrate that generative training with EM provides a good initialization for discriminative training with LSSVM.

177 citations


Proceedings ArticleDOI
08 Jul 2012
TL;DR: By carefully handling words that are not in the sentences (missing words), a reliable latent variable model on sentences is trained and a new evaluation framework for sentence similarity is proposed: Concept Definition Retrieval.
Abstract: Sentence similarity is the process of computing a similarity score between two sentences. Previous sentence similarity work finds that latent semantics approaches to the problem do not perform well due to insufficient information in single sentences. In this paper, we show that by carefully handling words that are not in the sentences (missing words), we can train a reliable latent variable model on sentences. In the process, we propose a new evaluation framework for sentence similarity: Concept Definition Retrieval. The new framework allows for large-scale tuning and testing of sentence similarity models. Experiments on the new task and previous data sets show significant improvement of our model over baselines and other traditional latent variable models. Our results indicate comparable and even better performance than current state-of-the-art systems addressing the problem of sentence similarity.

144 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used an integrated choice and latent variable model in the context of beach visitors' willingness to pay for improvements in water quality, and showed how a latent attitudinal variable, which they refer to as a pro-intervention attitude, helps explain both the responses from the stated choice exercise and answers to various rating questions related to respondent attitudes.
Abstract: The study of human behaviour, and in particular of individual choices, is of great interest in the field of environmental economics. Substantial attention has been paid to the way in which preferences vary across individuals, and there is a realisation that such differences are at least in part due to underlying attitudes and convictions. While this has been confirmed in empirical work, the methods typically employed are based on the arguably misguided use of responses to attitudinal questions as direct measures of underlying attitudes. As discussed in other literature, especially in transport research, this potentially leads to measurement error and endogeneity bias, and attitudes should rather be treated as latent variables. In this paper, we illustrate the use of such an Integrated Choice and Latent Variable model in the context of beach visitors’ willingness-to-pay for improvements in water quality. We show how a latent attitudinal variable, which we refer to as a pro-intervention attitude, helps explain both the responses from the stated choice exercise and the answers to various rating questions related to respondent attitudes. The incorporation of the latent variable leads to important gains in model fit and substantially different willingness-to-pay patterns.

132 citations


Journal ArticleDOI
TL;DR: The paper provides a full theoretical foundation for the causal discovery procedure first presented by Eberhardt et al. (2010), and adapts the procedure to cellular network inference, applying it to the biologically realistic data of the DREAM challenges.
Abstract: Identifying cause-effect relationships between variables of interest is a central problem in science. Given a set of experiments we describe a procedure that identifies linear models that may contain cycles and latent variables. We provide a detailed description of the model family, full proofs of the necessary and sufficient conditions for identifiability, a search algorithm that is complete, and a discussion of what can be done when the identifiability conditions are not satisfied. The algorithm is comprehensively tested in simulations, comparing it to competing algorithms in the literature. Furthermore, we adapt the procedure to the problem of cellular network inference, applying it to the biologically realistic data of the DREAM challenges. The paper provides a full theoretical foundation for the causal discovery procedure first presented by Eberhardt et al. (2010) and Hyttinen et al. (2010).

112 citations


Journal ArticleDOI
TL;DR: The findings suggest that developmental research should integrate quantitative and qualitative perspectives of construct change over time and pay more attention to issues of measurement invariance and qualitative changes of constructs over time.
Abstract: Research has shown that the average values for academic interest decrease during adolescence. Looking beyond such quantitative decline, we explored qualitative change of interest in the domain of mathematics across adolescence. Study 1 was based on a longitudinal data set (annual assessments from Grade 5 to Grade 9; N = 3,193). Latent variable modeling showed that the measurement coefficients of the latent variable of interest (intercepts, structural weights, and error variances) significantly differed across time points, indicating structural changes of the construct. Study 2 was based on interviews with adolescents (Grades 5 and 9, N = 70). Cognitive validation was used to explore differences in subjective concepts of interest across age groups. As expected, there were significant age-related differences, indicating a shift from an affect-laden concept in 5th grade to a more cognitively oriented concept in 9th grade. The findings suggest that developmental research should integrate quantitative and qualitative perspectives of construct change over time and pay more attention to issues of measurement invariance and qualitative changes of constructs over time.

Journal ArticleDOI
TL;DR: This paper employed a latent variable modeling approach to examine the Simple View of Reading in a sample of students from 3rd, 7th, and 10th grades (N = 215, 188, and 180, respectively).
Abstract: The present study utilized a latent variable modeling approach to examine the Simple View of Reading in a sample of students from 3rd, 7th, and 10th grades (N = 215, 188, and 180, respectively). Latent interaction modeling and other latent variable models were employed to investigate (a) the functional form of the relationship between decoding and linguistic comprehension in third grade, (b) the contribution of these predictors as children progress through school, and (c) the contribution of passage fluency, working memory, and performance IQ to a model of reading for all grades. Results did not support the Simple View of Reading for students in grades 3, 7 and 10. Additionally, it was found that passage fluency significantly predicted reading comprehension after controlling for decoding and linguistic comprehension for students in grades 7 and 10. Likewise performance IQ was found to be a significant predictor of reading comprehension for students in third grade.

Posted Content
TL;DR: This paper presents a fully Bayesian latent variable model which exploits conditional nonlinear (in)-dependence structures to learn an efficient latent representation and introduces a relaxation to the discrete segmentation and allow for a "softly" shared latent space.
Abstract: In this paper we present a fully Bayesian latent variable model which exploits conditional nonlinear (in)-dependence structures to learn an efficient latent representation. The latent space is factorized to represent shared and private information from multiple views of the data. In contrast to previous approaches, we introduce a relaxation to the discrete segmentation and allow for a "softly" shared latent space. Further, Bayesian techniques allow us to automatically estimate the dimensionality of the latent spaces. The model is capable of capturing structure underlying extremely high dimensional spaces. This is illustrated by modelling unprocessed images with tens of thousands of pixels. This also allows us to directly generate novel images from the trained model by sampling from the discovered latent spaces. We also demonstrate the model by prediction of human pose in an ambiguous setting. Our Bayesian framework allows us to perform disambiguation in a principled manner by including latent space priors which incorporate the dynamic nature of the data.

Reference EntryDOI
26 Sep 2012
TL;DR: Latent class analysis (LCA) as mentioned in this paper is a statistical approach to modeling a discrete latent variable using multiple, discrete observed variables as indicators, such that individuals belong to mutually exclusive and exhaustive unobservable subgroups.
Abstract: Often quantities of interest in psychology cannot be observed directly. These unobservable quantities, known as latent variables, tend to be complex, often multidimensional, constructs. In many cases these constructs are categorical, such that individuals belong to mutually exclusive and exhaustive unobservable subgroups. Latent class analysis (LCA) is a statistical approach to modeling a discrete latent variable using multiple, discrete observed variables as indicators. Examples of latent class variables that appear in the psychology literature include temperament type, substance use behavior, teaching style, stages of change in the transtheoretical model, and latent classes of risk. The first section of this chapter provides a conceptual introduction to latent classes, followed by a technical introduction to the mathematical model, including multiple-groups LCA and LCA with covariates. This is followed by a discussion of parameter restrictions, model selection, and goodness-of-fit. The second section demonstrates LCA using the empirical example of depression subtypes in adolescence. Five latent classes were identified based on responses to eight questionnaire items assessing depression symptoms: Non-depressed (characterized by a low probability of reporting all eight depression symptoms), sad, disliked, sad + disliked, and depressed (characterized by a high probability of reporting all eight depression symptoms). The third section presents longitudinal extensions of the model, including repeated-measures LCA and latent transition analysis (LTA). The empirical example is extended to examine change in depression subtypes over time. The final sections describe recent extensions to the latent class model and areas that merit additional research in the future. Keywords: latent class analysis; latent transition analysis; latent variable model; categorical data
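A hedged sketch of the EM algorithm behind such a latent class model, for binary symptom-style indicators; the data, item count, and class count below are simulated stand-ins, not the depression example:

```python
import numpy as np

def lca_em(Y, n_classes, n_iter=200, seed=0):
    """Y: (n, J) binary indicator matrix. Returns class prevalences w and
    item-response probabilities rho[c, j] = P(item j = 1 | class c)."""
    rng = np.random.default_rng(seed)
    n, J = Y.shape
    w = np.full(n_classes, 1.0 / n_classes)
    rho = rng.uniform(0.25, 0.75, size=(n_classes, J))
    for _ in range(n_iter):
        # E-step: posterior class membership for each respondent
        logp = np.log(w) + Y @ np.log(rho).T + (1 - Y) @ np.log(1 - rho).T
        logp -= logp.max(axis=1, keepdims=True)
        post = np.exp(logp)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update prevalences and item-response probabilities
        w = post.mean(axis=0)
        rho = np.clip(post.T @ Y / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return w, rho

# two well-separated simulated classes, four binary items
rng = np.random.default_rng(1)
true_rho = np.array([[0.9, 0.9, 0.9, 0.9], [0.1, 0.1, 0.1, 0.1]])
z = rng.integers(0, 2, size=500)
Y = (rng.uniform(size=(500, 4)) < true_rho[z]).astype(float)
w, rho = lca_em(Y, n_classes=2)
```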

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This paper introduces a constrained latent variable model whose generated output inherently accounts for prior knowledge about the specific problem at hand, and proposes an approach that explicitly imposes equality and inequality constraints on the model's output during learning, thus avoiding the computational burden of having to account for these constraints at inference.
Abstract: Latent variable models provide valuable compact representations for learning and inference in many computer vision tasks. However, most existing models cannot directly encode prior knowledge about the specific problem at hand. In this paper, we introduce a constrained latent variable model whose generated output inherently accounts for such knowledge. To this end, we propose an approach that explicitly imposes equality and inequality constraints on the model's output during learning, thus avoiding the computational burden of having to account for these constraints at inference. Our learning mechanism can exploit non-linear kernels, while only involving sequential closed-form updates of the model parameters. We demonstrate the effectiveness of our constrained latent variable model on the problem of non-rigid 3D reconstruction from monocular images, and show that it yields qualitative and quantitative improvements over several baselines.

Journal ArticleDOI
TL;DR: In this paper, the authors revisited and discussed popular measurement invariance testing procedures for latent constructs evaluated by multiple indicators in distinct populations and recommended that empirical studies on constructs in multiple populations be concerned in general with alternative invariance examination and ensuring the inclusion of their invariance conditions in models aimed at investigating group differences and similarities in latent means, variances, and interrelationships.
Abstract: Popular measurement invariance testing procedures for latent constructs evaluated by multiple indicators in distinct populations are revisited and discussed. A frequently used test of factor loading invariance is shown to possess serious limitations that in general preclude it from accomplishing its goal of ascertaining this invariance. A process of mean intercept invariance evaluation is subsequently examined, and it is shown that within this framework no statistical test is available for group identity of the intercepts. Rather than pursuing these popular and widely used invariance testing procedures, it is recommended that empirical studies on constructs in multiple populations be concerned with alternative measurement invariance examination, and with ensuring the inclusion of invariance conditions in models aimed at investigating group differences and similarities in latent means, variances, and interrelationships. The discussion is illustrated using data from a cognitive intervention study.

Journal ArticleDOI
TL;DR: Results demonstrate that the proposed probabilistic model for multiple-instrument automatic music transcription outperforms leading approaches from the transcription literature, using several error metrics.
Abstract: In this work, a probabilistic model for multiple-instrument automatic music transcription is proposed. The model extends the shift-invariant probabilistic latent component analysis method, which is used for spectrogram factorization. Proposed extensions support the use of multiple spectral templates per pitch and per instrument source, as well as a time-varying pitch contribution for each source. Thus, this method can effectively be used for multiple-instrument automatic transcription. In addition, the shift-invariant aspect of the method can be exploited for detecting tuning changes and frequency modulations, as well as for visualizing pitch content. For note tracking and smoothing, pitch-wise hidden Markov models are used. For training, pitch templates from eight orchestral instruments were extracted, covering their complete note range. The transcription system was tested on multiple-instrument polyphonic recordings from the RWC database, a Disklavier data set, and the MIREX 2007 multi-F0 data set. Results demonstrate that the proposed method outperforms leading approaches from the transcription literature, using several error metrics.
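The non-shift-invariant core of probabilistic latent component analysis (PLCA), which the proposed model extends, can be sketched as an EM factorization of a toy magnitude spectrogram. This is the generic PLCA recursion, not the paper's shift-invariant extension:

```python
import numpy as np

def plca(V, n_z, n_iter=150, seed=0):
    """Factorize V (F x T) as V.sum() * sum_z P(z) P(f|z) P(t|z)."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    pz = np.full(n_z, 1.0 / n_z)
    pf = rng.dirichlet(np.ones(F), n_z).T      # (F, n_z): spectral templates
    pt = rng.dirichlet(np.ones(T), n_z).T      # (T, n_z): activations
    for _ in range(n_iter):
        # E-step: posterior P(z | f, t)
        joint = pz[None, None, :] * pf[:, None, :] * pt[None, :, :]
        post = joint / joint.sum(axis=2, keepdims=True)
        # M-step: reweight by the observed energy and renormalize
        w = V[:, :, None] * post
        pf = w.sum(axis=1); pf /= pf.sum(axis=0, keepdims=True)
        pt = w.sum(axis=0); pt /= pt.sum(axis=0, keepdims=True)
        pz = w.sum(axis=(0, 1)); pz /= pz.sum()
    return pz, pf, pt

# toy rank-2 "spectrogram": two templates with distinct activations
rng = np.random.default_rng(2)
V = np.outer(rng.uniform(0.1, 1, 20), rng.uniform(0.1, 1, 30)) \
  + np.outer(rng.uniform(0.1, 1, 20), rng.uniform(0.1, 1, 30))
pz, pf, pt = plca(V, n_z=2)
recon = V.sum() * (pf * pz) @ pt.T
```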

Journal ArticleDOI
TL;DR: It is concluded that the taxometric method may be an effective approach to distinguishing between dimensional and categorical structure but that other latent modeling procedures may be more effective for specifying the model.
Abstract: Statistical analyses investigating latent structure can be divided into those that estimate structural model parameters and those that detect the structural model type. The most basic distinction among structure types is between categorical (discrete) and dimensional (continuous) models. It is a common, and potentially misleading, practice to apply some method for estimating a latent structural model such as factor analysis without first verifying that the latent structure type assumed by that method applies to the data. The taxometric method was developed specifically to distinguish between dimensional and 2-class models. This study evaluated the taxometric method as a means of identifying categorical structures in general. We assessed the ability of the taxometric method to distinguish between dimensional (1-class) and categorical (2-5 classes) latent structures and to estimate the number of classes in categorical datasets. Based on 50,000 Monte Carlo datasets (10,000 per structure type), and using the comparison curve fit index averaged across 3 taxometric procedures (Mean Above Minus Below A Cut, Maximum Covariance, and Latent Mode Factor Analysis) as the criterion for latent structure, the taxometric method was found superior to finite mixture modeling for distinguishing between dimensional and categorical models. A multistep iterative process of applying taxometric procedures to the data often failed to identify the number of classes in the categorical datasets accurately, however. It is concluded that the taxometric method may be an effective approach to distinguishing between dimensional and categorical structure but that other latent modeling procedures may be more effective for specifying the model.
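The MAMBAC (Mean Above Minus Below A Cut) procedure named in the abstract can be sketched in its simplest two-indicator form. The data are simulated: a two-class (taxonic) mixture should produce a curve that peaks near the middle cuts, whereas dimensional data tend to give a dish-shaped curve:

```python
import numpy as np

def mambac_curve(x, y, n_cuts=25):
    """Slide cuts along x; at each cut, mean of y above minus mean of y below."""
    order = np.argsort(x)
    ys = y[order]
    quantiles = np.linspace(0.1, 0.9, n_cuts)  # cut positions as sample quantiles
    curve = []
    for q in quantiles:
        k = int(q * len(ys))
        curve.append(ys[k:].mean() - ys[:k].mean())
    return quantiles, np.array(curve)

# simulated taxonic (two-class) data
rng = np.random.default_rng(0)
cls = rng.integers(0, 2, size=1000)
x = rng.normal(2.0 * cls, 1.0)
y = rng.normal(2.0 * cls, 1.0)
quantiles, curve = mambac_curve(x, y)
```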

Journal ArticleDOI
TL;DR: In this article, an approach to describe change over time of the latent process underlying multiple longitudinal outcomes of different types (binary, ordinal, quantitative) by relying on random effect models is proposed.
Abstract: Multivariate ordinal and quantitative longitudinal data measuring the same latent construct are frequently collected in psychology. We propose an approach to describe change over time of the latent process underlying multiple longitudinal outcomes of different types (binary, ordinal, quantitative). By relying on random-effect models, this approach handles individually varying and outcome-specific measurement times. A linear mixed model describes the latent process trajectory while equations of observation combine outcome-specific threshold models for binary or ordinal outcomes and models based on flexible parameterized non-linear families of transformations for Gaussian and non-Gaussian quantitative outcomes. As models assuming continuous distributions may also be used with discrete outcomes, we propose likelihood and information criteria for discrete data to compare the goodness of fit of models assuming either a continuous or a discrete distribution for discrete data. Two analyses of the repeated measures of the Mini-Mental State Examination, a 20-item psychometric test, illustrate the method. First, we highlight the usefulness of parameterized non-linear transformations by comparing different flexible families of transformation for modelling the test as a sum score. Then, change over time of the latent construct underlying directly the 20 items is described using two-parameter longitudinal item response models that are specific cases of the approach.

Book ChapterDOI
07 Oct 2012
TL;DR: Latent Hough Transform (LHT) as discussed by the authors augments the Hough transform with latent variables in order to enforce consistency among votes, so that only votes that agree on the assignment of the latent variable are allowed to support a single hypothesis.
Abstract: Hough transform based methods for object detection work by allowing image features to vote for the location of the object. While this representation allows for parts observed in different training instances to support a single object hypothesis, it also produces false positives by accumulating votes that are consistent in location but inconsistent in other properties like pose, color, shape or type. In this work, we propose to augment the Hough transform with latent variables in order to enforce consistency among votes. To this end, only votes that agree on the assignment of the latent variable are allowed to support a single hypothesis. For training a Latent Hough Transform (LHT) model, we propose a learning scheme that exploits the linearity of the Hough transform based methods. Our experiments on two datasets including the challenging PASCAL VOC 2007 benchmark show that our method outperforms traditional Hough transform based methods leading to state-of-the-art performance on some categories.
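The consistency idea can be sketched in one dimension: votes carry a latent label (say, a pose), and a hypothesis is scored by the best single label rather than by pooling label-inconsistent votes. A toy illustration, not the paper's learning scheme:

```python
import numpy as np

def latent_hough_scores(votes, n_bins, n_latent):
    """votes: iterable of (position_bin, latent_label, weight).
    Score per bin = max over latent labels of accumulated weight."""
    acc = np.zeros((n_latent, n_bins))
    for pos, z, w in votes:
        acc[z, pos] += w
    return acc.max(axis=0)   # a plain Hough transform would use acc.sum(axis=0)

votes = [(5, 0, 1.0), (5, 0, 1.0), (5, 1, 0.4),   # bin 5: votes agree on pose 0
         (9, 0, 1.0), (9, 1, 1.2)]                # bin 9: pose-inconsistent votes
scores = latent_hough_scores(votes, n_bins=12, n_latent=2)
# summed votes are similar (2.4 vs 2.2), but the latent model clearly prefers bin 5
```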

Proceedings Article
26 Jun 2012
TL;DR: In this paper, a fully Bayesian latent variable model is proposed to capture structure underlying extremely high dimensional spaces by exploiting conditional nonlinear (in-dependence) structures to learn an efficient latent representation.
Abstract: In this paper we present a fully Bayesian latent variable model which exploits conditional nonlinear (in)-dependence structures to learn an efficient latent representation. The latent space is factorized to represent shared and private information from multiple views of the data. In contrast to previous approaches, we introduce a relaxation to the discrete segmentation and allow for a "softly" shared latent space. Further, Bayesian techniques allow us to automatically estimate the dimensionality of the latent spaces. The model is capable of capturing structure underlying extremely high dimensional spaces. This is illustrated by modelling unprocessed images with tenths of thousands of pixels. This also allows us to directly generate novel images from the trained model by sampling from the discovered latent spaces. We also demonstrate the model by prediction of human pose in an ambiguous setting. Our Bayesian framework allows us to perform disambiguation in a principled manner by including latent space priors which incorporate the dynamic nature of the data.

Journal ArticleDOI
TL;DR: The proposed model has a 92.9% accuracy rate in predicting customer types, is less impacted by prior probabilities, and has significantly lower Type I error rates than the other five approaches.
Abstract: A Bayesian latent variable model with classification and regression tree approach is built to overcome three challenges encountered by a bank in the credit-granting process. These three challenges include (1) the bank wants to predict the future performance of an applicant accurately; (2) given current information about cardholders' credit usage and repayment behavior, financial institutions would like to determine the optimal credit limit and APR for an applicant; and (3) the bank would like to improve its efficiency by automating the process of credit-granting decisions. Data from a leading bank in Taiwan is used to illustrate the combined approach. The data set consists of each credit card holder's credit usage and repayment data, demographic information, and credit report. Empirical study shows that the demographic variables used in most credit scoring models have little explanatory ability with regard to a cardholder's credit usage and repayment behavior. A cardholder's credit history provides the most important information in credit scoring. The continuous latent customer quality from the Bayesian latent variable model allows considerable latitude for producing finer rules for credit granting decisions. Compared to the performance of discriminant analysis, logistic regression, neural network, multivariate adaptive regression splines (MARS) and support vector machine (SVM), the proposed model has a 92.9% accuracy rate in predicting customer types, is less impacted by prior probabilities, and has significantly lower Type I error rates than the other five approaches.

Journal ArticleDOI
TL;DR: A mixture of latent variables model is proposed for the model-based clustering, classification, and discriminant analysis of data comprising variables of mixed type, generalizing latent variable analysis.

Journal ArticleDOI
TL;DR: This model induces a framework for functional response regression in which the distribution of the curves is allowed to change flexibly with predictors, allowing flexible effects on not only the mean curve but also the distribution about the mean.
Abstract: In studies involving functional data, it is commonly of interest to model the impact of predictors on the distribution of the curves, allowing flexible effects on not only the mean curve but also the distribution about the mean. Characterizing the curve for each subject as a linear combination of a high-dimensional set of potential basis functions, we place a sparse latent factor regression model on the basis coefficients. We induce basis selection by choosing a shrinkage prior that allows many of the loadings to be close to zero. The number of latent factors is treated as unknown through a highly efficient adaptive blocked Gibbs sampler. Predictors enter at the latent variable level, allowing different predictors to impact different latent factors. This model induces a framework for functional response regression in which the distribution of the curves is allowed to change flexibly with predictors. The performance is assessed through simulation studies and the methods are applied to data on blood pressure trajectories during pregnancy.
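The core mechanic — represent a curve in an over-complete basis and let shrinkage pull unneeded coefficients toward zero — can be sketched with a ridge penalty as a crude stand-in for the paper's shrinkage prior. The Fourier basis, the true two-term curve, and the penalty weight below are all assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)

# Over-complete basis: constant + 19 sinusoids (hypothetical choice)
K = 20
B = np.column_stack([np.ones_like(t)] +
                    [np.sin(2 * np.pi * k * t) for k in range(1, K)])

# True curve uses only two basis functions; add observation noise
y = 1.0 * np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * 3 * t) \
    + rng.normal(0, 0.1, t.size)

# Ridge shrinkage on basis coefficients: a rough analogue of a shrinkage
# prior that lets most loadings sit near zero
lam = 5.0
theta = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)

active = np.abs(theta) > 0.1
print(active.sum())  # only the truly-needed coefficients survive
```

The estimate recovers large coefficients only on the two basis functions that generated the curve, which is the basis-selection behaviour the shrinkage prior is designed to induce.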

Journal ArticleDOI
TL;DR: Of the 5 estimation methods, it was found that overall the methods based on maximum likelihood estimation and the Bayesian approach performed best in terms of bias, root-mean-square error, standard error ratios, power, and Type I error control, although key differences were observed.
Abstract: Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent moderated structural equation method, (d) a fully Bayesian approach, and (e) marginal maximum likelihood estimation. Of the 5 estimation methods, it was found that overall the methods based on maximum likelihood estimation and the Bayesian approach performed best in terms of bias, root-mean-square error, standard error ratios, power, and Type I error control, although key differences were observed. Similarities as well as disparities among methods are highlighted, and general recommendations are articulated. As a point of comparison, all 5 approaches were fit to a reparameterized version of the latent quadratic model to educational reading data.
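Method (a), the 2-stage moderated regression approach, is simple enough to sketch: estimate latent scores from noisy indicators, then regress the outcome on the score and its square. Everything below (loadings, noise levels, true coefficients) is a made-up simulation, not the paper's design; it also illustrates why this approach can underperform, since measurement error attenuates the quadratic coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
xi = rng.normal(size=n)  # latent predictor

# Measurement model: three noisy indicators of xi (assumed loadings of 0.8)
X = np.column_stack([0.8 * xi + rng.normal(0, 0.6, n) for _ in range(3)])

# Structural model with a quadratic effect (true coefficients 0.5 and 0.3)
y = 0.5 * xi + 0.3 * xi**2 + rng.normal(0, 0.5, n)

# Stage 1: crude factor scores (mean of standardized indicators)
score = ((X - X.mean(0)) / X.std(0)).mean(axis=1)
score = (score - score.mean()) / score.std()

# Stage 2: moderated regression with a quadratic term
D = np.column_stack([np.ones(n), score, score**2])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
print(beta)  # quadratic coefficient is positive but attenuated below 0.3
```

The recovered quadratic coefficient is reliably positive but biased toward zero relative to the true 0.3, consistent with the abstract's finding that the ML-based and Bayesian approaches outperform the 2-stage method.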

Journal ArticleDOI
TL;DR: In this paper, the authors present a framework that aims to give a holistic view of the optimization formulations that can arise from the need to invert an LVRM, and an example of these formulations can be found in Figure 1.
Abstract: Latent variable regression model (LVRM) inversion is a useful tool to support the development of new products and their manufacturing conditions. The objective of the model inversion exercise is that of finding the best combination of regressors (e.g., raw material properties, process parameters) that are needed to obtain a desired response (e.g., product quality) from the model. Each of the published applications where model inversion has been applied utilizes a tailored approach to achieve the inversion, given the specific objectives and needs. These approaches range from the direct inversion of the LVRM to the formulation of an objective function that is optimized using nonlinear programming. In this paper we present a framework that aims to give a holistic view of the optimization formulations that can arise from the need to invert an LVRM. The different sets of equations that become relevant (either as a term within the objective function or as a constraint) are discussed, and an example of these sce...
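The "direct inversion" end of the spectrum described above can be sketched with a PCA-based latent variable model: fit a low-dimensional latent subspace to historical process data, regress quality on the scores, then solve backwards from a desired quality value to candidate process settings. The data-generating setup, the two-component choice, and the minimum-norm inversion rule are all assumptions for illustration, not the framework in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical historical data: 5 process variables driven by 2 latent factors
n, p = 200, 5
T = rng.normal(size=(n, 2))                     # latent scores
P = rng.normal(size=(p, 2))                     # loadings
X = T @ P.T + 0.05 * rng.normal(size=(n, p))    # process measurements
y = T @ np.array([1.0, -0.5]) + rng.normal(0, 0.05, n)  # product quality

# Fit: PCA latent subspace, then an inner regression from scores to quality
U, s, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:2].T                                    # 2-component subspace
Th = X @ V                                      # estimated scores
q, *_ = np.linalg.lstsq(Th, y, rcond=None)      # y ~ T q

# Direct model inversion: minimum-norm scores achieving the target response,
# then map back to the process-variable space
y_des = 1.2
t_new = q * y_des / (q @ q)
x_new = V @ t_new                               # candidate process settings
print(x_new)
```

In practice, as the abstract notes, the inversion is usually posed as a constrained optimization (e.g., keeping `t_new` inside the region spanned by historical data) rather than this unconstrained closed form.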

Proceedings ArticleDOI
09 Jul 2012
TL;DR: The Intention-Driven Dynamics Model (IDDM), a latent variable model for inferring unknown human intentions, is proposed and an efficient approximate inference algorithm to infer the human’s intention from an ongoing movement is introduced.
Abstract: Inference of human intention may be an essential step towards understanding human actions and is hence important for realizing efficient human-robot interaction. In this paper, we propose the Intention-Driven Dynamics Model (IDDM), a latent variable model for inferring unknown human intentions. We train the model based on observed human movements/actions. We introduce an efficient approximate inference algorithm to infer the human’s intention from an ongoing movement. We verify the feasibility of the IDDM in two scenarios, i.e., target inference in robot table tennis and action recognition for interactive humanoid robots. In both tasks, the IDDM achieves substantial improvements over state-of-the-art regression and classification.

Journal ArticleDOI
30 Jan 2012
TL;DR: This paper describes the National Research Council of Canada's submission to the 2011 i2b2 NLP challenge on the detection of emotions in suicide notes, and presents a latent sequence model, which learns to segment the sentence into a number of emotion regions.
Abstract: This paper describes the National Research Council of Canada's submission to the 2011 i2b2 NLP challenge on the detection of emotions in suicide notes. In this task, each sentence of a suicide note is annotated with zero or more emotions, making it a multi-label sentence classification task. We employ two distinct large-margin models capable of handling multiple labels. The first uses one classifier per emotion, and is built to simplify label balance issues and to allow extremely fast development. This approach is very effective, scoring an F-measure of 55.22 and placing fourth in the competition, making it the best system that does not use web-derived statistics or re-annotated training data. Second, we present a latent sequence model, which learns to segment the sentence into a number of emotion regions. This model is intended to gracefully handle sentences that convey multiple thoughts and emotions. Preliminary work with the latent sequence model shows promise, resulting in comparable performance using fewer features.
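The first system's design — one classifier per emotion, often called binary relevance — can be sketched with a tiny perceptron per label over bag-of-words features. The sentences, labels, and training setup below are invented toy data, not the i2b2 corpus or the authors' large-margin models.

```python
import numpy as np

# Toy multi-label data: each sentence may carry zero or more emotions
sentences = ["i am so sorry for everything",
             "i love you all goodbye",
             "nobody ever listened to me",
             "thank you for your kindness"]
labels = {"guilt": [1, 0, 0, 0],
          "love":  [0, 1, 0, 1],
          "blame": [0, 0, 1, 0]}

vocab = sorted({w for s in sentences for w in s.split()})

def vec(s):
    """Bag-of-words count vector for a sentence."""
    return np.array([s.split().count(w) for w in vocab], float)

X = np.stack([vec(s) for s in sentences])

# Binary relevance: train one independent perceptron per emotion
models = {}
for emo, ys in labels.items():
    y = np.array(ys, float)
    w = np.zeros(len(vocab))
    for _ in range(50):                 # perceptron epochs
        for xi, yi in zip(X, y):
            pred = float(xi @ w > 0)
            w += (yi - pred) * xi       # update only on mistakes
    models[emo] = w

pred = {e: [int(x @ w > 0) for x in X] for e, w in models.items()}
print(pred["love"])
```

Because each label gets its own classifier, label imbalance can be handled per emotion, which is the simplicity advantage the abstract highlights; the latent sequence model instead segments a sentence into emotion regions to handle multiple co-occurring emotions.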

Journal ArticleDOI
TL;DR: In this paper, the same latent process can manifest differently from one individual to another, thus recognizing that the process is general but its realization in a given person is to some degree idiosyncratic.
Abstract: To model processes we propose merging idiographic filter measurement with dynamic factor analysis. This involves testing whether or not the same latent dynamics (concurrent and lagged factor interrelations) can describe different individuals' observed multivariate time series. The methodology allows fitting, across different individuals, dynamic factor models that are invariant with respect to the latent dynamics, but not necessarily the factor loadings (measurement model). This methodology allows the same latent process to manifest differently from one individual to another, thus recognizing that the process is general but its realization in a given person is to some degree idiosyncratic. The approach is illustrated with empirical data.

Journal ArticleDOI
TL;DR: This research highlights the need to understand more fully the rationale behind the continued use of these medications and how these medications can be modified to address these problems.
Abstract: Funding: Part of this research is supported by the Institute of Education Sciences (R305B080016 and R305D100039) and the National Institute on Drug Abuse (R01DA026943 and R01DA030466). The views expressed here belong to the author and do not reflect the views or policies of the funding agencies.