Topic

Probabilistic latent semantic analysis

About: Probabilistic latent semantic analysis (PLSA) is a research topic. Over its lifetime, 2,884 publications on this topic have received 198,341 citations. The topic is also known as: PLSA.
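For orientation, the model behind the topic factorizes a term-document co-occurrence matrix as P(w, d) = P(d) Σ_z P(z|d) P(w|z) over latent topics z, fitted by EM. Below is a minimal NumPy sketch of the standard PLSA EM updates on a hypothetical toy count matrix; the data, topic count, and iteration budget are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy term-document count matrix: 6 documents x 5 words (hypothetical data).
N = np.array([
    [4, 2, 0, 0, 1],
    [3, 3, 1, 0, 0],
    [0, 1, 4, 3, 0],
    [0, 0, 3, 4, 1],
    [1, 0, 0, 1, 5],
    [2, 1, 0, 0, 4],
], dtype=float)
D, W = N.shape
K = 2  # number of latent topics z

# Random initialisation of P(z|d) and P(w|z), each row normalised.
p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(1, keepdims=True)
p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(1, keepdims=True)

for _ in range(100):
    # E-step: responsibilities P(z|d,w) proportional to P(z|d) * P(w|z).
    post = p_z_d[:, :, None] * p_w_z[None, :, :]       # shape (D, K, W)
    post /= post.sum(1, keepdims=True) + 1e-12
    # M-step: re-estimate both conditionals from expected counts.
    nz = N[:, None, :] * post                           # expected counts (D, K, W)
    p_w_z = nz.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = nz.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True)
```

After convergence, each row of `p_w_z` is a topic's word distribution and each row of `p_z_d` is a document's topic mixture.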


Papers
Journal ArticleDOI
01 Sep 2006
TL;DR: The concept of a latent doodle space, a low‐dimensional space derived from a set of input doodles, or simple line drawings, is proposed and two practical applications are presented: first, a randomized stamp tool that creates a different image on every usage; and second, “personalized probabilistic fonts,” a handwriting synthesis technique that mimics the idiosyncrasies of one's own handwriting.
Abstract: We propose the concept of a latent doodle space, a low-dimensional space derived from a set of input doodles, or simple line drawings. The latent space provides a foundation for generating new drawings that are similar, but not identical to, the input examples. The two key components of this technique are 1) a heuristic algorithm for finding stroke correspondences between the drawings, and 2) the use of latent variable methods to automatically extract a low-dimensional latent doodle space from the inputs. We present two practical applications that demonstrate the utility of this idea: first, a randomized stamp tool that creates a different image on every usage; and second, “personalized probabilistic fonts,” a handwriting synthesis technique that mimics the idiosyncrasies of one's own handwriting. Keywords: sketch, by-example, style learning, scattered data interpolation, principal component analysis, radial basis functions, Gaussian processes, digital in-betweening, handwriting synthesis

41 citations
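The core construction in the paper above, a low-dimensional latent space extracted from aligned examples, can be sketched with plain PCA (one of the latent variable methods the abstract names). Everything below is illustrative: the "doodles" are synthetic 1-D stroke vectors assumed to already be in correspondence, standing in for the paper's heuristic stroke-matching step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: each "doodle" is a fixed-length vector of stroke
# coordinates, already put in correspondence across examples.
base = np.sin(np.linspace(0, 2 * np.pi, 20))
doodles = np.stack([base + 0.1 * rng.standard_normal(20) for _ in range(30)])

# PCA via SVD: project the examples into a low-dimensional latent space.
mean = doodles.mean(0)
U, S, Vt = np.linalg.svd(doodles - mean, full_matrices=False)
k = 2
latent = (doodles - mean) @ Vt[:k].T        # coordinates in the latent space

# Sample a new latent point near the examples and decode it back into a
# drawing that is similar, but not identical, to the inputs.
z = latent.mean(0) + 0.5 * latent.std(0) * rng.standard_normal(k)
new_doodle = mean + z @ Vt[:k]
```

Re-sampling `z` on every call gives the "different image on every usage" behaviour of the randomized stamp tool.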

Book ChapterDOI
20 Oct 2007
TL;DR: By incorporating sparsification, dynamics and back-constraints within the LL-GPLVM, this paper develops a general framework for learning smooth latent models of different activities within a shared latent space, allowing the learning of specific topologies and transitions between different activities.
Abstract: Learned, activity-specific motion models are useful for human pose and motion estimation. Nevertheless, while the use of activity-specific models simplifies monocular tracking, it leaves open the larger issues of how one learns models for multiple activities or stylistic variations, and how such models can be combined with natural transitions between activities. This paper extends the Gaussian process latent variable model (GP-LVM) to address some of these issues. We introduce a new approach to constraining the latent space that we refer to as the locally-linear Gaussian process latent variable model (LL-GPLVM). The LL-GPLVM allows for an explicit prior over the latent configurations that aims to preserve local topological structure in the training data. We reduce the computational complexity of the GPLVM by adapting sparse Gaussian process regression methods to the GP-LVM. By incorporating sparsification, dynamics and back-constraints within the LL-GPLVM we develop a general framework for learning smooth latent models of different activities within a shared latent space, allowing the learning of specific topologies and transitions between different activities.

41 citations

Journal ArticleDOI
TL;DR: A novel technique called Affective Circumplex Transformation (ACT) is proposed for representing the moods of music tracks in an interpretable and robust fashion based on semantic computing of social tags and research in emotion modeling, and its performance is robust against a low number of track-level mood tags.
Abstract: Social tags inherent in online music services such as Last.fm provide a rich source of information on musical moods. The abundance of social tags makes this data highly beneficial for developing techniques to manage and retrieve mood information, and enables study of the relationships between music content and mood representations with data substantially larger than that available for conventional emotion research. However, no systematic assessment has been done on the accuracy of social tags and derived semantic models at capturing mood information in music. We propose a novel technique called Affective Circumplex Transformation (ACT) for representing the moods of music tracks in an interpretable and robust fashion based on semantic computing of social tags and research in emotion modeling. We validate the technique by predicting listener ratings of moods in music tracks, and compare the results to prediction with the Vector Space Model (VSM), Singular Value Decomposition (SVD), Nonnegative Matrix Factorization (NMF), and Probabilistic Latent Semantic Analysis (PLSA). The results show that ACT consistently outperforms the baseline techniques, and its performance is robust against a low number of track-level mood tags. The results give validity and analytical insights for harnessing millions of music tracks and associated mood data available through social tags in application development.

41 citations
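The VSM/SVD baselines the paper compares ACT against can be sketched directly: weight a track-by-tag count matrix, take a truncated SVD, and compare tracks in the reduced mood space. The matrix, tag vocabulary, and rank below are hypothetical; ACT itself is not reproduced here.

```python
import numpy as np

# Hypothetical track x mood-tag count matrix (columns might be tags such
# as "happy", "sad", "tender", "aggressive").
X = np.array([
    [5, 0, 1, 0],
    [4, 1, 0, 0],
    [0, 6, 2, 0],
    [0, 3, 0, 4],
    [1, 0, 0, 5],
], dtype=float)

# IDF-style weighting, then truncated SVD: the LSA-style baseline.
idf = np.log(X.shape[0] / (1e-12 + (X > 0).sum(0)))
Xw = X * idf
U, S, Vt = np.linalg.svd(Xw, full_matrices=False)
k = 2
track_vecs = U[:, :k] * S[:k]              # low-rank track representations

# Cosine similarity between tracks in the reduced mood space.
unit = track_vecs / np.linalg.norm(track_vecs, axis=1, keepdims=True)
sim = unit @ unit.T
```

Swapping the SVD step for NMF or the PLSA updates gives the other baselines listed in the abstract.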

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the performance of several methods for parameterizing multilevel latent class analysis, comparing how adequately such models fit Level 1 (individual) data given a correct specification of the number of latent classes at both levels.
Abstract: Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture modeling, as well as the application to complex multilevel sampling designs. The goal of this study was to investigate—through a Monte Carlo simulation study—the performance of several methods for parameterizing multilevel latent class analysis. Of particular interest was the comparison of several such models to adequately fit Level 1 (individual) data, given a correct specification of the number of latent classes at both levels (Level 1 and Level 2). Results include the parameter estimation accuracy as well as the quality of classification at Level 1.

41 citations
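The simulation setup the paper describes can be illustrated at its simplest: generate data from a known latent class model, refit by EM, and check parameter recovery. The sketch below is a single-level, 2-class model on binary items (the multilevel extensions studied in the paper add a Level 2 structure on top); all class profiles and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 500 respondents from 2 latent classes with different
# item-endorsement probabilities on 4 binary items.
true_p = np.array([[0.9, 0.8, 0.2, 0.1],
                   [0.2, 0.1, 0.9, 0.8]])
cls = rng.integers(0, 2, 500)
Y = (rng.random((500, 4)) < true_p[cls]).astype(float)

# EM for a 2-class latent class model.
pi = np.array([0.5, 0.5])                 # class proportions
p = rng.uniform(0.3, 0.7, (2, 4))         # item probabilities per class
for _ in range(200):
    # E-step: posterior class memberships from Bernoulli likelihoods.
    like = np.prod(p[None] ** Y[:, None] * (1 - p[None]) ** (1 - Y[:, None]), axis=2)
    post = pi * like
    post /= post.sum(1, keepdims=True)
    # M-step: update class sizes and item probabilities.
    pi = post.mean(0)
    p = (post.T @ Y) / post.sum(0)[:, None]
```

Repeating this over many replications, and scoring estimation accuracy and Level 1 classification quality, is the Monte Carlo design the abstract outlines.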

Book ChapterDOI
08 Nov 2010
TL;DR: This work learns latent spaces, and distributions within them, for image features and 3D poses separately first, and then learns a multi-modal conditional density between these two low-dimensional spaces in the form of Gaussian Mixture Regression.
Abstract: Discriminative approaches for human pose estimation model the functional mapping, or conditional distribution, between image features and 3D pose. Learning such multi-modal models in high-dimensional spaces, however, is challenging with limited training data, often resulting in over-fitting and poor generalization. To address these issues, latent variable models (LVMs) have been introduced. Shared LVMs attempt to learn a coherent, typically non-linear, latent space shared by image features and 3D poses, the distribution of data in that latent space, and conditional distributions to and from this latent space to carry out inference. Discovering the shared manifold structure can, however, be challenging in itself. In addition, shared LVM models are most often non-parametric, requiring the model representation to be a function of the training set size. We present a parametric framework that addresses these shortcomings. In particular, we learn latent spaces, and distributions within them, for image features and 3D poses separately first, and then learn a multi-modal conditional density between these two low-dimensional spaces in the form of Gaussian Mixture Regression. Using our model we can address the issue of over-fitting and generalization, since the data is denser in the learned latent space, as well as avoid the necessity of learning a shared manifold for the data. We quantitatively evaluate and compare the performance of the proposed method to several state-of-the-art alternatives, and show that our method gives competitive performance.

40 citations
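Gaussian Mixture Regression, the conditional model named in the abstract, predicts E[y|x] from a joint GMM over (x, y): each component contributes a linear regressor, weighted by its responsibility for x. A minimal sketch with hand-set (not learned) 2-D mixture parameters, purely for illustration:

```python
import numpy as np

# Illustrative joint GMM over (x, y): two components.
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 1.0],            # per component: [mu_x, mu_y]
                  [3.0, -1.0]])
covs = np.array([[[1.0, 0.6], [0.6, 1.0]],
                 [[1.0, -0.4], [-0.4, 1.0]]])

def gmr_predict(x):
    """E[y|x] under the joint GMM (Gaussian Mixture Regression)."""
    var_x = covs[:, 0, 0]
    # Responsibility of each component for x, from its 1-D marginal.
    px = weights * np.exp(-0.5 * (x - means[:, 0]) ** 2 / var_x) \
         / np.sqrt(2 * np.pi * var_x)
    r = px / px.sum()
    # Per-component conditional mean: mu_y + cov_yx / var_x * (x - mu_x).
    cond = means[:, 1] + covs[:, 1, 0] / var_x * (x - means[:, 0])
    return float(r @ cond)
```

Because the responsibilities re-weight the component regressors per input, the predictor stays multi-modal-aware, which is the property the paper relies on for the feature-to-pose mapping.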


Network Information
Related Topics (5)

Topic                       Papers    Citations  Related
Feature extraction          111.8K    2.1M       84%
Feature (computer vision)   128.2K    1.7M       84%
Support vector machine      73.6K     1.7M       84%
Deep learning               79.8K     2.1M       83%
Object detection            46.1K     1.3M       82%
Performance
Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   19
2022   77
2021   14
2020   36
2019   27
2018   58