Topic

Latent Dirichlet allocation

About: Latent Dirichlet allocation is a research topic. Over its lifetime, 5,351 publications have been published on this topic, receiving 212,555 citations. The topic is also known as: LDA.
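As background on the topic itself (not drawn from any paper below): LDA assumes each document draws a mixture over topics from a Dirichlet prior, and each word is drawn from its assigned topic's distribution over the vocabulary. A minimal sketch of that generative process in Python with NumPy; all sizes and hyperparameters here are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

n_topics, vocab_size, n_docs, doc_len = 3, 20, 5, 50
alpha, beta = 0.5, 0.1  # symmetric Dirichlet hyperparameters (assumed values)

# Each topic is a distribution over the vocabulary.
topics = rng.dirichlet(np.full(vocab_size, beta), size=n_topics)

docs = []
for _ in range(n_docs):
    theta = rng.dirichlet(np.full(n_topics, alpha))  # topic mixture for this document
    z = rng.choice(n_topics, size=doc_len, p=theta)  # topic assignment per word slot
    words = [rng.choice(vocab_size, p=topics[k]) for k in z]
    docs.append(words)

print(len(docs), len(docs[0]))  # 5 documents of 50 word tokens each
```

Fitting LDA inverts this process, inferring the topic distributions and per-document mixtures from observed word counts alone.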


Papers
Proceedings Article
03 Dec 2012
TL;DR: This work revisits independence assumptions for probabilistic latent variable models with a determinantal point process (DPP), leading to better intuition for the latent variable representation and quantitatively improved unsupervised feature extraction, without compromising the generative aspects of the model.
Abstract: Probabilistic latent variable models are one of the cornerstones of machine learning. They offer a convenient and coherent way to specify prior distributions over unobserved structure in data, so that these unknown properties can be inferred via posterior inference. Such models are useful for exploratory analysis and visualization, for building density models of data, and for providing features that can be used for later discriminative tasks. A significant limitation of these models, however, is that draws from the prior are often highly redundant due to i.i.d. assumptions on internal parameters. For example, there is no preference in the prior of a mixture model to make components non-overlapping, or in a topic model to ensure that co-occurring words appear in only a small number of topics. In this work, we revisit these independence assumptions for probabilistic latent variable models, replacing the underlying i.i.d. prior with a determinantal point process (DPP). The DPP allows us to specify a preference for diversity in our latent variables using a positive definite kernel function. Using a kernel between probability distributions, we are able to define a DPP on probability measures. We show how to perform MAP inference with DPP priors in latent Dirichlet allocation and in mixture models, leading to better intuition for the latent variable representation and quantitatively improved unsupervised feature extraction, without compromising the generative aspects of the model.

99 citations
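The abstract above relies on the DPP property that the (unnormalized) probability of a set of items is the determinant of their kernel submatrix, which is larger when the items are diverse. A toy sketch of that intuition, using a hypothetical linear kernel on 2-D "topic" vectors rather than the paper's kernel between probability measures:

```python
import numpy as np

def dpp_score(vectors):
    """Unnormalized DPP probability of a set: det of its Gram (similarity) matrix."""
    L = vectors @ vectors.T
    return np.linalg.det(L)

diverse = np.array([[1.0, 0.0], [0.0, 1.0]])      # orthogonal "topics"
redundant = np.array([[1.0, 0.0], [0.99, 0.14]])  # nearly identical "topics"

# Near-collinear rows make the Gram matrix close to singular, so det shrinks.
print(dpp_score(diverse) > dpp_score(redundant))  # True
```

Placing a DPP prior over latent parameters therefore penalizes redundant components without changing what each component can represent.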

Proceedings ArticleDOI
16 Jun 2013
TL;DR: This work develops an adaptive learning rate for stochastic variational inference, which requires no tuning and is easily implemented with computations already made in the algorithm.
Abstract: Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic variational inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.

99 citations
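For context on the hand-tuned rates the abstract refers to: stochastic variational inference commonly uses the Robbins-Monro schedule rho_t = (t + tau)^(-kappa), whose delay tau and forgetting rate kappa are exactly the constants the paper's adaptive method eliminates. A small sketch assuming that schedule and a generic noisy-estimate update (the tau and kappa values and the toy target are illustrative, not from the paper):

```python
import random

def robbins_monro_rate(t, tau=1.0, kappa=0.7):
    """Decreasing step size; kappa in (0.5, 1] satisfies the Robbins-Monro conditions."""
    return (t + tau) ** -kappa

random.seed(0)
param = 0.0
for t in range(1, 1001):
    # Stand-in for a noisy natural-gradient estimate computed from a data subsample.
    noisy_estimate = 5.0 + random.uniform(-0.5, 0.5)
    rho = robbins_monro_rate(t)
    # SVI-style convex-combination update with the decreasing rate.
    param = (1 - rho) * param + rho * noisy_estimate

print(param)  # settles near the true value 5.0
```

Too large a kappa makes early steps dominate; too small makes the iterates noisy — hence the sensitivity to tuning that the adaptive rate removes.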

Book ChapterDOI
20 Oct 2007
TL;DR: This work proposes a new method for human action recognition from video sequences using latent topic models, which achieves much better performance by utilizing the information provided by the class labels in the training set.
Abstract: We propose a new method for human action recognition from video sequences using latent topic models. Video sequences are represented by a novel "bag-of-words" representation, where each frame corresponds to a "word". The major difference between our model and previous latent topic models for recognition problems in computer vision is that our model is trained in a "semi-supervised" way. Our model has several advantages over other similar models. First, training is much easier due to the decoupling of the model parameters. Second, it naturally solves the problem of choosing the appropriate number of latent topics. Third, it achieves much better performance by utilizing the information provided by the class labels in the training set. We present action classification and irregularity detection results, and show improvement over previous methods.

99 citations
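The "bag-of-words" representation described above can be sketched as quantizing each frame descriptor to its nearest codebook entry and keeping only the histogram of those "words", discarding temporal order. The codebook and descriptors below are toy assumptions, not the paper's actual features:

```python
from collections import Counter

def bag_of_words(frame_descriptors, codebook):
    """Histogram of nearest-codebook-entry ("word") indices over all frames."""
    def nearest_word(desc):
        return min(range(len(codebook)),
                   key=lambda w: sum((a - b) ** 2 for a, b in zip(desc, codebook[w])))
    return Counter(nearest_word(d) for d in frame_descriptors)

codebook = [(0.0, 0.0), (1.0, 1.0)]            # two visual "words" (assumed)
frames = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9)]  # toy per-frame descriptors

print(bag_of_words(frames, codebook))  # word 1 appears twice, word 0 once
```

Treating each video as such a histogram is what lets a topic model, designed for text, be applied to action recognition.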

Journal ArticleDOI
TL;DR: The authors investigated the motivation and satisfaction of tourist restaurant customers from China and the U.S. by analyzing their online ratings and reviews, and found that Chinese tourists are less inclined to assign low ratings to restaurants and are more strongly attracted by the food offered.

99 citations

Proceedings ArticleDOI
16 Jun 2012
TL;DR: A discriminative latent topic model for scene recognition based on the modeling of two types of visual contexts, i.e., the category specific global spatial layout of different scene elements and the reinforcement of the visual coherence in uniform local regions is presented.
Abstract: We present a discriminative latent topic model for scene recognition. The capacity of our model originates from the modeling of two types of visual contexts: the category-specific global spatial layout of different scene elements, and the reinforcement of visual coherence in uniform local regions. In contrast, most previous methods for scene recognition either modeled only one of these two visual contexts or ignored both entirely. We cast these two coupled visual contexts in a discriminative Latent Dirichlet Allocation framework, namely a context-aware topic model. Scene recognition is then achieved by Bayesian inference given a target image. Our experiments on several scene recognition benchmarks clearly demonstrate the advantages of the proposed model.

98 citations


Network Information
Related Topics (5)
Cluster analysis
146.5K papers, 2.9M citations
86% related
Support vector machine
73.6K papers, 1.7M citations
86% related
Deep learning
79.8K papers, 2.1M citations
85% related
Feature extraction
111.8K papers, 2.1M citations
84% related
Convolutional neural network
74.7K papers, 2M citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    323
2022    842
2021    418
2020    429
2019    473
2018    446