Open Access Journal Article (DOI)

Latent Dirichlet Allocation

TL;DR
This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract
We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
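As a concrete illustration (not part of the original paper), the sketch below fits an LDA topic model on a toy corpus using scikit-learn's LatentDirichletAllocation, whose variational-Bayes estimator is similar in spirit to the approximate inference described above; the corpus, number of topics, and library choice are assumptions made for illustration only.

# A minimal sketch (assumed setup, not the authors' implementation):
# fit LDA on a toy corpus with scikit-learn's variational-Bayes estimator.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell on inflation news",
    "investors watch interest rates and markets",
]

# Bag-of-words counts: LDA models each document as word counts over a vocabulary.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Two topics: each document gets a finite mixture over topics,
# each topic a distribution over words.
lda = LatentDirichletAllocation(n_components=2, learning_method="batch",
                                random_state=0)
doc_topic = lda.fit_transform(X)   # per-document topic proportions
topic_word = lda.components_       # unnormalized per-topic word weights

vocab = vectorizer.get_feature_names_out()
for k, row in enumerate(topic_word):
    top = row.argsort()[::-1][:3]
    print(f"topic {k}:", [vocab[i] for i in top])
print("doc-topic proportions:\n", doc_topic.round(2))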



Citations
Journal Article (DOI)

On Predicting the Popularity of Newly Emerging Hashtags in Twitter

TL;DR: This article proposes methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task and shows that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features.
Journal Article (DOI)

A Survey of Topic Modeling in Text Mining

TL;DR: Different models are discussed, including topics over time (TOT), dynamic topic models (DTM), multiscale topic tomography, dynamic topic correlation detection, and methods for detecting topic evolution in the scientific literature.
Proceedings Article (DOI)

OPTIMOL: automatic Online Picture collecTion via Incremental MOdel Learning

TL;DR: This work adapts a non-parametric graphical model and proposes an incremental learning framework that mimics the human learning process of iteratively accumulating model knowledge and image examples, and is capable of collecting image datasets superior to Caltech 101 and LabelMe.
Proceedings Article

Scalable recommendation with hierarchical Poisson factorization

TL;DR: Hierarchical Poisson matrix factorization is developed, a novel method for providing users with high-quality recommendations based on implicit feedback such as views, clicks, or purchases; it is shown to more accurately capture the long-tailed user activity found in most consumption data.
Journal Article (DOI)

Challenges in discriminating profanity from hate speech

TL;DR: This paper addresses the problem of distinguishing general profanity from hate speech in social media, using a new dataset annotated specifically for this task.
References
Book

Bayesian Data Analysis

TL;DR: Provides detailed notes on Bayesian computation, the basics of Markov chain simulation, regression models, and asymptotic theory.
Journal Article (DOI)

Indexing by Latent Semantic Analysis

TL;DR: A new method for automatic indexing and retrieval is described that takes advantage of implicit higher-order structure in the association of terms with documents ("semantic structure") to improve the detection of relevant documents on the basis of the terms found in queries.
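As an illustrative aside (an assumed setup, not the paper's own experiments), latent semantic indexing can be sketched as a truncated SVD of a weighted term-document matrix; the toy documents, the query, and the use of scikit-learn's TruncatedSVD below are assumptions for illustration.

# A minimal LSA/LSI sketch: TF-IDF term-document matrix followed by a
# rank-k truncated SVD, with queries folded into the same latent space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "human machine interface for computer applications",
    "a survey of user opinion of computer system response time",
    "graph minors: a survey",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)            # documents x terms

svd = TruncatedSVD(n_components=2, random_state=0)
doc_vecs = svd.fit_transform(X)          # documents in the latent "semantic" space

# A query is projected into the same space and matched by cosine similarity.
query = tfidf.transform(["computer user survey"])
query_vec = svd.transform(query)
print(cosine_similarity(query_vec, doc_vecs))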
Book

Introduction to Modern Information Retrieval

TL;DR: A classic textbook on information retrieval, covering automatic indexing, text analysis, retrieval models, and system evaluation.
Book

Theory of Probability

TL;DR: Introduces direct probabilities, approximate methods and simplifications, and significance tests (for one new parameter and for various complications), together with frequency definitions and direct methods.