Journal ArticleDOI

Latent dirichlet allocation

TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
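The generative process described in the abstract can be sketched in a few lines of Python; the two toy topics, vocabulary size, and Dirichlet parameter below are illustrative values, not anything from the paper:

```python
import random

def generate_corpus(n_docs, doc_len, topic_word, alpha, rng):
    """Sample documents from the three-level generative process: draw a
    per-document topic mixture theta from a Dirichlet(alpha) prior, then
    for each word draw a topic z from theta and a word from that topic."""
    K, V = len(topic_word), len(topic_word[0])
    docs = []
    for _ in range(n_docs):
        # theta ~ Dirichlet(alpha) via normalized Gamma draws
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(K)]
        theta = [g / sum(gammas) for g in gammas]
        words = []
        for _ in range(doc_len):
            z = rng.choices(range(K), weights=theta)[0]             # topic for this slot
            words.append(rng.choices(range(V), weights=topic_word[z])[0])
        docs.append(words)
    return docs

# two toy topics over a 4-word vocabulary (illustrative numbers)
topic_word = [[0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]]
corpus = generate_corpus(n_docs=3, doc_len=10, topic_word=topic_word,
                         alpha=0.1, rng=random.Random(0))
```

A small alpha concentrates each document's mixture on a few topics; inference (variational EM in the paper) runs this process in reverse.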


Citations
Proceedings ArticleDOI
10 Aug 2015
TL;DR: The goal in this paper is to learn the semantics of substitutes and complements from the text of online reviews, trained using networks of products derived from browsing and co-purchasing logs and evaluated on the Amazon product catalog.
Abstract: To design a useful recommender system, it is important to understand how products relate to each other. For example, while a user is browsing mobile phones, it might make sense to recommend other phones, but once they buy a phone, we might instead want to recommend batteries, cases, or chargers. In economics, these two types of recommendations are referred to as substitutes and complements: substitutes are products that can be purchased instead of each other, while complements are products that can be purchased in addition to each other. Such relationships are essential as they help us to identify items that are relevant to a user's search.Our goal in this paper is to learn the semantics of substitutes and complements from the text of online reviews. We treat this as a supervised learning problem, trained using networks of products derived from browsing and co-purchasing logs. Methodologically, we build topic models that are trained to automatically discover topics from product reviews that are successful at predicting and explaining such relationships. Experimentally, we evaluate our system on the Amazon product catalog, a large dataset consisting of 9 million products, 237 million links, and 144 million reviews.

471 citations


Cites background or methods from "Latent dirichlet allocation"

  • ...Block-LDA: Jointly modeling entity-annotated text and entity-entity links....

  • ...LDA associates each document in a corpus d ....

  • ...We use topic models [4] to discover topics from product reviews and other sources of text....

  • ...Topic models are a fundamental building block of text modeling [3, 4, 5] and form the cornerstone of our model....

  • ...As with LDA, we assign each word to a topic (an integer between 1 and K) randomly, with probability proportional to the likelihood of that topic occurring with that word....

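The initialization described in the last bullet can be sketched as follows; the word-topic count table, the smoothing constant, and all names are hypothetical stand-ins:

```python
import random

def init_topic_assignments(doc, word_topic_counts, K, rng):
    """Assign each word a topic in {0, ..., K-1} with probability
    proportional to how often that (word, topic) pair has been seen;
    add-one smoothing keeps every topic reachable for unseen words."""
    assignments = []
    for w in doc:
        counts = word_topic_counts.get(w, [0.0] * K)
        weights = [c + 1.0 for c in counts]     # smoothing: no zero-probability topics
        assignments.append(rng.choices(range(K), weights=weights)[0])
    return assignments

rng = random.Random(1)
word_topic_counts = {"battery": [50.0, 1.0], "charger": [40.0, 2.0]}  # hypothetical counts
z = init_topic_assignments(["battery", "charger", "battery"],
                           word_topic_counts, K=2, rng=rng)
```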
Proceedings Article
21 Jun 2013
TL;DR: In this paper, the authors compare data collected using Twitter's sampled API service with data collected from the full, albeit costly, Firehose stream that includes every single published tweet, using common statistical metrics as well as metrics that allow them to compare topics, networks, and locations of tweets.
Abstract: Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a "Streaming API" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
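One simple instance of the comparisons the abstract describes is the overlap between the most frequent items in the sample and in the full stream; the sketch below uses a synthetic skewed stream and a uniform 1% sample as stand-ins for Firehose and Streaming API data:

```python
import random
from collections import Counter

def topk_overlap(full, sample, k):
    """Jaccard overlap between the top-k items of the full stream and of
    a sample of it: 1.0 means the sample recovers the same top-k set."""
    top_full = {w for w, _ in Counter(full).most_common(k)}
    top_sample = {w for w, _ in Counter(sample).most_common(k)}
    return len(top_full & top_sample) / len(top_full | top_sample)

rng = random.Random(0)
# synthetic "hashtag" stream with a skewed frequency distribution
full = rng.choices("abcdefgh", weights=[8, 7, 6, 5, 4, 3, 2, 1], k=10_000)
sample = [t for t in full if rng.random() < 0.01]    # ~1% uniform sample
overlap = topk_overlap(full, sample, k=3)
```

The paper's point is that the real Streaming API sample is not guaranteed to be uniform, which is exactly what metrics like this are meant to detect.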

469 citations

Proceedings ArticleDOI
31 May 2014
TL;DR: This work presents “AR-Miner” — a novel computational framework for App Review Mining, which performs comprehensive analytics from raw user reviews by first extracting informative user reviews by filtering noisy and irrelevant ones, then grouping the informative reviews automatically using topic modeling, prioritizing them by an effective review ranking scheme, and finally presenting the groups of most “informative” reviews via an intuitive visualization approach.
Abstract: With the popularity of smartphones and mobile devices, mobile application (a.k.a. “app”) markets have been growing exponentially in terms of number of users and downloads. App developers spend considerable effort on collecting and exploiting user feedback to improve user satisfaction, but suffer from the absence of effective user review analytics tools. To facilitate mobile app developers discover the most “informative” user reviews from a large and rapidly increasing pool of user reviews, we present “AR-Miner” — a novel computational framework for App Review Mining, which performs comprehensive analytics from raw user reviews by (i) first extracting informative user reviews by filtering noisy and irrelevant ones, (ii) then grouping the informative reviews automatically using topic modeling, (iii) further prioritizing the informative reviews by an effective review ranking scheme, (iv) and finally presenting the groups of most “informative” reviews via an intuitive visualization approach. We conduct extensive experiments and case studies on four popular Android apps to evaluate AR-Miner, from which the encouraging results indicate that AR-Miner is effective, efficient and promising for app developers.
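The four-stage pipeline the abstract outlines (filter, group by topic, rank, present) can be sketched as a composition of callables; every function and review string below is a hypothetical stand-in for AR-Miner's actual components:

```python
def ar_miner_pipeline(reviews, is_informative, assign_topic, score):
    """Toy sketch of the pipeline: (i) drop uninformative reviews,
    (ii) group the rest by topic, (iii) rank within each group,
    (iv) return the grouped ranking, ready for presentation."""
    groups = {}
    for r in filter(is_informative, reviews):
        groups.setdefault(assign_topic(r), []).append(r)
    return {topic: sorted(rs, key=score, reverse=True)
            for topic, rs in groups.items()}

reviews = ["app crashes on startup", "nice!",
           "battery drains fast", "crashes when I log in"]
ranked = ar_miner_pipeline(
    reviews,
    is_informative=lambda r: len(r.split()) > 2,               # stand-in filter
    assign_topic=lambda r: "crash" if "crash" in r else "battery",  # stand-in topic model
    score=len)                                                 # stand-in ranking score
```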

468 citations


Cites background or methods from "Latent dirichlet allocation"

  • ..., Latent Dirichlet Allocation (LDA) [11] and Aspect and Sentiment Unification Model (ASUM) [30] (adopted in [22]) in our experiments....

  • ...We vary the number of topics (denoted as K) and choose the appropriate K values according to (i) the perplexity scores [11] on 20% held-out data (should be small); and (ii) the results themselves (should be reasonable)....

  • ...First, it cannot discover app-specific topics by using Latent Dirichlet Allocation (LDA) [11], since it links all the user reviews from the same app together as a document....

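The perplexity-based selection of K mentioned above can be sketched as follows; the held-out log-likelihood numbers are made up for illustration:

```python
import math

def perplexity(total_log_likelihood, n_tokens):
    """Perplexity of held-out text: exp(-log-likelihood per token).
    Lower is better; sweep K and keep a value where perplexity is small
    and the resulting topics still look reasonable."""
    return math.exp(-total_log_likelihood / n_tokens)

held_out_tokens = 500
# hypothetical held-out log-likelihoods for three candidate K values
scores = {K: perplexity(ll, held_out_tokens)
          for K, ll in [(10, -2000.0), (20, -1900.0), (30, -1950.0)]}
best_K = min(scores, key=scores.get)
```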
Journal ArticleDOI
TL;DR: A methodological framework for social media analytics in the political context is proposed that summarizes the most important politically relevant issues from the perspective of political institutions and corresponding methodologies from different scientific disciplines.
Abstract: In recent years, social media are said to have an impact on public discourse and communication in society. In particular, social media are increasingly used in political contexts. More recently, microblogging services (e.g., Twitter) and social network sites (e.g., Facebook) are believed to have the potential for increasing political participation. While Twitter is an ideal platform for users to spread not only information in general but also political opinions publicly through their networks, political institutions (e.g., politicians, political parties, political foundations, etc.) have also begun to use Facebook pages or groups for the purpose of entering into direct dialogs with citizens and encouraging more political discussions. Previous studies have shown that from the perspective of political institutions, there is an emerging need to continuously collect, monitor, analyze, summarize, and visualize politically relevant information from social media. These activities, which are subsumed under “social media analytics,” are considered difficult tasks due to the large number of different social media platforms as well as the large amount and complexity of information and data. Systematic tracking and analysis approaches along with appropriate scientific methods and techniques in the political domain are still lacking. In this paper, we propose a methodological framework for social media analytics in the political context. More specifically, our framework summarizes the most important politically relevant issues from the perspective of political institutions and corresponding methodologies from different scientific disciplines.

464 citations


Cites methods from "Latent dirichlet allocation"

  • ...These include, for example, the probabilistic latent semantic indexing (Hofmann 1999) or latent Dirichlet allocation models (Blei et al. 2003) along with algorithms such as singular value decomposition or nonnegative matrix factorization (Blei 2011)....

Proceedings ArticleDOI
24 Aug 2014
TL;DR: This paper proposes a collapsed Gibbs Sampling algorithm for the Dirichlet Multinomial Mixture model for short text clustering (GSDMM) and finds that GSDMM can infer the number of clusters automatically with a good balance between the completeness and homogeneity of the clustering results, and is fast to converge.
Abstract: Short text clustering has become an increasingly important task with the popularity of social media like Twitter, Google+, and Facebook. It is a challenging problem due to its sparse, high-dimensional, and large-volume characteristics. In this paper, we propose a collapsed Gibbs Sampling algorithm for the Dirichlet Multinomial Mixture model for short text clustering (abbr. to GSDMM). We found that GSDMM can infer the number of clusters automatically with a good balance between the completeness and homogeneity of the clustering results, and is fast to converge. GSDMM can also cope with the sparse and high-dimensional problem of short texts, and can obtain the representative words of each cluster. Our extensive experimental study shows that GSDMM can achieve significantly better performance than three other clustering models.
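The collapsed Gibbs sampler the abstract describes can be sketched compactly; hyperparameter values, the toy corpus, and all names below are illustrative stand-ins, not the authors' code:

```python
import math
import random
from collections import Counter

def gsdmm(docs, V, K, alpha=0.1, beta=0.1, iters=20, seed=0):
    """Collapsed Gibbs sampling for the Dirichlet multinomial mixture:
    each document belongs to exactly one cluster, and clusters that lose
    all their documents stay empty, so starting with a large K lets the
    sampler infer the effective number of clusters on its own."""
    rng = random.Random(seed)
    z = [rng.randrange(K) for _ in docs]       # cluster label of each doc
    m = [0] * K                                # documents per cluster
    n = [0] * K                                # tokens per cluster
    nw = [Counter() for _ in range(K)]         # per-cluster word counts
    for d, doc in enumerate(docs):
        m[z[d]] += 1; n[z[d]] += len(doc); nw[z[d]].update(doc)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            k = z[d]                           # remove doc d from its cluster
            m[k] -= 1; n[k] -= len(doc); nw[k].subtract(doc)
            counts = Counter(doc)
            log_p = []
            for c in range(K):                 # conditional for each candidate cluster
                lp = math.log(m[c] + alpha)
                for w, cnt in counts.items():
                    for j in range(cnt):
                        lp += math.log(nw[c][w] + beta + j)
                for i in range(len(doc)):
                    lp -= math.log(n[c] + V * beta + i)
                log_p.append(lp)
            mx = max(log_p)                    # stabilize before exponentiating
            weights = [math.exp(l - mx) for l in log_p]
            k = rng.choices(range(K), weights=weights)[0]
            z[d] = k                           # reinsert under the sampled cluster
            m[k] += 1; n[k] += len(doc); nw[k].update(doc)
    return z

# two clearly separated vocabularies; K starts larger than the true 2
docs = [[0, 1, 2, 0, 1, 2, 0, 1]] * 4 + [[5, 6, 7, 5, 6, 7, 5, 6]] * 4
clusters = gsdmm(docs, V=8, K=10)
```

On this toy corpus the sampler should collapse the ten initial clusters down to two, one per vocabulary group.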

459 citations


Cites background or methods from "Latent dirichlet allocation"

  • ..., PLSA [10] and LDA [6]), GSDMM can also obtain the representative words of each cluster....

  • ...As a result, GSDMM can obtain the representative words of each cluster like Topic Models (e.g., PLSA [10] and LDA [6])....

  • ...We find that GSDMM has the following nice properties: 1) GSDMM can infer the number of clusters automatically; 2) GSDMM has a clear way to balance the completeness and homogeneity of the clustering results; 3) GSDMM is fast to converge; 4) Unlike the Vector Space Model (VSM)-based approaches, GSDMM can cope with the sparse and high-dimensional problem of short texts; 5) Like Topic Models (e.g., PLSA [10] and LDA [6]), GSDMM can also obtain the representative words of each cluster....

  • ...They compared DMAFP with four other clustering models: EM-DMM [20], K-means [13], LDA [6], and EDCM [7]....

References
Book
01 Jan 1995
TL;DR: Detailed coverage of Bayesian computation, basics of Markov chain simulation, regression models, and asymptotic theorems is provided.
Abstract: FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear Models; Generalized Linear Models; Models for Robust Inference; Models for Missing Data. NONLINEAR AND NONPARAMETRIC MODELS: Parametric Nonlinear Models; Basic Function Models; Gaussian Process Models; Finite Mixture Models; Dirichlet Process Models. APPENDICES: A: Standard Probability Distributions; B: Outline of Proofs of Asymptotic Theorems; C: Computation in R and Stan. Bibliographic notes and exercises appear at the end of each chapter.

16,079 citations


"Latent dirichlet allocation" refers background in this paper

  • ...Finally, Griffiths and Steyvers (2002) have presented a Markov chain Monte Carlo algorithm for LDA....

  • ...Structures similar to that shown in Figure 1 are often studied in Bayesian statistical modeling, where they are referred to as hierarchical models (Gelman et al., 1995), or more precisely as conditionally independent hierarchical models (Kass and Steffey, 1989)....

Journal ArticleDOI
TL;DR: A new method for automatic indexing and retrieval to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries.
Abstract: A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising.
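The SVD construction described in the abstract can be sketched with NumPy; the toy term-document matrix and query are illustrative values:

```python
import numpy as np

def lsi(term_doc, k):
    """Rank-k latent semantic indexing via SVD of a term-document matrix.
    Returns per-document coordinates in the k-dimensional latent space
    and a fold-in function that maps a query's term-count vector into
    the same space (the pseudo-document q^T U_k S_k^{-1})."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]
    doc_vecs = Vtk.T                          # one row per document
    def fold_in(query_counts):
        return query_counts @ Uk / sk
    return doc_vecs, fold_in

# toy 4-term x 3-document count matrix
X = np.array([[2., 0., 1.],
              [1., 0., 0.],
              [0., 3., 0.],
              [0., 1., 2.]])
doc_vecs, fold_in = lsi(X, k=2)
q = fold_in(np.array([1., 1., 0., 0.]))       # query containing the first two terms
# cosine similarity between the folded-in query and each document
sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
```

Documents above a cosine threshold would be returned, as in the retrieval scheme the abstract describes.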

12,443 citations


"Latent dirichlet allocation" refers methods in this paper

  • ...To address these shortcomings, IR researchers have proposed several other dimensionality reduction techniques, most notably latent semantic indexing (LSI) (Deerwester et al., 1990)....

Book
01 Jan 1983
TL;DR: Salton and McGill's textbook Introduction to Modern Information Retrieval, cited in "Latent dirichlet allocation" for the popular tf-idf term-weighting scheme.

12,059 citations


"Latent dirichlet allocation" refers background or methods in this paper

  • ...In the popular tf-idf scheme (Salton and McGill, 1983), a basic vocabulary of “words” or “terms” is chosen, and, for each document in the corpus, a count is formed of the number of occurrences of each word....

  • ...We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model....

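The tf-idf scheme referenced above can be sketched as follows; the raw-tf-times-log-idf weighting shown is one common variant among the several the book covers:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document tf-idf weights: term frequency within the document
    scaled by log inverse document frequency across the corpus, so terms
    appearing in every document are weighted down to zero."""
    N = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                   # document frequency per term
    return [{w: tf * math.log(N / df[w]) for w, tf in Counter(doc).items()}
            for doc in docs]

docs = [["topic", "model", "text"], ["topic", "topic", "graph"]]
weights = tfidf(docs)
```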
Book
01 Jan 1939
TL;DR: Fundamental notions of probability are introduced, followed by direct probabilities, estimation problems, approximate methods and simplifications, and significance tests for one new parameter and for various complications.
Abstract: 1. Fundamental notions 2. Direct probabilities 3. Estimation problems 4. Approximate methods and simplifications 5. Significance tests: one new parameter 6. Significance tests: various complications 7. Frequency definitions and direct methods 8. General questions

7,086 citations