Journal ArticleDOI

Latent Dirichlet Allocation

TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
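
As a concrete illustration of the generative process described in this abstract, the following NumPy sketch samples a document under LDA. The vocabulary size, topic count, and Dirichlet hyperparameters are arbitrary values chosen for the example, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    V, K = 1000, 20                 # vocabulary size, number of topics (arbitrary)
    alpha = np.full(K, 0.1)         # Dirichlet prior on per-document topic mixtures
    beta = rng.dirichlet(np.full(V, 0.01), size=K)   # per-topic word distributions

    def generate_document(n_words):
        """Sample one document: a topic mixture, then a topic and a word per token."""
        theta = rng.dirichlet(alpha)                   # document-level topic mixture
        z = rng.choice(K, size=n_words, p=theta)       # topic assignment per token
        return [rng.choice(V, p=beta[k]) for k in z]   # word index per token

    doc = generate_document(50)     # 50 word indices drawn from the model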


Citations
Proceedings ArticleDOI
09 Feb 2011
TL;DR: This work proposes a set of features for characterizing social media authors, including both nodal and topical metrics, and shows how probabilistic clustering over this feature space, followed by a within-cluster ranking procedure, can yield a final list of top authors for a given topic.
Abstract: Content in microblogging systems such as Twitter is produced by tens to hundreds of millions of users. This diversity is a notable strength, but also presents the challenge of finding the most interesting and authoritative authors for any given topic. To address this, we first propose a set of features for characterizing social media authors, including both nodal and topical metrics. We then show how probabilistic clustering over this feature space, followed by a within-cluster ranking procedure, can yield a final list of top authors for a given topic. We present results across several topics, along with results from a user study confirming that our method finds authors who are significantly more interesting and authoritative than those resulting from several baseline conditions. Additionally, our algorithm is computationally feasible in near real-time scenarios, making it an attractive alternative for capturing the rapidly changing dynamics of microblogs.
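
A minimal sketch of the cluster-then-rank idea follows; the feature names, the choice of a Gaussian mixture, and the scoring rule are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Rows are authors; columns are hypothetical nodal/topical features,
    # e.g. follower count, retweet count, share of on-topic tweets.
    features = np.random.rand(500, 3)

    gmm = GaussianMixture(n_components=4, random_state=0).fit(features)
    labels = gmm.predict(features)

    # Pick the cluster with the largest mean feature values, then rank
    # its members by a simple aggregate score.
    target = np.argmax(gmm.means_.sum(axis=1))
    members = np.where(labels == target)[0]
    scores = features[members].sum(axis=1)
    top_authors = members[np.argsort(scores)[::-1][:10]]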

380 citations


Cites background or methods from "Latent Dirichlet Allocation"

  • ...Note that we could take a LDA [2] type approach on the tweets extracted based on string matching to find other tweets with similar latent topical distribution which would be a more comprehensive corpus for the given topic....


  • ...The notable exception is TwitterRank, proposed by Jianshu et al. [20], which computes the topical distribution of a user based on Latent Dirichlet Allocation [2] and constructs a weighted user graph where edge weight indicates the topical similarity of the two users....


Proceedings Article
12 Jul 2012
TL;DR: Two new automated semantic evaluations are applied to three distinct latent topic models, revealing that LDA and LSA each have different strengths; LDA best learns descriptive topics, while LSA is best at creating a compact semantic representation of documents and words in a corpus.
Abstract: We apply two new automated semantic evaluations to three distinct latent topic models. Both metrics have been shown to align with human evaluations and provide a balance between internal measures of information gain and comparisons to human ratings of coherent topics. We improve upon the measures by introducing new aggregate measures that allow for comparing complete topic models. We further compare the automated measures to other metrics for topic models: comparison to manually crafted semantic tests and document classification. Our experiments reveal that LDA and LSA each have different strengths; LDA best learns descriptive topics while LSA is best at creating a compact semantic representation of documents and words in a corpus.
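
The coherence-style metrics referenced here can be sketched compactly. The function below implements a UMass-style document co-occurrence score, one common automated coherence measure that aligns with human topic ratings; it is an illustration of the family of metrics, not necessarily the exact measures used in this paper.

    import numpy as np
    from itertools import combinations

    def umass_coherence(top_words, doc_freq, co_doc_freq):
        """Sum of smoothed log co-occurrence ratios over pairs of a topic's
        top words (ordered by probability, most probable first).
        doc_freq[w]: number of documents containing w (assumed > 0 for top words)
        co_doc_freq[(w1, w2)]: number of documents containing both."""
        score = 0.0
        for w1, w2 in combinations(top_words, 2):   # w1 precedes w2 in the ranking
            score += np.log((co_doc_freq.get((w1, w2), 0) + 1) / doc_freq[w1])
        return score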

379 citations


Cites background or methods from "Latent Dirichlet Allocation"

  • ...Furthermore, few evaluations have used the same metrics to compare distinct approaches such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003), Latent Semantic Analysis (LSA) (Landauer and Dumais, 1997), and Non-negative Matrix Factorization (NMF) (Lee and Seung, 2000)....


  • ...Conversely, both Non-negative Matrix Factorization and Latent Dirichlet Allocation learn concise and coherent topics and achieved similar performance on our evaluations....



  • ...Latent Dirichlet Allocation (Blei et al., 2003) learns the relationships between words, topics, and documents by assuming documents are generated by a particular probabilistic model....



Proceedings ArticleDOI
27 Aug 2017
TL;DR: The proposed convolutional neural network with dual attention outperforms HFT and ConvMF+ in terms of mean squared error (MSE), and comparisons of the learned user/item embeddings confirm their superior quality.
Abstract: Recently, many e-commerce websites have encouraged their users to rate shopping items and write review texts. This review information has been very useful for understanding user preferences and item properties, as well as enhancing the capability of these websites to make personalized recommendations. In this paper, we propose to model user preferences and item properties using convolutional neural networks (CNNs) with dual local and global attention, motivated by the superiority of CNNs at extracting complex features. By using aggregated review texts from a user and aggregated review texts for an item, our model can learn the unique features (embedding) of each user and each item. These features are then used to predict ratings. We train the user and item networks jointly, which enables interaction between users and items in a way similar to matrix factorization. The local attention provides insight into a user's preferences or an item's properties. The global attention helps CNNs focus on the semantic meaning of the whole review text. Thus, the combined local and global attention enables an interpretable and better-learned representation of users and items. We validate the proposed models by testing on popular review datasets from Yelp and Amazon and compare the results with matrix factorization (MF), the hidden factor and topical (HFT) model, and the recently proposed convolutional matrix factorization (ConvMF+). Our proposed CNN with dual attention outperforms HFT and ConvMF+ in terms of mean squared error (MSE). In addition, we compare the user/item embeddings learned from these models for classification and recommendation. These results also confirm the superior quality of the user/item embeddings learned from our model.
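
A heavily simplified PyTorch sketch of one branch of such a model is shown below; the layer sizes, the single-scalar attention form, and the dot-product interaction are illustrative assumptions rather than the paper's exact architecture.

    import torch
    import torch.nn as nn

    class ReviewEncoder(nn.Module):
        """Embed aggregated review text, convolve, attend over positions,
        and pool to one fixed-size user or item feature vector."""
        def __init__(self, vocab=20000, dim=64, channels=100):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.conv = nn.Conv1d(dim, channels, kernel_size=3, padding=1)
            self.attn = nn.Linear(channels, 1)                        # score per position

        def forward(self, tokens):                                    # (batch, length)
            h = self.conv(self.embed(tokens).transpose(1, 2)).relu()  # (b, c, L)
            w = torch.softmax(self.attn(h.transpose(1, 2)), dim=1)    # (b, L, 1)
            return (h.transpose(1, 2) * w).sum(dim=1)                 # (b, c)

    user_net, item_net = ReviewEncoder(), ReviewEncoder()
    u = user_net(torch.randint(0, 20000, (8, 120)))                   # user review tokens
    v = item_net(torch.randint(0, 20000, (8, 120)))                   # item review tokens
    rating_pred = (u * v).sum(dim=1)                                  # MF-style interaction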

377 citations

Journal ArticleDOI
01 Jan 2017
TL;DR: This survey finds that the truth inference problem is not fully solved, identifies the limitations of existing algorithms, and points out promising research directions.
Abstract: Crowdsourcing has emerged as a novel problem-solving paradigm, which facilitates addressing problems that are hard for computers, e.g., entity resolution and sentiment analysis. However, due to the openness of crowdsourcing, workers may yield low-quality answers, so a redundancy-based method is widely employed, which first assigns each task to multiple workers and then infers the correct answer (called the truth) for the task based on the answers of the assigned workers. A fundamental problem in this method is truth inference, which decides how to effectively infer the truth. Recently, the database and data mining communities have independently studied this problem and proposed various algorithms. However, these algorithms have not been compared extensively under the same framework, and it is hard for practitioners to select appropriate ones. To alleviate this problem, we provide a detailed survey of 17 existing algorithms and perform a comprehensive evaluation using 5 real datasets. We make all code and datasets public for future research. Through experiments we find that existing algorithms are not stable across different datasets and that no algorithm outperforms the others consistently. We believe that the truth inference problem is not fully solved, and we identify the limitations of existing algorithms and point out promising research directions.
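
The redundancy-based setup described here is easiest to see with the simplest baseline, majority voting; the surveyed algorithms refine this idea by also modeling worker quality. A minimal sketch:

    from collections import Counter, defaultdict

    def majority_vote(answers):
        """answers: iterable of (task, worker, answer) triples.
        Returns the most common answer per task as its inferred truth."""
        by_task = defaultdict(list)
        for task, _worker, answer in answers:
            by_task[task].append(answer)
        return {t: Counter(a).most_common(1)[0][0] for t, a in by_task.items()}

    truth = majority_vote([
        ("t1", "w1", "yes"), ("t1", "w2", "yes"), ("t1", "w3", "no"),
        ("t2", "w1", "no"),  ("t2", "w3", "no"),
    ])   # {'t1': 'yes', 't2': 'no'}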

376 citations


Cites methods from "Latent Dirichlet Allocation"

  • ...For example, existing studies [19, 35] make use of the text description in each task and adopt topic model techniques [6, 56] to generate a vector of size K for the task; while Multi [51] learns a K-size vector without referring to external information (e....


Journal ArticleDOI
TL;DR: The overall goal is to make LDA topic modeling more accessible to communication researchers and to ensure compliance with disciplinary standards by developing a brief hands-on user guide for applying LDA topic modeling.
Abstract: Latent Dirichlet allocation (LDA) topic models are increasingly being used in communication research. Yet, questions regarding reliability and validity of the approach have received little attention ...

375 citations


Cites background or methods from "Latent Dirichlet Allocation"

  • ...We completed our retrieval of relevant and recent studies by checking Google Scholar and also revisiting basic literature on topic modeling (e.g., Blei, 2012; Blei et al., 2003)....


  • ...The seminal model proposed by Blei et al. (2003) is based on the idea that the clustering effect of the algorithm works well, even if the initial assignments for θ and ϕ are set completely at random....


  • ...The perplexity metric is a measure used to determine the statistical goodness of fit of a topic model (Blei et al., 2003)....


  • ...The most frequently used statistical measures are held-out likelihood or perplexity (Blei et al., 2003)....


  • ...Biel and Gatica-Perez (2014) refer to standard values proposed by Blei et al. (2003)....

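For reference, the perplexity measure quoted in the snippets above is defined in Blei et al. (2003) as a monotonically decreasing function of the held-out log-likelihood: for a test set of M documents,

    \mathrm{perplexity}(D_{\mathrm{test}}) = \exp\left\{ -\frac{\sum_{d=1}^{M} \log p(\mathbf{w}_d)}{\sum_{d=1}^{M} N_d} \right\}

where w_d is the word sequence of document d and N_d its length; lower perplexity indicates better generalization.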

References
Book
01 Jan 1995
TL;DR: Detailed notes on Bayesian computation, the basics of Markov chain simulation, regression models, and asymptotic theorems are provided.
Abstract: FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear Models; Generalized Linear Models; Models for Robust Inference; Models for Missing Data. NONLINEAR AND NONPARAMETRIC MODELS: Parametric Nonlinear Models; Basic Function Models; Gaussian Process Models; Finite Mixture Models; Dirichlet Process Models. APPENDICES: A: Standard Probability Distributions; B: Outline of Proofs of Asymptotic Theorems; C: Computation in R and Stan. Bibliographic notes and exercises appear at the end of each chapter.

16,079 citations


"Latent dirichlet allocation" refers background in this paper

  • ...Finally, Griffiths and Steyvers (2002) have presented a Markov chain Monte Carlo algorithm for LDA....



  • ...Structures similar to that shown in Figure 1 are often studied in Bayesian statistical modeling, where they are referred to as hierarchical models (Gelman et al., 1995), or more precisely as conditionally independent hierarchical models (Kass and Steffey, 1989)....

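The Markov chain Monte Carlo algorithm mentioned in the first snippet is usually realized as collapsed Gibbs sampling over topic assignments; a minimal sketch of the per-token update follows, using standard count matrices rather than Griffiths and Steyvers' exact notation.

    import numpy as np

    def resample_token(d, w, z_old, ndk, nkw, nk, alpha, eta, rng):
        """One collapsed Gibbs step: resample the topic of token (d, w) from
        p(z=k | rest) ∝ (ndk[d,k] + alpha) * (nkw[k,w] + eta) / (nk[k] + V*eta),
        with all counts excluding the token being resampled."""
        V = nkw.shape[1]
        ndk[d, z_old] -= 1; nkw[z_old, w] -= 1; nk[z_old] -= 1
        p = (ndk[d] + alpha) * (nkw[:, w] + eta) / (nk + V * eta)
        z_new = rng.choice(len(p), p=p / p.sum())
        ndk[d, z_new] += 1; nkw[z_new, w] += 1; nk[z_new] += 1
        return z_new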

Journal ArticleDOI
TL;DR: A new method for automatic indexing and retrieval that takes advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries.
Abstract: A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.
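
The construction in this abstract is straightforward to reproduce with a truncated SVD; in the sketch below the rank of 100 follows the abstract, while the matrix sizes and query terms are arbitrary illustrative values.

    import numpy as np

    A = np.random.poisson(0.5, size=(2000, 400)).astype(float)  # term-by-document counts

    k = 100                                        # latent factors, as in the abstract
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    doc_vecs = (s[:k, None] * Vt[:k]).T            # each document as a k-vector

    # Fold a query in as a pseudo-document: q_hat = q^T U_k diag(s_k)^{-1}
    q = np.zeros(2000); q[[7, 10, 42]] = 1.0       # illustrative query term weights
    q_vec = q @ U[:, :k] / s[:k]

    # Rank documents by cosine similarity to the query vector
    cos = (doc_vecs @ q_vec) / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-12)
    hits = np.argsort(cos)[::-1][:10]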

12,443 citations


"Latent dirichlet allocation" refers methods in this paper

  • ...To address these shortcomings, IR researchers have proposed several other dimensionality reduction techniques, most notably latent semantic indexing (LSI) (Deerwester et al., 1990)....



Book
01 Jan 1983
TL;DR: Salton and McGill's textbook Introduction to Modern Information Retrieval, cited in the LDA paper as the source of the tf-idf weighting scheme.

12,059 citations


"Latent dirichlet allocation" refers background or methods in this paper

  • ...In the popular tf-idf scheme (Salton and McGill, 1983), a basic vocabulary of “words” or “terms” is chosen, and, for each document in the corpus, a count is formed of the number of occurrences of each word....


  • ...We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model....

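The tf-idf scheme quoted in the first snippet above has a standard compact form; the sketch below uses the common log inverse-document-frequency variant, one of several weightings discussed by Salton and McGill.

    import numpy as np

    def tfidf(counts):
        """counts: (documents x vocabulary) term-count matrix.
        Weight each raw count by the log inverse document frequency of its term."""
        df = (counts > 0).sum(axis=0)                       # docs containing each term
        idf = np.log(counts.shape[0] / np.maximum(df, 1))
        return counts * idf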

Book
01 Jan 1939
TL;DR: The book develops fundamental notions of probability and direct probabilities, estimation problems, approximate methods and simplifications, and significance tests for one new parameter and for various complications, together with frequency definitions and direct methods.
Abstract: 1. Fundamental notions 2. Direct probabilities 3. Estimation problems 4. Approximate methods and simplifications 5. Significance tests: one new parameter 6. Significance tests: various complications 7. Frequency definitions and direct methods 8. General questions

7,086 citations