Journal ArticleDOI

Latent Dirichlet Allocation

TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
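As a brief illustration of the model described above, the following sketch fits LDA to a toy corpus with scikit-learn, whose implementation uses a variational-Bayes procedure in the spirit of the inference the abstract describes; the documents, topic count, and other parameters are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's code): fitting LDA to a toy corpus with
# scikit-learn. The documents and the choice of 2 topics are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "stock markets fell sharply today",
    "investors traded stocks and bonds",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)          # bag-of-words counts
vocab = vectorizer.get_feature_names_out()

# Each document is modeled as a mixture over topics; each topic is a
# distribution over the vocabulary.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)           # per-document topic proportions

for k, weights in enumerate(lda.components_):    # per-topic word weights
    top_words = [vocab[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top_words}")
```

Here doc_topics plays the role of the per-document topic proportions that the abstract calls an explicit representation of a document.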


Citations
Journal ArticleDOI
TL;DR: In this paper, a mixed-methods approach was adopted that combines a longitudinal bibliographic network analysis, multiple correspondence analysis and k-means clustering, correlated topic modeling, historiographic citation analysis and a semantic content analysis.
Abstract: Circular economy (CE) has gained momentum in the political, economic and scientific fields. The growing popularity of the concept is accompanied by some definitional ambiguities and conceptual uncertainties. In particular, the relationship and contribution of CE to sustainable development (SD), and thus to a more sustainable society, is currently under discussion. The purpose of this paper is to contribute to this discussion by providing new insights into the evolution and state of CE research over the past two decades, in general, and its sustainability connotation, in particular. To do so, a mixed-methods approach was adopted that combines a longitudinal bibliographic network analysis, multiple correspondence analysis and k-means clustering, correlated topic modeling, historiographic citation analysis and a semantic content analysis. The results indicate that the CE literature body can be divided into management and technically oriented studies that have either a beginning-of-life or an end-of-life focus. Recycling is the most referred to R-strategy, followed by remanufacturing, repair and reuse, which, however, occur one order of magnitude less frequently. CE research and SD were found to exhibit a subset relationship, as only a limited number of environmental aspects are directly addressed. Social aspects form a periphery. The qualitative analysis further portrays the conceptual evolution of the CE-SD relationship between 2000 and 2019 by following the citation network of the 30 most influential CE papers. The results contribute to positioning CE research within the general sustainable development debate and to identifying potential sustainability-related shortcomings and blind spots.
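As a rough, hypothetical sketch of how the topic-modeling and clustering steps of such a mixed-methods pipeline might look in code (with plain LDA standing in for the correlated topic model, and placeholder abstracts, topic count, and cluster count):

```python
# Hypothetical sketch: topics from paper abstracts, then k-means on the
# document-topic vectors to group papers into research streams. Plain LDA
# stands in for the correlated topic model; all inputs are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

abstracts = [
    "recycling of plastic waste in closed loop supply chains",
    "remanufacturing and reuse strategies for electronic products",
    "life cycle assessment of circular business models",
    "social indicators for sustainable development policy",
]

counts = CountVectorizer(stop_words="english").fit_transform(abstracts)
doc_topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

# Cluster papers by their topic mixtures.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_topics)
print(labels)
```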

161 citations

BookDOI
30 Apr 2015
TL;DR: The authors present a book for graduate students, researchers, and scholars studying brain science and related fields, in which each chapter is self-contained and aims to engage readers with various levels of modeling experience.
Abstract: Each chapter is self-contained and aims to engage readers possessing various levels of modeling experience. The book is intended for graduate students, researchers, and scholars studying brain science and related fields.

161 citations

Journal ArticleDOI
TL;DR: It is shown that deep learning models are more useful than other models in predicting review helpfulness, and that combining review texts and user-provided photos produces the highest performance.

161 citations

Book ChapterDOI
18 Apr 2009
TL;DR: This work shows that topic models are effective for document smoothing and that, in general, incorporating topics from the feedback documents when building relevance models benefits performance more for queries that have more relevant documents.
Abstract: We explore the utility of different types of topic models for retrieval purposes. Based on prior work, we describe several ways that topic models can be integrated into the retrieval process. We evaluate the effectiveness of different types of topic models within those retrieval approaches. We show that: (1) topic models are effective for document smoothing; (2) more rigorous topic models such as Latent Dirichlet Allocation provide gains over cluster-based models; (3) more elaborate topic models that capture topic dependencies provide no additional gains; (4) smoothing documents by using their similar documents is as effective as smoothing them by using topic models; (5) doing query expansion should utilize topics discovered in the top feedback documents instead of coarse-grained topics from the whole corpus; (6) generally, incorporating topics in the feedback documents for building relevance models can benefit the performance more for queries that have more relevant documents.
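As a hedged sketch of the document-smoothing idea behind findings (1) and (2), the snippet below interpolates a maximum-likelihood document language model with an LDA-derived word distribution, following the common topic-model-smoothed retrieval formulation; the interpolation weight and toy inputs are assumptions rather than the chapter's exact setup.

```python
# Sketch of topic-model document smoothing for language-model retrieval:
#   p(w|d) = lam * p_ml(w|d) + (1 - lam) * sum_k p(w|topic k) * p(topic k|d)
# The toy distributions and the value of lam are illustrative assumptions.
import numpy as np

def smoothed_doc_model(word_counts, topic_word, doc_topic, lam=0.7):
    """word_counts: (V,) raw counts for one document
    topic_word:   (K, V) rows are p(word | topic)
    doc_topic:    (K,)   p(topic | document)
    Returns smoothed p(word | document) over the vocabulary."""
    p_ml = word_counts / word_counts.sum()   # maximum-likelihood document model
    p_topic = doc_topic @ topic_word         # topic-based word distribution
    return lam * p_ml + (1.0 - lam) * p_topic

# Toy example: 4-word vocabulary, 2 topics.
counts = np.array([3.0, 1.0, 0.0, 0.0])
topic_word = np.array([[0.4, 0.4, 0.1, 0.1],
                       [0.1, 0.1, 0.4, 0.4]])
doc_topic = np.array([0.8, 0.2])
print(smoothed_doc_model(counts, topic_word, doc_topic))
```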

161 citations


Cites background or methods from "Latent Dirichlet Allocation"

  • ...We cannot possibly study all topic modeling approaches, so we select a few that are representative: the well-known Mixture of Unigrams (MU) model [1]; Latent Dirichlet Allocation (LDA) [2], a more complicated and computationally expensive topic model; and Pachinko Allocation Model (PAM) [3], a recently proposed new topic model which not only models the relations between words and identifies topics but also models the organization and co-occurrences of the topics themselves....

  • ...Latent Dirichlet Allocation (LDA) [2] is a widely-used topic model which also assumes that there are multiple topics in the corpus but that a document can have multiple topics....

  • ...We can automatically infer a set of topics either by simple clustering[1] or methods popularized by the machine learning community [2,3,4]....

  • ...A natural question is whether these topics are useful to help retrieve documents on the same topic as a query – intuitively relevant documents have topic distributions that are likely to have generated the set of words associated with the query[2,5]....

Journal ArticleDOI
TL;DR: This article summarizes recent progress on tag-aware recommender systems, emphasizing contributions from three mainstream perspectives and approaches: network-based methods, tensor-based methods, and topic-based methods.
Abstract: In the past decade, social tagging systems have attracted increasing attention from both the physical and computer science communities. Besides the underlying structure and dynamics of tagging systems, many efforts have been devoted to unifying tagging information to reveal user behaviors and preferences, extract latent semantic relations among items, make recommendations, and so on. Specifically, this article summarizes recent progress on tag-aware recommender systems, emphasizing contributions from three mainstream perspectives and approaches: network-based methods, tensor-based methods, and topic-based methods. Finally, we outline some other tag-related studies and future challenges of tag-aware recommendation algorithms.

161 citations


Cites methods from "Latent Dirichlet Allocation"

  • ...Therefore, recently, a more widely used model, latent Dirichlet allocation (LDA) [80], was proposed to overcome this issue by allowing multiple latent topics, with a priori Dirichlet distribution, a conjugate prior of the multinomial distribution, assigned to each single document....

  • ...Comparatively, LDA is more widely used for tag recommendation....

  • ...Xi et al.[84] employed LDA for eliciting topics from the words in documents, as well as the co-occurrence tags, where words and tags form independent vocabulary spaces, and then recommended tags for target documents....

  • ...Furthermore, Li et al.[90] combined LDA and GN community detection algorithm[91-92] to observe the topic distributions of communities, as well as community evolving over time in social tagging systems....

  • ...Bundschus et al.[87] integrated both user information and tag information into LDA algorithm....

References
Book
01 Jan 1995
TL;DR: Detailed notes on Bayesian computation, the basics of Markov chain simulation, regression models, and asymptotic theorems are provided.
Abstract: FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear Models; Generalized Linear Models; Models for Robust Inference; Models for Missing Data. NONLINEAR AND NONPARAMETRIC MODELS: Parametric Nonlinear Models; Basic Function Models; Gaussian Process Models; Finite Mixture Models; Dirichlet Process Models. APPENDICES: A: Standard Probability Distributions; B: Outline of Proofs of Asymptotic Theorems; C: Computation in R and Stan. Bibliographic Notes and Exercises appear at the end of each chapter.

16,079 citations


"Latent dirichlet allocation" refers background in this paper

  • ...Finally, Griffiths and Steyvers (2002) have presented a Markov chain Monte Carlo algorithm for LDA....

  • ...Structures similar to that shown in Figure 1 are often studied in Bayesian statistical modeling, where they are referred to as hierarchical models (Gelman et al., 1995), or more precisely as conditionally independent hierarchical models (Kass and Steffey, 1989)....

Journal ArticleDOI
TL;DR: A new method for automatic indexing and retrieval takes advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries.
Abstract: A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.
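As a hedged sketch of the SVD-based indexing described above, the snippet below decomposes a tiny term-document matrix, keeps two factors (rather than the roughly 100 used in the paper), folds a query in as a pseudo-document, and ranks documents by cosine similarity; the toy corpus and rank are assumptions.

```python
# Sketch of latent semantic indexing: truncated SVD of a term-document matrix,
# with the query projected into the factor space and documents ranked by
# cosine similarity. The toy corpus and rank k = 2 are assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "human computer interaction interface",
    "user interface design for computer systems",
    "graph theory and tree algorithms",
    "minors of trees in graph theory",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs).toarray().T       # term-by-document matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T               # documents in factor space

query = vectorizer.transform(["computer interface"]).toarray().ravel()
q_vec = query @ U[:, :k]                             # query as a pseudo-document

cosines = doc_vecs @ q_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
)
print(cosines.argsort()[::-1])                       # documents ranked by similarity
```

The rank-k factor space here stands in for the set of orthogonal factors the abstract describes.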

12,443 citations


"Latent dirichlet allocation" refers methods in this paper

  • ...To address these shortcomings, IR researchers have proposed several other dimensionality reduction techniques, most notably latent semantic indexing (LSI) (Deerwester et al., 1990)....

Book
01 Jan 1983
TL;DR: Introduction to Modern Information Retrieval, the textbook cited in the LDA paper as the source of the tf-idf term-weighting scheme (Salton and McGill, 1983).

12,059 citations


"Latent dirichlet allocation" refers background or methods in this paper

  • ...In the popular tf-idf scheme (Salton and McGill, 1983), a basic vocabulary of “words” or “terms” is chosen, and, for each document in the corpus, a count is formed of the number of occurrences of each word....

  • ...We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model....

Book
01 Jan 1939
TL;DR: In this book, the author covers fundamental notions, direct probabilities, estimation problems, approximate methods and simplifications, significance tests for one new parameter and for various complications, frequency definitions and direct methods, and general questions.
Abstract: 1. Fundamental notions 2. Direct probabilities 3. Estimation problems 4. Approximate methods and simplifications 5. Significance tests: one new parameter 6. Significance tests: various complications 7. Frequency definitions and direct methods 8. General questions

7,086 citations