Journal ArticleDOI

Latent dirichlet allocation

TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
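The abstract's generative view (each document a finite mixture over topics, each topic a distribution over words) can be sketched concretely. The paper itself uses variational EM for inference; the illustrative, stdlib-only sketch below instead uses a collapsed Gibbs sampler (the MCMC alternative due to Griffiths and Steyvers, cited among the excerpts further down), on made-up toy documents:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampler for LDA; returns per-document topic proportions."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})              # vocabulary size
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]  # topic of each token
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # tokens per topic
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            k = z[di][wi]
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k = z[di][wi]                          # remove current assignment
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional: p(z = t | rest) is proportional to
                # (ndk + alpha) * (nkw + beta) / (nk + V * beta)
                weights = [(ndk[di][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[di][wi] = k
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return [[(ndk[di][t] + alpha) / (len(d) + n_topics * alpha)
             for t in range(n_topics)] for di, d in enumerate(docs)]

docs = [["cat", "cat", "pet"], ["dog", "pet", "pet"],
        ["stock", "market", "market"], ["market", "shares", "stock"]]
theta = lda_gibbs(docs, n_topics=2)  # rows are topic-proportion vectors
```

Each row of `theta` is the document's estimated mixture over the two latent topics and sums to one; with more data the sampler would separate the "pets" and "markets" vocabularies into distinct topics.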


Citations
Journal ArticleDOI
06 Feb 2020
TL;DR: A comprehensive review of the application of ML techniques in soil science, aided by an ML algorithm (latent Dirichlet allocation) that finds patterns in a large collection of text corpora; it identifies research gaps and argues that the interpretability of ML models is an important aspect to consider when applying advanced ML methods in order to improve knowledge and understanding of soil.
Abstract: The application of machine learning (ML) techniques in various fields of science has increased rapidly, especially in the last 10 years. The increasing availability of soil data that can be efficiently acquired remotely and proximally, and freely available open-source algorithms, have led to an accelerated adoption of ML techniques to analyse soil data. Given the large number of publications, it is an impossible task to manually review all papers on the application of ML in soil science without narrowing the scope to a specific research question. This paper aims to provide a comprehensive review of the application of ML techniques in soil science aided by an ML algorithm (latent Dirichlet allocation) to find patterns in a large collection of text corpora. The objective is to gain insight into publications of ML applications in soil science and to discuss the research gaps in this topic. We found that (a) there is an increasing usage of ML methods in soil sciences, mostly concentrated in developed countries; (b) the reviewed publications can be grouped into 12 topics, namely remote sensing, soil organic carbon, water, contamination, methods (ensembles), erosion and parent material, methods (NN, neural networks; SVM, support vector machines), spectroscopy, modelling (classes), crops, physical, and modelling (continuous); and (c) advanced ML methods usually perform better than simpler approaches thanks to their capability to capture non-linear relationships. From these findings, we identified research gaps, in particular about the precautions that should be taken (parsimony) to avoid overfitting, and noted that the interpretability of ML models is an important aspect to consider when applying advanced ML methods in order to improve our knowledge and understanding of soil. We foresee that a large number of studies will focus on the latter topic.

168 citations

Journal ArticleDOI
TL;DR: To expand the reach of the message and maximize the potential for word-of-mouth marketing using Twitter, organizations need a strategic communications plan to ensure on-going social media conversations.
Abstract: One in eight women will develop breast cancer in her lifetime. The best-known awareness event is breast cancer awareness month (BCAM). BCAM outreach efforts have been associated with increased media coverage, screening mammography, and online information searching. Traditional mass media coverage has been enhanced by social media. However, there is a dearth of literature about how social media is used during awareness-related events. The purpose of this research was to understand how Twitter is being used during BCAM. This was a cross-sectional, descriptive study. We collected breast cancer-related tweets from 26 September - 12 November 2012, using Twitter's application programming interface. We classified Twitter users into organizations, individuals, and celebrities; each tweet was classified as an original or a retweet, and by inclusion of a mention, meaning a reference to another Twitter user with @username. Statistical methods included ANOVA and chi square. For content analysis, we used computational linguistics techniques, specifically the MALLET implementation of the unsupervised topic modeling algorithm Latent Dirichlet Allocation. There were 1,351,823 tweets by 797,827 unique users. Tweets spiked dramatically the first few days, then tapered off. There was an average of 1.69 tweets per user. The majority of users were individuals. Nearly all of the tweets were original. Organizations and celebrities posted more often than individuals. On average, celebrities made far more impressions; they were also retweeted more often, and their tweets were more likely to include mentions. Individuals were more likely to direct a tweet to a specific person. Organizations and celebrities emphasized fundraisers, early detection, and diagnoses, while individuals tweeted about wearing pink. Tweeting about breast cancer was a singular event. The majority of tweets did not promote any specific preventive behavior.
Twitter is being used mostly as a one-way communication tool. To expand the reach of the message and maximize the potential for word-of-mouth marketing using Twitter, organizations need a strategic communications plan to ensure on-going social media conversations. Organizations may consider collaborating with individuals and celebrities in these conversations. Social media communication strategies that emphasize fundraising for breast cancer research seem particularly appropriate.
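The "chi square" analysis the abstract mentions compares categorical outcomes across user types. A self-contained sketch of the Pearson chi-square statistic on an illustrative 2x2 table of retweet counts (the study's actual counts are not given here, so these numbers are made up):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# rows: celebrities vs individuals; columns: retweeted vs not (illustrative)
table = [[300, 700], [100, 900]]
stat = chi_square(table)
print(round(stat, 2))  # → 125.0
```

A statistic this large relative to the chi-square distribution with one degree of freedom would indicate that retweet rates differ between the two user types.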

168 citations


Cites methods from "Latent dirichlet allocation"

  • Interestingly, the topics discovered by LDA reflected mostly the same frequent categories used in Table 1 (e.g., clothing, walks, fundraisers, etc.).

  • In addition to using the defined categories, unsupervised content clustering was also performed on the content using LDA.

  • LDA is a probabilistic model that hypothesizes that each document (e.g., individual tweets) in a given corpus (e.g., all the tweets) has been generated as a mixture of unobserved, or latent, topics, where a topic is characterized by a categorical distribution over words.

  • We used the popular MALLET implementation, an open-source software package that contains the LDA algorithm [27,28].

  • We focused here on topic discovery through Latent Dirichlet Allocation (LDA) [27].

Proceedings Article
06 Dec 2010
TL;DR: In this article, the authors consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of l∞-norms over groups of variables, and they propose an efficient procedure which computes its solution exactly in polynomial time.
Abstract: We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of l∞-norms over groups of variables. Whereas a lot of effort has been put in developing fast optimization methods when the groups are disjoint or embedded in a specific hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure which computes its solution exactly in polynomial time. Our algorithm scales up to millions of variables, and opens up a whole new range of applications for structured sparse models. We present several experiments on image and video data, demonstrating the applicability and scalability of our approach for various problems.
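The abstract's key object is the proximal operator of a sum of l-infinity norms. The overlapping-group case needs the paper's min-cost flow machinery, but the single-group case already shows the duality at work: by Moreau decomposition, the prox of lam*||.||_inf is the residual after Euclidean projection onto the lam-scaled l1 ball. The sketch below uses the standard sort-based l1 projection; it is an illustration of that duality, not the paper's algorithm:

```python
def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= radius}."""
    if sum(abs(x) for x in v) <= radius:
        return list(v)  # already inside the ball
    u = sorted((abs(x) for x in v), reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - radius) / i
        if ui > t:
            theta = t            # keep the largest feasible threshold
    sign = lambda x: (x > 0) - (x < 0)
    return [sign(x) * max(abs(x) - theta, 0.0) for x in v]

def prox_linf(x, lam):
    """prox of lam*||.||_inf via Moreau: x minus projection onto the lam * l1 ball."""
    return [xi - pi for xi, pi in zip(x, project_l1_ball(x, lam))]

print(prox_linf([3.0, -1.0], 1.0))  # → [2.0, -1.0]
```

Note how the prox shrinks only the largest-magnitude coordinate, the signature behaviour of the l-infinity norm; the paper generalizes this computation to sums of such norms over arbitrary overlapping groups via quadratic min-cost flow.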

168 citations

Proceedings Article
12 Dec 2011
TL;DR: This paper proposes the linear submodular bandits problem, an online learning setting for optimizing a general class of feature-rich submodular utility models for diversified retrieval, and presents an algorithm, called LSBGREEDY, that provably and efficiently converges to a near-optimal model.
Abstract: Diversified retrieval and online learning are two core research areas in the design of modern information retrieval systems. In this paper, we propose the linear sub-modular bandits problem, which is an online learning setting for optimizing a general class of feature-rich submodular utility models for diversified retrieval. We present an algorithm, called LSBGREEDY, and prove that it efficiently converges to a near-optimal model. As a case study, we applied our approach to the setting of personalized news recommendation, where the system must recommend small sets of news articles selected from tens of thousands of available articles each day. In a live user study, we found that LSBGREEDY significantly outperforms existing online learning approaches.
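LSBGreedy itself learns the utility weights online with confidence bounds; the sketch below shows only the greedy selection step at its core, on a fixed probabilistic topic-coverage utility with made-up articles and weights:

```python
def coverage(selected, probs, w):
    """f(S) = sum_t w[t] * (1 - prod_{a in S} (1 - probs[a][t])):
    a submodular 'probability that topic t is covered' utility."""
    total = 0.0
    for t, wt in enumerate(w):
        miss = 1.0
        for a in selected:
            miss *= 1.0 - probs[a][t]   # chance topic t is covered by no pick
        total += wt * (1.0 - miss)
    return total

def greedy_select(probs, w, k):
    """Pick k articles one at a time by largest marginal gain in coverage."""
    selected = []
    for _ in range(k):
        best = max((a for a in range(len(probs)) if a not in selected),
                   key=lambda a: coverage(selected + [a], probs, w))
        selected.append(best)
    return selected

# three articles over two topics; article 0 covers topic 0, article 2 topic 1
probs = [[0.9, 0.0], [0.5, 0.1], [0.0, 0.9]]
w = [1.0, 1.0]
print(greedy_select(probs, w, 2))  # → [0, 2]
```

The greedy step prefers the complementary pair over the redundant middle article, which is exactly the diversification behaviour the submodular utility is designed to induce.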

168 citations


Cites background or methods from "Latent dirichlet allocation"

  • For the blog dataset, articles are represented using d = 100 topics generated using Latent Dirichlet Allocation [4], and w* was derived from a preliminary version of our user study.

  • The topics and coverage probabilities can be derived from a topic model such as LDA [4].

  • We call this independence property conditional submodular independence, which we will leverage later. (E.g., the topics and coverage probabilities can be derived from a topic model such as LDA [4].)

Proceedings ArticleDOI
25 Jul 2010
TL;DR: This paper formally defines the problem of popular event tracking in online communities (PET) and proposes a novel statistical method that models the popularity of events over time, taking into consideration the burstiness of user interest, information diffusion on the network structure, and the evolution of textual topics.
Abstract: User-generated information in online communities has been characterized as the mixture of a text stream and a network structure, both changing over time. A good example is a web-blogging community with the daily blog posts and a social network of bloggers. An important task of analyzing an online community is to observe and track the popular events, or topics that evolve over time in the community. Existing approaches usually focus on either the burstiness of topics or the evolution of networks, ignoring the interplay between textual topics and network structures. In this paper, we formally define the problem of popular event tracking in online communities (PET), focusing on the interplay between texts and networks. We propose a novel statistical method that models the popularity of events over time, taking into consideration the burstiness of user interest, information diffusion on the network structure, and the evolution of textual topics. Specifically, a Gibbs Random Field is defined to model the influence of historic status and the dependency relationships in the graph; thereafter a topic model generates the words in the text content of the event, regularized by the Gibbs Random Field. We prove that two classic models in information diffusion and text burstiness are special cases of our model under certain situations. Empirical experiments with two different communities and datasets (i.e., Twitter and DBLP) show that our approach is effective and outperforms existing approaches.
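The "information diffusion on the network structure" ingredient can be illustrated with one round of independent-cascade spreading, a standard diffusion model used here purely as an illustration (PET's actual formulation couples a Gibbs Random Field over historical interest with a topic model):

```python
import random

def cascade_step(graph, active, p, rng):
    """One round of independent-cascade diffusion: each active node tries to
    activate each inactive neighbour independently with probability p."""
    newly = set()
    for u in active:
        for v in graph.get(u, []):
            if v not in active and v not in newly and rng.random() < p:
                newly.add(v)
    return active | newly

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
rng = random.Random(0)
active = {"a"}
for _ in range(3):
    active = cascade_step(graph, active, p=1.0, rng=rng)
# with p = 1.0 every node reachable from "a" ends up active
```

In a PET-like setting the activation probabilities would depend on historical interest and the textual topic rather than a single constant p.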

168 citations

References
Book
01 Jan 1995
TL;DR: A comprehensive textbook on Bayesian data analysis, covering fundamentals of Bayesian inference, Markov chain simulation, regression models, and nonlinear and nonparametric models.
Abstract: FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear Models; Generalized Linear Models; Models for Robust Inference; Models for Missing Data. NONLINEAR AND NONPARAMETRIC MODELS: Parametric Nonlinear Models; Basis Function Models; Gaussian Process Models; Finite Mixture Models; Dirichlet Process Models. APPENDICES: A: Standard Probability Distributions; B: Outline of Proofs of Asymptotic Theorems; C: Computation in R and Stan. Bibliographic Notes and Exercises appear at the end of each chapter.

16,079 citations


"Latent dirichlet allocation" refers background in this paper

  • Finally, Griffiths and Steyvers (2002) have presented a Markov chain Monte Carlo algorithm for LDA.

  • Structures similar to that shown in Figure 1 are often studied in Bayesian statistical modeling, where they are referred to as hierarchical models (Gelman et al., 1995), or more precisely as conditionally independent hierarchical models (Kass and Steffey, 1989).

Journal ArticleDOI
TL;DR: A new method for automatic indexing and retrieval to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries.
Abstract: A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents ("semantic structure") in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term-by-document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100-item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.
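The abstract's pipeline (SVD of a term-document matrix, then cosine matching of a folded-in query) fits in a few lines. This sketch uses a tiny made-up matrix and 2 factors where real LSI systems use ca. 100:

```python
import numpy as np

# rows = terms, columns = documents (raw term counts)
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 0.0]])

k = 2  # number of retained orthogonal factors
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # document coordinates in factor space

# fold the query in as a pseudo-document: its coordinates are U_k^T q
q = np.array([1.0, 1.0, 0.0, 0.0])       # query containing terms 0 and 1
q_hat = q @ U[:, :k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = [cosine(q_hat, d) for d in doc_vecs]  # document 0 should rank highest
```

Document 0 shares both query terms and scores highest; a supra-threshold rule on `sims` would return it, as the abstract describes.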

12,443 citations


"Latent dirichlet allocation" refers methods in this paper

  • To address these shortcomings, IR researchers have proposed several other dimensionality reduction techniques, most notably latent semantic indexing (LSI) (Deerwester et al., 1990).

Book
01 Jan 1983
TL;DR: Introduction to Modern Information Retrieval, the 1983 textbook by Salton and McGill on automatic indexing and retrieval, including the tf-idf term-weighting scheme.

12,059 citations


"Latent dirichlet allocation" refers background or methods in this paper

  • In the popular tf-idf scheme (Salton and McGill, 1983), a basic vocabulary of "words" or "terms" is chosen, and, for each document in the corpus, a count is formed of the number of occurrences of each word.

  • We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
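The tf-idf scheme the first excerpt refers to weights a term's in-document count by its rarity across the corpus. A minimal sketch of one common variant (the exact weighting in Salton and McGill differs in details), on made-up documents:

```python
import math

def tf_idf(term, doc, docs):
    """Term frequency times log inverse document frequency (one common variant)."""
    tf = doc.count(term)                        # occurrences in this document
    df = sum(1 for d in docs if term in d)      # documents containing the term
    return tf * math.log(len(docs) / df)

docs = [["cat", "sat", "mat"], ["cat", "cat", "pet"], ["stock", "market"]]

# "cat" occurs twice in doc 1 but appears in 2 of 3 documents, so it is
# down-weighted relative to the corpus-rare "stock"
w_cat = tf_idf("cat", docs[1], docs)      # 2 * ln(3/2)
w_stock = tf_idf("stock", docs[2], docs)  # 1 * ln(3/1)
```

This is exactly the reduction the LDA paper takes as its starting point: each document becomes a fixed-length vector of term weights, discarding word order.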

Book
01 Jan 1939
TL;DR: This book develops the foundations of probability theory and Bayesian inference, covering direct probabilities, estimation problems, approximate methods and simplifications, significance tests for one new parameter and for various complications, and frequency definitions and direct methods.
Abstract: 1. Fundamental notions 2. Direct probabilities 3. Estimation problems 4. Approximate methods and simplifications 5. Significance tests: one new parameter 6. Significance tests: various complications 7. Frequency definitions and direct methods 8. General questions

7,086 citations