Latent Dirichlet Allocation
TL;DR
This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract
We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
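The three-level generative model described in the abstract can be inverted to recover per-document topic mixtures and per-topic word distributions from a corpus. The paper itself uses variational EM; the sketch below instead uses collapsed Gibbs sampling, a later and simpler alternative to the paper's variational inference, purely as an illustration. The toy corpus, vocabulary size, and hyperparameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, alpha=0.1, beta=0.01,
              n_iters=100, seed=0):
    """Collapsed Gibbs sampling for LDA (illustrative, not the
    paper's variational EM). docs: list of lists of word ids."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, vocab_size))  # topic-word counts
    nk = np.zeros(n_topics)                 # tokens per topic
    z = []                                  # topic assignment per token
    for d, doc in enumerate(docs):
        zd = rng.integers(n_topics, size=len(doc))
        z.append(zd)
        for w, t in zip(doc, zd):
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                # Remove this token's count, resample its topic, re-add.
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                t = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    # Posterior mean estimates of the document-topic and topic-word distributions.
    theta = (ndk + alpha) / (ndk.sum(1, keepdims=True) + n_topics * alpha)
    phi = (nkw + beta) / (nk[:, None] + vocab_size * beta)
    return theta, phi

# Hypothetical toy corpus: words {0,1} and {2,3} form two clear "topics".
docs = [[0, 0, 1, 1], [0, 1, 0, 1], [2, 3, 2, 3], [3, 2, 3, 2]]
theta, phi = lda_gibbs(docs, n_topics=2, vocab_size=4)
```

Each row of `theta` is the finite mixture over topics for one document that the abstract describes; each row of `phi` is one topic's distribution over the vocabulary.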
Citations
Proceedings ArticleDOI
A probabilistic approach to spatiotemporal theme pattern mining on weblogs
TL;DR: The proposed probabilistic model is general and can be used for spatiotemporal text mining on any domain with time and location information.
Journal ArticleDOI
Portable automatic text classification for adverse drug reaction detection via multi-corpus training
Abeed Sarker, Graciela Gonzalez, et al.
TL;DR: The research results indicate that using advanced NLP techniques for generating information-rich features from text can significantly improve classification accuracies over existing benchmarks, and that integration of information from compatible corpora can significantly improve classification performance.
Posted Content
Document embedding with paragraph vectors
TL;DR: This work observes that the Paragraph Vector method performs significantly better than other methods, proposes a simple improvement to enhance embedding quality, and shows that, much like word embeddings, vector operations on Paragraph Vectors can produce useful semantic results.
Proceedings ArticleDOI
Software traceability with topic modeling
TL;DR: An automated technique is proposed that combines traceability with a machine learning technique known as topic modeling: it automatically records traceability links during the software development process and learns a probabilistic topic model over artifacts.
Journal ArticleDOI
Automatic text summarization: A comprehensive survey
TL;DR: This research provides a comprehensive survey for the researchers by presenting the different aspects of ATS: approaches, methods, building blocks, techniques, datasets, evaluation methods, and future research directions.
References
Book
Bayesian Data Analysis
TL;DR: Detailed notes are provided on the basics of Bayesian computation, Markov chain simulation, regression models, and asymptotic theorems.
Journal ArticleDOI
Indexing by Latent Semantic Analysis
TL;DR: A new method for automatic indexing and retrieval to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries.
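The "implicit higher-order structure" that LSA exploits is extracted with a truncated singular value decomposition of the term-document matrix; documents and queries are then compared in the resulting low-rank latent space. A minimal sketch, using a hypothetical toy corpus (the matrix and query below are illustrative assumptions):

```python
import numpy as np

# Hypothetical term-document count matrix (rows = terms, cols = documents).
# Terms 0-2 co-occur in docs 0-1; terms 3-4 co-occur in docs 2-3.
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2  # keep the k largest singular values ("semantic" dimensions)

# Documents in the k-dimensional latent space (one row per document).
doc_vecs = (s[:k, None] * Vt[:k]).T

# Fold a query into the same space: q_hat = q U_k Sigma_k^{-1}.
q = np.array([1.0, 0.0, 1.0, 0.0, 0.0])  # query using terms 0 and 2
q_vec = q @ U[:, :k] / s[:k]

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sims = [cos(q_vec, d) for d in doc_vecs]
```

In the latent space the query lines up with documents 0 and 1 (which share its terms) and is nearly orthogonal to documents 2 and 3, illustrating how the low-rank structure improves detection of relevant documents.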
Book
Introduction to Modern Information Retrieval
Gerard Salton, Michael J. McGill, et al.
Book
Theory of probability
Harold Jeffreys, R. Bruce Lindsay, et al.
TL;DR: In this paper, the authors introduce the concept of direct probabilities, cover approximate methods and simplifications, and develop significance tests for one new parameter and for various complications, together with frequency definitions and direct methods.