Topic

Latent Dirichlet allocation

About: Latent Dirichlet allocation is a research topic. Over its lifetime, 5,351 publications have been published within this topic, receiving 212,555 citations. The topic is also known as: LDA.
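For readers new to the topic, a minimal sketch of fitting an LDA model with scikit-learn is shown below. The toy corpus and the choice of two topics are illustrative assumptions, not drawn from any paper listed on this page.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are friendly pets",
    "the stock market rose sharply today",
    "investors traded shares on the market",
]

# Bag-of-words counts are the standard input representation for LDA.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Two topics is an arbitrary choice for this toy corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic proportions

# Print the top words of each inferred topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")
```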


Papers
Journal ArticleDOI
TL;DR: This paper proposes an unsupervised approach that automatically discovers both the aspects discussed in Chinese social reviews and the sentiments expressed toward those aspects, applying the Latent Dirichlet Allocation model to discover multi-aspect global topics of the reviews.
Abstract: User-generated reviews on the Web reflect users' sentiment about products, services and social events. Existing research mostly focuses on sentiment classification of product and service reviews at the document level. Reviews of social events such as economic and political activities, called social reviews, have specific characteristics different from reviews of products and services. In this paper, we propose an unsupervised approach to automatically discover the aspects discussed in Chinese social reviews as well as the sentiments expressed toward different aspects. The approach is called Multi-aspect Sentiment Analysis for Chinese Online Social Reviews (MSA-COSR). We first apply the Latent Dirichlet Allocation (LDA) model to discover multi-aspect global topics of social reviews, and then extract the local topic and associated sentiment based on a sliding-window context over the review text. The aspect of the local topic is identified by a trained LDA model, and the polarity of the associated sentiment is classified by the HowNet lexicon. The experimental results show that MSA-COSR not only obtains good topic partitioning results but also helps to improve sentiment analysis accuracy, simultaneously discovering multi-aspect fine-grained topics and the associated sentiment.

236 citations
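The pipeline described above combines a trained LDA model (for aspects) with a sentiment lexicon (for polarity). A rough sketch of that combination follows; the tiny lexicon is a hypothetical stand-in for HowNet, the corpus and window size are invented for illustration, and none of this reproduces the authors' MSA-COSR implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy "social reviews" used to train the global aspect model.
reviews = [
    "the policy debate was good for the city",
    "traffic downtown is terrible every morning",
    "the great new park opened this weekend",
    "bad road planning caused long delays",
]
vec = CountVectorizer()
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(vec.fit_transform(reviews))

# Hypothetical polarity lexicon standing in for HowNet.
LEXICON = {"good": 1, "great": 1, "bad": -1, "terrible": -1}

def local_topic_and_sentiment(tokens, size=4):
    """Slide a window over a review; report each window's aspect and polarity."""
    for i in range(len(tokens) - size + 1):
        window = tokens[i:i + size]
        aspect = int(np.argmax(lda.transform(vec.transform([" ".join(window)]))))
        polarity = sum(LEXICON.get(w, 0) for w in window)
        yield window, aspect, polarity

for w, a, p in local_topic_and_sentiment(
        "the new park is great but the roads are bad".split()):
    print(w, "aspect:", a, "polarity:", p)
```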

Journal ArticleDOI
TL;DR: This paper conducts a systematic investigation of two representative probabilistic topic models, probabilistic latent semantic analysis (PLSA) and Latent Dirichlet Allocation (LDA), using three representative text mining tasks: document clustering, text categorization, and ad-hoc retrieval.
Abstract: Probabilistic topic models have recently attracted much attention because of their successful applications in many text mining tasks such as retrieval, summarization, categorization, and clustering. Although many existing studies have reported promising performance of these topic models, none of this work has systematically investigated the task performance of topic models; as a result, some critical questions that may affect the performance of all applications of topic models are mostly unanswered, particularly how to choose between competing models, how multiple local maxima affect task performance, and how to set parameters in topic models. In this paper, we address these questions by conducting a systematic investigation of two representative probabilistic topic models, probabilistic latent semantic analysis (PLSA) and Latent Dirichlet Allocation (LDA), using three representative text mining tasks, including document clustering, text categorization, and ad-hoc retrieval. The analysis of our experimental results provides a deeper understanding of topic models and many useful insights about how to optimize the performance of topic models for these typical tasks. The task-based evaluation framework is generalizable to other topic models in the family of either PLSA or LDA.

236 citations
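The paper's concern about multiple local maxima can be made concrete: variational fits of LDA depend on initialization, so rerunning with different random seeds and comparing held-out perplexity is a cheap sanity check. A minimal sketch under those assumptions (toy corpus, arbitrary split), not the paper's evaluation framework:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "machine learning text mining topic model",
    "document clustering unsupervised learning",
    "ad hoc retrieval ranking relevance query",
    "text categorization labels training data",
    "topic model inference latent variables",
    "query document retrieval search engine",
    "clustering documents topic distributions",
    "categorization classifier text features",
]
X = CountVectorizer().fit_transform(docs)
X_train, X_test = X[:6], X[6:]

# Each seed gives a different starting point, hence a possibly different
# local optimum; held-out perplexity (lower is better) compares the fits.
for seed in range(5):
    lda = LatentDirichletAllocation(n_components=3, random_state=seed,
                                    max_iter=50)
    lda.fit(X_train)
    print(f"seed {seed}: held-out perplexity {lda.perplexity(X_test):.1f}")
```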

Book
01 Jan 2010
TL;DR: This edited volume spans text extraction, classification, and clustering; anomaly and trend detection; and text streams, with chapters ranging from keyword extraction and text mining for cybercrime to nonnegative matrix factorization (NMF) for email classification.
Abstract: List of Contributors. Preface.
PART I: TEXT EXTRACTION, CLASSIFICATION, AND CLUSTERING.
1 Automatic keyword extraction from individual documents. 1.1 Introduction. 1.2 Rapid automatic keyword extraction. 1.3 Benchmark evaluation. 1.4 Stoplist generation. 1.5 Evaluation on news articles. 1.6 Summary. 1.7 Acknowledgements.
2 Algebraic techniques for multilingual document clustering. 2.1 Introduction. 2.2 Background. 2.3 Experimental setup. 2.4 Multilingual LSA. 2.5 Tucker1 method. 2.6 PARAFAC2 method. 2.7 LSA with term alignments. 2.8 Latent morpho-semantic analysis (LMSA). 2.9 LMSA with term alignments. 2.10 Discussion of results and techniques. 2.11 Acknowledgements.
3 Content-based spam email classification using machine-learning algorithms. 3.1 Introduction. 3.2 Machine-learning algorithms. 3.3 Data preprocessing. 3.4 Evaluation of email classification. 3.5 Experiments. 3.6 Characteristics of classifiers. 3.7 Concluding remarks. 3.8 Acknowledgements.
4 Utilizing nonnegative matrix factorization for email classification problems. 4.1 Introduction. 4.2 Background. 4.3 NMF initialization based on feature ranking. 4.4 NMF-based classification methods. 4.5 Conclusions. 4.6 Acknowledgements.
5 Constrained clustering with k-means type algorithms. 5.1 Introduction. 5.2 Notations and classical k-means. 5.3 Constrained k-means with Bregman divergences. 5.4 Constrained smoka type clustering. 5.5 Constrained spherical k-means. 5.6 Numerical experiments. 5.7 Conclusion.
PART II: ANOMALY AND TREND DETECTION.
6 Survey of text visualization techniques. 6.1 Visualization in text analysis. 6.2 Tag clouds. 6.3 Authorship and change tracking. 6.4 Data exploration and the search for novel patterns. 6.5 Sentiment tracking. 6.6 Visual analytics and FutureLens. 6.7 Scenario discovery. 6.8 Earlier prototype. 6.9 Features of FutureLens. 6.10 Scenario discovery example: bioterrorism. 6.11 Scenario discovery example: drug trafficking. 6.12 Future work.
7 Adaptive threshold setting for novelty mining. 7.1 Introduction. 7.2 Adaptive threshold setting in novelty mining. 7.3 Experimental study. 7.4 Conclusion.
8 Text mining and cybercrime. 8.1 Introduction. 8.2 Current research in Internet predation and cyberbullying. 8.3 Commercial software for monitoring chat. 8.4 Conclusions and future directions. 8.5 Acknowledgements.
PART III: TEXT STREAMS.
9 Events and trends in text streams. 9.1 Introduction. 9.2 Text streams. 9.3 Feature extraction and data reduction. 9.4 Event detection. 9.5 Trend detection. 9.6 Event and trend descriptions. 9.7 Discussion. 9.8 Summary. 9.9 Acknowledgements.
10 Embedding semantics in LDA topic models. 10.1 Introduction. 10.2 Background. 10.3 Latent Dirichlet allocation. 10.4 Embedding external semantics from Wikipedia. 10.5 Data-driven semantic embedding. 10.6 Related work. 10.7 Conclusion and future work.
References. Index.

233 citations
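Chapter 4's theme, NMF-derived features feeding a classifier for email classification, can be sketched in a few lines. The emails and labels below are toy placeholders, not the book's datasets, and the two-component factorization is an arbitrary choice.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

emails = [
    "win a free prize claim now",
    "meeting moved to friday afternoon",
    "claim your free cash reward today",
    "quarterly report attached for review",
    "exclusive free offer act now",
    "agenda for monday's project meeting",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = ham (toy labels)

# NMF factorizes the nonnegative tf-idf matrix into low-rank "topic" features...
X = TfidfVectorizer().fit_transform(emails)
W = NMF(n_components=2, random_state=0).fit_transform(X)

# ...which then feed an ordinary classifier.
clf = LogisticRegression().fit(W, labels)
print(clf.predict(W))  # sanity check on the training data itself
```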

Proceedings ArticleDOI
15 Oct 2008
TL;DR: In this paper, the authors present a static technique for automating bug localization built on latent Dirichlet allocation (LDA), a modular and extensible IR model with significant advantages over both LSI and probabilistic LSI.
Abstract: In bug localization, a developer uses information about a bug to locate the portion of the source code to modify to correct the bug. Developers expend considerable effort performing this task. Some recent static techniques for automatic bug localization have been built around modern information retrieval (IR) models such as latent semantic indexing (LSI); however, latent Dirichlet allocation (LDA), a modular and extensible IR model, has significant advantages over both LSI and probabilistic LSI (pLSI). In this paper we present an LDA-based static technique for automating bug localization. We describe the implementation of our technique and three case studies that measure its effectiveness. For two of the case studies we directly compare our results to those from similar studies performed using LSI. The results demonstrate that our LDA-based technique performs at least as well as the LSI-based techniques for all bugs, and performs better, often significantly so, than the LSI-based techniques for most bugs.

232 citations
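The core of the LDA-based localization idea is to index source files as documents, infer a topic distribution for the bug report, and rank files by similarity of topic distributions. The sketch below uses toy file contents and skips the preprocessing (identifier splitting, comment extraction) a real implementation would need; it illustrates the idea rather than reproducing the authors' tool.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy "source files" represented as bags of identifiers and comment words.
files = {
    "parser.c": "parse token stream syntax error recover grammar",
    "render.c": "draw pixel buffer screen refresh frame",
    "net.c": "socket connect timeout retry packet send",
}

vec = CountVectorizer()
X = vec.fit_transform(files.values())
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
file_topics = lda.transform(X)  # one topic distribution per file

# Infer the bug report's topic distribution in the same model.
bug_report = "socket timeout on retry"
query = lda.transform(vec.transform([bug_report]))[0]

# Rank files by cosine similarity between topic distributions.
sims = file_topics @ query / (
    np.linalg.norm(file_topics, axis=1) * np.linalg.norm(query)
)
for name, s in sorted(zip(files, sims), key=lambda t: -t[1]):
    print(f"{s:.3f}  {name}")
```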

Journal ArticleDOI
TL;DR: A heuristic approach based on analysis of the variation of statistical perplexity during topic modelling is proposed to estimate the most appropriate number of topics, with the rate of perplexity change (RPC) as a function of the number of topics proposed as a suitable selector.
Abstract: Topic modelling is an active research field in machine learning. While mainly used to build models from unstructured textual data, it offers an effective means of data mining where samples represent documents, and different biological endpoints or omics data represent words. Latent Dirichlet Allocation (LDA) is the most commonly used topic modelling method across a wide number of technical fields. However, model development can be arduous and tedious, and requires burdensome and systematic sensitivity studies in order to find the best set of model parameters. Often, time-consuming subjective evaluations are needed to compare models. Currently, research has yielded no easy way to choose the proper number of topics in a model beyond laborious iterative search. Based on analysis of the variation of statistical perplexity during topic modelling, a heuristic approach is proposed in this study to estimate the most appropriate number of topics. Specifically, the rate of perplexity change (RPC) as a function of the number of topics is proposed as a suitable selector. We test the stability and effectiveness of the proposed method on three markedly different types of ground-truth datasets: Salmonella next-generation sequencing, pharmacological side effects, and textual abstracts on computational biology and bioinformatics (TCBB) from PubMed. The proposed RPC-based method is demonstrated to choose the best number of topics in three numerical experiments of widely different data types, and for databases of very different sizes. The work required was markedly less arduous than if full systematic sensitivity studies had been carried out with the number of topics as a parameter. We understand that additional investigation is needed to substantiate the method's theoretical basis, and to establish its generalizability in terms of dataset characteristics.

230 citations
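One plausible reading of the RPC selector is sketched below: fit LDA at increasing topic counts, measure held-out perplexity, and compute the rate of change between consecutive counts. The corpus, split, and candidate counts are invented for illustration, and the snippet is not the authors' code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "gene expression sequencing sample variant",
    "drug dose side effect liver toxicity",
    "topic model text corpus word inference",
    "sequencing read genome alignment variant",
    "adverse drug reaction report signal",
    "latent topic document word distribution",
] * 3
X = CountVectorizer().fit_transform(docs)
X_train, X_test = X[:14], X[14:]

topic_counts = [2, 4, 6, 8, 10]
perplexities = []
for k in topic_counts:
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X_train)
    perplexities.append(lda.perplexity(X_test))

# RPC between consecutive settings: |P_i - P_{i-1}| / (k_i - k_{i-1}).
for i in range(1, len(topic_counts)):
    rpc = abs(perplexities[i] - perplexities[i - 1]) / (
        topic_counts[i] - topic_counts[i - 1]
    )
    print(f"k={topic_counts[i]:2d}  RPC={rpc:.3f}")
```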


Network Information
Related Topics (5)
Cluster analysis: 146.5K papers, 2.9M citations, 86% related
Support vector machine: 73.6K papers, 1.7M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    323
2022    842
2021    418
2020    429
2019    473
2018    446