Latent Dirichlet Allocation
Citations
Cites methods from "Latent Dirichlet Allocation"
...Most of the approaches in this section use LDA, which is a topic model proposed in [54]....
[...]
Cites methods from "Latent Dirichlet Allocation"
...Many studies used topic model [3] to extract semantic features for different tasks in software engineering [4, 29, 45, 47, 56, 67]....
[...]
...Taking code snippets in Figure 3 as an example, if we consider only “File1” and “File2”, the token vectors for “File1” and “File2” would be mapped to [1, 2, 3, 4] and [2, 3, 1, 4] respectively....
[...]
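The mapping described in this excerpt can be read as assigning each distinct token an integer id in order of first appearance in a reference file, then encoding each file's token stream with those ids. Since Figure 3 is not reproduced here, the token lists below are hypothetical, but the scheme reproduces the [1, 2, 3, 4] / [2, 3, 1, 4] result quoted above:

```python
def token_ids(reference_tokens, tokens):
    # Assign each distinct token a 1-based integer id in order of first
    # appearance in the reference stream, then map the given stream
    # onto those ids.
    ids = {}
    for tok in reference_tokens:
        ids.setdefault(tok, len(ids) + 1)
    return [ids[tok] for tok in tokens]

# Hypothetical token streams standing in for "File1" and "File2".
file1 = ["int", "sum", "=", "0"]
file2 = ["sum", "=", "int", "0"]

print(token_ids(file1, file1))  # [1, 2, 3, 4]
print(token_ids(file1, file2))  # [2, 3, 1, 4]
```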
References
"Latent dirichlet allocation" refers background in this paper
...Finally, Griffiths and Steyvers (2002) have presented a Markov chain Monte Carlo algorithm for LDA....
[...]
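The Markov chain Monte Carlo algorithm mentioned here is a collapsed Gibbs sampler: each token's topic assignment is resampled from its full conditional given all other assignments. A minimal sketch in that spirit, not the reference implementation; the hyperparameters alpha and beta and the toy corpus are assumptions:

```python
import random

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, n_iters=100, seed=0):
    # Minimal collapsed Gibbs sampler for LDA: resample each token's
    # topic from p(z = k | everything else) in every sweep.
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    w2i = {w: i for i, w in enumerate(vocab)}
    V, D = len(vocab), len(docs)

    ndk = [[0] * n_topics for _ in range(D)]   # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # tokens per topic
    z = []                                     # topic assignment per token
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1
            nkw[k][w2i[w]] += 1
            nk[k] += 1
        z.append(zd)

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k, wi = z[d][i], w2i[w]
                # Remove this token's current assignment from the counts.
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                # Full conditional up to a normalizing constant.
                probs = [(ndk[d][t] + alpha) * (nkw[t][wi] + beta)
                         / (nk[t] + V * beta) for t in range(n_topics)]
                r = rng.random() * sum(probs)
                k = 0
                while r > probs[k]:
                    r -= probs[k]
                    k += 1
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1
    return z, ndk, nkw

docs = [["apple", "apple", "banana"], ["banana", "cherry", "cherry"]]
z, ndk, nkw = lda_gibbs(docs, n_topics=2)
```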
...Structures similar to that shown in Figure 1 are often studied in Bayesian statistical modeling, where they are referred to as hierarchical models (Gelman et al., 1995), or more precisely as conditionally independent hierarchical models (Kass and Steffey, 1989)....
[...]
"Latent dirichlet allocation" refers methods in this paper
...To address these shortcomings, IR researchers have proposed several other dimensionality reduction techniques, most notably latent semantic indexing (LSI) (Deerwester et al., 1990)....
[...]
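LSI reduces dimensionality by a truncated singular value decomposition of the term-document matrix. A minimal sketch, assuming NumPy is available and using a hypothetical 3-term, 3-document count matrix:

```python
import numpy as np

# Hypothetical term-document count matrix: rows = terms, columns = documents.
X = np.array([
    [2, 0, 1],   # "topic"
    [1, 1, 0],   # "model"
    [0, 2, 1],   # "graph"
], dtype=float)

# LSI: keep only the top-k singular vectors.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
docs_k = (np.diag(s[:k]) @ Vt[:k]).T   # each row: a document in k-dim latent space

print(docs_k.shape)  # (3, 2)
```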
"Latent dirichlet allocation" refers background or methods in this paper
...In the popular tf-idf scheme (Salton and McGill, 1983), a basic vocabulary of “words” or “terms” is chosen, and, for each document in the corpus, a count is formed of the number of occurrences of each word....
[...]
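The tf-idf scheme described in this excerpt can be sketched directly: raw term counts for tf and log(N / df) for idf. This uses one common weighting variant (Salton and McGill describe several), and the toy corpus is an assumption:

```python
import math

def tf_idf(corpus):
    # corpus: list of documents, each a list of word tokens.
    # Returns one {word: weight} dict per document, with
    # weight = count(word in doc) * log(N / doc_freq(word)).
    n_docs = len(corpus)
    df = {}
    for doc in corpus:
        for word in set(doc):
            df[word] = df.get(word, 0) + 1
    weights = []
    for doc in corpus:
        counts = {}
        for word in doc:
            counts[word] = counts.get(word, 0) + 1
        weights.append({w: c * math.log(n_docs / df[w])
                        for w, c in counts.items()})
    return weights

docs = [["topic", "model", "topic"], ["model", "inference"]]
w = tf_idf(docs)
# "topic" appears in only one of the two documents, so it gets a positive
# weight; "model" appears in both, so its idf is log(2/2) = 0.
```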
...We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model....
[...]