scispace - formally typeset
Topic

Probabilistic latent semantic analysis

About: Probabilistic latent semantic analysis is a research topic. Over its lifetime, 2,884 publications on this topic have received 198,341 citations. The topic is also known as PLSA.
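The description above does not spell out the model itself. As a reference point: pLSA models each document-word probability as a mixture over latent topics, P(w|d) = Σ_z P(z|d) P(w|z), and is fitted with the EM algorithm. A minimal NumPy sketch (function name and defaults are illustrative, not taken from any paper listed below):

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit pLSA by EM on a document-term count matrix.

    counts: (n_docs, n_words) array of term counts n(d, w).
    Returns P(z|d) with shape (n_docs, n_topics) and
            P(w|z) with shape (n_topics, n_words).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialisation, normalised to valid distributions.
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # E-step: posterior P(z|d,w) for every (d, w) pair.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]      # (d, z, w)
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate from expected counts n(d,w) * P(z|d,w).
        expected = counts[:, None, :] * joint              # (d, z, w)
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

The dense (d, z, w) posterior tensor is fine for illustration; practical implementations iterate only over the non-zero entries of the count matrix.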


Papers
Journal ArticleDOI
TL;DR: In this article, the authors present the current state of a work in progress, whose objective is to better understand the effects of factors that significantly influence the performance of Latent Semantic Analysis (LSA).
Abstract: This paper presents the current state of a work in progress, whose objective is to better understand the effects of factors that significantly influence the performance of Latent Semantic Analysis (LSA). A difficult task, answering (French) biology Multiple Choice Questions, is used to test the semantic properties of the truncated singular space and to study the relative influence of the main parameters. Dedicated software was designed to fine-tune the LSA semantic space for the Multiple Choice Questions task. With optimal parameters, the performance of our simple model is, quite surprisingly, equal or superior to that of 7th and 8th grade students. This indicates that the semantic spaces were quite good despite their low dimensions and the small sizes of the training data sets. In addition, we present an original entropy-based global weighting of the answer terms of each Multiple Choice Question, which was necessary for the model's success.

22 citations
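The abstract's "entropy global weighting" of answer terms is not specified on this page; the standard log-entropy weighting applied to a term-document matrix before the LSA SVD looks like the following sketch (a common default, not necessarily the authors' exact scheme):

```python
import numpy as np

def log_entropy_weight(counts):
    """Log-entropy weighting, a common preprocessing step before the LSA SVD.

    counts: (n_terms, n_docs) term-frequency matrix.
    Local weight: log(1 + tf).  Global weight for term i:
        g_i = 1 + sum_j p_ij * log(p_ij) / log(n_docs),
    where p_ij = tf_ij / gf_i and gf_i is the term's total frequency.
    Terms spread evenly over documents get g_i near 0 (uninformative);
    terms concentrated in few documents get g_i near 1.
    """
    n_docs = counts.shape[1]
    gf = counts.sum(axis=1, keepdims=True)        # global frequency per term
    p = counts / (gf + 1e-12)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    g = 1.0 + plogp.sum(axis=1) / np.log(n_docs)
    return np.log1p(counts) * g[:, None]
```

A term occurring once in every document gets a global weight of exactly 0, so it contributes nothing to the truncated singular space.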

Journal ArticleDOI
TL;DR: The large scale distributed composite language model gives drastic perplexity reduction over n-grams and achieves significantly better translation quality measured by the Bleu score and “readability” of translations when applied to the task of re-ranking the N-best list from a state-of-the-art parsing-based machine translation system.
Abstract: This paper presents an attempt at building a large scale distributed composite language model that is formed by seamlessly integrating an n-gram model, a structured language model, and probabilistic latent semantic analysis under a directed Markov random field paradigm to simultaneously account for local word lexical information, mid-range sentence syntactic structure, and long-span document semantic content. The composite language model has been trained by performing a convergent N-best list approximate EM algorithm and a follow-up EM algorithm to improve word prediction power on corpora with up to a billion tokens and stored on a supercomputer. The large scale distributed composite language model gives drastic perplexity reduction over n-grams and achieves significantly better translation quality measured by the Bleu score and "readability" of translations when applied to the task of re-ranking the N-best list from a state-of-the-art parsing-based machine translation system.

22 citations

Proceedings ArticleDOI
08 May 2007
TL;DR: This paper presents a novel approach for discovering web services that utilizes Probabilistic Latent Semantic Analysis (PLSA) to capture semantic concepts hidden behind the words in queries and service advertisements, so that service matching can be carried out at the concept level.
Abstract: Service discovery is one of the challenging issues in service-oriented computing. Most existing service discovery and matching approaches are based on a keyword strategy, which is inefficient and time-consuming. In this paper, we present a novel approach for discovering web services. Building on the currently dominant mechanisms for discovering and describing Web Services with UDDI and WSDL, the proposed approach utilizes Probabilistic Latent Semantic Analysis (PLSA) to capture semantic concepts hidden behind the words in queries and service advertisements, so that service matching can be carried out at the concept level. We also present related algorithms and preliminary experiments to evaluate the effectiveness of our approach.

22 citations
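The paper's algorithms are not reproduced on this page. One plausible way to match queries against service advertisements "at the concept level", as the abstract describes, is to fold a new query into a trained pLSA topic space and rank services by the similarity of their topic mixtures. A hedged sketch (assumes a trained P(w|z) matrix and per-service P(z|d) vectors; function names are illustrative):

```python
import numpy as np

def fold_in_query(query_counts, p_w_z, n_iter=30):
    """Infer P(z|q) for an unseen query with the topics P(w|z) held
    fixed (the standard pLSA 'folding-in' EM procedure)."""
    n_topics = p_w_z.shape[0]
    p_z_q = np.full(n_topics, 1.0 / n_topics)
    for _ in range(n_iter):
        # E-step: posterior P(z|q,w) for each word, topics fixed.
        post = p_z_q[:, None] * p_w_z                 # (z, w)
        post /= post.sum(axis=0, keepdims=True) + 1e-12
        # M-step: update only the query's topic mixture.
        p_z_q = (post * query_counts[None, :]).sum(axis=1)
        p_z_q /= p_z_q.sum() + 1e-12
    return p_z_q

def rank_services(p_z_q, p_z_services):
    """Rank service advertisements by cosine similarity of their
    topic mixtures P(z|d) to the query mixture P(z|q)."""
    q = p_z_q / (np.linalg.norm(p_z_q) + 1e-12)
    s = p_z_services / (np.linalg.norm(p_z_services, axis=1, keepdims=True) + 1e-12)
    scores = s @ q
    return np.argsort(-scores), scores
```

Matching in topic space lets a query and an advertisement score highly even when they share no literal keywords, which is the advantage over keyword matching that the abstract claims.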

Journal ArticleDOI
TL;DR: This work proposes a framework for unsupervised semantic analysis and organization of spoken documents, along with two measures derived from latent topic analysis, latent topic significance and latent topic entropy, which can be integrated into an application system with which the user can more easily navigate archives of spoken documents.
Abstract: Spoken documents are audio signals and are thus not easily displayed on-screen and not easily scanned and browsed by the user. It is therefore highly desirable to automatically construct summaries, titles, latent topic trees and key term-based topic labels for these spoken documents to aid the user in browsing. We refer to this as semantic analysis and organization. Also, as network content is both copious and dynamic, with topics and domains changing everyday, the approaches here must be primarily unsupervised. We propose a framework for unsupervised semantic analysis and organization of spoken documents and for this purpose propose two measures derived from latent topic analysis: latent topic significance and latent topic entropy. We show that these can be integrated into an application system, with which the user can more easily navigate archives of spoken documents. Probabilistic latent semantic analysis is used as a typical example approach for unsupervised topic analysis in most experiments, although latent Dirichlet allocation is also used in some experiments to show that the proposed measures are equally applicable for different analysis approaches. All of the experiments were performed on Mandarin Chinese broadcast news.

22 citations
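The abstract does not give the exact definitions of latent topic significance and latent topic entropy. One common formulation of topic entropy scores each term by the entropy of its topic posterior P(z|w): low-entropy terms concentrate in few topics and make good key-term candidates, while high-entropy terms are topic-neutral. A sketch under that assumption (the paper's actual formulas may differ):

```python
import numpy as np

def latent_topic_entropy(p_w_z, topic_prior=None):
    """Entropy of the topic posterior P(z|w) for each word.

    p_w_z: (n_topics, n_words) matrix of P(w|z) from pLSA (or LDA).
    Low-entropy words concentrate in few topics (key-term candidates);
    high-entropy words are spread evenly and carry little topical signal.
    """
    n_topics = p_w_z.shape[0]
    if topic_prior is None:
        p_z = np.full(n_topics, 1.0 / n_topics)   # uniform topic prior
    else:
        p_z = topic_prior
    p_z_w = p_w_z * p_z[:, None]                  # proportional to P(w|z) P(z)
    p_z_w /= p_z_w.sum(axis=0, keepdims=True) + 1e-12
    return -(p_z_w * np.log(p_z_w + 1e-12)).sum(axis=0)
```

Because the same quantities P(w|z) exist in both pLSA and LDA, a measure defined this way applies to either analysis approach, consistent with the abstract's claim that the measures are not tied to one model.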


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Support vector machine: 73.6K papers, 1.7M citations, 84% related
Deep learning: 79.8K papers, 2.1M citations, 83% related
Object detection: 46.1K papers, 1.3M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    19
2022    77
2021    14
2020    36
2019    27
2018    58