Probabilistic question recommendation for question answering communities
Citations
CQArank: jointly model topics and expertise in community question answering
Personalized task recommendation in crowdsourcing information systems - Current state of the art
Finding expert users in community question answering
Routing questions to appropriate answerers in community question answering services
Expert Finding for Question Answering via Graph Regularized Matrix Completion
References
Probabilistic latent semantic indexing
Knowledge sharing and yahoo answers: everyone knows something
Probabilistic Models for Unified Collaborative and Content-Based Recommendation in Sparse-Data Environments
Frequently Asked Questions (7)
Q2. What is the common method of QA?
A common method is to maintain, on user home pages, a question list that is automatically generated based on features such as posting time and ratings.
Q3. How many questions are trained from the data sets?
For each category, a PLSA model is trained on 85% of the question sets (questions and their corresponding answers), and the rest are used for testing.
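As a rough illustration of this setup, the following Python sketch splits one category's question sets 85%/15% into training and test data; the function name split_category and the pair structure are assumptions for illustration, not the authors' code.

```python
import random

def split_category(question_sets, train_frac=0.85, seed=0):
    """question_sets: list of (question, answers) pairs for one category."""
    items = list(question_sets)
    random.Random(seed).shuffle(items)    # shuffle before splitting
    cut = int(len(items) * train_frac)    # 85% used to train the category's PLSA model
    return items[:cut], items[cut:]       # (training set, held-out test set)
```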
Q4. How are the words w ∈ {w1, w2, ..., wl} modeled?
In order to deal with sparsity, the authors use a user-word aspect model instead, where the co-occurrence data represent the event that users type words in a particular question:

Pr(u, w) = ∑_z Pr(u|z) Pr(w|z) Pr(z)    (2)

where w ∈ {w1, w2, ..., wl} are the words that questions contain.
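A minimal sketch of how the user-word co-occurrence counts c(u, w) behind this model might be collected from answered questions; the whitespace tokenization and the helper name build_counts are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def build_counts(answered):
    """answered: iterable of (user_id, question_text) pairs for questions the user answered."""
    counts = Counter()
    for user, text in answered:
        for word in text.lower().split():   # crude whitespace tokenization for illustration
            counts[(user, word)] += 1       # c(u, w): how often user u co-occurs with word w
    return counts

counts = build_counts([("u1", "python installation error"),
                       ("u2", "java garbage collection tuning")])
```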
Q5. What is the key to a question recommender?
Given a question collection, the distribution of users and their answered questions can be formulated as follows:

Pr(u, q) = ∑_z Pr(u|z) Pr(q|z) Pr(z)    (1)

where u ∈ {u1, u2, ..., un} are users, q ∈ {q1, q2, ..., qm} are questions, and z ∈ {z1, z2, ..., zk} are k latent topic models, each capturing one topic.
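To make equation (1) concrete, here is a small numpy sketch with toy parameters (assumed for illustration, not from the paper) showing that the full user-question distribution is the topic mixture of the three factors:

```python
import numpy as np

# Toy parameters for eq. (1): 3 users, 4 questions, 2 latent topics.
P_u_given_z = np.array([[0.6, 0.1],
                        [0.3, 0.2],
                        [0.1, 0.7]])               # P(u|z), each column sums to 1
P_q_given_z = np.array([[0.4, 0.1],
                        [0.3, 0.2],
                        [0.2, 0.3],
                        [0.1, 0.4]])               # P(q|z), each column sums to 1
P_z = np.array([0.5, 0.5])                         # P(z)

# Pr(u, q) = sum_z P(u|z) P(q|z) P(z), computed for all (u, q) pairs at once
P_uq = P_u_given_z @ np.diag(P_z) @ P_q_given_z.T  # shape (3 users, 4 questions)
print(P_uq.sum())                                  # ≈ 1.0: a proper joint distribution
```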
Q6. What is the simplest way to find the answers to questions?
With the exponential growth in data volume, it is becoming increasingly time-consuming for users to find the questions that are of interest to them.
Q7. What is the probability of a question being recommended?
Pr(z|u, w) = Pr(u|z) Pr(w|z) Pr(z) / ∑_z′ Pr(u|z′) Pr(w|z′) Pr(z′)    (4)
Pr(u|z) ∝ ∑_w c(u, w) Pr(z|u, w)    (5)
Pr(w|z) ∝ ∑_u c(u, w) Pr(z|u, w)    (6)
Pr(z) ∝ ∑_{u,w} c(u, w) Pr(z|u, w)    (7)

The authors then model recommending questions to users via the posterior probability Pr(u|q), that is, how likely it is that user u will access the corresponding question q. By Bayes' rule, Pr(u|q) ∝ Pr(u, q), which is calculated as the product of the probabilities of the words q contains, normalized by the question length:

Pr(u, q) = (∏_i Pr(u, wi))^(1/|q|)    (8)

where wi are the words in the question q, and |q| is the length of q.
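Putting equations (4)-(8) together, the following self-contained numpy sketch runs the EM updates for the user-word aspect model and then scores a question for a user. The dense count matrix, the function names plsa_em and recommendation_score, the random initialization, and the iteration count are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def plsa_em(C, K, n_iter=50, seed=0):
    """EM for the user-word aspect model (eqs. 2, 4-7).
    C: count matrix c(u, w) of shape (n_users, n_words). Returns P(u|z), P(w|z), P(z)."""
    rng = np.random.default_rng(seed)
    n_users, n_words = C.shape
    Pu_z = rng.random((n_users, K)); Pu_z /= Pu_z.sum(axis=0)   # P(u|z), columns sum to 1
    Pw_z = rng.random((n_words, K)); Pw_z /= Pw_z.sum(axis=0)   # P(w|z), columns sum to 1
    Pz = np.full(K, 1.0 / K)                                    # P(z)
    for _ in range(n_iter):
        # E-step (eq. 4): P(z|u,w) ∝ P(u|z) P(w|z) P(z), normalized over z
        joint = Pu_z[:, None, :] * Pw_z[None, :, :] * Pz[None, None, :]
        Pz_uw = joint / (joint.sum(axis=2, keepdims=True) + 1e-12)
        # M-step (eqs. 5-7): each posterior weighted by the count c(u, w)
        weighted = C[:, :, None] * Pz_uw
        Pu_z = weighted.sum(axis=1); Pu_z /= Pu_z.sum(axis=0)
        Pw_z = weighted.sum(axis=0); Pw_z /= Pw_z.sum(axis=0)
        Pz = weighted.sum(axis=(0, 1)); Pz /= Pz.sum()
    return Pu_z, Pw_z, Pz

def recommendation_score(u, question_word_ids, Pu_z, Pw_z, Pz):
    """Eq. (8): Pr(u, q) as the geometric mean of Pr(u, w_i) over the words in question q."""
    Pu_w = (Pu_z[u] * Pw_z[question_word_ids] * Pz).sum(axis=1)  # Pr(u, w_i) for each word
    return np.exp(np.mean(np.log(Pu_w + 1e-12)))                 # length-normalized product
```

Ranking all candidate questions by recommendation_score for a given user would then yield the kind of recommendation list discussed above.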