
Showing papers by "Kai Puolamäki published in 2008"


Proceedings ArticleDOI
05 Jul 2008
TL;DR: A probabilistic model of user interests, interpretable as a kind of transfer or meta-learning, is demonstrated to outperform an earlier kernel-based method in a small-scale information retrieval task.
Abstract: In the absence of explicit queries, an alternative is to try to infer users' interests from implicit feedback signals, such as clickstreams or eye tracking. The interests, formulated as an implicit query, can then be used in further searches. We formulate this task as a probabilistic model, which can be interpreted as a kind of transfer or meta-learning. The probabilistic model is demonstrated to outperform an earlier kernel-based method in a small-scale information retrieval task.

20 citations


Journal ArticleDOI
TL;DR: This paper approximates the optimal biclustering by applying one-way clustering algorithms independently to the rows and to the columns of the input matrix, and shows that such a solution yields a worst-case approximation ratio of 1+√2 under the L1-norm for 0-1 valued matrices.

15 citations
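The approximation scheme summarized above can be illustrated with a small sketch: cluster the rows and the columns of a 0-1 matrix independently, then summarize each induced block by its best constant 0/1 value, whose L1 cost is min(#ones, #zeros). The paper does not prescribe a particular one-way clustering algorithm; the deterministic majority-vote k-medians below is a stand-in chosen for reproducibility, and the toy matrix is invented for illustration.

```python
def one_way_clusters(vectors, k, iters=20):
    """Deterministic one-way clustering of 0-1 vectors under L1 distance.

    Centroids are updated by coordinate-wise majority vote, which minimizes
    L1 cost for 0-1 data. A stand-in for any one-way clustering algorithm.
    """
    # initialize centroids with the first k distinct vectors
    centroids = []
    for v in vectors:
        if list(v) not in centroids:
            centroids.append(list(v))
        if len(centroids) == k:
            break
    assign = [0] * len(vectors)
    for _ in range(iters):
        # assignment step: nearest centroid in L1 distance
        assign = [min(range(len(centroids)),
                      key=lambda c: sum(abs(a - b) for a, b in zip(v, centroids[c])))
                  for v in vectors]
        # update step: coordinate-wise majority (ties go to 1)
        for c in range(len(centroids)):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = [1 if 2 * sum(col) >= len(members) else 0
                                for col in zip(*members)]
    return assign

def bicluster_l1_cost(M, row_assign, col_assign):
    """L1 cost of the bicluster grid induced by independent row/column clusterings.

    Each block is represented by its best constant 0/1 value, so its
    cost is min(number of ones, number of zeros) in the block.
    """
    cost = 0
    for rc in set(row_assign):
        for cc in set(col_assign):
            cells = [M[i][j]
                     for i in range(len(M)) if row_assign[i] == rc
                     for j in range(len(M[0])) if col_assign[j] == cc]
            if cells:
                cost += min(sum(cells), len(cells) - sum(cells))
    return cost

# toy 0-1 matrix: two blocks plus one flipped cell
M = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 1, 1, 1]]
rows = one_way_clusters(M, 2)
cols = one_way_clusters([list(c) for c in zip(*M)], 2)
cost = bicluster_l1_cost(M, rows, cols)
print(rows, cols, cost)  # the single misfit cell gives L1 cost 1
```

The key point of the paper is that this cheap two-pass construction cannot be arbitrarily worse than the (NP-hard) optimal biclustering: its L1 cost is at most 1+√2 times the optimum for 0-1 matrices.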


Posted Content
TL;DR: An axiomatic proof is given that the discriminative posterior is consistent for conditional inference; the approach extends generative methods for handling missing values, which is useful when data are scarce, and yields better conditional inference than standard generative modeling when the model family is incorrect.
Abstract: We study Bayesian discriminative inference given a model family $p(c, x, \theta)$ that is assumed to contain all our prior information but still known to be incorrect. This falls in between "standard" Bayesian generative modeling and Bayesian regression, where the marginal $p(x, \theta)$ is known to be uninformative about $p(c|x, \theta)$. We give an axiomatic proof that the discriminative posterior is consistent for conditional inference; using the discriminative posterior is standard practice in classical Bayesian regression, but we show that it is theoretically justified for model families of joint densities as well. A practical benefit compared to Bayesian regression is that the standard methods of handling missing values in generative modeling can be extended into discriminative inference, which is useful if the amount of data is small. Compared to standard generative modeling, the discriminative posterior results in better conditional inference if the model family is incorrect. If the model family also contains the true model, the discriminative posterior gives the same result as standard Bayesian generative modeling. Practical computation is done with Markov chain Monte Carlo.

3 citations
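The computational recipe in the abstract, weighting each parameter value by the conditional likelihood $\prod_i p(c_i|x_i,\theta)$ rather than the joint likelihood, can be sketched with a random-walk Metropolis sampler. The joint model here (equal class priors, unit-variance Gaussians with class means $\mu_0, \mu_1$), the broad prior, and the six data points are all invented toy assumptions for illustration, not the paper's experiments.

```python
import math
import random

random.seed(0)

def cond_logprob(c, x, mu):
    # conditional p(c | x, theta) derived from the toy joint model:
    # equal class priors, unit-variance Gaussians N(x; mu[c], 1)
    logs = [-0.5 * (x - m) ** 2 for m in mu]
    mx = max(logs)
    logz = mx + math.log(sum(math.exp(l - mx) for l in logs))
    return logs[c] - logz

def disc_logpost(mu, data):
    # discriminative posterior: prior over theta times the product of
    # conditionals p(c_i | x_i, theta); the marginal of x is ignored
    logprior = sum(-0.5 * m * m / 100.0 for m in mu)  # broad N(0, 10^2) prior
    return logprior + sum(cond_logprob(c, x, mu) for c, x in data)

# toy labeled data: class 0 near -1, class 1 near +1
data = [(0, -1.2), (0, -0.8), (0, -1.0), (1, 0.9), (1, 1.3), (1, 1.1)]

# random-walk Metropolis over theta = (mu0, mu1)
mu = [0.0, 0.0]
lp = disc_logpost(mu, data)
samples = []
for step in range(5000):
    prop = [m + random.gauss(0, 0.5) for m in mu]
    lp_prop = disc_logpost(prop, data)
    if math.log(random.random()) < lp_prop - lp:  # accept/reject
        mu, lp = prop, lp_prop
    if step >= 1000:  # discard burn-in
        samples.append(mu)

mean0 = sum(s[0] for s in samples) / len(samples)
mean1 = sum(s[1] for s in samples) / len(samples)
print(mean0 < mean1)  # posterior places the class-0 mean below the class-1 mean
```

Because only the conditional enters the likelihood, the sampler concentrates on parameter values that classify well even if the implied marginal of $x$ fits the data poorly, which is exactly the claimed advantage under model misspecification.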


01 Jan 2008
TL;DR: Macadamia is a two-year master’s programme in machine learning and data mining given in the Department of Information and Computer Science at Helsinki University of Technology; this paper describes its curriculum and how the courses are organized.
Abstract: Macadamia is a two-year master’s programme for machine learning and data mining given in the Department of Information and Computer Science at Helsinki University of Technology. This paper describes its curriculum and how the courses are organized. The emphasis is on our three machine learning courses.

1 citation