Marie-Francine Moens
Researcher at Katholieke Universiteit Leuven
Publications - 410
Citations - 8987
Marie-Francine Moens is an academic researcher at Katholieke Universiteit Leuven. Her research focuses on information extraction and language modeling. She has an h-index of 45 and has co-authored 393 publications receiving 7779 citations. Previous affiliations of Marie-Francine Moens include Brandeis University and the University of Copenhagen Faculty of Science.
Papers
Journal ArticleDOI
C-BiLDA: extracting cross-lingual topics from non-parallel texts by distinguishing shared from unshared content
TL;DR: A new bilingual probabilistic topic model, comparable bilingual latent Dirichlet allocation (C-BiLDA), which can handle comparable (non-parallel) data and, unlike the standard bilingual LDA model, does not assume the availability of document pairs with identical topic distributions.
Journal Article
Finding the best picture: cross-media retrieval of content
TL;DR: It is demonstrated that an appearance or content model based on syntactic, semantic and discourse analysis of the short news text is only useful for finding the best picture of a person or object if the database contains photos that each picture many entities.
Proceedings ArticleDOI
Measuring Aboutness of an Entity in a Text
TL;DR: This paper presents a graph-based algorithm that assigns an aboutness score to a text: given a person name as the input query, it measures how much the text is about that person with respect to his or her biographical data.
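One simple way to realize a graph-based aboutness score is personalized PageRank over an entity co-occurrence graph. The sketch below is a hypothetical illustration, not the paper's exact algorithm: entities mentioned in the text are nodes, sentence-level co-occurrence adds edges, and relevance is propagated from the query person.

```python
def aboutness_scores(edges, seed, damping=0.85, iters=50):
    """Personalized PageRank over an undirected co-occurrence graph.

    Illustrative sketch only; the seed node is the query person name,
    and the returned rank of each entity serves as its aboutness score.
    """
    nodes = sorted({n for e in edges for n in e})
    neighbors = {n: [] for n in nodes}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    # All teleport probability mass is concentrated on the query entity.
    rank = {n: (1.0 if n == seed else 0.0) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(neighbors[m]) for m in neighbors[n])
            teleport = 1.0 if n == seed else 0.0
            new[n] = (1 - damping) * teleport + damping * incoming
        rank = new
    return rank

# Toy co-occurrence graph extracted from a hypothetical biography snippet.
edges = [("einstein", "physics"), ("einstein", "princeton"),
         ("physics", "newton")]
scores = aboutness_scores(edges, seed="einstein")
```

Entities close to the query (here "physics") score higher than peripheral ones (here "newton"), which matches the intuition of measuring aboutness relative to the query person.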
Patent
Method for the automatic determination of context-dependent hidden word distributions
TL;DR: The Latent Words Language Model (LWLM) as mentioned in this paper automatically determines context-dependent word distributions (called hidden or latent words) for each word of a text, which reflect the probability that another word of the vocabulary of a language would occur at that position in the text.
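The core idea of context-dependent hidden word distributions can be illustrated with a toy bigram approximation. This is a hedged sketch, not the patented LWLM method: each vocabulary word w is scored at a position by p(w | previous word) * p(next word | w), then the scores are normalized into a distribution.

```python
from collections import Counter

def bigram_counts(corpus):
    """Count adjacent word pairs in a list of tokenized sentences."""
    counts = Counter()
    for sent in corpus:
        for a, b in zip(sent, sent[1:]):
            counts[(a, b)] += 1
    return counts

def hidden_word_distribution(prev, nxt, vocab, counts, alpha=0.1):
    """P(hidden word w | prev, nxt) ∝ p(w|prev) * p(nxt|w), add-alpha smoothed.

    Toy illustration of a context-dependent distribution over latent words;
    the real model uses richer context and iterative estimation.
    """
    V = len(vocab)
    def p(a, b):
        total = sum(counts[(a, w)] for w in vocab)
        return (counts[(a, b)] + alpha) / (total + alpha * V)
    scores = {w: p(prev, w) * p(w, nxt) for w in vocab}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
vocab = ["the", "cat", "dog", "sat", "ran"]
dist = hidden_word_distribution("the", "sat", vocab, bigram_counts(corpus))
```

For the slot between "the" and "sat", words that fit that context ("cat", "dog") receive most of the probability mass, reflecting how likely each vocabulary word would be at that position.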
Posted Content
Speech-Based Visual Question Answering.
TL;DR: This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question and investigates the robustness of both methods by injecting various levels of noise into the spoken question.
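Noise injection at controlled levels is typically done by scaling white noise to a target signal-to-noise ratio (SNR). The sketch below is a generic illustration of that corruption step, not the paper's code; the 440 Hz tone stands in for a real spoken question.

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Return signal plus white Gaussian noise scaled to the given SNR in dB."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(signal ** 2)
    # Solve SNR_dB = 10 * log10(signal_power / noise_power) for noise_power.
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# A synthetic one-second "spoken question" at 16 kHz: a 440 Hz tone.
sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 440 * t)
noisy = add_noise(speech, snr_db=10)
```

Sweeping `snr_db` from high (clean) to low (heavily corrupted) values yields the graded noise levels used to probe how gracefully a speech-based VQA model degrades.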