Walter Daelemans
Researcher at University of Antwerp
Publications - 463
Citations - 13831
Walter Daelemans is an academic researcher at the University of Antwerp. He has contributed to research in the topics of Language technology and Natural language, has an h-index of 57, and has co-authored 444 publications receiving 12732 citations. Previous affiliations of Walter Daelemans include VU University Amsterdam and Radboud University Nijmegen.
Papers
Proceedings ArticleDOI
Predicting age and gender in online social networks
TL;DR: This paper presents an exploratory study in which a text categorization approach is applied to predict age and gender on a corpus of chat texts collected from the Belgian social networking site Netlog.
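To make the text categorization idea concrete, here is a minimal sketch, not the paper's actual features, data, or classifier: a bag-of-words nearest-centroid classifier over hypothetical chat snippets, which assigns a message to the class whose aggregated word profile it overlaps most.

```python
from collections import Counter

def bow(text):
    # Bag-of-words: word -> count for one message
    return Counter(text.lower().split())

def train_centroids(labeled_texts):
    # labeled_texts: list of (text, label); sum word counts per class
    centroids = {}
    for text, label in labeled_texts:
        centroids.setdefault(label, Counter()).update(bow(text))
    return centroids

def predict(centroids, text):
    words = bow(text)
    # Score a class by word-count overlap between message and class profile
    def overlap(c):
        return sum(min(words[w], c[w]) for w in words)
    return max(centroids, key=lambda label: overlap(centroids[label]))

# Hypothetical toy training data, for illustration only
train = [
    ("omg thats so cute lol", "female"),
    ("love this so much xx", "female"),
    ("match tonight bro", "male"),
    ("new game is epic bro", "male"),
]
model = train_centroids(train)
print(predict(model, "so cute lol"))     # → female
print(predict(model, "epic match bro"))  # → male
```

A real system would use richer features (character n-grams, token n-grams) and a stronger learner, but the pipeline shape — featurize, train per class, score — is the same.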
Proceedings Article
MBT: A Memory-Based Part of Speech Tagger-Generator
TL;DR: A large-scale application of the memory-based approach to part of speech tagging is shown to be feasible, obtaining a tagging accuracy that is on a par with that of known statistical approaches, and with attractive space and time complexity properties when using IGTree, a tree-based formalism for indexing and searching huge case bases.
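The memory-based idea can be sketched as follows. This is a hedged illustration, not the real IGTree implementation: every training instance is stored, and at tagging time the most specific matching context is looked up first, backing off to less specific ones, loosely mimicking IGTree's ordering of features by information gain. The tiny corpus and the default tag are assumptions for the example.

```python
from collections import Counter, defaultdict

def train(tagged_sentences):
    # Two "case bases": (word, previous tag) -> tag counts, and word -> tag counts
    by_word_prev, by_word = defaultdict(Counter), defaultdict(Counter)
    for sent in tagged_sentences:
        prev = "<s>"
        for word, t in sent:
            by_word_prev[(word, prev)][t] += 1
            by_word[word][t] += 1
            prev = t
    return by_word_prev, by_word

def tag(sentence, model, default="NN"):
    by_word_prev, by_word = model
    prev, out = "<s>", []
    for word in sentence:
        if (word, prev) in by_word_prev:      # most specific context first
            t = by_word_prev[(word, prev)].most_common(1)[0][0]
        elif word in by_word:                 # back off to word alone
            t = by_word[word].most_common(1)[0][0]
        else:
            t = default                       # unseen word: default tag
        out.append(t)
        prev = t
    return out

corpus = [[("the", "DT"), ("dog", "NN"), ("barks", "VBZ")],
          [("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")]]
model = train(corpus)
print(tag(["the", "dog", "sleeps"], model))  # → ['DT', 'NN', 'VBZ']
```

IGTree compresses exactly this kind of case base into a decision tree so that lookup is fast and memory stays small, which is where the attractive space and time complexity comes from.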
Journal ArticleDOI
Forgetting Exceptions is Harmful in Language Learning
TL;DR: It is shown that in language learning, contrary to received wisdom, keeping exceptional training instances in memory can be beneficial for generalization accuracy, and that decision-tree learning often performs worse than memory-based learning.
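A toy illustration of the paper's point, with hypothetical data: a memory-based learner that keeps exceptional instances in memory handles irregular forms correctly, whereas a model that prunes them away falls back to the regular rule and gets them wrong.

```python
def past_tense(verb, memory):
    # Exact-match memory lookup first (retains exceptions),
    # else apply the regular "-ed" rule.
    return memory.get(verb, verb + "ed")

# Exceptional training instances kept in memory
memory = {"go": "went", "sing": "sang"}

print(past_tense("walk", memory))  # → walked (regular rule)
print(past_tense("go", memory))    # → went   (retained exception)
# A learner that "forgets" exceptions regularizes instead:
print(past_tense("go", {}))        # → goed
```

In language data, such "exceptions" often recur (irregular verbs, lexical idiosyncrasies), which is why discarding them, as pruning in decision-tree learning does, can hurt generalization accuracy.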
Journal ArticleDOI
Automatic detection of cyberbullying in social media text
Cynthia Van Hee, Gilles Jacobs, Chris Emmery, Bart Desmet, Els Lefever, Ben Verhoeven, Guy De Pauw, Walter Daelemans, Veronique Hoste +8 more
TL;DR: This paper describes the collection and fine-grained annotation of a cyberbullying corpus for English and Dutch and performs a series of binary classification experiments to determine the feasibility of automatic cyberbullying detection.
Journal ArticleDOI
Improving accuracy in word class tagging through the combination of machine learning systems
TL;DR: It is examined how differences in language models, learned by different data-driven systems performing the same NLP task, can be exploited to yield a higher accuracy than the best individual system.
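One simple combination strategy in this line of work can be sketched as majority voting over the outputs of several taggers (the study itself also examines weighted and stacked combination; the tag sequences below are hypothetical).

```python
from collections import Counter

def combine(predictions_per_system):
    # predictions_per_system: one tag sequence per system, same length;
    # pick the most frequent tag at each position
    return [Counter(tags).most_common(1)[0][0]
            for tags in zip(*predictions_per_system)]

sys_a = ["DT", "NN", "VBZ"]
sys_b = ["DT", "VB", "VBZ"]
sys_c = ["DT", "NN", "NNS"]
print(combine([sys_a, sys_b, sys_c]))  # → ['DT', 'NN', 'VBZ']
```

The gain over the best single system comes from the systems' errors being partly uncorrelated: where one tagger's language model goes wrong, the others often still agree on the correct tag.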