
Steffen Remus

Researcher at University of Hamburg

Publications: 25
Citations: 670

Steffen Remus is an academic researcher at the University of Hamburg whose work focuses on topics such as language models and word representations. He has an h-index of 8 and has co-authored 22 publications receiving 533 citations. His previous affiliations include Bar-Ilan University and Technische Universität Darmstadt.

Papers
Proceedings ArticleDOI

Do Supervised Distributional Methods Really Learn Lexical Inference Relations?

TL;DR: This work investigates distributional word representations used in supervised settings for recognizing lexical inference relations between word pairs, and shows that these methods do not actually learn a relation between the two words but rather an independent property of a single word in the pair.
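
As a rough illustration of the diagnostic behind this finding, the sketch below compares a classifier trained on concatenated pair representations against one that only ever sees a single word of each pair; near-identical scores suggest the model learned a property of one word rather than a relation. The `vectors` lookup is hypothetical and scikit-learn is assumed; this is not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pair_features(pairs, vectors):
    # concat(v_x, v_y): a common supervised representation of a word pair
    return np.array([np.concatenate([vectors[x], vectors[y]]) for x, y in pairs])

def single_word_features(pairs, vectors, side=1):
    # only v_y (the candidate hypernym): the "lexical memorization" probe
    return np.array([vectors[p[side]] for p in pairs])

def compare(pairs, labels, vectors):
    # `pairs` is a list of (x, y) word pairs, `labels` marks true inference
    # relations, `vectors` is a hypothetical {word: np.ndarray} embedding map.
    clf = LogisticRegression(max_iter=1000)
    acc_pair = cross_val_score(clf, pair_features(pairs, vectors), labels).mean()
    acc_single = cross_val_score(clf, single_word_features(pairs, vectors), labels).mean()
    # If the two scores are close, the classifier is detecting a prototypical
    # hypernym, i.e. an independent property of one word, not a relation.
    print(f"pair features: {acc_pair:.3f}  single-word features: {acc_single:.3f}")
```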
Posted Content

Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings

TL;DR: This paper proposes a simple but effective approach to word sense disambiguation (WSD) using nearest-neighbor classification on contextualized word embeddings (CWEs), compares the performance of different CWE models on the task, and reports improvements over the current state of the art on two standard WSD benchmark datasets.
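
A minimal sketch of the general approach, nearest-neighbor classification over contextualized word embeddings, is given below. It assumes the HuggingFace transformers library; the model name, toy sense inventory, and helper function are illustrative, not the authors' implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed_word(sentence: str, word_index: int) -> torch.Tensor:
    """Average BERT's sub-token vectors for the whitespace token at word_index."""
    enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]            # (seq_len, 768)
    sub = [i for i, w in enumerate(enc.word_ids()) if w == word_index]
    return hidden[sub].mean(dim=0)

# Tiny labeled sense inventory: (sentence, target word position, sense tag).
train = [("he sat on the bank of the river", 4, "bank%shore"),
         ("she deposited cash at the bank", 5, "bank%institution")]
prototypes = [(embed_word(s, i), sense) for s, i, sense in train]

def disambiguate(sentence: str, word_index: int) -> str:
    # 1-nearest-neighbor by cosine similarity in the CWE space
    q = embed_word(sentence, word_index)
    sims = [(torch.cosine_similarity(q, p, dim=0).item(), sense)
            for p, sense in prototypes]
    return max(sims)[1]

# disambiguate("fishermen lined the bank at dawn", 3)  # -> "bank%shore", ideally
```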
Proceedings ArticleDOI

Hierarchical Multi-label Classification of Text with Capsule Networks

TL;DR: This paper applies and compares simple, shallow capsule networks for hierarchical multi-label text classification and shows that they can outperform other neural networks as well as non-neural architectures such as SVMs.
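
The hierarchical multi-label setting can be sketched as one independent sigmoid output per node of the label hierarchy, so a document can activate an entire root-to-leaf path. In the sketch below a plain bag-of-embeddings encoder stands in for the capsule layers; the routing-by-agreement mechanism of the actual paper is not reproduced.

```python
import torch
import torch.nn as nn

class HMCClassifier(nn.Module):
    """Toy hierarchical multi-label classifier: one logit per hierarchy node."""
    def __init__(self, vocab_size: int, embed_dim: int, num_labels: int):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # mean over tokens
        self.out = nn.Linear(embed_dim, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.out(self.embed(token_ids))               # raw logits

model = HMCClassifier(vocab_size=30000, embed_dim=128, num_labels=100)
loss_fn = nn.BCEWithLogitsLoss()          # independent binary decision per label

tokens = torch.randint(0, 30000, (8, 60)) # batch of 8 toy documents, 60 tokens each
targets = torch.zeros(8, 100)
targets[:, [3, 17, 64]] = 1.0             # e.g. one root-to-leaf label path
loss = loss_fn(model(tokens), targets)
loss.backward()
```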
Proceedings ArticleDOI

TAXI at SemEval-2016 Task 13: a Taxonomy Induction Method based on Lexico-Syntactic Patterns, Substrings and Focused Crawling

TL;DR: This work presents a taxonomy-construction system that took first place in all subtasks of the SemEval-2016 challenge on Taxonomy Extraction Evaluation, and shows that the method outperforms more complex, knowledge-rich approaches on most domains and languages.
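
Two of the cheap signals behind this style of taxonomy induction can be sketched directly: Hearst-style lexico-syntactic patterns matched in raw text, and the substring heuristic that attaches a multiword term to its head ("apple pie" is-a "pie"). The pattern set below is a small illustrative subset, not TAXI's full inventory or its focused-crawling pipeline.

```python
import re
from collections import Counter

# A few Hearst-style patterns; the real system uses a larger inventory.
HEARST = [
    r"(?P<hyper>\w+) such as (?P<hypo>\w+)",
    r"(?P<hypo>\w+) and other (?P<hyper>\w+)",
    r"(?P<hypo>\w+) is a (?P<hyper>\w+)",
]

def pattern_pairs(corpus: str) -> Counter:
    """Count (hyponym, hypernym) pairs extracted by the patterns."""
    pairs = Counter()
    for pat in HEARST:
        for m in re.finditer(pat, corpus.lower()):
            pairs[(m.group("hypo"), m.group("hyper"))] += 1
    return pairs

def substring_pairs(terms):
    """Attach each multiword term to its head word: 'apple pie' -> 'pie'."""
    return {(t, t.split()[-1]) for t in terms if " " in t}

corpus = "fruits such as apples are sold; apples and other fruits ripen"
print(pattern_pairs(corpus))                  # Counter({('apples', 'fruits'): 2})
print(substring_pairs({"apple pie", "pie"}))  # {('apple pie', 'pie')}
```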

Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings

TL;DR: This paper presents a simple but effective approach to WSD using nearest-neighbor classification on CWEs, and shows that the pre-trained BERT model is able to place polysemic words into distinct 'sense' regions of the embedding space, while ELMo and Flair NLP do not seem to possess this ability.
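
The "sense regions" claim can be probed in a few lines: embed the same surface form in contexts of two different senses and compare within-sense versus cross-sense cosine similarity. Model name and sentences below are illustrative; the HuggingFace transformers library is assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def vec(sentence: str, word: str) -> torch.Tensor:
    """Contextualized vector of the first occurrence of `word`'s first sub-token."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]
    first_sub = tok.convert_tokens_to_ids(tok.tokenize(word)[0])
    return hidden[enc.input_ids[0].tolist().index(first_sub)]

river1 = vec("they walked along the bank of the river", "bank")
river2 = vec("the boat drifted toward the muddy bank", "bank")
money  = vec("the bank approved the loan application", "bank")

cos = lambda a, b: torch.cosine_similarity(a, b, dim=0).item()
print(f"river/river: {cos(river1, river2):.2f}")  # expected to be higher ...
print(f"river/money: {cos(river1, money):.2f}")   # ... than the cross-sense pair
```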