
Artem Shelmanov

Researcher at Skolkovo Institute of Science and Technology

Publications -  37
Citations -  227

Artem Shelmanov is an academic researcher from the Skolkovo Institute of Science and Technology. The author has contributed to research on topics including computer science and information extraction. The author has an h-index of 7 and has co-authored 24 publications receiving 100 citations. Previous affiliations of Artem Shelmanov include the Russian Academy of Sciences.

Papers
Posted Content

Neural Entity Linking: A Survey of Models Based on Deep Learning

TL;DR: This work distills a generic architecture of a neural EL system and discusses its components, such as candidate generation, mention-context encoding, and entity ranking, summarizing prominent methods for each of them.
Proceedings Article

Active Learning with Deep Pre-trained Models for Sequence Tagging of Clinical and Biomedical Texts

TL;DR: An annotation tool empowered with active learning and deep pre-trained models, which can be used for entity annotation directly from the Jupyter IDE, is proposed. A modification to the standard uncertainty sampling strategy is also suggested and shown to be beneficial for annotating very skewed datasets.
Proceedings Article

Semantic Role Labeling with Pretrained Language Models for Known and Unknown Predicates

TL;DR: The first full pipeline for semantic role labeling of Russian texts is built, and it is shown that embeddings generated by deep pretrained language models are superior to classical shallow embeddings for argument classification of both “known” and “unknown” predicates.
Book Chapter

Exactus Like: Plagiarism Detection in Scientific Texts

TL;DR: An overview of Exactus Like, a plagiarism detection system that uses deep parsing for text alignment to find moderate forms of disguised plagiarism, is presented.
Proceedings Article

Uncertainty Estimation of Transformer Predictions for Misclassification Detection

TL;DR: A vast empirical investigation of state-of-the-art uncertainty estimation (UE) methods for Transformer models on misclassification detection in named entity recognition and text classification tasks is presented, and two computationally efficient modifications are proposed, one of which approaches or even outperforms computationally intensive methods.