
Seongmin Park

Publications - 3
Citations - 4

Seongmin Park is an academic researcher. The author has contributed to research in the topics of Transformer (machine learning model) and Computer science. The author has an h-index of 1 and has co-authored 3 publications receiving 1 citation.

Papers
Posted Content

Finetuning Pretrained Transformers into Variational Autoencoders

Seongmin Park, +1 more
TL;DR: This paper proposes a simple two-phase training scheme that converts a sequence-to-sequence Transformer into a VAE through finetuning alone. The resulting model is competitive with massively pretrained Transformer-based VAEs on some internal metrics while falling short on others.

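For illustration, here is a minimal sketch of what such a two-phase scheme could look like. The class name, the mean-pooling of encoder states, and the use of a single beta weight on the KL term are assumptions made for exposition; this is not the authors' implementation.

# Hypothetical sketch of two-phase VAE finetuning (assumed structure, not the paper's code).
# Phase 1: finetune the pretrained seq2seq model as a plain autoencoder (beta = 0).
# Phase 2: turn on the KL term (beta > 0) so the bottleneck becomes a variational posterior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqVAE(nn.Module):
    def __init__(self, encoder, decoder, hidden_dim, latent_dim):
        super().__init__()
        self.encoder = encoder            # pretrained Transformer encoder: (B, T) -> (B, T, H)
        self.decoder = decoder            # pretrained Transformer decoder: (tgt, memory) -> (B, T, V)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.to_hidden = nn.Linear(latent_dim, hidden_dim)

    def forward(self, src, tgt):
        h = self.encoder(src).mean(dim=1)         # pool token states -> (B, H)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)      # reparameterization trick
        memory = self.to_hidden(z).unsqueeze(1)   # latent code replaces encoder memory
        logits = self.decoder(tgt, memory)
        return logits, mu, logvar

def vae_loss(logits, targets, mu, logvar, beta):
    # Reconstruction term plus beta-weighted KL to a standard normal prior.
    recon = F.cross_entropy(logits.transpose(1, 2), targets)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl                      # beta = 0 in phase 1, > 0 in phase 2

In practice, schemes like this often anneal beta upward during the second phase to avoid posterior collapse; how the paper actually schedules the KL weight is not asserted here.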
Posted Content

Improving Distinction between ASR Errors and Speech Disfluencies with Feature Space Interpolation

TL;DR: The authors propose a scheme that improves existing pretrained language models for ASR error detection, both in detection scores and in resilience to distracting auxiliary tasks, by adopting the popular mixup method in text feature space; the approach can be used with the output of any black-box ASR system.
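As a rough illustration of mixup applied in feature space, the sketch below interpolates encoder features and their soft labels within a batch. The function names, the alpha value, and the label scheme are hypothetical placeholders, not the authors' implementation.

# Hypothetical sketch of feature-space mixup (assumed details, not the paper's code).
import torch
import torch.nn.functional as F

def feature_space_mixup(features, labels, alpha=0.2):
    # features: (B, D) text embeddings from a pretrained LM
    # labels:   (B, C) one-hot targets (e.g., ASR error vs. disfluency vs. clean)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(features.size(0))
    mixed_x = lam * features + (1 - lam) * features[perm]
    mixed_y = lam * labels + (1 - lam) * labels[perm]
    return mixed_x, mixed_y

def mixup_loss(classifier, features, labels):
    # Classify the interpolated features against the interpolated (soft) labels.
    x, y = feature_space_mixup(features, labels)
    log_probs = F.log_softmax(classifier(x), dim=-1)
    return -(y * log_probs).sum(dim=-1).mean()

Because the interpolation happens on features rather than raw text, it remains applicable to the output of any black-box ASR system, consistent with the claim in the abstract.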