
Seunghyun Yoon

Researcher at Seoul National University

Publications: 62
Citations: 838

Seunghyun Yoon is an academic researcher at Seoul National University whose work centers on computer science and language models. The author has an h-index of 11, having co-authored 48 publications that have received 452 citations. Previous affiliations of Seunghyun Yoon include Samsung.

Papers
Proceedings ArticleDOI

Multimodal Speech Emotion Recognition Using Audio and Text

TL;DR: A deep dual recurrent encoder model is proposed that uses text data and audio signals simultaneously to obtain a better understanding of speech. Applied to the IEMOCAP dataset, the model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (angry, happy, sad, and neutral), with accuracies ranging from 68.8% to 71.8%.
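The dual-encoder idea above can be sketched as two separate sequence encoders, one per modality, whose final states are concatenated and classified. This is a minimal numpy illustration, not the paper's implementation: the dimensions, the plain tanh RNN cells (the paper uses trained recurrent encoders), and the random weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_encode(x, W, U, b):
    """Run a simple tanh RNN over a sequence and return the final hidden state."""
    h = np.zeros(W.shape[0])
    for t in range(x.shape[0]):
        h = np.tanh(W @ h + U @ x[t] + b)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical dimensions: audio frame features and text word embeddings
audio_dim, text_dim, hidden, n_classes = 40, 50, 16, 4  # angry/happy/sad/neutral

# Randomly initialized parameters stand in for trained weights
Wa, Ua, ba = rng.normal(size=(hidden, hidden)), rng.normal(size=(hidden, audio_dim)), np.zeros(hidden)
Wt, Ut, bt = rng.normal(size=(hidden, hidden)), rng.normal(size=(hidden, text_dim)), np.zeros(hidden)
Wc = rng.normal(size=(n_classes, 2 * hidden))  # classifier over the concatenated states

audio_seq = rng.normal(size=(100, audio_dim))  # one utterance's audio frames
text_seq = rng.normal(size=(12, text_dim))     # the transcript's word embeddings

# Encode each modality separately, then fuse by concatenation and classify
h_audio = rnn_encode(audio_seq, Wa, Ua, ba)
h_text = rnn_encode(text_seq, Wt, Ut, bt)
probs = softmax(Wc @ np.concatenate([h_audio, h_text]))  # distribution over 4 emotions
```

The key design point is late fusion: each modality keeps its own encoder, so audio frame rates and text lengths never need to be aligned before the final classification layer.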
Proceedings ArticleDOI

Speech Emotion Recognition Using Multi-hop Attention Mechanism

TL;DR: A framework is proposed that exploits acoustic information in tandem with lexical data, using two bi-directional long short-term memory (BLSTM) networks to obtain hidden representations of the utterance, together with an attention mechanism, referred to as multi-hop, that is trained to automatically infer the correlation between the modalities.
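The multi-hop mechanism can be sketched as alternating cross-modal attention: a summary of one modality queries the other modality's hidden states, and the resulting context becomes the query for the next hop. The sketch below is an assumption-laden illustration with dot-product attention and random states standing in for BLSTM outputs; the paper's exact scoring function and number of hops are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(query, keys):
    """Dot-product attention: weight each time step of `keys` by similarity to `query`."""
    weights = softmax(keys @ query)
    return weights @ keys, weights

hidden = 8
H_audio = rng.normal(size=(50, hidden))  # stand-in for audio BLSTM states (50 frames)
H_text = rng.normal(size=(12, hidden))   # stand-in for text BLSTM states (12 words)

# Hop 1: a text summary queries the audio states
query = H_text.mean(axis=0)
ctx_audio, w1 = attend(query, H_audio)

# Hop 2: the audio context queries the text states, refining the cross-modal correlation
ctx_text, w2 = attend(ctx_audio, H_text)

fused = np.concatenate([ctx_audio, ctx_text])  # representation fed to the emotion classifier
```

Each hop lets information from one modality sharpen the attention over the other, which is what "automatically inferring the correlation between the modalities" amounts to in practice.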
Proceedings ArticleDOI

Comparative Studies of Detecting Abusive Language on Twitter

TL;DR: The authors conduct a comparative study of various learning models for detecting hate and abusive speech on Twitter and discuss the possibility of using additional features and context data to improve performance, concluding that bidirectional GRU networks trained on word-level features, with Latent Topic Clustering modules, are the most accurate model, scoring 0.805 F1.
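A Latent Topic Clustering module of the kind mentioned above can be sketched as a soft assignment of a sentence encoding to learned latent topic vectors, with the probability-weighted topic summary appended to the representation. The centroid count, dimensions, and random vectors below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

hidden, n_clusters = 8, 5
centroids = rng.normal(size=(n_clusters, hidden))  # stand-in for learned latent topic vectors

def latent_topic_cluster(h):
    """Soft-assign a sentence encoding to latent topics and append the topic summary."""
    p = softmax(centroids @ h)          # soft cluster-assignment probabilities
    topic = p @ centroids               # probability-weighted topic vector
    return np.concatenate([h, topic])   # enriched feature vector for the classifier

h = rng.normal(size=hidden)  # e.g. the final state of a bidirectional GRU over a tweet
enriched = latent_topic_cluster(h)
```

Because the centroids are trained jointly with the classifier, the module injects corpus-level topical information into each individual prediction.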
Proceedings ArticleDOI

A Compare-Aggregate Model with Latent Clustering for Answer Selection

TL;DR: A novel method is proposed for the sentence-level answer-selection task, a fundamental problem in natural language processing, that adopts a pretrained language model and introduces a novel latent clustering method to compute additional information within the target corpus.
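The compare-aggregate family of models named in the title can be sketched in three steps: softly align each candidate-answer word with the question, compare the word with its aligned context element-wise, and aggregate the comparisons into a score. The sketch below is a generic illustration under assumed dimensions and a trivial pooling scorer; it is not the paper's architecture, which additionally uses a pretrained language model and the latent clustering module.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax_rows(Z):
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

dim = 6
Q = rng.normal(size=(5, dim))   # question word vectors (5 words)
A = rng.normal(size=(9, dim))   # candidate-answer word vectors (9 words)

# Attention: softly align each answer word with the question
align = softmax_rows(A @ Q.T) @ Q           # (9, dim) aligned question context

# Compare: element-wise interaction between each answer word and its aligned context
compared = A * align                        # (9, dim)

# Aggregate: pool over the answer words and score the candidate
score = float(compared.max(axis=0).sum())   # a simple stand-in for the learned scorer
```

Ranking candidates by such a score, rather than encoding each sentence in isolation, is what distinguishes compare-aggregate models in answer selection.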