
Yu-Hsuan Wang

Researcher at National Taiwan University

Publications - 4
Citations - 113

Yu-Hsuan Wang is an academic researcher from National Taiwan University. The author has contributed to research in the topics Autoencoder and Recurrent neural network, has an h-index of 4, and has co-authored 4 publications receiving 91 citations.

Papers
Proceedings ArticleDOI

Segmental Audio Word2Vec: Representing Utterances as Sequences of Vectors with Applications in Spoken Term Detection

TL;DR: In this article, a new segmental audio Word2Vec was proposed, in which unsupervised spoken word boundary segmentation and audio Word2Vec are jointly learned and mutually enhanced, so an utterance can be directly represented as a sequence of vectors carrying phonetic structure information.
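
The core idea summarized above can be illustrated with a minimal PyTorch sketch: an utterance is cut at estimated word boundaries and each segment is summarized by an RNN encoder into one embedding, so the whole utterance becomes a sequence of vectors. The class name, dimensions, and the externally supplied boundaries below are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the segmental audio word2vec idea: segment an
# utterance at (estimated) word boundaries and encode each segment
# into one vector with a GRU encoder. Illustrative only.
import torch
import torch.nn as nn


class SegmentEncoder(nn.Module):
    def __init__(self, feat_dim=39, embed_dim=100):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, embed_dim, batch_first=True)

    def forward(self, frames, boundaries):
        """frames: (T, feat_dim) acoustic features of one utterance.
        boundaries: sorted frame indices marking segment ends."""
        vectors, start = [], 0
        for end in boundaries:
            segment = frames[start:end].unsqueeze(0)      # (1, len, feat_dim)
            _, h = self.rnn(segment)                      # final hidden state
            vectors.append(h.squeeze(0).squeeze(0))       # (embed_dim,)
            start = end
        return torch.stack(vectors)                       # (num_segments, embed_dim)


# Toy usage: a 50-frame utterance with hypothetical boundaries.
encoder = SegmentEncoder()
utterance = torch.randn(50, 39)
print(encoder(utterance, boundaries=[12, 30, 50]).shape)  # torch.Size([3, 100])
```

In the paper the segmentation itself is learned jointly with the encoder rather than given as input; the sketch only shows how an utterance becomes a sequence of segment vectors once boundaries are available.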
Posted Content

Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries

TL;DR: The temporal structure of gate activation signals inside gated recurrent neural networks is highly correlated with phoneme boundaries, and this correlation is further verified by a set of experiments on phoneme segmentation.
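
A rough sense of this analysis can be given with a short, hedged PyTorch sketch: the update-gate activations of a single-layer GRU are averaged across hidden dimensions into one temporal signal, and its local minima are taken as candidate phoneme boundaries. The function name, hyperparameters, and the peak-picking rule are illustrative assumptions, not the papers' exact procedure.

```python
# Sketch: compute a GRU's update-gate activation at every time step,
# average across hidden units to get one "gate activation signal",
# and pick its local minima as candidate boundaries. Illustrative only.
import torch
import torch.nn as nn


def update_gate_signal(gru: nn.GRU, frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, input_dim). Returns (T,) mean update-gate activations."""
    w_ih = gru.weight_ih_l0   # (3*H, input_dim), PyTorch order: reset, update, new
    w_hh = gru.weight_hh_l0   # (3*H, H)
    b_ih, b_hh = gru.bias_ih_l0, gru.bias_hh_l0
    hidden = gru.hidden_size
    h = torch.zeros(hidden)
    signal = []
    for x in frames:
        gi = w_ih @ x + b_ih
        gh = w_hh @ h + b_hh
        r = torch.sigmoid(gi[:hidden] + gh[:hidden])                      # reset gate
        z = torch.sigmoid(gi[hidden:2 * hidden] + gh[hidden:2 * hidden])  # update gate
        n = torch.tanh(gi[2 * hidden:] + r * gh[2 * hidden:])             # candidate state
        h = (1 - z) * n + z * h
        signal.append(z.mean())
    return torch.stack(signal)


# Toy usage: treat local minima of the signal as boundary candidates.
gru = nn.GRU(input_size=39, hidden_size=64)
sig = update_gate_signal(gru, torch.randn(200, 39))
candidates = [t for t in range(1, 199) if sig[t] < sig[t - 1] and sig[t] < sig[t + 1]]
print(len(candidates))
```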
Proceedings ArticleDOI

Gate Activation Signal Analysis for Gated Recurrent Neural Networks and its Correlation with Phoneme Boundaries

TL;DR: In this paper, the gate activation signals inside gated recurrent neural networks were analyzed, and the temporal structure of such signals was found to be highly correlated with phoneme boundaries.
Posted Content

Segmental Audio Word2Vec: Representing Utterances as Sequences of Vectors with Applications in Spoken Term Detection

TL;DR: This paper proposes a new segmental audio Word2Vec, in which unsupervised spoken word boundary segmentation and audio Word2Vec are jointly learned and mutually enhanced, so an utterance can be directly represented as a sequence of vectors carrying phonetic structure information.