
Ryoichi Takashima

Researcher at Kobe University

Publications: 86
Citations: 932

Ryoichi Takashima is an academic researcher at Kobe University. He has contributed to research topics including computer science and microphones, has an h-index of 14, and has co-authored 71 publications receiving 746 citations. His previous affiliations include the National Institute of Information and Communications Technology and Hitachi.

Papers
Proceedings ArticleDOI

Voice Conversion in High-order Eigen Space Using Deep Belief Nets

TL;DR: This paper presents a voice conversion technique that uses Deep Belief Nets (DBNs) to build high-order eigen spaces of the source and target speakers, in which converting source speech to target speech is easier than in the traditional cepstrum space.
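The latent-space idea behind this approach can be sketched as follows. This is a minimal, untrained numpy illustration of the pipeline only (encode source features into a high-order latent space, map them, decode to target features); the layer sizes and random weights are placeholder assumptions, not the paper's DBN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: 24-dim cepstral frame mapped through a 64-dim latent space.
D, H = 24, 64
W_src = rng.normal(scale=0.1, size=(D, H))   # source-speaker encoder (untrained placeholder)
W_map = rng.normal(scale=0.1, size=(H, H))   # conversion in the high-order latent space
W_tgt = rng.normal(scale=0.1, size=(H, D))   # target-speaker decoder (untrained placeholder)

def convert(cepstrum_frame):
    """Encode a source cepstral frame, convert in latent space, decode to target."""
    h_src = sigmoid(cepstrum_frame @ W_src)  # project into the source latent space
    h_tgt = sigmoid(h_src @ W_map)           # latent-to-latent conversion
    return h_tgt @ W_tgt                     # decode to a target-speaker cepstral frame

frame = rng.normal(size=D)                   # one cepstral frame (stand-in data)
out = convert(frame)
print(out.shape)                             # (24,)
```

In the actual method the encoder and decoder would be DBNs pre-trained on each speaker's data; the sketch only shows where the conversion happens relative to the feature space.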
Proceedings ArticleDOI

Exemplar-based voice conversion in noisy environment

TL;DR: This paper proposes a voice conversion technique for noisy environments in which parallel exemplars are introduced to encode the source speech signal and synthesize the target speech signal; its effectiveness is confirmed by comparison with a conventional Gaussian Mixture Model (GMM)-based method.
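The exemplar-based idea can be sketched with non-negative activations over parallel dictionaries: a source frame is expressed as a weighted sum of source exemplars, and the same weights are applied to the paired target exemplars. This is a toy sketch with random stand-in data and a generic KL-NMF-style multiplicative update, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Parallel exemplar dictionaries (stand-in random data): columns are exemplars.
F, K = 16, 40                               # feature dimension, number of exemplars
A_src = np.abs(rng.normal(size=(F, K)))     # source-speaker exemplars
A_tgt = np.abs(rng.normal(size=(F, K)))     # paired target-speaker exemplars

def activations(x, A, n_iter=100):
    """Estimate non-negative activations h with x ~ A @ h (multiplicative updates)."""
    h = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        approx = A @ h + 1e-9
        h *= (A.T @ (x / approx)) / (A.sum(axis=0) + 1e-9)
    return h

x_src = np.abs(rng.normal(size=F))          # one source magnitude-spectrum frame
h = activations(x_src, A_src)               # how strongly each exemplar is active
x_tgt = A_tgt @ h                           # rebuild the frame from target exemplars
print(x_tgt.shape)                          # (16,)
```

Because the activations are estimated on noise-robust exemplars, this style of conversion can cope with noisy input better than a statistical mapping trained on clean features.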
Journal ArticleDOI

GMM-Based Emotional Voice Conversion Using Spectrum and Prosody Features

TL;DR: Both prosody and voice-quality features are used to convert a neutral voice into an emotional voice, yielding more expressive voices than conventional methods that convert prosody or spectrum alone.
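The GMM mapping underlying this family of methods can be sketched in one dimension: a joint GMM over paired (source, target) features gives a posterior-weighted, per-mixture linear regression. The two-mixture parameters below are invented for illustration only:

```python
import numpy as np

# Hypothetical 2-mixture joint GMM over (source, target) features, 1-D for clarity.
weights = np.array([0.5, 0.5])
mu_x = np.array([-1.0, 1.0]);  mu_y = np.array([-2.0, 2.0])   # per-mixture means
var_x = np.array([1.0, 1.0]);  cov_xy = np.array([0.8, 0.8])  # per-mixture (co)variances

def gmm_convert(x):
    """Classic GMM mapping: posterior-weighted per-mixture linear regression."""
    lik = weights * np.exp(-0.5 * (x - mu_x) ** 2 / var_x) / np.sqrt(2 * np.pi * var_x)
    post = lik / lik.sum()                                    # mixture posteriors p(m | x)
    return float(np.sum(post * (mu_y + cov_xy / var_x * (x - mu_x))))

y = gmm_convert(1.0)
print(y)
```

In the paper this mapping is applied to both spectral and prosodic feature streams, which is what makes the converted voice more expressive than converting either stream alone.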
Proceedings ArticleDOI

An Investigation of a Knowledge Distillation Method for CTC Acoustic Models

TL;DR: To improve the performance of unidirectional RNN-based CTC, which is suitable for real-time processing, knowledge distillation (KD)-based model compression for training a CTC acoustic model is investigated, and both a frame-level and a sequence-level KD method are evaluated.
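The frame-level variant can be sketched as a per-frame KL divergence between the teacher's and student's output distributions over the CTC label set. This is a generic numpy illustration with random stand-in logits, not the paper's training setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z, tau=1.0):
    z = z / tau
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def frame_level_kd_loss(teacher_logits, student_logits, tau=1.0):
    """Frame-wise KL(teacher || student), averaged over frames."""
    p = softmax(teacher_logits, tau)        # e.g. bidirectional teacher posteriors
    q = softmax(student_logits, tau)        # e.g. unidirectional student posteriors
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl))

T, V = 50, 30                               # frames, CTC labels (including blank)
teacher = rng.normal(size=(T, V))           # stand-in teacher logits
student = rng.normal(size=(T, V))           # stand-in student logits
loss = frame_level_kd_loss(teacher, student)
print(loss >= 0.0)                          # KL divergence is non-negative
```

Sequence-level KD instead matches the teacher's hypothesis sequences rather than per-frame posteriors, which sidesteps the frame-alignment mismatch between CTC models.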
Journal ArticleDOI

Speech intonation in children with autism spectrum disorder

TL;DR: Using a new quantitative acoustic analysis, the speech intonation of children with ASD was compared with that of children with typical development, and the extent of monotonous speech was found to be related to the extent of social reciprocal interaction.