Open Access · Journal ArticleDOI

Transformation of formants for voice conversion using artificial neural networks

TLDR
A scheme for developing a voice conversion system that converts the speech signal uttered by a source speaker to a speech signal having the voice characteristics of the target speaker using formants and a formant vocoder is proposed.
About
This article was published in Speech Communication on 1995-02-01 and is currently open access. It has received 207 citations to date. The article focuses on the topics: Formant & Voice analysis.


Citations
Proceedings ArticleDOI

Voice conversion using conditional restricted Boltzmann machine

TL;DR: Experimental results show that short-term temporal structure can be modeled well by the CRBM, and that the proposed method significantly outperforms the conventional joint-density Gaussian mixture model based method.
Proceedings ArticleDOI

A full band adaptive Harmonic Model based Speaker Identity Transformation using Radial Basis Function

TL;DR: The results reveal that the a-HM feature based speaker transformation performs markedly better than the state-of-the-art technique.
Dissertation

Silent Communication: whispered speech-to-clear speech conversion

Viet-Anh Tran
TL;DR: This research thesis investigated the technique using a phonetic pivot by combining Hidden Markov Model (HMM)-based speech recognition and HMM-based speech synthesis techniques to convert whispered speech data to audible one in order to compare the performance of the two state-of-the-art approaches.
Proceedings ArticleDOI

Voice Transformation Using Two-Level Dynamic Warping

TL;DR: Voice transformation, for example, from a male speaker to a female speaker, is achieved here using a two-level dynamic warping process, which spectrally aligns blocks of speech based on magnitude spectra (dynamic frequency warp).
Proceedings ArticleDOI

Voice conversion using coefficient mapping and neural network

TL;DR: An improved model that uses both linear predictive coding (LPC) and line spectral frequency (LSF) coefficients to parametrize the source speech signal was developed in this work to reveal the effect of over-smoothing.
References
Journal ArticleDOI

Multilayer feedforward networks are universal approximators

TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
Book

Fundamentals of speech recognition

TL;DR: This book presents a meta-modelling framework for speech recognition that automates the labor-intensive, and therefore time-consuming and expensive, process of manually modeling speech.
Journal ArticleDOI

Analysis, synthesis, and perception of voice quality variations among female and male talkers

TL;DR: Perceptual validation of the relative importance of acoustic cues for signaling a breathy voice quality has been accomplished using a new voicing source model for synthesis of more natural male and female voices.
Journal ArticleDOI

Speech analysis and synthesis by linear prediction of the speech wave.

TL;DR: Application of this method for efficient transmission and storage of speech signals as well as procedures for determining other speechcharacteristics, such as formant frequencies and bandwidths, the spectral envelope, and the autocorrelation function, are discussed.
Proceedings ArticleDOI

Voice conversion through vector quantization

TL;DR: The authors propose a new voice conversion technique through vector quantization and spectrum mapping which makes it possible to precisely control voice individuality.
Frequently Asked Questions (8)
Q1. What contributions have the authors mentioned in the paper "Transformation of formants for voice conversion using artificial neural networks" ?

In this paper the authors propose a scheme for developing a voice conversion system that converts the speech signal uttered by a source speaker to a speech signal having the voice characteristics of the target speaker. The scheme consists of a formant analysis phase, followed by a learning phase in which the implicit formant transformation is captured by a neural network. 

In this paper the authors train a neural network to learn a transformation function that maps the speaker-dependent parameters extracted from the speech of the source speaker to those of the target speaker. 

But in continuous speech, since the vocal tract changes its shape continuously, the extracted formants will have many transitions. 
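Since only steady regions yield reliable formant pairs, one simple way to locate them is to threshold the frame-to-frame movement of a formant track. The sketch below is illustrative only, not the paper's procedure; the deviation threshold and minimum run length are assumed values:

```python
import numpy as np

def steady_regions(formant_track, max_dev=30.0, min_len=5):
    """Return (start, end) frame indices (inclusive) of runs where a
    formant track stays nearly constant.

    formant_track : per-frame formant frequency estimates (Hz)
    max_dev       : assumed threshold on frame-to-frame change (Hz)
    min_len       : assumed minimum run length (frames) to count as steady
    """
    f = np.asarray(formant_track, dtype=float)
    stable = np.abs(np.diff(f)) < max_dev      # True where the track barely moves
    regions, start = [], None
    for i, s in enumerate(stable):
        if s and start is None:
            start = i                          # a steady run begins
        elif not s and start is not None:
            if i - start + 1 >= min_len:       # frames start..i are steady
                regions.append((start, i))
            start = None
    if start is not None and len(f) - start >= min_len:
        regions.append((start, len(f) - 1))    # run extends to the last frame
    return regions
```

Transition frames (where the vocal tract is moving between configurations) fall below the threshold test and are simply excluded.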

Fant’s model (Fant, 1986) was used to excite the formant synthesizer for voiced frames and random noise for the case of unvoiced frames. 
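As a rough illustration of this synthesis step, the sketch below drives a cascade of second-order formant resonators with either an impulse train for voiced frames (only a crude stand-in for Fant's glottal model) or white noise for unvoiced frames; all frequencies, bandwidths, and frame sizes are assumed values, not the paper's:

```python
import numpy as np

def resonator(signal, freq, bw, fs):
    """Second-order digital resonator tuned to one formant (freq, bw in Hz)."""
    r = np.exp(-np.pi * bw / fs)            # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs           # pole angle from centre frequency
    a1, a2 = 2 * r * np.cos(theta), -r * r
    g = 1 - a1 - a2                         # crude normalisation: unity gain at DC
    out = np.zeros_like(signal, dtype=float)
    for n in range(len(signal)):
        out[n] = g * signal[n] + a1 * out[n - 1] + a2 * out[n - 2]
    return out

def synthesize_frame(formants, bandwidths, voiced, f0=120.0, fs=8000, n=400, rng=None):
    """One frame of formant synthesis: impulse-train (voiced) or
    white-noise (unvoiced) excitation through a resonator cascade."""
    if rng is None:
        rng = np.random.default_rng(0)
    if voiced:
        excitation = np.zeros(n)
        excitation[::int(fs / f0)] = 1.0    # impulse train at the pitch period
    else:
        excitation = rng.standard_normal(n)
    out = excitation
    for f, b in zip(formants, bandwidths):
        out = resonator(out, f, b, fs)      # cascade: one resonator per formant
    return out
```

With formants at, say, 730/1090/2440 Hz, the spectrum of a voiced frame shows harmonics of f0 shaped by the three resonances.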

The first three formants from these two corresponding steady voiced regions are used as a pair of input and output formant vectors to a neural network. 
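A minimal sketch of such a mapping network, using synthetic paired formant data in place of real speaker recordings; the assumed linear source-to-target relation, the hidden-layer size, and the training settings are placeholders, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired data: first three formants (Hz) from steady voiced
# regions of source and target speakers. The linear relation below is an
# assumption made only to generate toy targets.
src = rng.uniform([300, 900, 2200], [800, 1500, 3000], size=(200, 3))
tgt = src * np.array([1.15, 1.10, 1.05]) + 40.0

# normalise so the tanh hidden units operate in a sensible range
mu_s, sd_s = src.mean(0), src.std(0)
mu_t, sd_t = tgt.mean(0), tgt.std(0)
X, Y = (src - mu_s) / sd_s, (tgt - mu_t) / sd_t

# one hidden layer of tanh units, trained by batch gradient descent
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)
lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    E = H @ W2 + b2 - Y                # prediction error (normalised units)
    gW2, gb2 = H.T @ E / len(X), E.mean(0)
    dH = (E @ W2.T) * (1 - H ** 2)     # backprop through tanh
    gW1, gb1 = X.T @ dH / len(X), dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def transform(formants):
    """Map a source-speaker formant vector into the target speaker's space."""
    x = (np.asarray(formants) - mu_s) / sd_s
    return (np.tanh(x @ W1 + b1) @ W2 + b2) * sd_t + mu_t
```

After training, `transform` plays the role of the learned formant transformation applied frame by frame before resynthesis.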

Prosodic modifications were incorporated in the excitation signal using the PSOLA (Pitch Synchronous Overlap Add) technique, and speech was synthesized using the transformed spectral parameters. 
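A bare-bones time-domain PSOLA sketch, assuming pitch marks are already available; real systems also need epoch detection and duration compensation, which are omitted here:

```python
import numpy as np

def psola(x, marks, alpha):
    """Pitch-scale x by `alpha` (alpha > 1 raises F0): Hann-windowed,
    two-period segments taken around analysis pitch marks are re-placed
    at synthesis marks whose spacing is compressed by 1/alpha."""
    marks = np.asarray(marks)
    T = int(np.median(np.diff(marks)))          # nominal pitch period (samples)
    out = np.zeros(len(x))
    # synthesis marks span the same interval with spacing scaled by 1/alpha
    step = (marks[-1] - marks[0]) / (alpha * (len(marks) - 1))
    for t in np.arange(marks[0], marks[-1], step):
        i = int(np.argmin(np.abs(marks - t)))   # nearest analysis mark
        lo, hi = marks[i] - T, marks[i] + T
        if lo < 0 or hi > len(x):
            continue                            # skip marks too close to an edge
        seg = x[lo:hi] * np.hanning(hi - lo)    # two-period windowed segment
        c = int(round(t))
        a, b = max(c - T, 0), min(c + T, len(x))
        out[a:b] += seg[a - (c - T):b - (c - T)]
    return out
```

Raising alpha packs the overlap-added segments closer together, shortening the output pitch period without altering the spectral envelope of each segment.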

In the present study suprasegmental features of the source speaker are retained, while using the transformed vocal tract parameters for synthesis. 

They are (1) identification of speaker characteristics, i.e. acquisition of speaker-dependent knowledge, in the analysis phase, and (2) incorporation of this speaker-specific knowledge during synthesis in the transformation phase. 
