
Showing papers in "IEEE Transactions on Acoustics, Speech, and Signal Processing in 2016"


Journal Article
TL;DR: In this paper, the authors propose a neural-network-based very low bit rate (VLBR) speech coding framework for end-to-end speech analysis and synthesis without HMMs, relying on a phonological (subphonetic) representation of speech.
Abstract: Most current very low bit rate (VLBR) speech coding systems use hidden Markov model (HMM) based speech recognition and synthesis techniques. This allows transmission of information such as phonemes segment by segment, which decreases the bit rate. However, an encoder based on phoneme speech recognition may create bursts of segmental errors, which are further propagated to any suprasegmental information coding, such as syllable coding. Together with voicing-detection errors in pitch parametrization, HMM-based speech coding leads to speech discontinuities and unnatural speech-sound artifacts. In this paper, we propose a novel VLBR speech coding framework based on neural networks (NNs) for end-to-end speech analysis and synthesis without HMMs. The framework relies on a phonological (subphonetic) representation of speech. It is designed as a composition of deep and spiking NNs: a bank of phonological analyzers at the transmitter and a phonological synthesizer at the receiver, both realized as deep NNs, along with a spiking NN serving as an incremental and robust encoder of syllable boundaries for coding of continuous fundamental frequency (F0). A combination of phonological features defines many more sound patterns than the phonetic features used by HMM-based speech coders; this finer analysis/synthesis code contributes to smoother encoded speech. Listeners significantly prefer the NN-based approach due to fewer discontinuities and speech artifacts in the encoded speech. A single forward pass is required during speech encoding and decoding. The proposed VLBR speech coder operates at a bit rate of approximately 360 bits/s.
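
The abstract describes a transmitter built from a bank of phonological analyzer DNNs and a receiver built from a phonological synthesizer DNN, chained in a single forward pass. The sketch below illustrates that composition only; the class inventory, layer sizes, input features, vocoder parameterization, and F0 handling are all assumptions for illustration and are not taken from the paper (the spiking-NN syllable-boundary encoder is omitted).

```python
# Minimal sketch (not the authors' implementation) of the analyzer-bank /
# synthesizer composition described in the abstract. Assumptions:
#   - 13 phonological classes (a common inventory; the paper's exact set is not given here),
#   - per-frame acoustic features of dimension N_ACOUSTIC at the analyzer input,
#   - vocoder parameters of dimension N_VOCODER at the synthesizer output.
import torch
import torch.nn as nn

N_ACOUSTIC = 39      # assumed analyzer input dimension (e.g., MFCCs + deltas)
N_PHONOLOGICAL = 13  # assumed number of phonological classes
N_VOCODER = 60       # assumed synthesizer output dimension (vocoder parameters)


class PhonologicalAnalyzer(nn.Module):
    """One DNN of the transmitter bank: per-frame posterior of one phonological class."""
    def __init__(self, n_in=N_ACOUSTIC, n_hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1), nn.Sigmoid(),
        )

    def forward(self, frames):           # frames: (T, N_ACOUSTIC)
        return self.net(frames)          # (T, 1) posterior per frame


class PhonologicalSynthesizer(nn.Module):
    """Receiver-side DNN: phonological posteriors (+ F0 track) -> vocoder parameters."""
    def __init__(self, n_in=N_PHONOLOGICAL + 1, n_hidden=512, n_out=N_VOCODER):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_out),
        )

    def forward(self, code):             # code: (T, N_PHONOLOGICAL + 1)
        return self.net(code)            # (T, N_VOCODER)


# Single forward pass through the whole chain, as emphasized in the abstract.
analyzers = [PhonologicalAnalyzer() for _ in range(N_PHONOLOGICAL)]
synthesizer = PhonologicalSynthesizer()

frames = torch.randn(200, N_ACOUSTIC)                            # 200 dummy frames
posteriors = torch.cat([a(frames) for a in analyzers], dim=1)    # (200, 13)
f0 = torch.zeros(200, 1)                                         # placeholder continuous F0 track
vocoder_params = synthesizer(torch.cat([posteriors, f0], dim=1))
print(vocoder_params.shape)                                      # torch.Size([200, 60])
```

In this sketch the quantization of the posteriors and F0 that would yield the ~360 bits/s stream is left out; only the DNN composition at the two ends of the channel is shown.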

4 citations