
Nick Campbell

Researcher at Trinity College, Dublin

Publications -  217
Citations -  4970

Nick Campbell is an academic researcher from Trinity College, Dublin. He has contributed to research in the topics of Speech synthesis & Conversation, has an h-index of 34, and has co-authored 217 publications receiving 4700 citations. Previous affiliations of Nick Campbell include the Nara Institute of Science and Technology and the University of Glasgow.

Papers
Journal Article (DOI)

Emotional speech: towards a new generation of databases

TL;DR: The paper shows how the challenge of developing appropriate databases is being addressed in three major recent projects: the Reading-Leeds project, the Belfast project, and the CREST-ESP project, and indicates future directions for the development of emotional speech databases.
Patent (DOI)

Concatenation of speech segments by use of a speech synthesizer

TL;DR: In this article, a speech-unit selector searches for the combination of phoneme candidates that corresponds to the phoneme sequence of an input sentence and minimizes a cost comprising a target cost (approximating the mismatch between a target phoneme and a phoneme candidate) and a concatenation cost (approximating the cost of joining adjacent candidates), and outputs index information for the selected combination of candidates.
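
The cost structure described above lends itself to a dynamic-programming search. Below is a minimal, hypothetical Python sketch of that idea: candidate units per target phoneme, a target cost, a concatenation cost, and a Viterbi-style search for the cheapest path. The function names, cost signatures, and data structures are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of cost-driven unit selection: for each target phoneme we
# have several candidate units from the database, and a Viterbi-style dynamic
# programme finds the candidate sequence minimising target + concatenation cost.

def select_units(targets, candidates, target_cost, concat_cost):
    """targets: list of target phoneme specifications.
    candidates: per-target lists of candidate units from the database.
    target_cost(t, c): mismatch between a target spec and a candidate.
    concat_cost(prev, c): cost of joining two adjacent candidates."""
    # best[i][j] = (cumulative cost, back-pointer) for candidate j of target i
    best = [[(target_cost(targets[0], c), None) for c in candidates[0]]]
    for i in range(1, len(targets)):
        row = []
        for c in candidates[i]:
            tc = target_cost(targets[i], c)
            cost, back = min(
                (best[i - 1][k][0] + concat_cost(prev, c) + tc, k)
                for k, prev in enumerate(candidates[i - 1])
            )
            row.append((cost, back))
        best.append(row)
    # trace the cheapest path back from the final target
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = []
    for i in range(len(targets) - 1, -1, -1):
        path.append(candidates[i][j])
        j = best[i][j][1]
    return list(reversed(path))

# toy usage with scalar "features": pick units whose pitch is close to the target
# and whose neighbours join smoothly
units = select_units(
    targets=[100, 120, 110],
    candidates=[[95, 130], [118, 140], [105, 200]],
    target_cost=lambda t, c: abs(t - c),
    concat_cost=lambda a, b: abs(a - b) * 0.1,
)
print(units)  # -> [95, 118, 105]
```

The search runs in O(n·k²) time for n target phonemes and k candidates per phoneme, which is one reason practical unit-selection synthesizers prune candidate lists before this step.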
Proceedings Article

Optimising selection of units from speech databases for concatenative synthesis.

TL;DR: This paper presents a general method for unit selection in speech synthesis and addresses the problem of how to select between the many instances of units in the database.
Journal Article (DOI)

A corpus-based speech synthesis system with emotion

TL;DR: The results show that the proposed method can synthesize speech with high intelligibility and a favorable impression; the authors have developed a workable text-to-speech system with emotion to support the immediate needs of nonspeaking individuals.
Proceedings Article (DOI)

Doubly-Attentive Decoder for Multi-modal Neural Machine Translation

TL;DR: The authors introduce a doubly-attentive decoder that attends to source-language words and parts of an image independently, by means of two separate attention mechanisms, as it generates words in the target language.
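
As a rough illustration of the two-attention idea, here is a schematic Python/NumPy sketch of a single decoding step: the decoder state queries the source-word annotations and the image-region features through two independent attention mechanisms, and both context vectors feed into the next prediction. Dot-product scoring and concatenation fusion are simplifying assumptions here, not necessarily the formulation used in the paper.

```python
# Schematic NumPy sketch of a doubly-attentive decoding step: the decoder state
# attends independently over source-word annotations and image-region features
# through two separate attention mechanisms, and both contexts feed the output.
# Dot-product scoring and concatenation fusion are simplifying assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys):
    """Dot-product attention: weight each key vector by its similarity to the query."""
    weights = softmax(keys @ query)   # one weight per key, summing to 1
    return weights @ keys             # weighted sum of the keys (context vector)

def doubly_attentive_step(dec_state, src_annotations, img_regions):
    """One decoder step with two independent attention mechanisms."""
    text_ctx = attend(dec_state, src_annotations)  # attention over source words
    img_ctx = attend(dec_state, img_regions)       # attention over image regions
    # fuse the decoder state with both context vectors (here: concatenation)
    return np.concatenate([dec_state, text_ctx, img_ctx])

# toy example: 5 encoded source words and 49 image regions, all 8-dimensional
rng = np.random.default_rng(0)
fused = doubly_attentive_step(rng.standard_normal(8),
                              rng.standard_normal((5, 8)),
                              rng.standard_normal((49, 8)))
print(fused.shape)  # (24,)
```

Keeping the two attention mechanisms separate lets the decoder weight textual and visual evidence differently at each target word, which is the core point the summary describes.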