Proceedings ArticleDOI

User modeling for spoken dialogue system evaluation

TL;DR: Using stochastic modeling of real users, the authors can both debug and evaluate a speech dialogue system while it is still in the lab, substantially reducing the amount of field testing with real users.
Abstract
Automatic speech dialogue systems are becoming common. In order to assess their performance, a large sample of real dialogues has to be collected and evaluated. This process is expensive, labor intensive, and prone to errors. To alleviate this situation, we propose a user simulation to conduct dialogues with the system under investigation. Using stochastic modeling of real users, we can both debug and evaluate a speech dialogue system while it is still in the lab, thus substantially reducing the amount of field testing with real users.
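The idea sketched in the abstract — replacing a human tester with a stochastic user model that responds to system actions according to learned probabilities — can be illustrated with a minimal bigram-style simulator. The action names, probabilities, and toy dialogue policy below are invented for illustration and are not taken from the paper:

```python
import random

# Illustrative bigram-style user model: the next user action depends
# only on the system's last action.  These actions and probabilities
# are hypothetical; a real model would be estimated from dialogue logs.
USER_MODEL = {
    "greet":    [("provide_info", 0.7), ("ask_help", 0.3)],
    "ask_slot": [("provide_info", 0.8), ("silence", 0.2)],
    "confirm":  [("yes", 0.6), ("no", 0.4)],
}

def sample_user_action(system_action, rng):
    """Draw a simulated user response to the given system action."""
    actions, weights = zip(*USER_MODEL[system_action])
    return rng.choices(actions, weights=weights, k=1)[0]

def toy_policy(user_action):
    """A trivial hand-written dialogue manager standing in for the
    system under test."""
    if user_action == "provide_info":
        return "confirm"
    if user_action == "yes":
        return "close"
    return "ask_slot"

def run_dialogue(policy, rng, max_turns=20):
    """Simulate one dialogue; return the number of turns taken."""
    system_action = "greet"
    for turn in range(1, max_turns + 1):
        user_action = sample_user_action(system_action, rng)
        system_action = policy(user_action)
        if system_action == "close":
            return turn
    return max_turns

rng = random.Random(0)
lengths = [run_dialogue(toy_policy, rng) for _ in range(1000)]
print(sum(lengths) / len(lengths))  # average simulated dialogue length
```

Running thousands of such simulated dialogues yields statistics (dialogue length, task-completion rate, dead-end states) that can expose design flaws before any field test with real users, which is the evaluation workflow the paper proposes.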


Citations
Journal ArticleDOI

POMDP-Based Statistical Spoken Dialog Systems: A Review

TL;DR: This review article provides an overview of the current state of the art in the development of POMDP-based spoken dialog systems.
Journal ArticleDOI

A stochastic model of human-machine interaction for learning dialog strategies

TL;DR: The experimental results show that it is indeed possible to find a simple criterion, a state space representation, and a simulated user parameterization in order to automatically learn a relatively complex dialog behavior, similar to one that was heuristically designed by several research groups.
Posted Content

Neural Approaches to Conversational AI

TL;DR: In this article, the authors present a survey of state-of-the-art neural approaches to conversational AI, and discuss the progress that has been made and challenges still being faced, using specific systems and models as case studies.
Journal ArticleDOI

A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies

TL;DR: The role of the dialogue manager in a spoken dialogue system is summarized, a short introduction to reinforcement-learning of dialogue management strategies is given, the literature on user modelling for simulation-based strategy learning is reviewed and recent work on user model evaluation is described.
Proceedings ArticleDOI

Neural Approaches to Conversational AI

TL;DR: This tutorial surveys neural approaches to conversational AI that were developed in the last few years, and presents a review of state-of-the-art neural approaches, drawing the connection between neural approaches and traditional symbolic approaches.
References
Book

Speech Acts: An Essay in the Philosophy of Language

TL;DR: A theory of speech acts is proposed in this book: not a theory of language as such, but an account of the structure of illocutionary speech acts.
Journal ArticleDOI

Speech Acts: An Essay in the Philosophy of Language

Alice Koller, +1 more
01 Mar 1970
TL;DR: A theory of speech acts is proposed in this paper: not a theory of language as such, but an account of the structure of illocutionary speech acts.
Journal ArticleDOI

How may I help you?

TL;DR: This paper focuses on the task of automatically routing telephone calls based on a user's fluently spoken response to the open-ended prompt "How may I help you?".
Proceedings ArticleDOI

PARADISE: A Framework for Evaluating Spoken Dialogue Agents

TL;DR: PARADISE (PARAdigm for DIalogue System Evaluation) is a general framework for evaluating spoken dialogue agents: it decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.