
Showing papers by "Géraldine Damnati" published in 2021


Book Chapter DOI
01 Jan 2021
TL;DR: This article proposes skip-act turn embeddings, extracted as the common representation layer of a multi-task model that predicts both the previous and the next dialogue act, and shows consistent improvements across the various dialogue acts.
Abstract: This paper compares several approaches for computing dialogue turn embeddings and evaluates their representation capacities in two dialogue act related tasks within a hierarchical Recurrent Neural Network architecture. These turn embeddings can be produced explicitly, or implicitly by extracting the hidden layer of a model trained for a given task. We introduce skip-act, a new dialogue turn embedding approach, extracted as the common representation layer of a multi-task model that predicts both the previous and the next dialogue act. The models used to learn turn embeddings are trained on a large dialogue corpus with light supervision, while the models that predict dialogue acts from turn embeddings are trained on a sub-corpus with gold dialogue act annotations. We compare their performance at predicting the current dialogue act as well as their ability to predict the next dialogue act, a more challenging task with several practical applications. Thanks to a better context representation, the skip-act turn embeddings are shown to outperform previous approaches both in overall F-measure and in macro-F1, with consistent improvements across the various dialogue acts.
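
The abstract describes the core skip-act mechanism: a shared turn representation trained jointly on two auxiliary tasks (predicting the previous and the next dialogue act). The following is a minimal sketch of that idea in PyTorch; the architecture details, layer sizes, and training setup are illustrative assumptions, not the authors' actual implementation, and all names (SkipActTurnEncoder, the GRU-based turn encoder, the hyperparameters) are hypothetical.

```python
# Minimal sketch of the skip-act idea (assumption: the paper's actual
# hierarchical RNN architecture, sizes, and corpora differ).
import torch
import torch.nn as nn

class SkipActTurnEncoder(nn.Module):
    """Encodes a dialogue turn and predicts the previous and next dialogue act.

    The shared hidden layer (the turn embedding) is the common representation
    trained by both auxiliary tasks, as in the skip-act approach.
    """

    def __init__(self, vocab_size: int, embed_dim: int, hidden_dim: int, num_acts: int):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Word-level recurrent encoder; its final state is the turn embedding.
        self.turn_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Two task-specific heads share the turn embedding (multi-task learning).
        self.prev_act_head = nn.Linear(hidden_dim, num_acts)
        self.next_act_head = nn.Linear(hidden_dim, num_acts)

    def forward(self, token_ids: torch.Tensor):
        embedded = self.word_embed(token_ids)    # (batch, seq_len, embed_dim)
        _, h_n = self.turn_rnn(embedded)         # h_n: (1, batch, hidden_dim)
        turn_embedding = h_n.squeeze(0)          # shared representation layer
        return (turn_embedding,
                self.prev_act_head(turn_embedding),
                self.next_act_head(turn_embedding))

# Joint loss over both auxiliary tasks trains the shared turn embedding.
model = SkipActTurnEncoder(vocab_size=5000, embed_dim=64, hidden_dim=128, num_acts=10)
tokens = torch.randint(1, 5000, (4, 12))         # 4 turns of 12 tokens each
prev_acts = torch.randint(0, 10, (4,))
next_acts = torch.randint(0, 10, (4,))
emb, prev_logits, next_logits = model(tokens)
loss = (nn.functional.cross_entropy(prev_logits, prev_acts)
        + nn.functional.cross_entropy(next_logits, next_acts))
loss.backward()
print(emb.shape)  # torch.Size([4, 128]) -- the extracted turn embeddings
```

After training with light supervision, one would discard the two classification heads and keep `turn_embedding` as the input to a downstream dialogue act classifier, mirroring the two-stage setup the abstract describes.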

2 citations