Adaptive dialogue management using intent clustering and fuzzy rules
Citations
Dialogue Systems for Intelligent Human Computer Interactions.
Towards conversational technology to promote, monitor and protect mental health.
References
A comparison of event models for naive Bayes text classification
A Neural Conversational Model
Partially observable Markov decision processes for spoken dialog systems
An approach to online identification of Takagi-Sugeno fuzzy models
POMDP-Based Statistical Spoken Dialog Systems: A Review
Frequently Asked Questions (20)
Q2. What are the future works in this paper?
For future work, the authors plan to apply the proposed technique to multi-domain tasks in order to measure the capability of their methodology to adapt efficiently to contexts that can vary dynamically. Finally, the authors also intend to extend the evaluation of the system by considering additional measures related to the users' profiles that complement the proposed adaptation.
Q3. How many user turns did the corpus have?
The corpus consists of 6,280 user turns and 9,133 system turns, with an average number of 7.6 words per turn and the vocabulary has a size of 811 words.
Q4. How many system responses were defined for the Let’s Go task?
A total of 51 system responses were defined for the task (classified into confirmations of concepts and attributes, questions to require data from the user, and answers obtained after a query to the database).
Q5. How many clusters have been defined for the Let’s Go task?
A total of seven clusters have been defined for the Let's Go task, seven for the Dihana task, and eight clusters for the Edecan task.
Q6. What is the syntax used to perform the calculations?
To perform the calculations, the authors implemented a script in Python using the scikit-learn library to vectorize the phrases, pandas and scipy to generate the data frames and the squared distance matrix, and sklearn for the computation of the cosine distance.
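The pipeline just described can be sketched as follows. This is a minimal illustration under assumptions, not the authors' actual script; the sample phrases are invented:

```python
# Minimal sketch of the described pipeline: TF-IDF vectorization of
# phrases with scikit-learn, pairwise cosine distances with scipy, and
# a pandas data frame holding the square distance matrix.
# The sample phrases are invented for illustration.
import pandas as pd
from scipy.spatial.distance import pdist, squareform
from sklearn.feature_extraction.text import TfidfVectorizer

phrases = [
    "when is the next bus to downtown",
    "i want to go downtown",
    "what time does the 61c leave",
]

# Vectorize the phrases into a TF-IDF term-document matrix.
tfidf = TfidfVectorizer().fit_transform(phrases).toarray()

# Pairwise cosine distances, arranged as a square matrix.
dist = squareform(pdist(tfidf, metric="cosine"))
frame = pd.DataFrame(dist)

print(frame.shape)  # (3, 3)
```

`pdist` returns the condensed upper-triangular distances; `squareform` expands them into the full symmetric matrix that the clustering step can consume.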
Q7. What are the main characteristics of conversational systems?
The increasing maturity of speech and conversational technologies has made it possible to integrate conversational interaction in a range of application domains, smart devices, and environments.
Q8. How many user dialogues were acquired for the Let’s Go task?
A corpus of 900 dialogues (10.8 hours) was acquired for the task by means of the Wizard of Oz (WoZ) technique with 225 real users, for which an initial dialogue strategy was defined by experts.
Q9. What are the three basic forms of showing user awareness in conversational agents?
In [38], the authors present three basic forms of showing user awareness in conversational agents: being aware of previous interactions, re-prompting users when there is no input, and rewording and re-prompting messages with more detailed information based on the previous interaction.
Q10. What is the purpose of the proposed technique?
For future work the authors plan to apply the proposed technique to multi-domain tasks in order to measure the capability of their methodology to adapt efficiently to contexts that can vary dynamically.
Q11. What is the structure of the fuzzy rule in the eClass0 model?
A fuzzy rule in the eClass0 model has the following structure: Rule_i = IF (Feature_1 is P_1) AND ... AND (Feature_n is P_n) AND (Cluster is C_m) THEN Class = c_i, where i represents the number of the rule; n is the number of input features (the number of entities corresponding to the different slots defined for the DR); the vector Feature stores the observed features, whereas the vector P stores the feature values of one of the prototypes (coded in terms of the three possible values, {0, 1, 2}).
Q12. What are the three groups of approaches used to perform this task?
The third group of approaches uses machine learning, combining different measures, features and techniques: graph-based approaches [56], word embeddings [57], convolutional neural networks [58], dynamic finite state machines [59], etc. [60] showed the feasibility of this approach in application domains with a varied number of slots and with both real and simulated users. [61] describes a modification of the K-Means algorithm adapted to question-answering websites and forums. [62] presents a proposal to use clustering to create directed graphs with the main transitions between topics among dialogues.
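The general idea of grouping utterances into intent clusters can be sketched with plain K-Means over TF-IDF vectors. This is a hedged illustration: the modified K-Means of [61] and the graph construction of [62] are not reproduced, and the utterances are invented:

```python
# Hedged sketch of grouping utterances into intent clusters with plain
# K-Means on TF-IDF vectors (scikit-learn). Sample utterances invented:
# two about schedules, two about destinations.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

utterances = [
    "when does the next bus arrive",
    "what time is the next bus",
    "i want to go to the airport",
    "take me to the airport please",
]

vectors = TfidfVectorizer().fit_transform(utterances)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Schedule questions and destination requests should land in
# different clusters.
print(labels)
```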
Q13. How many successful dialogues were obtained in the Edecan corpus?
With regard to previous studies that used the Edecan corpus, a percentage of 78.80% successful dialogues was obtained for the statistical dialogue model and user simulation techniques proposed in [18], and a percentage of 73.40% successful dialogues was obtained with the stochastic finite-state transducer model proposed in [74].
Q14. How many successful dialogues were obtained in the Dihana task?
With regard to previous studies that used the Dihana corpus, a percentage of 72.60% successful dialogues was obtained for the statistical dialogue model and user simulation techniques proposed in [79], a percentage of 82.90% with the HMM-based model proposed in [80], and a percentage of 83.64% with the statistical methodology proposed in [27].
Q15. What types of statistical techniques have been proposed to achieve this goal?
Different types of statistical techniques have been proposed to achieve this goal: n-gram models [39], graph-based models [40], Hidden Markov Models [41], logistic regression [42], and EM-based algorithms [43].
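The simplest of these, an n-gram model over dialogue acts, can be sketched as a bigram predictor. The dialogue acts and training sequences below are invented for illustration and are not from the cited models:

```python
# Minimal bigram-style sketch of statistical next-action prediction,
# in the spirit of the n-gram models cited above. The dialogue acts
# and training sequences are invented.
from collections import Counter, defaultdict

# Invented training sequences of dialogue acts.
dialogues = [
    ["greet", "ask_origin", "ask_destination", "confirm", "close"],
    ["greet", "ask_origin", "confirm", "close"],
    ["greet", "ask_destination", "confirm", "close"],
]

# Count transitions act_t -> act_{t+1}.
transitions = defaultdict(Counter)
for acts in dialogues:
    for prev, nxt in zip(acts, acts[1:]):
        transitions[prev][nxt] += 1

def predict(prev_act):
    """Most frequent successor of prev_act in the training dialogues."""
    return transitions[prev_act].most_common(1)[0][0]

print(predict("greet"))    # ask_origin (2 of the 3 dialogues)
print(predict("confirm"))  # close
```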
Q16. What is the percentage of successful dialogues obtained in the Dihana task?
The codification developed to represent the state of the dialogue and the good operation of the MLP classifiers make it possible for the number of responses that cause the failure of the system to be only 4.23%, 5.25% and 5.11% for the DM2 dialogue models, reducing the percentages obtained with the DM1 dialogue models.
Q17. What can be done with the information provided by the conversational system?
Dialogue systems can employ this information to provide personalized dialogue management strategies [17, 18, 19], for instance to adapt the services and information provided by the conversational system, optimize the overall time required to solve the users’ queries and favour user engagement and fidelity.
Q18. What is the percentage of successful dialogues obtained in the DM2 model?
The results, shown in Table 4, indicate that with the DM2 dialogue models there was an increase in the number of system turns that actually provide information to the user, which is consistent with the fact that the task completion rate is higher using their proposed dialogue model.
Q19. What is the need to have the explainability of the solution reached?
It is also necessary to ensure the explainability of the solution reached, as well as the flexibility to transform these statistical models into a set of rules so that they can be implemented on existing commercial infrastructures, such as Google DialogFlow, IBM Watson or Amazon Lex.
Q20. How many calls reached the stage of presenting results to the user?
With regard to a version of the system developed by means of DUDE [77], 62% of the calls reached the stage of presenting results to the user.