On-line policy optimisation of Bayesian spoken dialogue systems via human interaction

M. Gasic, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, S. Young
2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
A partially observable Markov decision process has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating the use of a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However,
early studies have been limited to very low dimensional spaces and the learning has exhibited convergence problems thought to be due to inconsistent user feedback. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State system. This dynamic Bayesian network based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator trained policy.
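To make the core idea concrete, below is a minimal, illustrative sketch of why Gaussian processes reduce the number of dialogues needed: a GP posterior over the Q-function generalises observed returns across similar belief states, and its predictive variance supports uncertainty-aware exploration. This is in the spirit of GP-SARSA-style methods, not the authors' actual implementation; the names `GPQFunction` and `rbf_kernel` are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

class GPQFunction:
    """GP regression from belief-action features to observed dialogue returns.

    Each training point is a feature vector describing a (belief state,
    action) pair; the target is the discounted return observed for it.
    """

    def __init__(self, noise=0.1):
        self.noise = noise
        self.X = None  # feature vectors seen so far, shape (n, d)
        self.y = None  # corresponding returns, shape (n,)

    def update(self, features, ret):
        """Add one observed (features, return) pair and refresh the posterior."""
        x = np.atleast_2d(features)
        if self.X is None:
            self.X, self.y = x, np.array([ret], dtype=float)
        else:
            self.X = np.vstack([self.X, x])
            self.y = np.append(self.y, ret)
        K = rbf_kernel(self.X, self.X) + self.noise ** 2 * np.eye(len(self.y))
        self.K_inv = np.linalg.inv(K)

    def predict(self, features):
        """Posterior mean and variance of Q at a new belief-action point."""
        x = np.atleast_2d(features)
        k = rbf_kernel(self.X, x)
        mean = k.T @ self.K_inv @ self.y
        var = rbf_kernel(x, x) - k.T @ self.K_inv @ k
        return mean.item(), var.item()
```

In use, the agent would score each candidate action's feature vector with `predict` and, for example, pick the one with the highest upper confidence bound (mean plus a multiple of the standard deviation). The variance term is what makes on-line learning from a small number of human dialogues feasible: the policy explores exactly where the Q-estimate is still uncertain.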
doi:10.1109/icassp.2013.6639297 dblp:conf/icassp/GasicBHKSTTY13