A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2017.
The file type is application/pdf.
Can We Talk? Methods for Evaluation and Training of Spoken Dialogue Systems
2005
Language Resources and Evaluation
There is a strong relationship between evaluation and methods for automatically training language processing systems: generally, the same resources and metrics are used both to train system components and to evaluate them. To date, in dialogue systems research, this general methodology has not typically been applied to the dialogue manager and spoken language generator. I will argue that any metric that can be used to evaluate system performance should also be usable as a feedback function for …
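The abstract's central claim — that an evaluation metric can double as the feedback function for training — can be illustrated with a minimal sketch. Everything below (the toy dialogue states, transitions, and task-success metric) is an illustrative assumption, not the paper's method: one scalar metric scores a dialogue, and "training" the dialogue manager is simply a search for the policy that maximizes that same metric.

```python
from itertools import product

# Illustrative toy dialogue MDP (hypothetical, not from the paper).
STATES = ["greet", "ask_slot", "confirm"]   # "done" is terminal
ACTIONS = ["ask", "confirm", "close"]

def step(state, action):
    """Deterministic toy dialogue dynamics."""
    transitions = {
        ("greet", "ask"): "ask_slot",
        ("ask_slot", "confirm"): "confirm",
        ("confirm", "close"): "done",
    }
    # An inappropriate action leaves the dialogue state unchanged,
    # wasting a turn.
    return transitions.get((state, action), state)

def metric(policy, max_turns=10):
    """Evaluation metric: task success minus a small length penalty.
    The same function is reused below as the training objective."""
    state, turns = "greet", 0
    while state != "done" and turns < max_turns:
        state = step(state, policy[state])
        turns += 1
    return (1.0 if state == "done" else 0.0) - 0.01 * turns

# "Training": pick the policy that maximizes the evaluation metric
# (exhaustive search is feasible here; 3^3 = 27 candidate policies).
policies = [dict(zip(STATES, acts)) for acts in product(ACTIONS, repeat=3)]
best = max(policies, key=metric)
```

In a realistic system the metric would be a learned or user-derived performance function and the search would be reinforcement learning rather than enumeration, but the coupling is the same: whatever scores the dialogue at evaluation time supplies the training signal.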
doi:10.1007/s10579-005-2696-1
fatcat:7rhidbfxarhx5a7g3wakcrrv3y