Evaluating natural language generated database records

Rita McCardell
Proceedings of the Workshop on Speech and Natural Language (HLT '90), 1990
With the proliferation of natural language processing (NLP) systems and their applications comes the inevitable task of finding a way to compare, and thus evaluate, their output. This paper focuses on one such evaluation technique, which originated in the text understanding system Project MURASAKI. The technique quantitatively and qualitatively measures the match (or distance) between the output of one text understanding system and the expected output of another.
doi:10.3115/116580.116607
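The abstract does not spell out the scoring formula used in Project MURASAKI, but the idea of measuring the match (or distance) between a generated database record and an expected one can be illustrated with a generic field-level comparison. The function and example records below are hypothetical, not taken from the paper:

```python
# Illustrative sketch only: the paper's actual MURASAKI metric is not given
# in the abstract, so this shows one plausible field-level match measure
# between a generated database record and an expected (reference) record.

def record_match(generated: dict, expected: dict) -> tuple[float, list]:
    """Return (score, mismatches).

    score: fraction of expected fields whose values the generated
           record reproduces exactly (quantitative measure).
    mismatches: list of (field, generated_value, expected_value)
                tuples for the differing fields (qualitative detail).
    """
    if not expected:
        return 1.0, []
    mismatches = [
        (field, generated.get(field), value)
        for field, value in expected.items()
        if generated.get(field) != value
    ]
    score = 1.0 - len(mismatches) / len(expected)
    return score, mismatches

# Hypothetical example records (field names invented for illustration).
expected = {"event": "earthquake", "location": "Kobe", "magnitude": "7.2"}
generated = {"event": "earthquake", "location": "Kobe", "magnitude": "6.9"}
score, diffs = record_match(generated, expected)
```

Here two of three expected fields match, so the quantitative score is 2/3, while the mismatch list pinpoints the `magnitude` field for qualitative inspection.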