Semi-Automatic Grading of Students' Answers Written in Free Text
The Electronic Journal of e-Learning
Grading free text answers to exam questions during an assessment process is time consuming and subject to fluctuations in the application of evaluation criteria, particularly when the number of answers is high (in the hundreds). As a consequence of these fluctuations, which are inherent to human nature and largely driven by emotional factors that are difficult to mitigate, small discrepancies naturally arise in the grades assigned to similar answers. This means that two answers of similar quality may receive different grades, generating inequities in the assessment process. Reducing the time required by the assessment process, on the one hand, and grouping the answers into homogeneous sets, on the other, are the main motivations for the work presented here. We believe that it is possible to reduce unintentional inequities during the assessment of free text answers by applying text mining techniques, in particular automatic text classification, to group the answers into homogeneous sets of uniform quality. Thus, instead of grading answers in random order, the teacher may assess similar answers in sequence, one after the other. The teacher may also choose, for example, to grade the answers in decreasing order of quality, starting with the best, or in ascending order, starting with the group of the worst answers. The active learning techniques we apply throughout the grading process generate intermediate models that automatically organize the answers not yet graded into homogeneous groups. These techniques help to reduce the time required for the assessment process, to reduce the occurrence of grading errors, and to improve the detection of plagiarism.
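The grouping step described above can be illustrated with a minimal sketch. The abstract does not specify the representation or classifier used, so the following is an assumption for illustration only: answers are represented as hand-rolled TF-IDF bag-of-words vectors (standard library only), and each ungraded answer is assigned to the quality group whose centroid of already-graded answers it is most similar to (a nearest-centroid classifier). All function names here (`tfidf_vectors`, `group_ungraded`, etc.) are hypothetical, not from the paper.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors, computed from scratch (stdlib only)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                       # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        # smoothed idf so terms appearing in every document get weight 0
        vecs.append({t: (tf[t] / len(toks)) * math.log((1 + n) / (1 + df[t]))
                     for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vecs):
    """Mean of a list of sparse vectors."""
    c = Counter()
    for v in vecs:
        for t, w in v.items():
            c[t] += w / len(vecs)
    return dict(c)

def group_ungraded(graded, ungraded):
    """Assign each ungraded answer to the quality group whose centroid
    it is most similar to, so the teacher can grade similar answers in
    sequence.  `graded` is a list of (answer_text, grade_label) pairs."""
    texts = [t for t, _ in graded] + ungraded
    vecs = tfidf_vectors(texts)
    by_grade = {}
    for (_, grade), v in zip(graded, vecs):
        by_grade.setdefault(grade, []).append(v)
    centroids = {g: centroid(vs) for g, vs in by_grade.items()}
    return {text: max(centroids, key=lambda g: cosine(v, centroids[g]))
            for text, v in zip(ungraded, vecs[len(graded):])}

graded = [("photosynthesis converts light energy into chemical energy", "good"),
          ("plants eat sunlight", "poor")]
ungraded = ["light energy is converted into chemical energy by photosynthesis"]
print(group_ungraded(graded, ungraded))
```

In an active-learning setting of the kind the abstract alludes to, `group_ungraded` would simply be re-run each time the teacher fixes another grade: the newly graded answer moves from `ungraded` into `graded`, and the intermediate model (here, the set of centroids) is rebuilt so the remaining answers are regrouped with the extra evidence.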