Problematizing Rating Scales in EFL Academic Writing Assessment: Voices from Iranian Context

Batoul Ghanbari, Hossein Barati, Ahmad Moinzadeh
2012 English Language Teaching  
Along with a more humanitarian movement in language testing, accountability to contextual variables in the design and development of any assessment enterprise is increasingly emphasized. When it comes to writing assessment, however, the multiplicity of rating scales developed to fit diverse contexts is mainly driven by well-known native-speaker testing agencies. In fact, EFL/ESL assessment contexts seem to be receptively influenced by the symbolic authority of native assessment circles. Investigating the actualities of rating practice in EFL/ESL contexts would therefore provide a realistic view of how assessment is conceptualized and practiced. To investigate the issue, the present study launched a wide-scale survey of the Iranian EFL writing assessment context. Results of a questionnaire and subsequent interviews with Iranian EFL composition raters revealed that a rating scale in its conventional sense does not exist; instead, raters relied on their own internalized criteria, developed through long years of practice. The legitimacy of native speakers in the design and development of scales for the EFL context is therefore challenged, and local agency in the design and development of rating scales is emphasized.
doi:10.5539/elt.v5n8p76