Empirical assessment of languages for teaching concurrency: Methodology and application

Sebastian Nanz, Faraz Torshizi, Michela Pedroni, Bertrand Meyer
2011 24th IEEE-CS Conference on Software Engineering Education and Training (CSEE&T)
Concurrency has been rapidly gaining importance in computing, and correspondingly in computing curricula. Concurrent programming is, however, notoriously hard even for expert programmers. New language designs promise to make it easier, but such claims call for empirical validation. We present a methodology for comparing concurrent languages for teaching purposes. A critical challenge is to avoid bias, especially when (as in our example application) the experimenters are also the designers of one of the approaches under comparison. For a study performed as part of a course, it is also essential to make sure that no student is penalized. The methodology addresses these concerns by using self-study material and applying an evaluation scheme that minimizes opportunities for subjective decisions. The example application compares two object-oriented concurrent languages: multithreaded Java and SCOOP. The results show an advantage for SCOOP even though the study participants had previous training in writing multithreaded Java programs. The lessons should be of use to educators interested in teaching concurrency, to researchers looking for objective ways of assessing teaching techniques, and to researchers who want to avoid bias in assessing an approach or tool that they have themselves designed.
doi:10.1109/cseet.2011.5876128 dblp:conf/csee/NanzTPM11
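
For readers unfamiliar with the approaches being compared, the following minimal sketch illustrates the kind of shared-state task that "multithreaded Java" refers to. It is not drawn from the paper's study materials; the Counter class and the workload are hypothetical. The point it makes is that in Java the programmer must place synchronization manually, and omitting it silently produces lost updates.

    // Hypothetical example, not from the study: two threads increment a
    // shared counter. Correctness depends on the programmer remembering
    // to declare the accessor methods synchronized.
    public class Counter {
        private int count = 0;

        // synchronized serializes access to the shared field
        public synchronized void increment() { count++; }
        public synchronized int get() { return count; }

        public static void main(String[] args) throws InterruptedException {
            Counter c = new Counter();
            Runnable work = () -> { for (int i = 0; i < 1000; i++) c.increment(); };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            // Prints 2000 only because access is synchronized; without the
            // keyword, interleaved updates can be lost.
            System.out.println(c.get());
        }
    }

In SCOOP, by contrast, such an object would be declared with the separate keyword, and the runtime guarantees exclusive access to it for the duration of each call; this difference in the burden placed on the programmer is what the study set out to measure.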