Competitive Live Evaluations of Activity-Recognition Systems

Hristijan Gjoreski, Simon Kozina, Matjaž Gams, Mitja Luštrek, Juan Antonio Álvarez-García, Jin-Hyuk Hong, Anind K. Dey, Maurizio Bocca, Neal Patwari
IEEE Pervasive Computing, 2015
To ensure the validity and usability of activity-recognition approaches, an agreement on a set of standard evaluation methods is needed. Because of the diversity of the sensors and other hardware employed, designing and agreeing on standard tests is difficult. This article presents an initiative to evaluate activity-recognition systems: a living-lab evaluation established through an annual competition, EvAAL-AR (Evaluating Ambient Assisted Living Systems through Competitive Benchmarking – Activity Recognition). In the competition, each team brings its own activity-recognition system, which is evaluated live on the same activity scenario performed by an actor. The evaluation criteria attempt to capture practical usability: recognition accuracy, user acceptance, recognition delay, installation complexity, and interoperability with ambient assisted living systems. The article also presents the competing systems, with emphasis on the two best-performing ones: (i) the system that achieved the best recognition accuracy, and (ii) the system that was evaluated as the best overall. Finally, the article presents lessons learned from the competition and ideas for future development of the competition and of the activity-recognition field in general.
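The abstract lists five evaluation criteria combined into an overall result. As an illustrative sketch only (the criteria names come from the abstract, but the 0–10 scale, the equal weighting, and the function names are assumptions, not the competition's actual scoring formula), a composite score over such criteria could be computed as a weighted average:

```python
# Hypothetical composite scoring for an EvAAL-AR-style evaluation.
# The five criteria are taken from the abstract; the 0-10 scale and
# the default equal weights are illustrative assumptions.

CRITERIA = (
    "recognition_accuracy",
    "user_acceptance",
    "recognition_delay",
    "installation_complexity",
    "interoperability",
)

def overall_score(scores, weights=None):
    """Weighted average of per-criterion scores (each on a 0-10 scale)."""
    if weights is None:
        weights = {c: 1.0 for c in CRITERIA}  # equal weights by default
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

# Example: one team's (made-up) per-criterion scores.
team = {
    "recognition_accuracy": 8.5,
    "user_acceptance": 7.0,
    "recognition_delay": 9.0,
    "installation_complexity": 6.0,
    "interoperability": 7.5,
}
print(round(overall_score(team), 2))  # equal weights -> simple mean: 7.6
```

With equal weights this reduces to the arithmetic mean; passing a `weights` dict would let an organizer emphasize, say, recognition accuracy over installation complexity.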
doi:10.1109/mprv.2015.3