A crowdsourcing system for integrated and reproducible evaluation in scientific visualization
2016 IEEE Pacific Visualization Symposium (PacificVis)
User evaluations have gained increasing importance in visualization research in recent years, as in many cases they are the only way to support the claims made by visualization researchers. Unfortunately, recent literature reviews show that, in comparison to algorithmic performance evaluations, the number of user evaluations is still very low. Reasons for this include the amount of time required to conduct such studies, together with the difficulties involved in participant […]
doi:10.1109/pacificvis.2016.7465249
dblp:conf/apvis/EnglundKR16
fatcat:7zi4yb7de5aqze6qwkadfpq52q