Abstract
Evaluating quality as perceived by users
in their natural environment is a difficult and strenuous task.
Simulating real-world conditions in the laboratory is often
inefficient and expensive. Recently, crowdsourcing has been
proposed as a novel methodology for testing Quality of
Experience (QoE) at the end-user side. In this paper we
(a) discuss the challenges of performing subjective assessments
in the crowdsourcing domain and (b) highlight the importance
of properly filtering unreliable users out of the overall results.
In particular, we introduce various ways of detecting
unreliable users and compare results from two similar QoE
studies that apply different screening techniques.