Abstract
This paper reports useful observations made during the design and testing of a crowdsourcing task with a high “imaginative load”, a term we introduce to designate a task that requires workers to answer questions from a hypothetical point of view beyond their daily experiences. We find that workers are able to deliver high-quality responses to such HITs, but that it is important that the HIT title allows workers to form accurate expectations of the task. Also important is the inclusion of free-text justification questions that target specific items in a pattern that is not obviously predictable. These findings were supported by a small-scale experiment run on several crowdsourcing platforms.