Technical report

An on-line evaluation framework for recommender systems

TCD-CS-2002-19. Trinity College Dublin, Department of Computer Science, (2002)

Abstract

Several techniques are currently used to evaluate recommender systems. These techniques involve off-line analysis using evaluation methods from machine learning and information retrieval. We argue that while off-line analysis is useful, user satisfaction with a recommendation strategy can only be measured in an on-line context. We propose a new evaluation framework involving a paired test in which two recommender systems compete simultaneously to give the best recommendations to the same user at the same time. The user interface and the interaction model for each system are the same. The framework enables the specification of an API through which different recommendation strategies can take part in such a competition. The API addresses issues such as access to data, the interaction model, and the means of gathering positive feedback from the user. In this way it is possible to obtain a relative measure of user satisfaction with the two systems.
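The abstract describes, but does not specify, the API through which two strategies compete under a shared interface. The following is a minimal illustrative sketch, in Python, of how such a paired on-line test could be wired up; all names (RecommenderStrategy, PairedEvaluation, record_click, and so on) are assumptions made for illustration and are not taken from the report.

```python
from __future__ import annotations
from abc import ABC, abstractmethod
import random


class RecommenderStrategy(ABC):
    """One of the two competing recommendation strategies."""

    @abstractmethod
    def recommend(self, user_id: str, k: int) -> list[str]:
        """Return up to k item ids for the given user."""

    @abstractmethod
    def feedback(self, user_id: str, item_id: str) -> None:
        """Receive positive feedback: the user acted on this item."""


class PairedEvaluation:
    """Presents recommendations from both strategies through a single
    interface and keeps a per-strategy count of positive feedback."""

    def __init__(self, a: RecommenderStrategy, b: RecommenderStrategy, k: int = 5):
        self.strategies = {"A": a, "B": b}
        self.k = k
        self.hits = {"A": 0, "B": 0}

    def recommend(self, user_id: str) -> list[tuple[str, str]]:
        # Collect k items from each strategy and interleave them randomly,
        # so neither strategy gains a presentation advantage.
        items: list[tuple[str, str]] = []
        for name, strategy in self.strategies.items():
            items += [(name, item) for item in strategy.recommend(user_id, self.k)]
        random.shuffle(items)
        return items

    def record_click(self, user_id: str, source: str, item_id: str) -> None:
        # Positive feedback is credited to the strategy that produced the item.
        self.hits[source] += 1
        self.strategies[source].feedback(user_id, item_id)

    def relative_satisfaction(self) -> float:
        # Fraction of all positive feedback won by strategy A; 0.5 means a tie.
        total = sum(self.hits.values())
        return self.hits["A"] / total if total else 0.5
```

Because both recommendation lists are shown through the same interface and interleaved, each piece of positive feedback can be attributed to whichever strategy produced the chosen item, which is what yields the relative measure of user satisfaction described in the abstract.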
