Using Explainability for Constrained Matrix Factorization

Behnoush Abdollahi and Olfa Nasraoui. Proceedings of the Eleventh ACM Conference on Recommender Systems (RecSys '17), pages 79-83. New York, NY, USA, ACM, (2017)
DOI: 10.1145/3109859.3109913

Abstract

Accurate model-based Collaborative Filtering (CF) approaches, such as Matrix Factorization (MF), tend to be black-box machine learning models that lack interpretability and do not provide a straightforward explanation for their outputs. Yet explanations have been shown to improve the transparency of a recommender system by justifying recommendations, and this in turn can enhance the user's trust in the recommendations. Hence, one main challenge in designing a recommender system is mitigating the trade-off between an explainable technique with moderate prediction accuracy and a more accurate technique with no explainable recommendations. In this paper, we focus on factorization models and further assume the absence of any additional data source, such as item content or user attributes. We propose an explainability constrained MF technique that computes the top-n recommendation list from items that are explainable. Experimental results show that our method is effective in generating accurate and explainable recommendations.
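
To make the abstract's idea concrete, below is a minimal, hypothetical sketch of an explainability-constrained matrix factorization, not the authors' exact formulation. It assumes an explainability weight W[u, i] defined as the fraction of user u's nearest neighbours who rated item i, adds a soft regularizer that pulls the user factor p_u toward the item factor q_i when W[u, i] is high, and restricts the top-n list to items with nonzero explainability. The helper names, weight definition, and hyperparameters are illustrative assumptions.

import numpy as np

def explainability_scores(R, top_k=10, theta=0.0):
    """Assumed explainability weight W[u, i]: fraction of user u's top_k
    most similar users (cosine similarity on ratings) who rated item i."""
    norms = np.linalg.norm(R, axis=1, keepdims=True) + 1e-12
    sim = (R / norms) @ (R / norms).T
    np.fill_diagonal(sim, -np.inf)            # exclude the user itself
    W = np.zeros_like(R, dtype=float)
    for u in range(R.shape[0]):
        neighbours = np.argsort(sim[u])[-top_k:]
        W[u] = (R[neighbours] > 0).mean(axis=0)
    W[W < theta] = 0.0                        # keep only sufficiently explainable items
    return W

def explainable_mf(R, W, n_factors=10, lr=0.01, beta=0.02, lam=0.1, epochs=50, seed=0):
    """SGD on squared error plus L2 regularization plus an explainability term
    lam * W[u, i] * ||p_u - q_i||^2 that acts only on explainable (u, i) pairs."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, n_factors))
    Q = rng.normal(scale=0.1, size=(n_items, n_factors))
    users, items = np.nonzero(R)              # observed ratings only
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            grad_p = -err * Q[i] + beta * P[u] + lam * W[u, i] * (P[u] - Q[i])
            grad_q = -err * P[u] + beta * Q[i] - lam * W[u, i] * (P[u] - Q[i])
            P[u] -= lr * grad_p
            Q[i] -= lr * grad_q
    return P, Q

if __name__ == "__main__":
    # Tiny synthetic rating matrix (0 = unobserved), for demonstration only.
    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)
    W = explainability_scores(R, top_k=2)
    P, Q = explainable_mf(R, W)
    scores = P @ Q.T
    scores[R > 0] = -np.inf                   # do not re-recommend rated items
    scores[W == 0] = -np.inf                  # restrict the top-n list to explainable items
    print("top-1 recommendation per user:", scores.argmax(axis=1))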
