Explanation Mining: Post Hoc Interpretability of Latent Factor Models for Recommendation Systems

Georgina Peake and Jun Wang. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '18), pages 2060-2069. ACM, July 2018.
DOI: 10.1145/3219819.3220072

Abstract

The widespread use of machine learning algorithms to drive decision-making has highlighted the critical importance of ensuring the interpretability of such models in order to engender trust in their output. State-of-the-art recommendation systems use black-box latent factor models that provide no explanation of why a recommendation has been made, as they abstract their decision processes to a high-dimensional latent space beyond the direct comprehension of humans. We propose a novel approach for extracting explanations from latent factor recommendation systems by training association rules on the output of a matrix factorisation black-box model. By taking advantage of the interpretable structure of association rules, we demonstrate on a unique industry dataset that the predictive accuracy of the recommendation model can be maintained whilst yielding explanations with high fidelity to the black-box model. Our approach mitigates the accuracy-interpretability trade-off whilst avoiding the need to sacrifice flexibility or to use external data sources. We also contribute to the ill-defined problem of evaluating interpretability.
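
The paper's core recipe, train a matrix factorisation model and then fit association rules to its recommendations, lends itself to a compact illustration. The Python sketch below is a hypothetical reconstruction, not the authors' implementation: scikit-learn's NMF stands in for the paper's matrix factorisation model, the ratings data is a random toy matrix, the rules are restricted to single antecedents, and the MIN_SUPPORT and MIN_CONFIDENCE thresholds are illustrative assumptions.

from collections import Counter

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy implicit-feedback matrix: rows are users, columns are items.
R = (rng.random((50, 8)) > 0.6).astype(float)

# "Black box": non-negative matrix factorisation of the ratings.
mf = NMF(n_components=3, init="random", random_state=0, max_iter=500)
U = mf.fit_transform(R)   # user latent factors
V = mf.components_        # item latent factors
scores = U @ V            # reconstructed preference scores

# For each user, recommend the highest-scoring unseen item and record
# a transaction: (items the user liked, item the model recommended).
transactions = []
for u in range(R.shape[0]):
    unseen = np.flatnonzero(R[u] == 0)
    if unseen.size:
        rec = unseen[np.argmax(scores[u, unseen])]
        transactions.append((set(np.flatnonzero(R[u] == 1)), rec))

# Mine single-antecedent rules {liked item} -> {recommended item} by
# counting co-occurrences across users' transactions.
pair_counts, item_counts = Counter(), Counter()
for liked, rec in transactions:
    for item in liked:
        pair_counts[(item, rec)] += 1
        item_counts[item] += 1

# Report rules that clear the illustrative support/confidence thresholds;
# these act as post hoc explanations of the black-box recommendations.
MIN_SUPPORT, MIN_CONFIDENCE = 3, 0.5
for (item, rec), n in sorted(pair_counts.items()):
    confidence = n / item_counts[item]
    if n >= MIN_SUPPORT and confidence >= MIN_CONFIDENCE:
        print(f"liked item {item} -> recommended item {rec} "
              f"(support={n}, confidence={confidence:.2f})")

Restricting the miner to single-antecedent rules keeps the sketch dependency-free; a fuller implementation would run an Apriori-style search over multi-item antecedents, as association rule learning normally does.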
