@inproceedings{citeulike:14420517,
abstract = {{Accurate model-based Collaborative Filtering (CF) approaches, such as Matrix Factorization (MF), tend to be black-box machine learning models that lack interpretability and do not provide a straightforward explanation for their outputs. Yet explanations have been shown to improve the transparency of a recommender system by justifying recommendations, and this in turn can enhance the user's trust in the recommendations. Hence, one main challenge in designing a recommender system is mitigating the trade-off between an explainable technique with moderate prediction accuracy and a more accurate technique with no explainable recommendations. In this paper, we focus on factorization models and further assume the absence of any additional data source, such as item content or user attributes. We propose an explainability constrained MF technique that computes the top-n recommendation list from items that are explainable. Experimental results show that our method is effective in generating accurate and explainable recommendations.}},
added-at = {2017-11-15T17:02:25.000+0100},
address = {New York, NY, USA},
author = {Abdollahi, Behnoush and Nasraoui, Olfa},
biburl = {https://www.bibsonomy.org/bibtex/28def3664abf44a39237ab0f4b4a05ae6/brusilovsky},
booktitle = {Proceedings of the Eleventh ACM Conference on Recommender Systems},
citeulike-article-id = {14420517},
citeulike-linkout-0 = {http://portal.acm.org/citation.cfm?id=3109913},
citeulike-linkout-1 = {http://dx.doi.org/10.1145/3109859.3109913},
comment = {Suggests several types of explainability: user-based (based on people you follow) and item-based (similar ratings for similar items). Some items may be explainable while others are not. The authors then develop a version of matrix factorization with a regularization component that pulls users closer to items that can be explained to them. Beyond the presented approach, they conducted a user study showing an increase in user satisfaction.},
doi = {10.1145/3109859.3109913},
interhash = {ee8ffbfb443609b5e6b7f966381aeefc},
intrahash = {8def3664abf44a39237ab0f4b4a05ae6},
isbn = {978-1-4503-4652-8},
keywords = {black-box explanation matrix-factorization recommender recsys2017},
location = {Como, Italy},
pages = {79--83},
posted-at = {2017-08-28 11:09:23},
priority = {2},
publisher = {ACM},
series = {RecSys '17},
timestamp = {2019-06-09T09:35:10.000+0200},
title = {{Using Explainability for Constrained Matrix Factorization}},
url = {http://dx.doi.org/10.1145/3109859.3109913},
year = 2017
}