Abstract
Many online retailers, such as Amazon, use automated product recommender systems to encourage customer loyalty and cross-sell products. Despite significant improvements in the predictive accuracy of contemporary recommender system algorithms, they remain prone to errors. Erroneous recommendations pose particular threats to online retailers, because they diminish customers’ trust in, acceptance of, satisfaction with, and loyalty to a recommender system. Explanations of the reasoning that leads to recommendations might mitigate these negative effects. That is, a recommendation algorithm ideally would provide both accurate recommendations and explanations of the reasoning behind them. This article proposes a novel method to balance these concurrent objectives. Applying this method, which combines content-based and collaborative filtering, to two real-world data sets with more than 100 million product ratings reveals that it outperforms established recommender approaches both in predictive accuracy (more than five percent better than the Netflix Prize winner algorithm according to normalized root mean squared error) and in its ability to provide actionable explanations, which is also an ethical requirement of artificial intelligence systems.