We extend variational autoencoders (VAEs) to collaborative filtering for
implicit feedback. This non-linear probabilistic model enables us to go beyond
the limited modeling capacity of linear factor models which still largely
dominate collaborative filtering research. We introduce a generative model with
multinomial likelihood and use Bayesian inference for parameter estimation.
Despite widespread use in language modeling and economics, the multinomial
likelihood receives less attention in the recommender systems literature. We
introduce a different regularization parameter for the learning objective,
which proves to be crucial for achieving competitive performance. Remarkably,
there is an efficient way to tune the parameter using annealing. The resulting
model and learning algorithm have information-theoretic connections to maximum
entropy discrimination and the information bottleneck principle. Empirically,
we show that the proposed approach significantly outperforms several
state-of-the-art baselines, including two recently-proposed neural network
approaches, on several real-world datasets. We also provide extended
experiments comparing the multinomial likelihood with other commonly used
likelihood functions in the latent factor collaborative filtering literature
and show favorable results. Finally, we identify the pros and cons of employing
a principled Bayesian inference approach and characterize settings where it
provides the most significant improvements.
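The learning objective the abstract describes can be sketched as a negative ELBO with a multinomial log-likelihood term and a β-weighted KL term, where β is annealed from zero during training. The following is a minimal NumPy sketch under those assumptions; the function names and the 0.2 cap on β are illustrative (the paper tunes the cap on validation data), not the authors' reference implementation.

```python
import numpy as np

def mult_vae_loss(x, logits, mu, logvar, beta):
    """Negative ELBO for a VAE with multinomial likelihood (sketch).

    x:      (n_users, n_items) binary implicit-feedback matrix
    logits: decoder outputs, same shape as x
    mu, logvar: encoder's Gaussian posterior parameters, (n_users, d)
    beta:   weight on the KL term (the regularization parameter
            the abstract refers to)
    """
    # Row-wise log-softmax over items (numerically stabilized).
    m = logits.max(axis=1, keepdims=True)
    log_softmax = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    # Multinomial log-likelihood of each user's click vector.
    neg_ll = -(x * log_softmax).sum(axis=1).mean()
    # KL divergence between N(mu, exp(logvar)) and the standard normal prior.
    kl = -0.5 * (1.0 + logvar - mu**2 - np.exp(logvar)).sum(axis=1).mean()
    return neg_ll + beta * kl

def kl_weight(step, anneal_steps, beta_cap=0.2):
    # Linear annealing of beta from 0 up to beta_cap; the cap value
    # here is an assumed placeholder, selected by validation in practice.
    return beta_cap * min(1.0, step / anneal_steps)
```

Setting `beta_cap` below 1 is what distinguishes this objective from the standard ELBO: the KL term is deliberately under-weighted, trading off a tighter variational bound for better ranking performance.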
@misc{liang2018variational,
added-at = {2018-02-19T06:28:09.000+0100},
author = {Liang, Dawen and Krishnan, Rahul G. and Hoffman, Matthew D. and Jebara, Tony},
biburl = {https://www.bibsonomy.org/bibtex/2dd1ec2f44220133f4b6cc630aee01ffc/jk_itwm},
description = {Variational Autoencoders for Collaborative Filtering},
interhash = {8771551ee37dcf40dcb3c7ea8089c36b},
intrahash = {dd1ec2f44220133f4b6cc630aee01ffc},
keywords = {autoencoder unsupervised variational-ae},
note = {cite arxiv:1802.05814. Comment: 10 pages, 3 figures. WWW 2018},
timestamp = {2018-02-19T06:28:09.000+0100},
title = {Variational Autoencoders for Collaborative Filtering},
url = {http://arxiv.org/abs/1802.05814},
year = 2018
}