Abstract
For the past few years, most published research on recommendation algorithms has been based on deep learning (DL) methods. Following common research practices in our field, these works usually demonstrate that a new DL method outperforms models not based on deep learning in offline experiments. This almost consistent success of DL-based models is, however, not observed in recommendation-related machine learning competitions such as the challenges held alongside the yearly ACM RecSys conference. Instead, the winning solutions mostly consist of substantial feature engineering efforts and the use of gradient boosting or ensemble techniques. In this paper we investigate possible reasons for this surprising phenomenon. We consider multiple possible factors, such as the characteristics and complexity of the problem settings, datasets, and DL methods; the background of the competition participants; and the particularities of the evaluation approach.