N. Konstantinov and C. Lampert (2019). arXiv:1901.10310. Comment: Accepted to the International Conference on Machine Learning (ICML), 2019; camera-ready version.
Abstract
Modern machine learning methods often require more data for training than a
single expert can provide. Therefore, it has become a standard procedure to
collect data from external sources, e.g. via crowdsourcing. Unfortunately, the
quality of these sources is not always guaranteed. As additional complications,
the data might be stored in a distributed way, or might even have to remain
private. In this work, we address the question of how to learn robustly in such
scenarios. Studying the problem through the lens of statistical learning
theory, we derive a procedure that allows for learning from all available
sources, yet automatically suppresses irrelevant or corrupted data. We show by
extensive experiments that our method provides significant improvements over
alternative approaches from robust statistics and distributed optimization.
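The abstract describes a procedure that learns from all available sources while automatically suppressing irrelevant or corrupted ones. As a loose illustration of that general idea (not the procedure derived in the paper), one could down-weight sources whose summary statistics deviate from a small trusted reference sample; the function `source_weights`, its distance measure, and the exponential weighting below are all hypothetical choices for the sketch:

```python
import numpy as np

def source_weights(reference, sources, temperature=1.0):
    """Assign each source a weight that decays with the distance between
    its feature mean and that of a small trusted reference sample.
    Illustrative heuristic only, not the paper's derived procedure."""
    ref_mean = reference.mean(axis=0)
    raw = []
    for data in sources:
        gap = np.linalg.norm(data.mean(axis=0) - ref_mean)
        raw.append(np.exp(-gap / temperature))
    w = np.array(raw)
    return w / w.sum()  # normalize so the weights sum to 1

# Toy example: two clean sources and one corrupted (shifted) source.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(50, 3))
clean_a = rng.normal(0.0, 1.0, size=(200, 3))
clean_b = rng.normal(0.0, 1.0, size=(200, 3))
corrupted = rng.normal(5.0, 1.0, size=(200, 3))

w = source_weights(reference, [clean_a, clean_b, corrupted])
# The shifted source receives a much smaller weight than the clean ones.
```

A weighted empirical risk (each source's loss scaled by its weight) would then let training draw on all sources while the corrupted one contributes almost nothing.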
%0 Journal Article
%1 konstantinov2019robust
%A Konstantinov, Nikola
%A Lampert, Christoph
%D 2019
%K learning robustness theory
%T Robust Learning from Untrusted Sources
%U http://arxiv.org/abs/1901.10310
@article{konstantinov2019robust,
added-at = {2019-08-12T18:16:02.000+0200},
author = {Konstantinov, Nikola and Lampert, Christoph},
biburl = {https://www.bibsonomy.org/bibtex/26ff9e9dcb4d2d65e9116bce64eabfb8d/kirk86},
description = {[1901.10310] Robust Learning from Untrusted Sources},
interhash = {b0507ea6453f7b5ebf1146dbcc62e684},
intrahash = {6ff9e9dcb4d2d65e9116bce64eabfb8d},
keywords = {learning robustness theory},
note = {Accepted to International Conference on Machine Learning (ICML), 2019; camera-ready version. arXiv:1901.10310},
timestamp = {2019-08-12T18:16:02.000+0200},
title = {Robust Learning from Untrusted Sources},
url = {http://arxiv.org/abs/1901.10310},
year = 2019
}