Abstract
One of the challenges in machine learning research is to ensure that
presented and published results are sound and reliable. Reproducibility, that
is obtaining similar results as presented in a paper or talk, using the same
code and data (when available), is a necessary step to verify the reliability
of research findings. Reproducibility is also an important step to promote open
and accessible research, thereby allowing the scientific community to quickly
integrate new findings and convert ideas to practice. Reproducibility also
promotes the use of robust experimental workflows, which potentially reduce
unintentional errors. In 2019, the Neural Information Processing Systems
(NeurIPS) conference, the premier international conference for research in
machine learning, introduced a reproducibility program, designed to improve the
standards across the community for how we conduct, communicate, and evaluate
machine learning research. The program contained three components: a code
submission policy, a community-wide reproducibility challenge, and the
inclusion of the Machine Learning Reproducibility checklist as part of the
paper submission process. In this paper, we describe each of these components,
how they were deployed, and what we were able to learn from this initiative.