We introduce Bootstrap Your Own Latent (BYOL), a new approach to
self-supervised image representation learning. BYOL relies on two neural
networks, referred to as online and target networks, that interact and learn
from each other. From an augmented view of an image, we train the online
network to predict the target network representation of the same image under a
different augmented view. At the same time, we update the target network with a
slow-moving average of the online network. While state-of-the-art methods rely
on negative pairs, BYOL achieves a new state of the art without them. BYOL
reaches $74.3\%$ top-1 classification accuracy on ImageNet using a linear
evaluation with a ResNet-50 architecture and $79.6\%$ with a larger ResNet. We
show that BYOL performs on par or better than the current state of the art on
both transfer and semi-supervised benchmarks. Our implementation and pretrained
models are available on GitHub.
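The abstract describes two coupled updates: the target network is an exponential moving average (EMA) of the online network, and the online network is trained to predict the target's representation of another augmented view. A minimal NumPy sketch of just these two pieces (function names, shapes, and the `tau` value are illustrative, not taken from the paper's released code):

```python
import numpy as np

def ema_update(target_params, online_params, tau=0.99):
    """Slow-moving average update of the target network:
    target <- tau * target + (1 - tau) * online."""
    return {name: tau * target_params[name] + (1.0 - tau) * online_params[name]
            for name in target_params}

def byol_loss(online_prediction, target_projection):
    """Mean squared error between l2-normalized vectors, as in BYOL.
    For unit vectors this equals 2 - 2 * cosine_similarity, so it is
    minimized when the online prediction matches the target projection."""
    q = online_prediction / np.linalg.norm(online_prediction, axis=-1, keepdims=True)
    z = target_projection / np.linalg.norm(target_projection, axis=-1, keepdims=True)
    return np.mean(np.sum((q - z) ** 2, axis=-1))
```

Only the online network receives gradients from `byol_loss`; the target network is updated exclusively through `ema_update`, which is what removes the need for negative pairs.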
@misc{grill2020bootstrap,
abstract = {We introduce Bootstrap Your Own Latent (BYOL), a new approach to
self-supervised image representation learning. BYOL relies on two neural
networks, referred to as online and target networks, that interact and learn
from each other. From an augmented view of an image, we train the online
network to predict the target network representation of the same image under a
different augmented view. At the same time, we update the target network with a
slow-moving average of the online network. While state-of-the-art methods rely
on negative pairs, BYOL achieves a new state of the art without them. BYOL
reaches $74.3\%$ top-1 classification accuracy on ImageNet using a linear
evaluation with a ResNet-50 architecture and $79.6\%$ with a larger ResNet. We
show that BYOL performs on par or better than the current state of the art on
both transfer and semi-supervised benchmarks. Our implementation and pretrained
models are available on GitHub.},
author = {Grill, Jean-Bastien and Strub, Florian and Altché, Florent and Tallec, Corentin and Richemond, Pierre H. and Buchatskaya, Elena and Doersch, Carl and Pires, Bernardo Avila and Guo, Zhaohan Daniel and Azar, Mohammad Gheshlaghi and Piot, Bilal and Kavukcuoglu, Koray and Munos, Rémi and Valko, Michal},
keywords = {cs.CV stat.ML},
note = {cite arxiv:2006.07733},
title = {Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning},
url = {http://arxiv.org/abs/2006.07733},
year = 2020
}