Understanding self-supervised and contrastive learning with “Bootstrap Your Own Latent” (BYOL)


Summary

Unlike prior work such as SimCLR and MoCo, the recent DeepMind paper Bootstrap Your Own Latent (BYOL) demonstrates a state-of-the-art method for self-supervised learning of image representations without an explicitly contrastive loss function. This simplifies training by removing the need for negative examples in the loss. We highlight two surprising findings from our work reproducing BYOL: (1) BYOL generally performs no better than random when batch normalization is removed, and (2) the presence of batch normalization implicitly causes a form of contrastive learning. These findings highlight the importance of contrast between positive and negative examples when learning representations, and help us gain a more fundamental understanding of how and why self-supervised learning works. The code used for this post can be found at https://github.com/untitled-ai/self_supervised.
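To make the "no negative examples" point concrete, here is a minimal NumPy sketch of the BYOL objective: the mean squared error between the L2-normalized online prediction and target projection, which equals 2 − 2 × cosine similarity. The function name and this standalone implementation are ours, not from the paper's code; note that the loss compares each image only to its own augmented view, with no negatives.

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    # Hypothetical helper: BYOL-style loss = mean squared error between
    # L2-normalized vectors, i.e. 2 - 2 * cosine similarity per pair.
    # Only positive pairs appear; there is no term over negative examples.
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    return np.mean(np.sum((p - z) ** 2, axis=1))

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))   # stand-in for a batch of embeddings
print(byol_loss(v, v))        # identical views give zero loss
```

Contrast this with InfoNCE-style losses (SimCLR, MoCo), which also include a denominator summing similarities to every other example in the batch; BYOL drops that term entirely, which is why the implicit contrast introduced by batch normalization turns out to matter.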

Source: untitled-ai.github.io/understanding-self-supervised-contrastive-learning.html
