SimCLR and simclr-pytorch
These are two PyTorch implementations of the same algorithm serving the same purpose; choosing between them depends on whether you prioritize community adoption and simplicity (sthalles/SimCLR) or multi-GPU optimization and faithful reproduction of the paper's results (AndrewAtanov/simclr-pytorch).
About SimCLR
sthalles/SimCLR
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
Implements contrastive learning with stochastic data augmentation: each image yields two augmented views, a ResNet encoder with an MLP projection head maps them to embeddings, and the NT-Xent loss pulls the two views of each image together while treating the other samples in the minibatch as negatives (SimCLR needs no memory bank or momentum encoder; those belong to methods like MoCo). Supports mixed-precision training via PyTorch's native AMP and evaluation through linear probing on frozen features. Includes reference configurations for the STL10 and CIFAR10 datasets with configurable projection head dimensionality and training hyperparameters.
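To make the loss concrete, here is a minimal sketch of NT-Xent as described in the SimCLR paper: two batches of embeddings (one per augmented view) are L2-normalized, pairwise cosine similarities are temperature-scaled, and each sample's positive is its counterpart view while everything else in the batch serves as negatives. This is an illustrative standalone function, not code taken from either repository.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) projection-head outputs for two augmented views
    of the same N images.
    """
    n = z1.size(0)
    # Concatenate both views and L2-normalize so dot products are cosine sims.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, D)
    sim = z @ z.t() / temperature                         # (2N, 2N) similarities
    sim.fill_diagonal_(float('-inf'))                     # exclude self-similarity
    # Sample i's positive is the other view of the same image: i+N (or i-N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: near-identical "views" should give a low loss.
torch.manual_seed(0)
z1 = torch.randn(8, 128)
z2 = z1 + 0.01 * torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

Because the negatives come from the same minibatch, the number of negative pairs grows with batch size, which is why SimCLR benefits from large minibatches.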
About simclr-pytorch
AndrewAtanov/simclr-pytorch
PyTorch implementation of SimCLR: supports multi-GPU training and closely reproduces results