Train Jigsaw model

VISSL reproduces the self-supervised approach proposed by Mehdi Noroozi and Paolo Favaro in their paper Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles.
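The pretext task can be sketched in a few lines: an image is split into a grid of patches, the patches are reordered according to one permutation from a fixed set, and the network is trained to predict which permutation was applied. A minimal illustrative sketch (VISSL's actual pipeline also applies random crops, color jitter, and per-patch normalization):

```python
import numpy as np

def make_jigsaw_example(image, permutation, grid=3):
    """Split a square image into grid x grid patches and reorder them.

    The pretext label is the index of the permutation used, so the
    network must learn spatial structure to solve the puzzle.
    """
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    patches = [
        image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
        for r in range(grid)
        for c in range(grid)
    ]
    # Reorder the 9 patches according to the chosen permutation.
    return [patches[i] for i in permutation]

# Example: a 9-patch puzzle with one fixed permutation.
img = np.arange(36 * 36 * 3).reshape(36, 36, 3)
perm = [8, 0, 3, 5, 2, 7, 1, 6, 4]
shuffled = make_jigsaw_example(img, perm)
assert len(shuffled) == 9 and shuffled[0].shape == (12, 12, 3)
```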

How to train Jigsaw model

VISSL provides a yaml configuration file containing the exact hyperparameter settings to reproduce the model. VISSL implements all the components (data augmentations, collators, etc.) required for this approach.

To train a ResNet-50 model on 8 GPUs on the ImageNet-1K dataset using 2000 permutations, run:

python tools/ config=pretrain/jigsaw/jigsaw_8gpu_resnet
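Conceptually, the model applies a shared trunk (ResNet-50 in the config above) to each of the 9 shuffled patches, concatenates the per-patch features, and classifies which of the 2000 permutations was applied. A toy numpy sketch of this siamese-style forward pass (the one-layer "trunk" is a stand-in for the real network, not VISSL's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def trunk(patch, W):
    # Shared "trunk": one linear layer + ReLU as a stand-in for ResNet-50.
    return np.maximum(patch @ W, 0.0)

def jigsaw_forward(patches, W_trunk, W_head):
    # The same trunk weights are applied to all 9 patches; the features
    # are concatenated and a linear head scores each permutation class.
    feats = np.concatenate([trunk(p, W_trunk) for p in patches])
    return feats @ W_head  # logits over the permutation classes

n_perm = 2000            # matches the 2000-permutation setting above
patch_dim, feat_dim = 27, 16
W_trunk = rng.standard_normal((patch_dim, feat_dim))
W_head = rng.standard_normal((9 * feat_dim, n_perm))
patches = [rng.standard_normal(patch_dim) for _ in range(9)]
logits = jigsaw_forward(patches, W_trunk, W_head)
assert logits.shape == (n_perm,)
```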

Training with different permutations

To adjust the number of permutations and retrain, VISSL provides configuration files with the necessary changes, which you can select from the command line. For example, to train with 10K permutations instead, run:

python tools/ config=pretrain/jigsaw/jigsaw_8gpu_resnet \

Similarly, you can train with 100 permutations, or create new config files for different permutation settings using the above configs as examples.
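Permutation sets of this kind are chosen, following Noroozi and Favaro, by greedily maximizing the pairwise Hamming distance between permutations so the classes stay distinguishable. A small sketch of that greedy selection, using 4 tiles to keep the search space tiny (4! = 24 candidates; the real sets use 9 tiles):

```python
import itertools
import numpy as np

def select_permutations(n_select, n_tiles=4, seed=0):
    """Greedily pick permutations with large pairwise Hamming distance.

    Each new permutation is the candidate farthest (in Hamming distance)
    from its nearest already-chosen permutation.
    """
    all_perms = np.array(list(itertools.permutations(range(n_tiles))))
    rng = np.random.default_rng(seed)
    chosen = [all_perms[rng.integers(len(all_perms))]]
    while len(chosen) < n_select:
        # Distance of every candidate to its nearest already-chosen perm.
        dists = np.min(
            [(all_perms != c).sum(axis=1) for c in chosen], axis=0
        )
        chosen.append(all_perms[int(np.argmax(dists))])
    return np.array(chosen)

perms = select_permutations(10)
assert perms.shape == (10, 4)
# Every selected permutation is distinct.
assert len({tuple(p) for p in perms}) == 10
```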

Vary the number of gpus

VISSL makes it easy to vary the number of GPUs used in training. For example, to train the Jigsaw model on 4 machines (32 GPUs) or on 1 GPU, the required changes are:

  • Training on 1-gpu:

python tools/ config=pretrain/jigsaw/jigsaw_8gpu_resnet \

  • Training on 4 machines, i.e. 32 GPUs:

python tools/ config=pretrain/jigsaw/jigsaw_8gpu_resnet \


Please adjust the learning rate following the linear scaling rule from ImageNet in 1-Hour if you change the number of GPUs.
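The linear scaling rule from that paper says: when the global batch size grows by a factor of k, multiply the learning rate by k. A one-line sketch (the base values are illustrative, not VISSL defaults):

```python
def scale_lr(base_lr, base_batch_size, new_batch_size):
    """Linear scaling rule (Goyal et al., "ImageNet in 1 Hour"):
    scale the learning rate proportionally to the global batch size."""
    return base_lr * new_batch_size / base_batch_size

# 8 GPUs -> 32 GPUs with the same per-GPU batch: 4x batch, 4x learning rate.
assert scale_lr(0.1, 256, 1024) == 0.4
```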

Pre-trained models

See the VISSL Model Zoo for the PyTorch models pre-trained with VISSL using the Jigsaw approach, along with their benchmarks.


Following Goyal et al., we use the exact permutation files for Jigsaw training available here, and refer users to use the files directly from the above source.


Citation

@misc{noroozi2016unsupervised,
    title={Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles},
    author={Mehdi Noroozi and Paolo Favaro},
    year={2016},
    eprint={1603.09246},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}