Train models on CPU

VISSL supports training any model on CPUs. Typically, this involves setting MACHINE.DEVICE=cpu and adjusting the distributed settings accordingly. For example, the config settings will look like:

MACHINE:
  DEVICE: cpu
DISTRIBUTED:
  BACKEND: gloo           # set to "gloo" for cpu only training
  NUM_NODES: 1            # no change needed
  NUM_PROC_PER_NODE: 2    # user sets this to the number of CPU training processes to use
  INIT_METHOD: tcp        # set to "file" if desired
  RUN_ID: auto            # set to a file path if using the "file" method; no change needed for tcp, a free port on the machine is automatically detected
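These settings map onto the arguments of torch.distributed.init_process_group. A minimal sketch of that translation (the helper name distributed_params and the example port are ours, for illustration only; this is not VISSL's actual launcher code):

```python
# Hypothetical helper illustrating how the DISTRIBUTED settings above
# translate into torch.distributed.init_process_group arguments.
def distributed_params(cfg: dict) -> dict:
    # Total number of training processes across all nodes
    world_size = cfg["NUM_NODES"] * cfg["NUM_PROC_PER_NODE"]
    if cfg["INIT_METHOD"] == "tcp":
        # RUN_ID "auto" means a free port on the machine is auto-detected;
        # the port below is just an example
        init_method = "tcp://127.0.0.1:29500"
    else:
        # with the "file" method, RUN_ID holds the shared file path
        init_method = f"file://{cfg['RUN_ID']}"
    return {
        "backend": cfg["BACKEND"],      # "gloo" for CPU-only training
        "world_size": world_size,
        "init_method": init_method,
    }

cfg = {"BACKEND": "gloo", "NUM_NODES": 1, "NUM_PROC_PER_NODE": 2,
       "INIT_METHOD": "tcp", "RUN_ID": "auto"}
print(distributed_params(cfg))
```

Each of the NUM_PROC_PER_NODE worker processes would then call torch.distributed.init_process_group with arguments like these.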

Train anything on 1-gpu

If you have a configuration file (any VISSL-compatible config) that you want to run on 1 GPU only (for example: train SimCLR on 1 GPU), you don't need to modify the config file. VISSL provides a helper script that takes care of all the adjustments. This also facilitates debugging by allowing users to insert pdb in their code.

VISSL also takes care of auto-scaling the learning rate for various schedules (cosine, multistep, step, etc.) if you have enabled auto-scaling in the config. You can achieve all of this simply by using the script. An example usage:

cd $HOME/vissl
./dev/low_resource_1gpu_train_wrapper.sh config=test/integration_test/quick_swav
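The learning-rate auto-scaling follows the linear scaling rule: the base learning rate is multiplied by the ratio of the actual global batch size to the batch size the recipe was tuned for. A sketch of that rule (function name and arguments are ours, not VISSL's internal API):

```python
def scale_lr(base_lr: float, base_batch_size: int,
             num_gpus: int, batch_per_gpu: int) -> float:
    """Linear scaling rule: lr is scaled by global_batch / base_batch.

    Illustrative sketch of LR auto-scaling; not VISSL's internal API.
    """
    global_batch = num_gpus * batch_per_gpu
    return base_lr * global_batch / base_batch_size

# An 8-GPU recipe tuned for lr=0.3 at global batch size 256,
# replayed on 1 GPU with 32 images per batch:
print(scale_lr(0.3, 256, 1, 32))   # 0.0375
```

Schedule endpoints (cosine, multistep, step, etc.) are rescaled the same way, so a 1-GPU run follows a proportionally smaller curve.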

Train on SLURM cluster

VISSL supports SLURM by default for training models. VISSL code automatically detects if the training environment is SLURM based on SLURM environment variables like SLURM_NODEID, SLURMD_NODENAME, SLURM_STEP_NODELIST.
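A minimal sketch of this detection (the helper name is ours; VISSL's actual check may consult more variables):

```python
import os

# SLURM exports variables like these inside every job step; their presence
# is enough to tell a SLURM launch apart from a plain local run.
SLURM_VARS = ("SLURM_NODEID", "SLURMD_NODENAME", "SLURM_STEP_NODELIST")

def in_slurm_env(environ=os.environ) -> bool:
    # Hypothetical helper: True if any SLURM job variable is set
    return any(var in environ for var in SLURM_VARS)

print(in_slurm_env({"SLURM_NODEID": "0"}))  # True
print(in_slurm_env({}))                     # False
```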

VISSL also provides a helper bash script dev/launch_slurm.sh that allows launching a given training on SLURM. This script uses the content of the configuration to allocate the right number of nodes and GPUs on SLURM.

More precisely, the number of nodes and the number of GPUs per node to allocate are driven by the usual DISTRIBUTED training configuration:

DISTRIBUTED:
  NUM_NODES: 1            # number of machines to allocate on SLURM
  NUM_PROC_PER_NODE: 2    # user sets this to the number of GPUs to use per node

The more SLURM-specific options are located in the SLURM configuration block:

# ----------------------------------------------------------------------------------- #
# DISTRIBUTED TRAINING ON SLURM: Additional options for SLURM node allocation
# (options like number of nodes and number of GPUs by node are taken from DISTRIBUTED)
# ----------------------------------------------------------------------------------- #
SLURM:
  # Whether or not to run the job on SLURM
  USE_SLURM: false
  # Name of the job on SLURM
  NAME: "vissl"
  # Comment of the job on SLURM
  COMMENT: "vissl job"
  # Partition of SLURM on which to run the job. This is a required field if using SLURM.
  PARTITION: ""
  # Where the logs produced by the SLURM jobs will be output
  LOG_FOLDER: "."
  # Maximum number of hours / minutes needed by the job to complete. Above this limit, the job might be pre-empted.
  TIME_HOURS: 72
  TIME_MINUTES: 0
  # Additional constraints on the hardware of the nodes to allocate (example 'volta' to select a volta GPU)
  CONSTRAINT: ""
  # GB of RAM memory to allocate for each node
  MEM_GB: 250
  # TCP port on which the workers will synchronize themselves with torch distributed
  PORT_ID: 40050
  # Number of CPUs per GPU to request on the cluster
  NUM_CPU_PER_PROC: 8
  # Any other parameters for slurm (e.g. account, hint, distribution, etc.) as dictated by submitit.
  # Please see the submitit documentation for the full list of accepted parameters.
  ADDITIONAL_PARAMETERS: {}

Users can customize these values by using the standard hydra override syntax (same as for any other item in the configuration), or can modify the script to fit their needs.
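For intuition, a dotted override such as config.SLURM.NAME=lin_eval walks the nested configuration and replaces one leaf value. A toy re-implementation (real hydra also parses value types and validates keys; this sketch treats every value as a string):

```python
def apply_override(cfg: dict, override: str) -> None:
    # Toy version of a hydra-style "a.b.c=value" override
    path, value = override.split("=", 1)
    keys = path.split(".")
    node = cfg
    for key in keys[:-1]:
        # Descend into (or create) each intermediate mapping
        node = node.setdefault(key, {})
    node[keys[-1]] = value

cfg = {"config": {"SLURM": {"NAME": "vissl", "PARTITION": ""}}}
apply_override(cfg, "config.SLURM.NAME=lin_eval")
apply_override(cfg, "config.SLURM.PARTITION=dev")
print(cfg["config"]["SLURM"])  # {'NAME': 'lin_eval', 'PARTITION': 'dev'}
```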


To run a linear evaluation benchmark on a chosen checkpoint, on the SLURM partition named “dev”, with the name “lin_eval”:

./dev/launch_slurm.sh \
    config=benchmark/linear_image_classification/imagenet1k/eval_resnet_8gpu_transfer_in1k_linear \
    config.MODEL.WEIGHTS_INIT.PARAMS_FILE=/path/to/my/checkpoint.torch \
    config.SLURM.NAME=lin_eval \
    config.SLURM.PARTITION=dev

To run a distributed training of SwAV on 8 nodes where each machine has 8 GPUs and for 100 epochs, on the default partition, with the name “swav_100ep_rn50_in1k”:

./dev/launch_slurm.sh \
    config=pretrain/swav/swav_8node_resnet \
    config.OPTIMIZER.num_epochs=100 \
    config.SLURM.NAME=swav_100ep_rn50_in1k