LC's Linux machines set processor affinity by default. This is handled at the operating system level with hooks into the SLURM resource manager. The default behavior attempts to accommodate multi-threaded tasks by assigning more than one CPU per task when the number of CPUs on the node is evenly divisible by the number of tasks running on it. If the number of tasks running equals the number of CPUs, each task is assigned a single CPU in sequential order. Users can control processor affinity by using the --auto-affinity option with the srun command (Note: disregard the SLURM documentation describing how to do this, since this has changed):

    srun --auto-affinity=args

where args is a comma-separated list of one or more of the following:

cpus_per_task=N    Allocate N CPUs to each task
cpt=N              Shorthand for cpus_per_task
help               Display usage information
off                Disable automatic CPU affinity
rev(erse)          Allocate the last CPU first instead of starting with CPU0
start=N            Start affinity assignment at CPU N. If assigning CPUs in reverse, start N CPUs from the last CPU.
v(erbose)          Print the CPU affinity list for each remote task
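The default block assignment described above (and the effect of cpus_per_task=N) can be sketched as follows. This is a hypothetical model for illustration only, not SLURM code; default_affinity is an assumed helper name:

```python
def default_affinity(ncpus, ntasks):
    """Model of the default behavior described above: when ntasks
    divides ncpus evenly, task t is bound to a sequential block of
    ncpus // ntasks CPUs. (Hypothetical helper, not part of SLURM.)"""
    if ncpus % ntasks != 0:
        return None  # the simple block assignment does not apply
    per_task = ncpus // ntasks
    return {t: list(range(t * per_task, (t + 1) * per_task))
            for t in range(ntasks)}

# 8 tasks on a 16-CPU node: task 0 -> CPUs 0,1; task 1 -> CPUs 2,3; ...
print(default_affinity(16, 8))
```

For 8 tasks on a 16-CPU node this yields two sequential CPUs per task, which matches the verbose example output shown below.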


    srun -n64 --auto-affinity=verbose    (display how affinity is assigned)
    srun -n64 --auto-affinity=off        (turn off processor affinity)

Example output:

% srun -ppdebug -n8 --auto-affinity=verbose hostname
auto-affinity: local task 1: CPUs: 2,3
auto-affinity: local task 2: CPUs: 4,5
auto-affinity: local task 0: CPUs: 0,1
hera7
hera7
hera7
auto-affinity: local task 3: CPUs: 6,7
auto-affinity: local task 6: CPUs: 12,13
auto-affinity: local task 4: CPUs: 8,9
auto-affinity: local task 5: CPUs: 10,11
auto-affinity: local task 7: CPUs: 14,15
hera7
hera7
hera7
hera7
hera7
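A task can also verify the binding it actually received. A minimal sketch using Python's os.sched_getaffinity (Linux-only; it could be run as the task executable under srun, though it also works from an ordinary shell):

```python
import os

# pid 0 means "the calling process". Under srun with affinity enabled,
# each task reports only the CPUs it was bound to, e.g. {2, 3} for
# local task 1 in the output above.
allowed = sorted(os.sched_getaffinity(0))
print("may run on CPUs:", allowed)
```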