A YAML cluster configuration file for a Slurm resource manager on an HPC cluster looks like:
# /etc/ood/config/clusters.d/my_cluster.yml
---
v2:
  metadata:
    title: "My Cluster"
  login:
    host: "my_cluster.my_center.edu"
  job:
    adapter: "slurm"
    cluster: "my_cluster"
    bin: "/path/to/slurm/bin"
    conf: "/path/to/slurm.conf"
    # bin_overrides:
    #   sbatch: "/usr/local/bin/sbatch"
    #   squeue: ""
    #   scontrol: ""
    #   scancel: ""
with the following configuration options:
adapter
  This is set to slurm.

cluster
  The Slurm cluster name. Optional; passed through to the Slurm clients to select the cluster (Slurm's -M/--clusters option). Use of the cluster option is discouraged, because maintenance outages on the Slurm database will propagate to Open OnDemand. Instead, sites should use a different conf file for each cluster to limit maintenance outages (see the multi-cluster sketch below).

bin
  The path to the Slurm client installation binaries.

conf
  The path to the Slurm configuration file for this cluster. Optional.

submit_host
  A different host to ssh to before issuing Slurm commands. Optional.

bin_overrides
  Replacements/wrappers for Slurm's job submission and control clients. Optional. Supports the following clients: sbatch, squeue, scontrol, and scancel (see the sketch after this list).
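For example, a site that wraps sbatch (say, to inject site-specific options or logging) could uncomment bin_overrides and point only the client it wants to replace at the wrapper. This is a minimal sketch, not a drop-in configuration: the wrapper path /usr/local/bin/sbatch_wrapper is hypothetical, and clients that are not overridden are still resolved from the bin directory.

# /etc/ood/config/clusters.d/my_cluster.yml (excerpt)
# Hypothetical wrapper path; adjust to your site's script.
v2:
  job:
    adapter: "slurm"
    bin: "/path/to/slurm/bin"
    conf: "/path/to/slurm.conf"
    bin_overrides:
      # Only sbatch is replaced here; squeue, scontrol and scancel
      # continue to come from the bin directory above.
      sbatch: "/usr/local/bin/sbatch_wrapper"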
If you do not have a multi-cluster Slurm setup, you can remove the
cluster: "my_cluster" line from the above configuration file.
When installing Slurm, ensure that all nodes in your cluster, including the node running the Open OnDemand server, have the same MUNGE key installed. Read the Slurm Quick Start Administrator Guide for more information on installing and configuring Slurm itself.