Documentation

  • Requesting An Account
  • Cluster Specifications
    • Node Features (Constraints)
      • NVLink and NVSwitch
    • Node List
    • Partition List
      • Gypsum
    • Storage
  • Frequently Asked Questions
  • Connecting to Unity
    • SSH
    • Unity OnDemand
    • Connecting to Desktop VS Code
  • Managing Files
    • Command Line Interface (CLI)
    • Disk Quotas
    • FileZilla
    • Globus
    • Scratch: HPC Workspace
    • Unity OnDemand File Browser
  • Submitting Jobs
    • Batch Jobs
      • Array Batch Jobs
      • Large Job Counts
      • Monitor a batch job
    • Interactive CLI Jobs
    • Unity OnDemand
    • Slurm cheat sheet
  • Software Management
    • Conda
    • Modules
      • Module Usage
      • Module Hierarchy
    • Unity OnDemand
      • JupyterLab OnDemand
  • Tools & Software
    • ColabFold
    • R
    • Unity GPUs
  • Datasets
    • AI and ML
      • Code Llama
      • Imagenet
      • Imagenet 1K
      • LAION
      • Llama2
      • mixtral
    • Bioinformatics
      • BFD/MGnify
      • Big Fantastic Database
      • checkm
      • ColabFoldDB
      • dfam
      • EggNOG
      • GTDB
      • Kraken2
      • MGnify
      • NCBI BLAST databases
      • NCBI RefSeq database
      • PDB70
      • PDB70 for ColabFold
      • Protein Data Bank
      • Protein Data Bank database in mmCIF format
      • Protein Data Bank database in SEQRES records
      • Tara Oceans 18S amplicon
      • Tara Oceans MATOU gene catalog
      • Tara Oceans MGT transcriptomes
      • Uniclust30
      • UniProtKB
      • UniRef100
      • UniRef100 BLAST database
      • UniRef30
      • UniRef90
  • HPC Resources


Using SALLOC to Submit Jobs

salloc is a blocking command: it does not return control of your terminal until the command completes (not necessarily the job itself, just the allocation request). For example, if you run salloc srun /bin/hostname and resources are available right away, it prints the hostname of the allocated node. If resources are not available, the command blocks while your request is pending in the queue. Press Ctrl+C to cancel the request; note that once the job has started, cancelling it takes two Ctrl+C presses within one second.

When to use sbatch
If you need to run a single application multiple times, or you are running a non-interactive application, use sbatch instead of salloc: sbatch lets you specify all parameters in a batch file and is non-blocking.
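As a sketch, a minimal batch script for a comparable six-CPU request might look like the following. The job script contents here are illustrative, not Unity-specific defaults; the memory and time values are assumptions.

```shell
# Write a minimal batch script (illustrative contents; the cpu partition
# matches the salloc examples below).
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH -c 6            # 6 CPU threads per task
#SBATCH -p cpu          # partition to submit to
#SBATCH --mem 4G        # 4 GB of memory (assumed value)
#SBATCH -t 01:00:00     # 1-hour time limit (assumed value)
/bin/hostname           # the work to run on the allocated node
EOF
# Submit it (non-blocking; sbatch returns a job ID immediately):
# sbatch myjob.sh
```

Unlike salloc, sbatch returns immediately after queueing the job, so your terminal stays free while the job waits and runs.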

Note that, like sbatch, salloc can also run a batch file.

The command syntax is salloc <options> srun <executable> <args>

The options specify the resources you want allocated for the executable. The following are some of the most common options; to see all available parameters, run man salloc or see Slurm's salloc page.

  • -c <num> Number of CPUs (threads) to allocate to the job per task
  • -n <num> The number of tasks to allocate (for MPI)
  • -G <num> Number of GPUs to allocate to the job
  • --mem <num>[K|M|G|T] Memory to allocate to the job (in MB by default)
  • -p <partition> Partition to submit the job to
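The options above can be combined in a single request. As a sketch (the "gpu" partition name and the resource amounts here are assumptions for illustration, not recommended values):

```shell
# Build the combined request so the flags can be read together:
#   -n 1       one task
#   -c 6       six CPU threads for that task
#   -G 1       one GPU
#   --mem 16G  sixteen gigabytes of memory
#   -p gpu     submit to the (assumed) gpu partition
cmd='salloc -n 1 -c 6 -G 1 --mem 16G -p gpu'
echo "$cmd"   # on a Unity login node you would run this command directly
```

Check the Partition List page for the partitions actually available to your account before submitting.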

To run an interactive job with your default shell, just pass the resources required to salloc and do not specify a command:

salloc -c 6 -p cpu
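Once the allocation is granted, you are placed in a shell on a compute node. As a sketch, inside that shell you can inspect standard Slurm environment variables to confirm what you received (outside a job these variables are simply unset):

```shell
# Inside the interactive shell, Slurm exports job metadata:
jobid="${SLURM_JOB_ID:-<not in a job>}"
echo "job id: $jobid"
echo "cpus on node: ${SLURM_CPUS_ON_NODE:-<not in a job>}"
# Type "exit" to leave the shell and release the allocation.
```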

To run a GUI application on the cluster, you must use an interactive job and pass the --x11 argument:

salloc -c 6 -p cpu --x11 xclock
Warning: you cannot run an interactive or GUI job with the sbatch command; you must use salloc.
Last modified: Monday, July 22, 2024 at 3:15 PM.