Singularity containers on GHPC
GHPC provides Singularity (singularity-ce 4.2.1) for running containerized workflows on compute nodes.
Important policy: run images from /scratch
On GHPC, container images must be executed from /scratch.
This means any singularity exec, run, or shell command must reference an image file located under /scratch.
Allowed:
singularity exec /scratch/$USER/containers/myimage.sif <command>
Not allowed:
singularity exec $HOME/myimage.sif <command>
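If you want misplaced images to fail fast, a small guard in your own scripts can enforce the policy before singularity runs. This is a sketch, not a GHPC-provided tool; the function name under_scratch is our own.

```shell
# Hypothetical helper (not part of GHPC's tooling): succeed only when
# the image path lives under /scratch, per the policy above.
under_scratch() {
  case "$1" in
    /scratch/*) return 0 ;;  # compliant: image is under /scratch
    *) echo "policy violation: $1 is not under /scratch" >&2
       return 1 ;;
  esac
}

# Guarded invocation: the exec only runs for a compliant path, e.g.
# under_scratch "$IMG" && singularity exec "$IMG" <command>
```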
You can pull images yourself. The recommended workflow is:
- Start a job (interactive or batch)
- Pull the image into /scratch (job-local scratch is recommended)
- Run from /scratch
Interactive example (srun)
Request an interactive session on a compute node:
srun -N 1 -n 1 --mem=1024 -t 1:00:00 -J my_ishell --pty bash
Pull a small test image to /scratch and run it:
mkdir -p /scratch/$USER/containers
cd /scratch/$USER/containers
singularity pull alpine_latest.sif docker://alpine:latest
singularity exec /scratch/$USER/containers/alpine_latest.sif cat /etc/os-release
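By default Singularity keeps its image cache and temporary build space under $HOME/.singularity. Optionally (this is a convenience, not a stated site requirement), point both at /scratch before pulling, so large image layers stay off your home directory:

```shell
# Optional: redirect Singularity's cache and temp space to /scratch
# before pulling. Directory names here are our own choice.
export SINGULARITY_CACHEDIR=/scratch/$USER/sing_cache
export SINGULARITY_TMPDIR=/scratch/$USER/sing_tmp
mkdir -p "$SINGULARITY_CACHEDIR" "$SINGULARITY_TMPDIR"
```

The batch template in the next section sets the same two variables inside the job-local scratch directory.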
Batch job template (recommended)
This SLURM template:
- Creates a job-local directory in /scratch/$USER/$SLURM_JOBID
- Pulls the image into that directory
- Runs the workload inside the container
- Cleans up at the end
Save as singularity_template.slurm:
#!/bin/bash
#--------------------------------------------------------------------------#
# Edit Job specifications #
#--------------------------------------------------------------------------#
#SBATCH -p ghpc # Name of the queue
#SBATCH -N 1 # Number of nodes (DO NOT CHANGE)
#SBATCH -n 1 # Number of CPU cores
#SBATCH --mem=1024 # Memory in MiB (10 GiB = 10 * 1024 MiB)
#SBATCH -J singularity_template # Name of the job
#SBATCH --output=slurm_%x_%A.out # STDOUT
#SBATCH --error=slurm_%x_%A.err # STDERR
#SBATCH -t 1:00:00 # Job max time
# Create a temporary directory for the job in local storage - DO NOT CHANGE #
TMPDIR=/scratch/$USER/$SLURM_JOBID
export TMPDIR
mkdir -p "$TMPDIR"
#=========================================================================#
# Singularity setup (GHPC policy: run images from /scratch) #
#=========================================================================#
IMG_DIR="$TMPDIR/containers"
IMG="$IMG_DIR/alpine_latest.sif"
mkdir -p "$IMG_DIR"
# Keep Singularity cache/tmp inside job scratch (faster, avoids $HOME)
export SINGULARITY_TMPDIR="$TMPDIR/sing_tmp"
export SINGULARITY_CACHEDIR="$TMPDIR/sing_cache"
mkdir -p "$SINGULARITY_TMPDIR" "$SINGULARITY_CACHEDIR"
echo "Job started at $(date '+%d_%m_%y_%H_%M_%S') on $(hostname)"
echo "TMPDIR=$TMPDIR"
# Pull container to /scratch (allowed) and run it from /scratch (required)
singularity pull "$IMG" docker://alpine:latest
singularity exec "$IMG" cat /etc/os-release
singularity exec "$IMG" id
#=========================================================================#
# Your job script (example workload inside container) #
#=========================================================================#
echo "Step 1: Generating and sorting random numbers (inside container)"
singularity exec --bind "$TMPDIR:/work" "$IMG" sh -c '
cd /work
for i in $(seq 1 500000); do
echo $RANDOM >> SomeRandomNumbers.txt
done
sort -n SomeRandomNumbers.txt > SomeRandomNumbers_sorted.txt
echo "Done. Output files:"
ls -lh SomeRandomNumbers*.txt | head
'
echo "Job completed at $(date '+%d_%m_%y_%H_%M_%S')"
#=========================================================================#
# Cleanup DO NOT REMOVE OR CHANGE #
#=========================================================================#
cd "$SLURM_SUBMIT_DIR"
rm -rf "/scratch/$USER/$SLURM_JOBID"
Submit it:
sbatch singularity_template.slurm
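To follow the job after submission, the usual SLURM client commands work. This is generic SLURM usage, not GHPC-specific; it assumes sbatch's --parsable flag to capture the job ID:

```shell
# Capture the job ID at submission time (sbatch --parsable prints it bare).
jobid=$(sbatch --parsable singularity_template.slurm)
squeue -j "$jobid"                                  # pending/running state
tail -f "slurm_singularity_template_${jobid}.out"   # follow live stdout
```

The output filename follows the template's --output pattern (slurm_%x_%A.out, with the job name and job ID filled in).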
Notes
- Prefer running container workloads in batch jobs (sbatch) or interactive sessions (srun), not on login/console nodes.
- Use --bind to mount your job directory into the container (the example above mounts $TMPDIR to /work).
- If you have questions about MPI, GPUs, or performance tuning, include your job script and error output when requesting support.
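The --bind option takes src[:dest[:opts]], and multiple bind specs are separated by commas. A sketch of a two-mount invocation (the dataset path and image name here are illustrative, not GHPC defaults):

```shell
# Mount the job directory read-write at /work and a dataset read-only
# at /data; the /data path and the image name are made-up examples.
singularity exec \
  --bind "$TMPDIR:/work,/scratch/$USER/data:/data:ro" \
  /scratch/$USER/containers/myimage.sif \
  sh -c 'ls /work && ls /data'
```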