Interactive shell on cluster
As mentioned in the Console section of this guide, the console[1,2] servers are only a front end for submitting jobs to the cluster and, occasionally, for building scripts and peeking at results. If you need to do scientific work interactively, or have a small amount of manual work, you must do it on a node in the cluster and NOT directly on the console servers. To get a shell prompt on one of the cluster nodes, you can submit what is called an interactive job, which gives you a command-line prompt (instead of running a script) when the job runs.
The same arguments that you would use inside a job script apply to an interactive job as well. For example:
asampath@console1:[~] > srun -N1 -J testing_for_guide --pty bash
asampath@sky012:[~] >
Here, I requested 1 node, gave the job a label of "testing_for_guide" so that I can identify this session later, and asked to be dropped into a bash shell. The result of this request is the change of prompt, in this case to sky012, which is a cluster node.
If you open another session to console[1,2] and check the status of the job queue, you'd see:
asampath@console1:[~] > myst
JOBID PARTITION NAME USER ST TIME NODES CPUS MIN_MEMORY NODELIST(REASON)
446 ghpc testing_ asampath R 1:22 1 2 20G sky012.ghpc.au.dk
This shows that interactive sessions are also treated as jobs by SLURM; the only difference is that you work in them live.
If you type exit and hit return, you give up the interactive session and SLURM ends the job.
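For example (node and prompt names here are illustrative), ending the session returns you to the console server you started from:
asampath@sky012:[~] > exit
asampath@console1:[~] >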
A more sophisticated example of an interactive job would be:
asampath@console1:[~] > srun -N 1 -n 4 --mem=1024 -t 1:00:00 -J testing_for_guide --pty bash
asampath@sky012:[~] >
Here, I requested 1 node, 4 CPUs, 1024 MiB of memory, a maximum run time of 1 hour, and a label of "testing_for_guide", and asked to be dropped into a bash shell.
If you look at the status of this job in the queue, you'd see the resources allocated as expected:
asampath@console1:[~] > myst
JOBID PARTITION NAME USER ST TIME NODES CPUS MIN_MEMORY NODELIST(REASON)
449 ghpc testing_ asampath R 0:04 1 4 1G sky012.ghpc.au.dk
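You can also verify the allocation from inside the session through SLURM's standard environment variables. The values shown below are illustrative and assume the job above:
asampath@sky012:[~] > echo $SLURM_JOB_ID $SLURM_CPUS_ON_NODE $SLURM_MEM_PER_NODE
449 4 1024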
ishell
ishell was later introduced to simplify the process of running interactive jobs on the GHPC. It provides a fast and straightforward way to launch an interactive shell session on one of the compute nodes. Avoid using the console servers for tasks like running quick scripts or performing activities unrelated to managing SLURM jobs. By running ishell, you are instantly connected to a compute node with access to 1 CPU core and 2 GiB of memory. In the background, ishell runs the following command: srun -p ghpc --mem=2G --pty bash. It automatically selects the appropriate queue, ghpc or nav, depending on whether you are a QGG or a nav user.
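For the curious, such a wrapper amounts to a small shell script along the following lines. This is only a sketch of the idea, not the actual GHPC script; in particular, checking for membership of a group called "nav" is an assumption made for illustration.
#!/bin/bash
# Sketch of an ishell-style wrapper (illustrative; not the actual GHPC implementation).
# Pick the partition from the user's group, then start an interactive bash session.
if id -nG "$USER" | grep -qw nav; then   # assumption: nav users belong to a "nav" group
    partition=nav
else
    partition=ghpc
fi
exec srun -p "$partition" --mem=2G --pty bash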
thami@console1:[~] > ishell
srun: job 576839 queued and waiting for resources
srun: job 576839 has been allocated resources
(base) thami@sky006:~$
thami@console1:[~] > myst
JOBID PARTITION NAME USER ST TIME_LIMIT TIME NODES CPUS MIN_MEMORY NODELIST(REASON)
576839 ghpc bash thami R 12:00:00 1:04 1 1 2G sky006