How Aurora scheduling works

In this section I try to explain how your job requests are handled on the hep partition.

Users within the same project are treated as one single user.

Inside a project, say HEP 2016/1-4, it doesn't matter who you are: if you're a member of the project, your requests will be processed FIFO (first in, first out). The first to submit will be the first to have their jobs processed.

As long as the SLURM scheduler can find resources matching those requested by a job, the job will be started soon after submission.

But if there are no resources available, the job will have to wait in line, and any job submitted earlier for the same project will have a higher priority due to its longer waiting time. There are some adjustments to priority based on job size, but there is no priority or fair share among members of the same project.
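
In practice you can inspect the priority SLURM assigns to your pending jobs. A minimal sketch using standard SLURM tools (the exact priority factors shown depend on the site configuration; <jobid> is a placeholder):

  # List your queued jobs together with their current priority
  squeue -u $USER -o "%.10i %.9P %.20j %.10Q %.10M"

  # Break a pending job's priority into its factors
  # (age, i.e. waiting time, job size, fair share, ...)
  sprio -j <jobid>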

It's as if all members were a single person submitting jobs, with many names.

The exception to this rule is that there are limits to how many jobs a single user can submit and how many resources a single user can utilise simultaneously. Thus, if a user has reached any of these limits, other members of the same project are still able to submit and run jobs within the bounds of similar, but higher limits for the project as a whole.
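
To see which limits apply to you and to your project, you can query the SLURM accounting database. A sketch, assuming the limits are stored as SLURM associations (the exact fields Lunarc uses are an assumption):

  # Show per-user limits (MaxJobs, MaxSubmit) and the project-wide
  # group limits (GrpJobs, GrpTRES) for your association
  sacctmgr show assoc where user=$USER format=Account,User,MaxJobs,MaxSubmit,GrpJobs,GrpTRES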

Considerations on interactive sessions

If the cluster is busy, requesting an interactive session may take a long time or even fail. The scheduler will happily allocate resources for a user, but if the user asks for an interactive session with, say, 6 cores and no machine has 6 cores free, the scheduler cannot fulfil the request.
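
For illustration, a generic SLURM request for such a session could look like this (<your_project> is a placeholder for your project's account name; the command waits until the allocation can be granted):

  # Ask for an interactive shell with 6 cores for one hour on hep
  srun -A <your_project> -p hep -n 6 -t 01:00:00 --pty bash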

If the user submits a batch job instead, the scheduler will queue it with the FIFO strategy described above.

:!: So one suggestion from Lunarc is to submit the long, slow jobs first and the fast, short ones later, so that you give others more and more space as the slow ones finish.
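
As a sketch of that ordering (the script names are hypothetical):

  # Submit the long, slow production jobs first ...
  sbatch -t 48:00:00 long_production.sh
  # ... and the short, fast ones afterwards
  sbatch -t 00:30:00 quick_postprocess.sh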

Fairness among projects running on the hep partition

Fairness is enforced among the three projects using the hep partition (HEP 2016/1-3, HEP 2016/1-4, HEP 2016/1-5): each project is allocated 1/3 of the computing power (core-hours) per month. Once a project exceeds its 1/3, it becomes harder for its members to get resources while the other projects are running jobs, because a debt of computing power has built up towards them.

This happens only when the cluster is being used intensively by all three projects, which is quite rare at the time of writing.

But if at some point each project is using a considerable amount of computing power, all project members will certainly have to wait in the queue to be allocated. Remember that the allocation is what you ASK for in the sbatch script: once it is allocated, it is yours and others cannot take it.
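
You can check how much of its share a project has already consumed with SLURM's sshare tool, assuming fair-share accounting is enabled on Aurora (<your_project> is again a placeholder):

  # Show raw usage and the current fair-share factor of your project;
  # a low FairShare value means the project has used more than its share
  sshare -A <your_project> -a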

Suggestions for self-regulating usage inside a project

  • The project members should interact on a regular basis to understand what their expected computing needs are;
  • Those needs should be translated into *expected resource allocation requests*;
  • These resource allocation requests should be documented somewhere FIXME on the cluster;
  • All users should use the suggested allocation requests from that file when submitting (see the sketch after this list);
  • In order to preserve the possibility of using all of the nodes when needed, these allocations should be flexible enough to be changed on the fly according to the needs of the project members.
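
As a sketch of what such a documented request could look like, here is a hypothetical shared file plus the matching sbatch header. All names and numbers are made up for illustration:

  # allocations.txt (hypothetical shared file on the cluster)
  #   user     cores  walltime   purpose
  #   alice       16  24:00:00   ntuple production
  #   bob          4  04:00:00   plotting

  # Matching header in alice's sbatch script
  #SBATCH -A <your_project>
  #SBATCH -p hep
  #SBATCH -n 16
  #SBATCH -t 24:00:00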