  * HEP storage is a bit different from the others and there are a few additional rules for using it. Please read [[aurora_cluster:storage]]
  * Basic usage of the batch system is described here:
    * Using the SLURM batch system: http://lunarc-documentation.readthedocs.io/en/latest/batch_system/
    * Batch system rules. Note: these rules might be slightly different for us since we have our own partition. http://lunarc-documentation.readthedocs.io/en/latest/batch_system_rules/

===== Batch Script Examples =====

==== hep partition (Nuclear, Particle and Theoretical Physics) ====

A typical direct submission to ''hep'' nodes looks like:
<code bash>srun -p hep -A HEP2016-1-4 myscript.sh</code>
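An equivalent submission script for the ''hep'' partition follows the same pattern as the ''lu'' example further down. This is a minimal sketch, reusing the ''HEP2016-1-4'' account from the ''srun'' line above, that prints the hostname of the node and the PID of the bash process running the script:

<code bash>
#!/bin/bash
#
# Minimal sketch of a hep-partition script, mirroring the lu example below.
#SBATCH -A HEP2016-1-4
#SBATCH -p hep
#
hostname
srun echo $BASHPID;
</code>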
The script is submitted to the SLURM batch queue with the command:
<code bash>sbatch slurmexample.sh</code>
The results will be found in the folder where the above command is run, in a file named after the SLURM job ID.
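For example, assuming the job was assigned the (hypothetical) ID 123456, you can check its state in the queue and read its output once it has finished:

<code bash>
squeue -u $USER        # list your jobs that are still queued or running
cat slurm-123456.out   # default output file, named after the job ID
</code>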
==== lu partition ====

A typical direct submission to ''lu'' nodes looks like:
<code bash>srun -p lu -A lu2016-2-10 --reservation=lu2016-2-10 myscript.sh</code>

Here is an example of a typical SLURM submission script, ''slurmexample.sh'', written in bash. It prints the hostname of the node where the job is executed and the PID of the bash process running the script; the ''#SBATCH'' lines in its prologue select the account, partition and reservation:

<code bash>
#!/bin/bash
#
#SBATCH -A lu2016-2-10
#SBATCH -p lu
#SBATCH --reservation=lu2016-2-10
#
hostname
srun echo $BASHPID;
</code>

The script is submitted to the SLURM batch queue with the command:
<code bash>sbatch slurmexample.sh</code>

The results will be found in the folder where the above command is run, in a file named after the SLURM job ID.
===== Interactive access to nodes for code testing =====
<code bash>
interactive -t 00:60:00 -p hep -A HEP2016-1-4
</code>
<code bash>
interactive -t 00:60:00 -p lu -A lu2016-2-10 --reservation=lu2016-2-10
</code>
where ''-t 00:60:00'' is the time, in hours:minutes:seconds, that you want the interactive session to last. You can request as much time as you want, but mind that whatever you're running will be killed after the specified time.
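For instance, to ask for a four-hour interactive session on the ''hep'' partition (the duration here is chosen purely as an illustration):

<code bash>
interactive -t 04:00:00 -p hep -A HEP2016-1-4   # 4 hours; pick whatever duration you need
</code>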
//slurm// will select a free node for you and open a bash terminal. From that moment on you can pretty much do the same as you were doing on Iridium testing nodes.
| Particle Physics | ''/projects/hep/nobackup/software/pp'' |
| Theoretical Physics | ''/projects/hep/nobackup/software/tp'' |
| Mathematical Physics | Please use your home folder for now. We are negotiating a 10GB project space. |
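As an illustration only, a Particle Physics user could make tools installed in the shared software area visible in their shell by prepending it to ''PATH''; the ''bin'' subdirectory here is hypothetical and depends on how the software is actually laid out:

<code bash>
# hypothetical bin subdirectory under the shared pp software area
export PATH="/projects/hep/nobackup/software/pp/bin:$PATH"
</code>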