There are only a few important things one needs to know:

  * A set of nodes is selected by choosing a **partition**, a **project** and a **reservation**, one for each division. The HEP nodes do not require a special reservation flag to be accessed. The partitions, project names and reservation flags are listed in the table below:
  
^ Your division ^ SLURM Partition ^ Project String ^ Reservation String ^ call srun with ^
===== Batch Scripts Examples =====
  
==== hep partition (Nuclear, Particle and Theoretical Physics) ====
  
A typical direct submission to ''hep'' nodes looks like the following minimal script. This is an illustrative sketch: the partition and project string come from the table above, while the time limit, job name and job step are placeholders to adapt to your job.

<code bash>
#!/bin/bash
#
# slurmexample.sh -- illustrative batch script for the hep partition;
# the time limit, job name and job step are placeholders.
#
#SBATCH -p hep               # partition (queue)
#SBATCH -A HEP2016-1-1       # project string
#SBATCH -t 00:10:00          # wall time, hours:minutes:seconds
#SBATCH -J slurmexample      # job name

# Replace with your actual workload.
echo "Running on $(hostname)"
</code>
  
The script is submitted to the SLURM batch queue with the command:
<code bash>sbatch slurmexample.sh</code>
  
The results will be found in the folder where the above command is run, in a file named after the SLURM job ID.
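By default SLURM writes that file as ''slurm-<jobid>.out'' (the pattern ''slurm-%j.out''), unless the script overrides it with ''#SBATCH -o''. A small sketch of deriving the file name from the job ID that ''sbatch'' prints; the ID ''12345'' here is made up:

<code bash>
# sbatch reports e.g. "Submitted batch job 12345"; this hypothetical ID
# stands in for the real one from your submission.
JOBID=12345
# Default SLURM stdout file in the submission folder: slurm-<jobid>.out
OUTFILE="slurm-${JOBID}.out"
echo "$OUTFILE"   # -> slurm-12345.out
</code>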
  
==== lu partition (Mathematical Physics) ====
  
A typical direct submission to ''lu'' nodes looks like the following minimal script. This is an illustrative sketch: the partition, project string and reservation flag come from the table above, while the time limit, job name and job step are placeholders to adapt to your job.

<code bash>
#!/bin/bash
#
# slurmexample.sh -- illustrative batch script for the lu partition;
# the time limit, job name and job step are placeholders.
#
#SBATCH -p lu                        # partition (queue)
#SBATCH -A lu2016-2-10               # project string
#SBATCH --reservation=lu2016-2-10    # reservation flag
#SBATCH -t 00:10:00                  # wall time, hours:minutes:seconds
#SBATCH -J slurmexample              # job name

# Replace with your actual workload.
echo "Running on $(hostname)"
</code>
  
The script is submitted to the SLURM batch queue with the command:
<code bash>sbatch slurmexample.sh</code>
  
The results will be found in the folder where the above command is run, in a file named after the SLURM job ID.
  
===== Interactive access to nodes for code testing =====
  
<code bash>
interactive -t 00:60:00 -p hep -A HEP2016-1-1
</code>

<code bash>
interactive -t 00:60:00 -p lu -A lu2016-2-10 --reservation=lu2016-2-10
</code>
  
where ''-t 00:60:00'' is the time in hours:minutes:seconds you want the interactive session to last. You can set the timer as long as you want, but mind that whatever you're running will be killed after the specified time.
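The same hours:minutes:seconds form is accepted wherever SLURM takes a time limit. As a sketch, splitting such a string in plain shell shows that ''00:60:00'' is exactly 60 minutes:

<code bash>
# Break a SLURM HH:MM:SS time string into fields and total the minutes.
T="00:60:00"
H=${T%%:*}; rest=${T#*:}
M=${rest%%:*}; S=${rest#*:}
# Strip a leading zero so the shell does not read "00" as octal.
TOTAL_MIN=$(( ${H#0} * 60 + ${M#0} + ${S#0} / 60 ))
echo "$TOTAL_MIN"   # -> 60
</code>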
  
//slurm// will select a free node for you and open a bash terminal. From that moment on you can pretty much do the same as you were doing on the Iridium testing nodes.
aurora_cluster/running_on_aurora.txt · Last modified: 2022/04/19 14:42 by florido
