====== Interactive access to the cluster: Iridium Cluster Testing nodes ======

> **Q:** Why do I need this?
>> **A:** You need this if you want to test your code on a powerful machine.
  
The cluster has two testing nodes, accessible from anywhere, that should be used for testing your code.
  
^ hostname ^ purpose ^
| **nptest**-iridium.lunarc.lu.se | test node for nuclear physics |
| **pptest**-iridium.lunarc.lu.se | test node for particle physics and theoretical physics |
  
They can also be used from time to time to host temporary courses.

They are meant for **interactive** access to the cluster, as opposed to the ''arc-iridium.lunarc.lu.se'' batch interface that can be used for batch submission of jobs. (link and explanation will come)
  
===== Accessing the Testing nodes =====

These nodes can simply be accessed via ssh. Ask [[:Florido Paganelli]] or Pico for access credentials.

Example:
<code>ssh myusername@pptest-iridium.lunarc.lu.se</code>
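
If you need graphical applications, you can also request X11 forwarding directly on the command line with ssh's standard ''-X'' option (the ''config'' snippets below enable it permanently via ''ForwardX11 yes''):
<code>ssh -X myusername@pptest-iridium.lunarc.lu.se</code>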
  
One can speed up access to the cluster by using the ssh ''config'' file (read [[:it_tips:ssh#Speedup connection with the ssh config file|here]] for a detailed explanation). You can copy-paste the following into your ''.ssh/config'' file **and modify it to your needs**:
  
----
  
**Particle physics and Theoretical Physics:**
<code>
# access pptest-iridium
Host pptest-iridium
HostName pptest-iridium.lunarc.lu.se
User <username on iridium>
ForwardX11 yes

# directly access iridium node X (change X to your preferred node!)
Host iridiumnX
User <username on iridium>
ForwardX11 yes
ProxyCommand ssh -q pptest-iridium.lunarc.lu.se -W nX:22
</code>
  
**Example:** My username is //guest03//. In the template above, I would change every ''<username on iridium>'' to //guest03//, and //nX// to //n12//.
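
With those substitutions, the node stanza from the template above would look roughly like this (a sketch using the example values //guest03// and //n12//; adapt them to your own username and node):
<code>
# example values only: username guest03, node n12
Host iridiumn12
User guest03
ForwardX11 yes
ProxyCommand ssh -q pptest-iridium.lunarc.lu.se -W n12:22
</code>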
  
Then, to log in to n12, I would run:

  ssh iridiumn12

and I would have to enter two passwords: one for pptest-iridium and one for the node.

The number of passwords can be reduced by using a [[:it_tips:ssh#Reduce passwords using a private/public ssh key pair|private/public ssh key pair]].
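
A minimal sketch of that setup, assuming a standard OpenSSH client on your own machine (see the linked page for the full explanation):
<code>
# generate a key pair on your own machine (accept the defaults, optionally set a passphrase)
ssh-keygen -t rsa

# copy the public key to the test node so it accepts the key instead of a password
ssh-copy-id myusername@pptest-iridium.lunarc.lu.se
</code>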
  
----
  
**Nuclear physics:**
<code>
# access nptest-iridium
Host nptest-iridium
HostName nptest-iridium.lunarc.lu.se
User <username on iridium>
ForwardX11 yes

# directly access iridium node X (change X to your preferred node!)
Host iridiumnX
User <username on iridium>
ForwardX11 yes
ProxyCommand ssh -q nptest-iridium.lunarc.lu.se -W nX:22
</code>

**Example:** My username is //guest03//. In the template above, I would change every ''<username on iridium>'' to //guest03//, and //nX// to //n12//.

Then, to log in to n12, I would run:

  ssh iridiumn12

and I would have to enter two passwords: one for nptest-iridium and one for the node.

The number of passwords can be reduced by using a [[:it_tips:ssh#Reduce passwords using a private/public ssh key pair|private/public ssh key pair]].
  
----

==== Setup the work environment ====
  
The administrators provide scripts for quick setup of your work environment.
Just execute the command in the column //Script to run// at the shell prompt, or add it to your ''.bash_profile'' file so that it is executed every time you log in.

:!: **NOTE:** do NOT add these scripts to ''.bashrc'' as suggested previously, or you will not be able to rsync/scp. The contents of ''.bashrc'' are NOT supposed to generate output, but unfortunately some of these scripts do.
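
For example, assuming you want the ATLAS environment from the table below set up at every login, a single line appended to ''.bash_profile'' is enough:
<code>
# run the environment setup at every login (do NOT put this in .bashrc, see the note above)
echo 'setupATLAS' >> ~/.bash_profile
</code>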
  
The following are active now:
^ Environment ^ Script to run ^ Description ^
^ ATLAS Experiment environment | ''setupATLAS'' | Sets up all the needed environment variables for the ATLAS experiment, and presents a selection of other environments that the user can set up. |
^ Various other environments through //module// | ''module avail'' | Shows a list of available environments. To enable one, run ''module load <name of environment>''. More info on modules at http://modules.sourceforge.net/ |

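A typical //module// session looks something like this (the environment name is just a placeholder, pick one from the list that ''module avail'' prints on the node):
<code>
# list the environments available on the node
module avail

# load one of them (replace the name with one from the list)
module load <name of environment>

# show what is currently loaded
module list
</code>
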
==== Keeping your session alive and your work running even if you disconnect ====

The cluster offers you various tools, among them [[:it_tips:screen|Screen]] and the usual ''nohup'' command to detach from terminal output.

However, I personally suggest a tool called byobu, which is essentially Screen or Tmux on steroids.
You can read about it here: https://help.ubuntu.com/community/Byobu

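A quick sketch of both approaches (''my_long_job.sh'' is just a placeholder for your own command):
<code>
# option 1: detach the job from the terminal with nohup; output goes to a log file
nohup ./my_long_job.sh > my_long_job.log 2>&1 &

# option 2: work inside byobu; detach with F6 and reattach later by running byobu again
byobu
</code>
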
==== Local disk space on the test nodes ====

Every node has a local ''/tmp'' temporary disk space that can be used for computations. The contents of this space are deleted regularly. Users can put any sort of data there. Currently the available space is 300GB.

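A common pattern is to keep scratch files under a personal directory in ''/tmp'' and to copy anything you want to keep back to your home directory before the regular cleanup (''results/'' below is a placeholder for whatever your job produces):
<code>
# create a personal scratch directory on the node and work there
mkdir -p /tmp/$USER
cd /tmp/$USER

# ... run your computation here ...

# copy the results somewhere permanent before the cleanup removes them
cp -r results/ ~/
</code>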