Suggestions on how to make your life easier when using the cluster.
You can speed up logging in by configuring your own ssh client. This will also help when scp-ing data to the cluster.
My suggestion for Particle Physicists is to copy this piece of code into your own
.ssh/config file and adapt it to your specific needs:
  # access tjatte
  Host tjatte
      HostName tjatte.hep.lu.se
      User <username on tjatte>
      ForwardX11 yes

  # directly access iridium gateway
  Host iridiumgw
      User <Username on iridium>
      ForwardX11 yes
      ProxyCommand ssh -q tjatte nc iridium.lunarc.lu.se 22

  # directly access node X
  Host nX.iridium
      User <Username on iridium>
      ForwardX11 yes
      ProxyCommand ssh -q iridiumgw nc nX 22

  # directly access node Y
  Host nY.iridium
      User <Username on iridium>
      ForwardX11 yes
      ProxyCommand ssh -q iridiumgw nc nY 22
Example: my username is florido. In the template above, I would change all the
<Username …> fields to florido,
then to log in to n12 I would do:
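With the config above, a sketch of the login command would be the following, assuming one of the Host nX.iridium blocks has been edited with X replaced by 12:

```shell
# The ProxyCommand entries chain the connection
# through tjatte and iridiumgw automatically.
ssh n12.iridium
```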
I will then be asked for three passwords: one for tjatte, one for the gateway, and one for the node.
If you want to access the cluster nodes from outside the division, you must go through teddi, possibly copying the above setup into the .ssh/config file in your home directory there.
If you don't have an account on teddi or direct access to some other division machine, you should ask me to create one.
Here X and Y are the nodes you're allowed to run on.
Note that with the above setup you will be asked for as many passwords as there are machines in the connection chain. A way to ease this pain is to copy your ssh keys to the nodes. Copying ssh keys to the gateway is not (yet) possible, so you will always need two passwords: one to unlock your ssh key and one for the gateway.
An alternative way to authenticate via ssh is to use ssh keys, which spares you from typing many passwords. The only password you will need is the one that unlocks your key.
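A minimal sketch of creating a passphrase-protected key and copying it to a node; the key type and the nX.iridium host alias are assumptions (the alias matches the config template above, with X replaced by your node number), and ssh-copy-id ships with OpenSSH:

```shell
# generate a key pair; choose a strong passphrase when prompted
ssh-keygen -t rsa -f ~/.ssh/id_rsa

# copy the public key to a node (replace nX with your node)
ssh-copy-id -i ~/.ssh/id_rsa.pub nX.iridium

# optionally, load the key into an agent so the passphrase
# is typed only once per session
eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa
```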
PLEASE DO NOT USE PASSWORDLESS KEYS. IT IS A GREAT SECURITY RISK.
Read about them here:
Use screen. GNU screen is an amazing tool that opens a remote terminal session that is independent of your ssh connection. If the connection drops or you accidentally close the ssh window, your jobs will keep running on the cluster.
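A typical session might look like the following sketch; the session name jobs is an arbitrary example:

```shell
# on the cluster node, start a named screen session
screen -S jobs

# ... run your long job inside the session ...
# detach with Ctrl-a d; the job keeps running

# later, from a new ssh connection, list sessions
screen -ls

# and reattach to the named session
screen -r jobs
```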
A quick and dirty tutorial can be read here, but there's plenty more on the internet.