====== Basic information about the Cluster ======
  
These are things one should read before using the cluster.

==== Cluster components ====

The cluster is currently composed of the following:

  * Two **nodes for interactive access/testing**: they allow users to run interactive jobs. More about them can be read in [[iridium_cluster:testnodes]].
  * A **storage server**: used to maintain user home folders, software and data. See [[#Common files organization]] below for details.
  * A **batch processing** interface: it allows researchers to run many jobs and schedules them in a fair way using the [[http://www.nordugrid.org/arc|ARC software]]. It manages the cluster nodes.
    * **10 nodes**: these are the machines where you will actually run your code. Currently they can be accessed directly, but in the future the **batch processing** interface will be the only entry point. Each node has:
      * a simple name: //nX//, where //X// is the number of the node, e.g. //n1// is node number //1//;
      * 16 cores;
      * 64GB of RAM;
      * access to all folders served by the storage server. This means that a researcher has her own home folder regardless of the node she logs in to.
  * A **gateway**: it was used for users to access the cluster, but it will now be phased out or allocated for other purposes.
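Once logged in to a node, standard tools can be used to verify the resources listed above. A minimal sketch (the node name shown is only an example):

```shell
# Check the resources of the node you are currently logged in to.
# The values should match the list above (16 cores, 64GB of RAM).
hostname              # node name, e.g. n1
nproc                 # number of CPU cores
free -h | grep Mem    # total and available RAM
```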
  
Contact [[:Florido Paganelli]] or Pico to get access.
  
==== Common files organization ====

Every node of the cluster can access the shared storage. All users can access the shared storage, but they can only access the areas assigned to the working groups they belong to.
The shared storage is organized as follows:
^ Folder name ^ Folder location ^ Folder purpose ^ Expected file size ^ Description ^ Subfolders ^
^ **users** | ''/nfs/users'' | User homes | files smaller than 100MB each | This folder contains each user's private home folder. In this folder one should save one's own code and, possibly, private data. Data that can also be used by others, or single files bigger than 100MB, should **not** be in this folder. Use the **shared** folder instead. | ''/<username>'': each user has her own folder |
^ ::: | ::: | ::: | ::: | ::: | ''/npguests/<username>'' for Nuclear Physics guests |
^ ::: | ::: | ::: | ::: | ::: | ''/ppguests/<username>'' for Particle Physics guests |
^ **software** | ''/nfs/software'' | Application software | files smaller than 100MB each | This folder hosts software that is not accessible via cvmfs (see below). This usually includes user- or project-specific libraries and frameworks. | ''/np'' for Nuclear Physics users |
^ ::: | ::: | ::: | ::: | ::: | ''/pp'' for Particle Physics users |
^ ::: | ::: | ::: | ::: | ::: | ''/tp'' for Theoretical Physics users |
^ **shared** | ''/nfs/shared/'' | Data that will stay for the long term | Any file, especially big ones | This folder should be used for long-term stored data, for example data needed for the whole duration of a PhD project, or data shared among people belonging to the same research group. | ''/np'' for Nuclear Physics users |
^ ::: | ::: | ::: | ::: | ::: | ''/pp'' for Particle Physics users |
^ ::: | ::: | ::: | ::: | ::: | ''/tp'' for Theoretical Physics users |
^ **scratch** | ''/nfs/scratch/'' | Data that will stay for the short term | Any file, especially big ones | This folder should be used for short-term stored data, for example data needed for a week-long or temporary calculation. This folder should be considered unreliable, as its contents will be purged from time to time. The cleanup interval is yet to be decided. | ''/np'' for Nuclear Physics users |
^ ::: | ::: | ::: | ::: | ::: | ''/pp'' for Particle Physics users |
^ ::: | ::: | ::: | ::: | ::: | ''/tp'' for Theoretical Physics users |
^ **cvmfs** | ''/cvmfs'' | Special folder containing CERN-maintained software | users cannot write | This special folder is dedicated to software provided by CERN and is read-only. Its contents are usually managed via specific scripts that a user can run. If you need some software that you cannot find, contact the administrators. | ''/geant4.cern.ch'' for Nuclear Physics users |
^ ::: | ::: | ::: | ::: | ::: | ''/atlas.cern.ch'' for Particle Physics users |
^ **nodestmp** | ''/nodestmp'' | Contains the ''/slurmtmp'' folder of each node, where slurm writes job data | Any size; the disk is limited to 270GB | Files used by your code that you want directly on the node. It is recommended to copy files there, as this will speed up the calculation. | |
^ **nobackup_aurora** | ''/nfs/nobackup_aurora'' | Contains the ''/projects/hep/fs2-hep/'' folder of the storage connected to the [[:aurora_cluster|Aurora cluster]], to easily move data to and from the two clusters. This machine can be used to transfer data to the cluster at a fast rate; [[:aurora_cluster:moving_data|check these instructions]] | Any size; the maximum available space for each Division is 18.3TB | Any file | ''/shared'' |
^ ::: | ::: | ::: | ::: | ::: | ''/scratch'' |
^ ::: | ::: | ::: | ::: | ::: | ''/software'' |

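As an illustration of how the **scratch** and **shared** areas are meant to be used together, the sketch below stages input data into a scratch working directory, runs a calculation there, and copies the result back to long-term storage. All file names are hypothetical, and temporary directories stand in for the real ''/nfs/shared/<group>'' and ''/nfs/scratch/<group>'' mounts so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch: stage data into scratch, work there, save results to shared.
# SHARED and SCRATCH are stand-ins for the real /nfs mounts.
set -eu
SHARED=$(mktemp -d)    # long-term storage (stand-in for /nfs/shared/<group>)
SCRATCH=$(mktemp -d)   # short-term storage (stand-in; purged periodically)

echo "input data" > "$SHARED/dataset.txt"

# 1. Stage the input into scratch before the calculation
cp "$SHARED/dataset.txt" "$SCRATCH/"

# 2. Run the calculation in scratch (here: a trivial transformation)
tr 'a-z' 'A-Z' < "$SCRATCH/dataset.txt" > "$SCRATCH/result.txt"

# 3. Copy the result back to shared for long-term keeping;
#    anything left in scratch may be purged later
cp "$SCRATCH/result.txt" "$SHARED/"

cat "$SHARED/result.txt"   # -> INPUT DATA
```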
==== User groups ====

Five main UNIX user groups are defined, as follows:
^ User group ^ Who belongs to it ^ Group hierarchy ^
^ npusers | Researchers belonging to **Nuclear Physics** | primary |
^ npguests | Visitors belonging to **Nuclear Physics** | primary |
^ ppusers | Researchers belonging to **Particle Physics** | primary |
^ ppguests | Visitors belonging to **Particle Physics** | primary |
^ tpusers | Researchers belonging to **Theoretical Physics** | primary |
^ clusterusers | All users accessing the cluster | secondary for npusers, ppusers and tpusers |

The group hierarchy determines the default ownership of the files you create. Whenever you create a file, its default ownership will be:
  * user: your username
  * group: the primary group you belong to
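You can check your own groups and the default ownership of a newly created file with standard UNIX tools; a minimal sketch (the group names shown in the comments are examples from the table above):

```shell
id -gn                 # your primary group, e.g. npusers
id -Gn                 # all your groups, e.g. npusers clusterusers
touch testfile         # a newly created file...
ls -l testfile         # ...is owned by <username>:<primary group>
rm testfile
```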
  
----
  
  
iridium_cluster/basic_information.1395429345.txt.gz · Last modified: 2014/03/21 20:15 by florido