The recommended way to move data is to use the special ''fs2-hep'' storage node.
The SLURM method described below is deprecated; use it only if the ''fs2-hep'' method does not work.

**SSH** to the //fs2-hep// server from the Aurora frontend and start //screen// as described in [[aurora_cluster:moving_data|Moving data to and from the cluster]] to perform the steps below. //Screen// ensures that your transfer is not lost if you disconnect.
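As a sketch, the connection could look like the following (the exact hostname and your username depend on your account; ''<username>'' and the session name are placeholders):

<code bash>
# Connect to the data mover node from the Aurora frontend
# ("fs2-hep" as named above; replace <username> with your account name)
ssh <username>@fs2-hep

# Start a named screen session so the transfer survives disconnects
screen -S datatransfer

# ... run your transfer commands inside the screen session ...

# Detach with Ctrl-a d; later, reattach to the same session with:
screen -r datatransfer
</code>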
===== Using GRID tools on ''fs2-hep'' (recommended) =====
Otherwise read below.
==== Prerequisites ====
The following has to be **done in the order it is presented**.
  - Get a CERN account. Ask your senior team members.
  - Identify the names of the datasets you want to handle (usually the senior person in your research group knows).
==== Setting up the GRID tools ====
One should interact with the storage using Rucio.
  - Start ''screen'' or ''byobu''. The transfer might take a long time.
  - Set up the ATLAS environment using ''cvmfs'' as explained in [[aurora_cluster:running_on_aurora#cvmfs]]
  - Run: <code bash>lsetup rucio</code>
  - General quick documentation on how to access SE-SNIC-T2_LUND_LOCALGROUPDISK, including the special endpoint names, is available here: http://www.hep.lu.se/grid/localgroupdisk.html
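Taken together, the setup steps above might look like this in a terminal. This is a sketch under two assumptions: the ''setupATLAS'' alias is provided by the cvmfs ATLAS environment referenced above, and your VO membership is ''atlas'':

<code bash>
# Start a resilient session first
screen -S gridwork

# Set up the ATLAS software environment from cvmfs
# (assumes the setupATLAS alias from the cvmfs ATLAS setup)
setupATLAS

# Make the rucio client available
lsetup rucio

# Create a VOMS proxy from your grid certificate
# (assumes membership in the "atlas" VO)
voms-proxy-init -voms atlas

# Verify that rucio recognizes your grid identity
rucio whoami
</code>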
==== Downloading data from the grid to SE-SNIC-T2_LUND_LOCALGROUPDISK ====
This task is about moving data between GRID storage elements. To move a dataset to SE-SNIC-T2_LUND_LOCALGROUPDISK one needs to perform a data transfer request. :!: WIP FIXME :!:
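Such a transfer request is typically expressed as a Rucio replication rule. A minimal sketch (the dataset identifier ''user.someuser:mydataset'' is a placeholder):

<code bash>
# Request one replica of the dataset at the Lund localgroupdisk.
# Rucio then schedules the transfer within the grid; nothing is
# downloaded to the local machine.
# "user.someuser:mydataset" is a placeholder dataset identifier.
rucio add-rule user.someuser:mydataset 1 SE-SNIC-T2_LUND_LOCALGROUPDISK

# Check the status of your replication rules
rucio list-rules --account $RUCIO_ACCOUNT
</code>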
==== Downloading data from SE-SNIC-T2_LUND_LOCALGROUPDISK to Aurora ====
  * Select a dataset to download: <code bash>rucio list-datasets-rse SE-SNIC-T2_LUND_LOCALGROUPDISK | less</code>
</code>
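The download of a selected dataset can be sketched as follows; the dataset name is a placeholder, and ''--rse'' pins the source to the Lund storage element:

<code bash>
# Download a dataset from the Lund localgroupdisk into the
# current directory. "user.someuser:mydataset" is a placeholder;
# use a name taken from the listing above.
rucio download --rse SE-SNIC-T2_LUND_LOCALGROUPDISK user.someuser:mydataset
</code>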
==== Downloading data from any ATLAS SE to Aurora ====
If you omit the ''--rse'' parameter in any of the commands above, //rucio// will automatically search for the best Storage Element. However, the speed and duration of such a download are not guaranteed. Please ask the collaboration about their policies regarding such transfers.
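For example, letting //rucio// pick the source itself (dataset name again a placeholder):

<code bash>
# No --rse given: rucio chooses the source storage element on its own.
rucio download user.someuser:mydataset
</code>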
==== Uploading data from Aurora to SE-SNIC-T2_LUND_LOCALGROUPDISK ====
Before uploading files, it is **very important** to understand the ATLAS data model. Please read https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/RucioClientsHowTo#Creating_data .
</code>
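An upload of a local file into your user scope can be sketched as follows; the filename and scope are placeholders, and the scope normally matches your grid nickname:

<code bash>
# Upload a local file to the Lund localgroupdisk under your user scope.
# "user.someuser" and "myfile.root" are placeholders.
rucio upload --rse SE-SNIC-T2_LUND_LOCALGROUPDISK --scope user.someuser myfile.root
</code>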
==== Uploading data from Aurora to any other GRID Storage Element ====
Please read carefully https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/RucioClientsHowTo#Creating_data
===== Using SLURM Scripts (inefficient, deprecated) =====
This was a workaround from before the data mover frontend (''fs2-hep'') existed. The technique is discontinued because it uses the node's 1 Gbit link to perform the download, whereas ''fs2-hep'' has a dedicated 10 Gbit link just for transfers.