FIXME this page requires a lot of updates.

====== Moving data between the GRID and Aurora ======
  
The recommended way to move data is to use the special ''fs2-hep'' storage node.
The SLURM method described below is deprecated and should be used only if the recommended method does not work.
**SSH** to the //fs2-hep// server from the Aurora frontend and start //screen// as described in [[aurora_cluster:moving_data|Moving data to and from the cluster]] before performing the steps below. //Screen// ensures that your transfer is not lost if you disconnect.
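A typical connection might look like the following sketch. The frontend hostname and the session name are illustrative assumptions; substitute your own username:

<code bash>
# From your local machine, log in to the Aurora frontend (hostname is an assumption)
ssh <yourusername>@aurora.lunarc.lu.se
# Hop to the storage node
ssh fs2-hep
# Start a named screen session so the transfer survives a disconnect
screen -S gridtransfer
# ...run your transfer commands here...
# Detach with Ctrl-a d; reattach later with: screen -r gridtransfer
</code>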
  
===== Using GRID tools on ''fs2-hep'' (recommended) =====
One should interact with the storage using Rucio.
  
  - Start [[it_tips:screen]] or ''byobu''. The transfer might take a long time.
  - Set up the ATLAS environment using ''cvmfs'' as explained in [[aurora_cluster:running_on_aurora#cvmfs]]
  - Run: <code bash>lsetup rucio</code>
  - Read the Rucio HowTo hosted by CERN (accessible only with your CERN account) at https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/RucioClientsHowTo and experiment with the rucio commands.
  - Quick documentation on how to access SE-SNIC-T2_LUND_LOCALGROUPDISK and the special endpoint names is available at http://www.hep.lu.se/grid/localgroupdisk.html
  * :!: What follows applies to any GRID storage, not only to ''SE-SNIC-T2_LUND_LOCALGROUPDISK''. You just need the GRID name of the storage; ask a storage expert in your group.
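Putting the setup steps together, a session might look like the sketch below. ''setupATLAS'' is the alias provided by the cvmfs ATLAS environment; the proxy step is an assumption based on standard ATLAS grid usage and requires your grid certificate:

<code bash>
# Inside screen/byobu on fs2-hep:
setupATLAS                     # alias provided by the cvmfs ATLAS environment
lsetup rucio                   # make the rucio client available
voms-proxy-init -voms atlas    # obtain a grid proxy (assumes a valid grid certificate)
rucio whoami                   # verify that rucio recognises your identity
</code>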
==== Downloading data from the grid to SE-SNIC-T2_LUND_LOCALGROUPDISK ====
  
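As a sketch (not taken from this page): in Rucio, replicating a dataset to a storage endpoint is requested by adding a replication rule. The scope and dataset name below are placeholders:

<code bash>
# Request one replica of the dataset on the Lund local group disk
rucio add-rule <scope>:<dataset-name> 1 SE-SNIC-T2_LUND_LOCALGROUPDISK
# Check the status of your rules
rucio list-rules --account <your-rucio-account>
</code>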
==== Downloading data from SE-SNIC-T2_LUND_LOCALGROUPDISK to Aurora ====
  
  * Get the list of datasets present in our storage: <code bash>rucio list-datasets-rse SE-SNIC-T2_LUND_LOCALGROUPDISK | less</code> and identify the **name** of the dataset to download.
  * Select a folder on fs3-hep or fs4-hep as the download destination, for example ''/projects/hep/fs4/scratch/<yourusername>/test''
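The two choices above combine into a single download command. This is a sketch using ''rucio download'' with its ''--dir'' option; the scope and dataset name are placeholders:

<code bash>
# Download the chosen dataset into the chosen scratch folder
rucio download --dir /projects/hep/fs4/scratch/<yourusername>/test <scope>:<dataset-name>
</code>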
  
aurora_cluster/moving_data/grid · Last modified: 2022/02/21 17:27 by florido
