Support

Please contact us via SmartDesk, by email at servicedesk@id.ethz.ch, or by phone at +41 44 632 77 77

Service Information



Please subscribe to the LTS Sympa mailing list for notifications about maintenance, services, and news:

Sympa Mailing-Lists Informatikdienste → Search for Lists → Search Form → lts → subscribe to lts@sympa.ethz.ch


StrongLink LTS


The LTS system was migrated from the StrongBox appliance to our new StrongLink environment in July 2022.

New system names are:

lts12 ( replaces lts11 )

lts22 ( replaces lts21 )


NFS Share Mounts



All clients must use NFSv3 mount options - Client Advisory StrongLink Data Solutions ( 14/09/22 )


ETH / StrongLink NFS Shares / Client Advisory for mounting StrongLink NFS shares - 14/09/22  ( StrongLink Data Solutions )
  • All clients must use NFSv3 mount options; NFSv4 is no longer a valid and supported feature in this environment
  • To mount a StrongLink NFS share over NFSv3, the StrongLink namespace name must be used as the "export_name"
  • Please contact the LTS ETH admin team for further instructions


mount -t nfs <StrongLink hostname or IP address>:/<export name> -o vers=3,rsize=1048576,wsize=1048576 /<mount point>

Additional comment ( ETH ):  "showmount -e lts12" / "showmount -e lts22" will show the correct export names to be used for the mount.
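
A concrete NFSv3 mount following the template above could look like this ( the share name and mount point are placeholders; use the export name shown by showmount ):

mount -t nfs lts12:/your_primary_share_name -o vers=3,rsize=1048576,wsize=1048576 /mount_point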

For NFS clients that use rsync to transfer files and run a tar -tf <filename> to validate the tar file once it has been written successfully, the recommended rsync options are:
 
 /usr/bin/rsync --archive --delete --inplace --checksum --timeout=10800 --verbose --no-p --no-o --no-g --chmod=Du=rwx,Fu=rw --chown=openxxx:openxxx /nasxx/id_xx_xx_nfs_lts12/openxxx_123/20220729140202904-xxxxx-xxxxxxx-xxxxx.tar /test_06/harvest_02/

These rsync options also allow a tar -tf <filename> check after the file has been written successfully to the primary StrongLink NFS share, as shown below.
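
A validation call on the copied tar file could look like this ( the file name and target directory simply reuse the placeholders from the rsync example above; redirecting the listing keeps the output quiet ):

 tar -tf /test_06/harvest_02/20220729140202904-xxxxx-xxxxxxx-xxxxx.tar > /dev/null && echo "tar file is readable"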

Using rsync without further options ( default ) creates a temporary file name during the transfer, which is renamed once the transfer has completed successfully.

The rsync defaults are therefore no longer recommended for use with StrongLink NFS shares.
Example:    /usr/bin/rsync  /nas22/id_xxx_xx_xxx_lts12/openxxx_xxx/20220729140202904-xxxxxx-xxxxxxxxx-xxxxxxx.tar /test_06/harvest_02/



Mounting of NFS shares is only possible for registered hosts. To register new hosts for your share, please open a ticket with DBR support.

Write operations are only possible on primary shares.

For EULER, please read the following additional information about the LTS login nodes:

Euler Cluster - LTS Login Nodes


The NFSv4 mount options shown below are no longer supported ( see the Client Advisory further up ); please use NFSv3 instead.

The syntax examples below use primary share names. Depending on where your primary share is located, you need to use lts12 or lts22 ( same as on the old system ).

Usually, replica shares will not be mounted. If you need to mount them, just replace the primary / secondary share names accordingly.


Example for Primary Share on lts12 ( former lts11 )

mount -t nfs lts12:/your_primary_share_name -o vers=4.1,rsize=1048576,wsize=1048576  /mount_point         

Example for Primary Share on lts22 ( former lts21 )

mount -t nfs lts22:/your_primary_share_name -o vers=4.1,rsize=1048576,wsize=1048576 /mount_point


Fstab entry ( see Client Advisory above: NFSv4 is not supported any more, please use NFSv3 )

lts12.ethz.ch:/your_primary_share      /your_mount_point     nfs4      vers=4.1,rsize=1048576,wsize=1048576,noauto      

lts22.ethz.ch:/your_primary_share        /your_mount_point     nfs4      vers=4.1,rsize=1048576,wsize=1048576,noauto
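
A possible NFSv3 replacement for the two entries above, following the Client Advisory ( a sketch only: share and mount-point names are placeholders, and the option values are taken from the NFSv3 mount command recommended further up ):

lts12.ethz.ch:/your_primary_share      /your_mount_point     nfs      vers=3,rsize=1048576,wsize=1048576,noauto

lts22.ethz.ch:/your_primary_share      /your_mount_point     nfs      vers=3,rsize=1048576,wsize=1048576,noauto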

Obsolete mount options ( old StrongBox system )

lts11 / lts21 ( old system ):  path:/shares/primary_share_name   →   lts12 / lts22 ( new system ):  the /shares/ prefix in front of the share name is no longer needed for NFS mounts


The following mount options, used for the old LTS system ( StrongBox ), should be replaced by the new ones recommended above.

OLD:  mount -t nfs -o hard,intr,retrans=10,timeo=300,rsize=65536,wsize=1048576,vers=3
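
For comparison, the currently recommended NFSv3 command from further up would look like this ( hostname, export name and mount point are placeholders ):

NEW:  mount -t nfs <StrongLink hostname or IP address>:/<export name> -o vers=3,rsize=1048576,wsize=1048576 /<mount point>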


NFS V4 FAQs


Showmount is an NFSv3 command and cannot display virtual share names.

For all StrongBox migrated shares:

  • Please keep using the known share names of the StrongBox system and replace the host names with the newly communicated hostnames
  • Remove the /shares/ prefix in front of the mount path ( old StrongBox path )

Mount options ( NFSv4, no longer supported - see Client Advisory above ):  vers=4.1,rsize=1048576,wsize=1048576
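
For example, with a placeholder share name, the mount source changes as follows ( the share name itself stays the same; only the hostname and the /shares/ prefix change ):

OLD ( StrongBox ):    lts11:/shares/your_primary_share
NEW ( StrongLink ):   lts12:/your_primary_share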



SMB Share Mounts


Recall of Files


Please take into consideration that the LTS system needs some time to retrieve the files from tape during a recall.

The recall is triggered on first access; after that, a tape is loaded into a drive in the tape library and the file(s) are copied back into the disk cache.

This process generally takes some time, depending also on the current workload of the LTS system.

The file is copied back completely from tape; we do not keep the first 4 MB in the landing_zone as on the old system.

→ This means that an access immediately leads to an IO error if the file is still only on tape.

The file can be copied as soon as it is available in the landing_zone.


Below you will find the "best practices" for the different access types during recalls.

Recall of Files with Windows Explorer

Only the recall of single files is reasonable with the Windows Explorer.

Recall of files with ROBOCOPY

Robocopy is ideal for the recall of multiple files or folders.

The retry option ( /R ) and /MT:1 should be added as robocopy options, as in the sketch below.
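
A possible robocopy call along these lines ( a sketch only: source, destination, retry count and wait time are placeholders / example values, not prescribed ones ):

robocopy \\lts12.ethz.ch\your_share\your_folder D:\restore\your_folder /E /R:20 /W:300 /MT:1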


Recall using Linux mounts ( mount -t cifs )

Retry logic needs to be added when using the copy command on CIFS-mounted shares under Linux ( if used within a script ), as in the sketch below.

If you run the cp command from the command line, you need to wait a certain time and retry ( once or multiple times, until the file is available ).
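
A minimal retry loop for such a script could look like this ( a sketch only: file paths, number of attempts and wait time are assumptions, not official recommendations ):

#!/bin/bash
# Retry copying a file from a CIFS-mounted LTS share until the recall from tape has completed.
SRC=/mnt/lts_share/path/to/file.tar     # file on the CIFS-mounted share ( placeholder )
DST=/local/target/                      # local destination directory ( placeholder )

for attempt in 1 2 3 4 5 6; do          # number of attempts is an example value
    if cp "$SRC" "$DST"; then
        echo "copy succeeded on attempt $attempt"
        break
    fi
    echo "file not yet recalled from tape, waiting before retry ..."
    sleep 600                            # wait time ( seconds ) is an example value
done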



FAQs


If the file has not yet been fully copied back from tape, and is therefore not readable for the copy process, it is possible that you receive an IO error message.

Please consider the recommendations above for robocopy and the Linux copy and add retries to your commands.

Windows Explorer waits until the file is available and copies the file after that.

