# Overview

SBGrid provides comprehensive system administration support for computers that are used for data collection and analysis with Lattice Light Sheet Microscopy (LLSM) in the Kirchhausen lab. All servers are on the HMS network and are physically located in a BCH-managed network closet on the 3rd floor of CLSB. SBGrid also creates accounts on the Confocal data server on the BCH network.

Access to the file servers is available to lab members upon approval. Please email SBGrid at [help@sbgrid.org](mailto:help@sbgrid.org) and CC your Lab Admin.

Lab workstations (OS X, Windows) are managed by lab members. Please contact your Lab Admin for help.

# Get Help

For general questions and server connection issues, email SBGrid at [help@sbgrid.org](mailto:help@sbgrid.org) and include your name, your lab, and your workstation name.

New lab members should contact [help@sbgrid.org](mailto:help@sbgrid.org) to complete their initial setup.

# CPU Cluster

> * Lab members can submit CPU jobs from any Linux workstation in the lab. Jobs are currently directed to the tk-cpu job queue; a minimal submission sketch follows below.
|
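As a minimal submission sketch, assuming the tk-cpu queue maps to a Slurm partition of the same name (sbatch and srun are available on the lab workstations, see below); the job name, CPU count, and time limit are placeholders:

```bash
#!/bin/bash
# Sketch of a batch script for the tk-cpu queue; resource values are placeholders.
#SBATCH --job-name=cpu-test
#SBATCH --partition=tk-cpu
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

echo "Running on $(hostname)"
```

Save it as, say, `cpu-test.sh`, submit it with `sbatch cpu-test.sh`, and monitor it with `squeue -u $USER`.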
# DGX systems

* dgx-a100-01 and dgx-a100-02 are on an isolated network; connections must be 'jumped' through the cluster head node, for example: `ssh -J username@head.gpucluster.crystal.harvard.edu username@dgx-a100-01` (see the config sketch below).
* OnDemand desktops can be launched from <https://ood.gpucluster.crystal.harvard.edu/pun/sys/dashboard> (must be on VPN or the local network).
* Your username is required to launch a desktop; just select a queue.
* For longer-term desktops use the month-long queue; a dedicated queue for this resource is planned.
|
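To avoid retyping the jump host, a convenience sketch for `~/.ssh/config` (not an official lab configuration; replace `username` with your own account name):

```bash
# Append a ProxyJump entry so "ssh dgx-a100-01" routes through the head node.
cat >> ~/.ssh/config <<'EOF'
Host dgx-a100-01 dgx-a100-02
    User username
    ProxyJump username@head.gpucluster.crystal.harvard.edu
EOF
```

This is equivalent to the `ssh -J` form above and also applies to `scp` and `rsync` transfers.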
# Linux Systems

All lab Linux systems are located across lab space in WAB. The older HPC systems are in the Dreamspace computing room in WAB 133. The Visual Learning systems are located in the new Visual Learning Center in WAB room 144. All Linux system access is managed by the SBGrid ARC team.

* Lab workstation names
  * tkhpc32.med.harvard.edu
  * tkhpc32b.med.harvard.edu
  * tkhpc32c.med.harvard.edu
  * tkhpc48.med.harvard.edu
  * tkhpc36a.med.harvard.edu
  * tkhpc36b.med.harvard.edu
  * tkhpc36c.med.harvard.edu
  * tkl1.med.harvard.edu
  * tkl2.med.harvard.edu
  * tkl3.med.harvard.edu

* DGX systems
  * tk-dgx-1.med.harvard.edu
  * dgx-a100-01 (connect with `ssh -J username@head.gpucluster.crystal.harvard.edu username@dgx-a100-01`)
  * dgx-a100-02 (connect with `ssh -J username@head.gpucluster.crystal.harvard.edu username@dgx-a100-02`)

* See the additional tips section in [Password-less ssh authentication](faq-setting-up-key-based-ssh)
|
# Linux Systems mount points

* LLSM project folders - /nfs/data1expansion
* LLSM project folders - /nfs/tkdata2
* User home folders - /nfs/tkhome
* SBGrid project use - /nfs/sbgrid
* Cluster SSD scratch server - /scratch or /nfs/scratch
* Cluster NVMe scratch storage - /scratch2 or /nfs/scratch2
* DGX cluster scratch storage - /scratch1 or /nfs/scratch1
* New microscope data - /llsm
|
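To check that one of these file systems is mounted on the workstation you are using, and how full it is, standard tools are enough; for example, with two of the paths from the list above:

```bash
# Show the backing server and free space for the home and LLSM project volumes.
df -h /nfs/tkhome /nfs/data1expansion
```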
# Connecting To Linux

After setting up your account with SBGrid you can log into any of the local Linux systems in your lab. No matter which system you use, you will keep the same home folder.
|
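For example, from a machine on the lab/HMS network (remote access goes through the bastion hosts described in the next section), where `username` is a placeholder for your account and the hostname is taken from the list above:

```bash
# Log into one of the lab Linux workstations.
ssh username@tkl1.med.harvard.edu

# The home folder is shared, so files written here are visible
# from any of the other lab Linux systems as well.
```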
# Connecting To Your Lab's Linux Systems Remotely

You can connect to your lab's workstations by using the ssh program to connect through our bastion hosts. Please review the following instructions, as you will need to request external SSH access:

* [faq-remote-access-to-linux-computers](faq-remote-access-to-linux-computers)
* **Please make your life easier** [and set up ssh-keys](faq-setting-up-key-based-ssh) (a quick sketch follows below)
|
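A minimal key-setup sketch; the linked FAQ is the authoritative guide, the hostname is just an example from the list above, and `username` is a placeholder:

```bash
# Generate an ed25519 key pair (pick a passphrase when prompted).
ssh-keygen -t ed25519

# Install the public key on a lab workstation so logins stop asking for a password.
ssh-copy-id username@tkl1.med.harvard.edu
```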
Also see:

sbatch scripts and the interactive srun command are available on all lab Linux workstations.
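
For an interactive session, a sketch like the following should work, assuming the same tk-cpu Slurm queue described above; the resource values are placeholders:

```bash
# Start an interactive shell on a cluster node in the tk-cpu queue.
srun --partition=tk-cpu --cpus-per-task=4 --time=02:00:00 --pty bash
```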
# Data Storage

Processed data copied to the **datasync2** directory and flagged to be archived are synced to HMS Orchestra storage.
On the main storage server, directories located at **/data1/home/tk/public/datasync2** must be flagged with a 'transfer.txt' file in order for the data to sync with the Orchestra transfer node. Users must manually create the file. While transferring, a 'transferring.txt' file is created inside the directory. Once the transfer is done, a 'transfered.txt' file is created inside the same directory to indicate that it finished.
|
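As a concrete sketch of that workflow (the dataset directory name is a placeholder; the flag file names are the ones described above):

```bash
# Flag a processed dataset under datasync2 for archiving to Orchestra.
cd /data1/home/tk/public/datasync2/my_dataset   # "my_dataset" is a placeholder
touch transfer.txt

# While the sync is running, a 'transferring.txt' file appears in the directory;
# a 'transfered.txt' file is created once the transfer has finished.
ls
```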
# Networking and General IT Support