42295f156a15307edefc76a3f0035a146f8cef97
bch3-kirchhausen.md
... | ... | @@ -6,7 +6,7 @@ SBGrid provides comprehensive system administration support for computers that a |
6 | 6 | |
7 | 7 | Access to the file servers is available to lab members upon approval. Please email SBGrid at [help@sbgrid.org](mailto:help@sbgrid.org) and CC your Lab Admin. |
8 | 8 | |
9 | -Lab workstations (OS X, Linux, Windows) are managed by lab members. Please contact your Lab Admin for help. |
|
9 | +Lab workstations (OS X, Windows) are managed by lab members. Please contact your Lab Admin for help. |
|
10 | 10 | |
11 | 11 | [[_TOC_]] |
12 | 12 | |
... | ... | @@ -16,27 +16,50 @@ For general questions and server conenction issues email SBGrid at [help@sbgrid. |
16 | 16 | |
17 | 17 | New lab members should contact [help@sbgrid.org](mailto:help@sbgrid.org) to complete their initial setup. |
18 | 18 | |
19 | -# LLSM |
|
19 | +# Cluster |
|
20 | 20 | |
21 | -The Lattice Light Sheet Microscopy (LLSM) setup consists of a 208-core compute workstation cluster located in the Warren Alpert Building, |
|
22 | -where the data is also acquired. This local cluster is connected via 80GB link aggregated fiber connections to the server infrastructure at the CLS building, where the data is stored and later archived to HMS storage. A 41 TB capacity scratch fast SSD server is configured to allow temporary storage of the data being processed and is racked in the CLS building. It is also connected to a 261 TB capacity spinning disk data storage file server via 40GB copper links. |
|
21 | +A new CPU and GPU cluster is being provisioned. Lab members can submit jobs from any Linux workstation in the lab. Jobs can currently be submitted to either the tk-cpu or tk-gpu job queues. |
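
For example, assuming the tk-cpu and tk-gpu queues above are Slurm partitions (the cluster is driven with sbatch and srun, see "Submitting Jobs to the Cluster" below), a quick interactive test could look like the sketch below; adjust the resource requests for your own job:

```
# list the available partitions and their state
sinfo

# open a short interactive shell on the CPU queue
srun --partition=tk-cpu --cpus-per-task=4 --mem=8G --time=01:00:00 --pty bash

# or request a single GPU on the GPU queue
srun --partition=tk-gpu --gres=gpu:1 --pty bash
```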
|
23 | 22 | |
24 | -A new 348 CPU cluster is being provisioned. You can ssh with your labs account to tkv1.med.harvard.edu and submit your CPU job there. As the new Linux workstations are ser up in the lab jobs will be able to directly submit jobs from there. |
|
23 | +# Linux Systems |
|
24 | + |
|
25 | +All lab Linux systems are located across the lab space in WAB. The older HPC systems are in the Dreamspace computing room in WAB 133. The Visual Learning systems are located in the new Visual Learning Center in WAB room 144. All Linux system access is managed by the SBGrid ARC team. |
|
26 | + |
|
27 | +# Linux Systems Mount Points |
|
28 | + |
|
29 | +- LLSM project folders - /tkstorage/data |
|
30 | +- LLSM project folders - /tkstorage/data1expansion |
|
31 | +- User home folders - /tkstorage/home |
|
32 | +- SBGrid project use - /tkstorage/sbgrid-share |

33 | +- Cluster SSD scratch server - /scratch |

34 | +- Cluster NVMe scratch storage - /vscratch |
|
35 | + |
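If one of the paths above seems to be missing on a workstation, a quick check with standard Linux commands (nothing lab-specific is assumed here) is:

```
# confirm the shared storage mounts are present on this workstation
df -h /tkstorage/data /tkstorage/home /scratch /vscratch

# list the LLSM project folders
ls /tkstorage/data
```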
|
36 | +# Connecting To Linux |
|
37 | + |
|
38 | +After setting up your account with SBGrid, you can log into any of the local Linux systems in your lab. No matter which system you use, you will have the same home folder. |
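
As an illustration, assuming a workstation named tk-linux1 (a hypothetical hostname; ask your Lab Admin for the real ones), logging in from another lab machine and confirming the shared home folder might look like:

```
# log in to another lab workstation (the hostname is a placeholder)
ssh your_username@tk-linux1

# the home folder is the same on every lab Linux system
echo $HOME
ls /tkstorage/home
```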
|
39 | + |
|
40 | +# Connecting To Your Lab's Linux Systems Remotely |
|
41 | + |
|
42 | +You can connect to your lab's workstations by using ssh through our bastion hosts. Please review the following instructions, as you will need to request external SSH access (a sample connection is sketched after the links below): |
|
43 | + |
|
44 | +* [faq-remote-access-to-linux-computers](faq-remote-access-to-linux-computers) |
|
45 | + |
|
46 | +Also see: |
|
47 | + |
|
48 | +* [faq-remote](faq-remote) |
|
49 | + |
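Once external SSH access has been granted, the connection is a standard SSH jump through the bastion. The hostnames below are placeholders, so substitute the ones given in the FAQ above:

```
# one-off connection, jumping through the bastion host
ssh -J your_username@bastion.example.org your_username@your-workstation
```

Alternatively, an entry like the following in ~/.ssh/config lets a plain "ssh your-workstation" work:

```
Host your-workstation
    User your_username
    ProxyJump your_username@bastion.example.org
```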
|
50 | +# Submitting Jobs to the Cluster |
|
51 | +sbatch scripts and the interactive srun command are available on all lab Linux workstations. |
|
52 | + |
|
53 | +From your Linux account you can browse to the following folder to see some test scripts: |
|
54 | +* /tkstorage/sbgrid-share/lab-cluster-101 |
|
55 | + |
|
56 | +Look over the README.md file and test your first job. |
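
The scripts in that folder are the authoritative examples; as a rough sketch, a minimal CPU batch script could look like the following, where the partition names come from the queues listed above and the resource numbers are placeholders to adjust:

```
#!/bin/bash
#SBATCH --job-name=test-job
#SBATCH --partition=tk-cpu          # use tk-gpu plus --gres=gpu:1 for GPU jobs
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=02:00:00
#SBATCH --output=%x-%j.out          # job name and job ID in the log file name

# replace with your actual command
hostname
```

Submit it with sbatch test-job.sh and check on it with squeue -u $USER.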
|
25 | 57 | |
26 | 58 | # Data Storage |
27 | 59 | |
28 | 60 | Processed data copied to the **datasync2** directory and flagged to be archived are synced to HMS Orchestra storage. |
29 | 61 | On the main storage server, directories located at **/data1/home/tk/public/datasync2** must be flagged with a 'transfer.txt' file in order for the data to sync with the Orchestra transfer node. Users must manually create the file. While the transfer is in progress, a 'transfering.txt' file is created inside the directory. Once the transfer is done, a 'transfered.txt' file is created inside the same directory to indicate that it finished. |
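
For example, to flag a finished dataset for transfer (run on the main storage server; the dataset directory name here is hypothetical):

```
# mark a processed dataset so the Orchestra sync picks it up
touch /data1/home/tk/public/datasync2/my_dataset/transfer.txt

# the sync drops 'transfering.txt' while it runs and 'transfered.txt' when it has finished
ls /data1/home/tk/public/datasync2/my_dataset/
```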
30 | 62 | |
31 | -# Linux Systems mount points |
|
32 | - |
|
33 | -/tkstorage/data Current LLSM project folders |
|
34 | -/tkstorage/data1expansion Current LLSM project folders |
|
35 | -/tkstorage/home New cluster home folders |
|
36 | -/tkstorage/sbgrid-share SBGrid use |
|
37 | -/scratch SSD scratch Server |
|
38 | -/vscratch New cluster NVME scratch storage |
|
39 | - |
|
40 | 63 | # Networking and General IT Support |
41 | 64 | |
42 | 65 | For networking and general IT support for lab systems connected to the BCH network, contact the Children’s help desk at (617) 919-4357 or at [help.desk@childrens.harvard.edu](mailto:help.desk@childrens.harvard.edu). HMS IT has access to the networking closets at the Warren Alpert Building.