High Performance Computing

The High Performance Computing Cluster (HPCC) consists of roughly 15,000 cores (Intel and AMD), an SGI UV200 with 512 Intel cores and 4TB of RAM, an FDR-based InfiniBand (IB) network, and a 10GE network for the storage environment. High-memory nodes with up to 4TB of RAM are available for computations; most nodes have 196GB-512GB of RAM.


High Performance Computing (HPC) uses distributed computational cycles to decrease the time a single job would take. HPC workloads typically consist of search-based jobs or time- and compute-intensive jobs. String searching over genomic data for sequence comparisons, for example, is a classic “needle in a haystack” problem whose processing and analysis benefit from this kind of speed-up.

Researchers use custom and open-source software to analyze, distribute, and compute over large data sets. By following best practices and using distribution and cluster-related protocols such as MPI, users can decrease job run times several fold.
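
As a minimal sketch of what an MPI-style distributed job can look like in practice, the following LSF batch script requests a block of cores and launches a program with mpirun. The executable name (my_mpi_app), queue name, and resource values are illustrative placeholders, not actual cluster settings.

    #!/bin/bash
    # Minimal LSF + MPI submission sketch; the queue, walltime, memory,
    # and the executable ./my_mpi_app are illustrative placeholders.
    #BSUB -J mpi_example          # job name
    #BSUB -n 64                   # total cores (MPI ranks) requested
    #BSUB -R "rusage[mem=4096]"   # memory request in MB
    #BSUB -W 04:00                # wall-clock limit (HH:MM)
    #BSUB -q long                 # queue name (site-specific)
    #BSUB -o mpi_example.%J.out   # stdout file; %J expands to the job ID
    #BSUB -e mpi_example.%J.err   # stderr file

    # Launch the MPI program across the cores LSF has allocated; how mpirun
    # discovers the allocated hosts depends on the site's MPI/LSF integration.
    mpirun -np 64 ./my_mpi_app input.dat

Saved as, say, mpi_job.lsf, the script would be submitted with bsub < mpi_job.lsf so that LSF reads the #BSUB directives from the file.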

Examples of HPC distribution include Monte Carlo computations, time- and space-partitioned computations, and string-matching algorithms (over DNA sequences, etc.). Users can create programs and scripts on the cluster here at UMASS for pattern matching and general search-based needs; for example, against the HG18 reference genome we can write simple shell scripts that use the Perl scripting language for effective pattern matching. We (ARCS) can assist with these needs and help you create optimal routines and searches as needed.
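
As an illustration only, a script of that kind might look like the following sketch, which uses a Perl one-liner to report matches of a short DNA motif in a FASTA file; the motif and the file name hg18.fa are assumed placeholders.

    #!/bin/bash
    # Sketch: scan a reference FASTA file for a short DNA motif with Perl.
    # The motif and the file name hg18.fa are illustrative placeholders.
    PATTERN="GATTACA"
    GENOME="hg18.fa"

    # Skip FASTA header lines (">") and print the line number and 1-based
    # column of each match; matches spanning line breaks are not handled
    # in this simplified example.
    perl -ne '
        BEGIN { $p = shift }      # take the search pattern off @ARGV
        next if /^>/;             # ignore FASTA header lines
        chomp;
        while (/$p/g) {
            printf "line %d, column %d\n", $., pos() - length($p) + 1;
        }
    ' "$PATTERN" "$GENOME"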

The HPC environment runs the IBM LSF scheduling software for job management; a sketch of the basic submission and monitoring commands follows the hardware list below. The High Performance Computing Cluster (HPCC) consists of the following hardware:

  • A 56Gb FDR-based InfiniBand (IB) network and a 10GE network for the storage environment
  • Nine (9) GPU nodes (42 GPUs; Intel with 256GB RAM) with NVIDIA Tesla C2075 (6GB GDDR5, PCI Express 2.0 x16) or Tesla K80 GPUs
  • Thirteen (13) AMD Opteron 6380-based Dell chassis with 64 cores / 512GB RAM per blade (48 blades)
  • Seven (7) AMD (2x AMD Opteron 6278, 2.4GHz, 16C, Turbo CORE, 16M L2/16M L3, 1600MHz) based Dell chassis with 64 cores / 512GB RAM per blade (42 blades)
  • Three (3) Intel Xeon E5-2650 v3 @ 2.30GHz (QPI, Turbo, 20C)
  • Nine (9) Intel (Xeon E5-2650 2.00GHz, 20M Cache, 8.0GT/s QPI, Turbo, 8C, 95W, Max Mem 1600MHz) based chassis with 16 cores / 196GB RAM per blade (16 blades)
  • Two (2) SGI UV200 with 512 Intel Xeon E5-4600 cores and 4TB of fully addressable memory
  • One (1) AMD-based Dell chassis with 128 cores (Quad-Core AMD Opteron 2376) and 256GB RAM
  • Three (3) Intel (six-core Xeon X5650 @ 2.67GHz) based Dell chassis with 12 cores / 48GB RAM per blade (16 blades)
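
For orientation, the following sketch shows the basic LSF commands for submitting and monitoring a batch job; the script name and the job ID 12345 are examples, not real values.

    # Submit a job script (with its #BSUB directives) to LSF, then monitor it.
    # The script name and the job ID 12345 are examples only.
    bsub < myjob.lsf    # submit; LSF prints the assigned job ID
    bjobs 12345         # check the job's status (PEND, RUN, DONE, ...)
    bpeek 12345         # view the job's output while it is running
    bkill 12345         # cancel the job if needed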

HPC Accounts and Training:

If you are interested in an HPC account, please use this link to request access. Individual training for cluster usage and distribution of computations is available. We also offer group training sessions for Linux, HPC, and distribution. For training information, please contact hpcc-support@umassmed.edu.
