Attention SCI Cluster Users:
We have received a report of macOS Sonoma failing to open the hpc.umassmed.edu and ood.umassmed web pages in Safari. If you encounter this, please try another browser, such as Chrome, and let us know at hpc@umassmed.edu.
High Performance Computing (HPC) distributes computational work across many cores to reduce the time a single job takes. HPC jobs typically fall into two categories: search jobs and compute-bound (time- and process-intensive) jobs. Distributed string searching, for example, speeds up "needle and haystack" processing and analysis for genomic data comparisons.
Researchers use custom and open-source software to analyze, distribute, and run calculations on large data sets. By following best practices, users can decrease job run times severalfold with distribution and cluster protocols such as MPI.
Examples of HPC distribution include Monte Carlo computations, time- and space-intensive computations, and string-matching algorithms (over DNA sequences, etc.). Users can create programs and scripts on the cluster here at UMass for pattern matching and general search-based needs; for example, against the HG18 genome build we can write simple scripts in the Perl scripting language for effective pattern matching.
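The "needle and haystack" idea above can be sketched with a few shell commands; the sequence file and motif below are small illustrative placeholders, not actual HG18 data.

```shell
#!/bin/sh
# Minimal "needle in a haystack" sketch: search for a short DNA motif
# (the needle) in a plain-text file of sequences (the haystack).
# The file contents and the GATTACA motif are illustrative placeholders.

cat > sequences.txt <<'EOF'
ACGTACGTGATTACAACGT
TTTTGATTACATTTT
CCCCCCCCCCCCCCC
EOF

grep -n 'GATTACA' sequences.txt   # print matching lines with line numbers
grep -c 'GATTACA' sequences.txt   # count matching lines (prints 2 here)
```

On the cluster, the same search (via grep or an equivalent Perl one-liner) can be run over many chromosome files in parallel, which is where the HPC speed-up comes from.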
Resources
- SCI Cluster HPC Wiki (VPN required)
- Request a SCI Cluster HPC account (VPN required)
AlphaFold
Free for staff and faculty
AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment. AlphaFold is available on the SCI cluster. If you don't have an account on the SCI Cluster, you can request one here: https://hpcportal.umassmed.edu/
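A minimal sketch of what an AlphaFold submission might look like under LSF (the scheduler the cluster runs); the queue name, module name, and `run_alphafold` wrapper invocation are assumptions, so consult the SCI Cluster HPC Wiki for the actual workflow. The `--fasta_paths` and `--output_dir` flags follow the open-source AlphaFold release.

```shell
#!/bin/bash
# Hypothetical LSF job script for an AlphaFold prediction.
# The queue name, module name, and run_alphafold wrapper are assumptions;
# check the SCI Cluster HPC Wiki for the cluster's actual AlphaFold setup.
#BSUB -q long                    # queue name: assumption
#BSUB -n 8                       # CPU cores
#BSUB -R "rusage[mem=8G]"        # memory request
#BSUB -o alphafold.%J.out        # stdout log (%J expands to the job ID)

module load alphafold            # module name: assumption
run_alphafold \
  --fasta_paths=my_protein.fasta \
  --output_dir=alphafold_results/
```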
Priority Queues for faculty/labs.
- PI or Campus purchases equipment with an agreed five-year life span.
- PI retains access to the 5,000+ core shared cluster.
- PI has priority queue on purchased equipment.
- Jobs from short queue (<=4 hrs) are allowed to backfill when priority queues are idle. Backfilled jobs will not be preempted (comparable to the large queue).
- Electricity, Rent, Software and Hardware maintenance, and cluster administration are all included in the capital purchase cost.
- All software modules available on the Shared Cluster will be available on your Priority Queue cluster nodes.
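The short-queue backfill rule above can be illustrated with a sketch of an LSF job script; the queue name "short" and the wall-clock limit follow the bullet above, while the script body is a placeholder.

```shell
#!/bin/sh
# Sketch of an LSF job script targeting the short queue (<= 4 hrs),
# which can backfill idle priority-queue nodes without being preempted.
# The script is written to a file here so the directives are easy to see;
# on the cluster you would submit it with:  bsub < short_job.lsf
cat > short_job.lsf <<'EOF'
#!/bin/bash
#BSUB -q short              # short queue: jobs of <= 4 hours
#BSUB -W 4:00               # wall-clock limit (hh:mm)
#BSUB -n 4                  # number of cores
#BSUB -o short_job.%J.out   # stdout log (%J expands to the job ID)
./my_analysis.sh            # placeholder for your own program
EOF

cat short_job.lsf
```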
How to procure a Priority Queue.
- Contact the UMass Chan HPC Support Team: hpc@umassmed.edu
- Provide your hardware requirements (cores, memory, storage)
- The HPC team will prepare a quote and order the hardware for you, using UMass Chan's deep discounts with Dell and Lenovo.
- The HPC team will rack and administer your hardware.
- Sign a Memorandum of Understanding. The key conditions are:
- Your hardware will be attached to the SCI Cluster.
- A new queue will be created giving your team priority access to those cores.
- Your hardware will be removed in five years.
The SCI Cluster consists of the following hardware and software environment:
Networking:
- EDR/FDR-based InfiniBand (IB) network
- 100 Gigabit Ethernet network for the storage environment
Storage:
- 2+ petabytes of Panasas parallel high performance storage
Data storage and pricing details
Computing:
Scheduling Software:
- The HPC environment runs the IBM LSF scheduling software for job management
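Day-to-day interaction with LSF happens through a handful of standard commands; the queue name, core count, program name, and job ID below are illustrative placeholders.

```shell
# Standard IBM LSF commands for everyday job management.
# The queue name "long", the program, and job ID 12345 are placeholders.

bqueues                                  # list queues and their current load
bsub -q long -n 16 -o job.%J.out \
     mpirun ./my_mpi_app                 # submit a 16-core MPI job (the MPI
                                         # launch wrapper may differ here)
bjobs                                    # show your pending and running jobs
bkill 12345                              # cancel a job by its job ID
```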
HPC Accounts and Training:
If you are interested in an HPC account, please use this link to request access (VPN required). Individual support for cluster usage is available. For more information please contact hpc@umassmed.edu.