
Opuntia Cluster

Opuntia is a shared campus resource provided by the RCDC. It can deliver more than 15 million SUs per year, targeting large-scale parallel jobs from the UH research community. Opuntia contains 1,860 cores across 80 HP ProLiant SL230 compute blades (nodes), 4 HP ProLiant SL250 NVIDIA K40 GPGPU blades, and 1 HP ProLiant DL380 login node. The system is also equipped with 3 large-memory nodes: 1 HP ProLiant DL580 with 1 TB of main memory and 2 HP ProLiant DL560 each with 512 GB of main memory. Each compute node has 64 GB of memory, as does the login/development node. System storage includes a ~600 TB shared file system and 85 TB of local compute-node disk space (~1 TB/node). Opuntia also provides access to eight nodes containing two NVIDIA GPUs, giving users high-throughput computing and remote visualization capabilities. A 56 Gb/s Mellanox Ethernet switch fabric interconnects the nodes (I/O and compute). The cluster currently runs Rocks 6.1.1 and Red Hat Enterprise Linux 6.9.

Opuntia is housed in the RCDC. If you plan to use this system, please request an account, and make sure your PI has been granted an allocation. Refer to the Opuntia sections of the User Guide for specifics on running jobs, selecting resources, and the software available on this cluster.
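As a rough illustration of what submitting a job might look like, below is a minimal sketch of a batch script, assuming a SLURM scheduler. The job name, resource requests, and output filename here are hypothetical; the actual scheduler, partitions, and submission conventions on Opuntia are documented in the User Guide.

```shell
#!/bin/bash
# Minimal example batch script (sketch; assumes SLURM, directives below are illustrative)
#SBATCH -J example_job         # job name (hypothetical)
#SBATCH -N 1                   # request one node
#SBATCH -n 20                  # 20 tasks, one per core on a compute node
#SBATCH -t 00:10:00            # 10-minute wall-clock limit
#SBATCH -o example_job.%j.out  # stdout file; %j expands to the job ID

echo "job started"
# The real workload (e.g. an MPI launch) would go here.
echo "job finished"
```

Submitted with `sbatch`, the `#SBATCH` lines are read as scheduler directives; run as a plain shell script, they are ordinary comments.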

Node Type                  | CPU Type                                        | CPU Socket Count | Cores             | Memory  | Disk Space | Node Count
---------------------------|-------------------------------------------------|------------------|-------------------|---------|------------|-----------
Login (HP DL380)           | Intel Xeon E5-2680 v2 2.8 GHz                   | 2                | 20                | 64 GB   | 2.4 TB     | 1
Compute (HP SL230)         | Intel Xeon E5-2680 v2 2.8 GHz                   | 2                | 20                | 64 GB   | 1 TB       | 80
Large Memory (HP DL560)    | Intel Xeon E5-4650 v2 2.4 GHz                   | 4                | 40                | 512 GB  | 1 TB       | 2
XLarge Memory (HP DL580)   | Intel Xeon E7-4880 v2 2.5 GHz                   | 4                | 60                | 1024 GB | 1 TB       | 1
GPU Accelerator (HP SL250) | Intel Xeon E5-2680 v2 2.8 GHz + Tesla K40m GPU  | CPU: 2, GPU: 1   | CPU: 20, GPU: 2880 |        | 1 TB       | 4

Storage: ~600 TB of NFS storage

Interconnect: Opuntia nodes are connected via a 56 Gb/s Mellanox Ethernet fabric.