Resources - University of Houston
The Hewlett Packard Enterprise Data Science Institute (HPE DSI) owns and maintains several high-performance computing (HPC) platforms that are housed in RCDC and managed by the UH Research Computing Center (RCC). We support researchers and projects by offering leading-edge computational resources, including high-capacity storage and backup, parallel and big-data applications, high-speed networking, and access to widely used software. The HPE DSI has two large clusters (Opuntia and Sabine) to support scientific computation, big-data applications, and large-scale data storage. All nodes of each cluster are connected by high-speed networks. Combined across local and shared disks, the clusters offer more than 1,600 TB of high-performance disk space.

Download Facilities Descriptions (.docx)
Carya Cluster


The Carya cluster offers a total of 208 Hewlett Packard Enterprise (HPE) compute nodes and 64 NVIDIA Volta V100 GPUs. A theoretical peak performance of 770 teraflops is provided by ~10K CPU cores, 327K GPU cores, 45 TB of main memory, and 2 TB of high-bandwidth GPU memory. Interconnect: Carya nodes are connected via a Mellanox HDR InfiniBand switch with a 100 Gb/s line rate. Storage: Carya has 1,560 TB of shared hard-disk-based storage and 122 TB of shared flash storage.
Opuntia Cluster


Opuntia contains 1,860 cores within 80 HP ProLiant SL230 compute blades (nodes), 4 HP ProLiant SL250 NVIDIA K40 GPGPU blades, and 1 HP ProLiant DL380 login node. The system is also equipped with 3 large-memory nodes: 1 HP ProLiant DL580 with 1 TB of main memory and 2 HP ProLiant DL560s, each with 512 GB of main memory. Each compute node has 64 GB of memory, and the login/development node also has 64 GB. System storage includes a ~600 TB shared file system and 85 TB of local compute-node disk space (~1 TB/node). Opuntia also provides access to eight nodes containing two NVIDIA GPUs each, giving users high-throughput computing and remote visualization capabilities. A 56 Gb/s Mellanox switch fabric interconnects the nodes (I/O and compute). The cluster currently runs Rocks 6.1.1 and Red Hat Enterprise Linux 6.9.
Sabine Cluster


The Sabine cluster offers a total of 124 compute nodes with 3,472 cores and 25 TB of main memory, within 116 HP ProLiant XL170r nodes and 8 HP ProLiant XL190r nodes. Interconnect: Sabine nodes are connected via an Intel Omni-Path switch with a 100 Gb/s line rate. Storage: Sabine has ~725 TB of shared NFS storage.

Visualization Theater

The Center for Advanced Computing and Data Science Visualization Theater features seating for 30 people and a 16′ × 9′ screen supporting 4K digital-cinema resolutions up to 4096 × 2160 in both active and passive stereo 3D modes. The system is powered by an upgraded dual-boot Linux/Windows workstation with 64 GB of RAM, dual Intel Xeon Haswell processors (E5-2618L v3, 8 cores each at 2.3 GHz), and 2 TB of local storage. Two AMD V8800 graphics cards drive two Sony SRX-S105 projectors with polarizing shutter filters at 4096 × 2160 pixels. The system is controlled by an RGB Spectrum MediaWall 4500 processor with 24 inputs and 12 outputs. The video processor provides digital and analog video inputs for TV/DVD/laptop computers, and the workstation is compatible with most digital video formats. The room is also equipped with 7.1 surround sound.


Compilers: We maintain the latest GNU and Intel C/C++/Fortran compilers and the PGI compiler suite, as well as NVIDIA's CUDA compiler for GPU computing. We also maintain compilers for other languages, including Java, on request.

Programming environments: We maintain several versions of Python, Matlab, and R, as well as several other languages and environments. These environments include many commonly used third-party libraries and packages.
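As a quick illustration of working in one of these environments, the short Python sketch below checks which common third-party packages an environment provides; the package names listed are examples, not a guarantee of what is installed on any particular cluster.

```python
# Sketch: inspect a Python environment for commonly used scientific packages.
# The package list is illustrative only; actual availability varies by cluster.
import importlib.util

for pkg in ["numpy", "scipy", "pandas"]:
    spec = importlib.util.find_spec(pkg)
    status = "available" if spec is not None else "not installed"
    print(f"{pkg}: {status}")
```

A check like this is a convenient first step in a batch script, since it fails fast if a required package is missing from the selected environment.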

Data-processing tools: These include programs for dealing with large-scale data formats, like HDF5 and NetCDF.
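As a minimal sketch of working with one of these formats, the example below writes and reads a small HDF5 dataset using h5py, a common Python binding for HDF5 (the availability of h5py on a given system is an assumption; the file and dataset names are made up for illustration).

```python
# Sketch: writing and reading a small HDF5 dataset with h5py.
# h5py is one common Python binding for HDF5; names here are illustrative.
import h5py
import numpy as np

data = np.arange(12, dtype=np.float64).reshape(3, 4)

with h5py.File("example.h5", "w") as f:
    dset = f.create_dataset("measurements", data=data, compression="gzip")
    dset.attrs["units"] = "kelvin"          # metadata travels with the dataset

with h5py.File("example.h5", "r") as f:
    loaded = f["measurements"][:]           # read the full dataset back
    print(loaded.shape, f["measurements"].attrs["units"])
```

HDF5's self-describing layout (datasets plus attributes) is what makes it well suited to large-scale, long-lived scientific data.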

Numerical libraries: These include Intel’s Math Kernel Library (MKL), a set of highly tuned linear algebra routines; the GNU Scientific Library (GSL); FFTW Fourier transform library; and others.
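To sketch what these tuned libraries are used for, the example below computes a discrete Fourier transform, the same operation FFTW provides, via NumPy; on HPC systems NumPy's build may itself be backed by a tuned library such as MKL, though which backend is linked on any given cluster is an assumption.

```python
# Sketch: a discrete Fourier transform via NumPy (whose backend may be a tuned
# library such as MKL, depending on how it was built on the cluster).
import numpy as np

t = np.linspace(0.0, 1.0, 64, endpoint=False)
signal = np.sin(2 * np.pi * 4 * t)          # a pure tone at 4 cycles per window

spectrum = np.fft.rfft(signal)
peak_bin = int(np.argmax(np.abs(spectrum)))
print(peak_bin)                             # → 4, the tone's frequency bin
```

The same code runs unchanged whether the FFT backend is generic or vendor-tuned, which is the point of building NumPy against libraries like MKL.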

Community codes: We offer a large number of commonly used scientific software packages, including codes for molecular dynamics (such as LAMMPS and NAMD), visualization tools (such as ParaView), and many more.

Training and Workshops

We provide training sessions and workshops covering the more advanced features and capabilities of our high-performance computers and software. Courses are offered through the HPE DSI.