
Rockfish

Rockfish is a community-shared cluster at Johns Hopkins University. It follows the "condominium model" with three main integrated units: the first unit is based on a National Science Foundation (NSF) Major Research Instrumentation (MRI) grant (#1920103) and other major grants such as DURIP/DoD, the second unit contains medium-size condos (Schools' condos), and the third unit is the collection of condos purchased by individual research groups. All three units share a common base infrastructure, and resources are shared by all users. Rockfish provides resources and tools to integrate traditional High Performance Computing (HPC) with Data Intensive Computing and Machine Learning (ML). As a multi-purpose resource for all fields of science, it provides High Performance and Data Intensive Computing services to Johns Hopkins University, Morgan State University, and ACCESS researchers as a Level 2 Service Provider.

Each Rockfish compute node has two 24-core Intel Xeon Cascade Lake 6248R processors (3.0GHz base frequency) and a 1TB NVMe local drive. The regular and GPU nodes have 192GB of DDR4 memory, whereas the large-memory nodes have 1.5TB of DDR4 memory. The GPU nodes also have four Nvidia A100 GPUs.
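
For reference, the per-node configurations described on this page can be summarized in a short data structure. The following is a minimal Python sketch that uses only the node counts and specifications listed here; the names NodeType and NODE_TYPES are illustrative and are not part of any Rockfish software.

from dataclasses import dataclass

@dataclass
class NodeType:
    name: str       # node pool name used on this page
    count: int      # number of nodes in the pool
    cores: int      # cores per node (2 x 24-core Xeon 6248R)
    memory_gb: int  # DDR4 memory per node, in GB
    gpus: int = 0   # Nvidia A100 GPUs per node

# Figures taken from the node descriptions on this page.
NODE_TYPES = [
    NodeType("regular", count=368, cores=48, memory_gb=192),
    NodeType("large-memory", count=10, cores=48, memory_gb=1524),  # ~1.5TB
    NodeType("gpu", count=10, cores=48, memory_gb=192, gpus=4),
]

for nt in NODE_TYPES:
    print(f"{nt.name}: {nt.count} nodes, {nt.cores} cores, "
          f"{nt.memory_gb}GB RAM, {nt.gpus} GPUs each")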

Rockfish cluster at Johns Hopkins University - Regular Memory nodes

Johns Hopkins University participates in the ACCESS Federation with its NSF-funded flagship cluster "rockfish.jhu.edu" (NSF MRI award #1920103), which integrates high-performance and data-intensive computing while developing tools for generating, analyzing, and disseminating data sets of ever-increasing size. The cluster contains compute nodes optimized for different research projects and complex workflows. Rockfish's 368 Regular Memory nodes are intended for general-purpose computing, machine learning, and data analytics. Each regular compute node has two Intel Xeon Gold Cascade Lake (6248R) processors (3.0GHz base frequency, 48 cores per node), 192GB of memory, and 1TB of NVMe local storage. All compute nodes have HDR100 connectivity, and the cluster has access to several GPFS file systems totaling 10PB of storage. 20% of these resources are allocated via ACCESS.

Rockfish cluster at Johns Hopkins University - Large Memory nodes

Rockfish's 10 Large Memory nodes are intended for applications that need more than 192GB of memory (up to 1.5TB), as well as machine learning and data analytics. Each large-memory node has two Intel Xeon Gold Cascade Lake (6248R) processors (3.0GHz base frequency, 48 cores per node), 1,524GB of memory, and 1TB of NVMe local storage. As with the regular nodes, all nodes have HDR100 connectivity, the cluster has access to several GPFS file systems totaling 10PB of storage, and 20% of these resources are allocated via ACCESS.
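
Since the large-memory pool targets jobs that exceed the 192GB available on a regular node, a rough rule of thumb for picking a node pool by per-node memory footprint looks like the sketch below. The helper pick_node_type is hypothetical and not part of any Rockfish tooling; the thresholds come from the figures on this page.

def pick_node_type(required_gb: float) -> str:
    """Pick a Rockfish node pool by per-node memory need (rule of thumb).

    Thresholds follow this page: regular/GPU nodes have 192GB,
    large-memory nodes have roughly 1.5TB (1,524GB).
    """
    if required_gb <= 192:
        return "regular"
    if required_gb <= 1524:
        return "large-memory"
    raise ValueError("Exceeds a single node; distribute the job across nodes instead.")

print(pick_node_type(64))    # -> regular
print(pick_node_type(750))   # -> large-memory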

Rockfish cluster at Johns Hopkins University - GPU nodes

Rockfish's 10 GPU nodes are intended for applications that need GPU processing, machine learning, and data analytics. Each GPU node has two Intel Xeon Gold Cascade Lake (6248R) processors (3.0GHz base frequency, 48 cores per node), 192GB of memory, four Nvidia A100 GPUs, and 1TB of NVMe local storage. As with the other node types, all nodes have HDR100 connectivity, the cluster has access to several GPFS file systems totaling 10PB of storage, and 20% of these resources are allocated via ACCESS.
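
As a back-of-the-envelope check of the figures above, the 20% ACCESS share can be computed from the published node counts. The names POOLS, CORES_PER_NODE, and ACCESS_SHARE are illustrative; this is arithmetic only, and the actual allocation units and policies are set by ACCESS.

# Node counts and cores per node from the descriptions above.
POOLS = {"regular": 368, "large-memory": 10, "gpu": 10}
CORES_PER_NODE = 48
ACCESS_SHARE = 0.20   # "20% of these resources are allocated via ACCESS"

total_nodes = sum(POOLS.values())            # 388 nodes
total_cores = total_nodes * CORES_PER_NODE   # 18,624 cores
access_cores = total_cores * ACCESS_SHARE    # ~3,725 core-equivalents

print(f"{total_nodes} nodes, {total_cores} cores, "
      f"~{access_cores:.0f} cores available via ACCESS")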
