
June HPCF Newsletter

Hi everyone,


I’m writing to give everyone an update on the UMBC HPCF, particularly the new Retriever Research Storage System (RRStor). The Research Computing team within DoIT sends these newsletters to keep users of our infrastructure informed of changes and of new support offerings.


System Status

The chip HPC cluster is fully operational and running normally.


Updates: Migration of Storage Volumes

University budget cuts have compelled us to carefully examine the enterprise storage systems we maintain. While the new Ceph Storage Cluster (RRStor) is here to stay, the Isilon storage system will be sunset at the end of this calendar year. The Isilon system was purchased at the start of the COVID-19 pandemic, but its proprietary file system, high cost of expansion, and high software support costs make it no longer feasible to maintain.


Instead, we’ll dedicate a portion of the savings from retiring the Isilon to expanding RRStor and its capabilities. This will require migrating research storage volumes from the Isilon Storage System to Ceph before the end of the calendar year. This represents the careful movement of more than 2PiB of data, so DoIT Research Computing staff will work with researchers to schedule the migration of their storage volumes. Note that the duration of each volume migration will depend on the size of the volume.


Research Volume Names: To avoid conflicts with volume names as this process begins, and to avoid future confusion between group names and research volume names, we’ll be deploying new research volumes with the following syntax: “/umbc/rs/groupName”. So group “pi_doit” will find its data under “/umbc/rs/pi_doit”, and group “nsf2346667” under “/umbc/rs/nsf2346667”. Each faculty PI group’s storage quota will be at least 10TiB.
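The naming convention above can be sketched in a few lines of shell. The group names are the two examples from this newsletter; the paths are illustrative only and are not meant to be run against the live filesystem.

```shell
# Map group names onto the new research volume layout
# described above ("/umbc/rs/groupName"). Illustrative only.
for group in pi_doit nsf2346667; do
  echo "/umbc/rs/$group"
done
```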


Migration Process: The Research Computing team will contact each PI to schedule a transition date before October 15. Before this date, the team will make a copy of the research volume on the new system. On the transition date, the team will disable the old volume and activate the new one. A month later, the old volumes will be deleted as we decommission the old storage system entirely. In most cases, there should be nothing for individual users or PIs to do.
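For the curious, the copy-then-cutover flow described above can be sketched with throwaway temp directories standing in for the real Isilon and RRStor volumes. This is a simplified illustration, not the actual migration tooling; the real migration is performed entirely by DoIT Research Computing staff, and users do not need to run anything.

```shell
# Sketch of a copy-then-cutover migration. The directories and
# file are stand-ins; the exact tools DoIT uses are not specified
# in this newsletter.
old=$(mktemp -d)   # stands in for the old Isilon volume
new=$(mktemp -d)   # stands in for the new RRStor volume
echo "results" > "$old/data.txt"

cp -Rp "$old"/. "$new"/   # 1. copy the volume before the transition date
chmod -w "$old"           # 2. on that date, the old volume is disabled
diff -r "$old" "$new"     # 3. the new copy is verified and becomes active
chmod +w "$old"; rm -rf "$old" "$new"   # (real deletion of old data happens a month later)
```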

[Figure: General flow for data transition]


User Support: Introducing Drop-in Office Hours

We’re pleased to announce weekly drop-in office hours on Mondays in ENGR 102 from 1500–1600 ET. Participants may join in person or virtually; see the HPCF myUMBC Group Events page for more details. In addition:

  • Please continue to make use of the HPCF Office Hours.

  • We’re always working to make our Wiki Documentation better for users. Please let us know if we’ve missed something.

  • Stay tuned for more in-person training and tutorials.


Shared Infrastructure Governance (SIG)

The SIG-CPU and SIG-GPU Committees have met and selected members. Please see the Shared Infrastructure Governance webpage for more information. The Slurm workload manager and faculty contribution models are among the first topics discussed in these groups.


Publications

If you have any publications, presentations, theses, or other works that made use of the campus cluster(s), please submit an RT Ticket with bibliographic information so that we can accurately reflect this work in our records and on the HPCF Website.


Need Help?

As always, please send any issues or questions to the Research Computing RT Queue (hpcf.umbc.edu > User Support > Request Help).


Max Breitmeyer

HPC and Unix GA


Posted: June 26, 2025, 1:19 PM