High-end Systems

October 2018 update

Since external funding for our systems ceased at the end of 2016, we have all been working hard to ensure these valuable resources remained accessible to all stakeholders and that, as they reached the end of their life, replacements and upgrades were available. At the strategic level, this has meant we’ve been fully engaged in the University’s planning for the next stage in the maturing of research computing, now supported through the Petascale Campus Initiative (PCI) (search for updates about the PCI on the Staff Hub). At the operational level, the move to incorporate our systems and people into the University’s Research Platform Services (ResPlat) was an obvious one.

Throughout 2018 our very experienced staff have been working with the ResPlat team to make this transition. We are very pleased that our systems support staff will officially transfer to ResPlat at the end of the year, so this continuity of service will remain available to our users. Further, with our Snowy cluster being added to Spartan, we have been able to arrange a private partition comprising the first 11 Snowy nodes for migrating projects; this partition is up and running jobs right now. All the Snowy nodes will be dedicated to Melbourne Bioinformatics users until the hardware upgrades coming through the PCI come online in early 2019.

Meanwhile, we encourage users to undertake training on launching and managing jobs on Spartan, while we continue to help with your data storage needs.
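
For users who are new to Spartan, batch jobs are submitted through the Slurm scheduler. The following is a minimal sketch, in Python, of composing and submitting a job from a Spartan login node; the partition name (“snowy”), the module name, the executable and the resource requests are illustrative assumptions to be replaced with the values that apply to your own project.

    import subprocess

    # Slurm directives for a small test job; the partition, module and
    # executable names below are placeholders, not confirmed values.
    job_script = "\n".join([
        "#!/bin/bash",
        "#SBATCH --job-name=example-job",
        "#SBATCH --partition=snowy",    # assumed name of the private Snowy partition
        "#SBATCH --ntasks=1",
        "#SBATCH --cpus-per-task=8",
        "#SBATCH --mem=32G",
        "#SBATCH --time=02:00:00",
        "module load foss",             # hypothetical module; check `module avail`
        "srun ./my_analysis",           # hypothetical executable
    ]) + "\n"

    # sbatch reads the job script from standard input when no file is given.
    result = subprocess.run(["sbatch"], input=job_script, text=True,
                            capture_output=True, check=True)
    print(result.stdout.strip())        # e.g. "Submitted batch job 123456"

Equivalently, the same directives can be saved to a plain file and submitted directly with sbatch; the Spartan training material covers this in more detail.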

From now on, for all help with your research computing needs, please contact the whole support team via hpc-support@unimelb.edu.au. If your query relates specifically to Melbourne Bioinformatics expertise, it will be forwarded to our experts. The only exceptions are if you are still working on Barcoo (up to the 19 November close-off) or if your query concerns data stored on Melbourne Bioinformatics filesystems (up to 14 December); in those cases, please continue to use help@melbournebioinformatics.org.au.

Our best wishes for success with your research in the future. We look forward to continuing to share our experience with the University’s data-intensive research community through a range of exciting projects and activities.

Systems

IBM iDataplex x86 system – Barcoo

  • Peak performance of 20 teraFLOPS, with Xeon Phi cards running nominally at 1 teraFLOP each.
  • 1120 Intel Sandy Bridge compute cores running at 2.7 GHz.
  • 67 nodes with 256 GB RAM and 16 cores per node.
  • 3 nodes with 512 GB RAM and 16 cores per node.
  • 20 Xeon Phi 5110P cards installed across 10 nodes.
  • Connected to a high-speed, low-latency Mellanox FDR14 InfiniBand switch for inter-process communications.
  • The system runs the RHEL 6 operating system, a Linux distribution.

Lenovo x86 system – Snowy

  • Peak performance: compute nodes currently performing at 30 teraFLOPS (a rough cross-check is sketched after this list).
  • 992 Intel Haswell compute cores running at 2.3 GHz.
  • 29 nodes with 128 GB RAM and 32 cores per node.
  • 2 nodes with 512 GB RAM and 32 cores per node.
  • Connected to a high-speed, low-latency Mellanox FDR InfiniBand switch for inter-process communications.
  • The system runs the RHEL 6 operating system, a Linux distribution.
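
The peak figures quoted for both systems can be roughly cross-checked from the specifications above. The sketch below treats Barcoo’s quoted 20 teraFLOPS as the nominal contribution of its 20 Xeon Phi cards and assumes 16 double-precision floating-point operations per Haswell core per cycle (AVX2 with FMA) for Snowy; the FLOPs-per-cycle value is a microarchitectural assumption rather than a figure taken from this page.

    # Rough cross-check of the quoted peak-performance figures.

    # Barcoo: 20 Xeon Phi 5110P cards at a nominal ~1 teraFLOP each.
    barcoo_phi_tflops = 20 * 1.0
    print(f"Barcoo accelerators: {barcoo_phi_tflops:.0f} TFLOPS (quoted: 20)")

    # Snowy: 992 Haswell cores at 2.3 GHz, assuming 16 double-precision
    # FLOPs per core per cycle (AVX2 with FMA).
    cores, clock_ghz, flops_per_cycle = 992, 2.3, 16
    snowy_peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
    print(f"Snowy theoretical peak: {snowy_peak_tflops:.1f} TFLOPS "
          f"(vs. ~30 TFLOPS currently achieved)")

As expected, the sustained figure reported for Snowy sits somewhat below this theoretical peak; benchmark performance rarely reaches the nominal maximum.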

Storage infrastructure 

  • 700 TB GPFS Parallel Data Store
  • 1 PB HSM (hierarchical storage management) tape system, made available through GPFS