High-end Systems

Systems

IBM iDataPlex x86 system – Barcoo

  • Peak performance of 20 teraFLOPS, with the Xeon Phi cards running nominally at 1 teraFLOPS each (see the worked sketch after the Snowy system description below).
  • 1120 Intel Sandy Bridge compute cores running at 2.7GHz.
  • 67 nodes with 256GB RAM and 16 cores per node.
  • 3 nodes with 512GB RAM and 16 cores per node.
  • 20 Xeon Phi 5110P cards installed across 10 nodes.
  • Connected to a high-speed, low-latency Mellanox FDR14 InfiniBand switch for inter-process communications.
  • The system runs the RHEL 6 operating system, a Linux distribution.

Lenovo x86 system – Snowy

  • Peak performance – the compute nodes currently deliver 30 teraFLOPS (see the worked sketch after this list).
  • 992 Intel Haswell compute cores running at 2.3GHz.
  • 29 nodes with 128GB RAM and 32 cores per node.
  • 2 nodes with 512GB RAM and 32 cores per node.
  • Connected to a high-speed, low-latency Mellanox FDR InfiniBand switch for inter-process communications.
  • The system runs the RHEL 6 operating system, a Linux distribution.
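
For readers relating the core counts and clock speeds above to the quoted teraFLOPS figures, theoretical peak performance is commonly estimated as cores × clock speed × floating-point operations per cycle. The sketch below is illustrative only: the 16 double-precision FLOPs per cycle assumed for Haswell (AVX2 with FMA) is an architectural assumption rather than a figure published here, while the nominal 1 teraFLOPS per Xeon Phi card comes from the Barcoo description. The 30 teraFLOPS quoted for Snowy is an observed figure that sits below the theoretical ceiling.

    # Illustrative sketch only: relate the hardware figures above to teraFLOPS.
    # The 16 DP FLOPs/cycle for Haswell (AVX2 + FMA) is an architectural
    # assumption, not a figure published on this page.

    def theoretical_peak_tflops(cores, clock_ghz, flops_per_cycle):
        """Theoretical peak in teraFLOPS: cores x clock (GHz) x FLOPs per cycle."""
        return cores * clock_ghz * flops_per_cycle / 1000.0

    # Barcoo's headline figure: 20 Xeon Phi 5110P cards at a nominal ~1 teraFLOPS each.
    barcoo_phi_tflops = 20 * 1.0

    # Snowy's host CPUs: 992 Haswell cores at 2.3 GHz.  The quoted 30 teraFLOPS
    # is the currently observed figure, below this theoretical ceiling.
    snowy_peak = theoretical_peak_tflops(cores=992, clock_ghz=2.3, flops_per_cycle=16)

    print(f"Barcoo Xeon Phi contribution: ~{barcoo_phi_tflops:.0f} teraFLOPS")
    print(f"Snowy theoretical host peak:  ~{snowy_peak:.1f} teraFLOPS")   # ~36.5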

Storage infrastructure (shared by all systems)

  • 700TB GPFS Parallel Data Store
  • 1PB HSM tape system, made available through GPFS

Access

Applications for new Melbourne Bioinformatics projects have closed.  Existing projects will continue to be supported in 2018.

The University of Melbourne provides access to compute and storage resources through Research Platform Services.  For more information, and to apply, please see:

Research Platform Services

Project management and reporting

  • Access to resources is managed through a project.
  • Each project has a Project Manager who is responsible for all resources used by the project.
  • Project Managers are responsible for keeping records of account holders on the systems up to date.
  • Project Supervisors are required to submit one annual report on research outcomes (in February of the year following the year in which the systems were used).

Storage

Melbourne Bioinformatics has limited data storage capacity. Currently, data storage is intended only for current work, not long-term archiving. Data storage and backup capacity is becoming a serious problem and we need your assistance. By regularly removing completed data from the system you make the system more responsive, minimise backup load, and help keep us from running out of storage space. Please ensure all project members remove any data not needed for their immediate compute needs.
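
As a hedged starting point for this cleanup, the sketch below lists the largest files under a project directory that have not been accessed for some time. The directory path and the 90-day threshold are placeholders for illustration, not Melbourne Bioinformatics conventions.

    # Hedged sketch: list large files that have not been accessed recently
    # under a project directory, as candidates for removal.  The path and the
    # 90-day threshold are placeholders, not site conventions.
    import os
    import time

    PROJECT_DIR = "/path/to/your/project"   # placeholder path
    DAYS_UNUSED = 90
    cutoff = time.time() - DAYS_UNUSED * 86400

    candidates = []
    for root, _dirs, files in os.walk(PROJECT_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                     # skip unreadable or vanished files
            if st.st_atime < cutoff:
                candidates.append((st.st_size, path))

    # Report the 20 largest stale files, biggest first.
    for size, path in sorted(candidates, reverse=True)[:20]:
        print(f"{size / 1e9:8.2f} GB  {path}")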

Systems documentation (including detailed terms and conditions)

Costs of Access

There will be no charge for the use of computational resources. Use of data storage beyond the allocation granted to a project may incur a charge.

There is some budget to install application software and data sets that are commonly used by the life sciences research community. Commercial software required by only one project may be purchased at the Facility Manager’s discretion.

Users are expected to access the Facility via AARNet. These charges are covered; however, users must pay any communications charges billed directly to them by AARNet or by any other data communications organisations they use.

Principles governing resource access

The governing principles of the allocation processes are informed by the requirements and priorities of our member institutes and by our observations of how researchers use our systems. In summary, we are:

  • simplifying reporting processes to reduce the burden on researchers
  • introducing a more flexible allocation process for the majority of our users
  • maximising productivity of our systems by finding cloud and local cluster solutions for smaller jobs, if appropriate, and continuing to offer support and training to ensure jobs run efficiently

Access from 1 January 2018 – notice to holders of legacy projects

State Government funding of VLSCI ceased at the end of 2016. As part of the transition to Melbourne Bioinformatics, we continued, as a gesture of goodwill, to support existing projects from all Victorian institutes during 2017. From 2018, these legacy projects are no longer eligible to run on our systems – final compute jobs ran only until 11:59pm on 31 December 2017.