ESM-Symposium: Exploiting Tier-0/1 Supercomputer Hardware in Jülich for Earth System Science
27-29 May 2019, Jülich Supercomputing Centre
The ESM symposium 2019 brings together the users of the Helmholtz ESM partition with computational scientists and system administrators at JSC. It will take stock of the achievements in Earth system modelling realised through the ESM partition, offer tutorials for problem solving (with a focus on GPU usage), and provide a forum to discuss the future use of this resource and its integration into the planned Helmholtz-wide initiatives Pilot Lab Exascale Earth System Modelling (PL-EESM) and Joint Lab EESM.
In 2017, Helmholtz requested and received supplementary funding for an additional compute partition (ESM partition) on the next-generation Tier-0/1 system JUWELS at Forschungszentrum Jülich, dedicated specifically to Earth system science simulations and Big Data workflows. This extension comes in two instalments: the first part has been in operation since July 2018, while the second part will be set up in 2020.
JUWELS is a modular supercomputer consisting of a cluster and a booster component, both with fast access to a large, hierarchically organised storage cluster. The cluster was installed in mid-2018 and contains about 2,500 standard dual-socket Intel Skylake compute nodes, of which about 50 are each accelerated by four NVIDIA V100 GPUs (www.fz-juelich.de/ias/jsc/JUWELS). It will be substantially extended with accelerator hardware in 2020. The main funding for JUWELS comes from the Gauss Centre for Supercomputing (GCS) through the BMBF and the North Rhine-Westphalian MKW.

The key idea of the Helmholtz ESM partition is to provide dedicated next-generation Tier-0/1 capability to Earth system science. It can be used, on the one hand, for next-generation scientific software implementation, testing, performance analysis, and tuning, and, on the other hand, for production runs and the preparation of ensuing frontier simulations. Because the ESM "partition" is realised not as dedicated hardware but as an equivalent share of compute time on the entire system, it offers enormous flexibility to Helmholtz Earth system scientists. They can utilise the dedicated "partition" under their own governance while also making use of the full capability of the Tier-0/1 system at JSC, e.g. to perform frontier simulations. This concept offers strong synergies with respect to code optimisation and project distribution within a fully equipped HPC system.