Predictions for HPC in 2018

The epyc comeback of AMD

Intel has dominated the HPC field for the last couple of years, ever since AMD's Opteron line started to decline, likely due to the failed Bulldozer core. During that reign, ARM-based CPUs have shown promise of challenging Intel, as has IBM's Power line, but neither has posed a serious threat to the Intel Xeon as the default choice for an HPC server. Both have shown great figures in e.g. power efficiency, but as non-x86 architectures, HPC centers have been hesitant to adopt them due to anticipated difficulties with e.g. third-party commercial software.

The situation may, and is likely to, change with the resurrection of AMD (which we anticipated a year ago). They have introduced a new x86 server processor codenamed "Naples", the first generation of the Epyc product line, and have given indications that the next generation, codenamed "Rome", will be even juicier. These chips deliver solid real-world application performance with greater memory bandwidth than the latest Intel Xeon "Skylake", even if their peak floating-point performance is up to four times lower. This makes the chip an interesting option with a good price/performance ratio, especially for memory-bound applications.

It is not very hard to predict that a number of Top500 entries on the June list, and especially on the November list, will feature the new AMD chips. I would even go as far as 2.5% and 5% of the entries, respectively.

Scale-out storage solutions, meet HPC

Lustre will remain the dominant HPC storage technology, but the complex needs that scientific research poses for data management and handling are hard to meet with Lustre alone. It is not only about raw capacity and performance: the need for storing and sharing data sets calls for features such as multi-tenancy, security and multi-protocol support that are difficult or cost-inefficient to implement with traditional HPC parallel storage systems.

Computing centres will expand their storage strategies with an additional layer between the high-performance parallel storage and the tape-based archive capacity, consisting of software-defined scale-out storage solutions, i.e. cheap spinning-disk server farms that can grow organically and provide a platform for implementing the additional capabilities needed.




Container technologies into mainstream

HPC in virtualized environments, such as OpenStack-based cloud computing platforms, is really not there yet. It will still be impossible to run MPI jobs over thousands of nodes on any of the "cloud HPC" platforms for codes featuring any non-trivial communication patterns.

However, the fast-growing interest in using container technologies in HPC suggests they have hit the sweet spot between the benefits of virtualization and the performance penalty it usually carries. Technologies like Singularity enable using containers in a "modules on steroids" fashion, helping to solve complex library dependencies and other programming-environment headaches with near-bare-metal performance. This is so appealing that I would assume almost all HPC sites will start supporting them over the course of the next year. Some sites may even go further, adopt Kubernetes or similar, and really start running their whole HPC environments in a "container orchestrated" fashion.
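As a concrete illustration of the "modules on steroids" idea, here is a minimal sketch of a Singularity workflow. The image source, file name and application are illustrative only, and the exact commands vary between Singularity versions:

```shell
# Pull a container image from a registry (illustrative source;
# requires a Singularity installation on the host).
singularity pull docker://ubuntu:16.04

# Run a command inside the container; the user's home directory and
# key host filesystems are visible by default, so data access is
# unchanged compared to running on bare metal.
singularity exec ubuntu-16.04.simg cat /etc/os-release

# Hybrid MPI model: the host MPI launches the ranks, each of which
# executes inside the container ("./my_app" is a hypothetical MPI code).
mpirun -n 128 singularity exec ubuntu-16.04.simg ./my_app
```

In this model the container behaves much like an environment module: users get a reproducible software stack without administrator involvement, while the compute-intensive code still runs directly on the host kernel and interconnect.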

Quantum supremacy?

There has been a lot of momentum around quantum computing during 2017. D-Wave Systems Inc. continues to develop its quantum annealer, and announced some new customers and proof-of-concept studies this year. Meanwhile, technology giants like IBM and Google have been in a small-scale race towards a sufficiently large gate-model quantum computer. The country-level race may also be heating up now that China has started to invest heavily in quantum computer research.

It looks like many technological obstacles have been cleared lately, so the race may even yield the first signs of quantum supremacy this year. The term refers to solving a scientific problem with a quantum computer that would be genuinely intractable by means of classical computers. This may still be a bit far-fetched, but it would be a cool leap forward.

HPC at CSC in 2018?

At CSC, we hope to be able to reveal the prime vendor of our next-generation computing environment around the autumn timeframe, and to have the first installations powered on by the end of the year. The new environment will address the challenges in the management of scientific data and provide world-class computing capabilities for current and emerging workloads. I have full trust that it will be a very modern and fresh approach, meeting the needs of research in Finland for a number of years to come.


Pekka Manninen


Dr Pekka Manninen is the manager of the Science Support group at CSC, and a long-term HPC geek. Twitter: @pekkamanninen
