HPC Networking

There are four main networking technologies in HPC clusters. The first two, InfiniBand and Ethernet, provide the main connectivity between compute nodes, while the latter two, Fibre Channel and SAS, connect nodes to their storage systems.


InfiniBand
InfiniBand was developed to overcome the limitations of older interconnect technologies, and it is currently the fastest networking option commonly available for an HPC cluster. The advantages of InfiniBand are high throughput (40 Gb/s) and low latency, which is especially important for tightly coupled parallel applications whose nodes exchange many small messages.

InfiniBand acts as an independent messaging service between the various nodes in the HPC cluster. Ethernet networks require communication between nodes to pass through the operating system kernel and thus consume more CPU resources. InfiniBand, on the other hand, uses Remote Direct Memory Access (RDMA) to bypass the operating system and move data directly between the memory of the communicating nodes.
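To make the contrast concrete, here is a minimal, hypothetical sketch of conventional socket messaging over a loopback connection. Every send and receive below is a system call that copies data through the kernel, which is exactly the path that InfiniBand's RDMA verbs avoid by letting the network adapter read and write registered application memory directly.

```python
# Conventional socket messaging: each send()/recv() is a system call,
# so data crosses the kernel on every hop. RDMA bypasses this path.
import socket
import threading

def echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)   # kernel copies data into our buffer
        conn.sendall(data)       # kernel copies it back out

# Loopback stand-in for a second compute node.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=echo_server, args=(server,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"halo exchange")  # system call -> kernel -> peer
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())
```

On real InfiniBand hardware this exchange would instead be expressed with verbs (e.g., through the libibverbs library), where buffers are registered once and subsequent transfers happen without per-message kernel involvement.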

InfiniBand is currently the best networking technology for HPC clusters, but it is more expensive than Ethernet. Applications that require the maximum possible performance, such as oil and gas exploration modeling and financial computation, should build their clusters with InfiniBand networking technology.


Ethernet
Although InfiniBand currently outperforms it, Ethernet is the most common wired networking technology. It is available in two varieties for HPC clusters: Gigabit Ethernet and 10 Gigabit Ethernet (10 GigE).

Gigabit Ethernet is the entry-level networking standard and is not recommended unless the project has very tight budgetary constraints. 10 Gigabit Ethernet is a less expensive alternative to InfiniBand, but its nominal speed is one quarter of InfiniBand's (10 Gb/s versus 40 Gb/s). Both 10 GigE and InfiniBand are heavily utilized in today's HPC clusters.
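As a rough illustration of what these nominal rates mean in practice, the short sketch below computes the time to move a hypothetical 100 GB dataset over each link. The figures are the nominal line rates quoted above; real-world throughput is lower due to encoding and protocol overhead, so only the ratios should be taken seriously.

```python
# Transfer-time comparison at the nominal link rates quoted in the text.
links_gbps = {
    "Gigabit Ethernet": 1,
    "10 Gigabit Ethernet": 10,
    "InfiniBand": 40,
}

dataset_gigabits = 800  # a hypothetical 100 GB dataset (100 * 8 gigabits)

for name, rate in links_gbps.items():
    print(f"{name}: {dataset_gigabits / rate:.0f} s")

# At nominal rates, InfiniBand moves the same data 4x faster than 10 GigE.
speedup = links_gbps["InfiniBand"] / links_gbps["10 Gigabit Ethernet"]
```

The same arithmetic explains the cost trade-off: 10 GigE takes four times as long per transfer, which may be perfectly acceptable for workloads that are not communication-bound.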