Great introduction to InfiniBand

The InfiniBand Trade Association has published a very useful white paper introducing InfiniBand technology. InfiniBand is a network technology that greatly boosts computing performance by letting applications on different servers communicate directly, bypassing much of a standard network's usual communication path. Because it consumes only the server resources those applications actually need and little else, InfiniBand can significantly increase computational speed and performance. The white paper is definitely worth a read for anybody interested in high-performance computing (HPC).

Although InfiniBand has established itself as a standard in high-performance computing, you may also want to check out a different perspective on the competing technologies that are seeking to overtake it.

As Nebojsa Novakovic writes in an article for The Inquirer, Intel, which created InfiniBand, may be putting it on the back-burner to promote 10 Gigabit Ethernet (10GbE) technology instead. Novakovic writes,

IB has very decent application support in high performance computing these days, however its protocol stack is fattened by its envisioned need to act as a common fabric for everything from storage access to networking and clustering, which naturally increases CPU load and latency.

So, if you really want a common single interconnect architecture for your datacentre or supercomputer, 10GE might make more sense, since all applications you might ever think of run on it anyway.

Intel is also under pressure from its competitor AMD, which has developed its own networking standard called High Node Count HyperTransport; it uses InfiniBand's hardware but not its proprietary protocols. Novakovic thinks that InfiniBand's future may be in doubt, but until the new technologies that seek to challenge it mature, InfiniBand remains a sure bet for HPC.

Correction: Intel, as stated above, was not the sole creator of InfiniBand. Intel was one of several companies working on “Next Generation I/O”, a project which later merged with a competing venture to form the InfiniBand Trade Association (see the O’Reilly Introduction to InfiniBand Architecture). Thanks to “B” for the clarification.

  • B 10.05.2010

    Let’s be clear about Intel. They did not create InfiniBand, nor do they have products that address it. They are part of the IBTA and have a strong relationship with IB because their CPUs require a high-bandwidth interconnect like InfiniBand to ensure users can maximize the CPUs’ capabilities. Intel bought a small startup that did 10GigE and now sells that…yet it’s not as good a performance match.

  • admin 10.05.2010

    B, thank you for your post. You are correct in saying that Intel did not create InfiniBand by itself. Nevertheless, Intel was one of the companies working on “Next Generation I/O”, which later merged with a competing project to form InfiniBand (see the O’Reilly Introduction to InfiniBand Architecture).

    My source for writing that Intel was the ‘creator’ was the Inquirer article above (“Infiniband, originally created by Intel, has become a quasi standard…”). But they were not specific enough, and I should have fact-checked as well. Thank you for the clarification; I have added a correction to the post.

  • Paul Grun 11.05.2010

    InfiniBand was formed as the merger of two competing industry consortia, called Future I/O and NGIO. Each consortium was composed of a group of major industry players. So it is not even close to accurate to say that Intel created IB, although it is accurate to say that Intel has been continuously active in the InfiniBand Trade Association since its formation, serving continuously in the role of Steering Committee co-chair and supplying members and co-chairs to the technical working groups.

    Intel publicly halted its IB product development efforts quite a few years ago because it believed there were sufficient competitors in the marketplace. Within the past year or two it purchased NetEffect, which was a developer and manufacturer of iWARP technology, another form of RDMA.

    From an architectural standpoint, QPI and IB (or 10GbE for that matter) are entirely different animals. QPI has a memory coherency protocol and is intended to support a distributed memory controller. This is helpful for large scale single image machines, but not especially helpful for clustering or I/O.

  • admin 11.05.2010

    Thank you for the useful clarification, Paul. Intel indeed has not been as actively involved in InfiniBand as I previously thought.
