InfiniBand: The Competitive Advantage (Server Talk Episode 6)

If you are familiar with networking technology in the modern data center, you've probably heard the term InfiniBand before. We often find there is some confusion over exactly what InfiniBand is and does, and we hope to clear up these misunderstandings (if only partially). InfiniBand is a complicated technology, but we have one of our solutions engineers, Andrew Brant, with us in today's special feature of our "Server Talk" podcast.

While Ethernet remains the most commonly used networking protocol, InfiniBand (IB) is often regarded as the technology with superior performance and lower latency, as seen in its widespread adoption in high performance computing (HPC). In this podcast, Andrew talks about our new IB product and solution offerings utilizing the Intel® True Scale Fabric line. With True Scale Fabric, Intel competes on some levels with the other major InfiniBand provider, Mellanox. Intel's products are available at a much lower price point, though they lack some of the features needed for full enterprise applications and data center environments. Intel is focused squarely on the HPC niche, targeting the needs of that space at a lower cost to the end user. Andrew gives us a high-level overview of the InfiniBand protocol, then outlines some unique aspects of Intel's IB product line, including QDR-80 for use in dual-socket configurations.

A text overview of InfiniBand is provided below the podcast.

Why InfiniBand at ICC?

ICC has the expertise to help you determine whether you need InfiniBand and how to put the best solution(s) together for your unique business and its challenges, from pre-sales technical design and consulting to post-sales support. Check out our True Scale based InfiniBand products, or contact us today for more assistance in building your InfiniBand solution.


Continue reading to learn more about InfiniBand.

So... what exactly is InfiniBand again?

InfiniBand is a switched fabric (a fancy term for a hardware/software setup that moves data in and out of network nodes connected by switches) used to facilitate network communications in high-speed computing environments. Simply put, InfiniBand enables computing nodes in a network to exchange more data more quickly. As computing platforms have evolved and become more powerful, they have required higher speed networks to communicate within and between systems. Without this higher speed, the flow of data between systems cannot keep up with the actual amount of data being produced. In other words... bottlenecks.

And... why does IB matter?

Many people ask: what are the benefits of InfiniBand over other networking protocols out there, such as Fibre Channel and Ethernet? We've already established that InfiniBand offers low latency and vast bandwidth. Theoretically, this implies overall lower costs for end users who need that level of performance; for those who do, it is indeed often the more attractive option. In short, InfiniBand offers:

  • Low latency, with measured delays of 1µs (microsecond) end-to-end
  • High performance, with actual data rates up to 300Gb/s (EDR 12x)
  • Data integrity through CRCs (cyclic redundancy checks) across the fabric to ensure data is transferred correctly
  • High efficiency, with direct support for Remote Direct Memory Access (RDMA) and other advanced transport protocols (see the sketch below)
  • Less jitter (variation in the consistency of transfers) thanks to the small message size (see below)

These benefits can result in better I/O performance and management, as well as potential savings in server count and power, ultimately reducing the total cost of ownership.
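To make the RDMA point a little more concrete, here is a minimal sketch (purely an illustration, not part of any product discussed here) of how a Linux application can discover RDMA-capable InfiniBand adapters through the standard libibverbs API; actual RDMA transfers are built on this same library.

    /* Minimal sketch: enumerate RDMA-capable devices via libibverbs.
     * Assumes the rdma-core / libibverbs package is installed.
     * Build with: gcc ib_list.c -o ib_list -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "No RDMA-capable devices found\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devices[i]);
            if (!ctx)
                continue;

            struct ibv_port_attr port;
            /* Query port 1 only; real code would loop over all ports. */
            if (ibv_query_port(ctx, 1, &port) == 0)
                printf("%s: state=%d, active_mtu=%d\n",
                       ibv_get_device_name(devices[i]),
                       port.state, port.active_mtu);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devices);
        return 0;
    }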

Okay... what makes IB different?

InfiniBand uses a flat network architecture (as opposed to Ethernet, which is a hierarchical switched network). The message size of the IB protocol can be reduced all the way down to only 256 bytes. This is critical in HPC applications, as most supercomputers attain high performance by breaking up message requests into small fragments and then distributing those fragments across multiple nodes. The MPI (Message Passing Interface) message rate is a key supercomputing metric, and the small message size is one primary reason for InfiniBand's low latency and reduced jitter. Additionally, InfiniBand utilizes local addressing, keeping transfers within the cluster. Other networking protocols are often designed to do the opposite: a key benefit of Fibre Channel is large block transfers (vs. InfiniBand's small message size), which are beneficial in storage area networks, and Ethernet is commonly used to connect private and public networks (vs. InfiniBand keeping transfers within its own network).
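As an illustration of the small-message traffic described above, the sketch below is a classic MPI ping-pong test using 256-byte messages. It assumes an MPI implementation (for example, Open MPI or MVAPICH2) built with InfiniBand support; on an IB fabric the measured one-way latency should land in the low microsecond range.

    /* Minimal MPI ping-pong sketch using small (256-byte) messages.
     * Build: mpicc pingpong.c -o pingpong
     * Run:   mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERATIONS 1000
    #define MSG_BYTES  256   /* matches the small IB message size noted above */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char buf[MSG_BYTES] = {0};
        double start = MPI_Wtime();

        for (int i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            double elapsed = MPI_Wtime() - start;
            /* One iteration is a full round trip (two messages), so half of
             * the per-iteration time approximates the one-way latency. */
            printf("avg one-way latency: %.2f us\n",
                   elapsed / ITERATIONS / 2.0 * 1e6);
        }

        MPI_Finalize();
        return 0;
    }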

While many will refer to InfiniBand as a networking protocol, in many ways it is more of an I/O technology. Its primary function is usually to achieve maximum performance and minimum latency (the greatest possible I/O) within a network, rather than maximum connectivity between networks. There is certainly overlap among these technologies, but IB is highly appealing precisely in the cases that Ethernet or Fibre Channel was never intended to serve.

Now... what do I need for an IB solution?

InfiniBand requires specialized hardware: host adapters, switches, and cables. The adapters are expansion cards that are inserted into a PCI Express slot on the motherboard. Special cables then connect the adapters to InfiniBand switches, which are specifically designed to facilitate high data rate transfers. IB connections are serial links available in five data rates - single (SDR), double (DDR), quad (QDR), fourteen (FDR), and enhanced (EDR) data rate - and in multiple link widths (commonly 1x, 4x, and 12x). A single EDR 12x connection can deliver up to 300Gb/s of throughput.
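For a back-of-the-envelope sense of where figures like 300Gb/s come from, the short program below multiplies approximate per-lane signaling rates by the 1x, 4x, and 12x link widths. These are nominal marketing-style numbers; usable data rates are somewhat lower once line-encoding overhead is subtracted.

    /* Nominal InfiniBand link rates: per-lane signaling rate x link width.
     * Per-lane figures are approximate; e.g. EDR at ~25Gb/s per lane times
     * a 12x link gives the ~300Gb/s quoted above. */
    #include <stdio.h>

    int main(void)
    {
        const char  *names[]     = { "SDR", "DDR", "QDR", "FDR", "EDR" };
        const double lane_gbps[] = { 2.5, 5.0, 10.0, 14.0, 25.0 };  /* per lane */
        const int    widths[]    = { 1, 4, 12 };                    /* link widths */

        for (int r = 0; r < 5; r++) {
            printf("%s:", names[r]);
            for (int w = 0; w < 3; w++)
                printf("  %2dx = %6.1f Gb/s", widths[w], lane_gbps[r] * widths[w]);
            printf("\n");
        }
        return 0;
    }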

Check out our True Scale based InfiniBand products, or contact us today for more assistance in building your InfiniBand solution.

ICC InfiniBand Products

More IB Resources
