At ICC we’ve now deployed the ICC AXIS™ R-725a. Testing on the EPYC 7000 series in our lab showed a sustained clock increase of roughly 0.1 GHz per core on these high-core-count processors with liquid cooling versus air. While that may not sound like much, it compounds into a substantial performance increase across 32, 64, or 128 cores.
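To put that figure in perspective, a quick back-of-envelope sketch shows how a per-core gain scales with core count. The ~0.1 GHz/core value comes from the lab observation above; the totals below are simple aggregates, not benchmark results.

```python
# Aggregate sustained-clock gain implied by ~0.1 GHz per core (from lab testing above).
# Illustrative arithmetic only, not a benchmark.
PER_CORE_GAIN_GHZ = 0.1

for cores in (32, 64, 128):
    total_ghz = cores * PER_CORE_GAIN_GHZ
    print(f"{cores} cores: +{total_ghz:.1f} GHz of aggregate sustained clock")
```

At 128 cores, that small per-core bump adds up to roughly 12.8 GHz of aggregate sustained clock across the node.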
ICC AXIS™ R-725a
- Custom In-Rack CDU Liquid Cooling
- Dual AMD EPYC™ 7002 Series Processors /// 7H12/node
- Form Factor: Dense 2U 4-node server platform /// up to 88 nodes/rack
- RAM: 8-Channel DDR4-3200 RDIMM/LRDIMM per processor, 16 x DIMMs /// up to 2TB RAM/node
- Hard Drive: Up to 2 x 2.5″ drives (Optional 2 x M.2)/node
- Power Supply: Redundant 2200W 80 PLUS Platinum/node
- Additional Features: 4 x PCIe Gen4 x16 expansion slots /// 1 x OCP 2.0 Gen3 x16 mezzanine slot supporting both IB
As technology continues to advance, the world of computing is also making us sweat. Heat is the hurdle to overcome, and we see it across all types of systems: the better the cooling, the harder they can be pushed. EPYC processors truly shine when they can be kept cool.
The ICC AXIS™ R-725a is a full rack populated with 2U 4-node servers, up to 88 nodes per rack, complemented by a custom in-rack CDU from Asetek. The CDU’s direct-to-chip cooling lets the full rack maintain sustained performance throughput, removing up to 80kW of heat per rack.
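Dividing the rack-level figure above by the node count gives a rough per-node heat budget. This is illustrative arithmetic from the stated 80kW capacity and 88 nodes/rack, not a published per-node spec.

```python
# Rough per-node heat-removal budget implied by the rack-level figures above.
# (80 kW of heat removal capacity, up to 88 nodes per rack.) Illustrative only.
RACK_CAPACITY_KW = 80.0
NODES_PER_RACK = 88

per_node_w = RACK_CAPACITY_KW / NODES_PER_RACK * 1000
print(f"~{per_node_w:.0f} W of heat removal available per node")
```

That works out to roughly 900W per node, comfortably covering a pair of high-TDP EPYC processors plus memory and drives.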
This solution comes with many benefits, all resulting in lower costs over time:
- Lower data center energy usage compared to air-cooled alternatives
- Reduced carbon footprint
- Sustained compute without CPU throttling
- Lower upfront cost
- Higher compute density
- Lower maintenance costs

Servicing a node is simple: power down, remove the cables, and release the cooling quick-connects.
In a world where we’re more frequently measuring overall performance in petaflops, air cooling isn’t going to cut it anymore.