The Aquarius R-117A is ICC's flagship immersion-native 1U server, integrating six NVIDIA H200 GPUs alongside an AMD EPYC Turin processor with up to 192 cores and 3TB of DDR5 RAM. Designed from the ground up for dielectric oil immersion, it delivers three to four times the compute density of comparable air-cooled systems and is built for AI training, seismic processing, reservoir simulation, and quantitative finance.
We recently showcased the Aquarius R-117A at two major industry events: ICC Connect during GTC and the Rice University Oil & Gas Show. The system is an ultra-dense 1U server engineered specifically for dielectric oil immersion environments and built to serve the most demanding AI and HPC workloads across any industry.
Unlike the vast majority of servers used in immersion deployments today, which were originally designed for air-cooled data centres and adapted afterwards, the Aquarius R-117A was conceived, designed, and validated exclusively for immersion from the outset. The result is a level of compute density that would be thermally impossible through any other means.

Six NVIDIA H200 GPUs in One Rack Unit
The centrepiece of the Aquarius R-117A is its GPU configuration. Six NVIDIA H200 SXM GPUs are fitted into a single 1U chassis, each carrying 141GB of HBM3e memory with 4.8 TB/s of bandwidth. Across all six cards, that amounts to 846GB of aggregate GPU memory, enough to hold many of the largest AI models and simulation datasets entirely in GPU memory without sharding across multiple nodes.
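The aggregate-memory figure is easy to sanity-check. A minimal sketch, using the capacities quoted above; the 70B-parameter FP16 model is an illustrative assumption, not a benchmark:

```python
# Back-of-envelope check: aggregate HBM3e across six H200 SXM GPUs, and
# whether a large FP16 model's weights fit without multi-node sharding.
# GPU figures are from the system description; the model size is a
# hypothetical example.

GPUS = 6
HBM3E_PER_GPU_GB = 141
aggregate_gb = GPUS * HBM3E_PER_GPU_GB          # 846 GB total

params_billion = 70                             # hypothetical model
bytes_per_param = 2                             # FP16
weights_gb = params_billion * 1e9 * bytes_per_param / 1e9  # 140 GB

print(f"Aggregate GPU memory: {aggregate_gb} GB")
print(f"FP16 weights for {params_billion}B params: {weights_gb:.0f} GB")
print(f"Fits in aggregate memory: {weights_gb < aggregate_gb}")
```

Note that weights are only part of the footprint; KV caches, activations, and optimiser state add to it, which is exactly why the aggregate capacity matters.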
Under full load each H200 dissipates up to 700W, so the system manages 4.2kW of GPU thermal output in a single rack unit before accounting for the CPU, memory, and networking. This is only achievable because the chassis was designed around oil immersion from day one, with component placement and chassis geometry optimised for fluid-based heat transfer rather than airflow.
The six H200s are interconnected via NVLink fabric, enabling high-bandwidth GPU-to-GPU memory access that far exceeds what PCIe can offer. For distributed AI training, large model inference, or data-intensive simulation where information must move continuously between accelerators, this fabric is what makes the system behave as a unified compute platform rather than a collection of independent cards.
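To see why the NVLink fabric matters, compare nominal transfer times for a large tensor over NVLink versus a PCIe Gen 5 x16 link. Bandwidth figures are published peaks (fourth-generation NVLink at roughly 900 GB/s per GPU, PCIe Gen 5 x16 at roughly 64 GB/s per direction), and the 10GB tensor is a hypothetical example; real throughput is lower in both cases:

```python
# Rough GPU-to-GPU transfer-time comparison: NVLink vs PCIe Gen 5 x16.
# Nominal peak bandwidths; sustained throughput is lower in practice.

NVLINK_GBPS = 900.0     # GB/s, fourth-gen NVLink aggregate per GPU
PCIE5_X16_GBPS = 64.0   # GB/s, one direction of a Gen 5 x16 link

tensor_gb = 10.0        # hypothetical activation/gradient tensor

t_nvlink = tensor_gb / NVLINK_GBPS
t_pcie = tensor_gb / PCIE5_X16_GBPS
speedup = t_pcie / t_nvlink

print(f"NVLink: {t_nvlink * 1e3:.1f} ms   PCIe Gen 5: {t_pcie * 1e3:.1f} ms")
print(f"Nominal speedup: {speedup:.1f}x")
```

For collective operations like all-reduce that run on every training step, an order-of-magnitude bandwidth gap of this kind is what separates a unified platform from a collection of cards.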
AMD EPYC Turin: Built to Keep Up
Hosting six H200 GPUs demands a CPU that can match them. The Aquarius R-117A is built around a single-socket AMD EPYC Turin processor offering up to 192 Zen 5 cores on TSMC's 3nm process node. Its 12-channel DDR5 memory controller supports the system's 24 DIMM slots and up to 3TB of ECC RAM, while its PCIe Gen 5 lane count ensures the OCP 3.0 networking expansion and NVMe storage operate without contention. The single-socket configuration also eliminates NUMA complexity, which can quietly degrade performance in multi-socket GPU server deployments.
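The memory configuration follows directly from the controller topology: 12 channels with two DIMMs per channel gives the 24 slots, and 128GB RDIMMs fill them to 3TB. A quick check:

```python
# Sanity-check of the host memory configuration described above:
# 12-channel DDR5 controller, 2 DIMMs per channel, 128 GB RDIMMs.

CHANNELS = 12
DIMMS_PER_CHANNEL = 2
DIMM_GB = 128

slots = CHANNELS * DIMMS_PER_CHANNEL      # 24 DIMM slots
capacity_tb = slots * DIMM_GB / 1024      # 3.0 TB of ECC RAM

print(f"{slots} slots, {capacity_tb:.1f} TB total")
```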
Immersion Native, Not Immersion Adapted
Most servers in oil immersion tanks today were built for air-cooled environments first. Their layouts, PCB stackups, and power delivery architectures assume airflow, and the immersion is an afterthought. The Aquarius R-117A has no such constraint. There are no airflow assumptions in its design. Components are positioned to maximise contact with the circulating dielectric fluid, and the chassis promotes natural convective flow even in passive tank setups. This co-designed approach to thermal and compute architecture is what enables the density figures the system achieves.
In optimised immersion deployments, this design approach yields a Power Usage Effectiveness (PUE) profile approaching 1.03, compared to 1.2 to 1.4 for the best air-cooled facilities. Over the lifetime of a high-density deployment, the difference is significant.
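The lifetime difference can be made concrete with a simple energy model. The PUE figures are from the comparison above; the 50kW IT load and $0.10/kWh electricity price are illustrative assumptions:

```python
# Illustrative one-year energy-cost comparison for a fixed IT load.
# PUE values from the article; load and price are hypothetical.

IT_KW = 50.0            # assumed IT load
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10    # USD, assumed

def annual_cost(pue: float) -> float:
    """Total facility energy cost: IT load scaled by PUE."""
    return IT_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

immersion = annual_cost(1.03)
air_cooled = annual_cost(1.30)   # midpoint of the 1.2-1.4 range
saving = air_cooled - immersion

print(f"Immersion (PUE 1.03):  ${immersion:,.0f}/yr")
print(f"Air-cooled (PUE 1.30): ${air_cooled:,.0f}/yr")
print(f"Annual saving:         ${saving:,.0f}")
```

Under these assumptions the overhead energy alone differs by tens of thousands of dollars per year at modest scale, before counting the density savings in floor space and cooling plant.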
Power and Connectivity
To sustain the full system power envelope, the Aquarius R-117A ships with a choice of dual 3200W PSUs or dual 5200W PSUs with Anderson connectors. The Anderson connector option is well suited to facilities where high-current DC distribution is standard and connector durability in demanding environments is a requirement.
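A rough power budget shows why both PSU options exist. The GPU figure is from the system description; the CPU and platform estimates are illustrative assumptions, and the sketch treats the pair of PSUs as combined capacity rather than modelling redundancy:

```python
# Rough system power budget vs the two PSU options.
# GPU power is from the article; cpu_w and other_w are assumptions.

gpu_w = 6 * 700     # six H200 SXM at up to 700 W each = 4200 W
cpu_w = 500         # assumed EPYC Turin socket power
other_w = 400       # assumed memory, NVMe, NICs, platform overhead
total_w = gpu_w + cpu_w + other_w   # ~5.1 kW estimated peak

for psu_pair_w, label in [(2 * 3200, "dual 3200 W"), (2 * 5200, "dual 5200 W")]:
    headroom = psu_pair_w - total_w
    print(f"{label}: {psu_pair_w} W combined, headroom {headroom} W")
```

Under these assumptions the 5200W option is the one that leaves meaningful headroom if a single supply must carry the load, which is the usual reason to size PSUs well above the steady-state draw.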
On the networking side, the system's OCP 3.0 x16 expansion slot supports dual 100GbE or single 400GbE NICs, providing the fabric bandwidth needed for multi-node distributed workloads. Onboard connectivity includes two 10GbE ports via Broadcom BCM57416 and two 1GbE ports via Intel i210 for management.
Full Specifications
Form Factor: 1U, immersion-native chassis
CPU: Single AMD EPYC Turin Series, up to 192 cores
GPU: Up to 6x NVIDIA H200 SXM, 141GB HBM3e per card, NVLink interconnect. NVIDIA RTX 6000 also validated.
Memory: 24x 128GB DDR5 RDIMM slots, up to 3TB ECC RAM
Storage: 4x EDSFF E1.S NVMe drives
Onboard Networking: 2x 10GbE (Broadcom BCM57416), 2x 1GbE (Intel i210)
Networking Expansion: Single OCP 3.0 x16, Dual 100G or Single 400G
Cooling: Dielectric oil immersion, immersion-native design
PSU: Dual 3200W or Dual 5200W with Anderson connectors

Who It Is Built For
The Aquarius R-117A is built for workloads that are compute-intensive and memory-bandwidth-bound at scale, which covers a broad range of industries and use cases.
For AI and machine learning teams, the 846GB of aggregate GPU memory combined with NVLink fabric and a 192-core host CPU makes it a serious platform for large model training, fine-tuning, and inference workloads that would otherwise require multiple nodes.
For HPC and simulation, the system handles complex multi-physics modelling, computational fluid dynamics, and large-scale scientific workloads with the kind of in-memory capacity that eliminates the need to partition jobs across a cluster.
For financial services and trading, the combination of raw compute throughput, high memory bandwidth, and low-latency fabric makes the platform well suited to quantitative modelling, Monte Carlo risk engines, derivative pricing, and real-time analytics at the speeds modern trading infrastructure demands.
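The Monte Carlo workloads mentioned here are embarrassingly parallel, which is what makes them such a natural fit for a many-GPU node. As a minimal CPU-side sketch of the pattern, here is a Monte Carlo pricer for a European call under Black-Scholes dynamics; the parameters are illustrative, and a production engine would run millions of paths per second on the accelerators:

```python
# Minimal Monte Carlo pricer for a European call option under
# Black-Scholes dynamics. Illustrative parameters; each path is
# independent, so the workload parallelises trivially across GPUs.
import math
import random

def mc_call_price(s0, k, r, sigma, t, n_paths, seed=42):
    """Estimate a European call price by simulating terminal prices."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = mc_call_price(s0=100, k=100, r=0.05, sigma=0.2, t=1.0,
                      n_paths=100_000)
print(f"MC estimate: {price:.2f}")  # Black-Scholes closed form is ~10.45
```

Risk engines repeat this kind of simulation across thousands of instruments and scenarios, which is where the throughput and memory bandwidth of a node like this pay off.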
For industries like oil and gas, the same architecture supports seismic processing, full-waveform inversion, and reservoir simulation at resolutions that previously required whole HPC clusters.
The system delivers three to four times the compute density of comparable air-cooled infrastructure, with a direct impact on infrastructure economics: fewer rack units, fewer power feeds, fewer switch ports, and a much smaller physical footprint per unit of compute delivered.
To find out more about the Aquarius R-117A, request a spec sheet, or discuss deployment, get in touch with the ICC team at [email protected] or visit www.icc-usa.com.
