02.12.2025 | Blog Posts, News, Products & Technology

Five Fast Facts on Compute Efficiency

Liquid cooling’s role in the data center has evolved dramatically, and AI is accelerating that shift faster than ever.

In this Q&A, Ben Sutton, product marketing manager at CoolIT Systems, discusses how direct liquid cooling (DLC) is changing the way companies scale AI and HPC.

TechArena: CoolIT customers have reported PUE improvements from 1.30 to 1.02 when implementing Direct Liquid Cooling solutions. Can you explain how liquid cooling creates this dramatic efficiency improvement, and what specific architectural advantages enable such significant PUE reductions compared to traditional air cooling?

Cooling the latest processors has become a serious challenge. With air cooling, this means bigger heat sinks, faster airflow and colder air temperatures. That leads to three problems. First, density decreases because each server takes up more rack space. Second, fans draw more and more power to move the required air. Third, HVAC systems must run harder and longer to reduce the data center’s ambient air temperature.

DLC takes a different approach. It targets only the components that dissipate the most heat, mainly the processors, and leaves peripheral components to the ambient air. By focusing on the source of the heat, liquid cooling cuts energy use in both fans and facility cooling.

The reason it works so effectively is simple physics: liquids absorb heat far more efficiently than air, which is a poor thermal conductor. Water, for example, can store about 4,000 times more heat than air in a given volume. This is what drives the dramatic improvement in energy efficiency and PUE when liquid cooling is deployed.
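That volumetric figure can be sanity-checked from textbook fluid properties. The sketch below uses standard room-temperature approximations for water and air (the property values are generic assumptions, not CoolIT data); the ratio comes out around 3,500, the same order as the figure quoted above.

```python
# Back-of-envelope check of the water-vs-air heat storage claim.
# Property values are textbook approximations at ~20 °C.

water_cp = 4186.0    # specific heat of water, J/(kg·K)
water_rho = 998.0    # density of water, kg/m^3
air_cp = 1005.0      # specific heat of air, J/(kg·K)
air_rho = 1.204      # density of air at sea level, kg/m^3

# Volumetric heat capacity: energy stored per cubic metre per kelvin
water_vhc = water_cp * water_rho   # ~4.18 MJ/(m^3·K)
air_vhc = air_cp * air_rho         # ~1.21 kJ/(m^3·K)

ratio = water_vhc / air_vhc
print(f"Water stores ~{ratio:,.0f}x more heat per unit volume than air")
```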

When we scale this up to the system level, the benefits compound. A single CDU – or coolant distribution unit – with two pumps can eliminate the need for a massive volume of airflow, drastically reducing the power consumed by cooling. These benefits increase further when liquid cooling is extended to peripheral components such as DIMMs (memory) and OSFP (Octal Small Form-factor Pluggable) modules. Together this can deliver a PUE as low as 1.02, an ideal outcome for modern data centers.
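Because PUE is defined as total facility energy divided by IT equipment energy, the 1.30-to-1.02 improvement quoted in the question can be translated directly into energy terms. A minimal sketch, using only the two PUE figures from the article:

```python
# PUE = total facility energy / IT equipment energy.
# Overhead (cooling, power delivery, etc.) is everything above the IT load.

def overhead_fraction(pue: float) -> float:
    """Non-IT energy as a fraction of IT energy."""
    return pue - 1.0

pue_air, pue_dlc = 1.30, 1.02   # figures quoted in the article

# Per unit of IT energy, overhead falls from 0.30 to 0.02 -> ~93% less
overhead_cut = 1 - overhead_fraction(pue_dlc) / overhead_fraction(pue_air)

# Total facility energy falls from 1.30x to 1.02x the IT load -> ~21.5% less
total_cut = 1 - pue_dlc / pue_air

print(f"Overhead energy reduced by {overhead_cut:.0%}")
print(f"Total facility energy reduced by {total_cut:.1%}")
```

In other words, for the same IT load, the non-IT overhead shrinks by roughly 93 percent and the total facility energy by roughly a fifth.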

TechArena: With AI and HPC workloads driving rack densities toward 300kW per rack, where are we with adoption of liquid cooling across customer segments?

AI has brought liquid cooling into the mainstream. HPC was an early adopter, but now AI workloads are pushing densities and thermal loads beyond the limits of air cooling. NVIDIA’s latest platforms, such as Blackwell and Blackwell Ultra, require liquid cooling to handle their high power draw and heat dissipation. That has made liquid cooling a necessity for cutting-edge compute environments globally.

End customers are approaching this in different ways. Hyperscale cloud service providers have the knowledge, engineering experience and ownership stake to move fast, and because of their scale, their energy efficiency gains will have the greatest effect. Colocation providers are following closely—many lease space to hyperscalers and need to match their cooling capabilities.

Enterprise data centers are lagging in the adoption of liquid cooling, partly because their applications draw less power and partly because they can often tap into cloud-based AI rather than building in-house infrastructure. Looking ahead, however, general-purpose data center CPUs are expected to reach 500 to 600 W TDP by 2028. That level of heat will demand high-performance liquid cooling just as AI GPUs do today. This parallel trend means direct liquid cooling will no longer be limited to AI accelerators. General compute will require it too, making thermal strategy a central design factor for the next generation of data center infrastructure.

TechArena: What are the tradeoffs between liquid cooling approaches across direct cooling and immersion cooling?

On paper, the physics behind immersion looks very promising, but in practice, there are hurdles.

Data centers today are set up for air cooling. DLC provides the opportunity for a hybrid approach, targeting the highest-TDP components in each server. You can keep using standard rack-based infrastructure, so both new builds and retrofits are straightforward. And because the liquid is fully contained inside the DLC system, server maintenance stays simple.

Immersion, on the other hand, can cool all components, but it demands specialized tanks and infrastructure. Maintenance becomes more complex because a server must be lifted out of the fluid and drained before work can begin. Servers also need to be certified for immersion, and manufacturers often need to adjust designs to avoid chemical reactions or degradation when components contact the fluid.

At the system level, cost and ease of installation are the deciding factors. Immersion requires large volumes of expensive dielectric fluids, custom-certified servers, and new tank infrastructure, often in horizontal configurations. That adds significant cost for the owner. DLC is simpler to design and install, more cost-effective, and does not require redesigned data center infrastructure.

Single-phase DLC is already mature and is being adopted at scale. Immersion cooling is earlier in its adoption cycle and still faces technical and operational challenges before it can be broadly deployed.

TechArena: You mention that customers typically see at least a 10% reduction in energy bills and over 50% decrease in capital expenditure with CoolIT solutions. How do these efficiency gains compound across different aspects of data center operations, and what’s driving the CAPEX reduction beyond just the cooling infrastructure?

Cooling can account for 30 to 40 percent of total energy use in a conventionally air-cooled data center. By moving to liquid cooling, operators see immediate reductions in operational costs because they can cut back on mechanical cooling. In practice, this translates to at least a 10 percent drop in energy bills, and often more, depending on workload intensity and local climate. Those savings compound over time and support operators in meeting energy efficiency and sustainability targets while increasing compute.
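The link between the cooling share and the bill can be made concrete with a simple model: total savings are roughly the cooling share of the bill multiplied by how much of that cooling energy liquid cooling eliminates. The 30-to-40 percent cooling share is from the article; the 60 percent reduction in mechanical cooling energy below is purely an illustrative assumption, not a CoolIT figure.

```python
# Rough model of how cutting cooling energy feeds through to the bill.
# cooling_share comes from the article (30-40% of total energy);
# cooling_reduction = 0.60 is an illustrative assumption.

def bill_savings(cooling_share: float, cooling_reduction: float) -> float:
    """Fraction of the total energy bill saved."""
    return cooling_share * cooling_reduction

for share in (0.30, 0.40):
    saving = bill_savings(share, cooling_reduction=0.60)
    print(f"Cooling at {share:.0%} of load -> bill drops {saving:.0%}")
```

Under these assumptions the bill falls by 18 to 24 percent, comfortably clearing the "at least 10 percent" reduction cited in the question; even far more conservative assumptions still clear that bar.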

Capital expenditure is also reduced, and not just on cooling hardware. Air cooling consumes space, both within the rack and across the facility. As heat loads rise, air-cooled servers require larger heat sinks, more airflow and lower inlet temperatures. That drives up the size and cost of the entire data center footprint. Liquid cooling removes this constraint by supporting much higher rack density.

With single-phase direct liquid cooling, operators can run up to five times more power per rack—50 to 100 kW compared to 15 to 30 kW with air cooling. That allows them to scale compute capacity dramatically within the same physical footprint. For new builds, it reduces land and construction costs. For existing sites, it lets operators expand capacity without expanding the facility.

When you combine energy efficiency with space efficiency, the benefits multiply. Lower power bills reduce OPEX, while higher density reduces CAPEX tied to construction, land and power distribution infrastructure. Together, these effects make liquid cooling one of the most effective levers available to control both operating and capital costs in the era of AI-scale compute.

TechArena: CoolIT’s Split-Flow coldplate technology directs the coolest liquid to the hottest processor areas first. How does this design innovation improve thermal efficiency compared to conventional liquid cooling approaches, and what impact does this have on component performance and reliability at scale?

CoolIT has long been known for a strong product offering, particularly coldplates built on our IP. With ever-growing demands on TDP and heat flux, and ever-lower maximum junction temperatures, coldplate performance is critical to the performance of the whole system. Innovative designs such as ours offer lower thermal resistance and lower pressure drop, which enables the adoption of not only today’s processors but also tomorrow’s. They also provide reliable temperature uniformity across the processor, allowing it to operate more efficiently under higher loads. And processor lifetime increases when the silicon operates within its ideal temperature range.

We continue to collaborate with our customers on production programs and on R&D, which we see as a critical function of what we do and one that garners continued investment.

Article originally published on TechArena.