
Examining Data Center Liquid Cooling: Immersion vs. Direct-to-Chip Systems

Posted by Dave Bercovich on November 13, 2023

The rapid adoption of machine learning and artificial intelligence (AI) applications is driving increased power density in the data center. General-purpose data center racks have a power density of 10kW to 20kW, while racks of equipment used to train AI models generally exceed 20kW. The GPUs and high-end CPUs used for AI consume a lot of power and generate a lot of heat.

So much heat that a growing number of data centers are opting for liquid cooling. As a rule of thumb, air cooling becomes ineffective when a rack exceeds 30kW; the fans simply cannot move enough air to remove the heat. Liquids, on the other hand, are far better heat carriers: water can absorb more than 3,000 times as much heat as the same volume of air. At high densities, liquid cooling becomes essential for data center thermal management.
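To put that rule of thumb in perspective, here is a back-of-the-envelope comparison of the air and water flow needed to carry 30kW out of a rack. This is a sketch using textbook property values and an assumed 10°C coolant temperature rise, not design figures:

```python
# Rough comparison of the air vs. water flow needed to remove 30 kW of heat.
# Property values are textbook figures; the 10 K temperature rise is an
# assumption, not a design spec.

HEAT_LOAD_W = 30_000       # 30 kW rack
DELTA_T_K = 10.0           # allowed coolant temperature rise (assumed)

# Volumetric heat capacity, J per cubic meter per kelvin
AIR_J_PER_M3K = 1_200          # ~1.2 kJ/m3*K for air at room temperature
WATER_J_PER_M3K = 4_180_000    # ~4.18 MJ/m3*K for water

def required_flow_m3_s(load_w: float, vol_heat_cap: float, delta_t: float) -> float:
    """Volumetric flow from the energy balance Q = Vdot * (rho * cp) * dT."""
    return load_w / (vol_heat_cap * delta_t)

air = required_flow_m3_s(HEAT_LOAD_W, AIR_J_PER_M3K, DELTA_T_K)
water = required_flow_m3_s(HEAT_LOAD_W, WATER_J_PER_M3K, DELTA_T_K)

print(f"Air:   {air:.2f} m3/s (about {air * 2119:.0f} CFM)")
print(f"Water: {water * 60_000:.0f} L/min")
print(f"Water moves the same heat in about 1/{air / water:.0f} the volume")
```

Running it shows air needing about 2.5 m³/s (over 5,000 CFM) versus roughly 43 liters of water per minute, a ratio in the thousands.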

Data Center Liquid Cooling Systems

Liquid cooling has a long history in the data center. It was used to cool early mainframes, whose heat density exceeded what air cooling could handle.

Immersion Cooling

In immersion cooling, server components are placed in a sealed container filled with a nonconductive dielectric liquid. Single-phase immersion cooling continuously circulates the fluid through a heat exchanger to cool it. Two-phase immersion cooling uses a fluid that boils at a low temperature; the vapor rises to the top of the container, where it cools, condenses back into liquid, and returns to the bath.
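The practical difference between the two approaches comes down to sensible versus latent heat. Here is a minimal sketch of the heat-balance math, using rough assumed properties for a generic dielectric fluid rather than any specific product:

```python
# Why two-phase immersion moves more heat per kilogram of fluid:
# single-phase relies on sensible heat (Q = mdot * cp * dT), two-phase on
# latent heat of vaporization (Q = mdot * h_fg). Fluid properties below are
# rough assumptions for a generic dielectric fluid, not any specific product.

HEAT_LOAD_W = 1_000        # one 1 kW server, say

# Single-phase: the fluid simply warms as it circulates.
CP_J_PER_KGK = 1_100       # specific heat of a typical dielectric (assumed)
DELTA_T_K = 15.0           # allowed fluid temperature rise (assumed)

# Two-phase: the fluid boils at the hot surfaces.
H_FG_J_PER_KG = 100_000    # heat of vaporization, ~100 kJ/kg (assumed)

single_phase_kg_s = HEAT_LOAD_W / (CP_J_PER_KGK * DELTA_T_K)
two_phase_kg_s = HEAT_LOAD_W / H_FG_J_PER_KG

print(f"Single-phase mass flow needed: {single_phase_kg_s * 1000:.0f} g/s")
print(f"Two-phase mass flow needed:    {two_phase_kg_s * 1000:.0f} g/s")
```

Because boiling absorbs far more energy per kilogram than simply warming the fluid, two-phase systems can move the same heat with much less fluid circulation.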

Direct-to-Chip Cooling

With direct-to-chip cooling, small tubes carry liquid to plates that sit directly atop high-power components. The fluid draws heat off the components and then is circulated through a heat exchanger and back to the plates. Rear-door heat exchangers work on a similar principle. Hot air exhausted by fans from the IT equipment is circulated through a liquid-cooled device mounted in the back of the server rack. The air is cooled and returned to the room.
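Because cold plates in a loop are often plumbed in series, the coolant warms a little at each component it passes. The sketch below, with hypothetical device powers, flow rate, and supply temperature, shows how that temperature rise accumulates along a direct-to-chip loop:

```python
# Coolant temperature rise along a direct-to-chip loop with cold plates in
# series. Device powers, flow rate, and supply temperature are hypothetical.

COOLANT_CP_J_PER_KGK = 4_186.0   # water; glycol mixes run lower
FLOW_KG_S = 0.05                 # about 3 L/min of loop flow (assumed)
SUPPLY_TEMP_C = 32.0             # facility supply temperature (assumed)

# Hypothetical devices served by one loop, in flow order.
devices = [("CPU 0", 350), ("CPU 1", 350), ("GPU 0", 700), ("GPU 1", 700)]

temp_c = SUPPLY_TEMP_C
for name, watts in devices:
    # Energy balance per plate: dT = Q / (mdot * cp)
    temp_c += watts / (FLOW_KG_S * COOLANT_CP_J_PER_KGK)
    print(f"After {name} ({watts} W): coolant at {temp_c:.1f} C")

# Downstream plates see warmer coolant, which is why flow order and
# return-temperature limits matter when laying out a loop.
```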


Benefits and Drawbacks of Liquid Cooling

Liquid cooling enables data centers to support the greater power densities needed for AI and other compute-intensive applications. At the same time, liquid cooling systems consume far less power than the fans and air handlers they replace, reducing energy costs and creating a more sustainable data center. Immersion and direct-to-chip cooling systems use less water than traditional cooling systems and require little space. They also minimize the risk of hotspots, extending the life of IT equipment. All of this adds up to lower operational costs.

However, data centers face significant capital investments when they install liquid cooling. Full-scale deployment requires substantial infrastructure changes and many of the products on the market are proprietary, creating the risk of vendor lock-in. Staff will face steep learning curves and must adopt new operational procedures. They may have to rely on the vendor to perform some routine maintenance.

Leakage is, of course, a significant concern, particularly with direct-to-chip cooling. A leak could seriously damage hardware and cause downtime. Replacing a component in a server that uses immersion cooling would be a daunting task.

Still, data centers should consider the total cost of ownership when evaluating liquid cooling systems. The OpEx savings of liquid cooling may offset the CapEx and maintenance challenges. Fortunately, liquid cooling isn't an all-or-nothing proposition, and new solutions are arriving on the market all the time. Instead of investing in a major infrastructure overhaul, legacy air-cooled data centers can implement rack- or row-level liquid cooling solutions designed to run alongside existing air cooling.
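One way to frame that total-cost-of-ownership question is a simple payback calculation. The sketch below uses entirely hypothetical figures for IT load, energy price, PUE, and retrofit cost; real numbers would have to come from your own facility and vendor quotes:

```python
# Back-of-the-envelope payback period for a liquid cooling retrofit.
# Every figure is a placeholder assumption, not industry data.

IT_LOAD_KW = 1_000            # facility IT load (assumed)
HOURS_PER_YEAR = 8_760
ENERGY_COST_PER_KWH = 0.12    # $/kWh (assumed)

PUE_AIR = 1.6                 # assumed PUE with traditional air cooling
PUE_LIQUID = 1.2              # assumed PUE after the retrofit
RETROFIT_CAPEX = 1_500_000    # assumed upfront cost, $

def annual_energy_cost(pue: float) -> float:
    """Total facility energy cost for one year at a given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * ENERGY_COST_PER_KWH

savings = annual_energy_cost(PUE_AIR) - annual_energy_cost(PUE_LIQUID)
print(f"Annual energy savings: ${savings:,.0f}")
print(f"Simple payback: {RETROFIT_CAPEX / savings:.1f} years")
```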

Enconnex Helps Support Higher Data Center Densities

Liquid cooling is just one aspect of thermal management in today’s high-density data center. Airflow still plays a critically important role in maximizing cooling efficiency.

Enconnex server racks are designed with proper airflow in mind. For example, the Enconnex InfiniRack data center cabinet features brush panels that allow cable passthrough while mitigating leakage of hot exhaust air. An optional air dam kit seals gaps around the perimeter of the front mounting rails, and 31.5″ (800 mm) wide cabinets include grommet-sealed cable openings in the equipment rails.

Our data center infrastructure specialists can help you choose the right products for your high-density data center; just get in touch.




Dave has 20 years of data center and IT infrastructure sales experience. He has represented manufacturing organizations such as Avaya, Server Technology, and The Siemon Company. As Sales Director with Enconnex, he builds relationships and grows the Enconnex business by working with partners and resellers.
