In sum – what to know:
Economic performance will surpass energy efficiency as the key metric – LiquidStack said operators will shift from PUE to “tokens per watt per dollar” as power becomes the gating factor for AI expansion.
Rapid deployment models move mainstream – Modular facilities and temporary structures accelerate time-to-power amid overwhelming demand and slow construction cycles.
Two-phase cooling becomes the next frontier – Rising rack densities push operators toward modular, high-capacity liquid cooling systems ready for multi-megawatt loads.
LiquidStack executives are predicting significant shifts in how AI data centers are measured, built, and cooled in 2026, as rapid compute growth forces operators to rethink everything from performance metrics to deployment models and thermal design.
The forecasts come from Kevin Roof, director of offer and capture management, and Angela Taylor, chief of staff and head of strategy, who outlined the trends they expect to shape next year’s AI infrastructure buildout.
Roof said the traditional efficiency metric of choice — the power usage effectiveness (PUE) ratio — will lose relevance as operators prioritize economic throughput over raw energy savings. “Data center investors and operators will trade in the classic PUE metric for ‘tokens per watt per dollar,’” he said. With major AI clusters now directly tied to revenue generation, Roof argued that performance per unit of power is becoming the most important measurement.
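The article does not define the new metric formally; as a rough sketch, classic PUE measures facility overhead while "tokens per watt per dollar" ties output directly to power and cost. The function names, units, and the exact formulation below are assumptions for illustration only, not LiquidStack's definition:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Classic power usage effectiveness: total facility power
    divided by IT equipment power (lower is better, 1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

def tokens_per_watt_per_dollar(tokens_per_sec: float,
                               power_w: float,
                               cost_per_hour_usd: float) -> float:
    """Hypothetical economic-throughput metric: inference tokens
    produced per hour, normalized by power draw and hourly cost
    (higher is better). Formulation is an assumption."""
    tokens_per_hour = tokens_per_sec * 3600
    return tokens_per_hour / power_w / cost_per_hour_usd

# Example: a 1.2 PUE facility vs. a cluster producing 1,000 tokens/s
# at 500 W and $10/hour of amortized cost.
print(pue(1200.0, 1000.0))
print(tokens_per_watt_per_dollar(1000.0, 500.0, 10.0))
```

The key contrast: PUE rewards reducing overhead regardless of what the IT load produces, while a throughput-per-watt-per-dollar style metric rewards squeezing more revenue-generating output from the same power envelope.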
The LiquidStack executive added that power availability remains the fundamental limiting factor for building out new AI capacity. “Stranded power represents lost revenue,” Roof said, noting that operators need to extract as much value as possible from every watt they have access to.
Speed of deployment will also be a defining theme next year, Roof predicted. “We’ve reached a point where data centers can’t be built fast enough,” he said, pointing to hyperscale efforts to capture available power immediately. “To jumpstart deployments and ensure available power is utilized immediately, we’ll see more organizations turn to modular data centers and temporary buildings, such as Microsoft’s use of tents, for data center operations until a permanent on-site facility is completed.”
Taylor, meanwhile, expects a significant shift in cooling technologies as rack densities continue to increase. “Two-phase direct-to-chip cooling technologies will become the successor to today’s one-phase liquid cooling systems as rack densities climb up to and beyond one megawatt,” she said. According to Taylor, the market will see “a wave of two-phase direct-to-chip cooling solutions” announced in 2026, with the broader ecosystem beginning to align around supply chains and standards. She believes the industry will be preparing throughout the year for a larger-scale transition starting in 2027.
Modularity will also shape cooling architectures. “As AI workloads continue to drive power densities ever higher, data center operators will seek out more powerful, modular liquid cooling systems that can be easily deployed and scaled incrementally as thermal regulation needs grow,” Taylor said. “By late 2026, expect to see skidded, modular units starting at 2 MW (and reaching well beyond) become the de facto models for high-density data center builds,” she added.
As data center operators race to meet the demands of AI workloads, they face mounting obstacles across power infrastructure, cooling capacity, and capital planning. In a recent interview with RCR Wireless News, LiquidStack CEO Joe Capes said the current environment is creating major pain points that slow deployment and strain budgets.