Serving the enterprise puts flexibility and agility front and center, so a smaller footprint and compatibility with commodity IT equipment become differentiators.
Colovore has pioneered ultra-high-density, liquid-cooled data centers purpose-built for AI and high-performance computing (HPC) workloads. The company has become a trusted testbed for visionary AI hardware companies like Nvidia, Cerebras, Groq, SambaNova, and d-Matrix. Ahead of his participation in the DCD>Connect panel discussion “Building the AI Ecosystem from Chip to Grid,” I had the opportunity to speak with Colovore’s Tomek Mackowiak, vice president of product strategy, about the company’s laser focus on enterprise customers and what differentiates Colovore from data center giants like Crusoe, which shared the stage at the event.
What are the similarities and contrasts between a smaller, niche player like Colovore and a bigger company like Crusoe?
We participate in the same revolution, but with a different focus. We serve the enterprise and the agentic workloads they are trying to develop. We are not pursuing commodity inference like Stargate and other GW-scale neocloud and hyperscale campuses. We can appreciate that someone like Sam Altman is going head-to-head with Dario Amodei, pursuing what some call “humanity’s greatest endeavor in terms of scale,” but that’s not what we do.
We have an appetite of about 20-50 MW, and we continue to push the envelope for what’s important to the enterprise: compliance, low latency, and flexibility. Our enterprise customers have their own security and data protection concerns, so we are mindful of ISO, SOC, and HITRUST. In addition, we target areas where we can get distribution from local utilities, as well as proximity to tier-one peering points. For example, our data centers in Santa Clara are right in the thick of things: a stone’s throw from Nvidia, Google, and Apple, and near 529 Bryant in Palo Alto.
And how have you continued to innovate for your enterprise customers over the past 10 years?
About a decade ago, we pioneered the space by building the world’s first fully liquid-cooled retail colocation facility. Now, we are very motivated by what our customers need. We stick to the path we chose, designing data centers for maximum flexibility and low latency. That means if enterprises have standard, commodity IT equipment, like CPU-based servers, they have the option to put it next to Nvidia SuperPODs or Cerebras clusters. We can make that happen.
Since enterprises want agility and flexibility, we typically build in chunks of 8 to 20 MW, but the numbers can get big very quickly.
We also look at power differently than the GW-scale data centers do. For a liquid-cooled data center like ours, power is a pretty big deal. To the bigger data center folks, power isn’t a problem in the same way; they can go and build where there is power. But for us, it’s the opposite: power is a major consideration because we want to come off the utility as close to the workloads and the data gravity as possible.
We are similar in that we are building liquid-cooled, advanced data center infrastructure specifically for HPC and AI, but we are building for different purposes and in a smaller footprint. Those differences become pretty obvious in how we provision and how we design. We build in design elements specifically for enterprise flexibility, like a raised floor with water distribution underneath, whereas Stargate is built to support heterogeneous deployment, with data halls of 150-200 MW.
And why is that raised floor a hallmark of your design?
Because we’ve been doing this for more than 10 years, and we had the luxury of designing a liquid-cooled data center without the pressure of the AI revolution bearing down on us. We were able to go with what we thought was a very obvious design direction. I sometimes joke that if you asked an 8- to 12-year-old to design a data center, this is what they’d come up with: water distribution under a raised floor, with the machines on top separated so that any leak can be rapidly localized, circumvented, and managed.
Though it’s never happened, if a manifold up top did burst, it would leak down and we’d have a failure in a single cabinet, but we wouldn’t have to worry about the kind of catastrophic failure a bigger data center could face. In the bigger facilities, when you’re running liquid cooling, you’re dealing with high pressure; if there’s a crack, it can lead to an explosion of water. We decided early on not to put the water up above and not to deal with ultra-high pressures. We wanted to minimize the potential blast radius.
Are customers with air-cooled or legacy setups compatible with your design?
That’s what we’re made for. When we built Colovore 1, most of the servers in existence were air-cooled. We put rear-door heat exchangers on the back of the cabinets, so you can do high density with air-cooled systems. It plays well with how our DCs are designed. You can mix and match. Flexibility. That’s the key.
We love it when we’re challenged to put air-cooled and liquid-cooled systems next to each other. That elicits a reaction because we get to test our infrastructure, and now it’s become common. It’s of course cheaper and faster to build for one purpose, making it all liquid-cooled with containment and fan walls, but we stay flexible in our design if that’s what our customers need.
We’ve also thought about how the environment affects the techs who are there day in and day out. We want them to be comfortable all the time, rather than sweating in one aisle and freezing in another. That’s why we require rear-door heat exchangers on every single cabinet: it means the people working on the machines don’t have to wear a hoodie in one aisle and a sweaty T-shirt in another.