AI Infrastructure Newsletter

Are we approaching a post-chip era of ‘Data Centers in a Box?’

by Susana Schwartz



I think not, but that was the topic of a Wall Street Journal opinion piece, “The Microchip Era is About to End,” by tech futurist George Gilder. In it, he makes the bold statement that “all efforts to save microchip production in the U.S. come amid undeniable portents of the end of microchips.” He also blames the “inexorable reticle limits of chips” for vast hyperscale data center buildouts, and floats a hypothetical that “data centers in a box of wafer scale processors” could be the way the U.S. beats China in a post-chip era, which he suggests might be closer than we think.


In a repost of Gilder’s opinion piece, Cerebras CEO Andrew Feldman seemed to agree with some points, referring to “reticle” limits as “the ceiling for progress,” with “more parts, more wiring, and more overhead — all to work around the reticle limit.”


While that is true, wafer-scale processors and integration bring their own challenges, such as a much higher probability of defects because of the larger die area, as well as enormous heat generated by the massive computational performance of wafer-scale engines. Whereas an advanced Nvidia chip contains about 208 billion transistors, a Cerebras wafer-scale processor features 4 trillion, and a single system can generate approximately 23 kW of heat when operating at full capacity. That compounds considerably in a CS-3 cluster, which stacks 16 CS-3s for as many as 64 trillion transistors. That’s a lot of heat!
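To put those figures in perspective, here is a rough back-of-envelope sketch in Python using only the numbers cited above (4 trillion transistors and ~23 kW per CS-3, 16 systems per cluster). It assumes each system runs at full load and that the cluster simply multiplies the per-system figures, ignoring cooling, networking, and power-delivery overhead, so the real thermal load would be higher.

```python
# Back-of-envelope scaling for a 16-node CS-3 cluster, using the
# per-system figures cited above. Cooling, networking, and
# power-delivery overhead are ignored (assumption), so the real
# numbers would be higher.

TRANSISTORS_PER_CS3 = 4e12   # ~4 trillion transistors per wafer-scale engine
HEAT_PER_CS3_KW = 23         # ~23 kW of heat at full utilization
CLUSTER_SIZE = 16            # CS-3 systems stacked in the cluster

cluster_transistors = TRANSISTORS_PER_CS3 * CLUSTER_SIZE
cluster_heat_kw = HEAT_PER_CS3_KW * CLUSTER_SIZE

print(f"Transistors: {cluster_transistors / 1e12:.0f} trillion")  # ~64 trillion
print(f"Heat load:   {cluster_heat_kw} kW")                       # ~368 kW before overhead
```

Even under these simplifying assumptions, a single rack-scale cluster lands in the hundreds of kilowatts, which is why the cooling question looms so large in the wafer-scale debate.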


In today’s RCR article, “Are we approaching a post-chip era of Data Centers-in-a-box?” I draw the conclusion that I think most people would: wafer-scale makes sense for highly specialized workloads, like those of research institutions, national laboratories, and AI companies that require massive HPC, AI training, and inference capacity. Beyond that, it isn’t yet practical or cost-effective. We are not on the precipice of the post-chip era, but today’s innovations will surely push us toward a day when a “data center in a box” can replace today’s sprawling, energy-hungry data centers.


Susana Schwartz
Technology Editor
RCRTech

AI Infrastructure Top Stories

A “post-chip era” may be closer than we think: Wafer-scale processors that bypass today’s reticle-limited chips were the focus of a Wall Street Journal opinion piece that spurred debate about how close we may be to a post-chip era.

Focus on the successes, not the failures in gen AI: If 95% of gen AI projects fail, what do the 5% know? Geoff Hollingworth of Rakuten Symphony tells RCR: “rather than focus on the 95% failure rate, it’s more important to interrogate what the 5% of companies getting it right are doing.”

New normal of long-term AI investments: JPMorgan forecasts 122 GW of new data center capacity between 2026 and 2030, with hyperscalers collectively spending a quarter-trillion dollars per year on R&D, highlighting the structural shift toward long-term AI investment.

In partnership with

AI-Powered Telecom Infrastructure
Supermicro, in collaboration with NVIDIA, delivers AI-powered infrastructure tailored for telcos, enhancing operational efficiency, network management, and customer experiences. Explore now 

AI Today: What You Need to Know

Intensifying competition between AI and cloud giants: Alphabet’s Google announced Friday it would invest $40 billion to build three new data centers in two Texas counties: one in Armstrong County and the other two in Haskell County, part of a stretch of West Texas near Abilene.

Jeff Bezos focusing on “AI with real-world applications”: Not long after saying AI is in a bubble, Jeff Bezos is betting big on AI, creating an AI start-up called Prometheus, which has already raised $6.2 billion (much of it from Bezos himself).

Renewables will be a big part of the DC boom: The IEA’s flagship “World Energy Outlook” is out, and it says renewables will supply the majority of new data center power by 2035, regardless of whether countries maintain current policies or more aggressively pursue lower emissions.

U.S. firm to build DCs in Spain: Florida-based EdgeMode has signed an agreement with Blackberry AIF to acquire and develop five large-scale data center sites in Spain, representing over 1.5 GW of planned Tier 3 DC capacity. This makes it one of the largest AI-focused infrastructure development initiatives underway in Europe.

APAC firm to build DC in Seoul: APAC real estate firm ESR is developing a data center in Seoul, South Korea, that will be leased to Princeton Digital Group. Construction of the 9-story KR1 facility, located in Incheon’s Bupyeong district, will start this week. KR1 will be operational in 2028.

Nvidia and Samsung partner in semiconductor AI: Here’s a closer look at a new semiconductor AI factory with 50,000 Nvidia GPUs, which is set to boost manufacturing and supply chain efficiency through digital twins. Samsung and Nvidia claim they’ve seen 20x greater performance and scalable deployment across semiconductor manufacturing processes. 

Upcoming Events

This one-day virtual event will discuss the critical issues and challenges impacting the AI infrastructure ecosystem, examining its growth and evolution as it scales and the need for flexible, sustainable solutions.

Industry Resources

