RCRTech engages industry communities through research-driven content, conversations, and connections. Building on 40+ years of RCR Wireless News excellence, RCRTech delivers trusted insights informing and connecting technology buyers with innovators shaping connectivity and compute.




Schneider Electric said the first reference design provides a unified framework for power and liquid cooling management

In sum – what to know:
Reference frameworks – Schneider Electric has released two validated designs with Nvidia, integrating power management, liquid cooling, and operational controls to accelerate AI data center deployment.
Nvidia supports – The second design targets AI factories of up to 142 kW per rack, …
Under the deal, Hitachi will supply power transmission and distribution equipment to OpenAI’s data centers

In sum – what to know:
Hitachi power – Hitachi will supply transmission and distribution systems to OpenAI, helping to reduce energy consumption across its global AI infra network.
Stargate project – Hitachi joins SoftBank, Oracle, and MGX in the Stargate initiative to scale compute capacity through …
OpenAI will use AMD as a core compute partner for large-scale AI workloads, extending an existing collaboration

In sum – what to know:
6GW of AMD GPUs – The multi-year agreement begins with a 1GW deployment of MI450 GPUs in 2026, expanding to 6GW across future generations.
Equity incentives – OpenAI receives up to 160 million AMD shares, vesting as …
Groq currently operates data centers across the U.S., Canada, Europe, and the Middle East, with new sites planned in Asia

In sum – what to know:
Post-2025 expansion – Groq plans to exceed its 2025 buildout of 12 data centers, with new sites coming online across Asia and other regions next year.
Surging AI demand – Chairman Jonathan Ross said …
Buying up old GPUs for AI might be the way to go for some smaller AI outfits

Every time Nvidia drops a new flagship accelerator, the entire AI processing landscape reshapes practically overnight. Hyperscalers scramble for allocation, and last generation’s hardware gets treated like it’s ancient history. But what doesn’t get anywhere near as much attention is the outgoing generation of accelerators — which …
New 102.4 Tbps Silicon One promises efficiency gains for hyperscalers

In sum – what we know: It’s easy to focus on the GPU arms race when talking about AI infrastructure, but the networking connecting all those accelerators is arguably almost as important. You can rack up as many GPUs as your budget allows — but if the data can’t shuttle between them …
How can quantization turn massive models into efficient tools without ruining their accuracy?

Running large language models is expensive. The biggest ones pack hundreds of billions of parameters, each stored as a high-precision number that chews through memory, power, and premium hardware. But do we actually need all that precision? Increasingly, the answer is no. That realization has driven the …
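The trade the teaser describes can be sketched in a few lines: store each weight as an 8-bit integer plus a shared scale factor instead of a 32-bit float, cutting memory 4x for a small accuracy cost. A minimal sketch of per-tensor symmetric int8 quantization (function names are illustrative, not from any particular library; real deployments typically quantize per-channel or per-group):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 codes plus one float scale."""
    scale = np.abs(weights).max() / 127.0      # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# int8 storage is 4x smaller than float32; rounding error per weight
# is bounded by half the scale step.
max_err = np.abs(w - w_hat).max()
```

The per-weight error bound (half a quantization step) is why the accuracy hit is often negligible for inference, while memory and bandwidth demands drop sharply.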
Replacing copper with optical pipes could have a significant impact on the AI data bottleneck

The semiconductor industry has followed the same playbook for decades: shrink the transistor and pack more of them onto a chip. It’s worked remarkably well. But there’s a problem emerging that transistor density simply can’t fix — getting data from point A to …

