RCRTech engages industry communities through research-driven content, conversations, and connections. Building on 40+ years of RCR Wireless News excellence, RCRTech delivers trusted insights informing and connecting technology buyers with innovators shaping connectivity and compute.



OpenAI will use AMD as a core compute partner for large-scale AI workloads, extending an existing collaboration. In sum – what to know: 6GW of AMD GPUs – The multi-year agreement begins with a 1GW deployment of MI450 GPUs in 2026, expanding to 6GW across future generations. Equity incentives – OpenAI receives up to 160 million AMD shares, vesting as …
Groq currently operates data centers across the U.S., Canada, Europe, and the Middle East, with new sites planned in Asia. In sum – what to know: Post-2025 expansion – Groq plans to exceed its 2025 buildout of 12 data centers, with new sites coming online across Asia and other regions next year. Surging AI demand – Chairman Jonathan Ross said …
The acquisition would extend BlackRock’s AI-linked portfolio just a year after the firm acquired GIP for $12.5 billion. In sum – what to know: BlackRock eyes $40B acquisition – GIP is nearing a deal to buy Aligned Data Centers, which operates 78 facilities in the Americas, in one of the year’s largest infrastructure transactions. AI demand drives M&A – The planned purchase …
Fujitsu and Nvidia will co-develop an AI agent platform focused on sectors such as healthcare, manufacturing, and robotics. In sum – what to know: AI collaboration – The partnership will deliver sector-specific AI agents and computing infrastructure built on Monaka CPUs, Nvidia GPUs, and NVLink Fusion interconnect. Industry apps – A joint AI platform will target manufacturing, healthcare, and robotics, with agents …
AI compute isn’t one thing. It’s two. Under the umbrella of “AI workloads,” training and inference represent distinct computational worlds with different goals, hardware profiles, and economics. They often get lumped together, but the split matters, especially for the compute capacity of the data centers built to run these two different tasks. Understanding the divide …
For decades, compute has scaled faster than memory. Processors can execute more operations every year, but the speed at which data moves in and out of memory has lagged behind. That mismatch, known as the “memory wall,” is now one of computing’s defining constraints, and AI makes the problem even worse. These days, training and serving large models …
The semiconductor industry is changing quickly, and nowhere faster than in AI. As AI workloads grow ever more demanding, old monolithic chips are giving way to new chiplet-based designs. But what exactly are chiplets, and how will they radically improve performance for AI? Here’s a look. What are chiplets? Chiplets are small, functional blocks of silicon, each optimized for a …
Artificial intelligence has reshaped the semiconductor industry, driving an endless chase for better performance and efficiency. But as transistor scaling slows and Moore’s Law fades, the gains from smaller nodes are running into a wall. Now, packaging is where the real action is. In this new phase, performance breakthroughs aren’t being won by shrinking transistors but by innovating …

