RCRTech engages industry communities through research-driven content, conversations, and connections. Building on 40+ years of RCR Wireless News excellence, RCRTech delivers trusted insights that inform technology buyers and connect them with the innovators shaping connectivity and compute.



Under the terms of the agreement, Cisco will power, connect, and secure a large-scale AI cluster operated by G42. In sum – what to know: Expanded AI cooperation – Cisco and G42 will jointly build secure AI infrastructure aligned with the US–UAE AI Acceleration Partnership and digital transformation goals. AMD-powered AI cluster deployment – The large-scale system features AMD MI350X …
A multibillion-dollar makeover clears the way for SoftBank’s $22 billion investment, new funding rounds, and a possible IPO in the future. In sum, what to know: More freedom: OpenAI is freed from the constraints of the capped-profit model that scared away some investors, and Microsoft is freed from the constraints of the “AGI clause” that stifled its pursuit of AGI. …
A Qualcomm spokesperson told RCR Wireless News that this new collaboration is in line with its broader strategy to expand from mobile into large-scale AI infrastructure and data center markets. In sum – what to know: 200 MW AI deployment planned – Humain will integrate Qualcomm’s AI200 and AI250 rack systems starting in 2026 to deliver large-scale inference services across …
In Saudi Arabia, Groq is collaborating with Aramco Digital to build what it calls the ‘world’s largest inferencing data center’. In sum – what to know: AI data center investment – Groq sees the Kingdom’s surplus energy and land as ideal for large-scale AI infrastructure. Expanded collaboration – Groq and Aramco Digital plan to build the world’s largest AI inferencing …
AI compute isn’t one thing. It’s two. Under the umbrella of “AI workloads,” training and inference represent distinct computational worlds with different goals, hardware profiles, and economics. They often get lumped together, but the split matters, especially for sizing the data center compute capacity that each task demands. Understanding the divide …
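To make the split concrete, here is a minimal back-of-the-envelope sketch (not from the article) using the common rule of thumb that training costs roughly 6 FLOPs per parameter per token while generating a token at inference costs roughly 2 FLOPs per parameter; the 70B-parameter model and 2T-token training run are illustrative assumptions.

```python
# Back-of-the-envelope comparison of training vs. inference compute.
# Rule of thumb (assumption, not from the article): a forward pass costs
# ~2 * N FLOPs per token and a training step (forward + backward) ~6 * N
# FLOPs per token, where N is the parameter count.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total FLOPs to train a `params`-parameter model on `tokens` tokens."""
    return 6 * params * tokens

def inference_flops_per_token(params: float) -> float:
    """Approximate FLOPs to generate a single token at inference time."""
    return 2 * params

if __name__ == "__main__":
    params = 70e9        # illustrative 70B-parameter model
    train_tokens = 2e12  # illustrative 2T-token training run

    print(f"Training (one-off)   : ~{training_flops(params, train_tokens):.1e} FLOPs")
    print(f"Inference (per token): ~{inference_flops_per_token(params):.1e} FLOPs")
```

Under these assumptions the training total is enormous but one-off and bursty, while the per-token inference cost is tiny but repeats for every request, which is why the two workloads pull data center design in different directions.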
For decades, compute has scaled faster than memory. Processors can execute more operations every year, but the speed at which data moves in and out of memory has lagged behind. That mismatch, known as the “memory wall,” is now one of the defining constraints in artificial intelligence. AI makes the problem even worse. These days, training and serving large models …
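As a rough sketch of why the memory wall bites, the snippet below compares compute time against memory-transfer time for a single decoded token, where every model weight must be streamed from memory for only about two FLOPs of work each; the peak-compute and bandwidth figures are assumed, illustrative values rather than any specific chip’s datasheet numbers.

```python
# Memory wall in miniature: one decode step of a large model is a
# matrix-vector style workload that does ~2 FLOPs per weight but must
# stream every weight from memory. Hardware figures below are assumed,
# illustrative values, not any specific accelerator's specs.

PEAK_FLOPS = 1.0e15  # assumed peak fp16 compute: 1 PFLOP/s
MEM_BW = 3.0e12      # assumed memory bandwidth: 3 TB/s

def decode_step_times(params: float, bytes_per_param: float = 2.0):
    """Estimate compute time vs. memory-transfer time for one generated token."""
    flops = 2 * params                      # one multiply-accumulate per weight
    bytes_moved = params * bytes_per_param  # every fp16 weight read once
    return flops / PEAK_FLOPS, bytes_moved / MEM_BW

if __name__ == "__main__":
    t_compute, t_memory = decode_step_times(70e9)  # illustrative 70B-parameter model
    print(f"Compute time: {t_compute * 1e3:.2f} ms")
    print(f"Memory time : {t_memory * 1e3:.2f} ms  (dominates: the chip waits on data)")
    print(f"Machine balance: {PEAK_FLOPS / MEM_BW:.0f} FLOP/byte needed to stay compute-bound")
    print("Workload offers: ~1 FLOP/byte")
```

With these assumed numbers the memory time dwarfs the compute time, which is the memory wall in a nutshell: the arithmetic units sit idle waiting for data to arrive.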
The semiconductor industry is changing quickly, especially as it relates to AI. As AI workloads grow ever more demanding, old monolithic chips are giving way to new chiplet-based designs. But what exactly are chiplets and how will they radically improve performance for AI? Here’s a look. What are chiplets? Chiplets are small, functional blocks of silicon, each optimized for a …
Artificial intelligence has reshaped the semiconductor industry, driving an endless chase for better performance and efficiency. But as transistor scaling slows and Moore’s Law fades, the gains from smaller nodes are running into a wall. Now, packaging is where the real action is. In this new phase, performance breakthroughs aren’t being won by shrinking transistors — but instead by innovating …

