It’s been a big week for AI infrastructure as a whole, but especially for semiconductors. More and more companies are joining the race to power future AI data centers, and there’s further movement on the transition to next-generation high-bandwidth memory, though, like everything else in the world of semiconductors, supply constraints are coming. There’s also movement on sovereign buildouts, both internationally and closer to home.
Talk of a bubble in the industry continues to build, but this week is all about the business of AI chips. Read more below.

Christian de Looper
Editor
RCR Wireless News
Top Stories
Qualcomm enters the data-center AI race with AI200 and AI250 accelerators
Qualcomm has officially launched its AI200 and AI250 data-center accelerators, a big move beyond edge hardware for a company that has largely stuck to on-device silicon in the past. To be clear, the new chips follow on from the Cloud AI 100 Ultra that the company launched last year; however, the AI200 and AI250 are designed for rack-scale deployment rather than taking the form of a card. Qualcomm says the AI200 will go on sale in 2026, with the AI250 planned for 2027. Both will be available in a system that can fill a liquid-cooled server rack.
Qualcomm could have a lot to bring to the table here, given its experience building highly efficient chips. Both the AI200 and AI250 are designed around Qualcomm’s Hexagon NPUs, and scaling those up for data center use seems like a smart move. Still, it remains to be seen how much of an impact the likes of Qualcomm can make on an industry dominated by Nvidia and AMD.
Samsung and Nvidia deepen partnership with an AI “megafactory”
Samsung Electronics confirmed that it is in advanced talks to supply Nvidia with its next-generation HBM4 memory while unveiling plans for an “AI megafactory” — a semiconductor production hub driven by Nvidia’s Omniverse and accelerated-computing platforms. The project will deploy more than 50,000 Nvidia GPUs to model, simulate, and optimize chip production in real time, effectively merging AI design with AI manufacturing. The two companies have yet to confirm any concrete deal for Samsung to supply HBM4 to Nvidia for next-generation accelerators.
According to the companies, the Samsung AI Factory will use Nvidia Omniverse to create high-fidelity digital twins of Samsung’s semiconductor production lines. These virtual environments will allow Samsung to simulate and optimize its manufacturing workflows before implementation, helping to identify inefficiencies and test new processes safely. Meanwhile, Samsung said in its statement that the two companies are “working together on HBM4.”
AI Semiconductors: What You Need to Know
Nvidia’s 260,000-GPU deal with South Korea: Nvidia has confirmed that it will supply over 260,000 Blackwell GPUs to South Korea, with 50,000 of them set to be used for public AI projects and a national computing center, in a push for sovereign AI infrastructure.
DOE + AMD’s $1B “Lux” and “Discovery” systems: AMD and the US Department of Energy are working together to build two AI “supercomputers” powered by AMD AI accelerators, according to Reuters. The “Lux” system will arrive in 2026, while “Discovery” is slated for 2029.
SK Hynix posts record profit, but HBM4 supply is sold out: With HBM and DRAM in short supply, SK Hynix reports that customers have already reserved manufacturing slots well into 2026, and that it won’t be able to take on new orders next year.
Apple’s U.S.-built AI servers begin shipping: Apple’s Houston facility has started delivering AI servers built with custom Apple Silicon for its “Private Cloud Compute” initiative. The servers will be installed in data centers around the US.
Nvidia hits $5T market cap: Nvidia has become the first company in history to reach a $5 trillion valuation, a milestone that highlights how global compute capacity is now the defining metric of value in the AI economy.
Upcoming Events
AI Infrastructure Forum 2025
This one-day virtual event will discuss the critical issues and challenges impacting the AI infrastructure ecosystem, examining the growth and evolution of the AI ecosystem as it scales and the need for flexible, sustainable solutions.
Industry Resources
Report: AI infrastructure will power the next economic revolution
Report: How to test and assure telco AI infrastructure
On-demand webinar: How to test and assure telco AI infrastructure