Today’s top stories focus predominantly on the big players: OpenAI, AMD, Nvidia, et al. They highlight a major trend in AI semiconductors – Nvidia is the reigning champ right now, but AMD is coming for it, and the rest of the industry seemingly wants to lessen its dependence on either of them. That is, unless you’re Oracle, which has signed a second major AMD deal that will see AMD supply the compute for Oracle’s cloud services ambitions. Arguably, Oracle is already a hyperscaler – but could its cloud services push put it in the same category as the likes of Google and AWS?
Read on for more on today’s top AI semiconductor stories.
Top Stories
OpenAI and Broadcom confirm strategic partnership to build custom AI chips
After months of speculation, OpenAI and Broadcom have officially announced a strategic collaboration to co-develop OpenAI’s first in-house AI processor. The companies confirmed the partnership in a joint statement on OpenAI’s website, noting that the collaboration will combine OpenAI’s expertise in model architecture with Broadcom’s experience in chip design and systems integration. The effort, internally known as Project Stargate, is aimed at building next-generation AI accelerators and networked systems to support OpenAI’s expanding compute needs. The announcement follows earlier media reports that the two companies had been working together on early silicon development since mid-2024.
While technical details remain scarce, the collaboration signals OpenAI’s intention to diversify beyond Nvidia and AMD GPUs and follow the path of other major AI players, like Google with its TPUs and Amazon with its Trainium chips. Industry analysts expect Broadcom to leverage its strengths in networking silicon, custom ASICs, and advanced packaging, potentially incorporating HBM memory and the 2.5D packaging techniques common in large-scale AI accelerators. Given OpenAI’s recent blockbuster deals with Nvidia and AMD, it’s clear that the partnership is more a long-term bet on self-designed silicon than a short-term effort to bolster compute.
AMD to beat Nvidia to the punch with first 2nm GPU
In an interview with Yahoo Finance, AMD CEO Lisa Su confirmed that the company’s upcoming Instinct MI450 data-center GPU will be built on TSMC’s 2nm process node, marking the first officially announced GPU to leverage the foundry’s most advanced technology. The MI450, slated for 2026, will extend the chiplet-based design AMD used in the MI300 series, combining multiple compute dies connected by Infinity Fabric and stacked with HBM3E memory for extreme bandwidth. The shift to 2nm is expected to deliver major improvements in power efficiency, transistor density, and thermal behavior, allowing for higher core counts and faster on-package interconnects. The MI450 is also expected to expand support for FP8 and FP4 precision, delivering higher matrix throughput for large-scale AI training and inference.
It should be noted that it’s currently unclear exactly how much of the MI450 will be built on the 2nm process. Industry reports suggest that only the main compute die will use 2nm, while other components will be fabricated on 3nm instead. With confirmed customers including OpenAI and Oracle, AMD is taking the fight to Nvidia in a big way. To be clear, AMD has long been a major player in its own right, not just an alternative to Nvidia – but it’s hard to deny Nvidia’s dominance in AI compute. While Nvidia continues to dominate the software ecosystem through CUDA and TensorRT, AMD’s early move to 2nm gives it a process leadership advantage heading into the next GPU generation. If the MI450’s performance-per-watt scales as projected, AMD could meaningfully narrow the efficiency gap and, in some workloads, even surpass Nvidia’s forthcoming Blackwell-class GPUs.
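For readers wondering why FP8 support matters: narrower operands mean more values per byte of memory bandwidth and more operations per matrix unit, at the cost of precision. As a rough, hypothetical illustration (a toy round-to-nearest quantizer for the common E4M3 FP8 encoding, not a model of AMD’s or Nvidia’s actual hardware), the trade-off looks like this:

```python
# Toy sketch of FP8 E4M3: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa
# bits. A quarter the width of FP32, so ~4x the operands per unit of
# bandwidth -- but only ~2 decimal digits of precision and a max of 448.

def fp8_e4m3_values():
    """Enumerate the finite, non-negative representable E4M3 magnitudes."""
    vals = []
    for exp in range(16):          # 4-bit exponent field
        for man in range(8):       # 3-bit mantissa field
            if exp == 0:           # subnormals: (man/8) * 2^-6
                v = (man / 8.0) * 2.0 ** -6
            elif exp == 15 and man == 7:
                continue           # this encoding is reserved for NaN
            else:                  # normals: (1 + man/8) * 2^(exp - 7)
                v = (1.0 + man / 8.0) * 2.0 ** (exp - 7)
            vals.append(v)
    return sorted(set(vals))

GRID = fp8_e4m3_values()

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest representable E4M3 value, saturating at max."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), GRID[-1])
    return sign * min(GRID, key=lambda g: abs(g - mag))

print(GRID[-1])            # largest finite E4M3 value: 448.0
print(quantize_e4m3(0.1))  # 0.1 lands on the nearby grid point 0.1015625
```

The rounding error visible even on a value like 0.1 is why low-precision training relies on scaling and mixed-precision accumulation rather than raw FP8 arithmetic everywhere.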
AI Semiconductors: What you need to know
Oracle to offer cloud services powered by AMD’s next-gen GPUs: Oracle Cloud Infrastructure (OCI) will integrate AMD’s upcoming Instinct MI450 accelerators into its high-performance AI superclusters, extending the companies’ existing collaboration. The move gives Oracle customers an alternative to Nvidia-based instances and deepens AMD’s foothold in hyperscale AI deployments – particularly in training large language models.
Navitas stock surges amid Nvidia partnership speculation: Shares of Navitas Semiconductor, a GaN and SiC power electronics maker, jumped after reports linked the company to potential design wins in Nvidia’s next-generation AI systems. While no deal has been confirmed, the speculation underscores rising demand for gallium nitride power devices capable of improving efficiency in AI data centers.
University of Glasgow develops AI-assisted analog IC sizing: Researchers from the University of Glasgow and partner institutions unveiled a new AI-driven analog IC sizing technique that reduces simulation time by over 70%. The approach applies machine learning to transistor-level design, potentially accelerating development of mixed-signal and sensor chips vital for edge AI and IoT systems.
AMD, Intel explore on-package memory using UCIe standard: A new academic paper outlines advances in using the UCIe (Universal Chiplet Interconnect Express) standard to integrate on-package memory modules directly alongside compute dies. The study, which references collaborative efforts involving AMD and Intel, demonstrates how UCIe-enabled memory subsystems could dramatically improve bandwidth density and power efficiency in future chiplet-based architectures.