The new Vera Rubin Space Module processes massive satellite datasets directly in space
In sum – what we know:
- AI at the ultimate edge – Nvidia’s Vera Rubin Space Module (Space-1) creates orbital data centers to run large language models and foundation models directly in space.
- Solving the downlink bottleneck – Processing data in orbit eliminates the need to send enormous raw datasets to Earth, bypassing current latency and bandwidth constraints.
- A unified architecture – The space-bound hardware shares the same “Vera Rubin” architecture as Nvidia’s new terrestrial chips, providing a seamless development ecosystem from ground to orbit.
Apparently it’s not enough for Nvidia’s stock to go to the moon; Nvidia wants its chips in space too. At GTC 2026, Nvidia CEO Jensen Huang pulled the curtain back on the Vera Rubin Space Module. The idea is straightforward in concept if staggering in ambition: build orbital data centers that can run large language models and advanced foundation models directly in space, cutting out the need to downlink enormous datasets to Earth for processing.
The whole system runs on solar power, which Nvidia positions as enabling high-performance, energy-efficient AI at the edge. The edge here just happens to be orbit.
The tech
On the performance side, Nvidia says the Vera Rubin Space Module pushes up to 25 times the AI compute of the H100 for orbital inference workloads. That’s a big leap on paper, though it’s worth flagging that this benchmark is specific to inference in the space environment — power and thermal constraints in orbit make apples-to-apples comparisons with ground-based hardware genuinely difficult.
Architecturally, the module is built around a tightly integrated CPU-GPU design with high-bandwidth interconnects. It’s engineered to ingest and process large data streams from space-based instruments, such as Earth observation satellites, communications arrays, and scientific payloads, all in real time, without first shipping raw data down to ground stations. That’s really the point: handling the data where it’s created sidesteps the latency, bandwidth bottlenecks, and expense of downlinking everything to Earth.
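To see why on-orbit processing is attractive, a quick back-of-the-envelope comparison helps. Every number below is an illustrative assumption, not a published spec: the point is only the ratio between shipping raw sensor data and shipping inference results.

```python
# Back-of-the-envelope: downlinking raw imagery vs. sending only
# on-orbit inference outputs. All figures are illustrative
# assumptions, not Nvidia's or any operator's real numbers.

def downlink_hours(data_gb: float, link_mbps: float) -> float:
    """Hours to downlink data_gb gigabytes (decimal GB, 8000 Mb each)
    over a link of link_mbps megabits per second."""
    return (data_gb * 8_000) / link_mbps / 3_600

RAW_IMAGERY_GB = 2_000   # assumed raw sensor data per day
RESULTS_GB = 2           # assumed size of inference outputs (detections, summaries)
LINK_MBPS = 500          # assumed ground-station downlink rate

raw_hours = downlink_hours(RAW_IMAGERY_GB, LINK_MBPS)
result_hours = downlink_hours(RESULTS_GB, LINK_MBPS)
print(f"raw: {raw_hours:.1f} h/day, results: {result_hours:.3f} h/day, "
      f"reduction: {RAW_IMAGERY_GB // RESULTS_GB}x")
```

Under these made-up figures, moving the raw data would eat most of a day of link time, while the inference outputs fit in seconds; that gap, not any single number, is the argument for processing in orbit.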
There’s also what Nvidia describes as a unified, trusted execution environment for advanced AI reasoning. That matters for use cases where data integrity and security are non-negotiable — defense applications, sensitive commercial operations, that sort of thing. How robust that trusted execution environment proves against the realities of space radiation and long-duration missions is something that’ll need watching as the platform evolves.
Getting off the ground
Six commercial space companies are already working with the platform across orbital and ground environments, according to Nvidia. The roster includes Aetherflux, Axiom Space, Planet Labs PBC, Sophia Space, and Starcloud — spanning everything from space stations to Earth observation to cloud computing in orbit.
Kepler Communications is going a somewhat different route, rolling out Nvidia’s Jetson Orin across its satellite constellation for AI-driven data management. Kepler CEO Mina Mitry stated: “Nvidia Jetson Orin brings advanced AI directly to our satellites, allowing us to intelligently manage and route data across our constellation.” Smart data routing at the constellation level is arguably one of the more immediately practical applications here, given that satellite operators are already drowning in the sheer volume of data their constellations produce.
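What “intelligently manage and route data” might look like can be sketched in a few lines. The class names, priority scores, and thresholds below are invented for illustration; this is not Kepler’s actual system, just a minimal example of the priority-based triage such a constellation could run on board.

```python
# Hypothetical sketch of constellation-level data triage: high-priority
# packets fill the next downlink pass's budget, everything else stays
# on board for local processing. Names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Packet:
    source: str       # originating satellite (hypothetical ID)
    size_mb: float    # packet size in megabytes
    priority: float   # 0.0 (bulk telemetry) .. 1.0 (urgent detection)

def route(packets, budget_mb: float, threshold: float = 0.7):
    """Greedily fill the downlink budget with the highest-priority
    packets above `threshold`; the rest are kept for on-board work."""
    downlink, onboard = [], []
    used = 0.0
    for p in sorted(packets, key=lambda p: p.priority, reverse=True):
        if p.priority >= threshold and used + p.size_mb <= budget_mb:
            downlink.append(p)
            used += p.size_mb
        else:
            onboard.append(p)
    return downlink, onboard
```

The design choice worth noting is that the expensive decision (what deserves scarce downlink bandwidth) is made where the data lives, which is exactly the pitch for putting inference-capable hardware like Jetson Orin on the satellites themselves.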
The diversity of early adopters is encouraging, though it’s fair to note that “working with the platform” can cover a wide spectrum, from active integration to early-stage testing. The real test comes when these companies can demonstrate sustained, reliable operations with this hardware actually in orbit.
The wider context
The Space Module is part of the wider Vera Rubin platform launch, which encompasses seven new chips entering production. The Vera CPU at the platform’s core packs 88 Nvidia-designed cores and delivers up to 1.2 TB/s of LPDDR5X memory bandwidth. On the interconnect front, Nvidia’s sixth-generation NVLink provides 3.6 terabytes per second of bandwidth per GPU — critical for the tightly coupled CPU-GPU workloads the space module is built to handle.
The broader Vera Rubin ecosystem extends to the Vera Rubin NVL72, a massive configuration pairing 72 Rubin GPUs with 36 Vera CPUs, alongside the more compact HGX Rubin NVL8 with eight Rubin GPUs. These are primarily terrestrial products, but they share the same underlying architecture — giving Nvidia a unified development narrative that stretches from the data center all the way to orbit.
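The per-GPU NVLink figure scales straightforwardly across these configurations. Treating aggregate bandwidth as a simple per-GPU sum ignores switch topology, but it matches how rack-scale totals are usually quoted:

```python
# Quick arithmetic on the announced per-GPU NVLink figure.
# Summing per-GPU bandwidth is a simplification that ignores the
# NVLink switch topology, but it's how rack totals are typically cited.
NVLINK_PER_GPU_TBPS = 3.6   # sixth-gen NVLink, per GPU (from the announcement)
NVL72_GPUS = 72             # Vera Rubin NVL72 configuration
NVL8_GPUS = 8               # HGX Rubin NVL8 configuration

print(f"NVL72 aggregate: {NVLINK_PER_GPU_TBPS * NVL72_GPUS:.1f} TB/s")
print(f"NVL8 aggregate:  {NVLINK_PER_GPU_TBPS * NVL8_GPUS:.1f} TB/s")
```

By this simple sum, the NVL72 rack lands around 259 TB/s of NVLink bandwidth and the NVL8 around 29 TB/s, which gives a feel for how far the rack-scale product sits above anything you could realistically fly.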
As for when you can actually get your hands on a Vera Rubin Space Module, Nvidia is being characteristically noncommittal, saying only that it’ll be available “at a later date.” In the meantime, products like IGX Thor, Jetson Orin, and the RTX PRO 6000 Blackwell Server Edition are already shipping, offering space companies a stepping stone to start building on Nvidia’s platform while they wait for the flagship hardware to actually materialize.