Meta continues high-stakes pivot toward AI infrastructure

With Nebius deal, Meta is doing whatever it takes to build out or acquire data centers, silicon, and software – both reducing its reliance on other players and diversifying who it relies on for AI infrastructure.

Yesterday’s announcement that Meta has entered a five-year deal worth up to $27 billion with Dutch neocloud provider Nebius Group validates Nebius’s standing as a serious alternative to traditional hyperscalers for AI infrastructure – and as a serious competitor to CoreWeave. The contract, which builds on last year’s $3 billion deal with Meta, could exceed Nebius’s entire market value: its market cap sits around $25–28 billion.

Following the announcement, Nebius’s stock surged 14–16% – putting it up 35% year to date and roughly 300% over the past year – with analysts noting the deal solidifies Nebius’s position as a top-tier “neocloud” player alongside CoreWeave. Not long ago, Nebius was little known to Wall Street.

What’s the attraction? Unlike traditional clouds retrofitted for AI, Nebius’s data centers are fundamentally AI-native, designed from the ground up to efficiently handle the enormous power and cooling requirements of next-gen GPUs such as NVIDIA’s Vera Rubin, which will be featured in this large-scale deployment for Meta.

The Nebius deal further bolsters Meta’s $135 billion capex plan for AI infrastructure and shows a willingness to diversify to win the AI race. The contract has two parts:

  • $12 billion for dedicated AI compute capacity, with Nebius providing exclusive NVIDIA-powered infrastructure (starting next year) for Meta’s AI research.
  • $15 billion in additional capacity, with Meta agreeing to purchase any capacity Nebius does not sell to other customers – bolstering Nebius’s revenue and mitigating its risk.

Meta’s growing AI-forward strategy

Meta is not a cloud computing company, but it has enormous compute demand. By buying chips in volume from AMD and NVIDIA, building its own silicon (MTIA), and leasing or renting capacity from a number of other players, it is assembling a decidedly AI-forward strategy. Its Llama models aren’t considered best-in-class, but these moves should greatly accelerate Llama’s evolution and secure the massive GPU capacity needed to compete in the hyperscaler AI wars.

Meta’s recent “Meta Compute” initiative aims to build tens of gigawatts of infrastructure, led by former Google exec Santosh Janardhan and Daniel Gross, who joined Meta last year from Safe Superintelligence. Both will work with Powell McCormick on partnering with governments and sovereign funds to build, deploy, and finance AI infrastructure for Meta.

According to analysts who follow Meta closely, its focus on long-term efficiency and the monetization of AI investments could yield cost and performance advantages that will be hard to ignore.

For example, following the Meta-Nebius announcement, Brian Nowak, Morgan Stanley’s managing director and senior internet analyst, issued a flash note to investors explaining how this “asset-light” approach to GPU capacity allows Meta to outpace competitors that build solely in-house. In a fireside chat with Nowak at a TMT conference earlier this month, CFO Susan Li said Mark Zuckerberg brings tremendous focus to the problem at hand, calling him a “recruiter in chief” who is rapidly bringing people on board. In other words, Meta’s layoffs go beyond cost-cutting and signify a “broader shift” toward realigning skillsets around AI-specific research and engineering talent.

Similarly, the drive for more capacity is pushing creativity around AI infrastructure to new limits. At the fireside chat, Li described puncture-, water-, and even tornado-resistant tents Meta is using as “rapid deployment structures” to meet capacity needs. “When [Mark] saw we didn’t have enough data center capacity to put the servers in, he encouraged us to be creative about data center infrastructure… We are never a company that doesn’t respond to the challenge at hand with the most focus and energy we can bring to bear,” Li said.

At the same time Meta is accelerating and diversifying its external AI infrastructure strategy, it is doing the same internally, leveraging AI for leaner operations and productivity gains. Zuckerberg considers “AI core to how we work” and is using Metamate along with tools from Google and OpenAI to boost internal productivity and workflow efficiency – even preparing to change employee assessments based on AI-driven performance, with a goal of automating 50% of the company’s internal coding in 2026.

In parallel with its massive infrastructure investments, Meta is aggressively pursuing superintelligence and AGI, with former Scale AI CEO Alexandr Wang and former GitHub CEO Nat Friedman at the helm of Meta Superintelligence Labs (MSL). It is also pivoting from its VR/Metaverse focus toward AI-powered hardware, like its Ray-Ban Meta smart glasses, which Zuckerberg has said will become the primary way humans interact with AI agents in daily life.

All of this underscores Zuckerberg’s stated belief that “2026 will be the year AI dramatically changes the way we work.” Meta’s pivot toward an “AI-native” internal culture, its massive infrastructure commitments, and its aggressive data center and chip buildout all show that it wants to become a vertically integrated AI infrastructure powerhouse.
