The focus in AI is often on models, hardware, and algorithms, but perhaps it’s cross-industry collaboration that will be AI’s most transformative innovation.
Cross-sector AI collaboration may be the single greatest advancement of the AI era, as companies come together in unconventional ways to create solutions they could not achieve independently. Data center owners, operators, general contractors, suppliers, PEMBs, banks, telcos, finance firms, hyperscalers, chipmakers, and energy companies are all combining their respective expertise and resources to push the envelope and drive innovation.
“There’s been a huge shift over the last two or three years toward collaboration and building an ecosystem of partners,” says Joe Reele, Vice President of Solution Architects for Schneider Electric, whose hardware, software and services span grid-scale power plants, microgrids, and mission-critical facilities (i.e., hospitals, air traffic control, telecom sites). “Like a bed of nails with each nail supporting equal weight, rather than one supporting most of it, it’s driving innovation. Rather than just your own goals and KPIs, you have a shared vision.”
The “Big Bang” moment of the modern AI era might have been back in 2019, when Microsoft announced its $1 billion investment in OpenAI. At the time, it seemed revolutionary, but by 2023, Google, Amazon, Meta, AWS, and other hyperscalers were racing toward alliances that have since expanded through circular deals and what look like “frenemy” collaborations. Examples include Nvidia-OpenAI-Microsoft; CoreWeave-Nvidia-Microsoft; and startup investment loops like AWS/Azure/GCP with Anthropic.
Another iconic deal that changed perceptions was BlackRock’s $30 billion commitment to build data centers and energy infrastructure with Nvidia, Microsoft, and MGX through the AI Infrastructure Partnership (AIP). BlackRock has also positioned itself between real estate developers and utility companies in projects involving Dominion and NextEra.
Industry/Sector focus
Those foundational deals have since led to offshoots targeting different industries and sectors, such as finance or healthcare. For example, to support growing enterprise demand for AI, traditional rivals now co-locate hardware, as with Oracle Exadata hardware installed within Microsoft Azure data centers. And hyperscalers like Meta and Microsoft, whose own data centers are still under construction and constrained by power, have signed contracts with neoclouds like CoreWeave, Crusoe, Nscale, and Nebius. Chipmakers like Nvidia and AMD are also striking deals with neoclouds to rapidly scale AI infrastructure, circumvent traditional hyperscalers’ bottlenecks, and diversify capacity for AI training and inference.
There is also a “democratization” emerging across industries, as with the recent IBM-Palantir partnership, the Google and Mayo Clinic Alliance, or the Siemens-Databricks Manufacturing Alliance.
As enterprises in different sectors focus on AI, more companies will co-develop vertically focused alliances, as with Microsoft and Epic’s “Healthcare Intelligence” or Google and Ascension’s “Project Nightingale.”
Telecom alliances
In the past year, partnerships between hyperscalers and telcos have shifted toward deep integration, with each side depending on the other and a particular focus on fiber deployment. Telcos like Singtel, SoftBank, SK Telecom, and Telus work closely with hyperscalers and chipmakers to solve several constraints, including power, edge AI, and sovereign AI, while also moving toward Network-as-a-Service and Sovereign-AI-as-a-Service models.
Recently, there have also been announcements around hyperscaler-telco partnerships in high-capacity subsea cables that ultimately link global data centers. For example, Microsoft and Meta worked with Telefónica in the Marea cable venture, and Orange, Tata and Telecom Italia (Sparkle) have joined forces with Google, Meta, Microsoft, and AWS in joint projects, including: AWS Fastnet; Meta’s Waterworth; Meta, Amazon, and Telin’s Bifrost; and Google’s Grace Hopper. There are many others slated for the next few years.
Real estate-engineering and design nexus
In real estate, companies like JLL and InfraPartners have partnered to create prefabricated AI data centers that accelerate the deployment and operation of AI infrastructure. To solve the power constraint in its projects, JLL acquired and integrated SKAE’s power systems, technical services, and project management for data centers.
Bechtel recently partnered with NVIDIA to modularize its 1 GW AI data center designs, aiming to speed up construction for global AI infrastructure. It has also collaborated with Nautilus Data Technologies on high-performance data centers that drastically cut the power needed for cooling and improve environmental sustainability.
Jacobs is building “AI factory” digital twins with Nvidia, as well as integrating Palantir’s Foundry and AI Platform into domain-specific workflows. Jacobs is using the NVIDIA Omniverse Blueprint to model complex power, mechanical, and electrical systems for GW-scale AI infrastructure.
Last week during the India AI Impact Summit, “ports-to-power” giant Adani announced it would work with Google and invest $100 billion to build India’s largest GW-scale AI data center campus in Visakhapatnam, Andhra Pradesh, while also pledging to work with Microsoft to develop additional AI data center campuses in Hyderabad and Pune.
Grid Operators and Data Centers
“Grid operators, in particular, benefit if siloed approaches give way to partnership approaches, as data centers will require elasticity of grid reliability as workloads get larger,” noted Reele. He believes data centers and large loads will ultimately become the “great grid stabilizer,” as opposed to disruptors. “Instead of relying on only utilities to produce, data centers’ energy-producing equipment will work with the grid and share. That will be a game changer.”
Indeed, data centers might increasingly share their on-site power generation and storage assets with grid operators like PJM, NYISO, and ISO-NE to alleviate grid stress, provide emergency power, and manage demand response during peak periods. This “prosumer” model would rely on “behind-the-meter” (BTM) equipment, allowing data centers to operate independently and, in some cases, export excess energy back to the grid.
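To make the prosumer idea concrete, here is a minimal, hypothetical sketch of the export decision a data center operator might automate: offer power back to the grid only when the operator signals a stress event, and only the surplus beyond current load plus an internal reserve. All names and numbers are invented for illustration, not drawn from any real utility program.

```python
from dataclasses import dataclass

@dataclass
class SiteStatus:
    onsite_generation_mw: float  # behind-the-meter output (turbines, solar, batteries)
    it_load_mw: float            # current facility demand
    reserve_margin_mw: float     # headroom kept for workload spikes

def exportable_power_mw(site: SiteStatus, grid_stress_event: bool) -> float:
    """Return the MW a data center could offer back to the grid.

    Only surplus beyond load plus an internal reserve is offered,
    and only when the grid operator has signaled a stress event.
    """
    if not grid_stress_event:
        return 0.0
    surplus = site.onsite_generation_mw - site.it_load_mw - site.reserve_margin_mw
    return max(0.0, surplus)

# Example: 120 MW of BTM generation, 90 MW of load, 10 MW reserve
site = SiteStatus(onsite_generation_mw=120.0, it_load_mw=90.0, reserve_margin_mw=10.0)
print(exportable_power_mw(site, grid_stress_event=True))   # 20.0
print(exportable_power_mw(site, grid_stress_event=False))  # 0.0
```

In practice the reserve margin and stress-event signaling would be governed by the operator’s tariff and demand-response program rules, but the core logic is this simple surplus calculation.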
PJM Interconnection, for example, the largest U.S. grid operator, is working with Google and Alphabet’s Tapestry to use AI to accelerate grid interconnections, and MISO (Midcontinent Independent System Operator) announced recently it will work with Microsoft to deploy its technologies across the grid.
Increasingly, grid operators want to dispel the perception that once data centers connect to utility power, the microgrids that bridged their loads become “stranded assets.” For example, a recent Unison Energy blog discusses how data center developers can leverage microgrids alongside utility power to back up supply during outages. Alternatively, “developers could employ microgrids that replace utility power entirely…and participate in ancillary services that center around cleaner energy and renewable energy credits and carbon offset credits.”
According to Reele, there are three pillars that have to come together for large electrical loads to work with electrical providers: “You need policy, technology, and a digital thread so that regulations catch up to where AI is headed.”
In terms of policy, utilities are heavily regulated, so it takes 5 to 10 years to get a new high-voltage transmission line approved and built, even though an AI data center can be built in less than two years. Because residential power is prioritized over corporate “mega-loads,” it’s important to consider how permitting can balance the need to prevent blackouts with the need for data centers to pay for grid upgrades.
On technology, Reele notes that the physical hardware has to evolve because AI workloads are bursty and can cause grid instability. Standard transformers no longer cut it; grid-scale batteries, microgrids, and liquid cooling are needed.
The “digital thread” is the continuous stream of data that has to connect every piece of equipment—from the power plant to the individual AI chip. By sharing a “digital thread,” the data center would tell the utility company: “In 10 minutes, we are starting a massive AI training run; prepare the surge.” This transparency would allow the utility to manage the load in real-time rather than reacting to a problem after the fact.
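The advance-notice message described above could be sketched as a simple structured payload. This is a hypothetical illustration of what a data-center-to-utility load-event notification might contain; the field names and message format are invented, not part of any real grid interface.

```python
import json
from datetime import datetime, timedelta, timezone

def build_load_event(ramp_mw: float, lead_minutes: int, duration_hours: float) -> str:
    """Build a hypothetical advance-notice message a data center could
    send its utility before a large AI training run ramps up."""
    start = datetime.now(timezone.utc) + timedelta(minutes=lead_minutes)
    event = {
        "event_type": "load_ramp",
        "ramp_mw": ramp_mw,                       # expected step increase in demand
        "start_utc": start.isoformat(),           # when the training run begins
        "expected_duration_hours": duration_hours,
        "notes": "large AI training run; expect bursty draw",
    }
    return json.dumps(event)

# "In 10 minutes, we are starting a massive AI training run; prepare for the surge."
msg = build_load_event(ramp_mw=75.0, lead_minutes=10, duration_hours=6.0)
```

The point is not the format but the transparency: a machine-readable heads-up gives the utility minutes of lead time to dispatch reserves rather than reacting after frequency has already sagged.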
In some cases, like with Adani, it’s possible to have control of all three facets. But for most looking to build everything to and inside the data center “shell,” the policy, technology, and digital thread will have to come through diverse partnerships and alliances. Complex, unconventional partnerships will increasingly be essential for building both the “shell” (infrastructure) and the “core” (IT equipment) in the AI era.