In sum – what to know:
200 MW AI deployment planned – Humain will integrate Qualcomm’s AI200 and AI250 rack systems starting in 2026 to deliver large-scale inference services across Saudi Arabia and abroad.
Hybrid edge-to-cloud model – The initiative combines regional and global expertise to develop what the partners describe as the world’s first fully optimized hybrid AI system.
Part of Saudi AI strategy – The collaboration supports the Kingdom’s goal to expand its AI and semiconductor ecosystem and attract global AI-driven investments.
Humain, the Saudi Arabian artificial intelligence (AI) venture backed by the Kingdom’s Public Investment Fund (PIF), and Qualcomm Technologies announced a collaboration to deploy large-scale AI infrastructure in Saudi Arabia.
The project seeks to build what the companies describe as the world’s first fully optimized edge-to-cloud hybrid AI system, offering inference services globally.
Under the program, Humain plans to deploy 200 megawatts of Qualcomm’s AI200 and AI250 rack solutions starting in 2026. These systems will be used to deliver high-performance AI inference capabilities for enterprises and government institutions in Saudi Arabia and internationally. The initiative is designed to support large-scale AI workloads while maintaining competitive total cost of ownership (TCO).
According to the companies, the collaboration combines Humain’s regional infrastructure and expertise in full-stack AI with Qualcomm’s semiconductor and AI technologies.
As part of the collaboration, Humain’s Saudi-developed ALLaM AI models will be integrated with Qualcomm’s AI platforms to support new applications across industries. The two companies also plan to develop tailored solutions for enterprise and government clients within and beyond Saudi Arabia.
The Qualcomm AI200 and AI250 systems are designed to deliver rack-scale performance and enhanced memory capacity for generative AI inferencing. Both products are intended to enable scalable and efficient hybrid AI operations across data centers and edge environments.
A Qualcomm spokesperson told RCR Wireless News that this new collaboration is in line with its broader strategy to expand from mobile into large-scale AI infrastructure and data center markets.
“This is very well aligned with and evidence of our diversification strategy. As AI continues to scale and inference begins to outpace training, we’re starting to see a shift in the market. Efficiency—both in terms of tokens per dollar and energy consumption—is becoming the new benchmark. That shift creates a significant opportunity for us,” the spokesperson said.
The spokesperson also highlighted that the Qualcomm AI200 and AI250 solutions offer rack-scale performance and superior memory capacity for fast generative AI inference at industry-leading TCO. “Qualcomm AI200 and AI250 use next-gen Qualcomm Hexagon NPU technology and are purpose-built for AI inference. Qualcomm AI250 will debut with an innovative memory architecture based on near-memory computing, providing a generational leap in efficiency and performance for AI inference workloads,” he added.