- Why AI’s thermal wall is making liquid cooling mandatory: The AI hardware arms race faces more problems than raw compute alone, like heat. GPU makers are pushing thermal design …
Semiconductor News
- As data centers hit an energy bottleneck, analog chips and in-memory computing offer a low-power alternative: Training and running large AI models demands massive computational resources, and the GPUs doing …
- Buying up old GPUs for AI might be the way to go for some smaller AI outfits: Every time Nvidia drops a new flagship accelerator, the entire AI processing landscape …
- New 102.4 Tbps Silicon One promises efficiency gains for hyperscalers: In sum – what we know: It’s easy to focus on the GPU arms race when talking about AI infrastructure, …
- How can quantization turn massive models into efficient tools without ruining their accuracy? Running large language models is expensive. The biggest ones pack hundreds of billions of parameters, each stored …
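The idea the quantization piece describes can be sketched in a few lines: instead of storing every parameter at full float32 precision, map the weights onto 8-bit integers plus a single scale factor, cutting storage by 4x at the cost of a small rounding error. This is a minimal illustration using NumPy and symmetric per-tensor quantization; the function names and the toy weight array are assumptions for the example, not anything from the articles themselves.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: map floats onto int8 via one scale."""
    scale = float(np.max(np.abs(weights))) / 127.0  # largest magnitude -> 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

# Toy stand-in for a layer's weights
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)

q, s = quantize_int8(w)
w_hat = dequantize(q, s)

print(q.nbytes, w.nbytes)  # int8 storage is 4x smaller than float32
print(float(np.max(np.abs(w - w_hat))))  # rounding error bounded by scale / 2
```

Real deployments refine this basic recipe (per-channel scales, asymmetric zero-points, 4-bit formats), but the core trade the headline asks about — memory and bandwidth savings versus rounding error — is already visible here.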
- Replacing copper with optical pipes could have a significant impact on the AI data bottleneck: The semiconductor industry has been following the same steps for decades, revolving around shrinking the …
- FPGAs may not be as powerful as GPUs, but they’re a whole lot more flexible: Field-programmable gate arrays sit in an interesting middle ground in the AI hardware landscape, somewhere …
- Connecting AI chips with interconnects is arguably just as important as the chips themselves: Modern AI training has moved far beyond what any single GPU can accomplish. Training large language …
- Specs bumped to 2.3 kW and 22.2 TB/s bandwidth to cement leadership before launch later this year: In sum – what we know: Nvidia’s grip on the AI accelerator market …
- Hyperscalers are all making ASICs — so why are they still buying from Nvidia and AMD? Will ASICs completely take over from the GPU workhorses? The world’s largest technology companies are doing something that looks, at first glance, like a bit of a contradiction. Amazon, Google, …