Buying up old GPUs for AI might be the way to go for some smaller AI outfits Every time Nvidia drops a new flagship accelerator, the entire AI processing landscape …
Semiconductor News
New 102.4 Tbps Silicon One promises efficiency gains for hyperscalers In sum – what we know: It’s easy to focus on the GPU arms race when talking about AI infrastructure, …
How can quantization turn massive models into efficient tools without ruining their accuracy? Running large language models is expensive. The biggest ones pack hundreds of billions of parameters, each stored …
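The storage cost the teaser alludes to is simple arithmetic: weight memory is parameter count times bytes per parameter, so lowering precision shrinks the model proportionally. A minimal sketch, using a hypothetical 175-billion-parameter model as the example (the function name and figures are illustrative, not from the article):

```python
# Illustrative arithmetic only: approximate weight-storage footprint
# of a large model at different numeric precisions.

def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes: params x bytes each."""
    return num_params * bytes_per_param / 1e9

params = 175e9  # hypothetical 175B-parameter model

fp32 = model_memory_gb(params, 4)    # 32-bit floats
fp16 = model_memory_gb(params, 2)    # 16-bit floats
int8 = model_memory_gb(params, 1)    # 8-bit quantized
int4 = model_memory_gb(params, 0.5)  # 4-bit quantized

print(f"fp32: {fp32:.0f} GB, fp16: {fp16:.0f} GB, "
      f"int8: {int8:.0f} GB, int4: {int4:.1f} GB")
```

Going from 32-bit floats to 4-bit quantized weights cuts storage eightfold, which is why quantization can move a model from a multi-GPU cluster onto a single accelerator; the accuracy question the headline raises is whether the rounded-off weights still produce the same outputs.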
Replacing copper with optical pipes could have a significant impact on the AI data bottleneck The semiconductor industry has been following the same steps for decades, revolving around shrinking the …
FPGAs may not be as powerful as GPUs, but they’re a whole lot more flexible Field-programmable gate arrays sit in an interesting middle ground in the AI hardware landscape, somewhere …
Connecting AI chips with interconnects is arguably just as important as the chips themselves Modern AI training has moved far beyond what any single GPU can accomplish. Training large language …
Specs bumped to 2.3 kW and 22.2 TB/s bandwidth to cement leadership before launch later this year In sum – what we know: Nvidia’s grip on the AI accelerator market …
Hyperscalers are all making ASICs — so why are they still buying from Nvidia and AMD?
Will ASICs completely take over from the GPU workhorses? The world’s largest technology companies are doing something that looks, at first glance, like a bit of a contradiction. Amazon, Google, …
Multi-year Broadcom partnership bypasses cloud intermediaries to lock in massive AI scale In sum – what we know: Anthropic has made a relatively large AI infrastructure commitment, locking in $21 billion worth …
It’s the latest in a series of major OpenAI deals In sum – what we know: OpenAI has inked yet another deal with a chip provider. The company has partnered …