AI compute isn’t one thing. It’s two. Under the umbrella of “AI workloads,” training and inference represent distinct computational worlds with different goals, hardware profiles, and economics. They often get …
-
For decades, compute has scaled faster than memory. Processors can execute more operations every year, but the speed at which data moves in and out of memory has lagged behind. …
-
The semiconductor industry is changing quickly, especially as it relates to AI. As AI workloads grow ever more demanding, old monolithic chips are giving way to new chiplet-based designs. But …
-
Artificial intelligence has reshaped the semiconductor industry, driving an endless chase for better performance and efficiency. But as transistor scaling slows and Moore’s Law fades, the gains from smaller nodes …
-
The memory wall is a bigger problem than ever for AI workloads. How will it be fixed? As AI workloads scale, compute performance is increasing far faster than memory …
-
Is one kind of AI processor likely to reign supreme? GPUs have long been the workhorse behind much of the AI infrastructure buildup. But as AI needs have grown, specialized …
-
Edge AI silicon is getting much more capable, but how much will edge AI really handle? Artificial intelligence is moving outside massive data centers. As silicon evolves, AI processing is …
-
The Nvidia DGX Spark is a so-called “supercomputer” built for AI — but what does that mean, and why would you want one?
-
Semiconductor packaging is rapidly advancing, but how will it impact AI?
-
There might be movement from vendors like Nvidia and AMD, but supply issues run a whole lot deeper. A few short years ago, it would have been hard to predict …